#!/usr/bin/env python
#-*- coding:utf-8 -*-
##
## formula.py
##
## Created on: Dec 7, 2016
## Author: <NAME>
## E-mail: <EMAIL>
##
"""
===============
List of classes
===============
.. autosummary::
:nosignatures:
IDPool
CNF
CNFPlus
WCNF
WCNFPlus
==================
Module description
==================
This module is designed to facilitate fast and easy PySAT-development by
providing a simple way to manipulate formulas in PySAT. Although only
clausal formulas are supported at this point, future releases of PySAT are
expected to implement data structures and methods to manipulate arbitrary
Boolean formulas. The module implements the :class:`CNF` class, which
represents a formula in `conjunctive normal form (CNF)
<https://en.wikipedia.org/wiki/Conjunctive_normal_form>`__.
Recall that a CNF formula is conventionally seen as a set of clauses, each
being a set of literals. A literal is a Boolean variable or its negation.
In PySAT, a Boolean variable and a literal should be specified as an
integer. For instance, a Boolean variable :math:`x_{25}` is represented as
integer ``25``. A literal :math:`\\neg{x_{10}}` should be specified as
``-10``. Moreover, a clause :math:`(\\neg{x_2}\\vee x_{19}\\vee x_{46})`
should be specified as ``[-2, 19, 46]`` in PySAT. *Unit size clauses* are
to be specified as unit size lists as well, e.g. a clause :math:`(x_3)` is
a list ``[3]``.
CNF formulas can be created as an object of class :class:`CNF`. For
instance, the following piece of code creates a CNF formula
:math:`(\\neg{x_1}\\vee x_2)\\wedge(\\neg{x_2}\\vee x_3)`.
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF()
>>> cnf.append([-1, 2])
>>> cnf.append([-2, 3])
The clauses of a formula can be accessed through the ``clauses`` variable
of class :class:`CNF`, which is a list of lists of integers:
.. code-block:: python
>>> print(cnf.clauses)
[[-1, 2], [-2 ,3]]
The number of variables in a CNF formula, i.e. the *largest variable
identifier*, can be obtained using the ``nv`` variable, e.g.
.. code-block:: python
>>> print(cnf.nv)
3
Class :class:`CNF` has a few methods to read and write a CNF formula into a
file or a string. The formula is read/written in the standard `DIMACS CNF
<https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#SAT_problem_format>`__
format. A clause in the DIMACS format is a string containing
space-separated integer literals followed by ``0``. For instance, a clause
:math:`(\\neg{x_2}\\vee x_{19}\\vee x_{46})` is written as ``-2 19 46 0``
in DIMACS. The clauses in DIMACS should be preceded by a *preamble*, which
is a line ``p cnf nof_variables nof_clauses``, where ``nof_variables`` and
``nof_clauses`` are integers. A preamble line for formula
:math:`(\\neg{x_1}\\vee x_2)\\wedge(\\neg{x_2}\\vee x_3)` would be ``p cnf
3 2``. The complete DIMACS file describing the formula looks like this:
::
p cnf 3 2
-1 2 0
-2 3 0
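To make the format concrete, the reading side can be sketched as a tiny
standalone parser (a simplified illustration, not PySAT's own reader,
which additionally handles comment lines and compressed files):

```python
def parse_dimacs(text):
    # Minimal sketch: skip comment lines, read the 'p cnf' preamble,
    # and collect clauses. Returns (nof_variables, clauses).
    clauses, nv = [], 0
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('c'):
            continue
        if line.startswith('p cnf'):
            nv = int(line.split()[2])
            continue
        # a clause is space-separated literals terminated by 0
        clauses.append([int(lit) for lit in line.split()[:-1]])
    return nv, clauses

dimacs = '''p cnf 3 2
-1 2 0
-2 3 0
'''
print(parse_dimacs(dimacs))  # (3, [[-1, 2], [-2, 3]])
```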
Reading and writing formulas in DIMACS can be done with PySAT in the
following way:
.. code-block:: python
>>> from pysat.formula import CNF
>>> f1 = CNF(from_file='some-file-name.cnf') # reading from file
>>> f1.to_file('another-file-name.cnf') # writing to a file
>>>
>>> with open('some-file-name.cnf', 'r+') as fp:
... f2 = CNF(from_fp=fp) # reading from a file pointer
...
... fp.seek(0)
... f2.to_fp(fp) # writing to a file pointer
>>>
>>> f3 = CNF(from_string='p cnf 3 3\\n-1 2 0\\n-2 3 0\\n-3 0\\n')
>>> print(f3.clauses)
[[-1, 2], [-2, 3], [-3]]
>>> print(f3.nv)
3
Besides plain CNF formulas, the :mod:`pysat.formula` module implements an
additional class for dealing with *partial* and *weighted partial* CNF
formulas, i.e. WCNF formulas. A WCNF formula is a conjunction of two sets
of clauses: *hard* clauses and *soft* clauses, i.e.
:math:`\mathcal{F}=\mathcal{H}\wedge\mathcal{S}`. Soft clauses of a WCNF
are labeled with integer *weights*, i.e. a soft clause of
:math:`\mathcal{S}` is a pair :math:`(c_i, w_i)`. In partial (unweighted)
formulas, all soft clauses have weight 1.
WCNF can be of help when solving optimization problems using the SAT
technology. A typical example of where a WCNF formula can be used is
`maximum satisfiability (MaxSAT)
<https://en.wikipedia.org/wiki/Maximum_satisfiability_problem>`__, which
given a WCNF formula :math:`\mathcal{F}=\mathcal{H}\wedge\mathcal{S}`
targets satisfying all its hard clauses :math:`\mathcal{H}` and maximizing
the sum of weights of satisfied soft clauses, i.e. maximizing the value of
:math:`\sum_{c_i\in\mathcal{S}}{w_i\\cdot c_i}`.
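The objective can be made concrete with a small standalone sketch
(``satisfied`` and ``maxsat_cost`` are hypothetical helpers for
illustration, not part of PySAT) that scores an assignment: the cost is
the total weight of falsified soft clauses, provided every hard clause
holds:

```python
def satisfied(clause, model):
    # model is the set of variables assigned True; a clause is satisfied
    # iff at least one of its literals agrees with the model
    return any((lit > 0) == (abs(lit) in model) for lit in clause)

def maxsat_cost(hard, soft, wght, model):
    # Total weight of falsified soft clauses, or None if some hard
    # clause is violated (the assignment is then inadmissible).
    if not all(satisfied(cl, model) for cl in hard):
        return None
    return sum(w for cl, w in zip(soft, wght) if not satisfied(cl, model))

# hard: (not x1 or x2); soft: (x1) with weight 1, (not x2) with weight 3
print(maxsat_cost([[-1, 2]], [[1], [-2]], [1, 3], {1, 2}))  # 3
```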
An object of class :class:`WCNF` has two variables to access the hard and
soft clauses of the corresponding formula: ``hard`` and ``soft``. The
weights of soft clauses are stored in variable ``wght``.
.. code-block:: python
>>> from pysat.formula import WCNF
>>>
>>> wcnf = WCNF()
>>> wcnf.append([-1, -2])
>>> wcnf.append([1], weight=1)
>>> wcnf.append([2], weight=3) # the formula becomes unsatisfiable
>>>
>>> print(wcnf.hard)
[[-1, -2]]
>>> print(wcnf.soft)
[[1], [2]]
>>> print(wcnf.wght)
[1, 3]
A properly constructed WCNF formula must have a *top weight*, which should
be equal to :math:`1+\sum_{c_i\in\mathcal{S}}{w_i}`. Top weight of a
formula can be accessed through variable ``topw``.
.. code-block:: python
>>> wcnf.topw = sum(wcnf.wght) + 1 # (1 + 3) + 1
>>> print(wcnf.topw)
5
Additionally to classes :class:`CNF` and :class:`WCNF`, the module provides
the extended classes :class:`CNFPlus` and :class:`WCNFPlus`. The only
difference between ``?CNF`` and ``?CNFPlus`` is the support for *native*
cardinality constraints provided by the `MiniCard solver
<https://github.com/liffiton/minicard>`__ (see :mod:`pysat.card` for
details). The corresponding variable in objects of ``CNFPlus``
(``WCNFPlus``, resp.) responsible for storing the AtMostK constraints is
``atmosts`` (``atms``, resp.). **Note** that at this point, AtMostK
constraints in ``WCNF`` can be *hard* only.
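For intuition, a native AtMostK constraint, stored as a pair of a
literal list and a bound ``k``, holds when at most ``k`` of its literals
are true. A standalone check of this semantics (a hypothetical helper,
not the MiniCard implementation):

```python
def atmost_holds(lits, k, model):
    # model is the set of variables assigned True; count the literals
    # of the constraint that evaluate to True under the model
    return sum((lit > 0) == (abs(lit) in model) for lit in lits) <= k

print(atmost_holds([1, 2, 3], 1, {2}))     # True: only x2 is true
print(atmost_holds([1, 2, 3], 1, {1, 3}))  # False: two literals are true
```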
Besides the implementations of CNF and WCNF formulas in PySAT, the
:mod:`pysat.formula` module also provides a way to manage variable
identifiers. This can be done with the use of the :class:`IDPool` manager.
With the use of the :class:`CNF` and :class:`WCNF` classes as well as with
the :class:`IDPool` variable manager, it is pretty easy to develop
practical problem encoders into SAT or MaxSAT/MinSAT. As an example, a PHP
formula encoder is shown below (the implementation can also be found in
:class:`.examples.genhard.PHP`).
.. code-block:: python
from pysat.formula import CNF
cnf = CNF() # we will store the formula here
# nof_holes is given
# initializing the pool of variable ids
vpool = IDPool(start_from=1)
pigeon = lambda i, j: vpool.id('pigeon{0}@{1}'.format(i, j))
# placing all pigeons into holes
for i in range(1, nof_holes + 2):
cnf.append([pigeon(i, j) for j in range(1, nof_holes + 1)])
# there cannot be more than 1 pigeon in a hole
pigeons = range(1, nof_holes + 2)
for j in range(1, nof_holes + 1):
for comb in itertools.combinations(pigeons, 2):
cnf.append([-pigeon(i, j) for i in comb])
==============
Module details
==============
"""
#
#==============================================================================
from __future__ import print_function
import collections
import copy
import itertools
import os
from pysat._fileio import FileObject
import sys
# checking whether or not py-aiger-cnf is available and working as expected
aiger_present = True
try:
import aiger_cnf
except ImportError:
aiger_present = False
try: # for Python2
from cStringIO import StringIO
except ImportError: # for Python3
from io import StringIO
#
#==============================================================================
class IDPool(object):
"""
A simple manager of variable IDs. It can be used as a pool of integers
assigning an ID to any object. Identifiers start from ``1`` by
default. The list of occupied intervals is empty by default. If
necessary, the top variable ID can be accessed directly using the
``top`` variable.
:param start_from: the smallest ID to assign.
:param occupied: a list of occupied intervals.
:type start_from: int
:type occupied: list(list(int))
"""
def __init__(self, start_from=1, occupied=[]):
"""
Constructor.
"""
self.restart(start_from=start_from, occupied=occupied)
def restart(self, start_from=1, occupied=[]):
"""
Restart the manager from scratch. The arguments replicate those of
the constructor of :class:`IDPool`.
"""
# initial ID
self.top = start_from - 1
# occupied IDs
self._occupied = sorted(occupied, key=lambda x: x[0])
# main dictionary storing the mapping from objects to variable IDs
self.obj2id = collections.defaultdict(lambda: self._next())
# mapping back from variable IDs to objects
# (if for whatever reason necessary)
self.id2obj = {}
def id(self, obj):
"""
The method is to be used to assign an integer variable ID for a
given new object. If the object already has an ID, no new ID is
created and the old one is returned instead.
An object can be anything. In some cases it is convenient to use
string variable names.
:param obj: an object to assign an ID to.
:rtype: int.
Example:
.. code-block:: python
>>> from pysat.formula import IDPool
>>> vpool = IDPool(occupied=[[12, 18], [3, 10]])
>>>
>>> # creating 5 unique variables for the following strings
>>> for i in range(5):
... print(vpool.id('v{0}'.format(i + 1)))
1
2
11
19
20
In some cases, it makes sense to create an external function for
accessing IDPool, e.g.:
.. code-block:: python
>>> # continuing the previous example
>>> var = lambda i: vpool.id('var{0}'.format(i))
>>> var(5)
20
>>> var('hello_world!')
21
"""
vid = self.obj2id[obj]
if vid not in self.id2obj:
self.id2obj[vid] = obj
return vid
def obj(self, vid):
"""
The method can be used to map back a given variable identifier to
the original object labeled by the identifier.
:param vid: variable identifier.
:type vid: int
:return: an object corresponding to the given identifier.
Example:
.. code-block:: python
>>> vpool.obj(21)
'hello_world!'
"""
if vid in self.id2obj:
return self.id2obj[vid]
return None
def occupy(self, start, stop):
"""
Mark a given interval as occupied so that the manager could skip
the values from ``start`` to ``stop`` (**inclusive**).
:param start: beginning of the interval.
:param stop: end of the interval.
:type start: int
:type stop: int
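The skipping behaviour that this induces can be sketched as a
standalone function (a hypothetical helper mirroring the manager's
internal step, not part of the public API):

```python
def next_id(top, occupied):
    # occupied is a sorted list of [start, stop] intervals; advance top
    # by one and jump over any interval it lands in
    top += 1
    while occupied and top >= occupied[0][0]:
        if top <= occupied[0][1]:
            top = occupied[0][1] + 1
        occupied.pop(0)
    return top

top, occupied, ids = 0, [[2, 5]], []
for _ in range(3):
    top = next_id(top, occupied)
    ids.append(top)
print(ids)  # [1, 6, 7]
```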
"""
self._occupied.append([start, stop])
self._occupied.sort(key=lambda x: x[0])
def _next(self):
"""
Get next variable ID. Skip occupied intervals if any.
"""
self.top += 1
while self._occupied and self.top >= self._occupied[0][0]:
if self.top <= self._occupied[0][1]:
self.top = self._occupied[0][1] + 1
self._occupied.pop(0)
return self.top
#
#==============================================================================
class CNF(object):
"""
Class for manipulating CNF formulas. It can be used for creating
formulas, reading them from a file, or writing them to a file. The
``comment_lead`` parameter can be helpful when one needs to parse
specific comment lines starting not with character ``c`` but with
another character or a string.
:param from_file: a DIMACS CNF filename to read from
:param from_fp: a file pointer to read from
:param from_string: a string storing a CNF formula
:param from_clauses: a list of clauses to bootstrap the formula with
:param from_aiger: an AIGER circuit to bootstrap the formula with
:param comment_lead: a list of characters leading comment lines
:type from_file: str
:type from_fp: file_pointer
:type from_string: str
:type from_clauses: list(list(int))
:type from_aiger: :class:`aiger.AIG` (see `py-aiger package <https://github.com/mvcisback/py-aiger>`__)
:type comment_lead: list(str)
"""
def __init__(self, from_file=None, from_fp=None, from_string=None,
from_clauses=[], from_aiger=None, comment_lead=['c']):
"""
Constructor.
"""
self.nv = 0
self.clauses = []
self.comments = []
if from_file:
self.from_file(from_file, comment_lead, compressed_with='use_ext')
elif from_fp:
self.from_fp(from_fp, comment_lead)
elif from_string:
self.from_string(from_string, comment_lead)
elif from_clauses:
self.from_clauses(from_clauses)
elif from_aiger:
self.from_aiger(from_aiger)
def from_file(self, fname, comment_lead=['c'], compressed_with='use_ext'):
"""
Read a CNF formula from a file in the DIMACS format. A file name is
expected as an argument. A default argument is ``comment_lead`` for
parsing comment lines. A given file can be compressed by either
gzip, bzip2, or lzma.
:param fname: name of a file to parse.
:param comment_lead: a list of characters leading comment lines
:param compressed_with: file compression algorithm
:type fname: str
:type comment_lead: list(str)
:type compressed_with: str
Note that the ``compressed_with`` parameter can be ``None`` (i.e.
the file is uncompressed), ``'gzip'``, ``'bzip2'``, ``'lzma'``, or
``'use_ext'``. The latter value indicates that compression type
should be automatically determined based on the file extension.
Using ``'lzma'`` in Python 2 requires the ``backports.lzma``
package to be additionally installed.
Usage example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf1 = CNF()
>>> cnf1.from_file('some-file.cnf.gz', compressed_with='gzip')
>>>
>>> cnf2 = CNF(from_file='another-file.cnf')
"""
with FileObject(fname, mode='r', compression=compressed_with) as fobj:
self.from_fp(fobj.fp, comment_lead)
def from_fp(self, file_pointer, comment_lead=['c']):
"""
Read a CNF formula from a file pointer. A file pointer should be
specified as an argument. The only default argument is
``comment_lead``, which can be used for parsing specific comment
lines.
:param file_pointer: a file pointer to read the formula from.
:param comment_lead: a list of characters leading comment lines
:type file_pointer: file pointer
:type comment_lead: list(str)
Usage example:
.. code-block:: python
>>> with open('some-file.cnf', 'r') as fp:
... cnf1 = CNF()
... cnf1.from_fp(fp)
>>>
>>> with open('another-file.cnf', 'r') as fp:
... cnf2 = CNF(from_fp=fp)
"""
self.nv = 0
self.clauses = []
self.comments = []
comment_lead = tuple('p') + tuple(comment_lead)
for line in file_pointer:
line = line.strip()
if line:
if line[0] not in comment_lead:
cl = [int(l) for l in line.split()[:-1]]
self.nv = max([abs(l) for l in cl] + [self.nv])
self.clauses.append(cl)
elif not line.startswith('p cnf '):
self.comments.append(line)
def from_string(self, string, comment_lead=['c']):
"""
Read a CNF formula from a string. The string should be specified as
an argument and should be in the DIMACS CNF format. The only
default argument is ``comment_lead``, which can be used for parsing
specific comment lines.
:param string: a string containing the formula in DIMACS.
:param comment_lead: a list of characters leading comment lines
:type string: str
:type comment_lead: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf1 = CNF()
>>> cnf1.from_string('p cnf 2 2\\n-1 2 0\\n1 -2 0')
>>> print(cnf1.clauses)
[[-1, 2], [1, -2]]
>>>
>>> cnf2 = CNF(from_string='p cnf 3 3\\n-1 2 0\\n-2 3 0\\n-3 0\\n')
>>> print(cnf2.clauses)
[[-1, 2], [-2, 3], [-3]]
>>> print(cnf2.nv)
3
"""
self.from_fp(StringIO(string), comment_lead)
def from_clauses(self, clauses):
"""
This method copies a list of clauses into a CNF object.
:param clauses: a list of clauses
:type clauses: list(list(int))
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF(from_clauses=[[-1, 2], [1, -2], [5]])
>>> print(cnf.clauses)
[[-1, 2], [1, -2], [5]]
>>> print(cnf.nv)
5
"""
self.clauses = copy.deepcopy(clauses)
for cl in self.clauses:
self.nv = max([abs(l) for l in cl] + [self.nv])
def from_aiger(self, aig, vpool=None):
"""
Create a CNF formula by Tseitin-encoding an input AIGER circuit.
Input circuit is expected to be an object of class
:class:`aiger.AIG`. Alternatively, it can be specified as an
:class:`aiger.BoolExpr`, or an ``*.aag`` filename, or an AIGER
string to parse. (Classes :class:`aiger.AIG` and
:class:`aiger.BoolExpr` are defined in the `py-aiger package
<https://github.com/mvcisback/py-aiger>`__.)
:param aig: an input AIGER circuit
:param vpool: pool of variable identifiers (optional)
:type aig: :class:`aiger.AIG` (see `py-aiger package <https://github.com/mvcisback/py-aiger>`__)
:type vpool: :class:`.IDPool`
Example:
.. code-block:: python
>>> import aiger
>>> x, y, z = aiger.atom('x'), aiger.atom('y'), aiger.atom('z')
>>> expr = ~(x | y) & z
>>> print(expr.aig)
aag 5 3 0 1 2
2
4
8
10
6 3 5
10 6 8
i0 y
i1 x
i2 z
o0 6c454aea-c9e1-11e9-bbe3-3af9d34370a9
>>>
>>> from pysat.formula import CNF
>>> cnf = CNF(from_aiger=expr.aig)
>>> print(cnf.nv)
5
>>> print(cnf.clauses)
[[3, 2, 4], [-3, -4], [-2, -4], [-4, -1, 5], [4, -5], [1, -5]]
>>> print(['{0} <-> {1}'.format(v, cnf.vpool.obj(v)) for v in cnf.inps])
['3 <-> y', '2 <-> x', '1 <-> z']
>>> print(['{0} <-> {1}'.format(v, cnf.vpool.obj(v)) for v in cnf.outs])
['5 <-> 6c454aea-c9e1-11e9-bbe3-3af9d34370a9']
"""
assert aiger_present, 'Package \'py-aiger-cnf\' is unavailable. Check your installation.'
# creating a pool of variable IDs if necessary
self.vpool = vpool if vpool else IDPool()
# Use py-aiger-cnf to insulate from internal py-aiger details.
aig_cnf = aiger_cnf.aig2cnf(aig, fresh=self.vpool.id, force_true=False)
self.clauses = [list(cls) for cls in aig_cnf.clauses]
self.comments = ['c ' + c.strip() for c in aig_cnf.comments]
self.nv = max(map(abs, itertools.chain(*self.clauses)))
# saving input and output variables
self.inps = list(aig_cnf.input2lit.values())
self.outs = list(aig_cnf.output2lit.values())
# updating input name to variable mappings
for var in self.inps:
name = self.vpool.id2obj[var].name
self.vpool.obj2id[name] = var
self.vpool.id2obj[var] = name
# saving the output in the pool by its name
for name, lit in aig_cnf.output2lit.items():
self.vpool.obj2id[name] = lit
self.vpool.id2obj[lit] = name
def copy(self):
"""
This method can be used for creating a copy of a CNF object. It
creates another object of the :class:`CNF` class and makes use of
the *deepcopy* functionality to copy the clauses.
:return: an object of class :class:`CNF`.
Example:
.. code-block:: python
>>> cnf1 = CNF(from_clauses=[[-1, 2], [1]])
>>> cnf2 = cnf1.copy()
>>> print(cnf2.clauses)
[[-1, 2], [1]]
>>> print(cnf2.nv)
2
"""
cnf = CNF()
cnf.nv = self.nv
cnf.clauses = copy.deepcopy(self.clauses)
cnf.comments = copy.deepcopy(self.comments)
return cnf
def to_file(self, fname, comments=None, compress_with='use_ext'):
"""
The method is for saving a CNF formula into a file in the DIMACS
CNF format. A file name is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter. Also, a file can be compressed using either gzip, bzip2,
or lzma (xz).
:param fname: a file name where to store the formula.
:param comments: additional comments to put in the file.
:param compress_with: file compression algorithm
:type fname: str
:type comments: list(str)
:type compress_with: str
Note that the ``compress_with`` parameter can be ``None`` (i.e.
the file is uncompressed), ``'gzip'``, ``'bzip2'``, ``'lzma'``, or
``'use_ext'``. The latter value indicates that compression type
should be automatically determined based on the file extension.
Using ``'lzma'`` in Python 2 requires the ``backports.lzma``
package to be additionally installed.
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF()
...
>>> # the formula is filled with a bunch of clauses
>>> cnf.to_file('some-file-name.cnf') # writing to a file
"""
with FileObject(fname, mode='w', compression=compress_with) as fobj:
self.to_fp(fobj.fp, comments)
def to_fp(self, file_pointer, comments=None):
"""
The method can be used to save a CNF formula into a file pointer.
The file pointer is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter.
:param fname: a file name where to store the formula.
:param comments: additional comments to put in the file.
:type fname: str
:type comments: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF()
...
>>> # the formula is filled with a bunch of clauses
>>> with open('some-file.cnf', 'w') as fp:
... cnf.to_fp(fp) # writing to the file pointer
"""
# saving formula's internal comments
for c in self.comments:
print(c, file=file_pointer)
# saving externally specified comments
if comments:
for c in comments:
print(c, file=file_pointer)
print('p cnf', self.nv, len(self.clauses), file=file_pointer)
for cl in self.clauses:
print(' '.join(str(l) for l in cl), '0', file=file_pointer)
def append(self, clause):
"""
Add one more clause to CNF formula. This method additionally
updates the number of variables, i.e. variable ``self.nv``, used in
the formula.
:param clause: a new clause to add.
:type clause: list(int)
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF(from_clauses=[[-1, 2], [3]])
>>> cnf.append([-3, 4])
>>> print(cnf.clauses)
[[-1, 2], [3], [-3, 4]]
"""
self.nv = max([abs(l) for l in clause] + [self.nv])
self.clauses.append(clause)
def extend(self, clauses):
"""
Add several clauses to CNF formula. The clauses should be given in
the form of a list. For every clause in the list, method
:meth:`append` is invoked.
:param clauses: a list of new clauses to add.
:type clauses: list(list(int))
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF(from_clauses=[[-1, 2], [3]])
>>> cnf.extend([[-3, 4], [5, 6]])
>>> print(cnf.clauses)
[[-1, 2], [3], [-3, 4], [5, 6]]
"""
for cl in clauses:
self.append(cl)
def __iter__(self):
"""
Iterator over all clauses of the formula.
"""
for cl in self.clauses:
yield cl
def weighted(self):
"""
This method creates a weighted copy of the internal formula. As a
result, an object of class :class:`WCNF` is returned. Every clause
of the CNF formula is *soft* in the new WCNF formula and its weight
is equal to ``1``. The set of hard clauses of the formula is empty.
:return: an object of class :class:`WCNF`.
Example:
.. code-block:: python
>>> from pysat.formula import CNF
>>> cnf = CNF(from_clauses=[[-1, 2], [3, 4]])
>>>
>>> wcnf = cnf.weighted()
>>> print(wcnf.hard)
[]
>>> print(wcnf.soft)
[[-1, 2], [3, 4]]
>>> print(wcnf.wght)
[1, 1]
"""
wcnf = WCNF()
wcnf.nv = self.nv
wcnf.hard = []
wcnf.soft = copy.deepcopy(self.clauses)
wcnf.wght = [1 for cl in wcnf.soft]
wcnf.topw = len(wcnf.wght) + 1
wcnf.comments = self.comments[:]
return wcnf
def negate(self, topv=None):
"""
Given a CNF formula :math:`\mathcal{F}`, this method creates a CNF
formula :math:`\\neg{\mathcal{F}}`. The negation of the formula is
encoded to CNF with the use of *auxiliary* Tseitin variables [1]_.
A new CNF formula is returned keeping all the newly introduced
variables that can be accessed through the ``auxvars`` variable.
**Note** that the negation of each clause is encoded with one
auxiliary variable if it is not unit size. Otherwise, no auxiliary
variable is introduced.
:param topv: top variable identifier if any.
:type topv: int
:return: an object of class :class:`CNF`.
.. [1] <NAME>. *On the complexity of derivations in the
propositional calculus*. Studies in Mathematics and
Mathematical Logic, Part II. pp. 115–125, 1968
.. code-block:: python
>>> from pysat.formula import CNF
>>> pos = CNF(from_clauses=[[-1, 2], [3]])
>>> neg = pos.negate()
>>> print(neg.clauses)
[[1, -4], [-2, -4], [-1, 2, 4], [4, -3]]
>>> print(neg.auxvars)
[4, -3]
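The per-clause step of this encoding can be sketched standalone
(``negate_clause`` is a hypothetical helper mirroring the description
above, not PySAT's internal code):

```python
def negate_clause(cl, top):
    # Tseitin-encode the negation of one clause. Returns the auxiliary
    # literal, its defining clauses, and the new top variable ID; unit
    # clauses need no auxiliary variable.
    if len(cl) == 1:
        return -cl[0], [], top
    aux = top + 1
    defs = [[-lit, -aux] for lit in cl]  # aux implies every literal false
    defs.append(cl + [aux])              # all literals false implies aux
    return aux, defs, aux

print(negate_clause([-1, 2], 3))  # (4, [[1, -4], [-2, -4], [-1, 2, 4]], 4)
print(negate_clause([3], 4))      # (-3, [], 4)
```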
"""
negated = CNF()
negated.nv = topv
if not negated.nv:
negated.nv = self.nv
negated.clauses = []
negated.auxvars = []
for cl in self.clauses:
auxv = -cl[0]
if len(cl) > 1:
negated.nv += 1
auxv = negated.nv
# direct implication
for l in cl:
negated.clauses.append([-l, -auxv])
# opposite implication
negated.clauses.append(cl + [auxv])
# keeping all Tseitin variables
negated.auxvars.append(auxv)
negated.clauses.append(negated.auxvars)
return negated
#
#==============================================================================
class WCNF(object):
"""
Class for manipulating partial (weighted) CNF formulas. It can be used
for creating formulas, reading them from a file, or writing them to a
file. The ``comment_lead`` parameter can be helpful when one needs to
parse specific comment lines starting not with character ``c`` but with
another character or a string.
:param from_file: a DIMACS CNF filename to read from
:param from_fp: a file pointer to read from
:param from_string: a string storing a CNF formula
:param comment_lead: a list of characters leading comment lines
:type from_file: str
:type from_fp: file_pointer
:type from_string: str
:type comment_lead: list(str)
"""
def __init__(self, from_file=None, from_fp=None, from_string=None,
comment_lead=['c']):
"""
Constructor.
"""
self.nv = 0
self.hard = []
self.soft = []
self.wght = []
self.topw = 1
self.comments = []
if from_file:
self.from_file(from_file, comment_lead, compressed_with='use_ext')
elif from_fp:
self.from_fp(from_fp, comment_lead)
elif from_string:
self.from_string(from_string, comment_lead)
def from_file(self, fname, comment_lead=['c'], compressed_with='use_ext'):
"""
Read a WCNF formula from a file in the DIMACS format. A file name
is expected as an argument. A default argument is ``comment_lead``
for parsing comment lines. A given file can be compressed by either
gzip, bzip2, or lzma.
:param fname: name of a file to parse.
:param comment_lead: a list of characters leading comment lines
:param compressed_with: file compression algorithm
:type fname: str
:type comment_lead: list(str)
:type compressed_with: str
Note that the ``compressed_with`` parameter can be ``None`` (i.e.
the file is uncompressed), ``'gzip'``, ``'bzip2'``, ``'lzma'``, or
``'use_ext'``. The latter value indicates that compression type
should be automatically determined based on the file extension.
Using ``'lzma'`` in Python 2 requires the ``backports.lzma``
package to be additionally installed.
Usage example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> cnf1 = WCNF()
>>> cnf1.from_file('some-file.wcnf.bz2', compressed_with='bzip2')
>>>
>>> cnf2 = WCNF(from_file='another-file.wcnf')
"""
with FileObject(fname, mode='r', compression=compressed_with) as fobj:
self.from_fp(fobj.fp, comment_lead)
def from_fp(self, file_pointer, comment_lead=['c']):
"""
Read a WCNF formula from a file pointer. A file pointer should be
specified as an argument. The only default argument is
``comment_lead``, which can be used for parsing specific comment
lines.
:param file_pointer: a file pointer to read the formula from.
:param comment_lead: a list of characters leading comment lines
:type file_pointer: file pointer
:type comment_lead: list(str)
Usage example:
.. code-block:: python
>>> with open('some-file.cnf', 'r') as fp:
... cnf1 = WCNF()
... cnf1.from_fp(fp)
>>>
>>> with open('another-file.cnf', 'r') as fp:
... cnf2 = WCNF(from_fp=fp)
"""
self.nv = 0
self.hard = []
self.soft = []
self.wght = []
self.topw = 1
self.comments = []
comment_lead = tuple('p') + tuple(comment_lead)
for line in file_pointer:
line = line.strip()
if line:
if line[0] not in comment_lead:
cl = [int(l) for l in line.split()[:-1]]
w = cl.pop(0)
self.nv = max([abs(l) for l in cl] + [self.nv])
if w >= self.topw:
self.hard.append(cl)
else:
self.soft.append(cl)
self.wght.append(w)
elif not line.startswith('p wcnf '):
self.comments.append(line)
else: # expecting the preamble
self.topw = int(line.rsplit(' ', 1)[1])
def from_string(self, string, comment_lead=['c']):
"""
Read a WCNF formula from a string. The string should be specified
as an argument and should be in the DIMACS CNF format. The only
default argument is ``comment_lead``, which can be used for parsing
specific comment lines.
:param string: a string containing the formula in DIMACS.
:param comment_lead: a list of characters leading comment lines
:type string: str
:type comment_lead: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> cnf1 = WCNF()
>>> cnf1.from_string('p wcnf 2 2 2\\n 2 -1 2 0\\n1 1 -2 0')
>>> print(cnf1.hard)
[[-1, 2]]
>>> print(cnf1.soft)
[[1, -2]]
>>>
>>> cnf2 = WCNF(from_string='p wcnf 3 3 2\\n2 -1 2 0\\n2 -2 3 0\\n1 -3 0\\n')
>>> print(cnf2.hard)
[[-1, 2], [-2, 3]]
>>> print(cnf2.soft)
[[-3]]
>>> print(cnf2.nv)
3
"""
self.from_fp(StringIO(string), comment_lead)
def copy(self):
"""
This method can be used for creating a copy of a WCNF object. It
creates another object of the :class:`WCNF` class and makes use of
the *deepcopy* functionality to copy both hard and soft clauses.
:return: an object of class :class:`WCNF`.
Example:
.. code-block:: python
>>> cnf1 = WCNF()
>>> cnf1.append([-1, 2])
>>> cnf1.append([1], weight=10)
>>>
>>> cnf2 = cnf1.copy()
>>> print(cnf2.hard)
[[-1, 2]]
>>> print(cnf2.soft)
[[1]]
>>> print(cnf2.wght)
[10]
>>> print(cnf2.nv)
2
"""
wcnf = WCNF()
wcnf.nv = self.nv
wcnf.topw = self.topw
wcnf.hard = copy.deepcopy(self.hard)
wcnf.soft = copy.deepcopy(self.soft)
wcnf.wght = copy.deepcopy(self.wght)
wcnf.comments = copy.deepcopy(self.comments)
return wcnf
def to_file(self, fname, comments=None, compress_with='use_ext'):
"""
The method is for saving a WCNF formula into a file in the DIMACS
CNF format. A file name is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter. Also, a file can be compressed using either gzip, bzip2,
or lzma (xz).
:param fname: a file name where to store the formula.
:param comments: additional comments to put in the file.
:param compress_with: file compression algorithm
:type fname: str
:type comments: list(str)
:type compress_with: str
Note that the ``compress_with`` parameter can be ``None`` (i.e.
the file is uncompressed), ``'gzip'``, ``'bzip2'``, ``'lzma'``, or
``'use_ext'``. The latter value indicates that compression type
should be automatically determined based on the file extension.
Using ``'lzma'`` in Python 2 requires the ``backports.lzma``
package to be additionally installed.
Example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> wcnf = WCNF()
...
>>> # the formula is filled with a bunch of clauses
>>> wcnf.to_file('some-file-name.wcnf') # writing to a file
"""
with FileObject(fname, mode='w', compression=compress_with) as fobj:
self.to_fp(fobj.fp, comments)
def to_fp(self, file_pointer, comments=None):
"""
The method can be used to save a WCNF formula into a file pointer.
The file pointer is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter.
:param fname: a file name where to store the formula.
:param comments: additional comments to put in the file.
:type fname: str
:type comments: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> wcnf = WCNF()
...
>>> # the formula is filled with a bunch of clauses
>>> with open('some-file.wcnf', 'w') as fp:
... wcnf.to_fp(fp) # writing to the file pointer
"""
# saving formula's internal comments
for c in self.comments:
print(c, file=file_pointer)
# saving externally specified comments
if comments:
for c in comments:
print(c, file=file_pointer)
print('p wcnf', self.nv, len(self.hard) + len(self.soft), self.topw, file=file_pointer)
# soft clauses are dumped first because
# some tools (e.g. LBX) cannot count them properly
for i, cl in enumerate(self.soft):
print(self.wght[i], ' '.join(str(l) for l in cl), '0', file=file_pointer)
for cl in self.hard:
print(self.topw, ' '.join(str(l) for l in cl), '0', file=file_pointer)
def append(self, clause, weight=None):
"""
Add one more clause to WCNF formula. This method additionally
updates the number of variables, i.e. variable ``self.nv``, used in
the formula.
The clause can be hard or soft depending on the ``weight``
argument. If no weight is set, the clause is considered to be hard.
:param clause: a new clause to add.
:param weight: integer weight of the clause.
:type clause: list(int)
:type weight: integer or None
.. code-block:: python
>>> from pysat.formula import WCNF
>>> cnf = WCNF()
>>> cnf.append([-1, 2])
>>> cnf.append([1], weight=10)
>>> cnf.append([-2], weight=20)
>>> print(cnf.hard)
[[-1, 2]]
>>> print(cnf.soft)
[[1], [-2]]
>>> print(cnf.wght)
[10, 20]
"""
self.nv = max([abs(l) for l in clause] + [self.nv])
if weight:
self.soft.append(clause)
self.wght.append(weight)
self.topw += weight
else:
self.hard.append(clause)
def extend(self, clauses, weights=None):
"""
Add several clauses to the WCNF formula. The clauses should be given in
the form of a list. For every clause in the list, method
:meth:`append` is invoked.
The clauses can be hard or soft depending on the ``weights``
argument. If no weights are set, the clauses are considered to be
hard.
:param clauses: a list of new clauses to add.
:param weights: a list of integer weights.
:type clauses: list(list(int))
:type weights: list(int)
Example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> cnf = WCNF()
>>> cnf.extend([[-3, 4], [5, 6]])
>>> cnf.extend([[3], [-4], [-5], [-6]], weights=[1, 5, 3, 4])
>>> print(cnf.hard)
[[-3, 4], [5, 6]]
>>> print(cnf.soft)
[[3], [-4], [-5], [-6]]
>>> print(cnf.wght)
[1, 5, 3, 4]
"""
if weights:
# clauses are soft
for i, cl in enumerate(clauses):
self.append(cl, weight=weights[i])
else:
# clauses are hard
for cl in clauses:
self.append(cl)
def unweighted(self):
"""
This method creates a *plain* (unweighted) copy of the internal
formula. As a result, an object of class :class:`CNF` is returned.
Every clause (both hard or soft) of the WCNF formula is copied to
the ``clauses`` variable of the resulting plain formula, i.e. all
weights are discarded.
:return: an object of class :class:`CNF`.
Example:
.. code-block:: python
>>> from pysat.formula import WCNF
>>> wcnf = WCNF()
>>> wcnf.extend([[-3, 4], [5, 6]])
>>> wcnf.extend([[3], [-4], [-5], [-6]], weights=[1, 5, 3, 4])
>>>
>>> cnf = wcnf.unweighted()
>>> print(cnf.clauses)
[[-3, 4], [5, 6], [3], [-4], [-5], [-6]]
"""
cnf = CNF()
cnf.nv = self.nv
cnf.clauses = copy.deepcopy(self.hard) + copy.deepcopy(self.soft)
cnf.comments = self.comments[:]
return cnf
#
#==============================================================================
class CNFPlus(CNF, object):
"""
CNF formulas augmented with *native* cardinality constraints.
This class inherits most of the functionality of the :class:`CNF`
class. The only difference between the two is that :class:`CNFPlus`
supports *native* cardinality constraints of `MiniCard
<https://github.com/liffiton/minicard>`__.
The parser of input DIMACS files of :class:`CNFPlus` assumes the syntax
of AtMostK and AtLeastK constraints defined in the `description
<https://github.com/liffiton/minicard>`__ of MiniCard:
::
c Example: Two cardinality constraints followed by a clause
p cnf+ 7 3
1 -2 3 5 -7 <= 3
4 5 6 -7 >= 2
3 5 7 0
Each AtLeastK constraint is translated into an AtMostK constraint in
the standard way: :math:`\sum_{i=1}^{n}{x_i}\geq k \leftrightarrow
\sum_{i=1}^{n}{\\neg{x_i}}\leq (n-k)`. Internally, AtMostK constraints
are stored in variable ``atmosts``, each being a pair ``(lits, k)``,
where ``lits`` is a list of literals in the sum and ``k`` is the upper
bound.
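The translation above can be checked with a few lines of plain Python (an illustrative sketch, not part of PySAT; the helper name ``atleast_to_atmost`` is made up):

```python
# Sketch of the AtLeastK -> AtMostK translation described above:
# sum(x_i) >= k is equivalent to sum(~x_i) <= n - k.
def atleast_to_atmost(lits, k):
    """Return the (negated literals, new bound) pair stored in ``atmosts``."""
    return [-l for l in lits], len(lits) - k

lits, rhs = atleast_to_atmost([4, 5, 6, -7], 2)
# for the constraint '4 5 6 -7 >= 2' this yields ([-4, -5, -6, 7], 2),
# matching the second pair printed in the example below
```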
Example:
.. code-block:: python
>>> from pysat.formula import CNFPlus
>>> cnf = CNFPlus(from_string='p cnf+ 7 3\\n1 -2 3 5 -7 <= 3\\n4 5 6 -7 >= 2\\n 3 5 7 0\\n')
>>> print(cnf.clauses)
[[3, 5, 7]]
>>> print(cnf.atmosts)
[[[1, -2, 3, 5, -7], 3], [[-4, -5, -6, 7], 2]]
>>> print(cnf.nv)
7
For details on the functionality, see :class:`CNF`.
"""
def __init__(self, from_file=None, from_fp=None, from_string=None,
comment_lead=['c']):
"""
Constructor.
"""
# atmost constraints are initially empty
self.atmosts = []
# calling the base class constructor
super(CNFPlus, self).__init__(from_file=from_file, from_fp=from_fp,
from_string=from_string, comment_lead=comment_lead)
def from_fp(self, file_pointer, comment_lead=['c']):
"""
Read a CNF+ formula from a file pointer. A file pointer should be
specified as an argument. The only default argument is
``comment_lead``, which can be used for parsing specific comment
lines.
:param file_pointer: a file pointer to read the formula from.
:param comment_lead: a list of characters leading comment lines
:type file_pointer: file pointer
:type comment_lead: list(str)
Usage example:
.. code-block:: python
>>> with open('some-file.cnf+', 'r') as fp:
... cnf1 = CNFPlus()
... cnf1.from_fp(fp)
>>>
>>> with open('another-file.cnf+', 'r') as fp:
... cnf2 = CNFPlus(from_fp=fp)
"""
self.nv = 0
self.clauses = []
self.atmosts = []
self.comments = []
comment_lead = tuple('p') + tuple(comment_lead)
for line in file_pointer:
line = line.strip()
if line:
if line[0] not in comment_lead:
if int(line.rsplit(' ', 1)[-1]) == 0: # normal clause
cl = [int(l) for l in line.split()[:-1]]
self.nv = max([abs(l) for l in cl] + [self.nv])
self.clauses.append(cl)
else: # atmost/atleast constraint
items = [i for i in line.split()]
lits = [int(l) for l in items[:-2]]
rhs = int(items[-1])
self.nv = max([abs(l) for l in lits] + [self.nv])
if items[-2][0] == '>':
lits = list(map(lambda l: -l, lits))
rhs = len(lits) - rhs
self.atmosts.append([lits, rhs])
elif not line.startswith('p cnf'): # cnf is allowed here
self.comments.append(line)
def to_fp(self, file_pointer, comments=None):
"""
The method can be used to save a CNF+ formula into a file pointer.
The file pointer is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter.
:param file_pointer: a file pointer where to store the formula.
:param comments: additional comments to put in the file.
:type file_pointer: file pointer
:type comments: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import CNFPlus
>>> cnf = CNFPlus()
...
>>> # the formula is filled with a bunch of clauses
>>> with open('some-file.cnf+', 'w') as fp:
... cnf.to_fp(fp) # writing to the file pointer
"""
# saving formula's internal comments
for c in self.comments:
print(c, file=file_pointer)
# saving externally specified comments
if comments:
for c in comments:
print(c, file=file_pointer)
ftype = 'cnf+' if self.atmosts else 'cnf'
print('p', ftype, self.nv, len(self.clauses) + len(self.atmosts),
file=file_pointer)
for cl in self.clauses:
print(' '.join(str(l) for l in cl), '0', file=file_pointer)
for am in self.atmosts:
print(' '.join(str(l) for l in am[0]), '<=', am[1], file=file_pointer)
def append(self, clause, is_atmost=False):
"""
Add a single clause or a single AtMostK constraint to CNF+ formula.
This method additionally updates the number of variables, i.e.
variable ``self.nv``, used in the formula.
If the clause is an AtMostK constraint, this should be set with the
use of the additional default argument ``is_atmost``, which is set
to ``False`` by default.
:param clause: a new clause to add.
:param is_atmost: if ``True``, the clause is AtMostK.
:type clause: list(int)
:type is_atmost: bool
.. code-block:: python
>>> from pysat.formula import CNFPlus
>>> cnf = CNFPlus()
>>> cnf.append([-3, 4])
>>> cnf.append([[1, 2, 3], 1], is_atmost=True)
>>> print(cnf.clauses)
[[-3, 4]]
>>> print(cnf.atmosts)
[[[1, 2, 3], 1]]
"""
if not is_atmost:
self.nv = max([abs(l) for l in clause] + [self.nv])
self.clauses.append(clause)
else:
self.nv = max([abs(l) for l in clause[0]] + [self.nv])
self.atmosts.append(clause)
def weighted(self):
"""
This method creates a weighted copy of the internal formula. As a
result, an object of class :class:`WCNFPlus` is returned. Every
clause of the CNFPlus formula is *soft* in the new WCNFPlus
formula and its weight is equal to ``1``. The set of hard clauses
of the new formula is empty. The set of cardinality constraints
remains unchanged.
:return: an object of class :class:`WCNFPlus`.
Example:
.. code-block:: python
>>> from pysat.formula import CNFPlus
>>> cnf = CNFPlus()
>>> cnf.append([-1, 2])
>>> cnf.append([3, 4])
>>> cnf.append([[1, 2], 1], is_atmost=True)
>>>
>>> wcnf = cnf.weighted()
>>> print(wcnf.hard)
[]
>>> print(wcnf.soft)
[[-1, 2], [3, 4]]
>>> print(wcnf.wght)
[1, 1]
>>> print(wcnf.atms)
[[[1, 2], 1]]
"""
wcnf = WCNFPlus()
wcnf.nv = self.nv
wcnf.hard = []
wcnf.soft = copy.deepcopy(self.clauses)
wcnf.atms = copy.deepcopy(self.atmosts)
wcnf.wght = [1 for cl in wcnf.soft]
wcnf.topw = len(wcnf.wght) + 1
wcnf.comments = self.comments[:]
return wcnf
def copy(self):
"""
This method can be used for creating a copy of a CNFPlus object.
It creates another object of the :class:`CNFPlus` class, calls the
copy method of the :class:`CNF` class, and uses *deepcopy* to copy
the atmost constraints.
:return: an object of class :class:`CNFPlus`.
Example:
.. code-block:: python
>>> cnf1 = CNFPlus()
>>> cnf1.extend([[-1, 2], [1]])
>>> cnf1.append([[1, 2], 1], is_atmost=True)
>>> cnf2 = cnf1.copy()
>>> print(cnf2.clauses)
[[-1, 2], [1]]
>>> print(cnf2.nv)
2
>>> print(cnf2.atmosts)
[[[1, 2], 1]]
"""
cnfplus = super(CNFPlus, self).copy()
cnfplus.atmosts = copy.deepcopy(self.atmosts)
return cnfplus
#
#==============================================================================
class WCNFPlus(WCNF, object):
"""
WCNF formulas augmented with *native* cardinality constraints.
This class inherits most of the functionality of the :class:`WCNF`
class. The only difference between the two is that :class:`WCNFPlus`
supports *native* cardinality constraints of `MiniCard
<https://github.com/liffiton/minicard>`__.
The parser of input DIMACS files of :class:`WCNFPlus` assumes the
syntax of AtMostK and AtLeastK constraints following the one defined
for :class:`CNFPlus` in the `description
<https://github.com/liffiton/minicard>`__ of MiniCard:
::
c Example: Two (hard) cardinality constraints followed by a soft clause
p wcnf+ 7 3 10
10 1 -2 3 5 -7 <= 3
10 4 5 6 -7 >= 2
5 3 5 7 0
**Note** that every cardinality constraint is assumed to be *hard*,
i.e. soft cardinality constraints are currently *not supported*.
Each AtLeastK constraint is translated into an AtMostK constraint in
the standard way: :math:`\sum_{i=1}^{n}{x_i}\geq k \leftrightarrow
\sum_{i=1}^{n}{\\neg{x_i}}\leq (n-k)`. Internally, AtMostK constraints
are stored in variable ``atms``, each being a pair ``(lits, k)``, where
``lits`` is a list of literals in the sum and ``k`` is the upper bound.
Example:
.. code-block:: python
>>> from pysat.formula import WCNFPlus
>>> cnf = WCNFPlus(from_string='p wcnf+ 7 3 10\\n10 1 -2 3 5 -7 <= 3\\n10 4 5 6 -7 >= 2\\n5 3 5 7 0\\n')
>>> print(cnf.soft)
[[3, 5, 7]]
>>> print(cnf.wght)
[5]
>>> print(cnf.hard)
[]
>>> print(cnf.atms)
[[[1, -2, 3, 5, -7], 3], [[-4, -5, -6, 7], 2]]
>>> print(cnf.nv)
7
For details on the functionality, see :class:`WCNF`.
"""
def __init__(self, from_file=None, from_fp=None, from_string=None, comment_lead=['c']):
"""
Constructor.
"""
# atmost constraints are initially empty
self.atms = []
# calling the base class constructor
super(WCNFPlus, self).__init__(from_file=from_file, from_fp=from_fp,
from_string=from_string, comment_lead=comment_lead)
def from_fp(self, file_pointer, comment_lead=['c']):
"""
Read a WCNF+ formula from a file pointer. A file pointer should be
specified as an argument. The only default argument is
``comment_lead``, which can be used for parsing specific comment
lines.
:param file_pointer: a file pointer to read the formula from.
:param comment_lead: a list of characters leading comment lines
:type file_pointer: file pointer
:type comment_lead: list(str)
Usage example:
.. code-block:: python
>>> with open('some-file.wcnf+', 'r') as fp:
... cnf1 = WCNFPlus()
... cnf1.from_fp(fp)
>>>
>>> with open('another-file.wcnf+', 'r') as fp:
... cnf2 = WCNFPlus(from_fp=fp)
"""
self.nv = 0
self.hard = []
self.atms = []
self.soft = []
self.wght = []
self.topw = 1
self.comments = []
comment_lead = tuple('p') + tuple(comment_lead)
for line in file_pointer:
line = line.strip()
if line:
if line[0] not in comment_lead:
if int(line.rsplit(' ', 1)[-1]) == 0: # normal clause
cl = [int(l) for l in line.split()[:-1]]
w = cl.pop(0)
self.nv = max([abs(l) for l in cl] + [self.nv])
if w >= self.topw:
self.hard.append(cl)
else:
self.soft.append(cl)
self.wght.append(w)
else: # atmost/atleast constraint
items = [i for i in line.split()]
lits = [int(l) for l in items[1:-2]]
rhs = int(items[-1])
self.nv = max([abs(l) for l in lits] + [self.nv])
if items[-2][0] == '>':
lits = list(map(lambda l: -l, lits))
rhs = len(lits) - rhs
self.atms.append([lits, rhs])
elif not line.startswith('p wcnf'): # wcnf is allowed here
self.comments.append(line)
else: # expecting the preamble
self.topw = int(line.rsplit(' ', 1)[1])
def to_fp(self, file_pointer, comments=None):
"""
The method can be used to save a WCNF+ formula into a file pointer.
The file pointer is expected as an argument. Additionally,
supplementary comment lines can be specified in the ``comments``
parameter.
:param file_pointer: a file pointer where to store the formula.
:param comments: additional comments to put in the file.
:type file_pointer: file pointer
:type comments: list(str)
Example:
.. code-block:: python
>>> from pysat.formula import WCNFPlus
>>> cnf = WCNFPlus()
...
>>> # the formula is filled with a bunch of clauses
>>> with open('some-file.wcnf+', 'w') as fp:
... cnf.to_fp(fp) # writing to the file pointer
"""
# saving formula's internal comments
for c in self.comments:
print(c, file=file_pointer)
# saving externally specified comments
if comments:
for c in comments:
print(c, file=file_pointer)
ftype = 'wcnf+' if self.atms else 'wcnf'
print('p', ftype, self.nv, len(self.hard) + len(self.soft) + len(self.atms),
self.topw, file=file_pointer)
# soft clauses are dumped first because
# some tools (e.g. LBX) cannot count them properly
for i, cl in enumerate(self.soft):
print(self.wght[i], ' '.join(str(l) for l in cl), '0', file=file_pointer)
for cl in self.hard:
print(self.topw, ' '.join(str(l) for l in cl), '0', file=file_pointer)
# atmost constraints are hard
for am in self.atms:
print(self.topw, ' '.join(str(l) for l in am[0]), '<=', am[1], file=file_pointer)
def append(self, clause, weight=None, is_atmost=False):
"""
Add a single clause or a single AtMostK constraint to WCNF+
formula. This method additionally updates the number of variables,
i.e. variable ``self.nv``, used in the formula.
If the clause is an AtMostK constraint, this should be set with the
use of the additional default argument ``is_atmost``, which is set
to ``False`` by default.
If ``is_atmost`` is set to ``False``, the clause can be either hard
or soft depending on the ``weight`` argument. If no weight is
specified, the clause is considered hard. Otherwise, the clause is
soft.
:param clause: a new clause to add.
:param weight: an integer weight of the clause.
:param is_atmost: if ``True``, the clause is AtMostK.
:type clause: list(int)
:type weight: integer or None
:type is_atmost: bool
.. code-block:: python
>>> from pysat.formula import WCNFPlus
>>> cnf = WCNFPlus()
>>> cnf.append([-3, 4])
>>> cnf.append([[1, 2, 3], 1], is_atmost=True)
>>> cnf.append([-1, -2], weight=35)
>>> print(cnf.hard)
[[-3, 4]]
>>> print(cnf.atms)
[[[1, 2, 3], 1]]
>>> print(cnf.soft)
[[-1, -2]]
>>> print(cnf.wght)
[35]
"""
if not is_atmost:
self.nv = max([abs(l) for l in clause] + [self.nv])
if weight:
self.soft.append(clause)
self.wght.append(weight)
self.topw += weight
else:
self.hard.append(clause)
else:
self.nv = max([abs(l) for l in clause[0]] + [self.nv])
self.atms.append(clause)
def unweighted(self):
"""
This method creates a *plain* (unweighted) copy of the internal
formula. As a result, an object of class :class:`CNFPlus` is
returned. Every clause (both hard or soft) of the original
WCNFPlus formula is copied to the ``clauses`` variable of the
resulting plain formula, i.e. all weights are discarded.
Note that the cardinality constraints of the original (weighted)
formula remain unchanged in the new (plain) formula.
:return: an object of class :class:`CNFPlus`.
Example:
.. code-block:: python
>>> from pysat.formula import WCNFPlus
>>> wcnf = WCNFPlus()
>>> wcnf.extend([[-3, 4], [5, 6]])
>>> wcnf.extend([[3], [-4], [-5], [-6]], weights=[1, 5, 3, 4])
>>> wcnf.append([[1, 2, 3], 1], is_atmost=True)
>>>
>>> cnf = wcnf.unweighted()
>>> print(cnf.clauses)
[[-3, 4], [5, 6], [3], [-4], [-5], [-6]]
>>> print(cnf.atmosts)
[[[1, 2, 3], 1]]
"""
cnf = CNFPlus()
cnf.nv = self.nv
cnf.clauses = copy.deepcopy(self.hard) + copy.deepcopy(self.soft)
cnf.atmosts = copy.deepcopy(self.atms)
cnf.comments = self.comments[:]
return cnf
def copy(self):
"""
This method can be used for creating a copy of a WCNFPlus object.
It creates another object of the :class:`WCNFPlus` class, calls the
copy method of the :class:`WCNF` class, and uses *deepcopy* to copy
the atmost constraints.
:return: an object of class :class:`WCNFPlus`.
Example:
.. code-block:: python
>>> cnf1 = WCNFPlus()
>>> cnf1.append([-1, 2])
>>> cnf1.append([1], weight=10)
>>> cnf1.append([[1, 2], 1], is_atmost=True)
>>> cnf2 = cnf1.copy()
>>> print(cnf2.hard)
[[-1, 2]]
>>> print(cnf2.soft)
[[1]]
>>> print(cnf2.wght)
[10]
>>> print(cnf2.nv)
2
>>> print(cnf2.atms)
[[[1, 2], 1]]
"""
wcnfplus = super(WCNFPlus, self).copy()
wcnfplus.atms = copy.deepcopy(self.atms)
return wcnfplus
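As a self-contained illustration of the preamble written by ``WCNFPlus.to_fp`` above, the header arithmetic can be reproduced without PySAT (a sketch; the helper ``wcnfplus_preamble`` is not part of the library):

```python
def wcnfplus_preamble(nv, hard, soft, atms, topw):
    # Mirrors WCNFPlus.to_fp: the 'wcnf+' type is used only when
    # native cardinality constraints are present.
    ftype = 'wcnf+' if atms else 'wcnf'
    nconstr = len(hard) + len(soft) + len(atms)
    return 'p {0} {1} {2} {3}'.format(ftype, nv, nconstr, topw)

# with the data from the WCNFPlus docstring example:
header = wcnfplus_preamble(nv=7, hard=[], soft=[[3, 5, 7]],
                           atms=[[[1, -2, 3, 5, -7], 3], [[-4, -5, -6, 7], 2]],
                           topw=10)
# header == 'p wcnf+ 7 3 10'
```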
# Copyright 2019 TerraPower, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
The parameters system holds state information for everything within ARMI's composite
structure:
.. list-table:: Example Parameters
:widths: 50 50
:header-rows: 1
* - Object
- Parameters
* - :py:class:`~armi.reactor.reactors.Reactor`
- :py:mod:`Reactor Parameters <armi.reactor.reactorParameters>`
* - :py:class:`~armi.reactor.assemblies.Assembly`
- :py:mod:`Assembly Parameters <armi.reactor.assemblyParameters>`
* - :py:class:`~armi.reactor.blocks.Block`
- :py:mod:`Block Parameters <armi.reactor.blockParameters>`
* - :py:class:`~armi.reactor.components.Component`
- :py:mod:`Component Parameters <armi.reactor.components.componentParameters>`
Basic Usage
===========
Given an ARMI reactor model object such as ``r``, one may set or get a parameter just
like any other instance attribute on ``r.p``::
>>> r.p.cycleLength
350.0
Alternatively, dictionary-like access is supported::
>>> r.p["cycleLength"]
350.0
.. note::
The data themselves are stored in special hidden fields, which are typically
accessed through the ``Parameter`` definition that describes them. The name for such
a parameter field looks like ``"_p_" + paramName``. For example, to get
``cycleLength`` one could do::
>>> r.core.p._p_cycleLength
350.0
However, it is not recommended to access parameters in this way, as it circumvents
the setters and getters that may have been implemented for a given parameter. One
should always use the style from the first two examples to access parameter values.
Furthermore, ``ParameterCollection`` classes have some extra controls to make sure
that someone doesn't try to set random extra attributes on them. Only parameters
that were defined before a particular ``ParameterCollection`` class is instantiated
may be accessed. The rationale behind this is documented in the Design
Considerations section below.
Most parameters in ARMI are block parameters. These include flux, power, temperatures,
number densities, etc. Parameters can be any basic type (float, int, str), or an array
of any such types. The type within a given array should be homogeneous. Examples::
>>> b.p.flux = 2.5e13
>>> b.p.fuelTemp = numpy.array(range(217), dtype=float)
>>> b.p.fuelTemp[58] = 600
.. note::
There have been many discussions on what the specific name of this module/system
should be. After great deliberation, the definition of parameter seemed very
suitable:
One of a set of measurable factors, such as temperature and pressure, that
define a system and determine its behavior and are varied in an experiment ~
`thefreedictionary`_
any of a set of physical properties whose values determine the characteristics
or behavior of something <parameters of the atmosphere such as temperature,
pressure, and density> ~ `Meriam-Webster`_
The parameters system is composed of several classes:
:py:class:`~armi.reactor.parameters.parameterDefinitions.Parameter` :
These store metadata about each parameter including the name, description, its
units, etc. :py:class:`Parameters <parameterDefinitions.Parameter>` also define some
behaviors such as setters/getters, and what to do when retrieving a value that has
not been set, and whether or not to store the parameter in the database. The
:py:class:`parameterDefinitions.Parameter` object implement the Python descriptor
protocol (the magic behind ``@property``), and are stored on corresponding
:py:class:`parameterCollections.ParameterCollection` classes to access their
underlying values.
:py:class:`~armi.reactor.parameters.parameterDefinitions.ParameterDefinitionCollection` :
As the name suggests, these represent a collection of parameter definitions. Each
:py:class:`ParameterCollection` gets a :py:class:`ParameterDefinitionCollection`,
and there are also module-global collections, such as ``ALL_DEFINITIONS``
(containing all defined parameters over all ``ArmiObject`` classes), and others
which break parameters down by their categories, associated composite types, etc.
:py:class:`~armi.reactor.parameters.parameterDefinitions.ParameterBuilder` :
These are used to aid in the creation of :py:class:`Parameter` instances, and store
default arguments to the :py:class:`Parameter` constructor.
:py:class:`~armi.reactor.parameters.parameterCollections.ParameterCollection` :
These are used to store parameter values for a specific instance of an item in the
ARMI composite structure, and have features for accessing those parameters and their
definitions. The actual parameter values are stored in secret `"_p_"+paramName`
fields, and accessed through the Parameter definition, which functions as a
descriptor. Parameter definitions are stored as class attributes so that they can be
shared amongst instances. All parameter fields are filled with an initial value in
their ``__init__()`` to benefit from the split-key dictionaries introduced in
PEP-412. This and protections to prevent setting any other attributes form a sort of
"``__slots__`` lite".
:py:class:`~armi.reactor.parameters.resolveCollections.ResolveParametersMeta` :
This metaclass is used by the base ``ArmiObject`` class to aid in the creation of a
hierarchy of ``ParameterCollection`` classes that appropriately represent a specific
``ArmiObject`` subclass's parameters. In short, it looks at the class attributes of
an ``ArmiObject`` subclass to see if there is a ``pDefs`` attribute (which should be
an instance of ``ParameterDefinitionCollection``). If the ``pDefs`` attribute
exists, the class will get its own ``ParameterCollection`` class, which will itself
be a subclass of the parameter collection class associated with the most immediate
ancestor that also had its own ``pDefs``. If an ``ArmiObject`` subclass has no
``pDefs`` attribute of its own, it will simply be associated with the parameter
collection class of its parent.
This rather roundabout approach is used to address many of the design considerations
laid out below. Namely that pains be taken to minimize memory consumption, properties
be used to control data access, and that it be relatively difficult to introduce
programming errors related to improperly-defined or colliding parameters.
Design Considerations
=====================
.. list-table:: Design considerations
:header-rows: 1
* - Issue
- Resolution/Consequences
* - Metadata about parameters is necessary for determining whether a parameter
should be stored in the database, and to allow the user to toggle this switch.
- Parameters must be uniquely named within a ``Composite`` subclass.
Also, we need to have :py:class:`Parameter` classes to store this metadata.
* - There should not be any naming restrictions between different ``Composite`` subclasses.
- Parameters must be defined or associated with a specific ``ParameterCollection`` subclass.
* - PyLint cannot find programming errors related to incorrect strings.
- We would like to use methods/functions for controlling state information.
This also eliminated the possibility of using resource files to define the
properties, otherwise we would be mapping names between some resource file and
the associated parameter/property definition.
* - Creating getters and setters for every parameter would be overwhelming and
unsustainable.
- We will use Python descriptors, which have *most* of the functionality used in
getters and setters.
:py:class:`ParameterCollection` knows how to generate descriptors for itself,
based on a :py:class:`ParameterDefinitionCollection`.
* - The majority of memory consumption occurs in parameters, strings and
dictionaries. Minimizing the storage requirements of the parameters is desirable.
- Python ``__slots__`` are a language feature which eliminates the need for each
class instance to have a ``__dict__``. This saves memory when there are many
instances of a class. Slot access can sometimes be faster as well.
In the past, ``__slots__`` were used to store parameter values. This became
rather onerous when we wanted to support parameter definitions from plugins. We
now use the traditional ``__dict__``, but take pains to make sure that we can
get the memory savings from the key-sharing dicts provided by PEP-412. Namely,
all attributes from the parameter definitions and other state are initialized to
__something__ within the ``__init__()`` routine.
* - Parameters are just fancy properties with meta data.
- Implementing the descriptor interface on a :py:class:`Parameter` removes the
need to construct a :py:class:`Parameter` without a name, then come back through
with the ``applyParameters()`` class method to apply the
:py:class:`Parameter` as a descriptor.
.. _thefreedictionary: http://www.thefreedictionary.com/parameter
.. _Meriam-Webster: http://www.merriam-webster.com/dictionary/parameter
"""
from armi.reactor.parameters.parameterCollections import (
ParameterCollection,
collectPluginParameters,
)
from armi.reactor.parameters.parameterCollections import applyAllParameters
from armi.reactor.parameters.parameterDefinitions import (
ParameterDefinitionCollection,
Parameter,
)
from armi.reactor.parameters.parameterDefinitions import (
SINCE_INITIALIZATION,
SINCE_LAST_DB_TRANSMISSION,
SINCE_LAST_DISTRIBUTE_STATE,
SINCE_LAST_GEOMETRY_TRANSFORMATION,
SINCE_BACKUP,
SINCE_ANYTHING,
NEVER,
Serializer,
Category,
ParamLocation,
NoDefault,
ALL_DEFINITIONS,
)
from armi.reactor.parameters.exceptions import (
ParameterDefinitionError,
ParameterError,
UnknownParameterError,
)
forType = ALL_DEFINITIONS.forType
inCategory = ALL_DEFINITIONS.inCategory
byNameAndType = ALL_DEFINITIONS.byNameAndType
resetAssignmentFlag = ALL_DEFINITIONS.resetAssignmentFlag
since = ALL_DEFINITIONS.since
def reset():
"""Reset the status of all parameter definintions.
This may become necessary when the state of the global parameter definitions becomes
invalid. Typically this happens when running multiple cases for the same import of
this module, e.g. in unit tests. In this case things like the assigned flags will
persist across test cases, leading to strange and incorrect behavior.
"""
for pd in ALL_DEFINITIONS:
pd.assigned = NEVER
def generateTable(klass, fwParams, app=None):
"""
Return a string with one or more reStructuredText list tables describing
the parameter definitions for the passed ArmiObject class.
Parameters
----------
klass : ArmiObject subclass
The Class for which parameter tables should be generated
fwParams : ParameterDefinitionCollection
A parameter definition collection containing the parameters that are always
defined for the passed ``klass``. The rest of the parameters come from the
plugins registered with the passed ``app``
app : App, optional
The ARMI-based application to draw plugins from.
"""
from armi import apps
if app is None:
app = apps.App()
defs = {None: fwParams}
for plugin in app.pluginManager.get_plugins():
plugParams = plugin.defineParameters()
if plugParams is not None:
pDefs = plugParams.get(klass, None)
if pDefs is not None:
defs[plugin] = pDefs
header_content = """
.. list-table:: {} Parameters from {{}}
:header-rows: 1
:widths: 30 40 30
* - Name
- Description
- Units
""".format(
klass.__name__
)
content = ""
for plugin, pdefs in defs.items():
srcName = plugin.__name__ if plugin is not None else "Framework"
pluginContent = header_content.format(srcName)
for pd in pdefs:
pluginContent += f""" * - {pd.name}
- {pd.description}
- {pd.units}
"""
content += pluginContent + "\n\n"
return content
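A minimal sketch of the reStructuredText rows that ``generateTable`` emits per parameter definition. ``FakeParam`` is a hypothetical stand-in for a real armi ``Parameter``; only the three attributes the table consumes are modeled.

```python
from collections import namedtuple

# Hypothetical stand-in for armi's Parameter: just the attributes the table reads.
FakeParam = namedtuple("FakeParam", ["name", "description", "units"])


def render_rows(pdefs):
    """Render one `* - name / - description / - units` list-table entry per definition."""
    content = ""
    for pd in pdefs:
        content += f"""    * - {pd.name}
      - {pd.description}
      - {pd.units}
"""
    return content


rows = render_rows([FakeParam("flux", "Neutron flux", "n/cm^2/s")])
```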
# -*- coding: utf-8 -*-
import csv
import logging
import math
import multiprocessing
import os
import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from typing import Dict, List, Tuple
from django.utils import timezone
import numpy as np
import pandas as pd
import psutil
import rpy2.robjects as ro
import simplejson as json
from rpy2.robjects import pandas2ri, r as rlang
from rpy2.robjects.packages import importr
from data_refinery_common.logging import get_and_configure_logger
from data_refinery_common.models import ComputedFile, Sample
from data_refinery_common.utils import get_env_variable
from data_refinery_workers.processors import utils
MULTIPROCESSING_MAX_THREAD_COUNT = max(1, math.floor(multiprocessing.cpu_count() / 2) - 1)
RESULTS_BUCKET = get_env_variable("S3_RESULTS_BUCKET_NAME", "refinebio-results-bucket")
S3_BUCKET_NAME = get_env_variable("S3_BUCKET_NAME", "data-refinery")
BODY_HTML = (
Path("data_refinery_workers/processors/smasher_email.min.html").read_text().replace("\n", "")
)
BODY_ERROR_HTML = (
Path("data_refinery_workers/processors/smasher_email_error.min.html")
.read_text()
.replace("\n", "")
)
BYTES_IN_GB = 1024 * 1024 * 1024
QN_CHUNK_SIZE = 10000
logger = get_and_configure_logger(__name__)
### DEBUG ###
logger.setLevel(logging.getLevelName("DEBUG"))
def log_state(message, job_id, start_time=False):
if logger.isEnabledFor(logging.DEBUG):
process = psutil.Process(os.getpid())
ram_in_GB = process.memory_info().rss / BYTES_IN_GB
logger.debug(message, total_cpu=psutil.cpu_percent(), process_ram=ram_in_GB, job_id=job_id)
if start_time:
logger.debug("Duration: %s" % (time.time() - start_time), job_id=job_id)
else:
return time.time()
def prepare_files(job_context: Dict) -> Dict:
"""
Fetches and prepares the files to smash.
"""
start_prepare_files = log_state("start prepare files", job_context["job"].id)
found_files = False
job_context["filtered_samples"] = {}
job_context["input_files"] = {}
# `key` can either be the species name or experiment accession.
for key, samples in job_context["samples"].items():
smashable_files = []
seen_files = set()
for sample in samples:
smashable_file = sample.get_most_recent_smashable_result_file()
if smashable_file is not None and smashable_file not in seen_files:
smashable_files.append((smashable_file, sample))
seen_files.add(smashable_file)
found_files = True
else:
sample_metadata = sample.to_metadata_dict()
job_context["filtered_samples"][sample.accession_code] = {
**sample_metadata,
"reason": "This sample did not have a processed file associated with it in our database.",
"experiment_accession_code": get_experiment_accession(
sample.accession_code, job_context["dataset"].data
),
}
job_context["input_files"][key] = smashable_files
job_context["num_input_files"] = len(job_context["input_files"])
job_context["group_by_keys"] = list(job_context["input_files"].keys())
if not found_files:
raise utils.ProcessorJobError(
"Couldn't get any files to smash for Smash job!!",
success=False,
dataset_id=job_context["dataset"].id,
num_samples=len(job_context["samples"]),
)
dataset_id = str(job_context["dataset"].pk)
job_context["work_dir"] = "/home/user/data_store/smashed/" + dataset_id + "/"
# Ensure we have a fresh smash directory
shutil.rmtree(job_context["work_dir"], ignore_errors=True)
os.makedirs(job_context["work_dir"])
job_context["output_dir"] = job_context["work_dir"] + "output/"
os.makedirs(job_context["output_dir"])
log_state("end prepare files", job_context["job"].id, start_prepare_files)
return job_context
def _load_and_sanitize_file(computed_file_path) -> pd.DataFrame:
""" Read and sanitize a computed file """
data = pd.read_csv(
computed_file_path,
sep="\t",
header=0,
index_col=0,
dtype={0: str, 1: np.float32},
error_bad_lines=False,
)
# Strip any funky whitespace
data.columns = data.columns.str.strip()
data = data.dropna(axis="columns", how="all")
# Make sure the index type is correct
data.index = data.index.map(str)
# Ensure that we don't have any dangling Brainarray-generated probe symbols.
# BA likes to leave '_at', signifying probe identifiers,
# on their converted, non-probe identifiers. It makes no sense.
# So, we chop them off and don't worry about it.
data.index = data.index.str.replace("_at", "")
# Remove any lingering Affymetrix control probes ("AFFX-")
data = data[~data.index.str.contains("AFFX-")]
# If there are any _versioned_ gene identifiers, remove that
# version information. We're using the latest brainarray for everything anyway.
# Jackie says this is okay.
# She also says that in the future, we may only want to do this
# for cross-technology smashes.
# This regex needs to be able to handle EGIDs in the form:
# ENSGXXXXYYYZZZZ.6
# and
# fgenesh2_kg.7__3016__AT5G35080.1 (via http://plants.ensembl.org/Arabidopsis_lyrata/ \
# Gene/Summary?g=fgenesh2_kg.7__3016__AT5G35080.1;r=7:17949732-17952000;t=fgenesh2_kg. \
# 7__3016__AT5G35080.1;db=core)
data.index = data.index.str.replace(r"(\.[^.]*)$", "")
# Squish duplicated rows together.
# XXX/TODO: Is mean the appropriate method here?
# We can make this an option in future.
# Discussion here: https://github.com/AlexsLemonade/refinebio/issues/186#issuecomment-395516419
data = data.groupby(data.index, sort=False).mean()
return data
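The two index cleanups above can be illustrated on plain strings. This is a sketch only: the real code applies the same rules through pandas' ``.str`` methods, and ``sanitize_gene_id`` is a hypothetical helper, not part of the module.

```python
import re


def sanitize_gene_id(gene_id: str) -> str:
    """Mirror the two index cleanups in _load_and_sanitize_file:
    drop Brainarray's '_at' residue, then any trailing '.N' version suffix."""
    gene_id = gene_id.replace("_at", "")
    return re.sub(r"(\.[^.]*)$", "", gene_id)
```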
def process_frame(work_dir, computed_file, sample_accession_code, aggregate_by) -> pd.DataFrame:
""" Downloads the computed file from S3 and tries to see if it's smashable.
Returns a data frame if the file can be processed or False otherwise. """
try:
# Download the file to a job-specific location so it
# won't disappear while we're using it.
computed_file_path = computed_file.get_synced_file_path(
path="%s%s" % (work_dir, computed_file.filename)
)
# Bail appropriately if this isn't a real file.
if not computed_file_path or not os.path.exists(computed_file_path):
logger.warning(
"Smasher received non-existent file path.",
computed_file_path=computed_file_path,
computed_file_id=computed_file.id,
)
return None
data = _load_and_sanitize_file(computed_file_path)
if len(data.columns) > 2:
# Most of the time, >1 is actually bad, but we also need to support
# two-channel samples. I think ultimately those should be given some kind of
# special consideration.
logger.info(
"Found a frame with more than 2 columns - this shouldn't happen!",
computed_file_path=computed_file_path,
computed_file_id=computed_file.id,
)
return None
# via https://github.com/AlexsLemonade/refinebio/issues/330:
# aggregating by experiment -> return untransformed output from tximport
# aggregating by species -> log2(x + 1) tximport output
if aggregate_by == "SPECIES" and computed_file.has_been_log2scaled():
data = data + 1
data = np.log2(data)
# Ideally done in the NO-OPPER, but sanity check here.
if (not computed_file.has_been_log2scaled()) and (data.max() > 100).any():
logger.info("Detected non-log2 microarray data.", computed_file_id=computed_file.id)
data = np.log2(data)
# Explicitly title this dataframe
try:
data.columns = [sample_accession_code]
except ValueError:
# This sample might have multiple channels, or something else.
# Don't mess with it.
logger.warn(
"Smasher found multi-channel column (probably) - skipping!",
exc_info=1,
computed_file_path=computed_file_path,
)
return None
except Exception:
# Okay, somebody probably forgot to create a SampleComputedFileAssociation
# Don't mess with it.
logger.warn(
"Smasher found very bad column title - skipping!",
exc_info=1,
computed_file_path=computed_file_path,
)
return None
except Exception:
logger.exception("Unable to smash file", file=computed_file_path)
return None
# TEMPORARY for iterating on compendia more quickly.
# finally:
# # Delete before archiving the work dir
# if computed_file_path and os.path.exists(computed_file_path):
# os.remove(computed_file_path)
return data
def load_first_pass_data_if_cached(work_dir: str):
path = os.path.join(work_dir, "first_pass.csv")
try:
with open(path, newline="") as csvfile:
reader = csv.reader(csvfile)
gene_ids = next(reader)
microarray_columns = next(reader)
rnaseq_columns = next(reader)
return {
"gene_ids": gene_ids,
"microarray_columns": microarray_columns,
"rnaseq_columns": rnaseq_columns,
}
# If the file doesn't exist then the gene ids aren't cached. Any
# other exception should be handled and higher in the stack.
except FileNotFoundError:
return None
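The first-pass cache is a three-row CSV: gene ids, then microarray columns, then RNA-seq columns, read back in that same order. A self-contained round-trip sketch (``write_first_pass``/``read_first_pass`` are illustrative names, not module functions):

```python
import csv
import os
import tempfile


def write_first_pass(path, gene_ids, microarray_columns, rnaseq_columns):
    """Write the three cache rows in the order load_first_pass_data_if_cached reads them."""
    with open(path, "w", newline="") as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(gene_ids)
        writer.writerow(microarray_columns)
        writer.writerow(rnaseq_columns)


def read_first_pass(path):
    """Read the cache back; a missing file just means there is no cache yet."""
    try:
        with open(path, newline="") as csvfile:
            reader = csv.reader(csvfile)
            return {
                "gene_ids": next(reader),
                "microarray_columns": next(reader),
                "rnaseq_columns": next(reader),
            }
    except FileNotFoundError:
        return None


tmpdir = tempfile.mkdtemp()
cache_path = os.path.join(tmpdir, "first_pass.csv")
write_first_pass(cache_path, ["g1", "g2"], ["M1"], ["R1", "R2"])
cached = read_first_pass(cache_path)
```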
def cache_first_pass(
job_context: Dict, gene_ids: List[str], microarray_columns: List[str], rnaseq_columns: List[str]
):
try:
path = os.path.join(job_context["work_dir"], "first_pass.csv")
logger.info(
"Caching gene_ids, microarray_columns, and rnaseq_columns to %s",
path,
job_id=job_context["job"].id,
)
with open(path, "w", newline="") as csvfile:
writer = csv.writer(csvfile)
writer.writerow(gene_ids)
writer.writerow(microarray_columns)
writer.writerow(rnaseq_columns)
# Nothing in the above try should raise an exception, but if it
# does don't waste the work we did in the first pass.
except Exception:
logger.exception(
"Error writing gene identifiers to CSV file.", job_id=job_context["job"].id
)
def process_frames_for_key(
key: str, input_files: List[Tuple[ComputedFile, Sample]], job_context: Dict
) -> Dict:
"""Download, read, and chunk processed sample files from s3.
`key` is the species or experiment whose samples are contained in `input_files`.
Will add to job_context the keys 'microarray_matrix' and
'rnaseq_matrix' with pandas dataframes containing all of the
samples' data. Also adds the key 'unsmashable_files' containing a
list of paths that were determined to be unsmashable.
"""
start_gene_ids = log_state(
"Collecting all gene identifiers for key {}".format(key), job_context["job"].id
)
# Build up a list of gene identifiers because these will be the
# rows of our matrices, and we want to preallocate them so we need
# to know them all.
## We may have built this list in a previous job, check to see if it's cached:
cached_data = load_first_pass_data_if_cached(job_context["work_dir"])
first_pass_was_cached = False
if cached_data:
logger.info(
(
"The data from the first pass was cached, so we're using "
"that and skipping the first pass."
),
job_id=job_context["job"].id,
)
first_pass_was_cached = True
all_gene_identifiers = cached_data["gene_ids"]
microarray_columns = cached_data["microarray_columns"]
rnaseq_columns = cached_data["rnaseq_columns"]
else:
gene_identifier_counts = {}
microarray_columns = []
rnaseq_columns = []
for index, (computed_file, sample) in enumerate(input_files):
log_state("1st processing frame {}".format(index), job_context["job"].id)
frame_data = process_frame(
job_context["work_dir"],
computed_file,
sample.accession_code,
job_context["dataset"].aggregate_by,
)
if frame_data is None:
# we were unable to process this sample, so we drop
logger.warning(
"Unable to smash file",
computed_file=computed_file.id,
dataset_id=job_context["dataset"].id,
job_id=job_context["job"].id,
)
sample_metadata = sample.to_metadata_dict()
job_context["filtered_samples"][sample.accession_code] = {
**sample_metadata,
"reason": "The file associated with this sample did not pass the QC checks we apply before aggregating.",
"filename": computed_file.filename,
"experiment_accession_code": get_experiment_accession(
sample.accession_code, job_context["dataset"].data
),
}
continue
# Count how many frames are in each tech so we can preallocate
# the matrices in both directions.
for gene_id in frame_data.index:
if gene_id in gene_identifier_counts:
gene_identifier_counts[gene_id] += 1
else:
gene_identifier_counts[gene_id] = 1
# Each dataframe should only have 1 column, but it's
# returned as a list so use extend.
if sample.technology == "MICROARRAY":
microarray_columns.extend(frame_data.columns)
elif sample.technology == "RNA-SEQ":
rnaseq_columns.extend(frame_data.columns)
# We only want to use gene identifiers which are present
# in >50% of the samples. We're doing this because a large
# number of gene identifiers present in only a modest
# number of experiments have leaked through. We wouldn't
# necessarily want to do this if we'd mapped all the data
# to ENSEMBL identifiers successfully.
total_samples = len(microarray_columns) + len(rnaseq_columns)
all_gene_identifiers = [
gene_id
for gene_id in gene_identifier_counts
if gene_identifier_counts[gene_id] > (total_samples * 0.5)
]
all_gene_identifiers.sort()
del gene_identifier_counts
log_template = (
"Collected {0} gene identifiers for {1} across"
" {2} micrarry samples and {3} RNA-Seq samples."
)
log_state(
log_template.format(
len(all_gene_identifiers), key, len(microarray_columns), len(rnaseq_columns)
),
job_context["job"].id,
start_gene_ids,
)
# Temporarily only cache mouse compendia because it may not succeed.
if not first_pass_was_cached and key == "MUS_MUSCULUS":
cache_first_pass(job_context, all_gene_identifiers, microarray_columns, rnaseq_columns)
start_build_matrix = log_state("Beginning to build the full matrices.", job_context["job"].id)
# Sort the columns so that the matrices are in predictable orders.
microarray_columns.sort()
rnaseq_columns.sort()
# Preallocate the matrices to be the exact size we will need. This
# should prevent any operations from happening while we build it
# up, so the only RAM used will be needed.
job_context["microarray_matrix"] = pd.DataFrame(
data=None, index=all_gene_identifiers, columns=microarray_columns, dtype=np.float32
)
job_context["rnaseq_matrix"] = pd.DataFrame(
data=None, index=all_gene_identifiers, columns=rnaseq_columns, dtype=np.float32
)
for index, (computed_file, sample) in enumerate(input_files):
log_state("2nd processing frame {}".format(index), job_context["job"].id)
frame_data = process_frame(
job_context["work_dir"],
computed_file,
sample.accession_code,
job_context["dataset"].aggregate_by,
)
if frame_data is None:
job_context["unsmashable_files"].append(computed_file.filename)
sample_metadata = sample.to_metadata_dict()
job_context["filtered_samples"][sample.accession_code] = {
**sample_metadata,
"reason": "The file associated with this sample did not contain a vector that fit the expected dimensions of the matrix.",
"filename": computed_file.filename,
"experiment_accession_code": get_experiment_accession(
sample.accession_code, job_context["dataset"].data
),
}
continue
frame_data = frame_data.reindex(all_gene_identifiers)
# The dataframe for each sample will only have one column
# whose header will be the accession code.
column = frame_data.columns[0]
if sample.technology == "MICROARRAY":
job_context["microarray_matrix"][column] = frame_data.values
elif sample.technology == "RNA-SEQ":
job_context["rnaseq_matrix"][column] = frame_data.values
job_context["num_samples"] = 0
if job_context["microarray_matrix"] is not None:
job_context["num_samples"] += len(job_context["microarray_matrix"].columns)
if job_context["rnaseq_matrix"] is not None:
job_context["num_samples"] += len(job_context["rnaseq_matrix"].columns)
log_state(
"Built full matrices for key {}".format(key), job_context["job"].id, start_build_matrix
)
return job_context
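The ">50% of samples" gene-identifier filter inside ``process_frames_for_key`` is simple enough to isolate. A sketch on toy counts (``majority_gene_ids`` is an illustrative name):

```python
def majority_gene_ids(gene_identifier_counts, total_samples):
    """Keep only gene ids seen in more than half of the samples, sorted,
    matching the filter applied in process_frames_for_key."""
    return sorted(
        gene_id
        for gene_id, count in gene_identifier_counts.items()
        if count > (total_samples * 0.5)
    )


# gA appears in 3 of 4 samples (kept); gB in exactly half (dropped); gC in 1 (dropped).
kept = majority_gene_ids({"gA": 3, "gB": 2, "gC": 1}, total_samples=4)
```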
# Modified from: http://yaoyao.codes/pandas/2018/01/23/pandas-split-a-dataframe-into-chunks
def _index_marks(num_columns, chunk_size):
return range(chunk_size, math.ceil(num_columns / chunk_size) * chunk_size, chunk_size)
def _split_dataframe_columns(dataframe, chunk_size):
indices = _index_marks(dataframe.shape[1], chunk_size)
return np.split(dataframe, indices, axis=1)
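The split points produced by ``_index_marks`` are just multiples of the chunk size, with the ``ceil`` keeping a final partial chunk. A pure-arithmetic sketch of the same computation:

```python
import math


def index_marks(num_columns, chunk_size):
    # Same arithmetic as _index_marks: split points every chunk_size columns.
    # np.split(df, list(index_marks(25, 10)), axis=1) would yield widths 10, 10, 5.
    return range(chunk_size, math.ceil(num_columns / chunk_size) * chunk_size, chunk_size)


marks = list(index_marks(25, 10))
```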
def _quantile_normalize_matrix(target_vector, original_matrix):
preprocessCore = importr("preprocessCore")
as_numeric = rlang("as.numeric")
data_matrix = rlang("data.matrix")
# Convert the smashed frames to an R numeric Matrix
target_vector = as_numeric(target_vector)
# Do so in chunks if the matrix is too large.
if original_matrix.shape[1] <= QN_CHUNK_SIZE:
merged_matrix = data_matrix(original_matrix)
normalized_matrix = preprocessCore.normalize_quantiles_use_target(
x=merged_matrix, target=target_vector, copy=True
)
# And finally convert back to Pandas
ar = np.array(normalized_matrix)
new_merged = pd.DataFrame(ar, columns=original_matrix.columns, index=original_matrix.index)
else:
matrix_chunks = _split_dataframe_columns(original_matrix, QN_CHUNK_SIZE)
for i, chunk in enumerate(matrix_chunks):
R_chunk = data_matrix(chunk)
normalized_chunk = preprocessCore.normalize_quantiles_use_target(
x=R_chunk, target=target_vector, copy=True
)
ar = np.array(normalized_chunk)
start_column = i * QN_CHUNK_SIZE
end_column = (i + 1) * QN_CHUNK_SIZE
original_matrix.iloc[:, start_column:end_column] = ar
new_merged = original_matrix
return new_merged
def _test_qn(merged_matrix):
""" Selects a list of 100 random pairs of columns and performs the KS Test on them.
Returns a list of tuples with the results of the KN test (statistic, pvalue) """
# Verify this QN, related:
# https://github.com/AlexsLemonade/refinebio/issues/599#issuecomment-422132009
data_matrix = rlang("data.matrix")
as_numeric = rlang("as.numeric")
set_seed = rlang("set.seed")
combn = rlang("combn")
ncol = rlang("ncol")
ks_test = rlang("ks.test")
which = rlang("which")
merged_R_matrix = data_matrix(merged_matrix)
set_seed(123)
n = ncol(merged_R_matrix)[0]
m = 2
# Not enough columns to perform KS test - either bad smash or single sample smash.
if n < m:
return None
# This won't work with larger matrices
# https://github.com/AlexsLemonade/refinebio/issues/1860
ncolumns = ncol(merged_R_matrix)
if ncolumns[0] <= 200:
# Convert to NP, Shuffle, Return to R
combos = combn(ncolumns, 2)
ar = np.array(combos)
np.random.shuffle(np.transpose(ar))
else:
indexes = [*range(ncolumns[0])]
np.random.shuffle(indexes)
ar = np.array([*zip(indexes[0:100], indexes[100:200])])
nr, nc = ar.shape
combos = ro.r.matrix(ar, nrow=nr, ncol=nc)
result = []
# adapted from
# https://stackoverflow.com/questions/9661469/r-t-test-over-all-columns
# apply KS test to randomly selected pairs of columns (samples)
for i in range(1, min(ncol(combos)[0], 100)):
value1 = combos.rx(1, i)[0]
value2 = combos.rx(2, i)[0]
test_a = merged_R_matrix.rx(True, value1)
test_b = merged_R_matrix.rx(True, value2)
# RNA-seq has a lot of zeroes in it, which
# breaks the ks_test. Therefore we want to
# filter them out. To do this we drop the
# lowest half of the values. If there's
# still zeroes in there, then that's
# probably too many zeroes so it's okay to
# fail.
median_a = np.median(test_a)
median_b = np.median(test_b)
# `which` returns indices which are
# 1-indexed. Python accesses lists with
# zero-indexes, even if that list is
# actually an R vector. Therefore subtract
# 1 to account for the difference.
test_a = [test_a[i - 1] for i in which(test_a > median_a)]
test_b = [test_b[i - 1] for i in which(test_b > median_b)]
# The python list comprehension gives us a
# python list, but ks_test wants an R
# vector so let's go back.
test_a = as_numeric(test_a)
test_b = as_numeric(test_b)
ks_res = ks_test(test_a, test_b)
statistic = ks_res.rx("statistic")[0][0]
pvalue = ks_res.rx("p.value")[0][0]
result.append((statistic, pvalue))
return result
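The zero-filtering step in ``_test_qn`` drops everything at or below the median before running the KS test. A stdlib sketch (the real code uses ``np.median`` plus R's ``which``; ``drop_lower_half`` is an illustrative name):

```python
import statistics


def drop_lower_half(values):
    """Keep only values strictly above the median, so RNA-seq zero-inflation
    doesn't break the KS test - mirroring the filter in _test_qn."""
    median = statistics.median(values)
    return [v for v in values if v > median]


# Median of [0, 0, 0, 1, 5, 9] is 0.5, so the zeroes are dropped.
filtered = drop_lower_half([0, 0, 0, 1, 5, 9])
```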
def quantile_normalize(job_context: Dict, ks_check=True, ks_stat=0.001) -> Dict:
"""
Apply quantile normalization.
"""
# Prepare our QN target file
organism = job_context["organism"]
if not organism.qn_target:
raise utils.ProcessorJobError(
"Could not find QN target for Organism: " + str(organism),
success=False,
organism=organism,
dataset_id=job_context["dataset"].id,
)
qn_target_path = organism.qn_target.computedfile_set.latest().sync_from_s3()
qn_target_frame = pd.read_csv(
qn_target_path, sep="\t", header=None, index_col=None, error_bad_lines=False
)
# Prepare our RPy2 bridge
pandas2ri.activate()
# Remove un-quantiled normalized matrix from job_context
# because we no longer need it.
merged_no_qn = job_context.pop("merged_no_qn")
# Perform the Actual QN
new_merged = _quantile_normalize_matrix(qn_target_frame[0], merged_no_qn)
# And add the quantile normalized matrix to job_context.
job_context["merged_qn"] = new_merged
# For now, don't test the QN for mouse/human. This never fails on
# smasher jobs and is OOM-killing our very large compendia
# jobs. Let's run this manually after we have a compendia job
# actually finish.
if organism.name in ["MUS_MUSCULUS", "HOMO_SAPIENS"]:
return job_context
ks_res = _test_qn(new_merged)
if ks_res:
for (statistic, pvalue) in ks_res:
job_context["ks_statistic"] = statistic
job_context["ks_pvalue"] = pvalue
# We're unsure of how stringent to be about
# the pvalue just yet, so we're extra lax
# rather than failing tons of tests. This may need tuning.
if ks_check and (statistic > ks_stat or pvalue < 0.8):
job_context["ks_warning"] = (
"Failed Kolmogorov Smirnov test! Stat: "
+ str(statistic)
+ ", PVal: "
+ str(pvalue)
)
else:
logger.warning(
"Not enough columns to perform KS test - either bad smash or single sample smash.",
dataset_id=job_context["dataset"].id,
)
return job_context
def compile_metadata(job_context: Dict) -> Dict:
"""Compiles metadata about the job.
Returns a new dict containing the metadata, not the job_context.
"""
metadata = {}
metadata["num_samples"] = job_context["num_samples"]
metadata["num_experiments"] = job_context["experiments"].count()
metadata["quant_sf_only"] = job_context["dataset"].quant_sf_only
if not job_context["dataset"].quant_sf_only:
metadata["aggregate_by"] = job_context["dataset"].aggregate_by
metadata["scale_by"] = job_context["dataset"].scale_by
# https://github.com/AlexsLemonade/refinebio/pull/421#discussion_r203799646
# TODO: do something with these.
# metadata['non_aggregated_files'] = job_context["unsmashable_files"]
metadata["ks_statistic"] = job_context.get("ks_statistic", None)
metadata["ks_pvalue"] = job_context.get("ks_pvalue", None)
metadata["ks_warning"] = job_context.get("ks_warning", None)
metadata["quantile_normalized"] = job_context["dataset"].quantile_normalize
filtered_samples = job_context["filtered_samples"]
samples = {}
for sample in job_context["dataset"].get_samples():
if sample.accession_code in filtered_samples:
# skip the samples that were filtered
continue
samples[sample.accession_code] = sample.to_metadata_dict()
metadata["samples"] = samples
experiments = {}
for experiment in job_context["dataset"].get_experiments():
experiment_metadata = experiment.to_metadata_dict()
# exclude filtered samples from experiment metadata
all_samples = experiment_metadata["sample_accession_codes"]
all_samples = [code for code in all_samples if code not in filtered_samples]
experiment_metadata["sample_accession_codes"] = all_samples
experiments[experiment.accession_code] = experiment_metadata
metadata["experiments"] = experiments
return metadata
def write_non_data_files(job_context: Dict) -> Dict:
"""Writes the files that are not the actual data of the dataset.
This include LICENSE.txt and README.md files and the metadata.
Adds the key `metadata` to job_context and populates it with all
the metadata that needs to be written.
"""
job_context["metadata"] = compile_metadata(job_context)
shutil.copy("README_DATASET.md", job_context["output_dir"] + "README.md")
shutil.copy("LICENSE_DATASET.txt", job_context["output_dir"] + "LICENSE.TXT")
# Write samples metadata to TSV
try:
write_tsv_json(job_context)
# Metadata to JSON
job_context["metadata"]["created_at"] = timezone.now().strftime("%Y-%m-%dT%H:%M:%S")
aggregated_metadata_path = os.path.join(
job_context["output_dir"], "aggregated_metadata.json"
)
with open(aggregated_metadata_path, "w", encoding="utf-8") as metadata_file:
json.dump(job_context["metadata"], metadata_file, indent=4, sort_keys=True)
if job_context["filtered_samples"]:
# generate filtered samples file only if some samples were skipped
filtered_samples_path = os.path.join(
job_context["output_dir"], "filtered_samples_metadata.json"
)
with open(filtered_samples_path, "w", encoding="utf-8") as metadata_file:
json.dump(job_context["filtered_samples"], metadata_file, indent=4, sort_keys=True)
columns = get_tsv_columns(job_context["filtered_samples"])
filtered_samples_tsv_path = os.path.join(
job_context["output_dir"], "filtered_samples_metadata.tsv"
)
with open(filtered_samples_tsv_path, "w", encoding="utf-8") as tsv_file:
dw = csv.DictWriter(tsv_file, columns, delimiter="\t", extrasaction="ignore")
dw.writeheader()
for sample_metadata in job_context["filtered_samples"].values():
dw.writerow(get_tsv_row_data(sample_metadata, job_context["dataset"].data))
except Exception:
raise utils.ProcessorJobError("Failed to write metadata TSV!", success=False)
return job_context
def get_experiment_accession(sample_accession_code, dataset_data):
for experiment_accession, samples in dataset_data.items():
if sample_accession_code in samples:
return experiment_accession
return "" # Should never happen, because the sample is by definition in the dataset
def _add_annotation_column(annotation_columns, column_name):
"""Add annotation column names in place.
Any column_name that starts with "refinebio_" will be skipped.
"""
if not column_name.startswith("refinebio_"):
annotation_columns.add(column_name)
def _add_annotation_value(row_data, col_name, col_value, sample_accession_code):
"""Adds a new `col_name` key whose value is `col_value` to row_data.
If col_name already exists in row_data with different value, print
out a warning message.
"""
# Generate a warning message if annotation field name starts with
# "refinebio_". This should rarely (if ever) happen.
if col_name.startswith("refinebio_"):
logger.warning(
"Annotation value skipped",
annotation_field=col_name,
annotation_value=col_value,
sample_accession_code=sample_accession_code,
)
elif col_name not in row_data:
row_data[col_name] = col_value
# Generate a warning message in case of conflicts of annotation values.
# (Requested by Dr. <NAME>)
elif row_data[col_name] != col_value:
logger.warning(
"Conflict of values found in column %s: %s vs. %s"
% (col_name, row_data[col_name], col_value),
sample_accession_code=sample_accession_code,
)
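The merge policy above is first-writer-wins with a reserved ``refinebio_`` namespace. A sketch that returns status strings where the real function logs warnings (``add_annotation_value`` is an illustrative stand-in):

```python
def add_annotation_value(row_data, col_name, col_value):
    """First-writer-wins merge mirroring _add_annotation_value; the logger
    calls are replaced by returned status strings for illustration."""
    if col_name.startswith("refinebio_"):
        return "skipped"  # reserved namespace
    if col_name not in row_data:
        row_data[col_name] = col_value
        return "added"
    if row_data[col_name] != col_value:
        return "conflict"  # existing value is kept
    return "duplicate"


row = {}
```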
def get_tsv_row_data(sample_metadata, dataset_data):
"""Returns field values based on input sample_metadata.
Some annotation fields are treated specially because they are more
important. See the `get_tsv_columns` function below for details.
"""
sample_accession_code = sample_metadata.get("refinebio_accession_code", "")
row_data = dict()
for meta_key, meta_value in sample_metadata.items():
# If the field is a refinebio-specific field, simply copy it.
if meta_key != "refinebio_annotations":
row_data[meta_key] = meta_value
continue
# Decompose sample_metadata["refinebio_annotations"], which is
# an array of annotations.
for annotation in meta_value:
for annotation_key, annotation_value in annotation.items():
# "characteristic" in ArrayExpress annotation
if (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "characteristic"
):
for pair_dict in annotation_value:
if "category" in pair_dict and "value" in pair_dict:
col_name, col_value = pair_dict["category"], pair_dict["value"]
_add_annotation_value(
row_data, col_name, col_value, sample_accession_code
)
# "variable" in ArrayExpress annotation
elif (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "variable"
):
for pair_dict in annotation_value:
if "name" in pair_dict and "value" in pair_dict:
col_name, col_value = pair_dict["name"], pair_dict["value"]
_add_annotation_value(
row_data, col_name, col_value, sample_accession_code
)
# Skip "source" field ArrayExpress sample's annotation
elif (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "source"
):
continue
# "characteristics_ch1" in GEO annotation
elif (
sample_metadata.get("refinebio_source_database", "") == "GEO"
and annotation_key == "characteristics_ch1"
): # array of strings
for pair_str in annotation_value:
if ":" in pair_str:
col_name, col_value = pair_str.split(":", 1)
col_value = col_value.strip()
_add_annotation_value(
row_data, col_name, col_value, sample_accession_code
)
# If annotation_value includes only a 'name' key, extract its value directly:
elif (
isinstance(annotation_value, dict)
and len(annotation_value) == 1
and "name" in annotation_value
):
_add_annotation_value(
row_data, annotation_key, annotation_value["name"], sample_accession_code
)
# If annotation_value is a single-element array, extract the element directly:
elif isinstance(annotation_value, list) and len(annotation_value) == 1:
_add_annotation_value(
row_data, annotation_key, annotation_value[0], sample_accession_code
)
# Otherwise save all annotation fields in separate columns
else:
_add_annotation_value(
row_data, annotation_key, annotation_value, sample_accession_code
)
row_data["experiment_accession"] = get_experiment_accession(sample_accession_code, dataset_data)
return row_data
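GEO's ``characteristics_ch1`` entries are ``"key: value"`` strings, split on the first colon and stripped, as in the branch above. A sketch of just that parsing step (``parse_geo_characteristic`` is an illustrative name):

```python
def parse_geo_characteristic(pair_str):
    """Split a GEO 'characteristics_ch1' string into (column, value),
    as done in get_tsv_row_data; return None when there is no separator."""
    if ":" not in pair_str:
        return None
    col_name, col_value = pair_str.split(":", 1)
    return col_name, col_value.strip()
```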
def get_tsv_columns(samples_metadata):
"""Returns an array of strings that will be written as a TSV file's
header. The columns are based on fields found in samples_metadata.
Some nested annotation fields are taken out as separate columns
because they are more important than the others.
"""
refinebio_columns = set()
annotation_columns = set()
for sample_metadata in samples_metadata.values():
for meta_key, meta_value in sample_metadata.items():
if meta_key != "refinebio_annotations":
refinebio_columns.add(meta_key)
continue
# Decompose sample_metadata["annotations"], which is an array of annotations!
for annotation in meta_value:
for annotation_key, annotation_value in annotation.items():
# For ArrayExpress samples, take out the fields
# nested in "characteristic" as separate columns.
if (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "characteristic"
):
for pair_dict in annotation_value:
if "category" in pair_dict and "value" in pair_dict:
_add_annotation_column(annotation_columns, pair_dict["category"])
# For ArrayExpress samples, also take out the fields
# nested in "variable" as separate columns.
elif (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "variable"
):
for pair_dict in annotation_value:
if "name" in pair_dict and "value" in pair_dict:
_add_annotation_column(annotation_columns, pair_dict["name"])
# For ArrayExpress samples, skip "source" field
elif (
sample_metadata.get("refinebio_source_database", "") == "ARRAY_EXPRESS"
and annotation_key == "source"
):
continue
# For GEO samples, take out the fields nested in
# "characteristics_ch1" as separate columns.
elif (
sample_metadata.get("refinebio_source_database", "") == "GEO"
and annotation_key == "characteristics_ch1"
): # array of strings
for pair_str in annotation_value:
if ":" in pair_str:
tokens = pair_str.split(":", 1)
_add_annotation_column(annotation_columns, tokens[0])
# Saves all other annotation fields in separate columns
else:
_add_annotation_column(annotation_columns, annotation_key)
# Return sorted columns, in which "refinebio_accession_code" and "experiment_accession" are
# always first, followed by the other refinebio columns (in alphabetic order), and
# annotation columns (in alphabetic order) at the end.
refinebio_columns.discard("refinebio_accession_code")
return (
["refinebio_accession_code", "experiment_accession"]
+ sorted(refinebio_columns)
+ sorted(annotation_columns)
)
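The ordering contract at the end of `get_tsv_columns` can be sketched in isolation. This is a toy reimplementation with made-up column names, not real refine.bio metadata:

```python
# Minimal sketch of the column ordering contract above: the two accession
# columns come first, then the remaining refinebio columns and the
# annotation columns, each sorted alphabetically.
def order_columns(refinebio_columns, annotation_columns):
    refinebio_columns = set(refinebio_columns)
    refinebio_columns.discard("refinebio_accession_code")
    return (
        ["refinebio_accession_code", "experiment_accession"]
        + sorted(refinebio_columns)
        + sorted(annotation_columns)
    )

print(order_columns(
    {"refinebio_accession_code", "refinebio_organism", "refinebio_title"},
    {"tissue", "age"},
))
# → ['refinebio_accession_code', 'experiment_accession',
#    'refinebio_organism', 'refinebio_title', 'age', 'tissue']
```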
def write_tsv_json(job_context):
"""Writes tsv files on disk.
If the dataset is aggregated by species, also write species-level
JSON file.
"""
# Avoid pulling this out of job_context repeatedly.
metadata = job_context["metadata"]
# Uniform TSV header per dataset
columns = get_tsv_columns(metadata["samples"])
# Per-Experiment Metadata
if job_context["dataset"].aggregate_by == "EXPERIMENT":
tsv_paths = []
for experiment_title, experiment_data in metadata["experiments"].items():
experiment_dir = job_context["output_dir"] + experiment_title + "/"
experiment_dir = experiment_dir.encode("ascii", "ignore")
os.makedirs(experiment_dir, exist_ok=True)
tsv_path = experiment_dir.decode("utf-8") + "metadata_" + experiment_title + ".tsv"
tsv_path = tsv_path.encode("ascii", "ignore")
tsv_paths.append(tsv_path)
with open(tsv_path, "w", encoding="utf-8") as tsv_file:
dw = csv.DictWriter(tsv_file, columns, delimiter="\t", extrasaction="ignore")
dw.writeheader()
for sample_accession_code, sample_metadata in metadata["samples"].items():
if sample_accession_code in experiment_data["sample_accession_codes"]:
row_data = get_tsv_row_data(sample_metadata, job_context["dataset"].data)
dw.writerow(row_data)
return tsv_paths
# Per-Species Metadata
elif job_context["dataset"].aggregate_by == "SPECIES":
tsv_paths = []
for species in job_context["group_by_keys"]:
species_dir = job_context["output_dir"] + species + "/"
os.makedirs(species_dir, exist_ok=True)
samples_in_species = []
tsv_path = species_dir + "metadata_" + species + ".tsv"
tsv_paths.append(tsv_path)
with open(tsv_path, "w", encoding="utf-8") as tsv_file:
# See http://www.lucainvernizzi.net/blog/2015/08/03/8x-speed-up-for-python-s-csv-dictwriter/
# about extrasaction.
dw = csv.DictWriter(tsv_file, columns, delimiter="\t", extrasaction="ignore")
dw.writeheader()
i = 0
for sample_metadata in metadata["samples"].values():
if sample_metadata.get("refinebio_organism", "") == species:
row_data = get_tsv_row_data(sample_metadata, job_context["dataset"].data)
dw.writerow(row_data)
samples_in_species.append(sample_metadata)
i = i + 1
if i % 1000 == 0:
progress_template = (
"Done with {0} out of {1} lines of metadata " "for species {2}"
)
log_state(
progress_template.format(i, len(metadata["samples"]), species),
job_context["job"].id,
)
# Writes a json file for current species:
if len(samples_in_species):
species_metadata = {"species": species, "samples": samples_in_species}
json_path = species_dir + "metadata_" + species + ".json"
with open(json_path, "w", encoding="utf-8") as json_file:
json.dump(species_metadata, json_file, indent=4, sort_keys=True)
return tsv_paths
# All Metadata
else:
all_dir = job_context["output_dir"] + "ALL/"
os.makedirs(all_dir, exist_ok=True)
tsv_path = all_dir + "metadata_ALL.tsv"
with open(tsv_path, "w", encoding="utf-8") as tsv_file:
dw = csv.DictWriter(tsv_file, columns, delimiter="\t", extrasaction="ignore")
dw.writeheader()
for sample_metadata in metadata["samples"].values():
row_data = get_tsv_row_data(sample_metadata, job_context["dataset"].data)
dw.writerow(row_data)
return [tsv_path]
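The `extrasaction="ignore"` pattern used in all three branches above can be shown standalone: `csv.DictWriter` silently drops any row keys missing from the header, which is what lets one uniform column list serve heterogeneous sample dicts. The data below is a toy example with a hypothetical accession code:

```python
import csv
import io

# Toy demonstration of csv.DictWriter(..., extrasaction="ignore"): keys not
# in the header are dropped instead of raising ValueError.
columns = ["refinebio_accession_code", "refinebio_title"]
buf = io.StringIO()
dw = csv.DictWriter(buf, columns, delimiter="\t", extrasaction="ignore")
dw.writeheader()
dw.writerow({
    "refinebio_accession_code": "GSM000001",  # hypothetical accession
    "refinebio_title": "sample 1",
    "unlisted_key": "dropped",  # not in `columns`, so it never reaches the file
})
print(buf.getvalue())
```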
def download_computed_file(download_tuple: Tuple[ComputedFile, str]):
""" this function downloads the latest computed file. Receives a tuple with
the computed file and the path where it needs to be downloaded
This is used to parallelize downloading quantsf files. """
(latest_computed_file, output_file_path) = download_tuple
try:
latest_computed_file.get_synced_file_path(path=output_file_path)
    except Exception:
# Let's not fail if there's an error syncing one of the quant.sf files
logger.exception("Failed to sync computed file", computed_file_id=latest_computed_file.pk)
def sync_quant_files(output_path, samples: List[Sample]):
""" Takes a list of ComputedFiles and copies the ones that are quant files to the provided directory.
Returns the total number of samples that were included """
num_samples = 0
page_size = 100
# split the samples in groups and download each one individually
with ThreadPoolExecutor(max_workers=MULTIPROCESSING_MAX_THREAD_COUNT) as executor:
        # For each sample we need its latest quant.sf file, but we don't want
        # to query the db for all of them at once; so we fetch them in groups
        # of `page_size` and download each group's computed files in parallel.
for sample_page in (
            samples[i : i + page_size] for i in range(0, len(samples), page_size)
):
sample_and_computed_files = []
for sample in sample_page:
latest_computed_file = sample.get_most_recent_quant_sf_file()
if not latest_computed_file:
continue
output_file_path = output_path + sample.accession_code + "_quant.sf"
sample_and_computed_files.append((latest_computed_file, output_file_path))
# download this set of files, this will take a few seconds that should also help the db recover
executor.map(download_computed_file, sample_and_computed_files)
num_samples += len(sample_and_computed_files)
return num_samples
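The paging generator in `sync_quant_files` can be isolated as below; since `range(0, len(samples), page_size)` already steps `i` by `page_size`, the correct slice is `samples[i : i + page_size]` (not `i * page_size`). This sketch uses dummy integer "samples":

```python
# Standalone sketch of the paging idiom above, with dummy integer samples.
def pages(samples, page_size):
    return (
        samples[i : i + page_size] for i in range(0, len(samples), page_size)
    )

# 250 samples in pages of 100 yields two full pages and one partial page.
sizes = [len(page) for page in pages(list(range(250)), 100)]
print(sizes)  # → [100, 100, 50]
```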
from copy import deepcopy
from dataclasses import dataclass, asdict
from logging import getLogger, WARNING
import anyconfig
import click
import sys
from pathlib import Path
from typing import List, Tuple
from .command import CwsMultiCommands
from .error import CwsClientError
from ..config import DEFAULT_PROJECT_DIR, DEFAULT_WORKSPACE
from ..utils import import_attr, get_system_info
from ..version import __version__
PROJECT_CONFIG_VERSION = 2
@click.group()
@click.version_option(version=__version__, message=f'%(prog)s %(version)s, {get_system_info()}')
@click.option('-p', '--project-dir', default=DEFAULT_PROJECT_DIR,
help=f"The project directory path (absolute or relative) [default to '{DEFAULT_PROJECT_DIR}'].")
@click.option('-c', '--config-file', help="Configuration file path [path from project dir].")
@click.option('-m', '--module', help="Filename of your microservice python source file.")
@click.option('-s', '--service', help="Microservice variable name in the source file.")
@click.option('-w', '--workspace', default=DEFAULT_WORKSPACE,
help=f"Application stage [default to '{DEFAULT_WORKSPACE}'].")
@click.pass_context
def client(*args, **kwargs):
...
def invoke(ctx):
"""Invokes the command over the service or the declared services in project configuration file."""
try:
args = ctx.args
protected_args = ctx.protected_args
if not protected_args:
            sys.stderr.write("No command given.\n")
client.main(['--help'])
sys.exit(1)
command_name = protected_args[0]
# get project options
cws_options = CwsClientOptions(ctx.params)
if not cws_options.services:
            sys.stderr.write("Nothing to execute as no service defined.\n")
sys.exit(1)
project_dir = cws_options.project_dir
workspace = cws_options.workspace
# Iterates over the declared services in project configuration file
commands_to_be_executed = CwsMultiCommands()
for module, service in cws_options.services:
ctx.args = list(args)
ctx.protected_args = protected_args
# Get command from the microservice description
handler = cws_options.get_handler(module, service)
handler.deferred_init(workspace)
service_config = cws_options.get_service_config(module, service)
command = service_config.get_command(command_name, handler)
if not command:
raise CwsClientError(f"Undefined command {command_name}.\n")
command_options = service_config.get_command_options(command_name)
# Get user defined options and convert them in right types
client_options, _, cmd_opts = command.make_parser(ctx).parse_args(ctx.args)
for opt_key, opt_value in client_options.items():
cmd_opt = next(x for x in cmd_opts if x.name == opt_key)
client_options[opt_key] = cmd_opt.type(opt_value)
# Adds command and global options
options = {**command_options, **client_options, '_from_cws': True}
if options.get('help', False):
print(command.get_help(ctx))
return
command.make_context(command.name, options)
commands_to_be_executed.append(command, options)
# Executes all commands
for command_class, execution_list in commands_to_be_executed.items():
command_class.multi_execute(project_dir, workspace, execution_list)
except CwsClientError as client_err:
sys.stderr.write(f"Error in command: {client_err.msg}\n")
sys.exit(1)
except Exception as e:
sys.stderr.write(f"Error in command: {str(e)}\n")
sys.exit(1)
client.invoke = invoke
@dataclass
class CwsClientOptions:
"""Client options defined from click command."""
project_dir: str
workspace: str
module: str
service: str
config_file: str
config_file_suffix: str
def __init__(self, params):
self.project_dir = params.get('project_dir')
self.workspace = params.get('workspace')
self.module = params.get('module')
self.service = params.get('service')
self.config_file = params.get('config_file') or 'project'
self.config_file_suffix = params.get('config_file_suffix') or '.cws.yml'
self.project_config = ProjectConfig(self.project_dir, self.config_file, self.config_file_suffix)
@property
def services(self):
"""Returns the list of services defined from the client optons."""
if self.service:
return [(self.module, self.service)]
return self.project_config.all_services(self.module)
def get_handler(self, module, service):
"""Loads microservice handler."""
try:
return import_attr(module, service, cwd=self.project_dir)
except AttributeError as e:
raise CwsClientError(f"Module '{module}' has no microservice {service} : {str(e)}\n")
except ModuleNotFoundError as e:
raise CwsClientError(f"The module '{module}' is not defined in {self.project_dir} : {str(e)}\n")
except Exception as e:
raise CwsClientError(f"Error {e} when loading module '{module}'\n")
def get_service_config(self, module, service, workspace=None):
"""Returns the microserrvice's configuration."""
workspace = workspace or self.workspace
return ServiceConfig(self.project_config, module, service, workspace)
class ProjectConfig:
"""Class for the project configuration file."""
def __init__(self, project_dir, file_name, file_suffix):
self.project_dir = project_dir
self.params = {}
getLogger('anyconfig').setLevel(WARNING)
# Loads project configuration file at project dir then at root if not found
self.params = self._load_config(project_dir, file_name, file_suffix)
if not self.params:
self.params = self._load_config('.', file_name, file_suffix)
# Checks results
if not self.params:
raise CwsClientError(f"Cannot find project file ({file_name + file_suffix}).\n")
if self.params.get('version') != PROJECT_CONFIG_VERSION:
raise CwsClientError(f"Wrong project file version (should be {PROJECT_CONFIG_VERSION}).\n")
def get_service_config(self, module, service, workspace):
return ServiceConfig(self, module, service, workspace)
def all_services(self, module: str = None) -> List[Tuple[str, str]]:
""" Returns the list of (module, microservice) on which the command will be executed."""
services = self.params.get('services', {})
res = []
for s in services:
if 'module' not in s or 'services' not in s:
                raise CwsClientError("Services are wrongly defined.\n")
if module and s['module'] != module:
continue
if 'services' in s:
_module = s['module']
_services = s['services']
                if isinstance(_services, str):
res.append((_module, _services))
else:
for service in _services:
res.append((_module, service))
return res
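A hypothetical `services` section and the `(module, service)` pairs `all_services` derives from it, showing that both the string form and the list form are accepted (the config below is made up, not from a real project file):

```python
# Toy version of the parsing loop in all_services().
services = [
    {"module": "app", "services": "ms"},            # single service as a string
    {"module": "admin", "services": ["ms1", "ms2"]},  # several services as a list
]
res = []
for s in services:
    _module, _services = s["module"], s["services"]
    if isinstance(_services, str):
        res.append((_module, _services))
    else:
        for service in _services:
            res.append((_module, service))
print(res)  # → [('app', 'ms'), ('admin', 'ms1'), ('admin', 'ms2')]
```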
@property
def all_commands(self):
""" Returns the list of microservices on which the command will be executed."""
return self.params.get('commands', {})
@staticmethod
def _load_config(dir, file_name, file_suffix):
"""Loads the project configuration file."""
project_dir_path = Path(dir)
project_file = project_dir_path / (file_name + file_suffix)
project_secret_file = project_dir_path / (file_name + '.secret' + file_suffix)
return anyconfig.multi_load([project_file, project_secret_file], ac_ignore_missing=True)
@staticmethod
def _get_workspace_options(options, workspace):
"""Returns the option values defined for the specific workspace or globally."""
workspaces = options.pop('workspaces', {})
workspace_options = {k: v for x in workspaces if x.pop('workspace', None) == workspace
for k, v in x.items()}
return {**options, **workspace_options}
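How `_get_workspace_options` flattens workspace-specific entries on top of the global options can be traced with a made-up options dict (keys here are illustrative, not from a real config):

```python
# Toy input mirroring the dict comprehension in _get_workspace_options above:
# entries under 'workspaces' whose 'workspace' key matches are flattened on
# top of the globals.
options = {
    "timeout": 30,
    "workspaces": [
        {"workspace": "dev", "timeout": 5},
        {"workspace": "prod", "timeout": 120},
    ],
}
workspace = "dev"
workspaces = options.pop("workspaces", {})
workspace_options = {k: v for x in workspaces if x.pop("workspace", None) == workspace
                     for k, v in x.items()}
merged = {**options, **workspace_options}
print(merged)  # → {'timeout': 5}
```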
def _get_service_options(self, services, service, workspace):
"""Returns the option values defined for the specific service and workspace or globally."""
service_options = {}
for s in services:
if s.pop('service', None) == service:
s.pop('module', None)
service_options.update(self._get_workspace_options(s, workspace))
return {**service_options}
def get_module_options(self, options_list, module, service, workspace):
"""Returns the option values defined for the specific module, service and workspace or globally."""
if type(options_list) is not list:
options_list = [options_list]
service_options = {}
module_options = {}
for options in options_list:
if 'module' not in options or options.pop('module') == module:
services = options.pop('services', {})
module_options.update(self._get_workspace_options(options, workspace))
service_options.update(self._get_service_options(services, service, workspace))
return {**module_options, **service_options}
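The precedence implied by the final merge in `get_module_options` (service-level over module-level) follows from a general Python rule: later dicts in a `{**a, **b}` merge win on key collisions. Sketched with hypothetical option values:

```python
# Hypothetical option dicts; the right-hand dict overrides on key collisions.
module_options = {"timeout": 30, "profile": "default"}
service_options = {"timeout": 60}
merged = {**module_options, **service_options}
print(merged)  # → {'timeout': 60, 'profile': 'default'}
```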
@dataclass
class ServiceConfig:
project_config: ProjectConfig
module: str
service: str
workspace: str
@property
def client_params(self):
res = asdict(self)
del res['project_config']
res['project_dir'] = self.project_config.project_dir
return res
def get_command(self, cmd_name, ms):
"""Get the command associated to this microservice."""
# Get command already added in handler
for name in ms.commands:
if name == cmd_name:
return ms.commands[name]
# Creates it from project class parameter if not already defined
cmd_class = self._command_class(cmd_name)
if cmd_class:
cmd = cmd_class(ms, name=cmd_name)
# Installs needed commands
for needed in cmd.needed_commands:
self.get_command(needed, ms)
return cmd
def _command_class(self, cmd_name):
"""Loads the command class defined by name."""
cmd_class_name = self.get_command_options(cmd_name).get('class')
if cmd_class_name:
splitted = cmd_class_name.split('.')
return import_attr('.'.join(splitted[:-1]), splitted[-1], cwd=self.project_config.project_dir)
def get_command_options(self, cmd_name):
options = deepcopy(self.project_config.all_commands.get(cmd_name, {}))
module_options = self.project_config.get_module_options(options, self.module, self.service, self.workspace)
return {**self.client_params, **module_options}
def main():
return client()
if __name__ == "__main__":
main()
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# from smac.env.multiagentenv import MultiAgentEnv
# from smac.env.starcraft2.maps import get_map_params
from ..multiagentenv import MultiAgentEnv
from ..starcraft2.maps import get_map_params
import atexit
from operator import attrgetter
from copy import deepcopy
import numpy as np
import enum
import math
import time
from absl import logging
from pysc2 import maps
from pysc2 import run_configs
from pysc2.lib import protocol
from s2clientprotocol import common_pb2 as sc_common
from s2clientprotocol import sc2api_pb2 as sc_pb
from s2clientprotocol import raw_pb2 as r_pb
from s2clientprotocol import debug_pb2 as d_pb
races = {
"R": sc_common.Random,
"P": sc_common.Protoss,
"T": sc_common.Terran,
"Z": sc_common.Zerg,
}
difficulties = {
"1": sc_pb.VeryEasy,
"2": sc_pb.Easy,
"3": sc_pb.Medium,
"4": sc_pb.MediumHard,
"5": sc_pb.Hard,
"6": sc_pb.Harder,
"7": sc_pb.VeryHard,
"8": sc_pb.CheatVision,
"9": sc_pb.CheatMoney,
"A": sc_pb.CheatInsane,
}
actions = {
"move": 16, # target: PointOrUnit
"attack": 23, # target: PointOrUnit
"stop": 4, # target: None
"heal": 386, # Unit
}
class Direction(enum.IntEnum):
NORTH = 0
SOUTH = 1
EAST = 2
WEST = 3
class StarCraftWrappedEnv(MultiAgentEnv):
"""The StarCraft II environment for decentralised multi-agent
micromanagement scenarios.
"""
def __init__(
self,
map_name="8m",
step_mul=8,
move_amount=2,
difficulty="7",
game_version=None,
seed=None,
continuing_episode=False,
obs_all_health=True,
obs_own_health=True,
obs_last_action=False,
obs_pathing_grid=False,
obs_terrain_height=False,
obs_instead_of_state=False,
obs_timestep_number=False,
state_last_action=True,
state_timestep_number=False,
reward_sparse=False,
reward_only_positive=True,
reward_death_value=10,
reward_win=200,
reward_defeat=0,
reward_negative_scale=0.5,
reward_scale=True,
reward_scale_rate=20,
replay_dir="",
replay_prefix="",
window_size_x=1920,
window_size_y=1200,
heuristic_ai=False,
debug=False,
is_replay=False
):
"""
        Create a StarCraftWrappedEnv environment.
Parameters
----------
map_name : str, optional
The name of the SC2 map to play (default is "8m"). The full list
can be found by running bin/map_list.
step_mul : int, optional
How many game steps per agent step (default is 8). None
indicates to use the default map step_mul.
move_amount : float, optional
How far away units are ordered to move per step (default is 2).
difficulty : str, optional
The difficulty of built-in computer AI bot (default is "7").
game_version : str, optional
StarCraft II game version (default is None). None indicates the
latest version.
seed : int, optional
            Random seed used during game initialisation.
continuing_episode : bool, optional
Whether to consider episodes continuing or finished after time
limit is reached (default is False).
obs_all_health : bool, optional
Agents receive the health of all units (in the sight range) as part
of observations (default is True).
obs_own_health : bool, optional
            Agents receive their own health as a part of observations (default
            is True). This flag is ignored when obs_all_health == True.
obs_last_action : bool, optional
Agents receive the last actions of all units (in the sight range)
as part of observations (default is False).
obs_pathing_grid : bool, optional
Whether observations include pathing values surrounding the agent
(default is False).
obs_terrain_height : bool, optional
Whether observations include terrain height values surrounding the
agent (default is False).
obs_instead_of_state : bool, optional
Use combination of all agents' observations as the global state
(default is False).
obs_timestep_number : bool, optional
Whether observations include the current timestep of the episode
(default is False).
state_last_action : bool, optional
Include the last actions of all agents as part of the global state
(default is True).
state_timestep_number : bool, optional
Whether the state include the current timestep of the episode
(default is False).
reward_sparse : bool, optional
            Receive 1/-1 reward for winning/losing an episode (default is
            False). The rest of the reward parameters are ignored if True.
reward_only_positive : bool, optional
Reward is always positive (default is True).
reward_death_value : float, optional
The amount of reward received for killing an enemy unit (default
is 10). This is also the negative penalty for having an allied unit
killed if reward_only_positive == False.
reward_win : float, optional
The reward for winning in an episode (default is 200).
reward_defeat : float, optional
            The reward for losing in an episode (default is 0). This value
should be nonpositive.
reward_negative_scale : float, optional
Scaling factor for negative rewards (default is 0.5). This
parameter is ignored when reward_only_positive == True.
reward_scale : bool, optional
Whether or not to scale the reward (default is True).
reward_scale_rate : float, optional
Reward scale rate (default is 20). When reward_scale == True, the
reward received by the agents is divided by (max_reward /
reward_scale_rate), where max_reward is the maximum possible
reward per episode without considering the shield regeneration
of Protoss units.
        replay_dir : str, optional
            The directory to save replays (default is an empty string). If
            empty, the replay will be saved in the Replays directory where
            StarCraft II is installed.
        replay_prefix : str, optional
            The prefix of the replay to be saved (default is an empty
            string). If empty, the name of the map will be used.
window_size_x : int, optional
The length of StarCraft II window size (default is 1920).
window_size_y: int, optional
The height of StarCraft II window size (default is 1200).
heuristic_ai: bool, optional
Whether or not to use a non-learning heuristic AI (default False).
debug: bool, optional
Log messages about observations, state, actions and rewards for
debugging purposes (default is False).
"""
# Map arguments
self.map_name = map_name
map_params = get_map_params(self.map_name)
self.n_agents = map_params["n_agents"]
self.n_enemies = map_params["n_enemies"]
self.episode_limit = map_params["limit"]
self._move_amount = move_amount
self._step_mul = step_mul
self.difficulty = difficulty
# Observations and state
self.obs_own_health = obs_own_health
self.obs_all_health = obs_all_health
self.obs_instead_of_state = obs_instead_of_state
self.obs_last_action = obs_last_action
self.obs_pathing_grid = obs_pathing_grid
self.obs_terrain_height = obs_terrain_height
self.obs_timestep_number = obs_timestep_number
self.state_last_action = state_last_action
self.state_timestep_number = state_timestep_number
if self.obs_all_health:
self.obs_own_health = True
self.n_obs_pathing = 8
self.n_obs_height = 9
# Rewards args
self.reward_sparse = reward_sparse
self.reward_only_positive = reward_only_positive
self.reward_negative_scale = reward_negative_scale
self.reward_death_value = reward_death_value
self.reward_win = reward_win
self.reward_defeat = reward_defeat
self.reward_scale = reward_scale
self.reward_scale_rate = reward_scale_rate
# Other
self.game_version = game_version
self.continuing_episode = continuing_episode
self._seed = seed
self.heuristic_ai = heuristic_ai
self.debug = debug
self.is_replay = is_replay
self.window_size = (window_size_x, window_size_y)
self.replay_dir = replay_dir
self.replay_prefix = replay_prefix
# Actions
self.n_actions_no_attack = 6
self.n_actions_move = 4
self.n_actions = self.n_actions_no_attack + self.n_enemies
# Map info
self._agent_race = map_params["a_race"]
self._bot_race = map_params["b_race"]
self.shield_bits_ally = 1 if self._agent_race == "P" else 0
self.shield_bits_enemy = 1 if self._bot_race == "P" else 0
self.unit_type_bits = map_params["unit_type_bits"]
self.map_type = map_params["map_type"]
self.max_reward = (
self.n_enemies * self.reward_death_value + self.reward_win
)
self.agents = {}
self.enemies = {}
self._episode_count = 0
self._episode_steps = 0
self._total_steps = 0
self._obs = None
self.battles_won = 0
self.battles_game = 0
self.timeouts = 0
self.force_restarts = 0
self.last_stats = None
self.death_tracker_ally = np.zeros(self.n_agents)
self.death_tracker_enemy = np.zeros(self.n_enemies)
self.previous_ally_units = None
self.previous_enemy_units = None
self.last_action = np.zeros((self.n_agents, self.n_actions))
self._min_unit_type = 0
self.marine_id = self.marauder_id = self.medivac_id = 0
self.hydralisk_id = self.zergling_id = self.baneling_id = 0
self.stalker_id = self.colossus_id = self.zealot_id = self.sentry_id = 0
self.void_ray_id = 0
self.max_distance_x = 0
self.max_distance_y = 0
self.map_x = 0
self.map_y = 0
self.terrain_height = None
self.pathing_grid = None
self._run_config = None
self._sc2_proc = None
self._controller = None
# Try to avoid leaking SC2 processes on shutdown
atexit.register(lambda: self.close())
def _launch(self):
"""Launch the StarCraft II game."""
# self._run_config = run_configs.get(version=self.game_version)
self._run_config = run_configs.get()
_map = maps.get(self.map_name)
# Setting up the interface
interface_options = sc_pb.InterfaceOptions(raw=True, score=False)
self._sc2_proc = self._run_config.start(window_size=self.window_size)
self._controller = self._sc2_proc.controller
# Request to create the game
create = sc_pb.RequestCreateGame(
local_map=sc_pb.LocalMap(
map_path=_map.path,
map_data=self._run_config.map_data(_map.path)),
realtime=False,
random_seed=self._seed)
create.player_setup.add(type=sc_pb.Participant)
create.player_setup.add(type=sc_pb.Computer, race=races[self._bot_race],
difficulty=difficulties[self.difficulty])
self._controller.create_game(create)
join = sc_pb.RequestJoinGame(race=races[self._agent_race],
options=interface_options)
self._controller.join_game(join)
game_info = self._controller.game_info()
map_info = game_info.start_raw
map_play_area_min = map_info.playable_area.p0
map_play_area_max = map_info.playable_area.p1
self.max_distance_x = map_play_area_max.x - map_play_area_min.x
self.max_distance_y = map_play_area_max.y - map_play_area_min.y
self.map_x = map_info.map_size.x
self.map_y = map_info.map_size.y
if map_info.pathing_grid.bits_per_pixel == 1:
vals = np.array(list(map_info.pathing_grid.data)).reshape(
self.map_x, int(self.map_y / 8))
            self.pathing_grid = np.transpose(np.array([
                [(b >> i) & 1 for b in row for i in range(7, -1, -1)]
                for row in vals], dtype=bool))
else:
            self.pathing_grid = np.invert(np.flip(np.transpose(np.array(
                list(map_info.pathing_grid.data), dtype=bool).reshape(
                    self.map_x, self.map_y)), axis=1))
self.terrain_height = np.flip(
np.transpose(np.array(list(map_info.terrain_height.data))
.reshape(self.map_x, self.map_y)), 1) / 255
def reset(self):
"""Reset the environment. Required after each full episode.
Returns initial observations and states.
"""
self._episode_steps = 0
if self._episode_count == 0:
# Launch StarCraft II
self._launch()
else:
self._restart()
# Information kept for counting the reward
self.death_tracker_ally = np.zeros(self.n_agents)
self.death_tracker_enemy = np.zeros(self.n_enemies)
self.previous_ally_units = None
self.previous_enemy_units = None
self.win_counted = False
self.defeat_counted = False
self.last_action = np.zeros((self.n_agents, self.n_actions))
if self.heuristic_ai:
self.heuristic_targets = [None] * self.n_agents
try:
self._obs = self._controller.observe()
self.init_units()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
if self.debug:
logging.debug("Started Episode {}"
.format(self._episode_count).center(60, "*"))
return self.get_obs(), self.get_state()
def _restart(self):
"""Restart the environment by killing all units on the map.
There is a trigger in the SC2Map file, which restarts the
episode when there are no units left.
"""
try:
self._kill_all_units()
self._controller.step(2)
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
def full_restart(self):
"""Full restart. Closes the SC2 process and launches a new one. """
self._sc2_proc.close()
self._launch()
self.force_restarts += 1
def step(self, actions):
"""A single environment step. Returns reward, terminated, info."""
if self.is_replay:
positions = []
for agent_id in range(self.n_agents):
unit = self.get_unit_by_id(agent_id)
positions.append([agent_id, unit.pos.x, unit.pos.y, unit.health])
for e_id, e_unit in self.enemies.items():
positions.append([e_id, e_unit.pos.x, e_unit.pos.y, e_unit.health])
# positions.insert(0,self._episode_steps)
print(positions, ",")
actions = [int(a) for a in actions]
self.last_action = np.eye(self.n_actions)[np.array(actions)]
# Collect individual actions
sc_actions = []
if self.debug:
logging.debug("Actions".center(60, "-"))
for a_id, action in enumerate(actions):
if not self.heuristic_ai:
agent_action = self.get_agent_action(a_id, action)
else:
agent_action = self.get_agent_action_heuristic(a_id, action)
if agent_action:
sc_actions.append(agent_action)
# Send action request
req_actions = sc_pb.RequestAction(actions=sc_actions)
try:
self._controller.actions(req_actions)
# Make step in SC2, i.e. apply actions
self._controller.step(self._step_mul)
# Observe here so that we know if the episode is over.
self._obs = self._controller.observe()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
return 0, True, {}
self._total_steps += 1
self._episode_steps += 1
# Update units
game_end_code = self.update_units()
terminated = False
reward = self.reward_battle()
info = {"battle_won": False}
if game_end_code is not None:
# Battle is over
terminated = True
self.battles_game += 1
if game_end_code == 1 and not self.win_counted:
self.battles_won += 1
self.win_counted = True
info["battle_won"] = True
if not self.reward_sparse:
reward += self.reward_win
else:
reward = 1
elif game_end_code == -1 and not self.defeat_counted:
self.defeat_counted = True
if not self.reward_sparse:
reward += self.reward_defeat
else:
reward = -1
elif self._episode_steps >= self.episode_limit:
# Episode limit reached
terminated = True
if self.continuing_episode:
info["episode_limit"] = True
self.battles_game += 1
self.timeouts += 1
if self.debug:
logging.debug("Reward = {}".format(reward).center(60, '-'))
if terminated:
self._episode_count += 1
if self.is_replay:
positions = []
for agent_id in range(self.n_agents):
unit = self.get_unit_by_id(agent_id)
positions.append([agent_id, unit.pos.x, unit.pos.y, unit.health])
for e_id, e_unit in self.enemies.items():
positions.append([e_id, e_unit.pos.x, e_unit.pos.y, e_unit.health])
# positions.insert(0,self._episode_steps)
print(positions, ",")
if self.reward_scale:
reward /= self.max_reward / self.reward_scale_rate
return 2 * reward, terminated, info
def get_agent_action(self, a_id, action):
"""Construct the action for agent a_id."""
avail_actions = self.get_avail_agent_actions(a_id)
assert avail_actions[action] == 1, \
"Agent {} cannot perform action {}".format(a_id, action)
unit = self.get_unit_by_id(a_id)
tag = unit.tag
x = unit.pos.x
y = unit.pos.y
if action == 0:
# no-op (valid only when dead)
assert unit.health == 0, "No-op only available for dead agents."
if self.debug:
logging.debug("Agent {}: Dead".format(a_id))
return None
elif action == 1:
# stop
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["stop"],
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Stop".format(a_id))
elif action == 2:
# move north
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x, y=y + self._move_amount),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move North".format(a_id))
elif action == 3:
# move south
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x, y=y - self._move_amount),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move South".format(a_id))
elif action == 4:
# move east
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x + self._move_amount, y=y),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move East".format(a_id))
elif action == 5:
# move west
cmd = r_pb.ActionRawUnitCommand(
ability_id=actions["move"],
target_world_space_pos=sc_common.Point2D(
x=x - self._move_amount, y=y),
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {}: Move West".format(a_id))
else:
# attack/heal units that are in range
target_id = action - self.n_actions_no_attack
if self.map_type in ["MMM", "GMMM"] and unit.unit_type == self.medivac_id:
target_unit = self.agents[target_id]
action_name = "heal"
else:
target_unit = self.enemies[target_id]
action_name = "attack"
action_id = actions[action_name]
target_tag = target_unit.tag
cmd = r_pb.ActionRawUnitCommand(
ability_id=action_id,
target_unit_tag=target_tag,
unit_tags=[tag],
queue_command=False)
if self.debug:
logging.debug("Agent {} {}s unit # {}".format(
a_id, action_name, target_id))
sc_action = sc_pb.Action(action_raw=r_pb.ActionRaw(unit_command=cmd))
return sc_action
def get_agent_action_heuristic(self, a_id, action):
unit = self.get_unit_by_id(a_id)
tag = unit.tag
target = self.heuristic_targets[a_id]
if unit.unit_type == self.medivac_id:
if (target is None or self.agents[target].health == 0 or
self.agents[target].health == self.agents[target].health_max):
min_dist = math.hypot(self.max_distance_x, self.max_distance_y)
min_id = -1
for al_id, al_unit in self.agents.items():
if al_unit.unit_type == self.medivac_id:
continue
if (al_unit.health != 0 and
al_unit.health != al_unit.health_max):
dist = self.distance(unit.pos.x, unit.pos.y,
al_unit.pos.x, al_unit.pos.y)
if dist < min_dist:
min_dist = dist
min_id = al_id
self.heuristic_targets[a_id] = min_id
if min_id == -1:
self.heuristic_targets[a_id] = None
return None
action_id = actions['heal']
target_tag = self.agents[self.heuristic_targets[a_id]].tag
else:
if target is None or self.enemies[target].health == 0:
min_dist = math.hypot(self.max_distance_x, self.max_distance_y)
min_id = -1
for e_id, e_unit in self.enemies.items():
if (unit.unit_type == self.marauder_id and
e_unit.unit_type == self.medivac_id):
continue
if e_unit.health > 0:
dist = self.distance(unit.pos.x, unit.pos.y,
e_unit.pos.x, e_unit.pos.y)
if dist < min_dist:
min_dist = dist
min_id = e_id
self.heuristic_targets[a_id] = min_id
action_id = actions['attack']
target_tag = self.enemies[self.heuristic_targets[a_id]].tag
cmd = r_pb.ActionRawUnitCommand(
ability_id=action_id,
target_unit_tag=target_tag,
unit_tags=[tag],
queue_command=False)
sc_action = sc_pb.Action(action_raw=r_pb.ActionRaw(unit_command=cmd))
return sc_action
def reward_battle(self):
        """Reward function when self.reward_sparse == False.
        Returns cumulative hit/shield point damage dealt to the enemy
+ reward_death_value per enemy unit killed, and, in case
self.reward_only_positive == False, - (damage dealt to ally units
+ reward_death_value per ally unit killed) * self.reward_negative_scale
"""
if self.reward_sparse:
return 0
reward = 0
delta_deaths = 0
delta_ally = 0
delta_enemy = 0
neg_scale = self.reward_negative_scale
# update deaths
for al_id, al_unit in self.agents.items():
if not self.death_tracker_ally[al_id]:
# did not die so far
prev_health = (
self.previous_ally_units[al_id].health
+ self.previous_ally_units[al_id].shield
)
if al_unit.health == 0:
# just died
self.death_tracker_ally[al_id] = 1
if not self.reward_only_positive:
delta_deaths -= self.reward_death_value * neg_scale
delta_ally += prev_health * neg_scale
else:
# still alive
delta_ally += neg_scale * (
prev_health - al_unit.health - al_unit.shield
)
for e_id, e_unit in self.enemies.items():
if not self.death_tracker_enemy[e_id]:
prev_health = (
self.previous_enemy_units[e_id].health
+ self.previous_enemy_units[e_id].shield
)
if e_unit.health == 0:
self.death_tracker_enemy[e_id] = 1
delta_deaths += self.reward_death_value
delta_enemy += prev_health
else:
delta_enemy += prev_health - e_unit.health - e_unit.shield
if self.reward_only_positive:
reward = abs(delta_enemy + delta_deaths) # shield regeneration
else:
reward = delta_enemy + delta_deaths - delta_ally
return reward
def get_total_actions(self):
"""Returns the total number of actions an agent could ever take."""
return self.n_actions
@staticmethod
def distance(x1, y1, x2, y2):
"""Distance between two points."""
return math.hypot(x2 - x1, y2 - y1)
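    # Worked example (plain Euclidean distance via math.hypot):
    # distance(0, 0, 3, 4) == math.hypot(3, 4) == 5.0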
def unit_shoot_range(self, agent_id):
"""Returns the shooting range for an agent."""
return 6
def unit_sight_range(self, agent_id):
"""Returns the sight range for an agent."""
return 9
def unit_max_cooldown(self, unit):
"""Returns the maximal cooldown for a unit."""
switcher = {
self.marine_id: 15,
self.marauder_id: 25,
self.medivac_id: 200, # max energy
self.stalker_id: 35,
self.void_ray_id: 35,
self.sentry_id: 22,
self.zealot_id: 22,
self.colossus_id: 24,
self.hydralisk_id: 10,
self.zergling_id: 11,
self.baneling_id: 1
}
return switcher.get(unit.unit_type, 15)
def save_replay(self):
"""Save a replay."""
prefix = self.replay_prefix or self.map_name
replay_dir = self.replay_dir or ""
replay_path = self._run_config.save_replay(
self._controller.save_replay(), replay_dir=replay_dir, prefix=prefix)
logging.info("Replay saved at: %s" % replay_path)
def unit_max_shield(self, unit):
"""Returns maximal shield for a given unit."""
if unit.unit_type == 74 or unit.unit_type == self.stalker_id:
return 80 # Protoss's Stalker
if unit.unit_type == 73 or unit.unit_type == self.zealot_id:
return 50 # Protoss's Zealot
if unit.unit_type == 4 or unit.unit_type == self.colossus_id:
return 150 # Protoss's Colossus
if unit.unit_type == 77 or unit.unit_type == self.sentry_id:
return 40 # Protoss's Sentry
if unit.unit_type == self.void_ray_id:
return 100 # Protoss's Void Ray
def can_move(self, unit, direction):
"""Whether a unit can move in a given direction."""
m = self._move_amount / 2
if direction == Direction.NORTH:
x, y = int(unit.pos.x), int(unit.pos.y + m)
elif direction == Direction.SOUTH:
x, y = int(unit.pos.x), int(unit.pos.y - m)
elif direction == Direction.EAST:
x, y = int(unit.pos.x + m), int(unit.pos.y)
else:
x, y = int(unit.pos.x - m), int(unit.pos.y)
if self.check_bounds(x, y) and self.pathing_grid[x, y]:
return True
return False
def get_surrounding_points(self, unit, include_self=False):
"""Returns the surrounding points of the unit in 8 directions."""
x = int(unit.pos.x)
y = int(unit.pos.y)
ma = self._move_amount
points = [
(x, y + 2 * ma),
(x, y - 2 * ma),
(x + 2 * ma, y),
(x - 2 * ma, y),
(x + ma, y + ma),
(x - ma, y - ma),
(x + ma, y - ma),
(x - ma, y + ma),
]
if include_self:
points.append((x, y))
return points
def check_bounds(self, x, y):
"""Whether a point is within the map bounds."""
return (0 <= x < self.map_x and 0 <= y < self.map_y)
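    # Example (hypothetical 32x32 map, i.e. map_x == map_y == 32):
    # check_bounds(0, 0) and check_bounds(31, 31) are True, while
    # check_bounds(32, 0) and check_bounds(0, -1) are False, since the
    # upper bounds are exclusive.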
def get_surrounding_pathing(self, unit):
"""Returns pathing values of the grid surrounding the given unit."""
points = self.get_surrounding_points(unit, include_self=False)
vals = [
self.pathing_grid[x, y] if self.check_bounds(x, y) else 1
for x, y in points
]
return vals
def get_surrounding_height(self, unit):
"""Returns height values of the grid surrounding the given unit."""
points = self.get_surrounding_points(unit, include_self=True)
vals = [
self.terrain_height[x, y] if self.check_bounds(x, y) else 1
for x, y in points
]
return vals
def get_own_feature_size(self):
nf_own = self.unit_type_bits
if self.obs_own_health:
nf_own += 1 + self.shield_bits_ally
return nf_own
def get_units_type_id(self):
self.reset()
type_ids = []
for agent_i in range(self.n_agents):
agent = self.get_unit_by_id(agent_i)
type_ids.append(self.get_unit_type_id(agent, True))
print('>>>', type_ids)
return type_ids
def get_obs_agent(self, agent_id):
"""Returns observation for agent_id.
NOTE: Agents should have access only to their local observations
during decentralised execution.
"""
unit = self.get_unit_by_id(agent_id)
nf_al = 4 + self.unit_type_bits
nf_en = 4 + self.unit_type_bits
if self.obs_all_health:
nf_al += 1 + self.shield_bits_ally
nf_en += 1 + self.shield_bits_enemy
if self.obs_last_action:
nf_al += self.n_actions
nf_own = self.unit_type_bits
if self.obs_own_health:
nf_own += 1 + self.shield_bits_ally
move_feats_len = self.n_actions_move
if self.obs_pathing_grid:
move_feats_len += self.n_obs_pathing
if self.obs_terrain_height:
move_feats_len += self.n_obs_height
move_feats = np.zeros(move_feats_len, dtype=np.float32)
enemy_feats = np.zeros((self.n_enemies, nf_en), dtype=np.float32)
ally_feats = np.zeros((self.n_agents - 1, nf_al), dtype=np.float32)
own_feats = np.zeros(nf_own, dtype=np.float32)
if unit.health > 0: # otherwise dead, return all zeros
x = unit.pos.x
y = unit.pos.y
sight_range = self.unit_sight_range(agent_id)
# Movement features
avail_actions = self.get_avail_agent_actions(agent_id)
for m in range(self.n_actions_move):
move_feats[m] = avail_actions[m + 2]
ind = self.n_actions_move
if self.obs_pathing_grid:
move_feats[
ind: ind + self.n_obs_pathing
] = self.get_surrounding_pathing(unit)
ind += self.n_obs_pathing
if self.obs_terrain_height:
move_feats[ind:] = self.get_surrounding_height(unit)
# Enemy features
for e_id, e_unit in self.enemies.items():
e_x = e_unit.pos.x
e_y = e_unit.pos.y
dist = self.distance(x, y, e_x, e_y)
if (
dist < sight_range and e_unit.health > 0
): # visible and alive
# Sight range > shoot range
enemy_feats[e_id, 0] = avail_actions[
self.n_actions_no_attack + e_id
] # available
enemy_feats[e_id, 1] = dist / sight_range # distance
enemy_feats[e_id, 2] = (
e_x - x
) / sight_range # relative X
enemy_feats[e_id, 3] = (
e_y - y
) / sight_range # relative Y
ind = 4
if self.obs_all_health:
enemy_feats[e_id, ind] = (
e_unit.health / e_unit.health_max
) # health
ind += 1
if self.shield_bits_enemy > 0:
max_shield = self.unit_max_shield(e_unit)
enemy_feats[e_id, ind] = (
e_unit.shield / max_shield
) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(e_unit, False)
enemy_feats[e_id, ind + type_id] = 1 # unit type
# Ally features
al_ids = [
al_id for al_id in range(self.n_agents) if al_id != agent_id
]
for i, al_id in enumerate(al_ids):
al_unit = self.get_unit_by_id(al_id)
al_x = al_unit.pos.x
al_y = al_unit.pos.y
dist = self.distance(x, y, al_x, al_y)
if (
dist < sight_range and al_unit.health > 0
): # visible and alive
ally_feats[i, 0] = 1 # visible
ally_feats[i, 1] = dist / sight_range # distance
ally_feats[i, 2] = (al_x - x) / sight_range # relative X
ally_feats[i, 3] = (al_y - y) / sight_range # relative Y
ind = 4
if self.obs_all_health:
ally_feats[i, ind] = (
al_unit.health / al_unit.health_max
) # health
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(al_unit)
ally_feats[i, ind] = (
al_unit.shield / max_shield
) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(al_unit, True)
ally_feats[i, ind + type_id] = 1
ind += self.unit_type_bits
if self.obs_last_action:
ally_feats[i, ind:] = self.last_action[al_id]
# Own features
ind = 0
if self.obs_own_health:
own_feats[ind] = unit.health / unit.health_max
ind += 1
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(unit)
own_feats[ind] = unit.shield / max_shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(unit, True)
own_feats[ind + type_id] = 1
agent_obs = np.concatenate(
(
move_feats.flatten(),
enemy_feats.flatten(),
ally_feats.flatten(),
own_feats.flatten(),
)
)
if self.obs_timestep_number:
agent_obs = np.append(agent_obs,
self._episode_steps / self.episode_limit)
if self.debug:
logging.debug("Obs Agent: {}".format(agent_id).center(60, "-"))
logging.debug("Avail. actions {}".format(
self.get_avail_agent_actions(agent_id)))
logging.debug("Move feats {}".format(move_feats))
logging.debug("Enemy feats {}".format(enemy_feats))
logging.debug("Ally feats {}".format(ally_feats))
logging.debug("Own feats {}".format(own_feats))
return agent_obs
def get_obs(self):
"""Returns all agent observations in a list.
NOTE: Agents should have access only to their local observations
during decentralised execution.
"""
agents_obs = [self.get_obs_agent(i) for i in range(self.n_agents)]
return agents_obs
def get_state(self):
"""Returns the global state.
        NOTE: This function should not be used during decentralised execution.
"""
if self.obs_instead_of_state:
obs_concat = np.concatenate(self.get_obs(), axis=0).astype(
np.float32
)
return obs_concat
nf_al = 4 + self.shield_bits_ally + self.unit_type_bits
nf_en = 3 + self.shield_bits_enemy + self.unit_type_bits
ally_state = np.zeros((self.n_agents, nf_al))
enemy_state = np.zeros((self.n_enemies, nf_en))
center_x = self.map_x / 2
center_y = self.map_y / 2
for al_id, al_unit in self.agents.items():
if al_unit.health > 0:
x = al_unit.pos.x
y = al_unit.pos.y
max_cd = self.unit_max_cooldown(al_unit)
ally_state[al_id, 0] = (
al_unit.health / al_unit.health_max
) # health
if (
self.map_type in ["MMM", "GMMM"]
and al_unit.unit_type == self.medivac_id
):
ally_state[al_id, 1] = al_unit.energy / max_cd # energy
else:
ally_state[al_id, 1] = (
al_unit.weapon_cooldown / max_cd
) # cooldown
ally_state[al_id, 2] = (
x - center_x
) / self.max_distance_x # relative X
ally_state[al_id, 3] = (
y - center_y
) / self.max_distance_y # relative Y
ind = 4
if self.shield_bits_ally > 0:
max_shield = self.unit_max_shield(al_unit)
ally_state[al_id, ind] = (
al_unit.shield / max_shield
) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(al_unit, True)
ally_state[al_id, ind + type_id] = 1
for e_id, e_unit in self.enemies.items():
if e_unit.health > 0:
x = e_unit.pos.x
y = e_unit.pos.y
enemy_state[e_id, 0] = (
e_unit.health / e_unit.health_max
) # health
enemy_state[e_id, 1] = (
x - center_x
) / self.max_distance_x # relative X
enemy_state[e_id, 2] = (
y - center_y
) / self.max_distance_y # relative Y
ind = 3
if self.shield_bits_enemy > 0:
max_shield = self.unit_max_shield(e_unit)
enemy_state[e_id, ind] = (
e_unit.shield / max_shield
) # shield
ind += 1
if self.unit_type_bits > 0:
type_id = self.get_unit_type_id(e_unit, False)
enemy_state[e_id, ind + type_id] = 1
state = np.append(ally_state.flatten(), enemy_state.flatten())
if self.state_last_action:
state = np.append(state, self.last_action.flatten())
if self.state_timestep_number:
state = np.append(state,
self._episode_steps / self.episode_limit)
state = state.astype(dtype=np.float32)
if self.debug:
logging.debug("STATE".center(60, "-"))
logging.debug("Ally state {}".format(ally_state))
logging.debug("Enemy state {}".format(enemy_state))
if self.state_last_action:
logging.debug("Last actions {}".format(self.last_action))
return state
def get_obs_size(self):
"""Returns the size of the observation."""
nf_al = 4 + self.unit_type_bits
nf_en = 4 + self.unit_type_bits
if self.obs_all_health:
nf_al += 1 + self.shield_bits_ally
nf_en += 1 + self.shield_bits_enemy
own_feats = self.unit_type_bits
if self.obs_own_health:
own_feats += 1 + self.shield_bits_ally
if self.obs_timestep_number:
own_feats += 1
if self.obs_last_action:
nf_al += self.n_actions
move_feats = self.n_actions_move
if self.obs_pathing_grid:
move_feats += self.n_obs_pathing
if self.obs_terrain_height:
move_feats += self.n_obs_height
enemy_feats = self.n_enemies * nf_en
ally_feats = (self.n_agents - 1) * nf_al
return move_feats + enemy_feats + ally_feats + own_feats
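    # Worked example for the default "8m" map (8 Marines per side,
    # unit_type_bits == 0, no shields, default observation flags):
    #   nf_al = nf_en = 4 + 1 (health) = 5
    #   move_feats = 4, enemy_feats = 8 * 5 = 40,
    #   ally_feats = 7 * 5 = 35, own_feats = 1
    #   total observation size = 4 + 40 + 35 + 1 = 80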
def get_state_size(self):
"""Returns the size of the global state."""
if self.obs_instead_of_state:
return self.get_obs_size() * self.n_agents
nf_al = 4 + self.shield_bits_ally + self.unit_type_bits
nf_en = 3 + self.shield_bits_enemy + self.unit_type_bits
enemy_state = self.n_enemies * nf_en
ally_state = self.n_agents * nf_al
size = enemy_state + ally_state
if self.state_last_action:
size += self.n_agents * self.n_actions
if self.state_timestep_number:
size += 1
return size
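    # Worked example for the default "8m" map (n_agents == n_enemies == 8,
    # no shields, unit_type_bits == 0, n_actions == 6 + 8 == 14):
    #   ally_state  = 8 * (4 + 0 + 0) = 32
    #   enemy_state = 8 * (3 + 0 + 0) = 24
    #   last action = 8 * 14 = 112 (state_last_action is True by default)
    #   total state size = 32 + 24 + 112 = 168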
def get_unit_type_id(self, unit, ally):
"""Returns the ID of unit type in the given scenario."""
if ally: # use new SC2 unit types
type_id = unit.unit_type - self._min_unit_type
else: # use default SC2 unit types
if self.map_type == "stalkers_and_zealots":
# id(Stalker) = 74, id(Zealot) = 73
type_id = unit.unit_type - 73
if self.map_type == "bane_vs_sz":
# id(Stalker) = 74, id(Zealot) = 73
type_id = unit.unit_type - 73
if self.map_type == "stalkers_and_zealots_vs_zb":
# id(Stalker) = 74, id() =
if unit.unit_type == 9:
type_id = 0
else:
type_id = 1
elif self.map_type == "colossi_stalkers_zealots":
# id(Stalker) = 74, id(Zealot) = 73, id(Colossus) = 4
if unit.unit_type == 4:
type_id = 0
elif unit.unit_type == 74:
type_id = 1
else:
type_id = 2
elif self.map_type == "stalkers_and_sentries":
# id(Stalker) = 74, id(Sentry) = 77
if unit.unit_type == 77:
type_id = 1
elif unit.unit_type == 74:
type_id = 0
elif self.map_type == "zv_mb":
                # id(Battlecruiser) = 57, id(Marine) = 48
if unit.unit_type == 57:
type_id = 1
elif unit.unit_type == 48:
type_id = 0
elif self.map_type == "bane":
if unit.unit_type == 9:
type_id = 0
else:
type_id = 1
elif self.map_type == "MMM":
if unit.unit_type == 51:
type_id = 0
elif unit.unit_type == 48:
type_id = 1
else:
type_id = 2
elif self.map_type == "GMMM":
if unit.unit_type == 51:
type_id = 0
elif unit.unit_type == 48:
type_id = 1
elif unit.unit_type == 54:
type_id = 2
else:
type_id = 3
return type_id
def get_avail_agent_actions(self, agent_id):
"""Returns the available actions for agent_id."""
unit = self.get_unit_by_id(agent_id)
if unit.health > 0:
# cannot choose no-op when alive
avail_actions = [0] * self.n_actions
# stop should be allowed
avail_actions[1] = 1
# see if we can move
if self.can_move(unit, Direction.NORTH):
avail_actions[2] = 1
if self.can_move(unit, Direction.SOUTH):
avail_actions[3] = 1
if self.can_move(unit, Direction.EAST):
avail_actions[4] = 1
if self.can_move(unit, Direction.WEST):
avail_actions[5] = 1
            # Can only attack units that are alive and within shooting range
            shoot_range = self.unit_shoot_range(agent_id)
            target_items = self.enemies.items()
            if self.map_type in ["MMM", "GMMM"] and unit.unit_type == self.medivac_id:
                # Medivacs cannot heal themselves or other flying units
                target_items = [
                    (t_id, t_unit)
                    for (t_id, t_unit) in self.agents.items()
                    if t_unit.unit_type != self.medivac_id
                ]
for t_id, t_unit in target_items:
if t_unit.health > 0:
dist = self.distance(
unit.pos.x, unit.pos.y, t_unit.pos.x, t_unit.pos.y
)
if dist <= shoot_range:
avail_actions[t_id + self.n_actions_no_attack] = 1
return avail_actions
else:
# only no-op allowed
return [1] + [0] * (self.n_actions - 1)
def get_avail_actions(self):
"""Returns the available actions of all agents in a list."""
avail_actions = []
for agent_id in range(self.n_agents):
avail_agent = self.get_avail_agent_actions(agent_id)
avail_actions.append(avail_agent)
return avail_actions
def close(self):
"""Close StarCraft II."""
if self._sc2_proc:
self._sc2_proc.close()
def seed(self):
"""Returns the random seed used by the environment."""
return self._seed
def render(self):
"""Not implemented."""
pass
def _kill_all_units(self):
"""Kill all units on the map."""
units_alive = [
unit.tag for unit in self.agents.values() if unit.health > 0
] + [unit.tag for unit in self.enemies.values() if unit.health > 0]
debug_command = [
d_pb.DebugCommand(kill_unit=d_pb.DebugKillUnit(tag=units_alive))
]
self._controller.debug(debug_command)
def init_units(self):
"""Initialise the units."""
while True:
# Sometimes not all units have yet been created by SC2
self.agents = {}
self.enemies = {}
ally_units = [
unit
for unit in self._obs.observation.raw_data.units
if unit.owner == 1
]
ally_units_sorted = sorted(
ally_units,
key=attrgetter("unit_type", "pos.x", "pos.y"),
reverse=False,
)
for i in range(len(ally_units_sorted)):
self.agents[i] = ally_units_sorted[i]
if self.debug:
logging.debug(
"Unit {} is {}, x = {}, y = {}".format(
len(self.agents),
self.agents[i].unit_type,
self.agents[i].pos.x,
self.agents[i].pos.y,
)
)
for unit in self._obs.observation.raw_data.units:
if unit.owner == 2:
self.enemies[len(self.enemies)] = unit
if self._episode_count == 0:
self.max_reward += unit.health_max + unit.shield_max
if self._episode_count == 0:
min_unit_type = min(
unit.unit_type for unit in self.agents.values()
)
self._init_ally_unit_types(min_unit_type)
all_agents_created = (len(self.agents) == self.n_agents)
all_enemies_created = (len(self.enemies) == self.n_enemies)
if all_agents_created and all_enemies_created: # all good
return
try:
self._controller.step(1)
self._obs = self._controller.observe()
except (protocol.ProtocolError, protocol.ConnectionError):
self.full_restart()
self.reset()
def update_units(self):
"""Update units after an environment step.
This function assumes that self._obs is up-to-date.
"""
n_ally_alive = 0
n_enemy_alive = 0
# Store previous state
self.previous_ally_units = deepcopy(self.agents)
self.previous_enemy_units = deepcopy(self.enemies)
for al_id, al_unit in self.agents.items():
updated = False
for unit in self._obs.observation.raw_data.units:
if al_unit.tag == unit.tag:
self.agents[al_id] = unit
updated = True
n_ally_alive += 1
break
if not updated: # dead
al_unit.health = 0
for e_id, e_unit in self.enemies.items():
updated = False
for unit in self._obs.observation.raw_data.units:
if e_unit.tag == unit.tag:
self.enemies[e_id] = unit
updated = True
n_enemy_alive += 1
break
if not updated: # dead
e_unit.health = 0
if (n_ally_alive == 0 and n_enemy_alive > 0
or self.only_medivac_left(ally=True)):
return -1 # lost
if (n_ally_alive > 0 and n_enemy_alive == 0
or self.only_medivac_left(ally=False)):
return 1 # won
if n_ally_alive == 0 and n_enemy_alive == 0:
return 0
return None
def _init_ally_unit_types(self, min_unit_type):
"""Initialise ally unit types. Should be called once from the
init_units function.
"""
self._min_unit_type = min_unit_type
if self.map_type == "marines":
self.marine_id = min_unit_type
elif self.map_type == "stalkers_and_zealots":
self.stalker_id = min_unit_type
self.zealot_id = min_unit_type + 1
elif self.map_type == "stalkers_and_zealots_vs_zb":
self.stalker_id = min_unit_type
self.zealot_id = min_unit_type + 1
elif self.map_type == "stalkers_and_sentries":
self.stalker_id = min_unit_type + 1
self.sentry_id = min_unit_type
elif self.map_type == "colossi_stalkers_zealots":
self.colossus_id = min_unit_type
self.stalker_id = min_unit_type + 1
self.zealot_id = min_unit_type + 2
elif self.map_type == "zv_mb":
self.void_ray_id = min_unit_type
self.zealot_id = min_unit_type + 1
elif self.map_type == "MMM":
self.marauder_id = min_unit_type
self.marine_id = min_unit_type + 1
self.medivac_id = min_unit_type + 2
elif self.map_type == 'GMMM':
self.marauder_id = min_unit_type
self.marine_id = min_unit_type + 1
self.medivac_id = min_unit_type + 2
self.ghost_id = min_unit_type + 3
elif self.map_type == "zealots":
self.zealot_id = min_unit_type
elif self.map_type == "hydralisks":
self.hydralisk_id = min_unit_type
elif self.map_type == "stalkers":
self.stalker_id = min_unit_type
elif self.map_type == "colossus":
self.colossus_id = min_unit_type
elif self.map_type == "bane":
self.baneling_id = min_unit_type
self.zergling_id = min_unit_type + 1
elif self.map_type == "bane_vs_sz":
self.baneling_id = min_unit_type
self.zergling_id = min_unit_type + 1
def only_medivac_left(self, ally):
"""Check if only Medivac units are left."""
if self.map_type not in ["MMM", "GMMM"]:
return False
if ally:
units_alive = [
a
for a in self.agents.values()
if (a.health > 0 and a.unit_type != self.medivac_id)
]
if len(units_alive) == 0:
return True
return False
else:
units_alive = [
a
for a in self.enemies.values()
if (a.health > 0 and a.unit_type != self.medivac_id)
]
if len(units_alive) == 1 and units_alive[0].unit_type == 54:
return True
return False
def get_unit_by_id(self, a_id):
"""Get unit by ID."""
return self.agents[a_id]
def get_stats(self):
stats = {
"battles_won": self.battles_won,
"battles_game": self.battles_game,
"battles_draw": self.timeouts,
"win_rate": self.battles_won / self.battles_game,
"timeouts": self.timeouts,
"restarts": self.force_restarts,
}
return stats
| [
"operator.attrgetter",
"numpy.eye",
"s2clientprotocol.raw_pb2.ActionRawUnitCommand",
"copy.deepcopy",
"s2clientprotocol.sc2api_pb2.InterfaceOptions",
"s2clientprotocol.sc2api_pb2.RequestJoinGame",
"absl.logging.info",
"time.sleep",
"s2clientprotocol.raw_pb2.ActionRaw",
"numpy.append",
"numpy.arr... | [((7454, 7469), 'time.sleep', 'time.sleep', (['(100)'], {}), '(100)\n', (7464, 7469), False, 'import math, time\n'), ((10200, 10223), 'numpy.zeros', 'np.zeros', (['self.n_agents'], {}), '(self.n_agents)\n', (10208, 10223), True, 'import numpy as np\n'), ((10259, 10283), 'numpy.zeros', 'np.zeros', (['self.n_enemies'], {}), '(self.n_enemies)\n', (10267, 10283), True, 'import numpy as np\n'), ((10392, 10433), 'numpy.zeros', 'np.zeros', (['(self.n_agents, self.n_actions)'], {}), '((self.n_agents, self.n_actions))\n', (10400, 10433), True, 'import numpy as np\n'), ((11251, 11268), 'pysc2.run_configs.get', 'run_configs.get', ([], {}), '()\n', (11266, 11268), False, 'from pysc2 import run_configs\n'), ((11284, 11307), 'pysc2.maps.get', 'maps.get', (['self.map_name'], {}), '(self.map_name)\n', (11292, 11307), False, 'from pysc2 import maps\n'), ((11372, 11417), 's2clientprotocol.sc2api_pb2.InterfaceOptions', 'sc_pb.InterfaceOptions', ([], {'raw': '(True)', 'score': '(False)'}), '(raw=True, score=False)\n', (11394, 11417), True, 'from s2clientprotocol import sc2api_pb2 as sc_pb\n'), ((12103, 12181), 's2clientprotocol.sc2api_pb2.RequestJoinGame', 'sc_pb.RequestJoinGame', ([], {'race': 'races[self._agent_race]', 'options': 'interface_options'}), '(race=races[self._agent_race], options=interface_options)\n', (12124, 12181), True, 'from s2clientprotocol import sc2api_pb2 as sc_pb\n'), ((13834, 13857), 'numpy.zeros', 'np.zeros', (['self.n_agents'], {}), '(self.n_agents)\n', (13842, 13857), True, 'import numpy as np\n'), ((13893, 13917), 'numpy.zeros', 'np.zeros', (['self.n_enemies'], {}), '(self.n_enemies)\n', (13901, 13917), True, 'import numpy as np\n'), ((14096, 14137), 'numpy.zeros', 'np.zeros', (['(self.n_agents, self.n_actions)'], {}), '((self.n_agents, self.n_actions))\n', (14104, 14137), True, 'import numpy as np\n'), ((16410, 16449), 's2clientprotocol.sc2api_pb2.RequestAction', 'sc_pb.RequestAction', ([], {'actions': 'sc_actions'}), 
'(actions=sc_actions)\n', (16429, 16449), True, 'from s2clientprotocol import sc2api_pb2 as sc_pb\n'), ((24693, 24810), 's2clientprotocol.raw_pb2.ActionRawUnitCommand', 'r_pb.ActionRawUnitCommand', ([], {'ability_id': 'action_id', 'target_unit_tag': 'target_tag', 'unit_tags': '[tag]', 'queue_command': '(False)'}), '(ability_id=action_id, target_unit_tag=target_tag,\n unit_tags=[tag], queue_command=False)\n', (24718, 24810), True, 'from s2clientprotocol import raw_pb2 as r_pb\n'), ((27446, 27474), 'math.hypot', 'math.hypot', (['(x2 - x1)', '(y2 - y1)'], {}), '(x2 - x1, y2 - y1)\n', (27456, 27474), False, 'import math, time\n'), ((28551, 28600), 'absl.logging.info', 'logging.info', (["('Replay saved at: %s' % replay_path)"], {}), "('Replay saved at: %s' % replay_path)\n", (28563, 28600), False, 'from absl import logging\n'), ((32616, 32658), 'numpy.zeros', 'np.zeros', (['move_feats_len'], {'dtype': 'np.float32'}), '(move_feats_len, dtype=np.float32)\n', (32624, 32658), True, 'import numpy as np\n'), ((32681, 32732), 'numpy.zeros', 'np.zeros', (['(self.n_enemies, nf_en)'], {'dtype': 'np.float32'}), '((self.n_enemies, nf_en), dtype=np.float32)\n', (32689, 32732), True, 'import numpy as np\n'), ((32754, 32808), 'numpy.zeros', 'np.zeros', (['(self.n_agents - 1, nf_al)'], {'dtype': 'np.float32'}), '((self.n_agents - 1, nf_al), dtype=np.float32)\n', (32762, 32808), True, 'import numpy as np\n'), ((32829, 32863), 'numpy.zeros', 'np.zeros', (['nf_own'], {'dtype': 'np.float32'}), '(nf_own, dtype=np.float32)\n', (32837, 32863), True, 'import numpy as np\n'), ((39210, 39242), 'numpy.zeros', 'np.zeros', (['(self.n_agents, nf_al)'], {}), '((self.n_agents, nf_al))\n', (39218, 39242), True, 'import numpy as np\n'), ((39265, 39298), 'numpy.zeros', 'np.zeros', (['(self.n_enemies, nf_en)'], {}), '((self.n_enemies, nf_en))\n', (39273, 39298), True, 'import numpy as np\n'), ((51741, 51762), 'copy.deepcopy', 'deepcopy', (['self.agents'], {}), '(self.agents)\n', (51749, 51762), False, 
'from copy import deepcopy\n'), ((51799, 51821), 'copy.deepcopy', 'deepcopy', (['self.enemies'], {}), '(self.enemies)\n', (51807, 51821), False, 'from copy import deepcopy\n'), ((15851, 15873), 'numpy.eye', 'np.eye', (['self.n_actions'], {}), '(self.n_actions)\n', (15857, 15873), True, 'import numpy as np\n'), ((15874, 15891), 'numpy.array', 'np.array', (['actions'], {}), '(actions)\n', (15882, 15891), True, 'import numpy as np\n'), ((37839, 37901), 'numpy.append', 'np.append', (['agent_obs', '(self._episode_steps / self.episode_limit)'], {}), '(agent_obs, self._episode_steps / self.episode_limit)\n', (37848, 37901), True, 'import numpy as np\n'), ((42266, 42324), 'numpy.append', 'np.append', (['state', '(self._episode_steps / self.episode_limit)'], {}), '(state, self._episode_steps / self.episode_limit)\n', (42275, 42324), True, 'import numpy as np\n'), ((19662, 19757), 's2clientprotocol.raw_pb2.ActionRawUnitCommand', 'r_pb.ActionRawUnitCommand', ([], {'ability_id': "actions['stop']", 'unit_tags': '[tag]', 'queue_command': '(False)'}), "(ability_id=actions['stop'], unit_tags=[tag],\n queue_command=False)\n", (19687, 19757), True, 'from s2clientprotocol import raw_pb2 as r_pb\n'), ((22425, 22457), 's2clientprotocol.raw_pb2.ActionRaw', 'r_pb.ActionRaw', ([], {'unit_command': 'cmd'}), '(unit_command=cmd)\n', (22439, 22457), True, 'from s2clientprotocol import raw_pb2 as r_pb\n'), ((22877, 22929), 'math.hypot', 'math.hypot', (['self.max_distance_x', 'self.max_distance_y'], {}), '(self.max_distance_x, self.max_distance_y)\n', (22887, 22929), False, 'import math, time\n'), ((23895, 23947), 'math.hypot', 'math.hypot', (['self.max_distance_x', 'self.max_distance_y'], {}), '(self.max_distance_x, self.max_distance_y)\n', (23905, 23947), False, 'import math, time\n'), ((24901, 24933), 's2clientprotocol.raw_pb2.ActionRaw', 'r_pb.ActionRaw', ([], {'unit_command': 'cmd'}), '(unit_command=cmd)\n', (24915, 24933), True, 'from s2clientprotocol import raw_pb2 as r_pb\n'), ((49351, 
49386), 's2clientprotocol.debug_pb2.DebugKillUnit', 'd_pb.DebugKillUnit', ([], {'tag': 'units_alive'}), '(tag=units_alive)\n', (49369, 49386), True, 'from s2clientprotocol import debug_pb2 as d_pb\n'), ((49904, 49945), 'operator.attrgetter', 'attrgetter', (['"""unit_type"""', '"""pos.x"""', '"""pos.y"""'], {}), "('unit_type', 'pos.x', 'pos.y')\n", (49914, 49945), False, 'from operator import attrgetter\n'), ((20071, 20118), 's2clientprotocol.common_pb2.Point2D', 'sc_common.Point2D', ([], {'x': 'x', 'y': '(y + self._move_amount)'}), '(x=x, y=y + self._move_amount)\n', (20088, 20118), True, 'from s2clientprotocol import common_pb2 as sc_common\n'), ((20485, 20532), 's2clientprotocol.common_pb2.Point2D', 'sc_common.Point2D', ([], {'x': 'x', 'y': '(y - self._move_amount)'}), '(x=x, y=y - self._move_amount)\n', (20502, 20532), True, 'from s2clientprotocol import common_pb2 as sc_common\n'), ((22059, 22176), 's2clientprotocol.raw_pb2.ActionRawUnitCommand', 'r_pb.ActionRawUnitCommand', ([], {'ability_id': 'action_id', 'target_unit_tag': 'target_tag', 'unit_tags': '[tag]', 'queue_command': '(False)'}), '(ability_id=action_id, target_unit_tag=target_tag,\n unit_tags=[tag], queue_command=False)\n', (22084, 22176), True, 'from s2clientprotocol import raw_pb2 as r_pb\n'), ((20898, 20945), 's2clientprotocol.common_pb2.Point2D', 'sc_common.Point2D', ([], {'x': '(x + self._move_amount)', 'y': 'y'}), '(x=x + self._move_amount, y=y)\n', (20915, 20945), True, 'from s2clientprotocol import common_pb2 as sc_common\n'), ((21310, 21357), 's2clientprotocol.common_pb2.Point2D', 'sc_common.Point2D', ([], {'x': '(x - self._move_amount)', 'y': 'y'}), '(x=x - self._move_amount, y=y)\n', (21327, 21357), True, 'from s2clientprotocol import common_pb2 as sc_common\n')] |
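The availability-mask pattern used by `get_avail_agent_actions` above (no-op only when dead, stop always legal, moves, then one attack slot per in-range target) can be sketched standalone. This is a minimal, dependency-free illustration; the names (`n_actions`, `target_dists`, etc.) are illustrative stand-ins, not the environment's real API, and the move checks are assumed always legal here:

```python
def avail_action_mask(alive, n_actions, n_actions_no_attack,
                      shoot_range, target_dists):
    """Return a 0/1 mask over a discrete action space.

    Index 0 is no-op (only when dead), 1 is stop, 2..n_actions_no_attack-1
    are moves, and the remaining slots are per-target attack actions.
    """
    if not alive:
        return [1] + [0] * (n_actions - 1)  # only no-op when dead
    mask = [0] * n_actions
    mask[1] = 1                              # stop is always allowed
    for i in range(2, n_actions_no_attack):
        mask[i] = 1                          # assume all moves are legal
    for t_id, dist in target_dists.items():
        if dist is not None and dist <= shoot_range:
            mask[n_actions_no_attack + t_id] = 1
    return mask

# Two enemies: one in range (dist 4), one out of range (dist 9).
mask = avail_action_mask(alive=True, n_actions=8, n_actions_no_attack=6,
                         shoot_range=6, target_dists={0: 4, 1: 9})
print(mask)  # [0, 1, 1, 1, 1, 1, 1, 0]
```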
import gym.wrappers
from nn.mlp import MLP
import pickle
def test_cartpole(nn, file):
global observation
nn.load(file)
for _ in range(500):
env.render()
action = nn.forward(observation)
observation, reward, done, info = env.step(round(action.item()))
if done:
break
def save_model(nn, filename):
with open(filename, 'wb') as output:
pickle.dump(nn, output)
if __name__ == '__main__':
env = gym.make('CartPole-v1')
env.seed(123)
# env = gym.wrappers.Monitor(env, 'cartpole', video_callable=lambda episode_id: True, force=True)
observation = env.reset()
nn = MLP(4, 2, 1)
test_cartpole(nn, '../../../models/cartpole/cartpole12-27-2019_20-29_NN=MLPIndividual_POPSIZE=100_GEN'
'=20_PMUTATION_0.4_PCROSSOVER_0.9.npy')
# save_model(nn, "09-09-2019_17-37_POPSIZE=100_GEN=20_PMUTATION_0.4_PCROSSOVER_0.9.pkl")
env.close()
| [
"nn.mlp.MLP",
"pickle.dump"
] | [((653, 665), 'nn.mlp.MLP', 'MLP', (['(4)', '(2)', '(1)'], {}), '(4, 2, 1)\n', (656, 665), False, 'from nn.mlp import MLP\n'), ((406, 429), 'pickle.dump', 'pickle.dump', (['nn', 'output'], {}), '(nn, output)\n', (417, 429), False, 'import pickle\n')] |
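The `save_model` helper above is plain `pickle` serialisation. A self-contained round-trip sketch follows; `SimpleNet` is a hypothetical stand-in for the repo's `MLP`, used only so the example runs without the `nn` package:

```python
import os
import pickle
import tempfile

class SimpleNet:
    """Toy stand-in for a pickled model object."""
    def __init__(self, weights):
        self.weights = weights

    def forward(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))

net = SimpleNet([0.5, -1.0, 2.0])
path = os.path.join(tempfile.mkdtemp(), "net.pkl")
with open(path, "wb") as f:
    pickle.dump(net, f)          # same pattern as save_model()
with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored.forward([1, 1, 1]))  # 1.5
```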
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
import codecs
import os
import re
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath(".."))
META_PATH = os.path.join("..", "mutatest", "__init__.py")
HERE = os.path.abspath(os.path.dirname(__file__))
# determine if on readthedocs.org
ON_RTD = os.getenv("READTHEDOCS") == "True"
def read(*parts):
"""
    Build an absolute path from *parts* and return the contents of the
resulting file. Assume UTF-8 encoding.
"""
with codecs.open(os.path.join(HERE, *parts), "rb", "utf-8") as f:
return f.read()
META_FILE = read(META_PATH)
def find_meta(meta):
"""
Extract __*meta*__ from META_FILE.
"""
meta_match = re.search(r"^__{meta}__ = ['\"]([^'\"]*)['\"]".format(meta=meta), META_FILE, re.M)
if meta_match:
return meta_match.group(1)
raise RuntimeError("Unable to find __{meta}__ string.".format(meta=meta))
# -- Project information -----------------------------------------------------
project = "Mutatest"
copyright = find_meta("copyright")
author = find_meta("author")
release = find_meta("version")
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
intersphinx_mapping = {
"python": ("https://docs.python.org/3/", None),
}
pygments_style = "sphinx"
# -- Options for HTML output -------------------------------------------------
# for local builds, not needed on RTD platform
if not ON_RTD:
import sphinx_rtd_theme
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
| [
"sphinx_rtd_theme.get_html_theme_path",
"os.getenv",
"os.path.join",
"os.path.dirname",
"os.path.abspath"
] | [((653, 698), 'os.path.join', 'os.path.join', (['""".."""', '"""mutatest"""', '"""__init__.py"""'], {}), "('..', 'mutatest', '__init__.py')\n", (665, 698), False, 'import os\n'), ((616, 637), 'os.path.abspath', 'os.path.abspath', (['""".."""'], {}), "('..')\n", (631, 637), False, 'import os\n'), ((722, 747), 'os.path.dirname', 'os.path.dirname', (['__file__'], {}), '(__file__)\n', (737, 747), False, 'import os\n'), ((793, 817), 'os.getenv', 'os.getenv', (['"""READTHEDOCS"""'], {}), "('READTHEDOCS')\n", (802, 817), False, 'import os\n'), ((2714, 2752), 'sphinx_rtd_theme.get_html_theme_path', 'sphinx_rtd_theme.get_html_theme_path', ([], {}), '()\n', (2750, 2752), False, 'import sphinx_rtd_theme\n'), ((1002, 1028), 'os.path.join', 'os.path.join', (['HERE', '*parts'], {}), '(HERE, *parts)\n', (1014, 1028), False, 'import os\n')] |
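The `find_meta` function in the conf.py above pulls dunder metadata out of a module with a multiline regex. The same pattern against an in-memory file body (the metadata values here are made up for illustration):

```python
import re

META_FILE = '''__version__ = "1.2.0"
__author__ = 'Jane'
'''

def find_meta(meta):
    """Extract __*meta*__ from META_FILE."""
    meta_match = re.search(
        r"^__{meta}__ = ['\"]([^'\"]*)['\"]".format(meta=meta),
        META_FILE, re.M,
    )
    if meta_match:
        return meta_match.group(1)
    raise RuntimeError("Unable to find __{meta}__ string.".format(meta=meta))

print(find_meta("version"))  # 1.2.0
print(find_meta("author"))   # Jane
```

The `re.M` flag makes `^` anchor at every line start, so each dunder assignment is matched wherever it appears in the file.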
#!/usr/bin/env python3
from sage.all import ZZ, Combinations
from dissect.traits.trait_interface import compute_results
from dissect.utils.custom_curve import CustomCurve
from dissect.utils.json_handler import FLOAT_PRECISION
def i04_curve_function(curve: CustomCurve, weight):
"""Computes the number of curve points whose x-coord has the given Hamming weight"""
bit_length = ZZ(curve.cardinality).nbits()
E = curve.EC
x_coord_count = 0
combination_list = Combinations(range(bit_length), weight).list()
for combination in combination_list:
binary = "0" * bit_length
for bit in combination:
binary = binary[:bit] + "1" + binary[bit + 1 :]
x_coord = ZZ("0b" + binary)
if E.is_x_coord(x_coord):
x_coord_count += 1
expected = len(combination_list) // 2
ratio = expected / x_coord_count
curve_results = {
"x_coord_count": x_coord_count,
"expected": expected,
"ratio": round(ratio, FLOAT_PRECISION),
}
return curve_results
def compute_i04_results(curve_list, desc="", verbose=False):
compute_results(curve_list, "i04", i04_curve_function, desc=desc, verbose=verbose)
| [
"dissect.traits.trait_interface.compute_results",
"sage.all.ZZ"
] | [((1112, 1199), 'dissect.traits.trait_interface.compute_results', 'compute_results', (['curve_list', '"""i04"""', 'i04_curve_function'], {'desc': 'desc', 'verbose': 'verbose'}), "(curve_list, 'i04', i04_curve_function, desc=desc, verbose=\n verbose)\n", (1127, 1199), False, 'from dissect.traits.trait_interface import compute_results\n'), ((712, 729), 'sage.all.ZZ', 'ZZ', (["('0b' + binary)"], {}), "('0b' + binary)\n", (714, 729), False, 'from sage.all import ZZ, Combinations\n'), ((388, 409), 'sage.all.ZZ', 'ZZ', (['curve.cardinality'], {}), '(curve.cardinality)\n', (390, 409), False, 'from sage.all import ZZ, Combinations\n')] |
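`i04_curve_function` above enumerates every x-coordinate of a fixed Hamming weight via Sage's `Combinations`. The same enumeration can be done with the standard library alone; this sketch builds the integers directly from bit positions instead of string slicing:

```python
from itertools import combinations
from math import comb

def fixed_weight_values(bit_length, weight):
    """All integers below 2**bit_length whose binary weight equals `weight`."""
    values = []
    for positions in combinations(range(bit_length), weight):
        values.append(sum(1 << b for b in positions))
    return values

vals = fixed_weight_values(4, 2)
print(sorted(vals))               # [3, 5, 6, 9, 10, 12]
print(len(vals) == comb(4, 2))   # True: C(4, 2) = 6 patterns
```

As in the trait, roughly half of these candidates are expected to be valid x-coordinates on a random curve, which is why the trait compares the count against `len(combination_list) // 2`.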
import numbers
from typing import Any, Dict, List, Optional, Sequence, Tuple, Union
import numpy as np
import torch
from PIL import Image, ImageOps, ImageEnhance
from typing_extensions import Literal
try:
import accimage
except ImportError:
accimage = None
@torch.jit.unused
def _is_pil_image(img: Any) -> bool:
if accimage is not None:
return isinstance(img, (Image.Image, accimage.Image))
else:
return isinstance(img, Image.Image)
@torch.jit.unused
def get_image_size(img: Any) -> List[int]:
if _is_pil_image(img):
return list(img.size)
raise TypeError(f"Unexpected type {type(img)}")
@torch.jit.unused
def get_image_num_channels(img: Any) -> int:
if _is_pil_image(img):
return 1 if img.mode == "L" else 3
raise TypeError(f"Unexpected type {type(img)}")
@torch.jit.unused
def hflip(img: Image.Image) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return img.transpose(Image.FLIP_LEFT_RIGHT)
@torch.jit.unused
def vflip(img: Image.Image) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return img.transpose(Image.FLIP_TOP_BOTTOM)
@torch.jit.unused
def adjust_brightness(img: Image.Image, brightness_factor: float) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
enhancer = ImageEnhance.Brightness(img)
img = enhancer.enhance(brightness_factor)
return img
@torch.jit.unused
def adjust_contrast(img: Image.Image, contrast_factor: float) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
enhancer = ImageEnhance.Contrast(img)
img = enhancer.enhance(contrast_factor)
return img
@torch.jit.unused
def adjust_saturation(img: Image.Image, saturation_factor: float) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
enhancer = ImageEnhance.Color(img)
img = enhancer.enhance(saturation_factor)
return img
@torch.jit.unused
def adjust_hue(img: Image.Image, hue_factor: float) -> Image.Image:
if not (-0.5 <= hue_factor <= 0.5):
raise ValueError(f"hue_factor ({hue_factor}) is not in [-0.5, 0.5].")
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
input_mode = img.mode
if input_mode in {"L", "1", "I", "F"}:
return img
h, s, v = img.convert("HSV").split()
np_h = np.array(h, dtype=np.uint8)
# uint8 addition take cares of rotation across boundaries
with np.errstate(over="ignore"):
np_h += np.uint8(hue_factor * 255)
h = Image.fromarray(np_h, "L")
img = Image.merge("HSV", (h, s, v)).convert(input_mode)
return img
@torch.jit.unused
def adjust_gamma(
img: Image.Image,
gamma: float,
gain: float = 1.0,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
if gamma < 0:
raise ValueError("Gamma should be a non-negative real number")
input_mode = img.mode
img = img.convert("RGB")
gamma_map = [int((255 + 1 - 1e-3) * gain * pow(ele / 255.0, gamma)) for ele in range(256)] * 3
img = img.point(gamma_map) # use PIL's point-function to accelerate this part
img = img.convert(input_mode)
return img
@torch.jit.unused
def pad(
img: Image.Image,
padding: Union[int, List[int], Tuple[int, ...]],
fill: Optional[Union[float, List[float], Tuple[float, ...]]] = 0,
padding_mode: Literal["constant", "edge", "reflect", "symmetric"] = "constant",
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
if not isinstance(padding, (numbers.Number, tuple, list)):
raise TypeError("Got inappropriate padding arg")
if not isinstance(fill, (numbers.Number, str, tuple)):
raise TypeError("Got inappropriate fill arg")
if not isinstance(padding_mode, str):
raise TypeError("Got inappropriate padding_mode arg")
if isinstance(padding, list):
padding = tuple(padding)
if isinstance(padding, tuple) and len(padding) not in [1, 2, 4]:
raise ValueError(f"Padding must be an int or a 1, 2, or 4 element tuple, not a {len(padding)} element tuple")
if isinstance(padding, tuple) and len(padding) == 1:
# Compatibility with `functional_tensor.pad`
padding = padding[0]
if padding_mode not in ["constant", "edge", "reflect", "symmetric"]:
raise ValueError("Padding mode should be either constant, edge, reflect or symmetric")
if padding_mode == "constant":
opts = _parse_fill(fill, img, name="fill")
if img.mode == "P":
palette = img.getpalette()
image = ImageOps.expand(img, border=padding, **opts)
image.putpalette(palette)
return image
return ImageOps.expand(img, border=padding, **opts)
else:
if isinstance(padding, int):
pad_left = pad_right = pad_top = pad_bottom = padding
if isinstance(padding, tuple) and len(padding) == 2:
pad_left = pad_right = padding[0]
pad_top = pad_bottom = padding[1]
if isinstance(padding, tuple) and len(padding) == 4:
pad_left = padding[0]
pad_top = padding[1]
pad_right = padding[2]
pad_bottom = padding[3]
p = [pad_left, pad_top, pad_right, pad_bottom]
cropping = -np.minimum(p, 0)
if cropping.any():
crop_left, crop_top, crop_right, crop_bottom = cropping
img = img.crop((crop_left, crop_top, img.width - crop_right, img.height - crop_bottom))
pad_left, pad_top, pad_right, pad_bottom = np.maximum(p, 0)
if img.mode == "P":
palette = img.getpalette()
img = np.asarray(img)
img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), mode=padding_mode)
img = Image.fromarray(img)
img.putpalette(palette)
return img
img = np.asarray(img)
# RGB image
if len(img.shape) == 3:
img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right), (0, 0)), padding_mode)
# Grayscale image
if len(img.shape) == 2:
img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), padding_mode)
return Image.fromarray(img)
@torch.jit.unused
def crop(
img: Image.Image,
top: int,
left: int,
height: int,
width: int,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return img.crop((left, top, left + width, top + height))
@torch.jit.unused
def resize(
img: Image.Image,
size: Union[Sequence[int], int],
interpolation: int = Image.BILINEAR,
max_size: Optional[int] = None,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
if not (isinstance(size, int) or (isinstance(size, Sequence) and len(size) in (1, 2))):
raise TypeError(f"Got inappropriate size arg: {size}")
if isinstance(size, Sequence) and len(size) == 1:
size = size[0]
if isinstance(size, int):
w, h = img.size
short, long = (w, h) if w <= h else (h, w)
if short == size:
return img
new_short, new_long = size, int(size * long / short)
if max_size is not None:
if max_size <= size:
raise ValueError(
f"max_size = {max_size} must be strictly greater than the requested "
f"size for the smaller edge size = {size}"
)
if new_long > max_size:
new_short, new_long = int(max_size * new_short / new_long), max_size
new_w, new_h = (new_short, new_long) if w <= h else (new_long, new_short)
return img.resize((new_w, new_h), interpolation)
else:
if max_size is not None:
raise ValueError(
"max_size should only be passed if size specifies the length of the smaller edge, "
"i.e. size should be an int or a sequence of length 1 in torchscript mode."
)
return img.resize(size[::-1], interpolation)
@torch.jit.unused
def _parse_fill(
fill: Optional[Union[float, List[float], Tuple[float, ...]]],
img: Image.Image,
name: str = "fillcolor",
) -> Dict[str, Optional[Union[float, List[float], Tuple[float, ...]]]]:
# Process fill color for affine transforms
num_bands = len(img.getbands())
if fill is None:
fill = 0
if isinstance(fill, (int, float)) and num_bands > 1:
fill = tuple([fill] * num_bands)
if isinstance(fill, (list, tuple)):
if len(fill) != num_bands:
msg = "The number of elements in 'fill' does not match the number of bands of the image ({} != {})"
raise ValueError(msg.format(len(fill), num_bands))
fill = tuple(fill)
return {name: fill}
@torch.jit.unused
def affine(
img: Image.Image,
matrix: List[float],
interpolation: int = Image.NEAREST,
fill: Optional[Union[float, List[float], Tuple[float, ...]]] = 0,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
output_size = img.size
opts = _parse_fill(fill, img)
return img.transform(output_size, Image.AFFINE, matrix, interpolation, **opts)
@torch.jit.unused
def rotate(
img: Image.Image,
angle: float,
interpolation: int = Image.NEAREST,
expand: bool = False,
center: Optional[Tuple[int, int]] = None,
fill: Optional[Union[float, List[float], Tuple[float, ...]]] = 0,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
opts = _parse_fill(fill, img)
return img.rotate(angle, interpolation, expand, center, **opts)
@torch.jit.unused
def perspective(
img: Image.Image,
perspective_coeffs: float,
interpolation: int = Image.BICUBIC,
fill: Optional[Union[float, List[float], Tuple[float, ...]]] = 0,
) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
opts = _parse_fill(fill, img)
return img.transform(img.size, Image.PERSPECTIVE, perspective_coeffs, interpolation, **opts)
@torch.jit.unused
def to_grayscale(img: Image.Image, num_output_channels: int) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
if num_output_channels == 1:
img = img.convert("L")
elif num_output_channels == 3:
img = img.convert("L")
np_img = np.array(img, dtype=np.uint8)
np_img = np.dstack([np_img, np_img, np_img])
img = Image.fromarray(np_img, "RGB")
else:
raise ValueError("num_output_channels should be either 1 or 3")
return img
@torch.jit.unused
def invert(img: Image.Image) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return ImageOps.invert(img)
@torch.jit.unused
def posterize(img: Image.Image, bits: int) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return ImageOps.posterize(img, bits)
@torch.jit.unused
def solarize(img: Image.Image, threshold: int) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return ImageOps.solarize(img, threshold)
@torch.jit.unused
def adjust_sharpness(img: Image.Image, sharpness_factor: float) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
enhancer = ImageEnhance.Sharpness(img)
img = enhancer.enhance(sharpness_factor)
return img
@torch.jit.unused
def autocontrast(img: Image.Image) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return ImageOps.autocontrast(img)
@torch.jit.unused
def equalize(img: Image.Image) -> Image.Image:
if not _is_pil_image(img):
raise TypeError(f"img should be PIL Image. Got {type(img)}")
return ImageOps.equalize(img)
from PIL import Image
import PIL
import tkinter as tk
from tkinter.filedialog import askopenfile, asksaveasfile
from tkinter.messagebox import showinfo, askyesno, askokcancel
import enum
class ValueType(enum.Enum):
Bool = 0
Int = 1
Float = 2
IntPair = 3
Vec2 = 4
NormVec2 = 5
Randomize = 6
def create_meta_pixel_value(self, kwargs):
if self.value == 0: return Bool()
elif self.value == 1: return Int(**kwargs)
elif self.value == 2: return Float(**kwargs)
elif self.value == 3: return IntPair(**kwargs)
elif self.value == 4: return Vec2(**kwargs)
elif self.value == 5: return NormalizedVec2(**kwargs)
elif self.value == 6: return Randomize()
return None
class TypeHolder:
COLORS = {
0: "Snow2",
1: "cyan",
2: "Lime",
3: "DeepSkyBlue",
4: "Tan1",
5: "Maroon1",
6: "SlateBlue"
}
def __init__(self, meta_pixel_value_type: ValueType, **kwargs):
self.type = meta_pixel_value_type
self.kwargs = kwargs
def generate(self):
return self.type.create_meta_pixel_value(self.kwargs)
class MetaPixelValue:
def __init__(self, value):
assert isinstance(value, ValueType)
self.type = value
self.g = 0
self.b = 0
def get_type(self):
return self.type
def get_values(self):
return ()
def get_colors(self):
self.calc_colors()
return self.g, self.b
def calc_colors(self):
self.g = 0
self.b = 0
def set_colors(self, g: int, b: int):
pass
def get_help(self):
return self.type.name
def set_value(self, a: float, b: float):
pass
class Bool(MetaPixelValue):
def __init__(self):
super().__init__(ValueType.Bool)
def get_values(self):
return ()
def get_help(self):
        return "If this meta pixel exists, its properties are on"
class Vec2(MetaPixelValue):
def __init__(self, vec_range=128.0, value=(128.0, 128.0)):
super().__init__(ValueType.Vec2)
self.midpoint = value
self.range = vec_range
self.value_x = self.midpoint[0]
self.value_y = self.midpoint[1]
def get_values(self):
self.set_colors(self.g, self.b)
self.calc_colors()
return self.value_x, self.value_y
def calc_colors(self):
self.g = self.value_x+self.midpoint[0]
self.b = self.value_y+self.midpoint[1]
def set_colors(self, g: int, b: int):
self.g = g
self.b = b
self.value_x = self.g-self.midpoint[0]
self.value_y = self.b-self.midpoint[1]
if self.range < self.value_x: self.value_x = self.range
if self.range < self.value_y: self.value_y = self.range
if self.value_x < -self.range: self.value_x = -self.range
if self.value_y < -self.range: self.value_y = -self.range
def set_value(self, a: float, b: float):
self.value_x = a
self.value_y = b
self.calc_colors()
def get_help(self):
        return str(self.midpoint[0]) + " is 0; negative numbers are below that value & positive above. This Vec2 " \
               "has a range of " + str(self.range) + ", which means the min & max are (" \
               + str(self.midpoint[0] - self.range) + ", " + str(self.midpoint[0] + self.range) + ")"
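# Standalone sketch (not part of the original file): Vec2's byte <-> offset
# decoding, as implemented by Vec2.set_colors above. A component is stored as
# byte - midpoint (128 by default), so byte 128 decodes to 0, smaller bytes to
# negative offsets, larger bytes to positive ones, clamped to +/- range.
def _byte_to_offset_sketch(g, midpoint=128.0, vec_range=128.0):
    value = g - midpoint
    return max(-vec_range, min(vec_range, value))  # same clamp as Vec2.set_colors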
class Float(MetaPixelValue):
def __init__(self, float_range=1.0, value=1.0):
super().__init__(ValueType.Float)
self.range = float_range
self.value = value
def get_values(self):
self.set_colors(self.g, self.b)
self.calc_colors()
return self.value,
def calc_colors(self):
self.g = int(255 * (self.value / self.range))
self.b = 0
def set_colors(self, g: int, b: int):
self.g = g
self.b = 0
self.value = (self.g / 255) * self.range
def set_value(self, a: float, b: float):
self.value = a
self.calc_colors()
def get_help(self):
return "0 through 255 will be translated into a number between two values. For this float those two values are"\
+ " 0.0 to "+str(self.range)+"."
class Int(MetaPixelValue):
def __init__(self, int_range=255, value=0):
super().__init__(ValueType.Int)
self.range = int_range
self.value = value
def get_values(self):
self.set_colors(self.g, self.b)
self.calc_colors()
return self.value,
def calc_colors(self):
self.g = self.value
self.b = 0
def set_colors(self, g: int, b: int):
self.g = g
self.b = 0
self.value = g
if self.range < self.value: self.value = self.range
if self.value < 0: self.value = 0
def set_value(self, a: float, b: float):
self.value = a
self.calc_colors()
def get_help(self):
return "The green RGB value is the value of the integer. This integer has a max value of "+str(self.range)
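# Standalone sketch (not part of the original file): the byte quantization used
# by the Float class above. A float in [0, range] is stored in the green channel
# as int(255 * value / range) and decoded back as g / 255 * range, so a round
# trip snaps the value onto a 1/255 grid.
def _float_to_byte_sketch(value, float_range=1.0):
    return int(255 * (value / float_range))

def _byte_to_float_sketch(g, float_range=1.0):
    return (g / 255) * float_range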
class IntPair(MetaPixelValue):
def __init__(self, int_range_x=255, int_range_y=255, value_x=0, value_y=0):
super().__init__(ValueType.IntPair)
self.value_x = Int(int_range=int_range_x, value=value_x)
self.value_y = Int(int_range=int_range_y, value=value_y)
def get_values(self):
self.set_colors(self.g, self.b)
self.calc_colors()
return self.value_x.get_values()[0], self.value_y.get_values()[0]
def calc_colors(self):
self.g = self.value_x.get_colors()[0]
self.b = self.value_y.get_colors()[0]
def set_colors(self, g: int, b: int):
self.g = g
self.b = b
self.value_x.set_colors(g, 0)
self.value_y.set_colors(b, 0)
def set_value(self, a: float, b: float):
        self.value_x.set_value(a, 0.0)
        self.value_y.set_value(b, 0.0)
self.calc_colors()
def get_help(self):
return "The green RGB value is the first value of the integer, the second is the blue RGB value. This IntPair "\
+ "has a max value A & max value B of "+str((self.value_x.range, self.value_y.range))
class NormalizedVec2(MetaPixelValue):
def __init__(self, vec_range=1.0, value=(0.0, 0.0), allow_negative=True):
super().__init__(ValueType.NormVec2)
self.offset = (128.0, 128.0) if allow_negative else (0.0, 0.0)
self.negative = allow_negative
self.range = vec_range
self.value_x = value[0]
self.value_y = value[1]
def get_values(self):
self.set_colors(self.g, self.b)
self.calc_colors()
return self.value_x, self.value_y
def calc_colors(self):
self.g = (255 * (self.value_x / self.range))+self.offset[0]
self.b = (255 * (self.value_y / self.range))+self.offset[1]
def set_colors(self, g: int, b: int):
self.g = g
self.b = b
self.value_x = (self.g - self.offset[0]) / 255 * self.range
self.value_y = (self.b - self.offset[1]) / 255 * self.range
if self.negative:
if self.value_x < -self.range/2: self.value_x = -self.range/2
elif self.range/2 < self.value_x: self.value_x = self.range/2
if self.value_y < -self.range/2: self.value_y = -self.range/2
elif self.range/2 < self.value_y: self.value_y = self.range/2
else:
if self.value_x < 0: self.value_x = 0
elif self.range < self.value_x: self.value_x = self.range
            if self.value_y < 0: self.value_y = 0  # non-negative range is [0, range], mirroring value_x
            elif self.range < self.value_y: self.value_y = self.range
def set_value(self, a: float, b: float):
self.value_x = a
self.value_y = b
self.calc_colors()
def get_help(self):
range_min = -self.range/2 if self.negative else 0.0
range_max = self.range/2 if self.negative else self.range
return "A vector that behaves like a float where the green & blue RGB values (0 through 255) are turned into" \
" a different number range. For this NormalizedVec2 that is ("+str(range_min)+", "+str(range_max)+")"
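# Standalone sketch (not part of the original file): NormalizedVec2's decode
# with allow_negative=True, as in its set_colors above -- the byte is shifted by
# the 128 offset and scaled, so 0..255 maps onto roughly (-range/2, +range/2).
def _byte_to_signed_sketch(g, vec_range=1.0, offset=128.0):
    return (g - offset) / 255 * vec_range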
class Randomize(MetaPixelValue):
def __init__(self):
super().__init__(ValueType.Randomize)
def get_values(self):
return ()
def calc_colors(self):
pass
def set_colors(self, g: int, b: int):
self.g = g
self.b = b
class MetaPixelType(enum.Enum):
# Misc
HatOffset = 1, TypeHolder(ValueType.Vec2, vec_range=16), \
"Hat offset position in pixels", 0
UseDuckColor = 2, TypeHolder(ValueType.Bool), \
    "If this metapixel exists, White (255, 255, 255) and Grey (157, 157, 157) will be recolored to duck colors.", 0
# Capes
CapeOffset = 10, TypeHolder(ValueType.Vec2, vec_range=16), \
"Cape offset position in pixels", 1
CapeForeground = 11, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the cape will be drawn over the duck.", 1
CapeSwayModifier = 12, TypeHolder(ValueType.NormVec2, value=(0.3, 1.0)), \
"Affects cape length, and left to right sway.", 1
CapeWiggleModifier = 13, TypeHolder(ValueType.NormVec2, value=(1.0, 1.0)), \
"Affects how much the cape wiggles in the wind.", 1
CapeTaperStart = 14, TypeHolder(ValueType.Float, value=0.5), \
"Affects how narrow the cape/trail is at the top/beginning.", 1
CapeTaperEnd = 15, TypeHolder(ValueType.Float), \
"Affects how narrow the cape/trail is at the bottom/end.", 1
CapeAlphaStart = 16, TypeHolder(ValueType.Float), \
"Affects how transparent the cape/trail is at the top/beginning.", 1
CapeAlphaEnd = 17, TypeHolder(ValueType.Float), \
"Affects how transparent the cape/trail is at the bottom/end.", 1
CapeIsTrail = 20, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the cape will be a trail instead of a cape (think of the rainbow trail left by the " \
"TV object).", 1
# Particles
ParticleEmitterOffset = 30, TypeHolder(ValueType.Vec2, vec_range=16.0), \
"The offset in pixels from the center of the hat where particles will be emitted.", 2
ParticleDefaultBehavior = 31, TypeHolder(ValueType.Int, int_range=4, value=0), \
"B defines a particle behavior from a list of presets: 0 = No Behavior, 1 = Spit, 2 = Burst," \
" 3 = Halo, 4 = Exclamation", 2
ParticleEmitShape = 32, TypeHolder(ValueType.IntPair, int_range_x=2, int_range_y=2), \
"G: 0 = Point, 1 = Circle, 2 = Box B: 0 = Emit Around Shape Border Randomly, 1 = Fill Shape Randomly, " \
"2 = Emit Around Shape Border Uniformly", 2
ParticleEmitShapeSize = 33, TypeHolder(ValueType.Vec2, vec_range=24.0, value=(24.0, 24.0)), \
    "X and Y size of the particle emitter (in pixels). By usage this should be an IntPair, but the docs list it as this type.", 2
ParticleCount = 34, TypeHolder(ValueType.Int, int_range=8, value=4), \
"The number of particles to emit.", 2
ParticleLifespan = 35, TypeHolder(ValueType.Float, float_range=2.0), \
"Life span of the particle, in seconds (0 to 2 seconds)", 2
ParticleVelocity = 36, TypeHolder(ValueType.NormVec2, vec_range=2.0), \
"Initial velocity of the particle.", 2
ParticleGravity = 37, TypeHolder(ValueType.NormVec2, vec_range=2.0), \
"Gravity applied to the particle.", 2
ParticleFriction = 38, TypeHolder(ValueType.NormVec2, vec_range=2.0, allow_negative=False, value=(1.0, 1.0)), \
    "Friction applied to the particle (the value its velocity is multiplied by every frame).", 2
ParticleAlpha = 39, TypeHolder(ValueType.NormVec2, vec_range=2.0, allow_negative=False, value=(1.0, 1.0)), \
"G = Start alpha, B = End alpha", 2
ParticleScale = 40, TypeHolder(ValueType.NormVec2, vec_range=2.0, allow_negative=False, value=(1.0, 0.0)), \
"G = Start scale, B = End scale", 2
ParticleRotation = 41, TypeHolder(ValueType.NormVec2, vec_range=36.0, allow_negative=False, value=(0.0, 0.0)),\
"G = Start rotation, B = End rotation", 2
ParticleOffset = 42, TypeHolder(ValueType.Vec2, vec_range=16), \
"Additional X Y offset of particle.", 2
ParticleBackground = 43, TypeHolder(ValueType.Bool), \
"If this metapixel exists, particles will be rendered behind the duck.", 2
ParticleAnchor = 44, TypeHolder(ValueType.Bool), \
"If this metapixel exists, particles will stay anchored around the hat position when it's moving.", 2
ParticleAnimated = 45, TypeHolder(ValueType.Bool), \
"If this metapixel exists, particles will animate through their frames. Otherwise, a frame will be picked " \
"randomly.", 2
ParticleAnimationLoop = 46, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the particle animation will loop.", 2
ParticleAnimationRandomFrame = 47, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the particle animation will start on a random frame.", 2
ParticleAnimationSpeed = 48, TypeHolder(ValueType.Float, value=0.1), \
"How quickly the particle animates.", 2
# Strange
WetLips = 70, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the hat will have 'wet lips'.", 3
MechanicalLips = 71, TypeHolder(ValueType.Bool), \
"If this metapixel exists, the hat will have 'mechanical lips'.", 3
# Special
RandomizeParameterX = 100, TypeHolder(ValueType.Randomize), \
"If present, the previously defined metapixel value will have a random number between G and B applied to its A " \
    "value each time it's used. This will generally only work with particles.", 4
RandomizeParameterY = 101, TypeHolder(ValueType.Randomize), \
"If present, the previously defined metapixel value will have a random number between G and B applied to its B " \
    "value each time it's used. This will generally only work with particles.", 4
RandomizeParameter = 102, TypeHolder(ValueType.Randomize), \
"If present, the previously defined metapixel value will have a random number between G and B applied to its " \
    "A and B values each time it's used. This will generally only work with particles.", 4
class MetaPixel:
TYPES = {
1: MetaPixelType.HatOffset,
2: MetaPixelType.UseDuckColor,
10: MetaPixelType.CapeOffset,
11: MetaPixelType.CapeForeground,
12: MetaPixelType.CapeSwayModifier,
13: MetaPixelType.CapeWiggleModifier,
14: MetaPixelType.CapeTaperStart,
15: MetaPixelType.CapeTaperEnd,
16: MetaPixelType.CapeAlphaStart,
17: MetaPixelType.CapeAlphaEnd,
20: MetaPixelType.CapeIsTrail,
30: MetaPixelType.ParticleEmitterOffset,
31: MetaPixelType.ParticleDefaultBehavior,
32: MetaPixelType.ParticleEmitShape,
33: MetaPixelType.ParticleEmitShapeSize,
34: MetaPixelType.ParticleCount,
35: MetaPixelType.ParticleLifespan,
36: MetaPixelType.ParticleVelocity,
37: MetaPixelType.ParticleGravity,
38: MetaPixelType.ParticleFriction,
39: MetaPixelType.ParticleAlpha,
40: MetaPixelType.ParticleScale,
41: MetaPixelType.ParticleRotation,
42: MetaPixelType.ParticleOffset,
43: MetaPixelType.ParticleBackground,
44: MetaPixelType.ParticleAnchor,
45: MetaPixelType.ParticleAnimated,
46: MetaPixelType.ParticleAnimationLoop,
47: MetaPixelType.ParticleAnimationRandomFrame,
48: MetaPixelType.ParticleAnimationSpeed,
70: MetaPixelType.WetLips,
71: MetaPixelType.MechanicalLips,
100: MetaPixelType.RandomizeParameterX,
101: MetaPixelType.RandomizeParameterY,
102: MetaPixelType.RandomizeParameter
}
COLORS = {
0: "Gold",
1: "LightSkyBlue",
2: "PaleGreen",
3: "Wheat1",
4: "HotPink"
}
def __init__(self, meta_pixel_type: MetaPixelType, g, b):
assert isinstance(meta_pixel_type, MetaPixelType)
self.type = meta_pixel_type
value = meta_pixel_type.value[1]
assert isinstance(value, TypeHolder)
self.value = value.generate()
self.value.set_colors(g, b)
def get_rgba(self):
gb = self.value.get_colors()
r, g, b, a = int(self.type.value[0]), int(gb[0]), int(gb[1]), 255
g = 0 if g < 0 else 255 if 255 < g else g
b = 0 if b < 0 else 255 if 255 < b else b
return r, g, b, a
def get_value(self):
return self.value.get_values()
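# Standalone sketch (not part of the original file): the chained conditional
# expression MetaPixel.get_rgba uses above to keep each channel inside the
# valid byte range 0..255 before writing it into the image.
def _clamp_byte_sketch(v):
    return 0 if v < 0 else 255 if 255 < v else v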
class MetaPixelGui:
def __init__(self, pixel: MetaPixel, row: int, frame: tk.Frame, label: tk.Label, editor):
assert isinstance(editor, Editor)
self.editor = editor
self.label = label
self.pixel = pixel
self.remove_button = tk.Button(frame, text="X", bg="red")
self.remove_button["command"] = self.click_x
self.remove_button.grid(column=0, row=row, sticky=tk.NSEW)
self.meta_pixel_button = tk.Button(frame, text=pixel.type.name, bg=self.pixel.COLORS.get(pixel.type.value[3]))
self.meta_pixel_button["command"] = self.click_meta
self.meta_pixel_button.grid(column=1, row=row, sticky=tk.NSEW)
self.G = tk.Text(frame, width=3, height=1)
self.G.grid(column=2, row=row, sticky=tk.NSEW)
self.G.insert(tk.INSERT, str(self.pixel.value.g))
self.B = tk.Text(frame, width=3, height=1)
self.B.grid(column=3, row=row, sticky=tk.NSEW)
self.B.insert(tk.INSERT, str(self.pixel.value.b))
self.G.edit_modified(False)
self.B.edit_modified(False)
self.ValueType = tk.Button(frame, text=pixel.type.value[1].type.name,
bg=TypeHolder.COLORS.get(pixel.type.value[1].type.value))
self.ValueType["command"] = self.click_type
self.ValueType.grid(column=4, row=row, sticky=tk.NSEW)
values = self.pixel.value.get_values()
self.valueA = None
self.valueB = None
if 0 < len(values):
self.valueA = tk.Text(frame, width=6, height=1)
self.valueA.grid(column=5, row=row, sticky=tk.NSEW)
self.valueA.insert(tk.INSERT, str(self.pixel.value.get_values()[0]))
self.valueA.edit_modified(False)
if 1 < len(values):
self.valueB = tk.Text(frame, width=6, height=1)
self.valueB.grid(column=6, row=row, sticky=tk.NSEW)
self.valueB.insert(tk.INSERT, str(self.pixel.value.get_values()[1]))
self.valueB.edit_modified(False)
self.up_button = tk.Button(frame, text="∧", bg="LightBlue")
self.up_button["command"] = self.click_up
self.up_button.grid(column=7, row=row, sticky=tk.NSEW)
self.down_button = tk.Button(frame, text="∨", bg="LightBlue")
self.down_button["command"] = self.click_down
self.down_button.grid(column=8, row=row, sticky=tk.NSEW)
self.on_value_change()
self.on_color_change()
def key_event(self):
if self.G.edit_modified() or self.B.edit_modified():
self.on_color_change()
self.G.edit_modified(False)
self.B.edit_modified(False)
if self.valueA is not None:
if self.valueA.edit_modified():
self.on_value_change()
self.valueA.edit_modified(False)
if self.valueB is not None:
if self.valueB.edit_modified():
self.on_value_change()
self.valueB.edit_modified(False)
def on_value_change(self):
value_a = 0.0
value_b = 0.0
try:
if self.valueA is not None: value_a = float(self.valueA.get('1.0', tk.END))
if self.valueB is not None: value_b = float(self.valueB.get('1.0', tk.END))
except ValueError:
pass
self.pixel.value.set_value(value_a, value_b)
self.G.delete('1.0', tk.END)
self.G.insert(tk.INSERT, int(self.pixel.value.g))
self.G.edit_modified(False)
self.B.delete('1.0', tk.END)
self.B.insert(tk.INSERT, int(self.pixel.value.b))
self.B.edit_modified(False)
def on_color_change(self):
g = 0
b = 0
try:
g = int(self.G.get('1.0', tk.END))
b = int(self.B.get('1.0', tk.END))
except ValueError:
pass
g = 0 if g < 0 else 255 if 255 < g else g
b = 0 if b < 0 else 255 if 255 < b else b
self.pixel.value.set_colors(g, b)
values = self.pixel.value.get_values()
if 0 < len(values):
value = float("{:.3f}".format(values[0]))
self.valueA.delete('1.0', tk.END)
self.valueA.insert(tk.INSERT, int(values[0]) if int(values[0]) == value else value)
self.valueA.edit_modified(False)
if 1 < len(values):
value = float("{:.3f}".format(values[1]))
self.valueB.delete('1.0', tk.END)
self.valueB.insert(tk.INSERT, int(values[1]) if int(values[1]) == value else value)
self.valueB.edit_modified(False)
def click_meta(self):
self.label["text"] = self.pixel.type.value[2]
def click_type(self):
self.label["text"] = self.pixel.value.get_help()
def click_x(self):
self.editor.remove_meta_pixel(self.pixel)
def click_up(self):
self.editor.move_meta_pixel_up(self.pixel)
def click_down(self):
self.editor.move_meta_pixel_down(self.pixel)
def remove(self):
self.meta_pixel_button.destroy()
self.G.destroy()
self.B.destroy()
self.ValueType.destroy()
if self.valueA is not None: self.valueA.destroy()
if self.valueB is not None: self.valueB.destroy()
self.remove_button.destroy()
self.up_button.destroy()
self.down_button.destroy()
class Editor:
def __init__(self):
# Image stuff
self.icon = """<KEY>
<KEY>
D<KEY>pAMIIAAEEICGyL4ilpt2vDVRBx
AAAggAAdOVgrbfHq2ODiAABBAAAmaX0n/K6wACQAABIIAAEEAACCAABPwbvgGpc2uukqDTSwAAAA
BJRU5ErkJggg=="""
self.image = None
self.meta_pixel_keys = []
self.meta_pixels = []
self.pixels = None
# self.image.save("pixel_grid.png")
# Gui Stuff
self.root = tk.Tk()
self.frame1 = tk.Frame(self.root, borderwidth=4, relief="groove")
self.frame1.grid(column=0, row=0, sticky=tk.NSEW)
self.frame2 = tk.Frame(self.root, borderwidth=0)
self.frame2.grid(column=0, row=1, sticky=tk.NSEW)
self.label = tk.Label(self.frame2, text="", justify=tk.LEFT, pady=0, borderwidth=1, wraplength=300)
self.label.pack()
tk.Label(self.frame1, text="X", justify=tk.LEFT, pady=0, borderwidth=1, bg="red"). \
grid(column=0, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="MetaPixelType", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=1, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="Green", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=2, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="Blue", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=3, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="ValueType", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=4, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="Value A", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=5, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="Value B", justify=tk.LEFT, pady=0, borderwidth=1, bg="gray"). \
grid(column=6, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="▲", justify=tk.LEFT, pady=0, borderwidth=1, bg="Aqua"). \
grid(column=7, row=0, sticky=tk.NSEW)
tk.Label(self.frame1, text="▼", justify=tk.LEFT, pady=0, borderwidth=1, bg="Aqua"). \
grid(column=8, row=0, sticky=tk.NSEW)
self.frame3 = tk.Frame(self.root, borderwidth=0)
self.frame3.grid(column=1, row=0, sticky=tk.NSEW)
button = tk.Button(self.frame3, text="+", justify=tk.LEFT, pady=0, borderwidth=1, bg="LightGreen")
button["command"] = self.click_add
button.grid(column=0, row=0, sticky=tk.NSEW)
button = tk.Button(self.frame3, text="Save", justify=tk.LEFT, pady=0, borderwidth=1, bg="Lime")
button["command"] = self.click_save
button.grid(column=0, row=1, sticky=tk.NSEW)
button = tk.Button(self.frame3, text="Load", justify=tk.LEFT, pady=0, borderwidth=1, bg="Orange")
button["command"] = self.click_load
button.grid(column=0, row=3, sticky=tk.NSEW)
# Final Gui pixel stuff
self.metas = []
self.gen_meta_pixels()
self.root.bind_all('<KeyPress>', self.key_stuff)
self.root.bind_all('<KeyRelease>', self.key_stuff)
self.root.title("DuckGame hat MetaPixel Editor")
self.root.iconphoto(False, tk.PhotoImage(data=self.icon))
self.root.protocol("WM_DELETE_WINDOW", self.on_closing)
self.root.resizable(width=False, height=False)
self.add_open = False
self.root.mainloop()
def key_stuff(self, _):
for meta in self.metas:
meta.key_event()
def click_save(self):
if self.image is None:
showinfo(title="Can't Save MetaPixels", message="You haven't loaded a hat yet so you aren't able to save a "
"hat")
return
image_file = asksaveasfile(title="Select file to save as", filetypes=[('PNG Files', '*.png')])
if image_file is None: return
for p in range(len(self.meta_pixels)):
meta_pixel = self.meta_pixels[p]
assert isinstance(meta_pixel, MetaPixel)
self.pixels[96, p] = meta_pixel.get_rgba()
self.image.save(image_file.name)
image_file.close()
def click_load(self):
image_file = askopenfile(mode='r', title="Select file", filetypes=[('PNG Files', '*.png')])
if image_file is None: return
try:
image = Image.open(image_file.name)
except PIL.UnidentifiedImageError:
            showinfo(title="Can't Open "+image_file.name, message="Wasn't able to identify the file as an image")
return
if image.format != "PNG":
showinfo(title="Can't Open "+image_file.name, message="Couldn't open file because it wasn't a PNG file")
return
if image.size != (97, 56):
showinfo(title="Can't Open "+image_file.name, message="Couldn't open file because duck game hats need to be"
" (97, 56) pixels in size")
return
try:
image = image.convert('RGBA')
except ValueError:
            showinfo(title="Can't Open " + image_file.name, message="Couldn't open file because its color mode is "
                                                                    "wrong")
return
self.meta_pixel_keys = []
self.meta_pixels = []
self.image = image
image_file.close()
self.pixels = self.image.load()
self.load()
self.gen_meta_pixels()
def click_add(self):
if self.add_open: return
if self.image is None:
showinfo(title="Can't Add MetaPixel", message="You haven't loaded a hat yet so you aren't able to add a "
"MetaPixel")
return
root = tk.Tk()
class Option:
def __init__(self, pixel_type: MetaPixelType, editor: Editor, r: tk.Tk):
self.root = r
self.editor = editor
self.button = tk.Button(root, text=pixel_type.name, justify=tk.LEFT, pady=0, borderwidth=1,
bg=MetaPixel.COLORS.get(pixel_type.value[3]))
self.button.pack(fill="x")
self.button["command"] = self.click
self.meta_pixel_type = pixel_type
def click(self):
meta_pixel = MetaPixel(self.meta_pixel_type, 0, 0)
self.editor.meta_pixel_keys.append(self.meta_pixel_type)
self.editor.meta_pixels.append(meta_pixel)
self.editor.gen_meta_pixels()
self.editor.add_open = False
self.root.quit()
self.root.destroy()
root.resizable(width=False, height=False)
for meta_pixel_type in MetaPixel.TYPES.values():
            if meta_pixel_type in self.meta_pixel_keys and meta_pixel_type.value[0] < 100: continue
Option(meta_pixel_type, self, root)
self.add_open = True
root.iconphoto(False, tk.PhotoImage(master=root, data=self.icon))
root.mainloop(1)
self.add_open = False
def load(self):
for p in range(56):
r, g, b, a = self.pixels[96, p]
meta_pixel_type = MetaPixel.TYPES.get(r)
if meta_pixel_type is not None:
                if meta_pixel_type not in self.meta_pixel_keys or 100 <= meta_pixel_type.value[0]:
self.meta_pixel_keys.append(meta_pixel_type)
self.meta_pixels.append(MetaPixel(meta_pixel_type, g, b))
self.pixels[96, p] = 0, 0, 0, 0
def add_meta_pixel(self, meta_pixel: MetaPixel):
self.metas.append(MetaPixelGui(meta_pixel, len(self.metas)+1, self.frame1, self.label, self))
def gen_meta_pixels(self):
for meta in self.metas: meta.remove()
self.metas = []
for item in self.meta_pixels: self.add_meta_pixel(item)
def remove_meta_pixel(self, meta_pixel: MetaPixel):
        if meta_pixel in self.meta_pixels:
self.meta_pixels.remove(meta_pixel)
self.meta_pixel_keys.remove(meta_pixel.type)
self.gen_meta_pixels()
def move_meta_pixel_up(self, meta_pixel: MetaPixel):
        if meta_pixel in self.meta_pixels:
for i in range(len(self.meta_pixels)):
if self.meta_pixels[i] == meta_pixel:
u = i-1 if 0 < i else len(self.meta_pixel_keys)-1
move_down = self.meta_pixel_keys[u]
self.meta_pixel_keys[i] = move_down
self.meta_pixel_keys[u] = meta_pixel.type
self.metas[i].remove()
self.metas[u].remove()
move_down = self.meta_pixels[u]
self.meta_pixels[u] = meta_pixel
self.meta_pixels[i] = move_down
self.metas[i] = MetaPixelGui(move_down, i+1, self.frame1, self.label, self)
self.metas[u] = MetaPixelGui(meta_pixel, u+1, self.frame1, self.label, self)
return
def move_meta_pixel_down(self, meta_pixel: MetaPixel):
        if meta_pixel in self.meta_pixels:
for i in range(len(self.meta_pixels)):
if self.meta_pixels[i] == meta_pixel:
u = i+1 if i < len(self.meta_pixel_keys) - 1 else 0
move_up = self.meta_pixel_keys[u]
self.meta_pixel_keys[u] = meta_pixel.type
self.meta_pixel_keys[i] = move_up
self.metas[i].remove()
self.metas[u].remove()
move_up = self.meta_pixels[u]
self.meta_pixels[u] = meta_pixel
self.meta_pixels[i] = move_up
self.metas[i] = MetaPixelGui(move_up, i+1, self.frame1, self.label, self)
self.metas[u] = MetaPixelGui(meta_pixel, u+1, self.frame1, self.label, self)
return
def on_closing(self):
if self.image is None:
self.root.destroy()
return
if askokcancel("Quit Prompt", "Do you want to exit? Any unsaved data will be lost!"): self.root.destroy()
if __name__ == '__main__':
Editor()
askokcancel\n'), ((28373, 28493), 'tkinter.messagebox.showinfo', 'showinfo', ([], {'title': '"""Can\'t Add MetaPixel"""', 'message': '"""You haven\'t loaded a hat yet so you aren\'t able to add a MetaPixel"""'}), '(title="Can\'t Add MetaPixel", message=\n "You haven\'t loaded a hat yet so you aren\'t able to add a MetaPixel")\n', (28381, 28493), False, 'from tkinter.messagebox import showinfo, askyesno, askokcancel\n'), ((29856, 29898), 'tkinter.PhotoImage', 'tk.PhotoImage', ([], {'master': 'root', 'data': 'self.icon'}), '(master=root, data=self.icon)\n', (29869, 29898), True, 'import tkinter as tk\n'), ((23484, 23570), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""X"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""red"""'}), "(self.frame1, text='X', justify=tk.LEFT, pady=0, borderwidth=1, bg=\n 'red')\n", (23492, 23570), True, 'import tkinter as tk\n'), ((23631, 23729), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""MetaPixelType"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""gray"""'}), "(self.frame1, text='MetaPixelType', justify=tk.LEFT, pady=0,\n borderwidth=1, bg='gray')\n", (23639, 23729), True, 'import tkinter as tk\n'), ((23791, 23881), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""Green"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""gray"""'}), "(self.frame1, text='Green', justify=tk.LEFT, pady=0, borderwidth=1,\n bg='gray')\n", (23799, 23881), True, 'import tkinter as tk\n'), ((23943, 24032), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""Blue"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""gray"""'}), "(self.frame1, text='Blue', justify=tk.LEFT, pady=0, borderwidth=1,\n bg='gray')\n", (23951, 24032), True, 'import tkinter as tk\n'), ((24094, 24188), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""ValueType"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': 
'"""gray"""'}), "(self.frame1, text='ValueType', justify=tk.LEFT, pady=0,\n borderwidth=1, bg='gray')\n", (24102, 24188), True, 'import tkinter as tk\n'), ((24250, 24343), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""Value A"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""gray"""'}), "(self.frame1, text='Value A', justify=tk.LEFT, pady=0, borderwidth=\n 1, bg='gray')\n", (24258, 24343), True, 'import tkinter as tk\n'), ((24404, 24497), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""Value B"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""gray"""'}), "(self.frame1, text='Value B', justify=tk.LEFT, pady=0, borderwidth=\n 1, bg='gray')\n", (24412, 24497), True, 'import tkinter as tk\n'), ((24558, 24645), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""▲"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""Aqua"""'}), "(self.frame1, text='▲', justify=tk.LEFT, pady=0, borderwidth=1, bg=\n 'Aqua')\n", (24566, 24645), True, 'import tkinter as tk\n'), ((24706, 24793), 'tkinter.Label', 'tk.Label', (['self.frame1'], {'text': '"""▼"""', 'justify': 'tk.LEFT', 'pady': '(0)', 'borderwidth': '(1)', 'bg': '"""Aqua"""'}), "(self.frame1, text='▼', justify=tk.LEFT, pady=0, borderwidth=1, bg=\n 'Aqua')\n", (24714, 24793), True, 'import tkinter as tk\n'), ((27176, 27277), 'tkinter.messagebox.showinfo', 'showinfo', ([], {'title': '("Can\'t Open " + image_file.name)', 'message': '"""Wasn\'t able to identify file as image"""'}), '(title="Can\'t Open " + image_file.name, message=\n "Wasn\'t able to identify file as image")\n', (27184, 27277), False, 'from tkinter.messagebox import showinfo, askyesno, askokcancel\n'), ((27841, 27956), 'tkinter.messagebox.showinfo', 'showinfo', ([], {'title': '("Can\'t Open " + image_file.name)', 'message': '"""Couldn\'t open file because it\'s color mode is wrong"""'}), '(title="Can\'t Open " + image_file.name, message=\n "Couldn\'t open file 
because it\'s color mode is wrong")\n', (27849, 27956), False, 'from tkinter.messagebox import showinfo, askyesno, askokcancel\n')] |
# Circular primes
# Problem 35
# The number, 197, is called a circular prime because all rotations of the digits: 197, 971, and 719, are themselves prime.
# There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, and 97.
# How many circular primes are there below one million?
from common import PrimeGenerator, lst_to_int, is_prime

def rotate(lst):
    for _ in range(len(lst)):
        head, *tail = lst
        lst = tail + [head]
        yield ''.join(lst)

def is_circular_prime(prime, prime_dict):
    digits = str(prime)
    rs = rotate(digits)
    return all(prime_dict.get(n, False) for n in rs)

def solve():
    primes = list(PrimeGenerator(lt = 1_000_000))
    prime_dict = dict((str(prime), True) for prime in primes)
    return tuple(prime for prime in primes if is_circular_prime(prime, prime_dict))

if __name__ == '__main__':
    print(__file__ + ':', len(solve()))
| [
"common.PrimeGenerator"
] | [((671, 697), 'common.PrimeGenerator', 'PrimeGenerator', ([], {'lt': '(1000000)'}), '(lt=1000000)\n', (685, 697), False, 'from common import PrimeGenerator, lst_to_int, is_prime\n')] |
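The record above depends on an external `common` module (`PrimeGenerator`, `is_prime` are imported helpers not shown here), but the rotation idea is easy to check standalone. A minimal self-contained sketch with a naive trial-division `is_prime` standing in for the imported one:

```python
def is_prime(n):
    # naive trial division; fine for small inputs
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def rotations(s):
    # all cyclic rotations of the digit string, e.g. '197' -> '971', '719', '197'
    return [s[i:] + s[:i] for i in range(1, len(s) + 1)]

def is_circular_prime(n):
    return all(is_prime(int(r)) for r in rotations(str(n)))

# reproduces the thirteen circular primes below 100 listed in the problem statement
print(sorted(n for n in range(2, 100) if is_circular_prime(n)))
```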
import inspect
import configparser
import os.path
# ----------------------------------------------------------------------------------------------------------------------
def create_sql_object():
    """
    Create a sql resources object.
    :return:
        A sql resources object.
    """
    module_d = os.path.split(inspect.stack()[0][1])[0]
    resources_d = os.path.abspath(os.path.join(module_d, "..", "..", "..", "resources"))
    parser = configparser.ConfigParser()
    parser.read(os.path.join(resources_d, "sql.ini"))
    return parser
| [
"configparser.ConfigParser",
"inspect.stack"
] | [((458, 485), 'configparser.ConfigParser', 'configparser.ConfigParser', ([], {}), '()\n', (483, 485), False, 'import configparser\n'), ((329, 344), 'inspect.stack', 'inspect.stack', ([], {}), '()\n', (342, 344), False, 'import inspect\n')] |
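`create_sql_object` above just points `ConfigParser.read` at a `resources/sql.ini` resolved relative to the module. The same read path can be sketched against a temporary directory; the `[queries]` section and `count_rows` key below are invented for illustration, not taken from the actual `sql.ini`:

```python
import configparser
import os
import tempfile

ini_text = "[queries]\ncount_rows = SELECT COUNT(*) FROM example\n"

with tempfile.TemporaryDirectory() as resources_d:
    path = os.path.join(resources_d, "sql.ini")
    with open(path, "w") as f:
        f.write(ini_text)

    # same call pattern as create_sql_object(): build a parser, read the ini
    parser = configparser.ConfigParser()
    parser.read(path)
    print(parser["queries"]["count_rows"])
```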
"""
Flask app with auto-discovery of blueprints, cli commands etc.
"""
import copy
import functools
import sys
import types
import typing
import click
import flask
from mara_app import config, layout
from mara_page import response, _, bootstrap, navigation
from werkzeug import exceptions
def module_functionalities(module: types.ModuleType, MARA_XXX: str, type) -> []:
"""
Returns some functionalities of a module that is declared in a MARA_XXX variable or function
`module.MARA_XXX` can be
- a function that returns a list or dict
- a list
- a dict
"""
if MARA_XXX in dir(module):
functionalities = getattr(module, MARA_XXX)
if isinstance(functionalities, typing.Callable):
functionalities = functionalities()
if isinstance(functionalities, typing.Dict):
functionalities = functionalities.values()
if not isinstance(functionalities, typing.Iterable):
raise TypeError(
f'{module.__name__}.{MARA_XXX} should be or return a list or dict of {type.__name__}. Got "{functionalities}".')
for functionality in functionalities:
if not isinstance(functionality, type):
raise TypeError(f'In {module.__name__}.{MARA_XXX}: Expected a {type.__name__}, got "{functionality}"')
return functionalities
else:
return []
class MaraApp(flask.Flask):
def __init__(self):
super().__init__('mara')
self.register_blueprints()
self.register_commands()
self.register_page_layout()
self.register_error_handlers()
self.disable_caching()
self.patch_flask_url_for()
self.config.update(config.flask_config())
def register_blueprints(self):
"""Searches for all declared blueprints and adds them to the app"""
for module in copy.copy(sys.modules).values():
for blueprint in module_functionalities(module, 'MARA_FLASK_BLUEPRINTS', flask.Blueprint):
self.register_blueprint(blueprint)
def register_commands(self):
"""Searches for all declared click commands and adds them to the app, grouped by package"""
for module in copy.copy(sys.modules).values():
for command in module_functionalities(module, 'MARA_CLICK_COMMANDS', click.Command):
if 'callback' in command.__dict__ and command.__dict__['callback']:
package = command.__dict__['callback'].__module__.rpartition('.')[0]
if package != 'flask':
command.name = package + '.' + command.name
self.cli.add_command(command)
def register_page_layout(self):
"""Adds a global layout with navigation etc. to pages"""
def after_request(r: flask.Response):
if isinstance(r, response.Response):
r.set_data(layout.layout(r))
return r
self.after_request(after_request)
def disable_caching(self):
"""
Disable caching for dynamic content (not static files).
See https://stackoverflow.com/questions/23112316/using-flask-how-do-i-modify-the-cache-control-header-for-all-output/37331139#37331139
"""
def after_request(r: flask.Response):
if 'Cache-Control' not in r.headers:
r.headers['Cache-Control'] = 'no-store'
return r
self.after_request(after_request)
def register_error_handlers(self):
"""Sets up error pages for all http exceptions"""
def error_handler(error):
if not isinstance(error, exceptions.HTTPException):
error = exceptions.InternalServerError()
return response.Response(bootstrap.card(body=_.span[_.p(style='color:#888')[error.description or ''],
_.img(src=flask.url_for('mara_app.static',
filename='mara.jpg'),
style='margin-top:30px;max-width:100%;')]),
title=f'{error.code} {error.name}',
status=error.code)
for cls in exceptions.HTTPException.__subclasses__():
self.register_error_handler(cls, error_handler)
def patch_flask_url_for(self):
"""Caches calls to flask.url_for because it's kind of slow
https://stackoverflow.com/questions/16713644/why-is-flask-url-for-too-slow"""
original_url_for = flask.url_for
flask.url_for = functools.lru_cache(maxsize=None)(original_url_for)
@functools.lru_cache(maxsize=None)
def combine_navigation_entries() -> navigation.NavigationEntry:
"""Collects and merges all instances of NavigationEntry"""
navigation_root = config.navigation_root()
def all_children(navigation_entry: navigation.NavigationEntry) -> {navigation.NavigationEntry}:
return functools.reduce(set.union, [all_children(child) for child in navigation_entry.children],
set([navigation_entry]))
# all navigation entries that have already been registered via `config.navigation_root`
existing_navigation_entries = all_children(navigation_root)
for module in copy.copy(sys.modules).values():
for navigation_entry in module_functionalities(module, 'MARA_NAVIGATION_ENTRIES', navigation.NavigationEntry):
# only add navigation entries that have not been added yet via `config.navigation_root`
if not navigation_entry in existing_navigation_entries and navigation_entry != navigation_root:
navigation_root.add_child(navigation_entry)
return navigation_root
| [
"mara_page._.p",
"werkzeug.exceptions.InternalServerError",
"mara_app.layout.layout",
"flask.url_for",
"mara_app.config.navigation_root",
"werkzeug.exceptions.HTTPException.__subclasses__",
"functools.lru_cache",
"copy.copy",
"mara_app.config.flask_config"
] | [((4720, 4753), 'functools.lru_cache', 'functools.lru_cache', ([], {'maxsize': 'None'}), '(maxsize=None)\n', (4739, 4753), False, 'import functools\n'), ((4903, 4927), 'mara_app.config.navigation_root', 'config.navigation_root', ([], {}), '()\n', (4925, 4927), False, 'from mara_app import config, layout\n'), ((4307, 4348), 'werkzeug.exceptions.HTTPException.__subclasses__', 'exceptions.HTTPException.__subclasses__', ([], {}), '()\n', (4346, 4348), False, 'from werkzeug import exceptions\n'), ((1704, 1725), 'mara_app.config.flask_config', 'config.flask_config', ([], {}), '()\n', (1723, 1725), False, 'from mara_app import config, layout\n'), ((4665, 4698), 'functools.lru_cache', 'functools.lru_cache', ([], {'maxsize': 'None'}), '(maxsize=None)\n', (4684, 4698), False, 'import functools\n'), ((5367, 5389), 'copy.copy', 'copy.copy', (['sys.modules'], {}), '(sys.modules)\n', (5376, 5389), False, 'import copy\n'), ((1861, 1883), 'copy.copy', 'copy.copy', (['sys.modules'], {}), '(sys.modules)\n', (1870, 1883), False, 'import copy\n'), ((2204, 2226), 'copy.copy', 'copy.copy', (['sys.modules'], {}), '(sys.modules)\n', (2213, 2226), False, 'import copy\n'), ((3679, 3711), 'werkzeug.exceptions.InternalServerError', 'exceptions.InternalServerError', ([], {}), '()\n', (3709, 3711), False, 'from werkzeug import exceptions\n'), ((2897, 2913), 'mara_app.layout.layout', 'layout.layout', (['r'], {}), '(r)\n', (2910, 2913), False, 'from mara_app import config, layout\n'), ((3776, 3799), 'mara_page._.p', '_.p', ([], {'style': '"""color:#888"""'}), "(style='color:#888')\n", (3779, 3799), False, 'from mara_page import response, _, bootstrap, navigation\n'), ((3900, 3953), 'flask.url_for', 'flask.url_for', (['"""mara_app.static"""'], {'filename': '"""mara.jpg"""'}), "('mara_app.static', filename='mara.jpg')\n", (3913, 3953), False, 'import flask\n')] |
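The `module_functionalities` contract above (a `MARA_XXX` attribute may be a callable returning a list or dict, a plain list, or a dict) can be exercised without Flask at all. The sketch below condenses the same normalization and type-check logic and drives it with a synthetic module built via `types.ModuleType`; the `fake_plugin` name and its attributes are invented for the demo:

```python
import types

def module_functionalities(module, attr, typ):
    # condensed version of the discovery logic: normalize callable -> value,
    # dict -> its values, then enforce that every item is an instance of `typ`
    if attr not in dir(module):
        return []
    functionalities = getattr(module, attr)
    if callable(functionalities):
        functionalities = functionalities()
    if isinstance(functionalities, dict):
        functionalities = list(functionalities.values())
    for f in functionalities:
        if not isinstance(f, typ):
            raise TypeError(f"Expected a {typ.__name__}, got {f!r}")
    return list(functionalities)

fake = types.ModuleType("fake_plugin")
fake.MARA_CLICK_COMMANDS = lambda: {"first": 1, "second": 2}
print(module_functionalities(fake, "MARA_CLICK_COMMANDS", int))      # dict values, in order
print(module_functionalities(fake, "MARA_FLASK_BLUEPRINTS", object))  # missing attribute -> []
```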
# cython.* namespace for pure mode.
__version__ = "0.23dev"
# BEGIN shameless copy from Cython/minivect/minitypes.py
class _ArrayType(object):

    is_array = True
    subtypes = ['dtype']

    def __init__(self, dtype, ndim, is_c_contig=False, is_f_contig=False,
                 inner_contig=False, broadcasting=None):
        self.dtype = dtype
        self.ndim = ndim
        self.is_c_contig = is_c_contig
        self.is_f_contig = is_f_contig
        self.inner_contig = inner_contig or is_c_contig or is_f_contig
        self.broadcasting = broadcasting

    def __repr__(self):
        axes = [":"] * self.ndim
        if self.is_c_contig:
            axes[-1] = "::1"
        elif self.is_f_contig:
            axes[0] = "::1"
        return "%s[%s]" % (self.dtype, ", ".join(axes))
def index_type(base_type, item):
    """
    Support array type creation by slicing, e.g. double[:, :] specifies
    a 2D strided array of doubles. The syntax is the same as for
    Cython memoryviews.
    """
    class InvalidTypeSpecification(Exception):
        pass

    def verify_slice(s):
        if s.start or s.stop or s.step not in (None, 1):
            raise InvalidTypeSpecification(
                "Only a step of 1 may be provided to indicate C or "
                "Fortran contiguity")

    if isinstance(item, tuple):
        step_idx = None
        for idx, s in enumerate(item):
            verify_slice(s)
            if s.step and (step_idx or idx not in (0, len(item) - 1)):
                raise InvalidTypeSpecification(
                    "Step may only be provided once, and only in the "
                    "first or last dimension.")

            if s.step == 1:
                step_idx = idx

        return _ArrayType(base_type, len(item),
                          is_c_contig=step_idx == len(item) - 1,
                          is_f_contig=step_idx == 0)
    elif isinstance(item, slice):
        verify_slice(item)
        return _ArrayType(base_type, 1, is_c_contig=bool(item.step))
    else:
        # int[8] etc.
        assert int(item) == item  # array size must be a plain integer
        return array(base_type, item)
# END shameless copy
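The slicing syntax that `index_type` parses can be demonstrated without Cython itself. The sketch below condenses `_ArrayType` and the tuple branch of `index_type` from the copy above (error handling and the non-tuple branches are dropped; the dtype is passed as a plain string so the repr reads like a memoryview declaration):

```python
class _ArrayType(object):
    def __init__(self, dtype, ndim, is_c_contig=False, is_f_contig=False):
        self.dtype = dtype
        self.ndim = ndim
        self.is_c_contig = is_c_contig
        self.is_f_contig = is_f_contig

    def __repr__(self):
        axes = [":"] * self.ndim
        if self.is_c_contig:
            axes[-1] = "::1"     # C order: last dimension contiguous
        elif self.is_f_contig:
            axes[0] = "::1"      # Fortran order: first dimension contiguous
        return "%s[%s]" % (self.dtype, ", ".join(axes))

def index_type(base_type, item):
    # tuple case only: a step of 1 in the first/last dimension marks contiguity
    step_idx = None
    for idx, s in enumerate(item):
        if s.step == 1:
            step_idx = idx
    return _ArrayType(base_type, len(item),
                      is_c_contig=step_idx == len(item) - 1,
                      is_f_contig=step_idx == 0)

# double[:, ::1] -- 2D, C-contiguous in the last dimension
print(repr(index_type('double', (slice(None), slice(None, None, 1)))))
# double[::1, :] -- 2D, Fortran-contiguous in the first dimension
print(repr(index_type('double', (slice(None, None, 1), slice(None)))))
```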
compiled = False
_Unspecified = object()
# Function decorators
def _empty_decorator(x):
    return x

def locals(**arg_types):
    return _empty_decorator

def test_assert_path_exists(*paths):
    return _empty_decorator

def test_fail_if_path_exists(*paths):
    return _empty_decorator

class _EmptyDecoratorAndManager(object):
    def __call__(self, x):
        return x
    def __enter__(self):
        pass
    def __exit__(self, exc_type, exc_value, traceback):
        pass
cclass = ccall = cfunc = _EmptyDecoratorAndManager()
returns = wraparound = boundscheck = profile = freelist = lambda arg: _EmptyDecoratorAndManager()
final = internal = type_version_tag = no_gc_clear = _empty_decorator
def inline(f, *args, **kwds):
    if isinstance(f, basestring):
        from Cython.Build.Inline import cython_inline
        return cython_inline(f, *args, **kwds)
    else:
        assert len(args) == len(kwds) == 0
        return f

def compile(f):
    from Cython.Build.Inline import RuntimeCompiledFunction
    return RuntimeCompiledFunction(f)
# Special functions
def cdiv(a, b):
    q = a / b
    if q < 0:
        q += 1
    return q

def cmod(a, b):
    r = a % b
    if (a*b) < 0:
        r -= b
    return r
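`cdiv`/`cmod` emulate C's truncate-toward-zero division and dividend-signed remainder. This shadow module is Python 2-era code (note `basestring` and `iteritems` elsewhere in the file), where `/` on ints floors, so `q += 1` corrects the floor toward zero; under Python 3 the same idea needs `//`. A Python 3 sketch of the intended semantics:

```python
def c_div(a, b):
    # C division truncates toward zero; Python's // floors
    q = a // b
    if q < 0 and q * b != a:
        q += 1
    return q

def c_mod(a, b):
    # C remainder takes the sign of the dividend: a == c_div(a, b) * b + c_mod(a, b)
    return a - c_div(a, b) * b

print(c_div(-7, 2), -7 // 2)   # C result vs Python floor division
print(c_mod(-7, 2), -7 % 2)    # C result vs Python modulo
```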
# Emulated language constructs
def cast(type, *args):
    if hasattr(type, '__call__'):
        return type(*args)
    else:
        return args[0]

def sizeof(arg):
    return 1

def typeof(arg):
    return arg.__class__.__name__
    # return type(arg)

def address(arg):
    return pointer(type(arg))([arg])

def declare(type=None, value=_Unspecified, **kwds):
    if type not in (None, object) and hasattr(type, '__call__'):
        if value is not _Unspecified:
            return type(value)
        else:
            return type()
    else:
        return value
class _nogil(object):
    """Support for 'with nogil' statement
    """
    def __enter__(self):
        pass
    def __exit__(self, exc_class, exc, tb):
        return exc_class is None
nogil = _nogil()
gil = _nogil()
del _nogil
# Emulated types
class CythonMetaType(type):

    def __getitem__(type, ix):
        return array(type, ix)

CythonTypeObject = CythonMetaType('CythonTypeObject', (object,), {})

class CythonType(CythonTypeObject):

    def _pointer(self, n=1):
        for i in range(n):
            self = pointer(self)
        return self
class PointerType(CythonType):

    def __init__(self, value=None):
        if isinstance(value, (ArrayType, PointerType)):
            self._items = [cast(self._basetype, a) for a in value._items]
        elif isinstance(value, list):
            self._items = [cast(self._basetype, a) for a in value]
        elif value is None or value == 0:
            self._items = []
        else:
            raise ValueError

    def __getitem__(self, ix):
        if ix < 0:
            raise IndexError("negative indexing not allowed in C")
        return self._items[ix]

    def __setitem__(self, ix, value):
        if ix < 0:
            raise IndexError("negative indexing not allowed in C")
        self._items[ix] = cast(self._basetype, value)

    def __eq__(self, value):
        if value is None and not self._items:
            return True
        elif type(self) != type(value):
            return False
        else:
            return not self._items and not value._items

    def __repr__(self):
        return "%s *" % (self._basetype,)
class ArrayType(PointerType):

    def __init__(self):
        self._items = [None] * self._n

class StructType(CythonType):

    def __init__(self, cast_from=_Unspecified, **data):
        if cast_from is not _Unspecified:
            # do cast
            if len(data) > 0:
                raise ValueError('Cannot accept keyword arguments when casting.')
            if type(cast_from) is not type(self):
                raise ValueError('Cannot cast from %s'%cast_from)
            for key, value in cast_from.__dict__.items():
                setattr(self, key, value)
        else:
            for key, value in data.iteritems():
                setattr(self, key, value)

    def __setattr__(self, key, value):
        if key in self._members:
            self.__dict__[key] = cast(self._members[key], value)
        else:
            raise AttributeError("Struct has no member '%s'" % key)
class UnionType(CythonType):

    def __init__(self, cast_from=_Unspecified, **data):
        if cast_from is not _Unspecified:
            # do type cast
            if len(data) > 0:
                raise ValueError('Cannot accept keyword arguments when casting.')
            if isinstance(cast_from, dict):
                datadict = cast_from
            elif type(cast_from) is type(self):
                datadict = cast_from.__dict__
            else:
                raise ValueError('Cannot cast from %s'%cast_from)
        else:
            datadict = data

        if len(datadict) > 1:
            raise AttributeError("Union can only store one field at a time.")
        for key, value in datadict.iteritems():
            setattr(self, key, value)

    def __setattr__(self, key, value):
        if key in '__dict__':
            CythonType.__setattr__(self, key, value)
        elif key in self._members:
            self.__dict__ = {key: cast(self._members[key], value)}
        else:
            raise AttributeError("Union has no member '%s'" % key)
def pointer(basetype):
    class PointerInstance(PointerType):
        _basetype = basetype
    return PointerInstance

def array(basetype, n):
    class ArrayInstance(ArrayType):
        _basetype = basetype
        _n = n
    return ArrayInstance

def struct(**members):
    class StructInstance(StructType):
        _members = members
    for key in members:
        setattr(StructInstance, key, None)
    return StructInstance

def union(**members):
    class UnionInstance(UnionType):
        _members = members
    for key in members:
        setattr(UnionInstance, key, None)
    return UnionInstance
class typedef(CythonType):

    def __init__(self, type, name=None):
        self._basetype = type
        self.name = name

    def __call__(self, *arg):
        value = cast(self._basetype, *arg)
        return value

    def __repr__(self):
        return self.name or str(self._basetype)

    __getitem__ = index_type

class _FusedType(CythonType):
    pass
def fused_type(*args):
    if not args:
        raise TypeError("Expected at least one type as argument")

    # Find the numeric type with biggest rank if all types are numeric
    rank = -1
    for type in args:
        if type not in (py_int, py_long, py_float, py_complex):
            break

        if type_ordering.index(type) > rank:
            result_type = type
    else:
        return result_type

    # Not a simple numeric type, return a fused type instance. The result
    # isn't really meant to be used, as we can't keep track of the context in
    # pure-mode. Casting won't do anything in this case.
    return _FusedType()
def _specialized_from_args(signatures, args, kwargs):
    "Perhaps this should be implemented in a TreeFragment in Cython code"
    raise Exception("yet to be implemented")
py_int = typedef(int, "int")
try:
    py_long = typedef(long, "long")
except NameError:  # Py3
    py_long = typedef(int, "long")
py_float = typedef(float, "float")
py_complex = typedef(complex, "double complex")
# Predefined types
int_types = ['char', 'short', 'Py_UNICODE', 'int', 'Py_UCS4', 'long', 'longlong', 'Py_ssize_t', 'size_t']
float_types = ['longdouble', 'double', 'float']
complex_types = ['longdoublecomplex', 'doublecomplex', 'floatcomplex', 'complex']
other_types = ['bint', 'void']
to_repr = {
    'longlong': 'long long',
    'longdouble': 'long double',
    'longdoublecomplex': 'long double complex',
    'doublecomplex': 'double complex',
    'floatcomplex': 'float complex',
}.get
gs = globals()
# note: cannot simply name the unicode type here as 2to3 gets in the way and replaces it by str
try:
    import __builtin__ as builtins
except ImportError:  # Py3
    import builtins

gs['unicode'] = typedef(getattr(builtins, 'unicode', str), 'unicode')
del builtins
for name in int_types:
    reprname = to_repr(name, name)
    gs[name] = typedef(py_int, reprname)
    if name not in ('Py_UNICODE', 'Py_UCS4') and not name.endswith('size_t'):
        gs['u'+name] = typedef(py_int, "unsigned " + reprname)
        gs['s'+name] = typedef(py_int, "signed " + reprname)

for name in float_types:
    gs[name] = typedef(py_float, to_repr(name, name))

for name in complex_types:
    gs[name] = typedef(py_complex, to_repr(name, name))

bint = typedef(bool, "bint")
void = typedef(int, "void")

for t in int_types + float_types + complex_types + other_types:
    for i in range(1, 4):
        gs["%s_%s" % ('p'*i, t)] = globals()[t]._pointer(i)
void = typedef(None, "void")
NULL = p_void(0)
integral = floating = numeric = _FusedType()
type_ordering = [py_int, py_long, py_float, py_complex]
class CythonDotParallel(object):
    """
    The cython.parallel module.
    """

    __all__ = ['parallel', 'prange', 'threadid']

    def parallel(self, num_threads=None):
        return nogil

    def prange(self, start=0, stop=None, step=1, schedule=None, nogil=False):
        if stop is None:
            stop = start
            start = 0
        return range(start, stop, step)

    def threadid(self):
        return 0

    # def threadsavailable(self):
    #     return 1
import sys
sys.modules['cython.parallel'] = CythonDotParallel()
del sys
| [
"Cython.Build.Inline.cython_inline",
"Cython.Build.Inline.RuntimeCompiledFunction"
] | [((3192, 3218), 'Cython.Build.Inline.RuntimeCompiledFunction', 'RuntimeCompiledFunction', (['f'], {}), '(f)\n', (3215, 3218), False, 'from Cython.Build.Inline import RuntimeCompiledFunction\n'), ((3012, 3043), 'Cython.Build.Inline.cython_inline', 'cython_inline', (['f', '*args'], {}), '(f, *args, **kwds)\n', (3025, 3043), False, 'from Cython.Build.Inline import cython_inline\n')] |
import torch
import random
from tqdm import trange
from layers import Subgraph, Discriminator
from utils import GraphDatasetGenerator
import itertools
import json
from tqdm import tqdm
import numpy as np
import os
class Subgraph_Learning(object):
    def __init__(self, args):
        super(Subgraph_Learning, self).__init__()
        self.args = args
        self.dataset_generator = GraphDatasetGenerator(self.args.data)
        self.batch_size = self.args.batch_size
        self.train_percent = self.args.train_percent
        self.valiate_percent = self.args.validate_percent
        self.D_criterion = torch.nn.BCEWithLogitsLoss()
        self.inner_loop = self.args.inner_loop

    def _dataset_spilt(self):
        Data_Length = len(self.dataset_generator.graphs)
        Training_Length = int(self.train_percent * Data_Length)
        Validate_Length = int(self.valiate_percent * Data_Length)
        Testing_Length = Data_Length - Training_Length - Validate_Length
        test_ind = [i for i in range(0, Testing_Length)]
        all_ind = [j for j in range(0, Data_Length)]
        train_val_ind = list(set(all_ind)-set(test_ind))
        train_ind = train_val_ind[0:Training_Length]
        validate_ind = train_val_ind[Training_Length:]
        self.training_data = [self.dataset_generator.graphs[i] for i in train_ind]
        self.valiate_data = [self.dataset_generator.graphs[i] for i in validate_ind]
        self.testing_data = [self.dataset_generator.graphs[i] for i in test_ind]

    def _setup_model(self):
        self.model = Subgraph(self.args, self.dataset_generator.number_of_features)
        self.discriminator = Discriminator(self.args)
        if torch.cuda.is_available():
            self.discriminator = Discriminator(self.args).cuda()
            self.model = Subgraph(self.args, self.dataset_generator.number_of_features).cuda()

    def set_requires_grad(self, net, requires_grad=False):
        if net is not None:
            for param in net.parameters():
                param.requires_grad = requires_grad

    def fit_a_single_model(self):
        self._dataset_spilt()
        self._setup_model()
        optimizer = torch.optim.Adam(self.model.parameters(),
                                     lr=self.args.learning_rate,
                                     weight_decay=self.args.weight_decay)
        Data_Length = len(self.training_data)
        Num_split = int(Data_Length / self.batch_size)
        for _ in tqdm(range(self.args.epochs)):
            for i in range(0, Num_split):
                data = self.training_data[int(i*self.batch_size): min(int((i+1)*self.batch_size),Data_Length)]
                embeddings, positive, negative, cls_loss, positive_penalty = self.model(data)
                for j in range(0, self.inner_loop):
                    optimizer_local = torch.optim.Adam(self.discriminator.parameters(),
                                                     lr=self.args.learning_rate,
                                                     weight_decay=self.args.weight_decay)
                    optimizer_local.zero_grad()
                    local_loss = - self.MI_Est(self.discriminator, embeddings, positive)
                    local_loss.backward(retain_graph = True)
                    optimizer_local.step()
                mi_loss = self.MI_Est(self.discriminator, embeddings, positive)
                optimizer.zero_grad()
                loss = cls_loss + positive_penalty + self.args.mi_weight * mi_loss
                loss.backward()
                optimizer.step()
                print("Loss:%.2f"%(loss))

    def MI_Est(self, discriminator, embeddings, positive):
        shuffle_embeddings = embeddings[torch.randperm(self.batch_size)]
        joint = discriminator(embeddings,positive)
        margin = discriminator(shuffle_embeddings,positive)
        mi_est = torch.mean(joint) - torch.log(torch.mean(torch.exp(margin)))
        return mi_est

    def return_index(self,data):
        self.model.eval()
        ind = self.model.assemble(data)
        return ind

    def validate(self):
        ind = self.return_index(self.valiate_data)
        count = 0
        for data in ind:
            save_path = os.path.join(self.args.save_validate, str(count) + '.json')
            dump_data = json.dumps(data)
            F = open(save_path, 'w')
            F.write(dump_data)
            F.close()
            count += 1

    def test(self):
        ind = self.return_index(self.testing_data)
        count = 0
        for data in ind:
            save_path = os.path.join(self.args.save_test, str(count) + '.json')
            dump_data = json.dumps(data)
            F = open(save_path, 'w')
            F.write(dump_data)
            F.close()
            count += 1

    def fit(self):
        print("\nTraining started.\n")
        self.fit_a_single_model() | [
"torch.randperm",
"torch.mean",
"json.dumps",
"torch.exp",
"layers.Discriminator",
"layers.Subgraph",
"torch.cuda.is_available",
"torch.nn.BCEWithLogitsLoss",
"utils.GraphDatasetGenerator"
] | [((388, 425), 'utils.GraphDatasetGenerator', 'GraphDatasetGenerator', (['self.args.data'], {}), '(self.args.data)\n', (409, 425), False, 'from utils import GraphDatasetGenerator\n'), ((611, 639), 'torch.nn.BCEWithLogitsLoss', 'torch.nn.BCEWithLogitsLoss', ([], {}), '()\n', (637, 639), False, 'import torch\n'), ((1552, 1614), 'layers.Subgraph', 'Subgraph', (['self.args', 'self.dataset_generator.number_of_features'], {}), '(self.args, self.dataset_generator.number_of_features)\n', (1560, 1614), False, 'from layers import Subgraph, Discriminator\n'), ((1644, 1668), 'layers.Discriminator', 'Discriminator', (['self.args'], {}), '(self.args)\n', (1657, 1668), False, 'from layers import Subgraph, Discriminator\n'), ((1681, 1706), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (1704, 1706), False, 'import torch\n'), ((3714, 3745), 'torch.randperm', 'torch.randperm', (['self.batch_size'], {}), '(self.batch_size)\n', (3728, 3745), False, 'import torch\n'), ((3875, 3892), 'torch.mean', 'torch.mean', (['joint'], {}), '(joint)\n', (3885, 3892), False, 'import torch\n'), ((4307, 4323), 'json.dumps', 'json.dumps', (['data'], {}), '(data)\n', (4317, 4323), False, 'import json\n'), ((4657, 4673), 'json.dumps', 'json.dumps', (['data'], {}), '(data)\n', (4667, 4673), False, 'import json\n'), ((1741, 1765), 'layers.Discriminator', 'Discriminator', (['self.args'], {}), '(self.args)\n', (1754, 1765), False, 'from layers import Subgraph, Discriminator\n'), ((1798, 1860), 'layers.Subgraph', 'Subgraph', (['self.args', 'self.dataset_generator.number_of_features'], {}), '(self.args, self.dataset_generator.number_of_features)\n', (1806, 1860), False, 'from layers import Subgraph, Discriminator\n'), ((3916, 3933), 'torch.exp', 'torch.exp', (['margin'], {}), '(margin)\n', (3925, 3933), False, 'import torch\n')] |
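`MI_Est` above is a Donsker–Varadhan-style lower bound on mutual information: `E_joint[T] - log E_marginal[exp(T)]`, where the marginal term comes from shuffling embeddings within the batch (`torch.randperm`). A NumPy sketch of the same estimate; the dot-product `score` function is a toy stand-in for the learned discriminator, and the data shapes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_lower_bound(scores_joint, scores_marginal):
    # Donsker-Varadhan bound: E_p[T] - log E_q[exp(T)]
    return float(scores_joint.mean() - np.log(np.exp(scores_marginal).mean()))

def score(x, y):
    # toy critic: mean elementwise product of the paired vectors
    return (x * y).mean(axis=1)

x = rng.normal(size=(256, 8))
y = x + 0.1 * rng.normal(size=(256, 8))       # strongly dependent pairs
joint = score(x, y)
marginal = score(x[rng.permutation(256)], y)   # shuffling breaks the pairing,
                                               # mirroring torch.randperm in MI_Est
print(mi_lower_bound(joint, marginal))
```

For dependent pairs the bound comes out positive; for independent data it hovers near zero.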
import numpy as np
from pyray.shapes.twod.paraboloid import *
from pyray.shapes.twod.functional import *
from pyray.rotation import *
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib as mpl
import os
from PIL import Image, ImageDraw
basedir = '.\\Images\\RotatingCube\\'
if os.name == 'posix':
basedir = 'Images/RotatingCube/'
def draw_cubic():
fn = lambda x,y: x**3+y**3
for i in range(20):
im = Image.new("RGB", (2048, 2048), "black")
draw = ImageDraw.Draw(im, 'RGBA')
r = general_rotation(np.array([1,0,0]),np.pi/120*i)
#drawFunctionalXYGridInCircle(draw, r, fn=fn, scale=10.0)
im.save(basedir + 'im' + str(i) + '.png')
def three_d_grid():
fig = plt.figure()
ax = fig.gca(projection='3d')
# Make data.
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = (X**3 + Y**3)
Z = R
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the z axis.
#ax.set_zlim(-1.01, 1.01)
#ax.zaxis.set_major_locator(LinearLocator(10))
#ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# Add a color bar which maps values to colors.
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
mpl.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
theta = np.linspace(0, 2 * np.pi, 100)
for r in np.arange(0.1,1.0,0.1):
#r = 1.0
x = r * np.sin(theta)
y = r * np.cos(theta)
z = x**3+y**3
ax.plot(x, y, z, label='parametric curve')
#ax.legend()
plt.show()
def paraboloid_w_grad(im_ind=0, scale=200, shift=np.array([1000,1000,0]), opacity=60,
basepath='.\\'):
r1 = np.eye(4)
rot = general_rotation(np.array([0,0,1]), np.pi/20.0 * (8 + im_ind/3.0))
j=4
r = rotation(3, 2 * np.pi* j /30.0)
rr = general_rotation(np.array([0,1,0]), np.pi/20.0 * (im_ind/7.0))
r = np.dot(r,rr)
r = np.dot(r, rot)
r1[:3,:3] = r
im = Image.new("RGB", (2048, 2048), "black")
draw = ImageDraw.Draw(im, 'RGBA')
render_scene_4d_axis(draw, r1, 4, scale, shift)
# This is what draws the pink paraboloid.
for z in np.arange(0.001, 3.5, 0.02):
point1 = np.array([np.sqrt(z),0,z])
generalized_arc(draw, r, center=np.array([0,0,z]), vec=np.array([0,0,1]),
point=point1, radius=np.sqrt(z), prcnt=1.0,
rgba=(255,20,147,50))
xax1=np.array([-100.0,0,0.0]);xax1=np.dot(r,xax1)*scale+shift
xax2=np.array([100.0,0,0.0]);xax2=np.dot(r,xax2)*scale+shift
draw.line((xax1[0], xax1[1], xax2[0], xax2[1]), fill=(255,255,0), width=4)
xax1=np.array([0.0,-100,0.0]);xax1=np.dot(r,xax1)*scale+shift
xax2=np.array([0.0,100,0.0]);xax2=np.dot(r,xax2)*scale+shift
draw.line((xax1[0], xax1[1], xax2[0], xax2[1]), fill=(255,255,0), width=4)
#gradients(draw,r)
pt = shift
draw.ellipse((pt[0]-10, pt[1]-10, pt[0]+10, pt[1]+10), fill = (0,255,0))
draw_paraboloid_plane(draw,r,3.3)
draw_paraboloid_plane(draw,r,2.0,extent=1.4)
draw_paraboloid_plane(draw,r,1.0,extent=1.0)
im.save(basepath + 'im' + str(im_ind) + '.png')
def gradients(draw,r):
#for z in [0.3,1.3,2.3,3.3]:
for z in [3.3,2.0,1.0]:
x = np.sqrt(z)
for x in np.arange(-x,x,x/2):
y = np.sqrt(z-x*x)
arrowV1(draw,r,np.array([y,x,z]), np.array([1.5*y,1.5*x,z]), (204,102,255))
if z>3.0:
arrowV1(draw,r,np.array([-y,x,z]), np.array([-1.5*y,1.5*x,z]), (204,102,255))
def draw_paraboloid_plane(draw,r,z=3.3,scale=200,shift=np.array([1000,1000,0]),extent=2):
pt1=np.array([extent,extent,z]);pt1=np.dot(r,pt1)*scale+shift
pt2=np.array([extent,-extent,z]);pt2=np.dot(r,pt2)*scale+shift
pt3=np.array([-extent,-extent,z]);pt3=np.dot(r,pt3)*scale+shift
pt4=np.array([-extent,extent,z]);pt4=np.dot(r,pt4)*scale+shift
draw.polygon([(pt1[0], pt1[1]), (pt2[0], pt2[1]), (pt3[0], pt3[1]), (pt4[0], pt4[1])],\
(0,102,255,50))
point1 = np.array([np.sqrt(z),0,z])
generalized_arc(draw, r, center=np.array([0,0,z]), vec=np.array([0,0,1]),
point=point1, radius=np.sqrt(z), prcnt=1.0,scale=scale,
rgba=(255,20,10,100),width=10)
def plane_w_arrows(im_ind=0, scale=200,\
shift=np.array([824,824,0]),\
basepath='.\\'):
r1 = np.eye(4)
rot = general_rotation(np.array([0,0,1]), np.pi/20.0*(8 + im_ind/3.0))
j=4
r = rotation(3, 2*np.pi*j/30.0)
rr = general_rotation(np.array([0,1,0]), np.pi/20.0*(im_ind/7.0))
r = np.dot(r,rr)
r = np.dot(r, rot)
r1[:3,:3] = r
im = Image.new("RGB", (1648, 1648), "black")
draw = ImageDraw.Draw(im, 'RGBA')
pt1 = 3*np.array([1.0,-1.0,0]); pt2 = 3*np.array([1.0,1.0,0])
z = 1.2**2+1
pt3 = 3*np.array([-1.0,1.0,0]); pt4 = 3*np.array([-1.0,-1.0,0])
pt1 = np.dot(r,pt1)*scale+shift; pt2 = np.dot(r,pt2)*scale+shift
pt3 = np.dot(r,pt3)*scale+shift; pt4 = np.dot(r,pt4)*scale+shift
draw.polygon([(pt1[0], pt1[1]), (pt2[0], pt2[1]), (pt3[0], pt3[1]), (pt4[0], pt4[1])],\
(0,102,255,50))
draw_arrows(draw,r,rgba=(255,250,47),shift=shift)
draw_arrows(draw,r,rot_angl=np.pi/2.0, rgba=(73,200,250),shift=shift)
draw_arrows(draw,r,rot_angl=np.pi/2.0+np.pi/3, rgba=(255,20,147),shift=shift)
arrowV1(draw,r,np.array([0,0,0]), np.array([0,0,2.5]), shift=shift,rgb=(20,200,25))
arrowV1(draw,r,np.array([0,0,0]), np.array([0,0,-2.5]), shift=shift,rgb=(255,20,25))
im.save(basepath + 'im' + str(im_ind) + '.png')
def draw_arrows(draw,r,rot_angl=np.pi/6.0,rgba=(255,20,147),shift=np.array([1000,1000,0])):
base = np.array([0,0,1.5])
for theta in np.arange(0,np.pi*2,2*np.pi/3):
a = np.array([np.cos(theta),np.sin(theta),0])
rr = general_rotation(a, rot_angl)
arrow1 = np.dot(rr,base)
arrowV1(draw,r,np.array([0,0,0]), arrow1, rgb=rgba,shift=shift)
rgba = rgba+(150,)
generalized_arc(draw, r, center=np.array([0,0,1.5*np.cos(rot_angl)]),
vec=np.array([0,0,1]),
point=1.5*np.array([0,np.sin(rot_angl),np.cos(rot_angl)]),
radius=100, prcnt=1.0,
rgba=rgba,shift=shift)
#####################
## Paraboloid with Lagrange visualized.
im = Image.new("RGB", (2048, 2048), (1, 1, 1))
draw = ImageDraw.Draw(im, 'RGBA')
scale=5.0; ind=0; sep = 24; i = 2.0; base_coeff = 0.02; start_line = -12.0
shift = np.array([1000.0, 1000.0, 0.0])
r1 = np.eye(4); j=24
r = rotation(3, np.pi/30*j)
r1[:3,:3] = r
render_scene_4d_axis(draw, r1, 4)
fn = lambda x, y : paraboloid(x, y, coeff=i*base_coeff, intercept=i)
drawFunctionalXYGrid(draw, r, scale=scale, fn=fn,
extent=60, rgba2=(255,20,147,80),
saperatingPlane=np.array([-1,-1,sep]))
three_d_parabola(draw, r, r2)  # note: r2 (a second rotation matrix) is not defined in this snippet
im.save(basedir + 'im' + str(0) + '.png')
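The drawing helpers above all funnel points through the same world-to-canvas pattern: rotate a 3-d point with a matrix, scale it, then shift it toward the image center (`np.dot(r, pt) * scale + shift`). A minimal sketch of that pattern in plain numpy — the rotation matrix here is a hand-built z-axis rotation, not pyray's `general_rotation`:

```python
import numpy as np

def to_canvas(r, pt, scale=200.0, shift=np.array([1000.0, 1000.0, 0.0])):
    """Rotate a 3-d point, then scale and translate it into image coordinates."""
    return np.dot(r, pt) * scale + shift

# A rotation by 90 degrees about the z-axis.
theta = np.pi / 2
rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])

p = to_canvas(rz, np.array([1.0, 0.0, 0.0]))
# x=1 rotates onto the y-axis, so the canvas point is shift + (0, 200, 0).
```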
from django import forms
from core.models import Post
class PostCreateForm(forms.ModelForm):
class Meta:
model = Post
fields = ('text','image')
widgets = {
'text' : forms.TextInput(attrs={
'class' : 'form-control form-control-sm',
'placeholder' :'Caption this .... '
})
        }
import cc_dat_utils
#Part 1
input_dat_file = "data/pfgd_test.dat"
#Use cc_dat_utils.make_cc_level_pack_from_dat() to load the file specified by input_dat_file
mylevel = cc_dat_utils.make_cc_level_pack_from_dat(input_dat_file)
#print the resulting data
print(mylevel)
import os
import numpy as np
from vmaf import plt
from vmaf.core.cross_validation import ModelCrossValidation
from vmaf.core.feature_assembler import FeatureAssembler
from vmaf.core.quality_runner import VmafQualityRunner
from vmaf.core.result_store import FileSystemResultStore
from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension
from vmaf.config import VmafConfig, DisplayConfig
from vmaf.core.asset import Asset
from vmaf.core.train_test_model import TrainTestModel, RegressorMixin, ClassifierMixin
from vmaf.core.local_explainer import LocalExplainer
__copyright__ = "Copyright 2016-2020, Netflix, Inc."
__license__ = "BSD+Patent"
def read_dataset(dataset, **kwargs):
groundtruth_key = kwargs['groundtruth_key'] if 'groundtruth_key' in kwargs else None
skip_asset_with_none_groundtruth = kwargs['skip_asset_with_none_groundtruth'] \
if 'skip_asset_with_none_groundtruth' in kwargs else False
content_ids = kwargs['content_ids'] if 'content_ids' in kwargs else None
asset_ids = kwargs['asset_ids'] if 'asset_ids' in kwargs else None
workdir_root = kwargs['workdir_root'] if 'workdir_root' in kwargs else VmafConfig.workdir_path()
# asserts, can add more to the list...
assert hasattr(dataset, 'dataset_name')
assert hasattr(dataset, 'ref_videos')
assert hasattr(dataset, 'dis_videos')
assert hasattr(dataset, 'yuv_fmt') or all(['yuv_fmt' in ref_video for ref_video in dataset.ref_videos])
data_set_name = dataset.dataset_name
ref_videos = dataset.ref_videos
dis_videos = dataset.dis_videos
width = dataset.width if hasattr(dataset, 'width') else None
height = dataset.height if hasattr(dataset, 'height') else None
yuv_fmt = dataset.yuv_fmt if hasattr(dataset, 'yuv_fmt') else None
quality_width = dataset.quality_width if hasattr(dataset, 'quality_width') else None
quality_height = dataset.quality_height if hasattr(dataset, 'quality_height') else None
resampling_type = dataset.resampling_type if hasattr(dataset, 'resampling_type') else None
crop_cmd = dataset.crop_cmd if hasattr(dataset, 'crop_cmd') else None
pad_cmd = dataset.pad_cmd if hasattr(dataset, 'pad_cmd') else None
workfile_yuv_type = dataset.workfile_yuv_type if hasattr(dataset, 'workfile_yuv_type') else None
duration_sec = dataset.duration_sec if hasattr(dataset, 'duration_sec') else None
fps = dataset.fps if hasattr(dataset, 'fps') else None
start_frame = dataset.start_frame if hasattr(dataset, 'start_frame') else None
end_frame = dataset.end_frame if hasattr(dataset, 'end_frame') else None
ref_dict = {} # dictionary of content_id -> path for ref videos
for ref_video in ref_videos:
ref_dict[ref_video['content_id']] = ref_video
assets = []
for dis_video in dis_videos:
if content_ids is not None and dis_video['content_id'] not in content_ids:
continue
if asset_ids is not None and dis_video['asset_id'] not in asset_ids:
continue
if groundtruth_key is not None:
groundtruth = dis_video[groundtruth_key]
else:
if 'dmos' in dis_video:
groundtruth = dis_video['dmos']
elif 'mos' in dis_video:
groundtruth = dis_video['mos']
elif 'groundtruth' in dis_video:
groundtruth = dis_video['groundtruth']
else:
groundtruth = None
if 'os' in dis_video:
raw_groundtruth = dis_video['os']
else:
raw_groundtruth = None
if 'groundtruth_std' in dis_video:
groundtruth_std = dis_video['groundtruth_std']
else:
groundtruth_std = None
if 'rebuf_indices' in dis_video:
rebuf_indices = dis_video['rebuf_indices']
else:
rebuf_indices = None
ref_video = ref_dict[dis_video['content_id']]
ref_path = ref_video['path']
ref_yuv_fmt_ = yuv_fmt if yuv_fmt is not None else ref_dict[dis_video['content_id']]['yuv_fmt']
dis_yuv_fmt_ = dis_video['yuv_fmt'] if 'yuv_fmt' in dis_video else ref_yuv_fmt_
if width is not None:
width_ = width
elif 'width' in ref_video and 'width' not in dis_video:
width_ = ref_video['width']
elif 'width' in dis_video and 'width' not in ref_video:
width_ = dis_video['width']
elif 'width' in ref_video and 'width' in dis_video:
assert ref_video['width'] == dis_video['width']
width_ = ref_video['width']
else:
width_ = None
if height is not None:
height_ = height
elif 'height' in ref_video and 'height' not in dis_video:
height_ = ref_video['height']
elif 'height' in dis_video and 'height' not in ref_video:
height_ = dis_video['height']
elif 'height' in ref_video and 'height' in dis_video:
assert ref_video['height'] == dis_video['height']
height_ = ref_video['height']
else:
height_ = None
if quality_width is not None:
quality_width_ = quality_width
elif 'quality_width' in dis_video:
quality_width_ = dis_video['quality_width']
else:
quality_width_ = None
if quality_height is not None:
quality_height_ = quality_height
elif 'quality_height' in dis_video:
quality_height_ = dis_video['quality_height']
else:
quality_height_ = None
if resampling_type is not None:
resampling_type_ = resampling_type
elif 'resampling_type' in dis_video:
resampling_type_ = dis_video['resampling_type']
else:
resampling_type_ = None
if crop_cmd is not None:
ref_crop_cmd_ = crop_cmd
dis_crop_cmd_ = crop_cmd
else:
if 'crop_cmd' in ref_video:
ref_crop_cmd_ = ref_video['crop_cmd']
else:
ref_crop_cmd_ = None
if 'crop_cmd' in dis_video:
dis_crop_cmd_ = dis_video['crop_cmd']
else:
dis_crop_cmd_ = None
if pad_cmd is not None:
ref_pad_cmd_ = pad_cmd
dis_pad_cmd_ = pad_cmd
else:
if 'pad_cmd' in ref_video:
ref_pad_cmd_ = ref_video['pad_cmd']
else:
ref_pad_cmd_ = None
if 'pad_cmd' in dis_video:
dis_pad_cmd_ = dis_video['pad_cmd']
else:
dis_pad_cmd_ = None
if duration_sec is not None:
duration_sec_ = duration_sec
elif 'duration_sec' in dis_video:
duration_sec_ = dis_video['duration_sec']
else:
duration_sec_ = None
if fps is not None:
fps_ = fps
elif 'fps' in dis_video:
fps_ = dis_video['fps']
else:
fps_ = None
if start_frame is not None:
start_frame_ = start_frame
elif 'start_frame' in dis_video:
start_frame_ = dis_video['start_frame']
else:
start_frame_ = None
if end_frame is not None:
end_frame_ = end_frame
elif 'end_frame' in dis_video:
end_frame_ = dis_video['end_frame']
else:
end_frame_ = None
asset_dict = {'ref_yuv_type': ref_yuv_fmt_, 'dis_yuv_type': dis_yuv_fmt_}
if width_ is not None:
if asset_dict['ref_yuv_type'] != 'notyuv':
asset_dict['ref_width'] = width_
if asset_dict['dis_yuv_type'] != 'notyuv':
asset_dict['dis_width'] = width_
if height_ is not None:
if asset_dict['ref_yuv_type'] != 'notyuv':
asset_dict['ref_height'] = height_
if asset_dict['dis_yuv_type'] != 'notyuv':
asset_dict['dis_height'] = height_
if groundtruth is not None:
asset_dict['groundtruth'] = groundtruth
if raw_groundtruth is not None:
asset_dict['raw_groundtruth'] = raw_groundtruth
if groundtruth_std is not None:
asset_dict['groundtruth_std'] = groundtruth_std
if quality_width_ is not None:
asset_dict['quality_width'] = quality_width_
if quality_height_ is not None:
asset_dict['quality_height'] = quality_height_
if resampling_type_ is not None:
asset_dict['resampling_type'] = resampling_type_
if ref_crop_cmd_ is not None:
asset_dict['ref_crop_cmd'] = ref_crop_cmd_
if dis_crop_cmd_ is not None:
asset_dict['dis_crop_cmd'] = dis_crop_cmd_
if ref_pad_cmd_ is not None:
asset_dict['ref_pad_cmd'] = ref_pad_cmd_
if dis_pad_cmd_ is not None:
asset_dict['dis_pad_cmd'] = dis_pad_cmd_
if duration_sec_ is not None:
asset_dict['duration_sec'] = duration_sec_
if workfile_yuv_type is not None:
asset_dict['workfile_yuv_type'] = workfile_yuv_type
if rebuf_indices is not None:
asset_dict['rebuf_indices'] = rebuf_indices
if fps_ is not None:
asset_dict['fps'] = fps_
if start_frame_ is not None:
asset_dict['start_frame'] = start_frame_
if end_frame_ is not None:
asset_dict['end_frame'] = end_frame_
if groundtruth is None and skip_asset_with_none_groundtruth:
pass
else:
asset = Asset(dataset=data_set_name,
content_id=dis_video['content_id'],
asset_id=dis_video['asset_id'],
workdir_root=workdir_root,
ref_path=ref_path,
dis_path=dis_video['path'],
asset_dict=asset_dict,
)
assets.append(asset)
return assets
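# read_dataset only relies on a handful of attributes of the dataset object
# (dataset_name, ref_videos, dis_videos, plus optional format fields). A minimal
# stand-in illustrating the expected shape — the paths, ids, and dmos values
# below are made up; a real dataset module defines these as module-level variables:

```python
from types import SimpleNamespace

# Hypothetical two-asset dataset with a single reference video.
dataset = SimpleNamespace(
    dataset_name='example_dataset',
    yuv_fmt='yuv420p',
    width=1920, height=1080,
    ref_videos=[{'content_id': 0, 'path': 'ref/src01.yuv'}],
    dis_videos=[
        {'content_id': 0, 'asset_id': 0, 'path': 'dis/src01_q1.yuv', 'dmos': 80.0},
        {'content_id': 0, 'asset_id': 1, 'path': 'dis/src01_q2.yuv', 'dmos': 60.0},
    ],
)

# The same content_id -> ref_video lookup that read_dataset builds internally.
ref_dict = {rv['content_id']: rv for rv in dataset.ref_videos}
groundtruths = [dv['dmos'] for dv in dataset.dis_videos]
```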
def run_test_on_dataset(test_dataset, runner_class, ax,
result_store, model_filepath,
parallelize=True, fifo_mode=True,
aggregate_method=np.mean,
type='regressor',
**kwargs):
test_assets = read_dataset(test_dataset, **kwargs)
test_raw_assets = None
try:
for test_asset in test_assets:
assert test_asset.groundtruth is not None
except AssertionError:
# no groundtruth, try do subjective modeling
from sureal.dataset_reader import RawDatasetReader
from sureal.subjective_model import DmosModel
subj_model_class = kwargs['subj_model_class'] if 'subj_model_class' in kwargs and kwargs['subj_model_class'] is not None else DmosModel
dataset_reader_class = kwargs['dataset_reader_class'] if 'dataset_reader_class' in kwargs else RawDatasetReader
subjective_model = subj_model_class(dataset_reader_class(test_dataset))
subjective_model.run_modeling(**kwargs)
test_dataset_aggregate = subjective_model.to_aggregated_dataset(**kwargs)
test_raw_assets = test_assets
test_assets = read_dataset(test_dataset_aggregate, **kwargs)
if model_filepath is not None:
optional_dict = {'model_filepath': model_filepath}
if 'model_720_filepath' in kwargs and kwargs['model_720_filepath'] is not None:
optional_dict['720model_filepath'] = kwargs['model_720_filepath']
if 'model_480_filepath' in kwargs and kwargs['model_480_filepath'] is not None:
optional_dict['480model_filepath'] = kwargs['model_480_filepath']
else:
optional_dict = None
if 'enable_transform_score' in kwargs and kwargs['enable_transform_score'] is not None:
if not optional_dict:
optional_dict = {}
optional_dict['enable_transform_score'] = kwargs['enable_transform_score']
if 'disable_clip_score' in kwargs and kwargs['disable_clip_score'] is not None:
if not optional_dict:
optional_dict = {}
optional_dict['disable_clip_score'] = kwargs['disable_clip_score']
if 'subsample' in kwargs and kwargs['subsample'] is not None:
if not optional_dict:
optional_dict = {}
optional_dict['subsample'] = kwargs['subsample']
# run
runner = runner_class(
test_assets,
None, fifo_mode=fifo_mode,
delete_workdir=True,
result_store=result_store,
optional_dict=optional_dict,
optional_dict2=None,
)
runner.run(parallelize=parallelize)
results = runner.results
for result in results:
result.set_score_aggregate_method(aggregate_method)
try:
model_type = runner.get_train_test_model_class()
except:
if type == 'regressor':
model_type = RegressorMixin
elif type == 'classifier':
model_type = ClassifierMixin
else:
assert False
split_test_indices_for_perf_ci = kwargs['split_test_indices_for_perf_ci'] \
if 'split_test_indices_for_perf_ci' in kwargs else False
# plot
groundtruths = list(map(lambda asset: asset.groundtruth, test_assets))
predictions = list(map(lambda result: result[runner_class.get_score_key()], results))
raw_grountruths = None if test_raw_assets is None else \
list(map(lambda asset: asset.raw_groundtruth, test_raw_assets))
groundtruths_std = None if test_assets is None else \
list(map(lambda asset: asset.groundtruth_std, test_assets))
try:
predictions_bagging = list(map(lambda result: result[runner_class.get_bagging_score_key()], results))
predictions_stddev = list(map(lambda result: result[runner_class.get_stddev_score_key()], results))
predictions_ci95_low = list(map(lambda result: result[runner_class.get_ci95_low_score_key()], results))
predictions_ci95_high = list(map(lambda result: result[runner_class.get_ci95_high_score_key()], results))
predictions_all_models = list(map(lambda result: result[runner_class.get_all_models_score_key()], results))
# need to revert the list of lists, so that the outer list has the predictions for each model separately
predictions_all_models = np.array(predictions_all_models).T.tolist()
num_models = np.shape(predictions_all_models)[0]
stats = model_type.get_stats(groundtruths, predictions,
ys_label_raw=raw_grountruths,
ys_label_pred_bagging=predictions_bagging,
ys_label_pred_stddev=predictions_stddev,
            ys_label_pred_ci95_low=predictions_ci95_low,
            ys_label_pred_ci95_high=predictions_ci95_high,
            ys_label_pred_all_models=predictions_all_models,
            ys_label_stddev=groundtruths_std,
            split_test_indices_for_perf_ci=split_test_indices_for_perf_ci)
    except Exception as e:
        print('Stats calculation failed, using default stats calculation. Error cause: ')
        print(e)
        stats = model_type.get_stats(groundtruths, predictions,
                                     ys_label_raw=raw_grountruths,
                                     ys_label_stddev=groundtruths_std,
                                     split_test_indices_for_perf_ci=split_test_indices_for_perf_ci)
        num_models = 1

    print('Stats on testing data: {}'.format(model_type.format_stats_for_print(stats)))

    # printing stats if multiple models are present
    if 'SRCC_across_model_distribution' in stats \
            and 'PCC_across_model_distribution' in stats \
            and 'RMSE_across_model_distribution' in stats:
        print('Stats on testing data (across multiple models, using all test indices): {}'.format(
            model_type.format_across_model_stats_for_print(model_type.extract_across_model_stats(stats))))

    if split_test_indices_for_perf_ci:
        print('Stats on testing data (single model, multiple test sets): {}'
              .format(model_type.format_stats_across_test_splits_for_print(model_type.extract_across_test_splits_stats(stats))))

    if ax is not None:
        content_ids = list(map(lambda asset: asset.content_id, test_assets))

        if 'point_label' in kwargs:
            if kwargs['point_label'] == 'asset_id':
                point_labels = list(map(lambda asset: asset.asset_id, test_assets))
            elif kwargs['point_label'] == 'dis_path':
                point_labels = list(map(lambda asset: get_file_name_without_extension(asset.dis_path), test_assets))
            else:
                raise AssertionError("Unknown point_label {}".format(kwargs['point_label']))
        else:
            point_labels = None

        model_type.plot_scatter(ax, stats, content_ids=content_ids, point_labels=point_labels, **kwargs)
        ax.set_xlabel('True Score')
        ax.set_ylabel("Predicted Score")
        ax.grid()
        ax.set_title("{runner}{num_models}\n{stats}".format(
            dataset=test_assets[0].dataset,
            runner=runner_class.TYPE,
            stats=model_type.format_stats_for_plot(stats),
            num_models=", {} models".format(num_models) if num_models > 1 else "",
        ))

    return test_assets, results
def print_matplotlib_warning():
    print("Warning: cannot import matplotlib, no picture displayed. "
          "If you are on Mac OS and have installed matplotlib, you "
          "possibly need to run: \nsudo pip uninstall python-dateutil \n"
          "sudo pip install python-dateutil==2.2 \n"
          "Refer to: http://stackoverflow.com/questions/27630114/matplotlib-issue-on-os-x-importerror-cannot-import-name-thread")
def train_test_vmaf_on_dataset(train_dataset, test_dataset,
                               feature_param, model_param,
                               train_ax, test_ax, result_store,
                               parallelize=True, logger=None, fifo_mode=True,
                               output_model_filepath=None,
                               aggregate_method=np.mean,
                               **kwargs):

    train_assets = read_dataset(train_dataset, **kwargs)
    train_raw_assets = None
    try:
        for train_asset in train_assets:
            assert train_asset.groundtruth is not None
    except AssertionError:
        # no groundtruth, try to do subjective modeling
        from sureal.dataset_reader import RawDatasetReader
        from sureal.subjective_model import DmosModel
        subj_model_class = kwargs['subj_model_class'] if 'subj_model_class' in kwargs and kwargs['subj_model_class'] is not None else DmosModel
        dataset_reader_class = kwargs['dataset_reader_class'] if 'dataset_reader_class' in kwargs else RawDatasetReader
        subjective_model = subj_model_class(dataset_reader_class(train_dataset))
        subjective_model.run_modeling(**kwargs)
        train_dataset_aggregate = subjective_model.to_aggregated_dataset(**kwargs)
        train_raw_assets = train_assets
        train_assets = read_dataset(train_dataset_aggregate, **kwargs)

    train_fassembler = FeatureAssembler(
        feature_dict=feature_param.feature_dict,
        feature_option_dict=None,
        assets=train_assets,
        logger=logger,
        fifo_mode=fifo_mode,
        delete_workdir=True,
        result_store=result_store,
        optional_dict=None,
        optional_dict2=None,
        parallelize=parallelize,
    )
    train_fassembler.run()
    train_features = train_fassembler.results

    for result in train_features:
        result.set_score_aggregate_method(aggregate_method)

    model_type = model_param.model_type
    model_param_dict = model_param.model_param_dict

    model_class = TrainTestModel.find_subclass(model_type)

    train_xys = model_class.get_xys_from_results(train_features)
    train_xs = model_class.get_xs_from_results(train_features)
    train_ys = model_class.get_ys_from_results(train_features)

    model = model_class(model_param_dict, logger)
    model.train(train_xys, **kwargs)

    # append additional information to model before saving, so that
    # VmafQualityRunner can read and process
    model.append_info('feature_dict', feature_param.feature_dict)
    if 'score_clip' in model_param_dict:
        VmafQualityRunner.set_clip_score(model, model_param_dict['score_clip'])
    if 'score_transform' in model_param_dict:
        VmafQualityRunner.set_transform_score(model, model_param_dict['score_transform'])

    train_ys_pred = VmafQualityRunner.predict_with_model(model, train_xs, **kwargs)['ys_pred']

    raw_groundtruths = None if train_raw_assets is None else \
        list(map(lambda asset: asset.raw_groundtruth, train_raw_assets))

    train_stats = model.get_stats(train_ys['label'], train_ys_pred, ys_label_raw=raw_groundtruths)

    log = 'Stats on training data: {}'.format(model.format_stats_for_print(train_stats))
    if logger:
        logger.info(log)
    else:
        print(log)

    # save model
    if output_model_filepath is not None:
        model.to_file(output_model_filepath)

    if train_ax is not None:
        train_content_ids = list(map(lambda asset: asset.content_id, train_assets))
        model_class.plot_scatter(train_ax, train_stats, content_ids=train_content_ids)

        train_ax.set_xlabel('True Score')
        train_ax.set_ylabel("Predicted Score")
        train_ax.grid()
        train_ax.set_title("Dataset: {dataset}, Model: {model}\n{stats}".format(
            dataset=train_dataset.dataset_name,
            model=model.model_id,
            stats=model_class.format_stats_for_plot(train_stats)
        ))

    # === test model on test dataset ===
    if test_dataset is None:
        test_assets = None
        test_stats = None
        test_fassembler = None
    else:
        test_assets = read_dataset(test_dataset, **kwargs)
        test_raw_assets = None
        try:
            for test_asset in test_assets:
                assert test_asset.groundtruth is not None
        except AssertionError:
            # no groundtruth, try to do subjective modeling
            from sureal.dataset_reader import RawDatasetReader
            from sureal.subjective_model import DmosModel
            subj_model_class = kwargs['subj_model_class'] if 'subj_model_class' in kwargs and kwargs['subj_model_class'] is not None else DmosModel
            dataset_reader_class = kwargs['dataset_reader_class'] if 'dataset_reader_class' in kwargs else RawDatasetReader
            subjective_model = subj_model_class(dataset_reader_class(test_dataset))
            subjective_model.run_modeling(**kwargs)
            test_dataset_aggregate = subjective_model.to_aggregated_dataset(**kwargs)
            test_raw_assets = test_assets
            test_assets = read_dataset(test_dataset_aggregate, **kwargs)

        test_fassembler = FeatureAssembler(
            feature_dict=feature_param.feature_dict,
            feature_option_dict=None,
            assets=test_assets,
            logger=logger,
            fifo_mode=fifo_mode,
            delete_workdir=True,
            result_store=result_store,
            optional_dict=None,
            optional_dict2=None,
            parallelize=True,
        )
        test_fassembler.run()
        test_features = test_fassembler.results

        for result in test_features:
            result.set_score_aggregate_method(aggregate_method)

        test_xs = model_class.get_xs_from_results(test_features)
        test_ys = model_class.get_ys_from_results(test_features)

        test_ys_pred = VmafQualityRunner.predict_with_model(model, test_xs, **kwargs)['ys_pred']

        raw_groundtruths = None if test_raw_assets is None else \
            list(map(lambda asset: asset.raw_groundtruth, test_raw_assets))

        test_stats = model.get_stats(test_ys['label'], test_ys_pred, ys_label_raw=raw_groundtruths)

        log = 'Stats on testing data: {}'.format(model_class.format_stats_for_print(test_stats))
        if logger:
            logger.info(log)
        else:
            print(log)

        if test_ax is not None:
            test_content_ids = list(map(lambda asset: asset.content_id, test_assets))
            model_class.plot_scatter(test_ax, test_stats, content_ids=test_content_ids)

            test_ax.set_xlabel('True Score')
            test_ax.set_ylabel("Predicted Score")
            test_ax.grid()
            test_ax.set_title("Dataset: {dataset}, Model: {model}\n{stats}".format(
                dataset=test_dataset.dataset_name,
                model=model.model_id,
                stats=model_class.format_stats_for_plot(test_stats)
            ))

    return train_fassembler, train_assets, train_stats, test_fassembler, test_assets, test_stats, model
def construct_kfold_list(assets, contentid_groups):
    # construct cross validation kfold input list
    content_ids = list(map(lambda asset: asset.content_id, assets))

    kfold = []
    for curr_content_group in contentid_groups:
        curr_indices = indices(content_ids, lambda x: x in curr_content_group)
        kfold.append(curr_indices)

    return kfold
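As a minimal, self-contained sketch of what `construct_kfold_list` produces — the vmaf `indices` helper and the asset objects are replaced with stand-ins here, so the values are purely illustrative:

```python
def indices(seq, predicate):
    # positions in seq whose value satisfies predicate (stand-in for vmaf.tools.misc.indices)
    return [i for i, v in enumerate(seq) if predicate(v)]

def construct_kfold_list(content_ids, contentid_groups):
    # one fold per content-id group, so clips of the same source content
    # never straddle a train/test split during cross validation
    kfold = []
    for curr_content_group in contentid_groups:
        kfold.append(indices(content_ids, lambda x: x in curr_content_group))
    return kfold

content_ids = [0, 0, 1, 1, 2, 2]  # two clips per content
folds = construct_kfold_list(content_ids, [[0, 2], [1]])
print(folds)  # [[0, 1, 4, 5], [2, 3]]
```

Grouping by content id (rather than splitting clips at random) is what keeps the held-out fold genuinely unseen content.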
def cv_on_dataset(dataset, feature_param, model_param, ax, result_store,
                  contentid_groups, logger=None, aggregate_method=np.mean):

    assets = read_dataset(dataset)
    kfold = construct_kfold_list(assets, contentid_groups)

    fassembler = FeatureAssembler(
        feature_dict=feature_param.feature_dict,
        feature_option_dict=None,
        assets=assets,
        logger=logger,
        delete_workdir=True,
        result_store=result_store,
        optional_dict=None,
        optional_dict2=None,
        parallelize=True, fifo_mode=True,
        # parallelize=False, fifo_mode=False,  # VQM
    )
    fassembler.run()
    results = fassembler.results

    for result in results:
        result.set_score_aggregate_method(aggregate_method)

    model_class = TrainTestModel.find_subclass(model_param.model_type)
    # run nested kfold cv for each combination
    cv_output = ModelCrossValidation.run_kfold_cross_validation(
        model_class,
        model_param.model_param_dict,
        results,
        kfold,
        logger=logger,
    )

    print('Feature parameters: {}'.format(feature_param.feature_dict))
    print('Model type: {}'.format(model_param.model_type))
    print('Model parameters: {}'.format(model_param.model_param_dict))
    print('Stats: {}'.format(model_class.format_stats_for_print(cv_output['aggr_stats'])))

    if ax is not None:
        model_class.plot_scatter(ax, cv_output['aggr_stats'], cv_output['contentids'])
        ax.set_xlabel('True Score')
        ax.set_ylabel("Predicted Score")
        ax.grid()
        ax.set_title("Dataset: {dataset}, Model: {model},\n{stats}".format(
            dataset=dataset.dataset_name,
            model=model_param.model_type,
            stats=model_class.format_stats_for_plot(cv_output['aggr_stats'])
        ))

    return assets, cv_output
def run_remove_results_for_dataset(result_store, dataset, executor_class):
    assets = read_dataset(dataset)
    executor = executor_class(assets=assets, logger=None, result_store=result_store)
    executor.remove_results()
def run_vmaf_cv(train_dataset_filepath,
                test_dataset_filepath,
                param_filepath,
                output_model_filepath=None,
                **kwargs):

    result_store_dir = kwargs['result_store_dir'] if 'result_store_dir' in kwargs else VmafConfig.file_result_store_path()

    logger = get_stdout_logger()
    result_store = FileSystemResultStore(result_store_dir)

    train_dataset = import_python_file(train_dataset_filepath)
    test_dataset = import_python_file(test_dataset_filepath) if test_dataset_filepath is not None else None

    param = import_python_file(param_filepath)

    # === plot scatter ===
    nrows = 1
    ncols = 2
    fig, axs = plt.subplots(figsize=(5 * ncols, 5 * nrows), nrows=nrows, ncols=ncols)

    train_test_vmaf_on_dataset(train_dataset, test_dataset, param, param, axs[0], axs[1],
                               result_store, parallelize=True, logger=None,
                               output_model_filepath=output_model_filepath,
                               **kwargs)

    if 'xlim' in kwargs:
        axs[0].set_xlim(kwargs['xlim'])
        axs[1].set_xlim(kwargs['xlim'])

    if 'ylim' in kwargs:
        axs[0].set_ylim(kwargs['ylim'])
        axs[1].set_ylim(kwargs['ylim'])

    bbox = {'facecolor': 'white', 'alpha': 1, 'pad': 20}
    axs[0].annotate('Training Set', xy=(0.1, 0.85), xycoords='axes fraction', bbox=bbox)
    axs[1].annotate('Testing Set', xy=(0.1, 0.85), xycoords='axes fraction', bbox=bbox)

    plt.tight_layout()

    # === clean up ===
    close_logger(logger)
def run_vmaf_kfold_cv(dataset_filepath,
                      contentid_groups,
                      param_filepath,
                      aggregate_method,
                      result_store_dir=VmafConfig.file_result_store_path(),
                      ):

    logger = get_stdout_logger()
    result_store = FileSystemResultStore(result_store_dir)
    dataset = import_python_file(dataset_filepath)
    param = import_python_file(param_filepath)

    fig, ax = plt.subplots(figsize=(5, 5), nrows=1, ncols=1)

    cv_on_dataset(dataset, param, param, ax, result_store, contentid_groups,
                  logger, aggregate_method)

    ax.set_xlim([0, 120])
    ax.set_ylim([0, 120])

    plt.tight_layout()

    # === clean up ===
    close_logger(logger)
def explain_model_on_dataset(model, test_assets_selected_indexs,
                             test_dataset_filepath,
                             result_store_dir=VmafConfig.file_result_store_path()):

    def print_assets(test_assets):
        print('\n'.join(map(
            lambda tasset: "Asset {i}: {name}".format(
                i=tasset[0], name=get_file_name_without_extension(tasset[1].dis_path)),
            enumerate(test_assets)
        )))

    test_dataset = import_python_file(test_dataset_filepath)
    test_assets = read_dataset(test_dataset)
    print_assets(test_assets)
    print("Assets selected for local explanation: {}".format(
        test_assets_selected_indexs))
    result_store = FileSystemResultStore(result_store_dir)
    test_assets = [test_assets[i] for i in test_assets_selected_indexs]
    test_fassembler = FeatureAssembler(
        feature_dict=model.model_dict['feature_dict'],
        feature_option_dict=None,
        assets=test_assets,
        logger=None,
        fifo_mode=True,
        delete_workdir=True,
        result_store=result_store,
        optional_dict=None,
        optional_dict2=None,
        parallelize=True,
    )
    test_fassembler.run()
    test_feature_results = test_fassembler.results
    test_xs = model.get_xs_from_results(test_feature_results)
    test_ys = model.get_ys_from_results(test_feature_results)
    test_ys_pred = model.predict(test_xs)['ys_label_pred']

    explainer = LocalExplainer(neighbor_samples=1000)
    test_exps = explainer.explain(model, test_xs)

    explainer.print_explanations(test_exps, assets=test_assets, ys=test_ys, ys_pred=test_ys_pred)
    explainer.plot_explanations(test_exps, assets=test_assets, ys=test_ys, ys_pred=test_ys_pred)
    DisplayConfig.show()
def generate_dataset_from_raw(raw_dataset_filepath, output_dataset_filepath, **kwargs):
    if raw_dataset_filepath:
        from sureal.subjective_model import DmosModel
        subj_model_class = kwargs['subj_model_class'] if 'subj_model_class' in kwargs else DmosModel
        content_ids = kwargs['content_ids'] if 'content_ids' in kwargs else None
        asset_ids = kwargs['asset_ids'] if 'asset_ids' in kwargs else None
        subjective_model = subj_model_class.from_dataset_file(raw_dataset_filepath,
                                                           content_ids=content_ids,
                                                           asset_ids=asset_ids)
        subjective_model.run_modeling(**kwargs)
        subjective_model.to_aggregated_dataset_file(output_dataset_filepath, **kwargs)
def run_vmaf_cv_from_raw(train_dataset_raw_filepath, test_dataset_raw_filepath,
                         param_filepath, output_model_filepath, **kwargs):

    if 'train_quality_wh' in kwargs and kwargs['train_quality_wh'] is not None:
        train_quality_width, train_quality_height = kwargs['train_quality_wh']
    else:
        train_quality_width = None
        train_quality_height = None

    if 'test_quality_wh' in kwargs and kwargs['test_quality_wh'] is not None:
        test_quality_width, test_quality_height = kwargs['test_quality_wh']
    else:
        test_quality_width = None
        test_quality_height = None

    if 'train_transform_final' in kwargs and kwargs['train_transform_final'] is not None:
        train_transform_final = kwargs['train_transform_final']
    else:
        train_transform_final = None

    if 'test_transform_final' in kwargs and kwargs['test_transform_final'] is not None:
        test_transform_final = kwargs['test_transform_final']
    else:
        test_transform_final = None

    workspace_path = kwargs['workspace_path'] if 'workspace_path' in kwargs else VmafConfig.workspace_path()

    train_output_dataset_filepath = os.path.join(workspace_path, 'dataset', 'train_dataset.py')
    generate_dataset_from_raw(raw_dataset_filepath=train_dataset_raw_filepath,
                              output_dataset_filepath=train_output_dataset_filepath,
                              quality_width=train_quality_width,
                              quality_height=train_quality_height,
                              transform_final=train_transform_final,
                              **kwargs)

    test_output_dataset_filepath = os.path.join(workspace_path, 'dataset', 'test_dataset.py') \
        if test_dataset_raw_filepath is not None else None
    generate_dataset_from_raw(raw_dataset_filepath=test_dataset_raw_filepath,
                              output_dataset_filepath=test_output_dataset_filepath,
                              quality_width=test_quality_width,
                              quality_height=test_quality_height,
                              transform_final=test_transform_final,
                              **kwargs)

    run_vmaf_cv(
        train_dataset_filepath=train_output_dataset_filepath,
        test_dataset_filepath=test_output_dataset_filepath,
        param_filepath=param_filepath,
        output_model_filepath=output_model_filepath,
        **kwargs
    )
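A side note on the repeated `kwargs['key'] if 'key' in kwargs and kwargs['key'] is not None else default` idiom above: it can be condensed with `dict.get`. The `pick` helper below is hypothetical (not part of vmaf), and note the one behavioural subtlety — with `get` alone, a key that is present but set to `None` would return `None` instead of the fallback, which is why the explicit `is not None` check is kept:

```python
def pick(kwargs, key, default=None):
    # equivalent to: kwargs[key] if key in kwargs and kwargs[key] is not None else default
    value = kwargs.get(key)
    return value if value is not None else default

kwargs = {'train_quality_wh': (1920, 1080), 'test_quality_wh': None}
print(pick(kwargs, 'train_quality_wh'))           # (1920, 1080)
print(pick(kwargs, 'test_quality_wh'))            # None
print(pick(kwargs, 'workspace_path', '/tmp/ws'))  # /tmp/ws
```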
| [
"vmaf.core.train_test_model.TrainTestModel.find_subclass",
"vmaf.core.quality_runner.VmafQualityRunner.predict_with_model",
"numpy.array",
"vmaf.core.local_explainer.LocalExplainer",
"vmaf.config.VmafConfig.file_result_store_path",
"vmaf.config.VmafConfig.workspace_path",
"vmaf.plt.subplots",
"vmaf.co... | [((19415, 19682), 'vmaf.core.feature_assembler.FeatureAssembler', 'FeatureAssembler', ([], {'feature_dict': 'feature_param.feature_dict', 'feature_option_dict': 'None', 'assets': 'train_assets', 'logger': 'logger', 'fifo_mode': 'fifo_mode', 'delete_workdir': '(True)', 'result_store': 'result_store', 'optional_dict': 'None', 'optional_dict2': 'None', 'parallelize': 'parallelize'}), '(feature_dict=feature_param.feature_dict,\n feature_option_dict=None, assets=train_assets, logger=logger, fifo_mode\n =fifo_mode, delete_workdir=True, result_store=result_store,\n optional_dict=None, optional_dict2=None, parallelize=parallelize)\n', (19431, 19682), False, 'from vmaf.core.feature_assembler import FeatureAssembler\n'), ((20037, 20077), 'vmaf.core.train_test_model.TrainTestModel.find_subclass', 'TrainTestModel.find_subclass', (['model_type'], {}), '(model_type)\n', (20065, 20077), False, 'from vmaf.core.train_test_model import TrainTestModel, RegressorMixin, ClassifierMixin\n'), ((25696, 25946), 'vmaf.core.feature_assembler.FeatureAssembler', 'FeatureAssembler', ([], {'feature_dict': 'feature_param.feature_dict', 'feature_option_dict': 'None', 'assets': 'assets', 'logger': 'logger', 'delete_workdir': '(True)', 'result_store': 'result_store', 'optional_dict': 'None', 'optional_dict2': 'None', 'parallelize': '(True)', 'fifo_mode': '(True)'}), '(feature_dict=feature_param.feature_dict,\n feature_option_dict=None, assets=assets, logger=logger, delete_workdir=\n True, result_store=result_store, optional_dict=None, optional_dict2=\n None, parallelize=True, fifo_mode=True)\n', (25712, 25946), False, 'from vmaf.core.feature_assembler import FeatureAssembler\n'), ((26225, 26277), 'vmaf.core.train_test_model.TrainTestModel.find_subclass', 'TrainTestModel.find_subclass', (['model_param.model_type'], {}), '(model_param.model_type)\n', (26253, 26277), False, 'from vmaf.core.train_test_model import TrainTestModel, RegressorMixin, ClassifierMixin\n'), ((26342, 26468), 
'vmaf.core.cross_validation.ModelCrossValidation.run_kfold_cross_validation', 'ModelCrossValidation.run_kfold_cross_validation', (['model_class', 'model_param.model_param_dict', 'results', 'kfold'], {'logger': 'logger'}), '(model_class, model_param.\n model_param_dict, results, kfold, logger=logger)\n', (26389, 26468), False, 'from vmaf.core.cross_validation import ModelCrossValidation\n'), ((27837, 27856), 'vmaf.tools.misc.get_stdout_logger', 'get_stdout_logger', ([], {}), '()\n', (27854, 27856), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((27876, 27915), 'vmaf.core.result_store.FileSystemResultStore', 'FileSystemResultStore', (['result_store_dir'], {}), '(result_store_dir)\n', (27897, 27915), False, 'from vmaf.core.result_store import FileSystemResultStore\n'), ((27937, 27979), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['train_dataset_filepath'], {}), '(train_dataset_filepath)\n', (27955, 27979), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((28101, 28135), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['param_filepath'], {}), '(param_filepath)\n', (28119, 28135), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((28208, 28278), 'vmaf.plt.subplots', 'plt.subplots', ([], {'figsize': '(5 * ncols, 5 * nrows)', 'nrows': 'nrows', 'ncols': 'ncols'}), '(figsize=(5 * ncols, 5 * nrows), nrows=nrows, ncols=ncols)\n', (28220, 28278), False, 'from vmaf import plt\n'), ((29008, 29026), 'vmaf.plt.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (29024, 29026), False, 'from vmaf import plt\n'), ((29055, 29075), 'vmaf.tools.misc.close_logger', 'close_logger', (['logger'], {}), '(logger)\n', (29067, 29075), False, 'from vmaf.tools.misc import indices, get_stdout_logger, 
import_python_file, close_logger, get_file_name_without_extension\n'), ((29275, 29310), 'vmaf.config.VmafConfig.file_result_store_path', 'VmafConfig.file_result_store_path', ([], {}), '()\n', (29308, 29310), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((29351, 29370), 'vmaf.tools.misc.get_stdout_logger', 'get_stdout_logger', ([], {}), '()\n', (29368, 29370), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((29390, 29429), 'vmaf.core.result_store.FileSystemResultStore', 'FileSystemResultStore', (['result_store_dir'], {}), '(result_store_dir)\n', (29411, 29429), False, 'from vmaf.core.result_store import FileSystemResultStore\n'), ((29444, 29480), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['dataset_filepath'], {}), '(dataset_filepath)\n', (29462, 29480), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((29493, 29527), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['param_filepath'], {}), '(param_filepath)\n', (29511, 29527), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((29543, 29589), 'vmaf.plt.subplots', 'plt.subplots', ([], {'figsize': '(5, 5)', 'nrows': '(1)', 'ncols': '(1)'}), '(figsize=(5, 5), nrows=1, ncols=1)\n', (29555, 29589), False, 'from vmaf import plt\n'), ((29769, 29787), 'vmaf.plt.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (29785, 29787), False, 'from vmaf import plt\n'), ((29816, 29836), 'vmaf.tools.misc.close_logger', 'close_logger', (['logger'], {}), '(logger)\n', (29828, 29836), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((30002, 30037), 'vmaf.config.VmafConfig.file_result_store_path', 'VmafConfig.file_result_store_path', ([], {}), 
'()\n', (30035, 30037), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((30315, 30356), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['test_dataset_filepath'], {}), '(test_dataset_filepath)\n', (30333, 30356), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((30551, 30590), 'vmaf.core.result_store.FileSystemResultStore', 'FileSystemResultStore', (['result_store_dir'], {}), '(result_store_dir)\n', (30572, 30590), False, 'from vmaf.core.result_store import FileSystemResultStore\n'), ((30685, 30944), 'vmaf.core.feature_assembler.FeatureAssembler', 'FeatureAssembler', ([], {'feature_dict': "model.model_dict['feature_dict']", 'feature_option_dict': 'None', 'assets': 'test_assets', 'logger': 'None', 'fifo_mode': '(True)', 'delete_workdir': '(True)', 'result_store': 'result_store', 'optional_dict': 'None', 'optional_dict2': 'None', 'parallelize': '(True)'}), "(feature_dict=model.model_dict['feature_dict'],\n feature_option_dict=None, assets=test_assets, logger=None, fifo_mode=\n True, delete_workdir=True, result_store=result_store, optional_dict=\n None, optional_dict2=None, parallelize=True)\n", (30701, 30944), False, 'from vmaf.core.feature_assembler import FeatureAssembler\n'), ((31294, 31331), 'vmaf.core.local_explainer.LocalExplainer', 'LocalExplainer', ([], {'neighbor_samples': '(1000)'}), '(neighbor_samples=1000)\n', (31308, 31331), False, 'from vmaf.core.local_explainer import LocalExplainer\n'), ((31582, 31602), 'vmaf.config.DisplayConfig.show', 'DisplayConfig.show', ([], {}), '()\n', (31600, 31602), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((33594, 33653), 'os.path.join', 'os.path.join', (['workspace_path', '"""dataset"""', '"""train_dataset.py"""'], {}), "(workspace_path, 'dataset', 'train_dataset.py')\n", (33606, 33653), False, 'import os\n'), ((1210, 1235), 'vmaf.config.VmafConfig.workdir_path', 
'VmafConfig.workdir_path', ([], {}), '()\n', (1233, 1235), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((20588, 20659), 'vmaf.core.quality_runner.VmafQualityRunner.set_clip_score', 'VmafQualityRunner.set_clip_score', (['model', "model_param_dict['score_clip']"], {}), "(model, model_param_dict['score_clip'])\n", (20620, 20659), False, 'from vmaf.core.quality_runner import VmafQualityRunner\n'), ((20714, 20800), 'vmaf.core.quality_runner.VmafQualityRunner.set_transform_score', 'VmafQualityRunner.set_transform_score', (['model', "model_param_dict['score_transform']"], {}), "(model, model_param_dict[\n 'score_transform'])\n", (20751, 20800), False, 'from vmaf.core.quality_runner import VmafQualityRunner\n'), ((20817, 20880), 'vmaf.core.quality_runner.VmafQualityRunner.predict_with_model', 'VmafQualityRunner.predict_with_model', (['model', 'train_xs'], {}), '(model, train_xs, **kwargs)\n', (20853, 20880), False, 'from vmaf.core.quality_runner import VmafQualityRunner\n'), ((23162, 23421), 'vmaf.core.feature_assembler.FeatureAssembler', 'FeatureAssembler', ([], {'feature_dict': 'feature_param.feature_dict', 'feature_option_dict': 'None', 'assets': 'test_assets', 'logger': 'logger', 'fifo_mode': 'fifo_mode', 'delete_workdir': '(True)', 'result_store': 'result_store', 'optional_dict': 'None', 'optional_dict2': 'None', 'parallelize': '(True)'}), '(feature_dict=feature_param.feature_dict,\n feature_option_dict=None, assets=test_assets, logger=logger, fifo_mode=\n fifo_mode, delete_workdir=True, result_store=result_store,\n optional_dict=None, optional_dict2=None, parallelize=True)\n', (23178, 23421), False, 'from vmaf.core.feature_assembler import FeatureAssembler\n'), ((25324, 25379), 'vmaf.tools.misc.indices', 'indices', (['content_ids', '(lambda x: x in curr_content_group)'], {}), '(content_ids, lambda x: x in curr_content_group)\n', (25331, 25379), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, 
get_file_name_without_extension\n'), ((27787, 27822), 'vmaf.config.VmafConfig.file_result_store_path', 'VmafConfig.file_result_store_path', ([], {}), '()\n', (27820, 27822), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((27999, 28040), 'vmaf.tools.misc.import_python_file', 'import_python_file', (['test_dataset_filepath'], {}), '(test_dataset_filepath)\n', (28017, 28040), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((33529, 33556), 'vmaf.config.VmafConfig.workspace_path', 'VmafConfig.workspace_path', ([], {}), '()\n', (33554, 33556), False, 'from vmaf.config import VmafConfig, DisplayConfig\n'), ((34050, 34108), 'os.path.join', 'os.path.join', (['workspace_path', '"""dataset"""', '"""test_dataset.py"""'], {}), "(workspace_path, 'dataset', 'test_dataset.py')\n", (34062, 34108), False, 'import os\n'), ((9667, 9869), 'vmaf.core.asset.Asset', 'Asset', ([], {'dataset': 'data_set_name', 'content_id': "dis_video['content_id']", 'asset_id': "dis_video['asset_id']", 'workdir_root': 'workdir_root', 'ref_path': 'ref_path', 'dis_path': "dis_video['path']", 'asset_dict': 'asset_dict'}), "(dataset=data_set_name, content_id=dis_video['content_id'], asset_id=\n dis_video['asset_id'], workdir_root=workdir_root, ref_path=ref_path,\n dis_path=dis_video['path'], asset_dict=asset_dict)\n", (9672, 9869), False, 'from vmaf.core.asset import Asset\n'), ((14458, 14490), 'numpy.shape', 'np.shape', (['predictions_all_models'], {}), '(predictions_all_models)\n', (14466, 14490), True, 'import numpy as np\n'), ((23875, 23937), 'vmaf.core.quality_runner.VmafQualityRunner.predict_with_model', 'VmafQualityRunner.predict_with_model', (['model', 'test_xs'], {}), '(model, test_xs, **kwargs)\n', (23911, 23937), False, 'from vmaf.core.quality_runner import VmafQualityRunner\n'), ((14393, 14425), 'numpy.array', 'np.array', (['predictions_all_models'], {}), '(predictions_all_models)\n', (14401, 
14425), True, 'import numpy as np\n'), ((16811, 16858), 'vmaf.tools.misc.get_file_name_without_extension', 'get_file_name_without_extension', (['asset.dis_path'], {}), '(asset.dis_path)\n', (16842, 16858), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n'), ((30194, 30245), 'vmaf.tools.misc.get_file_name_without_extension', 'get_file_name_without_extension', (['tasset[1].dis_path'], {}), '(tasset[1].dis_path)\n', (30225, 30245), False, 'from vmaf.tools.misc import indices, get_stdout_logger, import_python_file, close_logger, get_file_name_without_extension\n')] |
# -*- coding: utf-8 -*-
"""
easybimehlanding
This file was automatically generated by APIMATIC v2.0 ( https://apimatic.io ).
"""
import unittest
from ..http_response_catcher import HttpResponseCatcher
from easybimehlanding.easybimehlanding_client import EasybimehlandingClient
from easybimehlanding.configuration import Configuration
class ControllerTestBase(unittest.TestCase):
    """All test classes inherit from this base class. It abstracts out
    common functionality and configuration variables set up."""

    @classmethod
    def setUpClass(cls):
        """Class method called once before running tests in a test class."""
        cls.api_client = EasybimehlandingClient()
        cls.request_timeout = 30
        cls.assert_precision = 0.01

    def setUp(self):
        """Method called once before every test in a test class."""
        self.response_catcher = HttpResponseCatcher()
        self.controller.http_call_back = self.response_catcher
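To illustrate the pattern this base class sets up — `setUpClass` builds one shared client per test class, which concrete controller tests then inherit — here is a self-contained sketch. `FakeClient` is a stand-in for `EasybimehlandingClient`, which requires the generated SDK:

```python
import unittest

class FakeClient:
    """Stand-in for the real API client."""
    def ping(self):
        return 'pong'

class ControllerTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # built once per test class, shared by every test method
        cls.api_client = FakeClient()
        cls.request_timeout = 30

class PingControllerTest(ControllerTestBase):
    def test_ping(self):
        self.assertEqual(self.api_client.ping(), 'pong')

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(PingControllerTest).run(result)
print(result.testsRun, result.wasSuccessful())  # 1 True
```

Using `setUpClass` rather than `setUp` avoids re-creating the (potentially expensive) client for every test method.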
| [
"easybimehlanding.easybimehlanding_client.EasybimehlandingClient"
] | [((693, 717), 'easybimehlanding.easybimehlanding_client.EasybimehlandingClient', 'EasybimehlandingClient', ([], {}), '()\n', (715, 717), False, 'from easybimehlanding.easybimehlanding_client import EasybimehlandingClient\n')] |
import logging
import plotly.graph_objects as go
from bots import imps, load_candle
from openbb_terminal.common.technical_analysis import volume_model
from openbb_terminal.decorators import log_start_end
# pylint: disable=R0913
logger = logging.getLogger(__name__)
@log_start_end(log=logger)
def adosc_command(
    ticker="",
    interval: int = 15,
    past_days: int = 0,
    is_open: bool = False,
    fast="3",
    slow="10",
    start="",
    end="",
    extended_hours: bool = False,
    heikin_candles: bool = False,
    trendline: bool = False,
    news: bool = False,
):
    """Displays chart with chaikin oscillator [Yahoo Finance]"""

    # Debug
    if imps.DEBUG:
        # pylint: disable=logging-too-many-args
        logger.debug(
            "ta adosc %s %s %s %s %s %s %s %s %s %s %s %s",
            ticker,
            interval,
            past_days,
            is_open,
            fast,
            slow,
            start,
            end,
            extended_hours,
            heikin_candles,
            trendline,
            news,
        )

    # Check for argument
    if ticker == "":
        raise Exception("Stock ticker is required")

    if not fast.lstrip("-").isnumeric():
        raise Exception("Number has to be an integer")
    fast = int(fast)

    if not slow.lstrip("-").isnumeric():
        raise Exception("Number has to be an integer")
    slow = int(slow)

    # Retrieve Data
    df_stock, start, end, bar_start = load_candle.stock_data(
        ticker=ticker,
        interval=interval,
        past_days=past_days,
        extended_hours=extended_hours,
        start=start,
        end=end,
        heikin_candles=heikin_candles,
    )

    if df_stock.empty:
        raise Exception("No Data Found")

    df_ta = df_stock.loc[(df_stock.index >= start) & (df_stock.index < end)]
    df_ta = df_ta.join(volume_model.adosc(df_stock, is_open, fast, slow))

    # Output Data
    if interval != 1440:
        df_ta = df_ta.loc[(df_ta.index >= bar_start) & (df_ta.index < end)]
    df_ta = df_ta.fillna(0.0)

    plot = load_candle.candle_fig(
        df_ta,
        ticker,
        interval,
        extended_hours,
        news,
        bar=bar_start,
        int_bar=interval,
        trendline=trendline,
        rows=2,
        cols=1,
        shared_xaxes=True,
        vertical_spacing=0.05,
        row_width=[0.4, 0.7],
        specs=[
            [{"secondary_y": True}],
            [{"secondary_y": False}],
        ],
    )
    title = f"<b>{plot['plt_title']} AD Oscillator</b>"
    fig = plot["fig"]

    fig.add_trace(
        go.Scatter(
            name="AD Osc [M]",
            mode="lines",
            x=df_ta.index,
            y=df_ta.iloc[:, 6].values
            if (not trendline) and (interval != 1440)
            else df_ta.iloc[:, 11].values,
            line=dict(width=2),
            opacity=1,
        ),
        row=2,
        col=1,
    )
    fig.update_layout(
        margin=dict(l=0, r=0, t=50, b=20),
        template=imps.PLT_TA_STYLE_TEMPLATE,
        colorway=imps.PLT_TA_COLORWAY,
        title=title,
        title_x=0.1,
        title_font_size=14,
        dragmode="pan",
    )
    imagefile = "ta_adosc.png"

    # Check if interactive settings are enabled
    plt_link = ""
    if imps.INTERACTIVE:
        plt_link = imps.inter_chart(fig, imagefile, callback=False)

    imagefile = imps.image_border(imagefile, fig=fig)

    return {
        "title": f"Stocks: Accumulation/Distribution Oscillator {ticker.upper()}",
        "description": plt_link,
        "imagefile": imagefile,
    }
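For reference, the quantity this command plots is the standard Chaikin A/D oscillator: a fast EMA minus a slow EMA of the accumulation/distribution line. The sketch below uses the textbook definition in plain Python; the exact smoothing details in openbb's `volume_model.adosc` may differ:

```python
def ema(values, span):
    # simple recursive exponential moving average, seeded with the first value
    alpha = 2.0 / (span + 1.0)
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1.0 - alpha) * prev
        out.append(prev)
    return out

def adosc(high, low, close, volume, fast=3, slow=10):
    adl, acc = [], 0.0
    for h, l, c, v in zip(high, low, close, volume):
        # money flow multiplier: +1 at the high, -1 at the low
        mfm = ((c - l) - (h - c)) / (h - l) if h != l else 0.0
        acc += mfm * v  # accumulation/distribution line
        adl.append(acc)
    return [f - s for f, s in zip(ema(adl, fast), ema(adl, slow))]

high = [12, 13, 13, 14]
low = [10, 11, 11, 12]
close = [11, 12.5, 11.5, 13.5]
volume = [1000, 1200, 900, 1500]
print(adosc(high, low, close, volume)[-1] > 0)  # True: recent accumulation
```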
| [
"logging.getLogger",
"bots.load_candle.candle_fig",
"bots.load_candle.stock_data",
"bots.imps.image_border",
"bots.imps.inter_chart",
"openbb_terminal.common.technical_analysis.volume_model.adosc",
"openbb_terminal.decorators.log_start_end"
] | [((240, 267), 'logging.getLogger', 'logging.getLogger', (['__name__'], {}), '(__name__)\n', (257, 267), False, 'import logging\n'), ((271, 296), 'openbb_terminal.decorators.log_start_end', 'log_start_end', ([], {'log': 'logger'}), '(log=logger)\n', (284, 296), False, 'from openbb_terminal.decorators import log_start_end\n'), ((1470, 1640), 'bots.load_candle.stock_data', 'load_candle.stock_data', ([], {'ticker': 'ticker', 'interval': 'interval', 'past_days': 'past_days', 'extended_hours': 'extended_hours', 'start': 'start', 'end': 'end', 'heikin_candles': 'heikin_candles'}), '(ticker=ticker, interval=interval, past_days=\n past_days, extended_hours=extended_hours, start=start, end=end,\n heikin_candles=heikin_candles)\n', (1492, 1640), False, 'from bots import imps, load_candle\n'), ((2074, 2350), 'bots.load_candle.candle_fig', 'load_candle.candle_fig', (['df_ta', 'ticker', 'interval', 'extended_hours', 'news'], {'bar': 'bar_start', 'int_bar': 'interval', 'trendline': 'trendline', 'rows': '(2)', 'cols': '(1)', 'shared_xaxes': '(True)', 'vertical_spacing': '(0.05)', 'row_width': '[0.4, 0.7]', 'specs': "[[{'secondary_y': True}], [{'secondary_y': False}]]"}), "(df_ta, ticker, interval, extended_hours, news, bar=\n bar_start, int_bar=interval, trendline=trendline, rows=2, cols=1,\n shared_xaxes=True, vertical_spacing=0.05, row_width=[0.4, 0.7], specs=[\n [{'secondary_y': True}], [{'secondary_y': False}]])\n", (2096, 2350), False, 'from bots import imps, load_candle\n'), ((3388, 3425), 'bots.imps.image_border', 'imps.image_border', (['imagefile'], {'fig': 'fig'}), '(imagefile, fig=fig)\n', (3405, 3425), False, 'from bots import imps, load_candle\n'), ((1861, 1910), 'openbb_terminal.common.technical_analysis.volume_model.adosc', 'volume_model.adosc', (['df_stock', 'is_open', 'fast', 'slow'], {}), '(df_stock, is_open, fast, slow)\n', (1879, 1910), False, 'from openbb_terminal.common.technical_analysis import volume_model\n'), ((3322, 3370), 'bots.imps.inter_chart', 
'imps.inter_chart', (['fig', 'imagefile'], {'callback': '(False)'}), '(fig, imagefile, callback=False)\n', (3338, 3370), False, 'from bots import imps, load_candle\n')] |
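Each row in this dump pairs a source file with character-offset tuples for the API calls it contains. A minimal sketch of how such spans can be recovered with the stdlib `ast` module is below; it assumes Python 3.9+ (`ast.unparse`, end positions), and the real extraction pipeline records more fields per call (args, kwargs, import line) than shown here.

```python
import ast

def extract_calls(source):
    """Return (start_offset, end_offset, dotted_name) for every call in source.

    Sketch only: real extract_api tuples in this dataset carry extra fields.
    """
    # Map line numbers to absolute character offsets into the source string.
    line_starts = [0]
    for line in source.splitlines(keepends=True):
        line_starts.append(line_starts[-1] + len(line))

    def offset(lineno, col):
        return line_starts[lineno - 1] + col

    calls = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            calls.append((offset(node.lineno, node.col_offset),
                          offset(node.end_lineno, node.end_col_offset),
                          ast.unparse(node.func)))  # needs Python 3.9+
    return calls

src = "import logging\nlogger = logging.getLogger(__name__)\n"
print(extract_calls(src))  # [(24, 51, 'logging.getLogger')]
```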
# -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-05-20 10:23
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('driver', '0003_auto_20180520_1217'),
]
operations = [
migrations.AlterModelOptions(
name='car',
options={'ordering': ['seat_capacity']},
),
migrations.RenameField(
model_name='car',
old_name='seats_available',
new_name='seat_capacity',
),
]
| [
"django.db.migrations.AlterModelOptions",
"django.db.migrations.RenameField"
] | [((289, 375), 'django.db.migrations.AlterModelOptions', 'migrations.AlterModelOptions', ([], {'name': '"""car"""', 'options': "{'ordering': ['seat_capacity']}"}), "(name='car', options={'ordering': [\n 'seat_capacity']})\n", (317, 375), False, 'from django.db import migrations\n'), ((415, 513), 'django.db.migrations.RenameField', 'migrations.RenameField', ([], {'model_name': '"""car"""', 'old_name': '"""seats_available"""', 'new_name': '"""seat_capacity"""'}), "(model_name='car', old_name='seats_available',\n new_name='seat_capacity')\n", (437, 513), False, 'from django.db import migrations\n')] |
#------------------------------
"""Class :py:class:`QWList` is a QListView->QWidget for list model
===================================================================
Usage ::
# Run test: python lcls2/psdaq/psdaq/control_gui/QWList.py
from psdaq.control_gui.QWList import QWList
w = QWList()
Copy psana.graphqt.QWList on 2019-03-11
"""
#------------------------------
import logging
logger = logging.getLogger(__name__)
from PyQt5.QtWidgets import QListView, QVBoxLayout, QAbstractItemView
from PyQt5.QtGui import QStandardItemModel, QStandardItem
from PyQt5.QtCore import Qt, QModelIndex
#from psdaq.control_gui.QWIcons import icon
#------------------------------
class QWList(QListView) :
"""Widget for List
"""
def __init__(self, **kwargs) :
QListView.__init__(self, kwargs.get('parent', None))
#self._name = self.__class__.__name__
#icon.set_icons()
self.model = QStandardItemModel()
self.set_selection_mode()
self.fill_list_model(**kwargs) # defines self.model
self.setModel(self.model)
self.set_style()
self.show_tool_tips()
self.connect_signals()
#self.disconnect_signals()
def connect_signals(self) :
self.model.itemChanged.connect(self.on_item_changed)
self.connect_item_selected_to(self.on_item_selected)
self.clicked[QModelIndex].connect(self.on_click)
self.doubleClicked[QModelIndex].connect(self.on_double_click)
def disconnect_signals(self) :
self.model.itemChanged.disconnect(self.on_item_changed)
self.disconnect_item_selected_from(self.on_item_selected)
self.clicked[QModelIndex].disconnect(self.on_click)
self.doubleClicked[QModelIndex].disconnect(self.on_double_click)
def set_selection_mode(self, smode='extended') :
logger.debug('Set selection mode: %s'%smode)
mode = {'single' : QAbstractItemView.SingleSelection,
'contiguous' : QAbstractItemView.ContiguousSelection,
'extended' : QAbstractItemView.ExtendedSelection,
'multi' : QAbstractItemView.MultiSelection,
'no selection': QAbstractItemView.NoSelection}[smode]
self.setSelectionMode(mode)
def connect_item_selected_to(self, recipient) :
self.selectionModel().currentChanged[QModelIndex, QModelIndex].connect(recipient)
def disconnect_item_selected_from(self, recipient) :
self.selectionModel().currentChanged[QModelIndex, QModelIndex].disconnect(recipient)
def selected_indexes(self):
return self.selectedIndexes()
def selected_items(self):
indexes = self.selectedIndexes()
return [self.model.itemFromIndex(i) for i in self.selectedIndexes()]
def clear_model(self):
rows = self.model.rowCount()
self.model.removeRows(0, rows)
def fill_list_model(self, **kwargs):
self.clear_model()
for i in range(20):
item = QStandardItem('%02d item text'%(i))
#item.setIcon(icon.icon_table)
item.setCheckable(True)
self.model.appendRow(item)
def on_item_selected(self, selected, deselected):
itemsel = self.model.itemFromIndex(selected)
if itemsel is not None :
msg = 'on_item_selected row:%02d selected: %s' % (selected.row(), itemsel.text())
logger.info(msg)
#itemdes = self.model.itemFromIndex(deselected)
#if itemdes is not None :
# msg = 'on_item_selected row: %d deselected %s' % (deselected.row(), itemdes.text())
# logger.info(msg)
def on_item_changed(self, item):
state = ['UNCHECKED', 'TRISTATE', 'CHECKED'][item.checkState()]
msg = 'on_item_changed: item "%s", is at state %s' % (item.text(), state)
logger.info(msg)
def on_click(self, index):
item = self.model.itemFromIndex(index)
txt = item.text()
txtshow = txt if len(txt)<50 else '%s...'%txt[:50]
msg = 'doc clicked in row%02d: %s' % (index.row(), txtshow)
logger.info(msg)
def on_double_click(self, index):
item = self.model.itemFromIndex(index)
msg = 'on_double_click item in row:%02d text: %s' % (index.row(), item.text())
logger.debug(msg)
#--------------------------
#--------------------------
#--------------------------
def show_tool_tips(self):
self.setToolTip('List model')
def set_style(self):
#from psana.graphqt.Styles import style
#self.setWindowIcon(icon.icon_monitor)
#self.layout().setContentsMargins(0,0,0,0)
self.setStyleSheet("QListView::item:hover{background-color:#00FFAA;}")
#self.palette = QPalette()
#self.resetColorIsSet = False
#self.butELog .setIcon(icon.icon_mail_forward)
#self.butFile .setIcon(icon.icon_save)
self.setMinimumHeight(100)
self.setMinimumWidth(50)
#self.adjustSize()
#self. setStyleSheet(style.styleBkgd)
#self.butSave.setStyleSheet(style.styleButton)
#self.butFBrowser.setVisible(False)
#self.butExit.setText('')
#self.butExit.setFlat(True)
#def resizeEvent(self, e):
#pass
#self.frame.setGeometry(self.rect())
#logger.debug('resizeEvent')
#def moveEvent(self, e):
#logger.debug('moveEvent')
#self.position = self.mapToGlobal(self.pos())
#self.position = self.pos()
#logger.debug('moveEvent - pos:' + str(self.position), __name__)
#pass
def closeEvent(self, e):
logger.debug('closeEvent')
QListView.closeEvent(self, e)
#try : self.gui_win.close()
#except : pass
#try : del self.gui_win
#except : pass
def on_exit(self):
logger.debug('on_exit')
self.close()
def process_selected_items(self) :
selitems = self.selected_items()
msg = '%d Selected items:' % len(selitems)
for i in selitems :
msg += '\n %s' % i.text()
logger.info(msg)
if __name__ == "__main__" :
def key_usage(self) :
return 'Keys:'\
'\n ESC - exit'\
'\n S - show selected items'\
'\n'
def keyPressEvent(self, e) :
#logger.info('keyPressEvent, key=', e.key())
if e.key() == Qt.Key_Escape :
self.close()
elif e.key() == Qt.Key_S :
self.process_selected_items()
else :
logger.info(self.key_usage())
#------------------------------
#------------------------------
#------------------------------
if __name__ == "__main__" :
import sys
from PyQt5.QtWidgets import QApplication
logging.basicConfig(format='%(message)s', level=logging.DEBUG)
#logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s: %(message)s', datefmt='%H:%M:%S', level=logging.DEBUG)
app = QApplication(sys.argv)
w = QWList()
w.setGeometry(10, 25, 400, 600)
w.setWindowTitle('QWList')
w.move(100,50)
w.show()
app.exec_()
del w
del app
#------------------------------
| [
"logging.getLogger",
"logging.basicConfig",
"PyQt5.QtGui.QStandardItemModel",
"PyQt5.QtGui.QStandardItem",
"PyQt5.QtWidgets.QListView.closeEvent",
"PyQt5.QtWidgets.QApplication"
] | [((408, 435), 'logging.getLogger', 'logging.getLogger', (['__name__'], {}), '(__name__)\n', (425, 435), False, 'import logging\n'), ((6840, 6902), 'logging.basicConfig', 'logging.basicConfig', ([], {'format': '"""%(message)s"""', 'level': 'logging.DEBUG'}), "(format='%(message)s', level=logging.DEBUG)\n", (6859, 6902), False, 'import logging\n'), ((7038, 7060), 'PyQt5.QtWidgets.QApplication', 'QApplication', (['sys.argv'], {}), '(sys.argv)\n', (7050, 7060), False, 'from PyQt5.QtWidgets import QApplication\n'), ((934, 954), 'PyQt5.QtGui.QStandardItemModel', 'QStandardItemModel', ([], {}), '()\n', (952, 954), False, 'from PyQt5.QtGui import QStandardItemModel, QStandardItem\n'), ((5711, 5740), 'PyQt5.QtWidgets.QListView.closeEvent', 'QListView.closeEvent', (['self', 'e'], {}), '(self, e)\n', (5731, 5740), False, 'from PyQt5.QtWidgets import QListView, QVBoxLayout, QAbstractItemView\n'), ((3018, 3053), 'PyQt5.QtGui.QStandardItem', 'QStandardItem', (["('%02d item text' % i)"], {}), "('%02d item text' % i)\n", (3031, 3053), False, 'from PyQt5.QtGui import QStandardItemModel, QStandardItem\n')] |
"""Validate services schema."""
from importlib import import_module
from pathlib import Path
import voluptuous as vol
from ..const import ATTR_ADDON, ATTR_CONFIG, ATTR_DISCOVERY, ATTR_SERVICE, ATTR_UUID
from ..utils.validate import schema_or
from ..validate import uuid_match
def valid_discovery_service(service):
"""Validate service name."""
service_file = Path(__file__).parent.joinpath(f"services/{service}.py")
if not service_file.exists():
raise vol.Invalid(f"Service {service} not found") from None
return service
def valid_discovery_config(service, config):
"""Validate service name."""
try:
service_mod = import_module(f".services.{service}", "supervisor.discovery")
except ImportError:
raise vol.Invalid(f"Service {service} not found") from None
return service_mod.SCHEMA(config)
SCHEMA_DISCOVERY = vol.Schema(
[
vol.Schema(
{
vol.Required(ATTR_UUID): uuid_match,
vol.Required(ATTR_ADDON): str,
vol.Required(ATTR_SERVICE): valid_discovery_service,
vol.Required(ATTR_CONFIG): vol.Maybe(dict),
},
extra=vol.REMOVE_EXTRA,
)
]
)
SCHEMA_DISCOVERY_CONFIG = vol.Schema(
{vol.Optional(ATTR_DISCOVERY, default=list): schema_or(SCHEMA_DISCOVERY)},
extra=vol.REMOVE_EXTRA,
)
| [
"voluptuous.Required",
"importlib.import_module",
"pathlib.Path",
"voluptuous.Invalid",
"voluptuous.Maybe",
"voluptuous.Optional"
] | [((475, 518), 'voluptuous.Invalid', 'vol.Invalid', (['f"""Service {service} not found"""'], {}), "(f'Service {service} not found')\n", (486, 518), True, 'import voluptuous as vol\n'), ((659, 720), 'importlib.import_module', 'import_module', (['f""".services.{service}"""', '"""supervisor.discovery"""'], {}), "(f'.services.{service}', 'supervisor.discovery')\n", (672, 720), False, 'from importlib import import_module\n'), ((1267, 1309), 'voluptuous.Optional', 'vol.Optional', (['ATTR_DISCOVERY'], {'default': 'list'}), '(ATTR_DISCOVERY, default=list)\n', (1279, 1309), True, 'import voluptuous as vol\n'), ((759, 802), 'voluptuous.Invalid', 'vol.Invalid', (['f"""Service {service} not found"""'], {}), "(f'Service {service} not found')\n", (770, 802), True, 'import voluptuous as vol\n'), ((370, 384), 'pathlib.Path', 'Path', (['__file__'], {}), '(__file__)\n', (374, 384), False, 'from pathlib import Path\n'), ((941, 964), 'voluptuous.Required', 'vol.Required', (['ATTR_UUID'], {}), '(ATTR_UUID)\n', (953, 964), True, 'import voluptuous as vol\n'), ((994, 1018), 'voluptuous.Required', 'vol.Required', (['ATTR_ADDON'], {}), '(ATTR_ADDON)\n', (1006, 1018), True, 'import voluptuous as vol\n'), ((1041, 1067), 'voluptuous.Required', 'vol.Required', (['ATTR_SERVICE'], {}), '(ATTR_SERVICE)\n', (1053, 1067), True, 'import voluptuous as vol\n'), ((1110, 1135), 'voluptuous.Required', 'vol.Required', (['ATTR_CONFIG'], {}), '(ATTR_CONFIG)\n', (1122, 1135), True, 'import voluptuous as vol\n'), ((1137, 1152), 'voluptuous.Maybe', 'vol.Maybe', (['dict'], {}), '(dict)\n', (1146, 1152), True, 'import voluptuous as vol\n')] |
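For reference, the shape that `SCHEMA_DISCOVERY` enforces can be sketched with the stdlib alone. This is an illustrative stand-in, not Supervisor's real code: the key names mirror the `ATTR_*` constants, and stdlib `uuid.UUID` is used in place of Supervisor's `uuid_match` validator.

```python
import uuid

def check_discovery_message(msg):
    """Validate one discovery entry roughly the way SCHEMA_DISCOVERY does."""
    required = {"uuid", "addon", "service", "config"}
    missing = required - msg.keys()
    if missing:
        raise ValueError("missing keys: %s" % sorted(missing))
    uuid.UUID(msg["uuid"])  # raises ValueError if the uuid is malformed
    if not isinstance(msg["addon"], str):
        raise ValueError("addon must be a string")
    if msg["config"] is not None and not isinstance(msg["config"], dict):
        raise ValueError("config must be a dict or None")
    return msg

ok = check_discovery_message(
    {"uuid": uuid.uuid4().hex, "addon": "core_mosquitto",
     "service": "mqtt", "config": {"host": "127.0.0.1"}}
)
```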
from __future__ import absolute_import
from __future__ import division
from past.utils import old_div
from proteus import *
from proteus.default_n import *
try:
from .adr_bl_3d_p import *
except:
from adr_bl_3d_p import *
#steady-state so no time integration
timeIntegration = NoIntegration
#number of output timesteps
nDTout = 1
#finite element spaces
femSpaces = {0:C0_AffineLinearOnSimplexWithNodalBasis}
#numerical quadrature choices
elementQuadrature = SimplexGaussQuadrature(nd,3)
elementBoundaryQuadrature = SimplexGaussQuadrature(nd-1,3)
logEvent("""Mesh generated using: tetgen -%s %s""" % (triangleOptions,domain.polyfile+".poly"))
triangleOptions="VApq1.35q12feena%e" % (old_div((he**3),6.0),)
#number of levels in mesh
nLevels = 1
subgridError = ADR.SubgridError(coefficients=coefficients,nd=nd)
shockCapturing = ADR.ShockCapturing(coefficients,nd,shockCapturingFactor=0.99,lag=False)
#nonlinear solver choices
multilevelNonlinearSolver = Newton
levelNonlinearSolver = Newton
#linear problem so force 1 iteration allowed
maxNonlinearIts = 10
maxLineSearches = 1
fullNewtonFlag = True
#absolute nonlinear solver residual tolerance
nl_atol_res = 1.0e-6
#relative nonlinear solver convergence tolerance as a function of h
#(i.e., tighten relative convergence test as we refine)
tolFac = 0.0
#matrix type
matrix = SparseMatrix
#convenience flag
parallel = True
if parallel:
multilevelLinearSolver = KSP_petsc4py
    #for petsc do things like
#"-ksp_type cg -pc_type asm -pc_asm_type basic -ksp_atol 1.0e-10 -ksp_rtol 1.0e-10 -ksp_monitor_draw" or
#-pc_type lu -pc_factor_mat_solver_package
#can also set -pc_asm_overlap 2 with default asm type (restrict)
levelLinearSolver = KSP_petsc4py#
#for petsc do things like
#"-ksp_type cg -pc_type asm -pc_asm_type basic -ksp_atol 1.0e-10 -ksp_rtol 1.0e-10 -ksp_monitor_draw" or
#-pc_type lu -pc_factor_mat_solver_package
#can also set -pc_asm_overlap 2 with default asm type (restrict)
#levelLinearSolver = PETSc#
#pick number of layers to use in overlap
nLayersOfOverlapForParallel = 0
#type of partition
parallelPartitioningType = MeshParallelPartitioningTypes.node
#parallelPartitioningType = MeshParallelPartitioningTypes.element
#have to have a numerical flux in parallel
numericalFluxType = ADR.NumericalFlux
#for true residual test
linearSolverConvergenceTest = 'r-true'
#to allow multiple models to set different ksp options
#linear_solver_options_prefix = 'poisson_'
linearSmoother = None
else:
multilevelLinearSolver = LU
levelLinearSolver = LU
numericalFluxType = ADR.NumericalFlux
#linear solver relative convergence test
linTolFac = 0.0
#linear solver absolute convergence test
l_atol_res = 1.0e-8
#conservativeFlux = {0:'pwl'}
| [
"past.utils.old_div"
] | [((698, 719), 'past.utils.old_div', 'old_div', (['(he ** 3)', '(6.0)'], {}), '(he ** 3, 6.0)\n', (705, 719), False, 'from past.utils import old_div\n')] |
# This script is a demo of basic prediction operations with GCP AutoML Tables models. #
# Reference: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/tables/tables/automl/automl_tables_predict.py #
import argparse
import os
def predict(project_id, compute_region, model_id, file_path):
"""
Make a prediction.
    # project_id: the Project ID shown in the GCP Console
    # compute_region: only 'us-central1' works now
    # model_id: the table id plus today's date
    # file_path: path of the data to run prediction on
"""
from google.cloud import automl_v1beta1 as automl
import pandas as pd
import json
automl_client = automl.AutoMlClient()
# Get the full path of the model.
model_full_id = automl_client.model_path(
project_id, compute_region, model_id
)
# Create client for prediction service.
prediction_client = automl.PredictionServiceClient()
params = {}
#prepare the payload, each row is one payload
df = pd.read_csv(file_path)
df = df.drop('price_per_mile', axis=1)
df = df.values
for i in range(len(df)):
values = df[i].tolist()
payload = {
"row": {
"values": values
}
}
data = {}
data['payload'] = payload
print(data)
with open('request.json', 'w') as outfile:
json.dump(data, outfile)
# Query model
import subprocess
bashCommand = "curl -X POST -H 'Content-Type: application/json' \
-H \"Authorization: Bearer $(gcloud auth application-default print-access-token)\" \
https://automl.googleapis.com/v1beta1/projects/hackathon1-183523/locations/us-central1/models/TBL2266814744474157056:predict \
-d @request.json"
output = subprocess.call(bashCommand, shell=True)
print(output)
def batch_predict(project_id, compute_region, model_id, input_path, output_path):
"""
Make a batch of predictions.
    # project_id: the Project ID shown in the GCP Console
    # compute_region: only 'us-central1' works now
    # model_id: the table id plus today's date
    # input_path: path of the data to run prediction on
    # output_path: path to store the predictions
"""
from google.cloud import automl_v1beta1 as automl
import csv
automl_client = automl.AutoMlClient()
# Get the full path of the model.
model_full_id = automl_client.model_path(
project_id, compute_region, model_id
)
# Create client for prediction service.
prediction_client = automl.PredictionServiceClient()
if input_path.startswith('bq'):
input_config = {"bigquery_source": {"input_uri": input_path}}
else:
# Get the multiple Google Cloud Storage URIs.
        input_uris = [uri.strip() for uri in input_path.split(",")]
input_config = {"gcs_source": {"input_uris": input_uris}}
if output_path.startswith('bq'):
output_config = {"bigquery_destination": {"output_uri": output_path}}
else:
# Get the multiple Google Cloud Storage URIs.
        output_uris = [uri.strip() for uri in output_path.split(",")]
output_config = {"gcs_destination": {"output_uris": output_uris}}
# Query model
response = prediction_client.batch_predict(
model_full_id, input_config, output_config)
print("Making batch prediction... ")
try:
result = response.result()
except:
# Hides Any to BatchPredictResult error.
pass
print("Batch prediction complete.\n{}".format(response.metadata))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")
predict_parser = subparsers.add_parser("predict", help='online prediction')
predict_parser.add_argument("--model_id")
predict_parser.add_argument("--file_path")
batch_predict_parser = subparsers.add_parser("batch_predict", help='batch prediction')
batch_predict_parser.add_argument("--model_id")
batch_predict_parser.add_argument("--input_path")
batch_predict_parser.add_argument("--output_path")
args = parser.parse_args()
project_id = "hackathon1-183523"
compute_region = "us-central1"
if args.command == "predict":
predict(
project_id,
compute_region,
args.model_id,
args.file_path
)
if args.command == "batch_predict":
batch_predict(
project_id,
compute_region,
args.model_id,
args.input_path,
args.output_path,
)
| [
"argparse.ArgumentParser",
"pandas.read_csv",
"subprocess.call",
"google.cloud.automl_v1beta1.PredictionServiceClient",
"google.cloud.automl_v1beta1.AutoMlClient",
"json.dump"
] | [((669, 690), 'google.cloud.automl_v1beta1.AutoMlClient', 'automl.AutoMlClient', ([], {}), '()\n', (688, 690), True, 'from google.cloud import automl_v1beta1 as automl\n'), ((896, 928), 'google.cloud.automl_v1beta1.PredictionServiceClient', 'automl.PredictionServiceClient', ([], {}), '()\n', (926, 928), True, 'from google.cloud import automl_v1beta1 as automl\n'), ((1009, 1031), 'pandas.read_csv', 'pd.read_csv', (['file_path'], {}), '(file_path)\n', (1020, 1031), True, 'import pandas as pd\n'), ((2400, 2421), 'google.cloud.automl_v1beta1.AutoMlClient', 'automl.AutoMlClient', ([], {}), '()\n', (2419, 2421), True, 'from google.cloud import automl_v1beta1 as automl\n'), ((2627, 2659), 'google.cloud.automl_v1beta1.PredictionServiceClient', 'automl.PredictionServiceClient', ([], {}), '()\n', (2657, 2659), True, 'from google.cloud import automl_v1beta1 as automl\n'), ((3666, 3691), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (3689, 3691), False, 'import argparse\n'), ((1856, 1896), 'subprocess.call', 'subprocess.call', (['bashCommand'], {'shell': '(True)'}), '(bashCommand, shell=True)\n', (1871, 1896), False, 'import subprocess\n'), ((1420, 1444), 'json.dump', 'json.dump', (['data', 'outfile'], {}), '(data, outfile)\n', (1429, 1444), False, 'import json\n')] |
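One detail worth calling out in the batch path: `str.split` returns a list, and a list has no `.strip` method, so trimming has to happen per element. A small helper (hypothetical name, not part of the script above) that does this safely:

```python
def parse_uris(raw):
    """Split a comma-separated URI list and trim whitespace around each entry.

    str.split() returns a list, which has no .strip() method, so each
    element must be stripped individually; empty entries are dropped.
    """
    return [uri.strip() for uri in raw.split(",") if uri.strip()]

print(parse_uris("gs://bucket/a.csv, gs://bucket/b.csv"))
```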
from django import template
from django.contrib.contenttypes.models import ContentType
from ..models import Comment
from ..forms import CommentForm
register = template.Library()
@register.simple_tag
def get_comment_count(obj):
content_type = ContentType.objects.get_for_model(obj)
return Comment.objects.filter(content_type=content_type, object_id=obj.pk).count()
@register.simple_tag
def get_comment_form(obj):
content_type = ContentType.objects.get_for_model(obj)
form = CommentForm(initial={
'content_type': content_type.model,
'object_id': obj.pk,
'reply_comment_id': 0})
return form
@register.simple_tag
def get_comment_list(obj):
content_type = ContentType.objects.get_for_model(obj)
comments = Comment.objects.filter(content_type=content_type, object_id=obj.pk, parent=None)
return comments.order_by('-comment_time')
| [
"django.contrib.contenttypes.models.ContentType.objects.get_for_model",
"django.template.Library"
] | [((161, 179), 'django.template.Library', 'template.Library', ([], {}), '()\n', (177, 179), False, 'from django import template\n'), ((249, 287), 'django.contrib.contenttypes.models.ContentType.objects.get_for_model', 'ContentType.objects.get_for_model', (['obj'], {}), '(obj)\n', (282, 287), False, 'from django.contrib.contenttypes.models import ContentType\n'), ((443, 481), 'django.contrib.contenttypes.models.ContentType.objects.get_for_model', 'ContentType.objects.get_for_model', (['obj'], {}), '(obj)\n', (476, 481), False, 'from django.contrib.contenttypes.models import ContentType\n'), ((718, 756), 'django.contrib.contenttypes.models.ContentType.objects.get_for_model', 'ContentType.objects.get_for_model', (['obj'], {}), '(obj)\n', (751, 756), False, 'from django.contrib.contenttypes.models import ContentType\n')] |
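The `parent=None` filter in `get_comment_list` returns only top-level comments; replies hang off their parents. A plain-Python sketch of that threading (no Django required; the `(id, parent_id, text)` row layout is illustrative):

```python
def thread_comments(rows):
    """Group flat (id, parent_id, text) rows into top-level comments with replies."""
    by_parent = {}
    for cid, parent_id, text in rows:
        by_parent.setdefault(parent_id, []).append((cid, text))
    # Top-level comments are the ones with no parent (parent_id is None).
    return [{"id": cid, "text": text, "replies": by_parent.get(cid, [])}
            for cid, text in by_parent.get(None, [])]

threads = thread_comments([(1, None, "root"), (2, 1, "reply"), (3, None, "other")])
```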
from qsim.qcircuit import QCircuit
from qsim.qconstants import H, I, X
circuit = QCircuit()
circuit.addQubits(0, 0, 0, 1)
circuit.addGates([H, H, H, H])
# BEGIN Uf - function constant
# circuit.addGate(X, 3)
# END Uf
# BEGIN Uf - function balanced
circuit.addToffoli([0, 1], 3)
circuit.addCNOT(2, 3)
# END Uf
circuit.addGates([H, H, H, I])
circuit.simulate()
state = circuit.measure()
print(state) | [
"qsim.qcircuit.QCircuit"
] | [((81, 91), 'qsim.qcircuit.QCircuit', 'QCircuit', ([], {}), '()\n', (89, 91), False, 'from qsim.qcircuit import QCircuit\n')] |
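The circuit above is Deutsch–Jozsa on three input qubits: the commented-out X gate would implement a constant oracle, while the Toffoli + CNOT pair computes f(a, b, c) = (a AND b) XOR c, which is balanced. A classical check of that property, which the quantum circuit decides with a single oracle query:

```python
def is_balanced(f, n):
    """True if f: {0,1}^n -> {0,1} outputs 1 on exactly half of all inputs."""
    ones = sum(f(x) for x in range(2 ** n))
    return ones == 2 ** (n - 1)

# Oracle from the circuit above, with bits a, b, c packed into an integer x.
def oracle(x):
    a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (a & b) ^ c

print(is_balanced(oracle, 3))  # True
```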
import pytest
from model.contact import ContactInfo
from model.group import Group
import random
def test_del_contact_from_group(app, ormdb):
group = None
    # check that at least one contact exists
with pytest.allure.step('Precondition: check that at least 1 contact exist or create'):
if len(ormdb.get_contact_list()) == 0:
app.contact.create(
ContactInfo(firstname="KirillPrecondition", middlename="", lastname="Sukhomlin", nickname="kisuro",
title="mr", company="Deutsche Telekom", address="Piter", home="7000111",
mobile="7000112", work="7000113", fax="7000114", email="<EMAIL>",
email2="<EMAIL>", email3="<EMAIL>", homepage="www.spb.com", bday="26",
bmonth="June", byear="1982", aday="3", amonth="April", ayear="1990",
address2="AddressSecondary", phone2="7555555", notes="DrinkMe"))
    # check that at least one group exists
with pytest.allure.step('Precondition: check that at least 1 group exist or create'):
if len(ormdb.get_group_list()) == 0:
app.group.create(
Group(name="testGroupPrecondition", header="groupHeaderPrecondition", footer="groupFooterPrecondition"))
    # pick a random contact from the db
with pytest.allure.step('Given a contact'):
contact = random.choice(ormdb.get_contact_list())
    # for the selected contact, check that it is linked to a group; if not, add it
with pytest.allure.step('When check that contact in group OR add'):
list_of_groups_for_contact = ormdb.get_groups_for_contact(contact)
if (len(list_of_groups_for_contact)) == 0:
group = random.choice(ormdb.get_group_list())
app.contact.add_to_group(contact, group)
else:
            # determine the groups the contact belongs to and pick a random one
index = random.randrange(len(list_of_groups_for_contact))
group = list_of_groups_for_contact[index]
with pytest.allure.step('When remove contact %s from group %s' % (contact, group)):
app.contact.remove_from_group(contact, group)
    # verify that after removal the contact no longer belongs to that group (db)
with pytest.allure.step('Then check that contact %s removed from group %s' % (contact, group)):
assert group not in ormdb.get_groups_for_contact(contact)
def test_add_contact_to_group(app, ormdb):
exception_group_list = []
    # check that at least one contact exists
with pytest.allure.step('Precondition: check that at least 1 contact exist or create'):
if len(ormdb.get_contact_list()) == 0:
app.contact.create(
ContactInfo(firstname="KirillPrecondition", middlename="", lastname="Sukhomlin", nickname="kisuro",
title="mr", company="Deutsche Telekom", address="Piter", home="7000111",
mobile="7000112", work="7000113", fax="7000114", email="<EMAIL>",
email2="<EMAIL>", email3="<EMAIL>", homepage="www.spb.com", bday="26",
bmonth="June", byear="1982", aday="3", amonth="April", ayear="1990",
address2="AddressSecondary", phone2="7555555", notes="DrinkMe"))
    # check that at least one group exists
with pytest.allure.step('Precondition: check that at least 1 group exist or create'):
if len(ormdb.get_group_list()) == 0:
app.group.create(
Group(name="testGroupPrecondition", header="groupHeaderPrecondition", footer="groupFooterPrecondition"))
    # pick a random contact from the db
with pytest.allure.step('Given a contact'):
contact = random.choice(ormdb.get_contact_list())
    # for the selected contact, find a group it does not belong to:
    # take the list of groups the contact belongs to
with pytest.allure.step('When define group for contact'):
list_of_groups_for_contact = ormdb.get_groups_for_contact(contact)
        # take the list of all groups
list_of_all_groups = ormdb.get_group_list()
        # build the list of groups the contact does not belong to
        # (walk all groups and collect those missing from list_of_groups_for_contact)
for gr in list_of_all_groups:
if gr not in list_of_groups_for_contact:
exception_group_list.append(gr)
        # additionally make sure the contact is not already in every existing group
if len(exception_group_list) == 0:
            # if it is, remove it from one random group
group = list_of_groups_for_contact[random.randrange(len(list_of_groups_for_contact))]
app.contact.remove_from_group(contact, group)
else:
            # pick a random group from the list of groups the contact is not in
            index = random.randrange(len(exception_group_list))
            group = exception_group_list[index]
    # add the contact to the group
with pytest.allure.step('When add contact %s to group %s' % (contact, group)):
app.contact.add_to_group(contact, group)
    # verify that after adding, the contact belongs to the group
with pytest.allure.step('Then check that contact %s added to group %s' % (contact, group)):
assert group in ormdb.get_groups_for_contact(contact)
| [
"model.group.Group",
"model.contact.ContactInfo",
"pytest.allure.step"
] | [((203, 289), 'pytest.allure.step', 'pytest.allure.step', (['"""Precondition: check that at least 1 contact exist or create"""'], {}), "(\n 'Precondition: check that at least 1 contact exist or create')\n", (221, 289), False, 'import pytest\n'), ((1005, 1084), 'pytest.allure.step', 'pytest.allure.step', (['"""Precondition: check that at least 1 group exist or create"""'], {}), "('Precondition: check that at least 1 group exist or create')\n", (1023, 1084), False, 'import pytest\n'), ((1327, 1364), 'pytest.allure.step', 'pytest.allure.step', (['"""Given a contact"""'], {}), "('Given a contact')\n", (1345, 1364), False, 'import pytest\n'), ((1525, 1586), 'pytest.allure.step', 'pytest.allure.step', (['"""When check that contact in group OR add"""'], {}), "('When check that contact in group OR add')\n", (1543, 1586), False, 'import pytest\n'), ((2065, 2142), 'pytest.allure.step', 'pytest.allure.step', (["('When remove contact %s from group %s' % (contact, group))"], {}), "('When remove contact %s from group %s' % (contact, group))\n", (2083, 2142), False, 'import pytest\n'), ((2294, 2388), 'pytest.allure.step', 'pytest.allure.step', (["('Then check that contact %s removed from group %s' % (contact, group))"], {}), "('Then check that contact %s removed from group %s' % (\n contact, group))\n", (2312, 2388), False, 'import pytest\n'), ((2568, 2654), 'pytest.allure.step', 'pytest.allure.step', (['"""Precondition: check that at least 1 contact exist or create"""'], {}), "(\n 'Precondition: check that at least 1 contact exist or create')\n", (2586, 2654), False, 'import pytest\n'), ((3370, 3449), 'pytest.allure.step', 'pytest.allure.step', (['"""Precondition: check that at least 1 group exist or create"""'], {}), "('Precondition: check that at least 1 group exist or create')\n", (3388, 3449), False, 'import pytest\n'), ((3692, 3729), 'pytest.allure.step', 'pytest.allure.step', (['"""Given a contact"""'], {}), "('Given a contact')\n", (3710, 3729), False, 'import 
pytest\n'), ((3920, 3971), 'pytest.allure.step', 'pytest.allure.step', (['"""When define group for contact"""'], {}), "('When define group for contact')\n", (3938, 3971), False, 'import pytest\n'), ((5052, 5124), 'pytest.allure.step', 'pytest.allure.step', (["('When add contact %s to group %s' % (contact, group))"], {}), "('When add contact %s to group %s' % (contact, group))\n", (5070, 5124), False, 'import pytest\n'), ((5246, 5336), 'pytest.allure.step', 'pytest.allure.step', (["('Then check that contact %s added to group %s' % (contact, group))"], {}), "('Then check that contact %s added to group %s' % (\n contact, group))\n", (5264, 5336), False, 'import pytest\n'), ((381, 851), 'model.contact.ContactInfo', 'ContactInfo', ([], {'firstname': '"""KirillPrecondition"""', 'middlename': '""""""', 'lastname': '"""Sukhomlin"""', 'nickname': '"""kisuro"""', 'title': '"""mr"""', 'company': '"""Deutsche Telekom"""', 'address': '"""Piter"""', 'home': '"""7000111"""', 'mobile': '"""7000112"""', 'work': '"""7000113"""', 'fax': '"""7000114"""', 'email': '"""<EMAIL>"""', 'email2': '"""<EMAIL>"""', 'email3': '"""<EMAIL>"""', 'homepage': '"""www.spb.com"""', 'bday': '"""26"""', 'bmonth': '"""June"""', 'byear': '"""1982"""', 'aday': '"""3"""', 'amonth': '"""April"""', 'ayear': '"""1990"""', 'address2': '"""AddressSecondary"""', 'phone2': '"""7555555"""', 'notes': '"""DrinkMe"""'}), "(firstname='KirillPrecondition', middlename='', lastname=\n 'Sukhomlin', nickname='kisuro', title='mr', company='Deutsche Telekom',\n address='Piter', home='7000111', mobile='7000112', work='7000113', fax=\n '7000114', email='<EMAIL>', email2='<EMAIL>', email3='<EMAIL>',\n homepage='www.spb.com', bday='26', bmonth='June', byear='1982', aday=\n '3', amonth='April', ayear='1990', address2='AddressSecondary', phone2=\n '7555555', notes='DrinkMe')\n", (392, 851), False, 'from model.contact import ContactInfo\n'), ((1177, 1284), 'model.group.Group', 'Group', ([], {'name': '"""testGroupPrecondition"""', 
'header': '"""groupHeaderPrecondition"""', 'footer': '"""groupFooterPrecondition"""'}), "(name='testGroupPrecondition', header='groupHeaderPrecondition',\n footer='groupFooterPrecondition')\n", (1182, 1284), False, 'from model.group import Group\n'), ((2746, 3216), 'model.contact.ContactInfo', 'ContactInfo', ([], {'firstname': '"""KirillPrecondition"""', 'middlename': '""""""', 'lastname': '"""Sukhomlin"""', 'nickname': '"""kisuro"""', 'title': '"""mr"""', 'company': '"""Deutsche Telekom"""', 'address': '"""Piter"""', 'home': '"""7000111"""', 'mobile': '"""7000112"""', 'work': '"""7000113"""', 'fax': '"""7000114"""', 'email': '"""<EMAIL>"""', 'email2': '"""<EMAIL>"""', 'email3': '"""<EMAIL>"""', 'homepage': '"""www.spb.com"""', 'bday': '"""26"""', 'bmonth': '"""June"""', 'byear': '"""1982"""', 'aday': '"""3"""', 'amonth': '"""April"""', 'ayear': '"""1990"""', 'address2': '"""AddressSecondary"""', 'phone2': '"""7555555"""', 'notes': '"""DrinkMe"""'}), "(firstname='KirillPrecondition', middlename='', lastname=\n 'Sukhomlin', nickname='kisuro', title='mr', company='Deutsche Telekom',\n address='Piter', home='7000111', mobile='7000112', work='7000113', fax=\n '7000114', email='<EMAIL>', email2='<EMAIL>', email3='<EMAIL>',\n homepage='www.spb.com', bday='26', bmonth='June', byear='1982', aday=\n '3', amonth='April', ayear='1990', address2='AddressSecondary', phone2=\n '7555555', notes='DrinkMe')\n", (2757, 3216), False, 'from model.contact import ContactInfo\n'), ((3542, 3649), 'model.group.Group', 'Group', ([], {'name': '"""testGroupPrecondition"""', 'header': '"""groupHeaderPrecondition"""', 'footer': '"""groupFooterPrecondition"""'}), "(name='testGroupPrecondition', header='groupHeaderPrecondition',\n footer='groupFooterPrecondition')\n", (3547, 3649), False, 'from model.group import Group\n')] |
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-06-23 18:25
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filer', '0004_auto_20160328_1434'),
]
operations = [
migrations.AlterField(
model_name='file',
name='owner',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='owned_files', to=settings.AUTH_USER_MODEL, verbose_name='owner'),
),
migrations.AlterField(
model_name='folder',
name='owner',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='filer_owned_folders', to=settings.AUTH_USER_MODEL, verbose_name='owner'),
),
migrations.AlterField(
model_name='folderpermission',
name='user',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='filer_folder_permissions', to=settings.AUTH_USER_MODEL, verbose_name='user'),
),
]
| [
"django.db.models.ForeignKey"
] | [((461, 636), 'django.db.models.ForeignKey', 'models.ForeignKey', ([], {'blank': '(True)', 'null': '(True)', 'on_delete': 'django.db.models.deletion.SET_NULL', 'related_name': '"""owned_files"""', 'to': 'settings.AUTH_USER_MODEL', 'verbose_name': '"""owner"""'}), "(blank=True, null=True, on_delete=django.db.models.\n deletion.SET_NULL, related_name='owned_files', to=settings.\n AUTH_USER_MODEL, verbose_name='owner')\n", (478, 636), False, 'from django.db import migrations, models\n'), ((747, 930), 'django.db.models.ForeignKey', 'models.ForeignKey', ([], {'blank': '(True)', 'null': '(True)', 'on_delete': 'django.db.models.deletion.SET_NULL', 'related_name': '"""filer_owned_folders"""', 'to': 'settings.AUTH_USER_MODEL', 'verbose_name': '"""owner"""'}), "(blank=True, null=True, on_delete=django.db.models.\n deletion.SET_NULL, related_name='filer_owned_folders', to=settings.\n AUTH_USER_MODEL, verbose_name='owner')\n", (764, 930), False, 'from django.db import migrations, models\n'), ((1050, 1237), 'django.db.models.ForeignKey', 'models.ForeignKey', ([], {'blank': '(True)', 'null': '(True)', 'on_delete': 'django.db.models.deletion.SET_NULL', 'related_name': '"""filer_folder_permissions"""', 'to': 'settings.AUTH_USER_MODEL', 'verbose_name': '"""user"""'}), "(blank=True, null=True, on_delete=django.db.models.\n deletion.SET_NULL, related_name='filer_folder_permissions', to=settings\n .AUTH_USER_MODEL, verbose_name='user')\n", (1067, 1237), False, 'from django.db import migrations, models\n')] |
#!/usr/bin/env python
"""Run a test calculation on localhost.
Usage: ./example_01.py
"""
from os import path
from aiida_shengbte import helpers
from aiida import cmdline, engine, orm
from aiida.plugins import DataFactory, CalculationFactory, WorkflowFactory
import click
import logging
logging.basicConfig(level=logging.INFO)
INPUT_DIR = path.join(path.dirname(path.realpath(__file__)), 'input_files')
Dict = DataFactory('dict')
def test_run(thirdorder_sow_code=None, thirdorder_reap_code=None):
"""Run a calculation on the localhost computer.
Uses test helpers to create AiiDA Code on the fly.
"""
computer = helpers.get_computer()
if not thirdorder_sow_code:
# get code
thirdorder_sow_code = helpers.get_code(
entry_point='thirdorder_vasp_sow', computer=computer)
if not thirdorder_reap_code:
thirdorder_reap_code = helpers.get_code(entry_point='thirdorder_vasp_reap',
computer=computer, prepend_text='find job.* -name vasprun.xml|sort -n|')
# set up calculation
base_incar_dict = {
'PREC': 'Accurate',
'IBRION': 8,
'EDIFF': 1e-8,
'NELMIN': 5,
'NELM': 100,
'ENCUT': 240,
'IALGO': 38,
'ISMEAR': 0,
'SIGMA': 0.1,
'LREAL': False,
'lcharg': False,
'lwave': False,
}
forces_config = {
'code_string': 'vasp@vasp',
'kpoints_density': 0.5, # k-point density,
'potential_family': 'pbe',
'potential_mapping': {'Si': 'Si'},
'options': {
'resources': {'num_machines': 1, 'tot_num_mpiprocs': 4},
'max_wallclock_seconds': 3600 * 10
},
'parser_settings': {
'add_energies': True,
'add_forces': True,
'add_stress': True
},
'parameters': base_incar_dict
}
inputs = {
'structure': helpers.get_test_structure(),
'thirdorder_sow': {
'code': thirdorder_sow_code,
'parameters': Dict(dict={
'supercell_matrix': [3, 3, 3],
'option': 3
})
},
'thirdorder_reap': {
'code': thirdorder_reap_code,
'parameters': Dict(dict={
'supercell_matrix': [3, 3, 3],
'option': 3
})
},
'vasp_settings': Dict(dict={'forces': forces_config}),
# 'clean_workdir': orm.Bool(True),
'metadata': {
'description': "Test job submission with the aiida_shengbte thirdorder plugin",
},
}
logging.error(inputs)
result = engine.run(WorkflowFactory('shengbte.thirdorder'), **inputs)
logging.info(result)
@click.command()
@cmdline.utils.decorators.with_dbenv()
@cmdline.params.options.CODE()
def cli(code):
"""Run example.
    Example usage: $ ./example_01.py --code diff@localhost
Alternative (creates diff@localhost-test code): $ ./example_01.py
    Help: $ ./example_01.py --help
"""
test_run(code)
if __name__ == '__main__':
cli() # pylint: disable = no-value-for-parameter
| [
"logging.basicConfig",
"aiida_shengbte.helpers.get_computer",
"logging.info",
"aiida_shengbte.helpers.get_code",
"os.path.realpath",
"aiida_shengbte.helpers.get_test_structure",
"aiida.cmdline.utils.decorators.with_dbenv",
"aiida.plugins.DataFactory",
"aiida.plugins.WorkflowFactory",
"click.comman... | [((288, 327), 'logging.basicConfig', 'logging.basicConfig', ([], {'level': 'logging.INFO'}), '(level=logging.INFO)\n', (307, 327), False, 'import logging\n'), ((412, 431), 'aiida.plugins.DataFactory', 'DataFactory', (['"""dict"""'], {}), "('dict')\n", (423, 431), False, 'from aiida.plugins import DataFactory, CalculationFactory, WorkflowFactory\n'), ((2754, 2769), 'click.command', 'click.command', ([], {}), '()\n', (2767, 2769), False, 'import click\n'), ((2771, 2808), 'aiida.cmdline.utils.decorators.with_dbenv', 'cmdline.utils.decorators.with_dbenv', ([], {}), '()\n', (2806, 2808), False, 'from aiida import cmdline, engine, orm\n'), ((2810, 2839), 'aiida.cmdline.params.options.CODE', 'cmdline.params.options.CODE', ([], {}), '()\n', (2837, 2839), False, 'from aiida import cmdline, engine, orm\n'), ((632, 654), 'aiida_shengbte.helpers.get_computer', 'helpers.get_computer', ([], {}), '()\n', (652, 654), False, 'from aiida_shengbte import helpers\n'), ((2629, 2650), 'logging.error', 'logging.error', (['inputs'], {}), '(inputs)\n', (2642, 2650), False, 'import logging\n'), ((2730, 2750), 'logging.info', 'logging.info', (['result'], {}), '(result)\n', (2742, 2750), False, 'import logging\n'), ((364, 387), 'os.path.realpath', 'path.realpath', (['__file__'], {}), '(__file__)\n', (377, 387), False, 'from os import path\n'), ((736, 806), 'aiida_shengbte.helpers.get_code', 'helpers.get_code', ([], {'entry_point': '"""thirdorder_vasp_sow"""', 'computer': 'computer'}), "(entry_point='thirdorder_vasp_sow', computer=computer)\n", (752, 806), False, 'from aiida_shengbte import helpers\n'), ((884, 1013), 'aiida_shengbte.helpers.get_code', 'helpers.get_code', ([], {'entry_point': '"""thirdorder_vasp_reap"""', 'computer': 'computer', 'prepend_text': '"""find job.* -name vasprun.xml|sort -n|"""'}), "(entry_point='thirdorder_vasp_reap', computer=computer,\n prepend_text='find job.* -name vasprun.xml|sort -n|')\n", (900, 1013), False, 'from aiida_shengbte import 
helpers\n'), ((1940, 1968), 'aiida_shengbte.helpers.get_test_structure', 'helpers.get_test_structure', ([], {}), '()\n', (1966, 1968), False, 'from aiida_shengbte import helpers\n'), ((2675, 2713), 'aiida.plugins.WorkflowFactory', 'WorkflowFactory', (['"""shengbte.thirdorder"""'], {}), "('shengbte.thirdorder')\n", (2690, 2713), False, 'from aiida.plugins import DataFactory, CalculationFactory, WorkflowFactory\n')] |
import asyncio
import aiohttp.client_exceptions
import wrapt
from botocore.response import ResponseStreamingError, IncompleteReadError, \
ReadTimeoutError
from aiobotocore import parsers
class AioReadTimeoutError(ReadTimeoutError, asyncio.TimeoutError):
pass
class StreamingBody(wrapt.ObjectProxy):
"""Wrapper class for an http response body.
This provides a few additional conveniences that do not exist
in the urllib3 model:
* Set the timeout on the socket (i.e read() timeouts)
* Auto validation of content length, if the amount of bytes
we read does not match the content length, an exception
is raised.
"""
_DEFAULT_CHUNK_SIZE = 1024
def __init__(self, raw_stream, content_length):
super().__init__(raw_stream)
self._self_content_length = content_length
self._self_amount_read = 0
# https://github.com/GrahamDumpleton/wrapt/issues/73
async def __aenter__(self):
return await self.__wrapped__.__aenter__()
async def __aexit__(self, exc_type, exc_val, exc_tb):
return await self.__wrapped__.__aexit__(exc_type, exc_val, exc_tb)
# NOTE: set_socket_timeout was only for when requests didn't support
# read timeouts, so not needed
def tell(self):
return self._self_amount_read
async def read(self, amt=None):
"""Read at most amt bytes from the stream.
If the amt argument is omitted, read all data.
"""
# botocore to aiohttp mapping
try:
chunk = await self.__wrapped__.read(amt if amt is not None else -1)
except asyncio.TimeoutError as e:
raise AioReadTimeoutError(endpoint_url=self.__wrapped__.url,
error=e)
except aiohttp.client_exceptions.ClientConnectionError as e:
raise ResponseStreamingError(error=e)
self._self_amount_read += len(chunk)
if amt is None or (not chunk and amt > 0):
# If the server sends empty contents or
# we ask to read all of the contents, then we know
# we need to verify the content length.
self._verify_content_length()
return chunk
def __aiter__(self):
"""Return an iterator to yield 1k chunks from the raw stream.
"""
return self.iter_chunks(self._DEFAULT_CHUNK_SIZE)
async def __anext__(self):
"""Return the next 1k chunk from the raw stream.
"""
current_chunk = await self.read(self._DEFAULT_CHUNK_SIZE)
if current_chunk:
return current_chunk
raise StopAsyncIteration
anext = __anext__
async def iter_lines(self, chunk_size=1024, keepends=False):
"""Return an iterator to yield lines from the raw stream.
This is achieved by reading chunk of bytes (of size chunk_size) at a
time from the raw stream, and then yielding lines from there.
"""
pending = b''
async for chunk in self.iter_chunks(chunk_size):
lines = (pending + chunk).splitlines(True)
for line in lines[:-1]:
yield line.splitlines(keepends)[0]
pending = lines[-1]
if pending:
yield pending.splitlines(keepends)[0]
async def iter_chunks(self, chunk_size=_DEFAULT_CHUNK_SIZE):
"""Return an iterator to yield chunks of chunk_size bytes from the raw
stream.
"""
while True:
current_chunk = await self.read(chunk_size)
if current_chunk == b"":
break
yield current_chunk
def _verify_content_length(self):
# See: https://github.com/kennethreitz/requests/issues/1855
# Basically, our http library doesn't do this for us, so we have
        # to do this ourselves.
if self._self_content_length is not None and \
self._self_amount_read != int(self._self_content_length):
raise IncompleteReadError(
actual_bytes=self._self_amount_read,
expected_bytes=int(self._self_content_length))
async def get_response(operation_model, http_response):
protocol = operation_model.metadata['protocol']
response_dict = {
'headers': http_response.headers,
'status_code': http_response.status_code,
}
# TODO: Unfortunately, we have to have error logic here.
# If it looks like an error, in the streaming response case we
# need to actually grab the contents.
if response_dict['status_code'] >= 300:
response_dict['body'] = http_response.content
elif operation_model.has_streaming_output:
response_dict['body'] = StreamingBody(
http_response.raw, response_dict['headers'].get('content-length'))
else:
response_dict['body'] = http_response.content
parser = parsers.create_parser(protocol)
if asyncio.iscoroutinefunction(parser.parse):
parsed = await parser.parse(
response_dict, operation_model.output_shape)
else:
parsed = parser.parse(
response_dict, operation_model.output_shape)
return http_response, parsed
| [
"aiobotocore.parsers.create_parser",
"asyncio.iscoroutinefunction",
"botocore.response.ResponseStreamingError"
] | [((4877, 4908), 'aiobotocore.parsers.create_parser', 'parsers.create_parser', (['protocol'], {}), '(protocol)\n', (4898, 4908), False, 'from aiobotocore import parsers\n'), ((4916, 4957), 'asyncio.iscoroutinefunction', 'asyncio.iscoroutinefunction', (['parser.parse'], {}), '(parser.parse)\n', (4943, 4957), False, 'import asyncio\n'), ((1872, 1903), 'botocore.response.ResponseStreamingError', 'ResponseStreamingError', ([], {'error': 'e'}), '(error=e)\n', (1894, 1903), False, 'from botocore.response import ResponseStreamingError, IncompleteReadError, ReadTimeoutError\n')] |
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""TextCNN"""
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor
from mindspore.nn.cell import Cell
import mindspore.ops.functional as F
import mindspore
class SoftmaxCrossEntropyExpand(Cell):
r"""
Computes softmax cross entropy between logits and labels. Implemented by expanded formula.
This is a wrapper of several functions.
.. math::
        \ell(x_i, t_i) = -\log\left(\frac{\exp(x_{t_i})}{\sum_j \exp(x_j)}\right),
where :math:`x_i` is a 1D score Tensor, :math:`t_i` is the target class.
Note:
When argument sparse is set to True, the format of label is the index
range from :math:`0` to :math:`C - 1` instead of one-hot vectors.
Args:
sparse(bool): Specifies whether labels use sparse format or not. Default: False.
Inputs:
- **input_data** (Tensor) - Tensor of shape :math:`(x_1, x_2, ..., x_R)`.
- **label** (Tensor) - Tensor of shape :math:`(y_1, y_2, ..., y_S)`.
Outputs:
Tensor, a scalar tensor including the mean loss.
Examples:
>>> loss = nn.SoftmaxCrossEntropyExpand(sparse=True)
>>> input_data = Tensor(np.ones([64, 512]), dtype=mindspore.float32)
>>> label = Tensor(np.ones([64]), dtype=mindspore.int32)
>>> loss(input_data, label)
"""
def __init__(self, sparse=False):
super(SoftmaxCrossEntropyExpand, self).__init__()
self.exp = P.Exp()
self.reduce_sum = P.ReduceSum(keep_dims=True)
self.onehot = P.OneHot()
self.on_value = Tensor(1.0, mindspore.float32)
self.off_value = Tensor(0.0, mindspore.float32)
self.div = P.Div()
self.log = P.Log()
self.sum_cross_entropy = P.ReduceSum(keep_dims=False)
self.mul = P.Mul()
self.mul2 = P.Mul()
self.cast = P.Cast()
self.reduce_mean = P.ReduceMean(keep_dims=False)
self.sparse = sparse
self.reduce_max = P.ReduceMax(keep_dims=True)
self.sub = P.Sub()
def construct(self, logit, label):
"""
construct
"""
logit_max = self.reduce_max(logit, -1)
exp = self.exp(self.sub(logit, logit_max))
exp_sum = self.reduce_sum(exp, -1)
softmax_result = self.div(exp, exp_sum)
if self.sparse:
label = self.onehot(label, F.shape(logit)[1], self.on_value, self.off_value)
softmax_result_log = self.log(softmax_result)
loss = self.sum_cross_entropy((self.mul(softmax_result_log, label)), -1)
loss = self.mul2(F.scalar_to_array(-1.0), loss)
loss = self.reduce_mean(loss, -1)
return loss
def make_conv_layer(kernel_size):
return nn.Conv2d(in_channels=1, out_channels=96, kernel_size=kernel_size, padding=1,
pad_mode="pad", has_bias=True)
class TextCNN(nn.Cell):
"""
TextCNN architecture
"""
def __init__(self, vocab_len, word_len, num_classes, vec_length):
super(TextCNN, self).__init__()
self.vec_length = vec_length
self.word_len = word_len
self.num_classes = num_classes
self.unsqueeze = P.ExpandDims()
self.embedding = nn.Embedding(vocab_len, self.vec_length, embedding_table='uniform')
self.slice = P.Slice()
self.layer1 = self.make_layer(kernel_height=3)
self.layer2 = self.make_layer(kernel_height=4)
self.layer3 = self.make_layer(kernel_height=5)
self.concat = P.Concat(1)
self.fc = nn.Dense(96*3, self.num_classes)
self.drop = nn.Dropout(keep_prob=0.5)
self.print = P.Print()
self.reducemax = P.ReduceMax(keep_dims=False)
def make_layer(self, kernel_height):
return nn.SequentialCell(
[
make_conv_layer((kernel_height, self.vec_length)), nn.ReLU(),
nn.MaxPool2d(kernel_size=(self.word_len-kernel_height+1, 1)),
]
)
def construct(self, x):
"""
construct
"""
x = self.unsqueeze(x, 1)
x = self.embedding(x)
x1 = self.layer1(x)
x2 = self.layer2(x)
x3 = self.layer3(x)
x1 = self.reducemax(x1, (2, 3))
x2 = self.reducemax(x2, (2, 3))
x3 = self.reducemax(x3, (2, 3))
x = self.concat((x1, x2, x3))
x = self.drop(x)
x = self.fc(x)
return x
| [
"mindspore.ops.operations.Concat",
"mindspore.ops.operations.ReduceSum",
"mindspore.ops.operations.Print",
"mindspore.ops.operations.Sub",
"mindspore.ops.operations.OneHot",
"mindspore.ops.functional.shape",
"mindspore.nn.Dropout",
"mindspore.ops.operations.Mul",
"mindspore.ops.operations.Exp",
"m... | [((3368, 3481), 'mindspore.nn.Conv2d', 'nn.Conv2d', ([], {'in_channels': '(1)', 'out_channels': '(96)', 'kernel_size': 'kernel_size', 'padding': '(1)', 'pad_mode': '"""pad"""', 'has_bias': '(True)'}), "(in_channels=1, out_channels=96, kernel_size=kernel_size, padding=\n 1, pad_mode='pad', has_bias=True)\n", (3377, 3481), True, 'import mindspore.nn as nn\n'), ((2110, 2117), 'mindspore.ops.operations.Exp', 'P.Exp', ([], {}), '()\n', (2115, 2117), True, 'import mindspore.ops.operations as P\n'), ((2144, 2171), 'mindspore.ops.operations.ReduceSum', 'P.ReduceSum', ([], {'keep_dims': '(True)'}), '(keep_dims=True)\n', (2155, 2171), True, 'import mindspore.ops.operations as P\n'), ((2194, 2204), 'mindspore.ops.operations.OneHot', 'P.OneHot', ([], {}), '()\n', (2202, 2204), True, 'import mindspore.ops.operations as P\n'), ((2229, 2259), 'mindspore.Tensor', 'Tensor', (['(1.0)', 'mindspore.float32'], {}), '(1.0, mindspore.float32)\n', (2235, 2259), False, 'from mindspore import Tensor\n'), ((2285, 2315), 'mindspore.Tensor', 'Tensor', (['(0.0)', 'mindspore.float32'], {}), '(0.0, mindspore.float32)\n', (2291, 2315), False, 'from mindspore import Tensor\n'), ((2335, 2342), 'mindspore.ops.operations.Div', 'P.Div', ([], {}), '()\n', (2340, 2342), True, 'import mindspore.ops.operations as P\n'), ((2362, 2369), 'mindspore.ops.operations.Log', 'P.Log', ([], {}), '()\n', (2367, 2369), True, 'import mindspore.ops.operations as P\n'), ((2403, 2431), 'mindspore.ops.operations.ReduceSum', 'P.ReduceSum', ([], {'keep_dims': '(False)'}), '(keep_dims=False)\n', (2414, 2431), True, 'import mindspore.ops.operations as P\n'), ((2451, 2458), 'mindspore.ops.operations.Mul', 'P.Mul', ([], {}), '()\n', (2456, 2458), True, 'import mindspore.ops.operations as P\n'), ((2479, 2486), 'mindspore.ops.operations.Mul', 'P.Mul', ([], {}), '()\n', (2484, 2486), True, 'import mindspore.ops.operations as P\n'), ((2507, 2515), 'mindspore.ops.operations.Cast', 'P.Cast', ([], {}), '()\n', (2513, 2515), 
True, 'import mindspore.ops.operations as P\n'), ((2543, 2572), 'mindspore.ops.operations.ReduceMean', 'P.ReduceMean', ([], {'keep_dims': '(False)'}), '(keep_dims=False)\n', (2555, 2572), True, 'import mindspore.ops.operations as P\n'), ((2628, 2655), 'mindspore.ops.operations.ReduceMax', 'P.ReduceMax', ([], {'keep_dims': '(True)'}), '(keep_dims=True)\n', (2639, 2655), True, 'import mindspore.ops.operations as P\n'), ((2675, 2682), 'mindspore.ops.operations.Sub', 'P.Sub', ([], {}), '()\n', (2680, 2682), True, 'import mindspore.ops.operations as P\n'), ((3810, 3824), 'mindspore.ops.operations.ExpandDims', 'P.ExpandDims', ([], {}), '()\n', (3822, 3824), True, 'import mindspore.ops.operations as P\n'), ((3850, 3917), 'mindspore.nn.Embedding', 'nn.Embedding', (['vocab_len', 'self.vec_length'], {'embedding_table': '"""uniform"""'}), "(vocab_len, self.vec_length, embedding_table='uniform')\n", (3862, 3917), True, 'import mindspore.nn as nn\n'), ((3940, 3949), 'mindspore.ops.operations.Slice', 'P.Slice', ([], {}), '()\n', (3947, 3949), True, 'import mindspore.ops.operations as P\n'), ((4138, 4149), 'mindspore.ops.operations.Concat', 'P.Concat', (['(1)'], {}), '(1)\n', (4146, 4149), True, 'import mindspore.ops.operations as P\n'), ((4169, 4203), 'mindspore.nn.Dense', 'nn.Dense', (['(96 * 3)', 'self.num_classes'], {}), '(96 * 3, self.num_classes)\n', (4177, 4203), True, 'import mindspore.nn as nn\n'), ((4222, 4247), 'mindspore.nn.Dropout', 'nn.Dropout', ([], {'keep_prob': '(0.5)'}), '(keep_prob=0.5)\n', (4232, 4247), True, 'import mindspore.nn as nn\n'), ((4269, 4278), 'mindspore.ops.operations.Print', 'P.Print', ([], {}), '()\n', (4276, 4278), True, 'import mindspore.ops.operations as P\n'), ((4304, 4332), 'mindspore.ops.operations.ReduceMax', 'P.ReduceMax', ([], {'keep_dims': '(False)'}), '(keep_dims=False)\n', (4315, 4332), True, 'import mindspore.ops.operations as P\n'), ((3228, 3251), 'mindspore.ops.functional.scalar_to_array', 'F.scalar_to_array', (['(-1.0)'], {}), 
'(-1.0)\n', (3245, 3251), True, 'import mindspore.ops.functional as F\n'), ((4490, 4499), 'mindspore.nn.ReLU', 'nn.ReLU', ([], {}), '()\n', (4497, 4499), True, 'import mindspore.nn as nn\n'), ((4517, 4581), 'mindspore.nn.MaxPool2d', 'nn.MaxPool2d', ([], {'kernel_size': '(self.word_len - kernel_height + 1, 1)'}), '(kernel_size=(self.word_len - kernel_height + 1, 1))\n', (4529, 4581), True, 'import mindspore.nn as nn\n'), ((3017, 3031), 'mindspore.ops.functional.shape', 'F.shape', (['logit'], {}), '(logit)\n', (3024, 3031), True, 'import mindspore.ops.functional as F\n')] |
# Copyright 2016-2017 Open Source Robotics Foundation, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any
from typing import Callable
from typing import Optional
from typing import TypeVar
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile
from rclpy.qos_event import SubscriptionEventCallbacks
from rclpy.qos_event import UnsupportedEventTypeError
from rclpy.utilities import get_rmw_implementation_identifier
from ros2cli.node.strategy import add_arguments as add_strategy_node_arguments
from ros2cli.node.strategy import NodeStrategy
from ros2topic.api import add_qos_arguments_to_argument_parser
from ros2topic.api import get_msg_class
from ros2topic.api import qos_profile_from_short_keys
from ros2topic.api import TopicNameCompleter
from ros2topic.api import unsigned_int
from ros2topic.verb import VerbExtension
from rosidl_runtime_py import message_to_csv
from rosidl_runtime_py import message_to_yaml
from rosidl_runtime_py.utilities import get_message
DEFAULT_TRUNCATE_LENGTH = 128
MsgType = TypeVar('MsgType')
class EchoVerb(VerbExtension):
"""Output messages from a topic."""
def add_arguments(self, parser, cli_name):
add_strategy_node_arguments(parser)
arg = parser.add_argument(
'topic_name',
help="Name of the ROS topic to listen to (e.g. '/chatter')")
arg.completer = TopicNameCompleter(
include_hidden_topics_key='include_hidden_topics')
parser.add_argument(
'message_type', nargs='?',
help="Type of the ROS message (e.g. 'std_msgs/msg/String')")
add_qos_arguments_to_argument_parser(
parser, is_publisher=False, default_preset='sensor_data')
parser.add_argument(
'--csv', action='store_true',
help='Output all recursive fields separated by commas (e.g. for '
'plotting)')
parser.add_argument(
'--full-length', '-f', action='store_true',
help='Output all elements for arrays, bytes, and string with a '
"length > '--truncate-length', by default they are truncated "
"after '--truncate-length' elements with '...''")
parser.add_argument(
'--truncate-length', '-l', type=unsigned_int, default=DEFAULT_TRUNCATE_LENGTH,
help='The length to truncate arrays, bytes, and string to '
'(default: %d)' % DEFAULT_TRUNCATE_LENGTH)
parser.add_argument(
'--no-arr', action='store_true', help="Don't print array fields of messages")
parser.add_argument(
'--no-str', action='store_true', help="Don't print string fields of messages")
parser.add_argument(
'--lost-messages', action='store_true', help='Report when a message is lost')
parser.add_argument(
'--raw', action='store_true', help='Echo the raw binary representation')
def main(self, *, args):
return main(args)
def main(args):
if not args.csv:
truncate_length = args.truncate_length if not args.full_length else None
callback = subscriber_cb(truncate_length, args.no_arr, args.no_str)
else:
truncate_length = args.truncate_length if not args.full_length else None
callback = subscriber_cb_csv(truncate_length, args.no_arr, args.no_str)
qos_profile = qos_profile_from_short_keys(
args.qos_profile, reliability=args.qos_reliability, durability=args.qos_durability,
depth=args.qos_depth, history=args.qos_history)
with NodeStrategy(args) as node:
if args.message_type is None:
message_type = get_msg_class(
node, args.topic_name, include_hidden_topics=True)
else:
try:
message_type = get_message(args.message_type)
except (AttributeError, ModuleNotFoundError, ValueError):
raise RuntimeError('The passed message type is invalid')
if message_type is None:
raise RuntimeError(
'Could not determine the type for the passed topic')
subscriber(
node,
args.topic_name,
message_type,
callback,
qos_profile,
args.lost_messages,
args.raw)
def subscriber(
node: Node,
topic_name: str,
message_type: MsgType,
callback: Callable[[MsgType], Any],
qos_profile: QoSProfile,
report_lost_messages: bool,
raw: bool
) -> Optional[str]:
"""Initialize a node with a single subscription and spin."""
event_callbacks = None
if report_lost_messages:
event_callbacks = SubscriptionEventCallbacks(
message_lost=message_lost_event_callback)
try:
node.create_subscription(
message_type,
topic_name,
callback,
qos_profile,
event_callbacks=event_callbacks,
raw=raw)
except UnsupportedEventTypeError:
assert report_lost_messages
print(
f"The rmw implementation '{get_rmw_implementation_identifier()}'"
' does not support reporting lost messages'
)
rclpy.spin(node)
def subscriber_cb(truncate_length, noarr, nostr):
def cb(msg):
nonlocal truncate_length, noarr, nostr
if isinstance(msg, bytes):
print(msg, end='\n---\n')
else:
print(
message_to_yaml(
msg, truncate_length=truncate_length, no_arr=noarr, no_str=nostr),
end='---\n')
return cb
def subscriber_cb_csv(truncate_length, noarr, nostr):
def cb(msg):
nonlocal truncate_length, noarr, nostr
if isinstance(msg, bytes):
print(msg)
else:
print(message_to_csv(msg, truncate_length=truncate_length, no_arr=noarr, no_str=nostr))
return cb
def message_lost_event_callback(message_lost_status):
print(
'A message was lost!!!\n\ttotal count change:'
f'{message_lost_status.total_count_change}'
f'\n\ttotal count: {message_lost_status.total_count}',
end='---\n'
)
| [
"rclpy.utilities.get_rmw_implementation_identifier",
"rosidl_runtime_py.utilities.get_message",
"rclpy.spin",
"rosidl_runtime_py.message_to_csv",
"ros2topic.api.get_msg_class",
"ros2cli.node.strategy.NodeStrategy",
"ros2topic.api.qos_profile_from_short_keys",
"rosidl_runtime_py.message_to_yaml",
"ty... | [((1551, 1569), 'typing.TypeVar', 'TypeVar', (['"""MsgType"""'], {}), "('MsgType')\n", (1558, 1569), False, 'from typing import TypeVar\n'), ((3887, 4055), 'ros2topic.api.qos_profile_from_short_keys', 'qos_profile_from_short_keys', (['args.qos_profile'], {'reliability': 'args.qos_reliability', 'durability': 'args.qos_durability', 'depth': 'args.qos_depth', 'history': 'args.qos_history'}), '(args.qos_profile, reliability=args.\n qos_reliability, durability=args.qos_durability, depth=args.qos_depth,\n history=args.qos_history)\n', (3914, 4055), False, 'from ros2topic.api import qos_profile_from_short_keys\n'), ((5703, 5719), 'rclpy.spin', 'rclpy.spin', (['node'], {}), '(node)\n', (5713, 5719), False, 'import rclpy\n'), ((1699, 1734), 'ros2cli.node.strategy.add_arguments', 'add_strategy_node_arguments', (['parser'], {}), '(parser)\n', (1726, 1734), True, 'from ros2cli.node.strategy import add_arguments as add_strategy_node_arguments\n'), ((1894, 1963), 'ros2topic.api.TopicNameCompleter', 'TopicNameCompleter', ([], {'include_hidden_topics_key': '"""include_hidden_topics"""'}), "(include_hidden_topics_key='include_hidden_topics')\n", (1912, 1963), False, 'from ros2topic.api import TopicNameCompleter\n'), ((2126, 2224), 'ros2topic.api.add_qos_arguments_to_argument_parser', 'add_qos_arguments_to_argument_parser', (['parser'], {'is_publisher': '(False)', 'default_preset': '"""sensor_data"""'}), "(parser, is_publisher=False,\n default_preset='sensor_data')\n", (2162, 2224), False, 'from ros2topic.api import add_qos_arguments_to_argument_parser\n'), ((4073, 4091), 'ros2cli.node.strategy.NodeStrategy', 'NodeStrategy', (['args'], {}), '(args)\n', (4085, 4091), False, 'from ros2cli.node.strategy import NodeStrategy\n'), ((5178, 5246), 'rclpy.qos_event.SubscriptionEventCallbacks', 'SubscriptionEventCallbacks', ([], {'message_lost': 'message_lost_event_callback'}), '(message_lost=message_lost_event_callback)\n', (5204, 5246), False, 'from rclpy.qos_event import 
SubscriptionEventCallbacks\n'), ((4166, 4230), 'ros2topic.api.get_msg_class', 'get_msg_class', (['node', 'args.topic_name'], {'include_hidden_topics': '(True)'}), '(node, args.topic_name, include_hidden_topics=True)\n', (4179, 4230), False, 'from ros2topic.api import get_msg_class\n'), ((4310, 4340), 'rosidl_runtime_py.utilities.get_message', 'get_message', (['args.message_type'], {}), '(args.message_type)\n', (4321, 4340), False, 'from rosidl_runtime_py.utilities import get_message\n'), ((5958, 6044), 'rosidl_runtime_py.message_to_yaml', 'message_to_yaml', (['msg'], {'truncate_length': 'truncate_length', 'no_arr': 'noarr', 'no_str': 'nostr'}), '(msg, truncate_length=truncate_length, no_arr=noarr, no_str=\n nostr)\n', (5973, 6044), False, 'from rosidl_runtime_py import message_to_yaml\n'), ((6315, 6400), 'rosidl_runtime_py.message_to_csv', 'message_to_csv', (['msg'], {'truncate_length': 'truncate_length', 'no_arr': 'noarr', 'no_str': 'nostr'}), '(msg, truncate_length=truncate_length, no_arr=noarr, no_str=nostr\n )\n', (6329, 6400), False, 'from rosidl_runtime_py import message_to_csv\n'), ((5594, 5629), 'rclpy.utilities.get_rmw_implementation_identifier', 'get_rmw_implementation_identifier', ([], {}), '()\n', (5627, 5629), False, 'from rclpy.utilities import get_rmw_implementation_identifier\n')] |
# Code is generated: DO NOT EDIT
# Copyright 2019 Machine Zone, Inc. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
from kubespec import context
from kubespec import types
from kubespec.k8s import base
from kubespec.k8s import resource
from kubespec.k8s import v1 as k8sv1
from kubespec.k8s.meta import v1 as metav1
from typeguard import check_type, typechecked
from typing import Any, Dict, List, Optional
# MetricSourceType indicates the type of metric.
MetricSourceType = base.Enum(
"MetricSourceType",
{
# External is a global metric that is not associated
# with any Kubernetes object. It allows autoscaling based on information
# coming from components running outside of cluster
# (for example length of queue in cloud messaging service, or
# QPS from loadbalancer running outside of cluster).
"External": "External",
# Object is a metric describing a kubernetes object
# (for example, hits-per-second on an Ingress object).
"Object": "Object",
# Pods is a metric describing each pod in the current scale
# target (for example, transactions-processed-per-second). The values
# will be averaged together before being compared to the target value.
"Pods": "Pods",
# Resource is a resource metric known to Kubernetes, as
# specified in requests and limits, describing each pod in the current
# scale target (e.g. CPU or memory). Such metrics are built in to
# Kubernetes, and have special scaling options on top of those available
# to normal per-pod metrics (the "pods" source).
"Resource": "Resource",
},
)
# MetricTargetType specifies the type of metric being targeted, and should be either
# "Value", "AverageValue", or "Utilization"
MetricTargetType = base.Enum(
"MetricTargetType",
{
        # AverageValue declares a MetricTarget is an AverageValue
"AverageValue": "AverageValue",
# Utilization declares a MetricTarget is an AverageUtilization value
"Utilization": "Utilization",
# Value declares a MetricTarget is a raw value
"Value": "Value",
},
)
class CrossVersionObjectReference(types.Object):
"""
CrossVersionObjectReference contains enough information to let you identify the referred resource.
"""
@context.scoped
@typechecked
def __init__(self, kind: str = "", name: str = "", api_version: str = None):
super().__init__()
self.__kind = kind
self.__name = name
self.__api_version = api_version
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
kind = self.kind()
check_type("kind", kind, str)
v["kind"] = kind
name = self.name()
check_type("name", name, str)
v["name"] = name
api_version = self.api_version()
check_type("api_version", api_version, Optional[str])
if api_version: # omit empty
v["apiVersion"] = api_version
return v
def kind(self) -> str:
"""
        Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return self.__kind
def name(self) -> str:
"""
Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
"""
return self.__name
def api_version(self) -> Optional[str]:
"""
API version of the referent
"""
return self.__api_version
class MetricIdentifier(types.Object):
"""
MetricIdentifier defines the name and optionally selector for a metric
"""
@context.scoped
@typechecked
def __init__(self, name: str = "", selector: "metav1.LabelSelector" = None):
super().__init__()
self.__name = name
self.__selector = selector
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
name = self.name()
check_type("name", name, str)
v["name"] = name
selector = self.selector()
check_type("selector", selector, Optional["metav1.LabelSelector"])
if selector is not None: # omit empty
v["selector"] = selector
return v
def name(self) -> str:
"""
name is the name of the given metric
"""
return self.__name
def selector(self) -> Optional["metav1.LabelSelector"]:
"""
selector is the string-encoded form of a standard kubernetes label selector for the given metric
When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping.
When unset, just the metricName will be used to gather metrics.
"""
return self.__selector
class MetricTarget(types.Object):
"""
MetricTarget defines the target value, average value, or average utilization of a specific metric
"""
@context.scoped
@typechecked
def __init__(
self,
type: MetricTargetType = None,
value: "resource.Quantity" = None,
average_value: "resource.Quantity" = None,
average_utilization: int = None,
):
super().__init__()
self.__type = type
self.__value = value
self.__average_value = average_value
self.__average_utilization = average_utilization
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
type = self.type()
check_type("type", type, MetricTargetType)
v["type"] = type
value = self.value()
check_type("value", value, Optional["resource.Quantity"])
if value is not None: # omit empty
v["value"] = value
average_value = self.average_value()
check_type("average_value", average_value, Optional["resource.Quantity"])
if average_value is not None: # omit empty
v["averageValue"] = average_value
average_utilization = self.average_utilization()
check_type("average_utilization", average_utilization, Optional[int])
if average_utilization is not None: # omit empty
v["averageUtilization"] = average_utilization
return v
def type(self) -> MetricTargetType:
"""
type represents whether the metric type is Utilization, Value, or AverageValue
"""
return self.__type
def value(self) -> Optional["resource.Quantity"]:
"""
value is the target value of the metric (as a quantity).
"""
return self.__value
def average_value(self) -> Optional["resource.Quantity"]:
"""
averageValue is the target value of the average of the
metric across all relevant pods (as a quantity)
"""
return self.__average_value
def average_utilization(self) -> Optional[int]:
"""
averageUtilization is the target value of the average of the
resource metric across all relevant pods, represented as a percentage of
the requested value of the resource for the pods.
Currently only valid for Resource metric source type
"""
return self.__average_utilization
class ExternalMetricSource(types.Object):
"""
ExternalMetricSource indicates how to scale on a metric not associated with
any Kubernetes object (for example length of queue in cloud
messaging service, or QPS from loadbalancer running outside of cluster).
"""
@context.scoped
@typechecked
def __init__(
self, metric: "MetricIdentifier" = None, target: "MetricTarget" = None
):
super().__init__()
self.__metric = metric if metric is not None else MetricIdentifier()
self.__target = target if target is not None else MetricTarget()
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
metric = self.metric()
check_type("metric", metric, "MetricIdentifier")
v["metric"] = metric
target = self.target()
check_type("target", target, "MetricTarget")
v["target"] = target
return v
def metric(self) -> "MetricIdentifier":
"""
metric identifies the target metric by name and selector
"""
return self.__metric
def target(self) -> "MetricTarget":
"""
target specifies the target value for the given metric
"""
return self.__target
class ObjectMetricSource(types.Object):
"""
ObjectMetricSource indicates how to scale on a metric describing a
kubernetes object (for example, hits-per-second on an Ingress object).
"""
@context.scoped
@typechecked
def __init__(
self,
described_object: "CrossVersionObjectReference" = None,
target: "MetricTarget" = None,
metric: "MetricIdentifier" = None,
):
super().__init__()
self.__described_object = (
described_object
if described_object is not None
else CrossVersionObjectReference()
)
self.__target = target if target is not None else MetricTarget()
self.__metric = metric if metric is not None else MetricIdentifier()
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
described_object = self.described_object()
check_type("described_object", described_object, "CrossVersionObjectReference")
v["describedObject"] = described_object
target = self.target()
check_type("target", target, "MetricTarget")
v["target"] = target
metric = self.metric()
check_type("metric", metric, "MetricIdentifier")
v["metric"] = metric
return v
def described_object(self) -> "CrossVersionObjectReference":
return self.__described_object
def target(self) -> "MetricTarget":
"""
target specifies the target value for the given metric
"""
return self.__target
def metric(self) -> "MetricIdentifier":
"""
metric identifies the target metric by name and selector
"""
return self.__metric
class PodsMetricSource(types.Object):
"""
PodsMetricSource indicates how to scale on a metric describing each pod in
the current scale target (for example, transactions-processed-per-second).
The values will be averaged together before being compared to the target
value.
"""
@context.scoped
@typechecked
def __init__(
self, metric: "MetricIdentifier" = None, target: "MetricTarget" = None
):
super().__init__()
self.__metric = metric if metric is not None else MetricIdentifier()
self.__target = target if target is not None else MetricTarget()
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
metric = self.metric()
check_type("metric", metric, "MetricIdentifier")
v["metric"] = metric
target = self.target()
check_type("target", target, "MetricTarget")
v["target"] = target
return v
def metric(self) -> "MetricIdentifier":
"""
metric identifies the target metric by name and selector
"""
return self.__metric
def target(self) -> "MetricTarget":
"""
target specifies the target value for the given metric
"""
return self.__target
class ResourceMetricSource(types.Object):
"""
ResourceMetricSource indicates how to scale on a resource metric known to
Kubernetes, as specified in requests and limits, describing each pod in the
current scale target (e.g. CPU or memory). The values will be averaged
together before being compared to the target. Such metrics are built in to
Kubernetes, and have special scaling options on top of those available to
normal per-pod metrics using the "pods" source. Only one "target" type
should be set.
"""
@context.scoped
@typechecked
def __init__(self, name: k8sv1.ResourceName = None, target: "MetricTarget" = None):
super().__init__()
self.__name = name
self.__target = target if target is not None else MetricTarget()
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
name = self.name()
check_type("name", name, k8sv1.ResourceName)
v["name"] = name
target = self.target()
check_type("target", target, "MetricTarget")
v["target"] = target
return v
def name(self) -> k8sv1.ResourceName:
"""
name is the name of the resource in question.
"""
return self.__name
def target(self) -> "MetricTarget":
"""
target specifies the target value for the given metric
"""
return self.__target
class MetricSpec(types.Object):
"""
MetricSpec specifies how to scale based on a single metric
(only `type` and one other matching field should be set at once).
"""
@context.scoped
@typechecked
def __init__(
self,
type: MetricSourceType = None,
object: "ObjectMetricSource" = None,
pods: "PodsMetricSource" = None,
resource: "ResourceMetricSource" = None,
external: "ExternalMetricSource" = None,
):
super().__init__()
self.__type = type
self.__object = object
self.__pods = pods
self.__resource = resource
self.__external = external
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
type = self.type()
check_type("type", type, MetricSourceType)
v["type"] = type
object = self.object()
check_type("object", object, Optional["ObjectMetricSource"])
if object is not None: # omit empty
v["object"] = object
pods = self.pods()
check_type("pods", pods, Optional["PodsMetricSource"])
if pods is not None: # omit empty
v["pods"] = pods
resource = self.resource()
check_type("resource", resource, Optional["ResourceMetricSource"])
if resource is not None: # omit empty
v["resource"] = resource
external = self.external()
check_type("external", external, Optional["ExternalMetricSource"])
if external is not None: # omit empty
v["external"] = external
return v
def type(self) -> MetricSourceType:
"""
        type is the type of metric source. It should be one of "Object",
        "Pods", "Resource" or "External", each mapping to a matching field in the object.
"""
return self.__type
def object(self) -> Optional["ObjectMetricSource"]:
"""
object refers to a metric describing a single kubernetes object
(for example, hits-per-second on an Ingress object).
"""
return self.__object
def pods(self) -> Optional["PodsMetricSource"]:
"""
pods refers to a metric describing each pod in the current scale target
(for example, transactions-processed-per-second). The values will be
averaged together before being compared to the target value.
"""
return self.__pods
def resource(self) -> Optional["ResourceMetricSource"]:
"""
resource refers to a resource metric (such as those specified in
requests and limits) known to Kubernetes describing each pod in the
current scale target (e.g. CPU or memory). Such metrics are built in to
Kubernetes, and have special scaling options on top of those available
to normal per-pod metrics using the "pods" source.
"""
return self.__resource
def external(self) -> Optional["ExternalMetricSource"]:
"""
external refers to a global metric that is not associated
with any Kubernetes object. It allows autoscaling based on information
coming from components running outside of cluster
(for example length of queue in cloud messaging service, or
QPS from loadbalancer running outside of cluster).
"""
return self.__external
class HorizontalPodAutoscalerSpec(types.Object):
"""
HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler.
"""
@context.scoped
@typechecked
def __init__(
self,
scale_target_ref: "CrossVersionObjectReference" = None,
min_replicas: int = None,
max_replicas: int = 0,
metrics: List["MetricSpec"] = None,
):
super().__init__()
self.__scale_target_ref = (
scale_target_ref
if scale_target_ref is not None
else CrossVersionObjectReference()
)
self.__min_replicas = min_replicas if min_replicas is not None else 1
self.__max_replicas = max_replicas
self.__metrics = metrics if metrics is not None else []
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
scale_target_ref = self.scale_target_ref()
check_type("scale_target_ref", scale_target_ref, "CrossVersionObjectReference")
v["scaleTargetRef"] = scale_target_ref
min_replicas = self.min_replicas()
check_type("min_replicas", min_replicas, Optional[int])
if min_replicas is not None: # omit empty
v["minReplicas"] = min_replicas
max_replicas = self.max_replicas()
check_type("max_replicas", max_replicas, int)
v["maxReplicas"] = max_replicas
metrics = self.metrics()
check_type("metrics", metrics, Optional[List["MetricSpec"]])
if metrics: # omit empty
v["metrics"] = metrics
return v
def scale_target_ref(self) -> "CrossVersionObjectReference":
"""
        scaleTargetRef points to the target resource to scale, and is used to identify the pods for which metrics
        should be collected, as well as to actually change the replica count.
"""
return self.__scale_target_ref
def min_replicas(self) -> Optional[int]:
"""
minReplicas is the lower limit for the number of replicas to which the autoscaler
can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the
alpha feature gate HPAScaleToZero is enabled and at least one Object or External
metric is configured. Scaling is active as long as at least one metric value is
available.
"""
return self.__min_replicas
def max_replicas(self) -> int:
"""
maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up.
        It cannot be less than minReplicas.
"""
return self.__max_replicas
def metrics(self) -> Optional[List["MetricSpec"]]:
"""
        metrics contains the specifications used to calculate the
        desired replica count (the maximum replica count across all metrics will
        be used). The desired replica count is calculated by multiplying the
        ratio between the target value and the current value by the current
        number of pods. Ergo, metrics used must decrease as the pod count is
increased, and vice-versa. See the individual metric source types for
more information about how each type of metric must respond.
If not set, the default metric will be set to 80% average CPU utilization.
"""
return self.__metrics
class HorizontalPodAutoscaler(base.TypedObject, base.NamespacedMetadataObject):
"""
HorizontalPodAutoscaler is the configuration for a horizontal pod
autoscaler, which automatically manages the replica count of any resource
implementing the scale subresource based on the metrics specified.
"""
@context.scoped
@typechecked
def __init__(
self,
namespace: str = None,
name: str = None,
labels: Dict[str, str] = None,
annotations: Dict[str, str] = None,
spec: "HorizontalPodAutoscalerSpec" = None,
):
super().__init__(
api_version="autoscaling/v2beta2",
kind="HorizontalPodAutoscaler",
**({"namespace": namespace} if namespace is not None else {}),
**({"name": name} if name is not None else {}),
**({"labels": labels} if labels is not None else {}),
**({"annotations": annotations} if annotations is not None else {}),
)
self.__spec = spec if spec is not None else HorizontalPodAutoscalerSpec()
@typechecked
def _root(self) -> Dict[str, Any]:
v = super()._root()
spec = self.spec()
check_type("spec", spec, Optional["HorizontalPodAutoscalerSpec"])
v["spec"] = spec
return v
def spec(self) -> Optional["HorizontalPodAutoscalerSpec"]:
"""
spec is the specification for the behaviour of the autoscaler.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
"""
return self.__spec
| [
"typeguard.check_type",
"kubespec.k8s.base.Enum"
] | [((556, 675), 'kubespec.k8s.base.Enum', 'base.Enum', (['"""MetricSourceType"""', "{'External': 'External', 'Object': 'Object', 'Pods': 'Pods', 'Resource':\n 'Resource'}"], {}), "('MetricSourceType', {'External': 'External', 'Object': 'Object',\n 'Pods': 'Pods', 'Resource': 'Resource'})\n", (565, 675), False, 'from kubespec.k8s import base\n'), ((1910, 2025), 'kubespec.k8s.base.Enum', 'base.Enum', (['"""MetricTargetType"""', "{'AverageValue': 'AverageValue', 'Utilization': 'Utilization', 'Value': 'Value'\n }"], {}), "('MetricTargetType', {'AverageValue': 'AverageValue',\n 'Utilization': 'Utilization', 'Value': 'Value'})\n", (1919, 2025), False, 'from kubespec.k8s import base\n'), ((2780, 2809), 'typeguard.check_type', 'check_type', (['"""kind"""', 'kind', 'str'], {}), "('kind', kind, str)\n", (2790, 2809), False, 'from typeguard import check_type, typechecked\n'), ((2870, 2899), 'typeguard.check_type', 'check_type', (['"""name"""', 'name', 'str'], {}), "('name', name, str)\n", (2880, 2899), False, 'from typeguard import check_type, typechecked\n'), ((2974, 3027), 'typeguard.check_type', 'check_type', (['"""api_version"""', 'api_version', 'Optional[str]'], {}), "('api_version', api_version, Optional[str])\n", (2984, 3027), False, 'from typeguard import check_type, typechecked\n'), ((4115, 4144), 'typeguard.check_type', 'check_type', (['"""name"""', 'name', 'str'], {}), "('name', name, str)\n", (4125, 4144), False, 'from typeguard import check_type, typechecked\n'), ((4213, 4279), 'typeguard.check_type', 'check_type', (['"""selector"""', 'selector', "Optional['metav1.LabelSelector']"], {}), "('selector', selector, Optional['metav1.LabelSelector'])\n", (4223, 4279), False, 'from typeguard import check_type, typechecked\n'), ((5623, 5665), 'typeguard.check_type', 'check_type', (['"""type"""', 'type', 'MetricTargetType'], {}), "('type', type, MetricTargetType)\n", (5633, 5665), False, 'from typeguard import check_type, typechecked\n'), ((5728, 5785), 
'typeguard.check_type', 'check_type', (['"""value"""', 'value', "Optional['resource.Quantity']"], {}), "('value', value, Optional['resource.Quantity'])\n", (5738, 5785), False, 'from typeguard import check_type, typechecked\n'), ((5914, 5987), 'typeguard.check_type', 'check_type', (['"""average_value"""', 'average_value', "Optional['resource.Quantity']"], {}), "('average_value', average_value, Optional['resource.Quantity'])\n", (5924, 5987), False, 'from typeguard import check_type, typechecked\n'), ((6151, 6220), 'typeguard.check_type', 'check_type', (['"""average_utilization"""', 'average_utilization', 'Optional[int]'], {}), "('average_utilization', average_utilization, Optional[int])\n", (6161, 6220), False, 'from typeguard import check_type, typechecked\n'), ((8059, 8107), 'typeguard.check_type', 'check_type', (['"""metric"""', 'metric', '"""MetricIdentifier"""'], {}), "('metric', metric, 'MetricIdentifier')\n", (8069, 8107), False, 'from typeguard import check_type, typechecked\n'), ((8176, 8220), 'typeguard.check_type', 'check_type', (['"""target"""', 'target', '"""MetricTarget"""'], {}), "('target', target, 'MetricTarget')\n", (8186, 8220), False, 'from typeguard import check_type, typechecked\n'), ((9501, 9580), 'typeguard.check_type', 'check_type', (['"""described_object"""', 'described_object', '"""CrossVersionObjectReference"""'], {}), "('described_object', described_object, 'CrossVersionObjectReference')\n", (9511, 9580), False, 'from typeguard import check_type, typechecked\n'), ((9668, 9712), 'typeguard.check_type', 'check_type', (['"""target"""', 'target', '"""MetricTarget"""'], {}), "('target', target, 'MetricTarget')\n", (9678, 9712), False, 'from typeguard import check_type, typechecked\n'), ((9781, 9829), 'typeguard.check_type', 'check_type', (['"""metric"""', 'metric', '"""MetricIdentifier"""'], {}), "('metric', metric, 'MetricIdentifier')\n", (9791, 9829), False, 'from typeguard import check_type, typechecked\n'), ((11046, 11094), 
'typeguard.check_type', 'check_type', (['"""metric"""', 'metric', '"""MetricIdentifier"""'], {}), "('metric', metric, 'MetricIdentifier')\n", (11056, 11094), False, 'from typeguard import check_type, typechecked\n'), ((11163, 11207), 'typeguard.check_type', 'check_type', (['"""target"""', 'target', '"""MetricTarget"""'], {}), "('target', target, 'MetricTarget')\n", (11173, 11207), False, 'from typeguard import check_type, typechecked\n'), ((12494, 12538), 'typeguard.check_type', 'check_type', (['"""name"""', 'name', 'k8sv1.ResourceName'], {}), "('name', name, k8sv1.ResourceName)\n", (12504, 12538), False, 'from typeguard import check_type, typechecked\n'), ((12603, 12647), 'typeguard.check_type', 'check_type', (['"""target"""', 'target', '"""MetricTarget"""'], {}), "('target', target, 'MetricTarget')\n", (12613, 12647), False, 'from typeguard import check_type, typechecked\n'), ((13784, 13826), 'typeguard.check_type', 'check_type', (['"""type"""', 'type', 'MetricSourceType'], {}), "('type', type, MetricSourceType)\n", (13794, 13826), False, 'from typeguard import check_type, typechecked\n'), ((13891, 13951), 'typeguard.check_type', 'check_type', (['"""object"""', 'object', "Optional['ObjectMetricSource']"], {}), "('object', object, Optional['ObjectMetricSource'])\n", (13901, 13951), False, 'from typeguard import check_type, typechecked\n'), ((14065, 14119), 'typeguard.check_type', 'check_type', (['"""pods"""', 'pods', "Optional['PodsMetricSource']"], {}), "('pods', pods, Optional['PodsMetricSource'])\n", (14075, 14119), False, 'from typeguard import check_type, typechecked\n'), ((14235, 14301), 'typeguard.check_type', 'check_type', (['"""resource"""', 'resource', "Optional['ResourceMetricSource']"], {}), "('resource', resource, Optional['ResourceMetricSource'])\n", (14245, 14301), False, 'from typeguard import check_type, typechecked\n'), ((14429, 14495), 'typeguard.check_type', 'check_type', (['"""external"""', 'external', "Optional['ExternalMetricSource']"], {}), 
"('external', external, Optional['ExternalMetricSource'])\n", (14439, 14495), False, 'from typeguard import check_type, typechecked\n'), ((17283, 17362), 'typeguard.check_type', 'check_type', (['"""scale_target_ref"""', 'scale_target_ref', '"""CrossVersionObjectReference"""'], {}), "('scale_target_ref', scale_target_ref, 'CrossVersionObjectReference')\n", (17293, 17362), False, 'from typeguard import check_type, typechecked\n'), ((17461, 17516), 'typeguard.check_type', 'check_type', (['"""min_replicas"""', 'min_replicas', 'Optional[int]'], {}), "('min_replicas', min_replicas, Optional[int])\n", (17471, 17516), False, 'from typeguard import check_type, typechecked\n'), ((17663, 17708), 'typeguard.check_type', 'check_type', (['"""max_replicas"""', 'max_replicas', 'int'], {}), "('max_replicas', max_replicas, int)\n", (17673, 17708), False, 'from typeguard import check_type, typechecked\n'), ((17790, 17850), 'typeguard.check_type', 'check_type', (['"""metrics"""', 'metrics', "Optional[List['MetricSpec']]"], {}), "('metrics', metrics, Optional[List['MetricSpec']])\n", (17800, 17850), False, 'from typeguard import check_type, typechecked\n'), ((20896, 20961), 'typeguard.check_type', 'check_type', (['"""spec"""', 'spec', "Optional['HorizontalPodAutoscalerSpec']"], {}), "('spec', spec, Optional['HorizontalPodAutoscalerSpec'])\n", (20906, 20961), False, 'from typeguard import check_type, typechecked\n')] |
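The generated kubespec classes above all repeat one serialization pattern: private fields set in `__init__`, runtime type checks via `typeguard.check_type` inside `_root()`, and "omit empty" handling for unset optionals. A minimal stdlib sketch of that pattern — the `check_type` helper and `MiniMetricTarget` class below are hypothetical stand-ins, not typeguard or kubespec APIs:

```python
from typing import Any, Dict, Optional


def check_type(name: str, value: Any, expected: type, optional: bool = False) -> None:
    # Simplified stand-in for typeguard.check_type: accept None only when optional.
    if value is None and optional:
        return
    if not isinstance(value, expected):
        raise TypeError(f"{name} must be {expected.__name__}")


class MiniMetricTarget:
    # Mirrors the generated classes: private fields set in __init__,
    # dict serialization in _root() with "omit empty" for optionals.
    def __init__(self, type: str = "", average_utilization: Optional[int] = None):
        self.__type = type
        self.__average_utilization = average_utilization

    def _root(self) -> Dict[str, Any]:
        v: Dict[str, Any] = {}
        check_type("type", self.__type, str)
        v["type"] = self.__type
        check_type("average_utilization", self.__average_utilization, int, optional=True)
        if self.__average_utilization is not None:  # omit empty
            v["averageUtilization"] = self.__average_utilization
        return v


print(MiniMetricTarget("Utilization", 80)._root())
print(MiniMetricTarget("Value")._root())
```

The "omit empty" branch is what keeps `averageUtilization` out of the second dict, matching how the generated `_root()` methods skip unset optional fields.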
# -*- coding: utf-8 -*-
import ckan.plugins as p
def x2(sender):
return sender * 2
def x10(sender):
return sender * 10
class ExampleISignalPlugin(p.SingletonPlugin):
p.implements(p.ISignal)
# ISignal
def get_signal_subscriptions(self):
return {
p.toolkit.signals.ckanext.signal(u'isignal_number'): [
x2,
{u'receiver': x10, u'sender': 10}
]
}
| [
"ckan.plugins.toolkit.signals.ckanext.signal",
"ckan.plugins.implements"
] | [((185, 208), 'ckan.plugins.implements', 'p.implements', (['p.ISignal'], {}), '(p.ISignal)\n', (197, 208), True, 'import ckan.plugins as p\n'), ((294, 345), 'ckan.plugins.toolkit.signals.ckanext.signal', 'p.toolkit.signals.ckanext.signal', (['u"""isignal_number"""'], {}), "(u'isignal_number')\n", (326, 345), True, 'import ckan.plugins as p\n')] |
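The `ExampleISignalPlugin` above returns subscriptions in two shapes: a bare callable, or a mapping with `receiver` and `sender` keys that restricts delivery to one sender. A minimal sketch of a dispatcher honoring both shapes — this `Signal` class is a hypothetical stand-in, not CKAN's blinker-based implementation:

```python
ANY = object()  # sentinel: receiver accepts any sender


class Signal:
    def __init__(self, name):
        self.name = name
        self._subscribers = []  # list of (receiver, sender filter)

    def connect(self, receiver, sender=ANY):
        self._subscribers.append((receiver, sender))

    def send(self, sender):
        # Call every receiver whose sender filter matches; collect results.
        results = []
        for receiver, wanted in self._subscribers:
            if wanted is ANY or wanted == sender:
                results.append(receiver(sender))
        return results


def x2(sender):
    return sender * 2


def x10(sender):
    return sender * 10


sig = Signal("isignal_number")
# Same two subscription shapes as in get_signal_subscriptions() above.
for sub in [x2, {"receiver": x10, "sender": 10}]:
    if callable(sub):
        sig.connect(sub)
    else:
        sig.connect(sub["receiver"], sender=sub["sender"])

print(sig.send(3))   # only x2 fires -> [6]
print(sig.send(10))  # both fire -> [20, 100]
```

The mapping form is what lets `x10` fire only when the signal is sent with sender `10`, while `x2` fires for every send.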
from typing import Dict, Optional, Text, List
import apache_beam as beam
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis import config
from tensorflow_model_analysis import constants
from tensorflow_model_analysis import model_util
from tensorflow_model_analysis import types
from tensorflow_model_analysis.extractors import extractor
from tfx_bsl.tfxio import tensor_adapter
BATCHED_PREDICT_EXTRACTOR_STAGE_NAME = 'ExtractBatchPredictions'
def custom_extractors(eval_config,
eval_shared_model,
tensor_adapter_config
) -> List[tfma.extractors.Extractor]:
return tfma.default_extractors(
eval_config=eval_config,
eval_shared_model=eval_shared_model,
tensor_adapter_config=tensor_adapter_config,
custom_predict_extractor=BatchedPredictExtractor(eval_config,
eval_shared_model,
tensor_adapter_config
))
def BatchedPredictExtractor(
eval_config: config.EvalConfig,
eval_shared_model: types.MaybeMultipleEvalSharedModels,
tensor_adapter_config: Optional[
tensor_adapter.TensorAdapterConfig] = None,
) -> extractor.Extractor:
eval_shared_models = model_util.verify_and_update_eval_shared_models(
eval_shared_model)
return extractor.Extractor(
stage_name=BATCHED_PREDICT_EXTRACTOR_STAGE_NAME,
ptransform=_ExtractBatchedPredictions(
eval_config=eval_config,
eval_shared_models={m.model_name: m for m in eval_shared_models},
tensor_adapter_config=tensor_adapter_config))
@beam.ptransform_fn
@beam.typehints.with_input_types(types.Extracts)
@beam.typehints.with_output_types(types.Extracts)
def _ExtractBatchedPredictions(
extracts: beam.pvalue.PCollection,
eval_config: config.EvalConfig,
eval_shared_models: Dict[Text, types.EvalSharedModel],
tensor_adapter_config: Optional[
tensor_adapter.TensorAdapterConfig] = None,
) -> beam.pvalue.PCollection:
signature_names = {}
for spec in eval_config.model_specs:
model_name = '' if len(eval_config.model_specs) == 1 else spec.name
signature_names[model_name] = [spec.signature_name]
return (extracts
| 'Predict' >> beam.ParDo(
model_util.ModelSignaturesDoFn(
eval_config=eval_config,
eval_shared_models=eval_shared_models,
signature_names={
constants.PREDICTIONS_KEY: signature_names},
prefer_dict_outputs=True,
tensor_adapter_config=tensor_adapter_config)))
| [
"apache_beam.typehints.with_output_types",
"tensorflow_model_analysis.model_util.verify_and_update_eval_shared_models",
"tensorflow_model_analysis.model_util.ModelSignaturesDoFn",
"apache_beam.typehints.with_input_types"
] | [((1793, 1840), 'apache_beam.typehints.with_input_types', 'beam.typehints.with_input_types', (['types.Extracts'], {}), '(types.Extracts)\n', (1824, 1840), True, 'import apache_beam as beam\n'), ((1842, 1890), 'apache_beam.typehints.with_output_types', 'beam.typehints.with_output_types', (['types.Extracts'], {}), '(types.Extracts)\n', (1874, 1890), True, 'import apache_beam as beam\n'), ((1384, 1450), 'tensorflow_model_analysis.model_util.verify_and_update_eval_shared_models', 'model_util.verify_and_update_eval_shared_models', (['eval_shared_model'], {}), '(eval_shared_model)\n', (1431, 1450), False, 'from tensorflow_model_analysis import model_util\n'), ((2475, 2716), 'tensorflow_model_analysis.model_util.ModelSignaturesDoFn', 'model_util.ModelSignaturesDoFn', ([], {'eval_config': 'eval_config', 'eval_shared_models': 'eval_shared_models', 'signature_names': '{constants.PREDICTIONS_KEY: signature_names}', 'prefer_dict_outputs': '(True)', 'tensor_adapter_config': 'tensor_adapter_config'}), '(eval_config=eval_config, eval_shared_models=\n eval_shared_models, signature_names={constants.PREDICTIONS_KEY:\n signature_names}, prefer_dict_outputs=True, tensor_adapter_config=\n tensor_adapter_config)\n', (2505, 2716), False, 'from tensorflow_model_analysis import model_util\n')] |
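The record above wires a custom predict extractor into TFMA's extractor list; each `Extractor` pairs a stage name with a Beam `PTransform` that enriches "extracts" dictionaries. A minimal sketch of that shape with plain functions instead of Beam — the `Extractor` stand-in, `predict_stage`, and `run_extractors` names are all illustrative assumptions, not TFMA APIs:

```python
from typing import Any, Callable, Dict, List, NamedTuple


class Extractor(NamedTuple):
    # Stand-in for tfma.extractors.Extractor: a named stage plus a transform.
    stage_name: str
    ptransform: Callable[[Dict[str, Any]], Dict[str, Any]]


def predict_stage(extracts: Dict[str, Any]) -> Dict[str, Any]:
    # Stand-in for a predictions DoFn: attach a "predictions" key.
    out = dict(extracts)
    out["predictions"] = [x * 0.5 for x in extracts["features"]]
    return out


def run_extractors(extracts: Dict[str, Any],
                   extractors: List[Extractor]) -> Dict[str, Any]:
    # Beam would apply each ptransform to a PCollection; here, to one dict.
    for e in extractors:
        extracts = e.ptransform(extracts)
    return extracts


pipeline = [Extractor("ExtractBatchPredictions", predict_stage)]
print(run_extractors({"features": [2.0, 4.0]}, pipeline))
```

Each stage leaves earlier keys in place and adds its own, which is why downstream TFMA stages can read both the raw features and the attached predictions.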
# Generated by Django 3.1.5 on 2021-03-05 17:55
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("contenttypes", "0002_remove_content_type_name"),
("core", "0003_migrate_content_type_to_core_20210219_1642"),
]
operations = [
migrations.AlterField(
model_name="abstractparticipation",
name="polymorphic_ctype",
field=models.ForeignKey(
editable=False,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="polymorphic_core.abstractparticipation_set+",
to="contenttypes.contenttype",
),
),
migrations.AlterField(
model_name="abstractparticipation",
name="shift",
field=models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
related_name="participations",
to="core.shift",
verbose_name="shift",
),
),
]
| [
"django.db.models.ForeignKey"
] | [((475, 670), 'django.db.models.ForeignKey', 'models.ForeignKey', ([], {'editable': '(False)', 'null': '(True)', 'on_delete': 'django.db.models.deletion.CASCADE', 'related_name': '"""polymorphic_core.abstractparticipation_set+"""', 'to': '"""contenttypes.contenttype"""'}), "(editable=False, null=True, on_delete=django.db.models.\n deletion.CASCADE, related_name=\n 'polymorphic_core.abstractparticipation_set+', to=\n 'contenttypes.contenttype')\n", (492, 670), False, 'from django.db import migrations, models\n'), ((886, 1023), 'django.db.models.ForeignKey', 'models.ForeignKey', ([], {'on_delete': 'django.db.models.deletion.CASCADE', 'related_name': '"""participations"""', 'to': '"""core.shift"""', 'verbose_name': '"""shift"""'}), "(on_delete=django.db.models.deletion.CASCADE, related_name\n ='participations', to='core.shift', verbose_name='shift')\n", (903, 1023), False, 'from django.db import migrations, models\n')] |
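The model code that follows samples latents with `reparametrize(mu, logvar)`, i.e. the reparameterization trick z = mu + exp(logvar / 2) * eps with eps ~ N(0, 1), which keeps the noise outside the gradient path. A stdlib sketch of the same computation — the scalar signature and the `eps` override are illustrative assumptions, not the torch version:

```python
import math
import random


def reparametrize(mu: float, logvar: float, eps: float = None) -> float:
    # std = exp(logvar / 2), matching logvar.div(2).exp() in the torch code.
    std = math.exp(logvar / 2.0)
    if eps is None:
        eps = random.gauss(0.0, 1.0)  # noise drawn outside the gradient path
    return mu + std * eps


# With eps pinned, the sample is deterministic: 1.0 + exp(0) * 2.0 = 3.0
print(reparametrize(1.0, 0.0, eps=2.0))
```

Pinning `eps` makes the mapping from (mu, logvar) to z deterministic, which is exactly what lets gradients flow through both parameters during training.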
"""model.py"""
import torch
import torch.nn as nn
# import torch.nn.functional as F
import torch.nn.init as init
from torch.autograd import Variable
def reparametrize(mu, logvar):
std = logvar.div(2).exp()
eps = Variable(std.data.new(std.size()).normal_())
return mu + std * eps
class View(nn.Module):
def __init__(self, size):
super(View, self).__init__()
self.size = size
def forward(self, tensor):
return tensor.view(self.size)
class BetaVAE_H(nn.Module):
"""Model proposed in original beta-VAE paper(Higgins et al, ICLR, 2017)."""
def __init__(self, z_dim=10, nc=3):
super(BetaVAE_H, self).__init__()
self.z_dim = z_dim
self.nc = nc
self.encoder = nn.Sequential(
nn.Conv2d(nc, 32, 4, 2, 1), # B, 32, 120, 180
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 60, 90
nn.ReLU(True),
nn.Conv2d(32, 64, 4, 2, (3, 4)), # B, 64, 32, 48
nn.ReLU(True),
nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 16, 24
nn.ReLU(True),
nn.Conv2d(64, 64, 4, 2, 1), # B, 64, 8, 12
nn.ReLU(True),
nn.Conv2d(64, 128, 4, 2, 1), # B, 128, 4, 6
nn.ReLU(True),
nn.Conv2d(128, 256, 4, 2, 1), # B, 256, 2, 3
nn.ReLU(True),
View((-1, 256 * 2 * 3)), # B, 256 * 2 * 3
nn.Linear(1536, z_dim * 2), # B, z_dim*2
)
self.decoder = nn.Sequential(
            nn.Linear(z_dim, 1536),  # B, 1536
View((-1, 256, 2, 3)), # B, 256, 2, 3
nn.ReLU(True),
nn.ConvTranspose2d(256, 128, 4, 2, 1), # B, 128, 4, 6
nn.ReLU(True),
nn.ConvTranspose2d(128, 64, 4, 2, 1), # B, 64, 8, 12
nn.ReLU(True),
nn.ConvTranspose2d(64, 64, 4, 2, 1), # B, 64, 16, 24
nn.ReLU(True),
            nn.ConvTranspose2d(64, 64, 4, 2, 2),   # B, 64, 30, 46
            nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, (1, 2)),  # B, 32, 60, 90
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 120, 180
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 240, 360
)
self.weight_init()
def weight_init(self):
for block in self._modules:
for m in self._modules[block]:
kaiming_init(m)
def forward(self, x):
distributions = self._encode(x)
mu = distributions[:, :self.z_dim]
logvar = distributions[:, self.z_dim:]
z = reparametrize(mu, logvar)
x_recon = self._decode(z)
return x_recon, mu, logvar
def _encode(self, x):
return self.encoder(x)
def _decode(self, z):
return self.decoder(z)
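The `# B, C, H, W` shape comments above can be checked by hand with the standard Conv2d / ConvTranspose2d output-size formulas (stride `s`, kernel `k`, padding `p`, dilation 1, no output padding). A small framework-free sketch — the helper names are mine, not part of the model:

```python
def _pair(v):
    # broadcast a scalar parameter to a (h, w) pair, like torch does
    return v if isinstance(v, tuple) else (v, v)

def conv2d_out(hw, k, s, p):
    # Conv2d: floor((n + 2p - k) / s) + 1 per spatial dimension
    return tuple((n + 2 * pi - ki) // si + 1
                 for n, ki, si, pi in zip(hw, _pair(k), _pair(s), _pair(p)))

def conv_transpose2d_out(hw, k, s, p):
    # ConvTranspose2d: (n - 1) * s - 2p + k per spatial dimension
    return tuple((n - 1) * si - 2 * pi + ki
                 for n, ki, si, pi in zip(hw, _pair(k), _pair(s), _pair(p)))

# Walk the BetaVAE_H encoder on a 240x360 input, layer by layer:
hw = (240, 360)
for k, s, p in [(4, 2, 1), (4, 2, 1), (4, 2, (3, 4)),
                (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]:
    hw = conv2d_out(hw, k, s, p)
# hw is now (2, 3), matching the View((-1, 256 * 2 * 3)) flatten.
```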
class BetaVAE_B(BetaVAE_H):
"""Model proposed in understanding beta-VAE paper(Burgess et al, arxiv:1804.03599, 2018)."""
def __init__(self, z_dim=10, nc=1):
super(BetaVAE_B, self).__init__()
self.nc = nc
self.z_dim = z_dim
self.encoder = nn.Sequential(
nn.Conv2d(nc, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 8, 8
nn.ReLU(True),
nn.Conv2d(32, 32, 4, 2, 1), # B, 32, 4, 4
nn.ReLU(True),
View((-1, 32 * 4 * 4)), # B, 512
nn.Linear(32 * 4 * 4, 256), # B, 256
nn.ReLU(True),
nn.Linear(256, 256), # B, 256
nn.ReLU(True),
nn.Linear(256, z_dim * 2), # B, z_dim*2
)
self.decoder = nn.Sequential(
nn.Linear(z_dim, 256), # B, 256
nn.ReLU(True),
nn.Linear(256, 256), # B, 256
nn.ReLU(True),
nn.Linear(256, 32 * 4 * 4), # B, 512
nn.ReLU(True),
View((-1, 32, 4, 4)), # B, 32, 4, 4
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 8, 8
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 16, 16
nn.ReLU(True),
nn.ConvTranspose2d(32, 32, 4, 2, 1), # B, 32, 32, 32
nn.ReLU(True),
nn.ConvTranspose2d(32, nc, 4, 2, 1), # B, nc, 64, 64
)
self.weight_init()
def weight_init(self):
for block in self._modules:
for m in self._modules[block]:
kaiming_init(m)
def forward(self, x):
distributions = self._encode(x)
mu = distributions[:, :self.z_dim]
logvar = distributions[:, self.z_dim:]
z = reparametrize(mu, logvar)
x_recon = self._decode(z).view(x.size())
return x_recon, mu, logvar
def _encode(self, x):
return self.encoder(x)
def _decode(self, z):
return self.decoder(z)
def kaiming_init(m):
if isinstance(m, (nn.Linear, nn.Conv2d)):
        # in-place variant; kaiming_normal (no trailing underscore) is deprecated
        init.kaiming_normal_(m.weight)
if m.bias is not None:
m.bias.data.fill_(0)
elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
m.weight.data.fill_(1)
if m.bias is not None:
m.bias.data.fill_(0)
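For reference, the standard deviation that Kaiming-normal initialization draws weights with (fan-in mode, ReLU gain, as used by default above) can be computed by hand. A small sketch — the helper name is mine:

```python
import math

def kaiming_normal_std(fan_in, gain=math.sqrt(2.0)):
    # He et al. (2015): weights ~ N(0, std^2) with std = gain / sqrt(fan_in);
    # for a Conv2d, fan_in = in_channels * kernel_h * kernel_w.
    return gain / math.sqrt(fan_in)

# e.g. the first encoder layer Conv2d(nc=3, 32, 4, ...) has fan_in = 3 * 4 * 4
std_first_layer = kaiming_normal_std(3 * 4 * 4)
```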
def normal_init(m, mean, std):
if isinstance(m, (nn.Linear, nn.Conv2d)):
m.weight.data.normal_(mean, std)
        if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, (nn.BatchNorm2d, nn.BatchNorm1d)):
m.weight.data.fill_(1)
        if m.bias is not None:
m.bias.data.zero_()
if __name__ == '__main__':
pass
| [
"torch.nn.ReLU",
"torch.nn.Conv2d",
"torch.nn.Linear",
"torch.nn.init.kaiming_normal",
"torch.nn.ConvTranspose2d"
] | [((5052, 5081), 'torch.nn.init.kaiming_normal', 'init.kaiming_normal', (['m.weight'], {}), '(m.weight)\n', (5071, 5081), True, 'import torch.nn.init as init\n'), ((772, 798), 'torch.nn.Conv2d', 'nn.Conv2d', (['nc', '(32)', '(4)', '(2)', '(1)'], {}), '(nc, 32, 4, 2, 1)\n', (781, 798), True, 'import torch.nn as nn\n'), ((832, 845), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (839, 845), True, 'import torch.nn as nn\n'), ((859, 885), 'torch.nn.Conv2d', 'nn.Conv2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (868, 885), True, 'import torch.nn as nn\n'), ((917, 930), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (924, 930), True, 'import torch.nn as nn\n'), ((944, 975), 'torch.nn.Conv2d', 'nn.Conv2d', (['(32)', '(64)', '(4)', '(2)', '(3, 4)'], {}), '(32, 64, 4, 2, (3, 4))\n', (953, 975), True, 'import torch.nn as nn\n'), ((1007, 1020), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1014, 1020), True, 'import torch.nn as nn\n'), ((1034, 1060), 'torch.nn.Conv2d', 'nn.Conv2d', (['(64)', '(64)', '(4)', '(2)', '(1)'], {}), '(64, 64, 4, 2, 1)\n', (1043, 1060), True, 'import torch.nn as nn\n'), ((1092, 1105), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1099, 1105), True, 'import torch.nn as nn\n'), ((1119, 1145), 'torch.nn.Conv2d', 'nn.Conv2d', (['(64)', '(64)', '(4)', '(2)', '(1)'], {}), '(64, 64, 4, 2, 1)\n', (1128, 1145), True, 'import torch.nn as nn\n'), ((1176, 1189), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1183, 1189), True, 'import torch.nn as nn\n'), ((1203, 1230), 'torch.nn.Conv2d', 'nn.Conv2d', (['(64)', '(128)', '(4)', '(2)', '(1)'], {}), '(64, 128, 4, 2, 1)\n', (1212, 1230), True, 'import torch.nn as nn\n'), ((1261, 1274), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1268, 1274), True, 'import torch.nn as nn\n'), ((1288, 1316), 'torch.nn.Conv2d', 'nn.Conv2d', (['(128)', '(256)', '(4)', '(2)', '(1)'], {}), '(128, 256, 4, 2, 1)\n', (1297, 1316), True, 
'import torch.nn as nn\n'), ((1347, 1360), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1354, 1360), True, 'import torch.nn as nn\n'), ((1429, 1455), 'torch.nn.Linear', 'nn.Linear', (['(1536)', '(z_dim * 2)'], {}), '(1536, z_dim * 2)\n', (1438, 1455), True, 'import torch.nn as nn\n'), ((1531, 1553), 'torch.nn.Linear', 'nn.Linear', (['z_dim', '(1536)'], {}), '(z_dim, 1536)\n', (1540, 1553), True, 'import torch.nn as nn\n'), ((1631, 1644), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1638, 1644), True, 'import torch.nn as nn\n'), ((1658, 1695), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(256)', '(128)', '(4)', '(2)', '(1)'], {}), '(256, 128, 4, 2, 1)\n', (1676, 1695), True, 'import torch.nn as nn\n'), ((1727, 1740), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1734, 1740), True, 'import torch.nn as nn\n'), ((1754, 1790), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(128)', '(64)', '(4)', '(2)', '(1)'], {}), '(128, 64, 4, 2, 1)\n', (1772, 1790), True, 'import torch.nn as nn\n'), ((1823, 1836), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1830, 1836), True, 'import torch.nn as nn\n'), ((1850, 1885), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(64)', '(64)', '(4)', '(2)', '(1)'], {}), '(64, 64, 4, 2, 1)\n', (1868, 1885), True, 'import torch.nn as nn\n'), ((1919, 1932), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (1926, 1932), True, 'import torch.nn as nn\n'), ((1946, 1981), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(64)', '(64)', '(4)', '(2)', '(2)'], {}), '(64, 64, 4, 2, 2)\n', (1964, 1981), True, 'import torch.nn as nn\n'), ((2015, 2028), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (2022, 2028), True, 'import torch.nn as nn\n'), ((2042, 2082), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(64)', '(32)', '(4)', '(2)', '(1, 2)'], {}), '(64, 32, 4, 2, (1, 2))\n', (2060, 2082), True, 'import torch.nn as nn\n'), ((2114, 2127), 'torch.nn.ReLU', 
'nn.ReLU', (['(True)'], {}), '(True)\n', (2121, 2127), True, 'import torch.nn as nn\n'), ((2141, 2176), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (2159, 2176), True, 'import torch.nn as nn\n'), ((2210, 2223), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (2217, 2223), True, 'import torch.nn as nn\n'), ((2237, 2272), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', 'nc', '(4)', '(2)', '(1)'], {}), '(32, nc, 4, 2, 1)\n', (2255, 2272), True, 'import torch.nn as nn\n'), ((3160, 3186), 'torch.nn.Conv2d', 'nn.Conv2d', (['nc', '(32)', '(4)', '(2)', '(1)'], {}), '(nc, 32, 4, 2, 1)\n', (3169, 3186), True, 'import torch.nn as nn\n'), ((3218, 3231), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3225, 3231), True, 'import torch.nn as nn\n'), ((3245, 3271), 'torch.nn.Conv2d', 'nn.Conv2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (3254, 3271), True, 'import torch.nn as nn\n'), ((3303, 3316), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3310, 3316), True, 'import torch.nn as nn\n'), ((3330, 3356), 'torch.nn.Conv2d', 'nn.Conv2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (3339, 3356), True, 'import torch.nn as nn\n'), ((3388, 3401), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3395, 3401), True, 'import torch.nn as nn\n'), ((3415, 3441), 'torch.nn.Conv2d', 'nn.Conv2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (3424, 3441), True, 'import torch.nn as nn\n'), ((3473, 3486), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3480, 3486), True, 'import torch.nn as nn\n'), ((3546, 3572), 'torch.nn.Linear', 'nn.Linear', (['(32 * 4 * 4)', '(256)'], {}), '(32 * 4 * 4, 256)\n', (3555, 3572), True, 'import torch.nn as nn\n'), ((3596, 3609), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3603, 3609), True, 'import torch.nn as nn\n'), ((3623, 3642), 
'torch.nn.Linear', 'nn.Linear', (['(256)', '(256)'], {}), '(256, 256)\n', (3632, 3642), True, 'import torch.nn as nn\n'), ((3666, 3679), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3673, 3679), True, 'import torch.nn as nn\n'), ((3693, 3718), 'torch.nn.Linear', 'nn.Linear', (['(256)', '(z_dim * 2)'], {}), '(256, z_dim * 2)\n', (3702, 3718), True, 'import torch.nn as nn\n'), ((3795, 3816), 'torch.nn.Linear', 'nn.Linear', (['z_dim', '(256)'], {}), '(z_dim, 256)\n', (3804, 3816), True, 'import torch.nn as nn\n'), ((3840, 3853), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3847, 3853), True, 'import torch.nn as nn\n'), ((3867, 3886), 'torch.nn.Linear', 'nn.Linear', (['(256)', '(256)'], {}), '(256, 256)\n', (3876, 3886), True, 'import torch.nn as nn\n'), ((3910, 3923), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3917, 3923), True, 'import torch.nn as nn\n'), ((3937, 3963), 'torch.nn.Linear', 'nn.Linear', (['(256)', '(32 * 4 * 4)'], {}), '(256, 32 * 4 * 4)\n', (3946, 3963), True, 'import torch.nn as nn\n'), ((3987, 4000), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (3994, 4000), True, 'import torch.nn as nn\n'), ((4066, 4101), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (4084, 4101), True, 'import torch.nn as nn\n'), ((4133, 4146), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (4140, 4146), True, 'import torch.nn as nn\n'), ((4160, 4195), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (4178, 4195), True, 'import torch.nn as nn\n'), ((4227, 4240), 'torch.nn.ReLU', 'nn.ReLU', (['(True)'], {}), '(True)\n', (4234, 4240), True, 'import torch.nn as nn\n'), ((4254, 4289), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', '(32)', '(4)', '(2)', '(1)'], {}), '(32, 32, 4, 2, 1)\n', (4272, 4289), True, 'import torch.nn as nn\n'), ((4321, 4334), 'torch.nn.ReLU', 
'nn.ReLU', (['(True)'], {}), '(True)\n', (4328, 4334), True, 'import torch.nn as nn\n'), ((4348, 4383), 'torch.nn.ConvTranspose2d', 'nn.ConvTranspose2d', (['(32)', 'nc', '(4)', '(2)', '(1)'], {}), '(32, nc, 4, 2, 1)\n', (4366, 4383), True, 'import torch.nn as nn\n')] |
import pytest
from grpclib.testing import ChannelFor
from pymapadmin.grpc.admin_grpc import MailboxStub
from pymapadmin.grpc.admin_pb2 import AppendRequest, SUCCESS, FAILURE
from pymap.admin.handlers.mailbox import MailboxHandlers
from .base import TestBase
pytestmark = pytest.mark.asyncio
class TestMailboxHandlers(TestBase):
admin_token = '<KEY>' \
'ID0gYWRtaW4KMDAyZnNpZ25hdHVyZSBTApt6-KNq85_1TeSmQyqTZjWPfHCYPY8EIG' \
'q6NMqv4go'
metadata = {'auth-token': admin_token}
@pytest.fixture
def overrides(self):
return {'admin_key': b'testadmintoken'}
async def test_append(self, backend, imap_server) -> None:
handlers = MailboxHandlers(backend)
data = b'From: user<EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert SUCCESS == response.result.code
assert 105 == response.uid
transport = self.new_transport(imap_server)
transport.push_login()
transport.push_select(b'INBOX')
transport.push_readline(
b'fetch1 UID FETCH * FULL\r\n')
transport.push_write(
b'* 5 FETCH (FLAGS (\\Flagged \\Recent \\Seen)'
b' INTERNALDATE "13-Feb-2009 23:31:30 +0000"'
b' RFC822.SIZE 38'
b' ENVELOPE (NIL NIL (("" NIL "user" "example.com"))'
b' (("" NIL "user" "example.com")) (("" NIL "user" "example.com"))'
b' NIL NIL NIL NIL NIL)'
b' BODY ("text" "plain" NIL NIL NIL "7BIT" 38 3)'
b' UID 105)\r\n'
b'fetch1 OK UID FETCH completed.\r\n')
transport.push_logout()
await self.run(transport)
async def test_append_user_not_found(self, backend) -> None:
handlers = MailboxHandlers(backend)
request = AppendRequest(user='baduser')
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert FAILURE == response.result.code
assert 'UserNotFound' == response.result.key
async def test_append_mailbox_not_found(self, backend) -> None:
handlers = MailboxHandlers(backend)
request = AppendRequest(user='testuser', mailbox='BAD')
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert FAILURE == response.result.code
assert 'BAD' == response.mailbox
assert 'MailboxNotFound' == response.result.key
async def test_append_filter_reject(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'Subject: reject this\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert FAILURE == response.result.code
assert 'AppendFailure' == response.result.key
async def test_append_filter_discard(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'Subject: discard this\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert SUCCESS == response.result.code
assert not response.mailbox
assert not response.uid
async def test_append_filter_address_is(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: <EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 1' == response.mailbox
async def test_append_filter_address_contains(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: user@foo.com\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 2' == response.mailbox
async def test_append_filter_address_matches(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'To: <EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 3' == response.mailbox
async def test_append_filter_envelope_is(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: <EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
sender='<EMAIL>',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 4' == response.mailbox
async def test_append_filter_envelope_contains(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: <EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
sender='<EMAIL>',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 5' == response.mailbox
async def test_append_filter_envelope_matches(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: <EMAIL>\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
recipient='<EMAIL>',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 6' == response.mailbox
async def test_append_filter_exists(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'X-Foo: foo\nX-Bar: bar\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 7' == response.mailbox
async def test_append_filter_header(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'X-Caffeine: C8H10N4O2\n\ntest message!\n'
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 8' == response.mailbox
async def test_append_filter_size(self, backend) -> None:
handlers = MailboxHandlers(backend)
data = b'From: user@example.com\n\ntest message!\n'
data = data + b'x' * (1234 - len(data))
request = AppendRequest(user='testuser', mailbox='INBOX',
flags=['\\Flagged', '\\Seen'],
when=1234567890, data=data)
async with ChannelFor([handlers]) as channel:
stub = MailboxStub(channel)
response = await stub.Append(request, metadata=self.metadata)
assert 'Test 9' == response.mailbox
# coding=UTF-8
# python version=3.5
import socket
import sys
import time
import threading
class ServerSocket(object):
conn = None
addr = None
mesStr = []
recv = ''
name = 'a'
threadLock1 = threading.Lock()
"""docstring for ServerSocket"""
def __init__(self):
super(ServerSocket, self).__init__()
"""init"""
def __init__(self, conn, addr):
print('start acc')
self.conn = conn
self.addr = addr
# 两个线程是分别处理的,接收的信息经过处理, 再发送
# 接收请求
threading.Thread(target=self.startAcceptClientInfo, args=(self,)).start()
# 发送信息
threading.Thread(target=self.startSendClientInfo, args=(self,)).start()
def sendInfo(self, data):
# self.conn.send(data)
# self.threadLock1.acquire()
self.mesStr.append(data)
print('send %s' % data)
# self.threadLock1.release()
    # Send messages to the requesting peer
def startSendClientInfo(self):
global isClose
while not isClose:
# time.sleep(1)
# self.threadLock1.acquire()
if self.conn is None:
break
if self.mesStr is None or not self.mesStr or not len(self.mesStr):
continue
try:
for x in self.mesStr:
print('sen one:' + x)
self.conn.send(bytes(x, 'utf-8'))
self.mesStr = []
pass
except Exception as e:
print('conn close')
self.release()
break
# self.threadLock1.release()
print('end cur send loop')
pass
    # Receive messages from the requesting peer
def startAcceptClientInfo(self):
global isClose
global msgList
print('cur msgList:' + str(msgList))
while not isClose:
if self.conn is None:
print('conn close')
break
try:
self.recv = (self.conn.recv(buffSize))
pass
except Exception as e:
print('conn close')
self.release()
break
finally:
pass
if self.recv is None or self.recv.decode('utf-8') == '':
continue
# threadLock.acquire()
if msgList is None or msgList.get(self.name) is None:
print('set msg')
msgList[self.name] = ''
print('recv data:' + self.recv.decode('utf-8'))
msgList[self.name] = msgList.get(self.name, '') + ';' + self.recv.decode('utf-8')
print('cur send client:' + msgList[self.name])
self.recv = ''
# threadLock.release()
# time.sleep(1)
print('end cur')
pass
def release(self):
self.mesStr = []
self.recv = ''
if self.conn is not None:
self.conn.close()
self.conn = None
buffSize = 1024
# Clients connected to the server
clientList = {}
# Connections that go through the server to reach the intranet
linkList = {}
global isClose
isClose = False
# Client login
global connMain
def startAcceptClient(addrs, port):
ip_port = (addrs, port)
    # Create the socket
    webSer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Bind the port
    webSer.bind(ip_port)
    # Maximum number of queued connections
    webSer.listen(1)
    while not isClose:
        print("waiting...")
        # Blocks until a connection arrives
conn, addr = webSer.accept()
print(addr)
# isEq = False
# if addr not in linkList.keys():
# for x in linkList.keys():
# if x[0] == addr[0] and x[1] == addr[1]:
# isEq = True
# break
# pass
# if isEq:
# continue
if linkList and linkList.get(addr) is not None:
linkList[addr].release()
threadLock.acquire()
linkList[addr] = ServerSocket(conn, addr)
threadLock.release()
        # Fetch the client's request data
        # recvData = conn.recv(buffSize)
        # Print the received data. Note: when a browser connects, the data
        # received is the browser's request headers, etc.
        # print(b'get:' + recvData)
        # print(addr)
        # print(conn.getsockname())
        # print(conn.getpeername())
        # Send data back to the peer
        # conn.send(bytes('<h1>welcome</h1>', 'utf8'))
        # Close the connection
# conn.close()
# break
pass
# Receive messages from clients connected to the server
def recvData(clientAddr):
global isClose
global connMain
print(connMain)
conn = clientList[clientAddr]
while not isClose:
if conn is None:
print('client no link')
break
recvData = conn.recv(buffSize)
mesStr = recvData.decode('utf-8')
if mesStr is None or mesStr == '':
continue
print('client send data:' + mesStr)
threadLock.acquire()
tmp = linkList.keys()
for link in tmp:
if link is None:
continue
print(mesStr)
if mesStr.split(':')[0] == linkList[link].name:
print(mesStr.split(':')[1])
linkList[link].sendInfo(mesStr.split(':')[1])
threadLock.release()
print('end cur loop2')
# time.sleep(1)
pass
# Forward messages to the intranet client
def sendData(clientAddr):
global msgList
conn = clientList[clientAddr]
global isClose
while not isClose:
if conn is None:
print('client no link')
break
if not msgList:
continue
print(msgList)
# threadLock.acquire()
for keyConn in msgList.keys():
if keyConn is None or keyConn == '' or not len(keyConn):
continue
print(keyConn + ':' + msgList[keyConn])
for msgs in msgList[keyConn].split(';'):
if msgs is None or msgs == '' or not len(msgs):
continue
conn.send(bytes(keyConn + ':' + msgs, 'utf-8'))
for keyConn in msgList.keys():
msgList[keyConn] = ''
msgList = {}
# self.conn.send(bytes(self.mesStr, 'utf-8'))
# threadLock.release()
print('end loop4')
pass
def startRecvAndSend(clientAddr):
threading.Thread(target=recvData, args=(clientAddr,)).start()
threading.Thread(target=sendData, args=(clientAddr,)).start()
pass
# Client connections
def startMainServer():
global connMain
ip_port = ('localhost', 60007)
    # Create the socket
    webMainSer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Bind the port
    webMainSer.bind(ip_port)
    # Maximum number of queued connections
    webMainSer.listen(1)
    while not isClose:
        print("waiting client...")
        # Blocks until a connection arrives
connMain, clientAddr = webMainSer.accept()
print('new client:' + str(clientAddr))
clientList[clientAddr] = connMain
startRecvAndSend(clientAddr)
pass
threadLock = threading.Lock()
global msgList
msgList = {}
webMainSer = None
if __name__ == '__main__':
    # IP and port to listen on
ip_port = ('192.168.0.214', 60017)
threading.Thread(target=startAcceptClient, args=ip_port).start()
try:
startMainServer()
pass
except Exception as e:
isClose = True
        # Iterate over values: the dict keys are addresses, not connections.
        for x in clientList.values():
            x.close()
        for x in linkList.values():
            x.release()
if webMainSer:
webMainSer.close()
raise
else:
pass
finally:
pass
print('end main')
| [
"threading.Lock",
"threading.Thread",
"socket.socket"
] | [((6750, 6766), 'threading.Lock', 'threading.Lock', ([], {}), '()\n', (6764, 6766), False, 'import threading\n'), ((216, 232), 'threading.Lock', 'threading.Lock', ([], {}), '()\n', (230, 232), False, 'import threading\n'), ((3169, 3218), 'socket.socket', 'socket.socket', (['socket.AF_INET', 'socket.SOCK_STREAM'], {}), '(socket.AF_INET, socket.SOCK_STREAM)\n', (3182, 3218), False, 'import socket\n'), ((6351, 6400), 'socket.socket', 'socket.socket', (['socket.AF_INET', 'socket.SOCK_STREAM'], {}), '(socket.AF_INET, socket.SOCK_STREAM)\n', (6364, 6400), False, 'import socket\n'), ((6096, 6149), 'threading.Thread', 'threading.Thread', ([], {'target': 'recvData', 'args': '(clientAddr,)'}), '(target=recvData, args=(clientAddr,))\n', (6112, 6149), False, 'import threading\n'), ((6162, 6215), 'threading.Thread', 'threading.Thread', ([], {'target': 'sendData', 'args': '(clientAddr,)'}), '(target=sendData, args=(clientAddr,))\n', (6178, 6215), False, 'import threading\n'), ((6898, 6954), 'threading.Thread', 'threading.Thread', ([], {'target': 'startAcceptClient', 'args': 'ip_port'}), '(target=startAcceptClient, args=ip_port)\n', (6914, 6954), False, 'import threading\n'), ((529, 594), 'threading.Thread', 'threading.Thread', ([], {'target': 'self.startAcceptClientInfo', 'args': '(self,)'}), '(target=self.startAcceptClientInfo, args=(self,))\n', (545, 594), False, 'import threading\n'), ((626, 689), 'threading.Thread', 'threading.Thread', ([], {'target': 'self.startSendClientInfo', 'args': '(self,)'}), '(target=self.startSendClientInfo, args=(self,))\n', (642, 689), False, 'import threading\n')] |
"""
Show how to plot SST on a subdomain
"""
import numpy as np
import marray as ma
import grid as gr
import nctools as nct
import croco as croco
import giga_tools as giga
import giga_subdomains as gs
import matplotlib.pyplot as plt
import matplotlib.colorbar as cb
import schwimmbad
import time
rc = plt.rc
font = {'family': "Comic Sans MS",
'size': 16}
rc('font', **font)
plt.ion()
def get_sst(iblock):
""" process the`iblock`-th block from `blocks` list"""
block = blocks[iblock]
grid = croco.load_grid(giga.grdfiles, block, giga.dimpart,
giga.nsigma, halow=halow)
# the grid MDataset is needed to get the sizes of the possibly
# missing tiles in the history files
ncgrid = nct.MDataset(giga.grdfiles, block, giga.dimpart, halow=halow)
# ncgrid.sizes is sent to the history MDataset
nch = nct.MDataset(giga.hisfiles, block, giga.dimpart,
halow=halow, gridsizes=ncgrid.sizes)
    # because of elem=5, ncread extracts the timeindex=5
    # from nch and returns a 3D array. slice(-1) extracts
    # the top level, hence the SST
    sst = croco.ncread(nch, grid, "temp", elem=5).slice(-1)
    # to access grid coordinates use the xi/eta functions; the `stagg`
    # keyword returns the coordinates at the correct location, here at f-points
xf = grid.xi(stagg=croco.fpoint)
yf = grid.eta(stagg=croco.fpoint)
return (iblock, (xf, yf, sst))
# halo width set to zero because we don't need any halo in this case
halow = 0
# parallelization is done on blocks of 3x3 tiles
blocksize = 3
# Gibraltar region ((lon west, lat south), (lon east, lat north))
domain = gs.LLTR2domain((-10, 32), (0, 40))
tileslist = gs.find_tiles_inside(domain)
# blocks is the list of blocks
# subds is the list of subdomains that need to be mounted
subds, blocks = gs.get_blocks_subds_from_tiles(tileslist, blocksize)
if giga.rank == 0:
# check out the figure to understand tileslist and blocks
gs.plot_blocks(domain, tileslist, blocks, "../figures/demo_1_blocks.png")
for subd in subds:
# don't forget to mount the history tar files
giga.mount(subd)
# pool the SST read with as many threads as number of cores
pool = schwimmbad.MultiPool()
t0 = time.time()
res = pool.map(get_sst, range(len(blocks)))
# don't forget, if pool is not closed then no worker will
# go beyond that point
pool.close()
t1 = time.time()
elapsed = t1-t0
print(f"elapsed time: {elapsed:.2f}s")
# Do the figure
cmap = "YlGnBu_r"
plt.figure(figsize=(7, 7))
for r in res:
iblock, data = r
plt.pcolormesh(*data, vmin=17, vmax=25, cmap=cmap)
hc = plt.colorbar(orientation="horizontal")
cb.ColorbarBase.set_label(hc, "SST [°C]")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.tight_layout()
plt.savefig("../figures/demo_1.png")
| [
"croco.ncread",
"matplotlib.pyplot.ylabel",
"matplotlib.pyplot.pcolormesh",
"matplotlib.colorbar.ColorbarBase.set_label",
"croco.load_grid",
"matplotlib.pyplot.xlabel",
"giga_subdomains.plot_blocks",
"giga_subdomains.LLTR2domain",
"matplotlib.pyplot.savefig",
"schwimmbad.MultiPool",
"matplotlib.... | [((384, 393), 'matplotlib.pyplot.ion', 'plt.ion', ([], {}), '()\n', (391, 393), True, 'import matplotlib.pyplot as plt\n'), ((1658, 1692), 'giga_subdomains.LLTR2domain', 'gs.LLTR2domain', (['(-10, 32)', '(0, 40)'], {}), '((-10, 32), (0, 40))\n', (1672, 1692), True, 'import giga_subdomains as gs\n'), ((1705, 1733), 'giga_subdomains.find_tiles_inside', 'gs.find_tiles_inside', (['domain'], {}), '(domain)\n', (1725, 1733), True, 'import giga_subdomains as gs\n'), ((1840, 1892), 'giga_subdomains.get_blocks_subds_from_tiles', 'gs.get_blocks_subds_from_tiles', (['tileslist', 'blocksize'], {}), '(tileslist, blocksize)\n', (1870, 1892), True, 'import giga_subdomains as gs\n'), ((2212, 2234), 'schwimmbad.MultiPool', 'schwimmbad.MultiPool', ([], {}), '()\n', (2232, 2234), False, 'import schwimmbad\n'), ((2240, 2251), 'time.time', 'time.time', ([], {}), '()\n', (2249, 2251), False, 'import time\n'), ((2384, 2395), 'time.time', 'time.time', ([], {}), '()\n', (2393, 2395), False, 'import time\n'), ((2486, 2512), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(7, 7)'}), '(figsize=(7, 7))\n', (2496, 2512), True, 'import matplotlib.pyplot as plt\n'), ((2609, 2647), 'matplotlib.pyplot.colorbar', 'plt.colorbar', ([], {'orientation': '"""horizontal"""'}), "(orientation='horizontal')\n", (2621, 2647), True, 'import matplotlib.pyplot as plt\n'), ((2648, 2689), 'matplotlib.colorbar.ColorbarBase.set_label', 'cb.ColorbarBase.set_label', (['hc', '"""SST [°C]"""'], {}), "(hc, 'SST [°C]')\n", (2673, 2689), True, 'import matplotlib.colorbar as cb\n'), ((2690, 2713), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""Longitude"""'], {}), "('Longitude')\n", (2700, 2713), True, 'import matplotlib.pyplot as plt\n'), ((2714, 2736), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['"""Latitude"""'], {}), "('Latitude')\n", (2724, 2736), True, 'import matplotlib.pyplot as plt\n'), ((2737, 2755), 'matplotlib.pyplot.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (2753, 
2755), True, 'import matplotlib.pyplot as plt\n'), ((2756, 2792), 'matplotlib.pyplot.savefig', 'plt.savefig', (['"""../figures/demo_1.png"""'], {}), "('../figures/demo_1.png')\n", (2767, 2792), True, 'import matplotlib.pyplot as plt\n'), ((514, 591), 'croco.load_grid', 'croco.load_grid', (['giga.grdfiles', 'block', 'giga.dimpart', 'giga.nsigma'], {'halow': 'halow'}), '(giga.grdfiles, block, giga.dimpart, giga.nsigma, halow=halow)\n', (529, 591), True, 'import croco as croco\n'), ((740, 801), 'nctools.MDataset', 'nct.MDataset', (['giga.grdfiles', 'block', 'giga.dimpart'], {'halow': 'halow'}), '(giga.grdfiles, block, giga.dimpart, halow=halow)\n', (752, 801), True, 'import nctools as nct\n'), ((864, 954), 'nctools.MDataset', 'nct.MDataset', (['giga.hisfiles', 'block', 'giga.dimpart'], {'halow': 'halow', 'gridsizes': 'ncgrid.sizes'}), '(giga.hisfiles, block, giga.dimpart, halow=halow, gridsizes=\n ncgrid.sizes)\n', (876, 954), True, 'import nctools as nct\n'), ((1979, 2052), 'giga_subdomains.plot_blocks', 'gs.plot_blocks', (['domain', 'tileslist', 'blocks', '"""../figures/demo_1_blocks.png"""'], {}), "(domain, tileslist, blocks, '../figures/demo_1_blocks.png')\n", (1993, 2052), True, 'import giga_subdomains as gs\n'), ((2127, 2143), 'giga_tools.mount', 'giga.mount', (['subd'], {}), '(subd)\n', (2137, 2143), True, 'import giga_tools as giga\n'), ((2552, 2602), 'matplotlib.pyplot.pcolormesh', 'plt.pcolormesh', (['*data'], {'vmin': '(17)', 'vmax': '(25)', 'cmap': 'cmap'}), '(*data, vmin=17, vmax=25, cmap=cmap)\n', (2566, 2602), True, 'import matplotlib.pyplot as plt\n'), ((1135, 1174), 'croco.ncread', 'croco.ncread', (['nch', 'grid', '"""temp"""'], {'elem': '(5)'}), "(nch, grid, 'temp', elem=5)\n", (1147, 1174), True, 'import croco as croco\n')] |
# Copyright 2016 - Nokia
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from oslo_config import cfg
from oslo_log import log as logging
from vitrage.common.constants import DatasourceProperties as DSProps
from vitrage.common.constants import EventAction
from vitrage.datasources.zabbix.properties import ZabbixProperties as ZProps
from vitrage.tests.mocks import utils
from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver
from vitrage.tests.unit.datasources.zabbix.zabbix_base_test import \
ZabbixBaseTest
LOG = logging.getLogger(__name__)
# noinspection PyProtectedMember
class ZabbixDriverTest(ZabbixBaseTest):
OPTS = [
cfg.StrOpt('config_file',
help='Zabbix configuration file',
default=utils.get_resources_dir()
+ '/zabbix/zabbix_conf.yaml'),
]
# noinspection PyPep8Naming
@classmethod
def setUpClass(cls):
cls.conf = cfg.ConfigOpts()
cls.conf.register_opts(cls.OPTS, group='zabbix')
def test_get_all(self):
# Test Setup
zabbix_driver = MockZabbixDriver(self.conf)
alarm_data1 = self._extract_alarm_data(triggerid='1', status='1')
alarm_data2 = self._extract_alarm_data(triggerid='2', status='1',
value='1')
alarm_data3 = self._extract_alarm_data(triggerid='3', value='1')
alarm_data4 = self._extract_alarm_data(triggerid='4')
zabbix_driver.set_alarm_datas([alarm_data1,
alarm_data2,
alarm_data3,
alarm_data4])
# Test Action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data3, alarms)
def test_get_all_functionality(self):
# Step 1 - Services with status OK should not be returned
# Test setup scenario
zabbix_driver = MockZabbixDriver(self.conf)
alarm_data1 = self._extract_alarm_data()
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
triggerid='2')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Test action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(0, len(alarms))
# Step 2 - one raised alarm
# Test setup
alarm_data1 = self._extract_alarm_data(value='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Test action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 3 - two raised alarms
# Test setup
alarm_data1 = self._extract_alarm_data(value='1', priority='4')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
value='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = alarm_data1
expected_alarm2 = copy.copy(alarm_data2)
expected_alarm2[ZProps.RESOURCE_NAME] = 'host2'
# Test action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(2, len(alarms))
self._assert_contains(expected_alarm1, alarms)
self._assert_contains(expected_alarm2, alarms)
        # Step 4 - Check inactive alarms. The get-all function should return
        # inactive alarms (alarms whose status has changed to OK)
# Test setup
alarm_data1 = self._extract_alarm_data()
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = alarm_data1
expected_alarm2 = copy.copy(alarm_data2)
expected_alarm2[ZProps.RESOURCE_NAME] = 'host2'
# Test action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
# The alarms of alarm_data1/2 should be returned although their
# status is OK, because they were not OK earlier
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(2, len(alarms))
self._assert_contains(expected_alarm1, alarms)
self._assert_contains(expected_alarm2, alarms)
        # Step 5 - get all when all alarms are inactive and their status
        # has not changed
# Test action
alarms = zabbix_driver._get_all_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'alarms is None')
self.assertEqual(0, len(alarms))
def test_get_changes_functionality(self):
# Step 1 - get changes when all alarms are OK
# Test setup
zabbix_driver = MockZabbixDriver(self.conf)
alarm_data1 = self._extract_alarm_data(priority='2')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='2')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
description='Uptime',
priority='3')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Test action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(0, len(alarms))
# Step 2 - get changes when alarm is raised
# Test setup
alarm_data1 = self._extract_alarm_data(priority='2', value='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Test action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 3 - get changes when the priority of inactive alarm is changed
# Test setup
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='3')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Test action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(0, len(alarms))
# Step 4 - get changes when:
# 1. alarm1 - priority of active alarm is changed (should be returned)
# 2. alarm2 - raised alarm (should be returned)
# 3. alarm3 - priority of inactive alarm is changed (should not
# be returned)
# Test setup
alarm_data1 = self._extract_alarm_data(priority='4', value='1')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='1', value='1')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
triggerid='22222',
priority='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = alarm_data1
expected_alarm2 = copy.copy(alarm_data2)
expected_alarm2[ZProps.RESOURCE_NAME] = 'host2'
# Test action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(2, len(alarms))
self._assert_contains(expected_alarm1, alarms)
self._assert_contains(expected_alarm2, alarms)
# Step 5 - get changes when all active alarms are changed to inactive
# Test setup
alarm_data1 = self._extract_alarm_data(priority='4')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = alarm_data1
expected_alarm2 = copy.copy(alarm_data2)
expected_alarm2[ZProps.RESOURCE_NAME] = 'host2'
# Test action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(2, len(alarms))
self._assert_contains(expected_alarm1, alarms)
self._assert_contains(expected_alarm2, alarms)
# Step 6 - get changes when no change occurred
# Action
alarms = zabbix_driver._get_changed_alarms()
# Test assertions
self.assertIsNotNone(alarms, 'alarms is None')
self.assertEqual(0, len(alarms))
def test_get_changes_and_get_all(self):
# Step 1 - get changes
# Step setup
zabbix_driver = MockZabbixDriver(self.conf)
alarm_data1 = self._extract_alarm_data(priority='2', value='1')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='2')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
triggerid='2')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
# Step action
alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 2 - get changes when no change occurred (returns nothing)
# Step action
alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(0, len(alarms))
# Step 3 - get all
# Step action
alarms = zabbix_driver._get_all_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 4 - get all for second time
# (when no change has occurred it returns the same)
# Step action
alarms = zabbix_driver._get_all_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 5 - calling get changes right after get all (returns nothing)
# Step setup
alarm_data1 = self._extract_alarm_data(priority='4', value='1')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='1',
value='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = alarm_data1
expected_alarm2 = copy.copy(alarm_data2)
expected_alarm2[ZProps.RESOURCE_NAME] = 'host2'
# Step action
get_all_alarms = zabbix_driver._get_all_alarms()
changed_alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(get_all_alarms, 'No alarms returned')
self.assertEqual(2, len(get_all_alarms))
self._assert_contains(expected_alarm1, get_all_alarms)
self._assert_contains(expected_alarm2, get_all_alarms)
self.assertIsNotNone(changed_alarms, 'No alarms returned')
self.assertEqual(0, len(changed_alarms))
# Step 6 - get changes
# Step setup
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2',
priority='4',
value='1')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
triggerid='2',
priority='4',
value='1')
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
expected_alarm1 = copy.copy(alarm_data2)
expected_alarm1[ZProps.RESOURCE_NAME] = 'host2'
expected_alarm2 = copy.copy(expected_alarm1)
expected_alarm2[ZProps.TRIGGER_ID] = '2'
# Step action
alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(2, len(alarms))
self._assert_contains(expected_alarm1, alarms)
self._assert_contains(expected_alarm2, alarms)
def test_delete_alarm(self):
# Test setup
alarm_data1 = self._extract_alarm_data(value='1')
alarm_data2 = self._extract_alarm_data(z_resource_name='compute-2')
alarm_data3 = self._extract_alarm_data(z_resource_name='compute-2',
triggerid='2')
# Step 1 - delete inactive alarm
# Step setup
zabbix_driver = MockZabbixDriver(self.conf)
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2, alarm_data3])
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2])
# Step action
alarms = zabbix_driver._get_all_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
# Step 2 - delete active alarm
# Step setup
zabbix_driver.set_alarm_datas([alarm_data2])
# Step action
alarms = zabbix_driver._get_all_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
self.assertEqual(EventAction.DELETE_ENTITY,
alarms[0][DSProps.EVENT_TYPE])
# Step 3 - get changes after get all should not return deleted alarm
# Step action
alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'alarms is None')
self.assertEqual(0, len(alarms))
# Step 4 -
# Step setup
zabbix_driver.set_alarm_datas([alarm_data1, alarm_data2])
zabbix_driver._get_all_alarms()
zabbix_driver.set_alarm_datas([alarm_data2])
# Step action
alarms = zabbix_driver._get_changed_alarms()
# Step assertions
self.assertIsNotNone(alarms, 'No alarms returned')
self.assertEqual(1, len(alarms))
self._assert_contains(alarm_data1, alarms)
self.assertEqual(EventAction.DELETE_ENTITY,
alarms[0][DSProps.EVENT_TYPE])
def _extract_alarm_data(self,
z_resource_name='compute-1',
description='cpu',
status='0',
value='0',
priority='1',
triggerid='0'):
return {ZProps.ZABBIX_RESOURCE_NAME: z_resource_name,
ZProps.DESCRIPTION: description,
ZProps.STATUS: status,
ZProps.VALUE: value,
ZProps.PRIORITY: priority,
ZProps.RESOURCE_NAME: z_resource_name,
ZProps.TRIGGER_ID: triggerid}
| [
"oslo_config.cfg.ConfigOpts",
"vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver",
"copy.copy",
"vitrage.tests.mocks.utils.get_resources_dir",
"oslo_log.log.getLogger"
] | [((1058, 1085), 'oslo_log.log.getLogger', 'logging.getLogger', (['__name__'], {}), '(__name__)\n', (1075, 1085), True, 'from oslo_log import log as logging\n'), ((1470, 1486), 'oslo_config.cfg.ConfigOpts', 'cfg.ConfigOpts', ([], {}), '()\n', (1484, 1486), False, 'from oslo_config import cfg\n'), ((1618, 1645), 'vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver', 'MockZabbixDriver', (['self.conf'], {}), '(self.conf)\n', (1634, 1645), False, 'from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver\n'), ((2611, 2638), 'vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver', 'MockZabbixDriver', (['self.conf'], {}), '(self.conf)\n', (2627, 2638), False, 'from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver\n'), ((4038, 4060), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', (4047, 4060), False, 'import copy\n'), ((4859, 4881), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', (4868, 4881), False, 'import copy\n'), ((5821, 5848), 'vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver', 'MockZabbixDriver', (['self.conf'], {}), '(self.conf)\n', (5837, 5848), False, 'from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver\n'), ((8405, 8427), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', (8414, 8427), False, 'import copy\n'), ((9240, 9262), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', (9249, 9262), False, 'import copy\n'), ((10003, 10030), 'vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver', 'MockZabbixDriver', (['self.conf'], {}), '(self.conf)\n', (10019, 10030), False, 'from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver\n'), ((12129, 12151), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', (12138, 12151), False, 'import copy\n'), ((13347, 13369), 'copy.copy', 'copy.copy', (['alarm_data2'], {}), '(alarm_data2)\n', 
(13356, 13369), False, 'import copy\n'), ((13452, 13478), 'copy.copy', 'copy.copy', (['expected_alarm1'], {}), '(expected_alarm1)\n', (13461, 13478), False, 'import copy\n'), ((14256, 14283), 'vitrage.tests.unit.datasources.zabbix.mock_driver.MockZabbixDriver', 'MockZabbixDriver', (['self.conf'], {}), '(self.conf)\n', (14272, 14283), False, 'from vitrage.tests.unit.datasources.zabbix.mock_driver import MockZabbixDriver\n'), ((1289, 1314), 'vitrage.tests.mocks.utils.get_resources_dir', 'utils.get_resources_dir', ([], {}), '()\n', (1312, 1314), False, 'from vitrage.tests.mocks import utils\n')] |
#
# Copyright Soramitsu Co., Ltd. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
#
from iroha import Iroha, IrohaCrypto
from iroha import primitive_pb2
import commons
admin = commons.new_user('admin@test')
alice = commons.new_user('alice@test')
iroha = Iroha(admin['id'])
@commons.hex
def genesis_tx():
test_permissions = [primitive_pb2.can_detach_role]
genesis_commands = commons.genesis_block(admin, alice, test_permissions)
tx = iroha.transaction(genesis_commands)
IrohaCrypto.sign_transaction(tx, admin['key'])
return tx
@commons.hex
def detach_role_tx():
tx = iroha.transaction([
iroha.command('DetachRole', account_id=admin['id'], role_name='test_role')
], creator_account=alice['id'])
IrohaCrypto.sign_transaction(tx, alice['key'])
return tx
| [
"commons.genesis_block",
"iroha.Iroha",
"iroha.IrohaCrypto.sign_transaction",
"commons.new_user"
] | [((189, 219), 'commons.new_user', 'commons.new_user', (['"""admin@test"""'], {}), "('admin@test')\n", (205, 219), False, 'import commons\n'), ((228, 258), 'commons.new_user', 'commons.new_user', (['"""alice@test"""'], {}), "('alice@test')\n", (244, 258), False, 'import commons\n'), ((267, 285), 'iroha.Iroha', 'Iroha', (["admin['id']"], {}), "(admin['id'])\n", (272, 285), False, 'from iroha import Iroha, IrohaCrypto\n'), ((397, 450), 'commons.genesis_block', 'commons.genesis_block', (['admin', 'alice', 'test_permissions'], {}), '(admin, alice, test_permissions)\n', (418, 450), False, 'import commons\n'), ((500, 546), 'iroha.IrohaCrypto.sign_transaction', 'IrohaCrypto.sign_transaction', (['tx', "admin['key']"], {}), "(tx, admin['key'])\n", (528, 546), False, 'from iroha import Iroha, IrohaCrypto\n'), ((750, 796), 'iroha.IrohaCrypto.sign_transaction', 'IrohaCrypto.sign_transaction', (['tx', "alice['key']"], {}), "(tx, alice['key'])\n", (778, 796), False, 'from iroha import Iroha, IrohaCrypto\n')] |
import re, random, requests, json
from os import path
from .DefaultSubs import *
from .spellcheck import correction,WORDS
try:
from urllib import quote
except ImportError as e:
from urllib.parse import quote
reflections = {
"i am" : "you are",
"i was" : "you were",
"i" : "you",
"i'm" : "you are",
"i'd" : "you would",
"i've" : "you have",
"i'll" : "you will",
"my" : "your",
"you are" : "I am",
"you were" : "I was",
"you've" : "I have",
"you'll" : "I will",
"your" : "my",
"yours" : "mine",
"you" : "me",
"me" : "you"
}
class multiFunctionCall:
    def __init__(self, func=None):
        # Avoid a shared mutable default argument.
        self.__func__ = func if func is not None else {}
    def defaultfunc(self, string, sessionID="general"):
return string
def call(self,string,sessionID):
s = string.split(":")
if len(s)<=1:
return string
name = s[0].strip()
s = ":".join(s[1:])
func = self.defaultfunc
try:func = self.__func__[name]
except:s = string
return re.sub(r'\\([\[\]{}%:])',r"\1",func(re.sub(r'([\[\]{}%:])',r"\\\1",s),sessionID =sessionID))
class dummyMatch:
def __init__(self,string):
self.string = string
def group(self,index):
if index==0:return self.string
raise IndexError("no such group")
def groupdict(self,*arg,**karg):
return {}
class Topic:
def __init__(self,topics):
self.topic={"general":''}
self.topics = topics
def __setitem__(self,key,value):
value = value.strip()
if value and value[0]==".":
index=1
current_topic = self.topic[key].split(".")
while value[index]==".":
index+=1
current_topic.pop()
current_topic.append(value[index:])
value = ".".join(current_topic)
self.topic[key]=value
def __getitem__(self,key):
topic = self.topic[key]
if topic in self.topics():
return topic
return ''
class Chat(object):
def __init__(self, pairs=(), reflections=reflections, call=multiFunctionCall(), api={}, normalizer=defaultNormal,default_template=path.join(path.dirname(path.abspath(__file__)),"default.template")):
"""
Initialize the chatbot. Pairs is a list of patterns and responses. Each
pattern is a regular expression matching the user's statement or question,
e.g. r'I like (.*)'. For each such pattern a list of possible responses
is given, e.g. ['Why do you like %1', 'Did you ever dislike %1']. Material
which is matched by parenthesized sections of the patterns (e.g. .*) is mapped to
the numbered positions in the responses, e.g. %1.
:type pairs: list of tuple
:param pairs: The patterns and responses
:type reflections: dict
:param reflections: A mapping between first and second person expressions
:rtype: None
"""
self.__init__handler()
defaultpairs = self.__processTemplateFile(default_template)
if type(pairs).__name__ in ('unicode','str'):
pairs = self.__processTemplateFile(pairs)
self._pairs = {'':{"pairs":[],"defaults":[]}}
if type(pairs)!=dict:
pairs = {'':{"pairs":pairs,"defaults":[]}}
elif not '' in pairs:
raise KeyError("Default topic missing")
self._normalizer = dict(normalizer)
for key in normalizer:
self._normalizer[key.lower()] = normalizer[key]
self._normalizer_regex = self._compile_reflections(normalizer)
self.__processLearn(defaultpairs)
self.__processLearn(pairs)
self._reflections = reflections
self._regex = self._compile_reflections(reflections)
self._memory = {"general":{}}
self.conversation = {"general":[]}
self.sessionID = "general"
self.attr = {"general":{"match":None,"pmatch":None,"_quote":False,"substitute":True}}
self.call = call
self.topic = Topic(self._pairs.keys)
try:self._api = api if type(api)==dict else json.load(api)
except:raise SyntaxError("Invalid value for api")
def __init__handler(self):
"""
initialize handlers and operator functionality
"""
self.__action_handlers = {"chat":self.__chat_handler,
"low":self.__low_handler,
"up":self.__up_handler,
"cap":self.__cap_handler,
"call":self.__call_handler,
"topic":self.__topic_handler,
"map":self.__map_handler,
"eval":self.__eval_handler,
}
self.__confitional_operator = {
"!=":lambda a,b:a!=b,
">=":lambda a,b:a>=b,
"<=":lambda a,b:a<=b,
"==":lambda a,b:a==b,
"<":lambda a,b:a<b,
">":lambda a,b:a>b
}
self.__logical_operator ={
'&':lambda a,b:a and b,
'|':lambda a,b:a or b,
'^':lambda a,b:a ^ b
}
def __normalize(self,text):
"""
        Substitute words in the string according to the specified normalizer,
e.g. "I'm" -> "I am"
:type str: str
:param str: The string to be mapped
:rtype: str
"""
return self._normalizer_regex.sub(lambda mo:
self._normalizer[mo.string[mo.start():mo.end()]],
text.lower())
def __errorMessage(self,expected,found):
return "Expected '%s' tag found '%s'" % (expected,found)
def __responseTags(self,text,pos,index):
next_index=index+1
if pos[next_index][2]!="endresponse":
raise SyntaxError(self.__errorMessage("endresponse",pos[next_index][2]))
return text[pos[index][1]:pos[next_index][0]].strip(" \t\n")
def __blockTags(self,text,pos,length,index):
withinblock = {"learn":{},"response":[],"client":[],"prev":[]}
while pos[index][2]!="endblock":
if pos[index][2]=="learn":
withinblock["learn"]={}
index = self.__GroupTags(text,pos,withinblock["learn"],(lambda i:pos[i][2]!="endlearn"),length,index+1)
index-=1
elif pos[index][2]=="response":
withinblock["response"].append(self.__responseTags(text,pos,index))
index+=1
elif pos[index][2]=="client":
index+=1
if pos[index][2]!="endclient":
raise SyntaxError(self.__errorMessage("endclient",pos[index][2]))
withinblock["client"].append(text[pos[index-1][1]:pos[index][0]].strip(" \t\n"))
elif pos[index][2]=="prev":
index+=1
if pos[index][2]!="endprev":
raise SyntaxError(self.__errorMessage("endprev",pos[index][2]))
withinblock["prev"].append(text[pos[index-1][1]:pos[index][0]].strip(" \t\n"))
else:
raise NameError("Invalid Tag '%s'" % pos[index][2])
index+=1
return index+1,(withinblock["client"][0],
withinblock["prev"][0] if withinblock["prev"] else None,
withinblock["response"],
withinblock["learn"] )
def __GroupTags(self,text,pos,groups,condition,length,index=0,name=""):
pairs=[]
defaults=[]
while condition(index):
if pos[index][2]=="block":
p,within = self.__blockTags(text,pos,length,index+1)
pairs.append(within)
index=p
elif pos[index][2]=="response":
defaults.append(self.__responseTags(text,pos,index))
index+=2
elif pos[index][2]=="group":
child_name=(name+"."+pos[index][3].strip()) if name else pos[index][3].strip()
index = self.__GroupTags(text,pos,groups,(lambda i:pos[i][2]!="endgroup"),length,index+1, name=child_name)
else:
raise SyntaxError(self.__errorMessage('group, block, or response',pos[index][2]))
if name in groups:
groups[name]["pairs"].extend(pairs)
groups[name]["defaults"].extend(defaults)
else:
groups[name]={"pairs":pairs,"defaults":defaults}
return index+1
def __processTemplateFile(self,fileName):
with open(fileName, encoding='utf-8') as template:
text = template.read()
pos = [(m.start(0),m.end(0),text[m.start(1):m.end(1)],text[m.start(4):m.end(4)]) \
for m in re.finditer(
r'{%[\s\t]+((end)?(block|learn|response|client|prev|group))[\s\t]+([^%]*|%(?=[^}]))%}',
text)
]
length = len(pos)
groups = {}
self.__GroupTags(text,pos,groups,(lambda i:i<length),length)
return groups
def __build_pattern(self,pattern):
if pattern!=None:
try:return re.compile(self.__normalize(pattern), re.IGNORECASE)
except Exception as e:
e.args=(str(e)+ " in pattern "+pattern, )
raise e
def __processLearn(self,pairs):
for topic in pairs:
if topic not in self._pairs:self._pairs[topic]={"pairs":[],"defaults":[]}
self._pairs[topic]["defaults"].extend([(i,self._condition(i))
for i in pairs[topic].get("defaults",[])])
for pair in pairs[topic]["pairs"][::-1]:
learn, previous = {}, None
length = len(pair)
if length>3:client,previous,responses,learn = pair[:4]
elif length==3:
if type(pair[1]) in (tuple,list):client,responses,learn = pair
else:client,previous,responses = pair
elif length==2 and type(pair[1]) in (tuple,list):client,responses = pair
else:raise ValueError("Response not specified")
if type(learn) != dict:
                    raise TypeError("Invalid type for learn: expected dict, got '%s'" % type(learn).__name__)
self._pairs[topic]["pairs"].insert(0,(self.__build_pattern(client),
self.__build_pattern(previous),
tuple((i,self._condition(i)) for i in responses),
learn))
def _startNewSession(self,sessionID,topic=''):
self._memory[sessionID]={}
self.conversation[sessionID]=[]
self.attr[sessionID]={"match":None,"pmatch":None,"_quote":False,"substitute":True}
self.topic[sessionID] = topic
def _restructure(self,group,index=None):
if index==None:
toremove={}
allElem = list(group)
for i in group:
toremove[i]=set()
for j in group[i]:
toremove[i].update(set(group[i]).intersection(group[j]))
for i in group:
for j in toremove[i]:
group[i].remove(j)
try: allElem.remove(j)
except: pass
index = list(group)
toremove = [j for i in list(allElem) for j in group[i]]
for i in toremove:
try: allElem.remove(i)
except: pass
else:
allElem = list(index)
while index:
i = index.pop()
if type(group[i])==list:
group[i] = self._restructure(dict(group),group[i])
for j in list(group[i]):
try: index.remove(j)
except: pass
return {i:group[i] for i in allElem}
def _subAction(self,group,start_end_pair,action):
return {i:{
"action":action[i],
"start":start_end_pair[i][0],
"end":start_end_pair[i][1],
"child":self._subAction(group[i],start_end_pair,action)
} for i in group}
def _getWithin(self,group,index):
def init_group(i):
group[index[i]]["within"]=[]
orderedGroup.append(group[index[i]])
return i+1
def append_group(pos,i):
pos,within = self._getWithin(group,index[pos:])
group[index[i-1]]["within"]+=within
return pos
i = 0
orderedGroup = []
while i<len(index):
if group[index[i]]["action"]=="if":
i=init_group(i)
startIF = True
while startIF:
if i>=len(index):
raise SyntaxError("If not closed in Conditional statement")
if group[index[i]]["action"]=="elif": i = init_group(i)
elif group[index[i]]["action"]=="else":
pos = i = init_group(i)
startIF = False
while group[index[pos]]["action"]!="endif": pos = append_group(pos,i)+i
i = init_group(pos)
elif group[index[i]]["action"]=="endif":
i = init_group(i)
startIF = False
else:
pos = append_group(i,i)
for j in range(i,pos): del group[index[j]]
i += pos
elif group[index[i]]["action"] in self.__action_handlers .keys():
orderedGroup.append(group[index[i]])
i += 1
else:return i,orderedGroup
return i,orderedGroup
def _setwithin(self,group):
old =group
for i in group:
if group[i]["child"]:
group[i]["child"] = self._setwithin(group[i]["child"])
index = list(group)
index.sort(key =lambda x: group[x]["start"])
pos,orderedGroup = self._getWithin(group,index)
if pos<len(index):
raise SyntaxError("invalid statement")
return orderedGroup
def _inherit(self,start_end_pair,action):
group = {}
for i in range(len(start_end_pair)):
group[i] = []
for j in range(len(start_end_pair)):
if start_end_pair[i][0]<start_end_pair[j][0] and start_end_pair[i][1]>start_end_pair[j][1]:
group[i].append(j)
group = self._restructure(group)
group = self._subAction(group,start_end_pair,action)
return self._setwithin(group)
def _condition(self,response):
pos = [(m.start(0),m.end(0)) for m in re.finditer(r'{%?|%?}|\[|\]', response)]
newPos = [(start,end) for start,end in pos if (not start) or response[start-1]!="\\" ]
i=0
start_end_pair = []
actions = []
while newPos:
for i in range(1,len(newPos)):
if response[newPos[i][1]-1] in "}]":
break
if response[newPos[i-1][0]] in "{[":
endTag = newPos.pop(i)
biginTag = newPos.pop(i-1)
bN = biginTag[1]-biginTag[0]
eN = endTag[1]-endTag[0]
if bN != eN or not ((response[biginTag[0]] == "{" and response[endTag[1]-1] == "}") or (response[biginTag[0]] == "[" and response[endTag[1]-1] == "]")):
raise SyntaxError("invalid syntax '%s'" % response)
start_end_pair.append((biginTag[1],endTag[0]))
if bN == 2:
statement = re.findall( r'^[\s\t]*(if|endif|elif|else|chat|low|up|cap|call|topic)[\s\t]+',
response[biginTag[1]:endTag[0]])
if statement:
actions.append(statement[0])
else:
raise SyntaxError("invalid statement '%s'" % response[biginTag[1]:endTag[0]] )
else:
if response[biginTag[0]] == "{":
actions.append("map")
else:
actions.append("eval")
else:
raise SyntaxError("invalid syntax in \"%s\"" % response)
try:
group = self._inherit(start_end_pair,actions)
except SyntaxError:
raise SyntaxError("invalid statement in \"%s\"" % response)
return group
def _compile_reflections(self,normal):
sorted_refl = sorted(normal.keys(), key=len, reverse=True)
return re.compile(r"\b({0})\b".format("|".join(map(re.escape,
sorted_refl))), re.IGNORECASE)
def _substitute(self, str):
"""
Substitute words in the string, according to the specified reflections,
e.g. "I'm" -> "you are"
:type str: str
:param str: The string to be mapped
:rtype: str
"""
if not self.attr.get("substitute",True):return str
return self._regex.sub(lambda mo:
self._reflections[mo.string[mo.start():mo.end()]],
str.lower())
def _checkIF(self,con,sessionID = "general"):
pos = [(m.start(0),m.end(0),m.group(0)) for m in re.finditer(r'([\<\>!=]=|[\<\>]|&|\|)', con)]
if not pos:return con.strip()
res = prevres = True
prevO = None
pcsymbol = "&"
A = con[0:pos[0][0]].strip()
for j in range(len(pos)):
s,e,o = pos[j]
try:B = con[e:pos[j+1][0]].strip()
except:B = con[e:].strip()
            try:a,b = float(A),float(B)
except:a,b = A,B
try:res = self.__confitional_operator[o](a,b) and res
except:
try:prevres,res = self.__logical_operator[pcsymbol](prevres,res),True
                except:raise SyntaxError("invalid conditional operator \"%s\"" % pcsymbol)
pcsymbol = o
A = B
return self.__logical_operator[pcsymbol](prevres,res)
def __if_handler(self,i,condition,response,sessionID):
start = self.__get_start_pos(condition[i]["start"],response,"if")
end = condition[i]["end"]
check = True
matchedIndex = None
_quote = self.attr[sessionID]["_quote"]
self.attr[sessionID]["_quote"] = False
substitute = self.attr.get("substitute",True)
self.attr["substitute"] = False
while check:
con = self._checkAndEvaluateCondition(response,condition[i]["child"],start,end,sessionID =sessionID)
i+=1
if self._checkIF(con,sessionID =sessionID):
matchedIndex = i-1
while condition[i]["action"] != "endif":
i+=1
check = False
elif condition[i]["action"] == "else":
matchedIndex = i
while condition[i]["action"] != "endif":
i+=1
check = False
elif condition[i]["action"] == "elif":
start = self.__get_start_pos(condition[i]["start"],response,"elif")
end = condition[i]["end"]
elif condition[i]["action"] == "endif":
check = False
self.attr[sessionID]["_quote"] = _quote
self.attr["substitute"] = substitute
return ((self._checkAndEvaluateCondition(
response,
condition[matchedIndex]["within"],
condition[matchedIndex]["end"]+2,
condition[matchedIndex+1]["start"]-2,
sessionID =sessionID
) if matchedIndex!=None else ""),i)
def __handler(self,condition,response,action,sessionID):
return self._checkAndEvaluateCondition(
response,
condition["child"],
self.__get_start_pos(condition["start"],response,action),
condition["end"],
sessionID =sessionID
)
def __chat_handler(self,i,condition,response,sessionID):
substitute = self.attr.get("substitute",True)
self.attr["substitute"] = False
response = self.respond(self.__handler(condition[i],response,"chat",sessionID),sessionID =sessionID)
self.attr["substitute"] = substitute
return response
def __low_handler(self,i,condition,response,sessionID):
return self.__handler(condition[i],response,"low",sessionID).lower()
def __up_handler(self,i,condition,response,sessionID):
return self.__handler(condition[i],response,"up",sessionID).upper()
def __cap_handler(self,i,condition,response,sessionID):
return self.__handler(condition[i],response,"cap",sessionID).capitalize()
def __call_handler(self,i,condition,response,sessionID):
return self.call.call(self.__handler(condition[i],response,"call",sessionID),sessionID =sessionID)
def __topic_handler(self,i,condition,response,sessionID):
self.topic[sessionID] = self.__handler(condition[i],response,"topic",sessionID).strip()
return ""
def __get_start_pos(self,start,response,exp):
return start+re.compile(r"([\s\t]*"+exp+"[\s\t]+)").search(response[start:]).end(1)
def __map_handler(self,i,condition,response,sessionID):
start = condition[i]["start"]
end = condition[i]["end"]
think = False
if response[start] == "!":
think = True
start +=1
content = self._checkAndEvaluateCondition(
response,
condition[i]["child"],
start,
end,
sessionID =sessionID
).strip().split(":")
name = content[0]
thisIndex=0
for thisIndex in range(1,len(content)):
if name[-1]=="\\":
name += ":"+content[thisIndex]
else:
thisIndex-=1
break
thisIndex+=1
name = name.strip().lower()
if thisIndex<(len(content)):
value = content[thisIndex]
for thisIndex in range(thisIndex+1,len(content)):
if value[-1]=="\\":
value += ":"+content[thisIndex]
else:
break
self._memory[sessionID][name] = self._substitute(value.strip())
return self._memory[sessionID][name] if not think and name in self._memory[sessionID] else ""
    def __eval_handler(self,i,condition,response,sessionID):
start = condition[i]["start"]
end = condition[i]["end"]
think = False
if response[start] == "!":
think = True
start +=1
_quote = self.attr[sessionID]["_quote"]
self.attr[sessionID]["_quote"] = True
content = self._checkAndEvaluateCondition(
response,
condition[i]["child"],
start,
end,
sessionID =sessionID
).strip()
self.attr[sessionID]["_quote"] = _quote
vals = content.split(",")
names = vals[0].split(":")
apiName = names[0]
methodName = ":".join(names[1:])
data={}
key=None
for i in vals[1:]:
            pair = i.split(":")
if len(pair)>=2:
key = pair[0]
data[key]=":".join(pair[1:])
elif key!=None:
data[key]+=","+pair[0]
else:raise SyntaxError("invalid syntax '%s'" % response[start:end] )
result = self.__api_handler(apiName,methodName,data)
return "" if think else result
def __api_request(self,url,method,**karg):
try:return requests.__dict__[method.lower().strip()](url,**karg)
except requests.exceptions.MissingSchema as e:
return self.__api_request("http://"+url,method,**karg)
except requests.exceptions.ConnectionError as e:
raise RuntimeError("Couldn't connect to server (unreachable). Check your network")
except KeyError as e:
raise RuntimeError("Invalid method name '%s' in api.json" % method)
def __api_handler(self,apiName,methodName,data={}):
if apiName not in self._api or methodName not in self._api[apiName]:
            raise RuntimeError("Invalid method name '%s' for api '%s'" % (methodName,apiName))
api_params = dict(self._api[apiName][methodName])
if "auth" in self._api[apiName]:
try:api_params["cookies"] = self.__api_request(**self._api[apiName]["auth"]).cookies
except:raise ValueError("In api.json 'auth' of '%s' is wrongly configured." % apiName)
param = "params" if self._api[apiName][methodName]["method"].upper().strip() == "GET" else "data"
try:api_params[param].update(data)
except:api_params[param] = data
api_type = "normal"
if "type" in api_params:
api_type = api_params["type"]
del api_params["type"]
api_data_getter = []
if "value_getter" in api_params:
api_data_getter = api_params["value_getter"]
del api_params["value_getter"]
response = self.__api_request(**api_params)
responseText = response.json() if api_type.upper().strip()=="JSON" else response.content
for key in api_data_getter:
responseText = responseText[key]
return responseText
def _quote(self,string,sessionID):
if self.attr[sessionID]["_quote"]:
            try:return quote(string)
            except:return quote(string.encode("UTF-8"))
return string
def __substituteFromClientStatement(self,match,prevResponse,extraSymbol="",sessionID = "general"):
"""
        Substitute from the client statement into the response
"""
prev = 0
startPadding = 1+len(extraSymbol)
finalResponse = ""
for m in re.finditer(r'%'+extraSymbol+'[0-9]+', prevResponse):
start = m.start(0)
end = m.end(0)
num = int(prevResponse[start+startPadding:end])
finalResponse += prevResponse[prev:start]
try:finalResponse += self._quote(self._substitute(match.group(num)),sessionID)
except IndexError as e:pass
prev = end
namedGroup = match.groupdict()
if namedGroup:
prevResponse = finalResponse + prevResponse[prev:]
finalResponse = ""
prev = 0
for m in re.finditer(r'%'+extraSymbol+'([a-zA-Z_][a-zA-Z_0-9]*)([^a-zA-Z_0-9]|$)', prevResponse):
start = m.start(1)
end = m.end(1)
finalResponse += prevResponse[prev:start]
try:
value = namedGroup[prevResponse[start+startPadding:end]]
if value:finalResponse += self._quote(self._substitute(value),sessionID)
except KeyError as e:pass
prev = end
return finalResponse + prevResponse[prev:]
def _checkAndEvaluateCondition(self, response,condition=[],startIndex=0,endIndex=None,sessionID = "general"):
endIndex = endIndex if endIndex != None else len(response)
if not condition:
finalResponse = self.__substituteFromClientStatement(self.attr[sessionID]["match"],response[startIndex:endIndex],sessionID = sessionID)
parentMatch=self.attr[sessionID]["pmatch"]
return self.__substituteFromClientStatement(parentMatch,finalResponse,extraSymbol = '!',sessionID = sessionID) if parentMatch!=None else finalResponse
i=0
finalResponse = ""
while i < len(condition):
pos = condition[i]["start"]-(1 if condition[i]["action"] in ["map","eval"] else 2)
finalResponse += self._checkAndEvaluateCondition(response[startIndex:pos],sessionID =sessionID)
_quote = self.attr[sessionID]["_quote"]
try:
self.attr[sessionID]["_quote"] = False
tempResponse = self.__action_handlers[condition[i]["action"]](i,condition,response,sessionID)
self.attr[sessionID]["_quote"] = _quote
finalResponse += self._quote(tempResponse,sessionID)
except KeyError as e:
if condition[i]["action"] == "if":
self.attr[sessionID]["_quote"] = _quote
response_txt,i = self.__if_handler(i,condition,response,sessionID)
finalResponse += response_txt
startIndex = condition[i]["end"]+(1 if condition[i]["action"] in ["map","eval"] else 2)
i+=1
self.attr[sessionID]["_quote"] = _quote
finalResponse += self._checkAndEvaluateCondition(response[startIndex:endIndex],sessionID = sessionID)
return finalResponse
def _wildcards(self, response, match, parentMatch,sessionID = "general"):
self.attr[sessionID]["match"]=match
self.attr[sessionID]["pmatch"]=parentMatch
response,condition = response
return re.sub(r'\\([\[\]{}%:])',r"\1",self._checkAndEvaluateCondition(response,condition,sessionID = sessionID ))
def __chose_and_process(self,choices,match,parentMatch,sessionID):
resp = random.choice(choices) # pick a random response
resp = self._wildcards(resp, match, parentMatch,sessionID = sessionID) # process wildcards
# fix munged punctuation at the end
if resp[-2:] == '?.': resp = resp[:-2] + '.'
if resp[-2:] == '??': resp = resp[:-2] + '?'
return resp
def __intend_selection(self, text, previousText, current_topic, sessionID):
for (pattern, parent, response,learn) in self._pairs[current_topic]["pairs"]:# check each pattern
match = pattern.match(text)
if not match:continue
if parent==None:return match,None,response,learn
parentMatch = parent.match(previousText)
if parentMatch:# did the pattern match?
return match,parentMatch,response,learn
def __response_on_topic(self, text, previousText, text_correction, current_topic, sessionID = "general"):
match=self.__intend_selection(text, previousText, current_topic, sessionID) or \
self.__intend_selection(text_correction, previousText, current_topic, sessionID)
if match:
match,parentMatch,response,learn=match
if learn:
self.__processLearn({
self._wildcards((topic,self._condition(topic)), match, parentMatch,sessionID = sessionID): \
{'pairs':[self.__substituteInLearn(pair, match, parentMatch,sessionID = sessionID) for pair in learn[topic]['pairs']],
'defaults':[self._wildcards((default,self._condition(default)), match, parentMatch,sessionID = sessionID) for default in learn[topic]['defaults']] } \
for topic in learn})
return self.__chose_and_process(response,match,parentMatch,sessionID)
if self._pairs[current_topic]["defaults"]:
return self.__chose_and_process(self._pairs[current_topic]["defaults"],dummyMatch(text), None,sessionID)
raise ValueError("No match found")
def __correction(self,text):
"""
spell correction
"""
new_text = []
for i in text.split():
if len(i)>3:
low = i.lower()
                new_text.append(i if WORDS[low] else correction(i))
else:new_text.append(i)
return " ".join(new_text)
def respond(self, text, sessionID = "general"):
"""
Generate a response to the user input.
:type text: str
:param text: The string to be mapped
:rtype: str
"""
text = self.__normalize(text)
        previousText = self.__normalize(self.conversation[sessionID][-2] if len(self.conversation[sessionID])>1 else "")
text_correction = self.__correction(text)
current_topic = self.topic[sessionID]
current_topic_order = current_topic.split(".")
while current_topic_order:
try:return self.__response_on_topic(text, previousText, text_correction, current_topic, sessionID)
except ValueError as e:pass
current_topic_order.pop()
current_topic = ".".join(current_topic_order)
try:return self.__response_on_topic(text, previousText, text_correction, current_topic, sessionID)
except ValueError as e:return "Sorry I couldn't find anything relevant"
def __substituteInLearn(self,pair, match, parentMatch,sessionID = "general"):
return tuple((self.__substituteInLearn(i, match, parentMatch,sessionID = sessionID) if type(i) in (tuple,list) else \
(i if type(i) == dict else (self._wildcards((i,self._condition(i)), match, parentMatch,sessionID = sessionID) if i else i))) for i in pair)
def __get_topic_recursion(self,topics):
result={}
for topic in topics:
topic_depth=result
for sub_topic in topic.split("."):
topic_depth=topic_depth.setdefault(sub_topic,{})
try:
del result['']
result={'':result}
except:pass
return result
def save_template(self,filename):
with open(filename,"w") as template:
for topic_name,sub_topic in self.__get_topic_recursion(self._pairs).items():
self.__genrate_and_write_template(template,self._pairs,topic_name,sub_topic)
def __genrate_and_write_template(self,template,pairs,topic,sub_topics,base_path=None):
        full_path=(base_path+"."+topic) if base_path else topic
if topic:template.write("{% group "+topic+" %}")
for topic_name,sub_topic in sub_topics.items():self.__genrate_and_write_template(template,pairs,topic_name,sub_topic,full_path)
for (pattern, parent, response,learn) in pairs[full_path]["pairs"]:
template.write("{% block %}")
template.write("{% client %}"+pattern.pattern+"{% endclient %}")
if parent!=None:
template.write("{% prev %}"+parent.pattern+"{% endprev %}")
for res in response:
                template.write("{% response %}"+res[0]+"{% endresponse %}")
if learn:
template.write("{% learn %}")
for topic_name,sub_topic in self.__get_topic_recursion(learn).items():self.__genrate_and_write_template(template,learn,topic_name,sub_topic)
template.write("{% endlearn %}")
template.write("{% endblock %}")
        for res in pairs[full_path]["defaults"]:
            template.write("{% response %}"+res[0]+"{% endresponse %}")
if topic:template.write("{% endgroup %}")
# Hold a conversation with a chatbot
def converse(self,firstQuestion=None ,quit="quit",sessionID = "general"):
if firstQuestion!= None:
self.conversation[sessionID].append(firstQuestion)
print (firstQuestion)
try:input_reader = raw_input
except NameError:input_reader = input
input_sentence = ""
while input_sentence != quit:
input_sentence = quit
try: input_sentence = input_reader("> ")
except EOFError:print (input_sentence)
if input_sentence:
self.conversation[sessionID].append(input_sentence)
while input_sentence[-1] in "!.": input_sentence = input_sentence[:-1]
self.conversation[sessionID].append(self.respond(input_sentence,sessionID=sessionID))
print (self.conversation[sessionID][-1])
def demo():
firstQuestion="Hi, how are you?"
Chat().converse(firstQuestion)
| [
"random.choice",
"re.compile",
"json.load",
"re.finditer",
"os.path.abspath",
"re.sub",
"re.findall"
] | [((26743, 26798), 're.finditer', 're.finditer', (["('%' + extraSymbol + '[0-9]+')", 'prevResponse'], {}), "('%' + extraSymbol + '[0-9]+', prevResponse)\n", (26754, 26798), False, 'import re, random, requests, json\n'), ((30080, 30102), 'random.choice', 'random.choice', (['choices'], {}), '(choices)\n', (30093, 30102), False, 'import re, random, requests, json\n'), ((27326, 27420), 're.finditer', 're.finditer', (["('%' + extraSymbol + '([a-zA-Z_][a-zA-Z_0-9]*)([^a-zA-Z_0-9]|$)')", 'prevResponse'], {}), "('%' + extraSymbol + '([a-zA-Z_][a-zA-Z_0-9]*)([^a-zA-Z_0-9]|$)',\n prevResponse)\n", (27337, 27420), False, 'import re, random, requests, json\n'), ((1149, 1187), 're.sub', 're.sub', (['"""([\\\\[\\\\]{}%:])"""', '"""\\\\\\\\\\\\1"""', 's'], {}), "('([\\\\[\\\\]{}%:])', '\\\\\\\\\\\\1', s)\n", (1155, 1187), False, 'import re, random, requests, json\n'), ((2289, 2311), 'os.path.abspath', 'path.abspath', (['__file__'], {}), '(__file__)\n', (2301, 2311), False, 'from os import path\n'), ((4196, 4210), 'json.load', 'json.load', (['api'], {}), '(api)\n', (4205, 4210), False, 'import re, random, requests, json\n'), ((8850, 8968), 're.finditer', 're.finditer', (['"""{%[\\\\s\\\\t]+((end)?(block|learn|response|client|prev|group))[\\\\s\\\\t]+([^%]*|%(?=[^}]))%}"""', 'text'], {}), "(\n '{%[\\\\s\\\\t]+((end)?(block|learn|response|client|prev|group))[\\\\s\\\\t]+([^%]*|%(?=[^}]))%}'\n , text)\n", (8861, 8968), False, 'import re, random, requests, json\n'), ((14975, 15015), 're.finditer', 're.finditer', (['"""{%?|%?}|\\\\[|\\\\]"""', 'response'], {}), "('{%?|%?}|\\\\[|\\\\]', response)\n", (14986, 15015), False, 'import re, random, requests, json\n'), ((17545, 17593), 're.finditer', 're.finditer', (['"""([\\\\<\\\\>!=]=|[\\\\<\\\\>]|&|\\\\|)"""', 'con'], {}), "('([\\\\<\\\\>!=]=|[\\\\<\\\\>]|&|\\\\|)', con)\n", (17556, 17593), False, 'import re, random, requests, json\n'), ((15897, 16015), 're.findall', 're.findall', 
(['"""^[\\\\s\\\\t]*(if|endif|elif|else|chat|low|up|cap|call|topic)[\\\\s\\\\t]+"""', 'response[biginTag[1]:endTag[0]]'], {}), "('^[\\\\s\\\\t]*(if|endif|elif|else|chat|low|up|cap|call|topic)[\\\\s\\\\t]+'\n , response[biginTag[1]:endTag[0]])\n", (15907, 16015), False, 'import re, random, requests, json\n'), ((21718, 21762), 're.compile', 're.compile', (["('([\\\\s\\\\t]*' + exp + '[\\\\s\\t]+)')"], {}), "('([\\\\s\\\\t]*' + exp + '[\\\\s\\t]+)')\n", (21728, 21762), False, 'import re, random, requests, json\n')] |
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
import unittest
import paddle.fluid as fluid
import paddle.fluid.core as core
from paddle.fluid.dygraph.nn import Embedding, Linear
import paddle.fluid.framework as framework
from paddle.fluid.optimizer import Adam
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph.learning_rate_scheduler import LearningRateDecay
from test_imperative_base import new_program_scope
import numpy as np
import six
class SimpleLSTMRNN(fluid.Layer):
def __init__(self,
hidden_size,
num_steps,
num_layers=2,
init_scale=0.1,
dropout=None):
super(SimpleLSTMRNN, self).__init__()
self._hidden_size = hidden_size
self._num_layers = num_layers
self._init_scale = init_scale
self._dropout = dropout
self._input = None
self._num_steps = num_steps
self.cell_array = []
self.hidden_array = []
self.weight_1_arr = []
self.weight_2_arr = []
self.bias_arr = []
self.mask_array = []
for i in range(self._num_layers):
weight_1 = self.create_parameter(
attr=fluid.ParamAttr(
initializer=fluid.initializer.UniformInitializer(
low=-self._init_scale, high=self._init_scale)),
shape=[self._hidden_size * 2, self._hidden_size * 4],
dtype="float32",
default_initializer=fluid.initializer.UniformInitializer(
low=-self._init_scale, high=self._init_scale))
self.weight_1_arr.append(self.add_parameter('w_%d' % i, weight_1))
bias_1 = self.create_parameter(
attr=fluid.ParamAttr(
initializer=fluid.initializer.UniformInitializer(
low=-self._init_scale, high=self._init_scale)),
shape=[self._hidden_size * 4],
dtype="float32",
default_initializer=fluid.initializer.Constant(0.0))
self.bias_arr.append(self.add_parameter('b_%d' % i, bias_1))
def forward(self, input_embedding, init_hidden=None, init_cell=None):
self.cell_array = []
self.hidden_array = []
for i in range(self._num_layers):
pre_hidden = fluid.layers.slice(
init_hidden, axes=[0], starts=[i], ends=[i + 1])
pre_cell = fluid.layers.slice(
init_cell, axes=[0], starts=[i], ends=[i + 1])
pre_hidden = fluid.layers.reshape(
pre_hidden, shape=[-1, self._hidden_size])
pre_cell = fluid.layers.reshape(
pre_cell, shape=[-1, self._hidden_size])
self.hidden_array.append(pre_hidden)
self.cell_array.append(pre_cell)
res = []
for index in range(self._num_steps):
self._input = fluid.layers.slice(
input_embedding, axes=[1], starts=[index], ends=[index + 1])
self._input = fluid.layers.reshape(
self._input, shape=[-1, self._hidden_size])
for k in range(self._num_layers):
pre_hidden = self.hidden_array[k]
pre_cell = self.cell_array[k]
weight_1 = self.weight_1_arr[k]
bias = self.bias_arr[k]
nn = fluid.layers.concat([self._input, pre_hidden], 1)
gate_input = fluid.layers.matmul(x=nn, y=weight_1)
gate_input = fluid.layers.elementwise_add(gate_input, bias)
i, j, f, o = fluid.layers.split(
gate_input, num_or_sections=4, dim=-1)
c = pre_cell * fluid.layers.sigmoid(f) + fluid.layers.sigmoid(
i) * fluid.layers.tanh(j)
m = fluid.layers.tanh(c) * fluid.layers.sigmoid(o)
self.hidden_array[k] = m
self.cell_array[k] = c
self._input = m
if self._dropout is not None and self._dropout > 0.0:
self._input = fluid.layers.dropout(
self._input,
dropout_prob=self._dropout,
dropout_implementation='upscale_in_train')
res.append(
fluid.layers.reshape(
self._input, shape=[1, -1, self._hidden_size]))
real_res = fluid.layers.concat(res, 0)
real_res = fluid.layers.transpose(x=real_res, perm=[1, 0, 2])
last_hidden = fluid.layers.concat(self.hidden_array, 1)
last_hidden = fluid.layers.reshape(
last_hidden, shape=[-1, self._num_layers, self._hidden_size])
last_hidden = fluid.layers.transpose(x=last_hidden, perm=[1, 0, 2])
last_cell = fluid.layers.concat(self.cell_array, 1)
last_cell = fluid.layers.reshape(
last_cell, shape=[-1, self._num_layers, self._hidden_size])
last_cell = fluid.layers.transpose(x=last_cell, perm=[1, 0, 2])
return real_res, last_hidden, last_cell
class PtbModel(fluid.Layer):
def __init__(self,
hidden_size,
vocab_size,
num_layers=2,
num_steps=20,
init_scale=0.1,
dropout=None):
super(PtbModel, self).__init__()
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.init_scale = init_scale
self.num_layers = num_layers
self.num_steps = num_steps
self.dropout = dropout
self.simple_lstm_rnn = SimpleLSTMRNN(
hidden_size,
num_steps,
num_layers=num_layers,
init_scale=init_scale,
dropout=dropout)
self.embedding = Embedding(
size=[vocab_size, hidden_size],
dtype='float32',
is_sparse=False,
param_attr=fluid.ParamAttr(
name='embedding_para',
initializer=fluid.initializer.UniformInitializer(
low=-init_scale, high=init_scale)))
self.softmax_weight = self.create_parameter(
attr=fluid.ParamAttr(),
shape=[self.hidden_size, self.vocab_size],
dtype="float32",
default_initializer=fluid.initializer.UniformInitializer(
low=-self.init_scale, high=self.init_scale))
self.softmax_bias = self.create_parameter(
attr=fluid.ParamAttr(),
shape=[self.vocab_size],
dtype="float32",
default_initializer=fluid.initializer.UniformInitializer(
low=-self.init_scale, high=self.init_scale))
def forward(self, input, label, init_hidden, init_cell):
init_h = fluid.layers.reshape(
init_hidden, shape=[self.num_layers, -1, self.hidden_size])
init_c = fluid.layers.reshape(
init_cell, shape=[self.num_layers, -1, self.hidden_size])
x_emb = self.embedding(input)
x_emb = fluid.layers.reshape(
x_emb, shape=[-1, self.num_steps, self.hidden_size])
if self.dropout is not None and self.dropout > 0.0:
x_emb = fluid.layers.dropout(
x_emb,
                dropout_prob=self.dropout,
dropout_implementation='upscale_in_train')
rnn_out, last_hidden, last_cell = self.simple_lstm_rnn(x_emb, init_h,
init_c)
rnn_out = fluid.layers.reshape(
rnn_out, shape=[-1, self.num_steps, self.hidden_size])
projection = fluid.layers.matmul(rnn_out, self.softmax_weight)
projection = fluid.layers.elementwise_add(projection, self.softmax_bias)
projection = fluid.layers.reshape(
projection, shape=[-1, self.vocab_size])
loss = fluid.layers.softmax_with_cross_entropy(
logits=projection, label=label, soft_label=False)
loss = fluid.layers.reshape(loss, shape=[-1, self.num_steps])
loss = fluid.layers.reduce_mean(loss, dim=[0])
loss = fluid.layers.reduce_sum(loss)
return loss, last_hidden, last_cell
class TestDygraphPtbRnn(unittest.TestCase):
def setUp(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [1.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
new_lr = 1.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr_arr),
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
for i in range(batch_num):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
if i == 0:
for param in ptb_model.parameters():
dy_param_init[param.name] = param.numpy()
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
if i == batch_num - 1:
for param in ptb_model.parameters():
dy_param_updated[param.name] = param.numpy()
# check optimizer
self.opti_dict = adam.state_dict()
self.base_opti = {}
for k, v in self.opti_dict.items():
self.base_opti[v.name] = v.numpy()
self.assertTrue(np.sum(np.abs(v.numpy())) != 0)
fluid.save_dygraph(self.opti_dict, "./test_dy")
self.state_dict = ptb_model.state_dict()
self.model_base = {}
for k, v in self.state_dict.items():
np_t = v.numpy()
self.model_base[k] = np_t
fluid.save_dygraph(self.state_dict, "./test_dy")
def testLoadAndSetVarBase(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [1.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
new_lr = 1.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr_arr),
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
for i in range(batch_num):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
if i == 0:
for param in ptb_model.parameters():
dy_param_init[param.name] = param.numpy()
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
if i == batch_num - 1:
for param in ptb_model.parameters():
dy_param_updated[param.name] = param.numpy()
# check optimizer
opti_dict = adam.state_dict()
# set to zero
for k, v in opti_dict.items():
np_t = v.numpy()
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
self.assertTrue(np.sum(np.abs(v.numpy())) == 0)
if isinstance(adam._learning_rate, LearningRateDecay):
adam._learning_rate.step_num = 0
para_state_dict, opti_state_dict = fluid.load_dygraph("./test_dy")
adam.set_dict(opti_state_dict)
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name]))
# check parameter
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
np_t = v.numpy()
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
ptb_model.set_dict(para_state_dict)
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testSetVariable(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [1.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
new_lr = 1.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr_arr),
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
for i in range(batch_num):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
if i == 0:
for param in ptb_model.parameters():
dy_param_init[param.name] = param.numpy()
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
if i == batch_num - 1:
for param in ptb_model.parameters():
dy_param_updated[param.name] = param.numpy()
# check optimizer
opti_dict = adam.state_dict()
# set to zero
for k, v in opti_dict.items():
np_t = v.numpy()
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
self.assertTrue(np.sum(np.abs(v.numpy())) == 0)
if isinstance(adam._learning_rate, LearningRateDecay):
adam._learning_rate.step_num = 0
adam.set_dict(self.opti_dict)
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name]))
# check parameter
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
np_t = v.numpy()
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
ptb_model.set_dict(self.state_dict)
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testSetNumpy(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [1.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
new_lr = 1.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr_arr),
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
for i in range(batch_num):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
if i == 0:
for param in ptb_model.parameters():
dy_param_init[param.name] = param.numpy()
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
if i == batch_num - 1:
for param in ptb_model.parameters():
dy_param_updated[param.name] = param.numpy()
# check optimizer
opti_dict = adam.state_dict()
np_opti_dict = {}
# set to zero
for k, v in opti_dict.items():
np_t = v.numpy()
np_opti_dict[v.name] = np_t
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
self.assertTrue(np.sum(np.abs(v.numpy())) == 0)
if isinstance(adam._learning_rate, LearningRateDecay):
adam._learning_rate.step_num = 0
adam.set_dict(np_opti_dict)
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name]))
# check parameter
state_dict = ptb_model.state_dict()
np_state_dict = {}
for k, v in state_dict.items():
np_t = v.numpy()
np_state_dict[k] = np_t
var = v.value().get_tensor()
var.set(np.zeros_like(np_t), place)
ptb_model.set_dict(np_state_dict)
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testSetVariableBeforeTrain(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=0.0,
beta1=0.8,
beta2=0.6,
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
adam.set_dict(self.opti_dict)
ptb_model.set_dict(self.state_dict)
for i in range(1):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
if k == "global_step":
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] + 1))
if k.find("beta1_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta1))
if k.find("beta2_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta2))
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testLoadAndSetVarBaseBeforeTrain(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [0.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
                # set lr to zero so parameters are not updated
new_lr = 0.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=0.0,
beta1=0.8,
beta2=0.6,
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
state_dict, opti_dict = fluid.load_dygraph("./test_dy")
adam.set_dict(opti_dict)
ptb_model.set_dict(state_dict)
for i in range(1):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
if k == "global_step":
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] + 1))
if k.find("beta1_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta1))
if k.find("beta2_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta2))
# check parameter
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testSetNumpyBeforeTrain(self):
seed = 90
hidden_size = 10
vocab_size = 1000
num_layers = 1
num_steps = 3
init_scale = 0.1
batch_size = 4
batch_num = 200
with fluid.dygraph.guard():
fluid.default_startup_program().random_seed = seed
fluid.default_main_program().random_seed = seed
# TODO: marsyang1993 Change seed to
ptb_model = PtbModel(
hidden_size=hidden_size,
vocab_size=vocab_size,
num_layers=num_layers,
num_steps=num_steps,
init_scale=init_scale)
bd = []
lr_arr = [0.0]
            # this is a fake lr decay strategy
for i in range(1, 10):
bd.append(100 * i)
                # set lr to 0.0 so parameters are not updated
new_lr = 0.0
lr_arr.append(new_lr)
place = fluid.CPUPlace() if not core.is_compiled_with_cuda(
) else fluid.CUDAPlace(0)
adam = Adam(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr_arr),
beta1=0.8,
beta2=0.6,
parameter_list=ptb_model.parameters())
dy_param_updated = dict()
dy_param_init = dict()
dy_loss = None
last_hidden = None
last_cell = None
np_opti_dict = {}
np_state_dict = {}
for k, v in self.opti_dict.items():
np_opti_dict[v.name] = v.numpy()
for k, v in self.state_dict.items():
np_state_dict[k] = v.numpy()
adam.set_dict(np_opti_dict)
ptb_model.set_dict(np_state_dict)
for i in range(1):
x_data = np.arange(12).reshape(4, 3).astype('int64')
y_data = np.arange(1, 13).reshape(4, 3).astype('int64')
y_data = y_data.reshape((-1, 1))
init_hidden_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
init_cell_data = np.zeros(
(num_layers, batch_size, hidden_size), dtype='float32')
x = to_variable(x_data)
y = to_variable(y_data)
init_hidden = to_variable(init_hidden_data)
init_cell = to_variable(init_cell_data)
dy_loss, last_hidden, last_cell = ptb_model(x, y, init_hidden,
init_cell)
dy_loss.backward()
adam.minimize(dy_loss)
ptb_model.clear_gradients()
opti_dict = adam.state_dict()
for k, v in opti_dict.items():
if k == "global_step":
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] + 1))
if k.find("beta1_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta1))
if k.find("beta2_pow_acc_0") > 0:
self.assertTrue(
np.array_equal(v.numpy(), self.base_opti[v.name] *
adam._beta2))
# check parameter
state_dict = ptb_model.state_dict()
for k, v in state_dict.items():
new_t = v.numpy()
base_t = self.model_base[k]
self.assertTrue(np.array_equal(new_t, base_t))
def testOnlyLoadParams(self):
with fluid.dygraph.guard():
emb = fluid.dygraph.Embedding([10, 10])
state_dict = emb.state_dict()
fluid.save_dygraph(state_dict, os.path.join('saved_dy', 'emb_dy'))
para_state_dict, opti_state_dict = fluid.load_dygraph(
os.path.join('saved_dy', 'emb_dy'))
            self.assertTrue(opti_state_dict is None)
if __name__ == '__main__':
unittest.main()
{}), '()\n', (20692, 20694), True, 'import paddle.fluid as fluid\n'), ((20749, 20767), 'paddle.fluid.CUDAPlace', 'fluid.CUDAPlace', (['(0)'], {}), '(0)\n', (20764, 20767), True, 'import paddle.fluid as fluid\n'), ((21384, 21448), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (21392, 21448), True, 'import numpy as np\n'), ((21503, 21567), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (21511, 21567), True, 'import numpy as np\n'), ((21609, 21628), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['x_data'], {}), '(x_data)\n', (21620, 21628), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((21649, 21668), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['y_data'], {}), '(y_data)\n', (21660, 21668), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((21699, 21728), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_hidden_data'], {}), '(init_hidden_data)\n', (21710, 21728), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((21757, 21784), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_cell_data'], {}), '(init_cell_data)\n', (21768, 21784), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((24014, 24045), 'paddle.fluid.default_startup_program', 'fluid.default_startup_program', ([], {}), '()\n', (24043, 24045), True, 'import paddle.fluid as fluid\n'), ((24077, 24105), 'paddle.fluid.default_main_program', 'fluid.default_main_program', ([], {}), '()\n', (24103, 24105), True, 'import paddle.fluid as fluid\n'), ((24423, 24439), 'paddle.fluid.CPUPlace', 'fluid.CPUPlace', ([], {}), '()\n', (24437, 24439), True, 'import paddle.fluid as fluid\n'), ((24494, 24512), 'paddle.fluid.CUDAPlace', 'fluid.CUDAPlace', (['(0)'], {}), '(0)\n', (24509, 24512), 
True, 'import paddle.fluid as fluid\n'), ((25190, 25254), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (25198, 25254), True, 'import numpy as np\n'), ((25309, 25373), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (25317, 25373), True, 'import numpy as np\n'), ((25415, 25434), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['x_data'], {}), '(x_data)\n', (25426, 25434), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((25455, 25474), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['y_data'], {}), '(y_data)\n', (25466, 25474), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((25505, 25534), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_hidden_data'], {}), '(init_hidden_data)\n', (25516, 25534), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((25563, 25590), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_cell_data'], {}), '(init_cell_data)\n', (25574, 25590), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((27052, 27083), 'paddle.fluid.default_startup_program', 'fluid.default_startup_program', ([], {}), '()\n', (27081, 27083), True, 'import paddle.fluid as fluid\n'), ((27115, 27143), 'paddle.fluid.default_main_program', 'fluid.default_main_program', ([], {}), '()\n', (27141, 27143), True, 'import paddle.fluid as fluid\n'), ((27744, 27760), 'paddle.fluid.CPUPlace', 'fluid.CPUPlace', ([], {}), '()\n', (27758, 27760), True, 'import paddle.fluid as fluid\n'), ((27815, 27833), 'paddle.fluid.CUDAPlace', 'fluid.CUDAPlace', (['(0)'], {}), '(0)\n', (27830, 27833), True, 'import paddle.fluid as fluid\n'), ((28569, 28633), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, 
batch_size, hidden_size), dtype='float32')\n", (28577, 28633), True, 'import numpy as np\n'), ((28688, 28752), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (28696, 28752), True, 'import numpy as np\n'), ((28794, 28813), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['x_data'], {}), '(x_data)\n', (28805, 28813), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((28834, 28853), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['y_data'], {}), '(y_data)\n', (28845, 28853), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((28884, 28913), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_hidden_data'], {}), '(init_hidden_data)\n', (28895, 28913), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((28942, 28969), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_cell_data'], {}), '(init_cell_data)\n', (28953, 28969), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((30453, 30484), 'paddle.fluid.default_startup_program', 'fluid.default_startup_program', ([], {}), '()\n', (30482, 30484), True, 'import paddle.fluid as fluid\n'), ((30516, 30544), 'paddle.fluid.default_main_program', 'fluid.default_main_program', ([], {}), '()\n', (30542, 30544), True, 'import paddle.fluid as fluid\n'), ((31145, 31161), 'paddle.fluid.CPUPlace', 'fluid.CPUPlace', ([], {}), '()\n', (31159, 31161), True, 'import paddle.fluid as fluid\n'), ((31216, 31234), 'paddle.fluid.CUDAPlace', 'fluid.CUDAPlace', (['(0)'], {}), '(0)\n', (31231, 31234), True, 'import paddle.fluid as fluid\n'), ((32238, 32302), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (32246, 32302), True, 'import numpy as np\n'), ((32357, 32421), 'numpy.zeros', 'np.zeros', (['(num_layers, batch_size, 
hidden_size)'], {'dtype': '"""float32"""'}), "((num_layers, batch_size, hidden_size), dtype='float32')\n", (32365, 32421), True, 'import numpy as np\n'), ((32463, 32482), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['x_data'], {}), '(x_data)\n', (32474, 32482), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((32503, 32522), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['y_data'], {}), '(y_data)\n', (32514, 32522), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((32553, 32582), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_hidden_data'], {}), '(init_hidden_data)\n', (32564, 32582), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((32611, 32638), 'paddle.fluid.dygraph.base.to_variable', 'to_variable', (['init_cell_data'], {}), '(init_cell_data)\n', (32622, 32638), False, 'from paddle.fluid.dygraph.base import to_variable\n'), ((34055, 34089), 'os.path.join', 'os.path.join', (['"""saved_dy"""', '"""emb_dy"""'], {}), "('saved_dy', 'emb_dy')\n", (34067, 34089), False, 'import os\n'), ((34175, 34209), 'os.path.join', 'os.path.join', (['"""saved_dy"""', '"""emb_dy"""'], {}), "('saved_dy', 'emb_dy')\n", (34187, 34209), False, 'import os\n'), ((2139, 2226), 'paddle.fluid.initializer.UniformInitializer', 'fluid.initializer.UniformInitializer', ([], {'low': '(-self._init_scale)', 'high': 'self._init_scale'}), '(low=-self._init_scale, high=self.\n _init_scale)\n', (2175, 2226), True, 'import paddle.fluid as fluid\n'), ((2663, 2694), 'paddle.fluid.initializer.Constant', 'fluid.initializer.Constant', (['(0.0)'], {}), '(0.0)\n', (2689, 2694), True, 'import paddle.fluid as fluid\n'), ((4458, 4478), 'paddle.fluid.layers.tanh', 'fluid.layers.tanh', (['c'], {}), '(c)\n', (4475, 4478), True, 'import paddle.fluid as fluid\n'), ((4481, 4504), 'paddle.fluid.layers.sigmoid', 'fluid.layers.sigmoid', (['o'], {}), '(o)\n', (4501, 4504), True, 'import paddle.fluid as fluid\n'), ((4722, 4830), 
'paddle.fluid.layers.dropout', 'fluid.layers.dropout', (['self._input'], {'dropout_prob': 'self._dropout', 'dropout_implementation': '"""upscale_in_train"""'}), "(self._input, dropout_prob=self._dropout,\n dropout_implementation='upscale_in_train')\n", (4742, 4830), True, 'import paddle.fluid as fluid\n'), ((9772, 9800), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (9798, 9800), True, 'import paddle.fluid.core as core\n'), ((9893, 9951), 'paddle.fluid.layers.piecewise_decay', 'fluid.layers.piecewise_decay', ([], {'boundaries': 'bd', 'values': 'lr_arr'}), '(boundaries=bd, values=lr_arr)\n', (9921, 9951), True, 'import paddle.fluid as fluid\n'), ((12981, 13009), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (13007, 13009), True, 'import paddle.fluid.core as core\n'), ((13102, 13160), 'paddle.fluid.layers.piecewise_decay', 'fluid.layers.piecewise_decay', ([], {'boundaries': 'bd', 'values': 'lr_arr'}), '(boundaries=bd, values=lr_arr)\n', (13130, 13160), True, 'import paddle.fluid as fluid\n'), ((14891, 14910), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (14904, 14910), True, 'import numpy as np\n'), ((15640, 15659), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (15653, 15659), True, 'import numpy as np\n'), ((15923, 15952), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (15937, 15952), True, 'import numpy as np\n'), ((16883, 16911), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (16909, 16911), True, 'import paddle.fluid.core as core\n'), ((17004, 17062), 'paddle.fluid.layers.piecewise_decay', 'fluid.layers.piecewise_decay', ([], {'boundaries': 'bd', 'values': 'lr_arr'}), '(boundaries=bd, values=lr_arr)\n', (17032, 17062), True, 'import paddle.fluid as fluid\n'), ((18793, 18812), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (18806, 
18812), True, 'import numpy as np\n'), ((19462, 19481), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (19475, 19481), True, 'import numpy as np\n'), ((19745, 19774), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (19759, 19774), True, 'import numpy as np\n'), ((20702, 20730), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (20728, 20730), True, 'import paddle.fluid.core as core\n'), ((20823, 20881), 'paddle.fluid.layers.piecewise_decay', 'fluid.layers.piecewise_decay', ([], {'boundaries': 'bd', 'values': 'lr_arr'}), '(boundaries=bd, values=lr_arr)\n', (20851, 20881), True, 'import paddle.fluid as fluid\n'), ((22686, 22705), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (22699, 22705), True, 'import numpy as np\n'), ((23424, 23443), 'numpy.zeros_like', 'np.zeros_like', (['np_t'], {}), '(np_t)\n', (23437, 23443), True, 'import numpy as np\n'), ((23705, 23734), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (23719, 23734), True, 'import numpy as np\n'), ((24447, 24475), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (24473, 24475), True, 'import paddle.fluid.core as core\n'), ((26737, 26766), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (26751, 26766), True, 'import numpy as np\n'), ((27768, 27796), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (27794, 27796), True, 'import paddle.fluid.core as core\n'), ((30147, 30176), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (30161, 30176), True, 'import numpy as np\n'), ((31169, 31197), 'paddle.fluid.core.is_compiled_with_cuda', 'core.is_compiled_with_cuda', ([], {}), '()\n', (31195, 31197), True, 'import paddle.fluid.core as core\n'), ((31290, 31348), 'paddle.fluid.layers.piecewise_decay', 
'fluid.layers.piecewise_decay', ([], {'boundaries': 'bd', 'values': 'lr_arr'}), '(boundaries=bd, values=lr_arr)\n', (31318, 31348), True, 'import paddle.fluid as fluid\n'), ((33816, 33845), 'numpy.array_equal', 'np.array_equal', (['new_t', 'base_t'], {}), '(new_t, base_t)\n', (33830, 33845), True, 'import numpy as np\n'), ((4344, 4367), 'paddle.fluid.layers.sigmoid', 'fluid.layers.sigmoid', (['f'], {}), '(f)\n', (4364, 4367), True, 'import paddle.fluid as fluid\n'), ((4370, 4393), 'paddle.fluid.layers.sigmoid', 'fluid.layers.sigmoid', (['i'], {}), '(i)\n', (4390, 4393), True, 'import paddle.fluid as fluid\n'), ((4417, 4437), 'paddle.fluid.layers.tanh', 'fluid.layers.tanh', (['j'], {}), '(j)\n', (4434, 4437), True, 'import paddle.fluid as fluid\n'), ((6634, 6704), 'paddle.fluid.initializer.UniformInitializer', 'fluid.initializer.UniformInitializer', ([], {'low': '(-init_scale)', 'high': 'init_scale'}), '(low=-init_scale, high=init_scale)\n', (6670, 6704), True, 'import paddle.fluid as fluid\n'), ((1890, 1977), 'paddle.fluid.initializer.UniformInitializer', 'fluid.initializer.UniformInitializer', ([], {'low': '(-self._init_scale)', 'high': 'self._init_scale'}), '(low=-self._init_scale, high=self.\n _init_scale)\n', (1926, 1977), True, 'import paddle.fluid as fluid\n'), ((2437, 2524), 'paddle.fluid.initializer.UniformInitializer', 'fluid.initializer.UniformInitializer', ([], {'low': '(-self._init_scale)', 'high': 'self._init_scale'}), '(low=-self._init_scale, high=self.\n _init_scale)\n', (2473, 2524), True, 'import paddle.fluid as fluid\n'), ((10254, 10267), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (10263, 10267), True, 'import numpy as np\n'), ((10323, 10339), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (10332, 10339), True, 'import numpy as np\n'), ((13463, 13476), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (13472, 13476), True, 'import numpy as np\n'), ((13532, 13548), 'numpy.arange', 'np.arange', (['(1)', '(13)'], 
{}), '(1, 13)\n', (13541, 13548), True, 'import numpy as np\n'), ((17365, 17378), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (17374, 17378), True, 'import numpy as np\n'), ((17434, 17450), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (17443, 17450), True, 'import numpy as np\n'), ((21184, 21197), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (21193, 21197), True, 'import numpy as np\n'), ((21253, 21269), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (21262, 21269), True, 'import numpy as np\n'), ((24990, 25003), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (24999, 25003), True, 'import numpy as np\n'), ((25059, 25075), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (25068, 25075), True, 'import numpy as np\n'), ((28369, 28382), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (28378, 28382), True, 'import numpy as np\n'), ((28438, 28454), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (28447, 28454), True, 'import numpy as np\n'), ((32038, 32051), 'numpy.arange', 'np.arange', (['(12)'], {}), '(12)\n', (32047, 32051), True, 'import numpy as np\n'), ((32107, 32123), 'numpy.arange', 'np.arange', (['(1)', '(13)'], {}), '(1, 13)\n', (32116, 32123), True, 'import numpy as np\n')] |
import sys
from contextlib import contextmanager
from os.path import dirname, join
from qtpy.QtGui import QPixmap
from qtpy.QtWidgets import QApplication, QSplashScreen
@contextmanager
def gui_qt(*, startup_logo=False):
"""Start a Qt event loop in which to run the application.
Parameters
----------
startup_logo : bool
Show a splash screen with the napari logo during startup.
Notes
-----
This context manager is not needed if running napari within an interactive
IPython session. In this case, use the ``%gui qt`` magic command, or start
IPython with the Qt GUI event loop enabled by default by using
``ipython --gui=qt``.
"""
splash_widget = None
app = QApplication.instance()
if not app:
# if this is the first time the Qt app is being instantiated, we set
# the name, so that we know whether to raise_ in Window.show()
app = QApplication(sys.argv)
app.setApplicationName('napari')
if startup_logo:
logopath = join(dirname(__file__), '..', 'resources', 'logo.png')
splash_widget = QSplashScreen(QPixmap(logopath).scaled(400, 400))
splash_widget.show()
yield app
# if the application already existed before this function was called,
# there's no need to start it again. By avoiding unnecessary calls to
# ``app.exec_``, we avoid blocking.
if app.applicationName() == 'napari':
if splash_widget and startup_logo:
splash_widget.close()
app.exec_()
| [
"os.path.dirname",
"qtpy.QtWidgets.QApplication.instance",
"qtpy.QtWidgets.QApplication",
"qtpy.QtGui.QPixmap"
] | [((721, 744), 'qtpy.QtWidgets.QApplication.instance', 'QApplication.instance', ([], {}), '()\n', (742, 744), False, 'from qtpy.QtWidgets import QApplication, QSplashScreen\n'), ((923, 945), 'qtpy.QtWidgets.QApplication', 'QApplication', (['sys.argv'], {}), '(sys.argv)\n', (935, 945), False, 'from qtpy.QtWidgets import QApplication, QSplashScreen\n'), ((1040, 1057), 'os.path.dirname', 'dirname', (['__file__'], {}), '(__file__)\n', (1047, 1057), False, 'from os.path import dirname, join\n'), ((1132, 1149), 'qtpy.QtGui.QPixmap', 'QPixmap', (['logopath'], {}), '(logopath)\n', (1139, 1149), False, 'from qtpy.QtGui import QPixmap\n')] |
#encoding: utf-8
"""Tornado handlers for the terminal emulator."""
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
from tornado import web
import terminado
from notebook._tz import utcnow
from ..base.handlers import IPythonHandler
from ..base.zmqhandlers import WebSocketMixin
class TerminalHandler(IPythonHandler):
"""Render the terminal interface."""
@web.authenticated
def get(self, term_name):
self.write(self.render_template('terminal.html',
ws_path="terminals/websocket/%s" % term_name))
class TermSocket(WebSocketMixin, IPythonHandler, terminado.TermSocket):
def origin_check(self):
"""Terminado adds redundant origin_check
Tornado already calls check_origin, so don't do anything here.
"""
return True
def get(self, *args, **kwargs):
if not self.get_current_user():
raise web.HTTPError(403)
return super(TermSocket, self).get(*args, **kwargs)
def on_message(self, message):
super(TermSocket, self).on_message(message)
self.application.settings['terminal_last_activity'] = utcnow()
def write_message(self, message, binary=False):
super(TermSocket, self).write_message(message, binary=binary)
self.application.settings['terminal_last_activity'] = utcnow()
| [
"notebook._tz.utcnow",
"tornado.web.HTTPError"
] | [((1178, 1186), 'notebook._tz.utcnow', 'utcnow', ([], {}), '()\n', (1184, 1186), False, 'from notebook._tz import utcnow\n'), ((1372, 1380), 'notebook._tz.utcnow', 'utcnow', ([], {}), '()\n', (1378, 1380), False, 'from notebook._tz import utcnow\n'), ((949, 967), 'tornado.web.HTTPError', 'web.HTTPError', (['(403)'], {}), '(403)\n', (962, 967), False, 'from tornado import web\n')] |
# SPDX-FileCopyrightText: 2022 <NAME>
# SPDX-License-Identifier: MIT
import asyncio
import time
import board
import neopixel
import keypad
import supervisor
from adafruit_ht16k33.segments import BigSeg7x4, Seg14x4
from digitalio import DigitalInOut, Direction
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
YELLOW = (255, 255, 0)
PURPLE = (255, 0, 255)
ORANGE = (255, 70, 0)
WHITE = (255, 255, 255)
PINK = (255, 90, 90)
CYAN = (0, 255, 255)
GREY = (50, 50, 50)
BLACK = (0, 0, 0)
PLAYER_COLORS = (
RED,
GREEN,
BLUE,
YELLOW,
PURPLE,
ORANGE,
PINK,
CYAN,
GREY,
WHITE,
BLACK,
)
# Initialize components
i2c = board.I2C()
large_segment = BigSeg7x4(i2c)
large_segment.brightness = 0.1
small_segment = Seg14x4(i2c, address=0x71)
small_segment.brightness = 0.1
large_button_light = DigitalInOut(board.D25)
large_button_light.direction = Direction.OUTPUT
large_button_light.value = True
large_button_pin = board.D24
small_button_light = DigitalInOut(board.D12)
small_button_light.direction = Direction.OUTPUT
small_button_light.value = True
small_button_pin = board.D11
pixels = neopixel.NeoPixel(
board.D5, 8, brightness=0.1, auto_write=False, pixel_order=(1, 0, 2, 3)
)
pixels.fill(0x000000)
pixels.show()
def show_player(player, marquee):
"""Helper module change all elements when the player changes"""
pixels.fill(player.color)
pixels.show()
marquee.message("{} PLAYER ".format(player.number), 0.3, True)
class TimerTracker:
"""Timer that updates elapsed time passed"""
def __init__(self):
self.time = 0.0
self.paused = True
self.start_time = 0.0
self.pre_pause_time = 0.0
def pause(self):
"""Pause the timer"""
self.pre_pause_time = self.time
self.paused = True
def resume(self):
"""Resume the timer"""
self.start_time = time.monotonic()
self.paused = False
def update(self):
"""Update the timer. Must be called regularly."""
if self.paused is True:
return
self.time = time.monotonic() - self.start_time + self.pre_pause_time
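`TimerTracker` above implements pause/resume by snapshotting `time.monotonic()` on resume and folding the already-accumulated time back in on the next update. A minimal standalone sketch of the same monotonic-clock pattern (`PausableTimer` is an illustrative name, not part of this project):

```python
import time

class PausableTimer:
    """Accumulates elapsed time across pause/resume cycles, using the same
    monotonic-clock snapshot pattern as TimerTracker."""
    def __init__(self):
        self.elapsed = 0.0
        self._resumed_at = None  # monotonic timestamp of last resume, or None

    def resume(self):
        if self._resumed_at is None:
            self._resumed_at = time.monotonic()

    def pause(self):
        if self._resumed_at is not None:
            # Fold the running interval into the accumulated total.
            self.elapsed += time.monotonic() - self._resumed_at
            self._resumed_at = None

t = PausableTimer()
t.resume(); time.sleep(0.05); t.pause()
time.sleep(0.05)  # paused time must not count
t.resume(); time.sleep(0.05); t.pause()
assert 0.09 <= t.elapsed < 1.0
```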
class Player:
"""Holds all information about a game player"""
# pylint: disable=too-few-public-methods
def __init__(self, number=1, color=(0, 0, 0)):
self.number = number
self.color = color
self.timer = TimerTracker()
class Game:
"""Holds all information about a game"""
def __init__(self):
self.players = []
self._current_player = 0
self.paused = True
self.game_over = False
@property
def current_player(self):
"""Return current player object"""
return self.players[self._current_player]
def next_player(self):
"""Advance to the next player"""
self._current_player = (self._current_player + 1) % len(self.players)
def pause(self):
"""Pause the game"""
self.current_player.timer.pause()
self.paused = True
def resume(self):
"""Resume the game"""
self.current_player.timer.resume()
self.paused = False
class MarqueeMessage:
"""Message to be displayed on the marquee display"""
# pylint: disable=too-few-public-methods
def __init__(self, text, speed):
self.text = text
self.speed = speed
self.scroll = True
def message(self, text=None, speed=0.3, scroll=True):
"""Set the message to show, speed and if it scrolls."""
self.text = text
self.speed = speed
self.scroll = scroll
async def marquee_routine(message):
"""Display the current message on the small segment display."""
position = 0
current_text = ""
while True:
if message.text is not None:
if message.text is not current_text:
current_text = message.text
if message.scroll is True:
position = 0
small_segment.print(" ")
else: # scroll is False
small_segment.print(message.text)
else:
if message.scroll is True:
small_segment.scroll(1)
small_segment[3] = current_text[position]
position += 1
if position >= len(current_text):
position = 0
small_segment.show()
await asyncio.sleep(message.speed)
else:
await asyncio.sleep(0)
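The scrolling in `marquee_routine` above shifts the 4-character display left each tick and feeds in one character of the message at a time, wrapping around when the message ends. A pure-Python sketch of that sliding window with the hardware calls removed (`marquee_windows` is a hypothetical helper, not part of the project):

```python
def marquee_windows(text, width=4, frames=6):
    """Return successive display states for a left-scrolling marquee,
    starting from a blank display, one character fed in per frame."""
    display = [" "] * width
    pos = 0
    out = []
    for _ in range(frames):
        display = display[1:] + [text[pos]]  # scroll left, append next char
        pos = (pos + 1) % len(text)          # wrap around the message
        out.append("".join(display))
    return out

# marquee_windows("HI! ", 4, 6) steps the text across the display:
# ['   H', '  HI', ' HI!', 'HI! ', 'I! H', '! HI']
```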
async def setup_routine(game, marquee):
"""Setup a new game and players for the game."""
# pylint: disable=too-many-branches,too-many-statements
setup_phase = "PLAYERS"
color_phase_player = 1
current_color = 0
large_button_light.value = False
large_segment.print(" 1")
large_segment.colon = False
marquee.message("HOW MANY PLAYERS ", 0.3, True)
number_players = 1
with keypad.Keys(
(small_button_pin, large_button_pin), value_when_pressed=False, pull=True
) as keys:
while setup_phase != "DONE":
key_event = keys.events.get()
if key_event and key_event.pressed:
key_number = key_event.key_number
if key_number == 0:
small_button_light.value = False
if setup_phase == "PLAYERS":
setup_phase = "COLORS"
pixels.fill(PLAYER_COLORS[current_color])
pixels.show()
marquee.message(
"PLAYER {} COLOR ".format(color_phase_player), 0.3, True
)
large_segment.print(" {}".format(color_phase_player))
elif setup_phase == "COLORS":
player = Player(
color_phase_player, PLAYER_COLORS[current_color]
)
game.players.append(player)
color_phase_player += 1
if color_phase_player > number_players:
setup_phase = "DONE"
marquee.message(None)
else:
current_color = 0
pixels.fill(PLAYER_COLORS[current_color])
pixels.show()
marquee.message(
"PLAYER {} COLOR ".format(color_phase_player),
0.3,
True,
)
large_segment.print(" {}".format(color_phase_player))
if key_number == 1:
large_button_light.value = True
if setup_phase == "PLAYERS":
number_players += 1
if number_players == 10:
number_players = 1
large_segment.print(" {}".format(number_players))
elif setup_phase == "COLORS":
current_color += 1
if current_color == len(PLAYER_COLORS):
current_color = 0
pixels.fill(PLAYER_COLORS[current_color])
pixels.show()
if key_event and key_event.released:
key_number = key_event.key_number
if key_number == 0:
small_button_light.value = True
if key_number == 1:
large_button_light.value = False
await asyncio.sleep(0)
async def update_timer(game):
"""Regularlly update the game timer."""
while True:
game.current_player.timer.update()
await asyncio.sleep(0)
async def show_timer(game):
"""Show the current player time on the large LED segment display."""
large_segment.colon = True
while True:
current_time = game.current_player.timer.time
minutes = int(current_time / 60)
seconds = int(current_time % 60)
time_string = "{:02d}".format(minutes) + "{:02d}".format(seconds)
large_segment.print(time_string)
await asyncio.sleep(0.05)
async def monitor_buttons(game, marquee):
"""Check if either button is pressed to run the timer project."""
# pylint: disable=too-many-branches
large_button_light.value = False
small_button_press_time = 0
with keypad.Keys(
(small_button_pin, large_button_pin), value_when_pressed=False, pull=True
) as keys:
while True:
key_event = keys.events.get()
if key_event and key_event.pressed:
key_number = key_event.key_number
if key_number == 0: # small button
small_button_press_time = key_event.timestamp
small_button_light.value = False
if game.game_over is False:
if game.paused is True:
game.resume()
show_player(game.current_player, marquee)
else:
game.pause()
marquee.message("PAUSED ", 0.3, True)
if key_number == 1:
large_button_light.value = True
if game.game_over is True:
game.next_player()
show_player(game.current_player, marquee)
if game.paused is False:
game.current_player.timer.pause()
game.next_player()
show_player(game.current_player, marquee)
game.current_player.timer.resume()
if key_event and key_event.released:
key_number = key_event.key_number
if key_number == 0:
small_button_light.value = True
if (
key_event.timestamp - small_button_press_time
) > 4000: # 4 seconds
if game.game_over is False:
game.game_over = True
game.pause()
marquee.message("GAME OVER ", 0.3, True)
else:
supervisor.reload()
if key_number == 1:
large_button_light.value = False
await asyncio.sleep(0)
async def main():
"""Setup and control the overall project."""
game = Game()
marquee = MarqueeMessage(None, 1.0)
asyncio.create_task(marquee_routine(marquee))
setup_task = asyncio.create_task(setup_routine(game, marquee))
await asyncio.gather(setup_task)
pixels.fill(0)
pixels.show()
marquee.message("RDY ", 0, False)
time.sleep(0.5)
buttons_task = asyncio.create_task(monitor_buttons(game, marquee))
timer_task = asyncio.create_task(update_timer(game))
show_timer_task = asyncio.create_task(show_timer(game))
await asyncio.gather(buttons_task, timer_task, show_timer_task)
# We never get here
asyncio.run(main()) | [
"board.I2C",
"asyncio.sleep",
"time.monotonic",
"time.sleep",
"neopixel.NeoPixel",
"adafruit_ht16k33.segments.Seg14x4",
"supervisor.reload",
"asyncio.gather",
"adafruit_ht16k33.segments.BigSeg7x4",
"digitalio.DigitalInOut",
"keypad.Keys"
] | [((700, 711), 'board.I2C', 'board.I2C', ([], {}), '()\n', (709, 711), False, 'import board\n'), ((731, 745), 'adafruit_ht16k33.segments.BigSeg7x4', 'BigSeg7x4', (['i2c'], {}), '(i2c)\n', (740, 745), False, 'from adafruit_ht16k33.segments import BigSeg7x4, Seg14x4\n'), ((797, 822), 'adafruit_ht16k33.segments.Seg14x4', 'Seg14x4', (['i2c'], {'address': '(113)'}), '(i2c, address=113)\n', (804, 822), False, 'from adafruit_ht16k33.segments import BigSeg7x4, Seg14x4\n'), ((880, 903), 'digitalio.DigitalInOut', 'DigitalInOut', (['board.D25'], {}), '(board.D25)\n', (892, 903), False, 'from digitalio import DigitalInOut, Direction\n'), ((1040, 1063), 'digitalio.DigitalInOut', 'DigitalInOut', (['board.D12'], {}), '(board.D12)\n', (1052, 1063), False, 'from digitalio import DigitalInOut, Direction\n'), ((1188, 1282), 'neopixel.NeoPixel', 'neopixel.NeoPixel', (['board.D5', '(8)'], {'brightness': '(0.1)', 'auto_write': '(False)', 'pixel_order': '(1, 0, 2, 3)'}), '(board.D5, 8, brightness=0.1, auto_write=False,\n pixel_order=(1, 0, 2, 3))\n', (1205, 1282), False, 'import neopixel\n'), ((11261, 11276), 'time.sleep', 'time.sleep', (['(0.5)'], {}), '(0.5)\n', (11271, 11276), False, 'import time\n'), ((1978, 1994), 'time.monotonic', 'time.monotonic', ([], {}), '()\n', (1992, 1994), False, 'import time\n'), ((5126, 5216), 'keypad.Keys', 'keypad.Keys', (['(small_button_pin, large_button_pin)'], {'value_when_pressed': '(False)', 'pull': '(True)'}), '((small_button_pin, large_button_pin), value_when_pressed=False,\n pull=True)\n', (5137, 5216), False, 'import keypad\n'), ((8792, 8882), 'keypad.Keys', 'keypad.Keys', (['(small_button_pin, large_button_pin)'], {'value_when_pressed': '(False)', 'pull': '(True)'}), '((small_button_pin, large_button_pin), value_when_pressed=False,\n pull=True)\n', (8803, 8882), False, 'import keypad\n'), ((11147, 11173), 'asyncio.gather', 'asyncio.gather', (['setup_task'], {}), '(setup_task)\n', (11161, 11173), False, 'import asyncio\n'), ((11483, 11540), 
'asyncio.gather', 'asyncio.gather', (['buttons_task', 'timer_task', 'show_timer_task'], {}), '(buttons_task, timer_task, show_timer_task)\n', (11497, 11540), False, 'import asyncio\n'), ((8084, 8100), 'asyncio.sleep', 'asyncio.sleep', (['(0)'], {}), '(0)\n', (8097, 8100), False, 'import asyncio\n'), ((8532, 8551), 'asyncio.sleep', 'asyncio.sleep', (['(0.05)'], {}), '(0.05)\n', (8545, 8551), False, 'import asyncio\n'), ((2182, 2198), 'time.monotonic', 'time.monotonic', ([], {}), '()\n', (2196, 2198), False, 'import time\n'), ((4609, 4637), 'asyncio.sleep', 'asyncio.sleep', (['message.speed'], {}), '(message.speed)\n', (4622, 4637), False, 'import asyncio\n'), ((4672, 4688), 'asyncio.sleep', 'asyncio.sleep', (['(0)'], {}), '(0)\n', (4685, 4688), False, 'import asyncio\n'), ((7911, 7927), 'asyncio.sleep', 'asyncio.sleep', (['(0)'], {}), '(0)\n', (7924, 7927), False, 'import asyncio\n'), ((10863, 10879), 'asyncio.sleep', 'asyncio.sleep', (['(0)'], {}), '(0)\n', (10876, 10879), False, 'import asyncio\n'), ((10731, 10750), 'supervisor.reload', 'supervisor.reload', ([], {}), '()\n', (10748, 10750), False, 'import supervisor\n')] |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import itertools
from copy import copy
from typing import Any
from typing import Optional
from typing import Tuple
from typing import Type
from ._compat import PY2
from ._compat import unicode
from .exceptions import ParseError
from .exceptions import UnexpectedCharError
from .exceptions import UnexpectedEofError
from .toml_char import TOMLChar
class _State:
def __init__(
self, source, save_marker=False, restore=False
): # type: (_Source, Optional[bool], Optional[bool]) -> None
self._source = source
self._save_marker = save_marker
self.restore = restore
def __enter__(self): # type: () -> None
# Entering this context manager - save the state
if PY2:
            # Python 2.7 does not allow directly copying
# an iterator, so we have to make tees of the original
# chars iterator.
self._source._chars, self._chars = itertools.tee(self._source._chars)
else:
self._chars = copy(self._source._chars)
self._idx = self._source._idx
self._current = self._source._current
self._marker = self._source._marker
return self
def __exit__(self, exception_type, exception_val, trace):
# Exiting this context manager - restore the prior state
if self.restore or exception_type:
self._source._chars = self._chars
self._source._idx = self._idx
self._source._current = self._current
if self._save_marker:
self._source._marker = self._marker
class _StateHandler:
"""
State preserver for the Parser.
"""
def __init__(self, source): # type: (Source) -> None
self._source = source
self._states = []
def __call__(self, *args, **kwargs):
return _State(self._source, *args, **kwargs)
def __enter__(self): # type: () -> None
state = self()
self._states.append(state)
return state.__enter__()
def __exit__(self, exception_type, exception_val, trace):
state = self._states.pop()
return state.__exit__(exception_type, exception_val, trace)
class Source(unicode):
EOF = TOMLChar("\0")
def __init__(self, _): # type: (unicode) -> None
super(Source, self).__init__()
# Collection of TOMLChars
self._chars = iter([(i, TOMLChar(c)) for i, c in enumerate(self)])
self._idx = 0
self._marker = 0
self._current = TOMLChar("")
self._state = _StateHandler(self)
self.inc()
def reset(self):
# initialize both idx and current
self.inc()
# reset marker
self.mark()
@property
def state(self): # type: () -> _StateHandler
return self._state
@property
def idx(self): # type: () -> int
return self._idx
@property
def current(self): # type: () -> TOMLChar
return self._current
@property
def marker(self): # type: () -> int
return self._marker
def extract(self): # type: () -> unicode
"""
Extracts the value between marker and index
"""
return self[self._marker : self._idx]
def inc(self, exception=None): # type: (Optional[Type[ParseError]]) -> bool
"""
Increments the parser if the end of the input has not been reached.
Returns whether or not it was able to advance.
"""
try:
self._idx, self._current = next(self._chars)
return True
except StopIteration:
self._idx = len(self)
self._current = self.EOF
if exception:
raise self.parse_error(exception)
return False
def inc_n(self, n, exception=None): # type: (int, Exception) -> bool
"""
Increments the parser by n characters
if the end of the input has not been reached.
"""
for _ in range(n):
if not self.inc(exception=exception):
return False
return True
def consume(self, chars, min=0, max=-1):
"""
        Consume chars until the min/max constraints are satisfied.
"""
while self.current in chars and max != 0:
min -= 1
max -= 1
if not self.inc():
break
# failed to consume minimum number of characters
if min > 0:
            raise self.parse_error(UnexpectedCharError)
def end(self): # type: () -> bool
"""
Returns True if the parser has reached the end of the input.
"""
return self._current is self.EOF
def mark(self): # type: () -> None
"""
Sets the marker to the index's current position
"""
self._marker = self._idx
def parse_error(
self, exception=ParseError, *args
): # type: (Type[ParseError], Any) -> ParseError
"""
Creates a generic "parse error" at the current position.
"""
line, col = self._to_linecol()
return exception(line, col, *args)
def _to_linecol(self): # type: () -> Tuple[int, int]
cur = 0
for i, line in enumerate(self.splitlines()):
if cur + len(line) + 1 > self.idx:
return (i + 1, self.idx - cur)
cur += len(line) + 1
return len(self.splitlines()), 0
| [
"copy.copy",
"itertools.tee"
] | [((990, 1024), 'itertools.tee', 'itertools.tee', (['self._source._chars'], {}), '(self._source._chars)\n', (1003, 1024), False, 'import itertools\n'), ((1065, 1090), 'copy.copy', 'copy', (['self._source._chars'], {}), '(self._source._chars)\n', (1069, 1090), False, 'from copy import copy\n')] |
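The `_State` context manager in the parser source above snapshots the character iterator, index, and current char on entry, and rolls them back on exception or when `restore=True`. A stdlib-only sketch of the same save/restore pattern (the `Cursor`/`SavedState` names are illustrative, not part of the library):

```python
from copy import copy


class Cursor:
    """Iterator-backed cursor over a string, like the parser's Source."""

    def __init__(self, text):
        # A list iterator can be duplicated with copy() on Python 3,
        # which is exactly what _State.__enter__ above relies on.
        self._chars = iter(list(enumerate(text)))
        self.idx, self.current = next(self._chars)

    def inc(self):
        try:
            self.idx, self.current = next(self._chars)
            return True
        except StopIteration:
            return False


class SavedState:
    """Snapshot the cursor on entry; roll back on error or when restore=True."""

    def __init__(self, cursor, restore=False):
        self._cursor = cursor
        self.restore = restore

    def __enter__(self):
        self._chars = copy(self._cursor._chars)
        self._idx = self._cursor.idx
        self._current = self._cursor.current
        return self

    def __exit__(self, exc_type, exc_val, trace):
        if self.restore or exc_type:
            self._cursor._chars = self._chars
            self._cursor.idx = self._idx
            self._cursor.current = self._current
        return False  # never swallow the exception itself


cur = Cursor("abc")
with SavedState(cur, restore=True):
    cur.inc()  # moves to 'b' inside the context
# on exit the cursor is rolled back to 'a'
```

A parser built on this can speculatively consume characters and backtrack cleanly when the attempt fails.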
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import image_util
from paddle.utils.image_util import *
import random
from PIL import Image
import numpy as np
import xml.etree.ElementTree
import os
class Settings(object):
def __init__(self, data_dir, label_file, resize_h, resize_w, mean_value,
apply_distort, apply_expand):
self._data_dir = data_dir
self._label_list = []
label_fpath = os.path.join(data_dir, label_file)
for line in open(label_fpath):
self._label_list.append(line.strip())
self._apply_distort = apply_distort
self._apply_expand = apply_expand
self._resize_height = resize_h
self._resize_width = resize_w
self._img_mean = np.array(mean_value)[:, np.newaxis, np.newaxis].astype(
'float32')
self._expand_prob = 0.5
self._expand_max_ratio = 4
self._hue_prob = 0.5
self._hue_delta = 18
self._contrast_prob = 0.5
self._contrast_delta = 0.5
self._saturation_prob = 0.5
self._saturation_delta = 0.5
self._brightness_prob = 0.5
self._brightness_delta = 0.125
@property
    def apply_expand(self):
return self._apply_expand
@property
def apply_distort(self):
return self._apply_distort
@property
def data_dir(self):
return self._data_dir
@property
def label_list(self):
return self._label_list
@property
def resize_h(self):
return self._resize_height
@property
def resize_w(self):
return self._resize_width
@property
def img_mean(self):
return self._img_mean
def _reader_creator(settings, file_list, mode, shuffle):
def reader():
with open(file_list) as flist:
lines = [line.strip() for line in flist]
if shuffle:
random.shuffle(lines)
for line in lines:
if mode == 'train' or mode == 'test':
img_path, label_path = line.split()
img_path = os.path.join(settings.data_dir, img_path)
label_path = os.path.join(settings.data_dir, label_path)
elif mode == 'infer':
img_path = os.path.join(settings.data_dir, line)
img = Image.open(img_path)
img_width, img_height = img.size
# layout: label | xmin | ymin | xmax | ymax | difficult
if mode == 'train' or mode == 'test':
bbox_labels = []
root = xml.etree.ElementTree.parse(label_path).getroot()
for object in root.findall('object'):
bbox_sample = []
# start from 1
bbox_sample.append(
float(
settings.label_list.index(
object.find('name').text)))
bbox = object.find('bndbox')
difficult = float(object.find('difficult').text)
bbox_sample.append(
float(bbox.find('xmin').text) / img_width)
bbox_sample.append(
float(bbox.find('ymin').text) / img_height)
bbox_sample.append(
float(bbox.find('xmax').text) / img_width)
bbox_sample.append(
float(bbox.find('ymax').text) / img_height)
bbox_sample.append(difficult)
bbox_labels.append(bbox_sample)
sample_labels = bbox_labels
if mode == 'train':
if settings._apply_distort:
img = image_util.distort_image(img, settings)
if settings._apply_expand:
img, bbox_labels = image_util.expand_image(
img, bbox_labels, img_width, img_height,
settings)
batch_sampler = []
# hard-code here
batch_sampler.append(
image_util.sampler(1, 1, 1.0, 1.0, 1.0, 1.0, 0.0,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.1,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.3,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.5,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.7,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.9,
0.0))
batch_sampler.append(
image_util.sampler(1, 50, 0.3, 1.0, 0.5, 2.0, 0.0,
1.0))
""" random crop """
sampled_bbox = image_util.generate_batch_samples(
batch_sampler, bbox_labels, img_width, img_height)
img = np.array(img)
if len(sampled_bbox) > 0:
idx = int(random.uniform(0, len(sampled_bbox)))
img, sample_labels = image_util.crop_image(
img, bbox_labels, sampled_bbox[idx], img_width,
img_height)
img = Image.fromarray(img)
img = img.resize((settings.resize_w, settings.resize_h),
Image.ANTIALIAS)
img = np.array(img)
if mode == 'train':
mirror = int(random.uniform(0, 2))
if mirror == 1:
img = img[:, ::-1, :]
for i in xrange(len(sample_labels)):
tmp = sample_labels[i][1]
sample_labels[i][1] = 1 - sample_labels[i][3]
sample_labels[i][3] = 1 - tmp
if len(img.shape) == 3:
img = np.swapaxes(img, 1, 2)
img = np.swapaxes(img, 1, 0)
img = img[[2, 1, 0], :, :]
img = img.astype('float32')
img -= settings.img_mean
img = img.flatten()
img = img * 0.007843
sample_labels = np.array(sample_labels)
if mode == 'train' or mode == 'test':
if mode == 'train' and len(sample_labels) == 0: continue
yield img.astype(
'float32'
), sample_labels[:, 1:5], sample_labels[:, 0].astype(
'int32'), sample_labels[:, -1].astype('int32')
elif mode == 'infer':
yield img.astype('float32')
return reader
def train(settings, file_list, shuffle=True):
return _reader_creator(settings, file_list, 'train', shuffle)
def test(settings, file_list):
return _reader_creator(settings, file_list, 'test', False)
def infer(settings, file_list):
return _reader_creator(settings, file_list, 'infer', False)
| [
"image_util.expand_image",
"PIL.Image.fromarray",
"PIL.Image.open",
"random.uniform",
"random.shuffle",
"image_util.sampler",
"image_util.generate_batch_samples",
"image_util.distort_image",
"os.path.join",
"numpy.swapaxes",
"numpy.array",
"image_util.crop_image"
] | [((996, 1030), 'os.path.join', 'os.path.join', (['data_dir', 'label_file'], {}), '(data_dir, label_file)\n', (1008, 1030), False, 'import os\n'), ((2454, 2475), 'random.shuffle', 'random.shuffle', (['lines'], {}), '(lines)\n', (2468, 2475), False, 'import random\n'), ((2897, 2917), 'PIL.Image.open', 'Image.open', (['img_path'], {}), '(img_path)\n', (2907, 2917), False, 'from PIL import Image\n'), ((6789, 6802), 'numpy.array', 'np.array', (['img'], {}), '(img)\n', (6797, 6802), True, 'import numpy as np\n'), ((7598, 7621), 'numpy.array', 'np.array', (['sample_labels'], {}), '(sample_labels)\n', (7606, 7621), True, 'import numpy as np\n'), ((1309, 1329), 'numpy.array', 'np.array', (['mean_value'], {}), '(mean_value)\n', (1317, 1329), True, 'import numpy as np\n'), ((2648, 2689), 'os.path.join', 'os.path.join', (['settings.data_dir', 'img_path'], {}), '(settings.data_dir, img_path)\n', (2660, 2689), False, 'import os\n'), ((2723, 2766), 'os.path.join', 'os.path.join', (['settings.data_dir', 'label_path'], {}), '(settings.data_dir, label_path)\n', (2735, 2766), False, 'import os\n'), ((7291, 7313), 'numpy.swapaxes', 'np.swapaxes', (['img', '(1)', '(2)'], {}), '(img, 1, 2)\n', (7302, 7313), True, 'import numpy as np\n'), ((7340, 7362), 'numpy.swapaxes', 'np.swapaxes', (['img', '(1)', '(0)'], {}), '(img, 1, 0)\n', (7351, 7362), True, 'import numpy as np\n'), ((2836, 2873), 'os.path.join', 'os.path.join', (['settings.data_dir', 'line'], {}), '(settings.data_dir, line)\n', (2848, 2873), False, 'import os\n'), ((6111, 6199), 'image_util.generate_batch_samples', 'image_util.generate_batch_samples', (['batch_sampler', 'bbox_labels', 'img_width', 'img_height'], {}), '(batch_sampler, bbox_labels, img_width,\n img_height)\n', (6144, 6199), False, 'import image_util\n'), ((6256, 6269), 'numpy.array', 'np.array', (['img'], {}), '(img)\n', (6264, 6269), True, 'import numpy as np\n'), ((6623, 6643), 'PIL.Image.fromarray', 'Image.fromarray', (['img'], {}), '(img)\n', (6638, 
6643), False, 'from PIL import Image\n'), ((6873, 6893), 'random.uniform', 'random.uniform', (['(0)', '(2)'], {}), '(0, 2)\n', (6887, 6893), False, 'import random\n'), ((4421, 4460), 'image_util.distort_image', 'image_util.distort_image', (['img', 'settings'], {}), '(img, settings)\n', (4445, 4460), False, 'import image_util\n'), ((4559, 4633), 'image_util.expand_image', 'image_util.expand_image', (['img', 'bbox_labels', 'img_width', 'img_height', 'settings'], {}), '(img, bbox_labels, img_width, img_height, settings)\n', (4582, 4633), False, 'import image_util\n'), ((4857, 4911), 'image_util.sampler', 'image_util.sampler', (['(1)', '(1)', '(1.0)', '(1.0)', '(1.0)', '(1.0)', '(0.0)', '(0.0)'], {}), '(1, 1, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0)\n', (4875, 4911), False, 'import image_util\n'), ((5034, 5089), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', '(1.0)', '(0.5)', '(2.0)', '(0.1)', '(0.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.1, 0.0)\n', (5052, 5089), False, 'import image_util\n'), ((5212, 5267), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', '(1.0)', '(0.5)', '(2.0)', '(0.3)', '(0.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.3, 0.0)\n', (5230, 5267), False, 'import image_util\n'), ((5390, 5445), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', '(1.0)', '(0.5)', '(2.0)', '(0.5)', '(0.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.5, 0.0)\n', (5408, 5445), False, 'import image_util\n'), ((5568, 5623), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', '(1.0)', '(0.5)', '(2.0)', '(0.7)', '(0.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.7, 0.0)\n', (5586, 5623), False, 'import image_util\n'), ((5746, 5801), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', '(1.0)', '(0.5)', '(2.0)', '(0.9)', '(0.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.9, 0.0)\n', (5764, 5801), False, 'import image_util\n'), ((5924, 5979), 'image_util.sampler', 'image_util.sampler', (['(1)', '(50)', '(0.3)', 
'(1.0)', '(0.5)', '(2.0)', '(0.0)', '(1.0)'], {}), '(1, 50, 0.3, 1.0, 0.5, 2.0, 0.0, 1.0)\n', (5942, 5979), False, 'import image_util\n'), ((6445, 6530), 'image_util.crop_image', 'image_util.crop_image', (['img', 'bbox_labels', 'sampled_bbox[idx]', 'img_width', 'img_height'], {}), '(img, bbox_labels, sampled_bbox[idx], img_width,\n img_height)\n', (6466, 6530), False, 'import image_util\n')] |
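In the training branch above, a mirror flip reverses the image columns and then remaps every box label: the new normalized `xmin` is `1 - xmax` and the new `xmax` is `1 - xmin`. A stdlib-only sketch of that label remap (the function name is illustrative, not from the reader):

```python
def mirror_bbox_labels(sample_labels):
    """Horizontally flip rows of [label, xmin, ymin, xmax, ymax, difficult].

    Mirrors each box across the vertical center line: the new xmin is
    1 - old xmax and the new xmax is 1 - old xmin, matching the swap
    performed inside the training reader.
    """
    flipped = []
    for row in sample_labels:
        row = list(row)  # copy so the input labels are left untouched
        row[1], row[3] = 1.0 - row[3], 1.0 - row[1]
        flipped.append(row)
    return flipped


labels = [[7.0, 0.25, 0.2, 0.5, 0.8, 0.0]]
mirrored = mirror_bbox_labels(labels)  # xmin 0.25 -> 0.5, xmax 0.5 -> 0.75
```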
import os
from dvc.repo.scm_context import scm_context
from dvc.utils import relpath, resolve_output, resolve_paths
from dvc.utils.fs import path_isin
from ..exceptions import InvalidArgumentError, OutputDuplicationError
from . import locked
@locked
@scm_context
def imp_url(
self,
url,
out=None,
fname=None,
erepo=None,
frozen=True,
no_exec=False,
remote=None,
to_remote=False,
desc=None,
jobs=None,
):
from dvc.dvcfile import Dvcfile
from dvc.stage import Stage, create_stage, restore_meta
out = resolve_output(url, out)
path, wdir, out = resolve_paths(
self, out, always_local=to_remote and not out
)
if to_remote and no_exec:
raise InvalidArgumentError(
"--no-exec can't be combined with --to-remote"
)
if not to_remote and remote:
raise InvalidArgumentError(
"--remote can't be used without --to-remote"
)
# NOTE: when user is importing something from within their own repository
if (
erepo is None
and os.path.exists(url)
and path_isin(os.path.abspath(url), self.root_dir)
):
url = relpath(url, wdir)
stage = create_stage(
Stage,
self,
fname or path,
wdir=wdir,
deps=[url],
outs=[out],
erepo=erepo,
)
restore_meta(stage)
if desc:
stage.outs[0].desc = desc
dvcfile = Dvcfile(self, stage.path)
dvcfile.remove()
try:
self.check_modified_graph([stage])
except OutputDuplicationError as exc:
raise OutputDuplicationError(exc.output, set(exc.stages) - {stage})
if no_exec:
stage.ignore_outs()
elif to_remote:
remote = self.cloud.get_remote(remote, "import-url")
stage.outs[0].transfer(url, odb=remote.odb, jobs=jobs)
stage.save_deps()
stage.md5 = stage.compute_md5()
else:
stage.run(jobs=jobs)
stage.frozen = frozen
dvcfile.dump(stage)
return stage
| [
"dvc.utils.resolve_paths",
"os.path.exists",
"dvc.stage.restore_meta",
"dvc.stage.create_stage",
"dvc.utils.resolve_output",
"os.path.abspath",
"dvc.utils.relpath",
"dvc.dvcfile.Dvcfile"
] | [((559, 583), 'dvc.utils.resolve_output', 'resolve_output', (['url', 'out'], {}), '(url, out)\n', (573, 583), False, 'from dvc.utils import relpath, resolve_output, resolve_paths\n'), ((606, 666), 'dvc.utils.resolve_paths', 'resolve_paths', (['self', 'out'], {'always_local': '(to_remote and not out)'}), '(self, out, always_local=to_remote and not out)\n', (619, 666), False, 'from dvc.utils import relpath, resolve_output, resolve_paths\n'), ((1208, 1300), 'dvc.stage.create_stage', 'create_stage', (['Stage', 'self', '(fname or path)'], {'wdir': 'wdir', 'deps': '[url]', 'outs': '[out]', 'erepo': 'erepo'}), '(Stage, self, fname or path, wdir=wdir, deps=[url], outs=[out],\n erepo=erepo)\n', (1220, 1300), False, 'from dvc.stage import Stage, create_stage, restore_meta\n'), ((1364, 1383), 'dvc.stage.restore_meta', 'restore_meta', (['stage'], {}), '(stage)\n', (1376, 1383), False, 'from dvc.stage import Stage, create_stage, restore_meta\n'), ((1447, 1472), 'dvc.dvcfile.Dvcfile', 'Dvcfile', (['self', 'stage.path'], {}), '(self, stage.path)\n', (1454, 1472), False, 'from dvc.dvcfile import Dvcfile\n'), ((1076, 1095), 'os.path.exists', 'os.path.exists', (['url'], {}), '(url)\n', (1090, 1095), False, 'import os\n'), ((1176, 1194), 'dvc.utils.relpath', 'relpath', (['url', 'wdir'], {}), '(url, wdir)\n', (1183, 1194), False, 'from dvc.utils import relpath, resolve_output, resolve_paths\n'), ((1118, 1138), 'os.path.abspath', 'os.path.abspath', (['url'], {}), '(url)\n', (1133, 1138), False, 'import os\n')] |
# peacocktv.py
import sys
from datetime import date
import logging
import requests
import xml.etree.ElementTree as ET
from .. import pysitemap
if __name__ == '__main__':
url_list = []
tree = ET.parse('local_sitemaps/sitemap-1.xml')
root = tree.getroot()
for child in root:
url_list.append(child[0].text)
    today = date.today()
    d = today.strftime("%m.%d.%y")
with open(f'../outputs/peacocktv1-{d}.txt', "w") as outfile:
for i in url_list:
outfile.write('%s\n' % i)
    # the url_list above contains all the movie urls
url_list2 = []
tree2 = ET.parse('local_sitemaps/sitemap-2.xml')
root2 = tree2.getroot()
for child in root2:
url_list2.append(child[0].text)
with open(f'../outputs/peacocktv2-{d}.txt', "w") as outfile:
for i in url_list2:
outfile.write('%s\n' % i)
    # the url_list2 above contains all the tv urls
### The mothod below works with most urls but there are always 3 urls raising errors
# import sys
# from datetime import date
# import logging
# from .. import pysitemap
# if __name__ == '__main__':
# if '--iocp' in sys.argv:
# from asyncio import events, windows_events
# sys.argv.remove('--iocp')
# logging.info('using iocp')
# el = windows_events.ProactorEventLoop()
# events.set_event_loop(el)
# # root_url = sys.argv[1]
# root_url = 'https://www.peacocktv.com/'
# today = date.today()
# d = today.strftime("%m.%d.%y")
# crawler(root_url, out_file=f'/Users/harperhe/Documents/Vale/Github/scrapy-tsg/urlgen/outputs/peacocktv{d}.txt', out_format='txt')
| [
"datetime.date.today",
"xml.etree.ElementTree.parse"
] | [((202, 242), 'xml.etree.ElementTree.parse', 'ET.parse', (['"""local_sitemaps/sitemap-1.xml"""'], {}), "('local_sitemaps/sitemap-1.xml')\n", (210, 242), True, 'import xml.etree.ElementTree as ET\n'), ((343, 355), 'datetime.date.today', 'date.today', ([], {}), '()\n', (353, 355), False, 'from datetime import date\n'), ((607, 647), 'xml.etree.ElementTree.parse', 'ET.parse', (['"""local_sitemaps/sitemap-2.xml"""'], {}), "('local_sitemaps/sitemap-2.xml')\n", (615, 647), True, 'import xml.etree.ElementTree as ET\n')] |
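The script above pulls each URL out of a locally saved sitemap with ElementTree by taking the first child (the `<loc>` element) of every `<url>` entry. The same extraction on an in-memory sitemap string (sample URLs are made up):

```python
import xml.etree.ElementTree as ET

SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.peacocktv.com/watch/a</loc></url>
  <url><loc>https://www.peacocktv.com/watch/b</loc></url>
</urlset>"""

root = ET.fromstring(SITEMAP)
# child[0] is the <loc> element of each <url>, as in the script above;
# positional indexing sidesteps the sitemap XML namespace entirely
url_list = [child[0].text for child in root]
```

Indexing by position avoids having to pass the `{http://www.sitemaps.org/schemas/sitemap/0.9}` namespace to `find()`.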
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 21 23:17:28 2021
@author: AKayal
"""
import Polygon
from Polygon import *
import Polygons
from Polygons import *
import pytest
import math
def test_polygon():
abs_tol = 0.001
rel_tol = 0.001
try:
p = Polygon(2, 10)
assert False, ('Creating a Polygon with 2 sides: '
' Exception expected, not received')
except ValueError:
pass
n = 3
R = 1
p = Polygon(n, R)
p._n ## to trigger the setter
p._R ## to trigger the setter
assert str(p) == 'Polygon(n=3, R=1)', f'actual: {str(p)}'
assert p.count_vertices == n, (f'actual: {p.count_vertices},'
f' expected: {n}')
assert p.count_edges == n, f'actual: {p.count_edges}, expected: {n}'
assert p.circumradius == R, f'actual: {p.circumradius}, expected: {n}'
assert p.interior_angle == 60, (f'actual: {p.interior_angle},'
' expected: 60')
def test_polygon_size():
abs_tol = 0.001
rel_tol = 0.001
try:
p = Polygon(2, 10)
assert False, ('Creating a Polygon with 2 sides: '
' Exception expected, not received')
except ValueError:
pass
n = 4
R = 1
p = Polygon(n, R)
p._n ## to trigger the setter
p._R ## to trigger the setter
assert p.interior_angle == 90, (f'actual: {p.interior_angle}, '
' expected: 90')
assert math.isclose(p.area, 2,
                        rel_tol=rel_tol,
abs_tol=abs_tol), (f'actual: {p.area},'
' expected: 2.0')
assert math.isclose(p.side_length, math.sqrt(2),
rel_tol=rel_tol,
abs_tol=abs_tol), (f'actual: {p.side_length},'
f' expected: {math.sqrt(2)}')
assert math.isclose(p.perimeter, 4 * math.sqrt(2),
rel_tol=rel_tol,
abs_tol=abs_tol), (f'actual: {p.perimeter},'
f' expected: {4 * math.sqrt(2)}')
assert math.isclose(p.apothem, 0.707,
rel_tol=rel_tol,
                        abs_tol=abs_tol), (f'actual: {p.apothem},'
' expected: 0.707')
p = Polygon(6, 2)
p._n ## to trigger the setter
p._R ## to trigger the setter
assert math.isclose(p.side_length, 2,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.apothem, 1.73205,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.area, 10.3923,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.perimeter, 12,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.interior_angle, 120,
rel_tol=rel_tol, abs_tol=abs_tol)
p = Polygon(12, 3)
p._n ## to trigger the setter
p._R ## to trigger the setter
assert math.isclose(p.side_length, 1.55291,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.apothem, 2.89778,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.area, 27,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.perimeter, 18.635,
rel_tol=rel_tol, abs_tol=abs_tol)
assert math.isclose(p.interior_angle, 150,
rel_tol=rel_tol, abs_tol=abs_tol)
def test_polygon_compare():
abs_tol = 0.001
rel_tol = 0.001
try:
p = Polygon(2, 10)
assert False, ('Creating a Polygon with 2 sides: '
' Exception expected, not received')
except ValueError:
pass
p1 = Polygon(3, 10)
p1._n ## to trigger the setter
p1._R ## to trigger the setter
p2 = Polygon(10, 10)
p2._n ## to trigger the setter
p2._R ## to trigger the setter
p3 = Polygon(15, 10)
p3._n ## to trigger the setter
p3._R ## to trigger the setter
p4 = Polygon(15, 100)
p4._n ## to trigger the setter
p4._R ## to trigger the setter
p5 = Polygon(15, 100)
p5._n ## to trigger the setter
p5._R ## to trigger the setter
assert p2 > p1
assert p2 < p3
assert p3 != p4
assert p1 != p4
assert p4 == p5
# test_polygon() | [
"Polygon"
] | [((485, 498), 'Polygon', 'Polygon', (['n', 'R'], {}), '(n, R)\n', (492, 498), False, 'import Polygon\n'), ((1327, 1340), 'Polygon', 'Polygon', (['n', 'R'], {}), '(n, R)\n', (1334, 1340), False, 'import Polygon\n'), ((2454, 2467), 'Polygon', 'Polygon', (['(6)', '(2)'], {}), '(6, 2)\n', (2461, 2467), False, 'import Polygon\n'), ((3066, 3080), 'Polygon', 'Polygon', (['(12)', '(3)'], {}), '(12, 3)\n', (3073, 3080), False, 'import Polygon\n'), ((3948, 3962), 'Polygon', 'Polygon', (['(3)', '(10)'], {}), '(3, 10)\n', (3955, 3962), False, 'import Polygon\n'), ((4049, 4064), 'Polygon', 'Polygon', (['(10)', '(10)'], {}), '(10, 10)\n', (4056, 4064), False, 'import Polygon\n'), ((4154, 4169), 'Polygon', 'Polygon', (['(15)', '(10)'], {}), '(15, 10)\n', (4161, 4169), False, 'import Polygon\n'), ((4261, 4277), 'Polygon', 'Polygon', (['(15)', '(100)'], {}), '(15, 100)\n', (4268, 4277), False, 'import Polygon\n'), ((4369, 4385), 'Polygon', 'Polygon', (['(15)', '(100)'], {}), '(15, 100)\n', (4376, 4385), False, 'import Polygon\n'), ((263, 277), 'Polygon', 'Polygon', (['(2)', '(10)'], {}), '(2, 10)\n', (270, 277), False, 'import Polygon\n'), ((1125, 1139), 'Polygon', 'Polygon', (['(2)', '(10)'], {}), '(2, 10)\n', (1132, 1139), False, 'import Polygon\n'), ((3769, 3783), 'Polygon', 'Polygon', (['(2)', '(10)'], {}), '(2, 10)\n', (3776, 3783), False, 'import Polygon\n')] |
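The assertions above pass both `rel_tol` and `abs_tol` to `math.isclose`; the relative tolerance scales with the magnitude of the operands, while the absolute tolerance is what makes comparisons against values near zero work. A quick illustration:

```python
import math

# rel_tol scales with magnitude: 0.1% of ~1000 is a window of about 1.0
assert math.isclose(1000.0, 1000.9, rel_tol=1e-3)
assert not math.isclose(1000.0, 1002.0, rel_tol=1e-3)

# near zero, rel_tol alone is useless; abs_tol supplies the floor
assert not math.isclose(1e-9, 0.0, rel_tol=1e-3)
assert math.isclose(1e-9, 0.0, rel_tol=1e-3, abs_tol=1e-6)
```

Passing both, as the tests do, covers comparisons at every magnitude.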
import torch
import torch.nn as nn
import torch.nn.functional as F
from .cond_bn import ConditionalBatchNorm1d
# adopted Generator ResBlock from https://arxiv.org/abs/1909.11646
class GBlock(nn.Module):
def __init__(self, in_channels, out_channels, condition_dim):
super().__init__()
self.cond_bn = nn.ModuleList([
ConditionalBatchNorm1d(in_channels if i==0 else out_channels, condition_dim)
for i in range(4)])
self.leaky_relu = nn.LeakyReLU(0.2)
self.cnn = nn.ModuleList([
nn.Conv1d(in_channels if i==0 else out_channels, out_channels,
kernel_size=3, dilation=2**i, padding=2**i)
for i in range(4)])
self.shortcut = nn.Conv1d(in_channels, out_channels, kernel_size=1)
def forward(self, x, z, mask=None):
identity = x
x = self.cnn[0](self.leaky_relu(self.cond_bn[0](x, z)))
if mask is not None:
x.masked_fill_(mask, 0.0)
x = self.cnn[1](self.leaky_relu(self.cond_bn[1](x, z)))
if mask is not None:
x.masked_fill_(mask, 0.0)
x = x + self.shortcut(identity)
if mask is not None:
x.masked_fill_(mask, 0.0)
identity = x
x = self.cnn[2](self.leaky_relu(self.cond_bn[2](x, z)))
if mask is not None:
x.masked_fill_(mask, 0.0)
x = self.cnn[3](self.leaky_relu(self.cond_bn[3](x, z)))
if mask is not None:
x.masked_fill_(mask, 0.0)
x = x + identity
return x
class VCDecoder(nn.Module):
def __init__(self, hp):
super().__init__()
self.stem = nn.Conv1d(hp.chn.encoder + hp.chn.residual_out, hp.chn.gblock[0], kernel_size=7, padding=3)
self.gblock = nn.ModuleList([
GBlock(in_channels, out_channels, hp.chn.speaker.token)
for in_channels, out_channels in
zip(list(hp.chn.gblock)[:-1], hp.chn.gblock[1:])])
self.final = nn.Conv1d(hp.chn.gblock[-1], hp.audio.n_mel_channels, kernel_size=1)
def forward(self, x, speaker_emb, mask=None):
# x: linguistic features + pitch info.
# [B, chn.encoder + chn.residual_out, T_dec]
x = self.stem(x) # [B, chn.gblock[0], T]
if mask is not None:
x.masked_fill_(mask, 0.0)
for gblock in self.gblock:
x = gblock(x, speaker_emb, mask)
# x: [B, chn.gblock[-1], T]
x = self.final(x) # [B, M, T]
if mask is not None:
x.masked_fill_(mask, 0.0)
return x
| [
"torch.nn.Conv1d",
"torch.nn.LeakyReLU"
] | [((485, 502), 'torch.nn.LeakyReLU', 'nn.LeakyReLU', (['(0.2)'], {}), '(0.2)\n', (497, 502), True, 'import torch.nn as nn\n'), ((729, 780), 'torch.nn.Conv1d', 'nn.Conv1d', (['in_channels', 'out_channels'], {'kernel_size': '(1)'}), '(in_channels, out_channels, kernel_size=1)\n', (738, 780), True, 'import torch.nn as nn\n'), ((1656, 1751), 'torch.nn.Conv1d', 'nn.Conv1d', (['(hp.chn.encoder + hp.chn.residual_out)', 'hp.chn.gblock[0]'], {'kernel_size': '(7)', 'padding': '(3)'}), '(hp.chn.encoder + hp.chn.residual_out, hp.chn.gblock[0],\n kernel_size=7, padding=3)\n', (1665, 1751), True, 'import torch.nn as nn\n'), ((1983, 2051), 'torch.nn.Conv1d', 'nn.Conv1d', (['hp.chn.gblock[-1]', 'hp.audio.n_mel_channels'], {'kernel_size': '(1)'}), '(hp.chn.gblock[-1], hp.audio.n_mel_channels, kernel_size=1)\n', (1992, 2051), True, 'import torch.nn as nn\n'), ((550, 666), 'torch.nn.Conv1d', 'nn.Conv1d', (['(in_channels if i == 0 else out_channels)', 'out_channels'], {'kernel_size': '(3)', 'dilation': '(2 ** i)', 'padding': '(2 ** i)'}), '(in_channels if i == 0 else out_channels, out_channels,\n kernel_size=3, dilation=2 ** i, padding=2 ** i)\n', (559, 666), True, 'import torch.nn as nn\n')] |
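Each `GBlock` above stacks four kernel-3 convolutions with dilations `2**i` for `i` in 0..3 and matching padding, so the sequence length is preserved while the receptive field grows. The receptive field of such a stride-1 stack can be checked with plain arithmetic (stdlib sketch, no torch required):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions.

    Each layer adds (kernel_size - 1) * dilation positions on top of
    the single position a zero-layer stack would see.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf


# the four convs of one GBlock: dilations 1, 2, 4, 8 with kernel size 3
rf = receptive_field(3, [2 ** i for i in range(4)])  # 1 + 2*(1+2+4+8) = 31
```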
import numpy as np
import pandas as pd
from docplex.mp.model import Model as Model
def offline_optimal(df_tasks, df_nodes, n_time, n_tasks, n_nodes, gap=0.1):
mdl = Model(name='Maximise social welfare')
    N, T = n_tasks, n_time  # shorthand for the task count and time horizon used below
# set the tolerance to 1%
mdl.parameters.mip.tolerances.mipgap = 0.01
# auxiliary variable representing if fog node p is allocated for user n at time slot t
z = mdl.binary_var_dict(((n, p, t) for n in range(N)
for p in range(n_nodes) for t in range(T)), name="z")
# variable that is one if z change from 0 to 1
d = mdl.binary_var_dict(((n, t) for n in range(N)
for t in range(T-1)), name='d')
value_of_tasks = mdl.sum(df_tasks.loc[n, 'valuation_coefficient'] * z[n, p, t]
for n in range(N) for p in range(n_nodes) for t in range(T))
CPU_cost = mdl.sum(df_tasks.loc[n, 'CPU'] * df_nodes.loc[p, 'CPU_cost'] * z[n, p, t]
for n in range(N) for p in range(n_nodes) for t in range(T))
RAM_cost = mdl.sum(df_tasks.loc[n, 'RAM'] * df_nodes.loc[p, 'RAM_cost'] * z[n, p, t]
for n in range(N) for p in range(n_nodes) for t in range(T))
storage_cost = mdl.sum(df_tasks.loc[n, 'storage'] * df_nodes.loc[p, 'storage_cost'] * z[n, p, t]
for n in range(N) for p in range(n_nodes) for t in range(T))
social_welfare = value_of_tasks - CPU_cost - RAM_cost - storage_cost
# the objective is to maximise the social welfare
mdl.maximize(social_welfare)
# time constraints
for n in range(N):
mdl.add_constraint(mdl.sum(z[n, p, t] for p in range(n_nodes) for t in range(T)) <=
df_tasks.loc[n, 'usage_time']) # allocated time <= required time
for p in range(n_nodes):
for t in range(int(df_tasks.loc[n, 'start_time'])):
mdl.add_constraint(z[n, p, t] == 0) # no usage time before the start time
for t in range(int(df_tasks.loc[n, 'deadline'] + 1), T):
mdl.add_constraint(z[n, p, t] == 0) # no usage time after the deadline
# resource constraints
for t in range(T):
for p in range(n_nodes):
mdl.add_constraint(mdl.sum(z[n, p, t] * df_tasks.loc[n, 'CPU'] for n in range(N))
<= df_nodes.loc[p, 'CPU'])
mdl.add_constraint(mdl.sum(z[n, p, t] * df_tasks.loc[n, 'RAM'] for n in range(N))
<= df_nodes.loc[p, 'RAM'])
mdl.add_constraint(mdl.sum(z[n, p, t] * df_tasks.loc[n, 'storage'] for n in range(N))
<= df_nodes.loc[p, 'storage'])
# one task is only processed in one fog node
for n in range(N):
for t in range(T):
mdl.add_constraint(mdl.sum(z[n, p, t] for p in range(n_nodes)) <= 1)
# tasks are non-preemptive
# d is 1 if z change from 0 to 1
for n in range(N):
for t in range(T-1):
mdl.add_constraint(d[n, t] == (mdl.sum(z[n, p, t+1] for p in range(n_nodes)) - 1
>= mdl.sum(z[n, p, t] for p in range(n_nodes))))
# sum(d) inspect of time is less or equal to one
mdl.add_constraint(mdl.sum(d[n, t] for t in range(T-1)) <= 1)
# # tasks are non-preemptive
# for p in range(n_nodes):
# for n in range(n_tasks):
# start_time = df_tasks.loc[n, 'start_time']
# deadline = df_tasks.loc[n, "deadline"]
# for t1 in range(start_time + 1, deadline):
# for t2 in range(t1, deadline):
# mdl.add_constraint(z[n, p, t1] - z[n, p, t1-1] + z[n, p, t2] - z[n, p, t2+1]
# >= -1)
mdl.solve()
number_of_allocated_tasks = 0
for n in range(0, N):
x = 0
for t in range(T):
for p in range(n_nodes):
if z[(n, p, t)].solution_value != 0:
x = 1
number_of_allocated_tasks += x
return social_welfare.solution_value, number_of_allocated_tasks, z
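The non-preemption trick above encodes a start indicator: `d[n, t]` is forced to 1 exactly when the allocation switches from 0 at `t` to 1 at `t+1`, and capping `sum(d)` at one forbids a second start. The same logic can be checked outside the solver with a plain-Python sketch (the helper names below are hypothetical, not part of the model):

```python
# Plain-Python sketch of the non-preemption check used in the model above:
# d[t] flags a 0 -> 1 transition in the allocation vector, and a schedule
# is non-preemptive when the allocation switches on at most once.
# `transition_flags` and `is_non_preemptive` are hypothetical helpers.

def transition_flags(allocated):
    """d[t] = 1 when the task is off at t and on at t + 1."""
    return [int(allocated[t] == 0 and allocated[t + 1] == 1)
            for t in range(len(allocated) - 1)]

def is_non_preemptive(allocated):
    """A task may start (0 -> 1) at most once over the horizon."""
    return sum(transition_flags(allocated)) <= 1

print(is_non_preemptive([0, 1, 1, 0, 0]))  # True: one contiguous run
print(is_non_preemptive([0, 1, 0, 1, 0]))  # False: the task is preempted
```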
"""Pulse host control."""
from datetime import timedelta
from enum import Enum
import logging
from typing import List, Optional
import attr
from pulsectl import Pulse, PulseError, PulseIndexError, PulseOperationFailed
from ..coresys import CoreSys, CoreSysAttributes
from ..exceptions import PulseAudioError
from ..utils import AsyncThrottle
_LOGGER: logging.Logger = logging.getLogger(__name__)
PULSE_NAME = "supervisor"
class StreamType(str, Enum):
"""INPUT/OUTPUT type of source."""
INPUT = "input"
OUTPUT = "output"
@attr.s(frozen=True)
class AudioApplication:
"""Represent a application on the stream."""
name: str = attr.ib()
index: int = attr.ib()
stream_index: str = attr.ib()
stream_type: StreamType = attr.ib()
volume: float = attr.ib()
mute: bool = attr.ib()
addon: str = attr.ib()
@attr.s(frozen=True)
class AudioStream:
"""Represent a input/output stream."""
name: str = attr.ib()
index: int = attr.ib()
description: str = attr.ib()
volume: float = attr.ib()
mute: bool = attr.ib()
default: bool = attr.ib()
card: Optional[int] = attr.ib()
applications: List[AudioApplication] = attr.ib()
@attr.s(frozen=True)
class SoundProfile:
"""Represent a Sound Card profile."""
name: str = attr.ib()
description: str = attr.ib()
active: bool = attr.ib()
@attr.s(frozen=True)
class SoundCard:
"""Represent a Sound Card."""
name: str = attr.ib()
index: int = attr.ib()
driver: str = attr.ib()
profiles: List[SoundProfile] = attr.ib()
class SoundControl(CoreSysAttributes):
"""Pulse control from Host."""
def __init__(self, coresys: CoreSys) -> None:
"""Initialize PulseAudio sound control."""
self.coresys: CoreSys = coresys
self._cards: List[SoundCard] = []
self._inputs: List[AudioStream] = []
self._outputs: List[AudioStream] = []
self._applications: List[AudioApplication] = []
@property
def cards(self) -> List[SoundCard]:
"""Return a list of available sound cards and profiles."""
return self._cards
@property
def inputs(self) -> List[AudioStream]:
"""Return a list of available input streams."""
return self._inputs
@property
def outputs(self) -> List[AudioStream]:
"""Return a list of available output streams."""
return self._outputs
@property
def applications(self) -> List[AudioApplication]:
"""Return a list of available application streams."""
return self._applications
async def set_default(self, stream_type: StreamType, name: str) -> None:
"""Set a stream to default input/output."""
def _set_default():
try:
with Pulse(PULSE_NAME) as pulse:
if stream_type == StreamType.INPUT:
# Get source and set it as default
source = pulse.get_source_by_name(name)
pulse.source_default_set(source)
else:
# Get sink and set it as default
sink = pulse.get_sink_by_name(name)
pulse.sink_default_set(sink)
except PulseIndexError:
                _LOGGER.error("Can't find %s stream %s", stream_type, name)
raise PulseAudioError()
except PulseError as err:
_LOGGER.error("Can't set %s as stream: %s", name, err)
raise PulseAudioError()
# Run and Reload data
await self.sys_run_in_executor(_set_default)
await self.update()
async def set_volume(
self, stream_type: StreamType, index: int, volume: float, application: bool
) -> None:
"""Set a stream to volume input/output/application."""
def _set_volume():
try:
with Pulse(PULSE_NAME) as pulse:
if stream_type == StreamType.INPUT:
if application:
stream = pulse.source_output_info(index)
else:
stream = pulse.source_info(index)
else:
if application:
stream = pulse.sink_input_info(index)
else:
stream = pulse.sink_info(index)
# Set volume
pulse.volume_set_all_chans(stream, volume)
except PulseIndexError:
_LOGGER.error(
"Can't find %s stream %d (App: %s)", stream_type, index, application
)
raise PulseAudioError()
except PulseError as err:
_LOGGER.error("Can't set %d volume: %s", index, err)
raise PulseAudioError()
# Run and Reload data
await self.sys_run_in_executor(_set_volume)
await self.update()
async def set_mute(
self, stream_type: StreamType, index: int, mute: bool, application: bool
) -> None:
"""Set a stream to mute input/output/application."""
def _set_mute():
try:
with Pulse(PULSE_NAME) as pulse:
if stream_type == StreamType.INPUT:
if application:
stream = pulse.source_output_info(index)
else:
stream = pulse.source_info(index)
else:
if application:
stream = pulse.sink_input_info(index)
else:
stream = pulse.sink_info(index)
# Mute stream
pulse.mute(stream, mute)
except PulseIndexError:
_LOGGER.error(
"Can't find %s stream %d (App: %s)", stream_type, index, application
)
raise PulseAudioError()
except PulseError as err:
                _LOGGER.error("Can't set %d mute: %s", index, err)
raise PulseAudioError()
# Run and Reload data
await self.sys_run_in_executor(_set_mute)
await self.update()
    async def activate_profile(self, card_name: str, profile_name: str) -> None:
"""Set a profile to volume input/output."""
def _activate_profile():
try:
with Pulse(PULSE_NAME) as pulse:
card = pulse.get_sink_by_name(card_name)
pulse.card_profile_set(card, profile_name)
except PulseIndexError:
_LOGGER.error("Can't find %s profile %s", card_name, profile_name)
raise PulseAudioError()
except PulseError as err:
_LOGGER.error(
"Can't activate %s profile %s: %s", card_name, profile_name, err
)
raise PulseAudioError()
# Run and Reload data
await self.sys_run_in_executor(_activate_profile)
await self.update()
@AsyncThrottle(timedelta(seconds=10))
async def update(self):
"""Update properties over dbus."""
_LOGGER.info("Update PulseAudio information")
def _update():
try:
with Pulse(PULSE_NAME) as pulse:
server = pulse.server_info()
# Update applications
self._applications.clear()
for application in pulse.sink_input_list():
self._applications.append(
AudioApplication(
application.proplist.get(
"application.name", application.name
),
application.index,
application.sink,
StreamType.OUTPUT,
application.volume.value_flat,
bool(application.mute),
application.proplist.get(
"application.process.machine_id", ""
).replace("-", "_"),
)
)
for application in pulse.source_output_list():
self._applications.append(
AudioApplication(
application.proplist.get(
"application.name", application.name
),
application.index,
application.source,
StreamType.INPUT,
application.volume.value_flat,
bool(application.mute),
application.proplist.get(
"application.process.machine_id", ""
).replace("-", "_"),
)
)
# Update output
self._outputs.clear()
for sink in pulse.sink_list():
self._outputs.append(
AudioStream(
sink.name,
sink.index,
sink.description,
sink.volume.value_flat,
bool(sink.mute),
sink.name == server.default_sink_name,
sink.card if sink.card != 0xFFFFFFFF else None,
[
application
for application in self._applications
if application.stream_index == sink.index
and application.stream_type == StreamType.OUTPUT
],
)
)
# Update input
self._inputs.clear()
for source in pulse.source_list():
                        # Filter out monitor devices since we don't use them for now
if source.name.endswith(".monitor"):
continue
self._inputs.append(
AudioStream(
source.name,
source.index,
source.description,
source.volume.value_flat,
bool(source.mute),
source.name == server.default_source_name,
source.card if source.card != 0xFFFFFFFF else None,
[
application
for application in self._applications
if application.stream_index == source.index
and application.stream_type == StreamType.INPUT
],
)
)
# Update Sound Card
self._cards.clear()
for card in pulse.card_list():
sound_profiles: List[SoundProfile] = []
# Generate profiles
for profile in card.profile_list:
if not profile.available:
continue
sound_profiles.append(
SoundProfile(
profile.name,
profile.description,
profile.name == card.profile_active.name,
)
)
self._cards.append(
SoundCard(
card.name, card.index, card.driver, sound_profiles
)
)
except PulseOperationFailed as err:
_LOGGER.error("Error while processing pulse update: %s", err)
raise PulseAudioError()
except PulseError as err:
_LOGGER.debug("Can't update PulseAudio data: %s", err)
# Run update from pulse server
await self.sys_run_in_executor(_update)
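`update()` above is wrapped in `AsyncThrottle(timedelta(seconds=10))`, which drops repeated refreshes inside the window. A minimal synchronous stand-in for such a throttle (an assumed simplification; the real decorator lives in the supervisor's `utils` module and is async-aware) could look like:

```python
# Minimal sketch of a throttle in the spirit of the AsyncThrottle decorator
# used on update() above: calls arriving inside the window are dropped.
# This is an assumed, simplified synchronous stand-in, not the real helper.
import time
from datetime import timedelta
from functools import wraps

def throttle(window: timedelta):
    def decorator(func):
        last_call = None

        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal last_call
            now = time.monotonic()
            if last_call is not None and now - last_call < window.total_seconds():
                return None  # inside the window: drop the call
            last_call = now
            return func(*args, **kwargs)
        return wrapper
    return decorator

@throttle(timedelta(seconds=10))
def update():
    return "updated"

print(update())  # "updated"
print(update())  # None: dropped, still inside the 10 s window
```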
import numpy as np
from quickndirtybot.strategies.util import moving_fun
def measure(dataframe, smaperiod, minmaxperiod):
    """Compute moving-average and rolling min/max indicators on dataframe.

    smaperiod
        period of the short moving average, in minutes
    minmaxperiod
        period to calculate the minimum/maximum over, in minutes
    """
# get deltat in min; time is in ms
timeidx = list(dataframe.columns).index('time')
basic_clock = int((dataframe.iloc[1, timeidx] -
dataframe.iloc[0, timeidx])/60000)
tperiod_minmax = int(minmaxperiod / basic_clock)
tperiod_smalong = int(2*smaperiod / basic_clock)
tperiod_smashort = int(smaperiod / basic_clock)
# moving averages of hi and lo prices
moving_fun(dataframe, 'highest', blanking=0, duration=tperiod_smashort,
newname='sma-hi-now', fun=np.mean)
moving_fun(dataframe, 'lowest', blanking=0, duration=tperiod_smashort,
newname='sma-lo-now', fun=np.mean)
# moving average of one period previously
moving_fun(dataframe, 'highest', blanking=tperiod_smashort,
duration=tperiod_smashort,
newname='sma-hi-prev', fun=np.mean)
moving_fun(dataframe, 'lowest', blanking=tperiod_smashort,
duration=tperiod_smashort,
newname='sma-lo-prev', fun=np.mean)
# moving min and max of hi nd lo prices
moving_fun(dataframe, 'sma-hi-now', blanking=2*tperiod_smashort,
duration=tperiod_minmax,
newname='min-hi-prev', fun=np.min)
moving_fun(dataframe, 'sma-lo-now', blanking=2*tperiod_smashort,
duration=tperiod_minmax,
newname='max-lo-prev', fun=np.max)
# moving_fun(dataframe, 'highest', blanking=tperiod_smashort, duration=tperiod_minmax,
# newname='min-hi', fun=np.min)
# moving_fun(dataframe, 'lowest', blanking=tperiod_smashort, duration=tperiod_minmax,
# newname='max-lo', fun=np.max)
# quotient of max and min
dataframe.loc[:, 'max/min'] = dataframe['max-lo-prev']/dataframe['min-hi-prev']
def findbuy(dataframe, thresh_maxovermin=1.0075):
"""find time points for buying"""
dataframe.loc[((dataframe['sma-hi-now'] > dataframe['sma-hi-prev']) &
(dataframe['sma-hi-prev'] < dataframe['min-hi-prev']) &
(dataframe['max/min'] > thresh_maxovermin)
),
'buy'] = 1
def findsell(dataframe, thresh_maxovermin=1.0075):
"""find time points for selling"""
dataframe.loc[((dataframe['sma-lo-now'] < dataframe['sma-lo-prev']) &
(dataframe['sma-lo-prev'] > dataframe['max-lo-prev']) &
(dataframe['max/min'] > thresh_maxovermin)
                   ),
                  'sell'] = 1
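`moving_fun` is imported from `quickndirtybot.strategies.util` and is not shown here. From its call sites it appears to apply `fun` over a trailing window of `duration` rows, lagged by `blanking` rows, writing the result to column `newname`. A plausible pandas sketch under that assumption:

```python
# Plausible sketch of the imported moving_fun helper, inferred from its
# call sites above -- this is an assumption, the real helper lives in
# quickndirtybot.strategies.util.  It applies `fun` over a trailing window
# of `duration` rows, lagged by `blanking` rows, into column `newname`.
import numpy as np
import pandas as pd

def moving_fun(dataframe, column, blanking, duration, newname, fun):
    rolled = (dataframe[column]
              .rolling(window=duration, min_periods=duration)
              .apply(fun, raw=True))
    # Lag the rolled series so the window ends `blanking` rows in the past.
    dataframe[newname] = rolled.shift(blanking)

df = pd.DataFrame({'highest': [1.0, 2.0, 3.0, 4.0, 5.0]})
moving_fun(df, 'highest', blanking=0, duration=2, newname='sma-hi-now', fun=np.mean)
print(df['sma-hi-now'].tolist())  # [nan, 1.5, 2.5, 3.5, 4.5]
```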
import torch
import unittest
import numpy as np
from torch.autograd import Variable
from losses.svm import SmoothTop1SVM, SmoothTopkSVM, MaxTop1SVM, MaxTopkSVM
from losses.functional import Topk_Smooth_SVM
from tests.utils import assert_all_close, V
from tests.py_ref import svm_topk_smooth_py_1, svm_topk_smooth_py_2,\
smooth_svm_py, max_svm_py, svm_topk_max_py
from torch.autograd.gradcheck import gradcheck
class TestMaxSVM(unittest.TestCase):
def setUp(self):
torch.manual_seed(1234)
np.random.seed(1234)
self.n_samples = 20
self.n_classes = 7
self.alpha = 1.
self.x = torch.randn(self.n_samples, self.n_classes)
self.y = torch.from_numpy(np.random.randint(0, self.n_classes,
size=self.n_samples))
self.k = 3
def testMaxSVM(self):
max_svm_th = MaxTop1SVM(self.n_classes, alpha=self.alpha)
res_th = max_svm_th(V(self.x), V(self.y))
res_py = max_svm_py(V(self.x), V(self.y), alpha=self.alpha)
assert_all_close(res_th, res_py)
def testMaxSVMtopk(self):
max_svm_th = MaxTopkSVM(self.n_classes, k=self.k)
res_th = max_svm_th(V(self.x), V(self.y))
res_py = svm_topk_max_py(V(self.x), V(self.y), k=self.k)
assert_all_close(res_th, res_py)
class TestSmoothSVM(unittest.TestCase):
def setUp(self):
torch.manual_seed(1234)
np.random.seed(1234)
self.n_samples = 20
self.n_classes = 7
self.tau = float(2.)
self.x = torch.randn(self.n_samples, self.n_classes)
self.y = torch.from_numpy(np.random.randint(0, self.n_classes,
size=self.n_samples))
def testSmoothSVM(self):
smooth_svm_th = SmoothTop1SVM(self.n_classes, tau=self.tau)
res_th = smooth_svm_th(V(self.x), V(self.y))
res_py = smooth_svm_py(V(self.x), V(self.y), self.tau)
assert_all_close(res_th, res_py)
class TestSmoothSVMTopk(unittest.TestCase):
def setUp(self):
torch.manual_seed(1234)
np.random.seed(1234)
self.n_samples = 2
self.n_classes = 7
self.k = 5
self.tau = float(2.)
self.x = torch.randn(self.n_samples, self.n_classes)
self.y = torch.from_numpy(np.random.randint(0, self.n_classes,
size=self.n_samples))
self.labels = torch.from_numpy(np.arange(self.n_classes))
def testSmoothSVMpy(self):
res_py_1 = svm_topk_smooth_py_1(V(self.x), V(self.y), self.tau, self.k)
res_py_2 = svm_topk_smooth_py_2(V(self.x), V(self.y), self.tau, self.k)
assert_all_close(res_py_1, res_py_2)
def testSmoothSVMth_functional(self):
F = Topk_Smooth_SVM(self.labels, self.k, self.tau)
res_th = F(V(self.x), V(self.y))
res_py = svm_topk_smooth_py_1(V(self.x), V(self.y), self.tau, self.k)
assert_all_close(res_th, res_py)
def testSmoothSVMth_loss(self):
svm_topk_smooth_th = SmoothTopkSVM(self.n_classes, tau=self.tau,
k=self.k)
res_th = svm_topk_smooth_th(V(self.x), V(self.y))
res_py = svm_topk_smooth_py_1(V(self.x),
V(self.y),
self.tau, self.k).mean()
assert_all_close(res_th, res_py)
def testSmoothSVMth_loss_scales(self):
svm_topk_smooth_th = SmoothTopkSVM(self.n_classes, tau=self.tau, k=self.k)
for scale in (1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3):
x = self.x * scale
res_th = svm_topk_smooth_th(V(x), V(self.y))
res_py = svm_topk_smooth_py_1(V(x), V(self.y), self.tau, self.k).mean()
assert_all_close(res_th, res_py)
def testGradSmoothSVMth_loss(self):
svm_topk_smooth_th = SmoothTopkSVM(self.n_classes, tau=self.tau, k=self.k)
for scale in (1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3, 1e4):
x = self.x * scale
x = Variable(x, requires_grad=True)
assert gradcheck(lambda x: svm_topk_smooth_th(x, V(self.y)),
(x,), atol=1e-2, rtol=1e-3, eps=max(1e-4 * scale, 1e-2)), \
"failed with scale {}".format(scale)
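The smooth top-1 SVM exercised by these tests replaces the max in the multiclass hinge loss with a temperature-`tau` log-sum-exp, recovering the hard hinge as `tau -> 0`. A small NumPy sketch of that loss (the assumed textbook form, not the repo's exact `py_ref` implementation):

```python
# NumPy sketch of a smooth top-1 SVM loss in the spirit of what these
# tests exercise: the max over margin-augmented scores is replaced by a
# temperature-tau log-sum-exp.  This is the assumed standard form, not a
# copy of the repository's reference implementation.
import numpy as np

def smooth_top1_svm(scores, label, tau):
    margin = np.ones_like(scores)
    margin[label] = 0.0                      # no margin for the true class
    aug = (scores + margin) / tau
    # Numerically stable log-sum-exp of the augmented scores.
    m = aug.max()
    lse = m + np.log(np.exp(aug - m).sum())
    return tau * lse - scores[label]

scores = np.array([2.0, 1.0, -1.0])
print(smooth_top1_svm(scores, label=0, tau=0.01))  # small: correct class wins
print(smooth_top1_svm(scores, label=1, tau=0.01))  # near 2.0, the hinge value
```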
import os
import sys
import platform
from distutils.version import LooseVersion
def is_active():
return True
def get_name():
return "Android"
def can_build():
return ("ANDROID_NDK_ROOT" in os.environ)
def get_platform(platform):
return int(platform.split("-")[1])
def get_opts():
from SCons.Variables import BoolVariable, EnumVariable
return [
('ANDROID_NDK_ROOT', 'Path to the Android NDK', os.environ.get("ANDROID_NDK_ROOT", 0)),
('ndk_platform', 'Target platform (android-<api>, e.g. "android-18")', "android-18"),
EnumVariable('android_arch', 'Target architecture', "armv7", ('armv7', 'armv6', 'arm64v8', 'x86', 'x86_64')),
BoolVariable('android_neon', 'Enable NEON support (armv7 only)', True),
]
def get_flags():
return [
('tools', False),
]
def create(env):
tools = env['TOOLS']
if "mingw" in tools:
tools.remove('mingw')
if "applelink" in tools:
tools.remove("applelink")
env.Tool('gcc')
return env.Clone(tools=tools)
def configure(env):
# Workaround for MinGW. See:
# http://www.scons.org/wiki/LongCmdLinesOnWin32
if (os.name == "nt"):
import subprocess
def mySubProcess(cmdline, env):
# print("SPAWNED : " + cmdline)
startupinfo = subprocess.STARTUPINFO()
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
proc = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, startupinfo=startupinfo, shell=False, env=env)
data, err = proc.communicate()
rv = proc.wait()
if rv:
print("=====")
print(err)
print("=====")
return rv
def mySpawn(sh, escape, cmd, args, env):
newargs = ' '.join(args[1:])
cmdline = cmd + " " + newargs
rv = 0
if len(cmdline) > 32000 and cmd.endswith("ar"):
cmdline = cmd + " " + args[1] + " " + args[2] + " "
for i in range(3, len(args)):
rv = mySubProcess(cmdline + args[i], env)
if rv:
break
else:
rv = mySubProcess(cmdline, env)
return rv
env['SPAWN'] = mySpawn
## Architecture
if env['android_arch'] not in ['armv7', 'armv6', 'arm64v8', 'x86', 'x86_64']:
env['android_arch'] = 'armv7'
neon_text = ""
if env["android_arch"] == "armv7" and env['android_neon']:
neon_text = " (with NEON)"
print("Building for Android (" + env['android_arch'] + ")" + neon_text)
can_vectorize = True
if env['android_arch'] == 'x86':
env['ARCH'] = 'arch-x86'
env.extra_suffix = ".x86" + env.extra_suffix
target_subpath = "x86-4.9"
abi_subpath = "i686-linux-android"
arch_subpath = "x86"
env["x86_libtheora_opt_gcc"] = True
    elif env['android_arch'] == 'x86_64':
if get_platform(env["ndk_platform"]) < 21:
print("WARNING: android_arch=x86_64 is not supported by ndk_platform lower than android-21; setting ndk_platform=android-21")
env["ndk_platform"] = "android-21"
env['ARCH'] = 'arch-x86_64'
env.extra_suffix = ".x86_64" + env.extra_suffix
target_subpath = "x86_64-4.9"
abi_subpath = "x86_64-linux-android"
arch_subpath = "x86_64"
env["x86_libtheora_opt_gcc"] = True
elif env['android_arch'] == 'armv6':
env['ARCH'] = 'arch-arm'
env.extra_suffix = ".armv6" + env.extra_suffix
target_subpath = "arm-linux-androideabi-4.9"
abi_subpath = "arm-linux-androideabi"
arch_subpath = "armeabi"
can_vectorize = False
elif env["android_arch"] == "armv7":
env['ARCH'] = 'arch-arm'
target_subpath = "arm-linux-androideabi-4.9"
abi_subpath = "arm-linux-androideabi"
arch_subpath = "armeabi-v7a"
if env['android_neon']:
env.extra_suffix = ".armv7.neon" + env.extra_suffix
else:
env.extra_suffix = ".armv7" + env.extra_suffix
elif env["android_arch"] == "arm64v8":
if get_platform(env["ndk_platform"]) < 21:
print("WARNING: android_arch=arm64v8 is not supported by ndk_platform lower than android-21; setting ndk_platform=android-21")
env["ndk_platform"] = "android-21"
env['ARCH'] = 'arch-arm64'
target_subpath = "aarch64-linux-android-4.9"
abi_subpath = "aarch64-linux-android"
arch_subpath = "arm64-v8a"
env.extra_suffix = ".armv8" + env.extra_suffix
## Build type
if (env["target"].startswith("release")):
if (env["optimize"] == "speed"): #optimize for speed (default)
env.Append(LINKFLAGS=['-O2'])
env.Append(CPPFLAGS=['-O2', '-DNDEBUG', '-fomit-frame-pointer'])
else: #optimize for size
env.Append(CPPFLAGS=['-Os', '-DNDEBUG'])
env.Append(LINKFLAGS=['-Os'])
if (can_vectorize):
env.Append(CPPFLAGS=['-ftree-vectorize'])
if (env["target"] == "release_debug"):
env.Append(CPPFLAGS=['-DDEBUG_ENABLED'])
elif (env["target"] == "debug"):
env.Append(LINKFLAGS=['-O0'])
env.Append(CPPFLAGS=['-O0', '-D_DEBUG', '-UNDEBUG', '-DDEBUG_ENABLED',
'-DDEBUG_MEMORY_ENABLED', '-g', '-fno-limit-debug-info'])
## Compiler configuration
env['SHLIBSUFFIX'] = '.so'
if env['PLATFORM'] == 'win32':
env.Tool('gcc')
env.use_windows_spawn_fix()
mt_link = True
if (sys.platform.startswith("linux")):
host_subpath = "linux-x86_64"
elif (sys.platform.startswith("darwin")):
host_subpath = "darwin-x86_64"
elif (sys.platform.startswith('win')):
if (platform.machine().endswith('64')):
host_subpath = "windows-x86_64"
else:
mt_link = False
host_subpath = "windows"
if env["android_arch"] == "arm64v8":
mt_link = False
compiler_path = env["ANDROID_NDK_ROOT"] + "/toolchains/llvm/prebuilt/" + host_subpath + "/bin"
gcc_toolchain_path = env["ANDROID_NDK_ROOT"] + "/toolchains/" + target_subpath + "/prebuilt/" + host_subpath
tools_path = gcc_toolchain_path + "/" + abi_subpath + "/bin"
    # For Clang to find NDK tools in preference to those installed system-wide
env.PrependENVPath('PATH', tools_path)
ccache_path = os.environ.get("CCACHE")
if ccache_path is None:
env['CC'] = compiler_path + '/clang'
env['CXX'] = compiler_path + '/clang++'
else:
# there aren't any ccache wrappers available for Android,
# to enable caching we need to prepend the path to the ccache binary
env['CC'] = ccache_path + ' ' + compiler_path + '/clang'
env['CXX'] = ccache_path + ' ' + compiler_path + '/clang++'
env['AR'] = tools_path + "/ar"
env['RANLIB'] = tools_path + "/ranlib"
env['AS'] = tools_path + "/as"
common_opts = ['-fno-integrated-as', '-gcc-toolchain', gcc_toolchain_path]
lib_sysroot = env["ANDROID_NDK_ROOT"] + "/platforms/" + env['ndk_platform'] + "/" + env['ARCH']
## Compile flags
env.Append(CPPFLAGS=["-isystem", env["ANDROID_NDK_ROOT"] + "/sources/cxx-stl/llvm-libc++/include"])
env.Append(CPPFLAGS=["-isystem", env["ANDROID_NDK_ROOT"] + "/sources/cxx-stl/llvm-libc++abi/include"])
env.Append(CXXFLAGS=["-std=gnu++14"])
# Disable exceptions and rtti on non-tools (template) builds
if env['tools']:
env.Append(CXXFLAGS=['-frtti'])
else:
env.Append(CXXFLAGS=['-fno-rtti', '-fno-exceptions'])
# Don't use dynamic_cast, necessary with no-rtti.
env.Append(CPPFLAGS=['-DNO_SAFE_CAST'])
ndk_version = get_ndk_version(env["ANDROID_NDK_ROOT"])
if ndk_version != None and LooseVersion(ndk_version) >= LooseVersion("15.0.4075724"):
print("Using NDK unified headers")
sysroot = env["ANDROID_NDK_ROOT"] + "/sysroot"
env.Append(CPPFLAGS=["--sysroot="+sysroot])
env.Append(CPPFLAGS=["-isystem", sysroot + "/usr/include/" + abi_subpath])
env.Append(CPPFLAGS=["-isystem", env["ANDROID_NDK_ROOT"] + "/sources/android/support/include"])
# For unified headers this define has to be set manually
env.Append(CPPFLAGS=["-D__ANDROID_API__=" + str(get_platform(env['ndk_platform']))])
else:
print("Using NDK deprecated headers")
env.Append(CPPFLAGS=["-isystem", lib_sysroot + "/usr/include"])
env.Append(CPPFLAGS='-fpic -ffunction-sections -funwind-tables -fstack-protector-strong -fvisibility=hidden -fno-strict-aliasing'.split())
env.Append(CPPFLAGS='-DNO_STATVFS -DGLES_ENABLED'.split())
env['neon_enabled'] = False
if env['android_arch'] == 'x86':
target_opts = ['-target', 'i686-none-linux-android']
        # The NDK adds this flag when targeting API < 21, so we can drop it once Godot targets at least API 21
env.Append(CPPFLAGS=['-mstackrealign'])
elif env['android_arch'] == 'x86_64':
target_opts = ['-target', 'x86_64-none-linux-android']
elif env["android_arch"] == "armv6":
target_opts = ['-target', 'armv6-none-linux-androideabi']
env.Append(CPPFLAGS='-D__ARM_ARCH_6__ -march=armv6 -mfpu=vfp -mfloat-abi=softfp'.split())
elif env["android_arch"] == "armv7":
target_opts = ['-target', 'armv7-none-linux-androideabi']
env.Append(CPPFLAGS='-D__ARM_ARCH_7__ -D__ARM_ARCH_7A__ -march=armv7-a -mfloat-abi=softfp'.split())
if env['android_neon']:
env['neon_enabled'] = True
env.Append(CPPFLAGS=['-mfpu=neon', '-D__ARM_NEON__'])
else:
env.Append(CPPFLAGS=['-mfpu=vfpv3-d16'])
elif env["android_arch"] == "arm64v8":
target_opts = ['-target', 'aarch64-none-linux-android']
env.Append(CPPFLAGS=['-D__ARM_ARCH_8A__'])
env.Append(CPPFLAGS=['-mfix-cortex-a53-835769'])
env.Append(CPPFLAGS=target_opts)
env.Append(CPPFLAGS=common_opts)
## Link flags
if ndk_version != None and LooseVersion(ndk_version) >= LooseVersion("15.0.4075724"):
if LooseVersion(ndk_version) >= LooseVersion("17.1.4828580"):
env.Append(LINKFLAGS=['-Wl,--exclude-libs,libgcc.a','-Wl,--exclude-libs,libatomic.a','-nostdlib++'])
else:
env.Append(LINKFLAGS=[env["ANDROID_NDK_ROOT"] +"/sources/cxx-stl/llvm-libc++/libs/"+arch_subpath+"/libandroid_support.a"])
env.Append(LINKFLAGS=['-shared', '--sysroot=' + lib_sysroot, '-Wl,--warn-shared-textrel'])
env.Append(LIBPATH=[env["ANDROID_NDK_ROOT"] + "/sources/cxx-stl/llvm-libc++/libs/"+arch_subpath+"/"])
env.Append(LINKFLAGS=[env["ANDROID_NDK_ROOT"] +"/sources/cxx-stl/llvm-libc++/libs/"+arch_subpath+"/libc++_shared.so"])
else:
env.Append(LINKFLAGS=['-shared', '--sysroot=' + lib_sysroot, '-Wl,--warn-shared-textrel'])
if mt_link:
env.Append(LINKFLAGS=['-Wl,--threads'])
if env["android_arch"] == "armv7":
env.Append(LINKFLAGS='-Wl,--fix-cortex-a8'.split())
env.Append(LINKFLAGS='-Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now'.split())
env.Append(LINKFLAGS='-Wl,-soname,libgodot_android.so -Wl,--gc-sections'.split())
env.Append(LINKFLAGS=target_opts)
env.Append(LINKFLAGS=common_opts)
env.Append(LIBPATH=[env["ANDROID_NDK_ROOT"] + '/toolchains/' + target_subpath + '/prebuilt/' +
host_subpath + '/lib/gcc/' + abi_subpath + '/4.9.x'])
env.Append(LIBPATH=[env["ANDROID_NDK_ROOT"] +
'/toolchains/' + target_subpath + '/prebuilt/' + host_subpath + '/' + abi_subpath + '/lib'])
env.Append(CPPPATH=['#platform/android'])
env.Append(CPPFLAGS=['-DANDROID_ENABLED', '-DUNIX_ENABLED', '-DNO_FCNTL'])
env.Append(LIBS=['OpenSLES', 'EGL', 'GLESv3', 'android', 'log', 'z', 'dl'])
# Return the NDK version string from source.properties (adapted from the Chromium project).
def get_ndk_version(path):
if path is None:
return None
prop_file_path = os.path.join(path, "source.properties")
try:
with open(prop_file_path) as prop_file:
for line in prop_file:
key_value = list(map(lambda x: x.strip(), line.split("=")))
if key_value[0] == "Pkg.Revision":
return key_value[1]
    except Exception:
print("Could not read source prop file '%s'" % prop_file_path)
return None
| [
"SCons.Variables.EnumVariable",
"subprocess.Popen",
"os.path.join",
"sys.platform.startswith",
"os.environ.get",
"SCons.Variables.BoolVariable",
"subprocess.STARTUPINFO",
"platform.machine",
"distutils.version.LooseVersion",
"platform.split"
] | [((5724, 5756), 'sys.platform.startswith', 'sys.platform.startswith', (['"""linux"""'], {}), "('linux')\n", (5747, 5756), False, 'import sys\n'), ((6572, 6596), 'os.environ.get', 'os.environ.get', (['"""CCACHE"""'], {}), "('CCACHE')\n", (6586, 6596), False, 'import os\n'), ((12207, 12246), 'os.path.join', 'os.path.join', (['path', '"""source.properties"""'], {}), "(path, 'source.properties')\n", (12219, 12246), False, 'import os\n'), ((577, 689), 'SCons.Variables.EnumVariable', 'EnumVariable', (['"""android_arch"""', '"""Target architecture"""', '"""armv7"""', "('armv7', 'armv6', 'arm64v8', 'x86', 'x86_64')"], {}), "('android_arch', 'Target architecture', 'armv7', ('armv7',\n 'armv6', 'arm64v8', 'x86', 'x86_64'))\n", (589, 689), False, 'from SCons.Variables import BoolVariable, EnumVariable\n'), ((695, 765), 'SCons.Variables.BoolVariable', 'BoolVariable', (['"""android_neon"""', '"""Enable NEON support (armv7 only)"""', '(True)'], {}), "('android_neon', 'Enable NEON support (armv7 only)', True)\n", (707, 765), False, 'from SCons.Variables import BoolVariable, EnumVariable\n'), ((5807, 5840), 'sys.platform.startswith', 'sys.platform.startswith', (['"""darwin"""'], {}), "('darwin')\n", (5830, 5840), False, 'import sys\n'), ((264, 283), 'platform.split', 'platform.split', (['"""-"""'], {}), "('-')\n", (278, 283), False, 'import platform\n'), ((435, 472), 'os.environ.get', 'os.environ.get', (['"""ANDROID_NDK_ROOT"""', '(0)'], {}), "('ANDROID_NDK_ROOT', 0)\n", (449, 472), False, 'import os\n'), ((1331, 1355), 'subprocess.STARTUPINFO', 'subprocess.STARTUPINFO', ([], {}), '()\n', (1353, 1355), False, 'import subprocess\n'), ((1442, 1589), 'subprocess.Popen', 'subprocess.Popen', (['cmdline'], {'stdin': 'subprocess.PIPE', 'stdout': 'subprocess.PIPE', 'stderr': 'subprocess.PIPE', 'startupinfo': 'startupinfo', 'shell': '(False)', 'env': 'env'}), '(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, startupinfo=startupinfo, shell=False, 
env=env)\n', (1458, 1589), False, 'import subprocess\n'), ((5892, 5922), 'sys.platform.startswith', 'sys.platform.startswith', (['"""win"""'], {}), "('win')\n", (5915, 5922), False, 'import sys\n'), ((7970, 7995), 'distutils.version.LooseVersion', 'LooseVersion', (['ndk_version'], {}), '(ndk_version)\n', (7982, 7995), False, 'from distutils.version import LooseVersion\n'), ((7999, 8027), 'distutils.version.LooseVersion', 'LooseVersion', (['"""15.0.4075724"""'], {}), "('15.0.4075724')\n", (8011, 8027), False, 'from distutils.version import LooseVersion\n'), ((10211, 10236), 'distutils.version.LooseVersion', 'LooseVersion', (['ndk_version'], {}), '(ndk_version)\n', (10223, 10236), False, 'from distutils.version import LooseVersion\n'), ((10240, 10268), 'distutils.version.LooseVersion', 'LooseVersion', (['"""15.0.4075724"""'], {}), "('15.0.4075724')\n", (10252, 10268), False, 'from distutils.version import LooseVersion\n'), ((10281, 10306), 'distutils.version.LooseVersion', 'LooseVersion', (['ndk_version'], {}), '(ndk_version)\n', (10293, 10306), False, 'from distutils.version import LooseVersion\n'), ((10310, 10338), 'distutils.version.LooseVersion', 'LooseVersion', (['"""17.1.4828580"""'], {}), "('17.1.4828580')\n", (10322, 10338), False, 'from distutils.version import LooseVersion\n'), ((5937, 5955), 'platform.machine', 'platform.machine', ([], {}), '()\n', (5953, 5955), False, 'import platform\n')] |
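The Godot row above ends with `get_ndk_version`, which scans the NDK's `source.properties` for `Pkg.Revision`. A string-based sketch of the same key=value parsing (the sample content is illustrative, and `maxsplit=1` is a small hardening not present in the original):

```python
def parse_pkg_revision(text):
    """Return the Pkg.Revision value from source.properties-style text,
    or None if the key is absent (mirrors get_ndk_version above)."""
    for line in text.splitlines():
        key_value = [part.strip() for part in line.split("=", 1)]
        if len(key_value) == 2 and key_value[0] == "Pkg.Revision":
            return key_value[1]
    return None

sample = "Pkg.Desc = Android NDK\nPkg.Revision = 17.1.4828580\n"
print(parse_pkg_revision(sample))  # 17.1.4828580
```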
# pylint: disable=g-bad-file-header
# Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tests for bsuite.experiments.mnist."""
# Import all required packages
from absl.testing import absltest
from bsuite.experiments.mnist import mnist
from dm_env import test_utils
import numpy as np
class CatchInterfaceTest(test_utils.EnvironmentTestMixin, absltest.TestCase):
def make_object_under_test(self):
return mnist.MNISTBandit(seed=101)
def make_action_sequence(self):
num_actions = self.environment.action_spec().num_values
rng = np.random.RandomState(42)
for _ in range(100):
yield rng.randint(num_actions)
if __name__ == '__main__':
absltest.main()
| [
"bsuite.experiments.mnist.mnist.MNISTBandit",
"absl.testing.absltest.main",
"numpy.random.RandomState"
] | [((1312, 1327), 'absl.testing.absltest.main', 'absltest.main', ([], {}), '()\n', (1325, 1327), False, 'from absl.testing import absltest\n'), ((1060, 1087), 'bsuite.experiments.mnist.mnist.MNISTBandit', 'mnist.MNISTBandit', ([], {'seed': '(101)'}), '(seed=101)\n', (1077, 1087), False, 'from bsuite.experiments.mnist import mnist\n'), ((1193, 1218), 'numpy.random.RandomState', 'np.random.RandomState', (['(42)'], {}), '(42)\n', (1214, 1218), True, 'import numpy as np\n')] |
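`make_action_sequence` in the bsuite row above relies on a seeded `np.random.RandomState` for a reproducible stream of random actions; a standalone sketch of that pattern:

```python
import numpy as np

def action_sequence(seed, num_actions, length):
    """Reproducible random action stream, as in make_action_sequence above."""
    rng = np.random.RandomState(seed)
    for _ in range(length):
        yield rng.randint(num_actions)

# Same seed, same sequence: the generator is fully deterministic.
a = list(action_sequence(42, 10, 5))
b = list(action_sequence(42, 10, 5))
print(a == b)  # True
```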
from setuptools import setup, find_packages
from setuptools.command.install import install
import os
import setuptools
import sys
# should match codalab/common.py#CODALAB_VERSION
CODALAB_VERSION = "1.1.4"
class Install(install):
_WARNING_TEMPLATE = (
'\n\n\033[1m\033[93mWarning! CodaLab was installed at {}, which is not\n'
'one of the following paths in $PATH:\n\n{}\n\nConsider adding {} to $PATH\n'
'to use the CodaLab CLI. You can do this by {}\033[0m\n\n'
)
_UNIX_FIX = 'appending the following line to your .bashrc:\nexport PATH="$PATH:{}"'
_WINDOWS_FIX = (
        'selecting System from the Control Panel, selecting Advanced system\n'
'settings, clicking Environment Variables and adding {} to the list.'
)
_WINDOWS_PLATFORM_VALUES = {'win32', 'cygwin'}
@staticmethod
def _build_fix_message(installed_path):
return (
Install._WINDOWS_FIX.format(installed_path)
if sys.platform in Install._WINDOWS_PLATFORM_VALUES
else Install._UNIX_FIX.format(installed_path)
)
def run(self):
install.run(self)
self._check_path()
def _check_path(self):
cl_path = self.install_scripts
executable_paths = os.environ['PATH'].split(os.pathsep)
if cl_path not in executable_paths:
# Prints a yellow, bold warning message in regards to the installation path not in $PATH
print(
Install._WARNING_TEMPLATE.format(
cl_path,
'\n'.join(executable_paths),
cl_path,
Install._build_fix_message(cl_path),
)
)
def get_requirements(*requirements_file_paths):
requirements = []
for requirements_file_path in requirements_file_paths:
with open(requirements_file_path) as requirements_file:
for line in requirements_file:
if line[0:2] != '-r':
requirements.append(line.strip())
return requirements
if int(setuptools.__version__.split('.')[0]) < 25:
print(
"WARNING: Please upgrade setuptools to a newer version, otherwise installation may break. "
"Recommended command: `pip3 install -U setuptools`"
)
setup(
name='codalab',
version=CODALAB_VERSION,
description='CLI for CodaLab, a platform for reproducible computation',
long_description=(
'Visit https://worksheets.codalab.org/ or setup your own server by following the '
'instructions in the documentation (https://codalab-worksheets.readthedocs.io/en/latest/Server-Setup).'
),
url='https://github.com/codalab/codalab-worksheets',
author='CodaLab',
author_email='<EMAIL>',
license='Apache License 2.0',
keywords='codalab reproducible computation worksheets competitions',
packages=find_packages(exclude=["tests*"]),
classifiers=[
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.6",
"License :: OSI Approved :: Apache Software License",
],
py_modules=['codalab_service'],
python_requires='~=3.6',
cmdclass={'install': Install},
include_package_data=True,
install_requires=get_requirements('requirements.txt'),
entry_points={
'console_scripts': [
'cl=codalab.bin.cl:main',
'cl-server=codalab.bin.server:main',
'cl-bundle-manager=codalab.bin.bundle_manager:main',
'codalab-service=codalab_service:main',
'cl-worker=codalab.worker.main:main',
'cl-worker-manager=codalab.worker_manager.main:main',
'cl-competitiond=scripts.competitiond:main',
]
},
zip_safe=False,
)
| [
"setuptools.command.install.install.run",
"setuptools.find_packages",
"setuptools.__version__.split"
] | [((1123, 1140), 'setuptools.command.install.install.run', 'install.run', (['self'], {}), '(self)\n', (1134, 1140), False, 'from setuptools.command.install import install\n'), ((2886, 2919), 'setuptools.find_packages', 'find_packages', ([], {'exclude': "['tests*']"}), "(exclude=['tests*'])\n", (2899, 2919), False, 'from setuptools import setup, find_packages\n'), ((2072, 2105), 'setuptools.__version__.split', 'setuptools.__version__.split', (['"""."""'], {}), "('.')\n", (2100, 2105), False, 'import setuptools\n')] |
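`get_requirements` in the codalab `setup.py` above skips `-r` include lines while collecting requirement specifiers. A string-based variant for illustration (unlike the original, it also drops blank lines):

```python
def parse_requirements(text):
    """Collect requirement specifiers, skipping '-r other.txt' include lines
    (string-based sketch of get_requirements above)."""
    requirements = []
    for line in text.splitlines():
        if line[0:2] != '-r':
            stripped = line.strip()
            if stripped:
                requirements.append(stripped)
    return requirements

sample = "-r base.txt\nbottle==0.12.9\nmarshmallow>=2.15\n"
print(parse_requirements(sample))  # ['bottle==0.12.9', 'marshmallow>=2.15']
```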
# Copyright 2021 solo-learn development team.
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import argparse
from typing import Any, Dict, List, Sequence, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from solo.losses.byol import byol_loss_func
from solo.methods.base import BaseMomentumMethod
from solo.utils.momentum import initialize_momentum_params
from solo.utils.misc import gather
class SELFSWINBYOL(BaseMomentumMethod):
def __init__(
self,
proj_output_dim: int,
proj_hidden_dim: int,
pred_hidden_dim: int,
queue_size: int,
        temperature: float,
**kwargs,
):
        """Implements self-swin self-supervised learning with momentum encoders, similar to BYOL,
        plus a custom loss function that uses supervised (label) information.
Args:
proj_output_dim (int): number of dimensions of projected features.
proj_hidden_dim (int): number of neurons of the hidden layers of the projector.
pred_hidden_dim (int): number of neurons of the hidden layers of the predictor.
queue_size (int): number of samples to keep in the queue.
.. note::
NNBYOL is similar to NNSiam but the queue from which the neighbors are retrieved is
updated using the features of the momentum backbone. See NNCLR's paper for more details:
"""
self.temperature = temperature
super().__init__(**kwargs)
self.queue_size = queue_size
# projector
self.projector = nn.Sequential(
nn.Linear(self.features_dim, proj_hidden_dim),
nn.BatchNorm1d(proj_hidden_dim),
nn.ReLU(),
nn.Linear(proj_hidden_dim, proj_output_dim),
)
# momentum projector
self.momentum_projector = nn.Sequential(
nn.Linear(self.features_dim, proj_hidden_dim),
nn.BatchNorm1d(proj_hidden_dim),
nn.ReLU(),
nn.Linear(proj_hidden_dim, proj_output_dim),
)
initialize_momentum_params(self.projector, self.momentum_projector)
# predictor
self.predictor = nn.Sequential(
nn.Linear(proj_output_dim, pred_hidden_dim),
nn.BatchNorm1d(pred_hidden_dim),
nn.ReLU(),
nn.Linear(pred_hidden_dim, proj_output_dim),
)
# queue
self.register_buffer("queue", torch.randn(self.queue_size, proj_output_dim))
self.register_buffer("queue_y", torch.tensor([0,1,2], dtype=torch.long).repeat(self.queue_size))
self.queue_y = self.queue_y[:self.queue_size]
self.queue = F.normalize(self.queue, dim=1)
self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))
def custom_loss(self,z1,targets):
features = torch.cat([z1,self.queue],dim=0)
device = (torch.device('cuda')
if features.is_cuda
else torch.device('cpu'))
batch_size = z1.shape[0]
labels = torch.cat([targets,self.queue_y],dim=0)
labels = labels.contiguous().view(-1, 1)
mask = torch.eq(labels, labels.T).float()
contrast_count = features.shape[0]
contrast_feature = features
anchor_feature = z1
anchor_count = 1
# compute logits
anchor_dot_contrast = torch.div(
torch.matmul(anchor_feature, contrast_feature.T),
self.temperature)
# for numerical stability
logits_max, _ = torch.max(anchor_dot_contrast, dim=1, keepdim=True)
logits = anchor_dot_contrast - logits_max.detach()
logits_mask = torch.scatter(
torch.ones_like(mask),
1,
torch.arange(anchor_count * batch_size).view(-1, 1).to(device),
0
)
mask = mask * logits_mask
# compute log_prob
exp_logits = torch.exp(logits) * logits_mask[:logits.shape[0],:]
log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))
# compute mean of log-likelihood over positive
mean_log_prob_pos = (mask[:logits.shape[0],:] * log_prob).sum(1) / mask[:logits.shape[0],:].sum(1)
# loss
loss = - (self.temperature) * mean_log_prob_pos
loss = loss.view(anchor_count * batch_size).mean()
return loss
@staticmethod
def add_model_specific_args(parent_parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
parent_parser = super(SELFSWINBYOL, SELFSWINBYOL).add_model_specific_args(parent_parser)
parser = parent_parser.add_argument_group("byol")
# projector
parser.add_argument("--proj_output_dim", type=int, default=256)
parser.add_argument("--proj_hidden_dim", type=int, default=2048)
# predictor
parser.add_argument("--pred_hidden_dim", type=int, default=512)
# queue settings
parser.add_argument("--queue_size", default=65536, type=int)
parser.add_argument("--temperature", type=float, default=0.2)
return parent_parser
@property
def learnable_params(self) -> List[dict]:
"""Adds projector and predictor parameters to the parent's learnable parameters.
Returns:
List[dict]: list of learnable parameters.
"""
extra_learnable_params = [
{"params": self.projector.parameters()},
{"params": self.predictor.parameters()},
]
return super().learnable_params + extra_learnable_params
@property
def momentum_pairs(self) -> List[Tuple[Any, Any]]:
"""Adds (projector, momentum_projector) to the parent's momentum pairs.
Returns:
List[Tuple[Any, Any]]: list of momentum pairs.
"""
extra_momentum_pairs = [(self.projector, self.momentum_projector)]
return super().momentum_pairs + extra_momentum_pairs
@torch.no_grad()
def dequeue_and_enqueue(self, z: torch.Tensor, y: torch.Tensor):
"""Adds new samples and removes old samples from the queue in a fifo manner. Also stores
the labels of the samples.
Args:
z (torch.Tensor): batch of projected features.
y (torch.Tensor): labels of the samples in the batch.
"""
z = gather(z)
y = gather(y)
batch_size = z.shape[0]
ptr = int(self.queue_ptr) # type: ignore
assert self.queue_size % batch_size == 0
self.queue[ptr : ptr + batch_size, :] = z
self.queue_y[ptr : ptr + batch_size] = y # type: ignore
ptr = (ptr + batch_size) % self.queue_size
self.queue_ptr[0] = ptr # type: ignore
@torch.no_grad()
def find_nn(self, z: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""Finds the nearest neighbor of a sample.
Args:
z (torch.Tensor): a batch of projected features.
Returns:
Tuple[torch.Tensor, torch.Tensor]:
indices and projected features of the nearest neighbors.
"""
idx = (z @ self.queue.T).max(dim=1)[1]
nn = self.queue[idx]
return idx, nn
def forward(self, X: torch.Tensor, *args, **kwargs) -> Dict[str, Any]:
"""Performs forward pass of the online backbone, projector and predictor.
Args:
X (torch.Tensor): batch of images in tensor format.
Returns:
Dict[str, Any]:
a dict containing the outputs of the parent, the projected features and the
predicted features.
"""
out = super().forward(X, *args, **kwargs)
z = self.projector(out["feats"])
p = self.predictor(z)
return {**out, "z": z, "p": p}
def training_step(self, batch: Sequence[Any], batch_idx: int) -> torch.Tensor:
"""Training step for NNBYOL reusing BaseMethod training step.
Args:
batch (Sequence[Any]): a batch of data in the format of [img_indexes, [X], Y], where
[X] is a list of size num_crops containing batches of images.
batch_idx (int): index of the batch.
Returns:
torch.Tensor: total loss composed of NNBYOL and classification loss.
"""
targets = batch[-1]
out = super().training_step(batch, batch_idx)
class_loss = out["loss"]
feats1, feats2 = out["feats"]
momentum_feats1, momentum_feats2 = out["momentum_feats"]
z1 = self.projector(feats1)
z2 = self.projector(feats2)
p1 = self.predictor(z1)
p2 = self.predictor(z2)
# forward momentum backbone
with torch.no_grad():
z1_momentum = self.momentum_projector(momentum_feats1)
z2_momentum = self.momentum_projector(momentum_feats2)
z1_momentum = F.normalize(z1_momentum, dim=-1)
z2_momentum = F.normalize(z2_momentum, dim=-1)
# find nn
idx1, nn1_momentum = self.find_nn(z1_momentum)
_, nn2_momentum = self.find_nn(z2_momentum)
# ------- negative cosine similarity loss -------
neg_cos_sim = byol_loss_func(p1, nn2_momentum) + byol_loss_func(p2, nn1_momentum)
custom_loss = (
self.custom_loss(F.normalize(p1,dim=-1),targets) /2
+ self.custom_loss(F.normalize(p2,dim=-1),targets) /2
)
# compute nn accuracy
b = targets.size(0)
nn_acc = (targets == self.queue_y[idx1]).sum() / b
# dequeue and enqueue
self.dequeue_and_enqueue(z2_momentum, targets)
# calculate std of features
z1_std = F.normalize(z1, dim=-1).std(dim=0).mean()
z2_std = F.normalize(z2, dim=-1).std(dim=0).mean()
z_std = (z1_std + z2_std) / 2
metrics = {
"train_neg_cos_sim": neg_cos_sim,
"train_z_std": z_std,
"train_nn_acc": nn_acc,
"train_custom_loss":custom_loss,
}
self.log_dict(metrics, on_epoch=True, sync_dist=True)
return custom_loss + class_loss
| [
"torch.nn.ReLU",
"solo.utils.momentum.initialize_momentum_params",
"torch.max",
"torch.exp",
"torch.eq",
"torch.nn.BatchNorm1d",
"torch.arange",
"solo.losses.byol.byol_loss_func",
"torch.matmul",
"torch.randn",
"torch.ones_like",
"torch.nn.functional.normalize",
"solo.utils.misc.gather",
"... | [((6928, 6943), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (6941, 6943), False, 'import torch\n'), ((7697, 7712), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (7710, 7712), False, 'import torch\n'), ((3060, 3127), 'solo.utils.momentum.initialize_momentum_params', 'initialize_momentum_params', (['self.projector', 'self.momentum_projector'], {}), '(self.projector, self.momentum_projector)\n', (3086, 3127), False, 'from solo.utils.momentum import initialize_momentum_params\n'), ((3663, 3693), 'torch.nn.functional.normalize', 'F.normalize', (['self.queue'], {'dim': '(1)'}), '(self.queue, dim=1)\n', (3674, 3693), True, 'import torch.nn.functional as F\n'), ((3837, 3871), 'torch.cat', 'torch.cat', (['[z1, self.queue]'], {'dim': '(0)'}), '([z1, self.queue], dim=0)\n', (3846, 3871), False, 'import torch\n'), ((4042, 4083), 'torch.cat', 'torch.cat', (['[targets, self.queue_y]'], {'dim': '(0)'}), '([targets, self.queue_y], dim=0)\n', (4051, 4083), False, 'import torch\n'), ((4539, 4590), 'torch.max', 'torch.max', (['anchor_dot_contrast'], {'dim': '(1)', 'keepdim': '(True)'}), '(anchor_dot_contrast, dim=1, keepdim=True)\n', (4548, 4590), False, 'import torch\n'), ((7310, 7319), 'solo.utils.misc.gather', 'gather', (['z'], {}), '(z)\n', (7316, 7319), False, 'from solo.utils.misc import gather\n'), ((7332, 7341), 'solo.utils.misc.gather', 'gather', (['y'], {}), '(y)\n', (7338, 7341), False, 'from solo.utils.misc import gather\n'), ((9837, 9869), 'torch.nn.functional.normalize', 'F.normalize', (['z1_momentum'], {'dim': '(-1)'}), '(z1_momentum, dim=-1)\n', (9848, 9869), True, 'import torch.nn.functional as F\n'), ((9892, 9924), 'torch.nn.functional.normalize', 'F.normalize', (['z2_momentum'], {'dim': '(-1)'}), '(z2_momentum, dim=-1)\n', (9903, 9924), True, 'import torch.nn.functional as F\n'), ((2597, 2642), 'torch.nn.Linear', 'nn.Linear', (['self.features_dim', 'proj_hidden_dim'], {}), '(self.features_dim, proj_hidden_dim)\n', (2606, 2642), True, 'import 
torch.nn as nn\n'), ((2656, 2687), 'torch.nn.BatchNorm1d', 'nn.BatchNorm1d', (['proj_hidden_dim'], {}), '(proj_hidden_dim)\n', (2670, 2687), True, 'import torch.nn as nn\n'), ((2701, 2710), 'torch.nn.ReLU', 'nn.ReLU', ([], {}), '()\n', (2708, 2710), True, 'import torch.nn as nn\n'), ((2724, 2767), 'torch.nn.Linear', 'nn.Linear', (['proj_hidden_dim', 'proj_output_dim'], {}), '(proj_hidden_dim, proj_output_dim)\n', (2733, 2767), True, 'import torch.nn as nn\n'), ((2870, 2915), 'torch.nn.Linear', 'nn.Linear', (['self.features_dim', 'proj_hidden_dim'], {}), '(self.features_dim, proj_hidden_dim)\n', (2879, 2915), True, 'import torch.nn as nn\n'), ((2929, 2960), 'torch.nn.BatchNorm1d', 'nn.BatchNorm1d', (['proj_hidden_dim'], {}), '(proj_hidden_dim)\n', (2943, 2960), True, 'import torch.nn as nn\n'), ((2974, 2983), 'torch.nn.ReLU', 'nn.ReLU', ([], {}), '()\n', (2981, 2983), True, 'import torch.nn as nn\n'), ((2997, 3040), 'torch.nn.Linear', 'nn.Linear', (['proj_hidden_dim', 'proj_output_dim'], {}), '(proj_hidden_dim, proj_output_dim)\n', (3006, 3040), True, 'import torch.nn as nn\n'), ((3201, 3244), 'torch.nn.Linear', 'nn.Linear', (['proj_output_dim', 'pred_hidden_dim'], {}), '(proj_output_dim, pred_hidden_dim)\n', (3210, 3244), True, 'import torch.nn as nn\n'), ((3258, 3289), 'torch.nn.BatchNorm1d', 'nn.BatchNorm1d', (['pred_hidden_dim'], {}), '(pred_hidden_dim)\n', (3272, 3289), True, 'import torch.nn as nn\n'), ((3303, 3312), 'torch.nn.ReLU', 'nn.ReLU', ([], {}), '()\n', (3310, 3312), True, 'import torch.nn as nn\n'), ((3326, 3369), 'torch.nn.Linear', 'nn.Linear', (['pred_hidden_dim', 'proj_output_dim'], {}), '(pred_hidden_dim, proj_output_dim)\n', (3335, 3369), True, 'import torch.nn as nn\n'), ((3436, 3481), 'torch.randn', 'torch.randn', (['self.queue_size', 'proj_output_dim'], {}), '(self.queue_size, proj_output_dim)\n', (3447, 3481), False, 'import torch\n'), ((3736, 3768), 'torch.zeros', 'torch.zeros', (['(1)'], {'dtype': 'torch.long'}), '(1, dtype=torch.long)\n', 
(3747, 3768), False, 'import torch\n'), ((3888, 3908), 'torch.device', 'torch.device', (['"""cuda"""'], {}), "('cuda')\n", (3900, 3908), False, 'import torch\n'), ((3970, 3989), 'torch.device', 'torch.device', (['"""cpu"""'], {}), "('cpu')\n", (3982, 3989), False, 'import torch\n'), ((4401, 4449), 'torch.matmul', 'torch.matmul', (['anchor_feature', 'contrast_feature.T'], {}), '(anchor_feature, contrast_feature.T)\n', (4413, 4449), False, 'import torch\n'), ((4709, 4730), 'torch.ones_like', 'torch.ones_like', (['mask'], {}), '(mask)\n', (4724, 4730), False, 'import torch\n'), ((4930, 4947), 'torch.exp', 'torch.exp', (['logits'], {}), '(logits)\n', (4939, 4947), False, 'import torch\n'), ((9663, 9678), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (9676, 9678), False, 'import torch\n'), ((10132, 10164), 'solo.losses.byol.byol_loss_func', 'byol_loss_func', (['p1', 'nn2_momentum'], {}), '(p1, nn2_momentum)\n', (10146, 10164), False, 'from solo.losses.byol import byol_loss_func\n'), ((10167, 10199), 'solo.losses.byol.byol_loss_func', 'byol_loss_func', (['p2', 'nn1_momentum'], {}), '(p2, nn1_momentum)\n', (10181, 10199), False, 'from solo.losses.byol import byol_loss_func\n'), ((4146, 4172), 'torch.eq', 'torch.eq', (['labels', 'labels.T'], {}), '(labels, labels.T)\n', (4154, 4172), False, 'import torch\n'), ((3523, 3564), 'torch.tensor', 'torch.tensor', (['[0, 1, 2]'], {'dtype': 'torch.long'}), '([0, 1, 2], dtype=torch.long)\n', (3535, 3564), False, 'import torch\n'), ((10253, 10276), 'torch.nn.functional.normalize', 'F.normalize', (['p1'], {'dim': '(-1)'}), '(p1, dim=-1)\n', (10264, 10276), True, 'import torch.nn.functional as F\n'), ((10320, 10343), 'torch.nn.functional.normalize', 'F.normalize', (['p2'], {'dim': '(-1)'}), '(p2, dim=-1)\n', (10331, 10343), True, 'import torch.nn.functional as F\n'), ((10623, 10646), 'torch.nn.functional.normalize', 'F.normalize', (['z1'], {'dim': '(-1)'}), '(z1, dim=-1)\n', (10634, 10646), True, 'import torch.nn.functional as 
F\n'), ((10682, 10705), 'torch.nn.functional.normalize', 'F.normalize', (['z2'], {'dim': '(-1)'}), '(z2, dim=-1)\n', (10693, 10705), True, 'import torch.nn.functional as F\n'), ((4759, 4798), 'torch.arange', 'torch.arange', (['(anchor_count * batch_size)'], {}), '(anchor_count * batch_size)\n', (4771, 4798), False, 'import torch\n')] |
"""Manipulating the Sphinx AST with Jupyter objects"""
import os
import json
from pathlib import Path
import docutils
from docutils.parsers.rst import Directive, directives
from docutils.nodes import math_block, image, literal
from sphinx.util import parselinenos
from sphinx.util.docutils import ReferenceRole
from sphinx.addnodes import download_reference
from sphinx.transforms import SphinxTransform
from sphinx.environment.collectors.asset import ImageCollector
from sphinx.errors import ExtensionError
import ipywidgets.embed
import nbconvert
from .utils import strip_latex_delimiters, sphinx_abs_dir
from .thebelab import ThebeSourceNode, ThebeOutputNode
WIDGET_VIEW_MIMETYPE = "application/vnd.jupyter.widget-view+json"
WIDGET_STATE_MIMETYPE = "application/vnd.jupyter.widget-state+json"
def csv_option(s):
return [p.strip() for p in s.split(",")] if s else []
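A quick sanity check of the `csv_option` converter above: it splits a comma-separated option string, trims whitespace around each item, and returns an empty list when the option is missing or empty. The block below carries a standalone copy of the one-liner purely for illustration.

```python
# Standalone copy of the csv_option converter above, for illustration only.
def csv_option(s):
    return [p.strip() for p in s.split(",")] if s else []

print(csv_option("ValueError, KeyError"))  # -> ['ValueError', 'KeyError']
print(csv_option(""))                      # -> []
```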
def load_content(cell, location, logger):
if cell.arguments:
# As per 'sphinx.directives.code.LiteralInclude'
env = cell.state.document.settings.env
rel_filename, filename = env.relfn2path(cell.arguments[0])
env.note_dependency(rel_filename)
if cell.content:
logger.warning(
'Ignoring inline code in Jupyter cell included from "{}"'.format(
rel_filename
),
location=location,
)
try:
with Path(filename).open() as f:
content = [line.rstrip() for line in f.readlines()]
except (IOError, OSError):
raise IOError("File {} not found or reading it failed".format(filename))
else:
cell.assert_has_content()
content = cell.content
return content
def get_highlights(cell, content, location, logger):
# The code fragment is taken from CodeBlock directive almost unchanged:
# https://github.com/sphinx-doc/sphinx/blob/0319faf8f1503453b6ce19020819a8cf44e39f13/sphinx/directives/code.py#L134-L148
emphasize_linespec = cell.options.get("emphasize-lines")
if emphasize_linespec:
nlines = len(content)
hl_lines = parselinenos(emphasize_linespec, nlines)
if any(i >= nlines for i in hl_lines):
logger.warning(
"Line number spec is out of range(1-{}): {}".format(
nlines, emphasize_linespec
),
location=location,
)
hl_lines = [i + 1 for i in hl_lines if i < nlines]
else:
hl_lines = []
return hl_lines
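`get_highlights` leans on Sphinx's `parselinenos` to expand an ``emphasize-lines`` spec such as ``"1,4-6"`` into zero-based line indices. The sketch below is a minimal stand-in for that behavior, not Sphinx's actual implementation — it omits the open-ended forms (``"3-"``, ``"-4"``) and the validation that `parselinenos` also performs.

```python
# Minimal stand-in for sphinx.util.parselinenos (illustration only):
# expand a spec like "1,4-6" into zero-based line indices. The `total`
# parameter is kept for signature parity but is unused in this sketch.
def parse_linenos_sketch(spec, total):
    lines = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            lines.extend(range(int(start) - 1, int(end)))
        else:
            lines.append(int(part) - 1)
    return lines

print(parse_linenos_sketch("1,4-6", 10))  # -> [0, 3, 4, 5]
```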
class JupyterCell(Directive):
"""Define a code cell to be later executed in a Jupyter kernel.
The content of the directive is the code to execute. Code is not
executed when the directive is parsed, but later during a doctree
transformation.
Arguments
---------
filename : str (optional)
If provided, a path to a file containing code.
Options
-------
hide-code : bool
If provided, the code will not be displayed in the output.
hide-output : bool
If provided, the cell output will not be displayed in the output.
code-below : bool
If provided, the code will be shown below the cell output.
linenos : bool
If provided, the code will be shown with line numbering.
lineno-start: nonnegative int
        If provided, the code will be shown with line numbering beginning from
specified line.
emphasize-lines : comma separated list of line numbers
If provided, the specified lines will be highlighted.
raises : comma separated list of exception types
If provided, a comma-separated list of exception type names that
        the cell may raise. If one of the listed exception types is raised
then the traceback is printed in place of the cell output. If an
exception of another type is raised then we raise a RuntimeError
when executing.
Content
-------
code : str
A code cell.
"""
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
has_content = True
option_spec = {
"hide-code": directives.flag,
"hide-output": directives.flag,
"code-below": directives.flag,
"linenos": directives.flag,
"lineno-start": directives.nonnegative_int,
"emphasize-lines": directives.unchanged_required,
"raises": csv_option,
"stderr": directives.flag,
}
def run(self):
# This only works lazily because the logger is inited by Sphinx
from . import logger
location = self.state_machine.get_source_and_line(self.lineno)
content = load_content(self, location, logger)
try:
hl_lines = get_highlights(self, content, location, logger)
except ValueError as err:
return [self.state.document.reporter.warning(err, line=self.lineno)]
# A top-level placeholder for our cell
cell_node = JupyterCellNode(
execute=True,
hide_code=("hide-code" in self.options),
hide_output=("hide-output" in self.options),
code_below=("code-below" in self.options),
emphasize_lines=hl_lines,
raises=self.options.get("raises"),
stderr=("stderr" in self.options),
classes=["jupyter_cell"],
)
# Add the input section of the cell, we'll add output at execution time
cell_input = CellInputNode(classes=["cell_input"])
cell_input += docutils.nodes.literal_block(
text="\n".join(content),
linenos=("linenos" in self.options),
linenostart=(self.options.get("lineno-start")),
)
cell_node += cell_input
return [cell_node]
class CellInput(Directive):
"""Define a code cell to be included verbatim but not executed.
Arguments
---------
filename : str (optional)
If provided, a path to a file containing code.
Options
-------
linenos : bool
If provided, the code will be shown with line numbering.
lineno-start: nonnegative int
        If provided, the code will be shown with line numbering beginning from
specified line.
emphasize-lines : comma separated list of line numbers
If provided, the specified lines will be highlighted.
Content
-------
code : str
A code cell.
"""
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
has_content = True
option_spec = {
"linenos": directives.flag,
"lineno-start": directives.nonnegative_int,
"emphasize-lines": directives.unchanged_required,
}
def run(self):
# This only works lazily because the logger is inited by Sphinx
from . import logger
location = self.state_machine.get_source_and_line(self.lineno)
content = load_content(self, location, logger)
try:
hl_lines = get_highlights(self, content, location, logger)
except ValueError as err:
return [self.state.document.reporter.warning(err, line=self.lineno)]
# A top-level placeholder for our cell
cell_node = JupyterCellNode(
execute=False,
hide_code=False,
hide_output=True,
code_below=False,
emphasize_lines=hl_lines,
raises=False,
stderr=False,
classes=["jupyter_cell"],
)
# Add the input section of the cell, we'll add output when jupyter-execute cells are run
cell_input = CellInputNode(classes=["cell_input"])
cell_input += docutils.nodes.literal_block(
text="\n".join(content),
linenos=("linenos" in self.options),
linenostart=(self.options.get("lineno-start")),
)
cell_node += cell_input
return [cell_node]
class CellOutput(Directive):
"""Define an output cell to be included verbatim.
Arguments
---------
filename : str (optional)
If provided, a path to a file containing output.
Content
-------
code : str
An output cell.
"""
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
has_content = True
option_spec = {}
def run(self):
# This only works lazily because the logger is inited by Sphinx
from . import logger
location = self.state_machine.get_source_and_line(self.lineno)
content = load_content(self, location, logger)
# A top-level placeholder for our cell
cell_node = JupyterCellNode(
execute=False,
hide_code=True,
hide_output=False,
code_below=False,
emphasize_lines=[],
raises=False,
stderr=False,
)
# Add a blank input and the given output to the cell
cell_input = CellInputNode(classes=["cell_input"])
cell_input += docutils.nodes.literal_block(
text="",
linenos=False,
linenostart=None,
)
cell_node += cell_input
content_str = "\n".join(content)
cell_output = CellOutputNode(classes=["cell_output"])
cell_output += docutils.nodes.literal_block(
text=content_str,
rawsource=content_str,
language="none",
classes=["output", "stream"],
)
cell_node += cell_output
return [cell_node]
class JupyterCellNode(docutils.nodes.container):
"""Inserted into doctree whever a JupyterCell directive is encountered.
Contains code that will be executed in a Jupyter kernel at a later
doctree-transformation step.
"""
class CellInputNode(docutils.nodes.container):
"""Represent an input cell in the Sphinx AST."""
def __init__(self, rawsource="", *children, **attributes):
super().__init__("", **attributes)
class CellOutputNode(docutils.nodes.container):
"""Represent an output cell in the Sphinx AST."""
def __init__(self, rawsource="", *children, **attributes):
super().__init__("", **attributes)
class CellOutputBundleNode(docutils.nodes.container):
"""Represent a MimeBundle in the Sphinx AST, to be transformed later."""
def __init__(self, outputs, rawsource="", *children, **attributes):
self.outputs = outputs
super().__init__("", **attributes)
class JupyterKernelNode(docutils.nodes.Element):
"""Inserted into doctree whenever a JupyterKernel directive is encountered.
Used as a marker to signal that the following JupyterCellNodes (until the
next, if any, JupyterKernelNode) should be executed in a separate kernel.
"""
class JupyterWidgetViewNode(docutils.nodes.Element):
"""Inserted into doctree whenever a Jupyter cell produces a widget as output.
Contains a unique ID for this widget; enough information for the widget
embedding javascript to render it, given the widget state. For non-HTML
outputs this doctree node is rendered generically.
"""
def __init__(self, rawsource="", *children, **attributes):
super().__init__("", view_spec=attributes["view_spec"])
def html(self):
return ipywidgets.embed.widget_view_template.format(
view_spec=json.dumps(self["view_spec"])
)
class JupyterWidgetStateNode(docutils.nodes.Element):
"""Appended to doctree if any Jupyter cell produced a widget as output.
Contains the state needed to render a collection of Jupyter widgets.
Per doctree there is 1 JupyterWidgetStateNode per kernel that produced
Jupyter widgets when running. This is fine as (presently) the
'html-manager' Javascript library, which embeds widgets, loads the state
from all script tags on the page of the correct mimetype.
"""
def __init__(self, rawsource="", *children, **attributes):
super().__init__("", state=attributes["state"])
def html(self):
# TODO: render into a separate file if 'html-manager' starts fully
# parsing script tags, and not just grabbing their innerHTML
# https://github.com/jupyter-widgets/ipywidgets/blob/master/packages/html-manager/src/libembed.ts#L36
return ipywidgets.embed.snippet_template.format(
load="", widget_views="", json_data=json.dumps(self["state"])
)
def cell_output_to_nodes(outputs, data_priority, write_stderr, out_dir,
thebe_config, inline=False):
"""Convert a jupyter cell with outputs and filenames to doctree nodes.
Parameters
----------
outputs : a list of outputs from a Jupyter cell
data_priority : list of mime types
Which media types to prioritize.
write_stderr : bool
If True include stderr in cell output
out_dir : string
Sphinx "absolute path" to the output folder, so it is a relative path
to the source folder prefixed with ``/``.
thebe_config: dict
Thebelab configuration object or None
inline: False
Whether the nodes will be placed in-line with the text.
Returns
-------
to_add : list of docutils nodes
Each output, converted into a docutils node.
"""
# If we're in `inline` mode, ensure that we don't add block-level nodes
if inline:
literal_node = docutils.nodes.literal
math_node = docutils.nodes.math
else:
literal_node = docutils.nodes.literal_block
math_node = math_block
to_add = []
for output in outputs:
output_type = output["output_type"]
if output_type == "stream":
if output["name"] == "stderr":
if not write_stderr:
continue
else:
# Output a container with an unhighlighted literal block for
# `stderr` messages.
#
# Adds a "stderr" class that can be customized by the user for both
# the container and the literal_block.
#
# Not setting "rawsource" disables Pygment hightlighting, which
# would otherwise add a <div class="highlight">.
literal = literal_node(
text=output["text"],
rawsource="", # disables Pygment highlighting
language="none",
classes=["stderr"],
)
if inline:
# In this case, we don't wrap the text in containers
to_add.append(literal)
else:
container = docutils.nodes.container(classes=["stderr"])
container.append(literal)
to_add.append(container)
else:
to_add.append(
literal_node(
text=output["text"],
rawsource=output["text"],
language="none",
classes=["output", "stream"],
)
)
elif output_type == "error":
traceback = "\n".join(output["traceback"])
text = nbconvert.filters.strip_ansi(traceback)
to_add.append(
literal_node(
text=text,
rawsource=text,
language="ipythontb",
classes=["output", "traceback"],
)
)
elif output_type in ("display_data", "execute_result"):
try:
# First mime_type by priority that occurs in output.
mime_type = next(x for x in data_priority if x in output["data"])
except StopIteration:
continue
data = output["data"][mime_type]
if mime_type.startswith("image"):
file_path = Path(output.metadata["filenames"][mime_type])
out_dir = Path(out_dir)
# Sphinx treats absolute paths as being rooted at the source
# directory, so make a relative path, which Sphinx treats
# as being relative to the current working directory.
filename = file_path.name
if out_dir in file_path.parents:
out_dir = file_path.parent
uri = (out_dir / filename).as_posix()
to_add.append(docutils.nodes.image(uri=uri))
elif mime_type == "text/html":
to_add.append(
docutils.nodes.raw(
text=data, format="html", classes=["output", "text_html"]
)
)
elif mime_type == "text/latex":
to_add.append(
math_node(
text=strip_latex_delimiters(data),
nowrap=False,
number=None,
classes=["output", "text_latex"],
)
)
elif mime_type == "text/plain":
to_add.append(
literal_node(
text=data,
rawsource=data,
language="none",
classes=["output", "text_plain"],
)
)
elif mime_type == "application/javascript":
to_add.append(
docutils.nodes.raw(
text='<script type="{mime_type}">{data}</script>'.format(
mime_type=mime_type, data=data
),
format="html",
)
)
elif mime_type == WIDGET_VIEW_MIMETYPE:
to_add.append(JupyterWidgetViewNode(view_spec=data))
return to_add
def attach_outputs(output_nodes, node, thebe_config):
if not node.attributes["hide_code"]: # only add css if code is displayed
classes = node.attributes.get("classes", [])
classes += ["jupyter_container"]
(input_node,) = node.traverse(CellInputNode)
(outputbundle_node,) = node.traverse(CellOutputBundleNode)
output_node = CellOutputNode(classes=["cell_output"])
if thebe_config:
# Move the source from the input node into the thebe_source node
source = input_node.children.pop(0)
thebe_source = ThebeSourceNode(
hide_code=node.attributes["hide_code"],
code_below=node.attributes["code_below"],
language=node.attributes["cm_language"],
)
thebe_source.children = [source]
input_node.children = [thebe_source]
if not node.attributes["hide_output"]:
thebe_output = ThebeOutputNode()
thebe_output.children = output_nodes
output_node += thebe_output
else:
if node.attributes["hide_code"]:
node.children.pop(0)
if not node.attributes["hide_output"]:
output_node.children = output_nodes
# Now replace the bundle with our OutputNode
outputbundle_node.replace_self(output_node)
# Swap inputs and outputs if we want the code below
if node.attributes["code_below"]:
node.children = node.children[::-1]
class JupyterDownloadRole(ReferenceRole):
def run(self):
_, filetype = self.name.split(":")
assert filetype in ("notebook", "nb", "script")
ext = ".ipynb" if filetype in ("notebook", "nb") else ".py"
download_file = self.target + ext
reftarget = sphinx_abs_dir(self.env, download_file)
node = download_reference(self.rawtext, reftarget=reftarget)
self.set_source_info(node)
title = self.title if self.has_explicit_title else download_file
node += literal(self.rawtext, title, classes=["xref", "download"])
return [node], []
def get_widgets(notebook):
try:
return notebook.metadata.widgets[WIDGET_STATE_MIMETYPE]
except AttributeError:
# Don't catch KeyError, as it's a bug if 'widgets' does
# not contain 'WIDGET_STATE_MIMETYPE'
return None
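The error-handling contract in `get_widgets` is deliberate: a notebook with no ``widgets`` metadata raises `AttributeError`, which is the normal case and yields ``None``, while a present ``widgets`` mapping that lacks the state mimetype would raise `KeyError` — a bug that should surface loudly. The sketch below demonstrates both paths with hypothetical stand-in objects (`SimpleNamespace`), not real notebook metadata.

```python
# Standalone copy of get_widgets with hypothetical stand-in notebook
# objects: a missing 'widgets' attribute is normal and yields None; a
# present 'widgets' dict missing the mimetype key would raise KeyError.
from types import SimpleNamespace

WIDGET_STATE_MIMETYPE = "application/vnd.jupyter.widget-state+json"

def get_widgets(notebook):
    try:
        return notebook.metadata.widgets[WIDGET_STATE_MIMETYPE]
    except AttributeError:
        return None

no_widgets = SimpleNamespace(metadata=SimpleNamespace())
print(get_widgets(no_widgets))  # -> None

has_widgets = SimpleNamespace(
    metadata=SimpleNamespace(widgets={WIDGET_STATE_MIMETYPE: {"state": {}}})
)
print(get_widgets(has_widgets))  # -> {'state': {}}
```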
class CombineCellInputOutput(SphinxTransform):
"""Merge nodes from CellOutput with the preceding CellInput node."""
default_priority = 120
def apply(self):
moved_outputs = set()
for cell_node in self.document.traverse(JupyterCellNode):
if cell_node.attributes["execute"] == False:
if cell_node.attributes["hide_code"] == False:
# Cell came from jupyter-input
sibling = cell_node.next_node(descend=False, siblings=True)
if (
isinstance(sibling, JupyterCellNode)
and sibling.attributes["execute"] == False
and sibling.attributes["hide_code"] == True
):
# Sibling came from jupyter-output, so we merge
cell_node += sibling.children[1]
cell_node.attributes["hide_output"] = False
moved_outputs.update({sibling})
else:
                # Cell came from jupyter-output
if cell_node not in moved_outputs:
raise ExtensionError(
"Found a jupyter-output node without a preceding jupyter-input"
)
for output_node in moved_outputs:
output_node.replace_self([])
class CellOutputsToNodes(SphinxTransform):
"""Use the builder context to transform a CellOutputNode into Sphinx nodes."""
default_priority = 700
def apply(self):
thebe_config = self.config.jupyter_sphinx_thebelab_config
for cell_node in self.document.traverse(JupyterCellNode):
(output_bundle_node,) = cell_node.traverse(CellOutputBundleNode)
# Create doctree nodes for cell outputs.
output_nodes = cell_output_to_nodes(
output_bundle_node.outputs,
self.config.jupyter_execute_data_priority,
bool(cell_node.attributes["stderr"]),
sphinx_abs_dir(self.env),
thebe_config,
)
# Remove the outputbundlenode and we'll attach the outputs next
attach_outputs(output_nodes, cell_node, thebe_config)
| [
"docutils.nodes.container",
"pathlib.Path",
"json.dumps",
"sphinx.errors.ExtensionError",
"nbconvert.filters.strip_ansi",
"docutils.nodes.literal_block",
"sphinx.addnodes.download_reference",
"docutils.nodes.raw",
"docutils.nodes.literal",
"sphinx.util.parselinenos",
"docutils.nodes.image"
] | [((2130, 2170), 'sphinx.util.parselinenos', 'parselinenos', (['emphasize_linespec', 'nlines'], {}), '(emphasize_linespec, nlines)\n', (2142, 2170), False, 'from sphinx.util import parselinenos\n'), ((9023, 9093), 'docutils.nodes.literal_block', 'docutils.nodes.literal_block', ([], {'text': '""""""', 'linenos': '(False)', 'linenostart': 'None'}), "(text='', linenos=False, linenostart=None)\n", (9051, 9093), False, 'import docutils\n'), ((9299, 9419), 'docutils.nodes.literal_block', 'docutils.nodes.literal_block', ([], {'text': 'content_str', 'rawsource': 'content_str', 'language': '"""none"""', 'classes': "['output', 'stream']"}), "(text=content_str, rawsource=content_str,\n language='none', classes=['output', 'stream'])\n", (9327, 9419), False, 'import docutils\n'), ((19787, 19840), 'sphinx.addnodes.download_reference', 'download_reference', (['self.rawtext'], {'reftarget': 'reftarget'}), '(self.rawtext, reftarget=reftarget)\n', (19805, 19840), False, 'from sphinx.addnodes import download_reference\n'), ((19965, 20023), 'docutils.nodes.literal', 'literal', (['self.rawtext', 'title'], {'classes': "['xref', 'download']"}), "(self.rawtext, title, classes=['xref', 'download'])\n", (19972, 20023), False, 'from docutils.nodes import math_block, image, literal\n'), ((11356, 11385), 'json.dumps', 'json.dumps', (["self['view_spec']"], {}), "(self['view_spec'])\n", (11366, 11385), False, 'import json\n'), ((12397, 12422), 'json.dumps', 'json.dumps', (["self['state']"], {}), "(self['state'])\n", (12407, 12422), False, 'import json\n'), ((15335, 15374), 'nbconvert.filters.strip_ansi', 'nbconvert.filters.strip_ansi', (['traceback'], {}), '(traceback)\n', (15363, 15374), False, 'import nbconvert\n'), ((1426, 1440), 'pathlib.Path', 'Path', (['filename'], {}), '(filename)\n', (1430, 1440), False, 'from pathlib import Path\n'), ((14767, 14811), 'docutils.nodes.container', 'docutils.nodes.container', ([], {'classes': "['stderr']"}), "(classes=['stderr'])\n", (14791, 14811), 
False, 'import docutils\n'), ((16036, 16081), 'pathlib.Path', 'Path', (["output.metadata['filenames'][mime_type]"], {}), "(output.metadata['filenames'][mime_type])\n", (16040, 16081), False, 'from pathlib import Path\n'), ((16108, 16121), 'pathlib.Path', 'Path', (['out_dir'], {}), '(out_dir)\n', (16112, 16121), False, 'from pathlib import Path\n'), ((21485, 21564), 'sphinx.errors.ExtensionError', 'ExtensionError', (['"""Found a jupyter-output node without a preceding jupyter-input"""'], {}), "('Found a jupyter-output node without a preceding jupyter-input')\n", (21499, 21564), False, 'from sphinx.errors import ExtensionError\n'), ((16567, 16596), 'docutils.nodes.image', 'docutils.nodes.image', ([], {'uri': 'uri'}), '(uri=uri)\n', (16587, 16596), False, 'import docutils\n'), ((16692, 16769), 'docutils.nodes.raw', 'docutils.nodes.raw', ([], {'text': 'data', 'format': '"""html"""', 'classes': "['output', 'text_html']"}), "(text=data, format='html', classes=['output', 'text_html'])\n", (16710, 16769), False, 'import docutils\n')] |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""extenders tests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.feature_column import feature_column_lib as fc
from tensorflow.python.framework import constant_op
from tensorflow.python.keras import metrics as metrics_module
from tensorflow.python.platform import test
from tensorflow_estimator.python.estimator import extenders
from tensorflow_estimator.python.estimator import run_config
from tensorflow_estimator.python.estimator.canned import linear
def get_input_fn(x, y):
def input_fn():
dataset = tf.compat.v1.data.Dataset.from_tensor_slices({'x': x, 'y': y})
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
features = iterator.get_next()
labels = features.pop('y')
return features, labels
return input_fn
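`get_input_fn` uses the input-fn factory pattern: the outer function captures the data, and the returned zero-argument closure rebuilds the ``(features, labels)`` pair each time the estimator calls it. The framework-free sketch below (with a hypothetical `make_input_fn` name) shows the same shape without TensorFlow, using plain dicts in place of the dataset iterator.

```python
# Framework-free sketch of the input-fn factory pattern (hypothetical
# make_input_fn, illustration only): the closure captures x and y and
# rebuilds the (features, labels) pair on every call.
def make_input_fn(x, y):
    def input_fn():
        features = {"x": list(x), "y": list(y)}
        labels = features.pop("y")
        return features, labels
    return input_fn

fn = make_input_fn([0.0, 1.0], [1, 0])
features, labels = fn()
print(features)  # -> {'x': [0.0, 1.0]}
print(labels)    # -> [1, 0]
```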
class AddMetricsTest(tf.test.TestCase):
def test_should_add_metrics(self):
def _test_metric_fn(metric_fn):
input_fn = get_input_fn(
x=np.arange(4)[:, None, None], y=np.ones(4)[:, None])
config = run_config.RunConfig(log_step_count_steps=1)
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')],
config=config)
estimator = extenders.add_metrics(estimator, metric_fn)
estimator.train(input_fn=input_fn)
metrics = estimator.evaluate(input_fn=input_fn)
self.assertIn('mean_x', metrics)
self.assertEqual(1.5, metrics['mean_x'])
# assert that it keeps original estimators metrics
self.assertIn('auc', metrics)
def metric_fn(features):
metric = metrics_module.Mean()
metric.update_state(features['x'])
return {'mean_x': metric}
_test_metric_fn(metric_fn)
def test_should_error_out_for_not_recognized_args(self):
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')])
def metric_fn(features, not_recognized):
_, _ = features, not_recognized
return {}
with self.assertRaisesRegexp(ValueError, 'not_recognized'):
estimator = extenders.add_metrics(estimator, metric_fn)
def test_all_supported_args(self):
input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')])
def metric_fn(features, predictions, labels, config):
self.assertIn('x', features)
self.assertIsNotNone(labels)
self.assertIn('logistic', predictions)
self.assertTrue(isinstance(config, run_config.RunConfig))
return {}
estimator = extenders.add_metrics(estimator, metric_fn)
estimator.train(input_fn=input_fn)
estimator.evaluate(input_fn=input_fn)
def test_all_supported_args_in_different_order(self):
input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')])
def metric_fn(labels, config, features, predictions):
self.assertIn('x', features)
self.assertIsNotNone(labels)
self.assertIn('logistic', predictions)
self.assertTrue(isinstance(config, run_config.RunConfig))
return {}
estimator = extenders.add_metrics(estimator, metric_fn)
estimator.train(input_fn=input_fn)
estimator.evaluate(input_fn=input_fn)
def test_all_args_are_optional(self):
def _test_metric_fn(metric_fn):
input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')])
estimator = extenders.add_metrics(estimator, metric_fn)
estimator.train(input_fn=input_fn)
metrics = estimator.evaluate(input_fn=input_fn)
self.assertEqual(2., metrics['two'])
def metric_fn():
metric = metrics_module.Mean()
metric.update_state(tf.constant([2.]))
return {'two': metric}
_test_metric_fn(metric_fn)
def test_overrides_existing_metrics(self):
def _test_metric_fn(metric_fn):
input_fn = get_input_fn(x=[[[0.]]], y=[[[1]]])
estimator = linear.LinearClassifierV2([tf.feature_column.numeric_column('x')])
estimator.train(input_fn=input_fn)
metrics = estimator.evaluate(input_fn=input_fn)
self.assertNotEqual(2., metrics['auc'])
estimator = extenders.add_metrics(estimator, metric_fn)
metrics = estimator.evaluate(input_fn=input_fn)
self.assertEqual(2., metrics['auc'])
def metric_fn():
metric = metrics_module.Mean()
metric.update_state(tf.constant([2.]))
return {'auc': metric}
_test_metric_fn(metric_fn)
if __name__ == '__main__':
tf.test.main()
| [
"tensorflow_estimator.python.estimator.run_config.RunConfig",
"tensorflow.python.keras.metrics.Mean",
"numpy.ones",
"tensorflow.compat.v1.data.make_one_shot_iterator",
"tensorflow.test.main",
"tensorflow.feature_column.numeric_column",
"tensorflow.compat.v1.data.Dataset.from_tensor_slices",
"tensorflo... | [((5382, 5396), 'tensorflow.test.main', 'tf.test.main', ([], {}), '()\n', (5394, 5396), True, 'import tensorflow as tf\n'), ((1391, 1453), 'tensorflow.compat.v1.data.Dataset.from_tensor_slices', 'tf.compat.v1.data.Dataset.from_tensor_slices', (["{'x': x, 'y': y}"], {}), "({'x': x, 'y': y})\n", (1435, 1453), True, 'import tensorflow as tf\n'), ((1469, 1518), 'tensorflow.compat.v1.data.make_one_shot_iterator', 'tf.compat.v1.data.make_one_shot_iterator', (['dataset'], {}), '(dataset)\n', (1509, 1518), True, 'import tensorflow as tf\n'), ((3370, 3413), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (3391, 3413), False, 'from tensorflow_estimator.python.estimator import extenders\n'), ((3958, 4001), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (3979, 4001), False, 'from tensorflow_estimator.python.estimator import extenders\n'), ((1858, 1902), 'tensorflow_estimator.python.estimator.run_config.RunConfig', 'run_config.RunConfig', ([], {'log_step_count_steps': '(1)'}), '(log_step_count_steps=1)\n', (1878, 1902), False, 'from tensorflow_estimator.python.estimator import run_config\n'), ((2066, 2109), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (2087, 2109), False, 'from tensorflow_estimator.python.estimator import extenders\n'), ((2430, 2451), 'tensorflow.python.keras.metrics.Mean', 'metrics_module.Mean', ([], {}), '()\n', (2449, 2451), True, 'from tensorflow.python.keras import metrics as metrics_module\n'), ((2883, 2926), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (2904, 2926), False, 'from tensorflow_estimator.python.estimator import 
extenders\n'), ((4317, 4360), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (4338, 4360), False, 'from tensorflow_estimator.python.estimator import extenders\n'), ((4537, 4558), 'tensorflow.python.keras.metrics.Mean', 'metrics_module.Mean', ([], {}), '()\n', (4556, 4558), True, 'from tensorflow.python.keras import metrics as metrics_module\n'), ((5045, 5088), 'tensorflow_estimator.python.estimator.extenders.add_metrics', 'extenders.add_metrics', (['estimator', 'metric_fn'], {}), '(estimator, metric_fn)\n', (5066, 5088), False, 'from tensorflow_estimator.python.estimator import extenders\n'), ((5223, 5244), 'tensorflow.python.keras.metrics.Mean', 'metrics_module.Mean', ([], {}), '()\n', (5242, 5244), True, 'from tensorflow.python.keras import metrics as metrics_module\n'), ((2660, 2697), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (2692, 2697), True, 'import tensorflow as tf\n'), ((3059, 3096), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (3091, 3096), True, 'import tensorflow as tf\n'), ((3647, 3684), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (3679, 3684), True, 'import tensorflow as tf\n'), ((4585, 4603), 'tensorflow.constant', 'tf.constant', (['[2.0]'], {}), '([2.0])\n', (4596, 4603), True, 'import tensorflow as tf\n'), ((5271, 5289), 'tensorflow.constant', 'tf.constant', (['[2.0]'], {}), '([2.0])\n', (5282, 5289), True, 'import tensorflow as tf\n'), ((1948, 1985), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (1980, 1985), True, 'import tensorflow as tf\n'), ((4259, 4296), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (4291, 4296), True, 
'import tensorflow as tf\n'), ((4845, 4882), 'tensorflow.feature_column.numeric_column', 'tf.feature_column.numeric_column', (['"""x"""'], {}), "('x')\n", (4877, 4882), True, 'import tensorflow as tf\n'), ((1791, 1803), 'numpy.arange', 'np.arange', (['(4)'], {}), '(4)\n', (1800, 1803), True, 'import numpy as np\n'), ((1822, 1832), 'numpy.ones', 'np.ones', (['(4)'], {}), '(4)\n', (1829, 1832), True, 'import numpy as np\n')] |
import random
import math
from estadisticas import desviacion_estandar, media
def aventar_agujas(numero_de_agujas):
adentro_del_circulo = 0
for _ in range(numero_de_agujas):
x = random.random() * random.choice([-1, 1])
y = random.random() * random.choice([-1, 1])
distancia_desde_el_centro = math.sqrt(x**2 + y**2)
if distancia_desde_el_centro <= 1:
adentro_del_circulo += 1
    return (4 * adentro_del_circulo) / numero_de_agujas
def estimacion(numero_de_agujas, numero_de_intentos):
estimados = []
for _ in range(numero_de_intentos):
estimacion_pi = aventar_agujas(numero_de_agujas)
estimados.append(estimacion_pi)
media_estimados = media(estimados)
sigma = desviacion_estandar(estimados)
print(f'Est={round(media_estimados, 5)}, sigma={round(sigma, 5)}, agujas={numero_de_agujas}')
return (media_estimados, sigma)
def estimar_pi(precision, numero_de_intentos):
numero_de_agujas = 1000
sigma = precision
while sigma >= precision / 1.96:
media, sigma = estimacion(numero_de_agujas, numero_de_intentos)
numero_de_agujas *= 2
return media
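The stopping rule above doubles the needle count until ``1.96 * sigma`` — the half-width of a 95% confidence interval on the mean estimate — drops below the requested precision. The seeded sketch below (hypothetical `estimate_pi`, not the module's functions) checks the underlying Monte Carlo idea: the fraction of uniform points landing inside the unit circle approaches π/4, so four times that fraction estimates π.

```python
# Seeded sketch of the dart-throwing estimator (illustration only):
# sample points in the unit square, count those inside the quarter
# circle, and scale the fraction by 4 to estimate pi.
import math
import random

def estimate_pi(needles, seed=42):
    rng = random.Random(seed)
    inside = 0
    for _ in range(needles):
        x, y = rng.random(), rng.random()
        if math.sqrt(x * x + y * y) <= 1:
            inside += 1
    return 4 * inside / needles

print(estimate_pi(100_000))  # close to 3.14159 for large needle counts
```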
if __name__ == '__main__':
estimar_pi(0.01, 1000) | [
"random.choice",
"estadisticas.media",
"math.sqrt",
"estadisticas.desviacion_estandar",
"random.random"
] | [((746, 762), 'estadisticas.media', 'media', (['estimados'], {}), '(estimados)\n', (751, 762), False, 'from estadisticas import desviacion_estandar, media\n'), ((776, 806), 'estadisticas.desviacion_estandar', 'desviacion_estandar', (['estimados'], {}), '(estimados)\n', (795, 806), False, 'from estadisticas import desviacion_estandar, media\n'), ((336, 362), 'math.sqrt', 'math.sqrt', (['(x ** 2 + y ** 2)'], {}), '(x ** 2 + y ** 2)\n', (345, 362), False, 'import math\n'), ((204, 219), 'random.random', 'random.random', ([], {}), '()\n', (217, 219), False, 'import random\n'), ((222, 244), 'random.choice', 'random.choice', (['[-1, 1]'], {}), '([-1, 1])\n', (235, 244), False, 'import random\n'), ((258, 273), 'random.random', 'random.random', ([], {}), '()\n', (271, 273), False, 'import random\n'), ((276, 298), 'random.choice', 'random.choice', (['[-1, 1]'], {}), '([-1, 1])\n', (289, 298), False, 'import random\n')] |
import time
import torch
from torch.utils.data import DataLoader, RandomSampler
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm
from datasets.dataset_FTR import *
from src.models.FTR_model import *
from .inpainting_metrics import get_inpainting_metrics
from .utils import Progbar, create_dir, stitch_images, SampleEdgeLineLogits
class LaMa:
def __init__(self, config, gpu, rank, test=False):
self.config = config
self.device = gpu
self.global_rank = rank
self.model_name = 'inpaint'
kwargs = dict(config.training_model)
kwargs.pop('kind')
self.inpaint_model = LaMaInpaintingTrainingModule(config, gpu=gpu, rank=rank, test=test, **kwargs).to(gpu)
self.train_dataset = ImgDataset(config.TRAIN_FLIST, config.INPUT_SIZE, config.MASK_RATE, config.TRAIN_MASK_FLIST,
augment=True, training=True, test_mask_path=None)
if config.DDP:
self.train_sampler = DistributedSampler(self.train_dataset, num_replicas=config.world_size,
rank=self.global_rank, shuffle=True)
# else:
# self.train_sampler = DistributedSampler(self.train_dataset, num_replicas=1, rank=0, shuffle=True)
self.val_dataset = ImgDataset(config.VAL_FLIST, config.INPUT_SIZE, mask_rates=None, mask_path=None, augment=False,
training=False, test_mask_path=config.TEST_MASK_FLIST)
self.sample_iterator = self.val_dataset.create_iterator(config.SAMPLE_SIZE)
self.samples_path = os.path.join(config.PATH, 'samples')
self.results_path = os.path.join(config.PATH, 'results')
self.val_path = os.path.join(config.PATH, 'validation')
create_dir(self.val_path)
self.log_file = os.path.join(config.PATH, 'log_' + self.model_name + '.dat')
self.best = float("inf") if self.inpaint_model.best is None else self.inpaint_model.best
def save(self):
if self.global_rank == 0:
self.inpaint_model.save()
def train(self):
if self.config.DDP:
train_loader = DataLoader(self.train_dataset, shuffle=False, pin_memory=True,
batch_size=self.config.BATCH_SIZE // self.config.world_size,
num_workers=12, sampler=self.train_sampler)
else:
train_loader = DataLoader(self.train_dataset, pin_memory=True,
batch_size=self.config.BATCH_SIZE, num_workers=12, shuffle=True)
epoch = 0
keep_training = True
max_iteration = int(float((self.config.MAX_ITERS)))
total = len(self.train_dataset) // self.config.world_size
if total == 0 and self.global_rank == 0:
print('No training data was provided! Check \'TRAIN_FLIST\' value in the configuration file.')
return
while keep_training:
epoch += 1
if self.config.DDP:
self.train_sampler.set_epoch(epoch + 1) # Shuffle each epoch
epoch_start = time.time()
if self.global_rank == 0:
print('\n\nTraining epoch: %d' % epoch)
progbar = Progbar(total, width=20, stateful_metrics=['epoch', 'iter', 'loss_scale'],
verbose=1 if self.global_rank == 0 else 0)
for _, items in enumerate(train_loader):
self.inpaint_model.train()
items['image'] = items['image'].to(self.device)
items['mask'] = items['mask'].to(self.device)
# train
outputs, gen_loss, dis_loss, logs, batch = self.inpaint_model.process(items)
iteration = self.inpaint_model.iteration
if iteration >= max_iteration:
keep_training = False
break
logs = [
("epoch", epoch),
("iter", iteration),
] + [(i, logs[0][i]) for i in logs[0]] + [(i, logs[1][i]) for i in logs[1]]
if self.config.No_Bar:
pass
else:
progbar.add(len(items['image']),
values=logs if self.config.VERBOSE else [x for x in logs if not x[0].startswith('l_')])
# log model at checkpoints
if self.config.LOG_INTERVAL and iteration % self.config.LOG_INTERVAL == 1 and self.global_rank == 0:
self.log(logs)
# sample model at checkpoints
if self.config.SAMPLE_INTERVAL and iteration % self.config.SAMPLE_INTERVAL == 1 and self.global_rank == 0:
self.sample()
# evaluate model at checkpoints
if self.config.EVAL_INTERVAL and iteration % self.config.EVAL_INTERVAL == 1:
if self.global_rank == 0:
print('\nstart eval...\n')
print("Epoch: %d" % epoch)
psnr, ssim, fid = self.eval()
if self.best > fid and self.global_rank == 0:
self.best = fid
print("current best epoch is %d" % epoch)
print('\nsaving %s...\n' % self.inpaint_model.name)
raw_model = self.inpaint_model.generator.module if \
hasattr(self.inpaint_model.generator, "module") else self.inpaint_model.generator
torch.save({
'iteration': self.inpaint_model.iteration,
'generator': raw_model.state_dict(),
'best_fid': fid,
'ssim': ssim,
'psnr': psnr
}, os.path.join(self.config.PATH, self.inpaint_model.name + '_best_gen.pth'))
raw_model = self.inpaint_model.discriminator.module if \
hasattr(self.inpaint_model.discriminator, "module") else self.inpaint_model.discriminator
torch.save({
'discriminator': raw_model.state_dict(),
'best_fid': fid,
'ssim': ssim,
'psnr': psnr
}, os.path.join(self.config.PATH, self.inpaint_model.name + '_best_dis.pth'))
# save model at checkpoints
if self.config.SAVE_INTERVAL and iteration % self.config.SAVE_INTERVAL == 1 and self.global_rank == 0:
self.save()
if self.global_rank == 0:
print("Epoch: %d, time for one epoch: %d seconds" % (epoch, time.time() - epoch_start))
logs = [('Epoch', epoch), ('time', time.time() - epoch_start)]
self.log(logs)
print('\nEnd training....')
def eval(self):
if self.config.DDP:
val_loader = DataLoader(self.val_dataset, shuffle=False, pin_memory=True,
batch_size=self.config.BATCH_SIZE // self.config.world_size, ## BS of each GPU
num_workers=12)
else:
val_loader = DataLoader(self.val_dataset, shuffle=False, pin_memory=True,
batch_size=self.config.BATCH_SIZE, num_workers=12)
total = len(self.val_dataset)
self.inpaint_model.eval()
if self.config.No_Bar:
pass
else:
progbar = Progbar(total, width=20, stateful_metrics=['it'])
iteration = 0
with torch.no_grad():
for items in tqdm(val_loader):
iteration += 1
items['image'] = items['image'].to(self.device)
items['mask'] = items['mask'].to(self.device)
b, _, _, _ = items['image'].size()
# inpaint model
# eval
items = self.inpaint_model(items)
outputs_merged = (items['predicted_image'] * items['mask']) + (items['image'] * (1 - items['mask']))
# save
outputs_merged *= 255.0
outputs_merged = outputs_merged.permute(0, 2, 3, 1).int().cpu().numpy()
for img_num in range(b):
cv2.imwrite(self.val_path + '/' + items['name'][img_num], outputs_merged[img_num, :, :, ::-1])
our_metric = get_inpainting_metrics(self.val_path, self.config.GT_Val_FOLDER, None, fid_test=True)
if self.global_rank == 0:
print("iter: %d, PSNR: %f, SSIM: %f, FID: %f, LPIPS: %f" %
(self.inpaint_model.iteration, float(our_metric['psnr']), float(our_metric['ssim']),
float(our_metric['fid']), float(our_metric['lpips'])))
logs = [('iter', self.inpaint_model.iteration), ('PSNR', float(our_metric['psnr'])),
('SSIM', float(our_metric['ssim'])), ('FID', float(our_metric['fid'])), ('LPIPS', float(our_metric['lpips']))]
self.log(logs)
return float(our_metric['psnr']), float(our_metric['ssim']), float(our_metric['fid'])
def sample(self, it=None):
# do not sample when validation set is empty
if len(self.val_dataset) == 0:
return
self.inpaint_model.eval()
with torch.no_grad():
items = next(self.sample_iterator)
items['image'] = items['image'].to(self.device)
items['mask'] = items['mask'].to(self.device)
# inpaint model
iteration = self.inpaint_model.iteration
inputs = (items['image'] * (1 - items['mask']))
items = self.inpaint_model(items)
outputs_merged = (items['predicted_image'] * items['mask']) + (items['image'] * (1 - items['mask']))
if it is not None:
iteration = it
image_per_row = 2
if self.config.SAMPLE_SIZE <= 6:
image_per_row = 1
images = stitch_images(
self.postprocess(items['image'].cpu()),
self.postprocess(inputs.cpu()),
self.postprocess(items['mask'].cpu()),
self.postprocess(items['predicted_image'].cpu()),
self.postprocess(outputs_merged.cpu()),
img_per_row=image_per_row
)
path = os.path.join(self.samples_path, self.model_name)
name = os.path.join(path, str(iteration).zfill(5) + ".png")
create_dir(path)
print('\nsaving sample ' + name)
images.save(name)
def log(self, logs):
with open(self.log_file, 'a') as f:
f.write('%s\n' % ' '.join([str(item[0]) + '\t' + str(item[1]) for item in logs]))
def cuda(self, *args):
return (item.to(self.config.DEVICE) for item in args)
def postprocess(self, img):
# [0, 1] => [0, 255]
img = img * 255.0
img = img.permute(0, 2, 3, 1)
return img.int()
class ZITS:
def __init__(self, config, gpu, rank, test=False):
self.config = config
self.device = gpu
self.global_rank = rank
self.model_name = 'inpaint'
kwargs = dict(config.training_model)
kwargs.pop('kind')
self.inpaint_model = DefaultInpaintingTrainingModule(config, gpu=gpu, rank=rank, test=test, **kwargs).to(gpu)
if config.min_sigma is None:
min_sigma = 2.0
else:
min_sigma = config.min_sigma
if config.max_sigma is None:
max_sigma = 2.5
else:
max_sigma = config.max_sigma
if config.round is None:
round = 1
else:
round = config.round
if not test:
self.train_dataset = DynamicDataset(config.TRAIN_FLIST, mask_path=config.TRAIN_MASK_FLIST,
batch_size=config.BATCH_SIZE // config.world_size,
pos_num=config.rel_pos_num, augment=True, training=True,
test_mask_path=None, train_line_path=config.train_line_path,
add_pos=config.use_MPE, world_size=config.world_size,
min_sigma=min_sigma, max_sigma=max_sigma, round=round)
if config.DDP:
self.train_sampler = DistributedSampler(self.train_dataset, num_replicas=config.world_size,
rank=self.global_rank, shuffle=True)
else:
self.train_sampler = DistributedSampler(self.train_dataset, num_replicas=1, rank=0, shuffle=True)
self.samples_path = os.path.join(config.PATH, 'samples')
self.results_path = os.path.join(config.PATH, 'results')
self.log_file = os.path.join(config.PATH, 'log_' + self.model_name + '.dat')
self.best = float("inf") if self.inpaint_model.best is None else self.inpaint_model.best
self.val_dataset = DynamicDataset(config.VAL_FLIST, mask_path=None, pos_num=config.rel_pos_num,
batch_size=config.BATCH_SIZE, augment=False, training=False,
test_mask_path=config.TEST_MASK_FLIST,
eval_line_path=config.eval_line_path,
add_pos=config.use_MPE, input_size=config.INPUT_SIZE,
min_sigma=min_sigma, max_sigma=max_sigma)
self.sample_iterator = self.val_dataset.create_iterator(config.SAMPLE_SIZE)
self.val_path = os.path.join(config.PATH, 'validation')
create_dir(self.val_path)
def save(self):
if self.global_rank == 0:
self.inpaint_model.save()
def train(self):
if self.config.DDP:
train_loader = DataLoader(self.train_dataset, shuffle=False, pin_memory=True,
batch_size=self.config.BATCH_SIZE // self.config.world_size,
num_workers=12, sampler=self.train_sampler)
else:
train_loader = DataLoader(self.train_dataset, pin_memory=True,
batch_size=self.config.BATCH_SIZE, num_workers=12,
sampler=self.train_sampler)
epoch = self.inpaint_model.iteration // len(train_loader)
keep_training = True
max_iteration = int(float((self.config.MAX_ITERS)))
total = len(self.train_dataset) // self.config.world_size
if total == 0 and self.global_rank == 0:
print('No training data was provided! Check \'TRAIN_FLIST\' value in the configuration file.')
return
while keep_training:
epoch += 1
if self.config.DDP or self.config.DP:
self.train_sampler.set_epoch(epoch + 1)
if self.config.fix_256 is None or self.config.fix_256 is False:
self.train_dataset.reset_dataset(self.train_sampler)
epoch_start = time.time()
if self.global_rank == 0:
print('\n\nTraining epoch: %d' % epoch)
progbar = Progbar(total, width=20, stateful_metrics=['epoch', 'iter', 'loss_scale',
'g_lr', 'd_lr', 'str_lr', 'img_size'],
verbose=1 if self.global_rank == 0 else 0)
for _, items in enumerate(train_loader):
iteration = self.inpaint_model.iteration
self.inpaint_model.train()
for k in items:
if type(items[k]) is torch.Tensor:
items[k] = items[k].to(self.device)
image_size = items['image'].shape[2]
random_add_v = random.random() * 1.5 + 1.5
random_mul_v = random.random() * 1.5 + 1.5 # [1.5~3]
# random mix the edge and line
if iteration > int(self.config.MIX_ITERS):
b, _, _, _ = items['edge'].shape
if int(self.config.MIX_ITERS) < iteration < int(self.config.Turning_Point):
pred_rate = (iteration - int(self.config.MIX_ITERS)) / \
(int(self.config.Turning_Point) - int(self.config.MIX_ITERS))
b = np.clip(int(pred_rate * b), 2, b)
iteration_num_for_pred = int(random.random() * 5) + 1
edge_pred, line_pred = SampleEdgeLineLogits(self.inpaint_model.transformer,
context=[items['img_256'][:b, ...],
items['edge_256'][:b, ...],
items['line_256'][:b, ...]],
mask=items['mask_256'][:b, ...].clone(),
iterations=iteration_num_for_pred,
add_v=0.05, mul_v=4)
edge_pred = edge_pred.detach().to(torch.float32)
line_pred = line_pred.detach().to(torch.float32)
if self.config.fix_256 is None or self.config.fix_256 is False:
if image_size < 300 and random.random() < 0.5:
edge_pred = F.interpolate(edge_pred, size=(image_size, image_size), mode='nearest')
line_pred = F.interpolate(line_pred, size=(image_size, image_size), mode='nearest')
else:
edge_pred = self.inpaint_model.structure_upsample(edge_pred)[0]
edge_pred = torch.sigmoid((edge_pred + random_add_v) * random_mul_v)
edge_pred = F.interpolate(edge_pred, size=(image_size, image_size), mode='bilinear',
align_corners=False)
line_pred = self.inpaint_model.structure_upsample(line_pred)[0]
line_pred = torch.sigmoid((line_pred + random_add_v) * random_mul_v)
line_pred = F.interpolate(line_pred, size=(image_size, image_size), mode='bilinear',
align_corners=False)
items['edge'][:b, ...] = edge_pred.detach()
items['line'][:b, ...] = line_pred.detach()
# train
outputs, gen_loss, dis_loss, logs, batch = self.inpaint_model.process(items)
if iteration >= max_iteration:
keep_training = False
break
logs = [("epoch", epoch), ("iter", iteration)] + \
[(i, logs[0][i]) for i in logs[0]] + [(i, logs[1][i]) for i in logs[1]]
logs.append(("g_lr", self.inpaint_model.g_scheduler.get_lr()[0]))
logs.append(("d_lr", self.inpaint_model.d_scheduler.get_lr()[0]))
logs.append(("str_lr", self.inpaint_model.str_scheduler.get_lr()[0]))
logs.append(("img_size", batch['size_ratio'][0].item() * 256))
progbar.add(len(items['image']),
values=logs if self.config.VERBOSE else [x for x in logs if not x[0].startswith('l_')])
# log model at checkpoints
if self.config.LOG_INTERVAL and iteration % self.config.LOG_INTERVAL == 0 and self.global_rank == 0:
self.log(logs)
# sample model at checkpoints
if self.config.SAMPLE_INTERVAL and iteration > 0 and iteration % self.config.SAMPLE_INTERVAL == 0 and self.global_rank == 0:
self.sample()
# evaluate model at checkpoints
if self.config.EVAL_INTERVAL and iteration > 0 and iteration % self.config.EVAL_INTERVAL == 0 and self.global_rank == 0:
print('\nstart eval...\n')
print("Epoch: %d" % epoch)
psnr, ssim, fid = self.eval()
if self.best > fid:
self.best = fid
print("current best epoch is %d" % epoch)
print('\nsaving %s...\n' % self.inpaint_model.name)
raw_model = self.inpaint_model.generator.module if \
hasattr(self.inpaint_model.generator, "module") else self.inpaint_model.generator
raw_encoder = self.inpaint_model.str_encoder.module if \
hasattr(self.inpaint_model.str_encoder, "module") else self.inpaint_model.str_encoder
torch.save({
'iteration': self.inpaint_model.iteration,
'generator': raw_model.state_dict(),
'str_encoder': raw_encoder.state_dict(),
'best_fid': fid,
'ssim': ssim,
'psnr': psnr
}, os.path.join(self.config.PATH,
self.inpaint_model.name + '_best_gen_HR.pth'))
raw_model = self.inpaint_model.discriminator.module if \
hasattr(self.inpaint_model.discriminator, "module") else self.inpaint_model.discriminator
torch.save({
'discriminator': raw_model.state_dict()
}, os.path.join(self.config.PATH, self.inpaint_model.name + '_best_dis_HR.pth'))
# save model at checkpoints
if self.config.SAVE_INTERVAL and iteration > 0 and iteration % self.config.SAVE_INTERVAL == 0 and self.global_rank == 0:
self.save()
if self.global_rank == 0:
print("Epoch: %d, time for one epoch: %d seconds" % (epoch, time.time() - epoch_start))
logs = [('Epoch', epoch), ('time', time.time() - epoch_start)]
self.log(logs)
print('\nEnd training....')
def eval(self):
val_loader = DataLoader(self.val_dataset, shuffle=False, pin_memory=True,
batch_size=self.config.BATCH_SIZE, num_workers=12)
self.inpaint_model.eval()
with torch.no_grad():
for items in tqdm(val_loader):
for k in items:
if type(items[k]) is torch.Tensor:
items[k] = items[k].to(self.device)
b, _, _, _ = items['edge'].shape
edge_pred, line_pred = SampleEdgeLineLogits(self.inpaint_model.transformer,
context=[items['img_256'][:b, ...],
items['edge_256'][:b, ...],
items['line_256'][:b, ...]],
mask=items['mask_256'][:b, ...].clone(),
iterations=5,
add_v=0.05, mul_v=4,
device=self.device)
edge_pred, line_pred = edge_pred[:b, ...].detach().to(torch.float32), \
line_pred[:b, ...].detach().to(torch.float32)
if self.config.fix_256 is None or self.config.fix_256 is False:
edge_pred = self.inpaint_model.structure_upsample(edge_pred)[0]
edge_pred = torch.sigmoid((edge_pred + 2) * 2)
line_pred = self.inpaint_model.structure_upsample(line_pred)[0]
line_pred = torch.sigmoid((line_pred + 2) * 2)
items['edge'][:b, ...] = edge_pred.detach()
items['line'][:b, ...] = line_pred.detach()
# eval
items = self.inpaint_model(items)
outputs_merged = (items['predicted_image'] * items['mask']) + (items['image'] * (1 - items['mask']))
# save
outputs_merged *= 255.0
outputs_merged = outputs_merged.permute(0, 2, 3, 1).int().cpu().numpy()
for img_num in range(b):
cv2.imwrite(self.val_path + '/' + items['name'][img_num], outputs_merged[img_num, :, :, ::-1])
our_metric = get_inpainting_metrics(self.val_path, self.config.GT_Val_FOLDER, None, fid_test=True)
if self.global_rank == 0:
print("iter: %d, PSNR: %f, SSIM: %f, FID: %f, LPIPS: %f" %
(self.inpaint_model.iteration, float(our_metric['psnr']), float(our_metric['ssim']),
float(our_metric['fid']), float(our_metric['lpips'])))
logs = [('iter', self.inpaint_model.iteration), ('PSNR', float(our_metric['psnr'])),
('SSIM', float(our_metric['ssim'])), ('FID', float(our_metric['fid'])),
('LPIPS', float(our_metric['lpips']))]
self.log(logs)
return float(our_metric['psnr']), float(our_metric['ssim']), float(our_metric['fid'])
def sample(self, it=None):
# do not sample when validation set is empty
if len(self.val_dataset) == 0:
return
self.inpaint_model.eval()
with torch.no_grad():
items = next(self.sample_iterator)
for k in items:
if type(items[k]) is torch.Tensor:
items[k] = items[k].to(self.device)
b, _, _, _ = items['edge'].shape
edge_pred, line_pred = SampleEdgeLineLogits(self.inpaint_model.transformer,
context=[items['img_256'][:b, ...],
items['edge_256'][:b, ...],
items['line_256'][:b, ...]],
mask=items['mask_256'][:b, ...].clone(),
iterations=5,
add_v=0.05, mul_v=4,
device=self.device)
edge_pred, line_pred = edge_pred[:b, ...].detach().to(torch.float32), \
line_pred[:b, ...].detach().to(torch.float32)
if self.config.fix_256 is None or self.config.fix_256 is False:
edge_pred = self.inpaint_model.structure_upsample(edge_pred)[0]
edge_pred = torch.sigmoid((edge_pred + 2) * 2)
line_pred = self.inpaint_model.structure_upsample(line_pred)[0]
line_pred = torch.sigmoid((line_pred + 2) * 2)
items['edge'][:b, ...] = edge_pred.detach()
items['line'][:b, ...] = line_pred.detach()
# inpaint model
iteration = self.inpaint_model.iteration
inputs = (items['image'] * (1 - items['mask']))
items = self.inpaint_model(items)
outputs_merged = (items['predicted_image'] * items['mask']) + (items['image'] * (1 - items['mask']))
if it is not None:
iteration = it
image_per_row = 2
if self.config.SAMPLE_SIZE <= 6:
image_per_row = 1
images = stitch_images(
self.postprocess((items['image']).cpu()),
self.postprocess((inputs).cpu()),
self.postprocess(items['edge'].cpu()),
self.postprocess(items['line'].cpu()),
self.postprocess(items['mask'].cpu()),
self.postprocess((items['predicted_image']).cpu()),
self.postprocess((outputs_merged).cpu()),
img_per_row=image_per_row
)
path = os.path.join(self.samples_path, self.model_name)
name = os.path.join(path, str(iteration).zfill(6) + ".jpg")
create_dir(path)
print('\nsaving sample ' + name)
images.save(name)
def log(self, logs):
with open(self.log_file, 'a') as f:
f.write('%s\n' % ' '.join([str(item[0]) + '\t' + str(item[1]) for item in logs]))
def cuda(self, *args):
return (item.to(self.config.DEVICE) for item in args)
def postprocess(self, img):
# [0, 1] => [0, 255]
img = img * 255.0
img = img.permute(0, 2, 3, 1)
return img.int()
| [
"tqdm.tqdm",
"torch.sigmoid",
"torch.utils.data.distributed.DistributedSampler",
"torch.utils.data.DataLoader",
"torch.no_grad",
"time.time"
] | [((22539, 22655), 'torch.utils.data.DataLoader', 'DataLoader', (['self.val_dataset'], {'shuffle': '(False)', 'pin_memory': '(True)', 'batch_size': 'self.config.BATCH_SIZE', 'num_workers': '(12)'}), '(self.val_dataset, shuffle=False, pin_memory=True, batch_size=\n self.config.BATCH_SIZE, num_workers=12)\n', (22549, 22655), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((1016, 1128), 'torch.utils.data.distributed.DistributedSampler', 'DistributedSampler', (['self.train_dataset'], {'num_replicas': 'config.world_size', 'rank': 'self.global_rank', 'shuffle': '(True)'}), '(self.train_dataset, num_replicas=config.world_size, rank\n =self.global_rank, shuffle=True)\n', (1034, 1128), False, 'from torch.utils.data.distributed import DistributedSampler\n'), ((2187, 2363), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_dataset'], {'shuffle': '(False)', 'pin_memory': '(True)', 'batch_size': '(self.config.BATCH_SIZE // self.config.world_size)', 'num_workers': '(12)', 'sampler': 'self.train_sampler'}), '(self.train_dataset, shuffle=False, pin_memory=True, batch_size=\n self.config.BATCH_SIZE // self.config.world_size, num_workers=12,\n sampler=self.train_sampler)\n', (2197, 2363), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((2472, 2589), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_dataset'], {'pin_memory': '(True)', 'batch_size': 'self.config.BATCH_SIZE', 'num_workers': '(12)', 'shuffle': '(True)'}), '(self.train_dataset, pin_memory=True, batch_size=self.config.\n BATCH_SIZE, num_workers=12, shuffle=True)\n', (2482, 2589), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((3162, 3173), 'time.time', 'time.time', ([], {}), '()\n', (3171, 3173), False, 'import time\n'), ((7100, 7242), 'torch.utils.data.DataLoader', 'DataLoader', (['self.val_dataset'], {'shuffle': '(False)', 'pin_memory': '(True)', 'batch_size': '(self.config.BATCH_SIZE // self.config.world_size)', 'num_workers': 
'(12)'}), '(self.val_dataset, shuffle=False, pin_memory=True, batch_size=\n self.config.BATCH_SIZE // self.config.world_size, num_workers=12)\n', (7110, 7242), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((7368, 7484), 'torch.utils.data.DataLoader', 'DataLoader', (['self.val_dataset'], {'shuffle': '(False)', 'pin_memory': '(True)', 'batch_size': 'self.config.BATCH_SIZE', 'num_workers': '(12)'}), '(self.val_dataset, shuffle=False, pin_memory=True, batch_size=\n self.config.BATCH_SIZE, num_workers=12)\n', (7378, 7484), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((7760, 7775), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (7773, 7775), False, 'import torch\n'), ((7802, 7818), 'tqdm.tqdm', 'tqdm', (['val_loader'], {}), '(val_loader)\n', (7806, 7818), False, 'from tqdm import tqdm\n'), ((9489, 9504), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (9502, 9504), False, 'import torch\n'), ((14059, 14235), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_dataset'], {'shuffle': '(False)', 'pin_memory': '(True)', 'batch_size': '(self.config.BATCH_SIZE // self.config.world_size)', 'num_workers': '(12)', 'sampler': 'self.train_sampler'}), '(self.train_dataset, shuffle=False, pin_memory=True, batch_size=\n self.config.BATCH_SIZE // self.config.world_size, num_workers=12,\n sampler=self.train_sampler)\n', (14069, 14235), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((14344, 14475), 'torch.utils.data.DataLoader', 'DataLoader', (['self.train_dataset'], {'pin_memory': '(True)', 'batch_size': 'self.config.BATCH_SIZE', 'num_workers': '(12)', 'sampler': 'self.train_sampler'}), '(self.train_dataset, pin_memory=True, batch_size=self.config.\n BATCH_SIZE, num_workers=12, sampler=self.train_sampler)\n', (14354, 14475), False, 'from torch.utils.data import DataLoader, RandomSampler\n'), ((15276, 15287), 'time.time', 'time.time', ([], {}), '()\n', (15285, 15287), False, 'import time\n'), ((22732, 
22747), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (22745, 22747), False, 'import torch\n'), ((22774, 22790), 'tqdm.tqdm', 'tqdm', (['val_loader'], {}), '(val_loader)\n', (22778, 22790), False, 'from tqdm import tqdm\n'), ((25830, 25845), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (25843, 25845), False, 'import torch\n'), ((12531, 12643), 'torch.utils.data.distributed.DistributedSampler', 'DistributedSampler', (['self.train_dataset'], {'num_replicas': 'config.world_size', 'rank': 'self.global_rank', 'shuffle': '(True)'}), '(self.train_dataset, num_replicas=config.world_size, rank\n =self.global_rank, shuffle=True)\n', (12549, 12643), False, 'from torch.utils.data.distributed import DistributedSampler\n'), ((12750, 12826), 'torch.utils.data.distributed.DistributedSampler', 'DistributedSampler', (['self.train_dataset'], {'num_replicas': '(1)', 'rank': '(0)', 'shuffle': '(True)'}), '(self.train_dataset, num_replicas=1, rank=0, shuffle=True)\n', (12768, 12826), False, 'from torch.utils.data.distributed import DistributedSampler\n'), ((27110, 27144), 'torch.sigmoid', 'torch.sigmoid', (['((edge_pred + 2) * 2)'], {}), '((edge_pred + 2) * 2)\n', (27123, 27144), False, 'import torch\n'), ((27253, 27287), 'torch.sigmoid', 'torch.sigmoid', (['((line_pred + 2) * 2)'], {}), '((line_pred + 2) * 2)\n', (27266, 27287), False, 'import torch\n'), ((24076, 24110), 'torch.sigmoid', 'torch.sigmoid', (['((edge_pred + 2) * 2)'], {}), '((edge_pred + 2) * 2)\n', (24089, 24110), False, 'import torch\n'), ((24227, 24261), 'torch.sigmoid', 'torch.sigmoid', (['((line_pred + 2) * 2)'], {}), '((line_pred + 2) * 2)\n', (24240, 24261), False, 'import torch\n'), ((6931, 6942), 'time.time', 'time.time', ([], {}), '()\n', (6940, 6942), False, 'import time\n'), ((18079, 18135), 'torch.sigmoid', 'torch.sigmoid', (['((edge_pred + random_add_v) * random_mul_v)'], {}), '((edge_pred + random_add_v) * random_mul_v)\n', (18092, 18135), False, 'import torch\n'), ((18456, 18512), 
'torch.sigmoid', 'torch.sigmoid', (['((line_pred + random_add_v) * random_mul_v)'], {}), '((line_pred + random_add_v) * random_mul_v)\n', (18469, 18512), False, 'import torch\n'), ((22402, 22413), 'time.time', 'time.time', ([], {}), '()\n', (22411, 22413), False, 'import time\n'), ((6852, 6863), 'time.time', 'time.time', ([], {}), '()\n', (6861, 6863), False, 'import time\n'), ((22323, 22334), 'time.time', 'time.time', ([], {}), '()\n', (22332, 22334), False, 'import time\n')] |
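Both trainers above repeatedly rebuild the final image as `predicted_image * mask + image * (1 - mask)`, i.e. network output inside the hole and ground truth elsewhere. A dependency-free sketch of that merge on nested lists (illustrative data, no `torch` required):

```python
def merge(predicted, image, mask):
    # Keep the predicted pixel where mask == 1, the original pixel where mask == 0.
    return [
        [p * m + i * (1 - m) for p, i, m in zip(p_row, i_row, m_row)]
        for p_row, i_row, m_row in zip(predicted, image, mask)
    ]

predicted = [[9, 9], [9, 9]]
image = [[1, 2], [3, 4]]
mask = [[1, 0], [0, 1]]
print(merge(predicted, image, mask))  # -> [[9, 2], [3, 9]]
```

In the trainers this runs on `torch` tensors, where elementwise `*` and broadcasting do the per-pixel work in a single expression.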
#!/usr/bin/python

import getopt
import sys
import os
import json

setConfig = "true"
feEnable = "true"
rebuildImpact = "low"
conf_file = "/../../../config/ibofos_for_perf_ci.conf"
current_path = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(1, current_path + "./")


def config_change(feEnable, impact):
    with open(current_path + conf_file, "r") as jsonFile:
        data = json.load(jsonFile)
    with open("default_ibofos.conf", "w") as jsonFile:
        json.dump(data, jsonFile, indent=4)
    if ("fe_qos" not in data):
        data["fe_qos"] = {}
    if ("perf_impact" not in data):
        # Must be a dict, not a list: it is indexed by key below.
        data["perf_impact"] = {}
    if (feEnable == "true"):
        data["fe_qos"]["enable"] = True
    elif (feEnable == "false"):
        data["fe_qos"]["enable"] = False
    if (impact == "high"):
        data["perf_impact"]["rebuild"] = "high"
    elif (impact == "low"):
        data["perf_impact"]["rebuild"] = "low"
    with open("/etc/pos/pos.conf", "w") as jsonFile:
        json.dump(data, jsonFile, indent=4)


def config_reset():
    with open("default_ibofos.conf", "r") as jsonFile:
        data = json.load(jsonFile)
    # Restore the same file that config_change() wrote.
    with open("/etc/pos/pos.conf", "w") as jsonFile:
        json.dump(data, jsonFile, indent=4)


def help():
    print('Use below command:')
    print('config.py -s true/false -v true/false -f true/false -i high/low')


def main(argv):
    global conf_file, setConfig, feEnable, rebuildImpact
    try:
        opts, args = getopt.getopt(argv, "hs:v:f:i:", ["set", "fe=", "impact=", "vm="])
    except getopt.GetoptError:
        help()
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            help()
            sys.exit()
        elif opt in ("-v", "--vm"):
            if ("true" == arg):
                conf_file = "/../../../config/ibofos_for_vm_ci.conf"
        elif opt in ("-s", "--set"):
            if ("true" != arg and "false" != arg):
                help()
                sys.exit(1)
            setConfig = arg
        elif opt in ("-f", "--fe"):
            if ("true" != arg and "false" != arg):
                help()
                sys.exit(1)
            feEnable = arg
        elif opt in ("-i", "--impact"):
            if ("high" != arg and "low" != arg):
                help()
                sys.exit(1)
            rebuildImpact = arg
    if ("true" == setConfig):
        config_change(feEnable, rebuildImpact)
    else:
        config_reset()


if __name__ == '__main__':
    main(sys.argv[1:])
| [
"getopt.getopt",
"sys.path.insert",
"os.path.realpath",
"sys.exit",
"json.load",
"json.dump"
] | [((241, 280), 'sys.path.insert', 'sys.path.insert', (['(1)', "(current_path + './')"], {}), "(1, current_path + './')\n", (256, 280), False, 'import sys\n'), ((213, 239), 'os.path.realpath', 'os.path.realpath', (['__file__'], {}), '(__file__)\n', (229, 239), False, 'import os\n'), ((393, 412), 'json.load', 'json.load', (['jsonFile'], {}), '(jsonFile)\n', (402, 412), False, 'import json\n'), ((477, 512), 'json.dump', 'json.dump', (['data', 'jsonFile'], {'indent': '(4)'}), '(data, jsonFile, indent=4)\n', (486, 512), False, 'import json\n'), ((998, 1033), 'json.dump', 'json.dump', (['data', 'jsonFile'], {'indent': '(4)'}), '(data, jsonFile, indent=4)\n', (1007, 1033), False, 'import json\n'), ((1126, 1145), 'json.load', 'json.load', (['jsonFile'], {}), '(jsonFile)\n', (1135, 1145), False, 'import json\n'), ((1209, 1244), 'json.dump', 'json.dump', (['data', 'jsonFile'], {'indent': '(4)'}), '(data, jsonFile, indent=4)\n', (1218, 1244), False, 'import json\n'), ((1437, 1503), 'getopt.getopt', 'getopt.getopt', (['argv', '"""hs:v:f:i:"""', "['set', 'fe=', 'impact=', 'vm=']"], {}), "(argv, 'hs:v:f:i:', ['set', 'fe=', 'impact=', 'vm='])\n", (1450, 1503), False, 'import getopt\n'), ((1558, 1569), 'sys.exit', 'sys.exit', (['(2)'], {}), '(2)\n', (1566, 1569), False, 'import sys\n'), ((1651, 1661), 'sys.exit', 'sys.exit', ([], {}), '()\n', (1659, 1661), False, 'import sys\n'), ((1926, 1937), 'sys.exit', 'sys.exit', (['(1)'], {}), '(1)\n', (1934, 1937), False, 'import sys\n'), ((2092, 2103), 'sys.exit', 'sys.exit', (['(1)'], {}), '(1)\n', (2100, 2103), False, 'import sys\n'), ((2259, 2270), 'sys.exit', 'sys.exit', (['(1)'], {}), '(1)\n', (2267, 2270), False, 'import sys\n')] |
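The script above drives everything through `getopt` with the spec `"hs:v:f:i:"` (a trailing colon means the flag takes an argument). A minimal sketch of how that spec parses a sample argument list (the argument values here are made up):

```python
import getopt

opts, args = getopt.getopt(
    ["-s", "true", "-f", "false", "-i", "high"],  # stand-in for sys.argv[1:]
    "hs:v:f:i:",
    ["set", "fe=", "impact=", "vm="],
)
print(opts)  # -> [('-s', 'true'), ('-f', 'false'), ('-i', 'high')]
print(args)  # -> []
```

Unrecognized flags raise `getopt.GetoptError`, which is why the script wraps the call in `try/except` and prints its usage text.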
"""
Módulo com todos os tipos de propriedades do Movidesk.
Exemplo de uso:
>>> from datetime import datetime
>>> from pyvidesk.tickets import Tickets
>>> my_date = datetime(2020, 1, 1, 20, 0, 0)
>>> tickets = Tickets(token="<PASSWORD>")
>>> tickets_properties = tickets.get_properties()
>>> print(tickets_properties["createdDate"] >= my_date)
... createdDate ge 2020-01-01T20:00:00Z
>>> print(tickets_properties["createdDate"].get_description())
... Data de abertura do ticket. A data informada deve estar no formato UTC*.
... *Caso não for informada, será preenchida com a data atual.
"""
from dataclasses import dataclass, field
import datetime
from decimal import Decimal
from dateutil.parser import parse as dateutil_parse
class PropertyBase:
"""
Classe base para todas as propriedades (exceto as complexas).
"""
def __init__(self, name_, description_, read_only, fathers=None):
"""
Args:
name_ (str): O nome da propriedade.
fathers (str): O nome da(s) propriedade(s) "pai(s)", separados por '/',
se houver mais de um.
description_ (str): A descrição da propriedade, conforme documentação do
Movidesk.
read_only (bool): True, se a propriedade for somente leitura. False, do contrário.
('_' em 'name_' e 'description_' para padronizar com a classe ComplexProperty)
"""
self.name_ = name_
self._fathers = fathers
self.description_ = description_
self._read_only = read_only
@property
def is_read_only(self):
return self._read_only
@property
def full_name(self):
if self._fathers:
return "/".join((self._fathers, self.name_))
return self.name_
def get_description(self):
return self.description_
def __repr__(self):
return "<Property({0})>".format(self.full_name)
def serialize(self, value):
        """
        Serializes the value for JSON.
        This method is meant to be used when building the object that
        represents a Movidesk entity.
        That object can then be manipulated to change data.
        Args:
            value (): The value in Python.
        Returns:
            (): The value to be used in the JSON payload.
        """
raise NotImplementedError()
def deserialize(self, value):
        """
        Deserializes the value into Python.
        This method is meant to be used when building the object that
        represents a Movidesk entity.
        That object can then be manipulated to change data.
        Args:
            value (): The value in the JSON payload.
        Returns:
            (): The value in Python.
        """
raise NotImplementedError()
def escape_value(self, value):
        """
        Adjusts the value so the property can be used in the Query class.
        Args:
            value (): The property value.
        Returns:
            value (): A value that can be used in the Query class.
        """
if value is None:
return "null"
return value
def asc(self):
return f"{self.full_name} asc"
def desc(self):
return f"{self.full_name} desc"
def __eq__(self, other):
value = self.escape_value(other)
return f"{self.full_name} eq {value}"
def __ne__(self, other):
value = self.escape_value(other)
return f"{self.full_name} ne {value}"
def __ge__(self, other):
value = self.escape_value(other)
return f"{self.full_name} ge {value}"
def __gt__(self, other):
value = self.escape_value(other)
return f"{self.full_name} gt {value}"
def __le__(self, other):
value = self.escape_value(other)
return f"{self.full_name} le {value}"
def __lt__(self, other):
value = self.escape_value(other)
return f"{self.full_name} lt {value}"
class IntegerProperty(PropertyBase):
    """
    Property that stores an integer.
    """
alias = int
def serialize(self, value):
return value
def deserialize(self, value):
return value
class FloatProperty(IntegerProperty):
    """
    Property that stores a float.
    """
alias = float
class StringProperty(PropertyBase):
    """
    Property that stores a string.
    """
alias = str
def serialize(self, value):
return value
def deserialize(self, value):
return value
def escape_value(self, value):
if value is None:
return "null"
return f"'{value}'"
def contains(self, value):
return f"contains({self.full_name}, {self.escape_value(value)})"
class ArrayProperty(StringProperty):
    """
    Property that stores an array of strings.
    """
alias = list
def has(self, value):
        """
        Builds a query fragment that checks whether an array of strings
        contains a given string.
        Args:
            value (str): The string to search for in the array.
        Returns:
            (str): A string representing that query.
        """
if "/" in self.full_name:
p1, p2 = self.full_name.split("/")
return f"{p1}/any(x: x/{p2}/any(y: y eq {self.escape_value(value)}))"
return f"{self.full_name}/any(x: x eq {self.escape_value(value)})"
    # __contains__ must return a boolean, so the logic above cannot be used
    # to override the 'in' operator.
class BooleanProperty(PropertyBase):
    """
    Property that stores a boolean value.
    """
alias = bool
def escape_value(self, value):
if value:
return "true"
return "false"
def serialize(self, value):
return bool(value)
def deserialize(self, value):
return bool(value)
class DatetimeProperty(PropertyBase):
    """
    Property that stores a datetime object.
    JSON has no native datetime support, so dates are formatted as strings
    following ISO-8601.
    The class also accepts datetime.date objects.
    """
alias = (datetime.datetime, datetime.date)
def escape_value(self, value):
if value is None:
return "null"
if isinstance(value, str):
value = dateutil_parse(value)
return value.isoformat() + "Z"
    # The trailing Z comes from ISO-8601 itself and from Movidesk's UTC date
    # convention: "If the time is in UTC, add a 'Z' directly after the time
    # without a space."
    # Without the "Z" the API returns the following error:
    # Message: The query specified in the URI is not valid. The DateTimeOffset
    # text should be in format 'yyyy-mm-ddThh:mm:ss('.'s+)?(zzzzzz)?'
    # and each field value is within valid range.
def serialize(self, value):
if isinstance(value, datetime.date):
value = datetime.datetime.combine(value, datetime.datetime.min.time())
if isinstance(value, datetime.datetime):
return value.isoformat()
def deserialize(self, value):
if value:
return dateutil_parse(value)
class TimeProperty(PropertyBase):
    """
    Property that stores a datetime.time object.
    """
alias = datetime.time
def escape_value(self, value):
if value is None:
return "null"
if isinstance(value, str):
value = dateutil_parse(value).time()
return value.isoformat()
def serialize(self, value):
if isinstance(value, datetime.time):
return value.isoformat()
def deserialize(self, value):
if value:
return dateutil_parse(value).time()
class DecimalProperty(PropertyBase):
    """
    Property that stores a decimal value. JSON has no native support for it,
    so the value is formatted as a float.
    """
alias = Decimal
def escape_value(self, value):
if value is None:
return "null"
return str(value)
def serialize(self, value):
if value is not None:
return float(value)
def deserialize(self, value):
if value is not None:
return Decimal(str(value))
@dataclass
class ComplexProperty:
    """Class that represents a Movidesk complex property."""
    name_: str  # a complex property may itself have a 'name' property
    description_: str  # a complex property may itself have a 'description' property
properties: dict
read_only: bool = False
fathers: str = None
alias: type = dict
def __post_init__(self):
for property_name, property_infos in self.properties.items():
property_class = property_infos["property"]
setattr(
self,
property_name,
property_class(
name_=property_name,
fathers=self.full_name,
description_=property_infos["description"],
read_only=property_infos["readOnly"],
),
)
@property
def full_name(self):
if self.fathers:
return "/".join((self.fathers, self.name_))
return self.name_
@property
def is_read_only(self):
return self.read_only
def get_description(self):
return self.description_
def get_properties(self, as_model=False):
        """
        Gets this property's child properties.
        Args:
            as_model (bool): True when the properties are used to build a
                model; False otherwise.
        Returns:
            properties (dict): Dictionary with the entity's properties.
        TODO: think of another name/approach for 'as_model'
        """
properties = {}
for property_name, property_infos in self.properties.items():
property_class = property_infos["property"]
property_obj = property_class(
name_=property_name,
fathers=self.full_name,
description_=property_infos["description"],
read_only=property_infos.get("readOnly"),
)
            if as_model:  # used to build a Model from a ComplexProperty
properties[property_obj.name_] = property_obj
else:
properties[property_obj.full_name] = property_obj
return properties
def serialize(self, value):
if isinstance(value, list):
data = []
for v in value:
data.append(self._serialize(v))
return data
return self._serialize(value)
def _serialize(self, values):
if values is None:
return "null"
data = dict()
for prop, value in values.items():
data[prop] = getattr(self, prop).serialize(value)
return data
def deserialize(self, value):
if isinstance(value, list):
data = []
for i in value:
data.append(self._deserialize(i))
return data
return self._deserialize(value)
def _deserialize(self, values):
data = dict()
for prop, value in values.items():
try:
data[prop] = getattr(self, prop).deserialize(value)
            except AttributeError:  # some properties are not documented
data[prop] = value
return data
@dataclass
class CustomFieldValuesItems(ComplexProperty):
    """
    Entity » Additional fields » Items
    Class that represents the items of the customFieldValues field.
    """
properties: dict = field(
default_factory=lambda: {
"personId": {
"property": IntegerProperty,
"description": (
"Id (Cod. ref.) da empresa, departamento ou pessoa. "
"*Obrigatório quando o tipo do campo for lista de pessoas."
),
"readOnly": False,
},
"clientId": {
"property": IntegerProperty,
"description": (
"Id (Cod. ref.) da empresa, departamento ou pessoa. "
"*Obrigatório quando o tipo do campo for lista de clientes."
),
"readOnly": False,
},
"team": {
"property": StringProperty,
"description": (
"Nome da equipe. *Obrigatório quando o tipo do campo lista de agentes "
"(o personId pode ser informado para especificar o agente da equipe)."
),
"readOnly": False,
},
"customFieldItem": {
"property": StringProperty,
"description": (
"Nome do item do campo adicional. *Obrigatório quando o tipo do campo for: "
"lista de valores, seleção múltipla ou seleção única."
),
"readOnly": False,
},
}
)
@dataclass
class CustomFieldValues(ComplexProperty):
    """
    Entity » Additional fields
    Class that represents the customFieldValues field.
    """
properties: dict = field(
default_factory=lambda: {
"customFieldId": {
"property": IntegerProperty,
"description": (
"Id do campo adicional "
"(pode ser obtido na listagem de campos adicionais no website)."
),
"readOnly": False,
},
"customFieldRuleId": {
"property": IntegerProperty,
"description": (
"Id da regra de exibição dos campos adicionais "
"(pode ser obtido na listagem de regras para exibição no website)."
),
"readOnly": False,
},
"line": {
"property": IntegerProperty,
"description": (
"Número da linha da regra de exibição na tela do ticket. "
"Quando a regra não permitir a adição de novas linhas deve ser informado "
"o valor 1 e não devem ser repetidos valores de campos adicionais para o id "
"da regra em conjunto com o id do campo. Para alterar o valor de um campo "
"deve ser informada a linha em que ele se encontra. Os campos que estiverem "
"na base de dados e não forem enviados no corpo da requisição serão excluídos."
),
"readOnly": False,
},
"value": {
"property": StringProperty,
"description": (
"Valor texto do campo adicional. *Obrigatório quando o tipo do campo for: "
"texto de uma linha, texto com várias linhas, texto HTML, expressão regular, "
"numérico, data, hora, data e hora, e-mail, telefone ou URL. "
"Os campos de data devem estar em horário *UTC e no formato "
"YYYY-MM-DDThh:MM:ss.000Z e o campo hora deve ser informado juntamente com a "
"data fixa '1991-01-01'. O campo numérico deve estar no formato brasileiro, "
"por exemplo '1.530,75'."
),
"readOnly": False,
},
"items": {
"property": CustomFieldValuesItems,
"description": (
"Lista de itens. *Obrigatório quando o tipo do campo for: "
"lista de valores, lista de pessoas, lista de clientes, lista de agentes, "
"seleção múltipla ou seleção única. Deve ser informado apenas um item se o "
"campo adicional não permitir seleção múltipla."
),
"readOnly": False,
},
}
)
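The comparison operators on `PropertyBase` return OData-style filter strings instead of booleans, so an expression like `prop >= value` builds a query fragment. A minimal, standalone sketch of that pattern (the `Prop` class and its field names are illustrative only, not part of the pyvidesk API):

```python
# Sketch of the operator-overloading pattern used by PropertyBase:
# comparison operators build filter strings rather than returning booleans.
class Prop:
    def __init__(self, name):
        self.name = name

    def _escape(self, value):
        # None becomes "null" and strings are quoted, as in StringProperty.
        if value is None:
            return "null"
        if isinstance(value, str):
            return f"'{value}'"
        return value

    def __eq__(self, other):
        return f"{self.name} eq {self._escape(other)}"

    def __ge__(self, other):
        return f"{self.name} ge {self._escape(other)}"


print(Prop("baseStatus") == "New")   # baseStatus eq 'New'
print(Prop("id") >= 10)              # id ge 10
```

Overriding `__eq__` this way makes instances unhashable and unusable in normal equality checks, which is an accepted trade-off for the query-builder style.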
| [
"dateutil.parser.parse",
"datetime.datetime.min.time",
"dataclasses.field"
] | [((11793, 12682), 'dataclasses.field', 'field', ([], {'default_factory': "(lambda : {'personId': {'property': IntegerProperty, 'description':\n 'Id (Cod. ref.) da empresa, departamento ou pessoa. *Obrigatório quando o tipo do campo for lista de pessoas.'\n , 'readOnly': False}, 'clientId': {'property': IntegerProperty,\n 'description':\n 'Id (Cod. ref.) da empresa, departamento ou pessoa. *Obrigatório quando o tipo do campo for lista de clientes.'\n , 'readOnly': False}, 'team': {'property': StringProperty,\n 'description':\n 'Nome da equipe. *Obrigatório quando o tipo do campo lista de agentes (o personId pode ser informado para especificar o agente da equipe).'\n , 'readOnly': False}, 'customFieldItem': {'property': StringProperty,\n 'description':\n 'Nome do item do campo adicional. *Obrigatório quando o tipo do campo for: lista de valores, seleção múltipla ou seleção única.'\n , 'readOnly': False}})"}), "(default_factory=lambda : {'personId': {'property': IntegerProperty,\n 'description':\n 'Id (Cod. ref.) da empresa, departamento ou pessoa. *Obrigatório quando o tipo do campo for lista de pessoas.'\n , 'readOnly': False}, 'clientId': {'property': IntegerProperty,\n 'description':\n 'Id (Cod. ref.) da empresa, departamento ou pessoa. *Obrigatório quando o tipo do campo for lista de clientes.'\n , 'readOnly': False}, 'team': {'property': StringProperty,\n 'description':\n 'Nome da equipe. *Obrigatório quando o tipo do campo lista de agentes (o personId pode ser informado para especificar o agente da equipe).'\n , 'readOnly': False}, 'customFieldItem': {'property': StringProperty,\n 'description':\n 'Nome do item do campo adicional. 
*Obrigatório quando o tipo do campo for: lista de valores, seleção múltipla ou seleção única.'\n , 'readOnly': False}})\n", (11798, 12682), False, 'from dataclasses import dataclass, field\n'), ((13387, 15214), 'dataclasses.field', 'field', ([], {'default_factory': '(lambda : {\'customFieldId\': {\'property\': IntegerProperty, \'description\':\n \'Id do campo adicional (pode ser obtido na listagem de campos adicionais no website).\'\n , \'readOnly\': False}, \'customFieldRuleId\': {\'property\': IntegerProperty,\n \'description\':\n \'Id da regra de exibição dos campos adicionais (pode ser obtido na listagem de regras para exibição no website).\'\n , \'readOnly\': False}, \'line\': {\'property\': IntegerProperty,\n \'description\':\n \'Número da linha da regra de exibição na tela do ticket. Quando a regra não permitir a adição de novas linhas deve ser informado o valor 1 e não devem ser repetidos valores de campos adicionais para o id da regra em conjunto com o id do campo. Para alterar o valor de um campo deve ser informada a linha em que ele se encontra. Os campos que estiverem na base de dados e não forem enviados no corpo da requisição serão excluídos.\'\n , \'readOnly\': False}, \'value\': {\'property\': StringProperty,\n \'description\':\n "Valor texto do campo adicional. *Obrigatório quando o tipo do campo for: texto de uma linha, texto com várias linhas, texto HTML, expressão regular, numérico, data, hora, data e hora, e-mail, telefone ou URL. Os campos de data devem estar em horário *UTC e no formato YYYY-MM-DDThh:MM:ss.000Z e o campo hora deve ser informado juntamente com a data fixa \'1991-01-01\'. O campo numérico deve estar no formato brasileiro, por exemplo \'1.530,75\'."\n , \'readOnly\': False}, \'items\': {\'property\': CustomFieldValuesItems,\n \'description\':\n \'Lista de itens. *Obrigatório quando o tipo do campo for: lista de valores, lista de pessoas, lista de clientes, lista de agentes, seleção múltipla ou seleção única. 
Deve ser informado apenas um item se o campo adicional não permitir seleção múltipla.\'\n , \'readOnly\': False}})'}), '(default_factory=lambda : {\'customFieldId\': {\'property\':\n IntegerProperty, \'description\':\n \'Id do campo adicional (pode ser obtido na listagem de campos adicionais no website).\'\n , \'readOnly\': False}, \'customFieldRuleId\': {\'property\': IntegerProperty,\n \'description\':\n \'Id da regra de exibição dos campos adicionais (pode ser obtido na listagem de regras para exibição no website).\'\n , \'readOnly\': False}, \'line\': {\'property\': IntegerProperty,\n \'description\':\n \'Número da linha da regra de exibição na tela do ticket. Quando a regra não permitir a adição de novas linhas deve ser informado o valor 1 e não devem ser repetidos valores de campos adicionais para o id da regra em conjunto com o id do campo. Para alterar o valor de um campo deve ser informada a linha em que ele se encontra. Os campos que estiverem na base de dados e não forem enviados no corpo da requisição serão excluídos.\'\n , \'readOnly\': False}, \'value\': {\'property\': StringProperty,\n \'description\':\n "Valor texto do campo adicional. *Obrigatório quando o tipo do campo for: texto de uma linha, texto com várias linhas, texto HTML, expressão regular, numérico, data, hora, data e hora, e-mail, telefone ou URL. Os campos de data devem estar em horário *UTC e no formato YYYY-MM-DDThh:MM:ss.000Z e o campo hora deve ser informado juntamente com a data fixa \'1991-01-01\'. O campo numérico deve estar no formato brasileiro, por exemplo \'1.530,75\'."\n , \'readOnly\': False}, \'items\': {\'property\': CustomFieldValuesItems,\n \'description\':\n \'Lista de itens. *Obrigatório quando o tipo do campo for: lista de valores, lista de pessoas, lista de clientes, lista de agentes, seleção múltipla ou seleção única. 
Deve ser informado apenas um item se o campo adicional não permitir seleção múltipla.\'\n , \'readOnly\': False}})\n', (13392, 15214), False, 'from dataclasses import dataclass, field\n'), ((6438, 6459), 'dateutil.parser.parse', 'dateutil_parse', (['value'], {}), '(value)\n', (6452, 6459), True, 'from dateutil.parser import parse as dateutil_parse\n'), ((7255, 7276), 'dateutil.parser.parse', 'dateutil_parse', (['value'], {}), '(value)\n', (7269, 7276), True, 'from dateutil.parser import parse as dateutil_parse\n'), ((7067, 7095), 'datetime.datetime.min.time', 'datetime.datetime.min.time', ([], {}), '()\n', (7093, 7095), False, 'import datetime\n'), ((7554, 7575), 'dateutil.parser.parse', 'dateutil_parse', (['value'], {}), '(value)\n', (7568, 7575), True, 'from dateutil.parser import parse as dateutil_parse\n'), ((7804, 7825), 'dateutil.parser.parse', 'dateutil_parse', (['value'], {}), '(value)\n', (7818, 7825), True, 'from dateutil.parser import parse as dateutil_parse\n')] |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

with open('README.rst') as readme_file:
    readme = readme_file.read()

with open('HISTORY.rst') as history_file:
    history = history_file.read().replace('.. :changelog:', '')

# pip.req was a pip-internal module (removed in pip 10), so read the
# requirements file directly instead of importing pip internals.
with open('requirements.txt') as reqs_file:
    requirements = [line.strip() for line in reqs_file
                    if line.strip() and not line.startswith('#')]
test_requirements = requirements
setup(
name='vulyk_declarations_review',
version='0.1.0',
description="Vulyk processed declarations review plugin",
long_description=readme + '\n\n' + history,
author='<NAME>',
author_email='<EMAIL>',
url='https://github.com/mrgambal/vulyk-declarations-review',
packages=[
'vulyk_declarations_review',
'vulyk_declarations_review.models',
'vulyk_declarations_review.static',
'vulyk_declarations_review.views'
],
package_dir={'vulyk_declarations_review':
'vulyk_declarations_review'},
include_package_data=True,
install_requires=requirements,
license="BSD",
zip_safe=False,
keywords='vulyk_declarations_review',
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
"Programming Language :: Python :: 2",
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
],
test_suite='tests',
scripts=[],
tests_require=test_requirements
)
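`pip.req` was never a stable public API and was removed in pip 10, so reading a requirements file directly is more robust. A dependency-free sketch (`read_requirements` is an illustrative helper name, not a pip API):

```python
# Parse requirements text without pip internals: drop comments, blank
# lines, and -e/-r directives, keeping plain requirement specifiers.
def read_requirements(text):
    reqs = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip inline comments
        if line and not line.startswith(("-e", "-r")):
            reqs.append(line)
    return reqs


print(read_requirements("requests>=2.0\n# tooling\nflask==1.1.2  # pinned"))
# ['requests>=2.0', 'flask==1.1.2']
```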
# coding=utf-8
# Copyright 2018 The TF-Agents Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""Train and Eval TD3.
To run:
```bash
tf_agents/agents/td3/examples/v2/train_eval_rnn -- \
--root_dir=$HOME/tmp/td3_rnn/dm/CartPole-Balance/ \
--alsologtostderr
```
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import os
import time
from absl import app
from absl import flags
from absl import logging
import tensorflow as tf
from tf_agents.agents.ddpg import actor_rnn_network
from tf_agents.agents.ddpg import critic_rnn_network
from tf_agents.agents.td3 import td3_agent
from tf_agents.drivers import dynamic_episode_driver
from tf_agents.environments import suite_dm_control
from tf_agents.environments import tf_py_environment
from tf_agents.environments import wrappers
from tf_agents.metrics import metric_utils
from tf_agents.metrics import tf_metrics
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.utils import common
import gin.tf
flags.DEFINE_string('root_dir', os.getenv('TEST_UNDECLARED_OUTPUTS_DIR'),
'Root directory for writing logs/summaries/checkpoints.')
flags.DEFINE_integer('num_iterations', 100000,
'Total number train/eval iterations to perform.')
flags.DEFINE_multi_string('gin_file', None, 'Paths to the gin-config files.')
flags.DEFINE_multi_string('gin_param', None, 'Gin binding parameters.')
FLAGS = flags.FLAGS
@gin.configurable
def train_eval(
root_dir,
env_name='cartpole',
task_name='balance',
observations_whitelist='position',
num_iterations=100000,
actor_fc_layers=(400, 300),
actor_output_fc_layers=(100,),
actor_lstm_size=(40,),
critic_obs_fc_layers=(400,),
critic_action_fc_layers=None,
critic_joint_fc_layers=(300,),
critic_output_fc_layers=(100,),
critic_lstm_size=(40,),
# Params for collect
initial_collect_episodes=1,
collect_episodes_per_iteration=1,
replay_buffer_capacity=100000,
ou_stddev=0.2,
ou_damping=0.15,
# Params for target update
target_update_tau=0.05,
target_update_period=5,
# Params for train
train_steps_per_iteration=200,
batch_size=64,
train_sequence_length=10,
actor_learning_rate=1e-4,
critic_learning_rate=1e-3,
dqda_clipping=None,
td_errors_loss_fn=None,
gamma=0.995,
reward_scale_factor=1.0,
gradient_clipping=None,
use_tf_functions=True,
# Params for eval
num_eval_episodes=10,
eval_interval=10000,
# Params for checkpoints, summaries, and logging
log_interval=1000,
summary_interval=1000,
summaries_flush_secs=10,
debug_summaries=False,
summarize_grads_and_vars=False,
eval_metrics_callback=None):
"""A simple train and eval for TD3."""
root_dir = os.path.expanduser(root_dir)
train_dir = os.path.join(root_dir, 'train')
eval_dir = os.path.join(root_dir, 'eval')
train_summary_writer = tf.compat.v2.summary.create_file_writer(
train_dir, flush_millis=summaries_flush_secs * 1000)
train_summary_writer.set_as_default()
eval_summary_writer = tf.compat.v2.summary.create_file_writer(
eval_dir, flush_millis=summaries_flush_secs * 1000)
eval_metrics = [
tf_metrics.AverageReturnMetric(buffer_size=num_eval_episodes),
tf_metrics.AverageEpisodeLengthMetric(buffer_size=num_eval_episodes)
]
global_step = tf.compat.v1.train.get_or_create_global_step()
with tf.compat.v2.summary.record_if(
lambda: tf.math.equal(global_step % summary_interval, 0)):
if observations_whitelist is not None:
env_wrappers = [
functools.partial(
wrappers.FlattenObservationsWrapper,
observations_whitelist=[observations_whitelist])
]
else:
env_wrappers = []
tf_env = tf_py_environment.TFPyEnvironment(
suite_dm_control.load(env_name, task_name, env_wrappers=env_wrappers))
eval_tf_env = tf_py_environment.TFPyEnvironment(
suite_dm_control.load(env_name, task_name, env_wrappers=env_wrappers))
actor_net = actor_rnn_network.ActorRnnNetwork(
tf_env.time_step_spec().observation,
tf_env.action_spec(),
input_fc_layer_params=actor_fc_layers,
lstm_size=actor_lstm_size,
output_fc_layer_params=actor_output_fc_layers)
critic_net_input_specs = (tf_env.time_step_spec().observation,
tf_env.action_spec())
critic_net = critic_rnn_network.CriticRnnNetwork(
critic_net_input_specs,
observation_fc_layer_params=critic_obs_fc_layers,
action_fc_layer_params=critic_action_fc_layers,
joint_fc_layer_params=critic_joint_fc_layers,
lstm_size=critic_lstm_size,
output_fc_layer_params=critic_output_fc_layers,
)
tf_agent = td3_agent.Td3Agent(
tf_env.time_step_spec(),
tf_env.action_spec(),
actor_network=actor_net,
critic_network=critic_net,
actor_optimizer=tf.compat.v1.train.AdamOptimizer(
learning_rate=actor_learning_rate),
critic_optimizer=tf.compat.v1.train.AdamOptimizer(
learning_rate=critic_learning_rate),
ou_stddev=ou_stddev,
ou_damping=ou_damping,
target_update_tau=target_update_tau,
target_update_period=target_update_period,
dqda_clipping=dqda_clipping,
td_errors_loss_fn=td_errors_loss_fn,
gamma=gamma,
reward_scale_factor=reward_scale_factor,
gradient_clipping=gradient_clipping,
debug_summaries=debug_summaries,
summarize_grads_and_vars=summarize_grads_and_vars,
train_step_counter=global_step,
)
tf_agent.initialize()
train_metrics = [
tf_metrics.NumberOfEpisodes(),
tf_metrics.EnvironmentSteps(),
tf_metrics.AverageReturnMetric(),
tf_metrics.AverageEpisodeLengthMetric(),
]
eval_policy = tf_agent.policy
collect_policy = tf_agent.collect_policy
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
tf_agent.collect_data_spec,
batch_size=tf_env.batch_size,
max_length=replay_buffer_capacity)
initial_collect_driver = dynamic_episode_driver.DynamicEpisodeDriver(
tf_env,
collect_policy,
observers=[replay_buffer.add_batch] + train_metrics,
num_episodes=initial_collect_episodes)
collect_driver = dynamic_episode_driver.DynamicEpisodeDriver(
tf_env,
collect_policy,
observers=[replay_buffer.add_batch] + train_metrics,
num_episodes=collect_episodes_per_iteration)
if use_tf_functions:
initial_collect_driver.run = common.function(initial_collect_driver.run)
collect_driver.run = common.function(collect_driver.run)
tf_agent.train = common.function(tf_agent.train)
# Collect initial replay data.
logging.info(
'Initializing replay buffer by collecting experience for %d episodes '
'with a random policy.', initial_collect_episodes)
initial_collect_driver.run()
results = metric_utils.eager_compute(
eval_metrics,
eval_tf_env,
eval_policy,
num_episodes=num_eval_episodes,
train_step=global_step,
summary_writer=eval_summary_writer,
summary_prefix='Metrics',
)
if eval_metrics_callback is not None:
eval_metrics_callback(results, global_step.numpy())
metric_utils.log_metrics(eval_metrics)
time_step = None
policy_state = collect_policy.get_initial_state(tf_env.batch_size)
timed_at_step = global_step.numpy()
time_acc = 0
# Dataset generates trajectories with shape [BxTx...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=train_sequence_length + 1).prefetch(3)
iterator = iter(dataset)
for _ in range(num_iterations):
start_time = time.time()
time_step, policy_state = collect_driver.run(
time_step=time_step,
policy_state=policy_state,
)
for _ in range(train_steps_per_iteration):
experience, _ = next(iterator)
train_loss = tf_agent.train(experience)
time_acc += time.time() - start_time
if global_step.numpy() % log_interval == 0:
logging.info('step = %d, loss = %f', global_step.numpy(),
train_loss.loss)
steps_per_sec = (global_step.numpy() - timed_at_step) / time_acc
logging.info('%.3f steps/sec', steps_per_sec)
tf.compat.v2.summary.scalar(
name='global_steps_per_sec', data=steps_per_sec, step=global_step)
timed_at_step = global_step.numpy()
time_acc = 0
for train_metric in train_metrics:
train_metric.tf_summaries(
train_step=global_step, step_metrics=train_metrics[:2])
if global_step.numpy() % eval_interval == 0:
results = metric_utils.eager_compute(
eval_metrics,
eval_tf_env,
eval_policy,
num_episodes=num_eval_episodes,
train_step=global_step,
summary_writer=eval_summary_writer,
summary_prefix='Metrics',
)
if eval_metrics_callback is not None:
eval_metrics_callback(results, global_step.numpy())
metric_utils.log_metrics(eval_metrics)
return train_loss
def main(_):
tf.compat.v1.enable_v2_behavior()
logging.set_verbosity(logging.INFO)
gin.parse_config_files_and_bindings(FLAGS.gin_file, FLAGS.gin_param)
train_eval(FLAGS.root_dir, num_iterations=FLAGS.num_iterations)
if __name__ == '__main__':
flags.mark_flag_as_required('root_dir')
app.run(main)
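The `target_update_tau` and `target_update_period` parameters above configure TD3's soft target-network updates. The underlying arithmetic can be sketched standalone (plain Python lists stand in for TF variables; `soft_update` is an illustrative name, not a tf_agents API):

```python
# Polyak ("soft") target update: target <- tau * online + (1 - tau) * target,
# applied every target_update_period train steps. Small tau makes the target
# network track the online network slowly, which stabilizes training.
def soft_update(target, online, tau):
    return [tau * o + (1.0 - tau) * t for t, o in zip(target, online)]


target = [0.0, 1.0]
online = [1.0, 0.0]
print(soft_update(target, online, tau=0.05))  # [0.05, 0.95]
```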
False, 'from tf_agents.agents.ddpg import critic_rnn_network\n'), ((6570, 6713), 'tf_agents.replay_buffers.tf_uniform_replay_buffer.TFUniformReplayBuffer', 'tf_uniform_replay_buffer.TFUniformReplayBuffer', (['tf_agent.collect_data_spec'], {'batch_size': 'tf_env.batch_size', 'max_length': 'replay_buffer_capacity'}), '(tf_agent.collect_data_spec,\n batch_size=tf_env.batch_size, max_length=replay_buffer_capacity)\n', (6616, 6713), False, 'from tf_agents.replay_buffers import tf_uniform_replay_buffer\n'), ((6765, 6933), 'tf_agents.drivers.dynamic_episode_driver.DynamicEpisodeDriver', 'dynamic_episode_driver.DynamicEpisodeDriver', (['tf_env', 'collect_policy'], {'observers': '([replay_buffer.add_batch] + train_metrics)', 'num_episodes': 'initial_collect_episodes'}), '(tf_env, collect_policy,\n observers=[replay_buffer.add_batch] + train_metrics, num_episodes=\n initial_collect_episodes)\n', (6808, 6933), False, 'from tf_agents.drivers import dynamic_episode_driver\n'), ((6980, 7154), 'tf_agents.drivers.dynamic_episode_driver.DynamicEpisodeDriver', 'dynamic_episode_driver.DynamicEpisodeDriver', (['tf_env', 'collect_policy'], {'observers': '([replay_buffer.add_batch] + train_metrics)', 'num_episodes': 'collect_episodes_per_iteration'}), '(tf_env, collect_policy,\n observers=[replay_buffer.add_batch] + train_metrics, num_episodes=\n collect_episodes_per_iteration)\n', (7023, 7154), False, 'from tf_agents.drivers import dynamic_episode_driver\n'), ((7442, 7583), 'absl.logging.info', 'logging.info', (['"""Initializing replay buffer by collecting experience for %d episodes with a random policy."""', 'initial_collect_episodes'], {}), "(\n 'Initializing replay buffer by collecting experience for %d episodes with a random policy.'\n , initial_collect_episodes)\n", (7454, 7583), False, 'from absl import logging\n'), ((7642, 7835), 'tf_agents.metrics.metric_utils.eager_compute', 'metric_utils.eager_compute', (['eval_metrics', 'eval_tf_env', 'eval_policy'], {'num_episodes': 
'num_eval_episodes', 'train_step': 'global_step', 'summary_writer': 'eval_summary_writer', 'summary_prefix': '"""Metrics"""'}), "(eval_metrics, eval_tf_env, eval_policy,\n num_episodes=num_eval_episodes, train_step=global_step, summary_writer=\n eval_summary_writer, summary_prefix='Metrics')\n", (7668, 7835), False, 'from tf_agents.metrics import metric_utils\n'), ((7994, 8032), 'tf_agents.metrics.metric_utils.log_metrics', 'metric_utils.log_metrics', (['eval_metrics'], {}), '(eval_metrics)\n', (8018, 8032), False, 'from tf_agents.metrics import metric_utils\n'), ((4425, 4494), 'tf_agents.environments.suite_dm_control.load', 'suite_dm_control.load', (['env_name', 'task_name'], {'env_wrappers': 'env_wrappers'}), '(env_name, task_name, env_wrappers=env_wrappers)\n', (4446, 4494), False, 'from tf_agents.environments import suite_dm_control\n'), ((4557, 4626), 'tf_agents.environments.suite_dm_control.load', 'suite_dm_control.load', (['env_name', 'task_name'], {'env_wrappers': 'env_wrappers'}), '(env_name, task_name, env_wrappers=env_wrappers)\n', (4578, 4626), False, 'from tf_agents.environments import suite_dm_control\n'), ((6302, 6331), 'tf_agents.metrics.tf_metrics.NumberOfEpisodes', 'tf_metrics.NumberOfEpisodes', ([], {}), '()\n', (6329, 6331), False, 'from tf_agents.metrics import tf_metrics\n'), ((6341, 6370), 'tf_agents.metrics.tf_metrics.EnvironmentSteps', 'tf_metrics.EnvironmentSteps', ([], {}), '()\n', (6368, 6370), False, 'from tf_agents.metrics import tf_metrics\n'), ((6380, 6412), 'tf_agents.metrics.tf_metrics.AverageReturnMetric', 'tf_metrics.AverageReturnMetric', ([], {}), '()\n', (6410, 6412), False, 'from tf_agents.metrics import tf_metrics\n'), ((6422, 6461), 'tf_agents.metrics.tf_metrics.AverageEpisodeLengthMetric', 'tf_metrics.AverageEpisodeLengthMetric', ([], {}), '()\n', (6459, 6461), False, 'from tf_agents.metrics import tf_metrics\n'), ((7240, 7283), 'tf_agents.utils.common.function', 'common.function', (['initial_collect_driver.run'], {}), 
'(initial_collect_driver.run)\n', (7255, 7283), False, 'from tf_agents.utils import common\n'), ((7311, 7346), 'tf_agents.utils.common.function', 'common.function', (['collect_driver.run'], {}), '(collect_driver.run)\n', (7326, 7346), False, 'from tf_agents.utils import common\n'), ((7370, 7401), 'tf_agents.utils.common.function', 'common.function', (['tf_agent.train'], {}), '(tf_agent.train)\n', (7385, 7401), False, 'from tf_agents.utils import common\n'), ((8493, 8504), 'time.time', 'time.time', ([], {}), '()\n', (8502, 8504), False, 'import time\n'), ((4066, 4114), 'tensorflow.math.equal', 'tf.math.equal', (['(global_step % summary_interval)', '(0)'], {}), '(global_step % summary_interval, 0)\n', (4079, 4114), True, 'import tensorflow as tf\n'), ((4193, 4300), 'functools.partial', 'functools.partial', (['wrappers.FlattenObservationsWrapper'], {'observations_whitelist': '[observations_whitelist]'}), '(wrappers.FlattenObservationsWrapper,\n observations_whitelist=[observations_whitelist])\n', (4210, 4300), False, 'import functools\n'), ((5556, 5623), 'tensorflow.compat.v1.train.AdamOptimizer', 'tf.compat.v1.train.AdamOptimizer', ([], {'learning_rate': 'actor_learning_rate'}), '(learning_rate=actor_learning_rate)\n', (5588, 5623), True, 'import tensorflow as tf\n'), ((5663, 5731), 'tensorflow.compat.v1.train.AdamOptimizer', 'tf.compat.v1.train.AdamOptimizer', ([], {'learning_rate': 'critic_learning_rate'}), '(learning_rate=critic_learning_rate)\n', (5695, 5731), True, 'import tensorflow as tf\n'), ((8787, 8798), 'time.time', 'time.time', ([], {}), '()\n', (8796, 8798), False, 'import time\n'), ((9048, 9093), 'absl.logging.info', 'logging.info', (['"""%.3f steps/sec"""', 'steps_per_sec'], {}), "('%.3f steps/sec', steps_per_sec)\n", (9060, 9093), False, 'from absl import logging\n'), ((9102, 9200), 'tensorflow.compat.v2.summary.scalar', 'tf.compat.v2.summary.scalar', ([], {'name': '"""global_steps_per_sec"""', 'data': 'steps_per_sec', 'step': 'global_step'}), 
"(name='global_steps_per_sec', data=steps_per_sec,\n step=global_step)\n", (9129, 9200), True, 'import tensorflow as tf\n'), ((9490, 9683), 'tf_agents.metrics.metric_utils.eager_compute', 'metric_utils.eager_compute', (['eval_metrics', 'eval_tf_env', 'eval_policy'], {'num_episodes': 'num_eval_episodes', 'train_step': 'global_step', 'summary_writer': 'eval_summary_writer', 'summary_prefix': '"""Metrics"""'}), "(eval_metrics, eval_tf_env, eval_policy,\n num_episodes=num_eval_episodes, train_step=global_step, summary_writer=\n eval_summary_writer, summary_prefix='Metrics')\n", (9516, 9683), False, 'from tf_agents.metrics import metric_utils\n'), ((9886, 9924), 'tf_agents.metrics.metric_utils.log_metrics', 'metric_utils.log_metrics', (['eval_metrics'], {}), '(eval_metrics)\n', (9910, 9924), False, 'from tf_agents.metrics import metric_utils\n')] |
from pyfilter import get_new_client
def _wait_enter():
input("Press 'Enter' to exit...")
exit(0)
def main():
cats_in_shelter = [
"brown siamese cat. It's friendly and healthy.", # This looks like something we want!
"A lovely white persian cat. It's healthy, though unvaccinated.",
# The above looks like something we don't want! Wrong colour, unvaccinated and not friendly
"A healthy and friendly brown Sphynx kitty. Vaccinations are up to date!" # Unfortunately, it's the wrong breed
]
# Using our simple object factory API to start up a new client and connect it to the same port as the server
client = get_new_client(insecure_port=8886, quiet=False)
# Unary->Unary (single) filter. We are using the first value so we expect it to pass
resp = client.filter(cats_in_shelter[0], casefold=False)
print('Unary->Unary response: ', resp)
assert resp is True
# Stream->Unary (multi) filter. We expect back a list of all passing strings (aka only the first)
resp = client.multi_filter(cats_in_shelter, casefold=False)
print('Stream->Unary response: ', resp)
assert resp == cats_in_shelter[:1]
# Stream->Stream (multi) filter. We stream strings to the server and expect a stream of bools (passed/filtered out)
i = 0
for resp in client.multi_filter_stream(cats_in_shelter, casefold=False):
        print(f'Stream->Stream for request: {cats_in_shelter[i]} we got response: ', resp)
i += 1
if __name__ == "__main__":
main()
_wait_enter()
| [
"pyfilter.get_new_client"
] | [((668, 715), 'pyfilter.get_new_client', 'get_new_client', ([], {'insecure_port': '(8886)', 'quiet': '(False)'}), '(insecure_port=8886, quiet=False)\n', (682, 715), False, 'from pyfilter import get_new_client\n')] |
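The server-side matching rule is not visible from the pyfilter client code above; judging by the inline comments, only the first description should pass. A plausible stand-in predicate for that behaviour (the `REQUIRED` terms and the `passes` helper are assumptions for illustration, not pyfilter's actual criterion):

```python
# Hypothetical stand-in for the server-side filter; pyfilter's real
# rule is not shown in the client snippet above.
REQUIRED = ("brown", "siamese", "friendly", "healthy")

def passes(description, casefold=False):
    # Normalise case, then require every term to appear.
    text = description.casefold() if casefold else description.lower()
    return all(term in text for term in REQUIRED)

cats = [
    "brown siamese cat. It's friendly and healthy.",
    "A lovely white persian cat. It's healthy, though unvaccinated.",
    "A healthy and friendly brown Sphynx kitty. Vaccinations are up to date!",
]
print([passes(c) for c in cats])  # [True, False, False]
```

This reproduces the outcomes the comments describe: only the first shelter cat matches.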
from optparse import make_option
from django.core.management.base import AppCommand
from django.core.management.sql import sql_custom
from django.db import connections, DEFAULT_DB_ALIAS
class Command(AppCommand):
help = "Prints the custom table modifying SQL statements for the given app name(s)."
option_list = AppCommand.option_list + (
make_option('--database', action='store', dest='database',
default=DEFAULT_DB_ALIAS, help='Nominates a database to print the '
'SQL for. Defaults to the "default" database.'),
)
output_transaction = True
def handle_app(self, app, **options):
return u'\n'.join(sql_custom(app, self.style, connections[options.get('database')])).encode('utf-8')
| [
"optparse.make_option"
] | [((358, 545), 'optparse.make_option', 'make_option', (['"""--database"""'], {'action': '"""store"""', 'dest': '"""database"""', 'default': 'DEFAULT_DB_ALIAS', 'help': '"""Nominates a database to print the SQL for. Defaults to the "default" database."""'}), '(\'--database\', action=\'store\', dest=\'database\', default=\n DEFAULT_DB_ALIAS, help=\n \'Nominates a database to print the SQL for. Defaults to the "default" database.\'\n )\n', (369, 545), False, 'from optparse import make_option\n')] |
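For reference, `make_option` comes from the standard-library `optparse` module (long deprecated in favour of `argparse`, but what older Django `option_list` tuples expect). A standalone check of an option like the one above, using the literal string `'default'` in place of Django's `DEFAULT_DB_ALIAS`:

```python
from optparse import make_option  # stdlib; deprecated in favour of argparse

# Build the same kind of option object the command above appends
# to AppCommand.option_list.
opt = make_option('--database', action='store', dest='database',
                  default='default',
                  help='Nominates a database to print the SQL for.')
print(opt.dest, opt.default)  # database default
```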
from itertools import zip_longest
from typing import List, Union
from bytepatches.ops import Opcode, sync_ops, LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME
from bytepatches.parser import Parser
from bytepatches.utils import patch_function, make_bytecode
class OpNotFound(Exception):
pass
def change_ops(ops: List[Opcode], ops_before: List[Opcode], ops_after: List[Opcode]):
index = 0
found = False
_cache = {}
indices = []
while True:
if index == len(ops):
if not found:
raise OpNotFound("Ops not found!")
break
target = ops[index:index + len(ops_before)]
if target == ops_before:
for existing, op in zip(target, ops_before):
if op is not None and isinstance(op._arg, str):
if op._arg not in _cache:
_cache[op._arg] = [existing]
else:
_cache[op._arg].append(existing)
found = True
indices.append(index)
index += 1
for index in indices:
for before, after in zip_longest(ops_before, ops_after):
if after is not None:
if isinstance(after._arg, str):
target = _cache[after._arg].pop(0)
cls = type(after)
after = cls(target._arg, target.arg, target.val)
if before is None:
# Append after
ops.insert(index, after)
elif after is None:
# Remove before
# We can't pop because that fucks stuff up, but we can set to None and remove later
# Go forwards first
new_target = None
direction = 1
pos = index
target = ops[index]
ops[index] = None
while new_target is None:
pos += direction
try:
new_target = ops[pos]
except IndexError:
direction = -1
for op in ops:
if isinstance(op, JumpOp) and op.val == target:
if op.reljump():
op._arg = new_target.bytecode_pos - op.bytecode_pos
else:
op._arg = new_target.bytecode_pos
op.val = new_target
else:
# Switch ops
for op in ops:
if isinstance(op, JumpOp) and op.val == before:
op.val = after
after.set_bytecode_pos(ops[index].bytecode_pos)
ops[index] = after
index += 1
for index, item in reversed(list(enumerate(ops))):
if item is None:
ops.pop(index)
sync_ops(ops)
def replace(func, before_code: Union[str, List[Opcode]], after_code: Union[str, List[Opcode]], name_to_fast=False):
fn_code = func.__code__
consts = list(fn_code.co_consts)
names = list(fn_code.co_names)
varnames = list(fn_code.co_varnames)
groups = []
if isinstance(before_code, str):
before = compile(before_code, "<input>", "exec")
groups.append(before)
if isinstance(after_code, str):
after = compile(after_code, "<input>", "exec")
groups.append(after)
for group in groups:
for const in group.co_consts:
if const not in consts:
consts.append(const)
for name in group.co_names:
if name not in names:
names.append(name)
for varname in group.co_varnames:
if varname not in varnames:
varnames.append(varname)
if name_to_fast:
for name in names:
if name not in varnames:
varnames.append(name)
if isinstance(before_code, str):
before_ops = Parser(before_code).parse_bytecode(False)
else:
before_ops = before_code
if isinstance(after_code, str):
after_ops = Parser(after_code).parse_bytecode(False)
else:
after_ops = after_code
# TODO: Find a more reliable way to strip LOAD_CONST(None) RETURN_VALUE from code if not in the input
if before_ops[-1].op_name == "RETURN_VALUE" and before_ops[-1].arg is not None and before_ops[-1].arg.arg is None:
before_ops = before_ops[:-2]
after_ops = after_ops[:-2]
if before_ops[-1].op_name == "POP_TOP" and before_ops[-1].arg is None:
before_ops = before_ops[:-1]
after_ops = after_ops[:-1]
if isinstance(before_code, str):
for i, op in enumerate(before_ops):
if name_to_fast:
if op.op_name == "LOAD_NAME":
op = LOAD_FAST(op._arg, op.arg, op.val)
elif op.op_name == "STORE_NAME":
op = STORE_FAST(op._arg, op.arg, op.val)
if "CONST" in op.op_name:
val = before.co_consts[op._arg]
if op._arg != consts.index(val):
op._arg = consts.index(val)
elif "NAME" in op.op_name:
val = before.co_names[op._arg]
if op._arg != names.index(val):
op._arg = names.index(val)
elif "FAST" in op.op_name:
group = before.co_varnames
if name_to_fast:
group += before.co_names
val = group[op._arg]
if op._arg != varnames.index(val):
op._arg = varnames.index(val)
before_ops[i] = op
if isinstance(after_code, str):
for i, op in enumerate(after_ops):
if name_to_fast:
if op.op_name == "LOAD_NAME":
op = LOAD_FAST(op._arg, op.arg, op.val)
elif op.op_name == "STORE_NAME":
op = STORE_FAST(op._arg, op.arg, op.val)
if "CONST" in op.op_name:
val = after.co_consts[op._arg]
if op._arg != consts.index(val):
op._arg = consts.index(val)
elif "NAME" in op.op_name:
val = after.co_names[op._arg]
if op._arg != names.index(val):
op._arg = names.index(val)
elif "FAST" in op.op_name:
group = after.co_varnames
if name_to_fast:
group += after.co_names
val = group[op._arg]
if op._arg != varnames.index(val):
op._arg = varnames.index(val)
after_ops[i] = op
ops = Parser(func).parse_bytecode(False)
change_ops(ops, before_ops, after_ops)
names, varnames = optimize_access(ops)
payload = make_bytecode(ops)
patch_function(func, payload, consts=tuple(consts), names=tuple(names), varnames=tuple(varnames))
return func
def optimize_access(ops: List[Opcode]):
accessed_names = []
accessed_varnames = []
for op in ops:
if isinstance(op, (LOAD_NAME, STORE_NAME)) and op.arg not in accessed_names:
accessed_names.append(op.arg)
elif isinstance(op, (LOAD_FAST, STORE_FAST)) and op.arg not in accessed_varnames:
accessed_varnames.append(op.arg)
accessed_names = tuple(accessed_names)
accessed_varnames = tuple(accessed_varnames)
for op in ops:
if isinstance(op, (LOAD_NAME, STORE_NAME)):
op._arg = accessed_names.index(op.arg)
elif isinstance(op, (LOAD_FAST, STORE_FAST)):
op._arg = accessed_varnames.index(op.arg)
return accessed_names, accessed_varnames
| [
"itertools.zip_longest",
"bytepatches.ops.LOAD_FAST",
"bytepatches.ops.STORE_FAST",
"bytepatches.parser.Parser",
"bytepatches.utils.make_bytecode",
"bytepatches.ops.sync_ops"
] | [((2873, 2886), 'bytepatches.ops.sync_ops', 'sync_ops', (['ops'], {}), '(ops)\n', (2881, 2886), False, 'from bytepatches.ops import Opcode, sync_ops, LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME\n'), ((6807, 6825), 'bytepatches.utils.make_bytecode', 'make_bytecode', (['ops'], {}), '(ops)\n', (6820, 6825), False, 'from bytepatches.utils import patch_function, make_bytecode\n'), ((1122, 1156), 'itertools.zip_longest', 'zip_longest', (['ops_before', 'ops_after'], {}), '(ops_before, ops_after)\n', (1133, 1156), False, 'from itertools import zip_longest\n'), ((6670, 6682), 'bytepatches.parser.Parser', 'Parser', (['func'], {}), '(func)\n', (6676, 6682), False, 'from bytepatches.parser import Parser\n'), ((3953, 3972), 'bytepatches.parser.Parser', 'Parser', (['before_code'], {}), '(before_code)\n', (3959, 3972), False, 'from bytepatches.parser import Parser\n'), ((4095, 4113), 'bytepatches.parser.Parser', 'Parser', (['after_code'], {}), '(after_code)\n', (4101, 4113), False, 'from bytepatches.parser import Parser\n'), ((4804, 4838), 'bytepatches.ops.LOAD_FAST', 'LOAD_FAST', (['op._arg', 'op.arg', 'op.val'], {}), '(op._arg, op.arg, op.val)\n', (4813, 4838), False, 'from bytepatches.ops import Opcode, sync_ops, LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME\n'), ((5824, 5858), 'bytepatches.ops.LOAD_FAST', 'LOAD_FAST', (['op._arg', 'op.arg', 'op.val'], {}), '(op._arg, op.arg, op.val)\n', (5833, 5858), False, 'from bytepatches.ops import Opcode, sync_ops, LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME\n'), ((4913, 4948), 'bytepatches.ops.STORE_FAST', 'STORE_FAST', (['op._arg', 'op.arg', 'op.val'], {}), '(op._arg, op.arg, op.val)\n', (4923, 4948), False, 'from bytepatches.ops import Opcode, sync_ops, LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME\n'), ((5933, 5968), 'bytepatches.ops.STORE_FAST', 'STORE_FAST', (['op._arg', 'op.arg', 'op.val'], {}), '(op._arg, op.arg, op.val)\n', (5943, 5968), False, 'from bytepatches.ops import Opcode, sync_ops, 
LOAD_FAST, STORE_FAST, JumpOp, LOAD_NAME, STORE_NAME\n')] |
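The opcode rewriting above manipulates CPython's bytecode directly. The standard-library `dis` module gives a convenient read-only view of the same instruction stream, which is handy for checking what a pass like `change_ops` actually produced:

```python
import dis

def sample(x):
    return x + 1

# Print the opcode stream that modules like the one above rewrite.
opnames = [instr.opname for instr in dis.get_instructions(sample)]
print(opnames)
```

The exact opcodes vary between CPython versions, but for a function like `sample` the stream always includes a `LOAD_FAST` for `x` and ends by returning a value.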
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import argparse
def make_live_to_txt(live_file, txt_file, source_encoding='UTF-8'):
    """
    Convert a live output file (with the segmentation information in it) to
    a plain text file with one timestamped sentence per line.
    """
    words = []
    start = None
    sentences = []
    with open(live_file, "r", encoding=source_encoding) as live_f:
        live_content = live_f.readlines()
    for line in live_content:
        if "###" in line and len(words) > 0:
            sentences.append("{} : {}".format(start, " ".join(words)))
            start = None
            words = []
        else:
            parts = line.split(" ")
            if len(parts) == 2:
                word_parts = parts[1].replace("(", "").split(",")
                timestamp = parts[0].strip()
                if start is None:
                    start = timestamp
                word = word_parts[0]
                # Filter fillers
                if "<" not in word:
                    words.append(word)
    if words:
        sentences.append("{} : {}".format(start, " ".join(words)))
    with open(txt_file, "w", encoding="utf-8") as txt_f:
        for sentence in sentences:
            print(sentence)
            txt_f.write(sentence + "\n")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Create txt file from live output')
parser.add_argument("live_file", help="the xml (v2) file corresponding to the demo_file.")
parser.add_argument("txt_file", help="the file you want to write too.")
args = parser.parse_args()
make_live_to_txt(args.live_file, args.txt_file)
| [
"argparse.ArgumentParser"
] | [((1211, 1282), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Create txt file from live output"""'}), "(description='Create txt file from live output')\n", (1234, 1282), False, 'import argparse\n')] |
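The parsing loop is easier to verify in isolation. A small extraction of the same logic, assuming the live format is `timestamp (word,confidence)` lines separated by `###` markers (an inference from the parsing code, not a documented format):

```python
def segments(lines):
    # Group (timestamp, word) lines into "start : words" sentences,
    # using "###" lines as sentence separators.
    words, start, out = [], None, []
    for line in lines:
        if "###" in line and words:
            out.append("{} : {}".format(start, " ".join(words)))
            words, start = [], None
            continue
        parts = line.split(" ")
        if len(parts) == 2:
            timestamp = parts[0].strip()
            word = parts[1].replace("(", "").split(",")[0]
            if start is None:
                start = timestamp
            if "<" not in word:  # drop filler tokens such as <noise>
                words.append(word)
    if words:
        out.append("{} : {}".format(start, " ".join(words)))
    return out

print(segments(["0.00 (hello,0.98)", "0.42 (world,0.95)", "###"]))
# ['0.00 : hello world']
```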
from setuptools import setup, find_packages
with open("README.md", "r") as fh:
long_description = fh.read()
setup(name="geots2img",
version="0.1.3",
description="Geo Time Series to Image",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/juliandehoog/geo-timeseries-to-image",
author="<NAME>",
author_email='<EMAIL>',
license="MIT",
packages=find_packages(),
install_requires=[
'pandas',
'setuptools',
'numpy',
'matplotlib',
'scipy',
'Pillow',
'pytz',
],
zip_safe=False)
| [
"setuptools.find_packages"
] | [((459, 474), 'setuptools.find_packages', 'find_packages', ([], {}), '()\n', (472, 474), False, 'from setuptools import setup, find_packages\n')] |
from aiohttp import web
import os
# Response middleware: convert handler return values into HTTP responses
@web.middleware
async def res_middleware(request, handler):
r = await handler(request)
if isinstance(r, web.StreamResponse):
return r
if isinstance(r, bytes):
resp = web.Response(body=r, content_type='application/octet-stream')
return resp
if isinstance(r, str):
if r.startswith('redirect:'):
return web.HTTPFound(r[9:])
resp = web.Response(body=r.encode('utf-8'), content_type='text/html', charset='utf-8')
return resp
if isinstance(r, dict):
template = r.get('__template__')
if template is None:
return web.json_response(r)
else:
r['user'] = request.__user__
shicifile = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'templates', 'shici.html')
with open(shicifile,'rb') as f:
r['poetry'] = f.read().decode('utf-8')
resp = web.Response(
body=request.app['__templating__'].get_template(template).render(**r).encode('utf-8'),content_type='text/html', charset='utf-8')
return resp
if isinstance(r, int) and r >= 100 and r < 600:
        return web.Response(status=r)
if isinstance(r, tuple) and len(r) == 2:
t, m = r
if isinstance(t, int) and t >= 100 and t < 600:
            return web.Response(status=t, text=str(m))
# default:
resp = web.Response(body=str(r).encode('utf-8'), content_type='text/html', charset='utf-8')
return resp
middlewares = [res_middleware] | [
"os.path.abspath",
"aiohttp.web.HTTPFound",
"aiohttp.web.Response",
"aiohttp.web.json_response"
] | [((243, 304), 'aiohttp.web.Response', 'web.Response', ([], {'body': 'r', 'content_type': '"""application/octet-stream"""'}), "(body=r, content_type='application/octet-stream')\n", (255, 304), False, 'from aiohttp import web\n'), ((1214, 1229), 'aiohttp.web.Response', 'web.Response', (['r'], {}), '(r)\n', (1226, 1229), False, 'from aiohttp import web\n'), ((409, 429), 'aiohttp.web.HTTPFound', 'web.HTTPFound', (['r[9:]'], {}), '(r[9:])\n', (422, 429), False, 'from aiohttp import web\n'), ((662, 682), 'aiohttp.web.json_response', 'web.json_response', (['r'], {}), '(r)\n', (679, 682), False, 'from aiohttp import web\n'), ((791, 816), 'os.path.abspath', 'os.path.abspath', (['__file__'], {}), '(__file__)\n', (806, 816), False, 'import os\n')] |
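The branch ordering in the middleware is the interesting part: bytes before str, dict template handling before plain JSON, and status codes before the fallback. A dependency-free sketch of the same dispatch, with response construction stubbed out (the tuple return values here are placeholders, not aiohttp objects):

```python
import json

def render(r):
    # Pure-Python stand-in for the middleware's type dispatch.
    if isinstance(r, bytes):
        return ("application/octet-stream", r)
    if isinstance(r, str):
        if r.startswith("redirect:"):
            return ("redirect", r[9:])
        return ("text/html", r.encode("utf-8"))
    if isinstance(r, dict):
        return ("application/json", json.dumps(r).encode("utf-8"))
    if isinstance(r, int) and 100 <= r < 600:
        return ("status", r)
    return ("text/html", str(r).encode("utf-8"))

print(render("redirect:/blogs"))  # ('redirect', '/blogs')
```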
# plugin which demonstrates
# how to toggle a state
# on a keypress, for simplicity reasons we just start vlc in hidden mode
import os
cfg = globals()["config"]
drv = globals()["drivers"]
AUTOFIRE_PRESS = "_b_autofire_press"
# plugin_data is a user reserved namespace which can be used by the plugins to store global data
if AUTOFIRE_PRESS in cfg.plugin_data:
del cfg.plugin_data[AUTOFIRE_PRESS]
print("Disabling autofire for button B")
os.system("/home/werpu/gamepadservice/b-autofire-remove.sh")
pass
else:
cfg.plugin_data[AUTOFIRE_PRESS] = True
print("Enabling autifore for button B")
os.system("/home/werpu/gamepadservice/b-autofire.sh")
print(cfg.plugin_data)
pass
| [
"os.system"
] | [((454, 514), 'os.system', 'os.system', (['"""/home/werpu/gamepadservice/b-autofire-remove.sh"""'], {}), "('/home/werpu/gamepadservice/b-autofire-remove.sh')\n", (463, 514), False, 'import os\n'), ((622, 675), 'os.system', 'os.system', (['"""/home/werpu/gamepadservice/b-autofire.sh"""'], {}), "('/home/werpu/gamepadservice/b-autofire.sh')\n", (631, 675), False, 'import os\n')] |
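The toggle pattern above — presence of a key in `plugin_data` encodes the on/off state — generalises beyond this plugin. A minimal sketch, with `on`/`off` callables standing in for the `os.system` calls:

```python
def toggle(store, key, on, off):
    # Key present -> currently on -> turn off; key absent -> turn on.
    if key in store:
        del store[key]
        off()
        return False
    store[key] = True
    on()
    return True

state = {}
print(toggle(state, "_b_autofire_press", lambda: None, lambda: None))  # True
print(toggle(state, "_b_autofire_press", lambda: None, lambda: None))  # False
```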
import bs4, requests, os, xlsxwriter, time
start_time = time.time()
def generateID(n):
    # Zero-pad the candidate number to four digits (e.g. 747 -> "0747").
    # str.zfill also handles n == 0 correctly, which the old manual
    # digit-counting loop did not (it produced five characters for 0).
    return str(n).zfill(4)
'''
workbook = xlsxwriter.Workbook('PhoDiemData.xlsx')
worksheet = workbook.add_worksheet()
row = 2
col = 0
for count in range (1000000):
link = 'https://news.zing.vn/tra-cuu-diem-thi-thpt-2017-ket-qua.html?text=' + "01" + generateID(count)
res = requests.get(link)
soup = bs4.BeautifulSoup(res.text, "html.parser")
participantInfo = soup.select('.table td')
if participantInfo != []:
worksheet.write(row, col, participantInfo[0].text)
worksheet.write(row, col + 1, participantInfo[1].text)
worksheet.write(row, col + 2, participantInfo[2].text)
worksheet.write(row, col + 3, participantInfo[3].text)
worksheet.write(row, col + 4, participantInfo[5].text)
row += 1
workbook.close()
'''
count = 747
link = 'https://news.zing.vn/tra-cuu-diem-thi-thpt-2017-ket-qua.html?text=' + "0101" + generateID(count)
print("--- %s seconds ---" % (time.time() - start_time))
| [
"time.time"
] | [((57, 68), 'time.time', 'time.time', ([], {}), '()\n', (66, 68), False, 'import bs4, requests, os, xlsxwriter, time\n'), ((1192, 1203), 'time.time', 'time.time', ([], {}), '()\n', (1201, 1203), False, 'import bs4, requests, os, xlsxwriter, time\n')] |
import scrapy
from scrapy.contrib.pipeline.images import ImagesPipeline
ITEM_PIPELINES = {'imgur.pipelines.ImgurPipeline': 1}
class ImgurPipeline(ImagesPipeline):
def set_filename(self, response):
#add a regex here to check the title is valid for a filename.
return 'full/{0}.jpg'.format(response.meta['title'][0])
def get_media_requests(self, item, info):
for image_url in item['image_urls']:
yield scrapy.Request(image_url, meta={'title': item['title']})
def get_images(self, response, request, info):
for key, image, buf in super(ImgurPipeline, self).get_images(response, request, info):
key = self.set_filename(response)
yield key, image, buf
| [
"scrapy.Request"
] | [((428, 484), 'scrapy.Request', 'scrapy.Request', (['image_url'], {'meta': "{'title': item['title']}"}), "(image_url, meta={'title': item['title']})\n", (442, 484), False, 'import scrapy\n')] |
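The TODO in `set_filename` ("add a regex here to check the title is valid for a filename") can be addressed with a small sanitiser. A sketch using `re` — the allowed character set is our choice for illustration, not something the pipeline mandates:

```python
import re

def safe_filename(title):
    # Keep alphanumerics, dashes and underscores; collapse anything else.
    name = re.sub(r"[^A-Za-z0-9_-]+", "_", title).strip("_")
    return "full/{0}.jpg".format(name or "untitled")

print(safe_filename("A cat / photo!"))  # full/A_cat_photo.jpg
```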
"""
Django settings for fzw project.
Generated by 'django-admin startproject' using Django 2.0.3.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
import dj_database_url
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_DIR = os.path.join(BASE_DIR, 'data')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('SECRET_KEY', 'abracadabra')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(int(os.environ.get('DEBUG', '0')))
BACKEND_HOST = os.environ.get('BACKEND_HOST', '*')
DATABASE_URL = os.environ.get(
'DATABASE_URL',
'postgres://fajniezewiesz:fajniezewiesz@localhost:5432/fajniezewiesz')
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_S3_REGION_NAME = os.environ.get('AWS_S3_REGION_NAME')
FZW_ASSETS_S3_BUCKET = os.environ.get('FZW_ASSETS_S3_BUCKET')
FZW_MEDIA_S3_BUCKET = os.environ.get('FZW_MEDIA_S3_BUCKET')
ALLOWED_HOSTS = [
BACKEND_HOST,
]
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'storages',
'fzw.news',
'fzw.quizes',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'fzw.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'fzw.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
DATABASES = {
'default': dj_database_url.parse(DATABASE_URL, conn_max_age=600)
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa: E501
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa: E501
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa: E501
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa: E501
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s' # noqa
},
'simple': {
'format': '%(levelname)s %(message)s'
},
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'simple',
},
},
'loggers': {
'django': {
'handlers': ['console'],
'level': 'ERROR',
},
'fzw': {
'handlers': ['console'],
'level': 'INFO',
},
},
}
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
USE_S3 = bool(AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
def s3_url(bucket_name):
return 'https://{bucket_name}.s3-eu-west-1.amazonaws.com/'.format(
bucket_name=bucket_name)
if USE_S3 and FZW_ASSETS_S3_BUCKET:
STATIC_URL = s3_url(FZW_ASSETS_S3_BUCKET)
STATIC_ROOT = None
STATICFILES_STORAGE = 'fzw.storage_backends.AssetsStorage'
else:
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(DATA_DIR, 'assets')
if USE_S3 and FZW_MEDIA_S3_BUCKET:
MEDIA_URL = s3_url(FZW_MEDIA_S3_BUCKET)
MEDIA_ROOT = None
DEFAULT_FILE_STORAGE = 'fzw.storage_backends.MediaStorage'
else:
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(DATA_DIR, 'media')
| [
"os.path.abspath",
"os.path.join",
"os.environ.get",
"dj_database_url.parse"
] | [((493, 523), 'os.path.join', 'os.path.join', (['BASE_DIR', '"""data"""'], {}), "(BASE_DIR, 'data')\n", (505, 523), False, 'import os\n'), ((742, 785), 'os.environ.get', 'os.environ.get', (['"""SECRET_KEY"""', '"""abracadabra"""'], {}), "('SECRET_KEY', 'abracadabra')\n", (756, 785), False, 'import os\n'), ((917, 952), 'os.environ.get', 'os.environ.get', (['"""BACKEND_HOST"""', '"""*"""'], {}), "('BACKEND_HOST', '*')\n", (931, 952), False, 'import os\n'), ((968, 1073), 'os.environ.get', 'os.environ.get', (['"""DATABASE_URL"""', '"""postgres://fajniezewiesz:fajniezewiesz@localhost:5432/fajniezewiesz"""'], {}), "('DATABASE_URL',\n 'postgres://fajniezewiesz:fajniezewiesz@localhost:5432/fajniezewiesz')\n", (982, 1073), False, 'import os\n'), ((1100, 1135), 'os.environ.get', 'os.environ.get', (['"""AWS_ACCESS_KEY_ID"""'], {}), "('AWS_ACCESS_KEY_ID')\n", (1114, 1135), False, 'import os\n'), ((1160, 1199), 'os.environ.get', 'os.environ.get', (['"""AWS_SECRET_ACCESS_KEY"""'], {}), "('AWS_SECRET_ACCESS_KEY')\n", (1174, 1199), False, 'import os\n'), ((1221, 1257), 'os.environ.get', 'os.environ.get', (['"""AWS_S3_REGION_NAME"""'], {}), "('AWS_S3_REGION_NAME')\n", (1235, 1257), False, 'import os\n'), ((1282, 1320), 'os.environ.get', 'os.environ.get', (['"""FZW_ASSETS_S3_BUCKET"""'], {}), "('FZW_ASSETS_S3_BUCKET')\n", (1296, 1320), False, 'import os\n'), ((1343, 1380), 'os.environ.get', 'os.environ.get', (['"""FZW_MEDIA_S3_BUCKET"""'], {}), "('FZW_MEDIA_S3_BUCKET')\n", (1357, 1380), False, 'import os\n'), ((2805, 2858), 'dj_database_url.parse', 'dj_database_url.parse', (['DATABASE_URL'], {'conn_max_age': '(600)'}), '(DATABASE_URL, conn_max_age=600)\n', (2826, 2858), False, 'import dj_database_url\n'), ((4803, 4835), 'os.path.join', 'os.path.join', (['DATA_DIR', '"""assets"""'], {}), "(DATA_DIR, 'assets')\n", (4815, 4835), False, 'import os\n'), ((5050, 5081), 'os.path.join', 'os.path.join', (['DATA_DIR', '"""media"""'], {}), "(DATA_DIR, 'media')\n", (5062, 5081), False, 
'import os\n'), ((454, 479), 'os.path.abspath', 'os.path.abspath', (['__file__'], {}), '(__file__)\n', (469, 479), False, 'import os\n'), ((870, 898), 'os.environ.get', 'os.environ.get', (['"""DEBUG"""', '"""0"""'], {}), "('DEBUG', '0')\n", (884, 898), False, 'import os\n')] |
# Something to keep the docker running
import time
count = 0
max_count = 10000
while True:
print ("We are at count {}.".format(count))
count += 1
if count >= max_count:
print ("Resetting.")
count = 0
    time.sleep(1)
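The increment-and-reset in the loop above collapses into a single modulo step. A bounded, sleep-free sketch of the same counter (five iterations instead of an infinite loop, purely for illustration):

```python
max_count = 10000

count = 0
seen = []
for _ in range(5):
    seen.append("We are at count {}.".format(count))
    count = (count + 1) % max_count  # increment and wrap in one step
print(seen[-1])  # prints "We are at count 4."
```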
# -*- test-case-name: mimic.test.test_loadbalancer -*-
"""
Defines add node and delete node from load balancers
"""
from __future__ import absolute_import, division, unicode_literals
import json
from uuid import uuid4
from six import string_types, text_type
from zope.interface import implementer
from twisted.plugin import IPlugin
from mimic.rest.mimicapp import MimicApp
from mimic.imimic import IAPIMock
from mimic.catalog import Entry
from mimic.catalog import Endpoint
from mimic.model.clb_errors import invalid_json_schema
from mimic.model.clb_objects import (
GlobalCLBCollections, BadKeysError, BadValueError
)
from random import randrange
from mimic.util.helper import invalid_resource, json_dump
from mimic.util.helper import json_from_request
from characteristic import attributes
@implementer(IAPIMock, IPlugin)
class LoadBalancerApi(object):
"""
This class registers the load balancer API in the service catalog.
"""
def __init__(self, regions=["ORD"]):
"""
Create an API with the specified regions.
"""
self._regions = regions
def catalog_entries(self, tenant_id):
"""
Cloud load balancer entries.
"""
return [
Entry(tenant_id, "rax:load-balancer", "cloudLoadBalancers",
[
Endpoint(tenant_id, region, text_type(uuid4()),
prefix="v2")
for region in self._regions
])
]
def resource_for_region(self, region, uri_prefix, session_store):
"""
Get an :obj:`twisted.web.iweb.IResource` for the given URI prefix;
implement :obj:`IAPIMock`.
"""
lb_region = LoadBalancerRegion(self, uri_prefix, session_store,
region)
return lb_region.app.resource()
def _get_session(self, session_store, tenant_id):
"""
Retrieve or create a new LoadBalancer session from a given tenant identifier
and :obj:`SessionStore`.
For use with ``data_for_api``.
Temporary hack; see this issue
https://github.com/rackerlabs/mimic/issues/158
"""
return (
session_store.session_for_tenant_id(tenant_id)
.data_for_api(self, lambda: GlobalCLBCollections(
clock=session_store.clock
))
)
@implementer(IAPIMock, IPlugin)
@attributes(["lb_api"])
class LoadBalancerControlApi(object):
"""
This class registers the load balancer controller API in the service
catalog.
"""
def catalog_entries(self, tenant_id):
"""
Cloud load balancer controller endpoints.
"""
return [
Entry(
tenant_id, "rax:load-balancer", "cloudLoadBalancerControl",
[
Endpoint(tenant_id, region, text_type(uuid4()), prefix="v2")
for region in self.lb_api._regions
]
)
]
def resource_for_region(self, region, uri_prefix, session_store):
"""
Get an :obj:`twisted.web.iweb.IResource` for the given URI prefix;
implement :obj:`IAPIMock`.
"""
lbc_region = LoadBalancerControlRegion(api_mock=self, uri_prefix=uri_prefix,
session_store=session_store, region=region)
return lbc_region.app.resource()
@attributes(["api_mock", "uri_prefix", "session_store", "region"])
class LoadBalancerControlRegion(object):
"""
Klein routes for load balancer's control API within a particular region.
"""
app = MimicApp()
def _collection_from_tenant(self, tenant_id):
"""
Retrieve the server collection for this region for the given tenant.
"""
return (self.api_mock.lb_api._get_session(self.session_store, tenant_id)
.collection_for_region(self.region))
@app.route(
'/v2/<string:tenant_id>/loadbalancer/<int:clb_id>/attributes',
methods=['PATCH']
)
def set_attributes(self, request, tenant_id, clb_id):
"""
Alters the supported attributes of the CLB to supported values. To
return things back to normal, you'll first need to list the CLB to get
any original values yourself.
"""
regional_lbs = self._collection_from_tenant(tenant_id)
if not regional_lbs.lb_in_region(clb_id):
request.setResponseCode(404)
return json.dumps({
"message": "Tenant {0} doesn't own load balancer {1}".format(
tenant_id, clb_id
),
"code": 404,
})
try:
content = json_from_request(request)
except ValueError:
request.setResponseCode(400)
return json.dumps(invalid_resource("Invalid JSON request body"))
try:
regional_lbs.set_attributes(clb_id, content)
except BadKeysError as bke:
request.setResponseCode(400)
return json.dumps({
"message": str(bke),
"code": 400,
})
except BadValueError as bve:
request.setResponseCode(400)
return json.dumps({
"message": str(bve),
"code": 400,
})
else:
request.setResponseCode(204)
return b''
class LoadBalancerRegion(object):
"""
Klein routes for load balancer API methods within a particular region.
"""
app = MimicApp()
def __init__(self, api_mock, uri_prefix, session_store, region_name):
"""
Fetches the load balancer id for a failure, invalid scenarios and
        the number of times 422 should be returned on add node.
"""
self.uri_prefix = uri_prefix
self.region_name = region_name
self._api_mock = api_mock
self._session_store = session_store
def session(self, tenant_id):
"""
Gets a session for a particular tenant, creating one if there isn't
one.
"""
tenant_session = self._session_store.session_for_tenant_id(tenant_id)
clb_global_collection = tenant_session.data_for_api(
self._api_mock,
lambda: GlobalCLBCollections(
clock=self._session_store.clock))
clb_region_collection = clb_global_collection.collection_for_region(
self.region_name)
return clb_region_collection
@app.route('/v2/<string:tenant_id>/loadbalancers', methods=['POST'])
def add_load_balancer(self, request, tenant_id):
"""
Creates a load balancer and adds it to the load balancer store.
Returns the newly created load balancer with response code 202
"""
try:
content = json_from_request(request)
except ValueError:
request.setResponseCode(400)
return json.dumps(invalid_resource("Invalid JSON request body"))
lb_id = randrange(99999)
response_data = self.session(tenant_id).add_load_balancer(
content['loadBalancer'], lb_id
)
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>', methods=['GET'])
def get_load_balancers(self, request, tenant_id, lb_id):
"""
        Returns the load balancer with the given id, with response code 200
"""
response_data = self.session(tenant_id).get_load_balancers(lb_id)
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers', methods=['GET'])
def list_load_balancers(self, request, tenant_id):
"""
Returns a list of all load balancers created using mimic with response code 200
"""
response_data = self.session(tenant_id).list_load_balancers()
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>', methods=['DELETE'])
def delete_load_balancer(self, request, tenant_id, lb_id):
"""
        Deletes the given load balancer from the load balancer store and
        returns the deletion response.
"""
response_data = self.session(tenant_id).del_load_balancer(lb_id)
request.setResponseCode(response_data[1])
return json_dump(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes', methods=['POST'])
def add_node_to_load_balancer(self, request, tenant_id, lb_id):
"""
Return a successful add node response with code 200
"""
try:
content = json_from_request(request)
except ValueError:
request.setResponseCode(400)
return json.dumps(invalid_resource("Invalid JSON request body"))
node_list = content['nodes']
response_data = self.session(tenant_id).add_node(node_list, lb_id)
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes/<int:node_id>',
methods=['GET'])
def get_node(self, request, tenant_id, lb_id, node_id):
"""
Returns a 200 response code and a particular node on the load balancer
"""
body, code = self.session(tenant_id).get_node(lb_id, node_id)
request.setResponseCode(code)
return json.dumps(body)
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes/<int:node_id>.atom',
methods=['GET'])
def get_node_feed(self, request, tenant_id, lb_id, node_id):
"""
Returns a 200 response code and node's feed on the load balancer
"""
body, code = self.session(tenant_id).get_node_feed(lb_id, node_id)
request.setResponseCode(code)
request.setHeader(b"Content-Type", b"application/atom+xml")
return body
@app.route(
'/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes/<int:node_id>',
methods=['PUT'])
def update_node(self, request, tenant_id, lb_id, node_id):
"""
Return a 202 response code to updating a node, if successful.
"""
try:
content = json_from_request(request)
assert (isinstance(content, dict) and
list(content.keys()) == ["node"])
content = content["node"]
assert isinstance(content, dict)
except (ValueError, AssertionError):
resp_body, resp_code = invalid_json_schema()
else:
resp_body, resp_code = self.session(tenant_id).update_node(
lb_id, node_id, content
)
request.setResponseCode(resp_code)
if isinstance(resp_body, string_types):
return resp_body
return json.dumps(resp_body)
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes/<int:node_id>',
methods=['DELETE'])
def delete_node_from_load_balancer(self, request, tenant_id, lb_id, node_id):
"""
Returns a 204 response code, for any load balancer created using the mocks
"""
response_data = self.session(tenant_id).delete_node(lb_id, node_id)
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes',
methods=['DELETE'])
def delete_nodes_from_load_balancer(self, request, tenant_id, lb_id):
"""
Deletes multiple nodes from a LB.
"""
node_ids = [int(node_id) for node_id in request.args.get(b'id', [])]
response_data = self.session(tenant_id).delete_nodes(lb_id, node_ids)
request.setResponseCode(response_data[1])
return json_dump(response_data[0])
@app.route('/v2/<string:tenant_id>/loadbalancers/<int:lb_id>/nodes',
methods=['GET'])
def list_nodes_for_load_balancer(self, request, tenant_id, lb_id):
"""
Returns a 200 response code and list of nodes on the load balancer
"""
response_data = self.session(tenant_id).list_nodes(lb_id)
request.setResponseCode(response_data[1])
return json.dumps(response_data[0])
import logging
from multiprocessing import Pool
from pathlib import Path
logging.basicConfig(level=logging.INFO,
format='%(asctime)s - %(name)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S')
DATABASE = 'population_statistics'
cwd = Path(__file__).parent
data_types = {
'general': 't',
'women': 'f',
'men': 'm',
'children_under_five': 't_00_04',
'youth_15_24': 't_15_24',
'elderly_60_plus': 't_60_plus',
'women_of_reproductive_age_15_49': 'f_15_49',
}
def run_process(func):
results = []
pool = Pool()
for name in data_types:
args = [name]
result = pool.apply_async(func, args=args)
results.append(result)
pool.close()
pool.join()
for result in results:
result.get()
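`run_process` above is defined but never invoked, and the `AsyncResult` values it gathers are fetched only for their side effects. A condensed, self-contained sketch of the same `apply_async` fan-out — the `export_table` worker and the two-entry `data_types` are stand-ins for the real per-table logic, and the results are returned rather than dropped:

```python
from multiprocessing import Pool

data_types = {
    'general': 't',
    'women': 'f',
}

def export_table(name):
    # Hypothetical worker; the real script would do per-table work here.
    return name, data_types[name]

def run_process(func):
    # Same fan-out as above, but the gathered results are returned
    # instead of being discarded.
    with Pool() as pool:
        async_results = [pool.apply_async(func, args=[name]) for name in data_types]
        return [r.get() for r in async_results]

if __name__ == '__main__':
    print(sorted(run_process(export_table)))  # prints [('general', 't'), ('women', 'f')]
```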
# Written by @HeisenbergTheDanger (keep credits)
import asyncio
import io
from telethon.tl.types import InputMediaUploadedPhoto
from uniborg.util import admin_cmd
from telebot import CMD_HELP
from telebot.plugins.sql_helper.ghdb_sql import (
add_channel,
get_all_channels,
in_channels,
rm_channel,
)
logs_id = Var.PRIVATE_GROUP_ID
# Keep all credits pls, made with great effort by @HeisenbergTheDanger
@telebot.on(admin_cmd(pattern="forward ?(.*)"))
@telebot.on(sudo_cmd(pattern="forward ?(.*)", allow_sudo=True))
async def forw(event):
if event.fwd_from:
return
mssg = await eor(event, "`...'")
if not event.is_reply:
await mssg.edit("Reply to a message to broadcast.")
return
channels = get_all_channels()
await mssg.edit("Sending...")
error_count = 0
sent_count = 0
if event.reply_to_msg_id:
previous_message = await event.get_reply_message()
previous_message.message
previous_message.raw_text
error_count = 0
for channel in channels:
try:
await borg.forward_messages(int(channel.chat_id), previous_message)
sent_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}"
)
except Exception as error:
try:
await borg.send_message(
logs_id, f"Error in sending at {channel.chat_id}."
)
await borg.send_message(logs_id, "Error! " + str(error))
if error == "The message cannot be empty unless a file is provided":
                    await mssg.edit(
                        "For sending files, upload in Saved Messages and reply .forward to it."
                    )
return
except BaseException:
pass
error_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}",
)
await mssg.edit(f"{sent_count} messages sent with {error_count} errors.")
if error_count > 0:
try:
await borg.send_message(logs_id, f"{error_count} Errors")
except BaseException:
await mssg.edit("Set up log channel for checking errors.")
@telebot.on(admin_cmd(pattern="broadcast ?(.*)"))
@telebot.on(sudo_cmd(pattern="broadcast ?(.*)", allow_sudo=True))
async def _(event):
if event.fwd_from:
return
mssg = await eor(event, "`...`")
if not event.is_reply:
await mssg.edit("Reply to a message to broadcast.")
return
channels = get_all_channels()
error_count = 0
sent_count = 0
await mssg.edit("Sending....")
if event.reply_to_msg_id:
previous_message = await event.get_reply_message()
if previous_message.sticker or previous_message.poll:
await mssg.edit("Reply .forward for stickers and polls.")
return
if (
previous_message.gif
or previous_message.audio
or previous_message.voice
or previous_message.video
or previous_message.video_note
or previous_message.contact
or previous_message.game
or previous_message.geo
or previous_message.invoice
): # Written by @HeisenbergTheDanger
await mssg.edit("Not supported. Try `.forward`")
return
if not previous_message.web_preview and previous_message.photo:
file = await borg.download_file(previous_message.media)
uploaded_doc = await borg.upload_file(file, file_name="img.png")
raw_text = previous_message.text
for channel in channels:
try:
if previous_message.photo:
await borg.send_file(
int(channel.chat_id),
InputMediaUploadedPhoto(file=uploaded_doc),
force_document=False,
caption=raw_text,
link_preview=False,
)
sent_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}",
)
except Exception as error:
try:
await borg.send_message(
logs_id, f"Error in sending at {chat_id}."
)
await borg.send_message(logs_id, "Error! " + str(error))
if (
error
== "The message cannot be empty unless a file is provided"
):
                            await mssg.edit(
                                "For sending files, upload in Saved Messages and reply .forward to it."
                            )
return
except BaseException:
pass
error_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}",
)
await mssg.edit(f"{sent_count} messages sent with {error_count} errors.")
if error_count > 0:
try:
await borg.send_message(logs_id, f"{error_count} Errors")
except BaseException:
pass
else:
raw_text = previous_message.text
for channel in channels:
try:
await borg.send_message(
int(channel.chat_id), raw_text, link_preview=False
)
sent_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}",
)
except Exception as error:
try:
await borg.send_message(
logs_id, f"Error in sending at {channel.chat_id}."
)
await borg.send_message(logs_id, "Error! " + str(error))
if (
error
== "The message cannot be empty unless a file is provided"
):
                        await mssg.edit(
                            "For sending files, upload in Saved Messages and reply .forward to it."
                        )
return
except BaseException:
pass
error_count += 1
await mssg.edit(
f"Sent : {sent_count}\nError : {error_count}\nTotal : {len(channels)}",
)
await mssg.edit(f"{sent_count} messages sent with {error_count} errors.")
if error_count > 0:
try:
await borg.send_message(logs_id, f"{error_count} Errors")
except BaseException:
await mssg.edit("Set up log channel for checking errors.")
# Written by @HeisenbergTheDanger
@telebot.on(admin_cmd(pattern="add ?(.*)"))
async def add_ch(event):
if event.fwd_from:
return
if (
"addcf" in event.raw_text.lower()
or "addblacklist" in event.raw_text.lower()
or "addsudo" in event.raw_text.lower()
): # fix for ".addcf" in lydia, ".addsudo" and ".addblacklist"
return
if event.reply_to_msg_id:
await eor(event, "Adding...")
previous_message = await event.get_reply_message()
raw_text = previous_message.text
lines = raw_text.split("\n")
length = len(lines)
for line_number in range(1, length - 2):
channel_id = lines[line_number][4:-1]
if not in_channels(channel_id):
add_channel(channel_id)
await eor(event, "Channels added!")
await asyncio.sleep(3)
await event.delete()
return
chat_id = event.chat_id
try:
if int(chat_id) == logs_id:
return
except BaseException:
pass
if not in_channels(chat_id):
add_channel(chat_id)
await eor(event, "`Added to database!`")
await asyncio.sleep(3)
await event.delete()
elif in_channels(chat_id):
await eor(event, "`Channel is already is database!`")
await asyncio.sleep(3)
await event.delete()
@telebot.on(admin_cmd(pattern="rm ?(.*)"))
async def remove_ch(event):
if event.fwd_from:
return
chat_id = event.pattern_match.group(1)
if chat_id == "all":
await eor(event, "Removing...")
channels = get_all_channels()
for channel in channels:
rm_channel(channel.chat_id)
await eor(event, "Database cleared.")
return
if in_channels(chat_id):
rm_channel(chat_id)
await eor(event, "Removed from database")
await asyncio.sleep(3)
await event.delete()
elif in_channels(event.chat_id):
rm_channel(event.chat_id)
await eor(event, "Removed from database")
await asyncio.sleep(3)
await event.delete()
elif not in_channels(event.chat_id):
await eor(event, "Channel is already removed from database. ")
await asyncio.sleep(3)
await event.delete()
@telebot.on(admin_cmd(pattern="listchannels"))
@telebot.on(sudo_cmd(pattern="listchannels", allow_sudo=True))
async def list_channels(event):
if event.fwd_from:
return
channels = get_all_channels()
msg = "Channels in database:\n"
for channel in channels:
msg += f"=> `{channel.chat_id}`\n"
msg += f"\nTotal {len(channels)} channels."
if len(msg) > Config.MAX_MESSAGE_SIZE_LIMIT:
with io.BytesIO(str.encode(msg)) as out_file:
out_file.name = "channels.text"
await borg.send_file(
event.chat_id,
out_file,
force_document=True,
allow_cache=False,
caption="Channels in database",
reply_to=event,
)
await event.delete()
else:
await eor(event, msg)
@telebot.on(admin_cmd(pattern="search ?(.*)"))
@telebot.on(sudo_cmd(pattern="search ?(.*)", allow_sudo=True))
async def search(event):
channel_id = event.pattern_match.group(1)
try:
channel = await borg.get_entity(int(channel_id))
except ValueError:
await eor(event, "Invalid id.")
return
except BaseException:
return
name = channel.title
username = channel.username
if username:
username = "@" + username
await eor(event, f"Name : {name}\nUsername: {username}")
CMD_HELP.update(
{
"giveawayhelper": ".add\nUse - Add the channel/group to your database.\
\n\n.rm (all)<channel/group id>\nUse - Remove the channel/group from database. Use rm all to remove all groups.\
\n\n.broadcast <reply to message>\nUse - Send the message to all channels/groups in the db.\
\n\n.forward <reply to polls/stickers>\nUse - Forwards the poll/sticker to all channels/groups in db.\
\n\n.listchannels\nUse - List all added channels.\
\n\n.search <channel id>\nUse - Search for the channel name from id."
}
)
import matplotlib.pyplot as plt
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.clustering import KMeans
from pyspark import SparkContext, SparkConf
import datetime as dt
import dateutil.parser as par
from mpl_toolkits.mplot3d import Axes3D
import pandas as pd
import numpy as np
from pyspark.ml.feature import MinMaxScaler
import pyspark.ml.linalg
conf = SparkConf().setAppName("test").setMaster("local[*]")
sc = SparkContext(conf=conf)
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("KMeans") \
.config("spark.some.config.option", "Angadpreet-KMeans") \
.getOrCreate()
today = dt.datetime.today()
spark_df = spark.read.json("Data/yelp_academic_dataset_business.json").select("stars","review_count","is_open").rdd
scaler = MinMaxScaler(inputCol="_1",\
outputCol="scaled_1")
trial_df = spark_df.map(lambda x: pyspark.ml.linalg.Vectors.dense(x)).map(lambda x:(x, )).toDF()
scalerModel = scaler.fit(trial_df)
vector_df = scalerModel.transform(trial_df).select("scaled_1").rdd.map(lambda x:Vectors.dense(x))
Sum_of_squared_distances = []
K = range(1,15)
for k in K:
km = KMeans()
kme = km.train(vector_df, k)
Sum_of_squared_distances.append(kme.computeCost(vector_df))
plt.plot(K, Sum_of_squared_distances, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum_of_squared_distances')
plt.title('Elbow Method For Optimal k')
plt.show()
| [
"pyspark.mllib.clustering.KMeans",
"matplotlib.pyplot.title",
"matplotlib.pyplot.ylabel",
"matplotlib.pyplot.xlabel",
"matplotlib.pyplot.plot",
"pyspark.mllib.linalg.Vectors.dense",
"pyspark.SparkConf",
"pyspark.sql.SparkSession.builder.appName",
"datetime.datetime.today",
"pyspark.SparkContext",
... | [((459, 482), 'pyspark.SparkContext', 'SparkContext', ([], {'conf': 'conf'}), '(conf=conf)\n', (471, 482), False, 'from pyspark import SparkContext, SparkConf\n'), ((674, 693), 'datetime.datetime.today', 'dt.datetime.today', ([], {}), '()\n', (691, 693), True, 'import datetime as dt\n'), ((819, 868), 'pyspark.ml.feature.MinMaxScaler', 'MinMaxScaler', ([], {'inputCol': '"""_1"""', 'outputCol': '"""scaled_1"""'}), "(inputCol='_1', outputCol='scaled_1')\n", (831, 868), False, 'from pyspark.ml.feature import MinMaxScaler\n'), ((1282, 1326), 'matplotlib.pyplot.plot', 'plt.plot', (['K', 'Sum_of_squared_distances', '"""bx-"""'], {}), "(K, Sum_of_squared_distances, 'bx-')\n", (1290, 1326), True, 'import matplotlib.pyplot as plt\n'), ((1327, 1342), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""k"""'], {}), "('k')\n", (1337, 1342), True, 'import matplotlib.pyplot as plt\n'), ((1343, 1381), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['"""Sum_of_squared_distances"""'], {}), "('Sum_of_squared_distances')\n", (1353, 1381), True, 'import matplotlib.pyplot as plt\n'), ((1382, 1421), 'matplotlib.pyplot.title', 'plt.title', (['"""Elbow Method For Optimal k"""'], {}), "('Elbow Method For Optimal k')\n", (1391, 1421), True, 'import matplotlib.pyplot as plt\n'), ((1422, 1432), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (1430, 1432), True, 'import matplotlib.pyplot as plt\n'), ((1176, 1184), 'pyspark.mllib.clustering.KMeans', 'KMeans', ([], {}), '()\n', (1182, 1184), False, 'from pyspark.mllib.clustering import KMeans\n'), ((1091, 1107), 'pyspark.mllib.linalg.Vectors.dense', 'Vectors.dense', (['x'], {}), '(x)\n', (1104, 1107), False, 'from pyspark.mllib.linalg import Vectors\n'), ((401, 412), 'pyspark.SparkConf', 'SparkConf', ([], {}), '()\n', (410, 412), False, 'from pyspark import SparkContext, SparkConf\n'), ((529, 567), 'pyspark.sql.SparkSession.builder.appName', 'SparkSession.builder.appName', (['"""KMeans"""'], {}), "('KMeans')\n", (557, 567), False, 'from 
pyspark.sql import SparkSession\n')] |
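The Spark script above runs `MinMaxScaler` before clustering so every feature lands in [0, 1]. A pure-Python sketch of that per-feature rescaling (a hypothetical helper, not Spark's API; Spark additionally maps a constant column to the midpoint of the target range, which is mirrored here):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Rescale a list of numbers into [lo, hi], as MinMaxScaler does per feature."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        # Degenerate (constant) column: map everything to the midpoint.
        return [(lo + hi) / 2.0 for _ in values]
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

stars = [1.0, 3.0, 5.0]
print(min_max_scale(stars))  # -> [0.0, 0.5, 1.0]
```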
import os
import mlflow
def dump_mlflow_info():
print("MLflow Info:")
print(" MLflow Version:", mlflow.version.VERSION)
print(" Tracking URI:", mlflow.tracking.get_tracking_uri())
mlflow_host = get_mlflow_host()
print(" Real MLflow host:", mlflow_host)
print(" MLFLOW_TRACKING_URI:", os.environ.get("MLFLOW_TRACKING_URI",""))
print(" DATABRICKS_HOST:", os.environ.get("DATABRICKS_HOST",""))
print(" DATABRICKS_TOKEN:", os.environ.get("DATABRICKS_TOKEN",""))
def get_mlflow_host():
""" Returns the host (tracking URI) and token """
return get_mlflow_host_token()[0]
def get_mlflow_host_token():
""" Returns the host (tracking URI) and token """
uri = os.environ.get("MLFLOW_TRACKING_URI",None)
if uri is not None and uri != "databricks":
return (uri,None)
try:
from mlflow_export_import.common import databricks_cli_utils
profile = os.environ.get("MLFLOW_PROFILE",None)
##host_token = databricks_cli_utils.get_host_token(profile)
return databricks_cli_utils.get_host_token(profile)
#except databricks_cli.utils.InvalidConfigurationError as e:
except Exception as e: # TODO: make more specific
print("WARNING:",e)
return (None,None)
def get_experiment(mlflow_client, exp_id_or_name):
""" Gets an experiment either by ID or name. """
exp = mlflow_client.get_experiment_by_name(exp_id_or_name)
if exp is None:
try:
exp = mlflow_client.get_experiment(exp_id_or_name)
except Exception:
raise Exception(f"Cannot find experiment ID or name '{exp_id_or_name}'. Client: {mlflow_client}'")
return exp
def create_workspace_dir(dbx_client, workspace_dir):
"""
Create Databricks workspace directory.
"""
print(f"Creating Databricks workspace directory '{workspace_dir}'")
dbx_client.post("workspace/mkdirs", { "path": workspace_dir })
def set_experiment(dbx_client, exp_name):
"""
Set experiment name.
For Databricks, create the workspace directory if it doesn't exist.
"""
from mlflow_export_import import utils
if utils.importing_into_databricks():
create_workspace_dir(dbx_client, os.path.dirname(exp_name))
mlflow.set_experiment(exp_name)
experiment = mlflow.get_experiment_by_name(exp_name)
return experiment.experiment_id
# BUG
def _get_experiment(mlflow_client, exp_id_or_name):
try:
exp = mlflow_client.get_experiment(exp_id_or_name)
except Exception:
exp = mlflow_client.get_experiment_by_name(exp_id_or_name)
if exp is None:
raise Exception(f"Cannot find experiment ID or name '{exp_id_or_name}'. Client: {mlflow_client}'")
return exp
| [
"mlflow_export_import.common.databricks_cli_utils.get_host_token",
"mlflow.tracking.get_tracking_uri",
"os.environ.get",
"mlflow.get_experiment_by_name",
"mlflow.set_experiment",
"os.path.dirname",
"mlflow_export_import.utils.importing_into_databricks"
] | [((707, 750), 'os.environ.get', 'os.environ.get', (['"""MLFLOW_TRACKING_URI"""', 'None'], {}), "('MLFLOW_TRACKING_URI', None)\n", (721, 750), False, 'import os\n'), ((2136, 2169), 'mlflow_export_import.utils.importing_into_databricks', 'utils.importing_into_databricks', ([], {}), '()\n', (2167, 2169), False, 'from mlflow_export_import import utils\n'), ((2243, 2274), 'mlflow.set_experiment', 'mlflow.set_experiment', (['exp_name'], {}), '(exp_name)\n', (2264, 2274), False, 'import mlflow\n'), ((2292, 2331), 'mlflow.get_experiment_by_name', 'mlflow.get_experiment_by_name', (['exp_name'], {}), '(exp_name)\n', (2321, 2331), False, 'import mlflow\n'), ((159, 193), 'mlflow.tracking.get_tracking_uri', 'mlflow.tracking.get_tracking_uri', ([], {}), '()\n', (191, 193), False, 'import mlflow\n'), ((313, 354), 'os.environ.get', 'os.environ.get', (['"""MLFLOW_TRACKING_URI"""', '""""""'], {}), "('MLFLOW_TRACKING_URI', '')\n", (327, 354), False, 'import os\n'), ((387, 424), 'os.environ.get', 'os.environ.get', (['"""DATABRICKS_HOST"""', '""""""'], {}), "('DATABRICKS_HOST', '')\n", (401, 424), False, 'import os\n'), ((458, 496), 'os.environ.get', 'os.environ.get', (['"""DATABRICKS_TOKEN"""', '""""""'], {}), "('DATABRICKS_TOKEN', '')\n", (472, 496), False, 'import os\n'), ((920, 958), 'os.environ.get', 'os.environ.get', (['"""MLFLOW_PROFILE"""', 'None'], {}), "('MLFLOW_PROFILE', None)\n", (934, 958), False, 'import os\n'), ((1041, 1085), 'mlflow_export_import.common.databricks_cli_utils.get_host_token', 'databricks_cli_utils.get_host_token', (['profile'], {}), '(profile)\n', (1076, 1085), False, 'from mlflow_export_import.common import databricks_cli_utils\n'), ((2212, 2237), 'os.path.dirname', 'os.path.dirname', (['exp_name'], {}), '(exp_name)\n', (2227, 2237), False, 'import os\n')] |
from django.conf.urls import url, include
from . import views
urlpatterns = [
url(r"^api/v1/", include(("webhook.api_v1.urls", "webhook.api_v1"))),
url(r"^api/v2/", include(("webhook.api_v2.urls", "webhook.api_v2"))),
url(r"^(\d+)$", views.list_webhook, name="list_webhook"),
]
| [
"django.conf.urls.include",
"django.conf.urls.url"
] | [((232, 288), 'django.conf.urls.url', 'url', (['"""^(\\\\d+)$"""', 'views.list_webhook'], {'name': '"""list_webhook"""'}), "('^(\\\\d+)$', views.list_webhook, name='list_webhook')\n", (235, 288), False, 'from django.conf.urls import url, include\n'), ((101, 151), 'django.conf.urls.include', 'include', (["('webhook.api_v1.urls', 'webhook.api_v1')"], {}), "(('webhook.api_v1.urls', 'webhook.api_v1'))\n", (108, 151), False, 'from django.conf.urls import url, include\n'), ((175, 225), 'django.conf.urls.include', 'include', (["('webhook.api_v2.urls', 'webhook.api_v2')"], {}), "(('webhook.api_v2.urls', 'webhook.api_v2'))\n", (182, 225), False, 'from django.conf.urls import url, include\n')] |
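The `urlpatterns` above bind regexes to views; under the hood Django tries each pattern in order and calls the first matching view with the captured groups. A toy dispatcher in that spirit (illustrative only — real routing goes through `django.urls`):

```python
import re

def list_webhook(webhook_id):
    # Stand-in for the views.list_webhook view above.
    return "webhook %s" % webhook_id

routes = [(re.compile(r"^(\d+)$"), list_webhook)]

def dispatch(path):
    for pattern, view in routes:
        m = pattern.match(path)
        if m:
            return view(*m.groups())
    return "404"

print(dispatch("42"))   # -> webhook 42
print(dispatch("abc"))  # -> 404
```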
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1 import adapter
import mock
from openstack.tests.unit import base
from otcextensions.sdk.auto_scaling.v1 import group
EXAMPLE = {
"networks": [
{
"id": " a8327883-6b07-4497-9c61-68d03ee193a "
}
],
"detail": None,
"scaling_group_name": "healthCheck",
"scaling_group_id": "77a7a397-7d2f-4e79-9da9-6a35e2709150",
"scaling_group_status": "INSERVICE",
"scaling_configuration_id": "1d281494-6085-4579-b817-c1f813be835f",
"scaling_configuration_name": "healthCheck",
"current_instance_number": 0,
"desire_instance_number": 1,
"min_instance_number": 0,
"max_instance_number": 500,
"cool_down_time": 300,
"lb_listener_id": "f06c0112570743b51c0e8fbe1f235bab",
"security_groups": [
{
"id": "8a4b1d5b-0054-419f-84b1-5c8a59ebc829"
}
],
"create_time": "2015-07-23T02:46:29Z",
"vpc_id": "863ccae2-ee85-4d27-bc5b-3ba2a198a9e2",
"health_periodic_audit_method": "ELB_AUDIT",
"health_periodic_audit_time": "5",
"instance_terminate_policy": "OLD_CONFIG_OLD_INSTANCE",
"is_scaling": False,
"delete_publicip": False,
"notifications": [
"EMAIL"
]
}
EXAMPLE_EXTEND = {
"networks": [
{
"id": " a8327883-6b07-4497-9c61-68d03ee193a "
}
],
"detail": None,
"scaling_group_name": "healthCheck",
"scaling_group_id": "77a7a397-7d2f-4e79-9da9-6a35e2709150",
"scaling_group_status": "INSERVICE",
"scaling_configuration_id": "1d281494-6085-4579-b817-c1f813be835f",
"scaling_configuration_name": "healthCheck",
"current_instance_number": 0,
"desire_instance_number": 1,
"min_instance_number": 0,
"max_instance_number": 500,
"cool_down_time": 300,
"lbaas_listeners": [
{
"pool_id": "2f7dae72-fb59-4fa1-b663-042dcd030f81",
"protocol_port": 80,
"weight": 1
}
],
"security_groups": [
{
"id": "8a4b1d5b-0054-419f-84b1-5c8a59ebc829"
}
],
"create_time": "2015-07-23T02:46:29Z",
"vpc_id": "863ccae2-ee85-4d27-bc5b-3ba2a198a9e2",
"health_periodic_audit_method": "ELB_AUDIT",
"health_periodic_audit_time": "5",
"health_periodic_audit_grace_period": 600,
"instance_terminate_policy": "OLD_CONFIG_OLD_INSTANCE",
"is_scaling": False,
"delete_publicip": False,
"delete_volume": False,
"notifications": [
"EMAIL"
],
"multi_az_priority_policy": "EQUILIBRIUM_DISTRIBUTE"
}
class TestGroup(base.TestCase):
def setUp(self):
super(TestGroup, self).setUp()
self.sess = mock.Mock(spec=adapter.Adapter)
self.sess.get = mock.Mock()
self.sess.post = mock.Mock()
self.sess.delete = mock.Mock()
self.sess.put = mock.Mock()
self.sess.get_project_id = mock.Mock()
self.sot = group.Group(**EXAMPLE)
def test_basic(self):
sot = group.Group()
self.assertEqual('scaling_group', sot.resource_key)
self.assertEqual('scaling_groups', sot.resources_key)
self.assertEqual('/scaling_group', sot.base_path)
self.assertTrue(sot.allow_list)
self.assertTrue(sot.allow_create)
self.assertTrue(sot.allow_fetch)
self.assertTrue(sot.allow_commit)
self.assertTrue(sot.allow_delete)
def test_make_it(self):
sot = group.Group(**EXAMPLE)
self.assertEqual(EXAMPLE['scaling_group_id'], sot.id)
self.assertEqual(EXAMPLE['scaling_group_name'], sot.name)
self.assertEqual(EXAMPLE['create_time'], sot.create_time)
def test_make_it_extend(self):
sot = group.Group(**EXAMPLE_EXTEND)
self.assertEqual(EXAMPLE_EXTEND['scaling_group_id'], sot.id)
self.assertEqual(EXAMPLE_EXTEND['scaling_group_name'], sot.name)
self.assertEqual(EXAMPLE_EXTEND['create_time'], sot.create_time)
self.assertEqual(
EXAMPLE_EXTEND['lbaas_listeners'], sot.lbaas_listeners
)
self.assertEqual(
EXAMPLE_EXTEND['health_periodic_audit_grace_period'],
sot.health_periodic_audit_grace_period
)
self.assertEqual(
EXAMPLE_EXTEND['delete_volume'], sot.delete_volume
)
self.assertEqual(
EXAMPLE_EXTEND['multi_az_priority_policy'],
sot.multi_az_priority_policy
)
| [
"mock.Mock",
"otcextensions.sdk.auto_scaling.v1.group.Group"
] | [((3199, 3230), 'mock.Mock', 'mock.Mock', ([], {'spec': 'adapter.Adapter'}), '(spec=adapter.Adapter)\n', (3208, 3230), False, 'import mock\n'), ((3255, 3266), 'mock.Mock', 'mock.Mock', ([], {}), '()\n', (3264, 3266), False, 'import mock\n'), ((3292, 3303), 'mock.Mock', 'mock.Mock', ([], {}), '()\n', (3301, 3303), False, 'import mock\n'), ((3331, 3342), 'mock.Mock', 'mock.Mock', ([], {}), '()\n', (3340, 3342), False, 'import mock\n'), ((3367, 3378), 'mock.Mock', 'mock.Mock', ([], {}), '()\n', (3376, 3378), False, 'import mock\n'), ((3414, 3425), 'mock.Mock', 'mock.Mock', ([], {}), '()\n', (3423, 3425), False, 'import mock\n'), ((3445, 3467), 'otcextensions.sdk.auto_scaling.v1.group.Group', 'group.Group', ([], {}), '(**EXAMPLE)\n', (3456, 3467), False, 'from otcextensions.sdk.auto_scaling.v1 import group\n'), ((3509, 3522), 'otcextensions.sdk.auto_scaling.v1.group.Group', 'group.Group', ([], {}), '()\n', (3520, 3522), False, 'from otcextensions.sdk.auto_scaling.v1 import group\n'), ((3953, 3975), 'otcextensions.sdk.auto_scaling.v1.group.Group', 'group.Group', ([], {}), '(**EXAMPLE)\n', (3964, 3975), False, 'from otcextensions.sdk.auto_scaling.v1 import group\n'), ((4220, 4249), 'otcextensions.sdk.auto_scaling.v1.group.Group', 'group.Group', ([], {}), '(**EXAMPLE_EXTEND)\n', (4231, 4249), False, 'from otcextensions.sdk.auto_scaling.v1 import group\n')] |
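The test above builds `mock.Mock(spec=adapter.Adapter)` so the fake session only exposes attributes the real `Adapter` has. A minimal sketch of that `spec` behaviour using the stdlib `unittest.mock` (rather than the standalone `mock` package the test imports) and a stand-in `Adapter` class:

```python
from unittest import mock

class Adapter:
    def get(self, url):  # stand-in for keystoneauth1's adapter.Adapter
        raise NotImplementedError

# spec=Adapter restricts the mock to Adapter's real attributes.
sess = mock.Mock(spec=Adapter)
sess.get.return_value = {"status": "INSERVICE"}
print(sess.get("/scaling_group"))  # -> {'status': 'INSERVICE'}
try:
    sess.post("/scaling_group")  # not on the spec -> AttributeError
except AttributeError as exc:
    print("rejected:", exc)
```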
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class BookinfoItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
coverImage = scrapy.Field() #"cover.jpg",
    classify = scrapy.Field()  # "category",
    index = scrapy.Field()  # "rank on the listing page",
    pageNo = scrapy.Field()  # "page number within the listing",
    content = scrapy.Field()  # "HTML content"
| [
"scrapy.Field"
] | [((295, 309), 'scrapy.Field', 'scrapy.Field', ([], {}), '()\n', (307, 309), False, 'import scrapy\n'), ((341, 355), 'scrapy.Field', 'scrapy.Field', ([], {}), '()\n', (353, 355), False, 'import scrapy\n'), ((380, 394), 'scrapy.Field', 'scrapy.Field', ([], {}), '()\n', (392, 394), False, 'import scrapy\n'), ((424, 438), 'scrapy.Field', 'scrapy.Field', ([], {}), '()\n', (436, 438), False, 'import scrapy\n'), ((468, 482), 'scrapy.Field', 'scrapy.Field', ([], {}), '()\n', (480, 482), False, 'import scrapy\n')] |
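`scrapy.Item` is essentially a dict whose keys are restricted to the declared `Field`s. A minimal stand-in showing that declarative pattern (illustrative — scrapy's real classes use a metaclass, and only two of the item's fields are reproduced here for brevity):

```python
class Field(dict):
    """Placeholder field metadata container, like scrapy.Field."""

class Item(dict):
    fields = {}
    def __setitem__(self, key, value):
        if key not in self.fields:
            raise KeyError("%s is not a declared field" % key)
        super().__setitem__(key, value)

class BookinfoItem(Item):
    fields = {"coverImage": Field(), "classify": Field()}

item = BookinfoItem()
item["classify"] = "novel"
print(dict(item))  # -> {'classify': 'novel'}
```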
from telegram import Update
from telegram.ext import CallbackContext
import re
import db
import random
import operator
from utils import oppisWithSameText
class Oppija:
def __init__(self):
self.commands = { 'opi': self.learnHandler,
'opis': self.opisCountHandler,
'jokotai': self.jokotaiHandler,
'alias': self.aliasHandler,
'arvaa': self.guessHandler }
self.correctOppi = {}
def getCommands(self):
return self.commands
def defineTerm(self, update: Update, context: CallbackContext, question, inverted=False):
definition = db.findOppi(question.group(2), update.message.chat.id)
if definition is not None:
if (inverted):
inverted_definition = self.invertStringList(definition)[0]
inverted_question = self.invertStringList([question.group(2)])[0]
context.bot.sendMessage(chat_id=update.message.chat_id, text=(inverted_definition + ' :' + inverted_question))
else:
context.bot.sendMessage(chat_id=update.message.chat_id, text=(question.group(2) + ': ' + definition[0]))
else:
no_idea = 'En tiedä'
if (inverted):
no_idea = self.invertStringList([no_idea])[0]
context.bot.sendMessage(chat_id=update.message.chat_id, text=no_idea)
def learnHandler(self, update: Update, context: CallbackContext):
if len(context.args) < 2:
context.bot.sendMessage(chat_id=update.message.chat_id, text='Usage: /opi <asia> <määritelmä>')
return
keyword, definition = context.args[0], ' '.join(context.args[1:])
self.learn(update, context, keyword, definition)
def learn(self, update: Update, context: CallbackContext, keyword, definition):
chat_id = update.message.chat.id
db.upsertOppi(keyword, definition, chat_id, update.message.from_user.username)
def opisCountHandler(self, update: Update, context: CallbackContext):
result = db.countOpis(update.message.chat.id)
context.bot.sendMessage(chat_id=update.message.chat_id, text=(str(result[0]) + ' opis'))
def randomOppiHandler(self, update: Update, context: CallbackContext, inverted=False):
if (inverted):
result = db.randomOppi(update.message.chat.id)
inverted_result = self.invertStringList(result)
context.bot.sendMessage(chat_id=update.message.chat_id, text=(inverted_result[1] + ' :' + inverted_result[0]))
else:
result = db.randomOppi(update.message.chat.id)
context.bot.sendMessage(chat_id=update.message.chat_id, text=(result[0] + ': ' + result[1]))
def invertStringList(self, list):
# Reference table for the Unicode chars: http://www.upsidedowntext.com/unicode
chars_standard = 'abcdefghijklmnopqrstuvwxyzåäö'
chars_inverted = 'ɐqɔpǝɟbɥıɾʞןɯuodbɹsʇnʌʍxʎzɐɐo'
chars_standard += '_,;.?!/\\\'<>(){}[]`&'
chars_inverted += '‾\'؛˙¿¡/\\,><)(}{][,⅋'
chars_standard += 'ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄÖ'
chars_inverted += '∀qϽᗡƎℲƃHIſʞ˥WNOԀὉᴚS⊥∩ΛMXʎZ∀∀O'
chars_standard += '0123456789'
chars_inverted += '0ƖᄅƐㄣϛ9ㄥ86'
inverted_list = []
for string in list:
inverted_string = ''
for char in string:
try:
charIndex = chars_standard.index(char)
except:
inverted_string += char
continue
inverted_string += chars_inverted[charIndex]
# Reverse the string to make it readable upside down
inverted_list.append(inverted_string[::-1])
return inverted_list
def jokotaiHandler(self, update: Update, context: CallbackContext):
sides = ['kruuna', 'klaava']
maximalRigging = random.choice(sides)
riggedQuestion = re.match(r"^(\?\?)\s(\S+)$", "?? " + maximalRigging)
context.bot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text='*♪ Se on kuulkaas joko tai, joko tai! ♪*')
self.defineTerm(update, context, riggedQuestion)
def aliasHandler(self, update: Update, context: CallbackContext):
chat_id = update.message.chat_id
if chat_id not in self.correctOppi:
self.correctOppi[chat_id] = None
if self.correctOppi[chat_id] is None:
definitions = db.readDefinitions(chat_id)
correctOppi = random.choice(definitions)
self.correctOppi[chat_id] = oppisWithSameText(definitions, correctOppi[0])
message = 'Arvaa mikä oppi: \"{}\"?'.format(self.correctOppi[chat_id][0])
context.bot.sendMessage(chat_id=chat_id, text=message)
else:
context.bot.sendMessage(chat_id=chat_id,
text='Edellinen alias on vielä käynnissä! Selitys oli: \"{}\"?'.format(self.correctOppi[chat_id][0]))
def guessHandler(self, update: Update, context: CallbackContext):
chat_id = update.message.chat_id
if chat_id not in self.correctOppi:
self.correctOppi[chat_id] = None
if len(context.args) < 1:
return
elif self.correctOppi[chat_id] is not None:
if context.args[0].lower() in self.correctOppi[chat_id][1]:
self.correctOppi[chat_id] = None
context.bot.sendSticker(chat_id=chat_id, sticker='CAADBAADuAADQAGFCMDNfgtXUw0QFgQ')
def messageHandler(self, update: Update, context: CallbackContext):
msg = update.message
if msg.text is not None:
# Matches messages in formats "?? something" and "¿¿ something"
question = re.match(r"^(\?\?)\s(\S+)$", msg.text)
inverted_question = re.match(r"^(\¿\¿)\s(\S+)$", msg.text)
if question:
self.defineTerm(update, context, question)
elif inverted_question:
self.defineTerm(update, context, inverted_question, True)
# Matches message "?!"
elif re.match(r"^(\?\!)$", msg.text):
self.randomOppiHandler(update, context)
# Matches message "¡¿"
elif re.match(r"^(\¡\¿)$", msg.text):
self.randomOppiHandler(update, context, True)
elif re.match(r"^.+\?$", msg.text) and random.randint(1, 50) == 1:
getattr(context.bot, (lambda _, __: _(_, __))(
lambda _, __: chr(__ % 256) + _(_, __ // 256) if __ else "",
122589709182092589684122995)
)(chat_id=operator.attrgetter((lambda _, __: _(_, __))(
lambda _, __: chr(__ % 256) + _(_, __ // 256) if __ else "",
521366901555324942823356189990151533))(update), text=((lambda _, __: _(_, __))(
lambda _, __: chr(__ % 256) + _(_, __ // 256) if __ else "",
random.sample([3041605, 779117898, 17466, 272452313416, 7022364615740061032, 2360793474633670572049331836447094], 1)[0])))
| [
"db.upsertOppi",
"random.choice",
"db.randomOppi",
"utils.oppisWithSameText",
"random.sample",
"re.match",
"db.readDefinitions",
"db.countOpis",
"random.randint"
] | [((1931, 2009), 'db.upsertOppi', 'db.upsertOppi', (['keyword', 'definition', 'chat_id', 'update.message.from_user.username'], {}), '(keyword, definition, chat_id, update.message.from_user.username)\n', (1944, 2009), False, 'import db\n'), ((2102, 2138), 'db.countOpis', 'db.countOpis', (['update.message.chat.id'], {}), '(update.message.chat.id)\n', (2114, 2138), False, 'import db\n'), ((3955, 3975), 'random.choice', 'random.choice', (['sides'], {}), '(sides)\n', (3968, 3975), False, 'import random\n'), ((4001, 4056), 're.match', 're.match', (['"""^(\\\\?\\\\?)\\\\s(\\\\S+)$"""', "('?? ' + maximalRigging)"], {}), "('^(\\\\?\\\\?)\\\\s(\\\\S+)$', '?? ' + maximalRigging)\n", (4009, 4056), False, 'import re\n'), ((2372, 2409), 'db.randomOppi', 'db.randomOppi', (['update.message.chat.id'], {}), '(update.message.chat.id)\n', (2385, 2409), False, 'import db\n'), ((2628, 2665), 'db.randomOppi', 'db.randomOppi', (['update.message.chat.id'], {}), '(update.message.chat.id)\n', (2641, 2665), False, 'import db\n'), ((4522, 4549), 'db.readDefinitions', 'db.readDefinitions', (['chat_id'], {}), '(chat_id)\n', (4540, 4549), False, 'import db\n'), ((4577, 4603), 'random.choice', 'random.choice', (['definitions'], {}), '(definitions)\n', (4590, 4603), False, 'import random\n'), ((4644, 4690), 'utils.oppisWithSameText', 'oppisWithSameText', (['definitions', 'correctOppi[0]'], {}), '(definitions, correctOppi[0])\n', (4661, 4690), False, 'from utils import oppisWithSameText\n'), ((5803, 5844), 're.match', 're.match', (['"""^(\\\\?\\\\?)\\\\s(\\\\S+)$"""', 'msg.text'], {}), "('^(\\\\?\\\\?)\\\\s(\\\\S+)$', msg.text)\n", (5811, 5844), False, 'import re\n'), ((5874, 5915), 're.match', 're.match', (['"""^(\\\\¿\\\\¿)\\\\s(\\\\S+)$"""', 'msg.text'], {}), "('^(\\\\¿\\\\¿)\\\\s(\\\\S+)$', msg.text)\n", (5882, 5915), False, 'import re\n'), ((6160, 6192), 're.match', 're.match', (['"""^(\\\\?\\\\!)$"""', 'msg.text'], {}), "('^(\\\\?\\\\!)$', msg.text)\n", (6168, 6192), False, 'import re\n'), 
((6302, 6334), 're.match', 're.match', (['"""^(\\\\¡\\\\¿)$"""', 'msg.text'], {}), "('^(\\\\¡\\\\¿)$', msg.text)\n", (6310, 6334), False, 'import re\n'), ((6415, 6444), 're.match', 're.match', (['"""^.+\\\\?$"""', 'msg.text'], {}), "('^.+\\\\?$', msg.text)\n", (6423, 6444), False, 'import re\n'), ((6449, 6470), 'random.randint', 'random.randint', (['(1)', '(50)'], {}), '(1, 50)\n', (6463, 6470), False, 'import random\n'), ((7024, 7144), 'random.sample', 'random.sample', (['[3041605, 779117898, 17466, 272452313416, 7022364615740061032, \n 2360793474633670572049331836447094]', '(1)'], {}), '([3041605, 779117898, 17466, 272452313416, 7022364615740061032,\n 2360793474633670572049331836447094], 1)\n', (7037, 7144), False, 'import random\n')] |
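`invertStringList` above maps each character to an upside-down Unicode lookalike and then reverses the string so it reads correctly when flipped. A trimmed sketch of the same trick covering only lowercase ASCII (using the bot's own lookup table):

```python
# Lowercase ASCII slice of the Oppija lookup table.
STD = "abcdefghijklmnopqrstuvwxyz"
INV = "ɐqɔpǝɟbɥıɾʞןɯuodbɹsʇnʌʍxʎz"
FLIP = {s: i for s, i in zip(STD, INV)}

def invert(text):
    # Flip each char (unknown chars pass through), then reverse.
    return "".join(FLIP.get(ch, ch) for ch in text)[::-1]

print(invert("hello"))  # -> oןןǝɥ
```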
#!/usr/bin/python
import struct
import os
import sys
import tempfile
import subprocess
useragent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'
def main():
try:
p = subprocess.Popen('./pow.py ask 3'.split())
p.communicate()
if p.returncode != 2:
exit(1)
except:
exit(1)
print('Would you like to add a file to the VM? This isn\'t part of the challenge (Y/n)')
choice = raw_input()
if choice.strip() != 'n':
print('File URL (max size 1MB): ')
url = raw_input().strip()
tmp_file = tempfile.mktemp()
# Do some basic validation of the URL
if not (url.startswith('http://') or url.startswith('https://')) \
or 'localhost' in url \
or '::1' in url \
or '127.0.0.1' in url:
print('Invalid URL')
exit(1)
# Fetch the file
p = subprocess.Popen(['curl', '-A', useragent, '--max-filesize', '1048576', '-o', tmp_file, url]) # max 1MB
p.communicate()
if p.returncode != 0:
            print('exited with code {}'.format(p.returncode))
exit(1)
# Validate magic of the downloaded file
with open(tmp_file) as f:
if f.read(4) != '\x7fELF':
#print('ELF files only')
exit(1)
# Make copy of initramfs and insert exploit file
new_ramfs = tempfile.mkdtemp()
#print('New initramfs: {}'.format(new_ramfs))
os.system('cp -r base_qemu/initramfs/ {}'.format(new_ramfs))
out_file = '{}/initramfs/bin/exploit'.format(new_ramfs)
#print('Moving {} to {}'.format(tmp_file, out_file))
os.system('mv {} {}'.format(tmp_file, out_file))
print('Your binary is at /bin/exploit')
# Pack new initramfs
os.system('./pack_initramfs.sh {}/initramfs/ src/kpets.ko'.format(new_ramfs))
os.system('./start_qemu.sh qemu/bzImage {}/initramfs.cpio'.format(new_ramfs))
os.system('rm -r {}'.format(new_ramfs))
else:
# Use standard initramfs
os.system('./start_qemu.sh qemu/bzImage qemu/initramfs.cpio')
if __name__=="__main__":
main()
| [
"tempfile.mktemp",
"subprocess.Popen",
"os.system",
"tempfile.mkdtemp"
] | [((646, 663), 'tempfile.mktemp', 'tempfile.mktemp', ([], {}), '()\n', (661, 663), False, 'import tempfile\n'), ((991, 1088), 'subprocess.Popen', 'subprocess.Popen', (["['curl', '-A', useragent, '--max-filesize', '1048576', '-o', tmp_file, url]"], {}), "(['curl', '-A', useragent, '--max-filesize', '1048576',\n '-o', tmp_file, url])\n", (1007, 1088), False, 'import subprocess\n'), ((1489, 1507), 'tempfile.mkdtemp', 'tempfile.mkdtemp', ([], {}), '()\n', (1505, 1507), False, 'import tempfile\n'), ((2163, 2224), 'os.system', 'os.system', (['"""./start_qemu.sh qemu/bzImage qemu/initramfs.cpio"""'], {}), "('./start_qemu.sh qemu/bzImage qemu/initramfs.cpio')\n", (2172, 2224), False, 'import os\n')] |
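The download script above vets URLs with substring checks, which rejects any URL merely *containing* `localhost` and misses tricks like `localhost.evil.com`. A stricter hostname-based sketch using `urllib.parse` (illustrative; the blocklist below is an assumption matching the script's three checks):

```python
from urllib.parse import urlparse

BLOCKED_HOSTS = {"localhost", "127.0.0.1", "::1"}

def url_allowed(url):
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    # Compare the parsed hostname, not raw substrings of the URL.
    return (parsed.hostname or "") not in BLOCKED_HOSTS

print(url_allowed("http://example.com/x.elf"))    # True
print(url_allowed("http://localhost:8080/x.elf"))  # False
```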
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import shutil
import sys
import tempfile
from observations.r.pension import pension
def test_pension():
"""Test module pension.py by downloading
pension.csv and testing shape of
extracted data has 194 rows and 19 columns
"""
test_path = tempfile.mkdtemp()
x_train, metadata = pension(test_path)
try:
assert x_train.shape == (194, 19)
except:
shutil.rmtree(test_path)
    raise
| [
"observations.r.pension.pension",
"tempfile.mkdtemp",
"shutil.rmtree"
] | [((362, 380), 'tempfile.mkdtemp', 'tempfile.mkdtemp', ([], {}), '()\n', (378, 380), False, 'import tempfile\n'), ((403, 421), 'observations.r.pension.pension', 'pension', (['test_path'], {}), '(test_path)\n', (410, 421), False, 'from observations.r.pension import pension\n'), ((481, 505), 'shutil.rmtree', 'shutil.rmtree', (['test_path'], {}), '(test_path)\n', (494, 505), False, 'import shutil\n')] |
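The test above follows a common pattern: create a scratch directory with `tempfile.mkdtemp`, work inside it, and clean up with `shutil.rmtree` on failure. A self-contained sketch of that pattern (modern code can use `tempfile.TemporaryDirectory` instead, which wraps the same steps):

```python
import os
import shutil
import tempfile

test_path = tempfile.mkdtemp()
try:
    # Stand-in for the downloaded CSV the real test extracts.
    with open(os.path.join(test_path, "pension.csv"), "w") as f:
        f.write("x\n1\n")
    assert os.path.exists(os.path.join(test_path, "pension.csv"))
finally:
    shutil.rmtree(test_path)
print(os.path.exists(test_path))  # -> False: directory cleaned up
```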
import numpy as np
from scipy.sparse import csc_matrix, save_npz, hstack
import time
import argparse
import gzip
from pysam import VariantFile, TabixFile
import json
import os
import itertools
parser = argparse.ArgumentParser(description='Pull genotypes.')
parser.add_argument('vcf_file', type=str, help='VCF file to pull from.')
parser.add_argument('assembly', type=str, help='Human genome reference used.')
parser.add_argument('out_directory', type=str, help='Output directory.')
parser.add_argument('chrom', type=str, help='Chromosome of interest.')
parser.add_argument('--batch_size', type=int, default=-1, help='Restrict number of positions per file to batch_size.')
parser.add_argument('--batch_num', type=int, default=0, help='To be used along with batch_size to restrict positions per file. Will include positions >= batch_num*batch_size and <= (batch_num+1)*batch_size')
parser.add_argument('--maxsize', type=int, default=500000000, help='Amount of memory per block.')
parser.add_argument('--additional_vcf_files', type=str, nargs='+', help='Additional VCF files to pull data from.')
parser.add_argument('--id_mapper_file', type=str, default=None, help='File that maps old ids to new ones.')
parser.add_argument('--id_mapper_sep', type=str, default='\t', help='Separater to parse id_mapper_file.')
parser.add_argument('--old_id_index', type=int, default=0, help='Index of old_id in id_mapper_file.')
parser.add_argument('--new_id_index', type=int, default=1, help='Index of new_id in id_mapper_file.')
args = parser.parse_args()
t0 = time.time()
chrom_int = 23 if args.chrom == 'X' else 24 if args.chrom == 'Y' else 25 if args.chrom == 'MT' else int(args.chrom)
gen_mapping = {'./.': -1, '0/0': 0, '0|0': 0, '0/1': 1, '0|1': 1, '1/0': 1, '1|0': 1, '1/1': 2, '1|1': 2}
def process_header(vcf):
sample_ids = [x.replace('.', '_') for x in vcf.header.samples]
if args.id_mapper_file is not None:
old_id_to_new_id = dict()
with open(args.id_mapper_file, 'r') as f:
for line in f:
pieces = line.strip().split(args.id_mapper_sep)
if len(pieces)>args.old_id_index and len(pieces)>args.new_id_index:
old_id_to_new_id[pieces[args.old_id_index]] = pieces[args.new_id_index]
sample_ids = [old_id_to_new_id[x] for x in sample_ids]
sample_file = '%s/samples.json' % args.out_directory
if os.path.isfile(sample_file):
with open(sample_file, 'r') as f:
stored_sample_ids = json.load(f)
assert sample_ids == stored_sample_ids
else:
with open(sample_file, 'w+') as f:
json.dump(sample_ids, f)
return sample_ids, vcf.header.contigs
def process_body(records, sample_ids):
data, indices, indptr, index = np.zeros((args.maxsize,), dtype=np.int8), np.zeros((args.maxsize,), dtype=int), [0], 0
chrom_coord = []
with gzip.open('%s/chr.%s.%d.gen.variants.txt.gz' % (args.out_directory, args.chrom, args.batch_num), 'wt') as variant_f:
for line in records:
pieces = line.strip().split('\t')
fmt = pieces[8].strip().split(':')
# Write variant to file
variant_f.write('\t'.join(pieces[:9]) + '\n')
# pull chrom_coord information
pos, _, ref, alt = pieces[1:5]
is_biallelic_snp = 1 if len(ref) == 1 and len(alt) == 1 and ref != '.' and alt != '.' else 0
is_pass = pieces[6] == 'PASS'
chrom_coord.append((chrom_int, int(pos), is_biallelic_snp, is_pass))
# pull genotypes
gen_index = fmt.index('GT')
for i, piece in enumerate(pieces[9:]):
segment = piece.split(':', maxsplit=gen_index+1)
gt = gen_mapping.get(segment[gen_index], -1) # For now we mark multi-base loci as unknown
if gt != 0:
indices[index] = i
data[index] = gt
index += 1
indptr.append(index)
gen = csc_matrix((data[:index], indices[:index], indptr), shape=(len(sample_ids), len(indptr)-1), dtype=np.int8)
# Save to file
save_npz('%s/chr.%s.%d.gen' % (args.out_directory, args.chrom, args.batch_num), gen)
np.save('%s/chr.%s.%d.gen.coordinates' % (args.out_directory, args.chrom, args.batch_num), np.asarray(np.asarray(chrom_coord, dtype=int), dtype=int))
print('Completed in ', time.time()-t0, 'sec')
with open('%s/info.json' % args.out_directory, 'w+') as f:
json.dump({'assembly': args.assembly, 'batch_size': args.batch_size, 'vcf_directory': '/'.join(args.vcf_file.split('/')[:-1])}, f)
vcf = VariantFile(args.vcf_file)
sample_ids, contigs = process_header(vcf)
if args.additional_vcf_files is not None:
for vcf_file in args.additional_vcf_files:
if os.path.isfile(vcf_file):
new_vcf = VariantFile(vcf_file)
new_sample_ids, _ = process_header(new_vcf)
assert sample_ids == new_sample_ids
else:
print(vcf_file, 'does not exist')
contig = None
if args.chrom in contigs:
contig = contigs[args.chrom]
elif 'chr%s' % args.chrom in contigs:
contig = contigs['chr%s' % args.chrom]
else:
raise Exception('Trouble finding contig', args.chrom, 'in', contigs)
print('Chrom length', contig.length)
vcf_files = [args.vcf_file]
if args.additional_vcf_files is not None:
vcf_files.extend(args.additional_vcf_files)
if np.all([os.path.isfile(vcf_file + '.tbi') for vcf_file in vcf_files]):
vcfs = [TabixFile(vcf_file, parser=None) for vcf_file in vcf_files]
if args.batch_size != -1:
start_pos, end_pos = args.batch_num*args.batch_size, (args.batch_num+1)*args.batch_size
print('Interval', start_pos, end_pos)
if start_pos < contig.length:
process_body(itertools.chain(*[vcf.fetch(reference=contig.name, start=start_pos, end=end_pos) for vcf in vcfs]), sample_ids)
else:
            print('Interval (%d-%d) starts beyond the end of the chromosome (length=%d).' % (start_pos, end_pos, contig.length))
else:
process_body(itertools.chain(*[vcf.fetch(reference=contig.name) for vcf in vcfs]), sample_ids)
else:
print('Error, .tbi files are missing.')
| [
"pysam.VariantFile",
"argparse.ArgumentParser",
"gzip.open",
"numpy.asarray",
"os.path.isfile",
"numpy.zeros",
"pysam.TabixFile",
"json.load",
"scipy.sparse.save_npz",
"time.time",
"json.dump"
] | [((204, 258), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Pull genotypes."""'}), "(description='Pull genotypes.')\n", (227, 258), False, 'import argparse\n'), ((1546, 1557), 'time.time', 'time.time', ([], {}), '()\n', (1555, 1557), False, 'import time\n'), ((4632, 4658), 'pysam.VariantFile', 'VariantFile', (['args.vcf_file'], {}), '(args.vcf_file)\n', (4643, 4658), False, 'from pysam import VariantFile, TabixFile\n'), ((2395, 2422), 'os.path.isfile', 'os.path.isfile', (['sample_file'], {}), '(sample_file)\n', (2409, 2422), False, 'import os\n'), ((4141, 4230), 'scipy.sparse.save_npz', 'save_npz', (["('%s/chr.%s.%d.gen' % (args.out_directory, args.chrom, args.batch_num))", 'gen'], {}), "('%s/chr.%s.%d.gen' % (args.out_directory, args.chrom, args.\n batch_num), gen)\n", (4149, 4230), False, 'from scipy.sparse import csc_matrix, save_npz, hstack\n'), ((2771, 2811), 'numpy.zeros', 'np.zeros', (['(args.maxsize,)'], {'dtype': 'np.int8'}), '((args.maxsize,), dtype=np.int8)\n', (2779, 2811), True, 'import numpy as np\n'), ((2813, 2849), 'numpy.zeros', 'np.zeros', (['(args.maxsize,)'], {'dtype': 'int'}), '((args.maxsize,), dtype=int)\n', (2821, 2849), True, 'import numpy as np\n'), ((2889, 2996), 'gzip.open', 'gzip.open', (["('%s/chr.%s.%d.gen.variants.txt.gz' % (args.out_directory, args.chrom, args\n .batch_num))", '"""wt"""'], {}), "('%s/chr.%s.%d.gen.variants.txt.gz' % (args.out_directory, args.\n chrom, args.batch_num), 'wt')\n", (2898, 2996), False, 'import gzip\n'), ((4802, 4826), 'os.path.isfile', 'os.path.isfile', (['vcf_file'], {}), '(vcf_file)\n', (4816, 4826), False, 'import os\n'), ((5439, 5472), 'os.path.isfile', 'os.path.isfile', (["(vcf_file + '.tbi')"], {}), "(vcf_file + '.tbi')\n", (5453, 5472), False, 'import os\n'), ((5514, 5546), 'pysam.TabixFile', 'TabixFile', (['vcf_file'], {'parser': 'None'}), '(vcf_file, parser=None)\n', (5523, 5546), False, 'from pysam import VariantFile, TabixFile\n'), ((2498, 2510), 
'json.load', 'json.load', (['f'], {}), '(f)\n', (2507, 2510), False, 'import json\n'), ((2627, 2651), 'json.dump', 'json.dump', (['sample_ids', 'f'], {}), '(sample_ids, f)\n', (2636, 2651), False, 'import json\n'), ((4332, 4366), 'numpy.asarray', 'np.asarray', (['chrom_coord'], {'dtype': 'int'}), '(chrom_coord, dtype=int)\n', (4342, 4366), True, 'import numpy as np\n'), ((4407, 4418), 'time.time', 'time.time', ([], {}), '()\n', (4416, 4418), False, 'import time\n'), ((4850, 4871), 'pysam.VariantFile', 'VariantFile', (['vcf_file'], {}), '(vcf_file)\n', (4861, 4871), False, 'from pysam import VariantFile, TabixFile\n')] |
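The genotype loop in `process_body` above builds a compressed-sparse-column matrix by hand: non-zero genotypes go into `data`/`indices`, and `indptr` records where each variant's column starts before the arrays are handed to `scipy.sparse.csc_matrix`. A pure-Python sketch of that CSC bookkeeping (the function names here are illustrative, not from the original):

```python
def pack_csc(columns):
    """Pack dense columns (lists of ints) into CSC arrays (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for col in columns:
        for row, value in enumerate(col):
            if value != 0:            # store non-zero entries only
                indices.append(row)
                data.append(value)
        indptr.append(len(data))      # column j spans data[indptr[j]:indptr[j+1]]
    return data, indices, indptr

def unpack_column(j, data, indices, indptr, nrows):
    """Rebuild dense column j from the CSC arrays."""
    dense = [0] * nrows
    for k in range(indptr[j], indptr[j + 1]):
        dense[indices[k]] = data[k]
    return dense

data, indices, indptr = pack_csc([[0, 1, 0], [2, 0, -1]])
print(data, indices, indptr)                       # [1, 2, -1] [1, 0, 2] [0, 1, 3]
print(unpack_column(1, data, indices, indptr, 3))  # [2, 0, -1]
```

This is exactly the shape `csc_matrix((data, indices, indptr))` expects, which is why the loop above can append `index` to `indptr` once per variant line.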
import random
class Car:
def __init__(self, num_of_street, streets):
self.num_of_street = num_of_street
self.streets = streets
'''
tot_duration = 0
for street in streets:
tot_duration += street.time
self.path_duration = tot_duration
'''
def description(self):
return "Num of street along path: " + str(self.num_of_street) + ". \nDescription: " + str(self.streets)
class Street:
def __init__(self, start_intersec, end_intersec, name, time):
self.start_intersec = start_intersec
self.end_intersec = end_intersec
self.name = name
self.time = time
self.num_of_cars = 0
self.num_of_reamining_routes = 0
self.positions_in_path = []
def get_avg_positions(self):
if not len(self.positions_in_path) == 0:
return sum(self.positions_in_path) / len(self.positions_in_path)
else:
return 0
def description(self):
return "Street " + str(self.name) + " [" + str(self.time) + "s] " + str(self.start_intersec) + " --> " + str(self.end_intersec) + " {" \
+ str(self.num_of_cars) +" car pass} with " + str(self.num_of_reamining_routes) + " remaining routes"
class Intersection():
    def __init__(self, id, input_streets=None, output_streets=None):
        self.id = id
        self.input_streets = input_streets if input_streets is not None else []
        self.output_streets = output_streets if output_streets is not None else []
def is_fake(self):
return len(self.input_streets) == 1 and len(self.output_streets) == 1
def description(self):
        if not self.is_fake():
            fake_desc = " NOT FAKE "
        else:
            fake_desc = " FAKE "
return "ID: " + str(self.id) + fake_desc + "Input streets " + str([street.description() for street in self.input_streets]) + " \nOutput streets " + str([street.description() for street in self.output_streets])
class Map:
def __init__(self, duration, num_of_intersec, num_of_street, num_of_cars, bonus):
self.duration = duration
self.num_of_intersec = num_of_intersec
self.num_of_street = num_of_street
self.num_of_cars = num_of_cars
self.bonus = bonus
def read_txt(path):
f = open(path, "r")
num_line = 0
x = f.readline()
infos = x.split("\n")[0].split(" ")
duration = int(infos[0])
num_of_intersec = int(infos[1])
num_of_street = int(infos[2])
num_of_cars = int(infos[3])
bonus = int(infos[4])
map = Map(duration, num_of_intersec, num_of_street, num_of_cars, bonus)
intersections = []
for i in range(num_of_intersec):
intersections.append(Intersection(i, [], []))
streets = {}
for s in range(num_of_street):
infos = f.readline().split("\n")[0].split(" ")
start = infos[0]
end = infos[1]
name = infos[2]
time = infos[3]
ss = Street(start, end, name, time)
streets[name] = ss
intersections[int(end)].input_streets.append(ss)
intersections[int(start)].output_streets.append(ss)
cars = []
for v in range(num_of_cars):
streets_infos = f.readline().split("\n")[0].split(" ")
count = 0
name_of_streets = []
for info in streets_infos:
if count == 0:
num_of_streets = streets_infos[0]
else:
name = streets_infos[count]
name_of_streets.append(name)
streets[name].num_of_cars += 1
count += 1
for (n, idx) in zip(name_of_streets, range(1, (len(name_of_streets) + 1))):
streets[n].num_of_reamining_routes += int(num_of_streets) - idx
streets[n].positions_in_path.append(idx)
cars.append(Car(num_of_streets, name_of_streets))
    # Use a dict comprehension to collect the streets no car passes through
delete = [key for key in streets if streets[key].num_of_cars == 0]
# delete the key
for key in delete:
del streets[key]
'''
for cc in cars:
print(cc.description())
print()
for name in streets:
print(streets[name].description())
for ii in intersections:
print(ii.description())
print()
'''
return streets, cars, intersections, map
def work_fn(level):
if level == 0:
file_name = "a"
result_name = "a_result"
elif level == 1:
file_name = "b"
result_name = "b_result"
elif level == 2:
file_name = "c"
result_name = "c_result"
elif level == 3:
file_name = "d"
result_name = "d_result"
elif level == 4:
file_name = "e"
result_name = "e_result"
elif level == 5:
file_name = "f"
result_name = "f_result"
path = "./" + file_name + ".txt"
out = "./" + result_name + ".txt"
streets, cars, intersections, mappa = read_txt(path)
result = ""
count_fake = 0
for intersection in intersections:
# Calcolo il peso che ha la ogni strada in input rispetto all'intersezione in esame
tot_car = sum([street.num_of_cars for street in intersection.input_streets])
tot_remaing_routes = sum([street.num_of_reamining_routes for street in intersection.input_streets])
if tot_car == 0 or tot_remaing_routes == 0:
continue
count_fake += 1
result += str(intersection.id) + "\n"
result += str(len(intersection.input_streets)) + "\n"
# intersection.input_streets.sort(key=lambda x: (x.num_of_cars, -x.num_of_reamining_routes, x.get_avg_positions()), reverse=True)
# intersection.input_streets.sort(key=lambda x: x.num_of_reamining_routes, reverse=False)
intersection.input_streets.sort(key=lambda x: x.num_of_cars, reverse=False)
# random.shuffle(intersection.input_streets)
intersec_period = len(intersection.input_streets) * 2
duration = mappa.duration
for street in intersection.input_streets:
result += street.name + " " + str(random.randint(1, 5)) + "\n"
'''
percentuale = street.num_of_cars / tot_car
w = int(percentuale * 5 + 1)
# w = int(street.num_of_cars / tot_car * (mappa.duration / 12))
# w = int((tot_cycle_time / duration) * duration)
# tot_cycle_time -= int(tot_cycle_time / duration)
# w = int(street.num_of_reamining_routes / tot_remaing_routes * (mappa.duration / 1))
if not w == 0:
result += street.name + " " + str(w) + "\n"
else:
result += street.name + " " + str(1) + "\n"
'''
rr = str(count_fake) + "\n" + result
f = open(out, "w")
f.write(rr)
f.close()
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
for level in range(6):
print("START " + str(level))
work_fn(level)
print("END " + str(level))
| [
"random.randint"
] | [((6078, 6098), 'random.randint', 'random.randint', (['(1)', '(5)'], {}), '(1, 5)\n', (6092, 6098), False, 'import random\n')] |
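In `work_fn` above, each green-light duration is currently `random.randint(1, 5)`; the commented-out block sketches the intended heuristic of weighting each incoming street by its share of the car traffic, with a floor of one second. That weighting in isolation (a sketch — the function name and `scale` parameter are illustrative):

```python
def green_times(car_counts, scale=5):
    """Green seconds per incoming street, proportional to its car traffic.

    Each street gets max(1, round(share * scale)) seconds, so even an
    idle street keeps a minimal slot and the schedule stays valid.
    """
    total = sum(car_counts)
    if total == 0:  # degenerate intersection: equal one-second slots
        return [1] * len(car_counts)
    return [max(1, round(count / total * scale)) for count in car_counts]

print(green_times([8, 2, 0]))  # the busy street dominates -> [4, 1, 1]
```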
from prefect import Flow, Parameter, task
@task
def a(x):
print(f'a: {x}')
@task
def b(x):
print(f'b: {x}')
@task
def c(x):
print(f'c: {x}')
with Flow('my-flow') as flow:
x = Parameter('x')
    a.set_dependencies(downstream_tasks=[b], keyword_tasks={'x': x})
    b.set_dependencies(downstream_tasks=[c], keyword_tasks={'x': x})
    c.set_dependencies(keyword_tasks={'x': x})
# b.set_dependencies(downstream_tasks=[C], keyword_tasks={"x": param})
# c.set_dependencies(keyword_tasks={"x": param})
flow.run(x=1)
| [
"prefect.Parameter",
"prefect.Flow"
] | [((166, 181), 'prefect.Flow', 'Flow', (['"""my-flow"""'], {}), "('my-flow')\n", (170, 181), False, 'from prefect import Flow, Parameter, task\n'), ((199, 213), 'prefect.Parameter', 'Parameter', (['"""x"""'], {}), "('x')\n", (208, 213), False, 'from prefect import Flow, Parameter, task\n')] |
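`set_dependencies` above only records the edges of a dependency DAG; Prefect's scheduler later derives a valid execution order from those edges. The same idea with nothing but the standard library (`graphlib.TopologicalSorter`, Python ≥ 3.9), using the task names from the flow:

```python
from graphlib import TopologicalSorter

# predecessors per task, matching the flow above:
# x is a Parameter feeding every task; a -> b -> c via downstream_tasks
graph = {
    'x': set(),
    'a': {'x'},
    'b': {'a', 'x'},
    'c': {'b', 'x'},
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # -> ['x', 'a', 'b', 'c']
```

Because exactly one task becomes ready at each step of this graph, the order is fully determined, which mirrors the strictly sequential a/b/c chain in the flow.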
# Copyright 2019 WebPageTest LLC.
# Copyright 2017 Google Inc.
# Use of this source code is governed by the Apache 2.0 license that can be
# found in the LICENSE file.
"""Cross-platform support for os-level things that differ on different platforms"""
import logging
import os
import platform
import subprocess
def kill_all(exe, force, timeout=30):
"""Terminate all instances of the given process"""
logging.debug("Terminating all instances of %s", exe)
plat = platform.system()
if plat == "Windows":
if force:
subprocess.call(['taskkill', '/F', '/T', '/IM', exe])
else:
subprocess.call(['taskkill', '/IM', exe])
elif plat == "Linux" or plat == "Darwin":
if force:
subprocess.call(['killall', '-s', 'SIGKILL', exe])
else:
subprocess.call(['killall', exe])
wait_for_all(exe, timeout)
def wait_for_all(exe, timeout=30):
"""Wait for the given process to exit"""
import psutil
processes = []
for proc in psutil.process_iter():
try:
pinfo = proc.as_dict(attrs=['pid', 'name', 'exe'])
except psutil.NoSuchProcess:
pass
else:
if 'exe' in pinfo and pinfo['exe'] is not None and\
os.path.basename(pinfo['exe']) == exe:
processes.append(proc)
if len(processes):
logging.debug("Waiting up to %d seconds for %s to exit", timeout, exe)
psutil.wait_procs(processes, timeout=timeout)
def flush_dns():
"""Flush the OS DNS resolver"""
logging.debug("Flushing DNS")
plat = platform.system()
if plat == "Windows":
run_elevated('ipconfig', '/flushdns')
elif plat == "Darwin":
subprocess.call(['sudo', 'killall', '-HUP', 'mDNSResponder'])
subprocess.call(['sudo', 'dscacheutil', '-flushcache'])
subprocess.call(['sudo', 'lookupd', '-flushcache'])
elif plat == "Linux":
subprocess.call(['sudo', 'service', 'dnsmasq', 'restart'])
subprocess.call(['sudo', 'rndc', 'restart'])
subprocess.call(['sudo', 'systemd-resolve', '--flush-caches'])
# pylint: disable=E0611,E0401
def run_elevated(command, args, wait=True):
"""Run the given command as an elevated user and wait for it to return"""
ret = 1
try:
if command.find(' ') > -1:
command = '"' + command + '"'
if platform.system() == 'Windows':
import win32api
import win32con
import win32event
import win32process
from win32com.shell.shell import ShellExecuteEx
from win32com.shell import shellcon
logging.debug(command + ' ' + args)
process_info = ShellExecuteEx(nShow=win32con.SW_HIDE,
fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
lpVerb='runas',
lpFile=command,
lpParameters=args)
if wait:
win32event.WaitForSingleObject(process_info['hProcess'], 600000)
ret = win32process.GetExitCodeProcess(process_info['hProcess'])
win32api.CloseHandle(process_info['hProcess'])
else:
ret = process_info
else:
logging.debug('sudo ' + command + ' ' + args)
ret = subprocess.call('sudo ' + command + ' ' + args, shell=True)
except Exception:
logging.exception('Error running elevated command: %s', command)
return ret
def wait_for_elevated_process(process_info):
if platform.system() == 'Windows' and 'hProcess' in process_info:
import win32api
import win32con
import win32event
import win32process
win32event.WaitForSingleObject(process_info['hProcess'], 600000)
ret = win32process.GetExitCodeProcess(process_info['hProcess'])
win32api.CloseHandle(process_info['hProcess'])
return ret
# pylint: enable=E0611,E0401
# pylint: disable=E1101
def get_free_disk_space():
"""Return the number of bytes free on the given disk in Gigabytes (floating)"""
path = os.path.dirname(os.path.realpath(__file__))
if platform.system() == 'Windows':
import ctypes
free_bytes = ctypes.c_ulonglong(0)
ctypes.windll.kernel32.GetDiskFreeSpaceExW(ctypes.c_wchar_p(path),
None, None, ctypes.pointer(free_bytes))
return float(free_bytes.value / 1024 / 1024) / 1024.0
else:
stat = os.statvfs(path)
return float(stat.f_bavail * stat.f_frsize / 1024 / 1024) / 1024.0
# pylint: enable=E1101
def get_file_version(filename):
version = 0.0
try:
from win32api import GetFileVersionInfo, LOWORD, HIWORD
        info = GetFileVersionInfo(filename, "\\")
ms = info['FileVersionMS']
ls = info['FileVersionLS']
version = '{0}.{1}.{2}.{3}'.format(HIWORD(ms), LOWORD(ms), HIWORD(ls), LOWORD(ls))
    except Exception:
        logging.exception('Error getting file version for %s', filename)
return version
| [
"win32com.shell.shell.ShellExecuteEx",
"logging.debug",
"os.statvfs",
"logging.exception",
"ctypes.c_ulonglong",
"ctypes.pointer",
"ctypes.c_wchar_p",
"win32api.CloseHandle",
"platform.system",
"subprocess.call",
"win32api.HIWORD",
"win32event.WaitForSingleObject",
"psutil.process_iter",
"... | [((409, 462), 'logging.debug', 'logging.debug', (['"""Terminating all instances of %s"""', 'exe'], {}), "('Terminating all instances of %s', exe)\n", (422, 462), False, 'import logging\n'), ((474, 491), 'platform.system', 'platform.system', ([], {}), '()\n', (489, 491), False, 'import platform\n'), ((1022, 1043), 'psutil.process_iter', 'psutil.process_iter', ([], {}), '()\n', (1041, 1043), False, 'import psutil\n'), ((1565, 1594), 'logging.debug', 'logging.debug', (['"""Flushing DNS"""'], {}), "('Flushing DNS')\n", (1578, 1594), False, 'import logging\n'), ((1606, 1623), 'platform.system', 'platform.system', ([], {}), '()\n', (1621, 1623), False, 'import platform\n'), ((1382, 1452), 'logging.debug', 'logging.debug', (['"""Waiting up to %d seconds for %s to exit"""', 'timeout', 'exe'], {}), "('Waiting up to %d seconds for %s to exit', timeout, exe)\n", (1395, 1452), False, 'import logging\n'), ((1461, 1506), 'psutil.wait_procs', 'psutil.wait_procs', (['processes'], {'timeout': 'timeout'}), '(processes, timeout=timeout)\n', (1478, 1506), False, 'import psutil\n'), ((3803, 3867), 'win32event.WaitForSingleObject', 'win32event.WaitForSingleObject', (["process_info['hProcess']", '(600000)'], {}), "(process_info['hProcess'], 600000)\n", (3833, 3867), False, 'import win32event\n'), ((3882, 3939), 'win32process.GetExitCodeProcess', 'win32process.GetExitCodeProcess', (["process_info['hProcess']"], {}), "(process_info['hProcess'])\n", (3913, 3939), False, 'import win32process\n'), ((3948, 3994), 'win32api.CloseHandle', 'win32api.CloseHandle', (["process_info['hProcess']"], {}), "(process_info['hProcess'])\n", (3968, 3994), False, 'import win32api\n'), ((4202, 4228), 'os.path.realpath', 'os.path.realpath', (['__file__'], {}), '(__file__)\n', (4218, 4228), False, 'import os\n'), ((4237, 4254), 'platform.system', 'platform.system', ([], {}), '()\n', (4252, 4254), False, 'import platform\n'), ((4312, 4333), 'ctypes.c_ulonglong', 'ctypes.c_ulonglong', (['(0)'], {}), 
'(0)\n', (4330, 4333), False, 'import ctypes\n'), ((4587, 4603), 'os.statvfs', 'os.statvfs', (['path'], {}), '(path)\n', (4597, 4603), False, 'import os\n'), ((4841, 4875), 'win32api.GetFileVersionInfo', 'GetFileVersionInfo', (['filename', '"""\\\\"""'], {}), "(filename, '\\\\')\n", (4859, 4875), False, 'from win32api import GetFileVersionInfo, LOWORD, HIWORD\n'), ((548, 601), 'subprocess.call', 'subprocess.call', (["['taskkill', '/F', '/T', '/IM', exe]"], {}), "(['taskkill', '/F', '/T', '/IM', exe])\n", (563, 601), False, 'import subprocess\n'), ((628, 669), 'subprocess.call', 'subprocess.call', (["['taskkill', '/IM', exe]"], {}), "(['taskkill', '/IM', exe])\n", (643, 669), False, 'import subprocess\n'), ((1731, 1792), 'subprocess.call', 'subprocess.call', (["['sudo', 'killall', '-HUP', 'mDNSResponder']"], {}), "(['sudo', 'killall', '-HUP', 'mDNSResponder'])\n", (1746, 1792), False, 'import subprocess\n'), ((1801, 1856), 'subprocess.call', 'subprocess.call', (["['sudo', 'dscacheutil', '-flushcache']"], {}), "(['sudo', 'dscacheutil', '-flushcache'])\n", (1816, 1856), False, 'import subprocess\n'), ((1865, 1916), 'subprocess.call', 'subprocess.call', (["['sudo', 'lookupd', '-flushcache']"], {}), "(['sudo', 'lookupd', '-flushcache'])\n", (1880, 1916), False, 'import subprocess\n'), ((2396, 2413), 'platform.system', 'platform.system', ([], {}), '()\n', (2411, 2413), False, 'import platform\n'), ((2666, 2701), 'logging.debug', 'logging.debug', (["(command + ' ' + args)"], {}), "(command + ' ' + args)\n", (2679, 2701), False, 'import logging\n'), ((2729, 2863), 'win32com.shell.shell.ShellExecuteEx', 'ShellExecuteEx', ([], {'nShow': 'win32con.SW_HIDE', 'fMask': 'shellcon.SEE_MASK_NOCLOSEPROCESS', 'lpVerb': '"""runas"""', 'lpFile': 'command', 'lpParameters': 'args'}), "(nShow=win32con.SW_HIDE, fMask=shellcon.\n SEE_MASK_NOCLOSEPROCESS, lpVerb='runas', lpFile=command, lpParameters=args)\n", (2743, 2863), False, 'from win32com.shell.shell import ShellExecuteEx\n'), ((3343, 
3388), 'logging.debug', 'logging.debug', (["('sudo ' + command + ' ' + args)"], {}), "('sudo ' + command + ' ' + args)\n", (3356, 3388), False, 'import logging\n'), ((3407, 3466), 'subprocess.call', 'subprocess.call', (["('sudo ' + command + ' ' + args)"], {'shell': '(True)'}), "('sudo ' + command + ' ' + args, shell=True)\n", (3422, 3466), False, 'import subprocess\n'), ((3497, 3561), 'logging.exception', 'logging.exception', (['"""Error running elevated command: %s"""', 'command'], {}), "('Error running elevated command: %s', command)\n", (3514, 3561), False, 'import logging\n'), ((3630, 3647), 'platform.system', 'platform.system', ([], {}), '()\n', (3645, 3647), False, 'import platform\n'), ((4385, 4407), 'ctypes.c_wchar_p', 'ctypes.c_wchar_p', (['path'], {}), '(path)\n', (4401, 4407), False, 'import ctypes\n'), ((4472, 4498), 'ctypes.pointer', 'ctypes.pointer', (['free_bytes'], {}), '(free_bytes)\n', (4486, 4498), False, 'import ctypes\n'), ((4990, 5000), 'win32api.HIWORD', 'HIWORD', (['ms'], {}), '(ms)\n', (4996, 5000), False, 'from win32api import GetFileVersionInfo, LOWORD, HIWORD\n'), ((5002, 5012), 'win32api.LOWORD', 'LOWORD', (['ms'], {}), '(ms)\n', (5008, 5012), False, 'from win32api import GetFileVersionInfo, LOWORD, HIWORD\n'), ((5014, 5024), 'win32api.HIWORD', 'HIWORD', (['ls'], {}), '(ls)\n', (5020, 5024), False, 'from win32api import GetFileVersionInfo, LOWORD, HIWORD\n'), ((5026, 5036), 'win32api.LOWORD', 'LOWORD', (['ls'], {}), '(ls)\n', (5032, 5036), False, 'from win32api import GetFileVersionInfo, LOWORD, HIWORD\n'), ((5058, 5122), 'logging.exception', 'logging.exception', (['"""Error getting file version for %s"""', 'filename'], {}), "('Error getting file version for %s', filename)\n", (5075, 5122), False, 'import logging\n'), ((746, 796), 'subprocess.call', 'subprocess.call', (["['killall', '-s', 'SIGKILL', exe]"], {}), "(['killall', '-s', 'SIGKILL', exe])\n", (761, 796), False, 'import subprocess\n'), ((823, 856), 'subprocess.call', 
'subprocess.call', (["['killall', exe]"], {}), "(['killall', exe])\n", (838, 856), False, 'import subprocess\n'), ((1951, 2009), 'subprocess.call', 'subprocess.call', (["['sudo', 'service', 'dnsmasq', 'restart']"], {}), "(['sudo', 'service', 'dnsmasq', 'restart'])\n", (1966, 2009), False, 'import subprocess\n'), ((2018, 2062), 'subprocess.call', 'subprocess.call', (["['sudo', 'rndc', 'restart']"], {}), "(['sudo', 'rndc', 'restart'])\n", (2033, 2062), False, 'import subprocess\n'), ((2071, 2133), 'subprocess.call', 'subprocess.call', (["['sudo', 'systemd-resolve', '--flush-caches']"], {}), "(['sudo', 'systemd-resolve', '--flush-caches'])\n", (2086, 2133), False, 'import subprocess\n'), ((3056, 3120), 'win32event.WaitForSingleObject', 'win32event.WaitForSingleObject', (["process_info['hProcess']", '(600000)'], {}), "(process_info['hProcess'], 600000)\n", (3086, 3120), False, 'import win32event\n'), ((3143, 3200), 'win32process.GetExitCodeProcess', 'win32process.GetExitCodeProcess', (["process_info['hProcess']"], {}), "(process_info['hProcess'])\n", (3174, 3200), False, 'import win32process\n'), ((3217, 3263), 'win32api.CloseHandle', 'win32api.CloseHandle', (["process_info['hProcess']"], {}), "(process_info['hProcess'])\n", (3237, 3263), False, 'import win32api\n'), ((1273, 1303), 'os.path.basename', 'os.path.basename', (["pinfo['exe']"], {}), "(pinfo['exe'])\n", (1289, 1303), False, 'import os\n')] |
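The module above follows one pattern throughout: branch on `platform.system()` and build the matching command line. A minimal testable sketch of that dispatch — it only *constructs* the argv, so nothing is executed, and the lookup-table form is an illustrative restructuring of the `if`/`elif` chains above:

```python
import platform

# argv prefixes per (platform, force); the target process name is appended.
KILL_COMMANDS = {
    ('Windows', True):  ['taskkill', '/F', '/T', '/IM'],
    ('Windows', False): ['taskkill', '/IM'],
    ('Linux', True):    ['killall', '-s', 'SIGKILL'],
    ('Linux', False):   ['killall'],
    ('Darwin', True):   ['killall', '-s', 'SIGKILL'],
    ('Darwin', False):  ['killall'],
}

def kill_command(exe, force=False, system=None):
    """Build (but do not run) the platform-specific terminate command."""
    system = system or platform.system()
    try:
        return KILL_COMMANDS[(system, force)] + [exe]
    except KeyError:
        raise ValueError('unsupported platform: %s' % system)

print(kill_command('chrome.exe', force=True, system='Windows'))
# -> ['taskkill', '/F', '/T', '/IM', 'chrome.exe']
```

Keeping the table separate from the dispatch makes each branch unit-testable without spawning processes, which the `subprocess.call` chains above cannot offer.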
#!/usr/bin/env python
##
## @file echo_sedml.py
## @brief Echos (and prints) a NuML data.
## @author <NAME>
##
## <!--------------------------------------------------------------------------
## This file is part of libNUML. Please visit http://numl.org for more
## information about NUML, and the latest version of libNUML.
##
## Copyright (c) 2013, University of Manchester
## All rights reserved.
##
##
import sys
import os.path
import libnuml
def main (args):
"""Usage: echo_numl input-filename output-filename
"""
if len(args) != 3:
print(main.__doc__)
sys.exit(1)
  doc = libnuml.readNUML(args[1])
  ##if ( doc.getErrorLog().getNumFailsWithSeverity(libsedml.LIBSNUML_SEV_ERROR) > 0):
  ##  print doc.getErrorLog().toString();
  ##  else:
  libnuml.writeNUML(doc, args[2])
  return 0
if __name__ == '__main__':
main(sys.argv) | [
"libnuml.writeNUML",
"sys.exit",
"libnuml.readNUML"
] | [((607, 632), 'libnuml.readNUML', 'libnuml.readNUML', (['args[1]'], {}), '(args[1])\n', (623, 632), False, 'import libnuml\n'), ((774, 805), 'libnuml.writeNUML', 'libnuml.writeNUML', (['doc', 'args[2]'], {}), '(doc, args[2])\n', (791, 805), False, 'import libnuml\n'), ((586, 597), 'sys.exit', 'sys.exit', (['(1)'], {}), '(1)\n', (594, 597), False, 'import sys\n')] |
from statistics import mean
from math import ceil, floor
with open("./input.txt", "r") as inputFile:
positionsStrLine = inputFile.readline()
positionStrs = positionsStrLine.split(',')
positions = [int(positionStr) for positionStr in positionStrs]
positions.sort()
bestPosition = positions[((len(positions) + 1) // 2) - 1]
print(f'Best position basic: {sum(abs(position - bestPosition) for position in positions)}')
meanPosition = mean(positions)
bestPositionMin = floor(meanPosition)
fuelMinPosition = sum((abs(position - bestPositionMin) * (1 + abs(position - bestPositionMin)))//2 for position in positions)
bestPositionMax = ceil(meanPosition)
fuelMaxPosition = sum((abs(position - bestPositionMax) * (1 + abs(position - bestPositionMax)))//2 for position in positions)
print(f'Best position complex: {min(fuelMinPosition, fuelMaxPosition)}') | [
"statistics.mean",
"math.ceil",
"math.floor"
] | [((442, 457), 'statistics.mean', 'mean', (['positions'], {}), '(positions)\n', (446, 457), False, 'from statistics import mean\n'), ((476, 495), 'math.floor', 'floor', (['meanPosition'], {}), '(meanPosition)\n', (481, 495), False, 'from math import ceil, floor\n'), ((642, 660), 'math.ceil', 'ceil', (['meanPosition'], {}), '(meanPosition)\n', (646, 660), False, 'from math import ceil, floor\n')] |
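The script above relies on two standard facts: the median minimizes the total linear fuel `|p − t|`, and for the triangular cost `d(d+1)/2` the optimum lies within one unit of the mean (hence trying both `floor` and `ceil`). A brute-force cross-check of both claims on the puzzle's well-known sample input:

```python
from statistics import mean, median_low
from math import floor, ceil

def linear_cost(positions, target):
    # part-1 fuel: one unit per step
    return sum(abs(p - target) for p in positions)

def triangular_cost(positions, target):
    # part-2 fuel: 1 + 2 + ... + d = d * (d + 1) // 2
    return sum(abs(p - target) * (abs(p - target) + 1) // 2 for p in positions)

positions = [16, 1, 2, 0, 4, 2, 7, 1, 2, 14]  # the puzzle's published sample

candidates = range(min(positions), max(positions) + 1)
best_linear = min(linear_cost(positions, t) for t in candidates)
best_triangular = min(triangular_cost(positions, t) for t in candidates)

# the median is optimal for the linear cost
assert linear_cost(positions, median_low(positions)) == best_linear
# floor/ceil of the mean bracket the triangular optimum
m = mean(positions)
assert min(triangular_cost(positions, floor(m)),
           triangular_cost(positions, ceil(m))) == best_triangular
print(best_linear, best_triangular)  # 37 168
```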
"""
byceps.blueprints.api.v1.user.views
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:Copyright: 2006-2021 <NAME>
:License: Revised BSD (see `LICENSE` file for details)
"""
from flask import abort, jsonify, request
from marshmallow import ValidationError
from .....services.user import (
email_address_verification_service,
service as user_service,
)
from .....signals import user as user_signals
from .....util.framework.blueprint import create_blueprint
from .....util.views import create_empty_json_response
from .....util.views import respond_no_content
from ...decorators import api_token_required
from .schemas import InvalidateEmailAddressRequest
blueprint = create_blueprint('user', __name__)
@blueprint.get('/<uuid:user_id>/profile')
def get_profile(user_id):
"""Return (part of) user's profile as JSON."""
user = user_service.find_active_user(user_id, include_avatar=True)
if user is None:
return create_empty_json_response(404)
return jsonify(
{
'id': user.id,
'screen_name': user.screen_name,
'avatar_url': user.avatar_url,
}
)
@blueprint.post('/invalidate_email_address')
@api_token_required
@respond_no_content
def invalidate_email_address():
"""Invalidate the email address."""
schema = InvalidateEmailAddressRequest()
request_data = request.get_json()
try:
req = schema.load(request_data)
except ValidationError as e:
abort(400, str(e.normalized_messages()))
user = user_service.find_user_by_email_address(req['email_address'])
if user is None:
abort(404, 'Unknown email address')
event = email_address_verification_service.invalidate_email_address(
user.id, req['reason']
)
user_signals.email_address_invalidated.send(None, event=event)
| [
"flask.abort",
"flask.request.get_json",
"flask.jsonify"
] | [((1022, 1115), 'flask.jsonify', 'jsonify', (["{'id': user.id, 'screen_name': user.screen_name, 'avatar_url': user.avatar_url}"], {}), "({'id': user.id, 'screen_name': user.screen_name, 'avatar_url': user\n .avatar_url})\n", (1029, 1115), False, 'from flask import abort, jsonify, request\n'), ((1395, 1413), 'flask.request.get_json', 'request.get_json', ([], {}), '()\n', (1411, 1413), False, 'from flask import abort, jsonify, request\n'), ((1650, 1685), 'flask.abort', 'abort', (['(404)', '"""Unknown email address"""'], {}), "(404, 'Unknown email address')\n", (1655, 1685), False, 'from flask import abort, jsonify, request\n')] |