# Natural Language Toolkit: A Chunk Parser
#
# Copyright (C) 2001 University of Pennsylvania
# Author: Steven Bird <sb@ldc.upenn.edu>
# URL: <http://nltk.sf.net>
# For license information, see LICENSE.TXT
#
# $Id: chunk.py,v 1.1.1.2 2004/09/29 21:58:22 adastra Exp $
"""
Classes and interfaces for identifying non-overlapping linguistic
groups (such as base noun phrases) in unrestricted text. This task is
called X{chunk parsing} or X{chunking}, and the identified groups are
called X{chunks}. The chunked text is represented using a shallow
tree called a "chunk structure." A X{chunk structure} is a tree
containing tokens and chunks, where each chunk is a subtree containing
only tokens. For example, the chunk structure for base noun phrase
chunks in the sentence "I saw the big dog on the hill" is::
(SENTENCE:
(NP: <I>)
<saw>
(NP: <the> <big> <dog>)
<on>
(NP: <the> <hill>))
To convert a chunk structure back to a list of tokens, simply use the
chunk structure's L{leaves<Tree.leaves>} method.
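For illustration, a minimal stand-in for this structure (a hypothetical
class, not the real C{nltk.tree.Tree}) can model chunks as labeled nodes
whose C{leaves} are recovered by flattening:

```python
# Hypothetical stand-in for the chunk structure described above; the real
# Tree class provides the same leaves() behavior.
class MiniTree:
    def __init__(self, node, children):
        self.node = node          # e.g. 'SENTENCE' or 'NP'
        self.children = children  # tokens (strings) or MiniTree chunks

    def leaves(self):
        # Flatten the shallow tree back into the original token sequence.
        out = []
        for child in self.children:
            if isinstance(child, MiniTree):
                out.extend(child.leaves())
            else:
                out.append(child)
        return out

sent = MiniTree('SENTENCE', [
    MiniTree('NP', ['I']), 'saw',
    MiniTree('NP', ['the', 'big', 'dog']), 'on',
    MiniTree('NP', ['the', 'hill']),
])
```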
The C{parser.chunk} module defines L{ChunkParserI}, a standard
interface for chunking texts; and L{RegexpChunkParser}, a
regular-expression based implementation of that interface. It also
defines the L{ChunkedTaggedTokenizer} and L{ConllChunkedTokenizer}
classes, which tokenize strings containing chunked and tagged texts;
and L{ChunkScore}, a utility class for scoring chunk parsers.
RegexpChunkParser
=================
C{RegexpChunkParser} is an implementation of the chunk parser interface
that uses regular-expressions over tags to chunk a text. Its
C{parse} method first constructs a C{ChunkString}, which encodes a
particular chunking of the input text. Initially, nothing is
chunked. C{RegexpChunkParser} then applies a sequence of
C{RegexpChunkParserRule}s to the C{ChunkString}, each of which modifies
the chunking that it encodes. Finally, the C{ChunkString} is
transformed back into a chunk structure, which is returned.
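The pipeline described above can be sketched, under simplifying assumptions
(a toy encoder/decoder, not the real API), as a pure-string round trip:
encode the tags, apply brace-inserting regexp rules, then decode the chunks.

```python
import re

# Simplified sketch of the RegexpChunkParser pipeline (assumed names):
# encode tags -> apply regexp rules -> decode chunks.
def parse_sketch(tags, rules):
    s = '<' + '><'.join(tags) + '>'    # initial encoding: nothing chunked
    for pattern, repl in rules:        # each rule rewrites the encoding
        s = re.sub(pattern, repl, s)
    # Decode: text between braces is a chunk, the rest is unchunked.
    pieces = re.split(r'[{}]', s)
    result, in_chunk = [], False
    for piece in pieces:
        toks = re.findall(r'<([^<>]*)>', piece)
        if in_chunk and toks:
            result.append(toks)        # a chunk, kept as a sublist
        else:
            result.extend(toks)
        in_chunk = not in_chunk
    return result

# One rule: chunk any determiner-adjective*-noun sequence.
rules = [(r'(<DT>(<JJ>)*<NN>)', r'{\1}')]
```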
C{RegexpChunkParser} can only be used to chunk a single kind of phrase.
For example, you can use an C{RegexpChunkParser} to chunk the noun
phrases in a text, or the verb phrases in a text; but you can not
use it to simultaneously chunk both noun phrases and verb phrases in
the same text. (This is a limitation of C{RegexpChunkParser}, not of
chunk parsers in general.)
RegexpChunkParserRules
----------------------
C{RegexpChunkParserRule}s are transformational rules that update the
chunking of a text by modifying its C{ChunkString}. Each
C{RegexpChunkParserRule} defines the C{apply} method, which modifies
the chunking encoded by a C{ChunkString}. The
L{RegexpChunkParserRule} class itself can be used to implement any
transformational rule based on regular expressions. There are
also a number of subclasses, which can be used to implement
simpler types of rules:
- L{ChunkRule} chunks anything that matches a given regular
expression.
- L{ChinkRule} chinks anything that matches a given regular
expression.
- L{UnChunkRule} will un-chunk any chunk that matches a given
regular expression.
- L{MergeRule} can be used to merge two contiguous chunks.
- L{SplitRule} can be used to split a single chunk into two
smaller chunks.
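Each of these rule types boils down to a brace-editing substitution on the
encoded string. A hedged sketch with hand-written regexps (the real classes
compile tag patterns into such substitutions for you):

```python
import re

s = '{<DT><NN>}{<NN>}<VBD><DT><JJ>'

# MergeRule-style: join two contiguous chunks by deleting the '}{' between them.
merged = re.sub(r'(<NN>)\}\{(<NN>)', r'\1\2', s)

# ChunkRule-style: wrap an unchunked <DT><JJ> sequence in braces.
chunked = re.sub(r'(<DT><JJ>)$', r'{\1}', merged)

# UnChunkRule-style: strip the braces from a chunk that matches.
unchunked = re.sub(r'\{(<DT><JJ>)\}', r'\1', chunked)
```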
Tag Patterns
~~~~~~~~~~~~
C{RegexpChunkParserRule}s use a modified version of regular
expression patterns, called X{tag patterns}. Tag patterns are
used to match sequences of tags. Examples of tag patterns are::
r'(<DT>|<JJ>|<NN>)+'
r'<NN>+'
r'<NN.*>'
The differences between regular expression patterns and tag
patterns are:
- In tag patterns, C{'<'} and C{'>'} act as parentheses; so
C{'<NN>+'} matches one or more repetitions of C{'<NN>'}, not
C{'<NN'} followed by one or more repetitions of C{'>'}.
- Whitespace in tag patterns is ignored. So
C{'<DT> | <NN>'} is equivalent to C{'<DT>|<NN>'}.
- In tag patterns, C{'.'} is equivalent to C{'[^{}<>]'}; so
C{'<NN.*>'} matches any single tag starting with C{'NN'}.
The function L{tag_pattern2re_pattern} can be used to transform
a tag pattern to an equivalent regular expression pattern.
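A simplified sketch of that conversion (ignoring escaped dots and validity
checking, which the real function also handles):

```python
import re

def tag_pattern_to_re(tag_pattern):
    # Whitespace in tag patterns is ignored.
    pattern = re.sub(r'\s+', '', tag_pattern)
    # Make '<' and '>' act like parentheses by adding real regexp groups.
    pattern = pattern.replace('<', '(<(').replace('>', ')>)')
    # Inside a tag, '.' must not cross tag or brace boundaries.
    # (Naive: this would also rewrite escaped dots; the real code guards that.)
    pattern = pattern.replace('.', '[^\\{\\}<>]')
    return pattern
```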
Efficiency
----------
Preliminary tests indicate that C{RegexpChunkParser} can chunk at a
rate of about 300 tokens/second, with a moderately complex rule
set.
There may be problems if C{RegexpChunkParser} is used with more than
5,000 tokens at a time. In particular, evaluation of some regular
expressions may cause the Python regular expression engine to
exceed its maximum recursion depth. We have attempted to minimize
these problems, but it is impossible to avoid them completely. We
therefore recommend that you apply the chunk parser to a single
sentence at a time.
Emacs Tip
---------
If you evaluate the following elisp expression in emacs, it will
colorize C{ChunkString}s when you use an interactive python shell
with emacs or xemacs ("C-c !")::
(let ()
(defconst comint-mode-font-lock-keywords
'(("<[^>]+>" 0 'font-lock-reference-face)
("[{}]" 0 'font-lock-function-name-face)))
(add-hook 'comint-mode-hook (lambda () (turn-on-font-lock))))
You can evaluate this code by copying it to a temporary buffer,
placing the cursor after the last close parenthesis, and typing
"C{C-x C-e}". You should evaluate it before running the interactive
session. The change will last until you close emacs.
Unresolved Issues
-----------------
If we use the C{re} module for regular expressions, Python's
regular expression engine generates "maximum recursion depth
exceeded" errors when processing very large texts, even for
regular expressions that should not require any recursion. We
therefore use the C{pre} module instead. But note that C{pre}
does not include Unicode support, so this module will not work
with unicode strings. Note also that C{pre} regular expressions
are not quite as advanced as C{re} ones (e.g., no leftward
zero-length assertions).
@type _VALID_CHUNK_STRING: C{regexp}
@var _VALID_CHUNK_STRING: A regular expression to test whether a chunk
string is valid.
@type _VALID_TAG_PATTERN: C{regexp}
@var _VALID_TAG_PATTERN: A regular expression to test whether a tag
pattern is valid.
@group Interfaces: ChunkParserI
@group Chunk Parsers: RegexpChunkParser
@group Chunk Parser Rules: RegexpChunkParserRule, ChunkRule,
ChinkRule, MergeRule, SplitRule, UnChunkRule, ChunkString
@group Evaluation: ChunkScore
@group Tokenizers: ChunkedTaggedTokenizer
@sort: ChunkParserI, RegexpChunkParser, RegexpChunkParserRule, ChunkRule,
ChinkRule, MergeRule, SplitRule, UnChunkRule, ChunkString,
ChunkScore, ChunkedTaggedTokenizer, demo, demo_eval,
tag_pattern2re_pattern
"""
from nltk import TaskI, PropertyIndirectionMixIn
from nltk.parser import ParserI, AbstractParser
from nltk.tree import Tree
from nltk.tokenizer import TokenizerI, AbstractTokenizer
from nltk.tokenizer import LineTokenizer, RegexpTokenizer, WhitespaceTokenizer
from nltk.token import Token, CharSpanLocation, SubtokenContextPointer
from nltk.chktype import chktype
from sets import Set
import types, re
##//////////////////////////////////////////////////////
## Chunk Parser Interface
##//////////////////////////////////////////////////////
class ChunkParserI(ParserI):
"""
A processing interface for identifying non-overlapping groups in
unrestricted text. Typically, chunk parsers are used to find base
syntactic constituents, such as base noun phrases. Unlike
L{ParserI}, C{ChunkParserI} guarantees that the C{parse} method
will always generate a parse.
@inprop: C{SUBTOKENS}: The list of subtokens to be parsed.
@outprop: C{TREE}: The parse tree. I{(generated by L{parse})}
@outprop: C{TREES}: A list of possible parse trees.
I{(generated by L{parse_n})}
"""
def parse(self, token):
"""
Find the best chunk structure for the given token's
C{subtokens}, and output it to the token's C{TREE} property.
@param token: The token whose subtokens should be parsed.
@type token: L{Token}
"""
assert 0, "ChunkParserI is an abstract interface"
def parse_n(self, token, n=None):
"""
Find a list of the C{n} most likely chunk structures for the
given token's C{subtokens}, and output it to the token's
C{TREES} property. If the given token has fewer than C{n}
chunk structures, then find all chunk structures. The chunk
structures should be stored in descending order of estimated
likelihood.
@type n: C{int}
@param n: The number of chunk structures to generate. At most
C{n} chunk structures will be generated. If C{n} is not
specified, generate all chunk structures.
@type token: L{Token}
@param token: The token whose subtokens should be chunked.
"""
assert 0, "ChunkParserI is an abstract interface"
##//////////////////////////////////////////////////////
## Evaluation Helper
##//////////////////////////////////////////////////////
class ChunkScore:
"""
A utility class for scoring chunk parsers. C{ChunkScore} can
evaluate a chunk parser's output, based on a number of statistics
(precision, recall, f-measure, missed chunks, incorrect chunks).
It can also combine the scores from the parsing of multiple texts;
this makes it significantly easier to evaluate a chunk parser that
operates one sentence at a time.
Texts are evaluated with the C{score} method. The results of
evaluation can be accessed via a number of accessor methods, such
as C{precision} and C{f_measure}. A typical use of the
C{ChunkScore} class is::
>>> chunkscore = ChunkScore()
>>> for correct in correct_sentences:
... guess = chunkparser.parse(correct.leaves())
... chunkscore.score(correct, guess)
>>> print 'F Measure:', chunkscore.f_measure()
F Measure: 0.823
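The underlying arithmetic can be checked by hand with plain sets; this is a
hedged re-implementation of the formulas on toy data, not the class itself:

```python
# Score arithmetic used by ChunkScore, on toy chunk sets.
correct = {('the', 'dog'), ('the', 'hill'), ('a', 'cat')}
guessed = {('the', 'dog'), ('the', 'hill'), ('big', 'dog')}

tp = len(correct & guessed)    # true positives  = 2
fp = len(guessed - correct)    # false positives = 1
fn = len(correct - guessed)    # false negatives = 1

precision = tp / (tp + fp)     # 2/3
recall = tp / (tp + fn)        # 2/3
alpha = 0.5                    # equal weighting of precision and recall
f = 1 / (alpha / precision + (1 - alpha) / recall)
```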
@ivar kwargs: Keyword arguments:
- max_tp_examples: The maximum number of actual examples of true
positives to record. This affects the C{correct} member
function: C{correct} will not return more than this number
of true positive examples. This does *not* affect any of
the numerical metrics (precision, recall, or f-measure)
- max_fp_examples: The maximum number of actual examples of false
positives to record. This affects the C{incorrect} member
function and the C{guessed} member function: C{incorrect}
will not return more than this number of examples, and
C{guessed} will not return more than this number of true
positive examples. This does *not* affect any of the
numerical metrics (precision, recall, or f-measure)
- max_fn_examples: The maximum number of actual examples of false
negatives to record. This affects the C{missed} member
function and the C{correct} member function: C{missed}
will not return more than this number of examples, and
C{correct} will not return more than this number of true
negative examples. This does *not* affect any of the
numerical metrics (precision, recall, or f-measure)
@type _tp: C{list} of C{Token}
@ivar _tp: List of true positives
@type _fp: C{list} of C{Token}
@ivar _fp: List of false positives
@type _fn: C{list} of C{Token}
@ivar _fn: List of false negatives
@type _tp_num: C{int}
@ivar _tp_num: Number of true positives
@type _fp_num: C{int}
@ivar _fp_num: Number of false positives
@type _fn_num: C{int}
@ivar _fn_num: Number of false negatives.
"""
def __init__(self, **kwargs):
self._correct = Set()
self._guessed = Set()
self._tp = Set()
self._fp = Set()
self._fn = Set()
self._max_tp = kwargs.get('max_tp_examples', 100)
self._max_fp = kwargs.get('max_fp_examples', 100)
self._max_fn = kwargs.get('max_fn_examples', 100)
self._tp_num = 0
self._fp_num = 0
self._fn_num = 0
def _childtuple(self, t):
return tuple([c.freeze() for c in t])
def score(self, correct, guessed):
"""
Given a correctly chunked text, score another chunked text.
Merge the results with all previous scorings. Note that when
the score() function is used repeatedly, each token I{must}
have a unique location. For sentence-at-a-time chunking, it
is recommended that you use locations like C{@12w@3s} (the
word at index 12 of the sentence at index 3).
@type correct: chunk structure
@param correct: The known-correct ("gold standard") chunked
sentence.
@type guessed: chunk structure
@param guessed: The chunked sentence to be scored.
"""
assert chktype(1, correct, Tree)
assert chktype(2, guessed, Tree)
self._correct |= Set([self._childtuple(t) for t in correct
if isinstance(t, Tree)])
self._guessed |= Set([self._childtuple(t) for t in guessed
if isinstance(t, Tree)])
self._tp = self._guessed & self._correct
self._fn = self._correct - self._guessed
self._fp = self._guessed - self._correct
self._tp_num = len(self._tp)
self._fp_num = len(self._fp)
self._fn_num = len(self._fn)
def precision(self):
"""
@return: the overall precision for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
"""
div = self._tp_num + self._fp_num
if div == 0: return 0
else: return float(self._tp_num) / div
def recall(self):
"""
@return: the overall recall for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
"""
div = self._tp_num + self._fn_num
if div == 0: return 0
else: return float(self._tp_num) / div
def f_measure(self, alpha=0.5):
"""
@return: the overall F measure for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
@param alpha: the relative weighting of precision and recall.
Larger alpha biases the score towards the precision value,
while smaller alpha biases the score towards the recall
value. C{alpha} should have a value in the range [0,1].
@type alpha: C{float}
"""
assert chktype(1, alpha, types.FloatType, types.IntType)
p = self.precision()
r = self.recall()
if p == 0 or r == 0: # what if alpha is 0 or 1?
return 0
return 1/(alpha/p + (1-alpha)/r)
def missed(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the
correct chunk structures, but not in the guessed chunk
structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the missed chunks.
"""
return list(self._fn)
def incorrect(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the
guessed chunk structures, but not in the correct chunk
structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the incorrect chunks.
"""
return list(self._fp)
def correct(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the correct
chunk structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the correct chunks.
"""
return list(self._correct)
def guessed(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the guessed
chunk structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the guessed chunks.
"""
return list(self._guessed)
def __len__(self):
return self._tp_num + self._fn_num
def __repr__(self):
"""
@rtype: C{String}
@return: a concise representation of this C{ChunkScoring}.
"""
return '<ChunkScoring of '+`len(self)`+' chunks>'
def __str__(self):
"""
@rtype: C{String}
@return: a verbose representation of this C{ChunkScoring}.
This representation includes the precision, recall, and
f-measure scores. For other information about the score,
use the accessor methods (e.g., C{missed()} and
C{incorrect()}).
"""
return ("ChunkParser score:\n" +
(" Precision: %5.1f%%\n" % (self.precision()*100)) +
(" Recall: %5.1f%%\n" % (self.recall()*100))+
(" F-Measure: %5.1f%%\n" % (self.f_measure()*100)))
def _chunk_toks(self, text):
"""
@return: The list of tokens contained in C{text}.
"""
return [tok for tok in text if isinstance(tok, Tree)]
##//////////////////////////////////////////////////////
## Precompiled regular expressions
##//////////////////////////////////////////////////////
_TAGCHAR = r'[^\{\}<>]'
_TAG = r'(<%s+?>)' % _TAGCHAR
_VALID_TAG_PATTERN = re.compile(r'^((%s|<%s>)*)$' %
('[^\{\}<>]+',
'[^\{\}<>]+'))
##//////////////////////////////////////////////////////
## ChunkString
##//////////////////////////////////////////////////////
class ChunkString(PropertyIndirectionMixIn):
"""
A string-based encoding of a particular chunking of a text.
Internally, the C{ChunkString} class uses a single string to
encode the chunking of the input text. This string contains a
sequence of angle-bracket delimited tags, with chunking indicated
by braces. An example of this encoding is::
{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}<.>{<DT><NN>}<VBD><.>
C{ChunkString}s are created from tagged texts (i.e., C{list}s of
C{tokens} whose type is C{TaggedType}). Initially, nothing is
chunked.
The chunking of a C{ChunkString} can be modified with the C{xform}
method, which uses a regular expression to transform the string
representation. These transformations should only add and remove
braces; they should I{not} modify the sequence of angle-bracket
delimited tags.
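A transformation of this sort, plus the empty-chunk cleanup the class
performs after each substitution, can be sketched as:

```python
import re

# Sketch of a ChunkString-style xform: the substitution may only add or
# remove braces, and any '{}' pairs it creates are stripped afterwards.
s = '<DT><JJ><NN><VBD>'
s = re.sub(r'(<DT><JJ><NN>)', r'{\1}', s)   # add a chunk
s = re.sub(r'\{\}', '', s)                  # drop any empty chunks
```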
@type _str: C{string}
@ivar _str: The internal string representation of the text's
encoding. This string representation contains a sequence of
angle-bracket delimited tags, with chunking indicated by
braces. An example of this encoding is::
{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}<.>{<DT><NN>}<VBD><.>
@type _ttoks: C{list} of C{Token}
@ivar _ttoks: The text whose chunking is encoded by this
C{ChunkString}.
@ivar _debug: The debug level. See the constructor docs.
@cvar IN_CHUNK_PATTERN: A zero-width regexp pattern string that
will only match positions that are in chunks.
@cvar IN_CHINK_PATTERN: A zero-width regexp pattern string that
will only match positions that are in chinks.
"""
IN_CHUNK_PATTERN = r'(?=[^\{]*\})'
IN_CHINK_PATTERN = r'(?=[^\}]*(\{|$))'
# These are used by _verify
_CHUNK = r'(\{%s+?\})+?' % _TAG
_CHINK = r'(%s+?)+?' % _TAG
_VALID = re.compile(r'(\{?%s\}?)*?' % _TAG)
_BRACKETS = re.compile('[^\{\}]+')
_BALANCED_BRACKETS = re.compile(r'(\{\})*$')
def __init__(self, tagged_tokens, debug_level=3, **property_names):
"""
Construct a new C{ChunkString} that encodes the chunking of
the text C{tagged_tokens}.
@type tagged_tokens: C{list} of C{Token} with C{TaggedType}s
@param tagged_tokens: The text whose chunking is encoded by
this C{ChunkString}.
@type debug_level: int
@param debug_level: The level of debugging which should be
applied to transformations on the C{ChunkString}. The
valid levels are:
- 0: no checks
- 1: full check on to_chunkstruct
- 2: full check on to_chunkstruct and cursory check after
each transformation.
- 3: full check on to_chunkstruct and full check after
each transformation.
We recommend you use at least level 1. You should
probably use level 3 if you use any non-standard
subclasses of C{RegexpChunkParserRule}.
"""
assert chktype(1, tagged_tokens, [Token, Tree], (Token, Tree), Tree)
assert chktype(2, debug_level, types.IntType)
PropertyIndirectionMixIn.__init__(self, **property_names)
self._ttoks = tagged_tokens
tags = [self._tag(tok) for tok in tagged_tokens]
self._str = '<' + '><'.join(tags) + '>'
self._debug = debug_level
def _tag(self, tok):
if isinstance(tok, Token):
return tok[self.property('TAG')]
elif isinstance(tok, Tree):
return tok.node
else:
raise ValueError, 'tagged_tokens must contain tokens and trees'
def _verify(self, verify_tags):
"""
Check to make sure that C{_str} still corresponds to some chunked
version of C{_ttoks}.
@type verify_tags: C{boolean}
@param verify_tags: Whether the individual tags should be
checked. If this is false, C{_verify} will check to make
sure that C{_str} encodes a chunked version of I{some}
list of tokens. If this is true, then C{_verify} will
check to make sure that the tags in C{_str} match those in
C{_ttoks}.
@raise ValueError: if this C{ChunkString}'s internal string
representation is invalid or not consistent with _ttoks.
"""
# Check overall form
if not ChunkString._VALID.match(self._str):
raise ValueError('Transformation generated invalid chunkstring')
# Check that parens are balanced. If the string is long, we
# have to do this in pieces, to avoid a maximum recursion
# depth limit for regular expressions.
brackets = ChunkString._BRACKETS.sub('', self._str)
for i in range(1+len(brackets)/5000):
substr = brackets[i*5000:i*5000+5000]
if not ChunkString._BALANCED_BRACKETS.match(substr):
raise ValueError('Transformation generated invalid '+
'chunkstring')
if verify_tags<=0: return
tags1 = (re.split(r'[\{\}<>]+', self._str))[1:-1]
tags2 = [self._tag(tok) for tok in self._ttoks]
if tags1 != tags2:
raise ValueError('Transformation generated invalid chunkstring')
def to_chunkstruct(self, chunk_node='CHUNK', top_node='TEXT'):
"""
@return: the chunk structure encoded by this C{ChunkString}.
A chunk structure is a C{list} containing tagged tokens
and sublists of tagged tokens, where each sublist
represents a single chunk.
@rtype: chunk structure
@raise ValueError: If a transformation has generated an
invalid chunkstring.
"""
if self._debug > 0: self._verify(1)
# Extract a list of alternating chinks & chunks
pieces = re.split('[{}]', self._str)
# Use this alternating list to create the chunkstruct.
chunkstruct = []
index = 0
piece_in_chunk = 0
for piece in pieces:
# Find the list of tokens contained in this piece.
length = piece.count('<')
subsequence = self._ttoks[index:index+length]
# Add this list of tokens to our chunkstruct.
if piece_in_chunk:
chunkstruct.append(Tree(chunk_node, subsequence))
else:
chunkstruct += subsequence
# Update index, piece_in_chunk
index += length
piece_in_chunk = not piece_in_chunk
return Tree(top_node, chunkstruct)
def xform(self, regexp, repl):
"""
Apply the given transformation to this C{ChunkString}'s string
encoding. In particular, find all occurrences that match
C{regexp}, and replace them using C{repl} (as done by
C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type regexp: C{string} or C{regexp}
@param regexp: A regular expression matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(regexp).__name__ != 'SRE_Pattern':
assert chktype(1, regexp, types.StringType)
assert chktype(2, repl, types.StringType)
# Do the actual substitution
self._str = re.sub(regexp, repl, self._str)
# The substitution might have generated "empty chunks"
# (substrings of the form "{}"). Remove them, so they don't
# interfere with other transformations.
self._str = re.sub('\{\}', '', self._str)
# Make sure that the transformation was legal.
if self._debug > 1: self._verify(self._debug-2)
def xform_chunk(self, pattern, repl):
# Docstring adopted from xform's docstring.
"""
Apply the given transformation to the chunks in this
C{ChunkString}'s string encoding. In particular, find all
occurrences within chunks that match C{regexp}, and replace
them using C{repl} (as done by C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type pattern: C{string}
@param pattern: A regular expression pattern matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(pattern).__name__ == 'SRE_Pattern': pattern = pattern.pattern
assert chktype(1, pattern, types.StringType)
assert chktype(2, repl, types.StringType)
self.xform(pattern+ChunkString.IN_CHUNK_PATTERN, repl)
def xform_chink(self, pattern, repl):
# Docstring adopted from xform's docstring.
"""
Apply the given transformation to the chinks in this
C{ChunkString}'s string encoding. In particular, find all
occurrences within chinks that match C{regexp}, and replace
them using C{repl} (as done by C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type pattern: C{string} or C{regexp}
@param pattern: A regular expression pattern matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(pattern).__name__ == 'SRE_Pattern': pattern = pattern.pattern
assert chktype(1, pattern, types.StringType)
assert chktype(2, repl, types.StringType)
self.xform(pattern+ChunkString.IN_CHINK_PATTERN, repl)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this C{ChunkString}. This
string representation has the form::
<ChunkString: '{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}'>
"""
return '<ChunkString: %s>' % `self._str`
def __str__(self):
"""
@rtype: C{string}
@return: A formatted representation of this C{ChunkString}'s
string encoding. This representation will include extra
spaces to ensure that tags will line up with the
representation of other C{ChunkStrings} for the same text,
regardless of the chunking.
"""
# Add spaces to make everything line up.
str = re.sub(r'>(?!\})', r'> ', self._str)
str = re.sub(r'([^\{])<', r'\1 <', str)
if str[0] == '<': str = ' ' + str
return str
##//////////////////////////////////////////////////////
## Rules
##//////////////////////////////////////////////////////
def tag_pattern2re_pattern(tag_pattern):
"""
Convert a tag pattern to a regular expression pattern. A X{tag
pattern} is a modified version of a regular expression, designed
for matching sequences of tags. The differences between regular
expression patterns and tag patterns are:
- In tag patterns, C{'<'} and C{'>'} act as parentheses; so
C{'<NN>+'} matches one or more repetitions of C{'<NN>'}, not
C{'<NN'} followed by one or more repetitions of C{'>'}.
- Whitespace in tag patterns is ignored. So
C{'<DT> | <NN>'} is equivalent to C{'<DT>|<NN>'}.
- In tag patterns, C{'.'} is equivalent to C{'[^{}<>]'}; so
C{'<NN.*>'} matches any single tag starting with C{'NN'}.
In particular, C{tag_pattern2re_pattern} performs the following
transformations on the given pattern:
- Replace '.' with '[^<>{}]'
- Remove any whitespace
- Add extra parens around '<' and '>', to make '<' and '>' act
like parentheses. E.g., so that in '<NN>+', the '+' has scope
over the entire '<NN>'; and so that in '<NN|IN>', the '|' has
scope over 'NN' and 'IN', but not '<' or '>'.
- Check to make sure the resulting pattern is valid.
@type tag_pattern: C{string}
@param tag_pattern: The tag pattern to convert to a regular
expression pattern.
@raise ValueError: If C{tag_pattern} is not a valid tag pattern.
In particular, C{tag_pattern} should not include braces; and it
should not contain nested or mismatched angle-brackets.
@rtype: C{string}
@return: A regular expression pattern corresponding to
C{tag_pattern}.
"""
assert chktype(1, tag_pattern, types.StringType)
# Clean up the regular expression
tag_pattern = re.sub(r'\s', '', tag_pattern)
tag_pattern = re.sub(r'<', '(<(', tag_pattern)
tag_pattern = re.sub(r'>', ')>)', tag_pattern)
# Check the regular expression
if not _VALID_TAG_PATTERN.match(tag_pattern):
raise ValueError('Bad tag pattern: %s' % tag_pattern)
# Replace "." with _TAGCHAR.
# We have to do this after, since it adds {}[]<>s, which would
# confuse _VALID_TAG_PATTERN.
# PRE doesn't have lookbehind assertions, so reverse twice, and do
# the pattern backwards (with lookahead assertions). This can be
# made much cleaner once we can switch back to SRE.
def reverse_str(str):
lst = list(str)
lst.reverse()
return ''.join(lst)
tc_rev = reverse_str(_TAGCHAR)
reversed = reverse_str(tag_pattern)
reversed = re.sub(r'\.(?!\\(\\\\)*($|[^\\]))', tc_rev, reversed)
tag_pattern = reverse_str(reversed)
return tag_pattern
class RegexpChunkParserRule:
"""
A rule specifying how to modify the chunking in a C{ChunkString},
using a transformational regular expression. The
C{RegexpChunkParserRule} class itself can be used to implement any
transformational rule based on regular expressions. There are
also a number of subclasses, which can be used to implement
simpler types of rules, based on matching regular expressions.
Each C{RegexpChunkParserRule} has a regular expression and a
replacement expression. When a C{RegexpChunkParserRule} is X{applied}
to a C{ChunkString}, it searches the C{ChunkString} for any
substring that matches the regular expression, and replaces it
using the replacement expression. This search/replace operation
has the same semantics as C{re.sub}.
Each C{RegexpChunkParserRule} also has a description string, which
gives a short (typically less than 75 characters) description of
the purpose of the rule.
This transformation defined by this C{RegexpChunkParserRule} should
only add and remove braces; it should I{not} modify the sequence
of angle-bracket delimited tags. Furthermore, this transformation
may not result in nested or mismatched bracketing.
"""
def __init__(self, regexp, repl, descr):
"""
Construct a new RegexpChunkParserRule.
@type regexp: C{regexp} or C{string}
@param regexp: This C{RegexpChunkParserRule}'s regular expression.
When this rule is applied to a C{ChunkString}, any
substring that matches C{regexp} will be replaced using
the replacement string C{repl}. Note that this must be a
normal regular expression, not a tag pattern.
@type repl: C{string}
@param repl: This C{RegexpChunkParserRule}'s replacement
expression. When this rule is applied to a
C{ChunkString}, any substring that matches C{regexp} will
be replaced using C{repl}.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
if type(regexp).__name__ == 'SRE_Pattern': regexp = regexp.pattern
assert chktype(1, regexp, types.StringType)
assert chktype(2, repl, types.StringType)
assert chktype(3, descr, types.StringType)
self._repl = repl
self._descr = descr
if type(regexp) == types.StringType:
self._regexp = re.compile(regexp)
else:
self._regexp = regexp
def apply(self, chunkstr):
# Keep docstring generic so we can inherit it.
"""
Apply this rule to the given C{ChunkString}. See the
class reference documentation for a description of what it
means to apply a rule.
@type chunkstr: C{ChunkString}
@param chunkstr: The chunkstring to which this rule is
applied.
@rtype: C{None}
        @raise ValueError: If this transformation generates an
invalid chunkstring.
"""
assert chktype(1, chunkstr, ChunkString)
chunkstr.xform(self._regexp, self._repl)
def descr(self):
"""
@rtype: C{string}
@return: a short description of the purpose and/or effect of
this rule.
"""
return self._descr
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<RegexpChunkParserRule: '{<IN|VB.*>}'->'<IN>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<RegexpChunkParserRule: '+`self._regexp.pattern`+
'->'+`self._repl`+'>')
class ChunkRule(RegexpChunkParserRule):
"""
A rule specifying how to add chunks to a C{ChunkString}, using a
matching tag pattern. When applied to a C{ChunkString}, it will
find any substring that matches this tag pattern and that is not
already part of a chunk, and create a new chunk containing that
substring.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{ChunkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
chunk any substring that matches this tag pattern and that
is not already part of a chunk.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('(?P<chunk>%s)%s' %
(tag_pattern2re_pattern(tag_pattern),
ChunkString.IN_CHINK_PATTERN))
RegexpChunkParserRule.__init__(self, regexp, '{\g<chunk>}', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<ChunkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<ChunkRule: '+`self._pattern`+'>'
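Stripped of C{ChunkString}'s validity checking and the C{IN_CHINK_PATTERN} guard, a chunk rule is a single C{re.sub} that wraps each match in braces. A minimal standalone sketch (the helper name is hypothetical):

```python
import re

def apply_chunk_rule(tag_string, tag_regexp):
    # Wrap every match of tag_regexp in {...}, marking it as a chunk.
    # The real rule also requires the match to lie outside existing chunks.
    return re.sub('(?P<chunk>%s)' % tag_regexp, r'{\g<chunk>}', tag_string)

s = '<DT><JJ><NN><VBD><IN><DT><NN>'
chunked = apply_chunk_rule(s, '(<DT>)?(<JJ>)*(<NN>)')
# -> '{<DT><JJ><NN>}<VBD><IN>{<DT><NN>}'
```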
class ChinkRule(RegexpChunkParserRule):
"""
    A rule specifying how to remove chinks from a C{ChunkString},
using a matching tag pattern. When applied to a
C{ChunkString}, it will find any substring that matches this
tag pattern and that is contained in a chunk, and remove it
from that chunk, thus creating two new chunks.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{ChinkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
find any substring that matches this tag pattern and that
is contained in a chunk, and remove it from that chunk,
thus creating two new chunks.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('(?P<chink>%s)%s' %
(tag_pattern2re_pattern(tag_pattern),
ChunkString.IN_CHUNK_PATTERN))
RegexpChunkParserRule.__init__(self, regexp, '}\g<chink>{', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<ChinkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<ChinkRule: '+`self._pattern`+'>'
class UnChunkRule(RegexpChunkParserRule):
"""
    A rule specifying how to remove chunks from a C{ChunkString},
using a matching tag pattern. When applied to a
C{ChunkString}, it will find any complete chunk that matches this
tag pattern, and un-chunk it.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{UnChunkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
find any complete chunk that matches this tag pattern,
and un-chunk it.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('\{(?P<chunk>%s)\}' %
tag_pattern2re_pattern(tag_pattern))
RegexpChunkParserRule.__init__(self, regexp, '\g<chunk>', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<UnChunkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<UnChunkRule: '+`self._pattern`+'>'
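The corresponding substitution for an un-chunk rule simply strips the braces from a complete matching chunk. Sketched standalone (helper name hypothetical):

```python
import re

def apply_unchunk_rule(chunk_string, tag_regexp):
    # Remove the braces around any complete chunk matching tag_regexp.
    return re.sub(r'\{(?P<chunk>%s)\}' % tag_regexp, r'\g<chunk>',
                  chunk_string)

s = '{<VBD>}{<DT><NN>}'
unchunked = apply_unchunk_rule(s, '<VBD>')
# -> '<VBD>{<DT><NN>}'
```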
class MergeRule(RegexpChunkParserRule):
"""
A rule specifying how to merge chunks in a C{ChunkString}, using
two matching tag patterns: a left pattern, and a right pattern.
    When applied to a C{ChunkString}, it will find any chunk whose end
    matches the left pattern and that is immediately followed by a chunk
    whose beginning matches the right pattern.  It will then merge those
    two chunks into a single chunk.
"""
def __init__(self, left_tag_pattern, right_tag_pattern, descr):
"""
Construct a new C{MergeRule}.
@type right_tag_pattern: C{string}
@param right_tag_pattern: This rule's right tag
pattern. When applied to a C{ChunkString}, this
rule will find any chunk whose end matches
C{left_tag_pattern}, and immediately followed by a chunk
whose beginning matches this pattern. It will
then merge those two chunks into a single chunk.
@type left_tag_pattern: C{string}
@param left_tag_pattern: This rule's left tag
pattern. When applied to a C{ChunkString}, this
rule will find any chunk whose end matches
this pattern, and immediately followed by a chunk
whose beginning matches C{right_tag_pattern}. It will
then merge those two chunks into a single chunk.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, left_tag_pattern, types.StringType)
assert chktype(2, right_tag_pattern, types.StringType)
assert chktype(3, descr, types.StringType)
self._left_tag_pattern = left_tag_pattern
self._right_tag_pattern = right_tag_pattern
regexp = re.compile('(?P<left>%s)}{(?=%s)' %
(tag_pattern2re_pattern(left_tag_pattern),
tag_pattern2re_pattern(right_tag_pattern)))
RegexpChunkParserRule.__init__(self, regexp, '\g<left>', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<MergeRule: '<NN|DT|JJ>', '<NN|JJ>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<MergeRule: '+`self._left_tag_pattern`+', '+
`self._right_tag_pattern`+'>')
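The merge substitution deletes the C{'}{'} boundary between two qualifying chunks: the left match is kept via a backreference, while the right side is tested with a lookahead so it stays in place for later matches. A standalone sketch (helper name hypothetical):

```python
import re

def apply_merge_rule(chunk_string, left_re, right_re):
    # Delete the '}{' between a chunk ending in left_re and a chunk
    # beginning with right_re; the lookahead leaves the right chunk intact.
    pattern = '(?P<left>%s)}{(?=%s)' % (left_re, right_re)
    return re.sub(pattern, r'\g<left>', chunk_string)

merged = apply_merge_rule('{<DT><JJ>}{<NN>}<VBD>', '<JJ>', '<NN>')
# -> '{<DT><JJ><NN>}<VBD>'
```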
class SplitRule(RegexpChunkParserRule):
"""
A rule specifying how to split chunks in a C{ChunkString}, using
two matching tag patterns: a left pattern, and a right pattern.
When applied to a C{ChunkString}, it will find any chunk that
matches the left pattern followed by the right pattern. It will
then split the chunk into two new chunks, at the point between the
two pattern matches.
"""
def __init__(self, left_tag_pattern, right_tag_pattern, descr):
"""
Construct a new C{SplitRule}.
@type right_tag_pattern: C{string}
@param right_tag_pattern: This rule's right tag
pattern. When applied to a C{ChunkString}, this rule will
find any chunk containing a substring that matches
C{left_tag_pattern} followed by this pattern. It will
then split the chunk into two new chunks at the point
between these two matching patterns.
@type left_tag_pattern: C{string}
@param left_tag_pattern: This rule's left tag
pattern. When applied to a C{ChunkString}, this rule will
find any chunk containing a substring that matches this
pattern followed by C{right_tag_pattern}. It will then
split the chunk into two new chunks at the point between
these two matching patterns.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, left_tag_pattern, types.StringType)
assert chktype(2, right_tag_pattern, types.StringType)
assert chktype(3, descr, types.StringType)
self._left_tag_pattern = left_tag_pattern
self._right_tag_pattern = right_tag_pattern
regexp = re.compile('(?P<left>%s)(?=%s)' %
(tag_pattern2re_pattern(left_tag_pattern),
tag_pattern2re_pattern(right_tag_pattern)))
RegexpChunkParserRule.__init__(self, regexp, r'\g<left>}{', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<SplitRule: '<NN>', '<DT>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<SplitRule: '+`self._left_tag_pattern`+', '+
`self._right_tag_pattern`+'>')
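Taken together, the rules are just successive substitutions over one shared tag string. A standalone sketch of the chunk-everything-then-chink strategy, plus a split, using plain C{re.sub} (hypothetical helper names, and without C{ChunkString}'s validity checks):

```python
import re

def chunk_all(s):
    # chunk every maximal run of tags
    return re.sub(r'((?:<[^<>]+>)+)', r'{\1}', s)

def chink(s, tag_re):
    # push each matching tag out of its surrounding chunk: tag -> }tag{
    return re.sub('(%s)' % tag_re, r'}\1{', s)

def split(s, left_re, right_re):
    # insert a chunk boundary between a left match and a right match
    return re.sub('(?P<left>%s)(?=%s)' % (left_re, right_re),
                  r'\g<left>}{', s)

s = chunk_all('<DT><NN><VBD><IN><DT><NN>')   # one big chunk
s = chink(s, r'<VBD>|<IN>')                  # expel verbs/prepositions
s = s.replace('{}', '')                      # drop empty chunks at the seams
# s -> '{<DT><NN>}<VBD><IN>{<DT><NN>}'

t = split('{<DT><NN><DT><NN>}', '<NN>', '<DT>')
# t -> '{<DT><NN>}{<DT><NN>}'
```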
##//////////////////////////////////////////////////////
## RegexpChunkParser
##//////////////////////////////////////////////////////
class RegexpChunkParser(ChunkParserI, AbstractParser):
"""
    A regular expression based chunk parser.  C{RegexpChunkParser} uses a
    sequence of X{rules} to find chunks within a text.  The chunking of
the text is encoded using a C{ChunkString}, and each rule acts by
modifying the chunking in the C{ChunkString}. The rules are all
implemented using regular expression matching and substitution.
The C{RegexpChunkParserRule} class and its subclasses (C{ChunkRule},
C{ChinkRule}, C{UnChunkRule}, C{MergeRule}, and C{SplitRule})
define the rules that are used by C{RegexpChunkParser}. Each rule
defines an C{apply} method, which modifies the chunking encoded
by a given C{ChunkString}.
@type _rules: C{list} of C{RegexpChunkParserRule}
@ivar _rules: The list of rules that should be applied to a text.
@type _trace: C{int}
@ivar _trace: The default level of tracing.
@inprop: C{SUBTOKENS}: The list of subtokens to be chunked.
@inprop: C{LEAF}: The string content of the subtokens.
@outprop: C{TREE}: The chunk structure. I{(generated by L{parse})}
@outprop: C{TREES}: A list of possible chunk structures.
I{(generated by L{parse_n})}
"""
def __init__(self, rules, chunk_node='CHUNK', top_node='TEXT',
trace=0, **property_names):
"""
Construct a new C{RegexpChunkParser}.
@type rules: C{list} of C{RegexpChunkParserRule}
@param rules: The sequence of rules that should be used to
generate the chunking for a tagged text.
@type chunk_node: C{string}
@param chunk_node: The node value that should be used for
chunk subtrees. This is typically a short string
describing the type of information contained by the chunk,
such as C{"NP"} for base noun phrases.
@type top_node: C{string}
@param top_node: The node value that should be used for the
top node of the chunk structure.
@type trace: C{int}
@param trace: The level of tracing that should be used when
parsing a text. C{0} will generate no tracing output;
C{1} will generate normal tracing output; and C{2} or
            higher will generate verbose tracing output.
"""
assert chktype(1, rules, [RegexpChunkParserRule], (RegexpChunkParserRule,))
assert chktype(4, trace, types.IntType)
self._rules = rules
self._trace = trace
self._chunk_node = chunk_node
self._top_node = top_node
AbstractParser.__init__(self, **property_names)
def _trace_apply(self, chunkstr, verbose):
"""
Apply each of this C{RegexpChunkParser}'s rules to C{chunkstr}, in
turn. Generate trace output between each rule. If C{verbose}
is true, then generate verbose output.
@type chunkstr: C{ChunkString}
@param chunkstr: The chunk string to which each rule should be
applied.
@type verbose: C{boolean}
@param verbose: Whether output should be verbose.
@rtype: C{None}
"""
indent = ' '*(35-len(str(chunkstr))/2)
print 'Input:'
print indent, chunkstr
for rule in self._rules:
rule.apply(chunkstr)
if verbose:
print rule.descr()+' ('+`rule`+'):'
else:
print rule.descr()+':'
print indent, chunkstr
print
def _notrace_apply(self, chunkstr):
"""
Apply each of this C{RegexpChunkParser}'s rules to C{chunkstr}, in
turn.
@param chunkstr: The chunk string to which each rule should be
applied.
@type chunkstr: C{ChunkString}
@rtype: C{None}
"""
for rule in self._rules:
rule.apply(chunkstr)
def parse(self, token, trace=None):
"""
@rtype: chunk structure
@return: a chunk structure that encodes the chunks in a given
tagged sentence. A chunk is a non-overlapping linguistic
group, such as a noun phrase. The set of chunks
identified in the chunk structure depends on the rules
used to define this C{RegexpChunkParser}.
@type trace: C{int}
@param trace: The level of tracing that should be used when
parsing a text. C{0} will generate no tracing output;
C{1} will generate normal tracing output; and C{2} or
            higher will generate verbose tracing output.  This value
overrides the trace level value that was given to the
constructor.
"""
assert chktype(1, token, Token)
assert chktype(2, trace, types.NoneType, types.IntType)
SUBTOKENS = self.property('SUBTOKENS')
TREE = self.property('TREE')
if len(token[SUBTOKENS]) == 0:
print 'Warning: parsing empty text'
token[TREE] = Tree(self._top_node, [])
return
# Use the default trace value?
if trace == None: trace = self._trace
# Create the chunkstring, using the same properties as the parser
chunkstr = ChunkString(token[SUBTOKENS], **self.property_names())
# Apply the sequence of rules to the chunkstring.
if trace:
verbose = (trace>1)
self._trace_apply(chunkstr, verbose)
else:
self._notrace_apply(chunkstr)
# Use the chunkstring to create a chunk structure.
tree = chunkstr.to_chunkstruct(self._chunk_node, self._top_node)
token[TREE] = tree
def rules(self):
"""
@return: the sequence of rules used by this C{ChunkParser}.
@rtype: C{list} of C{RegexpChunkParserRule}
"""
return self._rules
def __repr__(self):
"""
@return: a concise string representation of this
C{RegexpChunkParser}.
@rtype: C{string}
"""
return "<RegexpChunkParser with %d rules>" % len(self._rules)
def __str__(self):
"""
@return: a verbose string representation of this
C{RegexpChunkParser}.
@rtype: C{string}
"""
s = "RegexpChunkParser with %d rules:\n" % len(self._rules)
margin = 0
for rule in self._rules:
margin = max(margin, len(rule.descr()))
if margin < 35:
format = " %" + `-(margin+3)` + "s%s\n"
else:
format = " %s\n %s\n"
for rule in self._rules:
s += format % (rule.descr(), `rule`)
return s[:-1]
##//////////////////////////////////////////////////////
## Demonstration code
##//////////////////////////////////////////////////////
def demo_eval(chunkparser, text):
"""
Demonstration code for evaluating a chunk parser, using a
C{ChunkScore}. This function assumes that C{text} contains one
sentence per line, and that each sentence has the form expected by
C{ChunkedTaggedTokenizer}. It runs the given chunk parser on each
sentence in the text, and scores the result. It prints the final
score (precision, recall, and f-measure); and reports the set of
chunks that were missed and the set of chunks that were
incorrect. (At most 10 missing chunks and 10 incorrect chunks are
reported).
@param chunkparser: The chunkparser to be tested
@type chunkparser: C{ChunkParserI}
@param text: The chunked tagged text that should be used for
evaluation.
@type text: C{string}
"""
assert chktype(1, chunkparser, ChunkParserI)
assert chktype(1, text, types.StringType)
# Evaluate our chunk parser.
chunkscore = ChunkScore()
from nltk.tokenreader import ChunkedTaggedTokenReader
ctt = ChunkedTaggedTokenReader(chunk_node='NP', SUBTOKENS='WORDS')
for sentence in text.split('\n'):
sentence = sentence.strip()
if not sentence: continue
gold = ctt.read_token(sentence)
test = Token(WORDS=gold['WORDS'])#, LOC=sentence['LOC'])
chunkparser.parse(test)
chunkscore.score(gold['TREE'], test['TREE'])
print '/'+('='*75)+'\\'
print 'Scoring', chunkparser
print ('-'*77)
print 'Precision: %5.1f%%' % (chunkscore.precision()*100), ' '*4,
print 'Recall: %5.1f%%' % (chunkscore.recall()*100), ' '*6,
print 'F-Measure: %5.1f%%' % (chunkscore.f_measure()*100)
# Missed chunks.
if chunkscore.missed():
print 'Missed:'
missed = chunkscore.missed()
# sort, so they'll always be listed in the same order.
missed.sort()
for chunk in missed[:10]:
print ' ', chunk
if len(chunkscore.missed()) > 10:
print ' ...'
# Incorrect chunks.
if chunkscore.incorrect():
print 'Incorrect:'
incorrect = chunkscore.incorrect()
incorrect.sort() # sort, so they'll always be listed in the same order.
for chunk in incorrect[:10]:
print ' ', chunk
if len(chunkscore.incorrect()) > 10:
print ' ...'
print '\\'+('='*75)+'/'
def demo():
"""
A demonstration for the C{RegexpChunkParser} class. A single text is
parsed with four different chunk parsers, using a variety of rules
and strategies.
"""
text = """\
[ the/DT little/JJ cat/NN ] sat/VBD on/IN [ the/DT mat/NN ] ./.
[ The/DT cats/NNS ] ./.
[ John/NNP ] saw/VBD [the/DT cat/NN] [the/DT dog/NN] liked/VBD ./."""
print '*'*75
print 'Evaluation text:'
print text
print '*'*75
# Use a simple regexp to define regular expressions.
r1 = ChunkRule(r'<DT>?<JJ>*<NN.*>', 'Chunk NPs')
cp = RegexpChunkParser([r1], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
# Use a chink rule to remove everything that's *not* an NP
r1 = ChunkRule(r'<.*>+', 'Chunk everything')
r2 = ChinkRule(r'<VB.*>|<IN>|<\.>', 'Unchunk VB and IN and .')
cp = RegexpChunkParser([r1, r2], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
# Unchunk non-NP words, and then merge consecutive NPs
r1 = ChunkRule(r'(<.*>)', 'Chunk each tag')
r2 = UnChunkRule(r'<VB.*>|<IN>|<.>', 'Unchunk VB? and IN and .')
r3 = MergeRule(r'<DT|JJ|NN.*>', r'<DT|JJ|NN.*>', 'Merge NPs')
cp = RegexpChunkParser([r1,r2,r3], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
# Chunk sequences of NP words, and split them at determiners
r1 = ChunkRule(r'(<DT|JJ|NN.*>+)', 'Chunk sequences of DT&JJ&NN')
r2 = SplitRule('', r'<DT>', 'Split before DT')
cp = RegexpChunkParser([r1,r2], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
if __name__ == '__main__':
demo()
"""
Classes and interfaces for identifying non-overlapping linguistic
groups (such as base noun phrases) in unrestricted text. This task is
called X{chunk parsing} or X{chunking}, and the identified groups are
called X{chunks}. The chunked text is represented using a shallow
tree called a "chunk structure." A X{chunk structure} is a tree
containing tokens and chunks, where each chunk is a subtree containing
only tokens. For example, the chunk structure for base noun phrase
chunks in the sentence "I saw the big dog on the hill" is::
(SENTENCE:
(NP: <I>)
<saw>
(NP: <the> <big> <dog>)
<on>
(NP: <the> <hill>))
To convert a chunk structure back to a list of tokens, simply use the
chunk structure's L{leaves<Tree.leaves>} method.
The C{parser.chunk} module defines L{ChunkParserI}, a standard
interface for chunking texts; and L{RegexpChunkParser}, a
regular-expression based implementation of that interface. It also
defines the L{ChunkedTaggedTokenizer} and L{ConllChunkedTokenizer}
classes, which tokenize strings containing chunked and tagged texts;
and L{ChunkScore}, a utility class for scoring chunk parsers.
RegexpChunkParser
=================
C{RegexpChunkParser} is an implementation of the chunk parser interface
that uses regular-expressions over tags to chunk a text. Its
C{parse} method first constructs a C{ChunkString}, which encodes a
particular chunking of the input text. Initially, nothing is
chunked. C{RegexpChunkParser} then applies a sequence of
C{RegexpChunkParserRule}s to the C{ChunkString}, each of which modifies
the chunking that it encodes. Finally, the C{ChunkString} is
transformed back into a chunk structure, which is returned.
C{RegexpChunkParser} can only be used to chunk a single kind of phrase.
For example, you can use a C{RegexpChunkParser} to chunk the noun
phrases in a text, or the verb phrases in a text; but you cannot
use it to simultaneously chunk both noun phrases and verb phrases in
the same text. (This is a limitation of C{RegexpChunkParser}, not of
chunk parsers in general.)
RegexpChunkParserRules
----------------------
C{RegexpChunkParserRule}s are transformational rules that update the
chunking of a text by modifying its C{ChunkString}. Each
C{RegexpChunkParserRule} defines the C{apply} method, which modifies
the chunking encoded by a C{ChunkString}. The
L{RegexpChunkParserRule} class itself can be used to implement any
transformational rule based on regular expressions. There are
also a number of subclasses, which can be used to implement
simpler types of rules:
- L{ChunkRule} chunks anything that matches a given regular
expression.
- L{ChinkRule} chinks anything that matches a given regular
expression.
- L{UnChunkRule} will un-chunk any chunk that matches a given
regular expression.
- L{MergeRule} can be used to merge two contiguous chunks.
- L{SplitRule} can be used to split a single chunk into two
smaller chunks.
Tag Patterns
~~~~~~~~~~~~
C{RegexpChunkParserRule}s use a modified version of regular
expression patterns, called X{tag patterns}. Tag patterns are
used to match sequences of tags. Examples of tag patterns are::
r'(<DT>|<JJ>|<NN>)+'
r'<NN>+'
r'<NN.*>'
The differences between regular expression patterns and tag
patterns are:
    - In tag patterns, C{'<'} and C{'>'} act as parentheses; so
C{'<NN>+'} matches one or more repetitions of C{'<NN>'}, not
C{'<NN'} followed by one or more repetitions of C{'>'}.
    - Whitespace in tag patterns is ignored.  So
      C{'<DT> | <NN>'} is equivalent to C{'<DT>|<NN>'}
    - In tag patterns, C{'.'} is equivalent to C{'[^{}<>]'}; so
      C{'<NN.*>'} matches any single tag starting with C{'NN'}.
The function L{tag_pattern2re_pattern} can be used to transform
a tag pattern to an equivalent regular expression pattern.
Efficiency
----------
Preliminary tests indicate that C{RegexpChunkParser} can chunk at a
rate of about 300 tokens/second, with a moderately complex rule
set.
There may be problems if C{RegexpChunkParser} is used with more than
5,000 tokens at a time. In particular, evaluation of some regular
expressions may cause the Python regular expression engine to
exceed its maximum recursion depth. We have attempted to minimize
these problems, but it is impossible to avoid them completely. We
therefore recommend that you apply the chunk parser to a single
sentence at a time.
Emacs Tip
---------
If you evaluate the following elisp expression in emacs, it will
colorize C{ChunkString}s when you use an interactive python shell
with emacs or xemacs ("C-c !")::
(let ()
(defconst comint-mode-font-lock-keywords
'(("<[^>]+>" 0 'font-lock-reference-face)
("[{}]" 0 'font-lock-function-name-face)))
(add-hook 'comint-mode-hook (lambda () (turn-on-font-lock))))
You can evaluate this code by copying it to a temporary buffer,
placing the cursor after the last close parenthesis, and typing
"C{C-x C-e}". You should evaluate it before running the interactive
session. The change will last until you close emacs.
Unresolved Issues
-----------------
If we use the C{re} module for regular expressions, Python's
regular expression engine generates "maximum recursion depth
exceeded" errors when processing very large texts, even for
regular expressions that should not require any recursion. We
therefore use the C{pre} module instead. But note that C{pre}
does not include Unicode support, so this module will not work
with unicode strings. Note also that C{pre} regular expressions
are not quite as advanced as C{re} ones (e.g., no leftward
zero-length assertions).
@type _VALID_CHUNK_STRING: C{regexp}
@var _VALID_CHUNK_STRING: A regular expression to test whether a chunk
string is valid.
@type _VALID_TAG_PATTERN: C{regexp}
@var _VALID_TAG_PATTERN: A regular expression to test whether a tag
pattern is valid.
@group Interfaces: ChunkParserI
@group Chunk Parsers: RegexpChunkParser
@group Chunk Parser Rules: RegexpChunkParserRule, ChunkRule,
ChinkRule, MergeRule, SplitRule, UnChunkRule, ChunkString
@group Evaluation: ChunkScore
@group Tokenizers: ChunkedTaggedTokenizer
@sort: ChunkParserI, RegexpChunkParser, RegexpChunkParserRule, ChunkRule,
ChinkRule, MergeRule, SplitRule, UnChunkRule, ChunkString,
ChunkScore, ChunkedTaggedTokenizer, demo, demo_eval,
tag_pattern2re_pattern
"""
from nltk import TaskI, PropertyIndirectionMixIn
from nltk.parser import ParserI, AbstractParser
from nltk.tree import Tree
from nltk.tokenizer import TokenizerI, AbstractTokenizer
from nltk.tokenizer import LineTokenizer, RegexpTokenizer, WhitespaceTokenizer
from nltk.token import Token, CharSpanLocation, SubtokenContextPointer
from nltk.chktype import chktype
from sets import Set
import types, re
class ChunkParserI(ParserI):
    """
    A processing interface for identifying non-overlapping groups in
    unrestricted text.  Typically, chunk parsers are used to find base
    syntactic constituents, such as base noun phrases.  Unlike
L{ParserI}, C{ChunkParserI} guarantees that the C{parse} method
will always generate a parse.
@inprop: C{SUBTOKENS}: The list of subtokens to be parsed.
@outprop: C{TREE}: The parse tree. I{(generated by L{parse})}
@outprop: C{TREES}: A list of possible parse trees.
I{(generated by L{parse_n})}
"""
def parse(self, token):
"""
Find the best chunk structure for the given token's
        C{subtokens}, and output it to the token's C{TREE} property.
@param token: The token whose subtokens should be parsed.
@type token: L{Token}
"""
assert 0, "ChunkParserI is an abstract interface"
def parse_n(self, token, n=None):
"""
Find a list of the C{n} most likely chunk structures for the
given token's C{subtokens}, and output it to the token's
C{TREES} property. If the given token has fewer than C{n}
chunk structures, then find all chunk structures. The chunk
structures should be stored in descending order of estimated
likelihood.
@type n: C{int}
@param n: The number of chunk structures to generate. At most
C{n} chunk structures will be generated. If C{n} is not
specified, generate all chunk structures.
@type token: L{Token}
@param token: The token whose subtokens should be chunked.
"""
assert 0, "ChunkParserI is an abstract interface"
class ChunkScore:
    """
    A utility class for scoring chunk parsers.  C{ChunkScore} can
    evaluate a chunk parser's output, based on a number of statistics
    (precision, recall, f-measure, missed chunks, incorrect chunks).
It can also combine the scores from the parsing of multiple texts;
    this makes it significantly easier to evaluate a chunk parser that
operates one sentence at a time.
Texts are evaluated with the C{score} method. The results of
evaluation can be accessed via a number of accessor methods, such
as C{precision} and C{f_measure}. A typical use of the
C{ChunkScore} class is::
>>> chunkscore = ChunkScore()
>>> for correct in correct_sentences:
... guess = chunkparser.parse(correct.leaves())
... chunkscore.score(correct, guess)
>>> print 'F Measure:', chunkscore.f_measure()
F Measure: 0.823
@ivar kwargs: Keyword arguments:
- max_tp_examples: The maximum number actual examples of true
positives to record. This affects the C{correct} member
function: C{correct} will not return more than this number
of true positive examples. This does *not* affect any of
the numerical metrics (precision, recall, or f-measure)
- max_fp_examples: The maximum number actual examples of false
positives to record. This affects the C{incorrect} member
function and the C{guessed} member function: C{incorrect}
will not return more than this number of examples, and
C{guessed} will not return more than this number of true
positive examples. This does *not* affect any of the
numerical metrics (precision, recall, or f-measure)
- max_fn_examples: The maximum number actual examples of false
negatives to record. This affects the C{missed} member
function and the C{correct} member function: C{missed}
will not return more than this number of examples, and
C{correct} will not return more than this number of true
negative examples. This does *not* affect any of the
numerical metrics (precision, recall, or f-measure)
@type _tp: C{list} of C{Token}
@ivar _tp: List of true positives
@type _fp: C{list} of C{Token}
@ivar _fp: List of false positives
@type _fn: C{list} of C{Token}
@ivar _fn: List of false negatives
@type _tp_num: C{int}
@ivar _tp_num: Number of true positives
@type _fp_num: C{int}
@ivar _fp_num: Number of false positives
@type _fn_num: C{int}
@ivar _fn_num: Number of false negatives.
"""
def __init__(self, **kwargs):
self._correct = Set()
self._guessed = Set()
self._tp = Set()
self._fp = Set()
self._fn = Set()
self._max_tp = kwargs.get('max_tp_examples', 100)
self._max_fp = kwargs.get('max_fp_examples', 100)
self._max_fn = kwargs.get('max_fn_examples', 100)
self._tp_num = 0
self._fp_num = 0
self._fn_num = 0
def _childtuple(self, t):
return tuple([c.freeze() for c in t])
def score(self, correct, guessed):
"""
Given a correctly chunked text, score another chunked text.
Merge the results with all previous scorings. Note that when
the score() function is used repeatedly, each token I{must}
have a unique location. For sentence-at-a-time chunking, it
is recommended that you use locations like C{@12w@3s} (the
word at index 12 of the sentence at index 3).
@type correct: chunk structure
@param correct: The known-correct ("gold standard") chunked
sentence.
@type guessed: chunk structure
@param guessed: The chunked sentence to be scored.
"""
assert chktype(1, correct, Tree)
assert chktype(2, guessed, Tree)
self._correct |= Set([self._childtuple(t) for t in correct
if isinstance(t, Tree)])
self._guessed |= Set([self._childtuple(t) for t in guessed
if isinstance(t, Tree)])
self._tp = self._guessed & self._correct
self._fn = self._correct - self._guessed
self._fp = self._guessed - self._correct
self._tp_num = len(self._tp)
self._fp_num = len(self._fp)
self._fn_num = len(self._fn)
def precision(self):
"""
@return: the overall precision for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
"""
div = self._tp_num + self._fp_num
if div == 0: return 0
else: return float(self._tp_num) / div
def recall(self):
"""
@return: the overall recall for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
"""
div = self._tp_num + self._fn_num
if div == 0: return 0
else: return float(self._tp_num) / div
def f_measure(self, alpha=0.5):
"""
@return: the overall F measure for all texts that have been
scored by this C{ChunkScore}.
@rtype: C{float}
@param alpha: the relative weighting of precision and recall.
Larger alpha biases the score towards the precision value,
while smaller alpha biases the score towards the recall
value. C{alpha} should have a value in the range [0,1].
@type alpha: C{float}
"""
assert chktype(1, alpha, types.FloatType, types.IntType)
p = self.precision()
r = self.recall()
if p == 0 or r == 0: # what if alpha is 0 or 1?
return 0
return 1/(alpha/p + (1-alpha)/r)
def missed(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the
correct chunk structures, but not in the guessed chunk
structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the missed chunks.
"""
return list(self._fn)
def incorrect(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the
guessed chunk structures, but not in the correct chunk
structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the incorrect chunks.
"""
return list(self._fp)
def correct(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the correct
chunk structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the correct chunks.
"""
return list(self._correct)
def guessed(self):
"""
@rtype: C{Set} of C{Token}
@return: the set of chunks which were included in the guessed
chunk structures. Each chunk is encoded as a single token,
spanning the chunk. This encoding makes it easier to
examine the guessed chunks.
"""
return list(self._guessed)
def __len__(self):
return self._tp_num + self._fn_num
def __repr__(self):
"""
@rtype: C{String}
@return: a concise representation of this C{ChunkScoring}.
"""
return '<ChunkScoring of '+`len(self)`+' chunks>'
def __str__(self):
"""
@rtype: C{String}
@return: a verbose representation of this C{ChunkScoring}.
This representation includes the precision, recall, and
f-measure scores. For other information about the score,
use the accessor methods (e.g., C{missed()} and
C{incorrect()}).
"""
return ("ChunkParser score:\n" +
(" Precision: %5.1f%%\n" % (self.precision()*100)) +
(" Recall: %5.1f%%\n" % (self.recall()*100))+
(" F-Measure: %5.1f%%\n" % (self.f_measure()*100)))
def _chunk_toks(self, text):
"""
@return: The list of tokens contained in C{text}.
"""
return [tok for tok in text if isinstance(tok, AbstractTree)]
##//////////////////////////////////////////////////////
## Precompiled regular expressions
##//////////////////////////////////////////////////////
_TAGCHAR = r'[^\{\}<>]'
_TAG = r'(<%s+?>)' % _TAGCHAR
_VALID_TAG_PATTERN = re.compile(r'^((%s|<%s>)*)$' %
('[^\{\}<>]+',
'[^\{\}<>]+'))
##//////////////////////////////////////////////////////
## ChunkString
##//////////////////////////////////////////////////////
class ChunkString(PropertyIndirectionMixIn):
"""
A string-based encoding of a particular chunking of a text.
Internally, the C{ChunkString} class uses a single string to
encode the chunking of the input text. This string contains a
sequence of angle-bracket delimited tags, with chunking indicated
by braces. An example of this encoding is::
{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}<.>{<DT><NN>}<VBD><.>
C{ChunkString}s are created from tagged texts (i.e., C{list}s of
C{tokens} whose type is C{TaggedType}). Initially, nothing is
chunked.
The chunking of a C{ChunkString} can be modified with the C{xform}
method, which uses a regular expression to transform the string
representation. These transformations should only add and remove
braces; they should I{not} modify the sequence of angle-bracket
delimited tags.
@type _str: C{string}
@ivar _str: The internal string representation of the text's
encoding. This string representation contains a sequence of
angle-bracket delimited tags, with chunking indicated by
braces. An example of this encoding is::
{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}<.>{<DT><NN>}<VBD><.>
@type _ttoks: C{list} of C{Token}
@ivar _ttoks: The text whose chunking is encoded by this
C{ChunkString}.
@ivar _debug: The debug level. See the constructor docs.
@cvar IN_CHUNK_PATTERN: A zero-width regexp pattern string that
will only match positions that are in chunks.
@cvar IN_CHINK_PATTERN: A zero-width regexp pattern string that
will only match positions that are in chinks.
"""
IN_CHUNK_PATTERN = r'(?=[^\{]*\})'
IN_CHINK_PATTERN = r'(?=[^\}]*(\{|$))'
_CHUNK = r'(\{%s+?\})+?' % _TAG
_CHINK = r'(%s+?)+?' % _TAG
_VALID = re.compile(r'(\{?%s\}?)*?' % _TAG)
_BRACKETS = re.compile('[^\{\}]+')
_BALANCED_BRACKETS = re.compile(r'(\{\})*$')
def __init__(self, tagged_tokens, debug_level=3, **property_names):
"""
Construct a new C{ChunkString} that encodes the chunking of
the text C{tagged_tokens}.
@type tagged_tokens: C{list} of C{Token} with C{TaggedType}s
@param tagged_tokens: The text whose chunking is encoded by
this C{ChunkString}.
@type debug_level: int
@param debug_level: The level of debugging which should be
applied to transformations on the C{ChunkString}. The
valid levels are:
- 0: no checks
- 1: full check on to_chunkstruct
- 2: full check on to_chunkstruct and cursory check after
each transformation.
- 3: full check on to_chunkstruct and full check after
each transformation.
We recommend you use at least level 1. You should
probably use level 3 if you use any non-standard
subclasses of C{RegexpChunkParserRule}.
"""
assert chktype(1, tagged_tokens, [Token, Tree], (Token, Tree), Tree)
assert chktype(2, debug_level, types.IntType)
PropertyIndirectionMixIn.__init__(self, **property_names)
self._ttoks = tagged_tokens
tags = [self._tag(tok) for tok in tagged_tokens]
self._str = '<' + '><'.join(tags) + '>'
self._debug = debug_level
def _tag(self, tok):
if isinstance(tok, Token):
return tok[self.property('TAG')]
elif isinstance(tok, Tree):
return tok.node
else:
raise ValueError, 'tagged_tokens must contain tokens and trees'
def _verify(self, verify_tags):
"""
Check to make sure that C{_str} still corresponds to some chunked
version of C{_ttoks}.
@type verify_tags: C{boolean}
@param verify_tags: Whether the individual tags should be
checked. If this is false, C{_verify} will check to make
sure that C{_str} encodes a chunked version of I{some}
list of tokens. If this is true, then C{_verify} will
check to make sure that the tags in C{_str} match those in
C{_ttoks}.
@raise ValueError: if this C{ChunkString}'s internal string
representation is invalid or not consistent with _ttoks.
"""
# Check overall form
if not ChunkString._VALID.match(self._str):
raise ValueError('Transformation generated invalid chunkstring')
# Check that parens are balanced. If the string is long, we
# have to do this in pieces, to avoid a maximum recursion
# depth limit for regular expressions.
brackets = ChunkString._BRACKETS.sub('', self._str)
for i in range(1+len(brackets)/5000):
substr = brackets[i*5000:i*5000+5000]
if not ChunkString._BALANCED_BRACKETS.match(substr):
raise ValueError('Transformation generated invalid '+
'chunkstring')
if verify_tags<=0: return
tags1 = (re.split(r'[\{\}<>]+', self._str))[1:-1]
tags2 = [self._tag(tok) for tok in self._ttoks]
if tags1 != tags2:
raise ValueError('Transformation generated invalid chunkstring')
def to_chunkstruct(self, chunk_node='CHUNK', top_node='TEXT'):
"""
@return: the chunk structure encoded by this C{ChunkString}.
A chunk structure is a C{list} containing tagged tokens
and sublists of tagged tokens, where each sublist
represents a single chunk.
@rtype: chunk structure
@raise ValueError: If a transformation has generated an
invalid chunkstring.
"""
if self._debug > 0: self._verify(1)
# Extract a list of alternating chinks & chunks
pieces = re.split('[{}]', self._str)
# Use this alternating list to create the chunkstruct.
chunkstruct = []
index = 0
piece_in_chunk = 0
for piece in pieces:
# Find the list of tokens contained in this piece.
length = piece.count('<')
subsequence = self._ttoks[index:index+length]
# Add this list of tokens to our chunkstruct.
if piece_in_chunk:
chunkstruct.append(Tree(chunk_node, subsequence))
else:
chunkstruct += subsequence
# Update index, piece_in_chunk
index += length
piece_in_chunk = not piece_in_chunk
return Tree(top_node, chunkstruct)
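The split-on-braces step above can be seen in isolation: splitting the encoded string on '{' and '}' yields pieces that alternate between chink and chunk material, and counting '<' in each piece recovers how many tokens it spans.

```python
import re

s = '{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}'
pieces = re.split('[{}]', s)
# Pieces alternate chink/chunk, starting outside a chunk:
# ['', '<DT><JJ><NN>', '<VBN><IN>', '<DT><NN>', '']
lengths = [p.count('<') for p in pieces]
```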
def xform(self, regexp, repl):
"""
Apply the given transformation to this C{ChunkString}'s string
encoding. In particular, find all occurrences that match
C{regexp}, and replace them using C{repl} (as done by
C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type regexp: C{string} or C{regexp}
@param regexp: A regular expression matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(regexp).__name__ != 'SRE_Pattern':
assert chktype(1, regexp, types.StringType)
assert chktype(2, repl, types.StringType)
self._str = re.sub(regexp, repl, self._str)
# The substitution may have created empty chunks (substrings of the
# form "{}"); remove them, so that they will not
# interfere with other transformations.
self._str = re.sub('\{\}', '', self._str)
# Make sure that the transformation was legal.
if self._debug > 1: self._verify(self._debug-2)
def xform_chunk(self, pattern, repl):
# Docstring adopted from xform's docstring.
"""
Apply the given transformation to the chunks in this
C{ChunkString}'s string encoding. In particular, find all
occurrences within chunks that match C{regexp}, and replace
them using C{repl} (as done by C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type pattern: C{string}
@param pattern: A regular expression pattern matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(pattern).__name__ == 'SRE_Pattern': pattern = pattern.pattern
assert chktype(1, pattern, types.StringType)
assert chktype(2, repl, types.StringType)
self.xform(pattern+ChunkString.IN_CHUNK_PATTERN, repl)
def xform_chink(self, pattern, repl):
# Docstring adopted from xform's docstring.
"""
Apply the given transformation to the chinks in this
C{ChinkString}'s string encoding. In particular, find all
occurrences within chinks that match C{regexp}, and replace
them using C{repl} (as done by C{re.sub}).
This transformation should only add and remove braces; it
should I{not} modify the sequence of angle-bracket delimited
tags. Furthermore, this transformation may not result in
improper bracketing. Note, in particular, that bracketing may
not be nested.
@type pattern: C{string} or C{regexp}
@param pattern: A regular expression pattern matching the substring
that should be replaced. This will typically include a
named group, which can be used by C{repl}.
@type repl: C{string}
@param repl: An expression specifying what should replace the
matched substring. Typically, this will include a named
replacement group, specified by C{regexp}.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
if type(pattern).__name__ == 'SRE_Pattern': pattern = pattern.pattern
assert chktype(1, pattern, types.StringType)
assert chktype(2, repl, types.StringType)
self.xform(pattern+ChunkString.IN_CHINK_PATTERN, repl)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this C{ChunkString}. This
string representation has the form::
<ChunkString: '{<DT><JJ><NN>}<VBN><IN>{<DT><NN>}'>
"""
return '<ChunkString: %s>' % `self._str`
def __str__(self):
"""
@rtype: C{string}
@return: A formatted representation of this C{ChunkString}'s
string encoding. This representation will include extra
spaces to ensure that tags will line up with the
representation of other C{ChunkStrings} for the same text,
regardless of the chunking.
"""
str = re.sub(r'>(?!\})', r'> ', self._str)
str = re.sub(r'([^\{])<', r'\1 <', str)
if str[0] == '<': str = ' ' + str
return str
def tag_pattern2re_pattern(tag_pattern):
"""
Convert a tag pattern to a regular expression pattern.  A X{tag
pattern} is a modified version of a regular expression, designed
for matching sequences of tags. The differences between regular
expression patterns and tag patterns are:
- In tag patterns, C{'<'} and C{'>'} act as parentheses; so
C{'<NN>+'} matches one or more repetitions of C{'<NN>'}, not
C{'<NN'} followed by one or more repetitions of C{'>'}.
- Whitespace in tag patterns is ignored. So
C{'<DT> | <NN>'} is equivalent to C{'<DT>|<NN>'}
- In tag patterns, C{'.'} is equivalent to C{'[^{}<>]'}; so
C{'<NN.*>'} matches any single tag starting with C{'NN'}.
In particular, C{tag_pattern2re_pattern} performs the following
transformations on the given pattern:
- Replace '.' with '[^<>{}]'
- Remove any whitespace
- Add extra parens around '<' and '>', to make '<' and '>' act
like parentheses. E.g., so that in '<NN>+', the '+' has scope
over the entire '<NN>'; and so that in '<NN|IN>', the '|' has
scope over 'NN' and 'IN', but not '<' or '>'.
- Check to make sure the resulting pattern is valid.
@type tag_pattern: C{string}
@param tag_pattern: The tag pattern to convert to a regular
expression pattern.
@raise ValueError: If C{tag_pattern} is not a valid tag pattern.
In particular, C{tag_pattern} should not include braces; and it
should not contain nested or mismatched angle-brackets.
@rtype: C{string}
@return: A regular expression pattern corresponding to
C{tag_pattern}.
"""
assert chktype(1, tag_pattern, types.StringType)
tag_pattern = re.sub(r'\s', '', tag_pattern)
tag_pattern = re.sub(r'<', '(<(', tag_pattern)
tag_pattern = re.sub(r'>', ')>)', tag_pattern)
if not _VALID_TAG_PATTERN.match(tag_pattern):
raise ValueError('Bad tag pattern: %s' % tag_pattern)
# Replace '.' with _TAGCHAR, but leave any backslash-escaped dots
# alone.  To check for escaping, we have to scan
# the pattern backwards (with lookahead assertions). This can be
# made much cleaner once we can switch back to SRE.
def reverse_str(str):
lst = list(str)
lst.reverse()
return ''.join(lst)
tc_rev = reverse_str(_TAGCHAR)
reversed = reverse_str(tag_pattern)
reversed = re.sub(r'\.(?!\\(\\\\)*($|[^\\]))', tc_rev, reversed)
tag_pattern = reverse_str(reversed)
return tag_pattern
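A simplified, self-contained re-implementation of the conversion above (hypothetical name; it skips the escaped-dot handling that the real function performs via string reversal):

```python
import re

_TAGCHAR_ = r'[^\{\}<>]'

def tag_pattern2re_pattern_sketch(tag_pattern):
    # Illustrative only: unlike the real function, this also replaces
    # backslash-escaped dots.
    tag_pattern = re.sub(r'\s', '', tag_pattern)    # strip whitespace
    tag_pattern = re.sub(r'<', '(<(', tag_pattern)  # '<' opens groups
    tag_pattern = re.sub(r'>', ')>)', tag_pattern)  # '>' closes them
    return tag_pattern.replace('.', _TAGCHAR_)      # '.' -> tag chars
```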
class RegexpChunkParserRule:
"""
A rule specifying how to modify the chunking in a C{ChunkString},
using a transformational regular expression. The
C{RegexpChunkParserRule} class itself can be used to implement any
transformational rule based on regular expressions. There are
also a number of subclasses, which can be used to implement
simpler types of rules, based on matching regular expressions.
Each C{RegexpChunkParserRule} has a regular expression and a
replacement expression. When a C{RegexpChunkParserRule} is X{applied}
to a C{ChunkString}, it searches the C{ChunkString} for any
substring that matches the regular expression, and replaces it
using the replacement expression. This search/replace operation
has the same semantics as C{re.sub}.
Each C{RegexpChunkParserRule} also has a description string, which
gives a short (typically less than 75 characters) description of
the purpose of the rule.
The transformation defined by this C{RegexpChunkParserRule} should
only add and remove braces; it should I{not} modify the sequence
of angle-bracket delimited tags. Furthermore, this transformation
may not result in nested or mismatched bracketing.
"""
def __init__(self, regexp, repl, descr):
"""
Construct a new RegexpChunkParserRule.
@type regexp: C{regexp} or C{string}
@param regexp: This C{RegexpChunkParserRule}'s regular expression.
When this rule is applied to a C{ChunkString}, any
substring that matches C{regexp} will be replaced using
the replacement string C{repl}. Note that this must be a
normal regular expression, not a tag pattern.
@type repl: C{string}
@param repl: This C{RegexpChunkParserRule}'s replacement
expression. When this rule is applied to a
C{ChunkString}, any substring that matches C{regexp} will
be replaced using C{repl}.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
if type(regexp).__name__ == 'SRE_Pattern': regexp = regexp.pattern
assert chktype(1, regexp, types.StringType)
assert chktype(2, repl, types.StringType)
assert chktype(3, descr, types.StringType)
self._repl = repl
self._descr = descr
if type(regexp) == types.StringType:
self._regexp = re.compile(regexp)
else:
self._regexp = regexp
def apply(self, chunkstr):
# Keep docstring generic so we can inherit it.
"""
Apply this rule to the given C{ChunkString}. See the
class reference documentation for a description of what it
means to apply a rule.
@type chunkstr: C{ChunkString}
@param chunkstr: The chunkstring to which this rule is
applied.
@rtype: C{None}
@raise ValueError: If this transformation generates an
invalid chunkstring.
"""
assert chktype(1, chunkstr, ChunkString)
chunkstr.xform(self._regexp, self._repl)
def descr(self):
"""
@rtype: C{string}
@return: a short description of the purpose and/or effect of
this rule.
"""
return self._descr
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<RegexpChunkParserRule: '{<IN|VB.*>}'->'<IN>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<RegexpChunkParserRule: '+`self._regexp.pattern`+
'->'+`self._repl`+'>')
class ChunkRule(RegexpChunkParserRule):
"""
A rule specifying how to add chunks to a C{ChunkString}, using a
matching tag pattern. When applied to a C{ChunkString}, it will
find any substring that matches this tag pattern and that is not
already part of a chunk, and create a new chunk containing that
substring.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{ChunkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
chunk any substring that matches this tag pattern and that
is not already part of a chunk.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('(?P<chunk>%s)%s' %
(tag_pattern2re_pattern(tag_pattern),
ChunkString.IN_CHINK_PATTERN))
RegexpChunkParserRule.__init__(self, regexp, '{\g<chunk>}', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<ChunkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<ChunkRule: '+`self._pattern`+'>'
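The effect of a C{ChunkRule} can be reproduced with a single C{re.sub}: wrap every matching tag sequence in braces, but only where the zero-width chink lookahead succeeds, so material already inside a chunk is left alone. A small sketch:

```python
import re

IN_CHINK = r'(?=[^\}]*(\{|$))'  # zero-width: only matches inside chinks
s = '<DT><NN>{<VB>}<DT>'
chunked = re.sub('(?P<chunk>(<(DT|NN)>)+)' + IN_CHINK,
                 r'{\g<chunk>}', s)
# The existing {<VB>} chunk is untouched; both DT/NN runs are chunked.
```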
class ChinkRule(RegexpChunkParserRule):
"""
A rule specifying how to remove chinks from a C{ChunkString},
using a matching tag pattern. When applied to a
C{ChunkString}, it will find any substring that matches this
tag pattern and that is contained in a chunk, and remove it
from that chunk, thus creating two new chunks.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{ChinkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
find any substring that matches this tag pattern and that
is contained in a chunk, and remove it from that chunk,
thus creating two new chunks.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('(?P<chink>%s)%s' %
(tag_pattern2re_pattern(tag_pattern),
ChunkString.IN_CHUNK_PATTERN))
RegexpChunkParserRule.__init__(self, regexp, '}\g<chink>{', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<ChinkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<ChinkRule: '+`self._pattern`+'>'
class UnChunkRule(RegexpChunkParserRule):
"""
A rule specifying how to remove chunks from a C{ChunkString},
using a matching tag pattern. When applied to a
C{ChunkString}, it will find any complete chunk that matches this
tag pattern, and un-chunk it.
"""
def __init__(self, tag_pattern, descr):
"""
Construct a new C{UnChunkRule}.
@type tag_pattern: C{string}
@param tag_pattern: This rule's tag pattern. When
applied to a C{ChunkString}, this rule will
find any complete chunk that matches this tag pattern,
and un-chunk it.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, tag_pattern, types.StringType)
assert chktype(2, descr, types.StringType)
self._pattern = tag_pattern
regexp = re.compile('\{(?P<chunk>%s)\}' %
tag_pattern2re_pattern(tag_pattern))
RegexpChunkParserRule.__init__(self, regexp, '\g<chunk>', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<UnChunkRule: '<IN|VB.*>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return '<UnChunkRule: '+`self._pattern`+'>'
class MergeRule(RegexpChunkParserRule):
"""
A rule specifying how to merge chunks in a C{ChunkString}, using
two matching tag patterns: a left pattern, and a right pattern.
When applied to a C{ChunkString}, it will find any chunk whose end
matches the left pattern and that is immediately followed by a chunk
whose beginning matches the right pattern.  It will then merge those
two chunks into a single chunk.
"""
def __init__(self, left_tag_pattern, right_tag_pattern, descr):
"""
Construct a new C{MergeRule}.
@type right_tag_pattern: C{string}
@param right_tag_pattern: This rule's right tag
pattern. When applied to a C{ChunkString}, this
rule will find any chunk whose end matches
C{left_tag_pattern}, and immediately followed by a chunk
whose beginning matches this pattern. It will
then merge those two chunks into a single chunk.
@type left_tag_pattern: C{string}
@param left_tag_pattern: This rule's left tag
pattern. When applied to a C{ChunkString}, this
rule will find any chunk whose end matches
this pattern, and immediately followed by a chunk
whose beginning matches C{right_tag_pattern}. It will
then merge those two chunks into a single chunk.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, left_tag_pattern, types.StringType)
assert chktype(2, right_tag_pattern, types.StringType)
assert chktype(3, descr, types.StringType)
self._left_tag_pattern = left_tag_pattern
self._right_tag_pattern = right_tag_pattern
regexp = re.compile('(?P<left>%s)}{(?=%s)' %
(tag_pattern2re_pattern(left_tag_pattern),
tag_pattern2re_pattern(right_tag_pattern)))
RegexpChunkParserRule.__init__(self, regexp, '\g<left>', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<MergeRule: '<NN|DT|JJ>', '<NN|JJ>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<MergeRule: '+`self._left_tag_pattern`+', '+
`self._right_tag_pattern`+'>')
class SplitRule(RegexpChunkParserRule):
"""
A rule specifying how to split chunks in a C{ChunkString}, using
two matching tag patterns: a left pattern, and a right pattern.
When applied to a C{ChunkString}, it will find any chunk that
matches the left pattern followed by the right pattern. It will
then split the chunk into two new chunks, at the point between the
two pattern matches.
"""
def __init__(self, left_tag_pattern, right_tag_pattern, descr):
"""
Construct a new C{SplitRule}.
@type right_tag_pattern: C{string}
@param right_tag_pattern: This rule's right tag
pattern. When applied to a C{ChunkString}, this rule will
find any chunk containing a substring that matches
C{left_tag_pattern} followed by this pattern. It will
then split the chunk into two new chunks at the point
between these two matching patterns.
@type left_tag_pattern: C{string}
@param left_tag_pattern: This rule's left tag
pattern. When applied to a C{ChunkString}, this rule will
find any chunk containing a substring that matches this
pattern followed by C{right_tag_pattern}. It will then
split the chunk into two new chunks at the point between
these two matching patterns.
@type descr: C{string}
@param descr: A short description of the purpose and/or effect
of this rule.
"""
assert chktype(1, left_tag_pattern, types.StringType)
assert chktype(2, right_tag_pattern, types.StringType)
assert chktype(3, descr, types.StringType)
self._left_tag_pattern = left_tag_pattern
self._right_tag_pattern = right_tag_pattern
regexp = re.compile('(?P<left>%s)(?=%s)' %
(tag_pattern2re_pattern(left_tag_pattern),
tag_pattern2re_pattern(right_tag_pattern)))
RegexpChunkParserRule.__init__(self, regexp, r'\g<left>}{', descr)
def __repr__(self):
"""
@rtype: C{string}
@return: A string representation of this rule. This
string representation has the form::
<SplitRule: '<NN>', '<DT>'>
Note that this representation does not include the
description string; that string can be accessed
separately with the C{descr} method.
"""
return ('<SplitRule: '+`self._left_tag_pattern`+', '+
`self._right_tag_pattern`+'>')
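The merge mechanism used by C{MergeRule} reduces to deleting the '}{' between two adjacent chunks when the left chunk's end and the right chunk's start match the given patterns. A minimal sketch:

```python
import re

s = '{<NN><JJ>}{<NN>}'
# If a chunk ends in <JJ> and the next chunk starts with <NN>,
# delete the '}{' between them, fusing the two chunks.
merged = re.sub(r'(?P<left><JJ>)\}\{(?=<NN>)', r'\g<left>', s)
```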
class RegexpChunkParser(AbstractParser):
"""
A regular expression based chunk parser.  C{RegexpChunkParser} uses a
sequence X{rules} to find chunks within a text. The chunking of
the text is encoded using a C{ChunkString}, and each rule acts by
modifying the chunking in the C{ChunkString}. The rules are all
implemented using regular expression matching and substitution.
The C{RegexpChunkParserRule} class and its subclasses (C{ChunkRule},
C{ChinkRule}, C{UnChunkRule}, C{MergeRule}, and C{SplitRule})
define the rules that are used by C{RegexpChunkParser}. Each rule
defines an C{apply} method, which modifies the chunking encoded
by a given C{ChunkString}.
@type _rules: C{list} of C{RegexpChunkParserRule}
@ivar _rules: The list of rules that should be applied to a text.
@type _trace: C{int}
@ivar _trace: The default level of tracing.
@inprop: C{SUBTOKENS}: The list of subtokens to be chunked.
@inprop: C{LEAF}: The string content of the subtokens.
@outprop: C{TREE}: The chunk structure. I{(generated by L{parse})}
@outprop: C{TREES}: A list of possible chunk structures.
I{(generated by L{parse_n})}
"""
def __init__(self, rules, chunk_node='CHUNK', top_node='TEXT',
trace=0, **property_names):
"""
Construct a new C{RegexpChunkParser}.
@type rules: C{list} of C{RegexpChunkParserRule}
@param rules: The sequence of rules that should be used to
generate the chunking for a tagged text.
@type chunk_node: C{string}
@param chunk_node: The node value that should be used for
chunk subtrees. This is typically a short string
describing the type of information contained by the chunk,
such as C{"NP"} for base noun phrases.
@type top_node: C{string}
@param top_node: The node value that should be used for the
top node of the chunk structure.
@type trace: C{int}
@param trace: The level of tracing that should be used when
parsing a text. C{0} will generate no tracing output;
C{1} will generate normal tracing output; and C{2} or
higher will generate verbose tracing output.
"""
assert chktype(1, rules, [RegexpChunkParserRule], (RegexpChunkParserRule,))
assert chktype(4, trace, types.IntType)
self._rules = rules
self._trace = trace
self._chunk_node = chunk_node
self._top_node = top_node
AbstractParser.__init__(self, **property_names)
def _trace_apply(self, chunkstr, verbose):
"""
Apply each of this C{RegexpChunkParser}'s rules to C{chunkstr}, in
turn. Generate trace output between each rule. If C{verbose}
is true, then generate verbose output.
@type chunkstr: C{ChunkString}
@param chunkstr: The chunk string to which each rule should be
applied.
@type verbose: C{boolean}
@param verbose: Whether output should be verbose.
@rtype: C{None}
"""
indent = ' '*(35-len(str(chunkstr))/2)
print 'Input:'
print indent, chunkstr
for rule in self._rules:
rule.apply(chunkstr)
if verbose:
print rule.descr()+' ('+`rule`+'):'
else:
print rule.descr()+':'
print indent, chunkstr
print
def _notrace_apply(self, chunkstr):
"""
Apply each of this C{RegexpChunkParser}'s rules to C{chunkstr}, in
turn.
@param chunkstr: The chunk string to which each rule should be
applied.
@type chunkstr: C{ChunkString}
@rtype: C{None}
"""
for rule in self._rules:
rule.apply(chunkstr)
def parse(self, token, trace=None):
"""
@rtype: chunk structure
@return: a chunk structure that encodes the chunks in a given
tagged sentence. A chunk is a non-overlapping linguistic
group, such as a noun phrase. The set of chunks
identified in the chunk structure depends on the rules
used to define this C{RegexpChunkParser}.
@type trace: C{int}
@param trace: The level of tracing that should be used when
parsing a text. C{0} will generate no tracing output;
C{1} will generate normal tracing output; and C{2} or
higher will generate verbose tracing output. This value
overrides the trace level value that was given to the
constructor.
"""
assert chktype(1, token, Token)
assert chktype(2, trace, types.NoneType, types.IntType)
SUBTOKENS = self.property('SUBTOKENS')
TREE = self.property('TREE')
if len(token[SUBTOKENS]) == 0:
print 'Warning: parsing empty text'
token[TREE] = Tree(self._top_node, [])
return
if trace is None: trace = self._trace
chunkstr = ChunkString(token[SUBTOKENS], **self.property_names())
if trace:
verbose = (trace>1)
self._trace_apply(chunkstr, verbose)
else:
self._notrace_apply(chunkstr)
tree = chunkstr.to_chunkstruct(self._chunk_node, self._top_node)
token[TREE] = tree
def rules(self):
"""
@return: the sequence of rules used by this C{ChunkParser}.
@rtype: C{list} of C{RegexpChunkParserRule}
"""
return self._rules
def __repr__(self):
"""
@return: a concise string representation of this
C{RegexpChunkParser}.
@rtype: C{string}
"""
return "<RegexpChunkParser with %d rules>" % len(self._rules)
def __str__(self):
"""
@return: a verbose string representation of this
C{RegexpChunkParser}.
@rtype: C{string}
"""
s = "RegexpChunkParser with %d rules:\n" % len(self._rules)
margin = 0
for rule in self._rules:
margin = max(margin, len(rule.descr()))
if margin < 35:
format = " %" + `-(margin+3)` + "s%s\n"
else:
format = " %s\n %s\n"
for rule in self._rules:
s += format % (rule.descr(), `rule`)
return s[:-1]
def demo_eval(chunkparser, text):
"""
Demonstrate the use of the given chunk parser, by evaluating it on
the given chunked tagged text.  This function assumes that C{text} contains one
sentence per line, and that each sentence has the form expected by
C{ChunkedTaggedTokenizer}. It runs the given chunk parser on each
sentence in the text, and scores the result. It prints the final
score (precision, recall, and f-measure); and reports the set of
chunks that were missed and the set of chunks that were
incorrect. (At most 10 missing chunks and 10 incorrect chunks are
reported).
@param chunkparser: The chunkparser to be tested
@type chunkparser: C{ChunkParserI}
@param text: The chunked tagged text that should be used for
evaluation.
@type text: C{string}
"""
assert chktype(1, chunkparser, ChunkParserI)
assert chktype(2, text, types.StringType)
chunkscore = ChunkScore()
from nltk.tokenreader import ChunkedTaggedTokenReader
ctt = ChunkedTaggedTokenReader(chunk_node='NP', SUBTOKENS='WORDS')
for sentence in text.split('\n'):
sentence = sentence.strip()
if not sentence: continue
gold = ctt.read_token(sentence)
test = Token(WORDS=gold['WORDS'])
chunkparser.parse(test)
chunkscore.score(gold['TREE'], test['TREE'])
print '/'+('='*75)+'\\'
print 'Scoring', chunkparser
print ('-'*77)
print 'Precision: %5.1f%%' % (chunkscore.precision()*100), ' '*4,
print 'Recall: %5.1f%%' % (chunkscore.recall()*100), ' '*6,
print 'F-Measure: %5.1f%%' % (chunkscore.f_measure()*100)
if chunkscore.missed():
print 'Missed:'
missed = chunkscore.missed()
missed.sort()
for chunk in missed[:10]:
print ' ', chunk
if len(chunkscore.missed()) > 10:
print ' ...'
# Incorrect chunks.
if chunkscore.incorrect():
print 'Incorrect:'
incorrect = chunkscore.incorrect()
incorrect.sort() # sort, so they'll always be listed in the same order.
for chunk in incorrect[:10]:
print ' ', chunk
if len(chunkscore.incorrect()) > 10:
print ' ...'
print '\\'+('='*75)+'/'
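The scoring that demo_eval reports can be sketched independently of NLTK's ChunkScore. A minimal, hypothetical helper (not part of this module) treating chunks as (start, end) spans:

```python
def chunk_prf(gold, test):
    # gold, test: sets of (start, end) chunk spans
    tp = len(gold & test)                         # chunks found and correct
    precision = tp / len(test) if test else 0.0   # correct / guessed
    recall = tp / len(gold) if gold else 0.0      # correct / expected
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

gold = {(0, 3), (5, 7), (8, 10)}
test = {(0, 3), (5, 6), (8, 10)}
print(chunk_prf(gold, test))  # each value is 2/3 here
```

Chunks that are in gold but not in test would be reported as "missed"; chunks in test but not in gold as "incorrect".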
def demo():
"""
A demonstration for the C{RegexpChunkParser} class. A single text is
parsed with four different chunk parsers, using a variety of rules
and strategies.
"""
text = """\
[ the/DT little/JJ cat/NN ] sat/VBD on/IN [ the/DT mat/NN ] ./.
[ The/DT cats/NNS ] ./.
[ John/NNP ] saw/VBD [the/DT cat/NN] [the/DT dog/NN] liked/VBD ./."""
print '*'*75
print 'Evaluation text:'
print text
print '*'*75
r1 = ChunkRule(r'<DT>?<JJ>*<NN.*>', 'Chunk NPs')
cp = RegexpChunkParser([r1], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
r1 = ChunkRule(r'<.*>+', 'Chunk everything')
r2 = ChinkRule(r'<VB.*>|<IN>|<\.>', 'Unchunk VB and IN and .')
cp = RegexpChunkParser([r1, r2], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
# Unchunk non-NP words, and then merge consecutive NPs
r1 = ChunkRule(r'(<.*>)', 'Chunk each tag')
r2 = UnChunkRule(r'<VB.*>|<IN>|<.>', 'Unchunk VB? and IN and .')
r3 = MergeRule(r'<DT|JJ|NN.*>', r'<DT|JJ|NN.*>', 'Merge NPs')
cp = RegexpChunkParser([r1,r2,r3], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
# Chunk sequences of NP words, and split them at determiners
r1 = ChunkRule(r'(<DT|JJ|NN.*>+)', 'Chunk sequences of DT&JJ&NN')
r2 = SplitRule('', r'<DT>', 'Split before DT')
cp = RegexpChunkParser([r1,r2], chunk_node='NP', top_node='S', trace=1,
SUBTOKENS='WORDS')
demo_eval(cp, text)
print
if __name__ == '__main__':
demo()
# File: Python/Collections.OrderedDict.py | repo: WinrichSy/HackerRank-Solutions | license: Apache-2.0
#Collections.OrderedDict()
#https://www.hackerrank.com/challenges/py-collections-ordereddict/problem
from collections import OrderedDict
items = int(input())
ordered_dict = OrderedDict()
for i in range(items):
item_price = input().split()
item = ' '.join(item_price[0:-1])
price = int(item_price[-1])
if item not in ordered_dict.keys():
ordered_dict[item] = price
else:
ordered_dict[item] += price
for key, val in ordered_dict.items():
print('{} {}'.format(key, val))
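The accumulation logic above can be illustrated in isolation (item names and prices invented for the example): repeated item names keep their first-seen position while their prices are summed.

```python
from collections import OrderedDict

lines = ["BANANA FRIES 12", "POTATO CHIPS 30", "BANANA FRIES 20"]
totals = OrderedDict()
for line in lines:
    *name_parts, price = line.split()   # last field is the price
    name = ' '.join(name_parts)         # everything else is the item name
    totals[name] = totals.get(name, 0) + int(price)

print(list(totals.items()))
# [('BANANA FRIES', 32), ('POTATO CHIPS', 30)]
```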
# File: ck/repo/module/user/module.py | repo: santosh653/ck | license: BSD-3-Clause
#
# Collective Knowledge (web-based user auth)
#
#
#
#
# Developer:
#
cfg={} # Will be updated by CK (meta description of this module)
work={} # Will be updated by CK (temporal data)
ck=None # Will be updated by CK (initialized CK kernel)
# Local settings
##############################################################################
# Initialize module
def init(i):
"""
Input: {}
Output: {
return - return code = 0, if successful
> 0, if error
(error) - error text if return > 0
}
"""
return {'return':0}
##############################################################################
# authentication of a user
def auth(i):
"""
Input: {
(web_vars_post)
(web_vars_get)
(web_vars_session)
}
Output: {
return - return code = 0, if successful
> 0, if error
(error) - error text if return > 0
user_id
session_id
}
"""
import re
import hashlib
rr={'return':0}
wvp=i.get('web_vars_post',{})
# Check username
username=wvp.get('username','').strip()
if username=='':
return {'return':1, 'error':'username is empty'}
if not re.match("^[A-Za-z0-9.@_-]*$", username):
return {'return':1, 'error':'username contains forbidden characters'}
# Check password
password=wvp.get('password','').strip()
if password=='':
return {'return':1, 'error':'password is empty'}
password_md5=hashlib.md5(password.encode('utf8')).hexdigest()
# Check if entry exists
default_repo=ck.cfg.get('default_repo_to_write','')
if default_repo=='': default_repo='local'
r=ck.access({'action':'load',
'module_uoa':work['self_module_uid'],
'repo_uoa':default_repo,
'data_uoa':username})
if r['return']>0:
if r['return']==16: return {'return':16, 'error':'Username not registered'}
return r
d=r['dict']
load_password_md5=d.get('password_md5','')
if password_md5!=load_password_md5:
return {'return':1, 'error':'password did not match'}
# Generate random token for the session
r=ck.gen_uid({})
if r['return']>0: return r
rr['session_id']=r['data_uid']
rr['user_id']=username
return rr
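The validation steps in auth() can be exercised standalone (a minimal sketch; the sample usernames are invented). It uses the same character-class regex and md5 hexdigest calls as above:

```python
import re
import hashlib

USER_RE = re.compile(r"^[A-Za-z0-9.@_-]*$")

def valid_username(name):
    # the regex also matches the empty string, so guard for that separately
    return bool(name) and USER_RE.match(name) is not None

print(valid_username("alice@example.com"))  # True
print(valid_username("bad name!"))          # False (space and ! forbidden)
digest = hashlib.md5("secret".encode("utf8")).hexdigest()  # 32 hex chars
```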
##############################################################################
# create account
def create(i):
"""
Input: {
(web_vars_post)
(web_vars_get)
(web_vars_session)
}
Output: {
return - return code = 0, if successful
> 0, if error
(error) - error text if return > 0
user_id
session_id
}
"""
import re
import hashlib
rr={'return':0}
wvp=i.get('web_vars_post',{})
# Check username
username=wvp.get('username','').strip()
if username=='':
return {'return':1, 'error':'username is empty'}
if not re.match("^[A-Za-z0-9.@_-]*$", username):
return {'return':1, 'error':'username contains forbidden characters'}
# Check password
password=wvp.get('password','').strip()
if password=='':
return {'return':1, 'error':'password is empty'}
password_md5=hashlib.md5(password.encode('utf8')).hexdigest()
# Check email
email=wvp.get('email','').strip()
if email=='':
return {'return':1, 'error':'email is empty'}
realname=wvp.get('realname','').strip()
# Check if entry exists
default_repo=ck.cfg.get('default_repo_to_write','')
if default_repo=='': default_repo='local'
r=ck.access({'action':'load',
'module_uoa':work['self_module_uid'],
'repo_uoa':default_repo,
'data_uoa':username})
if r['return']==0:
return {'return':32, 'error':'Username already registered'}
if r['return']!=16: return r
# Create entry
d={'password_md5':password_md5,
'email':email,
'realname':realname}
r=ck.access({'action':'add',
'module_uoa':work['self_module_uid'],
'repo_uoa':default_repo,
'data_uoa':username,
'dict':d,
'sort_key':'yes'})
if r['return']>0: return r
# Generate random token for the session
r=ck.gen_uid({})
if r['return']>0: return r
rr['session_id']=r['data_uid']
rr['user_id']=username
return rr
##############################################################################
# renew user
def renew(i):
"""
Input: {
(web_vars_post)
(web_vars_get)
(web_vars_session)
}
Output: {
return - return code = 0, if successful
> 0, if error
(error) - error text if return > 0
user_id
session_id
}
"""
import re
import hashlib
rr={'return':0}
wvp=i.get('web_vars_post',{})
# Check username
username=wvp.get('username','').strip()
if username=='':
return {'return':1, 'error':'username is empty'}
if not re.match("^[A-Za-z0-9.@_-]*$", username):
return {'return':1, 'error':'username contains forbidden characters'}
# Load user
default_repo=ck.cfg.get('default_repo_to_write','')
if default_repo=='': default_repo='local'
r=ck.access({'action':'load',
'module_uoa':work['self_module_uid'],
'repo_uoa':default_repo,
'data_uoa':username})
if r['return']>0: return r
d=r['dict']
# Check password
password=wvp.get('password','').strip()
if password!='':
password_md5=hashlib.md5(password.encode('utf8')).hexdigest()
d['password_md5']=password_md5
# Check email
email=wvp.get('email','').strip()
if email!='':
d['email']=email
realname=wvp.get('realname','').strip()
if realname!='':
d['realname']=realname
# Update entry
r=ck.access({'action':'update',
'module_uoa':work['self_module_uid'],
'repo_uoa':default_repo,
'data_uoa':username,
'dict':d,
'substitute':'yes',
'sort_key':'yes'})
if r['return']>0: return r
return {'return':0}
# File: frappe/setup_logging.py | repo: pawaranand/phr_frappe | license: MIT
import frappe
import logging
import logging.config
import os
import json
from pprint import pformat
class ContextFilter(logging.Filter):
"""
This is a filter which injects request information (if available) into the log.
"""
def filter(self, record):
record.form_dict = pformat(getattr(frappe.local, 'form_dict', None))
record.site = getattr(frappe.local, 'site', None)
record.tb = frappe.utils.get_traceback()
return True
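The same pattern as ContextFilter, shown without the frappe dependency (minimal sketch; the "example.local" value is invented): a logging.Filter that always keeps the record but annotates it with extra fields that a formatter can then reference.

```python
import logging

class SiteFilter(logging.Filter):
    def filter(self, record):
        # here the value is static; ContextFilter reads it per-request
        record.site = "example.local"
        return True                      # keep the record

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s  site: %(site)s"))
logger.addHandler(handler)
logger.addFilter(SiteFilter())           # logger-level filters run before handlers
logger.warning("hello")                  # emits: hello  site: example.local
```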
def setup_logging():
conf = frappe.get_site_config(sites_path=os.environ.get('SITES_PATH', '.'))
if conf.logging_conf:
logging_conf = conf.logging_conf
else:
logging_conf = {
"version": 1,
"disable_existing_loggers": True,
"filters": {
"context_filter": {
"()": "frappe.setup_logging.ContextFilter"
}
},
"formatters": {
"site_wise": {
"format": "\n%(asctime)s %(message)s \n site: %(site)s\n form: %(form_dict)s\n\n%(tb)s\n--------------"
}
},
"loggers": {
"frappe": {
"level": "INFO",
"propagate": False,
"filters": ["context_filter"],
"handlers": ["request_exception"]
}
},
"handlers": {
"request_exception": {
"level": "ERROR",
"formatter": "site_wise",
"class": "logging.StreamHandler",
}
}
}
if conf.request_exception_log_file:
logging_conf.update({
"handlers": {
"request_exception": {
"level": "ERROR",
"formatter": "site_wise",
"class": "logging.handlers.WatchedFileHandler",
"filename": conf.request_exception_log_file
}
}
})
logging.config.dictConfig(logging_conf)
# File: mitmproxy/addons/cut.py | repo: dotnes/mitmproxy | license: MIT
import io
import csv
import typing
from mitmproxy import command
from mitmproxy import exceptions
from mitmproxy import flow
from mitmproxy import ctx
from mitmproxy import certs
from mitmproxy.utils import strutils
import mitmproxy.types
import pyperclip
def headername(spec: str):
if not (spec.startswith("header[") and spec.endswith("]")):
raise exceptions.CommandError("Invalid header spec: %s" % spec)
return spec[len("header["):-1].strip()
def is_addr(v):
return isinstance(v, tuple) and len(v) > 1
def extract(cut: str, f: flow.Flow) -> typing.Union[str, bytes]:
path = cut.split(".")
current = f # type: typing.Any
for i, spec in enumerate(path):
if spec.startswith("_"):
raise exceptions.CommandError("Can't access internal attribute %s" % spec)
part = getattr(current, spec, None)
if i == len(path) - 1:
if spec == "port" and is_addr(current):
return str(current[1])
if spec == "host" and is_addr(current):
return str(current[0])
elif spec.startswith("header["):
if not current:
return ""
return current.headers.get(headername(spec), "")
elif isinstance(part, bytes):
return part
elif isinstance(part, bool):
return "true" if part else "false"
elif isinstance(part, certs.Cert):
return part.to_pem().decode("ascii")
current = part
return str(current or "")
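Stripped of the flow-specific special cases (addresses, headers, certificates), the core of extract() is a dotted attribute walk. A minimal standalone version (the objects here are invented for the demo):

```python
from types import SimpleNamespace

def walk(obj, spec):
    # follow "a.b.c" through attributes, refusing private names
    for part in spec.split("."):
        if part.startswith("_"):
            raise ValueError("no private access: %s" % part)
        obj = getattr(obj, part, None)
    return str(obj or "")

flow = SimpleNamespace(request=SimpleNamespace(method="GET", port=443))
print(walk(flow, "request.method"))   # GET
print(walk(flow, "request.port"))     # 443
print(walk(flow, "request.missing"))  # empty string, like extract()
```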
class Cut:
@command.command("cut")
def cut(
self,
flows: typing.Sequence[flow.Flow],
cuts: mitmproxy.types.CutSpec,
) -> mitmproxy.types.Data:
"""
Cut data from a set of flows. Cut specifications are attribute paths
from the base of the flow object, with a few conveniences - "port"
and "host" retrieve parts of an address tuple, ".header[key]"
retrieves a header value. Return values converted to strings or
bytes: SSL certificates are converted to PEM format, bools are "true"
or "false", "bytes" are preserved, and all other values are
converted to strings.
"""
ret = [] # type:typing.List[typing.List[typing.Union[str, bytes]]]
for f in flows:
ret.append([extract(c, f) for c in cuts])
return ret # type: ignore
@command.command("cut.save")
def save(
self,
flows: typing.Sequence[flow.Flow],
cuts: mitmproxy.types.CutSpec,
path: mitmproxy.types.Path
) -> None:
"""
Save cuts to file. If there are multiple flows or cuts, the format
is UTF-8 encoded CSV. If there is exactly one row and one column,
the data is written to file as-is, with raw bytes preserved. If the
path is prefixed with a "+", values are appended if there is an
existing file.
"""
append = False
if path.startswith("+"):
append = True
path = mitmproxy.types.Path(path[1:])
try:
if len(cuts) == 1 and len(flows) == 1:
with open(path, "ab" if append else "wb") as fp:
if fp.tell() > 0:
# We're appending to a file that already exists and has content
fp.write(b"\n")
v = extract(cuts[0], flows[0])
if isinstance(v, bytes):
fp.write(v)
else:
fp.write(v.encode("utf8"))
ctx.log.alert("Saved single cut.")
else:
with open(path, "a" if append else "w", newline='', encoding="utf8") as fp:
writer = csv.writer(fp)
for f in flows:
vals = [extract(c, f) for c in cuts]
writer.writerow(
[strutils.always_str(x) or "" for x in vals] # type: ignore
)
ctx.log.alert("Saved %s cuts over %d flows as CSV." % (len(cuts), len(flows)))
except IOError as e:
ctx.log.error(str(e))
@command.command("cut.clip")
def clip(
self,
flows: typing.Sequence[flow.Flow],
cuts: mitmproxy.types.CutSpec,
) -> None:
"""
Send cuts to the clipboard. If there are multiple flows or cuts, the
format is UTF-8 encoded CSV. If there is exactly one row and one
column, the data is written to file as-is, with raw bytes preserved.
"""
fp = io.StringIO(newline="")
if len(cuts) == 1 and len(flows) == 1:
v = extract(cuts[0], flows[0])
if isinstance(v, bytes):
fp.write(strutils.always_str(v))
else:
fp.write(v)
ctx.log.alert("Clipped single cut.")
else:
writer = csv.writer(fp)
for f in flows:
vals = [extract(c, f) for c in cuts]
writer.writerow(
[strutils.always_str(v) or "" for v in vals] # type: ignore
)
ctx.log.alert("Clipped %s cuts as CSV." % len(cuts))
try:
pyperclip.copy(fp.getvalue())
except pyperclip.PyperclipException as e:
ctx.log.error(str(e))
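Both cut.save and cut.clip share one rendering rule: exactly one row and one column is emitted as-is, anything else becomes CSV. A hedged sketch of just that decision (sample data invented):

```python
import csv
import io

def render(rows):
    # one row, one column -> raw value; otherwise UTF-8 CSV text
    if len(rows) == 1 and len(rows[0]) == 1:
        return rows[0][0]
    buf = io.StringIO(newline="")
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

print(render([["GET"]]))                           # GET
print(render([["GET", "200"], ["POST", "302"]]))   # two CSV lines
```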
# File: kubernetes_asyncio/test/test_v1_scale_spec.py | repo: PidgeyBE/kubernetes_asyncio | license: Apache-2.0
# coding: utf-8
"""
Kubernetes
No description provided (generated by Openapi Generator https://github.com/openapitools/openapi-generator) # noqa: E501
OpenAPI spec version: v1.13.5
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import kubernetes_asyncio.client
from kubernetes_asyncio.client.models.v1_scale_spec import V1ScaleSpec # noqa: E501
from kubernetes_asyncio.client.rest import ApiException
class TestV1ScaleSpec(unittest.TestCase):
"""V1ScaleSpec unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testV1ScaleSpec(self):
"""Test V1ScaleSpec"""
# FIXME: construct object with mandatory attributes with example values
# model = kubernetes_asyncio.client.models.v1_scale_spec.V1ScaleSpec() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
# File: var/spack/repos/builtin/packages/roctracer-dev/package.py | repo: LiamBindle/spack | licenses: ECL-2.0, Apache-2.0, MIT-0, MIT
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class RoctracerDev(CMakePackage):
"""ROC-tracer library: Runtimes Generic Callback/Activity APIs.
The goal of the implementation is to provide a generic independent from
specific runtime profiler to trace API and asyncronous activity."""
homepage = "https://github.com/ROCm-Developer-Tools/roctracer"
git = "https://github.com/ROCm-Developer-Tools/roctracer.git"
url = "https://github.com/ROCm-Developer-Tools/roctracer/archive/rocm-4.3.0.tar.gz"
maintainers = ['srekolam', 'arjun-raj-kuppala']
version('4.3.1', sha256='88ada5f256a570792d1326a305663e94cf2c3b0cbd99f7e745326923882dafd2')
version('4.3.0', sha256='c3d9f408df8d4dc0e9c0026217b8c684f68e775da80b215fecb3cd24419ee6d3')
version('4.2.0', sha256='62a9c0cb1ba50b1c39a0636c886ac86e75a1a71cbf5fec05801517ceb0e67a37')
version('4.1.0', sha256='5d93de4e92895b6eb5f9d098f5dbd182d33923bd9b2ab69cf5a1abbf91d70695')
version('4.0.0', sha256='f47859a46173228b597c463eda850b870e810534af5efd5f2a746067ef04edee')
version('3.10.0', sha256='ac4a1d059fc34377e906071fd0e56f5434a7e0e4ded9db8faf9217a115239dec')
version('3.9.0', sha256='0678f9faf45058b16923948c66d77ba2c072283c975d167899caef969169b292')
version('3.8.0', sha256='5154a84ce7568cd5dba756e9508c34ae9fc62f4b0b5731f93c2ad68b21537ed1')
version('3.7.0', sha256='6fa5b771e990f09c242237ab334b9f01039ec7d54ccde993e719c5d6577d1518')
version('3.5.0', sha256='7af5326c9ca695642b4265232ec12864a61fd6b6056aa7c4ecd9e19c817f209e')
variant('build_type', default='Release', values=("Release", "Debug", "RelWithDebInfo"), description='CMake build type')
depends_on('cmake@3:', type='build')
depends_on('python@:2', type='build', when='@:4.1.0')
depends_on('python@3:', type='build', when='@4.2.0:')
depends_on('py-cppheaderparser', type='build')
for ver in ['3.5.0', '3.7.0', '3.8.0', '3.9.0', '3.10.0', '4.0.0', '4.1.0',
'4.2.0', '4.3.0', '4.3.1']:
depends_on('hsakmt-roct@' + ver, when='@' + ver)
depends_on('hsa-rocr-dev@' + ver, when='@' + ver)
depends_on('rocminfo@' + ver, when='@' + ver)
depends_on('hip@' + ver, when='@' + ver)
for ver in ['4.2.0', '4.3.0', '4.3.1']:
depends_on('rocprofiler-dev@' + ver, when='@' + ver)
def setup_build_environment(self, build_env):
spec = self.spec
build_env.set("HIP_PATH", spec['hip'].prefix)
def patch(self):
filter_file('${CMAKE_PREFIX_PATH}/hsa',
'${HSA_RUNTIME_INC_PATH}', 'src/CMakeLists.txt',
string=True)
kwargs = {'ignore_absent': False, 'backup': False, 'string': False}
with working_dir('script'):
match = '^#!/usr/bin/python[23]'
python = self.spec['python'].command.path
substitute = "#!{python}".format(python=python)
files = [
'check_trace.py', 'gen_ostream_ops.py', 'hsaap.py', 'kfdap.py'
]
filter_file(match, substitute, *files, **kwargs)
def cmake_args(self):
args = ['-DHIP_VDI=1',
'-DCMAKE_MODULE_PATH={0}/cmake_modules'.format(
self.stage.source_path),
'-DHSA_RUNTIME_HSA_INC_PATH={0}/include'.format(
self.spec['hsa-rocr-dev'].prefix)
]
return args
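The patch() step above rewrites the scripts' shebang lines via Spack's filter_file; the substitution it performs is equivalent to this standalone sketch (the file text and the replacement interpreter path are invented):

```python
import re

text = "#!/usr/bin/python2\nprint('kfd trace')\n"
new_shebang = "#!/opt/python/bin/python3"   # hypothetical spec['python'] path

# re.M makes ^ anchor at line starts, like filter_file's per-line matching
patched = re.sub(r"^#!/usr/bin/python[23]", new_shebang, text, flags=re.M)
print(patched.splitlines()[0])  # #!/opt/python/bin/python3
```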
# File: analysis/video.py | repo: MorrisHuang-skipper/Serial-MD | license: MIT
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from pylab import cm
import math
from mpl_toolkits.mplot3d import Axes3D
import os
import sys
import matplotlib.gridspec as gridspec
mpl.rcParams['font.family'] = 'STIXGeneral'
plt.rcParams['xtick.labelsize'] = 16
plt.rcParams['ytick.labelsize'] = 16
plt.rcParams['font.size'] = 16
plt.rcParams['figure.figsize'] = [5.4*4, 5.2]
plt.rcParams['axes.titlesize'] = 16
plt.rcParams['axes.labelsize'] = 16
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 6
plt.rcParams['legend.fontsize'] = 13
plt.rcParams['mathtext.fontset'] = 'stix'
plt.rcParams['axes.linewidth'] = 1
colors = cm.get_cmap('Set1', 10)
fig = plt.figure(constrained_layout=True)
spec = gridspec.GridSpec(ncols=4, nrows=1, figure=fig)
ax = fig.add_subplot(spec[0, 0])
ax2 = fig.add_subplot(spec[0, 1])
ax3 = fig.add_subplot(spec[0, 2])
ax4 = fig.add_subplot(spec[0, 3])
# ax = fig.add_subplot(1, 3, 1, projection='3d')
ax.xaxis.set_tick_params(which='major', size=5, width=1,
direction='in', top='on')
ax.xaxis.set_tick_params(which='minor', size=3, width=1,
direction='in', top='on')
ax.yaxis.set_tick_params(which='major', size=5, width=1,
direction='in', right='on')
ax.yaxis.set_tick_params(which='minor', size=3, width=1,
direction='in', right='on')
fname = 'ext'
rx = np.loadtxt('../../data/'+fname+'/rx.dat')
ry = np.loadtxt('../../data/'+fname+'/ry.dat')
rz = np.loadtxt('../../data/'+fname+'/rz.dat')
vx = np.loadtxt('../../data/'+fname+'/vx.dat')
vy = np.loadtxt('../../data/'+fname+'/vy.dat')
vz = np.loadtxt('../../data/'+fname+'/vz.dat')
t, Ki, Ke, K, U, Te, Ti, within = np.loadtxt('../../data/'+fname+'/info.dat', unpack=True)
x1 = np.arange(-10, 10, 0.00001)
yy1 = (40-x1**2)**0.5
yy2 = -(40-x1**2)**0.5
step = rx.shape[0]
NP = rx.shape[1]
e = 1.60217e-19
kt = e/38.9
nth = 50
# plot phase space
for i in range(0, step, 1):
num = '{0:04}'.format(i)
# ax.set_zlim(0, 10)
# 2d
ax.plot(rx[i, :-(NP//2)]*1e9, ry[i, :-(NP//2)]*1e9, '.', color=colors(0), label='$H^+$', markersize=3)
ax.plot(rx[i, (NP//2):]*1e9, ry[i, (NP//2):]*1e9, '.', color=colors(1), label='$e^{-}$', markersize=3)
ax.plot(rx[:i, nth]*1e9, ry[:i, nth]*1e9, '-', color=colors(2), label='tracking')
# for j in range(50):
# ax.plot(rx[:i, nth+j]*1e9, ry[:i, nth+j]*1e9, '.')
ax.set_xlabel('$x \ [nm]$')
ax.set_ylabel('$y \ [nm]$')
ax.set_xlim(0, 40)
ax.set_ylim(0, 40)
# diag
ax2.plot(t[:i]*1e15, (Ki[:i]+Ke[:i])/e/NP, '-', label=r'$K_{avg}$', color=colors(2))
ax2.plot(t[:i]*1e15, U[:i]/e/NP, '-', label=r'$U_{avg}$', color=colors(3))
ax2.plot(t[:i]*1e15, Ki[:i]/e/NP+Ke[:i]/e/NP+U[:i]/e/NP, '-', label=r'$E_{avg}$', color=colors(4))
ax2.set_xlabel('$time \ [fs]$')
# ax2.set_ylabel('$E \ [ev]$')
ax2.set_xlim(0, 100)
ax3.plot(t[:i]*1e15, Te[:i]/e, '-', label=r'$T_e$', color=colors(6))
ax3.plot(t[:i]*1e15, Ti[:i]/e, '-', label=r'$T_i$', color=colors(0))
ax3.set_xlabel('$time \ [fs]$')
ax3.set_ylabel('$Temperature \ [eV]$')
# ax3.set_ylabel('$Temperature \ [k_BT]$')
ax3.set_xlim(0, 100)
ax4.hist(vx[i, 1:NP//2], histtype='step')
ax4.hist(vx[i, NP//2+1:NP], histtype='step')
# ax4.set_xlabel('$time \ [fs]$')
# ax4.set_ylabel('$Temperature \ [eV]$')
# ax4.set_xlim(0, 100)
ax.legend(loc=1)
ax2.legend(loc=1)
ax3.legend(loc=1)
ax4.legend(loc=1)
plt.tight_layout()
plt.show(block=False)
plt.pause(.01)
# plt.savefig('../figures/'+fname+'/'+str(num)+'.png', dpi=300)
ax.clear()
ax2.clear()
ax3.clear()
ax4.clear()
# os.system('ffmpeg -i ../figures/'+fname+'/%04d.png -c:v ffv1 -qscale:v 0 ../figures/animate.mp4')
# plt.tight_layout()
| 32.258333 | 106 | 0.58667 | import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from pylab import cm
import math
from mpl_toolkits.mplot3d import Axes3D
import os
import sys
import matplotlib.gridspec as gridspec
mpl.rcParams['font.family'] = 'STIXGeneral'
plt.rcParams['xtick.labelsize'] = 16
plt.rcParams['ytick.labelsize'] = 16
plt.rcParams['font.size'] = 16
plt.rcParams['figure.figsize'] = [5.4*4, 5.2]
plt.rcParams['axes.titlesize'] = 16
plt.rcParams['axes.labelsize'] = 16
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 6
plt.rcParams['legend.fontsize'] = 13
plt.rcParams['mathtext.fontset'] = 'stix'
plt.rcParams['axes.linewidth'] = 1
colors = cm.get_cmap('Set1', 10)
fig = plt.figure(constrained_layout=True)
spec = gridspec.GridSpec(ncols=4, nrows=1, figure=fig)
ax = fig.add_subplot(spec[0, 0])
ax2 = fig.add_subplot(spec[0, 1])
ax3 = fig.add_subplot(spec[0, 2])
ax4 = fig.add_subplot(spec[0, 3])
ax.xaxis.set_tick_params(which='major', size=5, width=1,
direction='in', top='on')
ax.xaxis.set_tick_params(which='minor', size=3, width=1,
direction='in', top='on')
ax.yaxis.set_tick_params(which='major', size=5, width=1,
direction='in', right='on')
ax.yaxis.set_tick_params(which='minor', size=3, width=1,
direction='in', right='on')
fname = 'ext'
rx = np.loadtxt('../../data/'+fname+'/rx.dat')
ry = np.loadtxt('../../data/'+fname+'/ry.dat')
rz = np.loadtxt('../../data/'+fname+'/rz.dat')
vx = np.loadtxt('../../data/'+fname+'/vx.dat')
vy = np.loadtxt('../../data/'+fname+'/vy.dat')
vz = np.loadtxt('../../data/'+fname+'/vz.dat')
t, Ki, Ke, K, U, Te, Ti, within = np.loadtxt('../../data/'+fname+'/info.dat', unpack=True)
x1 = np.arange(-10, 10, 0.00001)
yy1 = (40-x1**2)**0.5
yy2 = -(40-x1**2)**0.5
step = rx.shape[0]
NP = rx.shape[1]
e = 1.60217e-19
kt = e/38.9
nth = 50
for i in range(0, step, 1):
num = '{0:04}'.format(i)
ax.plot(rx[i, :-(NP//2)]*1e9, ry[i, :-(NP//2)]*1e9, '.', color=colors(0), label='$H^+$', markersize=3)
ax.plot(rx[i, (NP//2):]*1e9, ry[i, (NP//2):]*1e9, '.', color=colors(1), label='$e^{-}$', markersize=3)
ax.plot(rx[:i, nth]*1e9, ry[:i, nth]*1e9, '-', color=colors(2), label='tracking')
ax.set_xlabel('$x \ [nm]$')
ax.set_ylabel('$y \ [nm]$')
ax.set_xlim(0, 40)
ax.set_ylim(0, 40)
ax2.plot(t[:i]*1e15, (Ki[:i]+Ke[:i])/e/NP, '-', label=r'$K_{avg}$', color=colors(2))
ax2.plot(t[:i]*1e15, U[:i]/e/NP, '-', label=r'$U_{avg}$', color=colors(3))
ax2.plot(t[:i]*1e15, Ki[:i]/e/NP+Ke[:i]/e/NP+U[:i]/e/NP, '-', label=r'$E_{avg}$', color=colors(4))
ax2.set_xlabel('$time \ [fs]$')
ax2.set_xlim(0, 100)
ax3.plot(t[:i]*1e15, Te[:i]/e, '-', label=r'$T_e$', color=colors(6))
ax3.plot(t[:i]*1e15, Ti[:i]/e, '-', label=r'$T_i$', color=colors(0))
ax3.set_xlabel('$time \ [fs]$')
ax3.set_ylabel('$Temperature \ [eV]$')
ax3.set_xlim(0, 100)
ax4.hist(vx[i, 1:NP//2], histtype='step')
ax4.hist(vx[i, NP//2+1:NP], histtype='step')
ax.legend(loc=1)
ax2.legend(loc=1)
ax3.legend(loc=1)
ax4.legend(loc=1)
plt.tight_layout()
plt.show(block=False)
plt.pause(.01)
ax.clear()
ax2.clear()
ax3.clear()
ax4.clear()
| true | true |
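The plotting record above displays kinetic and potential energies as per-particle electron-volts by dividing Joule-valued arrays by the elementary charge `e` and the particle count `NP` (e.g. `Ki[:i]/e/NP`). A minimal standalone sketch of that unit conversion follows; the constant matches the record, but the function name and sample numbers are illustrative, not taken from the dataset:

```python
# Sketch of the J -> eV-per-particle conversion used in the plotting loop.
E_CHARGE = 1.60217e-19  # elementary charge [C], same value as in the record

def joules_to_ev_per_particle(energy_joules, n_particles):
    """Convert a total energy in Joules to an average per-particle energy in eV."""
    return energy_joules / E_CHARGE / n_particles

total_k = 3.20434e-18           # hypothetical total kinetic energy [J]
avg_ev = joules_to_ev_per_particle(total_k, 2)  # ~10 eV per particle
```

The same division pattern applies unchanged to NumPy arrays, which is why the record can plot whole time series with a single expression.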
f7fbc648a9b7bedcd218e285211ce2787a1458ef | 1,210 | py | Python | docker/server_setup.py | maxpark/dive | 5dce25822d9b53d96ff0c2c8fb02265e4b43911e | [
"Apache-2.0"
] | 28 | 2021-02-15T13:46:58.000Z | 2022-03-27T14:20:37.000Z | docker/server_setup.py | maxpark/dive | 5dce25822d9b53d96ff0c2c8fb02265e4b43911e | [
"Apache-2.0"
] | 506 | 2021-02-12T22:33:10.000Z | 2022-03-30T17:46:36.000Z | docker/server_setup.py | maxpark/dive | 5dce25822d9b53d96ff0c2c8fb02265e4b43911e | [
"Apache-2.0"
] | 14 | 2021-02-21T10:18:48.000Z | 2022-02-23T15:32:40.000Z | import os
import cherrypy
from girder.exceptions import ValidationException
from girder.models.assetstore import Assetstore
from girder.models.setting import Setting
from girder.models.user import User
cherrypy.config["database"]["uri"] = os.getenv("GIRDER_MONGO_URI")
ADMIN_USER = os.getenv("GIRDER_ADMIN_USER", "admin")
ADMIN_PASS = os.getenv("GIRDER_ADMIN_PASS", "letmein")
def createInitialUser():
try:
User().createUser(
ADMIN_USER,
ADMIN_PASS,
ADMIN_USER,
ADMIN_USER,
"admin@admin.com",
admin=True,
public=True,
)
except ValidationException:
print("Admin user already exists, skipping...")
def createAssetstore():
try:
Assetstore().createFilesystemAssetstore("assetstore", "/home/assetstore")
except ValidationException:
print("Assetstore already exists, skipping...")
def configure():
Setting().set(
"core.cors.allow_origin",
os.environ.get(
"CORS_ALLOWED_ORIGINS", "http://localhost:8080, http://localhost:8010"
),
)
if __name__ == '__main__':
createInitialUser()
createAssetstore()
configure()
| 24.2 | 82 | 0.649587 | import os
import cherrypy
from girder.exceptions import ValidationException
from girder.models.assetstore import Assetstore
from girder.models.setting import Setting
from girder.models.user import User
cherrypy.config["database"]["uri"] = os.getenv("GIRDER_MONGO_URI")
ADMIN_USER = os.getenv("GIRDER_ADMIN_USER", "admin")
ADMIN_PASS = os.getenv("GIRDER_ADMIN_PASS", "letmein")
def createInitialUser():
try:
User().createUser(
ADMIN_USER,
ADMIN_PASS,
ADMIN_USER,
ADMIN_USER,
"admin@admin.com",
admin=True,
public=True,
)
except ValidationException:
print("Admin user already exists, skipping...")
def createAssetstore():
try:
Assetstore().createFilesystemAssetstore("assetstore", "/home/assetstore")
except ValidationException:
print("Assetstore already exists, skipping...")
def configure():
Setting().set(
"core.cors.allow_origin",
os.environ.get(
"CORS_ALLOWED_ORIGINS", "http://localhost:8080, http://localhost:8010"
),
)
if __name__ == '__main__':
createInitialUser()
createAssetstore()
configure()
| true | true |
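The `server_setup.py` record above makes provisioning idempotent by wrapping each create call in `try/except ValidationException` and skipping when the object already exists. A self-contained sketch of that pattern, using stand-in names rather than the real Girder API:

```python
# Idempotent-setup sketch: re-running ensure_resource() is harmless.
# AlreadyExistsError and create_resource are stand-ins, not Girder calls.
class AlreadyExistsError(Exception):
    pass

_registry = set()  # simulates server-side state

def create_resource(name):
    if name in _registry:
        raise AlreadyExistsError(name)
    _registry.add(name)

def ensure_resource(name):
    """Create `name` if missing; report and continue if it already exists."""
    try:
        create_resource(name)
        return "created"
    except AlreadyExistsError:
        return "skipped"
```

First call creates, repeat calls skip, so the setup script can run on every container start without failing.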
f7fbc72dfaa2722fa757ff4575e23e08699ce449 | 9,147 | py | Python | fas2ipa/groups.py | markobrien1/fas2ipa | df8696b4a25f29f4de0f093c12806c5b1b45db16 | [
"MIT"
] | 1 | 2020-09-22T17:13:39.000Z | 2020-09-22T17:13:39.000Z | fas2ipa/groups.py | markobrien1/fas2ipa | df8696b4a25f29f4de0f093c12806c5b1b45db16 | [
"MIT"
] | 18 | 2020-02-25T11:38:47.000Z | 2021-03-19T18:12:44.000Z | fas2ipa/groups.py | markobrien1/fas2ipa | df8696b4a25f29f4de0f093c12806c5b1b45db16 | [
"MIT"
] | 7 | 2020-02-24T15:11:52.000Z | 2021-03-30T14:34:00.000Z | import click
import progressbar
import python_freeipa
from collections import defaultdict
from typing import Any, Dict, List
from .status import Status, print_status
from .utils import ObjectManager
class Groups(ObjectManager):
def __init__(self, *args, agreements, **kwargs):
super().__init__(*args, **kwargs)
self.agreements = agreements
def pull_from_fas(self) -> Dict[str, List[Dict]]:
fas_groups = {}
for fas_name, fas_inst in self.fas_instances.items():
click.echo(f"Pulling group information from FAS ({fas_name})...")
fas_conf = self.config["fas"][fas_name]
groups = fas_inst.send_request(
"/group/list",
req_params={"search": fas_conf["groups"]["search"]},
auth=True,
timeout=240,
)["groups"]
groups.sort(key=lambda g: g["name"])
click.echo(f"Got {len(groups)} groups!")
fas_groups[fas_name] = groups
return fas_groups
def push_to_ipa(
self, groups: Dict[str, List[Dict]], conflicts: Dict[str, List[Dict[str, Any]]],
) -> dict:
added = 0
edited = 0
counter = 0
if not conflicts:
conflicts = {}
skip_conflicts = set(self.config["groups"].get("skip_conflicts", ()))
for fas_name, fas_groups in groups.items():
click.echo(f"Pushing {fas_name} group information to IPA...")
fas_conf = self.config["fas"][fas_name]
# Start by creating the umbrella group, if any
umbrella_group = fas_conf["groups"].get("umbrella")
if umbrella_group:
click.echo(f"Ensuring umbrella group {umbrella_group['name']} exists...")
name_max_length = max((len(g["name"]) for g in fas_groups))
click.echo(umbrella_group["name"].ljust(name_max_length + 2), nl=False)
status = self._write_group_to_ipa(fas_name, umbrella_group, from_fas=False)
print_status(status)
if status == Status.ADDED:
added += 1
elif status == Status.UPDATED:
edited += 1
umbrella_members = set()
# Start by creating groups
fas_groups = [
g
for g in fas_groups
if g["name"] not in fas_conf["groups"].get("ignore", ())
]
name_max_length = max((len(g["name"]) for g in fas_groups))
for group in progressbar.progressbar(fas_groups, redirect_stdout=True):
counter += 1
group_conflicts = set(conflicts.get(group["name"], ()))
group_skip_conflicts = skip_conflicts & group_conflicts
if group_skip_conflicts:
print_status(
Status.FAILED,
f"[{fas_name}: Skipping group '{group['name']}' because of conflicts:"
f" {', '.join(group_skip_conflicts)}",
)
continue
self.check_reauth(counter)
click.echo(group["name"].ljust(name_max_length + 2), nl=False)
status = self._write_group_to_ipa(fas_name, group)
print_status(status)
if status == Status.ADDED:
added += 1
elif status == Status.UPDATED:
edited += 1
if umbrella_group and status in (Status.ADDED, Status.UPDATED, Status.UNMODIFIED):
umbrella_members.add(
fas_conf["groups"].get("prefix", "") + group["name"].lower()
)
if umbrella_group:
ipa_group = self.ipa.group_show(umbrella_group["name"])
existing_umbrella_members = set(ipa_group.get("member_group", []))
new_umbrella_members = umbrella_members - existing_umbrella_members
if not new_umbrella_members:
click.echo(f"No new members to add to umbrella group {umbrella_group['name']}")
else:
click.echo(
f"Adding {len(new_umbrella_members)} new groups to umbrella group"
f" {umbrella_group['name']}"
)
self.ipa.group_add_member(
umbrella_group["name"], groups=list(new_umbrella_members)
)
click.echo(f"Done with {fas_name}")
# add groups to agreements
click.echo("Recording group requirements in IPA...")
self.agreements.record_group_requirements(groups)
click.echo("Done.")
return dict(groups_added=added, groups_edited=edited, groups_counter=counter,)
def _write_group_to_ipa(self, fas_name: str, group: dict, from_fas: bool = True):
if from_fas:
# transform FAS group info into what IPA expects
name = (
self.config["fas"][fas_name]["groups"].get("prefix", "")
+ group["name"].lower()
)
# calculate the IRC channel (FAS has 2 fields, freeipa-fas has a single one )
# if we have an irc channel defined. try to generate the irc:// uri
# there are a handful of groups that have an IRC server defined (freenode), but
# no channel, which is kind of useless, so we don't handle that case.
irc_channel = group.get("irc_channel")
irc_string = None
if irc_channel:
if irc_channel[0] == "#":
irc_channel = irc_channel[1:]
irc_network = group.get("irc_network").lower()
if "gimp" in irc_network:
irc_string = f"irc://irc.gimp.org/#{irc_channel}"
elif "oftc" in irc_network:
irc_string = f"irc://irc.oftc.net/#{irc_channel}"
else:
# the remainder of the entries here are either blank or
# freenode, so we freenode them all.
irc_string = f"irc://irc.freenode.net/#{irc_channel}"
url = group.get("url")
if not url:
url = None
else:
url = url.strip()
mailing_list = group.get("mailing_list")
if not mailing_list:
mailing_list = None
else:
if "@" not in mailing_list:
mailing_list = f"{mailing_list}@lists.fedoraproject.org"
mailing_list = mailing_list.strip()
mailing_list = mailing_list.rstrip(".")
mailing_list = mailing_list.lower()
group_args = {
"description": group["display_name"].strip(),
"fasurl": url,
"fasmailinglist": mailing_list,
"fasircchannel": irc_string,
}
else:
name = group["name"]
group_args = {
k: v for k, v in group.items() if k in (
"description", "fasurl", "fasmailinglist", "fasircchannel"
)
}
group["fasgroup"] = True
try:
self.ipa.group_add(name, **group_args)
return Status.ADDED
except python_freeipa.exceptions.FreeIPAError as e:
if e.message == 'group with name "%s" already exists' % name:
try:
self.ipa.group_mod(name, **group_args)
except python_freeipa.exceptions.FreeIPAError as e:
if e.message != "no modifications to be performed":
raise
return Status.UNMODIFIED
else:
print(e.message)
print(e)
print(url, mailing_list, irc_string)
return Status.FAILED
except Exception as e:
print(e)
print(url, mailing_list, irc_string)
return Status.FAILED
def find_group_conflicts(
self, fas_groups: Dict[str, List[Dict]]
) -> Dict[str, List[str]]:
"""Compare groups from different FAS instances and flag conflicts."""
click.echo("Checking for conflicts between groups from different FAS instances")
groups_to_conflicts = {}
groupnames_to_fas = defaultdict(set)
for fas_name, group_objs in fas_groups.items():
for group_obj in group_objs:
groupnames_to_fas[group_obj["name"]].add(fas_name)
for group_name, fas_names in sorted(
groupnames_to_fas.items(), key=lambda x: x[0]
):
if len(fas_names) == 1:
continue
groups_to_conflicts[group_name] = group_conflicts = defaultdict(list)
group_conflicts["same_group_name"] = {"fas_names": fas_names}
click.echo("Done checking group conflicts.")
click.echo(f"Found {len(groups_to_conflicts)} groups with conflicts.")
return groups_to_conflicts
| 39.769565 | 99 | 0.539521 | import click
import progressbar
import python_freeipa
from collections import defaultdict
from typing import Any, Dict, List
from .status import Status, print_status
from .utils import ObjectManager
class Groups(ObjectManager):
def __init__(self, *args, agreements, **kwargs):
super().__init__(*args, **kwargs)
self.agreements = agreements
def pull_from_fas(self) -> Dict[str, List[Dict]]:
fas_groups = {}
for fas_name, fas_inst in self.fas_instances.items():
click.echo(f"Pulling group information from FAS ({fas_name})...")
fas_conf = self.config["fas"][fas_name]
groups = fas_inst.send_request(
"/group/list",
req_params={"search": fas_conf["groups"]["search"]},
auth=True,
timeout=240,
)["groups"]
groups.sort(key=lambda g: g["name"])
click.echo(f"Got {len(groups)} groups!")
fas_groups[fas_name] = groups
return fas_groups
def push_to_ipa(
self, groups: Dict[str, List[Dict]], conflicts: Dict[str, List[Dict[str, Any]]],
) -> dict:
added = 0
edited = 0
counter = 0
if not conflicts:
conflicts = {}
skip_conflicts = set(self.config["groups"].get("skip_conflicts", ()))
for fas_name, fas_groups in groups.items():
click.echo(f"Pushing {fas_name} group information to IPA...")
fas_conf = self.config["fas"][fas_name]
umbrella_group = fas_conf["groups"].get("umbrella")
if umbrella_group:
click.echo(f"Ensuring umbrella group {umbrella_group['name']} exists...")
name_max_length = max((len(g["name"]) for g in fas_groups))
click.echo(umbrella_group["name"].ljust(name_max_length + 2), nl=False)
status = self._write_group_to_ipa(fas_name, umbrella_group, from_fas=False)
print_status(status)
if status == Status.ADDED:
added += 1
elif status == Status.UPDATED:
edited += 1
umbrella_members = set()
fas_groups = [
g
for g in fas_groups
if g["name"] not in fas_conf["groups"].get("ignore", ())
]
name_max_length = max((len(g["name"]) for g in fas_groups))
for group in progressbar.progressbar(fas_groups, redirect_stdout=True):
counter += 1
group_conflicts = set(conflicts.get(group["name"], ()))
group_skip_conflicts = skip_conflicts & group_conflicts
if group_skip_conflicts:
print_status(
Status.FAILED,
f"[{fas_name}: Skipping group '{group['name']}' because of conflicts:"
f" {', '.join(group_skip_conflicts)}",
)
continue
self.check_reauth(counter)
click.echo(group["name"].ljust(name_max_length + 2), nl=False)
status = self._write_group_to_ipa(fas_name, group)
print_status(status)
if status == Status.ADDED:
added += 1
elif status == Status.UPDATED:
edited += 1
if umbrella_group and status in (Status.ADDED, Status.UPDATED, Status.UNMODIFIED):
umbrella_members.add(
fas_conf["groups"].get("prefix", "") + group["name"].lower()
)
if umbrella_group:
ipa_group = self.ipa.group_show(umbrella_group["name"])
existing_umbrella_members = set(ipa_group.get("member_group", []))
new_umbrella_members = umbrella_members - existing_umbrella_members
if not new_umbrella_members:
click.echo(f"No new members to add to umbrella group {umbrella_group['name']}")
else:
click.echo(
f"Adding {len(new_umbrella_members)} new groups to umbrella group"
f" {umbrella_group['name']}"
)
self.ipa.group_add_member(
umbrella_group["name"], groups=list(new_umbrella_members)
)
click.echo(f"Done with {fas_name}")
click.echo("Recording group requirements in IPA...")
self.agreements.record_group_requirements(groups)
click.echo("Done.")
return dict(groups_added=added, groups_edited=edited, groups_counter=counter,)
def _write_group_to_ipa(self, fas_name: str, group: dict, from_fas: bool = True):
if from_fas:
name = (
self.config["fas"][fas_name]["groups"].get("prefix", "")
+ group["name"].lower()
)
irc_channel = group.get("irc_channel")
irc_string = None
if irc_channel:
if irc_channel[0] == "#":
irc_channel = irc_channel[1:]
irc_network = group.get("irc_network").lower()
if "gimp" in irc_network:
irc_string = f"irc://irc.gimp.org/#{irc_channel}"
elif "oftc" in irc_network:
irc_string = f"irc://irc.oftc.net/#{irc_channel}"
else:
# the remainder of the entries here are either blank or
# freenode, so we freenode them all.
irc_string = f"irc://irc.freenode.net/#{irc_channel}"
url = group.get("url")
if not url:
url = None
else:
url = url.strip()
mailing_list = group.get("mailing_list")
if not mailing_list:
mailing_list = None
else:
if "@" not in mailing_list:
mailing_list = f"{mailing_list}@lists.fedoraproject.org"
mailing_list = mailing_list.strip()
mailing_list = mailing_list.rstrip(".")
mailing_list = mailing_list.lower()
group_args = {
"description": group["display_name"].strip(),
"fasurl": url,
"fasmailinglist": mailing_list,
"fasircchannel": irc_string,
}
else:
name = group["name"]
group_args = {
k: v for k, v in group.items() if k in (
"description", "fasurl", "fasmailinglist", "fasircchannel"
)
}
group["fasgroup"] = True
try:
self.ipa.group_add(name, **group_args)
return Status.ADDED
except python_freeipa.exceptions.FreeIPAError as e:
if e.message == 'group with name "%s" already exists' % name:
try:
self.ipa.group_mod(name, **group_args)
except python_freeipa.exceptions.FreeIPAError as e:
if e.message != "no modifications to be performed":
raise
return Status.UNMODIFIED
else:
print(e.message)
print(e)
print(url, mailing_list, irc_string)
return Status.FAILED
except Exception as e:
print(e)
print(url, mailing_list, irc_string)
return Status.FAILED
def find_group_conflicts(
self, fas_groups: Dict[str, List[Dict]]
) -> Dict[str, List[str]]:
click.echo("Checking for conflicts between groups from different FAS instances")
groups_to_conflicts = {}
groupnames_to_fas = defaultdict(set)
for fas_name, group_objs in fas_groups.items():
for group_obj in group_objs:
groupnames_to_fas[group_obj["name"]].add(fas_name)
for group_name, fas_names in sorted(
groupnames_to_fas.items(), key=lambda x: x[0]
):
if len(fas_names) == 1:
continue
groups_to_conflicts[group_name] = group_conflicts = defaultdict(list)
group_conflicts["same_group_name"] = {"fas_names": fas_names}
click.echo("Done checking group conflicts.")
click.echo(f"Found {len(groups_to_conflicts)} groups with conflicts.")
return groups_to_conflicts
| true | true |
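The `find_group_conflicts` method in the record above detects name collisions by mapping each group name to the set of FAS instances that define it, then flagging names owned by more than one instance. The core of that logic can be restated as a standalone function (simplified: it returns the conflicting instance sets directly rather than the nested conflict structure the class builds):

```python
from collections import defaultdict

def find_conflicts(fas_groups):
    """Map {instance: [group dicts]} to {group_name: instances} for names
    that appear in more than one instance."""
    name_to_instances = defaultdict(set)
    for fas_name, groups in fas_groups.items():
        for group in groups:
            name_to_instances[group["name"]].add(fas_name)
    return {
        name: instances
        for name, instances in name_to_instances.items()
        if len(instances) > 1
    }
```

With two instances both defining "infra", only "infra" is reported; groups unique to one instance are ignored.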
f7fbc7a9eba15a1c42b2f4cbc34f30de2a9613c5 | 53 | py | Python | cacreader/swig-4.0.2/Examples/test-suite/python/global_ns_arg_runme.py | kyletanyag/LL-Smartcard | 02abea9de5a13f8bae4d7832ab34cb7f0d9514c9 | [
"BSD-3-Clause"
] | 1,031 | 2015-01-02T14:08:47.000Z | 2022-03-29T02:25:27.000Z | cacreader/swig-4.0.2/Examples/test-suite/python/global_ns_arg_runme.py | kyletanyag/LL-Smartcard | 02abea9de5a13f8bae4d7832ab34cb7f0d9514c9 | [
"BSD-3-Clause"
] | 240 | 2015-01-11T04:27:19.000Z | 2022-03-30T00:35:57.000Z | cacreader/swig-4.0.2/Examples/test-suite/python/global_ns_arg_runme.py | kyletanyag/LL-Smartcard | 02abea9de5a13f8bae4d7832ab34cb7f0d9514c9 | [
"BSD-3-Clause"
] | 224 | 2015-01-05T06:13:54.000Z | 2022-02-25T14:39:51.000Z | from global_ns_arg import *
a = foo(1)
b = bar_fn()
| 10.6 | 27 | 0.660377 | from global_ns_arg import *
a = foo(1)
b = bar_fn()
| true | true |
f7fbc8b13652297abf6e73d85bc19be2333ab0b8 | 4,238 | py | Python | src/common/utils.py | DmitryBurnaev/podcast | 48c7c60e2a46378f36635dc58222e5e7682f977f | [
"MIT"
] | 1 | 2020-09-05T10:37:55.000Z | 2020-09-05T10:37:55.000Z | src/common/utils.py | DmitryBurnaev/podcast | 48c7c60e2a46378f36635dc58222e5e7682f977f | [
"MIT"
] | null | null | null | src/common/utils.py | DmitryBurnaev/podcast | 48c7c60e2a46378f36635dc58222e5e7682f977f | [
"MIT"
] | null | null | null | import logging
import logging.config
import aiohttp
from aiohttp import web
import settings
from common.excpetions import SendRequestError
def get_logger(name: str = None):
""" Getting configured logger """
logging.config.dictConfig(settings.LOGGING)
return logging.getLogger(name or "app")
def redirect(
request, router_name: str, *, permanent=False, url: str = None, reason="HttpFound", **kwargs
):
""" Redirect to given URL name
:param request: current request for web app
:param router_name: router name for creating redirect url (used if url not specified)
:param permanent: use permanent redirect
:param url: directly set url for redirecting
:param reason: Http reason for redirect moving ('HttpFound' by default)
:param kwargs: url-kwargs for forming redirecting url
:raises `aiohttp.web.HTTPMovedPermanently` `aiohttp.web.HTTPFound`
"""
if not url:
for key in kwargs.keys():
kwargs[key] = str(kwargs[key])
url = request.app.router[router_name].url_for(**kwargs)
if permanent:
raise web.HTTPMovedPermanently(url, reason=reason)
raise web.HTTPFound(url, reason=reason)
def add_message(request, message: str, title: str = None, kind: str = "info"):
""" Put message into session """
messages = request.session.get("messages", [])
kind = "danger" if kind == "error" else kind
message = {"title": title or "Podcasts informer", "message": message}
messages.append((kind, message))
request.session["messages"] = messages
async def get_object_or_404(request, model, **kwargs):
""" Get object or raise HttpNotFound """
try:
return await request.app.objects.get(model, **kwargs)
except model.DoesNotExist:
raise web.HTTPNotFound()
def is_mobile_app(request):
""" Detects requests from mobile application (It will change some view)"""
user_agent = request.headers.get("User-Agent")
return "mobile-app-web-view" in user_agent
def database_init(db):
db.init(
settings.DATABASE["name"],
host=settings.DATABASE["host"],
port=settings.DATABASE["port"],
user=settings.DATABASE["username"],
password=settings.DATABASE["password"],
)
return db
async def send_email(recipient_email: str, subject: str, html_content: str):
""" Allows to send email via Sendgrid API """
request_url = f"https://api.sendgrid.com/{settings.SENDGRID_API_VERSION}/mail/send"
request_data = {
"personalizations": [{"to": [{"email": recipient_email}], "subject": subject}],
"from": {"email": settings.EMAIL_FROM},
"content": [{"type": "text/html", "value": html_content}],
}
request_header = {"Authorization": f"Bearer {settings.SENDGRID_API_KEY}"}
request_logger = get_logger(__name__)
request_logger.info("Send request to %s. Data: %s", request_url, request_data)
async with aiohttp.ClientSession() as session:
async with session.post(request_url, json=request_data, headers=request_header) as response:
if response.status > 299:
response_text = await response.text()
raise SendRequestError(
f"Couldn't send email to {recipient_email}",
f"Got status code: {response.status}; response text: {response_text}",
response_status=response.status,
request_url=request_url,
)
else:
request_logger.info(
"Email sent to %s. Status code: %s", recipient_email, response.status
)
def cut_string(source_string: str, max_length: int, finish_seq: str = "...") -> str:
"""
Allows to limit source_string and append required sequence
>>> cut_string('Some long string', max_length=13)
'Some long ...'
>>> cut_string('Some long string', max_length=8, finish_seq="***")
'Some ***'
>>> cut_string('Some long string', max_length=1)
''
"""
if len(source_string) > max_length:
slice_length = max_length - len(finish_seq)
return source_string[:slice_length] + finish_seq if (slice_length > 0) else ""
return source_string
| 35.024793 | 100 | 0.651958 | import logging
import logging.config
import aiohttp
from aiohttp import web
import settings
from common.excpetions import SendRequestError
def get_logger(name: str = None):
logging.config.dictConfig(settings.LOGGING)
return logging.getLogger(name or "app")
def redirect(
request, router_name: str, *, permanent=False, url: str = None, reason="HttpFound", **kwargs
):
if not url:
for key in kwargs.keys():
kwargs[key] = str(kwargs[key])
url = request.app.router[router_name].url_for(**kwargs)
if permanent:
raise web.HTTPMovedPermanently(url, reason=reason)
raise web.HTTPFound(url, reason=reason)
def add_message(request, message: str, title: str = None, kind: str = "info"):
messages = request.session.get("messages", [])
kind = "danger" if kind == "error" else kind
message = {"title": title or "Podcasts informer", "message": message}
messages.append((kind, message))
request.session["messages"] = messages
async def get_object_or_404(request, model, **kwargs):
try:
return await request.app.objects.get(model, **kwargs)
except model.DoesNotExist:
raise web.HTTPNotFound()
def is_mobile_app(request):
user_agent = request.headers.get("User-Agent")
return "mobile-app-web-view" in user_agent
def database_init(db):
db.init(
settings.DATABASE["name"],
host=settings.DATABASE["host"],
port=settings.DATABASE["port"],
user=settings.DATABASE["username"],
password=settings.DATABASE["password"],
)
return db
async def send_email(recipient_email: str, subject: str, html_content: str):
request_url = f"https://api.sendgrid.com/{settings.SENDGRID_API_VERSION}/mail/send"
request_data = {
"personalizations": [{"to": [{"email": recipient_email}], "subject": subject}],
"from": {"email": settings.EMAIL_FROM},
"content": [{"type": "text/html", "value": html_content}],
}
request_header = {"Authorization": f"Bearer {settings.SENDGRID_API_KEY}"}
request_logger = get_logger(__name__)
request_logger.info("Send request to %s. Data: %s", request_url, request_data)
async with aiohttp.ClientSession() as session:
async with session.post(request_url, json=request_data, headers=request_header) as response:
if response.status > 299:
response_text = await response.text()
raise SendRequestError(
f"Couldn't send email to {recipient_email}",
f"Got status code: {response.status}; response text: {response_text}",
response_status=response.status,
request_url=request_url,
)
else:
request_logger.info(
"Email sent to %s. Status code: %s", recipient_email, response.status
)
def cut_string(source_string: str, max_length: int, finish_seq: str = "...") -> str:
if len(source_string) > max_length:
slice_length = max_length - len(finish_seq)
return source_string[:slice_length] + finish_seq if (slice_length > 0) else ""
return source_string
| true | true |
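The `cut_string` helper at the end of the `utils.py` record carries its own doctest examples. Restating it as a standalone function lets those documented cases be checked directly, including the edge case where `max_length` is shorter than the finish sequence:

```python
# Standalone restatement of cut_string() from the record above.
def cut_string(source_string, max_length, finish_seq="..."):
    """Truncate source_string to max_length, appending finish_seq;
    returns "" when even finish_seq does not fit."""
    if len(source_string) > max_length:
        slice_length = max_length - len(finish_seq)
        return source_string[:slice_length] + finish_seq if slice_length > 0 else ""
    return source_string
```

Note the empty-string result for tiny limits: when `max_length <= len(finish_seq)` there is no room for any payload, so the function returns nothing rather than a bare suffix.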
f7fbc8f88b244ccead27235fc29111b1a56ae99a | 4,164 | py | Python | src/afancontrol/fans.py | X0rg/afancontrol | 91fd8b3bf1fa759651b991ffc5e0ae551e6e4907 | [
"MIT"
] | 1 | 2021-07-28T21:16:43.000Z | 2021-07-28T21:16:43.000Z | src/afancontrol/fans.py | X0rg/afancontrol | 91fd8b3bf1fa759651b991ffc5e0ae551e6e4907 | [
"MIT"
] | null | null | null | src/afancontrol/fans.py | X0rg/afancontrol | 91fd8b3bf1fa759651b991ffc5e0ae551e6e4907 | [
"MIT"
] | null | null | null | from contextlib import ExitStack
from typing import Mapping, MutableSet, Optional
from afancontrol.config import FanName
from afancontrol.logger import logger
from afancontrol.pwmfan import PWMFanNorm, PWMValueNorm
from afancontrol.report import Report
class Fans:
def __init__(self, fans: Mapping[FanName, PWMFanNorm], *, report: Report) -> None:
self.fans = fans
self.report = report
self._stack = None # type: Optional[ExitStack]
# Set of fans marked as failing (which speed is 0)
self._failed_fans = set() # type: MutableSet[FanName]
# Set of fans that will be skipped on speed check
self._stopped_fans = set() # type: MutableSet[FanName]
def is_fan_failing(self, fan_name: FanName) -> bool:
return fan_name in self._failed_fans
def is_fan_stopped(self, fan_name: FanName) -> bool:
return fan_name in self._stopped_fans
def __enter__(self): # reusable
self._stack = ExitStack()
logger.info("Enabling PWM on fans...")
try:
for pwmfan in self.fans.values():
self._stack.enter_context(pwmfan)
except Exception:
self._stack.close()
raise
return self
def __exit__(self, exc_type, exc_value, exc_tb):
assert self._stack is not None
logger.info("Disabling PWM on fans...")
self._stack.close()
logger.info("Done. Fans should be returned to full speed")
return None
def check_speeds(self) -> None:
for name, fan in self.fans.items():
if name in self._stopped_fans:
continue
try:
if fan.get_speed() <= 0:
raise RuntimeError("Fan speed is 0")
except Exception as e:
self._ensure_fan_is_failing(name, e)
else:
self._ensure_fan_is_not_failing(name)
def set_all_to_full_speed(self) -> None:
for name, fan in self.fans.items():
if name in self._failed_fans:
continue
try:
fan.set_full_speed()
except Exception as e:
logger.warning("Unable to set the fan '%s' to full speed:\n%s", name, e)
def set_fan_speeds(self, speeds: Mapping[FanName, PWMValueNorm]) -> None:
assert speeds.keys() == self.fans.keys()
self._stopped_fans.clear()
for name, pwm_norm in speeds.items():
fan = self.fans[name]
assert 0.0 <= pwm_norm <= 1.0
if name in self._failed_fans:
continue
try:
pwm = fan.set(pwm_norm)
except Exception as e:
logger.warning(
"Unable to set the fan '%s' to speed %s:\n%s", name, pwm_norm, e
)
else:
logger.debug(
"Fan status [%s]: speed: %.3f, pwm: %s", name, pwm_norm, pwm
)
if fan.is_pwm_stopped(pwm):
self._stopped_fans.add(name)
def _ensure_fan_is_failing(self, name: FanName, get_speed_exc: Exception) -> None:
if name in self._failed_fans:
return
self._failed_fans.add(name)
fan = self.fans[name]
try:
# Perhaps it had jammed, so setting it to full speed might
# recover it?
fan.set_full_speed()
except Exception as e:
full_speed_result = "Setting fan speed to full has failed:\n%s" % e
else:
full_speed_result = "Fan has been set to full speed"
self.report.report(
"fan stopped: %s" % name,
"Looks like the fan '%s' is failing:\n%s\n\n%s"
% (name, get_speed_exc, full_speed_result),
)
def _ensure_fan_is_not_failing(self, name: FanName) -> None:
if name not in self._failed_fans:
return
self.report.report(
"fan started: %s" % name,
"Fan '%s' which had previously been reported as failing has just started."
% name,
)
self._failed_fans.remove(name)
f7fbc9150969f438fbd8f4e7994322f560e87fe5 | 9,894 | py | Python | tests/fixtures/test_signup.py | AusDTO/dto-digitalmarketplace-api | 937843c9c01a71518cf4688b4daa55bbe7df1965 | ["MIT"] | 6 | 2017-06-09T03:38:53.000Z | 2021-12-22T02:42:15.000Z | tests/fixtures/test_signup.py | AusDTO/dto-digitalmarketplace-api | 937843c9c01a71518cf4688b4daa55bbe7df1965 | ["MIT"] | 47 | 2016-08-02T05:21:31.000Z | 2022-03-28T01:14:17.000Z | tests/fixtures/test_signup.py | AusDTO/dto-digitalmarketplace-api | 937843c9c01a71518cf4688b4daa55bbe7df1965 | ["MIT"] | 7 | 2016-09-13T13:07:18.000Z | 2021-02-17T10:16:21.000Z |
import json
import pytest
import mock
import copy
from app.models import db, Agency, AgencyDomain
from app.api.services import user_claims_service
from nose.tools import assert_equal
test_seller = {
'name': 'matt',
'email_address': 'email+s@company.com',
'user_type': 'seller',
'abn': '123456'
}
gov_au_buyer = {
'name': 'Indiana Jones',
'email_address': 'indy+b@adventure.gov.au',
'user_type': 'buyer'
}
whitelisted_non_gov_au_buyer = {
'name': 'Han Solo',
'email_address': 'solo+b@falcon.net.au',
'user_type': 'buyer'
}
non_whitelisted_buyer_in_agency = {
'name': 'Luke Skywalker',
'email_address': 'luke+b@jedi.edu.au',
'user_type': 'buyer'
}
non_whitelisted_buyer = {
'name': 'Rick Deckard',
'email_address': 'deckard+b@runner.com.au',
'user_type': 'buyer'
}
@pytest.fixture()
def agencies(app, request):
with app.app_context():
db.session.add(Agency(
id=1,
name='Department of Adventure',
domain='adventure.gov.au',
category='Commonwealth',
state='ACT',
whitelisted=True,
domains=[AgencyDomain(
domain='adventure.gov.au',
active=True
)]
))
db.session.add(Agency(
id=2,
name='Department of Millenium Falcons',
domain='falcon.net.au',
category='Commonwealth',
state='NSW',
whitelisted=True,
domains=[AgencyDomain(
domain='falcon.net.au',
active=True
)]
))
db.session.add(Agency(
id=3,
name='Jedi Temple',
domain='jedi.edu.au',
category='State',
state='NSW',
whitelisted=False,
domains=[AgencyDomain(
domain='jedi.edu.au',
active=True
)]
))
db.session.commit()
yield Agency.query.all()
@mock.patch('app.api.views.users.send_account_activation_email')
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_send_seller_type_signup_invite_email(user_claim, mock_send_email, client):
response = client.post(
'/2/signup',
data=json.dumps(test_seller),
content_type='application/json')
assert response.status_code == 200
mock_send_email.assert_called()
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_seller_signup_fail_missing_abn(user_claim, client):
test_seller_no_abn = copy.copy(test_seller)
del test_seller_no_abn['abn']
response = client.post(
'/2/signup',
data=json.dumps(test_seller_no_abn),
content_type='application/json')
assert response.status_code == 400
@mock.patch('app.api.views.users.send_account_activation_email')
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_send_buyer_type_signup_invite_email(user_claim, mock_send_email, client):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@digital.gov.au',
'name': 'Jeff Labowski',
'user_type': 'buyer',
'employment_status': 'employee'
}),
content_type='application/json')
assert response.status_code == 200
data = json.loads(response.data)
assert data['message'] == 'Email invite sent successfully'
mock_send_email.assert_called()
@mock.patch('app.api.views.users.send_account_activation_manager_email')
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_send_contractor_buyer_type_signup_invite_email(user_claim, mock_send_email, client):
response = client.post(
'/2/signup',
data=json.dumps({
'line_manager_email': 'm@danger.gov.au',
'line_manager_name': 'Jeff Labowski',
'employment_status': 'contractor',
'user_type': 'buyer',
'name': 'Royal',
'email_address': 'rtenenabaum@mouse.gov.au'
}),
content_type='application/json')
assert response.status_code == 200
data = json.loads(response.data)
assert data['message'] == 'Email invite sent successfully'
mock_send_email.assert_called()
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_invalid_employment_status(user_claim, client):
response = client.post(
'/2/signup',
data=json.dumps({
'line_manager_email': 'm@danger.gov.au',
'line_manager_name': 'Jeff Labowski',
'employment_status': 'nope',
'user_type': 'buyer',
'name': 'Royal',
'email_address': 'rtenenabaum@mouse.gov.au'
}),
content_type='application/json')
assert response.status_code == 400
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_missing_name(user_claim, client):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@goolag.com',
'user_type': 'seller'
}),
content_type='application/json')
assert response.status_code == 400
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_signup_fails_without_required_fields(user_claim, client, supplier_user):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@goolag.com',
'name': 'Jeff Labowski'
}),
content_type='application/json')
assert response.status_code == 400
data = json.loads(response.data)
assert_equal(data['message'], "'user_type' is a required property")
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_supplier_with_same_domain(user_claim, client, supplier_user):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@examplecompany.biz',
'name': 'Jeff Labowski',
'user_type': 'seller',
'abn': '56789'
}),
content_type='application/json')
assert response.status_code == 200
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_duplicate_supplier_with_same_abn(user_claim, client, suppliers, supplier_user):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@examplecompany.biz',
'name': 'Jeff Labowski',
'user_type': 'seller',
'abn': '1'
}),
content_type='application/json')
assert response.status_code == 409
data = json.loads(response.data)
assert_equal(data['message'], 'There is already a seller account with ABN 1')
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_duplicate_application_with_same_abn(user_claim, client, application_user):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@don.com',
'name': 'Jeff Labowski',
'user_type': 'seller',
'abn': '123456'
}),
content_type='application/json')
assert response.status_code == 409
data = json.loads(response.data)
assert_equal(data['message'], 'There is already a seller account with ABN 123456')
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_generic_domain(user_claim, client):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@gmail.com',
'name': 'Jeff Labowski',
'user_type': 'seller',
'url': '/2',
'abn': '56789'
}),
content_type='application/json')
    assert response.status_code == 200
data = json.loads(response.data)
assert_equal(data['message'], 'Email invite sent successfully')
@mock.patch('app.api.views.users.send_account_activation_email')
@mock.patch('app.tasks.publish_tasks.user_claim')
@pytest.mark.parametrize('user', [gov_au_buyer, whitelisted_non_gov_au_buyer])
def test_buyer_can_signup_with_whitelisted_email(user_claim, mock_send_email, client, agencies, user):
response = client.post(
'/2/signup',
data=json.dumps({
'name': user['name'],
'email_address': user['email_address'],
'employment_status': 'employee',
'user_type': user['user_type']
}),
content_type='application/json')
assert response.status_code == 200
data = json.loads(response.data)
assert data['message'] == 'Email invite sent successfully'
mock_send_email.assert_called()
@mock.patch('app.tasks.publish_tasks.user_claim')
@pytest.mark.parametrize('user', [non_whitelisted_buyer_in_agency, non_whitelisted_buyer])
def test_buyer_can_not_signup_with_non_whitelisted_email(user_claim, client, agencies, user):
response = client.post(
'/2/signup',
data=json.dumps({
'name': user['name'],
'email_address': user['email_address'],
'employment_status': 'employee',
'user_type': user['user_type']
}),
content_type='application/json')
assert response.status_code == 403
data = json.loads(response.data)
assert data['message'] == 'A buyer account must have a valid government entity email domain'
@mock.patch('app.tasks.publish_tasks.user_claim')
def test_signup_creates_signup_claim_token(user_claim, app, client):
response = client.post(
'/2/signup',
data=json.dumps({
'email_address': 'm@digital.gov.au',
'name': 'Jeff Labowski',
'user_type': 'buyer',
'employment_status': 'employee'
}),
content_type='application/json')
assert response.status_code == 200
with app.app_context():
claim = user_claims_service.find(email_address='m@digital.gov.au', type='signup', claimed=False).one_or_none()
assert claim.email_address == 'm@digital.gov.au'
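The whitelisting behaviour these tests pin down — `.gov.au` buyers and buyers from whitelisted agency domains get in, everyone else is rejected with a 403 — can be sketched as a plain predicate. The helper name and domain set below are illustrative stand-ins, not the marketplace API's real service code:

```python
# Mirrors the `agencies` fixture above: jedi.edu.au has an Agency row
# but whitelisted=False, so it must NOT appear in this set.
WHITELISTED_AGENCY_DOMAINS = {"adventure.gov.au", "falcon.net.au"}


def buyer_email_allowed(email_address):
    """Approximate signup rule: allow .gov.au domains and whitelisted agencies."""
    domain = email_address.rsplit("@", 1)[-1].lower()
    return domain.endswith(".gov.au") or domain in WHITELISTED_AGENCY_DOMAINS
```

Run against the fixture users: `indy+b@adventure.gov.au` and `solo+b@falcon.net.au` pass, while `luke+b@jedi.edu.au` and `deckard+b@runner.com.au` are rejected, matching the 200/403 split asserted above.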
f7fbc96245e2f0ea302afae850c53c5b30f0d66a | 166 | py | Python | src/ralph/lib/transitions/apps.py | andrzej-jankowski/ralph | 68ec18a66b8fe47ddf1c082c3ce2d82b2cd430dc | ["Apache-2.0"] | null | null | null | src/ralph/lib/transitions/apps.py | andrzej-jankowski/ralph | 68ec18a66b8fe47ddf1c082c3ce2d82b2cd430dc | ["Apache-2.0"] | null | null | null | src/ralph/lib/transitions/apps.py | andrzej-jankowski/ralph | 68ec18a66b8fe47ddf1c082c3ce2d82b2cd430dc | ["Apache-2.0"] | null | null | null |
# -*- coding: utf-8 -*-
from django.apps import AppConfig
class TransitionAppConfig(AppConfig):
name = 'ralph.lib.transitions'
verbose_name = 'Transitions'
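An `AppConfig` subclass like this one only takes effect once Django is pointed at it from the project settings. A hypothetical fragment — ralph's actual settings may wire this up differently, e.g. via `default_app_config` or by listing the bare app path:

```python
# settings.py (illustrative fragment, not taken from the ralph repo)
INSTALLED_APPS = [
    # ... other apps ...
    "ralph.lib.transitions.apps.TransitionAppConfig",  # dotted path to the AppConfig
]
```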
f7fbca6c2148ac0fc50fbec1cae5c9a0710cdce0 | 493 | py | Python | hazelcast/protocol/codec/map_clear_codec.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | ["Apache-2.0"] | 98 | 2015-12-08T14:26:27.000Z | 2022-03-23T17:44:11.000Z | hazelcast/protocol/codec/map_clear_codec.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | ["Apache-2.0"] | 396 | 2016-02-23T11:07:55.000Z | 2022-03-31T14:26:34.000Z | hazelcast/protocol/codec/map_clear_codec.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | ["Apache-2.0"] | 62 | 2015-12-09T11:20:53.000Z | 2022-01-28T01:30:54.000Z |
from hazelcast.protocol.client_message import OutboundMessage, REQUEST_HEADER_SIZE, create_initial_buffer
from hazelcast.protocol.builtin import StringCodec
# hex: 0x012D00
_REQUEST_MESSAGE_TYPE = 77056
# hex: 0x012D01
_RESPONSE_MESSAGE_TYPE = 77057
_REQUEST_INITIAL_FRAME_SIZE = REQUEST_HEADER_SIZE
def encode_request(name):
buf = create_initial_buffer(_REQUEST_INITIAL_FRAME_SIZE, _REQUEST_MESSAGE_TYPE)
StringCodec.encode(buf, name, True)
return OutboundMessage(buf, False)
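The `# hex:` comments above encode a convention worth making explicit: in the Hazelcast client protocol the message type appears to pack a service byte, a method byte, and a request/response byte, so the response type is always the request type plus one. A quick sanity check — the field split is my reading of the convention, not an official API:

```python
# Message type constants from the codec above.
REQUEST_TYPE = 77056    # 0x012D00 (Map.Clear request)
RESPONSE_TYPE = 77057   # 0x012D01 (Map.Clear response)

# The hex comments check out, and the response is request + 1.
assert hex(REQUEST_TYPE) == "0x12d00"
assert RESPONSE_TYPE == REQUEST_TYPE + 1

# Apparent 24-bit layout: one byte each for service, method, and direction.
service_id = (REQUEST_TYPE >> 16) & 0xFF   # 0x01 -> the Map service
method_id = (REQUEST_TYPE >> 8) & 0xFF     # 0x2D -> the "clear" method
direction = REQUEST_TYPE & 0xFF            # 0x00 request, 0x01 response
```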
f7fbcacf21058ebd2add82f562467cc65c7e70fa | 668 | py | Python | core/models.py | danielqiang/CombineSearch | d24b49c4ef0cc0bf4470383f293a6d625765f3d2 | ["MIT"] | null | null | null | core/models.py | danielqiang/CombineSearch | d24b49c4ef0cc0bf4470383f293a6d625765f3d2 | ["MIT"] | null | null | null | core/models.py | danielqiang/CombineSearch | d24b49c4ef0cc0bf4470383f293a6d625765f3d2 | ["MIT"] | 1 | 2020-03-20T20:41:30.000Z | 2020-03-20T20:41:30.000Z |
from django.db import models
class Author(models.Model):
name = models.CharField(max_length=30)
def __str__(self):
return self.name
class Category(models.Model):
name = models.CharField(max_length=20)
def __str__(self):
return self.name
class Journal(models.Model):
title = models.CharField(max_length=120)
author = models.ForeignKey(Author, on_delete=models.CASCADE)
categories = models.ManyToManyField(Category)
publish_date = models.DateTimeField(auto_now_add=True)
views = models.IntegerField(default=0)
reviewed = models.BooleanField(default=False)
def __str__(self):
return self.title
f7fbcaf2144a99a1427daa079bab87f73f035547 | 2,099 | py | Python | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/proesch/colorCube/colorCube.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | ["Apache-2.0"] | 12 | 2015-12-02T02:36:36.000Z | 2020-09-20T17:14:24.000Z | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/proesch/colorCube/colorCube.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | ["Apache-2.0"] | null | null | null | 002-pyopengl/PyOpenGL-Demo-3.0.1b1/PyOpenGL-Demo/proesch/colorCube/colorCube.py | lhl/vrdev | fc1a9af2b51d159c99c8779349ef3392a70ed9ed | ["Apache-2.0"] | 8 | 2016-11-02T11:17:04.000Z | 2021-10-21T07:42:19.000Z |
#!/usr/bin/python
# rotating color cube
# Copyright (C) 2007 "Peter Roesch" <Peter.Roesch@fh-augsburg.de>
#
# This code is licensed under the PyOpenGL License.
# Details are given in the file license.txt included in this distribution.
import sys
try:
from OpenGL.GLUT import *
from OpenGL.GL import *
from OpenGL.GLU import *
except:
print ''' Error: PyOpenGL not installed properly '''
sys.exit( )
import array
vertices = array.array('f', [ -1,-1,1, -1,1,1, 1,1,1, 1,-1,1,\
-1,-1,-1, -1,1,-1, 1,1,-1, 1,-1,-1 ] )
colors = array.array('f', [ 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, \
0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1] )
cIndices = array.array('B', [0, 3, 2, 1, 2, 3, 7, 6, 0, 4, 7, 3, \
1, 2, 6, 5, 4, 5, 6, 7, 0, 1, 5, 4 ] )
animationAngle = 0.0
frameRate = 25
from time import sleep
def animationStep( ):
global animationAngle
global frameRate
animationAngle += 2
while animationAngle > 360:
animationAngle -= 360
sleep( 1 / float( frameRate ) )
glutPostRedisplay( )
def display( ):
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT )
glMatrixMode( GL_PROJECTION )
glLoadIdentity( )
glOrtho( -2, 2, -2, 2, -2, 2 )
glMatrixMode( GL_MODELVIEW )
glLoadIdentity( )
glRotatef( animationAngle, 1, 1, 1 )
glEnableClientState( GL_COLOR_ARRAY )
glEnableClientState( GL_VERTEX_ARRAY )
glColorPointer( 3, GL_FLOAT, 0, colors.tostring( ) )
glVertexPointer( 3, GL_FLOAT, 0, vertices.tostring( ) )
glDrawElements( GL_QUADS, 24, GL_UNSIGNED_BYTE, cIndices.tostring( ) )
glDisableClientState( GL_COLOR_ARRAY )
glDisableClientState( GL_VERTEX_ARRAY )
glutSwapBuffers( )
def init( ):
if not (glColorPointer and glVertexPointer and glDrawElements):
print ''' Error: no vertex array support'''
sys.exit( )
glClearColor ( 0, 0, 0, 0 )
glEnable(GL_DEPTH_TEST)
glShadeModel( GL_SMOOTH )
glutInit( sys.argv )
glutInitDisplayMode( GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH )
glutInitWindowSize( 250, 250 )
glutInitWindowPosition( 100, 100 )
glutCreateWindow( sys.argv[0] )
init( )
glutDisplayFunc( display )
glutIdleFunc( animationStep )
glutMainLoop( )
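The arrays above fully determine the cube: eight corners at coordinates of ±1, drawn as six indexed quads, with each corner shared by exactly three faces. Those invariants can be checked without any OpenGL context:

```python
import array
from collections import Counter

# Same geometry data as the demo above.
vertices = array.array('f', [-1, -1, 1, -1, 1, 1, 1, 1, 1, 1, -1, 1,
                             -1, -1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1])
cIndices = array.array('B', [0, 3, 2, 1, 2, 3, 7, 6, 0, 4, 7, 3,
                             1, 2, 6, 5, 4, 5, 6, 7, 0, 1, 5, 4])

n_vertices = len(vertices) // 3   # 3 floats per corner
n_quads = len(cIndices) // 4      # 4 indices per face (GL_QUADS)
usage = Counter(cIndices)         # how many faces touch each corner
```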
f7fbccd65eb14ab564d2f902ca145e5f22769a61 | 294 | py | Python | sympy/integrals/tests/test_lineintegrals.py | sn6uv/sympy | 5b149c2f72847e4785c65358b09d99b29f101dd5 | [
"BSD-3-Clause"
] | 2 | 2015-05-11T12:26:38.000Z | 2016-08-19T00:11:03.000Z | sympy/integrals/tests/test_lineintegrals.py | goodok/sympy | de84ed2139125a755ea7b6ba91d945d9fbbe5ed9 | [
"BSD-3-Clause"
] | 1 | 2016-06-13T01:29:51.000Z | 2016-06-14T00:38:27.000Z | sympy/integrals/tests/test_lineintegrals.py | goodok/sympy | de84ed2139125a755ea7b6ba91d945d9fbbe5ed9 | [
"BSD-3-Clause"
] | 1 | 2022-03-21T09:07:27.000Z | 2022-03-21T09:07:27.000Z | from sympy import (symbols, integrate, Integral, diff, sin, cos, pi, E, ln,
sympify, Curve, line_integrate, sqrt)
s, t, x, y, z = symbols('s,t,x,y,z')
def test_lineintegral():
c = Curve([E**t + 1, E**t - 1], (t, 0, ln(2)))
assert line_integrate(x + y, c, [x, y]) == 3*sqrt(2)
| 32.666667 | 75 | 0.57483 | from sympy import (symbols, integrate, Integral, diff, sin, cos, pi, E, ln,
sympify, Curve, line_integrate, sqrt)
s, t, x, y, z = symbols('s,t,x,y,z')
def test_lineintegral():
c = Curve([E**t + 1, E**t - 1], (t, 0, ln(2)))
assert line_integrate(x + y, c, [x, y]) == 3*sqrt(2)
| true | true |
f7fbcd2dab63894d0039ebaed84bcaa13266d9e1 | 7,042 | py | Python | src/transformers/models/deberta_v2/configuration_deberta_v2.py | kct22aws/transformers | 04cddaf402591e9f5bdb5f116a111d829a0ce4f4 | [
"Apache-2.0"
] | 31 | 2022-02-02T13:13:41.000Z | 2022-03-29T08:37:20.000Z | src/transformers/models/deberta_v2/configuration_deberta_v2.py | guang7400613/transformers | 28e091430eea9e0d40839e56fd0d57aec262f5f9 | [
"Apache-2.0"
] | 2 | 2022-03-14T10:13:16.000Z | 2022-03-14T11:50:27.000Z | src/transformers/models/deberta_v2/configuration_deberta_v2.py | guang7400613/transformers | 28e091430eea9e0d40839e56fd0d57aec262f5f9 | [
"Apache-2.0"
] | 2 | 2022-03-21T04:32:39.000Z | 2022-03-22T01:02:49.000Z | # coding=utf-8
# Copyright 2020, Microsoft and the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" DeBERTa-v2 model configuration"""
from ...configuration_utils import PretrainedConfig
from ...utils import logging
logger = logging.get_logger(__name__)
DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"microsoft/deberta-v2-xlarge": "https://huggingface.co/microsoft/deberta-v2-xlarge/resolve/main/config.json",
"microsoft/deberta-v2-xxlarge": "https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/config.json",
"microsoft/deberta-v2-xlarge-mnli": "https://huggingface.co/microsoft/deberta-v2-xlarge-mnli/resolve/main/config.json",
"microsoft/deberta-v2-xxlarge-mnli": "https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/config.json",
}
class DebertaV2Config(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`DebertaV2Model`]. It is used to instantiate a
DeBERTa-v2 model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the DeBERTa
    [microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Arguments:
vocab_size (`int`, *optional*, defaults to 128100):
Vocabulary size of the DeBERTa-v2 model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`DebertaV2Model`].
hidden_size (`int`, *optional*, defaults to 1536):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 24):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 24):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 6144):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"`, `"gelu"`, `"tanh"`, `"gelu_fast"`, `"mish"`, `"linear"`, `"sigmoid"` and `"gelu_new"`
are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 0):
            The vocabulary size of the `token_type_ids` passed when calling [`DebertaV2Model`] or
            [`TFDebertaV2Model`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-7):
The epsilon used by the layer normalization layers.
        relative_attention (`bool`, *optional*, defaults to `False`):
            Whether to use relative position encoding.
        max_relative_positions (`int`, *optional*, defaults to -1):
            The range of relative positions `[-max_position_embeddings, max_position_embeddings]`. The default
            value of -1 means the value of `max_position_embeddings` is used.
pad_token_id (`int`, *optional*, defaults to 0):
The value used to pad input_ids.
        position_biased_input (`bool`, *optional*, defaults to `True`):
            Whether to add absolute position embeddings to the content embeddings.
pos_att_type (`List[str]`, *optional*):
            The type of relative position attention; it can be a combination of `["p2c", "c2p", "p2p"]`, e.g.
            `["p2c"]`, `["p2c", "c2p"]`, `["p2c", "c2p", "p2p"]`.
"""
model_type = "deberta-v2"
def __init__(
self,
vocab_size=128100,
hidden_size=1536,
num_hidden_layers=24,
num_attention_heads=24,
intermediate_size=6144,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=0,
initializer_range=0.02,
layer_norm_eps=1e-7,
relative_attention=False,
max_relative_positions=-1,
pad_token_id=0,
position_biased_input=True,
pos_att_type=None,
pooler_dropout=0,
pooler_hidden_act="gelu",
**kwargs
):
super().__init__(**kwargs)
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.hidden_act = hidden_act
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.initializer_range = initializer_range
self.relative_attention = relative_attention
self.max_relative_positions = max_relative_positions
self.pad_token_id = pad_token_id
self.position_biased_input = position_biased_input
# Backwards compatibility
        if isinstance(pos_att_type, str):
pos_att_type = [x.strip() for x in pos_att_type.lower().split("|")]
self.pos_att_type = pos_att_type
self.vocab_size = vocab_size
self.layer_norm_eps = layer_norm_eps
self.pooler_hidden_size = kwargs.get("pooler_hidden_size", hidden_size)
self.pooler_dropout = pooler_dropout
self.pooler_hidden_act = pooler_hidden_act
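The backwards-compatibility branch in `__init__` above (splitting a legacy `"|"`-separated `pos_att_type` string into a list) can be exercised in isolation. The helper name below is hypothetical; it simply mirrors that branch:

```python
# Hypothetical standalone mirror of the pos_att_type normalization done in
# DebertaV2Config.__init__: legacy configs stored e.g. "c2p|p2c" as a string,
# while newer configs store a list such as ["c2p", "p2c"].
def normalize_pos_att_type(pos_att_type):
    if isinstance(pos_att_type, str):
        return [x.strip() for x in pos_att_type.lower().split("|")]
    return pos_att_type

print(normalize_pos_att_type("C2P | p2c"))  # ['c2p', 'p2c']
print(normalize_pos_att_type(["p2c"]))      # ['p2c']
```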
f7fbcdcafadd48f6c9d1e843eefee400b2e44500 | 411 | py | Python | utility_belt/__init__.py | ajsilveira/utility_belt | b593400bdbe1bcda89117615175045f6eefda74a | [
"MIT"
] | null | null | null | utility_belt/__init__.py | ajsilveira/utility_belt | b593400bdbe1bcda89117615175045f6eefda74a | [
"MIT"
] | null | null | null | utility_belt/__init__.py | ajsilveira/utility_belt | b593400bdbe1bcda89117615175045f6eefda74a | [
"MIT"
] | null | null | null | """
utility_belt
A set of tools for my work
"""
# Make Python 2 and 3 imports work the same
# Safe to remove with Python 3-only code
from __future__ import absolute_import
# Add imports here
from .utility_belt import *
# Handle versioneer
from ._version import get_versions
versions = get_versions()
__version__ = versions['version']
__git_revision__ = versions['full-revisionid']
del get_versions, versions
# ---- intel/k8s.py (repo: hanyin-intel/CPU-Manager-for-Kubernetes, license: Apache-2.0) ----
# Copyright (c) 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from intel import util
from kubernetes import client as k8sclient, config as k8sconfig
from kubernetes.client import V1Namespace, V1DeleteOptions
VERSION_NAME = "v1.9.0"
# Only set up the Volume Mounts necessary for the container
CONTAINER_VOLUME_MOUNTS = {
"init": {
"volumeMounts": [
{
"mountPath": "/host/proc",
"name": "host-proc",
"readOnly": True
}
],
"securityContext": {
"readOnlyRootFilesystem": True,
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
},
"install": {
"volumeMounts": [
{
"mountPath": "/opt/bin",
"name": "cmk-install-dir",
}
],
"securityContext": {
"privileged": True
}
},
"discover": {
"volumeMounts": [
],
"securityContext": {
"readOnlyRootFilesystem": True,
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
},
"reconcile": {
"volumeMounts": [
{
"mountPath": "/host/proc",
"name": "host-proc",
"readOnly": True
},
{
"mountPath": "/opt/bin",
"name": "cmk-install-dir",
"readOnly": True
}
],
"securityContext": {
"readOnlyRootFilesystem": True,
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
},
"nodereport": {
"volumeMounts": [
{
"mountPath": "/host/proc",
"name": "host-proc",
"readOnly": True
},
{
"mountPath": "/opt/bin",
"name": "cmk-install-dir",
"readOnly": True
}
],
"securityContext": {
"readOnlyRootFilesystem": True,
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
},
"webhook": {
"volumeMounts": [
{
"mountPath": "/host/proc",
"name": "host-proc",
"readOnly": True
},
{
"mountPath": "/opt/bin",
"name": "cmk-install-dir",
"readOnly": True
}
],
"securityContext": {
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
},
"reconfigure": {
"volumeMounts": [
{
"mountPath": "/host/proc",
"name": "host-proc",
"readOnly": True
}
],
"securityContext": {
"readOnlyRootFilesystem": True,
"runAsNonRoot": True,
"runAsUser": 1000,
"runAsGroup": 3000,
"fsGroup": 2000
}
}
}
def get_pod_template(saname="cmk-serviceaccount"):
pod_template = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "PODNAME",
"annotations": {
}
},
"spec": {
"serviceAccount": saname,
"nodeName": "NODENAME",
"containers": [
],
"restartPolicy": "Never",
"volumes": [
{
"hostPath": {
"path": "/proc"
},
"name": "host-proc"
},
{
"hostPath": {
"path": "/opt/bin"
},
"name": "cmk-install-dir"
}
]
}
}
return pod_template
def ds_from(pod, version):
ds_template = {}
if version >= util.parse_version(VERSION_NAME):
ds_template = {
"apiVersion": "apps/v1",
"kind": "DaemonSet",
"metadata": {
"name": pod["metadata"]["name"].replace("pod", "ds")
},
"spec": {
"selector": {
"matchLabels": {
"app": pod["metadata"]["name"].replace("pod", "ds")
}
},
"template": {
"metadata": {
"labels": {
"app":
pod["metadata"]["name"].replace("pod", "ds")
}
},
"spec": pod["spec"]
}
}
}
# for k8s versions older than 1.9.0 use extensions/v1beta1 API
else:
ds_template = {
"apiVersion": "extensions/v1beta1",
"kind": "DaemonSet",
"metadata": {
"name": pod["metadata"]["name"].replace("pod", "ds")
},
"spec": {
"template": {
"metadata": {
"labels": {
"app":
pod["metadata"]["name"].replace("pod", "ds")
}
},
"spec": pod["spec"]
}
}
}
return ds_template
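As a quick illustration of the wrapping `ds_from` performs, here is a minimal standalone sketch of the `apps/v1` branch (no Kubernetes client required; the pod name is made up):

```python
# Minimal sketch of the apps/v1 branch of ds_from(): the pod spec is embedded
# under spec.template, and the "pod" suffix in the name is rewritten to "ds"
# so the selector and template labels agree.
pod = {"metadata": {"name": "cmk-reconcile-pod"}, "spec": {"containers": []}}

app = pod["metadata"]["name"].replace("pod", "ds")
ds = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": app},
    "spec": {
        "selector": {"matchLabels": {"app": app}},
        "template": {
            "metadata": {"labels": {"app": app}},
            "spec": pod["spec"],
        },
    },
}
print(ds["metadata"]["name"])  # cmk-reconcile-ds
```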
def deployment_from(pod):
deployment_template = {
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"labels": pod["metadata"]["labels"],
"name": pod["metadata"]["name"].replace("pod", "deployment")
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": pod["metadata"]["labels"]
},
"template": {
"metadata": {
"labels": pod["metadata"]["labels"]
},
"spec": pod["spec"]
}
}
}
return deployment_template
def client_from_config(config):
if config is None:
k8sconfig.load_incluster_config()
return k8sclient.CoreV1Api()
else:
client = k8sclient.ApiClient(configuration=config)
return k8sclient.CoreV1Api(api_client=client)
def apps_api_client_from_config(config):
if config is None:
k8sconfig.load_incluster_config()
return k8sclient.AppsV1Api()
else:
client = k8sclient.ApiClient(configuration=config)
return k8sclient.AppsV1Api(api_client=client)
def extensions_client_from_config(config):
if config is None:
k8sconfig.load_incluster_config()
return k8sclient.ExtensionsV1beta1Api()
else:
client = k8sclient.ApiClient(configuration=config)
return k8sclient.ExtensionsV1beta1Api(api_client=client)
def version_api_client_from_config(config):
if config is None:
k8sconfig.load_incluster_config()
return k8sclient.VersionApi()
else:
client = k8sclient.ApiClient(configuration=config)
return k8sclient.VersionApi(api_client=client)
def admissionregistartion_api_client_from_config(config):
if config is None:
k8sconfig.load_incluster_config()
return k8sclient.AdmissionregistrationV1beta1Api()
else:
client = k8sclient.ApiClient(configuration=config)
return k8sclient.AdmissionregistrationV1beta1Api(api_client=client)
def get_container_template(cmd):
container_template = {
"args": [
"ARGS"
],
"command": ["/bin/bash", "-c"],
"env": [
{
"name": "CMK_PROC_FS",
"value": "/host/proc"
},
{
"name": "NODE_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "spec.nodeName"
}
}
}
],
"image": "IMAGENAME",
"name": "NAME",
"securityContext": CONTAINER_VOLUME_MOUNTS[cmd]["securityContext"],
"volumeMounts": CONTAINER_VOLUME_MOUNTS[cmd]["volumeMounts"],
"imagePullPolicy": "Never"
}
return container_template
# get_node_list() returns the node list in the current Kubernetes cluster.
def get_node_list(config, label_selector=None):
k8s_api = client_from_config(config)
if label_selector:
nodes = k8s_api.list_node(label_selector=label_selector).to_dict()
else:
nodes = k8s_api.list_node().to_dict()
return nodes["items"]
# get_node_from_pod returns the node that a given pod is running on
def get_node_from_pod(config, pod_name):
pods = get_pod_list(config)
for p in pods["items"]:
if p["metadata"]["name"] == pod_name:
return p["spec"]["node_name"]
# get_pod_list() returns the pod list in the current Kubernetes cluster.
def get_pod_list(config):
k8s_api = client_from_config(config)
return k8s_api.list_pod_for_all_namespaces().to_dict()
# create_pod() sends a request to the Kubernetes API server to create a
# pod based on podspec.
def create_pod(config, podspec, ns_name):
k8s_api = client_from_config(config)
return k8s_api.create_namespaced_pod(ns_name, podspec)
# create_ds() sends a request to the Kubernetes API server to create a
# DaemonSet based on the daemonset spec.
def create_ds(config, spec, ns_name, version):
if version >= util.parse_version(VERSION_NAME):
k8s_api = apps_api_client_from_config(config)
return k8s_api.create_namespaced_daemon_set(ns_name, spec)
else:
k8s_api = extensions_client_from_config(config)
return k8s_api.create_namespaced_daemon_set(ns_name, spec)
def create_service(config, spec, ns_name):
k8s_api = client_from_config(config)
return k8s_api.create_namespaced_service(ns_name, spec)
def create_config_map(config, spec, ns_name):
k8s_api = client_from_config(config)
return k8s_api.create_namespaced_config_map(ns_name, spec)
def patch_config_map(config, cm_name, spec, ns_name):
k8s_api = client_from_config(config)
return k8s_api.patch_namespaced_config_map(cm_name, ns_name, spec)
def create_secret(config, spec, ns_name):
k8s_api = client_from_config(config)
return k8s_api.create_namespaced_secret(ns_name, spec)
def create_mutating_webhook_configuration(config, spec):
k8s_api = admissionregistartion_api_client_from_config(config)
return k8s_api.create_mutating_webhook_configuration(spec)
def create_deployment(config, spec, ns_name):
k8s_api = apps_api_client_from_config(config)
return k8s_api.create_namespaced_deployment(ns_name, spec)
# Create list of schedulable nodes.
def get_compute_nodes(config, label_selector=None):
compute_nodes = []
for node in get_node_list(config, label_selector):
if "unschedulable" in node["spec"] and \
node["spec"]["unschedulable"]:
continue
compute_nodes.append(node)
return compute_nodes
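The filter above only depends on the node dictionaries, so it can be sketched without a cluster (node names are made up):

```python
# Sketch of the schedulability filter in get_compute_nodes(): nodes whose spec
# marks them unschedulable (e.g. cordoned with `kubectl cordon`) are skipped.
nodes = [
    {"metadata": {"name": "node-a"}, "spec": {}},
    {"metadata": {"name": "node-b"}, "spec": {"unschedulable": True}},
    {"metadata": {"name": "node-c"}, "spec": {"unschedulable": False}},
]
compute = [n for n in nodes if not n["spec"].get("unschedulable")]
print([n["metadata"]["name"] for n in compute])  # ['node-a', 'node-c']
```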
# Set label to selected node.
def set_node_label(config, node, label, label_value):
patch_body = [{
"op": "add",
"path": "/metadata/labels/%s" % label,
"value": label_value,
}]
k8s_api = client_from_config(config)
k8s_api.patch_node(node, patch_body)
# Unset label from node.
def unset_node_label(config, node, label):
patch_body = [{
"op": "remove",
"path": "/metadata/labels/%s" % label,
}]
k8s_api = client_from_config(config)
k8s_api.patch_node(node, patch_body)
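The two helpers above build RFC 6902 JSON Patch bodies. The sketch below shows the generated documents for a made-up label (note that a label key containing `/` would need `~1` escaping in the patch path, which these helpers do not perform):

```python
# JSON Patch bodies as produced by set_node_label() / unset_node_label().
label, value = "cmk.intel.com-node", "true"

add_patch = [{
    "op": "add",
    "path": "/metadata/labels/%s" % label,
    "value": value,
}]
remove_patch = [{
    "op": "remove",
    "path": "/metadata/labels/%s" % label,
}]
print(add_patch[0]["path"])  # /metadata/labels/cmk.intel.com-node
```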
# Create namespace with generated namespace name.
def create_namespace(config, ns_name):
metadata = {'name': ns_name}
namespace = V1Namespace(metadata=metadata)
k8s_api = client_from_config(config)
k8s_api.create_namespace(namespace)
# Get available namespaces.
def get_namespaces(config):
k8s_api = client_from_config(config)
return k8s_api.list_namespace().to_dict()
# Get k8s version
def get_kube_version(config):
k8s_api = version_api_client_from_config(config)
version_info = k8s_api.get_code()
return version_info.git_version
# Get named configmap
def get_config_map(config, name, ns_name):
k8s_api = client_from_config(config)
configmaps = k8s_api.list_namespaced_config_map(ns_name)
for cm in configmaps.items:
if cm.metadata.name == name:
return cm
# Delete namespace by name.
def delete_namespace(config, ns_name, delete_options=V1DeleteOptions()):
k8s_api = client_from_config(config)
k8s_api.delete_namespace(ns_name)
# Delete pod from namespace.
def delete_pod(config, name, ns_name="default", body=V1DeleteOptions()):
k8s_api = client_from_config(config)
k8s_api.delete_namespaced_pod(name, ns_name)
# Delete ds from namespace.
# Due to a problem with the orphan_dependents flag and changes
# to cascade deletion in k8s, first delete the ds, then its pods
# https://github.com/kubernetes-incubator/client-python/issues/162
# https://github.com/kubernetes/kubernetes/issues/44046
def delete_ds(config, version, ds_name, ns_name="default",
body=V1DeleteOptions()):
k8s_api_core = client_from_config(config)
if version >= util.parse_version(VERSION_NAME):
k8s_api_apps = apps_api_client_from_config(config)
k8s_api_apps.delete_namespaced_daemon_set(ds_name,
ns_name,
grace_period_seconds=0,
orphan_dependents=False)
else:
k8s_api_ext = extensions_client_from_config(config)
k8s_api_ext.delete_namespaced_daemon_set(ds_name,
ns_name,
grace_period_seconds=0,
orphan_dependents=False)
# Pod in ds has fixed label so we use label selector
data = k8s_api_core.list_namespaced_pod(
ns_name, label_selector="app={}".format(ds_name)).to_dict()
    # Remove every pod created by the daemonset (one per targeted node)
for pod in data["items"]:
logging.debug("Removing pod \"{}\"".format(pod["metadata"]["name"]))
delete_pod(None, pod["metadata"]["name"], ns_name)
def delete_service(config, name, ns_name="default", body=V1DeleteOptions()):
k8s_api = client_from_config(config)
return k8s_api.delete_namespaced_service(name, ns_name)
def delete_config_map(config, name, ns_name="default", body=V1DeleteOptions()):
k8s_api = client_from_config(config)
return k8s_api.delete_namespaced_config_map(name, ns_name)
def delete_secret(config, name, ns_name="default", body=V1DeleteOptions()):
k8s_api = client_from_config(config)
return k8s_api.delete_namespaced_secret(name, ns_name)
def delete_mutating_webhook_configuration(config, name,
body=V1DeleteOptions()):
k8s_api = admissionregistartion_api_client_from_config(config)
return k8s_api.delete_mutating_webhook_configuration(name)
def delete_deployment(config, name, ns_name="default", body=V1DeleteOptions()):
k8s_api = apps_api_client_from_config(config)
return k8s_api.delete_namespaced_deployment(name, ns_name)
# ---- tests/store/artifact/test_dbfs_artifact_repo_delegation.py (repo: PeterSulcs/mlflow, license: Apache-2.0) ----
import os
import pytest
from unittest import mock
from mlflow.store.artifact.artifact_repository_registry import get_artifact_repository
from mlflow.store.artifact.local_artifact_repo import LocalArtifactRepository
from mlflow.store.artifact.dbfs_artifact_repo import DbfsRestArtifactRepository
from mlflow.store.artifact.dbfs_artifact_repo import DatabricksArtifactRepository
from mlflow.utils.rest_utils import MlflowHostCreds
@pytest.fixture()
def host_creds_mock():
with mock.patch(
"mlflow.store.artifact.dbfs_artifact_repo._get_host_creds_from_default_store"
) as get_creds_mock:
get_creds_mock.return_value = lambda: MlflowHostCreds("http://host")
yield
@mock.patch("mlflow.utils.databricks_utils.is_dbfs_fuse_available")
def test_dbfs_artifact_repo_delegates_to_correct_repo(
is_dbfs_fuse_available, host_creds_mock
): # pylint: disable=unused-argument
# fuse available
is_dbfs_fuse_available.return_value = True
artifact_uri = "dbfs:/databricks/my/absolute/dbfs/path"
repo = get_artifact_repository(artifact_uri)
assert isinstance(repo, LocalArtifactRepository)
assert repo.artifact_dir == os.path.join(
os.path.sep, "dbfs", "databricks", "my", "absolute", "dbfs", "path"
)
# fuse available but a model repository DBFS location
repo = get_artifact_repository("dbfs:/databricks/mlflow-registry/version12345/models")
assert isinstance(repo, DbfsRestArtifactRepository)
# fuse not available
with mock.patch.dict(os.environ, {"MLFLOW_ENABLE_DBFS_FUSE_ARTIFACT_REPO": "false"}):
fuse_disabled_repo = get_artifact_repository(artifact_uri)
assert isinstance(fuse_disabled_repo, DbfsRestArtifactRepository)
assert fuse_disabled_repo.artifact_uri == artifact_uri
is_dbfs_fuse_available.return_value = False
rest_repo = get_artifact_repository(artifact_uri)
assert isinstance(rest_repo, DbfsRestArtifactRepository)
assert rest_repo.artifact_uri == artifact_uri
with mock.patch(
"mlflow.store.artifact.databricks_artifact_repo"
+ ".DatabricksArtifactRepository._get_run_artifact_root"
) as get_run_artifact_root_mock:
mock_uri = "dbfs:/databricks/mlflow-tracking/MOCK-EXP/MOCK-RUN-ID/artifacts"
get_run_artifact_root_mock.return_value = mock_uri
databricks_repo = get_artifact_repository(mock_uri)
assert isinstance(databricks_repo, DatabricksArtifactRepository)
assert databricks_repo.artifact_uri == mock_uri
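The first assertion block above relies on the mapping from a `dbfs:/` URI to its local FUSE mount path. A minimal sketch of that mapping (my own reconstruction for illustration, not mlflow code):

```python
import os

# Reconstruction of the dbfs:/ -> /dbfs/... FUSE path mapping asserted above:
# strip the "dbfs:/" scheme and re-root the remaining segments under /dbfs.
uri = "dbfs:/databricks/my/absolute/dbfs/path"
relative = uri[len("dbfs:/"):]
fuse_path = os.path.join(os.path.sep, "dbfs", *relative.split("/"))
print(fuse_path)  # /dbfs/databricks/my/absolute/dbfs/path (on POSIX)
```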
f7fbd05bd7299fa0aaeb5b1d3712e2cf9cef053e | 2,472 | py | Python | python/seldon_deploy_sdk/models/capability.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | ["Apache-2.0"] | 6 | 2021-02-18T14:37:54.000Z | 2022-01-13T13:27:43.000Z | python/seldon_deploy_sdk/models/capability.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | ["Apache-2.0"] | 14 | 2021-01-04T16:32:03.000Z | 2021-12-13T17:53:59.000Z | python/seldon_deploy_sdk/models/capability.py | adriangonz/seldon-deploy-sdk | c5504838630a87053387cec57ec2e1e7251971e2 | ["Apache-2.0"] | 7 | 2021-03-17T09:05:55.000Z | 2022-01-05T10:39:56.000Z |
# coding: utf-8
"""
Seldon Deploy API
API to interact and manage the lifecycle of your machine learning models deployed through Seldon Deploy. # noqa: E501
OpenAPI spec version: v1alpha1
Contact: hello@seldon.io
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class Capability(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
}
attribute_map = {
}
def __init__(self): # noqa: E501
"""Capability - a model defined in Swagger""" # noqa: E501
self.discriminator = None
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(Capability, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, Capability):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
f7fbd071cb019beda3409456c1d66e1b9eeacdd4 | 2,249 | py | Python | day-32-automated-email/main.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | ["CNRI-Python"] | null | null | null | day-32-automated-email/main.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | ["CNRI-Python"] | null | null | null | day-32-automated-email/main.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | ["CNRI-Python"] | null | null | null |
##################### Extra Hard Starting Project ######################
import pandas as pd
import os
import datetime
import random
import smtplib
from csv import reader
os.chdir(os.path.dirname(__file__))
my_email = "jareds.automated.email@gmail.com"
password = "abcdefg123()"
# 1. Update the birthdays.csv
# 2. Check if today matches a birthday in the birthdays.csv
df = pd.read_csv("birthdays.csv")
today_string = datetime.date.today().strftime('%m-%d')
with open('birthdays.csv', 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
# Check file as empty
if header != None:
# Iterate over each row after the header in the csv
for row in csv_reader:
# row variable is a list that represents a row in csv
birthday = f'{row[4].zfill(2)}-{row[5].zfill(2)}'
if birthday == today_string:
# 3. If step 2 is true, pick a random letter from letter templates and replace the [NAME] with the person's actual name from birthdays.csv
number_of_letter_options = 0
for letter in os.listdir('letter_templates'):
if row[1] in letter:
number_of_letter_options += 1
                # path is relative to this script (os.chdir above makes this portable)
                with open(f'letter_templates/{row[1]}_{random.randint(1,number_of_letter_options)}.txt') as file:
message = file.readlines()
email_message = ""
for line in message:
email_message = email_message + line
email_message = email_message.replace("[NAME]",row[0])
# 4. Send the letter generated in step 3 to that person's email address.
with smtplib.SMTP("smtp.gmail.com", port=587) as connection:
connection.starttls()
connection.login(user=my_email, password=password)
connection.sendmail(from_addr=my_email,
to_addrs=row[2],
                                        msg= f'Subject:Happy Birthday!\n\n{email_message}')
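The date-matching and template-personalization step of the script above can be isolated into a small testable function. A sketch using the same row layout as the script (file handling and SMTP omitted):

```python
import datetime

def birthday_message(row, template, today=None):
    """Personalize a letter if the row's birthday falls on `today`, else return None.

    `row` follows the birthdays.csv layout used above: the name is at
    index 0 and the birth month/day at indices 4 and 5.
    """
    today = today or datetime.date.today()
    birthday = f"{row[4].zfill(2)}-{row[5].zfill(2)}"
    if birthday != today.strftime("%m-%d"):
        return None
    return template.replace("[NAME]", row[0])
```

Passing `today` explicitly makes the date comparison deterministic under test, unlike the module-level `today_string` above.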
f7fbd073906bc9f5fcfc8f06b63d0dfb6db3baaa | 8,418 | py | Python | verifiable_mpc/ac20/compressed_pivot.py | marcusmjh/verifiable_mpc | 8f9b734c28e98057cb3928870cb504ab045c3a83 | ["MIT"] | null | null | null | verifiable_mpc/ac20/compressed_pivot.py | marcusmjh/verifiable_mpc | 8f9b734c28e98057cb3928870cb504ab045c3a83 | ["MIT"] | null | null | null | verifiable_mpc/ac20/compressed_pivot.py | marcusmjh/verifiable_mpc | 8f9b734c28e98057cb3928870cb504ab045c3a83 | ["MIT"] | null | null | null |
""" Implementation of https://eprint.iacr.org/2020/152
``Compressed Σ-Protocol Theory and Practical Application
to Plug & Play Secure Algorithmics''
Protocols:
* Compressed Σ-protocol Π_c for relation R, page 15, protocol 5
"""
import os
import sys
import logging
from random import SystemRandom
project_root = os.path.abspath("..")
if project_root not in sys.path:
    sys.path.insert(0, project_root)
import verifiable_mpc.ac20.pivot as pivot
from sec_groups.fingroups import EllipticCurveElement
prng = SystemRandom()
logger_cp = logging.getLogger("compressed_pivot")
logger_cp.setLevel(logging.INFO)
logger_cp_hin = logging.getLogger("compressed_pivot_hash_inputs")
logger_cp_hin.setLevel(logging.INFO)
logger_cp_hout = logging.getLogger("compressed_pivot_hash_outputs")
logger_cp_hout.setLevel(logging.INFO)
def protocol_4_prover(g_hat, k, Q, L_tilde, z_hat, gf, proof={}, round_i=0):
""" Non-interactive version of Protocol 4 from Section 4.2: Prover's side
"""
# Step 5: Prover calculates A, B
half = len(g_hat) // 2
g_hat_l = g_hat[:half]
g_hat_r = g_hat[half:]
z_hat_l = z_hat[:half]
z_hat_r = z_hat[half:]
logger_cp.debug("Calculate A_i, B_i.")
A = pivot.vector_commitment(z_hat_l, int(L_tilde([0] * half + z_hat_l)), g_hat_r, k)
B = pivot.vector_commitment(z_hat_r, int(L_tilde(z_hat_r + [0] * half)), g_hat_l, k)
proof["A" + str(round_i)] = A
proof["B" + str(round_i)] = B
# Step 6: Prover calculates challenge
order = k.order
# Hash A, B and all public elements of Protocol 4
# input_list = [A, B, g_hat, k, Q, L_tilde]
if isinstance(A, EllipticCurveElement):
input_list = [A.to_affine(), B.to_affine(), g_hat, k, Q.to_affine(), L_tilde]
else:
input_list = [A, B, g_hat, k, Q, L_tilde]
logger_cp_hin.debug(
f"Method protocol_4_prover: Before fiat_shamir_hash, input_list=\n{input_list}"
)
c = pivot.fiat_shamir_hash(input_list, order)
logger_cp_hout.debug(f"After hash, hash=\n{c}")
# Step 7: Prover calculates following public values
logger_cp.debug("Calculate g_prime.")
g_prime = [(g_hat_l[i] ** c) * g_hat_r[i] for i in range(half)]
logger_cp.debug("Calculate Q_prime.")
Q_prime = A * (Q ** c) * (B ** (c ** 2))
assert (
L_tilde.constant == 0
), "Next line assumes L_tilde is a linear form, not affine form."
c_L_tilde_l_coeffs = [coeff * gf(c) for coeff in L_tilde.coeffs[:half]]
L_prime = pivot.LinearForm(c_L_tilde_l_coeffs) + pivot.LinearForm(
L_tilde.coeffs[half:]
)
# Step 8: Prover calculates z_prime and tests if recursion is required (if z' in Z or Z^2)
z_prime = [z_hat_l[i] + c * z_hat_r[i] for i in range(half)]
if len(z_prime) <= 2:
proof["z_prime"] = z_prime
return proof
else:
# Step 9: Prover sends z_prime and starts recursion of protocol 4
round_i += 1
proof = protocol_4_prover(
g_prime, k, Q_prime, L_prime, z_prime, gf, proof, round_i
)
return proof
def protocol_5_prover(generators, P, L, y, x, gamma, gf):
g = generators["g"]
h = generators["h"]
k = generators["k"]
proof = {}
# Public parameters
n = len(x)
L, y = pivot.affine_to_linear(L, y, n)
assert (
bin(n + 1).count("1") == 1
), "This implementation requires n+1 to be power of 2 (else, use padding with zeros)."
# Non-interactive proof
# Step 1: Prover calculates ("announcement") t, A
order = gf.order
r = list(prng.randrange(order) for i in range(n))
rho = prng.randrange(order)
logger_cp.debug("Calculate t.")
t = L(r)
logger_cp.debug("Calculate A.")
A = pivot.vector_commitment(r, rho, g, h)
proof["t"] = t
proof["A"] = A
# Step 2: Prover computes challenge
# input_list = [t, A, generators, P, L, y]
if isinstance(A, EllipticCurveElement):
input_list = [t, A.to_affine(), generators, P.to_affine(), L, y]
else:
input_list = [t, A, generators, P, L, y]
logger_cp_hin.debug(
f"Method protocol_5_prover: Before fiat_shamir_hash, input_list=\n{input_list}"
)
c0 = pivot.fiat_shamir_hash(
input_list + [0] + ["First hash of compressed pivot"], order
)
c1 = pivot.fiat_shamir_hash(
input_list + [1] + ["First hash of compressed pivot"], order
)
logger_cp_hout.debug(f"After hash, hash=\n{c0}, {c1}")
# Step 3: Prover calculates
z = [c0 * x_i + r[i] for i, x_i in enumerate(x)]
phi = gf(c0 * gamma + rho)
z_hat = z + [phi]
# Step 4: Prover calculates following public variables
g_hat = g + [h]
logger_cp.debug("Calculate Q.")
Q = A * (P ** c0) * (k ** int(c1 * (c0 * y + t)))
L_tilde = pivot.LinearForm(L.coeffs + [0]) * c1
assert L(z) * c1 == L_tilde(z_hat)
proof = protocol_4_prover(g_hat, k, Q, L_tilde, z_hat, gf, proof)
return proof
def protocol_4_verifier(g_hat, k, Q, L_tilde, gf, proof, round_i=0):
""" Non-interactive version of Protocol 4 from Section 4.2: Verifier's side
"""
# Step 5
half = len(g_hat) // 2
g_hat_l = g_hat[:half]
g_hat_r = g_hat[half:]
logger_cp.debug("Load from proof: A_i, B_i.")
A = proof["A" + str(round_i)]
B = proof["B" + str(round_i)]
# Step 6
order = k.order
# Hash A, B and all public elements of Protocol 4
# input_list = [A, B, g_hat, k, Q, L_tilde]
if isinstance(A, EllipticCurveElement):
input_list = [A.to_affine(), B.to_affine(), g_hat, k, Q.to_affine(), L_tilde]
else:
input_list = [A, B, g_hat, k, Q, L_tilde]
logger_cp_hin.debug(
f"Method protocol_4_verifier: Before fiat_shamir_hash, input_list=\n{input_list}"
)
c = pivot.fiat_shamir_hash(input_list, order)
logger_cp_hout.debug(f"After hash, hash=\n{c}")
# Step 7
logger_cp.debug("Calculate g_prime.")
g_prime = [(g_hat_l[i] ** c) * g_hat_r[i] for i in range(half)]
logger_cp.debug("Calculate Q_prime.")
Q_prime = A * (Q ** c) * (B ** (c ** 2))
assert (
L_tilde.constant == 0
), "Next line assumes L_tilde is a linear form, not affine form."
c_L_tilde_l_coeffs = [coeff * gf(c) for coeff in L_tilde.coeffs[:half]]
L_prime = pivot.LinearForm(c_L_tilde_l_coeffs) + pivot.LinearForm(
L_tilde.coeffs[half:]
)
# Step 8
if len(g_prime) <= 2:
z_prime = proof["z_prime"]
Q_check = pivot.vector_commitment(z_prime, int(L_prime(z_prime)), g_prime, k)
logger_cp.debug("Arrived in final step of protocol_4_verifier.")
logger_cp.debug(f"Q_check= {Q_check}")
logger_cp.debug(f"Q_prime= {Q_prime}")
verification = Q_check == Q_prime
return verification
else:
# Step 9
round_i += 1
return protocol_4_verifier(g_prime, k, Q_prime, L_prime, gf, proof, round_i)
def protocol_5_verifier(generators, P, L, y, proof, gf):
g = generators["g"]
h = generators["h"]
k = generators["k"]
order = gf.order
n = len(g)
L, y = pivot.affine_to_linear(L, y, n)
logger_cp.debug("Load from proof: t, A.")
t = proof["t"]
A = proof["A"]
# input_list = [t, A, generators, P, L, y]
if isinstance(A, EllipticCurveElement):
input_list = [t, A.to_affine(), generators, P.to_affine(), L, y]
else:
input_list = [t, A, generators, P, L, y]
logger_cp_hin.debug(
f"Method protocol_5_verifier: Before fiat_shamir_hash, input_list=\n{input_list}"
)
c0 = pivot.fiat_shamir_hash(
input_list + [0] + ["First hash of compressed pivot"], order
)
c1 = pivot.fiat_shamir_hash(
input_list + [1] + ["First hash of compressed pivot"], order
)
logger_cp_hout.debug(f"After hash, hash=\n{c0}, {c1}")
g_hat = g + [h]
logger_cp.debug("Calculate Q.")
Q = A * (P ** c0) * (k ** int(c1 * (c0 * y + t)))
L_tilde = pivot.LinearForm(L.coeffs + [0]) * c1
verification = protocol_4_verifier(g_hat, k, Q, L_tilde, gf, proof)
return verification
# TODO-list
# TODO1: Input y can be removed from inputs to protocol5 method.
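The compression in `protocol_4_prover`/`protocol_4_verifier` above comes from repeatedly folding the witness in half with z' = z_L + c·z_R until at most two entries remain, which is what makes the proof size logarithmic in len(z). A stripped-down sketch of just that folding recursion over integers mod a prime (no commitments or hashing — purely illustrative, not this module's API):

```python
def fold_witness(z, challenges, q):
    """Halve z once per challenge via z' = z_L + c * z_R (mod q).

    Stops, like protocol 4 above, once at most two entries remain.
    """
    rounds = 0
    for c in challenges:
        half = len(z) // 2
        z = [(z[i] + c * z[half + i]) % q for i in range(half)]
        rounds += 1
        if len(z) <= 2:
            break
    return z, rounds
```

For a length-8 witness this terminates after two folds, matching the `len(z_prime) <= 2` stopping rule in the code above.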
f7fbd0f6b49a9995ee4760d8ee58541fa28e487c | 720 | py | Python | config.py | hackersandslackers/flask-blueprint-tutorial | a082799815d7e19c6d7e8b3c85e109bbd32ca713 | ["MIT"] | 315 | 2020-02-18T10:41:38.000Z | 2022-03-24T04:01:07.000Z | config.py | Diego-MP/flask-blueprint-tutorial | a082799815d7e19c6d7e8b3c85e109bbd32ca713 | ["MIT"] | 138 | 2020-05-07T10:11:57.000Z | 2022-03-31T10:04:15.000Z | config.py | Diego-MP/flask-blueprint-tutorial | a082799815d7e19c6d7e8b3c85e109bbd32ca713 | ["MIT"] | 47 | 2020-02-18T22:29:54.000Z | 2022-03-09T05:58:41.000Z |
"""Class-based Flask app configuration."""
from os import environ, path
from dotenv import load_dotenv
basedir = path.abspath(path.dirname(__file__))
load_dotenv(path.join(basedir, ".env"))
class Config:
"""Configuration from environment variables."""
SECRET_KEY = environ.get("SECRET_KEY")
FLASK_ENV = environ.get("FLASK_ENV")
FLASK_APP = "wsgi.py"
# Flask-Assets
LESS_BIN = environ.get("LESS_BIN")
ASSETS_DEBUG = True
LESS_RUN_IN_DEBUG = True
# Static Assets
STATIC_FOLDER = "static"
TEMPLATES_FOLDER = "templates"
COMPRESSOR_DEBUG = True
# Datadog
DD_SERVICE = environ.get("DD_SERVICE")
# API
BEST_BUY_API_KEY = environ.get("BEST_BUY_API_KEY")
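`Config` above freezes settings at import time by reading `os.environ` in the class body (python-dotenv having populated it from `.env` first). The same pattern with only the stdlib, using a made-up variable name for illustration:

```python
import os

# Stand-in for a value python-dotenv would load from a .env file.
os.environ.setdefault("EXAMPLE_SECRET_KEY", "dev-only")

class ExampleConfig:
    """Settings are read once, when the class body executes."""
    SECRET_KEY = os.environ.get("EXAMPLE_SECRET_KEY")
    # Booleans need explicit parsing; environment values are always strings.
    ASSETS_DEBUG = os.environ.get("EXAMPLE_ASSETS_DEBUG", "false").lower() == "true"
```

A Flask app would then load such a class with `app.config.from_object(...)`; the class-attribute style is what makes that one-liner possible.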
f7fbd1e9b6412b7b6ae78cddd74c9f47b792fa4e | 127 | py | Python | traffic/core/logging.py | xabbu137/traffic | c82e859700a226ffaf04d7d782800e24fa4fc290 | ["MIT"] | 1 | 2020-01-30T08:21:07.000Z | 2020-01-30T08:21:07.000Z | traffic/core/logging.py | xabbu137/traffic | c82e859700a226ffaf04d7d782800e24fa4fc290 | ["MIT"] | null | null | null | traffic/core/logging.py | xabbu137/traffic | c82e859700a226ffaf04d7d782800e24fa4fc290 | ["MIT"] | 1 | 2020-03-13T00:31:03.000Z | 2020-03-13T00:31:03.000Z |
import logging
def loglevel(mode: str) -> None:
logger = logging.getLogger()
logger.setLevel(getattr(logging, mode))
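`loglevel` above resolves the level name with `getattr`, so an unknown name raises `AttributeError` (and a lowercase name such as `"debug"` fails the same way). A self-contained variant with a guard — a hypothetical helper for illustration, not part of the traffic library:

```python
import logging

def set_loglevel(mode: str) -> None:
    """Set the root logger level from a name such as 'DEBUG' or 'warning'."""
    level = getattr(logging, mode.upper(), None)
    if not isinstance(level, int):
        raise ValueError(f"unknown log level: {mode!r}")
    logging.getLogger().setLevel(level)
```

The `isinstance(level, int)` check matters because `logging` also exposes non-level attributes (e.g. `BASIC_FORMAT`) that `getattr` would happily return.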
f7fbd28978c17fad6f5e3ae95968055986a7a08b | 2,068 | py | Python | dfirtrack_config/tests/filter_forms/test_filter_forms.py | thomas-kropeit/dfirtrack | b1e0e659af7bc8085cfe2d269ddc651f9f4ba585 | ["Apache-2.0"] | null | null | null | dfirtrack_config/tests/filter_forms/test_filter_forms.py | thomas-kropeit/dfirtrack | b1e0e659af7bc8085cfe2d269ddc651f9f4ba585 | ["Apache-2.0"] | 6 | 2022-03-16T12:30:51.000Z | 2022-03-28T01:34:45.000Z | dfirtrack_config/tests/filter_forms/test_filter_forms.py | thomas-kropeit/dfirtrack | b1e0e659af7bc8085cfe2d269ddc651f9f4ba585 | ["Apache-2.0"] | null | null | null |
from django.test import TestCase
from dfirtrack_config.filter_forms import AssignmentFilterForm
class AssignmentFilterFormTestCase(TestCase):
"""assignment filter form tests"""
def test_case_form_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['case'].label, 'Filter for case')
def test_case_form_empty_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['case'].empty_label, 'Filter for case')
def test_tag_form_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['tag'].label, 'Filter for tag')
def test_tag_form_empty_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['tag'].empty_label, 'Filter for tag')
def test_user_form_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['user'].label, 'Filter for user')
def test_user_form_empty_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(form.fields['user'].empty_label, 'No user assigned')
def test_filter_assignment_view_keep_form_label(self):
"""test form label"""
# get object
form = AssignmentFilterForm()
# compare
self.assertEqual(
form.fields['filter_assignment_view_keep'].label,
'Remember filter settings (confirm by applying)',
)
def test_assignment_filter_form_empty(self):
"""test minimum form requirements / VALID"""
# get object
form = AssignmentFilterForm(data={})
# compare
self.assertTrue(form.is_valid())
f7fbd3776ed1e1f6443976d5b489ca6a36d0f901 | 321 | py | Python | test/benchmark/api_v2_0/__init__.py | nucklehead/RackHD | c21876e1138c19caeeaf5ef2e2a6c0236e780896 | ["Apache-2.0"] | null | null | null | test/benchmark/api_v2_0/__init__.py | nucklehead/RackHD | c21876e1138c19caeeaf5ef2e2a6c0236e780896 | ["Apache-2.0"] | null | null | null | test/benchmark/api_v2_0/__init__.py | nucklehead/RackHD | c21876e1138c19caeeaf5ef2e2a6c0236e780896 | ["Apache-2.0"] | null | null | null |
from poller_tests import BenchmarkPollerTests
from discovery_tests import BenchmarkDiscoveryTests
from bootstrap_tests import BenchmarkBootstrapTests
benchmark_poller_tests = [
'benchmark.poller'
]
benchmark_discovery_tests = [
'benchmark.discovery'
]
benchmark_bootstrap_tests = [
'benchmark.bootstrap'
]
f7fbd39c2503053d6e0e62a5d448563647642992 | 9,116 | py | Python | tests/test_auth.py | anlcnydn/graceful | d4678cb6349a5c843a5e58002fc80140821609e4 | ["BSD-3-Clause"] | 83 | 2015-06-18T18:08:50.000Z | 2021-05-21T06:12:46.000Z | tests/test_auth.py | anlcnydn/graceful | d4678cb6349a5c843a5e58002fc80140821609e4 | ["BSD-3-Clause"] | 59 | 2015-07-02T12:28:50.000Z | 2019-01-21T18:32:47.000Z | tests/test_auth.py | anlcnydn/graceful | d4678cb6349a5c843a5e58002fc80140821609e4 | ["BSD-3-Clause"] | 14 | 2015-08-26T22:43:47.000Z | 2020-04-17T11:00:37.000Z |
# -*- coding: utf-8 -*-
import base64
import pytest
import hashlib
from falcon.testing import TestBase
from falcon import API
from falcon import status_codes
from graceful.resources.base import BaseResource
from graceful import authentication
from graceful import authorization
@authorization.authentication_required
class ExampleResource(BaseResource, with_context=True):
def on_get(self, req, resp, **kwargs):
assert 'user' in req.context
class ExampleKVUserStorage(authentication.KeyValueUserStorage):
class SimpleKVStore(dict):
def set(self, key, value):
self[key] = value
def __init__(self, data=None):
super().__init__(self.SimpleKVStore(data or {}))
def clear(self):
self.kv_store.clear()
@ExampleKVUserStorage.hash_identifier.register(authentication.Basic)
def _(identified_with, identifier):
return ":".join([
identifier[0],
hashlib.sha1(identifier[1].encode()).hexdigest()
])
def test_default_kv_hashes_only_strings():
with pytest.raises(TypeError):
ExampleKVUserStorage.hash_identifier(None, [1, 2, 3, 4])
def test_invalid_basic_auth_realm():
with pytest.raises(ValueError):
authentication.Basic(realm="Impro=per realm%%% &")
@pytest.mark.parametrize(
"auth_class", [
authentication.Basic,
authentication.Token,
authentication.XAPIKey,
]
)
def test_auth_requires_storage(auth_class):
with pytest.raises(ValueError):
auth_class()
class AuthTestsMixin:
""" Test mixin that defines common routine for testing auth classes.
"""
class SkipTest(Exception):
"""Raised when given tests is marked to be skipped
Note: we use this exception instead of self.skipTest() method because
this has slightly different semantics. We simply don't want to report
these tests as skipped.
"""
route = '/foo/'
user = {
"username": "foo",
"details": "bar",
"password": "secretP4ssw0rd",
"allowed_ip": "127.100.100.1",
"allowed_remote": "127.0.0.1",
"token": "s3cr3t70ken",
'allowed_ip_range': ['127.100.100.1'],
}
ident_keys = ['password']
auth_storage = ExampleKVUserStorage()
auth_middleware = [authentication.Anonymous(user)]
def get_authorized_headers(self):
raise NotImplementedError
def get_invalid_headers(self):
raise NotImplementedError
def get_unauthorized_headers(self):
return {}
def setUp(self):
super().setUp()
self.api = API(middleware=self.auth_middleware)
self.api.add_route(self.route, ExampleResource())
self.auth_storage.clear()
identity = [self.user[key] for key in self.ident_keys]
self.auth_storage.register(
self.auth_middleware[0],
identity[0] if len(identity) == 1 else identity,
self.user
)
def test_unauthorized(self):
try:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=self.get_unauthorized_headers()
)
assert self.srmock.status == status_codes.HTTP_UNAUTHORIZED
except self.SkipTest:
pass
def test_authorized(self):
try:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=self.get_authorized_headers()
)
assert self.srmock.status == status_codes.HTTP_OK
except self.SkipTest:
pass
def test_bad_request(self):
try:
maybe_multiple_headers_sets = self.get_invalid_headers()
if isinstance(maybe_multiple_headers_sets, tuple):
header_sets = maybe_multiple_headers_sets
else:
header_sets = (maybe_multiple_headers_sets,)
for headers in header_sets:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=headers
)
assert self.srmock.status == status_codes.HTTP_BAD_REQUEST
except self.SkipTest:
pass
class AnonymousAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Anonymous(...)]
def get_authorized_headers(self):
return {}
def get_unauthorized_headers(self):
# note: Anonymous always authenticates the user.
raise self.SkipTest
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class BasicAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Basic(AuthTestsMixin.auth_storage)]
ident_keys = ['username', 'password']
def get_authorized_headers(self):
return {
"Authorization":
"Basic " + base64.b64encode(
":".join(
[self.user['username'], self.user['password']]
).encode()
).decode()
}
def get_invalid_headers(self):
return (
            # too many header tokens
{"Authorization": "Basic Basic Basic"},
            # not a valid base64 string
{"Authorization": "Basic nonbase64decoded"}
)
class TokenAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Token(AuthTestsMixin.auth_storage)]
ident_keys = ['token']
def get_authorized_headers(self):
return {"Authorization": "Token " + self.user['token']}
def get_invalid_headers(self):
return {"Authorization": "Token Token Token"}
class XAPIKeyAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.XAPIKey(AuthTestsMixin.auth_storage)]
ident_keys = ['token']
def get_authorized_headers(self):
return {"X-Api-Key": self.user['token']}
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class XForwardedForAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.XForwardedFor(AuthTestsMixin.auth_storage)
]
ident_keys = ['allowed_ip']
def get_authorized_headers(self):
return {"X-Forwarded-For": self.user['allowed_ip']}
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class XForwardedForWithoutStorageAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.XForwardedFor()]
ident_keys = ['allowed_ip']
def get_authorized_headers(self):
return {"X-Forwarded-For": self.user['allowed_ip']}
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class XForwardedForWithFallbackAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.XForwardedFor(remote_address_fallback=True)
]
ident_keys = ['allowed_remote']
def get_authorized_headers(self):
return {}
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class IPRangeXForwardedForAuthTestCase(AuthTestsMixin, TestBase):
class IPRangeWhitelistStorage(authentication.IPRangeWhitelistStorage):
"""Test compatible implementation of IPRangeWhitelistStorage.
This implementation simply extends the base class with
tests-compatible ``register()`` and ``clear()`` methods.
"""
def register(self, identified_with, identity, user):
self.ip_range = identity
self.user = user
def clear(self):
self.ip_range = []
self.user = None
auth_storage = IPRangeWhitelistStorage([], None)
auth_middleware = [
authentication.XForwardedFor(auth_storage)
]
ident_keys = ['allowed_ip_range']
def get_authorized_headers(self):
return {'X-Forwarded-For': self.user['allowed_ip_range'][0]}
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
        # note: it is not possible to have an invalid header for this auth.
raise self.SkipTest
class MultipleAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.Token(AuthTestsMixin.auth_storage),
authentication.Anonymous(...),
authentication.Basic(AuthTestsMixin.auth_storage),
]
ident_keys = ["token"]
def get_unauthorized_headers(self):
# note: Anonymous will always authenticate the user as a fallback auth
raise self.SkipTest
def get_invalid_headers(self):
        # this is an invalid header for basic authentication
return {"Authorization": "Token Basic Basic"}
def get_authorized_headers(self):
return {"Authorization": "Token " + self.user['password']}
| 29.501618 | 78 | 0.65204 |
import base64
import pytest
import hashlib
from falcon.testing import TestBase
from falcon import API
from falcon import status_codes
from graceful.resources.base import BaseResource
from graceful import authentication
from graceful import authorization
@authorization.authentication_required
class ExampleResource(BaseResource, with_context=True):
def on_get(self, req, resp, **kwargs):
assert 'user' in req.context
class ExampleKVUserStorage(authentication.KeyValueUserStorage):
class SimpleKVStore(dict):
def set(self, key, value):
self[key] = value
def __init__(self, data=None):
super().__init__(self.SimpleKVStore(data or {}))
def clear(self):
self.kv_store.clear()
@ExampleKVUserStorage.hash_identifier.register(authentication.Basic)
def _(identified_with, identifier):
return ":".join([
identifier[0],
hashlib.sha1(identifier[1].encode()).hexdigest()
])
def test_default_kv_hashes_only_strings():
with pytest.raises(TypeError):
ExampleKVUserStorage.hash_identifier(None, [1, 2, 3, 4])
def test_invalid_basic_auth_realm():
with pytest.raises(ValueError):
authentication.Basic(realm="Impro=per realm%%% &")
@pytest.mark.parametrize(
"auth_class", [
authentication.Basic,
authentication.Token,
authentication.XAPIKey,
]
)
def test_auth_requires_storage(auth_class):
with pytest.raises(ValueError):
auth_class()
class AuthTestsMixin:
class SkipTest(Exception):
route = '/foo/'
user = {
"username": "foo",
"details": "bar",
"password": "secretP4ssw0rd",
"allowed_ip": "127.100.100.1",
"allowed_remote": "127.0.0.1",
"token": "s3cr3t70ken",
'allowed_ip_range': ['127.100.100.1'],
}
ident_keys = ['password']
auth_storage = ExampleKVUserStorage()
auth_middleware = [authentication.Anonymous(user)]
def get_authorized_headers(self):
raise NotImplementedError
def get_invalid_headers(self):
raise NotImplementedError
def get_unauthorized_headers(self):
return {}
def setUp(self):
super().setUp()
self.api = API(middleware=self.auth_middleware)
self.api.add_route(self.route, ExampleResource())
self.auth_storage.clear()
identity = [self.user[key] for key in self.ident_keys]
self.auth_storage.register(
self.auth_middleware[0],
identity[0] if len(identity) == 1 else identity,
self.user
)
def test_unauthorized(self):
try:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=self.get_unauthorized_headers()
)
assert self.srmock.status == status_codes.HTTP_UNAUTHORIZED
except self.SkipTest:
pass
def test_authorized(self):
try:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=self.get_authorized_headers()
)
assert self.srmock.status == status_codes.HTTP_OK
except self.SkipTest:
pass
def test_bad_request(self):
try:
maybe_multiple_headers_sets = self.get_invalid_headers()
if isinstance(maybe_multiple_headers_sets, tuple):
header_sets = maybe_multiple_headers_sets
else:
header_sets = (maybe_multiple_headers_sets,)
for headers in header_sets:
self.simulate_request(
self.route, decode='utf-8', method='GET',
headers=headers
)
assert self.srmock.status == status_codes.HTTP_BAD_REQUEST
except self.SkipTest:
pass
class AnonymousAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Anonymous(...)]
def get_authorized_headers(self):
return {}
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
raise self.SkipTest
class BasicAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Basic(AuthTestsMixin.auth_storage)]
ident_keys = ['username', 'password']
def get_authorized_headers(self):
return {
"Authorization":
"Basic " + base64.b64encode(
":".join(
[self.user['username'], self.user['password']]
).encode()
).decode()
}
def get_invalid_headers(self):
return (
{"Authorization": "Basic Basic Basic"},
{"Authorization": "Basic nonbase64decoded"}
)
class TokenAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.Token(AuthTestsMixin.auth_storage)]
ident_keys = ['token']
def get_authorized_headers(self):
return {"Authorization": "Token " + self.user['token']}
def get_invalid_headers(self):
return {"Authorization": "Token Token Token"}
class XAPIKeyAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.XAPIKey(AuthTestsMixin.auth_storage)]
ident_keys = ['token']
def get_authorized_headers(self):
return {"X-Api-Key": self.user['token']}
def get_invalid_headers(self):
raise self.SkipTest
class XForwardedForAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.XForwardedFor(AuthTestsMixin.auth_storage)
]
ident_keys = ['allowed_ip']
def get_authorized_headers(self):
return {"X-Forwarded-For": self.user['allowed_ip']}
def get_invalid_headers(self):
raise self.SkipTest
class XForwardedForWithoutStorageAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [authentication.XForwardedFor()]
ident_keys = ['allowed_ip']
def get_authorized_headers(self):
return {"X-Forwarded-For": self.user['allowed_ip']}
def get_invalid_headers(self):
raise self.SkipTest
class XForwardedForWithFallbackAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.XForwardedFor(remote_address_fallback=True)
]
ident_keys = ['allowed_remote']
def get_authorized_headers(self):
return {}
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
raise self.SkipTest
class IPRangeXForwardedForAuthTestCase(AuthTestsMixin, TestBase):
class IPRangeWhitelistStorage(authentication.IPRangeWhitelistStorage):
def register(self, identified_with, identity, user):
self.ip_range = identity
self.user = user
def clear(self):
self.ip_range = []
self.user = None
auth_storage = IPRangeWhitelistStorage([], None)
auth_middleware = [
authentication.XForwardedFor(auth_storage)
]
ident_keys = ['allowed_ip_range']
def get_authorized_headers(self):
return {'X-Forwarded-For': self.user['allowed_ip_range'][0]}
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
raise self.SkipTest
class MultipleAuthTestCase(AuthTestsMixin, TestBase):
auth_middleware = [
authentication.Token(AuthTestsMixin.auth_storage),
authentication.Anonymous(...),
authentication.Basic(AuthTestsMixin.auth_storage),
]
ident_keys = ["token"]
def get_unauthorized_headers(self):
raise self.SkipTest
def get_invalid_headers(self):
return {"Authorization": "Token Basic Basic"}
def get_authorized_headers(self):
return {"Authorization": "Token " + self.user['password']}
| true | true |
f7fbd472d8bcfc731ae2f04f592f284801beae5f | 411 | py | Python | test/face_eye_detection.py | jeffskinnerbox/people-counter | f4381ee82df11f30d98673d0bba403c3350a8463 | [
"Unlicense",
"MIT"
] | 64 | 2017-10-19T11:46:14.000Z | 2021-11-10T03:40:38.000Z | test/face_eye_detection.py | mathewspjacob/people-counter | f4381ee82df11f30d98673d0bba403c3350a8463 | [
"Unlicense",
"MIT"
] | 1 | 2020-05-14T07:56:48.000Z | 2020-05-14T07:56:48.000Z | test/face_eye_detection.py | jeffskinnerbox/people-counter | f4381ee82df11f30d98673d0bba403c3350a8463 | [
"Unlicense",
"MIT"
] | 27 | 2018-03-12T15:50:12.000Z | 2021-03-29T16:32:53.000Z | #!/usr/bin/python3
#
# Maintainer: jeffskinnerbox@yahoo.com / www.jeffskinnerbox.me
# Version: 0.3.0
#
# Haar Cascade Object Detection Face & Eye OpenCV Python Tutorial - https://pythonprogramming.net/haar-cascade-face-eye-detection-python-opencv-tutorial/
# Creating your own Haar Cascade OpenCV Python Tutorial - https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/
| 31.615385 | 153 | 0.768856 | true | true | |
f7fbd5747006c092c841e0ff277182958285676a | 21,374 | py | Python | One-GORD/Ours-o/model.py | zju-vipa/One-GORD | 9f3846949d9d7de5b21caaad1f8696f4ce257b55 | [
"Apache-2.0"
] | 3 | 2020-12-23T08:54:00.000Z | 2021-04-12T08:39:12.000Z | One-GORD/Ours-o/model.py | HymEric/One-GORD | a63ecaf2b9538789ca0e761d55608a28d7194c4d | [
"Apache-2.0"
] | null | null | null | One-GORD/Ours-o/model.py | HymEric/One-GORD | a63ecaf2b9538789ca0e761d55608a28d7194c4d | [
"Apache-2.0"
] | 2 | 2021-01-03T06:26:31.000Z | 2021-04-12T08:39:15.000Z | import os, sys
import time
import re
import numpy as np
import tensorflow as tf
import sys
sys.path.append('../')
import lib.models as lib
from lib.models import params_with_name
from lib.models.save_images import save_images
from lib.models.distributions import Bernoulli, Gaussian, Product
from lib.models.nets_32x32_small import NetsRetreiver, NetsRetreiverWithClassifier
TINY = 1e-8
SEED = 123
ch=1
CRITIC_ITERS = 5 # For WGAN and WGAN-GP, number of critic iters per gen iter
class DIAE(object):
def __init__(self, session, arch,lr,alpha,beta,latent_dim,latent_num,class_net_unit_num,output_dim, batch_size, image_shape, exp_name, dirs,
vis_reconst):
"""
:type output_dist: Distribution
:type z_dist: Gaussian
"""
self.session = session
self.arch = arch
self.lr=lr
self.alpha=alpha
self.beta=beta
self.latent_dim=latent_dim
self.latent_num=latent_num
self.class_net_unit_num=class_net_unit_num
self.output_dim=output_dim
self.batch_size = batch_size
self.image_shape = image_shape
self.exp_name = exp_name
self.dirs = dirs
self.vis_reconst = vis_reconst
self.__build_graph()
def __build_graph(self):
tf.set_random_seed(SEED)
np.random.seed(SEED)
self.is_training = tf.placeholder(tf.bool)
self.x1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux1_mask = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
# auxilary dataset
self.aux1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux2 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux_GT1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux_GT2 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.class_gt0 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt1 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt2 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt3 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt4 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt5 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt6 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt7 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt8 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt9 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt10 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
        # one-sample labels
self.aux_class_gt=tf.placeholder(tf.float32,shape=[None]+list([self.latent_num+9]))
# Normalize + reshape 'real' input data
norm_x1 = 2*(tf.cast(self.x1, tf.float32)-.5)
norm_aux1 = 2*(tf.cast(self.aux1, tf.float32)-.5)
norm_aux2 = 2*(tf.cast(self.aux2, tf.float32)-.5)
norm_aux_GT1 = 2*(tf.cast(self.aux_GT1, tf.float32)-.5)
norm_aux_GT2 = 2*(tf.cast(self.aux_GT2, tf.float32)-.5)
# Set Encoder and Decoder archs
self.Encoder, self.Decoder,self.Classifier,self.gan_discriminator = NetsRetreiverWithClassifier(self.arch)
# Encode and decode
self.z1 = self.__Enc(norm_x1)
self.x1_out = self.__Dec(self.z1)
# aux data
self.aux_z1 = self.__Enc(norm_aux1)
self.aux1_out = self.__Dec(self.aux_z1)
self.aux_z2 = self.__Enc(norm_aux2)
self.aux2_out = self.__Dec(self.aux_z2)
aux1_head,aux1_bg=tf.split(self.aux_z1,2,axis=1)
aux2_head,aux2_bg=tf.split(self.aux_z2,2,axis=1)
GT1_z=tf.concat([aux2_head,aux1_bg],axis=1)
GT2_z=tf.concat([aux1_head,aux2_bg],axis=1)
self.GT1_out = self.__Dec(GT1_z)
self.GT2_out = self.__Dec(GT2_z)
#dual swap
x1_head,x1_bg=tf.split(self.z1,2,axis=1)
self.mix_head_out=self.__Dec(tf.concat([aux1_head,x1_bg],axis=1))
mix_head,mix_bg=tf.split(self.__Enc(self.mix_head_out),2,axis=1)
x1_dual_out=self.__Dec(tf.concat([x1_head,mix_bg],axis=1))
self.aux1_mix_head_out=self.__Dec(tf.concat([x1_head,aux1_bg],axis=1))
# classification loss
## for x1
r_part1, r_part2 = tf.split(self.z1, 2, axis=1)
c_p0 = self.__Classifier(r_part1)
c_p1 = self.__Classifier(r_part2)
## for aux1
aux1_r_part1, aux1_r_part2 = tf.split(self.aux_z1, 2, axis=1)
aux1_c_p0 = self.__Classifier(aux1_r_part1)
aux1_c_p1 = self.__Classifier(aux1_r_part2)
## for aux2
aux2_r_part1, aux2_r_part2 = tf.split(self.aux_z2, 2, axis=1)
aux2_c_p0 = self.__Classifier(aux2_r_part1)
aux2_c_p1 = self.__Classifier(aux2_r_part2)
# Loss and optimizer
self.__prep_loss_optimizer(norm_x1,norm_aux1,norm_aux2,norm_aux_GT1,norm_aux_GT2,x1_dual_out,c_p0,c_p1,aux1_c_p0,aux1_c_p1,aux2_c_p0,aux2_c_p1)
def __Enc(self, x):
#resnet_encoder(name, inputs, n_channels, latent_dim, is_training, mode=None, nonlinearity=tf.nn.relu):
z= self.Encoder('Encoder', x, self.image_shape[0], self.latent_dim,self.is_training)
return z
def __Dec(self, z):
x_out_logit = self.Decoder('Decoder', z, self.image_shape[0], self.is_training)
x_out = tf.tanh(x_out_logit)
return x_out
def __Classifier(self,z):
x_out= self.Classifier('Classifier', z, self.class_net_unit_num,self.latent_num+9, self.is_training)
x_out = tf.nn.softmax(x_out)
return x_out
def __prep_loss_optimizer(self,norm_x1,norm_aux1,norm_aux2,norm_aux_GT1,norm_aux_GT2,x1_dual_out,c_p0,c_p1,aux1_c_p0,aux1_c_p1,aux2_c_p0,aux2_c_p1):
norm_x1= tf.reshape(norm_x1, [-1, self.output_dim])
norm_aux1= tf.reshape(norm_aux1, [-1, self.output_dim])
norm_aux2= tf.reshape(norm_aux2, [-1, self.output_dim])
norm_aux_GT1= tf.reshape(norm_aux_GT1, [-1, self.output_dim])
norm_aux_GT2= tf.reshape(norm_aux_GT2, [-1, self.output_dim])
        # [Loss 1] dual unsupervised image reconstruction loss
self.rec_img_loss1 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_x1 -self.x1_out), axis=1))
self.rec_aux1_loss2 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux1 -self.aux1_out), axis=1))
self.rec_aux2_loss3 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux2 -self.aux2_out), axis=1))
#swap loss
self.rec_aux1_swap_loss4 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux_GT1 -self.GT1_out), axis=1))
self.rec_aux2_swap_loss5 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux_GT2 -self.GT2_out), axis=1))
# dual swap loss
self.rec_dual_loss6 = tf.reduce_mean(tf.reduce_sum(tf.square(x1_dual_out -self.x1_out), axis=1))
# # head loss
# # segment head and do head loss with mask
# x1_out_img = tf.reshape(self.mix_head_out, shape=[-1] + self.image_shape) # to img tensor
# x1_out_head = tf.multiply(x1_out_img, self.aux1_mask)
# norm_aux1_img = tf.reshape(norm_aux1, shape=[-1] + self.image_shape) # to img tensor
# aux1_head = tf.multiply(norm_aux1_img, self.aux1_mask)
# self.head_loss = tf.reduce_mean(tf.reduce_sum(tf.square(x1_out_head - aux1_head), axis=1))
# classification loss
# temp_1=self.vec_gt-tf.reduce_sum((self.class_gt1-self.class_gt1*c_p1),1)*tf.reduce_sum((self.class_gt2-self.class_gt2*c_p1),1)
# self.class1_loss=-tf.reduce_mean(tf.log(temp_1))
temp=1-tf.reduce_sum((self.class_gt0-self.class_gt0*c_p0),1)*tf.reduce_sum((self.class_gt1-self.class_gt1*c_p0),1)*tf.reduce_sum((self.class_gt2-self.class_gt2*c_p0),1)*tf.reduce_sum((self.class_gt3-self.class_gt3*c_p0),1)*\
tf.reduce_sum((self.class_gt4-self.class_gt4*c_p0),1)*tf.reduce_sum((self.class_gt5-self.class_gt5*c_p0),1)*tf.reduce_sum((self.class_gt6-self.class_gt6*c_p0),1)*tf.reduce_sum((self.class_gt7-self.class_gt7*c_p0),1)*tf.reduce_sum((self.class_gt8-self.class_gt8*c_p0),1)*\
tf.reduce_sum((self.class_gt9-self.class_gt9*c_p0),1)
self.fuzzy_class_loss = -tf.reduce_mean(tf.log(temp))
self.fuzzy_bg_class_loss= -tf.reduce_mean(self.class_gt10 * tf.log(c_p1))
# class_loss1 = -tf.reduce_mean(self.class_gt1 * tf.log(c_p1))
self.aux1_class_loss = -tf.reduce_mean(self.aux_class_gt * tf.log(aux1_c_p0))
self.aux1_bg_class_loss = -tf.reduce_mean(self.class_gt10 * tf.log(aux1_c_p1)) # [0,0,0,1]
self.aux2_class_loss = -tf.reduce_mean(self.aux_class_gt * tf.log(aux2_c_p0))
self.aux2_bg_class_loss = - tf.reduce_mean(self.class_gt10 * tf.log(aux2_c_p1))
self.class_loss = self.fuzzy_class_loss+self.fuzzy_bg_class_loss+self.aux1_class_loss+ self.aux1_bg_class_loss+self.aux2_class_loss+ self.aux2_bg_class_loss
self.loss=2*self.rec_img_loss1+self.rec_aux1_loss2+self.rec_aux2_loss3+2*self.rec_aux1_swap_loss4+2*self.rec_aux2_swap_loss5+self.rec_dual_loss6+5*self.class_loss
lr=self.lr
self.optimizer = tf.train.AdamOptimizer(learning_rate=lr, beta1=0., beta2=0.9).minimize(self.loss)
print('Learning rate=')
print(lr)
def load(self):
#self.saver = tf.train.Saver()
self.saver = tf.train.Saver(max_to_keep=3760)
ckpt = tf.train.get_checkpoint_state(self.dirs['ckpt'])
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = ckpt.model_checkpoint_path
self.saver.restore(self.session, ckpt_name)
print("Checkpoint restored: {0}".format(ckpt_name))
            prev_step = int(next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name)).group(0))
print('prev_step=')
print(prev_step)
else:
print("Failed to find checkpoint.")
prev_step = 0
sys.stdout.flush()
return prev_step + 1
def load_fixedNum(self,inter_num):
#self.saver = tf.train.Saver()
self.saver = tf.train.Saver(max_to_keep=3760)
ckpt = tf.train.get_checkpoint_state(self.dirs['ckpt'])
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = ckpt.model_checkpoint_path
ckpt_name_prefix=ckpt_name.split('-')[0]
ckpt_name_new=ckpt_name_prefix+'-'+str(inter_num)
self.saver.restore(self.session, ckpt_name_new)
print("Checkpoint restored: {0}".format(ckpt_name_new))
            prev_step = int(next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name_new)).group(0))
print('prev_step=')
print(prev_step)
else:
print("Failed to find checkpoint.")
prev_step = 0
sys.stdout.flush()
return prev_step + 1
def train(self, n_iters, n_iters_per_epoch, stats_iters, ckpt_interval):
        # counter for saving loss records
count=0
self.session.run(tf.global_variables_initializer())
# Fixed GT samples - save
fixed_x1,fixed_mask_1,_= next(self.train_iter1)
fixed_x2,fixed_mask_2,_= next(self.train_iter2)
fixed_aux1,fixed_GT1,_= next(self.train_iter3)
fixed_aux2,fixed_GT2,_= next(self.train_iter4)
fixed_x1= self.session.run(tf.constant(fixed_x1))
fixed_mask_1= self.session.run(tf.constant(fixed_mask_1))
#fixed_x1 = ((fixed_x1+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
save_images(fixed_x1, os.path.join(self.dirs['samples'], 'samples_1_groundtruth.png'))
##save_images(fixed_mask_1, os.path.join(self.dirs['samples'], 'mask_1_groundtruth.png'))
fixed_aux1= self.session.run(tf.constant(fixed_aux1))
fixed_aux2= self.session.run(tf.constant(fixed_aux2))
#fixed_aux1 = ((fixed_aux1+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
#fixed_aux2 = ((fixed_aux2+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
save_images(fixed_aux1, os.path.join(self.dirs['samples'], 'aux_1_groundtruth.png'))
save_images(fixed_aux2, os.path.join(self.dirs['samples'], 'aux_2_groundtruth.png'))
#
start_iter = self.load()
running_cost = 0.
class_gt0, class_gt1,class_gt2,class_gt3,class_gt4,class_gt5, class_gt6, class_gt7, class_gt8,class_gt9,class_gt10 = self.generateClassificationLabel(self.batch_size)
_gan_data=fixed_x1
logs=open('loss_records.txt','w')
for iteration in range(start_iter, n_iters):
start_time = time.time()
_data1,_mask1, _ = next(self.train_iter1)
_aux_label, _, _ = next(self.train_iter2)
_aux1,_gt1, _ = next(self.train_iter3)
_aux2,_gt2, _ = next(self.train_iter4)
_, cost = self.session.run((self.optimizer, self.loss),feed_dict={self.x1:_data1,self.aux_class_gt:_aux_label,self.aux1_mask:_mask1,self.aux1:_aux1,self.aux2:_aux2,self.aux_GT1:_gt1,self.aux_GT2:_gt2,self.is_training:True,self.class_gt0:class_gt0,self.class_gt1:class_gt1,self.class_gt2:class_gt2,self.class_gt3:class_gt3,
self.class_gt4:class_gt4, self.class_gt5:class_gt5, self.class_gt6:class_gt6,self.class_gt7:class_gt7, self.class_gt8:class_gt8, self.class_gt9:class_gt9, self.class_gt10:class_gt10})
running_cost += cost
if iteration % n_iters_per_epoch == 1:
print("Epoch: {0}".format(iteration // n_iters_per_epoch))
# Print avg stats and dev set stats
if (iteration < start_iter + 4) or iteration % stats_iters == 0:
t = time.time()
dev_data1,dev_mask1, _ = next(self.dev_iter1)
dev_aux1,dev_gt1, _ = next(self.dev_iter3)
dev_aux2,dev_gt2, _ = next(self.dev_iter4)
dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss= \
self.session.run([self.loss,self.rec_img_loss1,self.rec_aux1_loss2,self.rec_aux2_loss3,self.rec_aux1_swap_loss4,self.rec_aux2_swap_loss5,self.rec_dual_loss6,self.class_loss,self.fuzzy_class_loss,self.aux1_class_loss,self.aux2_class_loss,self.fuzzy_bg_class_loss,self.aux1_bg_class_loss,self.aux2_bg_class_loss],
feed_dict={self.x1:dev_data1,self.aux1_mask:dev_mask1,self.aux1:dev_aux1,self.aux2:dev_aux2,self.aux_GT1:dev_gt1,self.aux_GT2:dev_gt2,self.is_training:False,
self.class_gt0: class_gt0, self.class_gt1: class_gt1,self.class_gt2: class_gt2,self.class_gt3:class_gt3,self.aux_class_gt:_aux_label,self.class_gt4:class_gt4, self.class_gt5:class_gt5, self.class_gt6:class_gt6,self.class_gt7:class_gt7, self.class_gt8:class_gt8, self.class_gt9:class_gt9, self.class_gt10:class_gt10})
n_samples = 1. if (iteration < start_iter + 4) else float(stats_iters)
avg_cost = running_cost / n_samples
running_cost = 0.
print("Iteration:{0} \t| Train cost:{1:.1f} \t| Dev cost: {2:.1f}(img1_loss:{3:.1f},aux1_loss2:{4:.1f},aux2_loss3:{5:.1f},aux1_swap_loss:{6:.1f},aux2_swap_loss:{7:.1f},dual_swap_loss:{8:.1f},class loss:{9:.1f}(fuzzy_class:{10:.1f},aux1_class_loss:{11:.1f},aux2_class_loss:{12:.1f},fuzzy_bg_class_loss:{13:.1f},aux1_bg_class_loss:{14:.1f},aux2_bg_class_loss:{15:.1f}))".
format(iteration, avg_cost, dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss))
logs.writelines("Iteration:{0} \t| Train cost:{1:.1f} \t| Dev cost: {2:.1f}(img1_loss:{3:.1f},aux1_loss2:{4:.1f},aux2_loss3:{5:.1f},aux1_swap_loss:{6:.1f},aux2_swap_loss:{7:.1f},dual_swap_loss:{8:.1f},class loss:{9:.1f}(fuzzy_class:{10:.1f},aux1_class_loss:{11:.1f},aux2_class_loss:{12:.1f},fuzzy_bg_class_loss:{13:.1f},aux1_bg_class_loss:{14:.1f},aux2_bg_class_loss:{15:.1f}))\n".
format(iteration, avg_cost, dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss))
#print(avg_cost)
#print(dev_loss)
#print("Iteration:{0} \t| Train cost:{1:.1f} \t| Dev cost: {2:.1f}".format(iteration, avg_cost, dev_loss))
count=count+1
if self.vis_reconst:
self.visualise_reconstruction(fixed_x1,fixed_aux1,fixed_aux2,iteration)
#self.visualise_reconstruction(img_zero,fixed_mk1,iteration)
if np.any(np.isnan(avg_cost)):
raise ValueError("NaN detected!")
# save checkpoint
if (iteration > start_iter) and iteration % (ckpt_interval) == 0:
self.saver.save(self.session, os.path.join(self.dirs['ckpt'], self.exp_name), global_step=iteration)
_gan_data=_data1
logs.close()
        # for saving the loss records
#np.save('logArray.npy',logArray)
def reconstruct(self, X1, aux1,aux2, is_training=False):
""" Reconstruct data. """
return self.session.run([self.x1_out,self.mix_head_out,self.aux1_mix_head_out],
feed_dict={self.x1: X1,self.aux1:aux1,self.aux2:aux2,self.is_training: is_training})
def visualise_reconstruction(self, X1, aux1,aux2,iteration):
X_out1,mix_head_out,aux1_mix_head_out= self.reconstruct(X1, aux1,aux2)
# print(X_out1.shape)
X1 = ((X_out1+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
mix_head_out = ((mix_head_out+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
aux1_mix_head_out = ((aux1_mix_head_out+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
save_images(X1, os.path.join(self.dirs['samples'], str(iteration)+'samples_1_rec.png'))
save_images(mix_head_out, os.path.join(self.dirs['samples'], str(iteration)+'X1bg_aux1head.png'))
save_images(aux1_mix_head_out, os.path.join(self.dirs['samples'], str(iteration)+'X1head_aux1bg.png'))
def generateClassificationLabel(self, batch_size):
        # ============== generate one-hot classification labels ==============
class_num = 11
class_gt1 = np.zeros((batch_size, class_num))
class_gt2 = np.zeros((batch_size, class_num))
class_gt3 = np.zeros((batch_size, class_num))
class_gt4 = np.zeros((batch_size, class_num))
class_gt5 = np.zeros((batch_size, class_num))
class_gt6 = np.zeros((batch_size, class_num))
class_gt7 = np.zeros((batch_size, class_num))
class_gt8 = np.zeros((batch_size, class_num))
class_gt9 = np.zeros((batch_size, class_num))
class_gt10 = np.zeros((batch_size, class_num))
class_gt11 = np.zeros((batch_size, class_num))
for i in range(batch_size):
class_gt1[i, 0] = 1
class_gt2[i, 1] = 1
class_gt3[i, 2] = 1
class_gt4[i, 3] = 1
class_gt5[i, 4] = 1
class_gt6[i, 5] = 1
class_gt7[i, 6] = 1
class_gt8[i, 7] = 1
class_gt9[i, 8] = 1
class_gt10[i, 9] = 1
class_gt11[i, 10] = 1
return class_gt1, class_gt2, class_gt3, class_gt4,class_gt5, class_gt6, class_gt7, class_gt8,class_gt9, class_gt10,class_gt11
def getCodesAndImgs(self, pathForSave, X1, k, is_training=False):
z1, X_r0 = self.session.run([self.z1, self.x1_out],
feed_dict={self.x1: X1, self.is_training: is_training})
ImageNorm0_1 = ((X_r0 + 1.) * (1.00 / 2)).astype('double').reshape(
[-1, self.image_shape[1], self.image_shape[2], self.image_shape[0]])
        # visualize the first result to validate its effectiveness
if k == 1:
X_save = ((X_r0 + 1.) * (255.99 / 2)).astype('int32').reshape([-1] + self.image_shape)
save_images(X_save, os.path.join(pathForSave, 'iter' + str(k) + '_samples_reconstructed.png'))
return z1, ImageNorm0_1
| 59.207756 | 398 | 0.649434 | import os, sys
import time
import re
import numpy as np
import tensorflow as tf
import sys
sys.path.append('../')
import lib.models as lib
from lib.models import params_with_name
from lib.models.save_images import save_images
from lib.models.distributions import Bernoulli, Gaussian, Product
from lib.models.nets_32x32_small import NetsRetreiver, NetsRetreiverWithClassifier
TINY = 1e-8
SEED = 123
ch=1
CRITIC_ITERS = 5
class DIAE(object):
def __init__(self, session, arch,lr,alpha,beta,latent_dim,latent_num,class_net_unit_num,output_dim, batch_size, image_shape, exp_name, dirs,
vis_reconst):
self.session = session
self.arch = arch
self.lr=lr
self.alpha=alpha
self.beta=beta
self.latent_dim=latent_dim
self.latent_num=latent_num
self.class_net_unit_num=class_net_unit_num
self.output_dim=output_dim
self.batch_size = batch_size
self.image_shape = image_shape
self.exp_name = exp_name
self.dirs = dirs
self.vis_reconst = vis_reconst
self.__build_graph()
def __build_graph(self):
tf.set_random_seed(SEED)
np.random.seed(SEED)
self.is_training = tf.placeholder(tf.bool)
self.x1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux1_mask = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux2 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux_GT1 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.aux_GT2 = tf.placeholder(tf.float32, shape=[None] + list(self.image_shape))
self.class_gt0 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt1 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt2 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt3 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num+9]))
self.class_gt4 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt5 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt6 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt7 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt8 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt9 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.class_gt10 = tf.placeholder(tf.float32, shape=[None] + list([self.latent_num + 9]))
self.aux_class_gt=tf.placeholder(tf.float32,shape=[None]+list([self.latent_num+9]))
norm_x1 = 2*(tf.cast(self.x1, tf.float32)-.5)
norm_aux1 = 2*(tf.cast(self.aux1, tf.float32)-.5)
norm_aux2 = 2*(tf.cast(self.aux2, tf.float32)-.5)
norm_aux_GT1 = 2*(tf.cast(self.aux_GT1, tf.float32)-.5)
norm_aux_GT2 = 2*(tf.cast(self.aux_GT2, tf.float32)-.5)
self.Encoder, self.Decoder,self.Classifier,self.gan_discriminator = NetsRetreiverWithClassifier(self.arch)
self.z1 = self.__Enc(norm_x1)
self.x1_out = self.__Dec(self.z1)
self.aux_z1 = self.__Enc(norm_aux1)
self.aux1_out = self.__Dec(self.aux_z1)
self.aux_z2 = self.__Enc(norm_aux2)
self.aux2_out = self.__Dec(self.aux_z2)
aux1_head,aux1_bg=tf.split(self.aux_z1,2,axis=1)
aux2_head,aux2_bg=tf.split(self.aux_z2,2,axis=1)
GT1_z=tf.concat([aux2_head,aux1_bg],axis=1)
GT2_z=tf.concat([aux1_head,aux2_bg],axis=1)
self.GT1_out = self.__Dec(GT1_z)
self.GT2_out = self.__Dec(GT2_z)
x1_head,x1_bg=tf.split(self.z1,2,axis=1)
self.mix_head_out=self.__Dec(tf.concat([aux1_head,x1_bg],axis=1))
mix_head,mix_bg=tf.split(self.__Enc(self.mix_head_out),2,axis=1)
x1_dual_out=self.__Dec(tf.concat([x1_head,mix_bg],axis=1))
self.aux1_mix_head_out=self.__Dec(tf.concat([x1_head,aux1_bg],axis=1))
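The swaps above split each latent code into a head half and a background half, then recombine halves across images. The same recombination as a NumPy sketch (function name hypothetical):

```python
import numpy as np

def swap_background(z_a, z_b):
    """Keep the head half of z_a, take the background half of z_b."""
    head_a, _ = np.split(z_a, 2, axis=1)
    _, bg_b = np.split(z_b, 2, axis=1)
    return np.concatenate([head_a, bg_b], axis=1)
```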
r_part1, r_part2 = tf.split(self.z1, 2, axis=1)
c_p0 = self.__Classifier(r_part1)
c_p1 = self.__Classifier(r_part2)
        aux1_r_part1, aux1_r_part2 = tf.split(self.aux_z1, 2, axis=1)
aux1_c_p0 = self.__Classifier(aux1_r_part1)
aux1_c_p1 = self.__Classifier(aux1_r_part2)
        aux2_r_part1, aux2_r_part2 = tf.split(self.aux_z2, 2, axis=1)
aux2_c_p0 = self.__Classifier(aux2_r_part1)
aux2_c_p1 = self.__Classifier(aux2_r_part2)
self.__prep_loss_optimizer(norm_x1,norm_aux1,norm_aux2,norm_aux_GT1,norm_aux_GT2,x1_dual_out,c_p0,c_p1,aux1_c_p0,aux1_c_p1,aux2_c_p0,aux2_c_p1)
def __Enc(self, x):
z= self.Encoder('Encoder', x, self.image_shape[0], self.latent_dim,self.is_training)
return z
def __Dec(self, z):
x_out_logit = self.Decoder('Decoder', z, self.image_shape[0], self.is_training)
x_out = tf.tanh(x_out_logit)
return x_out
def __Classifier(self,z):
x_out= self.Classifier('Classifier', z, self.class_net_unit_num,self.latent_num+9, self.is_training)
x_out = tf.nn.softmax(x_out)
return x_out
def __prep_loss_optimizer(self,norm_x1,norm_aux1,norm_aux2,norm_aux_GT1,norm_aux_GT2,x1_dual_out,c_p0,c_p1,aux1_c_p0,aux1_c_p1,aux2_c_p0,aux2_c_p1):
norm_x1= tf.reshape(norm_x1, [-1, self.output_dim])
norm_aux1= tf.reshape(norm_aux1, [-1, self.output_dim])
norm_aux2= tf.reshape(norm_aux2, [-1, self.output_dim])
norm_aux_GT1= tf.reshape(norm_aux_GT1, [-1, self.output_dim])
norm_aux_GT2= tf.reshape(norm_aux_GT2, [-1, self.output_dim])
self.rec_img_loss1 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_x1 -self.x1_out), axis=1))
self.rec_aux1_loss2 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux1 -self.aux1_out), axis=1))
self.rec_aux2_loss3 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux2 -self.aux2_out), axis=1))
self.rec_aux1_swap_loss4 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux_GT1 -self.GT1_out), axis=1))
self.rec_aux2_swap_loss5 = tf.reduce_mean(tf.reduce_sum(tf.square(norm_aux_GT2 -self.GT2_out), axis=1))
self.rec_dual_loss6 = tf.reduce_mean(tf.reduce_sum(tf.square(x1_dual_out -self.x1_out), axis=1))
        # product over the ten head classes of reduce_sum(gt_i - gt_i * c_p0);
        # 1 - product gives a fuzzy "belongs to some head class" score
        head_gts = [self.class_gt0, self.class_gt1, self.class_gt2, self.class_gt3,
                    self.class_gt4, self.class_gt5, self.class_gt6, self.class_gt7,
                    self.class_gt8, self.class_gt9]
        prod = tf.reduce_sum(head_gts[0] - head_gts[0] * c_p0, 1)
        for gt in head_gts[1:]:
            prod = prod * tf.reduce_sum(gt - gt * c_p0, 1)
        temp = 1 - prod
self.fuzzy_class_loss = -tf.reduce_mean(tf.log(temp))
self.fuzzy_bg_class_loss= -tf.reduce_mean(self.class_gt10 * tf.log(c_p1))
self.aux1_class_loss = -tf.reduce_mean(self.aux_class_gt * tf.log(aux1_c_p0))
self.aux1_bg_class_loss = -tf.reduce_mean(self.class_gt10 * tf.log(aux1_c_p1))
self.aux2_class_loss = -tf.reduce_mean(self.aux_class_gt * tf.log(aux2_c_p0))
self.aux2_bg_class_loss = - tf.reduce_mean(self.class_gt10 * tf.log(aux2_c_p1))
self.class_loss = self.fuzzy_class_loss+self.fuzzy_bg_class_loss+self.aux1_class_loss+ self.aux1_bg_class_loss+self.aux2_class_loss+ self.aux2_bg_class_loss
self.loss=2*self.rec_img_loss1+self.rec_aux1_loss2+self.rec_aux2_loss3+2*self.rec_aux1_swap_loss4+2*self.rec_aux2_swap_loss5+self.rec_dual_loss6+5*self.class_loss
lr=self.lr
self.optimizer = tf.train.AdamOptimizer(learning_rate=lr, beta1=0., beta2=0.9).minimize(self.loss)
print('Learning rate=')
print(lr)
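Each reconstruction term above is the batch mean of a per-image summed squared error; the same reduction in NumPy:

```python
import numpy as np

def rec_loss(x, x_out):
    # sum squared error per sample, then average over the batch,
    # mirroring tf.reduce_mean(tf.reduce_sum(tf.square(...), axis=1))
    return np.mean(np.sum(np.square(x - x_out), axis=1))
```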
def load(self):
self.saver = tf.train.Saver(max_to_keep=3760)
ckpt = tf.train.get_checkpoint_state(self.dirs['ckpt'])
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = ckpt.model_checkpoint_path
self.saver.restore(self.session, ckpt_name)
print("Checkpoint restored: {0}".format(ckpt_name))
            prev_step = int(next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name)).group(0))
print('prev_step=')
print(prev_step)
else:
print("Failed to find checkpoint.")
prev_step = 0
sys.stdout.flush()
return prev_step + 1
def load_fixedNum(self,inter_num):
self.saver = tf.train.Saver(max_to_keep=3760)
ckpt = tf.train.get_checkpoint_state(self.dirs['ckpt'])
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = ckpt.model_checkpoint_path
ckpt_name_prefix=ckpt_name.split('-')[0]
ckpt_name_new=ckpt_name_prefix+'-'+str(inter_num)
self.saver.restore(self.session, ckpt_name_new)
print("Checkpoint restored: {0}".format(ckpt_name_new))
            prev_step = int(next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name_new)).group(0))
print('prev_step=')
print(prev_step)
else:
print("Failed to find checkpoint.")
prev_step = 0
sys.stdout.flush()
return prev_step + 1
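Both load methods recover the global step as the last run of digits in the checkpoint path. The pattern in isolation: `(\d+)(?!.*\d)` matches a digit run that is not followed by any later digit, i.e. the final number in the string.

```python
import re

def last_int(name):
    # the negative lookahead rejects any digit run that still has
    # digits somewhere after it, leaving only the trailing number
    return int(next(re.finditer(r"(\d+)(?!.*\d)", name)).group(0))
```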
def train(self, n_iters, n_iters_per_epoch, stats_iters, ckpt_interval):
count=0
self.session.run(tf.global_variables_initializer())
fixed_x1,fixed_mask_1,_= next(self.train_iter1)
fixed_x2,fixed_mask_2,_= next(self.train_iter2)
fixed_aux1,fixed_GT1,_= next(self.train_iter3)
fixed_aux2,fixed_GT2,_= next(self.train_iter4)
fixed_x1= self.session.run(tf.constant(fixed_x1))
fixed_mask_1= self.session.run(tf.constant(fixed_mask_1))
save_images(fixed_x1, os.path.join(self.dirs['samples'], 'samples_1_groundtruth.png'))
        fixed_aux2 = self.session.run(tf.constant(fixed_aux2))
save_images(fixed_aux1, os.path.join(self.dirs['samples'], 'aux_1_groundtruth.png'))
save_images(fixed_aux2, os.path.join(self.dirs['samples'], 'aux_2_groundtruth.png'))
start_iter = self.load()
running_cost = 0.
class_gt0, class_gt1,class_gt2,class_gt3,class_gt4,class_gt5, class_gt6, class_gt7, class_gt8,class_gt9,class_gt10 = self.generateClassificationLabel(self.batch_size)
_gan_data=fixed_x1
logs=open('loss_records.txt','w')
for iteration in range(start_iter, n_iters):
start_time = time.time()
_data1,_mask1, _ = next(self.train_iter1)
_aux_label, _, _ = next(self.train_iter2)
_aux1,_gt1, _ = next(self.train_iter3)
_aux2,_gt2, _ = next(self.train_iter4)
_, cost = self.session.run((self.optimizer, self.loss),feed_dict={self.x1:_data1,self.aux_class_gt:_aux_label,self.aux1_mask:_mask1,self.aux1:_aux1,self.aux2:_aux2,self.aux_GT1:_gt1,self.aux_GT2:_gt2,self.is_training:True,self.class_gt0:class_gt0,self.class_gt1:class_gt1,self.class_gt2:class_gt2,self.class_gt3:class_gt3,
self.class_gt4:class_gt4, self.class_gt5:class_gt5, self.class_gt6:class_gt6,self.class_gt7:class_gt7, self.class_gt8:class_gt8, self.class_gt9:class_gt9, self.class_gt10:class_gt10})
running_cost += cost
if iteration % n_iters_per_epoch == 1:
print("Epoch: {0}".format(iteration // n_iters_per_epoch))
if (iteration < start_iter + 4) or iteration % stats_iters == 0:
t = time.time()
dev_data1,dev_mask1, _ = next(self.dev_iter1)
dev_aux1,dev_gt1, _ = next(self.dev_iter3)
dev_aux2,dev_gt2, _ = next(self.dev_iter4)
dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss= \
self.session.run([self.loss,self.rec_img_loss1,self.rec_aux1_loss2,self.rec_aux2_loss3,self.rec_aux1_swap_loss4,self.rec_aux2_swap_loss5,self.rec_dual_loss6,self.class_loss,self.fuzzy_class_loss,self.aux1_class_loss,self.aux2_class_loss,self.fuzzy_bg_class_loss,self.aux1_bg_class_loss,self.aux2_bg_class_loss],
feed_dict={self.x1:dev_data1,self.aux1_mask:dev_mask1,self.aux1:dev_aux1,self.aux2:dev_aux2,self.aux_GT1:dev_gt1,self.aux_GT2:dev_gt2,self.is_training:False,
self.class_gt0: class_gt0, self.class_gt1: class_gt1,self.class_gt2: class_gt2,self.class_gt3:class_gt3,self.aux_class_gt:_aux_label,self.class_gt4:class_gt4, self.class_gt5:class_gt5, self.class_gt6:class_gt6,self.class_gt7:class_gt7, self.class_gt8:class_gt8, self.class_gt9:class_gt9, self.class_gt10:class_gt10})
n_samples = 1. if (iteration < start_iter + 4) else float(stats_iters)
avg_cost = running_cost / n_samples
running_cost = 0.
print("Iteration:{0} \t| Train cost:{1:.1f} \t| Dev cost: {2:.1f}(img1_loss:{3:.1f},aux1_loss2:{4:.1f},aux2_loss3:{5:.1f},aux1_swap_loss:{6:.1f},aux2_swap_loss:{7:.1f},dual_swap_loss:{8:.1f},class loss:{9:.1f}(fuzzy_class:{10:.1f},aux1_class_loss:{11:.1f},aux2_class_loss:{12:.1f},fuzzy_bg_class_loss:{13:.1f},aux1_bg_class_loss:{14:.1f},aux2_bg_class_loss:{15:.1f}))".
format(iteration, avg_cost, dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss))
logs.writelines("Iteration:{0} \t| Train cost:{1:.1f} \t| Dev cost: {2:.1f}(img1_loss:{3:.1f},aux1_loss2:{4:.1f},aux2_loss3:{5:.1f},aux1_swap_loss:{6:.1f},aux2_swap_loss:{7:.1f},dual_swap_loss:{8:.1f},class loss:{9:.1f}(fuzzy_class:{10:.1f},aux1_class_loss:{11:.1f},aux2_class_loss:{12:.1f},fuzzy_bg_class_loss:{13:.1f},aux1_bg_class_loss:{14:.1f},aux2_bg_class_loss:{15:.1f}))\n".
format(iteration, avg_cost, dev_loss,dev_rec_img_loss1,dev_rec_aux1_loss2,dev_rec_aux2_loss3,dev_rec_aux1_swap_loss4,dev_rec_aux2_swap_loss5,dev_rec_dual_loss6,class_loss,fuzzy_class,aux1_class_loss,aux2_class_loss,fuzzy_bg_class_loss,aux1_bg_class_loss,aux2_bg_class_loss))
count=count+1
if self.vis_reconst:
self.visualise_reconstruction(fixed_x1,fixed_aux1,fixed_aux2,iteration)
if np.any(np.isnan(avg_cost)):
raise ValueError("NaN detected!")
if (iteration > start_iter) and iteration % (ckpt_interval) == 0:
self.saver.save(self.session, os.path.join(self.dirs['ckpt'], self.exp_name), global_step=iteration)
_gan_data=_data1
logs.close()
def reconstruct(self, X1, aux1,aux2, is_training=False):
return self.session.run([self.x1_out,self.mix_head_out,self.aux1_mix_head_out],
feed_dict={self.x1: X1,self.aux1:aux1,self.aux2:aux2,self.is_training: is_training})
def visualise_reconstruction(self, X1, aux1,aux2,iteration):
X_out1,mix_head_out,aux1_mix_head_out= self.reconstruct(X1, aux1,aux2)
X1 = ((X_out1+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
mix_head_out = ((mix_head_out+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
aux1_mix_head_out = ((aux1_mix_head_out+1.)*(255.99/2)).astype('int32').reshape([-1] + self.image_shape)
save_images(X1, os.path.join(self.dirs['samples'], str(iteration)+'samples_1_rec.png'))
save_images(mix_head_out, os.path.join(self.dirs['samples'], str(iteration)+'X1bg_aux1head.png'))
save_images(aux1_mix_head_out, os.path.join(self.dirs['samples'], str(iteration)+'X1head_aux1bg.png'))
def generateClassificationLabel(self, batch_size):
class_num = 11
class_gt1 = np.zeros((batch_size, class_num))
class_gt2 = np.zeros((batch_size, class_num))
class_gt3 = np.zeros((batch_size, class_num))
class_gt4 = np.zeros((batch_size, class_num))
class_gt5 = np.zeros((batch_size, class_num))
class_gt6 = np.zeros((batch_size, class_num))
class_gt7 = np.zeros((batch_size, class_num))
class_gt8 = np.zeros((batch_size, class_num))
class_gt9 = np.zeros((batch_size, class_num))
class_gt10 = np.zeros((batch_size, class_num))
class_gt11 = np.zeros((batch_size, class_num))
for i in range(batch_size):
class_gt1[i, 0] = 1
class_gt2[i, 1] = 1
class_gt3[i, 2] = 1
class_gt4[i, 3] = 1
class_gt5[i, 4] = 1
class_gt6[i, 5] = 1
class_gt7[i, 6] = 1
class_gt8[i, 7] = 1
class_gt9[i, 8] = 1
class_gt10[i, 9] = 1
class_gt11[i, 10] = 1
return class_gt1, class_gt2, class_gt3, class_gt4,class_gt5, class_gt6, class_gt7, class_gt8,class_gt9, class_gt10,class_gt11
def getCodesAndImgs(self, pathForSave, X1, k, is_training=False):
z1, X_r0 = self.session.run([self.z1, self.x1_out],
feed_dict={self.x1: X1, self.is_training: is_training})
ImageNorm0_1 = ((X_r0 + 1.) * (1.00 / 2)).astype('double').reshape(
[-1, self.image_shape[1], self.image_shape[2], self.image_shape[0]])
if k == 1:
X_save = ((X_r0 + 1.) * (255.99 / 2)).astype('int32').reshape([-1] + self.image_shape)
save_images(X_save, os.path.join(pathForSave, 'iter' + str(k) + '_samples_reconstructed.png'))
return z1, ImageNorm0_1
| true | true |
f7fbd5bd1cef121fc4393bb804be64d14bfbae49 | 1,963 | py | Python | aliyun-python-sdk-hbr/aliyunsdkhbr/request/v20170908/RenewClientTokenRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 1,001 | 2015-07-24T01:32:41.000Z | 2022-03-25T01:28:18.000Z | aliyun-python-sdk-hbr/aliyunsdkhbr/request/v20170908/RenewClientTokenRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 363 | 2015-10-20T03:15:00.000Z | 2022-03-08T12:26:19.000Z | aliyun-python-sdk-hbr/aliyunsdkhbr/request/v20170908/RenewClientTokenRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 682 | 2015-09-22T07:19:02.000Z | 2022-03-22T09:51:46.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkhbr.endpoint import endpoint_data
class RenewClientTokenRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'hbr', '2017-09-08', 'RenewClientToken','hbr')
self.set_protocol_type('https')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_ClientId(self):
return self.get_query_params().get('ClientId')
def set_ClientId(self,ClientId):
self.add_query_param('ClientId',ClientId)
def get_VaultId(self):
return self.get_query_params().get('VaultId')
def set_VaultId(self,VaultId):
self.add_query_param('VaultId',VaultId)
def get_Token(self):
return self.get_query_params().get('Token')
def set_Token(self,Token):
self.add_query_param('Token',Token)
def get_ExpireInSeconds(self):
return self.get_query_params().get('ExpireInSeconds')
def set_ExpireInSeconds(self,ExpireInSeconds):
self.add_query_param('ExpireInSeconds',ExpireInSeconds) | 34.438596 | 75 | 0.76108 |
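Each accessor pair in `RenewClientTokenRequest` just reads or writes one query parameter through the `RpcRequest` base class. A minimal dict-backed stand-in for that parameter store (a sketch, not the real Aliyun SDK base class):

```python
class FakeRpcRequest:
    """Dict-backed stand-in for the query-param API used above."""
    def __init__(self):
        self._query = {}

    def get_query_params(self):
        return self._query

    def add_query_param(self, key, value):
        self._query[key] = value

class RenewClientTokenSketch(FakeRpcRequest):
    # one get/set pair, in the same style as the generated SDK code
    def set_ClientId(self, client_id):
        self.add_query_param('ClientId', client_id)

    def get_ClientId(self):
        return self.get_query_params().get('ClientId')
```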
from aliyunsdkcore.request import RpcRequest
from aliyunsdkhbr.endpoint import endpoint_data
class RenewClientTokenRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'hbr', '2017-09-08', 'RenewClientToken','hbr')
self.set_protocol_type('https')
self.set_method('POST')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_ClientId(self):
return self.get_query_params().get('ClientId')
def set_ClientId(self,ClientId):
self.add_query_param('ClientId',ClientId)
def get_VaultId(self):
return self.get_query_params().get('VaultId')
def set_VaultId(self,VaultId):
self.add_query_param('VaultId',VaultId)
def get_Token(self):
return self.get_query_params().get('Token')
def set_Token(self,Token):
self.add_query_param('Token',Token)
def get_ExpireInSeconds(self):
return self.get_query_params().get('ExpireInSeconds')
def set_ExpireInSeconds(self,ExpireInSeconds):
self.add_query_param('ExpireInSeconds',ExpireInSeconds) | true | true |
f7fbd67754e0880e81c219bdc08f1400c9178289 | 13,271 | py | Python | lib/spack/spack/cmd/uninstall.py | NERSC/spack | cf8de02e7387727f02ff5d695405fca8b4cad400 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 11 | 2015-10-04T02:17:46.000Z | 2018-02-07T18:23:00.000Z | lib/spack/spack/cmd/uninstall.py | NERSC/spack | cf8de02e7387727f02ff5d695405fca8b4cad400 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 22 | 2017-08-01T22:45:10.000Z | 2022-03-10T07:46:31.000Z | lib/spack/spack/cmd/uninstall.py | NERSC/spack | cf8de02e7387727f02ff5d695405fca8b4cad400 | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 4 | 2016-06-10T17:57:39.000Z | 2018-09-11T04:59:38.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import itertools
import sys
from llnl.util import tty
from llnl.util.tty.colify import colify
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
import spack.error
import spack.package
import spack.repo
import spack.store
from spack.database import InstallStatuses
description = "remove installed packages"
section = "build"
level = "short"
error_message = """You can either:
a) use a more specific spec, or
b) specify the spec by its hash (e.g. `spack uninstall /hash`), or
c) use `spack uninstall --all` to uninstall ALL matching specs.
"""
# Arguments for display_specs when we find ambiguity
display_args = {
'long': True,
'show_flags': False,
'variants': False,
'indent': 4,
}
def setup_parser(subparser):
epilog_msg = ("Specs to be uninstalled are specified using the spec syntax"
" (`spack help --spec`) and can be identified by their "
"hashes. To remove packages that are needed only at build "
"time and were not explicitly installed see `spack gc -h`."
"\n\nWhen using the --all option ALL packages matching the "
"supplied specs will be uninstalled. For instance, "
"`spack uninstall --all libelf` uninstalls all the versions "
"of `libelf` currently present in Spack's store. If no spec "
"is supplied, all installed packages will be uninstalled. "
"If used in an environment, all packages in the environment "
"will be uninstalled.")
subparser.epilog = epilog_msg
subparser.add_argument(
'-f', '--force', action='store_true', dest='force',
help="remove regardless of whether other packages or environments "
"depend on this one")
arguments.add_common_arguments(
subparser, ['recurse_dependents', 'yes_to_all', 'installed_specs'])
subparser.add_argument(
'-a', '--all', action='store_true', dest='all',
help="remove ALL installed packages that match each supplied spec"
)
subparser.add_argument(
'--origin', dest='origin',
help="only remove DB records with the specified origin"
)
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False,
origin=None):
"""Returns a list of specs matching the not necessarily
concretized specs given from cli
Args:
env (spack.environment.Environment): active environment, or ``None``
if there is not one
specs (list): list of specs to be matched against installed packages
allow_multiple_matches (bool): if True multiple matches are admitted
Return:
list: list of specs
"""
# constrain uninstall resolution to current environment if one is active
hashes = env.all_hashes() if env else None
# List of specs that match expressions given via command line
specs_from_cli = []
has_errors = False
for spec in specs:
install_query = [InstallStatuses.INSTALLED, InstallStatuses.DEPRECATED]
matching = spack.store.db.query_local(
spec, hashes=hashes, installed=install_query, origin=origin)
# For each spec provided, make sure it refers to only one package.
# Fail and ask user to be unambiguous if it doesn't
if not allow_multiple_matches and len(matching) > 1:
tty.error('{0} matches multiple packages:'.format(spec))
sys.stderr.write('\n')
spack.cmd.display_specs(matching, output=sys.stderr,
**display_args)
sys.stderr.write('\n')
sys.stderr.flush()
has_errors = True
# No installed package matches the query
if len(matching) == 0 and spec is not any:
if env:
pkg_type = "packages in environment '%s'" % env.name
else:
pkg_type = 'installed packages'
tty.die('{0} does not match any {1}.'.format(spec, pkg_type))
specs_from_cli.extend(matching)
if has_errors:
tty.die(error_message)
return specs_from_cli
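`find_matching_specs` dies on zero matches and, without `--all`, on more than one. The per-query decision logic in isolation (toy sketch, strings instead of Specs):

```python
def check_matches(matching, allow_multiple=False):
    """Classify one query's match list as 'ok', 'ambiguous', or 'none'."""
    if not allow_multiple and len(matching) > 1:
        return 'ambiguous'
    if len(matching) == 0:
        return 'none'
    return 'ok'
```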
def installed_dependents(specs, env):
"""Map each spec to a list of its installed dependents.
Args:
specs (list): list of Specs
env (spack.environment.Environment or None): the active environment, or None
Returns:
tuple: two mappings: one from specs to their dependent environments in the
active environment (or global scope if there is no environment), and one from
specs to their dependents in *inactive* environments (empty if there is no
environment
"""
active_dpts = {}
inactive_dpts = {}
env_hashes = set(env.all_hashes()) if env else set()
all_specs_in_db = spack.store.db.query()
for spec in specs:
installed = [x for x in all_specs_in_db if spec in x]
# separate installed dependents into dpts in this environment and
# dpts that are outside this environment
for dpt in installed:
if dpt not in specs:
if not env or dpt.dag_hash() in env_hashes:
active_dpts.setdefault(spec, set()).add(dpt)
else:
inactive_dpts.setdefault(spec, set()).add(dpt)
return active_dpts, inactive_dpts
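The partition above sorts each dependent into the active-environment bucket or the inactive one via `setdefault`. A toy version with plain strings standing in for specs and hashes:

```python
def bucket_dependents(dependents, env_hashes):
    """dependents: {spec: [(dependent_name, dependent_hash), ...]}.

    Dependents whose hash is in env_hashes go to `active`, others to
    `inactive`, mirroring installed_dependents above.
    """
    active, inactive = {}, {}
    for spec, dpts in dependents.items():
        for name, h in dpts:
            target = active if h in env_hashes else inactive
            target.setdefault(spec, set()).add(name)
    return active, inactive
```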
def dependent_environments(specs):
"""Map each spec to environments that depend on it.
Args:
specs (list): list of Specs
Returns:
dict: mapping from spec to lists of dependent Environments
"""
dependents = {}
for env in ev.all_environments():
hashes = set(env.all_hashes())
for spec in specs:
if spec.dag_hash() in hashes:
dependents.setdefault(spec, []).append(env)
return dependents
def inactive_dependent_environments(spec_envs):
"""Strip the active environment from a dependent map.
Take the output of ``dependent_environment()`` and remove the active
environment from all mappings. Remove any specs in the map that now
have no dependent environments. Return the result.
Args:
spec_envs (dict): mapping from spec to lists of dependent Environments
Returns:
dict: mapping from spec to lists of *inactive* dependent Environments
"""
spec_inactive_envs = {}
for spec, de_list in spec_envs.items():
inactive = [de for de in de_list if not de.active]
if inactive:
spec_inactive_envs[spec] = inactive
return spec_inactive_envs
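The same strip-and-drop pattern, reduced to a toy where environments are names and the active one is passed in:

```python
def strip_active(spec_envs, active_name):
    """Drop the active environment from every mapping; drop specs left empty."""
    out = {}
    for spec, envs in spec_envs.items():
        inactive = [e for e in envs if e != active_name]
        if inactive:
            out[spec] = inactive
    return out
```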
def _remove_from_env(spec, env):
"""Remove a spec from an environment if it is a root."""
try:
# try removing the spec from the current active
# environment. this will fail if the spec is not a root
env.remove(spec, force=True)
except ev.SpackEnvironmentError:
pass # ignore non-root specs
def do_uninstall(env, specs, force):
"""Uninstalls all the specs in a list.
Args:
env (spack.environment.Environment or None): active environment, or ``None``
if there is not one
specs (list): list of specs to be uninstalled
force (bool): force uninstallation (boolean)
"""
packages = []
for item in specs:
try:
# should work if package is known to spack
packages.append(item.package)
except spack.repo.UnknownEntityError:
# The package.py file has gone away -- but still
# want to uninstall.
spack.package.Package.uninstall_by_spec(item, force=True)
# A package is ready to be uninstalled when nothing else references it,
# unless we are requested to force uninstall it.
def is_ready(dag_hash):
if force:
return True
_, record = spack.store.db.query_by_spec_hash(dag_hash)
if not record.ref_count:
return True
# If this spec is only used as a build dependency, we can uninstall
return all(
dspec.deptypes == ("build",)
for dspec in record.spec.edges_from_dependents()
)
while packages:
ready = [x for x in packages if is_ready(x.spec.dag_hash())]
if not ready:
msg = 'unexpected error [cannot proceed uninstalling specs with' \
' remaining link or run dependents {0}]'
msg = msg.format(', '.join(x.name for x in packages))
raise spack.error.SpackError(msg)
packages = [x for x in packages if x not in ready]
for item in ready:
item.do_uninstall(force=force)
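`do_uninstall` repeatedly peels off packages that nothing remaining depends on, and errors out if no progress is possible. The same fixpoint loop over a toy dependency table (names and structure hypothetical):

```python
def uninstall_order(targets, depends_on, force=False):
    """targets: packages to remove; depends_on: {pkg: set of its deps}.

    A package is ready when no remaining target still depends on it;
    with force=True everything is ready at once, as in do_uninstall.
    """
    remaining = set(targets)
    order = []
    while remaining:
        ready = [p for p in remaining
                 if force or not any(p in depends_on.get(q, set())
                                     for q in remaining if q != p)]
        if not ready:
            # mirrors the "remaining link or run dependents" error above
            raise RuntimeError("cannot proceed: remaining dependents")
        for p in ready:
            remaining.discard(p)
            order.append(p)
    return order
```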
def get_uninstall_list(args, specs, env):
# Gets the list of installed specs that match the ones give via cli
# args.all takes care of the case where '-a' is given in the cli
uninstall_list = find_matching_specs(env, specs, args.all, args.force,
args.origin)
# Takes care of '-R'
active_dpts, inactive_dpts = installed_dependents(uninstall_list, env)
# if we are in the global scope, we complain if you try to remove a
# spec that's in an environment. If we're in an environment, we'll
# just *remove* it from the environment, so we ignore this
# error when *in* an environment
spec_envs = dependent_environments(uninstall_list)
spec_envs = inactive_dependent_environments(spec_envs)
# Process spec_dependents and update uninstall_list
has_error = not args.force and (
(active_dpts and not args.dependents) # dependents in the current env
or (not env and spec_envs) # there are environments that need specs
)
# say why each problem spec is needed
if has_error:
specs = set(active_dpts)
if not env:
specs.update(set(spec_envs)) # environments depend on this
for i, spec in enumerate(sorted(specs)):
# space out blocks of reasons
if i > 0:
print()
spec_format = '{name}{@version}{%compiler}{/hash:7}'
tty.info("Will not uninstall %s" % spec.cformat(spec_format),
format='*r')
dependents = active_dpts.get(spec)
if dependents:
print('The following packages depend on it:')
spack.cmd.display_specs(dependents, **display_args)
if not env:
envs = spec_envs.get(spec)
if envs:
print('It is used by the following environments:')
colify([e.name for e in envs], indent=4)
msgs = []
if active_dpts:
msgs.append(
'use `spack uninstall --dependents` to remove dependents too')
if spec_envs:
msgs.append('use `spack env remove` to remove from environments')
print()
tty.die('There are still dependents.', *msgs)
elif args.dependents:
for spec, lst in active_dpts.items():
uninstall_list.extend(lst)
uninstall_list = list(set(uninstall_list))
# only force-remove (don't completely uninstall) specs that still
# have external dependent envs or pkgs
removes = set(inactive_dpts)
if env:
removes.update(spec_envs)
# remove anything in removes from the uninstall list
uninstall_list = set(uninstall_list) - removes
return uninstall_list, removes
def uninstall_specs(args, specs):
env = ev.active_environment()
uninstall_list, remove_list = get_uninstall_list(args, specs, env)
anything_to_do = set(uninstall_list).union(set(remove_list))
if not anything_to_do:
        tty.warn('There are no packages to uninstall.')
return
if not args.yes_to_all:
confirm_removal(anything_to_do)
if env:
# Remove all the specs that are supposed to be uninstalled or just
# removed.
with env.write_transaction():
for spec in itertools.chain(remove_list, uninstall_list):
_remove_from_env(spec, env)
env.write()
# Uninstall everything on the list
do_uninstall(env, uninstall_list, args.force)
def confirm_removal(specs):
"""Display the list of specs to be removed and ask for confirmation.
Args:
specs (list): specs to be removed
"""
tty.msg('The following packages will be uninstalled:\n')
spack.cmd.display_specs(specs, **display_args)
print('')
answer = tty.get_yes_or_no('Do you want to proceed?', default=False)
if not answer:
tty.msg('Aborting uninstallation')
sys.exit(0)
def uninstall(parser, args):
if not args.specs and not args.all:
tty.die('uninstall requires at least one package argument.',
' Use `spack uninstall --all` to uninstall ALL packages.')
# [any] here handles the --all case by forcing all specs to be returned
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
uninstall_specs(args, specs)
| 35.201592 | 85 | 0.638686 |
from __future__ import print_function
import itertools
import sys
from llnl.util import tty
from llnl.util.tty.colify import colify
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
import spack.error
import spack.package
import spack.repo
import spack.store
from spack.database import InstallStatuses
description = "remove installed packages"
section = "build"
level = "short"
error_message = """You can either:
a) use a more specific spec, or
b) specify the spec by its hash (e.g. `spack uninstall /hash`), or
c) use `spack uninstall --all` to uninstall ALL matching specs.
"""
display_args = {
'long': True,
'show_flags': False,
'variants': False,
'indent': 4,
}
def setup_parser(subparser):
epilog_msg = ("Specs to be uninstalled are specified using the spec syntax"
" (`spack help --spec`) and can be identified by their "
"hashes. To remove packages that are needed only at build "
"time and were not explicitly installed see `spack gc -h`."
"\n\nWhen using the --all option ALL packages matching the "
"supplied specs will be uninstalled. For instance, "
"`spack uninstall --all libelf` uninstalls all the versions "
"of `libelf` currently present in Spack's store. If no spec "
"is supplied, all installed packages will be uninstalled. "
"If used in an environment, all packages in the environment "
"will be uninstalled.")
subparser.epilog = epilog_msg
subparser.add_argument(
'-f', '--force', action='store_true', dest='force',
help="remove regardless of whether other packages or environments "
"depend on this one")
arguments.add_common_arguments(
subparser, ['recurse_dependents', 'yes_to_all', 'installed_specs'])
subparser.add_argument(
'-a', '--all', action='store_true', dest='all',
help="remove ALL installed packages that match each supplied spec"
)
subparser.add_argument(
'--origin', dest='origin',
help="only remove DB records with the specified origin"
)
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False,
origin=None):
# constrain uninstall resolution to current environment if one is active
hashes = env.all_hashes() if env else None
# List of specs that match expressions given via command line
specs_from_cli = []
has_errors = False
for spec in specs:
install_query = [InstallStatuses.INSTALLED, InstallStatuses.DEPRECATED]
matching = spack.store.db.query_local(
spec, hashes=hashes, installed=install_query, origin=origin)
# For each spec provided, make sure it refers to only one package.
# Fail and ask user to be unambiguous if it doesn't
if not allow_multiple_matches and len(matching) > 1:
tty.error('{0} matches multiple packages:'.format(spec))
sys.stderr.write('\n')
spack.cmd.display_specs(matching, output=sys.stderr,
**display_args)
sys.stderr.write('\n')
sys.stderr.flush()
has_errors = True
if len(matching) == 0 and spec is not any:
if env:
pkg_type = "packages in environment '%s'" % env.name
else:
pkg_type = 'installed packages'
tty.die('{0} does not match any {1}.'.format(spec, pkg_type))
specs_from_cli.extend(matching)
if has_errors:
tty.die(error_message)
return specs_from_cli
def installed_dependents(specs, env):
active_dpts = {}
inactive_dpts = {}
env_hashes = set(env.all_hashes()) if env else set()
all_specs_in_db = spack.store.db.query()
for spec in specs:
installed = [x for x in all_specs_in_db if spec in x]
for dpt in installed:
if dpt not in specs:
if not env or dpt.dag_hash() in env_hashes:
active_dpts.setdefault(spec, set()).add(dpt)
else:
inactive_dpts.setdefault(spec, set()).add(dpt)
return active_dpts, inactive_dpts
def dependent_environments(specs):
dependents = {}
for env in ev.all_environments():
hashes = set(env.all_hashes())
for spec in specs:
if spec.dag_hash() in hashes:
dependents.setdefault(spec, []).append(env)
return dependents
def inactive_dependent_environments(spec_envs):
spec_inactive_envs = {}
for spec, de_list in spec_envs.items():
inactive = [de for de in de_list if not de.active]
if inactive:
spec_inactive_envs[spec] = inactive
return spec_inactive_envs
def _remove_from_env(spec, env):
try:
env.remove(spec, force=True)
except ev.SpackEnvironmentError:
pass
def do_uninstall(env, specs, force):
packages = []
for item in specs:
try:
packages.append(item.package)
except spack.repo.UnknownEntityError:
spack.package.Package.uninstall_by_spec(item, force=True)
def is_ready(dag_hash):
if force:
return True
_, record = spack.store.db.query_by_spec_hash(dag_hash)
if not record.ref_count:
return True
return all(
dspec.deptypes == ("build",)
for dspec in record.spec.edges_from_dependents()
)
while packages:
ready = [x for x in packages if is_ready(x.spec.dag_hash())]
if not ready:
msg = 'unexpected error [cannot proceed uninstalling specs with' \
' remaining link or run dependents {0}]'
msg = msg.format(', '.join(x.name for x in packages))
raise spack.error.SpackError(msg)
packages = [x for x in packages if x not in ready]
for item in ready:
item.do_uninstall(force=force)
def get_uninstall_list(args, specs, env):
uninstall_list = find_matching_specs(env, specs, args.all, args.force,
args.origin)
active_dpts, inactive_dpts = installed_dependents(uninstall_list, env)
# just *remove* it from the environment, so we ignore this
# error when *in* an environment
spec_envs = dependent_environments(uninstall_list)
spec_envs = inactive_dependent_environments(spec_envs)
# Process spec_dependents and update uninstall_list
has_error = not args.force and (
(active_dpts and not args.dependents) # dependents in the current env
or (not env and spec_envs) # there are environments that need specs
)
# say why each problem spec is needed
if has_error:
specs = set(active_dpts)
if not env:
specs.update(set(spec_envs)) # environments depend on this
for i, spec in enumerate(sorted(specs)):
# space out blocks of reasons
if i > 0:
print()
spec_format = '{name}{@version}{%compiler}{/hash:7}'
tty.info("Will not uninstall %s" % spec.cformat(spec_format),
format='*r')
dependents = active_dpts.get(spec)
if dependents:
print('The following packages depend on it:')
spack.cmd.display_specs(dependents, **display_args)
if not env:
envs = spec_envs.get(spec)
if envs:
print('It is used by the following environments:')
colify([e.name for e in envs], indent=4)
msgs = []
if active_dpts:
msgs.append(
'use `spack uninstall --dependents` to remove dependents too')
if spec_envs:
msgs.append('use `spack env remove` to remove from environments')
print()
tty.die('There are still dependents.', *msgs)
elif args.dependents:
for spec, lst in active_dpts.items():
uninstall_list.extend(lst)
uninstall_list = list(set(uninstall_list))
    # only force-remove (don't completely uninstall) specs that still have dependents
removes = set(inactive_dpts)
if env:
removes.update(spec_envs)
uninstall_list = set(uninstall_list) - removes
return uninstall_list, removes
def uninstall_specs(args, specs):
env = ev.active_environment()
uninstall_list, remove_list = get_uninstall_list(args, specs, env)
anything_to_do = set(uninstall_list).union(set(remove_list))
if not anything_to_do:
        tty.warn('There are no packages to uninstall.')
return
if not args.yes_to_all:
confirm_removal(anything_to_do)
if env:
with env.write_transaction():
for spec in itertools.chain(remove_list, uninstall_list):
_remove_from_env(spec, env)
env.write()
do_uninstall(env, uninstall_list, args.force)
def confirm_removal(specs):
tty.msg('The following packages will be uninstalled:\n')
spack.cmd.display_specs(specs, **display_args)
print('')
answer = tty.get_yes_or_no('Do you want to proceed?', default=False)
if not answer:
tty.msg('Aborting uninstallation')
sys.exit(0)
def uninstall(parser, args):
if not args.specs and not args.all:
tty.die('uninstall requires at least one package argument.',
' Use `spack uninstall --all` to uninstall ALL packages.')
specs = spack.cmd.parse_specs(args.specs) if args.specs else [any]
uninstall_specs(args, specs)
| true | true |
f7fbd84bd7dc834c667a2f4c61326629d1896354 | 2,687 | py | Python | setup.py | legaultmarc/geneparse | 5a844df77ded5adc765a086a8d346fce6ba01f3d | [
"MIT"
] | 4 | 2018-11-09T11:10:24.000Z | 2021-07-23T22:17:58.000Z | setup.py | legaultmarc/geneparse | 5a844df77ded5adc765a086a8d346fce6ba01f3d | [
"MIT"
] | 5 | 2017-05-02T15:28:01.000Z | 2018-04-16T18:29:15.000Z | setup.py | legaultmarc/geneparse | 5a844df77ded5adc765a086a8d346fce6ba01f3d | [
"MIT"
] | 1 | 2017-05-12T17:58:32.000Z | 2017-05-12T17:58:32.000Z | #!/usr/bin/env python
# How to build source distribution
# - python setup.py sdist --format bztar
# - python setup.py sdist --format gztar
# - python setup.py sdist --format zip
# - python setup.py bdist_wheel
import os
import sys
from setuptools import setup, find_packages
MAJOR = 0
MINOR = 8
MICRO = 1
VERSION = "{0}.{1}.{2}".format(MAJOR, MINOR, MICRO)
def check_python_version():
"""Checks the python version, exits if < 3.4."""
python_major, python_minor = sys.version_info[:2]
if python_major != 3 or python_minor < 4:
sys.stderr.write("geneparse requires python 3 "
"(version 3.4 or higher)\n")
sys.exit(1)
def write_version_file(fn=None):
if fn is None:
fn = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
os.path.join("geneparse", "version.py"),
)
content = ("\n# THIS FILE WAS GENERATED AUTOMATICALLY\n"
'geneparse_version = "{version}"\n')
a = open(fn, "w")
try:
a.write(content.format(version=VERSION))
finally:
a.close()
def setup_package():
# Checking the python version prior to installation
check_python_version()
# Saving the version into a file
write_version_file()
setup(
name="geneparse",
version=VERSION,
        description="A suite of parsers for genotype formats.",
url="https://github.com/pgxcentre/geneparse",
license="MIT",
test_suite="geneparse.tests.test_suite",
zip_safe=False,
install_requires=["numpy >= 1.11.0", "pandas >= 0.19.0",
"pyplink >= 1.3.4", "setuptools >= 26.1.0",
"biopython >= 1.68", "pybgen >= 0.7.0"],
packages=find_packages(),
package_data={"geneparse.tests": ["data/*", "data/*/*"]},
classifiers=["Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Free for non-commercial use",
"Operating System :: Unix",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft",
"Programming Language :: Python",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Topic :: Scientific/Engineering :: Bio-Informatics"],
keywords="bioinformatics genetics statistics",
)
if __name__ == "__main__":
setup_package()
| 30.885057 | 75 | 0.560849 |
import os
import sys
from setuptools import setup, find_packages
MAJOR = 0
MINOR = 8
MICRO = 1
VERSION = "{0}.{1}.{2}".format(MAJOR, MINOR, MICRO)
def check_python_version():
python_major, python_minor = sys.version_info[:2]
if python_major != 3 or python_minor < 4:
sys.stderr.write("geneparse requires python 3 "
"(version 3.4 or higher)\n")
sys.exit(1)
def write_version_file(fn=None):
if fn is None:
fn = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
os.path.join("geneparse", "version.py"),
)
content = ("\n# THIS FILE WAS GENERATED AUTOMATICALLY\n"
'geneparse_version = "{version}"\n')
a = open(fn, "w")
try:
a.write(content.format(version=VERSION))
finally:
a.close()
def setup_package():
check_python_version()
write_version_file()
setup(
name="geneparse",
version=VERSION,
        description="A suite of parsers for genotype formats.",
url="https://github.com/pgxcentre/geneparse",
license="MIT",
test_suite="geneparse.tests.test_suite",
zip_safe=False,
install_requires=["numpy >= 1.11.0", "pandas >= 0.19.0",
"pyplink >= 1.3.4", "setuptools >= 26.1.0",
"biopython >= 1.68", "pybgen >= 0.7.0"],
packages=find_packages(),
package_data={"geneparse.tests": ["data/*", "data/*/*"]},
classifiers=["Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Free for non-commercial use",
"Operating System :: Unix",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft",
"Programming Language :: Python",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Topic :: Scientific/Engineering :: Bio-Informatics"],
keywords="bioinformatics genetics statistics",
)
if __name__ == "__main__":
setup_package()
| true | true |
f7fbd980831ccec066261d37e528035e5f2d7c7a | 12,278 | py | Python | open-hackathon-client/src/client/config_sample.py | overbest/open-hackathon | 62e085fbe603bcb00ca56d2b96cfc43bf44c710b | [
"MIT"
] | null | null | null | open-hackathon-client/src/client/config_sample.py | overbest/open-hackathon | 62e085fbe603bcb00ca56d2b96cfc43bf44c710b | [
"MIT"
] | null | null | null | open-hackathon-client/src/client/config_sample.py | overbest/open-hackathon | 62e085fbe603bcb00ca56d2b96cfc43bf44c710b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# -----------------------------------------------------------------------------------
# Copyright (c) Microsoft Open Technologies (Shanghai) Co. Ltd. All rights reserved.
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# -----------------------------------------------------------------------------------
# "javascript" section for javascript. see @app.route('/config.js') in app/views.py
# NOTE: all following key/secrets for test purpose.
HOSTNAME = "http://localhost" # host name of the UI site
# hacking.kaiyuanshe.cn is used for wechat oauth login
# HOSTNAME = "http://hacking.kaiyuanshe.cn"
# HOSTNAME = "http://open-hackathon-dev.chinacloudapp.cn" # host name of the UI site
# HOSTNAME = "http://hacking.kaiyuanshe.cn"
QQ_OAUTH_STATE = "openhackathon"  # TODO: state should not be a constant; it should be unguessable to prevent CSRF
HACKATHON_API_ENDPOINT = "http://localhost:15000"
# HACKATHON_API_ENDPOINT = "http://open-hackathon-dev.chinacloudapp.cn:15000"
# HACKATHON_API_ENDPOINT = "http://hacking.kaiyuanshe.cn:15000"
# github key for `localhost`
GITHUB_CLIENT_ID = "b44f3d47bdeb26b9c4e6"
GITHUB_CLIENT_SECRET = "98de14161c4b2ed3ea7a19787d62cda73b8e292c"
# github oauth key for `open-hackathon-dev.chinacloudapp.cn`
# GITHUB_CLIENT_ID = "b8e407813350f26bf537"
# GITHUB_CLIENT_SECRET = "daa78ae27e13c9f5b4a884bd774cadf2f75a199f"
QQ_CLIENT_ID = "101200890"
QQ_CLIENT_SECRET = "88ad67bd4521c4cc47136854781cb9b5"
QQ_META_CONTENT = "274307566465013314076545663016134754100636"
WECHAT_APP_ID = "wxe75b8aef71c2059f"
WECHAT_SECRET = "4532b90750f4c7bc70fcfbc42d881622"
WECHAT_OAUTH_STATE = "openhackathon"  # NOTE: maybe this should be the same as QQ_OAUTH_STATE?
WEIBO_CLIENT_ID = "479757037"
WEIBO_CLIENT_SECRET = "efc5e75ff8891be37d90b4eaec5c02de"
WEIBO_META_CONTENT = "ae884e09bc02b700"
LIVE_CLIENT_ID = "000000004414E0A6"
LIVE_CLIENT_SECRET = "b4mkfVqjtwHY2wJh0T4tj74lxM5LgAT2"
ALAUDA_CLIENT_ID = "4VR9kzNZVyWcnk9OnAwMuSus7xOOcozJIpic6W6y"
ALAUDA_CLIENT_SECRET = "E5PUL5h9feLlEirec5HQhjIzYecv7vVbEBjWLBkRMoCoFXdvS1PzNmd4AAeNgu4M2AJ87uGnnJaoDLCcDuVxkBoHRWCn6LmfB4SKK1Dty1SkGukkTcZPEk9wpHLSiRQ3"
Config = {
"environment": "local",
"app": {
"secret_key": "secret_key"
},
"login": {
"github": {
"client_id": GITHUB_CLIENT_ID,
"access_token_url": 'https://github.com/login/oauth/access_token?client_id=%s&client_secret=%s&redirect_uri=%s/github&code=' % (
GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET, HOSTNAME),
"user_info_url": 'https://api.github.com/user?access_token=',
"emails_info_url": 'https://api.github.com/user/emails?access_token='
},
"qq": {
"client_id": QQ_CLIENT_ID,
"meta_content": QQ_META_CONTENT,
"access_token_url": 'https://graph.qq.com/oauth2.0/token?grant_type=authorization_code&client_id=%s&client_secret=%s&redirect_uri=%s/qq&code=' % (
QQ_CLIENT_ID, QQ_CLIENT_SECRET, HOSTNAME),
"openid_url": 'https://graph.qq.com/oauth2.0/me?access_token=',
"user_info_url": 'https://graph.qq.com/user/get_user_info?access_token=%s&oauth_consumer_key=%s&openid=%s'
},
"wechat": {
"client_id": WECHAT_APP_ID,
"access_token_url": "https://api.weixin.qq.com/sns/oauth2/access_token?appid=%s&secret=%s&code=%%s&grant_type=authorization_code" % (
WECHAT_APP_ID, WECHAT_SECRET),
"user_info_url": "https://api.weixin.qq.com/sns/userinfo?access_token=%s&openid=%s"
},
"weibo": {
"client_id": WEIBO_CLIENT_ID,
"meta_content": WEIBO_META_CONTENT,
"user_info_url": 'https://api.weibo.com/2/users/show.json?access_token=',
"email_info_url": 'https://api.weibo.com/2/account/profile/email.json?access_token=',
"access_token_url": 'https://api.weibo.com/oauth2/access_token?client_id=%s&client_secret=%s&grant_type=authorization_code&redirect_uri=%s/weibo&code=' % (
WEIBO_CLIENT_ID, WEIBO_CLIENT_SECRET, HOSTNAME)
},
"live": {
"client_id": LIVE_CLIENT_ID,
"client_secret": LIVE_CLIENT_SECRET,
"redirect_uri": '%s/live' % HOSTNAME,
"access_token_url": 'https://login.live.com/oauth20_token.srf',
"user_info_url": 'https://apis.live.net/v5.0/me?access_token='
},
"alauda": {
"client_id": ALAUDA_CLIENT_ID,
"client_secret": ALAUDA_CLIENT_SECRET,
"redirect_uri": '%s/alauda' % HOSTNAME,
"access_token_url": 'http://console.int.alauda.io/oauth/token'
},
"provider_enabled": ["github", "wechat"],
"session_valid_time_minutes": 60
},
"hackathon-api": {
"endpoint": HACKATHON_API_ENDPOINT
},
"javascript": {
"github": {
"authorize_url": "https://github.com/login/oauth/authorize?client_id=%s&redirect_uri=%s/github&scope=user" % (
GITHUB_CLIENT_ID, HOSTNAME)
},
"weibo": {
"authorize_url": "https://api.weibo.com/oauth2/authorize?client_id=%s&redirect_uri=%s/weibo&scope=all" % (
WEIBO_CLIENT_ID, HOSTNAME)
},
"qq": {
"authorize_url": "https://graph.qq.com/oauth2.0/authorize?client_id=%s&redirect_uri=%s/qq&scope=get_user_info&state=%s&response_type=code" % (
QQ_CLIENT_ID, HOSTNAME, QQ_OAUTH_STATE)
},
"wechat": {
"authorize_url": "https://open.weixin.qq.com/connect/qrconnect?appid=%s&redirect_uri=%s/wechat&response_type=code&scope=snsapi_login&state=%s#wechat_redirect" % (
WECHAT_APP_ID, HOSTNAME, WECHAT_OAUTH_STATE)
},
"live": {
"authorize_url": "https://login.live.com/oauth20_authorize.srf?client_id=%s&scope=wl.basic+,wl.emails&response_type=code&redirect_uri=%s/live" % (
LIVE_CLIENT_ID, HOSTNAME)
},
"alauda": {
"authorize_url": "http://console.int.alauda.io/oauth/authorize?response_type=code&client_id=%s&state=state&redirect_uri=%s/alauda" % (
ALAUDA_CLIENT_ID, HOSTNAME)
},
"hackathon": {
"endpoint": HACKATHON_API_ENDPOINT
},
"apiconfig": {
"proxy": HACKATHON_API_ENDPOINT,
"api": {
"admin": {
"hackathon": {
"": ["get", "post", "put", "delete"],
"checkname": ["get"],
"list": ["get"],
"online": ["post"],
"applyonline": ["post"],
"offline": ["post"],
"tags": ["get", "post", "put", "delete"],
"config": ["get", "post", "put", "delete"],
"administrator": {
"": ["put", "post", "delete"],
"list": ["get"]
},
"template": {
"": ["post", "delete"],
"list": ["get"],
"check": ["get"]
},
"organizer": {
"": ["get", "post", "put", "delete"]
},
"award": {
"": ["get", "post", "put", "delete"],
"list": ["get"]
},
"notice": {
"": ["get", "post", "put", "delete"]
}
},
"registration": {
"": ["get", "post", "delete", "put"],
"list": ["get"]
},
"azure": {
"": ["get", "post", "delete", "put"],
"checksubid": ["post"]
},
"experiment": {
"list": ["get"],
"": ["post", "put"]
},
"team": {
"list": ["get"],
"score": {
"list": ["get"]
},
"award": ["get", "post", "delete"]
},
"user": {
"list": ["get"]
},
"hostserver": {
"": ["get", "post", "delete", "put"],
"list": ["get"]
}
},
"template": {
"": ["get", "post", "delete", "put"],
"file": ["post"],
"list": ["get"],
"check": ["get"]
},
"user": {
"": ["get"],
"login": ["post", "delete"],
"experiment": {
"": ["get", "post", "delete", "put"]
},
"registration": {
"": ["put", "post", "get"],
"checkemail": ["get"],
"list": ["get"]
},
"profile": {
"": ["post", "put"]
},
"picture": {
"": ["put"]
},
"team": {
"member": ["get"]
},
"hackathon": {
"like": ["get", "post", "delete"]
},
"notice": {
"read": ["put"]
},
"show": {
"list": ["get"]
},
"file": {
"": ["post"]
}
},
"hackathon": {
"": ["get"],
"list": ["get"],
"stat": ["get"],
"template": ["get"],
"team": {
"list": ["get"]
},
"registration": {
"list": ["get"]
},
"show": {
"list": ["get"]
},
"grantedawards": ["get"],
"notice": {
"list": ["get"]
}
},
"team": {
"": ["get", "post", "put", "delete"],
"score": ["get", "post", "put"],
"member": {
"": ["post", "put", "delete"],
"list": ["get"]
},
"show": ["get", "post", "delete"],
"template": ["post", "delete"]
},
"talent": {
"list": ["get"]
},
"grantedawards": ["get"]
}
}
}
}
| 42.93007 | 174 | 0.476136 |
HOSTNAME = "http://localhost"
QQ_OAUTH_STATE = "openhackathon"
HACKATHON_API_ENDPOINT = "http://localhost:15000"
GITHUB_CLIENT_ID = "b44f3d47bdeb26b9c4e6"
GITHUB_CLIENT_SECRET = "98de14161c4b2ed3ea7a19787d62cda73b8e292c"
QQ_CLIENT_ID = "101200890"
QQ_CLIENT_SECRET = "88ad67bd4521c4cc47136854781cb9b5"
QQ_META_CONTENT = "274307566465013314076545663016134754100636"
WECHAT_APP_ID = "wxe75b8aef71c2059f"
WECHAT_SECRET = "4532b90750f4c7bc70fcfbc42d881622"
WECHAT_OAUTH_STATE = "openhackathon"
WEIBO_CLIENT_ID = "479757037"
WEIBO_CLIENT_SECRET = "efc5e75ff8891be37d90b4eaec5c02de"
WEIBO_META_CONTENT = "ae884e09bc02b700"
LIVE_CLIENT_ID = "000000004414E0A6"
LIVE_CLIENT_SECRET = "b4mkfVqjtwHY2wJh0T4tj74lxM5LgAT2"
ALAUDA_CLIENT_ID = "4VR9kzNZVyWcnk9OnAwMuSus7xOOcozJIpic6W6y"
ALAUDA_CLIENT_SECRET = "E5PUL5h9feLlEirec5HQhjIzYecv7vVbEBjWLBkRMoCoFXdvS1PzNmd4AAeNgu4M2AJ87uGnnJaoDLCcDuVxkBoHRWCn6LmfB4SKK1Dty1SkGukkTcZPEk9wpHLSiRQ3"
Config = {
"environment": "local",
"app": {
"secret_key": "secret_key"
},
"login": {
"github": {
"client_id": GITHUB_CLIENT_ID,
"access_token_url": 'https://github.com/login/oauth/access_token?client_id=%s&client_secret=%s&redirect_uri=%s/github&code=' % (
GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET, HOSTNAME),
"user_info_url": 'https://api.github.com/user?access_token=',
"emails_info_url": 'https://api.github.com/user/emails?access_token='
},
"qq": {
"client_id": QQ_CLIENT_ID,
"meta_content": QQ_META_CONTENT,
"access_token_url": 'https://graph.qq.com/oauth2.0/token?grant_type=authorization_code&client_id=%s&client_secret=%s&redirect_uri=%s/qq&code=' % (
QQ_CLIENT_ID, QQ_CLIENT_SECRET, HOSTNAME),
"openid_url": 'https://graph.qq.com/oauth2.0/me?access_token=',
"user_info_url": 'https://graph.qq.com/user/get_user_info?access_token=%s&oauth_consumer_key=%s&openid=%s'
},
"wechat": {
"client_id": WECHAT_APP_ID,
"access_token_url": "https://api.weixin.qq.com/sns/oauth2/access_token?appid=%s&secret=%s&code=%%s&grant_type=authorization_code" % (
WECHAT_APP_ID, WECHAT_SECRET),
"user_info_url": "https://api.weixin.qq.com/sns/userinfo?access_token=%s&openid=%s"
},
"weibo": {
"client_id": WEIBO_CLIENT_ID,
"meta_content": WEIBO_META_CONTENT,
"user_info_url": 'https://api.weibo.com/2/users/show.json?access_token=',
"email_info_url": 'https://api.weibo.com/2/account/profile/email.json?access_token=',
"access_token_url": 'https://api.weibo.com/oauth2/access_token?client_id=%s&client_secret=%s&grant_type=authorization_code&redirect_uri=%s/weibo&code=' % (
WEIBO_CLIENT_ID, WEIBO_CLIENT_SECRET, HOSTNAME)
},
"live": {
"client_id": LIVE_CLIENT_ID,
"client_secret": LIVE_CLIENT_SECRET,
"redirect_uri": '%s/live' % HOSTNAME,
"access_token_url": 'https://login.live.com/oauth20_token.srf',
"user_info_url": 'https://apis.live.net/v5.0/me?access_token='
},
"alauda": {
"client_id": ALAUDA_CLIENT_ID,
"client_secret": ALAUDA_CLIENT_SECRET,
"redirect_uri": '%s/alauda' % HOSTNAME,
"access_token_url": 'http://console.int.alauda.io/oauth/token'
},
"provider_enabled": ["github", "wechat"],
"session_valid_time_minutes": 60
},
"hackathon-api": {
"endpoint": HACKATHON_API_ENDPOINT
},
"javascript": {
"github": {
"authorize_url": "https://github.com/login/oauth/authorize?client_id=%s&redirect_uri=%s/github&scope=user" % (
GITHUB_CLIENT_ID, HOSTNAME)
},
"weibo": {
"authorize_url": "https://api.weibo.com/oauth2/authorize?client_id=%s&redirect_uri=%s/weibo&scope=all" % (
WEIBO_CLIENT_ID, HOSTNAME)
},
"qq": {
"authorize_url": "https://graph.qq.com/oauth2.0/authorize?client_id=%s&redirect_uri=%s/qq&scope=get_user_info&state=%s&response_type=code" % (
QQ_CLIENT_ID, HOSTNAME, QQ_OAUTH_STATE)
},
"wechat": {
"authorize_url": "https://open.weixin.qq.com/connect/qrconnect?appid=%s&redirect_uri=%s/wechat&response_type=code&scope=snsapi_login&state=%s#wechat_redirect" % (
WECHAT_APP_ID, HOSTNAME, WECHAT_OAUTH_STATE)
},
"live": {
"authorize_url": "https://login.live.com/oauth20_authorize.srf?client_id=%s&scope=wl.basic+,wl.emails&response_type=code&redirect_uri=%s/live" % (
LIVE_CLIENT_ID, HOSTNAME)
},
"alauda": {
"authorize_url": "http://console.int.alauda.io/oauth/authorize?response_type=code&client_id=%s&state=state&redirect_uri=%s/alauda" % (
ALAUDA_CLIENT_ID, HOSTNAME)
},
"hackathon": {
"endpoint": HACKATHON_API_ENDPOINT
},
"apiconfig": {
"proxy": HACKATHON_API_ENDPOINT,
"api": {
"admin": {
"hackathon": {
"": ["get", "post", "put", "delete"],
"checkname": ["get"],
"list": ["get"],
"online": ["post"],
"applyonline": ["post"],
"offline": ["post"],
"tags": ["get", "post", "put", "delete"],
"config": ["get", "post", "put", "delete"],
"administrator": {
"": ["put", "post", "delete"],
"list": ["get"]
},
"template": {
"": ["post", "delete"],
"list": ["get"],
"check": ["get"]
},
"organizer": {
"": ["get", "post", "put", "delete"]
},
"award": {
"": ["get", "post", "put", "delete"],
"list": ["get"]
},
"notice": {
"": ["get", "post", "put", "delete"]
}
},
"registration": {
"": ["get", "post", "delete", "put"],
"list": ["get"]
},
"azure": {
"": ["get", "post", "delete", "put"],
"checksubid": ["post"]
},
"experiment": {
"list": ["get"],
"": ["post", "put"]
},
"team": {
"list": ["get"],
"score": {
"list": ["get"]
},
"award": ["get", "post", "delete"]
},
"user": {
"list": ["get"]
},
"hostserver": {
"": ["get", "post", "delete", "put"],
"list": ["get"]
}
},
"template": {
"": ["get", "post", "delete", "put"],
"file": ["post"],
"list": ["get"],
"check": ["get"]
},
"user": {
"": ["get"],
"login": ["post", "delete"],
"experiment": {
"": ["get", "post", "delete", "put"]
},
"registration": {
"": ["put", "post", "get"],
"checkemail": ["get"],
"list": ["get"]
},
"profile": {
"": ["post", "put"]
},
"picture": {
"": ["put"]
},
"team": {
"member": ["get"]
},
"hackathon": {
"like": ["get", "post", "delete"]
},
"notice": {
"read": ["put"]
},
"show": {
"list": ["get"]
},
"file": {
"": ["post"]
}
},
"hackathon": {
"": ["get"],
"list": ["get"],
"stat": ["get"],
"template": ["get"],
"team": {
"list": ["get"]
},
"registration": {
"list": ["get"]
},
"show": {
"list": ["get"]
},
"grantedawards": ["get"],
"notice": {
"list": ["get"]
}
},
"team": {
"": ["get", "post", "put", "delete"],
"score": ["get", "post", "put"],
"member": {
"": ["post", "put", "delete"],
"list": ["get"]
},
"show": ["get", "post", "delete"],
"template": ["post", "delete"]
},
"talent": {
"list": ["get"]
},
"grantedawards": ["get"]
}
}
}
}
| true | true |
f7fbda38fbc7b3f196dad37b9ad073f1cfcd3845 | 9,350 | py | Python | cqlengine/tests/statements/test_assignment_clauses.py | dokai/cqlengine | a080aff3a73351d37126b14eef606061b445aa37 | [
"BSD-3-Clause"
] | null | null | null | cqlengine/tests/statements/test_assignment_clauses.py | dokai/cqlengine | a080aff3a73351d37126b14eef606061b445aa37 | [
"BSD-3-Clause"
] | null | null | null | cqlengine/tests/statements/test_assignment_clauses.py | dokai/cqlengine | a080aff3a73351d37126b14eef606061b445aa37 | [
"BSD-3-Clause"
] | null | null | null | from unittest import TestCase
from cqlengine.statements import AssignmentClause, SetUpdateClause, ListUpdateClause, MapUpdateClause, MapDeleteClause, FieldDeleteClause, CounterUpdateClause
class AssignmentClauseTests(TestCase):
def test_rendering(self):
pass
def test_insert_tuple(self):
ac = AssignmentClause('a', 'b')
ac.set_context_id(10)
self.assertEqual(ac.insert_tuple(), ('a', 10))
class SetUpdateClauseTests(TestCase):
def test_update_from_none(self):
c = SetUpdateClause('s', {1, 2}, None)
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, {1, 2})
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {1, 2}})
def test_null_update(self):
""" tests setting a set to None creates an empty update statement """
c = SetUpdateClause('s', None, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 0)
self.assertEqual(str(c), '')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {})
def test_no_update(self):
""" tests an unchanged value creates an empty update statement """
c = SetUpdateClause('s', {1, 2}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 0)
self.assertEqual(str(c), '')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {})
def test_additions(self):
c = SetUpdateClause('s', {1, 2, 3}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._additions, {3})
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = "s" + :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}})
def test_removals(self):
c = SetUpdateClause('s', {1, 2}, {1, 2, 3})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertEqual(c._removals, {3})
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = "s" - :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}})
def test_additions_and_removals(self):
c = SetUpdateClause('s', {2, 3}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._additions, {3})
self.assertEqual(c._removals, {1})
self.assertEqual(c.get_context_size(), 2)
self.assertEqual(str(c), '"s" = "s" + :0, "s" = "s" - :1')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}, '1': {1}})
class ListUpdateClauseTests(TestCase):
def test_update_from_none(self):
c = ListUpdateClause('s', [1, 2, 3])
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, [1, 2, 3])
self.assertIsNone(c._append)
self.assertIsNone(c._prepend)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': [1, 2, 3]})
def test_update_from_empty(self):
c = ListUpdateClause('s', [1, 2, 3], previous=[])
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, [1, 2, 3])
self.assertIsNone(c._append)
self.assertIsNone(c._prepend)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': [1, 2, 3]})
def test_update_from_different_list(self):
c = ListUpdateClause('s', [1, 2, 3], previous=[3, 2, 1])
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, [1, 2, 3])
self.assertIsNone(c._append)
self.assertIsNone(c._prepend)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': [1, 2, 3]})
def test_append(self):
c = ListUpdateClause('s', [1, 2, 3, 4], previous=[1, 2])
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._append, [3, 4])
self.assertIsNone(c._prepend)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = "s" + :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': [3, 4]})
def test_prepend(self):
c = ListUpdateClause('s', [1, 2, 3, 4], previous=[3, 4])
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._append)
self.assertEqual(c._prepend, [1, 2])
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0 + "s"')
ctx = {}
c.update_context(ctx)
# test context list reversal
self.assertEqual(ctx, {'0': [2, 1]})
def test_append_and_prepend(self):
c = ListUpdateClause('s', [1, 2, 3, 4, 5, 6], previous=[3, 4])
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._append, [5, 6])
self.assertEqual(c._prepend, [1, 2])
self.assertEqual(c.get_context_size(), 2)
self.assertEqual(str(c), '"s" = :0 + "s", "s" = "s" + :1')
ctx = {}
c.update_context(ctx)
# test context list reversal
self.assertEqual(ctx, {'0': [2, 1], '1': [5, 6]})
def test_shrinking_list_update(self):
""" tests that updating to a smaller list results in an insert statement """
c = ListUpdateClause('s', [1, 2, 3], previous=[1, 2, 3, 4])
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, [1, 2, 3])
self.assertIsNone(c._append)
self.assertIsNone(c._prepend)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': [1, 2, 3]})
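The append/prepend detection these ListUpdateClause tests exercise can be sketched independently. This is a hypothetical helper, not the cqlengine implementation: if the previous list survives as a contiguous slice of the new one, only the surrounding elements need to be written, and the prepended part comes out reversed because CQL prepends elements one at a time.

```python
def diff_list(new, prev):
    """Return (prepend, append, assignment) for a list-column update.

    Sketch of the analysis the tests above describe: reuse the stored
    column only when `prev` appears as a contiguous slice of `new`;
    otherwise fall back to a full assignment.
    """
    if prev and len(new) >= len(prev):
        for i in range(len(new) - len(prev) + 1):
            if new[i:i + len(prev)] == prev:
                # reversed because CQL prepends one element at a time
                prepend = list(reversed(new[:i]))
                append = new[i + len(prev):]
                return prepend or None, append or None, None
    return None, None, list(new)
```

Run against the fixtures above, `previous=[3, 4]` with `new=[1, 2, 3, 4, 5, 6]` yields the reversed prepend `[2, 1]` and append `[5, 6]`, while a shrinking list falls back to assignment.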
class MapUpdateTests(TestCase):
def test_update(self):
c = MapUpdateClause('s', {3: 0, 5: 6}, {5: 0, 3: 4})
c._analyze()
c.set_context_id(0)
self.assertEqual(c._updates, [3, 5])
self.assertEqual(c.get_context_size(), 4)
self.assertEqual(str(c), '"s"[:0] = :1, "s"[:2] = :3')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': 3, "1": 0, '2': 5, '3': 6})
def test_update_from_null(self):
c = MapUpdateClause('s', {3: 0, 5: 6})
c._analyze()
c.set_context_id(0)
self.assertEqual(c._updates, [3, 5])
self.assertEqual(c.get_context_size(), 4)
self.assertEqual(str(c), '"s"[:0] = :1, "s"[:2] = :3')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': 3, "1": 0, '2': 5, '3': 6})
def test_nulled_columns_arent_included(self):
c = MapUpdateClause('s', {3: 0}, {1: 2, 3: 4})
c._analyze()
c.set_context_id(0)
self.assertNotIn(1, c._updates)
class CounterUpdateTests(TestCase):
def test_positive_update(self):
c = CounterUpdateClause('a', 5, 3)
c.set_context_id(5)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"a" = "a" + :5')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'5': 2})
def test_negative_update(self):
c = CounterUpdateClause('a', 4, 7)
c.set_context_id(3)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"a" = "a" - :3')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'3': 3})
    def noop_update(self):  # note: lacks the test_ prefix, so unittest discovery never runs it
c = CounterUpdateClause('a', 5, 5)
c.set_context_id(5)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"a" = "a" + :5')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'5': 0})
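The sign handling these counter tests cover condenses into a small standalone sketch (a hypothetical helper mirroring what the statements above render, not the cqlengine code itself): a positive delta renders `+`, a negative one renders `-` with the magnitude.

```python
def counter_clause(field, new, previous, ctx_id):
    """Render a counter update: add the positive delta or subtract its magnitude."""
    delta = new - previous
    op = '-' if delta < 0 else '+'
    cql = '"{0}" = "{0}" {1} :{2}'.format(field, op, ctx_id)
    return cql, {str(ctx_id): abs(delta)}
```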
class MapDeleteTests(TestCase):
def test_update(self):
c = MapDeleteClause('s', {3: 0}, {1: 2, 3: 4, 5: 6})
c._analyze()
c.set_context_id(0)
self.assertEqual(c._removals, [1, 5])
self.assertEqual(c.get_context_size(), 2)
self.assertEqual(str(c), '"s"[:0], "s"[:1]')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': 1, '1': 5})
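The removal analysis above amounts to a key-set difference: keys present in the previous map but absent from the new one get deleted. A hypothetical standalone sketch:

```python
def map_removals(new, previous):
    """Keys present before the update but absent (or nulled) after it."""
    return sorted(k for k in previous if k not in new)
```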
class FieldDeleteTests(TestCase):
def test_str(self):
f = FieldDeleteClause("blake")
assert str(f) == '"blake"'
from unittest import TestCase
from cqlengine.statements import AssignmentClause, SetUpdateClause, ListUpdateClause, MapUpdateClause, MapDeleteClause, FieldDeleteClause, CounterUpdateClause
class AssignmentClauseTests(TestCase):
def test_rendering(self):
pass
def test_insert_tuple(self):
ac = AssignmentClause('a', 'b')
ac.set_context_id(10)
self.assertEqual(ac.insert_tuple(), ('a', 10))
class SetUpdateClauseTests(TestCase):
def test_update_from_none(self):
c = SetUpdateClause('s', {1, 2}, None)
c._analyze()
c.set_context_id(0)
self.assertEqual(c._assignments, {1, 2})
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {1, 2}})
def test_null_update(self):
c = SetUpdateClause('s', None, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 0)
self.assertEqual(str(c), '')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {})
def test_no_update(self):
c = SetUpdateClause('s', {1, 2}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 0)
self.assertEqual(str(c), '')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {})
def test_additions(self):
c = SetUpdateClause('s', {1, 2, 3}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._additions, {3})
self.assertIsNone(c._removals)
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = "s" + :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}})
def test_removals(self):
c = SetUpdateClause('s', {1, 2}, {1, 2, 3})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertIsNone(c._additions)
self.assertEqual(c._removals, {3})
self.assertEqual(c.get_context_size(), 1)
self.assertEqual(str(c), '"s" = "s" - :0')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}})
def test_additions_and_removals(self):
c = SetUpdateClause('s', {2, 3}, {1, 2})
c._analyze()
c.set_context_id(0)
self.assertIsNone(c._assignments)
self.assertEqual(c._additions, {3})
self.assertEqual(c._removals, {1})
self.assertEqual(c.get_context_size(), 2)
self.assertEqual(str(c), '"s" = "s" + :0, "s" = "s" - :1')
ctx = {}
c.update_context(ctx)
self.assertEqual(ctx, {'0': {3}, '1': {1}})
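The branching these SetUpdateClause tests walk through reduces to a pair of set differences. A hypothetical sketch of that analysis (not the cqlengine implementation): a `None` previous value forces a full assignment, a `None` new value is a no-op, and otherwise only the two differences are sent.

```python
def set_diff(new, previous):
    """Split a set update into (assignment, additions, removals)."""
    if new is None:
        return None, None, None          # null update: nothing to do
    if previous is None:
        return new, None, None           # no prior value: full assignment
    return None, (new - previous) or None, (previous - new) or None
```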
# --- CNN/utils/activations.py — shafzhr/SimpleConvNet (MIT), 2,092 bytes, 2 stars, 1 issue, hexsha f7fbda7490aea884559b813a79978c1c6ea5a0cb ---
import numpy as np
import abc
class Activation(metaclass=abc.ABCMeta):
"""
Activation abstract class
"""
@abc.abstractmethod
def apply(self, x, is_training):
"""
Applying the activation function over `x`
"""
pass
@abc.abstractmethod
def backprop(self, dA_prev):
"""
Back Propagation in an activation function
"""
pass
class ReLU(Activation):
"""
ReLU activation function
"""
def __init__(self):
self.X = None
def apply(self, x, is_training=True):
"""
Applying ReLU over `x`
:param x: input (numpy array)
:param is_training: a boolean indicating whether training or not
"""
if is_training:
self.X = x.copy()
x[x < 0] = 0
return x
def backprop(self, dA_prev):
"""
Back Propagation in ReLU
:param dA_prev: derivative of the cost function with respect to the previous layer(when going backwards)
:return: the derivative of the cost layer with respect to the current layer
"""
return dA_prev * np.where(self.X > 0, 1, 0)
class Softmax(Activation):
"""
Softmax activation
"""
def __init__(self):
self.X = None
def apply(self, x, is_training=True):
"""
Applying Softmax over `x`
:param is_training: a boolean indicating whether training or not
:param x: input (numpy array)
"""
if is_training:
self.X = x.copy()
shiftx = x - np.max(x, axis=1, keepdims=True)
exps = np.exp(shiftx)
probs = exps / np.sum(exps, axis=1, keepdims=True)
return probs
def backprop(self, dA_prev):
"""
Back Propagation in Softmax
:param dA_prev: derivative of the cost function with respect to the previous layer(when going backwards)
:return: the derivative of the cost layer with respect to the current layer
"""
return dA_prev
ACTIVATION_FUNCTIONS = {'relu': ReLU, 'softmax': Softmax}
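The max-shift in `Softmax.apply` is the standard overflow guard: subtracting the row maximum before exponentiating leaves the probabilities unchanged but keeps `exp` in range. A minimal pure-Python sketch of the same trick (stdlib only, so it doesn't depend on numpy):

```python
import math

def stable_softmax(xs):
    """Subtract the max before exponentiating so exp() cannot overflow."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

With inputs like `[1000.0, 1000.0]`, a naive `exp` would overflow, but the shifted version returns `[0.5, 0.5]` exactly.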
# --- addon_common/common/updater_core.py — senjacob/retopoflow (license field: OML), 62,448 bytes, 1 star, hexsha f7fbda9b7a677ee3832e4b52f14ded305adcce8d ---
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
"""
See documentation for usage
https://github.com/CGCookie/blender-addon-updater
"""
import errno
import ssl
import urllib.request
import urllib
import os
import json
import zipfile
import shutil
import threading
import fnmatch
from datetime import datetime, timedelta
# blender imports, used in limited cases
import bpy
import addon_utils
# -----------------------------------------------------------------------------
# Define error messages/notices & hard coded globals
# -----------------------------------------------------------------------------
# currently not used
DEFAULT_TIMEOUT = 10
DEFAULT_PER_PAGE = 30
# -----------------------------------------------------------------------------
# The main class
# -----------------------------------------------------------------------------
class Singleton_updater(object):
"""
    Singleton class shared at module level; every user of the updater
    references this one instance
"""
def __init__(self):
self._engine = GithubEngine()
self._user = None
self._repo = None
self._website = None
self._current_version = None
self._subfolder_path = None
self._tags = []
self._tag_latest = None
self._tag_names = []
self._latest_release = None
self._use_releases = False
self._include_branches = False
self._include_branch_list = ['master']
self._include_branch_autocheck = False
self._manual_only = False
self._version_min_update = None
self._version_max_update = None
# by default, backup current addon if new is being loaded
self._backup_current = True
self._backup_ignore_patterns = None
# set patterns for what files to overwrite on update
self._overwrite_patterns = ["*.py","*.pyc"]
self._remove_pre_update_patterns = []
# by default, don't auto enable/disable the addon on update
# as it is slightly less stable/won't always fully reload module
self._auto_reload_post_update = False
# settings relating to frequency and whether to enable auto background check
self._check_interval_enable = False
self._check_interval_months = 0
self._check_interval_days = 7
self._check_interval_hours = 0
self._check_interval_minutes = 0
# runtime variables, initial conditions
self._verbose = False
self._fake_install = False
self._async_checking = False # only true when async daemon started
self._update_ready = None
self._update_link = None
self._update_version = None
self._source_zip = None
self._check_thread = None
self._select_link = None
self.skip_tag = None
# get from module data
self._addon = __package__.split('.')[0].lower()
self._addon_package = __package__.split('.')[0] # must be set to addon's package!
self._updater_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', 'updater_tmp'))
self._addon_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
self._json = {}
self._error = None
self._error_msg = None
self._prefiltered_tag_count = 0
# UI code only, ie not used within this module but still useful
# properties to have
# to verify a valid import, in place of placeholder import
self.showpopups = True # used in UI to show or not show update popups
self.invalidupdater = False
# pre-assign basic select-link function
def select_link_function(self, tag):
return tag["zipball_url"]
self._select_link = select_link_function
# -------------------------------------------------------------------------
# Getters and setters
# -------------------------------------------------------------------------
@property
def addon(self):
return self._addon
@addon.setter
def addon(self, value):
self._addon = str(value)
@property
def addon_package(self):
return self._addon_package
@addon_package.setter
def addon_package(self, value):
self._addon_package = value
@property
def api_url(self):
return self._engine.api_url
@api_url.setter
def api_url(self, value):
if self.check_is_url(value) == False:
raise ValueError("Not a valid URL: " + value)
self._engine.api_url = value
@property
def async_checking(self):
return self._async_checking
@property
def auto_reload_post_update(self):
return self._auto_reload_post_update
@auto_reload_post_update.setter
def auto_reload_post_update(self, value):
try:
self._auto_reload_post_update = bool(value)
except:
raise ValueError("Must be a boolean value")
@property
def backup_current(self):
return self._backup_current
@backup_current.setter
def backup_current(self, value):
if value == None:
self._backup_current = False
return
else:
self._backup_current = value
@property
def backup_ignore_patterns(self):
return self._backup_ignore_patterns
@backup_ignore_patterns.setter
def backup_ignore_patterns(self, value):
if value == None:
self._backup_ignore_patterns = None
return
elif type(value) != type(['list']):
raise ValueError("Backup pattern must be in list format")
else:
self._backup_ignore_patterns = value
@property
def check_interval(self):
return (self._check_interval_enable,
self._check_interval_months,
self._check_interval_days,
self._check_interval_hours,
self._check_interval_minutes)
@property
def current_version(self):
return self._current_version
@current_version.setter
def current_version(self, tuple_values):
if tuple_values==None:
self._current_version = None
return
elif type(tuple_values) is not tuple:
try:
tuple(tuple_values)
except:
raise ValueError(
"Not a tuple! current_version must be a tuple of integers")
for i in tuple_values:
if type(i) is not int:
raise ValueError(
"Not an integer! current_version must be a tuple of integers")
self._current_version = tuple(tuple_values)
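Storing versions as integer tuples, as the setter above enforces, makes update comparisons trivial because Python compares tuples element-wise. A quick illustration (a hypothetical helper, not part of the class):

```python
def is_newer(candidate, current):
    """True when candidate is a strictly newer version tuple than current."""
    return tuple(candidate) > tuple(current)
```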
@property
def engine(self):
return self._engine.name
@engine.setter
def engine(self, value):
if value.lower()=="github":
self._engine = GithubEngine()
elif value.lower()=="gitlab":
self._engine = GitlabEngine()
elif value.lower()=="bitbucket":
self._engine = BitbucketEngine()
else:
raise ValueError("Invalid engine selection")
@property
def error(self):
return self._error
@property
def error_msg(self):
return self._error_msg
@property
def fake_install(self):
return self._fake_install
@fake_install.setter
def fake_install(self, value):
if type(value) != type(False):
raise ValueError("fake_install must be a boolean value")
self._fake_install = bool(value)
# not currently used
@property
def include_branch_autocheck(self):
return self._include_branch_autocheck
@include_branch_autocheck.setter
def include_branch_autocheck(self, value):
try:
self._include_branch_autocheck = bool(value)
except:
raise ValueError("include_branch_autocheck must be a boolean value")
@property
def include_branch_list(self):
return self._include_branch_list
@include_branch_list.setter
def include_branch_list(self, value):
try:
if value == None:
self._include_branch_list = ['master']
elif type(value) != type(['master']) or value==[]:
raise ValueError("include_branch_list should be a list of valid branches")
else:
self._include_branch_list = value
except:
raise ValueError("include_branch_list should be a list of valid branches")
@property
def include_branches(self):
return self._include_branches
@include_branches.setter
def include_branches(self, value):
try:
self._include_branches = bool(value)
except:
raise ValueError("include_branches must be a boolean value")
@property
def json(self):
if self._json == {}:
self.set_updater_json()
return self._json
@property
def latest_release(self):
if self._latest_release == None:
return None
return self._latest_release
@property
def manual_only(self):
return self._manual_only
@manual_only.setter
def manual_only(self, value):
try:
self._manual_only = bool(value)
except:
raise ValueError("manual_only must be a boolean value")
@property
def overwrite_patterns(self):
return self._overwrite_patterns
@overwrite_patterns.setter
def overwrite_patterns(self, value):
if value == None:
self._overwrite_patterns = ["*.py","*.pyc"]
elif type(value) != type(['']):
raise ValueError("overwrite_patterns needs to be in a list format")
else:
self._overwrite_patterns = value
@property
def private_token(self):
return self._engine.token
@private_token.setter
def private_token(self, value):
if value==None:
self._engine.token = None
else:
self._engine.token = str(value)
@property
def remove_pre_update_patterns(self):
return self._remove_pre_update_patterns
@remove_pre_update_patterns.setter
def remove_pre_update_patterns(self, value):
if value == None:
self._remove_pre_update_patterns = []
elif type(value) != type(['']):
raise ValueError("remove_pre_update_patterns needs to be in a list format")
else:
self._remove_pre_update_patterns = value
@property
def repo(self):
return self._repo
@repo.setter
def repo(self, value):
try:
self._repo = str(value)
except:
raise ValueError("User must be a string")
@property
def select_link(self):
return self._select_link
@select_link.setter
def select_link(self, value):
# ensure it is a function assignment, with signature:
# input self, tag; returns link name
if not hasattr(value, "__call__"):
raise ValueError("select_link must be a function")
self._select_link = value
@property
def stage_path(self):
return self._updater_path
@stage_path.setter
def stage_path(self, value):
if value == None:
if self._verbose: print("Aborting assigning stage_path, it's null")
return
elif value != None and not os.path.exists(value):
try:
os.makedirs(value)
except:
                if self._verbose: print("Error trying to create staging path")
return
self._updater_path = value
@property
def subfolder_path(self):
return self._subfolder_path
@subfolder_path.setter
def subfolder_path(self, value):
self._subfolder_path = value
@property
def tags(self):
if self._tags == []:
return []
tag_names = []
for tag in self._tags:
tag_names.append(tag["name"])
return tag_names
@property
def tag_latest(self):
if self._tag_latest == None:
return None
return self._tag_latest["name"]
@property
def update_link(self):
return self._update_link
@property
def update_ready(self):
return self._update_ready
@property
def update_version(self):
return self._update_version
@property
def use_releases(self):
return self._use_releases
@use_releases.setter
def use_releases(self, value):
try:
self._use_releases = bool(value)
except:
raise ValueError("use_releases must be a boolean value")
@property
def user(self):
return self._user
@user.setter
def user(self, value):
try:
self._user = str(value)
except:
raise ValueError("User must be a string value")
@property
def verbose(self):
return self._verbose
@verbose.setter
def verbose(self, value):
try:
self._verbose = bool(value)
if self._verbose == True:
print(self._addon+" updater verbose is enabled")
except:
raise ValueError("Verbose must be a boolean value")
@property
def version_max_update(self):
return self._version_max_update
@version_max_update.setter
def version_max_update(self, value):
if value == None:
self._version_max_update = None
return
if type(value) != type((1,2,3)):
raise ValueError("Version maximum must be a tuple")
for subvalue in value:
if type(subvalue) != int:
raise ValueError("Version elements must be integers")
self._version_max_update = value
@property
def version_min_update(self):
return self._version_min_update
@version_min_update.setter
def version_min_update(self, value):
if value == None:
self._version_min_update = None
return
if type(value) != type((1,2,3)):
raise ValueError("Version minimum must be a tuple")
for subvalue in value:
if type(subvalue) != int:
raise ValueError("Version elements must be integers")
self._version_min_update = value
@property
def website(self):
return self._website
@website.setter
def website(self, value):
if self.check_is_url(value) == False:
raise ValueError("Not a valid URL: " + value)
self._website = value
# -------------------------------------------------------------------------
# Parameter validation related functions
# -------------------------------------------------------------------------
def check_is_url(self, url):
if not ("http://" in url or "https://" in url):
return False
if "." not in url:
return False
return True
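`check_is_url` is a deliberately loose substring test. A slightly stricter sketch using the stdlib parser (a hypothetical alternative, not what the updater ships) rejects non-HTTP schemes and hostless strings:

```python
from urllib.parse import urlparse

def looks_like_url(url):
    """Accept only http(s) URLs whose host contains a dot."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and "." in parts.netloc
```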
def get_tag_names(self):
tag_names = []
self.get_tags()
for tag in self._tags:
tag_names.append(tag["name"])
return tag_names
def set_check_interval(self,enable=False,months=0,days=14,hours=0,minutes=0):
# enabled = False, default initially will not check against frequency
# if enabled, default is then 2 weeks
if type(enable) is not bool:
raise ValueError("Enable must be a boolean value")
if type(months) is not int:
raise ValueError("Months must be an integer value")
if type(days) is not int:
raise ValueError("Days must be an integer value")
if type(hours) is not int:
raise ValueError("Hours must be an integer value")
if type(minutes) is not int:
raise ValueError("Minutes must be an integer value")
if enable==False:
self._check_interval_enable = False
else:
self._check_interval_enable = True
self._check_interval_months = months
self._check_interval_days = days
self._check_interval_hours = hours
self._check_interval_minutes = minutes
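The interval fields above combine into a single offset when deciding whether a background check is due. A hedged sketch of that arithmetic — approximating a month as 30 days, which is an assumption here rather than something this file specifies:

```python
from datetime import datetime, timedelta

def next_check_due(last_check, months, days, hours, minutes):
    """True when enough time has passed since last_check (month ~= 30 days)."""
    offset = timedelta(days=days + 30 * months, hours=hours, minutes=minutes)
    return datetime.now() >= last_check + offset
```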
# declare how the class gets printed
def __repr__(self):
return "<Module updater from {a}>".format(a=__file__)
def __str__(self):
return "Updater, with user: {a}, repository: {b}, url: {c}".format(
a=self._user,
b=self._repo, c=self.form_repo_url())
# -------------------------------------------------------------------------
# API-related functions
# -------------------------------------------------------------------------
def form_repo_url(self):
return self._engine.form_repo_url(self)
def form_tags_url(self):
return self._engine.form_tags_url(self)
def form_branch_url(self, branch):
return self._engine.form_branch_url(branch, self)
def get_tags(self):
request = self.form_tags_url()
if self._verbose: print("Getting tags from server")
# get all tags, internet call
all_tags = self._engine.parse_tags(self.get_api(request), self)
if all_tags is not None:
self._prefiltered_tag_count = len(all_tags)
else:
self._prefiltered_tag_count = 0
all_tags = []
# pre-process to skip tags
if self.skip_tag != None:
self._tags = [tg for tg in all_tags if self.skip_tag(self, tg)==False]
else:
self._tags = all_tags
# get additional branches too, if needed, and place in front
# Does NO checking here whether branch is valid
if self._include_branches == True:
temp_branches = self._include_branch_list.copy()
temp_branches.reverse()
for branch in temp_branches:
request = self.form_branch_url(branch)
include = {
"name":branch.title(),
"zipball_url":request
}
self._tags = [include] + self._tags # append to front
if self._tags == None:
# some error occurred
self._tag_latest = None
self._tags = []
return
elif self._prefiltered_tag_count == 0 and self._include_branches == False:
self._tag_latest = None
if self._error == None: # if not None, could have had no internet
self._error = "No releases found"
self._error_msg = "No releases or tags found on this repository"
if self._verbose: print("No releases or tags found on this repository")
elif self._prefiltered_tag_count == 0 and self._include_branches == True:
if not self._error: self._tag_latest = self._tags[0]
if self._verbose:
branch = self._include_branch_list[0]
print("{} branch found, no releases".format(branch), self._tags[0])
elif (len(self._tags)-len(self._include_branch_list)==0 and self._include_branches==True) \
or (len(self._tags)==0 and self._include_branches==False) \
and self._prefiltered_tag_count > 0:
self._tag_latest = None
self._error = "No releases available"
self._error_msg = "No versions found within compatible version range"
if self._verbose: print("No versions found within compatible version range")
else:
if self._include_branches == False:
self._tag_latest = self._tags[0]
if self._verbose: print("Most recent tag found:",self._tags[0]['name'])
else:
# don't return branch if in list
n = len(self._include_branch_list)
self._tag_latest = self._tags[n] # guaranteed at least len()=n+1
if self._verbose: print("Most recent tag found:",self._tags[n]['name'])
# all API calls to base url
def get_raw(self, url):
# print("Raw request:", url)
request = urllib.request.Request(url)
try:
context = ssl._create_unverified_context()
except:
            # some Blender-packaged Python versions don't have this; it mostly
            # matters for local network setups, otherwise the impact is minimal
context = None
# setup private request headers if appropriate
if self._engine.token != None:
if self._engine.name == "gitlab":
request.add_header('PRIVATE-TOKEN',self._engine.token)
else:
if self._verbose: print("Tokens not setup for engine yet")
# run the request
try:
if context:
result = urllib.request.urlopen(request, context=context)
else:
result = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:
if str(e.code) == "403":
self._error = "HTTP error (access denied)"
self._error_msg = str(e.code) + " - server error response"
print(self._error, self._error_msg)
else:
self._error = "HTTP error"
self._error_msg = str(e.code)
print(self._error, self._error_msg)
self._update_ready = None
except urllib.error.URLError as e:
reason = str(e.reason)
if "TLSV1_ALERT" in reason or "SSL" in reason.upper():
self._error = "Connection rejected, download manually"
self._error_msg = reason
print(self._error, self._error_msg)
else:
self._error = "URL error, check internet connection"
self._error_msg = reason
print(self._error, self._error_msg)
self._update_ready = None
return None
else:
result_string = result.read()
result.close()
return result_string.decode()
# result of all api calls, decoded into json format
def get_api(self, url):
# return the json version
get = None
get = self.get_raw(url)
if get != None:
try:
return json.JSONDecoder().decode(get)
except Exception as e:
self._error = "API response has invalid JSON format"
                self._error_msg = str(e)  # note: JSONDecodeError has no .reason attribute
self._update_ready = None
print(self._error, self._error_msg)
return None
else:
return None
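The decode step in `get_api` is equivalent to `json.loads` wrapped in error handling. A standalone sketch of the same pattern (hypothetical helper names):

```python
import json

def parse_json_or_none(text):
    """Return (decoded object, None), or (None, message) on malformed input."""
    try:
        return json.loads(text), None
    except json.JSONDecodeError as e:
        return None, "API response has invalid JSON format: {}".format(e)
```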
# create a working directory and download the new files
def stage_repository(self, url):
local = os.path.join(self._updater_path,"update_staging")
error = None
# make/clear the staging folder
# ensure the folder is always "clean"
if self._verbose: print("Preparing staging folder for download:\n",local)
if os.path.isdir(local) == True:
try:
shutil.rmtree(local)
os.makedirs(local)
except:
error = "failed to remove existing staging directory. check permissions for staging directory"
else:
try:
os.makedirs(local)
except:
error = "failed to create staging directory. check permissions for staging directory"
if error != None:
if self._verbose: print("Error: Aborting update, "+error)
self._error = "Update aborted, staging path error"
self._error_msg = "Error: {}".format(error)
return False
if self._backup_current==True:
self.create_backup()
if self._verbose: print("Now retrieving the new source zip")
self._source_zip = os.path.join(local,"source.zip")
if self._verbose: print("Starting download update zip")
try:
request = urllib.request.Request(url)
context = ssl._create_unverified_context()
# setup private token if appropriate
if self._engine.token != None:
if self._engine.name == "gitlab":
request.add_header('PRIVATE-TOKEN',self._engine.token)
else:
if self._verbose: print("Tokens not setup for selected engine yet")
self.urlretrieve(urllib.request.urlopen(request,context=context), self._source_zip)
# add additional checks on file size being non-zero
if self._verbose: print("Successfully downloaded update zip")
return True
except Exception as e:
self._error = "Error retrieving download, bad link?"
self._error_msg = "Error: {}".format(e)
if self._verbose:
print("Error retrieving download, bad link?")
print("Error: {}".format(e))
return False
def create_backup(self):
if self._verbose: print("Backing up current addon folder")
local = os.path.join(self._updater_path,"backup")
tempdest = os.path.join(self._addon_root,
os.pardir,
self._addon+"_updater_backup_temp")
if self._verbose: print("Backup destination path: ",local)
if os.path.isdir(local):
try:
shutil.rmtree(local)
except:
if self._verbose: print("Failed to remove previous backup folder, continuing")
# remove the temp folder; shouldn't exist but could if previously interrupted
if os.path.isdir(tempdest):
try:
shutil.rmtree(tempdest)
except:
if self._verbose: print("Failed to remove existing temp folder, continuing")
# make the full addon copy, which temporarily places outside the addon folder
if self._backup_ignore_patterns != None:
shutil.copytree(
self._addon_root,tempdest,
ignore=shutil.ignore_patterns(*self._backup_ignore_patterns))
else:
shutil.copytree(self._addon_root,tempdest)
shutil.move(tempdest,local)
# save the date for future ref
now = datetime.now()
self._json["backup_date"] = "{m}-{d}-{yr}".format(
m=now.strftime("%B"),d=now.day,yr=now.year)
self.save_updater_json()
def restore_backup(self):
if self._verbose: print("Restoring backup")
if self._verbose: print("Backing up current addon folder")
backuploc = os.path.join(self._updater_path,"backup")
tempdest = os.path.join(self._addon_root,
os.pardir,
self._addon+"_updater_backup_temp")
tempdest = os.path.abspath(tempdest)
# make the copy
shutil.move(backuploc,tempdest)
shutil.rmtree(self._addon_root)
os.rename(tempdest,self._addon_root)
self._json["backup_date"] = ""
self._json["just_restored"] = True
self._json["just_updated"] = True
self.save_updater_json()
self.reload_addon()
def unpack_staged_zip(self,clean=False):
"""Unzip the downloaded file, and validate contents"""
if os.path.isfile(self._source_zip) == False:
if self._verbose: print("Error, update zip not found")
self._error = "Install failed"
self._error_msg = "Downloaded zip not found"
return -1
# clear the existing source folder in case previous files remain
outdir = os.path.join(self._updater_path, "source")
if os.path.exists(outdir):
if self._verbose:
print("Old source folder exists; clearing")
try:
shutil.rmtree(outdir)
if self._verbose:
print("Old source folder cleared")
except Exception as err:
print("Error occurred while clearing old extract dir:")
print(str(err))
self._error = "Install failed"
self._error_msg = "Failed to clear old extract directory"
return -1
# Create parent directories if needed, would not be relevant unless
# installing addon into another location or via an addon manager
try:
os.makedirs(outdir)
if self._verbose:
print("Source folder cleared and recreated")
except Exception as err:
print("Error occurred while making extract dir:")
print(str(err))
self._error = "Install failed"
self._error_msg = "Failed to make extract directory"
return -1
if not os.path.isdir(outdir):
print("Failed to create source directory")
self._error = "Install failed"
self._error_msg = "Failed to create extract directory"
return -1
if self._verbose:
print("Begin extracting source from zip:", self._source_zip)
zfile = zipfile.ZipFile(self._source_zip, "r")
if not zfile:
if self._verbose:
print("Resulting file is not a zip, cannot extract")
self._error = "Install failed"
self._error_msg = "Resulting file is not a zip, cannot extract"
return -1
# Now extract directly from the first subfolder (not root)
# this avoids adding the first subfolder to the path length,
# which can be too long if the download has the SHA in the name
zsep = '/' # zip spec: archive member paths always use forward slashes
for name in zfile.namelist():
if zsep not in name:
continue
top_folder = name[:name.index(zsep)+1]
if name == top_folder: # top_folder already ends with zsep
continue # skip top level folder
subpath = name[name.index(zsep)+1:]
if name.endswith(zsep):
try:
os.mkdir(os.path.join(outdir, subpath))
if self._verbose:
print("Extract - mkdir: ", os.path.join(outdir, subpath))
except OSError as exc:
if exc.errno != errno.EEXIST:
self._error = "Install failed"
self._error_msg = "Could not create folder from zip"
return -1
else:
with open(os.path.join(outdir, subpath), "wb") as outfile:
data = zfile.read(name)
outfile.write(data)
if self._verbose:
print("Extract - create:", os.path.join(outdir, subpath))
if self._verbose:
print("Extracted source")
unpath = os.path.join(self._updater_path, "source")
if not os.path.isdir(unpath):
self._error = "Install failed"
self._error_msg = "Extracted path does not exist"
print("Extracted path does not exist: ", unpath)
return -1
if self._subfolder_path:
self._subfolder_path = self._subfolder_path.replace('/', os.path.sep)
self._subfolder_path = self._subfolder_path.replace('\\', os.path.sep)
# either directly in root of zip/one subfolder, or use specified path
if os.path.isfile(os.path.join(unpath,"__init__.py")) == False:
dirlist = os.listdir(unpath)
if len(dirlist)>0:
if self._subfolder_path == "" or self._subfolder_path == None:
unpath = os.path.join(unpath, dirlist[0])
else:
unpath = os.path.join(unpath, self._subfolder_path)
# smarter check for additional sub folders for a single folder
# containing __init__.py
if os.path.isfile(os.path.join(unpath,"__init__.py")) == False:
if self._verbose:
print("No valid addon found in extracted source")
print("Paths:")
print(dirlist)
self._error = "Install failed"
self._error_msg = "No __init__ file found in new source"
return -1
# merge code with running addon directory, using blender default behavior
# plus any modifiers indicated by user (e.g. force remove/keep)
self.deepMergeDirectory(self._addon_root, unpath, clean)
# Now save the json state
# Change to True, to trigger the handler on other side
# if allowing reloading within same blender instance
self._json["just_updated"] = True
self.save_updater_json()
self.reload_addon()
self._update_ready = False
return 0
def deepMergeDirectory(self,base,merger,clean=False):
"""Merge folder 'merger' into folder 'base' without deleting existing"""
if not os.path.exists(base):
if self._verbose:
print("Base path does not exist:", base)
return -1
elif not os.path.exists(merger):
if self._verbose:
print("Merger path does not exist")
return -1
# paths to be aware of and not overwrite/remove/etc
staging_path = os.path.join(self._updater_path,"update_staging")
backup_path = os.path.join(self._updater_path,"backup")
# If clean install is enabled, clear existing files ahead of time
# note: will not delete the update.json, update folder, staging, or staging
# but will delete all other folders/files in addon directory
error = None
if clean==True:
try:
# implement clearing of all folders/files, except the
# updater folder and updater json
# Careful, this deletes entire subdirectories recursively...
# make sure that base is not a high level shared folder, but
# is dedicated just to the addon itself
if self._verbose: print("clean=True, clearing addon folder to fresh install state")
# remove root files and folders (except update folder)
files = [f for f in os.listdir(base) if os.path.isfile(os.path.join(base,f))]
folders = [f for f in os.listdir(base) if os.path.isdir(os.path.join(base,f))]
for f in files:
os.remove(os.path.join(base,f))
print("Clean removing file {}".format(os.path.join(base,f)))
for f in folders:
if os.path.join(base,f)==self._updater_path: continue
shutil.rmtree(os.path.join(base,f))
print("Clean removing folder and contents {}".format(os.path.join(base,f)))
except Exception as err:
error = "failed to create clean existing addon folder"
print(error, str(err))
# Walk through the base addon folder for rules on pre-removing
# but avoid removing/altering backup and updater file
for path, dirs, files in os.walk(base):
# prune ie skip updater folder
dirs[:] = [d for d in dirs if os.path.join(path,d) not in [self._updater_path]]
for file in files:
for ptrn in self.remove_pre_update_patterns:
if fnmatch.filter([file],ptrn):
try:
fl = os.path.join(path,file)
os.remove(fl)
if self._verbose: print("Pre-removed file "+file)
except OSError:
print("Failed to pre-remove "+file)
# Walk through the temp addon sub folder for replacements
# this implements the overwrite rules, which apply after
# the above pre-removal rules. This also performs the
# actual file copying/replacements
for path, dirs, files in os.walk(merger):
# verify this structure works to prune updater sub folder overwriting
dirs[:] = [d for d in dirs if os.path.join(path,d) not in [self._updater_path]]
relPath = os.path.relpath(path, merger)
destPath = os.path.join(base, relPath)
if not os.path.exists(destPath):
os.makedirs(destPath)
for file in files:
# bring in additional logic around copying/replacing
# Blender default: overwrite .py's, don't overwrite the rest
destFile = os.path.join(destPath, file)
srcFile = os.path.join(path, file)
# decide whether to replace if file already exists, and copy new over
if os.path.isfile(destFile):
# otherwise, check each file to see if matches an overwrite pattern
replaced=False
for ptrn in self._overwrite_patterns:
if fnmatch.filter([destFile],ptrn):
replaced=True
break
if replaced:
os.remove(destFile)
os.rename(srcFile, destFile)
if self._verbose: print("Overwrote file "+os.path.basename(destFile))
else:
if self._verbose: print("Pattern not matched to "+os.path.basename(destFile)+", not overwritten")
else:
# file did not previously exist, simply move it over
os.rename(srcFile, destFile)
if self._verbose: print("New file "+os.path.basename(destFile))
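# Illustrative outcome of the rules above: with overwrite patterns such
# as ["*.py", "*.pyc"], an existing ui.py from the old install is
# replaced by the downloaded copy, while e.g. a user-edited presets.json
# that matches no pattern is kept as-is.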
# now remove the temp staging folder and downloaded zip
try:
shutil.rmtree(staging_path)
except:
error = "Error: Failed to remove existing staging directory, consider manually removing "+staging_path
if self._verbose: print(error)
def reload_addon(self):
# if post_update false, skip this function
# else, unload/reload addon & trigger popup
if self._auto_reload_post_update == False:
print("Restart blender to reload addon and complete update")
return
if self._verbose: print("Reloading addon...")
addon_utils.modules(refresh=True)
bpy.utils.refresh_script_paths()
# not allowed in restricted context, such as register module
# toggle to refresh
bpy.ops.wm.addon_disable(module=self._addon_package)
bpy.ops.wm.addon_refresh()
bpy.ops.wm.addon_enable(module=self._addon_package)
# -------------------------------------------------------------------------
# Other non-api functions and setups
# -------------------------------------------------------------------------
def clear_state(self):
self._update_ready = None
self._update_link = None
self._update_version = None
self._source_zip = None
self._error = None
self._error_msg = None
# custom urlretrieve implementation
def urlretrieve(self, urlfile, filepath):
chunk = 1024*8
with open(filepath, "wb") as f:
while True:
data = urlfile.read(chunk)
if not data:
break
f.write(data)
def version_tuple_from_text(self,text):
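"""Convert raw text into a tuple of integer version components.

Splits on any run of non-digit characters, e.g. (illustrative):
"v1.2.3" -> (1, 2, 3), "2.80" -> (2, 80). If no digits are present,
returns () unless include_branches is enabled, in which case the
text (e.g. a branch name) is returned as-is.
"""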
if text == None: return ()
# should go through string and remove all non-integers,
# and for any given break split into a different section
segments = []
tmp = ''
for l in str(text):
if l.isdigit()==False:
if len(tmp)>0:
segments.append(int(tmp))
tmp = ''
else:
tmp+=l
if len(tmp)>0:
segments.append(int(tmp))
if len(segments)==0:
if self._verbose: print("No version strings found in text: ", text)
if self._include_branches == False:
return ()
else:
return text # no digits found; return the branch text as-is
return tuple(segments)
# called for running check in a background thread
def check_for_update_async(self, callback=None):
if self._json != None and "update_ready" in self._json and self._json["version_text"]!={}:
if self._json["update_ready"] == True:
self._update_ready = True
self._update_link = self._json["version_text"]["link"]
self._update_version = str(self._json["version_text"]["version"])
# cached update
callback(True)
return
# do the check
if self._check_interval_enable == False:
return
elif self._async_checking == True:
if self._verbose: print("Skipping async check, already started")
return # already running the bg thread
elif self._update_ready == None:
self.start_async_check_update(False, callback)
def check_for_update_now(self, callback=None):
self._error = None
self._error_msg = None
if self._verbose:
print("Check update pressed, first getting current status")
if self._async_checking == True:
if self._verbose: print("Skipping async check, already started")
return # already running the bg thread
elif self._update_ready == None:
self.start_async_check_update(True, callback)
else:
self._update_ready = None
self.start_async_check_update(True, callback)
# this function is not async, will always return in sequential fashion
# but should have a parent which calls it in another thread
def check_for_update(self, now=False):
if self._verbose: print("Checking for update function")
# clear the errors if any
self._error = None
self._error_msg = None
# avoid running again in, just return past result if found
# but if force now check, then still do it
if self._update_ready != None and now == False:
return (self._update_ready,self._update_version,self._update_link)
if self._current_version == None:
raise ValueError("current_version not yet defined")
if self._repo == None:
raise ValueError("repo not yet defined")
if self._user == None:
raise ValueError("username not yet defined")
self.set_updater_json() # self._json
if now == False and self.past_interval_timestamp()==False:
if self._verbose:
print("Aborting update check, check interval not reached")
return (False, None, None)
# check if using tags or releases
# note that if called the first time, this will pull tags from online
if self._fake_install == True:
if self._verbose:
print("fake_install = True, setting fake version as ready")
self._update_ready = True
self._update_version = "(999,999,999)"
self._update_link = "http://127.0.0.1"
return (self._update_ready, self._update_version, self._update_link)
# primary internet call
self.get_tags() # sets self._tags and self._tag_latest
self._json["last_check"] = str(datetime.now())
self.save_updater_json()
# can be () or ('master') in addition to branches, and version tag
new_version = self.version_tuple_from_text(self.tag_latest)
if len(self._tags)==0:
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
if self._include_branches == False:
link = self.select_link(self, self._tags[0])
else:
n = len(self._include_branch_list)
if len(self._tags)==n:
# effectively means no tags found on repo
# so provide the first one as default
link = self.select_link(self, self._tags[0])
else:
link = self.select_link(self, self._tags[n])
if new_version == ():
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
elif str(new_version).lower() in self._include_branch_list:
# handle situation where master/whichever branch is included
# however, this code effectively is not triggered now
# as new_version will only be tag names, not branch names
if self._include_branch_autocheck == False:
# don't offer update as ready,
# but set the link for the default
# branch for installing
self._update_ready = False
self._update_version = new_version
self._update_link = link
self.save_updater_json()
return (True, new_version, link)
else:
raise ValueError("include_branch_autocheck: NOT YET DEVELOPED")
# bypass releases and look at timestamp of last update
# from a branch compared to now, see if commit values
# match or not.
else:
# situation where branches not included
if new_version > self._current_version:
self._update_ready = True
self._update_version = new_version
self._update_link = link
self.save_updater_json()
return (True, new_version, link)
# elif new_version != self._current_version:
# self._update_ready = False
# self._update_version = new_version
# self._update_link = link
# self.save_updater_json()
# return (True, new_version, link)
# if no update, set ready to False from None
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
def set_tag(self, name):
"""Assign the tag name and url to update to"""
tg = None
for tag in self._tags:
if name == tag["name"]:
tg = tag
break
if tg:
new_version = self.version_tuple_from_text(self.tag_latest)
self._update_version = new_version
self._update_link = self.select_link(self, tg)
elif self._include_branches and name in self._include_branch_list:
# scenario if reverting to a specific branch name instead of tag
tg = name
link = self.form_branch_url(tg)
self._update_version = name # this will break things
self._update_link = link
if not tg:
raise ValueError("Version tag not found: "+name)
def run_update(self,force=False,revert_tag=None,clean=False,callback=None):
"""Runs an install, update, or reversion of an addon from online source
Arguments:
force: Install assigned link, even if self.update_ready is False
revert_tag: Version to install, if none uses detected update link
clean: not used, but in future could use to totally refresh addon
callback: used to run function on update completion
"""
self._json["update_ready"] = False
self._json["ignore"] = False # clear ignore flag
self._json["version_text"] = {}
if revert_tag != None:
self.set_tag(revert_tag)
self._update_ready = True
# clear the errors if any
self._error = None
self._error_msg = None
if self._verbose: print("Running update")
if self._fake_install == True:
# change to True, to trigger the reload/"update installed" handler
if self._verbose:
print("fake_install=True")
print("Just reloading and running any handler triggers")
self._json["just_updated"] = True
self.save_updater_json()
if self._backup_current == True:
self.create_backup()
self.reload_addon()
self._update_ready = False
res = True # fake "success" zip download flag
elif force==False:
if self._update_ready != True:
if self._verbose:
print("Update stopped, new version not ready")
if callback:
callback(
self._addon_package,
"Update stopped, new version not ready")
return "Update stopped, new version not ready"
elif self._update_link == None:
# this shouldn't happen if update is ready
if self._verbose:
print("Update stopped, update link unavailable")
if callback:
callback(
self._addon_package,
"Update stopped, update link unavailable")
return "Update stopped, update link unavailable"
if self._verbose and revert_tag==None:
print("Staging update")
elif self._verbose:
print("Staging install")
res = self.stage_repository(self._update_link)
if res !=True:
print("Error in staging repository: "+str(res))
if callback != None:
callback(self._addon_package, self._error_msg)
return self._error_msg
res = self.unpack_staged_zip(clean)
if res<0:
if callback:
callback(self._addon_package, self._error_msg)
return res
else:
if self._update_link == None:
if self._verbose:
print("Update stopped, could not get link")
return "Update stopped, could not get link"
if self._verbose:
print("Forcing update")
res = self.stage_repository(self._update_link)
if res !=True:
print("Error in staging repository: "+str(res))
if callback:
callback(self._addon_package, self._error_msg)
return self._error_msg
res = self.unpack_staged_zip(clean)
if res<0:
return res
# would need to compare against other versions held in tags
# run the front-end's callback if provided
if callback:
callback(self._addon_package)
# return something meaningful, 0 means it worked
return 0
def past_interval_timestamp(self):
if self._check_interval_enable == False:
return True # ie this exact feature is disabled
if "last_check" not in self._json or self._json["last_check"] == "":
return True
else:
now = datetime.now()
last_check = datetime.strptime(self._json["last_check"],
"%Y-%m-%d %H:%M:%S.%f")
next_check = last_check
offset = timedelta(
days=self._check_interval_days + 30*self._check_interval_months,
hours=self._check_interval_hours,
minutes=self._check_interval_minutes
)
delta = (now - offset) - last_check
if delta.total_seconds() > 0:
if self._verbose:
print("{} Updater: Time to check for updates!".format(self._addon))
return True
else:
if self._verbose:
print("{} Updater: Determined it's not yet time to check for updates".format(self._addon))
return False
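# Worked example of the interval math above (illustrative values): with
# check_interval_days = 7, last_check = 2024-01-01 00:00 and
# now = 2024-01-09 00:00, delta = (now - 7 days) - last_check = +1 day,
# which is > 0 seconds, so a new check is considered due.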
def get_json_path(self):
"""Returns the full path to the JSON state file used by this updater.
Will also rename old file paths to addon-specific path if found
"""
json_path = os.path.join(self._updater_path,
"{}_updater_status.json".format(self._addon_package))
old_json_path = os.path.join(self._updater_path, "updater_status.json")
# rename old file if it exists
try:
os.rename(old_json_path, json_path)
except FileNotFoundError:
pass
except Exception as err:
print("Other OS error occurred while trying to rename old JSON")
print(err)
return json_path
def set_updater_json(self):
"""Load or initialize JSON dictionary data for updater state"""
if self._updater_path == None:
raise ValueError("updater_path is not defined")
elif os.path.isdir(self._updater_path) == False:
os.makedirs(self._updater_path)
jpath = self.get_json_path()
if os.path.isfile(jpath):
with open(jpath) as data_file:
self._json = json.load(data_file)
if self._verbose:
print("{} Updater: Read in JSON settings from file".format(
self._addon))
else:
# set data structure
self._json = {
"last_check":"",
"backup_date":"",
"update_ready":False,
"ignore":False,
"just_restored":False,
"just_updated":False,
"version_text":{}
}
self.save_updater_json()
def save_updater_json(self):
# first save the state
if self._update_ready == True:
if isinstance(self._update_version, tuple):
self._json["update_ready"] = True
self._json["version_text"]["link"]=self._update_link
self._json["version_text"]["version"]=self._update_version
else:
self._json["update_ready"] = False
self._json["version_text"] = {}
else:
self._json["update_ready"] = False
self._json["version_text"] = {}
jpath = self.get_json_path()
with open(jpath, 'w') as outf:
data_out = json.dumps(self._json, indent=4)
outf.write(data_out)
if self._verbose:
print(self._addon+": Wrote out updater JSON settings to file, with the contents:")
print(self._json)
def json_reset_postupdate(self):
self._json["just_updated"] = False
self._json["update_ready"] = False
self._json["version_text"] = {}
self.save_updater_json()
def json_reset_restore(self):
self._json["just_restored"] = False
self._json["update_ready"] = False
self._json["version_text"] = {}
self.save_updater_json()
self._update_ready = None # reset so you could check update again
def ignore_update(self):
self._json["ignore"] = True
self.save_updater_json()
# -------------------------------------------------------------------------
# ASYNC stuff
# -------------------------------------------------------------------------
def start_async_check_update(self, now=False, callback=None):
"""Start a background thread which will check for updates"""
if self._async_checking is True:
return
if self._verbose:
print("{} updater: Starting background checking thread".format(
self._addon))
check_thread = threading.Thread(target=self.async_check_update,
args=(now,callback,))
check_thread.daemon = True
self._check_thread = check_thread
check_thread.start()
def async_check_update(self, now, callback=None):
"""Perform update check, run as target of background thread"""
self._async_checking = True
if self._verbose:
print("{} BG thread: Checking for update now in background".format(
self._addon))
try:
self.check_for_update(now=now)
except Exception as exception:
print("Checking for update error:")
print(exception)
if not self._error:
self._update_ready = False
self._update_version = None
self._update_link = None
self._error = "Error occurred"
self._error_msg = "Encountered an error while checking for updates"
self._async_checking = False
self._check_thread = None
if self._verbose:
print("{} BG thread: Finished checking for update, doing callback".format(self._addon))
if callback:
callback(self._update_ready)
def stop_async_check_update(self):
"""Method to give impression of stopping check for update.
Currently does nothing but allows user to retry/stop blocking UI from
hitting a refresh button. This does not actually stop the thread, as it
will complete after the connection timeout regardless. If the thread
does complete with a successful response, this will be still displayed
on next UI refresh (ie no update, or update available).
"""
if self._check_thread != None:
if self._verbose: print("Thread will end in normal course.")
# however, "There is no direct kill method on a thread object."
# better to let it run its course
#self._check_thread.stop()
self._async_checking = False
self._error = None
self._error_msg = None
# -----------------------------------------------------------------------------
# Updater Engines
# -----------------------------------------------------------------------------
class BitbucketEngine(object):
"""Integration to Bitbucket API for git-formatted repositories"""
def __init__(self):
self.api_url = 'https://api.bitbucket.org'
self.token = None
self.name = "bitbucket"
def form_repo_url(self, updater):
return self.api_url+"/2.0/repositories/"+updater.user+"/"+updater.repo
def form_tags_url(self, updater):
return self.form_repo_url(updater) + "/refs/tags?sort=-name"
def form_branch_url(self, branch, updater):
return self.get_zip_url(branch, updater)
def get_zip_url(self, name, updater):
return "https://bitbucket.org/{user}/{repo}/get/{name}.zip".format(
user=updater.user,
repo=updater.repo,
name=name)
def parse_tags(self, response, updater):
if response == None:
return []
return [{"name": tag["name"],
"zipball_url": self.get_zip_url(tag["name"], updater)}
for tag in response["values"]]
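# Each engine's parse_tags returns a normalized list of dicts, e.g.
# (illustrative):
#   [{"name": "v1.2.0",
#     "zipball_url": "https://bitbucket.org/user/repo/get/v1.2.0.zip"}]
# so the updater core can treat all services uniformly via select_link.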
class GithubEngine(object):
"""Integration to Github API"""
def __init__(self):
self.api_url = 'https://api.github.com'
self.token = None
self.name = "github"
def form_repo_url(self, updater):
return "{}{}{}{}{}".format(self.api_url,"/repos/",updater.user,
"/",updater.repo)
def form_tags_url(self, updater):
if updater.use_releases:
return "{}{}".format(self.form_repo_url(updater),"/releases")
else:
return "{}{}".format(self.form_repo_url(updater),"/tags")
def form_branch_list_url(self, updater):
return "{}{}".format(self.form_repo_url(updater),"/branches")
def form_branch_url(self, branch, updater):
return "{}{}{}".format(self.form_repo_url(updater),
"/zipball/",branch)
def parse_tags(self, response, updater):
if response == None:
return []
return response
class GitlabEngine(object):
"""Integration to GitLab API"""
def __init__(self):
self.api_url = 'https://gitlab.com'
self.token = None
self.name = "gitlab"
def form_repo_url(self, updater):
return "{}{}{}".format(self.api_url,"/api/v4/projects/",updater.repo)
def form_tags_url(self, updater):
return "{}{}".format(self.form_repo_url(updater),"/repository/tags")
def form_branch_list_url(self, updater):
# does not validate branch name.
return "{}{}".format(
self.form_repo_url(updater),
"/repository/branches")
def form_branch_url(self, branch, updater):
# Could clash with tag names and if it does, it will
# download TAG zip instead of branch zip to get
# direct path, would need.
return "{}{}{}".format(
self.form_repo_url(updater),
"/repository/archive.zip?sha=",
branch)
def get_zip_url(self, sha, updater):
return "{base}/repository/archive.zip?sha={sha}".format(
base=self.form_repo_url(updater),
sha=sha)
# def get_commit_zip(self, id, updater):
# return self.form_repo_url(updater)+"/repository/archive.zip?sha:"+id
def parse_tags(self, response, updater):
if response == None:
return []
return [{"name": tag["name"],
"zipball_url": self.get_zip_url(tag["commit"]["id"], updater)}
for tag in response]
# -----------------------------------------------------------------------------
# The module-shared class instance,
# should be what's imported to other files
# -----------------------------------------------------------------------------
Updater = Singleton_updater()
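# Example wiring from an addon's register() (illustrative values only,
# not part of this module):
#
#   from .addon_updater import Updater
#   Updater.engine = "github"
#   Updater.user = "some-user"
#   Updater.repo = "some-addon"
#   Updater.current_version = (1, 0, 0)
#   Updater.backup_current = True
#   (ready, version, link) = Updater.check_for_update(now=False)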
# not currently used
@property
def include_branch_autocheck(self):
return self._include_branch_autocheck
@include_branch_autocheck.setter
def include_branch_autocheck(self, value):
try:
self._include_branch_autocheck = bool(value)
except:
raise ValueError("include_branch_autocheck must be a boolean value")
@property
def include_branch_list(self):
return self._include_branch_list
@include_branch_list.setter
def include_branch_list(self, value):
try:
if value == None:
self._include_branch_list = ['master']
elif type(value) != type(['master']) or value==[]:
raise ValueError("include_branch_list should be a list of valid branches")
else:
self._include_branch_list = value
except:
raise ValueError("include_branch_list should be a list of valid branches")
@property
def include_branches(self):
return self._include_branches
@include_branches.setter
def include_branches(self, value):
try:
self._include_branches = bool(value)
except:
raise ValueError("include_branches must be a boolean value")
@property
def json(self):
if self._json == {}:
self.set_updater_json()
return self._json
@property
def latest_release(self):
if self._latest_release == None:
return None
return self._latest_release
@property
def manual_only(self):
return self._manual_only
@manual_only.setter
def manual_only(self, value):
try:
self._manual_only = bool(value)
except:
raise ValueError("manual_only must be a boolean value")
@property
def overwrite_patterns(self):
return self._overwrite_patterns
@overwrite_patterns.setter
def overwrite_patterns(self, value):
if value == None:
self._overwrite_patterns = ["*.py","*.pyc"]
elif type(value) != type(['']):
raise ValueError("overwrite_patterns needs to be in a list format")
else:
self._overwrite_patterns = value
@property
def private_token(self):
return self._engine.token
@private_token.setter
def private_token(self, value):
if value==None:
self._engine.token = None
else:
self._engine.token = str(value)
@property
def remove_pre_update_patterns(self):
return self._remove_pre_update_patterns
@remove_pre_update_patterns.setter
def remove_pre_update_patterns(self, value):
if value == None:
self._remove_pre_update_patterns = []
elif type(value) != type(['']):
raise ValueError("remove_pre_update_patterns needs to be in a list format")
else:
self._remove_pre_update_patterns = value
@property
def repo(self):
return self._repo
@repo.setter
def repo(self, value):
try:
self._repo = str(value)
except:
			raise ValueError("repo must be a string value")
@property
def select_link(self):
return self._select_link
@select_link.setter
def select_link(self, value):
# ensure it is a function assignment, with signature:
# input self, tag; returns link name
if not hasattr(value, "__call__"):
raise ValueError("select_link must be a function")
self._select_link = value
@property
def stage_path(self):
return self._updater_path
@stage_path.setter
def stage_path(self, value):
if value == None:
if self._verbose: print("Aborting assigning stage_path, it's null")
return
elif value != None and not os.path.exists(value):
try:
os.makedirs(value)
except:
				if self._verbose: print("Error trying to create staging path")
return
self._updater_path = value
@property
def subfolder_path(self):
return self._subfolder_path
@subfolder_path.setter
def subfolder_path(self, value):
self._subfolder_path = value
@property
def tags(self):
if self._tags == []:
return []
tag_names = []
for tag in self._tags:
tag_names.append(tag["name"])
return tag_names
@property
def tag_latest(self):
if self._tag_latest == None:
return None
return self._tag_latest["name"]
@property
def update_link(self):
return self._update_link
@property
def update_ready(self):
return self._update_ready
@property
def update_version(self):
return self._update_version
@property
def use_releases(self):
return self._use_releases
@use_releases.setter
def use_releases(self, value):
try:
self._use_releases = bool(value)
except:
raise ValueError("use_releases must be a boolean value")
@property
def user(self):
return self._user
@user.setter
def user(self, value):
try:
self._user = str(value)
except:
raise ValueError("User must be a string value")
@property
def verbose(self):
return self._verbose
@verbose.setter
def verbose(self, value):
try:
self._verbose = bool(value)
if self._verbose == True:
print(self._addon+" updater verbose is enabled")
except:
raise ValueError("Verbose must be a boolean value")
@property
def version_max_update(self):
return self._version_max_update
@version_max_update.setter
def version_max_update(self, value):
if value == None:
self._version_max_update = None
return
if type(value) != type((1,2,3)):
raise ValueError("Version maximum must be a tuple")
for subvalue in value:
if type(subvalue) != int:
raise ValueError("Version elements must be integers")
self._version_max_update = value
@property
def version_min_update(self):
return self._version_min_update
@version_min_update.setter
def version_min_update(self, value):
if value == None:
self._version_min_update = None
return
if type(value) != type((1,2,3)):
raise ValueError("Version minimum must be a tuple")
for subvalue in value:
if type(subvalue) != int:
raise ValueError("Version elements must be integers")
self._version_min_update = value
@property
def website(self):
return self._website
@website.setter
def website(self, value):
if self.check_is_url(value) == False:
raise ValueError("Not a valid URL: " + value)
self._website = value
def check_is_url(self, url):
if not ("http://" in url or "https://" in url):
return False
if "." not in url:
return False
return True
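	# Example behavior of the naive URL check above (comment only; it just
	# requires a scheme and at least one dot, it is not a full URL parse):
	#   check_is_url("https://github.com")  -> True
	#   check_is_url("github.com")          -> False (no scheme)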
def get_tag_names(self):
tag_names = []
self.get_tags()
for tag in self._tags:
tag_names.append(tag["name"])
return tag_names
def set_check_interval(self,enable=False,months=0,days=14,hours=0,minutes=0):
if type(enable) is not bool:
raise ValueError("Enable must be a boolean value")
if type(months) is not int:
raise ValueError("Months must be an integer value")
if type(days) is not int:
raise ValueError("Days must be an integer value")
if type(hours) is not int:
raise ValueError("Hours must be an integer value")
if type(minutes) is not int:
raise ValueError("Minutes must be an integer value")
if enable==False:
self._check_interval_enable = False
else:
self._check_interval_enable = True
self._check_interval_months = months
self._check_interval_days = days
self._check_interval_hours = hours
self._check_interval_minutes = minutes
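	# Example usage (hypothetical; assumes a configured instance named
	# `updater`): enable a weekly background check for updates.
	#   updater.set_check_interval(enable=True, months=0, days=7,
	#                              hours=0, minutes=0)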
def __repr__(self):
return "<Module updater from {a}>".format(a=__file__)
def __str__(self):
return "Updater, with user: {a}, repository: {b}, url: {c}".format(
a=self._user,
b=self._repo, c=self.form_repo_url())
def form_repo_url(self):
return self._engine.form_repo_url(self)
def form_tags_url(self):
return self._engine.form_tags_url(self)
def form_branch_url(self, branch):
return self._engine.form_branch_url(branch, self)
def get_tags(self):
request = self.form_tags_url()
if self._verbose: print("Getting tags from server")
all_tags = self._engine.parse_tags(self.get_api(request), self)
if all_tags is not None:
self._prefiltered_tag_count = len(all_tags)
else:
self._prefiltered_tag_count = 0
all_tags = []
if self.skip_tag != None:
self._tags = [tg for tg in all_tags if self.skip_tag(self, tg)==False]
else:
self._tags = all_tags
if self._include_branches == True:
temp_branches = self._include_branch_list.copy()
temp_branches.reverse()
for branch in temp_branches:
request = self.form_branch_url(branch)
include = {
"name":branch.title(),
"zipball_url":request
}
self._tags = [include] + self._tags
if self._tags == None:
self._tag_latest = None
self._tags = []
return
elif self._prefiltered_tag_count == 0 and self._include_branches == False:
self._tag_latest = None
if self._error == None:
self._error = "No releases found"
self._error_msg = "No releases or tags found on this repository"
if self._verbose: print("No releases or tags found on this repository")
elif self._prefiltered_tag_count == 0 and self._include_branches == True:
if not self._error: self._tag_latest = self._tags[0]
if self._verbose:
branch = self._include_branch_list[0]
print("{} branch found, no releases".format(branch), self._tags[0])
elif (len(self._tags)-len(self._include_branch_list)==0 and self._include_branches==True) \
or (len(self._tags)==0 and self._include_branches==False) \
and self._prefiltered_tag_count > 0:
self._tag_latest = None
self._error = "No releases available"
self._error_msg = "No versions found within compatible version range"
if self._verbose: print("No versions found within compatible version range")
else:
if self._include_branches == False:
self._tag_latest = self._tags[0]
if self._verbose: print("Most recent tag found:",self._tags[0]['name'])
else:
n = len(self._include_branch_list)
self._tag_latest = self._tags[n] # guaranteed at least len()=n+1
if self._verbose: print("Most recent tag found:",self._tags[n]['name'])
# all API calls to base url
def get_raw(self, url):
# print("Raw request:", url)
request = urllib.request.Request(url)
try:
context = ssl._create_unverified_context()
except:
			# some blender-packaged python versions don't have this function;
			# fall back to the default (verified) ssl context
			context = None
if self._engine.token != None:
if self._engine.name == "gitlab":
request.add_header('PRIVATE-TOKEN',self._engine.token)
else:
if self._verbose: print("Tokens not setup for engine yet")
try:
if context:
result = urllib.request.urlopen(request, context=context)
else:
result = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:
if str(e.code) == "403":
self._error = "HTTP error (access denied)"
self._error_msg = str(e.code) + " - server error response"
print(self._error, self._error_msg)
else:
self._error = "HTTP error"
self._error_msg = str(e.code)
print(self._error, self._error_msg)
self._update_ready = None
except urllib.error.URLError as e:
reason = str(e.reason)
if "TLSV1_ALERT" in reason or "SSL" in reason.upper():
self._error = "Connection rejected, download manually"
self._error_msg = reason
print(self._error, self._error_msg)
else:
self._error = "URL error, check internet connection"
self._error_msg = reason
print(self._error, self._error_msg)
self._update_ready = None
return None
else:
result_string = result.read()
result.close()
return result_string.decode()
def get_api(self, url):
get = None
get = self.get_raw(url)
if get != None:
try:
return json.JSONDecoder().decode(get)
except Exception as e:
self._error = "API response has invalid JSON format"
				self._error_msg = str(e)  # JSONDecodeError has no .reason attribute
self._update_ready = None
print(self._error, self._error_msg)
return None
else:
return None
def stage_repository(self, url):
local = os.path.join(self._updater_path,"update_staging")
error = None
if self._verbose: print("Preparing staging folder for download:\n",local)
if os.path.isdir(local) == True:
try:
shutil.rmtree(local)
os.makedirs(local)
except:
error = "failed to remove existing staging directory. check permissions for staging directory"
else:
try:
os.makedirs(local)
except:
error = "failed to create staging directory. check permissions for staging directory"
if error != None:
if self._verbose: print("Error: Aborting update, "+error)
self._error = "Update aborted, staging path error"
self._error_msg = "Error: {}".format(error)
return False
if self._backup_current==True:
self.create_backup()
if self._verbose: print("Now retrieving the new source zip")
self._source_zip = os.path.join(local,"source.zip")
if self._verbose: print("Starting download update zip")
try:
request = urllib.request.Request(url)
context = ssl._create_unverified_context()
if self._engine.token != None:
if self._engine.name == "gitlab":
request.add_header('PRIVATE-TOKEN',self._engine.token)
else:
if self._verbose: print("Tokens not setup for selected engine yet")
self.urlretrieve(urllib.request.urlopen(request,context=context), self._source_zip)
if self._verbose: print("Successfully downloaded update zip")
return True
except Exception as e:
self._error = "Error retrieving download, bad link?"
self._error_msg = "Error: {}".format(e)
if self._verbose:
print("Error retrieving download, bad link?")
print("Error: {}".format(e))
return False
def create_backup(self):
if self._verbose: print("Backing up current addon folder")
local = os.path.join(self._updater_path,"backup")
tempdest = os.path.join(self._addon_root,
os.pardir,
self._addon+"_updater_backup_temp")
if self._verbose: print("Backup destination path: ",local)
if os.path.isdir(local):
try:
shutil.rmtree(local)
except:
				if self._verbose: print("Failed to remove previous backup folder, continuing")
if os.path.isdir(tempdest):
try:
shutil.rmtree(tempdest)
except:
				if self._verbose: print("Failed to remove existing temp folder, continuing")
# make the full addon copy, which temporarily places outside the addon folder
if self._backup_ignore_patterns != None:
shutil.copytree(
self._addon_root,tempdest,
ignore=shutil.ignore_patterns(*self._backup_ignore_patterns))
else:
shutil.copytree(self._addon_root,tempdest)
shutil.move(tempdest,local)
# save the date for future ref
now = datetime.now()
self._json["backup_date"] = "{m}-{d}-{yr}".format(
m=now.strftime("%B"),d=now.day,yr=now.year)
self.save_updater_json()
def restore_backup(self):
if self._verbose: print("Restoring backup")
if self._verbose: print("Backing up current addon folder")
backuploc = os.path.join(self._updater_path,"backup")
tempdest = os.path.join(self._addon_root,
os.pardir,
self._addon+"_updater_backup_temp")
tempdest = os.path.abspath(tempdest)
# make the copy
shutil.move(backuploc,tempdest)
shutil.rmtree(self._addon_root)
os.rename(tempdest,self._addon_root)
self._json["backup_date"] = ""
self._json["just_restored"] = True
self._json["just_updated"] = True
self.save_updater_json()
self.reload_addon()
def unpack_staged_zip(self,clean=False):
if os.path.isfile(self._source_zip) == False:
if self._verbose: print("Error, update zip not found")
self._error = "Install failed"
self._error_msg = "Downloaded zip not found"
return -1
# clear the existing source folder in case previous files remain
outdir = os.path.join(self._updater_path, "source")
if os.path.exists(outdir):
if self._verbose:
print("Old source folder exists; clearing")
try:
shutil.rmtree(outdir)
if self._verbose:
print("Old source folder cleared")
			except Exception as err:
				print("Error occurred while clearing old extract dir:")
				print(str(err))
self._error = "Install failed"
self._error_msg = "Failed to clear old extract directory"
return -1
# Create parent directories if needed, would not be relevant unless
# installing addon into another location or via an addon manager
try:
os.makedirs(outdir)
if self._verbose:
print("Source folder cleared and recreated")
except Exception as err:
print("Error occurred while making extract dir:")
print(str(err))
self._error = "Install failed"
self._error_msg = "Failed to make extract directory"
return -1
if not os.path.isdir(outdir):
print("Failed to create source directory")
self._error = "Install failed"
self._error_msg = "Failed to create extract directory"
return -1
if self._verbose:
print("Begin extracting source from zip:", self._source_zip)
		try:
			zfile = zipfile.ZipFile(self._source_zip, "r")
		except zipfile.BadZipFile:
			# corrupt download or a non-zip response; treat as a failed install
			zfile = None
		if not zfile:
			if self._verbose:
				print("Resulting file is not a zip, cannot extract")
			self._error = "Install failed"
			self._error_msg = "Resulting file is not a zip, cannot extract"
			return -1
# Now extract directly from the first subfolder (not root)
# this avoids adding the first subfolder to the path length,
# which can be too long if the download has the SHA in the name
		zsep = '/'  # zip archive paths always use forward slashes, even on windows
for name in zfile.namelist():
if zsep not in name:
continue
top_folder = name[:name.index(zsep)+1]
			if name == top_folder:
				continue  # skip the top level folder entry itself
subpath = name[name.index(zsep)+1:]
if name.endswith(zsep):
try:
os.mkdir(os.path.join(outdir, subpath))
if self._verbose:
print("Extract - mkdir: ", os.path.join(outdir, subpath))
except OSError as exc:
if exc.errno != errno.EEXIST:
self._error = "Install failed"
self._error_msg = "Could not create folder from zip"
return -1
else:
with open(os.path.join(outdir, subpath), "wb") as outfile:
data = zfile.read(name)
outfile.write(data)
if self._verbose:
print("Extract - create:", os.path.join(outdir, subpath))
if self._verbose:
print("Extracted source")
unpath = os.path.join(self._updater_path, "source")
if not os.path.isdir(unpath):
self._error = "Install failed"
self._error_msg = "Extracted path does not exist"
print("Extracted path does not exist: ", unpath)
return -1
		if self._subfolder_path:
			# str.replace returns a new string; assign the result back
			self._subfolder_path = self._subfolder_path.replace('/', os.path.sep)
			self._subfolder_path = self._subfolder_path.replace('\\', os.path.sep)
# either directly in root of zip/one subfolder, or use specified path
if os.path.isfile(os.path.join(unpath,"__init__.py")) == False:
dirlist = os.listdir(unpath)
if len(dirlist)>0:
if self._subfolder_path == "" or self._subfolder_path == None:
unpath = os.path.join(unpath, dirlist[0])
else:
unpath = os.path.join(unpath, self._subfolder_path)
# smarter check for additional sub folders for a single folder
# containing __init__.py
if os.path.isfile(os.path.join(unpath,"__init__.py")) == False:
if self._verbose:
print("not a valid addon found")
print("Paths:")
print(dirlist)
self._error = "Install failed"
self._error_msg = "No __init__ file found in new source"
return -1
# merge code with running addon directory, using blender default behavior
# plus any modifiers indicated by user (e.g. force remove/keep)
self.deepMergeDirectory(self._addon_root, unpath, clean)
# Now save the json state
# Change to True, to trigger the handler on other side
# if allowing reloading within same blender instance
self._json["just_updated"] = True
self.save_updater_json()
self.reload_addon()
self._update_ready = False
return 0
def deepMergeDirectory(self,base,merger,clean=False):
if not os.path.exists(base):
if self._verbose:
print("Base path does not exist:", base)
return -1
elif not os.path.exists(merger):
if self._verbose:
print("Merger path does not exist")
return -1
# paths to be aware of and not overwrite/remove/etc
staging_path = os.path.join(self._updater_path,"update_staging")
backup_path = os.path.join(self._updater_path,"backup")
# If clean install is enabled, clear existing files ahead of time
		# note: will not delete the updater json, updater folder, or staging
		# files, but will delete all other folders/files in the addon directory
error = None
if clean==True:
try:
# implement clearing of all folders/files, except the
# updater folder and updater json
# Careful, this deletes entire subdirectories recursively...
# make sure that base is not a high level shared folder, but
# is dedicated just to the addon itself
if self._verbose: print("clean=True, clearing addon folder to fresh install state")
# remove root files and folders (except update folder)
files = [f for f in os.listdir(base) if os.path.isfile(os.path.join(base,f))]
folders = [f for f in os.listdir(base) if os.path.isdir(os.path.join(base,f))]
for f in files:
os.remove(os.path.join(base,f))
print("Clean removing file {}".format(os.path.join(base,f)))
for f in folders:
if os.path.join(base,f)==self._updater_path: continue
shutil.rmtree(os.path.join(base,f))
print("Clean removing folder and contents {}".format(os.path.join(base,f)))
except Exception as err:
				error = "failed to clean existing addon folder"
print(error, str(err))
# Walk through the base addon folder for rules on pre-removing
# but avoid removing/altering backup and updater file
for path, dirs, files in os.walk(base):
# prune ie skip updater folder
dirs[:] = [d for d in dirs if os.path.join(path,d) not in [self._updater_path]]
for file in files:
for ptrn in self.remove_pre_update_patterns:
if fnmatch.filter([file],ptrn):
try:
fl = os.path.join(path,file)
os.remove(fl)
if self._verbose: print("Pre-removed file "+file)
except OSError:
print("Failed to pre-remove "+file)
# Walk through the temp addon sub folder for replacements
# this implements the overwrite rules, which apply after
# the above pre-removal rules. This also performs the
# actual file copying/replacements
for path, dirs, files in os.walk(merger):
# verify this structure works to prune updater sub folder overwriting
dirs[:] = [d for d in dirs if os.path.join(path,d) not in [self._updater_path]]
relPath = os.path.relpath(path, merger)
destPath = os.path.join(base, relPath)
if not os.path.exists(destPath):
os.makedirs(destPath)
for file in files:
# bring in additional logic around copying/replacing
# Blender default: overwrite .py's, don't overwrite the rest
destFile = os.path.join(destPath, file)
srcFile = os.path.join(path, file)
# decide whether to replace if file already exists, and copy new over
if os.path.isfile(destFile):
# otherwise, check each file to see if matches an overwrite pattern
replaced=False
for ptrn in self._overwrite_patterns:
if fnmatch.filter([destFile],ptrn):
replaced=True
break
if replaced:
os.remove(destFile)
os.rename(srcFile, destFile)
if self._verbose: print("Overwrote file "+os.path.basename(destFile))
else:
if self._verbose: print("Pattern not matched to "+os.path.basename(destFile)+", not overwritten")
else:
# file did not previously exist, simply move it over
os.rename(srcFile, destFile)
if self._verbose: print("New file "+os.path.basename(destFile))
# now remove the temp staging folder and downloaded zip
try:
shutil.rmtree(staging_path)
except:
error = "Error: Failed to remove existing staging directory, consider manually removing "+staging_path
if self._verbose: print(error)
def reload_addon(self):
# if post_update false, skip this function
# else, unload/reload addon & trigger popup
if self._auto_reload_post_update == False:
print("Restart blender to reload addon and complete update")
return
if self._verbose: print("Reloading addon...")
addon_utils.modules(refresh=True)
bpy.utils.refresh_script_paths()
# not allowed in restricted context, such as register module
# toggle to refresh
bpy.ops.wm.addon_disable(module=self._addon_package)
bpy.ops.wm.addon_refresh()
bpy.ops.wm.addon_enable(module=self._addon_package)
# -------------------------------------------------------------------------
# Other non-api functions and setups
# -------------------------------------------------------------------------
def clear_state(self):
self._update_ready = None
self._update_link = None
self._update_version = None
self._source_zip = None
self._error = None
self._error_msg = None
# custom urlretrieve implementation
def urlretrieve(self, urlfile, filepath):
chunk = 1024*8
f = open(filepath, "wb")
while 1:
data = urlfile.read(chunk)
if not data:
#print("done.")
break
f.write(data)
#print("Read %s bytes"%len(data))
f.close()
def version_tuple_from_text(self,text):
if text == None: return ()
# should go through string and remove all non-integers,
# and for any given break split into a different section
segments = []
tmp = ''
for l in str(text):
if l.isdigit()==False:
if len(tmp)>0:
segments.append(int(tmp))
tmp = ''
else:
tmp+=l
if len(tmp)>0:
segments.append(int(tmp))
if len(segments)==0:
if self._verbose: print("No version strings found text: ",text)
if self._include_branches == False:
return ()
else:
				return (text)  # branch name passed through as-is (a string, not a tuple)
return tuple(segments)
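	# Examples of the parsing above (comment only): digit runs become tuple
	# segments, and any non-digit characters act as separators.
	#   version_tuple_from_text("v1.2.3-beta") -> (1, 2, 3)
	#   version_tuple_from_text("2020.10")     -> (2020, 10)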
# called for running check in a background thread
def check_for_update_async(self, callback=None):
if self._json != None and "update_ready" in self._json and self._json["version_text"]!={}:
if self._json["update_ready"] == True:
self._update_ready = True
self._update_link = self._json["version_text"]["link"]
self._update_version = str(self._json["version_text"]["version"])
# cached update
callback(True)
return
# do the check
if self._check_interval_enable == False:
return
elif self._async_checking == True:
if self._verbose: print("Skipping async check, already started")
return # already running the bg thread
elif self._update_ready == None:
self.start_async_check_update(False, callback)
def check_for_update_now(self, callback=None):
self._error = None
self._error_msg = None
if self._verbose:
print("Check update pressed, first getting current status")
if self._async_checking == True:
if self._verbose: print("Skipping async check, already started")
return # already running the bg thread
elif self._update_ready == None:
self.start_async_check_update(True, callback)
else:
self._update_ready = None
self.start_async_check_update(True, callback)
# this function is not async, will always return in sequential fashion
# but should have a parent which calls it in another thread
def check_for_update(self, now=False):
if self._verbose: print("Checking for update function")
# clear the errors if any
self._error = None
self._error_msg = None
# avoid running again in, just return past result if found
# but if force now check, then still do it
if self._update_ready != None and now == False:
return (self._update_ready,self._update_version,self._update_link)
if self._current_version == None:
raise ValueError("current_version not yet defined")
if self._repo == None:
raise ValueError("repo not yet defined")
if self._user == None:
raise ValueError("username not yet defined")
self.set_updater_json() # self._json
if now == False and self.past_interval_timestamp()==False:
if self._verbose:
				print("Aborting check for update, check interval not reached")
return (False, None, None)
# check if using tags or releases
# note that if called the first time, this will pull tags from online
if self._fake_install == True:
if self._verbose:
print("fake_install = True, setting fake version as ready")
self._update_ready = True
self._update_version = "(999,999,999)"
self._update_link = "http://127.0.0.1"
return (self._update_ready, self._update_version, self._update_link)
# primary internet call
self.get_tags() # sets self._tags and self._tag_latest
self._json["last_check"] = str(datetime.now())
self.save_updater_json()
# can be () or ('master') in addition to branches, and version tag
new_version = self.version_tuple_from_text(self.tag_latest)
if len(self._tags)==0:
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
if self._include_branches == False:
link = self.select_link(self, self._tags[0])
else:
n = len(self._include_branch_list)
if len(self._tags)==n:
# effectively means no tags found on repo
# so provide the first one as default
link = self.select_link(self, self._tags[0])
else:
link = self.select_link(self, self._tags[n])
if new_version == ():
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
elif str(new_version).lower() in self._include_branch_list:
# handle situation where master/whichever branch is included
# however, this code effectively is not triggered now
# as new_version will only be tag names, not branch names
if self._include_branch_autocheck == False:
# don't offer update as ready,
self._update_ready = False
self._update_version = new_version
self._update_link = link
self.save_updater_json()
return (True, new_version, link)
else:
raise ValueError("include_branch_autocheck: NOT YET DEVELOPED")
else:
if new_version > self._current_version:
self._update_ready = True
self._update_version = new_version
self._update_link = link
self.save_updater_json()
return (True, new_version, link)
self._update_ready = False
self._update_version = None
self._update_link = None
return (False, None, None)
def set_tag(self, name):
tg = None
for tag in self._tags:
if name == tag["name"]:
tg = tag
break
if tg:
new_version = self.version_tuple_from_text(self.tag_latest)
self._update_version = new_version
self._update_link = self.select_link(self, tg)
elif self._include_branches and name in self._include_branch_list:
tg = name
link = self.form_branch_url(tg)
self._update_version = name
self._update_link = link
if not tg:
raise ValueError("Version tag not found: "+name)
def run_update(self,force=False,revert_tag=None,clean=False,callback=None):
self._json["update_ready"] = False
self._json["ignore"] = False
self._json["version_text"] = {}
if revert_tag != None:
self.set_tag(revert_tag)
self._update_ready = True
self._error = None
self._error_msg = None
if self._verbose: print("Running update")
if self._fake_install == True:
if self._verbose:
print("fake_install=True")
print("Just reloading and running any handler triggers")
self._json["just_updated"] = True
self.save_updater_json()
if self._backup_current == True:
self.create_backup()
self.reload_addon()
self._update_ready = False
res = True
elif force==False:
if self._update_ready != True:
if self._verbose:
print("Update stopped, new version not ready")
if callback:
callback(
self._addon_package,
"Update stopped, new version not ready")
return "Update stopped, new version not ready"
elif self._update_link == None:
if self._verbose:
print("Update stopped, update link unavailable")
if callback:
callback(
self._addon_package,
"Update stopped, update link unavailable")
return "Update stopped, update link unavailable"
if self._verbose and revert_tag==None:
print("Staging update")
elif self._verbose:
print("Staging install")
res = self.stage_repository(self._update_link)
if res !=True:
print("Error in staging repository: "+str(res))
if callback != None:
callback(self._addon_package, self._error_msg)
return self._error_msg
res = self.unpack_staged_zip(clean)
if res<0:
if callback:
callback(self._addon_package, self._error_msg)
return res
else:
if self._update_link == None:
if self._verbose:
print("Update stopped, could not get link")
return "Update stopped, could not get link"
if self._verbose:
print("Forcing update")
res = self.stage_repository(self._update_link)
if res !=True:
print("Error in staging repository: "+str(res))
if callback:
callback(self._addon_package, self._error_msg)
return self._error_msg
res = self.unpack_staged_zip(clean)
if res<0:
return res
# would need to compare against other versions held in tags
# run the front-end's callback if provided
if callback:
callback(self._addon_package)
return 0
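	# Example flow (hypothetical; assumes `updater` is configured with
	# user/repo and that check_for_update already reported an update ready):
	#   res = updater.run_update(force=False, callback=None)
	#   if res == 0: print("Update installed, reload pending")
	# non-zero/negative results or returned strings indicate a failure, with
	# details stored in updater.error and updater.error_msg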
def past_interval_timestamp(self):
if self._check_interval_enable == False:
return True
if "last_check" not in self._json or self._json["last_check"] == "":
return True
else:
now = datetime.now()
last_check = datetime.strptime(self._json["last_check"],
"%Y-%m-%d %H:%M:%S.%f")
next_check = last_check
offset = timedelta(
days=self._check_interval_days + 30*self._check_interval_months,
hours=self._check_interval_hours,
minutes=self._check_interval_minutes
)
delta = (now - offset) - last_check
if delta.total_seconds() > 0:
if self._verbose:
print("{} Updater: Time to check for updates!".format(self._addon))
return True
else:
if self._verbose:
print("{} Updater: Determined it's not yet time to check for updates".format(self._addon))
return False
def get_json_path(self):
json_path = os.path.join(self._updater_path,
"{}_updater_status.json".format(self._addon_package))
old_json_path = os.path.join(self._updater_path, "updater_status.json")
# rename old file if it exists
try:
os.rename(old_json_path, json_path)
except FileNotFoundError:
pass
except Exception as err:
print("Other OS error occurred while trying to rename old JSON")
print(err)
return json_path
def set_updater_json(self):
if self._updater_path == None:
raise ValueError("updater_path is not defined")
elif os.path.isdir(self._updater_path) == False:
os.makedirs(self._updater_path)
jpath = self.get_json_path()
if os.path.isfile(jpath):
with open(jpath) as data_file:
self._json = json.load(data_file)
if self._verbose:
print("{} Updater: Read in JSON settings from file".format(
self._addon))
else:
# set data structure
self._json = {
"last_check":"",
"backup_date":"",
"update_ready":False,
"ignore":False,
"just_restored":False,
"just_updated":False,
"version_text":{}
}
self.save_updater_json()
def save_updater_json(self):
# first save the state
if self._update_ready == True:
if type(self._update_version) == type((0,0,0)):
self._json["update_ready"] = True
self._json["version_text"]["link"]=self._update_link
self._json["version_text"]["version"]=self._update_version
else:
self._json["update_ready"] = False
self._json["version_text"] = {}
else:
self._json["update_ready"] = False
self._json["version_text"] = {}
jpath = self.get_json_path()
outf = open(jpath,'w')
data_out = json.dumps(self._json, indent=4)
outf.write(data_out)
outf.close()
if self._verbose:
print(self._addon+": Wrote out updater JSON settings to file, with the contents:")
print(self._json)
def json_reset_postupdate(self):
self._json["just_updated"] = False
self._json["update_ready"] = False
self._json["version_text"] = {}
self.save_updater_json()
def json_reset_restore(self):
self._json["just_restored"] = False
self._json["update_ready"] = False
self._json["version_text"] = {}
self.save_updater_json()
self._update_ready = None # reset so you could check update again
def ignore_update(self):
self._json["ignore"] = True
self.save_updater_json()
# -------------------------------------------------------------------------
# ASYNC stuff
# -------------------------------------------------------------------------
def start_async_check_update(self, now=False, callback=None):
if self._async_checking is True:
return
if self._verbose:
print("{} updater: Starting background checking thread".format(
self._addon))
check_thread = threading.Thread(target=self.async_check_update,
args=(now,callback,))
check_thread.daemon = True
self._check_thread = check_thread
check_thread.start()
def async_check_update(self, now, callback=None):
self._async_checking = True
if self._verbose:
print("{} BG thread: Checking for update now in background".format(
self._addon))
try:
self.check_for_update(now=now)
except Exception as exception:
print("Checking for update error:")
print(exception)
if not self._error:
self._update_ready = False
self._update_version = None
self._update_link = None
self._error = "Error occurred"
self._error_msg = "Encountered an error while checking for updates"
self._async_checking = False
self._check_thread = None
if self._verbose:
print("{} BG thread: Finished checking for update, doing callback".format(self._addon))
if callback:
callback(self._update_ready)
def stop_async_check_update(self):
if self._check_thread != None:
if self._verbose: print("Thread will end in normal course.")
# however, "There is no direct kill method on a thread object."
# better to let it run its course
#self._check_thread.stop()
self._async_checking = False
self._error = None
self._error_msg = None
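The `past_interval_timestamp` method above boils down to one comparison: a check is due once `now - offset` has passed the stored `last_check`, with a month approximated as 30 days. A standalone sketch of that arithmetic (the function name is ours, not part of the updater API):

```python
from datetime import datetime, timedelta

def check_is_due(last_check, now, months=0, days=0, hours=0, minutes=0):
    # Mirror the updater's approximation: one month counts as 30 days.
    offset = timedelta(days=days + 30 * months, hours=hours, minutes=minutes)
    # Due once the time elapsed since last_check exceeds the offset.
    return (now - offset - last_check).total_seconds() > 0

last = datetime(2024, 1, 1, 12, 0, 0)
print(check_is_due(last, datetime(2024, 1, 2, 12, 0, 1), days=1))  # True: a full day has passed
print(check_is_due(last, datetime(2024, 1, 1, 18, 0, 0), days=1))  # False: only six hours
```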
# -----------------------------------------------------------------------------
# Updater Engines
# -----------------------------------------------------------------------------
class BitbucketEngine(object):
def __init__(self):
self.api_url = 'https://api.bitbucket.org'
self.token = None
self.name = "bitbucket"
def form_repo_url(self, updater):
return self.api_url+"/2.0/repositories/"+updater.user+"/"+updater.repo
def form_tags_url(self, updater):
return self.form_repo_url(updater) + "/refs/tags?sort=-name"
def form_branch_url(self, branch, updater):
return self.get_zip_url(branch, updater)
def get_zip_url(self, name, updater):
return "https://bitbucket.org/{user}/{repo}/get/{name}.zip".format(
user=updater.user,
repo=updater.repo,
name=name)
def parse_tags(self, response, updater):
if response == None:
return []
return [{"name": tag["name"], "zipball_url": self.get_zip_url(tag["name"], updater)} for tag in response["values"]]
class GithubEngine(object):
def __init__(self):
self.api_url = 'https://api.github.com'
self.token = None
self.name = "github"
def form_repo_url(self, updater):
return "{}{}{}{}{}".format(self.api_url,"/repos/",updater.user,
"/",updater.repo)
def form_tags_url(self, updater):
if updater.use_releases:
return "{}{}".format(self.form_repo_url(updater),"/releases")
else:
return "{}{}".format(self.form_repo_url(updater),"/tags")
def form_branch_list_url(self, updater):
return "{}{}".format(self.form_repo_url(updater),"/branches")
def form_branch_url(self, branch, updater):
return "{}{}{}".format(self.form_repo_url(updater),
"/zipball/",branch)
def parse_tags(self, response, updater):
if response == None:
return []
return response
class GitlabEngine(object):
def __init__(self):
self.api_url = 'https://gitlab.com'
self.token = None
self.name = "gitlab"
def form_repo_url(self, updater):
return "{}{}{}".format(self.api_url,"/api/v4/projects/",updater.repo)
def form_tags_url(self, updater):
return "{}{}".format(self.form_repo_url(updater),"/repository/tags")
def form_branch_list_url(self, updater):
# does not validate branch name.
return "{}{}".format(
self.form_repo_url(updater),
"/repository/branches")
def form_branch_url(self, branch, updater):
# Could clash with tag names and if it does, it will
# download TAG zip instead of branch zip to get
# direct path, would need.
return "{}{}{}".format(
self.form_repo_url(updater),
"/repository/archive.zip?sha=",
branch)
def get_zip_url(self, sha, updater):
return "{base}/repository/archive.zip?sha={sha}".format(
base=self.form_repo_url(updater),
sha=sha)
# def get_commit_zip(self, id, updater):
# return self.form_repo_url(updater)+"/repository/archive.zip?sha:"+id
def parse_tags(self, response, updater):
if response == None:
return []
return [{"name": tag["name"], "zipball_url": self.get_zip_url(tag["commit"]["id"], updater)} for tag in response]
# -----------------------------------------------------------------------------
# The module-shared class instance,
# should be what's imported to other files
Updater = Singleton_updater()
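The updater's JSON bookkeeping above is a plain dict serialized with `json.dumps(..., indent=4)`. A self-contained roundtrip sketch, using a temporary path and the same field set as `set_updater_json`:

```python
import json, os, tempfile

settings = {
    "last_check": "",
    "backup_date": "",
    "update_ready": False,
    "ignore": False,
    "just_restored": False,
    "just_updated": False,
    "version_text": {},
}

with tempfile.TemporaryDirectory() as updater_path:
    jpath = os.path.join(updater_path, "demo_updater_status.json")
    with open(jpath, "w") as outf:          # context manager closes the file for us
        outf.write(json.dumps(settings, indent=4))
    with open(jpath) as data_file:
        loaded = json.load(data_file)

print(loaded == settings)   # True: the roundtrip preserves every field
```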
# -----------------------------------------------------------------------------
# File: wpcmd/base.py (repo: zrong/wpcmd, license: BSD-3-Clause)
# -----------------------------------------------------------------------------
#########################################
# base.py
#
# Author zrong(zengrong.net)
# Creation 2014-12-04
# Modification 2015-11-22
#########################################
import os
import re
import platform
import shutil
from xmlrpc.client import Fault
from string import Template
from datetime import (datetime, timedelta)
import configparser
from wordpress_xmlrpc import (Client,
WordPressPost, WordPressPage, WordPressTerm, WordPressMedia)
from wordpress_xmlrpc.exceptions import InvalidCredentialsError
from wordpress_xmlrpc.methods.taxonomies import (GetTerms)
from pkg_resources import (resource_filename, resource_string)
from rookout import slog
from rookout.base import (list_dir, read_file, write_file)
from rookout.conf import PYConf
class WPError(Exception):
    """Error raised for wpcmd-specific failures (used by Conf.get_new_draft)."""


class Conf(object):
TPL_FILE = 'wpcmd.ini.tpl'
PRE_NAME = '_' if platform.system() == 'Windows' else '.'
INI_FILE = PRE_NAME+'wpcmd.ini'
CACHE_FILE = PRE_NAME+'wpcmd.cache.py'
ARTICLE_TYPES = ('post', 'page', 'draft')
def __init__(self, conffile):
self.conffile = conffile
self.ini = configparser.ConfigParser()
self.cache = None
def init(self, workdir):
if os.path.exists(self.conffile):
self.read_from_file()
return True
tplstr = read_file(resource_filename('wpcmd', Conf.TPL_FILE))
inistr = Template(tplstr).substitute({
'CACHEFILE':Conf.CACHE_FILE,
'WORK':workdir,
})
self.save_to_file(inistr)
self.read_from_file()
slog.info('Please modify %s !'%self.conffile)
return False
def init_cache(self, site_name):
self.__site_section = site_name
self.cache = TermCache(self.get_site('cachefile'))
self.cache.init()
def __missing__(self, key):
return None
def __getattr__(self, name):
return self.ini[name]
def get(self, section, option):
return self.ini.get(section, option, raw=True, fallback=None)
def get_site(self, option):
return self.get(self.__site_section, option)
def get_user(self):
return self.get_site('user')
def get_password(self):
return self.get_site('password')
def get_url(self, only_site=False):
url = self.get_site('url')
site = None
if url.endswith('/xmlrpc.php'):
site = url[:-11]
elif url.endswith('/'):
site = url[:-1]
url = url + 'xmlrpc.php'
else:
site = url
url = url + '/xmlrpc.php'
if only_site:
return site
return url
def save_to_file(self, inistr):
write_file(self.conffile, inistr)
def read_from_file(self):
self.ini.read(self.conffile)
def is_article(self, posttype):
return posttype in Conf.ARTICLE_TYPES
def get_draft(self, name):
"""
        There are two kinds of draft files in the draft directory:
        one that has already been published to wordpress and is in draft status,
        and one that has not been published to wordpress yet.
"""
draftname = (self.get_site('draftfmt') % str(name))+self.get_site('ext')
return self.get_work_path('draft', draftname), draftname
def get_new_draft(self, name=None):
draftdir = self.get_work_path('draft')
draftnames = list(list_dir(draftdir))
draftfile, draftname = None, None
if name:
draftfile, draftname = self.get_draft(name)
if draftname in draftnames:
                raise WPError('The draft file "%s" already exists!'%
draftname)
else:
name = 1
draftfile, draftname = self.get_draft(name)
while os.path.exists(draftfile):
name += 1
draftfile, draftname = self.get_draft(name)
return draftfile, draftname
def get_article(self, name, posttype):
postname = name+self.get_site('ext')
if self.is_article(posttype):
return self.get_work_path(posttype, postname), postname
return None, None
def get_path(self, name, *path):
workdir = os.path.join(self.get_site('work'), name)
if path:
return os.path.abspath(os.path.join(workdir, *path))
return workdir
def get_work_path(self, dirname, *path):
workpath = self.get_path(self.get_site(dirname))
if not os.path.exists(workpath):
os.makedirs(workpath)
if path:
return os.path.join(workpath, *path)
return workpath
def get_mdfiles(self, posttype):
workpath = self.get_work_path(posttype)
for afile in os.listdir(workpath):
if afile.endswith(self.get_site('ext')):
name = afile.split('.')[0]
filepath = os.path.join(workpath, afile)
yield (posttype, name, filepath)
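`Conf.get_url` normalizes three possible spellings of the configured URL; the branching is easier to see as a standalone function (the name is ours, not part of the class):

```python
def normalize_xmlrpc_url(url, only_site=False):
    # Accepts ".../xmlrpc.php", a trailing-slash site, or a bare site URL,
    # and returns either the site root or the full XML-RPC endpoint.
    if url.endswith('/xmlrpc.php'):
        site = url[:-11]
    elif url.endswith('/'):
        site = url[:-1]
        url = url + 'xmlrpc.php'
    else:
        site = url
        url = url + '/xmlrpc.php'
    return site if only_site else url

print(normalize_xmlrpc_url('https://example.com'))    # https://example.com/xmlrpc.php
print(normalize_xmlrpc_url('https://example.com/'))   # https://example.com/xmlrpc.php
print(normalize_xmlrpc_url('https://example.com/xmlrpc.php', only_site=True))
# https://example.com
```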
class Action(object):
def __init__(self, gconf, gtermcache, gargs, gparser):
self.conf = gconf
self.conf.site = gargs.site
self.cache = gtermcache
self.args = gargs
self.parser = gparser
self._wp = None
def get_postid(self, as_list=False):
if not self.args.query:
return None
if as_list:
postids = []
for postid in self.args.query:
match = re.match(r'^(\d+)-(\d+)$', postid)
if match:
a = int(match.group(1))
b = int(match.group(2))
for i in range(a,b+1):
postids.append(str(i))
else:
postids.append(postid)
return postids
return self.args.query[0]
def get_dict_from_query(self, query):
if query:
d = {}
for v in query:
value = v.split('=')
d[value[0]] = value[1]
return d
return None
def get_term_query(self):
typ = self.args.type
q = self.args.query
query = []
if typ == 'term':
query = q
else:
if typ == 'tag':
typ = 'post_tag'
query.append(typ)
if q and len(q)>0:
query.append(q[0])
return query
def get_terms_from_wp(self, query, force=False):
if not query or len(query)== 0:
slog.error('Please provide a taxonomy name! You can use '
'"show -t tax" to get one.')
return None
taxname = query[0]
slug = query[1] if len(query)>1 else None
terms = self.cache[taxname]
if not terms or force:
results = self.wpcall(GetTerms(taxname))
if results:
self.cache.save_terms(results, taxname)
if terms and slug:
return terms[slug]
return terms
def print_result(self, result):
if isinstance(result, WordPressTerm):
slog.info('id=%s, group=%s, '
'taxnomy_id=%s, name=%s, slug=%s, '
'parent=%s, count=%d',
result.id, result.group,
result.taxonomy_id, result.name, result.slug,
result.parent, result.count)
elif isinstance(result, WordPressPost):
slog.info('id=%s, date=%s, date_modified=%s, '
'slug=%s, title=%s, post_status=%s, post_type=%s',
result.id, str(result.date), str(result.date_modified),
result.slug, result.title,
result.post_status, result.post_type)
elif isinstance(result, WordPressMedia):
slog.info('id=%s, parent=%s, title=%s, '
'description=%s, caption=%s, date_created=%s, link=%s, '
'thumbnail=%s, metadata=%s',
result.id, result.parent, result.title,
result.description, result.caption, str(result.date_created),
result.link,
result.thumbnail, result.metadata)
else:
slog.info(result)
def print_results(self, results):
if isinstance(results, list):
for result in results:
self.print_result(result)
elif isinstance(results, dict):
for k,v in results.items():
slog.info('%s %s'%(k, str(v)))
else:
self.print_result(results)
def get_datetime(self, datestring):
dt = datetime.strptime(datestring, '%Y-%m-%d %H:%M:%S')
return dt - timedelta(hours=8)
def wpcall(self, method):
if not self._wp:
self._wp = Client(self.conf.get_url(),
self.conf.get_user(),
self.conf.get_password())
try:
results = self._wp.call(method)
except InvalidCredentialsError as e:
slog.error(e)
return None
except Fault as e:
slog.error(e)
return None
return results
def go(self):
pass
def build(self):
if self.args.type:
self.go()
elif self.parser:
self.parser.print_help()
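`Action.get_postid` expands query tokens such as `"3-5"` into individual IDs with a regex. The core transform, extracted into a standalone function (name hypothetical):

```python
import re

def expand_post_ids(query):
    postids = []
    for postid in query:
        match = re.match(r'^(\d+)-(\d+)$', postid)
        if match:
            a, b = int(match.group(1)), int(match.group(2))
            # Inclusive range: "3-5" yields "3", "4", "5".
            postids.extend(str(i) for i in range(a, b + 1))
        else:
            postids.append(postid)
    return postids

print(expand_post_ids(["3-5", "42"]))   # ['3', '4', '5', '42']
```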
class TermCache(PYConf):
""" A cache for terms.
"""
def __init__(self, filepath):
self.cachefile = filepath
def init(self):
if os.path.exists(self.cachefile):
super().read_from_file(self.cachefile)
def save_to_file(self):
super().save_to_file(self.cachefile)
def save_terms(self, terms, taxname):
termdict = PYConf()
for term in terms:
self.save_term(term, taxname, termdict)
self[taxname] = termdict
self.save_to_file()
def save_term(self, term, taxname, termdict=None):
if termdict == None:
termdict = self[taxname]
if termdict == None:
termdict = PYConf()
self[taxname] = termdict
termdict[term.slug] = PYConf({
'id':term.id,
'group':term.group,
'taxonomy':term.taxonomy,
'taxonomy_id':term.taxonomy_id,
'name':term.name,
'slug':term.slug,
'description':term.description,
'parent':term.parent,
'count':term.count,
})
def get_term(self, taxname, slug):
if not self[taxname]:
return None
if not self[taxname][slug]:
return None
termdict = self[taxname][slug]
term = WordPressTerm()
term.id = termdict['id']
term.group = termdict['group']
term.taxonomy = termdict['taxonomy']
term.taxonomy_id = termdict['taxonomy_id']
term.name = termdict['name']
term.slug = termdict['slug']
term.description = termdict['description']
term.parent = termdict['parent']
term.count = termdict['count']
return term
def get_terms_from_meta(self, categories, tags):
terms = []
if categories:
for cat in categories:
term = self.get_term('category', cat)
if not term:
slog.error('The category "%s" is not in wordpress.'
' Please create it first.'%cat)
return None
terms.append(term)
if tags:
for tag in tags:
term = self.get_term('post_tag', tag)
if not term:
                    slog.error('The tag "%s" is not in wordpress.'
                        ' Please create it first.'%tag)
return None
terms.append(term)
return terms
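`Action.get_datetime` parses a local timestamp string and applies a fixed eight-hour shift, presumably converting the author's UTC+8 local time to UTC. The same conversion in isolation:

```python
from datetime import datetime, timedelta

def local_cst_to_utc(datestring):
    # Parse "YYYY-mm-dd HH:MM:SS" and subtract the UTC+8 offset.
    dt = datetime.strptime(datestring, '%Y-%m-%d %H:%M:%S')
    return dt - timedelta(hours=8)

utc = local_cst_to_utc('2015-11-22 10:30:00')
print(utc)   # 2015-11-22 02:30:00
```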
# -----------------------------------------------------------------------------
# File: allennlp/tests/data/dataset_readers/semantic_parsing/atis_test.py
# (repo: threefoldo/allennlp, license: Apache-2.0)
# -----------------------------------------------------------------------------
# pylint: disable=no-self-use,invalid-name
from allennlp.common.file_utils import cached_path
from allennlp.data.dataset_readers import AtisDatasetReader
from allennlp.common.testing import AllenNlpTestCase
from allennlp.semparse.worlds import AtisWorld
class TestAtisReader(AllenNlpTestCase):
def test_atis_read_from_file(self):
data_path = AllenNlpTestCase.FIXTURES_ROOT / "data" / "atis" / "sample.json"
database_file = cached_path("https://s3-us-west-2.amazonaws.com/allennlp/datasets/atis/atis.db")
reader = AtisDatasetReader(database_file=database_file)
instances = list(reader.read(str(data_path)))
assert len(instances) == 13
instance = instances[0]
assert set(instance.fields.keys()) == \
{'utterance',
'actions',
'world',
'sql_queries',
'target_action_sequence',
'linking_scores'}
assert [t.text for t in instance.fields["utterance"].tokens] == \
['show', 'me', 'the', 'one', 'way',
'flights', 'from', 'detroit', 'to',
'westchester', 'county']
assert isinstance(instance.fields['world'].as_tensor({}), AtisWorld)
world = instance.fields['world'].metadata
assert set(world.valid_actions['number']) == \
{'number -> ["1"]',
'number -> ["0"]',
'number -> ["41"]',
'number -> ["60"]'}
assert world.linked_entities['string']['airport_airport_code_string -> ["\'DTW\'"]'][2] == \
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] # ``detroit`` -> ``DTW``
assert world.linked_entities['string']['flight_stop_stop_airport_string -> ["\'DTW\'"]'][2] == \
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] # ``detroit`` -> ``DTW``
assert world.linked_entities['string']['city_city_code_string -> ["\'DDTT\'"]'][2] == \
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0] # ``detroit`` -> ``DDTT``
assert world.linked_entities['string']['fare_basis_economy_string -> ["\'NO\'"]'][2] == \
[0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0] # ``one way`` -> ``NO``
assert world.linked_entities['string']['city_city_name_string -> ["\'WESTCHESTER COUNTY\'"]'][2] == \
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1] # ``westchester county`` -> ``WESTCHESTER COUNTY``
assert world.linked_entities['string']['city_city_code_string -> ["\'HHPN\'"]'][2] == \
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1] # ``westchester county`` -> ``HHPN``
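The binary rows asserted above mark which utterance tokens triggered each database entity. That alignment can be sketched in plain Python; this is a deliberate simplification of AllenNLP's actual linking-score machinery, for illustration only:

```python
tokens = ['show', 'me', 'the', 'one', 'way',
          'flights', 'from', 'detroit', 'to',
          'westchester', 'county']

def link_vector(tokens, trigger_words):
    # 1 where the token is a trigger for the entity, 0 elsewhere.
    return [1 if tok in trigger_words else 0 for tok in tokens]

print(link_vector(tokens, {'detroit'}))
# [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
print(link_vector(tokens, {'westchester', 'county'}))
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
```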
# -----------------------------------------------------------------------------
# File: src/project/utils/xmodels.py (repo: tgrx/obliviscor, license: Apache-2.0)
# -----------------------------------------------------------------------------
from functools import singledispatch
from typing import Text
import django
from django.db.models.base import ModelBase
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor
from django.db.models.fields.related_descriptors import ManyToManyDescriptor
from django.db.models.query_utils import DeferredAttribute
@singledispatch
def a(obj) -> Text:
return str(obj)
@a.register
def _(obj: DeferredAttribute) -> Text:
if django.VERSION[0] < 3: # pragma: nocover
return obj.field_name
return obj.field.get_attname()
@a.register
def _(obj: ForwardManyToOneDescriptor) -> Text:
return obj.field.name
@a.register
def _(obj: ManyToManyDescriptor) -> Text:
return obj.field.name
@a.register
def _(obj: ModelBase) -> Text:
return obj._meta.db_table
| 22.472222 | 82 | 0.765142 | from functools import singledispatch
from typing import Text
import django
from django.db.models.base import ModelBase
from django.db.models.fields.related_descriptors import ForwardManyToOneDescriptor
from django.db.models.fields.related_descriptors import ManyToManyDescriptor
from django.db.models.query_utils import DeferredAttribute
@singledispatch
def a(obj) -> Text:
return str(obj)
@a.register
def _(obj: DeferredAttribute) -> Text:
if django.VERSION[0] < 3:
return obj.field_name
return obj.field.get_attname()
@a.register
def _(obj: ForwardManyToOneDescriptor) -> Text:
return obj.field.name
@a.register
def _(obj: ManyToManyDescriptor) -> Text:
return obj.field.name
@a.register
def _(obj: ModelBase) -> Text:
return obj._meta.db_table
| true | true |
# -----------------------------------------------------------------------------
# File: scripts/jenkins_win32.py (repo: mbits-os/JiraDesktop, license: MIT)
# -----------------------------------------------------------------------------
import sys, os, glob, zipfile
ROOT = 'apps/Tasks/Release/'
FILES = [
'tasks.exe',
'locale/Tasks.*'
]
def paths():
for FILE in FILES:
for name in glob.iglob(os.path.join(ROOT, FILE)):
yield (name, os.path.join('bin', os.path.relpath(name, ROOT)))
with zipfile.ZipFile(sys.argv[1], 'w') as zip:
for src, dst in paths(): zip.write(src, dst)
zip.write('msbuild.log')
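The script globs release artifacts and rewrites their archive paths under `bin/`. The same pattern, demonstrated against a temporary directory tree (the file names here are made up for the demo):

```python
import glob, os, tempfile, zipfile

with tempfile.TemporaryDirectory() as tmp:
    root = os.path.join(tmp, "Release")
    os.makedirs(os.path.join(root, "locale"))
    for rel in ("tasks.exe", os.path.join("locale", "Tasks.en")):
        with open(os.path.join(root, rel), "w") as fh:
            fh.write("stub")

    archive = os.path.join(tmp, "out.zip")
    with zipfile.ZipFile(archive, "w") as zf:
        for pattern in ("tasks.exe", "locale/Tasks.*"):
            for name in glob.iglob(os.path.join(root, pattern)):
                # Store under bin/<path relative to the release root>.
                zf.write(name, os.path.join("bin", os.path.relpath(name, root)))

    with zipfile.ZipFile(archive) as zf:
        names = sorted(zf.namelist())

print(names)   # ['bin/locale/Tasks.en', 'bin/tasks.exe']
```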
# -----------------------------------------------------------------------------
# File: test/image_sampler_test.py (repo: Enigmatisms/NeRF, license: Apache-2.0)
# -----------------------------------------------------------------------------
#-*-coding:utf-8-*-
"""
@author Enigmatisms @date 2022.3.23
Last testing module for image sampler
"""
import torch
import numpy as np
from time import time
from nerf_helper import imageSampling
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from instances import BLENDER_FOV, Rs, ts
from torchvision import transforms
import sys
sys.path.append("..")
from py.utils import fov2Focal
from py.dataset import CustomDataSet
POINT_TO_SAMPLE = 16
IMAGE_SIZE = 50
NEAR_T, FAR_T = 2., 6.
INITIAL_SKIP = 0
if __name__ == "__main__":
torch.set_default_tensor_type(torch.FloatTensor)
# tf = np.concatenate((Rs[-1], ts[-1]), axis = -1)
# tf = torch.from_numpy(tf).float().cuda()
dataset = CustomDataSet("../../dataset/nerf_synthetic/%s/"%("lego"), transforms.ToTensor(), True, use_alpha = True)
cam_fov_train, train_cam_tf, _ = dataset.get_dataset(to_cuda = True)
focal = fov2Focal(cam_fov_train, IMAGE_SIZE)
axis = plt.axes(projection='3d')
colors = ('r', 'g', 'b', 'y', 'p')
for k in range(4):
output = torch.zeros(IMAGE_SIZE, IMAGE_SIZE, POINT_TO_SAMPLE, 6, dtype = torch.float32).cuda()
lengths = torch.zeros(IMAGE_SIZE, IMAGE_SIZE, POINT_TO_SAMPLE, dtype = torch.float32).cuda()
imageSampling(train_cam_tf[k], output, lengths, IMAGE_SIZE, IMAGE_SIZE, POINT_TO_SAMPLE, focal, NEAR_T, FAR_T)
for i in range(IMAGE_SIZE):
for j in range(IMAGE_SIZE):
point_list = output[i, j].cpu().numpy()
axis.plot3D(point_list[:, 0], point_list[:, 1], point_list[:, 2], c = colors[k], alpha = 0.7)
# axis.scatter(point_list[:, 0], point_list[:, 1], point_list[:, 2], c = 'r', s = 2)
axis.set_zlabel('Z')
axis.set_ylabel('Y')
axis.set_xlabel('X')
plt.show()
# File: setup.py (repo: what-digital/aldryn-simple-glossary, license: MIT)
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
from aldryn_simple_glossary import __version__
REQUIREMENTS = [
'Django>=1.7',
'djangocms-text-ckeditor',
]
setup(
name='aldryn-simple-glossary',
version=__version__,
description=open('README.rst').read(),
author='Divio AG',
author_email='info@divio.com',
packages=find_packages(),
platforms=['OS Independent'],
install_requires=REQUIREMENTS,
include_package_data=True,
zip_safe=False,
)
# File: markote/onenote_html_mapper.py (repo: Frederick-S/one-mark, license: MIT)
import io
import uuid
from PIL import Image
from pyquery import PyQuery
from markote.oauth import oauth
from markote.resource import Resource
from markote.util import convert_svg_to_png
class OneNoteHtmlMapper(object):
"""
Converts standard html to OneNote supported html.
"""
def __init__(self, document):
self.document = document
self.resources = []
def convert(self):
self._convert_svg_to_resources()
self._convert_local_image_to_resources()
self._move_inline_images_to_table()
if self._only_contains_table():
self._table_only_content_hack()
def get_html(self):
return self.document.outer_html()
def _convert_local_image_to_resources(self):
images = [PyQuery(image) for image in self.document.find('img')]
for image in images:
src = image.attr('src')
if src.lower().startswith('http') or src.startswith('name:'):
continue
try:
file_name = src.split('/')[-2]
oauth_client = oauth.microsoft_graph
response = oauth_client.get(
'me/drive/special/approot:/{0}:/content'.format(file_name))
if response.status_code == 200:
name = uuid.uuid4().hex
image.attr('src', 'name:{0}'.format(name))
content_type = response.headers['Content-Type']
self.resources.append(
Resource(name, response.content, content_type))
except Exception as e:
print(e)
def _convert_svg_to_resources(self):
self.resources = [self._convert_svg_to_resource(svg_element)
for svg_element in self.document.find('svg')]
def _convert_svg_to_resource(self, svg_element):
"""
Converts svg element to image.
"""
name = uuid.uuid4().hex
element = PyQuery(svg_element)
svg = element.outer_html().replace('viewbox', 'viewBox')
element.replace_with(PyQuery('<img src="name:{0}" />'.format(name)))
return Resource(name, convert_svg_to_png(svg), 'image/png')
def _move_inline_images_to_table(self):
"""
OneNote doesn't support inline images, so we put inline images to
a table row.
"""
images = self.document.find('img')
parents = [tuple(PyQuery(image).parent()) for image in images]
parents = list(map(lambda x: PyQuery(x[0]), list(set(parents))))
parents = list(filter(self._has_inline_images, parents))
for parent in parents:
table = PyQuery('<table></table>')
table.append(self._create_table_row_with_inline_images(
parent.contents()))
parent.replace_with(table)
def _has_inline_images(self, element):
"""
Check if the element contains inline images.
The element is a parent element which already contains images,
so we only need to check if the length of all children is greater
than the size of all images.
"""
return element.contents().length > element.find('img').length
def _create_table_cell_with_elements(self, elements):
cell = PyQuery('<td></td>')
for element in elements:
cell.append(element)
return cell
def _create_table_row_with_inline_images(self, contents):
"""
Move inline images and other elements into a table row.
"""
children_so_far = []
row = PyQuery('<tr></tr>')
for content in contents:
if hasattr(content, 'tag') and content.tag == 'img':
if len(children_so_far) != 0:
row.append(self._create_table_cell_with_elements(
children_so_far))
children_so_far = []
row.append(
'<td>{0}</td>'.format(PyQuery(content).outer_html()))
else:
children_so_far.append(content)
if len(children_so_far) != 0:
row.append(self._create_table_cell_with_elements(
children_so_far))
return row
def _only_contains_table(self):
"""
Check if the document only contains tables.
"""
return all(element.tag == 'table'
for element in self.document.children())
def _table_only_content_hack(self):
"""
Hack for https://stackoverflow.com/questions/50789978.
If the document only contains a table element, then create
a 1*1 image element and append it to the document.
"""
file = io.BytesIO()
image = Image.new('RGB', (1, 1), color='white')
image.save(file, 'png')
file.seek(0)
name = uuid.uuid4().hex
element = PyQuery('<img src="name:{0}" />'.format(name))
self.document.append(element)
self.resources.append(Resource(name, file, 'image/png'))
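# _create_table_row_with_inline_images above walks the mixed children of a node and
# splits them into table cells at every image. The grouping logic can be sketched
# without pyquery as a plain list partition (hypothetical token stream, where the
# string "img" marks an image):

```python
def split_at_images(tokens):
    """Group tokens into cells: each image gets its own cell; runs of
    non-image tokens between images share one cell."""
    cells, run = [], []
    for tok in tokens:
        if tok == "img":
            if run:
                cells.append(run)
                run = []
            cells.append([tok])
        else:
            run.append(tok)
    if run:
        cells.append(run)
    return cells

print(split_at_images(["text", "img", "more", "text", "img"]))
# [['text'], ['img'], ['more', 'text'], ['img']]
```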
# File: dace/fpga_testing.py (repo: jdahm/dace, license: BSD-3-Clause)
# Copyright 2019-2021 ETH Zurich and the DaCe authors. All rights reserved.
from datetime import datetime
import importlib.util
import inspect
import os
import multiprocessing as mp
from pathlib import Path
import pytest
import re
import subprocess as sp
from typing import Callable, Iterable, Optional, Tuple, Union
from dace import SDFG
from dace.config import Config, temporary_config
TEST_TIMEOUT = 600 # Timeout tests after 10 minutes
class Colors:
SUCCESS = "\033[92m"
STATUS = "\033[94m"
ERROR = "\033[91m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
END = "\033[0m"
def print_status(message):
timestamp = datetime.now().strftime("%H:%M:%S")
print(f"{Colors.STATUS}{Colors.BOLD}[{timestamp}]{Colors.END} {message}")
def print_success(message):
timestamp = datetime.now().strftime("%H:%M:%S")
print(f"{Colors.SUCCESS}{Colors.BOLD}[{timestamp}]{Colors.END} {message}")
def print_error(message):
timestamp = datetime.now().strftime("%H:%M:%S")
print(f"{Colors.ERROR}{Colors.BOLD}[{timestamp}]{Colors.END} {message}")
def dump_logs(proc_or_logs: Union[sp.CompletedProcess, Tuple[str, str]]):
if isinstance(proc_or_logs, tuple):
log_out, log_err = proc_or_logs
else:
proc_or_logs.terminate()
proc_or_logs.kill()
try:
log_out, log_err = proc_or_logs.communicate(timeout=10)
except sp.TimeoutExpired:
return None # Failed to even kill the process
if log_out:
print(log_out)
if log_err:
print(log_err)
return log_out, log_err
# https://stackoverflow.com/a/33599967/2949968
class FPGATestProcess(mp.Process):
def __init__(self, *args, **kwargs):
mp.Process.__init__(self, *args, **kwargs)
self._pconn, self._cconn = mp.Pipe()
self._exception = None
def run(self):
try:
ret = mp.Process.run(self)
self._cconn.send(ret)
except Exception as e:
self._cconn.send(e)
raise e
@property
def exception(self):
if self._pconn.poll():
self._exception = self._pconn.recv()
return self._exception
class TestFailed(Exception):
pass
def raise_error(message):
print_error(message)
raise TestFailed(message)
def _run_fpga_test(vendor: str,
test_function: Callable,
run_synthesis: bool = True,
assert_ii_1: bool = True):
path = Path(inspect.getfile(test_function))
base_name = f"{path.stem}::{Colors.UNDERLINE}{test_function.__name__}{Colors.END}"
with temporary_config():
Config.set("compiler", "use_cache", value=False)
Config.set("cache", value="unique")
Config.set("optimizer", "transform_on_call", value=False)
Config.set("optimizer", "interface", value=None)
Config.set("optimizer", "autooptimize", value=False)
if vendor == "xilinx":
Config.set("compiler", "fpga_vendor", value="xilinx")
Config.set("compiler", "xilinx", "mode", value="simulation")
# Simulation in software
print_status(f"{base_name} [Xilinx]: Running simulation.")
if "rtl" in path.parts:
Config.set("compiler",
"xilinx",
"mode",
value="hardware_emulation")
if "LIBRARY_PATH" not in os.environ:
os.environ["LIBRARY_PATH"] = ""
library_path_backup = None
else:
library_path_backup = os.environ["LIBRARY_PATH"]
os.environ["LIBRARY_PATH"] += ":/usr/lib/x86_64-linux-gnu"
sdfgs = test_function()
if "rtl" in path.parts:
if library_path_backup is None:
del os.environ["LIBRARY_PATH"]
else:
os.environ["LIBRARY_PATH"] = library_path_backup
if sdfgs is None:
raise_error("No SDFG(s) returned by FPGA test.")
elif isinstance(sdfgs, SDFG):
sdfgs = [sdfgs]
print_success(f"{base_name} [Xilinx]: " "Simulation successful.")
for sdfg in sdfgs:
build_folder = Path(sdfg.build_folder) / "build"
if not build_folder.exists():
raise_error(f"Build folder {build_folder} "
f"not found for {base_name}.")
# High-level synthesis
if run_synthesis:
print_status(f"{base_name} [Xilinx]: Running high-level "
f"synthesis for {sdfg.name}.")
try:
proc = sp.Popen(["make", "synthesis"],
cwd=build_folder,
stdout=sp.PIPE,
stderr=sp.PIPE,
encoding="utf-8")
syn_out, syn_err = proc.communicate(
timeout=TEST_TIMEOUT)
except sp.TimeoutExpired:
dump_logs(proc)
raise_error(f"{base_name} [Xilinx]: High-level "
f"synthesis timed out after "
f"{TEST_TIMEOUT} seconds.")
if proc.returncode != 0:
dump_logs(proc)
raise_error(f"{base_name} [Xilinx]: High-level "
f"synthesis failed.")
print_success(f"{base_name} [Xilinx]: High-level "
f"synthesis successful for "
f"{sdfg.name}.")
open(build_folder / "synthesis.out", "w").write(syn_out)
open(build_folder / "synthesis.err", "w").write(syn_err)
# Check if loops were pipelined with II=1
if assert_ii_1:
loops_found = False
for f in build_folder.iterdir():
if "hls.log" in f.name:
hls_log = f
break
else:
raise_error(f"{base_name} [Xilinx]: HLS "
f"log file not found.")
hls_log = open(hls_log, "r").read()
for m in re.finditer(r"Final II = ([0-9]+)", hls_log):
loops_found = True
if int(m.group(1)) != 1:
dump_logs((syn_out, syn_err))
raise_error(f"{base_name} [Xilinx]: "
f"Failed to achieve II=1.")
if not loops_found:
dump_logs((syn_out, syn_err))
raise_error(f"{base_name} [Xilinx]: No "
f"pipelined loops found.")
print_success(f"{base_name} [Xilinx]: II=1 "
f"achieved.")
elif vendor == "intel_fpga":
# Set environment variables
Config.set("compiler", "fpga_vendor", value="intel_fpga")
Config.set("compiler", "default_data_types", value="C")
Config.set("compiler", "intel_fpga", "mode", value="emulator")
# Simulation in software
print_status(f"{base_name} [Intel FPGA]: Running " f"emulation.")
test_function()
print_success(f"{base_name} [Intel FPGA]: Emulation "
f"successful.")
else:
raise ValueError(f"Unrecognized vendor {vendor}.")
def fpga_test(run_synthesis: bool = True,
assert_ii_1: bool = True,
xilinx: bool = True,
intel: bool = True):
"""
Decorator to run an FPGA test with pytest, setting the appropriate
variables and performing additional checks, such as running HLS and
asserting II=1. The test function must return an SDFG or a list of SDFGs
that will be used for this check.
:param run_synthesis: Whether to run HLS for Xilinx tests (Intel tests will always run synthesis).
:param assert_ii_1: Assert that all loops have been fully pipelined (currently only implemented for Xilinx).
:param xilinx: Run as a Xilinx test.
:param intel: Run as an Intel test.
"""
# Check arguments
if not xilinx and not intel:
raise ValueError("FPGA test must be run for Xilinx, Intel, or both.")
pytest_params = []
if xilinx:
pytest_params.append("xilinx")
if intel:
pytest_params.append("intel_fpga")
def decorator(test_function: Callable):
@pytest.mark.fpga
@pytest.mark.parametrize("vendor", pytest_params)
def wrapper(vendor: Optional[str]):
if vendor is None:
vendor = Config.get("compiler", "fpga_vendor")
p = FPGATestProcess(target=_run_fpga_test,
args=(vendor, test_function, run_synthesis,
assert_ii_1))
p.start()
p.join(timeout=TEST_TIMEOUT)
if p.is_alive():
p.kill()
raise_error(f"Test {Colors.UNDERLINE}{test_function.__name__}"
f"{Colors.END} timed out.")
if p.exception:
raise p.exception
return wrapper
return decorator
def xilinx_test(*args, **kwargs):
return fpga_test(*args, xilinx=True, intel=False, **kwargs)
def intel_fpga_test(*args, **kwargs):
return fpga_test(*args, xilinx=False, intel=True, **kwargs)
def import_sample(path: Union[Path, str]):
"""
Import a Python file from the samples directory as a module so it can be
used in a test.
:param path: Path relative to the DaCe samples directory.
"""
path = Path(__file__).parent.parent / "samples" / Path(path)
if not path.exists():
raise ValueError(f"Sample {path} not found.")
name = path.stem
spec = importlib.util.spec_from_file_location(name, path)
loaded_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(loaded_module)
return loaded_module
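# The II=1 assertion above scans the HLS log with re.finditer(r"Final II = ([0-9]+)", ...).
# A standalone sketch of that check against a made-up log excerpt (the log text below is
# fabricated for illustration, not real Vivado HLS output):

```python
import re

def all_loops_ii1(hls_log: str) -> bool:
    """Return True iff the log reports at least one pipelined loop
    and every reported initiation interval equals 1."""
    iis = [int(m.group(1))
           for m in re.finditer(r"Final II = ([0-9]+)", hls_log)]
    return bool(iis) and all(ii == 1 for ii in iis)

log = "INFO: loop_a Final II = 1\nINFO: loop_b Final II = 3\n"  # fabricated
print(all_loops_ii1(log))          # False (loop_b has II=3)
print(all_loops_ii1("no loops"))   # False (nothing pipelined at all)
```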
# File: assignments/tutorialSpaceInvaders/tutorialSpaceInvaders.py (repo: noelleclement/pythoncourse, license: MIT)
import pyglet
from pyglet.window import key, FPSDisplay
import numpy as np
def preload_image(image):
img = pyglet.image.load('resources/sprites/'+image)
return img
class GameWindow(pyglet.window.Window):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.set_location(150,50)
self.frame_rate = 1/60.0
#self.fps_display = FPSDisplay(self)
#self.fps_display.label.font_size = 50
#self.main_batch = pyglet.graphics.Batch()
self.right = False
self.left = False
self.player_speed = 300
#self.fire = False
#self.player_fire_rate = 0
self.player_img = preload_image('PlayerShip.png')
self.player_spr = pyglet.sprite.Sprite(self.player_img)
self.player = GameAttribute(500, 100, 0, 0, self.player_spr) # why can the image be passed as a later parameter? (otherwise it raises an error)
'''
self.player_laser_list = [] # not a great data structure? reasonably handled by the if statement in update_player_laser
self.player_laser_img = preload_image('laser.png')
self.player_laser_sprite = pyglet.sprite.Sprite(self.player_laser_img)
self.player_laser = GameAttribute(self.player.pos_x+(self.player.width/2.0-(self.player_laser_img.width/2.0)), self.player.pos_y+self.player.height, 0, 0, self.player_laser_sprite)
'''
# the rest of player_laser_list is handled in def player_fire
self.space_img = preload_image('space.jpg')
self.space_list = []
self.space_sprite = pyglet.sprite.Sprite(self.space_img)
#initiate scrolling background
for spc in range(2):
self.space_list.append(GameAttribute(0, spc*1200, 0, -100, self.space_sprite))
'''
self.enemy_ship = preload_image('enemyShip_Sh01.png')
self.enemy_ship_seq = pyglet.image.ImageGrid(self.enemy_ship, 1, 15, item_width=100, item_height=100)
self.enemy_ship_anim = pyglet.image.Animation.from_image_sequence(self.enemy_ship_seq[0:], 0.1, loop=True)
self.enemy_ship_sprite = pyglet.sprite.Sprite(self.enemy_ship_anim, np.random.randint(200,600), np.random.randint(500,800), batch=self.main_batch)
self.enemy_ship_list = []
for enship in range(5): # maybe use a variable for the number of enemies
self.enemy_ship_list.append(self.enemy_ship_sprite)
'''
def on_key_press(self, symbol, modifiers):
if symbol == key.RIGHT:
self.right = True
if symbol == key.LEFT:
self.left = True
#if symbol == key.SPACE:
# self.fire = True
if symbol == key.ESCAPE:
pyglet.app.exit()
def on_key_release(self, symbol, modifiers):
if symbol == key.RIGHT: # how does this work, why doesn't it have to be != key.RIGHT? (also applies to LEFT)
self.right = False
if symbol == key.LEFT:
self.left = False
#if symbol == key.SPACE:
# self.fire = False
def on_draw(self):
self.clear()
for spc in self.space_list:
spc.draw()
self.player.draw() #later make an encompassing class with all sprites?
#for lsr in self.player_laser_list:
# lsr.draw()
#self.main_batch.draw()
#self.fps_display.draw()
def update_player(self, dt):
self.player.update(dt)
if self.right and self.player.pos_x < 1000 - self.player.width: # integrate the window size here (it now stops 200 pixels from the right edge, relative to the left side of the image)
self.player.pos_x += self.player_speed * dt
if self.left and self.player.pos_x > 100: # integrate the window size + image size here (it now stops 100 pixels from the left edge, relative to the left side of the image)
self.player.pos_x -= self.player_speed * dt
'''
def update_player_laser(self, dt):
for lsr in self.player_laser_list:
lsr.update(dt)
lsr.pos_y += 400 * dt
if lsr.pos_y > 900: # add something for the screen size here
self.player_laser_list.remove(lsr)
def player_fire(self, dt):
self.player_fire_rate -= dt
if self.player_fire_rate <= 0:
self.player_laser_list.append(self.player_laser)
self.player_fire_rate += 0.2 # why exactly 0.2? could this be relative to something?
'''
def update_space(self,dt):
for spc in self.space_list:
spc.update(dt)
if spc.pos_y <= -1300: # add window size here
self.space_list.remove(spc) # if the bottom image is below the screen, remove it and create a new image added to the list (see below)
self.space_list.append(GameAttribute(0, 1100, 0, -100, self.space_sprite)) # is this an elegant way to solve the scrolling (creating a new background)? is the list just growing?
def update(self, dt):
self.update_player(dt)
#if self.fire:
# self.player_fire(dt)
#self.update_player_laser(dt)
self.update_space(dt)
class GameAttribute:
def __init__(self, pos_x, pos_y, vel_x, vel_y, sprite=None): # I don't quite get yet why the sprite is called sprite (it was first called image, but is now a separate function for refactoring)
self.pos_x = pos_x #start position xcoor based on given parameter
self.pos_y = pos_y #start position xcoor based on given parameter
self.vel_x = vel_x #velocity x
self.vel_y = vel_y #velocity y
if sprite is not None:
self.sprite = sprite
self.sprite.x = self.pos_x
self.sprite.y = self.pos_y
self.width = self.sprite.width
self.height = self.sprite.height
def draw(self):
self.sprite.draw()
def update(self, dt): #dt = deltatime
self.pos_x += self.vel_x*dt
self.pos_y += self.vel_y*dt
self.sprite.x = self.pos_x
self.sprite.y = self.pos_y
if __name__ == "__main__":
window = GameWindow(1200, 800, "Space Invaders", resizable=False)
pyglet.clock.schedule_interval(window.update, window.frame_rate)
pyglet.app.run()
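The comment in `update_space` above asks whether appending a fresh `GameAttribute` on every wrap is elegant, or whether the list just keeps growing. A common alternative is to keep the same two background tiles forever and wrap their y positions instead, so nothing is ever removed or re-created. A minimal sketch of that wrap rule (pure Python; the function name and signature are illustrative, not part of the original game):

```python
def wrap_scroll_y(pos_y, vel_y, dt, tile_height):
    """Advance one background tile; once it has scrolled a full tile
    height below its origin, jump it back above the other tile."""
    pos_y += vel_y * dt
    if pos_y <= -tile_height:
        pos_y += 2 * tile_height  # two stacked tiles cover 2 * tile_height
    return pos_y
```

With two tiles of height 1200 scrolling at -100 px/s, both positions stay inside [-1200, 1200] indefinitely, with no per-wrap allocations.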
"""
to do:
- window relatief tot beeldscherm maken (full screen of x-*iets* etc) (let op background, die is y1200 > lijkt niet uit te maken want heeft meer te maken met image)
- ik heb nu sprite functies in het algemene gameattribute staan, dit is niet de bedoeling
- framerate meer realtime (zie assignment)
- alles door pushen (in algemene attributes een update, maar ook in elke attribute zelf? en dan link je die allemaal en dan hoef je in update alleen maar die ene attribute aan te roepen?)
- player als onderdeel van een grotere groep sprites
- space aparte attribute
- naam: geen window, maar wat je ziet
- niet van window erven ( proberen niet te veel afhankelijk van pyglet.window.Window, let op uitbreidbaarheid + vernieuwbaarheid, maar alleen als nog tijd over)
- centrale key handler
- kijken of def preload_image op een betere plek kan (en als binnen een classe, dan nog bruikbaar erbuiten?)
- kijken of bepaalde variabelen centraal bepaald kunnen worden (makkelijk aan te passen)
- misschien kan ik de velocity meegeven met de update (dus in gameattribute in de update wordt de velocity dan meegerekend) > doet ie nu al maar dat kan voor player bijv handiger zijn want nu wordt dat allemaal in update player gedaan
""" | 45.039548 | 234 | 0.622303 | import pyglet
from pyglet.window import key, FPSDisplay
import numpy as np
def preload_image(image):
img = pyglet.image.load('resources/sprites/'+image)
return img
class GameWindow(pyglet.window.Window):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.set_location(150,50)
self.frame_rate = 1/60.0
self.right = False
self.left = False
self.player_speed = 300
self.player_img = preload_image('PlayerShip.png')
self.player_spr = pyglet.sprite.Sprite(self.player_img)
self.player = GameAttribute(500, 100, 0, 0, self.player_spr)
self.space_img = preload_image('space.jpg')
self.space_list = []
self.space_sprite = pyglet.sprite.Sprite(self.space_img)
for spc in range(2):
self.space_list.append(GameAttribute(0, spc*1200, 0, -100, self.space_sprite))
def on_key_press(self, symbol, modifiers):
if symbol == key.RIGHT:
self.right = True
if symbol == key.LEFT:
self.left = True
if symbol == key.ESCAPE:
pyglet.app.exit()
def on_key_release(self, symbol, modifiers):
if symbol == key.RIGHT:
self.right = False
if symbol == key.LEFT:
self.left = False
def on_draw(self):
self.clear()
for spc in self.space_list:
spc.draw()
self.player.draw()
def update_player(self, dt):
self.player.update(dt)
if self.right and self.player.pos_x < 1000 - self.player.width:
self.player.pos_x += self.player_speed * dt
if self.left and self.player.pos_x > 100:
self.player.pos_x -= self.player_speed * dt
def update_space(self,dt):
for spc in self.space_list:
spc.update(dt)
if spc.pos_y <= -1300:
self.space_list.remove(spc)
self.space_list.append(GameAttribute(0, 1100, 0, -100, self.space_sprite))
def update(self, dt):
self.update_player(dt)
self.update_space(dt)
class GameAttribute:
def __init__(self, pos_x, pos_y, vel_x, vel_y, sprite=None):
self.pos_x = pos_x
self.pos_y = pos_y
self.vel_x = vel_x
self.vel_y = vel_y
if sprite is not None:
self.sprite = sprite
self.sprite.x = self.pos_x
self.sprite.y = self.pos_y
self.width = self.sprite.width
self.height = self.sprite.height
def draw(self):
self.sprite.draw()
def update(self, dt):
self.pos_x += self.vel_x*dt
self.pos_y += self.vel_y*dt
self.sprite.x = self.pos_x
self.sprite.y = self.pos_y
if __name__ == "__main__":
window = GameWindow(1200, 800, "Space Invaders", resizable=False)
pyglet.clock.schedule_interval(window.update, window.frame_rate)
pyglet.app.run()
| true | true |
f7fbe2e15e3f55c65d5589bf082424b9e93dbe23 | 13,708 | py | Python | homeassistant/components/emulated_hue/hue_api.py | robin13/home-assistant | 4976569e304c23975d34ec88e2dfb94e84ab1f1c | [
"Apache-2.0"
] | 7 | 2018-08-03T10:15:36.000Z | 2019-03-25T13:31:55.000Z | homeassistant/components/emulated_hue/hue_api.py | robin13/home-assistant | 4976569e304c23975d34ec88e2dfb94e84ab1f1c | [
"Apache-2.0"
] | 6 | 2021-02-08T20:25:50.000Z | 2022-03-11T23:27:53.000Z | homeassistant/components/emulated_hue/hue_api.py | robin13/home-assistant | 4976569e304c23975d34ec88e2dfb94e84ab1f1c | [
"Apache-2.0"
] | 3 | 2018-09-14T07:34:09.000Z | 2018-09-29T12:57:10.000Z | """Provides a Hue API to control Home Assistant."""
import asyncio
import logging
from aiohttp import web
from homeassistant import core
from homeassistant.const import (
ATTR_ENTITY_ID, SERVICE_TURN_OFF, SERVICE_TURN_ON, SERVICE_VOLUME_SET,
SERVICE_OPEN_COVER, SERVICE_CLOSE_COVER, STATE_ON, STATE_OFF,
HTTP_BAD_REQUEST, HTTP_NOT_FOUND, ATTR_SUPPORTED_FEATURES,
)
from homeassistant.components.light import (
ATTR_BRIGHTNESS, SUPPORT_BRIGHTNESS
)
from homeassistant.components.media_player import (
ATTR_MEDIA_VOLUME_LEVEL, SUPPORT_VOLUME_SET,
)
from homeassistant.components.fan import (
ATTR_SPEED, SUPPORT_SET_SPEED, SPEED_OFF, SPEED_LOW,
SPEED_MEDIUM, SPEED_HIGH
)
from homeassistant.components.http import HomeAssistantView
_LOGGER = logging.getLogger(__name__)
HUE_API_STATE_ON = 'on'
HUE_API_STATE_BRI = 'bri'
class HueUsernameView(HomeAssistantView):
"""Handle requests to create a username for the emulated hue bridge."""
url = '/api'
name = 'emulated_hue:api:create_username'
extra_urls = ['/api/']
requires_auth = False
@asyncio.coroutine
def post(self, request):
"""Handle a POST request."""
try:
data = yield from request.json()
except ValueError:
return self.json_message('Invalid JSON', HTTP_BAD_REQUEST)
if 'devicetype' not in data:
return self.json_message('devicetype not specified',
HTTP_BAD_REQUEST)
return self.json([{'success': {'username': '12345678901234567890'}}])
class HueGroupView(HomeAssistantView):
"""Group handler to get Logitech Pop working."""
url = '/api/{username}/groups/0/action'
name = 'emulated_hue:groups:state'
requires_auth = False
def __init__(self, config):
"""Initialize the instance of the view."""
self.config = config
@core.callback
def put(self, request, username):
"""Process a request to make the Logitech Pop working."""
return self.json([{
'error': {
'address': '/groups/0/action/scene',
'type': 7,
'description': 'invalid value, dummy for parameter, scene'
}
}])
class HueAllLightsStateView(HomeAssistantView):
"""Handle requests for getting and setting info about entities."""
url = '/api/{username}/lights'
name = 'emulated_hue:lights:state'
requires_auth = False
def __init__(self, config):
"""Initialize the instance of the view."""
self.config = config
@core.callback
def get(self, request, username):
"""Process a request to get the list of available lights."""
hass = request.app['hass']
json_response = {}
for entity in hass.states.async_all():
if self.config.is_entity_exposed(entity):
state, brightness = get_entity_state(self.config, entity)
number = self.config.entity_id_to_number(entity.entity_id)
json_response[number] = entity_to_json(
self.config, entity, state, brightness)
return self.json(json_response)
class HueOneLightStateView(HomeAssistantView):
"""Handle requests for getting and setting info about entities."""
url = '/api/{username}/lights/{entity_id}'
name = 'emulated_hue:light:state'
requires_auth = False
def __init__(self, config):
"""Initialize the instance of the view."""
self.config = config
@core.callback
def get(self, request, username, entity_id):
"""Process a request to get the state of an individual light."""
hass = request.app['hass']
entity_id = self.config.number_to_entity_id(entity_id)
entity = hass.states.get(entity_id)
if entity is None:
_LOGGER.error('Entity not found: %s', entity_id)
return web.Response(text="Entity not found", status=404)
if not self.config.is_entity_exposed(entity):
_LOGGER.error('Entity not exposed: %s', entity_id)
return web.Response(text="Entity not exposed", status=404)
state, brightness = get_entity_state(self.config, entity)
json_response = entity_to_json(self.config, entity, state, brightness)
return self.json(json_response)
class HueOneLightChangeView(HomeAssistantView):
"""Handle requests for getting and setting info about entities."""
url = '/api/{username}/lights/{entity_number}/state'
name = 'emulated_hue:light:state'
requires_auth = False
def __init__(self, config):
"""Initialize the instance of the view."""
self.config = config
@asyncio.coroutine
def put(self, request, username, entity_number):
"""Process a request to set the state of an individual light."""
config = self.config
hass = request.app['hass']
entity_id = config.number_to_entity_id(entity_number)
if entity_id is None:
_LOGGER.error('Unknown entity number: %s', entity_number)
return self.json_message('Entity not found', HTTP_NOT_FOUND)
entity = hass.states.get(entity_id)
if entity is None:
_LOGGER.error('Entity not found: %s', entity_id)
return self.json_message('Entity not found', HTTP_NOT_FOUND)
if not config.is_entity_exposed(entity):
_LOGGER.error('Entity not exposed: %s', entity_id)
return web.Response(text="Entity not exposed", status=404)
try:
request_json = yield from request.json()
except ValueError:
_LOGGER.error('Received invalid json')
return self.json_message('Invalid JSON', HTTP_BAD_REQUEST)
# Parse the request into requested "on" status and brightness
parsed = parse_hue_api_put_light_body(request_json, entity)
if parsed is None:
_LOGGER.error('Unable to parse data: %s', request_json)
return web.Response(text="Bad request", status=400)
result, brightness = parsed
# Choose general HA domain
domain = core.DOMAIN
# Entity needs separate call to turn on
turn_on_needed = False
# Convert the resulting "on" status into the service we need to call
service = SERVICE_TURN_ON if result else SERVICE_TURN_OFF
# Construct what we need to send to the service
data = {ATTR_ENTITY_ID: entity_id}
# Make sure the entity actually supports brightness
entity_features = entity.attributes.get(ATTR_SUPPORTED_FEATURES, 0)
if entity.domain == "light":
if entity_features & SUPPORT_BRIGHTNESS:
if brightness is not None:
data[ATTR_BRIGHTNESS] = brightness
# If the requested entity is a script add some variables
elif entity.domain == "script":
data['variables'] = {
'requested_state': STATE_ON if result else STATE_OFF
}
if brightness is not None:
data['variables']['requested_level'] = brightness
# If the requested entity is a media player, convert to volume
elif entity.domain == "media_player":
if entity_features & SUPPORT_VOLUME_SET:
if brightness is not None:
turn_on_needed = True
domain = entity.domain
service = SERVICE_VOLUME_SET
# Convert 0-100 to 0.0-1.0
data[ATTR_MEDIA_VOLUME_LEVEL] = brightness / 100.0
# If the requested entity is a cover, convert to open_cover/close_cover
elif entity.domain == "cover":
domain = entity.domain
if service == SERVICE_TURN_ON:
service = SERVICE_OPEN_COVER
else:
service = SERVICE_CLOSE_COVER
# If the requested entity is a fan, convert to speed
elif entity.domain == "fan":
if entity_features & SUPPORT_SET_SPEED:
if brightness is not None:
domain = entity.domain
# Convert 0-100 to a fan speed
if brightness == 0:
data[ATTR_SPEED] = SPEED_OFF
elif 0 < brightness <= 33.3:
data[ATTR_SPEED] = SPEED_LOW
elif 33.3 < brightness <= 66.6:
data[ATTR_SPEED] = SPEED_MEDIUM
elif 66.6 < brightness <= 100:
data[ATTR_SPEED] = SPEED_HIGH
if entity.domain in config.off_maps_to_on_domains:
# Map the off command to on
service = SERVICE_TURN_ON
# Caching is required because things like scripts and scenes won't
# report as "off" to Alexa if an "off" command is received, because
# they'll map to "on". Thus, instead of reporting its actual
# status, we report what Alexa will want to see, which is the same
# as the actual requested command.
config.cached_states[entity_id] = (result, brightness)
# Separate call to turn on needed
if turn_on_needed:
hass.async_add_job(hass.services.async_call(
core.DOMAIN, SERVICE_TURN_ON, {ATTR_ENTITY_ID: entity_id},
blocking=True))
hass.async_add_job(hass.services.async_call(
domain, service, data, blocking=True))
json_response = \
[create_hue_success_response(entity_id, HUE_API_STATE_ON, result)]
if brightness is not None:
json_response.append(create_hue_success_response(
entity_id, HUE_API_STATE_BRI, brightness))
return self.json(json_response)
def parse_hue_api_put_light_body(request_json, entity):
"""Parse the body of a request to change the state of a light."""
if HUE_API_STATE_ON in request_json:
if not isinstance(request_json[HUE_API_STATE_ON], bool):
return None
if request_json['on']:
# Echo requested device be turned on
brightness = None
report_brightness = False
result = True
else:
# Echo requested device be turned off
brightness = None
report_brightness = False
result = False
if HUE_API_STATE_BRI in request_json:
try:
# Clamp brightness from 0 to 255
brightness = \
max(0, min(int(request_json[HUE_API_STATE_BRI]), 255))
except ValueError:
return None
# Make sure the entity actually supports brightness
entity_features = entity.attributes.get(ATTR_SUPPORTED_FEATURES, 0)
if entity.domain == "light":
if entity_features & SUPPORT_BRIGHTNESS:
report_brightness = True
result = (brightness > 0)
elif entity.domain == "scene":
brightness = None
report_brightness = False
result = True
elif (entity.domain == "script" or
entity.domain == "media_player" or
entity.domain == "fan"):
# Convert 0-255 to 0-100
level = brightness / 255 * 100
brightness = round(level)
report_brightness = True
result = True
return (result, brightness) if report_brightness else (result, None)
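The fan branch in `HueOneLightChangeView.put` buckets a 0-100 level into discrete speeds at the 33.3 and 66.6 thresholds. The same mapping as a standalone helper (illustrative only; the real handler assigns the `SPEED_*` constants directly rather than these placeholder strings):

```python
def percent_to_fan_speed(level):
    """Bucket a 0-100 level into a fan speed name, mirroring the
    thresholds used in HueOneLightChangeView."""
    if level == 0:
        return 'off'
    if 0 < level <= 33.3:
        return 'low'
    if 33.3 < level <= 66.6:
        return 'medium'
    return 'high'
```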
def get_entity_state(config, entity):
"""Retrieve and convert state and brightness values for an entity."""
cached_state = config.cached_states.get(entity.entity_id, None)
if cached_state is None:
final_state = entity.state != STATE_OFF
final_brightness = entity.attributes.get(
ATTR_BRIGHTNESS, 255 if final_state else 0)
# Make sure the entity actually supports brightness
entity_features = entity.attributes.get(ATTR_SUPPORTED_FEATURES, 0)
if entity.domain == "light":
if entity_features & SUPPORT_BRIGHTNESS:
pass
elif entity.domain == "media_player":
level = entity.attributes.get(
ATTR_MEDIA_VOLUME_LEVEL, 1.0 if final_state else 0.0)
# Convert 0.0-1.0 to 0-255
final_brightness = round(min(1.0, level) * 255)
elif entity.domain == "fan":
speed = entity.attributes.get(ATTR_SPEED, 0)
# Convert 0.0-1.0 to 0-255
final_brightness = 0
if speed == SPEED_LOW:
final_brightness = 85
elif speed == SPEED_MEDIUM:
final_brightness = 170
elif speed == SPEED_HIGH:
final_brightness = 255
else:
final_state, final_brightness = cached_state
# Make sure brightness is valid
if final_brightness is None:
final_brightness = 255 if final_state else 0
return (final_state, final_brightness)
def entity_to_json(config, entity, is_on=None, brightness=None):
"""Convert an entity to its Hue bridge JSON representation."""
return {
'state':
{
HUE_API_STATE_ON: is_on,
HUE_API_STATE_BRI: brightness,
'reachable': True
},
'type': 'Dimmable light',
'name': config.get_entity_name(entity),
'modelid': 'HASS123',
'uniqueid': entity.entity_id,
'swversion': '123'
}
def create_hue_success_response(entity_id, attr, value):
"""Create a success response for an attribute set on a light."""
success_key = '/lights/{}/state/{}'.format(entity_id, attr)
return {'success': {success_key: value}}
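Two small conversions recur throughout this module: a clamped Hue brightness (0-255) to a 0-100 level (used for scripts, media players and fans), and a 0-100 level to a 0.0-1.0 media-player volume. Extracted as helpers for clarity (a sketch, not the module's actual API):

```python
def bri_to_level(bri):
    """Hue brightness (clamped to 0-255) -> integer level 0-100."""
    bri = max(0, min(int(bri), 255))
    return int(round(bri / 255.0 * 100))


def level_to_volume(level):
    """Level 0-100 -> media-player volume 0.0-1.0."""
    return level / 100.0
```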
f7fbe2f0a69c1aae8f05d0d0cc30c8c7142db7e2 | 965 | py | Python | tests/core/test_synonyms.py | IgorZyktin/MediaStorageSystem | df8d260581cb806eb54f320d63aa674c6175c17e | [
"MIT"
] | 2 | 2021-03-06T16:07:30.000Z | 2021-03-17T10:27:25.000Z | tests/core/test_synonyms.py | IgorZyktin/MediaStorageSystem | df8d260581cb806eb54f320d63aa674c6175c17e | [
"MIT"
] | null | null | null | tests/core/test_synonyms.py | IgorZyktin/MediaStorageSystem | df8d260581cb806eb54f320d63aa674c6175c17e | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Tests.
"""
import operator
from functools import partial, reduce
from mss.core.class_synonyms import Synonyms
def test_synonyms_creation(synonyms):
assert str(synonyms) == 'Synonyms<2 groups>'
def test_synonyms_iteration(synonyms):
assert list(synonyms) == [frozenset({'b', 'a'}),
frozenset({'d', 'c'})]
def test_synonyms_from_dict():
inst = Synonyms.from_dict({
'first comment': ['one', 'two'],
'second_comment': ['three', 'four'],
})
assert list(inst) == [frozenset({'one', 'two'}),
frozenset({'three', 'four'})]
def test_synonyms_sum():
inst1 = Synonyms([['a', 'b'], ['c', 'd']])
inst2 = Synonyms([['c', 'd'], ['e', 'f']])
ref = Synonyms([['a', 'b'], ['c', 'd'], ['e', 'f']])
assert inst1 + inst2 == ref
assert inst2 + inst1 == ref
_sum = partial(reduce, operator.add)
assert _sum([inst1, inst2]) == ref
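The assertions above pin down the observable behaviour of `Synonyms`: groups become frozensets, duplicate groups collapse, `+` merges two instances, and addition composes under `reduce`. A minimal illustrative re-implementation consistent with these tests (the real class lives in `mss.core.class_synonyms` and may differ internally):

```python
import operator
from functools import reduce


class MiniSynonyms:
    """Toy stand-in: an ordered collection of unique frozensets."""

    def __init__(self, groups):
        self._groups = []
        for group in groups:
            frozen = frozenset(group)
            if frozen not in self._groups:
                self._groups.append(frozen)

    @classmethod
    def from_dict(cls, mapping):
        return cls(mapping.values())

    def __iter__(self):
        return iter(self._groups)

    def __add__(self, other):
        return type(self)(list(self) + list(other))

    def __eq__(self, other):
        return set(self._groups) == set(other._groups)

    def __str__(self):
        return 'Synonyms<{} groups>'.format(len(self._groups))
```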
f7fbe35fc05575866e02a6b7537f1be5c906cc69 | 14,642 | py | Python | src/train_cf.py | mattiafrak/Processes-Predictions-with-MP-A-Priori-Knowledge | 7e1bb94bb2fc535972a351f543be4f0ad8475275 | [
"MIT"
] | null | null | null | src/train_cf.py | mattiafrak/Processes-Predictions-with-MP-A-Priori-Knowledge | 7e1bb94bb2fc535972a351f543be4f0ad8475275 | [
"MIT"
] | null | null | null | src/train_cf.py | mattiafrak/Processes-Predictions-with-MP-A-Priori-Knowledge | 7e1bb94bb2fc535972a351f543be4f0ad8475275 | [
"MIT"
] | null | null | null | """
This script trains an LSTM model on one of the data files in the data folder of
this repository. The input file can be changed to another file from the data folder
by changing its name in line 46.
It is recommended to run this script on GPU, as recurrent networks are quite
computationally intensive.
Author: Niek Tax
"""
from __future__ import print_function, division
import copy
import csv
import os
import time
from datetime import datetime
from itertools import izip
import numpy as np
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.layers import Input, BatchNormalization, LeakyReLU, Dropout
from keras.layers.core import Dense
from keras.layers.recurrent import LSTM
from keras.models import Model
from keras.optimizers import Nadam
from shared_variables import get_unicode_from_int
class TrainCF:
def __init__(self):
pass
@staticmethod
def _build_model(max_len, num_features, target_chars):
print('Build model...')
main_input = Input(shape=(max_len, num_features), name='main_input')
processed = main_input
processed = Dense(32)(processed)
processed = BatchNormalization()(processed)
processed = LeakyReLU()(processed)
processed = Dropout(0.5)(processed)
processed = LSTM(64, return_sequences=False, recurrent_dropout=0.5)(processed)
processed = Dense(32)(processed)
processed = BatchNormalization()(processed)
processed = LeakyReLU()(processed)
processed = Dropout(0.5)(processed)
act_output = Dense(len(target_chars), activation='softmax', name='act_output')(processed)
time_output = Dense(1, activation='sigmoid', name='time_output')(processed)
model = Model(main_input, [act_output, time_output])
model.compile(loss={'act_output': 'categorical_crossentropy',
'time_output': 'mse'},
loss_weights=[1.0, 0.0],
optimizer='adam')
return model
@staticmethod
def _create_checkpoints_path(log_name, models_folder, fold):
folder_path = 'output_files/' + models_folder + '/' + str(fold) + '/models/CF/' + log_name + '/'
if not os.path.exists(folder_path):
os.makedirs(folder_path)
checkpoint_name = folder_path + 'model_{epoch:03d}-{val_loss:.3f}.h5'
return checkpoint_name
@staticmethod
def _train_model(model, checkpoint_name, X, y_a, y_t):
model_checkpoint = ModelCheckpoint(checkpoint_name, save_best_only=True)
# lr_reducer = ReduceLROnPlateau(factor=0.5, patience=10, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', patience=5)
model.fit(X, {'act_output': y_a,
'time_output': y_t},
validation_split=0.2,
verbose=2,
callbacks=[early_stopping, model_checkpoint],
epochs=10)
@staticmethod
def _load_dataset(log_name):
dataset = []
current_case = []
current_case_id = None
current_case_start_time = None
last_event_time = None
csvfile = open('../data2/final_experiments/%s.csv' % log_name, 'r')
csv_reader = csv.reader(csvfile, delimiter=',', quotechar='|')
next(csv_reader, None)
for row in csv_reader: # CaseID, ActivityID, Timestamp, ResourceID
case_id = row[0]
activity_id = int(row[1])
timestamp = datetime.strptime(row[2], '%Y-%m-%d %H:%M:%S')
resource_id = int(row[3])
if case_id != current_case_id:
if len(current_case) > 0:
dataset.append(current_case)
current_case = []
current_case_id = case_id
current_case_start_time = timestamp
last_event_time = timestamp
time_since_case_start = int((timestamp - current_case_start_time).total_seconds())
time_since_last_event = int((timestamp - last_event_time).total_seconds())
time_since_midnight = int(
(timestamp - timestamp.replace(hour=0, minute=0, second=0, microsecond=0)).total_seconds())
day_of_week = timestamp.weekday()
            current_case.append(
                (activity_id, time_since_case_start, time_since_last_event, time_since_midnight, day_of_week))
        if len(current_case) > 0:  # do not drop the final case in the file
            dataset.append(current_case)
        print(len(dataset))
        return dataset
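`_load_dataset` derives four clock features per event: seconds since the case started, seconds since the previous event, seconds since midnight, and the weekday. The same computation as a small standalone function (an illustrative refactoring, not part of the original class):

```python
from datetime import datetime


def clock_features(timestamp, case_start, last_event):
    """Return (seconds since case start, seconds since previous event,
    seconds since midnight, weekday index) for one event."""
    since_start = int((timestamp - case_start).total_seconds())
    since_last = int((timestamp - last_event).total_seconds())
    midnight = timestamp.replace(hour=0, minute=0, second=0, microsecond=0)
    since_midnight = int((timestamp - midnight).total_seconds())
    return since_start, since_last, since_midnight, timestamp.weekday()
```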
@staticmethod
def train(log_name, models_folder, folds):
# TrainCF._load_dataset(log_name)
lines = [] # list of all the activity sequences
timeseqs = [] # time sequences (differences between two events)
timeseqs2 = [] # time sequences (differences between the current and first)
# helper variables
last_case = ''
line = '' # sequence of activities for one case
first_line = True
times = []
times2 = []
num_lines = 0
case_start_time = None
last_event_time = None
csvfile = open('../data2/final_experiments/%s.csv' % log_name, 'r')
csv_reader = csv.reader(csvfile, delimiter=',', quotechar='|')
next(csv_reader, None) # skip the headers
for row in csv_reader: # the rows are "CaseID,ActivityID,CompleteTimestamp"
t = time.strptime(row[2], "%Y-%m-%d %H:%M:%S") # creates a datetime object from row[2]
if row[0] != last_case: # 'last_case' is to save the last executed case for the loop
case_start_time = t
last_event_time = t
last_case = row[0]
if not first_line: # here we actually add the sequences to the lists
lines.append(line)
timeseqs.append(times)
timeseqs2.append(times2)
line = ''
times = []
times2 = []
num_lines += 1
line += get_unicode_from_int(row[1])
time_since_last_event = datetime.fromtimestamp(time.mktime(t)) - datetime.fromtimestamp(
time.mktime(last_event_time))
time_since_case_start = datetime.fromtimestamp(time.mktime(t)) - datetime.fromtimestamp(
time.mktime(case_start_time))
time_diff = 86400 * time_since_last_event.days + time_since_last_event.seconds
time_diff2 = 86400 * time_since_case_start.days + time_since_case_start.seconds
times.append(time_diff)
times2.append(time_diff2)
last_event_time = t
first_line = False
# add last case
lines.append(line)
timeseqs.append(times)
timeseqs2.append(times2)
num_lines += 1
        divisor = 1.0 * np.max([item for sublist in timeseqs for item in sublist])  # maximal inter-event time, used as a normalization divisor
print('divisor: {}'.format(divisor))
        divisor2 = 1.0 * np.max([item for sublist in timeseqs2 for item in sublist])  # maximal time since case
        # start, used as a normalization divisor
print('divisor2: {}'.format(divisor2))
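Both constants are the maxima of the flattened nested time lists, and they later scale inter-event and case-elapsed times into [0, 1]. A small sketch using the built-in `max`, which agrees with `np.max` on a flat list (the toy values are assumptions):

```python
timeseqs = [[0, 60, 120], [0, 30]]    # seconds between consecutive events, per trace
timeseqs2 = [[0, 60, 180], [0, 30]]   # seconds since case start, per trace

divisor = 1.0 * max(item for sublist in timeseqs for item in sublist)
divisor2 = 1.0 * max(item for sublist in timeseqs2 for item in sublist)
print(divisor, divisor2)  # 120.0 180.0
```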
# separate training data into 2(out of 3) parts
elements_per_fold = int(round(num_lines / 3))
many = 0
for i in range(len(lines)):
many = many + len(lines[i])
print("average length of the trace: ", many / len(lines))
print("number of traces: ", len(lines))
fold1 = lines[:elements_per_fold]
fold2 = lines[elements_per_fold:2 * elements_per_fold]
lines = fold1 + fold2
lines = map(lambda x: x + '!', lines) # put delimiter symbol
maxlen = max(map(lambda x: len(x), lines)) # find maximum line size
# next lines here to get all possible characters for events and annotate them with numbers
chars = map(lambda x: set(x), lines) # remove duplicate activities from each separate case
chars = list(set().union(*chars)) # creates a list of all the unique activities in the data set
chars.sort() # sorts the chars in alphabetical order
target_chars = copy.copy(chars)
chars.remove('!')
print('total chars: {}, target chars: {}'.format(len(chars), len(target_chars)))
char_indices = dict((c, i) for i, c in enumerate(chars))
target_char_indices = dict((c, i) for i, c in enumerate(target_chars))
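Since `get_unicode_from_int` encodes each activity id as one character and '!' marks the end of a trace, every case is a short string. The two index dictionaries can be sketched on hypothetical traces:

```python
lines = ['abc!', 'acb!', 'ab!']       # hypothetical encoded traces, '!' = end of case
chars = sorted(set().union(*(set(line) for line in lines)))
target_chars = list(chars)            # prediction targets keep '!'
chars.remove('!')                     # '!' never occurs inside an input prefix
char_indices = {c: i for i, c in enumerate(chars)}
target_char_indices = {c: i for i, c in enumerate(target_chars)}
print(char_indices)         # {'a': 0, 'b': 1, 'c': 2}
print(target_char_indices)  # {'!': 0, 'a': 1, 'b': 2, 'c': 3}
```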
csvfile = open('../data2/final_experiments/%s.csv' % log_name, 'r')
csv_reader = csv.reader(csvfile, delimiter=',', quotechar='|')
next(csv_reader, None) # skip the headers
last_case = ''
line = ''
first_line = True
lines = []
timeseqs = []
timeseqs2 = []
timeseqs3 = []
timeseqs4 = []
times = []
times2 = []
times3 = []
times4 = []
num_lines = 0
case_start_time = None
last_event_time = None
for row in csv_reader:
t = time.strptime(row[2], "%Y-%m-%d %H:%M:%S")
# new case starts
if row[0] != last_case:
case_start_time = t
last_event_time = t
last_case = row[0]
if not first_line:
lines.append(line)
timeseqs.append(times)
timeseqs2.append(times2)
timeseqs3.append(times3)
timeseqs4.append(times4)
line = ''
times = []
times2 = []
times3 = []
times4 = []
num_lines += 1
line += get_unicode_from_int(row[1])
time_since_last_event = datetime.fromtimestamp(time.mktime(t)) - datetime.fromtimestamp(
time.mktime(last_event_time))
time_since_case_start = datetime.fromtimestamp(time.mktime(t)) - datetime.fromtimestamp(
time.mktime(case_start_time))
midnight = datetime.fromtimestamp(time.mktime(t)).replace(hour=0, minute=0, second=0, microsecond=0)
timesincemidnight = datetime.fromtimestamp(time.mktime(t)) - midnight
time_diff = 86400 * time_since_last_event.days + time_since_last_event.seconds
time_diff2 = 86400 * time_since_case_start.days + time_since_case_start.seconds
            timediff3 = timesincemidnight.seconds  # seconds elapsed since midnight on the day of the event
timediff4 = datetime.fromtimestamp(time.mktime(t)).weekday() # day of the week
times.append(time_diff)
times2.append(time_diff2)
times3.append(timediff3)
times4.append(timediff4)
last_event_time = t
first_line = False
# add last case
lines.append(line)
timeseqs.append(times)
timeseqs2.append(times2)
timeseqs3.append(times3)
timeseqs4.append(times4)
num_lines += 1
elements_per_fold = int(round(num_lines / 3))
lines = lines[:-elements_per_fold]
lines_t = timeseqs[:-elements_per_fold]
lines_t2 = timeseqs2[:-elements_per_fold]
lines_t3 = timeseqs3[:-elements_per_fold]
lines_t4 = timeseqs4[:-elements_per_fold]
step = 1
sentences = []
softness = 0
next_chars = []
lines = map(lambda x: x + '!', lines)
sentences_t = []
sentences_t2 = []
sentences_t3 = []
sentences_t4 = []
next_chars_t = []
next_chars_t2 = []
next_chars_t3 = []
next_chars_t4 = []
for line, line_t, line_t2, line_t3, line_t4 in izip(lines, lines_t, lines_t2, lines_t3, lines_t4):
for i in range(0, len(line), step):
if i == 0:
continue
# we add iteratively, first symbol of the line, then two first, three...
sentences.append(line[0: i])
sentences_t.append(line_t[0:i])
sentences_t2.append(line_t2[0:i])
sentences_t3.append(line_t3[0:i])
sentences_t4.append(line_t4[0:i])
next_chars.append(line[i])
                if i == len(line) - 1:  # special case to handle the time of the end character
next_chars_t.append(0)
next_chars_t2.append(0)
next_chars_t3.append(0)
next_chars_t4.append(0)
else:
next_chars_t.append(line_t[i])
next_chars_t2.append(line_t2[i])
next_chars_t3.append(line_t3[i])
next_chars_t4.append(line_t4[i])
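The nested loop above unrolls every trace into all of its non-empty proper prefixes plus the symbol that follows each one (with zeroed time targets for the end symbol). On a toy trace:

```python
line = 'abc!'                 # one encoded trace with the end symbol appended
sentences, next_chars = [], []
for i in range(1, len(line)):
    sentences.append(line[:i])
    next_chars.append(line[i])
print(sentences)   # ['a', 'ab', 'abc']
print(next_chars)  # ['b', 'c', '!']
```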
print('nb sequences:', len(sentences))
print('Vectorization...')
num_features = len(chars) + 5
print('num features: {}'.format(num_features))
X = np.zeros((len(sentences), maxlen, num_features), dtype=np.float32)
y_a = np.zeros((len(sentences), len(target_chars)), dtype=np.float32)
y_t = np.zeros((len(sentences)), dtype=np.float32)
for i, sentence in enumerate(sentences):
leftpad = maxlen - len(sentence)
next_t = next_chars_t[i]
sentence_t = sentences_t[i]
sentence_t2 = sentences_t2[i]
sentence_t3 = sentences_t3[i]
sentence_t4 = sentences_t4[i]
for t, char in enumerate(sentence):
# multiset_abstraction = Counter(sentence[:t+1])
for c in chars:
if c == char: # this will encode present events to the right places
X[i, t + leftpad, char_indices[c]] = 1
X[i, t + leftpad, len(chars)] = t + 1
X[i, t + leftpad, len(chars) + 1] = sentence_t[t] / divisor
X[i, t + leftpad, len(chars) + 2] = sentence_t2[t] / divisor2
X[i, t + leftpad, len(chars) + 3] = sentence_t3[t] / 86400
X[i, t + leftpad, len(chars) + 4] = sentence_t4[t] / 7
for c in target_chars:
if c == next_chars[i]:
y_a[i, target_char_indices[c]] = 1 - softness
else:
y_a[i, target_char_indices[c]] = softness / (len(target_chars) - 1)
y_t[i] = next_t / divisor
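Each prefix is left-padded to `maxlen`; every occupied time step carries a one-hot activity code, the 1-based position, and four scaled time features (hence `len(chars) + 5` features). A plain-Python sketch of that layout with a toy alphabet (all sizes here are assumptions):

```python
chars = ['a', 'b']
char_indices = {'a': 0, 'b': 1}
maxlen = 4
num_features = len(chars) + 5        # 2 one-hot slots + position + 4 time slots
sentence = 'ab'

row = [[0.0] * num_features for _ in range(maxlen)]
leftpad = maxlen - len(sentence)
for t, ch in enumerate(sentence):
    row[t + leftpad][char_indices[ch]] = 1.0     # one-hot activity
    row[t + leftpad][len(chars)] = float(t + 1)  # position within the prefix
    # the remaining four columns would hold the scaled time features

print(leftpad)     # 2
print(row[2][:3])  # [1.0, 0.0, 1.0] -> 'a' one-hot plus position 1
```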
for fold in range(folds):
# model = build_model(max_length, num_features, max_activity_id)
model = TrainCF._build_model(maxlen, num_features, target_chars)
checkpoint_name = TrainCF._create_checkpoints_path(log_name, models_folder, fold)
TrainCF._train_model(model, checkpoint_name, X, y_a, y_t)
| 41.361582 | 120 | 0.588922 |
f7fbe627150d8e8a0f60ccd57ac8c6f63ee82bee | 15,817 | py | Python | MayaIntegration/PAIO.py | MontyThibault/Force-Sensor-Integration | 9f949439ea0c1116c00af89735c034cda423ae7e | [
"MIT"
] | null | null | null | MayaIntegration/PAIO.py | MontyThibault/Force-Sensor-Integration | 9f949439ea0c1116c00af89735c034cda423ae7e | [
"MIT"
] | null | null | null | MayaIntegration/PAIO.py | MontyThibault/Force-Sensor-Integration | 9f949439ea0c1116c00af89735c034cda423ae7e | [
"MIT"
] | null | null | null | from ctypes import *
import unittest
class Singleton(object):
"""Ensures only one instance of subtypes exist at a time."""
_instance = None
def __new__(class_, *args, **kwargs):
if not isinstance(class_._instance, class_):
class_._instance = object.__new__(class_, *args, **kwargs)
return class_._instance
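A short demonstration of the base class: subclasses share one instance per class. The `Singleton` definition is restated so the sketch runs on its own; `Config` is a hypothetical subclass:

```python
class Singleton(object):
    _instance = None

    def __new__(class_, *args, **kwargs):
        if not isinstance(class_._instance, class_):
            class_._instance = object.__new__(class_, *args, **kwargs)
        return class_._instance

class Config(Singleton):
    pass

a = Config()
a.value = 42
b = Config()
print(a is b)   # True
print(b.value)  # 42
```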
class AIO(Singleton):
""" This class overrides the default attribute lookup of the aio DLL library
in to implement automated error catching for all functions, assuming the
common interface is a zero-valued return.
Alternatively, the original functions can still be called via `aio._raw.f()"""
# Should be on system path if device drivers were installed properly
_raw = cdll.LoadLibrary('CAIO.dll')
class ErrorWrapper:
def __init__(self, f):
self.f = f
def __call__(self, *args, **kwargs):
errorCode = self.f(*args, **kwargs)
if errorCode != 0:
# Max error string length of 200 characters
errorStringBuffer = create_string_buffer(200)
# Copy error code into buffer
AIO()._raw.AioGetErrorString(errorCode, errorStringBuffer)
print("aio.%s = %s : %s" %
(self.f.__name__, errorCode, errorStringBuffer.value.decode("utf-8")))
return errorCode
def __getattr__(self, key):
if key in consts:
# For constants (defined below)
return consts[key]
else:
# For functions
return self.ErrorWrapper(getattr(self._raw, key))
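The wrapping can be exercised without the real `CAIO.dll` by standing a stub object in for `_raw`; `_StubRaw` and its method names are invented for illustration. A non-zero return code triggers the diagnostic path, but the code is still handed back to the caller:

```python
class _StubRaw(object):
    def ok(self):
        return 0
    def fails(self):
        return 10101

class Wrapped(object):
    _raw = _StubRaw()

    def __getattr__(self, key):
        f = getattr(self._raw, key)
        def checked(*args, **kwargs):
            code = f(*args, **kwargs)
            if code != 0:
                print('%s failed with error code %d' % (key, code))
            return code
        return checked

w = Wrapped()
print(w.ok())     # 0
print(w.fails())  # prints the diagnostic line, then 10101
```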
class AIODevice(object):
""" Sensor instance with single device name and device ID (each device has
up to 32 channels). """
aio = AIO()
def __init__(self, name = b"AIO000"):
self.deviceName = c_char_p(name)
self.deviceID = c_short()
def Init(self):
""" This is the only function in the AIO library that does not follow
the convention of passing DeviceID as the first argument; this is the
function that generates the ID and sets the value via the pointer in the
second argument. It gets a special name without a `Aio` prefix. """
self.aio.AioInit(self.deviceName, byref(self.deviceID))
    def _callableWithID(self, f):
        def composite(*args, **kwargs):
            # Return the wrapped call's result (e.g. the error code) instead of dropping it
            return f(self.deviceID, *args, **kwargs)
        return composite
def __getattr__(self, key):
""" Eliminates "deviceID" syntax from all AIO functions. Note: C function
usage & examples are specified in the Contec help files """
return self._callableWithID(getattr(self.aio, key))
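The deviceID elision can likewise be shown with stubs (`FakeAIO`, `AioSingleAi`, and the id value 123 are all invented); the wrapper silently prepends the stored id as the first argument:

```python
class FakeAIO(object):
    def AioSingleAi(self, device_id, channel):
        # Stub standing in for the real C call; it just echoes its arguments
        return ('AioSingleAi', device_id, channel)

class FakeDevice(object):
    aio = FakeAIO()
    device_id = 123

    def __getattr__(self, key):
        f = getattr(self.aio, key)
        def with_id(*args, **kwargs):
            return f(self.device_id, *args, **kwargs)
        return with_id

d = FakeDevice()
print(d.AioSingleAi(0))  # ('AioSingleAi', 123, 0)
```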
class Tests(object):
def test_paio_singleton_interface(self):
x = AIO()
x.test = 10
y = AIO()
assert y.test == 10
def test_access_PAIO_functions(self):
assert hasattr(AIO(), 'AioInit')
assert hasattr(AIO(), 'AioGetErrorString')
def test_access_PAIO_constants(self):
assert hasattr(AIO(), 'PM10')
assert hasattr(AIO(), 'AIOM_CNTM_CARRY_BORROW')
def test_PAIO_error_wrapping(self):
def alwaysFails():
return 10101
AIO()._raw.bad = alwaysFails
assert AIO().bad() == 10101
del AIO()._raw.bad
def test_PAIO_device_argument_elision(self):
def noArguments(deviceID):
assert deviceID == 123
AIO()._raw.noargs = noArguments
d = AIODevice('name')
d.deviceID = 123
d.noargs()
del AIO()._raw.noargs
consts = {
# # ----------------------------------------------------------------------------------------------
# # External Control Signal
# # ----------------------------------------------------------------------------------------------
'AIO_AIF_CLOCK': 0, # Analog input external clock
'AIO_AIF_START': 1, # Analog input external start trigger
'AIO_AIF_STOP': 2, # Analog input external stop trigger
'AIO_AOF_CLOCK': 3, # Analog output external clock
'AIO_AOF_START': 4, # Analog output external start trigger
'AIO_AOF_STOP': 5, # Analog output external stop trigger
# # ----------------------------------------------------------------------------------------------
# # Input/Output Range
# # ----------------------------------------------------------------------------------------------
'PM10': 0, # +/-10V
'PM5': 1, # +/-5V
'PM25': 2, # +/-2.5V
'PM125': 3, # +/-1.25V
'PM1': 4, # +/-1V
'PM0625': 5, # +/-0.625V
'PM05': 6, # +/-0.5V
'PM03125': 7, # +/-0.3125V
'PM025': 8, # +/-0.25V
'PM0125': 9, # +/-0.125V
'PM01': 10, # +/-0.1V
'PM005': 11, # +/-0.05V
'PM0025': 12, # +/-0.025V
'PM00125': 13, # +/-0.0125V
'PM001': 14, # +/-0.01V
'P10': 50, # 0 ~ 10V
'P5': 51, # 0 ~ 5V
'P4095': 52, # 0 ~ 4.095V
'P25': 53, # 0 ~ 2.5V
'P125': 54, # 0 ~ 1.25V
'P1': 55, # 0 ~ 1V
'P05': 56, # 0 ~ 0.5V
'P025': 57, # 0 ~ 0.25V
'P01': 58, # 0 ~ 0.1V
'P005': 59, # 0 ~ 0.05V
'P0025': 60, # 0 ~ 0.025V
'P00125': 61, # 0 ~ 0.0125V
'P001': 62, # 0 ~ 0.01V
'P20MA': 100, # 0 ~ 20mA
'P4TO20MA': 101, # 4 ~ 20mA
'P1TO5': 150, # 1 ~ 5V
# # ----------------------------------------------------------------------------------------------
# # Analog Input Event
# # ----------------------------------------------------------------------------------------------
'AIE_START': 0x00000002, # Event that AD converting start conditions are satisfied
'AIE_RPTEND': 0x00000010, # Event of repeat end
'AIE_END': 0x00000020, # Event of device operation end
'AIE_DATA_NUM': 0x00000080, # Event that data of the specified sampling times are stored
'AIE_DATA_TSF': 0x00000100, # Event that data of the specified number are transferred
'AIE_OFERR': 0x00010000, # Event of Overflow
'AIE_SCERR': 0x00020000, # Event of sampling clock error
'AIE_ADERR': 0x00040000, # Event of AD converting error
# # ----------------------------------------------------------------------------------------------
# # Analog Output Event
# # ----------------------------------------------------------------------------------------------
'AOE_START': 0x00000002, # Event that DA converting start conditions are satisfied
'AOE_RPTEND': 0x00000010, # Event of repeat end
'AOE_END': 0x00000020, # Event of device operation end
'AOE_DATA_NUM': 0x00000080, # Event that data of the specified sampling times are output
'AOE_DATA_TSF': 0x00000100, # Event that data of the specified number are transferred
'AOE_SCERR': 0x00020000, # Event of sampling clock error
'AOE_DAERR': 0x00040000, # Event of DA converting error
# # ----------------------------------------------------------------------------------------------
# # Counter Event
# # ----------------------------------------------------------------------------------------------
'CNTE_DATA_NUM': 0x00000010, # Event of count comparison match
'CNTE_ORERR': 0x00010000, # Event of count overrun
'CNTE_ERR': 0x00020000, # Counter operating error
# # ----------------------------------------------------------------------------------------------
# # Timer Event
# # ----------------------------------------------------------------------------------------------
'TME_INT': 0x00000001, # Event that interval is satisfied
# # ----------------------------------------------------------------------------------------------
# # Analog Input Status
# # ----------------------------------------------------------------------------------------------
'AIS_BUSY': 0x00000001, # Device is working
'AIS_START_TRG': 0x00000002, # Wait the start trigger
'AIS_DATA_NUM': 0x00000010, # Store the data of the specified number of samplings
'AIS_OFERR': 0x00010000, # Overflow
'AIS_SCERR': 0x00020000, # Sampling clock error
'AIS_AIERR': 0x00040000, # AD converting error
'AIS_DRVERR': 0x00080000, # Driver spec error
# # ----------------------------------------------------------------------------------------------
# # Analog Output Status
# # ----------------------------------------------------------------------------------------------
'AOS_BUSY': 0x00000001, # Device is working
    'AOS_START_TRG': 0x00000002, # Wait the start trigger
'AOS_DATA_NUM': 0x00000010, # Output the data of the specified number of samplings
'AOS_SCERR': 0x00020000, # Sampling clock error
'AOS_AOERR': 0x00040000, # DA converting error
'AOS_DRVERR': 0x00080000, # Driver spec error
# # ----------------------------------------------------------------------------------------------
# # Counter Status
# # ----------------------------------------------------------------------------------------------
'CNTS_BUSY': 0x00000001, # Counter is working
'CNTS_DATA_NUM': 0x00000010, # Count comparison match
'CNTS_ORERR': 0x00010000, # Overrun
'CNTS_ERR': 0x00020000, # Count operating error
# # ----------------------------------------------------------------------------------------------
# # Analog Input Message
# # ----------------------------------------------------------------------------------------------
'AIOM_AIE_START': 0x1000, # Event that AD converting start condition are satisfied
'AIOM_AIE_RPTEND': 0x1001, # Event of repeat end
'AIOM_AIE_END': 0x1002, # Event of device operation end
'AIOM_AIE_DATA_NUM': 0x1003, # Event that data of the specified sampling times are stored
'AIOM_AIE_DATA_TSF': 0x1007, # Event that data of the specified number are transferred
'AIOM_AIE_OFERR': 0x1004, # Event of Overflow
'AIOM_AIE_SCERR': 0x1005, # Event of sampling clock error
'AIOM_AIE_ADERR': 0x1006, # Event of AD converting error
# # ----------------------------------------------------------------------------------------------
# # Analog Output Message
# # ----------------------------------------------------------------------------------------------
'AIOM_AOE_START': 0x1020, # Event that DA converting start conditions are satisfied
'AIOM_AOE_RPTEND': 0x1021, # Event of repeat end
'AIOM_AOE_END': 0x1022, # Event of device operation end
'AIOM_AOE_DATA_NUM': 0x1023, # Event that data of the specified sampling times are output
'AIOM_AOE_DATA_TSF': 0x1027, # Event that data of the specified number are transferred
'AIOM_AOE_SCERR': 0x1025, # Event of sampling clock error
'AIOM_AOE_DAERR': 0x1026, # Event of DA converting error
# # ----------------------------------------------------------------------------------------------
# # Counter Message
# # ----------------------------------------------------------------------------------------------
'AIOM_CNTE_DATA_NUM': 0x1042, # Event of count comparison match
'AIOM_CNTE_ORERR': 0x1043, # Event of count overrun
'AIOM_CNTE_ERR': 0x1044, # Event of counter operating error
# # ----------------------------------------------------------------------------------------------
# # Timer Message
# # ----------------------------------------------------------------------------------------------
'AIOM_TME_INT': 0x1060, # Event that interval is satisfied
# # ----------------------------------------------------------------------------------------------
# # Analog Input Attached Data
# # ----------------------------------------------------------------------------------------------
'AIAT_AI': 0x00000001, # Analog input attached information
    'AIAT_AO0': 0x00000100, # Analog output data
'AIAT_DIO0': 0x00010000, # Digital input and output data
'AIAT_CNT0': 0x01000000, # Data of counter channel 0
'AIAT_CNT1': 0x02000000, # Data of counter channel 1
# # ----------------------------------------------------------------------------------------------
# # Counter Action Mode
# # ----------------------------------------------------------------------------------------------
'CNT_LOADPRESET': 0x0000001, # Load the preset count value
'CNT_LOADCOMP': 0x0000002, # Load the count comparison value
# # ----------------------------------------------------------------------------------------------
# # Event Controller Destination Signal
# # ----------------------------------------------------------------------------------------------
'AIOECU_DEST_AI_CLK': 4, # Analog input sampling clock
'AIOECU_DEST_AI_START': 0, # Analog input converting start signal
'AIOECU_DEST_AI_STOP': 2, # Analog input converting stop signal
'AIOECU_DEST_AO_CLK': 36, # Analog output sampling clock
'AIOECU_DEST_AO_START': 32, # Analog output converting start signal
'AIOECU_DEST_AO_STOP': 34, # Analog output converting stop signal
'AIOECU_DEST_CNT0_UPCLK': 134, # Up clock signal of counter 0
'AIOECU_DEST_CNT1_UPCLK': 135, # Up clock signal of counter 1
'AIOECU_DEST_CNT0_START': 128, # Start signal of counter 0, timer 0
'AIOECU_DEST_CNT1_START': 129, # Start signal of counter 1, timer 1
'AIOECU_DEST_CNT0_STOP': 130, # Stop signal of counter 0, timer 0
'AIOECU_DEST_CNT1_STOP': 131, # Stop signal of counter 1, timer 1
'AIOECU_DEST_MASTER1': 104, # Synchronization bus master signal 1
'AIOECU_DEST_MASTER2': 105, # Synchronization bus master signal 2
'AIOECU_DEST_MASTER3': 106, # Synchronization bus master signal 3
# # ----------------------------------------------------------------------------------------------
# # Event Controller Source Signal
# # ----------------------------------------------------------------------------------------------
'AIOECU_SRC_OPEN': -1, # Not connect
'AIOECU_SRC_AI_CLK': 4, # Analog input internal clock signal
'AIOECU_SRC_AI_EXTCLK': 146, # Analog input external clock signal
'AIOECU_SRC_AI_TRGSTART': 144, # Analog input external trigger start signal
'AIOECU_SRC_AI_LVSTART': 28, # Analog input level trigger start signal
'AIOECU_SRC_AI_STOP': 17, # Analog input signal that data of the specified sampling times have been converted (No delay)
'AIOECU_SRC_AI_STOP_DELAY': 18, # Analog input signal that data of the specified sampling times have been converted (delay)
'AIOECU_SRC_AI_LVSTOP': 29, # Analog input level trigger stop signal
'AIOECU_SRC_AI_TRGSTOP': 145, # Analog input external trigger stop signal
'AIOECU_SRC_AO_CLK': 66, # Analog output internal clock signal
'AIOECU_SRC_AO_EXTCLK': 149, # Analog output external clock signal
'AIOECU_SRC_AO_TRGSTART': 147, # Analog output external trigger start signal
'AIOECU_SRC_AO_STOP_FIFO': 352, # Analog output signal that data of the specified sampling times have been output (FIFO)
'AIOECU_SRC_AO_STOP_RING': 80, # Analog output signal that data of the specified sampling times have been output (RING)
'AIOECU_SRC_AO_TRGSTOP': 148, # Analog output external trigger stop signal
'AIOECU_SRC_CNT0_UPCLK': 150, # Up clock signal of counter 0
'AIOECU_SRC_CNT1_UPCLK': 152, # Up clock signal of counter 1
'AIOECU_SRC_CNT0_CMP': 288, # Count comparison match of counter 0
'AIOECU_SRC_CNT1_CMP': 289, # Count comparison match of counter 1
'AIOECU_SRC_SLAVE1': 136, # Synchronization bus master signal 1
'AIOECU_SRC_SLAVE2': 137, # Synchronization bus master signal 2
'AIOECU_SRC_SLAVE3': 138, # Synchronization bus master signal 3
'AIOECU_SRC_START': 384, # Ai, Ao, Cnt, Tm software start signal
'AIOECU_SRC_STOP': 385, # Ai, Ao, Cnt, Tm software stop signal
# # ----------------------------------------------------------------------------------------------
# # Mdevice counter Message
# # ----------------------------------------------------------------------------------------------
'AIOM_CNTM_COUNTUP_CH0': 0x1070, # COUNTUP, CHANNEL No.0
    'AIOM_CNTM_COUNTUP_CH1': 0x1071, # COUNTUP, CHANNEL No.1
'AIOM_CNTM_TIME_UP': 0x1090, # Timer
'AIOM_CNTM_COUNTER_ERROR': 0x1091, # Counter error
'AIOM_CNTM_CARRY_BORROW': 0x1092 # Carry/Borrow
} | 45.979651 | 124 | 0.52981 |
import unittest
class Singleton(object):
_instance = None
def __new__(class_, *args, **kwargs):
if not isinstance(class_._instance, class_):
class_._instance = object.__new__(class_, *args, **kwargs)
return class_._instance
class AIO(Singleton):
_raw = cdll.LoadLibrary('CAIO.dll')
class ErrorWrapper:
def __init__(self, f):
self.f = f
def __call__(self, *args, **kwargs):
errorCode = self.f(*args, **kwargs)
if errorCode != 0:
errorStringBuffer = create_string_buffer(200)
AIO()._raw.AioGetErrorString(errorCode, errorStringBuffer)
print("aio.%s = %s : %s" %
(self.f.__name__, errorCode, errorStringBuffer.value.decode("utf-8")))
return errorCode
def __getattr__(self, key):
if key in consts:
return consts[key]
else:
return self.ErrorWrapper(getattr(self._raw, key))
class AIODevice(object):
aio = AIO()
def __init__(self, name = b"AIO000"):
self.deviceName = c_char_p(name)
self.deviceID = c_short()
def Init(self):
self.aio.AioInit(self.deviceName, byref(self.deviceID))
def _callableWithID(self, f):
def composite(*args, **kwargs):
f(self.deviceID, *args, **kwargs)
return composite
def __getattr__(self, key):
return self._callableWithID(getattr(self.aio, key))
class Tests(object):
def test_paio_singleton_interface(self):
x = AIO()
x.test = 10
y = AIO()
assert y.test == 10
def test_access_PAIO_functions(self):
assert hasattr(AIO(), 'AioInit')
assert hasattr(AIO(), 'AioGetErrorString')
def test_access_PAIO_constants(self):
assert hasattr(AIO(), 'PM10')
assert hasattr(AIO(), 'AIOM_CNTM_CARRY_BORROW')
def test_PAIO_error_wrapping(self):
def alwaysFails():
return 10101
AIO()._raw.bad = alwaysFails
assert AIO().bad() == 10101
del AIO()._raw.bad
def test_PAIO_device_argument_elision(self):
def noArguments(deviceID):
assert deviceID == 123
AIO()._raw.noargs = noArguments
d = AIODevice('name')
d.deviceID = 123
d.noargs()
del AIO()._raw.noargs
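The dispatch-and-check pattern used by `AIO.__getattr__`/`ErrorWrapper` (unknown attributes are forwarded to the loaded library and their integer return codes inspected) can be sketched without `ctypes`, substituting a stub object for the DLL handle; `FakeLib` and its return codes are illustrative, not part of the CAIO API:

```python
class FakeLib:
    """Stand-in for a ctypes CDLL handle (illustrative only)."""
    def ok(self):
        return 0          # CAIO-style convention: 0 means success

    def fail(self):
        return 10101      # any non-zero value is an error code


class Wrapped:
    """Forward unknown attributes to the library, checking return codes."""
    def __init__(self, raw):
        self._raw = raw
        self.errors = []

    def __getattr__(self, key):
        # only called for attributes not found normally, like AIO.__getattr__
        func = getattr(self._raw, key)
        def checked(*args, **kwargs):
            code = func(*args, **kwargs)
            if code != 0:
                self.errors.append((key, code))  # log instead of printing
            return code
        return checked


lib = Wrapped(FakeLib())
assert lib.ok() == 0
assert lib.fail() == 10101
assert lib.errors == [("fail", 10101)]
```

The real class additionally resolves constant names from the `consts` dict before falling back to the library lookup.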
consts = {
'P5': 51,
'P4095': 52,
'P25': 53,
'P125': 54,
'P1': 55,
'P05': 56,
'P025': 57,
'P01': 58,
'P005': 59,
'P0025': 60,
'P00125': 61,
'P001': 62,
'P20MA': 100,
'P4TO20MA': 101,
'P1TO5': 150,
x00040000,
| true | true |
f7fbe6a159e62b037f1ac3d3c3112fb41bc6e8c5 | 2,395 | py | Python | Dataload/dataload_SST_binary.py | TTdeveloping/text_classification | b8e5ae22de6441479151efcd79a8d4899a1b1fd5 | [
"Apache-2.0"
] | 3 | 2019-04-13T08:29:09.000Z | 2021-11-11T02:55:24.000Z | Dataload/dataload_SST_binary.py | TTdeveloping/text_classification | b8e5ae22de6441479151efcd79a8d4899a1b1fd5 | [
"Apache-2.0"
] | null | null | null | Dataload/dataload_SST_binary.py | TTdeveloping/text_classification | b8e5ae22de6441479151efcd79a8d4899a1b1fd5 | [
"Apache-2.0"
] | null | null | null | import random
import sys
from Dataload.Instance import Instance
class DataLoader():
def __init__(self, path, shuffle, config):
"""
:param path:
:param shuffle:
:param config:
:return:
"""
print("Loading data:......")
self.data_list = []
self.max_count = config.max_count
self.path = path
self.shuffle = shuffle
def dataload(self):
"""
:return:
"""
path = self.path
shuffle = self.shuffle
        assert isinstance(path, list), "path must be a list"  # isinstance() checks the object is of the given type
        print('data path {}'.format(path))
for data_id in range(len(path)):
print("load data form{}".format(path[data_id]))
insts = self._Load_Each_Data(path=path[data_id], shuffle=shuffle)
if shuffle is True and data_id == 0:
print("shuffle train data......")
random.shuffle(insts)
            self.data_list.append(insts)  # collect the word/label lists of each file (train/dev/test)
# return train/dev/test data
if len(self.data_list) == 3:
return self.data_list[0], self.data_list[1], self.data_list[2]
elif len(self.data_list) == 2:
return self.data_list[0], self.data_list[1]
    def _Load_Each_Data(self, path=None, shuffle=False):
        """
        :param path: path of a single dataset file
        :param shuffle: unused here; shuffling is done in dataload()
        :return: list of parsed Instance objects
        """
        assert path is not None, "the data path is not allowed to be empty"
insts=[]
now_lines = 0
with open(path,encoding="utf-8") as f:
for line in f.readlines():
line = line.strip()
line = line.split()
inst = Instance()
now_lines += 1
if now_lines % 20 == 0:
# print("just only handle {} datas".format(now_lines))
sys.stdout.write("\rreading the {} line\t".format(now_lines))
label=line[0]
word=line[1:]
if label not in ["0", "1"]:
print("Error line:", "".join(line))
continue
inst.words=word
inst.labels.append(label)
inst.words_size = len(inst.words)
insts.append(inst)
if len(insts) == self.max_count:
break
return insts
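Each line of the SST-binary files is a label ('0' or '1') followed by whitespace-separated tokens, which is exactly what the loop above splits apart. A minimal, dependency-free sketch of that per-line parse (the `parse_line` helper is illustrative, not part of the repository):

```python
def parse_line(line):
    """Parse one 'label token token ...' line of the SST-binary format.

    Returns (label, words), or None for lines whose label is not '0'/'1'.
    """
    parts = line.strip().split()
    if not parts or parts[0] not in ("0", "1"):
        return None
    return parts[0], parts[1:]


samples = [
    "1 a charming film",
    "0 dull and lifeless",
    "x malformed line",          # skipped, like the loader's "Error line"
]
parsed = [p for p in (parse_line(s) for s in samples) if p is not None]
assert parsed == [("1", ["a", "charming", "film"]),
                  ("0", ["dull", "and", "lifeless"])]
```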
| 30.705128 | 84 | 0.508977 | import random
import sys
from Dataload.Instance import Instance
class DataLoader():
def __init__(self, path, shuffle, config):
print("Loading data:......")
self.data_list = []
self.max_count = config.max_count
self.path = path
self.shuffle = shuffle
def dataload(self):
path = self.path
shuffle = self.shuffle
        assert isinstance(path, list), "path must be a list"
        print('data path {}'.format(path))
        for data_id in range(len(path)):
            print("load data from {}".format(path[data_id]))
insts = self._Load_Each_Data(path=path[data_id], shuffle=shuffle)
if shuffle is True and data_id == 0:
print("shuffle train data......")
random.shuffle(insts)
self.data_list.append(insts)
if len(self.data_list) == 3:
return self.data_list[0], self.data_list[1], self.data_list[2]
elif len(self.data_list) == 2:
return self.data_list[0], self.data_list[1]
    def _Load_Each_Data(self, path=None, shuffle=False):
        assert path is not None, "the data path is not allowed to be empty"
insts=[]
now_lines = 0
with open(path,encoding="utf-8") as f:
for line in f.readlines():
line = line.strip()
line = line.split()
inst = Instance()
now_lines += 1
if now_lines % 20 == 0:
sys.stdout.write("\rreading the {} line\t".format(now_lines))
label=line[0]
word=line[1:]
if label not in ["0", "1"]:
print("Error line:", "".join(line))
continue
inst.words=word
inst.labels.append(label)
inst.words_size = len(inst.words)
insts.append(inst)
if len(insts) == self.max_count:
break
return insts
| true | true |
f7fbe8a4d328c05c5e4599deeee2b1ff3d5f0605 | 461 | py | Python | tcutils/pkgs/syn_ack_test/syn_server.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | 1 | 2017-06-13T04:42:34.000Z | 2017-06-13T04:42:34.000Z | tcutils/pkgs/syn_ack_test/syn_server.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | tcutils/pkgs/syn_ack_test/syn_server.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/env python
from scapy.all import *
from time import sleep
import os
import sys
ip_remote = sys.argv[1]
ip_local = sys.argv[2]
os.system(
'iptables -A OUTPUT -p tcp --tcp-flags RST RST -s %s -j DROP' %
ip_local)
server = conf.L3socket(filter='host %s' % ip_remote)
ip=IP(dst=ip_remote)
SYN = server.recv()
sleep(182)
SYNACK = ip/TCP(sport=SYN.dport, dport=SYN.sport, flags="SA", seq=1001, ack=SYN.seq + 1)
sr1(SYNACK)
print "SUCCESS"
| 20.954545 | 88 | 0.665944 |
from scapy.all import *
from time import sleep
import os
import sys
ip_remote = sys.argv[1]
ip_local = sys.argv[2]
os.system(
'iptables -A OUTPUT -p tcp --tcp-flags RST RST -s %s -j DROP' %
ip_local)
server = conf.L3socket(filter='host %s' % ip_remote)
ip=IP(dst=ip_remote)
SYN = server.recv()
sleep(182)
SYNACK = ip/TCP(sport=SYN.dport, dport=SYN.sport, flags="SA", seq=1001, ack=SYN.seq + 1)
sr1(SYNACK)
print "SUCCESS"
| false | true |
f7fbe93ae2eb83e5f74d3936c436a08e047b8741 | 3,944 | py | Python | runner/datacker_runner/tests/test_runner.py | tsptoni/datacker | e2989aa9103121e5fd35a38ca061d03e298a43b9 | [
"BSD-3-Clause"
] | 4 | 2021-01-26T21:45:28.000Z | 2021-02-02T11:45:15.000Z | runner/datacker_runner/tests/test_runner.py | tsptoni/datacker | e2989aa9103121e5fd35a38ca061d03e298a43b9 | [
"BSD-3-Clause"
] | 1 | 2021-02-02T11:16:32.000Z | 2021-02-02T11:16:32.000Z | runner/datacker_runner/tests/test_runner.py | tsptoni/datacker | e2989aa9103121e5fd35a38ca061d03e298a43b9 | [
"BSD-3-Clause"
] | 1 | 2021-02-02T10:37:29.000Z | 2021-02-02T10:37:29.000Z | import sys
import io
import unittest
from unittest.mock import patch, mock_open
from pathlib import Path
from ..run import get_config, get_parameters, run
class TestGetConfig(unittest.TestCase):
def test_get_config_defaults(self):
environment = {}
expected = {
"NOTEBOOK_NAME": "main",
"NOTEBOOKS_PATH": Path("/notebooks"),
"OUTPUT_PATH": Path("/output"),
"OUTPUT_PREFIX": "%Y%m%d%H%M%S%f_",
"OUTPUT_STDOUT": False,
}
result = get_config(environment)
self.assertEqual(result, expected)
def test_get_config(self):
environment = {
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": "/test_notebooks",
"OUTPUT_PATH": "/test_output",
"OUTPUT_PREFIX": "%Y%m%d_",
"OUTPUT_STDOUT": "true",
}
expected = {
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "%Y%m%d_",
"OUTPUT_STDOUT": True,
}
result = get_config(environment)
self.assertEqual(result, expected)
class TestGetParameters(unittest.TestCase):
def test_get_json_param(self):
environment = {"PARAMETERS": '{"var1": 3.1415, "var2": "Test value"}'}
expected = {"var1": 3.1415, "var2": "Test value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
def test_get_param(self):
environment = {"PARAM_var1": "3.1415", "PARAM_var2": '"Test value"'}
expected = {"var1": 3.1415, "var2": "Test value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
def test_get_param_priority(self):
environment = {
"PARAMETERS": '{"var1": 3.1415, "var2": "Test value"}',
"PARAM_var1": "3",
"PARAM_var2": '"Another value"',
}
expected = {"var1": 3, "var2": "Another value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
class TestRun(unittest.TestCase):
@patch("datacker_runner.run.pm.execute_notebook")
def test_run(self, mock_pm_execute_notebook):
run(
parameters={"param1": 1.1, "param2": "A string"},
config={
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "prefix_",
"OUTPUT_STDOUT": False,
},
)
mock_pm_execute_notebook.assert_called_with(
Path("/test_notebooks/test_notebook.ipynb"),
Path("/test_output/prefix_test_notebook.ipynb"),
parameters={"param1": 1.1, "param2": "A string"},
)
@patch("datacker_runner.run.pm.execute_notebook")
def test_output_stdout(self, mock_pm_execute_notebook):
saved_stdout = sys.stdout
try:
output = io.StringIO()
sys.stdout = output
mock_open_notebook = mock_open(read_data="test_output")
with patch("builtins.open", mock_open_notebook):
run(
parameters={"param1": 1.1, "param2": "A string"},
config={
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "prefix_",
"OUTPUT_STDOUT": True,
},
)
mock_open_notebook.assert_called_once_with(
Path("/test_output/prefix_test_notebook.ipynb")
)
self.assertEqual(output.getvalue(), "test_output\n")
finally:
sys.stdout = saved_stdout
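From the test expectations above, the runner's parameter handling reads a bulk `PARAMETERS` JSON object and lets individual `PARAM_<name>` variables (also JSON-encoded) override it. A small standalone re-implementation of that behaviour, for illustration only (not the runner's actual code):

```python
import json

def get_parameters(environ):
    """Merge PARAMETERS (bulk JSON) with PARAM_<name> overrides."""
    params = {}
    bulk = environ.get("PARAMETERS")
    if bulk:
        params.update(json.loads(bulk))
    for key, value in environ.items():
        if key.startswith("PARAM_"):
            # per-variable entries take priority over the bulk object
            params[key[len("PARAM_"):]] = json.loads(value)
    return params


env = {"PARAMETERS": '{"var1": 3.1415, "var2": "Test value"}',
       "PARAM_var1": "3"}
assert get_parameters(env) == {"var1": 3, "var2": "Test value"}
```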
| 36.183486 | 78 | 0.559331 | import sys
import io
import unittest
from unittest.mock import patch, mock_open
from pathlib import Path
from ..run import get_config, get_parameters, run
class TestGetConfig(unittest.TestCase):
def test_get_config_defaults(self):
environment = {}
expected = {
"NOTEBOOK_NAME": "main",
"NOTEBOOKS_PATH": Path("/notebooks"),
"OUTPUT_PATH": Path("/output"),
"OUTPUT_PREFIX": "%Y%m%d%H%M%S%f_",
"OUTPUT_STDOUT": False,
}
result = get_config(environment)
self.assertEqual(result, expected)
def test_get_config(self):
environment = {
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": "/test_notebooks",
"OUTPUT_PATH": "/test_output",
"OUTPUT_PREFIX": "%Y%m%d_",
"OUTPUT_STDOUT": "true",
}
expected = {
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "%Y%m%d_",
"OUTPUT_STDOUT": True,
}
result = get_config(environment)
self.assertEqual(result, expected)
class TestGetParameters(unittest.TestCase):
def test_get_json_param(self):
environment = {"PARAMETERS": '{"var1": 3.1415, "var2": "Test value"}'}
expected = {"var1": 3.1415, "var2": "Test value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
def test_get_param(self):
environment = {"PARAM_var1": "3.1415", "PARAM_var2": '"Test value"'}
expected = {"var1": 3.1415, "var2": "Test value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
def test_get_param_priority(self):
environment = {
"PARAMETERS": '{"var1": 3.1415, "var2": "Test value"}',
"PARAM_var1": "3",
"PARAM_var2": '"Another value"',
}
expected = {"var1": 3, "var2": "Another value"}
result = get_parameters(environment)
self.assertEqual(result, expected)
class TestRun(unittest.TestCase):
@patch("datacker_runner.run.pm.execute_notebook")
def test_run(self, mock_pm_execute_notebook):
run(
parameters={"param1": 1.1, "param2": "A string"},
config={
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "prefix_",
"OUTPUT_STDOUT": False,
},
)
mock_pm_execute_notebook.assert_called_with(
Path("/test_notebooks/test_notebook.ipynb"),
Path("/test_output/prefix_test_notebook.ipynb"),
parameters={"param1": 1.1, "param2": "A string"},
)
@patch("datacker_runner.run.pm.execute_notebook")
def test_output_stdout(self, mock_pm_execute_notebook):
saved_stdout = sys.stdout
try:
output = io.StringIO()
sys.stdout = output
mock_open_notebook = mock_open(read_data="test_output")
with patch("builtins.open", mock_open_notebook):
run(
parameters={"param1": 1.1, "param2": "A string"},
config={
"NOTEBOOK_NAME": "test_notebook",
"NOTEBOOKS_PATH": Path("/test_notebooks"),
"OUTPUT_PATH": Path("/test_output"),
"OUTPUT_PREFIX": "prefix_",
"OUTPUT_STDOUT": True,
},
)
mock_open_notebook.assert_called_once_with(
Path("/test_output/prefix_test_notebook.ipynb")
)
self.assertEqual(output.getvalue(), "test_output\n")
finally:
sys.stdout = saved_stdout
| true | true |
f7fbe99146e8b5da4bec7e1aa27780397afc4434 | 5,397 | py | Python | test/end_2_end/test_multiple_events_in_tracker_different_events.py | ryomahan/read-tracardi-api | d0a012fb097ca81daf046b314000301eb54bfad8 | [
"MIT"
] | 3 | 2021-11-27T18:03:31.000Z | 2022-02-06T21:47:59.000Z | test/end_2_end/test_multiple_events_in_tracker_different_events.py | ryomahan/read-tracardi-api | d0a012fb097ca81daf046b314000301eb54bfad8 | [
"MIT"
] | 13 | 2021-11-03T18:15:06.000Z | 2022-03-27T22:28:38.000Z | test/end_2_end/test_multiple_events_in_tracker_different_events.py | ryomahan/read-tracardi-api | d0a012fb097ca81daf046b314000301eb54bfad8 | [
"MIT"
] | 8 | 2021-11-16T04:07:41.000Z | 2022-03-14T14:51:34.000Z | from uuid import uuid4
from tracardi.process_engine.action.v1.flow.start.start_action import StartAction
from tracardi.process_engine.action.v1.increase_views_action import IncreaseViewsAction
from tracardi.domain.flow import Flow
from tracardi.process_engine.action.v1.end_action import EndAction
from tracardi.process_engine.action.v1.debug_payload_action import DebugPayloadAction
from ..api.test_source import create_event_source
from tracardi.service.wf.service.builders import action
from ..utils import Endpoint
endpoint = Endpoint()
def test_source_rule_and_flow():
source_id = str(uuid4())
profile_id = str(uuid4())
flow_id = str(uuid4())
rule_id = str(uuid4())
event_type = 'my-event'
session_id = str(uuid4())
try:
# Delete profile
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert endpoint.delete(f'/rule/{rule_id}').status_code in [200, 404]
assert endpoint.delete(f'/flow/{flow_id}').status_code in [200, 404]
# Create resource
assert create_event_source(source_id, type='javascript', name="End2End test").status_code == 200
assert endpoint.get('/event-sources/refresh').status_code == 200
response = endpoint.post('/rule', data={
"id": rule_id,
"name": "Multiple events in one track",
"event": {
"type": event_type
},
"flow": {
"id": flow_id,
"name": "Multiple events in one track"
},
"source": {
"id": source_id,
"name": "my test source"
},
"enabled": True
})
assert response.status_code == 200
assert endpoint.get('/rules/refresh').status_code == 200
# Create flows
debug = action(DebugPayloadAction, init={"event": {"type": event_type}})
start = action(StartAction)
increase_views = action(IncreaseViewsAction)
end = action(EndAction)
flow = Flow.build("Profile quick update - test", id=flow_id)
flow += debug('event') >> start('payload')
flow += start('payload') >> increase_views('payload')
flow += increase_views('payload') >> end('payload')
assert endpoint.post('/flow/production', data=flow.dict()).status_code == 200
assert endpoint.get('/flows/refresh').status_code == 200
# Send event
payload = {
"source": {
"id": source_id
},
"session": {
"id": session_id
},
"profile": {
"id": profile_id
},
"events": [
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": event_type},
],
"options": {"profile": True}
}
response = endpoint.post("/track", data=payload)
assert endpoint.get(f'/profiles/refresh').status_code == 200
assert endpoint.get(f'/sessions/refresh').status_code == 200
assert response.status_code == 200
result = response.json()
profile_id = result['profile']['id']
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert result['profile']['stats']['views'] == 1
finally:
# Delete
assert endpoint.get(f'/profiles/refresh').status_code == 200
assert endpoint.get(f'/sessions/refresh').status_code == 200
assert endpoint.get(f'/rules/refresh').status_code == 200
assert endpoint.get(f'/flows/refresh').status_code == 200
assert endpoint.get(f'/event-sources/refresh').status_code == 200
assert endpoint.delete(f'/event-source/{source_id}').status_code in [200, 404]
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert endpoint.delete(f'/flow/{flow_id}').status_code in [200, 404]
assert endpoint.delete(f'/rule/{rule_id}').status_code in [200, 404]
assert endpoint.delete(f'/session/{session_id}').status_code in [200, 404]
| 38.276596 | 104 | 0.519918 | from uuid import uuid4
from tracardi.process_engine.action.v1.flow.start.start_action import StartAction
from tracardi.process_engine.action.v1.increase_views_action import IncreaseViewsAction
from tracardi.domain.flow import Flow
from tracardi.process_engine.action.v1.end_action import EndAction
from tracardi.process_engine.action.v1.debug_payload_action import DebugPayloadAction
from ..api.test_source import create_event_source
from tracardi.service.wf.service.builders import action
from ..utils import Endpoint
endpoint = Endpoint()
def test_source_rule_and_flow():
source_id = str(uuid4())
profile_id = str(uuid4())
flow_id = str(uuid4())
rule_id = str(uuid4())
event_type = 'my-event'
session_id = str(uuid4())
try:
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert endpoint.delete(f'/rule/{rule_id}').status_code in [200, 404]
assert endpoint.delete(f'/flow/{flow_id}').status_code in [200, 404]
assert create_event_source(source_id, type='javascript', name="End2End test").status_code == 200
assert endpoint.get('/event-sources/refresh').status_code == 200
response = endpoint.post('/rule', data={
"id": rule_id,
"name": "Multiple events in one track",
"event": {
"type": event_type
},
"flow": {
"id": flow_id,
"name": "Multiple events in one track"
},
"source": {
"id": source_id,
"name": "my test source"
},
"enabled": True
})
assert response.status_code == 200
assert endpoint.get('/rules/refresh').status_code == 200
debug = action(DebugPayloadAction, init={"event": {"type": event_type}})
start = action(StartAction)
increase_views = action(IncreaseViewsAction)
end = action(EndAction)
flow = Flow.build("Profile quick update - test", id=flow_id)
flow += debug('event') >> start('payload')
flow += start('payload') >> increase_views('payload')
flow += increase_views('payload') >> end('payload')
assert endpoint.post('/flow/production', data=flow.dict()).status_code == 200
assert endpoint.get('/flows/refresh').status_code == 200
payload = {
"source": {
"id": source_id
},
"session": {
"id": session_id
},
"profile": {
"id": profile_id
},
"events": [
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": str(uuid4())},
{"type": event_type},
],
"options": {"profile": True}
}
response = endpoint.post("/track", data=payload)
assert endpoint.get(f'/profiles/refresh').status_code == 200
assert endpoint.get(f'/sessions/refresh').status_code == 200
assert response.status_code == 200
result = response.json()
profile_id = result['profile']['id']
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert result['profile']['stats']['views'] == 1
finally:
assert endpoint.get(f'/profiles/refresh').status_code == 200
assert endpoint.get(f'/sessions/refresh').status_code == 200
assert endpoint.get(f'/rules/refresh').status_code == 200
assert endpoint.get(f'/flows/refresh').status_code == 200
assert endpoint.get(f'/event-sources/refresh').status_code == 200
assert endpoint.delete(f'/event-source/{source_id}').status_code in [200, 404]
assert endpoint.delete(f'/profile/{profile_id}').status_code in [200, 404]
assert endpoint.delete(f'/flow/{flow_id}').status_code in [200, 404]
assert endpoint.delete(f'/rule/{rule_id}').status_code in [200, 404]
assert endpoint.delete(f'/session/{session_id}').status_code in [200, 404]
| true | true |
f7fbe9a3708b617da4237bfbe9e18984e7a255e8 | 445 | py | Python | mmdeploy/apis/openvino/__init__.py | Kayce001/mmdeploy | 59470fef0b28e0b760c72269e0696bbdf57db7f1 | [
"Apache-2.0"
] | 1 | 2022-03-08T12:22:34.000Z | 2022-03-08T12:22:34.000Z | mmdeploy/apis/openvino/__init__.py | Kayce001/mmdeploy | 59470fef0b28e0b760c72269e0696bbdf57db7f1 | [
"Apache-2.0"
] | null | null | null | mmdeploy/apis/openvino/__init__.py | Kayce001/mmdeploy | 59470fef0b28e0b760c72269e0696bbdf57db7f1 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) OpenMMLab. All rights reserved.
from mmdeploy.backend.openvino import is_available
__all__ = ['is_available']
if is_available():
from mmdeploy.backend.openvino.onnx2openvino import (get_output_model_file,
onnx2openvino)
from .utils import get_input_info_from_cfg
__all__ += [
'onnx2openvino', 'get_output_model_file', 'get_input_info_from_cfg'
]
| 34.230769 | 79 | 0.665169 |
from mmdeploy.backend.openvino import is_available
__all__ = ['is_available']
if is_available():
from mmdeploy.backend.openvino.onnx2openvino import (get_output_model_file,
onnx2openvino)
from .utils import get_input_info_from_cfg
__all__ += [
'onnx2openvino', 'get_output_model_file', 'get_input_info_from_cfg'
]
| true | true |
f7fbe9e61649859d1ef656fa555216e15edb3f9f | 57 | py | Python | simp_py_examples/m5stack/ex029_jpg.py | kcfkwok2003/Simp_py | f75e66da01b45dc8688dda602f8b33d4258f0c31 | [
"MIT"
] | null | null | null | simp_py_examples/m5stack/ex029_jpg.py | kcfkwok2003/Simp_py | f75e66da01b45dc8688dda602f8b33d4258f0c31 | [
"MIT"
] | null | null | null | simp_py_examples/m5stack/ex029_jpg.py | kcfkwok2003/Simp_py | f75e66da01b45dc8688dda602f8b33d4258f0c31 | [
"MIT"
] | null | null | null |
from simp_py import tft
tft.tft.image(0,0,'kyoco.jpg')
| 11.4 | 30 | 0.719298 |
from simp_py import tft
tft.tft.image(0,0,'kyoco.jpg')
| true | true |
f7fbea52e6bc74951f2e6a5fcfdd21016b8c96fb | 942 | py | Python | test/test_update_schedule_response.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | null | null | null | test/test_update_schedule_response.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | null | null | null | test/test_update_schedule_response.py | Logicworks/opsgenie-python-sdk | 244c4c40ddcc25e70df5ba4425ab8d7c8da59c18 | [
"Apache-2.0"
] | 1 | 2020-11-07T11:27:13.000Z | 2020-11-07T11:27:13.000Z | # coding: utf-8
"""
OpsGenie REST API
OpsGenie OpenAPI Specification # noqa: E501
OpenAPI spec version: 2.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import opsgenie_swagger
from opsgenie_swagger.models.update_schedule_response import UpdateScheduleResponse # noqa: E501
from opsgenie_swagger.rest import ApiException
class TestUpdateScheduleResponse(unittest.TestCase):
"""UpdateScheduleResponse unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testUpdateScheduleResponse(self):
"""Test UpdateScheduleResponse"""
# FIXME: construct object with mandatory attributes with example values
# model = opsgenie_swagger.models.update_schedule_response.UpdateScheduleResponse() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 22.97561 | 105 | 0.726115 |
from __future__ import absolute_import
import unittest
import opsgenie_swagger
from opsgenie_swagger.models.update_schedule_response import UpdateScheduleResponse
from opsgenie_swagger.rest import ApiException
class TestUpdateScheduleResponse(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def testUpdateScheduleResponse(self):
        pass
if __name__ == '__main__':
unittest.main()
| true | true |
f7fbead5fe0e61025c73eb9af7f67c2cb20b3107 | 867 | py | Python | scripts/web_cleaner.py | mgarrettm/graphlet-counting | edbd61d5d3399d901c15f9ccc51016828b6cc4d3 | [
"MIT"
] | null | null | null | scripts/web_cleaner.py | mgarrettm/graphlet-counting | edbd61d5d3399d901c15f9ccc51016828b6cc4d3 | [
"MIT"
] | null | null | null | scripts/web_cleaner.py | mgarrettm/graphlet-counting | edbd61d5d3399d901c15f9ccc51016828b6cc4d3 | [
"MIT"
] | null | null | null | import sys
edges = {}
seen = {}
offsets = {}
max_vertex = 0
s = open(sys.argv[2], "w+")
with open(sys.argv[1]) as f:
for line in f:
arr = line.split("\t")
u = int(arr[0])
v = int(arr[1])
seen[u] = True
seen[v] = True
if u not in edges:
edges[u] = {}
if v not in edges:
edges[v] = {}
edges[u][v] = True
edges[v][u] = True
if max_vertex < u:
max_vertex = u
if max_vertex < v:
max_vertex = v
offset = 0
for u in range(max_vertex + 1):
if u in seen:
offsets[u] = offset
else:
offset += 1
for u in sorted(edges.keys()):
for v in sorted(edges[u].keys()):
if u != v and u in edges and v in edges[u] and u < v:
s.write(str(u - offsets[u] + 1) + "\t" + str(v - offsets[v] + 1) + "\n")
| 23.432432 | 84 | 0.468281 | import sys
edges = {}
seen = {}
offsets = {}
max_vertex = 0
s = open(sys.argv[2], "w+")
with open(sys.argv[1]) as f:
for line in f:
arr = line.split("\t")
u = int(arr[0])
v = int(arr[1])
seen[u] = True
seen[v] = True
if u not in edges:
edges[u] = {}
if v not in edges:
edges[v] = {}
edges[u][v] = True
edges[v][u] = True
if max_vertex < u:
max_vertex = u
if max_vertex < v:
max_vertex = v
offset = 0
for u in range(max_vertex + 1):
if u in seen:
offsets[u] = offset
else:
offset += 1
for u in sorted(edges.keys()):
for v in sorted(edges[u].keys()):
if u != v and u in edges and v in edges[u] and u < v:
s.write(str(u - offsets[u] + 1) + "\t" + str(v - offsets[v] + 1) + "\n")
| true | true |
f7fbeae393b51337883694aefe65ee83138499c4 | 2,280 | py | Python | convert_vg_images.py | robotics-upo/og-sgg | 106c56919428ce927a1cae494932c00a5f58c37d | [
"MIT"
] | null | null | null | convert_vg_images.py | robotics-upo/og-sgg | 106c56919428ce927a1cae494932c00a5f58c37d | [
"MIT"
] | null | null | null | convert_vg_images.py | robotics-upo/og-sgg | 106c56919428ce927a1cae494932c00a5f58c37d | [
"MIT"
] | null | null | null | import tensorflow as tf
#import tensorflow_hub as hub
import numpy as np
#import cv2
import zipfile
import json
import lzma
import os
import telenet.dataset_data as tn_data
from telenet.utils import load_image_for_vrd_yolo, mdl_yolo, parse_yolo_results
from telenet.config import get as tn_config
from tqdm import tqdm
VG_PATH = tn_config('paths.vg')
imgcnvdata = tn_data.load_json_xz('vg-imgcnvdata')
zf1 = zipfile.ZipFile(os.path.join(VG_PATH, 'images.zip'), 'r')
zf2 = zipfile.ZipFile(os.path.join(VG_PATH, 'images2.zip'), 'r')
train_imgs = []
test_imgs = []
for obj in imgcnvdata:
(train_imgs,test_imgs)[obj['split']].append(obj)
def load_image(db, index):
obj = db[index]
if obj['dir'] == 1:
imgdata = zf1.read(f"VG_100K/{obj['file']}")
elif obj['dir'] == 2:
imgdata = zf2.read(f"VG_100K_2/{obj['file']}")
else:
raise "Bad dir"
img, w, h = load_image_for_vrd_yolo(imgdata)
return obj['id'], img, w, h
def load_train_image(index):
return load_image(train_imgs, index)
def load_test_image(index):
return load_image(test_imgs, index)
train_dataset = tf.data.Dataset.from_tensor_slices(list(range(len(train_imgs)))).map(
lambda x: tf.py_function(func=load_train_image, inp=[x], Tout=[tf.string, tf.float32, tf.float32, tf.float32]),
num_parallel_calls=tf.data.AUTOTUNE).batch(1)
test_dataset = tf.data.Dataset.from_tensor_slices(list(range(len(test_imgs)))).map(
lambda x: tf.py_function(func=load_test_image, inp=[x], Tout=[tf.string, tf.float32, tf.float32, tf.float32]),
num_parallel_calls=tf.data.AUTOTUNE).batch(1)
def convert_dataset(dataset, outfile, outfile2):
res = {}
with zipfile.ZipFile(tn_data.path(outfile), 'w') as zfo:
for names,img,widths,heights in tqdm(dataset):
names = names.numpy()
features,yolodata = mdl_yolo(img)
for imnm,imft,imyl,imw,imh in zip(names,features,yolodata,widths,heights):
imnm = imnm.decode('utf-8')
res[imnm] = parse_yolo_results(np.expand_dims(imyl, axis=0), imw, imh)
with zfo.open(f'{imnm}.npy','w') as f:
np.save(f, imft)
with lzma.open(tn_data.path(outfile2), 'wt', encoding='utf-8') as f:
json.dump(res, f)
convert_dataset(train_dataset, 'vg-yolo-train.zip', 'vg-yolo-train-objs.json.xz')
convert_dataset(test_dataset, 'vg-yolo-test.zip', 'vg-yolo-test-objs.json.xz')
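`convert_dataset` streams one `.npy` member per image straight into an open archive via `zfo.open(name, 'w')`. The zip-writing part of that pattern can be sketched with plain bytes and an in-memory buffer (no TensorFlow or NumPy needed):

```python
import io
import zipfile

def write_entries(path, entries):
    """Write {name: bytes} into a fresh zip, one member per entry."""
    with zipfile.ZipFile(path, "w") as zf:
        for name, data in entries.items():
            with zf.open(name, "w") as f:   # streams straight into the archive
                f.write(data)


buf = io.BytesIO()
write_entries(buf, {"img1.npy": b"\x93NUMPY-stub", "img2.npy": b"more"})
with zipfile.ZipFile(buf) as zf:
    assert sorted(zf.namelist()) == ["img1.npy", "img2.npy"]
    assert zf.read("img2.npy") == b"more"
```

In the real script the bytes come from `np.save` serializing each feature tensor.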
| 34.029851 | 112 | 0.732018 | import tensorflow as tf
import numpy as np
import zipfile
import json
import lzma
import os
import telenet.dataset_data as tn_data
from telenet.utils import load_image_for_vrd_yolo, mdl_yolo, parse_yolo_results
from telenet.config import get as tn_config
from tqdm import tqdm
VG_PATH = tn_config('paths.vg')
imgcnvdata = tn_data.load_json_xz('vg-imgcnvdata')
zf1 = zipfile.ZipFile(os.path.join(VG_PATH, 'images.zip'), 'r')
zf2 = zipfile.ZipFile(os.path.join(VG_PATH, 'images2.zip'), 'r')
train_imgs = []
test_imgs = []
for obj in imgcnvdata:
(train_imgs,test_imgs)[obj['split']].append(obj)
def load_image(db, index):
obj = db[index]
if obj['dir'] == 1:
imgdata = zf1.read(f"VG_100K/{obj['file']}")
elif obj['dir'] == 2:
imgdata = zf2.read(f"VG_100K_2/{obj['file']}")
else:
raise "Bad dir"
img, w, h = load_image_for_vrd_yolo(imgdata)
return obj['id'], img, w, h
def load_train_image(index):
return load_image(train_imgs, index)
def load_test_image(index):
return load_image(test_imgs, index)
train_dataset = tf.data.Dataset.from_tensor_slices(list(range(len(train_imgs)))).map(
lambda x: tf.py_function(func=load_train_image, inp=[x], Tout=[tf.string, tf.float32, tf.float32, tf.float32]),
num_parallel_calls=tf.data.AUTOTUNE).batch(1)
test_dataset = tf.data.Dataset.from_tensor_slices(list(range(len(test_imgs)))).map(
lambda x: tf.py_function(func=load_test_image, inp=[x], Tout=[tf.string, tf.float32, tf.float32, tf.float32]),
num_parallel_calls=tf.data.AUTOTUNE).batch(1)
def convert_dataset(dataset, outfile, outfile2):
res = {}
with zipfile.ZipFile(tn_data.path(outfile), 'w') as zfo:
for names,img,widths,heights in tqdm(dataset):
names = names.numpy()
features,yolodata = mdl_yolo(img)
for imnm,imft,imyl,imw,imh in zip(names,features,yolodata,widths,heights):
imnm = imnm.decode('utf-8')
res[imnm] = parse_yolo_results(np.expand_dims(imyl, axis=0), imw, imh)
with zfo.open(f'{imnm}.npy','w') as f:
np.save(f, imft)
with lzma.open(tn_data.path(outfile2), 'wt', encoding='utf-8') as f:
json.dump(res, f)
convert_dataset(train_dataset, 'vg-yolo-train.zip', 'vg-yolo-train-objs.json.xz')
convert_dataset(test_dataset, 'vg-yolo-test.zip', 'vg-yolo-test-objs.json.xz')
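The `(train_imgs, test_imgs)[obj['split']].append(obj)` line earlier in this script routes each record to a list by an integer flag via tuple indexing. A minimal self-contained sketch of that idiom (the records here are illustrative, not from the VG metadata):

```python
# Route records into train/test lists by an integer 'split' flag,
# mirroring the tuple-indexing idiom used in the script above.
records = [
    {'id': 'a', 'split': 0},  # 0 -> train
    {'id': 'b', 'split': 1},  # 1 -> test
    {'id': 'c', 'split': 0},
]
train, test = [], []
for rec in records:
    (train, test)[rec['split']].append(rec)

print([r['id'] for r in train])  # -> ['a', 'c']
print([r['id'] for r in test])   # -> ['b']
```

The trick reads compactly but silently accepts only flags 0 and 1; any other value raises `IndexError`, which here doubles as a sanity check on the split field.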
# -*- coding: utf-8 -*-
"""
Created on Mon Feb 12 21:09:10 2018
@author: Dimitris Loukrezis
Univariate Lagrange and hierarchical polynomials.
"""
import numpy as np
class Lagrange1d:
"""Univariate Lagrange nodal basis polynomial"""
def __init__(self, current_knot, knots):
self.current_knot = current_knot
self.knots = np.array(knots)
self.other_knots = np.setdiff1d(knots, current_knot)
# compute denominator once and re-use
self.denoms_prod = (self.current_knot - self.other_knots).prod()
def evaluate(self, non_grid_knots):
"""Evaluate polynomial on specific non-grid knots"""
non_grid_knots = np.array(non_grid_knots).flatten()
        L = np.array([np.prod(x - self.other_knots) / self.denoms_prod
                      for x in non_grid_knots])  # materialize: Py3 `map` is lazy
        return L
class Hierarchical1d(Lagrange1d):
"""Univariate Lagrange hierarchical basis polynomial"""
def __init__(self, knots):
self.knots = np.array(knots)
self.current_knot = self.knots[-1]
self.other_knots = self.knots[:-1]
# compute denominator once and re-use
self.denoms_prod = (self.current_knot - self.other_knots).prod()
def lagrange1d_eval(current_knot, other_knots, non_grid_knots):
"""Evaluate on NON_GRID_KNOTS a univariate Lagrange polynomial, defined
for CURRENT_KNOT and OTHER_KNOTS"""
other_knots = np.array(other_knots)
non_grid_knots = np.array(non_grid_knots)
denoms = current_knot - other_knots
denoms_prod = denoms.prod()
    L = np.array([np.prod(x - other_knots) / denoms_prod
                  for x in non_grid_knots])  # materialize: Py3 `map` is lazy
    return L
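A quick sanity check of the nodal-basis property used above (each basis polynomial equals 1 at its own knot and 0 at the others), written in plain Python so it stands alone; the knot values are illustrative:

```python
def lagrange_basis(current_knot, other_knots, x):
    """Evaluate the Lagrange nodal basis polynomial for current_knot at x."""
    denom = 1.0
    for k in other_knots:
        denom *= (current_knot - k)  # depends only on the knots, so Lagrange1d
                                     # precomputes this product once and reuses it
    num = 1.0
    for k in other_knots:
        num *= (x - k)
    return num / denom

# Nodal property: 1 at the basis polynomial's own knot, 0 at every other knot.
vals = [lagrange_basis(0.5, [0.0, 1.0], x) for x in (0.0, 0.5, 1.0)]
print(vals)
```

Hoisting the denominator out of `evaluate` is the whole point of storing `denoms_prod` in the class: it turns each evaluation into a single product over the other knots.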
# Copyright 2019 The Texar Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example of fine-tuning OpenAI GPT-2 language model.
Use this for base model and emosup.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import importlib
import numpy as np
import tensorflow as tf
import texar as tx
from data_utils import model_utils, processor, utils
# pylint: disable=invalid-name, too-many-locals, too-many-statements, no-member
# pylint: disable=too-many-branches
flags = tf.flags
FLAGS = flags.FLAGS
flags.DEFINE_string("checkpoint", None,
"Model checkpoint to resume training or for test.")
flags.DEFINE_string("pretrain_checkpoint",
"gpt2_pretrained_models/model_117M/model.ckpt",
"OpenAI pretrained model checkpoint. Ignored if "
"'--checkpoint' is specified.")
flags.DEFINE_string("pretrained_model_dir", "gpt2_pretrained_models/model_117M",
                    "The directory of the pretrained model, for loading vocabulary, "
"etc.")
flags.DEFINE_float("temperature", 0.7,
"Softmax temperature for top-k sample decoding. Must be "
"strictly greater than 0. Defaults to 0.7.")
flags.DEFINE_integer("top_k", 40,
"The number of top most likely candidates from a vocab "
"distribution.")
flags.DEFINE_string("config_train", "configs.config_train",
"Configurations of GPT-2 training, including data and "
"optimization hyperparameters.")
flags.DEFINE_string("config_type", "texar",
"The configuration file type. Set to 'json' if the GPT-2 "
"config file is in the same type of the official GPT-2 "
"config file. Set to 'texar' if GPT-2 config file is in "
"Texar type.")
flags.DEFINE_string("config_model", "configs.config_model",
"The model configuration file to configure the model. "
                    "The config file type is defined by 'config_type'; "
                    "it can be of texar type or json type. "
"For '--config_type=json', set the json config file path"
"like: '--config_model gpt2_pretrained_models/model_117M/"
"hparams.json';"
"For '--config_type=texar', set the texar config file "
"like: '--config_model configs.config_model'.")
flags.DEFINE_string("output_dir", "output/remove_space/",
"The output directory where the model checkpoints will be "
"written.")
flags.DEFINE_bool("do_train", False, "Whether to run training.")
flags.DEFINE_bool("do_eval", False, "Whether to run eval on the dev set.")
flags.DEFINE_bool("do_test", False, "Whether to run test on the test set.")
flags.DEFINE_bool("distributed", False, "Whether to run in distributed mode.")
flags.DEFINE_bool("finetune", False, "Whether to test on finetune mode.")
flags.DEFINE_bool("beam", False, "Whether to do a beam search for inference.")
flags.DEFINE_bool("bpe_loss", False, "Whether to report loss BPE-based or word-based.")
config_train = importlib.import_module(FLAGS.config_train)
def _log(msg, log_fn=None):
tf.logging.info(msg)
if log_fn is None:
log_fn = os.path.join(FLAGS.output_dir, config_train.name, 'log.txt')
with open(log_fn, 'a') as flog:
flog.write(msg + '\n')
def _ids_to_text(ids, proc):
eos_token_id = proc.encoder['<|endoftext|>']
if ids[0] == eos_token_id:
ids = ids[1:]
text = proc.decode(ids)
return text
def main(_):
    """Builds the model and runs training, evaluation, and/or test."""
if FLAGS.distributed:
import horovod.tensorflow as hvd
hvd.init()
tf.logging.set_verbosity(tf.logging.INFO)
if len(config_train.name) > 0:
output_dir = os.path.join(FLAGS.output_dir, config_train.name)
else:
output_dir = FLAGS.output_dir
tx.utils.maybe_create_dir(output_dir)
## Loads GPT-2 model configuration
if FLAGS.config_type == "json":
gpt2_config = model_utils.transform_gpt2_to_texar_config(
FLAGS.config_model)
elif FLAGS.config_type == 'texar':
gpt2_config = importlib.import_module(
FLAGS.config_model)
else:
raise ValueError('Unknown config_type.')
# Creates a data pre-processor for, e.g., BPE encoding
proc = processor.get_encoder(FLAGS.pretrained_model_dir)
max_decoding_length = config_train.max_decoding_length
assert max_decoding_length <= gpt2_config.position_size, (
"max_decoding_length should not be greater than position_size. "
"{}>{}".format(max_decoding_length, gpt2_config.position_size))
## Loads data
    # Configures training data shard in distributed mode
if FLAGS.distributed:
config_train.train_hparam["dataset"]["num_shards"] = hvd.size()
config_train.train_hparam["dataset"]["shard_id"] = hvd.rank()
config_train.train_hparam["batch_size"] //= hvd.size()
datasets = {}
#if FLAGS.do_train:
train_dataset = tx.data.TFRecordData(hparams=config_train.train_hparam)
datasets['train'] = train_dataset
#if FLAGS.do_eval:
dev_dataset = tx.data.TFRecordData(hparams=config_train.dev_hparam)
datasets['dev'] = dev_dataset
#if FLAGS.do_test:
test_dataset = tx.data.TFRecordData(hparams=config_train.test_hparam)
datasets['test'] = test_dataset
iterator = tx.data.FeedableDataIterator(datasets)
batch = iterator.get_next()
batch_size = tf.shape(batch['x1x4_ids'])[0]
## Builds the GPT-2 model
vocab_size = gpt2_config.vocab_size
word_embedder = tx.modules.WordEmbedder(
vocab_size=vocab_size,
hparams=gpt2_config.embed)
pos_embedder = tx.modules.PositionEmbedder(
position_size=gpt2_config.position_size,
hparams=gpt2_config.pos_embed)
# Ties output layer with input word embedding
output_layer = tf.transpose(word_embedder.embedding, (1, 0))
decoder = tx.modules.TransformerDecoder(
vocab_size=vocab_size,
output_layer=output_layer,
hparams=gpt2_config.decoder)
# For training
def _get_recon_loss(ids, full_len, prefix_len, mask_prefix=True, do_print=False):
ids = ids[:,:tf.reduce_max(full_len)]
batch_size__ = tf.shape(ids)[0]
seq_len = tf.fill([batch_size__], tf.shape(ids)[1])
pos_embeds = pos_embedder(sequence_length=seq_len)
input_embeds = word_embedder(ids) + pos_embeds
outputs = decoder(inputs=input_embeds, decoding_strategy='train_greedy')
max_full_len = tf.reduce_max(full_len)
ids = ids[:, :max_full_len]
logits = outputs.logits[:, :max_full_len]
if mask_prefix:
loss_recon = tx.losses.sequence_sparse_softmax_cross_entropy(
labels=ids[:, 1:],
logits=logits[:, :-1, :],
sequence_length=full_len-1,
average_across_timesteps=False,
sum_over_timesteps=False,
average_across_batch=False,
sum_over_batch=False)
mask_recon = tf.sequence_mask(
full_len-1,
dtype=tf.float32)
mask_recon_prefix = 1 - tf.sequence_mask(
prefix_len-1,
maxlen=max_full_len-1,#max_decoding_length-1,
dtype=tf.float32)
mask_recon = mask_recon * mask_recon_prefix
if do_print:
print_op_1 = tf.print(mask_recon)
loss_recon_flat = tx.utils.reduce_with_weights(
tensor=loss_recon,
weights=mask_recon,
average_across_remaining=False,
sum_over_remaining=False,
average_across_batch=False)
print_op_2 = tf.print(loss_recon_flat)
with tf.control_dependencies([print_op_1, print_op_2]):
loss_recon = tx.utils.reduce_with_weights(
tensor=loss_recon,
weights=mask_recon,
average_across_remaining=True,
sum_over_remaining=False)
return loss_recon, mask_recon, loss_recon_flat
else:
loss_recon = tx.utils.reduce_with_weights(
tensor=loss_recon,
weights=mask_recon,
average_across_remaining=True,
sum_over_remaining=False)
else:
loss_recon = tx.losses.sequence_sparse_softmax_cross_entropy(
labels=ids[:, 1:],
logits=logits[:, :-1, :],
sequence_length=full_len-1,
average_across_timesteps=True,
sum_over_timesteps=False,
average_across_batch=True,
sum_over_batch=False)
return loss_recon
## ROC Loss-1: ML loss
x1_len = tf.placeholder(tf.int32, shape=[None], name='x1_len')
x1x4_ids = tf.placeholder(tf.int32, shape=[None, None], name='x1x4_ids')
x1x4_len = tf.placeholder(tf.int32, shape=[None], name='x1x4_len')
loss_fine = _get_recon_loss(x1x4_ids, x1x4_len, x1_len)
tau = tf.placeholder(tf.float32, shape=[], name='tau')
end_token = proc.encoder['<|endoftext|>']
loss = config_train.w_fine * loss_fine
loss_dict = {
'loss': loss,
'loss_fine': config_train.w_fine * loss_fine,
}
## Inference
def _embedding_fn(ids, times):
return word_embedder(ids) + pos_embedder(times)
def _infer(context_name, target_name):
helper = tx.modules.TopKSampleEmbeddingHelper(
embedding=_embedding_fn,
start_tokens=batch['%s_ids' % context_name][:, 0],
end_token=end_token,
top_k=FLAGS.top_k,
softmax_temperature=FLAGS.temperature)
outputs_infer, len_infer = decoder(
context=batch['%s_ids' % context_name],
context_sequence_length=batch['%s_len' % context_name],
max_decoding_length=max_decoding_length,
helper=helper)
yy_ids = tx.utils.varlength_roll(
outputs_infer.sample_id, -batch['%s_len' % context_name])
yy_len = len_infer - batch['%s_len' % context_name]
yy_ids = yy_ids[:, :tf.reduce_max(yy_len)]
yy_logits = outputs_infer.logits
yy_loss = _evaluate_loss_test(yy_logits, target_name, context_name)
return yy_ids, yy_len, yy_loss
def _evaluate_loss_test(logits, target_name, context_name, bpe_loss=FLAGS.bpe_loss):
ids = batch['%s_ids' % target_name]
full_len = batch['%s_len' % target_name]
ids = ids[:, :tf.reduce_max(full_len)]
        # Trim logits to the maximum target length
max_full_len = tf.reduce_max(full_len)
logits = logits[:, :max_full_len]
test_loss = tx.losses.sequence_sparse_softmax_cross_entropy(
labels=ids[:, 1:],
logits=logits[:, :-1, :],
sequence_length=full_len - 1,
average_across_timesteps=False,
            sum_over_timesteps=not bpe_loss,
average_across_batch=False,
sum_over_batch=False)
mask_recon = tf.sequence_mask(
full_len - 1,
dtype=tf.float32)
mask_recon_prefix = 1 - tf.sequence_mask(
batch['%s_len' % context_name] - 1,
maxlen=max_full_len - 1, # max_decoding_length-1,
dtype=tf.float32)
mask_recon = mask_recon * mask_recon_prefix
test_loss = tx.utils.reduce_with_weights(
tensor=test_loss,
weights=mask_recon,
average_across_batch=bpe_loss,
average_across_remaining=bpe_loss,
sum_over_remaining=not bpe_loss)
return test_loss # [bs,] ?
x4_ids_fine, x4_len_fine, x4_loss_fine = _infer('x1', 'x1x4')
    ## Beam-search inference
def _get_beam_ids(context_name):
# beam-search
predictions = decoder(
beam_width=5,
length_penalty=config_train.length_penalty,
embedding=_embedding_fn,
context=batch['%s_ids' % context_name],
context_sequence_length=batch['%s_len' % context_name],
max_decoding_length=max_decoding_length,
end_token=end_token,
mode=tf.estimator.ModeKeys.PREDICT)
beam_output_ids = tx.utils.varlength_roll(predictions["sample_id"][:, :, 0], -batch['%s_len' % context_name])
return beam_output_ids
beam_search_ids = _get_beam_ids('x1')
## Optimization
trainable_variables = tx.utils.collect_trainable_variables(
[word_embedder, pos_embedder, decoder])
global_step = tf.Variable(0, trainable=False)
opt = tx.core.get_optimizer(
global_step=global_step,
hparams=config_train.opt)
if FLAGS.distributed:
opt = hvd.DistributedOptimizer(opt)
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=global_step,
learning_rate=None,
optimizer=opt,
variables=trainable_variables)
## Train/eval/test routine
saver = tf.train.Saver()
saver_best = tf.train.Saver(max_to_keep=1)
dev_best = {
'loss': 1e8, 'loss_fine': 1e8}
def _log_losses(losses, step=None):
loss_str = 'loss: %.4f, loss_fine: %.4f' % \
(losses['loss'], losses['loss_fine'])
if step is not None:
loss_str = 'step: %d, %s' % (step, loss_str)
_log(loss_str)
def _is_head():
if not FLAGS.distributed:
return True
else:
return hvd.rank() == 0
def _train_epoch(sess, initial=False):
"""Trains on the training set, and evaluates on the dev set
periodically.
"""
iterator.restart_dataset(sess, 'train')
while True:
try:
# (1) Get data and yy sample
fetches_data = {
'batch': batch,
'batch_size': batch_size,
}
feed_dict_data = {
iterator.handle: iterator.get_handle(sess, 'train'),
tx.global_mode(): tf.estimator.ModeKeys.PREDICT,
}
rets_data = sess.run(fetches_data, feed_dict_data)
# (2) Optimize loss
feed_dict = {
#x1_ids: rets_data['batch']['x1_ids'],
x1_len: rets_data['batch']['x1_len'],
x1x4_ids: rets_data['batch']['x1x4_ids'],
x1x4_len: rets_data['batch']['x1x4_len'],
tau: config_train.tau,
tx.global_mode(): tf.estimator.ModeKeys.TRAIN,
}
fetches = {
'train_op': train_op,
'step': global_step,
}
fetches.update(loss_dict)
rets = sess.run(fetches, feed_dict)
step = rets['step']
dis_steps = config_train.display_steps
if _is_head() and dis_steps > 0 and step % dis_steps == 0:
_log_losses(rets, step)
eval_steps = config_train.eval_steps
if _is_head() and eval_steps > 0 and step % eval_steps == 0:
_dev_epoch(sess)
sample_steps = config_train.sample_steps
if _is_head() and sample_steps > 0 and step % sample_steps == 0:
print('-----------testing-----------------')
_test_epoch(sess, step=step)
ckpt_steps = config_train.checkpoint_steps
if _is_head() and ckpt_steps > 0 and step % ckpt_steps == 0:
ckpt_fn = os.path.join(output_dir, 'model.ckpt')
ckpt_fn = saver.save(sess, ckpt_fn, global_step=step)
_log('Checkpoint to {}'.format(ckpt_fn))
except tf.errors.OutOfRangeError:
break
def _dev_epoch(sess):
"""Evaluates on the dev set.
"""
iterator.restart_dataset(sess, 'dev')
results = tx.utils.AverageRecorder()
nsamples = 0
fetches = {}
fetches.update(loss_dict)
while True:
try:
# (1) Get data and yy sample
fetches_data = {
'batch': batch,
'batch_size': batch_size,
}
feed_dict_data = {
iterator.handle: iterator.get_handle(sess, 'dev'),
tx.global_mode(): tf.estimator.ModeKeys.PREDICT,
}
rets_data = sess.run(fetches_data, feed_dict_data)
# (2) eval loss
feed_dict = {
#x1_ids: rets_data['batch']['x1_ids'],
x1_len: rets_data['batch']['x1_len'],
x1x4_ids: rets_data['batch']['x1x4_ids'],
x1x4_len: rets_data['batch']['x1x4_len'],
tau: config_train.tau,
tx.global_mode(): tf.estimator.ModeKeys.PREDICT,
}
rets = sess.run(fetches, feed_dict)
results.add(rets, weight=rets_data['batch_size'])
nsamples += rets_data['batch_size']
except tf.errors.OutOfRangeError:
break
_log_losses(results.avg())
_log('nsamples: %d' % nsamples)
avg_loss = results.avg('loss')
if FLAGS.do_train and avg_loss < dev_best['loss']:
dev_best.update(results.avg())
ckpt_fn = os.path.join(output_dir, 'model_best.ckpt')
ckpt_fn = saver_best.save(sess, ckpt_fn)
_log('Checkpoint best to {}'.format(ckpt_fn))
def _test_epoch(sess, step=None):
"""Generates samples on the test set.
"""
iterator.restart_dataset(sess, 'test')
_all_inputs = []
_all_samples = []
_all_loss = []
if FLAGS.finetune:
_log('Generation input: x1')
fetches = {
'inputs': batch['x1_ids'],
'length': batch['x1_len'],
'samples_length': x4_len_fine,
'samples': x4_ids_fine,
'sample_loss': x4_loss_fine,
'outputs': batch['x1x4_ids'],
'out_length': batch['x1x4_len']
}
res_fn_appendix = "x1"
while True:
try:
feed_dict = {
iterator.handle: iterator.get_handle(sess, 'test'),
tx.context.global_mode(): tf.estimator.ModeKeys.PREDICT,
}
rets = sess.run(fetches, feed_dict=feed_dict)
_inputs = []
for i, l in zip(rets['inputs'], rets['length']):
# Delete padding
_inputs.append(i[:l].tolist())
_all_inputs.extend(_inputs)
_samples = []
_loss = []
if not FLAGS.beam:
for s, l in zip(rets['samples'], rets['samples_length']):
_samples.append(s[:l].tolist())
else:
_samples.extend(h.tolist() for h in rets['samples'])
_samples = utils.list_strip_eos(_samples, eos_token=proc.encoder['<|endoftext|>'])
_all_samples.extend(_samples)
except tf.errors.OutOfRangeError:
break
# Parse samples and write to file
eos_token_id = proc.encoder['<|endoftext|>']
_all_input_text = []
for i in _all_inputs:
if i[0] == eos_token_id:
i = i[1:]
i_text = proc.decode(i)
_all_input_text.append(i_text)
_all_input_text = tx.utils.strip_eos(_all_input_text,
eos_token='<|endoftext|>')
_all_samples_text = []
for j, (i, s) in enumerate(zip(_all_inputs, _all_samples)):
s_text = proc.decode(s)
s_text = s_text.replace('\n', ' ')
# print(s_text)
_all_samples_text.append(s_text)
if j % 1000 == 0:
print("{} stories is process of total {}".format(j, len(_all_inputs)))
_all_samples_text = tx.utils.strip_eos(_all_samples_text,
eos_token='<|endoftext|>')
if step is None:
fn = "test_samples_%s_sample40.tsv" % res_fn_appendix
else:
fn = "test_samples_%s_%d.tsv" % (res_fn_appendix, step)
output_file = os.path.join(output_dir, fn)
_log('Write samples to {}'.format(output_file))
if not FLAGS.beam:
tx.utils.write_paired_text(
_all_input_text, _all_samples_text, output_file)
else:
with open(output_file, 'w') as f:
for item in _all_samples_text:
f.write("%s\n" % item)
# Broadcasts global variables from rank-0 process
if FLAGS.distributed:
bcast = hvd.broadcast_global_variables(0)
session_config = tf.ConfigProto()
if FLAGS.distributed:
session_config.gpu_options.visible_device_list = str(hvd.local_rank())
with tf.Session(config=session_config) as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
sess.run(tf.tables_initializer())
# smry_writer = tf.summary.FileWriter(FLAGS.output_dir, graph=sess.graph)
if FLAGS.distributed:
bcast.run()
        # Restores trained model if specified
if FLAGS.checkpoint:
_log('Restore from {}'.format(FLAGS.checkpoint))
saver.restore(sess, FLAGS.checkpoint)
elif FLAGS.pretrain_checkpoint:
_log('Restore from {}'.format(FLAGS.pretrain_checkpoint))
model_utils.init_gpt2_checkpoint(sess, FLAGS.pretrain_checkpoint)
print("\nFinished loading\n")
saver.save(sess, output_dir + '/gpt2_model.ckpt')
iterator.initialize_dataset(sess)
if FLAGS.do_train:
for epoch in range(config_train.max_train_epoch):
print("Training epoch {}".format(epoch))
_train_epoch(sess, epoch==0)
saver.save(sess, output_dir + '/model.ckpt')
if FLAGS.do_eval:
_dev_epoch(sess)
if FLAGS.do_test:
_test_epoch(sess)
if __name__ == "__main__":
tf.app.run()
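For reference, the prefix masking in `_get_recon_loss` above (count loss only on the target tokens, not on the conditioning prefix) can be sketched in plain NumPy; the `full_len`/`prefix_len` values below are illustrative:

```python
import numpy as np

def prefix_mask(full_len, prefix_len, max_len):
    """1.0 where a shifted target position lies inside the sequence but past
    the prefix, 0.0 elsewhere -- mirroring mask_recon * mask_recon_prefix."""
    pos = np.arange(max_len)
    in_sequence = pos < (full_len - 1)     # like tf.sequence_mask(full_len - 1)
    past_prefix = pos >= (prefix_len - 1)  # like 1 - tf.sequence_mask(prefix_len - 1)
    return (in_sequence & past_prefix).astype(np.float32)

mask = prefix_mask(full_len=6, prefix_len=3, max_len=8)
print(mask.tolist())  # -> [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

The `- 1` offsets come from the teacher-forcing shift: position `t` of the loss corresponds to predicting token `t + 1`, so the last prefix token (which predicts the first target token) stays unmasked.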
def _test_epoch(sess, step=None):
iterator.restart_dataset(sess, 'test')
_all_inputs = []
_all_samples = []
_all_loss = []
if FLAGS.finetune:
_log('Generation input: x1')
fetches = {
'inputs': batch['x1_ids'],
'length': batch['x1_len'],
'samples_length': x4_len_fine,
'samples': x4_ids_fine,
'sample_loss': x4_loss_fine,
'outputs': batch['x1x4_ids'],
'out_length': batch['x1x4_len']
}
res_fn_appendix = "x1"
while True:
try:
feed_dict = {
iterator.handle: iterator.get_handle(sess, 'test'),
tx.context.global_mode(): tf.estimator.ModeKeys.PREDICT,
}
rets = sess.run(fetches, feed_dict=feed_dict)
_inputs = []
for i, l in zip(rets['inputs'], rets['length']):
_inputs.append(i[:l].tolist())
_all_inputs.extend(_inputs)
_samples = []
_loss = []
if not FLAGS.beam:
for s, l in zip(rets['samples'], rets['samples_length']):
_samples.append(s[:l].tolist())
else:
_samples.extend(h.tolist() for h in rets['samples'])
_samples = utils.list_strip_eos(_samples, eos_token=proc.encoder['<|endoftext|>'])
_all_samples.extend(_samples)
except tf.errors.OutOfRangeError:
break
eos_token_id = proc.encoder['<|endoftext|>']
_all_input_text = []
for i in _all_inputs:
if i[0] == eos_token_id:
i = i[1:]
i_text = proc.decode(i)
_all_input_text.append(i_text)
_all_input_text = tx.utils.strip_eos(_all_input_text,
eos_token='<|endoftext|>')
_all_samples_text = []
for j, (i, s) in enumerate(zip(_all_inputs, _all_samples)):
s_text = proc.decode(s)
s_text = s_text.replace('\n', ' ')
_all_samples_text.append(s_text)
if j % 1000 == 0:
print("{} stories processed out of {} total".format(j, len(_all_inputs)))
_all_samples_text = tx.utils.strip_eos(_all_samples_text,
eos_token='<|endoftext|>')
if step is None:
fn = "test_samples_%s_sample40.tsv" % res_fn_appendix
else:
fn = "test_samples_%s_%d.tsv" % (res_fn_appendix, step)
output_file = os.path.join(output_dir, fn)
_log('Write samples to {}'.format(output_file))
if not FLAGS.beam:
tx.utils.write_paired_text(
_all_input_text, _all_samples_text, output_file)
else:
with open(output_file, 'w') as f:
for item in _all_samples_text:
f.write("%s\n" % item)
if FLAGS.distributed:
bcast = hvd.broadcast_global_variables(0)
session_config = tf.ConfigProto()
if FLAGS.distributed:
session_config.gpu_options.visible_device_list = str(hvd.local_rank())
with tf.Session(config=session_config) as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
sess.run(tf.tables_initializer())
if FLAGS.distributed:
bcast.run()
if FLAGS.checkpoint:
_log('Restore from {}'.format(FLAGS.checkpoint))
saver.restore(sess, FLAGS.checkpoint)
elif FLAGS.pretrain_checkpoint:
_log('Restore from {}'.format(FLAGS.pretrain_checkpoint))
model_utils.init_gpt2_checkpoint(sess, FLAGS.pretrain_checkpoint)
print("\nFinished loading\n")
saver.save(sess, output_dir + '/gpt2_model.ckpt')
iterator.initialize_dataset(sess)
if FLAGS.do_train:
for epoch in range(config_train.max_train_epoch):
print("Training epoch {}".format(epoch))
_train_epoch(sess, epoch==0)
saver.save(sess, output_dir + '/model.ckpt')
if FLAGS.do_eval:
_dev_epoch(sess)
if FLAGS.do_test:
_test_epoch(sess)
if __name__ == "__main__":
tf.app.run()
# Dataset record: qiskit/algorithms/minimum_eigen_solvers/vqe.py
# from MaxKelsen/qiskit-terra @ f7fbee4ccc9422d35bd6f57d12073fe1989f3c2e (Apache-2.0)
#
# (C) Copyright IBM 2018, 2020.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""The Variational Quantum Eigensolver algorithm.
See https://arxiv.org/abs/1304.3061
"""
from typing import Optional, List, Callable, Union, Dict, Tuple
import logging
import warnings
from time import time
import numpy as np
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.circuit.library import RealAmplitudes
from qiskit.providers import BaseBackend
from qiskit.providers import Backend
from qiskit.opflow import (
OperatorBase,
ExpectationBase,
ExpectationFactory,
StateFn,
CircuitStateFn,
ListOp,
CircuitSampler,
PauliSumOp,
)
from qiskit.opflow.gradients import GradientBase
from qiskit.utils.validation import validate_min
from qiskit.utils.backend_utils import is_aer_provider
from qiskit.utils.deprecation import deprecate_function
from qiskit.utils import QuantumInstance, algorithm_globals
from ..optimizers import Optimizer, SLSQP
from ..variational_algorithm import VariationalAlgorithm, VariationalResult
from .minimum_eigen_solver import MinimumEigensolver, MinimumEigensolverResult, ListOrDict
from ..exceptions import AlgorithmError
logger = logging.getLogger(__name__)
class VQE(VariationalAlgorithm, MinimumEigensolver):
r"""The Variational Quantum Eigensolver algorithm.
`VQE <https://arxiv.org/abs/1304.3061>`__ is a quantum algorithm that uses a
variational technique to find
the minimum eigenvalue of the Hamiltonian :math:`H` of a given system.
An instance of VQE requires defining two algorithmic sub-components:
a trial state (a.k.a. ansatz) which is a :class:`QuantumCircuit`, and one of the classical
:mod:`~qiskit.algorithms.optimizers`. The ansatz is varied, via its set of parameters, by the
optimizer, such that it works towards a state, as determined by the parameters applied to the
ansatz, that will result in the minimum expectation value being measured of the input operator
(Hamiltonian).
An optional array of parameter values, via the *initial_point*, may be provided as the
starting point for the search of the minimum eigenvalue. This feature is particularly useful
such as when there are reasons to believe that the solution point is close to a particular
point. As an example, when building the dissociation profile of a molecule,
it is likely that using the previously computed optimal solution as the starting
initial point for the next interatomic distance will reduce the number of iterations
necessary for the variational algorithm to converge. An
`initial point tutorial <https://github.com/Qiskit/qiskit-tutorials-community/blob/master
/chemistry/h2_vqe_initial_point.ipynb>`__ details this use case.
The length of the *initial_point* list value must match the number of the parameters
expected by the ansatz being used. If the *initial_point* is left at the default
of ``None``, then VQE will look to the ansatz for a preferred value, based on its
given initial state. If the ansatz returns ``None``,
then a random point will be generated within the parameter bounds set, as per above.
If the ansatz provides ``None`` as the lower bound, then VQE
will default it to :math:`-2\pi`; similarly, if the ansatz returns ``None``
as the upper bound, the default value will be :math:`2\pi`.
"""
def __init__(
self,
ansatz: Optional[QuantumCircuit] = None,
optimizer: Optional[Optimizer] = None,
initial_point: Optional[np.ndarray] = None,
gradient: Optional[Union[GradientBase, Callable]] = None,
expectation: Optional[ExpectationBase] = None,
include_custom: bool = False,
max_evals_grouped: int = 1,
callback: Optional[Callable[[int, np.ndarray, float, float], None]] = None,
quantum_instance: Optional[Union[QuantumInstance, BaseBackend, Backend]] = None,
) -> None:
"""
Args:
ansatz: A parameterized circuit used as Ansatz for the wave function.
optimizer: A classical optimizer.
initial_point: An optional initial point (i.e. initial parameter values)
for the optimizer. If ``None`` then VQE will look to the ansatz for a preferred
point and if not will simply compute a random one.
gradient: An optional gradient function or operator for optimizer.
expectation: The Expectation converter for taking the average value of the
Observable over the ansatz state function. When ``None`` (the default) an
:class:`~qiskit.opflow.expectations.ExpectationFactory` is used to select
an appropriate expectation based on the operator and backend. When using Aer
qasm_simulator backend, with paulis, it is however much faster to leverage custom
Aer function for the computation but, although VQE performs much faster
with it, the outcome is ideal, with no shot noise, like using a state vector
simulator. If you are just looking for the quickest performance when choosing Aer
qasm_simulator and the lack of shot noise is not an issue then set `include_custom`
parameter here to ``True`` (defaults to ``False``).
include_custom: When `expectation` parameter here is None setting this to ``True`` will
allow the factory to include the custom Aer pauli expectation.
max_evals_grouped: Max number of evaluations performed simultaneously. Signals the
given optimizer that more than one set of parameters can be supplied so that
potentially the expectation values can be computed in parallel. Typically this is
possible when a finite difference gradient is used by the optimizer such that
multiple points to compute the gradient can be passed and if computed in parallel
improve overall execution time. Deprecated if a gradient operator or function is
given.
callback: a callback that can access the intermediate data during the optimization.
Four parameter values are passed to the callback as follows during each evaluation
by the optimizer for its current set of parameters as it works towards the minimum.
These are: the evaluation count, the optimizer parameters for the
ansatz, the evaluated mean, and the evaluated standard deviation.
quantum_instance: Quantum Instance or Backend
"""
validate_min("max_evals_grouped", max_evals_grouped, 1)
super().__init__()
self._max_evals_grouped = max_evals_grouped
self._circuit_sampler = None # type: Optional[CircuitSampler]
self._expectation = None
self.expectation = expectation
self._include_custom = include_custom
# set ansatz -- still supporting pre 0.18.0 sorting
self._ansatz_params = None
self._ansatz = None
self.ansatz = ansatz
self._optimizer = None
self.optimizer = optimizer
self._initial_point = None
self.initial_point = initial_point
self._gradient = None
self.gradient = gradient
self._quantum_instance = None
if quantum_instance is not None:
self.quantum_instance = quantum_instance
self._eval_time = None
self._eval_count = 0
self._callback = None
self.callback = callback
logger.info(self.print_settings())
# TODO remove this once the stateful methods are deleted
self._ret = None
@property
def ansatz(self) -> QuantumCircuit:
"""Returns the ansatz."""
return self._ansatz
@ansatz.setter
def ansatz(self, ansatz: Optional[QuantumCircuit]):
"""Sets the ansatz.
Args:
ansatz: The parameterized circuit used as an ansatz.
If None is passed, RealAmplitudes is used by default.
"""
if ansatz is None:
ansatz = RealAmplitudes()
self._ansatz = ansatz
self._ansatz_params = list(ansatz.parameters)
@property
def gradient(self) -> Optional[Union[GradientBase, Callable]]:
"""Returns the gradient."""
return self._gradient
@gradient.setter
def gradient(self, gradient: Optional[Union[GradientBase, Callable]]):
"""Sets the gradient."""
self._gradient = gradient
@property
def quantum_instance(self) -> Optional[QuantumInstance]:
"""Returns quantum instance."""
return self._quantum_instance
@quantum_instance.setter
def quantum_instance(
self, quantum_instance: Union[QuantumInstance, BaseBackend, Backend]
) -> None:
"""Sets quantum_instance"""
if not isinstance(quantum_instance, QuantumInstance):
quantum_instance = QuantumInstance(quantum_instance)
self._quantum_instance = quantum_instance
self._circuit_sampler = CircuitSampler(
quantum_instance, param_qobj=is_aer_provider(quantum_instance.backend)
)
@property
def initial_point(self) -> Optional[np.ndarray]:
"""Returns initial point"""
return self._initial_point
@initial_point.setter
def initial_point(self, initial_point: np.ndarray):
"""Sets initial point"""
self._initial_point = initial_point
@property
def max_evals_grouped(self) -> int:
"""Returns max_evals_grouped"""
return self._max_evals_grouped
@max_evals_grouped.setter
def max_evals_grouped(self, max_evals_grouped: int):
"""Sets max_evals_grouped"""
self._max_evals_grouped = max_evals_grouped
self.optimizer.set_max_evals_grouped(max_evals_grouped)
@property
def include_custom(self) -> bool:
"""Returns include_custom"""
return self._include_custom
@include_custom.setter
def include_custom(self, include_custom: bool):
"""Sets include_custom. If set to another value than the one that was previously set,
the expectation attribute is reset to None.
"""
if include_custom != self._include_custom:
self._include_custom = include_custom
self.expectation = None
@property
def callback(self) -> Optional[Callable[[int, np.ndarray, float, float], None]]:
"""Returns callback"""
return self._callback
@callback.setter
def callback(self, callback: Optional[Callable[[int, np.ndarray, float, float], None]]):
"""Sets callback"""
self._callback = callback
@property
def expectation(self) -> Optional[ExpectationBase]:
"""The expectation value algorithm used to construct the expectation measurement from
the observable."""
return self._expectation
@expectation.setter
def expectation(self, exp: Optional[ExpectationBase]) -> None:
self._expectation = exp
def _check_operator_ansatz(self, operator: OperatorBase):
"""Check that the number of qubits of operator and ansatz match."""
if operator is not None and self.ansatz is not None:
if operator.num_qubits != self.ansatz.num_qubits:
# try to set the number of qubits on the ansatz, if possible
try:
self.ansatz.num_qubits = operator.num_qubits
self._ansatz_params = sorted(self.ansatz.parameters, key=lambda p: p.name)
except AttributeError as ex:
raise AlgorithmError(
"The number of qubits of the ansatz does not match the "
"operator, and the ansatz does not allow setting the "
"number of qubits using `num_qubits`."
) from ex
@property
def optimizer(self) -> Optimizer:
"""Returns optimizer"""
return self._optimizer
@optimizer.setter
def optimizer(self, optimizer: Optional[Optimizer]):
"""Sets the optimizer attribute.
Args:
optimizer: The optimizer to be used. If None is passed, SLSQP is used by default.
"""
if optimizer is None:
optimizer = SLSQP()
optimizer.set_max_evals_grouped(self.max_evals_grouped)
self._optimizer = optimizer
@property
def setting(self):
"""Prepare the setting of VQE as a string."""
ret = f"Algorithm: {self.__class__.__name__}\n"
params = ""
for key, value in self.__dict__.items():
if key[0] == "_":
if "initial_point" in key and value is None:
params += "-- {}: {}\n".format(key[1:], "Random seed")
else:
params += f"-- {key[1:]}: {value}\n"
ret += f"{params}"
return ret
def print_settings(self):
"""
Preparing the setting of VQE into a string.
Returns:
str: the formatted setting of VQE
"""
ret = "\n"
ret += "==================== Setting of {} ============================\n".format(
self.__class__.__name__
)
ret += f"{self.setting}"
ret += "===============================================================\n"
if self.ansatz is not None:
ret += "{}".format(self.ansatz.draw(output="text"))
else:
ret += "ansatz has not been set"
ret += "===============================================================\n"
ret += f"{self._optimizer.setting}"
ret += "===============================================================\n"
return ret
def construct_expectation(
self,
parameter: Union[List[float], List[Parameter], np.ndarray],
operator: OperatorBase,
return_expectation: bool = False,
) -> Union[OperatorBase, Tuple[OperatorBase, ExpectationBase]]:
r"""
Generate the ansatz circuit and expectation value measurement, and return their
runnable composition.
Args:
parameter: Parameters for the ansatz circuit.
operator: Qubit operator of the Observable
return_expectation: If True, return the ``ExpectationBase`` expectation converter used
in the construction of the expectation value. Useful e.g. to compute the standard
deviation of the expectation value.
Returns:
The Operator equalling the measurement of the ansatz :class:`StateFn` by the
Observable's expectation :class:`StateFn`, and, optionally, the expectation converter.
Raises:
AlgorithmError: If no operator has been provided.
AlgorithmError: If no expectation is passed and None could be inferred via the
ExpectationFactory.
"""
if operator is None:
raise AlgorithmError("The operator was never provided.")
self._check_operator_ansatz(operator)
# if expectation was never created, try to create one
if self.expectation is None:
expectation = ExpectationFactory.build(
operator=operator,
backend=self.quantum_instance,
include_custom=self._include_custom,
)
else:
expectation = self.expectation
param_dict = dict(zip(self._ansatz_params, parameter)) # type: Dict
wave_function = self.ansatz.assign_parameters(param_dict)
observable_meas = expectation.convert(StateFn(operator, is_measurement=True))
ansatz_circuit_op = CircuitStateFn(wave_function)
expect_op = observable_meas.compose(ansatz_circuit_op).reduce()
if return_expectation:
return expect_op, expectation
return expect_op
def construct_circuit(
self,
parameter: Union[List[float], List[Parameter], np.ndarray],
operator: OperatorBase,
) -> List[QuantumCircuit]:
"""Return the circuits used to compute the expectation value.
Args:
parameter: Parameters for the ansatz circuit.
operator: Qubit operator of the Observable
Returns:
A list of the circuits used to compute the expectation value.
"""
expect_op = self.construct_expectation(parameter, operator).to_circuit_op()
circuits = []
# recursively extract circuits
def extract_circuits(op):
if isinstance(op, CircuitStateFn):
circuits.append(op.primitive)
elif isinstance(op, ListOp):
for op_i in op.oplist:
extract_circuits(op_i)
extract_circuits(expect_op)
return circuits
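The recursive `extract_circuits` walk above is a standard depth-first leaf-collection pattern over a nested operator tree; on plain nested lists (standing in for `ListOp`/`CircuitStateFn`, which is an illustrative simplification, not the opflow types) it reduces to:

```python
def collect_leaves(op, leaves=None):
    """Collect every non-list leaf of a nested list structure, depth-first."""
    if leaves is None:
        leaves = []
    if isinstance(op, list):
        for child in op:        # recurse into each sub-operator
            collect_leaves(child, leaves)
    else:
        leaves.append(op)       # leaf: record it, like appending op.primitive
    return leaves

print(collect_leaves(["qc1", ["qc2", ["qc3"]], "qc4"]))
# ['qc1', 'qc2', 'qc3', 'qc4']
```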
@classmethod
def supports_aux_operators(cls) -> bool:
return True
def _eval_aux_ops(
self,
parameters: np.ndarray,
aux_operators: ListOrDict[OperatorBase],
expectation: ExpectationBase,
threshold: float = 1e-12,
) -> ListOrDict[Tuple[complex, complex]]:
# Create new CircuitSampler to avoid breaking existing one's caches.
sampler = CircuitSampler(self.quantum_instance)
if isinstance(aux_operators, dict):
list_op = ListOp(list(aux_operators.values()))
else:
list_op = ListOp(aux_operators)
aux_op_expect = expectation.convert(
StateFn(list_op, is_measurement=True).compose(
CircuitStateFn(self.ansatz.bind_parameters(parameters))
)
)
aux_op_expect_sampled = sampler.convert(aux_op_expect)
# compute means
values = np.real(aux_op_expect_sampled.eval())
# compute standard deviations
variances = np.real(expectation.compute_variance(aux_op_expect_sampled))
if not isinstance(variances, np.ndarray) and variances == 0.0:
# when `variances` is a single value equal to 0., our expectation value is exact and we
# manually ensure the variances to be a list of the correct length
variances = np.zeros(len(aux_operators), dtype=float)
std_devs = np.sqrt(variances / self.quantum_instance.run_config.shots)
# Discard values below threshold
aux_op_means = values * (np.abs(values) > threshold)
# zip means and standard deviations into tuples
aux_op_results = zip(aux_op_means, std_devs)
# Return None eigenvalues for None operators if aux_operators is a list.
# None operators are already dropped in compute_minimum_eigenvalue if aux_operators is a dict.
if isinstance(aux_operators, list):
aux_operator_eigenvalues = [None] * len(aux_operators)
key_value_iterator = enumerate(aux_op_results)
else:
aux_operator_eigenvalues = {}
key_value_iterator = zip(aux_operators.keys(), aux_op_results)
for key, value in key_value_iterator:
if aux_operators[key] is not None:
aux_operator_eigenvalues[key] = value
return aux_operator_eigenvalues
def compute_minimum_eigenvalue(
self, operator: OperatorBase, aux_operators: Optional[ListOrDict[OperatorBase]] = None
) -> MinimumEigensolverResult:
super().compute_minimum_eigenvalue(operator, aux_operators)
if self.quantum_instance is None:
raise AlgorithmError(
"A QuantumInstance or Backend must be supplied to run the quantum algorithm."
)
self.quantum_instance.circuit_summary = True
# this sets the size of the ansatz, so it must be called before the initial point
# validation
self._check_operator_ansatz(operator)
# set an expectation for this algorithm run (will be reset to None at the end)
initial_point = _validate_initial_point(self.initial_point, self.ansatz)
bounds = _validate_bounds(self.ansatz)
# We need to handle the array entries being zero or Optional i.e. having value None
if aux_operators:
zero_op = PauliSumOp.from_list([("I" * self.ansatz.num_qubits, 0)])
# Convert the None and zero values when aux_operators is a list.
# Drop None and convert zero values when aux_operators is a dict.
if isinstance(aux_operators, list):
key_op_iterator = enumerate(aux_operators)
converted = [zero_op] * len(aux_operators)
else:
key_op_iterator = aux_operators.items()
converted = {}
for key, op in key_op_iterator:
if op is not None:
converted[key] = zero_op if op == 0 else op
aux_operators = converted
else:
aux_operators = None
# Convert the gradient operator into a callable function that is compatible with the
# optimization routine.
if isinstance(self._gradient, GradientBase):
gradient = self._gradient.gradient_wrapper(
~StateFn(operator) @ StateFn(self._ansatz),
bind_params=self._ansatz_params,
backend=self._quantum_instance,
)
else:
gradient = self._gradient
self._eval_count = 0
energy_evaluation, expectation = self.get_energy_evaluation(
operator, return_expectation=True
)
start_time = time()
# keep this until Optimizer.optimize is removed
try:
opt_result = self.optimizer.minimize(
fun=energy_evaluation, x0=initial_point, jac=gradient, bounds=bounds
)
except AttributeError:
# self.optimizer is an optimizer with the deprecated interface that uses
# ``optimize`` instead of ``minimize``
warnings.warn(
"Using an optimizer that is run with the ``optimize`` method is "
"deprecated as of Qiskit Terra 0.19.0 and will be unsupported no "
"sooner than 3 months after the release date. Instead use an optimizer "
"providing ``minimize`` (see qiskit.algorithms.optimizers.Optimizer).",
DeprecationWarning,
stacklevel=2,
)
opt_result = self.optimizer.optimize(
len(initial_point), energy_evaluation, gradient, bounds, initial_point
)
eval_time = time() - start_time
result = VQEResult()
result.optimal_point = opt_result.x
result.optimal_parameters = dict(zip(self._ansatz_params, opt_result.x))
result.optimal_value = opt_result.fun
result.cost_function_evals = opt_result.nfev
result.optimizer_time = eval_time
result.eigenvalue = opt_result.fun + 0j
result.eigenstate = self._get_eigenstate(result.optimal_parameters)
logger.info(
"Optimization complete in %s seconds.\nFound opt_params %s in %s evals",
eval_time,
result.optimal_point,
self._eval_count,
)
# TODO delete as soon as get_optimal_vector etc are removed
self._ret = result
if aux_operators is not None:
aux_values = self._eval_aux_ops(opt_result.x, aux_operators, expectation=expectation)
result.aux_operator_eigenvalues = aux_values
return result
def get_energy_evaluation(
self,
operator: OperatorBase,
return_expectation: bool = False,
) -> Callable[[np.ndarray], Union[float, List[float]]]:
"""Returns a function handle that evaluates the energy at given parameters for the ansatz.
This is the objective function to be passed to the optimizer that is used for evaluation.
Args:
operator: The operator whose energy to evaluate.
return_expectation: If True, return the ``ExpectationBase`` expectation converter used
in the construction of the expectation value. Useful e.g. to evaluate other
operators with the same expectation value converter.
Returns:
Energy of the hamiltonian of each parameter, and, optionally, the expectation
converter.
Raises:
RuntimeError: If the circuit is not parameterized (i.e. has 0 free parameters).
"""
num_parameters = self.ansatz.num_parameters
if num_parameters == 0:
raise RuntimeError("The ansatz must be parameterized, but has 0 free parameters.")
expect_op, expectation = self.construct_expectation(
self._ansatz_params, operator, return_expectation=True
)
def energy_evaluation(parameters):
parameter_sets = np.reshape(parameters, (-1, num_parameters))
# Create dict associating each parameter with the lists of parameterization values for it
param_bindings = dict(zip(self._ansatz_params, parameter_sets.transpose().tolist()))
start_time = time()
sampled_expect_op = self._circuit_sampler.convert(expect_op, params=param_bindings)
means = np.real(sampled_expect_op.eval())
if self._callback is not None:
variance = np.real(expectation.compute_variance(sampled_expect_op))
estimator_error = np.sqrt(variance / self.quantum_instance.run_config.shots)
for i, param_set in enumerate(parameter_sets):
self._eval_count += 1
self._callback(self._eval_count, param_set, means[i], estimator_error[i])
else:
self._eval_count += len(means)
end_time = time()
logger.info(
"Energy evaluation returned %s - %.5f (ms), eval count: %s",
means,
(end_time - start_time) * 1000,
self._eval_count,
)
return means if len(means) > 1 else means[0]
if return_expectation:
return energy_evaluation, expectation
return energy_evaluation
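`energy_evaluation` accepts either one parameter set or several concatenated sets, reshaping the flat array to shape `(-1, num_parameters)` and binding each parameter to the list of values it takes across sets. That chunk-and-transpose layout, sketched in plain Python (an illustration of the binding shape, not the opflow sampler):

```python
def bind_parameter_sets(names, flat_values):
    """Split flat_values into rows of len(names), then map each parameter
    name to the list of values it takes across rows (one per set)."""
    n = len(names)
    assert len(flat_values) % n == 0, "flat length must be a multiple of n"
    rows = [flat_values[i:i + n] for i in range(0, len(flat_values), n)]
    columns = zip(*rows)  # transpose: one value-sequence per parameter
    return {name: list(col) for name, col in zip(names, columns)}

print(bind_parameter_sets(["a", "b"], [0.1, 0.2, 0.3, 0.4]))
# {'a': [0.1, 0.3], 'b': [0.2, 0.4]}
```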
@deprecate_function(
"""
The VQE.get_optimal_cost method is deprecated as of Qiskit Terra 0.18.0
and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.eigenvalue."""
)
def get_optimal_cost(self) -> float:
"""Get the minimal cost or energy found by the VQE."""
if self._ret.optimal_point is None:
raise AlgorithmError(
"Cannot return optimal cost before running the algorithm to find optimal params."
)
return self._ret.optimal_value
@deprecate_function(
"""
The VQE.get_optimal_circuit method is deprecated as of Qiskit Terra
0.18.0 and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.ansatz.bind_parameters(VQEResult.optimal_point)."""
)
def get_optimal_circuit(self) -> QuantumCircuit:
"""Get the circuit with the optimal parameters."""
if self._ret.optimal_point is None:
raise AlgorithmError(
"Cannot find optimal circuit before running the algorithm to find optimal params."
)
return self.ansatz.assign_parameters(self._ret.optimal_parameters)
@deprecate_function(
"""
The VQE.get_optimal_vector method is deprecated as of Qiskit Terra 0.18.0
and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.eigenvector."""
)
def get_optimal_vector(self) -> Union[List[float], Dict[str, int]]:
"""Get the simulation outcome of the optimal circuit."""
if self._ret.optimal_parameters is None:
raise AlgorithmError(
"Cannot find optimal circuit before running the algorithm to find optimal vector."
)
return self._get_eigenstate(self._ret.optimal_parameters)
def _get_eigenstate(self, optimal_parameters) -> Union[List[float], Dict[str, int]]:
"""Get the simulation outcome of the ansatz, provided with parameters."""
optimal_circuit = self.ansatz.bind_parameters(optimal_parameters)
state_fn = self._circuit_sampler.convert(StateFn(optimal_circuit)).eval()
if self.quantum_instance.is_statevector:
state = state_fn.primitive.data # VectorStateFn -> Statevector -> np.array
else:
state = state_fn.to_dict_fn().primitive # SparseVectorStateFn -> DictStateFn -> dict
return state
@property
@deprecate_function(
"""
The VQE.optimal_params property is deprecated as of Qiskit Terra 0.18.0
and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.optimal_point."""
)
def optimal_params(self) -> np.ndarray:
"""The optimal parameters for the ansatz."""
if self._ret.optimal_point is None:
raise AlgorithmError("Cannot find optimal params before running the algorithm.")
return self._ret.optimal_point
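The variational principle this class implements — vary the ansatz parameters until the measured expectation value of the Hamiltonian is minimal — can be illustrated without any backend. For the single-qubit ansatz RY(theta)|0> and Hamiltonian H = Z, the expectation value is cos(theta), minimized at theta = pi. A toy classical sketch (not Qiskit code, and a grid search stands in for the optimizer):

```python
import math

def ry_expectation_z(theta):
    """<psi|Z|psi> for psi = RY(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return c * c - s * s  # equals cos(theta)

# Crude stand-in for the classical optimizer loop VQE runs against
# quantum expectation evaluations: grid-search the single parameter.
thetas = [2 * math.pi * k / 1000 for k in range(1000)]
best_theta = min(thetas, key=ry_expectation_z)
print(round(ry_expectation_z(best_theta), 4))  # -1.0, reached near theta = pi
```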
class VQEResult(VariationalResult, MinimumEigensolverResult):
"""VQE Result."""
def __init__(self) -> None:
super().__init__()
self._cost_function_evals = None
@property
def cost_function_evals(self) -> Optional[int]:
"""Returns number of cost optimizer evaluations"""
return self._cost_function_evals
@cost_function_evals.setter
def cost_function_evals(self, value: int) -> None:
"""Sets number of cost function evaluations"""
self._cost_function_evals = value
@property
def eigenstate(self) -> Optional[np.ndarray]:
"""return eigen state"""
return self._eigenstate
@eigenstate.setter
def eigenstate(self, value: np.ndarray) -> None:
"""set eigen state"""
self._eigenstate = value
def _validate_initial_point(point, ansatz):
expected_size = ansatz.num_parameters
# try getting the initial point from the ansatz
if point is None and hasattr(ansatz, "preferred_init_points"):
point = ansatz.preferred_init_points
# if the point is None choose a random initial point
if point is None:
# get bounds if ansatz has them set, otherwise use [-2pi, 2pi] for each parameter
bounds = getattr(ansatz, "parameter_bounds", None)
if not bounds:
bounds = [(-2 * np.pi, 2 * np.pi)] * expected_size
# replace all Nones by [-2pi, 2pi]
lower_bounds = []
upper_bounds = []
for lower, upper in bounds:
lower_bounds.append(lower if lower is not None else -2 * np.pi)
upper_bounds.append(upper if upper is not None else 2 * np.pi)
# sample from within bounds
point = algorithm_globals.random.uniform(lower_bounds, upper_bounds)
elif len(point) != expected_size:
raise ValueError(
f"The dimension of the initial point ({len(point)}) does not match the "
f"number of parameters in the circuit ({expected_size})."
)
return point
def _validate_bounds(ansatz):
if hasattr(ansatz, "parameter_bounds") and ansatz.parameter_bounds is not None:
bounds = ansatz.parameter_bounds
if len(bounds) != ansatz.num_parameters:
raise ValueError(
f"The number of bounds ({len(bounds)}) does not match the number of "
f"parameters in the circuit ({ansatz.num_parameters})."
)
else:
bounds = [(None, None)] * ansatz.num_parameters
return bounds
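The `_validate_initial_point` helper above implements a simple fallback chain: use the ansatz's preferred point if present, otherwise sample uniformly inside the bounds, substituting `[-2*pi, 2*pi]` for any missing entries. A standalone sketch of that sampling branch (the `sample_initial_point` helper is illustrative, not part of Qiskit):

```python
import numpy as np

def sample_initial_point(num_parameters, bounds=None, rng=None):
    # Mirror of _validate_initial_point's random branch: default missing
    # bounds (or None entries) to [-2*pi, 2*pi] and sample uniformly.
    rng = rng or np.random.default_rng(seed=7)
    if not bounds:
        bounds = [(-2 * np.pi, 2 * np.pi)] * num_parameters
    lower = [lo if lo is not None else -2 * np.pi for lo, _ in bounds]
    upper = [hi if hi is not None else 2 * np.pi for _, hi in bounds]
    return rng.uniform(lower, upper)

# Mixed case: one finite bound, one unbounded (None, None), one symmetric.
point = sample_initial_point(3, bounds=[(0, 1), (None, None), (-1, 1)])
```

Each coordinate lands inside its (possibly defaulted) interval, which is all the optimizer needs for a feasible start.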
| 40.592732 | 102 | 0.649245 |
from typing import Optional, List, Callable, Union, Dict, Tuple
import logging
import warnings
from time import time
import numpy as np
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.circuit.library import RealAmplitudes
from qiskit.providers import BaseBackend
from qiskit.providers import Backend
from qiskit.opflow import (
OperatorBase,
ExpectationBase,
ExpectationFactory,
StateFn,
CircuitStateFn,
ListOp,
CircuitSampler,
PauliSumOp,
)
from qiskit.opflow.gradients import GradientBase
from qiskit.utils.validation import validate_min
from qiskit.utils.backend_utils import is_aer_provider
from qiskit.utils.deprecation import deprecate_function
from qiskit.utils import QuantumInstance, algorithm_globals
from ..optimizers import Optimizer, SLSQP
from ..variational_algorithm import VariationalAlgorithm, VariationalResult
from .minimum_eigen_solver import MinimumEigensolver, MinimumEigensolverResult, ListOrDict
from ..exceptions import AlgorithmError
logger = logging.getLogger(__name__)
class VQE(VariationalAlgorithm, MinimumEigensolver):
def __init__(
self,
ansatz: Optional[QuantumCircuit] = None,
optimizer: Optional[Optimizer] = None,
initial_point: Optional[np.ndarray] = None,
gradient: Optional[Union[GradientBase, Callable]] = None,
expectation: Optional[ExpectationBase] = None,
include_custom: bool = False,
max_evals_grouped: int = 1,
callback: Optional[Callable[[int, np.ndarray, float, float], None]] = None,
quantum_instance: Optional[Union[QuantumInstance, BaseBackend, Backend]] = None,
) -> None:
validate_min("max_evals_grouped", max_evals_grouped, 1)
super().__init__()
self._max_evals_grouped = max_evals_grouped
self._circuit_sampler = None
self._expectation = None
self.expectation = expectation
self._include_custom = include_custom
self._ansatz_params = None
self._ansatz = None
self.ansatz = ansatz
self._optimizer = None
self.optimizer = optimizer
self._initial_point = None
self.initial_point = initial_point
self._gradient = None
self.gradient = gradient
self._quantum_instance = None
if quantum_instance is not None:
self.quantum_instance = quantum_instance
self._eval_time = None
self._eval_count = 0
self._callback = None
self.callback = callback
logger.info(self.print_settings())
self._ret = None
@property
def ansatz(self) -> QuantumCircuit:
return self._ansatz
@ansatz.setter
def ansatz(self, ansatz: Optional[QuantumCircuit]):
if ansatz is None:
ansatz = RealAmplitudes()
self._ansatz = ansatz
self._ansatz_params = list(ansatz.parameters)
@property
def gradient(self) -> Optional[Union[GradientBase, Callable]]:
return self._gradient
@gradient.setter
def gradient(self, gradient: Optional[Union[GradientBase, Callable]]):
self._gradient = gradient
@property
def quantum_instance(self) -> Optional[QuantumInstance]:
return self._quantum_instance
@quantum_instance.setter
def quantum_instance(
self, quantum_instance: Union[QuantumInstance, BaseBackend, Backend]
) -> None:
if not isinstance(quantum_instance, QuantumInstance):
quantum_instance = QuantumInstance(quantum_instance)
self._quantum_instance = quantum_instance
self._circuit_sampler = CircuitSampler(
quantum_instance, param_qobj=is_aer_provider(quantum_instance.backend)
)
@property
def initial_point(self) -> Optional[np.ndarray]:
return self._initial_point
@initial_point.setter
def initial_point(self, initial_point: np.ndarray):
self._initial_point = initial_point
@property
def max_evals_grouped(self) -> int:
return self._max_evals_grouped
@max_evals_grouped.setter
def max_evals_grouped(self, max_evals_grouped: int):
self._max_evals_grouped = max_evals_grouped
self.optimizer.set_max_evals_grouped(max_evals_grouped)
@property
def include_custom(self) -> bool:
return self._include_custom
@include_custom.setter
def include_custom(self, include_custom: bool):
if include_custom != self._include_custom:
self._include_custom = include_custom
self.expectation = None
@property
def callback(self) -> Optional[Callable[[int, np.ndarray, float, float], None]]:
return self._callback
@callback.setter
def callback(self, callback: Optional[Callable[[int, np.ndarray, float, float], None]]):
self._callback = callback
@property
def expectation(self) -> Optional[ExpectationBase]:
return self._expectation
@expectation.setter
def expectation(self, exp: Optional[ExpectationBase]) -> None:
self._expectation = exp
def _check_operator_ansatz(self, operator: OperatorBase):
if operator is not None and self.ansatz is not None:
if operator.num_qubits != self.ansatz.num_qubits:
try:
self.ansatz.num_qubits = operator.num_qubits
self._ansatz_params = sorted(self.ansatz.parameters, key=lambda p: p.name)
except AttributeError as ex:
raise AlgorithmError(
"The number of qubits of the ansatz does not match the "
"operator, and the ansatz does not allow setting the "
"number of qubits using `num_qubits`."
) from ex
@property
def optimizer(self) -> Optimizer:
return self._optimizer
@optimizer.setter
def optimizer(self, optimizer: Optional[Optimizer]):
if optimizer is None:
optimizer = SLSQP()
optimizer.set_max_evals_grouped(self.max_evals_grouped)
self._optimizer = optimizer
@property
def setting(self):
ret = f"Algorithm: {self.__class__.__name__}\n"
params = ""
for key, value in self.__dict__.items():
if key[0] == "_":
if "initial_point" in key and value is None:
params += "-- {}: {}\n".format(key[1:], "Random seed")
else:
params += f"-- {key[1:]}: {value}\n"
ret += f"{params}"
return ret
def print_settings(self):
ret = "\n"
ret += "==================== Setting of {} ============================\n".format(
self.__class__.__name__
)
ret += f"{self.setting}"
ret += "===============================================================\n"
if self.ansatz is not None:
ret += "{}".format(self.ansatz.draw(output="text"))
else:
ret += "ansatz has not been set"
ret += "===============================================================\n"
ret += f"{self._optimizer.setting}"
ret += "===============================================================\n"
return ret
def construct_expectation(
self,
parameter: Union[List[float], List[Parameter], np.ndarray],
operator: OperatorBase,
return_expectation: bool = False,
) -> Union[OperatorBase, Tuple[OperatorBase, ExpectationBase]]:
if operator is None:
raise AlgorithmError("The operator was never provided.")
self._check_operator_ansatz(operator)
if self.expectation is None:
expectation = ExpectationFactory.build(
operator=operator,
backend=self.quantum_instance,
include_custom=self._include_custom,
)
else:
expectation = self.expectation
param_dict = dict(zip(self._ansatz_params, parameter))
wave_function = self.ansatz.assign_parameters(param_dict)
observable_meas = expectation.convert(StateFn(operator, is_measurement=True))
ansatz_circuit_op = CircuitStateFn(wave_function)
expect_op = observable_meas.compose(ansatz_circuit_op).reduce()
if return_expectation:
return expect_op, expectation
return expect_op
def construct_circuit(
self,
parameter: Union[List[float], List[Parameter], np.ndarray],
operator: OperatorBase,
) -> List[QuantumCircuit]:
expect_op = self.construct_expectation(parameter, operator).to_circuit_op()
circuits = []
def extract_circuits(op):
if isinstance(op, CircuitStateFn):
circuits.append(op.primitive)
elif isinstance(op, ListOp):
for op_i in op.oplist:
extract_circuits(op_i)
extract_circuits(expect_op)
return circuits
@classmethod
def supports_aux_operators(cls) -> bool:
return True
def _eval_aux_ops(
self,
parameters: np.ndarray,
aux_operators: ListOrDict[OperatorBase],
expectation: ExpectationBase,
threshold: float = 1e-12,
) -> ListOrDict[Tuple[complex, complex]]:
sampler = CircuitSampler(self.quantum_instance)
if isinstance(aux_operators, dict):
list_op = ListOp(list(aux_operators.values()))
else:
list_op = ListOp(aux_operators)
aux_op_expect = expectation.convert(
StateFn(list_op, is_measurement=True).compose(
CircuitStateFn(self.ansatz.bind_parameters(parameters))
)
)
aux_op_expect_sampled = sampler.convert(aux_op_expect)
# compute means
values = np.real(aux_op_expect_sampled.eval())
# compute standard deviations
variances = np.real(expectation.compute_variance(aux_op_expect_sampled))
if not isinstance(variances, np.ndarray) and variances == 0.0:
# when `variances` is a single value equal to 0., our expectation value is exact and we
# manually ensure the variances to be a list of the correct length
variances = np.zeros(len(aux_operators), dtype=float)
std_devs = np.sqrt(variances / self.quantum_instance.run_config.shots)
# Discard values below threshold
aux_op_means = values * (np.abs(values) > threshold)
# zip means and standard deviations into tuples
aux_op_results = zip(aux_op_means, std_devs)
# Return None eigenvalues for None operators if aux_operators is a list.
# None operators are already dropped in compute_minimum_eigenvalue if aux_operators is a dict.
if isinstance(aux_operators, list):
aux_operator_eigenvalues = [None] * len(aux_operators)
key_value_iterator = enumerate(aux_op_results)
else:
aux_operator_eigenvalues = {}
key_value_iterator = zip(aux_operators.keys(), aux_op_results)
for key, value in key_value_iterator:
if aux_operators[key] is not None:
aux_operator_eigenvalues[key] = value
return aux_operator_eigenvalues
def compute_minimum_eigenvalue(
self, operator: OperatorBase, aux_operators: Optional[ListOrDict[OperatorBase]] = None
) -> MinimumEigensolverResult:
super().compute_minimum_eigenvalue(operator, aux_operators)
if self.quantum_instance is None:
raise AlgorithmError(
"A QuantumInstance or Backend must be supplied to run the quantum algorithm."
)
self.quantum_instance.circuit_summary = True
# this sets the size of the ansatz, so it must be called before the initial point
# validation
self._check_operator_ansatz(operator)
# set an expectation for this algorithm run (will be reset to None at the end)
initial_point = _validate_initial_point(self.initial_point, self.ansatz)
bounds = _validate_bounds(self.ansatz)
# We need to handle the array entries being zero or Optional i.e. having value None
if aux_operators:
zero_op = PauliSumOp.from_list([("I" * self.ansatz.num_qubits, 0)])
# Convert the None and zero values when aux_operators is a list.
# Drop None and convert zero values when aux_operators is a dict.
if isinstance(aux_operators, list):
key_op_iterator = enumerate(aux_operators)
converted = [zero_op] * len(aux_operators)
else:
key_op_iterator = aux_operators.items()
converted = {}
for key, op in key_op_iterator:
if op is not None:
converted[key] = zero_op if op == 0 else op
aux_operators = converted
else:
aux_operators = None
# Convert the gradient operator into a callable function that is compatible with the
# optimization routine.
if isinstance(self._gradient, GradientBase):
gradient = self._gradient.gradient_wrapper(
~StateFn(operator) @ StateFn(self._ansatz),
bind_params=self._ansatz_params,
backend=self._quantum_instance,
)
else:
gradient = self._gradient
self._eval_count = 0
energy_evaluation, expectation = self.get_energy_evaluation(
operator, return_expectation=True
)
start_time = time()
# keep this until Optimizer.optimize is removed
try:
opt_result = self.optimizer.minimize(
fun=energy_evaluation, x0=initial_point, jac=gradient, bounds=bounds
)
except AttributeError:
            # self.optimizer is an optimizer with the deprecated interface that uses
            # ``optimize`` instead of ``minimize``
warnings.warn(
"Using an optimizer that is run with the ``optimize`` method is "
"deprecated as of Qiskit Terra 0.19.0 and will be unsupported no "
"sooner than 3 months after the release date. Instead use an optimizer "
"providing ``minimize`` (see qiskit.algorithms.optimizers.Optimizer).",
DeprecationWarning,
stacklevel=2,
)
opt_result = self.optimizer.optimize(
len(initial_point), energy_evaluation, gradient, bounds, initial_point
)
eval_time = time() - start_time
result = VQEResult()
result.optimal_point = opt_result.x
result.optimal_parameters = dict(zip(self._ansatz_params, opt_result.x))
result.optimal_value = opt_result.fun
result.cost_function_evals = opt_result.nfev
result.optimizer_time = eval_time
result.eigenvalue = opt_result.fun + 0j
result.eigenstate = self._get_eigenstate(result.optimal_parameters)
logger.info(
"Optimization complete in %s seconds.\nFound opt_params %s in %s evals",
eval_time,
result.optimal_point,
self._eval_count,
)
# TODO delete as soon as get_optimal_vector etc are removed
self._ret = result
if aux_operators is not None:
aux_values = self._eval_aux_ops(opt_result.x, aux_operators, expectation=expectation)
result.aux_operator_eigenvalues = aux_values
return result
def get_energy_evaluation(
self,
operator: OperatorBase,
return_expectation: bool = False,
) -> Callable[[np.ndarray], Union[float, List[float]]]:
num_parameters = self.ansatz.num_parameters
if num_parameters == 0:
raise RuntimeError("The ansatz must be parameterized, but has 0 free parameters.")
expect_op, expectation = self.construct_expectation(
self._ansatz_params, operator, return_expectation=True
)
def energy_evaluation(parameters):
parameter_sets = np.reshape(parameters, (-1, num_parameters))
# Create dict associating each parameter with the lists of parameterization values for it
param_bindings = dict(zip(self._ansatz_params, parameter_sets.transpose().tolist()))
start_time = time()
sampled_expect_op = self._circuit_sampler.convert(expect_op, params=param_bindings)
means = np.real(sampled_expect_op.eval())
if self._callback is not None:
variance = np.real(expectation.compute_variance(sampled_expect_op))
estimator_error = np.sqrt(variance / self.quantum_instance.run_config.shots)
for i, param_set in enumerate(parameter_sets):
self._eval_count += 1
self._callback(self._eval_count, param_set, means[i], estimator_error[i])
else:
self._eval_count += len(means)
end_time = time()
logger.info(
"Energy evaluation returned %s - %.5f (ms), eval count: %s",
means,
(end_time - start_time) * 1000,
self._eval_count,
)
return means if len(means) > 1 else means[0]
if return_expectation:
return energy_evaluation, expectation
return energy_evaluation
@deprecate_function(
"""
The VQE.get_optimal_cost method is deprecated as of Qiskit Terra 0.18.0
    and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.eigenvalue."""
)
def get_optimal_cost(self) -> float:
if self._ret.optimal_point is None:
raise AlgorithmError(
"Cannot return optimal cost before running the algorithm to find optimal params."
)
return self._ret.optimal_value
@deprecate_function(
"""
The VQE.get_optimal_circuit method is deprecated as of Qiskit Terra
    0.18.0 and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.ansatz.bind_parameters(VQEResult.optimal_point)."""
)
def get_optimal_circuit(self) -> QuantumCircuit:
if self._ret.optimal_point is None:
raise AlgorithmError(
"Cannot find optimal circuit before running the algorithm to find optimal params."
)
return self.ansatz.assign_parameters(self._ret.optimal_parameters)
@deprecate_function(
"""
The VQE.get_optimal_vector method is deprecated as of Qiskit Terra 0.18.0
    and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.eigenvector."""
)
def get_optimal_vector(self) -> Union[List[float], Dict[str, int]]:
if self._ret.optimal_parameters is None:
raise AlgorithmError(
"Cannot find optimal circuit before running the algorithm to find optimal vector."
)
return self._get_eigenstate(self._ret.optimal_parameters)
def _get_eigenstate(self, optimal_parameters) -> Union[List[float], Dict[str, int]]:
optimal_circuit = self.ansatz.bind_parameters(optimal_parameters)
state_fn = self._circuit_sampler.convert(StateFn(optimal_circuit)).eval()
if self.quantum_instance.is_statevector:
state = state_fn.primitive.data # VectorStateFn -> Statevector -> np.array
else:
state = state_fn.to_dict_fn().primitive # SparseVectorStateFn -> DictStateFn -> dict
return state
@property
@deprecate_function(
"""
The VQE.optimal_params property is deprecated as of Qiskit Terra 0.18.0
    and will be removed no sooner than 3 months after the release date.
This information is part of the returned result object and can be
queried as VQEResult.optimal_point."""
)
def optimal_params(self) -> np.ndarray:
if self._ret.optimal_point is None:
raise AlgorithmError("Cannot find optimal params before running the algorithm.")
return self._ret.optimal_point
class VQEResult(VariationalResult, MinimumEigensolverResult):
def __init__(self) -> None:
super().__init__()
self._cost_function_evals = None
@property
def cost_function_evals(self) -> Optional[int]:
return self._cost_function_evals
@cost_function_evals.setter
def cost_function_evals(self, value: int) -> None:
self._cost_function_evals = value
@property
def eigenstate(self) -> Optional[np.ndarray]:
return self._eigenstate
@eigenstate.setter
def eigenstate(self, value: np.ndarray) -> None:
self._eigenstate = value
def _validate_initial_point(point, ansatz):
expected_size = ansatz.num_parameters
# try getting the initial point from the ansatz
if point is None and hasattr(ansatz, "preferred_init_points"):
point = ansatz.preferred_init_points
# if the point is None choose a random initial point
if point is None:
# get bounds if ansatz has them set, otherwise use [-2pi, 2pi] for each parameter
bounds = getattr(ansatz, "parameter_bounds", None)
if not bounds:
bounds = [(-2 * np.pi, 2 * np.pi)] * expected_size
# replace all Nones by [-2pi, 2pi]
lower_bounds = []
upper_bounds = []
for lower, upper in bounds:
lower_bounds.append(lower if lower is not None else -2 * np.pi)
upper_bounds.append(upper if upper is not None else 2 * np.pi)
# sample from within bounds
point = algorithm_globals.random.uniform(lower_bounds, upper_bounds)
elif len(point) != expected_size:
raise ValueError(
f"The dimension of the initial point ({len(point)}) does not match the "
f"number of parameters in the circuit ({expected_size})."
)
return point
def _validate_bounds(ansatz):
if hasattr(ansatz, "parameter_bounds") and ansatz.parameter_bounds is not None:
bounds = ansatz.parameter_bounds
if len(bounds) != ansatz.num_parameters:
raise ValueError(
f"The number of bounds ({len(bounds)}) does not match the number of "
f"parameters in the circuit ({ansatz.num_parameters})."
)
else:
bounds = [(None, None)] * ansatz.num_parameters
return bounds
| true | true |
f7fbef0f15e368ac2e21ea80661d64ecef173c7f | 1,599 | py | Python | cvxpy/problems/param_prob.py | rpradal/cvxpy | 222ae345d2df67040346267f55369d8433d2e50f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cvxpy/problems/param_prob.py | rpradal/cvxpy | 222ae345d2df67040346267f55369d8433d2e50f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | cvxpy/problems/param_prob.py | rpradal/cvxpy | 222ae345d2df67040346267f55369d8433d2e50f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | """
Copyright 2013 Steven Diamond, 2017 Akshay Agrawal
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import abc
class ParamProb(abc.ABC):
    """An abstract base class for parameterized problems.
    Parameterized problems are produced during the first canonicalization
    and allow canonicalization to be short-circuited for future solves.
    """
@abc.abstractproperty
def is_mixed_integer(self) -> bool:
"""Is the problem mixed-integer?"""
raise NotImplementedError()
    @abc.abstractmethod
def apply_parameters(self, id_to_param_value=None, zero_offset=False,
keep_zeros=False):
"""Returns A, b after applying parameters (and reshaping).
Args:
id_to_param_value: (optional) dict mapping parameter ids to values
zero_offset: (optional) if True, zero out the constant offset in the
parameter vector
keep_zeros: (optional) if True, store explicit zeros in A where
parameters are affected
"""
raise NotImplementedError()
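A minimal concrete subclass illustrates the contract. The `ToyParamProb` class below is hypothetical (not part of cvxpy), and its local `ParamProb` stand-in just restates the abstract interface above so the sketch is self-contained:

```python
import abc

class ParamProb(abc.ABC):
    # Local stand-in for the abstract base defined above.
    @property
    @abc.abstractmethod
    def is_mixed_integer(self):
        ...

    @abc.abstractmethod
    def apply_parameters(self, id_to_param_value=None, zero_offset=False,
                         keep_zeros=False):
        ...

class ToyParamProb(ParamProb):
    # Hypothetical: stores A, b once (the "canonicalized" data) and
    # re-applies parameter values on each solve without re-canonicalizing.
    def __init__(self, A, b):
        self._A, self._b = A, b

    @property
    def is_mixed_integer(self):
        return False

    def apply_parameters(self, id_to_param_value=None, zero_offset=False,
                         keep_zeros=False):
        values = id_to_param_value or {}
        offset = 0.0 if zero_offset else sum(values.values())
        return self._A, [bi + offset for bi in self._b]

prob = ToyParamProb([[1.0, 0.0]], [2.0])
A, b = prob.apply_parameters({0: 0.5})
```

Repeated calls to `apply_parameters` with new parameter values reuse the stored problem data, which is the short-circuiting behaviour the base class documents.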
| 35.533333 | 78 | 0.702314 | import abc
class ParamProb(abc.ABC):
@abc.abstractproperty
def is_mixed_integer(self) -> bool:
raise NotImplementedError()
    @abc.abstractmethod
def apply_parameters(self, id_to_param_value=None, zero_offset=False,
keep_zeros=False):
raise NotImplementedError()
| true | true |
f7fbf093307f47a893fa7863d53d2b6cc57e8a86 | 2,627 | py | Python | girder_larger_image/create_tiff.py | cj-abcs/larger_image | 6f7af6382c2b4ca7379bdb11c0d130d04e9524af | [
"Apache-2.0"
] | 1 | 2020-12-17T18:22:08.000Z | 2020-12-17T18:22:08.000Z | girder_larger_image/create_tiff.py | abcsFrederick/larger_image | 6f7af6382c2b4ca7379bdb11c0d130d04e9524af | [
"Apache-2.0"
] | 1 | 2019-09-11T17:52:12.000Z | 2019-09-11T17:52:12.000Z | girder_larger_image/create_tiff.py | cj-abcs/larger_image | 6f7af6382c2b4ca7379bdb11c0d130d04e9524af | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
###############################################################################
# Girder, large_image plugin framework and tests adapted from Kitware Inc.
# source and documentation by the Imaging and Visualization Group, Advanced
# Biomedical Computational Science, Frederick National Laboratory for Cancer
# Research.
#
# Copyright Kitware Inc.
#
# Licensed under the Apache License, Version 2.0 ( the "License" );
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import os
import subprocess
def create_tiff(in_path, compression, quality, tile_size, out_path):
convert_command = (
'vips',
# Additional vips options can be added to aid debugging. For instance,
# '--vips-concurrency', '1',
# '--vips-progress',
# can show how vips is processing a file.
'tiffsave',
in_path,
out_path,
'--compression', compression,
'--Q', str(quality),
'--tile',
'--tile-width', str(tile_size),
'--tile-height', str(tile_size),
'--pyramid',
'--bigtiff'
)
try:
import six.moves
print('Command: %s' % (
' '.join([six.moves.shlex_quote(arg) for arg in convert_command])))
except ImportError:
pass
proc = subprocess.Popen(convert_command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out, err = proc.communicate()
if out.strip():
print('stdout: ' + out)
if err.strip():
print('stderr: ' + err)
if proc.returncode:
raise Exception('VIPS command failed (rc=%d): %s' % (
proc.returncode, ' '.join(convert_command)))
try:
# Define Girder Worker globals for the style checker
_tempdir = _tempdir # noqa
in_path = in_path # noqa
compression = compression # noqa
quality = quality # noqa
tile_size = tile_size # noqa
out_filename = out_filename # noqa
out_path = os.path.join(_tempdir, out_filename)
create_tiff(in_path, compression, quality, tile_size, out_path)
except NameError:
pass
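The `create_tiff` function above shells out to the `vips tiffsave` CLI to build a tiled, pyramidal BigTIFF. The argument layout can be checked without a vips binary by factoring the command construction out; the `build_vips_command` helper below is hypothetical, mirroring the tuple the function passes to `subprocess.Popen`:

```python
def build_vips_command(in_path, compression, quality, tile_size, out_path):
    # Same argument layout create_tiff uses: tiled pyramid, BigTIFF output.
    return (
        'vips', 'tiffsave', in_path, out_path,
        '--compression', compression,
        '--Q', str(quality),
        '--tile',
        '--tile-width', str(tile_size),
        '--tile-height', str(tile_size),
        '--pyramid',
        '--bigtiff',
    )

cmd = build_vips_command('slide.svs', 'jpeg', 90, 256, 'slide.tiff')
```

Separating command construction from execution also makes the failure path (`rc != 0`) easier to unit-test, since the exact invocation can be asserted on directly.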
| 32.8375 | 79 | 0.606776 | true | true | |
f7fbf0ff1f7bdc600f9d34c1be2f349301a80f22 | 1,016 | py | Python | py/py_0083_path_sum_four_ways.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0083_path_sum_four_ways.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | py/py_0083_path_sum_four_ways.py | lcsm29/project-euler | fab794ece5aa7a11fc7c2177f26250f40a5b1447 | [
"MIT"
] | null | null | null | # Solution of;
# Project Euler Problem 83: Path sum: four ways
# https://projecteuler.net/problem=83
#
# NOTE: This problem is a significantly more challenging version of Problem 81.
# In the 5 by 5 matrix below, the minimal path sum from the top left to the
# bottom right, by moving left, right, up, and down,
# is marked with asterisks (*) and is equal to 2297.
#
# 131(*) 673(.) 234(*) 103(*) 018(*)
# 201(*) 096(*) 342(*) 965(.) 150(*)
# 630(.) 803(.) 746(.) 422(*) 111(*)
# 537(.) 699(.) 497(.) 121(*) 956(.)
# 805(.) 732(.) 524(.) 037(*) 331(*)
#
# Find the minimal path sum from the top left to the bottom right by moving
# left, right, up, and down in matrix.txt,
# https://projecteuler.net/project/resources/p083_matrix.txt
# a 31K text file containing an 80 by 80 matrix.
#
# by lcsm29 http://github.com/lcsm29/project-euler
import timed
def dummy(n):
pass
if __name__ == '__main__':
n = 1000
i = 10000
prob_id = 83
timed.caller(dummy, n, i, prob_id)
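The `dummy` stub above leaves the solution unimplemented. Because movement is allowed in all four directions, the grid is a weighted graph rather than a DAG, so a shortest-path search is the standard approach. A sketch using Dijkstra's algorithm on the 5 by 5 example matrix from the problem statement, which should reproduce the stated minimum of 2297:

```python
import heapq

def min_path_sum(matrix):
    # Dijkstra over grid cells; edges move up/down/left/right and cost
    # the value of the cell being entered.
    rows, cols = len(matrix), len(matrix[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    dist[0][0] = matrix[0][0]
    heap = [(matrix[0][0], 0, 0)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + matrix[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist[rows - 1][cols - 1]

example = [
    [131, 673, 234, 103, 18],
    [201, 96, 342, 965, 150],
    [630, 803, 746, 422, 111],
    [537, 699, 497, 121, 956],
    [805, 732, 524, 37, 331],
]
result = min_path_sum(example)  # follows the asterisk path shown above
```

For the full 80 by 80 matrix the same function applies unchanged; only the matrix parsing from `p083_matrix.txt` would need to be added.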
| 29.882353 | 79 | 0.633858 |
import timed
def dummy(n):
pass
if __name__ == '__main__':
n = 1000
i = 10000
prob_id = 83
timed.caller(dummy, n, i, prob_id)
| true | true |
f7fbf1dd797b49d4d282a5cd229eb70fb2a5a9f8 | 2,843 | py | Python | drop-boxes/register-wf-individualizedproteome/register-individualizedproteome-dropbox.py | qbicsoftware/etl-scripts | e1cea11b5f55fb218e7d4c8d49bdd3c5fe6c62c6 | [
"MIT"
] | 2 | 2018-04-20T15:48:02.000Z | 2021-11-30T17:39:28.000Z | drop-boxes/register-wf-individualizedproteome/register-individualizedproteome-dropbox.py | qbicsoftware/etl-scripts | e1cea11b5f55fb218e7d4c8d49bdd3c5fe6c62c6 | [
"MIT"
] | 41 | 2017-07-19T11:17:26.000Z | 2021-09-28T12:10:49.000Z | drop-boxes/register-wf-individualizedproteome/register-individualizedproteome-dropbox.py | qbicsoftware/etl-scripts | e1cea11b5f55fb218e7d4c8d49bdd3c5fe6c62c6 | [
"MIT"
] | 2 | 2017-04-27T10:32:33.000Z | 2018-02-20T09:26:12.000Z | '''
Note:
print statements go to: ~openbis/servers/datastore_server/log/startup_log.txt
'''
import sys
sys.path.append('/home-link/qeana10/bin/')
import re
import os
import time
import datetime
import shutil
import subprocess
import ch.systemsx.cisd.etlserver.registrator.api.v2
from java.io import File
from org.apache.commons.io import FileUtils
from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchCriteria
from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchSubCriteria
# Data import and registration
# expected:
# *Q[Project Code]^4[Sample No.]^3[Sample Type][Checksum]*.*
ePattern = re.compile('Q\w{4}E[0-9]+')
pPattern = re.compile('Q\w{4}')
def process(transaction):
context = transaction.getRegistrationContext().getPersistentMap()
# Get the incoming path of the transaction
incomingPath = transaction.getIncoming().getAbsolutePath()
key = context.get("RETRY_COUNT")
if (key == None):
key = 1
# Get the name of the incoming file
name = transaction.getIncoming().getName()
nameSplit = name.split("-")
space = nameSplit[0]
project = pPattern.findall(nameSplit[1])[0]
experiment_id = ePattern.findall(nameSplit[2])[0]
sampleCode = nameSplit[-1]
sample_id = "/"+space+"/"+sampleCode
if not experiment_id:
        print "The identifier matching the pattern Q\w{4}E[0-9]+ was not found in the fileName "+name
sample = transaction.getSampleForUpdate(sample_id)
parents = sample.getParentSampleIdentifiers()
parentcodes = []
for parent in parents:
parentcodes.append(parent.split("/")[-1])
parentInfos = "_".join(parentcodes)
experiment = transaction.getExperimentForUpdate("/"+space+"/"+project+"/"+experiment_id)
experiment.setPropertyValue("Q_WF_STATUS", "FINISHED")
endpoint = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
experiment.setPropertyValue("Q_WF_FINISHED_AT", endpoint)
sample.setExperiment(experiment)
#Register files
dataSetRes = transaction.createNewDataSet('Q_WF_MS_INDIVIDUALIZED_PROTEOME_RESULTS')
dataSetRes.setMeasuredData(False)
dataSetLogs = transaction.createNewDataSet('Q_WF_MS_INDIVIDUALIZED_PROTEOME_LOGS')
dataSetLogs.setMeasuredData(False)
dataSetRes.setSample(sample)
dataSetLogs.setSample(sample)
resultsname = incomingPath+"/"+parentInfos+"_workflow_results"
logname = incomingPath+"/"+parentInfos+"_workflow_logs"
os.rename(incomingPath+"/logs", logname)
os.rename(incomingPath+"/result", resultsname)
transaction.moveFile(resultsname, dataSetRes)
transaction.moveFile(logname, dataSetLogs)
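The `ePattern` and `pPattern` regexes defined at module level drive the filename parsing at the top of `process`. Their behaviour on a representative incoming name (the name below is made up, following the SPACE-PROJECT-EXPERIMENT-...-SAMPLECODE layout the script expects):

```python
import re

ePattern = re.compile(r'Q\w{4}E[0-9]+')
pPattern = re.compile(r'Q\w{4}')

# Hypothetical incoming directory name.
name = 'MYSPACE-QABCD-QABCDE12-extra-QABCD007AB'
nameSplit = name.split('-')
space = nameSplit[0]
project = pPattern.findall(nameSplit[1])[0]
experiment_id = ePattern.findall(nameSplit[2])[0]
sampleCode = nameSplit[-1]
```

Note that `pPattern` would also match the first five characters of the experiment token, which is why the script indexes into fixed positions of the split name rather than scanning the whole string.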
| 35.5375 | 110 | 0.689061 | '''
Note:
print statements go to: ~openbis/servers/datastore_server/log/startup_log.txt
'''
import sys
sys.path.append('/home-link/qeana10/bin/')
import re
import os
import time
import datetime
import shutil
import subprocess
import ch.systemsx.cisd.etlserver.registrator.api.v2
from java.io import File
from org.apache.commons.io import FileUtils
from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchCriteria
from ch.systemsx.cisd.openbis.generic.shared.api.v1.dto import SearchSubCriteria
ePattern = re.compile('Q\w{4}E[0-9]+')
pPattern = re.compile('Q\w{4}')
def process(transaction):
context = transaction.getRegistrationContext().getPersistentMap()
incomingPath = transaction.getIncoming().getAbsolutePath()
key = context.get("RETRY_COUNT")
if (key == None):
key = 1
name = transaction.getIncoming().getName()
nameSplit = name.split("-")
space = nameSplit[0]
project = pPattern.findall(nameSplit[1])[0]
experiment_id = ePattern.findall(nameSplit[2])[0]
sampleCode = nameSplit[-1]
sample_id = "/"+space+"/"+sampleCode
if not experiment_id:
        print "The identifier matching the pattern Q\w{4}E[0-9]+ was not found in the fileName "+name
sample = transaction.getSampleForUpdate(sample_id)
parents = sample.getParentSampleIdentifiers()
parentcodes = []
for parent in parents:
parentcodes.append(parent.split("/")[-1])
parentInfos = "_".join(parentcodes)
experiment = transaction.getExperimentForUpdate("/"+space+"/"+project+"/"+experiment_id)
experiment.setPropertyValue("Q_WF_STATUS", "FINISHED")
endpoint = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
experiment.setPropertyValue("Q_WF_FINISHED_AT", endpoint)
sample.setExperiment(experiment)
dataSetRes = transaction.createNewDataSet('Q_WF_MS_INDIVIDUALIZED_PROTEOME_RESULTS')
dataSetRes.setMeasuredData(False)
dataSetLogs = transaction.createNewDataSet('Q_WF_MS_INDIVIDUALIZED_PROTEOME_LOGS')
dataSetLogs.setMeasuredData(False)
dataSetRes.setSample(sample)
dataSetLogs.setSample(sample)
resultsname = incomingPath+"/"+parentInfos+"_workflow_results"
logname = incomingPath+"/"+parentInfos+"_workflow_logs"
os.rename(incomingPath+"/logs", logname)
os.rename(incomingPath+"/result", resultsname)
transaction.moveFile(resultsname, dataSetRes)
transaction.moveFile(logname, dataSetLogs)
| false | true |
f7fbf298d5b64ba1cefa46a4a5d2823c2fa8cf17 | 417 | py | Python | configs/_base_/models/hrnet/hrnet-w18.py | YuxinZou/mmclassification | 2037260ea6c98a3b115e97727e1151a1c2c32f7a | [
"Apache-2.0"
] | 1 | 2022-03-15T07:36:04.000Z | 2022-03-15T07:36:04.000Z | configs/_base_/models/hrnet/hrnet-w18.py | YuxinZou/mmclassification | 2037260ea6c98a3b115e97727e1151a1c2c32f7a | [
"Apache-2.0"
] | null | null | null | configs/_base_/models/hrnet/hrnet-w18.py | YuxinZou/mmclassification | 2037260ea6c98a3b115e97727e1151a1c2c32f7a | [
"Apache-2.0"
] | 1 | 2022-03-25T08:40:07.000Z | 2022-03-25T08:40:07.000Z | # model settings
model = dict(
type='ImageClassifier',
backbone=dict(type='HRNet', arch='w18'),
neck=[
dict(type='HRFuseScales', in_channels=(18, 36, 72, 144)),
dict(type='GlobalAveragePooling'),
],
head=dict(
type='LinearClsHead',
in_channels=2048,
num_classes=1000,
loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
topk=(1, 5),
))
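A hedged sketch (plain Python, no mmcls import) of why the neck receives those channel counts: 'w18' names the width of the highest-resolution HRNet branch, and each coarser branch doubles it, which is why HRFuseScales gets in_channels=(18, 36, 72, 144) before the fused 2048-channel vector reaches LinearClsHead.

```python
def hrnet_branch_widths(base_width, num_branches=4):
    # Each lower-resolution HRNet branch doubles the channel width of the
    # previous one, starting from the base width in the arch name (e.g. w18).
    return tuple(base_width * 2 ** i for i in range(num_branches))
```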
| 26.0625 | 65 | 0.582734 |
| true | true |
f7fbf2d9bef66db0c629235703060bf818cc2326 | 8,142 | py | Python | rules/check_section_names.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 84 | 2018-01-07T19:43:45.000Z | 2021-12-23T14:17:44.000Z | rules/check_section_names.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 7 | 2018-04-02T20:24:28.000Z | 2019-06-07T21:48:04.000Z | rules/check_section_names.py | deptofdefense/SalSA | 7ef771398e3d59597bade95d0a23540de0842e2a | [
"MIT"
] | 18 | 2017-12-26T19:44:46.000Z | 2021-09-13T12:21:02.000Z | """
check section names for common packers/uncommon names
"""
# list of common packer section names
packers = {
'.aspack': 'Aspack packer',
'.adata': 'Aspack packer/Armadillo packer',
'ASPack': 'Aspack packer',
'.ASPack': 'ASPAck Protector',
'.boom': 'The Boomerang List Builder (config+exe xored with a single byte key 0x77)',
'.ccg': 'CCG Packer (Chinese Packer)',
'.charmve': 'Added by the PIN tool',
'BitArts': 'Crunch 2.0 Packer',
'DAStub': 'DAStub Dragon Armor protector',
'!EPack': 'Epack packer',
'FSG!': 'FSG packer (not a section name, but a good identifier)',
'.gentee': 'Gentee installer',
'kkrunchy': 'kkrunchy Packer',
'.mackt': 'ImpRec-created section',
'.MaskPE': 'MaskPE Packer',
'MEW': 'MEW packer',
'.MPRESS1': 'Mpress Packer',
'.MPRESS2': 'Mpress Packer',
'.neolite': 'Neolite Packer',
'.neolit': 'Neolite Packer',
'.nsp1': 'NsPack packer',
'.nsp0': 'NsPack packer',
'.nsp2': 'NsPack packer',
'nsp1': 'NsPack packer',
'nsp0': 'NsPack packer',
'nsp2': 'NsPack packer',
'.packed': 'RLPack Packer (first section)',
'pebundle': 'PEBundle Packer',
'PEBundle': 'PEBundle Packer',
'PEC2TO': 'PECompact packer',
'PECompact2': 'PECompact packer (not a section name, but a good identifier)',
'PEC2': 'PECompact packer',
'pec1': 'PECompact packer',
'pec2': 'PECompact packer',
'PEC2MO': 'PECompact packer',
'PELOCKnt': 'PELock Protector',
'.perplex': 'Perplex PE-Protector',
'PESHiELD': 'PEShield Packer',
'.petite': 'Petite Packer',
'.pinclie': 'Added by the PIN tool',
'ProCrypt': 'ProCrypt Packer',
'.RLPack': 'RLPack Packer (second section)',
'.rmnet': 'Ramnit virus marker',
'RCryptor': 'RPCrypt Packer',
'.RPCrypt': 'RPCrypt Packer',
'.seau': 'SeauSFX Packer',
'.sforce3': 'StarForce Protection',
'.spack': 'Simple Pack (by bagie)',
'.svkp': 'SVKP packer',
'Themida': 'Themida Packer',
'.Themida': 'Themida Packer',
  '.taz': 'Some version of PESpin',
'.tsuarch': 'TSULoader',
'.tsustub': 'TSULoader',
'PEPACK!!': 'Pepack',
'.Upack': 'Upack packer',
'.ByDwing': 'Upack Packer',
'UPX0': 'UPX packer',
'UPX1': 'UPX packer',
'UPX2': 'UPX packer',
'UPX!': 'UPX packer',
'.UPX0': 'UPX Packer',
'.UPX1': 'UPX Packer',
'.UPX2': 'UPX Packer',
'.vmp0': 'VMProtect packer',
'.vmp1': 'VMProtect packer',
'.vmp2': 'VMProtect packer',
'VProtect': 'Vprotect Packer',
'.winapi': 'Added by API Override tool',
'WinLicen': 'WinLicense (Themida) Protector',
'_winzip_': 'WinZip Self-Extractor',
'.WWPACK': 'WWPACK Packer',
'.yP': 'Y0da Protector',
'.y0da': 'Y0da Protector',
}
# list of common section names
common = {
'.00cfg': 'Control Flow Guard (CFG) section (added by newer versions of Visual Studio)',
'.arch': 'Alpha-architecture section',
'.autoload_text': 'cygwin/gcc; the Cygwin DLL uses a section to avoid copying certain data on fork.',
'.bindat': 'Binary data (also used by one of the downware installers based on LUA)',
'.bootdat': 'section that can be found inside Visual Studio files; contains palette entries',
'.bss': 'Uninitialized Data Section',
'.BSS': 'Uninitialized Data Section',
'.buildid': 'gcc/cygwin; Contains debug information (if overlaps with debug directory)',
'.CLR_UEF': '.CLR Unhandled Exception Handler section',
'.code': 'Code Section',
'.cormeta': '.CLR Metadata Section',
'.complua': 'Binary data, most likely compiled LUA (also used by one of the downware installers based on LUA)',
'.CRT': 'Initialized Data Section (C RunTime)',
'.cygwin_dll_common': 'cygwin section containing flags representing Cygwin capabilities',
'.data': 'Data Section',
'.DATA': 'Data Section',
'.data1': 'Data Section',
'.data2': 'Data Section',
'.data3': 'Data Section',
'.debug': 'Debug info Section',
'.debug$F': 'Debug info Section (Visual C++ version <7.0)',
'.debug$P': 'Debug info Section (Visual C++ debug information: precompiled information',
'.debug$S': 'Debug info Section (Visual C++ debug information: symbolic information)',
'.debug$T': 'Debug info Section (Visual C++ debug information: type information)',
'.drectve ': 'directive section (temporary, linker removes it after processing it; should not appear in a final PE image)',
'.didat': 'Delay Import Section',
'.didata': 'Delay Import Section',
'.edata': 'Export Data Section',
'.eh_fram': 'gcc/cygwin; Exception Handler Frame section',
'.export': 'Alternative Export Data Section',
'.fasm': 'FASM flat Section',
'.flat': 'FASM flat Section',
'.gfids': 'section added by new Visual Studio (14.0); purpose unknown',
'.giats': 'section added by new Visual Studio (14.0); purpose unknown',
'.gljmp': 'section added by new Visual Studio (14.0); purpose unknown',
'.glue_7t': 'ARMv7 core glue functions (thumb mode)',
'.glue_7': 'ARMv7 core glue functions (32-bit ARM mode)',
'.idata': 'Initialized Data Section (Borland Compiler)',
'.idlsym': 'IDL Attributes (registered SEH)',
'.impdata': 'Alternative Import data section',
'.itext': 'Code Section (Borland Compiler)',
'.ndata': 'Nullsoft Installer section',
'.orpc': 'Code section inside rpcrt4.dll',
'.pdata': 'Exception Handling Functions Section (PDATA records)',
'.rdata': 'Read-only initialized Data Section (Microsoft and Borland Compiler)',
'.reloc': 'Relocations Section',
'.rodata': 'Read-only Data Section',
'.rsrc': 'Resource section',
'.sbss': 'GP-relative Uninitialized Data Section',
'.script': 'Section containing script',
'.shared': 'Shared section',
'.sdata': 'GP-relative Initialized Data Section',
'.srdata': 'GP-relative Read-only Data Section',
'.stab': 'Created by Haskell compiler (GHC)',
'.stabstr': 'Created by Haskell compiler (GHC)',
'.sxdata': 'Registered Exception Handlers Section',
'.text': 'Code Section',
'.text0': 'Alternative Code Section',
'.text1': 'Alternative Code Section',
'.text2': 'Alternative Code Section',
'.text3': 'Alternative Code Section',
'.textbss': 'Section used by incremental linking',
'.tls': 'Thread Local Storage Section',
'.tls$': 'Thread Local Storage Section',
'.udata': 'Uninitialized Data Section',
'.vsdata': 'GP-relative Initialized Data',
'.xdata': 'Exception Information Section',
'.wixburn': 'Wix section',
'BSS': 'Uninitialized Data Section (Borland Compiler)',
'CODE': 'Code Section (Borland Compiler)',
'DATA': 'Data Section (Borland Compiler)',
'DGROUP': 'Legacy data group section',
'edata': 'Export Data Section',
'idata': 'Initialized Data Section (C RunTime)',
'INIT': 'INIT section (drivers)',
'minATL': 'Section that can be found inside some ARM PE files; purpose unknown',
'PAGE': 'PAGE section (drivers)',
'rdata': 'Read-only Data Section',
'sdata': 'Initialized Data Section',
'shared': 'Shared section',
'Shared': 'Shared section',
'testdata': 'section containing test data (can be found inside Visual Studio files)',
'text': 'Alternative Code Section',
}
def run(peobject):
alerts = []
unknown = []
known_bad = []
known_common = []
# loop through each section
d = peobject.dict()
if 'SECTIONS' in d:
for s in d['SECTIONS']:
# check for known/unknown sections
if (s['Name'] in packers):
known_bad.append(''.join([s['Name'], ' : ', packers[s['Name']]]))
elif (s['Name'] in common):
known_common.append(''.join([s['Name'], ' : ', common[s['Name']]]))
else:
unknown.append(''.join([s['Name'], ' : ???']))
# parse each section
if known_common:
alerts.append({
'title': 'Known Common Section Names',
'description': 'These can indicate executable functionality.',
'data': known_common,
'code': '',
})
if known_bad:
alerts.append({
'title': 'Known Bad Section Names',
'description': 'These can indicate packer usage.',
'data': known_bad,
'code': '',
})
if unknown:
alerts.append({
'title': 'Unknown Section Names',
'description': 'These can indicate unknown packers.',
'data': unknown,
'code': '',
})
return alerts
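The lookup at the heart of run() is a three-way dict classification; it can be sketched in isolation with tiny stand-in tables (the miniature tables and function name here are illustrative, not part of the module):

```python
# Miniature stand-ins for the full `packers` and `common` tables above.
PACKERS = {'UPX0': 'UPX packer'}
COMMON = {'.text': 'Code Section'}

def classify(section_names):
    # Bucket PE section names the same way run() does: known packer names,
    # known common names, everything else unknown.
    buckets = {'known_bad': [], 'known_common': [], 'unknown': []}
    for name in section_names:
        if name in PACKERS:
            buckets['known_bad'].append(name)
        elif name in COMMON:
            buckets['known_common'].append(name)
        else:
            buckets['unknown'].append(name)
    return buckets
```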
| 38.40566 | 125 | 0.652174 |
| true | true |
f7fbf3e0539de519d4dd1b4bb1d7b37b825d2dd1 | 16,363 | py | Python | python/ccxt/braziliex.py | KaceyBolman/ccxt | d34a0651b209ac77453f05c4ce31883f0cd2d6b8 | [
"MIT"
] | 1 | 2018-07-06T08:14:13.000Z | 2018-07-06T08:14:13.000Z | python/ccxt/braziliex.py | LeapM/ccxt | aeb07fc86991ce38a218c1199c6ae3bb5cfdf228 | [
"MIT"
] | null | null | null | python/ccxt/braziliex.py | LeapM/ccxt | aeb07fc86991ce38a218c1199c6ae3bb5cfdf228 | [
"MIT"
] | 2 | 2019-03-14T15:17:46.000Z | 2019-09-08T19:26:04.000Z | # -*- coding: utf-8 -*-
# PLEASE DO NOT EDIT THIS FILE, IT IS GENERATED AND WILL BE OVERWRITTEN:
# https://github.com/ccxt/ccxt/blob/master/CONTRIBUTING.md#how-to-contribute-code
from ccxt.base.exchange import Exchange
import hashlib
import math
from ccxt.base.errors import ExchangeError
from ccxt.base.errors import AuthenticationError
from ccxt.base.errors import InvalidOrder
class braziliex (Exchange):
def describe(self):
return self.deep_extend(super(braziliex, self).describe(), {
'id': 'braziliex',
'name': 'Braziliex',
'countries': ['BR'],
'rateLimit': 1000,
'has': {
'fetchCurrencies': True,
'fetchTickers': True,
'fetchOpenOrders': True,
'fetchMyTrades': True,
'fetchDepositAddress': True,
},
'urls': {
'logo': 'https://user-images.githubusercontent.com/1294454/34703593-c4498674-f504-11e7-8d14-ff8e44fb78c1.jpg',
'api': 'https://braziliex.com/api/v1',
'www': 'https://braziliex.com/',
'doc': 'https://braziliex.com/exchange/api.php',
'fees': 'https://braziliex.com/exchange/fees.php',
},
'api': {
'public': {
'get': [
'currencies',
'ticker',
'ticker/{market}',
'orderbook/{market}',
'tradehistory/{market}',
],
},
'private': {
'post': [
'balance',
'complete_balance',
'open_orders',
'trade_history',
'deposit_address',
'sell',
'buy',
'cancel_order',
],
},
},
'commonCurrencies': {
'EPC': 'Epacoin',
},
'fees': {
'trading': {
'maker': 0.005,
'taker': 0.005,
},
},
'precision': {
'amount': 8,
'price': 8,
},
})
def fetch_currencies(self, params={}):
currencies = self.publicGetCurrencies(params)
ids = list(currencies.keys())
result = {}
for i in range(0, len(ids)):
id = ids[i]
currency = currencies[id]
precision = self.safe_integer(currency, 'decimal')
uppercase = id.upper()
code = self.common_currency_code(uppercase)
active = self.safe_integer(currency, 'active') == 1
maintenance = self.safe_integer(currency, 'under_maintenance')
if maintenance != 0:
active = False
canWithdraw = self.safe_integer(currency, 'is_withdrawal_active') == 1
canDeposit = self.safe_integer(currency, 'is_deposit_active') == 1
if not canWithdraw or not canDeposit:
active = False
result[code] = {
'id': id,
'code': code,
'name': currency['name'],
'active': active,
'precision': precision,
'funding': {
'withdraw': {
'active': canWithdraw,
'fee': currency['txWithdrawalFee'],
},
'deposit': {
'active': canDeposit,
'fee': currency['txDepositFee'],
},
},
'limits': {
'amount': {
'min': currency['minAmountTrade'],
'max': math.pow(10, precision),
},
'price': {
'min': math.pow(10, -precision),
'max': math.pow(10, precision),
},
'cost': {
'min': None,
'max': None,
},
'withdraw': {
'min': currency['MinWithdrawal'],
'max': math.pow(10, precision),
},
'deposit': {
'min': currency['minDeposit'],
'max': None,
},
},
'info': currency,
}
return result
def fetch_markets(self):
markets = self.publicGetTicker()
ids = list(markets.keys())
result = []
for i in range(0, len(ids)):
id = ids[i]
market = markets[id]
baseId, quoteId = id.split('_')
base = baseId.upper()
quote = quoteId.upper()
base = self.common_currency_code(base)
quote = self.common_currency_code(quote)
symbol = base + '/' + quote
active = self.safe_integer(market, 'active') == 1
precision = {
'amount': 8,
'price': 8,
}
lot = math.pow(10, -precision['amount'])
result.append({
'id': id,
'symbol': symbol.upper(),
'base': base,
'quote': quote,
'baseId': baseId,
'quoteId': quoteId,
'active': active,
'lot': lot,
'precision': precision,
'limits': {
'amount': {
'min': lot,
'max': math.pow(10, precision['amount']),
},
'price': {
'min': math.pow(10, -precision['price']),
'max': math.pow(10, precision['price']),
},
'cost': {
'min': None,
'max': None,
},
},
'info': market,
})
return result
def parse_ticker(self, ticker, market=None):
symbol = market['symbol']
timestamp = ticker['date']
ticker = ticker['ticker']
last = self.safe_float(ticker, 'last')
return {
'symbol': symbol,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'high': self.safe_float(ticker, 'highestBid24'),
'low': self.safe_float(ticker, 'lowestAsk24'),
'bid': self.safe_float(ticker, 'highestBid'),
'bidVolume': None,
'ask': self.safe_float(ticker, 'lowestAsk'),
'askVolume': None,
'vwap': None,
'open': None,
'close': last,
'last': last,
'previousClose': None,
'change': self.safe_float(ticker, 'percentChange'),
'percentage': None,
'average': None,
'baseVolume': self.safe_float(ticker, 'baseVolume24'),
'quoteVolume': self.safe_float(ticker, 'quoteVolume24'),
'info': ticker,
}
def fetch_ticker(self, symbol, params={}):
self.load_markets()
market = self.market(symbol)
ticker = self.publicGetTickerMarket(self.extend({
'market': market['id'],
}, params))
ticker = {
'date': self.milliseconds(),
'ticker': ticker,
}
return self.parse_ticker(ticker, market)
def fetch_tickers(self, symbols=None, params={}):
self.load_markets()
tickers = self.publicGetTicker(params)
result = {}
timestamp = self.milliseconds()
ids = list(tickers.keys())
for i in range(0, len(ids)):
id = ids[i]
market = self.markets_by_id[id]
symbol = market['symbol']
ticker = {
'date': timestamp,
'ticker': tickers[id],
}
result[symbol] = self.parse_ticker(ticker, market)
return result
def fetch_order_book(self, symbol, limit=None, params={}):
self.load_markets()
orderbook = self.publicGetOrderbookMarket(self.extend({
'market': self.market_id(symbol),
}, params))
return self.parse_order_book(orderbook, None, 'bids', 'asks', 'price', 'amount')
def parse_trade(self, trade, market=None):
timestamp = None
if 'date_exec' in trade:
timestamp = self.parse8601(trade['date_exec'])
else:
timestamp = self.parse8601(trade['date'])
price = self.safe_float(trade, 'price')
amount = self.safe_float(trade, 'amount')
symbol = market['symbol']
cost = self.safe_float(trade, 'total')
orderId = self.safe_string(trade, 'order_number')
return {
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'symbol': symbol,
'id': self.safe_string(trade, '_id'),
'order': orderId,
'type': 'limit',
'side': trade['type'],
'price': price,
'amount': amount,
'cost': cost,
'fee': None,
'info': trade,
}
def fetch_trades(self, symbol, since=None, limit=None, params={}):
self.load_markets()
market = self.market(symbol)
trades = self.publicGetTradehistoryMarket(self.extend({
'market': market['id'],
}, params))
return self.parse_trades(trades, market, since, limit)
def fetch_balance(self, params={}):
self.load_markets()
balances = self.privatePostCompleteBalance(params)
result = {'info': balances}
currencies = list(balances.keys())
for i in range(0, len(currencies)):
id = currencies[i]
balance = balances[id]
currency = self.common_currency_code(id)
account = {
'free': float(balance['available']),
'used': 0.0,
'total': float(balance['total']),
}
account['used'] = account['total'] - account['free']
result[currency] = account
return self.parse_balance(result)
def parse_order(self, order, market=None):
symbol = None
if market is None:
marketId = self.safe_string(order, 'market')
if marketId:
if marketId in self.markets_by_id:
market = self.markets_by_id[marketId]
if market:
symbol = market['symbol']
timestamp = self.safe_value(order, 'timestamp')
if not timestamp:
timestamp = self.parse8601(order['date'])
price = self.safe_float(order, 'price')
cost = self.safe_float(order, 'total', 0.0)
amount = self.safe_float(order, 'amount')
filledPercentage = self.safe_float(order, 'progress')
filled = amount * filledPercentage
remaining = self.amount_to_precision(symbol, amount - filled)
info = order
if 'info' in info:
info = order['info']
return {
'id': order['order_number'],
'datetime': self.iso8601(timestamp),
'timestamp': timestamp,
'lastTradeTimestamp': None,
'status': 'open',
'symbol': symbol,
'type': 'limit',
'side': order['type'],
'price': price,
'cost': cost,
'amount': amount,
'filled': filled,
'remaining': remaining,
'trades': None,
'fee': self.safe_value(order, 'fee'),
'info': info,
}
def create_order(self, symbol, type, side, amount, price=None, params={}):
self.load_markets()
market = self.market(symbol)
method = 'privatePost' + self.capitalize(side)
response = getattr(self, method)(self.extend({
'market': market['id'],
# 'price': self.price_to_precision(symbol, price),
# 'amount': self.amount_to_precision(symbol, amount),
'price': price,
'amount': amount,
}, params))
success = self.safe_integer(response, 'success')
if success != 1:
raise InvalidOrder(self.id + ' ' + self.json(response))
parts = response['message'].split(' / ')
parts = parts[1:]
feeParts = parts[5].split(' ')
order = self.parse_order({
'timestamp': self.milliseconds(),
'order_number': response['order_number'],
'type': parts[0].lower(),
'market': parts[0].lower(),
'amount': parts[2].split(' ')[1],
'price': parts[3].split(' ')[1],
'total': parts[4].split(' ')[1],
'fee': {
'cost': float(feeParts[1]),
'currency': feeParts[2],
},
'progress': '0.0',
'info': response,
}, market)
id = order['id']
self.orders[id] = order
return order
def cancel_order(self, id, symbol=None, params={}):
self.load_markets()
market = self.market(symbol)
result = self.privatePostCancelOrder(self.extend({
'order_number': id,
'market': market['id'],
}, params))
return result
def fetch_open_orders(self, symbol=None, since=None, limit=None, params={}):
self.load_markets()
market = self.market(symbol)
orders = self.privatePostOpenOrders(self.extend({
'market': market['id'],
}, params))
return self.parse_orders(orders['order_open'], market, since, limit)
def fetch_my_trades(self, symbol=None, since=None, limit=None, params={}):
self.load_markets()
market = self.market(symbol)
trades = self.privatePostTradeHistory(self.extend({
'market': market['id'],
}, params))
return self.parse_trades(trades['trade_history'], market, since, limit)
def fetch_deposit_address(self, code, params={}):
self.load_markets()
currency = self.currency(code)
response = self.privatePostDepositAddress(self.extend({
'currency': currency['id'],
}, params))
address = self.safe_string(response, 'deposit_address')
self.check_address(address)
tag = self.safe_string(response, 'payment_id')
return {
'currency': code,
'address': address,
'tag': tag,
'info': response,
}
def sign(self, path, api='public', method='GET', params={}, headers=None, body=None):
url = self.urls['api'] + '/' + api
query = self.omit(params, self.extract_params(path))
if api == 'public':
url += '/' + self.implode_params(path, params)
if query:
url += '?' + self.urlencode(query)
else:
self.check_required_credentials()
query = self.extend({
'command': path,
'nonce': self.nonce(),
}, query)
body = self.urlencode(query)
signature = self.hmac(self.encode(body), self.encode(self.secret), hashlib.sha512)
headers = {
'Content-type': 'application/x-www-form-urlencoded',
'Key': self.apiKey,
'Sign': self.decode(signature),
}
return {'url': url, 'method': method, 'body': body, 'headers': headers}
def request(self, path, api='public', method='GET', params={}, headers=None, body=None):
response = self.fetch2(path, api, method, params, headers, body)
if 'success' in response:
success = self.safe_integer(response, 'success')
if success == 0:
message = self.safe_string(response, 'message')
if message == 'Invalid APIKey':
raise AuthenticationError(message)
raise ExchangeError(message)
return response
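The private-request signing in sign() above — a urlencoded body containing the command and a nonce, with an HMAC-SHA512 hex digest of that body sent in the 'Sign' header — can be sketched standalone; the function and parameter names here are illustrative, not part of ccxt:

```python
import hashlib
import hmac
from urllib.parse import urlencode

def sign_request(secret, command, nonce, extra=None):
    # Build the urlencoded POST body, then sign it with HMAC-SHA512
    # keyed by the API secret, returning (body, hex signature).
    params = {'command': command, 'nonce': nonce}
    if extra:
        params.update(extra)
    body = urlencode(params)
    signature = hmac.new(secret.encode(), body.encode(), hashlib.sha512).hexdigest()
    return body, signature
```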
| 36.688341 | 126 | 0.478885 |
for i in range(0, len(currencies)):
id = currencies[i]
balance = balances[id]
currency = self.common_currency_code(id)
account = {
'free': float(balance['available']),
'used': 0.0,
'total': float(balance['total']),
}
account['used'] = account['total'] - account['free']
result[currency] = account
return self.parse_balance(result)
def parse_order(self, order, market=None):
symbol = None
if market is None:
marketId = self.safe_string(order, 'market')
if marketId:
if marketId in self.markets_by_id:
market = self.markets_by_id[marketId]
if market:
symbol = market['symbol']
timestamp = self.safe_value(order, 'timestamp')
if not timestamp:
timestamp = self.parse8601(order['date'])
price = self.safe_float(order, 'price')
cost = self.safe_float(order, 'total', 0.0)
amount = self.safe_float(order, 'amount')
filledPercentage = self.safe_float(order, 'progress')
filled = amount * filledPercentage
remaining = self.amount_to_precision(symbol, amount - filled)
info = order
if 'info' in info:
info = order['info']
return {
'id': order['order_number'],
'datetime': self.iso8601(timestamp),
'timestamp': timestamp,
'lastTradeTimestamp': None,
'status': 'open',
'symbol': symbol,
'type': 'limit',
'side': order['type'],
'price': price,
'cost': cost,
'amount': amount,
'filled': filled,
'remaining': remaining,
'trades': None,
'fee': self.safe_value(order, 'fee'),
'info': info,
}
def create_order(self, symbol, type, side, amount, price=None, params={}):
self.load_markets()
market = self.market(symbol)
method = 'privatePost' + self.capitalize(side)
response = getattr(self, method)(self.extend({
'market': market['id'],
'price': price,
'amount': amount,
}, params))
success = self.safe_integer(response, 'success')
if success != 1:
raise InvalidOrder(self.id + ' ' + self.json(response))
parts = response['message'].split(' / ')
parts = parts[1:]
feeParts = parts[5].split(' ')
order = self.parse_order({
'timestamp': self.milliseconds(),
'order_number': response['order_number'],
'type': parts[0].lower(),
'market': parts[0].lower(),
'amount': parts[2].split(' ')[1],
'price': parts[3].split(' ')[1],
'total': parts[4].split(' ')[1],
'fee': {
'cost': float(feeParts[1]),
'currency': feeParts[2],
},
'progress': '0.0',
'info': response,
}, market)
id = order['id']
self.orders[id] = order
return order
def cancel_order(self, id, symbol=None, params={}):
self.load_markets()
market = self.market(symbol)
result = self.privatePostCancelOrder(self.extend({
'order_number': id,
'market': market['id'],
}, params))
return result
def fetch_open_orders(self, symbol=None, since=None, limit=None, params={}):
self.load_markets()
market = self.market(symbol)
orders = self.privatePostOpenOrders(self.extend({
'market': market['id'],
}, params))
return self.parse_orders(orders['order_open'], market, since, limit)
def fetch_my_trades(self, symbol=None, since=None, limit=None, params={}):
self.load_markets()
market = self.market(symbol)
trades = self.privatePostTradeHistory(self.extend({
'market': market['id'],
}, params))
return self.parse_trades(trades['trade_history'], market, since, limit)
def fetch_deposit_address(self, code, params={}):
self.load_markets()
currency = self.currency(code)
response = self.privatePostDepositAddress(self.extend({
'currency': currency['id'],
}, params))
address = self.safe_string(response, 'deposit_address')
self.check_address(address)
tag = self.safe_string(response, 'payment_id')
return {
'currency': code,
'address': address,
'tag': tag,
'info': response,
}
def sign(self, path, api='public', method='GET', params={}, headers=None, body=None):
url = self.urls['api'] + '/' + api
query = self.omit(params, self.extract_params(path))
if api == 'public':
url += '/' + self.implode_params(path, params)
if query:
url += '?' + self.urlencode(query)
else:
self.check_required_credentials()
query = self.extend({
'command': path,
'nonce': self.nonce(),
}, query)
body = self.urlencode(query)
signature = self.hmac(self.encode(body), self.encode(self.secret), hashlib.sha512)
headers = {
'Content-type': 'application/x-www-form-urlencoded',
'Key': self.apiKey,
'Sign': self.decode(signature),
}
return {'url': url, 'method': method, 'body': body, 'headers': headers}
def request(self, path, api='public', method='GET', params={}, headers=None, body=None):
response = self.fetch2(path, api, method, params, headers, body)
if 'success' in response:
success = self.safe_integer(response, 'success')
if success == 0:
message = self.safe_string(response, 'message')
if message == 'Invalid APIKey':
raise AuthenticationError(message)
raise ExchangeError(message)
return response
# File: src/Algorithms/PPO.py (repo: fightZero/fightZero, license: MIT)
# reference to: https://github.com/nikhilbarhate99/PPO-PyTorch
import torch
import torch.nn as nn
from torch.optim import Adam, RMSprop
from torch.distributions import Categorical
from torch.utils.tensorboard.writer import SummaryWriter
from torch.utils.data import BatchSampler, RandomSampler
from typing import Tuple
# this class implements an actor critic model with linear networks
class ActorCritic(nn.Module):
def __init__(self, state_dimension, action_dimension):
super().__init__()
# save dimensions
self.d_state = state_dimension
self.d_action = action_dimension
# create actor network
self.actor = nn.Sequential(
nn.Linear(self.d_state, 1024),
nn.LeakyReLU(),
nn.Linear(1024, 512),
nn.LeakyReLU(),
nn.Linear(512, 256),
nn.LeakyReLU(),
nn.Linear(256, 128),
nn.LeakyReLU(),
nn.Linear(128, self.d_action),
nn.Softmax(dim=1)
)
# create critic network
self.critic = nn.Sequential(
nn.Linear(self.d_state, 1024),
nn.LeakyReLU(),
nn.Linear(1024, 512),
nn.LeakyReLU(),
nn.Linear(512, 256),
nn.LeakyReLU(),
nn.Linear(256, 128),
nn.LeakyReLU(),
nn.Linear(128, 64),
nn.LeakyReLU(),
nn.Linear(64, 1)
)
def forward(self, x):
"""
Empty forward function
"""
return x
def action(self, state) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Get action and log probs
"""
# get probabilities of actions
probs = self.actor(state)
dist = Categorical(probs=probs)
# sample an action
action = dist.sample()
logprob = dist.log_prob(action)
return action.detach(), logprob.detach()
def evaluate(self, state, action) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Evaluates an action
"""
# get probabilities of actions
probs = self.actor(state)
dist = Categorical(probs=probs)
# get distribution entropy and log probs of chosen action
entropy = dist.entropy()
logprob = dist.log_prob(action).diagonal().view(action.shape)
# get critic value
critics = self.critic(state)
return entropy, logprob, critics
# this structure stores buffer info for PPO
class PPOBuffer:
def __init__(self):
self.actions = []
self.states = []
self.logprobs = []
self.rewards = []
self.is_terminals = []
def reset(self):
del self.actions[:]
del self.states[:]
del self.logprobs[:]
del self.rewards[:]
del self.is_terminals[:]
self.actions = []
self.states = []
self.logprobs = []
self.rewards = []
self.is_terminals = []
def isEmpty(self):
return len(self.actions) <= 0
# this class implements PPO model
# with actor and critic updated individually
class PPO(object):
def __init__(self,
state_dimension, action_dimension,
lr_actor, lr_critic,
num_epochs, discount,
eps_clip, batch_size,
max_grad_norm, train
):
self.discount = discount
self.num_epochs = num_epochs
self.eps_clip = eps_clip
self.lr_actor = lr_actor
self.lr_critic = lr_critic
self.batch_size = batch_size
self.max_grad_norm = max_grad_norm
self.training = train
self.iter_count = 0
# create buffer
self.buffer = PPOBuffer()
# select running environment for train
self.device = "cuda" if torch.cuda.is_available() else "cpu"
# create actor critic model
self.AC = ActorCritic(state_dimension, action_dimension).to(self.device)
# set optimizer
self.optim_actor = Adam(self.AC.actor.parameters(), lr_actor)
self.optim_critic = Adam(self.AC.critic.parameters(), lr_critic)
# set saved model
self.AC_saved = ActorCritic(state_dimension, action_dimension).to(self.device)
self.AC_saved.load_state_dict(self.AC.state_dict())
self.AC_saved.eval()
# set loss function
self.loss = nn.MSELoss()
def action(self, state):
"""
Choose next action
"""
with torch.no_grad():
# get new action from actor
state = torch.FloatTensor(state).to(self.device)
action, logprob = self.AC_saved.action(state)
# store into buffer
if self.training:
self.buffer.states.append(state)
self.buffer.actions.append(action)
self.buffer.logprobs.append(logprob)
return action.cpu().item()
def save(self, filename):
"""
Save current network to file path
"""
torch.save((
self.AC_saved.state_dict(),
self.optim_actor.state_dict(),
self.optim_critic.state_dict(),
), filename)
def load(self, filename):
"""
Load network from file path
"""
states, opt1, opt2 = torch.load(filename, map_location=lambda storage, _: storage)
self.AC.load_state_dict(states)
self.AC_saved.load_state_dict(states)
self.optim_actor.load_state_dict(opt1)
self.optim_critic.load_state_dict(opt2)
def train(self, writer : SummaryWriter):
"""
Update policy
"""
if not self.training: return
if self.buffer.isEmpty(): return
rewards = []
reward_disc = 0.0
for reward, is_terminal in zip(reversed(self.buffer.rewards), reversed(self.buffer.is_terminals)):
# if is terminal state, set reward to 0
if is_terminal:
reward_disc = 0.0
reward_disc = reward + (self.discount * reward_disc)
rewards.insert(0, reward_disc)
length = len(rewards)
# normalize the rewards
target_values = torch.FloatTensor(rewards)
target_values = (target_values - target_values.mean()) / (target_values.std() + 1e-8)
target_values = target_values.view(-1, 1)
# convert list to tensor
old_states = torch.squeeze(torch.stack(self.buffer.states[:length], dim=0)).detach()
old_actions = torch.squeeze(torch.stack(self.buffer.actions[:length], dim=0)).view(-1, 1).detach()
old_logprobs = torch.squeeze(torch.stack(self.buffer.logprobs[:length], dim=0)).view(-1, 1).detach()
# start training
self.AC.train()
torch.cuda.empty_cache()
for _ in range(self.num_epochs):
for indices in BatchSampler(RandomSampler(range(length)), batch_size=self.batch_size, drop_last=False):
target_values_gpu = target_values[indices].to(self.device)
old_states_gpu = old_states[indices].to(self.device)
old_actions_gpu = old_actions[indices].to(self.device)
old_logprobs_gpu = old_logprobs[indices].to(self.device)
# get critics
_, logprob, state_values = self.AC.evaluate(old_states_gpu, old_actions_gpu)
# state_values = torch.squeeze(critics)
# compute advantages
advantages = (target_values_gpu - state_values).detach()
# find the ratio (pi_theta / pi_theta__old)
ratios = torch.exp((logprob - old_logprobs_gpu))
# find Surrogate Loss (Clipped Surrogate Objective)
surr1 = ratios * advantages
surr2 = torch.clamp(ratios, 1 - self.eps_clip, 1 + self.eps_clip) * advantages
# compute actor loss
loss_actor = -torch.min(surr1, surr2).mean()
# optimize actor
self.optim_actor.zero_grad()
loss_actor.backward()
torch.nn.utils.clip_grad.clip_grad_norm_(self.AC.actor.parameters(), max_norm=self.max_grad_norm)
self.optim_actor.step()
# compute critic loss
loss_critic = self.loss(state_values, target_values_gpu).mean()
self.optim_critic.zero_grad()
loss_critic.backward()
torch.nn.utils.clip_grad.clip_grad_norm_(self.AC.critic.parameters(), max_norm=self.max_grad_norm)
self.optim_critic.step()
# log in tensorboard
writer.add_scalar("PPO/Loss Actor", loss_actor.cpu().detach().item(), self.iter_count)
writer.add_scalar("PPO/Loss Critic", loss_critic.cpu().detach().item(), self.iter_count)
writer.add_scalar("PPO/Advantage", advantages.cpu().detach().mean().item(), self.iter_count)
self.iter_count += 1
self.AC.eval()
# save weights after training
self.AC_saved.load_state_dict(self.AC.state_dict())
self.AC_saved.eval()
# clear buffer
self.buffer.reset()
def update(self, reward, is_terminal):
"""
Update buffer
"""
if not self.training: return
self.buffer.rewards.append(reward)
        self.buffer.is_terminals.append(is_terminal)
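The reward handling in `PPO.train()` above walks the buffer backwards, resetting the running sum at terminal states so returns do not leak across episodes. A minimal standalone sketch of that discounted-return pass — the function name and numbers are illustrative, not from the repo:

```python
# Standalone sketch of the discounted-return pass in PPO.train():
# iterate rewards in reverse, reset the running sum at episode
# boundaries, and prepend so returns line up with the buffer order.
def discounted_returns(rewards, terminals, discount=0.99):
    out = []
    running = 0.0
    for reward, done in zip(reversed(rewards), reversed(terminals)):
        if done:
            running = 0.0
        running = reward + discount * running
        out.insert(0, running)
    return out

returns = discounted_returns([1.0, 1.0, 1.0], [False, False, True], discount=0.5)
# returns == [1.75, 1.5, 1.0]
```

The training code then normalizes these returns (zero mean, unit variance) before using them as critic targets.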
# File: processing_provider/Rast_fillRasterwithPatches.py (repo: geodourados/lftools, license: MIT)
"""
fillRasterwithPatches.py
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Leandro França'
__date__ = '2020-09-01'
__copyright__ = '(C) 2020, Leandro França'
from PyQt5.QtCore import QCoreApplication, QVariant
from qgis.core import (QgsProcessing,
QgsFeatureSink,
QgsWkbTypes,
QgsFields,
QgsField,
QgsFeature,
QgsPointXY,
QgsGeometry,
QgsProcessingException,
QgsProcessingAlgorithm,
QgsProcessingParameterString,
QgsProcessingParameterField,
QgsProcessingParameterBoolean,
QgsProcessingParameterCrs,
QgsProcessingParameterEnum,
QgsFeatureRequest,
QgsExpression,
QgsProcessingParameterFeatureSource,
QgsProcessingParameterFeatureSink,
QgsProcessingParameterFileDestination,
QgsProcessingParameterMultipleLayers,
QgsProcessingParameterRasterLayer,
QgsProcessingParameterRasterDestination,
QgsApplication,
QgsProject,
QgsRasterLayer,
QgsCoordinateTransform,
QgsCoordinateReferenceSystem)
from osgeo import osr, gdal_array, gdal #https://gdal.org/python/
from math import floor, ceil
import numpy as np
from lftools.geocapt.dip import Interpolar
from lftools.geocapt.imgs import Imgs
import os
from qgis.PyQt.QtGui import QIcon
class FillRasterwithPatches(QgsProcessingAlgorithm):
LOC = QgsApplication.locale()[:2]
def translate(self, string):
return QCoreApplication.translate('Processing', string)
def tr(self, *string):
        # Translate to Portuguese: arg[0] - English (translate), arg[1] - Portuguese
if self.LOC == 'pt':
if len(string) == 2:
return string[1]
else:
return self.translate(string[0])
else:
return self.translate(string[0])
def createInstance(self):
return FillRasterwithPatches()
def name(self):
return 'fillrasterwithpatches'
def displayName(self):
return self.tr('Fill with patches', 'Remendar vazios de raster')
def group(self):
return self.tr('Raster')
def groupId(self):
return 'raster'
def tags(self):
return self.tr('fill,hole,raster,cloud,remove,drone,patch').split(',')
def icon(self):
return QIcon(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'images/raster.png'))
txt_en = 'Fills Raster null pixels (no data) with data obtained from other smaller raster layers (Patches).'
txt_pt = 'Preenche vazios de Raster (pixels nulos) com dados obtidos de outras camadas raster menores (Remendos).'
figure = 'images/tutorial/raster_fill_holes.jpg'
def shortHelpString(self):
social_BW = Imgs().social_BW
footer = '''<div align="center">
<img src="'''+ os.path.join(os.path.dirname(os.path.dirname(__file__)), self.figure) +'''">
</div>
<div align="right">
<p align="right">
<b>'''+self.tr('Author: Leandro Franca', 'Autor: Leandro França')+'''</b>
</p>'''+ social_BW + '''</div>
</div>'''
return self.tr(self.txt_en, self.txt_pt) + footer
RasterIN ='RasterIN'
PATCHES = 'PATCHES'
RESAMPLING = 'RESAMPLING'
RasterOUT = 'RasterOUT'
OPEN = 'OPEN'
def initAlgorithm(self, config=None):
# INPUT
self.addParameter(
QgsProcessingParameterRasterLayer(
self.RasterIN,
self.tr('Input Raster', 'Raster de Entrada'),
[QgsProcessing.TypeRaster]
)
)
self.addParameter(
QgsProcessingParameterMultipleLayers(
self.PATCHES,
self.tr('Patch Layers', 'Rasters de Remendo'),
layerType = QgsProcessing.TypeRaster
)
)
interp = [self.tr('Nearest neighbor', 'Vizinho mais próximo'),
self.tr('Bilinear'),
self.tr('Bicubic', 'Bicúbica')]
self.addParameter(
QgsProcessingParameterEnum(
self.RESAMPLING,
self.tr('Interpolation', 'Interpolação'),
options = interp,
defaultValue= 0
)
)
# OUTPUT
self.addParameter(
QgsProcessingParameterFileDestination(
self.RasterOUT,
self.tr('Patched Image', 'Imagem Remendada'),
fileFilter = 'GeoTIFF (*.tif)'
)
)
self.addParameter(
QgsProcessingParameterBoolean(
self.OPEN,
self.tr('Load patched Image', 'Carregar Imagem Remendada'),
defaultValue= True
)
)
def processAlgorithm(self, parameters, context, feedback):
RasterIN = self.parameterAsRasterLayer(
parameters,
self.RasterIN,
context
)
if RasterIN is None:
raise QgsProcessingException(self.invalidSourceError(parameters, self.RasterIN))
RasterIN = RasterIN.dataProvider().dataSourceUri()
PatchesLayers = self.parameterAsLayerList(
parameters,
self.PATCHES,
context
)
reamostragem = self.parameterAsEnum(
parameters,
self.RESAMPLING,
context
)
reamostragem = ['nearest','bilinear','bicubic'][reamostragem]
RGB_Output = self.parameterAsFileOutput(
parameters,
self.RasterOUT,
context
)
Carregar = self.parameterAsBool(
parameters,
self.OPEN,
context
)
limiar = 240
        # Open the raster layer as an array
image = gdal.Open(RasterIN)
prj=image.GetProjection()
CRS=osr.SpatialReference(wkt=prj)
geotransform = image.GetGeoTransform()
        n_bands = image.RasterCount # Number of bands
cols = image.RasterXSize # Number of columns
rows = image.RasterYSize # Number of rows
        # Image origin and resolution
ulx, xres, xskew, uly, yskew, yres = geotransform
origem = (ulx, uly)
resol_X = abs(xres)
resol_Y = abs(yres)
if n_bands ==1:
feedback.pushInfo(self.tr('Opening raster band...', 'Abrindo banda do raster...'))
band1 = image.GetRasterBand(1).ReadAsArray()
if n_bands >=3:
feedback.pushInfo(self.tr('Opening Band R...', 'Abrindo Banda R...'))
band1 = image.GetRasterBand(1).ReadAsArray()
feedback.pushInfo(self.tr('Opening Band G...', 'Abrindo Banda G...'))
band2 = image.GetRasterBand(2).ReadAsArray()
feedback.pushInfo(self.tr('Opening Band B...', 'Abrindo Banda B...'))
band3 = image.GetRasterBand(3).ReadAsArray()
            # Transparency (alpha band)
if n_bands == 4:
feedback.pushInfo(self.tr('Opening Band Alpha...', 'Abrindo Banda Alfa...'))
band4 = image.GetRasterBand(4).ReadAsArray()
Pixel_Nulo = image.GetRasterBand(1).GetNoDataValue()
if Pixel_Nulo == None:
Pixel_Nulo = 0
        image=None # Close the image
        # Number of pixels to process
TAM = 0
for Remendo in PatchesLayers:
Rem_Path = Remendo.dataProvider().dataSourceUri()
Rem = gdal.Open(Rem_Path)
# Rem_cols = Rem.RasterXSize # Number of columns
Rem_rows = Rem.RasterYSize # Number of rows
TAM += Rem_rows
        # Patches
total = 100.0 / TAM
cont = 0
for Remendo in PatchesLayers:
feedback.pushInfo((self.tr('Processing Layer: {}', 'Processando Camada: {}')).format(Remendo))
Rem_Path = Remendo.dataProvider().dataSourceUri()
Rem = gdal.Open(Rem_Path)
ulx, xres, xskew, uly, yskew, yres = Rem.GetGeoTransform()
Rem_origem = (ulx, uly)
Rem_resol_X = abs(xres)
Rem_resol_Y = abs(yres)
Rem_cols = Rem.RasterXSize # Number of columns
Rem_rows = Rem.RasterYSize # Number of rows
lrx = ulx + (Rem_cols * xres)
lry = uly + (Rem_rows * yres)
bbox = [ulx, lrx, lry, uly]
Rem_nulo = Rem.GetRasterBand(1).GetNoDataValue()
if Rem_nulo == None:
Rem_nulo = 0
Rem_band1 = Rem.GetRasterBand(1).ReadAsArray()
if n_bands >1:
Rem_band2 = Rem.GetRasterBand(2).ReadAsArray()
Rem_band3 = Rem.GetRasterBand(3).ReadAsArray()
            # Scan limits
row_ini = int(round((origem[1]-uly)/resol_Y - 0.5))
row_fim = int(round((origem[1]-lry)/resol_Y - 0.5))
col_ini = int(round((ulx - origem[0])/resol_X - 0.5))
col_fim = int(round((lrx - origem[0])/resol_X - 0.5))
            # Scan the raster
if n_bands == 4:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band4[lin][col]
                        if px_value == 0 or band1[lin][col] > limiar: # Check threshold
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band2[lin][col] = Interpolar(X, Y, Rem_band2, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band3[lin][col] = Interpolar(X, Y, Rem_band3, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
elif n_bands == 3:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band1[lin][col]
                        if px_value == Pixel_Nulo or band1[lin][col] > limiar: # Check threshold
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band2[lin][col] = Interpolar(X, Y, Rem_band2, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band3[lin][col] = Interpolar(X, Y, Rem_band3, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
elif n_bands == 1:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band1[lin][col]
                        if px_value == Pixel_Nulo or band1[lin][col] > limiar: # Check threshold
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
            Rem = None # Close the image
        # Create the output RGB image
feedback.pushInfo(self.tr('Saving Raster...', 'Salvando Raster...'))
GDT = gdal_array.NumericTypeCodeToGDALTypeCode(band1.dtype)
if n_bands ==1:
RASTER = gdal.GetDriverByName('GTiff').Create(RGB_Output, cols, rows, 1, GDT)
else:
RASTER = gdal.GetDriverByName('GTiff').Create(RGB_Output, cols, rows, 3, GDT)
RASTER.SetGeoTransform(geotransform) # specify coords
RASTER.SetProjection(CRS.ExportToWkt()) # export coords to file
if n_bands ==1:
feedback.pushInfo(self.tr('Writing rater band...', 'Escrevendo banda do raster...'))
banda = RASTER.GetRasterBand(1)
banda.WriteArray(band1)
banda.SetNoDataValue(Pixel_Nulo)
else:
feedback.pushInfo(self.tr('Writing Band R...', 'Escrevendo Banda R...'))
bandaR = RASTER.GetRasterBand(1)
bandaR.WriteArray(band1)
feedback.pushInfo(self.tr('Writing Band G...', 'Escrevendo Banda G...'))
bandaG = RASTER.GetRasterBand(2)
bandaG.WriteArray(band2)
feedback.pushInfo(self.tr('Writing Band B...', 'Escrevendo Banda B...'))
bandaB = RASTER.GetRasterBand(3)
bandaB.WriteArray(band3)
feedback.pushInfo(self.tr('Saving raster...', 'Salvando raster...'))
        RASTER.FlushCache() # Write to disk
        RASTER = None # Save and close
feedback.pushInfo(self.tr('Operation completed successfully!', 'Operação finalizada com sucesso!'))
feedback.pushInfo(self.tr('Leandro Franca - Cartographic Engineer', 'Leandro França - Eng Cart'))
self.CAMINHO = RGB_Output
self.CARREGAR = Carregar
return {self.RasterOUT: RGB_Output}
    # Loading of the output file
CAMINHO = ''
CARREGAR = True
def postProcessAlgorithm(self, context, feedback):
if self.CARREGAR:
rlayer = QgsRasterLayer(self.CAMINHO, self.tr('Patched Image', 'Imagem Remendada'))
QgsProject.instance().addMapLayer(rlayer)
return {}
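The patch loop in `processAlgorithm()` above repeatedly converts between raster pixel indices and map coordinates through the geotransform, sampling pixel centers half a cell inside the origin. A standalone sketch of that mapping for a north-up raster with no rotation terms — the function names and coordinate values are illustrative:

```python
# Pixel/map coordinate conversion used throughout processAlgorithm():
# pixel centers sit half a cell inside the upper-left origin, and Y
# decreases as rows increase (north-up raster, no rotation terms).
def pixel_center_to_map(row, col, origin, res_x, res_y):
    x = origin[0] + res_x * (col + 0.5)
    y = origin[1] - res_y * (row + 0.5)
    return x, y

def map_to_pixel(x, y, origin, res_x, res_y):
    col = int((x - origin[0]) / res_x)
    row = int((origin[1] - y) / res_y)
    return row, col

x, y = pixel_center_to_map(0, 0, origin=(100.0, 200.0), res_x=1.0, res_y=1.0)
# (x, y) == (100.5, 199.5)
```

With this convention, a hole pixel's map coordinates can be handed to the interpolation routine against any patch raster, regardless of the patch's own origin and resolution.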
| 41.51532 | 135 | 0.541063 |
__author__ = 'Leandro França'
__date__ = '2020-09-01'
__copyright__ = '(C) 2020, Leandro França'
from PyQt5.QtCore import QCoreApplication, QVariant
from qgis.core import (QgsProcessing,
QgsFeatureSink,
QgsWkbTypes,
QgsFields,
QgsField,
QgsFeature,
QgsPointXY,
QgsGeometry,
QgsProcessingException,
QgsProcessingAlgorithm,
QgsProcessingParameterString,
QgsProcessingParameterField,
QgsProcessingParameterBoolean,
QgsProcessingParameterCrs,
QgsProcessingParameterEnum,
QgsFeatureRequest,
QgsExpression,
QgsProcessingParameterFeatureSource,
QgsProcessingParameterFeatureSink,
QgsProcessingParameterFileDestination,
QgsProcessingParameterMultipleLayers,
QgsProcessingParameterRasterLayer,
QgsProcessingParameterRasterDestination,
QgsApplication,
QgsProject,
QgsRasterLayer,
QgsCoordinateTransform,
QgsCoordinateReferenceSystem)
from osgeo import osr, gdal_array, gdal
from math import floor, ceil
import numpy as np
from lftools.geocapt.dip import Interpolar
from lftools.geocapt.imgs import Imgs
import os
from qgis.PyQt.QtGui import QIcon
class FillRasterwithPatches(QgsProcessingAlgorithm):
LOC = QgsApplication.locale()[:2]
def translate(self, string):
return QCoreApplication.translate('Processing', string)
def tr(self, *string):
if self.LOC == 'pt':
if len(string) == 2:
return string[1]
else:
return self.translate(string[0])
else:
return self.translate(string[0])
def createInstance(self):
return FillRasterwithPatches()
def name(self):
return 'fillrasterwithpatches'
def displayName(self):
return self.tr('Fill with patches', 'Remendar vazios de raster')
def group(self):
return self.tr('Raster')
def groupId(self):
return 'raster'
def tags(self):
return self.tr('fill,hole,raster,cloud,remove,drone,patch').split(',')
def icon(self):
return QIcon(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'images/raster.png'))
txt_en = 'Fills Raster null pixels (no data) with data obtained from other smaller raster layers (Patches).'
txt_pt = 'Preenche vazios de Raster (pixels nulos) com dados obtidos de outras camadas raster menores (Remendos).'
figure = 'images/tutorial/raster_fill_holes.jpg'
def shortHelpString(self):
social_BW = Imgs().social_BW
footer = '''<div align="center">
<img src="'''+ os.path.join(os.path.dirname(os.path.dirname(__file__)), self.figure) +'''">
</div>
<div align="right">
<p align="right">
<b>'''+self.tr('Author: Leandro Franca', 'Autor: Leandro França')+'''</b>
</p>'''+ social_BW + '''</div>
</div>'''
return self.tr(self.txt_en, self.txt_pt) + footer
RasterIN ='RasterIN'
PATCHES = 'PATCHES'
RESAMPLING = 'RESAMPLING'
RasterOUT = 'RasterOUT'
OPEN = 'OPEN'
def initAlgorithm(self, config=None):
self.addParameter(
QgsProcessingParameterRasterLayer(
self.RasterIN,
self.tr('Input Raster', 'Raster de Entrada'),
[QgsProcessing.TypeRaster]
)
)
self.addParameter(
QgsProcessingParameterMultipleLayers(
self.PATCHES,
self.tr('Patch Layers', 'Rasters de Remendo'),
layerType = QgsProcessing.TypeRaster
)
)
interp = [self.tr('Nearest neighbor', 'Vizinho mais próximo'),
self.tr('Bilinear'),
self.tr('Bicubic', 'Bicúbica')]
self.addParameter(
QgsProcessingParameterEnum(
self.RESAMPLING,
self.tr('Interpolation', 'Interpolação'),
options = interp,
defaultValue= 0
)
)
self.addParameter(
QgsProcessingParameterFileDestination(
self.RasterOUT,
self.tr('Patched Image', 'Imagem Remendada'),
fileFilter = 'GeoTIFF (*.tif)'
)
)
self.addParameter(
QgsProcessingParameterBoolean(
self.OPEN,
self.tr('Load patched Image', 'Carregar Imagem Remendada'),
defaultValue= True
)
)
def processAlgorithm(self, parameters, context, feedback):
RasterIN = self.parameterAsRasterLayer(
parameters,
self.RasterIN,
context
)
if RasterIN is None:
raise QgsProcessingException(self.invalidSourceError(parameters, self.RasterIN))
RasterIN = RasterIN.dataProvider().dataSourceUri()
PatchesLayers = self.parameterAsLayerList(
parameters,
self.PATCHES,
context
)
reamostragem = self.parameterAsEnum(
parameters,
self.RESAMPLING,
context
)
reamostragem = ['nearest','bilinear','bicubic'][reamostragem]
RGB_Output = self.parameterAsFileOutput(
parameters,
self.RasterOUT,
context
)
Carregar = self.parameterAsBool(
parameters,
self.OPEN,
context
)
limiar = 240
image = gdal.Open(RasterIN)
prj=image.GetProjection()
CRS=osr.SpatialReference(wkt=prj)
geotransform = image.GetGeoTransform()
n_bands = image.RasterCount
cols = image.RasterXSize
rows = image.RasterYSize
ulx, xres, xskew, uly, yskew, yres = geotransform
origem = (ulx, uly)
resol_X = abs(xres)
resol_Y = abs(yres)
if n_bands ==1:
feedback.pushInfo(self.tr('Opening raster band...', 'Abrindo banda do raster...'))
band1 = image.GetRasterBand(1).ReadAsArray()
if n_bands >=3:
feedback.pushInfo(self.tr('Opening Band R...', 'Abrindo Banda R...'))
band1 = image.GetRasterBand(1).ReadAsArray()
feedback.pushInfo(self.tr('Opening Band G...', 'Abrindo Banda G...'))
band2 = image.GetRasterBand(2).ReadAsArray()
feedback.pushInfo(self.tr('Opening Band B...', 'Abrindo Banda B...'))
band3 = image.GetRasterBand(3).ReadAsArray()
if n_bands == 4:
feedback.pushInfo(self.tr('Opening Band Alpha...', 'Abrindo Banda Alfa...'))
band4 = image.GetRasterBand(4).ReadAsArray()
Pixel_Nulo = image.GetRasterBand(1).GetNoDataValue()
if Pixel_Nulo == None:
Pixel_Nulo = 0
image=None
TAM = 0
for Remendo in PatchesLayers:
Rem_Path = Remendo.dataProvider().dataSourceUri()
Rem = gdal.Open(Rem_Path)
Rem_rows = Rem.RasterYSize
TAM += Rem_rows
total = 100.0 / TAM
cont = 0
for Remendo in PatchesLayers:
feedback.pushInfo((self.tr('Processing Layer: {}', 'Processando Camada: {}')).format(Remendo))
Rem_Path = Remendo.dataProvider().dataSourceUri()
Rem = gdal.Open(Rem_Path)
ulx, xres, xskew, uly, yskew, yres = Rem.GetGeoTransform()
Rem_origem = (ulx, uly)
Rem_resol_X = abs(xres)
Rem_resol_Y = abs(yres)
Rem_cols = Rem.RasterXSize
Rem_rows = Rem.RasterYSize
lrx = ulx + (Rem_cols * xres)
lry = uly + (Rem_rows * yres)
bbox = [ulx, lrx, lry, uly]
Rem_nulo = Rem.GetRasterBand(1).GetNoDataValue()
if Rem_nulo == None:
Rem_nulo = 0
Rem_band1 = Rem.GetRasterBand(1).ReadAsArray()
if n_bands >1:
Rem_band2 = Rem.GetRasterBand(2).ReadAsArray()
Rem_band3 = Rem.GetRasterBand(3).ReadAsArray()
row_ini = int(round((origem[1]-uly)/resol_Y - 0.5))
row_fim = int(round((origem[1]-lry)/resol_Y - 0.5))
col_ini = int(round((ulx - origem[0])/resol_X - 0.5))
col_fim = int(round((lrx - origem[0])/resol_X - 0.5))
if n_bands == 4:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band4[lin][col]
if px_value == 0 or band1[lin][col] > limiar:
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band2[lin][col] = Interpolar(X, Y, Rem_band2, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band3[lin][col] = Interpolar(X, Y, Rem_band3, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
elif n_bands == 3:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band1[lin][col]
if px_value == Pixel_Nulo or band1[lin][col] > limiar:
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band2[lin][col] = Interpolar(X, Y, Rem_band2, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
band3[lin][col] = Interpolar(X, Y, Rem_band3, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
elif n_bands == 1:
for lin in range(row_ini, row_fim):
for col in range(col_ini, col_fim):
px_value = band1[lin][col]
if px_value == Pixel_Nulo or band1[lin][col] > limiar:
X = origem[0] + resol_X*(col + 0.5)
Y = origem[1] - resol_Y*(lin + 0.5)
band1[lin][col] = Interpolar(X, Y, Rem_band1, Rem_origem, Rem_resol_X, Rem_resol_Y, reamostragem, Rem_nulo)
cont += 1
feedback.setProgress(int(cont * total))
if feedback.isCanceled():
break
Rem = None
feedback.pushInfo(self.tr('Saving Raster...', 'Salvando Raster...'))
GDT = gdal_array.NumericTypeCodeToGDALTypeCode(band1.dtype)
if n_bands ==1:
RASTER = gdal.GetDriverByName('GTiff').Create(RGB_Output, cols, rows, 1, GDT)
else:
RASTER = gdal.GetDriverByName('GTiff').Create(RGB_Output, cols, rows, 3, GDT)
RASTER.SetGeoTransform(geotransform)
RASTER.SetProjection(CRS.ExportToWkt())
if n_bands ==1:
feedback.pushInfo(self.tr('Writing raster band...', 'Escrevendo banda do raster...'))
banda = RASTER.GetRasterBand(1)
banda.WriteArray(band1)
banda.SetNoDataValue(Pixel_Nulo)
else:
feedback.pushInfo(self.tr('Writing Band R...', 'Escrevendo Banda R...'))
bandaR = RASTER.GetRasterBand(1)
bandaR.WriteArray(band1)
feedback.pushInfo(self.tr('Writing Band G...', 'Escrevendo Banda G...'))
bandaG = RASTER.GetRasterBand(2)
bandaG.WriteArray(band2)
feedback.pushInfo(self.tr('Writing Band B...', 'Escrevendo Banda B...'))
bandaB = RASTER.GetRasterBand(3)
bandaB.WriteArray(band3)
feedback.pushInfo(self.tr('Saving raster...', 'Salvando raster...'))
RASTER.FlushCache()
RASTER = None
feedback.pushInfo(self.tr('Operation completed successfully!', 'Operação finalizada com sucesso!'))
feedback.pushInfo(self.tr('Leandro Franca - Cartographic Engineer', 'Leandro França - Eng Cart'))
self.CAMINHO = RGB_Output
self.CARREGAR = Carregar
return {self.RasterOUT: RGB_Output}
CAMINHO = ''
CARREGAR = True
def postProcessAlgorithm(self, context, feedback):
if self.CARREGAR:
rlayer = QgsRasterLayer(self.CAMINHO, self.tr('Patched Image', 'Imagem Remendada'))
QgsProject.instance().addMapLayer(rlayer)
return {}
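As a side note on the window arithmetic used above (`row_ini`, `row_fim`, `col_ini`, `col_fim`), here is a minimal standalone sketch of how a patch's bounding box maps onto row/column indices of the parent raster. The geotransform values below are hypothetical, not taken from any real raster:

```python
def pixel_window(origin, resol_x, resol_y, ulx, uly, lrx, lry):
    # Same arithmetic as the plugin: half-pixel shift, then round to int.
    row_ini = int(round((origin[1] - uly) / resol_y - 0.5))
    row_fim = int(round((origin[1] - lry) / resol_y - 0.5))
    col_ini = int(round((ulx - origin[0]) / resol_x - 0.5))
    col_fim = int(round((lrx - origin[0]) / resol_x - 0.5))
    return row_ini, row_fim, col_ini, col_fim

# Hypothetical example: parent raster origin (0, 100), 1 m pixels;
# patch covering x in [10, 20], y in [80, 90].
window = pixel_window((0.0, 100.0), 1.0, 1.0, 10.0, 90.0, 20.0, 80.0)
print(window)  # (10, 20, 10, 20)
```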
f7fbf45acb2ddec84cefcc472333d7218aec52d7 | 4,101 | py | Python | examples/sprite_bullets.py | jgrigonis/arcade | 9b624da7da52e3909f6e82c552446b90249041f1 | ["MIT"] | stars: 1 | issues: null | forks: null
"""
Sprite Bullets
Simple program to show basic sprite usage.
Artwork from http://kenney.nl
"""
import random
import arcade
SPRITE_SCALING = 0.5
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
BULLET_SPEED = 5
class Bullet(arcade.Sprite):
def update(self):
self.center_y += BULLET_SPEED
class MyAppWindow(arcade.Window):
""" Main application class. """
def __init__(self):
super().__init__(SCREEN_WIDTH, SCREEN_HEIGHT, "Sprites and Bullets Demo")
""" Set up the game and initialize the variables. """
# Sprite lists
self.all_sprites_list = arcade.SpriteList()
self.coin_list = arcade.SpriteList()
self.bullet_list = arcade.SpriteList()
# Set up the player
self.score = 0
self.player_sprite = arcade.Sprite("images/character.png",
SPRITE_SCALING)
self.player_sprite.center_x = 50
self.player_sprite.center_y = 70
self.all_sprites_list.append(self.player_sprite)
# Load sounds
self.gun_sound = arcade.sound.load_sound("sounds/laser1.ogg")
self.hit_sound = arcade.sound.load_sound("sounds/phaseJump1.ogg")
for i in range(50):
# Create the coin instance
coin = arcade.Sprite("images/coin_01.png", SPRITE_SCALING / 3)
# Position the coin
coin.center_x = random.randrange(SCREEN_WIDTH)
coin.center_y = random.randrange(120, SCREEN_HEIGHT)
# Add the coin to the lists
self.all_sprites_list.append(coin)
self.coin_list.append(coin)
# Don't show the mouse cursor
self.set_mouse_visible(False)
# Set the background color
arcade.set_background_color(arcade.color.AMAZON)
def on_draw(self):
"""
Render the screen.
"""
# This command has to happen before we start drawing
arcade.start_render()
# Draw all the sprites.
self.all_sprites_list.draw()
# Put the text on the screen.
output = "Score: {}".format(self.score)
arcade.draw_text(output, 10, 20, arcade.color.WHITE, 14)
def on_mouse_motion(self, x, y, dx, dy):
"""
Called whenever the mouse moves.
"""
self.player_sprite.center_x = x
def on_mouse_press(self, x, y, button, modifiers):
"""
Called whenever the mouse button is clicked.
"""
# Gunshot sound
arcade.sound.play_sound(self.gun_sound)
# Create a bullet
bullet = Bullet("images/laserBlue01.png", SPRITE_SCALING * 1.5)
# The image points to the right, and we want it to point up. So
# rotate it.
bullet.angle = 90
# Position the bullet
bullet.center_x = self.player_sprite.center_x
bullet.bottom = self.player_sprite.top
# Add the bullet to the appropriate lists
self.all_sprites_list.append(bullet)
self.bullet_list.append(bullet)
def update(self, delta_time):
""" Movement and game logic """
# Call update on all sprites (The sprites don't do much in this
# example though.)
self.all_sprites_list.update()
# Loop through each bullet
for bullet in self.bullet_list:
# Check this bullet to see if it hit a coin
hit_list = arcade.check_for_collision_with_list(bullet,
self.coin_list)
# If it did, get rid of the bullet
if len(hit_list) > 0:
bullet.kill()
# For every coin we hit, add to the score and remove the coin
for coin in hit_list:
coin.kill()
self.score += 1
# Hit Sound
arcade.sound.play_sound(self.hit_sound)
# If the bullet flies off-screen, remove it.
if bullet.bottom > SCREEN_HEIGHT:
bullet.kill()
def main():
MyAppWindow()
arcade.run()
if __name__ == "__main__":
main()
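`arcade.check_for_collision_with_list` is a library call; as a rough illustration of the idea behind sprite collision (not arcade's actual implementation), an axis-aligned bounding-box overlap test looks like this:

```python
def boxes_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    # Two axis-aligned rectangles (center x/y, width, height) overlap
    # only when they intersect on both axes.
    return (abs(ax - bx) * 2 < aw + bw) and (abs(ay - by) * 2 < ah + bh)

print(boxes_overlap(0, 0, 10, 10, 4, 4, 10, 10))   # True: boxes intersect
print(boxes_overlap(0, 0, 10, 10, 20, 0, 10, 10))  # False: too far apart
```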
| 27.709459 | 81 | 0.592295 | import random
import arcade
SPRITE_SCALING = 0.5
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
BULLET_SPEED = 5
class Bullet(arcade.Sprite):
def update(self):
self.center_y += BULLET_SPEED
class MyAppWindow(arcade.Window):
def __init__(self):
super().__init__(SCREEN_WIDTH, SCREEN_HEIGHT, "Sprites and Bullets Demo")
self.all_sprites_list = arcade.SpriteList()
self.coin_list = arcade.SpriteList()
self.bullet_list = arcade.SpriteList()
self.score = 0
self.player_sprite = arcade.Sprite("images/character.png",
SPRITE_SCALING)
self.player_sprite.center_x = 50
self.player_sprite.center_y = 70
self.all_sprites_list.append(self.player_sprite)
self.gun_sound = arcade.sound.load_sound("sounds/laser1.ogg")
self.hit_sound = arcade.sound.load_sound("sounds/phaseJump1.ogg")
for i in range(50):
coin = arcade.Sprite("images/coin_01.png", SPRITE_SCALING / 3)
coin.center_x = random.randrange(SCREEN_WIDTH)
coin.center_y = random.randrange(120, SCREEN_HEIGHT)
self.all_sprites_list.append(coin)
self.coin_list.append(coin)
self.set_mouse_visible(False)
# Set the background color
arcade.set_background_color(arcade.color.AMAZON)
def on_draw(self):
# This command has to happen before we start drawing
arcade.start_render()
# Draw all the sprites.
self.all_sprites_list.draw()
# Put the text on the screen.
output = "Score: {}".format(self.score)
arcade.draw_text(output, 10, 20, arcade.color.WHITE, 14)
def on_mouse_motion(self, x, y, dx, dy):
self.player_sprite.center_x = x
def on_mouse_press(self, x, y, button, modifiers):
# Gunshot sound
arcade.sound.play_sound(self.gun_sound)
# Create a bullet
bullet = Bullet("images/laserBlue01.png", SPRITE_SCALING * 1.5)
# The image points to the right, and we want it to point up. So
# rotate it.
bullet.angle = 90
# Position the bullet
bullet.center_x = self.player_sprite.center_x
bullet.bottom = self.player_sprite.top
# Add the bullet to the appropriate lists
self.all_sprites_list.append(bullet)
self.bullet_list.append(bullet)
def update(self, delta_time):
# Call update on all sprites (The sprites don't do much in this
self.all_sprites_list.update()
for bullet in self.bullet_list:
hit_list = arcade.check_for_collision_with_list(bullet,
self.coin_list)
if len(hit_list) > 0:
bullet.kill()
for coin in hit_list:
coin.kill()
self.score += 1
arcade.sound.play_sound(self.hit_sound)
if bullet.bottom > SCREEN_HEIGHT:
bullet.kill()
def main():
MyAppWindow()
arcade.run()
if __name__ == "__main__":
main()
| true | true |
f7fbf4646ad2c439ae2469d60113cbf6f253c938 | 353 | py | Python | Chapter_6/3.py | sai29/Python-John-Zelle-book | b6c54120cfc4742fa651a051b39fc94bf221543d | ["MIT"] | stars: 14 | issues: 1 | forks: 11
def sphereArea(radius):
area = 4 * 3.14 * (radius * radius)
return area
def sphereVolume(radius):
vol = (4.0 / 3.0) * 3.14 * (radius * radius * radius)  # 4/3 * pi * r^3; floats avoid Python 2 integer division
return vol
def main():
rad = float(raw_input("Enter the value of the radius"))
area1 = sphereArea(rad)
volume = sphereVolume(rad)
print "The area is", area1
print "The volume is", volume
main()
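For reference, the closed-form results these functions should produce, written with `math.pi` instead of the script's 3.14 approximation (so the values differ slightly from the script's output):

```python
import math

def sphere_area(radius):
    return 4.0 * math.pi * radius ** 2

def sphere_volume(radius):
    # Note the floating-point 4.0 / 3.0: in Python 2, integer 4 / 3
    # evaluates to 1 and silently corrupts the result.
    return (4.0 / 3.0) * math.pi * radius ** 3

print(round(sphere_area(1.0), 6))    # 12.566371
print(round(sphere_volume(1.0), 6))  # 4.18879
```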
| 16.045455 | 56 | 0.654391 |
def sphereArea(radius):
area = 4 * 3.14 * (radius * radius)
return area
def sphereVolume(radius):
vol = (4 / 3) * (3.14 * (radius * radius))
return vol
def main():
rad = float(raw_input("Enter the value of the radius"))
area1 = sphereArea(rad)
volume = sphereVolume(rad)
print "The area is", area1
print "The volume is", volume
main()
| false | true |
f7fbf537fa4a15e63fc1e463b4d6a8cdb7f53b03 | 3,380 | py | Python | revscoring/languages/tests/test_estonian.py | mariushoch/revscoring | 5ecd54d31c4088b6f142c0ef54116cc5bdce0ff2 | ["MIT"] | stars: null | issues: null | forks: null
import pickle
from .. import estonian
from ...datasources import revision_oriented
from ...dependencies import solve
from .util import compare_extraction
BAD = [
"butt", "butthole",
"crap",
"cock",
"fag", "faggot", "phaggot", "faggg",
"fuck", "fucking", "fucker",
"homo", "homod", "homokas", "homokad",
"idiots", "idiot", "idioooot",
"jobu", "jobud",
"kaka", "kakajunn",
"kepp", "keppis", "keppi", "keppida",
"lits", "litsid",
"loll", "lollakas", "lollid",
"motherfucker",
"munn", "munni", "munnid", "munne",
"nahhui", "nahhhui",
"nigga", "niggas", "niggaerh",
"noku", "noks",
"pask",
"pede", "peded", "pedekas", "pederast", "pederastid",
"perse", "perses", "persesse",
"pig", "pigs",
"pussy",
"puts", "putsi",
"sitt", "sitta",
"sita", "sitane", "sitajunn", "sitahunnik",
"stoopid", "stupid",
"taun",
"türa",
"tuss", "tussu",
"vitt", "vittu",
"vitupea"
]
INFORMAL = [
"animal",
"cool", "cooler", "coolest", "kool", "kooler", "koolest", "kewl",
"kewler", "kewlest",
"fakking",
"gangsta", "gängsta",
"haha", "hahaa", "hahaha",
"hmm", "hmmmmmm",
"ilge",
"ime", "imege",
"jou",
"junn", "junni",
"kill",
"kuradi",
"lahe",
"lohh",
"lol", "loll", "lolz",
"neeger",
"noob",
"pihku",
"raisk",
"räme",
"sakib",
"suck", "sucks", "sucking", "sucker",
"suht",
"tatt", "tatid",
"tegelt",
"tere",
"tsau",
"tsmir", "tšmir",
"yolo"
]
OTHER = [
"""
Friedrich Wilhelm Rembert von Berg sündis Beļava ja Sangaste mõisniku
Friedrich Georg von Bergi ja Gerdruta Wilhelmine von Ermesi vanima pojana.
Tal olid nooremad vennad Gustav, Magnus ja Alexander. Friedrich Wilhelmi ja
tema vendade koduõpetaja oli hilisem tuntud astronoom Wilhelm Struve.
"""
]
r_text = revision_oriented.revision.text
def test_badwords():
compare_extraction(estonian.badwords.revision.datasources.matches,
BAD, OTHER)
assert estonian.badwords == pickle.loads(pickle.dumps(estonian.badwords))
def test_informals():
compare_extraction(estonian.informals.revision.datasources.matches,
INFORMAL, OTHER)
assert estonian.informals == pickle.loads(pickle.dumps(estonian.informals))
def test_dictionary():
cache = {r_text: "Tal olid nooremad, vennad worngly. <td>"}
assert solve(estonian.dictionary.revision.datasources.dict_words,
cache=cache) == ["Tal", "olid", "nooremad", "vennad"]
assert solve(estonian.dictionary.revision.datasources.non_dict_words,
cache=cache) == ["worngly"]
assert estonian.dictionary == pickle.loads(
pickle.dumps(estonian.dictionary))
def test_stopwords():
cache = {revision_oriented.revision.text: "Bergi ja Gerdruta Wilhelmine " +
"von Ermesi vanima pojana."}
assert (solve(estonian.stopwords.revision.datasources.stopwords, cache=cache) ==
["von"])
assert (solve(estonian.stopwords.revision.datasources.non_stopwords,
cache=cache) ==
["Bergi", "ja", "Gerdruta", "Wilhelmine", "Ermesi", "vanima",
"pojana"])
assert estonian.stopwords == pickle.loads(pickle.dumps(estonian.stopwords))
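The `matches` datasource being tested lives inside revscoring itself; as a loose sketch of the underlying idea (hypothetical code, not revscoring's implementation), word-list detection is essentially a case-insensitive word-boundary regex over the text:

```python
import re

def build_matcher(words):
    # One alternation with word boundaries, case-insensitive.
    pattern = r"\b(?:" + "|".join(re.escape(w) for w in words) + r")\b"
    return re.compile(pattern, re.IGNORECASE)

badwords = build_matcher(["loll", "munn", "pede"])
print(badwords.findall("Loll jutt, aga muidu viisakas tekst."))  # ['Loll']
```

The `\b` boundaries are what keep a hit like `loll` from also firing inside a longer innocent word such as `lollakas` — which is exactly why the word list above spells out each inflected form separately.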
| 27.258065 | 84 | 0.593195 | import pickle
from .. import estonian
from ...datasources import revision_oriented
from ...dependencies import solve
from .util import compare_extraction
BAD = [
"butt", "butthole",
"crap",
"cock",
"fag", "faggot", "phaggot", "faggg",
"fuck", "fucking", "fucker",
"homo", "homod", "homokas", "homokad",
"idiots", "idiot", "idioooot",
"jobu", "jobud",
"kaka", "kakajunn",
"kepp", "keppis", "keppi", "keppida",
"lits", "litsid",
"loll", "lollakas", "lollid",
"motherfucker",
"munn", "munni", "munnid", "munne",
"nahhui", "nahhhui",
"nigga", "niggas", "niggaerh",
"noku", "noks",
"pask",
"pede", "peded", "pedekas", "pederast", "pederastid",
"perse", "perses", "persesse",
"pig", "pigs",
"pussy",
"puts", "putsi",
"sitt", "sitta",
"sita", "sitane", "sitajunn", "sitahunnik",
"stoopid", "stupid",
"taun",
"türa",
"tuss", "tussu",
"vitt", "vittu",
"vitupea"
]
INFORMAL = [
"animal",
"cool", "cooler", "coolest", "kool", "kooler", "koolest", "kewl",
"kewler", "kewlest",
"fakking",
"gangsta", "gängsta",
"haha", "hahaa", "hahaha",
"hmm", "hmmmmmm",
"ilge",
"ime", "imege",
"jou",
"junn", "junni",
"kill",
"kuradi",
"lahe",
"lohh",
"lol", "loll", "lolz",
"neeger",
"noob",
"pihku",
"raisk",
"räme",
"sakib",
"suck", "sucks", "sucking", "sucker",
"suht",
"tatt", "tatid",
"tegelt",
"tere",
"tsau",
"tsmir", "tšmir",
"yolo"
]
OTHER = [
"""
Friedrich Wilhelm Rembert von Berg sündis Beļava ja Sangaste mõisniku
Friedrich Georg von Bergi ja Gerdruta Wilhelmine von Ermesi vanima pojana.
Tal olid nooremad vennad Gustav, Magnus ja Alexander. Friedrich Wilhelmi ja
tema vendade koduõpetaja oli hilisem tuntud astronoom Wilhelm Struve.
"""
]
r_text = revision_oriented.revision.text
def test_badwords():
compare_extraction(estonian.badwords.revision.datasources.matches,
BAD, OTHER)
assert estonian.badwords == pickle.loads(pickle.dumps(estonian.badwords))
def test_informals():
compare_extraction(estonian.informals.revision.datasources.matches,
INFORMAL, OTHER)
assert estonian.informals == pickle.loads(pickle.dumps(estonian.informals))
def test_dictionary():
cache = {r_text: "Tal olid nooremad, vennad worngly. <td>"}
assert solve(estonian.dictionary.revision.datasources.dict_words,
cache=cache) == ["Tal", "olid", "nooremad", "vennad"]
assert solve(estonian.dictionary.revision.datasources.non_dict_words,
cache=cache) == ["worngly"]
assert estonian.dictionary == pickle.loads(
pickle.dumps(estonian.dictionary))
def test_stopwords():
cache = {revision_oriented.revision.text: "Bergi ja Gerdruta Wilhelmine " +
"von Ermesi vanima pojana."}
assert (solve(estonian.stopwords.revision.datasources.stopwords, cache=cache) ==
["von"])
assert (solve(estonian.stopwords.revision.datasources.non_stopwords,
cache=cache) ==
["Bergi", "ja", "Gerdruta", "Wilhelmine", "Ermesi", "vanima",
"pojana"])
assert estonian.stopwords == pickle.loads(pickle.dumps(estonian.stopwords))
| true | true |
f7fbf5c782016ffdc3a2611a220b499a8311f833 | 225 | py | Python | lifepo4weredPyController/helpers/SetHelper.py | fredericklussier/myLifepo4weredPy | a31479a50b630fd94105020159fd269a329de2bd | ["MIT"] | stars: null | issues: null | forks: null
#!/usr/bin/python3
# -*- coding: utf-8 -*-
def areSame(dict_a, dict_b):
    """Return True when both dicts hold exactly the same key/value pairs."""
    # Parameters renamed so they no longer shadow the built-in `dict` type.
    if len(dict_a) != len(dict_b):
        return False
    return set(dict_a.items()) == set(dict_b.items())
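A quick usage check of the helper — the comparison logic is repeated here as a standalone copy so the snippet runs on its own; key order is irrelevant, only the key/value pairs matter:

```python
def areSame(dict_a, dict_b):
    # Standalone copy of the comparison logic for this runnable example.
    if len(dict_a) != len(dict_b):
        return False
    return set(dict_a.items()) == set(dict_b.items())

print(areSame({"v": 3.2, "i": 0.5}, {"i": 0.5, "v": 3.2}))  # True
print(areSame({"v": 3.2, "i": 0.5}, {"v": 3.3, "i": 0.5}))  # False
```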
| 22.5 | 74 | 0.586667 |
def areSame(dict, dictToCompare):
if len(dict) != len(dictToCompare):
return False
else:
return (set() == (set(dict.items()) - set(dictToCompare.items())))
| true | true |
f7fbf5febaadcdf8549d6bccd8d324dfbc2150a5 | 4,951 | py | Python | sdk/python/pulumi_aws_native/customerprofiles/get_integration.py | pulumi/pulumi-aws-native | 1ae4a4d9c2256b2a79ca536f8d8497b28d10e4c3 | ["Apache-2.0"] | stars: 29 | issues: 232 | forks: 4
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetIntegrationResult',
'AwaitableGetIntegrationResult',
'get_integration',
'get_integration_output',
]
@pulumi.output_type
class GetIntegrationResult:
def __init__(__self__, created_at=None, last_updated_at=None, object_type_name=None, object_type_names=None, tags=None):
if created_at and not isinstance(created_at, str):
raise TypeError("Expected argument 'created_at' to be a str")
pulumi.set(__self__, "created_at", created_at)
if last_updated_at and not isinstance(last_updated_at, str):
raise TypeError("Expected argument 'last_updated_at' to be a str")
pulumi.set(__self__, "last_updated_at", last_updated_at)
if object_type_name and not isinstance(object_type_name, str):
raise TypeError("Expected argument 'object_type_name' to be a str")
pulumi.set(__self__, "object_type_name", object_type_name)
if object_type_names and not isinstance(object_type_names, list):
raise TypeError("Expected argument 'object_type_names' to be a list")
pulumi.set(__self__, "object_type_names", object_type_names)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[str]:
"""
The time of this integration got created
"""
return pulumi.get(self, "created_at")
@property
@pulumi.getter(name="lastUpdatedAt")
def last_updated_at(self) -> Optional[str]:
"""
The time of this integration got last updated at
"""
return pulumi.get(self, "last_updated_at")
@property
@pulumi.getter(name="objectTypeName")
def object_type_name(self) -> Optional[str]:
"""
The name of the ObjectType defined for the 3rd party data in Profile Service
"""
return pulumi.get(self, "object_type_name")
@property
@pulumi.getter(name="objectTypeNames")
def object_type_names(self) -> Optional[Sequence['outputs.IntegrationObjectTypeMapping']]:
"""
The mapping between 3rd party event types and ObjectType names
"""
return pulumi.get(self, "object_type_names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence['outputs.IntegrationTag']]:
"""
The tags (keys and values) associated with the integration
"""
return pulumi.get(self, "tags")
class AwaitableGetIntegrationResult(GetIntegrationResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetIntegrationResult(
created_at=self.created_at,
last_updated_at=self.last_updated_at,
object_type_name=self.object_type_name,
object_type_names=self.object_type_names,
tags=self.tags)
def get_integration(domain_name: Optional[str] = None,
uri: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetIntegrationResult:
"""
The resource schema for creating an Amazon Connect Customer Profiles Integration.
:param str domain_name: The unique name of the domain.
:param str uri: The URI of the S3 bucket or any other type of data source.
"""
__args__ = dict()
__args__['domainName'] = domain_name
__args__['uri'] = uri
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('aws-native:customerprofiles:getIntegration', __args__, opts=opts, typ=GetIntegrationResult).value
return AwaitableGetIntegrationResult(
created_at=__ret__.created_at,
last_updated_at=__ret__.last_updated_at,
object_type_name=__ret__.object_type_name,
object_type_names=__ret__.object_type_names,
tags=__ret__.tags)
@_utilities.lift_output_func(get_integration)
def get_integration_output(domain_name: Optional[pulumi.Input[str]] = None,
uri: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetIntegrationResult]:
"""
The resource schema for creating an Amazon Connect Customer Profiles Integration.
:param str domain_name: The unique name of the domain.
:param str uri: The URI of the S3 bucket or any other type of data source.
"""
...
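The `AwaitableGetIntegrationResult` wrapper above relies on a small trick: an `__await__` that never actually yields, so the object can be `await`-ed inside an event loop or consumed synchronously. A stripped-down sketch of the same pattern (hypothetical class name, not Pulumi's code):

```python
class AwaitableValue:
    def __init__(self, value):
        self.value = value

    # pylint: disable=using-constant-test
    def __await__(self):
        if False:
            yield self  # the dead yield makes this a generator function
        return self.value  # delivered to the awaiter via StopIteration.value

# Consuming it without an event loop: drive the generator by hand.
gen = AwaitableValue(42).__await__()
try:
    next(gen)
except StopIteration as stop:
    print(stop.value)  # 42
```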
| 37.793893 | 134 | 0.677641 |
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetIntegrationResult',
'AwaitableGetIntegrationResult',
'get_integration',
'get_integration_output',
]
@pulumi.output_type
class GetIntegrationResult:
def __init__(__self__, created_at=None, last_updated_at=None, object_type_name=None, object_type_names=None, tags=None):
if created_at and not isinstance(created_at, str):
raise TypeError("Expected argument 'created_at' to be a str")
pulumi.set(__self__, "created_at", created_at)
if last_updated_at and not isinstance(last_updated_at, str):
raise TypeError("Expected argument 'last_updated_at' to be a str")
pulumi.set(__self__, "last_updated_at", last_updated_at)
if object_type_name and not isinstance(object_type_name, str):
raise TypeError("Expected argument 'object_type_name' to be a str")
pulumi.set(__self__, "object_type_name", object_type_name)
if object_type_names and not isinstance(object_type_names, list):
raise TypeError("Expected argument 'object_type_names' to be a list")
pulumi.set(__self__, "object_type_names", object_type_names)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[str]:
return pulumi.get(self, "created_at")
@property
@pulumi.getter(name="lastUpdatedAt")
def last_updated_at(self) -> Optional[str]:
return pulumi.get(self, "last_updated_at")
@property
@pulumi.getter(name="objectTypeName")
def object_type_name(self) -> Optional[str]:
return pulumi.get(self, "object_type_name")
@property
@pulumi.getter(name="objectTypeNames")
def object_type_names(self) -> Optional[Sequence['outputs.IntegrationObjectTypeMapping']]:
return pulumi.get(self, "object_type_names")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence['outputs.IntegrationTag']]:
return pulumi.get(self, "tags")
class AwaitableGetIntegrationResult(GetIntegrationResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetIntegrationResult(
created_at=self.created_at,
last_updated_at=self.last_updated_at,
object_type_name=self.object_type_name,
object_type_names=self.object_type_names,
tags=self.tags)
def get_integration(domain_name: Optional[str] = None,
uri: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetIntegrationResult:
__args__ = dict()
__args__['domainName'] = domain_name
__args__['uri'] = uri
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('aws-native:customerprofiles:getIntegration', __args__, opts=opts, typ=GetIntegrationResult).value
return AwaitableGetIntegrationResult(
created_at=__ret__.created_at,
last_updated_at=__ret__.last_updated_at,
object_type_name=__ret__.object_type_name,
object_type_names=__ret__.object_type_names,
tags=__ret__.tags)
@_utilities.lift_output_func(get_integration)
def get_integration_output(domain_name: Optional[pulumi.Input[str]] = None,
uri: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetIntegrationResult]:
...

# File: src/basic_classification/basic.py | repo: asaladino/tf-playground | license: MIT
from __future__ import absolute_import, division, print_function
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
from basic_classification.helper import prediction_for_one_image
fashion_mnist = keras.datasets.fashion_mnist
# Load the data
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# Scale the data
train_images = train_images / 255.0
test_images = test_images / 255.0
# Build the model
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(train_images, train_labels, epochs=5)
# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
# Predict what one image will be.
prediction_for_one_image(test_images[0], model, test_labels, test_images, class_names)
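# The model above is compiled with `sparse_categorical_crossentropy`, i.e. the
# mean negative log-probability the softmax assigns to each integer label. A
# minimal stdlib sketch of that loss for illustration (not Keras's actual
# implementation):

```python
import math

def sparse_categorical_crossentropy(probs, labels):
    """Mean negative log-probability assigned to each integer label."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# two samples over three classes, true classes 0 and 1
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
print(round(sparse_categorical_crossentropy(probs, [0, 1]), 4))  # 0.2899
```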
# File: retrieve_news_data.py | repo: aarshpatel/RecapMyDay | license: MIT
import aylien_news_api
from aylien_news_api.rest import ApiException
import time
import pickle
from utils import *
import json
params = {
'language': ['en'],
'source_locations_country': ['US'],
'published_at_start': 'NOW-1DAYS',
'published_at_end': 'NOW',
'categories_taxonomy': 'iab-qag',
'sort_by': 'relevance',
'categories_id': ['IAB17', 'IAB19', 'IAB11-4', 'IAB1-2', 'IAB3'],
'cursor': '*',
'per_page': 16
}
id_to_category = {
'IAB17': 'Sports',
'IAB19': 'Tech',
'IAB11-4': 'Politics',
'IAB1-2': 'Celebrity Fan/Gossip',
'IAB3': 'Business'
}
def fetch_stories(params={}):
fetched_stories = []
stories = None
while (stories is None or len(stories) > 0) and (len(fetched_stories) < 200):
try:
response = api_instance.list_stories(**params)
except ApiException as e:
if (e.status == 429):
                print('Usage limit exceeded. Waiting for 60 seconds...')
time.sleep(60)
continue
stories = response.stories
params['cursor'] = response.next_page_cursor
fetched_stories += stories
print("Fetched %d stories. Total story count so far: %d" %
(len(stories), len(fetched_stories)))
return fetched_stories
# Configure API key authorization: app_id, application_key
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-ID'] = 'cfa4787e'
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-Key'] = '4fd1a82e4716ecccfb9ee352139d6ca9'
# create an instance of the API class
api_instance = aylien_news_api.DefaultApi()
def get_title_and_location_of_stories(stories):
"""
Preprocess the raw stories from the API to only include the title, link, and location
TODO:
Need some way to filter out places that aren't part of the United States or aren't states of the US
"""
all_story_titles = []
stories_and_location = []
for story in stories:
title = story.title
if title not in all_story_titles:
all_story_titles.append(title)
all_entities = []
geo_coordinates = []
for entities in story.entities.body:
if "Place" in entities.types:
coords = convert_location_to_lat_lng(entities.text)
if coords is not None and entities.text.isalpha():
geo_coordinates.append(coords) # append the geo_coordinates of the place
all_entities.append(entities.text) # append the text of the place
                    print(entities.text, coords)
if len(all_entities) == 0 or len(all_entities) > 2:
continue
else:
# get the category of the story
all_ids = [cat.id for cat in story.categories]
for story_id in all_ids:
if story_id in id_to_category:
id_category = id_to_category[story_id]
break
# get the image of the story
url = None
if len(story.media) > 0:
url = story.media[0].url
# get the summary of the story
summary = story.summary.sentences
stories_and_location.append((title, all_entities, story.links.permalink, geo_coordinates, id_category, url, summary))
return stories_and_location
def create_json_from_stories(stories_and_location):
""" Convert the preprocessed stories into a list of dictionaries. This will help us with making
the 3d visualizations """
stories = []
for story, location, link, geo_coordinates, category, img_url, summary in stories_and_location:
story_dict = {}
story_dict["title"] = story
story_dict["locations"] = location
story_dict["link"] = link
story_dict["geo_coordinates"] = geo_coordinates
story_dict["category"] = category
story_dict["img_url"] = img_url
story_dict["summary"] = summary
stories.append(story_dict)
return stories
stories = fetch_stories(params)
stories = get_title_and_location_of_stories(stories)
stories = create_json_from_stories(stories)
with open("stories.json", "w") as f:
json.dump(stories, f)
print(stories[:5])
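# `fetch_stories` above pages through the News API with a cursor until a page
# comes back empty or a cap is hit. The same loop, reduced to a testable sketch
# against a fake paged API (`fetch_all` and `fake_api` are hypothetical names,
# not part of the Aylien client):

```python
def fetch_all(list_page, limit=200):
    """Accumulate pages until the API returns an empty page or `limit` items."""
    items, cursor, page = [], "*", None
    while (page is None or len(page) > 0) and len(items) < limit:
        page, cursor = list_page(cursor)
        items.extend(page)
    return items

def fake_api(cursor, per_page=3):
    # cursor is just an integer offset here; "*" means start from the beginning
    data = list(range(10))
    start = 0 if cursor == "*" else cursor
    return data[start:start + per_page], start + per_page

print(fetch_all(fake_api))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```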
#!/usr/bin/env python
# File: train.py | repo: free-city-of-ulm/Hashbrowns | license: MIT
from __future__ import print_function
import numpy as np
import tensorflow as tf
import argparse
import time
import os
from six.moves import cPickle
from rnn.utils import TextLoader
from rnn.model import Model
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', type=str, default='input',
help='data directory containing input.txt')
parser.add_argument('--save_dir', type=str, default='output',
help='directory to store checkpointed models')
parser.add_argument('--rnn_size', type=int, default=256,
help='size of RNN hidden state')
parser.add_argument('--num_layers', type=int, default=2,
help='number of layers in the RNN')
parser.add_argument('--model', type=str, default='lstm',
help='rnn, gru, or lstm')
parser.add_argument('--batch_size', type=int, default=50,
help='minibatch size')
parser.add_argument('--seq_length', type=int, default=25,
help='RNN sequence length')
parser.add_argument('--num_epochs', type=int, default=50,
help='number of epochs')
parser.add_argument('--save_every', type=int, default=1000,
help='save frequency')
parser.add_argument('--grad_clip', type=float, default=5.,
help='clip gradients at this value')
parser.add_argument('--learning_rate', type=float, default=0.002,
help='learning rate')
parser.add_argument('--decay_rate', type=float, default=0.97,
help='decay rate for rmsprop')
parser.add_argument('--init_from', type=str, default=None,
help="""continue training from saved model at this path. Path must contain files saved by previous training process:
'config.pkl' : configuration;
'words_vocab.pkl' : vocabulary definitions;
'checkpoint' : paths to model file(s) (created by tf).
Note: this file contains absolute paths, be careful when moving files around;
'model.ckpt-*' : file(s) with model definition (created by tf)
""")
args = parser.parse_args()
train(args)
def train(args):
data_loader = TextLoader(args.data_dir, args.batch_size, args.seq_length)
args.vocab_size = data_loader.vocab_size
# check compatibility if training is continued from previously saved model
if args.init_from is not None:
# check if all necessary files exist
        assert os.path.isdir(args.init_from)," %s must be a path" % args.init_from
        assert os.path.isfile(os.path.join(args.init_from,"config.pkl")),"config.pkl file does not exist in path %s"%args.init_from
        assert os.path.isfile(os.path.join(args.init_from,"words_vocab.pkl")),"words_vocab.pkl file does not exist in path %s" % args.init_from
ckpt = tf.train.get_checkpoint_state(args.init_from)
assert ckpt,"No checkpoint found"
assert ckpt.model_checkpoint_path,"No model path found in checkpoint"
# open old config and check if models are compatible
with open(os.path.join(args.init_from, 'config.pkl'), 'rb') as f:
saved_model_args = cPickle.load(f)
need_be_same=["model","rnn_size","num_layers","seq_length"]
for checkme in need_be_same:
assert vars(saved_model_args)[checkme]==vars(args)[checkme],"Command line argument and saved model disagree on '%s' "%checkme
# open saved vocab/dict and check if vocabs/dicts are compatible
with open(os.path.join(args.init_from, 'words_vocab.pkl'), 'rb') as f:
saved_words, saved_vocab = cPickle.load(f)
        assert saved_words==data_loader.words, "Data and loaded model disagree on word set!"
        assert saved_vocab==data_loader.vocab, "Data and loaded model disagree on dictionary mappings!"
with open(os.path.join(args.save_dir, 'config.pkl'), 'wb') as f:
cPickle.dump(args, f)
with open(os.path.join(args.save_dir, 'words_vocab.pkl'), 'wb') as f:
cPickle.dump((data_loader.words, data_loader.vocab), f)
model = Model(args)
with tf.Session() as sess:
tf.initialize_all_variables().run()
saver = tf.train.Saver(tf.all_variables())
# restore model
if args.init_from is not None:
saver.restore(sess, ckpt.model_checkpoint_path)
for e in range(args.num_epochs):
sess.run(tf.assign(model.lr, args.learning_rate * (args.decay_rate ** e)))
data_loader.reset_batch_pointer()
state = model.initial_state.eval()
for b in range(data_loader.num_batches):
start = time.time()
x, y = data_loader.next_batch()
feed = {model.input_data: x, model.targets: y, model.initial_state: state}
train_loss, state, _ = sess.run([model.cost, model.final_state, model.train_op], feed)
end = time.time()
print("{}/{} (epoch {}), train_loss = {:.3f}, time/batch = {:.3f}" \
.format(e * data_loader.num_batches + b,
args.num_epochs * data_loader.num_batches,
e, train_loss, end - start))
if (e * data_loader.num_batches + b) % args.save_every == 0 \
or (e==args.num_epochs-1 and b == data_loader.num_batches-1): # save for the last result
checkpoint_path = os.path.join(args.save_dir, 'model.ckpt')
saver.save(sess, checkpoint_path, global_step = e * data_loader.num_batches + b)
print("model saved to {}".format(checkpoint_path))
if __name__ == '__main__':
main()
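# The per-epoch learning-rate schedule in `train()` above is plain exponential
# decay, `learning_rate * decay_rate ** epoch`. A standalone sketch with the
# script's default values (0.002, 0.97):

```python
def decayed_lr(base_lr, decay_rate, epoch):
    """Schedule used in train(): plain exponential decay per epoch."""
    return base_lr * decay_rate ** epoch

for e in (0, 1, 10, 49):
    print(e, decayed_lr(0.002, 0.97, e))
```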
# File: hypernets/dispatchers/grpc/proto/predict_pb2_grpc.py | repo: qigj/Hypernets | license: Apache-2.0
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from hypernets.dispatchers.grpc.proto import predict_pb2 as hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2
class PredictServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.predict = channel.unary_unary(
'/hypernets.dispatchers.proto.PredictService/predict',
request_serializer=hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictRequest.SerializeToString,
response_deserializer=hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictResponse.FromString,
)
class PredictServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def predict(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_PredictServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'predict': grpc.unary_unary_rpc_method_handler(
servicer.predict,
request_deserializer=hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictRequest.FromString,
response_serializer=hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'hypernets.dispatchers.proto.PredictService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class PredictService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def predict(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/hypernets.dispatchers.proto.PredictService/predict',
hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictRequest.SerializeToString,
hypernets_dot_dispatchers_dot_grpc_dot_proto_dot_predict__pb2.PredictResponse.FromString,
options, channel_credentials,
call_credentials, compression, wait_for_ready, timeout, metadata)
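# `add_PredictServiceServicer_to_server` above ultimately maps fully qualified
# method paths like '/hypernets.dispatchers.proto.PredictService/predict' to
# handler callables. A toy, gRPC-free version of that registry (`MiniServer` is
# a hypothetical illustration, not grpc.Server):

```python
class MiniServer:
    """Toy stand-in for grpc.Server: a dict from '/Service/method' to a handler."""
    def __init__(self):
        self._handlers = {}

    def add_handlers(self, service_name, method_handlers):
        for method, handler in method_handlers.items():
            self._handlers[f"/{service_name}/{method}"] = handler

    def invoke(self, path, request):
        return self._handlers[path](request)

server = MiniServer()
server.add_handlers("hypernets.dispatchers.proto.PredictService",
                    {"predict": lambda req: {"echo": req}})
print(server.invoke("/hypernets.dispatchers.proto.PredictService/predict", "ping"))
# {'echo': 'ping'}
```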
# File: tests/utilities/test_command_runner.py | repo: cmontesano/xappt | license: MIT
import os
import re
import unittest
from xappt.utilities import CommandRunner
from xappt.utilities import temporary_path
class TestCommandRunner(unittest.TestCase):
def test_basic_command(self):
with temporary_path() as tmp:
cmd = CommandRunner(cwd=tmp)
if os.name == "nt":
cmd.run(("mkdir", "test"), capture_output=False, shell=True)
else:
cmd.run(("mkdir", "test"), capture_output=False)
self.assertTrue(tmp.joinpath("test").is_dir())
def test_basic_command_silent(self):
with temporary_path() as tmp:
cmd = CommandRunner(cwd=tmp)
if os.name == "nt":
cmd.run(("mkdir", "test"), capture_output=True, shell=True)
else:
cmd.run(("mkdir", "test"), capture_output=True)
self.assertTrue(tmp.joinpath("test").is_dir())
def test_basic_command_output(self):
with temporary_path() as tmp:
cmd = CommandRunner(cwd=tmp)
if os.name == "nt":
cmd.run(("mkdir", "test"), capture_output=True, shell=True)
self.assertTrue(tmp.joinpath("test").is_dir())
output = cmd.run("dir", capture_output=True, shell=True)
match_pattern = r"^.*?<DIR>\s+test$"
else:
cmd.run(("mkdir", "test"), capture_output=True)
self.assertTrue(tmp.joinpath("test").is_dir())
output = cmd.run(("ls", "-l"), capture_output=True)
match_pattern = r"^[d].*\s(?:test)$"
matched = False
for line in output.stdout.split("\n"):
line = line.strip()
if re.match(match_pattern, line):
matched = True
if not matched:
self.fail(f"No matches in '{output.stdout}'")
def test_environment_add(self):
cmd = CommandRunner(env={})
cmd.env_var_set("TEST", "12345")
self.assertIn("TEST", cmd.env)
self.assertEqual(cmd.env["TEST"], "12345")
cmd.env_var_set("TEST", "abcde")
self.assertEqual(cmd.env["TEST"], "abcde")
def test_environment_rem(self):
cmd = CommandRunner(env={})
cmd.env_var_set("TEST", "12345")
self.assertIn("TEST", cmd.env)
cmd.env_var_remove("TEST")
self.assertNotIn("TEST", cmd.env)
try:
cmd.env_var_remove("INVALID")
except KeyError:
self.fail("A KeyError should not have been raised.")
def test_environment_path(self):
cmd = CommandRunner(env={})
cmd.env_var_set("TESTPATH", "3")
cmd.env_path_append("TESTPATH", "4")
cmd.env_path_prepend("TESTPATH", "2")
cmd.env_path_append("TESTPATH", "5")
cmd.env_path_prepend("TESTPATH", "1")
self.assertEqual(cmd.env["TESTPATH"], os.pathsep.join("12345"))
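# The env-path ordering asserted in test_environment_path above (prepends land
# in front, appends at the back, joined with os.pathsep) can be sketched
# without xappt at all. `EnvBuilder` is a hypothetical toy, not xappt's
# CommandRunner implementation:

```python
import os

class EnvBuilder:
    """Toy sketch of the env helpers exercised above (not xappt's implementation)."""
    def __init__(self):
        self.env = {}

    def var_set(self, key, value):
        self.env[key] = value

    def path_prepend(self, key, value):
        current = self.env.get(key)
        self.env[key] = value if current is None else value + os.pathsep + current

    def path_append(self, key, value):
        current = self.env.get(key)
        self.env[key] = value if current is None else current + os.pathsep + value

b = EnvBuilder()
b.var_set("TESTPATH", "3")
b.path_append("TESTPATH", "4")
b.path_prepend("TESTPATH", "2")
b.path_append("TESTPATH", "5")
b.path_prepend("TESTPATH", "1")
print(b.env["TESTPATH"] == os.pathsep.join("12345"))  # True
```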
# File: test/test_download_espa_order.py | repo: cgaustin/bulk-downloader | license: Unlicense
import os
import re
import unittest
from mock import patch
import download_espa_order as deo
import urllib
import requests
import requests_mock
class TestDownloadEspaOrder(unittest.TestCase):
def test_get_version(self):
ver = deo.get_version()
self.assertTrue(re.match("[0-9]{1}.[0-9]{1,2}.[0-9]{1,2}", ver))
class TestRequestsHandler(unittest.TestCase):
def setUp(self):
rh = deo.RequestsHandler(host='http://foo.gov')
self.handler = rh
def tearDown(self):
self.handler = None
if os.path.exists("bar.tar.part"):
os.remove("bar.tar.part")
def test_auth(self):
self.handler.auth('foo', 'bar')
self.assertEqual(('foo', 'bar'), self.handler.creds)
def test_get(self):
with requests_mock.mock() as m:
m.get('http://foo.gov/bar', json={'bilbo': 'baggins'})
resp = self.handler.get('/bar')
self.assertEqual(resp, {'bilbo': 'baggins'})
def test_download(self):
with requests_mock.mock() as m:
m.head('http://foo.gov/bar.tar', headers={'Content-Length': '99'})
m.get('http://foo.gov/bar.tar', text="guacamole")
resp = self.handler.download('/bar.tar', 'bar.tar')
self.assertEqual(resp, "bar.tar")
class TestApi(unittest.TestCase):
def setUp(self):
self.api = deo.Api('bilbo', 'baggins', 'http://foo.gov')
def tearDown(self):
self.api = None
def test_api_request(self):
with requests_mock.mock() as m:
m.get('http://foo.gov/morefoo', json={'thing1': 'thing2', 'messages': {'value': 'true'}})
resp = self.api.api_request('/morefoo')
self.assertEqual(resp, {'thing1': 'thing2'})
def test_get_completed_scenes(self):
with requests_mock.mock() as m:
m.get('http://foo.gov/api/v1/item-status/anespaorderid', json={'anespaorderid': [{'product_dload_url': 'a.gov'},
{'product_dload_url': 'b.gov'}]})
resp = self.api.get_completed_scenes('anespaorderid')
self.assertEqual(resp, ['a.gov', 'b.gov'])
def test_retrieve_all_orders(self):
with requests_mock.mock() as m:
m.get('http://foo.gov/api/v1/list-orders/ausersemail@gmail.com', json={'orderjson': {'status': 'complete'}})
resp = self.api.retrieve_all_orders('ausersemail@gmail.com')
self.assertEqual(resp, {'orderjson': {'status': 'complete'}})
class TestScene(unittest.TestCase):
def setUp(self):
self.test_url = "https://downloads.usgs.gov/orders/user@contractor.gov-030/LC080070592016082701T1-SC20200423155912.tar.gz"
def test_init(self):
scene = deo.Scene(self.test_url)
self.assertEqual(scene.srcurl, self.test_url)
self.assertEqual(scene.orderid, 'user@contractor.gov-030')
self.assertEqual(scene.filename, 'LC080070592016082701T1-SC20200423155912.tar.gz')
self.assertEqual(scene.name, 'LC080070592016082701T1-SC20200423155912')
self.assertEqual(scene.cksum_url, 'https://downloads.usgs.gov/orders/user@contractor.gov-030/LC080070592016082701T1-SC20200423155912.md5')
self.assertEqual(scene.cksum_file, 'LC080070592016082701T1-SC20200423155912.md5')
self.assertEqual(scene.cksum_name, 'LC080070592016082701T1-SC20200423155912 MD5 checksum')
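The expectations in `TestScene.test_init` encode a simple derivation of every field from the download URL. A hedged re-implementation sketch — `parse_scene_url` is my name for illustration, not part of `download_espa_order`:

```python
from urllib.parse import urlparse
import posixpath

def parse_scene_url(url):
    # Hypothetical re-derivation of the fields TestScene asserts on:
    # the order id is the parent directory of the archive, the scene
    # name drops the .tar.gz suffix, and the checksum artifacts swap
    # that suffix for .md5.
    path = urlparse(url).path
    filename = posixpath.basename(path)
    name = filename[:-len('.tar.gz')] if filename.endswith('.tar.gz') else filename
    return {
        'orderid': posixpath.basename(posixpath.dirname(path)),
        'filename': filename,
        'name': name,
        'cksum_file': name + '.md5',
        'cksum_url': url.rsplit('.tar.gz', 1)[0] + '.md5',
    }
```

This only covers `.tar.gz` archives, which is all the test suite above exercises.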
class TestLocalStorage(unittest.TestCase):
def setUp(self):
self.test_url = "https://downloads.usgs.gov/orders/user@contractor.gov-030/LC080070592016082701T1-SC20200423155912.tar.gz"
self.scene = deo.Scene(self.test_url)
self.local_storage = deo.LocalStorage('')
def tearDown(self):
path = "user@contractor.gov-030"
if os.path.exists(path):
os.rmdir(path)
def test_directory_path(self):
path = self.local_storage.directory_path(self.scene)
self.assertTrue(os.path.exists(path))
def test_scene_path(self):
path = self.local_storage.scene_path(self.scene)
self.assertEqual(path, "user@contractor.gov-030/LC080070592016082701T1-SC20200423155912.tar.gz")
@patch('os.path.exists', lambda i: type(i) == str)
def test_is_stored(self):
self.assertTrue(self.local_storage.is_stored(self.scene))
| 38.646018 | 146 | 0.640485 |
| true | true |
f7fbfaf1e833620b5f09f7ca98374722712475f1 | 47,007 | py | Python | src/twisted/mail/test/test_pop3.py | clokep/twisted | 79a26b0aa4b1b81b46cc64d203644b35e455e46b | [
"Unlicense",
"MIT"
] | null | null | null | src/twisted/mail/test/test_pop3.py | clokep/twisted | 79a26b0aa4b1b81b46cc64d203644b35e455e46b | [
"Unlicense",
"MIT"
] | null | null | null | src/twisted/mail/test/test_pop3.py | clokep/twisted | 79a26b0aa4b1b81b46cc64d203644b35e455e46b | [
"Unlicense",
"MIT"
] | 1 | 2021-12-13T10:46:13.000Z | 2021-12-13T10:46:13.000Z | # Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Test cases for L{twisted.mail.pop3} module.
"""
import hmac
import base64
import itertools
from hashlib import md5
from collections import OrderedDict
from io import BytesIO
from zope.interface import implementer
from twisted import cred
from twisted import internet
from twisted import mail
from twisted.internet import defer
from twisted.mail import pop3
from twisted.protocols import loopback
from twisted.python import failure
from twisted.python.compat import intToBytes
from twisted.test.proto_helpers import LineSendingProtocol
from twisted.trial import unittest, util
import twisted.cred.checkers
import twisted.cred.credentials
import twisted.cred.portal
import twisted.internet.protocol
import twisted.mail.pop3
import twisted.mail.protocols
class UtilityTests(unittest.TestCase):
"""
Test the various helper functions and classes used by the POP3 server
protocol implementation.
"""
def test_LineBuffering(self):
"""
Test creating a LineBuffer and feeding it some lines. The lines should
build up in its internal buffer for a while and then get spat out to
the writer.
"""
output = []
input = iter(itertools.cycle(['012', '345', '6', '7', '8', '9']))
c = pop3._IteratorBuffer(output.extend, input, 6)
i = iter(c)
        self.assertEqual(output, []) # Nothing is buffered
next(i)
self.assertEqual(output, []) # '012' is buffered
next(i)
self.assertEqual(output, []) # '012345' is buffered
next(i)
self.assertEqual(output, ['012', '345', '6']) # Nothing is buffered
for n in range(5):
next(i)
self.assertEqual(output, ['012', '345', '6', '7', '8', '9', '012',
'345'])
def test_FinishLineBuffering(self):
"""
Test that a LineBuffer flushes everything when its iterator is
exhausted, and itself raises StopIteration.
"""
output = []
input = iter(['a', 'b', 'c'])
c = pop3._IteratorBuffer(output.extend, input, 5)
for i in c:
pass
self.assertEqual(output, ['a', 'b', 'c'])
def test_SuccessResponseFormatter(self):
"""
Test that the thing that spits out POP3 'success responses' works
right.
"""
self.assertEqual(
pop3.successResponse(b'Great.'),
b'+OK Great.\r\n')
def test_StatLineFormatter(self):
"""
Test that the function which formats stat lines does so appropriately.
"""
statLine = list(pop3.formatStatResponse([]))[-1]
self.assertEqual(statLine, b'+OK 0 0\r\n')
statLine = list(pop3.formatStatResponse([10, 31, 0, 10101]))[-1]
self.assertEqual(statLine, b'+OK 4 10142\r\n')
def test_ListLineFormatter(self):
"""
Test that the function which formats the lines in response to a LIST
command does so appropriately.
"""
listLines = list(pop3.formatListResponse([]))
self.assertEqual(
listLines,
[b'+OK 0\r\n', b'.\r\n'])
listLines = list(pop3.formatListResponse([1, 2, 3, 100]))
self.assertEqual(
listLines,
[b'+OK 4\r\n', b'1 1\r\n', b'2 2\r\n', b'3 3\r\n', b'4 100\r\n',
b'.\r\n'])
def test_UIDListLineFormatter(self):
"""
Test that the function which formats lines in response to a UIDL
command does so appropriately.
"""
uids = ['abc', 'def', 'ghi']
listLines = list(pop3.formatUIDListResponse([], uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'.\r\n'])
listLines = list(pop3.formatUIDListResponse([123, 431, 591],
uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'1 abc\r\n', b'2 def\r\n', b'3 ghi\r\n', b'.\r\n'])
listLines = list(pop3.formatUIDListResponse([0, None, 591],
uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'1 abc\r\n', b'3 ghi\r\n', b'.\r\n'])
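The buffering contract asserted in `test_LineBuffering` and `test_FinishLineBuffering` can be sketched with a plain generator. This is a simplified stand-in for `pop3._IteratorBuffer`, not its actual implementation:

```python
def iterator_buffer(write, iterable, mem_size):
    # Simplified stand-in for pop3._IteratorBuffer: items accumulate
    # until their combined length exceeds mem_size, then the batch is
    # flushed through write(); any remainder is flushed when the
    # wrapped iterable is exhausted.
    buf = []
    size = 0
    for item in iterable:
        buf.append(item)
        size += len(item)
        if size > mem_size:
            write(buf)
            buf = []
            size = 0
        yield None
    if buf:
        write(buf)
```

With a threshold of 6 and inputs `'012'`, `'345'`, `'6'`, nothing is written until the third item pushes the buffered length past the threshold — the same sequence the tests above step through.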
class MyVirtualPOP3(mail.protocols.VirtualPOP3):
"""
A virtual-domain-supporting POP3 server.
"""
magic = b'<moshez>'
def authenticateUserAPOP(self, user, digest):
"""
        Authenticate a user against a virtual domain.
@param user: The username.
@param digest: The digested password.
@return: A three-tuple like the one returned by
L{IRealm.requestAvatar}. The mailbox will be for the user given
by C{user}.
"""
user, domain = self.lookupDomain(user)
return self.service.domains[b'baz.com'].authenticateUserAPOP(
user, digest, self.magic, domain)
class DummyDomain:
"""
A virtual domain for a POP3 server.
"""
def __init__(self):
self.users = {}
def addUser(self, name):
"""
Create a mailbox for a new user.
@param name: The username.
"""
self.users[name] = []
def addMessage(self, name, message):
"""
Add a message to the mailbox of the named user.
@param name: The username.
@param message: The contents of the message.
"""
self.users[name].append(message)
def authenticateUserAPOP(self, name, digest, magic, domain):
"""
Succeed with a L{ListMailbox}.
@param name: The name of the user authenticating.
@param digest: ignored
@param magic: ignored
@param domain: ignored
@return: A three-tuple like the one returned by
L{IRealm.requestAvatar}. The mailbox will be for the user given
by C{name}.
"""
return pop3.IMailbox, ListMailbox(self.users[name]), lambda: None
class ListMailbox:
"""
A simple in-memory list implementation of L{IMailbox}.
"""
def __init__(self, list):
"""
@param list: The messages.
"""
self.list = list
def listMessages(self, i=None):
"""
Get some message information.
@param i: See L{pop3.IMailbox.listMessages}.
@return: See L{pop3.IMailbox.listMessages}.
"""
if i is None:
return [len(l) for l in self.list]
return len(self.list[i])
def getMessage(self, i):
"""
Get the message content.
@param i: See L{pop3.IMailbox.getMessage}.
@return: See L{pop3.IMailbox.getMessage}.
"""
return BytesIO(self.list[i])
def getUidl(self, i):
"""
Construct a UID by using the given index value.
@param i: See L{pop3.IMailbox.getUidl}.
@return: See L{pop3.IMailbox.getUidl}.
"""
return i
def deleteMessage(self, i):
"""
Wipe the message at the given index.
@param i: See L{pop3.IMailbox.deleteMessage}.
"""
self.list[i] = b''
def sync(self):
"""
No-op.
@see: L{pop3.IMailbox.sync}
"""
class MyPOP3Downloader(pop3.POP3Client):
"""
A POP3 client which downloads all messages from the server.
"""
def handle_WELCOME(self, line):
"""
Authenticate.
@param line: The welcome response.
"""
pop3.POP3Client.handle_WELCOME(self, line)
self.apop(b'hello@baz.com', b'world')
def handle_APOP(self, line):
"""
Require an I{OK} response to I{APOP}.
@param line: The I{APOP} response.
"""
parts = line.split()
code = parts[0]
if code != b'+OK':
raise AssertionError('code is: %s , parts is: %s ' % (code, parts))
self.lines = []
self.retr(1)
def handle_RETR_continue(self, line):
"""
Record one line of message information.
@param line: A I{RETR} response line.
"""
self.lines.append(line)
def handle_RETR_end(self):
"""
Record the received message information.
"""
self.message = b'\n'.join(self.lines) + b'\n'
self.quit()
def handle_QUIT(self, line):
"""
Require an I{OK} response to I{QUIT}.
@param line: The I{QUIT} response.
"""
if line[:3] != b'+OK':
raise AssertionError(b'code is ' + line)
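The APOP exchange driven by `MyPOP3Downloader` answers the server's timestamp banner with an MD5 digest, per RFC 1939. A minimal stdlib sketch of that digest — the helper name is mine:

```python
import hashlib

def apop_digest(magic, password):
    # RFC 1939 APOP: the client replies with the hex MD5 of the
    # server's timestamp banner concatenated with the shared secret.
    return hashlib.md5(magic + password).hexdigest().encode('ascii')
```

For the banner and secret used in the RFC 1939 example (`<1896.697170952@dbc.mtview.ca.us>`, `tanstaaf`), this produces the digest `c4c9334bac560ecc979e58001b3e22fb` given there.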
class POP3Tests(unittest.TestCase):
"""
Tests for L{pop3.POP3}.
"""
    message = b'''\
Subject: urgent

Someone set up us the bomb!
'''
expectedOutput = (b'''\
+OK <moshez>\015
+OK Authentication succeeded\015
+OK \015
1 0\015
.\015
+OK ''' + intToBytes(len(message)) + b'''\015
Subject: urgent\015
\015
Someone set up us the bomb!\015
.\015
+OK \015
''')
def setUp(self):
"""
Set up a POP3 server with virtual domain support.
"""
self.factory = internet.protocol.Factory()
self.factory.domains = {}
self.factory.domains[b'baz.com'] = DummyDomain()
self.factory.domains[b'baz.com'].addUser(b'hello')
self.factory.domains[b'baz.com'].addMessage(b'hello', self.message)
def test_messages(self):
"""
Messages can be downloaded over a loopback TCP connection.
"""
client = LineSendingProtocol([
b'APOP hello@baz.com world',
b'UIDL',
b'RETR 1',
b'QUIT',
])
server = MyVirtualPOP3()
server.service = self.factory
def check(ignored):
output = b'\r\n'.join(client.response) + b'\r\n'
self.assertEqual(output, self.expectedOutput)
return loopback.loopbackTCP(server, client).addCallback(check)
def test_loopback(self):
"""
Messages can be downloaded over a loopback connection.
"""
protocol = MyVirtualPOP3()
protocol.service = self.factory
clientProtocol = MyPOP3Downloader()
def check(ignored):
self.assertEqual(clientProtocol.message, self.message)
protocol.connectionLost(
failure.Failure(Exception("Test harness disconnect")))
d = loopback.loopbackAsync(protocol, clientProtocol)
return d.addCallback(check)
test_loopback.suppress = [util.suppress(
message="twisted.mail.pop3.POP3Client is deprecated")]
def test_incorrectDomain(self):
"""
Look up a user in a domain which this server does not support.
"""
factory = internet.protocol.Factory()
factory.domains = {}
factory.domains[b'twistedmatrix.com'] = DummyDomain()
server = MyVirtualPOP3()
server.service = factory
exc = self.assertRaises(pop3.POP3Error,
server.authenticateUserAPOP, b'nobody@baz.com', b'password')
self.assertEqual(exc.args[0], 'no such domain baz.com')
class DummyPOP3(pop3.POP3):
"""
A simple POP3 server with a hard-coded mailbox for any user.
"""
magic = b'<moshez>'
def authenticateUserAPOP(self, user, password):
"""
Succeed with a L{DummyMailbox}.
@param user: ignored
@param password: ignored
@return: A three-tuple like the one returned by
L{IRealm.requestAvatar}.
"""
return pop3.IMailbox, DummyMailbox(ValueError), lambda: None
class DummyPOP3Auth(DummyPOP3):
"""
Class to test successful authentication in twisted.mail.pop3.POP3.
"""
def __init__(self, user, password):
self.portal = cred.portal.Portal(TestRealm())
ch = cred.checkers.InMemoryUsernamePasswordDatabaseDontUse()
ch.addUser(user, password)
self.portal.registerChecker(ch)
class DummyMailbox(pop3.Mailbox):
"""
An in-memory L{pop3.IMailbox} implementation.
@ivar messages: A sequence of L{bytes} defining the messages in this
mailbox.
@ivar exceptionType: The type of exception to raise when an out-of-bounds
index is addressed.
"""
messages = [b'From: moshe\nTo: moshe\n\nHow are you, friend?\n']
def __init__(self, exceptionType):
self.messages = DummyMailbox.messages[:]
self.exceptionType = exceptionType
def listMessages(self, i=None):
"""
Get some message information.
@param i: See L{pop3.IMailbox.listMessages}.
@return: See L{pop3.IMailbox.listMessages}.
"""
if i is None:
return [len(m) for m in self.messages]
if i >= len(self.messages):
raise self.exceptionType()
return len(self.messages[i])
def getMessage(self, i):
"""
Get the message content.
@param i: See L{pop3.IMailbox.getMessage}.
@return: See L{pop3.IMailbox.getMessage}.
"""
return BytesIO(self.messages[i])
def getUidl(self, i):
"""
Construct a UID which is simply the string representation of the given
index.
@param i: See L{pop3.IMailbox.getUidl}.
@return: See L{pop3.IMailbox.getUidl}.
"""
if i >= len(self.messages):
raise self.exceptionType()
return intToBytes(i)
def deleteMessage(self, i):
"""
Wipe the message at the given index.
@param i: See L{pop3.IMailbox.deleteMessage}.
"""
self.messages[i] = b''
class AnotherPOP3Tests(unittest.TestCase):
"""
Additional L{pop3.POP3} tests.
"""
def runTest(self, lines, expectedOutput, protocolInstance=None):
"""
Assert that when C{lines} are delivered to L{pop3.POP3} it responds
with C{expectedOutput}.
@param lines: A sequence of L{bytes} representing lines to deliver to
the server.
@param expectedOutput: A sequence of L{bytes} representing the
expected response from the server.
@param protocolInstance: Instance of L{twisted.mail.pop3.POP3} or
L{None}. If L{None}, a new DummyPOP3 will be used.
@return: A L{Deferred} that fires when the lines have been delivered
and the output checked.
"""
dummy = protocolInstance if protocolInstance else DummyPOP3()
client = LineSendingProtocol(lines)
d = loopback.loopbackAsync(dummy, client)
return d.addCallback(self._cbRunTest, client, dummy, expectedOutput)
def _cbRunTest(self, ignored, client, dummy, expectedOutput):
self.assertEqual(b'\r\n'.join(expectedOutput),
b'\r\n'.join(client.response))
dummy.connectionLost(failure.Failure(
Exception("Test harness disconnect")))
return ignored
def test_buffer(self):
"""
Test a lot of different POP3 commands in an extremely pipelined
scenario.
This test may cover legitimate behavior, but the intent and
granularity are not very good. It would likely be an improvement to
split it into a number of smaller, more focused tests.
"""
return self.runTest(
[b"APOP moshez dummy",
b"LIST",
b"UIDL",
b"RETR 1",
b"RETR 2",
b"DELE 1",
b"RETR 1",
b"QUIT"],
[b'+OK <moshez>',
b'+OK Authentication succeeded',
b'+OK 1',
b'1 44',
b'.',
b'+OK ',
b'1 0',
b'.',
b'+OK 44',
b'From: moshe',
b'To: moshe',
b'',
b'How are you, friend?',
b'.',
b'-ERR Bad message number argument',
b'+OK ',
b'-ERR message deleted',
b'+OK '])
def test_noop(self):
"""
Test the no-op command.
"""
return self.runTest(
[b'APOP spiv dummy',
b'NOOP',
b'QUIT'],
[b'+OK <moshez>',
b'+OK Authentication succeeded',
b'+OK ',
b'+OK '])
def test_badUTF8CharactersInCommand(self):
"""
Sending a command with invalid UTF-8 characters
will raise a L{pop3.POP3Error}.
"""
error = b'not authenticated yet: cannot do \x81PASS'
d = self.runTest(
[b'\x81PASS',
b'QUIT'],
[b'+OK <moshez>',
b"-ERR bad protocol or server: POP3Error: " +
error,
b'+OK '])
errors = self.flushLoggedErrors(pop3.POP3Error)
self.assertEqual(len(errors), 1)
return d
def test_authListing(self):
"""
L{pop3.POP3} responds to an I{AUTH} command with a list of supported
authentication types based on its factory's C{challengers}.
"""
p = DummyPOP3()
p.factory = internet.protocol.Factory()
p.factory.challengers = {b'Auth1': None, b'secondAuth': None,
b'authLast': None}
client = LineSendingProtocol([
b"AUTH",
b"QUIT",
])
d = loopback.loopbackAsync(p, client)
return d.addCallback(self._cbTestAuthListing, client)
def _cbTestAuthListing(self, ignored, client):
self.assertTrue(client.response[1].startswith(b'+OK'))
self.assertEqual(sorted(client.response[2:5]),
[b"AUTH1", b"AUTHLAST", b"SECONDAUTH"])
self.assertEqual(client.response[5], b".")
def run_PASS(self, real_user, real_password,
tried_user=None, tried_password=None,
after_auth_input=[], after_auth_output=[]):
"""
Test a login with PASS.
If L{real_user} matches L{tried_user} and L{real_password} matches
L{tried_password}, a successful login will be expected.
Otherwise an unsuccessful login will be expected.
@type real_user: L{bytes}
@param real_user: The user to test.
@type real_password: L{bytes}
@param real_password: The password of the test user.
@type tried_user: L{bytes} or L{None}
@param tried_user: The user to call USER with.
If None, real_user will be used.
@type tried_password: L{bytes} or L{None}
@param tried_password: The password to call PASS with.
If None, real_password will be used.
@type after_auth_input: L{list} of l{bytes}
@param after_auth_input: Extra protocol input after authentication.
@type after_auth_output: L{list} of l{bytes}
@param after_auth_output: Extra protocol output after authentication.
"""
if not tried_user:
tried_user = real_user
if not tried_password:
tried_password = real_password
response = [b'+OK <moshez>',
b'+OK USER accepted, send PASS',
b'-ERR Authentication failed']
if real_user == tried_user and real_password == tried_password:
response = [b'+OK <moshez>',
b'+OK USER accepted, send PASS',
b'+OK Authentication succeeded']
fullInput = [b' '.join([b'USER', tried_user]),
b' '.join([b'PASS', tried_password])]
fullInput += after_auth_input + [b'QUIT']
response += after_auth_output + [b'+OK ']
return self.runTest(
fullInput,
response,
protocolInstance=DummyPOP3Auth(real_user, real_password))
def run_PASS_before_USER(self, password):
"""
Test protocol violation produced by calling PASS before USER.
@type password: L{bytes}
@param password: A password to test.
"""
return self.runTest(
[b' '.join([b'PASS', password]),
b'QUIT'],
[b'+OK <moshez>',
b'-ERR USER required before PASS',
b'+OK '])
def test_illegal_PASS_before_USER(self):
"""
Test PASS before USER with a wrong password.
"""
return self.run_PASS_before_USER(b'fooz')
def test_empty_PASS_before_USER(self):
"""
Test PASS before USER with an empty password.
"""
return self.run_PASS_before_USER(b'')
def test_one_space_PASS_before_USER(self):
"""
        Test PASS before USER with a password that is a space.
"""
return self.run_PASS_before_USER(b' ')
def test_space_PASS_before_USER(self):
"""
Test PASS before USER with a password containing a space.
"""
return self.run_PASS_before_USER(b'fooz barz')
def test_multiple_spaces_PASS_before_USER(self):
"""
Test PASS before USER with a password containing multiple spaces.
"""
return self.run_PASS_before_USER(b'fooz barz asdf')
def test_other_whitespace_PASS_before_USER(self):
"""
Test PASS before USER with a password containing tabs and spaces.
"""
return self.run_PASS_before_USER(b'fooz barz\tcrazy@! \t ')
def test_good_PASS(self):
"""
Test PASS with a good password.
"""
return self.run_PASS(b'testuser', b'fooz')
def test_space_PASS(self):
"""
Test PASS with a password containing a space.
"""
return self.run_PASS(b'testuser', b'fooz barz')
def test_multiple_spaces_PASS(self):
"""
        Test PASS with a password containing multiple spaces.
"""
return self.run_PASS(b'testuser', b'fooz barz asdf')
def test_other_whitespace_PASS(self):
"""
Test PASS with a password containing tabs and spaces.
"""
return self.run_PASS(b'testuser', b'fooz barz\tcrazy@! \t ')
def test_pass_wrong_user(self):
"""
Test PASS with a wrong user.
"""
return self.run_PASS(b'testuser', b'fooz',
tried_user=b'wronguser')
def test_wrong_PASS(self):
"""
Test PASS with a wrong password.
"""
return self.run_PASS(b'testuser', b'fooz',
tried_password=b'barz')
def test_wrong_space_PASS(self):
"""
        Test PASS with a wrong password when the real password contains a
        space.
"""
return self.run_PASS(b'testuser', b'fooz barz',
tried_password=b'foozbarz ')
def test_wrong_multiple_spaces_PASS(self):
"""
        Test PASS with a wrong password when the real password contains
        multiple spaces.
"""
return self.run_PASS(b'testuser', b'fooz barz asdf',
tried_password=b'foozbarz ')
    def test_wrong_other_whitespace_PASS(self):
        """
        Test PASS with a wrong password when the real password contains
        tabs and spaces.
        """
        return self.run_PASS(b'testuser', b'fooz barz\tcrazy@! \t ',
                             tried_password=b'foozbarz\tcrazy@! \t ')
def test_wrong_command(self):
"""
After logging in, test a dummy command that is not defined.
"""
extra_input = [b'DUMMY COMMAND']
extra_output = [b' '.join([b'-ERR bad protocol or server: POP3Error:',
b'Unknown protocol command: DUMMY'])]
return self.run_PASS(b'testuser', b'testpassword',
after_auth_input=extra_input,
after_auth_output=extra_output,
).addCallback(self.flushLoggedErrors,
pop3.POP3Error)
@implementer(pop3.IServerFactory)
class TestServerFactory:
"""
A L{pop3.IServerFactory} implementation, for use by the test suite, with
some behavior controlled by the values of (settable) public attributes and
other behavior based on values hard-coded both here and in some test
methods.
"""
def cap_IMPLEMENTATION(self):
"""
Return the hard-coded value.
@return: L{pop3.IServerFactory}
"""
return "Test Implementation String"
def cap_EXPIRE(self):
"""
Return the hard-coded value.
@return: L{pop3.IServerFactory}
"""
return 60
challengers = OrderedDict([(b"SCHEME_1", None), (b"SCHEME_2", None)])
def cap_LOGIN_DELAY(self):
"""
Return the hard-coded value.
@return: L{pop3.IServerFactory}
"""
return 120
pue = True
def perUserExpiration(self):
"""
Return the hard-coded value.
@return: L{pop3.IServerFactory}
"""
return self.pue
puld = True
def perUserLoginDelay(self):
"""
Return the hard-coded value.
@return: L{pop3.IServerFactory}
"""
return self.puld
class TestMailbox:
"""
An incomplete L{IMailbox} implementation with certain per-user values
hard-coded and known by tests in this module.
This is useful for testing the server's per-user capability
implementation.
"""
loginDelay = 100
messageExpiration = 25
def contained(testcase, s, *caps):
"""
Assert that the given capability is included in all of the capability
sets.
@param testcase: A L{unittest.TestCase} to use to make assertions.
@param s: The capability for which to check.
@type s: L{bytes}
@param caps: The capability sets in which to check.
@type caps: L{tuple} of iterable
"""
for c in caps:
testcase.assertIn(s, c)
class CapabilityTests(unittest.TestCase):
"""
Tests for L{pop3.POP3}'s per-user capability handling.
"""
def setUp(self):
"""
Create a POP3 server with some capabilities.
"""
s = BytesIO()
p = pop3.POP3()
p.factory = TestServerFactory()
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
p.do_CAPA()
self.caps = p.listCapabilities()
self.pcaps = s.getvalue().splitlines()
s = BytesIO()
p.mbox = TestMailbox()
p.transport = internet.protocol.FileWrapper(s)
p.do_CAPA()
self.lpcaps = s.getvalue().splitlines()
p.connectionLost(failure.Failure(Exception("Test harness disconnect")))
def test_UIDL(self):
"""
The server can advertise the I{UIDL} capability.
"""
contained(self, b"UIDL", self.caps, self.pcaps, self.lpcaps)
def test_TOP(self):
"""
The server can advertise the I{TOP} capability.
"""
contained(self, b"TOP", self.caps, self.pcaps, self.lpcaps)
def test_USER(self):
"""
The server can advertise the I{USER} capability.
"""
contained(self, b"USER", self.caps, self.pcaps, self.lpcaps)
def test_EXPIRE(self):
"""
The server can advertise its per-user expiration as well as a global
expiration.
"""
contained(self, b"EXPIRE 60 USER", self.caps, self.pcaps)
contained(self, b"EXPIRE 25", self.lpcaps)
def test_IMPLEMENTATION(self):
"""
The server can advertise its implementation string.
"""
contained(
self,
b"IMPLEMENTATION Test Implementation String",
self.caps, self.pcaps, self.lpcaps
)
def test_SASL(self):
"""
The server can advertise the SASL schemes it supports.
"""
contained(
self,
b"SASL SCHEME_1 SCHEME_2",
self.caps, self.pcaps, self.lpcaps
)
def test_LOGIN_DELAY(self):
"""
        The server can advertise a per-user login delay as well as a global
        login delay.
"""
contained(self, b"LOGIN-DELAY 120 USER", self.caps, self.pcaps)
self.assertIn(b"LOGIN-DELAY 100", self.lpcaps)
class GlobalCapabilitiesTests(unittest.TestCase):
"""
Tests for L{pop3.POP3}'s global capability handling.
"""
def setUp(self):
"""
Create a POP3 server with some capabilities.
"""
s = BytesIO()
p = pop3.POP3()
p.factory = TestServerFactory()
p.factory.pue = p.factory.puld = False
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
p.do_CAPA()
self.caps = p.listCapabilities()
self.pcaps = s.getvalue().splitlines()
s = BytesIO()
p.mbox = TestMailbox()
p.transport = internet.protocol.FileWrapper(s)
p.do_CAPA()
self.lpcaps = s.getvalue().splitlines()
p.connectionLost(failure.Failure(Exception("Test harness disconnect")))
def test_EXPIRE(self):
"""
I{EXPIRE} is in the server's advertised capabilities.
"""
contained(self, b"EXPIRE 60", self.caps, self.pcaps, self.lpcaps)
def test_LOGIN_DELAY(self):
"""
I{LOGIN-DELAY} is in the server's advertised capabilities.
"""
contained(self, b"LOGIN-DELAY 120", self.caps, self.pcaps, self.lpcaps)
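The capability replies these tests scan with `splitlines()` follow the RFC 2449 multi-line CAPA shape. A hedged sketch of the wire format — the function name is mine, and the exact text of the status line is an assumption:

```python
def format_capa_response(capabilities):
    # RFC 2449 CAPA reply shape: a +OK status line, one capability per
    # line, and a terminating '.' line, all CRLF-delimited.
    lines = [b'+OK'] + list(capabilities) + [b'.']
    return b'\r\n'.join(lines) + b'\r\n'
```

Splitting such a response on line boundaries yields one entry per capability, which is why the tests can assert membership with `contained()`.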
class TestRealm:
"""
An L{IRealm} which knows about a single test account's mailbox.
"""
def requestAvatar(self, avatarId, mind, *interfaces):
"""
Retrieve a mailbox for I{testuser} or fail.
@param avatarId: See L{IRealm.requestAvatar}.
@param mind: See L{IRealm.requestAvatar}.
@param interfaces: See L{IRealm.requestAvatar}.
@raises: L{AssertionError} when requesting an C{avatarId} other than
I{testuser}.
"""
if avatarId == b'testuser':
return pop3.IMailbox, DummyMailbox(ValueError), lambda: None
assert False
class SASLTests(unittest.TestCase):
"""
Tests for L{pop3.POP3}'s SASL implementation.
"""
def test_ValidLogin(self):
"""
A CRAM-MD5-based SASL login attempt succeeds if it uses a username and
a hashed password known to the server's credentials checker.
"""
p = pop3.POP3()
p.factory = TestServerFactory()
p.factory.challengers = {b'CRAM-MD5':
cred.credentials.CramMD5Credentials}
p.portal = cred.portal.Portal(TestRealm())
ch = cred.checkers.InMemoryUsernamePasswordDatabaseDontUse()
ch.addUser(b'testuser', b'testpassword')
p.portal.registerChecker(ch)
s = BytesIO()
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
p.lineReceived(b"CAPA")
self.assertTrue(s.getvalue().find(b"SASL CRAM-MD5") >= 0)
p.lineReceived(b"AUTH CRAM-MD5")
chal = s.getvalue().splitlines()[-1][2:]
chal = base64.decodestring(chal)
response = hmac.HMAC(b'testpassword', chal,
digestmod=md5).hexdigest().encode("ascii")
p.lineReceived(
base64.encodestring(b'testuser ' + response).rstrip(b'\n'))
self.assertTrue(p.mbox)
self.assertTrue(s.getvalue().splitlines()[-1].find(b"+OK") >= 0)
p.connectionLost(failure.Failure(Exception("Test harness disconnect")))
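`test_ValidLogin` builds the client's CRAM-MD5 reply by hand with `hmac.HMAC(...).hexdigest()`. The same computation as a small helper — the function name is mine, the algorithm is RFC 2195:

```python
import base64
import hmac
from hashlib import md5

def cram_md5_response(username, password, challenge_b64):
    # CRAM-MD5 (RFC 2195): the client replies with
    # base64("<user> " + hex(HMAC-MD5(password, challenge))).
    challenge = base64.b64decode(challenge_b64)
    digest = hmac.new(password, challenge, md5).hexdigest().encode('ascii')
    return base64.b64encode(username + b' ' + digest)
```

For the RFC 2195 example (user `tim`, secret `tanstaaftanstaaf`, challenge `<1896.697170952@postoffice.reston.mci.net>`) the hex digest is `b913a602c7eda7a495b4e6e7334d3890`, as given in that RFC.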
class CommandMixin:
"""
Tests for all the commands a POP3 server is allowed to receive.
"""
    extraMessage = b'''\
From: guy
To: fellow

More message text for you.
'''
def setUp(self):
"""
Make a POP3 server protocol instance hooked up to a simple mailbox and
a transport that buffers output to a BytesIO.
"""
p = pop3.POP3()
p.mbox = self.mailboxType(self.exceptionType)
p.schedule = list
self.pop3Server = p
s = BytesIO()
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
s.seek(0)
s.truncate(0)
self.pop3Transport = s
def tearDown(self):
"""
Disconnect the server protocol so it can clean up anything it might
need to clean up.
"""
self.pop3Server.connectionLost(failure.Failure(
Exception("Test harness disconnect")))
def _flush(self):
"""
Do some of the things that the reactor would take care of, if the
reactor were actually running.
"""
# Oh man FileWrapper is pooh.
self.pop3Server.transport._checkProducer()
def test_LIST(self):
"""
Test the two forms of list: with a message index number, which should
return a short-form response, and without a message index number, which
should return a long-form response, one line per message.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"LIST 1")
self._flush()
self.assertEqual(s.getvalue(), b"+OK 1 44\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"LIST")
self._flush()
self.assertEqual(s.getvalue(), b"+OK 1\r\n1 44\r\n.\r\n")
def test_LISTWithBadArgument(self):
"""
Test that non-integers and out-of-bound integers produce appropriate
error responses.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"LIST a")
self.assertEqual(
s.getvalue(),
b"-ERR Invalid message-number: a\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"LIST 0")
self.assertEqual(
s.getvalue(),
b"-ERR Invalid message-number: 0\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"LIST 2")
self.assertEqual(
s.getvalue(),
b"-ERR Invalid message-number: 2\r\n")
s.seek(0)
s.truncate(0)
def test_UIDL(self):
"""
Test the two forms of the UIDL command. These are just like the two
forms of the LIST command.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"UIDL 1")
self.assertEqual(s.getvalue(), b"+OK 0\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"UIDL")
self._flush()
self.assertEqual(s.getvalue(), b"+OK \r\n1 0\r\n.\r\n")
def test_UIDLWithBadArgument(self):
"""
Test that UIDL with a non-integer or an out-of-bounds integer produces
the appropriate error response.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"UIDL a")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"UIDL 0")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"UIDL 2")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
def test_STAT(self):
"""
Test the single form of the STAT command, which returns a short-form
response of the number of messages in the mailbox and their total size.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"STAT")
self._flush()
self.assertEqual(s.getvalue(), b"+OK 1 44\r\n")
def test_RETR(self):
"""
Test downloading a message.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"RETR 1")
self._flush()
self.assertEqual(
s.getvalue(),
b"+OK 44\r\n"
b"From: moshe\r\n"
b"To: moshe\r\n"
b"\r\n"
b"How are you, friend?\r\n"
b".\r\n")
s.seek(0)
s.truncate(0)
def test_RETRWithBadArgument(self):
"""
Test that trying to download a message with a bad argument, either not
an integer or an out-of-bounds integer, fails with the appropriate
error response.
"""
p = self.pop3Server
s = self.pop3Transport
p.lineReceived(b"RETR a")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"RETR 0")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"RETR 2")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
def test_TOP(self):
"""
Test downloading the headers and part of the body of a message.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b"TOP 1 0")
self._flush()
self.assertEqual(
s.getvalue(),
b"+OK Top of message follows\r\n"
b"From: moshe\r\n"
b"To: moshe\r\n"
b"\r\n"
b".\r\n")
def test_TOPWithBadArgument(self):
"""
Test that trying to download a message with a bad argument, either a
message number which isn't an integer or is an out-of-bounds integer or
a number of lines which isn't an integer or is a negative integer,
fails with the appropriate error response.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b"TOP 1 a")
self.assertEqual(
s.getvalue(),
b"-ERR Bad line count argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"TOP 1 -1")
self.assertEqual(
s.getvalue(),
b"-ERR Bad line count argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"TOP a 1")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"TOP 0 1")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
p.lineReceived(b"TOP 3 1")
self.assertEqual(
s.getvalue(),
b"-ERR Bad message number argument\r\n")
s.seek(0)
s.truncate(0)
def test_LAST(self):
"""
Test the exceedingly pointless LAST command, which tells you the
highest message index which you have already downloaded.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b'LAST')
self.assertEqual(
s.getvalue(),
b"+OK 0\r\n")
s.seek(0)
s.truncate(0)
def test_RetrieveUpdatesHighest(self):
"""
Test that issuing a RETR command updates the LAST response.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b'RETR 2')
self._flush()
s.seek(0)
s.truncate(0)
p.lineReceived(b'LAST')
self.assertEqual(
s.getvalue(),
b'+OK 2\r\n')
s.seek(0)
s.truncate(0)
def test_TopUpdatesHighest(self):
"""
Test that issuing a TOP command updates the LAST response.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b'TOP 2 10')
self._flush()
s.seek(0)
s.truncate(0)
p.lineReceived(b'LAST')
self.assertEqual(
s.getvalue(),
b'+OK 2\r\n')
def test_HighestOnlyProgresses(self):
"""
Test that downloading a message with a smaller index than the current
LAST response doesn't change the LAST response.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b'RETR 2')
self._flush()
p.lineReceived(b'TOP 1 10')
self._flush()
s.seek(0)
s.truncate(0)
p.lineReceived(b'LAST')
self.assertEqual(
s.getvalue(),
b'+OK 2\r\n')
def test_ResetClearsHighest(self):
"""
Test that issuing RSET changes the LAST response to 0.
"""
p = self.pop3Server
s = self.pop3Transport
p.mbox.messages.append(self.extraMessage)
p.lineReceived(b'RETR 2')
self._flush()
p.lineReceived(b'RSET')
s.seek(0)
s.truncate(0)
p.lineReceived(b'LAST')
self.assertEqual(
s.getvalue(),
b'+OK 0\r\n')
_listMessageDeprecation = (
"twisted.mail.pop3.IMailbox.listMessages may not "
"raise IndexError for out-of-bounds message numbers: "
"raise ValueError instead.")
_listMessageSuppression = util.suppress(
message=_listMessageDeprecation,
category=PendingDeprecationWarning)
_getUidlDeprecation = (
"twisted.mail.pop3.IMailbox.getUidl may not "
"raise IndexError for out-of-bounds message numbers: "
"raise ValueError instead.")
_getUidlSuppression = util.suppress(
message=_getUidlDeprecation,
category=PendingDeprecationWarning)
class IndexErrorCommandTests(CommandMixin, unittest.TestCase):
"""
Run all of the command tests against a mailbox which raises IndexError
when an out of bounds request is made. This behavior will be deprecated
shortly and then removed.
"""
exceptionType = IndexError
mailboxType = DummyMailbox
def test_LISTWithBadArgument(self):
"""
An attempt to get metadata about a message with a bad argument fails
with an I{ERR} response even if the mailbox implementation raises
L{IndexError}.
"""
return CommandMixin.test_LISTWithBadArgument(self)
test_LISTWithBadArgument.suppress = [_listMessageSuppression]
def test_UIDLWithBadArgument(self):
"""
An attempt to look up the UID of a message with a bad argument fails
with an I{ERR} response even if the mailbox implementation raises
L{IndexError}.
"""
return CommandMixin.test_UIDLWithBadArgument(self)
test_UIDLWithBadArgument.suppress = [_getUidlSuppression]
def test_TOPWithBadArgument(self):
"""
An attempt to download some of a message with a bad argument fails with
an I{ERR} response even if the mailbox implementation raises
L{IndexError}.
"""
return CommandMixin.test_TOPWithBadArgument(self)
test_TOPWithBadArgument.suppress = [_listMessageSuppression]
def test_RETRWithBadArgument(self):
"""
An attempt to download a message with a bad argument fails with an
I{ERR} response even if the mailbox implementation raises
L{IndexError}.
"""
return CommandMixin.test_RETRWithBadArgument(self)
test_RETRWithBadArgument.suppress = [_listMessageSuppression]
class ValueErrorCommandTests(CommandMixin, unittest.TestCase):
"""
Run all of the command tests against a mailbox which raises ValueError
when an out of bounds request is made. This is the correct behavior and
after support for mailboxes which raise IndexError is removed, this will
become just C{CommandTestCase}.
"""
exceptionType = ValueError
mailboxType = DummyMailbox
class SyncDeferredMailbox(DummyMailbox):
"""
Mailbox which has a listMessages implementation which returns a Deferred
which has already fired.
"""
def listMessages(self, n=None):
"""
Synchronously list messages.
@type n: L{int} or L{None}
@param n: The 0-based index of the message.
@return: A L{Deferred} which already has a message list result.
"""
return defer.succeed(DummyMailbox.listMessages(self, n))
class IndexErrorSyncDeferredCommandTests(IndexErrorCommandTests):
"""
Run all of the L{IndexErrorCommandTests} tests with a
synchronous-Deferred returning IMailbox implementation.
"""
mailboxType = SyncDeferredMailbox
class ValueErrorSyncDeferredCommandTests(ValueErrorCommandTests):
"""
Run all of the L{ValueErrorCommandTests} tests with a
synchronous-Deferred returning IMailbox implementation.
"""
mailboxType = SyncDeferredMailbox
class AsyncDeferredMailbox(DummyMailbox):
"""
Mailbox which has a listMessages implementation which returns a Deferred
which has not yet fired.
"""
def __init__(self, *a, **kw):
self.waiting = []
DummyMailbox.__init__(self, *a, **kw)
def listMessages(self, n=None):
"""
Record a new unfired L{Deferred} in C{self.waiting} and return it.
@type n: L{int} or L{None}
@param n: The 0-based index of the message.
@return: The L{Deferred}
"""
d = defer.Deferred()
# See AsyncDeferredMailbox._flush
self.waiting.append((d, DummyMailbox.listMessages(self, n)))
return d
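The `waiting` list pattern above — queue each result next to its Deferred, then fire them all later — can be sketched without Twisted using plain callbacks. `PendingResults` is a hypothetical stand-in, not Twisted's `Deferred`; note that `pop()` drains the queue LIFO:

```python
from typing import Any, Callable, List, Tuple

class PendingResults:
    # Hypothetical stdlib stand-in for the (Deferred, result) bookkeeping:
    # queue each callback with its value, then fire them all on flush().
    def __init__(self) -> None:
        self.waiting: List[Tuple[Callable[[Any], None], Any]] = []

    def defer(self, callback: Callable[[Any], None], value: Any) -> None:
        self.waiting.append((callback, value))

    def flush(self) -> None:
        # pop() drains from the end, so delivery is LIFO.
        while self.waiting:
            callback, value = self.waiting.pop()
            callback(value)

seen = []
pending = PendingResults()
pending.defer(seen.append, 1)
pending.defer(seen.append, 2)
pending.flush()   # fires 2 first, then 1
```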
class IndexErrorAsyncDeferredCommandTests(IndexErrorCommandTests):
"""
Run all of the L{IndexErrorCommandTests} tests with an
asynchronous-Deferred returning IMailbox implementation.
"""
mailboxType = AsyncDeferredMailbox
def _flush(self):
"""
Fire whatever Deferreds we've built up in our mailbox.
"""
while self.pop3Server.mbox.waiting:
d, a = self.pop3Server.mbox.waiting.pop()
d.callback(a)
IndexErrorCommandTests._flush(self)
class ValueErrorAsyncDeferredCommandTests(ValueErrorCommandTests):
"""
Run all of the L{IndexErrorCommandTests} tests with an
asynchronous-Deferred returning IMailbox implementation.
"""
mailboxType = AsyncDeferredMailbox
def _flush(self):
"""
Fire whatever Deferreds we've built up in our mailbox.
"""
while self.pop3Server.mbox.waiting:
d, a = self.pop3Server.mbox.waiting.pop()
d.callback(a)
ValueErrorCommandTests._flush(self)
class POP3MiscTests(unittest.TestCase):
"""
Miscellaneous tests more to do with module/package structure than
anything to do with the Post Office Protocol.
"""
def test_all(self):
"""
This test checks that all names listed in
twisted.mail.pop3.__all__ are actually present in the module.
"""
mod = twisted.mail.pop3
for attr in mod.__all__:
self.assertTrue(hasattr(mod, attr))
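The long-form LIST replies asserted throughout the command tests above follow a fixed shape: a `+OK <count>` status line, one `<index> <size>` line per message, and a lone `.` terminator, all CRLF-delimited. A minimal stand-alone formatter (a sketch, not Twisted's `pop3.formatListResponse`) might look like:

```python
from typing import List

def format_list_response(sizes: List[int]) -> bytes:
    # Illustrative sketch: long-form LIST reply -- "+OK <count>", one
    # "<index> <size>" line per message, and a lone "." terminator.
    lines = [b'+OK %d\r\n' % len(sizes)]
    for index, size in enumerate(sizes, start=1):
        lines.append(b'%d %d\r\n' % (index, size))
    lines.append(b'.\r\n')
    return b''.join(lines)
```

For a one-message mailbox of 44 bytes this yields exactly the `b"+OK 1\r\n1 44\r\n.\r\n"` value the tests assert.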
| 28.080645 | 79 | 0.583381 |
import hmac
import base64
import itertools
from hashlib import md5
from collections import OrderedDict
from io import BytesIO
from zope.interface import implementer
from twisted import cred
from twisted import internet
from twisted import mail
from twisted.internet import defer
from twisted.mail import pop3
from twisted.protocols import loopback
from twisted.python import failure
from twisted.python.compat import intToBytes
from twisted.test.proto_helpers import LineSendingProtocol
from twisted.trial import unittest, util
import twisted.cred.checkers
import twisted.cred.credentials
import twisted.cred.portal
import twisted.internet.protocol
import twisted.mail.pop3
import twisted.mail.protocols
class UtilityTests(unittest.TestCase):
def test_LineBuffering(self):
output = []
input = iter(itertools.cycle(['012', '345', '6', '7', '8', '9']))
c = pop3._IteratorBuffer(output.extend, input, 6)
i = iter(c)
self.assertEqual(output, [])
next(i)
self.assertEqual(output, [])
next(i)
self.assertEqual(output, [])
next(i)
self.assertEqual(output, ['012', '345', '6'])
for n in range(5):
next(i)
self.assertEqual(output, ['012', '345', '6', '7', '8', '9', '012',
'345'])
def test_FinishLineBuffering(self):
output = []
input = iter(['a', 'b', 'c'])
c = pop3._IteratorBuffer(output.extend, input, 5)
for i in c:
pass
self.assertEqual(output, ['a', 'b', 'c'])
def test_SuccessResponseFormatter(self):
self.assertEqual(
pop3.successResponse(b'Great.'),
b'+OK Great.\r\n')
def test_StatLineFormatter(self):
statLine = list(pop3.formatStatResponse([]))[-1]
self.assertEqual(statLine, b'+OK 0 0\r\n')
statLine = list(pop3.formatStatResponse([10, 31, 0, 10101]))[-1]
self.assertEqual(statLine, b'+OK 4 10142\r\n')
def test_ListLineFormatter(self):
listLines = list(pop3.formatListResponse([]))
self.assertEqual(
listLines,
[b'+OK 0\r\n', b'.\r\n'])
listLines = list(pop3.formatListResponse([1, 2, 3, 100]))
self.assertEqual(
listLines,
[b'+OK 4\r\n', b'1 1\r\n', b'2 2\r\n', b'3 3\r\n', b'4 100\r\n',
b'.\r\n'])
def test_UIDListLineFormatter(self):
uids = ['abc', 'def', 'ghi']
listLines = list(pop3.formatUIDListResponse([], uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'.\r\n'])
listLines = list(pop3.formatUIDListResponse([123, 431, 591],
uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'1 abc\r\n', b'2 def\r\n', b'3 ghi\r\n', b'.\r\n'])
listLines = list(pop3.formatUIDListResponse([0, None, 591],
uids.__getitem__))
self.assertEqual(
listLines,
[b'+OK \r\n', b'1 abc\r\n', b'3 ghi\r\n', b'.\r\n'])
class MyVirtualPOP3(mail.protocols.VirtualPOP3):
magic = b'<moshez>'
def authenticateUserAPOP(self, user, digest):
user, domain = self.lookupDomain(user)
return self.service.domains[b'baz.com'].authenticateUserAPOP(
user, digest, self.magic, domain)
class DummyDomain:
def __init__(self):
self.users = {}
def addUser(self, name):
self.users[name] = []
def addMessage(self, name, message):
self.users[name].append(message)
def authenticateUserAPOP(self, name, digest, magic, domain):
return pop3.IMailbox, ListMailbox(self.users[name]), lambda: None
class ListMailbox:
def __init__(self, list):
self.list = list
def listMessages(self, i=None):
if i is None:
return [len(l) for l in self.list]
return len(self.list[i])
def getMessage(self, i):
return BytesIO(self.list[i])
def getUidl(self, i):
return i
def deleteMessage(self, i):
self.list[i] = b''
def sync(self):
    pass
class MyPOP3Downloader(pop3.POP3Client):
def handle_WELCOME(self, line):
pop3.POP3Client.handle_WELCOME(self, line)
self.apop(b'hello@baz.com', b'world')
def handle_APOP(self, line):
parts = line.split()
code = parts[0]
if code != b'+OK':
raise AssertionError('code is: %s , parts is: %s ' % (code, parts))
self.lines = []
self.retr(1)
def handle_RETR_continue(self, line):
self.lines.append(line)
def handle_RETR_end(self):
self.message = b'\n'.join(self.lines) + b'\n'
self.quit()
def handle_QUIT(self, line):
if line[:3] != b'+OK':
raise AssertionError(b'code is ' + line)
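The `APOP` login exercised below follows RFC 1939: the client hashes the server's timestamp banner together with the shared secret and sends the lowercase hex digest. A stdlib-only sketch of that digest (the dummy realm in these tests accepts any digest, so this is purely illustrative):

```python
from hashlib import md5

def apop_digest(timestamp_banner: bytes, secret: bytes) -> bytes:
    # RFC 1939 APOP digest: lowercase hex MD5 of the server's timestamp
    # banner (e.g. b'<moshez>') concatenated with the shared secret.
    return md5(timestamp_banner + secret).hexdigest().encode("ascii")

digest = apop_digest(b'<moshez>', b'world')   # the magic/password pair used below
```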
class POP3Tests(unittest.TestCase):
message = b'''\
Subject: urgent

Someone set up us the bomb!
'''
expectedOutput = (b'''\
+OK <moshez>\015
+OK Authentication succeeded\015
+OK \015
1 0\015
.\015
+OK ''' + intToBytes(len(message)) + b'''\015
Subject: urgent\015
\015
Someone set up us the bomb!\015
.\015
+OK \015
''')
def setUp(self):
self.factory = internet.protocol.Factory()
self.factory.domains = {}
self.factory.domains[b'baz.com'] = DummyDomain()
self.factory.domains[b'baz.com'].addUser(b'hello')
self.factory.domains[b'baz.com'].addMessage(b'hello', self.message)
def test_messages(self):
client = LineSendingProtocol([
b'APOP hello@baz.com world',
b'UIDL',
b'RETR 1',
b'QUIT',
])
server = MyVirtualPOP3()
server.service = self.factory
def check(ignored):
output = b'\r\n'.join(client.response) + b'\r\n'
self.assertEqual(output, self.expectedOutput)
return loopback.loopbackTCP(server, client).addCallback(check)
def test_loopback(self):
protocol = MyVirtualPOP3()
protocol.service = self.factory
clientProtocol = MyPOP3Downloader()
def check(ignored):
self.assertEqual(clientProtocol.message, self.message)
protocol.connectionLost(
failure.Failure(Exception("Test harness disconnect")))
d = loopback.loopbackAsync(protocol, clientProtocol)
return d.addCallback(check)
test_loopback.suppress = [util.suppress(
message="twisted.mail.pop3.POP3Client is deprecated")]
def test_incorrectDomain(self):
factory = internet.protocol.Factory()
factory.domains = {}
factory.domains[b'twistedmatrix.com'] = DummyDomain()
server = MyVirtualPOP3()
server.service = factory
exc = self.assertRaises(pop3.POP3Error,
server.authenticateUserAPOP, b'nobody@baz.com', b'password')
self.assertEqual(exc.args[0], 'no such domain baz.com')
class DummyPOP3(pop3.POP3):
magic = b'<moshez>'
def authenticateUserAPOP(self, user, password):
return pop3.IMailbox, DummyMailbox(ValueError), lambda: None
class DummyPOP3Auth(DummyPOP3):
def __init__(self, user, password):
self.portal = cred.portal.Portal(TestRealm())
ch = cred.checkers.InMemoryUsernamePasswordDatabaseDontUse()
ch.addUser(user, password)
self.portal.registerChecker(ch)
class DummyMailbox(pop3.Mailbox):
messages = [b'From: moshe\nTo: moshe\n\nHow are you, friend?\n']
def __init__(self, exceptionType):
self.messages = DummyMailbox.messages[:]
self.exceptionType = exceptionType
def listMessages(self, i=None):
if i is None:
return [len(m) for m in self.messages]
if i >= len(self.messages):
raise self.exceptionType()
return len(self.messages[i])
def getMessage(self, i):
return BytesIO(self.messages[i])
def getUidl(self, i):
if i >= len(self.messages):
raise self.exceptionType()
return intToBytes(i)
def deleteMessage(self, i):
self.messages[i] = b''
class AnotherPOP3Tests(unittest.TestCase):
def runTest(self, lines, expectedOutput, protocolInstance=None):
dummy = protocolInstance if protocolInstance else DummyPOP3()
client = LineSendingProtocol(lines)
d = loopback.loopbackAsync(dummy, client)
return d.addCallback(self._cbRunTest, client, dummy, expectedOutput)
def _cbRunTest(self, ignored, client, dummy, expectedOutput):
self.assertEqual(b'\r\n'.join(expectedOutput),
b'\r\n'.join(client.response))
dummy.connectionLost(failure.Failure(
Exception("Test harness disconnect")))
return ignored
def test_buffer(self):
return self.runTest(
[b"APOP moshez dummy",
b"LIST",
b"UIDL",
b"RETR 1",
b"RETR 2",
b"DELE 1",
b"RETR 1",
b"QUIT"],
[b'+OK <moshez>',
b'+OK Authentication succeeded',
b'+OK 1',
b'1 44',
b'.',
b'+OK ',
b'1 0',
b'.',
b'+OK 44',
b'From: moshe',
b'To: moshe',
b'',
b'How are you, friend?',
b'.',
b'-ERR Bad message number argument',
b'+OK ',
b'-ERR message deleted',
b'+OK '])
def test_noop(self):
return self.runTest(
[b'APOP spiv dummy',
b'NOOP',
b'QUIT'],
[b'+OK <moshez>',
b'+OK Authentication succeeded',
b'+OK ',
b'+OK '])
def test_badUTF8CharactersInCommand(self):
error = b'not authenticated yet: cannot do \x81PASS'
d = self.runTest(
[b'\x81PASS',
b'QUIT'],
[b'+OK <moshez>',
b"-ERR bad protocol or server: POP3Error: " +
error,
b'+OK '])
errors = self.flushLoggedErrors(pop3.POP3Error)
self.assertEqual(len(errors), 1)
return d
def test_authListing(self):
p = DummyPOP3()
p.factory = internet.protocol.Factory()
p.factory.challengers = {b'Auth1': None, b'secondAuth': None,
b'authLast': None}
client = LineSendingProtocol([
b"AUTH",
b"QUIT",
])
d = loopback.loopbackAsync(p, client)
return d.addCallback(self._cbTestAuthListing, client)
def _cbTestAuthListing(self, ignored, client):
self.assertTrue(client.response[1].startswith(b'+OK'))
self.assertEqual(sorted(client.response[2:5]),
[b"AUTH1", b"AUTHLAST", b"SECONDAUTH"])
self.assertEqual(client.response[5], b".")
def run_PASS(self, real_user, real_password,
tried_user=None, tried_password=None,
after_auth_input=[], after_auth_output=[]):
if not tried_user:
tried_user = real_user
if not tried_password:
tried_password = real_password
response = [b'+OK <moshez>',
b'+OK USER accepted, send PASS',
b'-ERR Authentication failed']
if real_user == tried_user and real_password == tried_password:
response = [b'+OK <moshez>',
b'+OK USER accepted, send PASS',
b'+OK Authentication succeeded']
fullInput = [b' '.join([b'USER', tried_user]),
b' '.join([b'PASS', tried_password])]
fullInput += after_auth_input + [b'QUIT']
response += after_auth_output + [b'+OK ']
return self.runTest(
fullInput,
response,
protocolInstance=DummyPOP3Auth(real_user, real_password))
def run_PASS_before_USER(self, password):
return self.runTest(
[b' '.join([b'PASS', password]),
b'QUIT'],
[b'+OK <moshez>',
b'-ERR USER required before PASS',
b'+OK '])
def test_illegal_PASS_before_USER(self):
return self.run_PASS_before_USER(b'fooz')
def test_empty_PASS_before_USER(self):
return self.run_PASS_before_USER(b'')
def test_one_space_PASS_before_USER(self):
return self.run_PASS_before_USER(b' ')
def test_space_PASS_before_USER(self):
return self.run_PASS_before_USER(b'fooz barz')
def test_multiple_spaces_PASS_before_USER(self):
return self.run_PASS_before_USER(b'fooz barz asdf')
def test_other_whitespace_PASS_before_USER(self):
return self.run_PASS_before_USER(b'fooz barz\tcrazy@! \t ')
def test_good_PASS(self):
return self.run_PASS(b'testuser', b'fooz')
def test_space_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz')
def test_multiple_spaces_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz asdf')
def test_other_whitespace_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz\tcrazy@! \t ')
def test_pass_wrong_user(self):
return self.run_PASS(b'testuser', b'fooz',
tried_user=b'wronguser')
def test_wrong_PASS(self):
return self.run_PASS(b'testuser', b'fooz',
tried_password=b'barz')
def test_wrong_space_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz',
tried_password=b'foozbarz ')
def test_wrong_multiple_spaces_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz asdf',
tried_password=b'foozbarz ')
def test_wrong_other_whitespace_PASS(self):
return self.run_PASS(b'testuser', b'fooz barz\tcrazy@! \t ')
def test_wrong_command(self):
extra_input = [b'DUMMY COMMAND']
extra_output = [b' '.join([b'-ERR bad protocol or server: POP3Error:',
b'Unknown protocol command: DUMMY'])]
return self.run_PASS(b'testuser', b'testpassword',
after_auth_input=extra_input,
after_auth_output=extra_output,
).addCallback(self.flushLoggedErrors,
pop3.POP3Error)
@implementer(pop3.IServerFactory)
class TestServerFactory:
def cap_IMPLEMENTATION(self):
return "Test Implementation String"
def cap_EXPIRE(self):
return 60
challengers = OrderedDict([(b"SCHEME_1", None), (b"SCHEME_2", None)])
def cap_LOGIN_DELAY(self):
return 120
pue = True
def perUserExpiration(self):
return self.pue
puld = True
def perUserLoginDelay(self):
return self.puld
class TestMailbox:
loginDelay = 100
messageExpiration = 25
def contained(testcase, s, *caps):
for c in caps:
testcase.assertIn(s, c)
class CapabilityTests(unittest.TestCase):
def setUp(self):
s = BytesIO()
p = pop3.POP3()
p.factory = TestServerFactory()
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
p.do_CAPA()
self.caps = p.listCapabilities()
self.pcaps = s.getvalue().splitlines()
s = BytesIO()
p.mbox = TestMailbox()
p.transport = internet.protocol.FileWrapper(s)
p.do_CAPA()
self.lpcaps = s.getvalue().splitlines()
p.connectionLost(failure.Failure(Exception("Test harness disconnect")))
def test_UIDL(self):
contained(self, b"UIDL", self.caps, self.pcaps, self.lpcaps)
def test_TOP(self):
contained(self, b"TOP", self.caps, self.pcaps, self.lpcaps)
def test_USER(self):
contained(self, b"USER", self.caps, self.pcaps, self.lpcaps)
def test_EXPIRE(self):
contained(self, b"EXPIRE 60 USER", self.caps, self.pcaps)
contained(self, b"EXPIRE 25", self.lpcaps)
def test_IMPLEMENTATION(self):
contained(
self,
b"IMPLEMENTATION Test Implementation String",
self.caps, self.pcaps, self.lpcaps
)
def test_SASL(self):
contained(
self,
b"SASL SCHEME_1 SCHEME_2",
self.caps, self.pcaps, self.lpcaps
)
def test_LOGIN_DELAY(self):
contained(self, b"LOGIN-DELAY 120 USER", self.caps, self.pcaps)
self.assertIn(b"LOGIN-DELAY 100", self.lpcaps)
class GlobalCapabilitiesTests(unittest.TestCase):
def setUp(self):
s = BytesIO()
p = pop3.POP3()
p.factory = TestServerFactory()
p.factory.pue = p.factory.puld = False
p.transport = internet.protocol.FileWrapper(s)
p.connectionMade()
p.do_CAPA()
self.caps = p.listCapabilities()
self.pcaps = s.getvalue().splitlines()
s = BytesIO()
p.mbox = TestMailbox()
p.transport = internet.protocol.FileWrapper(s)
p.do_CAPA()
self.lpcaps = s.getvalue().splitlines()
p.connectionLost(failure.Failure(Exception("Test harness disconnect")))
def test_EXPIRE(self):
contained(self, b"EXPIRE 60", self.caps, self.pcaps, self.lpcaps)
def test_LOGIN_DELAY(self):
contained(self, b"LOGIN-DELAY 120", self.caps, self.pcaps, self.lpcaps)
class TestRealm:
def requestAvatar(self, avatarId, mind, *interfaces):
if avatarId == b'testuser':
return pop3.IMailbox, DummyMailbox(ValueError), lambda: None
assert False
| true | true |
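The RETR/TOP/LAST tests above all assert one server-side invariant: the highest message index touched by RETR or TOP only ever moves upward, and RSET clears it. A minimal stand-alone sketch of that bookkeeping (illustrative only — not Twisted's actual implementation):

```python
class HighestTracker:
    """Tracks the highest message index a client has retrieved.

    Mirrors the invariant the tests assert: RETR/TOP raise the
    watermark, LAST reports it, RSET clears it back to zero.
    """

    def __init__(self):
        self.highest = 0

    def retr(self, index):
        # RETR and TOP both advance the watermark, never lower it
        self.highest = max(self.highest, index)

    def last(self):
        return self.highest

    def rset(self):
        # RSET returns the session to its initial state
        self.highest = 0


t = HighestTracker()
t.retr(2)
t.retr(1)          # a lower index does not regress the watermark
print(t.last())    # -> 2
t.rset()
print(t.last())    # -> 0
```

Keeping the watermark monotone is exactly what `test_HighestOnlyProgresses` checks: a later `TOP 1` cannot lower a `LAST` of 2.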
f7fbfb511166364af3d606200d30680cfd63cfc2 | 675 | py | Python | nova/network/quantum/__init__.py | armaan/nova | 22859fccb95502efcb73ecf2bd827c45c0886bd3 | [
"Apache-2.0"
] | 1 | 2021-11-08T10:11:44.000Z | 2021-11-08T10:11:44.000Z | nova/network/quantum/__init__.py | armaan/nova | 22859fccb95502efcb73ecf2bd827c45c0886bd3 | [
"Apache-2.0"
] | 1 | 2020-07-24T14:14:13.000Z | 2020-07-24T14:14:13.000Z | nova/network/quantum/__init__.py | armaan/nova | 22859fccb95502efcb73ecf2bd827c45c0886bd3 | [
"Apache-2.0"
] | 2 | 2019-06-12T00:52:15.000Z | 2020-07-24T10:35:29.000Z | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2011 Nicira Networks
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
| 39.705882 | 78 | 0.731852 | true | true | |
f7fbfccc6f21e4a7f82e7690279110bd1d673d81 | 3,462 | py | Python | codes/utils/logger.py | kingsj0405/Explorable-Super-Resolution | 6582477ec1e2b0c6f4bd781552ac880fabdb4496 | [
"Apache-2.0"
] | 54 | 2020-06-16T07:44:19.000Z | 2022-03-01T06:23:38.000Z | codes/utils/logger.py | kingsj0405/Explorable-Super-Resolution | 6582477ec1e2b0c6f4bd781552ac880fabdb4496 | [
"Apache-2.0"
] | 6 | 2020-07-01T08:59:24.000Z | 2021-02-22T19:58:23.000Z | codes/utils/logger.py | kingsj0405/Explorable-Super-Resolution | 6582477ec1e2b0c6f4bd781552ac880fabdb4496 | [
"Apache-2.0"
] | 11 | 2020-06-16T21:28:00.000Z | 2022-01-06T12:28:58.000Z | import os
import sys
from utils.util import get_timestamp
# print to file and std_out simultaneously
class PrintLogger(object):
def __init__(self, log_path):
self.terminal = sys.stdout
self.log = open(os.path.join(log_path, 'print_log.txt'), 'a')
def write(self, message):
self.terminal.write(message)
self.log.write(message)
def flush(self):
pass
class Logger(object):
def __init__(self, opt,tb_logger_suffix=''):
self.exp_name = opt['name']
self.use_tb_logger = opt['use_tb_logger']
self.opt = opt['logger']
self.log_dir = opt['path']['log']
if not os.path.isdir(self.log_dir):
os.mkdir(self.log_dir)
# loss log file
self.loss_log_path = os.path.join(self.log_dir, 'loss_log.txt')
with open(self.loss_log_path, 'a') as log_file:
log_file.write('=============== Time: ' + get_timestamp() + ' =============\n')
log_file.write('================ Training Losses ================\n')
# val results log file
self.val_log_path = os.path.join(self.log_dir, 'val_log.txt')
with open(self.val_log_path, 'a') as log_file:
log_file.write('================ Time: ' + get_timestamp() + ' ===============\n')
log_file.write('================ Validation Results ================\n')
if self.use_tb_logger:# and 'debug' not in self.exp_name:
from tensorboard_logger import Logger as TensorboardLogger
logger_dir_num = 0
tb_logger_dir = self.log_dir.replace('experiments', 'logs')
if not os.path.isdir(tb_logger_dir):
os.mkdir(tb_logger_dir)
existing_dirs = sorted([dir.split('_')[0] for dir in os.listdir(tb_logger_dir) if os.path.isdir(os.path.join(tb_logger_dir,dir))],key=lambda x:int(x.split('_')[0]))
if len(existing_dirs)>0:
logger_dir_num = int(existing_dirs[-1])+1
self.tb_logger = TensorboardLogger(os.path.join(tb_logger_dir,str(logger_dir_num)+tb_logger_suffix))
def print_format_results(self, mode, rlt,dont_print=False,keys_ignore_list=[]):
epoch = rlt.pop('epoch')
iters = rlt.pop('iters')
time = rlt.pop('time')
model = rlt.pop('model')
if 'lr' in rlt:
lr = rlt.pop('lr')
message = '<epoch:{:3d}, iter:{:8,d}, time:{:.2f}, lr:{:.1e}> '.format(
epoch, iters, time, lr)
else:
message = '<epoch:{:3d}, iter:{:8,d}, time:{:.2f}> '.format(epoch, iters, time)
for label, value in rlt.items():
if label in keys_ignore_list or '_baseline' in label:
continue
if mode == 'train':
message += '{:s}: {:.4e} '.format(label, value)
elif mode == 'val':
message += '{:s}: {:.4e} '.format(label, value)
# tensorboard logger
if self.use_tb_logger:# and 'debug' not in self.exp_name:
self.tb_logger.log_value(label, value, iters)
# print in console
if not dont_print:
print(message)
# write in log file
if mode == 'train':
with open(self.loss_log_path, 'a') as log_file:
log_file.write(message + '\n')
elif mode == 'val':
with open(self.val_log_path, 'a') as log_file:
log_file.write(message + '\n')
| 42.219512 | 176 | 0.554015 | import os
import sys
from utils.util import get_timestamp
class PrintLogger(object):
def __init__(self, log_path):
self.terminal = sys.stdout
self.log = open(os.path.join(log_path, 'print_log.txt'), 'a')
def write(self, message):
self.terminal.write(message)
self.log.write(message)
def flush(self):
pass
class Logger(object):
def __init__(self, opt,tb_logger_suffix=''):
self.exp_name = opt['name']
self.use_tb_logger = opt['use_tb_logger']
self.opt = opt['logger']
self.log_dir = opt['path']['log']
if not os.path.isdir(self.log_dir):
os.mkdir(self.log_dir)
self.loss_log_path = os.path.join(self.log_dir, 'loss_log.txt')
with open(self.loss_log_path, 'a') as log_file:
log_file.write('=============== Time: ' + get_timestamp() + ' =============\n')
log_file.write('================ Training Losses ================\n')
self.val_log_path = os.path.join(self.log_dir, 'val_log.txt')
with open(self.val_log_path, 'a') as log_file:
log_file.write('================ Time: ' + get_timestamp() + ' ===============\n')
log_file.write('================ Validation Results ================\n')
if self.use_tb_logger:
from tensorboard_logger import Logger as TensorboardLogger
logger_dir_num = 0
tb_logger_dir = self.log_dir.replace('experiments', 'logs')
if not os.path.isdir(tb_logger_dir):
os.mkdir(tb_logger_dir)
existing_dirs = sorted([dir.split('_')[0] for dir in os.listdir(tb_logger_dir) if os.path.isdir(os.path.join(tb_logger_dir,dir))],key=lambda x:int(x.split('_')[0]))
if len(existing_dirs)>0:
logger_dir_num = int(existing_dirs[-1])+1
self.tb_logger = TensorboardLogger(os.path.join(tb_logger_dir,str(logger_dir_num)+tb_logger_suffix))
def print_format_results(self, mode, rlt,dont_print=False,keys_ignore_list=[]):
epoch = rlt.pop('epoch')
iters = rlt.pop('iters')
time = rlt.pop('time')
model = rlt.pop('model')
if 'lr' in rlt:
lr = rlt.pop('lr')
message = '<epoch:{:3d}, iter:{:8,d}, time:{:.2f}, lr:{:.1e}> '.format(
epoch, iters, time, lr)
else:
message = '<epoch:{:3d}, iter:{:8,d}, time:{:.2f}> '.format(epoch, iters, time)
for label, value in rlt.items():
if label in keys_ignore_list or '_baseline' in label:
continue
if mode == 'train':
message += '{:s}: {:.4e} '.format(label, value)
elif mode == 'val':
message += '{:s}: {:.4e} '.format(label, value)
if self.use_tb_logger:
self.tb_logger.log_value(label, value, iters)
if not dont_print:
print(message)
if mode == 'train':
with open(self.loss_log_path, 'a') as log_file:
log_file.write(message + '\n')
elif mode == 'val':
with open(self.val_log_path, 'a') as log_file:
log_file.write(message + '\n')
| true | true |
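`PrintLogger` above tees everything written to `sys.stdout` into `print_log.txt` by replacing `sys.stdout` with a wrapper object. The same pattern in a self-contained form (the `Tee` class and streams here are illustrative, not part of the original module):

```python
import io
import sys


class Tee:
    """Duplicate writes to several streams, the way PrintLogger
    tees stdout into a log file."""

    def __init__(self, *streams):
        self.streams = streams

    def write(self, message):
        for s in self.streams:
            s.write(message)

    def flush(self):
        for s in self.streams:
            s.flush()


capture = io.StringIO()
original = sys.stdout
sys.stdout = Tee(original, capture)  # from here on, print() goes to both
try:
    print("hello")
finally:
    sys.stdout = original            # always restore the real stdout

assert capture.getvalue() == "hello\n"
```

Restoring `sys.stdout` in a `finally` block matters in practice; the original class never restores it, which is fine for a training script that owns the process but risky in anything longer-lived.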
f7fbfd64a6f56e1f41ed05c6bb641e2db24f156a | 7,108 | py | Python | var/spack/repos/builtin/packages/hip-rocclr/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | var/spack/repos/builtin/packages/hip-rocclr/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | null | null | null | var/spack/repos/builtin/packages/hip-rocclr/package.py | jeanbez/spack | f4e51ce8f366c85bf5aa0eafe078677b42dae1ba | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2 | 2019-02-08T20:37:20.000Z | 2019-03-31T15:19:26.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class HipRocclr(CMakePackage):
"""Hip-ROCclr is a virtual device interface that compute runtimes interact
with to different backends such as ROCr or PAL This abstraction allows
runtimes to work on Windows as well as on Linux without much effort."""
homepage = "https://github.com/ROCm-Developer-Tools/ROCclr"
git = "https://github.com/ROCm-Developer-Tools/ROCclr.git"
maintainers = ['srekolam', 'arjun-raj-kuppala']
def url_for_version(self, version):
# Fix up a typo in the 3.5.0 release.
if version == Version('3.5.0'):
return "https://github.com/ROCm-Developer-Tools/ROCclr/archive/roc-3.5.0.tar.gz"
url = "https://github.com/ROCm-Developer-Tools/ROCclr/archive/rocm-{0}.tar.gz"
return url.format(version)
version('master', branch='main')
version('5.1.3', sha256='ddee63cdc6515c90bab89572b13e1627b145916cb8ede075ef8446cbb83f0a48')
version('5.1.0', sha256='f4f265604b534795a275af902b2c814f416434d9c9e16db81b3ed5d062187dfa')
version('5.0.2', sha256='34decd84652268dde865f38e66f8fb4750a08c2457fea52ad962bced82a03e5e')
version('5.0.0', sha256='6b72faf8819628a5c109b2ade515ab9009606d10f11316f0d7e4c4c998d7f724')
version('4.5.2', sha256='6581916a3303a31f76454f12f86e020fb5e5c019f3dbb0780436a8f73792c4d1')
version('4.5.0', sha256='ca8d6305ff0e620d9cb69ff7ac3898917db9e9b6996a7320244b48ab6511dd8e')
version('4.3.1', sha256='bda52c65f03a69a9d8ab1a118d45646d76843249fb975d67e5141e63fa3acc79', deprecated=True)
version('4.3.0', sha256='8a86b4f2a1b1c7ac628262e5b11b07ff42a224e62e594a4e0683aeb616062538', deprecated=True)
version('4.2.0', sha256='c57525af32c59becf56fd83cdd61f5320a95024d9baa7fb729a01e7a9fcdfd78', deprecated=True)
version('4.1.0', sha256='9eb1d88cfc9474979aaf29b99bcf9d3769a0f7f1f8f10660941aabf83d9eeb0c', deprecated=True)
version('4.0.0', sha256='8db502d0f607834e3b882f939d33e8abe2f9b55ddafaf1b0c2cd29a0425ed76a', deprecated=True)
version('3.10.0', sha256='d1ac02840c2dcb3d5fa3008fe9e313767ebe6d1dcf978a924341834ec96ebfe2', deprecated=True)
version('3.9.0', sha256='d248958672ae35ab7f9fbd83827ccf352e2756dfa7819f6b614ace2e1a9a064e', deprecated=True)
version('3.8.0', sha256='10d8aa6f5af7b51813015da603c4e75edc863c3530793f6ed9769ca345c08ed6', deprecated=True)
version('3.7.0', sha256='a49f464bb2eab6317e87e3cc249aba3b2517a34fbdfe50175f0437f69a219adc', deprecated=True)
version('3.5.0', sha256='87c1ee9f02b8aa487b628c543f058198767c474cec3d21700596a73c028959e1', deprecated=True)
variant('build_type', default='Release', values=("Release", "Debug", "RelWithDebInfo"), description='CMake build type')
depends_on('cmake@3:', type='build')
depends_on('gl@4.5:', type='link')
depends_on('libelf', type='link', when="@3.7.0:3.8.0")
depends_on('numactl', type='link', when="@3.7.0:")
for ver in ['3.5.0', '3.7.0', '3.8.0', '3.9.0', '3.10.0', '4.0.0', '4.1.0',
'4.2.0', '4.3.0', '4.3.1', '4.5.0', '4.5.2', '5.0.0', '5.0.2',
'5.1.0', '5.1.3', 'master']:
depends_on('hsakmt-roct@' + ver, when='@' + ver)
depends_on('hsa-rocr-dev@' + ver, when='@' + ver)
depends_on('comgr@' + ver, when='@' + ver)
# See: https://github.com/ROCm-Developer-Tools/ROCclr/pull/16
# In 3.7.0 the find opengl things have changed slightly.
patch('opengl.patch', when='@3.5.0')
resource(name='opencl-on-vdi',
url='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/archive/roc-3.5.0.tar.gz',
sha256='511b617d5192f2d4893603c1a02402b2ac9556e9806ff09dd2a91d398abf39a0',
expand=True,
destination='',
placement='opencl-on-vdi',
when='@3.5.0')
# Add opencl sources thru the below
for d_version, d_shasum in [
('5.1.3', '44a7fac721abcd93470e1a7e466bdea0c668c253dee93e4f1ea9a72dbce4ba31'),
('5.1.0', '362d81303048cf7ed5d2f69fb65ed65425bc3da4734fff83e3b8fbdda51b0927'),
('5.0.2', '3edb1992ba28b4a7f82dd66fbd121f62bd859c1afb7ceb47fa856bd68feedc95'),
('5.0.0', '2aa3a628b336461f83866c4e76225ef5338359e31f802987699d6308515ae1be'),
('4.5.2', '96b43f314899707810db92149caf518bdb7cf39f7c0ad86e98ad687ffb0d396d'),
('4.5.0', '3a163aed24619b3faf5e8ba17325bdcedd1667a904ea20914ac6bdd33fcdbca8'),
('4.3.1', '7f98f7d4707b4392f8aa7017aaca9e27cb20263428a1a81fb7ec7c552e60c4ca'),
('4.3.0', 'd37bddcc6835b6c0fecdf4d02c204ac1d312076f3eef2b1faded1c4c1bc651e9'),
('4.2.0', '18133451948a83055ca5ebfb5ba1bd536ed0bcb611df98829f1251a98a38f730'),
('4.1.0', '0729e6c2adf1e3cf649dc6e679f9cb936f4f423f4954ad9852857c0a53ef799c'),
('4.0.0', 'd43ea5898c6b9e730b5efabe8367cc136a9260afeac5d0fe85b481d625dd7df1'),
('3.10.0', '3aa9dc5a5f570320b04b35ee129ce9ff21062d2770df934c6c307913f975e93d'),
('3.9.0', '286ff64304905384ce524cd8794c28aee216befd6c9267d4187a12e5a21e2daf'),
('3.8.0', '7f75dd1abf3d771d554b0e7b0a7d915ab5f11a74962c92b013ee044a23c1270a'),
('3.7.0', '283e1dfe4c3d2e8af4d677ed3c20e975393cdb0856e3ccd77b9c7ed2a151650b')
]:
resource(
name='opencl-on-vdi',
url='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/archive/rocm-{0}.tar.gz'.format(d_version),
sha256=d_shasum,
expand=True,
destination='',
placement='opencl-on-vdi',
when='@{0}'.format(d_version)
)
resource(
name='opencl-on-vdi',
git='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime.git',
destination='',
placement='opencl-on-vdi',
branch='main',
when='@master'
)
@property
def install_targets(self):
if self.spec.satisfies('@4.5.0:'):
return []
return ['install']
@run_after('install')
def deploy_missing_files(self):
if '@3.5.0' in self.spec:
# the amdrocclr_staticTargets.cmake file is generated but not
# installed and when we install it by hand, we have to fix the
# path to the static library libamdrocclr_static.a from build
# dir to prefix lib dir.
cmakefile = join_path(self.build_directory,
'amdrocclr_staticTargets.cmake')
filter_file(self.build_directory, self.prefix.lib, cmakefile)
install(cmakefile, self.prefix.lib)
elif self.spec.satisfies('@3.7.0:4.3.2'):
path = join_path(self.prefix.lib,
'cmake/rocclr/ROCclrConfig.cmake')
filter_file(self.build_directory, self.prefix, path)
def cmake_args(self):
args = [
'-DUSE_COMGR_LIBRARY=yes',
'-DOPENCL_DIR={0}/opencl-on-vdi'.format(self.stage.source_path)
]
return args
| 51.883212 | 123 | 0.68444 |
from spack.package import *
class HipRocclr(CMakePackage):
homepage = "https://github.com/ROCm-Developer-Tools/ROCclr"
git = "https://github.com/ROCm-Developer-Tools/ROCclr.git"
maintainers = ['srekolam', 'arjun-raj-kuppala']
def url_for_version(self, version):
if version == Version('3.5.0'):
return "https://github.com/ROCm-Developer-Tools/ROCclr/archive/roc-3.5.0.tar.gz"
url = "https://github.com/ROCm-Developer-Tools/ROCclr/archive/rocm-{0}.tar.gz"
return url.format(version)
version('master', branch='main')
version('5.1.3', sha256='ddee63cdc6515c90bab89572b13e1627b145916cb8ede075ef8446cbb83f0a48')
version('5.1.0', sha256='f4f265604b534795a275af902b2c814f416434d9c9e16db81b3ed5d062187dfa')
version('5.0.2', sha256='34decd84652268dde865f38e66f8fb4750a08c2457fea52ad962bced82a03e5e')
version('5.0.0', sha256='6b72faf8819628a5c109b2ade515ab9009606d10f11316f0d7e4c4c998d7f724')
version('4.5.2', sha256='6581916a3303a31f76454f12f86e020fb5e5c019f3dbb0780436a8f73792c4d1')
version('4.5.0', sha256='ca8d6305ff0e620d9cb69ff7ac3898917db9e9b6996a7320244b48ab6511dd8e')
version('4.3.1', sha256='bda52c65f03a69a9d8ab1a118d45646d76843249fb975d67e5141e63fa3acc79', deprecated=True)
version('4.3.0', sha256='8a86b4f2a1b1c7ac628262e5b11b07ff42a224e62e594a4e0683aeb616062538', deprecated=True)
version('4.2.0', sha256='c57525af32c59becf56fd83cdd61f5320a95024d9baa7fb729a01e7a9fcdfd78', deprecated=True)
version('4.1.0', sha256='9eb1d88cfc9474979aaf29b99bcf9d3769a0f7f1f8f10660941aabf83d9eeb0c', deprecated=True)
version('4.0.0', sha256='8db502d0f607834e3b882f939d33e8abe2f9b55ddafaf1b0c2cd29a0425ed76a', deprecated=True)
version('3.10.0', sha256='d1ac02840c2dcb3d5fa3008fe9e313767ebe6d1dcf978a924341834ec96ebfe2', deprecated=True)
version('3.9.0', sha256='d248958672ae35ab7f9fbd83827ccf352e2756dfa7819f6b614ace2e1a9a064e', deprecated=True)
version('3.8.0', sha256='10d8aa6f5af7b51813015da603c4e75edc863c3530793f6ed9769ca345c08ed6', deprecated=True)
version('3.7.0', sha256='a49f464bb2eab6317e87e3cc249aba3b2517a34fbdfe50175f0437f69a219adc', deprecated=True)
version('3.5.0', sha256='87c1ee9f02b8aa487b628c543f058198767c474cec3d21700596a73c028959e1', deprecated=True)
variant('build_type', default='Release', values=("Release", "Debug", "RelWithDebInfo"), description='CMake build type')
depends_on('cmake@3:', type='build')
depends_on('gl@4.5:', type='link')
depends_on('libelf', type='link', when="@3.7.0:3.8.0")
depends_on('numactl', type='link', when="@3.7.0:")
for ver in ['3.5.0', '3.7.0', '3.8.0', '3.9.0', '3.10.0', '4.0.0', '4.1.0',
'4.2.0', '4.3.0', '4.3.1', '4.5.0', '4.5.2', '5.0.0', '5.0.2',
'5.1.0', '5.1.3', 'master']:
depends_on('hsakmt-roct@' + ver, when='@' + ver)
depends_on('hsa-rocr-dev@' + ver, when='@' + ver)
depends_on('comgr@' + ver, when='@' + ver)
patch('opengl.patch', when='@3.5.0')
resource(name='opencl-on-vdi',
url='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/archive/roc-3.5.0.tar.gz',
sha256='511b617d5192f2d4893603c1a02402b2ac9556e9806ff09dd2a91d398abf39a0',
expand=True,
destination='',
placement='opencl-on-vdi',
when='@3.5.0')
for d_version, d_shasum in [
('5.1.3', '44a7fac721abcd93470e1a7e466bdea0c668c253dee93e4f1ea9a72dbce4ba31'),
('5.1.0', '362d81303048cf7ed5d2f69fb65ed65425bc3da4734fff83e3b8fbdda51b0927'),
('5.0.2', '3edb1992ba28b4a7f82dd66fbd121f62bd859c1afb7ceb47fa856bd68feedc95'),
('5.0.0', '2aa3a628b336461f83866c4e76225ef5338359e31f802987699d6308515ae1be'),
('4.5.2', '96b43f314899707810db92149caf518bdb7cf39f7c0ad86e98ad687ffb0d396d'),
('4.5.0', '3a163aed24619b3faf5e8ba17325bdcedd1667a904ea20914ac6bdd33fcdbca8'),
('4.3.1', '7f98f7d4707b4392f8aa7017aaca9e27cb20263428a1a81fb7ec7c552e60c4ca'),
('4.3.0', 'd37bddcc6835b6c0fecdf4d02c204ac1d312076f3eef2b1faded1c4c1bc651e9'),
('4.2.0', '18133451948a83055ca5ebfb5ba1bd536ed0bcb611df98829f1251a98a38f730'),
('4.1.0', '0729e6c2adf1e3cf649dc6e679f9cb936f4f423f4954ad9852857c0a53ef799c'),
('4.0.0', 'd43ea5898c6b9e730b5efabe8367cc136a9260afeac5d0fe85b481d625dd7df1'),
('3.10.0', '3aa9dc5a5f570320b04b35ee129ce9ff21062d2770df934c6c307913f975e93d'),
('3.9.0', '286ff64304905384ce524cd8794c28aee216befd6c9267d4187a12e5a21e2daf'),
('3.8.0', '7f75dd1abf3d771d554b0e7b0a7d915ab5f11a74962c92b013ee044a23c1270a'),
('3.7.0', '283e1dfe4c3d2e8af4d677ed3c20e975393cdb0856e3ccd77b9c7ed2a151650b')
]:
resource(
name='opencl-on-vdi',
url='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/archive/rocm-{0}.tar.gz'.format(d_version),
sha256=d_shasum,
expand=True,
destination='',
placement='opencl-on-vdi',
when='@{0}'.format(d_version)
)
resource(
name='opencl-on-vdi',
git='https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime.git',
destination='',
placement='opencl-on-vdi',
branch='main',
when='@master'
)
@property
def install_targets(self):
if self.spec.satisfies('@4.5.0:'):
return []
return ['install']
@run_after('install')
def deploy_missing_files(self):
if '@3.5.0' in self.spec:
cmakefile = join_path(self.build_directory,
'amdrocclr_staticTargets.cmake')
filter_file(self.build_directory, self.prefix.lib, cmakefile)
install(cmakefile, self.prefix.lib)
elif self.spec.satisfies('@3.7.0:4.3.2'):
path = join_path(self.prefix.lib,
'cmake/rocclr/ROCclrConfig.cmake')
filter_file(self.build_directory, self.prefix, path)
def cmake_args(self):
args = [
'-DUSE_COMGR_LIBRARY=yes',
'-DOPENCL_DIR={0}/opencl-on-vdi'.format(self.stage.source_path)
]
return args
| true | true |
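`url_for_version` in the package above special-cases the 3.5.0 release, whose tarball was tagged `roc-3.5.0` instead of the usual `rocm-<version>`. The pattern in isolation, with plain strings standing in for Spack's `Version` objects:

```python
def url_for_version(version: str) -> str:
    """Build a release tarball URL, special-casing the 3.5.0 tag,
    which was published under a 'roc-' prefix instead of the
    usual 'rocm-' prefix."""
    base = "https://github.com/ROCm-Developer-Tools/ROCclr/archive"
    if version == "3.5.0":
        return f"{base}/roc-3.5.0.tar.gz"
    return f"{base}/rocm-{version}.tar.gz"


print(url_for_version("3.5.0"))   # -> .../archive/roc-3.5.0.tar.gz
print(url_for_version("5.1.3"))   # -> .../archive/rocm-5.1.3.tar.gz
```

Isolating the exception in `url_for_version` keeps the `version(...)` directives uniform; only the URL construction knows about the mistagged release.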
f7fbfde5f32671eeca378684439614939666ff7c | 6,849 | py | Python | python/smqtk/representation/key_value/__init__.py | cdeepakroy/SMQTK | ef5cec7399e9766f95ff06fe122471a51fc4b2d8 | [
"BSD-3-Clause"
] | null | null | null | python/smqtk/representation/key_value/__init__.py | cdeepakroy/SMQTK | ef5cec7399e9766f95ff06fe122471a51fc4b2d8 | [
"BSD-3-Clause"
] | null | null | null | python/smqtk/representation/key_value/__init__.py | cdeepakroy/SMQTK | ef5cec7399e9766f95ff06fe122471a51fc4b2d8 | [
"BSD-3-Clause"
] | null | null | null | """
Data abstraction interface for general key-value storage.
"""
import abc
import collections
import os
from smqtk.exceptions import ReadOnlyError
from smqtk.representation import SmqtkRepresentation
from smqtk.utils.plugin import Pluggable
NO_DEFAULT_VALUE = type("KeyValueStoreNoDefaultValueType", (object,), {})()
class KeyValueStore (SmqtkRepresentation, Pluggable):
"""
Interface for general key/value storage.
    Implementations may impose restrictions on what types keys or values may
    be, depending on the backend used.
Data access and manipulation should be thread-safe.
"""
# Mutable storage container is not hashable.
__hash__ = None
def __len__(self):
return self.count()
def __contains__(self, item):
return self.has(item)
@abc.abstractmethod
def __repr__(self):
"""
Return representative string for this class.
        *NOTE:* **This abstract super-method returns a template string for
        sub-classes to append their specific information to. The returned
        string should be formatted using the ``%`` operator and expects a
        single string argument.**
:return: Representative string for this class.
:rtype: str
"""
return '<' + self.__class__.__name__ + " %s>"
@abc.abstractmethod
def count(self):
"""
:return: The number of key-value relationships in this store.
:rtype: int | long
"""
@abc.abstractmethod
def keys(self):
"""
:return: Iterator over keys in this store.
:rtype: collections.Iterator[collections.Hashable]
"""
def values(self):
"""
:return: Iterator over values in this store. Values are not guaranteed
to be in any particular order.
:rtype: collections.Iterator[object]
"""
for k in self.keys():
yield self.get(k)
@abc.abstractmethod
def is_read_only(self):
"""
:return: True if this instance is read-only and False if it is not.
:rtype: bool
"""
@abc.abstractmethod
def has(self, key):
"""
Check if this store has a value for the given key.
:param key: Key to check for a value for.
:type key: collections.Hashable
:return: If this store has a value for the given key.
:rtype: bool
"""
@abc.abstractmethod
def add(self, key, value):
"""
Add a key-value pair to this store.
*NOTE:* **Implementing sub-classes should call this super-method. This
super method should not be considered a critical section for thread
safety unless ``is_read_only`` is not thread-safe.**
:param key: Key for the value. Must be hashable.
:type key: collections.Hashable
:param value: Python object to store.
:type value: object
:raises ReadOnlyError: If this instance is marked as read-only.
:return: Self.
:rtype: KeyValueStore
"""
if not isinstance(key, collections.Hashable):
raise ValueError("Key is not a hashable type.")
if self.is_read_only():
raise ReadOnlyError("Cannot add to read-only instance %s." % self)
@abc.abstractmethod
def add_many(self, d):
"""
Add multiple key-value pairs at a time into this store as represented in
the provided dictionary `d`.
:param d: Dictionary of key-value pairs to add to this store.
:type d: dict[collections.Hashable, object]
:return: Self.
:rtype: KeyValueStore
"""
@abc.abstractmethod
def get(self, key, default=NO_DEFAULT_VALUE):
"""
Get the value for the given key.
*NOTE:* **Implementing sub-classes are responsible for raising a
``KeyError`` where appropriate.**
:param key: Key to get the value of.
:type key: collections.Hashable
:param default: Optional default value if the given key is not present
in this store. This may be any value except for the
``NO_DEFAULT_VALUE`` constant (custom anonymous class instance).
:type default: object
:raises KeyError: The given key is not present in this store and no
default value given.
:return: Deserialized python object stored for the given key.
:rtype: object
"""
# TODO: get_many(self, keys, default=NO_DEFAULT_VALUE)
@abc.abstractmethod
def clear(self):
"""
Clear this key-value store.
*NOTE:* **Implementing sub-classes should call this super-method. This
super method should not be considered a critical section for thread
safety.**
:raises ReadOnlyError: If this instance is marked as read-only.
"""
if self.is_read_only():
raise ReadOnlyError("Cannot clear a read-only %s instance."
% self.__class__.__name__)
def get_key_value_store_impls(reload_modules=False):
"""
Discover and return discovered ``KeyValueStore`` classes. Keys in the
returned map are the names of the discovered classes, and the paired values
are the actual class type objects.
We search for implementation classes in:
- modules next to this file this function is defined in (ones that begin
with an alphanumeric character),
- python modules listed in the environment variable
``KEY_VALUE_STORE_PATH``
- This variable should contain a sequence of python module
specifications, separated by the platform specific PATH separator
character (``;`` for Windows, ``:`` for unix)
Within a module we first look for a helper variable by the name
``KEY_VALUE_STORE_CLASS``, which can either be a single class object or
an iterable of class objects, to be specifically exported. If the variable
is set to None, we skip that module and do not import anything. If the
variable is not present, we look at attributes defined in that module for
classes that descend from the given base class type. If none of the above
are found, or if an exception occurs, the module is skipped.
:param reload_modules: Explicitly reload discovered modules from source.
:type reload_modules: bool
:return: Map of discovered class object of type ``KeyValueStore``
whose keys are the string names of the classes.
:rtype: dict[str, type]
"""
from smqtk.utils.plugin import get_plugins
this_dir = os.path.abspath(os.path.dirname(__file__))
env_var = "KEY_VALUE_STORE_PATH"
helper_var = "KEY_VALUE_STORE_CLASS"
return get_plugins(__name__, this_dir, env_var, helper_var,
KeyValueStore, reload_modules=reload_modules)
| 32.004673 | 80 | 0.64681 | import abc
import collections
import os
from smqtk.exceptions import ReadOnlyError
from smqtk.representation import SmqtkRepresentation
from smqtk.utils.plugin import Pluggable
NO_DEFAULT_VALUE = type("KeyValueStoreNoDefaultValueType", (object,), {})()
class KeyValueStore (SmqtkRepresentation, Pluggable):
__hash__ = None
def __len__(self):
return self.count()
def __contains__(self, item):
return self.has(item)
@abc.abstractmethod
def __repr__(self):
return '<' + self.__class__.__name__ + " %s>"
    @abc.abstractmethod
    def count(self):
        ...
    @abc.abstractmethod
    def keys(self):
        ...
    def values(self):
        for k in self.keys():
            yield self.get(k)
    @abc.abstractmethod
    def is_read_only(self):
        ...
    @abc.abstractmethod
    def has(self, key):
        ...
    @abc.abstractmethod
    def add(self, key, value):
        if not isinstance(key, collections.Hashable):
            raise ValueError("Key is not a hashable type.")
        if self.is_read_only():
            raise ReadOnlyError("Cannot add to read-only instance %s." % self)
    @abc.abstractmethod
    def add_many(self, d):
        ...
    @abc.abstractmethod
    def get(self, key, default=NO_DEFAULT_VALUE):
        ...
    @abc.abstractmethod
    def clear(self):
if self.is_read_only():
raise ReadOnlyError("Cannot clear a read-only %s instance."
% self.__class__.__name__)
def get_key_value_store_impls(reload_modules=False):
from smqtk.utils.plugin import get_plugins
this_dir = os.path.abspath(os.path.dirname(__file__))
env_var = "KEY_VALUE_STORE_PATH"
helper_var = "KEY_VALUE_STORE_CLASS"
return get_plugins(__name__, this_dir, env_var, helper_var,
KeyValueStore, reload_modules=reload_modules)
| true | true |
f7fc002869dcb55f0f4ad40430039ad2d1a1ba1e | 2,180 | py | Python | tests/run_basic_tests.py | ThibHlln/unifhy | 4105932ed7dfec34d428c1f2d2f85ec25ea522ed | [
"BSD-3-Clause"
] | null | null | null | tests/run_basic_tests.py | ThibHlln/unifhy | 4105932ed7dfec34d428c1f2d2f85ec25ea522ed | [
"BSD-3-Clause"
] | 2 | 2021-12-03T10:49:42.000Z | 2022-03-26T11:24:00.000Z | tests/run_basic_tests.py | unifhy-org/unifhy | 4105932ed7dfec34d428c1f2d2f85ec25ea522ed | [
"BSD-3-Clause"
] | null | null | null | import unittest
import doctest
from tests.test_model import BasicTestModel
from tests.test_space import (
TestLatLonGridAPI, TestGridComparison
)
from tests.test_time import (
TestTimeDomainAPI, TestTimeDomainComparison
)
from tests.test_utils.test_clock import TestClock
import unifhy
class TestModelSameTimeSameSpace(BasicTestModel, unittest.TestCase):
# flag to specify that components are to run at same temporal resolutions
t = 'same_t'
# flag to specify that components are to run at same spatial resolution
s = 'same_s'
import unittest
import doctest

from tests.test_model import BasicTestModel
from tests.test_space import (
    TestLatLonGridAPI, TestGridComparison
)
from tests.test_time import (
    TestTimeDomainAPI, TestTimeDomainComparison
)
from tests.test_utils.test_clock import TestClock

import unifhy


class TestModelSameTimeSameSpace(BasicTestModel, unittest.TestCase):
    t = 'same_t'
    s = 'same_s'


class TestModelDiffTimeDiffSpace(BasicTestModel, unittest.TestCase):
    # flag to specify that components are to run at different temporal resolutions
    t = 'diff_t'
    # flag to specify that components are to run at different spatial resolutions
    s = 'diff_s'


if __name__ == '__main__':
    test_loader = unittest.TestLoader()
    test_suite = unittest.TestSuite()

    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestModelSameTimeSameSpace)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestModelDiffTimeDiffSpace)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestLatLonGridAPI)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestGridComparison)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestTimeDomainAPI)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestTimeDomainComparison)
    )
    test_suite.addTests(
        test_loader.loadTestsFromTestCase(TestClock)
    )

    test_suite.addTests(doctest.DocTestSuite(unifhy.data))
    test_suite.addTests(doctest.DocTestSuite(unifhy.time))
    test_suite.addTests(doctest.DocTestSuite(unifhy.space))
    test_suite.addTests(doctest.DocTestSuite(unifhy.model))
    test_suite.addTests(doctest.DocTestSuite(unifhy._utils.clock))
    test_suite.addTests(doctest.DocTestSuite(unifhy._utils.exchanger))
    test_suite.addTests(doctest.DocTestSuite(unifhy._utils.compass))

    runner = unittest.TextTestRunner(verbosity=2)
    result = runner.run(test_suite)

    if not result.wasSuccessful():
        exit(1)
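The suite-assembly pattern above (a `TestLoader` feeding a `TestSuite`, run by a `TextTestRunner`) can be exercised in miniature without unifhy; the dummy test case below is invented for illustration:

```python
import unittest

class DummyCase(unittest.TestCase):
    # Stand-in for one of the real test cases loaded above.
    def test_ok(self):
        self.assertTrue(True)

loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(DummyCase))

# verbosity=0 keeps the runner quiet; run() returns a TestResult
# whose wasSuccessful() mirrors the exit-code check above.
result = unittest.TextTestRunner(verbosity=0).run(suite)
```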
# ------------------------------------------------------------------------------
# examples/ad_manager/v201905/activity_group_service/create_activity_groups.py
# from ale180192/googleads-python-lib (license: Apache-2.0)
# ------------------------------------------------------------------------------
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This code example creates new activity groups.
To determine which activity groups exist, run get_all_activity_groups.py.
The LoadFromStorage method is pulling credentials and properties from a
"googleads.yaml" file. By default, it looks for this file in your home
directory. For more information, see the "Caching authentication information"
section of our README.
"""
import uuid
# Import appropriate modules from the client library.
from googleads import ad_manager
# Set the ID of the advertiser company this activity group is associated with.
ADVERTISER_COMPANY_ID = 'INSERT_ADVERTISER_COMPANY_ID_HERE'
def main(client, advertiser_company_id):
# Initialize appropriate service.
activity_group_service = client.GetService('ActivityGroupService',
version='v201905')
# Create a short-term activity group.
short_term_activity_group = {
'name': 'Short-term activity group #%s' % uuid.uuid4(),
'companyIds': [advertiser_company_id],
'clicksLookback': '1',
'impressionsLookback': '1'
}
# Create a long-term activity group.
long_term_activity_group = {
'name': 'Long-term activity group #%s' % uuid.uuid4(),
'companyIds': [advertiser_company_id],
'clicksLookback': '30',
'impressionsLookback': '30'
}
# Create the activity groups on the server.
activity_groups = activity_group_service.createActivityGroups([
short_term_activity_group, long_term_activity_group])
# Display results.
for activity_group in activity_groups:
print('Activity group with ID "%s" and name "%s" was created.'
% (activity_group['id'], activity_group['name']))
if __name__ == '__main__':
# Initialize client object.
ad_manager_client = ad_manager.AdManagerClient.LoadFromStorage()
main(ad_manager_client, ADVERTISER_COMPANY_ID)
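The example above builds plain dicts as API payloads before handing them to the service. As a library-free illustration of that request shape (the `build_activity_groups` helper and the company ID below are invented, and no network call is made):

```python
import uuid

def build_activity_groups(advertiser_company_id):
    # Mirror the payloads from the example above: one short-term and one
    # long-term activity group, each tied to a single advertiser company.
    short_term = {
        'name': 'Short-term activity group #%s' % uuid.uuid4(),
        'companyIds': [advertiser_company_id],
        'clicksLookback': '1',
        'impressionsLookback': '1',
    }
    long_term = {
        'name': 'Long-term activity group #%s' % uuid.uuid4(),
        'companyIds': [advertiser_company_id],
        'clicksLookback': '30',
        'impressionsLookback': '30',
    }
    return [short_term, long_term]

groups = build_activity_groups('12345')
```

The uuid4 suffix keeps the names unique across runs, which is what lets the example be re-run without name collisions on the server.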
| 34.430556 | 78 | 0.734167 |
import uuid
from googleads import ad_manager
ADVERTISER_COMPANY_ID = 'INSERT_ADVERTISER_COMPANY_ID_HERE'
def main(client, advertiser_company_id):
activity_group_service = client.GetService('ActivityGroupService',
version='v201905')
short_term_activity_group = {
'name': 'Short-term activity group #%s' % uuid.uuid4(),
'companyIds': [advertiser_company_id],
'clicksLookback': '1',
'impressionsLookback': '1'
}
long_term_activity_group = {
'name': 'Long-term activity group #%s' % uuid.uuid4(),
'companyIds': [advertiser_company_id],
'clicksLookback': '30',
'impressionsLookback': '30'
}
activity_groups = activity_group_service.createActivityGroups([
short_term_activity_group, long_term_activity_group])
for activity_group in activity_groups:
print('Activity group with ID "%s" and name "%s" was created.'
% (activity_group['id'], activity_group['name']))
if __name__ == '__main__':
ad_manager_client = ad_manager.AdManagerClient.LoadFromStorage()
main(ad_manager_client, ADVERTISER_COMPANY_ID)
| true | true |
# ------------------------------------------------------------------------------
# scons/scons-local-2.3.3/SCons/Tool/latex.py
# from pedrishi/pdb2pqr_pypka (license: BSD-3-Clause)
# ------------------------------------------------------------------------------
Tool-specific initialization for LaTeX.
Generates .dvi files from .latex or .ltx files
There normally shouldn't be any need to import this module directly.
It will usually be imported through the generic SCons.Tool.Tool()
selection method.
"""
#
# Copyright (c) 2001 - 2014 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/Tool/latex.py 2014/08/24 12:12:31 garyo"
import SCons.Action
import SCons.Defaults
import SCons.Scanner.LaTeX
import SCons.Util
import SCons.Tool
import SCons.Tool.tex
def LaTeXAuxFunction(target = None, source= None, env=None):
result = SCons.Tool.tex.InternalLaTeXAuxAction( SCons.Tool.tex.LaTeXAction, target, source, env )
if result != 0:
SCons.Tool.tex.check_file_error_message(env['LATEX'])
return result
LaTeXAuxAction = SCons.Action.Action(LaTeXAuxFunction,
strfunction=SCons.Tool.tex.TeXLaTeXStrFunction)
def generate(env):
"""Add Builders and construction variables for LaTeX to an Environment."""
env.AppendUnique(LATEXSUFFIXES=SCons.Tool.LaTeXSuffixes)
import dvi
dvi.generate(env)
import pdf
pdf.generate(env)
bld = env['BUILDERS']['DVI']
bld.add_action('.ltx', LaTeXAuxAction)
bld.add_action('.latex', LaTeXAuxAction)
bld.add_emitter('.ltx', SCons.Tool.tex.tex_eps_emitter)
bld.add_emitter('.latex', SCons.Tool.tex.tex_eps_emitter)
SCons.Tool.tex.generate_common(env)
def exists(env):
SCons.Tool.tex.generate_darwin(env)
return env.Detect('latex')
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
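The `bld.add_action('.ltx', ...)` / `bld.add_action('.latex', ...)` calls above register the same LaTeX action under two source suffixes. The suffix-to-action dispatch they configure can be sketched as a toy stand-in (this is NOT SCons, just an illustration of the lookup):

```python
import os

def make_dispatcher():
    # Toy stand-in for the per-suffix action table that a builder keeps:
    # add_action() registers, build() dispatches on the source extension.
    actions = {}

    def add_action(suffix, action):
        actions[suffix] = action

    def build(source):
        _, ext = os.path.splitext(source)
        return actions[ext](source)

    return add_action, build

add_action, build = make_dispatcher()
add_action('.ltx', lambda s: 'latex -> %s' % s)
add_action('.latex', lambda s: 'latex -> %s' % s)
```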
| 33.358025 | 101 | 0.746484 |
__revision__ = "src/engine/SCons/Tool/latex.py 2014/08/24 12:12:31 garyo"
import SCons.Action
import SCons.Defaults
import SCons.Scanner.LaTeX
import SCons.Util
import SCons.Tool
import SCons.Tool.tex
def LaTeXAuxFunction(target = None, source= None, env=None):
result = SCons.Tool.tex.InternalLaTeXAuxAction( SCons.Tool.tex.LaTeXAction, target, source, env )
if result != 0:
SCons.Tool.tex.check_file_error_message(env['LATEX'])
return result
LaTeXAuxAction = SCons.Action.Action(LaTeXAuxFunction,
strfunction=SCons.Tool.tex.TeXLaTeXStrFunction)
def generate(env):
env.AppendUnique(LATEXSUFFIXES=SCons.Tool.LaTeXSuffixes)
import dvi
dvi.generate(env)
import pdf
pdf.generate(env)
bld = env['BUILDERS']['DVI']
bld.add_action('.ltx', LaTeXAuxAction)
bld.add_action('.latex', LaTeXAuxAction)
bld.add_emitter('.ltx', SCons.Tool.tex.tex_eps_emitter)
bld.add_emitter('.latex', SCons.Tool.tex.tex_eps_emitter)
SCons.Tool.tex.generate_common(env)
def exists(env):
SCons.Tool.tex.generate_darwin(env)
return env.Detect('latex')
| true | true |
# ------------------------------------------------------------------------------
# python/testData/quickdoc/ParamDescriptionEpytext.py
# from truthiswill/intellij-community (license: Apache-2.0)
# ------------------------------------------------------------------------------
"""
@param param1: param1 description
@param param2: param2 description
@return: return description
""" | 22.714286 | 37 | 0.660377 | def fu<the_ref>nc(param1, param2):
"""
@param param1: param1 description
@param param2: param2 description
@return: return description
""" | false | true |
# ------------------------------------------------------------------------------
# scripts/sct_compute_hausdorff_distance.py
# from valosekj/spinalcordtoolbox (license: MIT)
# ------------------------------------------------------------------------------
#
# Thinning with the Zhang-Suen algorithm (1984) --> code taken from https://github.com/linbojin/Skeletonization-by-Zhang-Suen-Thinning-Algorithm
# Computation of the distances between two skeleton
# ---------------------------------------------------------------------------------------
# Copyright (c) 2013 Polytechnique Montreal <www.neuro.polymtl.ca>
# Authors: Sara Dupont
# CREATED: 2015-07-15
#
# About the license: see the file LICENSE.TXT
#########################################################################################
from __future__ import absolute_import, division
import sys, os, time, argparse
import numpy as np
import sct_utils as sct
import spinalcordtoolbox.image as msct_image
from spinalcordtoolbox.image import Image
from spinalcordtoolbox.utils import Metavar, SmartFormatter
# TODO: display results ==> not only max : with a violin plot of h1 and h2 distribution ? see dev/straightening --> seaborn.violinplot
# TODO: add the option Hyberbolic Hausdorff's distance : see choi and seidel paper
# ----------------------------------------------------------------------------------------------------------------------
# PARAM ----------------------------------------------------------------------------------------------------------------
class Param:
def __init__(self):
self.debug = 0
self.thinning = True
self.verbose = 1
# ----------------------------------------------------------------------------------------------------------------------
# THINNING -------------------------------------------------------------------------------------------------------------
class Thinning:
def __init__(self, im, v=1):
sct.printv('Thinning ... ', v, 'normal')
self.image = im
self.image.data = bin_data(self.image.data)
self.dim_im = len(self.image.data.shape)
if self.dim_im == 2:
self.thinned_image = msct_image.empty_like(self.image)
self.thinned_image.data = self.zhang_suen(self.image.data)
self.thinned_image.absolutepath = sct.add_suffix(self.image.absolutepath, "_thinned")
elif self.dim_im == 3:
if not self.image.orientation == 'IRP':
sct.printv('-- changing orientation ...')
self.image.change_orientation('IRP')
thinned_data = np.asarray([self.zhang_suen(im_slice) for im_slice in self.image.data])
self.thinned_image = msct_image.empty_like(self.image)
self.thinned_image.data = thinned_data
self.thinned_image.absolutepath = sct.add_suffix(self.image.absolutepath, "_thinned")
# ------------------------------------------------------------------------------------------------------------------
def get_neighbours(self, x, y, image):
"""
Return 8-neighbours of image point P1(x,y), in a clockwise order
code from https://github.com/linbojin/Skeletonization-by-Zhang-Suen-Thinning-Algorithm
:param x:
:param y:
:param image:
:return:
"""
# now = time.time()
x_1, y_1, x1, y1 = x - 1, y - 1, x + 1, y + 1
neighbours = [image[x_1][y], image[x_1][y1], image[x][y1], image[x1][y1], # P2,P3,P4,P5
image[x1][y], image[x1][y_1], image[x][y_1], image[x_1][y_1]] # P6,P7,P8,P9
# t = time.time() - now
# sct.printv('t neighbours: ', t)
return neighbours
# ------------------------------------------------------------------------------------------------------------------
def transitions(self, neighbours):
"""
No. of 0,1 patterns (transitions from 0 to 1) in the ordered sequence
code from https://github.com/linbojin/Skeletonization-by-Zhang-Suen-Thinning-Algorithm
:param neighbours:
:return:
"""
# now = time.time()
n = neighbours + neighbours[0:1] # P2, P3, ... , P8, P9, P2
        s = sum((n1, n2) == (0, 1) for n1, n2 in zip(n, n[1:]))  # (P2,P3), (P3,P4), ... , (P8,P9), (P9,P2)
# t = time.time() - now
# sct.printv('t transitions sum: ', t)
return s
# ------------------------------------------------------------------------------------------------------------------
def zhang_suen(self, image):
"""
the Zhang-Suen Thinning Algorithm
code from https://github.com/linbojin/Skeletonization-by-Zhang-Suen-Thinning-Algorithm
:param image:
:return:
"""
# now = time.time()
image_thinned = image.copy() # deepcopy to protect the original image
changing1 = changing2 = 1 # the points to be removed (set as 0)
while changing1 or changing2: # iterates until no further changes occur in the image
# Step 1
changing1 = []
max = len(image_thinned) - 1
pass_list = [1, max]
# rows, columns = image_thinned.shape # x for rows, y for columns
# for x in range(1, rows - 1): # No. of rows
# for y in range(1, columns - 1): # No. of columns
for x, y in non_zero_coord(image_thinned):
if x not in pass_list and y not in pass_list:
# if image_thinned[x][y] == 1: # Condition 0: Point P1 in the object regions
P2, P3, P4, P5, P6, P7, P8, P9 = n = self.get_neighbours(x, y, image_thinned)
if (2 <= sum(n) <= 6 and # Condition 1: 2<= N(P1) <= 6
P2 * P4 * P6 == 0 and # Condition 3
P4 * P6 * P8 == 0 and # Condition 4
self.transitions(n) == 1): # Condition 2: S(P1)=1
changing1.append((x, y))
for x, y in changing1:
image_thinned[x][y] = 0
# Step 2
changing2 = []
# for x in range(1, rows - 1):
# for y in range(1, columns - 1):
for x, y in non_zero_coord(image_thinned):
if x not in pass_list and y not in pass_list:
# if image_thinned[x][y] == 1: # Condition 0
P2, P3, P4, P5, P6, P7, P8, P9 = n = self.get_neighbours(x, y, image_thinned)
if (2 <= sum(n) <= 6 and # Condition 1
P2 * P4 * P8 == 0 and # Condition 3
P2 * P6 * P8 == 0 and # Condition 4
self.transitions(n) == 1): # Condition 2
changing2.append((x, y))
for x, y in changing2:
image_thinned[x][y] = 0
# t = time.time() - now
# sct.printv('t thinning: ', t)
return image_thinned
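The clockwise P2..P9 neighbour ordering and the 0→1 transition count that `zhang_suen` relies on can be sanity-checked standalone (the 3×3 cross below is an invented example, free of the sct/Image dependencies):

```python
import numpy as np

def neighbours(x, y, img):
    # Clockwise P2..P9 around P1 = (x, y), matching get_neighbours() above.
    x_1, y_1, x1, y1 = x - 1, y - 1, x + 1, y + 1
    return [img[x_1][y], img[x_1][y1], img[x][y1], img[x1][y1],
            img[x1][y], img[x1][y_1], img[x][y_1], img[x_1][y_1]]

def transitions(n):
    # Number of 0 -> 1 transitions in the circular sequence P2..P9, P2.
    seq = n + n[0:1]
    return sum((a, b) == (0, 1) for a, b in zip(seq, seq[1:]))

img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]])
n = neighbours(1, 1, img)
```

For the centre of the cross, every second neighbour is set, so the circular sequence alternates and S(P1) = 4; condition 2 of the algorithm (S(P1) = 1) therefore correctly protects this junction pixel from deletion.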
# ----------------------------------------------------------------------------------------------------------------------
# HAUSDORFF'S DISTANCE -------------------------------------------------------------------------------------------------
class HausdorffDistance:
def __init__(self, data1, data2, v=1):
"""
the hausdorff distance between two sets is the maximum of the distances from a point in any of the sets to the nearest point in the other set
:return:
"""
# now = time.time()
sct.printv('Computing 2D Hausdorff\'s distance ... ', v, 'normal')
self.data1 = bin_data(data1)
self.data2 = bin_data(data2)
self.min_distances_1 = self.relative_hausdorff_dist(self.data1, self.data2, v)
self.min_distances_2 = self.relative_hausdorff_dist(self.data2, self.data1, v)
# relatives hausdorff's distances in pixel
self.h1 = np.max(self.min_distances_1)
self.h2 = np.max(self.min_distances_2)
# Hausdorff's distance in pixel
self.H = max(self.h1, self.h2)
# t = time.time() - now
# sct.printv('Hausdorff dist time :', t)
# ------------------------------------------------------------------------------------------------------------------
def relative_hausdorff_dist(self, dat1, dat2, v=1):
h = np.zeros(dat1.shape)
nz_coord_1 = non_zero_coord(dat1)
nz_coord_2 = non_zero_coord(dat2)
        if nz_coord_1 and nz_coord_2:
for x1, y1 in nz_coord_1:
# for x1 in range(dat1.shape[0]):
# for y1 in range(dat1.shape[1]):
# if dat1[x1, y1] == 1:
d_p1_dat2 = []
p1 = np.asarray([x1, y1])
for x2, y2 in nz_coord_2:
# for x2 in range(dat2.shape[0]):
# for y2 in range(dat2.shape[1]):
# if dat2[x2, y2] == 1:
p2 = np.asarray([x2, y2])
d_p1_dat2.append(np.linalg.norm(p1 - p2)) # Euclidean distance between p1 and p2
h[x1, y1] = min(d_p1_dat2)
else:
sct.printv('Warning: an image is empty', v, 'warning')
return h
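Independently of the class above, the symmetric Hausdorff distance on small binary masks can be sketched with vectorised NumPy (the two 5×5 masks are invented; `h1`/`h2`/the returned max mirror the class attributes, in pixels):

```python
import numpy as np

def hausdorff_2d(a, b):
    # Symmetric Hausdorff distance between the nonzero pixels of two
    # 2D binary arrays: max over each set of the distance to the other.
    pts_a = np.argwhere(a > 0)
    pts_b = np.argwhere(b > 0)
    # Pairwise Euclidean distances, shape (len(pts_a), len(pts_b)).
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    h1 = d.min(axis=1).max()  # farthest point of a from b
    h2 = d.min(axis=0).max()  # farthest point of b from a
    return max(h1, h2)

a = np.zeros((5, 5)); a[1, 1] = 1
b = np.zeros((5, 5)); b[1, 1] = 1; b[4, 1] = 1
```

Broadcasting replaces the nested Python loops of `relative_hausdorff_dist`, which is the usual way to make this O(n·m) computation fast for larger masks.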
# ----------------------------------------------------------------------------------------------------------------------
# COMPUTE DISTANCES ----------------------------------------------------------------------------------------------------
class ComputeDistances:
def __init__(self, im1, im2=None, param=None):
self.im1 = im1
self.im2 = im2
self.dim_im = len(self.im1.data.shape)
self.dim_pix = 0
self.distances = None
self.res = ''
self.param = param
self.dist1_distribution = None
self.dist2_distribution = None
if self.dim_im == 3:
self.orientation1 = self.im1.orientation
if self.orientation1 != 'IRP':
self.im1.change_orientation('IRP', generate_path=True)
if self.im2 is not None:
self.orientation2 = self.im2.orientation
if self.orientation2 != 'IRP':
self.im2.change_orientation('IRP', generate_path=True)
if self.param.thinning:
self.thinning1 = Thinning(self.im1, self.param.verbose)
self.thinning1.thinned_image.save()
if self.im2 is not None:
self.thinning2 = Thinning(self.im2, self.param.verbose)
self.thinning2.thinned_image.save()
if self.dim_im == 2 and self.im2 is not None:
self.compute_dist_2im_2d()
if self.dim_im == 3:
if self.im2 is None:
self.compute_dist_1im_3d()
else:
self.compute_dist_2im_3d()
if self.dim_im == 2 and self.distances is not None:
self.dist1_distribution = self.distances.min_distances_1[np.nonzero(self.distances.min_distances_1)]
self.dist2_distribution = self.distances.min_distances_2[np.nonzero(self.distances.min_distances_2)]
if self.dim_im == 3:
self.dist1_distribution = []
self.dist2_distribution = []
for d in self.distances:
if np.nonzero(d.min_distances_1)[0].size: # Exist non zero values
self.dist1_distribution.append(d.min_distances_1[np.nonzero(d.min_distances_1)])
else: # all values are zero
self.dist1_distribution.append(0)
if np.nonzero(d.min_distances_2)[0].size: # Exist non zero values
self.dist2_distribution.append(d.min_distances_2[np.nonzero(d.min_distances_2)])
else: # all values are zero
self.dist2_distribution.append(0)
self.res = 'Hausdorff\'s distance - First relative Hausdorff\'s distance median - Second relative Hausdorff\'s distance median(all in mm)\n'
for i, d in enumerate(self.distances):
med1 = np.median(self.dist1_distribution[i])
med2 = np.median(self.dist2_distribution[i])
if self.im2 is None:
self.res += 'Slice ' + str(i) + ' - slice ' + str(i + 1) + ': ' + str(d.H * self.dim_pix) + ' - ' + str(med1 * self.dim_pix) + ' - ' + str(med2 * self.dim_pix) + ' \n'
else:
self.res += 'Slice ' + str(i) + ': ' + str(d.H * self.dim_pix) + ' - ' + str(med1 * self.dim_pix) + ' - ' + str(med2 * self.dim_pix) + ' \n'
sct.printv('-----------------------------------------------------------------------------\n' +
self.res, self.param.verbose, 'normal')
if self.param.verbose == 2:
self.show_results()
# ------------------------------------------------------------------------------------------------------------------
def compute_dist_2im_2d(self):
nx1, ny1, nz1, nt1, px1, py1, pz1, pt1 = self.im1.dim
nx2, ny2, nz2, nt2, px2, py2, pz2, pt2 = self.im2.dim
assert np.isclose(px1, px2) and np.isclose(py1, py2) and np.isclose(px1, py1)
self.dim_pix = py1
if self.param.thinning:
dat1 = self.thinning1.thinned_image.data
dat2 = self.thinning2.thinned_image.data
else:
dat1 = bin_data(self.im1.data)
dat2 = bin_data(self.im2.data)
self.distances = HausdorffDistance(dat1, dat2, self.param.verbose)
self.res = 'Hausdorff\'s distance : ' + str(self.distances.H * self.dim_pix) + ' mm\n\n' \
'First relative Hausdorff\'s distance : ' + str(self.distances.h1 * self.dim_pix) + ' mm\n' \
'Second relative Hausdorff\'s distance : ' + str(self.distances.h2 * self.dim_pix) + ' mm'
# ------------------------------------------------------------------------------------------------------------------
def compute_dist_1im_3d(self):
nx1, ny1, nz1, nt1, px1, py1, pz1, pt1 = self.im1.dim
self.dim_pix = py1
if self.param.thinning:
dat1 = self.thinning1.thinned_image.data
else:
dat1 = bin_data(self.im1.data)
self.distances = []
for i, dat_slice in enumerate(dat1[:-1]):
self.distances.append(HausdorffDistance(bin_data(dat_slice), bin_data(dat1[i + 1]), self.param.verbose))
# ------------------------------------------------------------------------------------------------------------------
def compute_dist_2im_3d(self):
nx1, ny1, nz1, nt1, px1, py1, pz1, pt1 = self.im1.dim
nx2, ny2, nz2, nt2, px2, py2, pz2, pt2 = self.im2.dim
# assert np.round(pz1, 5) == np.round(pz2, 5) and np.round(py1, 5) == np.round(py2, 5)
assert nx1 == nx2
self.dim_pix = py1
if self.param.thinning:
dat1 = self.thinning1.thinned_image.data
dat2 = self.thinning2.thinned_image.data
else:
dat1 = bin_data(self.im1.data)
dat2 = bin_data(self.im2.data)
self.distances = []
for slice1, slice2 in zip(dat1, dat2):
self.distances.append(HausdorffDistance(slice1, slice2, self.param.verbose))
# ------------------------------------------------------------------------------------------------------------------
def show_results(self):
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
        # matplotlib >= 3.0 removed plt.hold(); overlaying artists is the default
sns.set(style="whitegrid", palette="pastel", color_codes=True)
plt.figure(figsize=(35, 20))
data_dist = {"distances": [], "image": [], "slice": []}
if self.dim_im == 2:
data_dist["distances"].append([dist * self.dim_pix for dist in self.dist1_distribution])
data_dist["image"].append(len(self.dist1_distribution) * [1])
data_dist["slice"].append(len(self.dist1_distribution) * [0])
data_dist["distances"].append([dist * self.dim_pix for dist in self.dist2_distribution])
data_dist["image"].append(len(self.dist2_distribution) * [2])
data_dist["slice"].append(len(self.dist2_distribution) * [0])
if self.dim_im == 3:
for i in range(len(self.distances)):
data_dist["distances"].append([dist * self.dim_pix for dist in self.dist1_distribution[i]])
data_dist["image"].append(len(self.dist1_distribution[i]) * [1])
data_dist["slice"].append(len(self.dist1_distribution[i]) * [i])
data_dist["distances"].append([dist * self.dim_pix for dist in self.dist2_distribution[i]])
data_dist["image"].append(len(self.dist2_distribution[i]) * [2])
data_dist["slice"].append(len(self.dist2_distribution[i]) * [i])
for k in data_dist.keys(): # flatten the lists in data_dist
data_dist[k] = [item for sublist in data_dist[k] for item in sublist]
data_dist = pd.DataFrame(data_dist)
sns.violinplot(x="slice", y="distances", hue="image", data=data_dist, split=True, inner="point", cut=0)
plt.savefig('violin_plot.png')
# plt.show()
# ----------------------------------------------------------------------------------------------------------------------
def bin_data(data):
return np.asarray((data > 0).astype(int))
# ----------------------------------------------------------------------------------------------------------------------
def resample_image(fname, suffix='_resampled.nii.gz', binary=False, npx=0.3, npy=0.3, thr=0.0, interpolation='spline'):
"""
Resampling function: add a padding, resample, crop the padding
:param fname: name of the image file to be resampled
:param suffix: suffix added to the original fname after resampling
:param binary: boolean, image is binary or not
:param npx: new pixel size in the x direction
:param npy: new pixel size in the y direction
:param thr: if the image is binary, it will be thresholded at thr (default=0) after the resampling
:param interpolation: type of interpolation used for the resampling
:return: file name after resampling (or original fname if it was already in the correct resolution)
"""
im_in = Image(fname)
orientation = im_in.orientation
if orientation != 'RPI':
fname = im_in.change_orientation(im_in, 'RPI', generate_path=True).save().absolutepath
nx, ny, nz, nt, px, py, pz, pt = im_in.dim
if np.round(px, 2) != np.round(npx, 2) or np.round(py, 2) != np.round(npy, 2):
name_resample = sct.extract_fname(fname)[1] + suffix
if binary:
interpolation = 'nn'
if nz == 1:
# when data is 2d: we convert it to a 3d image in order to avoid conversion problem with 2d data
# TODO: check if this above problem is still present (now that we are using nibabel instead of nipy)
sct.run(['sct_image', '-i', ','.join([fname, fname]), '-concat', 'z', '-o', fname])
sct.run(['sct_resample', '-i', fname, '-mm', str(npx) + 'x' + str(npy) + 'x' + str(pz), '-o', name_resample, '-x', interpolation])
if nz == 1: # when input data was 2d: re-convert data 3d-->2d
sct.run(['sct_image', '-i', name_resample, '-split', 'z'])
im_split = Image(name_resample.split('.nii.gz')[0] + '_Z0000.nii.gz')
im_split.save(name_resample)
if binary:
sct.run(['sct_maths', '-i', name_resample, '-bin', str(thr), '-o', name_resample])
if orientation != 'RPI':
name_resample = Image(name_resample) \
.change_orientation(orientation, generate_path=True) \
.save() \
.absolutepath
return name_resample
else:
if orientation != 'RPI':
fname = sct.add_suffix(fname, "_RPI")
im_in = msct_image.change_orientation(im_in, orientation).save(fname)
sct.printv('Image resolution already ' + str(npx) + 'x' + str(npy) + 'xpz')
return fname
# ----------------------------------------------------------------------------------------------------------------------
def non_zero_coord(data):
dim = len(data.shape)
if dim == 3:
X, Y, Z = (data > 0).nonzero()
list_coordinates = [(X[i], Y[i], Z[i]) for i in range(0, len(X))]
elif dim == 2:
X, Y = (data > 0).nonzero()
list_coordinates = [(X[i], Y[i]) for i in range(0, len(X))]
    return list_coordinates
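For reference, the same helper can be written more compactly with `np.argwhere` (an equivalent rewrite shown for illustration, not the script's own code):

```python
import numpy as np

def non_zero_coord_np(data):
    # Tuples of coordinates of strictly positive entries, in row-major
    # order, matching non_zero_coord() above for both 2D and 3D arrays.
    return [tuple(int(i) for i in c) for c in np.argwhere(data > 0)]

m = np.array([[0, 2],
              [0, 0],
              [7, 0]])
```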
def get_parser():
# Initialize the parser
parser = argparse.ArgumentParser(
description='Compute the Hausdorff\'s distance between two binary images which can be thinned (ie skeletonized).'
' If only one image is inputted, it will be only thinned',
add_help=None,
formatter_class=SmartFormatter,
        prog=os.path.splitext(os.path.basename(__file__))[0]
)
mandatoryArguments = parser.add_argument_group("\nMANDATORY ARGUMENTS")
mandatoryArguments.add_argument(
"-i",
required=True,
help='First Image on which you want to find the skeleton Example: t2star_manual_gmseg.nii.gz',
metavar=Metavar.file,
)
optional = parser.add_argument_group("\nOPTIONAL ARGUMENTS")
optional.add_argument(
"-h",
"--help",
action="help",
help="show this help message and exit")
optional.add_argument(
"-d",
help='Second Image on which you want to find the skeleton Example: t2star_manual_gmseg.nii.gz',
metavar=Metavar.file,
required=False,
default=None)
optional.add_argument(
"-thinning",
type=int,
help="Thinning : find the skeleton of the binary images using the Zhang-Suen algorithm (1984) and use it to "
"compute the hausdorff's distance",
required=False,
default=1,
choices=(0, 1))
optional.add_argument(
"-resampling",
type=float,
help="pixel size in mm to resample to Example: 0.5",
metavar=Metavar.float,
required=False,
default=0.1)
optional.add_argument(
"-o",
help='Name of the output file Example: my_hausdorff_dist.txt',
metavar=Metavar.str,
required=False,
default='hausdorff_distance.txt')
optional.add_argument(
"-v",
type=int,
help="Verbose. 0: nothing, 1: basic, 2: extended.",
required=False,
choices=(0, 1, 2),
        default=1)
return parser
########################################################################################################################
# ------------------------------------------------------ MAIN ------------------------------------------------------- #
########################################################################################################################
if __name__ == "__main__":
sct.init_sct()
param = Param()
input_fname = None
if param.debug:
sct.printv('\n*** WARNING: DEBUG MODE ON ***\n')
else:
param_default = Param()
parser = get_parser()
arguments = parser.parse_args(args=None if sys.argv[1:] else ['--help'])
input_fname = arguments.i
input_second_fname = ''
output_fname = 'hausdorff_distance.txt'
resample_to = 0.1
if arguments.d is not None:
input_second_fname = arguments.d
if arguments.thinning is not None:
param.thinning = bool(arguments.thinning)
if arguments.resampling is not None:
resample_to = arguments.resampling
if arguments.o is not None:
output_fname = arguments.o
param.verbose = arguments.v
sct.init_sct(log_level=param.verbose, update=True) # Update log level
tmp_dir = sct.tmp_create()
im1_name = "im1.nii.gz"
sct.copy(input_fname, os.path.join(tmp_dir, im1_name))
if input_second_fname != '':
im2_name = 'im2.nii.gz'
sct.copy(input_second_fname, os.path.join(tmp_dir, im2_name))
else:
im2_name = None
curdir = os.getcwd()
os.chdir(tmp_dir)
# now = time.time()
input_im1 = Image(resample_image(im1_name, binary=True, thr=0.5, npx=resample_to, npy=resample_to))
input_im1.absolutepath = os.path.basename(input_fname)
if im2_name is not None:
input_im2 = Image(resample_image(im2_name, binary=True, thr=0.5, npx=resample_to, npy=resample_to))
input_im2.absolutepath = os.path.basename(input_second_fname)
else:
input_im2 = None
computation = ComputeDistances(input_im1, im2=input_im2, param=param)
# TODO change back the orientatin of the thinned image
if param.thinning:
computation.thinning1.thinned_image.save(
os.path.join(curdir, sct.add_suffix(os.path.basename(input_fname), '_thinned')))
if im2_name is not None:
computation.thinning2.thinned_image.save(
os.path.join(curdir, sct.add_suffix(os.path.basename(input_second_fname), '_thinned')))
os.chdir(curdir)
res_fic = open(output_fname, 'w')
res_fic.write(computation.res)
res_fic.write('\n\nInput 1: ' + input_fname)
res_fic.write('\nInput 2: ' + input_second_fname)
res_fic.close()
# sct.printv('Total time: ', time.time() - now)
| 44.664903 | 191 | 0.519013 |
f7fc04972085df58b6a79a56d269a65b319308c8 | 1,962 | py | Python | pypy/translator/backendopt/finalizer.py | benoitc/pypy | a3e1b12d1d01dc29056b7badc051ffc034297658 | ["MIT"] | 1 | 2020-01-21T11:10:51.000Z | 2020-01-21T11:10:51.000Z | pypy/translator/backendopt/finalizer.py | benoitc/pypy | a3e1b12d1d01dc29056b7badc051ffc034297658 | ["MIT"] | null | null | null | pypy/translator/backendopt/finalizer.py | benoitc/pypy | a3e1b12d1d01dc29056b7badc051ffc034297658 | ["MIT"] | null | null | null |
from pypy.translator.backendopt import graphanalyze
from pypy.rpython.lltypesystem import lltype
class FinalizerError(Exception):
""" __del__ marked as lightweight finalizer, but the analyzer did
not agree
"""
class FinalizerAnalyzer(graphanalyze.BoolGraphAnalyzer):
""" Analyzer that determines whether a finalizer is lightweight enough
so it can be called without all the complicated logic in the garbage
collector. The set of operations here is restrictive for a good reason
- it's better to be safe. Specifically disallowed operations:
* anything that escapes self
* anything that can allocate
"""
ok_operations = ['ptr_nonzero', 'ptr_eq', 'ptr_ne', 'free', 'same_as',
'direct_ptradd', 'force_cast', 'track_alloc_stop',
'raw_free']
def analyze_light_finalizer(self, graph):
result = self.analyze_direct_call(graph)
if (result is self.top_result() and
getattr(graph.func, '_must_be_light_finalizer_', False)):
raise FinalizerError(FinalizerError.__doc__, graph)
return result
def analyze_simple_operation(self, op, graphinfo):
if op.opname in self.ok_operations:
return self.bottom_result()
if (op.opname.startswith('int_') or op.opname.startswith('float_')
or op.opname.startswith('cast_')):
return self.bottom_result()
if op.opname == 'setfield' or op.opname == 'bare_setfield':
TP = op.args[2].concretetype
if not isinstance(TP, lltype.Ptr) or TP.TO._gckind == 'raw':
# primitive type
return self.bottom_result()
if op.opname == 'getfield':
TP = op.result.concretetype
if not isinstance(TP, lltype.Ptr) or TP.TO._gckind == 'raw':
# primitive type
return self.bottom_result()
return self.top_result()
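# Illustration (not part of PyPy): the analyzer above whitelists a handful of
# pointer/memory operations and treats all primitive int_*/float_*/cast_*
# operations as harmless. A standalone sketch of that per-operation policy,
# with `is_light_operation` as a hypothetical helper name:

```python
OK_OPERATIONS = frozenset(['ptr_nonzero', 'ptr_eq', 'ptr_ne', 'free',
                           'same_as', 'direct_ptradd', 'force_cast',
                           'track_alloc_stop', 'raw_free'])

def is_light_operation(opname):
    """True if `opname` passes the simple-operation check unconditionally
    (setfield/getfield additionally need the primitive-type test above)."""
    return (opname in OK_OPERATIONS
            or opname.startswith(('int_', 'float_', 'cast_')))
```

# Anything outside this set (e.g. an allocation) pushes the analysis to
# top_result(), which makes the finalizer "heavy".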
| 41.744681 | 74 | 0.639144 |
f7fc04a584104a02421e2372f31da54c51ab2304 | 5,333 | py | Python | text_utils/segmenter/state.py | numb3r3/text_utils | 5ec4d0f28117c30a65e516381b4f182aa6add663 | ["Apache-2.0"] | 4 | 2020-02-22T08:13:41.000Z | 2021-06-18T06:52:30.000Z | text_utils/segmenter/state.py | numb3r3/text_utils | 5ec4d0f28117c30a65e516381b4f182aa6add663 | ["Apache-2.0"] | null | null | null | text_utils/segmenter/state.py | numb3r3/text_utils | 5ec4d0f28117c30a65e516381b4f182aa6add663 | ["Apache-2.0"] | null | null | null |
# coding=utf-8
from .sign import SPLIT_SIGN, SOFTEN_SPLIT_SIGN
from .context import Context
from ..utils import is_chinese
class ContextState(object):
def execute(self, context):
pass
class CharCheckContextState(ContextState):
def execute(self, context):
global CONTEXT_STATE_MANAGER
current_char = context.current_char
if is_chinese(current_char):
context.char_num += 1
context.token_num = int(context.char_num / 1.5)
if (not context.is_pair_sign_opened) and (
current_char in context.pair_signs):
context.state = CONTEXT_STATE_MANAGER["PAIR_SIGN_CONTEXT_STATE"]
elif context.is_too_long() and current_char in SOFTEN_SPLIT_SIGN:
context.current_sentence_builder.append(current_char)
context.state = CONTEXT_STATE_MANAGER["SPLIT_CONTEXT_STATE"]
elif current_char in context.split_signs:
context.current_sentence_builder.append(current_char)
context.state = CONTEXT_STATE_MANAGER["SPLIT_CONTEXT_STATE"]
elif context.is_pair_sign_opened and current_char in context.back_pair_sign:
context.state = CONTEXT_STATE_MANAGER[
"PAIR_SIGN_CLOSE_CONTEXT_STATE"]
else:
context.current_sentence_builder.append(current_char)
context.state = CONTEXT_STATE_MANAGER["FINISH_CHECK_CONTEXT_STATE"]
class FinishCheckContextState(ContextState):
def execute(self, context):
if context.current_index + 1 == len(context.text):
if len(context.current_sentence_builder) != 0:
sen = ''.join(context.current_sentence_builder)
context.sentences.append(sen)
# context.current_sentence_builder = []
context.clear_local_state()
context.finish()
return
else:
context.current_index += 1
context.state = CONTEXT_STATE_MANAGER["CHAR_CHECK_CONTEXT_STATE"]
class PairSignContextState(ContextState):
def execute(self, context):
if context.current_index + 1 == len(context.text):
context.current_sentence_builder.append(context.current_char)
else:
pair_sign_context = Context(
context.text,
CONTEXT_STATE_MANAGER["CHAR_CHECK_CONTEXT_STATE"],
context.split_signs, context.pair_signs,
context.soften_split_signs, context.token_limits)
pair_sign_context.sentences.append(context.current_char)
pair_sign_context.current_index = context.current_index + 1
pair_sign_context.pair_sign = context.current_char
pair_sign_context.back_pair_sign = context.pair_signs[
pair_sign_context.pair_sign]
pair_sign_context.is_pair_sign_opened = True
pair_sign_context.execute()
res = pair_sign_context.sentences
if len(res) == 3 and \
res[2] in pair_sign_context.back_pair_sign and \
res[0] == pair_sign_context.pair_sign and \
((res[1] and res[1][-1] not in SPLIT_SIGN) or (len(res[1]) < 20)):
context.current_sentence_builder.append(''.join(res))
else:
if len(context.current_sentence_builder) != 0:
sen = ''.join(context.current_sentence_builder)
context.sentences.append(sen)
# context.current_sentence_builder = []
context.clear_local_state()
context.sentences += res
context.current_index = pair_sign_context.current_index
context.state = CONTEXT_STATE_MANAGER["FINISH_CHECK_CONTEXT_STATE"]
class PairSignCloseContextState(ContextState):
def execute(self, context):
if len(context.current_sentence_builder) != 0:
sen = ''.join(context.current_sentence_builder)
context.sentences.append(sen)
# context.current_sentence_builder = []
context.clear_local_state()
context.sentences.append(context.current_char)
context.finish()
return
class SplitContextState(ContextState):
def execute(self, context):
context.has_split_sign = True
while context.current_index < len(context.text) - 1:
context.current_index += 1
tmp = context.text[context.current_index]
if tmp in context.split_signs or tmp in context.soften_split_signs:
context.current_sentence_builder.append(tmp)
else:
context.current_index -= 1
break
if len(context.current_sentence_builder) != 0:
sen = ''.join(context.current_sentence_builder)
context.sentences.append(sen)
# context.current_sentence_builder = []
context.clear_local_state()
context.state = CONTEXT_STATE_MANAGER["FINISH_CHECK_CONTEXT_STATE"]
CONTEXT_STATE_MANAGER = {
"SPLIT_CONTEXT_STATE": SplitContextState(),
"PAIR_SIGN_CONTEXT_STATE": PairSignContextState(),
"CHAR_CHECK_CONTEXT_STATE": CharCheckContextState(),
"FINISH_CHECK_CONTEXT_STATE": FinishCheckContextState(),
"PAIR_SIGN_CLOSE_CONTEXT_STATE": PairSignCloseContextState()
}
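# Illustration (independent of the Context/Tokenizer classes above): the
# segmenter uses the State pattern — each state mutates a shared context and
# picks the next state from a registry of singleton instances. A minimal,
# hypothetical sketch of the same dispatch loop over ASCII text:

```python
class _CollectState:
    def execute(self, ctx):
        ctx['current'].append(ctx['text'][ctx['i']])
        if ctx['text'][ctx['i']] in '.!?':      # split sign reached
            ctx['state'] = _STATES['SPLIT']
        else:
            ctx['state'] = _STATES['ADVANCE']

class _SplitState:
    def execute(self, ctx):
        ctx['sentences'].append(''.join(ctx['current']))
        ctx['current'] = []
        ctx['state'] = _STATES['ADVANCE']

class _AdvanceState:
    def execute(self, ctx):
        ctx['i'] += 1
        if ctx['i'] >= len(ctx['text']):        # end of text: flush and stop
            if ctx['current']:
                ctx['sentences'].append(''.join(ctx['current']))
            ctx['done'] = True
        else:
            ctx['state'] = _STATES['COLLECT']

_STATES = {'COLLECT': _CollectState(), 'SPLIT': _SplitState(),
           'ADVANCE': _AdvanceState()}

def split_sentences(text):
    """Drive the state machine until the whole text is consumed."""
    if not text:
        return []
    ctx = {'text': text, 'i': 0, 'current': [], 'sentences': [],
           'state': _STATES['COLLECT'], 'done': False}
    while not ctx['done']:
        ctx['state'].execute(ctx)
    return ctx['sentences']
```

# The real states above follow the same shape, with extra handling for paired
# signs (quotes, brackets) and soft splits on over-long sentences.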
| 37.034722 | 86 | 0.651791 |
f7fc050606a7847918cef647e3efe9ceab83e11e | 619 | py | Python | site/brooks/migrations/__init__.py | ivco19/dashboard | f4f6ff6002295556007220660224c5d2ab4f0c06 | ["BSD-3-Clause"] | 4 | 2020-03-31T21:35:59.000Z | 2020-10-13T00:44:53.000Z | site/brooks/migrations/__init__.py | ivco19/dashboard | f4f6ff6002295556007220660224c5d2ab4f0c06 | ["BSD-3-Clause"] | 8 | 2020-03-21T18:19:17.000Z | 2020-04-29T18:04:54.000Z | site/brooks/migrations/__init__.py | ivco19/dashboard | f4f6ff6002295556007220660224c5d2ab4f0c06 | ["BSD-3-Clause"] | 1 | 2020-04-18T15:58:07.000Z | 2020-04-18T15:58:07.000Z |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of Arcovid-19 Brooks.
# Copyright (c) 2020, Juan B Cabral, Vanessa Daza, Diego García Lambas,
# Marcelo Lares, Nadia Luczywo, Dante Paz, Rodrigo Quiroga,
# Bruno Sanchez, Federico Stasyszyn.
# License: BSD-3-Clause
# Full Text: https://github.com/ivco19/brooks/blob/master/LICENSE
# =============================================================================
# DOCS
# =============================================================================
"""Automatically created files for brooks migrations.
"""
| 32.578947 | 79 | 0.484653 | true | true | |
f7fc05070cb07528f70cc3e3289796e7da95ca69 | 1,094 | py | Python | FrequencyAnalysis.py | adeloucas/GSoC-Akkadian | 65dc1ccc215ef353533c4d3f361910e489b89346 | ["MIT"] | 5 | 2018-04-25T00:00:24.000Z | 2020-07-16T08:06:54.000Z | FrequencyAnalysis.py | adeloucas/GSoC-Akkadian | 65dc1ccc215ef353533c4d3f361910e489b89346 | ["MIT"] | 44 | 2018-06-08T18:43:13.000Z | 2019-07-29T18:42:12.000Z | FrequencyAnalysis.py | adeloucas/GSoC-Akkadian | 65dc1ccc215ef353533c4d3f361910e489b89346 | ["MIT"] | null | null | null |
from collections import Counter
from Importer.file_importer import FileImport
from Importer.cdli_corpus import CDLICorpus
from ATFConverter.tokenizer import Tokenizer
from ATFConverter.atf_converter import ATFConverter
fi = FileImport('texts/Akkadian.txt')
fi.read_file()
cc = CDLICorpus()
cc.parse_file(fi.file_lines)
tk = Tokenizer()
atf = ATFConverter()
stopwords = ['a-na', 'u3', 'sza', '[...]', 'i-na', '=',
'ARM', '01,', 'lang', 'akk', 'um-ma', 'la',
'u2-ul', 'mesz_', 'asz-szum', '0.1', 'broken',
'isz-tu', '_lu2_', 'ki-a-am', '1(disz)', 'ki-ma',
'x', 'sza-a-ti', 'the', '_lu2', '...]', 'lu-u2',
'sza#', 'a-na#', '_u4', 'beginning', 'of', '2(disz)',
'[a-na', 'szum-ma', 'hi-a_', 'ana', 'a-di']
bag_of_words = []
for lines in cc.catalog['P249253']['transliteration']:
for word in tk.word_tokenizer(lines):
if word[0] not in stopwords:
bag_of_words.append('-'.join(atf.process(word[0].split('-'))))
frequency_analysis = Counter(bag_of_words).most_common(11)
print(frequency_analysis)
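# Illustration (no CDLI corpus required): the pipeline above reduces to
# "tokenize each line, drop stopwords, count with Counter". A self-contained
# sketch of that core, with `most_common_tokens` as a hypothetical helper:

```python
from collections import Counter

def most_common_tokens(lines, stopwords, n=5):
    """Whitespace-tokenize `lines`, filter `stopwords`, return top-n counts."""
    tokens = [w for line in lines for w in line.split() if w not in stopwords]
    return Counter(tokens).most_common(n)
```

# For example, most_common_tokens(['a b b', 'c b a'], {'c'}, 2) returns
# [('b', 3), ('a', 2)].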
| 37.724138 | 74 | 0.606033 | from collections import Counter
f7fc05141269060be2d12868ee0e12989775825b | 2,075 | py | Python | tests/test_articles.py | JECINTA534521/News-Highligts-project | 00634f8eb0fc972d28f1ec911b83fbdea838ebd5 | ["Unlicense"] | null | null | null | tests/test_articles.py | JECINTA534521/News-Highligts-project | 00634f8eb0fc972d28f1ec911b83fbdea838ebd5 | ["Unlicense"] | null | null | null | tests/test_articles.py | JECINTA534521/News-Highligts-project | 00634f8eb0fc972d28f1ec911b83fbdea838ebd5 | ["Unlicense"] | null | null | null |
import unittest
from app.models import Articles
class ArticleTest(unittest.TestCase):
'''
Test class to test behaviours of the Article class
Args:
unittest.TestCase : Test case class that helps create test cases
'''
def setUp(self):
'''
Set up method to run before each test case
'''
self.new_article = Articles('BBC News','EU acts against Poland judiciary reforms', 'Unprecedented disciplinary measures are taken as the EU says the reforms threaten the rule of law.', 'https://ichef.bbci.co.uk/news/1024/cpsprodpb/F046/production/_98901516_2efffed4-d4a6-486a-8a78-112232b92faa.jpg','http://www.bbc.co.uk/news/world-europe-42420150', '2017-12-20T13:36:14Z')
def test_instance(self):
'''
Test case to check if self.new_article is an instance of Article
'''
self.assertTrue( isinstance( self.new_article, Articles ) )
def test_init(self):
'''
Test case to check if the Article class is initialised
'''
self.assertEqual( self.new_article.author, 'BBC News')
self.assertEqual( self.new_article.title, 'EU acts against Poland judiciary reforms')
self.assertEqual( self.new_article.description, 'Unprecedented disciplinary measures are taken as the EU says the reforms threaten the rule of law.')
self.assertEqual( self.new_article.urlToImage, 'https://ichef.bbci.co.uk/news/1024/cpsprodpb/F046/production/_98901516_2efffed4-d4a6-486a-8a78-112232b92faa.jpg')
self.assertEqual( self.new_article.url, 'http://www.bbc.co.uk/news/world-europe-42420150')
self.assertEqual( self.new_article.publishedAt, '2017-12-20T13:36:14Z')
def test_publish_date_format(self):
'''
Test case to check if UTC date format is converted to a display-friendly format
'''
display_friendly_format = self.new_article.publish_date_format(self.new_article.publishedAt)
self.assertEqual( display_friendly_format, '2017-12-20')
if __name__ == '__main__':
    unittest.main(verbosity=2)
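For context, here is a minimal stand-in for the `Articles` model these tests exercise. The real class lives in `app.models` and is not shown here, so this is an assumed implementation reconstructed from the assertions above, not the project's actual code:

```python
class Articles:
    '''Minimal stand-in for app.models.Articles; attribute names are
    taken from the test assertions, and the real class may differ.'''

    def __init__(self, author, title, description, urlToImage, url, publishedAt):
        self.author = author
        self.title = title
        self.description = description
        self.urlToImage = urlToImage
        self.url = url
        self.publishedAt = publishedAt

    def publish_date_format(self, publishedAt):
        # Keep only the date part of an ISO-8601 UTC timestamp:
        # '2017-12-20T13:36:14Z' -> '2017-12-20'
        return publishedAt.split('T')[0]
```

With this sketch, `test_publish_date_format` passes because splitting on `'T'` discards the time-of-day and trailing `Z`.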