# src/gsyslyzer_cli.py
""" Module for defining the CLI usage. """
import argparse
class Gsyslyzer:
    """ Object for building and running a complete sifting pipeline

    Attributes
    ----------
    builder: SifterBuilder object for building the LogSifter
    flags: object storing attributes given as cli flags
    """

    def __init__(self, builder):
        self.builder = builder

        parser = argparse.ArgumentParser(description="Gsyslyzer CLI")
        parser.add_argument("--verbosity", default=0, type=int,
                            help=("0: Symptom summary "
                                  "1: All symptom bursts"))
        # argparse's type=bool is a trap (bool("False") is True), so boolean
        # flags use store_true instead.
        parser.add_argument("--collect_statistics", action="store_true",
                            help="Collect signal statistics")
        parser.add_argument("--json_output", action="store_true",
                            help="Write output to gsift_output.json")
        parser.add_argument("--log_file_path", required=True)

        self.flags = parser.parse_args()

    def run(self):
        log_sifter = self.builder.build_sifter(self.flags)
        log_sifter.sift_log()
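For reference, a self-contained sketch of the flag handling above, parsed from an explicit argv list instead of sys.argv so it can be exercised directly; the file name `syslog.txt` is only an illustrative placeholder. The `store_true` form is used for the boolean flags because argparse's `type=bool` treats any non-empty string, including "False", as True.

```python
import argparse

# Mirror of the Gsyslyzer CLI flags (illustrative, not the real entry point).
parser = argparse.ArgumentParser(description="Gsyslyzer CLI")
parser.add_argument("--verbosity", default=0, type=int,
                    help="0: Symptom summary 1: All symptom bursts")
parser.add_argument("--collect_statistics", action="store_true",
                    help="Collect signal statistics")
parser.add_argument("--json_output", action="store_true",
                    help="Write output to gsift_output.json")
parser.add_argument("--log_file_path", required=True)

# Parse a hand-built argv list; absent store_true flags default to False.
flags = parser.parse_args(
    ["--log_file_path", "syslog.txt", "--collect_statistics"])
print(flags.log_file_path, flags.verbosity, flags.collect_statistics)
```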
# lib/antlr3/recognizers.py
"""ANTLR3 runtime package"""
# begin[licence]
#
# [The "BSD licence"]
# Copyright (c) 2005-2006 Terence Parr
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# end[licence]
import sys
import inspect
from antlr3.constants import DEFAULT_CHANNEL, HIDDEN_CHANNEL, EOF, \
EOR_TOKEN_TYPE, INVALID_TOKEN_TYPE
from antlr3.exceptions import RecognitionException, MismatchedTokenException, \
MismatchedRangeException, MismatchedTreeNodeException, \
NoViableAltException, EarlyExitException, MismatchedSetException, \
MismatchedNotSetException, FailedPredicateException, \
BacktrackingFailed, UnwantedTokenException, MissingTokenException
from antlr3.tokens import CommonToken, EOF_TOKEN, SKIP_TOKEN
from antlr3.compat import set, frozenset, reversed
class RecognizerSharedState(object):
    """
    The set of fields needed by an abstract recognizer to recognize input
    and recover from errors etc... As a separate state object, it can be
    shared among multiple grammars; e.g., when one grammar imports another.

    These fields are publicly visible but the actual state pointer per
    parser is protected.
    """

    def __init__(self):
        # Track the set of token types that can follow any rule invocation.
        # Stack grows upwards.
        self.following = []

        # This is true when we see an error and before having successfully
        # matched a token. Prevents generation of more than one error message
        # per error.
        self.errorRecovery = False

        # The index into the input stream where the last error occurred.
        # This is used to prevent infinite loops where an error is found
        # but no token is consumed during recovery...another error is found,
        # ad nauseam. This is a failsafe mechanism to guarantee that at least
        # one token/tree node is consumed for two errors.
        self.lastErrorIndex = -1

        # If 0, no backtracking is going on. Safe to exec actions etc...
        # If >0 then it's the level of backtracking.
        self.backtracking = 0

        # An array[size num rules] of Map<Integer,Integer> that tracks
        # the stop token index for each rule. ruleMemo[ruleIndex] is
        # the memoization table for ruleIndex. For key ruleStartIndex, you
        # get back the stop token for associated rule or MEMO_RULE_FAILED.
        #
        # This is only used if rule memoization is on (which it is by default).
        self.ruleMemo = None

        ## Did the recognizer encounter a syntax error? Track how many.
        self.syntaxErrors = 0

        # LEXER FIELDS (must be in same state object to avoid casting
        # constantly in generated code and Lexer object) :(

        ## The goal of all lexer rules/methods is to create a token object.
        # This is an instance variable as multiple rules may collaborate to
        # create a single token. nextToken will return this object after
        # matching lexer rule(s). If you subclass to allow multiple token
        # emissions, then set this to the last token to be matched or
        # something nonnull so that the auto token emit mechanism will not
        # emit another token.
        self.token = None

        ## What character index in the stream did the current token start at?
        # Needed, for example, to get the text for current token. Set at
        # the start of nextToken.
        self.tokenStartCharIndex = -1

        ## The line on which the first character of the token resides
        self.tokenStartLine = None

        ## The character position of first character within the line
        self.tokenStartCharPositionInLine = None

        ## The channel number for the current token
        self.channel = None

        ## The token type for the current token
        self.type = None

        ## You can set the text for the current token to override what is in
        # the input char buffer. Use setText() or can set this instance var.
        self.text = None
class BaseRecognizer(object):
    """
    @brief Common recognizer functionality.

    A generic recognizer that can handle recognizers generated from
    lexer, parser, and tree grammars. This is all the parsing
    support code essentially; most of it is error recovery stuff and
    backtracking.
    """

    MEMO_RULE_FAILED = -2
    MEMO_RULE_UNKNOWN = -1

    # copies from Token object for convenience in actions
    DEFAULT_TOKEN_CHANNEL = DEFAULT_CHANNEL

    # for convenience in actions
    HIDDEN = HIDDEN_CHANNEL

    # overridden by generated subclasses
    tokenNames = None

    def __init__(self, state=None):
        # Input stream of the recognizer. Must be initialized by a subclass.
        self.input = None

        ## State of a lexer, parser, or tree parser are collected into a state
        # object so the state can be shared. This sharing is needed to
        # have one grammar import others and share same error variables
        # and other state variables. It's a kind of explicit multiple
        # inheritance via delegation of methods and shared state.
        if state is None:
            state = RecognizerSharedState()
        self._state = state

    # this one only exists to shut up pylint :(
    def setInput(self, input):
        self.input = input

    def reset(self):
        """
        Reset the parser's state; subclasses must rewind the input stream.
        """
        # wack everything related to error recovery
        if self._state is None:
            # no shared state work to do
            return

        self._state.following = []
        self._state.errorRecovery = False
        self._state.lastErrorIndex = -1
        self._state.syntaxErrors = 0

        # wack everything related to backtracking and memoization
        self._state.backtracking = 0
        if self._state.ruleMemo is not None:
            self._state.ruleMemo = {}
    def match(self, input, ttype, follow):
        """
        Match current input symbol against ttype. Attempt
        single token insertion or deletion error recovery. If
        that fails, throw MismatchedTokenException.

        To turn off single token insertion or deletion error
        recovery, override mismatchRecover() and have it call
        plain mismatch(), which does not recover. Then any error
        in a rule will cause an exception and immediate exit from
        rule. Rule would recover by resynchronizing to the set of
        symbols that can follow rule ref.
        """

        matchedSymbol = self.getCurrentInputSymbol(input)
        if self.input.LA(1) == ttype:
            self.input.consume()
            self._state.errorRecovery = False
            return matchedSymbol

        if self._state.backtracking > 0:
            # FIXME: need to return matchedSymbol here as well. damn!!
            raise BacktrackingFailed

        matchedSymbol = self.recoverFromMismatchedToken(input, ttype, follow)
        return matchedSymbol

    def matchAny(self, input):
        """Match the wildcard: in a symbol"""

        self._state.errorRecovery = False
        self.input.consume()

    def mismatchIsUnwantedToken(self, input, ttype):
        return input.LA(2) == ttype

    def mismatchIsMissingToken(self, input, follow):
        if follow is None:
            # we have no information about the follow; we can only consume
            # a single token and hope for the best
            return False

        # compute what can follow this grammar element reference
        if EOR_TOKEN_TYPE in follow:
            if len(self._state.following) > 0:
                # remove EOR if we're not the start symbol
                follow = follow - set([EOR_TOKEN_TYPE])

            viableTokensFollowingThisRule = self.computeContextSensitiveRuleFOLLOW()
            follow = follow | viableTokensFollowingThisRule

        # if current token is consistent with what could come after set
        # then we know we're missing a token; error recovery is free to
        # "insert" the missing token
        if input.LA(1) in follow or EOR_TOKEN_TYPE in follow:
            return True

        return False
    def mismatch(self, input, ttype, follow):
        """
        Factor out what to do upon token mismatch so tree parsers can behave
        differently. Override and call mismatchRecover(input, ttype, follow)
        to get single token insertion and deletion. Use this to turn off
        single token insertion and deletion. Override mismatchRecover
        to call this instead.
        """

        if self.mismatchIsUnwantedToken(input, ttype):
            raise UnwantedTokenException(ttype, input)

        elif self.mismatchIsMissingToken(input, follow):
            raise MissingTokenException(ttype, input, None)

        raise MismatchedTokenException(ttype, input)

##     def mismatchRecover(self, input, ttype, follow):
##         if self.mismatchIsUnwantedToken(input, ttype):
##             mte = UnwantedTokenException(ttype, input)
##         elif self.mismatchIsMissingToken(input, follow):
##             mte = MissingTokenException(ttype, input)
##         else:
##             mte = MismatchedTokenException(ttype, input)
##         self.recoverFromMismatchedToken(input, mte, ttype, follow)
    def reportError(self, e):
        """Report a recognition problem.

        This method sets errorRecovery to indicate the parser is recovering
        not parsing. Once in recovery mode, no errors are generated.
        To get out of recovery mode, the parser must successfully match
        a token (after a resync). So it will go:

        1. error occurs
        2. enter recovery mode, report error
        3. consume until token found in resynch set
        4. try to resume parsing
        5. next match() will reset errorRecovery mode

        If you override, make sure to update syntaxErrors if you care about
        that.
        """

        # if we've already reported an error and have not matched a token
        # yet successfully, don't report any errors.
        if self._state.errorRecovery:
            return

        self._state.syntaxErrors += 1  # don't count spurious
        self._state.errorRecovery = True

        self.displayRecognitionError(self.tokenNames, e)

    def displayRecognitionError(self, tokenNames, e):
        hdr = self.getErrorHeader(e)
        msg = self.getErrorMessage(e, tokenNames)
        self.emitErrorMessage(hdr + " " + msg)
    def getErrorMessage(self, e, tokenNames):
        """
        What error message should be generated for the various
        exception types?

        Not very object-oriented code, but I like having all error message
        generation within one method rather than spread among all of the
        exception classes. This also makes it much easier for the exception
        handling because the exception classes do not have to have pointers back
        to this object to access utility routines and so on. Also, changing
        the message for an exception type would be difficult because you
        would have to subclass the exception, but then somehow get ANTLR
        to make those kinds of exception objects instead of the default.
        This looks weird, but trust me--it makes the most sense in terms
        of flexibility.

        For grammar debugging, you will want to override this to add
        more information such as the stack frame with
        getRuleInvocationStack(e, this.getClass().getName()) and,
        for no viable alts, the decision description and state etc...

        Override this to change the message generated for one or more
        exception types.
        """

        if isinstance(e, UnwantedTokenException):
            tokenName = "<unknown>"
            if e.expecting == EOF:
                tokenName = "EOF"
            else:
                tokenName = self.tokenNames[e.expecting]

            msg = "extraneous input %s expecting %s" % (
                self.getTokenErrorDisplay(e.getUnexpectedToken()),
                tokenName
                )

        elif isinstance(e, MissingTokenException):
            tokenName = "<unknown>"
            if e.expecting == EOF:
                tokenName = "EOF"
            else:
                tokenName = self.tokenNames[e.expecting]

            msg = "missing %s at %s" % (
                tokenName, self.getTokenErrorDisplay(e.token)
                )

        elif isinstance(e, MismatchedTokenException):
            tokenName = "<unknown>"
            if e.expecting == EOF:
                tokenName = "EOF"
            else:
                tokenName = self.tokenNames[e.expecting]

            msg = "mismatched input " \
                  + self.getTokenErrorDisplay(e.token) \
                  + " expecting " \
                  + tokenName

        elif isinstance(e, MismatchedTreeNodeException):
            tokenName = "<unknown>"
            if e.expecting == EOF:
                tokenName = "EOF"
            else:
                tokenName = self.tokenNames[e.expecting]

            msg = "mismatched tree node: %s expecting %s" \
                  % (e.node, tokenName)

        elif isinstance(e, NoViableAltException):
            msg = "no viable alternative at input " \
                  + self.getTokenErrorDisplay(e.token)

        elif isinstance(e, EarlyExitException):
            msg = "required (...)+ loop did not match anything at input " \
                  + self.getTokenErrorDisplay(e.token)

        elif isinstance(e, MismatchedSetException):
            msg = "mismatched input " \
                  + self.getTokenErrorDisplay(e.token) \
                  + " expecting set " \
                  + repr(e.expecting)

        elif isinstance(e, MismatchedNotSetException):
            msg = "mismatched input " \
                  + self.getTokenErrorDisplay(e.token) \
                  + " expecting set " \
                  + repr(e.expecting)

        elif isinstance(e, FailedPredicateException):
            msg = "rule " \
                  + e.ruleName \
                  + " failed predicate: {" \
                  + e.predicateText \
                  + "}?"

        else:
            msg = str(e)

        return msg
    def getNumberOfSyntaxErrors(self):
        """
        Get number of recognition errors (lexer, parser, tree parser). Each
        recognizer tracks its own number. So parser and lexer each have
        separate count. Does not count the spurious errors found between
        an error and next valid token match.

        See also reportError()
        """
        return self._state.syntaxErrors

    def getErrorHeader(self, e):
        """
        What is the error header, normally line/character position information?
        """

        return "line %d:%d" % (e.line, e.charPositionInLine)

    def getTokenErrorDisplay(self, t):
        """
        How should a token be displayed in an error message? The default
        is to display just the text, but during development you might
        want to have a lot of information spit out. Override in that case
        to use t.toString() (which, for CommonToken, dumps everything about
        the token). This is better than forcing you to override a method in
        your token objects because you don't have to go modify your lexer
        so that it creates a new Java type.
        """

        s = t.text
        if s is None:
            if t.type == EOF:
                s = "<EOF>"
            else:
                # t.type is an int, so format it instead of concatenating
                s = "<%s>" % t.type

        return repr(s)

    def emitErrorMessage(self, msg):
        """Override this method to change where error messages go"""
        sys.stderr.write(msg + '\n')
    def recover(self, input, re):
        """
        Recover from an error found on the input stream. This is
        for NoViableAlt and mismatched symbol exceptions. If you enable
        single token insertion and deletion, this will usually not
        handle mismatched symbol exceptions but there could be a mismatched
        token that the match() routine could not recover from.
        """

        # PROBLEM? what if input stream is not the same as last time
        # perhaps make lastErrorIndex a member of input
        if self._state.lastErrorIndex == input.index():
            # uh oh, another error at same token index; must be a case
            # where LT(1) is in the recovery token set so nothing is
            # consumed; consume a single token so at least to prevent
            # an infinite loop; this is a failsafe.
            input.consume()

        self._state.lastErrorIndex = input.index()
        followSet = self.computeErrorRecoverySet()

        self.beginResync()
        self.consumeUntil(input, followSet)
        self.endResync()

    def beginResync(self):
        """
        A hook to listen in on the token consumption during error recovery.
        The DebugParser subclasses this to fire events to the listener.
        """
        pass

    def endResync(self):
        """
        A hook to listen in on the token consumption during error recovery.
        The DebugParser subclasses this to fire events to the listener.
        """
        pass
    def computeErrorRecoverySet(self):
        """
        Compute the error recovery set for the current rule. During
        rule invocation, the parser pushes the set of tokens that can
        follow that rule reference on the stack; this amounts to
        computing FIRST of what follows the rule reference in the
        enclosing rule. This local follow set only includes tokens
        from within the rule; i.e., the FIRST computation done by
        ANTLR stops at the end of a rule.

        EXAMPLE

        When you find a "no viable alt exception", the input is not
        consistent with any of the alternatives for rule r. The best
        thing to do is to consume tokens until you see something that
        can legally follow a call to r *or* any rule that called r.
        You don't want the exact set of viable next tokens because the
        input might just be missing a token--you might consume the
        rest of the input looking for one of the missing tokens.

        Consider grammar:

        a : '[' b ']'
          | '(' b ')'
          ;
        b : c '^' INT ;
        c : ID
          | INT
          ;

        At each rule invocation, the set of tokens that could follow
        that rule is pushed on a stack. Here are the various "local"
        follow sets:

        FOLLOW(b1_in_a) = FIRST(']') = ']'
        FOLLOW(b2_in_a) = FIRST(')') = ')'
        FOLLOW(c_in_b) = FIRST('^') = '^'

        Upon erroneous input "[]", the call chain is

        a -> b -> c

        and, hence, the follow context stack is:

        depth  local follow set     after call to rule
          0         <EOF>            a (from main())
          1          ']'             b
          2          '^'             c

        Notice that ')' is not included, because b would have to have
        been called from a different context in rule a for ')' to be
        included.

        For error recovery, we cannot consider FOLLOW(c)
        (context-sensitive or otherwise). We need the combined set of
        all context-sensitive FOLLOW sets--the set of all tokens that
        could follow any reference in the call chain. We need to
        resync to one of those tokens. Note that FOLLOW(c)='^' and if
        we resync'd to that token, we'd consume until EOF. We need to
        sync to context-sensitive FOLLOWs for a, b, and c: {']','^'}.
        In this case, for input "[]", LA(1) is in this set so we would
        not consume anything and after printing an error rule c would
        return normally. It would not find the required '^' though.
        At this point, it gets a mismatched token error and throws an
        exception (since LA(1) is not in the viable following token
        set). The rule exception handler tries to recover, but finds
        the same recovery set and doesn't consume anything. Rule b
        exits normally returning to rule a. Now it finds the ']' (and
        with the successful match exits errorRecovery mode).

        So, you can see that the parser walks up the call chain looking
        for the token that was a member of the recovery set.

        Errors are not generated in errorRecovery mode.

        ANTLR's error recovery mechanism is based upon original ideas:

        "Algorithms + Data Structures = Programs" by Niklaus Wirth

        and

        "A note on error recovery in recursive descent parsers":
        http://portal.acm.org/citation.cfm?id=947902.947905

        Later, Josef Grosch had some good ideas:

        "Efficient and Comfortable Error Recovery in Recursive Descent
        Parsers":
        ftp://www.cocolab.com/products/cocktail/doca4.ps/ell.ps.zip

        Like Grosch I implemented local FOLLOW sets that are combined
        at run-time upon error to avoid overhead during parsing.
        """

        return self.combineFollows(False)
    def computeContextSensitiveRuleFOLLOW(self):
        """
        Compute the context-sensitive FOLLOW set for current rule.
        This is set of token types that can follow a specific rule
        reference given a specific call chain. You get the set of
        viable tokens that can possibly come next (lookahead depth 1)
        given the current call chain. Contrast this with the
        definition of plain FOLLOW for rule r:

          FOLLOW(r)={x | S=>*alpha r beta in G and x in FIRST(beta)}

        where x in T* and alpha, beta in V*; T is set of terminals and
        V is the set of terminals and nonterminals. In other words,
        FOLLOW(r) is the set of all tokens that can possibly follow
        references to r in *any* sentential form (context). At
        runtime, however, we know precisely which context applies as
        we have the call chain. We may compute the exact (rather
        than covering superset) set of following tokens.

        For example, consider grammar:

        stat : ID '=' expr ';'      // FOLLOW(stat)=={EOF}
             | "return" expr '.'
             ;
        expr : atom ('+' atom)* ;   // FOLLOW(expr)=={';','.',')'}
        atom : INT                  // FOLLOW(atom)=={'+',')',';','.'}
             | '(' expr ')'
             ;

        The FOLLOW sets are all inclusive whereas context-sensitive
        FOLLOW sets are precisely what could follow a rule reference.
        For input "i=(3);", here is the derivation:

        stat => ID '=' expr ';'
             => ID '=' atom ('+' atom)* ';'
             => ID '=' '(' expr ')' ('+' atom)* ';'
             => ID '=' '(' atom ')' ('+' atom)* ';'
             => ID '=' '(' INT ')' ('+' atom)* ';'
             => ID '=' '(' INT ')' ';'

        At the "3" token, you'd have a call chain of

          stat -> expr -> atom -> expr -> atom

        What can follow that specific nested ref to atom? Exactly ')'
        as you can see by looking at the derivation of this specific
        input. Contrast this with the FOLLOW(atom)={'+',')',';','.'}.

        You want the exact viable token set when recovering from a
        token mismatch. Upon token mismatch, if LA(1) is member of
        the viable next token set, then you know there is most likely
        a missing token in the input stream. "Insert" one by just not
        throwing an exception.
        """

        return self.combineFollows(True)
    def combineFollows(self, exact):
        followSet = set()
        for idx, localFollowSet in reversed(list(enumerate(self._state.following))):
            followSet |= localFollowSet
            if exact:
                # can we see end of rule?
                if EOR_TOKEN_TYPE in localFollowSet:
                    # Only leave EOR in set if at top (start rule); this lets
                    # us know if have to include follow(start rule); i.e., EOF
                    if idx > 0:
                        followSet.remove(EOR_TOKEN_TYPE)
                else:
                    # can't see end of rule, quit
                    break

        return followSet
    def recoverFromMismatchedToken(self, input, ttype, follow):
        """Attempt to recover from a single missing or extra token.

        EXTRA TOKEN

        LA(1) is not what we are looking for. If LA(2) has the right token,
        however, then assume LA(1) is some extra spurious token. Delete it
        and LA(2) as if we were doing a normal match(), which advances the
        input.

        MISSING TOKEN

        If current token is consistent with what could come after
        ttype then it is ok to 'insert' the missing token, else throw
        exception. For example, input 'i=(3;' is clearly missing the
        ')'. When the parser returns from the nested call to expr, it
        will have call chain:

          stat -> expr -> atom

        and it will be trying to match the ')' at this point in the
        derivation:

          => ID '=' '(' INT ')' ('+' atom)* ';'
                             ^

        match() will see that ';' doesn't match ')' and report a
        mismatched token error. To recover, it sees that LA(1)==';'
        is in the set of tokens that can follow the ')' token
        reference in rule atom. It can assume that you forgot the ')'.
        """

        e = None

        # if next token is what we are looking for then "delete" this token
        if self.mismatchIsUnwantedToken(input, ttype):
            e = UnwantedTokenException(ttype, input)

            self.beginResync()
            input.consume()  # simply delete extra token
            self.endResync()

            # report after consuming so AW sees the token in the exception
            self.reportError(e)

            # we want to return the token we're actually matching
            matchedSymbol = self.getCurrentInputSymbol(input)

            # move past ttype token as if all were ok
            input.consume()
            return matchedSymbol

        # can't recover with single token deletion, try insertion
        if self.mismatchIsMissingToken(input, follow):
            inserted = self.getMissingSymbol(input, e, ttype, follow)
            e = MissingTokenException(ttype, input, inserted)

            # report after inserting so AW sees the token in the exception
            self.reportError(e)
            return inserted

        # even that didn't work; must throw the exception
        e = MismatchedTokenException(ttype, input)
        raise e
    def recoverFromMismatchedSet(self, input, e, follow):
        """Not currently used"""

        if self.mismatchIsMissingToken(input, follow):
            self.reportError(e)
            # we don't know how to conjure up a token for sets yet
            return self.getMissingSymbol(input, e, INVALID_TOKEN_TYPE, follow)

        # TODO do single token deletion like above for Token mismatch
        raise e

    def getCurrentInputSymbol(self, input):
        """
        Match needs to return the current input symbol, which gets put
        into the label for the associated token ref; e.g., x=ID. Token
        and tree parsers need to return different objects. Rather than test
        for input stream type or change the IntStream interface, I use
        a simple method to ask the recognizer to tell me what the current
        input symbol is.

        This is ignored for lexers.
        """
        return None
    def getMissingSymbol(self, input, e, expectedTokenType, follow):
        """Conjure up a missing token during error recovery.

        The recognizer attempts to recover from single missing
        symbols. But, actions might refer to that missing symbol.
        For example, x=ID {f($x);}. The action clearly assumes
        that there has been an identifier matched previously and that
        $x points at that token. If that token is missing, but
        the next token in the stream is what we want we assume that
        this token is missing and we keep going. Because we
        have to return some token to replace the missing token,
        we have to conjure one up. This method gives the user control
        over the tokens returned for missing tokens. Mostly,
        you will want to create something special for identifier
        tokens. For literals such as '{' and ',', the default
        action in the parser or tree parser works. It simply creates
        a CommonToken of the appropriate type. The text will be the token.
        If you change what tokens must be created by the lexer,
        override this method to create the appropriate tokens.
        """
        return None

##     def recoverFromMissingElement(self, input, e, follow):
##         """
##         This code is factored out from mismatched token and mismatched set
##         recovery. It handles "single token insertion" error recovery for
##         both. No tokens are consumed to recover from insertions. Return
##         true if recovery was possible else return false.
##         """
##         if self.mismatchIsMissingToken(input, follow):
##             self.reportError(e)
##             return True
##         # nothing to do; throw exception
##         return False
    def consumeUntil(self, input, tokenTypes):
        """
        Consume tokens until one matches the given token or token set.

        tokenTypes can be a single token type or a set of token types.
        """

        if not isinstance(tokenTypes, (set, frozenset)):
            tokenTypes = frozenset([tokenTypes])

        ttype = input.LA(1)
        while ttype != EOF and ttype not in tokenTypes:
            input.consume()
            ttype = input.LA(1)

    def getRuleInvocationStack(self):
        """
        Return List<String> of the rules in your parser instance
        leading up to a call to this method. You could override if
        you want more details such as the file/line info of where
        in the parser java code a rule is invoked.

        This is very useful for error messages and for context-sensitive
        error recovery.

        You must be careful, if you subclass a generated recognizer.
        The default implementation will only search the module of self
        for rules, but the subclass will not contain any rules.
        You probably want to override this method to look like

        def getRuleInvocationStack(self):
            return self._getRuleInvocationStack(<class>.__module__)

        where <class> is the class of the generated recognizer, e.g.
        the superclass of self.
        """

        return self._getRuleInvocationStack(self.__module__)

    def _getRuleInvocationStack(cls, module):
        """
        A more general version of getRuleInvocationStack where you can
        pass in, for example, a RecognitionException to get its rule
        stack trace. This routine is shared with all recognizers, hence,
        static.

        TODO: move to a utility class or something; weird having lexer call
        this
        """

        # mmmhhh,... perhaps look at the first argument
        # (f_locals[co_varnames[0]]?) and test if it's a (sub)class of
        # requested recognizer...

        rules = []
        for frame in reversed(inspect.stack()):
            code = frame[0].f_code
            codeMod = inspect.getmodule(code)
            if codeMod is None:
                continue

            # skip frames not in requested module
            if codeMod.__name__ != module:
                continue

            # skip some unwanted names
            if code.co_name in ('nextToken', '<module>'):
                continue

            rules.append(code.co_name)

        return rules

    _getRuleInvocationStack = classmethod(_getRuleInvocationStack)
    def getBacktrackingLevel(self):
        return self._state.backtracking

    def getGrammarFileName(self):
        """For debugging and other purposes, might want the grammar name.

        Have ANTLR generate an implementation for this method.
        """
        return self.grammarFileName

    def getSourceName(self):
        raise NotImplementedError

    def toStrings(self, tokens):
        """A convenience method for use most often with template rewrites.

        Convert a List<Token> to List<String>
        """

        if tokens is None:
            return None

        return [token.text for token in tokens]

    def getRuleMemoization(self, ruleIndex, ruleStartIndex):
        """
        Given a rule number and a start token index number, return
        MEMO_RULE_UNKNOWN if the rule has not parsed input starting from
        start index. If this rule has parsed input starting from the
        start index before, then return where the rule stopped parsing.
        It returns the index of the last token matched by the rule.
        """

        if ruleIndex not in self._state.ruleMemo:
            self._state.ruleMemo[ruleIndex] = {}

        return self._state.ruleMemo[ruleIndex].get(
            ruleStartIndex, self.MEMO_RULE_UNKNOWN
            )
    def alreadyParsedRule(self, input, ruleIndex):
        """
        Has this rule already parsed input at the current index in the
        input stream? Return the stop token index or MEMO_RULE_UNKNOWN.
        If we attempted but failed to parse properly before, return
        MEMO_RULE_FAILED.

        This method has a side-effect: if we have seen this input for
        this rule and successfully parsed before, then seek ahead to
        1 past the stop token matched for this rule last time.
        """

        stopIndex = self.getRuleMemoization(ruleIndex, input.index())
        if stopIndex == self.MEMO_RULE_UNKNOWN:
            return False

        if stopIndex == self.MEMO_RULE_FAILED:
            raise BacktrackingFailed
        else:
            input.seek(stopIndex + 1)

        return True

    def memoize(self, input, ruleIndex, ruleStartIndex, success):
        """
        Record whether or not this rule parsed the input at this position
        successfully.
        """

        if success:
            stopTokenIndex = input.index() - 1
        else:
            stopTokenIndex = self.MEMO_RULE_FAILED

        if ruleIndex in self._state.ruleMemo:
            self._state.ruleMemo[ruleIndex][ruleStartIndex] = stopTokenIndex

    def traceIn(self, ruleName, ruleIndex, inputSymbol):
        sys.stdout.write("enter %s %s" % (ruleName, inputSymbol))

##         if self._state.failed:
##             sys.stdout.write(" failed=%s" % self._state.failed)

        if self._state.backtracking > 0:
            sys.stdout.write(" backtracking=%s" % self._state.backtracking)

        sys.stdout.write('\n')

    def traceOut(self, ruleName, ruleIndex, inputSymbol):
        sys.stdout.write("exit %s %s" % (ruleName, inputSymbol))

##         if self._state.failed:
##             sys.stdout.write(" failed=%s" % self._state.failed)

        if self._state.backtracking > 0:
            sys.stdout.write(" backtracking=%s" % self._state.backtracking)

        sys.stdout.write('\n')
class TokenSource(object):
"""
@brief Abstract baseclass for token producers.
    A source of tokens must provide a sequence of tokens via nextToken()
    and also must reveal its source of characters; CommonToken's text is
    computed from a CharStream; it only stores indices into the char stream.
Errors from the lexer are never passed to the parser. Either you want
to keep going or you do not upon token recognition error. If you do not
want to continue lexing then you do not want to continue parsing. Just
throw an exception not under RecognitionException and Java will naturally
toss you all the way out of the recognizers. If you want to continue
lexing then you should not throw an exception to the parser--it has already
requested a token. Keep lexing until you get a valid one. Just report
errors and keep going, looking for a valid token.
"""
def nextToken(self):
"""Return a Token object from your input stream (usually a CharStream).
Do not fail/return upon lexing error; keep chewing on the characters
until you get a good one; errors are not passed through to the parser.
"""
raise NotImplementedError
def __iter__(self):
        The TokenSource is an iterator.
The iteration will not include the final EOF token, see also the note
for the next() method.
"""
return self
def next(self):
"""Return next token or raise StopIteration.
Note that this will raise StopIteration when hitting the EOF token,
so EOF will not be part of the iteration.
"""
token = self.nextToken()
if token is None or token.type == EOF:
raise StopIteration
return token
class Lexer(BaseRecognizer, TokenSource):
"""
@brief Baseclass for generated lexer classes.
    A lexer is a recognizer that draws input symbols from a character stream.
    Lexer grammars result in a subclass of this object. A Lexer object
uses simplified match() and error recovery mechanisms in the interest
of speed.
"""
def __init__(self, input, state=None):
BaseRecognizer.__init__(self, state)
TokenSource.__init__(self)
# Where is the lexer drawing characters from?
self.input = input
def reset(self):
BaseRecognizer.reset(self) # reset all recognizer state variables
if self.input is not None:
# rewind the input
self.input.seek(0)
if self._state is None:
# no shared state work to do
return
# wack Lexer state variables
self._state.token = None
self._state.type = INVALID_TOKEN_TYPE
self._state.channel = DEFAULT_CHANNEL
self._state.tokenStartCharIndex = -1
self._state.tokenStartLine = -1
self._state.tokenStartCharPositionInLine = -1
self._state.text = None
def nextToken(self):
"""
Return a token from this source; i.e., match a token on the char
stream.
"""
while 1:
self._state.token = None
self._state.channel = DEFAULT_CHANNEL
self._state.tokenStartCharIndex = self.input.index()
self._state.tokenStartCharPositionInLine = self.input.charPositionInLine
self._state.tokenStartLine = self.input.line
self._state.text = None
if self.input.LA(1) == EOF:
return EOF_TOKEN
try:
self.mTokens()
if self._state.token is None:
self.emit()
elif self._state.token == SKIP_TOKEN:
continue
return self._state.token
except NoViableAltException, re:
self.reportError(re)
self.recover(re) # throw out current char and try again
except RecognitionException, re:
self.reportError(re)
# match() routine has already called recover()
def skip(self):
"""
Instruct the lexer to skip creating a token for current lexer rule
and look for another token. nextToken() knows to keep looking when
a lexer rule finishes with token set to SKIP_TOKEN. Recall that
if token==null at end of any token rule, it creates one for you
and emits it.
"""
self._state.token = SKIP_TOKEN
def mTokens(self):
"""This is the lexer entry point that sets instance var 'token'"""
# abstract method
raise NotImplementedError
def setCharStream(self, input):
"""Set the char stream and reset the lexer"""
self.input = None
self.reset()
self.input = input
def getSourceName(self):
return self.input.getSourceName()
def emit(self, token=None):
"""
The standard method called to automatically emit a token at the
outermost lexical rule. The token object should point into the
char buffer start..stop. If there is a text override in 'text',
use that to set the token's text. Override this method to emit
custom Token objects.
If you are building trees, then you should also override
Parser or TreeParser.getMissingSymbol().
"""
if token is None:
token = CommonToken(
input=self.input,
type=self._state.type,
channel=self._state.channel,
start=self._state.tokenStartCharIndex,
stop=self.getCharIndex()-1
)
token.line = self._state.tokenStartLine
token.text = self._state.text
token.charPositionInLine = self._state.tokenStartCharPositionInLine
self._state.token = token
return token
def match(self, s):
if isinstance(s, basestring):
for c in s:
if self.input.LA(1) != ord(c):
if self._state.backtracking > 0:
raise BacktrackingFailed
mte = MismatchedTokenException(c, self.input)
self.recover(mte)
raise mte
self.input.consume()
else:
if self.input.LA(1) != s:
if self._state.backtracking > 0:
raise BacktrackingFailed
mte = MismatchedTokenException(unichr(s), self.input)
self.recover(mte) # don't really recover; just consume in lexer
raise mte
self.input.consume()
def matchAny(self):
self.input.consume()
def matchRange(self, a, b):
if self.input.LA(1) < a or self.input.LA(1) > b:
if self._state.backtracking > 0:
raise BacktrackingFailed
mre = MismatchedRangeException(unichr(a), unichr(b), self.input)
self.recover(mre)
raise mre
self.input.consume()
def getLine(self):
return self.input.line
def getCharPositionInLine(self):
return self.input.charPositionInLine
def getCharIndex(self):
"""What is the index of the current character of lookahead?"""
return self.input.index()
def getText(self):
"""
Return the text matched so far for the current token or any
text override.
"""
if self._state.text is not None:
return self._state.text
return self.input.substring(
self._state.tokenStartCharIndex,
self.getCharIndex()-1
)
def setText(self, text):
"""
Set the complete text of this token; it wipes any previous
changes to the text.
"""
self._state.text = text
text = property(getText, setText)
def reportError(self, e):
## TODO: not thought about recovery in lexer yet.
## # if we've already reported an error and have not matched a token
## # yet successfully, don't report any errors.
## if self.errorRecovery:
## #System.err.print("[SPURIOUS] ");
## return;
##
## self.errorRecovery = True
self.displayRecognitionError(self.tokenNames, e)
def getErrorMessage(self, e, tokenNames):
msg = None
if isinstance(e, MismatchedTokenException):
msg = "mismatched character " \
+ self.getCharErrorDisplay(e.c) \
+ " expecting " \
+ self.getCharErrorDisplay(e.expecting)
elif isinstance(e, NoViableAltException):
msg = "no viable alternative at character " \
+ self.getCharErrorDisplay(e.c)
elif isinstance(e, EarlyExitException):
msg = "required (...)+ loop did not match anything at character " \
+ self.getCharErrorDisplay(e.c)
elif isinstance(e, MismatchedNotSetException):
msg = "mismatched character " \
+ self.getCharErrorDisplay(e.c) \
+ " expecting set " \
+ repr(e.expecting)
elif isinstance(e, MismatchedSetException):
msg = "mismatched character " \
+ self.getCharErrorDisplay(e.c) \
+ " expecting set " \
+ repr(e.expecting)
elif isinstance(e, MismatchedRangeException):
msg = "mismatched character " \
+ self.getCharErrorDisplay(e.c) \
+ " expecting set " \
+ self.getCharErrorDisplay(e.a) \
+ ".." \
+ self.getCharErrorDisplay(e.b)
else:
msg = BaseRecognizer.getErrorMessage(self, e, tokenNames)
return msg
def getCharErrorDisplay(self, c):
if c == EOF:
c = '<EOF>'
return repr(c)
def recover(self, re):
"""
        Lexers can normally match any char in their vocabulary after matching
a token, so do the easy thing and just kill a character and hope
it all works out. You can instead use the rule invocation stack
to do sophisticated error recovery if you are in a fragment rule.
"""
self.input.consume()
def traceIn(self, ruleName, ruleIndex):
inputSymbol = "%s line=%d:%s" % (self.input.LT(1),
self.getLine(),
self.getCharPositionInLine()
)
BaseRecognizer.traceIn(self, ruleName, ruleIndex, inputSymbol)
def traceOut(self, ruleName, ruleIndex):
inputSymbol = "%s line=%d:%s" % (self.input.LT(1),
self.getLine(),
self.getCharPositionInLine()
)
BaseRecognizer.traceOut(self, ruleName, ruleIndex, inputSymbol)
class Parser(BaseRecognizer):
"""
@brief Baseclass for generated parser classes.
"""
def __init__(self, lexer, state=None):
BaseRecognizer.__init__(self, state)
self.setTokenStream(lexer)
def reset(self):
BaseRecognizer.reset(self) # reset all recognizer state variables
if self.input is not None:
self.input.seek(0) # rewind the input
def getCurrentInputSymbol(self, input):
return input.LT(1)
def getMissingSymbol(self, input, e, expectedTokenType, follow):
tokenText = "<missing " + self.tokenNames[expectedTokenType] + ">"
t = CommonToken(type=expectedTokenType, text=tokenText)
current = input.LT(1)
if current.type == EOF:
current = input.LT(-1)
if current is not None:
t.line = current.line
t.charPositionInLine = current.charPositionInLine
t.channel = DEFAULT_CHANNEL
return t
def setTokenStream(self, input):
"""Set the token stream and reset the parser"""
self.input = None
self.reset()
self.input = input
def getTokenStream(self):
return self.input
def getSourceName(self):
return self.input.getSourceName()
def traceIn(self, ruleName, ruleIndex):
BaseRecognizer.traceIn(self, ruleName, ruleIndex, self.input.LT(1))
def traceOut(self, ruleName, ruleIndex):
BaseRecognizer.traceOut(self, ruleName, ruleIndex, self.input.LT(1))
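The memoization bookkeeping documented in getRuleMemoization / alreadyParsedRule / memoize above can be modelled in isolation. MiniMemo below is a hypothetical, self-contained sketch of that table (ruleIndex -> {startTokenIndex: stopTokenIndex}); it is illustrative only, not part of the ANTLR runtime.

```python
# Hypothetical stand-alone model of BaseRecognizer's rule-memoization table.
MEMO_RULE_UNKNOWN = -1
MEMO_RULE_FAILED = -2


class MiniMemo(object):
    def __init__(self):
        # ruleIndex -> {startTokenIndex: stopTokenIndex or MEMO_RULE_FAILED}
        self.rule_memo = {}

    def get_rule_memoization(self, rule_index, rule_start_index):
        # Unknown until the rule has parsed input starting at this index.
        return self.rule_memo.setdefault(rule_index, {}).get(
            rule_start_index, MEMO_RULE_UNKNOWN)

    def memoize(self, rule_index, rule_start_index, stop_index, success):
        # Record where the rule stopped, or that it failed here.
        self.rule_memo.setdefault(rule_index, {})[rule_start_index] = (
            stop_index if success else MEMO_RULE_FAILED)


memo = MiniMemo()
assert memo.get_rule_memoization(7, 0) == MEMO_RULE_UNKNOWN
memo.memoize(7, 0, stop_index=4, success=True)   # rule 7 matched tokens 0..4
assert memo.get_rule_memoization(7, 0) == 4
memo.memoize(7, 5, stop_index=0, success=False)  # failed attempt at index 5
assert memo.get_rule_memoization(7, 5) == MEMO_RULE_FAILED
```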
| 34.842105 | 84 | 0.609507 |
c005aaac6515d0b281c170f61dcaac050b225fa8 | 3,359 | py | Python | chainspotter/chain.py | uSasha/chainspotter | 102753ff32e4181586ce7302bc31658f5ab8e536 | [
"BSD-2-Clause"
] | null | null | null | chainspotter/chain.py | uSasha/chainspotter | 102753ff32e4181586ce7302bc31658f5ab8e536 | [
"BSD-2-Clause"
] | null | null | null | chainspotter/chain.py | uSasha/chainspotter | 102753ff32e4181586ce7302bc31658f5ab8e536 | [
"BSD-2-Clause"
] | null | null | null | import inspect
import logging
import time
from typing import List
import redis
logger = logging.getLogger('drainpipe')
class ClickChain:
"""
    Redis Streams wrapper to track, store, and query user-item interaction history by count or by time window
"""
def __init__(self, prefix: str, redis_connection: redis.client.Redis, limit: int = None) -> None:
"""
:param prefix: stream name = prefix + '_' + user_id
:param redis_connection: redis connection pool
:param limit: user history length
"""
self.redis = redis_connection
self.prefix = prefix
self.limit = limit
logging.info('New clickchain created, prefix: %s, limit: %s', prefix, limit)
def last_n_pcs(self, user_id: int, count: int = None) -> List[int]:
"""
Get last N items used by user
:param user_id: user ID
:param count: number of items
        :return: list of item IDs, ordered oldest to newest
"""
stream_content = self.redis.xrevrange(f'{self.prefix}_{user_id}', count=count)
return [int(log[b'item']) for timestamp, log in stream_content][::-1]
def last_n_hours(self, user_id: int, hours: int = 24) -> List[int]:
"""
Get items used by user in last N hours
:param user_id: user ID
:param hours: number of hours
        :return: list of item IDs, ordered oldest to newest
"""
now_ms = int(time.time() * 1000)
stream_content = self.redis.xrange(f'{self.prefix}_{user_id}', min=now_ms - hours * 60 * 60 * 1000)
return [int(log[b'item']) for timestamp, log in stream_content]
def add(self, user_id: int, item: int) -> None:
"""
Add item to user history
:param user_id: user ID
:param item: item ID
"""
self.redis.xadd(f'{self.prefix}_{user_id}', {'item': item}, maxlen=self.limit)
logging.debug('Added new item %s for user %s clickchain', item, user_id)
    def __iter__(self):
        """
        Iterate over the user IDs stored in Redis
        :return: generator yielding user IDs as ints
        """
cursor = 0
streams = [None]
while streams:
cursor, streams = self.redis.scan(cursor=cursor, match=self.prefix + '*', count=1000)
for stream in streams:
yield int(stream.decode().replace(self.prefix + '_', ''))
def to_chain(prefix: str, redis_connection: redis.client.Redis, limit: int = None,
user_id_arg: str = 'user_id', item_id_arg: str = 'item_id'):
"""
Decorator to store user-item interactions to Redis Streams
:param prefix: stream name = prefix + '_' + user_id
:param redis_connection: redis connection pool
:param limit: user history length
:param user_id_arg: user_id argument name in wrapped function
:param item_id_arg: item_id argument name in wrapped function
"""
def dump_args(func):
def wrapper(*args, **kwargs):
func_args = inspect.signature(func).bind(*args, **kwargs).arguments
user = func_args[user_id_arg]
item = func_args[item_id_arg]
redis_connection.xadd(f'{prefix}_{user}', {'item': item}, maxlen=limit)
logging.debug('Added new item %s for user %s clickchain', item, user)
return func(*args, **kwargs)
return wrapper
return dump_args
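The to_chain decorator above relies on inspect.signature(...).bind(...).arguments to resolve both positional and keyword call styles into one name-to-value mapping. Below is a minimal self-contained sketch of that capture trick, with an in-memory events list standing in for the Redis stream; track, recommend, and events are illustrative names, not part of chainspotter.

```python
# Sketch of the argument-capture trick used by to_chain: bind() normalizes
# mixed positional/keyword calls into a single mapping keyed by parameter name.
import inspect

events = []  # stand-in for the Redis stream


def track(user_id_arg='user_id', item_id_arg='item_id'):
    def decorator(func):
        def wrapper(*args, **kwargs):
            bound = inspect.signature(func).bind(*args, **kwargs).arguments
            events.append((bound[user_id_arg], bound[item_id_arg]))
            return func(*args, **kwargs)
        return wrapper
    return decorator


@track()
def recommend(user_id, item_id):
    return 'user %s saw item %s' % (user_id, item_id)


recommend(1, item_id=42)          # mixed positional/keyword call
recommend(user_id=2, item_id=7)   # keyword-only call
assert events == [(1, 42), (2, 7)]
```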
| 37.741573 | 107 | 0.615064 |
e441c3a79d1f5cdf2451bbe11db39dff013a1a00 | 5,266 | py | Python | deep_sort.py | absallh/A_yolov3 | 550ec41de42b8efe638e887c51a568189947e049 | [
"Apache-2.0"
] | 6 | 2019-12-30T14:26:23.000Z | 2021-09-14T04:48:20.000Z | deep_sort.py | absallh/A_yolov3 | 550ec41de42b8efe638e887c51a568189947e049 | [
"Apache-2.0"
] | 1 | 2020-01-13T10:44:30.000Z | 2020-12-08T10:54:10.000Z | deep_sort.py | absallh/A_yolov3 | 550ec41de42b8efe638e887c51a568189947e049 | [
"Apache-2.0"
] | 3 | 2020-04-01T06:10:32.000Z | 2020-10-18T05:02:16.000Z | import os
import cv2
import time
import argparse
import torch
import numpy as np
from predict import InferYOLOv3
from utils.utils import xyxy2xywh
from deep_sort import DeepSort
from utils.utils_sort import COLORS_10, draw_bboxes
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
class Detector(object):
def __init__(self, args):
self.args = args
if args.display:
cv2.namedWindow("test", cv2.WINDOW_NORMAL)
cv2.resizeWindow("test", args.display_width, args.display_height)
device = torch.device(
'cuda') if torch.cuda.is_available() else torch.device('cpu')
self.vdo = cv2.VideoCapture()
self.yolo3 = InferYOLOv3(args.yolo_cfg,
args.img_size,
args.yolo_weights,
args.data_cfg,
device,
conf_thres=args.conf_thresh,
nms_thres=args.nms_thresh)
self.deepsort = DeepSort(args.deepsort_checkpoint)
self.class_names = self.yolo3.classes
def __enter__(self):
assert os.path.isfile(self.args.VIDEO_PATH), "Error: path error"
self.vdo.open(self.args.VIDEO_PATH)
self.im_width = int(self.vdo.get(cv2.CAP_PROP_FRAME_WIDTH))
self.im_height = int(self.vdo.get(cv2.CAP_PROP_FRAME_HEIGHT))
if self.args.save_path:
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
self.output = cv2.VideoWriter(self.args.save_path, fourcc, 20,
(self.im_width, self.im_height))
assert self.vdo.isOpened()
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
if exc_type:
print(exc_type, exc_value, exc_traceback)
def detect(self):
frame_cnt = -1
while self.vdo.grab():
frame_cnt += 1
            # skip every third frame to speed up processing
if frame_cnt % 3 == 0:
continue
start = time.time()
_, ori_im = self.vdo.retrieve()
# im = cv2.cvtColor(ori_im, cv2.COLOR_BGR2RGB)
im = ori_im
t1_begin = time.time()
bbox_xxyy, cls_conf, cls_ids = self.yolo3.predict(im)
t1_end = time.time()
t2_begin = time.time()
if bbox_xxyy is not None:
# select class cow
# mask = cls_ids == 0
# bbox_xxyy = bbox_xxyy[mask]
# bbox_xxyy[:, 3:] *= 1.2
# cls_conf = cls_conf[mask]
bbox_xcycwh = xyxy2xywh(bbox_xxyy)
outputs = self.deepsort.update(bbox_xcycwh, cls_conf, im)
if len(outputs) > 0:
bbox_xyxy = outputs[:, :4]
identities = outputs[:, -1]
ori_im = draw_bboxes(ori_im, bbox_xyxy, identities)
t2_end = time.time()
end = time.time()
print(
"frame:%d|det:%.4f|sort:%.4f|total:%.4f|det p:%.2f%%|fps:%.2f"
% (frame_cnt, (t1_end - t1_begin), (t2_end - t2_begin),
(end - start), ((t1_end - t1_begin) * 100 /
((end - start))), (1 / (end - start))))
if self.args.display:
cv2.imshow("test", ori_im)
cv2.waitKey(1)
if self.args.save_path:
self.output.write(ori_im)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("VIDEO_PATH", type=str)
parser.add_argument("--yolo_cfg",
type=str,
default="cfg/yolov3-tiny-cbam.cfg"
) #"uolov3/cfg/yolov3-1cls-d1.cfg")
parser.add_argument("--yolo_weights",
type=str,
default="weights/best.pt"
) #"uolov3/weights/yolov3-1cls-d1.pt")
parser.add_argument("--yolo_names",
type=str,
default="data/cow.names")
parser.add_argument("--conf_thresh", type=float, default=0.5)
parser.add_argument("--nms_thresh", type=float, default=0.4)
parser.add_argument("--deepsort_checkpoint",
type=str,
default="deep_sort/deep/checkpoint/ckpt.t7")
parser.add_argument("--max_dist", type=float, default=0.2)
parser.add_argument("--ignore_display",
dest="display",
action="store_false")
parser.add_argument("--display_width", type=int, default=800)
parser.add_argument("--display_height", type=int, default=600)
parser.add_argument("--save_path", type=str, default="demo.avi")
parser.add_argument("--data_cfg",
type=str,
default="data/dataset4.data")
parser.add_argument("--img_size", type=int, default=416, help="img size")
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
with Detector(args) as det:
det.detect()
os.system("ffmpeg -y -i demo.avi -r 10 -b:a 32k %s_output.mp4" %
(os.path.basename(args.VIDEO_PATH).split('.')[0]))
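Before the DeepSORT update above, detections are converted from corner format to center format via xyxy2xywh. Below is a hypothetical pure-Python version of that conversion for a single box; the real helper in utils.utils operates on whole numpy/torch arrays.

```python
# Corner box (x1, y1, x2, y2) -> center box (xc, yc, w, h).
def xyxy_to_xcycwh(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1)


assert xyxy_to_xcycwh((10, 20, 30, 60)) == (20.0, 40.0, 20, 40)
```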
| 36.825175 | 78 | 0.539878 |
4e3199120c6ceb1b294ec53f6a7d114bb59926a9 | 4,496 | py | Python | src/conda_package_handling/conda_fmt.py | leej3/conda-package-handling | 75b15b506d9f4ebfc5f3657077814dc0e4c33eb2 | [
"BSD-3-Clause"
] | null | null | null | src/conda_package_handling/conda_fmt.py | leej3/conda-package-handling | 75b15b506d9f4ebfc5f3657077814dc0e4c33eb2 | [
"BSD-3-Clause"
] | 1 | 2021-08-23T05:41:14.000Z | 2021-08-23T05:41:14.000Z | src/conda_package_handling/conda_fmt.py | leej3/conda-package-handling | 75b15b506d9f4ebfc5f3657077814dc0e4c33eb2 | [
"BSD-3-Clause"
] | 1 | 2019-09-18T08:08:07.000Z | 2019-09-18T08:08:07.000Z | """The 'new' conda format, introduced in late 2018/early 2019. Spec at
https://anaconda.atlassian.net/wiki/spaces/AD/pages/90210540/Conda+package+format+v2"""
import json
import os
from tempfile import NamedTemporaryFile
try:
from zipfile import ZipFile, BadZipFile, ZIP_STORED
except ImportError:
# py27 compat
from zipfile import ZipFile, ZIP_STORED, BadZipfile as BadZipFile
from . import utils
from .exceptions import InvalidArchiveError
from .interface import AbstractBaseFormat
from .tarball import create_compressed_tarball, _tar_xf
CONDA_PACKAGE_FORMAT_VERSION = 2
DEFAULT_COMPRESSION_TUPLE = ('.tar.zst', 'zstd', 'zstd:compression-level=22')
def _lookup_component_filename(zf, file_id, component_name):
contents = zf.namelist()
component_filename_without_ext = '-'.join((component_name, file_id))
component_filename = [_ for _ in contents if
_.startswith(component_filename_without_ext)]
return component_filename
def _extract_component(fn, file_id, component_name, dest_dir=os.getcwd()):
try:
with ZipFile(fn, compression=ZIP_STORED) as zf:
with utils.TemporaryDirectory(prefix=dest_dir) as tmpdir:
with utils.tmp_chdir(tmpdir):
component_filename = _lookup_component_filename(zf, file_id, component_name)
if not component_filename:
raise RuntimeError("didn't find {} component in {}"
.format(component_name, fn))
component_filename = component_filename[0]
zf.extract(component_filename)
_tar_xf(component_filename, dest_dir)
except BadZipFile as e:
raise InvalidArchiveError(fn, str(e))
class CondaFormat_v2(AbstractBaseFormat):
"""If there's another conda format or breaking changes, please create a new class and keep this
one, so that handling of v2 stays working."""
@staticmethod
def extract(fn, dest_dir, **kw):
components = utils.ensure_list(kw.get('components')) or ('info', 'pkg')
file_id = os.path.basename(fn).replace('.conda', '')
if not os.path.isabs(fn):
fn = os.path.normpath(os.path.join(os.getcwd(), fn))
if not os.path.isdir(dest_dir):
os.makedirs(dest_dir)
for component in components:
_extract_component(fn, file_id, component, dest_dir)
@staticmethod
def extract_info(fn, dest_dir=None):
return CondaFormat_v2.extract(fn, dest_dir, components=['info'])
@staticmethod
def create(prefix, file_list, out_fn, out_folder=os.getcwd(), **kw):
if os.path.isabs(out_fn):
out_folder = os.path.dirname(out_fn)
out_fn = os.path.basename(out_fn)
conda_pkg_fn = os.path.join(out_folder, out_fn)
out_fn = out_fn.replace('.conda', '')
pkg_files = utils.filter_info_files(file_list, prefix)
info_files = set(file_list) - set(pkg_files)
ext, comp_filter, filter_opts = kw.get('compression_tuple') or DEFAULT_COMPRESSION_TUPLE
with utils.TemporaryDirectory(prefix=out_folder) as tmpdir:
info_tarball = create_compressed_tarball(prefix, info_files, tmpdir, 'info-' + out_fn,
ext, comp_filter, filter_opts)
pkg_tarball = create_compressed_tarball(prefix, pkg_files, tmpdir, 'pkg-' + out_fn,
ext, comp_filter, filter_opts)
pkg_metadata = {'conda_pkg_format_version': CONDA_PACKAGE_FORMAT_VERSION}
with ZipFile(conda_pkg_fn, 'w', compression=ZIP_STORED) as zf:
with NamedTemporaryFile(mode='w', delete=False) as tf:
json.dump(pkg_metadata, tf)
zf.write(tf.name, 'metadata.json')
for pkg in (info_tarball, pkg_tarball):
zf.write(pkg, os.path.basename(pkg))
utils.rm_rf(tf.name)
return conda_pkg_fn
@staticmethod
def get_pkg_details(in_file):
stat_result = os.stat(in_file)
size = stat_result.st_size
# open the file twice because we need to start from the beginning each time
with open(in_file, 'rb') as f:
md5 = utils.md5_checksum(f)
with open(in_file, 'rb') as f:
sha256 = utils.sha256_checksum(f)
return {"size": size, "md5": md5, "sha256": sha256}
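The component lookup performed by _lookup_component_filename above depends only on the v2 naming convention: a .conda file is a plain ZIP whose payload members are named '<component>-<file_id>.<ext>' alongside a metadata.json. A self-contained sketch against a toy in-memory archive follows; the member names below are made up for illustration.

```python
# Build a toy v2-style archive in memory and look up a component by prefix.
import io
from zipfile import ZipFile, ZIP_STORED

buf = io.BytesIO()
with ZipFile(buf, 'w', compression=ZIP_STORED) as zf:
    zf.writestr('metadata.json', '{"conda_pkg_format_version": 2}')
    zf.writestr('info-demo-1.0-0.tar.zst', b'...')
    zf.writestr('pkg-demo-1.0-0.tar.zst', b'...')


def lookup_component(zf, file_id, component_name):
    # Mirrors _lookup_component_filename: match members by '<component>-<id>'.
    prefix = '-'.join((component_name, file_id))
    return [name for name in zf.namelist() if name.startswith(prefix)]


with ZipFile(buf) as zf:
    assert lookup_component(zf, 'demo-1.0-0', 'info') == ['info-demo-1.0-0.tar.zst']
```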
| 43.650485 | 99 | 0.643016 |
d9af5c50ebc758ce23967ceeae4676133fe937bb | 1,443 | py | Python | chia_tea/watchdog/collection/api/update_from_wallet.py | Tea-n-Tech/chia-tea | a5bd327b9d5e048e55e9f5d8cefca2dbcd5eae96 | [
"BSD-3-Clause"
] | 6 | 2021-08-05T21:31:15.000Z | 2021-11-15T20:54:25.000Z | chia_tea/watchdog/collection/api/update_from_wallet.py | Tea-n-Tech/chia-tea | a5bd327b9d5e048e55e9f5d8cefca2dbcd5eae96 | [
"BSD-3-Clause"
] | 49 | 2021-08-05T19:33:08.000Z | 2022-03-30T19:33:38.000Z | chia_tea/watchdog/collection/api/update_from_wallet.py | Tea-n-Tech/chia-tea | a5bd327b9d5e048e55e9f5d8cefca2dbcd5eae96 | [
"BSD-3-Clause"
] | 1 | 2022-01-09T17:08:32.000Z | 2022-01-09T17:08:32.000Z | from chia.rpc.wallet_rpc_client import WalletRpcClient
from chia.util.config import load_config
from chia.util.default_root import DEFAULT_ROOT_PATH
from chia.util.ints import uint16
from ....models.ChiaWatchdog import ChiaWatchdog
from .shared_settings import API_EXCEPTIONS
async def update_from_wallet(chia_dog: ChiaWatchdog):
"""Updates the chia dog with wallet data
Parameters
----------
chia_dog : ChiaWatchdog
watchdog instance to be modified
"""
try:
config = load_config(DEFAULT_ROOT_PATH, "config.yaml", exit_on_error=False)
self_hostname = config["self_hostname"]
wallet_rpc_port = config["wallet"]["rpc_port"]
wallet_client = await WalletRpcClient.create(
self_hostname, uint16(wallet_rpc_port), DEFAULT_ROOT_PATH, config
)
chia_dog.wallet_service.n_wallets = len(await wallet_client.get_connections())
chia_dog.wallet_service.is_running = True
chia_dog.wallet_service.is_synced = await wallet_client.get_synced()
# pylint: disable=catching-non-exception
except API_EXCEPTIONS:
chia_dog.wallet_service.n_wallets = 0
chia_dog.wallet_service.is_running = False
chia_dog.wallet_service.is_synced = False
finally:
if "wallet_client" in locals():
wallet_client.close()
await wallet_client.await_closed()
chia_dog.wallet_service.is_ready = True
| 34.357143 | 86 | 0.71587 |
4c6366e958f30ced5fc8c6c5a75b4e95867a4f29 | 4,079 | py | Python | tests/test_google_lifesciences.py | cokelaer/snakemake | d030443548a9851a82bcce618b24a9e24a8b545d | [
"MIT"
] | null | null | null | tests/test_google_lifesciences.py | cokelaer/snakemake | d030443548a9851a82bcce618b24a9e24a8b545d | [
"MIT"
] | null | null | null | tests/test_google_lifesciences.py | cokelaer/snakemake | d030443548a9851a82bcce618b24a9e24a8b545d | [
"MIT"
] | null | null | null | import os
import sys
import tempfile
from google.cloud import storage
sys.path.insert(0, os.path.dirname(__file__))
from common import *
def has_google_credentials():
return "GOOGLE_APPLICATION_CREDENTIALS" in os.environ
google_credentials = pytest.mark.skipif(
not has_google_credentials(),
reason="Skipping google lifesciences tests because "
"GOOGLE_APPLICATION_CREDENTIALS not found in the environment.",
)
def cleanup_google_storage(prefix, bucket_name="snakemake-testing"):
"""Given a storage prefix and a bucket, recursively delete files there
This is intended to run after testing to ensure that
the bucket is cleaned up.
Arguments:
prefix (str) : the "subfolder" or prefix for some files in the buckets
bucket_name (str) : the name of the bucket, default snakemake-testing
"""
client = storage.Client()
bucket = client.get_bucket(bucket_name)
blobs = bucket.list_blobs(prefix="source")
for blob in blobs:
blob.delete()
    # deleting a non-empty bucket via the API raises an exception, so empty it with gsutil first
shell("gsutil -m rm -r gs://{bucket.name}/* || true")
bucket.delete()
def create_google_storage(bucket_name="snakemake-testing"):
"""Given a bucket name, create the Google storage bucket,
intending to be used for testing and then cleaned up by
cleanup_google_storage
Arguments:
bucket_name (str) : the name of the bucket, default snakemake-testing
"""
client = storage.Client()
return client.create_bucket(bucket_name)
@google_credentials
def test_google_lifesciences():
bucket_name = "snakemake-testing-%s" % next(tempfile._get_candidate_names())
create_google_storage(bucket_name)
storage_prefix = "test_google_lifesciences"
workdir = dpath("test_google_lifesciences")
try:
run(
workdir,
use_conda=True,
default_remote_prefix="%s/%s" % (bucket_name, storage_prefix),
google_lifesciences=True,
google_lifesciences_cache=False,
preemption_default=None,
preemptible_rules=["pack=1"],
)
finally:
cleanup_google_storage(storage_prefix, bucket_name)
@pytest.mark.skip(
reason="Cannot test using touch with a remote prefix until the container image is deployed."
)
@google_credentials
def test_touch_remote_prefix():
bucket_name = "snakemake-testing-%s" % next(tempfile._get_candidate_names())
create_google_storage(bucket_name)
storage_prefix = "test_touch_remote_prefix"
workdir = dpath("test_touch_remote_prefix")
try:
run(
workdir,
use_conda=True,
default_remote_prefix="%s/%s" % (bucket_name, storage_prefix),
google_lifesciences=True,
google_lifesciences_cache=False,
)
finally:
cleanup_google_storage(storage_prefix, bucket_name)
@google_credentials
def test_cloud_checkpoints_issue574():
"""see Github issue #574"""
bucket_name = "snakemake-testing-%s" % next(tempfile._get_candidate_names())
create_google_storage(bucket_name)
storage_prefix = "test_cloud_checkpoints_issue574"
workdir = dpath("test_cloud_checkpoints_issue574")
try:
run(
workdir,
use_conda=True,
default_remote_prefix="%s/%s" % (bucket_name, storage_prefix),
google_lifesciences=True,
google_lifesciences_cache=False,
)
finally:
cleanup_google_storage(storage_prefix, bucket_name)
def test_github_issue1396():
bucket_name = "snakemake-testing-%s" % next(tempfile._get_candidate_names())
create_google_storage(bucket_name)
storage_prefix = "test_github_issue1396"
workdir = dpath("test_github_issue1396")
try:
run(
workdir,
default_remote_prefix="%s/%s" % (bucket_name, storage_prefix),
google_lifesciences=True,
google_lifesciences_cache=False,
dryrun=True,
)
finally:
cleanup_google_storage(storage_prefix, bucket_name)
| 31.620155 | 96 | 0.690856 |
e2a9863d763d20b90a83b8fccf54a1db4e1d1970 | 603 | py | Python | RankingSystem.py | PapaDoraemon/marina-AI | 9f9281b5decf889a55d6c1bdbfe3a62adadd47f9 | [
"Apache-2.0"
] | 1 | 2020-09-04T12:42:54.000Z | 2020-09-04T12:42:54.000Z | RankingSystem.py | PapaDoraemon/marina-AI | 9f9281b5decf889a55d6c1bdbfe3a62adadd47f9 | [
"Apache-2.0"
] | null | null | null | RankingSystem.py | PapaDoraemon/marina-AI | 9f9281b5decf889a55d6c1bdbfe3a62adadd47f9 | [
"Apache-2.0"
] | null | null | null |
class RankingSystem():
def __init__(self, scoring_system, **kwargs):
self.scoring_system = scoring_system
self.setup(**kwargs)
def setup(self, **kwargs):
"""
        This has to be implemented to set up a specific ranking system
"""
pass
def scoreCandidates(self):
"""
This has to be implemented for every specific ranking system.
        This method provides the answers to the scoring system's questions.
        Returns the score.
"""
pass
def getScoringSystem(self):
return self.scoring_system
| 23.192308 | 79 | 0.610282 |
22af828b83b93d22ce0795dc1a646b22e682f6bb | 2,664 | py | Python | snII_cosmo_tools/style.py | chvogl/snII_cosmo_tools | b688a4be96ce82a25f71964a9cbdbc7460dd140f | [
"BSD-3-Clause"
] | null | null | null | snII_cosmo_tools/style.py | chvogl/snII_cosmo_tools | b688a4be96ce82a25f71964a9cbdbc7460dd140f | [
"BSD-3-Clause"
] | null | null | null | snII_cosmo_tools/style.py | chvogl/snII_cosmo_tools | b688a4be96ce82a25f71964a9cbdbc7460dd140f | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
from snII_cosmo_tools.tns_downloader import TNSDownloader
survey_link_dict = {
'ALeRCE': 'http://alerce.online/vue/object/',
'ZTF': 'http://alerce.online/vue/object/',
'ATLAS': 'https://star.pst.qub.ac.uk/sne/atlas4/',
'Pan-STARRS1': 'https://star.pst.qub.ac.uk/sne/ps13pi/psdb/',
'GaiaAlerts': 'http://gsaweb.ast.cam.ac.uk/alerts/alert/'
}
def highlight_dec_cut(s):
dec = [float(s1.split(':')[0]) for s1 in s]
above_cut = np.array(dec) >= 25.
return ['background-color: red' if v else '' for v in above_cut]
def highlight_visibility(s):
vis = []
for s1 in s:
if s1 < 30.:
vis.append('background-color: red')
elif s1 < 45:
vis.append('background-color: orange')
else:
vis.append('background-color: green')
return vis
def highlight_general_traffic_light(s, levels=[30., 45., 90.]):
prop = []
for s1 in s:
if s1 < levels[0]:
prop.append('background-color: red')
elif np.logical_and(s1 < levels[1], s1 < levels[2]):
prop.append('background-color: orange')
else:
prop.append('background-color: green')
return prop
def highlight_gal_lat(s):
vis = []
for s1 in s:
s1 = np.abs(s1)
if s1 < 20.:
vis.append('background-color: red')
elif s1 < 30:
vis.append('background-color: orange')
else:
vis.append('background-color: green')
return vis
def insert_tns_links_into_df(targets):
links = []
for name in targets.Name:
link = TNSDownloader.get_object_link(name)
links.append('<a href="{1}" target="_blank">{0}</a>'.format(
name, link))
names = targets.Name.copy()
targets['Name'] = links
return targets, names
def insert_survey_links_into_df(targets):
links = []
for groups, name in zip(targets['Discovering Group/s'],
targets['Disc. Internal Name']):
groups = groups.split(',')
group = groups[0]
if group in survey_link_dict and type(name) is str:
link = survey_link_dict[group]
if group not in ['ATLAS', 'Pan-STARRS1']:
link += name
link = '<a href="{1}" target="_blank">{0}</a>'.format(name, link)
else:
link = name
links.append(link)
targets['Disc. Internal Name'] = links
return targets
def get_styled_html_table(targets):
return targets.style.apply(
highlight_visibility, subset=['max_alt']).apply(
highlight_gal_lat, subset=['gal_lat']).hide_index().render()
| 29.932584 | 77 | 0.584084 |
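The three highlighters in `style.py` share one thresholding pattern: red below a low cutoff, orange below a high cutoff, green otherwise. A dependency-free sketch of that bucketing, using a hypothetical `bucket_color` helper that is not part of the module:

```python
def bucket_color(value, low=30.0, high=45.0):
    """Map a numeric value to a CSS background color, mirroring highlight_visibility."""
    if value < low:
        return 'background-color: red'
    if value < high:
        return 'background-color: orange'
    return 'background-color: green'

print([bucket_color(v) for v in (20.0, 40.0, 50.0)])
```

The pandas versions return one such string per cell so that `Styler.apply` can attach them column-wise.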
023412d3ace383dfe637364d7db942b5fcae29fa | 17,317 | py | Python | src/ralph/discovery/admin.py | fossabot/ralph | 9ceeec52e3fc85a589c2e5766597a7c67c4e4aa2 | [
"Apache-2.0"
] | null | null | null | src/ralph/discovery/admin.py | fossabot/ralph | 9ceeec52e3fc85a589c2e5766597a7c67c4e4aa2 | [
"Apache-2.0"
] | 1 | 2019-08-14T10:03:45.000Z | 2019-08-14T10:03:45.000Z | src/ralph/discovery/admin.py | fossabot/ralph | 9ceeec52e3fc85a589c2e5766597a7c67c4e4aa2 | [
"Apache-2.0"
] | 1 | 2019-08-14T09:59:42.000Z | 2019-08-14T09:59:42.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import re
import logging
from django import forms
from django.conf import settings
from django.contrib import admin
from django.utils.translation import ugettext_lazy as _
from lck.django.common.admin import (
ForeignKeyAutocompleteTabularInline,
ModelAdmin,
)
from django.core.exceptions import ValidationError
from django.contrib import messages
from django.template.defaultfilters import slugify
from ralph.business.admin import RolePropertyValueInline
from ralph.discovery import models
from ralph.discovery import models_device
from ralph.export_to_ng.admin import Ralph3SyncAdminMixin
from ralph.ui.forms.network import NetworkForm
from ralph.ui.widgets import ReadOnlyWidget
SAVE_PRIORITY = 215
HOSTS_NAMING_TEMPLATE_REGEX = re.compile(r'<[0-9]+,[0-9]+>.*\.[a-zA-Z0-9]+')
def copy_network(modeladmin, request, queryset):
for net in queryset:
name = 'Copy of %s' % net.name
address = net.address.rsplit('/', 1)[0] + '/32'
new_net = models.Network(
name=name,
address=address,
gateway=net.gateway,
kind=net.kind,
data_center=net.data_center,
)
try:
new_net.save()
except ValidationError:
messages.error(request, "Network %s already exists." % address)
except Exception:
message = "Failed to create %s." % address
messages.error(request, message)
logging.exception(message)
else:
new_net.terminators = net.terminators.all()
new_net.save()
copy_network.short_description = "Copy network"
class NetworkAdmin(Ralph3SyncAdminMixin, ModelAdmin):
def address(self):
return self.address
address.short_description = _("network address")
address.admin_order_field = 'min_ip'
def gateway(self):
return self.gateway
gateway.short_description = _("gateway address")
gateway.admin_order_field = 'gateway_as_int'
def terms(self):
return ", ".join([n.name for n in self.terminators.order_by('name')])
terms.short_description = _("network terminators")
list_display = ('name', 'vlan', address, gateway, terms, 'data_center',
'environment', 'kind')
list_filter = (
'data_center', 'terminators', 'environment', 'kind', 'dhcp_broadcast',
)
list_per_page = 250
radio_fields = {
'data_center': admin.HORIZONTAL,
'environment': admin.HORIZONTAL,
'kind': admin.HORIZONTAL,
}
search_fields = ('name', 'address', 'vlan')
filter_horizontal = ('terminators', 'racks', 'custom_dns_servers')
save_on_top = True
form = NetworkForm
actions = [copy_network]
admin.site.register(models.Network, NetworkAdmin)
class NetworkKindAdmin(ModelAdmin):
list_display = ('name',)
search_fields = ('name',)
admin.site.register(models.NetworkKind, NetworkKindAdmin)
class NetworkTerminatorAdmin(ModelAdmin):
list_display = ('name',)
search_fields = ('name',)
admin.site.register(models.NetworkTerminator, NetworkTerminatorAdmin)
class DataCenterAdmin(ModelAdmin):
list_display = ('name',)
search_fields = ('name',)
admin.site.register(models.DataCenter, DataCenterAdmin)
class EnvironmentAdminForm(forms.ModelForm):
class Meta:
model = models.Environment
def clean_name(self):
name = self.cleaned_data['name'].strip()
if slugify(name) != name.lower():
raise forms.ValidationError(
_('You can use only this characters: [a-zA-Z0-9_-]')
)
return name
def clean_hosts_naming_template(self):
template = self.cleaned_data['hosts_naming_template']
        if re.search(r"[^a-z0-9<>,\.|-]", template):
raise forms.ValidationError(
_("Please remove disallowed characters."),
)
for part in template.split("|"):
if not HOSTS_NAMING_TEMPLATE_REGEX.search(part):
raise forms.ValidationError(
_(
"Incorrect template structure. Please see example "
"below.",
),
)
return template
class EnvironmentAdmin(ModelAdmin):
list_display = (
'name',
'data_center',
'queue',
'domain',
'hosts_naming_template',
'next_server'
)
search_fields = ('name',)
form = EnvironmentAdminForm
list_filter = ('data_center', 'queue')
admin.site.register(models.Environment, EnvironmentAdmin)
class DiscoveryQueueAdmin(ModelAdmin):
list_display = ('name',)
search_fields = ('name',)
admin.site.register(models.DiscoveryQueue, DiscoveryQueueAdmin)
class IPAddressInlineFormset(forms.models.BaseInlineFormSet):
def get_queryset(self):
qs = super(IPAddressInlineFormset, self).get_queryset().filter(
is_management=False,
)
return qs
class IPAddressInline(ForeignKeyAutocompleteTabularInline):
formset = IPAddressInlineFormset
model = models.IPAddress
readonly_fields = ('snmp_name', 'last_seen')
exclude = (
'created', 'modified', 'dns_info', 'http_family', 'snmp_community',
'last_puppet', 'is_management', 'scan_summary',
)
edit_separately = True
extra = 0
related_search_fields = {
'device': ['^name'],
'network': ['^name'],
'venture': ['^name'],
}
class ChildDeviceInline(ForeignKeyAutocompleteTabularInline):
model = models.Device
edit_separately = True
readonly_fields = (
'venture', 'venture_role', 'name', 'model', 'sn', 'remarks',
'last_seen',
)
exclude = ('name2', 'created', 'modified', 'boot_firmware', 'barcode',
'hard_firmware', 'diag_firmware', 'mgmt_firmware', 'price',
'purchase_date', 'warranty_expiration_date', 'role',
'support_expiration_date', 'deprecation_kind', 'margin_kind',
'chassis_position', 'position', 'support_kind', 'management',
'logical_parent')
extra = 0
related_search_fields = {
'model': ['^name'],
'venture': ['^name'],
'venture_role': ['^name'],
}
fk_name = 'parent'
class DeviceModelAdmin(ModelAdmin):
def count(self):
return models.Device.objects.filter(model=self).count()
list_display = ('name', 'type', count, 'created', 'modified')
list_filter = ('type',)
search_fields = ('name',)
admin.site.register(models.DeviceModel, DeviceModelAdmin)
class DeviceModelInline(admin.TabularInline):
model = models.DeviceModel
exclude = ('created', 'modified')
extra = 0
class DeviceForm(forms.ModelForm):
class Meta:
model = models.Device
def __init__(self, *args, **kwargs):
super(DeviceForm, self).__init__(*args, **kwargs)
if self.instance.id is not None:
asset = self.instance.get_asset()
if asset:
self.fields['dc'].widget = ReadOnlyWidget()
self.fields['rack'].widget = ReadOnlyWidget()
self.fields['chassis_position'].widget = ReadOnlyWidget()
self.fields['position'].widget = ReadOnlyWidget()
def clean_sn(self):
sn = self.cleaned_data['sn']
if not sn:
sn = None
return sn
def clean_model(self):
model = self.cleaned_data['model']
if not model:
raise forms.ValidationError(_("Model is required"))
return model
def clean_barcode(self):
barcode = self.cleaned_data['barcode']
return barcode or None
def clean(self):
cleaned_data = super(DeviceForm, self).clean()
model = self.cleaned_data.get('model')
if all((
'ralph_assets' in settings.INSTALLED_APPS,
not self.instance.id, # only when we create new device
model
)):
if model and model.type not in models.ASSET_NOT_REQUIRED:
raise forms.ValidationError(
"Adding this type of devices is allowed only via "
"Assets module."
)
return cleaned_data
class ProcessorInline(ForeignKeyAutocompleteTabularInline):
model = models.Processor
# readonly_fields = ('label', 'index', 'speed')
exclude = ('created', 'modified')
extra = 0
related_search_fields = {
'model': ['^name'],
}
class MemoryInline(ForeignKeyAutocompleteTabularInline):
model = models.Memory
exclude = ('created', 'modified')
extra = 0
related_search_fields = {
'model': ['^name'],
}
class EthernetInline(ForeignKeyAutocompleteTabularInline):
model = models.Ethernet
exclude = ('created', 'modified')
extra = 0
related_search_fields = {
'model': ['^name'],
}
class StorageInline(ForeignKeyAutocompleteTabularInline):
model = models.Storage
readonly_fields = (
'label',
'size',
'sn',
'model',
'created',
'modified',
'mount_point',
)
extra = 0
related_search_fields = {
'model': ['^name'],
}
class InboundConnectionInline(ForeignKeyAutocompleteTabularInline):
model = models.Connection
extra = 1
related_search_fields = {
'outbound': ['^name']
}
fk_name = 'inbound'
verbose_name = _("Inbound Connection")
verbose_name_plural = _("Inbound Connections")
class OutboundConnectionInline(ForeignKeyAutocompleteTabularInline):
model = models.Connection
extra = 1
related_search_fields = {
'inbound': ['^name'],
}
fk_name = 'outbound'
verbose_name = _("Outbound Connection")
verbose_name_plural = _("Outbound Connections")
class DeviceAdmin(ModelAdmin):
form = DeviceForm
inlines = [
ProcessorInline,
MemoryInline,
EthernetInline,
StorageInline,
IPAddressInline,
ChildDeviceInline,
RolePropertyValueInline,
InboundConnectionInline,
OutboundConnectionInline,
]
list_display = ('name', 'sn', 'created', 'modified')
list_filter = ('model__type',)
list_per_page = 250
readonly_fields = ('last_seen',)
save_on_top = True
search_fields = ('name', 'name2', 'sn', 'model__type',
'model__name', 'ethernet__mac')
related_search_fields = {
'parent': ['^name'],
'logical_parent': ['^name'],
'venture': ['^name'],
'venture_role': ['^name'],
'management': ['^address', '^hostname'],
'model': ['^name', ],
'service': ['^name', ],
}
def get_readonly_fields(self, request, obj=None):
ro_fields = super(DeviceAdmin, self).get_readonly_fields(request, obj)
if obj and obj.get_asset():
ro_fields = ro_fields + ('parent', 'management',)
return ro_fields
def save_model(self, request, obj, form, change):
obj.save(user=request.user, sync_fields=True, priority=SAVE_PRIORITY)
def save_formset(self, request, form, formset, change):
if formset.model.__name__ == 'RolePropertyValue':
for instance in formset.save(commit=False):
instance.save(user=request.user)
elif formset.model.__name__ == 'IPAddress':
for instance in formset.save(commit=False):
if not instance.id:
# Sometimes IP address exists and does not have any
# assigned device. In this case we should reuse it,
# otherwise we can get IntegrityError.
try:
ip_id = models.IPAddress.objects.filter(
address=instance.address,
).values_list('id', flat=True)[0]
except IndexError:
pass
else:
instance.id = ip_id
instance.save()
else:
formset.save(commit=True)
admin.site.register(models.Device, DeviceAdmin)
class IPAliasInline(admin.TabularInline):
model = models.IPAlias
exclude = ('created', 'modified')
extra = 0
class IPAddressForm(forms.ModelForm):
class Meta:
model = models.IPAddress
def clean(self):
cleaned_data = super(IPAddressForm, self).clean()
device = cleaned_data.get('device')
if device and (
'device' in self.changed_data or
'is_management' in self.changed_data
):
is_management = cleaned_data.get('is_management', False)
if is_management and device.management_ip:
msg = 'This device already has management IP.'
self._errors['device'] = self.error_class([msg])
return cleaned_data
class IPAddressAdmin(ModelAdmin):
form = IPAddressForm
inlines = [IPAliasInline]
def ip_address(self):
"""Used for proper ordering."""
return self.address
ip_address.short_description = _("IP address")
ip_address.admin_order_field = 'number'
list_display = (
ip_address, 'hostname', 'device', 'snmp_name', 'is_public', 'created',
'modified',
)
list_filter = ('is_public', 'snmp_community')
list_per_page = 250
save_on_top = True
search_fields = ('address', 'hostname', 'number', 'snmp_name')
related_search_fields = {
'device': ['^name'],
'network': ['^name'],
'venture': ['^name'],
}
admin.site.register(models.IPAddress, IPAddressAdmin)
class DeprecationKindAdmin(ModelAdmin):
save_on_top = True
list_display = ('name', 'months', 'default')
admin.site.register(models.DeprecationKind, DeprecationKindAdmin)
class MarginKindAdmin(ModelAdmin):
save_on_top = True
admin.site.register(models.MarginKind, MarginKindAdmin)
class LoadBalancerTypeAdmin(ModelAdmin):
pass
admin.site.register(
models.LoadBalancerType,
LoadBalancerTypeAdmin,
)
class LoadBalancerVirtualServerAdmin(ModelAdmin):
search_fields = ('name', 'service__name', 'device__name',)
list_display = ('name', 'service', 'device_environment', 'venture', 'load_balancer_type', 'device', 'address', 'port',)
list_filter = ('load_balancer_type',)
related_search_fields = {
'device': ['^name'],
'address': ['^address'],
'default_pool': ['^name'],
}
admin.site.register(
models.LoadBalancerVirtualServer,
LoadBalancerVirtualServerAdmin,
)
class LoadBalancerMemberAdmin(ModelAdmin):
pass
admin.site.register(
models.LoadBalancerMember,
LoadBalancerMemberAdmin,
)
class ComponentModelInline(admin.TabularInline):
model = models.ComponentModel
exclude = ('created', 'modified')
extra = 0
class ComponentModelAdmin(ModelAdmin):
def count(self):
return self.get_count()
list_filter = ('type',)
list_display = ('name', 'type', count, 'family',)
search_fields = ('name', 'type', 'group__name', 'family')
admin.site.register(models.ComponentModel, ComponentModelAdmin)
class GenericComponentAdmin(ModelAdmin):
search_fields = ('label', 'sn', 'model__name')
list_display = ('label', 'model', 'sn')
related_search_fields = {
'device': ['^name'],
'model': ['^name']
}
admin.site.register(models.GenericComponent, GenericComponentAdmin)
class DiskShareMountInline(ForeignKeyAutocompleteTabularInline):
model = models.DiskShareMount
exclude = ('created', 'modified')
related_search_fields = {
'device': ['^name'],
'server': ['^name'],
'address': ['^address'],
}
extra = 0
class DiskShareAdmin(ModelAdmin):
inlines = [DiskShareMountInline]
search_fields = ('wwn',)
related_search_fields = {
'device': ['^name'],
'model': ['^name']
}
admin.site.register(models.DiskShare, DiskShareAdmin)
class HistoryChangeAdmin(ModelAdmin):
list_display = ('date', 'user', 'device', 'component', 'field_name',
'old_value', 'new_value')
list_per_page = 250
readonly_fields = ('date', 'device', 'user', 'field_name', 'new_value',
'old_value', 'component')
search_fields = ('user__username', 'field_name', 'new_value')
admin.site.register(models.HistoryChange, HistoryChangeAdmin)
class DeviceEnvironmentAdmin(ModelAdmin):
save_on_top = True
list_display = ('name',)
search_fields = ('name',)
admin.site.register(models_device.DeviceEnvironment, DeviceEnvironmentAdmin)
class DatabaseTypeAdmin(ModelAdmin):
pass
admin.site.register(
models.DatabaseType,
DatabaseTypeAdmin,
)
class DatabaseAdmin(ModelAdmin):
list_filter = ('database_type__name',)
list_display = ('name', 'venture', 'service', 'device_environment', 'database_type')
search_fields = ('name', 'venture', 'service')
related_search_fields = {
'parent_device': ['^name'],
}
admin.site.register(
models.Database,
DatabaseAdmin,
)
| 28.43514 | 123 | 0.629035 |
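`clean_hosts_naming_template` validates each `|`-separated part of a template against `HOSTS_NAMING_TEMPLATE_REGEX`. A standalone sketch of that per-part pattern check (the character-whitelist check is omitted, and `is_valid_template` is an illustrative helper, not part of Ralph):

```python
import re

# Same pattern as in discovery/admin.py: each "|"-separated part must contain
# a <min,max> counter and end with a dot-separated alphanumeric suffix.
HOSTS_NAMING_TEMPLATE_REGEX = re.compile(r'<[0-9]+,[0-9]+>.*\.[a-zA-Z0-9]+')

def is_valid_template(template):
    return all(HOSTS_NAMING_TEMPLATE_REGEX.search(part)
               for part in template.split('|'))

print(is_valid_template('h<100,199>.dc1'))  # True
print(is_valid_template('h100.dc1'))        # False: no <min,max> counter
```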
34f8c0676ad136d2b9a8b0237fc7f33640a9bba0 | 4,087 | py | Python | src/flashpolicies/views.py | ubernostrum/django-flashpolicies | f24fb80907a82c6894d08c58e25d085a18b11155 | [
"BSD-3-Clause"
] | 7 | 2015-04-07T22:18:02.000Z | 2022-01-26T16:02:14.000Z | src/flashpolicies/views.py | ubernostrum/django-flashpolicies | f24fb80907a82c6894d08c58e25d085a18b11155 | [
"BSD-3-Clause"
] | 1 | 2017-06-05T01:41:10.000Z | 2017-10-23T10:14:23.000Z | src/flashpolicies/views.py | ubernostrum/django-flashpolicies | f24fb80907a82c6894d08c58e25d085a18b11155 | [
"BSD-3-Clause"
] | 2 | 2016-09-24T18:36:17.000Z | 2017-08-07T14:26:49.000Z | """
Views for generating and serving policy files.
"""
import warnings
from typing import Iterable, Optional
from django.http import HttpRequest, HttpResponse
from . import policies
def serve(request: HttpRequest, policy: policies.Policy) -> HttpResponse:
"""
Given a ``flashpolicies.policies.Policy`` instance, serializes it
    to XML and serves it.
Internally, this is used by all other views as the mechanism which
actually serves the policy file.
**Required arguments:**
``policy``
The ``flashpolicies.policies.Policy`` instance to serve.
**Optional arguments:**
None.
"""
return HttpResponse(
policy.serialize(), content_type="text/x-cross-domain-policy; charset=utf-8"
)
def allow_domains(request: HttpRequest, domains: Iterable[str]) -> HttpResponse:
"""
Serves a cross-domain access policy allowing a list of domains.
Note that if this is returned from the URL ``/crossdomain.xml`` on
a domain, it will act as a master policy and will not permit other
policies to exist on that domain. If you need to set meta-policy
information and allow other policies, use the view
:view:`flashpolicies.views.metapolicy` for the master policy instead.
**Required arguments:**
``domains``
A list of domains from which to allow access. Each value may
be either a domain name (e.g., ``example.com``) or a wildcard
(e.g., ``*.example.com``). Due to serious potential security
issues, it is strongly recommended that you not use wildcard
domain values.
**Optional arguments:**
None.
"""
return serve(request, policies.Policy(*domains))
def simple(request: HttpRequest, domains: Iterable[str]) -> HttpResponse:
"""
Deprecated name for the ``allow_domains`` view.
"""
warnings.warn(
"flashpolicies.views.simple has been renamed to "
"flashpolicies.views.allow_domains. Support for referring to it as "
"flashpolicies.views.simple is deprecated and will be removed in a "
"future release of django-flashpolicies.",
DeprecationWarning,
)
return allow_domains(request, domains)
def metapolicy(
request: HttpRequest, permitted: str, domains: Optional[Iterable[str]] = None
) -> HttpResponse:
"""
Serves a cross-domain policy which can allow other policies
to exist on the same domain.
Note that this view, if used, must be the master policy for the
domain, and so must be served from the URL ``/crossdomain.xml`` on
the domain: setting metapolicy information in other policy files
is forbidden by the cross-domain policy specification.
**Required arguments:**
``permitted``
A string indicating the extent to which other policies are
permitted. A set of constants is available in
``flashpolicies.policies``, defining acceptable values for
this argument.
**Optional arguments:**
``domains``
A list of domains from which to allow access. Each value may
be either a domain name (e.g., ``example.com``) or a wildcard
(e.g., ``*.example.com``). Due to serious potential security
issues, it is strongly recommended that you not use wildcard
domain values.
"""
if domains is None:
domains = []
policy = policies.Policy(*domains)
policy.metapolicy(permitted)
return serve(request, policy)
def no_access(request: HttpRequest) -> HttpResponse:
"""
Serves a cross-domain access policy which permits no access of any
kind, via a metapolicy declaration disallowing all policy files.
Note that this view, if used, must be the master policy for the
domain, and so must be served from the URL ``/crossdomain.xml`` on
the domain: setting metapolicy information in other policy files
is forbidden by the cross-domain policy specification.
**Required arguments:**
None.
**Optional arguments:**
None.
"""
return metapolicy(request, permitted=policies.SITE_CONTROL_NONE)
| 30.051471 | 84 | 0.684854 |
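All of the views above ultimately serialize a `flashpolicies.policies.Policy` into a cross-domain-policy XML document. A rough stdlib sketch of the kind of document being served; the exact markup that `Policy.serialize()` emits may differ (DOCTYPE, attribute order, etc.):

```python
import xml.etree.ElementTree as ET

def build_policy_xml(domains=(), permitted=None):
    """Build a minimal crossdomain.xml-style document (illustrative only)."""
    root = ET.Element('cross-domain-policy')
    if permitted is not None:
        # Metapolicy declaration, as used by the metapolicy/no_access views.
        ET.SubElement(root, 'site-control',
                      {'permitted-cross-domain-policies': permitted})
    for domain in domains:
        ET.SubElement(root, 'allow-access-from', {'domain': domain})
    return ET.tostring(root, encoding='unicode')

print(build_policy_xml(['example.com'], permitted='none'))
```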
20ce4e5f5d513192def8efe9845cc866f2290a37 | 3,203 | py | Python | torchvision/datasets/semeion.py | iselic/vision | 6651ac9265af1929cbc1f931a775f574ef71b8d8 | [
"BSD-3-Clause"
] | 21 | 2021-10-08T02:47:56.000Z | 2022-03-29T14:17:04.000Z | torchvision/datasets/semeion.py | iselic/vision | 6651ac9265af1929cbc1f931a775f574ef71b8d8 | [
"BSD-3-Clause"
] | 46 | 2020-10-20T09:52:53.000Z | 2021-08-15T09:29:27.000Z | torchvision/datasets/semeion.py | iselic/vision | 6651ac9265af1929cbc1f931a775f574ef71b8d8 | [
"BSD-3-Clause"
] | 9 | 2021-11-11T11:17:16.000Z | 2022-03-08T04:26:10.000Z | from PIL import Image
import os
import os.path
import numpy as np
from typing import Any, Callable, Optional, Tuple
from .vision import VisionDataset
from .utils import download_url, check_integrity
class SEMEION(VisionDataset):
r"""`SEMEION <http://archive.ics.uci.edu/ml/datasets/semeion+handwritten+digit>`_ Dataset.
Args:
        root (string): Root directory of dataset where the
            ``semeion.data`` file exists.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
download (bool, optional): If true, downloads the dataset from the internet and
puts it in root directory. If dataset is already downloaded, it is not
downloaded again.
"""
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/semeion/semeion.data"
filename = "semeion.data"
md5_checksum = 'cb545d371d2ce14ec121470795a77432'
def __init__(
self,
root: str,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
download: bool = True,
) -> None:
super(SEMEION, self).__init__(root, transform=transform,
target_transform=target_transform)
if download:
self.download()
if not self._check_integrity():
raise RuntimeError('Dataset not found or corrupted.' +
' You can use download=True to download it')
fp = os.path.join(self.root, self.filename)
data = np.loadtxt(fp)
# convert value to 8 bit unsigned integer
# color (white #255) the pixels
self.data = (data[:, :256] * 255).astype('uint8')
self.data = np.reshape(self.data, (-1, 16, 16))
self.labels = np.nonzero(data[:, 256:])[1]
def __getitem__(self, index: int) -> Tuple[Any, Any]:
"""
Args:
index (int): Index
Returns:
tuple: (image, target) where target is index of the target class.
"""
img, target = self.data[index], int(self.labels[index])
# doing this so that it is consistent with all other datasets
# to return a PIL Image
img = Image.fromarray(img, mode='L')
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
return img, target
def __len__(self) -> int:
return len(self.data)
def _check_integrity(self) -> bool:
root = self.root
fpath = os.path.join(root, self.filename)
if not check_integrity(fpath, self.md5_checksum):
return False
return True
def download(self) -> None:
if self._check_integrity():
print('Files already downloaded and verified')
return
root = self.root
download_url(self.url, root, self.filename, self.md5_checksum)
| 34.44086 | 94 | 0.611926 |
9f63c20b910153b74d42185a82336bef3d689589 | 6,591 | py | Python | JudgerServer/main.py | eetze/LPOJ | 785f96e067a6ea376d03a982aede6fa2a2ea3613 | [
"MIT"
] | 1 | 2020-03-16T03:47:13.000Z | 2020-03-16T03:47:13.000Z | JudgerServer/main.py | eetze/LPOJ | 785f96e067a6ea376d03a982aede6fa2a2ea3613 | [
"MIT"
] | null | null | null | JudgerServer/main.py | eetze/LPOJ | 785f96e067a6ea376d03a982aede6fa2a2ea3613 | [
"MIT"
] | null | null | null | # coding=utf-8
import MySQLdb
from queue import Queue
import socket
import json
from time import sleep
import threading
import os
mutex = threading.Lock() # queue mutex
queue = Queue() # 全局判题列表
myjsonfile = open("./setting.json", 'r')
judgerjson = json.loads(myjsonfile.read())
if os.environ.get("DB_USER"):
judgerjson["db_ip"] = os.environ.get("DB_HOST")
judgerjson["db_pass"] = os.environ.get("DB_PASSWORD")
judgerjson["db_user"] = os.environ.get("DB_USER")
judgerjson["db_port"] = os.environ.get("DB_PORT")
try:
db = MySQLdb.connect(judgerjson["db_ip"], judgerjson["db_user"], judgerjson["db_pass"],
judgerjson["db_database"], int(judgerjson["db_port"]), charset='utf8')
except Exception as e:
print(e)
exit(1)
# 获取未判题列表,放入到全局队列中
def getSubmition():
global queue, mutex, db
cursor = db.cursor()
while True:
sleep(1)
if mutex.acquire():
cursor.execute(
"SELECT * from judgestatus_judgestatus where result = '-1'")
data = cursor.fetchall()
try:
for d in data:
queue.put(d[0])
cursor.execute(
"UPDATE judgestatus_judgestatus SET result = '-6' WHERE id = '%d'" % d[0])
db.commit()
            except Exception:
db.rollback()
mutex.release()
db.close()
# 处理每个判题机的逻辑
def deal_client(newSocket: socket, addr):
global mutex, queue
statue = False
cursor = db.cursor()
falsetime = 0
while True:
        sleep(1)  # poll once per second
        if mutex.acquire():  # take the queue lock
try:
                if statue and not queue.empty():
                    id = queue.get()  # judger is ready: pop a submission and send a judge command
statue = False
cursor.execute(
"SELECT language from judgestatus_judgestatus where id = '%d'" % (id))
data = cursor.fetchall()
print(data[0][0])
newSocket.send(("judge|%d" % id).encode("utf-8"))
else:
newSocket.send("getstatue".encode("utf-8"))
data = newSocket.recv(1024)
recv_data = data.decode('utf-8')
if recv_data == "ok":
falsetime = 0
statue = True
else:
falsetime = falsetime + 1
statue = False
                    if falsetime >= 180:  # not ready for ~180 consecutive 1 s polls (~3 min): send destroy/restart command
newSocket.send("timeout".encode("utf-8"))
print(addr, "timeout!")
newSocket.close()
mutex.release()
return
print(addr, statue)
except socket.error:
newSocket.close()
mutex.release()
return
            except Exception:
print("error!")
mutex.release()
return
mutex.release()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", judgerjson["server_port"]))
server.listen(20)
print("server is running!")
t = threading.Thread(target=getSubmition, args=())  # fetch submissions in a background thread
t.setDaemon(True)
t.start()
# 比赛题目设置为auth=2,contest开始时,自动设置题目为auth=3,比赛结束自动设置auth=1
def changeauth():
global db, mutex
curcontest = set()
curpro = set()
curinpro = set()
cursor = db.cursor()
while True:
sleep(2)
if mutex.acquire():
cursor.execute(
"SELECT * from contest_contestinfo where type <> 'Personal' and TO_SECONDS(NOW()) - TO_SECONDS(begintime) <= lasttime")
data = cursor.fetchall()
getcontest = set()
for d in data:
                getcontest.add(d[0])  # collected to detect contests that have ended
cursor.execute(
"SELECT * from contest_contestproblem where contestid=%d" % d[0])
pros = cursor.fetchall()
for pid in pros:
if pid[2] not in curpro:
curpro.add(pid[2])
cursor.execute(
"UPDATE problem_problemdata SET auth = 2 WHERE problem = %s" % pid[2])
cursor.execute(
"UPDATE problem_problem SET auth = 2 WHERE problem = %s" % pid[2])
db.commit()
cursor.execute(
"SELECT * from contest_contestinfo where type <> 'Personal' and TO_SECONDS(NOW()) - TO_SECONDS(begintime) <= lasttime and TO_SECONDS(NOW()) - TO_SECONDS(begintime) >=-1")
data = cursor.fetchall()
for d in data:
cursor.execute(
"SELECT * from contest_contestproblem where contestid=%d" % d[0])
pros = cursor.fetchall()
for pid in pros:
if pid[2] not in curinpro:
curinpro.add(pid[2])
cursor.execute(
"UPDATE problem_problemdata SET auth = 3 WHERE problem = %s" % pid[2])
cursor.execute(
"UPDATE problem_problem SET auth = 3 WHERE problem = %s" % pid[2])
db.commit()
endcontest = curcontest.difference(getcontest)
print("curcontest", curcontest)
for eid in endcontest:
cursor.execute(
"SELECT * from contest_contestproblem where contestid=%d" % eid)
pros = cursor.fetchall()
for pid in pros:
print(pid[2])
curpro.remove(pid[2])
curinpro.remove(pid[2])
cursor.execute(
"UPDATE problem_problemdata SET auth = 1 WHERE problem = %s" % pid[2])
cursor.execute(
"UPDATE problem_problem SET auth = 1 WHERE problem = %s" % pid[2])
db.commit()
curcontest = getcontest
mutex.release()
t1 = threading.Thread(target=changeauth, args=())
t1.setDaemon(True)
t1.start()
# 循环监听
while True:
newSocket, addr = server.accept()
print("client [%s] is connected!" % str(addr))
client = threading.Thread(target=deal_client, args=(newSocket, addr))
client.setDaemon(True)
client.start()
| 34.873016 | 186 | 0.5066 |
4384f3ec27c35fe736b6aad6c9e2601e86eae93e | 5,876 | py | Python | alipay/aop/api/domain/BatchSettleDetail.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | alipay/aop/api/domain/BatchSettleDetail.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | alipay/aop/api/domain/BatchSettleDetail.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import simplejson as json
from alipay.aop.api.constant.ParamConstants import *
class BatchSettleDetail(object):
def __init__(self):
self._amount = None
self._currency = None
self._error_code = None
self._error_desc = None
self._settle_account_id = None
self._settle_account_id_type = None
self._settle_account_type = None
self._settle_entity_id = None
self._settle_entity_type = None
self._status = None
@property
def amount(self):
return self._amount
@amount.setter
def amount(self, value):
self._amount = value
@property
def currency(self):
return self._currency
@currency.setter
def currency(self, value):
self._currency = value
@property
def error_code(self):
return self._error_code
@error_code.setter
def error_code(self, value):
self._error_code = value
@property
def error_desc(self):
return self._error_desc
@error_desc.setter
def error_desc(self, value):
self._error_desc = value
@property
def settle_account_id(self):
return self._settle_account_id
@settle_account_id.setter
def settle_account_id(self, value):
self._settle_account_id = value
@property
def settle_account_id_type(self):
return self._settle_account_id_type
@settle_account_id_type.setter
def settle_account_id_type(self, value):
self._settle_account_id_type = value
@property
def settle_account_type(self):
return self._settle_account_type
@settle_account_type.setter
def settle_account_type(self, value):
self._settle_account_type = value
@property
def settle_entity_id(self):
return self._settle_entity_id
@settle_entity_id.setter
def settle_entity_id(self, value):
self._settle_entity_id = value
@property
def settle_entity_type(self):
return self._settle_entity_type
@settle_entity_type.setter
def settle_entity_type(self, value):
self._settle_entity_type = value
@property
def status(self):
return self._status
@status.setter
def status(self, value):
self._status = value
def to_alipay_dict(self):
params = dict()
if self.amount:
if hasattr(self.amount, 'to_alipay_dict'):
params['amount'] = self.amount.to_alipay_dict()
else:
params['amount'] = self.amount
if self.currency:
if hasattr(self.currency, 'to_alipay_dict'):
params['currency'] = self.currency.to_alipay_dict()
else:
params['currency'] = self.currency
if self.error_code:
if hasattr(self.error_code, 'to_alipay_dict'):
params['error_code'] = self.error_code.to_alipay_dict()
else:
params['error_code'] = self.error_code
if self.error_desc:
if hasattr(self.error_desc, 'to_alipay_dict'):
params['error_desc'] = self.error_desc.to_alipay_dict()
else:
params['error_desc'] = self.error_desc
if self.settle_account_id:
if hasattr(self.settle_account_id, 'to_alipay_dict'):
params['settle_account_id'] = self.settle_account_id.to_alipay_dict()
else:
params['settle_account_id'] = self.settle_account_id
if self.settle_account_id_type:
if hasattr(self.settle_account_id_type, 'to_alipay_dict'):
params['settle_account_id_type'] = self.settle_account_id_type.to_alipay_dict()
else:
params['settle_account_id_type'] = self.settle_account_id_type
if self.settle_account_type:
if hasattr(self.settle_account_type, 'to_alipay_dict'):
params['settle_account_type'] = self.settle_account_type.to_alipay_dict()
else:
params['settle_account_type'] = self.settle_account_type
if self.settle_entity_id:
if hasattr(self.settle_entity_id, 'to_alipay_dict'):
params['settle_entity_id'] = self.settle_entity_id.to_alipay_dict()
else:
params['settle_entity_id'] = self.settle_entity_id
if self.settle_entity_type:
if hasattr(self.settle_entity_type, 'to_alipay_dict'):
params['settle_entity_type'] = self.settle_entity_type.to_alipay_dict()
else:
params['settle_entity_type'] = self.settle_entity_type
if self.status:
if hasattr(self.status, 'to_alipay_dict'):
params['status'] = self.status.to_alipay_dict()
else:
params['status'] = self.status
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = BatchSettleDetail()
if 'amount' in d:
o.amount = d['amount']
if 'currency' in d:
o.currency = d['currency']
if 'error_code' in d:
o.error_code = d['error_code']
if 'error_desc' in d:
o.error_desc = d['error_desc']
if 'settle_account_id' in d:
o.settle_account_id = d['settle_account_id']
if 'settle_account_id_type' in d:
o.settle_account_id_type = d['settle_account_id_type']
if 'settle_account_type' in d:
o.settle_account_type = d['settle_account_type']
if 'settle_entity_id' in d:
o.settle_entity_id = d['settle_entity_id']
if 'settle_entity_type' in d:
o.settle_entity_type = d['settle_entity_type']
if 'status' in d:
o.status = d['status']
return o
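The `to_alipay_dict`/`from_alipay_dict` pair above follows one uniform pattern: nested model objects are asked to serialize themselves via a `hasattr(value, 'to_alipay_dict')` check, while plain values pass through unchanged. A minimal, self-contained sketch of that dispatch (the `Money` class here is an illustration, not part of the Alipay SDK):

```python
class Money(object):
    """Stand-in for a nested Alipay model object."""
    def __init__(self, cents):
        self.cents = cents

    def to_alipay_dict(self):
        return {"cents": self.cents}


def serialize(value):
    # Same duck-typed check used in to_alipay_dict() above.
    if hasattr(value, 'to_alipay_dict'):
        return value.to_alipay_dict()
    return value


print(serialize(Money(150)))  # {'cents': 150}
print(serialize("CNY"))       # CNY
```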

# File: data_preprocess/linear_regression_with_drop_features.py
# Repo: ColorOfLight/ML-term-project (MIT license)
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
class Linear_Regression_with_drop_features(object):
def __init__(self):
self.lm = None
self.apartment_map = None
def fit(self,X,y):
drop_columns = ['latitude', 'longtitude', 'altitude', '1st region id', '2nd region id', 'road id',
'parking lot limit', 'parking lot area', 'parking lot external',
'management fee', 'households', 'age of residents', 'builder id', 'completion date', 'built year',
'schools', 'bus stations', 'subway stations']
data = X.drop(columns = drop_columns)
data = data.assign(price=y)
data = data.assign(pure_price=data['price'])
data = data.assign(apartment_price=data['price'])
data['contract date'] = pd.to_datetime(data['contract date'])
data['contract date'] = pd.to_numeric(data['contract date'] - data['contract date'].min())
data['contract date'] = data['contract date']/data['contract date'].max()
data['angle'] = np.sin(data['angle'])
datalist=[data[data.apartment_id==i] for i in set(data['apartment_id'])]
newlist = []
for i in datalist:
if len(i['angle'])>0:
newlist.append(i)
data = pd.concat(newlist)
def get_X_y(data):
y = data['pure_price']
X = data.drop(columns=['apartment_price','pure_price','price', 'apartment_id'])
return X, y
def get_pure_apartment_price(data,n):
datalist=[data[data.apartment_id==i] for i in set(data['apartment_id'])]
Xylist = [get_X_y(data) for data in datalist]
model_list = [LinearRegression() for i in range(len(Xylist))]
for i in range(len(model_list)):
X,y = Xylist[i]
model_list[i].fit(X,y)
intercept_list = [[model_list[i].intercept_]*len(datalist[i]['apartment_price']) for i in range(len(model_list))]
l = []
for i in intercept_list:
l.extend(i)
data['apartment_price']=l
data['pure_price']=data['price']-data['apartment_price']
for i in range(n):
total_model = LinearRegression()
X,y = get_X_y(data)
total_model.fit(X,y)
coeffiecient = total_model.coef_
data['apartment_price'] = data['price']-data.drop(columns=['apartment_price','pure_price','price', 'apartment_id']).dot(coeffiecient)
datalist=[data[data.apartment_id==i] for i in set(data['apartment_id'])]
med_list = [[np.median(datalist[i]['apartment_price'])]*len(datalist[i]['apartment_price']) for i in range(len(datalist))]
l=[]
for i in med_list:
l.extend(i)
data['apartment_price']=l
data['pure_price']=data['price']-data['apartment_price']
return data
N = 15
data = get_pure_apartment_price(data,N)
apartment_map=dict()
for i in data['apartment_id']:
if i not in apartment_map:
price=data[data.apartment_id==i]
apartment_map[i]=price['apartment_price'].iloc[0]
lm = LinearRegression()
y = data['price']
X=data.drop(columns=['price','pure_price','apartment_id'])
lm.fit(X,y)
self.lm = lm
self.apartment_map = apartment_map
def predict(self,X):
apartment_map = self.apartment_map
def preprocess_test_data(data, apartment_map):
apartment_price = []
for i in data['apartment_id']:
if i not in apartment_map:
pass
else:
apartment_price.append(apartment_map[i])
apartment_price.sort()
median_value = apartment_price[len(apartment_price)//2]
new_price = []
for i in data['apartment_id']:
if i not in apartment_map:
new_price.append(median_value)
else:
new_price.append(apartment_map[i])
data['apartment_price'] = new_price
return data
lm = self.lm
drop_columns = ['latitude', 'longtitude', 'altitude', '1st region id', '2nd region id', 'road id',
'parking lot limit', 'parking lot area', 'parking lot external',
'management fee', 'households', 'age of residents', 'builder id', 'completion date', 'built year',
'schools', 'bus stations', 'subway stations']
data = X.drop(columns = drop_columns)
data['contract date'] = pd.to_datetime(data['contract date'])
data['contract date'] = pd.to_numeric(data['contract date'] - data['contract date'].min())
data['contract date'] = data['contract date']/data['contract date'].max()
data['angle'] = np.sin(data['angle'])
data = data.assign(apartment_price=data['angle'])
X = preprocess_test_data(data, apartment_map)
X = X.drop(columns = ['apartment_id'])
return lm.predict(X)
def test():
names = ['contract date', 'latitude', 'longtitude', 'altitude', '1st region id', '2nd region id', 'road id',
'apartment_id', 'floor', 'angle', 'area', 'parking lot limit', 'parking lot area', 'parking lot external',
'management fee', 'households', 'age of residents', 'builder id', 'completion date', 'built year',
'schools', 'bus stations', 'subway stations', 'price']
data = pd.read_csv('./data_train_original.csv', names=names)
y = data['price']
X = data.drop(columns =['price'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 111)
model = Linear_Regression_with_drop_features()
model.fit(X_train,y_train)
predictions = model.predict(X_test)
result = 0
for i in range(len(X_test)):
result += abs((y_test.iloc[i]-predictions[i])/y_test.iloc[i])
print(1-result/len(X_test))
def main():
test()
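The `fit()` method above alternates between per-apartment regressions (whose intercepts become the "apartment price") and a global regression on the residual "pure price". A toy, dependency-free illustration of the per-group intercept step (a hand-rolled one-feature least squares, not the sklearn pipeline itself):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (intercept, slope)."""
    n = float(len(xs))
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope


# Two "apartments" sharing the same slope but different base prices.
group_a = ([1, 2, 3], [11, 12, 13])   # y = x + 10
group_b = ([1, 2, 3], [21, 22, 23])   # y = x + 20
intercept_a, _ = fit_line(*group_a)
intercept_b, _ = fit_line(*group_b)
print(intercept_a, intercept_b)  # 10.0 20.0
```

The per-group intercepts (10 and 20) are exactly the base-price offsets that the class above subtracts out before fitting the shared model.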
main()

# File: src/cmdtree/tests/unittest/test_parser.py
# Repo: winkidney/cmdtree (MIT license)
import mock
import pytest
import six
from cmdtree import parser
from cmdtree.exceptions import ArgumentParseError
def mk_obj(property_dict):
class TestObject(object):
pass
obj = TestObject()
for key, value in six.iteritems(property_dict):
setattr(obj, key, value)
return obj
@pytest.fixture()
def aparser():
from cmdtree.parser import AParser
return AParser()
@pytest.fixture()
def test_func():
def func():
return "result"
return func
@pytest.mark.parametrize(
"arg_name, expected",
(
("hello_world", "hello_world"),
("hello-world", "hello_world"),
)
)
def test_normalize_arg_name(arg_name, expected):
from cmdtree.parser import _normalize_arg_name
assert _normalize_arg_name(arg_name) == expected
@pytest.mark.parametrize(
"p_dict, expected",
(
({"_k": "v", "k": "v"}, {"k": "v"}),
({"__k": "v", "k": "v"}, {"k": "v"}),
({"k1": "v", "k": "v"}, {"k": "v", "k1": "v"}),
)
)
def test_vars_should_return_right_dict(p_dict, expected):
obj = mk_obj(p_dict)
assert parser.vars_(
obj
) == expected
class TestAParser:
def test_should_execute_func(self, aparser, test_func):
aparser.add_cmd("test", func=test_func)
assert aparser.run(["test"]) == "result"
def test_should_execute_child_cmd(self, aparser, test_func):
parent = aparser.add_cmd("parent")
parent.add_cmd("child", func=test_func)
assert aparser.run(['parent', 'child']) == "result"
@pytest.mark.parametrize(
"cmd_func, exception",
(
(None, ValueError),
(lambda *args, **kwargs: "str", None),
)
)
def test_should_execute_without_func(self, cmd_func, exception, aparser):
parent = aparser.add_cmd("parent")
parent.add_cmd("child", func=cmd_func)
if exception is not None:
with pytest.raises(exception):
aparser.run(['parent', 'child'])
else:
assert aparser.run(['parent', 'child']) == "str"
@pytest.mark.parametrize(
"silent_exit, exception",
(
(False, ArgumentParseError),
(True, SystemExit)
)
)
def test_should_parent_cmd_exit_or_raise_error(self, silent_exit, exception, test_func, aparser):
from cmdtree.registry import env
env.silent_exit = silent_exit
parent = aparser.add_cmd("parent")
parent.add_cmd("child", func=test_func)
with pytest.raises(exception):
aparser.run(['parent'])
@pytest.mark.parametrize(
"arg_name, exception",
(
('--name', ValueError),
('-name', ValueError),
('name', None),
)
)
def test_should_argument_starts_with_valid_string(self, arg_name, exception, test_func, aparser):
cmd = aparser.add_cmd("execute", func=test_func)
with mock.patch.object(cmd, "add_argument") as mocked_add:
if exception is not None:
with pytest.raises(exception):
cmd.argument(arg_name)
else:
cmd.argument(arg_name)
mocked_add.assert_called_with(arg_name, help=None)
@pytest.mark.parametrize(
"arg_name, expected_name",
(
('--name', '--name'),
('-name', '-name'),
('name', '--name'),
)
)
def test_option_should_starts_with_hyphen(self, arg_name, expected_name, test_func, aparser):
cmd = aparser.add_cmd("execute", func=test_func)
with mock.patch.object(cmd, "add_argument") as mocked_add:
cmd.option(arg_name)
mocked_add.assert_called_with(expected_name, help=None)
@pytest.mark.parametrize(
"is_flag",
(
True,
False,
)
)
def test_option_should_work_with_is_flag(self, is_flag, test_func, aparser):
cmd = aparser.add_cmd("execute", func=test_func)
with mock.patch.object(cmd, "add_argument") as mocked_add:
cmd.option("name", is_flag=is_flag)
if is_flag:
mocked_add.assert_called_with("--name", help=None, action="store_true")
else:
mocked_add.assert_called_with("--name", help=None)
@pytest.mark.parametrize(
"default",
(
None,
1,
)
)
def test_option_should_work_with_default_value(self, default, aparser):
cmd = aparser.add_cmd("execute", func=test_func)
with mock.patch.object(cmd, "add_argument") as mocked_add:
cmd.option("name", default=default)
if default is None:
mocked_add.assert_called_with("--name", help=None)
else:
mocked_add.assert_called_with("--name", help=None, default=default)
@pytest.mark.parametrize(
"type_func, kwargs",
(
(mock.Mock(), {"help": None, "type": int}),
(None, {"help": None}),
)
)
def test_add_argument_work_with_type(
self, type_func, kwargs, aparser
):
if type_func is not None:
type_func.return_value = {"type": int}
with mock.patch.object(aparser, "add_argument") as mocked_add:
aparser.argument("name", type=type_func)
if type_func is not None:
assert type_func.called
mocked_add.assert_called_with("name", **kwargs)
@pytest.mark.parametrize(
"type_func, kwargs",
(
(mock.Mock(), {"help": None, "type": int}),
(None, {"help": None}),
)
)
def test_add_option_work_with_type(
self, type_func, kwargs, aparser
):
if type_func is not None:
type_func.return_value = {"type": int}
with mock.patch.object(aparser, "add_argument") as mocked_add:
aparser.option("name", type=type_func)
if type_func is not None:
assert type_func.called
mocked_add.assert_called_with("--name", **kwargs)
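Most of the tests above share one pattern: temporarily replace a single method with `mock.patch.object`, call the code under test, and assert on how the mock was invoked. The same pattern in miniature, using the standard library's `unittest.mock`:

```python
from unittest import mock


class Greeter(object):
    def greet(self, name):
        return "hi " + name


greeter = Greeter()
with mock.patch.object(greeter, "greet") as mocked_greet:
    greeter.greet("bob")          # calls the mock, not the real method

mocked_greet.assert_called_with("bob")   # passes: the call was recorded
print(greeter.greet("bob"))              # real method restored: hi bob
```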

# File: tags/models.py
# Repo: nikita03565/projects_fair (Apache-2.0 license)
from django.db import models
class Tag(models.Model):
name = models.CharField(max_length=50, help_text="Tag name")
def __str__(self):
return self.name

# File: data/train/python/413a061fda8e652333a6c6bd5210473574aa0cc0petfinder.py
# Repo: harshp8l/deep-learning-lang-detection (MIT license)
"""
This module takes the petfinder API and makes it python friendly.
"""
import pprint
import requests
import config
API_BASE = "http://api.petfinder.com/"
API_gettoken = API_BASE + "auth.getToken"
API_getRandomPet = API_BASE + "pet.getRandom"
API_getPet = API_BASE + "pet.get"
API_findPet = API_BASE + "pet.find"
API_findShelter = API_BASE + "shelter.find"
API_getShelter = API_BASE + "shelter.get"
API_getSheleterPet = API_BASE + "shelter.get"
API_getAllShelterPets = API_BASE + "shelter.getPets"
API_getShleterBreeds = API_BASE + "shelter.listByBreed"
def getRandom_params(location):
params = dict()
params["key"] = config.key
params["location"] = location
params["format"] = "json"
return params
def get_params(petid):
params = dict()
params["key"] = config.key
params['id'] = petid
params['format'] = "json"
return params
def queryAPI(API_URL, params):
request = requests.get(API_URL, params = params)
data = request.json()
return data

# File: ansible/my_env/lib/python2.7/site-packages/ansible/modules/cloud/vmware/vmware_portgroup_facts.py
# Repo: otus-devops-2019-02/yyashkin_infra (MIT license)
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_portgroup_facts
short_description: Gathers facts about an ESXi host's portgroup configuration
description:
- This module can be used to gather facts about an ESXi host's portgroup configuration when ESXi hostname or Cluster name is given.
version_added: '2.6'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
cluster_name:
description:
- Name of the cluster.
- Facts will be returned for all hostsystem belonging to this cluster name.
- If C(esxi_hostname) is not given, this parameter is required.
esxi_hostname:
description:
- ESXi hostname to gather facts from.
- If C(cluster_name) is not given, this parameter is required.
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Gather portgroup facts about all ESXi Host in given Cluster
vmware_portgroup_facts:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
cluster_name: '{{ cluster_name }}'
delegate_to: localhost
- name: Gather portgroup facts about ESXi Host system
vmware_portgroup_facts:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
delegate_to: localhost
'''
RETURN = r'''
hosts_firewall_facts:
description: metadata about host's portgroup configuration
returned: on success
type: dict
sample: {
"10.76.33.208": [
{
"forged_transmits": false,
"mac_changes": false,
"name": "VM Network",
"promiscuous_mode": false,
"vlan_id": 0,
"vswitch_name": "vSwitch0"
},
{
"forged_transmits": false,
"mac_changes": false,
"name": "Management Network",
"promiscuous_mode": false,
"vlan_id": 0,
"vswitch_name": "vSwitch0"
},
{
"forged_transmits": false,
"mac_changes": false,
"name": "pg0001",
"promiscuous_mode": false,
"vlan_id": 0,
"vswitch_name": "vSwitch001"
},
]
}
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import vmware_argument_spec, PyVmomi
class PortgroupFactsManager(PyVmomi):
def __init__(self, module):
super(PortgroupFactsManager, self).__init__(module)
cluster_name = self.params.get('cluster_name', None)
esxi_host_name = self.params.get('esxi_hostname', None)
self.hosts = self.get_all_host_objs(cluster_name=cluster_name, esxi_host_name=esxi_host_name)
@staticmethod
def normalize_pg_info(portgroup_obj):
pg_info_dict = dict()
pg_info_dict['name'] = portgroup_obj.spec.name
vlan_id = 'N/A'
if portgroup_obj.spec.vlanId:
vlan_id = portgroup_obj.spec.vlanId
pg_info_dict['vlan_id'] = vlan_id
switch_name = 'N/A'
if portgroup_obj.spec.vswitchName:
switch_name = portgroup_obj.spec.vswitchName
pg_info_dict['vswitch_name'] = switch_name
# Network Policy related facts
pg_info_dict['promiscuous_mode'] = bool(portgroup_obj.spec.policy.security.allowPromiscuous)
pg_info_dict['mac_changes'] = bool(portgroup_obj.spec.policy.security.macChanges)
pg_info_dict['forged_transmits'] = bool(portgroup_obj.spec.policy.security.forgedTransmits)
return pg_info_dict
def gather_host_portgroup_facts(self):
hosts_pg_facts = dict()
for host in self.hosts:
pgs = host.config.network.portgroup
hosts_pg_facts[host.name] = []
for pg in pgs:
hosts_pg_facts[host.name].append(self.normalize_pg_info(portgroup_obj=pg))
return hosts_pg_facts
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
cluster_name=dict(type='str', required=False),
esxi_hostname=dict(type='str', required=False),
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['cluster_name', 'esxi_hostname'],
]
)
host_pg_mgr = PortgroupFactsManager(module)
module.exit_json(changed=False, hosts_portgroup_facts=host_pg_mgr.gather_host_portgroup_facts())
if __name__ == "__main__":
main()
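One subtlety in `normalize_pg_info` above: the guard `if portgroup_obj.spec.vlanId:` is a truthiness check, so a VLAN ID of `0` (an untagged portgroup) is falsy and the field stays `'N/A'`, just as a missing value would. The behavior in isolation:

```python
def normalize_vlan(spec_vlan_id):
    # Same guard shape as in normalize_pg_info() above.
    vlan_id = 'N/A'
    if spec_vlan_id:
        vlan_id = spec_vlan_id
    return vlan_id


print(normalize_vlan(100))   # 100
print(normalize_vlan(0))     # N/A  (0 is falsy)
print(normalize_vlan(None))  # N/A
```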

# File: factory/hooks.py
# Repo: vineet79/factory (MIT license)
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from . import __version__ as app_version
app_name = "factory"
app_title = "Factory"
app_publisher = "Mayur"
app_description = "MCPL"
app_icon = "octicon octicon-beaker"
app_color = "grey"
app_email = "mayur@mittalclothing.com"
app_license = "MIT"
# Includes in <head>
# ------------------
# include js, css files in header of desk.html
# app_include_css = "/assets/factory/css/factory.css"
# app_include_js = "/assets/factory/js/factory.js"
# include js, css files in header of web template
# web_include_css = "/assets/factory/css/factory.css"
# web_include_js = "/assets/factory/js/factory.js"
# Home Pages
# ----------
# application home page (will override Website Settings)
# home_page = "login"
# website user home page (by Role)
# role_home_page = {
# "Role": "home_page"
# }
# Website user home page (by function)
# get_website_user_home_page = "factory.utils.get_home_page"
# Generators
# ----------
# automatically create page for each record of this doctype
# website_generators = ["Web Page"]
# Installation
# ------------
# before_install = "factory.install.before_install"
# after_install = "factory.install.after_install"
# Desk Notifications
# ------------------
# See frappe.core.notifications.get_notification_config
# notification_config = "factory.notifications.get_notification_config"
# Permissions
# -----------
# Permissions evaluated in scripted ways
# permission_query_conditions = {
# "Event": "frappe.desk.doctype.event.event.get_permission_query_conditions",
# }
#
# has_permission = {
# "Event": "frappe.desk.doctype.event.event.has_permission",
# }
# Document Events
# ---------------
# Hook on document methods and events
# doc_events = {
# "*": {
# "on_update": "method",
# "on_cancel": "method",
# "on_trash": "method"
# }
# }
doc_events = {
"Change Owner": {
"after_save": "factory.factory.doctype.change_owner.change_owner.change_own"
}
}
# Scheduled Tasks
# ---------------
# scheduler_events = {
# "all": [
# "factory.tasks.all"
# ],
# "daily": [
# "factory.tasks.daily"
# ],
# "hourly": [
# "factory.tasks.hourly"
# ],
# "weekly": [
# "factory.tasks.weekly"
# ]
# "monthly": [
# "factory.tasks.monthly"
# ]
# }
# Testing
# -------
# before_tests = "factory.install.before_tests"
# Overriding Whitelisted Methods
# ------------------------------
#
# override_whitelisted_methods = {
# "frappe.desk.doctype.event.event.get_events": "factory.event.get_events"
# }

# File: main.py
# Repo: marcelo-nugatti/bot-form (MIT license)
#!/usr/bin/python3
# -*- coding: utf-8 -*-
# BotForm
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
class BotForm():
def __init__(self):
self.driver = webdriver.Chrome()
def setUp(self):
self.driver.get("https://ibo.pe/contacto")
time.sleep(5)
def bot_form(self):
self.name = self.driver.find_element_by_xpath('//*[@id="contacto"]/div/div[2]/div[2]/div/form/div[1]/div[1]/div/input')
self.name.send_keys("Arnold")
self.phone = self.driver.find_element_by_xpath('//*[@id="contacto"]/div/div[2]/div[2]/div/form/div[1]/div[2]/div/input')
self.phone.send_keys("987654321")
self.email = self.driver.find_element_by_xpath('//*[@id="contacto"]/div/div[2]/div[2]/div/form/div[2]/div[1]/div/input')
self.email.send_keys("terminator@gmail.com")
self.message = self.driver.find_element_by_xpath('//*[@id="contacto"]/div/div[2]/div[2]/div/form/div[2]/div[2]/div/textarea')
self.message.send_keys("Hasta la vista, baby")
self.button = self.driver.find_element_by_xpath('//*[@id="contacto"]/div/div[2]/div[2]/div/form/div[4]/div/div/button')
self.button.click()
time.sleep(5)
def bot_loop(self):
while True:
self.setUp()
self.bot_form()
if __name__ == '__main__':
bot = BotForm()
bot.bot_loop()

# File: thecut/publishing/settings.py
# Repo: exemplarysoftware/thecut-publishing (Apache-2.0 license)
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
from django.conf import settings
AUTH_USER_MODEL = getattr(settings, 'AUTH_USER_MODEL', 'auth.User')

# File: authors/wsgi.py
# Repo: andela/ah-backend-odin (BSD-3-Clause license)
"""
WSGI config for authors project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "authors.settings")
application = get_wsgi_application()

# File: planet_custom/tasks.py
# Repo: PrinceofMath/planet-custom (MIT license)
from planet import api
import sys
import os
import requests
import time
import frappe
apikey = '92e11e515ad344b386c19f0a4dce232c'
client = api.ClientV1(api_key=apikey)
item_type = ["PSScene4Band"]
asset_type = 'visual'
def create_daily_image():
locations = frappe.db.get_list("Location")
    print(locations)

# File: main.py
# Repo: NedoProgrammer/InvertexIndex (MIT license)
"""
The main file, where an inverted index can be
created, or an existing one opened for searching/editing.
"""
# Imports
import argparse
import inverted_index
import time
"""
The function for creating a new inverted index
from the --dataset, --stop-words and --dump-to
command-line arguments.
Apart from creating a new InvertedIndex instance
it will also save it as json to the file specified
in --dump-to.
"""
def create():
print("--- CREATING ---")
print("loading dataset..")
dataset = inverted_index.load_dataset(filepath=args.dataset, stop_words=args.stop_words)
print("getting unique words..")
unique = inverted_index.get_unique_words(dataset)
print("getting InvertedIndex..")
result = inverted_index.combine_into_list(unique)
print("--- DUMPING ---")
print("dumping json..")
result.dump(args.dump_to)
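The `inverted_index` module itself is imported rather than defined here. As a rough, self-contained sketch of the structure `create()` builds (my simplification, not the project's actual implementation): an inverted index maps each word to the list of document indexes that contain it, with stop words filtered out.

```python
def build_inverted_index(documents, stop_words=()):
    index = {}
    for doc_id, text in enumerate(documents):
        for word in text.lower().split():
            if word in stop_words:
                continue
            postings = index.setdefault(word, [])
            if doc_id not in postings:
                postings.append(doc_id)
    return index


docs = ["the cat sat", "the dog sat down"]
print(build_inverted_index(docs, stop_words={"the"}))
# {'cat': [0], 'sat': [0, 1], 'dog': [1], 'down': [1]}
```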
"""
The function for searching in an already
existing inverted index.
It loads the inverted index from --list
and searches it for the words given in --query (a list).
"""
def find():
print("--- LOADING ---")
print("loading inverted index list from", args.list, "..")
index = inverted_index.InvertedIndex.load_inverted_index(args.list)
print("--- FINDING ---")
result = []
print("searching for ", args.query, "..")
result += index.query(args.query)
# If we want to just print the output
    if args.save_query == "":
print(args.query)
print(result)
# If we want to output the result to a file
else:
print("--- SAVING ---")
print("saving result to", args.save_query, "..")
f = open(args.save_query, mode="w", encoding="utf-8")
f.write(
f'Queries "{", ".join(map(str, args.query))}" were found at these indexes:\n{", ".join(map(str, result))}')
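`find()` simply concatenates the postings for every queried word (`result += index.query(args.query)`). A multi-word lookup over a plain word-to-indexes dictionary can be sketched like this (a simplification, not the real `InvertedIndex.query`):

```python
def query(index, words):
    """Collect every document index recorded for any of the words."""
    result = []
    for word in words:
        result += index.get(word, [])
    return result


index = {"cat": [0, 2], "dog": [1]}
print(query(index, ["cat", "dog"]))   # [0, 2, 1]
print(query(index, ["fish"]))         # []
```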
"""
The function to tag an existing inverted index from --list,
Tag it with numbers from --tag,
And save it to --save_tag.
"""
def tag():
print("--- LOADING ---")
print("loading inverted index list from", args.list, "..")
index = inverted_index.InvertedIndex.load_inverted_index(args.list)
print("loading dataset from", args.dataset, "..")
dataset = inverted_index.load_dataset(args.dataset, args.stop_words)
    print("--- TAGGING ---")
    print("starting tagging:")
print("list:", args.list)
print("dataset:", args.dataset)
print("stop words:", args.stop_words)
print("split_into:", args.tag[0])
print("occurences:", args.tag[1])
print("save tag:", args.save_tag)
index.tag(int(args.tag[0]), int(args.tag[1]), dataset, args.save_tag)
"""
Argument parser:
-c - a flag showing if the user wants to create a new inverted index list.
-sq - a path to the file to which the user's query result will be written to.
-da - a path to the dataset which will be turned into an inverted index.
-du - a path to which the inverted index will be saved to.
-s - a path to the list of words which shouldn't be in the inverted index.
-l - a path to the invertex index. (used for searching and taggining)
-q - a list of words which will be searched in the inverted index list.
-t (int1) (int2) - int1 is "split_into", int2 is "occurences".
-st = a path to the file to which the user's tag result will be written to.
"""
parser = argparse.ArgumentParser()
parser.add_argument("-c", "--create", action="store_true", required=False, dest="create")
parser.add_argument("-sq", "--save-query", type=str, required=False, default="", dest="save_query")
parser.add_argument("-da", "--dataset", type=str, required=False, default="", dest="dataset")
parser.add_argument("-du", "--dump-to", type=str, required=False, default="", dest="dump_to")
parser.add_argument("-s", "--stop-words", type=str, required=False, default="", dest="stop_words")
parser.add_argument("-l", "--list", type=str, required=False, default="", dest="list")
parser.add_argument("-q", "--query", nargs='+', action="store", type=str, default="query", dest="query")
parser.add_argument("-t", "--tag", nargs=2, action="store", type=str, default="tag", dest="tag")
parser.add_argument("-st", "--save-tag", type=str, required=False, default="", dest="save_tag")
args = parser.parse_args()
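How these flags combine can be checked without touching `sys.argv` by handing `parse_args` an explicit list. A small demonstration with a fresh parser mirroring two of the options above:

```python
import argparse

demo = argparse.ArgumentParser()
demo.add_argument("-q", "--query", nargs='+', default="query", dest="query")
demo.add_argument("-l", "--list", type=str, default="", dest="list")

ns = demo.parse_args(["-q", "cat", "dog", "-l", "index.json"])
print(ns.query)  # ['cat', 'dog']
print(ns.list)   # index.json
```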
# If the user didn't specify anything.
if not args.create and args.dataset == "" and args.stop_words == "" and args.list == "" and args.query == "query" and args.dump_to == "" and args.save_query == "" and args.tag == "tag" and args.save_tag == "":
print("At least specify something! (or call --help)")
exit(1)
# If the user wants to create a new invertex index list but haven't specified
# The database, or stop words, or where it will be dumped.
elif args.create and (args.dataset == "" or args.stop_words == "" or args.dump_to == ""):
print("Please specify the --dataset, --stop-words and --dump-to to create an inverted index list!")
exit(1)
# If the user wants to find something in the invertex index list,
# But haven't specified the path to that list.
elif args.query != "query" and args.list == "":
print("Please specify the inverted index list file and the query list to find something in it!")
exit(1)
elif args.tag != "tag" and (args.save_tag == "" or args.list == "" or args.dataset == "" or args.stop_words == ""):
print("Please specify the inverted index list file, tag numbers, the tag save (--save-tag) file, the original "
"dataset (--dataset) and the stop words (--stop-words) to tag the file!")
exit(1)
start = time.time()
# If the database, stop words and dump to are specified
if args.create and args.dataset != "" and args.stop_words != "" and args.dump_to != "":
create()
# If a query is given and the path to the inverted index list is specified
if args.query != "query" and args.list != "":
find()
# If the tag is in the correct format and the inverted index list,
# tag save file and dataset are all specified
if args.tag != "tag" and len(args.tag) == 2 and args.list != "" and args.save_tag != "" and args.dataset != "":
tag()
# :D
print("done! took", time.time() - start, "seconds.")
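Sentinel checks like the ones above must compare strings with `==`/`!=` rather than `is`/`is not`: `is` tests object identity, which CPython only guarantees for certain interned literals. A standalone sketch of the difference (not part of the CLI itself):

```python
# Equality (==) compares string contents; identity (is) compares objects.
parts = ["que", "ry"]
runtime_query = "".join(parts)   # builds "query" at runtime, not interned

# Always True: equality looks at the characters.
assert runtime_query == "query"
# `runtime_query is "query"` depends on interning and may well be False
# (Python 3.8+ even emits a SyntaxWarning for `is` with a literal),
# which is why checks such as `args.query != "query"` use == / !=.
print(runtime_query == "query")
```

The same reasoning applies to every `args.<option> == ""` guard in this script.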
| 43.881944 | 210 | 0.655167 |
32b675b350c83d1fabd76f225883a67d8580f693 | 87,614 | py | Python | purity_fb/purity_fb_1dot11/apis/policies_api.py | tlewis-ps/purity_fb_python_client | 652835cbd485c95a86da27f8b661679727ec6ea0 | [
"Apache-2.0"
] | 5 | 2017-09-08T20:47:22.000Z | 2021-06-29T02:11:05.000Z | purity_fb/purity_fb_1dot11/apis/policies_api.py | tlewis-ps/purity_fb_python_client | 652835cbd485c95a86da27f8b661679727ec6ea0 | [
"Apache-2.0"
] | 16 | 2017-11-27T20:57:48.000Z | 2021-11-23T18:46:43.000Z | purity_fb/purity_fb_1dot11/apis/policies_api.py | tlewis-ps/purity_fb_python_client | 652835cbd485c95a86da27f8b661679727ec6ea0 | [
"Apache-2.0"
] | 22 | 2017-10-13T15:33:05.000Z | 2021-11-08T19:56:21.000Z | # coding: utf-8
"""
Pure Storage FlashBlade REST 1.11 Python SDK
Pure Storage FlashBlade REST 1.11 Python SDK. Compatible with REST API versions 1.0 - 1.11. Developed by [Pure Storage, Inc](http://www.purestorage.com/). Documentations can be found at [purity-fb.readthedocs.io](http://purity-fb.readthedocs.io/).
OpenAPI spec version: 1.11
Contact: info@purestorage.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class PoliciesApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
def create_policies(self, policy, **kwargs):
"""
Create a new policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policies(policy, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Policy policy: The attribute map used to create the policy. (required)
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_policies_with_http_info(policy, **kwargs)
else:
(data) = self.create_policies_with_http_info(policy, **kwargs)
return data
def create_policies_with_http_info(self, policy, **kwargs):
"""
Create a new policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policies_with_http_info(policy, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param Policy policy: The attribute map used to create the policy. (required)
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy', 'names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_policies" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'policy' is set
if ('policy' not in params) or (params['policy'] is None):
raise ValueError("Missing the required parameter `policy` when calling `create_policies`")
collection_formats = {}
path_params = {}
query_params = []
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'policy' in params:
body_params = params['policy']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
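The `collection_formats['names'] = 'csv'` entries in the method above tell the underlying client how list-valued query parameters get serialized onto the URL. A simplified, illustrative sketch of what the `csv` format implies (the real joining happens inside `ApiClient.call_api`; the function name here is made up):

```python
def serialize_query_param(name, values, collection_format="csv"):
    """Join a list-valued query parameter as a 'csv' collection format implies."""
    if collection_format == "csv":
        # e.g. names=["p1", "p2"] becomes the single pair ('names', 'p1,p2')
        return (name, ",".join(values))
    # other swagger formats ('ssv', 'tsv', 'pipes') just use other separators
    raise ValueError("unsupported collection format: %s" % collection_format)

print(serialize_query_param("names", ["policy1", "policy2"]))  # ('names', 'policy1,policy2')
```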
def create_policy_file_system_replica_links(self, **kwargs):
"""
Create a connection between a file system replica link and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policy_file_system_replica_links(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with `remote_ids` query parameter.
:return: PolicyMemberWithRemoteResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_policy_file_system_replica_links_with_http_info(**kwargs)
else:
(data) = self.create_policy_file_system_replica_links_with_http_info(**kwargs)
return data
def create_policy_file_system_replica_links_with_http_info(self, **kwargs):
"""
Create a connection between a file system replica link and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policy_file_system_replica_links_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with `remote_ids` query parameter.
:return: PolicyMemberWithRemoteResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['local_file_system_names', 'local_file_system_ids', 'policy_ids', 'policy_names', 'member_ids', 'remote_ids', 'remote_names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_policy_file_system_replica_links" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'local_file_system_names' in params:
query_params.append(('local_file_system_names', params['local_file_system_names']))
collection_formats['local_file_system_names'] = 'csv'
if 'local_file_system_ids' in params:
query_params.append(('local_file_system_ids', params['local_file_system_ids']))
collection_formats['local_file_system_ids'] = 'csv'
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'remote_ids' in params:
query_params.append(('remote_ids', params['remote_ids']))
collection_formats['remote_ids'] = 'csv'
if 'remote_names' in params:
query_params.append(('remote_names', params['remote_names']))
collection_formats['remote_names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-system-replica-links', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberWithRemoteResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_policy_filesystems(self, **kwargs):
"""
Create a connection between a file system and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policy_filesystems(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the member ids query parameters.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_policy_filesystems_with_http_info(**kwargs)
else:
(data) = self.create_policy_filesystems_with_http_info(**kwargs)
return data
def create_policy_filesystems_with_http_info(self, **kwargs):
"""
Create a connection between a file system and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_policy_filesystems_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the member ids query parameters.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_policy_filesystems" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-systems', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_policies(self, **kwargs):
"""
Delete a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policies(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] ids: A comma-separated list of resource IDs. This cannot be provided together with the name or names query parameters.
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_policies_with_http_info(**kwargs)
else:
(data) = self.delete_policies_with_http_info(**kwargs)
return data
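Every public method in this class wraps its `_with_http_info` twin in the same sync-versus-callback dispatch. A minimal sketch of that pattern (names illustrative; simplified in that the real client returns a request thread on the callback path rather than `None`, and no HTTP is involved here):

```python
def call(callback=None):
    result = {"status": "ok"}     # stands in for the parsed HTTP response
    if callback:
        callback(result)          # asynchronous style: result goes to the callback
        return None               # (the generated client returns the request thread)
    return result                 # synchronous style: result returned directly

received = []
call(callback=received.append)    # callback path: result lands in `received`
print(call())                     # sync path: {'status': 'ok'}
```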
def delete_policies_with_http_info(self, **kwargs):
"""
Delete a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policies_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] ids: A comma-separated list of resource IDs. This cannot be provided together with the name or names query parameters.
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['ids', 'names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_policies" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'ids' in params:
query_params.append(('ids', params['ids']))
collection_formats['ids'] = 'csv'
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
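The `all_params` loop repeated in each `_with_http_info` method above is a keyword-validation idiom: anything not whitelisted raises `TypeError`. A standalone sketch of that idiom (the helper name is made up for illustration):

```python
def validate_kwargs(all_params, kwargs):
    """Reject unexpected keyword arguments, mirroring the loop used above."""
    for key in kwargs:
        if key not in all_params:
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
    return kwargs

print(validate_kwargs(["ids", "names"], {"ids": ["1"]}))  # {'ids': ['1']}
```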
def delete_policy_file_system_replica_links(self, **kwargs):
"""
        Delete a connection between a file system replica link and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_file_system_replica_links(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with `remote_ids` query parameter.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_policy_file_system_replica_links_with_http_info(**kwargs)
else:
(data) = self.delete_policy_file_system_replica_links_with_http_info(**kwargs)
return data
def delete_policy_file_system_replica_links_with_http_info(self, **kwargs):
"""
        Delete a connection between a file system replica link and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_file_system_replica_links_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with `remote_ids` query parameter.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['local_file_system_names', 'local_file_system_ids', 'policy_ids', 'policy_names', 'member_ids', 'remote_ids', 'remote_names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_policy_file_system_replica_links" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'local_file_system_names' in params:
query_params.append(('local_file_system_names', params['local_file_system_names']))
collection_formats['local_file_system_names'] = 'csv'
if 'local_file_system_ids' in params:
query_params.append(('local_file_system_ids', params['local_file_system_ids']))
collection_formats['local_file_system_ids'] = 'csv'
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'remote_ids' in params:
query_params.append(('remote_ids', params['remote_ids']))
collection_formats['remote_ids'] = 'csv'
if 'remote_names' in params:
query_params.append(('remote_names', params['remote_names']))
collection_formats['remote_names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-system-replica-links', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_policy_filesystem_snapshots(self, **kwargs):
"""
        Delete a connection between a file system snapshot and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_filesystem_snapshots(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the member ids query parameters.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_policy_filesystem_snapshots_with_http_info(**kwargs)
else:
(data) = self.delete_policy_filesystem_snapshots_with_http_info(**kwargs)
return data
def delete_policy_filesystem_snapshots_with_http_info(self, **kwargs):
"""
        Delete a connection between a file system snapshot and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_filesystem_snapshots_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the member ids query parameters.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_policy_filesystem_snapshots" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-system-snapshots', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_policy_filesystems(self, **kwargs):
"""
        Delete a connection between a file system and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_filesystems(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the policy names query parameters.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the policy ids query parameters.
:param list[str] member_ids: A comma-separated list of member ids. This cannot be provided together with the member names query parameters.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the member ids query parameters.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_policy_filesystems_with_http_info(**kwargs)
else:
(data) = self.delete_policy_filesystems_with_http_info(**kwargs)
return data
def delete_policy_filesystems_with_http_info(self, **kwargs):
"""
        Delete a connection between a file system and a policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_policy_filesystems_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_policy_filesystems" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-systems', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None,
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_policies(self, **kwargs):
"""
List policies.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policies(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str filter: The filter to be used for query.
:param list[str] ids: A comma-separated list of resource IDs. This cannot be provided together with the `names` query parameter.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the `ids` query parameter.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_policies_with_http_info(**kwargs)
else:
(data) = self.list_policies_with_http_info(**kwargs)
return data
def list_policies_with_http_info(self, **kwargs):
"""
List policies.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policies_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str filter: The filter to be used for query.
:param list[str] ids: A comma-separated list of resource IDs. This cannot be provided together with the `names` query parameter.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the `ids` query parameter.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['filter', 'ids', 'limit', 'names', 'sort', 'start', 'token']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_policies" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'ids' in params:
query_params.append(('ids', params['ids']))
collection_formats['ids'] = 'csv'
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'sort' in params:
query_params.append(('sort', params['sort']))
if 'start' in params:
query_params.append(('start', params['start']))
if 'token' in params:
query_params.append(('token', params['token']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_policy_file_system_replica_links(self, **kwargs):
"""
List policies attached to file system replica links.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_file_system_replica_links(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with the `remote_ids` query parameter.
:param list[str] remote_file_system_names: A comma-separated list of remote file system names. This cannot be provided together with the `remote_file_system_ids` query parameter.
:param list[str] remote_file_system_ids: A comma-separated list of remote file system IDs. This cannot be provided together with the `remote_file_system_names` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberWithRemoteResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_policy_file_system_replica_links_with_http_info(**kwargs)
else:
(data) = self.list_policy_file_system_replica_links_with_http_info(**kwargs)
return data
def list_policy_file_system_replica_links_with_http_info(self, **kwargs):
"""
List policies attached to file system replica links.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_file_system_replica_links_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] local_file_system_names: A comma-separated list of local file system names. This cannot be provided together with the `local_file_system_ids` query parameter.
:param list[str] local_file_system_ids: A comma-separated list of local file system IDs. This cannot be provided together with the `local_file_system_names` query parameter.
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] remote_ids: A comma-separated list of remote array IDs. This cannot be provided together with the `remote_names` query parameter.
:param list[str] remote_names: A comma-separated list of remote array names. This cannot be provided together with the `remote_ids` query parameter.
:param list[str] remote_file_system_names: A comma-separated list of remote file system names. This cannot be provided together with the `remote_file_system_ids` query parameter.
:param list[str] remote_file_system_ids: A comma-separated list of remote file system IDs. This cannot be provided together with the `remote_file_system_names` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberWithRemoteResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['local_file_system_names', 'local_file_system_ids', 'policy_ids', 'policy_names', 'member_ids', 'remote_ids', 'remote_names', 'remote_file_system_names', 'remote_file_system_ids', 'filter', 'sort', 'start', 'limit', 'token']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_policy_file_system_replica_links" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'local_file_system_names' in params:
query_params.append(('local_file_system_names', params['local_file_system_names']))
collection_formats['local_file_system_names'] = 'csv'
if 'local_file_system_ids' in params:
query_params.append(('local_file_system_ids', params['local_file_system_ids']))
collection_formats['local_file_system_ids'] = 'csv'
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'remote_ids' in params:
query_params.append(('remote_ids', params['remote_ids']))
collection_formats['remote_ids'] = 'csv'
if 'remote_names' in params:
query_params.append(('remote_names', params['remote_names']))
collection_formats['remote_names'] = 'csv'
if 'remote_file_system_names' in params:
query_params.append(('remote_file_system_names', params['remote_file_system_names']))
collection_formats['remote_file_system_names'] = 'csv'
if 'remote_file_system_ids' in params:
query_params.append(('remote_file_system_ids', params['remote_file_system_ids']))
collection_formats['remote_file_system_ids'] = 'csv'
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'sort' in params:
query_params.append(('sort', params['sort']))
if 'start' in params:
query_params.append(('start', params['start']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'token' in params:
query_params.append(('token', params['token']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-system-replica-links', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberWithRemoteResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_policy_filesystem_snapshots(self, **kwargs):
"""
List policies attached to file system snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_filesystem_snapshots(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_policy_filesystem_snapshots_with_http_info(**kwargs)
else:
(data) = self.list_policy_filesystem_snapshots_with_http_info(**kwargs)
return data
def list_policy_filesystem_snapshots_with_http_info(self, **kwargs):
"""
List policies attached to file system snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_filesystem_snapshots_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names', 'filter', 'sort', 'start', 'limit', 'token']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_policy_filesystem_snapshots" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'sort' in params:
query_params.append(('sort', params['sort']))
if 'start' in params:
query_params.append(('start', params['start']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'token' in params:
query_params.append(('token', params['token']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-system-snapshots', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_policy_filesystems(self, **kwargs):
"""
List policies attached to file systems.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_filesystems(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_policy_filesystems_with_http_info(**kwargs)
else:
(data) = self.list_policy_filesystems_with_http_info(**kwargs)
return data
def list_policy_filesystems_with_http_info(self, **kwargs):
"""
List policies attached to file systems.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_filesystems_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names', 'filter', 'sort', 'start', 'limit', 'token']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_policy_filesystems" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'sort' in params:
query_params.append(('sort', params['sort']))
if 'start' in params:
query_params.append(('start', params['start']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'token' in params:
query_params.append(('token', params['token']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/file-systems', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_policy_members(self, **kwargs):
"""
List policies attached to file systems and file system snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_members(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param list[str] member_types: A list of member types.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.list_policy_members_with_http_info(**kwargs)
else:
(data) = self.list_policy_members_with_http_info(**kwargs)
return data
def list_policy_members_with_http_info(self, **kwargs):
"""
List policies attached to file systems and file system snapshots.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.list_policy_members_with_http_info(callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param list[str] policy_ids: A comma-separated list of policy IDs. This cannot be provided together with the `policy_names` query parameter.
:param list[str] policy_names: A comma-separated list of policy names. This cannot be provided together with the `policy_ids` query parameter.
:param list[str] member_ids: A comma-separated list of member IDs. This cannot be provided together with the `member_names` query parameter.
:param list[str] member_names: A comma-separated list of member names. This cannot be provided together with the `member_ids` query parameter.
:param list[str] member_types: A list of member types.
:param str filter: The filter to be used for query.
:param str sort: Sort the response by the specified fields (in descending order if '-' is appended to the field name).
:param int start: The offset of the first resource to return from a collection.
:param int limit: Limit the number of resources returned. Must be >= 0.
:param str token: An opaque token used to iterate over a collection. The token to use on the next request is returned in the `continuation_token` field of the result.
:return: PolicyMemberResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_ids', 'policy_names', 'member_ids', 'member_names', 'member_types', 'filter', 'sort', 'start', 'limit', 'token']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_policy_members" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'policy_ids' in params:
query_params.append(('policy_ids', params['policy_ids']))
collection_formats['policy_ids'] = 'csv'
if 'policy_names' in params:
query_params.append(('policy_names', params['policy_names']))
collection_formats['policy_names'] = 'csv'
if 'member_ids' in params:
query_params.append(('member_ids', params['member_ids']))
collection_formats['member_ids'] = 'csv'
if 'member_names' in params:
query_params.append(('member_names', params['member_names']))
collection_formats['member_names'] = 'csv'
if 'member_types' in params:
query_params.append(('member_types', params['member_types']))
collection_formats['member_types'] = 'csv'
if 'filter' in params:
query_params.append(('filter', params['filter']))
if 'sort' in params:
query_params.append(('sort', params['sort']))
if 'start' in params:
query_params.append(('start', params['start']))
if 'limit' in params:
query_params.append(('limit', params['limit']))
if 'token' in params:
query_params.append(('token', params['token']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies/members', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyMemberResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
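Every list-valued query parameter above is registered in `collection_formats` as `'csv'`, which tells the generated client to join the list with commas before the query string is built. A minimal standalone sketch of that joining step (the helper name `serialize_query_params` is illustrative, not part of the real `ApiClient`):

```python
# Sketch of how 'csv' collection formats flatten list-valued query parameters.
# Mirrors the behavior of swagger-codegen style clients; the helper name is
# hypothetical and not part of the generated ApiClient.
def serialize_query_params(query_params, collection_formats):
    """Join list values with commas when their registered format is 'csv'."""
    serialized = []
    for name, value in query_params:
        if collection_formats.get(name) == 'csv' and isinstance(value, list):
            serialized.append((name, ','.join(str(v) for v in value)))
        else:
            serialized.append((name, value))
    return serialized

params = [('member_types', ['user', 'group']), ('limit', 10)]
formats = {'member_types': 'csv'}
assert serialize_query_params(params, formats) == [
    ('member_types', 'user,group'), ('limit', 10)]
```

So a call passing `member_ids=['a', 'b']` reaches the server as `member_ids=a,b`, matching the comma-separated lists described in the docstring.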
def update_policies(self, policy_patch, **kwargs):
"""
Update an existing policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_policies(policy_patch, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param PolicyPatch policy_patch: The attribute map used to update the policy. (required)
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:param bool destroy_snapshots: This parameter must be set to `true` in order to remove rules that have snapshots created. Setting this parameter to `true` is an acknowledgement that some of the snapshots managed by this policy will be destroyed.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_policies_with_http_info(policy_patch, **kwargs)
else:
(data) = self.update_policies_with_http_info(policy_patch, **kwargs)
return data
def update_policies_with_http_info(self, policy_patch, **kwargs):
"""
Update an existing policy.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_policies_with_http_info(policy_patch, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param PolicyPatch policy_patch: The attribute map used to update the policy. (required)
:param list[str] names: A comma-separated list of resource names. This cannot be provided together with the ids query parameters.
:param bool destroy_snapshots: This parameter must be set to `true` in order to remove rules that have snapshots created. Setting this parameter to `true` is an acknowledgement that some of the snapshots managed by this policy will be destroyed.
:return: PolicyResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['policy_patch', 'names', 'destroy_snapshots']
all_params.append('callback')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_policies" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'policy_patch' is set
if ('policy_patch' not in params) or (params['policy_patch'] is None):
raise ValueError("Missing the required parameter `policy_patch` when calling `update_policies`")
collection_formats = {}
path_params = {}
query_params = []
if 'names' in params:
query_params.append(('names', params['names']))
collection_formats['names'] = 'csv'
if 'destroy_snapshots' in params:
query_params.append(('destroy_snapshots', params['destroy_snapshots']))
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'policy_patch' in params:
body_params = params['policy_patch']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['AuthTokenHeader']
return self.api_client.call_api('/1.11/policies', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='PolicyResponse',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
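`update_policies` above is a thin dispatcher: it forces `_return_http_data_only`, then either returns the request thread (when a `callback` is supplied) or unwraps the response data from the synchronous call. The pattern, reduced to a self-contained sketch in which `fetch` stands in for `update_policies_with_http_info`:

```python
# Sketch of the sync-vs-async dispatch used by the generated wrapper methods.
# `fetch` is a stand-in for update_policies_with_http_info; in the real client
# the async branch returns a request thread rather than the string 'thread'.
def fetch(value, **kwargs):
    result = value * 2
    callback = kwargs.get('callback')
    if callback:
        callback(result)   # async path: deliver the result via the callback
        return 'thread'    # placeholder for the request thread
    return result          # sync path: return the data directly

def update(value, **kwargs):
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        return fetch(value, **kwargs)
    (data) = fetch(value, **kwargs)
    return data

received = []
assert update(21) == 42                                  # synchronous call
assert update(21, callback=received.append) == 'thread'  # asynchronous call
assert received == [42]
```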
# --- File: qa/rpc-tests/high_priority_transaction.py (jonspock/maza, MIT) ---
#!/usr/bin/env python3
] | 1 | 2020-08-31T04:49:06.000Z | 2020-08-31T04:49:06.000Z | #!/usr/bin/env python3
# Copyright (c) 2017 The Bitcoin developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
# Test HighPriorityTransaction code
#
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
from test_framework.mininode import COIN
from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE, COINBASE_MATURITY
class HighPriorityTransactionTest(BitcoinTestFramework):
def __init__(self):
super().__init__()
self.setup_clean_chain = True
self.is_network_split = False
self.num_nodes = 2
def setup_nodes(self):
self.nodes = start_nodes(self.num_nodes, self.options.tmpdir, extra_args=[
["-blockprioritypercentage=0", "-limitfreerelay=2"],
["-limitfreerelay=2"]
])
def create_small_transactions(self, node, utxos, num, fee):
addr = node.getnewaddress()
txids = []
for _ in range(num):
t = utxos.pop()
inputs = [{"txid": t["txid"], "vout": t["vout"]}]
outputs = {}
change = t['amount'] - fee
outputs[addr] = satoshi_round(change)
rawtx = node.createrawtransaction(inputs, outputs)
signresult = node.signrawtransaction(
rawtx, None, None, "NONE|FORKID")
txid = node.sendrawtransaction(signresult["hex"], True)
txids.append(txid)
return txids
def generate_high_priotransactions(self, node, count):
# generate a bunch of spendable utxos
self.txouts = gen_return_txouts()
        # create `count` simple one-input one-output hi-prio txns
        hiprio_utxo_count = count
age = 250
# be sure to make this utxo aged enough
hiprio_utxos = create_confirmed_utxos(
self.relayfee, node, hiprio_utxo_count, age)
txids = []
# Create hiprio_utxo_count number of txns with 0 fee
range_size = [0, hiprio_utxo_count]
start_range = range_size[0]
end_range = range_size[1]
txids = self.create_small_transactions(
node, hiprio_utxos[start_range:end_range], end_range - start_range, 0)
return txids
def run_test(self):
# this is the priority cut off as defined in AllowFreeThreshold() (see: src/txmempool.h)
# anything above that value is considered an high priority transaction
hiprio_threshold = COIN * 144 / 250
self.relayfee = self.nodes[0].getnetworkinfo()['relayfee']
# first test step: 0 reserved prio space in block
txids = self.generate_high_priotransactions(self.nodes[0], 150)
mempool_size_pre = self.nodes[0].getmempoolinfo()['bytes']
mempool = self.nodes[0].getrawmempool(True)
# assert that all the txns are in the mempool and that all of them are hi prio
for i in txids:
assert(i in mempool)
assert(mempool[i]['currentpriority'] > hiprio_threshold)
# mine one block
self.nodes[0].generate(1)
self.log.info(
"Assert that all high prio transactions haven't been mined")
assert_equal(self.nodes[0].getmempoolinfo()['bytes'], mempool_size_pre)
# second test step: default reserved prio space in block (100K).
# the mempool size is about 25K this means that all txns will be
# included in the soon to be mined block
txids = self.generate_high_priotransactions(self.nodes[1], 150)
mempool_size_pre = self.nodes[1].getmempoolinfo()['bytes']
mempool = self.nodes[1].getrawmempool(True)
# assert that all the txns are in the mempool and that all of them are hiprio
for i in txids:
assert(i in mempool)
assert(mempool[i]['currentpriority'] > hiprio_threshold)
# mine one block
self.nodes[1].generate(1)
self.log.info("Assert that all high prio transactions have been mined")
assert(self.nodes[1].getmempoolinfo()['bytes'] == 0)
if __name__ == '__main__':
HighPriorityTransactionTest().main()
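The `hiprio_threshold = COIN * 144 / 250` cutoff used in `run_test` mirrors Bitcoin's legacy `AllowFreeThreshold()`: a transaction's priority is the value-weighted age of its inputs divided by its size, and the free-relay bar corresponds to a one-coin input aged one day (144 blocks) spent in a 250-byte transaction. A sketch under those assumptions (the numbers are illustrative):

```python
# Sketch of the legacy coin-age priority formula behind AllowFreeThreshold().
# Input values are in satoshis, ages in blocks.
COIN = 100000000  # satoshis per coin

def tx_priority(inputs, tx_size):
    """inputs: iterable of (value_in_satoshis, age_in_blocks) pairs."""
    return sum(value * age for value, age in inputs) / tx_size

hiprio_threshold = COIN * 144 / 250

# A one-coin input aged exactly one day in a 250-byte tx sits on the threshold:
assert tx_priority([(COIN, 144)], 250) == hiprio_threshold
# An older, larger input comfortably clears it:
assert tx_priority([(2 * COIN, 250)], 250) > hiprio_threshold
```

This is why the test ages its UTXOs by 250 blocks before spending them with zero fee.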
# --- File: app/api/utils/elevenST.py (semicolonDSM/DDYZD_ERP_V2_Crawling_API, MIT) ---
from .crawler import Crawler
import requests
import json
import re
class ElevenST(Crawler):
def __init__(self, url):
super().__init__(url)
self.first_option_api = 'http://www.11st.co.kr/product/SellerProductDetailAjax.tmall?method=getTopOptionInfoJson&prdNo='+self.get_prd_no()
self.sub_option_api = 'http://www.11st.co.kr/products/'+self.get_prd_no()+'/sub-options?optNoArr={}&optLvl=2&selOptCnt=3&strNo=0'
self.last_option_api = 'http://www.11st.co.kr/products/'+self.get_prd_no()+'/last-options?optNoArr={}&selOptCnt={}&strNo=0'
self.option_apis = [self.first_option_api, self.sub_option_api, self.last_option_api]
def get_prd_no(self):
        p = re.compile(r'/([0-9]+)/?')
return p.findall(self.url)[0]
def get_prd_title(self):
return self.soup.find('h1', class_='title').contents[0]
def get_opt_title_list(self):
response = requests.get(self.option_apis[0])
        response.encoding = 'UTF-8'
response = json.loads(response.text[1:-1])
return response.get('selOptTitleList')
def get_option_json(self, num, opt_no_arr=''):
        if num < 0 or num > 2:
            raise ValueError("num must be 0, 1, or 2; got: " + str(num))
if num == 0 or num == 1:
url = self.option_apis[num].format(opt_no_arr)
else:
url = self.option_apis[num].format(opt_no_arr, len(self.get_opt_title_list()))
print(url)
response = requests.get(url)
        response.encoding = 'UTF-8'
if num == 0:
response = json.loads(response.text[1:-1])
response = response.get('selOptList')
else:
response = json.loads(response.text)
response = response.get('infoList')
return response
def parse_option(self, option, opt_no_arr=''):
options = self.options
if opt_no_arr != '':
for opt_no in opt_no_arr.split(','):
options = options[opt_no][1]
options[option.get('optNo', option.get('dataOptno'))] = [{
'dtlOptNm': option.get('dtlOptNm', option.get('dataDtloptnm')),
'stckQty': option.get('stckQty', option.get('dataStckqty')),
'price': option.get('price', option.get('dataPrice')) ,
'minAddPrc': option.get('minAddPrc', option.get('dataMinaddprc')),
'maxAddPrc': option.get('maxAddPrc', option.get('dataMaxaddprc'))
}, {}]
return option.get('optNo', option.get('dataOptno'))
def run(self):
opt_list_len = len(self.get_opt_title_list())
for sel_opt1 in self.get_option_json(num=0):
opt_no_arr1 = self.parse_option(sel_opt1)
if opt_list_len ==2:
for sel_opt2 in self.get_option_json(num=2, opt_no_arr=opt_no_arr1):
self.parse_option(sel_opt2, opt_no_arr=opt_no_arr1)
elif opt_list_len == 3:
for sel_opt2 in self.get_option_json(num=1, opt_no_arr=opt_no_arr1):
opt_no_arr2 = opt_no_arr1+','+self.parse_option(sel_opt2, opt_no_arr=opt_no_arr1)
for sel_opt3 in self.get_option_json(num=2, opt_no_arr=opt_no_arr2):
self.parse_option(sel_opt3, opt_no_arr=opt_no_arr2)
return self.get_opt_title_list(), self.options
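`get_prd_no` above relies on a regex to pull the first numeric path segment out of the product URL. Run standalone (the sample URLs are illustrative, not live product pages):

```python
# Sketch of the product-number extraction used by ElevenST.get_prd_no.
import re

p = re.compile(r'/([0-9]+)/?')

def get_prd_no(url):
    matches = p.findall(url)
    return matches[0] if matches else None

assert get_prd_no('http://www.11st.co.kr/products/1234567890') == '1234567890'
assert get_prd_no('http://www.11st.co.kr/products/123456/detail') == '123456'
assert get_prd_no('http://www.11st.co.kr/products/none') is None
```

Note that the pattern grabs the *first* all-digit segment after a slash, so it only works on URLs where the product number is the earliest numeric path component.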
# --- File: examples/pytorch/text-classification/run_glue.py (jecoz/transformers, Apache-2.0) ---
#!/usr/bin/env python
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Finetuning the library models for sequence classification on GLUE."""
# You can also adapt this script on your own text classification task. Pointers for this are left as comments.
import logging
import os
import random
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
import numpy as np
from datasets import load_dataset, load_metric
import transformers
from transformers import (
AutoConfig,
AutoModelForSequenceClassification,
AutoTokenizer,
DataCollatorWithPadding,
EvalPrediction,
HfArgumentParser,
PretrainedConfig,
Trainer,
TrainingArguments,
default_data_collator,
set_seed,
)
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version
from transformers.utils.versions import require_version
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.14.0.dev0")
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt")
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
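`task_to_keys` maps each GLUE task to the dataset column(s) carrying its input text, with `None` as the second key for single-sentence tasks. The tokenizer-argument construction that `preprocess_function` performs later reduces to this sketch:

```python
# Sketch: how (sentence1_key, sentence2_key) select the tokenizer inputs.
task_to_keys = {
    "cola": ("sentence", None),          # single-sentence task
    "mrpc": ("sentence1", "sentence2"),  # sentence-pair task
}

def tokenizer_args(example, task):
    s1, s2 = task_to_keys[task]
    return (example[s1],) if s2 is None else (example[s1], example[s2])

single = {"sentence": "The cat sat."}
pair = {"sentence1": "A man eats.", "sentence2": "Someone is eating."}
assert tokenizer_args(single, "cola") == ("The cat sat.",)
assert tokenizer_args(pair, "mrpc") == ("A man eats.", "Someone is eating.")
```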
logger = logging.getLogger(__name__)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
Using `HfArgumentParser` we can turn this class
into argparse arguments to be able to specify them on
the command line.
"""
task_name: Optional[str] = field(
default=None,
metadata={"help": "The name of the task to train on: " + ", ".join(task_to_keys.keys())},
)
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
max_seq_length: int = field(
default=128,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached preprocessed datasets or not."}
)
pad_to_max_length: bool = field(
default=True,
metadata={
"help": "Whether to pad all samples to `max_seq_length`. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch."
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
max_predict_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of prediction examples to this "
"value if set."
},
)
train_file: Optional[str] = field(
default=None, metadata={"help": "A csv or a json file containing the training data."}
)
validation_file: Optional[str] = field(
default=None, metadata={"help": "A csv or a json file containing the validation data."}
)
test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."})
def __post_init__(self):
if self.task_name is not None:
self.task_name = self.task_name.lower()
if self.task_name not in task_to_keys.keys():
raise ValueError("Unknown task, you should pick one in " + ",".join(task_to_keys.keys()))
elif self.dataset_name is not None:
pass
elif self.train_file is None or self.validation_file is None:
raise ValueError("Need either a GLUE task, a training/validation file or a dataset name.")
else:
train_extension = self.train_file.split(".")[-1]
assert train_extension in ["csv", "json"], "`train_file` should be a csv or a json file."
validation_extension = self.validation_file.split(".")[-1]
assert (
validation_extension == train_extension
), "`validation_file` should have the same extension (csv or json) as `train_file`."
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
model_name_or_path: str = field(
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
"with private models)."
},
)
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
        + f", distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
logger.info(f"Training/evaluation parameters {training_args}")
# Detecting last checkpoint.
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
# Set seed before initializing model.
set_seed(training_args.seed)
# Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below)
# or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the
# sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named
# label if at least two columns are provided.
#
# If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this
# single column. You can easily tweak this behavior (see below)
#
# In distributed training, the load_dataset function guarantee that only one local process can concurrently
# download the dataset.
if data_args.task_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir)
elif data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
raw_datasets = load_dataset(
data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir
)
else:
# Loading a dataset from your local files.
# CSV/JSON training and evaluation files are needed.
data_files = {"train": data_args.train_file, "validation": data_args.validation_file}
# Get the test dataset: you can provide your own CSV/JSON test file (see below)
# when you use `do_predict` without specifying a GLUE benchmark task.
if training_args.do_predict:
if data_args.test_file is not None:
train_extension = data_args.train_file.split(".")[-1]
test_extension = data_args.test_file.split(".")[-1]
assert (
test_extension == train_extension
), "`test_file` should have the same extension (csv or json) as `train_file`."
data_files["test"] = data_args.test_file
else:
raise ValueError("Need either a GLUE task or a test file for `do_predict`.")
for key in data_files.keys():
logger.info(f"load a local file for {key}: {data_files[key]}")
if data_args.train_file.endswith(".csv"):
# Loading a dataset from local csv files
raw_datasets = load_dataset("csv", data_files=data_files, cache_dir=model_args.cache_dir)
else:
# Loading a dataset from local json files
raw_datasets = load_dataset("json", data_files=data_files, cache_dir=model_args.cache_dir)
# See more about loading any type of standard or custom dataset at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Labels
if data_args.task_name is not None:
is_regression = data_args.task_name == "stsb"
if not is_regression:
label_list = raw_datasets["train"].features["label"].names
num_labels = len(label_list)
else:
num_labels = 1
else:
# Trying to have good defaults here, don't hesitate to tweak to your needs.
is_regression = raw_datasets["train"].features["label"].dtype in ["float32", "float64"]
if is_regression:
num_labels = 1
else:
# A useful fast method:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique
label_list = raw_datasets["train"].unique("label")
label_list.sort() # Let's sort it for determinism
num_labels = len(label_list)
# Load pretrained model and tokenizer
#
# In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
num_labels=num_labels,
finetuning_task=data_args.task_name,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
use_fast=model_args.use_fast_tokenizer,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
# Preprocessing the raw_datasets
if data_args.task_name is not None:
sentence1_key, sentence2_key = task_to_keys[data_args.task_name]
else:
# Again, we try to have some nice defaults but don't hesitate to tweak to your use case.
non_label_column_names = [name for name in raw_datasets["train"].column_names if name != "label"]
if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names:
sentence1_key, sentence2_key = "sentence1", "sentence2"
else:
if len(non_label_column_names) >= 2:
sentence1_key, sentence2_key = non_label_column_names[:2]
else:
sentence1_key, sentence2_key = non_label_column_names[0], None
# Padding strategy
if data_args.pad_to_max_length:
padding = "max_length"
else:
# We will pad later, dynamically at batch creation, to the max sequence length in each batch
padding = False
# Some models have set the order of the labels to use, so let's make sure we do use it.
label_to_id = None
if (
model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id
and data_args.task_name is not None
and not is_regression
):
# Some have all caps in their config, some don't.
label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}
if list(sorted(label_name_to_id.keys())) == list(sorted(label_list)):
label_to_id = {i: int(label_name_to_id[label_list[i]]) for i in range(num_labels)}
else:
            logger.warning(
                "Your model seems to have been trained with labels, but they don't match the dataset: "
                f"model labels: {list(sorted(label_name_to_id.keys()))}, dataset labels: {list(sorted(label_list))}."
                "\nIgnoring the model labels as a result."
            )
elif data_args.task_name is None and not is_regression:
label_to_id = {v: i for i, v in enumerate(label_list)}
if label_to_id is not None:
model.config.label2id = label_to_id
model.config.id2label = {id: label for label, id in config.label2id.items()}
elif data_args.task_name is not None and not is_regression:
model.config.label2id = {l: i for i, l in enumerate(label_list)}
model.config.id2label = {id: label for label, id in config.label2id.items()}
if data_args.max_seq_length > tokenizer.model_max_length:
logger.warning(
            f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the "
            f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}."
)
max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)
def preprocess_function(examples):
# Tokenize the texts
args = (
(examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key])
)
result = tokenizer(*args, padding=padding, max_length=max_seq_length, truncation=True)
# Map labels to IDs (not necessary for GLUE tasks)
if label_to_id is not None and "label" in examples:
result["label"] = [(label_to_id[l] if l != -1 else -1) for l in examples["label"]]
return result
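The label-mapping line in `preprocess_function` passes `-1` (the conventional "no label" marker used in GLUE test splits) through unchanged while translating every real label via `label_to_id`. In isolation (the mapping below is illustrative):

```python
# Sketch: label remapping with -1 ("unlabeled") passed through untouched.
label_to_id = {"negative": 0, "positive": 1}  # illustrative mapping
examples = {"label": ["positive", "negative", -1]}
mapped = [(label_to_id[l] if l != -1 else -1) for l in examples["label"]]
assert mapped == [1, 0, -1]
```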
with training_args.main_process_first(desc="dataset map pre-processing"):
raw_datasets = raw_datasets.map(
preprocess_function,
batched=True,
load_from_cache_file=not data_args.overwrite_cache,
desc="Running tokenizer on dataset",
)
if training_args.do_train:
if "train" not in raw_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = raw_datasets["train"]
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
if training_args.do_eval:
if "validation" not in raw_datasets and "validation_matched" not in raw_datasets:
raise ValueError("--do_eval requires a validation dataset")
eval_dataset = raw_datasets["validation_matched" if data_args.task_name == "mnli" else "validation"]
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
if training_args.do_predict or data_args.task_name is not None or data_args.test_file is not None:
if "test" not in raw_datasets and "test_matched" not in raw_datasets:
raise ValueError("--do_predict requires a test dataset")
predict_dataset = raw_datasets["test_matched" if data_args.task_name == "mnli" else "test"]
if data_args.max_predict_samples is not None:
predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))
# Log a few random samples from the training set:
if training_args.do_train:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# Get the metric function
if data_args.task_name is not None:
metric = load_metric("glue", data_args.task_name)
else:
metric = load_metric("accuracy")
# You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a
# predictions and label_ids field) and has to return a dictionary string to float.
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
if data_args.task_name is not None:
result = metric.compute(predictions=preds, references=p.label_ids)
if len(result) > 1:
result["combined_score"] = np.mean(list(result.values())).item()
return result
elif is_regression:
return {"mse": ((preds - p.label_ids) ** 2).mean().item()}
else:
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
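`compute_metrics` first collapses the raw model output: regression heads produce logits of shape `(n, 1)` that are squeezed to `(n,)`, while classification heads take an argmax over the class axis; the accuracy fallback then compares against the labels. With numpy alone, on toy logits:

```python
# Sketch of the prediction post-processing in compute_metrics.
import numpy as np

reg_preds = np.array([[0.5], [1.25], [2.0]])    # regression head output (n, 1)
cls_preds = np.array([[0.1, 0.9], [0.8, 0.2]])  # 2-class logits (n, 2)

assert np.squeeze(reg_preds).tolist() == [0.5, 1.25, 2.0]
assert np.argmax(cls_preds, axis=1).tolist() == [1, 0]

# The accuracy fallback used when no GLUE metric applies:
labels = np.array([1, 1])
acc = (np.argmax(cls_preds, axis=1) == labels).astype(np.float32).mean().item()
assert abs(acc - 0.5) < 1e-6
```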
# Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.
if data_args.pad_to_max_length:
data_collator = default_data_collator
elif training_args.fp16:
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)
else:
data_collator = None
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# Training
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.save_model() # Saves the tokenizer too for easy upload
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
if training_args.do_eval:
logger.info("*** Evaluate ***")
# Loop to handle MNLI double evaluation (matched, mis-matched)
tasks = [data_args.task_name]
eval_datasets = [eval_dataset]
if data_args.task_name == "mnli":
tasks.append("mnli-mm")
eval_datasets.append(raw_datasets["validation_mismatched"])
for eval_dataset, task in zip(eval_datasets, tasks):
metrics = trainer.evaluate(eval_dataset=eval_dataset)
max_eval_samples = (
data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
)
metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
if training_args.do_predict:
logger.info("*** Predict ***")
# Loop to handle MNLI double evaluation (matched, mis-matched)
tasks = [data_args.task_name]
predict_datasets = [predict_dataset]
if data_args.task_name == "mnli":
tasks.append("mnli-mm")
predict_datasets.append(raw_datasets["test_mismatched"])
for predict_dataset, task in zip(predict_datasets, tasks):
            # Removing the `label` column because it contains -1 and the Trainer won't like that.
predict_dataset = predict_dataset.remove_columns("label")
predictions = trainer.predict(predict_dataset, metric_key_prefix="predict").predictions
predictions = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1)
output_predict_file = os.path.join(training_args.output_dir, f"predict_results_{task}.txt")
if trainer.is_world_process_zero():
with open(output_predict_file, "w") as writer:
logger.info(f"***** Predict results {task} *****")
writer.write("index\tprediction\n")
for index, item in enumerate(predictions):
if is_regression:
writer.write(f"{index}\t{item:3.3f}\n")
else:
item = label_list[item]
writer.write(f"{index}\t{item}\n")
kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"}
if data_args.task_name is not None:
kwargs["language"] = "en"
kwargs["dataset_tags"] = "glue"
kwargs["dataset_args"] = data_args.task_name
kwargs["dataset"] = f"GLUE {data_args.task_name.upper()}"
if training_args.push_to_hub:
trainer.push_to_hub(**kwargs)
else:
trainer.create_model_card(**kwargs)
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
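# The compute_metrics function above folds multiple GLUE metrics into one
# number. A minimal, dependency-free sketch of just that averaging step
# (the metric values below are made-up, not outputs of this script):

```python
from statistics import fmean

def combine_glue_metrics(result: dict) -> dict:
    # Mirrors the branch in compute_metrics: when a task reports more
    # than one metric (e.g. accuracy and F1), also store their mean
    # under "combined_score" so a single scalar can be logged.
    if len(result) > 1:
        result["combined_score"] = fmean(result.values())
    return result

scores = combine_glue_metrics({"accuracy": 0.84, "f1": 0.88})  # hypothetical values
```

# A single-metric dict passes through unchanged, matching the regression
# ("mse") and plain "accuracy" branches above.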

# File: df_resources/callbacks.py | repo: jtmahoney/segmentation_pipeline | license: MIT
# From https://github.com/bckenstler/CLR
from keras.callbacks import *
import keras.backend as K
import numpy as np
class CyclicLR(Callback):
"""This callback implements a cyclical learning rate policy (CLR).
The method cycles the learning rate between two boundaries with
some constant frequency, as detailed in this paper (https://arxiv.org/abs/1506.01186).
The amplitude of the cycle can be scaled on a per-iteration or
per-cycle basis.
This class has three built-in policies, as put forth in the paper.
"triangular":
A basic triangular cycle w/ no amplitude scaling.
"triangular2":
A basic triangular cycle that scales initial amplitude by half each cycle.
"exp_range":
A cycle that scales initial amplitude by gamma**(cycle iterations) at each
cycle iteration.
For more detail, please see paper.
# Example
```python
clr = CyclicLR(base_lr=0.001, max_lr=0.006,
step_size=2000., mode='triangular')
model.fit(X_train, Y_train, callbacks=[clr])
```
Class also supports custom scaling functions:
```python
clr_fn = lambda x: 0.5*(1+np.sin(x*np.pi/2.))
clr = CyclicLR(base_lr=0.001, max_lr=0.006,
step_size=2000., scale_fn=clr_fn,
scale_mode='cycle')
model.fit(X_train, Y_train, callbacks=[clr])
```
# Arguments
base_lr: initial learning rate which is the
lower boundary in the cycle.
max_lr: upper boundary in the cycle. Functionally,
it defines the cycle amplitude (max_lr - base_lr).
The lr at any cycle is the sum of base_lr
and some scaling of the amplitude; therefore
max_lr may not actually be reached depending on
scaling function.
step_size: number of training iterations per
half cycle. Authors suggest setting step_size
2-8 x training iterations in epoch.
mode: one of {triangular, triangular2, exp_range}.
Default 'triangular'.
Values correspond to policies detailed above.
If scale_fn is not None, this argument is ignored.
gamma: constant in 'exp_range' scaling function:
gamma**(cycle iterations)
scale_fn: Custom scaling policy defined by a single
argument lambda function, where
0 <= scale_fn(x) <= 1 for all x >= 0.
            mode parameter is ignored
scale_mode: {'cycle', 'iterations'}.
Defines whether scale_fn is evaluated on
cycle number or cycle iterations (training
iterations since start of cycle). Default is 'cycle'.
"""
def __init__(self, base_lr=0.001, max_lr=0.006, step_size=2000., mode='triangular',
gamma=1., scale_fn=None, scale_mode='cycle'):
super(CyclicLR, self).__init__()
self.base_lr = base_lr
self.max_lr = max_lr
self.step_size = step_size
self.mode = mode
self.gamma = gamma
        if scale_fn is None:
if self.mode == 'triangular':
self.scale_fn = lambda x: 1.
self.scale_mode = 'cycle'
elif self.mode == 'triangular2':
self.scale_fn = lambda x: 1/(2.**(x-1))
self.scale_mode = 'cycle'
elif self.mode == 'exp_range':
self.scale_fn = lambda x: gamma**(x)
self.scale_mode = 'iterations'
else:
self.scale_fn = scale_fn
self.scale_mode = scale_mode
self.clr_iterations = 0.
self.trn_iterations = 0.
self.history = {}
self._reset()
def _reset(self, new_base_lr=None, new_max_lr=None,
new_step_size=None):
"""Resets cycle iterations.
Optional boundary/step size adjustment.
"""
        if new_base_lr is not None:
            self.base_lr = new_base_lr
        if new_max_lr is not None:
            self.max_lr = new_max_lr
        if new_step_size is not None:
            self.step_size = new_step_size
self.clr_iterations = 0.
def clr(self):
cycle = np.floor(1+self.clr_iterations/(2*self.step_size))
x = np.abs(self.clr_iterations/self.step_size - 2*cycle + 1)
if self.scale_mode == 'cycle':
return self.base_lr + (self.max_lr-self.base_lr)*np.maximum(0, (1-x))*self.scale_fn(cycle)
else:
return self.base_lr + (self.max_lr-self.base_lr)*np.maximum(0, (1-x))*self.scale_fn(self.clr_iterations)
def on_train_begin(self, logs={}):
logs = logs or {}
if self.clr_iterations == 0:
K.set_value(self.model.optimizer.lr, self.base_lr)
else:
K.set_value(self.model.optimizer.lr, self.clr())
def on_batch_end(self, epoch, logs=None):
logs = logs or {}
self.trn_iterations += 1
self.clr_iterations += 1
self.history.setdefault('lr', []).append(K.get_value(self.model.optimizer.lr))
self.history.setdefault('iterations', []).append(self.trn_iterations)
for k, v in logs.items():
self.history.setdefault(k, []).append(v)
        K.set_value(self.model.optimizer.lr, self.clr())
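# The cyclical learning-rate arithmetic in clr() can be checked outside
# of Keras. A small sketch of the default "triangular" policy (stdlib
# math only; the parameter values are the class defaults, and no
# optimizer or session is involved):

```python
import math

def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000.0):
    # Same arithmetic as CyclicLR.clr() when scale_fn(x) == 1 (the
    # "triangular" mode): lr rises linearly from base_lr to max_lr over
    # step_size iterations, falls back down, then the cycle repeats.
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

lr_start = triangular_clr(0)     # base_lr, bottom of the triangle
lr_peak = triangular_clr(2000)   # max_lr, top of the triangle
```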

# File: setup.py | repo: lozuponelab/q2-SCNIC | license: BSD-3-Clause
from setuptools import find_packages, setup
import versioneer
setup(
name='q2-SCNIC',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
license='BSD-3-Clause',
packages=find_packages(),
author="Michael Shaffer, Kumar Thurimella",
author_email="lozuponelab.dev@olucdenver.onmicrosoft.com",
description=(
"QIIME2 plugin for using SCNIC."),
url="https://github.com/lozuponelab/q2-SCNIC",
package_data={
'q2_SCNIC': ['citations.bib']
},
entry_points={
'qiime2.plugins':
['q2-SCNIC=q2_SCNIC.plugin_setup:plugin']
}
)

# File: codes_/1217_Minimum_Cost_to_Move_Chips_to_The_Same_Position.py | repo: SaitoTsutomu/leetcode | license: MIT
# %% [1217. Minimum Cost to Move Chips to The Same Position](https://leetcode.com/problems/minimum-cost-to-move-chips-to-the-same-position/)
from typing import List

class Solution:
def minCostToMoveChips(self, position: List[int]) -> int:
res = [0, 0]
for i in position:
res[i % 2] += 1
return min(res)
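# The solution above counts chip positions by parity: moves of 2 are
# free, so only the smaller parity group ever pays. A standalone sketch
# of the same idea outside the LeetCode harness:

```python
def min_cost_to_move_chips(position):
    # Moving a chip by 2 costs nothing, moving by 1 costs 1, so each chip
    # is effectively just "even" or "odd"; relocate the smaller group.
    counts = [0, 0]
    for p in position:
        counts[p % 2] += 1
    return min(counts)

answer = min_cost_to_move_chips([2, 2, 2, 3, 3])  # -> 2 (the two odd chips each pay 1)
```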

# File: aim/artifacts/utils.py | repo: jaekyeom/aim | license: Apache-2.0
from typing import Tuple, Any
from aim.engine.utils import get_module
def get_pt_tensor(t):
if hasattr(t, 'is_cuda') and t.is_cuda:
return t.cpu()
return t
def get_unique(a):
np = get_module('numpy')
s = set()
unique = []
for element in a:
if element not in s:
unique.append(element)
s.add(element)
return np.array(unique)
def validate_dict(item, key_types: tuple, value_types: tuple,
none_type: bool = True, depth: int = 1) -> Tuple[bool, Any]:
if not isinstance(item, value_types) \
and not (isinstance(item, dict) and depth == 1):
if item is None and none_type:
return True, None
return False, item
if isinstance(item, (list, tuple, set)):
for i in item:
res, res_i = validate_dict(i, key_types, value_types,
none_type, depth)
if not res:
return res, res_i
if isinstance(item, dict):
depth += 1
for k, v in item.items():
if not isinstance(k, key_types):
return False, k
res, res_i = validate_dict(v, key_types, value_types,
none_type, depth)
if not res:
return res, res_i
return True, None
class TfUtils:
# FIXME: statics as properties, and __init__(sess)
@staticmethod
def get_tf_t_vars(sess):
"""Returns all trainable variables in the tf.session"""
return sess.graph.get_collection("trainable_variables")
@staticmethod
def get_tf_t_vals(sess):
"""Returns all trainable values (parameters) in the tf.session"""
return sess.run(
TfUtils.get_tf_t_vars(sess)
)
@staticmethod
def _is_op_defined(t_vars) -> bool:
"""Checks whether trainable variables are tf.Variables"""
return all(t_var.name.startswith('Variable') for t_var in t_vars)
@staticmethod
def get_vals_hist(t_vals, num_bin):
"""Creates and returns hist"""
np = get_module('numpy')
t_vals_hist = np.histogram(t_vals, num_bin)
return [t_vals_hist[0].tolist(),
t_vals_hist[1].tolist(),
]
@staticmethod
def get_layers(t_vars):
"""Return the names of layers in net."""
if TfUtils._is_op_defined(t_vars):
return [t_var.name for t_var in t_vars][: len(t_vars) // 2]
return get_unique([t_var.name.split('/')[0]
for t_var in t_vars
if '/' in t_var.name])
@staticmethod
def get_weights(t_vars, sess):
"""Given the session and trainable variables, returns weights"""
if TfUtils._is_op_defined(t_vars):
num_of_layers = len(TfUtils.get_layers(t_vars))
return [sess.run(t_var) for t_var in t_vars[:num_of_layers]]
return [sess.run(t_var) for t_var in t_vars if 'kernel' in t_var.name]
@staticmethod
def get_biases(t_vars, sess):
"""Given the seesion and trainable variables, returns biases"""
if TfUtils._is_op_defined(t_vars):
num_of_layers = len(TfUtils.get_layers(t_vars))
return [sess.run(t_var) for t_var in t_vars[num_of_layers:]]
return [sess.run(t_var) for t_var in t_vars if "bias" in t_var.name]
# TODO: Move to SDK
# class CheckpointCallback(tf.keras.callbacks.Callback):
# """
# Custom callback for tracking checkpoints in Keras models.
# """
#
# def __init__(self, name, checkpoint_name, meta):
# super(CheckpointCallback, self).__init__()
# self.name = name
# self.checkpoint_name = checkpoint_name
# self.meta = meta
#
# def on_epoch_end(self, epoch, logs=None):
# """Tracks checkpoint at the end of each epoch"""
# if '{e}' in self.checkpoint_name:
# checkpoint_name = self.checkpoint_name.format(e=epoch)
# else:
# checkpoint_name = '{e}-{n}'.format(n=self.checkpoint_name,
# e=epoch)
# track(checkpoint, self.name, checkpoint_name,
# self.model, epoch, meta=self.meta)
| 33.304688 | 78 | 0.585972 |
30f5fc716de67f91b67ebb1226172311bfe8748e | 78 | py | Python | lunch/vegetable.py | maruf212000/Python_Assignment_3 | dfedb06ea5f73475c51467577622cb63f8f3888e | [
"MIT"
] | null | null | null | lunch/vegetable.py | maruf212000/Python_Assignment_3 | dfedb06ea5f73475c51467577622cb63f8f3888e | [
"MIT"
] | null | null | null | lunch/vegetable.py | maruf212000/Python_Assignment_3 | dfedb06ea5f73475c51467577622cb63f8f3888e | [
"MIT"
] | null | null | null | def potato():
print("I am Potato")
def tomato():
print("I am Tomato")
| 15.6 | 24 | 0.589744 |
e42ebd888d124077de05faf77d5b9db8983d936c | 2,520 | py | Python | src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py | WERimagin/transformers | cc7d14511c647f8147494df72f8b0575015e37ab | [
"Apache-2.0"
] | 47 | 2021-04-16T22:29:25.000Z | 2022-02-11T08:19:13.000Z | src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py | WERimagin/transformers | cc7d14511c647f8147494df72f8b0575015e37ab | [
"Apache-2.0"
] | 15 | 2021-03-16T08:13:25.000Z | 2022-02-01T12:22:33.000Z | src/transformers/convert_gpt2_original_tf_checkpoint_to_pytorch.py | WERimagin/transformers | cc7d14511c647f8147494df72f8b0575015e37ab | [
"Apache-2.0"
] | 7 | 2021-08-24T09:50:44.000Z | 2022-02-23T13:55:28.000Z | # coding=utf-8
# Copyright 2018 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Convert OpenAI GPT checkpoint."""
import argparse
import torch
from transformers import CONFIG_NAME, WEIGHTS_NAME, GPT2Config, GPT2Model, load_tf_weights_in_gpt2
from transformers.utils import logging
logging.set_verbosity_info()
def convert_gpt2_checkpoint_to_pytorch(gpt2_checkpoint_path, gpt2_config_file, pytorch_dump_folder_path):
# Construct model
if gpt2_config_file == "":
config = GPT2Config()
else:
config = GPT2Config.from_json_file(gpt2_config_file)
model = GPT2Model(config)
# Load weights from numpy
load_tf_weights_in_gpt2(model, config, gpt2_checkpoint_path)
# Save pytorch-model
pytorch_weights_dump_path = pytorch_dump_folder_path + "/" + WEIGHTS_NAME
pytorch_config_dump_path = pytorch_dump_folder_path + "/" + CONFIG_NAME
print("Save PyTorch model to {}".format(pytorch_weights_dump_path))
torch.save(model.state_dict(), pytorch_weights_dump_path)
print("Save configuration file to {}".format(pytorch_config_dump_path))
with open(pytorch_config_dump_path, "w", encoding="utf-8") as f:
f.write(config.to_json_string())
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument(
"--gpt2_checkpoint_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path."
)
parser.add_argument(
"--pytorch_dump_folder_path", default=None, type=str, required=True, help="Path to the output PyTorch model."
)
parser.add_argument(
"--gpt2_config_file",
default="",
type=str,
help="An optional config json file corresponding to the pre-trained OpenAI model. \n"
"This specifies the model architecture.",
)
args = parser.parse_args()
convert_gpt2_checkpoint_to_pytorch(args.gpt2_checkpoint_path, args.gpt2_config_file, args.pytorch_dump_folder_path)
| 37.058824 | 119 | 0.739683 |
7a5756ff1d6f81a2ab5b48f2099991ab08cd00e5 | 6,965 | py | Python | third-party-synthetic/third-party-tester/reporting/resultsreporter.py | dlopes7/dynatrace-api | 5ed061b5542eea9cee8cfd5dab81d91fc891f2f5 | [
"BSD-3-Clause"
] | 80 | 2016-09-19T21:06:50.000Z | 2022-03-31T06:34:29.000Z | third-party-synthetic/third-party-tester/reporting/resultsreporter.py | dlopes7/dynatrace-api | 5ed061b5542eea9cee8cfd5dab81d91fc891f2f5 | [
"BSD-3-Clause"
] | 36 | 2018-01-29T06:33:10.000Z | 2022-03-07T08:05:56.000Z | third-party-synthetic/third-party-tester/reporting/resultsreporter.py | dlopes7/dynatrace-api | 5ed061b5542eea9cee8cfd5dab81d91fc891f2f5 | [
"BSD-3-Clause"
] | 70 | 2017-01-30T09:42:18.000Z | 2022-03-24T18:57:35.000Z | from reporting.api_constants import ApiConstants
from datetime import datetime
import requests
import logging
import hashlib
class ResultsReporter:
"""A class responsible for sending test results to Dynatrace."""
def __init__(self, api_url, api_token, schedule_interval, location_id,
location_name, engine_name='Custom Python script'):
"""
Initialize ResultsReporter class.
Args:
api_url: Dynatrace API endpoint FQDN for sending test results
api_token: Dynatrace API token
            schedule_interval: Seconds between consecutive test runs
location_id: ID of location tests are run on
location_name: Name of used location tests are run on
engine_name: Name of the engine displayed in UI (default 'Custom Python script')
"""
self.logger = logging.getLogger(__name__)
self.api_url = api_url
self.api_token = api_token
self.schedule_interval = schedule_interval
self.location_id = location_id
self.location_name = location_name
self.engine_name = engine_name
def send_result_of(self, test):
"""
Send test result to Dynatrace
Args:
test: Test object which result should be sent
"""
result_report = self._prepare_report(test)
test_result_consumer_endpoint = "{api_url}?Api-Token={api_token}".format(
api_url=self.api_url, api_token=self.api_token
)
try:
self.logger.info("Sending test results to {api_url} started".format(api_url=self.api_url))
response = requests.post(test_result_consumer_endpoint, json=result_report)
if not response.ok:
raise RuntimeError(
"HTTP status code: {code}, message: {response_body}"
.format(code=response.status_code, response_body=response.text)
)
self.logger.info("Sending test results to {api_url} finished".format(api_url=self.api_url))
except Exception as e:
self.logger.error(
"Error while sending results to {api_url}: {message}"
.format(api_url=self.api_url, message=str(e))
)
def _prepare_report(self, test):
"""Prepare a valid Dynatrace API call.
For internal use only.
"""
result_report = dict()
result_report[ApiConstants.MESSAGE_TIMESTAMP] = self._convert_datetime_to_milliseconds(datetime.now())
result_report[ApiConstants.SYNTHETIC_ENGINE_NAME] = self.engine_name
result_report[
ApiConstants.SYNTHETIC_ENGINE_ICON_URL] = "http://assets.dynatrace.com/global/icons/cpu_processor.png"
result_report[ApiConstants.LOCATIONS] = self._get_locations()
result_report[ApiConstants.TESTS] = self._get_tests(test)
result_report[ApiConstants.TEST_RESULTS] = self._get_test_results(test)
return result_report
def _get_locations(self):
"""Return a representation of a list of test locations.
For internal use only.
"""
locations = [{ApiConstants.Locations.ID: str(self.location_id),
ApiConstants.Locations.NAME: self.location_name}]
return locations
def _get_tests(self, test):
"""Return a representation of a list of tests.
For internal use only.
"""
tests = [{
ApiConstants.Tests.ID: self._make_test_id(test),
ApiConstants.Tests.TITLE: test.dynatrace_test_name,
ApiConstants.Tests.TEST_SETUP: self.engine_name,
ApiConstants.Tests.ENABLED: True,
ApiConstants.Tests.LOCATIONS: self._get_test_locations(),
ApiConstants.Tests.STEPS: self._get_test_steps(test),
ApiConstants.Tests.SCHEDULE_INTERVAL_IN_SECONDS: self.schedule_interval
}]
return tests
def _get_test_locations(self):
"""Return a representation of a list of test locations.
For internal use only.
"""
locations = [{
ApiConstants.Tests.Locations.ID: self.location_id,
ApiConstants.Tests.Locations.ENABLED: True
}]
return locations
def _get_test_steps(self, test):
"""Return a representation of a list of test steps.
For internal use only.
"""
steps = [{
ApiConstants.Tests.Steps.ID: step_number,
ApiConstants.Tests.Steps.TITLE: step.name
} for step_number, step in enumerate(test.steps, 1)]
return steps
def _get_test_results(self, test):
"""Return a representation of a list of test results.
For internal use only.
"""
test_results = [{
ApiConstants.TestResults.ID: self._make_test_id(test),
ApiConstants.TestResults.SCHEDULE_INTERVAL_IN_SECONDS: self.schedule_interval,
ApiConstants.TestResults.TOTAL_STEP_COUNT: 1,
ApiConstants.TestResults.LOCATION_RESULTS: self._get_location_results(test)
}]
return test_results
def _get_location_results(self, test):
"""Return a representation of a list of test location results.
For internal use only.
"""
location_results = [{
ApiConstants.TestResults.LocationResults.ID: self.location_id,
ApiConstants.TestResults.LocationResults.START_TIMESTAMP:
self._convert_datetime_to_milliseconds(test.start_timestamp),
ApiConstants.TestResults.LocationResults.SUCCESS: all([step.successful for step in test.steps]),
ApiConstants.TestResults.LocationResults.STEP_RESULTS: self._get_step_results(test)
}]
return location_results
def _get_step_results(self, test):
"""Return a representation of a list of step results.
For internal use only.
"""
step_results = [{
ApiConstants.TestResults.LocationResults.StepResults.ID: step_number,
ApiConstants.TestResults.LocationResults.StepResults.START_TIMESTAMP:
self._convert_datetime_to_milliseconds(step.start_timestamp) if step.start_timestamp is not None
else self._convert_datetime_to_milliseconds(test.start_timestamp),
ApiConstants.TestResults.LocationResults.StepResults.RESPONSE_TIME_MILLIS: int(
step.duration.total_seconds() * 1000) if step.duration is not None else None
} for step_number, step in enumerate(test.steps, 1)]
return step_results
def _convert_datetime_to_milliseconds(self, timestamp):
"""Convert a timestamp in seconds to an integer timestamp in milliseconds.
For internal use only.
"""
return int(timestamp.timestamp() * 1000)
def _make_test_id(self, test):
return hashlib.sha256(str.encode('{title}'.format(title=test.dynatrace_test_name))).hexdigest()
| 37.446237 | 114 | 0.655276 |
6635a8c202d46b58db3f36f1973eab2a165707db | 64,749 | py | Python | sdk/python/pulumi_artifactory/local_ivy_repository.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_artifactory/local_ivy_repository.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_artifactory/local_ivy_repository.py | pulumi/terraform-provider-artifactory | 4f217f2e6bc2f7e5395a148cd3b3b7b5aaa66372 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['LocalIvyRepositoryArgs', 'LocalIvyRepository']
@pulumi.input_type
class LocalIvyRepositoryArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
archive_browsing_enabled: Optional[pulumi.Input[bool]] = None,
blacked_out: Optional[pulumi.Input[bool]] = None,
checksum_policy_type: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
download_direct: Optional[pulumi.Input[bool]] = None,
excludes_pattern: Optional[pulumi.Input[str]] = None,
handle_releases: Optional[pulumi.Input[bool]] = None,
handle_snapshots: Optional[pulumi.Input[bool]] = None,
includes_pattern: Optional[pulumi.Input[str]] = None,
max_unique_snapshots: Optional[pulumi.Input[int]] = None,
notes: Optional[pulumi.Input[str]] = None,
priority_resolution: Optional[pulumi.Input[bool]] = None,
project_environments: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
project_key: Optional[pulumi.Input[str]] = None,
property_sets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
repo_layout_ref: Optional[pulumi.Input[str]] = None,
snapshot_version_behavior: Optional[pulumi.Input[str]] = None,
suppress_pom_consistency_checks: Optional[pulumi.Input[bool]] = None,
xray_index: Optional[pulumi.Input[bool]] = None):
"""
The set of arguments for constructing a LocalIvyRepository resource.
:param pulumi.Input[str] key: the identity key of the repo.
:param pulumi.Input[bool] archive_browsing_enabled: When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
:param pulumi.Input[bool] blacked_out: When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
:param pulumi.Input[str] checksum_policy_type: Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
:param pulumi.Input[bool] download_direct: When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
:param pulumi.Input[str] excludes_pattern: List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
:param pulumi.Input[bool] handle_releases: If set, Artifactory allows you to deploy release artifacts into this repository.
:param pulumi.Input[bool] handle_snapshots: If set, Artifactory allows you to deploy snapshot artifacts into this repository.
:param pulumi.Input[str] includes_pattern: List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
:param pulumi.Input[int] max_unique_snapshots: The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
:param pulumi.Input[bool] priority_resolution: Setting repositories with priority will cause metadata to be merged only from repositories set with this field
:param pulumi.Input[Sequence[pulumi.Input[str]]] project_environments: Project environment for assigning this repository to. Allow values: "DEV" or "PROD"
:param pulumi.Input[str] project_key: Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
:param pulumi.Input[Sequence[pulumi.Input[str]]] property_sets: List of property set name
:param pulumi.Input[str] repo_layout_ref: Repository layout key for the local repository
:param pulumi.Input[str] snapshot_version_behavior: Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
:param pulumi.Input[bool] suppress_pom_consistency_checks: By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
:param pulumi.Input[bool] xray_index: Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
pulumi.set(__self__, "key", key)
if archive_browsing_enabled is not None:
pulumi.set(__self__, "archive_browsing_enabled", archive_browsing_enabled)
if blacked_out is not None:
pulumi.set(__self__, "blacked_out", blacked_out)
if checksum_policy_type is not None:
pulumi.set(__self__, "checksum_policy_type", checksum_policy_type)
if description is not None:
pulumi.set(__self__, "description", description)
if download_direct is not None:
pulumi.set(__self__, "download_direct", download_direct)
if excludes_pattern is not None:
pulumi.set(__self__, "excludes_pattern", excludes_pattern)
if handle_releases is not None:
pulumi.set(__self__, "handle_releases", handle_releases)
if handle_snapshots is not None:
pulumi.set(__self__, "handle_snapshots", handle_snapshots)
if includes_pattern is not None:
pulumi.set(__self__, "includes_pattern", includes_pattern)
if max_unique_snapshots is not None:
pulumi.set(__self__, "max_unique_snapshots", max_unique_snapshots)
if notes is not None:
pulumi.set(__self__, "notes", notes)
if priority_resolution is not None:
pulumi.set(__self__, "priority_resolution", priority_resolution)
if project_environments is not None:
pulumi.set(__self__, "project_environments", project_environments)
if project_key is not None:
pulumi.set(__self__, "project_key", project_key)
if property_sets is not None:
pulumi.set(__self__, "property_sets", property_sets)
if repo_layout_ref is not None:
pulumi.set(__self__, "repo_layout_ref", repo_layout_ref)
if snapshot_version_behavior is not None:
pulumi.set(__self__, "snapshot_version_behavior", snapshot_version_behavior)
if suppress_pom_consistency_checks is not None:
pulumi.set(__self__, "suppress_pom_consistency_checks", suppress_pom_consistency_checks)
if xray_index is not None:
pulumi.set(__self__, "xray_index", xray_index)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
"""
the identity key of the repo.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter(name="archiveBrowsingEnabled")
def archive_browsing_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
"""
return pulumi.get(self, "archive_browsing_enabled")
@archive_browsing_enabled.setter
def archive_browsing_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "archive_browsing_enabled", value)
@property
@pulumi.getter(name="blackedOut")
def blacked_out(self) -> Optional[pulumi.Input[bool]]:
"""
When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
"""
return pulumi.get(self, "blacked_out")
@blacked_out.setter
def blacked_out(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "blacked_out", value)
@property
@pulumi.getter(name="checksumPolicyType")
def checksum_policy_type(self) -> Optional[pulumi.Input[str]]:
"""
Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
"""
return pulumi.get(self, "checksum_policy_type")
@checksum_policy_type.setter
def checksum_policy_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "checksum_policy_type", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="downloadDirect")
def download_direct(self) -> Optional[pulumi.Input[bool]]:
"""
When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
"""
return pulumi.get(self, "download_direct")
@download_direct.setter
def download_direct(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "download_direct", value)
@property
@pulumi.getter(name="excludesPattern")
def excludes_pattern(self) -> Optional[pulumi.Input[str]]:
"""
List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
"""
return pulumi.get(self, "excludes_pattern")
@excludes_pattern.setter
def excludes_pattern(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "excludes_pattern", value)
@property
@pulumi.getter(name="handleReleases")
def handle_releases(self) -> Optional[pulumi.Input[bool]]:
"""
If set, Artifactory allows you to deploy release artifacts into this repository.
"""
return pulumi.get(self, "handle_releases")
@handle_releases.setter
def handle_releases(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "handle_releases", value)
@property
@pulumi.getter(name="handleSnapshots")
def handle_snapshots(self) -> Optional[pulumi.Input[bool]]:
"""
If set, Artifactory allows you to deploy snapshot artifacts into this repository.
"""
return pulumi.get(self, "handle_snapshots")
@handle_snapshots.setter
def handle_snapshots(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "handle_snapshots", value)
@property
@pulumi.getter(name="includesPattern")
def includes_pattern(self) -> Optional[pulumi.Input[str]]:
"""
List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
"""
return pulumi.get(self, "includes_pattern")
@includes_pattern.setter
def includes_pattern(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "includes_pattern", value)
@property
@pulumi.getter(name="maxUniqueSnapshots")
def max_unique_snapshots(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
"""
return pulumi.get(self, "max_unique_snapshots")
@max_unique_snapshots.setter
def max_unique_snapshots(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_unique_snapshots", value)
@property
@pulumi.getter
def notes(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "notes")
@notes.setter
def notes(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "notes", value)
@property
@pulumi.getter(name="priorityResolution")
def priority_resolution(self) -> Optional[pulumi.Input[bool]]:
"""
Setting repositories with priority will cause metadata to be merged only from repositories set with this field
"""
return pulumi.get(self, "priority_resolution")
@priority_resolution.setter
def priority_resolution(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "priority_resolution", value)
@property
@pulumi.getter(name="projectEnvironments")
def project_environments(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Project environment for assigning this repository to. Allow values: "DEV" or "PROD"
"""
return pulumi.get(self, "project_environments")
@project_environments.setter
def project_environments(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "project_environments", value)
@property
@pulumi.getter(name="projectKey")
def project_key(self) -> Optional[pulumi.Input[str]]:
"""
Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
"""
return pulumi.get(self, "project_key")
@project_key.setter
def project_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project_key", value)
@property
@pulumi.getter(name="propertySets")
def property_sets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of property set name
"""
return pulumi.get(self, "property_sets")
@property_sets.setter
def property_sets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "property_sets", value)
@property
@pulumi.getter(name="repoLayoutRef")
def repo_layout_ref(self) -> Optional[pulumi.Input[str]]:
"""
Repository layout key for the local repository
"""
return pulumi.get(self, "repo_layout_ref")
@repo_layout_ref.setter
def repo_layout_ref(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "repo_layout_ref", value)
@property
@pulumi.getter(name="snapshotVersionBehavior")
def snapshot_version_behavior(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
"""
return pulumi.get(self, "snapshot_version_behavior")
@snapshot_version_behavior.setter
def snapshot_version_behavior(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "snapshot_version_behavior", value)
@property
@pulumi.getter(name="suppressPomConsistencyChecks")
def suppress_pom_consistency_checks(self) -> Optional[pulumi.Input[bool]]:
"""
By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
"""
return pulumi.get(self, "suppress_pom_consistency_checks")
@suppress_pom_consistency_checks.setter
def suppress_pom_consistency_checks(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "suppress_pom_consistency_checks", value)
@property
@pulumi.getter(name="xrayIndex")
def xray_index(self) -> Optional[pulumi.Input[bool]]:
"""
Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
return pulumi.get(self, "xray_index")
@xray_index.setter
def xray_index(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "xray_index", value)
@pulumi.input_type
class _LocalIvyRepositoryState:
def __init__(__self__, *,
archive_browsing_enabled: Optional[pulumi.Input[bool]] = None,
blacked_out: Optional[pulumi.Input[bool]] = None,
checksum_policy_type: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
download_direct: Optional[pulumi.Input[bool]] = None,
excludes_pattern: Optional[pulumi.Input[str]] = None,
handle_releases: Optional[pulumi.Input[bool]] = None,
handle_snapshots: Optional[pulumi.Input[bool]] = None,
includes_pattern: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None,
max_unique_snapshots: Optional[pulumi.Input[int]] = None,
notes: Optional[pulumi.Input[str]] = None,
package_type: Optional[pulumi.Input[str]] = None,
priority_resolution: Optional[pulumi.Input[bool]] = None,
project_environments: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
project_key: Optional[pulumi.Input[str]] = None,
property_sets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
repo_layout_ref: Optional[pulumi.Input[str]] = None,
snapshot_version_behavior: Optional[pulumi.Input[str]] = None,
suppress_pom_consistency_checks: Optional[pulumi.Input[bool]] = None,
xray_index: Optional[pulumi.Input[bool]] = None):
"""
Input properties used for looking up and filtering LocalIvyRepository resources.
:param pulumi.Input[bool] archive_browsing_enabled: When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
:param pulumi.Input[bool] blacked_out: When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
:param pulumi.Input[str] checksum_policy_type: Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
:param pulumi.Input[bool] download_direct: When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
:param pulumi.Input[str] excludes_pattern: List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
:param pulumi.Input[bool] handle_releases: If set, Artifactory allows you to deploy release artifacts into this repository.
:param pulumi.Input[bool] handle_snapshots: If set, Artifactory allows you to deploy snapshot artifacts into this repository.
:param pulumi.Input[str] includes_pattern: List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
:param pulumi.Input[str] key: the identity key of the repo.
:param pulumi.Input[int] max_unique_snapshots: The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
:param pulumi.Input[bool] priority_resolution: Setting repositories with priority will cause metadata to be merged only from repositories set with this field
:param pulumi.Input[Sequence[pulumi.Input[str]]] project_environments: Project environment for assigning this repository to. Allow values: "DEV" or "PROD"
:param pulumi.Input[str] project_key: Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
:param pulumi.Input[Sequence[pulumi.Input[str]]] property_sets: List of property set name
:param pulumi.Input[str] repo_layout_ref: Repository layout key for the local repository
:param pulumi.Input[str] snapshot_version_behavior: Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
:param pulumi.Input[bool] suppress_pom_consistency_checks: By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
:param pulumi.Input[bool] xray_index: Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
if archive_browsing_enabled is not None:
pulumi.set(__self__, "archive_browsing_enabled", archive_browsing_enabled)
if blacked_out is not None:
pulumi.set(__self__, "blacked_out", blacked_out)
if checksum_policy_type is not None:
pulumi.set(__self__, "checksum_policy_type", checksum_policy_type)
if description is not None:
pulumi.set(__self__, "description", description)
if download_direct is not None:
pulumi.set(__self__, "download_direct", download_direct)
if excludes_pattern is not None:
pulumi.set(__self__, "excludes_pattern", excludes_pattern)
if handle_releases is not None:
pulumi.set(__self__, "handle_releases", handle_releases)
if handle_snapshots is not None:
pulumi.set(__self__, "handle_snapshots", handle_snapshots)
if includes_pattern is not None:
pulumi.set(__self__, "includes_pattern", includes_pattern)
if key is not None:
pulumi.set(__self__, "key", key)
if max_unique_snapshots is not None:
pulumi.set(__self__, "max_unique_snapshots", max_unique_snapshots)
if notes is not None:
pulumi.set(__self__, "notes", notes)
if package_type is not None:
pulumi.set(__self__, "package_type", package_type)
if priority_resolution is not None:
pulumi.set(__self__, "priority_resolution", priority_resolution)
if project_environments is not None:
pulumi.set(__self__, "project_environments", project_environments)
if project_key is not None:
pulumi.set(__self__, "project_key", project_key)
if property_sets is not None:
pulumi.set(__self__, "property_sets", property_sets)
if repo_layout_ref is not None:
pulumi.set(__self__, "repo_layout_ref", repo_layout_ref)
if snapshot_version_behavior is not None:
pulumi.set(__self__, "snapshot_version_behavior", snapshot_version_behavior)
if suppress_pom_consistency_checks is not None:
pulumi.set(__self__, "suppress_pom_consistency_checks", suppress_pom_consistency_checks)
if xray_index is not None:
pulumi.set(__self__, "xray_index", xray_index)
@property
@pulumi.getter(name="archiveBrowsingEnabled")
def archive_browsing_enabled(self) -> Optional[pulumi.Input[bool]]:
"""
When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
"""
return pulumi.get(self, "archive_browsing_enabled")
@archive_browsing_enabled.setter
def archive_browsing_enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "archive_browsing_enabled", value)
@property
@pulumi.getter(name="blackedOut")
def blacked_out(self) -> Optional[pulumi.Input[bool]]:
"""
When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
"""
return pulumi.get(self, "blacked_out")
@blacked_out.setter
def blacked_out(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "blacked_out", value)
@property
@pulumi.getter(name="checksumPolicyType")
def checksum_policy_type(self) -> Optional[pulumi.Input[str]]:
"""
Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
"""
return pulumi.get(self, "checksum_policy_type")
@checksum_policy_type.setter
def checksum_policy_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "checksum_policy_type", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="downloadDirect")
def download_direct(self) -> Optional[pulumi.Input[bool]]:
"""
When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
"""
return pulumi.get(self, "download_direct")
@download_direct.setter
def download_direct(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "download_direct", value)
@property
@pulumi.getter(name="excludesPattern")
def excludes_pattern(self) -> Optional[pulumi.Input[str]]:
"""
List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
"""
return pulumi.get(self, "excludes_pattern")
@excludes_pattern.setter
def excludes_pattern(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "excludes_pattern", value)
@property
@pulumi.getter(name="handleReleases")
def handle_releases(self) -> Optional[pulumi.Input[bool]]:
"""
If set, Artifactory allows you to deploy release artifacts into this repository.
"""
return pulumi.get(self, "handle_releases")
@handle_releases.setter
def handle_releases(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "handle_releases", value)
@property
@pulumi.getter(name="handleSnapshots")
def handle_snapshots(self) -> Optional[pulumi.Input[bool]]:
"""
If set, Artifactory allows you to deploy snapshot artifacts into this repository.
"""
return pulumi.get(self, "handle_snapshots")
@handle_snapshots.setter
def handle_snapshots(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "handle_snapshots", value)
@property
@pulumi.getter(name="includesPattern")
def includes_pattern(self) -> Optional[pulumi.Input[str]]:
"""
List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
"""
return pulumi.get(self, "includes_pattern")
@includes_pattern.setter
def includes_pattern(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "includes_pattern", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
the identity key of the repo.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter(name="maxUniqueSnapshots")
def max_unique_snapshots(self) -> Optional[pulumi.Input[int]]:
"""
The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
"""
return pulumi.get(self, "max_unique_snapshots")
@max_unique_snapshots.setter
def max_unique_snapshots(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_unique_snapshots", value)
@property
@pulumi.getter
def notes(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "notes")
@notes.setter
def notes(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "notes", value)
@property
@pulumi.getter(name="packageType")
def package_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "package_type")
@package_type.setter
def package_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "package_type", value)
@property
@pulumi.getter(name="priorityResolution")
def priority_resolution(self) -> Optional[pulumi.Input[bool]]:
"""
Setting repositories with priority will cause metadata to be merged only from repositories set with this field
"""
return pulumi.get(self, "priority_resolution")
@priority_resolution.setter
def priority_resolution(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "priority_resolution", value)
@property
@pulumi.getter(name="projectEnvironments")
def project_environments(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Project environment for assigning this repository to. Allow values: "DEV" or "PROD"
"""
return pulumi.get(self, "project_environments")
@project_environments.setter
def project_environments(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "project_environments", value)
@property
@pulumi.getter(name="projectKey")
def project_key(self) -> Optional[pulumi.Input[str]]:
"""
Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
"""
return pulumi.get(self, "project_key")
@project_key.setter
def project_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project_key", value)
@property
@pulumi.getter(name="propertySets")
def property_sets(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of property set name
"""
return pulumi.get(self, "property_sets")
@property_sets.setter
def property_sets(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "property_sets", value)
@property
@pulumi.getter(name="repoLayoutRef")
def repo_layout_ref(self) -> Optional[pulumi.Input[str]]:
"""
Repository layout key for the local repository
"""
return pulumi.get(self, "repo_layout_ref")
@repo_layout_ref.setter
def repo_layout_ref(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "repo_layout_ref", value)
@property
@pulumi.getter(name="snapshotVersionBehavior")
def snapshot_version_behavior(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
"""
return pulumi.get(self, "snapshot_version_behavior")
@snapshot_version_behavior.setter
def snapshot_version_behavior(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "snapshot_version_behavior", value)
@property
@pulumi.getter(name="suppressPomConsistencyChecks")
def suppress_pom_consistency_checks(self) -> Optional[pulumi.Input[bool]]:
"""
By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
"""
return pulumi.get(self, "suppress_pom_consistency_checks")
@suppress_pom_consistency_checks.setter
def suppress_pom_consistency_checks(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "suppress_pom_consistency_checks", value)
@property
@pulumi.getter(name="xrayIndex")
def xray_index(self) -> Optional[pulumi.Input[bool]]:
"""
Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
return pulumi.get(self, "xray_index")
@xray_index.setter
def xray_index(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "xray_index", value)
class LocalIvyRepository(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
archive_browsing_enabled: Optional[pulumi.Input[bool]] = None,
blacked_out: Optional[pulumi.Input[bool]] = None,
checksum_policy_type: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
download_direct: Optional[pulumi.Input[bool]] = None,
excludes_pattern: Optional[pulumi.Input[str]] = None,
handle_releases: Optional[pulumi.Input[bool]] = None,
handle_snapshots: Optional[pulumi.Input[bool]] = None,
includes_pattern: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None,
max_unique_snapshots: Optional[pulumi.Input[int]] = None,
notes: Optional[pulumi.Input[str]] = None,
priority_resolution: Optional[pulumi.Input[bool]] = None,
project_environments: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
project_key: Optional[pulumi.Input[str]] = None,
property_sets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
repo_layout_ref: Optional[pulumi.Input[str]] = None,
snapshot_version_behavior: Optional[pulumi.Input[str]] = None,
suppress_pom_consistency_checks: Optional[pulumi.Input[bool]] = None,
xray_index: Optional[pulumi.Input[bool]] = None,
__props__=None):
"""
Creates a local Ivy repository.
## Example Usage
```python
import pulumi
import pulumi_artifactory as artifactory
terraform_local_test_ivy_repo = artifactory.LocalIvyRepository("terraform-local-test-ivy-repo", key="terraform-local-test-ivy-repo")
```
## Import
Local repositories can be imported using their name, e.g.
```sh
$ pulumi import artifactory:index/localIvyRepository:LocalIvyRepository terraform-local-test-ivy-repo terraform-local-test-ivy-repo
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] archive_browsing_enabled: When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
:param pulumi.Input[bool] blacked_out: When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
:param pulumi.Input[str] checksum_policy_type: Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
:param pulumi.Input[bool] download_direct: When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
:param pulumi.Input[str] excludes_pattern: List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
:param pulumi.Input[bool] handle_releases: If set, Artifactory allows you to deploy release artifacts into this repository.
:param pulumi.Input[bool] handle_snapshots: If set, Artifactory allows you to deploy snapshot artifacts into this repository.
:param pulumi.Input[str] includes_pattern: List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
:param pulumi.Input[str] key: the identity key of the repo.
:param pulumi.Input[int] max_unique_snapshots: The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
:param pulumi.Input[bool] priority_resolution: Setting repositories with priority will cause metadata to be merged only from repositories set with this field
:param pulumi.Input[Sequence[pulumi.Input[str]]] project_environments: Project environment for assigning this repository to. Allow values: "DEV" or "PROD"
:param pulumi.Input[str] project_key: Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
:param pulumi.Input[Sequence[pulumi.Input[str]]] property_sets: List of property set name
:param pulumi.Input[str] repo_layout_ref: Repository layout key for the local repository
:param pulumi.Input[str] snapshot_version_behavior: Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
:param pulumi.Input[bool] suppress_pom_consistency_checks: By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
:param pulumi.Input[bool] xray_index: Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: LocalIvyRepositoryArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Creates a local Ivy repository.
## Example Usage
```python
import pulumi
import pulumi_artifactory as artifactory
terraform_local_test_ivy_repo = artifactory.LocalIvyRepository("terraform-local-test-ivy-repo", key="terraform-local-test-ivy-repo")
```
## Import
Local repositories can be imported using their name, e.g.
```sh
$ pulumi import artifactory:index/localIvyRepository:LocalIvyRepository terraform-local-test-ivy-repo terraform-local-test-ivy-repo
```
:param str resource_name: The name of the resource.
:param LocalIvyRepositoryArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(LocalIvyRepositoryArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
archive_browsing_enabled: Optional[pulumi.Input[bool]] = None,
blacked_out: Optional[pulumi.Input[bool]] = None,
checksum_policy_type: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
download_direct: Optional[pulumi.Input[bool]] = None,
excludes_pattern: Optional[pulumi.Input[str]] = None,
handle_releases: Optional[pulumi.Input[bool]] = None,
handle_snapshots: Optional[pulumi.Input[bool]] = None,
includes_pattern: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None,
max_unique_snapshots: Optional[pulumi.Input[int]] = None,
notes: Optional[pulumi.Input[str]] = None,
priority_resolution: Optional[pulumi.Input[bool]] = None,
project_environments: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
project_key: Optional[pulumi.Input[str]] = None,
property_sets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
repo_layout_ref: Optional[pulumi.Input[str]] = None,
snapshot_version_behavior: Optional[pulumi.Input[str]] = None,
suppress_pom_consistency_checks: Optional[pulumi.Input[bool]] = None,
xray_index: Optional[pulumi.Input[bool]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = LocalIvyRepositoryArgs.__new__(LocalIvyRepositoryArgs)
__props__.__dict__["archive_browsing_enabled"] = archive_browsing_enabled
__props__.__dict__["blacked_out"] = blacked_out
__props__.__dict__["checksum_policy_type"] = checksum_policy_type
__props__.__dict__["description"] = description
__props__.__dict__["download_direct"] = download_direct
__props__.__dict__["excludes_pattern"] = excludes_pattern
__props__.__dict__["handle_releases"] = handle_releases
__props__.__dict__["handle_snapshots"] = handle_snapshots
__props__.__dict__["includes_pattern"] = includes_pattern
if key is None and not opts.urn:
raise TypeError("Missing required property 'key'")
__props__.__dict__["key"] = key
__props__.__dict__["max_unique_snapshots"] = max_unique_snapshots
__props__.__dict__["notes"] = notes
__props__.__dict__["priority_resolution"] = priority_resolution
__props__.__dict__["project_environments"] = project_environments
__props__.__dict__["project_key"] = project_key
__props__.__dict__["property_sets"] = property_sets
__props__.__dict__["repo_layout_ref"] = repo_layout_ref
__props__.__dict__["snapshot_version_behavior"] = snapshot_version_behavior
__props__.__dict__["suppress_pom_consistency_checks"] = suppress_pom_consistency_checks
__props__.__dict__["xray_index"] = xray_index
__props__.__dict__["package_type"] = None
super(LocalIvyRepository, __self__).__init__(
'artifactory:index/localIvyRepository:LocalIvyRepository',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
archive_browsing_enabled: Optional[pulumi.Input[bool]] = None,
blacked_out: Optional[pulumi.Input[bool]] = None,
checksum_policy_type: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
download_direct: Optional[pulumi.Input[bool]] = None,
excludes_pattern: Optional[pulumi.Input[str]] = None,
handle_releases: Optional[pulumi.Input[bool]] = None,
handle_snapshots: Optional[pulumi.Input[bool]] = None,
includes_pattern: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None,
max_unique_snapshots: Optional[pulumi.Input[int]] = None,
notes: Optional[pulumi.Input[str]] = None,
package_type: Optional[pulumi.Input[str]] = None,
priority_resolution: Optional[pulumi.Input[bool]] = None,
project_environments: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
project_key: Optional[pulumi.Input[str]] = None,
property_sets: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
repo_layout_ref: Optional[pulumi.Input[str]] = None,
snapshot_version_behavior: Optional[pulumi.Input[str]] = None,
suppress_pom_consistency_checks: Optional[pulumi.Input[bool]] = None,
xray_index: Optional[pulumi.Input[bool]] = None) -> 'LocalIvyRepository':
"""
Get an existing LocalIvyRepository resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] archive_browsing_enabled: When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
:param pulumi.Input[bool] blacked_out: When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
:param pulumi.Input[str] checksum_policy_type: Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
:param pulumi.Input[bool] download_direct: When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
:param pulumi.Input[str] excludes_pattern: List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
:param pulumi.Input[bool] handle_releases: If set, Artifactory allows you to deploy release artifacts into this repository.
:param pulumi.Input[bool] handle_snapshots: If set, Artifactory allows you to deploy snapshot artifacts into this repository.
:param pulumi.Input[str] includes_pattern: List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
        :param pulumi.Input[str] key: The identity key of the repo.
:param pulumi.Input[int] max_unique_snapshots: The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
:param pulumi.Input[bool] priority_resolution: Setting repositories with priority will cause metadata to be merged only from repositories set with this field
        :param pulumi.Input[Sequence[pulumi.Input[str]]] project_environments: Project environment for assigning this repository to. Allowed values: "DEV" or "PROD".
:param pulumi.Input[str] project_key: Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] property_sets: List of property set names.
:param pulumi.Input[str] repo_layout_ref: Repository layout key for the local repository
:param pulumi.Input[str] snapshot_version_behavior: Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
:param pulumi.Input[bool] suppress_pom_consistency_checks: By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
:param pulumi.Input[bool] xray_index: Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _LocalIvyRepositoryState.__new__(_LocalIvyRepositoryState)
__props__.__dict__["archive_browsing_enabled"] = archive_browsing_enabled
__props__.__dict__["blacked_out"] = blacked_out
__props__.__dict__["checksum_policy_type"] = checksum_policy_type
__props__.__dict__["description"] = description
__props__.__dict__["download_direct"] = download_direct
__props__.__dict__["excludes_pattern"] = excludes_pattern
__props__.__dict__["handle_releases"] = handle_releases
__props__.__dict__["handle_snapshots"] = handle_snapshots
__props__.__dict__["includes_pattern"] = includes_pattern
__props__.__dict__["key"] = key
__props__.__dict__["max_unique_snapshots"] = max_unique_snapshots
__props__.__dict__["notes"] = notes
__props__.__dict__["package_type"] = package_type
__props__.__dict__["priority_resolution"] = priority_resolution
__props__.__dict__["project_environments"] = project_environments
__props__.__dict__["project_key"] = project_key
__props__.__dict__["property_sets"] = property_sets
__props__.__dict__["repo_layout_ref"] = repo_layout_ref
__props__.__dict__["snapshot_version_behavior"] = snapshot_version_behavior
__props__.__dict__["suppress_pom_consistency_checks"] = suppress_pom_consistency_checks
__props__.__dict__["xray_index"] = xray_index
return LocalIvyRepository(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="archiveBrowsingEnabled")
def archive_browsing_enabled(self) -> pulumi.Output[Optional[bool]]:
"""
When set, you may view content such as HTML or Javadoc files directly from Artifactory. This may not be safe and
therefore requires strict content moderation to prevent malicious users from uploading content that may compromise
security (e.g., cross-site scripting attacks).
"""
return pulumi.get(self, "archive_browsing_enabled")
@property
@pulumi.getter(name="blackedOut")
def blacked_out(self) -> pulumi.Output[Optional[bool]]:
"""
When set, the repository does not participate in artifact resolution and new artifacts cannot be deployed.
"""
return pulumi.get(self, "blacked_out")
@property
@pulumi.getter(name="checksumPolicyType")
def checksum_policy_type(self) -> pulumi.Output[Optional[str]]:
"""
Checksum policy determines how Artifactory behaves when a client checksum for a deployed resource is missing or
conflicts with the locally calculated checksum (bad checksum). Options are: "client-checksums", or
"server-generated-checksums". Default: "client-checksums"\n For more details, please refer to Checksum Policy -
https://www.jfrog.com/confluence/display/JFROG/Local+Repositories#LocalRepositories-ChecksumPolicy
"""
return pulumi.get(self, "checksum_policy_type")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "description")
@property
@pulumi.getter(name="downloadDirect")
def download_direct(self) -> pulumi.Output[Optional[bool]]:
"""
When set, download requests to this repository will redirect the client to download the artifact directly from the cloud
storage provider. Available in Enterprise+ and Edge licenses only.
"""
return pulumi.get(self, "download_direct")
@property
@pulumi.getter(name="excludesPattern")
def excludes_pattern(self) -> pulumi.Output[str]:
"""
List of artifact patterns to exclude when evaluating artifact requests, in the form of x/y/**/z/*. By default no
artifacts are excluded.
"""
return pulumi.get(self, "excludes_pattern")
@property
@pulumi.getter(name="handleReleases")
def handle_releases(self) -> pulumi.Output[Optional[bool]]:
"""
If set, Artifactory allows you to deploy release artifacts into this repository.
"""
return pulumi.get(self, "handle_releases")
@property
@pulumi.getter(name="handleSnapshots")
def handle_snapshots(self) -> pulumi.Output[Optional[bool]]:
"""
If set, Artifactory allows you to deploy snapshot artifacts into this repository.
"""
return pulumi.get(self, "handle_snapshots")
@property
@pulumi.getter(name="includesPattern")
def includes_pattern(self) -> pulumi.Output[str]:
"""
List of artifact patterns to include when evaluating artifact requests in the form of x/y/**/z/*. When used, only
artifacts matching one of the include patterns are served. By default, all artifacts are included (**/*).
"""
return pulumi.get(self, "includes_pattern")
@property
@pulumi.getter
def key(self) -> pulumi.Output[str]:
"""
        The identity key of the repo.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter(name="maxUniqueSnapshots")
def max_unique_snapshots(self) -> pulumi.Output[Optional[int]]:
"""
The maximum number of unique snapshots of a single artifact to store. Once the number of snapshots exceeds this setting,
older versions are removed. A value of 0 (default) indicates there is no limit, and unique snapshots are not cleaned up.
"""
return pulumi.get(self, "max_unique_snapshots")
@property
@pulumi.getter
def notes(self) -> pulumi.Output[Optional[str]]:
return pulumi.get(self, "notes")
@property
@pulumi.getter(name="packageType")
def package_type(self) -> pulumi.Output[str]:
return pulumi.get(self, "package_type")
@property
@pulumi.getter(name="priorityResolution")
def priority_resolution(self) -> pulumi.Output[Optional[bool]]:
"""
Setting repositories with priority will cause metadata to be merged only from repositories set with this field
"""
return pulumi.get(self, "priority_resolution")
@property
@pulumi.getter(name="projectEnvironments")
def project_environments(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
        Project environment for assigning this repository to. Allowed values: "DEV" or "PROD".
"""
return pulumi.get(self, "project_environments")
@property
@pulumi.getter(name="projectKey")
def project_key(self) -> pulumi.Output[Optional[str]]:
"""
Project key for assigning this repository to. When assigning repository to a project, repository key must be prefixed
with project key, separated by a dash.
"""
return pulumi.get(self, "project_key")
@property
@pulumi.getter(name="propertySets")
def property_sets(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
        List of property set names.
"""
return pulumi.get(self, "property_sets")
@property
@pulumi.getter(name="repoLayoutRef")
def repo_layout_ref(self) -> pulumi.Output[Optional[str]]:
"""
Repository layout key for the local repository
"""
return pulumi.get(self, "repo_layout_ref")
@property
@pulumi.getter(name="snapshotVersionBehavior")
def snapshot_version_behavior(self) -> pulumi.Output[Optional[str]]:
"""
Specifies the naming convention for Maven SNAPSHOT versions. The options are - unique: Version number is based on a
time-stamp (default) non-unique: Version number uses a self-overriding naming pattern of
artifactId-version-SNAPSHOT.type deployer: Respects the settings in the Maven client that is deploying the artifact.
"""
return pulumi.get(self, "snapshot_version_behavior")
@property
@pulumi.getter(name="suppressPomConsistencyChecks")
def suppress_pom_consistency_checks(self) -> pulumi.Output[Optional[bool]]:
"""
By default, Artifactory keeps your repositories healthy by refusing POMs with incorrect coordinates (path). If the
groupId:artifactId:version information inside the POM does not match the deployed path, Artifactory rejects the
deployment with a "409 Conflict" error. You can disable this behavior by setting the Suppress POM Consistency Checks
checkbox.
"""
return pulumi.get(self, "suppress_pom_consistency_checks")
@property
@pulumi.getter(name="xrayIndex")
def xray_index(self) -> pulumi.Output[Optional[bool]]:
"""
Enable Indexing In Xray. Repository will be indexed with the default retention period. You will be able to change it via
Xray settings.
"""
return pulumi.get(self, "xray_index")
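Both `_internal_init` and `get` above hydrate a props object by calling `__new__` on the args/state class and then writing fields straight into `__dict__`, bypassing `__init__` validation. A minimal stdlib-only sketch of that pattern (the `Args` class here is an illustrative stand-in, not part of the provider):

```python
class Args:
    """Stand-in for a generated *Args class with one required property."""
    def __init__(self, key):
        # Normal construction validates required inputs.
        if key is None:
            raise TypeError("Missing required property 'key'")
        self.__dict__["key"] = key

# Hydration path: allocate without running __init__, then fill __dict__
# directly -- useful when restoring state that was already validated.
props = Args.__new__(Args)
props.__dict__["key"] = "terraform-local-test-ivy-repo"
print(props.key)  # -> terraform-local-test-ivy-repo
```

The same trick appears twice in the generated code: once in `_internal_init` (new resources) and once in `get` (looking up existing state by provider ID).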
# File: sdk/compute/azure-mgmt-compute/azure/mgmt/compute/v2019_03_01/models/usage_py3.py
# Repo: pjquirk/azure-sdk-for-python (license: MIT)
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class Usage(Model):
"""Describes Compute Resource Usage.
Variables are only populated by the server, and will be ignored when
sending a request.
All required parameters must be populated in order to send to Azure.
:ivar unit: Required. An enum describing the unit of usage measurement.
Default value: "Count" .
:vartype unit: str
:param current_value: Required. The current usage of the resource.
:type current_value: int
:param limit: Required. The maximum permitted usage of the resource.
:type limit: long
:param name: Required. The name of the type of usage.
:type name: ~azure.mgmt.compute.v2019_03_01.models.UsageName
"""
_validation = {
'unit': {'required': True, 'constant': True},
'current_value': {'required': True},
'limit': {'required': True},
'name': {'required': True},
}
_attribute_map = {
'unit': {'key': 'unit', 'type': 'str'},
'current_value': {'key': 'currentValue', 'type': 'int'},
'limit': {'key': 'limit', 'type': 'long'},
'name': {'key': 'name', 'type': 'UsageName'},
}
unit = "Count"
def __init__(self, *, current_value: int, limit: int, name, **kwargs) -> None:
super(Usage, self).__init__(**kwargs)
self.current_value = current_value
self.limit = limit
self.name = name
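Per the `_attribute_map` above, a serialized `Usage` payload uses camelCase wire names (`currentValue`, not `current_value`). A hypothetical example of that JSON shape — the values and the `UsageName` sub-object keys are illustrative, not taken from a real subscription:

```python
# Illustrative wire-format dict matching the _attribute_map keys above.
usage_payload = {
    "unit": "Count",      # constant, per the class-level `unit`
    "currentValue": 5,    # maps to the current_value attribute
    "limit": 100,         # maps to the limit attribute
    "name": {             # maps to a UsageName sub-object (assumed keys)
        "value": "virtualMachines",
        "localizedValue": "Virtual Machines",
    },
}
print(sorted(usage_payload))
```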
# File: src/SeleniumLibrary/keywords/javascript.py
# Repo: ponkar/robotframework-selenium2library (licenses: ECL-2.0, Apache-2.0)
# Copyright 2008-2011 Nokia Networks
# Copyright 2011-2016 Ryan Tomac, Ed Manlove and contributors
# Copyright 2016- Robot Framework Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from SeleniumLibrary.base import LibraryComponent, keyword
class JavaScriptKeywords(LibraryComponent):
@keyword
def execute_javascript(self, *code):
"""Executes the given JavaScript code.
`code` may contain multiple lines of code and may be divided into
multiple cells in the test data. In that case, the parts are
catenated together without adding spaces.
If `code` is an absolute path to an existing file, the JavaScript
to execute will be read from that file. Forward slashes work as
a path separator on all operating systems.
The JavaScript executes in the context of the currently selected
frame or window as the body of an anonymous function. Use _window_ to
refer to the window of your application and _document_ to refer to the
document object of the current frame or window, e.g.
_document.getElementById('foo')_.
This keyword returns None unless there is a return statement in the
JavaScript. Return values are converted to the appropriate type in
Python, including WebElements.
Examples:
| Execute JavaScript | window.my_js_function('arg1', 'arg2') | |
| Execute JavaScript | ${CURDIR}/js_to_execute.js | |
| ${sum}= | Execute JavaScript | return 1 + 1; |
| Should Be Equal | ${sum} | ${2} |
"""
js = self._get_javascript_to_execute(''.join(code))
self.info("Executing JavaScript:\n%s" % js)
return self.browser.execute_script(js)
@keyword
def execute_async_javascript(self, *code):
"""Executes asynchronous JavaScript code.
Similar to `Execute Javascript` except that scripts executed with
this keyword must explicitly signal they are finished by invoking the
provided callback. This callback is always injected into the executed
function as the last argument.
Scripts must complete within the script timeout or this keyword will
fail. See the `Timeouts` section for more information.
Examples:
| Execute Async JavaScript | var callback = arguments[arguments.length - 1]; | window.setTimeout(callback, 2000); |
| Execute Async JavaScript | ${CURDIR}/async_js_to_execute.js | |
| ${retval}= | Execute Async JavaScript | |
| ... | var callback = arguments[arguments.length - 1]; | |
| ... | function answer(){callback("text");}; | |
| ... | window.setTimeout(answer, 2000); | |
| Should Be Equal | ${retval} | text |
"""
js = self._get_javascript_to_execute(''.join(code))
self.info("Executing Asynchronous JavaScript:\n%s" % js)
return self.browser.execute_async_script(js)
def _get_javascript_to_execute(self, code):
codepath = code.replace('/', os.sep)
if not (os.path.isabs(codepath) and os.path.isfile(codepath)):
return code
self.info(
            'Reading JavaScript from file <a href="file://%s">%s</a>.'
            % (codepath.replace(os.sep, '/'), codepath),
html=True)
        with open(codepath) as codefile:
            return codefile.read().strip()
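The path-resolution rule in `_get_javascript_to_execute` can be sketched standalone: only an absolute path to an existing file is read from disk, and everything else passes through as inline JavaScript. This `resolve_js` helper is illustrative, not part of the library:

```python
import os

def resolve_js(code):
    # Treat `code` as a file only when it is an absolute path to an
    # existing file; otherwise it is the JavaScript source itself.
    codepath = code.replace('/', os.sep)
    if os.path.isabs(codepath) and os.path.isfile(codepath):
        with open(codepath) as f:
            return f.read().strip()
    return code

print(resolve_js("return 1 + 1;"))  # -> return 1 + 1;
```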
# File: lib/libvcc/generate.py
# Repo: Dridi/varnish-cache (license: BSD-2-Clause)
#!/usr/bin/env python
#-
# Copyright (c) 2006 Verdens Gang AS
# Copyright (c) 2006-2015 Varnish Software AS
# All rights reserved.
#
# Author: Poul-Henning Kamp <phk@phk.freebsd.dk>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
#
# Generate various .c and .h files for the VCL compiler and the interfaces
# for it.
from __future__ import print_function
import subprocess
import os
#######################################################################
# These are our tokens
# We could drop all words such as "include", "if" etc, and use the
# ID type instead, but declaring them tokens makes them reserved words
# which hopefully makes for better error messages.
# XXX: does it actually do that ?
import copy
import sys
from os.path import join
srcroot = "../.."
buildroot = "../.."
if len(sys.argv) == 3:
srcroot = sys.argv[1]
buildroot = sys.argv[2]
tokens = {
"T_INC": "++",
"T_DEC": "--",
"T_CAND": "&&",
"T_COR": "||",
"T_LEQ": "<=",
"T_EQ": "==",
"T_NEQ": "!=",
"T_GEQ": ">=",
"T_SHR": ">>",
"T_SHL": "<<",
"T_INCR": "+=",
"T_DECR": "-=",
"T_MUL": "*=",
"T_DIV": "/=",
"T_NOMATCH": "!~",
# Single char tokens, for convenience on one line
None: "{}()*+-/%><=;!&.|~,",
# These have handwritten recognizers
"ID": None,
"CNUM": None,
"CSTR": None,
"EOI": None,
"CSRC": None,
}
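The `None` key above bundles all single-character tokens into one string for convenience; the generator presumably expands it into one token per character when emitting its tables. A sketch of that expansion (hypothetical, not the generator's actual emit code):

```python
tokens = {
    "T_EQ": "==",
    None: "{}()*+-/%><=;!&.|~,",
}
# Expand the convenience entry: each character becomes its own token,
# keyed by the character itself; named tokens are kept as-is.
expanded = {c: c for c in tokens.pop(None)}
expanded.update(tokens)
print(len(expanded))  # 19 single-char tokens plus T_EQ
```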
#######################################################################
# Our methods and actions
returns = (
###############################################################
# Client side
('recv',
"C",
('fail', 'synth', 'pass', 'pipe', 'hash', 'purge', 'vcl')
),
('pipe',
"C",
('fail', 'synth', 'pipe',)
),
('pass',
"C",
('fail', 'synth', 'restart', 'fetch',)
),
('hash',
"C",
('fail', 'lookup',)
),
('purge',
"C",
('fail', 'synth', 'restart',)
),
('miss',
"C",
('fail', 'synth', 'restart', 'pass', 'fetch',)
),
('hit',
"C",
('fail', 'synth', 'restart', 'pass', 'miss', 'deliver',)
),
('deliver',
"C",
('fail', 'synth', 'restart', 'deliver',)
),
('synth',
"C",
('fail', 'restart', 'deliver',)
),
###############################################################
# Backend-fetch
('backend_fetch',
"B",
('fail', 'fetch', 'abandon')
),
('backend_response',
"B",
('fail', 'deliver', 'retry', 'abandon', 'pass')
),
('backend_error',
"B",
('fail', 'deliver', 'retry', 'abandon')
),
###############################################################
# Housekeeping
('init',
"H",
('ok', 'fail')
),
('fini',
"H",
('ok',)
),
)
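The `returns` table above encodes, per VCL method, its context tag ("C" client, "B" backend, "H" housekeeping) and the legal return actions. Flattening it into one row per legal transition — as a generator loop over it might — looks like this (abridged copy of the table, for illustration only):

```python
# Abridged copy of the table above: (method, context, allowed returns).
returns = (
    ('recv', "C", ('fail', 'synth', 'pass', 'pipe', 'hash', 'purge', 'vcl')),
    ('hash', "C", ('fail', 'lookup')),
    ('init', "H", ('ok', 'fail')),
)
# One row per legal (method, action) transition.
rows = [(m, ctx, a) for m, ctx, actions in returns for a in actions]
print(('hash', 'C', 'lookup') in rows)  # -> True
```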
#######################################################################
# Variables available in sessions
#
# 'all' means all methods
# 'client' means all methods tagged "C"
# 'backend' means all methods tagged "B"
# 'both' means all methods tagged "B" or "C"
sp_variables = [
('remote.ip',
'IP',
('both',),
(), """
The IP address of the other end of the TCP connection.
This can either be the clients IP, or the outgoing IP
of a proxy server.
"""
),
('client.ip',
'IP',
('both',),
(), """
The client's IP address.
"""
),
('client.identity',
'STRING',
('client',),
('client',), """
Identification of the client, used to load balance
in the client director. Defaults to the client's IP
address.
"""
),
('local.ip',
'IP',
('both',),
(), """
The IP address of the local end of the TCP connection.
"""
),
('server.ip',
'IP',
('both',),
(), """
The IP address of the socket on which the client
connection was received.
"""
),
('server.hostname',
'STRING',
('all',),
(), """
The host name of the server.
"""
),
('server.identity',
'STRING',
('all',),
(), """
The identity of the server, as set by the -i
parameter. If the -i parameter is not passed to varnishd,
server.identity will be set to the name of the instance, as
specified by the -n parameter.
"""
),
('req',
'HTTP',
('client',),
(), """
The entire request HTTP data structure
"""
),
('req.method',
'STRING',
('client',),
('client',), """
The request type (e.g. "GET", "HEAD").
"""
),
('req.url',
'STRING',
('client',),
('client',), """
The requested URL.
"""
),
('req.proto',
'STRING',
('client',),
('client',), """
The HTTP protocol version used by the client.
"""
),
('req.http.',
'HEADER',
('client',),
('client',), """
The corresponding HTTP header.
"""
),
('req.restarts',
'INT',
('client',),
(), """
A count of how many times this request has been restarted.
"""
),
('req.storage',
'STEVEDORE',
('recv',),
('recv',), """
The storage backend to use to save this request body.
"""
),
('req.esi_level',
'INT',
('client',),
(), """
A count of how many levels of ESI requests we're currently at.
"""
),
('req.ttl',
'DURATION',
('client',),
('client',), """
Upper limit on the object age for cache lookups to return hit.
Usage of req.ttl should be replaced with a check on
obj.ttl in vcl_hit, returning miss when needed, but
this currently hits bug #1799, so an additional
workaround is required.
Deprecated and scheduled for removal with varnish release 7.
"""
),
('req.xid',
'STRING',
('client',),
(), """
Unique ID of this request.
"""
),
('req.esi',
'BOOL',
('client',),
('client',), """
Boolean. Set to false to disable ESI processing
regardless of any value in beresp.do_esi. Defaults
to true. This variable is subject to change in
future versions, you should avoid using it.
"""
),
('req.can_gzip',
'BOOL',
('client',),
(), """
Does the client accept the gzip transfer encoding.
"""
),
('req.backend_hint',
'BACKEND',
('client', ),
('client',), """
Set bereq.backend to this if we attempt to fetch.
When set to a director, reading this variable returns
an actual backend if the director has resolved immediately,
or the director otherwise.
When used in string context, returns the name of the director
or backend, respectively.
"""
),
('req.hash_ignore_busy',
'BOOL',
('recv',),
('recv',), """
Ignore any busy object during cache lookup. You
		would want to do this if you have two servers looking
up content from each other to avoid potential deadlocks.
"""
),
('req.hash_always_miss',
'BOOL',
('recv',),
('recv',), """
Force a cache miss for this request. If set to true
Varnish will disregard any existing objects and
always (re)fetch from the backend.
"""
),
('req_top.method',
'STRING',
('client',),
(), """
The request method of the top-level request in a tree
of ESI requests. (e.g. "GET", "HEAD").
Identical to req.method in non-ESI requests.
"""
),
('req_top.url',
'STRING',
('client',),
(), """
The requested URL of the top-level request in a tree
of ESI requests.
Identical to req.url in non-ESI requests.
"""
),
('req_top.http.',
'HEADER',
('client',),
(), """
HTTP headers of the top-level request in a tree of ESI requests.
Identical to req.http. in non-ESI requests.
"""
),
('req_top.proto',
'STRING',
('client',),
(), """
HTTP protocol version of the top-level request in a tree of
ESI requests.
Identical to req.proto in non-ESI requests.
"""
),
('bereq',
'HTTP',
('backend',),
(), """
The entire backend request HTTP data structure
"""
),
('bereq.xid',
'STRING',
('backend',),
(), """
Unique ID of this request.
"""
),
('bereq.retries',
'INT',
('backend',),
(), """
A count of how many times this request has been retried.
"""
),
('bereq.backend',
'BACKEND',
('pipe', 'backend', ),
('pipe', 'backend', ), """
This is the backend or director we attempt to fetch from.
When set to a director, reading this variable returns
an actual backend if the director has resolved immediately,
or the director otherwise.
When used in string context, returns the name of the director
or backend, respectively.
"""
),
('bereq.body',
'BODY',
(),
('backend_fetch',), """
The request body.
"""
),
('bereq.method',
'STRING',
('pipe', 'backend', ),
('pipe', 'backend', ), """
The request type (e.g. "GET", "HEAD").
"""
),
('bereq.url',
'STRING',
('pipe', 'backend', ),
('pipe', 'backend', ), """
The requested URL.
"""
),
('bereq.proto',
'STRING',
('pipe', 'backend', ),
('pipe', 'backend', ), """
The HTTP protocol version used to talk to the server.
"""
),
('bereq.http.',
'HEADER',
('pipe', 'backend', ),
('pipe', 'backend', ), """
The corresponding HTTP header.
"""
),
('bereq.uncacheable',
'BOOL',
('backend', ),
(), """
Indicates whether this request is uncacheable due
to a pass in the client side or a hit on an existing
uncacheable object (aka hit-for-pass).
"""
),
('bereq.connect_timeout',
'DURATION',
('pipe', 'backend', ),
('pipe', 'backend', ), """
The time in seconds to wait for a backend connection.
"""
),
('bereq.first_byte_timeout',
'DURATION',
('backend', ),
('backend', ), """
The time in seconds to wait for the first byte from
the backend. Not available in pipe mode.
"""
),
('bereq.between_bytes_timeout',
'DURATION',
('backend', ),
('backend', ), """
The time in seconds to wait between each received byte from the
backend. Not available in pipe mode.
"""
),
('beresp',
'HTTP',
('backend_response', 'backend_error'),
(), """
The entire backend response HTTP data structure
"""
),
('beresp.body',
'BODY',
(),
('backend_error',), """
The response body.
"""
),
('beresp.proto',
'STRING',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The HTTP protocol version used the backend replied with.
"""
),
('beresp.status',
'INT',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The HTTP status code returned by the server.
Status codes >1000 can be set for vcl-internal
purposes and will be taken modulo 1000 on delivery.
"""
),
('beresp.reason',
'STRING',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The HTTP status message returned by the server.
"""
),
('beresp.http.',
'HEADER',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The corresponding HTTP header.
"""
),
('beresp.do_esi',
'BOOL',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Boolean. ESI-process the object after fetching it.
Defaults to false. Set it to true to parse the
object for ESI directives. Will only be honored if
req.esi is true.
"""
),
('beresp.do_stream',
'BOOL',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Deliver the object to the client while fetching the whole
object into varnish. For uncacheable objects, storage for
parts of the body which have been sent to the client may
get freed early, depending on the storage engine used.
"""
),
('beresp.do_gzip',
'BOOL',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Boolean. Gzip the object before storing it. Defaults
to false. When http_gzip_support is on Varnish will
request already compressed content from the backend
and as such compression in Varnish is not needed.
"""
),
('beresp.do_gunzip',
'BOOL',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Boolean. Unzip the object before storing it in the
cache. Defaults to false.
"""
),
('beresp.was_304',
'BOOL',
('backend_response', 'backend_error'),
(), """
Boolean. If this is a successful 304 response to a
backend conditional request refreshing an existing
cache object.
"""
),
('beresp.uncacheable',
'BOOL',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Inherited from bereq.uncacheable, see there.
Setting this variable makes the object uncacheable, which may
get stored as a hit-for-pass object in the cache.
Clearing the variable has no effect and will log the warning
"Ignoring attempt to reset beresp.uncacheable".
"""
),
('beresp.ttl',
'DURATION',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The object's remaining time to live, in seconds.
"""
),
('beresp.age',
'DURATION',
('backend_response', 'backend_error'),
(), """
The age of the object.
"""
),
('beresp.grace',
'DURATION',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Set to a period to enable grace.
"""
),
('beresp.keep',
'DURATION',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Set to a period to enable conditional backend requests.
The keep time is cache lifetime in addition to the ttl.
Objects with ttl expired but with keep time left may be used
to issue conditional (If-Modified-Since / If-None-Match)
requests to the backend to refresh them.
"""
),
('beresp.backend',
'BACKEND',
('backend_response', 'backend_error'),
(), """
This is the backend we fetched from. If bereq.backend
was set to a director, this will be the backend selected
by the director.
When used in string context, returns its name.
"""
),
('beresp.backend.name',
'STRING',
('backend_response', 'backend_error'),
(), """
Name of the backend this response was fetched from.
Same as beresp.backend.
"""
),
('beresp.backend.ip',
'IP',
('backend_response',),
(), """
IP of the backend this response was fetched from.
"""
),
('beresp.storage',
'STEVEDORE',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
The storage backend to use to save this object.
"""
),
('beresp.storage_hint',
'STRING',
('backend_response', 'backend_error'),
('backend_response', 'backend_error'), """
Deprecated. Hint to Varnish that you want to
save this object to a particular storage backend.
Use beresp.storage instead.
"""
),
('obj.proto',
'STRING',
('hit',),
(), """
The HTTP protocol version stored with the object.
"""
),
('obj.status',
'INT',
('hit',),
(), """
The HTTP status code stored with the object.
"""
),
('obj.reason',
'STRING',
('hit',),
(), """
The HTTP reason phrase stored with the object.
"""
),
('obj.hits',
'INT',
('hit', 'deliver'),
(), """
The count of cache-hits on this object. A value of 0 indicates a
cache miss.
"""
),
('obj.http.',
'HEADER',
('hit',),
(), """
The corresponding HTTP header.
"""
),
('obj.ttl',
'DURATION',
('hit', 'deliver'),
(), """
The object's remaining time to live, in seconds.
"""
),
('obj.age',
'DURATION',
('hit', 'deliver'),
(), """
The age of the object.
"""
),
('obj.grace',
'DURATION',
('hit', 'deliver'),
(), """
The object's remaining grace period in seconds.
"""
),
('obj.keep',
'DURATION',
('hit', 'deliver'),
(), """
The object's remaining keep period in seconds.
"""
),
('obj.uncacheable',
'BOOL',
('deliver',),
(), """
Whether the object is uncacheable (pass or hit-for-pass).
"""
),
('resp',
'HTTP',
('deliver', 'synth'),
(), """
The entire response HTTP data structure.
"""
),
('resp.body',
'BODY',
(),
('synth',), """
The response body.
"""
),
('resp.proto',
'STRING',
('deliver', 'synth'),
('deliver', 'synth'), """
The HTTP protocol version to use for the response.
"""
),
('resp.status',
'INT',
('deliver', 'synth'),
('deliver', 'synth'), """
The HTTP status code that will be returned.
Assigning a HTTP standardized code to resp.status will also
set resp.reason to the corresponding status message.
resp.status 200 will get changed into 304 by core code after
a return(deliver) from vcl_deliver for conditional requests
to cached content if validation succeeds.
"""
),
('resp.reason',
'STRING',
('deliver', 'synth'),
('deliver', 'synth'), """
The HTTP status message that will be returned.
"""
),
('resp.http.',
'HEADER',
('deliver', 'synth'),
('deliver', 'synth'), """
The corresponding HTTP header.
"""
),
('resp.is_streaming',
'BOOL',
('deliver', 'synth'),
(), """
Returns true when the response will be streamed
from the backend.
"""
),
('now',
'TIME',
('all',),
(), """
The current time, in seconds since the epoch. When
used in string context it returns a formatted string.
"""
),
]
# Backwards compatibility:
aliases = []
stv_variables = (
('free_space', 'BYTES', "0.", 'storage.<name>.free_space', """
Free space available in the named stevedore. Only available for
the malloc stevedore.
"""),
('used_space', 'BYTES', "0.", 'storage.<name>.used_space', """
Used space in the named stevedore. Only available for the malloc
stevedore.
"""),
('happy', 'BOOL', "0", 'storage.<name>.happy', """
Health status for the named stevedore. Not available in any of the
current stevedores.
"""),
)
#######################################################################
# VCL to C type conversion
vcltypes = {
'STRING_LIST': "void*",
}
fi = open(join(srcroot, "include/vrt.h"))
for i in fi:
    j = i.split()
    if len(j) < 3:
        continue
    if j[0] != "typedef":
        continue
    if j[-1][-1] != ";":
        continue
    if j[-1][-2] == ")":
        continue
    if j[-1][:4] != "VCL_":
        continue
    d = " ".join(j[1:-1])
    vcltypes[j[-1][4:-1]] = d
fi.close()
#######################################################################
# Nothing is easily configurable below this line.
#######################################################################
#######################################################################
def emit_vcl_fixed_token(fo, tokens):
    "Emit a function to recognize tokens in a string"
    recog = list()
    emit = dict()
    for i in tokens:
        j = tokens[i]
        if j is not None:
            recog.append(j)
            emit[j] = i
    recog.sort()
    rrecog = copy.copy(recog)
    rrecog.sort(key=lambda x: -len(x))
    fo.write("""
#define M1()\tdo {*q = p + 1; return (p[0]); } while (0)
#define M2(c,t)\tdo {if (p[1] == (c)) { *q = p + 2; return (t); }} while (0)
unsigned
vcl_fixed_token(const char *p, const char **q)
{
\tswitch (p[0]) {
""")
    last_initial = None
    for i in recog:
        if i[0] == last_initial:
            continue
        last_initial = i[0]
        fo.write("\tcase '%s':\n" % last_initial)
        need_ret = True
        for j in rrecog:
            if j[0] != last_initial:
                continue
            if len(j) == 2:
                fo.write("\t\tM2('%s', %s);\n" %
                         (j[1], emit[j]))
            elif len(j) == 1:
                fo.write("\t\tM1();\n")
                need_ret = False
            else:
                fo.write("\t\tif (")
                k = 1
                l = len(j)
                while (k < l):
                    fo.write("p[%d] == '%s'" % (k, j[k]))
                    fo.write(" &&")
                    if (k % 3) == 0:
                        fo.write("\n\t\t ")
                    else:
                        fo.write(" ")
                    k += 1
                fo.write("!isvar(p[%d])) {\n" % l)
                fo.write("\t\t\t*q = p + %d;\n" % l)
                fo.write("\t\t\treturn (%s);\n" % emit[j])
                fo.write("\t\t}\n")
        if need_ret:
            fo.write("\t\treturn (0);\n")
    fo.write("\tdefault:\n\t\treturn (0);\n\t}\n}\n")
#######################################################################
def emit_vcl_tnames(fo, tokens):
    "Emit the vcl_tnames (token->string) conversion array"
    fo.write("\nconst char * const vcl_tnames[256] = {\n")
    l = list(tokens.keys())
    l.sort()
    for i in l:
        j = tokens[i]
        if j is None:
            j = i
        if i[0] == "'":
            j = i
        fo.write("\t[%s] = \"%s\",\n" % (i, j))
    fo.write("};\n")
#######################################################################
def emit_file(fo, fd, bn):
    "Read a C-source file and spit out code that outputs it with VSB_cat()"
    fn = join(fd, bn)
    fi = open(fn)
    fc = fi.read()
    fi.close()
    w = 66          # Width of lines, after white space prefix
    maxlen = 10240  # Max length of string literal
    x = 0
    l = 0
    fo.write("\n\t/* %s */\n\n" % fn)
    fo.write('\tVSB_cat(sb, "/* ---===### %s ###===--- */\\n\\n");\n' % bn)
    for c in fc:
        if l == 0:
            fo.write("\tVSB_cat(sb, \"")
            l += 12
            x += 12
        if x == 0:
            fo.write("\t \"")
        d = c
        if c == '\n':
            d = "\\n"
        elif c == '\t':
            d = "\\t"
        elif c == '"':
            d = "\\\""
        elif c == '\\':
            d = "\\\\"
        if c == '\n' and x > w - 20:
            fo.write(d + "\"\n")
            x = 0
            continue
        if c.isspace() and x > w - 10:
            fo.write(d + "\"\n")
            x = 0
            continue
        fo.write(d)
        x += len(d)
        l += len(d)
        if l > maxlen:
            fo.write("\");\n")
            l = 0
            x = 0
        if x > w - 3:
            fo.write("\"\n")
            x = 0
    if x != 0:
        fo.write("\"\n")
    if l != 0:
        fo.write("\t);\n")
    fo.write('\tVSB_cat(sb, "\\n");\n')
#######################################################################
def polish_tokens(tokens):
    "Expand single char tokens"
    st = tokens[None]
    del tokens[None]
    for i in st:
        tokens["'" + i + "'"] = i
#######################################################################
def file_header(fo):
    fo.write("""/*
 * NB: This file is machine generated, DO NOT EDIT!
 *
 * Edit and run lib/libvcc/generate.py instead.
 */
""")

def lint_start(fo):
    fo.write('/*lint -save -e525 -e539 */\n\n')

def lint_end(fo):
    fo.write('\n/*lint -restore */\n')
#######################################################################
polish_tokens(tokens)
fo = open(join(buildroot, "lib/libvcc/vcc_token_defs.h"), "w")
file_header(fo)
j = 128
for i in sorted(tokens.keys()):
    if i[0] == "'":
        continue
    fo.write("#define\t%s %d\n" % (i, j))
    j += 1
assert j < 256
fo.close()
#######################################################################
rets = dict()
vcls = list()
vcls_client = list()
vcls_backend = list()
for i in returns:
    vcls.append(i[0])
    for j in i[1]:
        if j == "B":
            vcls_backend.append(i[0])
        elif j == "C":
            vcls_client.append(i[0])
    for j in i[2]:
        rets[j] = True
#######################################################################
fo = open(join(buildroot, "include/tbl/vcl_returns.h"), "w")
file_header(fo)
lint_start(fo)
fo.write("#ifdef VCL_RET_MAC\n")
ll = sorted(returns)
for i in sorted(rets.keys()):
    fo.write("VCL_RET_MAC(%s, %s" % (i.lower(), i.upper()))
    s = ",\n\t"
    for j in ll:
        if i in j[2]:
            fo.write("%sVCL_MET_%s" % (s, j[0].upper()))
            s = " |\n\t"
    fo.write("\n)\n\n")
fo.write("#undef VCL_RET_MAC\n")
fo.write("#endif\n")
fo.write("\n#ifdef VCL_MET_MAC\n")
for i in ll:
    fo.write("VCL_MET_MAC(%s, %s, %s," %
             (i[0].lower(), i[0].upper(), i[1]))
    p = " (\n\t"
    for j in sorted(i[2]):
        fo.write("%s(1U << VCL_RET_%s)" % (p, j.upper()))
        p = " |\n\t"
    fo.write(")\n)\n\n")
fo.write("#undef VCL_MET_MAC\n")
fo.write("#endif\n")
lint_end(fo)
fo.close()
#######################################################################
fo = open(join(buildroot, "include/vcl.h"), "w")
file_header(fo)
fo.write("""
struct vrt_ctx;
#define VRT_CTX const struct vrt_ctx *ctx
struct req;
struct busyobj;
struct ws;
struct cli;
struct worker;
enum vcl_event_e {
VCL_EVENT_LOAD,
VCL_EVENT_WARM,
VCL_EVENT_COLD,
VCL_EVENT_DISCARD,
};
typedef int vcl_event_f(VRT_CTX, enum vcl_event_e);
typedef int vcl_init_f(VRT_CTX);
typedef void vcl_fini_f(VRT_CTX);
typedef void vcl_func_f(VRT_CTX);
""")
def tbl40(a, b):
    while len(a.expandtabs()) < 40:
        a += "\t"
    return a + b
fo.write("\n/* VCL Methods */\n")
n = 1
for i in returns:
    fo.write(tbl40("#define VCL_MET_%s" % i[0].upper(), "(1U << %d)\n" % n))
    n += 1
fo.write("\n" + tbl40("#define VCL_MET_MAX", "%d\n" % n))
fo.write("\n" + tbl40("#define VCL_MET_MASK", "0x%x\n" % ((1 << n) - 1)))
fo.write("\n/* VCL Returns */\n")
n = 1
for i in sorted(rets.keys()):
    fo.write(tbl40("#define VCL_RET_%s" % i.upper(), "%d\n" % n))
    n += 1
fo.write("\n" + tbl40("#define VCL_RET_MAX", "%d\n" % n))
fo.write("""
struct VCL_conf {
unsigned magic;
#define VCL_CONF_MAGIC 0x7406c509 /* from /dev/random */
struct director **default_director;
const struct vrt_backend_probe *default_probe;
unsigned nref;
struct vrt_ref *ref;
unsigned nsrc;
const char **srcname;
const char **srcbody;
vcl_event_f *event_vcl;
""")
for i in returns:
    fo.write("\tvcl_func_f\t*" + i[0] + "_func;\n")
fo.write("\n};\n")
fo.close()
#######################################################################
def restrict(fo, spec):
    d = dict()
    for j in spec:
        if j == 'all':
            for i in vcls:
                d[i] = True
        elif j == 'backend':
            for i in vcls_backend:
                d[i] = True
        elif j == 'client':
            for i in vcls_client:
                d[i] = True
        elif j == 'both':
            for i in vcls_client:
                d[i] = True
            for i in vcls_backend:
                d[i] = True
        else:
            assert j in vcls
            d[j] = True
    p = ""
    n = 0
    l = list(d.keys())
    l.sort()
    w = 0
    fo.write("\t\t")
    for j in l:
        x = p + "VCL_MET_" + j.upper()
        if w + len(x) > 60:
            fo.write("\n\t\t")
            w = 0
        fo.write(x)
        w += len(x)
        p = " | "
    if len(d) == 0:
        fo.write("0")
    fo.write(",\n")
#######################################################################
fh = open(join(buildroot, "include/vrt_obj.h"), "w")
file_header(fh)
fo = open(join(buildroot, "lib/libvcc/vcc_obj.c"), "w")
file_header(fo)
fo.write("""
#include "config.h"
#include <stdio.h>
#include "vcc_compile.h"
const struct var vcc_vars[] = {
""")
def one_var(nm, spec):
    fh.write("\n")
    typ = spec[1]
    cnam = i[0].replace(".", "_")
    ctyp = vcltypes[typ]
    fo.write("\t{ \"%s\", %s,\n" % (nm, typ))
    if len(spec[2]) == 0:
        fo.write('\t NULL,\t/* No reads allowed */\n')
    elif typ == "HEADER":
        fo.write('\t "HDR_')
        fo.write(nm.split(".")[0].upper())
        fo.write('",\n')
    else:
        fo.write('\t "VRT_r_%s(ctx)",\n' % cnam)
        if nm == i[0]:
            fh.write("VCL_" + typ + " VRT_r_%s(VRT_CTX);\n" % cnam)
    restrict(fo, spec[2])
    if len(spec[3]) == 0:
        fo.write('\t NULL,\t/* No writes allowed */\n')
    elif typ == "HEADER":
        fo.write('\t "HDR_')
        fo.write(nm.split(".")[0].upper())
        fo.write('",\n')
    else:
        fo.write('\t "VRT_l_%s(ctx, ",\n' % cnam)
        if nm == i[0]:
            fh.write("void VRT_l_%s(VRT_CTX, " % cnam)
            if typ != "STRING" and typ != "BODY":
                fh.write("VCL_" + typ + ");\n")
            else:
                fh.write(ctyp + ", ...);\n")
    restrict(fo, spec[3])
    fo.write("\t},\n")
sp_variables.sort()
aliases.sort()
for i in sp_variables:
    one_var(i[0], i)
    for j in aliases:
        if j[1] == i[0]:
            one_var(j[0], i)
fo.write("\t{ NULL }\n};\n\n")
for i in stv_variables:
    fh.write(vcltypes[i[1]] + " VRT_Stv_" + i[0] + "(const char *);\n")
fo.close()
fh.close()
#######################################################################
fo = open(join(buildroot, "lib/libvcc/vcc_fixed_token.c"), "w")
file_header(fo)
fo.write("""
#include "config.h"
#include <ctype.h>
#include <stdio.h>
#include "vcc_compile.h"
""")
emit_vcl_fixed_token(fo, tokens)
emit_vcl_tnames(fo, tokens)
fo.write("""
void
vcl_output_lang_h(struct vsb *sb)
{
""")
emit_file(fo, srcroot, "include/vdef.h")
emit_file(fo, buildroot, "include/vcl.h")
emit_file(fo, srcroot, "include/vrt.h")
emit_file(fo, buildroot, "include/vrt_obj.h")
fo.write("\n}\n")
fo.close()
#######################################################################
ft = open(join(buildroot, "include/tbl/vcc_types.h"), "w")
file_header(ft)
lint_start(ft)
for vcltype in sorted(vcltypes.keys()):
    ft.write("VCC_TYPE(" + vcltype + ")\n")
ft.write("#undef VCC_TYPE\n")
lint_end(ft)
ft.close()
#######################################################################
fo = open(join(buildroot, "include/tbl/vrt_stv_var.h"), "w")
file_header(fo)
lint_start(fo)
for i in stv_variables:
    ct = vcltypes[i[1]]
    fo.write("VRTSTVVAR(" + i[0] + ",\t" + i[1] + ",\t")
    fo.write(ct + ",\t" + i[2] + ")")
    fo.write("\n")
fo.write("#undef VRTSTVVAR\n")
lint_end(fo)
fo.close()
#######################################################################
fp_vclvar = open(join(buildroot, "doc/sphinx/include/vcl_var.rst"), "w")
l = sorted(sp_variables)
def rst_where(fo, h, l):
    ll = list()
    if len(l) == 0:
        return
    fo.write("\t" + h)
    s = ""
    for j in l:
        if j == "both":
            ll.append("client")
            ll.append("backend")
        elif j == "client":
            ll.append(j)
        elif j == "backend":
            ll.append(j)
        elif j == "all":
            ll.append(j)
        else:
            ll.append("vcl_" + j)
    for j in ll:
        fo.write(s + j)
        s = ", "
    fo.write("\n\n")
hdr = ""
for i in l:
    j = i[0].split(".")
    if j[0] != hdr:
        fp_vclvar.write("\n" + j[0] + "\n")
        fp_vclvar.write("~" * len(j[0]) + "\n")
        hdr = j[0]
    fp_vclvar.write("\n" + i[0] + "\n\n")
    fp_vclvar.write("\tType: " + i[1] + "\n\n")
    rst_where(fp_vclvar, "Readable from: ", i[2])
    rst_where(fp_vclvar, "Writable from: ", i[3])
    for j in i[4].split("\n"):
        fp_vclvar.write("\t%s\n" % j.strip())
hdr = "storage"
fp_vclvar.write("\n" + hdr + "\n")
fp_vclvar.write("~" * len(hdr) + "\n")
for i in stv_variables:
    fp_vclvar.write("\n" + i[3] + "\n\n")
    fp_vclvar.write("\tType: " + i[1] + "\n\n")
    fp_vclvar.write("\tReadable from: client, backend\n\n")
    for j in i[4].split("\n"):
        fp_vclvar.write("\t%s\n" % j.strip())
fp_vclvar.close()
#######################################################################
if os.path.isdir(os.path.join(srcroot, ".git")):
    v = subprocess.check_output([
        "git --git-dir=" + os.path.join(srcroot, ".git") +
        " show -s --pretty=format:%h"
    ], shell=True, universal_newlines=True)
    v = v.strip()
    b = subprocess.check_output([
        "git --git-dir=" + os.path.join(srcroot, ".git") +
        " rev-parse --abbrev-ref HEAD"
    ], shell=True, universal_newlines=True)
    b = b.strip()
else:
    b = "NOGIT"
    v = "NOGIT"
vcsfn = os.path.join(srcroot, "include", "vcs_version.h")
try:
    i = open(vcsfn).readline()
except IOError:
    i = ""
if i != "/* " + v + " */":
    fo = open(vcsfn, "w")
    file_header(fo)
    fo.write('#define VCS_Version "%s"\n' % v)
    fo.write('#define VCS_Branch "%s"\n' % b)
    fo.close()
for i in open(os.path.join(srcroot, "Makefile")):
    if i[:14] == "PACKAGE_STRING":
        break
i = i.split("=")[1].strip()
fo = open(os.path.join(srcroot, "include", "vmod_abi.h"), "w")
file_header(fo)
fo.write('#define VMOD_ABI_Version "%s %s"\n' % (i, v))
fo.close()
| 21.920868 | 76 | 0.563652 |
43557fd7742f2677a16788aa7d452a905529433d | 69,659 | py | Python | arelle/FunctionXfi.py | ardenliu/Arelle | 02486d4021d21dd5d40e9c59be79b03e08cedc2b | [
"Apache-2.0"
] | null | null | null | arelle/FunctionXfi.py | ardenliu/Arelle | 02486d4021d21dd5d40e9c59be79b03e08cedc2b | [
"Apache-2.0"
] | null | null | null | arelle/FunctionXfi.py | ardenliu/Arelle | 02486d4021d21dd5d40e9c59be79b03e08cedc2b | [
"Apache-2.0"
] | null | null | null | '''
Created on Dec 20, 2010
@author: Mark V Systems Limited
(c) Copyright 2010 Mark V Systems Limited, All rights reserved.
'''
import xml.dom, datetime, re
from arelle import XPathContext, XbrlConst, XbrlUtil, XmlUtil
from arelle.ModelObject import ModelObject, ModelAttribute
from arelle.ModelValue import qname, QName, dateTime, DATE, DATETIME, DATEUNION, DateTime, dateUnionEqual, anyURI
from arelle.FunctionUtil import anytypeArg, stringArg, numericArg, qnameArg, nodeArg, atomicArg
from arelle.ModelXbrl import ModelXbrl
from arelle.ModelDtsObject import anonymousTypeSuffix, ModelConcept
from arelle.ModelInstanceObject import ModelDimensionValue, ModelFact, ModelInlineFact
from arelle.ModelFormulaObject import ModelFormulaResource
from arelle.PythonUtil import flattenSequence
from arelle.XmlValidate import UNKNOWN, VALID, validate as xmlValidate, NCNamePattern
from arelle.ValidateXbrlCalcs import inferredDecimals, inferredPrecision
from arelle.ValidateXbrlDimensions import priItemElrHcRels
from arelle.Locale import format_picture
from lxml import etree
from math import isnan, isinf
class xfiFunctionNotAvailable(Exception):
    def __init__(self):
        self.args = (_("xfi function not available"),)
    def __repr__(self):
        return self.args[0]
def call(xc, p, localname, args):
    try:
        if localname not in xfiFunctions: raise xfiFunctionNotAvailable
        return xfiFunctions[localname](xc, p, args)
    except xfiFunctionNotAvailable:
        raise XPathContext.FunctionNotAvailable("xfi:{0}".format(localname))
def instance(xc, p, args, i=0):
    if i >= len(args):  # missing argument means to use the standard input instance
        return xc.modelXbrl
    if len(args[i]) != 1:  # a sequence of instances isn't acceptable to these classes of functions
        raise XPathContext.FunctionArgType(i+1,"xbrl:xbrl")
    xbrliXbrl = anytypeArg(xc, args, i, "xbrli:xbrl")
    if isinstance(xbrliXbrl, ModelObject) and xbrliXbrl.elementQname == XbrlConst.qnXbrliXbrl:
        return xbrliXbrl.modelXbrl
    elif isinstance(xbrliXbrl, ModelXbrl):
        return xbrliXbrl
    raise XPathContext.FunctionArgType(i,"xbrl:xbrl")
def item(xc, args, i=0):
    if len(args[i]) != 1: raise XPathContext.FunctionArgType(i+1,"xbrl:item")
    modelItem = xc.modelItem(args[i][0])
    if modelItem is not None:
        return modelItem
    raise XPathContext.FunctionArgType(i,"xbrl:item")

def xbrlTuple(xc, args, i=0):
    # can't name this just tuple because then it hides tuple() constructor of Python
    if len(args[i]) != 1: raise XPathContext.FunctionArgType(i+1,"xbrl:tuple")
    modelTuple = args[i][0]
    if isinstance(modelTuple, (ModelFact, ModelInlineFact)) and modelTuple.isTuple:
        return modelTuple
    raise XPathContext.FunctionArgType(i,"xbrl:tuple")
def item_context(xc, args, i=0):
    return item(xc, args, i).context

def item_context_element(xc, args, name):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    context = item_context(xc, args)
    if context is not None:
        return XmlUtil.descendant(context, XbrlConst.xbrli, name)
    raise XPathContext.FunctionArgType(1,"xbrl:item")

def context(xc, p, args):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    return item_context(xc, args)
def unit(xc, p, args):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:item")
    modelItem = xc.modelItem(args[0][0])
    if modelItem is not None:
        modelConcept = modelItem.concept
        if modelConcept.isNumeric and not modelConcept.isFraction:
            return modelItem.unit
        return []
    raise XPathContext.FunctionArgType(1,"xbrl:item")
def unit_numerator(xc, p, args):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:unit")
    unit = args[0][0]
    if isinstance(unit,ModelObject) and \
       unit.localName == "unit" and unit.namespaceURI == XbrlConst.xbrli:
        measuresParent = XmlUtil.descendant(unit, XbrlConst.xbrli, "unitNumerator")
        if measuresParent is None: measuresParent = unit
        return XmlUtil.descendants(measuresParent, XbrlConst.xbrli, "measure")
    raise XPathContext.FunctionArgType(1,"xbrl:unit")

def unit_denominator(xc, p, args):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:unit")
    unit = args[0][0]
    if isinstance(unit,ModelObject) and \
       unit.localName == "unit" and unit.namespaceURI == XbrlConst.xbrli:
        measuresParent = XmlUtil.descendant(unit, XbrlConst.xbrli, "unitDenominator")
        if measuresParent is None: return []
        return XmlUtil.descendants(measuresParent, XbrlConst.xbrli, "measure")
    raise XPathContext.FunctionArgType(1,"xbrl:unit")
def measure_name(xc, p, args):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:measure")
    unit = args[0][0]
    if isinstance(unit,ModelObject) and \
       unit.localName == "measure" and unit.namespaceURI == XbrlConst.xbrli:
        return qname(unit, XmlUtil.text(unit))
    raise XPathContext.FunctionArgType(1,"xbrl:unit")
def period(xc, p, args):
    return item_context_element(xc, args, "period")

def context_period(xc, p, args):
    return parent_child(args, "context", "period")

def parent_child(args, parentName, childName, findDescendant=False):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:" + parentName)
    parent = args[0][0]
    if isinstance(parent,ModelObject) and \
       parent.localName == parentName and parent.namespaceURI == XbrlConst.xbrli:
        if childName.startswith('@'):
            return parent.get(childName[1:])
        elif childName == 'text()':
            return XmlUtil.textNotStripped(parent)
        elif childName == 'strip-text()':
            return XmlUtil.text(parent)
        elif findDescendant:
            return XmlUtil.descendant(parent, XbrlConst.xbrli, childName)
        else:
            return XmlUtil.child(parent, XbrlConst.xbrli, childName)
    raise XPathContext.FunctionArgType(1,"xbrl:" + parentName)
def is_start_end_period(xc, p, args):
    return is_period_type(args, "startDate")

def is_forever_period(xc, p, args):
    return is_period_type(args, "forever")

def is_duration_period(xc, p, args):
    return is_period_type(args, ("forever","startDate"))

def is_instant_period(xc, p, args):
    return is_period_type(args, "instant")

def is_period_type(args, periodElement):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:period")
    period = args[0][0]
    if isinstance(period,ModelObject) and \
       period.localName == "period" and period.namespaceURI == XbrlConst.xbrli:
        return XmlUtil.hasChild(period, XbrlConst.xbrli, periodElement)
    raise XPathContext.FunctionArgType(1,"xbrl:period")
def period_start(xc, p, args):
    return period_datetime(p, args, ("startDate","instant"))

def period_end(xc, p, args):
    return period_datetime(p, args, ("endDate","instant"))

def period_instant(xc, p, args):
    return period_datetime(p, args, "instant")

def period_datetime(p, args, periodElement):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:period")
    period = args[0][0]
    if (isinstance(period,ModelObject) and
        period.localName == "period" and period.namespaceURI == XbrlConst.xbrli):
        child = XmlUtil.child(period, XbrlConst.xbrli, periodElement)
        if child is not None:
            addOneDay = child.localName != "startDate"
            return dateTime(child, addOneDay=addOneDay, type=DATETIME)
        elif periodElement == "instant":
            raise XPathContext.XPathException(p, 'xfie:PeriodIsNotInstant', _('Period is not instant'))
        else:
            raise XPathContext.XPathException(p, 'xfie:PeriodIsForever', _('Period is forever'))
    raise XPathContext.FunctionArgType(1,"xbrl:period")
def entity(xc, p, args):
    return item_context_element(xc, args, "entity")

def context_entity(xc, p, args):
    return parent_child(args, "context", "entity")

def identifier(xc, p, args):
    return item_context_element(xc, args, "identifier")

def context_identifier(xc, p, args):
    return parent_child(args, "context", "identifier", True)

def entity_identifier(xc, p, args):
    return parent_child(args, "entity", "identifier")

def identifier_value(xc, p, args):
    return parent_child(args, "identifier", "strip-text()")

def identifier_scheme(xc, p, args):
    scheme = parent_child(args, "identifier", "@scheme")
    if scheme is None:
        return None
    return anyURI(scheme)

def fact_identifier_value(xc, p, args):
    return XmlUtil.text(item_context_element(xc, args, "identifier")).strip()

def fact_identifier_scheme(xc, p, args):
    scheme = item_context_element(xc, args, "identifier").get("scheme")
    if scheme is None:
        return None
    return anyURI(scheme)
def segment(xc, p, args):
    seg = item_context_element(xc, args, "segment")
    if seg is None:
        return ()  # no segment
    return seg

def entity_segment(xc, p, args):
    seg = parent_child(args, "entity", "segment")
    if seg is None:
        return ()  # no segment
    return seg

def context_segment(xc, p, args):
    seg = parent_child(args, "context", "segment", True)
    if seg is None:
        return ()  # no segment
    return seg

def scenario(xc, p, args):
    scen = item_context_element(xc, args, "scenario")
    if scen is None:
        return ()  # no scenario
    return scen

def context_scenario(xc, p, args):
    scen = parent_child(args, "context", "scenario")
    if scen is None:
        return ()  # no scenario
    return scen
def precision(xc, p, args):
    return infer_precision_decimals(xc, p, args, "precision")

def decimals(xc, p, args):
    return infer_precision_decimals(xc, p, args, "decimals")

def infer_precision_decimals(xc, p, args, attrName):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    if len(args[0]) != 1: raise XPathContext.FunctionArgType(1,"xbrl:item",args[0])
    modelItem = xc.modelItem(args[0][0])
    if modelItem is None:
        raise XPathContext.FunctionArgType(1,"xbrl:item")
    modelConcept = modelItem.concept
    if modelConcept.isFraction:
        return 'INF'
    if modelConcept.isNumeric:
        p = inferredPrecision(modelItem) if attrName == "precision" else inferredDecimals(modelItem)
        if isinf(p):
            return 'INF'
        if isnan(p):
            raise XPathContext.XPathException(p, 'xfie:ItemIsNotNumeric', _('Argument 1 {0} is not inferrable.').format(attrName))
        return p
    raise XPathContext.XPathException(p, 'xfie:ItemIsNotNumeric', _('Argument 1 is not reported with {0}.').format(attrName))
def numeric(xc, p, args):
    return conceptProperty(xc, p, args, "numeric")

def non_numeric(xc, p, args):
    return conceptProperty(xc, p, args, "non-numeric")

def fraction(xc, p, args):
    return conceptProperty(xc, p, args, "fraction")

def conceptProperty(xc, p, args, property):
    if len(args) != 1: raise XPathContext.FunctionNumArgs()
    qn = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
    if qn:
        modelConcept = xc.modelXbrl.qnameConcepts.get(qn)
        if modelConcept is not None:
            if property == "numeric": return modelConcept.isNumeric or modelConcept.isFraction
            if property == "non-numeric": return modelConcept.isItem and not (modelConcept.isNumeric or modelConcept.isFraction)
            if property == "fraction": return modelConcept.isFraction
    return False
def checkXffFunctionUse(xc, p, functionName):
# check function use after checking argument types
if xc.progHeader is not None and xc.progHeader.element is not None:
try:
modelResourceElt = xc.progHeader.element._modelResourceElt
except AttributeError:
modelResourceElt = xc.progHeader.element
while (modelResourceElt is not None and not isinstance(modelResourceElt, ModelFormulaResource)):
modelResourceElt = modelResourceElt.getparent()
xc.progHeader.element._modelResourceElt = modelResourceElt
if (modelResourceElt is None or
modelResourceElt.localName not in ("formula", "consistencyAssertion", "valueAssertion", "precondition", "message")):
raise XPathContext.XPathException(p, 'xffe:invalidFunctionUse', _('Function xff:uncovered-aspect cannot be used on an XPath expression associated with a {0}').format(xc.progHeader.element.localName))
if xc.variableSet is not None and xc.variableSet.implicitFiltering == "false":
        raise XPathContext.XPathException(p, 'xffe:invalidFunctionUse', _('Function xff:{0} cannot be used with implicitFiltering=false').format(functionName))
def uncovered_aspect(xc, p, args):
from arelle.ModelFormulaObject import aspectFromToken, Aspect
from arelle.FormulaEvaluator import uncoveredAspectValue
if len(args) not in (1,2): raise XPathContext.FunctionNumArgs()
aspect = aspectFromToken.get(stringArg(xc, args, 0, "xs:token").strip())
if aspect == Aspect.DIMENSIONS:
qn = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
checkXffFunctionUse(xc, p, "uncovered-aspect")
if aspect == Aspect.DIMENSIONS:
if qn:
modelConcept = xc.modelXbrl.qnameConcepts.get(qn)
if modelConcept is not None and modelConcept.isDimensionItem:
aspect = qn
else:
return () # not a dimension
dimValue = uncoveredAspectValue(xc, aspect)
if isinstance(dimValue, ModelDimensionValue):
if dimValue.isExplicit:
return dimValue.memberQname
elif dimValue.isTyped:
return dimValue # return the typedMember element, not its contents
elif isinstance(dimValue, QName): # qname for explicit or node for typed
return dimValue
return ()
aspectValue = uncoveredAspectValue(xc, aspect)
if aspectValue is None:
return ()
return aspectValue
def has_fallback_value(xc, p, args):
from arelle.FormulaEvaluator import variableBindingIsFallback
if len(args) != 1: raise XPathContext.FunctionNumArgs()
variableQname = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
checkXffFunctionUse(xc, p, "has-fallback-value")
return variableBindingIsFallback(xc, variableQname)
def uncovered_non_dimensional_aspects(xc, p, args):
return uncovered_aspects(xc, p, args, dimensionAspects=False)
def uncovered_dimensional_aspects(xc, p, args):
return uncovered_aspects(xc, p, args, dimensionAspects=True)
def uncovered_aspects(xc, p, args, dimensionAspects=False):
from arelle.ModelFormulaObject import aspectToToken, Aspect
from arelle.FormulaEvaluator import uncoveredVariableSetAspects
if len(args) != 0: raise XPathContext.FunctionNumArgs()
# check function use after checking argument types
if xc.progHeader is not None and xc.progHeader.element is not None:
if xc.progHeader.element.localName not in ("formula", "consistencyAssertion", "valueAssertion", "message"):
raise XPathContext.XPathException(p, 'xffe:invalidFunctionUse', _('Function xff:uncovered-aspect cannot be used on an XPath expression associated with a {0}').format(xc.progHeader.element.localName))
if xc.variableSet is not None and xc.variableSet.implicitFiltering == "false":
raise XPathContext.XPathException(p, 'xffe:invalidFunctionUse', _('Function xff:uncovered-aspect cannot be used with implicitFiltering=false'))
uncoveredAspects = uncoveredVariableSetAspects(xc)
return [(a if dimensionAspects else aspectToToken.get(a))
for a in uncoveredAspects if a != Aspect.DIMENSIONS and isinstance(a,QName) == dimensionAspects ]
def nodesEqual(xc, args, test, mustBeItems=False, nonItemErrCode=None):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
seq1 = flattenSequence(args[0])
seq2 = flattenSequence(args[1])
for i, node1 in enumerate(seq1):
try:
node2 = seq2[i]
if not isinstance(node1, (ModelObject,ModelAttribute)):
raise XPathContext.FunctionArgType(1,"node()*")
if not isinstance(node2, (ModelObject,ModelAttribute)):
raise XPathContext.FunctionArgType(2,"node()*")
if mustBeItems:
if not isinstance(node1, (ModelFact, ModelInlineFact)) or not node1.isItem:
raise XPathContext.FunctionArgType(1,"xbrl:item*", errCode=nonItemErrCode)
if not isinstance(node2, (ModelFact, ModelInlineFact)) or not node2.isItem:
raise XPathContext.FunctionArgType(2,"xbrl:item*", errCode=nonItemErrCode)
if not test(node1, node2):
return False
except IndexError:
return False
return True
def setsEqual(xc, args, test, mustBeItems=False):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
seq1 = flattenSequence(args[0])
seq2 = flattenSequence(args[1])
for node1 in seq1:
if not isinstance(node1, ModelObject):
raise XPathContext.FunctionArgType(1,"node()*")
if mustBeItems and (not isinstance(node1, (ModelFact, ModelInlineFact)) or not node1.isItem):
raise XPathContext.FunctionArgType(1,"xbrl:item*", errCode="xfie:NodeIsNotXbrlItem")
for node2 in seq2:
if not isinstance(node2, ModelObject):
raise XPathContext.FunctionArgType(2,"node()*")
if mustBeItems and (not isinstance(node2, (ModelFact, ModelInlineFact)) or not node2.isItem):
raise XPathContext.FunctionArgType(2,"xbrl:item*", errCode="xfie:NodeIsNotXbrlItem")
if len(set(seq1)) != len(set(seq2)): # sequences can have nondistinct duplicates, just same set lengths needed
return False
for node1 in seq1:
if not any(test(node1, node2) for node2 in seq2):
return False
return True
def identical_nodes(xc, p, args):
return nodesEqual(xc, args, identical_nodes_test)
def identical_nodes_test(node1, node2):
return node1 == node2
def s_equal(xc, p, args):
return nodesEqual(xc, args, s_equal_test)
def s_equal_test(node1, node2):
if (isinstance(node1, (ModelFact, ModelInlineFact)) and node1.isItem and
isinstance(node2, (ModelFact, ModelInlineFact)) and node2.isItem):
return (c_equal_test(node1, node2) and u_equal_test(node1, node2) and
XbrlUtil.xEqual(node1, node2) and
# must be validated (by xEqual) before precision tests to assure xAttributes is set
node1.xAttributes.get("precision") == node2.xAttributes.get("precision") and
node1.xAttributes.get("decimals") == node2.xAttributes.get("decimals"))
elif isinstance(node1, ModelObject):
if isinstance(node2, ModelObject):
return XbrlUtil.sEqual(node1.modelXbrl, node1, node2, excludeIDs=XbrlUtil.TOP_IDs_EXCLUDED, dts2=node2.modelXbrl)
else:
return False
elif isinstance(node1, ModelAttribute):
if isinstance(node2, ModelAttribute):
return node1.text == node2.text
return False
def u_equal(xc, p, args):
return nodesEqual(xc, args, u_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def u_equal_test(modelItem1, modelItem2):
modelUnit1 = modelItem1.unit
modelUnit2 = modelItem2.unit
if modelUnit1 is None:
return modelUnit2 is None
else:
return modelUnit1.isEqualTo(modelUnit2)
def v_equal(xc, p, args):
return nodesEqual(xc, args, v_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def v_equal_test(modelItem1, modelItem2):
return modelItem1.isVEqualTo(modelItem2)
def c_equal(xc, p, args):
return nodesEqual(xc, args, c_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def c_equal_test(modelItem1, modelItem2):
modelCntx1 = modelItem1.context
modelCntx2 = modelItem2.context
if modelCntx1 is None:
return modelCntx2 is None
else:
return modelCntx1.isEqualTo(modelCntx2,dimensionalAspectModel=False)
def identical_node_set(xc, p, args):
return setsEqual(xc, args, identical_nodes_test)
def s_equal_set(xc, p, args):
return setsEqual(xc, args, s_equal_test)
def v_equal_set(xc, p, args):
return setsEqual(xc, args, v_equal_test, mustBeItems=True)
def c_equal_set(xc, p, args):
return setsEqual(xc, args, c_equal_test, mustBeItems=True)
def u_equal_set(xc, p, args):
return setsEqual(xc, args, u_equal_test, mustBeItems=True)
def x_equal(xc, p, args):
return nodesEqual(xc, args, x_equal_test)
def x_equal_test(node1, node2):
if isinstance(node1, ModelObject):
if isinstance(node2, ModelObject):
return XbrlUtil.xEqual(node1, node2)
else:
return False
elif isinstance(node1, ModelAttribute):
if isinstance(node2, ModelAttribute):
return node1.sValue == node2.sValue
return False
def duplicate_item(xc, p, args):
    if len(args) != 2: raise XPathContext.FunctionNumArgs()
    node1 = item(xc, args, 0)
node2 = item(xc, args, 1)
if node1.isItem and node2.isItem:
return node1.isDuplicateOf(node2)
return False
def duplicate_tuple(xc, p, args):
    if len(args) != 2: raise XPathContext.FunctionNumArgs()
    node1 = xbrlTuple(xc, args, 0)
node2 = xbrlTuple(xc, args, 1)
return duplicate_tuple_test(node1, node2)
def duplicate_tuple_test(node1, node2, topLevel=True):
if node1.isTuple and node2.isTuple:
return node1.isDuplicateOf(node2)
return False
def p_equal(xc, p, args):
return nodesEqual(xc, args, p_equal_test)
def p_equal_test(node1, node2):
if not isinstance(node1, (ModelFact, ModelInlineFact)) or not (node1.isItem or node1.isTuple):
raise XPathContext.FunctionArgType(1,"xbrli:item or xbrli:tuple", errCode="xfie:ElementIsNotXbrlConcept")
    if not isinstance(node2, (ModelFact, ModelInlineFact)) or not (node2.isItem or node2.isTuple):
raise XPathContext.FunctionArgType(2,"xbrli:item or xbrli:tuple", errCode="xfie:ElementIsNotXbrlConcept")
return node1.parentElement == node2.parentElement
def cu_equal(xc, p, args):
return nodesEqual(xc, args, cu_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def cu_equal_test(modelItem1, modelItem2):
return c_equal_test(modelItem1, modelItem2) and u_equal_test(modelItem1, modelItem2)
def pc_equal(xc, p, args):
return nodesEqual(xc, args, pc_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def pc_equal_test(modelItem1, modelItem2):
return p_equal_test(modelItem1, modelItem2) and c_equal_test(modelItem1, modelItem2)
def pcu_equal(xc, p, args):
return nodesEqual(xc, args, pcu_equal_test, mustBeItems=True, nonItemErrCode="xfie:NodeIsNotXbrlItem")
def pcu_equal_test(modelItem1, modelItem2):
return p_equal_test(modelItem1, modelItem2) and c_equal_test(modelItem1, modelItem2) and u_equal_test(modelItem1, modelItem2)
def start_equal(xc, p, args):
return date_equal_test(xc, p, args, False)
def end_equal(xc, p, args):
return date_equal_test(xc, p, args, True)
def date_equal_test(xc, p, args, instantEndDate):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
date1 = atomicArg(xc, p, args, 0, "xbrldi:dateUnion", missingArgFallback=(), emptyFallback=None)
if not isinstance(date1, (DateTime,datetime.date)):
raise XPathContext.FunctionArgType(1,"xbrldi:dateUnion")
date2 = atomicArg(xc, p, args, 1, "xbrldi:dateUnion", missingArgFallback=(), emptyFallback=None)
    if not isinstance(date2, (DateTime,datetime.date)):
raise XPathContext.FunctionArgType(2,"xbrldi:dateUnion")
return dateUnionEqual(date1, date2, instantEndDate)
def nodes_correspond(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
node1 = nodeArg(xc, args, 0, "node()?", missingArgFallback=(), emptyFallback=())
node2 = nodeArg(xc, args, 1, "node()?", missingArgFallback=(), emptyFallback=())
if node1 == ():
if node2 == (): return True
return False
if node2 == (): return False
return XbrlUtil.nodesCorrespond(xc.modelXbrl, node1, node2, xc.modelXbrl)
def facts_in_instance(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args)
return inst.factsInInstance
def items_in_instance(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args)
return [f for f in inst.factsInInstance if f.isItem]
def tuples_in_instance(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args)
return [f for f in inst.factsInInstance if f.isTuple]
def items_in_tuple(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
parentTuple = xbrlTuple(xc, args, 0)
return [f for f in parentTuple.modelTupleFacts if f.isItem]
def tuples_in_tuple(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
parentTuple = xbrlTuple(xc, args, 0)
return [f for f in parentTuple.modelTupleFacts if f.isTuple]
def non_nil_facts_in_instance(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args)
return [f for f in inst.factsInInstance if (f.isItem or f.isTuple) and not f.isNil]
def concept(xc, p, args):
qnConcept = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
srcConcept = xc.modelXbrl.qnameConcepts.get(qnConcept)
if srcConcept is None or not (srcConcept.isItem or srcConcept.isTuple) or srcConcept.qname is None or srcConcept.qname.namespaceURI == XbrlConst.xbrli:
raise XPathContext.XPathException(p, 'xfie:invalidConceptQName', _('Argument 1 {0} is not a concept in the DTS.').format(qnConcept))
return srcConcept
def concept_balance(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
balance = concept(xc,p,args).get("{http://www.xbrl.org/2003/instance}balance")
if balance is None:
balance = ""
return balance
def concept_period_type(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
return concept(xc,p,args).get("{http://www.xbrl.org/2003/instance}periodType")
def concept_custom_attribute(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
qnAttr = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
if qnAttr is None: raise XPathContext.FunctionArgType(2,"xs:QName")
element = concept(xc,p,args)
return element_attribute(element, qnAttr)
def concept_data_type(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
typeQname = concept(xc,p,args).typeQname
if typeQname is None or typeQname.localName.endswith(anonymousTypeSuffix):
return ()
return typeQname
def concept_data_type_derived_from(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
qnType = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
if qnType is None: raise XPathContext.FunctionArgType(2,"xs:QName")
return concept(xc,p,args).instanceOfType(qnType)
def concept_substitutions(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
return concept(xc,p,args).substitutionGroupQnames
def concepts_from_local_name(xc, p, args):
if not 1 <= len(args) <= 2: raise XPathContext.FunctionNumArgs()
localName = stringArg(xc, args, 0, "xs:string")
if len(args) == 2:
nsPattern = re.compile(stringArg(xc, args, 1, "xs:string"))
return [c.qname for c in xc.modelXbrl.nameConcepts.get(localName,())
if (c.isItem or c.isTuple) and bool(nsPattern.search(c.qname.namespaceURI))]
else:
return [c.qname for c in xc.modelXbrl.nameConcepts.get(localName,())
if c.isItem or c.isTuple]
def concepts_from_local_name_pattern(xc, p, args):
if not 1 <= len(args) <= 2: raise XPathContext.FunctionNumArgs()
localNamePattern = re.compile(stringArg(xc, args, 0, "xs:string"))
if len(args) == 2:
nsPattern = re.compile(stringArg(xc, args, 1, "xs:string"))
return [c.qname for c in xc.modelXbrl.qnameConcepts.values()
if (c.isItem or c.isTuple) and bool(localNamePattern.search(c.name)) and bool(nsPattern.search(c.qname.namespaceURI))]
else:
return [c.qname for c in xc.modelXbrl.qnameConcepts.values()
if (c.isItem or c.isTuple) and bool(localNamePattern.search(c.name))]
def filter_member_network_selection(xc, p, args):
if len(args) != 5: raise XPathContext.FunctionNumArgs()
qnDim = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
qnMem = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
linkroleURI = stringArg(xc, args, 2, "xs:string")
arcroleURI = stringArg(xc, args, 3, "xs:string")
axis = stringArg(xc, args, 4, "xs:string")
    if axis not in ('descendant-or-self', 'child-or-self', 'descendant', 'child'):
return ()
dimConcept = xc.modelXbrl.qnameConcepts.get(qnDim)
if dimConcept is None or not dimConcept.isDimensionItem:
raise XPathContext.XPathException(p, 'xfie:invalidDimensionQName', _('Argument 1 {0} is not a dimension concept QName.').format(qnDim))
memConcept = xc.modelXbrl.qnameConcepts.get(qnMem)
if memConcept is None or not memConcept.isDomainMember:
# removed error 2011-03-10: raise XPathContext.XPathException(p, 'xfie:unrecognisedExplicitDimensionValueQName', _('Argument 1 {0} is not a member concept QName.').format(qnMem))
return ()
relationshipSet = xc.modelXbrl.relationshipSet(arcroleURI, linkroleURI)
if relationshipSet is not None:
members = set()
''' removed 2011-03-10:
linkQnames = set()
arcQnames = set()
'''
if axis.endswith("-or-self"):
members.add(qnMem)
fromRels = relationshipSet.fromModelObject(memConcept)
if fromRels is not None:
filter_member_network_members(relationshipSet, fromRels, axis.startswith("descendant"), members=members)
''' removed 2011-03-10:
if len(linkQnames) > 1 or len(arcQnames) > 1:
raise XPathContext.XPathException(p, 'xfie:ambiguousFilterMemberNetwork',
_('Network of linkrole {0} and arcrole {1} dimension {2} from {3} is ambiguous because of multiple link elements, {4}, or arc elements {5}').format(
linkroleURI, arcroleURI, qnDim, qnMem, linkQnames, arcQnames))
'''
return members
# no fromRels, must be a toRel or else the qname is not in the member network
if relationshipSet.toModelObject(memConcept):
return members # valid situation, the member exists as a leaf node
# removed error 2011-03-10: raise XPathContext.XPathException(p, 'xfie:unrecognisedExplicitDimensionValueQName', _('Argument 1 {0} member is not in the network.').format(qnMem))
return ()
def filter_member_network_members(relationshipSet, fromRels, recurse, members=None, relationships=None, linkQnames=None, arcQnames=None):
if members is None:
members = set()
for modelRel in fromRels:
toConcept = modelRel.toModelObject
toConceptQname = toConcept.qname
if linkQnames is not None:
linkQnames.add(modelRel.linkQname)
if arcQnames is not None:
arcQnames.add(modelRel.qname)
if toConceptQname not in members:
members.add(toConceptQname)
if relationships is not None:
relationships.add(modelRel)
if recurse:
filter_member_network_members(relationshipSet, relationshipSet.fromModelObject(toConcept), recurse, members, relationships, linkQnames, arcQnames)
def filter_member_DRS_selection(xc, p, args):
if len(args) != 5: raise XPathContext.FunctionNumArgs()
qnDim = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
qnPriItem = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
qnMem = qnameArg(xc, p, args, 2, 'QName', emptyFallback=None)
linkroleURI = stringArg(xc, args, 3, "xs:string")
if not linkroleURI: # '' or ()
linkroleURI = None # select all ELRs
axis = stringArg(xc, args, 4, "xs:string")
    if axis not in ('DRS-descendant', 'DRS-child'):
return ()
memSelectionQnames = set()
dimConcept = xc.modelXbrl.qnameConcepts.get(qnDim)
if dimConcept is None or dimConcept.qname is None or dimConcept.qname.namespaceURI == XbrlConst.xbrli:
raise XPathContext.XPathException(p, 'xfie:invalidDimensionQName', _('Argument 1 {0} is not in the DTS.').format(qnDim))
elif not dimConcept.isDimensionItem:
raise XPathContext.XPathException(p, 'xfie:invalidDimensionQName', _('Argument 1 {0} is not a dimension.').format(qnDim))
priItemConcept = xc.modelXbrl.qnameConcepts.get(qnPriItem)
if priItemConcept is None or priItemConcept.qname is None or priItemConcept.qname.namespaceURI == XbrlConst.xbrli:
raise XPathContext.XPathException(p, 'xfie:invalidPrimaryItemConceptQName', _('Argument 2 {0} is not in the DTS.').format(qnPriItem))
elif not priItemConcept.isPrimaryItem:
raise XPathContext.XPathException(p, 'xfie:invalidPrimaryItemConceptQName', _('Argument 2 {0} is not a primary item.').format(qnPriItem))
memConcept = xc.modelXbrl.qnameConcepts.get(qnMem)
if memConcept is None or not memConcept.isDomainMember or not dimConcept.isDimensionItem:
# not an error, just don't find anything
return ()
for hcELR, hcRels in priItemElrHcRels(xc, priItemConcept, linkroleURI).items():
if not linkroleURI or linkroleURI == hcELR:
for hasHcRel in hcRels:
hcConcept = hasHcRel.toModelObject
if hasHcRel.arcrole == XbrlConst.all:
dimELR = (hasHcRel.targetRole or hcELR)
for hcDimRel in xc.modelXbrl.relationshipSet(XbrlConst.hypercubeDimension, dimELR).fromModelObject(hcConcept):
if dimConcept == hcDimRel.toModelObject:
filter_member_DRS_members(xc,
xc.modelXbrl.relationshipSet(XbrlConst.dimensionDomain,
(hcDimRel.targetRole or dimELR))
.fromModelObject(dimConcept),
axis,
memConcept,
False,
set(),
memSelectionQnames)
return memSelectionQnames
def filter_member_DRS_members(xc, fromRels, axis, memConcept, inSelection, visited, memSelectionQnames):
for rel in fromRels:
toConcept = rel.toModelObject
toConceptQname = toConcept.qname
nestedSelection = inSelection
if rel.fromModelObject == memConcept or inSelection: # from is the asked-for parent
memSelectionQnames.add(toConceptQname) # to is a child or descendant
nestedSelection = True
if toConceptQname not in visited and (not nestedSelection or axis == "DRS-descendant"):
visited.add(toConcept)
filter_member_DRS_members(xc,
xc.modelXbrl.relationshipSet(XbrlConst.domainMember,
(rel.targetRole or rel.linkrole))
.fromModelObject(toConcept),
axis,
memConcept,
nestedSelection,
visited,
memSelectionQnames)
visited.discard(toConcept)
def dimension_default(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
qnDim = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
dimConcept = xc.modelXbrl.qnameConcepts.get(qnDim)
if dimConcept is None or dimConcept.qname is None or dimConcept.qname.namespaceURI == XbrlConst.xbrli:
raise XPathContext.XPathException(p, 'xfie:invalidDimensionQName', _('Argument 1 {0} is not in the DTS.').format(qnDim))
elif not dimConcept.isDimensionItem:
raise XPathContext.XPathException(p, 'xfie:invalidDimensionQName', _('Argument 1 {0} is not a dimension.').format(qnDim))
for dimDefRel in xc.modelXbrl.relationshipSet(XbrlConst.dimensionDefault).fromModelObject(dimConcept):
dimConcept = dimDefRel.toModelObject
if dimConcept is not None and dimConcept.isDomainMember:
return [dimConcept.qname]
return []
def fact_segment_remainder(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
context = item_context(xc, args)
if context is not None:
return context.segNonDimValues
raise XPathContext.FunctionArgType(1,"xbrl:item")
def fact_scenario_remainder(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
context = item_context(xc, args)
if context is not None:
return context.scenNonDimValues
raise XPathContext.FunctionArgType(1,"xbrl:item")
def fact_dim_value(xc, p, args, dimType):
context = item_context(xc, args)
qnDim = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
dimConcept = xc.modelXbrl.qnameConcepts.get(qnDim)
if dimConcept is None or not dimConcept.isDimensionItem:
raise XPathContext.XPathException(p,
'xfie:invalid{0}DimensionQName'.format(dimType),
_('Argument 1 {0} is not a dimension concept QName.').format(qnDim))
if context is not None:
return context.dimValue(qnDim)
raise XPathContext.FunctionArgType(1,"xbrl:item")
def fact_has_explicit_dimension(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
dimValue = fact_dim_value(xc, p, args, "Explicit")
return dimValue is not None and (isinstance(dimValue,QName) or
dimValue.isExplicit)
def fact_has_typed_dimension(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
dimValue = fact_dim_value(xc, p, args, "Typed")
return dimValue is not None and not isinstance(dimValue,QName) and dimValue.isTyped
def fact_explicit_dimension_value_value(xc, p, args):
context = item_context(xc, args)
if context is not None:
qn = qnameArg(xc, p, args, 1, 'QName', emptyFallback=())
if qn == (): raise XPathContext.FunctionArgType(2,"xbrl:QName")
dimConcept = xc.modelXbrl.qnameConcepts.get(qn) # check qname is explicit dimension
if dimConcept is None or not dimConcept.isExplicitDimension:
raise XPathContext.XPathException(p, 'xfie:invalidExplicitDimensionQName', _('dimension does not specify an explicit dimension'))
dimValue = context.dimValue(qn)
if isinstance(dimValue, ModelDimensionValue) and dimValue.isExplicit:
return dimValue.memberQname # known to be valid given instance is valid
elif isinstance(dimValue, QName): #default, check if this is valid
''' removed 2011-03-01 FWG clarification that default always applies
#note that there's no way to check one dimension without full set of others for validity
modelItem = xc.modelItem(args[0][0])
itemConcept = modelItem.concept
from arelle.ValidateXbrlDimensions import checkPriItemDimValueValidity
memConcept = xc.modelXbrl.qnameConcepts.get(dimValue)
# remove check for pri item validity per FWG meeting notes 2011-01-13
if itemConcept: # and checkPriItemDimValueValidity(xc, itemConcept, dimConcept, memConcept):
return dimValue
'''
return dimValue
return () # not an applicable primary item for default dimension
raise XPathContext.FunctionArgType(1,"xbrl:item")
def fact_has_explicit_dimension_value(xc, p, args):
if len(args) != 3: raise XPathContext.FunctionNumArgs()
return qnameArg(xc, p, args, 2, 'QName', emptyFallback=()) == fact_explicit_dimension_value_value(xc, p, args)
def fact_explicit_dimension_value(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
return fact_explicit_dimension_value_value(xc, p, args)
def fact_typed_dimension_value(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
context = item_context(xc, args)
if context is not None:
qn = qnameArg(xc, p, args, 1, 'QName', emptyFallback=())
if qn == (): raise XPathContext.FunctionArgType(2,"xbrl:QName")
        modelConcept = xc.modelXbrl.qnameConcepts.get(qn) # check qname is a typed dimension
if modelConcept is None or not modelConcept.isTypedDimension:
raise XPathContext.XPathException(p, 'xfie:invalidTypedDimensionQName', _('dimension does not specify a typed dimension'))
result = context.dimValue(qn)
return result if result is not None else ()
raise XPathContext.FunctionArgType(1,"xbrl:item")
def fact_explicit_dimensions(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
context = item_context(xc, args)
if context is not None:
return set(qn for qn, dim in context.qnameDims.items() if dim.isExplicit) | _DICT_SET(xc.modelXbrl.qnameDimensionDefaults.keys())
return set()
def fact_typed_dimensions(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
context = item_context(xc, args)
if context is not None:
return set(qn for qn, dim in context.qnameDims.items() if dim.isTyped)
return set()
def fact_dimension_s_equal2(xc, p, args):
if len(args) != 3: raise XPathContext.FunctionNumArgs()
context1 = item_context(xc, args, i=0)
context2 = item_context(xc, args, i=1)
if context1 is not None:
if context2 is not None:
qn = qnameArg(xc, p, args, 2, 'QName', emptyFallback=())
if qn == (): raise XPathContext.FunctionArgType(3,"xbrl:QName")
            modelConcept = xc.modelXbrl.qnameConcepts.get(qn) # check qname is a dimension
if modelConcept is None or not modelConcept.isDimensionItem:
# raise XPathContext.XPathException(p, 'xfie:invalidTypedDimensionQName', _('dimension does not specify a typed dimension'))
return False
dimValue1 = context1.dimValue(qn)
dimValue2 = context2.dimValue(qn)
if dimValue1 is not None and isinstance(dimValue1,ModelDimensionValue):
return dimValue1.isEqualTo(dimValue2, equalMode=XbrlUtil.S_EQUAL2)
elif dimValue2 is not None and isinstance(dimValue2,ModelDimensionValue):
return dimValue2.isEqualTo(dimValue1, equalMode=XbrlUtil.S_EQUAL2)
return dimValue1 == dimValue2
raise XPathContext.FunctionArgType(2,"xbrl:item")
raise XPathContext.FunctionArgType(1,"xbrl:item")
def linkbase_link_roles(xc, p, args):
if len(args) > 2: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 1)
arcroleURI = stringArg(xc, args, 0, "xs:string")
relationshipSet = inst.relationshipSet(arcroleURI)
if relationshipSet:
return [anyURI(linkrole) for linkrole in relationshipSet.linkRoleUris]
return ()
def navigate_relationships(xc, p, args):
raise xfiFunctionNotAvailable()
def concept_label(xc, p, args):
if not 4 <= len(args) <= 5: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 4)
qnSource = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
srcConcept = inst.qnameConcepts.get(qnSource)
if srcConcept is None:
return ""
linkroleURI = stringArg(xc, args, 1, "xs:string", emptyFallback='')
if not linkroleURI: linkroleURI = XbrlConst.defaultLinkRole
labelroleURI = stringArg(xc, args, 2, "xs:string", emptyFallback='')
if not labelroleURI: labelroleURI = XbrlConst.standardLabel
lang = stringArg(xc, args, 3, "xs:string", emptyFallback='')
relationshipSet = inst.relationshipSet(XbrlConst.conceptLabel,linkroleURI)
if relationshipSet is not None:
label = relationshipSet.label(srcConcept, labelroleURI, lang)
if label is not None: return label
return ""
def arcrole_definition(xc, p, args):
if len(args) > 2: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 1)
arcroleURI = stringArg(xc, args, 0, "xs:string", emptyFallback='')
modelArcroleTypes = inst.arcroleTypes.get(arcroleURI)
if modelArcroleTypes is not None and len(modelArcroleTypes) > 0:
arcroledefinition = modelArcroleTypes[0].definition
if arcroledefinition is not None: return arcroledefinition
return ()
def role_definition(xc, p, args):
if len(args) > 2: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 1)
roleURI = stringArg(xc, args, 0, "xs:string", emptyFallback='')
modelRoleTypes = inst.roleTypes.get(roleURI)
if modelRoleTypes is not None and len(modelRoleTypes) > 0:
roledefinition = modelRoleTypes[0].definition
if roledefinition is not None: return roledefinition
return ()
def fact_footnotes(xc, p, args):
if len(args) > 6: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 5)
itemObj = item(xc, args)
linkroleURI = stringArg(xc, args, 1, "xs:string", emptyFallback='')
if not linkroleURI: linkroleURI = XbrlConst.defaultLinkRole
arcroleURI = stringArg(xc, args, 2, "xs:string", emptyFallback='')
if not arcroleURI: arcroleURI = XbrlConst.factFootnote
footnoteroleURI = stringArg(xc, args, 3, "xs:string", emptyFallback='')
if not footnoteroleURI: footnoteroleURI = XbrlConst.footnote
lang = stringArg(xc, args, 4, "xs:string", emptyFallback='')
relationshipSet = inst.relationshipSet(arcroleURI,linkroleURI)
if relationshipSet: # must return empty sequence, not None if no footnotes match filters
return relationshipSet.label(itemObj, footnoteroleURI, lang, returnMultiple=True) or ()
return ()
def concept_relationships(xc, p, args, nestResults=False):
lenArgs = len(args)
if not 4 <= lenArgs <= 8: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 7)
qnSource = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
linkroleURI = stringArg(xc, args, 1, "xs:string")
if not linkroleURI:
linkroleURI = XbrlConst.defaultLinkRole
elif linkroleURI == "XBRL-all-linkroles":
linkroleURI = None
arcroleURI = stringArg(xc, args, 2, "xs:string")
axis = stringArg(xc, args, 3, "xs:string")
    if axis not in ('descendant', 'child', 'ancestor', 'parent', 'sibling', 'sibling-or-self'):
raise XPathContext.FunctionArgType(3, "'descendant', 'child', 'ancestor', 'parent', 'sibling' or 'sibling-or-self'",
errCode="xfie:InvalidConceptRelationParameters")
if qnSource != XbrlConst.qnXfiRoot:
srcConcept = inst.qnameConcepts.get(qnSource)
if srcConcept is None:
return ()
if lenArgs > 4:
generations = numericArg(xc, p, args, 4, "xs:integer", convertFallback=0)
if axis in ('child', 'parent', 'sibling', 'sibling-or-self') and generations != 1:
raise XPathContext.FunctionArgType(4, "generations must be 1 for 'child', 'parent', 'sibling' or 'sibling-or-self' axis",
errCode="xfie:InvalidConceptRelationParameters")
elif axis in ('child', 'parent', 'sibling', 'sibling-or-self'):
generations = 1
else:
generations = 0
if axis == 'child':
axis = 'descendant'
elif axis == 'parent':
axis = 'ancestor'
if lenArgs > 5:
qnLink = qnameArg(xc, p, args, 5, 'QName', emptyFallback=None)
else:
qnLink = None
    if lenArgs > 6:
qnArc = qnameArg(xc, p, args, 6, 'QName', emptyFallback=None)
else:
qnArc = None
removeSelf = axis == 'sibling'
relationshipSet = inst.relationshipSet(arcroleURI, linkroleURI, qnLink, qnArc)
if relationshipSet:
result = []
visited = {qnSource}
if qnSource == XbrlConst.qnXfiRoot:
if axis in ('sibling', 'sibling-or-self', 'ancestor'):
return []
roots = relationshipSet.rootConcepts
visited = {c.qname for c in roots}
rels = [rel for c in roots for rel in relationshipSet.fromModelObject(c)]
if generations == 1:
return rels
if generations > 1:
generations -= 1
elif axis == 'descendant':
rels = relationshipSet.fromModelObject(srcConcept)
elif axis == 'ancestor': # includes first pass on parents of object to get sibling
rels = relationshipSet.toModelObject(srcConcept)
elif axis in ('sibling', 'sibling-or-self'):
rels = relationshipSet.toModelObject(srcConcept)
if rels:
rels = relationshipSet.fromModelObject(rels[0].fromModelObject)
axis = 'descendant'
else: # must be a root, never has any siblings
return []
if rels:
concept_relationships_step(xc, inst, relationshipSet, rels, axis, generations, result, visited, nestResults)
if removeSelf:
for i, rel in enumerate(result):
if rel.toModelObject == srcConcept:
result.pop(i)
break
return result
return ()
def concept_relationships_step(xc, inst, relationshipSet, rels, axis, generations, result, visited, nestResults):
if rels:
for modelRel in rels:
concept = modelRel.toModelObject if axis == 'descendant' else modelRel.fromModelObject
conceptQname = concept.qname
result.append(modelRel)
if generations > 1 or (generations == 0 and conceptQname not in visited):
nextGen = (generations - 1) if generations > 1 else 0
if generations == 0:
visited.add(conceptQname)
if axis == 'descendant':
if relationshipSet.arcrole == "XBRL-dimensions":
stepRelationshipSet = inst.relationshipSet("XBRL-dimensions", modelRel.consecutiveLinkrole)
else:
stepRelationshipSet = relationshipSet
stepRels = stepRelationshipSet.fromModelObject(concept)
else:
if relationshipSet.arcrole == "XBRL-dimensions":
stepRelationshipSet = inst.relationshipSet("XBRL-dimensions")
# search all incoming relationships for those with right consecutiveLinkrole
                        stepRels = [rel
                                    for rel in stepRelationshipSet.toModelObject(concept)
                                    if rel.consecutiveLinkrole == modelRel.linkrole]
else:
stepRelationshipSet = relationshipSet
stepRels = stepRelationshipSet.toModelObject(concept)
if nestResults: # nested results are in a sub-list
nestedList = []
else: # nested results flattened in top level results
nestedList = result
concept_relationships_step(xc, inst, stepRelationshipSet, stepRels, axis, nextGen, nestedList, visited, nestResults)
if nestResults and nestedList: # don't append empty nested results
result.append(nestedList)
if generations == 0:
visited.discard(conceptQname)
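# The recursion above is a depth-first walk with a `generations` countdown
# (0 meaning unbounded) and a `visited` set guarding against cycles. A
# minimal, self-contained sketch of that pattern over a plain adjacency
# dict (all names here are illustrative, not Arelle API):

```python
# Depth-first walk mirroring concept_relationships_step: generations > 1
# counts down a depth limit; generations == 0 walks without a limit but
# uses a visited set so cycles terminate.
def walk(graph, node, generations, result, visited):
    for child in graph.get(node, ()):
        result.append((node, child))
        if generations > 1 or (generations == 0 and child not in visited):
            next_gen = (generations - 1) if generations > 1 else 0
            if generations == 0:
                visited.add(child)
            walk(graph, child, next_gen, result, visited)
            if generations == 0:
                visited.discard(child)

graph = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}  # contains the cycle a->b->c->a

bounded = []
walk(graph, 'a', 2, bounded, set())       # at most two generations deep

unbounded = []
walk(graph, 'a', 0, unbounded, {'a'})     # unbounded, cycle-guarded
```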
def relationship_from_concept(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
modelRel = anytypeArg(xc, args, 0, "arelle:ModelRelationship", None)
if modelRel is not None:
return modelRel.fromModelObject.qname
raise XPathContext.FunctionArgType(1,"arelle:modelRelationship")
def relationship_to_concept(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
modelRel = anytypeArg(xc, args, 0, "arelle:ModelRelationship", None)
if modelRel is not None:
return modelRel.toModelObject.qname
raise XPathContext.FunctionArgType(1,"arelle:modelRelationship")
def distinct_nonAbstract_parent_concepts(xc, p, args):
lenArgs = len(args)
if not 2 <= lenArgs <= 3: raise XPathContext.FunctionNumArgs()
inst = instance(xc, p, args, 2)
linkroleURI = stringArg(xc, args, 0, "xs:string")
if not linkroleURI:
linkroleURI = XbrlConst.defaultLinkRole
arcroleURI = stringArg(xc, args, 1, "xs:string")
# TBD allow instance as arg 2
result = set()
relationshipSet = inst.relationshipSet(arcroleURI, linkroleURI)
if relationshipSet:
for rel in relationshipSet.modelRelationships:
fromModelObject = rel.fromModelObject
toModelObject = rel.toModelObject
if (isinstance(fromModelObject, ModelConcept) and
isinstance(toModelObject, ModelConcept) and
not fromModelObject.isAbstract and
not toModelObject.isAbstract):
result.add(fromModelObject.qname)
return result
def relationship_element_attribute(xc, p, args, elementParent=False):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
modelRel = anytypeArg(xc, args, 0, "arelle:ModelRelationship", None)
if modelRel is None: raise XPathContext.FunctionArgType(1,"arelle:modelRelationship")
qnAttr = qnameArg(xc, p, args, 1, 'QName', emptyFallback=None)
if qnAttr is None: raise XPathContext.FunctionArgType(2,"xs:QName")
element = modelRel.arcElement
if elementParent: element = element.getparent()
return element_attribute(element, qnAttr)
def element_attribute(element, attrQname):
attrTag = attrQname.clarkNotation
modelAttribute = None
try:
modelAttribute = element.xAttributes[attrTag]
except (AttributeError, TypeError, IndexError, KeyError):
# may be lax or deferred validated
try:
xmlValidate(element.modelXbrl, element, attrQname)
modelAttribute = element.xAttributes[attrTag]
except (AttributeError, TypeError, IndexError, KeyError):
pass
if modelAttribute is None:
value = element.get(attrTag)
if value is not None:
return value
elif modelAttribute.xValid >= VALID:
return modelAttribute.xValue
return ()
def relationship_attribute(xc, p, args):
return relationship_element_attribute(xc, p, args)
def relationship_link_attribute(xc, p, args):
return relationship_element_attribute(xc, p, args, elementParent=True)
def element_name(xc, p, args, elementParent=False):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
modelRel = anytypeArg(xc, args, 0, "arelle:ModelRelationship", None)
if modelRel is None: raise XPathContext.FunctionArgType(1,"arelle:modelRelationship")
element = modelRel.arcElement
if elementParent: element = element.getparent()
return qname(element)
def relationship_name(xc, p, args):
return element_name(xc, p, args)
def relationship_link_name(xc, p, args):
return element_name(xc, p, args, elementParent=True)
def xbrl_instance(xc, p, args):
raise xfiFunctionNotAvailable()
def format_number(xc, p, args):
if len(args) != 2: raise XPathContext.FunctionNumArgs()
value = numericArg(xc, p, args, 0, missingArgFallback='NaN', emptyFallback='NaN')
picture = stringArg(xc, args, 1, "xs:string", missingArgFallback='', emptyFallback='')
try:
return format_picture(xc.modelXbrl.locale, value, picture)
except ValueError as err:
raise XPathContext.XPathException(p, 'xfie:invalidPictureSyntax', str(err) )
# note that this function was initially in plugin functionsXmlCreation when it was named xfxc:element
def create_element(xc, p, args):
if not 2 <= len(args) <= 4: raise XPathContext.FunctionNumArgs()
qn = qnameArg(xc, p, args, 0, 'QName', emptyFallback=None)
attrArg = flattenSequence(args[1])
# attributes have to be pairs
if attrArg:
if (len(attrArg) & 1 or
any((not isinstance(arg, (QName, _STR_BASE))) or
(isinstance(arg,_STR_BASE) and NCNamePattern.match(arg) is None)
for i in range(0, len(attrArg),2)
for arg in (attrArg[i],))):
raise XPathContext.FunctionArgType(1,"((xs:qname|xs:string),xs:anyAtomicValue)", errCode="xfie:AttributesNotNameValuePairs")
else:
attrParam = [(attrArg[i],attrArg[i+1]) # need name-value pairs for XmlUtil function
for i in range(0, len(attrArg),2)]
else:
attrParam = None
value = atomicArg(xc, p, args, 2, "xs:anyAtomicType", emptyFallback='')
if not value: # be sure '' is None so no text node is created
value = None
if len(args) < 4:
childElements = None
else:
childElements = xc.flattenSequence(args[3])
if value and childElements:
raise XPathContext.FunctionArgType(1,str(value), errCode="xfie:MixedContentError")
# scratchpad instance document emulates fn:doc( ) to hold XML nodes
scratchpadXmlDocUrl = "http://www.xbrl.org/2012/function/creation/xml_scratchpad.xml"
if scratchpadXmlDocUrl in xc.modelXbrl.urlDocs:
modelDocument = xc.modelXbrl.urlDocs[scratchpadXmlDocUrl]
else:
# create scratchpad xml document
# this will get the fake instance document in the list of modelXbrl docs so that it is garbage collected
from arelle import ModelDocument
modelDocument = ModelDocument.create(xc.modelXbrl,
ModelDocument.Type.UnknownXML,
scratchpadXmlDocUrl,
initialXml="<xfc:dummy xmlns:xfc='http://www.xbrl.org/2012/function/creation'/>")
newElement = XmlUtil.addChild(modelDocument.xmlRootElement,
qn,
attributes=attrParam,
text=value)
if childElements:
for element in childElements:
if isinstance(element, etree.ElementBase):
newElement.append(element)
    # node must be validated for use in instance creation (typed dimension references)
xmlValidate(xc.modelXbrl, newElement)
return newElement
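# create_element requires its attribute argument to flatten to an
# even-length sequence that regroups into (name, value) pairs. A
# standalone sketch of that parity check and pairing (the helper name is
# hypothetical, not Arelle API):

```python
# Regroup a flat [name, value, name, value, ...] sequence into pairs,
# rejecting odd-length input the same way create_element does.
def pair_attributes(flat):
    if len(flat) & 1:  # odd length cannot form name-value pairs
        raise ValueError("attributes must be name-value pairs")
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 2)]

pairs = pair_attributes(['id', 'x1', 'unitRef', 'u1'])
```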
def any_identifier(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
for cntx in xc.modelXbrl.contextsInUse:
return cntx.entityIdentifierElement
return ()
def unique_identifiers(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
distinctIdentifiers = {}
for cntx in xc.modelXbrl.contextsInUse:
if cntx.entityIdentifier not in distinctIdentifiers:
distinctIdentifiers[cntx.entityIdentifier] = cntx.entityIdentifierElement
return [e for k,e in sorted(distinctIdentifiers.items(), key=lambda i:i[0])]
def single_unique_identifier(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return len(set(cntx.entityIdentifier for cntx in xc.modelXbrl.contextsInUse)) == 1
def any_start_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
for cntx in xc.modelXbrl.contextsInUse:
if cntx.isStartEndPeriod:
return cntx.startDatetime
return ()
def unique_start_dates(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
distinctStartDates = set()
for cntx in xc.modelXbrl.contextsInUse:
if cntx.isStartEndPeriod:
distinctStartDates.add(cntx.startDatetime)
return [sorted(distinctStartDates, key=lambda d:(d.tzinfo is None,d))]
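# A short, self-contained demonstration of the sort key used above.
# Python raises TypeError when comparing a timezone-aware datetime with a
# naive one, so the key (d.tzinfo is None, d) groups aware datetimes
# (False) ahead of naive ones (True) and only compares within one group:

```python
# Sorting a mix of aware and naive datetimes with the grouping key.
import datetime

aware = datetime.datetime(2022, 5, 1, tzinfo=datetime.timezone.utc)
naive = datetime.datetime(2021, 1, 1)

ordered = sorted([naive, aware], key=lambda d: (d.tzinfo is None, d))
```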
def single_unique_start_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return len(set(cntx.startDatetime for cntx in xc.modelXbrl.contextsInUse if cntx.isStartEndPeriod)) == 1
def any_end_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
for cntx in xc.modelXbrl.contextsInUse:
if cntx.isStartEndPeriod:
return cntx.endDatetime
return ()
def unique_end_dates(xc, p, args):
    if len(args) != 0: raise XPathContext.FunctionNumArgs()
    distinctEndDates = set()
    for cntx in xc.modelXbrl.contextsInUse:
        if cntx.isStartEndPeriod:
            distinctEndDates.add(cntx.endDatetime)
    return [sorted(distinctEndDates, key=lambda d:(d.tzinfo is None,d))]
def single_unique_end_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return len(set(cntx.endDatetime for cntx in xc.modelXbrl.contextsInUse if cntx.isStartEndPeriod)) == 1
def any_instant_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
for cntx in xc.modelXbrl.contextsInUse:
if cntx.isInstantPeriod:
return cntx.instantDatetime
return ()
def unique_instant_dates(xc, p, args):
    if len(args) != 0: raise XPathContext.FunctionNumArgs()
    distinctInstantDates = set()
    for cntx in xc.modelXbrl.contextsInUse:
        if cntx.isInstantPeriod:
            distinctInstantDates.add(cntx.instantDatetime)
    return [sorted(distinctInstantDates, key=lambda d:(d.tzinfo is None,d))]
def single_unique_instant_date(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return len(set(cntx.instantDatetime for cntx in xc.modelXbrl.contextsInUse if cntx.isInstantPeriod)) == 1
def filingIndicatorValues(inst, filedValue):
filingIndicators = set()
for fact in inst.factsByQname[XbrlConst.qnEuFiIndFact]:
if fact.parentElement.qname == XbrlConst.qnEuFiTuple and fact.get(XbrlConst.cnEuFiIndAttr,"true") == filedValue:
filingIndicators.add(fact.stringValue.strip())
for fact in inst.factsByQname[XbrlConst.qnFiFact]:
if fact.context is not None and XbrlConst.qnFiDim in fact.context.qnameDims and fact.value.strip() == filedValue:
fiValue = fact.context.qnameDims[XbrlConst.qnFiDim].stringValue.strip()
if fiValue:
filingIndicators.add(fiValue)
return filingIndicators
def positive_filing_indicators(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return sorted(filingIndicatorValues(xc.modelXbrl, "true"))
def positive_filing_indicator(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
ind = anytypeArg(xc, args, 0, "xs:string", None)
if ind is None: raise XPathContext.FunctionArgType(1,"xs:string")
return ind in filingIndicatorValues(xc.modelXbrl, "true")
def negative_filing_indicators(xc, p, args):
if len(args) != 0: raise XPathContext.FunctionNumArgs()
return sorted(filingIndicatorValues(xc.modelXbrl, "false"))
def negative_filing_indicator(xc, p, args):
if len(args) != 1: raise XPathContext.FunctionNumArgs()
ind = anytypeArg(xc, args, 0, "xs:string", None)
if ind is None: raise XPathContext.FunctionArgType(1,"xs:string")
return ind in filingIndicatorValues(xc.modelXbrl, "false")
xfiFunctions = {
'context': context,
'unit': unit,
'unit-numerator': unit_numerator,
'unit-denominator': unit_denominator,
'measure-name': measure_name,
'period': period,
'context-period': context_period,
'is-start-end-period': is_start_end_period,
'is-forever-period': is_forever_period,
'is-duration-period': is_duration_period,
'is-instant-period': is_instant_period,
'period-start': period_start,
'period-end': period_end,
'period-instant': period_instant,
'entity' : entity,
'context-entity' : context_entity,
'identifier': identifier,
'context-identifier': context_identifier,
'entity-identifier': entity_identifier,
'identifier-value': identifier_value,
'identifier-scheme': identifier_scheme,
'segment': segment,
'entity-segment': entity_segment,
'context-segment': context_segment,
'scenario': scenario,
'context-scenario': context_scenario,
'fact-identifier-value': fact_identifier_value,
'fact-identifier-scheme': fact_identifier_scheme,
'is-non-numeric' : non_numeric,
'is-numeric' : numeric,
'is-fraction' : fraction,
'precision': precision,
'decimals': decimals,
'uncovered-aspect' : uncovered_aspect,
'has-fallback-value' : has_fallback_value,
'uncovered-non-dimensional-aspects' : uncovered_non_dimensional_aspects,
'uncovered-dimensional-aspects': uncovered_dimensional_aspects,
'identical-nodes': identical_nodes,
's-equal': s_equal,
'u-equal': u_equal,
'v-equal': v_equal,
'c-equal': c_equal,
'identical-node-set' : identical_node_set,
's-equal-set': s_equal_set,
'v-equal-set': v_equal_set,
'c-equal-set': c_equal_set,
'u-equal-set': u_equal_set,
'x-equal': x_equal,
'duplicate-item': duplicate_item,
'duplicate-tuple': duplicate_tuple,
'p-equal': p_equal,
'cu-equal': cu_equal,
'pc-equal': pc_equal,
'pcu-equal': pcu_equal,
'start-equal': start_equal,
'end-equal': end_equal,
'nodes-correspond': nodes_correspond,
'facts-in-instance': facts_in_instance,
'items-in-instance': items_in_instance,
'tuples-in-instance': tuples_in_instance,
'items-in-tuple': items_in_tuple,
'tuples-in-tuple': tuples_in_tuple,
'non-nil-facts-in-instance': non_nil_facts_in_instance,
'concept-balance': concept_balance,
'concept-period-type': concept_period_type,
'concept-custom-attribute': concept_custom_attribute,
'concept-data-type': concept_data_type,
'concept-data-type-derived-from' : concept_data_type_derived_from,
'concept-substitutions': concept_substitutions,
'concepts-from-local-name': concepts_from_local_name,
'concepts-from-local-name-pattern': concepts_from_local_name_pattern,
'filter-member-network-selection' : filter_member_network_selection,
'filter-member-DRS-selection' : filter_member_DRS_selection,
'dimension-default': dimension_default,
'fact-segment-remainder': fact_segment_remainder,
'fact-scenario-remainder': fact_scenario_remainder,
'fact-has-explicit-dimension': fact_has_explicit_dimension,
'fact-has-typed-dimension': fact_has_typed_dimension,
'fact-has-explicit-dimension-value': fact_has_explicit_dimension_value,
'fact-explicit-dimension-value': fact_explicit_dimension_value,
'fact-typed-dimension-value': fact_typed_dimension_value,
'fact-explicit-dimensions': fact_explicit_dimensions,
'fact-typed-dimensions': fact_typed_dimensions,
'fact-dimension-s-equal2': fact_dimension_s_equal2,
'linkbase-link-roles': linkbase_link_roles,
'concept-label': concept_label,
'arcrole-definition': arcrole_definition,
'role-definition': role_definition,
'fact-footnotes': fact_footnotes,
'concept-relationships': concept_relationships,
'relationship-from-concept': relationship_from_concept,
'relationship-to-concept': relationship_to_concept,
'distinct-nonAbstract-parent-concepts': distinct_nonAbstract_parent_concepts,
'relationship-attribute': relationship_attribute,
'relationship-link-attribute': relationship_link_attribute,
'relationship-name': relationship_name,
'relationship-link-name': relationship_link_name,
'xbrl-instance': xbrl_instance,
'format-number': format_number,
'create-element': create_element,
'any-identifier': any_identifier,
'unique-identifiers': unique_identifiers,
'single-unique-identifier': single_unique_identifier,
'any-start-date': any_start_date,
'unique-start-dates': unique_start_dates,
'single-unique-start-date': single_unique_start_date,
'any-end-date': any_end_date,
'unique-end-dates': unique_end_dates,
'single-unique-end-date': single_unique_end_date,
'any-instant-date': any_instant_date,
'unique-instant-dates': unique_instant_dates,
'single-unique-instant-date': single_unique_instant_date,
'positive-filing-indicators': positive_filing_indicators,
'positive-filing-indicator': positive_filing_indicator,
'negative-filing-indicators': negative_filing_indicators,
'negative-filing-indicator': negative_filing_indicator,
}
# File: lib/spack/spack/cmd/python.py (from kkauder/spack)
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import os
import sys
import code
import argparse
import platform
import runpy
import llnl.util.tty as tty
import spack
description = "launch an interpreter as spack would launch a command"
section = "developer"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
'-V', '--version', action='store_true',
help='print the Python version number and exit')
subparser.add_argument(
'-c', dest='python_command', help='command to execute')
subparser.add_argument(
'-i', dest='python_interpreter', help='python interpreter',
choices=['python', 'ipython'], default='python')
subparser.add_argument(
'-m', dest='module', action='store',
help='run library module as a script')
subparser.add_argument(
'python_args', nargs=argparse.REMAINDER,
help="file to run plus arguments")
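# A standalone sketch of the argparse.REMAINDER behaviour setup_parser
# relies on: once the first positional token is seen, everything after it
# is captured verbatim (flags included), so a target script's own options
# pass through untouched. The parser here is illustrative, not the real
# spack parser.

```python
# Demonstrate how nargs=argparse.REMAINDER captures the script-plus-args tail.
import argparse

parser = argparse.ArgumentParser(prog='spack-python')
parser.add_argument('-c', dest='python_command')
parser.add_argument('python_args', nargs=argparse.REMAINDER)

cmd = parser.parse_args(['-c', 'print(1)'])
script = parser.parse_args(['myscript.py', '--verbose', '-c', 'ignored'])
```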
def python(parser, args, unknown_args):
if args.version:
print('Python', platform.python_version())
return
if args.module:
sys.argv = ['spack-python'] + unknown_args + args.python_args
runpy.run_module(args.module, run_name="__main__", alter_sys=True)
return
if unknown_args:
tty.die("Unknown arguments:", " ".join(unknown_args))
# Unexpected behavior from supplying both
if args.python_command and args.python_args:
tty.die("You can only specify a command OR script, but not both.")
# Run user choice of interpreter
if args.python_interpreter == "ipython":
return spack.cmd.python.ipython_interpreter(args)
return spack.cmd.python.python_interpreter(args)
def ipython_interpreter(args):
"""An ipython interpreter is intended to be interactive, so it doesn't
support running a script or arguments
"""
try:
import IPython
except ImportError:
tty.die("ipython is not installed, install and try again.")
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
exec(startup.read())
# IPython can also support running a script OR command, not both
if args.python_args:
IPython.start_ipython(argv=args.python_args)
elif args.python_command:
IPython.start_ipython(argv=['-c', args.python_command])
else:
header = ("Spack version %s\nPython %s, %s %s"
% (spack.spack_version, platform.python_version(),
platform.system(), platform.machine()))
__name__ = "__main__" # noqa
IPython.embed(module="__main__", header=header)
def python_interpreter(args):
"""A python interpreter is the default interpreter
"""
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({'__name__': '__main__',
'spack': spack})
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
console.runsource(startup.read(), startup_file, 'exec')
if args.python_command:
console.runsource(args.python_command)
elif args.python_args:
sys.argv = args.python_args
with open(args.python_args[0]) as file:
console.runsource(file.read(), args.python_args[0], 'exec')
else:
# Provides readline support, allowing user to use arrow keys
console.push('import readline')
console.interact("Spack version %s\nPython %s, %s %s"
% (spack.spack_version, platform.python_version(),
platform.system(), platform.machine()))
# File: var/spack/repos/builtin/packages/rocm-debug-agent/package.py (from renjithravindrankannath/spack)
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
from spack.package import *
class RocmDebugAgent(CMakePackage):
"""Radeon Open Compute (ROCm) debug agent"""
homepage = "https://github.com/ROCm-Developer-Tools/rocr_debug_agent"
git = "https://github.com/ROCm-Developer-Tools/rocr_debug_agent.git"
url = "https://github.com/ROCm-Developer-Tools/rocr_debug_agent/archive/rocm-5.1.3.tar.gz"
maintainers = ['srekolam', 'arjun-raj-kuppala']
libraries = ['librocm-debug-agent']
version('5.1.3', sha256='ef26130829f3348d503669467ab1ea39fb67d943d88d64e7ac04b9617ec6067d')
version('5.1.0', sha256='e0ceeef575d8645385bc6e4c9c3accaa192a93c42d83545cf5626c848f59806b')
version('5.0.2', sha256='4ec3cdedc4ba774d05c3dc972186b3181b3aa823af08f3843238961d5ef90e57')
version('5.0.0', sha256='fb8ebe136bfa815116453bdcb4afb9617ab488f54501434c72eed9706857be3f')
version('4.5.2', sha256='85c7f19485defd9a58716fffdd1a0e065ed7f779c3f124467fca18755bc634a6')
version('4.5.0', sha256='6486b1a8515da4711d3c85f8e41886f8fe6ba37ca2c63664f00c811f6296ac20')
version('4.3.1', sha256='7bee6be6c29883f03f47a8944c0d50b7cf43a6b5eeed734602f521c3c40a18d0', deprecated=True)
version('4.3.0', sha256='0cdee5792b808e03b839070da0d1b08dc4078a7d1fc295f0c99c6a5ae7d636a6', deprecated=True)
version('4.2.0', sha256='ce02a5b752291882daa0a2befa23944e59087ce9fe65a91061476c3c399e4a0c', deprecated=True)
version('4.1.0', sha256='b1ae874887e5ee037070f1dd46b145ad02ec9fd8a724c6b6ae194b534f01acdb', deprecated=True)
version('4.0.0', sha256='a9e64834d56a9221c242e71aa110c2cef0087aa8f86f50428dd618e5e623cc3c', deprecated=True)
version('3.10.0', sha256='675b8d3cc4aecc4428a93553abf664bbe6a2cb153f1f480e6cadeeb4d24ef4b1', deprecated=True)
version('3.9.0', sha256='3e56bf8b2b53d9102e8709b6259deea52257dc6210df16996b71a7d677952b1b', deprecated=True)
version('3.8.0', sha256='55243331ac4b0d90e88882eb29fd06fad354e278f8a34ac7f0680b2c895ca2ac', deprecated=True)
version('3.7.0', sha256='d0f442a2b224a734b0080c906f0fc3066a698e5cde9ff97ffeb485b36d2caba1', deprecated=True)
version('3.5.0', sha256='203ccb18d2ac508aae40bf364923f67375a08798b20057e574a0c5be8039f133', deprecated=True)
def url_for_version(self, version):
url = "https://github.com/ROCm-Developer-Tools/rocr_debug_agent/archive/"
if version <= Version('3.7.0'):
url += "roc-{0}.tar.gz".format(version)
else:
url += "rocm-{0}.tar.gz".format(version)
return url
variant('build_type', default='Release', values=("Release", "Debug", "RelWithDebInfo"), description='CMake build type')
depends_on('cmake@3:', type='build')
depends_on('elfutils@:0.168', type='link')
for ver in ['3.5.0', '3.7.0', '3.8.0', '3.9.0', '3.10.0', '4.0.0', '4.1.0',
'4.2.0', '4.3.0', '4.3.1', '4.5.0', '4.5.2', '5.0.0', '5.0.2',
'5.1.0', '5.1.3']:
depends_on('hsa-rocr-dev@' + ver, when='@' + ver)
depends_on('hsakmt-roct@' + ver, when='@' + ver)
for ver in ['3.7.0', '3.8.0', '3.9.0', '3.10.0', '4.0.0', '4.1.0', '4.2.0',
'4.3.0', '4.3.1', '4.5.0', '4.5.2', '5.0.0', '5.0.2', '5.1.0', '5.1.3']:
depends_on('rocm-dbgapi@' + ver, when='@' + ver)
depends_on('hip@' + ver, when='@' + ver)
# https://github.com/ROCm-Developer-Tools/rocr_debug_agent/pull/4
patch('0001-Drop-overly-strict-Werror-flag.patch', when='@3.7.0:')
patch('0002-add-hip-architecture.patch', when='@3.9.0:')
@classmethod
def determine_version(cls, lib):
match = re.search(r'lib\S*\.so\.\d+\.\d+\.(\d)(\d\d)(\d\d)',
lib)
if match:
ver = '{0}.{1}.{2}'.format(int(match.group(1)),
int(match.group(2)),
int(match.group(3)))
else:
ver = None
return ver
@property
def root_cmakelists_dir(self):
if '@3.5.0' in self.spec:
return 'src'
else:
return self.stage.source_path
def cmake_args(self):
spec = self.spec
args = []
if '@3.5.0' in spec:
args.append(
'-DCMAKE_PREFIX_PATH={0}/include/hsa;{1}/include,'.
format(spec['hsa-rocr-dev'].prefix, spec['hsakmt-roct'].prefix)
)
if '@3.7.0:' in spec:
args.append(
'-DCMAKE_MODULE_PATH={0}'.
format(spec['hip'].prefix.cmake)
)
return args
# File: main/email.py (from MugeraH/Awwards)
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
def send_welcome_email(name,receiver,date):
# Creating message subject and sender
subject = 'Welcome to the Awwwards clone'
sender = 'testmugera@gmail.com'
    ctx = {
        "name": name,
        "date": date
    }
    # passing in the context variables
text_content = render_to_string('email/awwemail.txt',ctx)
html_content = render_to_string('email/awwemail.html',ctx)
msg = EmailMultiAlternatives(subject,text_content,sender,[receiver])
msg.attach_alternative(html_content,'text/html')
msg.send()
# File: networking_nec/nwa/l3/rpc/nwa_l3_server_callback.py (from nec-openstack/networking-nec-nwa)
# Copyright 2015-2016 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.extensions import l3
from neutron import manager
from neutron.plugins.common import constants as plugin_constants
from oslo_log import log as logging
import oslo_messaging
LOG = logging.getLogger(__name__)
class NwaL3ServerRpcCallback(object):
target = oslo_messaging.Target(version='1.0')
@property
def l3plugin(self):
if not hasattr(self, '_l3plugin'):
self._l3plugin = manager.NeutronManager.get_service_plugins()[
plugin_constants.L3_ROUTER_NAT]
return self._l3plugin
def update_floatingip_status(self, context, floatingip_id, status):
'''Update operational status for a floating IP.'''
with context.session.begin(subtransactions=True):
            LOG.debug('New status for floating IP %s: %s',
                      floatingip_id, status)
try:
self.l3plugin.update_floatingip_status(context,
floatingip_id,
status)
except l3.FloatingIPNotFound:
LOG.debug("Floating IP: %s no longer present.",
floatingip_id)
# File: pygame_gui/core/ui_window_stack.py (from halfninja/pygame_gui)
from typing import Union
from pygame_gui.core import ui_window
class UIWindowStack:
"""
A class for managing a stack of GUI windows so that only one is 'in front' at a time and the rest are sorted based
on the last time they were interacted with/created.
"""
def __init__(self, window_resolution):
self.window_resolution = window_resolution
self.stack = []
def clear(self):
"""
Empties the whole stack removing and killing all windows.
"""
while len(self.stack) != 0:
self.stack.pop().kill()
self.stack.clear()
def add_new_window(self, window: ui_window.UIWindow):
"""
Adds a window to the top of the stack.
:param window: The window to add.
"""
if len(self.stack) > 0:
new_layer = self.stack[-1].get_top_layer() + 1
else:
new_layer = 0
window.change_window_layer(new_layer)
self.stack.append(window)
def remove_window(self, window_to_remove: ui_window.UIWindow):
"""
Removes a window from the stack and resorts the remaining windows to adjust for it's absence.
:param window_to_remove: the window to remove.
"""
if window_to_remove in self.stack:
popped_windows_to_readd = []
window = self.stack.pop()
while window != window_to_remove:
popped_windows_to_readd.append(window)
window = self.stack.pop()
popped_windows_to_readd.reverse()
for old_window in popped_windows_to_readd:
self.add_new_window(old_window)
def move_window_to_front(self, window_to_front: ui_window.UIWindow):
"""
Moves the passed in window to the top of the window stack and resorts the other windows to deal with the
change.
:param window_to_front: the window to move to the front.
"""
if window_to_front in self.stack:
popped_windows_to_readd = []
window = self.stack.pop()
while window != window_to_front:
popped_windows_to_readd.append(window)
window = self.stack.pop()
popped_windows_to_readd.reverse()
for old_window in popped_windows_to_readd:
self.add_new_window(old_window)
self.add_new_window(window_to_front)
def get_root_window(self) -> Union[ui_window.UIWindow, None]:
"""
Gets the 'root' window, which should always be the first one in the stack and should represent an imaginary
window the size of the whole pygame application's display surface.
:return Union[ui_window.UIWindow, None]: The 'root' window
"""
if len(self.stack) > 0:
return self.stack[0]
else:
return None
def is_window_at_top(self, window: ui_window.UIWindow) -> bool:
"""
Checks if a window is at the top of the window stack or not.
:param window: The window to check.
:return bool: returns True if this window is at the top of the stack.
"""
        return window is self.stack[-1]
| 33.864583 | 118 | 0.611504 |
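The pop-until-found / re-add reordering that `move_window_to_front` performs in `ui_window_stack.py` above can be sketched standalone. Plain strings stand in for `UIWindow` objects, so the layer bookkeeping is omitted — this is an illustrative sketch, not pygame_gui itself:

```python
def move_to_front(stack, target):
    """Pop windows until `target` is found, then re-add the rest in order."""
    popped = []
    item = stack.pop()
    while item != target:
        popped.append(item)
        item = stack.pop()
    popped.reverse()
    stack.extend(popped)   # windows above the target keep their relative order
    stack.append(target)   # the target ends up on top (last element = front)
    return stack

print(move_to_front(["root", "a", "b", "c"], "a"))  # ['root', 'b', 'c', 'a']
```

Note that windows that sat above the moved one keep their relative ordering, which is exactly what the pop/reverse/re-add dance in the class guarantees.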
9196561f50ee897eb57210dc39a682a1508f0871 | 2,224 | py | Python | login/urls.py | aschrist/WebServerAndClient | 3aa0af2c444acac88a1b51b4cfd4bb8d0c36e640 | [
"BSD-3-Clause"
] | null | null | null | login/urls.py | aschrist/WebServerAndClient | 3aa0af2c444acac88a1b51b4cfd4bb8d0c36e640 | [
"BSD-3-Clause"
] | null | null | null | login/urls.py | aschrist/WebServerAndClient | 3aa0af2c444acac88a1b51b4cfd4bb8d0c36e640 | [
"BSD-3-Clause"
] | null | null | null | from django.conf.urls import url
from django.contrib.admin.views.decorators import staff_member_required
from login import views
app_name = 'login'
urlpatterns = [
# login/logout
url(r'^login/$',
views.LoginView.as_view(),
name='login'),
url(r'^logout/$',
views.LogoutView.as_view(),
name='logout'),
# signup
url(r'^signup/$',
views.SignupView.as_view(),
name='signup'),
# User Admin
url(r'^user/$',
staff_member_required(views.UserAdminListView.as_view()),
name='list-user'),
url(r'^user/create/$',
staff_member_required(views.UserAdminCreateView.as_view()),
name='create-user'),
url(r'^user/detail/(?P<pk>[0-9]+)$',
staff_member_required(views.UserAdminDetailView.as_view()),
name='detail-user'),
url(r'^user/update/(?P<pk>[0-9]+)$',
staff_member_required(views.UserAdminUpdateView.as_view()),
name='update-user'),
# Group Admin
url(r'^group/$',
staff_member_required(views.GroupAdminListView.as_view()),
name='list-group'),
url(r'^group/create/$',
staff_member_required(views.GroupAdminCreateView.as_view()),
name='create-group'),
url(r'^group/detail/(?P<pk>[0-9]+)$',
staff_member_required(views.GroupAdminDetailView.as_view()),
name='detail-group'),
url(r'^group/update/(?P<pk>[0-9]+)$',
staff_member_required(views.GroupAdminUpdateView.as_view()),
name='update-group'),
# client admin
url(r'^client/$',
staff_member_required(views.ClientListView.as_view()),
name='list-client'),
url(r'^client/detail/(?P<pk>[0-9]+)$',
staff_member_required(views.ClientDetailView.as_view()),
name='detail-client'),
# restart
url(r'^restart/$',
views.RestartView.as_view(),
name='restart'),
# mqtt login
url(r'^mqtt/login/$',
views.MQTTLoginView.as_view(),
name='login-mqtt'),
url(r'^mqtt/superuser/$',
views.MQTTSuperuserView.as_view(),
name='superuser-mqtt'),
url(r'^mqtt/acl/$',
views.MQTTAclView.as_view(),
name='acl-mqtt'),
]
| 25.860465 | 71 | 0.59982 |
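The detail-view patterns in `login/urls.py` above capture the primary key with a named group. A standalone check with plain `re` (no Django needed) shows what that capture yields:

```python
import re

# Same regex as the detail-user route above.
pattern = re.compile(r'^user/detail/(?P<pk>[0-9]+)$')

match = pattern.match('user/detail/42')
print(match.group('pk'))                 # 42 (captured as a string)
print(pattern.match('user/detail/abc'))  # None - pk must be numeric
```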
5282ab53bed6984dfd146c24be2e074cd819411d | 1,415 | py | Python | tests/conftest.py | JocelynDelalande/dockerspawner | d1f27e2855d2cefbdb25b29cc069b9ca69d564e3 | [
"BSD-3-Clause"
] | 1 | 2021-01-28T17:22:25.000Z | 2021-01-28T17:22:25.000Z | tests/conftest.py | JocelynDelalande/dockerspawner | d1f27e2855d2cefbdb25b29cc069b9ca69d564e3 | [
"BSD-3-Clause"
] | null | null | null | tests/conftest.py | JocelynDelalande/dockerspawner | d1f27e2855d2cefbdb25b29cc069b9ca69d564e3 | [
"BSD-3-Clause"
] | 1 | 2018-07-25T16:11:06.000Z | 2018-07-25T16:11:06.000Z | """pytest config for dockerspawner tests"""
from unittest import mock
from docker import from_env as docker_from_env
from docker.errors import APIError
import pytest
from jupyterhub.tests.mocking import MockHub
# import base jupyterhub fixtures
from jupyterhub.tests.conftest import app, io_loop # noqa
from dockerspawner import DockerSpawner
# make Hub connectable from docker by default
MockHub.hub_ip = "0.0.0.0"
@pytest.fixture
def dockerspawner(app):
"""Configure JupyterHub to use DockerSpawner"""
app.config.DockerSpawner.prefix = "dockerspawner-test"
# app.config.DockerSpawner.remove = True
with mock.patch.dict(app.tornado_settings, {"spawner_class": DockerSpawner}):
yield
@pytest.fixture(autouse=True, scope="session")
def docker():
"""Fixture to return a connected docker client
cleans up any containers we leave in docker
"""
d = docker_from_env()
try:
yield d
finally:
# cleanup our containers
for c in d.containers.list(all=True):
if c.name.startswith("dockerspawner-test"):
c.stop()
c.remove()
try:
services = d.services.list()
except APIError:
# e.g. services not available
return
else:
for s in services:
if s.name.startswith("dockerspawner-test"):
s.remove()
| 26.698113 | 81 | 0.653004 |
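The `dockerspawner` fixture above leans on `mock.patch.dict` restoring the original mapping when the context exits; a minimal standalone illustration of that behaviour (the dictionary and values here are made up for the demo):

```python
from unittest import mock

settings = {"spawner_class": "DefaultSpawner"}
with mock.patch.dict(settings, {"spawner_class": "DockerSpawner"}):
    inside = settings["spawner_class"]   # override is visible inside the block

print(inside)                      # DockerSpawner
print(settings["spawner_class"])   # DefaultSpawner - restored on exit
```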
6ee0efaa3a1319febebf9fb8bccfc989db711011 | 557 | py | Python | hood/migrations/0003_auto_20181022_1003.py | KageniJK/watchi | 50268615169096bc6302057103bf22b4e8377a1b | [
"MIT"
] | null | null | null | hood/migrations/0003_auto_20181022_1003.py | KageniJK/watchi | 50268615169096bc6302057103bf22b4e8377a1b | [
"MIT"
] | 2 | 2020-06-05T19:24:19.000Z | 2021-06-10T20:56:18.000Z | hood/migrations/0003_auto_20181022_1003.py | KageniJK/watchi | 50268615169096bc6302057103bf22b4e8377a1b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.15 on 2018-10-22 07:03
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('hood', '0002_auto_20181022_0923'),
]
operations = [
migrations.AlterField(
model_name='profile',
name='neighbourhood',
field=models.ForeignKey(default=3, on_delete=django.db.models.deletion.CASCADE, to='hood.Neighbourhood'),
),
]
| 25.318182 | 117 | 0.657092 |
697d283d830fdbde80784d3d6cbdb443627a76d7 | 2,053 | py | Python | Code/SSDV2/faceRecognition.py | swapnil96/BTP | fda254a8d83297698808857a78e2b9d091c195e3 | [
"MIT"
] | null | null | null | Code/SSDV2/faceRecognition.py | swapnil96/BTP | fda254a8d83297698808857a78e2b9d091c195e3 | [
"MIT"
] | null | null | null | Code/SSDV2/faceRecognition.py | swapnil96/BTP | fda254a8d83297698808857a78e2b9d091c195e3 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
from mvnc import mvncapi as mvnc
import numpy, cv2
import sys, os
try:
    import cPickle as pickle  # Python 2
except ImportError:  # Python 3: cPickle was folded into pickle
    import pickle
import utilities
import argparse
def setup(args):
utilities.setup(args)
model = None
with open(args.testModel, 'rb') as mod:
model = pickle.load(mod)
return model
def run(model, testData, threshold):
	if model is None:
print("No model found")
return []
infer_image = cv2.imread(testData)
input_vector = utilities.run_inference(infer_image)
	if input_vector is None:  # inference failed upstream
return []
match = utilities.run_image(model, input_vector, threshold)
return [match]
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
'-fG',
'--facenetGraph',
type=str,
help="graph file for facenet",
default="facenet_celeb_ncs.graph")
parser.add_argument(
'-sG',
'--ssdGraph',
type=str,
help="graph file for SSD",
default="ssd_face.graph")
parser.add_argument(
'type',
type=str,
help="train for training, test for testing",
default="train")
parser.add_argument(
'--trainData',
type=str,
help="Path to train data directory for training",
default="../train_data/")
parser.add_argument(
'--trainModel',
type=str,
help="Name of model which training will produce",
default='model.pkl')
parser.add_argument(
'-tD',
'--testData',
type=str,
help="Path to test image for testing",
default='../raw/')
parser.add_argument(
'-tM',
'--testModel',
type=str,
help="Path to pickle model for testing",
default='model.pkl')
parser.add_argument(
'-t',
'--threshold',
type=float,
default=1.2,
help='Face recognition threshold for facenet')
parser.add_argument('-v', '--verbose', action='store_true')
args = parser.parse_args()
if (args.type == "train"):
utilities.setup(args)
utilities.train(args)
else:
model = setup(args)
print(run(model, args.testData, args.threshold))
| 23.597701 | 63 | 0.640039 |
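The CLI defined in `faceRecognition.py` above can be exercised without touching `sys.argv`, since `parse_args` accepts an explicit argument list — handy for checking defaults. A trimmed sketch with just two of the arguments:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('type', type=str, help="train or test")
parser.add_argument('-t', '--threshold', type=float, default=1.2)

args = parser.parse_args(['test', '--threshold', '0.9'])
print(args.type, args.threshold)  # test 0.9

defaults = parser.parse_args(['train'])
print(defaults.threshold)         # 1.2
```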
dff29d84d0ddbb3665ec2bf8c2c7bc1a8439afe8 | 26,478 | py | Python | autosklearn/smbo.py | psaks/auto-sklearn | e21047aa7b52e762a58992e33ffcebb420586e67 | [
"BSD-3-Clause"
] | 6,390 | 2015-07-11T07:59:51.000Z | 2022-03-31T16:45:15.000Z | autosklearn/smbo.py | psaks/auto-sklearn | e21047aa7b52e762a58992e33ffcebb420586e67 | [
"BSD-3-Clause"
] | 1,276 | 2015-07-29T02:11:29.000Z | 2022-03-31T17:31:34.000Z | autosklearn/smbo.py | psaks/auto-sklearn | e21047aa7b52e762a58992e33ffcebb420586e67 | [
"BSD-3-Clause"
] | 1,313 | 2015-07-20T14:11:39.000Z | 2022-03-25T18:22:48.000Z | import copy
import json
import logging
import multiprocessing
import os
import time
import traceback
import typing
import warnings
import dask.distributed
import pynisher
from smac.facade.smac_ac_facade import SMAC4AC
from smac.intensification.simple_intensifier import SimpleIntensifier
from smac.intensification.intensification import Intensifier
from smac.runhistory.runhistory2epm import RunHistory2EPM4LogCost
from smac.scenario.scenario import Scenario
from smac.tae.serial_runner import SerialRunner
from smac.tae.dask_runner import DaskParallelRunner
from smac.callbacks import IncorporateRunResultCallback
import autosklearn.metalearning
from autosklearn.constants import MULTILABEL_CLASSIFICATION, \
BINARY_CLASSIFICATION, TASK_TYPES_TO_STRING, CLASSIFICATION_TASKS, \
MULTICLASS_CLASSIFICATION, REGRESSION, MULTIOUTPUT_REGRESSION
from autosklearn.ensemble_builder import EnsembleBuilderManager
from autosklearn.metalearning.mismbo import suggest_via_metalearning
from autosklearn.data.abstract_data_manager import AbstractDataManager
from autosklearn.evaluation import ExecuteTaFuncWithQueue, get_cost_of_crash
from autosklearn.util.logging_ import get_named_client_logger
from autosklearn.util.parallel import preload_modules
from autosklearn.metalearning.metalearning.meta_base import MetaBase
from autosklearn.metalearning.metafeatures.metafeatures import \
calculate_all_metafeatures_with_labels, calculate_all_metafeatures_encoded_labels
EXCLUDE_META_FEATURES_CLASSIFICATION = {
'Landmark1NN',
'LandmarkDecisionNodeLearner',
'LandmarkDecisionTree',
'LandmarkLDA',
'LandmarkNaiveBayes',
'LandmarkRandomNodeLearner',
'PCAFractionOfComponentsFor95PercentVariance',
'PCAKurtosisFirstPC',
'PCASkewnessFirstPC',
'PCA',
}
EXCLUDE_META_FEATURES_REGRESSION = {
'Landmark1NN',
'LandmarkDecisionNodeLearner',
'LandmarkDecisionTree',
'LandmarkLDA',
'LandmarkNaiveBayes',
'PCAFractionOfComponentsFor95PercentVariance',
'PCAKurtosisFirstPC',
'PCASkewnessFirstPC',
'NumberOfClasses',
'ClassOccurences',
'ClassProbabilityMin',
'ClassProbabilityMax',
'ClassProbabilityMean',
'ClassProbabilitySTD',
'ClassEntropy',
'LandmarkRandomNodeLearner',
'PCA',
}
def get_send_warnings_to_logger(logger):
    def _send_warnings_to_log(message, category, filename, lineno, file=None, line=None):
logger.debug('%s:%s: %s:%s', filename, lineno, category.__name__, message)
return _send_warnings_to_log
# metalearning helpers
def _calculate_metafeatures(data_feat_type, data_info_task, basename,
x_train, y_train, watcher, logger_):
with warnings.catch_warnings():
warnings.showwarning = get_send_warnings_to_logger(logger_)
# == Calculate metafeatures
task_name = 'CalculateMetafeatures'
watcher.start_task(task_name)
categorical = {col: True if feat_type.lower() == 'categorical' else False
for col, feat_type in data_feat_type.items()}
EXCLUDE_META_FEATURES = EXCLUDE_META_FEATURES_CLASSIFICATION \
if data_info_task in CLASSIFICATION_TASKS else EXCLUDE_META_FEATURES_REGRESSION
if data_info_task in [MULTICLASS_CLASSIFICATION, BINARY_CLASSIFICATION,
MULTILABEL_CLASSIFICATION, REGRESSION,
MULTIOUTPUT_REGRESSION]:
logger_.info('Start calculating metafeatures for %s', basename)
result = calculate_all_metafeatures_with_labels(
x_train, y_train, categorical=categorical,
dataset_name=basename,
dont_calculate=EXCLUDE_META_FEATURES, logger=logger_)
for key in list(result.metafeature_values.keys()):
if result.metafeature_values[key].type_ != 'METAFEATURE':
del result.metafeature_values[key]
else:
result = None
logger_.info('Metafeatures not calculated')
watcher.stop_task(task_name)
logger_.info(
'Calculating Metafeatures (categorical attributes) took %5.2f',
watcher.wall_elapsed(task_name))
return result
def _calculate_metafeatures_encoded(data_feat_type, basename, x_train, y_train, watcher,
task, logger_):
with warnings.catch_warnings():
warnings.showwarning = get_send_warnings_to_logger(logger_)
EXCLUDE_META_FEATURES = EXCLUDE_META_FEATURES_CLASSIFICATION \
if task in CLASSIFICATION_TASKS else EXCLUDE_META_FEATURES_REGRESSION
task_name = 'CalculateMetafeaturesEncoded'
watcher.start_task(task_name)
categorical = {col: True if feat_type.lower() == 'categorical' else False
for col, feat_type in data_feat_type.items()}
result = calculate_all_metafeatures_encoded_labels(
x_train, y_train, categorical=categorical,
dataset_name=basename, dont_calculate=EXCLUDE_META_FEATURES, logger=logger_)
for key in list(result.metafeature_values.keys()):
if result.metafeature_values[key].type_ != 'METAFEATURE':
del result.metafeature_values[key]
watcher.stop_task(task_name)
logger_.info(
'Calculating Metafeatures (encoded attributes) took %5.2fsec',
watcher.wall_elapsed(task_name))
return result
def _get_metalearning_configurations(meta_base, basename, metric,
configuration_space,
task,
initial_configurations_via_metalearning,
is_sparse,
watcher, logger):
task_name = 'InitialConfigurations'
watcher.start_task(task_name)
try:
metalearning_configurations = suggest_via_metalearning(
meta_base, basename, metric,
task,
is_sparse == 1,
initial_configurations_via_metalearning,
logger=logger,
)
except Exception as e:
logger.error("Error getting metalearning configurations!")
logger.error(str(e))
logger.error(traceback.format_exc())
metalearning_configurations = []
watcher.stop_task(task_name)
return metalearning_configurations
def _print_debug_info_of_init_configuration(initial_configurations, basename,
time_for_task, logger, watcher):
logger.debug('Initial Configurations: (%d)' % len(initial_configurations))
for initial_configuration in initial_configurations:
logger.debug(initial_configuration)
logger.debug('Looking for initial configurations took %5.2fsec',
watcher.wall_elapsed('InitialConfigurations'))
logger.info(
'Time left for %s after finding initial configurations: %5.2fsec',
basename, time_for_task - watcher.wall_elapsed(basename))
def get_smac_object(
scenario_dict,
seed,
ta,
ta_kwargs,
metalearning_configurations,
n_jobs,
dask_client,
):
if len(scenario_dict['instances']) > 1:
intensifier = Intensifier
else:
intensifier = SimpleIntensifier
scenario = Scenario(scenario_dict)
if len(metalearning_configurations) > 0:
default_config = scenario.cs.get_default_configuration()
initial_configurations = [default_config] + metalearning_configurations
else:
initial_configurations = None
rh2EPM = RunHistory2EPM4LogCost
return SMAC4AC(
scenario=scenario,
rng=seed,
runhistory2epm=rh2EPM,
tae_runner=ta,
tae_runner_kwargs=ta_kwargs,
initial_configurations=initial_configurations,
run_id=seed,
intensifier=intensifier,
dask_client=dask_client,
n_jobs=n_jobs,
)
class AutoMLSMBO(object):
def __init__(self, config_space, dataset_name,
backend,
total_walltime_limit,
func_eval_time_limit,
memory_limit,
metric,
watcher,
n_jobs,
dask_client: dask.distributed.Client,
port: int,
start_num_run=1,
data_memory_limit=None,
num_metalearning_cfgs=25,
config_file=None,
seed=1,
metadata_directory=None,
resampling_strategy='holdout',
resampling_strategy_args=None,
include=None,
exclude=None,
disable_file_output=False,
smac_scenario_args=None,
get_smac_object_callback=None,
scoring_functions=None,
pynisher_context='spawn',
ensemble_callback: typing.Optional[EnsembleBuilderManager] = None,
trials_callback: typing.Optional[IncorporateRunResultCallback] = None
):
super(AutoMLSMBO, self).__init__()
# data related
self.dataset_name = dataset_name
self.datamanager = None
self.metric = metric
self.task = None
self.backend = backend
self.port = port
# the configuration space
self.config_space = config_space
# the number of parallel workers/jobs
self.n_jobs = n_jobs
self.dask_client = dask_client
# Evaluation
self.resampling_strategy = resampling_strategy
if resampling_strategy_args is None:
resampling_strategy_args = {}
self.resampling_strategy_args = resampling_strategy_args
# and a bunch of useful limits
self.worst_possible_result = get_cost_of_crash(self.metric)
self.total_walltime_limit = int(total_walltime_limit)
self.func_eval_time_limit = int(func_eval_time_limit)
self.memory_limit = memory_limit
self.data_memory_limit = data_memory_limit
self.watcher = watcher
self.num_metalearning_cfgs = num_metalearning_cfgs
self.config_file = config_file
self.seed = seed
self.metadata_directory = metadata_directory
self.start_num_run = start_num_run
self.include = include
self.exclude = exclude
self.disable_file_output = disable_file_output
self.smac_scenario_args = smac_scenario_args
self.get_smac_object_callback = get_smac_object_callback
self.scoring_functions = scoring_functions
self.pynisher_context = pynisher_context
self.ensemble_callback = ensemble_callback
self.trials_callback = trials_callback
dataset_name_ = "" if dataset_name is None else dataset_name
logger_name = '%s(%d):%s' % (self.__class__.__name__, self.seed, ":" + dataset_name_)
if port is None:
self.logger = logging.getLogger(__name__)
else:
self.logger = get_named_client_logger(
name=logger_name,
port=self.port,
)
def reset_data_manager(self, max_mem=None):
if max_mem is None:
max_mem = self.data_memory_limit
if self.datamanager is not None:
del self.datamanager
if isinstance(self.dataset_name, AbstractDataManager):
self.datamanager = self.dataset_name
else:
self.datamanager = self.backend.load_datamanager()
self.task = self.datamanager.info['task']
def collect_metalearning_suggestions(self, meta_base):
metalearning_configurations = _get_metalearning_configurations(
meta_base=meta_base,
basename=self.dataset_name,
metric=self.metric,
configuration_space=self.config_space,
task=self.task,
is_sparse=self.datamanager.info['is_sparse'],
initial_configurations_via_metalearning=self.num_metalearning_cfgs,
watcher=self.watcher,
logger=self.logger)
_print_debug_info_of_init_configuration(
metalearning_configurations,
self.dataset_name,
self.total_walltime_limit,
self.logger,
self.watcher)
return metalearning_configurations
def _calculate_metafeatures_with_limits(self, time_limit):
res = None
time_limit = max(time_limit, 1)
try:
context = multiprocessing.get_context(self.pynisher_context)
preload_modules(context)
safe_mf = pynisher.enforce_limits(mem_in_mb=self.memory_limit,
wall_time_in_s=int(time_limit),
grace_period_in_s=30,
context=context,
logger=self.logger)(
_calculate_metafeatures)
res = safe_mf(
data_feat_type=self.datamanager.feat_type,
data_info_task=self.datamanager.info['task'],
x_train=self.datamanager.data['X_train'],
y_train=self.datamanager.data['Y_train'],
basename=self.dataset_name,
watcher=self.watcher,
logger_=self.logger
)
except Exception as e:
self.logger.error('Error getting metafeatures: %s', str(e))
return res
def _calculate_metafeatures_encoded_with_limits(self, time_limit):
res = None
time_limit = max(time_limit, 1)
try:
context = multiprocessing.get_context(self.pynisher_context)
preload_modules(context)
safe_mf = pynisher.enforce_limits(mem_in_mb=self.memory_limit,
wall_time_in_s=int(time_limit),
grace_period_in_s=30,
context=context,
logger=self.logger)(
_calculate_metafeatures_encoded)
res = safe_mf(
data_feat_type=self.datamanager.feat_type,
task=self.datamanager.info['task'],
x_train=self.datamanager.data['X_train'],
y_train=self.datamanager.data['Y_train'],
basename=self.dataset_name,
watcher=self.watcher,
logger_=self.logger
)
except Exception as e:
self.logger.error('Error getting metafeatures (encoded) : %s',
str(e))
return res
def run_smbo(self):
self.watcher.start_task('SMBO')
# == first things first: load the datamanager
self.reset_data_manager()
# == Initialize non-SMBO stuff
# first create a scenario
seed = self.seed
self.config_space.seed(seed)
# allocate a run history
num_run = self.start_num_run
# Initialize some SMAC dependencies
metalearning_configurations = self.get_metalearning_suggestions()
if self.resampling_strategy in ['partial-cv',
'partial-cv-iterative-fit']:
num_folds = self.resampling_strategy_args['folds']
instances = [[json.dumps({'task_id': self.dataset_name,
'fold': fold_number})]
for fold_number in range(num_folds)]
else:
instances = [[json.dumps({'task_id': self.dataset_name})]]
        # TODO rebuild target algorithm to be its own target algorithm
# evaluator, which takes into account that a run can be killed prior
# to the model being fully fitted; thus putting intermediate results
# into a queue and querying them once the time is over
ta_kwargs = dict(
backend=copy.deepcopy(self.backend),
autosklearn_seed=seed,
resampling_strategy=self.resampling_strategy,
initial_num_run=num_run,
include=self.include,
exclude=self.exclude,
metric=self.metric,
memory_limit=self.memory_limit,
disable_file_output=self.disable_file_output,
scoring_functions=self.scoring_functions,
port=self.port,
pynisher_context=self.pynisher_context,
**self.resampling_strategy_args
)
ta = ExecuteTaFuncWithQueue
startup_time = self.watcher.wall_elapsed(self.dataset_name)
total_walltime_limit = self.total_walltime_limit - startup_time - 5
scenario_dict = {
'abort_on_first_run_crash': False,
'cs': self.config_space,
'cutoff_time': self.func_eval_time_limit,
'deterministic': 'true',
'instances': instances,
'memory_limit': self.memory_limit,
'output-dir': self.backend.get_smac_output_directory(),
'run_obj': 'quality',
'wallclock_limit': total_walltime_limit,
'cost_for_crash': self.worst_possible_result,
}
if self.smac_scenario_args is not None:
for arg in [
'abort_on_first_run_crash',
'cs',
'deterministic',
'instances',
'output-dir',
'run_obj',
'shared-model',
'cost_for_crash',
]:
if arg in self.smac_scenario_args:
self.logger.warning('Cannot override scenario argument %s, '
'will ignore this.', arg)
del self.smac_scenario_args[arg]
for arg in [
'cutoff_time',
'memory_limit',
'wallclock_limit',
]:
if arg in self.smac_scenario_args:
self.logger.warning(
'Overriding scenario argument %s: %s with value %s',
arg,
scenario_dict[arg],
self.smac_scenario_args[arg]
)
scenario_dict.update(self.smac_scenario_args)
smac_args = {
'scenario_dict': scenario_dict,
'seed': seed,
'ta': ta,
'ta_kwargs': ta_kwargs,
'metalearning_configurations': metalearning_configurations,
'n_jobs': self.n_jobs,
'dask_client': self.dask_client,
}
if self.get_smac_object_callback is not None:
smac = self.get_smac_object_callback(**smac_args)
else:
smac = get_smac_object(**smac_args)
if self.ensemble_callback is not None:
smac.register_callback(self.ensemble_callback)
if self.trials_callback is not None:
smac.register_callback(self.trials_callback)
smac.optimize()
self.runhistory = smac.solver.runhistory
self.trajectory = smac.solver.intensifier.traj_logger.trajectory
if isinstance(smac.solver.tae_runner, DaskParallelRunner):
self._budget_type = smac.solver.tae_runner.single_worker.budget_type
elif isinstance(smac.solver.tae_runner, SerialRunner):
self._budget_type = smac.solver.tae_runner.budget_type
else:
raise NotImplementedError(type(smac.solver.tae_runner))
return self.runhistory, self.trajectory, self._budget_type
def get_metalearning_suggestions(self):
# == METALEARNING suggestions
# we start by evaluating the defaults on the full dataset again
# and add the suggestions from metalearning behind it
if self.num_metalearning_cfgs > 0:
# If metadata directory is None, use default
if self.metadata_directory is None:
metalearning_directory = os.path.dirname(
autosklearn.metalearning.__file__)
# There is no multilabel data in OpenML
if self.task == MULTILABEL_CLASSIFICATION:
meta_task = BINARY_CLASSIFICATION
else:
meta_task = self.task
metadata_directory = os.path.join(
metalearning_directory, 'files',
'%s_%s_%s' % (self.metric, TASK_TYPES_TO_STRING[meta_task],
'sparse' if self.datamanager.info['is_sparse']
else 'dense'))
self.metadata_directory = metadata_directory
# If metadata directory is specified by user,
# then verify that it exists.
else:
if not os.path.exists(self.metadata_directory):
raise ValueError('The specified metadata directory \'%s\' '
'does not exist!' % self.metadata_directory)
else:
# There is no multilabel data in OpenML
if self.task == MULTILABEL_CLASSIFICATION:
meta_task = BINARY_CLASSIFICATION
else:
meta_task = self.task
metadata_directory = os.path.join(
self.metadata_directory,
'%s_%s_%s' % (self.metric, TASK_TYPES_TO_STRING[meta_task],
'sparse' if self.datamanager.info['is_sparse']
else 'dense'))
# Check that the metadata directory has the correct
# subdirectory needed for this dataset.
if os.path.basename(metadata_directory) not in \
os.listdir(self.metadata_directory):
raise ValueError('The specified metadata directory '
'\'%s\' does not have the correct '
'subdirectory \'%s\'' %
(self.metadata_directory,
os.path.basename(metadata_directory))
)
self.metadata_directory = metadata_directory
if os.path.exists(self.metadata_directory):
self.logger.info('Metadata directory: %s',
self.metadata_directory)
meta_base = MetaBase(self.config_space, self.metadata_directory, self.logger)
metafeature_calculation_time_limit = int(
self.total_walltime_limit / 4)
metafeature_calculation_start_time = time.time()
meta_features = self._calculate_metafeatures_with_limits(
metafeature_calculation_time_limit)
metafeature_calculation_end_time = time.time()
metafeature_calculation_time_limit = \
metafeature_calculation_time_limit - (
metafeature_calculation_end_time -
metafeature_calculation_start_time)
if metafeature_calculation_time_limit < 1:
self.logger.warning(
'Time limit for metafeature calculation less '
                    'than 1 second (%f). Skipping calculation '
'of metafeatures for encoded dataset.',
metafeature_calculation_time_limit)
meta_features_encoded = None
else:
with warnings.catch_warnings():
warnings.showwarning = get_send_warnings_to_logger(self.logger)
meta_features_encoded = \
self._calculate_metafeatures_encoded_with_limits(
metafeature_calculation_time_limit)
# In case there is a problem calculating the encoded meta-features
if meta_features is None:
if meta_features_encoded is not None:
meta_features = meta_features_encoded
else:
if meta_features_encoded is not None:
meta_features.metafeature_values.update(
meta_features_encoded.metafeature_values)
if meta_features is not None:
meta_base.add_dataset(self.dataset_name, meta_features)
# Do mean imputation of the meta-features - should be done specific
# for each prediction model!
all_metafeatures = meta_base.get_metafeatures(
features=list(meta_features.keys()))
all_metafeatures.fillna(all_metafeatures.mean(),
inplace=True)
with warnings.catch_warnings():
warnings.showwarning = get_send_warnings_to_logger(self.logger)
metalearning_configurations = self.collect_metalearning_suggestions(
meta_base)
if metalearning_configurations is None:
metalearning_configurations = []
self.reset_data_manager()
self.logger.info('%s', meta_features)
# Convert meta-features into a dictionary because the scenario
# expects a dictionary
meta_features_dict = {}
for dataset, series in all_metafeatures.iterrows():
meta_features_dict[dataset] = series.values
meta_features_list = []
for meta_feature_name in all_metafeatures.columns:
meta_features_list.append(
meta_features[meta_feature_name].value)
self.logger.info(list(meta_features_dict.keys()))
else:
meta_features = None
self.logger.warning('Could not find meta-data directory %s' %
metadata_directory)
else:
meta_features = None
if meta_features is None:
metalearning_configurations = []
return metalearning_configurations
| 41.307332 | 93 | 0.601292 |
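The `get_send_warnings_to_logger` pattern used throughout `smbo.py` above — swap out `warnings.showwarning` inside `catch_warnings()` so warnings land in a logger instead of stderr — can be demonstrated standalone (logger name and message here are made up for the demo):

```python
import io
import logging
import warnings

logger = logging.getLogger("warn_demo")
logger.setLevel(logging.DEBUG)
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

def send_to_log(message, category, filename, lineno, file=None, line=None):
    # Same format as _send_warnings_to_log above.
    logger.debug('%s:%s: %s:%s', filename, lineno, category.__name__, message)

with warnings.catch_warnings():
    warnings.simplefilter("always")   # make sure the warning is not suppressed
    warnings.showwarning = send_to_log
    warnings.warn("metafeature timeout", RuntimeWarning)

print("RuntimeWarning" in buf.getvalue())  # True
```

On exit, `catch_warnings()` restores the original `showwarning`, so the redirect never leaks outside the block.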
5185ba8033a121577bab4d6849d10189ac408ef6 | 2,892 | py | Python | ipyvol3.py | chrisjsewell/ipyvol_html_ci | 8bd4304c0d17a26bc07970416ebf4b146a92d715 | [
"MIT"
] | null | null | null | ipyvol3.py | chrisjsewell/ipyvol_html_ci | 8bd4304c0d17a26bc07970416ebf4b146a92d715 | [
"MIT"
] | null | null | null | ipyvol3.py | chrisjsewell/ipyvol_html_ci | 8bd4304c0d17a26bc07970416ebf4b146a92d715 | [
"MIT"
] | null | null | null | import os
import sys
import glob
import json
import http.server
import socketserver
import multiprocessing
import pytest
from selenium import webdriver
# from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
capabilities = {
'platform': "Mac OS X 10.10",
'browserName': "chrome",
'version': "60",
}
PORT = 8082
@pytest.fixture(scope='session')
def browser():
if "TRAVIS_BUILD_NUMBER" in os.environ:
username = os.environ["SAUCE_USERNAME"]
access_key = os.environ["SAUCE_ACCESS_KEY"]
capabilities["tunnel-identifier"] = os.environ["TRAVIS_JOB_NUMBER"]
capabilities["build"] = os.environ["TRAVIS_BUILD_NUMBER"]
capabilities["tags"] = [os.environ["TRAVIS_PYTHON_VERSION"], "CI"]
hub_url = "%s:%s@localhost:4445" % (username, access_key)
else:
username = "chrisjsewell"
access_key = "2428b132-dd74-4326-a484-95eace873558"
hub_url = "%s:%s@localhost:4445" % (username, access_key)
server = socketserver.TCPServer(('', PORT), http.server.SimpleHTTPRequestHandler)
process = multiprocessing.Process(target=server.serve_forever)
    driver = None
    try:
        process.start()
        driver = webdriver.Remote(desired_capabilities=capabilities,
                                  command_executor="http://%s/wd/hub" % hub_url)
        yield driver
    finally:
        if driver is not None:  # Remote() may raise before driver is bound
            driver.quit()
        process.terminate()
        server.server_close()
def test__local(browser):
_unfatal_messages = [
"ipyvolume.js - Failed to load resource",
"TypeError: Cannot read property 'then' of undefined"
]
htmlpath = os.path.join(os.path.join(os.path.dirname(__file__), 'html_files'))
#for path in glob.glob(os.path.join(htmlpath, '*online*.html')):
# with socketserver.TCPServer(("", PORT), Handler) as httpd:
# print("serving at port", PORT)
# httpd.server_activate()
browser.get("http://localhost:{port}/html_files/ipyolume_scatter_online.html".format(port=PORT))
#browser.get("http://localhost:8081/html_files/" + os.path.basename(path))
#browser.get("http://google.com")
#browser.get('file:///'+os.path.abspath(path))
#browser.get("https://github.com/chrisjsewell/ipyvol_html_ci/blob/master/html_files/" + os.path.basename(path))
WebDriverWait(browser, 30).until(EC.presence_of_element_located((By.TAG_NAME, "canvas")))
log = browser.get_log('browser')
for item in log:
if item.get('level') == 'SEVERE':
if any([msg in item['message'] for msg in _unfatal_messages]):
# known unfatal error
continue
            raise RuntimeError('page load failed:\n{0}'.format(json.dumps(log, indent=2)))
| 37.076923 | 115 | 0.672891 |
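The severity filtering in `test__local` above — `SEVERE` log entries are fatal unless they match a known-unfatal message fragment — reduces to a small list comprehension. The log entries here are invented for the demo:

```python
unfatal = ["ipyvolume.js - Failed to load resource"]
log = [
    {"level": "INFO",   "message": "page loaded"},
    {"level": "SEVERE", "message": "ipyvolume.js - Failed to load resource: 404"},
    {"level": "SEVERE", "message": "Uncaught ReferenceError: foo is not defined"},
]

# Keep only SEVERE entries that are not on the known-unfatal list.
fatal = [item for item in log
         if item.get("level") == "SEVERE"
         and not any(msg in item["message"] for msg in unfatal)]

print(len(fatal))           # 1
print(fatal[0]["message"])  # Uncaught ReferenceError: foo is not defined
```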
e69a724225dd2772719e85c272c172bd34dd68ae | 5,041 | py | Python | cirq-ionq/cirq_ionq/ionq_devices.py | peterse/Cirq | 31daa9410a0e1e1ac3da38109aa8ce3a15aed17b | [
"Apache-2.0"
] | 3,326 | 2018-07-18T23:17:21.000Z | 2022-03-29T22:28:24.000Z | cirq-ionq/cirq_ionq/ionq_devices.py | peterse/Cirq | 31daa9410a0e1e1ac3da38109aa8ce3a15aed17b | [
"Apache-2.0"
] | 3,443 | 2018-07-18T21:07:28.000Z | 2022-03-31T20:23:21.000Z | cirq-ionq/cirq_ionq/ionq_devices.py | peterse/Cirq | 31daa9410a0e1e1ac3da38109aa8ce3a15aed17b | [
"Apache-2.0"
] | 865 | 2018-07-18T23:30:24.000Z | 2022-03-30T11:43:23.000Z | # Copyright 2021 The Cirq Developers
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Devices for IonQ hardware."""
from typing import AbstractSet, Sequence, Union
import numpy as np
import cirq
class IonQAPIDevice(cirq.Device):
"""A device that uses the gates exposed by the IonQ API.
When using this device in constructing a circuit, it will convert one and two qubit gates
that are not supported by the API into those supported by the API if they have a unitary
matrix (support the unitary protocol).
Note that this device does not do any compression of the resulting circuit, i.e. it may
result in a series of single qubit gates that could be executed using far fewer elements.
The gates supported by the API are
* `cirq.XPowGate`, `cirq.YPowGate`, `cirq.ZPowGate`
* `cirq.XXPowGate`, `cirq.YYPowGate`, `cirq.ZZPowGate`
* `cirq.CNOT`, `cirq.H`, `cirq.SWAP`
* `cirq.MeasurementGate`
"""
def __init__(self, qubits: Union[Sequence[cirq.LineQubit], int], atol=1e-8):
"""Construct the device.
Args:
qubits: The qubits upon which this device acts or the number of qubits. If the number
of qubits, then the qubits will be `cirq.LineQubit`s from 0 to this number minus
one.
atol: The absolute tolerance used for gate calculations and decompositions.
"""
if isinstance(qubits, int):
self.qubits = frozenset(cirq.LineQubit.range(qubits))
else:
self.qubits = frozenset(qubits)
self.atol = atol
self.gateset = cirq.Gateset(
cirq.H,
cirq.CNOT,
cirq.SWAP,
cirq.XPowGate,
cirq.YPowGate,
cirq.ZPowGate,
cirq.XXPowGate,
cirq.YYPowGate,
cirq.ZZPowGate,
cirq.MeasurementGate,
unroll_circuit_op=False,
accept_global_phase_op=False,
)
def qubit_set(self) -> AbstractSet['cirq.Qid']:
return self.qubits
def validate_operation(self, operation: cirq.Operation):
if operation.gate is None:
raise ValueError(
f'IonQAPIDevice does not support operations with no gates {operation}.'
)
if not self.is_api_gate(operation):
raise ValueError(f'IonQAPIDevice has unsupported gate {operation.gate}.')
if not set(operation.qubits).intersection(self.qubit_set()):
raise ValueError(f'Operation with qubits not on the device. Qubits: {operation.qubits}')
def is_api_gate(self, operation: cirq.Operation) -> bool:
return operation in self.gateset
def decompose_operation(self, operation: cirq.Operation) -> cirq.OP_TREE:
if self.is_api_gate(operation):
return operation
assert cirq.has_unitary(operation), (
            f'Operation {operation} is not available on the IonQ API and does not have a '
            'unitary matrix that can be used to decompose it to the API.'
)
num_qubits = len(operation.qubits)
if num_qubits == 1:
return self._decompose_single_qubit(operation)
if num_qubits == 2:
return self._decompose_two_qubit(operation)
raise ValueError(f'Operation {operation} not supported by IonQ API.')
def _decompose_single_qubit(self, operation: cirq.Operation) -> cirq.OP_TREE:
qubit = operation.qubits[0]
mat = cirq.unitary(operation)
for gate in cirq.single_qubit_matrix_to_gates(mat, self.atol):
yield gate(qubit)
def _decompose_two_qubit(self, operation: cirq.Operation) -> cirq.OP_TREE:
"""Decomposes a two qubit gate into XXPow, YYPow, and ZZPow plus single qubit gates."""
mat = cirq.unitary(operation)
kak = cirq.kak_decomposition(mat, check_preconditions=False)
for qubit, mat in zip(operation.qubits, kak.single_qubit_operations_before):
gates = cirq.single_qubit_matrix_to_gates(mat, self.atol)
for gate in gates:
yield gate(qubit)
two_qubit_gates = [cirq.XX, cirq.YY, cirq.ZZ]
for two_qubit_gate, coefficient in zip(two_qubit_gates, kak.interaction_coefficients):
yield (two_qubit_gate ** (-coefficient * 2 / np.pi))(*operation.qubits)
for qubit, mat in zip(operation.qubits, kak.single_qubit_operations_after):
for gate in cirq.single_qubit_matrix_to_gates(mat, self.atol):
yield gate(qubit)
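`_decompose_two_qubit` turns each KAK interaction coefficient into the exponent applied to the matching XX/YY/ZZ power gate. Isolated from cirq, that arithmetic is a one-liner; this sketch only restates the `(-coefficient * 2 / np.pi)` formula above (helper name is illustrative):

```python
import math

def interaction_to_exponent(coefficient: float) -> float:
    """Exponent for an XXPow/YYPow/ZZPow gate given a KAK interaction
    coefficient, mirroring `two_qubit_gate ** (-coefficient * 2 / np.pi)`."""
    return -coefficient * 2 / math.pi
```

For example, the maximal interaction coefficient pi/4 (a CZ-like interaction) maps to exponent -0.5.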
| 41.661157 | 100 | 0.659195 |
fcff8621bb9031e20714d47cc5cc24ed47de3313 | 2,915 | py | Python | odoo-13.0/addons/hr_contract/tests/test_auto_status.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | odoo-13.0/addons/hr_contract/tests/test_auto_status.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | odoo-13.0/addons/hr_contract/tests/test_auto_status.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from datetime import date, datetime
from dateutil.relativedelta import relativedelta
from odoo.addons.hr_contract.tests.common import TestContractBase
class TestHrContracts(TestContractBase):
def setUp(self):
super(TestHrContracts, self).setUp()
self.contracts = self.env['hr.contract'].with_context(tracking_disable=True)
self.test_contract = dict(name='Test', wage=1, employee_id=self.employee.id, state='open')
def test_employee_contractwarning(self):
self.assertEquals(self.employee.contract_warning, True)
def apply_cron(self):
self.env.ref('hr_contract.ir_cron_data_contract_update_state').method_direct_trigger()
def test_contract_enddate(self):
self.test_contract.update(dict(date_end=datetime.now() + relativedelta(days=100)))
self.contract = self.contracts.create(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'open')
self.assertEquals(self.contract.kanban_state, 'normal')
self.assertEquals(self.employee.contract_warning, False)
self.test_contract.update(dict(date_end=datetime.now() + relativedelta(days=5)))
self.contract.write(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'open')
self.assertEquals(self.contract.kanban_state, 'blocked')
self.test_contract.update({
'date_start': datetime.now() + relativedelta(days=-50),
'date_end': datetime.now() + relativedelta(days=-1),
'state': 'open',
'kanban_state': 'blocked',
})
self.contract.write(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'close')
def test_contract_pending_visa_expire(self):
self.employee.visa_expire = date.today() + relativedelta(days=30)
self.test_contract.update(dict(date_end=False))
self.contract = self.contracts.create(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'open')
self.assertEquals(self.contract.kanban_state, 'blocked')
self.employee.visa_expire = date.today() + relativedelta(days=-5)
self.test_contract.update({
'date_start': datetime.now() + relativedelta(days=-50),
'state': 'open',
'kanban_state': 'blocked',
})
self.contract.write(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'close')
def test_contract_start_date(self):
self.test_contract.update(dict(date_start=datetime.now(), state='draft', kanban_state='done'))
self.contract = self.contracts.create(self.test_contract)
self.apply_cron()
self.assertEquals(self.contract.state, 'open')
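The assertions above trace a small state machine: a far-off end date keeps the contract `open`/`normal`, an imminent end date flips the kanban state to `blocked`, and a past end date closes the contract. A plain-Python mirror of those transitions, with the caveat that the exact warning window is an assumption (the tests only show 5 days → blocked and 100 days → normal; the real rule lives in `hr.contract`'s cron):

```python
from datetime import date, timedelta

def expected_contract_state(today: date, date_end: date):
    """Illustrative mirror of the transitions asserted in the tests above.

    The 7-day warning window is an assumed threshold, not taken from the
    Odoo source.
    """
    if date_end < today:
        return ("close", "normal")       # contract already expired
    if date_end - today <= timedelta(days=7):
        return ("open", "blocked")       # expiring soon: warn via kanban
    return ("open", "normal")            # running normally
```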
| 42.867647 | 102 | 0.683019 |
c62c841db5df2294fb3a60e8a0b19fe8d389c2ce | 79,669 | py | Python | python/ray/_private/services.py | amzn/amazon-ray | 86156dc5c2401e1bcbd799211ee793d546553530 | [
"Apache-2.0"
] | 39 | 2021-02-02T23:09:31.000Z | 2022-03-28T16:39:12.000Z | python/ray/_private/services.py | amzn/amazon-ray | 86156dc5c2401e1bcbd799211ee793d546553530 | [
"Apache-2.0"
] | 65 | 2021-02-04T08:23:41.000Z | 2022-03-16T19:16:20.000Z | python/ray/_private/services.py | amzn/amazon-ray | 86156dc5c2401e1bcbd799211ee793d546553530 | [
"Apache-2.0"
] | 20 | 2021-02-05T05:51:39.000Z | 2022-03-04T21:13:24.000Z | import base64
import collections
import errno
import io
import json
import logging
import multiprocessing
import os
from pathlib import Path
import mmap
import random
import shutil
import signal
import socket
import subprocess
import sys
import time
from typing import Optional
import colorama
import psutil
# Ray modules
import ray
import ray.ray_constants as ray_constants
import redis
resource = None
if sys.platform != "win32":
import resource
EXE_SUFFIX = ".exe" if sys.platform == "win32" else ""
# True if processes are run in the valgrind profiler.
RUN_RAYLET_PROFILER = False
# Location of the redis server and module.
RAY_HOME = os.path.join(os.path.dirname(os.path.dirname(__file__)), "../..")
RAY_PATH = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
RAY_PRIVATE_DIR = "_private"
AUTOSCALER_PRIVATE_DIR = "autoscaler/_private"
REDIS_EXECUTABLE = os.path.join(
RAY_PATH, "core/src/ray/thirdparty/redis/src/redis-server" + EXE_SUFFIX)
REDIS_MODULE = os.path.join(
RAY_PATH, "core/src/ray/gcs/redis_module/libray_redis_module.so")
# Location of the raylet executables.
RAYLET_EXECUTABLE = os.path.join(RAY_PATH,
"core/src/ray/raylet/raylet" + EXE_SUFFIX)
GCS_SERVER_EXECUTABLE = os.path.join(
RAY_PATH, "core/src/ray/gcs/gcs_server" + EXE_SUFFIX)
# Location of the cpp default worker executables.
DEFAULT_WORKER_EXECUTABLE = os.path.join(
RAY_PATH, "core/src/ray/cpp/default_worker" + EXE_SUFFIX)
# Logger for this module. It should be configured at the entry point
# into the program using Ray. Ray provides a default configuration at
# entry/init points.
logger = logging.getLogger(__name__)
ProcessInfo = collections.namedtuple("ProcessInfo", [
"process",
"stdout_file",
"stderr_file",
"use_valgrind",
"use_gdb",
"use_valgrind_profiler",
"use_perftools_profiler",
"use_tmux",
])
def serialize_config(config):
return base64.b64encode(json.dumps(config).encode("utf-8")).decode("utf-8")
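`serialize_config` packs a JSON document into base64 so it can travel safely inside a command line. The inverse is not defined in this file; a sketch of what it would look like, shown only to illustrate the round trip:

```python
import base64
import json

def deserialize_config(serialized: str) -> dict:
    """Invert serialize_config: base64 text -> JSON bytes -> Python object."""
    return json.loads(base64.b64decode(serialized).decode("utf-8"))
```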
class ConsolePopen(subprocess.Popen):
if sys.platform == "win32":
def terminate(self):
if isinstance(self.stdin, io.IOBase):
self.stdin.close()
if self._use_signals:
self.send_signal(signal.CTRL_BREAK_EVENT)
else:
super(ConsolePopen, self).terminate()
def __init__(self, *args, **kwargs):
# CREATE_NEW_PROCESS_GROUP is used to send Ctrl+C on Windows:
# https://docs.python.org/3/library/subprocess.html#subprocess.Popen.send_signal
new_pgroup = subprocess.CREATE_NEW_PROCESS_GROUP
flags_to_add = 0
if ray._private.utils.detect_fate_sharing_support():
# If we don't have kernel-mode fate-sharing, then don't do this
# because our children need to be in out process group for
# the process reaper to properly terminate them.
flags_to_add = new_pgroup
flags_key = "creationflags"
if flags_to_add:
kwargs[flags_key] = (kwargs.get(flags_key) or 0) | flags_to_add
self._use_signals = (kwargs[flags_key] & new_pgroup)
super(ConsolePopen, self).__init__(*args, **kwargs)
def address(ip_address, port):
return ip_address + ":" + str(port)
def new_port(lower_bound=10000, upper_bound=65535, denylist=None):
if not denylist:
denylist = set()
port = random.randint(lower_bound, upper_bound)
retry = 0
while port in denylist:
if retry > 100:
break
port = random.randint(lower_bound, upper_bound)
retry += 1
if retry > 100:
raise ValueError("Failed to find a new port from the range "
f"{lower_bound}-{upper_bound}. Denylist: {denylist}")
return port
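`new_port` draws randomly and retries against a denylist, but never checks whether the port is actually free. A common complementary technique (not used here) is to let the kernel hand out an unused port by binding to port 0; a minimal sketch:

```python
import socket

def find_free_port() -> int:
    """Ask the OS for a TCP port that is currently unused."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0 means "pick any free port"
        return s.getsockname()[1]
```

The trade-off is a race: the port is only guaranteed free at bind time, not when the eventual service later binds it.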
def find_redis_address(address=None):
"""
Attempts to find all valid Ray redis addresses on this node.
Returns:
Set of detected Redis instances.
"""
# Currently, this extracts the deprecated --redis-address from the command
# that launched the raylet running on this node, if any. Anyone looking to
# edit this function should be warned that these commands look like, for
# example:
# /usr/local/lib/python3.8/dist-packages/ray/core/src/ray/raylet/raylet
# --redis_address=123.456.78.910 --node_ip_address=123.456.78.910
# --raylet_socket_name=... --store_socket_name=... --object_manager_port=0
# --min_worker_port=10000 --max_worker_port=10999
# --node_manager_port=58578 --redis_port=6379
# --maximum_startup_concurrency=8
# --static_resource_list=node:123.456.78.910,1.0,object_store_memory,66
# --config_list=plasma_store_as_thread,True
# --python_worker_command=/usr/bin/python
# /usr/local/lib/python3.8/dist-packages/ray/workers/default_worker.py
# --redis-address=123.456.78.910:6379
# --node-ip-address=123.456.78.910 --node-manager-port=58578
# --object-store-name=... --raylet-name=...
# --temp-dir=/tmp/ray
# --metrics-agent-port=41856 --redis-password=[MASKED]
# --java_worker_command= --cpp_worker_command=
# --redis_password=[MASKED] --temp_dir=/tmp/ray --session_dir=...
# --metrics-agent-port=41856 --metrics_export_port=64229
# --agent_command=/usr/bin/python
# -u /usr/local/lib/python3.8/dist-packages/ray/new_dashboard/agent.py
# --redis-address=123.456.78.910:6379 --metrics-export-port=64229
# --dashboard-agent-port=41856 --node-manager-port=58578
# --object-store-name=... --raylet-name=... --temp-dir=/tmp/ray
# --log-dir=/tmp/ray/session_2020-11-08_14-29-07_199128_278000/logs
# --redis-password=[MASKED] --object_store_memory=5037192806
# --plasma_directory=/tmp
# Longer arguments are elided with ... but all arguments from this instance
# are included, to provide a sense of what is in these.
# Indeed, we had to pull --redis-address to the front of each call to make
# this readable.
# As you can see, this is very long and complex, which is why we can't
    # simply extract all the arguments using regular expressions and
# present a dict as if we never lost track of these arguments, for
# example. Picking out --redis-address below looks like it might grab the
# wrong thing, but double-checking that we're finding the correct process
# by checking that the contents look like we expect would probably be prone
# to choking in unexpected ways.
# Notice that --redis-address appears twice. This is not a copy-paste
# error; this is the reason why the for loop below attempts to pick out
# every appearance of --redis-address.
# The --redis-address here is what is now called the --address, but it
# appears in the default_worker.py and agent.py calls as --redis-address.
pids = psutil.pids()
redis_addresses = set()
for pid in pids:
try:
proc = psutil.Process(pid)
# HACK: Workaround for UNIX idiosyncrasy
# Normally, cmdline() is supposed to return the argument list.
            # But in some cases (such as when setproctitle is called),
# an arbitrary string resembling a command-line is stored in
# the first argument.
# Explanation: https://unix.stackexchange.com/a/432681
# More info: https://github.com/giampaolo/psutil/issues/1179
cmdline = proc.cmdline()
# NOTE(kfstorm): To support Windows, we can't use
# `os.path.basename(cmdline[0]) == "raylet"` here.
if len(cmdline) > 0 and "raylet" in os.path.basename(cmdline[0]):
for arglist in cmdline:
# Given we're merely seeking --redis-address, we just split
# every argument on spaces for now.
for arg in arglist.split(" "):
# TODO(ekl): Find a robust solution for locating Redis.
if arg.startswith("--redis-address="):
proc_addr = arg.split("=")[1]
if address is not None and address != proc_addr:
continue
redis_addresses.add(proc_addr)
except psutil.AccessDenied:
pass
except psutil.NoSuchProcess:
pass
return redis_addresses
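Stripped of the psutil scaffolding, the scan above reduces to splitting each raylet argument string on spaces and collecting every `--redis-address=` value. That parsing step in isolation (function name is illustrative):

```python
def extract_redis_addresses(cmdline):
    """Collect every --redis-address=<addr> value from a raylet cmdline,
    where each element may itself be a space-joined argument string."""
    addresses = set()
    for arglist in cmdline:
        for arg in arglist.split(" "):
            if arg.startswith("--redis-address="):
                addresses.add(arg.split("=")[1])
    return addresses
```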
def get_ray_address_to_use_or_die():
"""
Attempts to find an address for an existing Ray cluster if it is not
already specified as an environment variable.
Returns:
A string to pass into `ray.init(address=...)`
"""
return os.environ.get(ray_constants.RAY_ADDRESS_ENVIRONMENT_VARIABLE,
find_redis_address_or_die())
def find_redis_address_or_die():
redis_addresses = find_redis_address()
if len(redis_addresses) > 1:
raise ConnectionError(
f"Found multiple active Ray instances: {redis_addresses}. "
"Please specify the one to connect to by setting `address`.")
elif not redis_addresses:
raise ConnectionError(
"Could not find any running Ray instance. "
"Please specify the one to connect to by setting `address`.")
return redis_addresses.pop()
def wait_for_node(redis_address,
node_plasma_store_socket_name,
redis_password=None,
timeout=30):
"""Wait until this node has appeared in the client table.
Args:
redis_address (str): The redis address.
node_plasma_store_socket_name (str): The
plasma_store_socket_name for the given node which we wait for.
redis_password (str): the redis password.
timeout: The amount of time in seconds to wait before raising an
exception.
Raises:
TimeoutError: An exception is raised if the timeout expires before
the node appears in the client table.
"""
redis_ip_address, redis_port = redis_address.split(":")
wait_for_redis_to_start(redis_ip_address, redis_port, redis_password)
global_state = ray.state.GlobalState()
global_state._initialize_global_state(redis_address, redis_password)
start_time = time.time()
while time.time() - start_time < timeout:
clients = global_state.node_table()
object_store_socket_names = [
client["ObjectStoreSocketName"] for client in clients
]
if node_plasma_store_socket_name in object_store_socket_names:
return
else:
time.sleep(0.1)
raise TimeoutError("Timed out while waiting for node to startup.")
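`wait_for_node` is one instance of a poll-until-timeout loop. The bare pattern, stripped of the Redis and node-table specifics (helper name is illustrative):

```python
import time

def poll_until(predicate, timeout=5.0, interval=0.1):
    """Call predicate() repeatedly until it returns True or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise TimeoutError("Timed out while waiting for the condition.")
```

Using `time.monotonic()` rather than `time.time()` keeps the deadline immune to wall-clock adjustments.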
def get_node_to_connect_for_driver(redis_address,
node_ip_address,
redis_password=None):
redis_ip_address, redis_port = redis_address.split(":")
# Get node table from global state accessor.
global_state = ray.state.GlobalState()
global_state._initialize_global_state(redis_address, redis_password)
return global_state.get_node_to_connect_for_driver(node_ip_address)
def get_webui_url_from_redis(redis_client):
webui_url = redis_client.hmget("webui", "url")[0]
return ray._private.utils.decode(
webui_url) if webui_url is not None else None
def remaining_processes_alive():
"""See if the remaining processes are alive or not.
Note that this ignores processes that have been explicitly killed,
e.g., via a command like node.kill_raylet().
Returns:
True if the remaining processes started by ray.init() are alive and
False otherwise.
Raises:
Exception: An exception is raised if the processes were not started by
ray.init().
"""
if ray.worker._global_node is None:
raise RuntimeError("This process is not in a position to determine "
"whether all processes are alive or not.")
return ray.worker._global_node.remaining_processes_alive()
def validate_redis_address(address):
"""Validates address parameter.
Returns:
redis_address: string containing the full <host:port> address.
redis_ip: string representing the host portion of the address.
redis_port: integer representing the port portion of the address.
"""
if address == "auto":
address = find_redis_address_or_die()
redis_address = address_to_ip(address)
redis_address_parts = redis_address.split(":")
if len(redis_address_parts) != 2:
raise ValueError("Malformed address. Expected '<host>:<port>'.")
redis_ip = redis_address_parts[0]
try:
redis_port = int(redis_address_parts[1])
except ValueError:
raise ValueError("Malformed address port. Must be an integer.")
if redis_port < 1024 or redis_port > 65535:
raise ValueError("Invalid address port. Must "
"be between 1024 and 65535.")
return redis_address, redis_ip, redis_port
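The validation above mixes parsing with a DNS lookup (`address_to_ip`). The pure parsing half can be sketched on its own; unlike `validate_redis_address`, this hypothetical helper does no hostname resolution:

```python
def parse_redis_address(address: str):
    """Validate and split '<host>:<port>' without any DNS lookup."""
    host, sep, port_str = address.partition(":")
    if not sep or not host or not port_str:
        raise ValueError("Malformed address. Expected '<host>:<port>'.")
    port = int(port_str)  # raises ValueError on a non-integer port
    if not 1024 <= port <= 65535:
        raise ValueError("Invalid address port. Must be between 1024 and 65535.")
    return host, port
```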
def address_to_ip(address):
"""Convert a hostname to a numerical IP addresses in an address.
This should be a no-op if address already contains an actual numerical IP
address.
Args:
address: This can be either a string containing a hostname (or an IP
address) and a port or it can be just an IP address.
Returns:
The same address but with the hostname replaced by a numerical IP
address.
"""
address_parts = address.split(":")
ip_address = socket.gethostbyname(address_parts[0])
# Make sure localhost isn't resolved to the loopback ip
if ip_address == "127.0.0.1":
ip_address = get_node_ip_address()
return ":".join([ip_address] + address_parts[1:])
def node_ip_address_from_perspective(address):
"""IP address by which the local node can be reached *from* the `address`.
Args:
address (str): The IP address and port of any known live service on the
network you care about.
Returns:
The IP address by which the local node can be reached from the address.
"""
ip_address, port = address.split(":")
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
# This command will raise an exception if there is no internet
# connection.
s.connect((ip_address, int(port)))
node_ip_address = s.getsockname()[0]
except OSError as e:
node_ip_address = "127.0.0.1"
# [Errno 101] Network is unreachable
if e.errno == errno.ENETUNREACH:
try:
# try get node ip address from host name
host_name = socket.getfqdn(socket.gethostname())
node_ip_address = socket.gethostbyname(host_name)
except Exception:
pass
finally:
s.close()
return node_ip_address
def get_node_ip_address(address="8.8.8.8:53"):
if ray.worker._global_node is not None:
return ray.worker._global_node.node_ip_address
return node_ip_address_from_perspective(address)
def create_redis_client(redis_address, password=None):
"""Create a Redis client.
Args:
        redis_address (str): The "<host>:<port>" address of the Redis server.
        password (str): The password of the Redis server, if any.
Returns:
A Redis client.
"""
redis_ip_address, redis_port = redis_address.split(":")
# For this command to work, some other client (on the same machine
# as Redis) must have run "CONFIG SET protected-mode no".
return redis.StrictRedis(
host=redis_ip_address, port=int(redis_port), password=password)
def start_ray_process(command,
process_type,
fate_share,
env_updates=None,
cwd=None,
use_valgrind=False,
use_gdb=False,
use_valgrind_profiler=False,
use_perftools_profiler=False,
use_tmux=False,
stdout_file=None,
stderr_file=None,
pipe_stdin=False):
"""Start one of the Ray processes.
TODO(rkn): We need to figure out how these commands interact. For example,
it may only make sense to start a process in gdb if we also start it in
tmux. Similarly, certain combinations probably don't make sense, like
simultaneously running the process in valgrind and the profiler.
Args:
command (List[str]): The command to use to start the Ray process.
process_type (str): The type of the process that is being started
(e.g., "raylet").
fate_share: If true, the child will be killed if its parent (us) dies.
True must only be passed after detection of this functionality.
env_updates (dict): A dictionary of additional environment variables to
run the command with (in addition to the caller's environment
variables).
cwd (str): The directory to run the process in.
use_valgrind (bool): True if we should start the process in valgrind.
use_gdb (bool): True if we should start the process in gdb.
use_valgrind_profiler (bool): True if we should start the process in
the valgrind profiler.
use_perftools_profiler (bool): True if we should profile the process
using perftools.
use_tmux (bool): True if we should start the process in tmux.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
pipe_stdin: If true, subprocess.PIPE will be passed to the process as
stdin.
Returns:
Information about the process that was started including a handle to
the process that was started.
"""
# Detect which flags are set through environment variables.
valgrind_env_var = f"RAY_{process_type.upper()}_VALGRIND"
if os.environ.get(valgrind_env_var) == "1":
logger.info("Detected environment variable '%s'.", valgrind_env_var)
use_valgrind = True
valgrind_profiler_env_var = f"RAY_{process_type.upper()}_VALGRIND_PROFILER"
if os.environ.get(valgrind_profiler_env_var) == "1":
logger.info("Detected environment variable '%s'.",
valgrind_profiler_env_var)
use_valgrind_profiler = True
perftools_profiler_env_var = (f"RAY_{process_type.upper()}"
"_PERFTOOLS_PROFILER")
if os.environ.get(perftools_profiler_env_var) == "1":
logger.info("Detected environment variable '%s'.",
perftools_profiler_env_var)
use_perftools_profiler = True
tmux_env_var = f"RAY_{process_type.upper()}_TMUX"
if os.environ.get(tmux_env_var) == "1":
logger.info("Detected environment variable '%s'.", tmux_env_var)
use_tmux = True
gdb_env_var = f"RAY_{process_type.upper()}_GDB"
if os.environ.get(gdb_env_var) == "1":
logger.info("Detected environment variable '%s'.", gdb_env_var)
use_gdb = True
if sum([
use_gdb,
use_valgrind,
use_valgrind_profiler,
use_perftools_profiler,
]) > 1:
raise ValueError(
"At most one of the 'use_gdb', 'use_valgrind', "
"'use_valgrind_profiler', and 'use_perftools_profiler' flags can "
"be used at a time.")
if env_updates is None:
env_updates = {}
if not isinstance(env_updates, dict):
raise ValueError("The 'env_updates' argument must be a dictionary.")
modified_env = os.environ.copy()
modified_env.update(env_updates)
if use_gdb:
if not use_tmux:
raise ValueError(
"If 'use_gdb' is true, then 'use_tmux' must be true as well.")
# TODO(suquark): Any better temp file creation here?
gdb_init_path = os.path.join(ray._private.utils.get_ray_temp_dir(),
f"gdb_init_{process_type}_{time.time()}")
ray_process_path = command[0]
ray_process_args = command[1:]
run_args = " ".join(["'{}'".format(arg) for arg in ray_process_args])
with open(gdb_init_path, "w") as gdb_init_file:
gdb_init_file.write(f"run {run_args}")
command = ["gdb", ray_process_path, "-x", gdb_init_path]
if use_valgrind:
command = [
"valgrind",
"--track-origins=yes",
"--leak-check=full",
"--show-leak-kinds=all",
"--leak-check-heuristics=stdstring",
"--error-exitcode=1",
] + command
if use_valgrind_profiler:
command = ["valgrind", "--tool=callgrind"] + command
if use_perftools_profiler:
modified_env["LD_PRELOAD"] = os.environ["PERFTOOLS_PATH"]
modified_env["CPUPROFILE"] = os.environ["PERFTOOLS_LOGFILE"]
if use_tmux:
# The command has to be created exactly as below to ensure that it
# works on all versions of tmux. (Tested with tmux 1.8-5, travis'
# version, and tmux 2.1)
command = ["tmux", "new-session", "-d", f"{' '.join(command)}"]
if fate_share:
assert ray._private.utils.detect_fate_sharing_support(), (
"kernel-level fate-sharing must only be specified if "
"detect_fate_sharing_support() has returned True")
def preexec_fn():
import signal
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
if fate_share and sys.platform.startswith("linux"):
ray._private.utils.set_kill_on_parent_death_linux()
win32_fate_sharing = fate_share and sys.platform == "win32"
# With Windows fate-sharing, we need special care:
# The process must be added to the job before it is allowed to execute.
# Otherwise, there's a race condition: the process might spawn children
# before the process itself is assigned to the job.
# After that point, its children will not be added to the job anymore.
CREATE_SUSPENDED = 0x00000004 # from Windows headers
process = ConsolePopen(
command,
env=modified_env,
cwd=cwd,
stdout=stdout_file,
stderr=stderr_file,
stdin=subprocess.PIPE if pipe_stdin else None,
preexec_fn=preexec_fn if sys.platform != "win32" else None,
creationflags=CREATE_SUSPENDED if win32_fate_sharing else 0)
if win32_fate_sharing:
try:
ray._private.utils.set_kill_child_on_death_win32(process)
psutil.Process(process.pid).resume()
except (psutil.Error, OSError):
process.kill()
raise
def _get_stream_name(stream):
if stream is not None:
try:
return stream.name
except AttributeError:
return str(stream)
return None
return ProcessInfo(
process=process,
stdout_file=_get_stream_name(stdout_file),
stderr_file=_get_stream_name(stderr_file),
use_valgrind=use_valgrind,
use_gdb=use_gdb,
use_valgrind_profiler=use_valgrind_profiler,
use_perftools_profiler=use_perftools_profiler,
use_tmux=use_tmux)
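Each debug flag above is detected through the same `RAY_<PROCESS_TYPE>_<TOOL>` environment-variable convention. Factored out, the check is a one-liner (helper name is illustrative):

```python
import os

def debug_flag_enabled(process_type: str, tool: str) -> bool:
    """True when e.g. RAY_RAYLET_GDB=1 is set in the environment."""
    return os.environ.get(f"RAY_{process_type.upper()}_{tool.upper()}") == "1"
```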
def wait_for_redis_to_start(redis_ip_address, redis_port, password=None):
"""Wait for a Redis server to be available.
This is accomplished by creating a Redis client and sending a random
command to the server until the command gets through.
Args:
redis_ip_address (str): The IP address of the redis server.
redis_port (int): The port of the redis server.
password (str): The password of the redis server.
Raises:
Exception: An exception is raised if we could not connect with Redis.
"""
redis_client = redis.StrictRedis(
host=redis_ip_address, port=redis_port, password=password)
# Wait for the Redis server to start.
num_retries = ray_constants.START_REDIS_WAIT_RETRIES
delay = 0.001
for i in range(num_retries):
try:
# Run some random command and see if it worked.
logger.debug(
"Waiting for redis server at {}:{} to respond...".format(
redis_ip_address, redis_port))
redis_client.client_list()
# If the Redis service is delayed getting set up for any reason, we may
# get a redis.ConnectionError: Error 111 connecting to host:port.
# Connection refused.
# Unfortunately, redis.ConnectionError is also the base class of
# redis.AuthenticationError. We *don't* want to obscure a
# redis.AuthenticationError, because that indicates the user provided a
# bad password. Thus a double except clause to ensure a
# redis.AuthenticationError isn't trapped here.
except redis.AuthenticationError as authEx:
raise RuntimeError("Unable to connect to Redis at {}:{}.".format(
redis_ip_address, redis_port)) from authEx
except redis.ConnectionError as connEx:
if i >= num_retries - 1:
raise RuntimeError(
f"Unable to connect to Redis at {redis_ip_address}:"
f"{redis_port} after {num_retries} retries. Check that "
f"{redis_ip_address}:{redis_port} is reachable from this "
"machine. If it is not, your firewall may be blocking "
"this port. If the problem is a flaky connection, try "
"setting the environment variable "
"`RAY_START_REDIS_WAIT_RETRIES` to increase the number of"
" attempts to ping the Redis server.") from connEx
# Wait a little bit.
time.sleep(delay)
delay *= 2
else:
break
else:
raise RuntimeError(
f"Unable to connect to Redis (after {num_retries} retries). "
"If the Redis instance is on a different machine, check that "
"your firewall and relevant Ray ports are configured properly. "
"You can also set the environment variable "
"`RAY_START_REDIS_WAIT_RETRIES` to increase the number of "
"attempts to ping the Redis server.")
def _compute_version_info():
"""Compute the versions of Python, and Ray.
Returns:
A tuple containing the version information.
"""
ray_version = ray.__version__
python_version = ".".join(map(str, sys.version_info[:3]))
return ray_version, python_version
def _put_version_info_in_redis(redis_client):
"""Store version information in Redis.
This will be used to detect if workers or drivers are started using
different versions of Python, or Ray.
Args:
redis_client: A client for the primary Redis shard.
"""
redis_client.set("VERSION_INFO", json.dumps(_compute_version_info()))
def check_version_info(redis_client):
"""Check if various version info of this process is correct.
This will be used to detect if workers or drivers are started using
different versions of Python, or Ray. If the version
information is not present in Redis, then no check is done.
Args:
redis_client: A client for the primary Redis shard.
Raises:
Exception: An exception is raised if there is a version mismatch.
"""
redis_reply = redis_client.get("VERSION_INFO")
# Don't do the check if there is no version information in Redis. This
# is to make it easier to do things like start the processes by hand.
if redis_reply is None:
return
true_version_info = tuple(
json.loads(ray._private.utils.decode(redis_reply)))
version_info = _compute_version_info()
if version_info != true_version_info:
node_ip_address = get_node_ip_address()
error_message = ("Version mismatch: The cluster was started with:\n"
" Ray: " + true_version_info[0] + "\n"
" Python: " + true_version_info[1] + "\n"
"This process on node " + node_ip_address +
" was started with:" + "\n"
" Ray: " + version_info[0] + "\n"
" Python: " + version_info[1] + "\n")
if version_info[:2] != true_version_info[:2]:
raise RuntimeError(error_message)
else:
logger.warning(error_message)
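`check_version_info` compares `(ray_version, python_version)` tuples that were round-tripped through JSON in Redis. The comparison in isolation, with illustrative helper names:

```python
import json
import sys

def compute_version_info(ray_version: str):
    """Build the (ray_version, python_version) tuple stored in Redis."""
    python_version = ".".join(map(str, sys.version_info[:3]))
    return ray_version, python_version

def versions_match(stored_json: str, current) -> bool:
    """Compare a stored JSON-encoded version tuple to the current one.

    JSON decodes tuples as lists, so both sides are normalized to tuples."""
    return tuple(json.loads(stored_json)) == tuple(current)
```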
def start_reaper(fate_share=None):
"""Start the reaper process.
This is a lightweight process that simply
waits for its parent process to die and then terminates its own
process group. This allows us to ensure that ray processes are always
terminated properly so long as that process itself isn't SIGKILLed.
Returns:
ProcessInfo for the process that was started.
"""
# Make ourselves a process group leader so that the reaper can clean
# up other ray processes without killing the process group of the
# process that started us.
try:
if sys.platform != "win32":
os.setpgrp()
except OSError as e:
errcode = e.errno
if errcode == errno.EPERM and os.getpgrp() == os.getpid():
# Nothing to do; we're already a session leader.
pass
else:
logger.warning("setpgrp failed, processes may not be "
"cleaned up properly: {}.".format(e))
# Don't start the reaper in this case as it could result in killing
# other user processes.
return None
reaper_filepath = os.path.join(RAY_PATH, RAY_PRIVATE_DIR,
"ray_process_reaper.py")
command = [sys.executable, "-u", reaper_filepath]
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_REAPER,
pipe_stdin=True,
fate_share=fate_share)
return process_info
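The fate-sharing mechanism implied by `pipe_stdin=True` can be sketched as follows: the reaper blocks reading a pipe inherited from its parent, and when the parent exits, the pipe hits EOF, signaling that cleanup should begin. This is a simplified illustration of the idea, not the actual `ray_process_reaper.py` implementation:

```python
import os


def wait_for_parent_death(pipe_read_fd):
    """Block until the write end of the pipe (held by the parent) closes."""
    while True:
        # os.read() returns b"" only at EOF, i.e. when every write end
        # of the pipe has been closed (such as when the parent exits).
        if os.read(pipe_read_fd, 1) == b"":
            return
```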
def start_redis(node_ip_address,
redirect_files,
resource_spec,
port=None,
redis_shard_ports=None,
num_redis_shards=1,
redis_max_clients=None,
redirect_worker_output=False,
password=None,
fate_share=None,
external_addresses=None,
port_denylist=None):
"""Start the Redis global state store.
Args:
node_ip_address: The IP address of the current node. This is only used
for recording the log filenames in Redis.
redirect_files: The list of (stdout, stderr) file pairs.
resource_spec (ResourceSpec): Resources for the node.
port (int): If provided, the primary Redis shard will be started on
this port.
redis_shard_ports: A list of the ports to use for the non-primary Redis
shards.
num_redis_shards (int): If provided, the number of Redis shards to
start, in addition to the primary one. The default value is one
shard.
redis_max_clients: If this is provided, Ray will attempt to configure
Redis with this maxclients number.
redirect_worker_output (bool): True if worker output should be
redirected to a file and false otherwise. Workers will have access
to this value when they start up.
password (str): Prevents external clients without the password
from connecting to Redis if provided.
port_denylist (set): A set of denylist ports that shouldn't
be used when allocating a new port.
Returns:
A tuple of the address for the primary Redis shard, a list of
addresses for the remaining shards, and the processes that were
started.
"""
if len(redirect_files) != 1 + num_redis_shards:
raise ValueError("The number of redirect file pairs should be equal "
"to the number of redis shards (including the "
"primary shard) we will start.")
if redis_shard_ports is None:
redis_shard_ports = num_redis_shards * [None]
elif len(redis_shard_ports) != num_redis_shards:
raise RuntimeError("The number of Redis shard ports does not match "
"the number of Redis shards.")
processes = []
if external_addresses is not None:
primary_redis_address = external_addresses[0]
        primary_redis_ip, port = primary_redis_address.split(":")
port = int(port)
redis_address = address(primary_redis_ip, port)
primary_redis_client = create_redis_client(
"%s:%s" % (primary_redis_ip, port), password=password)
# Deleting the key to avoid duplicated rpush.
primary_redis_client.delete("RedisShards")
else:
redis_executable = REDIS_EXECUTABLE
redis_modules = [REDIS_MODULE]
redis_stdout_file, redis_stderr_file = redirect_files[0]
# If no port is given, fallback to default Redis port for the primary
# shard.
if port is None:
port = ray_constants.DEFAULT_PORT
num_retries = 20
else:
num_retries = 1
# Start the primary Redis shard.
port, p = _start_redis_instance(
redis_executable,
modules=redis_modules,
port=port,
password=password,
redis_max_clients=redis_max_clients,
num_retries=num_retries,
# Below we use None to indicate no limit on the memory of the
# primary Redis shard.
redis_max_memory=None,
stdout_file=redis_stdout_file,
stderr_file=redis_stderr_file,
fate_share=fate_share,
port_denylist=port_denylist)
processes.append(p)
redis_address = address(node_ip_address, port)
primary_redis_client = redis.StrictRedis(
host=node_ip_address, port=port, password=password)
# Register the number of Redis shards in the primary shard, so that clients
# know how many redis shards to expect under RedisShards.
primary_redis_client.set("NumRedisShards", str(num_redis_shards))
# Put the redirect_worker_output bool in the Redis shard so that workers
# can access it and know whether or not to redirect their output.
    primary_redis_client.set("RedirectOutput",
                             1 if redirect_worker_output else 0)
# Init job counter to GCS.
primary_redis_client.set("JobCounter", 0)
# Store version information in the primary Redis shard.
_put_version_info_in_redis(primary_redis_client)
# Calculate the redis memory.
assert resource_spec.resolved()
redis_max_memory = resource_spec.redis_max_memory
# Start other Redis shards. Each Redis shard logs to a separate file,
# prefixed by "redis-<shard number>".
redis_shards = []
# If Redis shard ports are not provided, start the port range of the
# other Redis shards at a high, random port.
last_shard_port = new_port(denylist=port_denylist) - 1
for i in range(num_redis_shards):
if external_addresses is not None and len(external_addresses) > 1:
shard_address = external_addresses[i + 1]
else:
redis_stdout_file, redis_stderr_file = redirect_files[i + 1]
redis_executable = REDIS_EXECUTABLE
redis_modules = [REDIS_MODULE]
redis_shard_port = redis_shard_ports[i]
# If no shard port is given, try to start this shard's Redis
# instance on the port right after the last shard's port.
if redis_shard_port is None:
redis_shard_port = last_shard_port + 1
num_retries = 20
else:
num_retries = 1
redis_shard_port, p = _start_redis_instance(
redis_executable,
modules=redis_modules,
port=redis_shard_port,
password=password,
redis_max_clients=redis_max_clients,
num_retries=num_retries,
redis_max_memory=redis_max_memory,
stdout_file=redis_stdout_file,
stderr_file=redis_stderr_file,
fate_share=fate_share,
port_denylist=port_denylist)
processes.append(p)
shard_address = address(node_ip_address, redis_shard_port)
last_shard_port = redis_shard_port
redis_shards.append(shard_address)
# Store redis shard information in the primary redis shard.
primary_redis_client.rpush("RedisShards", shard_address)
return redis_address, redis_shards, processes
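`new_port(denylist=...)` is used above to place the non-primary shards at a high random port. A minimal sketch of that contract follows; this is a hypothetical implementation, not Ray's actual helper (which also checks port availability):

```python
import random


def pick_port(denylist=None, low=10000, high=65535, rng=random):
    """Pick a random port in [low, high] that is not in the denylist."""
    denylist = denylist or set()
    for _ in range(1000):
        port = rng.randint(low, high)
        if port not in denylist:
            return port
    raise RuntimeError("Could not find a port outside the denylist.")
```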
def _start_redis_instance(executable,
modules,
port,
redis_max_clients=None,
num_retries=20,
stdout_file=None,
stderr_file=None,
password=None,
redis_max_memory=None,
fate_share=None,
port_denylist=None):
"""Start a single Redis server.
Notes:
We will initially try to start the Redis instance at the given port,
and then try at most `num_retries - 1` times to start the Redis
instance at successive random ports.
Args:
executable (str): Full path of the redis-server executable.
modules (list of str): A list of pathnames, pointing to the redis
module(s) that will be loaded in this redis server.
port (int): Try to start a Redis server at this port.
redis_max_clients: If this is provided, Ray will attempt to configure
Redis with this maxclients number.
num_retries (int): The number of times to attempt to start Redis at
successive ports.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
password (str): Prevents external clients without the password
from connecting to Redis if provided.
redis_max_memory: The max amount of memory (in bytes) to allow redis
to use, or None for no limit. Once the limit is exceeded, redis
will start LRU eviction of entries.
port_denylist (set): A set of denylist ports that shouldn't
be used when allocating a new port.
Returns:
A tuple of the port used by Redis and ProcessInfo for the process that
was started. If a port is passed in, then the returned port value
is the same.
Raises:
Exception: An exception is raised if Redis could not be started.
"""
assert os.path.isfile(executable)
for module in modules:
assert os.path.isfile(module)
counter = 0
load_module_args = []
for module in modules:
load_module_args += ["--loadmodule", module]
while counter < num_retries:
# Construct the command to start the Redis server.
command = [executable]
if password:
if " " in password:
raise ValueError("Spaces not permitted in redis password.")
command += ["--requirepass", password]
command += (
["--port", str(port), "--loglevel", "warning"] + load_module_args)
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_REDIS_SERVER,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
time.sleep(0.1)
        # Check if Redis successfully started (or at least that the
        # executable did not exit within 0.1 seconds).
if process_info.process.poll() is None:
break
port = new_port(denylist=port_denylist)
counter += 1
if counter == num_retries:
        raise RuntimeError("Couldn't start Redis. "
                           "Check log files: {} {}".format(
                               stdout_file.name if stdout_file is not None
                               else "<stdout>",
                               stderr_file.name if stderr_file is not None
                               else "<stderr>"))
# Create a Redis client just for configuring Redis.
redis_client = redis.StrictRedis(
host="127.0.0.1", port=port, password=password)
# Wait for the Redis server to start.
wait_for_redis_to_start("127.0.0.1", port, password=password)
# Configure Redis to generate keyspace notifications. TODO(rkn): Change
# this to only generate notifications for the export keys.
redis_client.config_set("notify-keyspace-events", "Kl")
# Configure Redis to not run in protected mode so that processes on other
# hosts can connect to it. TODO(rkn): Do this in a more secure way.
redis_client.config_set("protected-mode", "no")
# Discard old task and object metadata.
if redis_max_memory is not None:
redis_client.config_set("maxmemory", str(redis_max_memory))
redis_client.config_set("maxmemory-policy", "allkeys-lru")
redis_client.config_set("maxmemory-samples", "10")
logger.debug("Starting Redis shard with {} GB max memory.".format(
round(redis_max_memory / 1e9, 2)))
# If redis_max_clients is provided, attempt to raise the number of maximum
# number of Redis clients.
if redis_max_clients is not None:
redis_client.config_set("maxclients", str(redis_max_clients))
elif resource is not None:
# If redis_max_clients is not provided, determine the current ulimit.
# We will use this to attempt to raise the maximum number of Redis
# clients.
current_max_clients = int(
redis_client.config_get("maxclients")["maxclients"])
# The below command should be the same as doing ulimit -n.
ulimit_n = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
# The quantity redis_client_buffer appears to be the required buffer
# between the maximum number of redis clients and ulimit -n. That is,
# if ulimit -n returns 10000, then we can set maxclients to
# 10000 - redis_client_buffer.
redis_client_buffer = 32
if current_max_clients < ulimit_n - redis_client_buffer:
redis_client.config_set("maxclients",
ulimit_n - redis_client_buffer)
# Increase the hard and soft limits for the redis client pubsub buffer to
# 128MB. This is a hack to make it less likely for pubsub messages to be
# dropped and for pubsub connections to therefore be killed.
cur_config = (redis_client.config_get("client-output-buffer-limit")[
"client-output-buffer-limit"])
cur_config_list = cur_config.split()
assert len(cur_config_list) == 12
cur_config_list[8:] = ["pubsub", "268435456", "268435456", "60"]
redis_client.config_set("client-output-buffer-limit",
" ".join(cur_config_list))
# Put a time stamp in Redis to indicate when it was started.
redis_client.set("redis_start_time", time.time())
return port, process_info
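The maxclients adjustment above compares Redis' current limit against the process file-descriptor limit (`ulimit -n`) minus a 32-descriptor buffer. The decision logic, isolated into a pure function with a hypothetical name:

```python
def target_max_clients(current_max_clients, ulimit_n, fd_buffer=32):
    """Return a raised maxclients value, or None if no change is needed.

    The buffer leaves a few file descriptors below `ulimit -n` for
    Redis' own use, mirroring the constant used in the startup code.
    """
    if current_max_clients < ulimit_n - fd_buffer:
        return ulimit_n - fd_buffer
    return None
```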
def start_log_monitor(redis_address,
logs_dir,
stdout_file=None,
stderr_file=None,
redis_password=None,
fate_share=None,
max_bytes=0,
backup_count=0):
"""Start a log monitor process.
Args:
redis_address (str): The address of the Redis instance.
logs_dir (str): The directory of logging files.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
redis_password (str): The password of the redis server.
max_bytes (int): Log rotation parameter. Corresponding to
RotatingFileHandler's maxBytes.
backup_count (int): Log rotation parameter. Corresponding to
RotatingFileHandler's backupCount.
Returns:
ProcessInfo for the process that was started.
"""
log_monitor_filepath = os.path.join(RAY_PATH, RAY_PRIVATE_DIR,
"log_monitor.py")
command = [
sys.executable, "-u", log_monitor_filepath,
f"--redis-address={redis_address}", f"--logs-dir={logs_dir}",
f"--logging-rotate-bytes={max_bytes}",
f"--logging-rotate-backup-count={backup_count}"
]
if redis_password:
command += ["--redis-password", redis_password]
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_LOG_MONITOR,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
def start_dashboard(require_dashboard,
host,
redis_address,
temp_dir,
logdir,
port=None,
stdout_file=None,
stderr_file=None,
redis_password=None,
fate_share=None,
max_bytes=0,
backup_count=0):
"""Start a dashboard process.
Args:
require_dashboard (bool): If true, this will raise an exception if we
fail to start the dashboard. Otherwise it will print a warning if
we fail to start the dashboard.
host (str): The host to bind the dashboard web server to.
        port (int): The port to bind the dashboard web server to.
            Defaults to 8265.
redis_address (str): The address of the Redis instance.
temp_dir (str): The temporary directory used for log files and
information for this Ray session.
logdir (str): The log directory used to generate dashboard log.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
redis_password (str): The password of the redis server.
max_bytes (int): Log rotation parameter. Corresponding to
RotatingFileHandler's maxBytes.
backup_count (int): Log rotation parameter. Corresponding to
RotatingFileHandler's backupCount.
Returns:
ProcessInfo for the process that was started.
"""
try:
# Make sure port is available.
if port is None:
port_retries = 50
port = ray_constants.DEFAULT_DASHBOARD_PORT
else:
port_retries = 0
port_test_socket = socket.socket()
port_test_socket.setsockopt(
socket.SOL_SOCKET,
socket.SO_REUSEADDR,
1,
)
try:
port_test_socket.bind((host, port))
port_test_socket.close()
except socket.error as e:
if e.errno in {48, 98}: # address already in use.
raise ValueError(
f"Failed to bind to {host}:{port} because it's "
"already occupied. You can use `ray start "
"--dashboard-port ...` or `ray.init(dashboard_port=..."
")` to select a different port.")
else:
raise e
# Make sure the process can start.
try:
import aiohttp # noqa: F401
import grpc # noqa: F401
except ImportError:
            warning_message = (
                "Missing dependencies for dashboard. Please run "
                "'pip install aiohttp grpcio'.")
raise ImportError(warning_message)
# Start the dashboard process.
dashboard_dir = "new_dashboard"
dashboard_filepath = os.path.join(RAY_PATH, dashboard_dir,
"dashboard.py")
command = [
sys.executable, "-u", dashboard_filepath, f"--host={host}",
f"--port={port}", f"--port-retries={port_retries}",
f"--redis-address={redis_address}", f"--temp-dir={temp_dir}",
f"--log-dir={logdir}", f"--logging-rotate-bytes={max_bytes}",
f"--logging-rotate-backup-count={backup_count}"
]
if redis_password:
command += ["--redis-password", redis_password]
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_DASHBOARD,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
# Retrieve the dashboard url
redis_client = ray._private.services.create_redis_client(
redis_address, redis_password)
dashboard_url = None
dashboard_returncode = None
for _ in range(200):
dashboard_url = redis_client.get(ray_constants.REDIS_KEY_DASHBOARD)
if dashboard_url is not None:
dashboard_url = dashboard_url.decode("utf-8")
break
dashboard_returncode = process_info.process.poll()
if dashboard_returncode is not None:
break
# This is often on the critical path of ray.init() and ray start,
# so we need to poll often.
time.sleep(0.1)
if dashboard_url is None:
dashboard_log = os.path.join(logdir, "dashboard.log")
returncode_str = (f", return code {dashboard_returncode}"
if dashboard_returncode is not None else "")
# Read last n lines of dashboard log. The log file may be large.
n = 10
lines = []
try:
with open(dashboard_log, "rb") as f:
with mmap.mmap(
f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
end = mm.size()
for _ in range(n):
sep = mm.rfind(b"\n", 0, end - 1)
if sep == -1:
break
lines.append(mm[sep + 1:end].decode("utf-8"))
end = sep
lines.append(f" The last {n} lines of {dashboard_log}:")
except Exception as e:
                raise Exception(f"Failed to read dashboard log: {e}")
            last_log_str = "\n".join(reversed(lines))
raise Exception("Failed to start the dashboard"
f"{returncode_str}.{last_log_str}")
logger.info("View the Ray dashboard at %s%shttp://%s%s%s",
colorama.Style.BRIGHT, colorama.Fore.GREEN, dashboard_url,
colorama.Fore.RESET, colorama.Style.NORMAL)
return dashboard_url, process_info
except Exception as e:
if require_dashboard:
            raise e
else:
logger.error(f"Failed to start the dashboard: {e}")
return None, None
def start_gcs_server(redis_address,
stdout_file=None,
stderr_file=None,
redis_password=None,
config=None,
fate_share=None,
gcs_server_port=None,
metrics_agent_port=None,
node_ip_address=None):
"""Start a gcs server.
Args:
redis_address (str): The address that the Redis server is listening on.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
redis_password (str): The password of the redis server.
config (dict|None): Optional configuration that will
override defaults in RayConfig.
gcs_server_port (int): Port number of the gcs server.
        metrics_agent_port (int): The port to which the metrics agent is
            bound.
        node_ip_address (str): The IP address of the node where the GCS
            server starts.
Returns:
ProcessInfo for the process that was started.
"""
gcs_ip_address, gcs_port = redis_address.split(":")
redis_password = redis_password or ""
config_str = serialize_config(config)
if gcs_server_port is None:
gcs_server_port = 0
command = [
GCS_SERVER_EXECUTABLE,
f"--redis_address={gcs_ip_address}",
f"--redis_port={gcs_port}",
f"--config_list={config_str}",
f"--gcs_server_port={gcs_server_port}",
f"--metrics-agent-port={metrics_agent_port}",
f"--node-ip-address={node_ip_address}",
]
if redis_password:
command += [f"--redis_password={redis_password}"]
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_GCS_SERVER,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
def start_raylet(redis_address,
node_ip_address,
node_manager_port,
raylet_name,
plasma_store_name,
worker_path,
setup_worker_path,
worker_setup_hook,
runtime_env_setup_hook,
temp_dir,
session_dir,
resource_dir,
log_dir,
resource_spec,
plasma_directory,
object_store_memory,
min_worker_port=None,
max_worker_port=None,
worker_port_list=None,
object_manager_port=None,
redis_password=None,
metrics_agent_port=None,
metrics_export_port=None,
use_valgrind=False,
use_profiler=False,
stdout_file=None,
stderr_file=None,
config=None,
huge_pages=False,
fate_share=None,
socket_to_use=None,
start_initial_python_workers_for_first_job=False,
max_bytes=0,
backup_count=0):
"""Start a raylet, which is a combined local scheduler and object manager.
Args:
redis_address (str): The address of the primary Redis server.
node_ip_address (str): The IP address of this node.
node_manager_port(int): The port to use for the node manager. If it's
0, a random port will be used.
raylet_name (str): The name of the raylet socket to create.
plasma_store_name (str): The name of the plasma store socket to connect
to.
worker_path (str): The path of the Python file that new worker
processes will execute.
setup_worker_path (str): The path of the Python file that will run
worker_setup_hook to set up the environment for the worker process.
worker_setup_hook (str): The module path to a Python function that will
be imported and run to set up the environment for the worker.
runtime_env_setup_hook (str): The module path to a Python function that
will be imported and run to set up the runtime env in agent.
temp_dir (str): The path of the temporary directory Ray will use.
session_dir (str): The path of this session.
        resource_dir (str): The path of the resource directory for this
            session.
log_dir (str): The path of the dir where log files are created.
resource_spec (ResourceSpec): Resources for this raylet.
object_manager_port: The port to use for the object manager. If this is
None, then the object manager will choose its own port.
min_worker_port (int): The lowest port number that workers will bind
on. If not set, random ports will be chosen.
max_worker_port (int): The highest port number that workers will bind
on. If set, min_worker_port must also be set.
redis_password: The password to use when connecting to Redis.
        metrics_agent_port (int): The port to which the metrics agent is
            bound.
        metrics_export_port (int): The port at which metrics are exposed.
use_valgrind (bool): True if the raylet should be started inside
of valgrind. If this is True, use_profiler must be False.
use_profiler (bool): True if the raylet should be started inside
a profiler. If this is True, use_valgrind must be False.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
config (dict|None): Optional Raylet configuration that will
override defaults in RayConfig.
max_bytes (int): Log rotation parameter. Corresponding to
RotatingFileHandler's maxBytes.
backup_count (int): Log rotation parameter. Corresponding to
RotatingFileHandler's backupCount.
Returns:
ProcessInfo for the process that was started.
"""
    assert node_manager_port is not None and isinstance(
        node_manager_port, int)
if use_valgrind and use_profiler:
raise ValueError("Cannot use valgrind and profiler at the same time.")
assert resource_spec.resolved()
static_resources = resource_spec.to_resource_dict()
# Limit the number of workers that can be started in parallel by the
# raylet. However, make sure it is at least 1.
num_cpus_static = static_resources.get("CPU", 0)
maximum_startup_concurrency = max(
1, min(multiprocessing.cpu_count(), num_cpus_static))
# Format the resource argument in a form like 'CPU,1.0,GPU,0,Custom,3'.
resource_argument = ",".join(
["{},{}".format(*kv) for kv in static_resources.items()])
gcs_ip_address, gcs_port = redis_address.split(":")
    has_java_command = shutil.which("java") is not None
ray_java_installed = False
try:
jars_dir = get_ray_jars_dir()
if os.path.exists(jars_dir):
ray_java_installed = True
except Exception:
pass
include_java = has_java_command and ray_java_installed
    if include_java:
java_worker_command = build_java_worker_command(
redis_address,
plasma_store_name,
raylet_name,
redis_password,
session_dir,
node_ip_address,
)
else:
java_worker_command = []
if os.path.exists(DEFAULT_WORKER_EXECUTABLE):
cpp_worker_command = build_cpp_worker_command(
"", redis_address, plasma_store_name, raylet_name, redis_password,
session_dir, log_dir, node_ip_address)
else:
cpp_worker_command = []
# Create the command that the Raylet will use to start workers.
# TODO(architkulkarni): Pipe in setup worker args separately instead of
# inserting them into start_worker_command and later erasing them if
# needed.
start_worker_command = [
sys.executable,
setup_worker_path,
f"--worker-setup-hook={worker_setup_hook}",
f"--session-dir={session_dir}",
worker_path,
f"--node-ip-address={node_ip_address}",
"--node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER",
f"--object-store-name={plasma_store_name}",
f"--raylet-name={raylet_name}",
f"--redis-address={redis_address}",
f"--temp-dir={temp_dir}",
f"--metrics-agent-port={metrics_agent_port}",
f"--logging-rotate-bytes={max_bytes}",
f"--logging-rotate-backup-count={backup_count}",
"RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER",
]
if redis_password:
start_worker_command += [f"--redis-password={redis_password}"]
# If the object manager port is None, then use 0 to cause the object
# manager to choose its own port.
if object_manager_port is None:
object_manager_port = 0
if min_worker_port is None:
min_worker_port = 0
if max_worker_port is None:
max_worker_port = 0
# Create agent command
agent_command = [
sys.executable,
"-u",
os.path.join(RAY_PATH, "new_dashboard/agent.py"),
f"--node-ip-address={node_ip_address}",
f"--redis-address={redis_address}",
f"--metrics-export-port={metrics_export_port}",
f"--dashboard-agent-port={metrics_agent_port}",
"--node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER",
f"--object-store-name={plasma_store_name}",
f"--raylet-name={raylet_name}",
f"--temp-dir={temp_dir}",
f"--session-dir={session_dir}",
f"--runtime-env-dir={resource_dir}",
f"--runtime-env-setup-hook={runtime_env_setup_hook}",
f"--log-dir={log_dir}",
f"--logging-rotate-bytes={max_bytes}",
f"--logging-rotate-backup-count={backup_count}",
]
if redis_password is not None and len(redis_password) != 0:
agent_command.append("--redis-password={}".format(redis_password))
command = [
RAYLET_EXECUTABLE,
f"--raylet_socket_name={raylet_name}",
f"--store_socket_name={plasma_store_name}",
f"--object_manager_port={object_manager_port}",
f"--min_worker_port={min_worker_port}",
f"--max_worker_port={max_worker_port}",
f"--node_manager_port={node_manager_port}",
f"--node_ip_address={node_ip_address}",
f"--redis_address={gcs_ip_address}",
f"--redis_port={gcs_port}",
f"--maximum_startup_concurrency={maximum_startup_concurrency}",
f"--static_resource_list={resource_argument}",
f"--python_worker_command={subprocess.list2cmdline(start_worker_command)}", # noqa
f"--java_worker_command={subprocess.list2cmdline(java_worker_command)}", # noqa
f"--cpp_worker_command={subprocess.list2cmdline(cpp_worker_command)}", # noqa
f"--redis_password={redis_password or ''}",
f"--temp_dir={temp_dir}",
f"--session_dir={session_dir}",
f"--resource_dir={resource_dir}",
f"--metrics-agent-port={metrics_agent_port}",
f"--metrics_export_port={metrics_export_port}",
f"--object_store_memory={object_store_memory}",
f"--plasma_directory={plasma_directory}",
]
if worker_port_list is not None:
command.append(f"--worker_port_list={worker_port_list}")
if start_initial_python_workers_for_first_job:
command.append("--num_initial_python_workers_for_first_job={}".format(
resource_spec.num_cpus))
command.append("--agent_command={}".format(
subprocess.list2cmdline(agent_command)))
if huge_pages:
command.append("--huge_pages")
if socket_to_use:
socket_to_use.close()
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_RAYLET,
use_valgrind=use_valgrind,
use_gdb=False,
use_valgrind_profiler=use_profiler,
use_perftools_profiler=("RAYLET_PERFTOOLS_PATH" in os.environ),
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
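The `--static_resource_list` flattening above reduces to a one-liner: given `{"CPU": 1.0, "GPU": 0}` it yields `"CPU,1.0,GPU,0"`. Extracted into a helper with a hypothetical name:

```python
def format_resource_argument(static_resources):
    """Flatten a resource dict into the raylet's comma-separated format."""
    return ",".join(
        "{},{}".format(name, quantity)
        for name, quantity in static_resources.items())
```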
def get_ray_jars_dir():
    """Return the directory where all Ray-related jars and their
    dependencies are located."""
current_dir = RAY_PATH
jars_dir = os.path.abspath(os.path.join(current_dir, "jars"))
    if not os.path.exists(jars_dir):
        raise RuntimeError("Ray jars are not packaged into ray. "
                           "Please build ray with java enabled "
                           "(set env var RAY_INSTALL_JAVA=1).")
    return jars_dir
def build_java_worker_command(
redis_address,
plasma_store_name,
raylet_name,
redis_password,
session_dir,
node_ip_address,
):
"""This method assembles the command used to start a Java worker.
Args:
redis_address (str): Redis address of GCS.
plasma_store_name (str): The name of the plasma store socket to connect
to.
raylet_name (str): The name of the raylet socket to create.
        redis_password (str): The password used to connect to Redis.
session_dir (str): The path of this session.
node_ip_address (str): The ip address for this node.
Returns:
        The command list for starting a Java worker.
"""
pairs = []
if redis_address is not None:
pairs.append(("ray.address", redis_address))
pairs.append(("ray.raylet.node-manager-port",
"RAY_NODE_MANAGER_PORT_PLACEHOLDER"))
if plasma_store_name is not None:
pairs.append(("ray.object-store.socket-name", plasma_store_name))
if raylet_name is not None:
pairs.append(("ray.raylet.socket-name", raylet_name))
if redis_password is not None:
pairs.append(("ray.redis.password", redis_password))
if node_ip_address is not None:
pairs.append(("ray.node-ip", node_ip_address))
pairs.append(("ray.home", RAY_HOME))
pairs.append(("ray.logging.dir", os.path.join(session_dir, "logs")))
pairs.append(("ray.session-dir", session_dir))
command = ["java"] + ["-D{}={}".format(*pair) for pair in pairs]
# Add ray jars path to java classpath
ray_jars = os.path.join(get_ray_jars_dir(), "*")
command += ["-cp", ray_jars]
command += ["RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER"]
command += ["io.ray.runtime.runner.worker.DefaultWorker"]
return command
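The property-assembly step above can be exercised in isolation: each `(key, value)` pair becomes a `-Dkey=value` JVM system-property flag. A sketch mirroring the list comprehension above (helper name hypothetical):

```python
def build_jvm_property_args(pairs):
    """Turn (key, value) pairs into java -D system-property arguments."""
    return ["java"] + ["-D{}={}".format(key, value) for key, value in pairs]
```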
def build_cpp_worker_command(cpp_worker_options, redis_address,
plasma_store_name, raylet_name, redis_password,
session_dir, log_dir, node_ip_address):
"""This method assembles the command used to start a CPP worker.
Args:
cpp_worker_options (list): The command options for CPP worker.
redis_address (str): Redis address of GCS.
plasma_store_name (str): The name of the plasma store socket to connect
to.
raylet_name (str): The name of the raylet socket to create.
        redis_password (str): The password used to connect to Redis.
session_dir (str): The path of this session.
log_dir (str): The path of logs.
node_ip_address (str): The ip address for this node.
Returns:
        The command list for starting a CPP worker.
"""
command = [
DEFAULT_WORKER_EXECUTABLE,
f"--ray-plasma-store-socket-name={plasma_store_name}",
f"--ray-raylet-socket-name={raylet_name}",
"--ray-node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER",
f"--ray-address={redis_address}",
f"--ray-redis-password={redis_password}",
f"--ray-session-dir={session_dir}", f"--ray-logs-dir={log_dir}",
f"--ray-node-ip-address={node_ip_address}"
]
return command
def determine_plasma_store_config(object_store_memory,
plasma_directory=None,
huge_pages=False):
"""Figure out how to configure the plasma object store.
This will determine which directory to use for the plasma store. On Linux,
we will try to use /dev/shm unless the shared memory file system is too
small, in which case we will fall back to /tmp. If any of the object store
memory or plasma directory parameters are specified by the user, then those
values will be preserved.
Args:
object_store_memory (int): The object store memory to use.
plasma_directory (str): The user-specified plasma directory parameter.
huge_pages (bool): The user-specified huge pages parameter.
Returns:
        A tuple of the plasma directory to use and the object store memory
        in bytes. If the directory is specified by the user, that value is
        preserved.
"""
if not isinstance(object_store_memory, int):
object_store_memory = int(object_store_memory)
if huge_pages and not (sys.platform == "linux"
or sys.platform == "linux2"):
raise ValueError("The huge_pages argument is only supported on "
"Linux.")
system_memory = ray._private.utils.get_system_memory()
# Determine which directory to use. By default, use /tmp on MacOS and
# /dev/shm on Linux, unless the shared-memory file system is too small,
# in which case we default to /tmp on Linux.
if plasma_directory is None:
if sys.platform == "linux" or sys.platform == "linux2":
shm_avail = ray._private.utils.get_shared_memory_bytes()
# Compare the requested memory size to the memory available in
# /dev/shm.
if shm_avail > object_store_memory:
plasma_directory = "/dev/shm"
elif (not os.environ.get("RAY_OBJECT_STORE_ALLOW_SLOW_STORAGE")
and object_store_memory >
ray_constants.REQUIRE_SHM_SIZE_THRESHOLD):
raise ValueError(
"The configured object store size ({} GB) exceeds "
"/dev/shm size ({} GB). This will harm performance. "
"Consider deleting files in /dev/shm or increasing its "
"size with "
"--shm-size in Docker. To ignore this warning, "
"set RAY_OBJECT_STORE_ALLOW_SLOW_STORAGE=1.".format(
object_store_memory / 1e9, shm_avail / 1e9))
else:
plasma_directory = ray._private.utils.get_user_temp_dir()
logger.warning(
"WARNING: The object store is using {} instead of "
"/dev/shm because /dev/shm has only {} bytes available. "
"This will harm performance! You may be able to free up "
"space by deleting files in /dev/shm. If you are inside a "
"Docker container, you can increase /dev/shm size by "
"passing '--shm-size={:.2f}gb' to 'docker run' (or add it "
"to the run_options list in a Ray cluster config). Make "
"sure to set this to more than 30% of available RAM.".
format(ray._private.utils.get_user_temp_dir(), shm_avail,
object_store_memory * (1.1) / (2**30)))
else:
plasma_directory = ray._private.utils.get_user_temp_dir()
# Do some sanity checks.
if object_store_memory > system_memory:
raise ValueError(
"The requested object store memory size is greater "
"than the total available memory.")
else:
plasma_directory = os.path.abspath(plasma_directory)
logger.info("object_store_memory is not verified when "
"plasma_directory is set.")
if not os.path.isdir(plasma_directory):
raise ValueError(f"The file {plasma_directory} does not "
"exist or is not a directory.")
if huge_pages and plasma_directory is None:
raise ValueError("If huge_pages is True, then the "
"plasma_directory argument must be provided.")
if object_store_memory < ray_constants.OBJECT_STORE_MINIMUM_MEMORY_BYTES:
raise ValueError("Attempting to cap object store memory usage at {} "
"bytes, but the minimum allowed is {} bytes.".format(
object_store_memory,
ray_constants.OBJECT_STORE_MINIMUM_MEMORY_BYTES))
# Print the object store memory using two decimal places.
logger.debug(
        "Starting the Plasma object store with {} GB memory "
"using {}.".format(
round(object_store_memory / 10**9, 2), plasma_directory))
return plasma_directory, object_store_memory
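The /dev/shm-versus-/tmp decision above can be exercised in isolation. The sketch below mirrors just that rule, with the platform string and memory sizes passed in explicitly rather than read through `ray._private.utils`; `choose_plasma_directory` is a hypothetical helper for illustration, not part of Ray:

```python
def choose_plasma_directory(object_store_memory, shm_avail, platform="linux"):
    """Mirror the default-directory rule: /dev/shm on Linux when the
    shared-memory file system can hold the object store, else /tmp."""
    if platform.startswith("linux") and shm_avail > object_store_memory:
        return "/dev/shm"
    return "/tmp"

# /dev/shm is large enough on Linux -> use it
print(choose_plasma_directory(10**9, 2 * 10**9, "linux"))
# /dev/shm too small -> fall back to /tmp
print(choose_plasma_directory(10**9, 10**8, "linux"))
# macOS never uses /dev/shm
print(choose_plasma_directory(10**9, 2 * 10**9, "darwin"))
```

Note that the comparison is strict, matching the code above: when `/dev/shm` is exactly the requested size, the fallback to `/tmp` is taken.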
def start_worker(node_ip_address,
object_store_name,
raylet_name,
redis_address,
worker_path,
temp_dir,
raylet_ip_address=None,
stdout_file=None,
stderr_file=None,
fate_share=None):
"""This method starts a worker process.
Args:
node_ip_address (str): The IP address of the node that this worker is
running on.
object_store_name (str): The socket name of the object store.
raylet_name (str): The socket name of the raylet server.
redis_address (str): The address that the Redis server is listening on.
worker_path (str): The path of the source code which the worker process
will run.
temp_dir (str): The path of the temp dir.
raylet_ip_address (str): The IP address of the worker's raylet. If not
provided, it defaults to the node_ip_address.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
Returns:
ProcessInfo for the process that was started.
"""
command = [
sys.executable,
"-u",
worker_path,
"--node-ip-address=" + node_ip_address,
"--object-store-name=" + object_store_name,
"--raylet-name=" + raylet_name,
"--redis-address=" + str(redis_address),
"--temp-dir=" + temp_dir,
]
if raylet_ip_address is not None:
command.append("--raylet-ip-address=" + raylet_ip_address)
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_WORKER,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
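`start_worker` follows the command-assembly pattern used throughout this module (also in `start_monitor` and `start_ray_client_server` below): build a base argv, then append optional flags only when their values are provided. A standalone sketch of that pattern — the helper name and flags here are illustrative, not Ray's:

```python
def build_command(executable, script, required, optional):
    """Build an argv list: required flags always, optional flags only
    when their value is not None."""
    command = [executable, "-u", script]
    for flag, value in required.items():
        command.append(f"--{flag}={value}")
    for flag, value in optional.items():
        if value is not None:
            command.append(f"--{flag}={value}")
    return command

cmd = build_command(
    "python", "worker.py",
    {"node-ip-address": "127.0.0.1", "temp-dir": "/tmp/ray"},
    {"raylet-ip-address": None},  # None-valued optional flag is omitted
)
print(cmd)
```

Keeping required and optional flags in separate mappings makes the "append only if set" convention explicit instead of scattering `if x is not None` checks after the list literal.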
def start_monitor(redis_address,
logs_dir,
stdout_file=None,
stderr_file=None,
autoscaling_config=None,
redis_password=None,
fate_share=None,
max_bytes=0,
backup_count=0,
monitor_ip=None):
"""Run a process to monitor the other processes.
Args:
redis_address (str): The address that the Redis server is listening on.
logs_dir(str): The path to the log directory.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
autoscaling_config: path to autoscaling config file.
redis_password (str): The password of the redis server.
max_bytes (int): Log rotation parameter. Corresponding to
RotatingFileHandler's maxBytes.
backup_count (int): Log rotation parameter. Corresponding to
RotatingFileHandler's backupCount.
monitor_ip (str): IP address of the machine that the monitor will be
run on. Can be excluded, but required for autoscaler metrics.
Returns:
ProcessInfo for the process that was started.
"""
monitor_path = os.path.join(RAY_PATH, AUTOSCALER_PRIVATE_DIR, "monitor.py")
command = [
sys.executable, "-u", monitor_path, f"--logs-dir={logs_dir}",
f"--redis-address={redis_address}",
f"--logging-rotate-bytes={max_bytes}",
f"--logging-rotate-backup-count={backup_count}"
]
if autoscaling_config:
command.append("--autoscaling-config=" + str(autoscaling_config))
if redis_password:
command.append("--redis-password=" + redis_password)
if monitor_ip:
command.append("--monitor-ip=" + monitor_ip)
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_MONITOR,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
def start_ray_client_server(redis_address,
ray_client_server_port,
stdout_file=None,
stderr_file=None,
redis_password=None,
fate_share=None,
server_type: str = "proxy",
serialized_runtime_env: Optional[str] = None,
session_dir: Optional[str] = None):
"""Run the server process of the Ray client.
Args:
ray_client_server_port (int): Port the Ray client server listens on.
stdout_file: A file handle opened for writing to redirect stdout to. If
no redirection should happen, then this should be None.
stderr_file: A file handle opened for writing to redirect stderr to. If
no redirection should happen, then this should be None.
redis_password (str): The password of the redis server.
server_type (str): Whether to start the proxy version of Ray Client.
serialized_runtime_env (str|None): If specified, the serialized
runtime_env to start the client server in.
Returns:
ProcessInfo for the process that was started.
"""
root_ray_dir = Path(__file__).resolve().parents[1]
setup_worker_path = os.path.join(root_ray_dir, "workers",
ray_constants.SETUP_WORKER_FILENAME)
conda_shim_flag = (
"--worker-setup-hook=" + ray_constants.DEFAULT_WORKER_SETUP_HOOK)
command = [
sys.executable,
setup_worker_path,
conda_shim_flag, # These two args are to use the shim process.
"-m",
"ray.util.client.server",
"--redis-address=" + str(redis_address),
"--port=" + str(ray_client_server_port),
"--mode=" + server_type
]
if redis_password:
command.append("--redis-password=" + redis_password)
if serialized_runtime_env:
command.append("--serialized-runtime-env=" + serialized_runtime_env)
if session_dir:
command.append(f"--session-dir={session_dir}")
process_info = start_ray_process(
command,
ray_constants.PROCESS_TYPE_RAY_CLIENT_SERVER,
stdout_file=stdout_file,
stderr_file=stderr_file,
fate_share=fate_share)
return process_info
c59822f4e267127003d865fb8f3441c7aaaba41c | 370 | py | Python | setup.py | shanefagan/snapcraft_gen_yaml | e25246257b74b2e2424e7a2b104df8e29255f8a3 | ["MIT"]
#!/usr/bin/env python
from setuptools import setup
setup(name='test',
version='0.1',
description='Bla',
author='Shane Fagan',
author_email='mail@example.com',
url='http://example.com',
license='MIT',
packages=['bin'],
install_requires=[],
scripts=['bin/main']
),
| 18.5 | 38 | 0.602703 |
74b515f0b4bb9c034c1ab91cf4d837158a646049 | 8,855 | py | Python | tests/transforms/test_tensor_transforms.py | lonestar686/torchsample | 79076d991019b7c81d72a0dd460536909e1f8c9f | ["MIT"]
"""
Tests for torchsample/transforms/tensor_transforms.py
"""
import torch as th
from torchsample.transforms import (ToTensor,
ToCuda,
ToFile,
ChannelsLast, HWC,
ChannelsFirst, CHW,
TypeCast,
AddChannel,
Transpose,
RangeNormalize,
StdNormalize,
RandomCrop,
SpecialCrop,
Pad,
RandomFlip,
RandomOrder)
# ----------------------------------------------------
## DATA SET ##
def gray2d_setup():
images = {}
x = th.zeros(1,30,30)
x[:,10:21,10:21] = 1
images['gray_01'] = x
x = th.zeros(1,30,40)
x[:,10:21,10:21] = 1
images['gray_02'] = x
return images
def multi_gray2d_setup():
old_imgs = gray2d_setup()
images = {}
for k,v in old_imgs.items():
images[k+'_2imgs'] = [v,v]
images[k+'_3imgs'] = [v,v,v]
images[k+'_4imgs'] = [v,v,v,v]
return images
def color2d_setup():
images = {}
x = th.zeros(3,30,30)
x[:,10:21,10:21] = 1
images['color_01'] = x
x = th.zeros(3,30,40)
x[:,10:21,10:21] = 1
images['color_02'] = x
return images
def multi_color2d_setup():
old_imgs = color2d_setup()
images = {}
for k,v in old_imgs.items():
images[k+'_2imgs'] = [v,v]
images[k+'_3imgs'] = [v,v,v]
images[k+'_4imgs'] = [v,v,v,v]
return images
# ----------------------------------------------------
# ----------------------------------------------------
## TFORMS SETUP ###
def ToTensor_setup():
tforms = {}
tforms['totensor'] = ToTensor()
return tforms
def ToCuda_setup():
tforms = {}
tforms['tocuda'] = ToCuda()
return tforms
def ToFile_setup():
tforms = {}
ROOT = '~/desktop/data/'
tforms['tofile_npy'] = ToFile(root=ROOT, fmt='npy')
tforms['tofile_pth'] = ToFile(root=ROOT, fmt='pth')
tforms['tofile_jpg'] = ToFile(root=ROOT, fmt='jpg')
tforms['tofile_png'] = ToFile(root=ROOT, fmt='png')
return tforms
def ChannelsLast_setup():
tforms = {}
tforms['channels_last'] = ChannelsLast()
tforms['hwc'] = HWC()
return tforms
def ChannelsFirst_setup():
tforms = {}
tforms['channels_first'] = ChannelsFirst()
tforms['chw'] = CHW()
return tforms
def TypeCast_setup():
tforms = {}
tforms['byte'] = TypeCast('byte')
tforms['double'] = TypeCast('double')
tforms['float'] = TypeCast('float')
tforms['int'] = TypeCast('int')
tforms['long'] = TypeCast('long')
tforms['short'] = TypeCast('short')
return tforms
def AddChannel_setup():
tforms = {}
tforms['addchannel_axis0'] = AddChannel(axis=0)
tforms['addchannel_axis1'] = AddChannel(axis=1)
tforms['addchannel_axis2'] = AddChannel(axis=2)
return tforms
def Transpose_setup():
tforms = {}
tforms['transpose_01'] = Transpose(0, 1)
tforms['transpose_02'] = Transpose(0, 2)
tforms['transpose_10'] = Transpose(1, 0)
tforms['transpose_12'] = Transpose(1, 2)
tforms['transpose_20'] = Transpose(2, 0)
tforms['transpose_21'] = Transpose(2, 1)
return tforms
def RangeNormalize_setup():
tforms = {}
tforms['rangenorm_01'] = RangeNormalize(0, 1)
tforms['rangenorm_-11'] = RangeNormalize(-1, 1)
tforms['rangenorm_-33'] = RangeNormalize(-3, 3)
tforms['rangenorm_02'] = RangeNormalize(0, 2)
return tforms
def StdNormalize_setup():
tforms = {}
tforms['stdnorm'] = StdNormalize()
return tforms
def RandomCrop_setup():
tforms = {}
tforms['randomcrop_1010'] = RandomCrop((10,10))
tforms['randomcrop_510'] = RandomCrop((5,10))
tforms['randomcrop_105'] = RandomCrop((10,5))
tforms['randomcrop_99'] = RandomCrop((9,9))
tforms['randomcrop_79'] = RandomCrop((7,9))
tforms['randomcrop_97'] = RandomCrop((9,7))
return tforms
def SpecialCrop_setup():
tforms = {}
tforms['specialcrop_0_1010'] = SpecialCrop((10,10),0)
tforms['specialcrop_0_510'] = SpecialCrop((5,10),0)
tforms['specialcrop_0_105'] = SpecialCrop((10,5),0)
tforms['specialcrop_0_99'] = SpecialCrop((9,9),0)
tforms['specialcrop_0_79'] = SpecialCrop((7,9),0)
tforms['specialcrop_0_97'] = SpecialCrop((9,7),0)
tforms['specialcrop_1_1010'] = SpecialCrop((10,10),1)
tforms['specialcrop_1_510'] = SpecialCrop((5,10),1)
tforms['specialcrop_1_105'] = SpecialCrop((10,5),1)
tforms['specialcrop_1_99'] = SpecialCrop((9,9),1)
tforms['specialcrop_1_79'] = SpecialCrop((7,9),1)
tforms['specialcrop_1_97'] = SpecialCrop((9,7),1)
tforms['specialcrop_2_1010'] = SpecialCrop((10,10),2)
tforms['specialcrop_2_510'] = SpecialCrop((5,10),2)
tforms['specialcrop_2_105'] = SpecialCrop((10,5),2)
tforms['specialcrop_2_99'] = SpecialCrop((9,9),2)
tforms['specialcrop_2_79'] = SpecialCrop((7,9),2)
tforms['specialcrop_2_97'] = SpecialCrop((9,7),2)
tforms['specialcrop_3_1010'] = SpecialCrop((10,10),3)
tforms['specialcrop_3_510'] = SpecialCrop((5,10),3)
tforms['specialcrop_3_105'] = SpecialCrop((10,5),3)
tforms['specialcrop_3_99'] = SpecialCrop((9,9),3)
tforms['specialcrop_3_79'] = SpecialCrop((7,9),3)
tforms['specialcrop_3_97'] = SpecialCrop((9,7),3)
tforms['specialcrop_4_1010'] = SpecialCrop((10,10),4)
tforms['specialcrop_4_510'] = SpecialCrop((5,10),4)
tforms['specialcrop_4_105'] = SpecialCrop((10,5),4)
tforms['specialcrop_4_99'] = SpecialCrop((9,9),4)
tforms['specialcrop_4_79'] = SpecialCrop((7,9),4)
tforms['specialcrop_4_97'] = SpecialCrop((9,7),4)
return tforms
def Pad_setup():
tforms = {}
tforms['pad_4040'] = Pad((40,40))
tforms['pad_3040'] = Pad((30,40))
tforms['pad_4030'] = Pad((40,30))
tforms['pad_3939'] = Pad((39,39))
tforms['pad_3941'] = Pad((39,41))
tforms['pad_4139'] = Pad((41,39))
tforms['pad_4138'] = Pad((41,38))
tforms['pad_3841'] = Pad((38,41))
return tforms
def RandomFlip_setup():
tforms = {}
tforms['randomflip_h_01'] = RandomFlip(h=True, v=False)
tforms['randomflip_h_02'] = RandomFlip(h=True, v=False, p=0)
tforms['randomflip_h_03'] = RandomFlip(h=True, v=False, p=1)
tforms['randomflip_h_04'] = RandomFlip(h=True, v=False, p=0.3)
tforms['randomflip_v_01'] = RandomFlip(h=False, v=True)
tforms['randomflip_v_02'] = RandomFlip(h=False, v=True, p=0)
tforms['randomflip_v_03'] = RandomFlip(h=False, v=True, p=1)
tforms['randomflip_v_04'] = RandomFlip(h=False, v=True, p=0.3)
tforms['randomflip_hv_01'] = RandomFlip(h=True, v=True)
tforms['randomflip_hv_02'] = RandomFlip(h=True, v=True, p=0)
tforms['randomflip_hv_03'] = RandomFlip(h=True, v=True, p=1)
tforms['randomflip_hv_04'] = RandomFlip(h=True, v=True, p=0.3)
return tforms
def RandomOrder_setup():
tforms = {}
tforms['randomorder'] = RandomOrder()
return tforms
# ----------------------------------------------------
# ----------------------------------------------------
def test_image_transforms_runtime(verbose=1):
### MAKE TRANSFORMS ###
tforms = {}
tforms.update(ToTensor_setup())
tforms.update(ToCuda_setup())
#tforms.update(ToFile_setup())
tforms.update(ChannelsLast_setup())
tforms.update(ChannelsFirst_setup())
tforms.update(TypeCast_setup())
tforms.update(AddChannel_setup())
tforms.update(Transpose_setup())
tforms.update(RangeNormalize_setup())
tforms.update(StdNormalize_setup())
tforms.update(RandomCrop_setup())
tforms.update(SpecialCrop_setup())
tforms.update(Pad_setup())
tforms.update(RandomFlip_setup())
tforms.update(RandomOrder_setup())
### MAKE DATA
images = {}
images.update(gray2d_setup())
images.update(multi_gray2d_setup())
images.update(color2d_setup())
images.update(multi_color2d_setup())
successes =[]
failures = []
for im_key, im_val in images.items():
for tf_key, tf_val in tforms.items():
try:
if isinstance(im_val, (tuple,list)):
tf_val(*im_val)
else:
tf_val(im_val)
successes.append((im_key, tf_key))
            except Exception:
failures.append((im_key, tf_key))
if verbose > 0:
for k, v in failures:
print('%s - %s' % (k, v))
print('# SUCCESSES: ', len(successes))
print('# FAILURES: ' , len(failures))
if __name__ == '__main__':
test_image_transforms_runtime()
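The runtime test above cross-applies every transform to every image fixture and tallies failures rather than stopping at the first error. The same harness pattern, reduced to plain callables with no torch dependency (names here are illustrative):

```python
import math

def run_matrix(inputs, transforms):
    """Apply every transform to every input; collect (input, transform)
    keys into successes/failures instead of raising on the first error."""
    successes, failures = [], []
    for in_key, in_val in inputs.items():
        for tf_key, tf in transforms.items():
            try:
                if isinstance(in_val, (tuple, list)):
                    tf(*in_val)   # multi-image fixtures are unpacked
                else:
                    tf(in_val)
                successes.append((in_key, tf_key))
            except Exception:
                failures.append((in_key, tf_key))
    return successes, failures

succ, fail = run_matrix({"ok": 4.0, "bad": -1.0}, {"sqrt": math.sqrt})
print(succ)  # [('ok', 'sqrt')]
print(fail)  # [('bad', 'sqrt')]
```

Exhausting the full cross product gives one summary count per run, which is why the original prints `# SUCCESSES` and `# FAILURES` totals instead of asserting per-case.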
43620e8a6d0c1e51a1e1e2585d1542b574df1c2d | 2,626 | py | Python | spyne/util/appreg.py | DXist/spyne | f185e44c0cf3c71c99471133a44c17f4a47ab46e | ["BSD-3-Clause"]
#
# spyne - Copyright (C) Spyne contributors.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
#
"""
Module that contains the Spyne Application Registry.
"""
import logging
logger = logging.getLogger(__name__)
_applications = {}
try:
from collections import namedtuple
_ApplicationMetaData = namedtuple("_ApplicationMetaData",
['app', 'inst_stack', 'null', 'ostr'])
except ImportError: # python 2.5
class _ApplicationMetaData:
def __init__(self, app, inst_stack, null, ostr):
self.app = app
self.inst_stack = inst_stack
self.null = null
self.ostr = ostr
def register_application(app):
key = (app.tns, app.name)
from spyne.server.null import NullServer
try:
import traceback
stack = traceback.format_stack()
except ImportError:
stack = None
prev = _applications.get(key, None)
if prev is not None:
if hash(prev.app) == hash(app):
logger.debug("Application %r previously registered as %r is the same"
" as %r. Skipping." % (prev.app, key, app))
prev.inst_stack.append(stack)
else:
logger.warning("Overwriting application %r(%r)." % (key, app))
if prev.inst_stack is not None:
stack_traces = []
for s in prev.inst_stack:
if s is not None:
stack_traces.append(''.join(s))
logger.debug("Stack trace of the instantiation:\n%s" %
'====================\n'.join(stack_traces))
_applications[key] = _ApplicationMetaData(app=app, inst_stack=[stack],
null=NullServer(app), ostr=NullServer(app, ostr=True))
logger.debug("Registering %r as %r" % (app, key))
def get_application(tns, name):
return _applications.get((tns, name), None)
| 32.02439 | 81 | 0.624524 |
75719eceacd15af1d1ef3815c1f118e76f7c1d01 | 37,722 | py | Python | aries_cloudagent/protocols/present_proof/v1_0/tests/test_manager.py | vitalrev/aries-cloudagent-python | 7da54521eddd731a58281ae4647945a6ffba92a9 | ["Apache-2.0"]
import json
from time import time
from asynctest import TestCase as AsyncTestCase
from asynctest import mock as async_mock
from .....core.in_memory import InMemoryProfile
from .....indy.holder import IndyHolder
from .....indy.sdk.holder import IndySdkHolder
from .....indy.issuer import IndyIssuer
from .....ledger.base import BaseLedger
from .....messaging.request_context import RequestContext
from .....messaging.responder import BaseResponder, MockResponder
from .....storage.error import StorageNotFoundError
from .....indy.verifier import IndyVerifier
from .....indy.sdk.verifier import IndySdkVerifier
from ....didcomm_prefix import DIDCommPrefix
from .. import manager as test_module
from ..manager import PresentationManager, PresentationManagerError
from ..messages.presentation import Presentation
from ..messages.presentation_ack import PresentationAck
from ..messages.presentation_proposal import PresentationProposal
from ..messages.presentation_request import PresentationRequest
from ..messages.inner.presentation_preview import (
PresAttrSpec,
PresentationPreview,
PresPredSpec,
)
from ..models.presentation_exchange import V10PresentationExchange
from ..util.indy import indy_proof_req_preview2indy_requested_creds
CONN_ID = "connection_id"
ISSUER_DID = "NcYxiDXkpYi6ov5FcYDi1e"
S_ID = f"{ISSUER_DID}:2:vidya:1.0"
CD_ID = f"{ISSUER_DID}:3:CL:{S_ID}:tag1"
RR_ID = f"{ISSUER_DID}:4:{CD_ID}:CL_ACCUM:0"
PRES_PREVIEW = PresentationPreview(
attributes=[
PresAttrSpec(name="player", cred_def_id=CD_ID, value="Richie Knucklez"),
PresAttrSpec(
name="screenCapture",
cred_def_id=CD_ID,
mime_type="image/png",
value="aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
),
],
predicates=[
PresPredSpec(
name="highScore", cred_def_id=CD_ID, predicate=">=", threshold=1000000
)
],
)
PRES_PREVIEW_NAMES = PresentationPreview(
attributes=[
PresAttrSpec(
name="player", cred_def_id=CD_ID, value="Richie Knucklez", referent="0"
),
PresAttrSpec(
name="screenCapture",
cred_def_id=CD_ID,
mime_type="image/png",
value="aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
referent="0",
),
],
predicates=[
PresPredSpec(
name="highScore", cred_def_id=CD_ID, predicate=">=", threshold=1000000
)
],
)
PROOF_REQ_NAME = "name"
PROOF_REQ_VERSION = "1.0"
PROOF_REQ_NONCE = "12345"
NOW = int(time())
class TestPresentationManager(AsyncTestCase):
async def setUp(self):
self.profile = InMemoryProfile.test_profile()
injector = self.profile.context.injector
Ledger = async_mock.MagicMock(BaseLedger, autospec=True)
self.ledger = Ledger()
self.ledger.get_schema = async_mock.CoroutineMock(
return_value=async_mock.MagicMock()
)
self.ledger.get_credential_definition = async_mock.CoroutineMock(
return_value={"value": {"revocation": {"...": "..."}}}
)
self.ledger.get_revoc_reg_def = async_mock.CoroutineMock(
return_value={
"ver": "1.0",
"id": RR_ID,
"revocDefType": "CL_ACCUM",
"tag": RR_ID.split(":")[-1],
"credDefId": CD_ID,
"value": {
"IssuanceType": "ISSUANCE_BY_DEFAULT",
"maxCredNum": 1000,
"publicKeys": {"accumKey": {"z": "1 ..."}},
"tailsHash": "3MLjUFQz9x9n5u9rFu8Ba9C5bo4HNFjkPNc54jZPSNaZ",
"tailsLocation": "http://sample.ca/path",
},
}
)
self.ledger.get_revoc_reg_delta = async_mock.CoroutineMock(
return_value=(
{
"ver": "1.0",
"value": {"prevAccum": "1 ...", "accum": "21 ...", "issued": [1]},
},
NOW,
)
)
self.ledger.get_revoc_reg_entry = async_mock.CoroutineMock(
return_value=(
{
"ver": "1.0",
"value": {"prevAccum": "1 ...", "accum": "21 ...", "issued": [1]},
},
NOW,
)
)
injector.bind_instance(BaseLedger, self.ledger)
Holder = async_mock.MagicMock(IndyHolder, autospec=True)
self.holder = Holder()
get_creds = async_mock.CoroutineMock(
return_value=(
{
"cred_info": {
"referent": "dummy_reft",
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
"highScore": "1234560",
},
}
}, # leave this comma: return a tuple
)
)
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
self.holder.get_credential = async_mock.CoroutineMock(
return_value=json.dumps(
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": RR_ID,
"cred_rev_id": 1,
}
)
)
self.holder.create_presentation = async_mock.CoroutineMock(return_value="{}")
self.holder.create_revocation_state = async_mock.CoroutineMock(
return_value=json.dumps(
{
"witness": {"omega": "1 ..."},
"rev_reg": {"accum": "21 ..."},
"timestamp": NOW,
}
)
)
injector.bind_instance(IndyHolder, self.holder)
Verifier = async_mock.MagicMock(IndyVerifier, autospec=True)
self.verifier = Verifier()
self.verifier.verify_presentation = async_mock.CoroutineMock(
return_value="true"
)
injector.bind_instance(IndyVerifier, self.verifier)
self.manager = PresentationManager(self.profile)
async def test_record_eq(self):
same = [
V10PresentationExchange(
presentation_exchange_id="dummy-0",
thread_id="thread-0",
role=V10PresentationExchange.ROLE_PROVER,
)
] * 2
diff = [
V10PresentationExchange(
presentation_exchange_id="dummy-1",
role=V10PresentationExchange.ROLE_PROVER,
),
V10PresentationExchange(
presentation_exchange_id="dummy-0",
thread_id="thread-1",
role=V10PresentationExchange.ROLE_PROVER,
),
V10PresentationExchange(
presentation_exchange_id="dummy-1",
thread_id="thread-0",
role=V10PresentationExchange.ROLE_VERIFIER,
),
]
for i in range(len(same) - 1):
for j in range(i, len(same)):
assert same[i] == same[j]
for i in range(len(diff) - 1):
for j in range(i, len(diff)):
assert diff[i] == diff[j] if i == j else diff[i] != diff[j]
async def test_create_exchange_for_proposal(self):
proposal = PresentationProposal()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
PresentationProposal, "serialize", autospec=True
):
exchange = await self.manager.create_exchange_for_proposal(
CONN_ID, proposal, auto_present=None
)
save_ex.assert_called_once()
assert exchange.thread_id == proposal._thread_id
assert exchange.initiator == V10PresentationExchange.INITIATOR_SELF
assert exchange.role == V10PresentationExchange.ROLE_PROVER
assert exchange.state == V10PresentationExchange.STATE_PROPOSAL_SENT
async def test_receive_proposal(self):
connection_record = async_mock.MagicMock(connection_id=CONN_ID)
proposal = PresentationProposal()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex:
exchange = await self.manager.receive_proposal(proposal, connection_record)
save_ex.assert_called_once()
assert exchange.state == V10PresentationExchange.STATE_PROPOSAL_RECEIVED
async def test_create_bound_request(self):
comment = "comment"
proposal = PresentationProposal(presentation_proposal=PRES_PREVIEW)
exchange = V10PresentationExchange(
presentation_proposal_dict=proposal.serialize(),
role=V10PresentationExchange.ROLE_VERIFIER,
)
exchange.save = async_mock.CoroutineMock()
(ret_exchange, pres_req_msg) = await self.manager.create_bound_request(
presentation_exchange_record=exchange,
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
comment=comment,
)
assert ret_exchange is exchange
exchange.save.assert_called_once()
async def test_create_exchange_for_request(self):
request = async_mock.MagicMock()
request.indy_proof_request = async_mock.MagicMock()
request._thread_id = "dummy"
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex:
exchange = await self.manager.create_exchange_for_request(CONN_ID, request)
save_ex.assert_called_once()
assert exchange.thread_id == request._thread_id
assert exchange.initiator == V10PresentationExchange.INITIATOR_SELF
assert exchange.role == V10PresentationExchange.ROLE_VERIFIER
assert exchange.state == V10PresentationExchange.STATE_REQUEST_SENT
async def test_receive_request(self):
exchange_in = V10PresentationExchange()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex:
exchange_out = await self.manager.receive_request(exchange_in)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_REQUEST_RECEIVED
async def test_create_presentation(self):
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
exchange_in.presentation_request = indy_proof_req
more_magic_rr = async_mock.MagicMock(
get_or_fetch_local_tails_path=async_mock.CoroutineMock(
return_value="/tmp/sample/tails/path"
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator, async_mock.patch.object(
test_module, "RevocationRegistry", autospec=True
) as mock_rr:
mock_rr.from_definition = async_mock.MagicMock(return_value=more_magic_rr)
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
assert not req_creds["self_attested_attributes"]
assert len(req_creds["requested_attributes"]) == 2
assert len(req_creds["requested_predicates"]) == 1
(exchange_out, pres_msg) = await self.manager.create_presentation(
exchange_in, req_creds
)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_PRESENTATION_SENT
async def test_create_presentation_proof_req_non_revoc_interval_none(self):
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
indy_proof_req["non_revoked"] = None # simulate interop with indy-vcx
exchange_in.presentation_request = indy_proof_req
more_magic_rr = async_mock.MagicMock(
get_or_fetch_local_tails_path=async_mock.CoroutineMock(
return_value="/tmp/sample/tails/path"
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator, async_mock.patch.object(
test_module, "RevocationRegistry", autospec=True
) as mock_rr:
mock_rr.from_definition = async_mock.MagicMock(return_value=more_magic_rr)
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
assert not req_creds["self_attested_attributes"]
assert len(req_creds["requested_attributes"]) == 2
assert len(req_creds["requested_predicates"]) == 1
(exchange_out, pres_msg) = await self.manager.create_presentation(
exchange_in, req_creds
)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_PRESENTATION_SENT
async def test_create_presentation_self_asserted(self):
PRES_PREVIEW_SELFIE = PresentationPreview(
attributes=[
PresAttrSpec(name="player", value="Richie Knucklez"),
PresAttrSpec(
name="screenCapture",
mime_type="image/png",
value="aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
),
],
predicates=[
PresPredSpec(
name="highScore",
cred_def_id=None,
predicate=">=",
threshold=1000000,
)
],
)
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW_SELFIE.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
exchange_in.presentation_request = indy_proof_req
more_magic_rr = async_mock.MagicMock(
get_or_fetch_local_tails_path=async_mock.CoroutineMock(
return_value="/tmp/sample/tails/path"
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator, async_mock.patch.object(
test_module, "RevocationRegistry", autospec=True
) as mock_rr:
mock_rr.from_definition = async_mock.MagicMock(return_value=more_magic_rr)
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
assert len(req_creds["self_attested_attributes"]) == 3
assert not req_creds["requested_attributes"]
assert not req_creds["requested_predicates"]
(exchange_out, pres_msg) = await self.manager.create_presentation(
exchange_in, req_creds
)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_PRESENTATION_SENT
async def test_create_presentation_no_revocation(self):
Ledger = async_mock.MagicMock(BaseLedger, autospec=True)
self.ledger = Ledger()
self.ledger.get_schema = async_mock.CoroutineMock(
return_value=async_mock.MagicMock()
)
self.ledger.get_credential_definition = async_mock.CoroutineMock(
return_value={"value": {"revocation": None}}
)
self.profile.context.injector.bind_instance(BaseLedger, self.ledger)
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
exchange_in.presentation_request = indy_proof_req
Holder = async_mock.MagicMock(IndyHolder, autospec=True)
self.holder = Holder()
get_creds = async_mock.CoroutineMock(
return_value=(
{
"cred_info": {"referent": "dummy_reft"},
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
"highScore": "1234560",
},
}, # leave this comma: return a tuple
)
)
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
self.holder.get_credential = async_mock.CoroutineMock(
return_value=json.dumps(
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": None,
"cred_rev_id": None,
}
)
)
self.holder.create_presentation = async_mock.CoroutineMock(return_value="{}")
self.profile.context.injector.bind_instance(IndyHolder, self.holder)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator:
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
(exchange_out, pres_msg) = await self.manager.create_presentation(
exchange_in, req_creds
)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_PRESENTATION_SENT
async def test_create_presentation_bad_revoc_state(self):
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
exchange_in.presentation_request = indy_proof_req
Holder = async_mock.MagicMock(IndyHolder, autospec=True)
self.holder = Holder()
get_creds = async_mock.CoroutineMock(
return_value=(
{
"cred_info": {"referent": "dummy_reft"},
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
"highScore": "1234560",
},
}, # leave this comma: return a tuple
)
)
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
self.holder.get_credential = async_mock.CoroutineMock(
return_value=json.dumps(
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": RR_ID,
"cred_rev_id": 1,
}
)
)
self.holder.create_presentation = async_mock.CoroutineMock(return_value="{}")
self.holder.create_revocation_state = async_mock.CoroutineMock(
side_effect=test_module.IndyHolderError("Problem", {"message": "Nope"})
)
self.profile.context.injector.bind_instance(IndyHolder, self.holder)
more_magic_rr = async_mock.MagicMock(
get_or_fetch_local_tails_path=async_mock.CoroutineMock(
return_value="/tmp/sample/tails/path"
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator, async_mock.patch.object(
test_module, "RevocationRegistry", autospec=True
) as mock_rr:
mock_rr.from_definition = async_mock.MagicMock(return_value=more_magic_rr)
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
with self.assertRaises(test_module.IndyHolderError):
await self.manager.create_presentation(exchange_in, req_creds)
async def test_create_presentation_multi_matching_proposal_creds_names(self):
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW_NAMES.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
exchange_in.presentation_request = indy_proof_req
Holder = async_mock.MagicMock(IndyHolder, autospec=True)
self.holder = Holder()
get_creds = async_mock.CoroutineMock(
return_value=(
{
"cred_info": {
"referent": "dummy_reft_0",
"cred_def_id": CD_ID,
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
"highScore": "1234560",
},
}
},
{
"cred_info": {
"referent": "dummy_reft_1",
"cred_def_id": CD_ID,
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhbm90aGVyIHNjcmVlbiBjYXB0dXJl",
"highScore": "1515880",
},
}
},
)
)
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
self.holder.get_credential = async_mock.CoroutineMock(
return_value=json.dumps(
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": RR_ID,
"cred_rev_id": 1,
}
)
)
self.holder.create_presentation = async_mock.CoroutineMock(return_value="{}")
self.holder.create_revocation_state = async_mock.CoroutineMock(
return_value=json.dumps(
{
"witness": {"omega": "1 ..."},
"rev_reg": {"accum": "21 ..."},
"timestamp": NOW,
}
)
)
self.profile.context.injector.bind_instance(IndyHolder, self.holder)
more_magic_rr = async_mock.MagicMock(
get_or_fetch_local_tails_path=async_mock.CoroutineMock(
return_value="/tmp/sample/tails/path"
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
test_module, "AttachDecorator", autospec=True
) as mock_attach_decorator, async_mock.patch.object(
test_module, "RevocationRegistry", autospec=True
) as mock_rr:
mock_rr.from_definition = async_mock.MagicMock(return_value=more_magic_rr)
mock_attach_decorator.from_indy_dict = async_mock.MagicMock(
return_value=mock_attach_decorator
)
req_creds = await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, preview=PRES_PREVIEW_NAMES, holder=self.holder
)
assert not req_creds["self_attested_attributes"]
assert len(req_creds["requested_attributes"]) == 1
assert len(req_creds["requested_predicates"]) == 1
(exchange_out, pres_msg) = await self.manager.create_presentation(
exchange_in, req_creds
)
save_ex.assert_called_once()
assert exchange_out.state == V10PresentationExchange.STATE_PRESENTATION_SENT
async def test_no_matching_creds_for_proof_req(self):
exchange_in = V10PresentationExchange()
indy_proof_req = await PRES_PREVIEW.indy_proof_request(
name=PROOF_REQ_NAME,
version=PROOF_REQ_VERSION,
nonce=PROOF_REQ_NONCE,
ledger=self.ledger,
)
get_creds = async_mock.CoroutineMock(return_value=())
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
with self.assertRaises(ValueError):
await indy_proof_req_preview2indy_requested_creds(
indy_proof_req, holder=self.holder
)
get_creds = async_mock.CoroutineMock(
return_value=(
{
"cred_info": {"referent": "dummy_reft"},
"attrs": {
"player": "Richie Knucklez",
"screenCapture": "aW1hZ2luZSBhIHNjcmVlbiBjYXB0dXJl",
"highScore": "1234560",
},
}, # leave this comma: return a tuple
)
)
self.holder.get_credentials_for_presentation_request_by_referent = get_creds
async def test_receive_presentation(self):
connection_record = async_mock.MagicMock(connection_id=CONN_ID)
exchange_dummy = V10PresentationExchange(
presentation_proposal_dict={
"presentation_proposal": {
"@type": DIDCommPrefix.qualify_current(
"present-proof/1.0/presentation-preview"
),
"attributes": [
{"name": "favourite", "cred_def_id": CD_ID, "value": "potato"},
{"name": "icon", "cred_def_id": CD_ID, "value": "cG90YXRv"},
],
"predicates": [],
}
},
presentation_request={
"name": "proof-request",
"version": "1.0",
"nonce": "1234567890",
"requested_attributes": {
"0_favourite_uuid": {
"name": "favourite",
"restrictions": [{"cred_def_id": CD_ID}],
},
"1_icon_uuid": {
"name": "icon",
"restrictions": [{"cred_def_id": CD_ID}],
},
},
},
presentation={
"proof": {
"proofs": [],
"requested_proof": {
"revealed_attrs": {
"0_favourite_uuid": {
"sub_proof_index": 0,
"raw": "potato",
"encoded": "12345678901234567890",
},
"1_icon_uuid": {
"sub_proof_index": 1,
"raw": "cG90YXRv",
"encoded": "12345678901234567890",
},
},
"self_attested_attrs": {},
"unrevealed_attrs": {},
"predicates": {},
},
},
"identifiers": [
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": None,
"timestamp": None,
},
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": None,
"timestamp": None,
},
],
},
)
message = async_mock.MagicMock()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
V10PresentationExchange, "retrieve_by_tag_filter", autospec=True
) as retrieve_ex, async_mock.patch.object(
self.profile,
"session",
async_mock.MagicMock(return_value=self.profile.session()),
) as session:
retrieve_ex.side_effect = [
StorageNotFoundError("no such record"),
exchange_dummy,
]
exchange_out = await self.manager.receive_presentation(
message, connection_record
)
assert retrieve_ex.call_count == 2
save_ex.assert_called_once()
assert exchange_out.state == (
V10PresentationExchange.STATE_PRESENTATION_RECEIVED
)
async def test_receive_presentation_bait_and_switch(self):
connection_record = async_mock.MagicMock(connection_id=CONN_ID)
exchange_dummy = V10PresentationExchange(
presentation_proposal_dict={
"presentation_proposal": {
"@type": DIDCommPrefix.qualify_current(
"present-proof/1.0/presentation-preview"
),
"attributes": [
{
"name": "favourite",
"cred_def_id": CD_ID,
"value": "no potato",
},
{"name": "icon", "cred_def_id": CD_ID, "value": "cG90YXRv"},
],
"predicates": [],
}
},
presentation_request={
"name": "proof-request",
"version": "1.0",
"nonce": "1234567890",
"requested_attributes": {
"0_favourite_uuid": {
"name": "favourite",
"restrictions": [{"cred_def_id": CD_ID}],
},
"1_icon_uuid": {
"name": "icon",
"restrictions": [{"cred_def_id": CD_ID}],
},
},
},
)
message = async_mock.MagicMock(
indy_proof=async_mock.MagicMock(
return_value={
"proof": {"proofs": []},
"requested_proof": {
"revealed_attrs": {
"0_favourite_uuid": {
"sub_proof_index": 0,
"raw": "potato",
"encoded": "12345678901234567890",
},
"1_icon_uuid": {
"sub_proof_index": 1,
"raw": "cG90YXRv",
"encoded": "23456789012345678901",
},
},
"self_attested_attrs": {},
"unrevealed_attrs": {},
"predicates": {},
},
"identifiers": [
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": None,
"timestamp": None,
},
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": None,
"timestamp": None,
},
],
}
)
)
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
V10PresentationExchange, "retrieve_by_tag_filter", autospec=True
) as retrieve_ex:
retrieve_ex.return_value = exchange_dummy
with self.assertRaises(PresentationManagerError):
await self.manager.receive_presentation(message, connection_record)
async def test_receive_presentation_connection_less(self):
exchange_dummy = V10PresentationExchange()
message = async_mock.MagicMock()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
V10PresentationExchange, "retrieve_by_tag_filter", autospec=True
) as retrieve_ex, async_mock.patch.object(
self.profile,
"session",
async_mock.MagicMock(return_value=self.profile.session()),
) as session:
retrieve_ex.return_value = exchange_dummy
exchange_out = await self.manager.receive_presentation(message, None)
retrieve_ex.assert_called_once_with(
session.return_value, {"thread_id": message._thread_id}, None
)
save_ex.assert_called_once()
assert exchange_out.state == (
V10PresentationExchange.STATE_PRESENTATION_RECEIVED
)
async def test_verify_presentation(self):
exchange_in = V10PresentationExchange()
exchange_in.presentation = {
"identifiers": [{"schema_id": S_ID, "cred_def_id": CD_ID}]
}
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex:
exchange_out = await self.manager.verify_presentation(exchange_in)
save_ex.assert_called_once()
assert exchange_out.state == (V10PresentationExchange.STATE_VERIFIED)
async def test_verify_presentation_with_revocation(self):
exchange_in = V10PresentationExchange()
exchange_in.presentation = {
"identifiers": [
{
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": RR_ID,
"timestamp": NOW,
},
{ # cover multiple instances of same rev reg
"schema_id": S_ID,
"cred_def_id": CD_ID,
"rev_reg_id": RR_ID,
"timestamp": NOW,
},
]
}
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex:
exchange_out = await self.manager.verify_presentation(exchange_in)
save_ex.assert_called_once()
assert exchange_out.state == (V10PresentationExchange.STATE_VERIFIED)
async def test_send_presentation_ack(self):
exchange = V10PresentationExchange()
responder = MockResponder()
self.profile.context.injector.bind_instance(BaseResponder, responder)
await self.manager.send_presentation_ack(exchange)
messages = responder.messages
assert len(messages) == 1
async def test_send_presentation_ack_no_responder(self):
exchange = V10PresentationExchange()
self.profile.context.injector.clear_binding(BaseResponder)
await self.manager.send_presentation_ack(exchange)
async def test_receive_presentation_ack(self):
connection_record = async_mock.MagicMock(connection_id=CONN_ID)
exchange_dummy = V10PresentationExchange()
message = async_mock.MagicMock()
with async_mock.patch.object(
V10PresentationExchange, "save", autospec=True
) as save_ex, async_mock.patch.object(
V10PresentationExchange, "retrieve_by_tag_filter", autospec=True
) as retrieve_ex:
retrieve_ex.return_value = exchange_dummy
exchange_out = await self.manager.receive_presentation_ack(
message, connection_record
)
save_ex.assert_called_once()
assert exchange_out.state == (
V10PresentationExchange.STATE_PRESENTATION_ACKED
)
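The tests above lean on one pattern throughout: temporarily swapping a coroutine attribute for a mock that returns a canned value, then asserting on how it was awaited. A minimal, library-free sketch of that pattern using only the standard library (all names here are illustrative, not taken from the test suite above):

```python
import asyncio
from unittest import mock

class Holder:
    """Stand-in for a wallet-backed holder; a real call would be expensive."""
    async def get_credential(self, referent):
        raise RuntimeError("would hit a real wallet")

async def fetch(holder):
    # Code under test: awaits the holder's coroutine.
    return await holder.get_credential("dummy_reft")

holder = Holder()
# Swap the coroutine for an AsyncMock with a canned return value -- the
# stdlib analogue of async_mock.CoroutineMock(return_value=...) used above.
with mock.patch.object(
    Holder, "get_credential", mock.AsyncMock(return_value={"referent": "dummy_reft"})
) as stub:
    result = asyncio.run(fetch(holder))

# The mock records how it was awaited, so the test can assert on it afterwards.
stub.assert_awaited_once_with("dummy_reft")
assert result == {"referent": "dummy_reft"}
```

`AsyncMock` requires Python 3.8+; older suites (like the one above) reach for a third-party `CoroutineMock` instead, but the shape of the test is the same.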
# ---- sleekxmpp/plugins/xep_0050/adhoc.py (repo: szarsti/SleekXMPP, license: BSD-3-Clause) ----
"""
SleekXMPP: The Sleek XMPP Library
Copyright (C) 2011 Nathanael C. Fritz, Lance J.T. Stout
This file is part of SleekXMPP.
See the file LICENSE for copying permission.
"""
import logging
import time
from sleekxmpp import Iq
from sleekxmpp.exceptions import IqError, XMPPError
from sleekxmpp.xmlstream.handler import Callback
from sleekxmpp.xmlstream.matcher import StanzaPath
from sleekxmpp.xmlstream import register_stanza_plugin, JID
from sleekxmpp.plugins import BasePlugin
from sleekxmpp.plugins.xep_0050 import stanza
from sleekxmpp.plugins.xep_0050 import Command
from sleekxmpp.plugins.xep_0004 import Form
log = logging.getLogger(__name__)
class XEP_0050(BasePlugin):
"""
XEP-0050: Ad-Hoc Commands
XMPP's Adhoc Commands provides a generic workflow mechanism for
interacting with applications. The result is similar to menu selections
and multi-step dialogs in normal desktop applications. Clients do not
need to know in advance what commands are provided by any particular
application or agent. While adhoc commands provide similar functionality
to Jabber-RPC, adhoc commands are used primarily for human interaction.
Also see <http://xmpp.org/extensions/xep-0050.html>
Configuration Values:
threaded -- Indicates if command events should be threaded.
Defaults to True.
Events:
command_execute -- Received a command with action="execute"
command_next -- Received a command with action="next"
command_complete -- Received a command with action="complete"
command_cancel -- Received a command with action="cancel"
Attributes:
threaded -- Indicates if command events should be threaded.
Defaults to True.
commands -- A dictionary mapping JID/node pairs to command
names and handlers.
sessions -- A dictionary or equivalent backend mapping
session IDs to dictionaries containing data
relevant to a command's session.
Methods:
plugin_init -- Overrides base_plugin.plugin_init
post_init -- Overrides base_plugin.post_init
new_session -- Return a new session ID.
prep_handlers -- Placeholder. May call with a list of handlers
to prepare them for use with the session storage
backend, if needed.
set_backend -- Replace the default session storage with some
external storage mechanism, such as a database.
The provided backend wrapper must be able to
act using the same syntax as a dictionary.
        add_command       -- Add a command for use by external entities.
get_commands -- Retrieve a list of commands provided by a
remote agent.
send_command -- Send a command request to a remote agent.
start_command -- Command user API: initiate a command session
continue_command -- Command user API: proceed to the next step
cancel_command -- Command user API: cancel a command
complete_command -- Command user API: finish a command
terminate_command -- Command user API: delete a command's session
"""
name = 'xep_0050'
description = 'XEP-0050: Ad-Hoc Commands'
dependencies = set(['xep_0030', 'xep_0004'])
stanza = stanza
default_config = {
'threaded': True,
'session_db': None
}
def plugin_init(self):
"""Start the XEP-0050 plugin."""
self.sessions = self.session_db
if self.sessions is None:
self.sessions = {}
self.commands = {}
self.xmpp.register_handler(
Callback("Ad-Hoc Execute",
StanzaPath('iq@type=set/command'),
self._handle_command))
register_stanza_plugin(Iq, Command)
register_stanza_plugin(Command, Form, iterable=True)
self.xmpp.add_event_handler('command_execute',
self._handle_command_start,
threaded=self.threaded)
self.xmpp.add_event_handler('command_next',
self._handle_command_next,
threaded=self.threaded)
self.xmpp.add_event_handler('command_cancel',
self._handle_command_cancel,
threaded=self.threaded)
self.xmpp.add_event_handler('command_complete',
self._handle_command_complete,
threaded=self.threaded)
def plugin_end(self):
self.xmpp.del_event_handler('command_execute',
self._handle_command_start)
self.xmpp.del_event_handler('command_next',
self._handle_command_next)
self.xmpp.del_event_handler('command_cancel',
self._handle_command_cancel)
self.xmpp.del_event_handler('command_complete',
self._handle_command_complete)
self.xmpp.remove_handler('Ad-Hoc Execute')
self.xmpp['xep_0030'].del_feature(feature=Command.namespace)
self.xmpp['xep_0030'].set_items(node=Command.namespace, items=tuple())
def session_bind(self, jid):
self.xmpp['xep_0030'].add_feature(Command.namespace)
self.xmpp['xep_0030'].set_items(node=Command.namespace, items=tuple())
def set_backend(self, db):
"""
Replace the default session storage dictionary with
a generic, external data storage mechanism.
The replacement backend must be able to interact through
the same syntax and interfaces as a normal dictionary.
Arguments:
db -- The new session storage mechanism.
"""
self.sessions = db
def prep_handlers(self, handlers, **kwargs):
"""
Prepare a list of functions for use by the backend service.
Intended to be replaced by the backend service as needed.
Arguments:
handlers -- A list of function pointers
**kwargs -- Any additional parameters required by the backend.
"""
pass
# =================================================================
# Server side (command provider) API
def add_command(self, jid=None, node=None, name='', handler=None):
"""
Make a new command available to external entities.
Access control may be implemented in the provided handler.
Command workflow is done across a sequence of command handlers. The
first handler is given the initial Iq stanza of the request in order
to support access control. Subsequent handlers are given only the
payload items of the command. All handlers will receive the command's
session data.
Arguments:
jid -- The JID that will expose the command.
node -- The node associated with the command.
name -- A human readable name for the command.
handler -- A function that will generate the response to the
initial command request, as well as enforcing any
access control policies.
"""
if jid is None:
jid = self.xmpp.boundjid
elif not isinstance(jid, JID):
jid = JID(jid)
item_jid = jid.full
self.xmpp['xep_0030'].add_identity(category='automation',
itype='command-list',
name='Ad-Hoc commands',
node=Command.namespace,
jid=jid)
self.xmpp['xep_0030'].add_item(jid=item_jid,
name=name,
node=Command.namespace,
subnode=node,
ijid=jid)
self.xmpp['xep_0030'].add_identity(category='automation',
itype='command-node',
name=name,
node=node,
jid=jid)
self.xmpp['xep_0030'].add_feature(Command.namespace, None, jid)
self.commands[(item_jid, node)] = (name, handler)
def new_session(self):
"""Return a new session ID."""
return str(time.time()) + '-' + self.xmpp.new_id()
def _handle_command(self, iq):
"""Raise command events based on the command action."""
self.xmpp.event('command_%s' % iq['command']['action'], iq)
def _handle_command_start(self, iq):
"""
Process an initial request to execute a command.
Arguments:
iq -- The command execution request.
"""
sessionid = self.new_session()
node = iq['command']['node']
key = (iq['to'].full, node)
name, handler = self.commands.get(key, ('Not found', None))
if not handler:
log.debug('Command not found: %s, %s', key, self.commands)
payload = []
for stanza in iq['command']['substanzas']:
payload.append(stanza)
if len(payload) == 1:
payload = payload[0]
interfaces = set([item.plugin_attrib for item in payload])
payload_classes = set([item.__class__ for item in payload])
initial_session = {'id': sessionid,
'from': iq['from'],
'to': iq['to'],
'node': node,
'payload': payload,
'interfaces': interfaces,
'payload_classes': payload_classes,
'notes': None,
'has_next': False,
'allow_complete': False,
'allow_prev': False,
'past': [],
'next': None,
'prev': None,
'cancel': None}
session = handler(iq, initial_session)
self._process_command_response(iq, session)
def _handle_command_next(self, iq):
"""
Process a request for the next step in the workflow
for a command with multiple steps.
Arguments:
iq -- The command continuation request.
"""
sessionid = iq['command']['sessionid']
session = self.sessions.get(sessionid)
if session:
handler = session['next']
interfaces = session['interfaces']
results = []
for stanza in iq['command']['substanzas']:
if stanza.plugin_attrib in interfaces:
results.append(stanza)
if len(results) == 1:
results = results[0]
session = handler(results, session)
self._process_command_response(iq, session)
else:
raise XMPPError('item-not-found')
def _handle_command_prev(self, iq):
"""
Process a request for the prev step in the workflow
for a command with multiple steps.
Arguments:
iq -- The command continuation request.
"""
sessionid = iq['command']['sessionid']
session = self.sessions.get(sessionid)
if session:
handler = session['prev']
interfaces = session['interfaces']
results = []
for stanza in iq['command']['substanzas']:
if stanza.plugin_attrib in interfaces:
results.append(stanza)
if len(results) == 1:
results = results[0]
session = handler(results, session)
self._process_command_response(iq, session)
else:
raise XMPPError('item-not-found')
def _process_command_response(self, iq, session):
"""
Generate a command reply stanza based on the
provided session data.
Arguments:
iq -- The command request stanza.
session -- A dictionary of relevant session data.
"""
sessionid = session['id']
payload = session['payload']
if payload is None:
payload = []
if not isinstance(payload, list):
payload = [payload]
interfaces = session.get('interfaces', set())
payload_classes = session.get('payload_classes', set())
interfaces.update(set([item.plugin_attrib for item in payload]))
payload_classes.update(set([item.__class__ for item in payload]))
session['interfaces'] = interfaces
session['payload_classes'] = payload_classes
self.sessions[sessionid] = session
for item in payload:
register_stanza_plugin(Command, item.__class__, iterable=True)
iq.reply()
iq['command']['node'] = session['node']
iq['command']['sessionid'] = session['id']
if session['next'] is None:
iq['command']['actions'] = []
iq['command']['status'] = 'completed'
elif session['has_next']:
actions = ['next']
if session['allow_complete']:
actions.append('complete')
if session['allow_prev']:
actions.append('prev')
iq['command']['actions'] = actions
iq['command']['status'] = 'executing'
else:
iq['command']['actions'] = ['complete']
iq['command']['status'] = 'executing'
iq['command']['notes'] = session['notes']
for item in payload:
iq['command'].append(item)
iq.send()
def _handle_command_cancel(self, iq):
"""
Process a request to cancel a command's execution.
Arguments:
iq -- The command cancellation request.
"""
node = iq['command']['node']
sessionid = iq['command']['sessionid']
session = self.sessions.get(sessionid)
if session:
handler = session['cancel']
if handler:
handler(iq, session)
del self.sessions[sessionid]
iq.reply()
iq['command']['node'] = node
iq['command']['sessionid'] = sessionid
iq['command']['status'] = 'canceled'
iq['command']['notes'] = session['notes']
iq.send()
else:
raise XMPPError('item-not-found')
def _handle_command_complete(self, iq):
"""
Process a request to finish the execution of command
and terminate the workflow.
All data related to the command session will be removed.
Arguments:
iq -- The command completion request.
"""
node = iq['command']['node']
sessionid = iq['command']['sessionid']
session = self.sessions.get(sessionid)
if session:
handler = session['next']
interfaces = session['interfaces']
results = []
for stanza in iq['command']['substanzas']:
if stanza.plugin_attrib in interfaces:
results.append(stanza)
if len(results) == 1:
results = results[0]
if handler:
handler(results, session)
del self.sessions[sessionid]
payload = session['payload']
if payload is None:
payload = []
if not isinstance(payload, list):
payload = [payload]
for item in payload:
register_stanza_plugin(Command, item.__class__, iterable=True)
iq.reply()
iq['command']['node'] = node
iq['command']['sessionid'] = sessionid
iq['command']['actions'] = []
iq['command']['status'] = 'completed'
iq['command']['notes'] = session['notes']
for item in payload:
iq['command'].append(item)
iq.send()
else:
raise XMPPError('item-not-found')
# =================================================================
# Client side (command user) API
def get_commands(self, jid, **kwargs):
"""
Return a list of commands provided by a given JID.
Arguments:
jid -- The JID to query for commands.
local -- If true, then the query is for a JID/node
combination handled by this Sleek instance and
no stanzas need to be sent.
Otherwise, a disco stanza must be sent to the
                        remote JID to retrieve the items.
            ifrom    -- Specify the sender's JID.
block -- If true, block and wait for the stanzas' reply.
timeout -- The time in seconds to block while waiting for
a reply. If None, then wait indefinitely.
callback -- Optional callback to execute when a reply is
received instead of blocking and waiting for
the reply.
iterator -- If True, return a result set iterator using
the XEP-0059 plugin, if the plugin is loaded.
Otherwise the parameter is ignored.
"""
return self.xmpp['xep_0030'].get_items(jid=jid,
node=Command.namespace,
**kwargs)
def send_command(self, jid, node, ifrom=None, action='execute',
payload=None, sessionid=None, flow=False, **kwargs):
"""
Create and send a command stanza, without using the provided
workflow management APIs.
Arguments:
jid -- The JID to send the command request or result.
node -- The node for the command.
ifrom -- Specify the sender's JID.
            action     -- May be one of: execute, next, prev,
                          complete, or cancel.
payload -- Either a list of payload items, or a single
payload item such as a data form.
sessionid -- The current session's ID value.
flow -- If True, process the Iq result using the
command workflow methods contained in the
session instead of returning the response
stanza itself. Defaults to False.
block -- Specify if the send call will block until a
response is received, or a timeout occurs.
Defaults to True.
timeout -- The length of time (in seconds) to wait for a
response before exiting the send call
if blocking is used. Defaults to
sleekxmpp.xmlstream.RESPONSE_TIMEOUT
callback -- Optional reference to a stream handler
function. Will be executed when a reply
stanza is received if flow=False.
"""
iq = self.xmpp.Iq()
iq['type'] = 'set'
iq['to'] = jid
iq['from'] = ifrom
iq['command']['node'] = node
iq['command']['action'] = action
if sessionid is not None:
iq['command']['sessionid'] = sessionid
if payload is not None:
if not isinstance(payload, list):
payload = [payload]
for item in payload:
iq['command'].append(item)
if not flow:
return iq.send(**kwargs)
else:
if kwargs.get('block', True):
try:
result = iq.send(**kwargs)
except IqError as err:
result = err.iq
self._handle_command_result(result)
else:
iq.send(block=False, callback=self._handle_command_result)
def start_command(self, jid, node, session, ifrom=None, block=False):
"""
Initiate executing a command provided by a remote agent.
The default workflow provided is non-blocking, but a blocking
version may be used with block=True.
The provided session dictionary should contain:
next -- A handler for processing the command result.
error -- A handler for processing any error stanzas
generated by the request.
Arguments:
jid -- The JID to send the command request.
node -- The node for the desired command.
session -- A dictionary of relevant session data.
ifrom -- Optionally specify the sender's JID.
block -- If True, block execution until a result
is received. Defaults to False.
"""
session['jid'] = jid
session['node'] = node
session['timestamp'] = time.time()
session['block'] = block
if 'payload' not in session:
session['payload'] = None
iq = self.xmpp.Iq()
iq['type'] = 'set'
iq['to'] = jid
iq['from'] = ifrom
session['from'] = ifrom
iq['command']['node'] = node
iq['command']['action'] = 'execute'
if session['payload'] is not None:
payload = session['payload']
if not isinstance(payload, list):
                payload = [payload]
for stanza in payload:
iq['command'].append(stanza)
sessionid = 'client:pending_' + iq['id']
session['id'] = sessionid
self.sessions[sessionid] = session
if session['block']:
try:
result = iq.send(block=True)
except IqError as err:
result = err.iq
self._handle_command_result(result)
else:
iq.send(block=False, callback=self._handle_command_result)
def continue_command(self, session, direction='next'):
"""
Execute the next action of the command.
Arguments:
session -- All stored data relevant to the current
command session.
"""
sessionid = 'client:' + session['id']
self.sessions[sessionid] = session
self.send_command(session['jid'],
session['node'],
ifrom=session.get('from', None),
action=direction,
payload=session.get('payload', None),
sessionid=session['id'],
flow=True,
block=session['block'])
def cancel_command(self, session):
"""
Cancel the execution of a command.
Arguments:
session -- All stored data relevant to the current
command session.
"""
sessionid = 'client:' + session['id']
self.sessions[sessionid] = session
self.send_command(session['jid'],
session['node'],
ifrom=session.get('from', None),
action='cancel',
payload=session.get('payload', None),
sessionid=session['id'],
flow=True,
block=session['block'])
def complete_command(self, session):
"""
Finish the execution of a command workflow.
Arguments:
session -- All stored data relevant to the current
command session.
"""
sessionid = 'client:' + session['id']
self.sessions[sessionid] = session
self.send_command(session['jid'],
session['node'],
ifrom=session.get('from', None),
action='complete',
payload=session.get('payload', None),
sessionid=session['id'],
flow=True,
block=session['block'])
def terminate_command(self, session):
"""
Delete a command's session after a command has completed
        or an error has occurred.
Arguments:
session -- All stored data relevant to the current
command session.
"""
sessionid = 'client:' + session['id']
try:
del self.sessions[sessionid]
except Exception as e:
            log.error("Error deleting adhoc command session: %s", e)
def _handle_command_result(self, iq):
"""
Process the results of a command request.
Will execute the 'next' handler stored in the session
data, or the 'error' handler depending on the Iq's type.
Arguments:
iq -- The command response.
"""
sessionid = 'client:' + iq['command']['sessionid']
pending = False
if sessionid not in self.sessions:
pending = True
pendingid = 'client:pending_' + iq['id']
if pendingid not in self.sessions:
return
sessionid = pendingid
session = self.sessions[sessionid]
sessionid = 'client:' + iq['command']['sessionid']
session['id'] = iq['command']['sessionid']
self.sessions[sessionid] = session
if pending:
del self.sessions[pendingid]
handler_type = 'next'
if iq['type'] == 'error':
handler_type = 'error'
handler = session.get(handler_type, None)
if handler:
handler(iq, session)
elif iq['type'] == 'error':
self.terminate_command(session)
if iq['command']['status'] == 'completed':
self.terminate_command(session)
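The result handler above dispatches to the session's `'next'` or `'error'` callback depending on the Iq stanza's type. A minimal standalone sketch of that dispatch logic, using plain dicts as stand-ins for the real session and stanza objects:

```python
# Simplified sketch of the dispatch in _handle_command_result.
# `iq` and `session` are plain dicts here; the real objects are XMPP stanzas.
def dispatch(iq, session):
    handler_type = 'next'
    if iq['type'] == 'error':
        handler_type = 'error'
    handler = session.get(handler_type)
    if handler:
        return handler(iq, session)
    return None

calls = []
session = {
    'next': lambda iq, s: calls.append(('next', iq['type'])),
    'error': lambda iq, s: calls.append(('error', iq['type'])),
}
dispatch({'type': 'result'}, session)
dispatch({'type': 'error'}, session)
print(calls)  # [('next', 'result'), ('error', 'error')]
```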
| 37.519943 | 78 | 0.535062 |
705cd13a2a4336c67f408830cb76b7b0525254fb | 26,898 | py | Python | main/call_snake.py | ph1001/GA_learning_Snake | 6341310ce41e45e71fceb084232d7a4867f9ebfb | [
"MIT"
] | null | null | null | main/call_snake.py | ph1001/GA_learning_Snake | 6341310ce41e45e71fceb084232d7a4867f9ebfb | [
"MIT"
] | null | null | null | main/call_snake.py | ph1001/GA_learning_Snake | 6341310ce41e45e71fceb084232d7a4867f9ebfb | [
"MIT"
] | 3 | 2021-04-26T12:56:11.000Z | 2021-06-09T20:12:58.000Z | # -*- coding: utf-8 -*-
"""
Created on Mon Apr 26 16:36:47 2021
@author: utente
"""
# This script is heavily inspired by this blogpost: https://thingsidobysanil.wordpress.com/2018/11/12/87/
# Import libraries and components from snake
from snake import controlled_run, dis_width, dis_height, snake_block, automatic_mode
from snake_nodisplay import controlled_run_nodisplay, dis_width_nodisplay, dis_height_nodisplay, snake_block_nodisplay
import numpy as np
from keras import layers, models
from random import random, randint
from tqdm import tqdm
from operator import attrgetter
import math
from utils import phen_variance, gen_variance, phen_entropy, gen_entropy, fs, mo_selection
import os
import csv
# Class Individual. Instances of this class play snake and make up a population.
class Individual():
# init function of class Individual
def __init__(self,
ind_number = randint(1,9),
evolution_step = 1,
verbose = False,
games_to_play = 1,
                 fitness_function = lambda x, y: x * math.exp(y),
                 weights = None,
                 moves_till_stuck = 100,
                 show = False, # whether to show the snake game window
hidden_layers = 1):
self.games_to_play = games_to_play
self.verbose = verbose
self.fitness_function = fitness_function
self.moves_till_stuck = moves_till_stuck
self.show = show
self.hidden_layers = hidden_layers
# Give this individual a number
self.ind_number = ind_number
# Initialise this individual's evolution step with 1
self.evolution_step = evolution_step
# Print game's width, height and snake's width
if self.verbose:
print(dis_width, dis_height, snake_block)
# Create a neural network that will learn to play snake
self.model = models.Sequential()
self.model.add(layers.Dense(16, activation = 'sigmoid', input_dim = 6))
# for _ in range(self.hidden_layers - 1):
# self.model.add(layers.Dense(64, activation = 'sigmoid', input_dim = input_dim))
self.model.add(layers.Dense(3, activation = 'softmax'))
        if weights is not None:
self.model.set_weights(weights)
self.weights = self.model.get_weights()
# Play a game
self.play()
def __getitem__(self, position):
return self.weights[position]
def __setitem__(self, position, value):
self.weights[position] = value
def __len__(self):
return len(self.weights)
def __repr__(self):
return f'Neural Network with 6 input nodes, {self.model.layers[0].weights[1].shape[0]} hidden layer neurons and {self.model.layers[1].weights[1].shape[0]} output layer neurons'
# Define a function that lets an individual play snake
def play(self, show = False):
# Start the game by calling the function controlled_run from snake.py and receive the fitness resulting
# from the games_to_play games played by this individual in this evolution step
# MOVED games_to_play here, defined together with the individual
        # the controlled_run function returns the score and the age of the Individual
if self.show or show:
score, age = controlled_run(self, self.ind_number, self.evolution_step, self.games_to_play, self.verbose, self.moves_till_stuck)
else:
score, age = controlled_run_nodisplay(self, self.ind_number, self.evolution_step, self.games_to_play, self.verbose, self.moves_till_stuck)
self.score = score
self.age = age
if self.verbose:
print('Evolution step ' + str(self.evolution_step) + ':, Individual ' + str(self.ind_number) + ' is done playing.')
# INDIVIDUAL FITNESS FUNCTION
self.fitness = self.fitness_function(age, score)
# Define a function that communicates with snake.py. It is called from snake.py from inside the function gameLoop
def control(self, game_state):
# Some printing for debugging purposes
if self.verbose:
print("control() was called.")
# In the very first iteration, simply pass "up"
if game_state['snake_List'] == []:
# Some printing for debugging purposes
if self.verbose:
print('"Up" was passed automatically.')
return 'w'
# Process the information received about the current state of the game
snake_List = game_state['snake_List']
snake_Head = game_state['snake_Head']
direction = game_state['direction']
food = (game_state['foodx'], game_state['foody'])
#check if going straight is clear(1) or there is an obstacle(0)
possible_position = [snake_Head[0] + direction[0], snake_Head[1] + direction[1]]
if possible_position[0] >= 400 or possible_position[1] >= 400 or possible_position[0] < 0 or possible_position[1] < 0 or possible_position in snake_List:
clear_straight = 0
else:
clear_straight = 1
#checking if it's clear right or left
#first identify what's left/right
if direction[1] == -10: #recognize if it's moving up vertically
right = [snake_Head[0] + 10, snake_Head[1]]
left = [snake_Head[0] - 10, snake_Head[1]]
if snake_Head[0] == food[0] and snake_Head[1] >= food[1]: #check if the food is ahead
food_ahead = 1
food_right = 0
food_left = 0
elif snake_Head[0] < food[0]: #food on the right
food_ahead = 0
food_right = 1
food_left = 0
else: #food on the left
food_ahead = 0
food_right = 0
food_left = 1
elif direction[1] == 10: #recognize if it's moving down vertically
right = [snake_Head[0] - 10, snake_Head[1]]
left = [snake_Head[0] + 10, snake_Head[1]]
if snake_Head[0] == food[0] and snake_Head[1] - food[1] <= 0: #check if the food is ahead
food_ahead = 1
food_right = 0
food_left = 0
elif snake_Head[0] < food[0]: #food on the right
food_ahead = 0
food_right = 0
food_left = 1
else: #food on the left
food_ahead = 0
food_right = 1
food_left = 0
elif direction[0] == -10: #recognize if it's moving left horizontally
right = [snake_Head[0], snake_Head[1] - 10]
left = [snake_Head[0], snake_Head[1] + 10]
if snake_Head[1] == food[1] and snake_Head[0] - food[0] <= 0: #check if the food is ahead
food_ahead = 1
food_right = 0
food_left = 0
elif snake_Head[1] < food[1]: #food on the right
food_ahead = 0
food_right = 0
food_left = 1
else: #food on the left
food_ahead = 0
food_right = 1
food_left = 0
else: #recognize if it's moving right horizontally
right = [snake_Head[0], snake_Head[1] + 10]
left = [snake_Head[0], snake_Head[1] - 10]
if snake_Head[1] == food[1] and snake_Head[0] - food[0] >= 0: #check if the food is ahead
food_ahead = 1
food_right = 0
food_left = 0
elif snake_Head[1] < food[1]: #food on the right
food_ahead = 0
food_right = 1
food_left = 0
else: #food on the left
food_ahead = 0
food_right = 0
food_left = 1
#then check if it's occupied
if right[0] >= 400 or right[1] >= 400 or right[0] < 0 or right[1] < 0 or right in snake_List:
clear_right = 0
else:
clear_right = 1
if left[0] >= 400 or left[1] >= 400 or left[0] < 0 or left[1] < 0 or left in snake_List:
clear_left = 0
else:
clear_left = 1
#creating the input for the neural network
input_nn = np.array([
clear_straight, clear_right, clear_left,
food_ahead, food_right, food_left
])
input_nn.shape = (1,6)
#deciding the next move
game_action = np.argmax(self.model.predict(input_nn))
if self.verbose:
print(f'Input : {input_nn}')
print(f'Output : {game_action}')
return game_action
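The default fitness used throughout is `age * exp(score)`, so eating food dominates mere survival time. A quick sanity check of that shape (the numbers below are illustrative, not taken from any run):

```python
import math

# Default fitness from the Individual/Population constructors: age * e^score.
fitness = lambda age, score: age * math.exp(score)

# Eating one extra food multiplies fitness by e (~2.72), while surviving
# twice as long only doubles it.
base = fitness(100, 2)
assert math.isclose(fitness(100, 3) / base, math.e)
assert math.isclose(fitness(200, 2) / base, 2.0)
```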
class Population:
def __init__(self,
size,
optim, #either 'min' or 'max'
verbose = False,
evolution_step = 0,
moves_till_stuck = 50,
show = False,
                 fitness_function = lambda x, y: x * math.exp(y),
hidden_layers = 1):
self.individuals = []
self.size = size
self.verbose = verbose
self.evolution_step = evolution_step
self.moves_till_stuck = moves_till_stuck
self.show = show
self.fitness_function = fitness_function
self.hidden_layers = hidden_layers
self.optim = optim
# Create individuals and add them to the population. Creating an individual will execute the __init__ function
# of class Individual, which then will result in this individual playing snake.
for i in tqdm(range(size)):
individual = Individual(ind_number = i+1,
evolution_step = self.evolution_step,
verbose = self.verbose,
moves_till_stuck = self.moves_till_stuck,
show = self.show,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers)
self.individuals.append(individual)
def __len__(self):
return len(self.individuals)
def __getitem__(self, position):
return self.individuals[position]
def __repr__(self):
return f"Population(size={len(self.individuals)})"
def log_bestfit(self, config_name, run_number):
dir_path = os.path.join('data', config_name)
if not os.path.exists(dir_path):
os.mkdir(dir_path)
with open(os.path.join('data', config_name, f'{config_name}_{run_number}.csv'), mode = 'a', newline='') as file:
writer = csv.writer(file)
for gen, best_fit in enumerate(self.evolution_process):
writer.writerow([gen, best_fit])
def log_diversity(self, config_name, run_number):
dir_path = os.path.join('data', config_name)
if not os.path.exists(dir_path):
os.mkdir(dir_path)
with open(os.path.join('data', config_name, f'{config_name}_{run_number}.csv'), mode = 'a', newline='') as file:
writer = csv.writer(file)
for gen in range(self.evolution_step):
writer.writerow([gen,
self.phen_variance_dict[str(gen)],
self.gen_variance_dict[str(gen)],
self.phen_entropy_dict[str(gen)],
self.gen_entropy_dict[str(gen)],])
    # Define a function that evolves the population over a number of generations using a GA.
def evolve( self,
gens, #Number of generations to be produced
select, #Selection function
crossover, #Crossover function
mutate, #Mutation function
co_p, #crossover probability
mu_p, #mutation probability
                multi_objective = False, # whether to perform multi-objective optimization (fitness has to be a tuple)
                tournament_size = None, # size of the sample for tournament selection
                constant_ms = None, # geometric mutation coefficient
                elitism = False, # whether to perform elitism
                record_diversity = False, # whether to record diversity
                fitness_sharing = False): # whether to perform fitness sharing
self.evolution_process = []
if record_diversity:
self.phen_variance_dict = {}
self.gen_variance_dict = {}
self.phen_entropy_dict = {}
self.gen_entropy_dict = {}
for gen in tqdm(range(gens), desc = 'Evolving Population'): #argument of evolve attribute
#recording the variance of the Population
if record_diversity: #argument of evolve attribute
self.phen_variance_dict[str(self.evolution_step)] = phen_variance(self)
self.gen_variance_dict[str(self.evolution_step)] = gen_variance(self)
self.phen_entropy_dict[str(self.evolution_step)] = phen_entropy(self)
self.gen_entropy_dict[str(self.evolution_step)] = gen_entropy(self)
#FITNESS SHARING
if fitness_sharing: #argument of evolve attribute
fs(self)
#Elitism
if elitism: #argument of evolve attribute
if self.optim == 'max':
if multi_objective:
#selecting the best solution
min_fit_x = max([i.fitness[0] for i in self.individuals])
min_fit_y = max([i.fitness[1] for i in self.individuals])
#calculating the distances to the best solution
distances = [ math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in self.individuals]
#selecting the individual that is closer to the optimal solution
elite = self.individuals[distances.index(min(distances))].weights
else:
#saving a copy of the weights best individual of the population
elite = max(self.individuals, key = attrgetter('fitness')).weights
else:
if multi_objective:
#selecting the best solution
min_fit_x = min([i.fitness[0] for i in self.individuals])
min_fit_y = min([i.fitness[1] for i in self.individuals])
#calculating the distances to the best solution
distances = [ math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in self.individuals]
#selecting the individual that is closer to the optimal solution
elite = self.individuals[distances.index(min(distances))].weights
else:
#saving a copy of the weights best individual of the population
elite = min(self.individuals, key = attrgetter('fitness')).weights
new_pop = []
while len(new_pop) < self.size:
#selection
if multi_objective:
parent1, parent2 = mo_selection(self)
else:
if tournament_size != None:
parent1, parent2 = select(self, tournament_size), select(self, tournament_size)
else:
parent1, parent2 = select(self), select(self) #argument of evolve attribute
# Crossover
if random() < co_p: #argument of evolve attribute
offspring1, offspring2 = crossover(parent1, parent2) #argument of evolve attribute
else:
offspring1, offspring2 = parent1.weights.copy(), parent2.weights.copy()
# Mutation
if random() < mu_p: #argument of evolve attribute
if constant_ms != None:
#GEOMETRIC MUTATION
offspring1 = mutate(offspring1, constant_ms, self.evolution_step) #argument of evolve attribute
else:
offspring1 = mutate(offspring1)
if random() < mu_p: #argument of evolve attribute
if constant_ms != None:
#GEOMETRIC MUTATION
offspring2 = mutate(offspring2, constant_ms, self.evolution_step) #argument of evolve attribute
else:
offspring2 = mutate(offspring2)
new_pop.append(Individual(ind_number = len(new_pop),
weights = offspring1,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers))
if len(new_pop) < self.size:
                    new_pop.append(Individual(ind_number = len(new_pop),
                                              weights = offspring2,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers))
if elitism: #argument of evolve attribute
if self.optim == 'max':
                    if multi_objective:
                        # selecting the best solution of the new population
                        min_fit_x = max([i.fitness[0] for i in new_pop])
                        min_fit_y = max([i.fitness[1] for i in new_pop])
                        # calculating the distances to the best solution
                        distances = [math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in new_pop]
                        # selecting the individual that is furthest from the best solution
                        least_fit = new_pop[distances.index(max(distances))]
#substituting the worst individual of the new population with the best one from the previous one
new_pop[new_pop.index(least_fit)] = Individual(ind_number = new_pop.index(least_fit),
weights = elite,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers)
else:
#finding worst Individual of the new population
least_fit = min(new_pop, key = attrgetter('fitness'))
#substituting the worst individual of the new population with the best one from the previous one
new_pop[new_pop.index(least_fit)] = Individual(ind_number = new_pop.index(least_fit),
weights = elite,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers)
else:
                    if multi_objective:
                        # selecting the best solution of the new population
                        min_fit_x = min([i.fitness[0] for i in new_pop])
                        min_fit_y = min([i.fitness[1] for i in new_pop])
                        # calculating the distances to the best solution
                        distances = [math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in new_pop]
                        # selecting the individual that is furthest from the best solution
                        least_fit = new_pop[distances.index(max(distances))]
#substituting the worst individual of the new population with the best one from the previous one
new_pop[new_pop.index(least_fit)] = Individual(ind_number = new_pop.index(least_fit),
weights = elite,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers)
else:
#finding worst Individual of the new population
least_fit = max(new_pop, key = attrgetter('fitness'))
#substituting the worst individual of the new population with the best one from the previous one
new_pop[new_pop.index(least_fit)] = Individual(ind_number = new_pop.index(least_fit),
weights = elite,
moves_till_stuck = self.moves_till_stuck,
evolution_step = gen + 1,
fitness_function = self.fitness_function,
hidden_layers = self.hidden_layers)
self.individuals = new_pop
#updating the evolution step
self.evolution_step += 1
for indiv in self.individuals:
indiv.evolution_step = self.evolution_step
if self.optim == 'max':
if multi_objective:
#selecting the best solution
min_fit_x = max([i.fitness[0] for i in self.individuals])
min_fit_y = max([i.fitness[1] for i in self.individuals])
#calculating the distances to the best solution
distances = [ math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in self.individuals]
#selecting the individual that is closer to the optimal solution
best_fit = self.individuals[distances.index(min(distances))].fitness
else:
best_fit = max(self, key=attrgetter("fitness")).fitness
else:
if multi_objective:
#selecting the best solution
min_fit_x = min([i.fitness[0] for i in self.individuals])
min_fit_y = min([i.fitness[1] for i in self.individuals])
#calculating the distances to the best solution
distances = [ math.sqrt((i.fitness[0] - min_fit_x)**2) + math.sqrt((i.fitness[1] - min_fit_y)**2) for i in self.individuals]
#selecting the individual that is closer to the optimal solution
best_fit = self.individuals[distances.index(min(distances))].fitness
else:
best_fit = min(self, key=attrgetter("fitness")).fitness
if self.optim == 'max':
self.evolution_process.append([best_fit, max(self, key=attrgetter("fitness")).age, max(self, key=attrgetter("fitness")).score])
else:
self.evolution_process.append([best_fit, min(self, key=attrgetter("fitness")).age, min(self, key=attrgetter("fitness")).score])
print(f'Best Individual: {best_fit}')
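With elitism enabled, `evolve()` re-inserts the previous generation's best weights in place of the new generation's worst individual, so the best fitness never regresses. The core of that replacement, sketched on plain fitness values for the maximisation case (toy numbers, not real run data):

```python
# Toy elitism step: keep the old best by overwriting the new worst (optim='max').
old_pop = [3.0, 7.5, 1.2]   # fitness values of the previous generation
new_pop = [2.0, 0.5, 4.1]   # fitness values after selection/crossover/mutation

elite = max(old_pop)                    # best of the old generation
worst_idx = new_pop.index(min(new_pop)) # worst of the new generation
new_pop[worst_idx] = elite              # substitution

assert new_pop == [2.0, 7.5, 4.1]
assert max(new_pop) >= max(old_pop)  # best fitness can never regress
```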
| 48.994536 | 184 | 0.491263 |
794ef040c3a7f25257f7612ec1f908c19e848ed0 | 2,424 | py | Python | doc/sphinx/example-acoustics-1d/setplot_3.py | geoflows/geoclaw-4.x | c8879d25405017b38392aa3b1ea422ff3e3604ea | [
"BSD-3-Clause"
] | 7 | 2016-11-13T03:11:51.000Z | 2021-09-07T18:59:48.000Z | doc/sphinx/example-acoustics-1d/setplot_3.py | che-wenchao/D-Claw | 8ab5d971c9a7a7130e03a447a4b8642e292f4e88 | [
"BSD-3-Clause"
] | 11 | 2020-01-14T18:00:37.000Z | 2022-03-29T14:25:24.000Z | doc/sphinx/example-acoustics-1d/setplot_3.py | che-wenchao/D-Claw | 8ab5d971c9a7a7130e03a447a4b8642e292f4e88 | [
"BSD-3-Clause"
] | 6 | 2020-01-14T17:15:42.000Z | 2021-12-03T17:28:44.000Z |
"""
Single figure with two axes
=============================
The pressure q[0] and the velocity q[1] are plotted on two sets of axes
in a single figure.
"""
#--------------------------
def setplot(plotdata):
#--------------------------
"""
Specify what is to be plotted at each frame.
Input: plotdata, an instance of pyclaw.plotters.data.ClawPlotData.
Output: a modified version of plotdata.
"""
plotdata.clearfigures() # clear any old figures,axes,items data
plotfigure = plotdata.new_plotfigure(name='Solution', figno=1)
# Pressure:
# Set up for axes in this figure:
plotaxes = plotfigure.new_plotaxes()
plotaxes.axescmd = 'subplot(211)'
plotaxes.xlimits = 'auto'
plotaxes.ylimits = [-.5,1.1]
plotaxes.title = 'Pressure'
# Set up for item on these axes:
plotitem = plotaxes.new_plotitem(plot_type='1d')
plotitem.plot_var = 0
plotitem.plotstyle = '-'
plotitem.color = 'b'
# Set up for second item on these axes:
plotitem = plotaxes.new_plotitem(plot_type='1d')
plotitem.plot_var = 0
plotitem.plotstyle = 'o'
plotitem.color = '#ff00ff' # any color supported by matplotlib
# Velocity:
# Set up for second axes in this figure:
plotaxes = plotfigure.new_plotaxes()
plotaxes.axescmd = 'subplot(212)'
plotaxes.xlimits = 'auto'
plotaxes.ylimits = [-.5,.5]
plotaxes.title = 'Velocity'
# Set up for item on these axes:
plotitem = plotaxes.new_plotitem(plot_type='1d')
plotitem.plot_var = 1
plotitem.plotstyle = 'o-'
plotitem.color = 'b'
# Parameters used only when creating html and/or latex hardcopy
# e.g., via pyclaw.plotters.frametools.printframes:
plotdata.printfigs = True # print figures
plotdata.print_format = 'png' # file format
plotdata.print_framenos = 'all' # list of frames to print
plotdata.print_fignos = 'all' # list of figures to print
plotdata.html = True # create html files of plots?
plotdata.html_homelink = '../README.html'# pointer for index page
plotdata.latex = True # create latex file of plots?
plotdata.latex_figsperline = 1 # layout of plots
plotdata.latex_framesperline = 2 # layout of plots
plotdata.latex_makepdf = True # also run pdflatex?
return plotdata
| 31.076923 | 74 | 0.616749 |
f40475639d8535520bb8c05bec4e8b69858b4d8b | 7,819 | py | Python | chain_joiner/make_model.py | leelasd/chain_joiner | 3d8f03b1b0a19a1ca06a345f826dcbd67ad22491 | [
"MIT"
] | 3 | 2017-09-16T23:34:08.000Z | 2018-03-09T11:43:42.000Z | chain_joiner/make_model.py | leelasd/chain_joiner | 3d8f03b1b0a19a1ca06a345f826dcbd67ad22491 | [
"MIT"
] | 1 | 2018-03-09T12:24:23.000Z | 2018-05-18T11:56:31.000Z | chain_joiner/make_model.py | mc-robinson/chain_joiner | 3d8f03b1b0a19a1ca06a345f826dcbd67ad22491 | [
"MIT"
] | 1 | 2017-10-19T16:07:56.000Z | 2017-10-19T16:07:56.000Z | #!/usr/bin/env python
# File name: make_model.py
# Author: Matt Robinson
# Date created: 5/24/2017
# Date last modified: 9/05/2017
# Python Version: 3.6
"""
Description:
This script uses the alignment file, alignment.ali, to make a homology model that includes the previously missing residues.
In this case, the original PDB structure is the template while the full sequence is the target of the model.
Usage: python make_model.py -p pdbfile.pdb [options]
Input Arguments:
[optional]
-a, --automodel
The simplest method for simple comparitive modeling.
Will not give great results but suggested when many chain breaks are present. [default: True]
-f, --fixed_automodel
Builds an automodel and keeps the non-missing residues fixed,
whereas they can move in the other methods. [default: False]
-l, --loopmodel
Builds a model by refining the loop with the missing residues.
        Suggested when there is one small chain break in the PDB. [default: False]
Output: A set of PDB files (number depends on the chosen method)
Note: The alignment file must be in the same directory as this script.
"""
import sys
import argparse
import os
from modeller import *
from modeller.automodel import * # Load the automodel class
def main():
parser = argparse.ArgumentParser(
prog='make_model.py',
formatter_class=argparse.RawDescriptionHelpFormatter,
description=
"""
This script uses the alignment file, alignment.ali,
to make a homology model that includes the previously missing residues.
In this case, the original PDB structure is the template
while the full sequence is the target of the model.
@author: Matt Robinson, matthew.robinson@yale.edu
William L. Jorgensen lab, Yale University
For simple automodel,
Usage: python make_model.py -p pdbfile.pdb -a
For fixed automodel,
Usage: python make_model.py -p pdbfile.pdb -f
For loopmodel,
Usage: python make_model.py -p pdbfile.pdb -l
REQUIREMENTS:
Preferably Anaconda python 3 with following modules:
argparse
modeller
"""
)
parser.add_argument(
"-p", "--pdb", help="full path of the pdb file with .pdb file descriptor")
parser.add_argument(
"-a", "--automodel", help="the simplest method for simple comparitive modeling", action="store_true")
parser.add_argument(
"-f", "--fixed_automodel", help="builds an automodel and keeps the non-missing residues fixed", action="store_true")
parser.add_argument(
"-l", "--loopmodel", help="builds a model by refining the loop with the missing residues", action="store_true")
args = parser.parse_args()
# call the model function
model(args.pdb, args.automodel, args.fixed_automodel, args.loopmodel)
def model(pdb_file, a, f, l):
log.verbose()
env = environ()
# directories for input atom files
env.io.atom_files_directory = ['.', '../atom_files']
# Read in HETATM records from template PDBs
env.io.hetatm = True
#first need to get PDB data so can find missing residues
pdb_id = os.path.splitext(os.path.basename(pdb_file))[0]
with open(pdb_file) as pdb:
pdb_file_data = pdb.readlines()
with open('./' + pdb_id + '_alignment.ali') as aln_file:
aln_data = aln_file.readlines()
# do modeller stuff
pdb_seq = get_pdb_seq(aln_data)
if (f):
#make the string for selecting the residues to move (the missing residues)
selection_str = make_sel_str(find_missing_residues(pdb_seq))
#selection_str = "self.residue_range('647:B', '648:B')"
print(selection_str)
#build the model
class MyModel(automodel):
def select_atoms(self):
#select only the missing residues
return eval('selection(' + selection_str + ')') #need to use eval b/c sel_str is str
a = MyModel(env, alnfile = (pdb_id + '_alignment.ali'),
knowns = pdb_file, sequence = pdb_id + '_fill',
assess_methods=(assess.DOPE, assess.GA341))
#build only one model
a.starting_model= 1
a.ending_model = 1
a.make()
elif (l):
a = loopmodel(env, alnfile = (pdb_id + '_alignment.ali'),
knowns = pdb_file, sequence = pdb_id + '_fill',
assess_methods=(assess.DOPE, assess.GA341))
a.starting_model = 1
a.ending_model = 1
a.loop.starting_model = 1
a.loop.ending_model = 2
a.loop.md_level = refine.fast
a.make()
else:
a = automodel(env, alnfile = (pdb_id + '_alignment.ali'),
knowns = pdb_file, sequence = pdb_id + '_fill',
assess_methods=(assess.DOPE, assess.GA341))
a.starting_model= 1
a.ending_model = 1
a.make()
def find_missing_residues(pdb_seq):
##first delete all / from string
#pdb_seq = pdb_seq.replace("/","")
missing_res_lists = []
res_number = 0
#create a list for holding chain labels
chain_labels = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
#first split the chains
chains_l = pdb_seq.split('/')
# go through and find every dash in string and note index
chain_number = 0
res_idx = 0
for pdb_seq in chains_l:
missing_res_l = []
for i in range(len(pdb_seq)):
if (pdb_seq[i] == '-'):
missing_res_l.append(res_idx+1) #+1 b/c Modeller starts at 1
res_idx = res_idx + 1
        # split missing_res_l into separate loops (runs of consecutive residues)
        chain_missing_res_lists = []
        start_idx = 0
        for i in range(len(missing_res_l)):
            # close a run at the end of the list or when the next residue is not consecutive
            if i == len(missing_res_l) - 1 or missing_res_l[i] + 1 != missing_res_l[i + 1]:
                chain_missing_res_lists.append(missing_res_l[start_idx:i + 1])
                start_idx = i + 1
print(chain_missing_res_lists)
#if only one chain, don't need to add residue numbers. Just make string
if (len(chains_l)==1):
for lst in chain_missing_res_lists:
lst = [str(i) for i in lst]
missing_res_lists.append(lst)
#go over all lists and add chain identifier if there is more than one chain
else:
for lst in chain_missing_res_lists:
lst = [str(i) for i in lst]
lst_idx = 0
for atom_str in lst:
lst[lst_idx] = atom_str + ':' + chain_labels[chain_number]
lst_idx = lst_idx + 1
missing_res_lists.append(lst)
chain_number = chain_number + 1
print(missing_res_l)
print(missing_res_lists)
return missing_res_lists
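The gap detection above boils down to: mark every `-` in the alignment sequence with its 1-based residue number, then group consecutive numbers into runs, one run per loop to rebuild. A simplified, self-contained re-implementation of that idea (without the chain-label handling):

```python
# Simplified version of the gap detection in find_missing_residues:
# '-' marks a missing residue; consecutive dashes form one loop to rebuild.
def find_gaps(seq):
    missing = [i + 1 for i, ch in enumerate(seq) if ch == '-']  # 1-based, as Modeller numbers residues
    runs, start = [], 0
    for i in range(len(missing)):
        if i + 1 == len(missing) or missing[i] + 1 != missing[i + 1]:
            runs.append(missing[start:i + 1])
            start = i + 1
    return runs

# Two gaps in this toy sequence: residues 3-4 and residue 7.
print(find_gaps('AB--CD-E'))  # [[3, 4], [7]]
```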
def make_sel_str(missing_res_ls):
sel_str = ''
for l in missing_res_ls:
# need to add offset since Modeller numbering is different
first_idx = l[0]
last_idx = l[-1]
# make the str to use as an argument
sel_str = sel_str + "self.residue_range('" + str(first_idx) + "', '" + str(last_idx) + "'),"
# take off the final comma
sel_str = sel_str[0:-1]
return sel_str
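The string built here is later passed to `eval('selection(' + selection_str + ')')` inside `select_atoms`, so it must be a comma-separated list of `self.residue_range(...)` calls. A simplified re-implementation showing the expected output format (the residue ids below are made up):

```python
# Simplified re-implementation of make_sel_str: one residue_range(...) call
# per gap, comma-joined so it can be eval()'d inside select_atoms().
def make_sel_str(missing_res_lists):
    parts = ["self.residue_range('%s', '%s')" % (lst[0], lst[-1])
             for lst in missing_res_lists]
    return ','.join(parts)

sel = make_sel_str([['12', '15'], ['647:B', '648:B']])
assert sel == ("self.residue_range('12', '15'),"
               "self.residue_range('647:B', '648:B')")
```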
def get_pdb_seq(aln_data):
pdb_seq = ""
seq_data = aln_data[3:]
for line in seq_data:
line = line.rstrip()
if (line[0]=='>'):
break
pdb_seq = pdb_seq + line
#remove the break character '/'
#pdb_seq = re.sub('/','',pdb_seq)
#print(pdb_seq)
return pdb_seq
if __name__ == "__main__":
main()
| 32.443983 | 124 | 0.620795 |
638d0c0cf33b91e0e3a6b2cda272bf2ae3e9a583 | 13,394 | py | Python | opencga-client/src/main/python/pyopencga/commons.py | mwhamgenomics/opencga | a6b521f441fbefa35f6fbaadd6dd97e33bb33e7b | [
"Apache-2.0"
] | 146 | 2015-03-05T19:14:22.000Z | 2022-03-30T03:46:48.000Z | opencga-client/src/main/python/pyopencga/commons.py | mwhamgenomics/opencga | a6b521f441fbefa35f6fbaadd6dd97e33bb33e7b | [
"Apache-2.0"
] | 1,623 | 2015-01-27T00:30:36.000Z | 2022-03-31T14:42:33.000Z | opencga-client/src/main/python/pyopencga/commons.py | mwhamgenomics/opencga | a6b521f441fbefa35f6fbaadd6dd97e33bb33e7b | [
"Apache-2.0"
] | 93 | 2015-01-28T17:13:01.000Z | 2022-03-09T20:46:47.000Z | import sys
import threading
from time import sleep
import warnings
import requests
from pyopencga.exceptions import OpencgaInvalidToken, OpencgaAuthorisationError
try:
from Queue import Queue
except ImportError:
from queue import Queue
_CALL_BATCH_SIZE = 2000
_NUM_THREADS_DEFAULT = 4
def deprecated(func):
"""Prints a warning for functions marked as deprecated"""
def new_func(*args, **kwargs):
warnings.simplefilter('always', DeprecationWarning) # turn off filter
warnings.warn('Call to deprecated function "{}".'.format(func.__name__),
category=DeprecationWarning, stacklevel=2)
warnings.simplefilter('default', DeprecationWarning) # reset filter
return func(*args, **kwargs)
return new_func
def _create_rest_url(host, version, sid, category, resource, subcategory=None, query_id=None,
second_query_id=None, options=None):
"""Creates the URL for querying the REST service"""
# Creating the basic URL
url = ('/'.join([host,
'webservices/rest',
version,
category
]))
# If subcategory is queried, query_id can be absent
if query_id is not None:
url += '/' + query_id
if subcategory is not None:
url += '/' + subcategory
if second_query_id is not None:
url += '/' + second_query_id
url += '/' + resource
header = {"Accept-Encoding": "gzip"}
if sid is not None:
header['Authorization'] = 'Bearer {}'.format(sid)
# Checking optional params
if options is not None:
opts = []
for k, v in options.items():
if k == 'debug':
continue
if isinstance(v, list):
opts.append(k + '=' + ','.join(map(str, v)))
else:
opts.append(k + '=' + str(v))
if opts:
url += '?' + '&'.join(opts)
return url, header
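The query-string handling above serializes each option as `key=value`, comma-joins list values, and silently drops the client-side `debug` flag. A self-contained sketch of just that serialization step (a simplified extract, not the full URL builder):

```python
# Simplified sketch of the options handling in _create_rest_url:
# list values are comma-joined, everything else str()-ed, 'debug' skipped.
def serialize_options(options):
    opts = []
    for k, v in options.items():
        if k == 'debug':
            continue
        if isinstance(v, list):
            opts.append(k + '=' + ','.join(map(str, v)))
        else:
            opts.append(k + '=' + str(v))
    return '?' + '&'.join(opts) if opts else ''

qs = serialize_options({'skip': 0, 'include': ['id', 'name'], 'debug': True})
assert 'include=id,name' in qs
assert 'debug' not in qs
```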
def _fetch(host, version, sid, category, resource, method, subcategory=None, query_id=None,
second_query_id=None, data=None, options=None):
"""Queries the REST service retrieving results until exhaustion or limit"""
# HERE BE DRAGONS
final_response = None
# Setting up skip and limit default parameters
call_skip = 0
call_limit = 1000
max_limit = None
if options is None:
opts = {'skip': call_skip, 'limit': call_limit}
else:
opts = options.copy() # Do not modify original data!
if 'skip' not in opts:
opts['skip'] = call_skip
# If 'limit' is specified, a maximum of 'limit' results will be returned
if 'limit' in opts:
max_limit = opts['limit']
    # The server must always be queried for results in groups of 1000
opts['limit'] = call_limit
# If there is a query_id, the next variables will be used
total_id_list = [] # All initial ids
next_id_list = [] # Ids which should be queried again for more results
next_id_indexes = [] # Ids position in the final response
if query_id is not None:
total_id_list = query_id.split(',')
# If some query has more than 'call_limit' results, the server will be
    # queried again to retrieve the next 'call_limit' results
call = True
current_query_id = None # Current REST query
current_id_list = None # Current list of ids
time_out_counter = 0 # Number of times a query is repeated due to time-out
while call:
# Check 'limit' parameter if there is a maximum limit of results
if max_limit is not None and max_limit <= call_limit:
opts['limit'] = max_limit
# Updating query_id and list of ids to query
if query_id is not None:
if current_query_id is None:
current_query_id = query_id
current_id_list = total_id_list
current_id_indexes = range(len(total_id_list))
else:
current_query_id = ','.join(next_id_list)
current_id_list = next_id_list
current_id_indexes = next_id_indexes
# Retrieving url
url, header = _create_rest_url(host=host,
version=version,
category=category,
sid=sid,
subcategory=subcategory,
query_id=current_query_id,
second_query_id=second_query_id,
resource=resource,
options=opts)
# DEBUG param
if opts is not None and 'debug' in opts and opts['debug']:
sys.stderr.write(url + '\n')
# Getting REST response
if method == 'get':
try:
r = requests.get(url, headers=header)
except requests.exceptions.ConnectionError:
sleep(1)
r = requests.get(url, headers=header)
elif method == 'post':
try:
r = requests.post(url, json=data, headers=header)
except requests.exceptions.ConnectionError:
sleep(1)
r = requests.post(url, json=data, headers=header)
elif method == 'delete':
try:
r = requests.delete(url, headers=header)
except requests.exceptions.ConnectionError:
sleep(1)
r = requests.delete(url, headers=header)
else:
raise NotImplementedError('method: ' + method + ' not implemented.')
if r.status_code == 504: # Gateway Time-out
if time_out_counter == 99:
msg = 'Server not responding in time'
raise requests.ConnectionError(msg)
time_out_counter += 1
continue
time_out_counter = 0
if r.status_code == 401:
raise OpencgaInvalidToken(r.content)
elif r.status_code == 403:
raise OpencgaAuthorisationError(r.content)
elif r.status_code != 200:
raise Exception(r.content)
try:
response = r.json()
# TODO Remove deprecated response and result in future release. Added for backwards compatibility
if 'response' in response:
response['responses'] = response['response']
for query_result in response['responses']:
if 'result' in query_result:
query_result['results'] = query_result['result']
except ValueError:
msg = 'Bad JSON format retrieved from server'
raise ValueError(msg)
# Setting up final_response
if final_response is None:
final_response = response
# Concatenating results
else:
if query_id is not None:
for index, res in enumerate(response['responses']):
id_index = current_id_indexes[index]
final_response[id_index]['results'] += res['results']
else:
final_response['responses'][0]['results'] += response['responses'][0]['results']
if query_id is not None:
# Checking which ids are completely retrieved
next_id_list = []
next_id_indexes = []
for index, res in enumerate(response['responses']):
if res['numResults'] == call_limit:
next_id_list.append(current_id_list[index])
next_id_indexes.append(current_id_indexes[index])
# Ending REST calling when there are no more ids to retrieve
if not next_id_list:
call = False
else:
# Ending REST calling when there are no more results to retrieve
if response['responses'][0]['numResults'] != call_limit:
call = False
# Skipping the first 'limit' results to retrieve the next ones
opts['skip'] += call_limit
# Subtracting the number of returned results from the maximum goal
if max_limit is not None:
max_limit -= call_limit
            # A 'limit' of 0 makes the server return all results, so stop before requesting it
if max_limit == 0:
break
return final_response
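Stripped of the HTTP and retry details, the skip/limit loop above follows a generic pagination pattern: request pages of `call_limit` results, advance `skip`, and stop on a short page or when a caller-supplied maximum is reached. A minimal sketch against an in-memory data source (the `source` list stands in for the REST server):

```python
def fetch_all(source, max_limit=None, call_limit=1000):
    """Page through `source` in chunks of `call_limit`, honoring `max_limit`."""
    results, skip = [], 0
    while True:
        limit = call_limit
        if max_limit is not None and max_limit <= call_limit:
            limit = max_limit
        page = source[skip:skip + limit]
        results.extend(page)
        if len(page) != call_limit:  # short page: the source is exhausted
            break
        skip += call_limit
        if max_limit is not None:
            max_limit -= call_limit
            if max_limit == 0:  # exactly reached the caller's cap
                break
    return results
```

With 2,500 items and `max_limit=1500`, this performs a full page of 1,000 followed by a final page of 500.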
def _worker(queue, results, host, version, sid, category, resource, method, subcategory=None,
second_query_id=None, data=None, options=None):
"""Manages the queue system for the threads"""
while True:
# Fetching new element from the queue
index, query_id = queue.get()
response = _fetch(host=host, version=version, sid=sid, category=category, subcategory=subcategory,
resource=resource, method=method, data=data, query_id=query_id,
second_query_id=second_query_id, options=options)
# Store data in results at correct index
results[index] = response
# Signaling to the queue that task has been processed
queue.task_done()
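`_worker` implements the standard queue-of-indexed-jobs pattern: each thread pulls `(index, payload)` tuples, writes its result into a pre-sized list at `index` so input order is preserved, and signals `task_done()`. A self-contained sketch with a trivial `job` callable (the names are illustrative):

```python
import threading
from queue import Queue

def run_jobs(payloads, job, num_threads=4):
    """Process payloads concurrently, preserving input order in the output."""
    q = Queue()
    results = [None] * len(payloads)

    def worker():
        while True:
            index, payload = q.get()
            results[index] = job(payload)
            q.task_done()

    for _ in range(num_threads):
        t = threading.Thread(target=worker, daemon=True)
        t.start()
    for item in enumerate(payloads):
        q.put(item)  # (index, payload) tuple, as above
    q.join()  # blocks until every task_done() has fired
    return results
```

Because results are stored by index rather than appended, thread scheduling cannot reorder the output.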
def merge_query_responses(query_response_list):
final_response = query_response_list[0]
for i, query_response in enumerate(query_response_list):
if i != 0:
final_response['events'] += query_response['events']
final_response['time'] += query_response['time']
# final_response['responses'] += response['responses']
for key in query_response['params']:
if final_response['params'][key] != query_response['params'][key]:
final_response['params'][key] += ',' + query_response['params'][key]
for j, query_result in enumerate(query_response['responses']):
if len(final_response['responses'])-1 < j:
                    final_response['responses'].append({})
for key in query_result:
if key not in final_response['responses'][j]:
final_response['responses'][j][key] = query_result[key]
else:
if isinstance(query_result[key], (int, list)):
final_response['responses'][j][key] += query_result[key]
return final_response
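The merging rules above — concatenate `events`, sum `time`, comma-join differing `params`, and add up int/list fields of matching query results — can be illustrated with a simplified two-response merge (a sketch assuming the same response shape, not the function above verbatim):

```python
def merge_two(final, other):
    """Merge `other` into `final`, following the rules sketched above."""
    final['events'] += other['events']
    final['time'] += other['time']
    # Differing parameter values are comma-joined
    for key in other['params']:
        if final['params'][key] != other['params'][key]:
            final['params'][key] += ',' + other['params'][key]
    for j, res in enumerate(other['responses']):
        if j > len(final['responses']) - 1:
            final['responses'].append(dict(res))
        else:
            # Numeric counters and result lists are additive
            for key in res:
                if isinstance(res[key], (int, list)):
                    final['responses'][j][key] += res[key]
    return final
```

Merging `{'numResults': 1, 'results': [1]}` with `{'numResults': 1, 'results': [2]}` produces `{'numResults': 2, 'results': [1, 2]}`.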
def execute(host, version, sid, category, resource, method, subcategory=None, query_id=None,
second_query_id=None, data=None, options=None):
"""Queries the REST service using multiple threads if needed"""
# If query_id is an array, convert to comma-separated string
if query_id is not None:
if isinstance(query_id, list):
query_id = ','.join([str(item) for item in query_id])
else:
query_id = str(query_id) # convert to string so we can call this method with int ids
# Multithread if the number of queries is greater than _CALL_BATCH_SIZE
if query_id is None or len(query_id.split(',')) <= _CALL_BATCH_SIZE:
response = _fetch(host=host, version=version, sid=sid, category=category, subcategory=subcategory,
resource=resource, method=method, data=data, query_id=query_id,
second_query_id=second_query_id, options=options)
return response
else:
if options is not None and 'num_threads' in options:
num_threads = options['num_threads']
else:
num_threads = _NUM_THREADS_DEFAULT
# Splitting query_id into batches depending on the call batch size
id_list = query_id.split(',')
id_batches = [','.join(id_list[x:x + _CALL_BATCH_SIZE])
for x in range(0, len(id_list), _CALL_BATCH_SIZE)]
# Setting up the queue to hold all the id batches
q = Queue(maxsize=0)
# Creating a size defined list to store thread results
res = [''] * len(id_batches)
# Setting up the threads
for thread in range(num_threads):
t = threading.Thread(target=_worker,
kwargs={'queue': q,
'results': res,
'host': host,
'version': version,
'sid': sid,
'category': category,
'subcategory': subcategory,
'second_query_id': second_query_id,
'resource': resource,
'method': method,
'data': data,
'options': options})
# Setting threads as "daemon" allows main program to exit eventually
# even if these do not finish correctly
            t.daemon = True
t.start()
# Loading up the queue with index and id batches for each job
for index, batch in enumerate(id_batches):
q.put((index, batch)) # Notice this is a tuple
# Waiting until the queue has been processed
q.join()
        # Joining all the responses into one final response
final_query_response = merge_query_responses(res)
return final_query_response
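The batch splitting at the top of the threaded branch — turning a comma-separated id string into comma-joined chunks of at most `_CALL_BATCH_SIZE` ids — is worth isolating. A sketch with the batch size passed explicitly:

```python
def split_batches(query_id, batch_size):
    """Split 'a,b,c,...' into comma-joined batches of at most batch_size ids."""
    id_list = query_id.split(',')
    return [','.join(id_list[x:x + batch_size])
            for x in range(0, len(id_list), batch_size)]
```

The slice-with-step comprehension guarantees the last batch simply carries the remainder, so no id is dropped.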
| 39.744807 | 109 | 0.56809 |
7bcc5851612cfd2ded4ac2e7e4c57dcc4cc3d661 | 4,649 | py | Python | tomviz/python/AutoTiltAxisShiftAlignment.py | alvarosan/tomviz | b53ccb0a07bfe7a33c3fb984c28d9b2658faa64b | [
"BSD-3-Clause"
] | null | null | null | tomviz/python/AutoTiltAxisShiftAlignment.py | alvarosan/tomviz | b53ccb0a07bfe7a33c3fb984c28d9b2658faa64b | [
"BSD-3-Clause"
] | null | null | null | tomviz/python/AutoTiltAxisShiftAlignment.py | alvarosan/tomviz | b53ccb0a07bfe7a33c3fb984c28d9b2658faa64b | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
from scipy.interpolate import interp1d
import tomviz.operators
class AutoTiltAxisShiftAlignmentOperator(tomviz.operators.CancelableOperator):
def transform_scalars(self, dataset):
"""Automatic align the tilt axis to the center of images"""
self.progress.maximum = 1
from tomviz import utils
# Get Tilt angles
tilt_angles = utils.get_tilt_angles(dataset)
tiltSeries = utils.get_array(dataset)
if tiltSeries is None:
raise RuntimeError("No scalars found!")
Nx, Ny, Nz = tiltSeries.shape
shifts = (np.linspace(-20, 20, 41)).astype('int')
numberOfSlices = 5 # number of slices used for recon
# randomly choose slices with top 50% total intensities
tiltSeriesSum = np.sum(tiltSeries, axis=(1, 2))
temp = tiltSeriesSum.argsort()[Nx // 2:]
slices = temp[np.random.permutation(temp.size)[:numberOfSlices]]
print('Reconstruction slices:')
print(slices)
I = np.zeros(shifts.size)
self.progress.maximum = shifts.size - 1
step = 0
for i in range(shifts.size):
if self.canceled:
return
shiftedTiltSeries = np.roll(
                tiltSeries[slices, :, :], shifts[i], axis=1)
for s in range(numberOfSlices):
                self.progress.message = ('Reconstructing slice No.%d with %d '
                                         'pixels shift' %
                                         (slices[s], shifts[i]))
recon = wbp2(shiftedTiltSeries[s, :, :],
tilt_angles, Ny, 'ramp', 'linear')
I[i] = I[i] + np.amax(recon)
step += 1
self.progress.value = step
print('shift: %d' % shifts[np.argmax(I)])
result = np.roll(tiltSeries, shifts[np.argmax(I)], axis=1)
# Set the result as the new scalars.
utils.set_array(dataset, result)
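The slice-selection heuristic above — keep the brighter half of the slices, then sample a few at random — can be sketched in pure Python without NumPy (`pick_slices` and `rng` are illustrative names, not part of the operator):

```python
import random

def pick_slices(row_sums, count, rng=random):
    """Indices of `count` random slices drawn from the brightest half."""
    order = sorted(range(len(row_sums)), key=lambda i: row_sums[i])
    top_half = order[len(row_sums) // 2:]  # indices of the largest sums
    return rng.sample(top_half, count)
```

Sampling only from the bright half keeps the reconstruction metric from being dominated by near-empty slices.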
def wbp2(sinogram, angles, N=None, filter="ramp", interp="linear"):
if sinogram.ndim != 2:
raise ValueError('Sinogram must be 2D')
(Nray, Nproj) = sinogram.shape
if Nproj != angles.size:
raise ValueError('Sinogram does not match angles!')
interpolation_methods = ('linear', 'nearest', 'spline', 'cubic')
if interp not in interpolation_methods:
raise ValueError("Unknown interpolation: %s" % interp)
    if not N:  # if output size is not given
N = int(np.floor(np.sqrt(Nray**2 / 2.0)))
ang = np.double(angles) * np.pi / 180.0
# Create Fourier filter
F = makeFilter(Nray, filter)
# Pad sinogram for filtering
s = np.lib.pad(sinogram, ((0, F.size - Nray), (0, 0)),
'constant', constant_values=(0, 0))
# Apply Fourier filter
s = np.fft.fft(s, axis=0) * F
s = np.real(np.fft.ifft(s, axis=0))
# Change back to original
s = s[:Nray, :]
# Back projection
recon = np.zeros((N, N))
center_proj = Nray // 2 # Index of center of projection
[X, Y] = np.mgrid[0:N, 0:N]
xpr = X - int(N) // 2
ypr = Y - int(N) // 2
for j in range(Nproj):
t = ypr * np.cos(ang[j]) - xpr * np.sin(ang[j])
x = np.arange(Nray) - center_proj
if interp == 'linear':
bp = np.interp(t, x, s[:, j], left=0, right=0)
elif interp == 'spline':
interpolant = interp1d(
x, s[:, j], kind='slinear', bounds_error=False, fill_value=0)
bp = interpolant(t)
else:
interpolant = interp1d(
x, s[:, j], kind=interp, bounds_error=False, fill_value=0)
bp = interpolant(t)
recon = recon + bp
# Normalize
recon = recon * np.pi / 2 / Nproj
return recon
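The back projection relies on interpolation that returns 0 outside the detector range (the `left=0, right=0` arguments above). A dependency-free sketch of that boundary behavior for a single point, assuming sorted ascending coordinates as used here:

```python
def interp_zero_fill(t, xs, ys):
    """Linear interpolation of (xs, ys) at t; 0.0 outside [xs[0], xs[-1]].

    Assumes xs is sorted ascending, like the detector coordinates above.
    """
    if t < xs[0] or t > xs[-1]:
        return 0.0  # outside the detector: contributes nothing
    for i in range(len(xs) - 1):
        if xs[i] <= t <= xs[i + 1]:
            frac = (t - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + frac * (ys[i + 1] - ys[i])
    return ys[-1]
```

Zero-filling outside the detector prevents edge values from being smeared across the reconstruction.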
# Filter (1D) projections.
def makeFilter(Nray, filterMethod="ramp"):
# Calculate next power of 2
N2 = 2**np.ceil(np.log2(Nray))
# Make a ramp filter.
freq = np.fft.fftfreq(int(N2)).reshape(-1, 1)
omega = 2 * np.pi * freq
filter = 2 * np.abs(freq)
if filterMethod == "ramp":
pass
elif filterMethod == "shepp-logan":
filter[1:] = filter[1:] * np.sin(omega[1:]) / omega[1:]
elif filterMethod == "cosine":
filter[1:] = filter[1:] * np.cos(filter[1:])
elif filterMethod == "hamming":
filter[1:] = filter[1:] * (0.54 + 0.46 * np.cos(omega[1:] / 2))
elif filterMethod == "hann":
filter[1:] = filter[1:] * (1 + np.cos(omega[1:] / 2)) / 2
elif filterMethod == "none":
filter[:] = 1
else:
raise ValueError("Unknown filter: %s" % filterMethod)
return filter
| 33.207143 | 78 | 0.560336 |
5584836de19f8760469e908b92c03f9b600d14ce | 24,345 | py | Python | vectorbt/_settings.py | polakowo/vectorbt | 0a0077e42e74c24a2633453b98bf975626efbb70 | [
"Apache-2.0"
] | 1,787 | 2019-08-25T02:53:56.000Z | 2022-03-31T23:28:01.000Z | vectorbt/_settings.py | polakowo/vectorbt | 0a0077e42e74c24a2633453b98bf975626efbb70 | [
"Apache-2.0"
] | 251 | 2020-02-25T09:14:51.000Z | 2022-03-29T22:31:49.000Z | vectorbt/_settings.py | polakowo/vectorbt | 0a0077e42e74c24a2633453b98bf975626efbb70 | [
"Apache-2.0"
] | 304 | 2019-08-18T13:37:35.000Z | 2022-03-31T16:00:44.000Z | # Copyright (c) 2021 Oleg Polakow. All rights reserved.
# This code is licensed under Apache 2.0 with Commons Clause license (see LICENSE.md for details)
"""Global settings.
`settings` config is also accessible via `vectorbt.settings`.
Here are the main properties of the `settings` config:
* It's a nested config, that is, a config that consists of multiple sub-configs.
one per sub-package (e.g., 'data'), module (e.g., 'array_wrapper'), or even class (e.g., 'configured').
Each sub-config may consist of other sub-configs.
* It has frozen keys - you cannot add other sub-configs or remove the existing ones, but you can modify them.
* Each sub-config can either inherit the properties of the parent one by using `dict` or overwrite them
by using its own `vectorbt.utils.config.Config`. The main reason for defining an own config is to allow
adding new keys (e.g., 'plotting.layout').
For example, you can change default width and height of each plot:
```python-repl
>>> import vectorbt as vbt
>>> vbt.settings['plotting']['layout']['width'] = 800
>>> vbt.settings['plotting']['layout']['height'] = 400
```
The main sub-configs such as the one for plotting can also be accessed/modified using the dot notation:
```python-repl
>>> vbt.settings.plotting['layout']['width'] = 800
```
Some sub-configs allow the dot notation too, but this depends on whether they inherit the rules of the root config.
```plaintext
>>> vbt.settings.data - ok
>>> vbt.settings.data.binance - ok
>>> vbt.settings.data.binance.api_key - error
>>> vbt.settings.data.binance['api_key'] - ok
```
Since this is only visible when looking at the source code, the advice is to always use the bracket notation.
!!! note
    Any change takes effect immediately. But whether it's reflected immediately depends upon the place
    that accesses the settings. For example, changing `array_wrapper.freq` has an immediate effect because
    the value is resolved every time `vectorbt.base.array_wrapper.ArrayWrapper.freq` is called.
    On the other hand, changing `portfolio.fillna_close` only affects `vectorbt.portfolio.base.Portfolio`
    instances created in the future, not existing ones, because the value is resolved upon construction.
    In most cases, though, you can still force-update the default value by replacing the instance using
`vectorbt.portfolio.base.Portfolio.replace`.
All places in vectorbt import `settings` from `vectorbt._settings.settings`, not from `vectorbt`.
Overwriting `vectorbt.settings` only overwrites the reference created for the user.
Consider updating the settings config instead of replacing it.
## Saving
Like any other class subclassing `vectorbt.utils.config.Config`, we can save settings to the disk,
load it back, and update in-place:
```python-repl
>>> vbt.settings.save('my_settings')
>>> vbt.settings['caching']['enabled'] = False
>>> vbt.settings['caching']['enabled']
False
>>> vbt.settings.load_update('my_settings') # load() would return a new object!
>>> vbt.settings['caching']['enabled']
True
```
Bonus: You can do the same with any sub-config inside `settings`!
"""
import json
import pkgutil
import numpy as np
import plotly.graph_objects as go
import plotly.io as pio
from vectorbt.base.array_wrapper import ArrayWrapper
from vectorbt.base.column_grouper import ColumnGrouper
from vectorbt.records.col_mapper import ColumnMapper
from vectorbt.utils.config import Config
from vectorbt.utils.datetime_ import get_local_tz, get_utc_tz
from vectorbt.utils.decorators import CacheCondition
from vectorbt.utils.template import Sub, RepEval
__pdoc__ = {}
class SettingsConfig(Config):
"""Extends `vectorbt.utils.config.Config` for global settings."""
def register_template(self, theme: str) -> None:
"""Register template of a theme."""
pio.templates['vbt_' + theme] = go.layout.Template(self['plotting']['themes'][theme]['template'])
def register_templates(self) -> None:
"""Register templates of all themes."""
for theme in self['plotting']['themes']:
self.register_template(theme)
def set_theme(self, theme: str) -> None:
"""Set default theme."""
self.register_template(theme)
self['plotting']['color_schema'].update(self['plotting']['themes'][theme]['color_schema'])
self['plotting']['layout']['template'] = 'vbt_' + theme
def reset_theme(self) -> None:
"""Reset to default theme."""
self.set_theme('light')
settings = SettingsConfig(
dict(
numba=dict(
check_func_type=True,
check_func_suffix=False
),
config=Config(), # flex
configured=dict(
config=Config( # flex
dict(
readonly=True
)
),
),
caching=dict(
enabled=True,
whitelist=[
CacheCondition(base_cls=ArrayWrapper),
CacheCondition(base_cls=ColumnGrouper),
CacheCondition(base_cls=ColumnMapper)
],
blacklist=[]
),
broadcasting=dict(
align_index=False,
align_columns=True,
index_from='strict',
columns_from='stack',
ignore_sr_names=True,
drop_duplicates=True,
keep='last',
drop_redundant=True,
ignore_default=True
),
array_wrapper=dict(
column_only_select=False,
group_select=True,
freq=None,
silence_warnings=False
),
datetime=dict(
naive_tz=get_local_tz(),
to_py_timezone=True
),
data=dict(
tz_localize=get_utc_tz(),
tz_convert=get_utc_tz(),
missing_index='nan',
missing_columns='raise',
alpaca=Config(
dict(
key_id=None,
secret_key=None
)
),
binance=Config( # flex
dict(
api_key=None,
api_secret=None
)
),
ccxt=Config( # flex
dict(
enableRateLimit=True
)
),
stats=Config(), # flex
plots=Config() # flex
),
plotting=dict(
use_widgets=True,
show_kwargs=Config(), # flex
color_schema=Config( # flex
dict(
increasing="#1b9e76",
decreasing="#d95f02"
)
),
contrast_color_schema=Config( # flex
dict(
blue="#4285F4",
orange="#FFAA00",
green="#37B13F",
red="#EA4335",
gray="#E2E2E2"
)
),
themes=dict(
light=dict(
color_schema=Config( # flex
dict(
blue="#1f77b4",
orange="#ff7f0e",
green="#2ca02c",
red="#dc3912",
purple="#9467bd",
brown="#8c564b",
pink="#e377c2",
gray="#7f7f7f",
yellow="#bcbd22",
cyan="#17becf"
)
),
template=Config(json.loads(pkgutil.get_data(__name__, "templates/light.json"))), # flex
),
dark=dict(
color_schema=Config( # flex
dict(
blue="#1f77b4",
orange="#ff7f0e",
green="#2ca02c",
red="#dc3912",
purple="#9467bd",
brown="#8c564b",
pink="#e377c2",
gray="#7f7f7f",
yellow="#bcbd22",
cyan="#17becf"
)
),
template=Config(json.loads(pkgutil.get_data(__name__, "templates/dark.json"))), # flex
),
seaborn=dict(
color_schema=Config( # flex
dict(
blue="rgb(76,114,176)",
orange="rgb(221,132,82)",
green="rgb(129,114,179)",
red="rgb(85,168,104)",
purple="rgb(218,139,195)",
brown="rgb(204,185,116)",
pink="rgb(140,140,140)",
gray="rgb(100,181,205)",
yellow="rgb(147,120,96)",
cyan="rgb(196,78,82)"
)
),
template=Config(json.loads(pkgutil.get_data(__name__, "templates/seaborn.json"))), # flex
),
),
layout=Config( # flex
dict(
width=700,
height=350,
margin=dict(
t=30, b=30, l=30, r=30
),
legend=dict(
orientation="h",
yanchor="bottom",
y=1.02,
xanchor="right",
x=1,
traceorder='normal'
)
)
),
),
stats_builder=dict(
metrics='all',
tags='all',
silence_warnings=False,
template_mapping=Config(), # flex
filters=Config( # flex
dict(
is_not_grouped=dict(
filter_func=lambda self, metric_settings:
not self.wrapper.grouper.is_grouped(group_by=metric_settings['group_by']),
warning_message=Sub("Metric '$metric_name' does not support grouped data")
),
has_freq=dict(
filter_func=lambda self, metric_settings:
self.wrapper.freq is not None,
warning_message=Sub("Metric '$metric_name' requires frequency to be set")
)
)
),
settings=Config( # flex
dict(
to_timedelta=None,
use_caching=True
)
),
metric_settings=Config(), # flex
),
plots_builder=dict(
subplots='all',
tags='all',
silence_warnings=False,
template_mapping=Config(), # flex
filters=Config( # flex
dict(
is_not_grouped=dict(
filter_func=lambda self, subplot_settings:
not self.wrapper.grouper.is_grouped(group_by=subplot_settings['group_by']),
warning_message=Sub("Subplot '$subplot_name' does not support grouped data")
),
has_freq=dict(
filter_func=lambda self, subplot_settings:
self.wrapper.freq is not None,
warning_message=Sub("Subplot '$subplot_name' requires frequency to be set")
)
)
),
settings=Config( # flex
dict(
use_caching=True,
hline_shape_kwargs=dict(
type='line',
line=dict(
color='gray',
dash="dash",
)
)
)
),
subplot_settings=Config(), # flex
show_titles=True,
hide_id_labels=True,
group_id_labels=True,
make_subplots_kwargs=Config(), # flex
layout_kwargs=Config(), # flex
),
generic=dict(
stats=Config( # flex
dict(
filters=dict(
has_mapping=dict(
filter_func=lambda self, metric_settings:
metric_settings.get('mapping', self.mapping) is not None
)
),
settings=dict(
incl_all_keys=False
)
)
),
plots=Config() # flex
),
ranges=dict(
stats=Config(), # flex
plots=Config() # flex
),
drawdowns=dict(
stats=Config( # flex
dict(
settings=dict(
incl_active=False
)
)
),
plots=Config() # flex
),
ohlcv=dict(
plot_type='OHLC',
column_names=dict(
open='Open',
high='High',
low='Low',
close='Close',
volume='Volume'
),
stats=Config(), # flex
plots=Config() # flex
),
signals=dict(
stats=Config(
dict(
filters=dict(
silent_has_other=dict(
filter_func=lambda self, metric_settings:
metric_settings.get('other', None) is not None
),
),
settings=dict(
other=None,
other_name='Other',
from_other=False
)
)
), # flex
plots=Config() # flex
),
returns=dict(
year_freq='365 days',
defaults=Config( # flex
dict(
start_value=0.,
window=10,
minp=None,
ddof=1,
risk_free=0.,
levy_alpha=2.,
required_return=0.,
cutoff=0.05
)
),
stats=Config( # flex
dict(
filters=dict(
has_year_freq=dict(
filter_func=lambda self, metric_settings:
self.year_freq is not None,
warning_message=Sub("Metric '$metric_name' requires year frequency to be set")
),
has_benchmark_rets=dict(
filter_func=lambda self, metric_settings:
metric_settings.get('benchmark_rets', self.benchmark_rets) is not None,
warning_message=Sub("Metric '$metric_name' requires benchmark_rets to be set")
)
),
settings=dict(
check_is_not_grouped=True
)
)
),
plots=Config() # flex
),
qs_adapter=dict(
defaults=Config(), # flex
),
records=dict(
stats=Config(), # flex
plots=Config() # flex
),
mapped_array=dict(
stats=Config( # flex
dict(
filters=dict(
has_mapping=dict(
filter_func=lambda self, metric_settings:
metric_settings.get('mapping', self.mapping) is not None
)
),
settings=dict(
incl_all_keys=False
)
)
),
plots=Config() # flex
),
orders=dict(
stats=Config(), # flex
plots=Config() # flex
),
trades=dict(
stats=Config( # flex
dict(
settings=dict(
incl_open=False
),
template_mapping=dict(
incl_open_tags=RepEval("['open', 'closed'] if incl_open else ['closed']")
)
)
),
plots=Config() # flex
),
logs=dict(
stats=Config() # flex
),
portfolio=dict(
call_seq='default',
init_cash=100.,
size=np.inf,
size_type='amount',
fees=0.,
fixed_fees=0.,
slippage=0.,
reject_prob=0.,
min_size=1e-8,
max_size=np.inf,
size_granularity=np.nan,
lock_cash=False,
allow_partial=True,
raise_reject=False,
val_price=np.inf,
accumulate=False,
sl_stop=np.nan,
sl_trail=False,
tp_stop=np.nan,
stop_entry_price='close',
stop_exit_price='stoplimit',
stop_conflict_mode='exit',
upon_stop_exit='close',
upon_stop_update='override',
use_stops=None,
log=False,
upon_long_conflict='ignore',
upon_short_conflict='ignore',
upon_dir_conflict='ignore',
upon_opposite_entry='reversereduce',
signal_direction='longonly',
order_direction='both',
cash_sharing=False,
call_pre_segment=False,
call_post_segment=False,
ffill_val_price=True,
update_value=False,
fill_pos_record=True,
row_wise=False,
flexible=False,
use_numba=True,
seed=None,
freq=None,
attach_call_seq=False,
fillna_close=True,
trades_type='exittrades',
stats=Config( # flex
dict(
filters=dict(
has_year_freq=dict(
filter_func=lambda self, metric_settings:
metric_settings['year_freq'] is not None,
warning_message=Sub("Metric '$metric_name' requires year frequency to be set")
)
),
settings=dict(
use_asset_returns=False,
incl_open=False
),
template_mapping=dict(
incl_open_tags=RepEval("['open', 'closed'] if incl_open else ['closed']")
)
)
),
plots=Config( # flex
dict(
subplots=['orders', 'trade_pnl', 'cum_returns'],
settings=dict(
use_asset_returns=False
)
)
)
),
messaging=dict(
telegram=Config( # flex
dict(
token=None,
use_context=True,
persistence='telegram_bot.pickle',
defaults=Config(), # flex
drop_pending_updates=True
)
),
giphy=dict(
api_key=None,
weirdness=5
),
),
),
copy_kwargs=dict(
copy_mode='deep'
),
frozen_keys=True,
nested=True,
convert_dicts=Config
)
"""_"""
settings.reset_theme()
settings.make_checkpoint()
settings.register_templates()
__pdoc__['settings'] = f"""Global settings config.
## settings.numba
Settings applied to Numba.
```json
{settings['numba'].to_doc()}
```
## settings.config
Settings applied to `vectorbt.utils.config.Config`.
```json
{settings['config'].to_doc()}
```
## settings.configured
Settings applied to `vectorbt.utils.config.Configured`.
```json
{settings['configured'].to_doc()}
```
## settings.caching
Settings applied across `vectorbt.utils.decorators`.
See `vectorbt.utils.decorators.should_cache`.
```json
{settings['caching'].to_doc()}
```
## settings.broadcasting
Settings applied across `vectorbt.base.reshape_fns`.
```json
{settings['broadcasting'].to_doc()}
```
## settings.array_wrapper
Settings applied to `vectorbt.base.array_wrapper.ArrayWrapper`.
```json
{settings['array_wrapper'].to_doc()}
```
## settings.datetime
Settings applied across `vectorbt.utils.datetime_`.
```json
{settings['datetime'].to_doc()}
```
## settings.data
Settings applied across `vectorbt.data`.
```json
{settings['data'].to_doc()}
```
### settings.data.binance
See `binance.client.Client`.
### settings.data.ccxt
See [Configuring API Keys](https://ccxt.readthedocs.io/en/latest/manual.html#configuring-api-keys).
Keys can be defined per exchange. If a key is defined at the root, it applies to all exchanges.
## settings.plotting
Settings applied to plotting Plotly figures.
```json
{settings['plotting'].to_doc(replace={
'settings.plotting.themes.light.template': "{ ... templates/light.json ... }",
'settings.plotting.themes.dark.template': "{ ... templates/dark.json ... }",
'settings.plotting.themes.seaborn.template': "{ ... templates/seaborn.json ... }"
}, path='settings.plotting')}
```
## settings.stats_builder
Settings applied to `vectorbt.generic.stats_builder.StatsBuilderMixin`.
```json
{settings['stats_builder'].to_doc()}
```
## settings.plots_builder
Settings applied to `vectorbt.generic.plots_builder.PlotsBuilderMixin`.
```json
{settings['plots_builder'].to_doc()}
```
## settings.generic
Settings applied across `vectorbt.generic`.
```json
{settings['generic'].to_doc()}
```
## settings.generic.ranges
Settings applied across `vectorbt.generic.ranges`.
```json
{settings['ranges'].to_doc()}
```
## settings.generic.drawdowns
Settings applied across `vectorbt.generic.drawdowns`.
```json
{settings['drawdowns'].to_doc()}
```
## settings.ohlcv
Settings applied across `vectorbt.ohlcv_accessors`.
```json
{settings['ohlcv'].to_doc()}
```
## settings.signals
Settings applied across `vectorbt.signals`.
```json
{settings['signals'].to_doc()}
```
## settings.returns
Settings applied across `vectorbt.returns`.
```json
{settings['returns'].to_doc()}
```
## settings.qs_adapter
Settings applied across `vectorbt.returns.qs_adapter`.
```json
{settings['qs_adapter'].to_doc()}
```
## settings.records
Settings applied across `vectorbt.records.base`.
```json
{settings['records'].to_doc()}
```
## settings.mapped_array
Settings applied across `vectorbt.records.mapped_array`.
```json
{settings['mapped_array'].to_doc()}
```
## settings.portfolio.orders
Settings applied across `vectorbt.portfolio.orders`.
```json
{settings['orders'].to_doc()}
```
## settings.portfolio.trades
Settings applied across `vectorbt.portfolio.trades`.
```json
{settings['trades'].to_doc()}
```
## settings.portfolio.logs
Settings applied across `vectorbt.portfolio.logs`.
```json
{settings['logs'].to_doc()}
```
## settings.portfolio
Settings applied to `vectorbt.portfolio.base.Portfolio`.
```json
{settings['portfolio'].to_doc()}
```
## settings.messaging
Settings applied across `vectorbt.messaging`.
```json
{settings['messaging'].to_doc()}
```
### settings.messaging.telegram
Settings applied to [python-telegram-bot](https://github.com/python-telegram-bot/python-telegram-bot).
Set `persistence` to string to use as `filename` in `telegram.ext.PicklePersistence`.
For `defaults`, see `telegram.ext.Defaults`. Other settings will be distributed across
`telegram.ext.Updater` and `telegram.ext.updater.Updater.start_polling`.
### settings.messaging.giphy
Settings applied to [GIPHY Translate Endpoint](https://developers.giphy.com/docs/api/endpoint#translate).
"""
| 30.055556 | 112 | 0.50647 |
1da9d0991247c7f7511969da2d8c2124712be8b9 | 21,974 | py | Python | nova/virt/block_device.py | jeckxie/gxzw-nova | edbc620439cf3dfc959c6eb8355ab35adc8268d7 | [
"Apache-2.0"
] | null | null | null | nova/virt/block_device.py | jeckxie/gxzw-nova | edbc620439cf3dfc959c6eb8355ab35adc8268d7 | [
"Apache-2.0"
] | 11 | 2017-06-19T01:28:55.000Z | 2017-06-23T02:01:47.000Z | nova/virt/block_device.py | jeckxie/gxzw-nova | edbc620439cf3dfc959c6eb8355ab35adc8268d7 | [
"Apache-2.0"
] | 1 | 2020-07-22T22:06:24.000Z | 2020-07-22T22:06:24.000Z | # All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import itertools
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
from nova import block_device
import nova.conf
from nova import exception
from nova.i18n import _LE
from nova.i18n import _LI
from nova.i18n import _LW
from nova.volume import encryptors
CONF = nova.conf.CONF
LOG = logging.getLogger(__name__)
class _NotTransformable(Exception):
pass
class _InvalidType(_NotTransformable):
pass
def update_db(method):
@functools.wraps(method)
def wrapped(obj, context, *args, **kwargs):
try:
ret_val = method(obj, context, *args, **kwargs)
finally:
obj.save()
return ret_val
return wrapped
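`update_db` guarantees `obj.save()` runs even when the wrapped method raises, thanks to the `finally` clause. A self-contained sketch with a stand-in object; `save_after` and `FakeBDM` are illustrative names, and the `context` argument is dropped for brevity:

```python
import functools

def save_after(method):
    """Simplified version of the decorator above: save even on failure."""
    @functools.wraps(method)
    def wrapped(obj, *args, **kwargs):
        try:
            return method(obj, *args, **kwargs)
        finally:
            obj.save()  # runs on success and on exception alike
    return wrapped

class FakeBDM(object):
    """Stand-in object that just counts save() calls."""
    def __init__(self):
        self.saved = 0

    def save(self):
        self.saved += 1

    @save_after
    def attach(self, fail=False):
        if fail:
            raise RuntimeError('attach failed')
        return 'attached'
```

This persist-on-exit shape keeps the database row in sync with whatever state the method mutated before failing.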
def _get_volume_create_az_value(instance):
"""Determine az to use when creating a volume
Uses the cinder.cross_az_attach config option to determine the availability
zone value to use when creating a volume.
:param nova.objects.Instance instance: The instance for which the volume
will be created and attached.
:returns: The availability_zone value to pass to volume_api.create
"""
# If we're allowed to attach a volume in any AZ to an instance in any AZ,
# then we don't care what AZ the volume is in so don't specify anything.
if CONF.cinder.cross_az_attach:
return None
# Else the volume has to be in the same AZ as the instance otherwise we
# fail. If the AZ is not in Cinder the volume create will fail. But on the
# other hand if the volume AZ and instance AZ don't match and
# cross_az_attach is False, then volume_api.check_attach will fail too, so
# we can't really win. :)
# TODO(mriedem): It would be better from a UX perspective if we could do
# some validation in the API layer such that if we know we're going to
# specify the AZ when creating the volume and that AZ is not in Cinder, we
# could fail the boot from volume request early with a 400 rather than
# fail to build the instance on the compute node which results in a
# NoValidHost error.
return instance.availability_zone
class DriverBlockDevice(dict):
"""A dict subclass that represents block devices used by the virt layer.
Uses block device objects internally to do the database access.
_fields and _legacy_fields class attributes present a set of fields that
are expected on a certain DriverBlockDevice type. We may have more legacy
versions in the future.
If an attribute access is attempted for a name that is found in the
_proxy_as_attr set, it will be proxied to the underlying object. This
allows us to access stuff that is not part of the data model that all
drivers understand.
The save() method allows us to update the database using the underlying
object. _update_on_save class attribute dictionary keeps the following
mapping:
{'object field name': 'driver dict field name (or None if same)'}
These fields will be updated on the internal object, from the values in the
dict, before the actual database update is done.
"""
_fields = set()
_legacy_fields = set()
_proxy_as_attr = set()
_update_on_save = {'disk_bus': None,
'device_name': None,
'device_type': None}
def __init__(self, bdm):
self.__dict__['_bdm_obj'] = bdm
if self._bdm_obj.no_device:
raise _NotTransformable()
self.update({field: None for field in self._fields})
self._transform()
def __getattr__(self, name):
if name in self._proxy_as_attr:
return getattr(self._bdm_obj, name)
else:
super(DriverBlockDevice, self).__getattr__(name)
def __setattr__(self, name, value):
if name in self._proxy_as_attr:
return setattr(self._bdm_obj, name, value)
else:
super(DriverBlockDevice, self).__setattr__(name, value)
def _transform(self):
"""Transform bdm to the format that is passed to drivers."""
raise NotImplementedError()
def legacy(self):
"""Basic legacy transformation.
Basic method will just drop the fields that are not in
_legacy_fields set. Override this in subclass if needed.
"""
return {key: self.get(key) for key in self._legacy_fields}
def attach(self, **kwargs):
"""Make the device available to be used by VMs.
To be overridden in subclasses with the connecting logic for
the type of device the subclass represents.
"""
raise NotImplementedError()
def save(self):
for attr_name, key_name in self._update_on_save.items():
lookup_name = key_name or attr_name
if self[lookup_name] != getattr(self._bdm_obj, attr_name):
setattr(self._bdm_obj, attr_name, self[lookup_name])
self._bdm_obj.save()
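The attribute-proxying described in the class docstring can be illustrated with a toy analogue; `Obj` and `ProxyDict` below are hypothetical stand-ins, not the nova objects:

```python
class Obj:
    def __init__(self):
        self.volume_id = "vol-1"
        self.device_name = "/dev/vda"

class ProxyDict(dict):
    """dict subclass that proxies selected attribute access to a
    wrapped object, like DriverBlockDevice does with _proxy_as_attr."""
    _proxy_as_attr = {"volume_id"}

    def __init__(self, obj):
        super().__init__()
        # Write through __dict__ to avoid any __setattr__ interception.
        self.__dict__["_obj"] = obj

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        if name in self._proxy_as_attr:
            return getattr(self._obj, name)
        raise AttributeError(name)

bdm = ProxyDict(Obj())
bdm["device_name"] = "/dev/vdb"
print(bdm.volume_id, bdm["device_name"])  # vol-1 /dev/vdb
```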
class DriverSwapBlockDevice(DriverBlockDevice):
_fields = set(['device_name', 'swap_size', 'disk_bus'])
_legacy_fields = _fields - set(['disk_bus'])
_update_on_save = {'disk_bus': None,
'device_name': None}
def _transform(self):
if not block_device.new_format_is_swap(self._bdm_obj):
raise _InvalidType
self.update({
'device_name': self._bdm_obj.device_name,
'swap_size': self._bdm_obj.volume_size or 0,
'disk_bus': self._bdm_obj.disk_bus
})
class DriverEphemeralBlockDevice(DriverBlockDevice):
_new_only_fields = set(['disk_bus', 'device_type', 'guest_format'])
_fields = set(['device_name', 'size']) | _new_only_fields
_legacy_fields = (_fields - _new_only_fields |
set(['num', 'virtual_name']))
def _transform(self):
if not block_device.new_format_is_ephemeral(self._bdm_obj):
raise _InvalidType
self.update({
'device_name': self._bdm_obj.device_name,
'size': self._bdm_obj.volume_size or 0,
'disk_bus': self._bdm_obj.disk_bus,
'device_type': self._bdm_obj.device_type,
'guest_format': self._bdm_obj.guest_format
})
def legacy(self, num=0):
legacy_bdm = super(DriverEphemeralBlockDevice, self).legacy()
legacy_bdm['num'] = num
legacy_bdm['virtual_name'] = 'ephemeral' + str(num)
return legacy_bdm
class DriverVolumeBlockDevice(DriverBlockDevice):
_legacy_fields = set(['connection_info', 'mount_device',
'delete_on_termination'])
_new_fields = set(['guest_format', 'device_type',
'disk_bus', 'boot_index'])
_fields = _legacy_fields | _new_fields
_valid_source = 'volume'
_valid_destination = 'volume'
_proxy_as_attr = set(['volume_size', 'volume_id'])
_update_on_save = {'disk_bus': None,
'device_name': 'mount_device',
'device_type': None}
def _transform(self):
if (not self._bdm_obj.source_type == self._valid_source
or not self._bdm_obj.destination_type ==
self._valid_destination):
raise _InvalidType
self.update(
{k: v for k, v in self._bdm_obj.items()
if k in self._new_fields | set(['delete_on_termination'])}
)
self['mount_device'] = self._bdm_obj.device_name
try:
self['connection_info'] = jsonutils.loads(
self._bdm_obj.connection_info)
except TypeError:
self['connection_info'] = None
def _preserve_multipath_id(self, connection_info):
if self['connection_info'] and 'data' in self['connection_info']:
if 'multipath_id' in self['connection_info']['data']:
connection_info['data']['multipath_id'] =\
self['connection_info']['data']['multipath_id']
LOG.info(_LI('preserve multipath_id %s'),
connection_info['data']['multipath_id'])
@update_db
def attach(self, context, instance, volume_api, virt_driver,
do_check_attach=True, do_driver_attach=False, **kwargs):
volume = volume_api.get(context, self.volume_id)
if do_check_attach:
volume_api.check_attach(context, volume, instance=instance)
volume_id = volume['id']
context = context.elevated()
connector = virt_driver.get_volume_connector(instance)
connection_info = volume_api.initialize_connection(context,
volume_id,
connector)
if 'serial' not in connection_info:
connection_info['serial'] = self.volume_id
self._preserve_multipath_id(connection_info)
# If do_driver_attach is False, we will attach a volume to an instance
# at boot time. So actual attach is done by instance creation code.
if do_driver_attach:
encryption = encryptors.get_encryption_metadata(
context, volume_api, volume_id, connection_info)
try:
virt_driver.attach_volume(
context, connection_info, instance,
self['mount_device'], disk_bus=self['disk_bus'],
device_type=self['device_type'], encryption=encryption)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE("Driver failed to attach volume "
"%(volume_id)s at %(mountpoint)s"),
{'volume_id': volume_id,
'mountpoint': self['mount_device']},
instance=instance)
volume_api.terminate_connection(context, volume_id,
connector)
self['connection_info'] = connection_info
if self.volume_size is None:
self.volume_size = volume.get('size')
mode = 'rw'
if 'data' in connection_info:
mode = connection_info['data'].get('access_mode', 'rw')
if volume['attach_status'] == "detached":
# NOTE(mriedem): save our current state so connection_info is in
# the database before the volume status goes to 'in-use' because
# after that we can detach and connection_info is required for
# detach.
self.save()
try:
volume_api.attach(context, volume_id, instance.uuid,
self['mount_device'], mode=mode)
except Exception:
with excutils.save_and_reraise_exception():
if do_driver_attach:
try:
virt_driver.detach_volume(connection_info,
instance,
self['mount_device'],
encryption=encryption)
except Exception:
LOG.warning(_LW("Driver failed to detach volume "
"%(volume_id)s at %(mount_point)s."),
{'volume_id': volume_id,
'mount_point': self['mount_device']},
exc_info=True, instance=instance)
volume_api.terminate_connection(context, volume_id,
connector)
# Cinder-volume might have completed volume attach. So
# we should detach the volume. If the attach did not
# happen, the detach request will be ignored.
volume_api.detach(context, volume_id)
@update_db
def refresh_connection_info(self, context, instance,
volume_api, virt_driver):
# NOTE (ndipanov): A no-op if there is no connection info already
if not self['connection_info']:
return
connector = virt_driver.get_volume_connector(instance)
connection_info = volume_api.initialize_connection(context,
self.volume_id,
connector)
if 'serial' not in connection_info:
connection_info['serial'] = self.volume_id
self._preserve_multipath_id(connection_info)
self['connection_info'] = connection_info
def save(self):
# NOTE(ndipanov): we might want to generalize this by adding it to the
# _update_on_save and adding a transformation function.
try:
connection_info_string = jsonutils.dumps(
self.get('connection_info'))
if connection_info_string != self._bdm_obj.connection_info:
self._bdm_obj.connection_info = connection_info_string
except TypeError:
pass
super(DriverVolumeBlockDevice, self).save()
def _call_wait_func(self, context, wait_func, volume_api, volume_id):
try:
wait_func(context, volume_id)
except exception.VolumeNotCreated:
with excutils.save_and_reraise_exception():
if self['delete_on_termination']:
try:
volume_api.delete(context, volume_id)
except Exception as exc:
LOG.warning(
_LW('Failed to delete volume: %(volume_id)s '
'due to %(exc)s'),
{'volume_id': volume_id, 'exc': exc})
class DriverSnapshotBlockDevice(DriverVolumeBlockDevice):
_valid_source = 'snapshot'
_proxy_as_attr = set(['volume_size', 'volume_id', 'snapshot_id'])
def attach(self, context, instance, volume_api,
virt_driver, wait_func=None, do_check_attach=True):
if not self.volume_id:
av_zone = _get_volume_create_az_value(instance)
snapshot = volume_api.get_snapshot(context,
self.snapshot_id)
vol = volume_api.create(context, self.volume_size, '', '',
snapshot, availability_zone=av_zone)
if wait_func:
self._call_wait_func(context, wait_func, volume_api, vol['id'])
self.volume_id = vol['id']
# Call the volume attach now
super(DriverSnapshotBlockDevice, self).attach(
context, instance, volume_api, virt_driver,
do_check_attach=do_check_attach)
class DriverImageBlockDevice(DriverVolumeBlockDevice):
_valid_source = 'image'
_proxy_as_attr = set(['volume_size', 'volume_id', 'image_id'])
def attach(self, context, instance, volume_api,
virt_driver, wait_func=None, do_check_attach=True):
if not self.volume_id:
av_zone = _get_volume_create_az_value(instance)
vol = volume_api.create(context, self.volume_size,
'', '', image_id=self.image_id,
availability_zone=av_zone)
if wait_func:
self._call_wait_func(context, wait_func, volume_api, vol['id'])
self.volume_id = vol['id']
super(DriverImageBlockDevice, self).attach(
context, instance, volume_api, virt_driver,
do_check_attach=do_check_attach)
class DriverBlankBlockDevice(DriverVolumeBlockDevice):
_valid_source = 'blank'
_proxy_as_attr = set(['volume_size', 'volume_id', 'image_id'])
def attach(self, context, instance, volume_api,
virt_driver, wait_func=None, do_check_attach=True):
if not self.volume_id:
vol_name = instance.uuid + '-blank-vol'
av_zone = _get_volume_create_az_value(instance)
vol = volume_api.create(context, self.volume_size, vol_name, '',
availability_zone=av_zone)
if wait_func:
self._call_wait_func(context, wait_func, volume_api, vol['id'])
self.volume_id = vol['id']
super(DriverBlankBlockDevice, self).attach(
context, instance, volume_api, virt_driver,
do_check_attach=do_check_attach)
def _convert_block_devices(device_type, block_device_mapping):
devices = []
for bdm in block_device_mapping:
try:
devices.append(device_type(bdm))
except _NotTransformable:
pass
return devices
convert_swap = functools.partial(_convert_block_devices,
DriverSwapBlockDevice)
convert_ephemerals = functools.partial(_convert_block_devices,
DriverEphemeralBlockDevice)
convert_volumes = functools.partial(_convert_block_devices,
DriverVolumeBlockDevice)
convert_snapshots = functools.partial(_convert_block_devices,
DriverSnapshotBlockDevice)
convert_images = functools.partial(_convert_block_devices,
DriverImageBlockDevice)
convert_blanks = functools.partial(_convert_block_devices,
DriverBlankBlockDevice)
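The `functools.partial` calls above stamp out one converter per driver class from the shared `_convert_block_devices` loop; the same idea in miniature, with dummy classes (all names below are illustrative):

```python
import functools

class _NotTransformable(Exception):
    pass

class SwapDev:
    """Accepts only swap mappings, like the Driver*BlockDevice classes."""
    def __init__(self, bdm):
        if bdm.get("type") != "swap":
            raise _NotTransformable()
        self.size = bdm["size"]

def _convert(device_type, mapping):
    out = []
    for bdm in mapping:
        try:
            out.append(device_type(bdm))
        except _NotTransformable:
            pass  # silently skip mappings of other types
    return out

convert_swap = functools.partial(_convert, SwapDev)
devs = convert_swap([{"type": "swap", "size": 512}, {"type": "ephemeral", "size": 1}])
print([d.size for d in devs])  # [512]
```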
def convert_all_volumes(*volume_bdms):
source_volume = convert_volumes(volume_bdms)
source_snapshot = convert_snapshots(volume_bdms)
source_image = convert_images(volume_bdms)
source_blank = convert_blanks(volume_bdms)
return [vol for vol in
itertools.chain(source_volume, source_snapshot,
source_image, source_blank)]
def convert_volume(volume_bdm):
try:
return convert_all_volumes(volume_bdm)[0]
except IndexError:
pass
def attach_block_devices(block_device_mapping, *attach_args, **attach_kwargs):
def _log_and_attach(bdm):
instance = attach_args[1]
if bdm.get('volume_id'):
LOG.info(_LI('Booting with volume %(volume_id)s at '
'%(mountpoint)s'),
{'volume_id': bdm.volume_id,
'mountpoint': bdm['mount_device']},
instance=instance)
elif bdm.get('snapshot_id'):
LOG.info(_LI('Booting with volume snapshot %(snapshot_id)s at '
'%(mountpoint)s'),
{'snapshot_id': bdm.snapshot_id,
'mountpoint': bdm['mount_device']},
instance=instance)
elif bdm.get('image_id'):
LOG.info(_LI('Booting with volume-backed-image %(image_id)s at '
'%(mountpoint)s'),
{'image_id': bdm.image_id,
'mountpoint': bdm['mount_device']},
instance=instance)
else:
LOG.info(_LI('Booting with blank volume at %(mountpoint)s'),
{'mountpoint': bdm['mount_device']},
instance=instance)
bdm.attach(*attach_args, **attach_kwargs)
for device in block_device_mapping:
_log_and_attach(device)
return block_device_mapping
def refresh_conn_infos(block_device_mapping, *refresh_args, **refresh_kwargs):
for device in block_device_mapping:
# NOTE(lyarwood): At present only DriverVolumeBlockDevice derived
# devices provide a refresh_connection_info method.
if hasattr(device, 'refresh_connection_info'):
device.refresh_connection_info(*refresh_args, **refresh_kwargs)
return block_device_mapping
def legacy_block_devices(block_device_mapping):
bdms = [bdm.legacy() for bdm in block_device_mapping]
# Re-enumerate ephemeral devices
if all(isinstance(bdm, DriverEphemeralBlockDevice)
for bdm in block_device_mapping):
for i, dev in enumerate(bdms):
dev['virtual_name'] = dev['virtual_name'][:-1] + str(i)
dev['num'] = i
return bdms
def get_swap(transformed_list):
"""Get the swap device out of the list context.
The block_device_info needs swap to be a single device,
not a list - otherwise this is a no-op.
"""
if not all(isinstance(device, DriverSwapBlockDevice) or
'swap_size' in device
for device in transformed_list):
return None
try:
return transformed_list.pop()
except IndexError:
return None
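`get_swap` pops the single swap device out of a homogeneous list, or returns `None`; the branch logic in isolation, with a toy `Swap` stand-in class:

```python
class Swap(dict):
    pass

def get_swap(devices):
    """Return the swap device, or None if the list is not all-swap."""
    if not all(isinstance(d, Swap) or "swap_size" in d for d in devices):
        return None
    try:
        return devices.pop()
    except IndexError:
        # all() over an empty list is True, so this branch covers []
        return None

print(get_swap([Swap(swap_size=512)]))  # {'swap_size': 512}
print(get_swap([{"size": 1}]))          # None
print(get_swap([]))                     # None
```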
_IMPLEMENTED_CLASSES = (DriverSwapBlockDevice, DriverEphemeralBlockDevice,
DriverVolumeBlockDevice, DriverSnapshotBlockDevice,
DriverImageBlockDevice, DriverBlankBlockDevice)
def is_implemented(bdm):
for cls in _IMPLEMENTED_CLASSES:
try:
cls(bdm)
return True
except _NotTransformable:
pass
return False
def is_block_device_mapping(bdm):
return (bdm.source_type in ('image', 'volume', 'snapshot', 'blank')
and bdm.destination_type == 'volume'
and is_implemented(bdm))
| 38.34904 | 79 | 0.608264 |
489cd9aba3797310f6d67e5d2fb143ec3580e6fc | 10,732 | py | Python | practice/pycxsimulator.py | nalyd88/modeling-and-simulation | 96e5fc3994ded8782229425e43ed9237966c9567 | ["MIT"] | null | null | null | practice/pycxsimulator.py | nalyd88/modeling-and-simulation | 96e5fc3994ded8782229425e43ed9237966c9567 | ["MIT"] | null | null | null | practice/pycxsimulator.py | nalyd88/modeling-and-simulation | 96e5fc3994ded8782229425e43ed9237966c9567 | ["MIT"] | null | null | null |
## "pycxsimulator.py"
## Realtime Simulation GUI for PyCX
##
## Developed by:
## Chun Wong
## email@chunwong.net
##
## Revised by:
## Hiroki Sayama
## sayama@binghamton.edu
##
## Copyright 2012 Chun Wong & Hiroki Sayama
##
## Simulation control & GUI extensions
## Copyright 2013 Przemyslaw Szufel & Bogumil Kaminski
## {pszufe, bkamins}@sgh.waw.pl
##
##
## The following two lines should be placed at the beginning of your simulator code:
##
## import matplotlib
## matplotlib.use('TkAgg')
import pylab as PL
import ttk
from Tkinter import *
from ttk import Notebook
class GUI:
## GUI variables
titleText = 'PyCX Simulator' # window title
timeInterval = 0 # refresh time in milliseconds
running = False
modelFigure = None
stepSize = 1
currentStep = 0
def __init__(self,title='PyCX Simulator',interval=0,stepSize=1,parameterSetters=[]):
self.titleText = title
self.timeInterval = interval
self.stepSize = stepSize
self.parameterSetters = parameterSetters
self.varEntries = {}
self.statusStr = ""
self.initGUI()
def initGUI(self):
#create root window
self.rootWindow = Tk()
self.statusText = StringVar(value=self.statusStr)
self.setStatusStr("Simulation not yet started")
self.rootWindow.wm_title(self.titleText)
self.rootWindow.protocol('WM_DELETE_WINDOW',self.quitGUI)
self.rootWindow.geometry('550x400')
self.rootWindow.columnconfigure(0, weight=1)
self.rootWindow.rowconfigure(0, weight=1)
self.notebook = Notebook(self.rootWindow)
self.notebook.grid(row=0,column=0,padx=2,pady=2,sticky='nswe')
self.frameRun = Frame()
self.frameSettings = Frame()
self.frameParameters = Frame()
self.frameInformation = Frame()
self.notebook.add(self.frameRun,text="Run")
self.notebook.add(self.frameSettings,text="Settings")
self.notebook.add(self.frameParameters,text="Parameters")
self.notebook.add(self.frameInformation,text="Info")
self.notebook.pack(expand=YES, fill=BOTH, padx=5, pady=5 ,side=TOP)
self.status = Label(self.rootWindow, width=40,height=3, relief=SUNKEN, bd=1,textvariable=self.statusText)
self.status.grid(row=1,column=0,padx=2,pady=2,sticky='nswe')
self.status.pack(side=TOP, fill=X, padx=1, pady=1, expand=NO)
self.runPauseString = StringVar()
self.runPauseString.set("Run")
self.buttonRun = Button(self.frameRun,width=30,height=2,textvariable=self.runPauseString,command=self.runEvent)
self.buttonRun.pack(side=TOP, padx=5, pady=5)
self.showHelp(self.buttonRun,"Runs the simulation (or pauses the running simulation)")
self.buttonStep = Button(self.frameRun,width=30,height=2,text='Step Once',command=self.stepOnce)
self.buttonStep.pack(side=TOP, padx=5, pady=5)
self.showHelp(self.buttonStep,"Steps the simulation only once")
self.buttonReset = Button(self.frameRun,width=30,height=2,text='Reset',command=self.resetModel)
self.buttonReset.pack(side=TOP, padx=5, pady=5)
self.showHelp(self.buttonReset,"Resets the simulation")
can = Canvas(self.frameSettings)
lab = Label(can, width=25,height=1,text="Step size ", justify=LEFT, anchor=W,takefocus=0)
lab.pack(side='left')
self.stepScale = Scale(can,from_=1, to=50, resolution=1,command=self.changeStepSize,orient=HORIZONTAL, width=25,length=150)
self.stepScale.set(self.stepSize)
self.showHelp(self.stepScale,"Skips model redraw during every [n] simulation steps\nResults in a faster model run.")
self.stepScale.pack(side='left')
can.pack(side='top')
can = Canvas(self.frameSettings)
lab = Label(can, width=25,height=1,text="Step visualization delay in ms ", justify=LEFT, anchor=W,takefocus=0)
lab.pack(side='left')
self.stepDelay = Scale(can,from_=0, to=max(2000,self.timeInterval), resolution=10,command=self.changeStepDelay,orient=HORIZONTAL, width=25,length=150)
self.stepDelay.set(self.timeInterval)
        self.showHelp(self.stepDelay,"The visualization of each step is delayed by the given number of milliseconds.")
self.stepDelay.pack(side='left')
can.pack(side='top')
scrollInfo = Scrollbar(self.frameInformation)
self.textInformation = Text(self.frameInformation, width=45,height=13,bg='lightgray',wrap=WORD,font=("Courier",10))
scrollInfo.pack(side=RIGHT, fill=Y)
self.textInformation.pack(side=LEFT,fill=BOTH,expand=YES)
scrollInfo.config(command=self.textInformation.yview)
self.textInformation.config(yscrollcommand=scrollInfo.set)
for variableSetter in self.parameterSetters:
can = Canvas(self.frameParameters)
lab = Label(can, width=25,height=1,text=variableSetter.__name__+" ",anchor=W,takefocus=0)
lab.pack(side='left')
ent = Entry(can, width=11)
ent.insert(0, str(variableSetter()))
if variableSetter.__doc__ != None and len(variableSetter.__doc__) > 0:
self.showHelp(ent,variableSetter.__doc__.strip())
ent.pack(side='left')
can.pack(side='top')
self.varEntries[variableSetter]=ent
if len(self.parameterSetters) > 0:
self.buttonSaveParameters = Button(self.frameParameters,width=50,height=1,command=self.saveParametersCmd,text="Save parameters to the running model",state=DISABLED)
self.showHelp(self.buttonSaveParameters,"Saves the parameter values.\nNot all values may take effect on a running model\nA model reset might be required.")
self.buttonSaveParameters.pack(side='top',padx=5,pady=5)
self.buttonSaveParametersAndReset = Button(self.frameParameters,width=50,height=1,command=self.saveParametersAndResetCmd,text="Save parameters to the model and reset the model")
self.showHelp(self.buttonSaveParametersAndReset,"Saves the given parameter values and resets the model")
self.buttonSaveParametersAndReset.pack(side='top',padx=5,pady=5)
def setStatusStr(self,newStatus):
self.statusStr = newStatus
self.statusText.set(self.statusStr)
#model control functions
def changeStepSize(self,val):
self.stepSize = int(val)
def changeStepDelay(self,val):
self.timeInterval= int(val)
def saveParametersCmd(self):
for variableSetter in self.parameterSetters:
variableSetter(float(self.varEntries[variableSetter].get()))
self.setStatusStr("New parameter values have been set")
def saveParametersAndResetCmd(self):
self.saveParametersCmd()
self.resetModel()
def runEvent(self):
self.running = not self.running
if self.running:
self.rootWindow.after(self.timeInterval,self.stepModel)
self.runPauseString.set("Pause")
self.buttonStep.configure(state=DISABLED)
self.buttonReset.configure(state=DISABLED)
if len(self.parameterSetters) > 0:
self.buttonSaveParameters.configure(state=NORMAL)
self.buttonSaveParametersAndReset.configure(state=DISABLED)
else:
self.runPauseString.set("Continue Run")
self.buttonStep.configure(state=NORMAL)
self.buttonReset.configure(state=NORMAL)
if len(self.parameterSetters) > 0:
self.buttonSaveParameters.configure(state=NORMAL)
self.buttonSaveParametersAndReset.configure(state=NORMAL)
def stepModel(self):
if self.running:
self.modelStepFunc()
self.currentStep += 1
self.setStatusStr("Step "+str(self.currentStep))
self.status.configure(foreground='black')
if (self.currentStep) % self.stepSize == 0:
self.drawModel()
self.rootWindow.after(int(self.timeInterval*1.0/self.stepSize),self.stepModel)
def stepOnce(self):
self.running = False
self.runPauseString.set("Continue Run")
self.modelStepFunc()
self.currentStep += 1
self.setStatusStr("Step "+str(self.currentStep))
self.drawModel()
if len(self.parameterSetters) > 0:
self.buttonSaveParameters.configure(state=NORMAL)
def resetModel(self):
self.running = False
self.runPauseString.set("Run")
self.modelInitFunc()
        self.currentStep = 0
self.setStatusStr("Model has been reset")
self.drawModel()
def drawModel(self):
if self.modelFigure == None or self.modelFigure.canvas.manager.window == None:
self.modelFigure = PL.figure()
PL.ion()
self.modelDrawFunc()
self.modelFigure.canvas.manager.window.update()
def start(self,func=[]):
if len(func)==3:
self.modelInitFunc = func[0]
self.modelDrawFunc = func[1]
self.modelStepFunc = func[2]
if (self.modelStepFunc.__doc__ != None and len(self.modelStepFunc.__doc__)>0):
self.showHelp(self.buttonStep,self.modelStepFunc.__doc__.strip())
if (self.modelInitFunc.__doc__ != None and len(self.modelInitFunc.__doc__)>0):
self.textInformation.config(state=NORMAL)
self.textInformation.delete(1.0, END)
self.textInformation.insert(END, self.modelInitFunc.__doc__.strip())
self.textInformation.config(state=DISABLED)
self.modelInitFunc()
self.drawModel()
self.rootWindow.mainloop()
def quitGUI(self):
PL.close('all')
self.rootWindow.quit()
self.rootWindow.destroy()
def showHelp(self, widget,text):
def setText(self):
self.statusText.set(text)
self.status.configure(foreground='blue')
def showHelpLeave(self):
self.statusText.set(self.statusStr)
self.status.configure(foreground='black')
widget.bind("<Enter>", lambda e : setText(self))
widget.bind("<Leave>", lambda e : showHelpLeave(self))
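`GUI.start()` consumes an `[init, draw, step]` triple of model functions. The contract can be exercised without Tkinter by driving the same three callables from a plain loop (a sketch; `print` stands in for the pylab redraw):

```python
state = {"t": 0}

def init():
    """Build the model state (normally also sets up the plot)."""
    state["t"] = 0

def draw():
    """Render the current state (print stands in for pylab here)."""
    print("t =", state["t"])

def step():
    """Advance the model by one tick."""
    state["t"] += 1

# What GUI.start([init, draw, step]) does, minus the Tk event loop:
init()
draw()  # t = 0
for _ in range(3):
    step()
draw()  # t = 3
```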
| 43.449393 | 190 | 0.630544 |
694728f1549a14ad8a33bdc621d327384f473dcd | 9,839 | py | Python | actions/scripts.py | SainTurDaY27/beattosetto | 980b240f72fbd057a8d7bbd4e669de80eca3193b | ["MIT"] | null | null | null | actions/scripts.py | SainTurDaY27/beattosetto | 980b240f72fbd057a8d7bbd4e669de80eca3193b | ["MIT"] | 16 | 2021-11-17T08:34:10.000Z | 2022-03-08T19:23:50.000Z | actions/scripts.py | Siraphop4Nene/beattosetto | 78911428092c9c157c835e70d8abd4a08c48da8b | ["MIT"] | null | null | null |
"""
Script used by the worker.
"""
import logging
import os
import time
import requests
import traceback
from django.core.files import File
from django.core.files.temp import NamedTemporaryFile
from django.db.models.functions import datetime
from django.utils import timezone
from .logging import setup_logger, log_two_handler, LOG_FORMAT, LOG_DEBUG_FORMAT
from .models import ActionLog
from beatmap_collections.models import Beatmap
from beattosetto.settings import OSU_API_V1_KEY
from django.utils.timezone import make_aware
def update_beatmap_action_script(action: ActionLog):
"""An action script for updating a beatmap's data entire server.
Parameters:
action (ActionLog): The ActionLog for tracking the action.
"""
try:
# For running first time, make a new folder for store debug log
if not os.path.exists('actions_logs_debug'):
os.mkdir('actions_logs_debug')
# Setup the new logger
info_logger = setup_logger(f'info_log_{action.id}', f'media/{action.log}', 'a+', logging.INFO, LOG_FORMAT)
debug_logger = setup_logger(f'debug_log_{action.id}', f'actions_logs_debug/log_{action.id}_debug.log',
'a+', logging.DEBUG, LOG_DEBUG_FORMAT)
log_two_handler(info_logger, debug_logger, logging.INFO, "Setup logger complete.")
beatmap_count = Beatmap.objects.all().count()
log_two_handler(info_logger, debug_logger, logging.INFO, f"Prepare to update {beatmap_count} beatmaps.")
failed = 0
success = 0
count = 0
for beatmap in Beatmap.objects.all():
count += 1
action.running_text = f"Updating {beatmap.title}[{beatmap.version}] ({count}/{beatmap_count})"
action.save()
log_two_handler(info_logger, debug_logger, logging.INFO,
f"Updating {beatmap.title}[{beatmap.version}] ({count}/{beatmap_count})")
beatmap_id = beatmap.beatmap_id
parameter = {'b': beatmap.beatmap_id, 'k': OSU_API_V1_KEY}
log_two_handler(info_logger, debug_logger, logging.INFO,
f'Requesting beatmap data for {beatmap.title}[{beatmap.version}] ({count}/{beatmap_count})')
request_data = requests.get("https://osu.ppy.sh/api/get_beatmaps", params=parameter)
if (request_data.status_code == 200) and (request_data.json() != []):
try:
beatmap_json = request_data.json()[0]
log_two_handler(info_logger, debug_logger, logging.INFO,
f'Beatmap data received for {beatmap.title}[{beatmap.version}]')
debug_logger.debug(f"{beatmap.title}[{beatmap.version}] JSON Data : {beatmap_json}")
action.running_text = f"Fetching the new beatmap picture of" \
f" {beatmap.title}[{beatmap.version}] ({count}/{beatmap_count})"
action.save()
# Try to delete the old beatmap picture and replace it with a new one
try:
os.remove(f"media/{beatmap.beatmap_card}")
log_two_handler(info_logger, debug_logger, logging.INFO,
f"Deleted old beatmap card picture of {beatmap.title}[{beatmap.version}]")
except FileNotFoundError:
log_two_handler(info_logger, debug_logger, logging.WARNING,
f"No old beatmap card picture of {beatmap.title}[{beatmap.version}] to delete, pass it.")
try:
os.remove(f"media/{beatmap.beatmap_list}")
log_two_handler(info_logger, debug_logger, logging.INFO,
f"Deleted old beatmap list picture of {beatmap.title}[{beatmap.version}]")
except FileNotFoundError:
log_two_handler(info_logger, debug_logger, logging.WARNING,
f"No old beatmap list picture of {beatmap.title}[{beatmap.version}] to delete, pass it.")
card_pic = requests.get(
f"https://assets.ppy.sh/beatmaps/{beatmap_json['beatmapset_id']}/covers/card.jpg")
card_temp = NamedTemporaryFile(delete=True)
card_temp.write(card_pic.content)
card_temp.flush()
beatmap.beatmap_card.save(f"{beatmap_id}.jpg", File(card_temp), save=True)
card_temp.close()
log_two_handler(info_logger, debug_logger, logging.INFO, f"Saved new beatmap card picture of {beatmap.title}[{beatmap.version}]")
list_pic = requests.get(
f"https://assets.ppy.sh/beatmaps/{beatmap_json['beatmapset_id']}/covers/list.jpg")
list_temp = NamedTemporaryFile(delete=True)
list_temp.write(list_pic.content)
list_temp.flush()
beatmap.beatmap_list.save(f"{beatmap_id}.jpg", File(list_temp), save=True)
list_temp.close()
log_two_handler(info_logger, debug_logger, logging.INFO, f"Saved new beatmap list picture of {beatmap.title}[{beatmap.version}]")
action.running_text = f"Updating the metadata of {beatmap.title}[{beatmap.version}] ({count}/{beatmap_count})"
log_two_handler(info_logger, debug_logger, logging.INFO, f"Updating the metadata of {beatmap.title} [{beatmap.version}]")
beatmap.beatmapset_id = beatmap_json['beatmapset_id']
beatmap.title = beatmap_json['title']
beatmap.artist = beatmap_json['artist']
beatmap.source = beatmap_json['source']
beatmap.creator = beatmap_json['creator']
beatmap.approved = beatmap_json['approved']
beatmap.difficultyrating = beatmap_json['difficultyrating']
beatmap.bpm = beatmap_json['bpm']
beatmap.version = beatmap_json['version']
beatmap.count_normal = beatmap_json['count_normal']
beatmap.count_slider = beatmap_json['count_slider']
beatmap.count_spinner = beatmap_json['count_spinner']
beatmap.diff_approach = beatmap_json['diff_approach']
beatmap.diff_drain = beatmap_json['diff_drain']
beatmap.diff_overall = beatmap_json['diff_overall']
beatmap.diff_size = beatmap_json['diff_size']
if beatmap_json['diff_aim'] is not None:
beatmap.diff_aim = beatmap_json['diff_aim']
if beatmap_json['diff_speed'] is not None:
beatmap.diff_speed = beatmap_json['diff_speed']
if beatmap_json['max_combo'] is not None:
beatmap.max_combo = beatmap_json['max_combo']
beatmap.playcount = beatmap_json['playcount']
beatmap.favourite_count = beatmap_json['favourite_count']
beatmap.total_length = beatmap_json['total_length']
beatmap.genre_id = beatmap_json['genre_id']
beatmap.language_id = beatmap_json['language_id']
beatmap.tags = beatmap_json['tags']
beatmap.submit_date = make_aware(datetime.datetime.strptime(beatmap_json['submit_date'], '%Y-%m-%d %H:%M:%S'))
if beatmap_json['approved_date'] is not None:
beatmap.approved_date = make_aware(datetime.datetime.strptime(beatmap_json['approved_date'], '%Y-%m-%d %H:%M:%S'))
beatmap.last_update = make_aware(datetime.datetime.strptime(beatmap_json['last_update'], '%Y-%m-%d %H:%M:%S'))
beatmap.save()
log_two_handler(info_logger, debug_logger, logging.INFO,
f"Saved new metadata of {beatmap.title}[{beatmap.version}]")
success += 1
except Exception as error:
log_two_handler(info_logger, debug_logger, logging.ERROR,
f"Error while updating the metadata of {beatmap.title}[{beatmap.version}] : {str(error)}")
log_two_handler(info_logger, debug_logger, logging.ERROR, f"Traceback detail: \n {traceback.format_exc()}")
failed += 1
else:
log_two_handler(info_logger, debug_logger, logging.ERROR,
f"Failed to fetch beatmap data of {beatmap.title}[{beatmap.version}] from osu! API")
debug_logger.error(f"Status Code: {request_data.status_code}")
debug_logger.error(f"JSON Data: {request_data.json()}")
failed += 1
# To make the API request rate not too rush, we need to add a small delay on request
time.sleep(5)
action.status = 2
action.running_text = f"Task running successfully with {success} success and {failed} failed!"
action.time_finish = timezone.now()
action.save()
log_two_handler(info_logger, debug_logger, logging.INFO,
f"Task running successfully with {success} success and {failed} failed!")
log_two_handler(info_logger, debug_logger, logging.INFO, "Action finished! Thanks for using beatto-chan services.")
except Exception as error:
action.status = 3
action.running_text = f"Start Action failed : {str(error)}"
action.time_finish = timezone.now()
action.save()
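`setup_logger` and `log_two_handler` are imported from the package's `.logging` module, which is not shown here; a minimal stand-in with assumed (simplified, stream-based) signatures shows the fan-out idea of writing one message to both an info and a debug logger:

```python
import io
import logging

def setup_logger(name, stream, level, fmt):
    """Assumed simplified analogue of the .logging helper: one named
    logger with one handler and formatter (the real helper takes a
    file path and mode instead of a stream)."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter(fmt))
    logger.addHandler(handler)
    return logger

def log_two_handler(a, b, level, msg):
    """Emit the same record on both loggers, as the script does."""
    a.log(level, msg)
    b.log(level, msg)

buf_info, buf_debug = io.StringIO(), io.StringIO()
info = setup_logger("demo_info", buf_info, logging.INFO, "%(levelname)s %(message)s")
debug = setup_logger("demo_debug", buf_debug, logging.DEBUG, "%(message)s")
log_two_handler(info, debug, logging.INFO, "Setup logger complete.")
print(buf_info.getvalue().strip())  # INFO Setup logger complete.
```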
| 59.271084 | 149 | 0.592642 |
a481c83ce434635c6055ff965a54617b28a5f40d | 280 | py | Python | tests/CRAFT/MFW/ET_SG.py | idaholab/SR2ML | 2aa5e0be02786523cdeaf898d42411a7068d30b7 | ["Apache-2.0"] | 5 | 2021-01-25T02:01:22.000Z | 2021-12-27T03:14:49.000Z | tests/CRAFT/MFW/ET_SG.py | idaholab/SR2ML | 2aa5e0be02786523cdeaf898d42411a7068d30b7 | ["Apache-2.0"] | 32 | 2021-01-12T18:43:29.000Z | 2022-02-17T19:45:27.000Z | tests/CRAFT/testMC_timeDep/ET_SG.py | idaholab/SR2ML | 2aa5e0be02786523cdeaf898d42411a7068d30b7 | ["Apache-2.0"] | null | null | null |
# Copyright 2020, Battelle Energy Alliance, LLC
# ALL RIGHTS RESERVED
import numpy as np
import math
import random
from scipy.integrate import quad
def run(self,Input):
  # input: t, T (max time)
# output: outcome
self.outcome_SG = self.p_SG * np.ones(Input['time'].size)
| 20 | 59 | 0.728571 |
c7e5437ac9ec0be4e4abe23e679e16330bb9eaff | 743 | py | Python | test/test_compute_project_vm_iso_create.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | ["MIT"] | null | null | null | test/test_compute_project_vm_iso_create.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | ["MIT"] | null | null | null | test/test_compute_project_vm_iso_create.py | hyperonecom/h1-client-python | 4ce355852ba3120ec1b8f509ab5894a5c08da730 | ["MIT"] | null | null | null |
"""
HyperOne
HyperOne API # noqa: E501
The version of the OpenAPI document: 0.1.0
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import h1
from h1.model.compute_project_vm_iso_create import ComputeProjectVmIsoCreate
class TestComputeProjectVmIsoCreate(unittest.TestCase):
"""ComputeProjectVmIsoCreate unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testComputeProjectVmIsoCreate(self):
"""Test ComputeProjectVmIsoCreate"""
# FIXME: construct object with mandatory attributes with example values
# model = ComputeProjectVmIsoCreate() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
# === portfolios/equally_weighted.py (NizarGhandri/Optimal_Portfolios, MIT) ===
import pandas as pd
import yfinance as yf
import os
import logging
class EquallyWeighted():
def __init__(self, cfg):
self.cfg = cfg
self.data = self.load_data()
def load_data(self):
return self.preprocess(pd.concat([pd.read_parquet(os.path.join(self.cfg.data_dir, f))["Close"] for f in os.listdir(self.cfg.data_dir)]))
def preprocess(self, x, percent0=0.5, percent1=0.2):
tmp = x.dropna(axis=0, thresh=int(percent0*x.shape[1])).dropna(axis=1, thresh=int(percent1*x.shape[0])).fillna(method="ffill")
dropped = set(x.columns) - set(tmp.columns)
logging.info("Preprocessing dropped the following stocks: " + ", ".join(sorted(dropped)))
return tmp
def __call__(self):
# Simple returns are relative to the previous price: r_t = p_t / p_{t-1} - 1
return self.data.pct_change().fillna(0).add(1).cumprod().mean(axis=1)
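The `__call__` above compounds per-period simple returns and averages equally across stocks. The same arithmetic in plain numpy, with made-up toy prices (every number here is for illustration only):

```python
import numpy as np

# Toy prices: rows = days, columns = stocks (made-up numbers)
prices = np.array([[10.0, 20.0],
                   [11.0, 19.0],
                   [12.1, 19.0]])

rel = prices[1:] / prices[:-1]      # gross simple returns p_t / p_{t-1}
growth = np.cumprod(rel, axis=0)    # cumulative growth per stock
portfolio = growth.mean(axis=1)     # equal weight across stocks
```

Each column of `growth` is one stock's cumulative growth; the equally weighted portfolio is just the row-wise mean.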
# === pymatgen/analysis/structure_matcher.py (exenGT/pymatgen, MIT) ===
# Distributed under the terms of the MIT License.
"""
This module provides classes to perform fitting of structures.
"""
import abc
import itertools
import numpy as np
from monty.json import MSONable
from pymatgen.analysis.defects.core import Defect, Interstitial, Substitution, Vacancy
from pymatgen.core import PeriodicSite
from pymatgen.core.composition import Composition
from pymatgen.core.lattice import Lattice
from pymatgen.core.periodic_table import get_el_sp
from pymatgen.core.structure import Structure
from pymatgen.optimization.linear_assignment import LinearAssignment # type: ignore
from pymatgen.util.coord import lattice_points_in_supercell
from pymatgen.util.coord_cython import ( # type: ignore
is_coord_subset_pbc,
pbc_shortest_vectors,
)
__author__ = "William Davidson Richards, Stephen Dacek, Shyue Ping Ong"
__copyright__ = "Copyright 2011, The Materials Project"
__version__ = "1.0"
__maintainer__ = "William Davidson Richards"
__email__ = "wrichard@mit.edu"
__status__ = "Production"
__date__ = "Dec 3, 2012"
class AbstractComparator(MSONable, metaclass=abc.ABCMeta):
"""
Abstract Comparator class. A Comparator defines how sites are compared in
a structure.
"""
@abc.abstractmethod
def are_equal(self, sp1, sp2):
"""
Defines how the species of two sites are considered equal. For
example, one can consider sites to have the same species only when
the species are exactly the same, i.e., Fe2+ matches Fe2+ but not
Fe3+. Or one can define that only the element matters,
and all oxidation state information is ignored.
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
Boolean indicating whether species are considered equal.
"""
return
@abc.abstractmethod
def get_hash(self, composition):
"""
Defines a hash to group structures. This allows structures to be
grouped efficiently for comparison. The hash must be invariant under
supercell creation. (e.g. composition is not a good hash, but
fractional_composition might be). Reduced formula is not a good hash,
due to inconsistent behavior with fractional occupancies.
Composition is used here instead of structure because for anonymous
matches it is much quicker to apply a substitution to a composition
object than a structure object.
Args:
composition (Composition): composition of the structure
Returns:
A hashable object. Examples can be string formulas, integers etc.
"""
return
@classmethod
def from_dict(cls, d):
"""
:param d: Dict representation
:return: Comparator.
"""
for trans_modules in ["structure_matcher"]:
mod = __import__(
"pymatgen.analysis." + trans_modules,
globals(),
locals(),
[d["@class"]],
0,
)
if hasattr(mod, d["@class"]):
trans = getattr(mod, d["@class"])
return trans()
raise ValueError("Invalid Comparator dict")
def as_dict(self):
"""
:return: MSONable dict
"""
return {
"version": __version__,
"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
}
class SpeciesComparator(AbstractComparator):
"""
A Comparator that matches species exactly. The default used in
StructureMatcher.
"""
def are_equal(self, sp1, sp2):
"""
True if species are exactly the same, i.e., Fe2+ == Fe2+ but not Fe3+.
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
Boolean indicating whether species are equal.
"""
return sp1 == sp2
def get_hash(self, composition):
"""
Returns: Fractional composition
"""
return composition.fractional_composition
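`get_hash` returns the fractional composition because, as the `AbstractComparator` docstring explains, the hash must be invariant under supercell creation. A standalone sketch with plain dicts (a hypothetical helper, not part of pymatgen):

```python
def fractional_composition(comp):
    """Normalize an {element: amount} dict so the amounts sum to 1.

    Supercell-invariant: doubling every amount yields the same result,
    which is why the comparators above use it for pre-grouping.
    """
    total = sum(comp.values())
    return {el: amt / total for el, amt in comp.items()}

cell = {"Fe": 2, "O": 3}
supercell = {"Fe": 4, "O": 6}  # 2x supercell of the same structure
```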
class SpinComparator(AbstractComparator):
"""
A Comparator that matches magnetic structures to their inverse spins.
This comparator is primarily used to filter magnetically ordered
structures with opposite spins, which are equivalent.
"""
def are_equal(self, sp1, sp2):
"""
True if species are exactly the same, i.e., Fe2+ == Fe2+ but not
Fe3+, and the spins are reversed, i.e., spin up maps to spin down
and vice versa.
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
Boolean indicating whether species are equal.
"""
for s1 in sp1.keys():
spin1 = getattr(s1, "spin", 0)
oxi1 = getattr(s1, "oxi_state", 0)
for s2 in sp2.keys():
spin2 = getattr(s2, "spin", 0)
oxi2 = getattr(s2, "oxi_state", 0)
if s1.symbol == s2.symbol and oxi1 == oxi2 and spin2 == -spin1:
break
else:
return False
return True
def get_hash(self, composition):
"""
Returns: Fractional composition
"""
return composition.fractional_composition
class ElementComparator(AbstractComparator):
"""
A Comparator that matches elements. i.e. oxidation states are
ignored.
"""
def are_equal(self, sp1, sp2):
"""
True if element:amounts are exactly the same, i.e.,
oxidation state is not considered.
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
Boolean indicating whether species are the same based on element
and amounts.
"""
comp1 = Composition(sp1)
comp2 = Composition(sp2)
return comp1.get_el_amt_dict() == comp2.get_el_amt_dict()
def get_hash(self, composition):
"""
Returns: Fractional element composition
"""
return composition.element_composition.fractional_composition
class FrameworkComparator(AbstractComparator):
"""
A Comparator that matches sites, regardless of species.
"""
def are_equal(self, sp1, sp2):
"""
True if there are atoms on both sites.
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
True always
"""
return True
def get_hash(self, composition):
"""
No hash possible
"""
return 1
class OrderDisorderElementComparator(AbstractComparator):
"""
A Comparator that matches sites, given some overlap in the element
composition
"""
def are_equal(self, sp1, sp2):
"""
True if there is some overlap in composition between the species
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
True if one site's element set is a subset of the other's.
"""
set1 = set(sp1.elements)
set2 = set(sp2.elements)
return set1.issubset(set2) or set2.issubset(set1)
def get_hash(self, composition):
"""
Returns: Fractional composition
"""
return composition.fractional_composition
class OccupancyComparator(AbstractComparator):
"""
A Comparator that matches occupancies on sites,
irrespective of the species of those sites.
"""
def are_equal(self, sp1, sp2):
"""
Args:
sp1: First species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
sp2: Second species. A dict of {specie/element: amt} as per the
definition in Site and PeriodicSite.
Returns:
True if sets of occupancies (amt) are equal on both sites.
"""
return set(sp1.element_composition.values()) == set(sp2.element_composition.values())
def get_hash(self, composition):
"""
:param composition: Composition.
:return: 1. Difficult to define sensible hash
"""
return 1
class StructureMatcher(MSONable):
"""
Class to match structures by similarity.
Algorithm:
1. Given two structures: s1 and s2
2. Optional: Reduce to primitive cells.
3. If the number of sites do not match, return False
4. Reduce s1 and s2 to Niggli cells
5. Optional: Scale s1 and s2 to same volume.
6. Optional: Remove oxidation states associated with sites
7. Find all possible lattice vectors for s2 within shell of ltol.
8. For s1, translate an atom in the smallest set to the origin
9. For s2: find all valid lattices from permutations of the list
of lattice vectors (invalid if: det(Lattice Matrix) < half
volume of original s2 lattice)
10. For each valid lattice:
a. If the lattice angles are within tolerance of s1,
basis change s2 into new lattice.
b. For each atom in the smallest set of s2:
i. Translate to origin and compare fractional sites in
structure within a fractional tolerance.
ii. If true:
ia. Convert both lattices to cartesian and place
both structures on an average lattice
ib. Compute and return the average and max rms
displacement between the two structures normalized
by the average free length per atom
if fit function called:
if normalized max rms displacement is less than
stol. Return True
if get_rms_dist function called:
if normalized average rms displacement is less
than the stored rms displacement, store and
continue. (This function will search all possible
lattices for the smallest average rms displacement
between the two structures)
"""
def __init__(
self,
ltol=0.2,
stol=0.3,
angle_tol=5,
primitive_cell=True,
scale=True,
attempt_supercell=False,
allow_subset=False,
comparator=SpeciesComparator(),
supercell_size="num_sites",
ignored_species=None,
):
"""
Args:
ltol (float): Fractional length tolerance. Default is 0.2.
stol (float): Site tolerance. Defined as the fraction of the
average free length per atom := ( V / Nsites ) ** (1/3)
Default is 0.3.
angle_tol (float): Angle tolerance in degrees. Default is 5 degrees.
primitive_cell (bool): If true: input structures will be reduced to
primitive cells prior to matching. Default to True.
scale (bool): Input structures are scaled to equivalent volume if
true; For exact matching, set to False.
attempt_supercell (bool): If set to True and number of sites in
cells differ after a primitive cell reduction (divisible by an
integer) attempts to generate a supercell transformation of the
smaller cell which is equivalent to the larger structure.
allow_subset (bool): Allow one structure to match to the subset of
another structure. Eg. Matching of an ordered structure onto a
disordered one, or matching a delithiated to a lithiated
structure. This option cannot be combined with
attempt_supercell, or with structure grouping.
comparator (Comparator): A comparator object implementing an equals
method that declares equivalency of sites. Default is
SpeciesComparator, which implies rigid species
mapping, i.e., Fe2+ only matches Fe2+ and not Fe3+.
Other comparators are provided, e.g., ElementComparator which
matches only the elements and not the species.
The reason why a comparator object is used instead of
supplying a comparison function is that it is not possible to
pickle a function, which makes it otherwise difficult to use
StructureMatcher with Python's multiprocessing.
supercell_size (str or list): Method to use for determining the
size of a supercell (if applicable). Possible values are
num_sites, num_atoms, volume, or an element or list of elements
present in both structures.
ignored_species (list): A list of ions to be ignored in matching.
Useful for matching structures that have similar frameworks
except for certain ions, e.g., Li-ion intercalation frameworks.
This is more useful than allow_subset because it allows better
control over what species are ignored in the matching.
"""
self.ltol = ltol
self.stol = stol
self.angle_tol = angle_tol
self._comparator = comparator
self._primitive_cell = primitive_cell
self._scale = scale
self._supercell = attempt_supercell
self._supercell_size = supercell_size
self._subset = allow_subset
self._ignored_species = [] if ignored_species is None else ignored_species[:]
def _get_supercell_size(self, s1, s2):
"""
Returns the supercell size, and whether the supercell should
be applied to s1. If fu == 1, s1_supercell is returned as
true, to avoid ambiguity.
"""
if self._supercell_size == "num_sites":
fu = s2.num_sites / s1.num_sites
elif self._supercell_size == "num_atoms":
fu = s2.composition.num_atoms / s1.composition.num_atoms
elif self._supercell_size == "volume":
fu = s2.volume / s1.volume
elif not isinstance(self._supercell_size, str):
s1comp, s2comp = 0, 0
for el in self._supercell_size:
el = get_el_sp(el)
s1comp += s1.composition[el]
s2comp += s2.composition[el]
fu = s2comp / s1comp
else:
el = get_el_sp(self._supercell_size)
if (el in s2.composition) and (el in s1.composition):
fu = s2.composition[el] / s1.composition[el]
else:
raise ValueError("Invalid argument for supercell_size.")
if fu < 2 / 3:
return int(round(1 / fu)), False
return int(round(fu)), True
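The rounding rule at the end of `_get_supercell_size` can be isolated into a tiny standalone sketch (hypothetical function name): when the size ratio drops below 2/3, the reciprocal is rounded instead and the supercell is marked as belonging to the other structure.

```python
def supercell_size(n_sites_1, n_sites_2):
    """Return (fu, s1_supercell) from two site counts.

    Mirrors the rule above: when the ratio is below 2/3, round the
    reciprocal and apply the supercell to struct2 instead of struct1
    (fu == 1 counts as struct1 to avoid ambiguity).
    """
    fu = n_sites_2 / n_sites_1
    if fu < 2 / 3:
        return int(round(1 / fu)), False
    return int(round(fu)), True
```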
def _get_lattices(self, target_lattice, s, supercell_size=1):
"""
Yields lattices for s with lengths and angles close to
target_lattice. If supercell_size is specified, the
returned lattice will have that number of primitive cells
in it
Args:
target_lattice (Lattice): lattice to match
s (Structure): structure whose candidate lattices are generated
supercell_size (int): number of primitive cells in the returned lattice
"""
lattices = s.lattice.find_all_mappings(
target_lattice,
ltol=self.ltol,
atol=self.angle_tol,
skip_rotation_matrix=True,
)
for l, _, scale_m in lattices:
if abs(abs(np.linalg.det(scale_m)) - supercell_size) < 0.5:
yield l, scale_m
def _get_supercells(self, struct1, struct2, fu, s1_supercell):
"""
Computes all supercells of one structure close to the lattice of the
other
if s1_supercell == True, it makes the supercells of struct1, otherwise
it makes them of s2
yields: s1 frac coords, s2 frac coords, average_lattice, supercell_matrix
"""
def av_lat(l1, l2):
params = (np.array(l1.parameters) + np.array(l2.parameters)) / 2
return Lattice.from_parameters(*params)
def sc_generator(s1, s2):
s2_fc = np.array(s2.frac_coords)
if fu == 1:
cc = np.array(s1.cart_coords)
for l, sc_m in self._get_lattices(s2.lattice, s1, fu):
fc = l.get_fractional_coords(cc)
fc -= np.floor(fc)
yield fc, s2_fc, av_lat(l, s2.lattice), sc_m
else:
fc_init = np.array(s1.frac_coords)
for l, sc_m in self._get_lattices(s2.lattice, s1, fu):
fc = np.dot(fc_init, np.linalg.inv(sc_m))
lp = lattice_points_in_supercell(sc_m)
fc = (fc[:, None, :] + lp[None, :, :]).reshape((-1, 3))
fc -= np.floor(fc)
yield fc, s2_fc, av_lat(l, s2.lattice), sc_m
if s1_supercell:
for x in sc_generator(struct1, struct2):
yield x
else:
for x in sc_generator(struct2, struct1):
# reorder generator output so s1 is still first
yield x[1], x[0], x[2], x[3]
@classmethod
def _cmp_fstruct(cls, s1, s2, frac_tol, mask):
"""
Returns true if a matching exists between s1 and s2
under frac_tol. s2 should be a subset of s1.
"""
if len(s2) > len(s1):
raise ValueError("s1 must be larger than s2")
if mask.shape != (len(s2), len(s1)):
raise ValueError("mask has incorrect shape")
return is_coord_subset_pbc(s2, s1, frac_tol, mask)
@classmethod
def _cart_dists(cls, s1, s2, avg_lattice, mask, normalization, lll_frac_tol=None):
"""
Finds a matching in cartesian space. Finds an additional
fractional translation vector to minimize RMS distance
Args:
s1, s2: numpy arrays of fractional coordinates. len(s1) >= len(s2)
avg_lattice: Lattice on which to calculate distances
mask: numpy array of booleans. mask[i, j] = True indicates
that s2[i] cannot be matched to s1[j]
normalization (float): inverse normalization length
Returns:
Distances from s2 to s1, normalized by (V/Natom) ^ 1/3
Fractional translation vector to apply to s2.
Mapping from s1 to s2, i.e. with numpy slicing, s1[mapping] => s2
"""
if len(s2) > len(s1):
raise ValueError("s1 must be larger than s2")
if mask.shape != (len(s2), len(s1)):
raise ValueError("mask has incorrect shape")
# vectors are from s2 to s1
vecs, d_2 = pbc_shortest_vectors(avg_lattice, s2, s1, mask, return_d2=True, lll_frac_tol=lll_frac_tol)
lin = LinearAssignment(d_2)
s = lin.solution # pylint: disable=E1101
short_vecs = vecs[np.arange(len(s)), s]
translation = np.average(short_vecs, axis=0)
f_translation = avg_lattice.get_fractional_coords(translation)
new_d2 = np.sum((short_vecs - translation) ** 2, axis=-1)
return new_d2 ** 0.5 * normalization, f_translation, s
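`pbc_shortest_vectors` does the heavy lifting above. The core idea, sketched for a single pair in plain numpy: under periodic boundary conditions the shortest fractional displacement is obtained by wrapping each component into [-0.5, 0.5]. (This per-axis wrap only guarantees the true Cartesian minimum for near-orthogonal cells, which is one reason the caller passes an LLL-reduced tolerance.)

```python
import numpy as np

def pbc_frac_shortest(f1, f2):
    """Shortest fractional displacement from f1 to f2 under PBC."""
    d = np.asarray(f2, dtype=float) - np.asarray(f1, dtype=float)
    return d - np.round(d)  # wrap each component into [-0.5, 0.5]

# Two sites sitting just across a cell boundary
d = pbc_frac_shortest([0.95, 0.0, 0.0], [0.05, 0.0, 0.0])
```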
def _get_mask(self, struct1, struct2, fu, s1_supercell):
"""
Returns mask for matching struct2 to struct1. If struct1 has sites
a b c, and fu = 2, assumes supercells of struct2 will be ordered
aabbcc (rather than abcabc)
Returns:
mask, struct1 translation indices, struct2 translation index
"""
mask = np.zeros((len(struct2), len(struct1), fu), dtype=bool)
inner = []
for sp2, i in itertools.groupby(enumerate(struct2.species_and_occu), key=lambda x: x[1]):
i = list(i)
inner.append((sp2, slice(i[0][0], i[-1][0] + 1)))
for sp1, j in itertools.groupby(enumerate(struct1.species_and_occu), key=lambda x: x[1]):
j = list(j)
j = slice(j[0][0], j[-1][0] + 1)
for sp2, i in inner:
mask[i, j, :] = not self._comparator.are_equal(sp1, sp2)
if s1_supercell:
mask = mask.reshape((len(struct2), -1))
else:
# supercell is of struct2, roll fu axis back to preserve
# correct ordering
mask = np.rollaxis(mask, 2, 1)
mask = mask.reshape((-1, len(struct1)))
# find the best translation indices
i = np.argmax(np.sum(mask, axis=-1))
inds = np.where(np.invert(mask[i]))[0]
if s1_supercell:
# remove the symmetrically equivalent s1 indices
inds = inds[::fu]
return np.array(mask, dtype=int), inds, i
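The mask built above marks forbidden pairings: `mask[i, j]` is True when site i of struct2 may not be matched to site j of struct1. A toy version with string species standing in for the comparator's `are_equal`:

```python
import numpy as np

species1 = ["Fe", "Fe", "O"]  # struct1 sites
species2 = ["O", "Fe"]        # struct2 sites (allow_subset case)

# mask[i, j] = True means species2[i] may NOT be matched to species1[j]
mask = np.array([[s2 != s1 for s1 in species1] for s2 in species2])
```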
def fit(self, struct1, struct2, symmetric=False):
"""
Fit two structures.
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
symmetric (Bool): Defaults to False
If True, check the equality both ways.
This only impacts a small percentage of structures
Returns:
True or False.
"""
struct1, struct2 = self._process_species([struct1, struct2])
if not self._subset and self._comparator.get_hash(struct1.composition) != self._comparator.get_hash(
struct2.composition
):
return None
if not symmetric:
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
match = self._match(struct1, struct2, fu, s1_supercell, break_on_match=True)
if match is None:
return False
return match[0] <= self.stol
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
match1 = self._match(struct1, struct2, fu, s1_supercell, break_on_match=True)
struct1, struct2 = struct2, struct1
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
match2 = self._match(struct1, struct2, fu, s1_supercell, break_on_match=True)
if match1 is None or match2 is None:
return False
return max(match1[0], match2[0]) <= self.stol
def get_rms_dist(self, struct1, struct2):
"""
Calculate RMS displacement between two structures
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
Returns:
rms displacement normalized by (Vol / nsites) ** (1/3)
and maximum distance between paired sites. If no matching
lattice is found None is returned.
"""
struct1, struct2 = self._process_species([struct1, struct2])
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
match = self._match(struct1, struct2, fu, s1_supercell, use_rms=True, break_on_match=False)
if match is None:
return None
return match[0], max(match[1])
def _process_species(self, structures):
copied_structures = []
for s in structures:
# The copies must be plain Structure objects (not subclasses) to work
# properly, so rebuild each one via Structure.from_sites.
ss = Structure.from_sites(s)
if self._ignored_species:
ss.remove_species(self._ignored_species)
copied_structures.append(ss)
return copied_structures
def _preprocess(self, struct1, struct2, niggli=True):
"""
Rescales, finds the reduced structures (primitive and niggli),
and finds fu, the supercell size to make struct1 comparable to
s2
"""
struct1 = struct1.copy()
struct2 = struct2.copy()
if niggli:
struct1 = struct1.get_reduced_structure(reduction_algo="niggli")
struct2 = struct2.get_reduced_structure(reduction_algo="niggli")
# primitive cell transformation
if self._primitive_cell:
struct1 = struct1.get_primitive_structure()
struct2 = struct2.get_primitive_structure()
if self._supercell:
fu, s1_supercell = self._get_supercell_size(struct1, struct2)
else:
fu, s1_supercell = 1, True
mult = fu if s1_supercell else 1 / fu
# rescale lattice to same volume
if self._scale:
ratio = (struct2.volume / (struct1.volume * mult)) ** (1 / 6)
nl1 = Lattice(struct1.lattice.matrix * ratio)
struct1.lattice = nl1
nl2 = Lattice(struct2.lattice.matrix / ratio)
struct2.lattice = nl2
return struct1, struct2, fu, s1_supercell
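The rescaling step in `_preprocess` takes the sixth root of the volume ratio, multiplies one lattice matrix by it and divides the other; since det(cM) = c^3 det(M), each volume moves by the cube of that factor, and for the fu == 1 case both land on sqrt(V1 * V2). A quick numpy check with made-up cubic lattices:

```python
import numpy as np

m1 = np.diag([3.0, 3.0, 3.0])  # made-up cubic lattice, volume 27
m2 = np.diag([4.0, 4.0, 4.0])  # made-up cubic lattice, volume 64
mult = 1.0                      # the fu == 1 case

ratio = (np.linalg.det(m2) / (np.linalg.det(m1) * mult)) ** (1 / 6)
v1 = np.linalg.det(m1 * ratio)  # struct1's lattice scaled up
v2 = np.linalg.det(m2 / ratio)  # struct2's lattice scaled down
```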
def _match(
self,
struct1,
struct2,
fu,
s1_supercell=True,
use_rms=False,
break_on_match=False,
):
"""
Matches one struct onto the other
"""
ratio = fu if s1_supercell else 1 / fu
if len(struct1) * ratio >= len(struct2):
return self._strict_match(
struct1,
struct2,
fu,
s1_supercell=s1_supercell,
break_on_match=break_on_match,
use_rms=use_rms,
)
return self._strict_match(
struct2,
struct1,
fu,
s1_supercell=(not s1_supercell),
break_on_match=break_on_match,
use_rms=use_rms,
)
def _strict_match(
self,
struct1,
struct2,
fu,
s1_supercell=True,
use_rms=False,
break_on_match=False,
):
"""
Matches struct2 onto struct1 (which should contain all sites in
struct2).
Args:
struct1, struct2 (Structure): structures to be matched
fu (int): size of supercell to create
s1_supercell (bool): whether to create the supercell of
struct1 (vs struct2)
use_rms (bool): whether to minimize the rms of the matching
break_on_match (bool): whether to stop search at first
valid match
"""
if fu < 1:
raise ValueError("fu cannot be less than 1")
mask, s1_t_inds, s2_t_ind = self._get_mask(struct1, struct2, fu, s1_supercell)
if mask.shape[0] > mask.shape[1]:
raise ValueError("after supercell creation, struct1 must have more sites than struct2")
# check that a valid mapping exists
if (not self._subset) and mask.shape[1] != mask.shape[0]:
return None
if LinearAssignment(mask).min_cost > 0: # pylint: disable=E1101
return None
best_match = None
# loop over all lattices
for s1fc, s2fc, avg_l, sc_m in self._get_supercells(struct1, struct2, fu, s1_supercell):
# compute fractional tolerance
normalization = (len(s1fc) / avg_l.volume) ** (1 / 3)
inv_abc = np.array(avg_l.reciprocal_lattice.abc)
frac_tol = inv_abc * self.stol / (np.pi * normalization)
# loop over all translations
for s1i in s1_t_inds:
t = s1fc[s1i] - s2fc[s2_t_ind]
t_s2fc = s2fc + t
if self._cmp_fstruct(s1fc, t_s2fc, frac_tol, mask):
inv_lll_abc = np.array(avg_l.get_lll_reduced_lattice().reciprocal_lattice.abc)
lll_frac_tol = inv_lll_abc * self.stol / (np.pi * normalization)
dist, t_adj, mapping = self._cart_dists(s1fc, t_s2fc, avg_l, mask, normalization, lll_frac_tol)
if use_rms:
val = np.linalg.norm(dist) / len(dist) ** 0.5
else:
val = max(dist)
# pylint: disable=E1136
if best_match is None or val < best_match[0]:
total_t = t + t_adj
total_t -= np.round(total_t)
best_match = val, dist, sc_m, total_t, mapping
if (break_on_match or val < 1e-5) and val < self.stol:
return best_match
if best_match and best_match[0] < self.stol:
return best_match
return None
def group_structures(self, s_list, anonymous=False):
"""
Given a list of structures, use fit to group
them by structural equality.
Args:
s_list ([Structure]): List of structures to be grouped
anonymous (bool): Whether to use anonymous mode.
Returns:
A list of lists of matched structures
Assumption: if s1 == s2 but s1 != s3, then s2 and s3 will be put
in different groups without comparison.
"""
if self._subset:
raise ValueError("allow_subset cannot be used with group_structures")
original_s_list = list(s_list)
s_list = self._process_species(s_list)
# Use structure hash to pre-group structures
if anonymous:
def c_hash(c):
return c.anonymized_formula
else:
c_hash = self._comparator.get_hash
def s_hash(s):
return c_hash(s[1].composition)
sorted_s_list = sorted(enumerate(s_list), key=s_hash)
all_groups = []
# For each pre-grouped list of structures, perform actual matching.
for k, g in itertools.groupby(sorted_s_list, key=s_hash):
unmatched = list(g)
while len(unmatched) > 0:
i, refs = unmatched.pop(0)
matches = [i]
if anonymous:
inds = filter(
lambda i: self.fit_anonymous(refs, unmatched[i][1]),
list(range(len(unmatched))),
)
else:
inds = filter(
lambda i: self.fit(refs, unmatched[i][1]),
list(range(len(unmatched))),
)
inds = list(inds)
matches.extend([unmatched[i][0] for i in inds])
unmatched = [unmatched[i] for i in range(len(unmatched)) if i not in inds]
all_groups.append([original_s_list[i] for i in matches])
return all_groups
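The while-loop above is a greedy partition under the documented transitivity assumption. Stripped of structures, with an arbitrary predicate in place of `fit` (hypothetical helper):

```python
def greedy_group(items, equal):
    """Greedy partition assuming `equal` is transitive, as
    group_structures documents: if a == b and a != c, then b and c
    are never compared."""
    groups, unmatched = [], list(items)
    while unmatched:
        ref = unmatched.pop(0)
        groups.append([ref] + [x for x in unmatched if equal(ref, x)])
        unmatched = [x for x in unmatched if not equal(ref, x)]
    return groups

groups = greedy_group([1, 4, 2, 7, 10], lambda a, b: a % 3 == b % 3)
```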
def as_dict(self):
"""
:return: MSONable dict
"""
return {
"version": __version__,
"@module": self.__class__.__module__,
"@class": self.__class__.__name__,
"comparator": self._comparator.as_dict(),
"stol": self.stol,
"ltol": self.ltol,
"angle_tol": self.angle_tol,
"primitive_cell": self._primitive_cell,
"scale": self._scale,
"attempt_supercell": self._supercell,
"allow_subset": self._subset,
"supercell_size": self._supercell_size,
"ignored_species": self._ignored_species,
}
@classmethod
def from_dict(cls, d):
"""
:param d: Dict representation
:return: StructureMatcher
"""
return StructureMatcher(
ltol=d["ltol"],
stol=d["stol"],
angle_tol=d["angle_tol"],
primitive_cell=d["primitive_cell"],
scale=d["scale"],
attempt_supercell=d["attempt_supercell"],
allow_subset=d["allow_subset"],
comparator=AbstractComparator.from_dict(d["comparator"]),
supercell_size=d["supercell_size"],
ignored_species=d["ignored_species"],
)
def _anonymous_match(
self,
struct1,
struct2,
fu,
s1_supercell=True,
use_rms=False,
break_on_match=False,
single_match=False,
):
"""
Tries all permutations of matching struct1 to struct2.
Args:
struct1, struct2 (Structure): Preprocessed input structures
Returns:
List of (mapping, match)
"""
if not isinstance(self._comparator, SpeciesComparator):
raise ValueError("Anonymous fitting currently requires SpeciesComparator")
# check that species lists are comparable
sp1 = struct1.composition.elements
sp2 = struct2.composition.elements
if len(sp1) != len(sp2):
return None
ratio = fu if s1_supercell else 1 / fu
swapped = len(struct1) * ratio < len(struct2)
s1_comp = struct1.composition
s2_comp = struct2.composition
matches = []
for perm in itertools.permutations(sp2):
sp_mapping = dict(zip(sp1, perm))
# do quick check that compositions are compatible
mapped_comp = Composition({sp_mapping[k]: v for k, v in s1_comp.items()})
if (not self._subset) and (self._comparator.get_hash(mapped_comp) != self._comparator.get_hash(s2_comp)):
continue
mapped_struct = struct1.copy()
mapped_struct.replace_species(sp_mapping)
if swapped:
m = self._strict_match(
struct2,
mapped_struct,
fu,
(not s1_supercell),
use_rms,
break_on_match,
)
else:
m = self._strict_match(mapped_struct, struct2, fu, s1_supercell, use_rms, break_on_match)
if m:
matches.append((sp_mapping, m))
if single_match:
break
return matches
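Anonymous matching tries every species bijection via `itertools.permutations`, discarding permutations whose mapped composition disagrees before doing any expensive structural work. That composition pre-check in miniature (hypothetical helper, plain dicts):

```python
import itertools

def compatible_mappings(comp1, comp2):
    """Species bijections under which the fractional compositions agree.

    comp1, comp2: {species: amount} dicts with equally many species.
    """
    n1, n2 = sum(comp1.values()), sum(comp2.values())
    frac2 = {k: v / n2 for k, v in comp2.items()}
    out = []
    for perm in itertools.permutations(comp2):
        mapping = dict(zip(comp1, perm))
        if all(abs(v / n1 - frac2[mapping[k]]) < 1e-8 for k, v in comp1.items()):
            out.append(mapping)
    return out

# Li2O vs Na4S2: only Li->Na, O->S preserves the fractional composition
maps = compatible_mappings({"Li": 2, "O": 1}, {"Na": 4, "S": 2})
```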
def get_rms_anonymous(self, struct1, struct2):
"""
Performs an anonymous fitting, which allows distinct species in one
structure to map to another. E.g., to compare if the Li2O and Na2O
structures are similar.
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
Returns:
(min_rms, min_mapping)
min_rms is the minimum rms distance, and min_mapping is the
corresponding minimal species mapping that would map
struct1 to struct2. (None, None) is returned if the minimum rms
exceeds the threshold.
"""
struct1, struct2 = self._process_species([struct1, struct2])
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
matches = self._anonymous_match(struct1, struct2, fu, s1_supercell, use_rms=True, break_on_match=False)
if matches:
best = sorted(matches, key=lambda x: x[1][0])[0]
return best[1][0], best[0]
return None, None
def get_best_electronegativity_anonymous_mapping(self, struct1, struct2):
"""
Performs an anonymous fitting, which allows distinct species in one
structure to map to another. E.g., to compare if the Li2O and Na2O
structures are similar. If multiple substitutions are within tolerance
this will return the one which minimizes the difference in
electronegativity between the matched species.
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
Returns:
min_mapping (Dict): Mapping of struct1 species to struct2 species
"""
struct1, struct2 = self._process_species([struct1, struct2])
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2)
matches = self._anonymous_match(struct1, struct2, fu, s1_supercell, use_rms=True, break_on_match=True)
if matches:
min_X_diff = np.inf
for m in matches:
X_diff = 0
for k, v in m[0].items():
X_diff += struct1.composition[k] * (k.X - v.X) ** 2
if X_diff < min_X_diff:
min_X_diff = X_diff
best = m[0]
return best
return None
def get_all_anonymous_mappings(self, struct1, struct2, niggli=True, include_dist=False):
"""
Performs an anonymous fitting, which allows distinct species in one
structure to map to another. Returns a dictionary of species
substitutions that are within tolerance
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
niggli (bool): Find niggli cell in preprocessing
include_dist (bool): Return the maximum distance with each mapping
Returns:
list of species mappings that map struct1 to struct2.
"""
struct1, struct2 = self._process_species([struct1, struct2])
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2, niggli)
matches = self._anonymous_match(struct1, struct2, fu, s1_supercell, break_on_match=not include_dist)
if matches:
if include_dist:
return [(m[0], m[1][0]) for m in matches]
return [m[0] for m in matches]
return None
def fit_anonymous(self, struct1, struct2, niggli=True):
"""
Performs an anonymous fitting, which allows distinct species in one
structure to map to another. E.g., to compare if the Li2O and Na2O
structures are similar.
Args:
struct1 (Structure): 1st structure
struct2 (Structure): 2nd structure
Returns:
True/False: Whether a species mapping can map struct1 to struct2
"""
struct1, struct2 = self._process_species([struct1, struct2])
struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2, niggli)
matches = self._anonymous_match(struct1, struct2, fu, s1_supercell, break_on_match=True, single_match=True)
return bool(matches)
def get_supercell_matrix(self, supercell, struct):
"""
Returns the matrix for transforming struct to supercell. This
can be used for very distorted 'supercells' where the primitive cell
is impossible to find
"""
if self._primitive_cell:
raise ValueError("get_supercell_matrix cannot be used with the primitive cell option")
struct, supercell, fu, s1_supercell = self._preprocess(struct, supercell, False)
if not s1_supercell:
raise ValueError("The non-supercell must be put onto the basis of the supercell, not the other way around")
match = self._match(struct, supercell, fu, s1_supercell, use_rms=True, break_on_match=False)
if match is None:
return None
return match[2]
def get_transformation(self, struct1, struct2):
"""
Returns the supercell transformation, fractional translation vector,
and a mapping to transform struct2 to be similar to struct1.
Args:
struct1 (Structure): Reference structure
struct2 (Structure): Structure to transform.
Returns:
supercell (numpy.ndarray(3, 3)): supercell matrix
vector (numpy.ndarray(3)): fractional translation vector
mapping (list(int or None)):
The first len(struct1) items of the mapping vector are the
indices of struct1's corresponding sites in struct2 (or None
if there is no corresponding site), and the other items are
the remaining site indices of struct2.
"""
if self._primitive_cell:
raise ValueError("get_transformation cannot be used with the primitive cell option")
struct1, struct2 = self._process_species((struct1, struct2))
s1, s2, fu, s1_supercell = self._preprocess(struct1, struct2, False)
ratio = fu if s1_supercell else 1 / fu
if s1_supercell and fu > 1:
raise ValueError("Struct1 must be the supercell, not the other way around")
if len(s1) * ratio >= len(s2):
# s1 is superset
match = self._strict_match(s1, s2, fu=fu, s1_supercell=False, use_rms=True, break_on_match=False)
if match is None:
return None
# invert the mapping, since it needs to be from s1 to s2
mapping = [list(match[4]).index(i) if i in match[4] else None for i in range(len(s1))]
return match[2], match[3], mapping
# s2 is superset
match = self._strict_match(s2, s1, fu=fu, s1_supercell=True, use_rms=True, break_on_match=False)
if match is None:
return None
# add sites not included in the mapping
not_included = list(range(len(s2) * fu))
for i in match[4]:
not_included.remove(i)
mapping = list(match[4]) + not_included
return match[2], -match[3], mapping
def get_s2_like_s1(self, struct1, struct2, include_ignored_species=True):
"""
Performs transformations on struct2 to put it in a basis similar to
struct1 (without changing any of the inter-site distances)
Args:
struct1 (Structure): Reference structure
struct2 (Structure): Structure to transform.
include_ignored_species (bool): Defaults to True,
in which case the ignored species are also transformed to the
struct1 lattice orientation, though obviously there is no direct
matching to existing sites.
Returns:
A structure object similar to struct1, obtained by making a
supercell, sorting, and translating struct2.
"""
s1, s2 = self._process_species([struct1, struct2])
trans = self.get_transformation(s1, s2)
if trans is None:
return None
sc, t, mapping = trans
sites = list(s2)
# Append the ignored sites at the end.
sites.extend([site for site in struct2 if site not in s2])
temp = Structure.from_sites(sites)
temp.make_supercell(sc)
temp.translate_sites(list(range(len(temp))), t)
# translate sites to correct unit cell
for i, j in enumerate(mapping[: len(s1)]):
if j is not None:
vec = np.round(struct1[i].frac_coords - temp[j].frac_coords)
temp.translate_sites(j, vec, to_unit_cell=False)
sites = [temp.sites[i] for i in mapping if i is not None]
if include_ignored_species:
start = int(round(len(temp) / len(struct2) * len(s2)))
sites.extend(temp.sites[start:])
return Structure.from_sites(sites)
def get_mapping(self, superset, subset):
"""
Calculate the mapping from superset to subset.
Args:
superset (Structure): Structure containing at least the sites in
subset (within the structure matching tolerance)
subset (Structure): Structure containing some of the sites in
superset (within the structure matching tolerance)
Returns:
numpy array such that superset.sites[mapping] is within matching
tolerance of subset.sites or None if no such mapping is possible
"""
if self._supercell:
raise ValueError("cannot compute mapping to supercell")
if self._primitive_cell:
raise ValueError("cannot compute mapping with primitive cell option")
if len(subset) > len(superset):
raise ValueError("subset is larger than superset")
superset, subset, _, _ = self._preprocess(superset, subset, True)
match = self._strict_match(superset, subset, 1, break_on_match=False)
if match is None or match[0] > self.stol:
return None
return match[4]
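# Hedged sketch (illustration only, not part of pymatgen): the
# electronegativity-minimizing selection performed by
# get_best_electronegativity_anonymous_mapping above, reduced to plain dicts.
# `X` and `composition` are hypothetical electronegativities and per-species
# site counts; each candidate in `mappings` plays the role of an m[0] dict
# returned by _anonymous_match.
def _min_X_diff_mapping(mappings, X, composition):
    best, min_diff = None, float("inf")
    for m in mappings:
        # same weighted squared-difference score as in the method above
        diff = sum(composition[k] * (X[k] - X[v]) ** 2 for k, v in m.items())
        if diff < min_diff:
            min_diff, best = diff, m
    return best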
class PointDefectComparator(MSONable):
"""
A class that matches pymatgen Point Defect objects even if their
Cartesian coordinates are different (compares sublattices for the defect).
NOTE: for defect complexes (more than a single defect),
this comparator will break.
"""
def __init__(self, check_charge=False, check_primitive_cell=False, check_lattice_scale=False):
"""
Args:
check_charge (bool): Gives option to check
if charges are identical.
Default is False (different charged defects can be same)
check_primitive_cell (bool): Gives option to
compare different supercells of bulk_structure,
rather than directly compare supercell sizes
Default is False (requires bulk_structure in each defect to be same size)
check_lattice_scale (bool): Gives option to scale volumes of
structures to each other to yield identical lattice constants.
Default is False (enforces same
lattice constants in both structures)
"""
self.check_charge = check_charge
self.check_primitive_cell = check_primitive_cell
self.check_lattice_scale = check_lattice_scale
def are_equal(self, d1, d2):
"""
Args:
d1: First defect. A pymatgen Defect object.
d2: Second defect. A pymatgen Defect object.
Returns:
True if defects are identical in type and sublattice.
"""
possible_defect_types = (Defect, Vacancy, Substitution, Interstitial)
if not isinstance(d1, possible_defect_types) or not isinstance(d2, possible_defect_types):
raise ValueError("Cannot use PointDefectComparator to compare non-defect objects...")
if not isinstance(d1, d2.__class__):
return False
if d1.site.specie != d2.site.specie:
return False
if self.check_charge and (d1.charge != d2.charge):
return False
sm = StructureMatcher(
ltol=0.01,
primitive_cell=self.check_primitive_cell,
scale=self.check_lattice_scale,
)
if not sm.fit(d1.bulk_structure, d2.bulk_structure):
return False
d1 = d1.copy()
d2 = d2.copy()
if self.check_primitive_cell or self.check_lattice_scale:
# if allowing for base structure volume or supercell modifications,
# then need to preprocess defect objects to allow for matching
d1_mod_bulk_structure, d2_mod_bulk_structure, _, _ = sm._preprocess(d1.bulk_structure, d2.bulk_structure)
d1_defect_site = PeriodicSite(
d1.site.specie,
d1.site.coords,
d1_mod_bulk_structure.lattice,
to_unit_cell=True,
coords_are_cartesian=True,
)
d2_defect_site = PeriodicSite(
d2.site.specie,
d2.site.coords,
d2_mod_bulk_structure.lattice,
to_unit_cell=True,
coords_are_cartesian=True,
)
d1._structure = d1_mod_bulk_structure
d2._structure = d2_mod_bulk_structure
d1._defect_site = d1_defect_site
d2._defect_site = d2_defect_site
return sm.fit(d1.generate_defect_structure(), d2.generate_defect_structure())
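# Hedged sketch (illustration only, not part of pymatgen): inverting a site
# mapping the way get_transformation does above for the superset case --
# entry i holds the position of site i inside `match_indices`, or None when
# site i has no corresponding site.
def _invert_mapping(match_indices, n_sites):
    lst = list(match_indices)
    return [lst.index(i) if i in lst else None for i in range(n_sites)]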
| 37.6375 | 119 | 0.597082 |
b58aad33c798c3b631d3cc38fcf1da78660dd58c | 176 | py | Python | ABC148/ABC148f.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | ["Unlicense"] | null | null | null | ABC148/ABC148f.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | ["Unlicense"] | null | null | null | ABC148/ABC148f.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | ["Unlicense"] | null | null | null |
# ABC148f
def main():
import sys
input = sys.stdin.readline
sys.setrecursionlimit(10**6)
# Submit with PyPy unless recursion is used
if __name__ == '__main__':
main()
| 14.666667 | 32 | 0.619318 |
ae8d9a27076077cf9a428490d5b4a66fbbe77aae | 107 | py | Python | reports/urls.py | Mortaza-Seydi/Task-Manager | 2e3f9c87814bfa993e27ca3e0dee918c66975653 | ["MIT"] | 12 | 2021-03-01T08:07:16.000Z | 2022-02-27T06:33:10.000Z | reports/urls.py | Mortaza-Seydi/Task-Manager | 2e3f9c87814bfa993e27ca3e0dee918c66975653 | ["MIT"] | null | null | null | reports/urls.py | Mortaza-Seydi/Task-Manager | 2e3f9c87814bfa993e27ca3e0dee918c66975653 | ["MIT"] | 2 | 2021-06-04T14:39:40.000Z | 2022-02-24T05:48:22.000Z |
from django.urls import path
from .views import Report
urlpatterns = [
path('', Report.as_view()),
]
| 13.375 | 31 | 0.682243 |
084dff9172831d872c75a1af5d6d19983e23a1b3 | 6,155 | py | Python | lib/reda/importers/eit_version_2010.py | j-hase/reda | b6419c39842cfbdd9380a27a5c6e9a04ccaeb294 | ["MIT"] | 12 | 2017-12-11T08:32:46.000Z | 2021-06-09T05:41:57.000Z | lib/reda/importers/eit_version_2010.py | j-hase/reda | b6419c39842cfbdd9380a27a5c6e9a04ccaeb294 | ["MIT"] | 58 | 2017-11-12T11:10:42.000Z | 2021-06-11T13:52:44.000Z | lib/reda/importers/eit_version_2010.py | geophysics-ubonn/REDA | 8f0399031121f5a937171231a25f9ab03a3c8873 | ["MIT"] | 11 | 2017-11-12T12:02:35.000Z | 2021-02-16T06:54:04.000Z |
"""Research Center Jülich - EIT40 system importer (2010 version)
"""
import datetime
import numpy as np
import pandas as pd
def _average_swapped_current_injections(df):
AB = df[['a', 'b']].values
# get unique injections
abu = np.unique(
AB.flatten().view(AB.dtype.descr * 2)
).view(AB.dtype).reshape(-1, 2)
# find swapped pairs
pairs = []
alone = []
abul = [x.tolist() for x in abu]
for ab in abul:
swap = list(reversed(ab))
if swap in abul:
pair = (ab, swap)
pair_r = (swap, ab)
if pair not in pairs and pair_r not in pairs:
pairs.append(pair)
else:
alone.append(ab)
# check that all pairs got assigned
if len(pairs) * 2 + len(alone) != len(abul):
print('len(pairs) * 2 == {0}'.format(len(pairs) * 2))
print(len(abul))
raise Exception(
'numbers of swapped and unswapped configurations do not add up!'
)
if len(pairs) > 0 and len(alone) > 0:
print(
'WARNING: Found both swapped configurations and non-swapped ones!'
)
delete_slices = []
# these are the columns that we work on (and that are retained)
columns = [
'frequency', 'a', 'b', 'p',
'Z1', 'Z2', 'Z3',
'Il1', 'Il2', 'Il3',
'Is1', 'Is2', 'Is3',
'Zg1', 'Zg2', 'Zg3',
'datetime',
]
dtypes = {col: df.dtypes[col] for col in columns}
X = df[columns].values
for pair in pairs:
index_a = np.where(
(X[:, 1] == pair[0][0]) & (X[:, 2] == pair[0][1])
)[0]
index_b = np.where(
(X[:, 1] == pair[1][0]) & (X[:, 2] == pair[1][1])
)[0]
# normal injection
A = X[index_a, :]
# swapped injection
B = X[index_b, :]
# make sure we have the same ordering in P, frequency
diff = A[:, [0, 3]] - B[:, [0, 3]]
if not np.all(diff) == 0:
raise Exception('Wrong ordering')
# compute the averages in A
# the minus stems from the swapped current electrodes
X[index_a, 4:10] = (A[:, 4:10] - B[:, 4:10]) / 2.0
X[index_a, 10:16] = (A[:, 10:16] + B[:, 10:16]) / 2.0
# delete the second pair
delete_slices.append(
index_b
)
if len(delete_slices) == 0:
X_clean = X
else:
X_clean = np.delete(X, np.hstack(delete_slices), axis=0)
df_clean = pd.DataFrame(X_clean, columns=columns)
# for col in columns:
# # df_clean[col] = df_clean[col].astype(dtypes[col])
df_clean = df_clean.astype(dtype=dtypes)
return df_clean
def _extract_md(mat, **kwargs):
return None
def _extract_emd(mat, **kwargs):
emd = mat['EMD'].squeeze()
# Labview epoch
epoch = datetime.datetime(1904, 1, 1)
def convert_epoch(x):
timestamp = epoch + datetime.timedelta(seconds=x.astype(float))
return timestamp
dfl = []
# loop over frequencies
for f_id in range(0, emd.size):
# print('Frequency: ', emd[f_id]['fm'])
fdata = emd[f_id]
# fdata_md = md[f_id]
timestamp = np.atleast_2d(
[convert_epoch(x) for x in fdata['Time'].squeeze()]
).T
# import IPython
# IPython.embed()
df = pd.DataFrame(
np.hstack((
timestamp,
fdata['ni'],
fdata['nu'][:, np.newaxis],
fdata['Z3'],
fdata['Is3'],
fdata['Il3'],
fdata['Zg3'],
)),
)
df.columns = (
'datetime',
'a',
'b',
'p',
'Z1',
'Z2',
'Z3',
'Is1',
'Is2',
'Is3',
'Il1',
'Il2',
'Il3',
'Zg1',
'Zg2',
'Zg3',
)
df['frequency'] = np.ones(df.shape[0]) * fdata['fm']
# cast to correct type
df['datetime'] = pd.to_datetime(df['datetime'])
df['a'] = df['a'].astype(int)
df['b'] = df['b'].astype(int)
df['p'] = df['p'].astype(int)
df['Z1'] = df['Z1'].astype(complex)
df['Z2'] = df['Z2'].astype(complex)
df['Z3'] = df['Z3'].astype(complex)
df['Zg1'] = df['Zg1'].astype(complex)
df['Zg2'] = df['Zg2'].astype(complex)
df['Zg3'] = df['Zg3'].astype(complex)
df['Is1'] = df['Is1'].astype(complex)
df['Is2'] = df['Is2'].astype(complex)
df['Is3'] = df['Is3'].astype(complex)
df['Il1'] = df['Il1'].astype(complex)
df['Il2'] = df['Il2'].astype(complex)
df['Il3'] = df['Il3'].astype(complex)
dfl.append(df)
if len(dfl) == 0:
return None
df = pd.concat(dfl)
# average swapped current injections here!
df = _average_swapped_current_injections(df)
# sort current injections
condition = df['a'] > df['b']
df.loc[condition, ['a', 'b']] = df.loc[condition, ['b', 'a']].values
# for some reason we lose the integer casting of a and b here
df['a'] = df['a'].astype(int)
df['b'] = df['b'].astype(int)
# change sign because we changed A and B
df.loc[condition, ['Z1', 'Z2', 'Z3']] *= -1
# average of Z1-Z3
df['Zt'] = np.mean(df[['Z1', 'Z2', 'Z3']].values, axis=1)
# we need to keep the sign of the real part
sign_re = np.real(df['Zt']) / np.abs(np.real(df['Zt']))
df['r'] = np.abs(df['Zt']) * sign_re
# df['Zt_std'] = np.std(df[['Z1', 'Z2', 'Z3']].values, axis=1)
df['Is'] = np.mean(df[['Is1', 'Is2', 'Is3']].values, axis=1)
df['Il'] = np.mean(df[['Il1', 'Il2', 'Il3']].values, axis=1)
df['Zg'] = np.mean(df[['Zg1', 'Zg2', 'Zg3']].values, axis=1)
# "standard" injected current, in [mA]
df['Iab'] = np.abs(df['Is']) * 1e3
df['Iab'] = df['Iab'].astype(float)
# df['Is_std'] = np.std(df[['Is1', 'Is2', 'Is3']].values, axis=1)
# take absolute value and convert to mA
df['Ileakage'] = np.abs(df['Il']) * 1e3
df['Ileakage'] = df['Ileakage'].astype(float)
return df
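# Hedged sketch (illustration only, not part of reda): the swapped-injection
# pairing performed by _average_swapped_current_injections, reduced to plain
# tuples of (a, b) current electrodes. Each (a, b)/(b, a) couple is reported
# once; injections without a reversed twin are collected in `alone`.
def _pair_swapped(injections):
    pairs, alone = [], []
    for ab in injections:
        swap = (ab[1], ab[0])
        if swap in injections:
            # keep only one orientation of each swapped couple
            if (ab, swap) not in pairs and (swap, ab) not in pairs:
                pairs.append((ab, swap))
        else:
            alone.append(ab)
    return pairs, alone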
| 29.309524 | 78 | 0.497969 |
6f0bc04faea7a4fb72d92db41a817e5d93733504 | 1,766 | py | Python | setup.py | dnorthcote/rfsoc_studio | 76b9eeb194d34333d95cad25d8cae918938cbc16 | ["BSD-3-Clause"] | null | null | null | setup.py | dnorthcote/rfsoc_studio | 76b9eeb194d34333d95cad25d8cae918938cbc16 | ["BSD-3-Clause"] | null | null | null | setup.py | dnorthcote/rfsoc_studio | 76b9eeb194d34333d95cad25d8cae918938cbc16 | ["BSD-3-Clause"] | null | null | null |
import os
import shutil
from distutils.dir_util import copy_tree
from setuptools import find_packages, setup
# global variables
board = os.environ['BOARD']
nb_dir = os.environ['PYNQ_JUPYTER_NOTEBOOKS']
package_name = 'rfsoc_studio'
pip_name = 'rfsoc-studio'
data_files = []
# check whether board is supported
def check_env():
if board not in ['RFSoC2x2', 'ZCU111']:
raise ValueError("Board {} is not supported.".format(board))
# copy notebooks into jupyter home
def copy_notebooks():
src_nb_dir = os.path.join('notebooks')
dst_nb_dir = os.path.join(nb_dir, pip_name)
if os.path.exists(dst_nb_dir):
shutil.rmtree(dst_nb_dir)
copy_tree(src_nb_dir, dst_nb_dir)
check_env()
copy_notebooks()
setup(
name=package_name,
version='0.2.0',
install_requires=[
'plotly==4.5.2',
'pynq==2.6',
'rfsoc-sam @ https://github.com/strath-sdr/rfsoc_sam/archive/v0.3.1.tar.gz',
'rfsoc-freqplan @ https://github.com/strath-sdr/rfsoc_frequency_planner/archive/v0.1.1.tar.gz',
'rfsoc-ofdm @ https://github.com/strath-sdr/rfsoc_ofdm/archive/v0.2.2.tar.gz',
'rfsoc-qpsk @ https://github.com/strath-sdr/rfsoc_qpsk/archive/v1.3.1.tar.gz',
'rfsoc-radio @ https://github.com/strath-sdr/rfsoc_radio/archive/v0.1.2.tar.gz',
'pystrath-dsp @ https://github.com/strath-sdr/dsp_notebooks/archive/v0.1.1.tar.gz',
'pystrath-rfsoc @ https://github.com/strath-sdr/rfsoc_notebooks/archive/v0.1.1.tar.gz',
'pynq-agc @ https://github.com/strath-sdr/pynq_agc/releases/download/v0.3.1/pynq_agc.tar.gz'
],
author="David Northcote",
packages=find_packages(),
package_data={
'': data_files,
},
description="The Strathclyde RFSoC Studio for PYNQ.")
| 33.961538 | 103 | 0.68573 |
83529d17decaf0e3fe0594152f12fcb149f8a577 | 35,757 | py | Python | pygments/lexers/shell.py | Liam-Stevens/pygments | 49f1a5c6d5734734edca3766f88a698c4172ff59 | ["BSD-2-Clause"] | null | null | null | pygments/lexers/shell.py | Liam-Stevens/pygments | 49f1a5c6d5734734edca3766f88a698c4172ff59 | ["BSD-2-Clause"] | null | null | null | pygments/lexers/shell.py | Liam-Stevens/pygments | 49f1a5c6d5734734edca3766f88a698c4172ff59 | ["BSD-2-Clause"] | null | null | null |
"""
pygments.lexers.shell
~~~~~~~~~~~~~~~~~~~~~
Lexers for various shells.
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import Lexer, RegexLexer, do_insertions, bygroups, \
include, default, this, using, words
from pygments.token import Punctuation, \
Text, Comment, Operator, Keyword, Name, String, Number, Generic
from pygments.util import shebang_matches
__all__ = ['BashLexer', 'BashSessionLexer', 'TcshLexer', 'BatchLexer',
'SlurmBashLexer', 'MSDOSSessionLexer', 'PowerShellLexer',
'PowerShellSessionLexer', 'TcshSessionLexer', 'FishShellLexer',
'ExeclineLexer']
line_re = re.compile('.*?\n')
class BashLexer(RegexLexer):
"""
Lexer for (ba|k|z|)sh shell scripts.
.. versionadded:: 0.6
"""
name = 'Bash'
aliases = ['bash', 'sh', 'ksh', 'zsh', 'shell']
filenames = ['*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass',
'*.exheres-0', '*.exlib', '*.zsh',
'.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc',
'PKGBUILD']
mimetypes = ['application/x-sh', 'application/x-shellscript', 'text/x-shellscript']
tokens = {
'root': [
include('basic'),
(r'`', String.Backtick, 'backticks'),
include('data'),
include('interp'),
],
'interp': [
(r'\$\(\(', Keyword, 'math'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', String.Interpol, 'curly'),
(r'\$[a-zA-Z_]\w*', Name.Variable), # user variable
(r'\$(?:\d+|[#$?!_*@-])', Name.Variable), # builtin
(r'\$', Text),
],
'basic': [
(r'\b(if|fi|else|while|in|do|done|for|then|return|function|case|'
r'select|continue|until|esac|elif)(\s*)\b',
bygroups(Keyword, Text)),
(r'\b(alias|bg|bind|break|builtin|caller|cd|command|compgen|'
r'complete|declare|dirs|disown|echo|enable|eval|exec|exit|'
r'export|false|fc|fg|getopts|hash|help|history|jobs|kill|let|'
r'local|logout|popd|printf|pushd|pwd|read|readonly|set|shift|'
r'shopt|source|suspend|test|time|times|trap|true|type|typeset|'
r'ulimit|umask|unalias|unset|wait)(?=[\s)`])',
Name.Builtin),
(r'\A#!.+\n', Comment.Hashbang),
(r'#.*\n', Comment.Single),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(\+?=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]', Operator),
(r'<<<', Operator), # here-string
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r'&&|\|\|', Operator),
],
'data': [
(r'(?s)\$?"(\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r"(?s)'.*?'", String.Single),
(r';', Punctuation),
(r'&', Punctuation),
(r'\|', Punctuation),
(r'\s+', Text),
(r'\d+\b', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
(r'<', Text),
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'curly': [
(r'\}', String.Interpol, '#pop'),
(r':-', Keyword),
(r'\w+', Name.Variable),
(r'[^}:"\'`$\\]+', Punctuation),
(r':', Punctuation),
include('root'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'math': [
(r'\)\)', Keyword, '#pop'),
(r'[-+*/%^|&]|\*\*|\|\|', Operator),
(r'\d+#\d+', Number),
(r'\d+#(?! )', Number),
(r'\d+', Number),
include('root'),
],
'backticks': [
(r'`', String.Backtick, '#pop'),
include('root'),
],
}
def analyse_text(text):
if shebang_matches(text, r'(ba|z|)sh'):
return 1
if text.startswith('$ '):
return 0.2
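# Hedged sketch (illustration only, not part of Pygments): the scoring done
# by BashLexer.analyse_text above, restated without pygments.util -- a
# shebang naming an sh-like shell scores 1.0, a leading "$ " prompt 0.2.
def _demo_bash_score(text):
    first_line = text.split('\n', 1)[0]
    if re.match(r'#!.*\b(?:ba|z)?sh\b', first_line):
        return 1.0
    if text.startswith('$ '):
        return 0.2
    return 0.0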
class SlurmBashLexer(BashLexer):
"""
Lexer for (ba|k|z|)sh Slurm scripts.
.. versionadded:: 2.4
"""
name = 'Slurm'
aliases = ['slurm', 'sbatch']
filenames = ['*.sl']
mimetypes = []
EXTRA_KEYWORDS = {'srun'}
def get_tokens_unprocessed(self, text):
for index, token, value in BashLexer.get_tokens_unprocessed(self, text):
if token is Text and value in self.EXTRA_KEYWORDS:
yield index, Name.Builtin, value
elif token is Comment.Single and 'SBATCH' in value:
yield index, Keyword.Pseudo, value
else:
yield index, token, value
class ShellSessionBaseLexer(Lexer):
"""
Base lexer for shell sessions.
.. versionadded:: 2.1
"""
_venv = re.compile(r'^(\([^)]*\))(\s*)')
def get_tokens_unprocessed(self, text):
innerlexer = self._innerLexerCls(**self.options)
pos = 0
curcode = ''
insertions = []
backslash_continuation = False
for match in line_re.finditer(text):
line = match.group()
venv_match = self._venv.match(line)
if venv_match:
venv = venv_match.group(1)
venv_whitespace = venv_match.group(2)
insertions.append((len(curcode),
[(0, Generic.Prompt.VirtualEnv, venv)]))
if venv_whitespace:
insertions.append((len(curcode),
[(0, Text, venv_whitespace)]))
line = line[venv_match.end():]
m = self._ps1rgx.match(line)
if m:
# To support output lexers (say diff output), the output
# needs to be broken by prompts whenever the output lexer
# changes.
if not insertions:
pos = match.start()
insertions.append((len(curcode),
[(0, Generic.Prompt, m.group(1))]))
curcode += m.group(2)
backslash_continuation = curcode.endswith('\\\n')
elif line.startswith(self._ps2) and backslash_continuation:
insertions.append((len(curcode),
[(0, Generic.Prompt, line[:len(self._ps2)])]))
curcode += line[len(self._ps2):]
backslash_continuation = curcode.endswith('\\\n')
else:
if insertions:
toks = innerlexer.get_tokens_unprocessed(curcode)
for i, t, v in do_insertions(insertions, toks):
yield pos+i, t, v
yield match.start(), Generic.Output, line
insertions = []
curcode = ''
if insertions:
for i, t, v in do_insertions(insertions,
innerlexer.get_tokens_unprocessed(curcode)):
yield pos+i, t, v
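# Hedged sketch (illustration only, not part of Pygments): exercising the
# virtualenv-prefix regex used by ShellSessionBaseLexer above on a single
# session line, returning the "(venv)" prefix (if any) and the remainder.
_demo_venv = re.compile(r'^(\([^)]*\))(\s*)')
def _split_venv_prefix(line):
    m = _demo_venv.match(line)
    if m:
        # prompt prefix found: split it off, dropping trailing whitespace
        return m.group(1), line[m.end():]
    return None, line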
class BashSessionLexer(ShellSessionBaseLexer):
"""
Lexer for Bash shell sessions, i.e. command lines, including a
prompt, interspersed with output.
.. versionadded:: 1.1
"""
name = 'Bash Session'
aliases = ['console', 'shell-session']
filenames = ['*.sh-session', '*.shell-session']
mimetypes = ['application/x-shell-session', 'application/x-sh-session']
_innerLexerCls = BashLexer
_ps1rgx = re.compile(
r'^((?:(?:\[.*?\])|(?:\(\S+\))?(?:| |sh\S*?|\w+\S+[@:]\S+(?:\s+\S+)' \
r'?|\[\S+[@:][^\n]+\].+))\s*[$#%]\s*)(.*\n?)')
_ps2 = '> '
class BatchLexer(RegexLexer):
"""
Lexer for the DOS/Windows Batch file format.
.. versionadded:: 0.7
"""
name = 'Batchfile'
aliases = ['bat', 'batch', 'dosbatch', 'winbatch']
filenames = ['*.bat', '*.cmd']
mimetypes = ['application/x-dos-batch']
flags = re.MULTILINE | re.IGNORECASE
_nl = r'\n\x1a'
_punct = r'&<>|'
_ws = r'\t\v\f\r ,;=\xa0'
_nlws = r'\s\x1a\xa0,;='
_space = r'(?:(?:(?:\^[%s])?[%s])+)' % (_nl, _ws)
_keyword_terminator = (r'(?=(?:\^[%s]?)?[%s+./:[\\\]]|[%s%s(])' %
(_nl, _ws, _nl, _punct))
_token_terminator = r'(?=\^?[%s]|[%s%s])' % (_ws, _punct, _nl)
_start_label = r'((?:(?<=^[^:])|^[^:]?)[%s]*)(:)' % _ws
_label = r'(?:(?:[^%s%s+:^]|\^[%s]?[\w\W])*)' % (_nlws, _punct, _nl)
_label_compound = r'(?:(?:[^%s%s+:^)]|\^[%s]?[^)])*)' % (_nlws, _punct, _nl)
_number = r'(?:-?(?:0[0-7]+|0x[\da-f]+|\d+)%s)' % _token_terminator
_opword = r'(?:equ|geq|gtr|leq|lss|neq)'
_string = r'(?:"[^%s"]*(?:"|(?=[%s])))' % (_nl, _nl)
_variable = (r'(?:(?:%%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|'
r'[^%%:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%%%s^]|'
r'\^[^%%%s])[^=%s]*=(?:[^%%%s^]|\^[^%%%s])*)?)?%%))|'
r'(?:\^?![^!:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:'
r'[^!%s^]|\^[^!%s])[^=%s]*=(?:[^!%s^]|\^[^!%s])*)?)?\^?!))' %
(_nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl))
_core_token = r'(?:(?:(?:\^[%s]?)?[^"%s%s])+)' % (_nl, _nlws, _punct)
_core_token_compound = r'(?:(?:(?:\^[%s]?)?[^"%s%s)])+)' % (_nl, _nlws, _punct)
_token = r'(?:[%s]+|%s)' % (_punct, _core_token)
_token_compound = r'(?:[%s]+|%s)' % (_punct, _core_token_compound)
_stoken = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
(_punct, _string, _variable, _core_token))
def _make_begin_state(compound, _core_token=_core_token,
_core_token_compound=_core_token_compound,
_keyword_terminator=_keyword_terminator,
_nl=_nl, _punct=_punct, _string=_string,
_space=_space, _start_label=_start_label,
_stoken=_stoken, _token_terminator=_token_terminator,
_variable=_variable, _ws=_ws):
rest = '(?:%s|%s|[^"%%%s%s%s])*' % (_string, _variable, _nl, _punct,
')' if compound else '')
rest_of_line = r'(?:(?:[^%s^]|\^[%s]?[\w\W])*)' % (_nl, _nl)
rest_of_line_compound = r'(?:(?:[^%s^)]|\^[%s]?[^)])*)' % (_nl, _nl)
set_space = r'((?:(?:\^[%s]?)?[^\S\n])*)' % _nl
suffix = ''
if compound:
_keyword_terminator = r'(?:(?=\))|%s)' % _keyword_terminator
_token_terminator = r'(?:(?=\))|%s)' % _token_terminator
suffix = '/compound'
return [
((r'\)', Punctuation, '#pop') if compound else
(r'\)((?=\()|%s)%s' % (_token_terminator, rest_of_line),
Comment.Single)),
(r'(?=%s)' % _start_label, Text, 'follow%s' % suffix),
(_space, using(this, state='text')),
include('redirect%s' % suffix),
(r'[%s]+' % _nl, Text),
(r'\(', Punctuation, 'root/compound'),
(r'@+', Punctuation),
(r'((?:for|if|rem)(?:(?=(?:\^[%s]?)?/)|(?:(?!\^)|'
r'(?<=m))(?:(?=\()|%s)))(%s?%s?(?:\^[%s]?)?/(?:\^[%s]?)?\?)' %
(_nl, _token_terminator, _space,
_core_token_compound if compound else _core_token, _nl, _nl),
bygroups(Keyword, using(this, state='text')),
'follow%s' % suffix),
(r'(goto%s)(%s(?:\^[%s]?)?/(?:\^[%s]?)?\?%s)' %
(_keyword_terminator, rest, _nl, _nl, rest),
bygroups(Keyword, using(this, state='text')),
'follow%s' % suffix),
(words(('assoc', 'break', 'cd', 'chdir', 'cls', 'color', 'copy',
'date', 'del', 'dir', 'dpath', 'echo', 'endlocal', 'erase',
'exit', 'ftype', 'keys', 'md', 'mkdir', 'mklink', 'move',
'path', 'pause', 'popd', 'prompt', 'pushd', 'rd', 'ren',
'rename', 'rmdir', 'setlocal', 'shift', 'start', 'time',
'title', 'type', 'ver', 'verify', 'vol'),
suffix=_keyword_terminator), Keyword, 'follow%s' % suffix),
(r'(call)(%s?)(:)' % _space,
bygroups(Keyword, using(this, state='text'), Punctuation),
'call%s' % suffix),
(r'call%s' % _keyword_terminator, Keyword),
(r'(for%s(?!\^))(%s)(/f%s)' %
(_token_terminator, _space, _token_terminator),
bygroups(Keyword, using(this, state='text'), Keyword),
('for/f', 'for')),
(r'(for%s(?!\^))(%s)(/l%s)' %
(_token_terminator, _space, _token_terminator),
bygroups(Keyword, using(this, state='text'), Keyword),
('for/l', 'for')),
(r'for%s(?!\^)' % _token_terminator, Keyword, ('for2', 'for')),
(r'(goto%s)(%s?)(:?)' % (_keyword_terminator, _space),
bygroups(Keyword, using(this, state='text'), Punctuation),
'label%s' % suffix),
(r'(if(?:(?=\()|%s)(?!\^))(%s?)((?:/i%s)?)(%s?)((?:not%s)?)(%s?)' %
(_token_terminator, _space, _token_terminator, _space,
_token_terminator, _space),
bygroups(Keyword, using(this, state='text'), Keyword,
using(this, state='text'), Keyword,
using(this, state='text')), ('(?', 'if')),
(r'rem(((?=\()|%s)%s?%s?.*|%s%s)' %
(_token_terminator, _space, _stoken, _keyword_terminator,
rest_of_line_compound if compound else rest_of_line),
Comment.Single, 'follow%s' % suffix),
(r'(set%s)%s(/a)' % (_keyword_terminator, set_space),
bygroups(Keyword, using(this, state='text'), Keyword),
'arithmetic%s' % suffix),
(r'(set%s)%s((?:/p)?)%s((?:(?:(?:\^[%s]?)?[^"%s%s^=%s]|'
r'\^[%s]?[^"=])+)?)((?:(?:\^[%s]?)?=)?)' %
(_keyword_terminator, set_space, set_space, _nl, _nl, _punct,
')' if compound else '', _nl, _nl),
bygroups(Keyword, using(this, state='text'), Keyword,
using(this, state='text'), using(this, state='variable'),
Punctuation),
'follow%s' % suffix),
default('follow%s' % suffix)
]
def _make_follow_state(compound, _label=_label,
_label_compound=_label_compound, _nl=_nl,
_space=_space, _start_label=_start_label,
_token=_token, _token_compound=_token_compound,
_ws=_ws):
suffix = '/compound' if compound else ''
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state += [
(r'%s([%s]*)(%s)(.*)' %
(_start_label, _ws, _label_compound if compound else _label),
bygroups(Text, Punctuation, Text, Name.Label, Comment.Single)),
include('redirect%s' % suffix),
(r'(?=[%s])' % _nl, Text, '#pop'),
(r'\|\|?|&&?', Punctuation, '#pop'),
include('text')
]
return state
def _make_arithmetic_state(compound, _nl=_nl, _punct=_punct,
_string=_string, _variable=_variable,
_ws=_ws, _nlws=_nlws):
op = r'=+\-*/!~'
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state += [
(r'0[0-7]+', Number.Oct),
(r'0x[\da-f]+', Number.Hex),
(r'\d+', Number.Integer),
(r'[(),]+', Punctuation),
(r'([%s]|%%|\^\^)+' % op, Operator),
(r'(%s|%s|(\^[%s]?)?[^()%s%%\^"%s%s]|\^[%s]?%s)+' %
(_string, _variable, _nl, op, _nlws, _punct, _nlws,
r'[^)]' if compound else r'[\w\W]'),
using(this, state='variable')),
(r'(?=[\x00|&])', Text, '#pop'),
include('follow')
]
return state
def _make_call_state(compound, _label=_label,
_label_compound=_label_compound):
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state.append((r'(:?)(%s)' % (_label_compound if compound else _label),
bygroups(Punctuation, Name.Label), '#pop'))
return state
def _make_label_state(compound, _label=_label,
_label_compound=_label_compound, _nl=_nl,
_punct=_punct, _string=_string, _variable=_variable):
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state.append((r'(%s?)((?:%s|%s|\^[%s]?%s|[^"%%^%s%s%s])*)' %
(_label_compound if compound else _label, _string,
_variable, _nl, r'[^)]' if compound else r'[\w\W]', _nl,
_punct, r')' if compound else ''),
bygroups(Name.Label, Comment.Single), '#pop'))
return state
def _make_redirect_state(compound,
_core_token_compound=_core_token_compound,
_nl=_nl, _punct=_punct, _stoken=_stoken,
_string=_string, _space=_space,
_variable=_variable, _nlws=_nlws):
stoken_compound = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
(_punct, _string, _variable, _core_token_compound))
return [
(r'((?:(?<=[%s])\d)?)(>>?&|<&)([%s]*)(\d)' %
(_nlws, _nlws),
bygroups(Number.Integer, Punctuation, Text, Number.Integer)),
(r'((?:(?<=[%s])(?<!\^[%s])\d)?)(>>?|<)(%s?%s)' %
(_nlws, _nl, _space, stoken_compound if compound else _stoken),
bygroups(Number.Integer, Punctuation, using(this, state='text')))
]
tokens = {
'root': _make_begin_state(False),
'follow': _make_follow_state(False),
'arithmetic': _make_arithmetic_state(False),
'call': _make_call_state(False),
'label': _make_label_state(False),
'redirect': _make_redirect_state(False),
'root/compound': _make_begin_state(True),
'follow/compound': _make_follow_state(True),
'arithmetic/compound': _make_arithmetic_state(True),
'call/compound': _make_call_state(True),
'label/compound': _make_label_state(True),
'redirect/compound': _make_redirect_state(True),
'variable-or-escape': [
(_variable, Name.Variable),
(r'%%%%|\^[%s]?(\^!|[\w\W])' % _nl, String.Escape)
],
'string': [
(r'"', String.Double, '#pop'),
(_variable, Name.Variable),
(r'\^!|%%', String.Escape),
(r'[^"%%^%s]+|[%%^]' % _nl, String.Double),
default('#pop')
],
'sqstring': [
include('variable-or-escape'),
(r'[^%]+|%', String.Single)
],
'bqstring': [
include('variable-or-escape'),
(r'[^%]+|%', String.Backtick)
],
'text': [
(r'"', String.Double, 'string'),
include('variable-or-escape'),
(r'[^"%%^%s%s\d)]+|.' % (_nlws, _punct), Text)
],
'variable': [
(r'"', String.Double, 'string'),
include('variable-or-escape'),
(r'[^"%%^%s]+|.' % _nl, Name.Variable)
],
'for': [
(r'(%s)(in)(%s)(\()' % (_space, _space),
bygroups(using(this, state='text'), Keyword,
using(this, state='text'), Punctuation), '#pop'),
include('follow')
],
'for2': [
(r'\)', Punctuation),
(r'(%s)(do%s)' % (_space, _token_terminator),
bygroups(using(this, state='text'), Keyword), '#pop'),
(r'[%s]+' % _nl, Text),
include('follow')
],
'for/f': [
(r'(")((?:%s|[^"])*?")([%s]*)(\))' % (_variable, _nlws),
bygroups(String.Double, using(this, state='string'), Text,
Punctuation)),
(r'"', String.Double, ('#pop', 'for2', 'string')),
(r"('(?:%%%%|%s|[\w\W])*?')([%s]*)(\))" % (_variable, _nlws),
bygroups(using(this, state='sqstring'), Text, Punctuation)),
(r'(`(?:%%%%|%s|[\w\W])*?`)([%s]*)(\))' % (_variable, _nlws),
bygroups(using(this, state='bqstring'), Text, Punctuation)),
include('for2')
],
'for/l': [
(r'-?\d+', Number.Integer),
include('for2')
],
'if': [
(r'((?:cmdextversion|errorlevel)%s)(%s)(\d+)' %
(_token_terminator, _space),
bygroups(Keyword, using(this, state='text'),
Number.Integer), '#pop'),
(r'(defined%s)(%s)(%s)' % (_token_terminator, _space, _stoken),
bygroups(Keyword, using(this, state='text'),
using(this, state='variable')), '#pop'),
(r'(exist%s)(%s%s)' % (_token_terminator, _space, _stoken),
bygroups(Keyword, using(this, state='text')), '#pop'),
(r'(%s%s)(%s)(%s%s)' % (_number, _space, _opword, _space, _number),
bygroups(using(this, state='arithmetic'), Operator.Word,
using(this, state='arithmetic')), '#pop'),
(_stoken, using(this, state='text'), ('#pop', 'if2')),
],
'if2': [
(r'(%s?)(==)(%s?%s)' % (_space, _space, _stoken),
bygroups(using(this, state='text'), Operator,
using(this, state='text')), '#pop'),
(r'(%s)(%s)(%s%s)' % (_space, _opword, _space, _stoken),
bygroups(using(this, state='text'), Operator.Word,
using(this, state='text')), '#pop')
],
'(?': [
(_space, using(this, state='text')),
(r'\(', Punctuation, ('#pop', 'else?', 'root/compound')),
default('#pop')
],
'else?': [
(_space, using(this, state='text')),
(r'else%s' % _token_terminator, Keyword, '#pop'),
default('#pop')
]
}
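The redirect rules built by `_make_redirect_state` above hinge on grouped alternations such as `>>?&`. As a rough stand-alone sketch (a deliberately simplified pattern, not the lexer's exact `_nlws`-aware regex), the grouping splits a redirection like `2>>&1` into descriptor, operator, spacing, and target:

```python
import re

# Simplified stand-in for the lexer's redirect rule: a source file
# descriptor, a duplication operator ('>&' or '>>&'), optional spacing,
# and a target descriptor.  The real rule also guards against escapes
# and the _nlws whitespace class, which are omitted here.
redirect = re.compile(r'(\d)(>>?&)(\s*)(\d)')

m = redirect.match('2>>&1')
groups = m.groups()  # (descriptor, operator, spacing, target)
```

Each captured group corresponds to one of the `bygroups(...)` token types in the rule above.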
class MSDOSSessionLexer(ShellSessionBaseLexer):
"""
Lexer for MS DOS shell sessions, i.e. command lines, including a
prompt, interspersed with output.
.. versionadded:: 2.1
"""
name = 'MSDOS Session'
aliases = ['doscon']
filenames = []
mimetypes = []
_innerLexerCls = BatchLexer
_ps1rgx = re.compile(r'^([^>]*>)(.*\n?)')
_ps2 = 'More? '
class TcshLexer(RegexLexer):
"""
Lexer for tcsh scripts.
.. versionadded:: 0.10
"""
name = 'Tcsh'
aliases = ['tcsh', 'csh']
filenames = ['*.tcsh', '*.csh']
mimetypes = ['application/x-csh']
tokens = {
'root': [
include('basic'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', Keyword, 'curly'),
(r'`', String.Backtick, 'backticks'),
include('data'),
],
'basic': [
(r'\b(if|endif|else|while|then|foreach|case|default|'
r'continue|goto|breaksw|end|switch|endsw)\s*\b',
Keyword),
(r'\b(alias|alloc|bg|bindkey|break|builtins|bye|caller|cd|chdir|'
r'complete|dirs|echo|echotc|eval|exec|exit|fg|filetest|getxvers|'
r'glob|getspath|hashstat|history|hup|inlib|jobs|kill|'
r'limit|log|login|logout|ls-F|migrate|newgrp|nice|nohup|notify|'
r'onintr|popd|printenv|pushd|rehash|repeat|rootnode|popd|pushd|'
r'set|shift|sched|setenv|setpath|settc|setty|setxvers|shift|'
r'source|stop|suspend|source|suspend|telltc|time|'
r'umask|unalias|uncomplete|unhash|universe|unlimit|unset|unsetenv|'
r'ver|wait|warp|watchlog|where|which)\s*\b',
Name.Builtin),
(r'#.*', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]+', Operator),
(r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r';', Punctuation),
],
'data': [
(r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double),
(r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r'\s+', Text),
(r'[^=\s\[\]{}()$"\'`\\;#]+', Text),
(r'\d+(?= |\Z)', Number),
(r'\$#?(\w+|.)', Name.Variable),
],
'curly': [
(r'\}', Keyword, '#pop'),
(r':-', Keyword),
(r'\w+', Name.Variable),
(r'[^}:"\'`$]+', Punctuation),
(r':', Punctuation),
include('root'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'backticks': [
(r'`', String.Backtick, '#pop'),
include('root'),
],
}
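Several states above push and pop a state stack: `'paren'` is entered on `\(` and left via `'#pop'` on `\)`, so nested constructs nest states. A toy stdlib driver illustrating that stack discipline (this is a simplification for intuition, not Pygments' actual state-machine API):

```python
def depth_track(text):
    # Toy version of the push/pop discipline used by 'paren'-style states:
    # '(' pushes a state, ')' pops it (the '#pop' action), and we record
    # the maximum nesting depth reached.
    stack = ['root']
    max_depth = 1
    for ch in text:
        if ch == '(':
            stack.append('paren')
            max_depth = max(max_depth, len(stack))
        elif ch == ')' and len(stack) > 1:
            stack.pop()
    return max_depth, stack[-1]

result = depth_track('echo (a (b c) d)')
```

With balanced parentheses the stack unwinds back to `'root'`, mirroring how the lexer returns to its outer state after the closing delimiter.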
class TcshSessionLexer(ShellSessionBaseLexer):
"""
Lexer for Tcsh sessions, i.e. command lines, including a
prompt, interspersed with output.
.. versionadded:: 2.1
"""
name = 'Tcsh Session'
aliases = ['tcshcon']
filenames = []
mimetypes = []
_innerLexerCls = TcshLexer
_ps1rgx = re.compile(r'^([^>]+>)(.*\n?)')
_ps2 = '? '
class PowerShellLexer(RegexLexer):
"""
For Windows PowerShell code.
.. versionadded:: 1.5
"""
name = 'PowerShell'
aliases = ['powershell', 'posh', 'ps1', 'psm1']
filenames = ['*.ps1', '*.psm1']
mimetypes = ['text/x-powershell']
flags = re.DOTALL | re.IGNORECASE | re.MULTILINE
keywords = (
'while validateset validaterange validatepattern validatelength '
'validatecount until trap switch return ref process param parameter in '
'if global: function foreach for finally filter end elseif else '
'dynamicparam do default continue cmdletbinding break begin alias \\? '
'% #script #private #local #global mandatory parametersetname position '
'valuefrompipeline valuefrompipelinebypropertyname '
'valuefromremainingarguments helpmessage try catch throw').split()
operators = (
'and as band bnot bor bxor casesensitive ccontains ceq cge cgt cle '
'clike clt cmatch cne cnotcontains cnotlike cnotmatch contains '
'creplace eq exact f file ge gt icontains ieq ige igt ile ilike ilt '
'imatch ine inotcontains inotlike inotmatch ireplace is isnot le like '
'lt match ne not notcontains notlike notmatch or regex replace '
'wildcard').split()
verbs = (
'write where watch wait use update unregister unpublish unprotect '
'unlock uninstall undo unblock trace test tee take sync switch '
'suspend submit stop step start split sort skip show set send select '
'search scroll save revoke resume restore restart resolve resize '
'reset request repair rename remove register redo receive read push '
'publish protect pop ping out optimize open new move mount merge '
'measure lock limit join invoke install initialize import hide group '
'grant get format foreach find export expand exit enter enable edit '
'dismount disconnect disable deny debug cxnew copy convertto '
'convertfrom convert connect confirm compress complete compare close '
'clear checkpoint block backup assert approve aggregate add').split()
aliases_ = (
'ac asnp cat cd cfs chdir clc clear clhy cli clp cls clv cnsn '
'compare copy cp cpi cpp curl cvpa dbp del diff dir dnsn ebp echo epal '
'epcsv epsn erase etsn exsn fc fhx fl foreach ft fw gal gbp gc gci gcm '
'gcs gdr ghy gi gjb gl gm gmo gp gps gpv group gsn gsnp gsv gu gv gwmi '
'h history icm iex ihy ii ipal ipcsv ipmo ipsn irm ise iwmi iwr kill lp '
'ls man md measure mi mount move mp mv nal ndr ni nmo npssc nsn nv ogv '
'oh popd ps pushd pwd r rbp rcjb rcsn rd rdr ren ri rjb rm rmdir rmo '
'rni rnp rp rsn rsnp rujb rv rvpa rwmi sajb sal saps sasv sbp sc select '
'set shcm si sl sleep sls sort sp spjb spps spsv start sujb sv swmi tee '
'trcm type wget where wjb write').split()
commenthelp = (
'component description example externalhelp forwardhelpcategory '
'forwardhelptargetname functionality inputs link '
'notes outputs parameter remotehelprunspace role synopsis').split()
tokens = {
'root': [
# we need to count pairs of parentheses for correct highlight
# of '$(...)' blocks in strings
(r'\(', Punctuation, 'child'),
(r'\s+', Text),
(r'^(\s*#[#\s]*)(\.(?:%s))([^\n]*$)' % '|'.join(commenthelp),
bygroups(Comment, String.Doc, Comment)),
(r'#[^\n]*?$', Comment),
            (r'(&lt;|<)#', Comment.Multiline, 'multline'),
(r'@"\n', String.Heredoc, 'heredoc-double'),
(r"@'\n.*?\n'@", String.Heredoc),
# escaped syntax
(r'`[\'"$@-]', Punctuation),
(r'"', String.Double, 'string'),
(r"'([^']|'')*'", String.Single),
(r'(\$|@@|@)((global|script|private|env):)?\w+',
Name.Variable),
(r'(%s)\b' % '|'.join(keywords), Keyword),
(r'-(%s)\b' % '|'.join(operators), Operator),
(r'(%s)-[a-z_]\w*\b' % '|'.join(verbs), Name.Builtin),
(r'(%s)\s' % '|'.join(aliases_), Name.Builtin),
(r'\[[a-z_\[][\w. `,\[\]]*\]', Name.Constant), # .net [type]s
(r'-[a-z_]\w*', Name),
(r'\w+', Name),
(r'[.,;:@{}\[\]$()=+*/\\&%!~?^`|<>-]', Punctuation),
],
'child': [
(r'\)', Punctuation, '#pop'),
include('root'),
],
'multline': [
(r'[^#&.]+', Comment.Multiline),
            (r'#(&gt;|>)', Comment.Multiline, '#pop'),
(r'\.(%s)' % '|'.join(commenthelp), String.Doc),
(r'[#&.]', Comment.Multiline),
],
'string': [
(r"`[0abfnrtv'\"$`]", String.Escape),
(r'[^$`"]+', String.Double),
(r'\$\(', Punctuation, 'child'),
(r'""', String.Double),
(r'[`$]', String.Double),
(r'"', String.Double, '#pop'),
],
'heredoc-double': [
(r'\n"@', String.Heredoc, '#pop'),
(r'\$\(', Punctuation, 'child'),
(r'[^@\n]+"]', String.Heredoc),
(r".", String.Heredoc),
]
}
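The keyword, operator, and verb rules above are all generated the same way: the word list is joined with `'|'` into one alternation and anchored with `\b`. A minimal illustration of that pattern-building step, using a made-up two-element subset of the operator table rather than the full list:

```python
import re

# Hypothetical two-element subset of the operator table; the lexer joins
# the full list exactly like this: r'-(%s)\b' % '|'.join(operators)
operators = ['eq', 'ne']
op_re = re.compile(r'-(%s)\b' % '|'.join(operators), re.IGNORECASE)

match = op_re.match('-eq 5')
op = match.group(1)
no_match = op_re.match('-eqx')  # \b rejects a longer identifier
```

The trailing `\b` is what keeps `-eqx` from being tokenized as the `-eq` operator plus trailing text.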
class PowerShellSessionLexer(ShellSessionBaseLexer):
"""
Lexer for PowerShell sessions, i.e. command lines, including a
prompt, interspersed with output.
.. versionadded:: 2.1
"""
name = 'PowerShell Session'
aliases = ['ps1con']
filenames = []
mimetypes = []
_innerLexerCls = PowerShellLexer
_ps1rgx = re.compile(r'^((?:\[[^]]+\]: )?PS[^>]*> ?)(.*\n?)')
_ps2 = '>> '
class FishShellLexer(RegexLexer):
"""
Lexer for Fish shell scripts.
.. versionadded:: 2.1
"""
name = 'Fish'
aliases = ['fish', 'fishshell']
filenames = ['*.fish', '*.load']
mimetypes = ['application/x-fish']
tokens = {
'root': [
include('basic'),
include('data'),
include('interp'),
],
'interp': [
(r'\$\(\(', Keyword, 'math'),
(r'\(', Keyword, 'paren'),
(r'\$#?(\w+|.)', Name.Variable),
],
'basic': [
(r'\b(begin|end|if|else|while|break|for|in|return|function|block|'
r'case|continue|switch|not|and|or|set|echo|exit|pwd|true|false|'
r'cd|count|test)(\s*)\b',
bygroups(Keyword, Text)),
(r'\b(alias|bg|bind|breakpoint|builtin|command|commandline|'
r'complete|contains|dirh|dirs|emit|eval|exec|fg|fish|fish_config|'
r'fish_indent|fish_pager|fish_prompt|fish_right_prompt|'
r'fish_update_completions|fishd|funced|funcsave|functions|help|'
r'history|isatty|jobs|math|mimedb|nextd|open|popd|prevd|psub|'
r'pushd|random|read|set_color|source|status|trap|type|ulimit|'
r'umask|vared|fc|getopts|hash|kill|printf|time|wait)\s*\b(?!\.)',
Name.Builtin),
(r'#.*\n', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]()=]', Operator),
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
],
'data': [
(r'(?s)\$?"(\\\\|\\[0-7]+|\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r"(?s)'.*?'", String.Single),
(r';', Punctuation),
(r'&|\||\^|<|>', Operator),
(r'\s+', Text),
(r'\d+(?= |\Z)', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'math': [
(r'\)\)', Keyword, '#pop'),
(r'[-+*/%^|&]|\*\*|\|\|', Operator),
(r'\d+#\d+', Number),
(r'\d+#(?! )', Number),
(r'\d+', Number),
include('root'),
],
}
class ExeclineLexer(RegexLexer):
"""
Lexer for Laurent Bercot's execline language
(https://skarnet.org/software/execline).
.. versionadded:: 2.7
"""
name = 'execline'
aliases = ['execline']
filenames = ['*.exec']
tokens = {
'root': [
include('basic'),
include('data'),
include('interp')
],
'interp': [
(r'\$\{', String.Interpol, 'curly'),
(r'\$[\w@#]+', Name.Variable), # user variable
(r'\$', Text),
],
'basic': [
(r'\b(background|backtick|cd|define|dollarat|elgetopt|'
r'elgetpositionals|elglob|emptyenv|envfile|exec|execlineb|'
r'exit|export|fdblock|fdclose|fdmove|fdreserve|fdswap|'
r'forbacktickx|foreground|forstdin|forx|getcwd|getpid|heredoc|'
r'homeof|if|ifelse|ifte|ifthenelse|importas|loopwhilex|'
r'multidefine|multisubstitute|pipeline|piperw|posix-cd|'
r'redirfd|runblock|shift|trap|tryexec|umask|unexport|wait|'
r'withstdinas)\b', Name.Builtin),
(r'\A#!.+\n', Comment.Hashbang),
(r'#.*\n', Comment.Single),
(r'[{}]', Operator)
],
'data': [
(r'(?s)"(\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r'\s+', Text),
(r'[^\s{}$"\\]+', Text)
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'curly': [
(r'\}', String.Interpol, '#pop'),
(r'[\w#@]+', Name.Variable),
include('root')
]
}
def analyse_text(text):
if shebang_matches(text, r'execlineb'):
return 1
| 39.293407 | 87 | 0.470845 |
c95b70164433c47f43340eb21c6d916cf1d00737 | 523 | py | Python | textClassifier/scoring_methods.py | vikash06131721/autoMltext | 96f41161947d78c663e7f6b4ff452fc5bf2462e8 | ["MIT"] | null | null | null | textClassifier/scoring_methods.py | vikash06131721/autoMltext | 96f41161947d78c663e7f6b4ff452fc5bf2462e8 | ["MIT"] | null | null | null | textClassifier/scoring_methods.py | vikash06131721/autoMltext | 96f41161947d78c663e7f6b4ff452fc5bf2462e8 | ["MIT"] | null | null | null | from sklearn.metrics import f1_score, \
precision_score, recall_score, roc_auc_score, \
accuracy_score
import warnings
warnings.filterwarnings("ignore")
def scorer_f1(y_true,y_pred):
return f1_score(y_true,y_pred)
def scorer_precision(y_true,y_pred):
return precision_score(y_true,y_pred)
def scorer_recall(y_true,y_pred):
return recall_score(y_true,y_pred)
def scorer_accuracy(y_true,y_pred):
return accuracy_score(y_true,y_pred)
def scorer_roc(y_true,y_pred):
return roc_auc_score(y_true,y_pred) | 26.15 | 47 | 0.799235 |
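The wrappers above simply forward to scikit-learn's metric functions. For reference, here is a hand-rolled sketch of the quantities they return in the binary case (positive label = 1), computed from the confusion counts — an illustration of the definitions, not a replacement for the sklearn implementations:

```python
def prf1(y_true, y_pred):
    # Confusion counts for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = prf1([1, 1, 0, 0], [1, 0, 1, 0])  # tp=1, fp=1, fn=1
```

For that toy input, precision, recall, and F1 all come out to 0.5, matching what the corresponding sklearn scorers would report.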
50e1511b561d67d134805d2bf28e2b5b30a67585 | 968 | py | Python | tests/test_multipart.py | ambrozic/http3 | 5442006a41f94a3e41186910d7a6e8546adf0f89 | ["BSD-3-Clause"] | null | null | null | tests/test_multipart.py | ambrozic/http3 | 5442006a41f94a3e41186910d7a6e8546adf0f89 | ["BSD-3-Clause"] | null | null | null | tests/test_multipart.py | ambrozic/http3 | 5442006a41f94a3e41186910d7a6e8546adf0f89 | ["BSD-3-Clause"] | null | null | null | import cgi
import io
import pytest
from http3 import (
CertTypes,
Client,
Dispatcher,
Request,
Response,
TimeoutTypes,
VerifyTypes,
)
class MockDispatch(Dispatcher):
def send(
self,
request: Request,
verify: VerifyTypes = None,
cert: CertTypes = None,
timeout: TimeoutTypes = None,
) -> Response:
return Response(200, content=request.read())
def test_multipart():
client = Client(dispatch=MockDispatch())
# Test with a single-value 'data' argument, and a plain file 'files' argument.
data = {"text": "abc"}
files = {"file": io.BytesIO(b"<file content>")}
response = client.post("http://127.0.0.1:8000/", data=data, files=files)
assert response.status_code == 200
# We're using the cgi module to verify the behavior here, which is a
# bit grungy, but sufficient just for our testing purposes.
boundary = response.request.headers["Content-Type"].split("boundary=")[-1]
content_length = response.request.headers["Content-Length"]
pdict = {"boundary": boundary.encode("ascii"), "CONTENT-LENGTH": content_length}
multipart = cgi.parse_multipart(io.BytesIO(response.content), pdict)
# Note that the expected return type for text fields appears to differs from 3.6 to 3.7+
assert multipart["text"] == ["abc"] or multipart["text"] == [b"abc"]
assert multipart["file"] == [b"<file content>"]
def test_multipart_file_tuple():
client = Client(dispatch=MockDispatch())
# Test with a list of values 'data' argument, and a tuple style 'files' argument.
data = {"text": ["abc"]}
files = {"file": ("name.txt", io.BytesIO(b"<file content>"))}
response = client.post("http://127.0.0.1:8000/", data=data, files=files)
assert response.status_code == 200
# We're using the cgi module to verify the behavior here, which is a
# bit grungy, but sufficient just for our testing purposes.
boundary = response.request.headers["Content-Type"].split("boundary=")[-1]
content_length = response.request.headers["Content-Length"]
pdict = {"boundary": boundary.encode("ascii"), "CONTENT-LENGTH": content_length}
multipart = cgi.parse_multipart(io.BytesIO(response.content), pdict)
# Note that the expected return type for text fields appears to differs from 3.6 to 3.7+
assert multipart["text"] == ["abc"] or multipart["text"] == [b"abc"]
assert multipart["file"] == [b"<file content>"]
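The tests above use the `cgi` module (deprecated in Python 3.11, removed in 3.13) to re-parse the encoded request body. As a sketch of what such a `multipart/form-data` payload looks like on the wire, the same structure can be built and parsed with the stdlib `email` package instead — the boundary value and field layout here are illustrative, not what httpx/http3 actually generates:

```python
from email.parser import BytesParser
from email.policy import default

boundary = "abc123"  # illustrative; real clients generate a random boundary
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="text"\r\n'
    "\r\n"
    "abc\r\n"
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="name.txt"\r\n'
    "\r\n"
    "<file content>\r\n"
    f"--{boundary}--\r\n"
).encode()

# Prepend the Content-Type header so the email parser sees a full MIME message.
raw = (b"Content-Type: multipart/form-data; boundary=" + boundary.encode()
       + b"\r\n\r\n" + body)
msg = BytesParser(policy=default).parsebytes(raw)
parts = {
    part.get_param("name", header="content-disposition"): part.get_payload(decode=True)
    for part in msg.iter_parts()
}
```

Each part's `Content-Disposition` header carries the form field name, which is exactly what the `cgi.parse_multipart` calls above recover.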
| 36.352941 | 92 | 0.66788 |
5ed575a5954a5a1791269e7a7422eddd0c664537 | 157 | py | Python | tests/model_control/detailed/transf_Fisher/model_control_one_enabled_Fisher_MovingMedian_Seasonal_Hour_NoAR.py | shaido987/pyaf | b9afd089557bed6b90b246d3712c481ae26a1957 | ["BSD-3-Clause"] | 377 | 2016-10-13T20:52:44.000Z | 2022-03-29T18:04:14.000Z | tests/model_control/detailed/transf_Fisher/model_control_one_enabled_Fisher_MovingMedian_Seasonal_Hour_NoAR.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | ["BSD-3-Clause"] | 160 | 2016-10-13T16:11:53.000Z | 2022-03-28T04:21:34.000Z | tests/model_control/detailed/transf_Fisher/model_control_one_enabled_Fisher_MovingMedian_Seasonal_Hour_NoAR.py | ysdede/pyaf | b5541b8249d5a1cfdc01f27fdfd99b6580ed680b | ["BSD-3-Clause"] | 63 | 2017-03-09T14:51:18.000Z | 2022-03-27T20:52:57.000Z | import tests.model_control.test_ozone_custom_models_enabled as testmod
testmod.build_model( ['Fisher'] , ['MovingMedian'] , ['Seasonal_Hour'] , ['NoAR'] ); | 39.25 | 84 | 0.751592 |
ef1a01ecc7f56144db74bf9c3be7c5033f38a5c8 | 1,582 | py | Python | lldb/test/API/lang/swift/completion/TestSwiftREPLCompletion.py | WYK15/swift-Ollvm10 | ea68224ab23470963b68dfcc28b5ac769a070ea3 | ["Apache-2.0"] | 3 | 2021-06-11T17:30:05.000Z | 2022-01-29T13:46:47.000Z | lldb/test/API/lang/swift/completion/TestSwiftREPLCompletion.py | WYK15/swift-Ollvm10 | ea68224ab23470963b68dfcc28b5ac769a070ea3 | ["Apache-2.0"] | null | null | null | lldb/test/API/lang/swift/completion/TestSwiftREPLCompletion.py | WYK15/swift-Ollvm10 | ea68224ab23470963b68dfcc28b5ac769a070ea3 | ["Apache-2.0"] | null | null | null |
import lldb
from lldbsuite.test.decorators import *
from lldbsuite.test.lldbtest import *
from lldbsuite.test.lldbpexpect import PExpectTest
class SwiftCompletionTest(PExpectTest):
mydir = TestBase.compute_mydir(__file__)
# This test is failing sporadically on the bots (rdar://66291543) due to
# some cl::opt being registered multiple times.
## # PExpect uses many timeouts internally and doesn't play well
## # under ASAN on a loaded machine..
## @skipIfAsan
## @skipUnlessDarwin
@skipIf
def test_basic_completion(self):
self.launch(extra_args=["--repl"], executable=None, dimensions=(100,500))
# Wait on the first prompt
self.child.expect_exact("1>")
# Press tab a few times which should do nothing.
# Note that we don't get any indentation whitespace as
# pexpect is not recognized as a interactive terminal by pexpect it seems.
self.child.send("\t\t\t")
# Try completing something that only has one result "fun" -> "func".
self.child.send("fun\t")
self.child.expect_exact("func")
self.child.sendline("")
# Try completing something that has multiple completions.
self.child.send("Hash\t")
self.child.expect_exact("Available completions:")
self.child.expect_exact("Hashable")
self.child.expect_exact("Hasher")
self.child.sendline("")
def setUpCommands(self):
return [] # REPL doesn't take any setup commands.
def expect_prompt(self):
pass # No constant prompt on the REPL.
| 34.391304 | 82 | 0.670038 |
343d0f86976e0113be6c3e327c4d9684ab8a6f8d | 9,317 | py | Python | tests/agents/student/test_goals.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | ["MIT"] | 2 | 2019-04-25T04:59:02.000Z | 2019-05-09T06:14:04.000Z | tests/agents/student/test_goals.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | ["MIT"] | null | null | null | tests/agents/student/test_goals.py | e-kolpakov/study-model | e10dd9f0d876c8d434fef99c5ffea80b385ec9ed | ["MIT"] | null | null | null | from unittest import mock
from unittest.mock import PropertyMock
import pytest
from model.agents.resource import Resource
from model.agents.student.goals import StudyCompetenciesGoal, PassExamGoal
from model.knowledge_representation import Competency, Fact
from model.knowledge_representation.lesson_type import Exam, ExamFeedback
__author__ = 'e.kolpakov'
DEFAULT_EXAM_CODE = "EXAM_CODE"
def _make_resource(agent_id, facts, exams):
result = mock.Mock(spec_set=Resource)
type(result).agent_id = PropertyMock(return_value=agent_id)
type(result).facts_to_study = PropertyMock(return_value=facts)
type(result).exams = PropertyMock(return_value=exams)
return result
def _make_competency(facts, is_mastered):
result = mock.Mock(spec_set=Competency)
type(result).facts = PropertyMock(return_value=facts)
result.is_mastered = mock.Mock(return_value=is_mastered)
return result
def _fact_weight(target=0, dependency=0, other=0, cls=StudyCompetenciesGoal):
result = sum([
target * cls.TARGET_FACT_WEIGHT,
dependency * cls.DEPENDENCY_FACT_WEIGHT,
other * cls.OTHER_FACT_WEIGHT
])
return result
def _make_fact(code, dependencies):
result = mock.Mock(spec_set=Fact)
type(result).code = PropertyMock(return_value=code)
type(result).dependencies = PropertyMock(return_value=dependencies)
return result
def _make_exam(code, facts):
result = mock.Mock(spec_set=Exam)
type(result).code = PropertyMock(return_value=code)
type(result).facts = PropertyMock(return_value=facts)
return result
class TestStudyCompetenciesGoal:
def _get_resource_map(self, student, curriculum, target_facts, res_map):
target_competency = _make_competency(target_facts, False)
goal = StudyCompetenciesGoal([target_competency])
resources = [
_make_resource(index, [_make_fact(code, []) for code in resource_facts], [])
for index, resource_facts in res_map.items()
]
with mock.patch('model.agents.student.goals.get_available_facts') as available_facts_mock:
available_facts_mock.side_effect = lambda res, stud: res
resource_map = goal.resource_choice_map(student, curriculum, resources)
return {resource.agent_id: weight for resource, weight in resource_map.items()}
@pytest.mark.parametrize("mastered_responses, expected_result", [
((False,), False),
((True,), True),
((True, False), False),
((False, False), False),
((True, True), True),
])
def test_achieved(self, student, mastered_responses, expected_result):
competencies = [_make_competency([], mastered) for mastered in mastered_responses]
goal = StudyCompetenciesGoal(competencies)
assert goal.achieved(student) == expected_result
@pytest.mark.parametrize("target_facts, res_map, expected_weights", [
(['A'], {'r1': ['A']}, {'r1': _fact_weight(target=1)}),
(['A'], {'r1': ['B']}, {'r1': _fact_weight(other=1)}),
(['A'], {'r1': ['A', 'B']}, {'r1': _fact_weight(target=1, other=1)}),
(['A', 'B'], {'r1': ['A', 'B']}, {'r1': _fact_weight(target=2)}),
(
['A', 'B'],
{'r1': ['A'], 'r2':['B']},
{'r1': _fact_weight(target=1), 'r2': _fact_weight(target=1)}
),
(
['A', 'B', 'C'],
{'r1': ['A', 'C'], 'r2':['B', 'D'], 'r3': ['A', 'B', 'C', 'D']},
{'r1': _fact_weight(target=2), 'r2': _fact_weight(target=1, other=1), 'r3': _fact_weight(target=3, other=1)}
),
])
def test_resource_choice_map_no_dependencies(self, student, curriculum, target_facts, res_map, expected_weights):
target_facts = [_make_fact(code, []) for code in target_facts]
assert self._get_resource_map(student, curriculum, target_facts, res_map) == expected_weights
@pytest.mark.parametrize("target_facts, res_map, expected_weights", [
({'A': ['B']}, {'r1': ['A']}, {'r1': _fact_weight(target=1)}),
({'A': ['B']}, {'r1': ['B']}, {'r1': _fact_weight(dependency=1)}),
({'A': ['B', 'C']}, {'r1': ['B', 'C']}, {'r1': _fact_weight(dependency=2)}),
({'A': ['B'], 'B':[]}, {'r1': ['A', 'B']}, {'r1': _fact_weight(target=2, dependency=1)}),
({'A': ['B'], 'B':[]}, {'r1': ['B']}, {'r1': _fact_weight(target=1, dependency=1)}),
(
{'A': ['B'], 'B':[]},
{'r1': ['A'], 'r2': ['A', 'B', 'D'], 'r3':['C', 'D']},
{
'r1': _fact_weight(target=1),
'r2': _fact_weight(target=2, dependency=1, other=1),
'r3': _fact_weight(other=2),
}
),
])
def test_resource_choice_map(self, student, curriculum, target_facts, res_map, expected_weights):
target_facts = [_make_fact(code, dependencies) for code, dependencies in target_facts.items()]
assert self._get_resource_map(student, curriculum, target_facts, res_map) == expected_weights
class TestPassExamGoal:
@pytest.mark.parametrize("exam_results, expected_result", [
({}, False),
({DEFAULT_EXAM_CODE: [False]}, False),
({DEFAULT_EXAM_CODE: [True]}, True),
({DEFAULT_EXAM_CODE: [False, True]}, True),
({DEFAULT_EXAM_CODE: [False, False]}, False),
({"OTHER": [True]}, False),
({"OTHER": [True], DEFAULT_EXAM_CODE:[False]}, False),
({"OTHER": [False], DEFAULT_EXAM_CODE:[True]}, True),
])
def test_achieved(self, student, exam_results, expected_result):
exam = _make_exam(DEFAULT_EXAM_CODE, [])
type(student).exam_results = PropertyMock(return_value={
code: [ExamFeedback(exam, 0, feedback, 1) for feedback in feedbacks]
for code, feedbacks in exam_results.items()
})
goal = PassExamGoal(exam)
assert goal.achieved(student) == expected_result
@pytest.mark.parametrize("target_facts, reduced_expected_map", [
(
{"A": []},
{
'r1': _fact_weight(cls=PassExamGoal, target=1),
'r2': _fact_weight(cls=PassExamGoal, target=1),
'r3': _fact_weight(cls=PassExamGoal, target=1),
'r4': _fact_weight(cls=PassExamGoal, target=1),
'r5': _fact_weight(cls=PassExamGoal, target=1),
'r6': _fact_weight(cls=PassExamGoal),
}
),
(
{"A": [], "B":[]},
{
'r1': _fact_weight(cls=PassExamGoal, target=1),
'r2': _fact_weight(cls=PassExamGoal, target=2),
'r3': _fact_weight(cls=PassExamGoal, target=2),
'r4': _fact_weight(cls=PassExamGoal, target=1),
'r5': _fact_weight(cls=PassExamGoal, target=2),
'r6': _fact_weight(cls=PassExamGoal, target=1),
},
),
(
{"A": ["B"]},
{
'r1': _fact_weight(cls=PassExamGoal, target=1),
'r2': _fact_weight(cls=PassExamGoal, target=1, dependency=1),
'r3': _fact_weight(cls=PassExamGoal, target=1, dependency=1),
'r4': _fact_weight(cls=PassExamGoal, target=1),
'r5': _fact_weight(cls=PassExamGoal, target=1, dependency=1),
'r6': _fact_weight(cls=PassExamGoal, dependency=1),
},
),
(
{"C": ["B"]},
{
'r1': _fact_weight(cls=PassExamGoal, target=0),
'r2': _fact_weight(cls=PassExamGoal, dependency=1),
'r3': _fact_weight(cls=PassExamGoal, dependency=1),
'r4': _fact_weight(cls=PassExamGoal, target=1),
'r5': _fact_weight(cls=PassExamGoal, dependency=1),
'r6': _fact_weight(cls=PassExamGoal, target=1, dependency=1),
},
)
])
def test_resource_choice_map(self, student, curriculum, target_facts, reduced_expected_map):
target_facts = [_make_fact(code, dependencies) for code, dependencies in target_facts.items()]
target_exam = _make_exam(DEFAULT_EXAM_CODE, target_facts)
other_exam = _make_exam("OTHER", target_facts)
goal = PassExamGoal(target_exam)
resources = [
_make_resource('r1', [_make_fact("A", [])], []),
_make_resource('r2', [_make_fact("A", []), _make_fact("B", [])], []),
_make_resource('r3', [_make_fact("A", []), _make_fact("B", [])], [other_exam]),
_make_resource('r4', [_make_fact("A", []), _make_fact("C", [])], [target_exam]),
_make_resource('r5', [_make_fact("A", []), _make_fact("B", [])], [target_exam]),
_make_resource('r6', [_make_fact("B", []), _make_fact("C", [])], [target_exam]),
]
expected_weight_map = {resource.agent_id: 0 for resource in resources}
expected_weight_map.update(reduced_expected_map)
with mock.patch('model.agents.student.goals.get_available_facts') as available_facts_mock:
available_facts_mock.side_effect = lambda res, stud: res
resource_map = goal.resource_choice_map(student, curriculum, resources)
assert {resource.agent_id: weight for resource, weight in resource_map.items()} == expected_weight_map | 44.366667 | 120 | 0.599549 |
df0c96f9c79cc404036dabf55965837140de0d78 | 968 | py | Python | tests/workflows/test_wrappers.py | mfarrera/algorithm-reference-library | 7331812aa7cc3501a15d3392cecf6ea65b43f91e | ["Apache-2.0"] | null | null | null | tests/workflows/test_wrappers.py | mfarrera/algorithm-reference-library | 7331812aa7cc3501a15d3392cecf6ea65b43f91e | ["Apache-2.0"] | null | null | null | tests/workflows/test_wrappers.py | mfarrera/algorithm-reference-library | 7331812aa7cc3501a15d3392cecf6ea65b43f91e | ["Apache-2.0"] | null | null | null | """ Unit tests for json helpers
"""
import logging
import unittest
from data_models.parameters import arl_path
from workflows.arlexecute.processing_component_interface.processing_component_interface \
import initialise_config_wrapper
from workflows.arlexecute.processing_component_interface.execution_helper import initialise_logging_wrapper
class TestWrappers(unittest.TestCase):
def test_initialise_config(self):
config = initialise_config_wrapper(arl_path("tests/workflows/test_json_helpers.json"))
for key in ['execute', 'component', 'logging', 'inputs', 'outputs', 'imaging', 'image', 'deconvolution',
'create_vislist']:
assert key in config.keys(), "Key %s not in configuration"
log = logging.getLogger(__name__)
initialise_logging_wrapper(config)
log.info('Test message')
log.info(str(config))
if __name__ == '__main__':
unittest.main()
| 33.37931 | 112 | 0.711777 |
8001ec2ceef0adddd340965147132ff941ce8317 | 410 | py | Python | examples/simple_auth.py | SantaSpeen/gitflic | 14c68092e238d6731863f9db29b304a9fad3e61c | ["MIT"] | 2 | 2022-03-16T10:18:38.000Z | 2022-03-16T13:08:51.000Z | examples/simple_auth.py | SantaSpeen/gitflic | 14c68092e238d6731863f9db29b304a9fad3e61c | ["MIT"] | 1 | 2022-03-16T12:52:28.000Z | 2022-03-16T13:08:30.000Z | examples/simple_auth.py | SantaSpeen/gitflic | 14c68092e238d6731863f9db29b304a9fad3e61c | ["MIT"] | 4 | 2022-03-16T09:33:05.000Z | 2022-03-30T05:46:58.000Z | from gitflic import Gitflic, GitflicAuth
# Your authentication token.
# See: https://gitflic.ru/settings/oauth/token/create
token = "token_here"
# Creating authorized session with our token.
gf_session = GitflicAuth(token)
gf = Gitflic(gf_session)
def main():
# Call method to get authorized user from API.
user_me = gf.call("/user/me")
print(user_me)
if __name__ == '__main__':
main()
| 20.5 | 53 | 0.712195 |
60fd86e72296d8de8636a581200b69921cbcbd67 | 4,634 | py | Python | docarray/document/mixins/text.py | fastflair/docarray | 0bbdbc816b2f4a3b399779f6816875fbc1dfe862 | ["Apache-2.0"] | 1 | 2022-03-16T14:05:32.000Z | 2022-03-16T14:05:32.000Z | docarray/document/mixins/text.py | fastflair/docarray | 0bbdbc816b2f4a3b399779f6816875fbc1dfe862 | ["Apache-2.0"] | null | null | null | docarray/document/mixins/text.py | fastflair/docarray | 0bbdbc816b2f4a3b399779f6816875fbc1dfe862 | ["Apache-2.0"] | null | null | null | from collections import Counter
from typing import Tuple, Dict, Union, Optional, TYPE_CHECKING
import numpy as np
from .helper import _uri_to_blob, _to_datauri
if TYPE_CHECKING:
from ...types import T
class TextDataMixin:
"""Provide helper functions for :class:`Document` to support text data. """
def load_uri_to_text(self: 'T', charset: str = 'utf-8') -> 'T':
"""Convert :attr:`.uri` to :attr`.text` inplace.
:param charset: charset may be any character set registered with IANA
:return: itself after processed
"""
blob = _uri_to_blob(self.uri)
self.text = blob.decode(charset)
return self
def get_vocabulary(self, text_attrs: Tuple[str, ...] = ('text',)) -> Dict[str, int]:
"""Get the text vocabulary in a counter dict that maps from the word to its frequency from all :attr:`text_fields`.
:param text_attrs: the textual attributes where vocabulary will be derived from
:return: a vocabulary in dictionary where key is the word, value is the frequency of that word in all text fields.
"""
all_tokens = Counter()
for f in text_attrs:
all_tokens.update(_text_to_word_sequence(getattr(self, f)))
return all_tokens
def convert_text_to_tensor(
self: 'T',
vocab: Dict[str, int],
max_length: Optional[int] = None,
dtype: str = 'int64',
) -> 'T':
"""Convert :attr:`.text` to :attr:`.tensor` inplace.
In the end :attr:`.tensor` will be a 1D array where `D` is `max_length`.
To get the vocab of a DocumentArray, you can use `jina.types.document.converters.build_vocab` to
:param vocab: a dictionary that maps a word to an integer index, `0` is reserved for padding, `1` is reserved
for unknown words in :attr:`.text`. So you should *not* include these two entries in `vocab`.
:param max_length: the maximum length of the sequence. Sequence longer than this are cut off from *beginning*.
Sequence shorter than this will be padded with `0` from right hand side.
:param dtype: the dtype of the generated :attr:`.tensor`
:return: Document itself after processed
"""
self.tensor = np.array(
_text_to_int_sequence(self.text, vocab, max_length), dtype=dtype
)
return self
def convert_tensor_to_text(
self: 'T', vocab: Union[Dict[str, int], Dict[int, str]], delimiter: str = ' '
) -> 'T':
"""Convert :attr:`.tensor` to :attr:`.text` inplace.
:param vocab: a dictionary that maps a word to an integer index, `0` is reserved for padding, `1` is reserved
for unknown words in :attr:`.text`
:param delimiter: the delimiter that used to connect all words into :attr:`.text`
:return: Document itself after processed
"""
        if isinstance(list(vocab.keys())[0], str):
            _vocab = {v: k for k, v in vocab.items()}
        else:
            _vocab = vocab
_text = []
for k in self.tensor:
k = int(k)
if k == 0:
continue
elif k == 1:
_text.append('<UNK>')
else:
_text.append(_vocab.get(k, '<UNK>'))
self.text = delimiter.join(_text)
return self
def convert_text_to_datauri(
self: 'T', charset: str = 'utf-8', base64: bool = False
) -> 'T':
"""Convert :attr:`.text` to data :attr:`.uri`.
:param charset: charset may be any character set registered with IANA
:param base64: used to encode arbitrary octet sequences into a form that satisfies the rules of 7bit.
Designed to be efficient for non-text 8 bit and binary data.
Sometimes used for text data that frequently uses non-US-ASCII characters.
:return: itself after processed
"""
self.uri = _to_datauri(self.mime_type, self.text, charset, base64, binary=False)
return self
def _text_to_word_sequence(
text, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', split=' '
):
translate_dict = {c: split for c in filters}
translate_map = str.maketrans(translate_dict)
text = text.lower().translate(translate_map)
seq = text.split(split)
for i in seq:
if i:
yield i
def _text_to_int_sequence(text, vocab, max_len=None):
seq = _text_to_word_sequence(text)
vec = [vocab.get(s, 1) for s in seq]
if max_len:
if len(vec) < max_len:
vec = [0] * (max_len - len(vec)) + vec
elif len(vec) > max_len:
vec = vec[-max_len:]
return vec
| 36.777778 | 123 | 0.611135 |
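The tokenizing and padding helpers above are plain functions, so their behavior can be checked standalone. The sketch below reproduces them (leading underscores dropped, and `vocab` is a made-up word-to-index map for illustration) to show how `max_len` left-pads with `0` and truncates from the beginning:

```python
def text_to_word_sequence(text, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', split=' '):
    # lowercase the text and replace every filtered character with the split char
    translate_map = str.maketrans({c: split for c in filters})
    for token in text.lower().translate(translate_map).split(split):
        if token:
            yield token

def text_to_int_sequence(text, vocab, max_len=None):
    # index 0 is reserved for padding, 1 for out-of-vocabulary words
    vec = [vocab.get(word, 1) for word in text_to_word_sequence(text)]
    if max_len:
        if len(vec) < max_len:
            vec = [0] * (max_len - len(vec)) + vec   # left-pad with 0
        elif len(vec) > max_len:
            vec = vec[-max_len:]                     # cut off from the beginning
    return vec

vocab = {'hello': 2, 'world': 3}
print(text_to_int_sequence('Hello, world! hello', vocab, max_len=5))  # [0, 0, 2, 3, 2]
print(text_to_int_sequence('hello world hello', vocab, max_len=2))    # [3, 2]
```

Note that punctuation is stripped by the filter set before lookup, which is why `'Hello,'` still maps to the `'hello'` entry.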
0986bcd21821ac806ebbe0f7d10798feea61cc7b | 1,144 | py | Python | emulator/tests/test_binja.py | joshwatson/f-ing-around-with-binaryninja | 2f7e907952a15305541820ac11ff0a86645c631d | [
"MIT"
] | 88 | 2019-01-13T20:11:13.000Z | 2022-02-08T01:13:46.000Z | emulator/tests/test_binja.py | joshwatson/f-ing-around-with-binaryninja | 2f7e907952a15305541820ac11ff0a86645c631d | [
"MIT"
] | 4 | 2019-03-20T03:02:49.000Z | 2020-10-31T11:57:56.000Z | emulator/tests/test_binja.py | joshwatson/f-ing-around-with-binaryninja | 2f7e907952a15305541820ac11ff0a86645c631d | [
"MIT"
] | 10 | 2019-04-26T08:59:56.000Z | 2020-08-19T03:19:27.000Z | from binaryninja import (BinaryView, LowLevelILFunction, PluginCommand,
SegmentFlag)
from emulator import Executor
def setup_stack(view: BinaryView, function: LowLevelILFunction) -> None:
emulator = view.session_data['emulator']
memory_view = view.session_data['emulator.memory.view']
map_start = 0x1000
map_len = 0x10000
while True:
while memory_view.get_segment_at(map_start) is not None:
map_start += 0x1000
if any(
s.start > map_start and
s.start < map_start + map_len
for s in memory_view.segments
):
map_start += 0x1000
continue
emulator.map_memory(
map_start,
map_len,
SegmentFlag.SegmentReadable | SegmentFlag.SegmentWritable
)
break
sp = map_start + map_len - view.address_size
emulator.write_register(view.arch.stack_pointer, sp)
PluginCommand.register_for_low_level_il_function(
'Emulator\\Setup stack',
'Setup Emulator Stack',
setup_stack,
lambda v, f: v.session_data.get('emulator') is not None
)
| 27.238095 | 72 | 0.640734 |
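The scan in `setup_stack` — bump the candidate base one page at a time until it neither lands inside an existing segment nor has a segment starting inside the candidate window — can be sketched without Binary Ninja. Segments here are hypothetical `(start, length)` tuples standing in for `memory_view.segments`:

```python
PAGE = 0x1000

def find_free_region(segments, map_len=0x10000, start=0x1000):
    """segments: list of (start, length) tuples for already-mapped memory."""
    def segment_at(addr):
        return any(s <= addr < s + l for s, l in segments)

    map_start = start
    while True:
        # skip past any segment that already covers the candidate base
        while segment_at(map_start):
            map_start += PAGE
        # if some segment begins inside the candidate window, move past it and rescan
        if any(s > map_start and s < map_start + map_len for s, l in segments):
            map_start += PAGE
            continue
        return map_start

# one segment mapped at 0x1000..0x3000: the first free window starts at 0x3000
print(hex(find_free_region([(0x1000, 0x2000)])))  # 0x3000
```

Like the original, this advances only a page at a time when a segment starts inside the window, so it effectively slides past every blocking segment before settling on a base.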
eafdacc7f22d3cb3f0b31cc6fbbecf32d19dc157 | 2,196 | py | Python | getsitemap/main.py | hsiaofongw/watson | bd8939e198d747633db00582a9f5bb00bdc04acc | [
"MIT"
] | null | null | null | getsitemap/main.py | hsiaofongw/watson | bd8939e198d747633db00582a9f5bb00bdc04acc | [
"MIT"
] | null | null | null | getsitemap/main.py | hsiaofongw/watson | bd8939e198d747633db00582a9f5bb00bdc04acc | [
"MIT"
] | null | null | null | import argparse
from urlhandle import URLHandler
from inputhandle import InputHandler
from crawler import Crawler
from sitemap import SiteMapParser
# import default configuration
from config import USER_AGENT
from config import SITEMAP_PATH
# do those when it is invoke from commandline
def main():
# parse commandline inputs
parser = argparse.ArgumentParser()
parser.add_argument(
"host",
help='''your website's domain, for example, google.com'''
)
parser.add_argument(
"--sitemap-path",
default=SITEMAP_PATH,
help='''for example, if your domain is google.com, '''
'''and sitemap path is /sitemap.xml, then we will'''
''' use google.com/sitemap.xml to crawl the sitemap'''
)
parser.add_argument(
"--user-agent",
default=USER_AGENT,
help='''exactly the User-Agent field in request header'''
)
parser.add_argument(
"--use-https",
default=True,
help='''use https to communicate with server, this is default'''
)
parser.add_argument(
"--use-http",
default=False,
help='''use http to communicate with the server'''
)
args = parser.parse_args()
# retrieve parameters from inputs
input_handler = InputHandler()
sitemap_path = input_handler.get_sitemap_path(args)
host = input_handler.get_host(args)
http_scheme = input_handler.get_http_scheme(args)
user_agent = input_handler.get_user_agent(args)
# make full url from parameters
url_handler = URLHandler()
full_url_to_crawl = url_handler.make_sitemap_full_url(
host = host,
sitemap_path = sitemap_path,
http_scheme = http_scheme
)
# start to crawl the sitemap response
crawler = Crawler(user_agent)
sitemap_response = crawler.crawl(full_url_to_crawl)
# parse sitemap response to url list
sitemap_parser = SiteMapParser()
urls = sitemap_parser.parse_to_url_list(sitemap_response)
# print url in the urls line by line
for url in urls:
print(url)
# in case it is imported as a module, don't run automatically
if __name__ == '__main__':
main() | 27.111111 | 72 | 0.666667 |
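The `SiteMapParser` imported above is project-specific, but a minimal version of its `parse_to_url_list` step with the standard library might look like the sketch below. The namespace URI is the standard sitemap schema; the XML string is a made-up example, not a real crawl response:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = '{http://www.sitemaps.org/schemas/sitemap/0.9}'

def parse_to_url_list(sitemap_xml: str):
    # every <url><loc>...</loc></url> entry holds one page URL
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + 'loc')]

example = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

print(parse_to_url_list(example))
# ['https://example.com/', 'https://example.com/about']
```

Real sitemaps may instead be index files (`<sitemapindex>`) pointing at further sitemaps, which this sketch does not handle.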
eb041db7d63e94382c58ebefc99e26b5a2225bee | 515 | py | Python | Examples/Foundation/Scripts/super-call.py | linuxfood/pyobjc-framework-Cocoa-test | 3475890f165ab26a740f13d5afe4c62b4423a140 | [
"MIT"
] | null | null | null | Examples/Foundation/Scripts/super-call.py | linuxfood/pyobjc-framework-Cocoa-test | 3475890f165ab26a740f13d5afe4c62b4423a140 | [
"MIT"
] | null | null | null | Examples/Foundation/Scripts/super-call.py | linuxfood/pyobjc-framework-Cocoa-test | 3475890f165ab26a740f13d5afe4c62b4423a140 | [
"MIT"
] | null | null | null | #
# Demonstrates that the super-class implementation of an overridden method
# can be called in the same way as with normal objects.
#
from Foundation import NSObject
from objc import super
N = 1
class MyObject(NSObject):
def init(self):
global N
if N == 1:
print("Calling super.init")
N = 0
# Call super-class implementation.
super(MyObject, self).init()
else:
print("Cyclic call detected")
x = MyObject.alloc().init()
| 19.807692 | 74 | 0.609709 |
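The same pattern in plain Python, without PyObjC: the override runs its own logic and then delegates to the super-class implementation via `super()`.

```python
class Base:
    def init(self):
        return 'base-init'

class MyObject(Base):
    def init(self):
        # call the super-class implementation, as the Objective-C example does
        return 'my-object -> ' + super().init()

print(MyObject().init())  # my-object -> base-init
```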
9d9990e9d52da1aacaa9ac433d43fd884c3d7dfe | 7,656 | py | Python | src/NetCloud-Failover-Reporter.py | CC-Digital-Innovation/NetCloud-Failover-Reporter | b5034009938e6a6a161439c1dd7020516b6fe940 | [
"MIT"
] | null | null | null | src/NetCloud-Failover-Reporter.py | CC-Digital-Innovation/NetCloud-Failover-Reporter | b5034009938e6a6a161439c1dd7020516b6fe940 | [
"MIT"
] | null | null | null | src/NetCloud-Failover-Reporter.py | CC-Digital-Innovation/NetCloud-Failover-Reporter | b5034009938e6a6a161439c1dd7020516b6fe940 | [
"MIT"
] | null | null | null | import configparser
from datetime import datetime
import os
import pandas as pd
import pytz
import requests
import time_uuid
# Module information.
__author__ = 'Anthony Farina'
__copyright__ = 'Copyright 2021, NetCloud Monthly Failover Reporter'
__credits__ = ['Anthony Farina']
__license__ = 'MIT'
__version__ = '1.1.2'
__maintainer__ = 'Anthony Farina'
__email__ = 'farinaanthony96@gmail.com'
__status__ = 'Released'
# Global variables from the config file for easy referencing.
CONFIG = configparser.ConfigParser()
CONFIG_PATH = '/../configs/NetCloud-Failover-Reporter-config.ini'
CONFIG.read(os.path.dirname(os.path.realpath(__file__)) + CONFIG_PATH)
NETCLOUD_HEADERS = CONFIG._sections['NetCloud API Info']
TIMEZONE = CONFIG['Timezone Info']['timezone']
EXCEL_FILE_NAME = CONFIG['Output Info']['excel-file-name']
COL_LABELS = CONFIG['Output Info']['column-labels'].split(',')
# This method extracts the current month's failover events from NetCloud. This
# information is outputted to an Excel file. The output can be customized
# from the config.ini file.
def netcloud_failover_reporter() -> None:
# Create the base URL for one batch of failover alerts in NetCloud.
# NetCloud's maximum number of failover events from one HTTP GET call is
# 500, so 1 batch = 500 failover alerts.
failovers_url = 'https://www.cradlepointecm.com/api/v2/alerts/?type' \
'=failover_event&limit=500'
# Get the current time in the relative timezone into a datetime object.
rel_now_dt = datetime.utcnow().replace(tzinfo=pytz.UTC).astimezone(
pytz.timezone(TIMEZONE))
# Get the start of the current month in the relative timezone in UTC into
# a datetime object.
rel_month_start_dt = datetime(rel_now_dt.year, rel_now_dt.month, 1, 0,
0, 0, tzinfo=rel_now_dt.tzinfo)
rel_month_start_utc_dt = rel_month_start_dt.astimezone(pytz.timezone(
'UTC'))
# Make a time UUID object for the relative UTC time for the start of the
# month.
rel_month_start_utc_tuuid = time_uuid.TimeUUID.convert(
rel_month_start_utc_dt)
# Modify the base failover URL to only include failovers from the start
# of the current month to now.
failovers_url += '&created_at_timeuuid__gte=' + str(
rel_month_start_utc_tuuid)
# Extract the first batch of monthly failovers from NetCloud in JSON
# format.
failover_request = requests.get(url=failovers_url,
headers=NETCLOUD_HEADERS)
monthly_failover_batch = failover_request.json()
# Keep a reference to all the routers we have extracted information for
# thus far.
router_dict = dict()
# Prepare the output list that will be converted to an Excel file.
output_list = list()
    # Make a do-while condition to get more failovers if there are multiple
    # batches of failover alerts. Reminder: 1 batch = 500 failover alerts.
next_failover_batch = True
# Loop through all monthly failovers to find relevant failover records
# to add to the output list.
while next_failover_batch:
# Loop through this batch's failovers to find relevant failover
# records to add to the output list.
for failover_record in monthly_failover_batch['data']:
# Get the relative time this failover occurred at as a datetime
# object.
rel_time_dt = convert_utc(failover_record['created_at'],
'%Y-%m-%dT%H:%M:%S.%f%z', TIMEZONE)
# Prepare the logic to see if this record is relevant. If all
# failovers should be extracted then leave this section alone.
# example_cond_1 = rel_time_dt.weekday() != 5 or
# rel_time_dt.weekday() != 6
# example_cond_2 = rel_time_dt.hour >= 9 and rel_time_dt.hour <= 17
# Check if this failover is relevant. If so, add it to the
# output list. Uncomment if statement if there are conditions
# made above.
# if example_cond_1 and example_cond_2:
# BEGIN IF-STATEMENT INDENTATION ---------------------------------
# Prepare a list to contain relevant failover information.
relevant_failover_record = list()
# Extract the router number from the failover record.
router_url = failover_record['router']
router_url_split = router_url.split('/')
router_num = router_url_split[len(router_url_split) - 2]
# Check if the router exists in the router dictionary.
if router_num not in router_dict:
# Request router information from NetCloud.
router_request = requests.get(url=router_url,
headers=NETCLOUD_HEADERS)
router_info = router_request.json()
# Add router information to router dictionary.
router_dict[router_num] = router_info
# Add router information to relevant failover record. Add or remove
# any lines needed to extract relevant information.
relevant_failover_record.append(router_dict[router_num]['name'])
relevant_failover_record.append(router_dict[router_num]['mac'])
relevant_failover_record.append(router_dict[router_num][
'serial_number'])
relevant_failover_record.append(datetime.strftime(
rel_time_dt, '%m/%d/%Y %I:%M:%S %p ' +
str(rel_time_dt.tzname())))
relevant_failover_record.append(failover_record['type'])
# Add the relevant failover record to the output list.
output_list.append(relevant_failover_record)
# END IF-STATEMENT INDENTATION -----------------------------------
# Check if there is another batch of failover alerts to process.
if monthly_failover_batch['meta']['next'] is None:
next_failover_batch = False
else:
# Extract the next batch of failovers from NetCloud in JSON format.
failovers_url = monthly_failover_batch['meta']['next']
failover_request = requests.get(url=failovers_url,
headers=NETCLOUD_HEADERS)
monthly_failover_batch = failover_request.json()
# Convert the output list to an Excel file.
output_dataframe = pd.DataFrame(output_list, columns=COL_LABELS)
output_dataframe.to_excel('./../' + EXCEL_FILE_NAME + '.xlsx', index=None,
header=True)
# Takes an arbitrarily formatted UTC time as a string and the format of
# the string (following Python's time formatting conventions) then converts
# it to the provided timezone. Returns a datetime object of the given time in
# the relative timezone (timezone-aware).
def convert_utc(utc_str: str, utc_str_format: str, timezone: str) -> datetime:
# Convert the given time string to an unaware datetime object with
# the given format.
time_dt = datetime.strptime(utc_str, utc_str_format)
# Make the datetime object aware by setting its timezone (UTC).
time_utc_dt = time_dt.replace(tzinfo=pytz.UTC)
# Convert the time in the datetime object from UTC to the provided
# timezone.
time_other_dt = time_utc_dt.astimezone(pytz.timezone(timezone))
return time_other_dt
# The main method that runs the NetCloud failover reporter method.
# There is no input.
if __name__ == '__main__':
# Run the script.
netcloud_failover_reporter()
| 43.748571 | 79 | 0.662487 |
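The batch loop above follows a common cursor-pagination pattern: consume one page's `data`, then follow `meta.next` until it is `None`. A library-free sketch of that loop (the `fake_api` dict stands in for `requests.get(...).json()` against NetCloud):

```python
def fetch_all_pages(get_page, first_url):
    """Collect 'data' from every page, following the meta.next links."""
    results, url = [], first_url
    while url is not None:
        batch = get_page(url)
        results.extend(batch['data'])
        url = batch['meta']['next']  # None on the last batch
    return results

# stand-in for a paginated HTTP API
fake_api = {
    '/alerts?page=1': {'data': [1, 2], 'meta': {'next': '/alerts?page=2'}},
    '/alerts?page=2': {'data': [3], 'meta': {'next': None}},
}
print(fetch_all_pages(fake_api.__getitem__, '/alerts?page=1'))  # [1, 2, 3]
```

Structuring it this way (loop on the cursor rather than a boolean flag) avoids the do-while emulation used in the script.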
38167803e96f2d73bb49b4ccad1e0e45ad4100ca | 942 | py | Python | cmachines_site/cmachines/urls.py | JinpengLI/gpu_share_platform | 53d7f2b45acd9340cf49908fe9809f8d2856757c | [
"MIT"
] | 18 | 2018-02-21T00:39:37.000Z | 2019-05-21T08:03:49.000Z | cmachines_site/cmachines/urls.py | JinpengLI/gpu_share_platform | 53d7f2b45acd9340cf49908fe9809f8d2856757c | [
"MIT"
] | 1 | 2020-08-08T18:05:31.000Z | 2020-08-08T19:57:05.000Z | cmachines_site/cmachines/urls.py | JinpengLI/gpu_share_platform | 53d7f2b45acd9340cf49908fe9809f8d2856757c | [
"MIT"
] | 12 | 2018-02-22T07:22:58.000Z | 2020-01-06T07:45:52.000Z | """cmachines URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.11/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import include, url
from django.contrib import admin
from users.views import root_index
urlpatterns = [
url(r'^users/', include('users.urls')),
url(r'^admin/', admin.site.urls),
#url(r'^$', include('users.urls')),
url(r'^$', root_index, name='root_index'),
]
| 34.888889 | 79 | 0.690021 |
f33c05029150313b113774972ccc24ccbe622cc0 | 10,338 | py | Python | tools_webrtc/libs/generate_licenses.py | wnpllrzodiac/webrtc_code_mirror | 0e7b3a9dadc64266eba82b0fc1265b36b7794839 | [
"BSD-3-Clause"
] | 2 | 2021-01-27T16:03:00.000Z | 2022-02-16T08:07:37.000Z | tools_webrtc/libs/generate_licenses.py | wnpllrzodiac/webrtc_code_mirror | 0e7b3a9dadc64266eba82b0fc1265b36b7794839 | [
"BSD-3-Clause"
] | 1 | 2021-09-22T12:27:53.000Z | 2021-09-22T12:27:53.000Z | tools_webrtc/libs/generate_licenses.py | wnpllrzodiac/webrtc_code_mirror | 0e7b3a9dadc64266eba82b0fc1265b36b7794839 | [
"BSD-3-Clause"
] | 2 | 2021-01-21T01:51:02.000Z | 2021-01-21T06:09:09.000Z | #!/usr/bin/env python
# Copyright 2016 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
"""Generates license markdown for a prebuilt version of WebRTC.
Licenses are taken from dependent libraries which are determined by
GN desc command `gn desc` on all targets specified via `--target` argument.
One can see all dependencies by invoking this command:
$ gn.py desc --all --format=json <out_directory> <target> | python -m json.tool
(see "deps" subarray)
Libraries are mapped to licenses via LIB_TO_LICENSES_DICT dictionary.
"""
import sys
import argparse
import cgi
import json
import logging
import os
import re
import subprocess
# Third_party library to licences mapping. Keys are names of the libraries
# (right after the `third_party/` prefix)
LIB_TO_LICENSES_DICT = {
'abseil-cpp': ['third_party/abseil-cpp/LICENSE'],
'android_ndk': ['third_party/android_ndk/NOTICE'],
'android_sdk': ['third_party/android_sdk/LICENSE'],
'auto': [
'third_party/android_deps/libs/'
'com_google_auto_service_auto_service/LICENSE'
],
'bazel': ['third_party/bazel/LICENSE'],
'boringssl': ['third_party/boringssl/src/LICENSE'],
'errorprone': [
'third_party/android_deps/libs/'
'com_google_errorprone_error_prone_core/LICENSE'
],
'fiat': ['third_party/boringssl/src/third_party/fiat/LICENSE'],
'guava': ['third_party/android_deps/libs/com_google_guava_guava/LICENSE'],
'ijar': ['third_party/ijar/LICENSE'],
'jsoncpp': ['third_party/jsoncpp/LICENSE'],
'libaom': ['third_party/libaom/source/libaom/LICENSE'],
'libc++': ['buildtools/third_party/libc++/trunk/LICENSE.TXT'],
'libc++abi': ['buildtools/third_party/libc++abi/trunk/LICENSE.TXT'],
'libevent': ['base/third_party/libevent/LICENSE'],
'libjpeg_turbo': ['third_party/libjpeg_turbo/LICENSE.md'],
'libsrtp': ['third_party/libsrtp/LICENSE'],
'libvpx': ['third_party/libvpx/source/libvpx/LICENSE'],
'libyuv': ['third_party/libyuv/LICENSE'],
'nasm': ['third_party/nasm/LICENSE'],
'opus': ['third_party/opus/src/COPYING'],
'pffft': ['third_party/pffft/LICENSE'],
'protobuf': ['third_party/protobuf/LICENSE'],
'rnnoise': ['third_party/rnnoise/COPYING'],
'usrsctp': ['third_party/usrsctp/LICENSE'],
'webrtc': ['LICENSE'],
'zlib': ['third_party/zlib/LICENSE'],
'base64': ['rtc_base/third_party/base64/LICENSE'],
'sigslot': ['rtc_base/third_party/sigslot/LICENSE'],
'portaudio': ['modules/third_party/portaudio/LICENSE'],
'fft': ['modules/third_party/fft/LICENSE'],
'g711': ['modules/third_party/g711/LICENSE'],
'g722': ['modules/third_party/g722/LICENSE'],
'ooura': ['common_audio/third_party/ooura/LICENSE'],
'spl_sqrt_floor': ['common_audio/third_party/spl_sqrt_floor/LICENSE'],
# TODO(bugs.webrtc.org/1110): Remove this hack. This is not a lib.
# For some reason it is listed as so in _GetThirdPartyLibraries.
'android_deps': [],
# Compile time dependencies, no license needed:
'yasm': [],
'ow2_asm': [],
'jdk': [],
}
# Third_party library _regex_ to licences mapping. Keys are regular expression
# with names of the libraries (right after the `third_party/` prefix)
LIB_REGEX_TO_LICENSES_DICT = {
'android_deps:android_support_annotations.*': [
'third_party/android_deps/libs/' +
'com_android_support_support_annotations/LICENSE'
],
# Internal dependencies, licenses are already included by other dependencies
'android_deps:com_android_support_support_annotations.*': [],
}
def FindSrcDirPath():
"""Returns the abs path to the src/ dir of the project."""
src_dir = os.path.dirname(os.path.abspath(__file__))
while os.path.basename(src_dir) != 'src':
src_dir = os.path.normpath(os.path.join(src_dir, os.pardir))
return src_dir
SCRIPT_DIR = os.path.dirname(os.path.realpath(sys.argv[0]))
WEBRTC_ROOT = os.path.abspath(os.path.join(SCRIPT_DIR, os.pardir, os.pardir))
SRC_DIR = FindSrcDirPath()
sys.path.append(os.path.join(SRC_DIR, 'build'))
import find_depot_tools
THIRD_PARTY_LIB_SIMPLE_NAME_REGEX = r'^.*/third_party/([\w\-+]+).*$'
THIRD_PARTY_LIB_REGEX_TEMPLATE = r'^.*/third_party/%s$'
class LicenseBuilder(object):
def __init__(self,
buildfile_dirs,
targets,
lib_to_licenses_dict=None,
lib_regex_to_licenses_dict=None):
if lib_to_licenses_dict is None:
lib_to_licenses_dict = LIB_TO_LICENSES_DICT
if lib_regex_to_licenses_dict is None:
lib_regex_to_licenses_dict = LIB_REGEX_TO_LICENSES_DICT
self.buildfile_dirs = buildfile_dirs
self.targets = targets
self.lib_to_licenses_dict = lib_to_licenses_dict
self.lib_regex_to_licenses_dict = lib_regex_to_licenses_dict
self.common_licenses_dict = self.lib_to_licenses_dict.copy()
self.common_licenses_dict.update(self.lib_regex_to_licenses_dict)
@staticmethod
def _ParseLibraryName(dep):
"""Returns library name after third_party
Input one of:
//a/b/third_party/libname:c
//a/b/third_party/libname:c(//d/e/f:g)
//a/b/third_party/libname/c:d(//e/f/g:h)
Outputs libname or None if this is not a third_party dependency.
"""
groups = re.match(THIRD_PARTY_LIB_SIMPLE_NAME_REGEX, dep)
return groups.group(1) if groups else None
def _ParseLibrary(self, dep):
"""Returns library simple or regex name that matches `dep` after third_party
This method matches `dep` dependency against simple names in
LIB_TO_LICENSES_DICT and regular expression names in
LIB_REGEX_TO_LICENSES_DICT keys
Outputs matched dict key or None if this is not a third_party dependency.
"""
libname = LicenseBuilder._ParseLibraryName(dep)
for lib_regex in self.lib_regex_to_licenses_dict:
if re.match(THIRD_PARTY_LIB_REGEX_TEMPLATE % lib_regex, dep):
return lib_regex
return libname
@staticmethod
def _RunGN(buildfile_dir, target):
cmd = [
sys.executable,
os.path.join(find_depot_tools.DEPOT_TOOLS_PATH, 'gn.py'),
'desc',
'--all',
'--format=json',
os.path.abspath(buildfile_dir),
target,
]
logging.debug('Running: %r', cmd)
output_json = subprocess.check_output(cmd, cwd=WEBRTC_ROOT)
logging.debug('Output: %s', output_json)
return output_json
def _GetThirdPartyLibraries(self, buildfile_dir, target):
output = json.loads(LicenseBuilder._RunGN(buildfile_dir, target))
libraries = set()
for described_target in output.values():
third_party_libs = (self._ParseLibrary(dep)
for dep in described_target['deps'])
libraries |= set(lib for lib in third_party_libs if lib)
return libraries
def GenerateLicenseText(self, output_dir):
# Get a list of third_party libs from gn. For fat libraries we must consider
# all architectures, hence the multiple buildfile directories.
third_party_libs = set()
for buildfile in self.buildfile_dirs:
for target in self.targets:
third_party_libs |= self._GetThirdPartyLibraries(
buildfile, target)
assert len(third_party_libs) > 0
missing_licenses = third_party_libs - set(
self.common_licenses_dict.keys())
if missing_licenses:
error_msg = 'Missing licenses for following third_party targets: %s' % \
', '.join(missing_licenses)
logging.error(error_msg)
raise Exception(error_msg)
# Put webrtc at the front of the list.
license_libs = sorted(third_party_libs)
license_libs.insert(0, 'webrtc')
logging.info('List of licenses: %s', ', '.join(license_libs))
# Generate markdown.
output_license_file = open(os.path.join(output_dir, 'LICENSE.md'),
'w+')
for license_lib in license_libs:
if len(self.common_licenses_dict[license_lib]) == 0:
logging.info(
'Skipping compile time or internal dependency: %s',
license_lib)
continue # Compile time dependency
output_license_file.write('# %s\n' % license_lib)
output_license_file.write('```\n')
for path in self.common_licenses_dict[license_lib]:
license_path = os.path.join(WEBRTC_ROOT, path)
with open(license_path, 'r') as license_file:
license_text = cgi.escape(license_file.read(), quote=True)
output_license_file.write(license_text)
output_license_file.write('\n')
output_license_file.write('```\n\n')
output_license_file.close()
def main():
parser = argparse.ArgumentParser(description='Generate WebRTC LICENSE.md')
parser.add_argument('--verbose',
action='store_true',
default=False,
help='Debug logging.')
parser.add_argument('--target',
required=True,
action='append',
default=[],
help='Name of the GN target to generate a license for')
parser.add_argument('output_dir',
help='Directory to output LICENSE.md to.')
parser.add_argument('buildfile_dirs',
nargs='+',
help='Directories containing gn generated ninja files')
args = parser.parse_args()
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
builder = LicenseBuilder(args.buildfile_dirs, args.target)
builder.GenerateLicenseText(args.output_dir)
if __name__ == '__main__':
sys.exit(main())
| 38.574627 | 84 | 0.6568 |
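GN dependency labels are reduced to bare library names with the `THIRD_PARTY_LIB_SIMPLE_NAME_REGEX` defined above; its behavior can be exercised in isolation (the regex is copied from the script, the labels are made-up examples):

```python
import re

THIRD_PARTY_LIB_SIMPLE_NAME_REGEX = r'^.*/third_party/([\w\-+]+).*$'

def parse_library_name(dep):
    # returns the path component right after third_party/, or None
    m = re.match(THIRD_PARTY_LIB_SIMPLE_NAME_REGEX, dep)
    return m.group(1) if m else None

print(parse_library_name('//a/b/third_party/boringssl:ssl(//d/e/f:g)'))  # boringssl
print(parse_library_name('//a/b/base:base'))  # None
```

The character class `[\w\-+]+` stops at `:` and `/`, which is why the target and toolchain suffixes are ignored.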
6728026c1fd7120439b32f83a9931670f7b8e3c9 | 2,140 | py | Python | src/bibox/api.py | gjwang/trading_robot | 81713875d63c1596483bdbc61e7d0ccab140b331 | [
"MIT"
] | null | null | null | src/bibox/api.py | gjwang/trading_robot | 81713875d63c1596483bdbc61e7d0ccab140b331 | [
"MIT"
] | null | null | null | src/bibox/api.py | gjwang/trading_robot | 81713875d63c1596483bdbc61e7d0ccab140b331 | [
"MIT"
] | null | null | null | #-*- coding:utf-8 -*-
import hmac
import hashlib
import json, requests
# 1 - buy, 2 - sell
ORDER_SIDE_BUY = 1
ORDER_SIDE_SELL = 2
# order type, 2 - limit order
#ORDER_TYPE_MARKET = ?
ORDER_TYPE_LIMIT = 2
ACCOUNT_TYPE_COMMON = 0  # 0 - regular account
account_type = ACCOUNT_TYPE_COMMON
BIBOX_API = "https://api.bibox.com/v1/orderpending"
class BiboxClient(object):
def __init__(self, api_key, api_secret):
self.api_key = api_key
self.api_secret = api_secret
def getSign(self, data):
result = hmac.new(self.api_secret.encode("utf-8"), data.encode("utf-8"), hashlib.md5).hexdigest()
return result
def doApiRequestWithApikey(self, url, cmds):
s_cmds = json.dumps(cmds)
sign = self.getSign(s_cmds)
r = requests.post(url, data={'cmds': s_cmds, 'apikey': self.api_key,'sign':sign})
print(r.text)
return r.text
def post_order(self, cmds):
return self.doApiRequestWithApikey(BIBOX_API,cmds)
def order_buy(self, pair, order_type, price, amount):
return self.order_create(pair, order_type, ORDER_SIDE_BUY, price, amount)
def order_sell(self, pair, order_type, price, amount):
return self.order_create(pair, order_type, ORDER_SIDE_SELL, price, amount)
def order_create(self, pair, order_type, order_side, price, amount):
cmds = [
{
'cmd':"orderpending/trade",
'body':{
'pair':pair,
'account_type':ACCOUNT_TYPE_COMMON,
'order_type':order_type,
'order_side':order_side,
'price':price,
'amount':amount,
}
}
]
result = self.post_order(cmds)
order_id = json.loads(result)['result'][0]['result']
return order_id
def order_cancel(self, order_id):
cmds = [
{
'cmd':"orderpending/cancelTrade",
'body':{
'orders_id':order_id,
}
}
]
self.post_order(cmds)
| 28.157895 | 105 | 0.558879 |
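As `getSign` and `doApiRequestWithApikey` show, the signature sent to Bibox is an HMAC-MD5 hex digest of the JSON-serialized `cmds` string, keyed with the API secret. A standalone sketch of that signing step (dummy key and payload, not real credentials):

```python
import hashlib
import hmac
import json

def sign_cmds(api_secret: str, cmds: list) -> str:
    # serialize the command list exactly as it will be POSTed, then HMAC it
    s_cmds = json.dumps(cmds)
    return hmac.new(api_secret.encode('utf-8'),
                    s_cmds.encode('utf-8'),
                    hashlib.md5).hexdigest()

sig = sign_cmds('not-a-real-secret', [{'cmd': 'orderpending/trade', 'body': {}}])
print(sig)  # 32-char hex digest, stable for the same secret + payload
```

Because the signature is computed over the serialized string, the exact same `s_cmds` bytes must be sent in the request body, as the client above does.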
d0b9884c39f9557e94552d4b0e92b7493a81cbbd | 11,824 | py | Python | odoo/addons/base/tests/test_user_has_group.py | jjiege/odoo | fd5b8ad387c1881f349d125cbd56433f4d49398f | [
"MIT"
] | null | null | null | odoo/addons/base/tests/test_user_has_group.py | jjiege/odoo | fd5b8ad387c1881f349d125cbd56433f4d49398f | [
"MIT"
] | null | null | null | odoo/addons/base/tests/test_user_has_group.py | jjiege/odoo | fd5b8ad387c1881f349d125cbd56433f4d49398f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
from odoo.tests.common import TransactionCase
from odoo.exceptions import ValidationError
class TestHasGroup(TransactionCase):
def setUp(self):
super(TestHasGroup, self).setUp()
self.group0 = 'test_user_has_group.group0'
self.group1 = 'test_user_has_group.group1'
group0, group1 = self.env['res.groups']._load_records([
dict(xml_id=self.group0, values={'name': 'group0'}),
dict(xml_id=self.group1, values={'name': 'group1'}),
])
self.test_user = self.env['res.users'].create({
'login': 'testuser',
'partner_id': self.env['res.partner'].create({
'name': "Strawman Test User"
}).id,
'groups_id': [(4, group0.id, 0)]
})
self.grp_internal_xml_id = 'base.group_user'
self.grp_internal = self.env.ref(self.grp_internal_xml_id)
self.grp_portal_xml_id = 'base.group_portal'
self.grp_portal = self.env.ref(self.grp_portal_xml_id)
self.grp_public_xml_id = 'base.group_public'
self.grp_public = self.env.ref(self.grp_public_xml_id)
def test_env_uid(self):
Users = self.env['res.users'].sudo(self.test_user)
self.assertTrue(
Users.has_group(self.group0),
"the test user should belong to group0"
)
self.assertFalse(
Users.has_group(self.group1),
"the test user should *not* belong to group1"
)
def test_record(self):
self.assertTrue(
self.test_user.has_group(self.group0),
"the test user should belong to group0",
)
self.assertFalse(
self.test_user.has_group(self.group1),
"the test user shoudl not belong to group1"
)
def test_portal_creation(self):
"""Here we check that portal user creation fails if it tries to create a user
who would also have group_user by implied_group.
Otherwise, it succeeds with the groups we asked for.
"""
grp_public = self.env.ref('base.group_public')
grp_test_portal_xml_id = 'test_user_has_group.portal_implied_group'
grp_test_portal = self.env['res.groups']._load_records([
dict(xml_id=grp_test_portal_xml_id, values={'name': 'Test Group Portal'})
])
grp_test_internal1 = self.env['res.groups']._load_records([
            dict(xml_id='test_user_has_group.internal_implied_group1', values={'name': 'Test Group Internal 1'})
])
grp_test_internal2_xml_id = 'test_user_has_group.internal_implied_group2'
grp_test_internal2 = self.env['res.groups']._load_records([
dict(xml_id=grp_test_internal2_xml_id, values={'name': 'Test Group Internal 2'})
])
self.grp_portal.implied_ids = grp_test_portal
grp_test_internal1.implied_ids = False
grp_test_internal2.implied_ids = False
portal_user = self.env['res.users'].create({
'login': 'portalTest',
'name': 'Portal test',
'sel_groups_%s_%s_%s' % (self.grp_internal.id, self.grp_portal.id, grp_public.id): self.grp_portal.id,
'sel_groups_%s_%s' % (grp_test_internal1.id, grp_test_internal2.id): grp_test_internal2.id,
})
self.assertTrue(
portal_user.has_group(self.grp_portal_xml_id),
"The portal user should belong to '%s'" % self.grp_portal_xml_id,
)
self.assertTrue(
portal_user.has_group(grp_test_portal_xml_id),
"The portal user should belong to '%s'" % grp_test_portal_xml_id,
)
self.assertTrue(
portal_user.has_group(grp_test_internal2_xml_id),
"The portal user should belong to '%s'" % grp_test_internal2_xml_id
)
self.assertFalse(
portal_user.has_group(self.grp_internal_xml_id),
"The portal user should not belong to '%s'" % self.grp_internal_xml_id
)
portal_user.unlink() # otherwise, badly modifying the implication would raise
grp_test_internal1.implied_ids = self.grp_internal
grp_test_internal2.implied_ids = self.grp_internal
with self.assertRaises(ValidationError): # current group implications forbid to create a portal user
portal_user = self.env['res.users'].create({
'login': 'portalFail',
'name': 'Portal fail',
'sel_groups_%s_%s_%s' % (self.grp_internal.id, self.grp_portal.id, grp_public.id): self.grp_portal.id,
'sel_groups_%s_%s' % (grp_test_internal1.id, grp_test_internal2.id): grp_test_internal2.id,
})
def test_portal_write(self):
"""Check that adding a new group to a portal user works as expected,
        except if it implies group_user/public, in which case it should raise.
"""
grp_test_portal = self.env["res.groups"].create({"name": "implied by portal"})
self.grp_portal.implied_ids = grp_test_portal
portal_user = self.env['res.users'].create({
'login': 'portalTest2',
'name': 'Portal test 2',
'groups_id': [(6, 0, [self.grp_portal.id])],
})
self.assertEqual(
portal_user.groups_id, (self.grp_portal + grp_test_portal),
"The portal user should have the implied group.",
)
grp_fail = self.env["res.groups"].create(
{"name": "fail", "implied_ids": [(6, 0, [self.grp_internal.id])]})
with self.assertRaises(ValidationError):
portal_user.write({'groups_id': [(4, grp_fail.id)]})
def test_two_user_types(self):
        # Create a user with two user-type groups (Internal and Portal)
grp_test = self.env['res.groups']._load_records([
dict(xml_id='test_two_user_types.implied_groups', values={'name': 'Test Group'})
])
grp_test.implied_ids += self.grp_internal
grp_test.implied_ids += self.grp_portal
with self.assertRaises(ValidationError):
self.env['res.users'].create({
'login': 'test_two_user_types',
'name': "Test User with two user types",
'groups_id': [(6, 0, [grp_test.id])]
})
# Add a user with the Portal group to the Internal group
test_user = self.env['res.users'].create({
'login': 'test_user_portal',
'name': "Test User with two user types",
'groups_id': [(6, 0, [self.grp_portal.id])]
})
with self.assertRaises(ValidationError):
self.grp_internal.users = [(4, test_user.id)]
def test_two_user_types_implied_groups(self):
"""Contrary to test_two_user_types, we simply add an implied_id to a group.
This will trigger the addition of the relevant users to the relevant groups;
if, say, this were done in raw SQL (thus bypassing the ORM), it would bypass the constraints
and give us a case not covered by the aforementioned test.
"""
grp_test = self.env["res.groups"].create(
{"name": "test", "implied_ids": [(6, 0, [self.grp_internal.id])]})
test_user = self.env['res.users'].create({
'login': 'test_user_portal',
'name': "Test User with one user type",
'groups_id': [(6, 0, [grp_test.id])]
})
with self.assertRaises(ValidationError):
grp_test.write({'implied_ids': [(4, self.grp_portal.id)]})
def test_demote_user(self):
"""When a user is demoted to the status of portal/public,
we should strip them of all their (previous) rights
"""
group_0 = self.env.ref(self.group0) # the group to which test_user already belongs
group_U = self.env["res.groups"].create({"name": "U", "implied_ids": [(6, 0, [self.grp_internal.id])]})
self.grp_internal.implied_ids = False # only there to simplify the test by not having to care about its trans_implied_ids
self.test_user.write({'groups_id': [(4, group_U.id)]})
self.assertEqual(
self.test_user.groups_id, (group_0 + group_U + self.grp_internal),
"We should have our 2 groups and the implied user group",
)
# Now we demote him. The JS framework sends 3 and 4 commands,
# which is what we write here, but it should work even with a 5 command or whatever.
self.test_user.write({'groups_id': [
(3, self.grp_internal.id),
(3, self.grp_public.id),
(4, self.grp_portal.id),
]})
# if we screw up the removing groups/adding the implied ids, we could end up in two situations:
# 1. we have a portal user with far too many rights (e.g. 'Contact Creation', which does not imply any other group)
# 2. because a group may be (transitively) implying group_user, then it would raise an exception
# so as a compromise we remove all groups when demoting a user
# (even technical display groups, e.g. TaxB2B, which could be re-added later)
self.assertEqual(
self.test_user.groups_id, (self.grp_portal),
"Here the portal group does not imply any other group, so we should only have this group.",
)
def test_implied_groups(self):
""" We check that the adding of implied ids works correctly for normal users and portal users.
In the second case, working normally means raising if a group implies giving 'group_user'
rights to a portal user.
"""
U = self.env["res.users"]
G = self.env["res.groups"]
group_user = self.env.ref('base.group_user')
group_portal = self.env.ref('base.group_portal')
group_no_one = self.env.ref('base.group_no_one')
group_A = G.create({"name": "A"})
group_AA = G.create({"name": "AA", "implied_ids": [(6, 0, [group_A.id])]})
group_B = G.create({"name": "B"})
group_BB = G.create({"name": "BB", "implied_ids": [(6, 0, [group_B.id])]})
# user_a is a normal user, so we expect groups to be added when we add them,
# as well as 'implied_groups'; otherwise nothing else should happen.
# By contrast, for a portal user we want implied groups to be added
# only if they do not grant group_user (or group_public) privileges
user_a = U.create({"name": "a", "login": "a", "groups_id": [(6, 0, [group_AA.id, group_user.id])]})
self.assertEqual(user_a.groups_id, (group_AA + group_A + group_user + group_no_one))
user_b = U.create({"name": "b", "login": "b", "groups_id": [(6, 0, [group_portal.id, group_AA.id])]})
self.assertEqual(user_b.groups_id, (group_AA + group_A + group_portal))
# user_b is not an internal user, but adding a new group still works and pulls in its implied groups
(user_a + user_b).write({"groups_id": [(4, group_BB.id)]})
self.assertEqual(user_a.groups_id, (group_AA + group_A + group_BB + group_B + group_user + group_no_one))
self.assertEqual(user_b.groups_id, (group_AA + group_A + group_BB + group_B + group_portal))
# now we create a group that implies the group_user
# adding it to a user should work normally, whereas adding it to a portal user should raise
group_C = G.create({"name": "C", "implied_ids": [(6, 0, [group_user.id])]})
user_a.write({"groups_id": [(4, group_C.id)]})
self.assertEqual(user_a.groups_id, (group_AA + group_A + group_BB + group_B + group_C + group_user + group_no_one))
with self.assertRaises(ValidationError):
user_b.write({"groups_id": [(4, group_C.id)]})
# === tests/unit_tests/common/test_imcversion.py (ecoen66/imcsdk, Apache-2.0) ===
# Copyright 2015 Cisco Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from nose.tools import assert_equal
from imcsdk.imcmeta import VersionMeta
from imcsdk.imccoremeta import ImcVersion
def test_nightly_version1():
version1 = ImcVersion("2.0(13aS6)")
version2 = ImcVersion("3.0(1S10)")
assert_equal((version1 < version2), True)
def test_nightly_version2():
version1 = ImcVersion("2.0(13aS6)")
version2 = ImcVersion("2.0(1S10)")
assert_equal((version1 > version2), True)
def test_nightly_version3():
# 2.0(2cS6) will be considered as 2.0(2d) internally
version1 = ImcVersion("2.0(2cS6)")
version2 = ImcVersion("2.0(2c)")
assert_equal((version1 == version2), False)
def test_nightly_version4():
version1 = ImcVersion("2.0(2cS6)")
version2 = ImcVersion("2.0(3)")
assert_equal((version1 < version2), True)
def test_spin_version1():
# version interpreted as 4.0(2b)
version1 = ImcVersion("4.0(2aS3)")
version2 = ImcVersion("4.0(2b)")
assert_equal((version1 == version2), True)
def test_spin_version2():
# version interpreted as 4.0(234c)
version1 = ImcVersion("4.0(234bS3)")
version2 = ImcVersion("4.0(234c)")
assert_equal((version1 == version2), True)
def test_spin_version3():
# version interpreted as 4.0(2z)
version1 = ImcVersion("4.0(2S3)")
version2 = ImcVersion("4.0(2z)")
assert_equal((version1 == version2), True)
def test_spin_version4():
# version interpreted as 4.0(234z)
version1 = ImcVersion("4.0(234S3)")
version2 = ImcVersion("4.0(234z)")
assert_equal((version1 == version2), True)
def test_patch_version1():
# version interpreted as 4.0(235a)
version1 = ImcVersion("4.0(234.5)")
version2 = ImcVersion("4.0(235a)")
assert_equal((version1 == version2), True)
def test_gt_same_major_version():
version1 = VersionMeta.Version151f
version2 = VersionMeta.Version151x
assert_equal((version1 < version2), True)
def test_gt_different_major_version():
version1 = VersionMeta.Version151x
version2 = VersionMeta.Version202c
assert_equal((version1 < version2), True)
def test_patch_versions():
version1 = ImcVersion("2.0(12b)")
version2 = ImcVersion("2.0(12)")
assert_equal((version1 > version2), True)
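The spin-build equivalences asserted above can be captured in a small standalone sketch. This is an illustrative parser written from the test expectations only, not the actual `ImcVersion` implementation:

```python
import re

def interpret_spin_version(v):
    # Rule documented by the tests: a spin build "X.Y(NcSk)" is treated as
    # the next letter release "X.Y(Nd)", and a letterless spin "X.Y(NSk)"
    # maps to the last letter, "X.Y(Nz)".
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)([a-z]?)S\d+\)", v)
    if not m:
        return v  # not a spin build; leave the string unchanged
    major, minor, num, letter = m.groups()
    bumped = chr(ord(letter) + 1) if letter else "z"
    return "%s.%s(%s%s)" % (major, minor, num, bumped)
```

Running it against the strings used in the tests above reproduces the expected equivalences.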
# === Competitive_Coding/First_Pattern_Match.py (Arko98/Alogirthms, MIT) ===
# Problem Statement: implement strStr() -- return the index of the first
# occurrence of needle in haystack, or -1 if it does not occur.
# Solution
class Solution:
def strStr(self, haystack: str, needle: str) -> int:
if needle == "":
return 0
if haystack == "":
return -1
found = 0
found_idx = 0
needle_length = len(needle)
for i in range(len(haystack) - len(needle) + 1):
if needle[0] != haystack[i]:
continue
else:
needle_idx = 1
i += 1
while (needle_idx < needle_length and needle[needle_idx] == haystack[i]):
i += 1
needle_idx += 1
if needle_idx == needle_length:
found_idx = i - needle_length
found = 1
break
if found == 1:
return found_idx
elif found == 0:
return -1
# Inbuilt function
class Solution:
def strStr(self, haystack: str, needle: str) -> int:
if(len(needle)==0):
return 0
elif (len(haystack)==0):
return -1
else:
return haystack.find(needle, 0, len(haystack)) | 27.130435 | 90 | 0.426282 |
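As a sanity check, the same contract can be expressed with the standard-library call the inbuilt variant relies on (the helper name here is illustrative):

```python
def str_str(haystack, needle):
    # Same contract as Solution.strStr above: an empty needle matches at 0,
    # and an empty haystack (with a non-empty needle) yields -1.
    if needle == "":
        return 0
    if haystack == "":
        return -1
    return haystack.find(needle)
```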
b595a9cd1f4bd2011ccb223bc17c5f0f597d9fb1 | 356 | py | Python | src/cogent3/recalculation/__init__.py | xingjianleng/cogent3 | a85d08a948f6903e4e04eea8292f588cc0b4907e | [
"BSD-3-Clause"
] | null | null | null | src/cogent3/recalculation/__init__.py | xingjianleng/cogent3 | a85d08a948f6903e4e04eea8292f588cc0b4907e | [
"BSD-3-Clause"
] | null | null | null | src/cogent3/recalculation/__init__.py | xingjianleng/cogent3 | a85d08a948f6903e4e04eea8292f588cc0b4907e | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/envthon
__all__ = ["calculation", "definition", "scope", "setting"]
__author__ = "Peter Maxwell"
__copyright__ = "Copyright 2007-2022, The Cogent Project"
__credits__ = ["Peter Maxwell", "Gavin Huttley"]
__license__ = "BSD-3"
__version__ = "2022.4.20a1"
__maintainer__ = "Peter Maxwell"
__email__ = "pm67nz@gmail.com"
__status__ = "Production"
# === python/ray/serve/replica.py (ray, Apache-2.0) ===
import asyncio
import logging
import pickle
import inspect
from typing import Any, Callable, Optional, Tuple, Dict
import time
import aiorwlock
import starlette.responses
import ray
from ray import cloudpickle
from ray.actor import ActorHandle
from ray._private.async_compat import sync_to_async
from ray.serve.autoscaling_metrics import start_metrics_pusher
from ray.serve.common import ReplicaTag
from ray.serve.config import DeploymentConfig
from ray.serve.http_util import ASGIHTTPSender
from ray.serve.utils import parse_request_item, _get_logger
from ray.serve.exceptions import RayServeException
from ray.util import metrics
from ray.serve.router import Query, RequestMetadata
from ray.serve.constants import (
RECONFIGURE_METHOD,
DEFAULT_LATENCY_BUCKET_MS,
)
from ray.serve.version import DeploymentVersion
from ray.serve.utils import wrap_to_ray_error
logger = _get_logger()
def create_replica_wrapper(name: str, serialized_deployment_def: bytes):
"""Creates a replica class wrapping the provided function or class.
This approach is picked over inheritance to avoid conflict between user
provided class and the RayServeReplica class.
"""
serialized_deployment_def = serialized_deployment_def
# TODO(architkulkarni): Add type hints after upgrading cloudpickle
class RayServeWrappedReplica(object):
async def __init__(self, deployment_name, replica_tag, init_args,
init_kwargs, deployment_config_proto_bytes: bytes,
version: DeploymentVersion, controller_name: str,
detached: bool):
deployment_def = cloudpickle.loads(serialized_deployment_def)
deployment_config = DeploymentConfig.from_proto_bytes(
deployment_config_proto_bytes)
if inspect.isfunction(deployment_def):
is_function = True
elif inspect.isclass(deployment_def):
is_function = False
else:
assert False, ("deployment_def must be function, class, or "
"corresponding import path.")
# Set the controller name so that serve.connect() in the user's
# code will connect to the instance that this deployment is running
# in.
ray.serve.api._set_internal_replica_context(
deployment_name,
replica_tag,
controller_name,
servable_object=None)
assert controller_name, "Must provide a valid controller_name"
controller_namespace = ray.serve.api._get_controller_namespace(
detached)
controller_handle = ray.get_actor(
controller_name, namespace=controller_namespace)
# This closure initializes user code and finalizes replica
# startup. By splitting the initialization step like this,
# we can already access this actor before the user code
# has finished initializing.
# The supervising state manager can then wait
# for allocation of this replica by using the `is_allocated`
# method. After that, it calls `reconfigure` to trigger
# user code initialization.
async def initialize_replica():
if is_function:
_callable = deployment_def
else:
# This allows deployments to define an async __init__
# method (required for FastAPI).
_callable = deployment_def.__new__(deployment_def)
await sync_to_async(_callable.__init__)(*init_args,
**init_kwargs)
# Setting the context again to update the servable_object.
ray.serve.api._set_internal_replica_context(
deployment_name,
replica_tag,
controller_name,
servable_object=_callable)
self.replica = RayServeReplica(
_callable, deployment_name, replica_tag, deployment_config,
deployment_config.user_config, version, is_function,
controller_handle)
# Is it fine that replica is None here?
# Should we add a check in all methods that use self.replica
# or, alternatively, create an async get_replica() method?
self.replica = None
self._initialize_replica = initialize_replica
# asyncio.Event used to signal that the replica is shutting down.
self.shutdown_event = asyncio.Event()
@ray.method(num_returns=2)
async def handle_request(
self,
pickled_request_metadata: bytes,
*request_args,
**request_kwargs,
):
# The request metadata should be pickled for performance.
request_metadata: RequestMetadata = pickle.loads(
pickled_request_metadata)
# Directly receive input because it might contain an ObjectRef.
query = Query(request_args, request_kwargs, request_metadata)
return await self.replica.handle_request(query)
async def is_allocated(self):
"""poke the replica to check whether it's alive.
When calling this method on an ActorHandle, it will complete as
soon as the actor has started running. We use this mechanism to
detect when a replica has been allocated a worker slot.
At this time, the replica can transition from PENDING_ALLOCATION
to PENDING_INITIALIZATION startup state.
"""
pass
async def reconfigure(self, user_config: Optional[Any] = None
) -> Tuple[DeploymentConfig, DeploymentVersion]:
if self.replica is None:
await self._initialize_replica()
if user_config is not None:
await self.replica.reconfigure(user_config)
return self.get_metadata()
def get_metadata(self) -> Tuple[DeploymentConfig, DeploymentVersion]:
return self.replica.deployment_config, self.replica.version
async def prepare_for_shutdown(self):
self.shutdown_event.set()
if self.replica is not None:
return await self.replica.prepare_for_shutdown()
async def run_forever(self):
await self.shutdown_event.wait()
RayServeWrappedReplica.__name__ = name
return RayServeWrappedReplica
class RayServeReplica:
"""Handles requests with the provided callable."""
def __init__(self, _callable: Callable, deployment_name: str,
replica_tag: ReplicaTag, deployment_config: DeploymentConfig,
user_config: Any, version: DeploymentVersion,
is_function: bool, controller_handle: ActorHandle) -> None:
self.deployment_config = deployment_config
self.deployment_name = deployment_name
self.replica_tag = replica_tag
self.callable = _callable
self.is_function = is_function
self.user_config = user_config
self.version = version
self.rwlock = aiorwlock.RWLock()
self.num_ongoing_requests = 0
self.request_counter = metrics.Counter(
"serve_deployment_request_counter",
description=("The number of queries that have been "
"processed in this replica."),
tag_keys=("deployment", "replica"))
self.request_counter.set_default_tags({
"deployment": self.deployment_name,
"replica": self.replica_tag
})
self.error_counter = metrics.Counter(
"serve_deployment_error_counter",
description=("The number of exceptions that have "
"occurred in this replica."),
tag_keys=("deployment", "replica"))
self.error_counter.set_default_tags({
"deployment": self.deployment_name,
"replica": self.replica_tag
})
self.restart_counter = metrics.Counter(
"serve_deployment_replica_starts",
description=("The number of times this replica "
"has been restarted due to failure."),
tag_keys=("deployment", "replica"))
self.restart_counter.set_default_tags({
"deployment": self.deployment_name,
"replica": self.replica_tag
})
self.processing_latency_tracker = metrics.Histogram(
"serve_deployment_processing_latency_ms",
description="The latency for queries to be processed.",
boundaries=DEFAULT_LATENCY_BUCKET_MS,
tag_keys=("deployment", "replica"))
self.processing_latency_tracker.set_default_tags({
"deployment": self.deployment_name,
"replica": self.replica_tag
})
self.num_processing_items = metrics.Gauge(
"serve_replica_processing_queries",
description="The current number of queries being processed.",
tag_keys=("deployment", "replica"))
self.num_processing_items.set_default_tags({
"deployment": self.deployment_name,
"replica": self.replica_tag
})
self.restart_counter.inc()
self._shutdown_wait_loop_s = (
deployment_config.graceful_shutdown_wait_loop_s)
if deployment_config.autoscaling_config:
config = deployment_config.autoscaling_config
start_metrics_pusher(
interval_s=config.metrics_interval_s,
collection_callback=self._collect_autoscaling_metrics,
controller_handle=controller_handle)
ray_logger = logging.getLogger("ray")
for handler in ray_logger.handlers:
handler.setFormatter(
logging.Formatter(
handler.formatter._fmt +
f" component=serve deployment={self.deployment_name} "
f"replica={self.replica_tag}"))
def _get_handle_request_stats(self) -> Optional[Dict[str, int]]:
actor_stats = (
ray.runtime_context.get_runtime_context()._get_actor_call_stats())
method_stat = actor_stats.get("RayServeWrappedReplica.handle_request")
return method_stat
def _collect_autoscaling_metrics(self):
method_stat = self._get_handle_request_stats()
num_inflight_requests = 0
if method_stat is not None:
num_inflight_requests = (
method_stat["pending"] + method_stat["running"])
return {self.replica_tag: num_inflight_requests}
def get_runner_method(self, request_item: Query) -> Callable:
method_name = request_item.metadata.call_method
if not hasattr(self.callable, method_name):
# Filter to methods that don't start with '__' prefix.
def callable_method_filter(attr):
if attr.startswith("__"):
return False
elif not callable(getattr(self.callable, attr)):
return False
return True
methods = list(filter(callable_method_filter, dir(self.callable)))
raise RayServeException(f"Tried to call a method '{method_name}' "
"that does not exist. Available methods: "
f"{methods}.")
if self.is_function:
return self.callable
return getattr(self.callable, method_name)
async def ensure_serializable_response(self, response: Any) -> Any:
if isinstance(response, starlette.responses.StreamingResponse):
async def mock_receive():
# This is called in a tight loop in response() just to check
# for an http disconnect. So rather than return immediately
# we should suspend execution to avoid wasting CPU cycles.
never_set_event = asyncio.Event()
await never_set_event.wait()
sender = ASGIHTTPSender()
await response(scope=None, receive=mock_receive, send=sender)
return sender.build_starlette_response()
return response
async def invoke_single(self, request_item: Query) -> Any:
logger.debug("Replica {} started executing request {}".format(
self.replica_tag, request_item.metadata.request_id))
args, kwargs = parse_request_item(request_item)
start = time.time()
method_to_call = None
try:
runner_method = self.get_runner_method(request_item)
method_to_call = sync_to_async(runner_method)
result = None
if len(inspect.signature(runner_method).parameters) > 0:
result = await method_to_call(*args, **kwargs)
else:
# The method doesn't take in anything, including the request
# information, so we pass nothing into it
result = await method_to_call()
result = await self.ensure_serializable_response(result)
self.request_counter.inc()
except Exception as e:
import os
if "RAY_PDB" in os.environ:
ray.util.pdb.post_mortem()
function_name = "unknown"
if method_to_call is not None:
function_name = method_to_call.__name__
result = wrap_to_ray_error(function_name, e)
self.error_counter.inc()
latency_ms = (time.time() - start) * 1000
self.processing_latency_tracker.observe(latency_ms)
return result
async def reconfigure(self, user_config: Any):
async with self.rwlock.writer_lock:
self.user_config = user_config
self.version = DeploymentVersion(
self.version.code_version, user_config=user_config)
if self.is_function:
raise ValueError(
"deployment_def must be a class to use user_config")
elif not hasattr(self.callable, RECONFIGURE_METHOD):
raise RayServeException("user_config specified but deployment "
+ self.deployment_name + " missing " +
RECONFIGURE_METHOD + " method")
reconfigure_method = sync_to_async(
getattr(self.callable, RECONFIGURE_METHOD))
await reconfigure_method(user_config)
async def handle_request(self, request: Query) -> asyncio.Future:
async with self.rwlock.reader_lock:
request.tick_enter_replica = time.time()
logger.debug("Replica {} received request {}".format(
self.replica_tag, request.metadata.request_id))
num_running_requests = self._get_handle_request_stats()["running"]
self.num_processing_items.set(num_running_requests)
result = await self.invoke_single(request)
request_time_ms = (time.time() - request.tick_enter_replica) * 1000
logger.debug("Replica {} finished request {} in {:.2f}ms".format(
self.replica_tag, request.metadata.request_id,
request_time_ms))
# Returns a small object for router to track request status.
return b"", result
async def prepare_for_shutdown(self):
"""Perform graceful shutdown.
Trigger a graceful shutdown protocol that will wait for all the queued
tasks to be completed and return to the controller.
"""
while True:
# Sleep first because we want to make sure all the routers receive
# the notification to remove this replica first.
await asyncio.sleep(self._shutdown_wait_loop_s)
method_stat = self._get_handle_request_stats()
# The handle_request method wasn't even invoked.
if method_stat is None:
break
# The handle_request method has 0 inflight requests.
if method_stat["running"] + method_stat["pending"] == 0:
break
else:
logger.info(
"Waiting for an additional "
f"{self._shutdown_wait_loop_s}s to shut down because "
f"there are {self.num_ongoing_requests} ongoing requests.")
# Explicitly call the del method to trigger clean up.
# We set the del method to noop after successfully calling it so the
# destructor is called only once.
try:
if hasattr(self.callable, "__del__"):
# Make sure to accept `async def __del__(self)` as well.
await sync_to_async(self.callable.__del__)()
except Exception as e:
logger.exception(
f"Exception during graceful shutdown of replica: {e}")
finally:
if hasattr(self.callable, "__del__"):
del self.callable.__del__
# === First_bad_version.py.py (raniyer/Learning-competitive-coding, MIT) ===
"""
You are a product manager and currently leading a team to develop a new product. Unfortunately, the latest version of your product fails the quality check. Since each version is developed based on the previous version, all the versions after a bad version are also bad.
Suppose you have n versions [1, 2, ..., n] and you want to find out the first bad one, which causes all the following ones to be bad.
Example:
Given n = 5, and version = 4 is the first bad version.
call isBadVersion(3) -> false
call isBadVersion(5) -> true
call isBadVersion(4) -> true
Then 4 is the first bad version.
You are given an API bool isBadVersion(version) which will return whether version is bad. Implement a function to find the first bad version. You should minimize the number of calls to the API.
"""
# The isBadVersion API is already defined for you.
# @param version, an integer
# @return a bool
# def isBadVersion(version):
#I have used Binary search algorithm to solve the given problem
class Solution:
def firstBadVersion(self, n):
l=0
while(l <= n):
mid = l + (n - l) // 2
if isBadVersion(mid):
if isBadVersion(mid-1):
n = mid
else:
return mid
else:
l = mid+1
# A more optimized version of the same idea: plain binary search,
# one isBadVersion call per iteration. (Renamed so it does not shadow
# the Solution class above.)
class SolutionOptimized:
    def firstBadVersion(self, n):
        """
        :type n: int
        :rtype: int
        """
        i, j = 1, n
        while i <= j:
            m = (i + j) // 2
            if isBadVersion(m):
                j = m - 1
            else:
                i = m + 1
        # the loop invariant leaves i at the first bad version
        return i
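Both variants can be exercised offline with a stubbed API; the stub and harness names below are illustrative, since the real isBadVersion is provided by the judge:

```python
def make_is_bad(first_bad):
    # Stub for the judge-provided isBadVersion(version) API.
    return lambda version: version >= first_bad

def first_bad_version(n, is_bad):
    # Same binary search as the solutions above, with is_bad injected.
    i, j = 1, n
    while i <= j:
        m = (i + j) // 2
        if is_bad(m):
            j = m - 1
        else:
            i = m + 1
    return i
```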