hexsha stringlengths 40 40 | size int64 3 1.03M | ext stringclasses 10 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 972 | max_stars_repo_name stringlengths 6 130 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 10 | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 972 | max_issues_repo_name stringlengths 6 130 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 10 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 972 | max_forks_repo_name stringlengths 6 130 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 10 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 3 1.03M | avg_line_length float64 1.13 941k | max_line_length int64 2 941k | alphanum_fraction float64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8e12c2ea0bb3dc10c29a17c14b0b682a2529c3cb | 80 | py | Python | credentials/raspador_slackbot_credentials.py | xyla-io/raspador | 4e77234239d44a83faf5c1d3a6d022a9e3861f25 | [
"MIT"
] | null | null | null | credentials/raspador_slackbot_credentials.py | xyla-io/raspador | 4e77234239d44a83faf5c1d3a6d022a9e3861f25 | [
"MIT"
] | null | null | null | credentials/raspador_slackbot_credentials.py | xyla-io/raspador | 4e77234239d44a83faf5c1d3a6d022a9e3861f25 | [
"MIT"
] | null | null | null | raspador_slackbot_credentials = {
'BOT_NAME': {
'api_token': 'TOKEN'
}
} | 16 | 33 | 0.6375 |
d6c75602aa92c02dae36d62a300f509b23471bc8 | 12,219 | py | Python | Python/klampt/control/controller.py | smeng9/Klampt | 7ff91bead90ac04280eff310623338fd10aaba79 | [
"BSD-3-Clause"
] | 238 | 2015-01-09T15:21:27.000Z | 2022-03-30T22:48:45.000Z | Python/klampt/control/controller.py | smeng9/Klampt | 7ff91bead90ac04280eff310623338fd10aaba79 | [
"BSD-3-Clause"
] | 89 | 2015-08-26T16:56:42.000Z | 2022-03-29T23:45:46.000Z | Python/klampt/control/controller.py | smeng9/Klampt | 7ff91bead90ac04280eff310623338fd10aaba79 | [
"BSD-3-Clause"
] | 84 | 2015-01-10T18:41:52.000Z | 2022-03-30T03:32:50.000Z | """Defines a set of generic "controller blocks" that are repeatedly-updating
processes, which typically output commands to a simulated or real robot.
However, blocks can do much more, including acting as estimators, reading
from sensor drivers, etc.
The :class:`ControllerBlock` class defines a block as accepting some
inputs and produces some outputs every time that ``advance()`` is
called. The inputs and outputs are extremely general, and are just string-
indexed dictionaries. The usage of a ControllerBlock is governed by
convention.
Robot controller blocks
=======================
If the block is being used for :class:`SimpleSimulator` or the Klampt Robot
Interface Layer (see :class:`RobotInterfaceBase`), there's a standard convention
defined by :class:`RobotControllerBase`. It expects to receive the following
values as input:
- t: current time, in seconds
- dt: controller time step, in seconds
- q: sensed robot configuration
- dq: sensed robot velocity
- [sensor1_name]: measurement vector for sensor 1
- ...
- [sensork_name]: measurement vector for sensor k
The output dictionary is expected to contain one or more of the following keys:
- qcmd
- dqcmd
- tcmd
- torquecmd
These are interpreted as follows:
- If qcmd is set, then it's a PID command. If dqcmd is also set, then it
describes the desired velocity. If dqcmd is not set, the desired velocity is
0. If torquecmd is also set, then torquecmd is a feedforward torque.
- If dqcmd and tcmd are set, they describe a fixed velocity command for
duration tcmd.
- If torquecmd is set, this describes a torque command.
No other combinations are currently supported.
For convenience, your RobotControllerBase subclass may use the
:class:`RobotControllerIO` class for object-oriented access to the input
/ output data. Example usage is as follows::
api = RobotControllerIO(inputs)
print("Current time is",api.time())
#set a position command
api.setJointPositionCommand(5,0.5)
return api.makeCommand()
Binding robot controllers to a robot
====================================
The easiest way to use this with a simulated / real robot is the
:class:`.interop.ControllerToInterface` class.
Example that outputs to a simulation (and visualizes it)::
from klampt.control.controller import RobotControllerBase
from klampt.control.simrobotinterface import *
from klampt.control.interop import RobotControllerToInterface
from klampt import WorldModel, Simulator
from klampt import vis
world = WorldModel()
...
#create world, MyControllerObject class here
...
sim = Simulator(world)
vis.add("world",world)
vis.show()
controller = MyControllerObject() #subclass of RobotControllerBase
interface = SimPositionControlInterface(sim.controller(0),sim)
binding = RobotControllerToInterface(controller,interface)
while vis.shown():
binding.advance()
sim.updateWorld()
vis.update()
Example that outputs to a real robot::
from klampt.control.controller import RobotControllerBase
from klampt.control.interop import RobotControllerToInterface
#create MyControllerObject, MyRobotInterface class here
controller = MyControllerObject() #subclass of RobotControllerBase
interface = MyRobotInterface(...)
binding = RobotControllerToInterface(controller,interface)
while not done:
binding.advance()
For tighter control over a simulation, such as sub-stepping, you should use the
:mod:`klampt.sim.simulation` module. Robot controllers in ControllerBlock form
are accepted by the :meth:`SimpleSimulator.setController` method.
Blocks submodule
================
The :mod:`klampt.control.blocks` module gives several examples of controller
blocks that can be composed. See :mod:`klampt.control.blocks.utils` for
utilities, and :mod:`klampt.control.blocks.state_machine` for state machines
that use ControllerBlocks.
"""
class ControllerBlock(object):
"""A generic base class that outputs a named dictionary outputs of based
on a dictionary of inputs. This is typically used to define robot
controllers and components of such controllers, such as state estimators.
At a minimum, a controller should implement the :meth:`advance` method.
A stateful controller should also implement the :meth:`getState` and
:meth:`setState` methods.
"""
def __init__(self):
pass
def __str__(self):
"""Optional: return a descriptive string describing this controller"""
return self.__class__.__name__
def inputNames(self):
"""Optional: to help debug, this is the set of input arguments that are
expected. Returns a list, tuple, or set."""
raise NotImplementedError()
def inputValid(self,**inputs):
"""Optional: to help debug, tests whether the input argument dictionary
is valid. Returns bool."""
try:
return all(i in inputs for i in self.inputNames())
except NotImplementedError:
return True
def outputNames(self):
"""Optional: to help debug, this is the set of output arguments that are
expected. Returns a list, tuple, set, or None to indicate no specific
output."""
return None
def advance(self,**inputs):
"""Computes the output of this controller given a dictionary of
inputs, and advances the time step. The output is a dictionary.
"""
return None
def signal(self,type,**inputs):
"""Sends some asynchronous signal to the controller. The inputs
are application-defined, but typical signals include:
- 'reset': return state to the initial state
- 'enter': for state machine: repeated advance() calls will now begin.
- 'exit': for state machine: advance() calls will now stop.
"""
pass
def getState(self):
"""Optional: for stateful controllers, returns some serializable
representation of state"""
raise NotImplementedError()
def setState(self,state):
"""Optional: for stateful controllers, restores the internal state
from an output of getState()"""
raise NotImplementedError()
def drawGL(self):
"""Optional: hook to give feedback to the visualizer"""
pass
class RobotControllerBase(ControllerBlock):
"""A base class for a robot controller. This doesn't do anything but
implement the inputNames method; the subclass still has to implement
advance, signal, getState/setState, etc.
"""
def __init__(self,robotModel=None):
self.robotModel = robotModel
def inputNames(self):
inputs = ['t','dt','q','dq']
if self.robotModel is not None:
i = 0
while True:
s = self.robotModel.sensor(i)
i += 1
if s.name()=='':
break
if s.name() in ['q','dq']:
continue
inputs.append(s.name())
return inputs
class RobotControllerIO:
"""A helper class that makes it a bit easier to implement a
`RobotControllerBase` by providing an object-oriented interface to the
dictionary-based protocol.
Usage::
class MyController(ControllerBlock):
def advance(self, **inputs):
api = RobotControllerIO(inputs)
print("Current time is",api.time())
#set a position command
api.setJointPositionCommand(5,0.5)
return api.makeCommand()
All methods, except for those prefixed by `make` or `set`, are to get
values from the input dictionary.
The `makeXCommand` methods return a properly formatted output
dictionary. Alternatively, you can make several `setXCommand` calls
and then call `makeCommand()` to retrieve the output dictionary.
"""
def __init__(self,inputs):
self.inputs = inputs.copy()
self.retval = dict()
def time(self):
"""Returns the robot clock time"""
return self.inputs['t']
def timeStep(self):
"""Returns the robot time step"""
return self.inputs['dt']
def commandedConfiguration(self):
"""Returns the commanded joint configuration or None if it is not
sensed."""
try: return self.inputs['qcmd']
except KeyError: return None
def commandedVelocity(self):
"""Returns the commanded joint velocities or None if it is not
sensed."""
try: return self.inputs['dqcmd']
except KeyError: return None
def sensedConfiguration(self):
"""Returns the sensed joint configuration or None if it is not
sensed."""
try: return self.inputs['q']
except KeyError: return None
def sensedVelocity(self):
"""Returns the sensed joint velocity or None if it is not
sensed."""
try: return self.inputs['dq']
except KeyError: return None
def sensedTorque(self):
"""Returns the sensed torque or None if it is not
sensed."""
try: return self.inputs['torque']
except KeyError: return None
def sensorNames(self):
"""Returns the list of sensor names (including clock and time step)"""
return self.inputs.keys()
def sensorValue(self,sensor):
"""Returns the value of the named sensor."""
try: return self.inputs[sensor]
except KeyError: return None
def makePositionCommand(self,q):
return {'qcmd':q}
def makePIDCommand(self,q,dq):
return {'qcmd':q,'dqcmd':dq}
def makeFeedforwardPIDCommand(self,q,dq,torque):
return {'qcmd':q,'dqcmd':dq,'torquecmd':torque}
def makeVelocityCommand(self,dq,t):
return {'dqcmd':dq,'tcmd':t}
def makeTorqueCommand(self,torque):
return {'torquecmd':torque}
def setPositionCommand(self,value):
self.retval['qcmd'] = value
def setPIDCommand(self,q,dq):
self.retval['qcmd'] = q
self.retval['dqcmd'] = dq
def setFeedforwardPIDCommand(self,q,dq,torque):
self.retval['qcmd'] = q
self.retval['dqcmd'] = dq
self.retval['torquecmd'] = torque
def setVelocityCommand(self,v,dt):
self.retval['dqcmd'] = v
self.retval['tcmd'] = dt
def setTorqueCommand(self,torque):
self.retval['torquecmd'] = torque
def setJointPositionCommand(self,index,value):
"""Sets a single indexed joint to a position command"""
#klampt can only do uniform position, velocity, or torque commands
if 'qcmd' in self.retval:
self.retval['qcmd'][index] = value
elif 'dqcmd' in self.retval:
self.retval['dqcmd'][index] = (value - self.inputs['qcmd'][index]) / self.inputs['dt']
self.retval['tcmd']=self.inputs['dt']
elif 'torquecmd' in self.retval:
print("Cannot combine joint position commands with joint torque commands")
else:
#no joint commands set yet, set a position command
self.retval['qcmd'] = self.inputs['qcmd']
self.retval['qcmd'][index] = value
def setJointVelocityCommand(self,index,value):
"""Sets a single indexed joint to a velocity command"""
#klampt can only do uniform position, velocity, or torque commands
if 'qcmd' in self.retval:
self.retval['qcmd'][index] = self.inputs['qcmd'][index]+value*self.inputs['dt']
if 'dqcmd' not in self.retval:
self.retval['dqcmd'] = self.inputs['dqcmd']
self.retval['dqcmd'][index] = value
elif 'dqcmd' in self.retval:
self.retval['dqcmd'][index] = value
self.retval['tcmd']=self.inputs['dt']
elif 'torquecmd' in self.retval:
print("Cannot combine joint velocity commands with joint torque commands")
else:
#no velocity commands set yet, set a velocity command
self.retval['dqcmd'] = self.inputs['dqcmd']
self.retval['dqcmd'][index] = value
def makeCommand(self):
"""Returns the command from previous setXCommand() calls."""
return self.retval
| 36.693694 | 98 | 0.660447 |
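The dictionary-based protocol that `RobotControllerIO` wraps can be exercised without a Klampt installation. The sketch below is a hypothetical stand-in (`MiniControllerIO` is not part of Klampt); only the key names (`t`, `dt`, `q`, `dq`, `qcmd`, `dqcmd`) follow the convention documented in the module docstring above:

```python
# Minimal standalone sketch of the RobotControllerIO dictionary protocol.
# Input keys ('t', 'dt', 'q', 'dq') and output keys ('qcmd', 'dqcmd')
# follow the convention above; the class itself is illustrative only.
class MiniControllerIO:
    def __init__(self, inputs):
        self.inputs = dict(inputs)  # copy, as RobotControllerIO does
        self.retval = {}

    def time(self):
        return self.inputs['t']

    def setPIDCommand(self, q, dq):
        self.retval['qcmd'] = q
        self.retval['dqcmd'] = dq

    def makeCommand(self):
        return self.retval


inputs = {'t': 0.0, 'dt': 0.01, 'q': [0.0, 0.0], 'dq': [0.0, 0.0]}
api = MiniControllerIO(inputs)
api.setPIDCommand([0.1, 0.2], [0.0, 0.0])
print(api.makeCommand())  # → {'qcmd': [0.1, 0.2], 'dqcmd': [0.0, 0.0]}
```

A real subclass of `RobotControllerBase` would return exactly such a dictionary from `advance()`.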
c93515952f04e11c4a251c72d7d77e4a67dea485 | 2,692 | py | Python | keras/src/layers/trainable_lambda_loss_function_layers.py | garysnake/crsae | ca03574fc75e855e612df71535504e956ef897c7 | [
"MIT"
] | null | null | null | keras/src/layers/trainable_lambda_loss_function_layers.py | garysnake/crsae | ca03574fc75e855e612df71535504e956ef897c7 | [
"MIT"
] | null | null | null | keras/src/layers/trainable_lambda_loss_function_layers.py | garysnake/crsae | ca03574fc75e855e612df71535504e956ef897c7 | [
"MIT"
] | null | null | null | """
Copyright (c) 2018 CRISP
Layer with a trainable scalar multiplication.
:author: Bahareh Tolooshams
"""
from keras import backend as K
from keras.layers import Layer
from keras.initializers import (
Identity,
Initializer,
Constant,
Ones,
Zeros,
RandomNormal,
)
from keras.activations import relu
from keras.constraints import non_neg
import numpy as np
import tensorflow as tf
class TrainableLambdaLossFunction(Layer):
"""
Layer that outputs scalar * input - Ne * log(scalar),
where the scalar (lambda) is a trainable vector.
"""
def __init__(self, Ne, num_conv, mean_gamma, delta, lambda_single, **kwargs):
"""
Constructor. Instantiates a layer with trainable scalar called lambda.
"""
self.Ne = Ne
self.num_conv = num_conv
self.mean_gamma = mean_gamma
self.delta = delta
self.lambda_single = lambda_single
# super(Scalar, self).__init__(**kwargs)
super().__init__(**kwargs)
def build(self, input_shape):
"""
The only method overridden from the Layer class.
We set the lambda weight to be trainable.
:param input_shape: input tensor shape
:return: none
"""
if self.lambda_single:
self.scalar = self.add_weight(
shape=(1,),
name="lambda",
initializer="ones",
dtype="float32",
trainable=True,
constraint=non_neg(),
)
else:
self.scalar = self.add_weight(
shape=(self.num_conv,),
name="lambda",
initializer="ones",
dtype="float32",
trainable=True,
constraint=non_neg(),
)
self.built = True
def call(self, inputs):
# output = K.sum(self.scalar * inputs - self.Ne * K.log(self.scalar), axis=-1)
delta = self.delta # rate
r = delta * self.mean_gamma # shape
# gamma_prior_loss = 2 * (r - 1) * K.log(self.scalar) - delta * K.square(
# self.scalar
# )
if self.lambda_single:
scalar = np.zeros((self.num_conv,)) + self.scalar
else:
scalar = self.scalar
gamma_prior_loss = (r - 1) * K.log(scalar) - delta * scalar
output = K.sum(
scalar * inputs - self.Ne * K.log(scalar) - gamma_prior_loss, axis=-1
)
return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = [(input_shape[0], 1)]
return output_shape
| 27.191919 | 86 | 0.568351 |
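To make the loss computed in `call()` above concrete, here is a plain-Python sketch of the same expression for a fixed lambda vector. The numeric values for `Ne`, `mean_gamma`, `delta`, the inputs and the lambdas are illustrative assumptions, not values taken from the repository:

```python
import math

# Recompute the layer's loss for given values:
#   sum_i( lam_i*x_i - Ne*log(lam_i) - [(r-1)*log(lam_i) - delta*lam_i] )
# with r = delta * mean_gamma (the gamma-prior shape), as in call() above.
def lambda_loss(x, lam, Ne, mean_gamma, delta):
    r = delta * mean_gamma
    return sum(
        l * xi - Ne * math.log(l) - ((r - 1) * math.log(l) - delta * l)
        for l, xi in zip(lam, x)
    )

print(lambda_loss(x=[0.3, 0.7], lam=[1.0, 1.5], Ne=100, mean_gamma=4.0, delta=0.5))
```

The gamma-prior term regularizes lambda toward the prior mean `mean_gamma`; the `non_neg()` constraint in `build()` keeps the log arguments valid.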
07345f4fd4627daffd8facb6a31678ee7a216e8e | 13,926 | py | Python | src/Infraestructura/clusterServer/reactors/vmServerPacketReactor.py | lbarriosh/cygnus-cloud | 1a17fbb55de69adba2ec42db4c9a063865af4fbd | [
"Apache-2.0"
] | 3 | 2017-09-03T22:01:35.000Z | 2019-01-10T05:40:44.000Z | src/Infraestructura/clusterServer/reactors/vmServerPacketReactor.py | lbarriosh/cygnus-cloud | 1a17fbb55de69adba2ec42db4c9a063865af4fbd | [
"Apache-2.0"
] | null | null | null | src/Infraestructura/clusterServer/reactors/vmServerPacketReactor.py | lbarriosh/cygnus-cloud | 1a17fbb55de69adba2ec42db4c9a063865af4fbd | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf8 -*-
'''
========================================================================
CygnusCloud
========================================================================
File: vmServerPacketReactor.py
Version: 5.0
Description: virtual machine server packet reactor definition
Copyright 2012-13 Luis Barrios Hernández, Adrián Fernández Hernández,
Samuel Guayerbas Martín
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
from virtualMachineServer.packetHandling.packet_t import VM_SERVER_PACKET_T as VMSRVR_PACKET_T
from clusterServer.packetHandling.packet_t import CLUSTER_SERVER_PACKET_T as ENDPOINT_PACKET_T
from clusterServer.database.server_state_t import SERVER_STATE_T
from clusterServer.database.image_state_t import IMAGE_STATE_T
from time import sleep
from errors.codes import ERROR_DESC_T
class VMServerPacketReactor(object):
"""
These objects process the packets sent from the virtual machine servers
"""
def __init__(self, dbConnector, networkManager, listenningPort, vmServerPacketHandler, clusterServerPacketHandler):
"""
Initializes the reactor's state
Args:
dbConnector: a cluster server database connector
networkManager: the network manager to use
listenningPort: the control connection's listening port
vmServerPacketHandler: the virtual machine server packet handler
clusterServerPacketHandler: the cluster server packet handler
"""
self.__commandsDBConnector = dbConnector
self.__networkManager = networkManager
self.__listenningPort = listenningPort
self.__vmServerPacketHandler = vmServerPacketHandler
self.__packetHandler = clusterServerPacketHandler
def processVMServerIncomingPacket(self, packet):
"""
Processes a packet sent from a virtual machine server
Args:
packet: the packet to process
Returns:
Nothing
"""
data = self.__vmServerPacketHandler.readPacket(packet)
if (data["packet_type"] == VMSRVR_PACKET_T.SERVER_STATUS) :
self.__updateVMServerStatus(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.DOMAIN_CONNECTION_DATA) :
self.__sendVMConnectionData(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.ACTIVE_VM_DATA) :
self.__sendDomainsVNCConnectionData(packet)
elif (data["packet_type"] == VMSRVR_PACKET_T.ACTIVE_DOMAIN_UIDS) :
self.__processActiveDomainUIDs(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DEPLOYMENT_ERROR or data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DELETION_ERROR):
self.__processImageDeploymentErrorPacket(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DEPLOYED or data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DELETED):
self.__processImageDeploymentPacket(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_EDITED):
self.__processImageEditedPacket(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_EDITION_ERROR):
self.__processImageEditionError(data)
elif (data["packet_type"] == VMSRVR_PACKET_T.INTERNAL_ERROR) :
self.__processVMServerInternalError(data)
def __updateVMServerStatus(self, data):
"""
Processes a virtual machine server status packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
serverID = None
while (serverID == None) :
serverID = self.__commandsDBConnector.getVMServerID(data["VMServerIP"])
if (serverID == None) :
sleep(0.1)
self.__commandsDBConnector.updateVMServerStatus(serverID, SERVER_STATE_T.READY)
self.__commandsDBConnector.setVMServerStatistics(serverID, data["ActiveDomains"], data["RAMInUse"], data["RAMSize"],
data["FreeStorageSpace"], data["AvailableStorageSpace"], data["FreeTemporarySpace"],
data["AvailableTemporarySpace"], data["ActiveVCPUs"], data["PhysicalCPUs"])
def __processVMServerInternalError(self, data):
"""
Processes a virtual machine server internal error packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], True)
p = self.__packetHandler.createErrorPacket(ENDPOINT_PACKET_T.VM_SERVER_INTERNAL_ERROR, ERROR_DESC_T.VMSRVR_INTERNAL_ERROR, data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
def __processImageEditedPacket(self, data):
"""
Processes an image edited packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
self.__commandsDBConnector.deleteActiveVMLocation(data["CommandID"])
if (not self.__commandsDBConnector.isImageEditionCommand(data["CommandID"])) :
familyID = self.__commandsDBConnector.getNewImageVMFamily(data["CommandID"])
self.__commandsDBConnector.deleteNewImageVMFamily(data["CommandID"])
self.__commandsDBConnector.registerImageVMFamilyID(data["ImageID"], familyID)
p = self.__packetHandler.createImageEditedPacket(data["CommandID"], data["ImageID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
else :
self.__commandsDBConnector.removeImageEditionCommand(data["CommandID"])
self.__commandsDBConnector.changeImageCopiesState(data["ImageID"], IMAGE_STATE_T.EDITED)
p = self.__packetHandler.createCommandExecutedPacket(data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], False)
self.__commandsDBConnector.freeImageRepositoryResources(data["CommandID"], False)
def __processImageEditionError(self, data):
"""
Processes an image edition error packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
if (not self.__commandsDBConnector.isImageEditionCommand(data["CommandID"])) :
self.__commandsDBConnector.deleteNewImageVMFamily(data["CommandID"])
packet_type = ENDPOINT_PACKET_T.IMAGE_CREATION_ERROR
else :
self.__commandsDBConnector.removeImageEditionCommand(data["CommandID"])
packet_type = ENDPOINT_PACKET_T.IMAGE_EDITION_ERROR
p = self.__packetHandler.createErrorPacket(packet_type, data["ErrorDescription"], data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], True)
self.__commandsDBConnector.freeImageRepositoryResources(data["CommandID"], True)
def __processImageDeploymentErrorPacket(self, data):
"""
Processes an image deployment error packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
if (self.__commandsDBConnector.isImageEditionCommand(data["CommandID"])) :
packet_type = ENDPOINT_PACKET_T.IMAGE_EDITION_ERROR
self.__commandsDBConnector.removeImageEditionCommand(data["CommandID"])
elif (self.__commandsDBConnector.isAutoDeploymentCommand(data["CommandID"])) :
(generateOutput, _unused) = self.__commandsDBConnector.handleAutoDeploymentCommandOutput(data["CommandID"], True)
if (generateOutput) :
p = self.__packetHandler.createErrorPacket(ENDPOINT_PACKET_T.AUTO_DEPLOY_ERROR, ERROR_DESC_T.CLSRVR_AUTOD_ERROR, data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
elif (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DEPLOYMENT_ERROR) :
packet_type = ENDPOINT_PACKET_T.IMAGE_DEPLOYMENT_ERROR
else :
packet_type = ENDPOINT_PACKET_T.DELETE_IMAGE_FROM_SERVER_ERROR
p = self.__packetHandler.createErrorPacket(packet_type, data["ErrorDescription"], data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], True)
def __processImageDeploymentPacket(self, data):
"""
Processes an image deployment packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], False)
serverID = self.__commandsDBConnector.getVMServerID(data["SenderIP"])
if (self.__commandsDBConnector.isImageEditionCommand(data["CommandID"])) :
self.__commandsDBConnector.changeImageCopyState(data["ImageID"], serverID, IMAGE_STATE_T.READY)
if (not self.__commandsDBConnector.isThereSomeImageCopyInState(data["ImageID"], IMAGE_STATE_T.DEPLOY)) :
p = self.__packetHandler.createCommandExecutedPacket(data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.removeImageEditionCommand(data["CommandID"])
elif (self.__commandsDBConnector.isImageDeletionCommand(data["CommandID"])) :
self.__commandsDBConnector.deleteImageFromServer(serverID, data["ImageID"])
if (not self.__commandsDBConnector.isThereSomeImageCopyInState(data["ImageID"], IMAGE_STATE_T.DELETE)) :
p = self.__packetHandler.createCommandExecutedPacket(data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.removeImageDeletionCommand(data["CommandID"])
elif (self.__commandsDBConnector.isAutoDeploymentCommand(data["CommandID"])) :
(generateOutput, error) = self.__commandsDBConnector.handleAutoDeploymentCommandOutput(data["CommandID"], False)
if (not error) :
self.__commandsDBConnector.assignImageToServer(serverID, data["ImageID"])
if (generateOutput) :
if (error) :
p = self.__packetHandler.createErrorPacket(ENDPOINT_PACKET_T.AUTO_DEPLOY_ERROR, ERROR_DESC_T.CLSRVR_AUTOD_ERROR, data["CommandID"])
else :
p = self.__packetHandler.createCommandExecutedPacket(data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
else :
if (data["packet_type"] == VMSRVR_PACKET_T.IMAGE_DEPLOYED) :
self.__commandsDBConnector.assignImageToServer(serverID, data["ImageID"])
else :
self.__commandsDBConnector.deleteImageFromServer(serverID, data["ImageID"])
p = self.__packetHandler.createCommandExecutedPacket(data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
def __sendVMConnectionData(self, data):
"""
Processes a virtual machine connection data packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
self.__commandsDBConnector.removeVMBootCommand(data["CommandID"])
p = self.__packetHandler.createVMConnectionDataPacket(data["VNCServerIP"],
data["VNCServerPort"], data["VNCServerPassword"], data["CommandID"])
self.__networkManager.sendPacket('', self.__listenningPort, p)
self.__commandsDBConnector.freeVMServerResources(data["CommandID"], False)
def __processActiveDomainUIDs(self, data):
"""
Processes an active domains IDs packet
Args:
data: a dictionary containing the received packet's data
Returns:
Nothing
"""
vmServerID = self.__commandsDBConnector.getVMServerID(data["VMServerIP"])
self.__commandsDBConnector.registerHostedVMs(vmServerID, data["Domain_UIDs"])
def __sendDomainsVNCConnectionData(self, packet):
"""
Redirects the active virtual machines' VNC connection data to the cluster endpoint
Args:
packet: the received packet to redirect
Returns:
Nothing
"""
p = self.__packetHandler.createActiveVMsVNCDataPacket(packet)
self.__networkManager.sendPacket('', self.__listenningPort, p) | 51.198529 | 151 | 0.650582 |
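The long if/elif chain in `processVMServerIncomingPacket()` above is a dispatch on the packet type. The same pattern can be sketched with a handler table; the packet-type strings and handler stubs below are placeholders, not the real `VMSRVR_PACKET_T` constants:

```python
# Hypothetical dispatch-table version of the packet-type branching above.
# Handlers receive the decoded packet dict; unknown packet types fall
# through to the default handler (a no-op here).
def make_dispatcher(handlers, default=lambda data: None):
    def dispatch(data):
        return handlers.get(data["packet_type"], default)(data)
    return dispatch


handlers = {
    "SERVER_STATUS": lambda d: ("status", d["VMServerIP"]),
    "DOMAIN_CONNECTION_DATA": lambda d: ("vnc", d["CommandID"]),
}
dispatch = make_dispatcher(handlers)
print(dispatch({"packet_type": "SERVER_STATUS", "VMServerIP": "10.0.0.1"}))
# → ('status', '10.0.0.1')
```

A table keeps each handler independently testable and avoids re-evaluating the chain of comparisons per packet.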
9527e264a0817ce99188bada1aae8c5cae50d033 | 369 | py | Python | stor/wallet/lineage_proof.py | Stor-Network/stor-blockchain | 3c3cd1a3b99592e88160107ca5b81afc0937b992 | [
"Apache-2.0"
] | 19 | 2021-06-29T20:06:09.000Z | 2022-02-09T04:33:00.000Z | stor/wallet/lineage_proof.py | Stor-Network/stor-blockchain | 3c3cd1a3b99592e88160107ca5b81afc0937b992 | [
"Apache-2.0"
] | 8 | 2021-07-04T03:21:51.000Z | 2021-12-27T07:56:09.000Z | stor/wallet/lineage_proof.py | Stor-Network/stor-blockchain | 3c3cd1a3b99592e88160107ca5b81afc0937b992 | [
"Apache-2.0"
] | 6 | 2021-10-04T17:15:30.000Z | 2022-03-15T08:40:01.000Z | from dataclasses import dataclass
from typing import Optional
from stor.types.blockchain_format.sized_bytes import bytes32
from stor.util.ints import uint64
from stor.util.streamable import Streamable, streamable
@dataclass(frozen=True)
@streamable
class LineageProof(Streamable):
parent_name: bytes32
inner_puzzle_hash: Optional[bytes32]
amount: uint64
| 24.6 | 60 | 0.818428 |
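`LineageProof` above relies on `@dataclass(frozen=True)` for immutability. A minimal stand-in using plain `bytes`/`int` instead of stor's `bytes32`/`uint64` (and without the `@streamable` serialization) behaves like this:

```python
from dataclasses import dataclass, FrozenInstanceError
from typing import Optional

# Illustrative stand-in for LineageProof with plain builtin types.
@dataclass(frozen=True)
class MiniLineageProof:
    parent_name: bytes
    inner_puzzle_hash: Optional[bytes]
    amount: int


proof = MiniLineageProof(b"\x01" * 32, None, 1000)
try:
    proof.amount = 0          # frozen dataclass: assignment raises
except FrozenInstanceError:
    print("immutable")        # → immutable
```

Freezing also makes instances hashable by default, which is convenient when proofs are used as dictionary keys or set members.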
ac3f8e488a862bcb0feda704fe84d85f9d8a93a1 | 40,203 | py | Python | py3status/core.py | boucman/py3status | 84b57304fbf71a466ccb1ed2f2dd039ece6eefb6 | [
"BSD-3-Clause"
] | 1 | 2020-04-07T19:11:36.000Z | 2020-04-07T19:11:36.000Z | py3status/core.py | boucman/py3status | 84b57304fbf71a466ccb1ed2f2dd039ece6eefb6 | [
"BSD-3-Clause"
] | 2 | 2018-03-15T18:44:42.000Z | 2018-03-15T19:22:04.000Z | py3status/core.py | boucman/py3status | 84b57304fbf71a466ccb1ed2f2dd039ece6eefb6 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import print_function
from __future__ import division
import os
import pkg_resources
import sys
import time
from collections import deque
from json import dumps
from pprint import pformat
from signal import signal, SIGTERM, SIGUSR1, SIGTSTP, SIGCONT
from subprocess import Popen
from threading import Event, Thread
from syslog import syslog, LOG_ERR, LOG_INFO, LOG_WARNING
from traceback import extract_tb, format_tb, format_stack
from py3status.command import CommandServer
from py3status.events import Events
from py3status.formatter import expand_color
from py3status.helpers import print_stderr
from py3status.i3status import I3status
from py3status.parse_config import process_config
from py3status.module import Module
from py3status.profiling import profile
from py3status.udev_monitor import UdevMonitor
LOG_LEVELS = {"error": LOG_ERR, "warning": LOG_WARNING, "info": LOG_INFO}
DBUS_LEVELS = {"error": "critical", "warning": "normal", "info": "low"}
CONFIG_SPECIAL_SECTIONS = [
".group_extras",
".module_groups",
"general",
"i3s_modules",
"on_click",
"order",
"py3_modules",
"py3status",
]
ENTRY_POINT_NAME = "py3status"
ENTRY_POINT_KEY = "entry_point"
class Runner(Thread):
"""
A Simple helper to run a module in a Thread so it is non-locking.
"""
def __init__(self, module, py3_wrapper, module_name):
Thread.__init__(self)
self.daemon = True
self.module = module
self.module_name = module_name
self.py3_wrapper = py3_wrapper
self.start()
def run(self):
try:
self.module.run()
except: # noqa e722
self.py3_wrapper.report_exception("Runner")
# the module is no longer running so notify the timeout logic
if self.module_name:
self.py3_wrapper.timeout_finished.append(self.module_name)
class NoneSetting:
"""
This class represents no setting in the config.
"""
# this attribute is used to identify that this is a none setting
none_setting = True
def __len__(self):
return 0
def __repr__(self):
# this is for output via module_test
return "None"
class Task:
"""
A simple task that can be run by the scheduler.
"""
def run(self):
# F901 'raise NotImplemented' should be 'raise NotImplementedError'
raise NotImplemented() # noqa f901
class CheckI3StatusThread(Task):
"""
Checks that the i3status thread is alive
"""
def __init__(self, i3status_thread, py3_wrapper):
self.i3status_thread = i3status_thread
self.timeout_queue_add = py3_wrapper.timeout_queue_add
self.notify_user = py3_wrapper.notify_user
def run(self):
# check i3status thread
if not self.i3status_thread.is_alive():
err = self.i3status_thread.error
if not err:
err = "I3status died horribly."
self.notify_user(err)
else:
# check again in 5 seconds
self.timeout_queue_add(self, int(time.time()) + 5)
class ModuleRunner(Task):
"""
Starts up a Module
"""
def __init__(self, module):
self.module = module
def run(self):
self.module.start_module()
class Common:
"""
This class is used to hold core functionality so that it can be shared more
easily. This allows us to run the module tests through the same code as
when we are running for real.
"""
def __init__(self, py3_wrapper):
self.py3_wrapper = py3_wrapper
self.none_setting = NoneSetting()
self.config = py3_wrapper.config
def get_config_attribute(self, name, attribute):
"""
Look for the attribute in the config. Start with the named module and
then walk up through any containing group and then try the general
section of the config.
"""
# A user can set a param to None in the config to prevent a param
# being used. This is important when modules do something like
#
# color = self.py3.COLOR_MUTED or self.py3.COLOR_BAD
config = self.config["py3_config"]
param = config[name].get(attribute, self.none_setting)
if hasattr(param, "none_setting") and name in config[".module_groups"]:
for module in config[".module_groups"][name]:
if attribute in config.get(module, {}):
param = config[module].get(attribute)
break
if hasattr(param, "none_setting"):
# check py3status config section
param = config["py3status"].get(attribute, self.none_setting)
if hasattr(param, "none_setting"):
# check py3status general section
param = config["general"].get(attribute, self.none_setting)
if param and (attribute == "color" or attribute.startswith("color_")):
# check color value
param = expand_color(param.lower(), self.none_setting)
return param
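The fallback chain implemented above — the named module section, then any containing group, then the `py3status` section, then `general` — can be sketched in isolation. This is a hypothetical simplification (a plain sentinel object instead of `NoneSetting`, and no color expansion), not the real implementation:

```python
# Hypothetical stand-in for NoneSetting: a unique sentinel object.
_MISSING = object()

def lookup(config, name, attribute):
    """Resolve `attribute` for module `name` using the same fallback order."""
    param = config[name].get(attribute, _MISSING)
    # 1) walk up through any containing group
    if param is _MISSING and name in config[".module_groups"]:
        for group in config[".module_groups"][name]:
            if attribute in config.get(group, {}):
                param = config[group][attribute]
                break
    # 2) the py3status section, 3) the general section
    if param is _MISSING:
        param = config["py3status"].get(attribute, _MISSING)
    if param is _MISSING:
        param = config["general"].get(attribute, _MISSING)
    return None if param is _MISSING else param

config = {
    ".module_groups": {"clock": ["group main"]},
    "clock": {},
    "group main": {"color": "#FF0000"},
    "py3status": {},
    "general": {"color_good": "#00FF00"},
}
print(lookup(config, "clock", "color"))       # "#FF0000" from the group
print(lookup(config, "clock", "color_good"))  # "#00FF00" from general
```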
def report_exception(self, msg, notify_user=True, level="error", error_frame=None):
"""
Report details of an exception to the user.
        This should only be called within an except: block. Details of the
        exception are reported eg filename, line number and exception type.
        Because stack trace information outside of py3status or its modules is
not helpful in actually finding and fixing the error, we try to locate
the first place that the exception affected our code.
Alternatively if the error occurs in a module via a Py3 call that
catches and reports the error then we receive an error_frame and use
that as the source of the error.
NOTE: msg should not end in a '.' for consistency.
"""
# Get list of paths that our stack trace should be found in.
py3_paths = [os.path.dirname(__file__)] + self.config["include_paths"]
traceback = None
try:
# We need to make sure to delete tb even if things go wrong.
exc_type, exc_obj, tb = sys.exc_info()
stack = extract_tb(tb)
error_str = "{}: {}\n".format(exc_type.__name__, exc_obj)
traceback = [error_str]
if error_frame:
# The error occurred in a py3status module so the traceback
# should be made to appear correct. We caught the exception
# but make it look as though we did not.
traceback += format_stack(error_frame, 1) + format_tb(tb)
filename = os.path.basename(error_frame.f_code.co_filename)
line_no = error_frame.f_lineno
else:
                # This is a non-module-based error
traceback += format_tb(tb)
# Find first relevant trace in the stack.
                # It should be in py3status or one of its modules.
found = False
for item in reversed(stack):
filename = item[0]
for path in py3_paths:
if filename.startswith(path):
# Found a good trace
filename = os.path.basename(item[0])
line_no = item[1]
found = True
break
if found:
break
# all done! create our message.
msg = "{} ({}) {} line {}.".format(
msg, exc_type.__name__, filename, line_no
)
except: # noqa e722
# something went wrong report what we can.
msg = "{}.".format(msg)
finally:
# delete tb!
del tb
# log the exception and notify user
self.py3_wrapper.log(msg, "warning")
if traceback:
# if debug is not in the config then we are at an early stage of
# running py3status and logging is not yet available so output the
# error to STDERR so it can be seen
if "debug" not in self.config:
print_stderr("\n".join(traceback))
elif self.config.get("log_file"):
self.py3_wrapper.log("".join(["Traceback\n"] + traceback))
if notify_user:
self.py3_wrapper.notify_user(msg, level=level)
class Py3statusWrapper:
"""
This is the py3status wrapper.
"""
def __init__(self, options):
"""
Useful variables we'll need.
"""
self.config = vars(options)
self.i3bar_running = True
self.last_refresh_ts = time.time()
self.lock = Event()
self.modules = {}
self.notified_messages = set()
self.options = options
self.output_modules = {}
self.py3_modules = []
self.running = True
self.update_queue = deque()
self.update_request = Event()
# shared code
self.common = Common(self)
self.get_config_attribute = self.common.get_config_attribute
self.report_exception = self.common.report_exception
# these are used to schedule module updates
self.timeout_add_queue = deque()
self.timeout_due = None
self.timeout_finished = deque()
self.timeout_keys = []
self.timeout_missed = {}
self.timeout_queue = {}
self.timeout_queue_lookup = {}
self.timeout_running = set()
self.timeout_update_due = deque()
def timeout_queue_add(self, item, cache_time=0):
"""
        Add an item to be run at a future time.
This must be a Module, I3statusModule or a Task
"""
# add the info to the add queue. We do this so that actually adding
# the module is done in the core thread.
self.timeout_add_queue.append((item, cache_time))
# if the timeout_add_queue is not due to be processed until after this
# update request is due then trigger an update now.
if self.timeout_due is None or cache_time < self.timeout_due:
self.update_request.set()
def timeout_process_add_queue(self, module, cache_time):
"""
        Add a module to the timeout_queue if it is scheduled in the future;
        if it is due for an update immediately, just trigger that update.
        The timeout_queue is a dict with the scheduled time as the key and the
value is a list of module instance names due to be updated at that
point. An ordered list of keys is kept to allow easy checking of when
updates are due. A list is also kept of which modules are in the
update_queue to save having to search for modules in it unless needed.
"""
# If already set to update do nothing
if module in self.timeout_update_due:
return
# remove if already in the queue
key = self.timeout_queue_lookup.get(module)
if key:
queue_item = self.timeout_queue[key]
queue_item.remove(module)
if not queue_item:
del self.timeout_queue[key]
self.timeout_keys.remove(key)
if cache_time == 0:
# if cache_time is 0 we can just trigger the module update
self.timeout_update_due.append(module)
self.timeout_queue_lookup[module] = None
else:
# add the module to the timeout queue
if cache_time not in self.timeout_keys:
self.timeout_queue[cache_time] = set([module])
self.timeout_keys.append(cache_time)
# sort keys so earliest is first
self.timeout_keys.sort()
# when is next timeout due?
try:
self.timeout_due = self.timeout_keys[0]
except IndexError:
self.timeout_due = None
else:
self.timeout_queue[cache_time].add(module)
# note that the module is in the timeout_queue
self.timeout_queue_lookup[module] = cache_time
def timeout_queue_process(self):
"""
Check the timeout_queue and set any due modules to update.
"""
# process any items that need adding to the queue
while self.timeout_add_queue:
self.timeout_process_add_queue(*self.timeout_add_queue.popleft())
now = time.time()
due_timeouts = []
# find any due timeouts
for timeout in self.timeout_keys:
if timeout > now:
break
due_timeouts.append(timeout)
if due_timeouts:
# process them
for timeout in due_timeouts:
modules = self.timeout_queue[timeout]
# remove from the queue
del self.timeout_queue[timeout]
self.timeout_keys.remove(timeout)
for module in modules:
# module no longer in queue
del self.timeout_queue_lookup[module]
# tell module to update
self.timeout_update_due.append(module)
# when is next timeout due?
try:
self.timeout_due = self.timeout_keys[0]
except IndexError:
self.timeout_due = None
# process any finished modules.
# Now that the module has finished running it may have been marked to
# be triggered again. This is most likely to happen when events are
# being processed and the events are arriving much faster than the
# module can handle them. It is important as a module may handle
# events but not trigger the module update. If during the event the
# module is due to update the update is not actioned but it needs to be
# once the events have finished or else the module will no longer
# continue to update.
while self.timeout_finished:
module_name = self.timeout_finished.popleft()
self.timeout_running.discard(module_name)
if module_name in self.timeout_missed:
module = self.timeout_missed.pop(module_name)
self.timeout_update_due.append(module)
# run any modules that are due
while self.timeout_update_due:
module = self.timeout_update_due.popleft()
module_name = getattr(module, "module_full_name", None)
# if the module is running then we do not want to trigger it but
# instead wait till it has finished running and then trigger
if module_name and module_name in self.timeout_running:
self.timeout_missed[module_name] = module
else:
self.timeout_running.add(module_name)
Runner(module, self, module_name)
# we return how long till we next need to process the timeout_queue
if self.timeout_due is not None:
return self.timeout_due - time.time()
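A hypothetical, stripped-down model of the scheduling structures used by `timeout_queue_add` / `timeout_queue_process`: a dict keyed by due time plus a sorted key list, so the earliest deadline is always `timeout_keys[0]`. Threading, the add queue, and the missed-update bookkeeping are deliberately omitted:

```python
timeout_queue = {}   # due time -> set of items due at that time
timeout_keys = []    # sorted due times; earliest deadline first

def schedule(item, due):
    # group items that share a due time, keeping the key list sorted
    if due not in timeout_queue:
        timeout_queue[due] = set()
        timeout_keys.append(due)
        timeout_keys.sort()
    timeout_queue[due].add(item)

def pop_due(now):
    # collect every item whose due time has passed
    due_items = []
    while timeout_keys and timeout_keys[0] <= now:
        key = timeout_keys.pop(0)
        due_items.extend(timeout_queue.pop(key))
    return due_items

schedule("clock", 10)
schedule("battery", 5)
schedule("cpu", 5)
print(sorted(pop_due(7)))  # ['battery', 'cpu']
print(timeout_keys[0])     # 10 -> how the next wake-up time is known
```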
def gevent_monkey_patch_report(self):
"""
        Report effective gevent monkey patching in the logs.
"""
try:
import gevent.socket
import socket
if gevent.socket.socket is socket.socket:
self.log("gevent monkey patching is active")
return True
else:
self.notify_user("gevent monkey patching failed.")
except ImportError:
self.notify_user("gevent is not installed, monkey patching failed.")
return False
def get_user_modules(self):
"""Mapping from module name to relevant objects.
There are two ways of discovery and storage:
`include_paths` (no installation): include_path, f_name
`entry_point` (from installed package): "entry_point", <Py3Status class>
Modules of the same name from entry points shadow all other modules.
"""
user_modules = self._get_path_based_modules()
user_modules.update(self._get_entry_point_based_modules())
return user_modules
def _get_path_based_modules(self):
"""
Search configured include directories for user provided modules.
user_modules: {
'weather_yahoo': ('~/i3/py3status/', 'weather_yahoo.py')
}
"""
user_modules = {}
for include_path in self.config["include_paths"]:
for f_name in sorted(os.listdir(include_path)):
if not f_name.endswith(".py"):
continue
module_name = f_name[:-3]
# do not overwrite modules if already found
                if module_name in user_modules:
                    continue
user_modules[module_name] = (include_path, f_name)
self.log(
"available module from {}: {}".format(include_path, module_name)
)
return user_modules
def _get_entry_point_based_modules(self):
classes_from_entry_points = {}
for entry_point in pkg_resources.iter_entry_points(ENTRY_POINT_NAME):
try:
module = entry_point.load()
except Exception as err:
self.log("entry_point '{}' error: {}".format(entry_point, err))
continue
klass = getattr(module, Module.EXPECTED_CLASS, None)
if klass:
module_name = entry_point.module_name.split(".")[-1]
classes_from_entry_points[module_name] = (ENTRY_POINT_KEY, klass)
self.log(
"available module from {}: {}".format(ENTRY_POINT_KEY, module_name)
)
return classes_from_entry_points
def get_user_configured_modules(self):
"""
Get a dict of all available and configured py3status modules
in the user's i3status.conf.
As we already have a convenient way of loading the module, we'll
populate the map with the Py3Status class right away
"""
user_modules = {}
if not self.py3_modules:
return user_modules
for module_name, module_info in self.get_user_modules().items():
for module in self.py3_modules:
if module_name == module.split(" ")[0]:
source, item = module_info
user_modules[module_name] = (source, item)
return user_modules
def load_modules(self, modules_list, user_modules):
"""
Load the given modules from the list (contains instance name) with
respect to the user provided modules dict.
modules_list: ['weather_yahoo paris', 'pewpew', 'net_rate']
user_modules: {
'weather_yahoo': ('/etc/py3status.d/', 'weather_yahoo.py'),
'pewpew': ('entry_point', <Py3Status class>),
}
"""
for module in modules_list:
# ignore already provided modules (prevents double inclusion)
if module in self.modules:
continue
try:
instance = None
payload = user_modules.get(module)
if payload:
kind, Klass = payload
if kind == ENTRY_POINT_KEY:
instance = Klass()
my_m = Module(module, user_modules, self, instance=instance)
# only handle modules with available methods
if my_m.methods:
self.modules[module] = my_m
elif self.config["debug"]:
self.log('ignoring module "{}" (no methods found)'.format(module))
except Exception:
err = sys.exc_info()[1]
msg = 'Loading module "{}" failed ({}).'.format(module, err)
self.report_exception(msg, level="warning")
def setup(self):
"""
Setup py3status and spawn i3status/events/modules threads.
"""
# SIGTSTP will be received from i3bar indicating that all output should
# stop and we should consider py3status suspended. It is however
# important that any processes using i3 ipc should continue to receive
# those events otherwise it can lead to a stall in i3.
signal(SIGTSTP, self.i3bar_stop)
# SIGCONT indicates output should be resumed.
signal(SIGCONT, self.i3bar_start)
# log py3status and python versions
self.log("=" * 8)
msg = "Starting py3status version {version} python {python_version}"
self.log(msg.format(**self.config))
try:
# if running from git then log the branch and last commit
# we do this by looking in the .git directory
git_path = os.path.join(os.path.dirname(__file__), "..", ".git")
# branch
with open(os.path.join(git_path, "HEAD"), "r") as f:
out = f.readline()
branch = "/".join(out.strip().split("/")[2:])
self.log("git branch: {}".format(branch))
# last commit
log_path = os.path.join(git_path, "logs", "refs", "heads", branch)
with open(log_path, "r") as f:
out = f.readlines()[-1]
sha = out.split(" ")[1][:7]
msg = ":".join(out.strip().split("\t")[-1].split(":")[1:])
self.log("git commit: {}{}".format(sha, msg))
except: # noqa e722
pass
self.log("window manager: {}".format(self.config["wm_name"]))
if self.config["debug"]:
self.log("py3status started with config {}".format(self.config))
if self.config["gevent"]:
self.is_gevent = self.gevent_monkey_patch_report()
else:
self.is_gevent = False
# read i3status.conf
config_path = self.config["i3status_config_path"]
self.log("config file: {}".format(self.config["i3status_config_path"]))
self.config["py3_config"] = process_config(config_path, self)
# read resources
if "resources" in str(self.config["py3_config"].values()):
from subprocess import check_output
resources = check_output(["xrdb", "-query"]).decode().splitlines()
self.config["resources"] = {
k: v.strip() for k, v in (x.split(":", 1) for x in resources)
}
# setup i3status thread
self.i3status_thread = I3status(self)
# If standalone or no i3status modules then use the mock i3status
# else start i3status thread.
i3s_modules = self.config["py3_config"]["i3s_modules"]
if self.config["standalone"] or not i3s_modules:
self.i3status_thread.mock()
i3s_mode = "mocked"
else:
for module in i3s_modules:
self.log("adding module {}".format(module))
i3s_mode = "started"
self.i3status_thread.start()
while not self.i3status_thread.ready:
if not self.i3status_thread.is_alive():
# i3status is having a bad day, so tell the user what went
# wrong and do the best we can with just py3status modules.
err = self.i3status_thread.error
self.notify_user(err)
self.i3status_thread.mock()
i3s_mode = "mocked"
break
time.sleep(0.1)
if self.config["debug"]:
self.log(
"i3status thread {} with config {}".format(
i3s_mode, self.config["py3_config"]
)
)
# add i3status thread monitoring task
if i3s_mode == "started":
task = CheckI3StatusThread(self.i3status_thread, self)
self.timeout_queue_add(task)
# setup input events thread
self.events_thread = Events(self)
self.events_thread.daemon = True
self.events_thread.start()
if self.config["debug"]:
self.log("events thread started")
# initialise the command server
self.commands_thread = CommandServer(self)
self.commands_thread.daemon = True
self.commands_thread.start()
if self.config["debug"]:
self.log("commands thread started")
# initialize the udev monitor (lazy)
self.udev_monitor = UdevMonitor(self)
# suppress modules' output wrt issue #20
if not self.config["debug"]:
sys.stdout = open("/dev/null", "w")
sys.stderr = open("/dev/null", "w")
# get the list of py3status configured modules
self.py3_modules = self.config["py3_config"]["py3_modules"]
# get a dict of all user provided modules
self.log("modules include paths: {}".format(self.config["include_paths"]))
user_modules = self.get_user_configured_modules()
if self.config["debug"]:
self.log("user_modules={}".format(user_modules))
if self.py3_modules:
# load and spawn i3status.conf configured modules threads
self.load_modules(self.py3_modules, user_modules)
def notify_user(
self,
msg,
level="error",
rate_limit=None,
module_name="",
icon=None,
title="py3status",
):
"""
Display notification to user via i3-nagbar or send-notify
We also make sure to log anything to keep trace of it.
NOTE: Message should end with a '.' for consistency.
"""
dbus = self.config.get("dbus_notify")
if dbus:
# force msg, icon, title to be a string
title = u"{}".format(title)
msg = u"{}".format(msg)
if icon:
icon = u"{}".format(icon)
else:
msg = u"py3status: {}".format(msg)
if level != "info" and module_name == "":
fix_msg = u"{} Please try to fix this and reload i3wm (Mod+Shift+R)"
msg = fix_msg.format(msg)
# Rate limiting. If rate limiting then we need to calculate the time
# period for which the message should not be repeated. We just use
        # a simple chunked time model where a message cannot be repeated in a
# given time period. Messages can be repeated more frequently but must
# be in different time periods.
limit_key = ""
if rate_limit:
try:
limit_key = time.time() // rate_limit
except TypeError:
pass
# We use a hash to see if the message is being repeated. This is crude
# and imperfect but should work for our needs.
msg_hash = hash(u"{}#{}#{}#{}".format(module_name, limit_key, msg, title))
if msg_hash in self.notified_messages:
return
elif module_name:
log_msg = 'Module `%s` sent a notification. "%s: %s"' % (
module_name,
title,
msg,
)
self.log(log_msg, level)
else:
self.log(msg, level)
self.notified_messages.add(msg_hash)
try:
if dbus:
# fix any html entities
msg = msg.replace("&", "&")
msg = msg.replace("<", "<")
msg = msg.replace(">", ">")
cmd = ["notify-send"]
if icon:
cmd += ["-i", icon]
cmd += ["-u", DBUS_LEVELS.get(level, "normal"), "-t", "10000"]
cmd += [title, msg]
else:
py3_config = self.config.get("py3_config", {})
nagbar_font = py3_config.get("py3status", {}).get("nagbar_font")
wm_nag = self.config["wm"]["nag"]
cmd = [wm_nag, "-m", msg, "-t", level]
if nagbar_font:
cmd += ["-f", nagbar_font]
Popen(cmd, stdout=open("/dev/null", "w"), stderr=open("/dev/null", "w"))
except Exception as err:
self.log("notify_user error: %s" % err)
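The chunked rate-limit model described in the comments above can be demonstrated standalone. This hypothetical sketch keeps only the core idea: the current time divided by the rate-limit period becomes part of the deduplication key, so an identical message is shown at most once per period:

```python
seen = set()

def should_notify(module, msg, now, rate_limit=None):
    # The period index (not the raw time) goes into the dedup key.
    limit_key = ""
    if rate_limit:
        limit_key = now // rate_limit
    key = hash("{}#{}#{}".format(module, limit_key, msg))
    if key in seen:
        return False
    seen.add(key)
    return True

print(should_notify("battery", "low", 100, rate_limit=60))  # True
print(should_notify("battery", "low", 110, rate_limit=60))  # False, same period
print(should_notify("battery", "low", 190, rate_limit=60))  # True, new period
```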
def stop(self):
"""
Set the Event lock, this will break all threads' loops.
"""
self.running = False
# stop the command server
try:
self.commands_thread.kill()
except: # noqa e722
pass
try:
self.lock.set()
if self.config["debug"]:
self.log("lock set, exiting")
# run kill() method on all py3status modules
for module in self.modules.values():
module.kill()
except: # noqa e722
pass
def refresh_modules(self, module_string=None, exact=True):
"""
Update modules.
if module_string is None all modules are refreshed
        if module_string is given then modules with that exact name, or those
        whose name starts with the string (depending on the exact parameter),
        will be refreshed.
If a module is an i3status one then we refresh i3status.
To prevent abuse, we rate limit this function to 100ms for full
refreshes.
"""
if not module_string:
if time.time() > (self.last_refresh_ts + 0.1):
self.last_refresh_ts = time.time()
else:
# rate limiting
return
update_i3status = False
for name, module in self.output_modules.items():
if (
module_string is None
or (exact and name == module_string)
or (not exact and name.startswith(module_string))
):
if module["type"] == "py3status":
if self.config["debug"]:
self.log("refresh py3status module {}".format(name))
module["module"].force_update()
else:
if self.config["debug"]:
self.log("refresh i3status module {}".format(name))
update_i3status = True
if update_i3status:
self.i3status_thread.refresh_i3status()
def sig_handler(self, signum, frame):
"""
SIGUSR1 was received, the user asks for an immediate refresh of the bar
"""
self.log("received USR1")
self.refresh_modules()
def terminate(self, signum, frame):
"""
Received request to terminate (SIGTERM), exit nicely.
"""
self.log("received SIGTERM")
raise KeyboardInterrupt()
def purge_module(self, module_name):
"""
A module has been removed e.g. a module that had an error.
We need to find any containers and remove the module from them.
"""
containers = self.config["py3_config"][".module_groups"]
containers_to_update = set()
if module_name in containers:
containers_to_update.update(set(containers[module_name]))
for container in containers_to_update:
try:
self.modules[container].module_class.items.remove(module_name)
except ValueError:
pass
def notify_update(self, update, urgent=False):
"""
Name or list of names of modules that have updated.
"""
if not isinstance(update, list):
update = [update]
self.update_queue.extend(update)
# find containers that use the modules that updated
containers = self.config["py3_config"][".module_groups"]
containers_to_update = set()
for item in update:
if item in containers:
containers_to_update.update(set(containers[item]))
# force containers to update
for container in containers_to_update:
container_module = self.output_modules.get(container)
if container_module:
# If the container registered a urgent_function then call it
# if this update is urgent.
if urgent and container_module.get("urgent_function"):
container_module["urgent_function"](update)
# If a container has registered a content_function we use that
# to see if the container needs to be updated.
# We only need to update containers if their active content has
# changed.
if container_module.get("content_function"):
if set(update) & container_module["content_function"]():
container_module["module"].force_update()
else:
# we don't know so just update.
container_module["module"].force_update()
# we need to update the output
if self.update_queue:
self.update_request.set()
def log(self, msg, level="info"):
"""
log this information to syslog or user provided logfile.
"""
if not self.config.get("log_file"):
# If level was given as a str then convert to actual level
level = LOG_LEVELS.get(level, level)
syslog(level, u"{}".format(msg))
else:
# Binary mode so fs encoding setting is not an issue
with open(self.config["log_file"], "ab") as f:
log_time = time.strftime("%Y-%m-%d %H:%M:%S")
# nice formatting of data structures using pretty print
if isinstance(msg, (dict, list, set, tuple)):
msg = pformat(msg)
# if multiline then start the data output on a fresh line
# to aid readability.
if "\n" in msg:
msg = u"\n" + msg
out = u"{} {} {}\n".format(log_time, level.upper(), msg)
try:
# Encode unicode strings to bytes
f.write(out.encode("utf-8"))
except (AttributeError, UnicodeDecodeError):
# Write any byte strings straight to log
f.write(out)
def create_output_modules(self):
"""
        Setup our output modules to allow easy updating of py3status and
        i3status modules. This allows the same module to be used multiple times.
"""
py3_config = self.config["py3_config"]
i3modules = self.i3status_thread.i3modules
output_modules = self.output_modules
# position in the bar of the modules
positions = {}
for index, name in enumerate(py3_config["order"]):
if name not in positions:
positions[name] = []
positions[name].append(index)
# py3status modules
for name in self.modules:
if name not in output_modules:
output_modules[name] = {}
output_modules[name]["position"] = positions.get(name, [])
output_modules[name]["module"] = self.modules[name]
output_modules[name]["type"] = "py3status"
output_modules[name]["color"] = self.mappings_color.get(name)
# i3status modules
for name in i3modules:
if name not in output_modules:
output_modules[name] = {}
output_modules[name]["position"] = positions.get(name, [])
output_modules[name]["module"] = i3modules[name]
output_modules[name]["type"] = "i3status"
output_modules[name]["color"] = self.mappings_color.get(name)
self.output_modules = output_modules
def create_mappings(self, config):
"""
Create any mappings needed for global substitutions eg. colors
"""
mappings = {}
for name, cfg in config.items():
# Ignore special config sections.
if name in CONFIG_SPECIAL_SECTIONS:
continue
color = self.get_config_attribute(name, "color")
if hasattr(color, "none_setting"):
color = None
mappings[name] = color
# Store mappings for later use.
self.mappings_color = mappings
def process_module_output(self, module):
"""
Process the output for a module and return a json string representing it.
Color processing occurs here.
"""
outputs = module["module"].get_latest()
color = module["color"]
if color:
for output in outputs:
# Color: substitute the config defined color
if "color" not in output:
output["color"] = color
# Create the json string output.
return ",".join([dumps(x) for x in outputs])
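What `process_module_output` produces can be shown with a hypothetical free-standing version: each module yields a list of i3bar "blocks", a config-level color is filled in only where a block did not set its own, and the blocks are joined into a JSON fragment:

```python
from json import dumps

def render(outputs, default_color=None):
    if default_color:
        for output in outputs:
            # only substitute where the module set no color of its own
            if "color" not in output:
                output["color"] = default_color
    return ",".join(dumps(o) for o in outputs)

blocks = [{"full_text": "CPU 42%"}, {"full_text": "OK", "color": "#00FF00"}]
print(render(blocks, default_color="#FFFFFF"))
```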
def i3bar_stop(self, signum, frame):
self.log("received SIGTSTP")
self.i3bar_running = False
# i3status should be stopped
self.i3status_thread.suspend_i3status()
self.sleep_modules()
def i3bar_start(self, signum, frame):
self.log("received SIGCONT")
self.i3bar_running = True
self.wake_modules()
def sleep_modules(self):
# Put all py3modules to sleep so they stop updating
for module in self.output_modules.values():
if module["type"] == "py3status":
module["module"].sleep()
def wake_modules(self):
# Wake up all py3modules.
for module in self.output_modules.values():
if module["type"] == "py3status":
module["module"].wake()
@profile
def run(self):
"""
Main py3status loop, continuously read from i3status and modules
and output it to i3bar for displaying.
"""
# SIGUSR1 forces a refresh of the bar both for py3status and i3status,
# this mimics the USR1 signal handling of i3status (see man i3status)
signal(SIGUSR1, self.sig_handler)
signal(SIGTERM, self.terminate)
# initialize usage variables
py3_config = self.config["py3_config"]
# prepare the color mappings
self.create_mappings(py3_config)
# self.output_modules needs to have been created before modules are
# started. This is so that modules can do things like register their
# content_function.
self.create_output_modules()
# start up all our modules
for module in self.modules.values():
task = ModuleRunner(module)
self.timeout_queue_add(task)
# this will be our output set to the correct length for the number of
# items in the bar
output = [None] * len(py3_config["order"])
write = sys.__stdout__.write
flush = sys.__stdout__.flush
# start our output
header = {
"version": 1,
"click_events": self.config["click_events"],
"stop_signal": SIGTSTP,
}
write(dumps(header))
write("\n[[]\n")
update_due = None
# main loop
while True:
# process the timeout_queue and get interval till next update due
update_due = self.timeout_queue_process()
# wait until an update is requested
if self.update_request.wait(timeout=update_due):
# event was set so clear it
self.update_request.clear()
while not self.i3bar_running:
time.sleep(0.1)
# check if an update is needed
if self.update_queue:
                while self.update_queue:
module_name = self.update_queue.popleft()
module = self.output_modules[module_name]
out = self.process_module_output(module)
for index in module["position"]:
# store the output as json
output[index] = out
# build output string
out = ",".join([x for x in output if x])
# dump the line to stdout
write(",[{}]\n".format(out))
flush()
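The output side of the loop speaks the i3bar JSON protocol: a one-off header object, then an unterminated JSON array in which every status line is itself a comma-prefixed array of blocks. A hypothetical minimal emitter (writing to an in-memory stream instead of stdout) looks like:

```python
from io import StringIO
from json import dumps

def emit(lines, stream):
    # header object, then open the infinite array with an empty first line
    stream.write(dumps({"version": 1, "click_events": False}))
    stream.write("\n[[]\n")
    for blocks in lines:
        stream.write(",[{}]\n".format(",".join(dumps(b) for b in blocks)))

buf = StringIO()
emit([[{"full_text": "hello"}]], buf)
print(buf.getvalue())
```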
| 38.288571 | 87 | 0.577196 |
65cce102c88ed6de1fd235c159908017b0173dca | 1,913 | py | Python | DjangoBlog/admin_site.py | pointworld/DjangoBlog | 0ceb8a4b5cb9518fceb7b59357447230eca1a65a | [
"MIT"
] | 4 | 2019-09-09T12:54:59.000Z | 2019-09-18T06:56:43.000Z | DjangoBlog/admin_site.py | pointworld/DjangoBlog | 0ceb8a4b5cb9518fceb7b59357447230eca1a65a | [
"MIT"
] | null | null | null | DjangoBlog/admin_site.py | pointworld/DjangoBlog | 0ceb8a4b5cb9518fceb7b59357447230eca1a65a | [
"MIT"
] | 1 | 2019-09-09T12:54:21.000Z | 2019-09-09T12:54:21.000Z | from django.contrib.admin import AdminSite
from django.contrib.sites.admin import SiteAdmin
from django.contrib.admin.models import LogEntry
from django.contrib.sites.models import Site
from DjangoBlog.logentryadmin import LogEntryAdmin
from blog.admin import *
from accounts.admin import *
from oauth.admin import *
from servermanager.admin import *
from comments.admin import *
from owntracks.admin import *
class DjangoBlogAdminSite(AdminSite):
site_header = 'DjangoBlog administration'
site_title = 'DjangoBlog site admin'
def __init__(self, name='admin'):
super().__init__(name)
def has_permission(self, request):
return request.user.is_superuser
# def get_urls(self):
# urls = super().get_urls()
# from django.urls import path
# from blog.views import refresh_memcache
#
# my_urls = [
# path('refresh/', self.admin_view(refresh_memcache), name="refresh"),
# ]
# return urls + my_urls
admin_site = DjangoBlogAdminSite(name='admin')
class ArticleDetailInline(admin.StackedInline):
model = ArticleDetail
class ExtendedArticleAdmin(ArticleAdmin):
inlines = ArticleAdmin.inlines + [ArticleDetailInline]
admin_site.register(Article, ExtendedArticleAdmin)
admin_site.register(Category, CategoryAdmin)
admin_site.register(Tag, TagAdmin)
admin_site.register(Links, LinksAdmin)
admin_site.register(SideBar, SideBarAdmin)
admin_site.register(BlogSettings, BlogSettingsAdmin)
admin_site.register(commands, CommandsAdmin)
admin_site.register(EmailSendLog, EmailSendLogAdmin)
admin_site.register(BlogUser, BlogUserAdmin)
admin_site.register(Comment, CommentAdmin)
admin_site.register(OAuthUser, OAuthUserAdmin)
admin_site.register(OAuthConfig, OAuthConfigAdmin)
admin_site.register(OwnTrackLog, OwnTrackLogsAdmin)
admin_site.register(Site, SiteAdmin)
admin_site.register(LogEntry, LogEntryAdmin)
| 29.430769 | 82 | 0.772608 |
b1a1e69a9e043c2f25204cfe53fe03ee3825a6c0 | 551 | py | Python | research/delf/delf/python/examples/tomesek_npy_to_list.py | brejchajan/models | 37e05adf3d43c3bf6684cb65dfcfdab6e7d754ac | [
"Apache-2.0"
] | null | null | null | research/delf/delf/python/examples/tomesek_npy_to_list.py | brejchajan/models | 37e05adf3d43c3bf6684cb65dfcfdab6e7d754ac | [
"Apache-2.0"
] | null | null | null | research/delf/delf/python/examples/tomesek_npy_to_list.py | brejchajan/models | 37e05adf3d43c3bf6684cb65dfcfdab6e7d754ac | [
"Apache-2.0"
] | null | null | null | import numpy as np
import os
from tqdm import tqdm
d = np.load('/mnt/matylda1/itomesek/PUBLIC/Switzerland_evaluation/Alps_photos_to_segments_compact_test.npy', allow_pickle=True).item()
#root_path = '/mnt/matylda1/Locate/data/photoparam_raw/netvlad/Alps/database_depth'
root_path = '/mnt/data1/data/delg/Alps_query'
with open('switzerland_query_noscale.txt', 'w') as file:
for img_path in tqdm(d['qImageFns']):
depth_path = os.path.join(root_path, img_path.replace("_segments.png", "_depth.exr"))
file.write(depth_path + "\n")
| 42.384615 | 136 | 0.749546 |
878d8fa09507cda07a0366f1819f6682794a0507 | 288 | py | Python | .bash/powerline-shell/segments/root.py | chrislaskey/.dot-files | 44be2364f2824a2c70300f7c0f2963a592c7083e | [
"MIT"
] | null | null | null | .bash/powerline-shell/segments/root.py | chrislaskey/.dot-files | 44be2364f2824a2c70300f7c0f2963a592c7083e | [
"MIT"
] | null | null | null | .bash/powerline-shell/segments/root.py | chrislaskey/.dot-files | 44be2364f2824a2c70300f7c0f2963a592c7083e | [
"MIT"
] | null | null | null | def add_root_indicator_segment():
root_indicators = {
'bash': ' \\$',
'zsh': ' %#',
'bare': ' $',
}
bg = Color.CMD_PASSED_BG
fg = Color.CMD_PASSED_FG
powerline.append(root_indicators[powerline.args.shell], fg, bg)
add_root_indicator_segment()
| 24 | 67 | 0.600694 |
1a3d2e9e0a05353f35efdc2676a339f220413c78 | 772 | py | Python | docs/conf.py | gmr/tornado-dynamodb | 902867ed33a033bac6f047b7c8053326d6587b8a | [
"BSD-3-Clause"
] | 2 | 2016-01-01T09:20:36.000Z | 2016-11-04T08:53:43.000Z | docs/conf.py | gmr/tornado-dynamodb | 902867ed33a033bac6f047b7c8053326d6587b8a | [
"BSD-3-Clause"
] | 1 | 2016-03-13T15:35:25.000Z | 2016-04-01T12:49:39.000Z | docs/conf.py | gmr/tornado-dynamodb | 902867ed33a033bac6f047b7c8053326d6587b8a | [
"BSD-3-Clause"
] | null | null | null | import sys
sys.path.insert(0, '../')
import tornado_dynamodb
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
]
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = 'tornado-dynamodb'
copyright = '2015, Gavin M. Roy'
author = 'Gavin M. Roy'
release = tornado_dynamodb.__version__
version = '.'.join(release.split('.')[0:1])
language = None
exclude_patterns = ['_build']
pygments_style = 'sphinx'
todo_include_todos = True
html_static_path = ['_static']
htmlhelp_basename = 'tornado-dynamodb'
intersphinx_mapping = {'https://docs.python.org/': None,
'http://www.tornadoweb.org/en/stable/': None}
| 22.057143 | 68 | 0.687824 |
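Note that the `intersphinx_mapping` above uses the legacy positional form `{url: None}`. Recent Sphinx releases expect named entries of the shape `{name: (base_url, inventory)}`; an equivalent named mapping would be:

```python
# Named intersphinx entries: {name: (base_url, inventory_file_or_None)}.
# None means "fetch objects.inv from base_url".
intersphinx_mapping = {
    'python': ('https://docs.python.org/', None),
    'tornado': ('http://www.tornadoweb.org/en/stable/', None),
}
```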
250328b6d6cf969c07462707e01f319b2d904179 | 450 | py | Python | riccipy/metrics/mclenaghan_tariq_tupper.py | cjayross/riccipy | 2cc0ca5e1aa4af91b203b3ff2bb1effd7d2f4846 | [
"MIT"
] | 4 | 2019-08-17T04:28:06.000Z | 2021-01-02T15:19:18.000Z | riccipy/metrics/mclenaghan_tariq_tupper.py | grdbii/riccipy | 2cc0ca5e1aa4af91b203b3ff2bb1effd7d2f4846 | [
"MIT"
] | 3 | 2019-08-02T04:07:43.000Z | 2020-06-18T07:49:38.000Z | riccipy/metrics/mclenaghan_tariq_tupper.py | grdbii/riccipy | 2cc0ca5e1aa4af91b203b3ff2bb1effd7d2f4846 | [
"MIT"
] | null | null | null | """
Name: McLenaghan-Tariq-Tupper
References:
- McLenaghan, J. Math. Phys., v16, p11, (1975)
- Tupper, Gen. Rel. Grav., v7, p479, (1976)
- Stephani (10.21) p121
"""
from sympy import diag, symbols
coords = symbols("t x y phi", real=True)
variables = symbols("a", constant=True)
functions = ()
t, x, y, ph = coords
a = variables
metric = diag(-1, a ** 2 / x ** 2, a ** 2 / x ** 2, x ** 2 - 4 * y ** 2)
metric[0, 3] = metric[3, 0] = 2 * y
| 26.470588 | 72 | 0.577778 |
0d843dfd7ec1f878e2a94aa0cfe3d58453021428 | 2,065 | py | Python | src/foreign_if/python/main/python/frovedis/matrix/dtype.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | null | null | null | src/foreign_if/python/main/python/frovedis/matrix/dtype.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | null | null | null | src/foreign_if/python/main/python/frovedis/matrix/dtype.py | XpressAI/frovedis | bda0f2c688fb832671c5b542dd8df1c9657642ff | [
"BSD-2-Clause"
] | null | null | null | """dtype.py"""
#!/usr/bin/env python
import numpy as np
from ctypes import c_char_p
def get_string_array_pointer(str_vec):
str_vec = np.asarray(str_vec)
str_ptr = (c_char_p * len(str_vec))()
str_ptr[:] = np.array([e.encode('ascii') for e in str_vec.T])
return str_ptr
class DTYPE:
"""A python container for data types enumerator"""
INT = 1
LONG = 2
FLOAT = 3
DOUBLE = 4
STRING = 5
BOOL = 6
ULONG = 7
WORDS = 8
class TypeUtil:
@staticmethod
def to_id_dtype(dtype):
"""to_numpy_dtype"""
if dtype == np.int8 or dtype == np.int32:
return DTYPE.INT
elif dtype == np.uint or dtype == np.uint64:
return DTYPE.ULONG
elif dtype == np.int64:
return DTYPE.LONG
elif dtype == np.float32:
return DTYPE.FLOAT
elif dtype == np.float64:
return DTYPE.DOUBLE
elif dtype == np.bool:
return DTYPE.BOOL
elif dtype == np.dtype(str) or dtype.char == 'S' or dtype.char == 'U':
return DTYPE.STRING
else:
raise TypeError("Unsupported numpy dtype: %s" % dtype)
@staticmethod
def to_numpy_dtype(dtype):
"""to_numpy_dtype"""
if dtype == DTYPE.INT:
return np.int32
elif dtype == DTYPE.LONG:
return np.int64
elif dtype == DTYPE.ULONG:
return np.uint
elif dtype == DTYPE.FLOAT:
return np.float32
elif dtype == DTYPE.DOUBLE:
return np.float64
elif dtype == DTYPE.BOOL:
return np.bool
elif dtype == DTYPE.STRING:
return np.dtype(str)
else:
raise TypeError("Unknown numpy type for the given TID: %d" % dtype)
def get_result_type(arr_of_dtypes):
sz = len(arr_of_dtypes)
if sz == 0:
raise ValueError("empty array of dtypes is provided!")
restype = arr_of_dtypes[0]
for i in range(0, sz):
restype = np.result_type(restype, arr_of_dtypes[i])
return restype
| 28.287671 | 79 | 0.572881 |
28853a8ac6e99a5121579ab822bb6590701865e2 | 2,854 | py | Python | src/score.py | lionking0000/recYnH | da1e4898c4148b399ed0f0a0848f5748679b7b98 | [
"MIT"
] | 2 | 2020-03-02T11:09:31.000Z | 2020-03-12T13:48:40.000Z | src/score.py | lionking0000/recYnH | da1e4898c4148b399ed0f0a0848f5748679b7b98 | [
"MIT"
] | null | null | null | src/score.py | lionking0000/recYnH | da1e4898c4148b399ed0f0a0848f5748679b7b98 | [
"MIT"
] | null | null | null | import os
VERBOSE = False
def run_cmd( cmd ):
if VERBOSE: print cmd
os.system( cmd )
def read_R_generated_matrix( file ):
rownames = []
colnames = []
m = []
f = open( file )
for line in f.xreadlines():
if line[0] == "#": continue
rownames = line[:-1].split("\t")
break
for line in f.xreadlines():
fields = line[:-1].split("\t")
colnames.append( fields[0] )
m.append( [ float(x) for x in fields[1:] ] )
f.close()
return m, rownames, colnames
def save_matrix( output_file_path, m, rownames, colnames ):
# save originalIS
fout = open( output_file_path, "w" )
fout.write( "# This file contains 'Interaction Scores' generated by rec-YnH score function\n" )
output = "DB(Read 1) \ AD(Read 2)"
for name in rownames:
output += "\t"
output += name
fout.write( output + "\n" )
i = 0
for x in m:
j = 0
output = colnames[i]
for y in x:
output += "\t"
output += "%f" % y
j += 1
i += 1
output += "\n"
fout.write( output )
fout.close()
def run( args ):
#print args.program # the experiments type ('Y2H'|'Y3H')
#print args.matrix1 # the interaction matrix of selection condition
#print args.matrix2 # the interaction matrix of non-selection condition
#print args.output # the output folder name
#print args.name # output name
if args.matrix2 == None:
args.matrix2 = args.matrix1
[ dirname1, m1 ] = os.path.split( args.matrix1 )
[ dirname2, m2 ] = os.path.split( args.matrix2 )
if args.output == None:
args.output = dirname1
# make output folder
if os.path.exists( args.output ) == False:
os.makedirs( args.output )
temp_dir = os.path.join( args.output, "tmp" )
if os.path.exists( temp_dir ) == False:
os.makedirs( temp_dir )
print "[ Normalizing matrix ]", args.matrix1, args.matrix2
#cmd = "Rscript /usr/local/bin/visualization.R %s %s %s %s %s" % ( args.program, args.matrix1, args.matrix2, args.output, args.name )
cmd = "Rscript ./src/visualization.R %s %s %s %s %s" % ( args.program, args.matrix1, args.matrix2, args.output, args.name )
run_cmd( cmd )
NIS_output_path = "%s/%s.nis.txt" % ( args.output, args.name )
print "[ Saving Interaction Score matrix ]", NIS_output_path
#IS_output_path = "%s/%s.is.txt" % ( args.output, args.name )
#if os.path.exists( IS_output_path ):
# m, rownames, colnames = read_R_generated_matrix( IS_output_path )
# save_matrix( IS_output_path, m, rownames, colnames )
if os.path.exists( NIS_output_path ):
m, rownames, colnames = read_R_generated_matrix( NIS_output_path )
save_matrix( NIS_output_path, m, rownames, colnames )
| 32.804598 | 137 | 0.598458 |
952d747347c55a3b6af09a29fcd29e43de392329 | 9,085 | py | Python | out/pe_file_format.py | horus-4ever/python-file-format-reader | 3e38647ff28682a598eb0bd0e2547fa5db7b3d38 | [
"MIT"
] | 1 | 2021-05-01T12:46:06.000Z | 2021-05-01T12:46:06.000Z | out/pe_file_format.py | horus-4ever/python-file-format-reader | 3e38647ff28682a598eb0bd0e2547fa5db7b3d38 | [
"MIT"
] | null | null | null | out/pe_file_format.py | horus-4ever/python-file-format-reader | 3e38647ff28682a598eb0bd0e2547fa5db7b3d38 | [
"MIT"
] | null | null | null |
import sys
FILE = sys.argv[1]
reader = open(FILE, "rb")
def _runtime_lookup(name):
try:
return _func_wrapper(getattr(MAPS, name).__getitem__)
except AttributeError:
pass
try:
return _func_wrapper(getattr(BITFLAGS, name).get)
except AttributeError:
print(f"No such map or bitflag {name}")
sys.exit(0)
def _func_wrapper(func):
def inner_wrapper(self, *args, **kwargs):
return func(*args, **kwargs)
return inner_wrapper
_int = _func_wrapper(int)
_hex = _func_wrapper(hex)
_bin = _func_wrapper(bin)
def _to_bytes(rule, value):
return int.to_bytes(value, rule.to_read, "little")
def _str(rule, value):
return _to_bytes(rule, value).decode("ascii")
class Symbol:
__symbols__ = {}
def __new__(cls, value):
if value in cls.__symbols__:
return cls.__symbols__[value]
obj = object.__new__(cls)
cls.__symbols__[value] = obj
return obj
def __init__(self, value):
self.value = value
def __repr__(self):
return self.value
def __eq__(self, other):
return self is other
class Flag:
def __init__(self, number, *symbols):
self.number = number
self.symbols = symbols
def __or__(self, other):
return Flag(self.number | other.number, *self.symbols, *other.symbols)
def __int__(self):
return self.number
def __repr__(self):
return " | ".join(str(symbol) for symbol in self.symbols)
class Bitflag:
def __init_subclass__(cls):
cls.__members__ = [item[0] for item in vars(cls).items() if isinstance(item[1], Flag)]
@classmethod
def get(cls, value):
result = None
for member in cls.__members__:
flag = getattr(cls, member)
if int(flag) & value != 0:
result = flag if result is None else result | flag
return result
class Attribute:
def __init__(self, value, changed_value):
self.value = value
self.changed_value = changed_value
def __int__(self):
return self.value
def __repr__(self):
return str(self.changed_value)
class ReaderRule:
def __init__(self, to_read, as_rule):
self.to_read = to_read
self.as_rule = as_rule
self.name = None
def __get__(self, instance, owner):
if instance is not None:
value = getattr(instance, self.name)
return Attribute(value, self.as_rule(self, value))
else:
return self
def __set_name__(self, cls, name):
self.name = f"_{name}"
class Reader:
def __init_subclass__(cls):
cls.__members__ = [item[0] for item in vars(cls).items() if isinstance(item[1], ReaderRule)]
@classmethod
def read(cls, stream):
reader = cls()
for name in cls.__members__:
new_name = f"_{name}"
to_read = getattr(cls, name).to_read
value = int.from_bytes(stream.read(to_read), "little")
setattr(reader, new_name, value)
return reader
class MAPS:
Machine = {332: 'x86 (32 bits)', 34404: 'amd64 (64 bits)'}
Subsystem = {0: 'IMAGE_SUBSYSTEM_UNKNOWN (0)', 1: 'IMAGE_SUBSYSTEM_NATIVE (1)', 2: 'IMAGE_SUBSYSTEM_WINDOWS_GUI (2)', 3: 'IMAGE_SUBSYSTEM_WINDOWS_CUI (3)'}
class BITFLAGS:
class Characteristics(Bitflag):
IMAGE_FILE_RELOCS_STRIPPED = Flag(1, "IMAGE_FILE_RELOCS_STRIPPED")
IMAGE_FILE_EXECUTABLE_IMAGE = Flag(2, "IMAGE_FILE_EXECUTABLE_IMAGE")
IMAGE_FILE_LINE_NUMS_STRIPPED = Flag(4, "IMAGE_FILE_LINE_NUMS_STRIPPED")
IMAGE_FILE_LOCAL_SYMS_STRIPPED = Flag(8, "IMAGE_FILE_LOCAL_SYMS_STRIPPED")
IMAGE_FILE_AGGRESSIVE_WS_TRIM = Flag(16, "IMAGE_FILE_AGGRESSIVE_WS_TRIM")
IMAGE_FILE_LARGE_ADDRESS_AWARE = Flag(32, "IMAGE_FILE_LARGE_ADDRESS_AWARE")
IMAGE_FILE_BYTES_REVERSED_LO = Flag(128, "IMAGE_FILE_BYTES_REVERSED_LO")
IMAGE_FILE_32BIT_MACHINE = Flag(256, "IMAGE_FILE_32BIT_MACHINE")
IMAGE_FILE_DEBUG_STRIPPED = Flag(512, "IMAGE_FILE_DEBUG_STRIPPED")
IMAGE_FILE_REMOVABLE_RUN_FROM_SWAP = Flag(1024, "IMAGE_FILE_REMOVABLE_RUN_FROM_SWAP")
IMAGE_FILE_NET_RUN_FROM_SWAP = Flag(2048, "IMAGE_FILE_NET_RUN_FROM_SWAP")
IMAGE_FILE_SYSTEM = Flag(4096, "IMAGE_FILE_SYSTEM")
IMAGE_FILE_DLL = Flag(8192, "IMAGE_FILE_DLL")
IMAGE_FILE_UP_SYSTEM_ONLY = Flag(16384, "IMAGE_FILE_UP_SYSTEM_ONLY")
IMAGE_FILE_BYTES_REVERSED_HI = Flag(32768, "IMAGE_FILE_BYTES_REVERSED_HI")
class DllCharacteristics(Bitflag):
IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA = Flag(32, "IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA")
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = Flag(64, "IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE")
IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY = Flag(128, "IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY")
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = Flag(256, "IMAGE_DLLCHARACTERISTICS_NX_COMPAT")
IMAGE_DLLCHARACTERISTICS_NO_ISOLATION = Flag(512, "IMAGE_DLLCHARACTERISTICS_NO_ISOLATION")
IMAGE_DLLCHARACTERISTICS_NO_SEH = Flag(1024, "IMAGE_DLLCHARACTERISTICS_NO_SEH")
IMAGE_DLLCHARACTERISTICS_NO_BIND = Flag(2048, "IMAGE_DLLCHARACTERISTICS_NO_BIND")
IMAGE_DLLCHARACTERISTICS_APPCONTAINER = Flag(4096, "IMAGE_DLLCHARACTERISTICS_APPCONTAINER")
IMAGE_DLLCHARACTERISTICS_WDM_DRIVER = Flag(8192, "IMAGE_DLLCHARACTERISTICS_WDM_DRIVER")
IMAGE_DLLCHARACTERISTICS_GUARD_CF = Flag(16384, "IMAGE_DLLCHARACTERISTICS_GUARD_CF")
IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE = Flag(32768, "IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE")
class READERS:
class IMAGE_DOS_HEADER(Reader):
magic_number = ReaderRule(2, _to_bytes)
useless = ReaderRule(58, _hex)
pe_header_address = ReaderRule(4, _hex)
class PE_IMAGE_HEADER(Reader):
signature = ReaderRule(4, _to_bytes)
machine = ReaderRule(2, _runtime_lookup('Machine'))
section_numbers = ReaderRule(2, _hex)
time_stamp = ReaderRule(4, _hex)
useless = ReaderRule(8, _hex)
optional_header_size = ReaderRule(2, _int)
characteristics = ReaderRule(2, _runtime_lookup('Characteristics'))
class PE_OPTIONAL_HEADER(Reader):
magic_number = ReaderRule(2, _to_bytes)
major_linker_version = ReaderRule(1, _hex)
minor_linker_version = ReaderRule(1, _hex)
total_size = ReaderRule(4, _hex)
data_section_size = ReaderRule(4, _hex)
bss_section_size = ReaderRule(4, _hex)
entry_point_address = ReaderRule(4, _hex)
base_of_code = ReaderRule(4, _hex)
base_of_data = ReaderRule(4, _hex)
class WINDOWS_FIELDS(Reader):
image_base = ReaderRule(4, _hex)
section_alignement = ReaderRule(4, _hex)
file_alignement = ReaderRule(4, _hex)
major_os_version = ReaderRule(2, _hex)
minor_os_version = ReaderRule(2, _hex)
major_image_version = ReaderRule(2, _hex)
minor_image_version = ReaderRule(2, _hex)
major_subsystem_version = ReaderRule(2, _hex)
minor_subsystem_version = ReaderRule(2, _hex)
win_version_value = ReaderRule(4, _hex)
image_size = ReaderRule(4, _hex)
headers_size = ReaderRule(4, _hex)
checksum = ReaderRule(4, _hex)
subsystem = ReaderRule(2, _runtime_lookup('Subsystem'))
dll_characeristics = ReaderRule(2, _runtime_lookup('DllCharacteristics'))
stack_reserve_size = ReaderRule(4, _hex)
stack_commit_size = ReaderRule(4, _hex)
heap_reserve_size = ReaderRule(4, _hex)
heap_commit_size = ReaderRule(4, _hex)
loader_flags = ReaderRule(4, _hex)
number_of_rva_and_sizes = ReaderRule(4, _hex)
class DATA_DIRECTORIES(Reader):
export_table = ReaderRule(4, _hex)
export_table_size = ReaderRule(4, _hex)
import_table = ReaderRule(4, _hex)
import_table_size = ReaderRule(4, _hex)
ressource_table = ReaderRule(4, _hex)
ressource_table_size = ReaderRule(4, _hex)
exception_table = ReaderRule(4, _hex)
exception_table_size = ReaderRule(4, _hex)
certificate_table = ReaderRule(4, _hex)
certificate_table_size = ReaderRule(4, _hex)
base_relocation_table = ReaderRule(4, _hex)
base_relocation_table_size = ReaderRule(4, _hex)
debug = ReaderRule(4, _hex)
debug_size = ReaderRule(4, _hex)
architecture_data = ReaderRule(4, _hex)
architecture_data_size = ReaderRule(4, _hex)
global_ptr = ReaderRule(4, _hex)
useless_one = ReaderRule(4, _hex)
tls_table = ReaderRule(4, _hex)
tls_table_size = ReaderRule(4, _hex)
IMAGE_DOS_HEADER = getattr(READERS, 'IMAGE_DOS_HEADER').read(reader)
reader.seek(int(IMAGE_DOS_HEADER.pe_header_address), 0)
PE_IMAGE_HEADER = getattr(READERS, 'PE_IMAGE_HEADER').read(reader)
PE_OPTIONAL_HEADER = getattr(READERS, 'PE_OPTIONAL_HEADER').read(reader)
WINDOWS_FIELDS = getattr(READERS, 'WINDOWS_FIELDS').read(reader)
DATA_DIRECTORIES = getattr(READERS, 'DATA_DIRECTORIES').read(reader) | 39.672489 | 159 | 0.69268 |
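The `ReaderRule`/`Reader` machinery above is a descriptor pattern: each rule records a byte count and a rendering function, and `Reader.read` walks the declared fields in order, pulling little-endian integers off the stream. A minimal standalone version of the same idea, exercised against an in-memory buffer (the field names here are illustrative, not the full PE layout):

```python
import io

class Field:
    """Read `size` little-endian bytes; render the value with `show`."""
    def __init__(self, size, show=hex):
        self.size, self.show = size, show

    def __set_name__(self, owner, name):
        self.name = '_' + name

    def __get__(self, obj, owner):
        if obj is None:
            return self
        return self.show(getattr(obj, self.name))

class Struct:
    def __init_subclass__(cls):
        # Collect declared fields in definition order, like Reader does.
        cls._fields = [(n, f) for n, f in vars(cls).items()
                       if isinstance(f, Field)]

    @classmethod
    def read(cls, stream):
        obj = cls()
        for name, field in cls._fields:
            raw = stream.read(field.size)
            setattr(obj, '_' + name, int.from_bytes(raw, 'little'))
        return obj

class DosHeader(Struct):
    magic = Field(2, show=lambda v: v.to_bytes(2, 'little'))
    e_lfanew = Field(4)          # rendered with the default hex()

buf = io.BytesIO(b'MZ' + (0x80).to_bytes(4, 'little'))
hdr = DosHeader.read(buf)
print(hdr.magic, hdr.e_lfanew)
```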
0ccdafea09b3e339af212afff484eac41d67ab96 | 239 | py | Python | netman/core/validator.py | internap/netman | 8760f168cf056ce4428d239610514969e5f9bf56 | [
"Apache-2.0"
] | 38 | 2015-11-30T10:11:42.000Z | 2022-02-10T18:31:44.000Z | netman/core/validator.py | internap/netman | 8760f168cf056ce4428d239610514969e5f9bf56 | [
"Apache-2.0"
] | 143 | 2015-12-10T19:00:42.000Z | 2020-08-20T13:51:42.000Z | netman/core/validator.py | internap/netman | 8760f168cf056ce4428d239610514969e5f9bf56 | [
"Apache-2.0"
] | 15 | 2015-12-14T23:03:30.000Z | 2019-01-15T19:35:45.000Z | from netman.core.objects.exceptions import BadMplsIpState
def is_valid_mpls_state(state):
option = str(state).lower()
if option not in ['true', 'false']:
raise BadMplsIpState(state)
return {'state': option == 'true'}
| 26.555556 | 57 | 0.686192 |
d20268352296031922d33fd7d66b723f83ce7b5c | 3,411 | py | Python | mapmint-services/context/service.py | mapmint/mapmint | 6425c3178e6e3f3d036aed60e03eb9924e00f9f8 | [
"MIT"
] | 41 | 2015-02-10T10:34:42.000Z | 2022-02-25T07:52:12.000Z | mapmint-services/context/service.py | mapmint/mapmint | 6425c3178e6e3f3d036aed60e03eb9924e00f9f8 | [
"MIT"
] | 19 | 2015-01-06T11:48:35.000Z | 2021-09-03T15:40:23.000Z | mapmint-services/context/service.py | mapmint/mapmint | 6425c3178e6e3f3d036aed60e03eb9924e00f9f8 | [
"MIT"
] | 33 | 2015-02-20T09:34:57.000Z | 2021-05-26T14:18:37.000Z | # -*- coding: utf-8 -*-
###############################################################################
# Author: Gérald Fenoy, gerald.fenoy@geolabs.fr
# Copyright (c) 2010-2019, Cartoworks Inc.
###############################################################################
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################
import sys
import json
import time
import zoo
import authenticate.service as auth
import shortInteger
# Use this SQL command from sqlite3 command prompt before using this service
'''
SQLITE:
CREATE TABLE contexts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name varchar(25) UNIQUE,
layers text,
ext varchar(100)
);
POSTGRESQL:
CREATE TABLE contexts (
id serial PRIMARY KEY,
name varchar(25) UNIQUE,
layers text,
ext varchar(100)
);
'''
def saveContext(conf, inputs, outputs):
con = auth.getCon(conf)
prefix = auth.getPrefix(conf)
con.connect()
cur = con.conn.cursor()
newNameId = str(time.time()).split('.')[0]
name = shortInteger.shortURL(int(newNameId))
layers = ""
if 'length' in inputs["layers"]:
for i in inputs["layers"]["value"]:
if layers != '':
layers += ","
layers += i
else:
layers += inputs["layers"]["value"]
req = "INSERT INTO " + prefix + "contexts (name,layers,ext) VALUES ([_name_],[_layers_],[_extent_])"
con.pexecute_req([req, {"name": {"value": name, "format": "s"}, "layers": {"value": layers, "format": "s"},
"extent": {"value": inputs["extent"]["value"], "format": "s"}}])
con.conn.commit()
outputs["Result"]["value"] = conf["main"]["applicationAddress"] + "public/" + conf["senv"][
"last_map"] + ";c=" + name
return zoo.SERVICE_SUCCEEDED
def loadContext(conf, inputs, outputs):
con = auth.getCon(conf)
prefix = auth.getPrefix(conf)
con.connect()
conn = con.conn
cur = con.conn.cursor()
name = inputs["name"]["value"]
req = "SELECT ext,layers from " + prefix + "contexts where name = [_name_]"
con.pexecute_req([req, {"name": {"value": name, "format": "s"}}])
con.conn.commit()
res = con.cur.fetchall()
outputs["Result"]["value"] = json.dumps({"ext": res[0][0], "layers": res[0][1].split(',')})
return zoo.SERVICE_SUCCEEDED
| 37.9 | 111 | 0.615069 |
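`saveContext` turns a Unix timestamp into the public context name via `shortInteger.shortURL(...)`, a module not shown in this dump. Helpers of that name are commonly base-62 encoders that map an integer to a compact alphanumeric slug; a hypothetical equivalent (the alphabet and its ordering are assumptions, not the real shortInteger implementation):

```python
# Hypothetical base-62 alphabet; the real shortInteger module may differ.
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def short_url(n, alphabet=ALPHABET):
    """Base-62 encoder in the spirit of shortInteger.shortURL (sketch)."""
    if n == 0:
        return alphabet[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, len(alphabet))
        digits.append(alphabet[rem])
    return ''.join(reversed(digits))

print(short_url(1577836800))  # a time.time()-style integer -> short slug
```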
41c8f290ef339e1c25a51a8083c33abc653c6d51 | 5,353 | py | Python | test/functional/feature_versionbits_warning.py | bitcoin-rush/bitcoinrush | 4329a6a7b9ce7a2188225f4abfc307e68de7dae0 | [
"MIT"
] | null | null | null | test/functional/feature_versionbits_warning.py | bitcoin-rush/bitcoinrush | 4329a6a7b9ce7a2188225f4abfc307e68de7dae0 | [
"MIT"
] | null | null | null | test/functional/feature_versionbits_warning.py | bitcoin-rush/bitcoinrush | 4329a6a7b9ce7a2188225f4abfc307e68de7dae0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) 2016-2018 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test version bits warning system.
Generate chains with block versions that appear to be signalling unknown
soft-forks, and test that warning alerts are generated.
"""
import os
import re
from test_framework.blocktools import create_block, create_coinbase
from test_framework.messages import msg_block
from test_framework.mininode import P2PInterface, mininode_lock
from test_framework.test_framework import BitcoinrushTestFramework
from test_framework.util import wait_until
VB_PERIOD = 144 # versionbits period length for regtest
VB_THRESHOLD = 108 # versionbits activation threshold for regtest
VB_TOP_BITS = 0x20000000
VB_UNKNOWN_BIT = 27 # Choose a bit unassigned to any deployment
VB_UNKNOWN_VERSION = VB_TOP_BITS | (1 << VB_UNKNOWN_BIT)
WARN_UNKNOWN_RULES_MINED = "Unknown block versions being mined! It's possible unknown rules are in effect"
WARN_UNKNOWN_RULES_ACTIVE = "unknown new rules activated (versionbit {})".format(VB_UNKNOWN_BIT)
VB_PATTERN = re.compile("Warning: unknown new rules activated.*versionbit")
class VersionBitsWarningTest(BitcoinrushTestFramework):
def set_test_params(self):
self.setup_clean_chain = True
self.num_nodes = 1
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def setup_network(self):
self.alert_filename = os.path.join(self.options.tmpdir, "alert.txt")
# Open and close to create zero-length file
with open(self.alert_filename, 'w', encoding='utf8'):
pass
self.extra_args = [["-alertnotify=echo %s >> \"" + self.alert_filename + "\""]]
self.setup_nodes()
def send_blocks_with_version(self, peer, numblocks, version):
"""Send numblocks blocks to peer with version set"""
tip = self.nodes[0].getbestblockhash()
height = self.nodes[0].getblockcount()
block_time = self.nodes[0].getblockheader(tip)["time"] + 1
tip = int(tip, 16)
for _ in range(numblocks):
block = create_block(tip, create_coinbase(height + 1), block_time)
block.nVersion = version
block.solve()
peer.send_message(msg_block(block))
block_time += 1
height += 1
tip = block.sha256
peer.sync_with_ping()
def versionbits_in_alert_file(self):
"""Test that the versionbits warning has been written to the alert file."""
alert_text = open(self.alert_filename, 'r', encoding='utf8').read()
return VB_PATTERN.search(alert_text) is not None
def run_test(self):
node = self.nodes[0]
node.add_p2p_connection(P2PInterface())
# Mine one period worth of blocks
node.generate(VB_PERIOD)
self.log.info("Check that there is no warning if previous VB_BLOCKS have <VB_THRESHOLD blocks with unknown versionbits version.")
# Build one period of blocks with < VB_THRESHOLD blocks signaling some unknown bit
self.send_blocks_with_version(node.p2p, VB_THRESHOLD - 1, VB_UNKNOWN_VERSION)
node.generate(VB_PERIOD - VB_THRESHOLD + 1)
# Check that we're not getting any versionbit-related errors in get*info()
assert(not VB_PATTERN.match(node.getmininginfo()["warnings"]))
assert(not VB_PATTERN.match(node.getnetworkinfo()["warnings"]))
self.log.info("Check that there is a warning if >50 blocks in the last 100 were an unknown version")
# Build one period of blocks with VB_THRESHOLD blocks signaling some unknown bit
self.send_blocks_with_version(node.p2p, VB_THRESHOLD, VB_UNKNOWN_VERSION)
node.generate(VB_PERIOD - VB_THRESHOLD)
# Check that get*info() shows the 51/100 unknown block version error.
assert(WARN_UNKNOWN_RULES_MINED in node.getmininginfo()["warnings"])
assert(WARN_UNKNOWN_RULES_MINED in node.getnetworkinfo()["warnings"])
self.log.info("Check that there is a warning if previous VB_BLOCKS have >=VB_THRESHOLD blocks with unknown versionbits version.")
# Mine a period worth of expected blocks so the generic block-version warning
# is cleared. This will move the versionbit state to ACTIVE.
node.generate(VB_PERIOD)
# Stop-start the node. This is required because bitcoinrushd will only warn once about unknown versions or unknown rules activating.
self.restart_node(0)
# Generating one block guarantees that we'll get out of IBD
node.generate(1)
wait_until(lambda: not node.getblockchaininfo()['initialblockdownload'], timeout=10, lock=mininode_lock)
# Generating one more block will be enough to generate an error.
node.generate(1)
# Check that get*info() shows the versionbits unknown rules warning
assert(WARN_UNKNOWN_RULES_ACTIVE in node.getmininginfo()["warnings"])
assert(WARN_UNKNOWN_RULES_ACTIVE in node.getnetworkinfo()["warnings"])
# Check that the alert file shows the versionbits unknown rules warning
wait_until(lambda: self.versionbits_in_alert_file(), timeout=60)
if __name__ == '__main__':
VersionBitsWarningTest().main()
| 47.371681 | 140 | 0.712124 |
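The unknown-version blocks the test mines are built by OR-ing the versionbits `VB_TOP_BITS` prefix with the bit being signalled; the bit arithmetic in isolation:

```python
VB_TOP_BITS = 0x20000000        # top bits 001: versionbits signalling scheme
VB_UNKNOWN_BIT = 27             # a bit not assigned to any deployment

version = VB_TOP_BITS | (1 << VB_UNKNOWN_BIT)
print(hex(version))                           # 0x28000000
print(bool(version & (1 << VB_UNKNOWN_BIT)))  # the node sees bit 27 set
```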
e3d620a70cffe28f506d4c0cf64f1c63da85d7bd | 6,204 | py | Python | Incident-Response/Tools/grr/grr/core/grr_response_core/lib/parsers/linux_release_parser_test.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 1 | 2021-07-24T17:22:50.000Z | 2021-07-24T17:22:50.000Z | Incident-Response/Tools/grr/grr/core/grr_response_core/lib/parsers/linux_release_parser_test.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 2 | 2022-02-28T03:40:31.000Z | 2022-02-28T03:40:52.000Z | Incident-Response/Tools/grr/grr/core/grr_response_core/lib/parsers/linux_release_parser_test.py | sn0b4ll/Incident-Playbook | cf519f58fcd4255674662b3620ea97c1091c1efb | [
"MIT"
] | 2 | 2022-02-25T08:34:51.000Z | 2022-03-16T17:29:44.000Z | #!/usr/bin/env python
# -*- encoding: utf-8 -*-
"""Unit test for the linux distribution parser."""
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals
import io
import os
from absl import app
from grr_response_core.lib.parsers import linux_release_parser
from grr_response_core.lib.rdfvalues import anomaly as rdf_anomaly
from grr_response_core.lib.rdfvalues import paths as rdf_paths
from grr_response_core.lib.rdfvalues import protodict as rdf_protodict
from grr.test_lib import test_lib
class LinuxReleaseParserTest(test_lib.GRRBaseTest):
"""Test parsing of linux distribution collection."""
def setUp(self):
super().setUp()
self.parser_test_dir = os.path.join(self.base_path, "parser_test")
def testMalformedLsbReleaseFile(self):
path = os.path.join(self.parser_test_dir, "lsb-release-bad")
with io.open(path, "r") as f:
data = f.read()
parser = linux_release_parser.LsbReleaseParseHandler(data)
complete, result = parser.Parse()
self.assertFalse(complete)
self.assertTupleEqual((None, 0, 0), result)
def testGoodLsbReleaseFile(self):
path = os.path.join(self.parser_test_dir, "lsb-release")
with io.open(path, "r") as f:
data = f.read()
parser = linux_release_parser.LsbReleaseParseHandler(data)
complete, result = parser.Parse()
self.assertTrue(complete)
self.assertTupleEqual(("Ubuntu", 14, 4), result)
def testFallbackLsbReleaseFile(self):
path = os.path.join(self.parser_test_dir, "lsb-release-notubuntu")
with io.open(path, "r") as f:
data = f.read()
parser = linux_release_parser.LsbReleaseParseHandler(data)
complete, result = parser.Parse()
self.assertFalse(complete)
self.assertTupleEqual(("NotUbuntu", 0, 0), result)
def testReleaseFileRedHatish(self):
path = os.path.join(self.parser_test_dir, "oracle-release")
with io.open(path, "r") as f:
data = f.read()
parser = linux_release_parser.ReleaseFileParseHandler("OracleLinux")
parser(data)
complete, result = parser.Parse()
self.assertTrue(complete)
self.assertTupleEqual(("OracleLinux", 6, 5), result)
def testMalformedReleaseFileRedHatish(self):
path = os.path.join(self.parser_test_dir, "oracle-release-bad")
with io.open(path, "r") as f:
data = f.read()
parser = linux_release_parser.ReleaseFileParseHandler("OracleLinux")
parser(data)
complete, result = parser.Parse()
self.assertFalse(complete)
self.assertTupleEqual(("OracleLinux", 0, 0), result)
def _CreateTestData(self, testdata):
"""Create 'stats' and 'file_objects' lists for passing to ParseMultiple."""
pathspecs = []
files = []
for filepath, localfile in testdata:
files.append(open(localfile, "rb"))
p = rdf_paths.PathSpec(path=filepath)
pathspecs.append(p)
return pathspecs, files
def testEndToEndUbuntu(self):
parser = linux_release_parser.LinuxReleaseParser()
testdata = [
("/etc/lsb-release", os.path.join(self.parser_test_dir, "lsb-release")),
]
pathspecs, files = self._CreateTestData(testdata)
result = list(parser.ParseFiles(None, pathspecs, files)).pop()
self.assertIsInstance(result, rdf_protodict.Dict)
self.assertEqual("Ubuntu", result["os_release"])
self.assertEqual(14, result["os_major_version"])
self.assertEqual(4, result["os_minor_version"])
def testEndToEndOracleLinux(self):
parser = linux_release_parser.LinuxReleaseParser()
testdata = [
("/etc/lsb-release",
os.path.join(self.parser_test_dir, "lsb-release-notubuntu")),
("/etc/oracle-release",
os.path.join(self.parser_test_dir, "oracle-release")),
]
pathspecs, files = self._CreateTestData(testdata)
result = list(parser.ParseFiles(None, pathspecs, files)).pop()
self.assertIsInstance(result, rdf_protodict.Dict)
self.assertEqual("OracleLinux", result["os_release"])
self.assertEqual(6, result["os_major_version"])
self.assertEqual(5, result["os_minor_version"])
def testEndToEndAmazon(self):
parser = linux_release_parser.LinuxReleaseParser()
test_data = [
("/etc/system-release",
os.path.join(self.parser_test_dir, "amazon-system-release")),
]
pathspecs, file_objects = self._CreateTestData(test_data)
actual_result = list(parser.ParseFiles(None, pathspecs, file_objects))
expected_result = [
rdf_protodict.Dict({
"os_release": "AmazonLinuxAMI",
"os_major_version": 2018,
"os_minor_version": 3,
})
]
self.assertCountEqual(actual_result, expected_result)
def testEndToEndCoreOS(self):
parser = linux_release_parser.LinuxReleaseParser()
test_data = [
("/etc/os-release",
os.path.join(self.parser_test_dir, "coreos-os-release")),
]
pathspecs, file_objects = self._CreateTestData(test_data)
actual_result = list(parser.ParseFiles(None, pathspecs, file_objects))
expected_result = [
rdf_protodict.Dict({
"os_release": "Container Linux by CoreOS",
"os_major_version": 2023,
"os_minor_version": 4,
})
]
self.assertCountEqual(actual_result, expected_result)
def testEndToEndGoogleCOS(self):
parser = linux_release_parser.LinuxReleaseParser()
test_data = [
("/etc/os-release",
os.path.join(self.parser_test_dir, "google-cos-os-release")),
]
pathspecs, file_objects = self._CreateTestData(test_data)
actual_result = list(parser.ParseFiles(None, pathspecs, file_objects))
expected_result = [
rdf_protodict.Dict({
"os_release": "Container-Optimized OS",
"os_major_version": 69,
"os_minor_version": 0,
})
]
self.assertCountEqual(actual_result, expected_result)
def testAnomaly(self):
parser = linux_release_parser.LinuxReleaseParser()
result = list(parser.ParseFiles(None, [], []))
self.assertLen(result, 1)
self.assertIsInstance(result[0], rdf_anomaly.Anomaly)
def main(args):
test_lib.main(args)
if __name__ == "__main__":
app.run(main)
| 31.653061 | 80 | 0.693746 |
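The fixture files the test reads aren't part of this dump, but `/etc/lsb-release` on Ubuntu is a simple `KEY=value` file (`DISTRIB_ID=Ubuntu`, `DISTRIB_RELEASE=14.04`, ...). A minimal sketch of the parsing the test exercises, reproducing the expected `("Ubuntu", 14, 4)` split:

```python
def parse_lsb_release(text):
    """Parse KEY=value pairs from an lsb-release style file (sketch)."""
    fields = {}
    for line in text.splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            fields[key.strip()] = value.strip().strip('"')
    return fields

SAMPLE = ('DISTRIB_ID=Ubuntu\n'
          'DISTRIB_RELEASE=14.04\n'
          'DISTRIB_CODENAME=trusty\n')

info = parse_lsb_release(SAMPLE)
major, minor = (int(p) for p in info['DISTRIB_RELEASE'].split('.'))
print(info['DISTRIB_ID'], major, minor)  # Ubuntu 14 4
```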
cd4efc10879355904d75ed588afb3fbc432263b3 | 10,975 | py | Python | remo/voting/migrations/0007_auto__add_field_poll_bug__add_field_poll_automated_poll__chg_field_pol.py | glogiotatidis/remo | 1c4f55c63c8d03cbee776b60af042b8068d9f297 | [
"BSD-3-Clause"
] | null | null | null | remo/voting/migrations/0007_auto__add_field_poll_bug__add_field_poll_automated_poll__chg_field_pol.py | glogiotatidis/remo | 1c4f55c63c8d03cbee776b60af042b8068d9f297 | [
"BSD-3-Clause"
] | null | null | null | remo/voting/migrations/0007_auto__add_field_poll_bug__add_field_poll_automated_poll__chg_field_pol.py | glogiotatidis/remo | 1c4f55c63c8d03cbee776b60af042b8068d9f297 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'Poll.bug'
db.add_column('voting_poll', 'bug',
self.gf('django.db.models.fields.related.ForeignKey')(to=orm['remozilla.Bug'], null=True, blank=True),
keep_default=False)
# Adding field 'Poll.automated_poll'
db.add_column('voting_poll', 'automated_poll',
self.gf('django.db.models.fields.BooleanField')(default=False),
keep_default=False)
# Changing field 'Poll.slug'
db.alter_column('voting_poll', 'slug', self.gf('django.db.models.fields.SlugField')(max_length=255))
# Changing field 'Poll.name'
db.alter_column('voting_poll', 'name', self.gf('django.db.models.fields.CharField')(max_length=300))
def backwards(self, orm):
# Deleting field 'Poll.bug'
db.delete_column('voting_poll', 'bug_id')
# Deleting field 'Poll.automated_poll'
db.delete_column('voting_poll', 'automated_poll')
# Changing field 'Poll.slug'
db.alter_column('voting_poll', 'slug', self.gf('django.db.models.fields.SlugField')(max_length=100))
# Changing field 'Poll.name'
db.alter_column('voting_poll', 'name', self.gf('django.db.models.fields.CharField')(max_length=100))
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'remozilla.bug': {
'Meta': {'ordering': "['-bug_last_change_time']", 'object_name': 'Bug'},
'assigned_to': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'bugs_assigned'", 'null': 'True', 'to': "orm['auth.User']"}),
'bug_creation_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'bug_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'bug_last_change_time': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'cc': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'bugs_cced'", 'symmetrical': 'False', 'to': "orm['auth.User']"}),
'component': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'creator': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'bugs_created'", 'null': 'True', 'to': "orm['auth.User']"}),
'due_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'first_comment': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}),
'flag_name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '30', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'resolution': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '30'}),
'status': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '30'}),
'summary': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '500'}),
'updated_on': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'whiteboard': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '500'})
},
'voting.poll': {
'Meta': {'ordering': "['-created_on']", 'object_name': 'Poll'},
'automated_poll': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'bug': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['remozilla.Bug']", 'null': 'True', 'blank': 'True'}),
'created_by': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'range_polls_created'", 'to': "orm['auth.User']"}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {}),
'end': ('django.db.models.fields.DateTimeField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_notification': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '300'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255', 'blank': 'True'}),
'start': ('django.db.models.fields.DateTimeField', [], {}),
'task_end_id': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '256', 'null': 'True', 'blank': 'True'}),
'task_start_id': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '256', 'null': 'True', 'blank': 'True'}),
'users_voted': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'polls_voted'", 'symmetrical': 'False', 'through': "orm['voting.Vote']", 'to': "orm['auth.User']"}),
'valid_groups': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'valid_polls'", 'to': "orm['auth.Group']"})
},
'voting.radiopoll': {
'Meta': {'object_name': 'RadioPoll'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'poll': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'radio_polls'", 'to': "orm['voting.Poll']"}),
'question': ('django.db.models.fields.CharField', [], {'max_length': '500'})
},
'voting.radiopollchoice': {
'Meta': {'ordering': "['-votes']", 'object_name': 'RadioPollChoice'},
'answer': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'radio_poll': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'answers'", 'to': "orm['voting.RadioPoll']"}),
'votes': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
'voting.rangepoll': {
'Meta': {'object_name': 'RangePoll'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '500'}),
'poll': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'range_polls'", 'to': "orm['voting.Poll']"})
},
'voting.rangepollchoice': {
'Meta': {'ordering': "['-votes', 'nominee__last_name', 'nominee__first_name']", 'object_name': 'RangePollChoice'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'nominee': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'range_poll': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'choices'", 'to': "orm['voting.RangePoll']"}),
'votes': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
'voting.vote': {
'Meta': {'object_name': 'Vote'},
'date_voted': ('django.db.models.fields.DateField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'poll': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['voting.Poll']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
}
}
complete_apps = ['voting'] | 72.203947 | 201 | 0.565558 |
182330f9c5a6a796c456b6f446b06eefe4624238 | 11,396 | py | Python | setup.py | ArneBinder/transformers | ddaafd78fb9c98d4f7b5009fb1998deff4c3d6f1 | [
"Apache-2.0"
] | 1 | 2021-02-08T06:35:21.000Z | 2021-02-08T06:35:21.000Z | setup.py | ArneBinder/transformers | ddaafd78fb9c98d4f7b5009fb1998deff4c3d6f1 | [
"Apache-2.0"
] | null | null | null | setup.py | ArneBinder/transformers | ddaafd78fb9c98d4f7b5009fb1998deff4c3d6f1 | [
"Apache-2.0"
] | 3 | 2021-09-19T00:54:21.000Z | 2021-11-22T01:13:26.000Z | # Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/master/setup.py
To create the package for pypi.
1. Change the version in __init__.py, setup.py as well as docs/source/conf.py. Remove the master from the links in
the new models of the README:
(https://huggingface.co/transformers/master/model_doc/ -> https://huggingface.co/transformers/model_doc/)
then run `make fix-copies` to fix the index of the documentation.
2. Unpin specific versions from setup.py that use a git install.
3. Commit these changes with the message: "Release: VERSION"
4. Add a tag in git to mark the release: "git tag VERSION -m 'Adds tag VERSION for pypi' "
   Push the tag to git: git push --tags origin master
5. Build both the sources and the wheel. Do not change anything in setup.py between
   creating the wheel and the source distribution (obviously).
   For the wheel, run: "python setup.py bdist_wheel" in the top level directory.
   (this will build a wheel for the python version you use to build it).
   For the sources, run: "python setup.py sdist"
   You should now have a /dist directory with both .whl and .tar.gz source versions.
6. Check that everything looks correct by uploading the package to the pypi test server:
   twine upload dist/* -r pypitest
   (pypi suggests using twine as other methods upload files via plaintext.)
   You may have to specify the repository url, use the following command then:
   twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
   Check that you can install it in a virtualenv by running:
   pip install -i https://testpypi.python.org/pypi transformers
7. Upload the final version to actual pypi:
   twine upload dist/* -r pypi
8. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory.
9. Add the release version to docs/source/_static/js/custom.js and .circleci/deploy.sh
10. Update README.md to redirect to correct documentation.
11. Update the version in __init__.py, setup.py to the new version "-dev" and push to master.
"""
import os
import re
import shutil
from distutils.core import Command
from pathlib import Path
from setuptools import find_packages, setup
# Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466
stale_egg_info = Path(__file__).parent / "transformers.egg-info"
if stale_egg_info.exists():
print(
(
"Warning: {} exists.\n\n"
"If you recently updated transformers to 3.0 or later, this is expected,\n"
"but it may prevent transformers from installing in editable mode.\n\n"
"This directory is automatically generated by Python's packaging tools.\n"
"I will remove it now.\n\n"
"See https://github.com/pypa/pip/issues/5466 for details.\n"
).format(stale_egg_info)
)
shutil.rmtree(stale_egg_info)
# IMPORTANT:
# 1. all dependencies should be listed here with their version requirements if any
# 2. once modified, run: `make deps_table_update` to update src/transformers/dependency_versions_table.py
_deps = [
"black>=20.8b1",
"cookiecutter==1.7.2",
"dataclasses",
"datasets",
"faiss-cpu",
"fastapi",
"filelock",
"flake8>=3.8.3",
"flax>=0.2.2",
"fugashi>=1.0",
"importlib_metadata",
"ipadic>=1.0.0,<2.0",
"isort>=5.5.4",
"jax>=0.2.8",
"jaxlib>=0.1.59",
"keras2onnx",
"numpy>=1.17",
"onnxconverter-common",
"onnxruntime-tools>=1.4.2",
"onnxruntime>=1.4.0",
"packaging",
"parameterized",
"protobuf",
"psutil",
"pydantic",
"pytest",
"pytest-xdist",
"python>=3.6.0",
"recommonmark",
"regex!=2019.12.17",
"requests",
"sacremoses",
"scikit-learn",
"sentencepiece==0.1.91",
"soundfile",
"sphinx-copybutton",
"sphinx-markdown-tables",
"sphinx-rtd-theme==0.4.3", # sphinx-rtd-theme==0.5.0 introduced big changes in the style.
"sphinx==3.2.1",
"starlette",
"tensorflow-cpu>=2.3",
"tensorflow>=2.3",
"timeout-decorator",
"tokenizers==0.10.1rc1",
"torch>=1.0",
"tqdm>=4.27",
"unidic>=1.0.2",
"unidic_lite>=1.0.7",
"uvicorn",
]
# this is a lookup table with items like:
#
# tokenizers: "tokenizers==0.9.4"
# packaging: "packaging"
#
# some of the values are versioned whereas others aren't.
deps = {b: a for a, b in (re.findall(r"^(([^!=<>]+)(?:[!=<>].*)?$)", x)[0] for x in _deps)}
# since we save this data in src/transformers/dependency_versions_table.py it can be easily accessed from
# anywhere. If you need to quickly access the data from this table in a shell, you can do so easily with:
#
# python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([ deps[x] for x in sys.argv[1:]]))' tokenizers datasets
#
# Just pass the desired package names to that script as it's shown with 2 packages above.
#
# If transformers is not yet installed and the work is done from the cloned repo remember to add `PYTHONPATH=src` to the script above
#
# You can then feed this for example to `pip`:
#
# pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([ deps[x] for x in sys.argv[1:]]))' tokenizers datasets)
#
def deps_list(*pkgs):
return [deps[pkg] for pkg in pkgs]
class DepsTableUpdateCommand(Command):
"""
A custom distutils command that updates the dependency table.
usage: python setup.py deps_table_update
"""
description = "build runtime dependency table"
user_options = [
# format: (long option, short option, description).
("dep-table-update", None, "updates src/transformers/dependency_versions_table.py"),
]
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
content = [
"# THIS FILE HAS BEEN AUTOGENERATED. To update:",
"# 1. modify the `_deps` dict in setup.py",
"# 2. run `make deps_table_update``",
"deps = {",
entries,
"}",
"",
]
target = "src/transformers/dependency_versions_table.py"
print(f"updating {target}")
with open(target, "w", encoding="utf-8", newline="\n") as f:
f.write("\n".join(content))
extras = {}
extras["ja"] = deps_list("fugashi", "ipadic", "unidic_lite", "unidic")
extras["sklearn"] = deps_list("scikit-learn")
extras["tf"] = deps_list("tensorflow", "onnxconverter-common", "keras2onnx")
extras["tf-cpu"] = deps_list("tensorflow-cpu", "onnxconverter-common", "keras2onnx")
extras["torch"] = deps_list("torch")
if os.name == "nt": # windows
extras["retrieval"] = deps_list("datasets") # faiss is not supported on windows
extras["flax"] = [] # jax is not supported on windows
else:
extras["retrieval"] = deps_list("faiss-cpu", "datasets")
extras["flax"] = deps_list("jax", "jaxlib", "flax")
extras["tokenizers"] = deps_list("tokenizers")
extras["onnxruntime"] = deps_list("onnxruntime", "onnxruntime-tools")
extras["modelcreation"] = deps_list("cookiecutter")
extras["serving"] = deps_list("pydantic", "uvicorn", "fastapi", "starlette")
extras["speech"] = deps_list("soundfile")
extras["sentencepiece"] = deps_list("sentencepiece", "protobuf")
extras["testing"] = (
deps_list("pytest", "pytest-xdist", "timeout-decorator", "parameterized", "psutil", "datasets")
+ extras["retrieval"]
+ extras["modelcreation"]
+ extras["speech"]
)
extras["docs"] = deps_list("recommonmark", "sphinx", "sphinx-markdown-tables", "sphinx-rtd-theme", "sphinx-copybutton")
extras["quality"] = deps_list("black", "isort", "flake8")
extras["all"] = extras["tf"] + extras["torch"] + extras["flax"] + extras["sentencepiece"] + extras["tokenizers"]
extras["dev"] = (
extras["all"]
+ extras["testing"]
+ extras["quality"]
+ extras["ja"]
+ extras["docs"]
+ extras["sklearn"]
+ extras["modelcreation"]
)
extras["torchhub"] = deps_list(
"filelock",
"importlib_metadata",
"numpy",
"packaging",
"protobuf",
"regex",
"requests",
"sacremoses",
"sentencepiece",
"torch",
"tokenizers",
"tqdm",
)
# when modifying the following list, make sure to update src/transformers/dependency_versions_check.py
install_requires = [
deps["dataclasses"] + ";python_version<'3.7'", # dataclasses for Python versions that don't have it
deps["importlib_metadata"] + ";python_version<'3.8'", # importlib_metadata for Python versions that don't have it
deps["filelock"], # filesystem locks, e.g., to prevent parallel downloads
deps["numpy"],
deps["packaging"], # utilities from PyPA to e.g., compare versions
deps["regex"], # for OpenAI GPT
deps["requests"], # for downloading models over HTTPS
deps["sacremoses"], # for XLM
deps["tokenizers"],
deps["tqdm"], # progress bars in model download and training scripts
]
setup(
name="transformers",
version="4.4.0.dev0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)
author="Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors",
author_email="thomas@huggingface.co",
description="State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
keywords="NLP deep learning transformer pytorch tensorflow BERT GPT GPT-2 google openai CMU",
license="Apache",
url="https://github.com/huggingface/transformers",
package_dir={"": "src"},
packages=find_packages("src"),
extras_require=extras,
entry_points={"console_scripts": ["transformers-cli=transformers.commands.transformers_cli:main"]},
python_requires=">=3.6.0",
install_requires=install_requires,
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
cmdclass={"deps_table_update": DepsTableUpdateCommand},
)
| 36.292994 | 233 | 0.67357 |
3af51437903540119d8d80bb0bc088f669ca3ecd | 2,652 | py | Python | contrib/attic/jumpy/setup.py | eric-erki/deeplearning4j | b9d462f66879e9315767b70190bd2ab31b9a3275 | [
"Apache-2.0"
] | null | null | null | contrib/attic/jumpy/setup.py | eric-erki/deeplearning4j | b9d462f66879e9315767b70190bd2ab31b9a3275 | [
"Apache-2.0"
] | null | null | null | contrib/attic/jumpy/setup.py | eric-erki/deeplearning4j | b9d462f66879e9315767b70190bd2ab31b9a3275 | [
"Apache-2.0"
] | null | null | null | # /* ******************************************************************************
# * Copyright (c) 2021 Deeplearning4j Contributors
# *
# * This program and the accompanying materials are made available under the
# * terms of the Apache License, Version 2.0 which is available at
# * https://www.apache.org/licenses/LICENSE-2.0.
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# * License for the specific language governing permissions and limitations
# * under the License.
# *
# * SPDX-License-Identifier: Apache-2.0
# ******************************************************************************/
################################################################################
#
# This program and the accompanying materials are made available under the
# terms of the Apache License, Version 2.0 which is available at
# https://www.apache.org/licenses/LICENSE-2.0.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# SPDX-License-Identifier: Apache-2.0
################################################################################
from setuptools import setup
from setuptools import find_packages
setup(name='jumpy',
version='0.2.4',
description='Numpy and nd4j interop',
long_description='Mapping of the numpy & nd4j array representations',
author='Adam Gibson',
author_email='adam@skymind.io',
classifiers=[
'Development Status :: 3 - Alpha',
'License :: OSI Approved :: Apache Software License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
'Topic :: Software Development :: Libraries',
],
keywords='numpy jumpy java nd4j deeplearning4j',
url='https://github.com/eclipse/deeplearning4j.git',
license='Apache',
setup_requires=['Cython', 'pytest-runner'],
install_requires=['Cython', 'requests', 'pydl4j', 'numpy'],
extras_require={
'spark': ['pyspark'],
'tests': ['pytest',
'pytest-pep8',
'mock'],
},
packages=find_packages())
| 42.774194 | 84 | 0.585973 |
6c4f3cf2e292b6d3e2945f9435fbe4e56b1166ef | 5,119 | py | Python | convexgeometry/optimization.py | jtanderson/convexgeometry | 572236e7f7374eff309391f65a3b91dcdc9c31ca | [
"MIT"
] | 1 | 2021-03-02T21:53:49.000Z | 2021-03-02T21:53:49.000Z | convexgeometry/optimization.py | jtanderson/convexgeometry | 572236e7f7374eff309391f65a3b91dcdc9c31ca | [
"MIT"
] | null | null | null | convexgeometry/optimization.py | jtanderson/convexgeometry | 572236e7f7374eff309391f65a3b91dcdc9c31ca | [
"MIT"
] | 1 | 2021-06-08T20:48:53.000Z | 2021-06-08T20:48:53.000Z | import numpy as np
import numpy.linalg as la
def CentralCutEllipsoid(eps, sepK, n, R):
"""
Solves the following problem:
Given a rational number eps > 0 and circumscriebd convex set (K; n, R)
given by an oracle sepK that, for any y and any rational delta > 0,
either asserts that y in S(K, delta) or finds a vector c with norm(c, inf) = 1
such that c.x <= c.y + delta for every x in K.
    Returns a pair (a, A), one of the following:
    a) a vector a in S(K,eps), in which case A is None
    b) a point a and a pd n-by-n matrix A such that K is a subset of E(A,a)
       and vol(E(A,a)) < eps
"""
    N = int(np.ceil(5*n*np.abs(np.log2(eps)) + 5*n*n*np.abs(np.log2(2*R))))
p = 8*N
delta = 2**(-p)
    a = np.zeros(n)
    A = (R ** 2) * np.eye(n)
for k in range(0,N):
        c = sepK(a, delta)
if( isinstance(c,bool) ):
return a, None
else:
# c is a vector with inf norm = 1 and c.x <= c.y + delta for all x in K
#c /= np.max(c)
# TODO: officially supposed to round to only p digits beyond the decimal
            a = a - (1/(n+1)) * ((A @ c) / np.sqrt(c.T @ A @ c))
            # note: with 1-D arrays, the rank-one term A c c^T A is np.outer(A@c, A@c)
            A = ((2*n*n + 3)/(2*n*n)) * (A - (2/(n+1)) * (np.outer(A @ c, A @ c) / (c.T @ A @ c)))
    return a, A
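As a sanity check of the update formulas in the loop above, here is a hedged, self-contained sketch of a single central-cut step on a 2-D instance. The helper name `central_cut_step` is hypothetical; it mirrors the (a, A) update, including the GLS blow-up factor (2n^2 + 3)/(2n^2), and uses `np.outer` for the rank-one term so that 1-D vectors work:

```python
import numpy as np

def central_cut_step(A, a, c, n):
    # Replace E(A, a) by a smaller ellipsoid containing the half-ellipsoid
    # E(A, a) intersected with {x : c.x <= c.a}.
    Ac = A @ c
    q = c @ A @ c
    a_new = a - (1.0 / (n + 1)) * (Ac / np.sqrt(q))
    A_new = ((2.0 * n * n + 3.0) / (2.0 * n * n)) * (
        A - (2.0 / (n + 1)) * (np.outer(Ac, Ac) / q))
    return A_new, a_new

# Unit disk, cut direction c = e1: keep the half with x[0] <= 0.
A1, a1 = central_cut_step(np.eye(2), np.zeros(2), np.array([1.0, 0.0]), 2)

# Volume scales with sqrt(det A); one step shrinks it below 1.
shrink = float(np.sqrt(np.linalg.det(A1)))

# Every kept boundary point of the old disk stays inside the new ellipsoid.
kept = all(
    (x - a1) @ np.linalg.inv(A1) @ (x - a1) <= 1.0 + 1e-12
    for t in np.linspace(np.pi / 2, 3 * np.pi / 2, 50)
    for x in [np.array([np.cos(t), np.sin(t)])]
)
```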
"""
Poly-time algorithm that solves weak violation problem for every circumscribed
convex body (K; n, R) given by a weak separation oracle.
Comes from proof of 4.2.2 in GLS book
"""
def WSep2Viol(c, gamma, epsilon, n, R, sepK):
"""
Solves the following problem: given a body (K; n, R), and a
separation oracle, a vector c, and gamma, epsilon > 0;
either
i) assert that <c,x> <= gamma + epsilon for all x in S(K,-eps), or
- <c,x> <= gamma is almost valid
ii) find a vector y in S(K,eps) with <c,y> >= gamma - eps
- y almost violates <c,x> <= gamma
"""
    # normalize so that norm(c, inf) = 1; gamma is rescaled so that the
    # constraint <c,x> <= gamma stays equivalent
    cscale = np.max(np.abs(c))
    c = c / cscale
    gamma = gamma / cscale
eps_p = epsilon / (2*n)
    def sep_kprime(y, delta, n=n, R=R):
if not (np.dot(c, y) >= gamma + delta):
return -c
else:
# second parameter is delta1
d = sepK(y, np.min((eps_p, delta/n)))
if isinstance(d, bool):
# asserted that y is in S(K, delta1)
return True
else:
# got a vector d so that <x,d> <= <y,d> + delta for all x in S(K, -delta1)
return d / np.max(d) # just to make sure?
    # now, run the ellipsoid algorithm with sep_kprime as the oracle
    eps1 = min(eps_p, (eps_p/n)**n)
    a, A = CentralCutEllipsoid(eps1, sep_kprime, n, R)
if A is None:
# gave a point a in S(Kprime, eps1)
return a
else:
# gave an ellipsoid E of volume at most eps1 containing K
# so we assert that c.x < gamma + eps
return True
# TODO -- Theorem 4.3.2
def WMem2Viol(c, gamma, eps, memK, n, R, r, a0):
"""
Given a body (K; n, R, r, a0), where
memK: weak-membership oracle
n: dimensions
    R: circumradius so that S(0,R) contains K
    r: inner radius so that S(a0,r) is contained in K
Either
    i) asserts that c.T @ x <= gamma + eps for all x in S(K,-eps)
    ii) finds a vector y in S(K, eps) with c.T @ y >= gamma - eps
"""
pass
# Lemma 4.3.3
def StrongerMem(y, delta, memK, n, R, r, a):
"""
Given vector y, error delta > 0,
and weak-membership oracle
for a body (K; n, R, r, a0), where
n: dimensions
    R: circumradius so that S(0,R) contains K
    r: inner radius so that S(a0,r) is contained in K
Either
i) asserts that y in S(K,delta)
ii) asserts that y not in K
"""
assert(delta>0)
    if la.norm(y - a, ord=2) >= 2*R:
        # y is too far from the center point a to possibly lie in K
        return False
yprime = (1 - delta/(4*R))*y + (delta*a)/(4*R)
deltaprime = (r*delta)/(4*R)
test = memK(yprime, deltaprime, n, R)
assert(isinstance(test,bool))
return test
# Lemma 4.3.4
def SeparationNonEllipsoid(y, delta, beta, memK, n, R, r, a):
"""
Either
i) asserts that y in S(K,delta),
ii) finds vector c such that c != 0 and for every x in K,
c.T @ x <= c.T @ y + (delta + beta |x-y|) |c|
"""
if la.norm(y - a) >= 2*R:
return (y-a) / la.norm(y - a)
if StrongerMem(y, delta, memK, n, R, r, a):
return True
# StrongerMem says y not in K
alpha = np.arctan(beta/(4*n*n))
delta1 = (r*delta) / (R + r)
r1 = (r*delta1) / (4*n*R)
eps1 = (beta*beta*r1) / (16*n*n*n*n)
in_v = np.copy(a)
out_v = np.copy(y)
# binary search to straddle the boundary close enough
while la.norm(in_v - out_v) > delta1/(2*n):
new_v = (in_v + out_v) / 2
if memK(new_v, eps1, n, R):
in_v = np.copy(new_v)
else:
out_v = np.copy(new_v)
vpp = (1/(r + eps1)) * ((r-r1)*in_v + (r1 + eps1)*a)
# we know S(vpp, r1) subseteq K
# now need to re-center the algorithm at vpp so that vpp == 0...
# TODO finish, page 109
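The `in_v` / `out_v` loop above is a plain bisection toward the boundary of K. A hedged, self-contained sketch on a concrete body, the unit ball with an exact membership test (`bisect_to_boundary` is a hypothetical helper; the real code only has weak membership and stops at tolerance delta1/(2*n)):

```python
import numpy as np

def bisect_to_boundary(member, inside, outside, tol):
    # Halve the segment [inside, outside] until it is shorter than tol;
    # `inside` always stays inside K, `outside` always stays outside K.
    while np.linalg.norm(inside - outside) > tol:
        mid = (inside + outside) / 2.0
        if member(mid):
            inside = mid
        else:
            outside = mid
    return inside, outside

# K = unit ball, exact membership; start from the center and a far point.
in_v, out_v = bisect_to_boundary(
    lambda x: np.linalg.norm(x) <= 1.0,
    np.zeros(2), np.array([2.0, 0.0]), 1e-6)
# Both points now straddle the boundary of K within tolerance 1e-6.
```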
# TODO
def WVal():
pass
# TODO
def WSep():
pass
# TODO
def Wmem():
pass
| 29.589595 | 95 | 0.544247 |
50be83eeff14fa7efb8c148221e43e942800ead2 | 65,170 | py | Python | src/hangar/checkout.py | KarenImmanuel/hangar-py | 2a5caff259ad699db56676f14a70cb94e75d8a5b | [
"Apache-2.0"
] | null | null | null | src/hangar/checkout.py | KarenImmanuel/hangar-py | 2a5caff259ad699db56676f14a70cb94e75d8a5b | [
"Apache-2.0"
] | null | null | null | src/hangar/checkout.py | KarenImmanuel/hangar-py | 2a5caff259ad699db56676f14a70cb94e75d8a5b | [
"Apache-2.0"
] | null | null | null | import atexit
import os
import weakref
from contextlib import suppress
from functools import partial
from collections import namedtuple
from uuid import uuid4
import lmdb
import warnings
from .arrayset import Arraysets
from .diff import ReaderUserDiff, WriterUserDiff
from .merger import select_merge_algorithm
from .metadata import MetadataReader, MetadataWriter
from .records import commiting, hashs, heads
from .utils import cm_weakref_obj_proxy
class ReaderCheckout(object):
"""Checkout the repository as it exists at a particular branch.
This class is instantiated automatically from a repository checkout
operation. This object will govern all access to data and interaction methods
the user requests.
>>> co = repo.checkout()
>>> isinstance(co, ReaderCheckout)
True
If a commit hash is provided, it will take precedent over the branch name
parameter. If neither a branch not commit is specified, the staging
environment's base branch ``HEAD`` commit hash will be read.
>>> co = repo.checkout(commit='foocommit')
>>> co.commit_hash
'foocommit'
>>> co.close()
>>> co = repo.checkout(branch='testbranch')
>>> co.commit_hash
'someothercommithashhere'
>>> co.close()
Unlike :class:`WriterCheckout`, any number of :class:`ReaderCheckout`
objects can exist on the repository independently. Like the
``write-enabled`` variant, the :meth:`close` method should be called after
performing the necessary operations on the repo. However, as there is no
concept of a ``lock`` for ``read-only`` checkouts, this is just to free up
memory resources, rather than changing recorded access state.
In order to reduce the chance that the python interpreter is shut down
without calling :meth:`close`, - a common mistake during ipython / jupyter
sessions - an `atexit <https://docs.python.org/3/library/atexit.html>`_
hook is registered to :meth:`close`. If properly closed by the user, the
    hook is unregistered after completion with no ill effects. So long as the
    process is NOT terminated via non-python ``SIGKILL``, fatal internal python
    error, or special ``os exit`` methods, cleanup will occur on interpreter
shutdown and resources will be freed. If a non-handled termination method
does occur, the implications of holding resources varies on a per-OS basis.
While no risk to data integrity is observed, repeated misuse may require a
system reboot in order to achieve expected performance characteristics.
"""
def __init__(self,
base_path: os.PathLike, labelenv: lmdb.Environment,
dataenv: lmdb.Environment, hashenv: lmdb.Environment,
branchenv: lmdb.Environment, refenv: lmdb.Environment,
commit: str):
"""Developer documentation of init method.
Parameters
----------
base_path : str
directory path to the Hangar repository on disk
labelenv : lmdb.Environment
db where the label dat is stored
dataenv : lmdb.Environment
db where the checkout record data is unpacked and stored.
hashenv : lmdb.Environment
db where the hash records are stored.
branchenv : lmdb.Environment
db where the branch records are stored.
refenv : lmdb.Environment
db where the commit references are stored.
commit : str
specific commit hash to checkout
"""
self._commit_hash = commit
self._repo_path = base_path
self._labelenv = labelenv
self._dataenv = dataenv
self._hashenv = hashenv
self._branchenv = branchenv
self._refenv = refenv
self._is_conman = False
self._metadata = MetadataReader(
mode='r',
repo_pth=self._repo_path,
dataenv=self._dataenv,
labelenv=self._labelenv)
self._arraysets = Arraysets._from_commit(
repo_pth=self._repo_path,
hashenv=self._hashenv,
cmtrefenv=self._dataenv)
self._differ = ReaderUserDiff(
commit_hash=self._commit_hash,
branchenv=self._branchenv,
refenv=self._refenv)
atexit.register(self.close)
def _repr_pretty_(self, p, cycle):
"""pretty repr for printing in jupyter notebooks
"""
self.__verify_checkout_alive()
res = f'Hangar {self.__class__.__name__}\
\n Writer : False\
\n Commit Hash : {self._commit_hash}\
\n Num Arraysets : {len(self._arraysets)}\
\n Num Metadata : {len(self._metadata)}\n'
p.text(res)
def __repr__(self):
self.__verify_checkout_alive()
res = f'{self.__class__}('\
f'base_path={self._repo_path} '\
f'labelenv={self._labelenv} '\
f'dataenv={self._dataenv} '\
f'hashenv={self._hashenv} '\
f'commit={self._commit_hash})'
return res
def __enter__(self):
self._is_conman = True
return self
def __exit__(self, *exc):
self._is_conman = False
def __verify_checkout_alive(self):
"""Validates that the checkout object has not been closed
Raises
------
PermissionError
if the checkout was previously closed
"""
p_hasattr = partial(hasattr, self)
if not all(map(p_hasattr, ['_metadata', '_arraysets', '_differ'])):
e = PermissionError(
f'Unable to operate on past checkout objects which have been '
f'closed. No operation occurred. Please use a new checkout.')
raise e from None
def __getitem__(self, index):
"""Dictionary style access to arraysets and samples
Checkout object can be thought of as a "dataset" ("dset") mapping a
view of samples across arraysets.
>>> dset = repo.checkout(branch='master')
Get an arrayset contained in the checkout.
>>> dset['foo']
ArraysetDataReader
Get a specific sample from ``'foo'`` (returns a single array)
>>> dset['foo', '1']
np.array([1])
Get multiple samples from ``'foo'`` (returns a list of arrays, in order
of input keys)
>>> dset['foo', ['1', '2', '324']]
[np.array([1]), np.ndarray([2]), np.ndarray([324])]
Get sample from multiple arraysets (returns namedtuple of arrays, field
names = arrayset names)
>>> dset[('foo', 'bar', 'baz'), '1']
ArraysetData(foo=array([1]), bar=array([11]), baz=array([111]))
Get multiple samples from multiple arraysets (returns a list of namedtuples
of arrays sorted in input key order, field names = arrayset names)
>>> dset[('foo', 'bar'), ('1', '2')]
[ArraysetData(foo=array([1]), bar=array([11])),
ArraysetData(foo=array([2]), bar=array([22]))]
Get samples from all arraysets (shortcut syntax)
>>> out = dset[:, ('1', '2')]
>>> out = dset[..., ('1', '2')]
>>> out
[ArraysetData(foo=array([1]), bar=array([11]), baz=array([111])),
ArraysetData(foo=array([2]), bar=array([22]), baz=array([222]))]
>>> out = dset[:, '1']
>>> out = dset[..., '1']
>>> out
ArraysetData(foo=array([1]), bar=array([11]), baz=array([111]))
.. warning::
It is possible for an :class:`~.arrayset.Arraysets` name to be an
invalid field name for a ``namedtuple`` object. The python docs state:
Any valid Python identifier may be used for a fieldname except for
names starting with an underscore. Valid identifiers consist of
letters, digits, and underscores but do not start with a digit or
underscore and cannot be a keyword such as class, for, return,
global, pass, or raise.
In addition, names must be unique, and cannot contain a period
(``.``) or dash (``-``) character. If a namedtuple would normally be
returned during some operation, and the field name is invalid, a
:class:`UserWarning` will be emitted indicating that any suspect fields
names will be replaced with the positional index as is customary in the
python standard library. The ``namedtuple`` docs explain this by
saying:
If rename is true, invalid fieldnames are automatically replaced with
positional names. For example, ['abc', 'def', 'ghi', 'abc'] is
converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword def
and the duplicate fieldname abc.
The next section demonstrates the implications and workarounds for this
issue
As an example, if we attempted to retrieve samples from arraysets with
the names: ``['raw', 'data.jpeg', '_garba-ge', 'try_2']``, two of the
four would be renamed:
>>> out = dset[('raw', 'data.jpeg', '_garba-ge', 'try_2'), '1']
>>> print(out)
ArraysetData(raw=array([0]), _1=array([1]), _2=array([2]), try_2=array([3]))
>>> print(out._fields)
('raw', '_1', '_2', 'try_2')
>>> out.raw
array([0])
>>> out._2
array([2])
In cases where the input arraysets are explicitly specified, it is
guaranteed that the order of fields in the resulting tuple is exactly
what was requested in the input
>>> out = dset[('raw', 'data.jpeg', '_garba-ge', 'try_2'), '1']
>>> out._fields
('raw', '_1', '_2', 'try_2')
>>> reorder = dset[('data.jpeg', 'try_2', 'raw', '_garba-ge'), '1']
>>> reorder._fields
('_0', 'try_2', 'raw', '_3')
However, if an ``Ellipsis`` (``...``) or ``slice`` (``:``) syntax is
used to select arraysets, *the order in which arraysets are placed into
the namedtuple IS NOT predictable.* Should any arrayset have an invalid
field name, it will be renamed according to its positional index, but
will not contain any identifying mark. At the moment, there is no
direct way to infer the arrayset name from this structure alone. This
limitation will be addressed in a future release.
Do NOT rely on any observed patterns. For this corner-case, **the ONLY
guarantee we provide is that structures returned from multi-sample
queries have the same order in every ``ArraysetData`` tuple returned in
that query's result list!** Should another query be made with
unspecified ordering, you should assume that the indices of the
arraysets in the namedtuple would have changed from the previous
result!!
>>> print(dset.arraysets.keys())
['raw', 'data.jpeg', '_garba-ge', 'try_2']
>>> out = dset[:, '1']
>>> out._fields
('_0', 'raw', '_2', 'try_2')
>>>
>>> # ordering in a single query is preserved
...
>>> multi_sample = dset[..., ('1', '2')]
>>> multi_sample[0]._fields
('try_2', '_1', 'raw', '_3')
>>> multi_sample[1]._fields
('try_2', '_1', 'raw', '_3')
>>>
>>> # but it may change upon a later query
>>> multi_sample2 = dset[..., ('1', '2')]
>>> multi_sample2[0]._fields
('_0', '_1', 'raw', 'try_2')
>>> multi_sample2[1]._fields
('_0', '_1', 'raw', 'try_2')
Parameters
----------
index
Please see the detailed explanation above for full options.
The first element (or collection) specified must be ``str`` type and
correspond to arrayset name(s). Alternatively, the Ellipsis operator
(``...``) or unbounded slice operator (``:`` <==> ``slice(None)``) can
be used to indicate "select all" behavior.
If a second element (or collection) is present, the keys correspond to
sample names present within (all) the specified arraysets. If a key is
not present in even one arrayset, the entire ``get`` operation will
abort with ``KeyError``. If desired, the same selection syntax can be
used with the :meth:`~hangar.checkout.ReaderCheckout.get` method, which
will not Error in these situations, but simply return ``None`` values
in the appropriate position for keys which do not exist.
Returns
-------
:class:`~.arrayset.Arraysets`
single arrayset parameter, no samples specified
:class:`numpy.ndarray`
Single arrayset specified, single sample key specified
List[:class:`numpy.ndarray`]
Single arrayset, multiple samples; array data for each sample is
returned in the same order the sample keys were received.
List[NamedTuple[``*``:class:`numpy.ndarray`]]
Multiple arraysets, multiple samples. Each arrayset's name is used
as a field in the NamedTuple elements, each NamedTuple contains
arrays stored in each arrayset via a common sample key. Each sample
key's values are returned as an individual element in the List,
in the same order the sample keys were received.
Warns
-----
UserWarning
Arrayset names contain characters which are invalid as namedtuple fields.
Notes
-----
* All specified arraysets must exist
* All specified sample `keys` must exist in all specified arraysets,
otherwise a standard exception is thrown
* Slice syntax cannot be used in sample `keys` field
* Slice syntax for arrayset field cannot specify `start`, `stop`, or
`step` fields, it is solely a shortcut syntax for 'get all arraysets' in
the ``:`` or ``slice(None)`` form
.. seealso:
:meth:`~hangar.checkout.ReaderCheckout.get`
"""
try:
tmpconman = not self._is_conman
if tmpconman:
self.__verify_checkout_alive()
self.__enter__()
if isinstance(index, str):
return self.arraysets[index]
elif not isinstance(index, (tuple, list)):
raise TypeError(f'Unknown index: {index} type: {type(index)}')
if len(index) > 2:
raise ValueError(f'index of len > 2 not allowed: {index}')
arraysets, samples = index
return self.get(arraysets, samples, except_missing=True)
finally:
if tmpconman:
self.__exit__()
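The field-renaming behavior described in the warning above comes straight from the standard library's ``namedtuple(..., rename=True)`` option. A minimal, self-contained sketch of that behavior (the arrayset names here are purely illustrative, matching the docstring example):

```python
from collections import namedtuple

# Hypothetical arrayset names; two are invalid as namedtuple fields:
# 'data.jpeg' contains a period, '_garba-ge' starts with an underscore
# and contains a dash.
aset_names = ['raw', 'data.jpeg', '_garba-ge', 'try_2']

# rename=True silently replaces invalid names with positional names
# ('_1', '_2', ...) instead of raising a ValueError.
ArraysetData = namedtuple('ArraysetData', aset_names, rename=True)

print(ArraysetData._fields)  # ('raw', '_1', '_2', 'try_2')
```

Without ``rename=True`` the same constructor call raises ``ValueError``, which is exactly the condition the ``get`` method catches before emitting its ``UserWarning``.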
def get(self, arraysets, samples, *, except_missing=False):
"""View of sample data across arraysets gracefully handeling missing sample keys.
Please see :meth:`__getitem__` for full description. This method is
identical with a single exception: if a sample key is not present in an
arrayset, this method will place a null ``None`` value in its return
slot rather than throwing a ``KeyError`` like the dict style access
does.
Parameters
----------
arraysets: Union[str, Iterable[str], Ellipses, slice(None)]
Name(s) of the arraysets to query. The Ellipsis operator (``...``)
or unbounded slice operator (``:`` <==> ``slice(None)``) can be
used to indicate "select all" behavior.
samples: Union[str, int, Iterable[Union[str, int]]]
Name(s) of the samples to query
except_missing: bool, **KWARG ONLY**
If False, will not throw exceptions on missing sample key value.
Will raise KeyError if True and missing key found.
Returns
-------
:class:`~.arrayset.Arraysets`
single arrayset parameter, no samples specified
:class:`numpy.ndarray`
Single arrayset specified, single sample key specified
List[:class:`numpy.ndarray`]
Single arrayset, multiple samples; array data for each sample is
returned in the same order the sample keys were received.
List[NamedTuple[``*``:class:`numpy.ndarray`]]
Multiple arraysets, multiple samples. Each arrayset's name is used
as a field in the NamedTuple elements, each NamedTuple contains
arrays stored in each arrayset via a common sample key. Each sample
key's values are returned as an individual element in the List,
in the same order the sample keys were received.
Warns
-----
UserWarning
Arrayset names contain characters which are invalid as namedtuple fields.
"""
try:
tmpconman = not self._is_conman
if tmpconman:
self.__verify_checkout_alive()
self.__enter__()
# Arrayset Parsing
if (arraysets is Ellipsis) or isinstance(arraysets, slice):
arraysets = list(self._arraysets._arraysets.values())
elif isinstance(arraysets, str):
arraysets = [self._arraysets._arraysets[arraysets]]
elif isinstance(arraysets, (tuple, list)):
arraysets = [self._arraysets._arraysets[aname] for aname in arraysets]
else:
raise TypeError(f'Arraysets: {arraysets} type: {type(arraysets)}')
nAsets = len(arraysets)
try:
aset_names = [aset.name for aset in arraysets]
ArraysetData = namedtuple('ArraysetData', aset_names)
except ValueError:
warnings.warn(
'Arrayset names contain characters which are invalid as namedtuple fields. '
'All suspect field names will be replaced by their positional names '
'(ie "_0" for element 0, "_4" for element 4)', UserWarning)
ArraysetData = namedtuple('ArraysetData', aset_names, rename=True)
# Sample Parsing
if isinstance(samples, (str, int)):
samples = [samples]
elif not isinstance(samples, (tuple, list)):
raise TypeError(f'Samples idx: {samples} type: {type(samples)}')
nSamples = len(samples)
# Data Retrieval
asetsSamplesData = []
for aset in arraysets:
aset_samples = []
for sample in samples:
try:
arr = aset.get(sample)
except KeyError as e:
if except_missing:
raise e
arr = None
aset_samples.append(arr)
if nAsets == 1:
asetsSamplesData = aset_samples
if nSamples == 1:
asetsSamplesData = asetsSamplesData[0]
break
asetsSamplesData.append(aset_samples)
else: # N.B. for-else conditional (ie. 'no break')
tmp = map(ArraysetData._make, zip(*asetsSamplesData))
asetsSamplesData = list(tmp)
if len(asetsSamplesData) == 1:
asetsSamplesData = asetsSamplesData[0]
return asetsSamplesData
finally:
if tmpconman:
self.__exit__()
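The assembly step at the end of ``get`` pivots an arrayset-major list of per-sample values into sample-major namedtuples via ``zip(*...)`` and ``ArraysetData._make``. A small sketch of that pivot, with plain integers standing in for the numpy arrays:

```python
from collections import namedtuple

ArraysetData = namedtuple('ArraysetData', ['foo', 'bar'])

# One inner list per arrayset, one element per sample key; e.g. the
# result of querying samples '1' and '2' from arraysets 'foo' and 'bar'.
asetsSamplesData = [[1, 2],     # aset 'foo': sample '1', sample '2'
                    [11, 22]]   # aset 'bar': sample '1', sample '2'

# zip(*...) transposes the aset-major data into per-sample tuples,
# and _make builds one ArraysetData per sample key.
out = list(map(ArraysetData._make, zip(*asetsSamplesData)))
print(out)  # [ArraysetData(foo=1, bar=11), ArraysetData(foo=2, bar=22)]
```

This is why the docstring can promise that every ``ArraysetData`` tuple in one query's result list shares the same field order: all of them are built from a single ``ArraysetData`` type constructed once per call.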
@property
def arraysets(self) -> Arraysets:
"""Provides access to arrayset interaction object.
Can be used to either return the arraysets accessor for all elements or
a single arrayset instance by using dictionary style indexing.
>>> co = repo.checkout(write=False)
>>> len(co.arraysets)
1
>>> print(co.arraysets.keys())
['foo']
>>> fooAset = co.arraysets['foo']
>>> fooAset.dtype
np.fooDtype
>>> asets = co.arraysets
>>> fooAset = asets['foo']
>>> fooAset = asets.get('foo')
>>> fooAset.dtype
np.fooDtype
.. seealso::
The class :class:`~.arrayset.Arraysets` contains all methods
accessible by this property accessor
Returns
-------
:class:`~.arrayset.Arraysets`
weakref proxy to the arraysets object which behaves exactly like a
arraysets accessor class but which can be invalidated when the writer
lock is released.
"""
self.__verify_checkout_alive()
wr = cm_weakref_obj_proxy(self._arraysets)
return wr
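The property hands back a weakref-based proxy rather than the accessor itself, so that access fails loudly once :meth:`close` tears down the underlying object. The effect can be sketched with the standard library's ``weakref.proxy`` (the ``cm_weakref_obj_proxy`` helper used above wraps the same idea; the ``Accessor`` class here is a stand-in, not Hangar's actual accessor):

```python
import weakref

class Accessor:
    """Stand-in for the arraysets accessor object."""
    def keys(self):
        return ['foo']

target = Accessor()
proxy = weakref.proxy(target)
print(proxy.keys())  # ['foo'] -- proxy forwards to the live object

del target  # analogous to close() dropping the real accessor
try:
    proxy.keys()
except ReferenceError:
    print('proxy invalidated')  # stale handles now error instead of misbehaving
```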
@property
def metadata(self) -> MetadataReader:
"""Provides access to metadata interaction object.
.. seealso::
The class :class:`~hangar.metadata.MetadataReader` contains all methods
accessible by this property accessor
Returns
-------
MetadataReader
weakref proxy to the metadata object which behaves exactly like a
metadata class but which can be invalidated when the writer lock is
released.
"""
self.__verify_checkout_alive()
wr = cm_weakref_obj_proxy(self._metadata)
return wr
@property
def diff(self) -> ReaderUserDiff:
"""Access the differ methods for a read-only checkout.
.. seealso::
The class :class:`ReaderUserDiff` contains all methods accessible
by this property accessor
Returns
-------
ReaderUserDiff
weakref proxy to the differ object (and contained methods) which behaves
exactly like the differ class but which can be invalidated when the
writer lock is released.
"""
self.__verify_checkout_alive()
wr = weakref.proxy(self._differ)
return wr
@property
def commit_hash(self) -> str:
"""Commit hash this read-only checkout's data is read from.
>>> co.commit_hash
foohashdigesthere
Returns
-------
str
commit hash of the checkout
"""
self.__verify_checkout_alive()
return self._commit_hash
def close(self) -> None:
"""Gracefully close the reader checkout object.
Though not strictly required for reader checkouts (as opposed to
writers), closing the checkout after reading will free file handles and
system resources, which may improve performance for repositories with
multiple simultaneous read checkouts.
"""
self.__verify_checkout_alive()
with suppress(AttributeError):
self._arraysets._close()
for asetn in (self._arraysets._arraysets.keys()):
for attr in list(self._arraysets._arraysets[asetn].__dir__()):
with suppress(AttributeError, TypeError):
delattr(self._arraysets._arraysets[asetn], attr)
for attr in list(self._arraysets.__dir__()):
with suppress(AttributeError, TypeError):
# adding `_self_` addresses `WeakrefProxy` in `wrapt.ObjectProxy`
delattr(self._arraysets, f'_self_{attr}')
for attr in list(self._metadata.__dir__()):
with suppress(AttributeError, TypeError):
# adding `_self_` addresses `WeakrefProxy` in `wrapt.ObjectProxy`
delattr(self._metadata, f'_self_{attr}')
with suppress(AttributeError):
del self._arraysets
with suppress(AttributeError):
del self._metadata
with suppress(AttributeError):
del self._differ
del self._commit_hash
del self._repo_path
del self._labelenv
del self._dataenv
del self._hashenv
del self._branchenv
del self._refenv
del self._is_conman
atexit.unregister(self.close)
return
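The ``atexit.register`` / ``atexit.unregister`` pairing used by ``__init__`` and :meth:`close` is a plain standard-library pattern: the hook is a safety net for a forgotten close, and an explicit close removes its own hook so cleanup never runs twice. A minimal sketch of that lifecycle (the ``Resource`` class is illustrative, not Hangar code):

```python
import atexit

class Resource:
    def __init__(self):
        self.closed = False
        atexit.register(self.close)  # safety net if the user forgets close()

    def close(self):
        if self.closed:              # idempotent: harmless if called again
            return
        self.closed = True
        atexit.unregister(self.close)  # explicit close removes the hook

r = Resource()
r.close()
r.close()            # second call is a no-op
print(r.closed)      # True; the atexit hook has been unregistered
```

``atexit.unregister`` matches by equality, and two bound-method objects for the same instance and function compare equal, which is why registering ``self.close`` in one method and unregistering it in another works.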
# --------------- Write enabled checkout ---------------------------------------
class WriterCheckout(object):
"""Checkout the repository at the head of a given branch for writing.
This is the entry point for all writing operations to the repository, the
writer class records all interactions in a special ``"staging"`` area,
which is based off the state of the repository as it existed at the
``HEAD`` commit of a branch.
>>> co = repo.checkout(write=True)
>>> co.branch_name
'master'
>>> co.commit_hash
'masterheadcommithash'
>>> co.close()
At the moment, only one instance of this class can write data to the
staging area at a time. After the desired operations have been completed,
it is crucial to call :meth:`close` to release the writer lock. In
addition, after any changes have been made to the staging area, the branch
``HEAD`` cannot be changed. In order to checkout another branch ``HEAD``
for writing, you must either :meth:`commit` the changes, or perform a
hard-reset of the staging area to the last commit via
:meth:`reset_staging_area`.
In order to reduce the chance that the python interpreter is shut down
without calling :meth:`close`, which releases the writer lock - a common
mistake during ipython / jupyter sessions - an `atexit
<https://docs.python.org/3/library/atexit.html>`_ hook is registered to
:meth:`close`. If properly closed by the user, the hook is unregistered
after completion with no ill effects. So long as the process is NOT
terminated via non-python SIGKILL, fatal internal python error, or
special os exit methods, cleanup will occur on interpreter shutdown and the
writer lock will be released. If a non-handled termination method does
occur, the :py:meth:`~.Repository.force_release_writer_lock` method must be
called manually when a new python process wishes to open the writer
checkout.
"""
def __init__(self,
repo_pth: os.PathLike,
branch_name: str,
labelenv: lmdb.Environment,
hashenv: lmdb.Environment,
refenv: lmdb.Environment,
stageenv: lmdb.Environment,
branchenv: lmdb.Environment,
stagehashenv: lmdb.Environment,
mode: str = 'a'):
"""Developer documentation of init method.
Parameters
----------
repo_pth : str
local file path of the repository.
branch_name : str
name of the branch whose ``HEAD`` commit will form the starting state
of the staging area.
labelenv : lmdb.Environment
db where the label data is stored
hashenv : lmdb.Environment
db where the hash records are stored.
refenv : lmdb.Environment
db where the commit record data is unpacked and stored.
stageenv : lmdb.Environment
db where the stage record data is unpacked and stored.
branchenv : lmdb.Environment
db where the head record data is unpacked and stored.
stagehashenv: lmdb.Environment
db where the staged hash record data is stored.
mode : str, optional
open in write or read only mode, default is 'a' which is write-enabled.
"""
self._is_conman = False
self._repo_path = repo_pth
self._branch_name = branch_name
self._writer_lock = str(uuid4())
self._refenv = refenv
self._hashenv = hashenv
self._labelenv = labelenv
self._stageenv = stageenv
self._branchenv = branchenv
self._stagehashenv = stagehashenv
self._arraysets: Arraysets = None
self._differ: WriterUserDiff = None
self._metadata: MetadataWriter = None
self.__setup()
atexit.register(self.close)
def _repr_pretty_(self, p, cycle):
"""pretty repr for printing in jupyter notebooks
"""
self.__acquire_writer_lock()
res = f'Hangar {self.__class__.__name__}\
\n Writer : True\
\n Base Branch : {self._branch_name}\
\n Num Arraysets : {len(self._arraysets)}\
\n Num Metadata : {len(self._metadata)}\n'
p.text(res)
def __repr__(self):
self.__acquire_writer_lock()
res = f'{self.__class__}('\
f'base_path={self._repo_path} '\
f'branch_name={self._branch_name} ' \
f'labelenv={self._labelenv} '\
f'hashenv={self._hashenv} '\
f'refenv={self._refenv} '\
f'stageenv={self._stageenv} '\
f'branchenv={self._branchenv})\n'
return res
def __enter__(self):
self.__acquire_writer_lock()
self._is_conman = True
self.arraysets.__enter__()
return self
def __exit__(self, *exc):
self._is_conman = False
self.arraysets.__exit__(*exc)
def __acquire_writer_lock(self):
"""Ensures that this class instance holds the writer lock in the database.
Raises
------
PermissionError
If the checkout was previously closed (no :attr:``_writer_lock``) or if
the writer lock value does not match that recorded in the branch db
"""
try:
self._writer_lock
except AttributeError:
with suppress(AttributeError):
del self._arraysets
with suppress(AttributeError):
del self._metadata
with suppress(AttributeError):
del self._differ
err = f'Unable to operate on past checkout objects which have been '\
f'closed. No operation occurred. Please use a new checkout.'
raise PermissionError(err) from None
try:
heads.acquire_writer_lock(self._branchenv, self._writer_lock)
except PermissionError as e:
with suppress(AttributeError):
del self._arraysets
with suppress(AttributeError):
del self._metadata
with suppress(AttributeError):
del self._differ
raise e from None
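The writer lock itself is just a random ``uuid4`` token recorded at checkout time; every privileged operation re-presents the token and fails if it no longer matches what the branch db holds. A toy model of that scheme (a dict stands in for the lmdb branch environment, and these function names are illustrative, not Hangar's actual ``heads`` API):

```python
from uuid import uuid4

lock_db = {}  # stand-in for the branch lmdb environment

def acquire_writer_lock(db, token):
    # First caller records its token; later callers must present the same one.
    holder = db.setdefault('writer_lock', token)
    if holder != token:
        raise PermissionError('writer lock held by another checkout')

def release_writer_lock(db, token):
    # Only the holder may release.
    if db.get('writer_lock') == token:
        del db['writer_lock']

mine = str(uuid4())
acquire_writer_lock(lock_db, mine)   # first checkout: succeeds
acquire_writer_lock(lock_db, mine)   # re-presenting the same token: ok
try:
    acquire_writer_lock(lock_db, str(uuid4()))  # a second writer is refused
except PermissionError as e:
    print(e)
release_writer_lock(lock_db, mine)
```

Because the token lives only on the checkout object, losing the object without closing it strands the lock, which is exactly why ``force_release_writer_lock`` exists for crash recovery.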
def __setup(self):
"""setup the staging area appropriately for a write enabled checkout.
On setup, we cannot be sure what branch the staging area was previously
checked out on, and we cannot be sure if there are any 'uncommitted
changes' in the staging area (ie. the staging area is ``DIRTY``). The
setup methods here ensure that we can safety make any changes to the
staging area without overwriting uncommitted changes, and then perform
the setup steps to checkout staging area state at that point in time.
Raises
------
ValueError
if there are changes previously made in the staging area which were
based on one branch's ``HEAD``, but a different branch was specified to
be used for the base of this checkout.
"""
self.__acquire_writer_lock()
current_head = heads.get_staging_branch_head(self._branchenv)
currentDiff = WriterUserDiff(stageenv=self._stageenv,
refenv=self._refenv,
branchenv=self._branchenv,
branch_name=current_head)
if currentDiff.status() == 'DIRTY':
if current_head != self._branch_name:
e = ValueError(
f'Unable to check out branch: {self._branch_name} for writing '
f'as the staging area has uncommitted changes on branch: '
f'{current_head}. Please commit or stash uncommitted changes '
f'before checking out a different branch for writing.')
self.close()
raise e
else:
if current_head != self._branch_name:
try:
cmt = heads.get_branch_head_commit(
branchenv=self._branchenv, branch_name=self._branch_name)
except ValueError as e:
self.close()
raise e
commiting.replace_staging_area_with_commit(
refenv=self._refenv, stageenv=self._stageenv, commit_hash=cmt)
heads.set_staging_branch_head(
branchenv=self._branchenv, branch_name=self._branch_name)
self._metadata = MetadataWriter(
mode='a',
repo_pth=self._repo_path,
dataenv=self._stageenv,
labelenv=self._labelenv)
self._arraysets = Arraysets._from_staging_area(
repo_pth=self._repo_path,
hashenv=self._hashenv,
stageenv=self._stageenv,
stagehashenv=self._stagehashenv)
self._differ = WriterUserDiff(
stageenv=self._stageenv,
refenv=self._refenv,
branchenv=self._branchenv,
branch_name=self._branch_name)
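The branching in ``__setup`` reduces to a small decision table: a ``DIRTY`` staging area pins the checkout to the branch it was dirtied on, while a clean one may be repointed at any branch ``HEAD``. A pure-function sketch of that table (names and return values are illustrative, not Hangar API):

```python
def staging_setup_action(status, staging_branch, requested_branch):
    """Decide what setup must do for a write checkout, mirroring __setup."""
    if status == 'DIRTY':
        if staging_branch != requested_branch:
            return 'error'        # uncommitted changes on another branch
        return 'reuse-staging'    # keep working on the dirty branch
    if staging_branch != requested_branch:
        return 'replace-staging'  # hard-reset staging to requested HEAD
    return 'reuse-staging'        # clean and already on the right branch

print(staging_setup_action('DIRTY', 'master', 'dev'))     # 'error'
print(staging_setup_action('CLEAN', 'master', 'dev'))     # 'replace-staging'
print(staging_setup_action('CLEAN', 'master', 'master'))  # 'reuse-staging'
```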
def __getitem__(self, index):
"""Dictionary style access to arraysets and samples
Checkout object can be thought of as a "dataset" ("dset") mapping a
view of samples across arraysets.
>>> dset = repo.checkout(branch='master')
Get an arrayset contained in the checkout.
>>> dset['foo']
ArraysetDataReader
Get a specific sample from ``'foo'`` (returns a single array)
>>> dset['foo', '1']
np.array([1])
Get multiple samples from ``'foo'`` (returns a list of arrays, in order
of input keys)
>>> dset['foo', ['1', '2', '324']]
[np.array([1]), np.ndarray([2]), np.ndarray([324])]
Get sample from multiple arraysets (returns namedtuple of arrays, field
names = arrayset names)
>>> dset[('foo', 'bar', 'baz'), '1']
ArraysetData(foo=array([1]), bar=array([11]), baz=array([111]))
Get multiple samples from multiple arraysets (returns a list of namedtuples
of arrays sorted in input key order, field names = arrayset names)
>>> dset[('foo', 'bar'), ('1', '2')]
[ArraysetData(foo=array([1]), bar=array([11])),
ArraysetData(foo=array([2]), bar=array([22]))]
Get samples from all arraysets (shortcut syntax)
>>> out = dset[:, ('1', '2')]
>>> out = dset[..., ('1', '2')]
>>> out
[ArraysetData(foo=array([1]), bar=array([11]), baz=array([111])),
ArraysetData(foo=array([2]), bar=array([22]), baz=array([222]))]
>>> out = dset[:, '1']
>>> out = dset[..., '1']
>>> out
ArraysetData(foo=array([1]), bar=array([11]), baz=array([111]))
.. warning::
It is possible for an :class:`~.arrayset.Arraysets` name to be an
invalid field name for a ``namedtuple`` object. The python docs state:
Any valid Python identifier may be used for a fieldname except for
names starting with an underscore. Valid identifiers consist of
letters, digits, and underscores but do not start with a digit or
underscore and cannot be a keyword such as class, for, return,
global, pass, or raise.
In addition, names must be unique, and cannot contain a period
(``.``) or dash (``-``) character. If a namedtuple would normally be
returned during some operation, and the field name is invalid, a
:class:`UserWarning` will be emitted indicating that any suspect fields
names will be replaced with the positional index as is customary in the
python standard library. The ``namedtuple`` docs explain this by
saying:
If rename is true, invalid fieldnames are automatically replaced with
positional names. For example, ['abc', 'def', 'ghi', 'abc'] is
converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword def
and the duplicate fieldname abc.
The next section demonstrates the implications and workarounds for this
issue
As an example, if we attempted to retrieve samples from arraysets with
the names: ``['raw', 'data.jpeg', '_garba-ge', 'try_2']``, two of the
four would be renamed:
>>> out = dset[('raw', 'data.jpeg', '_garba-ge', 'try_2'), '1']
>>> print(out)
ArraysetData(raw=array([0]), _1=array([1]), _2=array([2]), try_2=array([3]))
>>> print(out._fields)
('raw', '_1', '_2', 'try_2')
>>> out.raw
array([0])
>>> out._2
array([2])
In cases where the input arraysets are explicitly specified, it is
guaranteed that the order of fields in the resulting tuple is exactly
what was requested in the input
>>> out = dset[('raw', 'data.jpeg', '_garba-ge', 'try_2'), '1']
>>> out._fields
('raw', '_1', '_2', 'try_2')
>>> reorder = dset[('data.jpeg', 'try_2', 'raw', '_garba-ge'), '1']
>>> reorder._fields
('_0', 'try_2', 'raw', '_3')
However, if an ``Ellipsis`` (``...``) or ``slice`` (``:``) syntax is
used to select arraysets, *the order in which arraysets are placed into
the namedtuple IS NOT predictable.* Should any arrayset have an invalid
field name, it will be renamed according to its positional index, but
will not contain any identifying mark. At the moment, there is no
direct way to infer the arrayset name from this structure alone. This
limitation will be addressed in a future release.
Do NOT rely on any observed patterns. For this corner-case, **the ONLY
guarantee we provide is that structures returned from multi-sample
queries have the same order in every ``ArraysetData`` tuple returned in
that query's result list!** Should another query be made with
unspecified ordering, you should assume that the indices of the
arraysets in the namedtuple would have changed from the previous
result!!
>>> print(dset.arraysets.keys())
['raw', 'data.jpeg', '_garba-ge', 'try_2']
>>> out = dset[:, '1']
>>> out._fields
('_0', 'raw', '_2', 'try_2')
>>>
>>> # ordering in a single query is preserved
...
>>> multi_sample = dset[..., ('1', '2')]
>>> multi_sample[0]._fields
('try_2', '_1', 'raw', '_3')
>>> multi_sample[1]._fields
('try_2', '_1', 'raw', '_3')
>>>
>>> # but it may change upon a later query
>>> multi_sample2 = dset[..., ('1', '2')]
>>> multi_sample2[0]._fields
('_0', '_1', 'raw', 'try_2')
>>> multi_sample2[1]._fields
('_0', '_1', 'raw', 'try_2')
Parameters
----------
index
Please see detailed explanation above for full options.
The first element (or collection) specified must be ``str`` type and
correspond to arrayset name(s). Alternatively, the Ellipsis operator
(``...``) or unbounded slice operator (``:`` <==> ``slice(None)``) can
be used to indicate "select all" behavior.
If a second element (or collection) is present, the keys correspond to
sample names present within (all) the specified arraysets. If a key is
not present in even one arrayset, the entire ``get`` operation will
abort with ``KeyError``. If desired, the same selection syntax can be
used with the :meth:`~hangar.checkout.WriterCheckout.get` method, which
will not Error in these situations, but simply return ``None`` values
in the appropriate position for keys which do not exist.
Returns
-------
:class:`~.arrayset.Arraysets`
single arrayset parameter, no samples specified
:class:`numpy.ndarray`
Single arrayset specified, single sample key specified
List[:class:`numpy.ndarray`]
Single arrayset, multiple samples; array data for each sample is
returned in the same order the sample keys were received.
List[NamedTuple[``*``:class:`numpy.ndarray`]]
Multiple arraysets, multiple samples. Each arrayset's name is used
as a field in the NamedTuple elements, each NamedTuple contains
arrays stored in each arrayset via a common sample key. Each sample
key's values are returned as an individual element in the List,
in the same order the sample keys were received.
Warns
-----
UserWarning
Arrayset names contain characters which are invalid as namedtuple fields.
Notes
-----
* All specified arraysets must exist
* All specified sample `keys` must exist in all specified arraysets,
otherwise a standard exception is thrown
* Slice syntax cannot be used in sample `keys` field
* Slice syntax for arrayset field cannot specify `start`, `stop`, or
`step` fields, it is solely a shortcut syntax for 'get all arraysets' in
the ``:`` or ``slice(None)`` form
.. seealso:
:meth:`~hangar.checkout.WriterCheckout.get`
"""
try:
tmpconman = not self._is_conman
if tmpconman:
self.__acquire_writer_lock()
self.__enter__()
if isinstance(index, str):
return self.arraysets[index]
elif not isinstance(index, (tuple, list)):
raise TypeError(f'Unknown index: {index} type: {type(index)}')
if len(index) > 2:
raise ValueError(f'index of len > 2 not allowed: {index}')
arraysets, samples = index
return self.get(arraysets, samples, except_missing=True)
finally:
if tmpconman:
self.__exit__()
def get(self, arraysets, samples, *, except_missing=False):
"""View of samples across arraysets which handles missing sample keys.
Please see :meth:`__getitem__` for full description. This method is
identical with a single exception: if a sample key is not present in an
arrayset, this method will place a null ``None`` value in its return
slot rather than throwing a ``KeyError`` like the dict style access
does.
Parameters
----------
arraysets: Union[str, Iterable[str], Ellipses, slice(None)]
Name(s) of the arraysets to query. The Ellipsis operator (``...``)
or unbounded slice operator (``:`` <==> ``slice(None)``) can be
used to indicate "select all" behavior.
samples: Union[str, int, Iterable[Union[str, int]]]
Name(s) of the samples to query
except_missing: bool, *kwarg-only*
If False, will not throw exceptions on missing sample key value.
Will raise KeyError if True and missing key found.
Returns
-------
:class:`~.arrayset.Arraysets`
single arrayset parameter, no samples specified
:class:`numpy.ndarray`
Single arrayset specified, single sample key specified
List[:class:`numpy.ndarray`]
Single arrayset, multiple samples; array data for each sample is
returned in the same order the sample keys were received.
List[NamedTuple[``*``:class:`numpy.ndarray`]]
Multiple arraysets, multiple samples. Each arrayset's name is used
as a field in the NamedTuple elements, each NamedTuple contains
arrays stored in each arrayset via a common sample key. Each sample
key's values are returned as an individual element in the List,
in the same order the sample keys were received.
Warns
-----
UserWarning
Arrayset names contains characters which are invalid as namedtuple fields.
"""
try:
tmpconman = not self._is_conman
if tmpconman:
self.__acquire_writer_lock()
self.__enter__()
# Arrayset Parsing
if (arraysets is Ellipsis) or isinstance(arraysets, slice):
arraysets = list(self._arraysets._arraysets.values())
elif isinstance(arraysets, str):
arraysets = [self._arraysets._arraysets[arraysets]]
elif isinstance(arraysets, (tuple, list)):
arraysets = [self._arraysets._arraysets[aname] for aname in arraysets]
else:
raise TypeError(f'Arraysets: {arraysets} type: {type(arraysets)}')
nAsets = len(arraysets)
try:
aset_names = [aset.name for aset in arraysets]
ArraysetData = namedtuple('ArraysetData', aset_names)
except ValueError:
warnings.warn(
'Arrayset names contains characters which are invalid as namedtuple fields. '
'All suspect field names will be replaced by their positional names '
'(ie "_0" for element 0, "_4" for element 4)', UserWarning)
ArraysetData = namedtuple('ArraysetData', aset_names, rename=True)
# Sample Parsing
if isinstance(samples, (str, int)):
samples = [samples]
elif not isinstance(samples, (tuple, list)):
raise TypeError(f'Samples idx: {samples} type: {type(samples)}')
nSamples = len(samples)
# Data Retrieval
asetsSamplesData = []
for aset in arraysets:
aset_samples = []
for sample in samples:
try:
arr = aset.get(sample)
except KeyError as e:
if except_missing:
raise e
arr = None
aset_samples.append(arr)
if nAsets == 1:
asetsSamplesData = aset_samples
if nSamples == 1:
asetsSamplesData = asetsSamplesData[0]
break
asetsSamplesData.append(aset_samples)
else: # N.B. for-else conditional (ie. 'no break')
tmp = map(ArraysetData._make, zip(*asetsSamplesData))
asetsSamplesData = list(tmp)
if len(asetsSamplesData) == 1:
asetsSamplesData = asetsSamplesData[0]
return asetsSamplesData
finally:
if tmpconman:
self.__exit__()
def __setitem__(self, index, value):
"""Syntax for setting items.
Checkout object can be thought of as a "dataset" ("dset") mapping a view
of samples across arraysets:
>>> dset = repo.checkout(branch='master', write=True)
Add single sample to single arrayset
>>> dset['foo', 1] = np.array([1])
>>> dset['foo', 1]
array([1])
Add multiple samples to single arrayset
>>> dset['foo', [1, 2, 3]] = [np.array([1]), np.array([2]), np.array([3])]
>>> dset['foo', [1, 2, 3]]
[array([1]), array([2]), array([3])]
Add single sample to multiple arraysets
>>> dset[['foo', 'bar'], 1] = [np.array([1]), np.array([11])]
>>> dset[:, 1]
ArraysetData(foo=array([1]), bar=array([11]))
Parameters
----------
index: Union[Iterable[str], Iterable[str, int]]
            Please see detailed explanation above for full options. The first
element (or collection) specified must be ``str`` type and correspond
to an arrayset name(s). The second element (or collection) are keys
corresponding to sample names which the data should be written to.
Unlike the :meth:`__getitem__` method, only ONE of the ``arrayset``
name(s) or ``sample`` key(s) can specify multiple elements at the same
time. Ie. If multiple ``arraysets`` are specified, only one sample key
can be set, likewise if multiple ``samples`` are specified, only one
``arrayset`` can be specified. When specifying multiple ``arraysets``
or ``samples``, each data piece to be stored must reside as individual
elements (``np.ndarray``) in a List or Tuple. The number of keys and
            the number of values must match exactly.
values: Union[:class:`numpy.ndarray`, Iterable[:class:`numpy.ndarray`]]
Data to store in the specified arraysets/sample keys. When
specifying multiple ``arraysets`` or ``samples``, each data piece
to be stored must reside as individual elements (``np.ndarray``) in
a List or Tuple. The number of keys and the number of values must
            match exactly.
Notes
-----
* No slicing syntax is supported for either arraysets or samples. This
is in order to ensure explicit setting of values in the desired
fields/keys
* Add multiple samples to multiple arraysets not yet supported.
"""
try:
tmpconman = not self._is_conman
if tmpconman:
self.__acquire_writer_lock()
self.__enter__()
if not isinstance(index, (tuple, list)):
raise ValueError(f'Idx: {index} does not specify arrayset(s) AND sample(s)')
elif len(index) > 2:
raise ValueError(f'Index of len > 2 invalid. To multi-set, pass in lists')
asetsIdx, sampleNames = index
# Parse Arraysets
if isinstance(asetsIdx, str):
asets = [self._arraysets._arraysets[asetsIdx]]
elif isinstance(asetsIdx, (tuple, list)):
asets = [self._arraysets._arraysets[aidx] for aidx in asetsIdx]
else:
raise TypeError(f'Arrayset idx: {asetsIdx} of type: {type(asetsIdx)}')
nAsets = len(asets)
# Parse sample names
if isinstance(sampleNames, (str, int)):
sampleNames = [sampleNames]
elif not isinstance(sampleNames, (list, tuple)):
raise TypeError(f'Sample names: {sampleNames} type: {type(sampleNames)}')
nSamples = len(sampleNames)
# Verify asets
if (nAsets > 1) and (nSamples > 1):
raise SyntaxError(
                    'Not allowed to specify BOTH multiple samples AND multiple '
                    'arraysets in `set` operation in current Hangar implementation')
elif (nAsets == 1) and (nSamples == 1):
aset = asets[0]
sampleName = sampleNames[0]
aset[sampleName] = value
elif nAsets >= 2:
if not isinstance(value, (list, tuple)):
raise TypeError(f'Value: {value} not list/tuple of np.ndarray')
elif not (len(value) == nAsets):
raise ValueError(f'Num values: {len(value)} != num arraysets {nAsets}')
for aset, val in zip(asets, value):
isCompat = aset._verify_array_compatible(val)
if not isCompat.compatible:
raise ValueError(isCompat.reason)
for sampleName in sampleNames:
for aset, val in zip(asets, value):
aset[sampleName] = val
else: # nSamples >= 2
if not isinstance(value, (list, tuple)):
raise TypeError(f'Value: {value} not list/tuple of np.ndarray')
elif not (len(value) == nSamples):
raise ValueError(f'Num values: {len(value)} != num samples {nSamples}')
for aset in asets:
for val in value:
isCompat = aset._verify_array_compatible(val)
if not isCompat.compatible:
raise ValueError(isCompat.reason)
for aset in asets:
for sampleName, val in zip(sampleNames, value):
aset[sampleName] = val
return None
finally:
if tmpconman:
self.__exit__()
@property
def arraysets(self) -> Arraysets:
"""Provides access to arrayset interaction object.
Can be used to either return the arraysets accessor for all elements or
a single arrayset instance by using dictionary style indexing.
>>> co = repo.checkout(write=True)
>>> asets = co.arraysets
>>> len(asets)
0
>>> fooAset = asets.init_arrayset('foo', shape=(10, 10), dtype=np.uint8)
>>> len(co.arraysets)
1
>>> print(co.arraysets.keys())
['foo']
>>> fooAset = co.arraysets['foo']
>>> fooAset.dtype
np.fooDtype
>>> fooAset = asets.get('foo')
>>> fooAset.dtype
np.fooDtype
.. seealso::
The class :class:`~.arrayset.Arraysets` contains all methods accessible
by this property accessor
Returns
-------
:class:`~.arrayset.Arraysets`
weakref proxy to the arraysets object which behaves exactly like a
arraysets accessor class but which can be invalidated when the writer
lock is released.
"""
self.__acquire_writer_lock()
wr = cm_weakref_obj_proxy(self._arraysets)
return wr
@property
def metadata(self) -> MetadataWriter:
"""Provides access to metadata interaction object.
.. seealso::
The class :class:`hangar.metadata.MetadataWriter` contains all methods
accessible by this property accessor
Returns
-------
MetadataWriter
weakref proxy to the metadata object which behaves exactly like a
metadata class but which can be invalidated when the writer lock is
released.
"""
self.__acquire_writer_lock()
wr = cm_weakref_obj_proxy(self._metadata)
return wr
@property
def diff(self) -> WriterUserDiff:
"""Access the differ methods which are aware of any staged changes.
.. seealso::
The class :class:`hangar.diff.WriterUserDiff` contains all methods
accessible by this property accessor
Returns
-------
WriterUserDiff
weakref proxy to the differ object (and contained methods) which behaves
exactly like the differ class but which can be invalidated when the
writer lock is released.
"""
self.__acquire_writer_lock()
wr = weakref.proxy(self._differ)
return wr
@property
def branch_name(self) -> str:
"""Branch this write enabled checkout's staging area was based on.
Returns
-------
str
name of the branch whose commit ``HEAD`` changes are staged from.
"""
self.__acquire_writer_lock()
return self._branch_name
@property
def commit_hash(self) -> str:
"""Commit hash which the staging area of `branch_name` is based on.
Returns
-------
str
commit hash
"""
self.__acquire_writer_lock()
cmt = heads.get_branch_head_commit(branchenv=self._branchenv,
branch_name=self._branch_name)
return cmt
def merge(self, message: str, dev_branch: str) -> str:
"""Merge the currently checked out commit with the provided branch name.
If a fast-forward merge is possible, it will be performed, and the
commit message argument to this function will be ignored.
Parameters
----------
message : str
commit message to attach to a three-way merge
dev_branch : str
name of the branch which should be merge into this branch (`master`)
Returns
-------
str
commit hash of the new commit for the `master` branch this checkout
was started from.
"""
self.__acquire_writer_lock()
commit_hash = select_merge_algorithm(
message=message,
branchenv=self._branchenv,
stageenv=self._stageenv,
refenv=self._refenv,
stagehashenv=self._stagehashenv,
master_branch=self._branch_name,
dev_branch=dev_branch,
repo_path=self._repo_path,
writer_uuid=self._writer_lock)
for asetHandle in self._arraysets.values():
with suppress(KeyError):
asetHandle._close()
self._metadata = MetadataWriter(
mode='a',
repo_pth=self._repo_path,
dataenv=self._stageenv,
labelenv=self._labelenv)
self._arraysets = Arraysets._from_staging_area(
repo_pth=self._repo_path,
hashenv=self._hashenv,
stageenv=self._stageenv,
stagehashenv=self._stagehashenv)
self._differ = WriterUserDiff(
stageenv=self._stageenv,
refenv=self._refenv,
branchenv=self._branchenv,
branch_name=self._branch_name)
return commit_hash
def commit(self, commit_message: str) -> str:
"""Commit the changes made in the staging area on the checkout branch.
Parameters
----------
        commit_message : str
            user provided message for a log of what was changed in this commit.
            Should a fast-forward commit be possible, this message will NOT be
            added to the fast-forward ``HEAD``.
Returns
-------
str
The commit hash of the new commit.
Raises
------
RuntimeError
If no changes have been made in the staging area, no commit occurs.
"""
self.__acquire_writer_lock()
open_asets = []
for arrayset in self._arraysets.values():
if arrayset._is_conman:
open_asets.append(arrayset.name)
open_meta = self._metadata._is_conman
try:
if open_meta:
self._metadata.__exit__()
for asetn in open_asets:
self._arraysets[asetn].__exit__()
if self._differ.status() == 'CLEAN':
e = RuntimeError('No changes made in staging area. Cannot commit.')
raise e from None
self._arraysets._close()
commit_hash = commiting.commit_records(message=commit_message,
branchenv=self._branchenv,
stageenv=self._stageenv,
refenv=self._refenv,
repo_path=self._repo_path)
            # purge recs then reopen file handles so that we don't have to invalidate
            # previous weakproxy references as we would if we just called :meth:`__setup`
hashs.clear_stage_hash_records(self._stagehashenv)
self._arraysets._open()
finally:
for asetn in open_asets:
self._arraysets[asetn].__enter__()
if open_meta:
self._metadata.__enter__()
return commit_hash
def reset_staging_area(self) -> str:
"""Perform a hard reset of the staging area to the last commit head.
After this operation completes, the writer checkout will automatically
        close in the typical fashion (any held references to :attr:`arraysets`
        or :attr:`metadata` objects will finalize and destruct as normal). In
        order to perform any further operation, a new checkout needs to be
opened.
.. warning::
            This operation is IRREVERSIBLE. All records and data which are not
            stored in a previous commit will be permanently deleted.
Returns
-------
str
Commit hash of the head which the staging area is reset to.
Raises
------
RuntimeError
            If no changes have been made to the staging area (reset is a no-op).
"""
self.__acquire_writer_lock()
print(f'Hard reset requested with writer_lock: {self._writer_lock}')
if self._differ.status() == 'CLEAN':
e = RuntimeError(f'No changes made in staging area. No reset necessary.')
raise e from None
self._arraysets._close()
hashs.remove_stage_hash_records_from_hashenv(self._hashenv, self._stagehashenv)
hashs.clear_stage_hash_records(self._stagehashenv)
hashs.delete_in_process_data(self._repo_path)
branch_head = heads.get_staging_branch_head(self._branchenv)
head_commit = heads.get_branch_head_commit(self._branchenv, branch_head)
commiting.replace_staging_area_with_commit(refenv=self._refenv,
stageenv=self._stageenv,
commit_hash=head_commit)
self._metadata = MetadataWriter(
mode='a',
repo_pth=self._repo_path,
dataenv=self._stageenv,
labelenv=self._labelenv)
self._arraysets = Arraysets._from_staging_area(
repo_pth=self._repo_path,
hashenv=self._hashenv,
stageenv=self._stageenv,
stagehashenv=self._stagehashenv)
self._differ = WriterUserDiff(
stageenv=self._stageenv,
refenv=self._refenv,
branchenv=self._branchenv,
branch_name=self._branch_name)
return head_commit
def close(self) -> None:
"""Close all handles to the writer checkout and release the writer lock.
Failure to call this method after the writer checkout has been used will
result in a lock being placed on the repository which will not allow any
writes until it has been manually cleared.
"""
self.__acquire_writer_lock()
if hasattr(self, '_arraysets') and (getattr(self, '_arraysets') is not None):
self._arraysets._close()
for asetn in (self._arraysets._arraysets.keys()):
for attr in list(self._arraysets._arraysets[asetn].__dir__()):
with suppress(AttributeError, TypeError):
delattr(self._arraysets._arraysets[asetn], attr)
for attr in list(self._arraysets.__dir__()):
with suppress(AttributeError, TypeError):
# prepending `_self_` addresses `WeakrefProxy` in `wrapt.ObjectProxy`
delattr(self._arraysets, f'_self_{attr}')
        if hasattr(self, '_metadata') and (getattr(self, '_metadata') is not None):
for attr in list(self._metadata.__dir__()):
with suppress(AttributeError, TypeError):
# prepending `_self_` addresses `WeakrefProxy` in `wrapt.ObjectProxy`
delattr(self._metadata, f'_self_{attr}')
with suppress(AttributeError):
del self._arraysets
with suppress(AttributeError):
del self._metadata
with suppress(AttributeError):
del self._differ
heads.release_writer_lock(self._branchenv, self._writer_lock)
del self._refenv
del self._hashenv
del self._labelenv
del self._stageenv
del self._branchenv
del self._stagehashenv
del self._repo_path
del self._writer_lock
del self._branch_name
del self._is_conman
atexit.unregister(self.close)
return
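The dict-style `get` above fills `None` for missing sample keys instead of raising. A minimal standalone sketch of that cross-arrayset view semantics, using plain dicts in place of Hangar arraysets (the `get_view` helper and sample data here are illustrative, not part of the Hangar API):

```python
from collections import namedtuple

def get_view(arraysets, aset_names, sample_keys):
    # Build a namedtuple per sample key across the named arraysets,
    # planting None where a sample key is missing (mirrors get() above).
    View = namedtuple("View", aset_names, rename=True)
    return [View(*(arraysets[name].get(key) for name in aset_names))
            for key in sample_keys]

data = {"foo": {1: [1]}, "bar": {1: [11], 2: [22]}}
rows = get_view(data, ["foo", "bar"], [1, 2])
# rows[0] == View(foo=[1], bar=[11]); rows[1] == View(foo=None, bar=[22])
```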
# ---- word2vec/graph.py (repo: pjavia/NLP, license: MIT) ----
import tensorflow as tf
import preprocessing
from random import shuffle, seed
from datetime import datetime
from tensorflow.contrib.tensorboard.plugins import projector
import os
g = tf.Graph()
batch = 20
par = (2, 128, 'datasetSentences.txt')
obj = preprocessing.word2vec(par)
obj.fetch_data()
obj.prepare_data_for_word2vec()
vocabulary_size = obj.vocabulary
embedding_size = obj.embeddings_dim
with g.as_default():
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0), name='word_embedding')
    print(embeddings, 'embeddings')
nce_weights = tf.Variable(tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / tf.sqrt(float(embedding_size))))
    print(nce_weights, 'nce_weights')
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
    print(nce_biases, 'nce_biases')
train_inputs = tf.placeholder(tf.int32, shape=[batch])
train_labels = tf.placeholder(tf.int32, shape=[batch, 1])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weights, biases=nce_biases, labels=train_labels, inputs=embed,
num_sampled=20, num_classes=vocabulary_size))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0).minimize(loss)
saver = tf.train.Saver()
summary_l = tf.summary.scalar('loss', loss)
summary_op = tf.summary.merge_all()
with tf.Session(graph=g) as sess:
train_writer = tf.summary.FileWriter('summary_directory')
data_size = len(obj.numeral_pairs)
data = obj.numeral_pairs
sess.run(tf.global_variables_initializer())
count = 0
for epoch in range(0, 1):
for i in range(0, data_size):
count += 1
            print(epoch, '--------->epoch', count, '---------->iteration')
seed(datetime.now())
shuffle(data)
batch_xs = []
batch_ys = []
for j in range(0, batch):
batch_xs.append(data[j][0])
batch_ys.append([data[j][1]])
l_, _ = sess.run([loss, optimizer], feed_dict={train_inputs:batch_xs, train_labels:batch_ys})
            print(l_, 'loss')
summary_full = sess.run(summary_op, feed_dict={train_inputs: batch_xs, train_labels: batch_ys})
train_writer.add_summary(summary_full, count)
if count == 300:
break
saver.save(sess, os.path.join('summary_directory', "model.ckpt"), 1)
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.metadata_path = os.path.join('summary_directory', 'metadata.tsv')
embedding.tensor_name = embeddings.name
projector.visualize_embeddings(train_writer, config)
with open("summary_directory/metadata.tsv", "w") as record_file:
for i in obj.metadata:
record_file.write("%s\n" % i)
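The training loop above consumes `(center, context)` index pairs produced by the `preprocessing.word2vec` object. A sketch of how such skip-gram pairs are typically generated with a symmetric context window (the `skipgram_pairs` helper is hypothetical, not the actual `preprocessing` module):

```python
def skipgram_pairs(tokens, window=2):
    # For each position, pair the center token with every token inside
    # the symmetric context window (excluding the center itself).
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown"], window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'), ('brown', 'quick')]
```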
# ---- intersight/model/ether_physical_port_list_all_of.py (repo: CiscoDevNet/intersight-python, license: Apache-2.0) ----
"""
Cisco Intersight
Cisco Intersight is a management platform delivered as a service with embedded analytics for your Cisco and 3rd party IT infrastructure. This platform offers an intelligent level of management that enables IT organizations to analyze, simplify, and automate their environments in more advanced ways than the prior generations of tools. Cisco Intersight provides an integrated and intuitive management experience for resources in the traditional data center as well as at the edge. With flexible deployment options to address complex security needs, getting started with Intersight is quick and easy. Cisco Intersight has deep integration with Cisco UCS and HyperFlex systems allowing for remote deployment, configuration, and ongoing maintenance. The model-based deployment works for a single system in a remote location or hundreds of systems in a data center and enables rapid, standardized configuration and deployment. It also streamlines maintaining those systems whether you are working with small or very large configurations. The Intersight OpenAPI document defines the complete set of properties that are returned in the HTTP response. From that perspective, a client can expect that no additional properties are returned, unless these properties are explicitly defined in the OpenAPI document. However, when a client uses an older version of the Intersight OpenAPI document, the server may send additional properties because the software is more recent than the client. In that case, the client may receive properties that it does not know about. Some generated SDKs perform a strict validation of the HTTP response body against the OpenAPI document. # noqa: E501
The version of the OpenAPI document: 1.0.9-4950
Contact: intersight@cisco.com
Generated by: https://openapi-generator.tech
"""
import re # noqa: F401
import sys # noqa: F401
from intersight.model_utils import ( # noqa: F401
ApiTypeError,
ModelComposed,
ModelNormal,
ModelSimple,
cached_property,
change_keys_js_to_python,
convert_js_args_to_python_args,
date,
datetime,
file_type,
none_type,
validate_get_composed_info,
)
def lazy_import():
from intersight.model.ether_physical_port import EtherPhysicalPort
globals()['EtherPhysicalPort'] = EtherPhysicalPort
class EtherPhysicalPortListAllOf(ModelNormal):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
Attributes:
allowed_values (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
with a capitalized key describing the allowed value and an allowed
value. These dicts store the allowed enum values.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
discriminator_value_class_map (dict): A dict to go from the discriminator
variable value to the discriminator class name.
validations (dict): The key is the tuple path to the attribute
and the for var_name this is (var_name,). The value is a dict
that stores validations for max_length, min_length, max_items,
min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
inclusive_minimum, and regex.
additional_properties_type (tuple): A tuple of classes accepted
as additional properties values.
"""
allowed_values = {
}
validations = {
}
additional_properties_type = None
_nullable = False
@cached_property
def openapi_types():
"""
This must be a method because a model may have properties that are
of type self, this must run after the class is loaded
Returns
openapi_types (dict): The key is attribute name
and the value is attribute type.
"""
lazy_import()
return {
'count': (int,), # noqa: E501
'results': ([EtherPhysicalPort], none_type,), # noqa: E501
}
@cached_property
def discriminator():
return None
attribute_map = {
'count': 'Count', # noqa: E501
'results': 'Results', # noqa: E501
}
_composed_schemas = {}
required_properties = set([
'_data_store',
'_check_type',
'_spec_property_naming',
'_path_to_item',
'_configuration',
'_visited_composed_classes',
])
@convert_js_args_to_python_args
def __init__(self, *args, **kwargs): # noqa: E501
"""EtherPhysicalPortListAllOf - a model defined in OpenAPI
Keyword Args:
_check_type (bool): if True, values for parameters in openapi_types
will be type checked and a TypeError will be
raised if the wrong type is input.
Defaults to True
_path_to_item (tuple/list): This is a list of keys or values to
drill down to the model in received_data
when deserializing a response
_spec_property_naming (bool): True if the variable names in the input data
are serialized names, as specified in the OpenAPI document.
False if the variable names in the input data
are pythonic names, e.g. snake case (default)
_configuration (Configuration): the instance to use when
deserializing a file_type parameter.
If passed, type conversion is attempted
If omitted no type conversion is done.
_visited_composed_classes (tuple): This stores a tuple of
classes that we have traveled through so that
if we see that class again we will not use its
discriminator again.
When traveling through a discriminator, the
composed schema that is
is traveled through is added to this set.
For example if Animal has a discriminator
petType and we pass in "Dog", and the class Dog
allOf includes Animal, we move through Animal
once using the discriminator, and pick Dog.
Then in Dog, we will make an instance of the
Animal class but this time we won't travel
through its discriminator because we passed in
_visited_composed_classes = (Animal,)
            count (int): The total number of 'ether.PhysicalPort' resources matching the request, across all pages. The 'Count' attribute is included when the HTTP GET request includes the '$inlinecount' parameter.. [optional]  # noqa: E501
results ([EtherPhysicalPort], none_type): The array of 'ether.PhysicalPort' resources matching the request.. [optional] # noqa: E501
"""
_check_type = kwargs.pop('_check_type', True)
_spec_property_naming = kwargs.pop('_spec_property_naming', False)
_path_to_item = kwargs.pop('_path_to_item', ())
_configuration = kwargs.pop('_configuration', None)
_visited_composed_classes = kwargs.pop('_visited_composed_classes', ())
if args:
raise ApiTypeError(
"Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % (
args,
self.__class__.__name__,
),
path_to_item=_path_to_item,
valid_classes=(self.__class__,),
)
self._data_store = {}
self._check_type = _check_type
self._spec_property_naming = _spec_property_naming
self._path_to_item = _path_to_item
self._configuration = _configuration
self._visited_composed_classes = _visited_composed_classes + (self.__class__,)
for var_name, var_value in kwargs.items():
if var_name not in self.attribute_map and \
self._configuration is not None and \
self._configuration.discard_unknown_keys and \
self.additional_properties_type is None:
# discard variable.
continue
setattr(self, var_name, var_value)
# ---- neurst/tasks/speech2text.py (repo: Yaoming95/neurst, license: Apache-2.0) ----
# Copyright 2020 ByteDance Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from typing import Tuple
import tensorflow as tf
from absl import logging
from neurst.data import dataset_utils
from neurst.data.data_pipelines import DataPipeline, build_data_pipeline
from neurst.data.data_pipelines.transcript_data_pipeline import TranscriptDataPipeline
from neurst.data.datasets import Dataset
from neurst.layers.metric_layers.token_metric_layers import (AudioFramesMetricLayer, BatchCountMetricLayer,
SequenceTokenMetricLayer)
from neurst.metrics import build_metric
from neurst.models import build_model
from neurst.models.model_utils import deduce_text_length
from neurst.tasks import register_task
from neurst.tasks.task import Task
from neurst.training.training_utils import minimal_multiple
from neurst.utils import compat
from neurst.utils.audio_lib import SpecAugment
from neurst.utils.configurable import deep_merge_dict
from neurst.utils.flags_core import Flag, ModuleFlag
def create_audio_bucket_boundaries(maxlen, minlen=128):
if minlen is None:
minlen = 128
bounds = [minlen]
base = minlen
base_incr = int(2 ** ((math.log2(minlen) + 1) // 2))
base_incr_mult = 1
times = len(str(int(minlen)))
while True:
for _ in range(times):
bounds.append(bounds[-1] + base)
if bounds[-1] > maxlen:
break
base += base_incr * base_incr_mult
base_incr_mult += 1
if bounds[-1] > maxlen:
break
bounds[-1] = maxlen + 1
return bounds
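For reference, the boundary generator above starts at `minlen` and widens the bucket step every few boundaries. Reproduced standalone below (same logic, no TensorFlow needed) to show the values it actually produces:

```python
import math

def bucket_boundaries(maxlen, minlen=128):
    # Same logic as create_audio_bucket_boundaries above: widths start
    # at minlen and the step grows after every `times` appended bounds.
    bounds = [minlen]
    base = minlen
    base_incr = int(2 ** ((math.log2(minlen) + 1) // 2))
    base_incr_mult = 1
    times = len(str(int(minlen)))
    while True:
        for _ in range(times):
            bounds.append(bounds[-1] + base)
            if bounds[-1] > maxlen:
                break
        base += base_incr * base_incr_mult
        base_incr_mult += 1
        if bounds[-1] > maxlen:
            break
    bounds[-1] = maxlen + 1
    return bounds

print(bucket_boundaries(1000))
# [128, 256, 384, 512, 656, 800, 944, 1001]
```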
@register_task(["speech2text", "audio2text", "AudioToText"])
class SpeechToText(Task):
""" Defines the audio to text task. """
def __init__(self, args):
""" Initializes the task.
Args:
args: A dict of model configurations.
"""
super(SpeechToText, self).__init__(args)
trg_data_pipeline_cls = args.get("transcript_data_pipeline.class", TranscriptDataPipeline)
trg_data_pipeline_params = args.get("transcript_data_pipeline.params", None) or {}
self._trg_data_pipeline = build_data_pipeline(
trg_data_pipeline_cls, **trg_data_pipeline_params)
self._audio_feature_dim = args["audio_feature_dim"]
self._audio_feature_channels = args["audio_feature_channels"]
self._specaug = SpecAugment.build(args.get("specaug", None))
def get_config(self):
return {
"transcript_data_pipeline.class": self._trg_data_pipeline.__class__.__name__,
"transcript_data_pipeline.params": self._trg_data_pipeline.get_config(),
"audio_feature_dim": self._audio_feature_dim,
"audio_feature_channels": self._audio_feature_channels
}
@staticmethod
def class_or_method_args():
this_args = super(SpeechToText, SpeechToText).class_or_method_args()
this_args.extend([
ModuleFlag("transcript_data_pipeline", DataPipeline.REGISTRY_NAME,
default=None, help="The target side transcript data pipeline."),
Flag("audio_feature_dim", dtype=Flag.TYPE.INTEGER, default=80,
help="The dimension of audio features."),
Flag("audio_feature_channels", dtype=Flag.TYPE.INTEGER, default=1,
help="The number of channels of audio features."),
Flag("max_src_len", dtype=Flag.TYPE.INTEGER, default=None,
help="The maximum source length of training data (audio frames)."),
Flag("min_src_bucket_boundary", dtype=Flag.TYPE.INTEGER, default=128,
help="The minimum source length of the training bucket (audio frames)."),
Flag("max_trg_len", dtype=Flag.TYPE.INTEGER, default=None,
help="The maximum target length of training data."),
Flag("truncate_src", dtype=Flag.TYPE.BOOLEAN, default=None,
help="Whether to truncate source to max_src_len."),
Flag("truncate_trg", dtype=Flag.TYPE.BOOLEAN, default=None,
help="Whether to truncate target to max_trg_len."),
            Flag("experimental_frame_transcript_ratio", dtype=Flag.TYPE.INTEGER, default=None,
                 help="The ratio between the number of audio frames and the transcript length, "
                      "used for bucketing training batches."),
Flag("specaug", dtype=Flag.TYPE.STRING, default=None,
help="The arguments for spec augment, can be either predefined settings "
"like LB, LD, SM, SS... or a dict containing detailed arguments."),
Flag("disable_batch_efficiency", dtype=Flag.TYPE.BOOLEAN, default=None,
help="Whether to disable the batch efficiency.")
])
return this_args
def inputs_signature(self, mode) -> Tuple[dict, dict]:
""" Returns the input dtypes and signatures (from dataset). """
dtypes = {"audio": tf.float32, "audio_length": tf.int64}
# [batch, frames, feature_dim]
signatures = {"audio": tf.TensorShape([None, None]),
"audio_length": tf.TensorShape([None, ])}
if mode == compat.ModeKeys.INFER:
return dtypes, signatures
dtypes["transcript"] = tf.int64
signatures["transcript"] = tf.TensorShape([None, None])
return dtypes, signatures
def build_model(self, args, name=None):
""" Creates the model. """
model = build_model(args, {"audio_feature_dim": self._audio_feature_dim,
"audio_feature_channels": self._audio_feature_channels},
self._trg_data_pipeline.meta, name=name)
return model
def example_to_input(self, batch_of_data: dict, mode) -> dict:
""" Transform the data examples to model acceptable inputs.
Args:
batch_of_data: A data tensor with shape [batch, ...]
mode: The running mode.
Returns: The input data for model.
"""
batch = tf.shape(batch_of_data["audio"])[0]
input_dict = {"src": tf.reshape(batch_of_data["audio"],
[batch, -1, self._audio_feature_dim, self._audio_feature_channels]),
"src_length": batch_of_data["audio_length"]}
target_bos = tf.tile(
[tf.convert_to_tensor(self._trg_data_pipeline.meta["bos_id"], dtype=tf.int64)],
[tf.shape(input_dict["src"])[0]])
if mode == compat.ModeKeys.INFER:
input_dict["trg_input"] = target_bos
else:
input_dict["trg"] = batch_of_data["transcript"]
input_dict["trg_length"] = deduce_text_length(batch_of_data["transcript"],
self._trg_data_pipeline.meta["pad_id"],
self._trg_data_pipeline.meta["padding_mode"])
input_dict["trg_input"] = tf.concat([tf.expand_dims(target_bos, axis=1),
batch_of_data["transcript"][:, :-1]], axis=1)
return input_dict
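The decoder input built above is the target sequence shifted right, with a BOS token prepended (teacher forcing). A minimal pure-Python sketch of that shift for a single example (the real code does this on batched tensors with tf.concat):

```python
def shift_right_with_bos(transcript, bos_id):
    """Build the teacher-forcing decoder input [BOS, y_1, ..., y_{n-1}]."""
    return [bos_id] + list(transcript[:-1])

# the decoder never sees the final token as input; it must predict it
assert shift_right_with_bos([7, 8, 9, 2], bos_id=1) == [1, 7, 8, 9]
```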
def get_data_postprocess_fn(self, mode):
if mode == compat.ModeKeys.INFER:
return self._trg_data_pipeline.recover
raise ValueError("No postprocess for TRAIN/EVAL.")
def get_data_preprocess_fn(self, mode, data_status, args=None) -> callable:
""" Preprocess data sample according to this task.
Args:
args: A dict containing dataset arguments.
mode: A ModeKeys indicating the running mode.
data_status: The status of the data sample.
Returns: A callable function to collate (process) a data sample.
"""
if args is None:
args = self._args
else:
args = deep_merge_dict(self._args, args, local_overwrite=False)
trunc_audio = args.get("truncate_src", None)
max_audio_len = args.get("max_src_len", None)
trunc_trg = args.get("truncate_trg", None)
max_trg_len = args.get("max_trg_len", None)
if data_status["audio"] != compat.DataStatus.PROJECTED:
raise RuntimeError("Audio features must be preprocessed (projected) in advance.")
def _process_audio(audio):
if trunc_audio and max_audio_len:
audio = audio[:max_audio_len * self._audio_feature_dim * self._audio_feature_channels]
if self._specaug is not None:
audio = tf.reshape(
audio, [-1, self._audio_feature_dim * self._audio_feature_channels])
audio = tf.reshape(self._specaug(audio), [-1])
return audio
def _process_and_truncate_text(text):
if data_status["transcript"] == compat.DataStatus.RAW:
text = self._trg_data_pipeline.process(text, is_processed=False)
else:
assert data_status["transcript"] == compat.DataStatus.PROJECTED
if mode == compat.ModeKeys.TRAIN and trunc_trg and max_trg_len:
if compat.is_tf_tensor(text):
text = tf.cond(
tf.less_equal(tf.size(text), max_trg_len), lambda: text,
lambda: tf.concat([text[:(max_trg_len - 1)], text[-1:]], axis=0))
else:
if len(text) > max_trg_len:
text = text[:(max_trg_len - 1)] + text[-1:]
return text
def data_proc(data, with_label):
feature = _process_audio(data["audio"])
ret = {"audio": feature,
"audio_length": tf.cast(
(tf.shape(feature)[0] if compat.is_tf_tensor(feature)
else feature.shape[0]) // self._audio_feature_dim // self._audio_feature_channels,
dtype=tf.int64)}
if with_label:
ret["transcript"] = tf.convert_to_tensor(
_process_and_truncate_text(data["transcript"]), tf.int64)
return ret
if mode == compat.ModeKeys.INFER:
return lambda data: data_proc(data, False)
return lambda data: data_proc(data, True)
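The transcript truncation above is careful to keep the final token (the EOS marker) when it cuts the sequence to `max_trg_len`. A pure-Python sketch of that rule for unbatched token lists:

```python
def truncate_keep_last(ids, max_len):
    """Truncate a token list to max_len while preserving the final (EOS) token."""
    if len(ids) > max_len:
        return ids[:max_len - 1] + ids[-1:]
    return ids

# the EOS id (here 2) survives truncation
assert truncate_keep_last([3, 4, 5, 6, 2], 4) == [3, 4, 5, 2]
```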
def create_and_batch_tfds(self, ds: Dataset, mode,
args=None, num_replicas_in_sync=1) -> tf.data.Dataset:
""" Creates a dataset according to the `mode`.
Args:
args: A dict containing dataset arguments.
ds: A neurst.data.datasets.Dataset object.
mode: A ModeKeys indicating the running mode.
num_replicas_in_sync: The number of GPUs or other workers. We will generate global
batches, and each global batch is equally divisible by number of replicas.
Returns:
A tf.data.Dataset.
"""
if args is None:
args = self._args
else:
args = deep_merge_dict(self._args, args, local_overwrite=False)
float_zero = tf.constant(0, dtype=tf.float32)
int_zero = tf.constant(0, dtype=tf.int64)
trg_eos = tf.constant(self._trg_data_pipeline.meta["eos_id"], dtype=tf.int64)
dataset = ds.build(map_func=self.get_data_preprocess_fn(mode, ds.status, args),
map_output_dtypes=self.inputs_signature(mode)[0],
auto_shard=(mode == compat.ModeKeys.TRAIN),
shuffle=(mode == compat.ModeKeys.TRAIN))
if mode == compat.ModeKeys.INFER:
logging.info("Creating test dataset.")
return dataset.cache().padded_batch(
dataset_utils.adjust_batch_size(args["batch_size"],
num_replicas_in_sync=num_replicas_in_sync),
padded_shapes={"audio": [None], "audio_length": []},
padding_values={"audio": float_zero, "audio_length": int_zero},
drop_remainder=False)
elif mode == compat.ModeKeys.EVAL:
logging.info("Creating evaluation dataset.")
return dataset.cache().padded_batch(
dataset_utils.adjust_batch_size(args["batch_size"],
num_replicas_in_sync=num_replicas_in_sync),
padded_shapes={"audio": [None], "audio_length": [], "transcript": [None]},
padding_values={"audio": float_zero, "audio_length": int_zero, "transcript": trg_eos},
drop_remainder=False)
else:
logging.info("Creating training dataset.")
dataset = dataset_utils.clean_dataset_by_length(
dataset, {"audio": args["max_src_len"] * self._audio_feature_dim * self._audio_feature_channels,
"audio_length": -1, "transcript": args["max_trg_len"]})
if args["cache_dataset"]:
dataset = dataset.cache()
if args["shuffle_buffer"]:
dataset = dataset.shuffle(buffer_size=args["shuffle_buffer"])
padding_values = {"audio": float_zero, "audio_length": int_zero, "transcript": trg_eos}
if args["max_src_len"] is None:
raise RuntimeError("`max_src_len` for SpeechToText task must be provided.")
if args["max_trg_len"] is None:
raise RuntimeError("`max_trg_len` for SpeechToText task must be provided.")
max_src_len = args["max_src_len"]
max_trg_len = minimal_multiple(args["max_trg_len"], 8)
audio_bucket_boundaries = create_audio_bucket_boundaries(max_src_len, args["min_src_bucket_boundary"])
audio_bucket_boundaries[-1] = minimal_multiple(audio_bucket_boundaries[-1], 8)
batch_size = dataset_utils.adjust_batch_size(args["batch_size"], args["batch_size_per_gpu"],
num_replicas_in_sync=num_replicas_in_sync,
verbose=False)
batch_size_per_gpu = batch_size // num_replicas_in_sync
assert batch_size_per_gpu > max_src_len, (
f"batch size per gpu ({batch_size_per_gpu}) must be greater than "
f"`max_src_len`={max_src_len}")
if args["disable_batch_efficiency"]:
bucket_batch_sizes = [int(batch_size_per_gpu // bound
* num_replicas_in_sync) for bound in audio_bucket_boundaries]
else:
bucket_batch_sizes = [int(minimal_multiple(batch_size_per_gpu // bound, 8)
* num_replicas_in_sync) for bound in audio_bucket_boundaries]
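The `minimal_multiple` helper used throughout this section is assumed to round a value up to the nearest multiple of its base (multiples of 8 keep per-bucket batch sizes friendly to accelerator hardware). A hypothetical pure-Python equivalent:

```python
def minimal_multiple(value, base):
    """Round value up to the nearest multiple of base (assumed behavior of
    NeurST's minimal_multiple helper; the name is taken from the code above)."""
    return ((value + base - 1) // base) * base

assert minimal_multiple(50, 8) == 56
assert minimal_multiple(64, 8) == 64  # already a multiple: unchanged
```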
frame_transcript_ratio = args["experimental_frame_transcript_ratio"]
if frame_transcript_ratio is None:
logging.warning("We recommend pre-scanning the dataset to estimate the ratio: "
"frame length / transcript length.")
else:
trans_bucket_boundaries = [
int(bound / (frame_transcript_ratio + i * (
max_src_len / max_trg_len - frame_transcript_ratio) / len(audio_bucket_boundaries)))
for i, bound in enumerate(audio_bucket_boundaries)]
trans_bucket_boundaries = [minimal_multiple(min(i, max_trg_len), 8) for i in
trans_bucket_boundaries]
num_buckets = len(trans_bucket_boundaries)
true_trans_bucket_boundaries = []
num_input_shapes = 0
for idx, (batc, bound, tbound) in enumerate(zip(bucket_batch_sizes, audio_bucket_boundaries,
trans_bucket_boundaries)):
max_trans_len = [tbound,
trans_bucket_boundaries[min(idx + 1, len(bucket_batch_sizes) - 1)]]
num_input_shapes += len(set(max_trans_len))
true_trans_bucket_boundaries.append(max_trans_len)
logging.info(f"There are {num_input_shapes} input shapes to be compiled:")
for idx, (batc, bound, tbound) in enumerate(zip(bucket_batch_sizes, audio_bucket_boundaries,
true_trans_bucket_boundaries)):
logging.info(f" - batch={batc}, maximum-frames={bound}, "
f"maximum-transcript-length={set(tbound)}")
true_trans_bucket_boundaries = tf.constant(true_trans_bucket_boundaries, dtype=tf.int32)
true_audio_bucket_boundaries = tf.transpose(tf.constant([audio_bucket_boundaries] * 2, dtype=tf.int32))
bucket_batch_sizes = tf.constant(bucket_batch_sizes, dtype=tf.int64)
audio_bucket_boundaries = tf.constant(audio_bucket_boundaries, dtype=tf.int32)
def example_to_bucket_id(examples):
"""Return int64 bucket id for this example, calculated based on length."""
if frame_transcript_ratio is None:
conditions_c = tf.less_equal(tf.cast(examples["audio_length"], tf.int32),
audio_bucket_boundaries)
return tf.reduce_min(tf.where(conditions_c))
conditions_c = tf.logical_and(
tf.less_equal(tf.cast(examples["audio_length"], tf.int32), true_audio_bucket_boundaries),
tf.less_equal(tf.size(examples["transcript"]), true_trans_bucket_boundaries))
minimum_match = tf.where(conditions_c)[0]
return minimum_match[0] * num_buckets + minimum_match[1]
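When the frame/transcript ratio is supplied, bucketing is two-dimensional: an example must fit both a frame boundary and one of the transcript caps paired with it, and the flat bucket id encodes the (row, column) of the first match. A simplified pure-Python mirror of the tf.where logic above (hypothetical names, unbatched lengths):

```python
def example_to_bucket_id(frame_len, trans_len, audio_bounds, trans_bounds):
    """trans_bounds[row] lists the transcript caps paired with audio_bounds[row];
    the flat id is row * num_buckets + col, matching the TF code above."""
    num_buckets = len(trans_bounds)
    for row, frame_bound in enumerate(audio_bounds):
        for col, trans_bound in enumerate(trans_bounds[row]):
            if frame_len <= frame_bound and trans_len <= trans_bound:
                return row * num_buckets + col
    raise ValueError("example exceeds all bucket boundaries")

# 150 frames need the second frame bucket; 10 tokens fit its first transcript cap
assert example_to_bucket_id(150, 10, [100, 200], [[8, 16], [16, 16]]) == 2
```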
def window_size_fn(bucket_id):
"""Return number of examples to be grouped when given a bucket id."""
if frame_transcript_ratio is None:
return bucket_batch_sizes[bucket_id]
return bucket_batch_sizes[bucket_id // num_buckets]
def batching_fn(bucket_id, grouped_dataset):
"""Batch and add padding to a dataset of elements with similar lengths."""
bucket_batch_size = window_size_fn(bucket_id)
# Batch the dataset and add padding so that all input sequences in the
# examples have the same length, and all target sequences have the same
# lengths as well. Resulting lengths of inputs and targets can differ.
return grouped_dataset.padded_batch(
bucket_batch_size,
padded_shapes={
"audio": ([(audio_bucket_boundaries[bucket_id] if frame_transcript_ratio is None
else audio_bucket_boundaries[bucket_id // num_buckets])
* self._audio_feature_dim * self._audio_feature_channels]),
"audio_length": [],
"transcript": ([None] if frame_transcript_ratio is None
else [
true_trans_bucket_boundaries[bucket_id // num_buckets][bucket_id % num_buckets]])
},
padding_values=padding_values, drop_remainder=True)
return dataset.apply(tf.data.experimental.group_by_window(
key_func=example_to_bucket_id,
reduce_func=batching_fn,
window_size=None,
window_size_func=window_size_fn))
def build_metric_layer(self):
return [AudioFramesMetricLayer("src"), SequenceTokenMetricLayer("trg"),
BatchCountMetricLayer("src")]
def get_eval_metric(self, args, name="metric", ds=None):
""" Returns a neurst.metrics.metric.Metric object for evaluation."""
if ds is not None and hasattr(ds, "trg_lang") and ds.trg_lang is not None:
return build_metric(args[name + ".class"], language=ds.trg_lang,
**args[name + ".params"])
return build_metric(args[name + ".class"], language=self._trg_data_pipeline.meta["language"],
**args[name + ".params"])
@register_task
class MultiTaskSpeechTranslation(Task):
def __init__(self, args):
""" Initializes with configuration. """
super(MultiTaskSpeechTranslation, self).__init__(args)
transcript_dp_cls = args.get("transcript_data_pipeline.class", TranscriptDataPipeline)
transcript_dp_params = args.get("transcript_data_pipeline.params", None) or {}
self._transcript_data_pipeline = build_data_pipeline(
transcript_dp_cls, **transcript_dp_params)
translation_dp_cls = args.get("translation_data_pipeline.class", TranscriptDataPipeline)
translation_dp_params = args.get("translation_data_pipeline.params", None) or {}
self._translation_data_pipeline = build_data_pipeline(
translation_dp_cls, **translation_dp_params)
def get_config(self):
return {
"transcript_data_pipeline.class": self._transcript_data_pipeline.__class__.__name__,
"transcript_data_pipeline.params": self._transcript_data_pipeline.get_config(),
"translation_data_pipeline.class": self._translation_data_pipeline.__class__.__name__,
"translation_data_pipeline.params": self._translation_data_pipeline.get_config(),
}
@staticmethod
def class_or_method_args():
""" Returns a list of args for flag definition. """
this_args = super(MultiTaskSpeechTranslation, MultiTaskSpeechTranslation).class_or_method_args()
this_args.extend([
ModuleFlag("transcript_data_pipeline", DataPipeline.REGISTRY_NAME,
default=TranscriptDataPipeline.__name__,
help="The data pipeline for ASR transcription."),
ModuleFlag("translation_data_pipeline", DataPipeline.REGISTRY_NAME,
default=TranscriptDataPipeline.__name__,
help="The data pipeline for translation."),
])
return this_args
def inputs_signature(self, mode) -> Tuple[dict, dict]:
""" Returns the input dtypes and signatures (from dataset). """
dtypes = {"audio": tf.float32, "audio_length": tf.int64}
# [batch, frames, feature_dim]
signatures = {"audio": tf.TensorShape([None, None]),
"audio_length": tf.TensorShape([None, ])}
if mode == compat.ModeKeys.INFER:
return dtypes, signatures
dtypes["transcript"] = tf.int64
signatures["transcript"] = tf.TensorShape([None, None])
dtypes["translation"] = tf.int64
signatures["translation"] = tf.TensorShape([None, None])
return dtypes, signatures
def example_to_input(self, batch_of_data: dict, mode) -> dict:
""" Converts a batch of data into the model readable structure. """
raise NotImplementedError
def get_data_preprocess_fn(self, mode, data_status, args=None) -> callable:
""" Returns a callable function that preprocess the data sample
according to this task. """
_ = args
if mode != compat.ModeKeys.TRAIN:
raise NotImplementedError
# if data_status["audio"] != compat.DataStatus.PROJECTED:
# raise RuntimeError("We recommend one to preprocess the audio in advance.")
def _process_text(text, status, dp):
if status == compat.DataStatus.RAW:
text = dp.process(text, is_processed=False)
return text
def _process(example):
return {"audio": example["audio"],
"transcript": _process_text(example["transcript"], data_status["transcript"],
self._transcript_data_pipeline),
"translation": _process_text(example["translation"], data_status["translation"],
self._translation_data_pipeline)
}
return _process
def get_data_postprocess_fn(self, mode) -> callable:
""" Returns a callable function that postprocess the data sample
according to this task. """
raise NotImplementedError
def create_and_batch_tfds(self,
ds: Dataset,
mode,
args=None,
num_replicas_in_sync=1) -> tf.data.Dataset:
""" Batch dataset. """
raise NotImplementedError
def build_model(self, args, name=None):
"""Build a new model instance."""
raise NotImplementedError
| 51.674847 | 119 | 0.608572 |
ee2620eaa7243e232e0de52745eb4a5cf2a57e18 | 1,052 | py | Python | isi_sdk_7_2/test/test_filepool_default_policy_default_policy.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 24 | 2018-06-22T14:13:23.000Z | 2022-03-23T01:21:26.000Z | isi_sdk_7_2/test/test_filepool_default_policy_default_policy.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 46 | 2018-04-30T13:28:22.000Z | 2022-03-21T21:11:07.000Z | isi_sdk_7_2/test/test_filepool_default_policy_default_policy.py | mohitjain97/isilon_sdk_python | a371f438f542568edb8cda35e929e6b300b1177c | [
"Unlicense"
] | 29 | 2018-06-19T00:14:04.000Z | 2022-02-08T17:51:19.000Z | # coding: utf-8
"""
Isilon SDK
Isilon SDK - Language bindings for the OneFS API # noqa: E501
OpenAPI spec version: 2
Contact: sdk@isilon.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import isi_sdk_7_2
from isi_sdk_7_2.models.filepool_default_policy_default_policy import FilepoolDefaultPolicyDefaultPolicy # noqa: E501
from isi_sdk_7_2.rest import ApiException
class TestFilepoolDefaultPolicyDefaultPolicy(unittest.TestCase):
"""FilepoolDefaultPolicyDefaultPolicy unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testFilepoolDefaultPolicyDefaultPolicy(self):
"""Test FilepoolDefaultPolicyDefaultPolicy"""
# FIXME: construct object with mandatory attributes with example values
# model = isi_sdk_7_2.models.filepool_default_policy_default_policy.FilepoolDefaultPolicyDefaultPolicy() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 25.658537 | 126 | 0.747148 |
b786f9603f5261e8a5ea4730c546f96480c776a1 | 143 | py | Python | modules/text/decorate/arabic/__init__.py | painor/modular_menu_bot | 18e560ef674c72b4572e14ed63f2ae262f364706 | [
"MIT"
] | null | null | null | modules/text/decorate/arabic/__init__.py | painor/modular_menu_bot | 18e560ef674c72b4572e14ed63f2ae262f364706 | [
"MIT"
] | null | null | null | modules/text/decorate/arabic/__init__.py | painor/modular_menu_bot | 18e560ef674c72b4572e14ed63f2ae262f364706 | [
"MIT"
] | null | null | null | from gettext import gettext as _
name = _("decorate arabic text")
description = _("decorates an arabic text using a random method each time")
| 28.6 | 75 | 0.762238 |
08169789dea12aea4a26c3090fa018c1655e45bb | 21,536 | py | Python | sympy/series/gruntz.py | mamueller/sympy | be25332c4341e008839052475c8d4e5853ce56fe | [
"BSD-3-Clause"
] | null | null | null | sympy/series/gruntz.py | mamueller/sympy | be25332c4341e008839052475c8d4e5853ce56fe | [
"BSD-3-Clause"
] | null | null | null | sympy/series/gruntz.py | mamueller/sympy | be25332c4341e008839052475c8d4e5853ce56fe | [
"BSD-3-Clause"
] | 1 | 2019-09-19T06:20:55.000Z | 2019-09-19T06:20:55.000Z | """
Limits
======
Implemented according to the PhD thesis
http://www.cybertester.com/data/gruntz.pdf, which contains very thorough
descriptions of the algorithm including many examples. We summarize here
the gist of it.
All functions are sorted according to how rapidly varying they are at
infinity using the following rules. Any two functions f and g can be
compared using the properties of L:
L=lim log|f(x)| / log|g(x)| (for x -> oo)
We define >, < ~ according to::
1. f > g .... L=+-oo
we say that:
- f is greater than any power of g
- f is more rapidly varying than g
- f goes to infinity/zero faster than g
2. f < g .... L=0
we say that:
- f is lower than any power of g
3. f ~ g .... L!=0, +-oo
we say that:
- both f and g are bounded from above and below by suitable integral
powers of the other
Examples
========
::
2 < x < exp(x) < exp(x**2) < exp(exp(x))
2 ~ 3 ~ -5
x ~ x**2 ~ x**3 ~ 1/x ~ x**m ~ -x
exp(x) ~ exp(-x) ~ exp(2x) ~ exp(x)**2 ~ exp(x+exp(-x))
f ~ 1/f
So we can divide all the functions into comparability classes (x and x^2
belong to one class, exp(x) and exp(-x) belong to some other class). In
principle, we could compare any two functions, but in our algorithm, we
don't compare anything below the class 2~3~-5 (for example log(x) is
below this), so we set 2~3~-5 as the lowest comparability class.
Given the function f, we find the list of most rapidly varying (mrv set)
subexpressions of it. This list belongs to the same comparability class.
Let's say it is {exp(x), exp(2x)}. Using the rule f ~ 1/f we find an
element "w" (either from the list or a new one) from the same
comparability class which goes to zero at infinity. In our example we
set w=exp(-x) (but we could also set w=exp(-2x) or w=exp(-3x) ...). We
rewrite the mrv set using w, in our case {1/w, 1/w^2}, and substitute it
into f. Then we expand f into a series in w::
f = c0*w^e0 + c1*w^e1 + ... + O(w^en), where e0<e1<...<en, c0!=0
but for x->oo, lim f = lim c0*w^e0, because all the other terms go to zero,
because w goes to zero faster than the ci and ei. So::
for e0>0, lim f = 0
for e0<0, lim f = +-oo (the sign depends on the sign of c0)
for e0=0, lim f = lim c0
We need to recursively compute limits at several places of the algorithm, but
as is shown in the PhD thesis, it always finishes.
Important functions from the implementation:
compare(a, b, x) compares "a" and "b" by computing the limit L.
mrv(e, x) returns list of most rapidly varying (mrv) subexpressions of "e"
rewrite(e, Omega, x, wsym) rewrites "e" in terms of w
leadterm(f, x) returns the lowest power term in the series of f
mrv_leadterm(e, x) returns the lead term (c0, e0) for e
limitinf(e, x) computes lim e (for x->oo)
limit(e, z, z0) computes any limit by converting it to the case x->oo
All the functions are really simple and straightforward except
rewrite(), which is the most difficult/complex part of the algorithm.
When the algorithm fails, the bugs are usually in the series expansion
(i.e. in SymPy) or in rewrite.
This code is an almost exact rewrite of the Maple code inside the Gruntz
thesis.
Debugging
---------
Because the gruntz algorithm is highly recursive, it's difficult to
figure out what went wrong inside a debugger. Instead, turn on nice
debug prints by defining the environment variable SYMPY_DEBUG. For
example:
[user@localhost]: SYMPY_DEBUG=True ./bin/isympy
In [1]: limit(sin(x)/x, x, 0)
limitinf(_x*sin(1/_x), _x) = 1
+-mrv_leadterm(_x*sin(1/_x), _x) = (1, 0)
| +-mrv(_x*sin(1/_x), _x) = set([_x])
| | +-mrv(_x, _x) = set([_x])
| | +-mrv(sin(1/_x), _x) = set([_x])
| | +-mrv(1/_x, _x) = set([_x])
| | +-mrv(_x, _x) = set([_x])
| +-mrv_leadterm(exp(_x)*sin(exp(-_x)), _x, set([exp(_x)])) = (1, 0)
| +-rewrite(exp(_x)*sin(exp(-_x)), set([exp(_x)]), _x, _w) = (1/_w*sin(_w), -_x)
| +-sign(_x, _x) = 1
| +-mrv_leadterm(1, _x) = (1, 0)
+-sign(0, _x) = 0
+-limitinf(1, _x) = 1
And check manually which line is wrong. Then go to the source code and
debug this function to figure out the exact problem.
"""
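The ordering described above, L = lim log|f(x)| / log|g(x)| for x -> oo, can be probed numerically: a finite x only approximates the limit, but it makes the comparability classes tangible. A rough sketch (not part of the algorithm itself):

```python
import math

def growth_ratio(f, g, x):
    """Approximate L = log|f(x)| / log|g(x)| at a large finite x."""
    return math.log(abs(f(x))) / math.log(abs(g(x)))

# x**2 ~ x**3: L is finite and nonzero (here exactly 2/3)
assert abs(growth_ratio(lambda x: x**2, lambda x: x**3, 1e6) - 2 / 3) < 1e-9
# exp(x) > x: L grows without bound as x -> oo
assert growth_ratio(math.exp, lambda x: x, 500.0) > 50
```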
from __future__ import print_function, division
from sympy.core import Basic, S, oo, Symbol, I, Dummy, Wild, Mul
from sympy.functions import log, exp
from sympy.series.order import Order
from sympy.simplify.powsimp import powsimp, powdenest
from sympy import cacheit
from sympy.core.compatibility import reduce
from sympy.utilities.timeutils import timethis
timeit = timethis('gruntz')
from sympy.utilities.misc import debug_decorator as debug
def compare(a, b, x):
"""Returns "<" if a<b, "=" for a == b, ">" for a>b"""
# log(exp(...)) must always be simplified here for termination
la, lb = log(a), log(b)
if isinstance(a, Basic) and a.func is exp:
la = a.args[0]
if isinstance(b, Basic) and b.func is exp:
lb = b.args[0]
c = limitinf(la/lb, x)
if c == 0:
return "<"
elif c.is_infinite:
return ">"
else:
return "="
class SubsSet(dict):
"""
Stores (expr, dummy) pairs, and how to rewrite expr-s.
The gruntz algorithm needs to rewrite certain expressions in term of a new
variable w. We cannot use subs, because it is just too smart for us. For
example::
> Omega=[exp(exp(_p - exp(-_p))/(1 - 1/_p)), exp(exp(_p))]
> O2=[exp(-exp(_p) + exp(-exp(-_p))*exp(_p)/(1 - 1/_p))/_w, 1/_w]
> e = exp(exp(_p - exp(-_p))/(1 - 1/_p)) - exp(exp(_p))
> e.subs(Omega[0],O2[0]).subs(Omega[1],O2[1])
-1/w + exp(exp(p)*exp(-exp(-p))/(1 - 1/p))
is really not what we want!
So we do it the hard way and keep track of all the things we potentially
want to substitute by dummy variables. Consider the expression::
exp(x - exp(-x)) + exp(x) + x.
The mrv set is {exp(x), exp(-x), exp(x - exp(-x))}.
We introduce corresponding dummy variables d1, d2, d3 and rewrite::
d3 + d1 + x.
This class first of all keeps track of the mapping expr->variable, i.e.
will at this stage be a dictionary::
{exp(x): d1, exp(-x): d2, exp(x - exp(-x)): d3}.
[It turns out to be more convenient this way round.]
But sometimes expressions in the mrv set have other expressions from the
mrv set as subexpressions, and we need to keep track of that as well. In
this case, d3 is really exp(x - d2), so rewrites at this stage is::
{d3: exp(x-d2)}.
The function rewrite uses all this information to correctly rewrite our
expression in terms of w. In this case w can be chosen to be exp(-x),
i.e. d2. The correct rewriting then is::
exp(-w)/w + 1/w + x.
"""
def __init__(self):
self.rewrites = {}
def __repr__(self):
return super(SubsSet, self).__repr__() + ', ' + self.rewrites.__repr__()
def __getitem__(self, key):
if not key in self:
self[key] = Dummy()
return dict.__getitem__(self, key)
def do_subs(self, e):
for expr, var in self.items():
e = e.subs(var, expr)
return e
def meets(self, s2):
"""Tell whether or not self and s2 have non-empty intersection"""
return set(self.keys()).intersection(list(s2.keys())) != set()
def union(self, s2, exps=None):
"""Compute the union of self and s2, adjusting exps"""
res = self.copy()
tr = {}
for expr, var in s2.items():
if expr in self:
if exps:
exps = exps.subs(var, res[expr])
tr[var] = res[expr]
else:
res[expr] = var
for var, rewr in s2.rewrites.items():
res.rewrites[var] = rewr.subs(tr)
return res, exps
def copy(self):
r = SubsSet()
r.rewrites = self.rewrites.copy()
for expr, var in self.items():
r[expr] = var
return r
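The expr -> dummy bookkeeping that SubsSet performs can be sketched with a plain dict that mints a fresh placeholder on first lookup, mirroring `SubsSet.__getitem__` above (strings stand in for sympy Dummy symbols):

```python
class PlaceholderMap(dict):
    """dict that creates a fresh placeholder name on first lookup,
    mirroring SubsSet.__getitem__ (strings stand in for Dummy)."""
    def __missing__(self, key):
        self[key] = "_d%d" % (len(self) + 1)
        return self[key]

m = PlaceholderMap()
assert m["exp(x)"] == "_d1"
assert m["exp(-x)"] == "_d2"
assert m["exp(x)"] == "_d1"  # repeated lookups reuse the same placeholder
```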
@debug
def mrv(e, x):
"""Returns a SubsSet of most rapidly varying (mrv) subexpressions of 'e',
and e rewritten in terms of these"""
e = powsimp(e, deep=True, combine='exp')
if not isinstance(e, Basic):
raise TypeError("e should be an instance of Basic")
if not e.has(x):
return SubsSet(), e
elif e == x:
s = SubsSet()
return s, s[x]
elif e.is_Mul or e.is_Add:
i, d = e.as_independent(x) # throw away x-independent terms
if d.func != e.func:
s, expr = mrv(d, x)
return s, e.func(i, expr)
a, b = d.as_two_terms()
s1, e1 = mrv(a, x)
s2, e2 = mrv(b, x)
return mrv_max1(s1, s2, e.func(i, e1, e2), x)
elif e.is_Pow:
b, e = e.as_base_exp()
if b == 1:
return SubsSet(), b
if e.has(x):
return mrv(exp(e * log(b)), x)
else:
s, expr = mrv(b, x)
return s, expr**e
elif e.func is log:
s, expr = mrv(e.args[0], x)
return s, log(expr)
elif e.func is exp:
# We know from the theory of this algorithm that exp(log(...)) may always
# be simplified here, and doing so is vital for termination.
if e.args[0].func is log:
return mrv(e.args[0].args[0], x)
# if a product has an infinite factor the result will be
# infinite if there is no zero, otherwise NaN; here, we
# consider the result infinite if any factor is infinite
li = limitinf(e.args[0], x)
if any(_.is_infinite for _ in Mul.make_args(li)):
s1 = SubsSet()
e1 = s1[e]
s2, e2 = mrv(e.args[0], x)
su = s1.union(s2)[0]
su.rewrites[e1] = exp(e2)
return mrv_max3(s1, e1, s2, exp(e2), su, e1, x)
else:
s, expr = mrv(e.args[0], x)
return s, exp(expr)
elif e.is_Function:
l = [mrv(a, x) for a in e.args]
l2 = [s for (s, _) in l if s != SubsSet()]
if len(l2) != 1:
# e.g. something like BesselJ(x, x)
raise NotImplementedError("MRV set computation for functions in"
" several variables not implemented.")
s, ss = l2[0], SubsSet()
args = [ss.do_subs(x[1]) for x in l]
return s, e.func(*args)
elif e.is_Derivative:
raise NotImplementedError("MRV set computation for derivatives"
" not implemented yet.")
raise NotImplementedError(
"Don't know how to calculate the mrv of '%s'" % e)
def mrv_max3(f, expsf, g, expsg, union, expsboth, x):
"""Computes the maximum of two sets of expressions f and g, which
are in the same comparability class, i.e. max() compares (two elements of)
f and g and returns either (f, expsf) [if f is larger], (g, expsg)
[if g is larger] or (union, expsboth) [if f, g are of the same class].
"""
if not isinstance(f, SubsSet):
raise TypeError("f should be an instance of SubsSet")
if not isinstance(g, SubsSet):
raise TypeError("g should be an instance of SubsSet")
if f == SubsSet():
return g, expsg
elif g == SubsSet():
return f, expsf
elif f.meets(g):
return union, expsboth
c = compare(list(f.keys())[0], list(g.keys())[0], x)
if c == ">":
return f, expsf
elif c == "<":
return g, expsg
else:
if c != "=":
raise ValueError("c should be =")
return union, expsboth
def mrv_max1(f, g, exps, x):
"""Computes the maximum of two sets of expressions f and g, which
are in the same comparability class, i.e. mrv_max1() compares (two elements of)
f and g and returns the set, which is in the higher comparability class
of the union of both, if they have the same order of variation.
Also returns exps, with the appropriate substitutions made.
"""
u, b = f.union(g, exps)
return mrv_max3(f, g.do_subs(exps), g, f.do_subs(exps),
u, b, x)
@debug
@cacheit
@timeit
def sign(e, x):
"""
Returns a sign of an expression e(x) for x->oo.
::
e > 0 for x sufficiently large ... 1
e == 0 for x sufficiently large ... 0
e < 0 for x sufficiently large ... -1
The result of this function is currently undefined if e changes sign
arbitrarily often for arbitrarily large x (e.g. sin(x)).
Note that this returns zero only if e is *constantly* zero
for x sufficiently large. [If e is constant, of course, this is just
the same thing as the sign of e.]
"""
from sympy import sign as _sign
if not isinstance(e, Basic):
raise TypeError("e should be an instance of Basic")
if e.is_positive:
return 1
elif e.is_negative:
return -1
elif e.is_zero:
return 0
elif not e.has(x):
return _sign(e)
elif e == x:
return 1
elif e.is_Mul:
a, b = e.as_two_terms()
sa = sign(a, x)
if not sa:
return 0
return sa * sign(b, x)
elif e.func is exp:
return 1
elif e.is_Pow:
s = sign(e.base, x)
if s == 1:
return 1
if e.exp.is_Integer:
return s**e.exp
elif e.func is log:
return sign(e.args[0] - 1, x)
# if all else fails, do it the hard way
c0, e0 = mrv_leadterm(e, x)
return sign(c0, x)
@debug
@timeit
@cacheit
def limitinf(e, x):
"""Limit e(x) for x-> oo"""
# rewrite e in terms of tractable functions only
e = e.rewrite('tractable', deep=True)
if not e.has(x):
return e # e is a constant
if e.has(Order):
e = e.expand().removeO()
if not x.is_positive:
# We make sure that x.is_positive is True so we
# get all the correct mathematical behavior from the expression.
# We need a fresh variable.
p = Dummy('p', positive=True, finite=True)
e = e.subs(x, p)
x = p
c0, e0 = mrv_leadterm(e, x)
sig = sign(e0, x)
if sig == 1:
return S.Zero # e0>0: lim f = 0
elif sig == -1: # e0<0: lim f = +-oo (the sign depends on the sign of c0)
if c0.match(I*Wild("a", exclude=[I])):
return c0*oo
s = sign(c0, x)
# the leading term shouldn't be 0:
if s == 0:
raise ValueError("Leading term should not be 0")
return s*oo
elif sig == 0:
return limitinf(c0, x) # e0=0: lim f = lim c0
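The three sign cases above implement the rule lim f = lim c0*w^e0 for w -> 0+ from the module docstring. A sketch of the case analysis for a real, nonzero c0:

```python
import math

def limit_from_leadterm(c0, e0):
    """lim_{w -> 0+} c0 * w**e0 for real nonzero c0: a sketch of the
    three cases handled in limitinf above."""
    if e0 > 0:
        return 0.0                          # w**e0 -> 0 dominates
    if e0 < 0:
        return math.copysign(math.inf, c0)  # blows up, sign taken from c0
    return c0                               # e0 == 0: the limit is c0 itself

assert limit_from_leadterm(3.0, 2) == 0.0
assert limit_from_leadterm(-2.0, -1) == -math.inf
assert limit_from_leadterm(5.0, 0) == 5.0
```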
def moveup2(s, x):
r = SubsSet()
for expr, var in s.items():
r[expr.subs(x, exp(x))] = var
for var, expr in s.rewrites.items():
r.rewrites[var] = s.rewrites[var].subs(x, exp(x))
return r
def moveup(l, x):
return [e.subs(x, exp(x)) for e in l]
@debug
@timeit
def calculate_series(e, x, logx=None):
""" Calculates at least one term of the series of "e" in "x".
This is a place that fails most often, so it is in its own function.
"""
from sympy.polys import cancel
for t in e.lseries(x, logx=logx):
t = cancel(t)
if t.has(exp) and t.has(log):
t = powdenest(t)
if t.simplify():
break
return t
@debug
@timeit
@cacheit
def mrv_leadterm(e, x):
"""Returns (c0, e0) for e."""
Omega = SubsSet()
if not e.has(x):
return (e, S.Zero)
if Omega == SubsSet():
Omega, exps = mrv(e, x)
if not Omega:
# e really does not depend on x after simplification
series = calculate_series(e, x)
c0, e0 = series.leadterm(x)
if e0 != 0:
raise ValueError("e0 should be 0")
return c0, e0
if x in Omega:
# move the whole omega up (exponentiate each term):
Omega_up = moveup2(Omega, x)
e_up = moveup([e], x)[0]
exps_up = moveup([exps], x)[0]
# NOTE: there is no need to move this down!
e = e_up
Omega = Omega_up
exps = exps_up
#
# The positive dummy, w, is used here so log(w*2) etc. will expand;
# a unique dummy is needed in this algorithm
#
# For limits of complex functions, the algorithm would have to be
# improved, or just find limits of Re and Im components separately.
#
w = Dummy("w", real=True, positive=True, finite=True)
f, logw = rewrite(exps, Omega, x, w)
series = calculate_series(f, w, logx=logw)
return series.leadterm(w)
def build_expression_tree(Omega, rewrites):
r""" Helper function for rewrite.
We need to sort Omega (mrv set) so that we replace an expression before
we replace any expression in terms of which it has to be rewritten::
e1 ---> e2 ---> e3
\
-> e4
Here we can do e1, e2, e3, e4 or e1, e2, e4, e3.
To do this we assemble the nodes into a tree, and sort them by height.
This function builds the tree, rewrites then sorts the nodes.
"""
class Node:
def ht(self):
return reduce(lambda x, y: x + y,
[x.ht() for x in self.before], 1)
nodes = {}
for expr, v in Omega:
n = Node()
n.before = []
n.var = v
n.expr = expr
nodes[v] = n
for _, v in Omega:
if v in rewrites:
n = nodes[v]
r = rewrites[v]
for _, v2 in Omega:
if r.has(v2):
n.before.append(nodes[v2])
return nodes
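Sorting by node height is what guarantees e1 (which has to be rewritten in terms of e2 and e4) is handled before the expressions it depends on. A minimal stand-alone sketch of the height computation from `Node.ht` above, using a plain dependency dict:

```python
def height(node, deps):
    """1 + total height of everything node must be rewritten after
    (mirrors Node.ht in build_expression_tree)."""
    return 1 + sum(height(d, deps) for d in deps.get(node, []))

# e1 ---> e2 ---> e3, and e2 ---> e4  (the diagram from the docstring above)
deps = {"e1": ["e2"], "e2": ["e3", "e4"]}
assert height("e3", deps) == 1
assert height("e2", deps) == 3
assert height("e1", deps) == 4
order = sorted(deps.keys() | {"e3", "e4"}, key=lambda n: height(n, deps), reverse=True)
assert order[0] == "e1"  # the deepest rewrite chain is handled first
```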
@debug
@timeit
def rewrite(e, Omega, x, wsym):
"""e(x) ... the function
Omega ... the mrv set
wsym ... the symbol which is going to be used for w
Returns the rewritten e in terms of w and log(w). See test_rewrite1()
for examples and correct results.
"""
from sympy import ilcm
if not isinstance(Omega, SubsSet):
raise TypeError("Omega should be an instance of SubsSet")
if len(Omega) == 0:
raise ValueError("Length can not be 0")
# all items in Omega must be exponentials
for t in Omega.keys():
if t.func is not exp:
raise ValueError("Value should be exp")
rewrites = Omega.rewrites
Omega = list(Omega.items())
nodes = build_expression_tree(Omega, rewrites)
Omega.sort(key=lambda x: nodes[x[1]].ht(), reverse=True)
# make sure we know the sign of each exp() term; after the loop,
# g is going to be the "w" - the simplest one in the mrv set
for g, _ in Omega:
sig = sign(g.args[0], x)
if sig != 1 and sig != -1:
raise NotImplementedError('Result depends on the sign of %s' % sig)
if sig == 1:
wsym = 1/wsym # if g goes to oo, substitute 1/w
# O2 is a list, which results by rewriting each item in Omega using "w"
O2 = []
denominators = []
for f, var in Omega:
c = limitinf(f.args[0]/g.args[0], x)
if c.is_Rational:
denominators.append(c.q)
arg = f.args[0]
if var in rewrites:
            if rewrites[var].func is not exp:
raise ValueError("Value should be exp")
arg = rewrites[var].args[0]
O2.append((var, exp((arg - c*g.args[0]).expand())*wsym**c))
# Remember that Omega contains subexpressions of "e". So now we find
# them in "e" and substitute them for our rewriting, stored in O2
# the following powsimp is necessary to automatically combine exponentials,
# so that the .subs() below succeeds:
# TODO this should not be necessary
f = powsimp(e, deep=True, combine='exp')
for a, b in O2:
f = f.subs(a, b)
for _, var in Omega:
assert not f.has(var)
# finally compute the logarithm of w (logw).
logw = g.args[0]
if sig == 1:
logw = -logw # log(w)->log(1/w)=-log(w)
# Some parts of sympy have difficulty computing series expansions with
# non-integral exponents. The following heuristic improves the situation:
exponent = reduce(ilcm, denominators, 1)
f = f.subs(wsym, wsym**exponent)
logw /= exponent
return f, logw
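The `ilcm` step at the end of `rewrite` is a plain rescaling trick: multiplying every exponent by the lcm of their denominators makes them all integral, which is what the `w -> w**exponent` substitution achieves. A self-contained miniature (own `lcm` helper, so it does not assume Python 3.9's `math.lcm`):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

exponents = [Fraction(1, 2), Fraction(3, 4), Fraction(2)]
common = reduce(lcm, (e.denominator for e in exponents), 1)
# substituting w -> w**common turns every fractional power into an integer one
rescaled = [e * common for e in exponents]
print(common, rescaled)  # 4 [Fraction(2, 1), Fraction(3, 1), Fraction(8, 1)]
```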
def gruntz(e, z, z0, dir="+"):
"""
Compute the limit of e(z) at the point z0 using the Gruntz algorithm.
z0 can be any expression, including oo and -oo.
For dir="+" (default) it calculates the limit from the right
(z->z0+) and for dir="-" the limit from the left (z->z0-). For infinite z0
(oo or -oo), the dir argument doesn't matter.
This algorithm is fully described in the module docstring in the gruntz.py
file. It relies heavily on the series expansion. Most frequently, gruntz()
is only used if the faster limit() function (which uses heuristics) fails.
"""
if not z.is_Symbol:
raise NotImplementedError("Second argument must be a Symbol")
# convert all limits to the limit z->oo; sign of z is handled in limitinf
r = None
if z0 == oo:
r = limitinf(e, z)
elif z0 == -oo:
r = limitinf(e.subs(z, -z), z)
else:
if str(dir) == "-":
e0 = e.subs(z, z0 - 1/z)
elif str(dir) == "+":
e0 = e.subs(z, z0 + 1/z)
else:
raise NotImplementedError("dir must be '+' or '-'")
r = limitinf(e0, z)
# This is a bit of a heuristic for nice results... we always rewrite
# tractable functions in terms of familiar intractable ones.
    # It might be nicer to rewrite them exactly to what they were initially,
# but that would take some work to implement.
return r.rewrite('intractable', deep=True)
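The `z -> z0 + 1/z` substitution used above can be sanity-checked numerically without sympy: a right-sided limit at a finite point becomes a limit at infinity. For sin(x)/x at 0+:

```python
import math

def f(x):
    return math.sin(x) / x

z0 = 0.0
# f(z0 + 1/w) as w -> oo approximates the limit of f at z0 from the right
approximations = [f(z0 + 1.0 / w) for w in (1e1, 1e3, 1e6)]
print(approximations[-1])  # converges to 1.0
```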
| 32.482655 | 83 | 0.589385 |
f9ab4b160ce5ba162feee37f4b4d41e3421795ac | 3,604 | py | Python | tourism/settings.py | mhiloca/TourismApi | b9904354f2d249e996d79f7a6ad58269c68363d7 | [
"MIT"
] | null | null | null | tourism/settings.py | mhiloca/TourismApi | b9904354f2d249e996d79f7a6ad58269c68363d7 | [
"MIT"
] | null | null | null | tourism/settings.py | mhiloca/TourismApi | b9904354f2d249e996d79f7a6ad58269c68363d7 | [
"MIT"
] | null | null | null | """
Django settings for tourism project.
Generated by 'django-admin startproject' using Django 2.2.5.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
from decouple import config
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = ['tourism-api-test.herokuapp.com', 'localhost']  # host names only; Django strips the port before matching
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django_filters',
'rest_framework',
'rest_framework.authtoken',
'attractions',
'comments',
'core',
'reviews',
'locations',
'teste',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'tourism.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'tourism.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
from dj_database_url import parse as dburl
default_dburl = 'sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')
DATABASES = {'default': config('DATABASE_URL', default=default_dburl, cast=dburl)}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
MEDIA_ROOT = 'images'
MEDIA_URL = '/media/'
REST_FRAMEWORK = {
'DEFAULT_FILTER_BACKENDS': ['django_filters.rest_framework.DjangoFilterBackend']
}
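The `config()` calls above come from python-decouple; a minimal, environment-only lookalike (a hypothetical `env_config`, not the real library, which also reads `.env` files and parses boolean strings properly) shows the default/cast behaviour:

```python
import os

def env_config(name, default=None, cast=str):
    """Simplified sketch of decouple-style config(): env var, default, cast."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return cast(raw)

os.environ["MY_SECRET_KEY"] = "s3cret"
print(env_config("MY_SECRET_KEY"))                        # "s3cret"
print(env_config("MY_DEBUG", default=False, cast=bool))   # missing -> default
# caveat: cast=bool on a *present* "False" string would still give True,
# which is why decouple ships its own boolean parsing
```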
| 25.560284 | 91 | 0.704495 |
73a83df24f35f451808c953ddb5b42fd5fa9ffdd | 2,197 | py | Python | script/analyze/poetcloud.py | onlyrico/poetry | 2a81d7824832aed24d393f20fc8adeafb1bda7fb | [
"MIT"
] | 496 | 2020-04-09T12:54:26.000Z | 2022-03-27T09:21:48.000Z | script/analyze/poetcloud.py | onlyrico/poetry | 2a81d7824832aed24d393f20fc8adeafb1bda7fb | [
"MIT"
] | 5 | 2019-08-19T14:39:18.000Z | 2021-11-12T02:16:31.000Z | script/analyze/poetcloud.py | onlyrico/poetry | 2a81d7824832aed24d393f20fc8adeafb1bda7fb | [
"MIT"
] | 59 | 2019-12-17T01:06:11.000Z | 2022-03-24T06:06:41.000Z | # Poet word cloud generation: word size = frequency
# pip install pyecharts
import os.path as path
import os
from math import log10
import math
from pyecharts.charts import WordCloud
adjust = {}
with open('./search_count.csv', 'r', encoding='utf-8') as file:
    segs = list(map(lambda line: line.split(','), file.readlines()[1:]))
count_max = 0
for seg in segs:
count = int(seg[1])
if count > count_max:
count_max = count
print('search count max', count_max)
count_max_base = log10(count_max + 2)
for seg in segs:
adjust_val = log10(int(seg[1]) + 2) / count_max_base
adjust[seg[0]] = adjust_val
origin_dir_path = path.join('..', '..', 'data')
frequencies = []
total_word_num = 0
for poet_path, dir_list, file_list in os.walk(origin_dir_path):
for poet_dir in dir_list:
poet_name = poet_dir[0:poet_dir.index('_')]
frequency = 0
poem_count = 0
for pp, dd, ff in os.walk(path.join(origin_dir_path, poet_dir)):
for poem_file_name in ff:
# not poem file
if not poem_file_name.endswith('.pt'):
continue
poem_file = open(path.join(pp, poem_file_name),
mode='rt', encoding='utf-8')
for line in poem_file.readlines():
line = line.strip()
if not line.startswith("date") and not line.startswith('title'):
frequency = frequency + len(line)
# Score of poem number
poem_count = poem_count + 1
        total_word_num += frequency
frequency = frequency + poem_count * 30
rate = adjust.get(poet_name) if poet_name in adjust else 0
frequency = math.floor(frequency * rate)
frequencies.append((poet_name, frequency))
frequencies.sort(reverse=True, key=lambda bi: bi[1])
frequencies = frequencies[0:200]
print(frequencies)
print('Total word number: ' + str(total_word_num))
wc = (
WordCloud().add(shape='circle', series_name="诗人分布",
data_pair=frequencies, rotate_step=10, width=1000, height=600)
)
wc.render('index.html')
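The adjustment used above, `log10(count + 2) / log10(count_max + 2)`, squashes raw search counts into (0, 1] so a single hugely-searched poet does not dwarf everyone else in the cloud:

```python
from math import log10

def dampen(count, count_max):
    # log-scaled weight: monotone in count, equal to 1.0 at count_max
    return log10(count + 2) / log10(count_max + 2)

count_max = 10_000
for c in (0, 10, 100, count_max):
    print(c, round(dampen(c, count_max), 3))
```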
| 31.385714 | 85 | 0.592626 |
72328682d7d493d5ab811d47e587b49e1e212ad1 | 318 | py | Python | setup.py | jbjulia/mcc-mnc | 1290476535a3786515bdc92a6189afd327a7c816 | [
"MIT"
] | null | null | null | setup.py | jbjulia/mcc-mnc | 1290476535a3786515bdc92a6189afd327a7c816 | [
"MIT"
] | null | null | null | setup.py | jbjulia/mcc-mnc | 1290476535a3786515bdc92a6189afd327a7c816 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
name='mcc-mnc',
version='1.0.0',
packages=['src'],
url='https://github.com/jbjulia/mcc-mnc',
license='MIT',
author='Joseph Julian',
author_email='joseph.b.julian@gmail.com',
description='Mobile Country Codes (MCC) and Mobile Network Codes (MNC)',
)
| 24.461538 | 76 | 0.654088 |
0e9534913154632467ba4ee1275445d7b9a57141 | 6,109 | py | Python | tools/c7n_trailcreator/setup.py | wildbinaryforest/cloud-custodian | f132a909003738ff48c3008665ad7f4aa596553d | [
"Apache-2.0"
] | 1 | 2020-06-26T16:55:54.000Z | 2020-06-26T16:55:54.000Z | tools/c7n_trailcreator/setup.py | wildbinaryforest/cloud-custodian | f132a909003738ff48c3008665ad7f4aa596553d | [
"Apache-2.0"
] | null | null | null | tools/c7n_trailcreator/setup.py | wildbinaryforest/cloud-custodian | f132a909003738ff48c3008665ad7f4aa596553d | [
"Apache-2.0"
] | null | null | null | # Automatically generated from poetry/pyproject.toml
# flake8: noqa
# -*- coding: utf-8 -*-
from setuptools import setup
packages = \
['c7n_trailcreator']
package_data = \
{'': ['*']}
install_requires = \
['argcomplete (>=1.11.1,<2.0.0)',
'attrs (>=19.3.0,<20.0.0)',
'boto3 (>=1.14.8,<2.0.0)',
'botocore (>=1.17.8,<2.0.0)',
'c7n (>=0.9.3,<0.10.0)',
'c7n-org (>=0.6.2,<0.7.0)',
'click (>=7.1.2,<8.0.0)',
'click>=7.0,<8.0',
'docutils (>=0.15.2,<0.16.0)',
'importlib-metadata (>=1.6.1,<2.0.0)',
'jmespath (>=0.10.0,<0.11.0)',
'jsonschema (>=3.2.0,<4.0.0)',
'pyrsistent (>=0.16.0,<0.17.0)',
'python-dateutil (>=2.8.1,<3.0.0)',
'pyyaml (>=5.3.1,<6.0.0)',
's3transfer (>=0.3.3,<0.4.0)',
'six (>=1.15.0,<2.0.0)',
'tabulate (>=0.8.7,<0.9.0)',
'urllib3 (>=1.25.9,<2.0.0)',
'zipp (>=3.1.0,<4.0.0)']
entry_points = \
{'console_scripts': ['c7n-trailcreator = c7n_trailcreator.trailcreator:cli']}
setup_kwargs = {
'name': 'c7n-trailcreator',
'version': '0.2.3',
'description': 'Cloud Custodian - Retroactive Tag Resource Creators from CloudTrail',
'long_description': '# c7n-trailcreator: Retroactive Resource Creator Tagging\n\nThis script will process cloudtrail records to create a sqlite db of\nresources and their creators, and then use that sqlitedb to tag\nthe resources with their creator\'s name.\n\nIn processing cloudtrail it can use either Athena or S3 Select. A\nconfig file of the events and resources of interest is required.\n\n## Install\n\n```shell\n$ pip install c7n_trailcreator\n\n$ c7n-trailcreator --help\n```\n\n## Config File\n\nThe config file format here is similiar to what custodian requires\nfor lambda policies on cloudtrail api events as an event selector.\n\nFirst for each resource, the custodian resource-type is required\nto be specified, and then for each event, we need to know the\nname of the service, the event name, and a jmespath expression\nto get the resource ids.\n\nHere\'s a a few examples, covering iam-user, iam-role, and and an s3 bucket.\n\n\n```json\n{\n "resources": [\n {\n "resource": "iam-role",\n "events": [\n {\n "event": "CreateRole",\n "ids": "requestParameters.roleName",\n "service": "iam.amazonaws.com"\n }\n ]\n },\n {\n "resource": "s3",\n "events": [\n {\n "ids": "requestParameters.bucketName",\n "event": "CreateBucket",\n "service": "s3.amazonaws.com"\n }\n ]\n },\n {\n "resource": "iam-user",\n "events": [\n {\n "event": "CreateUser",\n "ids": "requestParameters.userName",\n "service": "iam.amazonaws.com"\n }\n ]\n }]\n}\n```\n\n## Athena Usage\n\nTrail creators supports loading data from s3 using s3 select or from cloudtrail s3 using athena.\n\nNote you\'ll have to pre-created the athena table for cloudtrail previously per\nhttps://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html\n\nLet\'s use the example config file to load up data for all the roles, buckets, and users created in 2019\n\n```\nc7n-trailcreator load-athena \\\n --region us-east-1 \\\n\t--resource-map resource_map.json \\\n\t--table cloudtrail_logs_custodian_skunk_trails \\\n\t--db 
"creators.db" \\\n\t--year 2019\n```\n\nBy default we\'ll use the default s3 athena output used by the console,\nand the default db and primary workgroup, you can pass all of these in\non the cli to be more explicit.\n\nYou can also specify to just process a month with `--month 2019/11` or\nan individual day with `--day 2019/02/01`\n\n```\nINFO:c7n_trailowner:Athena query:569712dc-d1e9-4474-b86f-6579c53b5b46\nINFO:c7n_trailowner:Polling athena query progress scanned:489.24 Mb qexec:28.62s\nINFO:c7n_trailowner:Polling athena query progress scanned:1.29 Gb qexec:88.96s\nINFO:c7n_trailowner:Polling athena query progress scanned:2.17 Gb qexec:141.16s\nINFO:c7n_trailowner:processing athena result page 78 records\nINFO:c7n_trailowner:Athena Processed 78 records\n```\n\nNote you can reprocess a completed query\'s results, by passing in `--query-id` on the cli.\n\n## Tagging\n\nIt supports this across all the resources that custodian supports.\n\n```\n$ c7n-trailcreator tag \\\n\t--db creators.db \\\n\t--creator-tag Owner \\\n\t--region us-east-1\nINFO:c7n_trailowner:account:644160558196 region:us-east-1 tag 13 iam-role resources users:5 population:97 not-found:84 records:124\nINFO:c7n_trailowner:account:644160558196 region:us-east-1 tag 5 iam-user resources users:4 population:6 not-found:1 records:18\nINFO:c7n_trailowner:account:644160558196 region:us-east-1 tag 9 s3 resources users:4 population:14 not-found:5 records:20\nINFO:c7n_trailowner:auto tag summary account:644160558196 region:us-east-1\n iam-role-not-found: 84\n iam-role: 13\n iam-user-not-found: 1\n iam-user: 5\n s3-not-found: 5\n s3: 9\nINFO:c7n_trailowner:Total resources tagged: 27\n```\n\nlet\'s break down one of these log messages\n\n```\nINFO:c7n_trailowner:account:644160558196 region:us-east-1 tag 13 iam-role resources users:5 population:97 not-found:84 records:124\n```\n\n- records: the count of database create events we have for this resource type.\n- users: the number of unique users for whom we have 
create events.\n- not-found: the number of resources for whom we do not have create events, ie created before or after our trail analysis period.\n- population: the total number of resources in the account region.\n\n## Multi Account / Multi Region\n\nc7n-trailcreator supports executing across multiple accounts and regions when tagging\nusing the same file format that c7n-org uses to denote accounts. See `tag-org` subcommand.\n\n',
'long_description_content_type': 'text/markdown',
'author': 'Cloud Custodian Project',
'author_email': None,
'maintainer': None,
'maintainer_email': None,
'url': 'https://cloudcustodian.io',
'packages': packages,
'package_data': package_data,
'install_requires': install_requires,
'entry_points': entry_points,
'python_requires': '>=3.6,<4.0',
}
setup(**setup_kwargs)
| 107.175439 | 4,623 | 0.697823 |
c71a62a28556172e1c39ba97410872cbe2788e43 | 1,534 | py | Python | implementation/adjacency_set_graph/__init__.py | e-liyai/Graphs_in_Python | 7b7ec2e38be8761ada4f58b65d6d50c8f2ff133b | [
"MIT"
] | null | null | null | implementation/adjacency_set_graph/__init__.py | e-liyai/Graphs_in_Python | 7b7ec2e38be8761ada4f58b65d6d50c8f2ff133b | [
"MIT"
] | null | null | null | implementation/adjacency_set_graph/__init__.py | e-liyai/Graphs_in_Python | 7b7ec2e38be8761ada4f58b65d6d50c8f2ff133b | [
"MIT"
] | null | null | null | import numpy as np
from implementation.base_graph import Graph
from implementation.node import Node
class AdjacencySetGraph(Graph):
def __init__(self, numVertices, directed=False):
super(AdjacencySetGraph, self).__init__(numVertices, directed)
self.vertex_list = []
for i in range(numVertices):
self.vertex_list.append(Node(i))
def add_edge(self, v1, v2, weight=1):
if v1 >= self.numVertices or v2 >= self.numVertices or v1 < 0 or v2 < 0:
raise ValueError('Vertices %d and %d are out of bounds' % (v1, v2))
if weight != 1:
            raise ValueError('An adjacency set cannot represent an edge weight != 1')
self.vertex_list[v1].add_edge(v2)
        if not self.directed:
self.vertex_list[v2].add_edge(v1)
def get_adjacent_vertices(self, v):
self.check_valid_vertex(v)
return self.vertex_list[v].get_adjacent_vertices()
def get_indegree(self, v):
self.check_valid_vertex(v)
indegree = 0
for i in range(self.numVertices):
if v in self.get_adjacent_vertices(i):
indegree += 1
return indegree
def display(self):
for i in range(self.numVertices):
for v in self.get_adjacent_vertices(i):
print(i, ' -----> ', v)
def get_edge_weight(self, v1, v2):
return 1
def check_valid_vertex(self, v):
if v < 0 or v >= self.numVertices:
raise ValueError('Cannot access vertex %d' % v)
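A self-contained miniature of the adjacency-set idea (the real class above inherits from a `Graph` base and a `Node` type defined in other modules not shown here):

```python
class TinyAdjSetGraph:
    def __init__(self, num_vertices, directed=False):
        self.directed = directed
        # one set of neighbours per vertex: O(1) edge lookup, no duplicate edges
        self.adjacency = [set() for _ in range(num_vertices)]

    def add_edge(self, v1, v2):
        self.adjacency[v1].add(v2)
        if not self.directed:
            self.adjacency[v2].add(v1)

    def get_indegree(self, v):
        # count how many adjacency sets contain v, as get_indegree does above
        return sum(1 for neighbours in self.adjacency if v in neighbours)

g = TinyAdjSetGraph(4, directed=True)
g.add_edge(0, 1)
g.add_edge(0, 2)
g.add_edge(2, 3)
print(sorted(g.adjacency[0]), g.get_indegree(2))  # [1, 2] 1
```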
| 27.890909 | 80 | 0.613429 |
f88d42669772483c7412efe16c27fe8a5c4bb6a6 | 335 | py | Python | tests/__init__.py | davidlt/pkgtools | d9648d5c97c1c9612cca802bed1d3f12610e7be4 | [
"BSD-3-Clause"
] | 2 | 2015-04-25T14:48:37.000Z | 2017-12-13T14:24:40.000Z | tests/__init__.py | davidlt/pkgtools | d9648d5c97c1c9612cca802bed1d3f12610e7be4 | [
"BSD-3-Clause"
] | 60 | 2015-01-26T09:44:36.000Z | 2022-01-08T16:53:56.000Z | tests/__init__.py | davidlt/pkgtools | d9648d5c97c1c9612cca802bed1d3f12610e7be4 | [
"BSD-3-Clause"
] | 15 | 2015-01-19T08:46:53.000Z | 2020-04-14T19:11:30.000Z | from os.path import abspath
import imp
location = abspath(__file__)
testDir = location.rsplit("/", 1)[0]
baseDir = location.rsplit("/", 2)[0]
cmsBuildFilename = abspath(baseDir + "/cmsBuild")
cmsBuildFile = open(cmsBuildFilename, 'r')
cmsBuild = imp.load_module("cmsBuild", cmsBuildFile, cmsBuildFilename, ["", 'r', imp.PY_SOURCE])
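`imp.load_module` as used above has been deprecated since Python 3.4 and removed in 3.12; the importlib equivalent of loading a module from an arbitrary file path looks roughly like this (standalone sketch with a throwaway module file):

```python
import importlib.util
import os
import tempfile

def load_module_from_path(name, path):
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# demo: write a tiny module to disk, then load it by path
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
    fh.write("ANSWER = 42\n")
mod = load_module_from_path("throwaway", fh.name)
os.unlink(fh.name)
print(mod.ANSWER)  # 42
```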
| 37.222222 | 96 | 0.725373 |
9ea749c178406275cc9b62017882ba125e3e89e2 | 1,163 | py | Python | tests/test_relu.py | gavinuhma/tf-encrypted | 4e18d78a151bbe91489a1773fb839b889ff5b460 | [
"Apache-2.0"
] | 3 | 2018-10-18T19:36:02.000Z | 2020-07-05T19:46:23.000Z | tests/test_relu.py | dropoutlabs/tf-encrypted | 48c9dc7419163425e736ad05bb19980d134fc851 | [
"Apache-2.0"
] | null | null | null | tests/test_relu.py | dropoutlabs/tf-encrypted | 48c9dc7419163425e736ad05bb19980d134fc851 | [
"Apache-2.0"
] | null | null | null | # pylint: disable=missing-docstring
import unittest
import numpy as np
import tensorflow as tf
import tf_encrypted as tfe
from tf_encrypted.layers.activation import Relu
class TestRelu(unittest.TestCase):
def setUp(self):
tf.reset_default_graph()
def test_forward(self):
input_shape = [2, 2, 2, 50]
input_relu = np.random.randn(np.prod(input_shape)).astype(
np.float32).reshape(input_shape)
with tfe.protocol.SecureNN() as prot:
tf.reset_default_graph()
relu_input = prot.define_private_variable(input_relu)
relu_layer = Relu(input_shape)
relu_out_pond = relu_layer.forward(relu_input)
with tfe.Session() as sess:
sess.run(tf.global_variables_initializer())
out_pond = sess.run(relu_out_pond.reveal(), tag='tfe')
tf.reset_default_graph()
x = tf.Variable(input_relu, dtype=tf.float32)
relu_out_tf = tf.nn.relu(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
out_tensorflow = sess.run(relu_out_tf)
np.testing.assert_allclose(out_pond, out_tensorflow, atol=.01)
if __name__ == '__main__':
unittest.main()
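What the test above ultimately checks, stripped of both frameworks, is the elementwise ReLU definition; a plain-Python reference (not part of either library):

```python
def relu(x):
    # max(0, x), applied elementwise by the real layers
    return x if x > 0 else 0.0

samples = [-2.0, -0.5, 0.0, 3.0]
print([relu(v) for v in samples])  # [0.0, 0.0, 0.0, 3.0]
```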
| 26.431818 | 68 | 0.703353 |
e92d9b211e3c976b00af5ec99b05baa817837dd0 | 3,518 | py | Python | userbot/modules/sed.py | StayWithMe69/ViollinProject | b27f22e85ced75fa2230bfcec222341c374ca98c | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | userbot/modules/sed.py | StayWithMe69/ViollinProject | b27f22e85ced75fa2230bfcec222341c374ca98c | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | null | null | null | userbot/modules/sed.py | StayWithMe69/ViollinProject | b27f22e85ced75fa2230bfcec222341c374ca98c | [
"Naumen",
"Condor-1.1",
"MS-PL"
] | 1 | 2022-03-17T22:23:51.000Z | 2022-03-17T22:23:51.000Z | # Copyright (C) 2019 The Raphielscape Company LLC.
#
# Licensed under the Raphielscape Public License, Version 1.c (the "License");
# you may not use this file except in compliance with the License.
#
# The entire source code is OSSRPL except 'sed' which is GPLv3
# License: GPLv3 and OSSRPL
""" Userbot command for sed. """
import re
from sre_constants import error as sre_err
from userbot import CMD_HELP
from userbot.events import register
DELIMITERS = ("/", ":", "|", "_")
async def separate_sed(sed_string):
""" Separate sed arguments. """
if len(sed_string) < 2:
return
if (
        len(sed_string) >= 3
and sed_string[2] in DELIMITERS
and sed_string.count(sed_string[2]) >= 2
):
delim = sed_string[2]
start = counter = 3
while counter < len(sed_string):
if sed_string[counter] == "\\":
counter += 1
elif sed_string[counter] == delim:
replace = sed_string[start:counter]
counter += 1
start = counter
break
counter += 1
else:
return None
while counter < len(sed_string):
if (
sed_string[counter] == "\\"
and counter + 1 < len(sed_string)
and sed_string[counter + 1] == delim
):
sed_string = sed_string[:counter] + sed_string[counter + 1 :]
elif sed_string[counter] == delim:
replace_with = sed_string[start:counter]
counter += 1
break
counter += 1
else:
return replace, sed_string[start:], ""
flags = ""
if counter < len(sed_string):
flags = sed_string[counter:]
return replace, replace_with, flags.lower()
return None
@register(outgoing=True, pattern=r"^\.s")
async def sed(command):
""" For sed command, use sed on Telegram. """
sed_result = await separate_sed(command.text)
textx = await command.get_reply_message()
if sed_result:
if textx:
to_fix = textx.text
else:
return await command.edit(
"`Master, I don't have brains. Well you too don't I guess.`"
)
repl, repl_with, flags = sed_result
if not repl:
return await command.edit(
"`Master, I don't have brains. Well you too don't I guess.`"
)
try:
check = re.match(repl, to_fix, flags=re.IGNORECASE)
if check and check.group(0).lower() == to_fix.lower():
return await command.edit("`Boi!, that's a reply. Don't use sed`")
if "i" in flags and "g" in flags:
text = re.sub(repl, repl_with, to_fix, flags=re.I).strip()
elif "i" in flags:
text = re.sub(repl, repl_with, to_fix, count=1, flags=re.I).strip()
elif "g" in flags:
text = re.sub(repl, repl_with, to_fix).strip()
else:
text = re.sub(repl, repl_with, to_fix, count=1).strip()
except sre_err:
return await command.edit("B O I! [Learn Regex](https://regexone.com)")
if text:
await command.edit(f"Did you mean? \n\n{text}")
CMD_HELP.update(
{
"sed": ">`.s<delimiter><old word(s)><delimiter><new word(s)>`"
"\nUsage: Replaces a word or words using sed."
"\nDelimiters: `/, :, |, _`"
}
)
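The i/g flag handling above maps straight onto `re.sub` arguments: `count=1` replaces only the first match, `flags=re.I` ignores case, and the two combine independently:

```python
import re

to_fix = "Spam spam SPAM"
first_only = re.sub("spam", "eggs", to_fix, count=1)            # no flags
all_ci = re.sub("spam", "eggs", to_fix, flags=re.I)             # like "ig"
first_ci = re.sub("spam", "eggs", to_fix, count=1, flags=re.I)  # like "i"
print(first_only)  # Spam eggs SPAM
print(all_ci)      # eggs eggs eggs
print(first_ci)    # eggs spam SPAM
```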
| 30.068376 | 83 | 0.542354 |
6e3ce063d0092079b053ba007ae0d848e51e691d | 1,910 | py | Python | spirit/category/models.py | shriyanka/daemo-forum | 58c555f69208beedbb0c09f7b7d1e32ab741b2c5 | [
"MIT"
] | 1 | 2016-02-29T01:26:42.000Z | 2016-02-29T01:26:42.000Z | spirit/category/models.py | shriyanka/daemo-forum | 58c555f69208beedbb0c09f7b7d1e32ab741b2c5 | [
"MIT"
] | 16 | 2015-08-10T18:28:18.000Z | 2022-03-11T23:12:48.000Z | spirit/category/models.py | shriyanka/daemo-forum | 58c555f69208beedbb0c09f7b7d1e32ab741b2c5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.core.urlresolvers import reverse
from django.conf import settings
from .managers import CategoryQuerySet
from ..core.utils.models import AutoSlugField
class Category(models.Model):
    parent = models.ForeignKey('self', verbose_name=_("category parent"), null=True, blank=True, on_delete=models.CASCADE)
title = models.CharField(_("title"), max_length=75)
slug = AutoSlugField(populate_from="title", db_index=False, blank=True)
description = models.CharField(_("description"), max_length=255, blank=True)
is_closed = models.BooleanField(_("closed"), default=False)
is_removed = models.BooleanField(_("removed"), default=False)
is_private = models.BooleanField(_("private"), default=False)
# topic_count = models.PositiveIntegerField(_("topic count"), default=0)
objects = CategoryQuerySet.as_manager()
class Meta:
ordering = ['title', 'pk']
verbose_name = _("category")
verbose_name_plural = _("categories")
def get_absolute_url(self):
if self.pk == settings.ST_TOPIC_PRIVATE_CATEGORY_PK:
return reverse('spirit:topic:private:index')
else:
return reverse('spirit:category:detail', kwargs={'pk': str(self.id), 'slug': self.slug})
@property
def is_subcategory(self):
if self.parent_id:
return True
else:
return False
# def topic_posted_handler(sender, topic, **kwargs):
# if topic.category.is_subcategory:
# category = Category.objects.filter(pk__in=[topic.category.pk, topic.category.parent.pk])
# else:
# category = Category.objects.filter(pk=topic.category.pk)
#
# category.update(topic_count=F('topic_count') + 1)
# topic_posted.connect(topic_posted_handler, dispatch_uid=__name__)
| 32.931034 | 100 | 0.7 |
a6f2e7346e17f22476592e8451e93e0c8c6de516 | 13,248 | py | Python | spanner/tests/unit/test_batch.py | bomboradata/bombora-google-cloud-python | 255bbebe6c50490f40fcc3eed40bae1e77e03859 | [
"Apache-2.0"
] | null | null | null | spanner/tests/unit/test_batch.py | bomboradata/bombora-google-cloud-python | 255bbebe6c50490f40fcc3eed40bae1e77e03859 | [
"Apache-2.0"
] | null | null | null | spanner/tests/unit/test_batch.py | bomboradata/bombora-google-cloud-python | 255bbebe6c50490f40fcc3eed40bae1e77e03859 | [
"Apache-2.0"
] | null | null | null | # Copyright 2016 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from google.cloud._testing import _GAXBaseAPI
TABLE_NAME = 'citizens'
COLUMNS = ['email', 'first_name', 'last_name', 'age']
VALUES = [
[u'phred@exammple.com', u'Phred', u'Phlyntstone', 32],
[u'bharney@example.com', u'Bharney', u'Rhubble', 31],
]
class _BaseTest(unittest.TestCase):
PROJECT_ID = 'project-id'
INSTANCE_ID = 'instance-id'
INSTANCE_NAME = 'projects/' + PROJECT_ID + '/instances/' + INSTANCE_ID
DATABASE_ID = 'database-id'
DATABASE_NAME = INSTANCE_NAME + '/databases/' + DATABASE_ID
SESSION_ID = 'session-id'
SESSION_NAME = DATABASE_NAME + '/sessions/' + SESSION_ID
def _make_one(self, *args, **kwargs):
return self._getTargetClass()(*args, **kwargs)
class Test_BatchBase(_BaseTest):
def _getTargetClass(self):
from google.cloud.spanner_v1.batch import _BatchBase
return _BatchBase
def _compare_values(self, result, source):
from google.protobuf.struct_pb2 import ListValue
from google.protobuf.struct_pb2 import Value
for found, expected in zip(result, source):
self.assertIsInstance(found, ListValue)
self.assertEqual(len(found.values), len(expected))
for found_cell, expected_cell in zip(found.values, expected):
self.assertIsInstance(found_cell, Value)
if isinstance(expected_cell, int):
self.assertEqual(
int(found_cell.string_value), expected_cell)
else:
self.assertEqual(found_cell.string_value, expected_cell)
def test_ctor(self):
session = _Session()
base = self._make_one(session)
self.assertIs(base._session, session)
self.assertEqual(len(base._mutations), 0)
def test__check_state_virtual(self):
session = _Session()
base = self._make_one(session)
with self.assertRaises(NotImplementedError):
base._check_state()
def test_insert(self):
from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation
session = _Session()
base = self._make_one(session)
base.insert(TABLE_NAME, columns=COLUMNS, values=VALUES)
self.assertEqual(len(base._mutations), 1)
mutation = base._mutations[0]
self.assertIsInstance(mutation, Mutation)
write = mutation.insert
self.assertIsInstance(write, Mutation.Write)
self.assertEqual(write.table, TABLE_NAME)
self.assertEqual(write.columns, COLUMNS)
self._compare_values(write.values, VALUES)
def test_update(self):
from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation
session = _Session()
base = self._make_one(session)
base.update(TABLE_NAME, columns=COLUMNS, values=VALUES)
self.assertEqual(len(base._mutations), 1)
mutation = base._mutations[0]
self.assertIsInstance(mutation, Mutation)
write = mutation.update
self.assertIsInstance(write, Mutation.Write)
self.assertEqual(write.table, TABLE_NAME)
self.assertEqual(write.columns, COLUMNS)
self._compare_values(write.values, VALUES)
def test_insert_or_update(self):
from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation
session = _Session()
base = self._make_one(session)
base.insert_or_update(TABLE_NAME, columns=COLUMNS, values=VALUES)
self.assertEqual(len(base._mutations), 1)
mutation = base._mutations[0]
self.assertIsInstance(mutation, Mutation)
write = mutation.insert_or_update
self.assertIsInstance(write, Mutation.Write)
self.assertEqual(write.table, TABLE_NAME)
self.assertEqual(write.columns, COLUMNS)
self._compare_values(write.values, VALUES)
def test_replace(self):
from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation
session = _Session()
base = self._make_one(session)
base.replace(TABLE_NAME, columns=COLUMNS, values=VALUES)
self.assertEqual(len(base._mutations), 1)
mutation = base._mutations[0]
self.assertIsInstance(mutation, Mutation)
write = mutation.replace
self.assertIsInstance(write, Mutation.Write)
self.assertEqual(write.table, TABLE_NAME)
self.assertEqual(write.columns, COLUMNS)
self._compare_values(write.values, VALUES)
def test_delete(self):
from google.cloud.spanner_v1.proto.mutation_pb2 import Mutation
from google.cloud.spanner_v1.keyset import KeySet
keys = [[0], [1], [2]]
keyset = KeySet(keys=keys)
session = _Session()
base = self._make_one(session)
base.delete(TABLE_NAME, keyset=keyset)
self.assertEqual(len(base._mutations), 1)
mutation = base._mutations[0]
self.assertIsInstance(mutation, Mutation)
delete = mutation.delete
self.assertIsInstance(delete, Mutation.Delete)
self.assertEqual(delete.table, TABLE_NAME)
key_set_pb = delete.key_set
self.assertEqual(len(key_set_pb.ranges), 0)
self.assertEqual(len(key_set_pb.keys), len(keys))
for found, expected in zip(key_set_pb.keys, keys):
self.assertEqual(
[int(value.string_value) for value in found.values], expected)
class TestBatch(_BaseTest):
def _getTargetClass(self):
from google.cloud.spanner_v1.batch import Batch
return Batch
def test_ctor(self):
session = _Session()
batch = self._make_one(session)
self.assertIs(batch._session, session)
def test_commit_already_committed(self):
from google.cloud.spanner_v1.keyset import KeySet
keys = [[0], [1], [2]]
keyset = KeySet(keys=keys)
database = _Database()
session = _Session(database)
batch = self._make_one(session)
batch.committed = object()
batch.delete(TABLE_NAME, keyset=keyset)
with self.assertRaises(ValueError):
batch.commit()
def test_commit_grpc_error(self):
from google.gax.errors import GaxError
from google.cloud.spanner_v1.proto.transaction_pb2 import (
TransactionOptions)
from google.cloud.spanner_v1.proto.mutation_pb2 import (
Mutation as MutationPB)
from google.cloud.spanner_v1.keyset import KeySet
keys = [[0], [1], [2]]
keyset = KeySet(keys=keys)
database = _Database()
api = database.spanner_api = _FauxSpannerAPI(
_random_gax_error=True)
session = _Session(database)
batch = self._make_one(session)
batch.delete(TABLE_NAME, keyset=keyset)
with self.assertRaises(GaxError):
batch.commit()
(session, mutations, single_use_txn, options) = api._committed
self.assertEqual(session, self.SESSION_NAME)
        self.assertEqual(len(mutations), 1)
mutation = mutations[0]
self.assertIsInstance(mutation, MutationPB)
self.assertTrue(mutation.HasField('delete'))
delete = mutation.delete
self.assertEqual(delete.table, TABLE_NAME)
keyset_pb = delete.key_set
self.assertEqual(len(keyset_pb.ranges), 0)
self.assertEqual(len(keyset_pb.keys), len(keys))
for found, expected in zip(keyset_pb.keys, keys):
self.assertEqual(
[int(value.string_value) for value in found.values], expected)
self.assertIsInstance(single_use_txn, TransactionOptions)
self.assertTrue(single_use_txn.HasField('read_write'))
self.assertEqual(options.kwargs['metadata'],
[('google-cloud-resource-prefix', database.name)])
def test_commit_ok(self):
import datetime
from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse
from google.cloud.spanner_v1.proto.transaction_pb2 import (
TransactionOptions)
from google.cloud._helpers import UTC
from google.cloud._helpers import _datetime_to_pb_timestamp
now = datetime.datetime.utcnow().replace(tzinfo=UTC)
now_pb = _datetime_to_pb_timestamp(now)
response = CommitResponse(commit_timestamp=now_pb)
database = _Database()
api = database.spanner_api = _FauxSpannerAPI(
_commit_response=response)
session = _Session(database)
batch = self._make_one(session)
batch.insert(TABLE_NAME, COLUMNS, VALUES)
committed = batch.commit()
self.assertEqual(committed, now)
self.assertEqual(batch.committed, committed)
(session, mutations, single_use_txn, options) = api._committed
self.assertEqual(session, self.SESSION_NAME)
self.assertEqual(mutations, batch._mutations)
self.assertIsInstance(single_use_txn, TransactionOptions)
self.assertTrue(single_use_txn.HasField('read_write'))
self.assertEqual(options.kwargs['metadata'],
[('google-cloud-resource-prefix', database.name)])
def test_context_mgr_already_committed(self):
import datetime
from google.cloud._helpers import UTC
now = datetime.datetime.utcnow().replace(tzinfo=UTC)
database = _Database()
api = database.spanner_api = _FauxSpannerAPI()
session = _Session(database)
batch = self._make_one(session)
batch.committed = now
with self.assertRaises(ValueError):
with batch:
pass # pragma: NO COVER
self.assertEqual(api._committed, None)
def test_context_mgr_success(self):
import datetime
from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse
from google.cloud.spanner_v1.proto.transaction_pb2 import (
TransactionOptions)
from google.cloud._helpers import UTC
from google.cloud._helpers import _datetime_to_pb_timestamp
now = datetime.datetime.utcnow().replace(tzinfo=UTC)
now_pb = _datetime_to_pb_timestamp(now)
response = CommitResponse(commit_timestamp=now_pb)
database = _Database()
api = database.spanner_api = _FauxSpannerAPI(
_commit_response=response)
session = _Session(database)
batch = self._make_one(session)
with batch:
batch.insert(TABLE_NAME, COLUMNS, VALUES)
self.assertEqual(batch.committed, now)
(session, mutations, single_use_txn, options) = api._committed
self.assertEqual(session, self.SESSION_NAME)
self.assertEqual(mutations, batch._mutations)
self.assertIsInstance(single_use_txn, TransactionOptions)
self.assertTrue(single_use_txn.HasField('read_write'))
self.assertEqual(options.kwargs['metadata'],
[('google-cloud-resource-prefix', database.name)])
def test_context_mgr_failure(self):
import datetime
from google.cloud.spanner_v1.proto.spanner_pb2 import CommitResponse
from google.cloud._helpers import UTC
from google.cloud._helpers import _datetime_to_pb_timestamp
now = datetime.datetime.utcnow().replace(tzinfo=UTC)
now_pb = _datetime_to_pb_timestamp(now)
response = CommitResponse(commit_timestamp=now_pb)
database = _Database()
api = database.spanner_api = _FauxSpannerAPI(
_commit_response=response)
session = _Session(database)
batch = self._make_one(session)
class _BailOut(Exception):
pass
with self.assertRaises(_BailOut):
with batch:
batch.insert(TABLE_NAME, COLUMNS, VALUES)
raise _BailOut()
self.assertEqual(batch.committed, None)
self.assertEqual(api._committed, None)
self.assertEqual(len(batch._mutations), 1)
class _Session(object):
def __init__(self, database=None, name=TestBatch.SESSION_NAME):
self._database = database
self.name = name
class _Database(object):
name = 'testing'
class _FauxSpannerAPI(_GAXBaseAPI):
_create_instance_conflict = False
_instance_not_found = False
_committed = None
def commit(self, session, mutations,
transaction_id='', single_use_transaction=None, options=None):
from google.gax.errors import GaxError
assert transaction_id == ''
self._committed = (session, mutations, single_use_transaction, options)
if self._random_gax_error:
raise GaxError('error')
return self._commit_response
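The three context-manager tests above pin down one contract: `__exit__` commits only when no exception escaped the `with` block, and a failed block keeps its queued mutations uncommitted. A minimal sketch of that contract (hypothetical `MiniBatch`, not the real Spanner class):

```python
class MiniBatch:
    """Toy stand-in for a Spanner Batch: commit on clean exit only."""

    def __init__(self):
        self._mutations = []
        self.committed = False

    def insert(self, row):
        self._mutations.append(row)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:          # no exception escaped the with-block
            self.committed = True
        return False                  # propagate any exception unchanged


with MiniBatch() as ok:
    ok.insert("row-1")

failed = MiniBatch()
try:
    with failed:
        failed.insert("row-1")
        raise RuntimeError("bail out")
except RuntimeError:
    pass
```

On success `ok.committed` flips to `True`; after the bail-out, `failed` still holds its one queued mutation but was never committed — exactly what `test_context_mgr_failure` asserts.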
| 36.098093 | 79 | 0.66795 |
f5b3d4def4eb1b55b946b4312030764117d0dac4 | 1,269 | py | Python | transforms/TileTissueNet5Class.py | msk-mind/luna-ml | a4ef02c5f64e83b55a9d244e04fd1ef01d604bf6 | [
"Apache-2.0"
] | null | null | null | transforms/TileTissueNet5Class.py | msk-mind/luna-ml | a4ef02c5f64e83b55a9d244e04fd1ef01d604bf6 | [
"Apache-2.0"
] | null | null | null | transforms/TileTissueNet5Class.py | msk-mind/luna-ml | a4ef02c5f64e83b55a9d244e04fd1ef01d604bf6 | [
"Apache-2.0"
] | null | null | null | import torch
import torchvision
from torchvision.models import resnet18, resnet34, resnet50, squeezenet1_1, vgg19_bn
from luna.pathology.analysis.ml import TorchTransformModel
from models.tissue_tile_net import TissueTileNet
from utils import get_state_dict_from_git_tag
import numpy as np
class TissueTileNetTransformer(TorchTransformModel):
def __init__(self, use_weights=False):
# del kwargs['depth']
self.model = TissueTileNet(resnet18(), 5, activation=torch.nn.Softmax(dim=0))
self.class_labels = {0:'Stroma', 1:'Tumor', 2:'Glass', 3:'Necrosis', 4:'TILs'}
self.column_labels = {0:'Classification'}
state_dict = get_state_dict_from_git_tag("main:tissue_net_2021-01-19_21.05.24-e17.pth")
self.model.load_state_dict(state_dict)
def get_preprocess(self):
return torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])
def transform(self, X):
out = self.model(X)
preds = torch.argmax(out, dim=1)
labels = np.array([self.class_labels[val.item()] for val in preds])
labels = np.expand_dims(labels, axis=1)
return labels
| 36.257143 | 95 | 0.689519 |
d85b4e6462fb0489bd1eaf654a7b4bd95cadbd34 | 1,180 | py | Python | conference/migrations/0027_auto_20201204_1539.py | ethancarlsson/epcon | 10ae259ad75271651506d44cc5e71cf089349ea3 | [
"BSD-2-Clause"
] | 40 | 2015-03-03T22:14:58.000Z | 2022-02-15T22:27:48.000Z | conference/migrations/0027_auto_20201204_1539.py | ethancarlsson/epcon | 10ae259ad75271651506d44cc5e71cf089349ea3 | [
"BSD-2-Clause"
] | 699 | 2015-01-21T10:13:29.000Z | 2022-02-08T09:26:36.000Z | conference/migrations/0027_auto_20201204_1539.py | ethancarlsson/epcon | 10ae259ad75271651506d44cc5e71cf089349ea3 | [
"BSD-2-Clause"
] | 96 | 2015-01-22T11:03:13.000Z | 2022-01-31T05:35:34.000Z | # Generated by Django 2.2.17 on 2020-12-04 14:39
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('conference', '0026_auto_20201204_1130'),
]
operations = [
migrations.AlterField(
model_name='conferencetag',
name='name',
field=models.CharField(max_length=100, unique=True, verbose_name='Name'),
),
migrations.AlterField(
model_name='conferencetag',
name='slug',
field=models.SlugField(max_length=100, unique=True, verbose_name='Slug'),
),
migrations.AlterField(
model_name='conferencetaggeditem',
name='content_type',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='conference_conferencetaggeditem_tagged_items', to='contenttypes.ContentType', verbose_name='Content type'),
),
migrations.AlterField(
model_name='conferencetaggeditem',
name='object_id',
field=models.IntegerField(db_index=True, verbose_name='Object id'),
),
]
| 33.714286 | 202 | 0.637288 |
2b7dd22e7ac7d553df7dfbdbceb44315315ecd2c | 2,055 | py | Python | 2019/day23/2.py | tomhel/AoC_2019 | c76c34235821864bc763f85d43cbcbfb9ed43469 | [
"MIT"
] | 1 | 2021-12-07T13:18:52.000Z | 2021-12-07T13:18:52.000Z | 2019/day23/2.py | tomhel/AoC | c76c34235821864bc763f85d43cbcbfb9ed43469 | [
"MIT"
] | null | null | null | 2019/day23/2.py | tomhel/AoC | c76c34235821864bc763f85d43cbcbfb9ed43469 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import intcode
import threading
import queue
class Network(object):
def __init__(self):
self.devices = {}
self.nat = None
self.nat_log = []
def connect(self, address, ingress, egress):
self.devices[address] = (ingress, egress)
def is_idle(self):
for ingress, egress in self.devices.values():
if not ingress.empty() or not egress.empty():
return False
return True
def packet_switch(self):
while True:
for adr, ch in self.devices.items():
self.devices[adr][0].put(-1)
try:
dst = ch[1].get_nowait()
x = ch[1].get()
y = ch[1].get()
if dst == 255:
self.nat = (x, y)
else:
self.devices[dst][0].put(x)
self.devices[dst][0].put(y)
except queue.Empty:
pass
if self.nat is not None and self.is_idle():
self.devices[0][0].put(self.nat[0])
self.devices[0][0].put(self.nat[1])
self.nat_log.append(self.nat)
self.nat = None
if len(self.nat_log) >= 2 and self.nat_log[-1][1] == self.nat_log[-2][1]:
return self.nat_log[-1]
def rebuild_network():
network = Network()
threads = []
computers = []
prog = intcode.load_program("input")
for adr in range(50):
ingress = queue.Queue()
egress = queue.Queue()
computer = intcode.Computer(prog[:], intcode.PipeIOHandler(ingress, egress, [adr]))
computers.append(computer)
network.connect(adr, ingress, egress)
t = threading.Thread(target=computer.execute)
t.start()
threads.append(t)
x, y = network.packet_switch()
for c in computers:
c.suspend()
for t in threads:
t.join()
return y
print(rebuild_network())
| 25.6875 | 91 | 0.506569 |
091ed8b032e0cd44e6dec6272ed7f59361ea7ac1 | 4,552 | py | Python | configs/example/hmctest.py | believe7028/gem5 | 9486ad465cbecd816a8d9500d05fa86c573756dd | [
"BSD-3-Clause"
] | 24 | 2019-07-09T06:17:08.000Z | 2022-03-17T11:13:22.000Z | configs/example/hmctest.py | believe7028/gem5 | 9486ad465cbecd816a8d9500d05fa86c573756dd | [
"BSD-3-Clause"
] | null | null | null | configs/example/hmctest.py | believe7028/gem5 | 9486ad465cbecd816a8d9500d05fa86c573756dd | [
"BSD-3-Clause"
] | 12 | 2019-07-08T09:31:55.000Z | 2021-12-15T02:42:35.000Z | from __future__ import print_function
import sys
import argparse
import subprocess
from pprint import pprint
import m5
from m5.objects import *
from m5.util import *
addToPath('../')
from common import MemConfig
from common import HMC
def add_options(parser):
parser.add_argument("--external-memory-system", default=0, action="store",
type=int, help="External memory system")
# TLM related options, currently optional in configs/common/MemConfig.py
parser.add_argument("--tlm-memory", action="store_true", help="use\
external port for SystemC TLM co-simulation. Default:\
no")
# Elastic traces related options, currently optional in
# configs/common/MemConfig.py
parser.add_argument("--elastic-trace-en", action="store_true",
help="enable capture of data dependency and\
instruction fetch traces using elastic trace\
probe.\nDefault: no")
# Options related to traffic generation
parser.add_argument("--num-tgen", default=4, action="store", type=int,
choices=[4], help="number of traffic generators.\
Right now this script supports only 4.\nDefault: 4")
parser.add_argument("--tgen-cfg-file",
default="./configs/example/hmc_tgen.cfg",
type=str, help="Traffic generator(s) configuration\
file. Note: this script uses the same configuration\
file for all traffic generators")
# considering 4GB HMC device with following parameters
# hmc_device_size = '4GB'
# hmc_vault_size = '256MB'
# hmc_stack_size = 8
# hmc_bank_in_stack = 2
# hmc_bank_size = '16MB'
# hmc_bank_in_vault = 16
def build_system(options):
# create the system we are going to simulate
system = System()
# use timing mode for the interaction between master-slave ports
system.mem_mode = 'timing'
# set the clock fequency of the system
clk = '100GHz'
vd = VoltageDomain(voltage='1V')
system.clk_domain = SrcClockDomain(clock=clk, voltage_domain=vd)
# add traffic generators to the system
system.tgen = [TrafficGen(config_file=options.tgen_cfg_file) for i in
range(options.num_tgen)]
# Config memory system with given HMC arch
MemConfig.config_mem(options, system)
# Connect the traffic generatiors
if options.arch == "distributed":
for i in range(options.num_tgen):
system.tgen[i].port = system.membus.slave
# connect the system port even if it is not used in this example
system.system_port = system.membus.slave
if options.arch == "mixed":
for i in range(int(options.num_tgen/2)):
system.tgen[i].port = system.membus.slave
hh = system.hmc_host
if options.enable_global_monitor:
system.tgen[2].port = hh.lmonitor[2].slave
hh.lmonitor[2].master = hh.seriallink[2].slave
system.tgen[3].port = hh.lmonitor[3].slave
hh.lmonitor[3].master = hh.seriallink[3].slave
else:
system.tgen[2].port = hh.seriallink[2].slave
system.tgen[3].port = hh.seriallink[3].slave
# connect the system port even if it is not used in this example
system.system_port = system.membus.slave
if options.arch == "same":
hh = system.hmc_host
for i in range(options.num_links_controllers):
if options.enable_global_monitor:
system.tgen[i].port = hh.lmonitor[i].slave
else:
system.tgen[i].port = hh.seriallink[i].slave
# set up the root SimObject
root = Root(full_system=False, system=system)
return root
def main():
parser = argparse.ArgumentParser(description="Simple system using HMC as\
main memory")
HMC.add_options(parser)
add_options(parser)
options = parser.parse_args()
# build the system
root = build_system(options)
# instantiate all of the objects we've created so far
m5.instantiate()
print("Beginning simulation!")
event = m5.simulate(10000000000)
m5.stats.dump()
print('Exiting @ tick %i because %s (exit code is %i)' % (m5.curTick(),
event.getCause(),
event.getCode()))
print("Done")
if __name__ == "__m5_main__":
main()
| 39.241379 | 79 | 0.619069 |
e762ed54f9d144cbb05b1189a0d17a33ecf28de2 | 2,572 | py | Python | naive_bayes/emails/email_preprocess.py | bitnot/ud120 | 344469e5e0e875473f2070e333f706d7987b4993 | [
"MIT"
] | null | null | null | naive_bayes/emails/email_preprocess.py | bitnot/ud120 | 344469e5e0e875473f2070e333f706d7987b4993 | [
"MIT"
] | null | null | null | naive_bayes/emails/email_preprocess.py | bitnot/ud120 | 344469e5e0e875473f2070e333f706d7987b4993 | [
"MIT"
] | null | null | null | #!/usr/bin/python
import pickle
import numpy
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, f_classif
def preprocess(words_file="../word_data.pkl", authors_file="../email_authors.pkl"):
"""
this function takes a pre-made list of email texts (by default word_data.pkl)
and the corresponding authors (by default email_authors.pkl) and performs
a number of preprocessing steps:
-- splits into training/testing sets (10% testing)
-- vectorizes into tfidf matrix
-- selects/keeps most helpful features
    after this, the features and labels are put into numpy arrays, which play nice with sklearn functions
4 objects are returned:
-- training/testing features
-- training/testing labels
"""
# the words (features) and authors (labels), already largely preprocessed
# this preprocessing will be repeated in the text learning mini-project
with open(authors_file, "rb") as authors_file_handler:
authors = pickle.load(authors_file_handler)
with open(words_file, "rb") as words_file_handler:
word_data = pickle.load(words_file_handler)
# test_size is the percentage of events assigned to the test set
# (remainder go into training)
features_train, features_test, labels_train, labels_test = train_test_split(
word_data, authors, test_size=0.1, random_state=42)
# text vectorization--go from strings to lists of numbers
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
stop_words='english')
features_train_transformed = vectorizer.fit_transform(features_train)
features_test_transformed = vectorizer.transform(features_test)
# feature selection, because text is super high dimensional and
# can be really computationally chewy as a result
selector = SelectPercentile(f_classif, percentile=10)
selector.fit(features_train_transformed, labels_train)
features_train_transformed = selector.transform(
features_train_transformed).toarray()
features_test_transformed = selector.transform(
features_test_transformed).toarray()
# info on the data
print(f"no. of Chris training emails: {sum(labels_train)}")
print(
f"no. of Sara training emails: {len(labels_train)-sum(labels_train)}")
return features_train_transformed, features_test_transformed, labels_train, labels_test
| 40.825397 | 108 | 0.726672 |
c785207371cb075b5c0a5cd9854d2f5ffa42ede2 | 3,983 | py | Python | plugins/inline.py | TR-TECH-GUIDE/Amanda-Music-v2 | fd49e9692ec5d657729b378e97a3bb298930f07d | [
"MIT"
] | 1 | 2021-12-22T14:10:41.000Z | 2021-12-22T14:10:41.000Z | plugins/inline.py | TharukRenuja/Amanda-Music-v2 | fd49e9692ec5d657729b378e97a3bb298930f07d | [
"MIT"
] | null | null | null | plugins/inline.py | TharukRenuja/Amanda-Music-v2 | fd49e9692ec5d657729b378e97a3bb298930f07d | [
"MIT"
] | 2 | 2021-09-04T02:16:26.000Z | 2021-10-05T03:46:54.000Z | #MIT License
#Copyright (c) 2021 TR-TECH-GUIDE
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in all
#copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
#SOFTWARE.
from pyrogram.handlers import InlineQueryHandler
from youtubesearchpython import VideosSearch
from utils import USERNAME
from pyrogram.types import InlineQueryResultArticle, InputTextMessageContent, InlineKeyboardButton, InlineKeyboardMarkup
from pyrogram import Client, errors
from config import Config
REPLY_MESSAGE=Config.REPLY_MESSAGE
buttons = [
[
InlineKeyboardButton('⚡️Make Own Bot', url='https://heroku.com/deploy?template=https://github.com/TR-TECH-GUIDE/Amanda-Music-v2'),
InlineKeyboardButton('🧩 Source Code', url='https://github.com/TR-TECH-GUIDE/Amanda-Music-v2'),
],
[
InlineKeyboardButton('🎧Play Music', url=f'https://t.me/{USERNAME}'),
InlineKeyboardButton('👨🏼🦯 Help', callback_data='help')
]
]
@Client.on_inline_query()
async def search(client, query):
answers = []
if query.query == "ORU_MANDAN_PM_VANNU":
answers.append(
InlineQueryResultArticle(
title="Deploy",
input_message_content=InputTextMessageContent(f"{REPLY_MESSAGE}\n\n<b>You can't use this bot in your group, for that you have to make your own bot from the [SOURCE CODE](https://github.com/TR-TECH-GUIDE/Amanda-Music-v2) below.</b>", disable_web_page_preview=True),
reply_markup=InlineKeyboardMarkup(buttons)
)
)
await query.answer(results=answers, cache_time=0)
return
string = query.query.lower().strip().rstrip()
if string == "":
await client.answer_inline_query(
query.id,
results=answers,
switch_pm_text=("Search a youtube video"),
switch_pm_parameter="help",
cache_time=0
)
else:
videosSearch = VideosSearch(string.lower(), limit=50)
for v in videosSearch.result()["result"]:
answers.append(
InlineQueryResultArticle(
title=v["title"],
description=("Duration: {} Views: {}").format(
v["duration"],
v["viewCount"]["short"]
),
input_message_content=InputTextMessageContent(
"/play https://www.youtube.com/watch?v={}".format(
v["id"]
)
),
thumb_url=v["thumbnails"][0]["url"]
)
)
try:
await query.answer(
results=answers,
cache_time=0
)
except errors.QueryIdInvalid:
await query.answer(
results=answers,
cache_time=0,
switch_pm_text=("Nothing found"),
switch_pm_parameter="",
)
__handlers__ = [
[
InlineQueryHandler(
search
)
]
]
| 39.83 | 280 | 0.624906 |
8398700a31b0bec0eff51711a05a7bc9861971f5 | 279 | py | Python | src/mdurl/_url.py | executablebooks/mdurl | a50720050d4d7bd1d66e41f81603bd82f6a1b228 | [
"MIT"
] | null | null | null | src/mdurl/_url.py | executablebooks/mdurl | a50720050d4d7bd1d66e41f81603bd82f6a1b228 | [
"MIT"
] | 3 | 2021-08-31T13:35:50.000Z | 2022-02-04T20:11:39.000Z | src/mdurl/_url.py | hukkin/mdurl | a50720050d4d7bd1d66e41f81603bd82f6a1b228 | [
"MIT"
] | 1 | 2021-08-31T13:27:54.000Z | 2021-08-31T13:27:54.000Z | from typing import NamedTuple, Optional
class URL(NamedTuple):
protocol: Optional[str]
slashes: bool
auth: Optional[str]
port: Optional[str]
hostname: Optional[str]
hash: Optional[str] # noqa: A003
search: Optional[str]
pathname: Optional[str]
| 21.461538 | 39 | 0.681004 |
6f90367354680864560ddd18abb8aed582b1eba6 | 303 | py | Python | src/griottes/__init__.py | BaroudLab/napari-griottes | c0a5885979a7747b8d3a496d5ff8c24d09d4afe0 | [
"BSD-3-Clause"
] | null | null | null | src/griottes/__init__.py | BaroudLab/napari-griottes | c0a5885979a7747b8d3a496d5ff8c24d09d4afe0 | [
"BSD-3-Clause"
] | null | null | null | src/griottes/__init__.py | BaroudLab/napari-griottes | c0a5885979a7747b8d3a496d5ff8c24d09d4afe0 | [
"BSD-3-Clause"
] | null | null | null |
try:
from ._version import version as __version__
except ImportError:
__version__ = "unknown"
from ._reader import napari_get_reader
from ._writer import write_single_image, write_multiple
from ._sample_data import make_sample_data
from ._widget import ExampleQWidget, example_magic_widget
| 25.25 | 57 | 0.818482 |
956d5190e27962643d8eafc53a32c2cccd36eca2 | 57,151 | py | Python | zerver/lib/export.py | anh2111htd/zulip | 001ec76e1f9ec460782dd4db608084dc142cae26 | [
"Apache-2.0"
] | null | null | null | zerver/lib/export.py | anh2111htd/zulip | 001ec76e1f9ec460782dd4db608084dc142cae26 | [
"Apache-2.0"
] | null | null | null | zerver/lib/export.py | anh2111htd/zulip | 001ec76e1f9ec460782dd4db608084dc142cae26 | [
"Apache-2.0"
] | null | null | null | import datetime
from boto.s3.connection import S3Connection
from boto.s3.key import Key # for mypy
from django.apps import apps
from django.conf import settings
from django.forms.models import model_to_dict
from django.utils.timezone import make_aware as timezone_make_aware
from django.utils.timezone import is_naive as timezone_is_naive
import glob
import logging
import os
import ujson
import subprocess
import tempfile
import shutil
from scripts.lib.zulip_tools import overwrite_symlink
from zerver.lib.avatar_hash import user_avatar_path_from_ids
from analytics.models import RealmCount, UserCount, StreamCount
from zerver.models import UserProfile, Realm, Client, Huddle, Stream, \
UserMessage, Subscription, Message, RealmEmoji, RealmFilter, Reaction, \
RealmDomain, Recipient, DefaultStream, get_user_profile_by_id, \
UserPresence, UserActivity, UserActivityInterval, CustomProfileField, \
CustomProfileFieldValue, get_display_recipient, Attachment, get_system_bot, \
RealmAuditLog, UserHotspot, MutedTopic, Service, UserGroup, \
UserGroupMembership, BotStorageData, BotConfigData
from zerver.lib.parallel import run_parallel
from typing import Any, Callable, Dict, List, Optional, Set, Tuple, \
Union
# Custom mypy types follow:
Record = Dict[str, Any]
TableName = str
TableData = Dict[TableName, List[Record]]
Field = str
Path = str
Context = Dict[str, Any]
FilterArgs = Dict[str, Any]
IdSource = Tuple[TableName, Field]
SourceFilter = Callable[[Record], bool]
# These next two types are callbacks, which mypy does not
# support well, because PEP 484 says "using callbacks
# with keyword arguments is not perceived as a common use case."
# CustomFetch = Callable[[TableData, Config, Context], None]
# PostProcessData = Callable[[TableData, Config, Context], None]
CustomFetch = Any # TODO: make more specific, see above
PostProcessData = Any # TODO: make more specific
# The keys of our MessageOutput variables are normally
# List[Record], but when we write partials, we can get
# lists of integers or a single integer.
# TODO: This could maybe be improved using TypedDict?
MessageOutput = Dict[str, Union[List[Record], List[int], int]]
MESSAGE_BATCH_CHUNK_SIZE = 1000
ALL_ZULIP_TABLES = {
'analytics_fillstate',
'analytics_installationcount',
'analytics_realmcount',
'analytics_streamcount',
'analytics_usercount',
'otp_static_staticdevice',
'otp_static_statictoken',
'otp_totp_totpdevice',
'social_auth_association',
'social_auth_code',
'social_auth_nonce',
'social_auth_partial',
'social_auth_usersocialauth',
'two_factor_phonedevice',
'zerver_archivedattachment',
'zerver_archivedattachment_messages',
'zerver_archivedmessage',
'zerver_archivedusermessage',
'zerver_attachment',
'zerver_attachment_messages',
'zerver_botconfigdata',
'zerver_botstoragedata',
'zerver_client',
'zerver_customprofilefield',
'zerver_customprofilefieldvalue',
'zerver_defaultstream',
'zerver_defaultstreamgroup',
'zerver_defaultstreamgroup_streams',
'zerver_emailchangestatus',
'zerver_huddle',
'zerver_message',
'zerver_multiuseinvite',
'zerver_multiuseinvite_streams',
'zerver_preregistrationuser',
'zerver_preregistrationuser_streams',
'zerver_pushdevicetoken',
'zerver_reaction',
'zerver_realm',
'zerver_realmauditlog',
'zerver_realmdomain',
'zerver_realmemoji',
'zerver_realmfilter',
'zerver_recipient',
'zerver_scheduledemail',
'zerver_scheduledmessage',
'zerver_service',
'zerver_stream',
'zerver_submessage',
'zerver_subscription',
'zerver_useractivity',
'zerver_useractivityinterval',
'zerver_usergroup',
'zerver_usergroupmembership',
'zerver_userhotspot',
'zerver_usermessage',
'zerver_userpresence',
'zerver_userprofile',
'zerver_userprofile_groups',
'zerver_userprofile_user_permissions',
'zerver_userstatus',
'zerver_mutedtopic',
}
NON_EXPORTED_TABLES = {
# These invitation/confirmation flow tables don't make sense to
# export, since invitations links will be broken by the server URL
# change anyway:
'zerver_emailchangestatus',
'zerver_multiuseinvite',
'zerver_multiuseinvite_streams',
'zerver_preregistrationuser',
'zerver_preregistrationuser_streams',
# When switching servers, clients will need to re-login and
# reregister for push notifications anyway.
'zerver_pushdevicetoken',
# We don't use these generated Django tables
'zerver_userprofile_groups',
'zerver_userprofile_user_permissions',
    # These are used for scheduling future activity; it could make
# sense to export, but is relatively low value.
'zerver_scheduledemail',
'zerver_scheduledmessage',
# These tables are related to a user's 2FA authentication
# configuration, which will need to be re-setup on the new server.
'two_factor_phonedevice',
'otp_static_staticdevice',
'otp_static_statictoken',
'otp_totp_totpdevice',
# These archive tables should not be exported (they are to support
# restoring content accidentally deleted due to software bugs in
# the retention policy feature)
'zerver_archivedmessage',
'zerver_archivedusermessage',
'zerver_archivedattachment',
'zerver_archivedattachment_messages',
# Social auth tables are not needed post-export, since we don't
# use any of this state outside of a direct authentication flow.
'social_auth_association',
'social_auth_code',
'social_auth_nonce',
'social_auth_partial',
'social_auth_usersocialauth',
# We will likely never want to migrate this table, since it's a
# total of all the realmcount values on the server. Might need to
# recompute it after a fillstate import.
'analytics_installationcount',
# Fillstate will require some cleverness to do the right partial export.
'analytics_fillstate',
    # These are for unfinished features; we'll want to add them to the
# export before they reach full production status.
'zerver_defaultstreamgroup',
'zerver_defaultstreamgroup_streams',
'zerver_submessage',
# This is low priority, since users can easily just reset themselves to away.
'zerver_userstatus',
# For any tables listed below here, it's a bug that they are not present in the export.
}
IMPLICIT_TABLES = {
# ManyToMany relationships are exported implicitly.
'zerver_attachment_messages',
}
ATTACHMENT_TABLES = {
'zerver_attachment',
}
MESSAGE_TABLES = {
# message tables get special treatment, because they're so big
'zerver_message',
'zerver_usermessage',
# zerver_reaction belongs here, since it's added late
'zerver_reaction',
}
DATE_FIELDS = {
'zerver_attachment': ['create_time'],
'zerver_message': ['last_edit_time', 'pub_date'],
'zerver_realm': ['date_created'],
'zerver_stream': ['date_created'],
'zerver_useractivity': ['last_visit'],
'zerver_useractivityinterval': ['start', 'end'],
'zerver_userpresence': ['timestamp'],
'zerver_userprofile': ['date_joined', 'last_login', 'last_reminder'],
'zerver_realmauditlog': ['event_time'],
'zerver_userhotspot': ['timestamp'],
'analytics_installationcount': ['end_time'],
'analytics_realmcount': ['end_time'],
'analytics_usercount': ['end_time'],
'analytics_streamcount': ['end_time'],
} # type: Dict[TableName, List[Field]]
def sanity_check_output(data: TableData) -> None:
# First, we verify that the export tool has a declared
# configuration for every table.
target_models = (
list(apps.get_app_config('analytics').get_models(include_auto_created=True)) +
list(apps.get_app_config('django_otp').get_models(include_auto_created=True)) +
list(apps.get_app_config('otp_static').get_models(include_auto_created=True)) +
list(apps.get_app_config('otp_totp').get_models(include_auto_created=True)) +
list(apps.get_app_config('social_django').get_models(include_auto_created=True)) +
list(apps.get_app_config('two_factor').get_models(include_auto_created=True)) +
list(apps.get_app_config('zerver').get_models(include_auto_created=True))
)
all_tables_db = set(model._meta.db_table for model in target_models)
# These assertion statements will fire when we add a new database
# table that is not included in Zulip's data exports. Generally,
# you can add your new table to `ALL_ZULIP_TABLES` and
# `NON_EXPORTED_TABLES` during early work on a new feature so that
# CI passes.
#
# We'll want to make sure we handle it for exports before
# releasing the new feature, but doing so correctly requires some
# expertise on this export system.
assert ALL_ZULIP_TABLES == all_tables_db
assert NON_EXPORTED_TABLES.issubset(ALL_ZULIP_TABLES)
assert IMPLICIT_TABLES.issubset(ALL_ZULIP_TABLES)
assert ATTACHMENT_TABLES.issubset(ALL_ZULIP_TABLES)
tables = set(ALL_ZULIP_TABLES)
tables -= NON_EXPORTED_TABLES
tables -= IMPLICIT_TABLES
tables -= MESSAGE_TABLES
tables -= ATTACHMENT_TABLES
for table in tables:
if table not in data:
logging.warning('??? NO DATA EXPORTED FOR TABLE %s!!!' % (table,))
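The bookkeeping in `sanity_check_output` above is plain set arithmetic — start from every known table, subtract the deliberately skipped sets, and warn about whatever remains unexported. In miniature:

```python
ALL_TABLES = {"a", "b", "c", "d", "e"}
NON_EXPORTED = {"d"}     # deliberately skipped
IMPLICIT = {"e"}         # exported implicitly via M2M relations

exported = {"a": [], "b": []}     # pretend table "c" was forgotten

expected = ALL_TABLES - NON_EXPORTED - IMPLICIT
missing = sorted(t for t in expected if t not in exported)
```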
def write_data_to_file(output_file: Path, data: Any) -> None:
with open(output_file, "w") as f:
f.write(ujson.dumps(data, indent=4))
def make_raw(query: Any, exclude: Optional[List[Field]]=None) -> List[Record]:
'''
Takes a Django query and returns a JSONable list
of dictionaries corresponding to the database rows.
'''
rows = []
for instance in query:
data = model_to_dict(instance, exclude=exclude)
"""
In Django 1.11.5, model_to_dict evaluates the QuerySet of
many-to-many field to give us a list of instances. We require
a list of primary keys, so we get the primary keys from the
instances below.
"""
for field in instance._meta.many_to_many:
value = data[field.name]
data[field.name] = [row.id for row in value]
rows.append(data)
return rows
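# The many-to-many flattening in make_raw can be sketched without Django.
# FakeRow and flatten_m2m below are hypothetical stand-ins for a model
# instance and for the post-model_to_dict loop above; they are not part
# of this module.

```python
class FakeRow:
    """Minimal stand-in for a Django model instance with a primary key."""
    def __init__(self, id: int) -> None:
        self.id = id

def flatten_m2m(data: dict, m2m_fields: list) -> dict:
    # Replace each list of related instances with a list of primary
    # keys, mirroring what make_raw does after model_to_dict.
    for field in m2m_fields:
        data[field] = [row.id for row in data[field]]
    return data

record = {'name': 'attachment', 'messages': [FakeRow(7), FakeRow(9)]}
assert flatten_m2m(record, ['messages']) == {'name': 'attachment', 'messages': [7, 9]}
```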
def floatify_datetime_fields(data: TableData, table: TableName) -> None:
for item in data[table]:
for field in DATE_FIELDS[table]:
orig_dt = item[field]
if orig_dt is None:
continue
if timezone_is_naive(orig_dt):
logging.warning("Naive datetime: %s" % (item,))
dt = timezone_make_aware(orig_dt)
else:
dt = orig_dt
utc_naive = dt.replace(tzinfo=None) - dt.utcoffset()
item[field] = (utc_naive - datetime.datetime(1970, 1, 1)).total_seconds()
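# The epoch conversion above can be exercised in isolation with the
# standard library; to_epoch_seconds is a hypothetical helper that
# repeats the same utc_naive arithmetic, not part of this module.

```python
import datetime

def to_epoch_seconds(dt: datetime.datetime) -> float:
    # Same arithmetic as floatify_datetime_fields: drop the tzinfo,
    # subtract the UTC offset, then measure from the UNIX epoch.
    utc_naive = dt.replace(tzinfo=None) - dt.utcoffset()
    return (utc_naive - datetime.datetime(1970, 1, 1)).total_seconds()

# An aware datetime at the epoch itself maps to 0.0.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
assert to_epoch_seconds(epoch) == 0.0
```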
class Config:
'''
A Config object configures a single table for exporting (and,
maybe some day, importing as well).
You should never mutate Config objects as part of the export;
instead use the data to determine how you populate other
data structures.
There are parent/children relationships between Config objects.
The parent should be instantiated first. The child will
append itself to the parent's list of children.
'''
def __init__(self, table: Optional[str]=None,
model: Optional[Any]=None,
normal_parent: Optional['Config']=None,
virtual_parent: Optional['Config']=None,
filter_args: Optional[FilterArgs]=None,
custom_fetch: Optional[CustomFetch]=None,
custom_tables: Optional[List[TableName]]=None,
post_process_data: Optional[PostProcessData]=None,
concat_and_destroy: Optional[List[TableName]]=None,
id_source: Optional[IdSource]=None,
source_filter: Optional[SourceFilter]=None,
parent_key: Optional[Field]=None,
use_all: bool=False,
is_seeded: bool=False,
exclude: Optional[List[Field]]=None) -> None:
assert table or custom_tables
self.table = table
self.model = model
self.normal_parent = normal_parent
self.virtual_parent = virtual_parent
self.filter_args = filter_args
self.parent_key = parent_key
self.use_all = use_all
self.is_seeded = is_seeded
self.exclude = exclude
self.custom_fetch = custom_fetch
self.custom_tables = custom_tables
self.post_process_data = post_process_data
self.concat_and_destroy = concat_and_destroy
self.id_source = id_source
self.source_filter = source_filter
self.children = [] # type: List[Config]
if normal_parent is not None:
self.parent = normal_parent # type: Optional[Config]
else:
self.parent = None
if virtual_parent is not None and normal_parent is not None:
raise AssertionError('''
If you specify a normal_parent, please
do not create a virtual_parent.
''')
if normal_parent is not None:
normal_parent.children.append(self)
elif virtual_parent is not None:
virtual_parent.children.append(self)
elif not is_seeded:
raise AssertionError('''
You must specify a parent if you are
not using is_seeded.
''')
if self.id_source is not None:
if self.virtual_parent is None:
raise AssertionError('''
You must specify a virtual_parent if you are
using id_source.''')
if self.id_source[0] != self.virtual_parent.table:
raise AssertionError('''
Configuration error. To populate %s, you
want data from %s, but that differs from
the table name of your virtual parent (%s),
which suggests you may not have set up
the ordering correctly. You may simply
need to assign a virtual_parent, or there
may be deeper issues going on.''' % (
self.table,
self.id_source[0],
self.virtual_parent.table))
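# The parent/child registration above can be reduced to a few lines:
# at construction time, a child appends itself to its parent's
# children list, which is what makes declaration order significant.
# MiniConfig is a stripped-down sketch, not part of this module.

```python
class MiniConfig:
    """Sketch of Config's parent/child wiring: children register
    themselves with the parent when they are constructed."""
    def __init__(self, table: str, parent: 'MiniConfig' = None) -> None:
        self.table = table
        self.children = []
        if parent is not None:
            parent.children.append(self)

root = MiniConfig('zerver_realm')
MiniConfig('zerver_defaultstream', parent=root)
MiniConfig('zerver_client', parent=root)
# Children are visited in declaration order when the tree is walked.
assert [c.table for c in root.children] == ['zerver_defaultstream', 'zerver_client']
```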
def export_from_config(response: TableData, config: Config, seed_object: Optional[Any]=None,
context: Optional[Context]=None) -> None:
table = config.table
parent = config.parent
model = config.model
if context is None:
context = {}
if table:
exported_tables = [table]
else:
if config.custom_tables is None:
raise AssertionError('''
You must specify config.custom_tables if you
are not specifying config.table''')
exported_tables = config.custom_tables
for t in exported_tables:
logging.info('Exporting via export_from_config: %s' % (t,))
rows = None
if config.is_seeded:
rows = [seed_object]
elif config.custom_fetch:
config.custom_fetch(
response=response,
config=config,
context=context
)
if config.custom_tables:
for t in config.custom_tables:
if t not in response:
raise AssertionError('Custom fetch failed to populate %s' % (t,))
elif config.concat_and_destroy:
# When we concat_and_destroy, we are working with
# temporary "tables" that are lists of records that
# should already be ready to export.
data = [] # type: List[Record]
for t in config.concat_and_destroy:
data += response[t]
del response[t]
logging.info('Deleted temporary %s' % (t,))
assert table is not None
response[table] = data
elif config.use_all:
assert model is not None
query = model.objects.all()
rows = list(query)
elif config.normal_parent:
# In this mode, our current model is figuratively Article,
# and normal_parent is figuratively Blog, and
# now we just need to get all the articles
# contained by the blogs.
model = config.model
assert parent is not None
assert parent.table is not None
assert config.parent_key is not None
parent_ids = [r['id'] for r in response[parent.table]]
filter_parms = {config.parent_key: parent_ids} # type: Dict[str, Any]
if config.filter_args is not None:
filter_parms.update(config.filter_args)
assert model is not None
query = model.objects.filter(**filter_parms)
rows = list(query)
elif config.id_source:
# In this mode, we are the figurative Blog, and we now
# need to look at the current response to get all the
# blog ids from the Article rows we fetched previously.
model = config.model
assert model is not None
# This will be a tuple of the form ('zerver_article', 'blog').
(child_table, field) = config.id_source
child_rows = response[child_table]
if config.source_filter:
child_rows = [r for r in child_rows if config.source_filter(r)]
lookup_ids = [r[field] for r in child_rows]
filter_parms = dict(id__in=lookup_ids)
if config.filter_args:
filter_parms.update(config.filter_args)
query = model.objects.filter(**filter_parms)
rows = list(query)
# Post-process rows (which won't apply to custom fetches/concats)
if rows is not None:
assert table is not None # Hint for mypy
response[table] = make_raw(rows, exclude=config.exclude)
if table in DATE_FIELDS:
floatify_datetime_fields(response, table)
if config.post_process_data:
config.post_process_data(
response=response,
config=config,
context=context
)
# Now walk our children. It's extremely important to respect
# the order of children here.
for child_config in config.children:
export_from_config(
response=response,
config=child_config,
context=context,
)
def get_realm_config() -> Config:
# This is common, public information about the realm that we can share
# with all realm users.
realm_config = Config(
table='zerver_realm',
is_seeded=True
)
Config(
table='zerver_defaultstream',
model=DefaultStream,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='zerver_customprofilefield',
model=CustomProfileField,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='zerver_realmemoji',
model=RealmEmoji,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='zerver_realmdomain',
model=RealmDomain,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='zerver_realmfilter',
model=RealmFilter,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='zerver_client',
model=Client,
virtual_parent=realm_config,
use_all=True
)
user_profile_config = Config(
custom_tables=[
'zerver_userprofile',
'zerver_userprofile_mirrordummy',
],
# set table for children that treat us as a normal parent
table='zerver_userprofile',
virtual_parent=realm_config,
custom_fetch=fetch_user_profile,
)
user_groups_config = Config(
table='zerver_usergroup',
model=UserGroup,
normal_parent=realm_config,
parent_key='realm__in',
)
Config(
table='zerver_usergroupmembership',
model=UserGroupMembership,
normal_parent=user_groups_config,
parent_key='user_group__in',
)
Config(
custom_tables=[
'zerver_userprofile_crossrealm',
],
virtual_parent=user_profile_config,
custom_fetch=fetch_user_profile_cross_realm,
)
Config(
table='zerver_userpresence',
model=UserPresence,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_customprofilefieldvalue',
model=CustomProfileFieldValue,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_useractivity',
model=UserActivity,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_useractivityinterval',
model=UserActivityInterval,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_realmauditlog',
model=RealmAuditLog,
normal_parent=user_profile_config,
parent_key='modified_user__in',
)
Config(
table='zerver_userhotspot',
model=UserHotspot,
normal_parent=user_profile_config,
parent_key='user__in',
)
Config(
table='zerver_mutedtopic',
model=MutedTopic,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_service',
model=Service,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
Config(
table='zerver_botstoragedata',
model=BotStorageData,
normal_parent=user_profile_config,
parent_key='bot_profile__in',
)
Config(
table='zerver_botconfigdata',
model=BotConfigData,
normal_parent=user_profile_config,
parent_key='bot_profile__in',
)
# Some of these tables are intermediate "tables" that we
# create only for the export. Think of them as similar to views.
user_subscription_config = Config(
table='_user_subscription',
model=Subscription,
normal_parent=user_profile_config,
filter_args={'recipient__type': Recipient.PERSONAL},
parent_key='user_profile__in',
)
Config(
table='_user_recipient',
model=Recipient,
virtual_parent=user_subscription_config,
id_source=('_user_subscription', 'recipient'),
)
#
stream_subscription_config = Config(
table='_stream_subscription',
model=Subscription,
normal_parent=user_profile_config,
filter_args={'recipient__type': Recipient.STREAM},
parent_key='user_profile__in',
)
stream_recipient_config = Config(
table='_stream_recipient',
model=Recipient,
virtual_parent=stream_subscription_config,
id_source=('_stream_subscription', 'recipient'),
)
Config(
table='zerver_stream',
model=Stream,
virtual_parent=stream_recipient_config,
id_source=('_stream_recipient', 'type_id'),
source_filter=lambda r: r['type'] == Recipient.STREAM,
exclude=['email_token'],
post_process_data=sanity_check_stream_data
)
#
Config(
custom_tables=[
'_huddle_recipient',
'_huddle_subscription',
'zerver_huddle',
],
normal_parent=user_profile_config,
custom_fetch=fetch_huddle_objects,
)
# Now build permanent tables from our temp tables.
Config(
table='zerver_recipient',
virtual_parent=user_profile_config,
concat_and_destroy=[
'_user_recipient',
'_stream_recipient',
'_huddle_recipient',
],
)
Config(
table='zerver_subscription',
virtual_parent=user_profile_config,
concat_and_destroy=[
'_user_subscription',
'_stream_subscription',
'_huddle_subscription',
]
)
# Analytics tables
Config(
table='analytics_realmcount',
model=RealmCount,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='analytics_usercount',
model=UserCount,
normal_parent=realm_config,
parent_key='realm_id__in',
)
Config(
table='analytics_streamcount',
model=StreamCount,
normal_parent=realm_config,
parent_key='realm_id__in',
)
return realm_config
def sanity_check_stream_data(response: TableData, config: Config, context: Context) -> None:
if context['exportable_user_ids'] is not None:
# If we restrict which user ids are exportable,
# the way that we find streams is a little too
# complex to have a sanity check.
return
actual_streams = set([stream.name for stream in Stream.objects.filter(
realm=response["zerver_realm"][0]['id'])])
streams_in_response = set([stream['name'] for stream in response['zerver_stream']])
if len(streams_in_response - actual_streams) > 0:
print("Error: Streams not present in the realm were exported:")
print(" ", streams_in_response - actual_streams)
print("This is likely due to a bug in the export tool.")
raise AssertionError("Aborting! Please investigate.")
if len(actual_streams - streams_in_response) > 0:
print("Error: Some streams present in the realm were not exported:")
print(" ", actual_streams - streams_in_response)
print("Usually, this is caused by a stream having been created that never had subscribers.")
print("(Due to a bug elsewhere in Zulip, not in the export tool)")
raise AssertionError("Aborting! Please investigate.")
def fetch_user_profile(response: TableData, config: Config, context: Context) -> None:
realm = context['realm']
exportable_user_ids = context['exportable_user_ids']
query = UserProfile.objects.filter(realm_id=realm.id)
exclude = ['password', 'api_key']
rows = make_raw(list(query), exclude=exclude)
normal_rows = [] # type: List[Record]
dummy_rows = [] # type: List[Record]
for row in rows:
if exportable_user_ids is not None:
if row['id'] in exportable_user_ids:
assert not row['is_mirror_dummy']
else:
# Convert non-exportable users to
# inactive is_mirror_dummy users.
row['is_mirror_dummy'] = True
row['is_active'] = False
if row['is_mirror_dummy']:
dummy_rows.append(row)
else:
normal_rows.append(row)
response['zerver_userprofile'] = normal_rows
response['zerver_userprofile_mirrordummy'] = dummy_rows
def fetch_user_profile_cross_realm(response: TableData, config: Config, context: Context) -> None:
realm = context['realm']
response['zerver_userprofile_crossrealm'] = []
if realm.string_id == settings.SYSTEM_BOT_REALM:
return
for bot_user in [
get_system_bot(settings.NOTIFICATION_BOT),
get_system_bot(settings.EMAIL_GATEWAY_BOT),
get_system_bot(settings.WELCOME_BOT),
]:
recipient_id = Recipient.objects.get(type_id=bot_user.id, type=Recipient.PERSONAL).id
response['zerver_userprofile_crossrealm'].append(dict(
email=bot_user.email,
id=bot_user.id,
recipient_id=recipient_id,
))
def fetch_attachment_data(response: TableData, realm_id: int, message_ids: Set[int]) -> None:
filter_args = {'realm_id': realm_id}
query = Attachment.objects.filter(**filter_args)
response['zerver_attachment'] = make_raw(list(query))
floatify_datetime_fields(response, 'zerver_attachment')
'''
We usually export most messages for the realm, but not
quite ALL messages for the realm. So, we need to
clean up our attachment data to have correct
values for response['zerver_attachment'][<n>]['messages'].
'''
for row in response['zerver_attachment']:
filtered_message_ids = set(row['messages']).intersection(message_ids)
row['messages'] = sorted(list(filtered_message_ids))
'''
Attachments can be connected to multiple messages, although
it's most common to have just one message. Regardless,
if none of those message(s) survived the filtering above
for a particular attachment, then we won't export the
attachment row.
'''
response['zerver_attachment'] = [
row for row in response['zerver_attachment']
if row['messages']]
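# The two-step attachment cleanup above (intersect each row's message
# ids with the exported set, then drop rows left with no messages) can
# be sketched on plain dicts; prune_attachment_messages is a
# hypothetical helper, not part of this module.

```python
def prune_attachment_messages(attachment_rows: list, exported_message_ids: set) -> list:
    # Keep only exported message ids on each row, and drop rows whose
    # messages were all filtered out, mirroring the logic above.
    pruned = []
    for row in attachment_rows:
        surviving = set(row['messages']) & exported_message_ids
        if surviving:
            pruned.append(dict(row, messages=sorted(surviving)))
    return pruned

rows = [
    {'id': 1, 'messages': [10, 11]},
    {'id': 2, 'messages': [99]},   # no surviving messages: dropped
]
assert prune_attachment_messages(rows, {10, 11}) == [{'id': 1, 'messages': [10, 11]}]
```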
def fetch_reaction_data(response: TableData, message_ids: Set[int]) -> None:
query = Reaction.objects.filter(message_id__in=list(message_ids))
response['zerver_reaction'] = make_raw(list(query))
def fetch_huddle_objects(response: TableData, config: Config, context: Context) -> None:
realm = context['realm']
assert config.parent is not None
assert config.parent.table is not None
user_profile_ids = set(r['id'] for r in response[config.parent.table])
# First we get all huddles involving someone in the realm.
realm_huddle_subs = Subscription.objects.select_related("recipient").filter(
recipient__type=Recipient.HUDDLE, user_profile__in=user_profile_ids)
realm_huddle_recipient_ids = set(sub.recipient_id for sub in realm_huddle_subs)
# Mark all Huddles whose recipient ID contains a cross-realm user.
unsafe_huddle_recipient_ids = set()
for sub in Subscription.objects.select_related().filter(recipient__in=realm_huddle_recipient_ids):
if sub.user_profile.realm != realm:
# In almost every case the other realm will be zulip.com
unsafe_huddle_recipient_ids.add(sub.recipient_id)
# Now filter down to just those huddles that are entirely within the realm.
#
# This is important for ensuring that the User objects needed
# to import it on the other end exist (since we're only
# exporting the users from this realm), at the cost of losing
# some of these cross-realm messages.
huddle_subs = [sub for sub in realm_huddle_subs if sub.recipient_id not in unsafe_huddle_recipient_ids]
huddle_recipient_ids = set(sub.recipient_id for sub in huddle_subs)
huddle_ids = set(sub.recipient.type_id for sub in huddle_subs)
huddle_subscription_dicts = make_raw(huddle_subs)
huddle_recipients = make_raw(Recipient.objects.filter(id__in=huddle_recipient_ids))
response['_huddle_recipient'] = huddle_recipients
response['_huddle_subscription'] = huddle_subscription_dicts
response['zerver_huddle'] = make_raw(Huddle.objects.filter(id__in=huddle_ids))
def fetch_usermessages(realm: Realm,
message_ids: Set[int],
user_profile_ids: Set[int],
message_filename: Path) -> List[Record]:
# UserMessage export security rule: You can export UserMessages
# for the messages you exported for the users in your realm.
user_message_query = UserMessage.objects.filter(user_profile__realm=realm,
message_id__in=message_ids)
user_message_chunk = []
for user_message in user_message_query:
if user_message.user_profile_id not in user_profile_ids:
continue
user_message_obj = model_to_dict(user_message)
user_message_obj['flags_mask'] = user_message.flags.mask
del user_message_obj['flags']
user_message_chunk.append(user_message_obj)
logging.info("Fetched UserMessages for %s" % (message_filename,))
return user_message_chunk
def export_usermessages_batch(input_path: Path, output_path: Path) -> None:
"""As part of the system for doing parallel exports, this runs on one
batch of Message objects and adds the corresponding UserMessage
objects. (This is called by the export_usermessage_batch
management command)."""
with open(input_path, "r") as input_file:
output = ujson.loads(input_file.read())
message_ids = [item['id'] for item in output['zerver_message']]
user_profile_ids = set(output['zerver_userprofile_ids'])
del output['zerver_userprofile_ids']
realm = Realm.objects.get(id=output['realm_id'])
del output['realm_id']
output['zerver_usermessage'] = fetch_usermessages(realm, set(message_ids), user_profile_ids, output_path)
write_message_export(output_path, output)
os.unlink(input_path)
def write_message_export(message_filename: Path, output: MessageOutput) -> None:
write_data_to_file(output_file=message_filename, data=output)
logging.info("Dumped to %s" % (message_filename,))
def export_partial_message_files(realm: Realm,
response: TableData,
chunk_size: int=MESSAGE_BATCH_CHUNK_SIZE,
output_dir: Optional[Path]=None,
public_only: bool=False) -> Set[int]:
if output_dir is None:
output_dir = tempfile.mkdtemp(prefix="zulip-export")
def get_ids(records: List[Record]) -> Set[int]:
return set(x['id'] for x in records)
# Basic security rule: You can export everything either...
# - sent by someone in your exportable_user_ids
# OR
# - received by someone in your exportable_user_ids (which
# equates to a recipient object we are exporting)
#
# TODO: In theory, you should be able to export messages in
# cross-realm PM threads; currently, this only exports cross-realm
# messages received by your realm that were sent by Zulip system
# bots (e.g. emailgateway, notification-bot).
# Here, "we" and "us" refers to the inner circle of users who
# were specified as being allowed to be exported. "Them"
# refers to other users.
user_ids_for_us = get_ids(
response['zerver_userprofile']
)
ids_of_our_possible_senders = get_ids(
response['zerver_userprofile'] +
response['zerver_userprofile_mirrordummy'] +
response['zerver_userprofile_crossrealm'])
if public_only:
recipient_streams = Stream.objects.filter(realm=realm, invite_only=False)
recipient_ids = Recipient.objects.filter(
type=Recipient.STREAM, type_id__in=recipient_streams).values_list("id", flat=True)
recipient_ids_for_us = get_ids(response['zerver_recipient']) & set(recipient_ids)
else:
recipient_ids_for_us = get_ids(response['zerver_recipient'])
# We capture most messages here, since the
# recipients we subscribe to are also the
# recipients of most messages we send.
messages_we_received = Message.objects.filter(
sender__in=ids_of_our_possible_senders,
recipient__in=recipient_ids_for_us,
).order_by('id')
if public_only:
# For the public stream export, we only need the messages those streams received.
message_queries = [
messages_we_received,
]
else:
# This should pick up stragglers; messages we sent
# where the recipient wasn't subscribed to by any of
# us (such as PMs to "them").
ids_of_non_exported_possible_recipients = ids_of_our_possible_senders - user_ids_for_us
recipients_for_them = Recipient.objects.filter(
type=Recipient.PERSONAL,
type_id__in=ids_of_non_exported_possible_recipients).values("id")
recipient_ids_for_them = get_ids(recipients_for_them)
messages_we_sent_to_them = Message.objects.filter(
sender__in=user_ids_for_us,
recipient__in=recipient_ids_for_them,
).order_by('id')
message_queries = [
messages_we_received,
messages_we_sent_to_them,
]
all_message_ids = set() # type: Set[int]
dump_file_id = 1
for message_query in message_queries:
dump_file_id = write_message_partial_for_query(
realm=realm,
message_query=message_query,
dump_file_id=dump_file_id,
all_message_ids=all_message_ids,
output_dir=output_dir,
user_profile_ids=user_ids_for_us,
chunk_size=chunk_size,
)
return all_message_ids
def write_message_partial_for_query(realm: Realm, message_query: Any, dump_file_id: int,
all_message_ids: Set[int], output_dir: Path,
user_profile_ids: Set[int],
chunk_size: int=MESSAGE_BATCH_CHUNK_SIZE) -> int:
min_id = -1
while True:
actual_query = message_query.filter(id__gt=min_id)[0:chunk_size]
message_chunk = make_raw(actual_query)
message_ids = set(m['id'] for m in message_chunk)
assert len(message_ids.intersection(all_message_ids)) == 0
all_message_ids.update(message_ids)
if len(message_chunk) == 0:
break
# Figure out the name of our shard file.
message_filename = os.path.join(output_dir, "messages-%06d.json" % (dump_file_id,))
message_filename += '.partial'
logging.info("Fetched Messages for %s" % (message_filename,))
# Clean up our messages.
table_data = {} # type: TableData
table_data['zerver_message'] = message_chunk
floatify_datetime_fields(table_data, 'zerver_message')
# Build up our output for the .partial file, which needs
# a list of user_profile_ids to search for (as well as
# the realm id).
output = {} # type: MessageOutput
output['zerver_message'] = table_data['zerver_message']
output['zerver_userprofile_ids'] = list(user_profile_ids)
output['realm_id'] = realm.id
# And write the data.
write_message_export(message_filename, output)
min_id = max(message_ids)
dump_file_id += 1
return dump_file_id
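# The id > min_id chunking loop above is keyset pagination. It can be
# sketched over a plain list of dicts; chunk_by_id is a hypothetical
# stand-in for the Django queryset filter, not part of this module.

```python
def chunk_by_id(rows: list, chunk_size: int):
    # Yield id-ordered chunks using the same "id greater than the last
    # chunk's max id" pattern as write_message_partial_for_query.
    min_id = -1
    while True:
        chunk = sorted(
            (r for r in rows if r['id'] > min_id), key=lambda r: r['id']
        )[:chunk_size]
        if not chunk:
            break
        yield chunk
        min_id = max(r['id'] for r in chunk)

rows = [{'id': i} for i in (3, 1, 2, 5, 4)]
chunks = [[r['id'] for r in c] for c in chunk_by_id(rows, 2)]
assert chunks == [[1, 2], [3, 4], [5]]
```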
def export_uploads_and_avatars(realm: Realm, output_dir: Path) -> None:
uploads_output_dir = os.path.join(output_dir, 'uploads')
avatars_output_dir = os.path.join(output_dir, 'avatars')
emoji_output_dir = os.path.join(output_dir, 'emoji')
for output_dir in (uploads_output_dir, avatars_output_dir, emoji_output_dir):
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if settings.LOCAL_UPLOADS_DIR:
# Small installations and developers will usually just store files locally.
export_uploads_from_local(realm,
local_dir=os.path.join(settings.LOCAL_UPLOADS_DIR, "files"),
output_dir=uploads_output_dir)
export_avatars_from_local(realm,
local_dir=os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars"),
output_dir=avatars_output_dir)
export_emoji_from_local(realm,
local_dir=os.path.join(settings.LOCAL_UPLOADS_DIR, "avatars"),
output_dir=emoji_output_dir)
else:
# Some bigger installations will have their data stored on S3.
export_files_from_s3(realm,
settings.S3_AVATAR_BUCKET,
output_dir=avatars_output_dir,
processing_avatars=True)
export_files_from_s3(realm,
settings.S3_AUTH_UPLOADS_BUCKET,
output_dir=uploads_output_dir)
export_files_from_s3(realm,
settings.S3_AVATAR_BUCKET,
output_dir=emoji_output_dir,
processing_emoji=True)
def _check_key_metadata(email_gateway_bot: Optional[UserProfile],
key: Key, processing_avatars: bool,
realm: Realm, user_ids: Set[int]) -> None:
# Helper function for export_files_from_s3
if 'realm_id' in key.metadata and key.metadata['realm_id'] != str(realm.id):
if email_gateway_bot is None or key.metadata['user_profile_id'] != str(email_gateway_bot.id):
raise AssertionError("Key metadata problem: %s %s / %s" % (key.name, key.metadata, realm.id))
# Email gateway bot sends messages, potentially including attachments, cross-realm.
print("File uploaded by email gateway bot: %s / %s" % (key.name, key.metadata))
elif processing_avatars:
if 'user_profile_id' not in key.metadata:
raise AssertionError("Missing user_profile_id in key metadata: %s" % (key.metadata,))
if int(key.metadata['user_profile_id']) not in user_ids:
raise AssertionError("Wrong user_profile_id in key metadata: %s" % (key.metadata,))
elif 'realm_id' not in key.metadata:
raise AssertionError("Missing realm_id in key metadata: %s" % (key.metadata,))
def _get_exported_s3_record(
bucket_name: str,
key: Key,
processing_avatars: bool,
processing_emoji: bool) -> Dict[str, Union[str, int]]:
# Helper function for export_files_from_s3
record = dict(s3_path=key.name, bucket=bucket_name,
size=key.size, last_modified=key.last_modified,
content_type=key.content_type, md5=key.md5)
record.update(key.metadata)
if processing_emoji:
record['file_name'] = os.path.basename(key.name)
# A few early avatars don't have 'realm_id' on the object; fix their metadata
user_profile = get_user_profile_by_id(record['user_profile_id'])
if 'realm_id' not in record:
record['realm_id'] = user_profile.realm_id
record['user_profile_email'] = user_profile.email
# Fix the record ids
record['user_profile_id'] = int(record['user_profile_id'])
record['realm_id'] = int(record['realm_id'])
return record
def _save_s3_object_to_file(
key: Key,
output_dir: str,
processing_avatars: bool,
processing_emoji: bool) -> None:
# Helper function for export_files_from_s3
if processing_avatars or processing_emoji:
filename = os.path.join(output_dir, key.name)
else:
fields = key.name.split('/')
if len(fields) != 3:
raise AssertionError("Suspicious key with invalid format %s" % (key.name,))
filename = os.path.join(output_dir, key.name)
dirname = os.path.dirname(filename)
if not os.path.exists(dirname):
os.makedirs(dirname)
key.get_contents_to_filename(filename)
def export_files_from_s3(realm: Realm, bucket_name: str, output_dir: Path,
processing_avatars: bool=False,
processing_emoji: bool=False) -> None:
conn = S3Connection(settings.S3_KEY, settings.S3_SECRET_KEY)
bucket = conn.get_bucket(bucket_name, validate=True)
records = []
logging.info("Downloading uploaded files from %s" % (bucket_name,))
avatar_hash_values = set()
user_ids = set()
if processing_avatars:
bucket_list = bucket.list()
for user_profile in UserProfile.objects.filter(realm=realm):
avatar_path = user_avatar_path_from_ids(user_profile.id, realm.id)
avatar_hash_values.add(avatar_path)
avatar_hash_values.add(avatar_path + ".original")
user_ids.add(user_profile.id)
if processing_emoji:
bucket_list = bucket.list(prefix="%s/emoji/images/" % (realm.id,))
else:
bucket_list = bucket.list(prefix="%s/" % (realm.id,))
if settings.EMAIL_GATEWAY_BOT is not None:
email_gateway_bot = get_system_bot(settings.EMAIL_GATEWAY_BOT) # type: Optional[UserProfile]
else:
email_gateway_bot = None
count = 0
for bkey in bucket_list:
if processing_avatars and bkey.name not in avatar_hash_values:
continue
key = bucket.get_key(bkey.name)
# This can happen if an email address has moved realms
_check_key_metadata(email_gateway_bot, key, processing_avatars, realm, user_ids)
record = _get_exported_s3_record(bucket_name, key, processing_avatars, processing_emoji)
record['path'] = key.name
_save_s3_object_to_file(key, output_dir, processing_avatars, processing_emoji)
records.append(record)
count += 1
if (count % 100 == 0):
logging.info("Finished %s" % (count,))
with open(os.path.join(output_dir, "records.json"), "w") as records_file:
ujson.dump(records, records_file, indent=4)
def export_uploads_from_local(realm: Realm, local_dir: Path, output_dir: Path) -> None:
count = 0
records = []
for attachment in Attachment.objects.filter(realm_id=realm.id):
local_path = os.path.join(local_dir, attachment.path_id)
output_path = os.path.join(output_dir, attachment.path_id)
os.makedirs(os.path.dirname(output_path), exist_ok=True)
shutil.copy2(local_path, output_path)
stat = os.stat(local_path)
record = dict(realm_id=attachment.realm_id,
user_profile_id=attachment.owner.id,
user_profile_email=attachment.owner.email,
s3_path=attachment.path_id,
path=attachment.path_id,
size=stat.st_size,
last_modified=stat.st_mtime,
content_type=None)
records.append(record)
count += 1
if (count % 100 == 0):
logging.info("Finished %s" % (count,))
with open(os.path.join(output_dir, "records.json"), "w") as records_file:
ujson.dump(records, records_file, indent=4)
def export_avatars_from_local(realm: Realm, local_dir: Path, output_dir: Path) -> None:
count = 0
records = []
users = list(UserProfile.objects.filter(realm=realm))
users += [
get_system_bot(settings.NOTIFICATION_BOT),
get_system_bot(settings.EMAIL_GATEWAY_BOT),
get_system_bot(settings.WELCOME_BOT),
]
for user in users:
if user.avatar_source == UserProfile.AVATAR_FROM_GRAVATAR:
continue
avatar_path = user_avatar_path_from_ids(user.id, realm.id)
wildcard = os.path.join(local_dir, avatar_path + '.*')
for local_path in glob.glob(wildcard):
logging.info('Copying avatar file for user %s from %s' % (
user.email, local_path))
fn = os.path.relpath(local_path, local_dir)
output_path = os.path.join(output_dir, fn)
os.makedirs(str(os.path.dirname(output_path)), exist_ok=True)
shutil.copy2(str(local_path), str(output_path))
stat = os.stat(local_path)
record = dict(realm_id=realm.id,
user_profile_id=user.id,
user_profile_email=user.email,
s3_path=fn,
path=fn,
size=stat.st_size,
last_modified=stat.st_mtime,
content_type=None)
records.append(record)
count += 1
if (count % 100 == 0):
logging.info("Finished %s" % (count,))
with open(os.path.join(output_dir, "records.json"), "w") as records_file:
ujson.dump(records, records_file, indent=4)
def export_emoji_from_local(realm: Realm, local_dir: Path, output_dir: Path) -> None:
count = 0
records = []
for realm_emoji in RealmEmoji.objects.filter(realm_id=realm.id):
emoji_path = RealmEmoji.PATH_ID_TEMPLATE.format(
realm_id=realm.id,
emoji_file_name=realm_emoji.file_name
)
local_path = os.path.join(local_dir, emoji_path)
output_path = os.path.join(output_dir, emoji_path)
os.makedirs(os.path.dirname(output_path), exist_ok=True)
shutil.copy2(local_path, output_path)
record = dict(realm_id=realm.id,
author=realm_emoji.author.id,
path=emoji_path,
s3_path=emoji_path,
file_name=realm_emoji.file_name,
name=realm_emoji.name,
deactivated=realm_emoji.deactivated)
records.append(record)
count += 1
if (count % 100 == 0):
logging.info("Finished %s" % (count,))
with open(os.path.join(output_dir, "records.json"), "w") as records_file:
ujson.dump(records, records_file, indent=4)
def do_write_stats_file_for_realm_export(output_dir: Path) -> None:
stats_file = os.path.join(output_dir, 'stats.txt')
realm_file = os.path.join(output_dir, 'realm.json')
attachment_file = os.path.join(output_dir, 'attachment.json')
message_files = glob.glob(os.path.join(output_dir, 'messages-*.json'))
fns = sorted([attachment_file] + message_files + [realm_file])
logging.info('Writing stats file: %s\n' % (stats_file,))
with open(stats_file, 'w') as f:
for fn in fns:
f.write(os.path.basename(fn) + '\n')
payload = open(fn).read()
data = ujson.loads(payload)
for k in sorted(data):
f.write('%5d %s\n' % (len(data[k]), k))
f.write('\n')
avatar_file = os.path.join(output_dir, 'avatars/records.json')
uploads_file = os.path.join(output_dir, 'uploads/records.json')
for fn in [avatar_file, uploads_file]:
f.write(fn+'\n')
payload = open(fn).read()
data = ujson.loads(payload)
f.write('%5d records\n' % len(data))
f.write('\n')
def do_export_realm(realm: Realm, output_dir: Path, threads: int,
exportable_user_ids: Optional[Set[int]]=None,
public_only: bool=False) -> None:
response = {} # type: TableData
# We need at least one thread running to export
# UserMessage rows. The management command should
# enforce this for us.
if not settings.TEST_SUITE:
assert threads >= 1
realm_config = get_realm_config()
create_soft_link(source=output_dir, in_progress=True)
logging.info("Exporting data from get_realm_config()...")
export_from_config(
response=response,
config=realm_config,
seed_object=realm,
context=dict(realm=realm, exportable_user_ids=exportable_user_ids)
)
logging.info('...DONE with get_realm_config() data')
sanity_check_output(response)
logging.info("Exporting uploaded files and avatars")
export_uploads_and_avatars(realm, output_dir)
# We (sort of) export zerver_message rows here. We write
# them to .partial files that are subsequently fleshed out
# by parallel processes to add in zerver_usermessage data.
# This is for performance reasons, of course. Some installations
# have millions of messages.
logging.info("Exporting .partial files messages")
message_ids = export_partial_message_files(realm, response, output_dir=output_dir,
public_only=public_only)
    logging.info('%d messages were exported' % (len(message_ids),))
# zerver_reaction
zerver_reaction = {} # type: TableData
fetch_reaction_data(response=zerver_reaction, message_ids=message_ids)
response.update(zerver_reaction)
# Write realm data
export_file = os.path.join(output_dir, "realm.json")
write_data_to_file(output_file=export_file, data=response)
logging.info('Writing realm data to %s' % (export_file,))
# zerver_attachment
export_attachment_table(realm=realm, output_dir=output_dir, message_ids=message_ids)
# Start parallel jobs to export the UserMessage objects.
launch_user_message_subprocesses(threads=threads, output_dir=output_dir)
    logging.info("Finished exporting %s" % (realm.string_id,))
create_soft_link(source=output_dir, in_progress=False)
def export_attachment_table(realm: Realm, output_dir: Path, message_ids: Set[int]) -> None:
response = {} # type: TableData
fetch_attachment_data(response=response, realm_id=realm.id, message_ids=message_ids)
output_file = os.path.join(output_dir, "attachment.json")
logging.info('Writing attachment table data to %s' % (output_file,))
write_data_to_file(output_file=output_file, data=response)
def create_soft_link(source: Path, in_progress: bool=True) -> None:
is_done = not in_progress
if settings.DEVELOPMENT:
in_progress_link = os.path.join(settings.DEPLOY_ROOT, 'var', 'export-in-progress')
done_link = os.path.join(settings.DEPLOY_ROOT, 'var', 'export-most-recent')
else:
in_progress_link = '/home/zulip/export-in-progress'
done_link = '/home/zulip/export-most-recent'
if in_progress:
new_target = in_progress_link
else:
try:
os.remove(in_progress_link)
except FileNotFoundError:
pass
new_target = done_link
overwrite_symlink(source, new_target)
if is_done:
logging.info('See %s for output files' % (new_target,))
def launch_user_message_subprocesses(threads: int, output_dir: Path) -> None:
logging.info('Launching %d PARALLEL subprocesses to export UserMessage rows' % (threads,))
def run_job(shard: str) -> int:
subprocess.call(["./manage.py", 'export_usermessage_batch', '--path',
str(output_dir), '--thread', shard])
return 0
for (status, job) in run_parallel(run_job,
[str(x) for x in range(0, threads)],
threads=threads):
print("Shard %s finished, status %s" % (job, status))
def do_export_user(user_profile: UserProfile, output_dir: Path) -> None:
response = {} # type: TableData
export_single_user(user_profile, response)
export_file = os.path.join(output_dir, "user.json")
write_data_to_file(output_file=export_file, data=response)
logging.info("Exporting messages")
export_messages_single_user(user_profile, output_dir)
def export_single_user(user_profile: UserProfile, response: TableData) -> None:
config = get_single_user_config()
export_from_config(
response=response,
config=config,
seed_object=user_profile,
)
def get_single_user_config() -> Config:
# zerver_userprofile
user_profile_config = Config(
table='zerver_userprofile',
is_seeded=True,
exclude=['password', 'api_key'],
)
# zerver_subscription
subscription_config = Config(
table='zerver_subscription',
model=Subscription,
normal_parent=user_profile_config,
parent_key='user_profile__in',
)
# zerver_recipient
recipient_config = Config(
table='zerver_recipient',
model=Recipient,
virtual_parent=subscription_config,
id_source=('zerver_subscription', 'recipient'),
)
# zerver_stream
Config(
table='zerver_stream',
model=Stream,
virtual_parent=recipient_config,
id_source=('zerver_recipient', 'type_id'),
source_filter=lambda r: r['type'] == Recipient.STREAM,
exclude=['email_token'],
)
return user_profile_config
def export_messages_single_user(user_profile: UserProfile, output_dir: Path,
chunk_size: int=MESSAGE_BATCH_CHUNK_SIZE) -> None:
user_message_query = UserMessage.objects.filter(user_profile=user_profile).order_by("id")
min_id = -1
dump_file_id = 1
while True:
actual_query = user_message_query.select_related(
"message", "message__sending_client").filter(id__gt=min_id)[0:chunk_size]
user_message_chunk = [um for um in actual_query]
user_message_ids = set(um.id for um in user_message_chunk)
if len(user_message_chunk) == 0:
break
message_chunk = []
for user_message in user_message_chunk:
item = model_to_dict(user_message.message)
item['flags'] = user_message.flags_list()
item['flags_mask'] = user_message.flags.mask
# Add a few nice, human-readable details
item['sending_client_name'] = user_message.message.sending_client.name
item['display_recipient'] = get_display_recipient(user_message.message.recipient)
message_chunk.append(item)
message_filename = os.path.join(output_dir, "messages-%06d.json" % (dump_file_id,))
logging.info("Fetched Messages for %s" % (message_filename,))
output = {'zerver_message': message_chunk}
floatify_datetime_fields(output, 'zerver_message')
message_output = dict(output) # type: MessageOutput
write_message_export(message_filename, message_output)
min_id = max(user_message_ids)
dump_file_id += 1
| 37.426981 | 109 | 0.657784 |
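The `export_messages_single_user` loop above pages through UserMessage rows with an id cursor (keyset pagination): fetch rows with `id > min_id`, write one chunk file, then advance the cursor to the largest id seen. A minimal stand-alone sketch of the same pattern, with a plain list of dicts standing in for the Django queryset (the function name and data here are illustrative, not from Zulip):

```python
def paginate_by_id(rows, chunk_size):
    """Yield chunks of `rows` (dicts with an 'id' key) via keyset pagination.

    Mirrors the export loop: select id > cursor, take chunk_size rows,
    advance the cursor to the max id in the chunk, stop on an empty chunk.
    """
    min_id = -1
    while True:
        chunk = sorted((r for r in rows if r["id"] > min_id),
                       key=lambda r: r["id"])[:chunk_size]
        if not chunk:
            break
        yield chunk
        min_id = max(r["id"] for r in chunk)


rows = [{"id": i} for i in range(1, 8)]
chunks = list(paginate_by_id(rows, chunk_size=3))
```

Unlike OFFSET-based paging, the id-cursor approach stays fast on large tables because each query is an indexed range scan from the last seen id.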
1b8462fded4a6e6ea746d258648a856bc93dbd11 | 15,320 | py | Python | exp/tests/test_lab_views.py | sehgalvibhor/lookit-api | 7dad97a2c9a0342fb839b5b11beaa4e00af99e8c | ["MIT"] | null | null | null | exp/tests/test_lab_views.py | sehgalvibhor/lookit-api | 7dad97a2c9a0342fb839b5b11beaa4e00af99e8c | ["MIT"] | null | null | null | exp/tests/test_lab_views.py | sehgalvibhor/lookit-api | 7dad97a2c9a0342fb839b5b11beaa4e00af99e8c | ["MIT"] | null | null | null |
from django.test import Client, TestCase, override_settings
from django.urls import reverse
from django_dynamic_fixture import G
from guardian.shortcuts import assign_perm
from accounts.backends import TWO_FACTOR_AUTH_SESSION_KEY
from accounts.models import User
from studies.models import Lab, Study, StudyType
from studies.permissions import LabPermission
class Force2FAClient(Client):
"""For convenience, let's just pretend everyone is two-factor auth'd."""
@property
def session(self):
_session = super().session
_session[TWO_FACTOR_AUTH_SESSION_KEY] = True
return _session
# run celery .delay() tasks right away and propagate errors.
# Ideally to test celery tasks we would mock per
# https://docs.celeryproject.org/en/stable/userguide/testing.html
# but for these views the celery tasks are relatively unimportant and
# we're happy just checking there aren't errors when emails are sent.
@override_settings(CELERY_TASK_ALWAYS_EAGER=True)
@override_settings(CELERY_TASK_EAGER_PROPAGATES=True)
class LabViewsTestCase(TestCase):
def setUp(self):
self.client = Force2FAClient()
self.lab = G(Lab, name="ECCL", institution="MIT", approved_to_test=False)
self.lab2 = G(Lab, name="Second lab")
self.researcher = G(
User, is_active=True, is_researcher=True, given_name="Alice"
)
self.researcher_outside_lab = G(
User, is_active=True, is_researcher=True, given_name="Bobbington"
)
self.researcher_in_lab = G(
User, is_active=True, is_researcher=True, given_name="Candice"
)
self.lab.researchers.add(self.researcher)
self.lab.researchers.add(self.researcher_in_lab)
self.lab.member_group.user_set.add(self.researcher_in_lab)
self.lab.save()
self.study_type = G(StudyType, name="default", id=1)
self.study = G(
Study, creator=self.researcher, study_type=self.study_type, lab=self.lab
)
self.study.researcher_group.user_set.add(self.researcher_in_lab)
self.superuser = G(User, is_active=True, is_researcher=True, is_superuser=True)
self.participant = G(User, is_active=True)
self.create_lab_url = reverse("exp:lab-create")
self.lab_detail_url = reverse("exp:lab-detail", kwargs={"pk": self.lab.pk})
self.lab_members_url = reverse("exp:lab-members", kwargs={"pk": self.lab.pk})
self.lab_list_url = reverse("exp:lab-list")
self.lab2_request_url = reverse("exp:lab-request", kwargs={"pk": self.lab2.pk})
self.lab_update_url = reverse("exp:lab-edit", kwargs={"pk": self.lab.pk})
# Create lab view: can get as researcher
def testCanGetCreateLabViewAsResearcher(self):
self.client.force_login(self.researcher)
page = self.client.get(self.create_lab_url)
self.assertEqual(
page.status_code, 200, "Unable to get create lab view as researcher"
)
self.assertTemplateUsed(
page,
"studies/lab_create.html",
"Incorrect template used for displaying create lab form",
)
# Create lab view: can create new lab as researcher
def testCanCreateNewLabAsResearcher(self):
self.client.force_login(self.researcher)
post_data = {
"name": "New lab",
"principal_investigator_name": "Jane Smith",
"institution": "MIT",
"contact_email": "abc@def.org",
"contact_phone": "(123) 456-7890",
"lab_website": "https://mit.edu",
"description": "ABCDEFG",
"irb_contact_info": "how to reach the IRB",
}
page = self.client.post(self.create_lab_url, post_data)
self.assertEqual(page.status_code, 302, "Unable to create lab as researcher")
# Create lab view: cannot get as participant
    def testCannotGetCreateLabViewAsParticipant(self):
self.client.force_login(self.participant)
page = self.client.get(self.create_lab_url)
self.assertEqual(
page.status_code, 403, "Participant is able to create a new lab!"
)
# Lab detail view: can see as researcher
def testCanGetLabDetailViewAsResearcher(self):
self.client.force_login(self.researcher)
page = self.client.get(self.lab_detail_url)
self.assertEqual(
page.status_code, 200, "Unable to get lab detail view as researcher"
)
self.assertTemplateUsed(
page,
"studies/lab_detail.html",
"Incorrect template used for displaying lab detail page",
)
# Lab detail view: cannot see as participant
    def testCannotGetLabDetailViewAsParticipant(self):
self.client.force_login(self.participant)
page = self.client.get(self.lab_detail_url)
self.assertEqual(
page.status_code, 403, "Participant is able to view exp lab detail page!"
)
# Lab members view: cannot see as researcher not in lab
def testCannotGetLabMembersViewAsUnaffiliatedResearcher(self):
self.client.force_login(self.researcher_outside_lab)
page = self.client.get(self.lab_members_url)
self.assertEqual(
page.status_code, 403, "Unaffiliated researcher is able to view lab members"
)
# Lab members view: can see as researcher in lab.
def testCanGetLabMembersViewAsLabResearcher(self):
self.client.force_login(self.researcher)
page = self.client.get(self.lab_members_url)
self.assertEqual(
page.status_code, 200, "Unable to get lab members view as lab researcher"
)
self.assertTemplateUsed(
page,
"studies/lab_member_list.html",
"Incorrect template used for displaying lab member page",
)
# note - can use page.context_data too!
self.assertIn("Alice", page.rendered_content)
self.assertIn("Candice", page.rendered_content)
self.assertNotIn("Bobbington", page.rendered_content)
# Lab members view: cannot post as researcher in lab w/o manage perms
def testPostLabMembersViewIncorrectPerms(self):
self.client.force_login(self.researcher)
post_data = {
"user_action": "make_member",
"user_id": self.researcher_outside_lab.pk,
}
page = self.client.post(self.lab_members_url, post_data)
self.assertEqual(
page.status_code,
403,
"Researcher able to add new lab member without permissions",
)
# Lab members view: can add new researcher w/ appropriate perms
def testAddNewLabMemberWithCorrectPerms(self):
self.client.force_login(self.researcher)
assign_perm(
LabPermission.MANAGE_LAB_RESEARCHERS.codename, self.researcher, self.lab
)
post_data = {
"user_action": "make_guest",
"user_id": self.researcher_outside_lab.pk,
}
page = self.client.post(self.lab_members_url, post_data)
self.assertEqual(
page.status_code,
302,
"Researcher unable to add new lab member despite correct permissions",
)
self.assertRedirects(page, self.lab_members_url)
self.assertIn(
self.lab,
self.researcher_outside_lab.labs.all(),
"Researcher not successfully added to lab",
)
self.assertIn(self.researcher_outside_lab, self.lab.guest_group.user_set.all())
# Lab members view: can remove researcher w/ appropriate perms
def testRemoveLabMemberWithCorrectPerms(self):
self.client.force_login(self.researcher)
assign_perm(
LabPermission.MANAGE_LAB_RESEARCHERS.codename, self.researcher, self.lab
)
post_data = {
"user_action": "remove_researcher",
"user_id": self.researcher_in_lab.pk,
}
page = self.client.post(self.lab_members_url, post_data)
self.assertEqual(
page.status_code,
302,
"Researcher unable to remove lab member despite correct permissions",
)
self.assertRedirects(page, self.lab_members_url)
self.assertNotIn(self.lab, self.researcher_in_lab.labs.all())
self.assertNotIn(
self.researcher_in_lab,
self.lab.member_group.user_set.all(),
"Researcher removed from lab but not from associated member group",
)
self.assertNotIn(
self.researcher_in_lab,
self.study.researcher_group.user_set.all(),
"Researcher removed from lab but not from associated study group",
)
# Lab list view: can see as researcher
def testCanGetLabListViewAsResearcher(self):
self.client.force_login(self.researcher)
page = self.client.get(self.lab_list_url)
self.assertEqual(
page.status_code, 200, "Unable to get lab list view as researcher"
)
self.assertTemplateUsed(
page,
"studies/lab_list.html",
"Incorrect template used for displaying lab list page",
)
# Lab list view: cannot see as participant
    def testCannotGetLabListViewAsParticipant(self):
self.client.force_login(self.participant)
page = self.client.get(self.lab_list_url)
self.assertEqual(
page.status_code, 403, "Participant is able to view exp lab list page!"
)
# Lab update view: cannot get as researcher w/o edit perms
def testGetLabUpdateViewIncorrectPerms(self):
assign_perm(
LabPermission.MANAGE_LAB_RESEARCHERS.codename, self.researcher, self.lab
)
self.client.force_login(self.researcher)
page = self.client.get(self.lab_update_url)
self.assertEqual(
page.status_code,
403,
"Researcher able to access lab update view without permissions",
)
# Lab update view: can get as researcher w/ edit perms
def testGetLabUpdateViewCorrectPerms(self):
assign_perm(LabPermission.EDIT_LAB_METADATA.codename, self.researcher, self.lab)
self.client.force_login(self.researcher)
page = self.client.get(self.lab_update_url)
self.assertEqual(
page.status_code,
200,
"Researcher not able to access lab update view despite permissions",
)
# Lab update view: cannot post as researcher w/o edit perms
def testPostLabUpdateViewIncorrectPerms(self):
assign_perm(
LabPermission.MANAGE_LAB_RESEARCHERS.codename, self.researcher, self.lab
)
self.client.force_login(self.researcher)
post_data = {
"name": "New lab",
"principal_investigator_name": "Jane Smith",
"institution": "New institution",
"contact_email": "abc@def.org",
"contact_phone": "(123) 456-7890",
"lab_website": "https://mit.edu",
"description": "ABCDEFG",
"irb_contact_info": "how to reach the IRB",
}
page = self.client.post(self.lab_update_url, post_data)
self.assertEqual(
page.status_code,
403,
"Researcher able to edit lab metadata without permissions",
)
updated_lab = Lab.objects.get(pk=self.lab.pk)
self.assertEqual(updated_lab.institution, "MIT")
# Lab update view: can post as researcher w/ edit perms
def testPostLabUpdateViewCorrectPerms(self):
assign_perm(LabPermission.EDIT_LAB_METADATA.codename, self.researcher, self.lab)
self.client.force_login(self.researcher)
post_data = {
"name": "New lab",
"principal_investigator_name": "Jane Smith",
"institution": "New institution",
"contact_email": "abc@def.org",
"contact_phone": "(123) 456-7890",
"lab_website": "https://mit.edu",
"description": "ABCDEFG",
"irb_contact_info": "how to reach the IRB",
}
page = self.client.post(self.lab_update_url, post_data)
self.assertEqual(
page.status_code,
302,
"Researcher unable to edit lab metadata despite permissions",
)
updated_lab = Lab.objects.get(pk=self.lab.pk)
self.assertEqual(updated_lab.institution, "New institution")
# Lab update view: cannot update approved_to_test
def testPostLabUpdateViewEditApproval(self):
assign_perm(LabPermission.EDIT_LAB_METADATA.codename, self.researcher, self.lab)
self.client.force_login(self.researcher)
post_data = {
"name": "New lab",
"principal_investigator_name": "Jane Smith",
"institution": "New institution",
"contact_email": "abc@def.org",
"contact_phone": "(123) 456-7890",
"lab_website": "https://mit.edu",
"description": "ABCDEFG",
"irb_contact_info": "how to reach the IRB",
"approved_to_test": True,
}
page = self.client.post(self.lab_update_url, post_data)
updated_lab = Lab.objects.get(pk=self.lab.pk)
self.assertFalse(
updated_lab.approved_to_test,
"Researcher approved lab to test without permission",
)
# Lab update view: can update approved_to_test as admin
def testPostLabUpdateViewEditApprovalAsAdmin(self):
assign_perm(LabPermission.EDIT_LAB_APPROVAL.prefixed_codename, self.researcher)
assign_perm(LabPermission.EDIT_LAB_METADATA.prefixed_codename, self.researcher)
self.client.force_login(self.researcher)
post_data = {
"name": "New lab",
"principal_investigator_name": "Jane Smith",
"institution": "New institution",
"contact_email": "abc@def.org",
"contact_phone": "(123) 456-7890",
"lab_website": "https://mit.edu",
"description": "ABCDEFG",
"irb_contact_info": "how to reach the IRB",
"approved_to_test": True,
}
page = self.client.post(self.lab_update_url, post_data)
updated_lab = Lab.objects.get(pk=self.lab.pk)
self.assertEqual(page.status_code, 302)
self.assertTrue(
updated_lab.approved_to_test,
"Researcher could not approve lab to test despite permission",
)
# Lab membership request: can make as researcher
def testRequestLabMembershipAsResearcher(self):
self.client.force_login(self.researcher)
page = self.client.post(self.lab2_request_url, {})
self.assertEqual(
page.status_code, 302, "Unable to request lab membership as researcher"
)
self.assertIn(self.researcher, self.lab2.requested_researchers.all())
self.assertNotIn(self.researcher, self.lab2.researchers.all())
# Lab membership request: cannot make as participant
def testRequestLabMembershipAsParticipant(self):
self.client.force_login(self.participant)
page = self.client.post(self.lab2_request_url, {})
self.assertEqual(
page.status_code, 403, "Participant is able to request lab membership!"
)
| 41.293801 | 88 | 0.646279 |
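`Force2FAClient` at the top of this test module works by shadowing the parent's `session` property, so every read of `.session` reports two-factor auth as already completed. The trick in isolation, with a plain dict standing in for Django's session store (all names below are stand-ins, not the real Django client):

```python
TWO_FACTOR_AUTH_SESSION_KEY = "2fa_verified"  # stand-in for the real key


class BaseClient:
    """Minimal stand-in for a test client exposing a session property."""

    def __init__(self):
        self._session = {}

    @property
    def session(self):
        return self._session


class Force2FAClient(BaseClient):
    """Every read of .session reports two-factor auth as complete."""

    @property
    def session(self):
        _session = super().session  # delegate to the parent's getter
        _session[TWO_FACTOR_AUTH_SESSION_KEY] = True
        return _session


client = Force2FAClient()
```

Because the flag is injected on every property access rather than set once, it survives login/logout cycles that replace the underlying session object.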
571b5cb0bbeb167ec7325889879698a3c49e5b8c | 1,630 | py | Python | contact/migrations/0001_initial.py | Dimstella/blockchain-contact-tracing-app-hospitals | e0b2bf2b3b8c06e58032faed99900d1c7b7d300d | ["MIT"] | null | null | null | contact/migrations/0001_initial.py | Dimstella/blockchain-contact-tracing-app-hospitals | e0b2bf2b3b8c06e58032faed99900d1c7b7d300d | ["MIT"] | null | null | null | contact/migrations/0001_initial.py | Dimstella/blockchain-contact-tracing-app-hospitals | e0b2bf2b3b8c06e58032faed99900d1c7b7d300d | ["MIT"] | null | null | null |
# Generated by Django 2.2.6 on 2020-12-05 14:32
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Patient',
fields=[
('uid', models.AutoField(primary_key=True, serialize=False)),
('name', models.CharField(default='', max_length=100)),
('surname', models.CharField(default='', max_length=70)),
('address', models.CharField(default='', max_length=256)),
('email', models.CharField(default='', max_length=256)),
('city', models.CharField(default='', max_length=70)),
('region', models.CharField(default='', max_length=70)),
('postal', models.CharField(default='', max_length=70)),
('country', models.CharField(default='', max_length=70)),
('phone', models.CharField(default='', max_length=70)),
('status', models.CharField(choices=[(True, 'Infected'), (False, 'Cured'), ('Suspected', 'Suspected')], default='', max_length=20, null=True)),
('notes', models.TextField(blank=True, null=True)),
('created_at', models.DateTimeField()),
('user', models.ForeignKey(default=1, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
| 44.054054 | 160 | 0.576687 |
cdee06a9ec68002af8327d177bb588be28ee4032 | 4,173 | py | Python | client/python/setup.py | miyamotok0105/modeldb | 6b2b7fb598d90be733e4b68efae3165c24efe2d1 | ["MIT"] | 1 | 2018-08-23T01:15:43.000Z | 2018-08-23T01:15:43.000Z | client/python/setup.py | miyamotok0105/modeldb | 6b2b7fb598d90be733e4b68efae3165c24efe2d1 | ["MIT"] | 1 | 2018-08-20T17:37:22.000Z | 2018-08-20T17:37:22.000Z | client/python/setup.py | miyamotok0105/modeldb | 6b2b7fb598d90be733e4b68efae3165c24efe2d1 | ["MIT"] | null | null | null |
"""A setuptools based setup module.
See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='modeldb',
# Versions should comply with PEP440. For a discussion on single-sourcing
# the version across setup.py and the project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version='0.0.1a30',
description='A system to manage machine learning models',
long_description=long_description,
# The project's main homepage.
url='https://github.com/mitdbg/modeldb/tree/master/client',
# Author details
author='Manasi Vartak, MIT DB Group',
author_email='modeldb@csail.mit.edu',
# Choose your license
license='MIT',
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 3 - Alpha',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'Topic :: Scientific/Engineering',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 2.7',
# 'Programming Language :: Python :: 3',
# 'Programming Language :: Python :: 3.3',
# 'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
# What does your project relate to?
keywords='machine learning ML model catalog',
# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
packages=find_packages(exclude=['*.tests.*', '*.tests']),
# Alternatively, if you want to distribute just a my_module.py, uncomment
# this:
# py_modules=["my_module"],
# List run-time dependencies here. These will be installed by pip when
# your project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=[
'numpy', 'pandas', 'statsmodels', 'matplotlib', 'patsy',
'scikit-learn', 'sklearn', 'thrift', 'pyyaml', 'requests', 'dpath',
'future'],
# List additional groups of dependencies here (e.g. development
# dependencies). You can install these using the following syntax,
# for example:
# $ pip install -e .[dev,test]
# extras_require={
# 'dev': ['check-manifest'],
# 'test': ['coverage'],
# },
# If there are data files included in your packages that need to be
# installed, specify them here. If using Python 2.6 or less, then these
# have to be included in MANIFEST.in as well.
package_data={
'': ['syncer.json'],
},
# Although 'package_data' is the preferred approach, in some case you may
# need to place data files outside of your packages. See:
# http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files # noqa
# In this case, 'data_file' will be installed into '<sys.prefix>/my_data'
# data_files=[('config', ['syncer.json'])],
# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# pip to create the appropriate form of executable for the target platform.
entry_points={
'console_scripts': [
'sample=sample:main',
],
},
)
| 35.974138 | 94 | 0.661395 |
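The `entry_points` stanza in the setup.py above declares a console script as `name=module:attr`; at install time setuptools parses that spec and generates a wrapper executable. A rough sketch of just the spec parsing, for illustration only (setuptools' real parser also handles extras, validation, and script generation):

```python
def parse_console_script(spec):
    """Split a console_scripts spec 'name=module:attr' into its parts."""
    name, _, target = spec.partition("=")
    module, _, attr = target.partition(":")
    return name.strip(), module.strip(), attr.strip()


parsed = parse_console_script("sample=sample:main")
```

The generated `sample` executable then effectively imports the `sample` module and calls its `main()` function.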
cbd6962d25f3901bbb114e1c49ffe7720c40aa11 | 2,206 | py | Python | pygeoapi/__init__.py | anthonyfok/pygeoapi | b6a981f46d8fdaa29540d9e34cd273fdab177371 | ["MIT"] | null | null | null | pygeoapi/__init__.py | anthonyfok/pygeoapi | b6a981f46d8fdaa29540d9e34cd273fdab177371 | ["MIT"] | 5 | 2021-12-10T14:10:28.000Z | 2021-12-10T17:08:57.000Z | pygeoapi/__init__.py | anthonyfok/pygeoapi | b6a981f46d8fdaa29540d9e34cd273fdab177371 | ["MIT"] | 3 | 2021-11-15T16:21:38.000Z | 2022-01-03T07:50:50.000Z |
# =================================================================
#
# Authors: Tom Kralidis <tomkralidis@gmail.com>
#
# Copyright (c) 2021 Tom Kralidis
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation
# files (the "Software"), to deal in the Software without
# restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following
# conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
# OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# =================================================================
__version__ = '0.13.dev0'
import click
from pygeoapi.config import config
from pygeoapi.openapi import openapi
@click.group()
@click.version_option(version=__version__)
def cli():
pass
@cli.command()
@click.option('--flask', 'server', flag_value="flask", default=True)
@click.option('--starlette', 'server', flag_value="starlette")
@click.pass_context
def serve(ctx, server):
"""Run the server with different daemon type (--flask is the default)"""
if server == "flask":
from pygeoapi.flask_app import serve as serve_flask
ctx.forward(serve_flask)
ctx.invoke(serve_flask)
elif server == "starlette":
from pygeoapi.starlette_app import serve as serve_starlette
ctx.forward(serve_starlette)
ctx.invoke(serve_starlette)
else:
raise click.ClickException('--flask/--starlette is required')
cli.add_command(config)
cli.add_command(openapi)
| 34.46875 | 76 | 0.697189 |
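The `serve` command above is a flag-to-backend dispatcher: each click `flag_value` option stores a backend name into the single `server` parameter, and the command body branches on that name to import and run the matching app. The same shape without click, as a plain dict dispatch (backend names and return values here are illustrative):

```python
def make_dispatcher(backends, default):
    """Build a serve() that routes to one backend callable by name."""

    def serve(server=None):
        name = server or default
        try:
            return backends[name]()
        except KeyError:
            raise ValueError("--flask/--starlette is required")

    return serve


serve = make_dispatcher(
    {"flask": lambda: "flask serving", "starlette": lambda: "starlette serving"},
    default="flask",
)
```

Keeping the backend table in one dict makes adding a third server a one-line change, which is the same extensibility the flag_value pattern buys in the click version.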
7d76b04f21e976bde82c03730ff0e53d61cba4bb | 15,039 | py | Python | cnmodel/cells/sgc.py | cnmodel/cnmodel | e4fee803d9f783d961c4a7ebb69ae222b74d8441 | ["BSD-3-Clause"] | 5 | 2017-07-26T21:46:14.000Z | 2020-11-27T07:53:14.000Z | cnmodel/cells/sgc.py | cnmodel/cnmodel | e4fee803d9f783d961c4a7ebb69ae222b74d8441 | ["BSD-3-Clause"] | 12 | 2017-07-26T07:16:16.000Z | 2021-07-14T13:41:37.000Z | cnmodel/cells/sgc.py | cnmodel/cnmodel | e4fee803d9f783d961c4a7ebb69ae222b74d8441 | ["BSD-3-Clause"] | 10 | 2017-07-26T07:03:29.000Z | 2021-06-23T15:52:37.000Z |
from __future__ import print_function
from neuron import h
from ..util import nstomho
from ..util import Params
import numpy as np
from .cell import Cell
from .. import synapses
from .. import an_model
from .. import data
__all__ = ['SGC', 'SGC_TypeI', 'DummySGC']
class SGC(Cell):
scaled = False
@classmethod
def create(cls, model='I', species='mouse', **kwds):
if model == 'dummy':
return DummySGC(**kwds)
elif model == 'I':
return SGC_TypeI(species=species, **kwds)
else:
            raise ValueError('SGC model %s is unknown' % model)
def __init__(self, cf=None, sr=None):
Cell.__init__(self)
self._cf = cf
self._sr = sr
self.spike_source = None # used by DummySGC to connect VecStim to terminal
@property
def cf(self):
""" Center frequency
"""
return self._cf
@property
def celltype(self):
return 'sgc'
@property
def sr(self):
""" Spontaneous rate group. 1=low, 2=mid, 3=high
"""
return self._sr
def make_terminal(self, post_cell, term_type, **kwds):
"""Create a StochasticTerminal and configure it according to the
postsynaptic cell type.
"""
pre_sec = self.soma
# Return a simple terminal unless a stochastic terminal was requested.
if term_type == 'simple':
return synapses.SimpleTerminal(pre_sec, post_cell,
spike_source=self.spike_source, **kwds)
elif term_type == 'multisite':
n_rsites = data.get('sgc_synapse', species='mouse', post_type=post_cell.celltype,
field='n_rsites')
            opts = {'nzones': n_rsites, 'delay': 0, 'dep_flag': 1, 'spike_source': self.spike_source}
opts.update(kwds)
# when created, depflag is set True (1) so that we compute the DKR D*F to get release
# this can be modified prior to the run by setting the terminal(s) so that dep_flag is 0
# (no DKR: constant release probability)
term = synapses.StochasticTerminal(pre_sec, post_cell, **opts)
kinetics = data.get('sgc_ampa_kinetics', species='mouse', post_type=post_cell.celltype,
field=['tau_g', 'amp_g'])
term.set_params(**kinetics)
dynamics = data.get('sgc_release_dynamics', species='mouse', post_type=post_cell.celltype,
field=['F', 'k0', 'kmax', 'kd', 'kf', 'taud', 'tauf', 'dD', 'dF'])
term.set_params(**dynamics)
return term
else:
raise ValueError("Unsupported terminal type %s" % term_type)
class DummySGC(SGC):
""" SGC class with no cell body; this cell only replays a predetermined
spike train.
"""
def __init__(self, cf=None, sr=None, simulator=None):
"""
Parameters
----------
cf : float (default: None)
Required: the characteristic frequency for the SGC
sr : int (default None)
required : Selects the spontaneous rate group from the
Zilany et al (2010) model. 1 = LSR, 2 = MSR, 3 = HSR
simulator : 'cochlea' | 'matlab' | None (default None)
Sets the simulator interface that will be used. All models
currently use the Zilany et al. model, but the simulator can
be run though a Python-interface directly to the Matlab code
as publicy available, (simulator='matlab'), or can be run through
Rudnicki & Hemmert's Python interface to the simulator's C code
(simulator='cochlea').
"""
self._simulator = simulator
SGC.__init__(self, cf, sr)
self.vecstim = h.VecStim()
# this causes the terminal to receive events from the VecStim:
self.spike_source = self.vecstim
# just an empty section for holding the terminal
self.add_section(h.Section(), self.somaname)
self.status = {self.somaname: True, 'axon': False, 'dendrites': False, 'pumps': False,
                       'na': None, 'species': None, 'modelType': 'dummy', 'ttx': False, 'name': 'DummySGC',
'morphology': None, 'decorator': None, 'temperature': None}
def set_spiketrain(self, times):
""" Set the times of spikes (in seconds) to be replayed by the cell.
"""
self._spiketrain = times
self._stvec = h.Vector(times)
self.vecstim.play(self._stvec)
def set_sound_stim(self, stim, seed, simulator=None):
""" Set the sound stimulus used to generate this cell's spike train.
"""
self._sound_stim = stim
spikes = self.generate_spiketrain(stim, seed, simulator)
self.set_spiketrain(spikes)
def generate_spiketrain(self, stim, seed, simulator=None):
if simulator is None:
simulator = self._simulator
spikes = an_model.get_spiketrain(cf=self.cf, sr=self.sr, seed=seed,
stim=stim, simulator=simulator)
return spikes * 1000

class SGC_TypeI(SGC):
    """
    Spiral ganglion cell model
    """
    def __init__(self, morphology=None, decorator=None, nach=None, ttx=False,
                 species='guineapig',
                 modelType='sgc-bm', cf=None, sr=None, debug=False):
        """
        Initialize a spiral ganglion Type I cell, based on a bushy cell model.
        Modifications to the cell can be made by calling the methods below. These include
        converting to a model with modified size and conductances (experimental),
        and changing the sodium channel conductances.

        Parameters
        ----------
        morphology : string (default: None)
            A file name to read the cell morphology from. If a valid file is found, a cell is
            constructed as a cable model from the hoc file.
            If None (default), only a point model is made, exactly according to RM03.

        decorator : Python function (default: None)
            decorator is a function that "decorates" the morphology with ion channels according
            to a set of rules.
            If None, a default set of channels is inserted into the first soma section, and the
            rest of the structure is "bare".

        nach : string (default: None)
            nach selects the type of sodium channel that will be used in the model. A channel
            mechanism by that name must exist. The default is jsrna (Rothman et al., 1993).

        ttx : Boolean (default: False)
            If ttx is True, then the sodium channel conductance is set to 0 everywhere in the cell.
            Currently, this is not implemented.

        species : string (default: 'guineapig')
            species defines the channel density that will be inserted for different models. Note that
            if a decorator function is specified, this argument is ignored.

        modelType : string (default: 'sgc-bm')
            modelType specifies the type of the model that will be used. SGC models know about
            "a" (apical) and "bm" (basal-middle) models, based on Liu et al., JARO, 2014.
            modelType is passed to the decorator, or to species_scaling to adjust point models.

        cf : float (default: None)
            The CF for the auditory nerve fiber that this SGC represents.

        sr : string (default: None)
            The spontaneous rate group to which this fiber belongs. "LS", "MS", and "HS" are known values.

        debug : boolean (default: False)
            debug is a boolean flag. When set, there will be multiple printouts of progress and parameters.

        Returns
        -------
        Nothing
        """
        super(SGC_TypeI, self).__init__(cf=cf, sr=sr)
        if species == 'guineapig':
            modelName = 'SGC'
            temp = 22.
            dataset = 'sgc_guineapig_channels'
        elif species == 'mouse':
            temp = 34.
            modelName = 'SGC'
            dataset = 'sgc_mouse_channels'
        else:
            raise ValueError(f"Species {species:s} not recognized for {self.celltype:s} cells")
        self.status = {self.somaname: True, 'axon': False, 'dendrites': False, 'pumps': False,
                       'na': nach, 'species': species, 'modelType': modelType, 'modelName': modelName,
                       'ttx': ttx, 'name': 'SGC',
                       'morphology': morphology, 'decorator': decorator, 'temperature': None}
        self.i_test_range = {'pulse': [(-0.3, 0.3, 0.02), (-0.03, 0., 0.005)]}  # include finer range as well
        self.vrange = [-75., -55.]
        self.debug = debug
        soma = self.do_morphology(morphology)
        self.pars = self.get_cellpars(dataset, species=species, modelType=modelType)
        self.status['na'] = self.pars.natype
        # decorate the morphology with ion channels
        if decorator is None:  # basic model, only on the soma
            self.mechanisms = [self.status['na'], 'klt', 'kht', 'leak']
            if modelType == 'sgc-a':
                self.mechanisms.append('ihsgcApical')
            elif modelType == 'sgc-bm':
                self.mechanisms.append('ihsgcBasalMiddle')
            else:
                raise ValueError('Type %s not known for SGC model' % modelType)
            for mech in self.mechanisms:
                self.soma.insert(mech)
            self.soma.ek = self.e_k
            self.soma().leak.erev = self.e_leak
            self.species_scaling(silent=True)  # set the default type II cell parameters
        else:  # decorate according to a defined set of rules on all cell compartments
            self.decorate()
        self.save_all_mechs()  # save all mechanisms inserted, location and gbar values...
        self.get_mechs(self.soma)
        if debug:
            print("<< SGC: Spiral Ganglion Cell created >>")

    def get_cellpars(self, dataset, species='guineapig', modelType='sgc-a'):
        cellcap = data.get(dataset, species=species, model_type=modelType,
                           field='soma_Cap')
        chtype = data.get(dataset, species=species, model_type=modelType,
                          field='soma_na_type')
        pars = Params(soma_Cap=cellcap, natype=chtype)
        for g in ['soma_na_gbar', 'soma_kht_gbar', 'soma_klt_gbar', 'soma_ihap_gbar', 'soma_ihbm_gbar',
                  'soma_ihap_eh', 'soma_ihbm_eh', 'soma_leak_gbar', 'soma_leak_erev',
                  'soma_e_k', 'soma_e_na']:
            pars.additem(g, data.get(dataset, species=species, model_type=modelType,
                                     field=g))
        return pars

    def species_scaling(self, silent=True):
        """
        Adjust all of the conductances and the cell size according to the species requested.
        Used ONLY for point models.

        Parameters
        ----------
        species : string (default: 'guineapig')
            name of the species to use for scaling the conductances in the base point model
            Must be one of mouse or guineapig

        modelType : string (default: 'a')
            definition of HCN model type from Liu et al. JARO 2014:
            'a' for apical model
            'bm' for basal-middle model

        silent : boolean (default: True)
            run silently (True) or verbosely (False)

        Returns
        -------
        Nothing

        Notes
        -----
        The 'guineapig' model uses the mouse HCN channel model, verbatim. This may not
        be appropriate, given that the other conductances are scaled up.
        """
        assert self.scaled is False  # block double scaling!
        self.scaled = True
        soma = self.soma
        if self.status['species'] == 'mouse':
            self._valid_temperatures = (34.,)
            if self.status['temperature'] is None:
                self.set_temperature(34.)
        elif self.status['species'] == 'guineapig':
            # guinea pig data from Rothman and Manis, 2003, modelType II
            self._valid_temperatures = (22.,)
            if self.status['temperature'] is None:
                self.set_temperature(22.)
        self.set_soma_size_from_Cm(self.pars.soma_Cap)
        self.adjust_na_chans(soma)
        soma().kht.gbar = nstomho(self.pars.soma_kht_gbar, self.somaarea)
        soma().klt.gbar = nstomho(self.pars.soma_klt_gbar, self.somaarea)
        if self.status['modelType'] == 'sgc-a':
            soma().ihsgcApical.gbar = nstomho(self.pars.soma_ihap_gbar, self.somaarea)
            soma().ihsgcApical.eh = self.pars.soma_ihap_eh
        elif self.status['modelType'] == 'sgc-bm':
            soma().ihsgcBasalMiddle.gbar = nstomho(self.pars.soma_ihbm_gbar, self.somaarea)
            soma().ihsgcBasalMiddle.eh = self.pars.soma_ihbm_eh
        else:
            raise ValueError('Ihsgc modelType %s not recognized for species %s' % (self.status['modelType'], self.status['species']))
        soma().leak.gbar = nstomho(self.pars.soma_leak_gbar, self.somaarea)
        soma().leak.erev = self.pars.soma_leak_erev
        self.check_temperature()

    def i_currents(self, V):
        """
        For the steady-state case, return the total current at voltage V.
        Used to find the zero current point.
        vrange brackets the interval.
        Implemented here are the basic RM03 mechanisms.
        This function should be replaced for specific cell types.
        """
        for part in self.all_sections.keys():
            for sec in self.all_sections[part]:
                sec.v = V
        h.t = 0.
        h.celsius = self.status['temperature']
        h.finitialize()
        self.ix = {}
        if 'na' in self.mechanisms:
            # print dir(self.soma().na)
            self.ix['na'] = self.soma().na.gna * (V - self.soma().ena)
        if 'jsrna' in self.mechanisms:
            # print dir(self.soma().na)
            self.ix['jsrna'] = self.soma().jsrna.gna * (V - self.soma().ena)
        if 'klt' in self.mechanisms:
            self.ix['klt'] = self.soma().klt.gklt * (V - self.soma().ek)
        if 'kht' in self.mechanisms:
            self.ix['kht'] = self.soma().kht.gkht * (V - self.soma().ek)
        if 'ihsgcApical' in self.mechanisms:
            self.ix['ihsgcApical'] = self.soma().ihsgcApical.gh * (V - self.soma().ihsgcApical.eh)
        if 'ihsgcBasalMiddle' in self.mechanisms:
            self.ix['ihsgcBasalMiddle'] = self.soma().ihsgcBasalMiddle.gh * (V - self.soma().ihsgcBasalMiddle.eh)
        if 'leak' in self.mechanisms:
            self.ix['leak'] = self.soma().leak.gbar * (V - self.soma().leak.erev)
        # print self.status['name'], self.status['type'], V, self.ix
        return np.sum([self.ix[i] for i in self.ix])
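The `i_currents` method above returns the sum of conductance-weighted driving forces at a clamped voltage `V`; elsewhere this is used to locate the zero-current (resting) potential inside `vrange`. A minimal, self-contained sketch of that root-finding step, using a toy two-conductance membrane (the conductance names and values here are illustrative only, not cnmodel's actual parameters):

```python
def total_current(V, gk=2.8, ek=-84.0, gleak=0.065, eleak=-65.0):
    """Steady-state membrane current at voltage V (mV) for a toy membrane:
    each term is a conductance times its driving force, as in i_currents()."""
    return gk * (V - ek) + gleak * (V - eleak)


def find_vm(i_func, vrange=(-90.0, -40.0), tol=1e-6):
    """Bisection search for the voltage where i_func(V) crosses zero.
    vrange must bracket the zero crossing, like self.vrange above."""
    lo, hi = vrange
    assert i_func(lo) * i_func(hi) < 0, "vrange must bracket the zero crossing"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if i_func(lo) * i_func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)


vm = find_vm(total_current)
```

The analytic answer for this linear toy case is the conductance-weighted mean of the reversal potentials, which the bisection recovers to within the tolerance.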
| 42.483051 | 133 | 0.586409 |
5dbc51101aa4413d5cb3d3874b613c3a1be48861 | 124 | py | Python | seismo/__init__.py | ezietsman/seismo | 43729526391126c025bf13d727edb693cd46daa9 | [
"MIT"
] | 1 | 2015-07-23T08:01:18.000Z | 2015-07-23T08:01:18.000Z | seismo/__init__.py | ezietsman/seismo | 43729526391126c025bf13d727edb693cd46daa9 | [
"MIT"
] | null | null | null | seismo/__init__.py | ezietsman/seismo | 43729526391126c025bf13d727edb693cd46daa9 | [
"MIT"
] | null | null | null | from .timeseries import deeming, fast_deeming, find_peak
from .fitting import signal, sinewave
from .session import Session
| 31 | 56 | 0.830645 |
f9471216393d4ce6df07969f5da1ee9015fdcc99 | 5,510 | py | Python | lwganrt/holo/holodemo.py | darkAlert/impersonator-rt | 8a2b879cf60f2094944a0104592d460fee3bda6a | [
"MIT"
] | 6 | 2020-04-17T08:47:58.000Z | 2021-07-02T10:58:52.000Z | lwganrt/holo/holodemo.py | darkAlert/impersonator-rt | 8a2b879cf60f2094944a0104592d460fee3bda6a | [
"MIT"
] | null | null | null | lwganrt/holo/holodemo.py | darkAlert/impersonator-rt | 8a2b879cf60f2094944a0104592d460fee3bda6a | [
"MIT"
] | 1 | 2020-05-24T23:46:54.000Z | 2020-05-24T23:46:54.000Z | import os
import glob
from shutil import copyfile

from holovideo import make_video


def run_imitator(img_dir, img_names, tgt_path, load_path, output_dir):
    preds_name_mask = 'imitators/pred_*.jpg'
    for img_name in img_names:
        subject_name = img_name.split('.')[0]
        src_path = os.path.join(img_dir, img_name)
        print('Processing', subject_name)
        # Run imitator:
        os.system("python3 -W ignore run_imitator.py --gpu_ids 0 --model imitator --output_dir %s --src_path %s --tgt_path %s --has_detector --post_tune --save_res --load_path %s" % (output_dir, src_path, tgt_path, load_path))
        # Copy predicted images:
        src_preds_paths = glob.glob(os.path.join(output_dir, preds_name_mask))
        dst_preds_dir = os.path.join(output_dir, 'preds_imitator', subject_name)
        if not os.path.exists(dst_preds_dir):
            os.makedirs(dst_preds_dir)
        for src_p in src_preds_paths:
            filename = src_p.split('/')[-1]
            dst_p = os.path.join(dst_preds_dir, filename)
            print(src_p)
            copyfile(src_p, dst_p)
    return True


def run_imitator_front_warp(img_dir, img_names, tgt_path, load_path, output_dir):
    preds_name_mask = 'imitators/pred_*.jpg'
    for img_name in img_names:
        subject_name = img_name.split('.')[0]
        src_path = os.path.join(img_dir, img_name)
        print('Processing', subject_name)
        # Run imitator:
        os.system("python3 -W ignore run_imitator.py --gpu_ids 0 --model imitator --output_dir %s --src_path %s --tgt_path %s --has_detector --post_tune --save_res --front_warp --load_path %s" % (output_dir, src_path, tgt_path, load_path))
        # Copy predicted images:
        src_preds_paths = glob.glob(os.path.join(output_dir, preds_name_mask))
        dst_preds_dir = os.path.join(output_dir, 'preds_imitator_front_warp', subject_name)
        if not os.path.exists(dst_preds_dir):
            os.makedirs(dst_preds_dir)
        for src_p in src_preds_paths:
            filename = src_p.split('/')[-1]
            dst_p = os.path.join(dst_preds_dir, filename)
            print(src_p)
            copyfile(src_p, dst_p)
    return True


def run_view(img_dir, img_names, load_path, output_dir):
    preds_name_mask = 'imgs/pred_*.jpg'
    for img_name in img_names:
        subject_name = img_name.split('.')[0]
        src_path = os.path.join(img_dir, img_name)
        print('Processing', subject_name)
        # Run viewer:
        os.system("python3 -W ignore run_view.py --gpu_ids 0 --model viewer --output_dir %s --src_path %s --bg_ks 13 --ft_ks 3 --has_detector --post_tune --save_res --bg_replace --load_path %s" % (output_dir, src_path, load_path))
        # Copy predicted images:
        src_preds_paths = glob.glob(os.path.join(output_dir, preds_name_mask))
        dst_preds_dir = os.path.join(output_dir, 'preds_view', subject_name)
        if not os.path.exists(dst_preds_dir):
            os.makedirs(dst_preds_dir)
        for src_p in src_preds_paths:
            filename = src_p.split('/')[-1]
            dst_p = os.path.join(dst_preds_dir, filename)
            print(src_p)
            copyfile(src_p, dst_p)
    return True


def run_view_front_warp(img_dir, img_names, load_path, output_dir):
    preds_name_mask = 'imgs/pred_*.jpg'
    for img_name in img_names:
        subject_name = img_name.split('.')[0]
        src_path = os.path.join(img_dir, img_name)
        print('Processing', subject_name)
        # Run viewer:
        os.system("python3 -W ignore run_view.py --gpu_ids 0 --model viewer --output_dir %s --src_path %s --bg_ks 13 --ft_ks 3 --has_detector --post_tune --save_res --bg_replace --front_warp --load_path %s" % (output_dir, src_path, load_path))
        # Copy predicted images:
        src_preds_paths = glob.glob(os.path.join(output_dir, preds_name_mask))
        dst_preds_dir = os.path.join(output_dir, 'preds_view_front_warp', subject_name)
        if not os.path.exists(dst_preds_dir):
            os.makedirs(dst_preds_dir)
        for src_p in src_preds_paths:
            filename = src_p.split('/')[-1]
            dst_p = os.path.join(dst_preds_dir, filename)
            print(src_p)
            copyfile(src_p, dst_p)
    return True


def main():
    img_dir = '/home/darkalert/KazendiJob/Data/LWGtest/'
    img_names = []
    img_names.append('inet-woman1.jpeg')
    img_names.append('inet-woman2.jpeg')
    img_names.append('inet-woman3.jpeg')
    img_names.append('inet-man1.jpeg')
    img_names.append('inet-man3.jpeg')
    img_names.append('jason.jpeg')
    tgt_path = './assets/samples/refs/iPER/024_8_2'
    load_path = './outputs/checkpoints/my_iPER/net_epoch_26_id_G.pth'
    output_dir = './outputs/results/LWGtest_by_my_model/'
    os.chdir("../")
    # Run impersonator:
    run_imitator(img_dir, img_names, tgt_path, load_path, output_dir)
    run_imitator_front_warp(img_dir, img_names, tgt_path, load_path, output_dir)
    run_view(img_dir, img_names, load_path, output_dir)
    run_view_front_warp(img_dir, img_names, load_path, output_dir)
    # Make videos:
    seq_dir = os.path.abspath(output_dir)
    seq_names = ['preds_imitator', 'preds_imitator_front_warp']
    subj_names = [img_names[0], img_names[1], img_names[2]]
    output_mp4_path = os.path.join(seq_dir, 'imitator-women.mp4')
    make_video(subj_names, img_dir, seq_names, seq_dir, output_mp4_path)
    subj_names = [img_names[3], img_names[4], img_names[5]]
    output_mp4_path = os.path.join(seq_dir, 'imitator-men.mp4')
    make_video(subj_names, img_dir, seq_names, seq_dir, output_mp4_path)
    seq_names = ['preds_view', 'preds_view_front_warp']
    subj_names = [img_names[0], img_names[1], img_names[2]]
    output_mp4_path = os.path.join(seq_dir, 'view-women.mp4')
    make_video(subj_names, img_dir, seq_names, seq_dir, output_mp4_path, fps=5)
    subj_names = [img_names[3], img_names[4], img_names[5]]
    output_mp4_path = os.path.join(seq_dir, 'view-men.mp4')
    make_video(subj_names, img_dir, seq_names, seq_dir, output_mp4_path, fps=5)


if __name__ == "__main__":
    main()
| 34.4375 | 237 | 0.748094 |
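The helpers above shell out with `os.system` and `%`-formatted command strings. An equivalent, quoting-safe pattern builds an argument list for `subprocess` instead; this is an illustrative sketch only (the script name, flags, and paths below mirror the calls above but are treated here as hypothetical inputs):

```python
import subprocess  # would be used to execute the command; see note below


def build_imitator_cmd(output_dir, src_path, tgt_path, load_path):
    """Assemble the run_imitator.py invocation as an argument list, so that
    paths containing spaces or shell metacharacters need no quoting."""
    return [
        "python3", "-W", "ignore", "run_imitator.py",
        "--gpu_ids", "0",
        "--model", "imitator",
        "--output_dir", output_dir,
        "--src_path", src_path,
        "--tgt_path", tgt_path,
        "--has_detector", "--post_tune", "--save_res",
        "--load_path", load_path,
    ]


cmd = build_imitator_cmd("./out", "a.jpg", "./refs/seq", "ckpt.pth")
# subprocess.run(cmd, check=True)  # would execute; left commented in this sketch
```

`subprocess.run(cmd, check=True)` also raises on a non-zero exit code, whereas the `os.system` calls above silently ignore failures.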
0f1f88f8795f5ea258778785d1aee5c343570443 | 12,214 | py | Python | examples/adwords/v201809/shopping/add_shopping_campaign_for_showcase_ads.py | beamc83/python-googleads | 6039d08e2d85850a46a70f24359d362ffde2f7ed | [
"Apache-2.0"
] | 2 | 2019-07-11T13:01:56.000Z | 2019-07-11T13:01:58.000Z | examples/adwords/v201809/shopping/add_shopping_campaign_for_showcase_ads.py | SoungMo/googleads-python-lib | fe86335c416e0571328c0a481c4b0cff863c01d9 | [
"Apache-2.0"
] | null | null | null | examples/adwords/v201809/shopping/add_shopping_campaign_for_showcase_ads.py | SoungMo/googleads-python-lib | fe86335c416e0571328c0a481c4b0cff863c01d9 | [
"Apache-2.0"
] | 1 | 2020-07-19T14:24:05.000Z | 2020-07-19T14:24:05.000Z | #!/usr/bin/env python
#
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This example adds a Shopping campaign for Showcase ads.

The LoadFromStorage method is pulling credentials and properties from a
"googleads.yaml" file. By default, it looks for this file in your home
directory. For more information, see the "Caching authentication information"
section of our README.
"""

import uuid

from googleads import adwords
from googleads import errors


BUDGET_ID = 'INSERT_BUDGET_ID_HERE'
MERCHANT_ID = 'INSERT_MERCHANT_ID_HERE'
EXPANDED_IMAGE_FILEPATH = 'INSERT_PATH_TO_EXPANDED_IMAGE'
COLLAPSED_IMAGE_FILEPATH = 'INSERT_PATH_TO_COLLAPSED_IMAGE'


class ProductPartitionHelper(object):
  """A helper for creating ProductPartition trees."""

  def __init__(self, adgroup_id):
    """Initializer.

    Args:
      adgroup_id: The ID of the AdGroup that we wish to attach the partition
        tree to.
    """
    # The next temporary criterion ID to be used.
    # When creating our tree we need to specify the parent-child relationships
    # between nodes. However, until a criterion has been created on the server
    # we do not have a criterion ID with which to refer to it.
    # Instead we can specify temporary IDs that are specific to a single mutate
    # request. Once the criteria have been created they are assigned an ID as
    # normal and the temporary ID will no longer refer to it.
    # A valid temporary ID is any negative integer.
    self.next_id = -1
    # The set of mutate operations needed to create the current tree.
    self.operations = []
    self.adgroup_id = adgroup_id

  def CreateSubdivision(self, parent=None, value=None):
    """Creates a subdivision node.

    Args:
      parent: The node that should be this node's parent.
      value: The value being partitioned on.

    Returns:
      A new subdivision node.
    """
    division = {
        'xsi_type': 'ProductPartition',
        'partitionType': 'SUBDIVISION',
        'id': str(self.next_id)
    }
    # The root has neither a parent nor a value.
    if parent is not None:
      division['parentCriterionId'] = parent['id']
      division['caseValue'] = value
    adgroup_criterion = {
        'xsi_type': 'BiddableAdGroupCriterion',
        'adGroupId': self.adgroup_id,
        'criterion': division
    }
    self.CreateAddOperation(adgroup_criterion)
    self.next_id -= 1
    return division

  def CreateUnit(self, parent=None, value=None, bid_amount=None):
    """Creates a unit node.

    Args:
      parent: The node that should be this node's parent.
      value: The value being partitioned on.
      bid_amount: The amount to bid for matching products, in micros.

    Returns:
      A new unit node.
    """
    unit = {
        'xsi_type': 'ProductPartition',
        'partitionType': 'UNIT'
    }
    # The root node has neither a parent nor a value.
    if parent is not None:
      unit['parentCriterionId'] = parent['id']
      unit['caseValue'] = value
    if bid_amount is not None and bid_amount > 0:
      # Note: Showcase ads require that the campaign has a ManualCpc
      # BiddingStrategyConfiguration.
      bidding_strategy_configuration = {
          'bids': [{
              'xsi_type': 'CpcBid',
              'bid': {
                  'xsi_type': 'Money',
                  'microAmount': str(bid_amount)
              }
          }]
      }
      adgroup_criterion = {
          'xsi_type': 'BiddableAdGroupCriterion',
          'biddingStrategyConfiguration': bidding_strategy_configuration
      }
    else:
      adgroup_criterion = {
          'xsi_type': 'NegativeAdGroupCriterion'
      }
    adgroup_criterion['adGroupId'] = self.adgroup_id
    adgroup_criterion['criterion'] = unit
    self.CreateAddOperation(adgroup_criterion)
    return unit

  def GetOperations(self):
    """Returns the set of mutate operations needed to create the current tree.

    Returns:
      The set of operations
    """
    return self.operations

  def CreateAddOperation(self, criterion):
    """Creates an AdGroupCriterionOperation for the given criterion.

    Args:
      criterion: The criterion we want to add.
    """
    operation = {
        'operator': 'ADD',
        'operand': criterion
    }
    self.operations.append(operation)
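The comments in `__init__` above describe the temporary-ID scheme: every node created within a single mutate request gets a negative placeholder ID, and children reference their parent by that placeholder until the server assigns real IDs. A self-contained sketch of that scheme, deliberately stripped of the AdWords-specific `xsi_type` plumbing (the helper function and ID values here are illustrative, not part of the googleads library):

```python
def build_tree_operations(adgroup_id, conditions):
    """Build ADD operations for a root SUBDIVISION plus one UNIT per
    condition, linking children to the root via its temporary negative ID."""
    next_id = -1
    ops = []
    root = {'partitionType': 'SUBDIVISION', 'id': str(next_id)}
    ops.append({'operator': 'ADD',
                'operand': {'adGroupId': adgroup_id, 'criterion': root}})
    next_id -= 1  # each further node in this request would get -2, -3, ...
    for cond in conditions:
        unit = {'partitionType': 'UNIT',
                'parentCriterionId': root['id'],  # placeholder, not a real ID
                'caseValue': cond}
        ops.append({'operator': 'ADD',
                    'operand': {'adGroupId': adgroup_id, 'criterion': unit}})
    return ops


ops = build_tree_operations('12345', ['NEW', 'USED', None])
```

Because the placeholder is only meaningful within one mutate call, all parent and child operations must be submitted together, which is why `ProductPartitionHelper` accumulates them in `self.operations` before the single `mutate` request.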

def main(client, budget_id, merchant_id, expanded_image_filepath,
         collapsed_image_filepath):
  try:
    # Create the Shopping Campaign
    campaign = CreateShoppingCampaign(client, budget_id, merchant_id)
    # Create the AdGroup
    adgroup = CreateAdGroup(client, campaign['id'])
    # Create a Showcase Ad
    CreateShowcaseAd(client, adgroup, expanded_image_filepath,
                     collapsed_image_filepath)
    product_partition_tree = CreateProductPartition(client, adgroup['id'])
    print 'Final tree:\n%s' % product_partition_tree
  except errors.GoogleAdsServerFault:
    print 'Failed to create shopping campaign for showcase ads.'
    raise


def CreateShoppingCampaign(client, budget_id, merchant_id):
  """Creates a shopping campaign with the given budget and merchant IDs.

  Args:
    client: an AdWordsClient instance.
    budget_id: the str ID of the budget to be associated with the shopping
      campaign.
    merchant_id: the str ID of the merchant account to be associated with the
      shopping campaign.

  Returns:
    The created Shopping Campaign as a sudsobject.
  """
  campaign_service = client.GetService('CampaignService', 'v201809')

  campaign = {
      'name': 'Shopping campaign #%s' % uuid.uuid4(),
      # The advertisingChannelType is what makes this a shopping campaign
      'advertisingChannelType': 'SHOPPING',
      # Recommendation: Set the campaign to PAUSED when creating it to stop the
      # ads from immediately serving. Set to ENABLED once you've added targeting
      # and the ads are ready to serve.
      'status': 'PAUSED',
      # Set portfolio budget (required)
      'budget': {
          'budgetId': budget_id
      },
      'biddingStrategyConfiguration': {
          'biddingStrategyType': 'MANUAL_CPC'
      },
      'settings': [
          # All shopping campaigns need a ShoppingSetting
          {
              'xsi_type': 'ShoppingSetting',
              'salesCountry': 'US',
              'campaignPriority': '0',
              'merchantId': merchant_id,
              # Set to "True" to enable Local Inventory Ads in your campaign.
              'enableLocal': True
          }
      ]
  }

  campaign_operations = [{
      'operator': 'ADD',
      'operand': campaign
  }]

  campaign = campaign_service.mutate(campaign_operations)['value'][0]

  print ('Campaign with name "%s" and ID "%s" was added.'
         % (campaign['name'], campaign['id']))

  return campaign


def CreateAdGroup(client, campaign_id):
  """Creates an AdGroup for the given shopping campaign ID.

  Args:
    client: an AdWordsClient instance.
    campaign_id: the str ID of a shopping campaign.

  Returns:
    The created AdGroup as a sudsobject.
  """
  ad_group_service = client.GetService('AdGroupService', 'v201809')

  adgroup = {
      # Required: Set the ad group type to SHOPPING_SHOWCASE_ADS
      'adGroupType': 'SHOPPING_SHOWCASE_ADS',
      'campaignId': campaign_id,
      'name': 'AdGroup #%s' % uuid.uuid4(),
      # REQUIRED: Set the ad group's bidding strategy configuration.
      'biddingStrategyConfiguration': {
          # Showcase ads require either ManualCpc or EnhancedCpc.
          'biddingStrategyType': 'MANUAL_CPC',
          # Optional: Set the bids
          'bids': [{
              'xsi_type': 'CpcBid',
              'bid': {
                  'microAmount': 100000
              }
          }]
      }
  }

  adgroup_operations = {
      'operator': 'ADD',
      'operand': adgroup
  }

  # Make the mutate request to add the AdGroup to the Shopping Campaign
  adgroup = ad_group_service.mutate(adgroup_operations)['value'][0]

  print ('AdGroup with name "%s" and ID "%s" was added.'
         % (adgroup['name'], adgroup['id']))

  return adgroup


def CreateShowcaseAd(client, adgroup, expanded_image_filepath,
                     collapsed_image_filepath):
  """Creates a showcase ad for the given AdGroup with the given images.

  Args:
    client: an AdWordsClient instance.
    adgroup: a dict or suds object defining an AdGroup for a Shopping Campaign.
    expanded_image_filepath: a str filepath to a .jpg file that will be used as
      the Showcase Ad's expandedImage.
    collapsed_image_filepath: a str filepath to a .jpg file that will be used as
      the Showcase Ad's collapsedImage.

  Returns:
    The created Showcase Ad as a sudsobject.
  """
  ad_group_ad_service = client.GetService('AdGroupAdService', 'v201809')

  showcase_ad = {
      'adGroupId': adgroup['id'],
      'ad': {
          'xsi_type': 'ShowcaseAd',
          'Ad.Type': 'ShowcaseAd',
          # Required: set the ad's name, final URLs, and display URL.
          'name': 'Showcase ad #%s' % uuid.uuid4(),
          'finalUrls': 'http://example.com/showcase',
          'displayUrl': 'example.com',
          # Required: Set the ad's expanded image.
          'expandedImage': {
              'mediaId': UploadImage(client, expanded_image_filepath)['mediaId']
          },
          # Optional: Set the collapsed image.
          'collapsedImage': {
              'mediaId':
                  UploadImage(client, collapsed_image_filepath)['mediaId']
          }
      }
  }

  ad_operation = {
      'operator': 'ADD',
      'operand': showcase_ad
  }

  # Make the mutate request to add the ProductAd to the AdGroup
  showcase_ad = ad_group_ad_service.mutate([ad_operation])['value'][0]

  print 'ShowcaseAd with ID "%s" was added.' % showcase_ad['ad']['id']

  return showcase_ad


def UploadImage(client, filepath):
  """Uploads a .jpg image with the given filepath via the AdWords MediaService.

  Args:
    client: an AdWordsClient instance.
    filepath: a str filepath to the .jpg file to be uploaded.

  Returns:
    The created Image as a sudsobject.
  """
  media_service = client.GetService('MediaService', 'v201809')

  with open(filepath, 'rb') as image_handle:
    image_data = image_handle.read().decode('utf-8')

  image = [{
      'xsi_type': 'Image',
      'data': image_data,
      'type': 'IMAGE'
  }]

  image = media_service.upload(image)[0]

  return image


def CreateProductPartition(client, adgroup_id):
  """Creates a ProductPartition tree for the given AdGroup ID.

  Args:
    client: an AdWordsClient instance.
    adgroup_id: a str AdGroup ID.

  Returns:
    The ProductPartition tree as a sudsobject.
  """
  ad_group_criterion_service = client.GetService('AdGroupCriterionService',
                                                 'v201809')
  helper = ProductPartitionHelper(adgroup_id)
  root = helper.CreateSubdivision()

  new_product_canonical_condition = {
      'xsi_type': 'ProductCanonicalCondition',
      'condition': 'NEW'
  }

  used_product_canonical_condition = {
      'xsi_type': 'ProductCanonicalCondition',
      'condition': 'USED'
  }

  other_product_canonical_condition = {
      'xsi_type': 'ProductCanonicalCondition',
  }

  helper.CreateUnit(root, new_product_canonical_condition)
  helper.CreateUnit(root, used_product_canonical_condition)
  helper.CreateUnit(root, other_product_canonical_condition)

  result = ad_group_criterion_service.mutate(helper.operations)
  return result['value']


if __name__ == '__main__':
  # Initialize client object.
  adwords_client = adwords.AdWordsClient.LoadFromStorage()
  main(adwords_client, BUDGET_ID, MERCHANT_ID, EXPANDED_IMAGE_FILEPATH,
       COLLAPSED_IMAGE_FILEPATH)
| 30.383085 | 80 | 0.664402 |
3517cc23d413b34aacfb0e6b9f51a521da23bf02 | 328 | py | Python | api/migrations/0004_remove_company_suppliers.py | apigram/jade-api | 1aece29c3109db68897fdf854be431554e7f2863 | [
"Apache-2.0"
] | null | null | null | api/migrations/0004_remove_company_suppliers.py | apigram/jade-api | 1aece29c3109db68897fdf854be431554e7f2863 | [
"Apache-2.0"
] | null | null | null | api/migrations/0004_remove_company_suppliers.py | apigram/jade-api | 1aece29c3109db68897fdf854be431554e7f2863 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.1.3 on 2018-11-09 13:26
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('api', '0003_auto_20181110_0020'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='company',
            name='suppliers',
        ),
    ]
| 18.222222 | 47 | 0.591463 |
d585f1257918c8c823be929a0545044e8a281b53 | 5,411 | py | Python | python/dlxapi/models/user_group_added_event.py | dlens/dlxapi | 189a6519240ce625d7a9cdb89e305a335d2aa045 | [
"MIT"
] | null | null | null | python/dlxapi/models/user_group_added_event.py | dlens/dlxapi | 189a6519240ce625d7a9cdb89e305a335d2aa045 | [
"MIT"
] | 1 | 2020-08-20T17:31:43.000Z | 2020-08-20T17:31:43.000Z | python/dlxapi/models/user_group_added_event.py | dlens/dlxapi | 189a6519240ce625d7a9cdb89e305a335d2aa045 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Decision Lens API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 1.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from dlxapi.configuration import Configuration
class UserGroupAddedEvent(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'id': 'str',
'user': 'User',
'added_by_user': 'User',
'group_id': 'str'
}
attribute_map = {
'id': 'id',
'user': 'user',
'added_by_user': 'addedByUser',
'group_id': 'groupId'
}
def __init__(self, id=None, user=None, added_by_user=None, group_id=None, _configuration=None): # noqa: E501
"""UserGroupAddedEvent - a model defined in Swagger""" # noqa: E501
if _configuration is None:
_configuration = Configuration()
self._configuration = _configuration
self._id = None
self._user = None
self._added_by_user = None
self._group_id = None
self.discriminator = None
if id is not None:
self.id = id
if user is not None:
self.user = user
if added_by_user is not None:
self.added_by_user = added_by_user
if group_id is not None:
self.group_id = group_id
@property
def id(self):
"""Gets the id of this UserGroupAddedEvent. # noqa: E501
:return: The id of this UserGroupAddedEvent. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this UserGroupAddedEvent.
:param id: The id of this UserGroupAddedEvent. # noqa: E501
:type: str
"""
self._id = id
@property
def user(self):
"""Gets the user of this UserGroupAddedEvent. # noqa: E501
:return: The user of this UserGroupAddedEvent. # noqa: E501
:rtype: User
"""
return self._user
@user.setter
def user(self, user):
"""Sets the user of this UserGroupAddedEvent.
:param user: The user of this UserGroupAddedEvent. # noqa: E501
:type: User
"""
self._user = user
@property
def added_by_user(self):
"""Gets the added_by_user of this UserGroupAddedEvent. # noqa: E501
:return: The added_by_user of this UserGroupAddedEvent. # noqa: E501
:rtype: User
"""
return self._added_by_user
@added_by_user.setter
def added_by_user(self, added_by_user):
"""Sets the added_by_user of this UserGroupAddedEvent.
:param added_by_user: The added_by_user of this UserGroupAddedEvent. # noqa: E501
:type: User
"""
self._added_by_user = added_by_user
@property
def group_id(self):
"""Gets the group_id of this UserGroupAddedEvent. # noqa: E501
:return: The group_id of this UserGroupAddedEvent. # noqa: E501
:rtype: str
"""
return self._group_id
@group_id.setter
def group_id(self, group_id):
"""Sets the group_id of this UserGroupAddedEvent.
:param group_id: The group_id of this UserGroupAddedEvent. # noqa: E501
:type: str
"""
self._group_id = group_id
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(UserGroupAddedEvent, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, UserGroupAddedEvent):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, UserGroupAddedEvent):
return True
return self.to_dict() != other.to_dict()
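The generated `to_dict` above walks `swagger_types`, recursing into nested models and lists so that equality and printing work on plain dicts. A minimal, self-contained sketch of that pattern (the `Mini` class and its fields are hypothetical, not part of dlxapi):

```python
class Mini:
    """Toy swagger-style model: declares its attributes in swagger_types and
    serializes by walking that declaration, as UserGroupAddedEvent does."""
    swagger_types = {'id': 'str', 'tags': 'list[str]'}

    def __init__(self, id=None, tags=None):
        self.id = id
        self.tags = tags if tags is not None else []

    def to_dict(self):
        out = {}
        for attr in self.swagger_types:
            value = getattr(self, attr)
            if isinstance(value, list):
                # recurse into nested models inside lists, pass scalars through
                out[attr] = [v.to_dict() if hasattr(v, 'to_dict') else v
                             for v in value]
            elif hasattr(value, 'to_dict'):
                out[attr] = value.to_dict()
            else:
                out[attr] = value
        return out


m = Mini(id='u1', tags=['a', 'b'])
```

Basing `__eq__` on `to_dict()` output, as the generated class does, means two model instances compare equal whenever their serialized forms match, regardless of private attribute names.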
| 26.787129 | 119 | 0.573461 |
9fd9e8d6c48ab4814903e0697be06740d2e153a7 | 22,241 | py | Python | anyex/async/bitmex.py | ttwishing/anyex | cfd1f2f04ab992b790add4843aafff91e5773cbf | [
"MIT"
] | null | null | null | anyex/async/bitmex.py | ttwishing/anyex | cfd1f2f04ab992b790add4843aafff91e5773cbf | [
"MIT"
] | null | null | null | anyex/async/bitmex.py | ttwishing/anyex | cfd1f2f04ab992b790add4843aafff91e5773cbf | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# PLEASE DO NOT EDIT THIS FILE, IT IS GENERATED AND WILL BE OVERWRITTEN:
# https://github.com/anyex/anyex/blob/master/CONTRIBUTING.md#how-to-contribute-code

from anyex.async.base.exchange import Exchange
import json
from anyex.base.errors import ExchangeError
from anyex.base.errors import AuthenticationError
from anyex.base.errors import OrderNotFound
from anyex.base.errors import DDoSProtection


class bitmex (Exchange):

    def describe(self):
        return self.deep_extend(super(bitmex, self).describe(), {
            'id': 'bitmex',
            'name': 'BitMEX',
            'countries': 'SC',  # Seychelles
            'version': 'v1',
            'userAgent': None,
            'rateLimit': 2000,
            'has': {
                'CORS': False,
                'fetchOHLCV': True,
                'withdraw': True,
                'editOrder': True,
                'fetchOrder': True,
                'fetchOrders': True,
                'fetchOpenOrders': True,
                'fetchClosedOrders': True,
            },
            'timeframes': {
                '1m': '1m',
                '5m': '5m',
                '1h': '1h',
                '1d': '1d',
            },
            'urls': {
                'test': 'https://testnet.bitmex.com',
                'logo': 'https://user-images.githubusercontent.com/1294454/27766319-f653c6e6-5ed4-11e7-933d-f0bc3699ae8f.jpg',
                'api': 'https://www.bitmex.com',
                'www': 'https://www.bitmex.com',
                'doc': [
                    'https://www.bitmex.com/app/apiOverview',
                    'https://github.com/BitMEX/api-connectors/tree/master/official-http',
                ],
                'fees': 'https://www.bitmex.com/app/fees',
            },
            'api': {
                'public': {
                    'get': [
                        'announcement',
                        'announcement/urgent',
                        'funding',
                        'instrument',
                        'instrument/active',
                        'instrument/activeAndIndices',
                        'instrument/activeIntervals',
                        'instrument/compositeIndex',
                        'instrument/indices',
                        'insurance',
                        'leaderboard',
                        'liquidation',
                        'orderBook',
                        'orderBook/L2',
                        'quote',
                        'quote/bucketed',
                        'schema',
                        'schema/websocketHelp',
                        'settlement',
                        'stats',
                        'stats/history',
                        'trade',
                        'trade/bucketed',
                    ],
                },
                'private': {
                    'get': [
                        'apiKey',
                        'chat',
                        'chat/channels',
                        'chat/connected',
                        'execution',
                        'execution/tradeHistory',
                        'notification',
                        'order',
                        'position',
                        'user',
                        'user/affiliateStatus',
                        'user/checkReferralCode',
                        'user/commission',
                        'user/depositAddress',
                        'user/margin',
                        'user/minWithdrawalFee',
                        'user/wallet',
                        'user/walletHistory',
                        'user/walletSummary',
                    ],
                    'post': [
                        'apiKey',
                        'apiKey/disable',
'apiKey/enable',
'chat',
'order',
'order/bulk',
'order/cancelAllAfter',
'order/closePosition',
'position/isolate',
'position/leverage',
'position/riskLimit',
'position/transferMargin',
'user/cancelWithdrawal',
'user/confirmEmail',
'user/confirmEnableTFA',
'user/confirmWithdrawal',
'user/disableTFA',
'user/logout',
'user/logoutAll',
'user/preferences',
'user/requestEnableTFA',
'user/requestWithdrawal',
],
'put': [
'order',
'order/bulk',
'user',
],
'delete': [
'apiKey',
'order',
'order/all',
],
},
},
})
async def fetch_markets(self):
markets = await self.publicGetInstrumentActiveAndIndices()
result = []
for p in range(0, len(markets)):
market = markets[p]
active = (market['state'] != 'Unlisted')
id = market['symbol']
base = market['underlying']
quote = market['quoteCurrency']
type = None
future = False
prediction = False
basequote = base + quote
base = self.common_currency_code(base)
quote = self.common_currency_code(quote)
swap = (id == basequote)
symbol = id
if swap:
type = 'swap'
symbol = base + '/' + quote
elif id.find('B_') >= 0:
prediction = True
type = 'prediction'
else:
future = True
type = 'future'
precision = {
'amount': None,
'price': None,
}
if market['lotSize']:
precision['amount'] = self.precision_from_string(self.truncate_to_string(market['lotSize'], 16))
if market['tickSize']:
precision['price'] = self.precision_from_string(self.truncate_to_string(market['tickSize'], 16))
result.append({
'id': id,
'symbol': symbol,
'base': base,
'quote': quote,
'active': active,
'precision': precision,
'limits': {
'amount': {
'min': market['lotSize'],
'max': market['maxOrderQty'],
},
'price': {
'min': market['tickSize'],
'max': market['maxPrice'],
},
},
'taker': market['takerFee'],
'maker': market['makerFee'],
'type': type,
'spot': False,
'swap': swap,
'future': future,
'prediction': prediction,
'info': market,
})
return result
async def fetch_balance(self, params={}):
await self.load_markets()
response = await self.privateGetUserMargin({'currency': 'all'})
result = {'info': response}
for b in range(0, len(response)):
balance = response[b]
currency = balance['currency'].upper()
currency = self.common_currency_code(currency)
account = {
'free': balance['availableMargin'],
'used': 0.0,
'total': balance['marginBalance'],
}
if currency == 'BTC':
account['free'] = account['free'] * 0.00000001
account['total'] = account['total'] * 0.00000001
account['used'] = account['total'] - account['free']
result[currency] = account
return self.parse_balance(result)
async def fetch_order_book(self, symbol, limit=None, params={}):
await self.load_markets()
market = self.market(symbol)
request = {
'symbol': market['id'],
}
if limit is not None:
request['depth'] = limit
orderbook = await self.publicGetOrderBookL2(self.extend(request, params))
result = {
'bids': [],
'asks': [],
'timestamp': None,
'datetime': None,
'nonce': None,
}
for o in range(0, len(orderbook)):
order = orderbook[o]
side = 'asks' if (order['side'] == 'Sell') else 'bids'
amount = order['size']
price = order['price']
result[side].append([price, amount])
result['bids'] = self.sort_by(result['bids'], 0, True)
result['asks'] = self.sort_by(result['asks'], 0)
return result
async def fetch_order(self, id, symbol=None, params={}):
filter = {'filter': {'orderID': id}}
result = await self.fetch_orders(symbol, None, None, self.deep_extend(filter, params))
numResults = len(result)
if numResults == 1:
return result[0]
raise OrderNotFound(self.id + ': The order ' + id + ' not found.')
async def fetch_orders(self, symbol=None, since=None, limit=None, params={}):
await self.load_markets()
market = None
request = {}
if symbol is not None:
market = self.market(symbol)
request['symbol'] = market['id']
if since is not None:
request['startTime'] = self.iso8601(since)
if limit is not None:
request['count'] = limit
request = self.deep_extend(request, params)
# why the hassle? urlencode in python is kinda broken for nested dicts.
# E.g. self.urlencode({"filter": {"open": True}}) will return "filter={'open':+True}"
# Bitmex doesn't like that. Hence resorting to self hack.
if 'filter' in request:
request['filter'] = self.json(request['filter'])
response = await self.privateGetOrder(request)
return self.parse_orders(response, market, since, limit)
async def fetch_open_orders(self, symbol=None, since=None, limit=None, params={}):
filter_params = {'filter': {'open': True}}
return await self.fetch_orders(symbol, since, limit, self.deep_extend(filter_params, params))
async def fetch_closed_orders(self, symbol=None, since=None, limit=None, params={}):
# Bitmex barfs if you set 'open': False in the filter...
orders = await self.fetch_orders(symbol, since, limit, params)
return self.filter_by(orders, 'status', 'closed')
async def fetch_ticker(self, symbol, params={}):
await self.load_markets()
market = self.market(symbol)
if not market['active']:
raise ExchangeError(self.id + ': symbol ' + symbol + ' is delisted')
request = self.extend({
'symbol': market['id'],
'binSize': '1d',
'partial': True,
'count': 1,
'reverse': True,
}, params)
quotes = await self.publicGetQuoteBucketed(request)
quotesLength = len(quotes)
quote = quotes[quotesLength - 1]
tickers = await self.publicGetTradeBucketed(request)
ticker = tickers[0]
timestamp = self.milliseconds()
open = self.safe_float(ticker, 'open')
close = self.safe_float(ticker, 'close')
change = close - open
return {
'symbol': symbol,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'high': float(ticker['high']),
'low': float(ticker['low']),
'bid': float(quote['bidPrice']),
'bidVolume': None,
'ask': float(quote['askPrice']),
'askVolume': None,
'vwap': float(ticker['vwap']),
'open': open,
'close': close,
'last': close,
'previousClose': None,
'change': change,
'percentage': change / open * 100,
'average': self.sum(open, close) / 2,
'baseVolume': float(ticker['homeNotional']),
'quoteVolume': float(ticker['foreignNotional']),
'info': ticker,
}
def parse_ohlcv(self, ohlcv, market=None, timeframe='1m', since=None, limit=None):
timestamp = self.parse8601(ohlcv['timestamp']) - self.parse_timeframe(timeframe) * 1000
return [
timestamp,
ohlcv['open'],
ohlcv['high'],
ohlcv['low'],
ohlcv['close'],
ohlcv['volume'],
]
async def fetch_ohlcv(self, symbol, timeframe='1m', since=None, limit=None, params={}):
await self.load_markets()
# send JSON key/value pairs, such as {"key": "value"}
# filter by individual fields and do advanced queries on timestamps
# filter = {'key': 'value'}
# send a bare series(e.g. XBU) to nearest expiring contract in that series
# you can also send a timeframe, e.g. XBU:monthly
# timeframes: daily, weekly, monthly, quarterly, and biquarterly
market = self.market(symbol)
request = {
'symbol': market['id'],
'binSize': self.timeframes[timeframe],
'partial': True, # True == include yet-incomplete current bins
# 'filter': filter, # filter by individual fields and do advanced queries
# 'columns': [], # will return all columns if omitted
# 'start': 0, # starting point for results(wtf?)
# 'reverse': False, # True == newest first
# 'endTime': '', # ending date filter for results
}
if limit is not None:
request['count'] = limit # default 100, max 500
# if since is not set, they will return candles starting from 2017-01-01
if since is not None:
ymdhms = self.ymdhms(since)
ymdhm = ymdhms[0:16]
request['startTime'] = ymdhm # starting date filter for results
response = await self.publicGetTradeBucketed(self.extend(request, params))
return self.parse_ohlcvs(response, market, timeframe, since, limit)
def parse_trade(self, trade, market=None):
timestamp = self.parse8601(trade['timestamp'])
symbol = None
if not market:
if 'symbol' in trade:
market = self.markets_by_id[trade['symbol']]
if market:
symbol = market['symbol']
return {
'id': trade['trdMatchID'],
'info': trade,
'timestamp': timestamp,
'datetime': self.iso8601(timestamp),
'symbol': symbol,
'order': None,
'type': None,
'side': trade['side'].lower(),
'price': trade['price'],
'amount': trade['size'],
}
def parse_order_status(self, status):
statuses = {
'new': 'open',
'partiallyfilled': 'open',
'filled': 'closed',
'canceled': 'canceled',
'rejected': 'rejected',
'expired': 'expired',
}
return self.safe_string(statuses, status.lower())
def parse_order(self, order, market=None):
status = self.safe_value(order, 'ordStatus')
if status is not None:
status = self.parse_order_status(status)
symbol = None
if market:
symbol = market['symbol']
else:
id = order['symbol']
if id in self.markets_by_id:
market = self.markets_by_id[id]
symbol = market['symbol']
datetime_value = None
timestamp = None
iso8601 = None
if 'timestamp' in order:
datetime_value = order['timestamp']
elif 'transactTime' in order:
datetime_value = order['transactTime']
if datetime_value is not None:
timestamp = self.parse8601(datetime_value)
iso8601 = self.iso8601(timestamp)
price = self.safe_float(order, 'price')
amount = float(order['orderQty'])
filled = self.safe_float(order, 'cumQty', 0.0)
remaining = max(amount - filled, 0.0)
cost = None
if price is not None:
if filled is not None:
cost = price * filled
result = {
'info': order,
'id': str(order['orderID']),
'timestamp': timestamp,
'datetime': iso8601,
'lastTradeTimestamp': None,
'symbol': symbol,
'type': order['ordType'].lower(),
'side': order['side'].lower(),
'price': price,
'amount': amount,
'cost': cost,
'filled': filled,
'remaining': remaining,
'status': status,
'fee': None,
}
return result
async def fetch_trades(self, symbol, since=None, limit=None, params={}):
await self.load_markets()
market = self.market(symbol)
request = {
'symbol': market['id'],
}
if since is not None:
request['startTime'] = self.iso8601(since)
if limit is not None:
request['count'] = limit
response = await self.publicGetTrade(self.extend(request, params))
return self.parse_trades(response, market)
async def create_order(self, symbol, type, side, amount, price=None, params={}):
await self.load_markets()
request = {
'symbol': self.market_id(symbol),
'side': self.capitalize(side),
'orderQty': amount,
'ordType': self.capitalize(type),
}
if price is not None:
request['price'] = price
response = await self.privatePostOrder(self.extend(request, params))
order = self.parse_order(response)
id = order['id']
self.orders[id] = order
return self.extend({'info': response}, order)
async def edit_order(self, id, symbol, type, side, amount=None, price=None, params={}):
await self.load_markets()
request = {
'orderID': id,
}
if amount is not None:
request['orderQty'] = amount
if price is not None:
request['price'] = price
response = await self.privatePutOrder(self.extend(request, params))
order = self.parse_order(response)
self.orders[order['id']] = order
return self.extend({'info': response}, order)
async def cancel_order(self, id, symbol=None, params={}):
await self.load_markets()
response = await self.privateDeleteOrder(self.extend({'orderID': id}, params))
order = response[0]
error = self.safe_string(order, 'error')
if error is not None:
if error.find('Unable to cancel order due to existing state') >= 0:
raise OrderNotFound(self.id + ' cancelOrder() failed: ' + error)
order = self.parse_order(order)
self.orders[order['id']] = order
return self.extend({'info': response}, order)
def is_fiat(self, currency):
if currency == 'EUR':
return True
if currency == 'PLN':
return True
return False
async def withdraw(self, currency, amount, address, tag=None, params={}):
self.check_address(address)
await self.load_markets()
if currency != 'BTC':
            raise ExchangeError(self.id + ' supports BTC withdrawals only, other currencies coming soon...')
request = {
'currency': 'XBt', # temporarily
'amount': amount,
'address': address,
            # 'otpToken': '123456', # required if two-factor auth (OTP) is enabled
# 'fee': 0.001, # bitcoin network fee
}
response = await self.privatePostUserRequestWithdrawal(self.extend(request, params))
return {
'info': response,
'id': response['transactID'],
}
def handle_errors(self, code, reason, url, method, headers, body):
if code == 429:
raise DDoSProtection(self.id + ' ' + body)
if code >= 400:
if body:
if body[0] == '{':
response = json.loads(body)
if 'error' in response:
if 'message' in response['error']:
message = self.safe_value(response['error'], 'message')
if message is not None:
if message == 'Invalid API Key.':
raise AuthenticationError(self.id + ' ' + self.json(response))
# stub code, need proper handling
raise ExchangeError(self.id + ' ' + self.json(response))
def nonce(self):
return self.milliseconds()
def sign(self, path, api='public', method='GET', params={}, headers=None, body=None):
query = '/api' + '/' + self.version + '/' + path
if method != 'PUT':
if params:
query += '?' + self.urlencode(params)
url = self.urls['api'] + query
if api == 'private':
self.check_required_credentials()
nonce = str(self.nonce())
auth = method + query + nonce
if method == 'POST' or method == 'PUT':
if params:
body = self.json(params)
auth += body
headers = {
'Content-Type': 'application/json',
'api-nonce': nonce,
'api-key': self.apiKey,
'api-signature': self.hmac(self.encode(auth), self.encode(self.secret)),
}
return {'url': url, 'method': method, 'body': body, 'headers': headers}
| 39.087873 | 126 | 0.48568 |
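The `sign` method above authenticates private calls by HMAC-signing the concatenation of HTTP method, request path, nonce, and (for POST/PUT) the JSON body with the API secret. A minimal sketch of that signature computation, assuming SHA-256 with a hex digest as in the exchange client; the key and nonce values here are made up for illustration:

```python
import hashlib
import hmac


def bitmex_style_signature(secret: str, method: str, path: str,
                           nonce: str, body: str = "") -> str:
    """Hex HMAC-SHA256 over method + path + nonce + body."""
    message = (method + path + nonce + body).encode()
    return hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()


sig = bitmex_style_signature("my-secret", "GET", "/api/v1/order", "1518064236")
print(len(sig))  # 64 (hex characters of a SHA-256 digest)
```

The resulting digest is what goes into the `api-signature` header, alongside `api-nonce` and `api-key`; the server recomputes the same HMAC and rejects the request on mismatch.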
0a612a4b42b21c535508de762fdc327ac40b7d00 | 5,977 | py | Python | leads/views.py | rakibul-islam-raju/django-crm | bd2fe84e6662b2d2425d890c64a9f15c3b76127a | [
"MIT"
] | null | null | null | leads/views.py | rakibul-islam-raju/django-crm | bd2fe84e6662b2d2425d890c64a9f15c3b76127a | [
"MIT"
] | null | null | null | leads/views.py | rakibul-islam-raju/django-crm | bd2fe84e6662b2d2425d890c64a9f15c3b76127a | [
"MIT"
] | null | null | null | from django.core.mail import send_mail
from django.shortcuts import render
from django.views import generic
from django.urls import reverse_lazy
from django.contrib.auth.mixins import LoginRequiredMixin
from agents.mixins import OrganisorAndLoginRequiredMixin
from .forms import *
from .models import *
class LandingPageView(generic.TemplateView):
template_name = 'landing-page.html'
class LeadCreateView(LoginRequiredMixin, generic.CreateView):
form_class = LeadCreateForm
template_name = 'lead/lead-create.html'
success_url = reverse_lazy('lead:lead-list')
def form_valid(self, form):
# TODO: send email
return super(LeadCreateView, self).form_valid(form)
class LeadListView(LoginRequiredMixin, generic.ListView):
template_name = 'lead/lead-list.html'
context_object_name = 'leads'
def get_queryset(self):
user = self.request.user
if user.is_organisor:
queryset = Lead.objects.filter(organization=user.userprofile)
else:
queryset = Lead.objects.filter(agent=user.agent)
return queryset
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
if self.request.user.is_organisor:
queryset = Lead.objects.filter(
organization=self.request.user.userprofile,
agent__isnull=True
)
context.update({
'unassigned_list': queryset
})
return context
class LeadDetailView(LoginRequiredMixin, generic.DetailView):
    template_name = 'lead/lead-detail.html'
context_object_name = 'lead'
def get_queryset(self):
user = self.request.user
if user.is_organisor:
queryset = Lead.objects.filter(organization=user.userprofile)
else:
queryset = Lead.objects.filter(organization=user.agent.organization)
return queryset
class LeadUpdateView(OrganisorAndLoginRequiredMixin, generic.UpdateView):
    template_name = 'lead/lead-update.html'
form_class = LeadCreateForm
context_object_name = 'lead'
def get_queryset(self):
return Lead.objects.filter(organization=self.request.user.userprofile)
class LeadDeleteView(OrganisorAndLoginRequiredMixin, generic.DeleteView):
    template_name = 'delete.html'
success_url = reverse_lazy('lead:lead-list')
def get_queryset(self):
return Lead.objects.filter(organization=self.request.user.userprofile)
class AssignAgentView(OrganisorAndLoginRequiredMixin, generic.FormView):
template_name = 'lead/agent-assign.html'
form_class = AssignAgentForm
success_url = reverse_lazy('lead:lead-list')
def get_form_kwargs(self, **kwargs):
kwargs = super(AssignAgentView, self).get_form_kwargs(**kwargs)
kwargs.update({
"request": self.request
})
return kwargs
def form_valid(self, form):
agent = form.cleaned_data['agent']
lead = Lead.objects.get(id=self.kwargs['pk'])
lead.agent = agent
lead.save()
return super(AssignAgentView, self).form_valid(form)
class CategoryListView(LoginRequiredMixin, generic.ListView):
template_name = 'lead/categories.html'
context_object_name = 'categories'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
user = self.request.user
if user.is_organisor:
queryset = Lead.objects.filter(organization=user.userprofile)
else:
queryset = Lead.objects.filter(organization=user.agent.organization)
context.update({
'unassigned_lead_count': queryset.filter(category__isnull=True).count()
})
return context
def get_queryset(self):
user = self.request.user
if user.is_organisor:
queryset = Category.objects.filter(organization=user.userprofile)
else:
queryset = Category.objects.filter(organization=user.agent.organization)
return queryset
class CategoryDetailView(LoginRequiredMixin, generic.DetailView):
template_name = 'lead/categoriy-detail.html'
context_object_name = 'category'
# def get_context_data(self, **kwargs):
# context = super().get_context_data(**kwargs)
# context.update({
# 'leads': self.get_object().leads.all()
# })
# return context
def get_queryset(self):
user = self.request.user
if user.is_organisor:
queryset = Category.objects.filter(organization=user.userprofile)
else:
queryset = Category.objects.filter(organization=user.agent.organization)
return queryset
# TODO: Category update view
class UnCategorisedLeads(LoginRequiredMixin, generic.ListView):
template_name = 'lead/uncategorised-leads.html'
context_object_name = 'leads'
# def get_context_data(self, **kwargs):
# context = super().get_context_data(**kwargs)
# user = self.request.user
# if user.is_organisor:
# queryset = Lead.objects.filter(organization=user.userprofile)
# else:
# queryset = Lead.objects.filter(organization=user.agent.organization)
# context.update({
# 'unassigned_lead_count': queryset.filter(category__isnull=True).count()
# })
# return context
def get_queryset(self):
user = self.request.user
if user.is_organisor:
queryset = Lead.objects.filter(organization=user.userprofile)
else:
queryset = Lead.objects.filter(organization=user.agent.organization)
# user = self.request.user
# if user.is_organisor:
# queryset = Lead.objects.filter(organization=user.userprofile)
# else:
# queryset = Lead.objects.filter(agent=user.agent)
queryset = queryset.filter(category__isnull=True)
return queryset
| 32.483696 | 85 | 0.67124 |
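The views above repeat one access rule in every `get_queryset`: organisors see all leads in their organization, while agents see only the leads assigned to them. Stripped of the Django ORM, that branching reduces to a plain filter; the following sketch uses illustrative names, not the app's actual models:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PlainLead:
    organization: str
    agent: Optional[str] = None  # username of the assigned agent, if any


def visible_leads(leads: List[PlainLead], *, is_organisor: bool,
                  organization: str,
                  agent: Optional[str] = None) -> List[PlainLead]:
    """Mirror the get_queryset() branching: organisors filter by
    organization, agents filter by assignment."""
    if is_organisor:
        return [ld for ld in leads if ld.organization == organization]
    return [ld for ld in leads if ld.agent == agent]


leads = [PlainLead("acme", "bob"), PlainLead("acme"), PlainLead("globex", "eve")]
print(len(visible_leads(leads, is_organisor=True, organization="acme")))               # 2
print(len(visible_leads(leads, is_organisor=False, organization="acme", agent="bob")))  # 1
```

In the real views the same rule appears with `Lead.objects.filter(organization=...)` versus `Lead.objects.filter(agent=...)`, plus an extra `agent__isnull=True` filter for the unassigned list.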
7dbc20dbf6ce2034b9eb0a29872cfaaedaceae83 | 482 | py | Python | tests/bootstrap5_utilities/test_models.py | crydotsnake/djangocms-boostrap5 | d9693eca62b5c04f150b0231c668564ea2ae24b4 | [
"BSD-3-Clause"
] | 4 | 2021-09-10T10:43:15.000Z | 2022-02-15T08:53:00.000Z | tests/bootstrap5_utilities/test_models.py | crydotsnake/djangocms-boostrap5 | d9693eca62b5c04f150b0231c668564ea2ae24b4 | [
"BSD-3-Clause"
] | 3 | 2021-09-27T07:45:29.000Z | 2022-02-02T18:37:25.000Z | tests/bootstrap5_utilities/test_models.py | crydotsnake/djangocms-boostrap5 | d9693eca62b5c04f150b0231c668564ea2ae24b4 | [
"BSD-3-Clause"
] | 3 | 2021-09-20T16:36:32.000Z | 2021-12-17T10:36:27.000Z | from django.test import TestCase
from djangocms_bootstrap5.contrib.bootstrap5_utilities.models import (
Bootstrap5Spacing,
)
class B5UtilitiesModelTestCase(TestCase):
def test_instance(self):
instance = Bootstrap5Spacing.objects.create()
self.assertEqual(str(instance), "1")
self.assertEqual(instance.get_short_description(), "(.m-0)")
instance.space_device = "md"
self.assertEqual(instance.get_short_description(), "(.m-md-0)")
| 28.352941 | 71 | 0.717842 |
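The assertions above expect Bootstrap spacing utility classes of the form `.m-0` (no breakpoint) and `.m-md-0` (with a `md` breakpoint inserted in the middle). The formatting rule being tested can be sketched as a plain function; this is an illustrative reimplementation, not the plugin's actual code:

```python
def spacing_class(prop: str = "m", size: str = "0", device: str = "") -> str:
    """Build a Bootstrap utility class like .m-0 or .m-md-0."""
    if device:
        # Breakpoint goes between the property and the size.
        return ".{}-{}-{}".format(prop, device, size)
    return ".{}-{}".format(prop, size)


print(spacing_class())             # .m-0
print(spacing_class(device="md"))  # .m-md-0
```

Setting `space_device = "md"` in the test flips the model's short description from the first form to the second.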
f74161f3def9e3e8308d97b48187ded2dee83717 | 57 | py | Python | annie/blueprints/playground/__init__.py | hno2/emlie | 6ccb2a55c569e9384beecd14e6272dbc1a5bddf3 | [
"MIT"
] | null | null | null | annie/blueprints/playground/__init__.py | hno2/emlie | 6ccb2a55c569e9384beecd14e6272dbc1a5bddf3 | [
"MIT"
] | 46 | 2021-05-10T09:59:40.000Z | 2021-09-21T07:39:49.000Z | annie/blueprints/playground/__init__.py | hno2/emlie | 6ccb2a55c569e9384beecd14e6272dbc1a5bddf3 | [
"MIT"
] | null | null | null | from annie.blueprints.playground.views import playground
| 28.5 | 56 | 0.877193 |
9f9861200b2e8a98ecdbb7c884088a6c2a87d0ec | 7,503 | py | Python | src/brisca_game_ar.py | narcispr/pyBrisca | 57b80ac88a4f68420d7989c8a82e94690282ce35 | [
"MIT"
] | 2 | 2019-10-22T11:46:32.000Z | 2020-06-05T20:40:09.000Z | src/brisca_game_ar.py | narcispr/pyBrisca | 57b80ac88a4f68420d7989c8a82e94690282ce35 | [
"MIT"
] | null | null | null | src/brisca_game_ar.py | narcispr/pyBrisca | 57b80ac88a4f68420d7989c8a82e94690282ce35 | [
"MIT"
] | null | null | null | import time
import random
import cv2
from cv2 import aruco
import pyttsx3
from card_utils import Stack
from brisca_utils import check_victory_hand, BriscaDeck
from brisca_players import BriscaPlayerSimpleAI
def get_marker_id():
cam = cv2.VideoCapture(0)
num_frames = 0
aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_100)
parameters = aruco.DetectorParameters_create()
card_id = -1
while num_frames < 125:
ret, frame = cam.read()
if not ret:
break
k = cv2.waitKey(1)
if k % 256 == 27:
# ESC pressed
print("Escape hit, closing...")
break
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
num_frames += 1
if ids is not None:
card_id = ids[0][0]
break
cam.release()
cv2.destroyAllWindows()
return card_id
class BriscaGameAR:
def __init__(self, names, player_idx, player):
assert len(names) <= 4, '2, 3 or 4 players only!'
assert player_idx < len(names), 'Invalid players index!'
self.players_name = names
self.player_idx = player_idx
self.player = player
self.next_player = 0
self.victory_card = None
self.player_points = list()
for j in range(len(names)):
self.player_points.append(0)
self.list_of_seen_cards = list()
self.engine = pyttsx3.init()
self.engine.setProperty('voice', 'catalan')
def announce(self, text):
if isinstance(text, list):
            text = random.choice(text)  # pick one of the announcement variants at random
print(text)
self.engine.say(text)
self.engine.runAndWait()
time.sleep(0.5)
def get_card(self):
card_id = -1
while card_id <= 0:
card_id = get_marker_id()
if card_id in self.list_of_seen_cards:
card_id = -1
else:
self.list_of_seen_cards.append(card_id)
card = BriscaDeck.get_card_id(card_id)
return card
def player_order(self):
order = list()
for j in range(self.next_player, len(self.players_name)):
order.append(j)
for j in range(0, self.next_player):
order.append(j)
return order
def game(self):
confirmed = False
self.announce(['Hola! Soc en {}, comencem?'.format(self.players_name[self.player_idx]),
'Hem dic {}, fem una brisca?'.format(self.players_name[self.player_idx]),
'Soc en {}, jugem a la brisca?'.format(self.players_name[self.player_idx])])
while not confirmed:
self.announce(["Ensenya'm el trumfo...", "De que va la partida?"])
self.victory_card = self.get_card()
self.announce('El trumfo és el {}?'.format(self.victory_card))
answ = input("El trumfo es {}? (S/N)".format(self.victory_card))
if answ == 's' or answ == 'S' or answ == 'si' or answ == 'SI' or answ == 'Si':
confirmed = True
self.list_of_seen_cards.pop()
self.player.set_up(self.victory_card.suit, len(self.players_name))
self.announce(['Reparteix-me 3 cartes, siusplau!', "Dona'm la ma inicial"])
for j in range(3):
self.player.receive_card(self.get_card())
if j < 2:
self.announce(['La següent...', 'Una més', 'Una altra...'])
for j in range(int(48 / len(self.players_name))):
order = self.player_order()
table = Stack()
for p in order:
if p == self.player_idx:
idx, played_card = self.player.play(table)
                    self.announce(['Agafeu la carta en posició {}, que és un {}.'.format(idx + 1, played_card),
'Jugo la carta {} que està en posició {}'.format(played_card, idx + 1),
'Agafa la carta número {} que és un {}'.format(idx + 1, played_card)])
else:
self.announce(['Ensenyem la carta que vols jugar {}.'.format(self.players_name[p]),
'Que jugues {}?'.format(self.players_name[p]),
'Et toca {}. Que tires?'.format(self.players_name[p]),
"Va {}, Ensenya'm la carta".format(self.players_name[p]),
'Següent carta siusplau',
'{}, vas tu.'.format(self.players_name[p]),
'{}, et toca.'.format(self.players_name[p])])
played_card = self.get_card()
table.add(played_card, self.players_name[p])
print('El jugador {} baixa la carta {}.'.format(self.players_name[p], played_card))
owner, card, points = check_victory_hand(table, self.victory_card.suit)
self.next_player = self.players_name.index(owner)
self.player_points[self.next_player] += points
if points > 0 and owner != self.players_name[self.player_idx]:
self.announce(["El jugador {} ha guanyat la ma i ha fet {} punts".format(owner, points),
"{} punts pel jugador {}".format(points, owner)])
elif points == 0 and owner != self.players_name[self.player_idx]:
self.announce(["Guanya el jugador {}".format(owner),
"Palla pel burro {}".format(owner)])
elif owner == self.players_name[self.player_idx]:
self.announce(["Aquesta ma per mi",
"Guanyo jo",
"{} punts per mi".format(points)])
if j < int(48 / len(self.players_name)) - 3:
self.announce(['Donem una carta i col·locala en posició 3',
'Carta siusplau',
"M'acosteu la meva carta siusplau?"])
self.player.receive_card(self.get_card())
for j, p in enumerate(self.player_points):
print('{}: {} punts'.format(self.players_name[j], p))
a = max(self.player_points)
idx = self.player_points.index(a)
if idx == self.player_idx:
self.announce(['Us he fotut una pallisa! Ha ha ha ha!!!',
'Ara us he guanyat!',
'Aquesta partida la guanyo jo!'])
else:
self.announce(['Ha guanyat el jugador {}. Felicitats!'.format(self.players_name[idx]),
'Molt bé {}, has guanyat!'.format(self.players_name[idx]),
                           'Aquesta partida la guanya el jugador {}.'.format(self.players_name[idx]), ])
if __name__ == '__main__':
num_players = int(input('Quats serem a més de jo? '))
players_name = list()
if 0 < num_players < 4:
for i in range(num_players):
nom = input('Nom jugador/a {}: '.format(i + 1))
players_name.append(nom)
nom_ai = input('Tria el meu nom: ')
params = dict()
params['victory_suit_penalty'] = 3.5
p1 = BriscaPlayerSimpleAI(nom_ai, params)
players_name.append(nom_ai)
brisca = BriscaGameAR(players_name, len(players_name) - 1, p1)
brisca.game()
else:
print('Hem de ser entre 2 i 4 jugadors!')
| 41.916201 | 111 | 0.547114 |
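`player_order` above rotates the seat list so that each trick starts from the winner of the previous one (`self.next_player`). The two explicit loops can be written as two `range` slices; a standalone sketch of that method:

```python
def player_order(next_player: int, num_players: int) -> list:
    """Seat indices starting at next_player and wrapping around the table."""
    return list(range(next_player, num_players)) + list(range(0, next_player))


print(player_order(2, 4))  # [2, 3, 0, 1]
print(player_order(0, 3))  # [0, 1, 2]
```

After `check_victory_hand` names the trick winner, the game sets `next_player` to that seat, so the rotation changes every hand.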
eff693c41ffad375bc6f4ed799312f1f1196d14e | 9,067 | py | Python | src/minterpy/polynomials/lagrange_polynomial.py | karanprime/minterpy | 75d8976b2ddcf79ebaa29b4cb80691ca02a6d180 | [
"MIT"
] | 13 | 2021-11-30T17:52:45.000Z | 2021-12-09T10:05:20.000Z | src/minterpy/polynomials/lagrange_polynomial.py | karanprime/minterpy | 75d8976b2ddcf79ebaa29b4cb80691ca02a6d180 | [
"MIT"
] | 2 | 2021-11-30T18:31:22.000Z | 2022-02-10T10:13:37.000Z | src/minterpy/polynomials/lagrange_polynomial.py | karanprime/minterpy | 75d8976b2ddcf79ebaa29b4cb80691ca02a6d180 | [
"MIT"
] | 6 | 2021-11-30T18:17:26.000Z | 2022-02-18T17:38:27.000Z | """
LagrangePolynomial class
"""
from typing import Any, Optional
import numpy as np
import minterpy
from minterpy.global_settings import ARRAY
from ..core import Grid, MultiIndexSet
from ..core.ABC import MultivariatePolynomialSingleABC
from ..core.utils import insert_lexicographically
from ..core.verification import verify_domain
from .canonical_polynomial import _match_dims, _matching_internal_domain
__all__ = ["LagrangePolynomial"]
def dummy(x: Optional[Any] = None) -> None:
"""Placeholder function.
.. warning::
This feature is not implemented yet!
"""
raise NotImplementedError("This feature is not implemented yet.")
# TODO: part of polynomial utils?
# TODO: optimize with numba
def _union_of_exponents(exp1, exp2):
"""Return union of exponents.
:param exp1: First set of exponents.
:type exp1: np.ndarray
:param exp2: Second set of exponents.
:type exp2: np.ndarray
:return: Return the union of two ``multi_indices`` along with a mapping of indices from the
resultant set to the exp1 and exp2.
    :rtype: (np.ndarray, np.ndarray)
.. todo::
- implement this as a member function on :class:`MultiIndexSet`.
- improve performance by using similar functions from ``numpy``.
"""
res_exp = insert_lexicographically(exp1.copy(), exp2.copy())
nr_monomials, _ = res_exp.shape
num_exp1, _ = exp1.shape
num_exp2, _ = exp2.shape
    res_map = np.zeros((nr_monomials, 2)).astype(int)  # np.int was removed from NumPy; use the builtin
# Assuming all exponents are lexicographically sorted
pos_exp1 = 0
pos_exp2 = 0
for i in range(nr_monomials):
if pos_exp1 < num_exp1:
if np.array_equal(res_exp[i, :], exp1[pos_exp1, :]):
res_map[i, 0] = pos_exp1
pos_exp1 += 1
if pos_exp2 < num_exp2:
if np.array_equal(res_exp[i, :], exp2[pos_exp2, :]):
res_map[i, 1] = pos_exp2
pos_exp2 += 1
return res_exp, res_map
# TODO : poly2 can be of a different basis?
def _lagrange_add(
poly1: MultivariatePolynomialSingleABC, poly2: MultivariatePolynomialSingleABC
) -> MultivariatePolynomialSingleABC:
"""Addition of two polynomials in Lagrange basis.
:param poly1: First polynomial to be added.
:type poly1: LagrangePolynomial
:param poly2: Second polynomial to be added.
:type poly2: LagrangePolynomial
:return: The result of ``poly1 + poly2``
:rtype: LagrangePolynomial
"""
p1, p2 = _match_dims(poly1, poly2)
if _matching_internal_domain(p1, p2):
l2n_p1 = minterpy.transformations.LagrangeToNewton(p1)
newt_p1 = l2n_p1()
l2n_p2 = minterpy.transformations.LagrangeToNewton(p2)
newt_p2 = l2n_p2()
max_poly_degree = np.max(
np.array([p1.multi_index.poly_degree, p2.multi_index.poly_degree])
)
max_lp_degree = np.max(
np.array([p1.multi_index.lp_degree, p2.multi_index.lp_degree])
)
dim = p1.spatial_dimension # must be the same for p2
res_mi = MultiIndexSet.from_degree(dim, int(max_poly_degree), max_lp_degree)
res_grid = Grid(res_mi)
un = res_grid.unisolvent_nodes
eval_p1 = newt_p1(un)
eval_p2 = newt_p2(un)
res_coeffs = eval_p1 + eval_p2
return LagrangePolynomial(
res_mi,
res_coeffs,
grid=res_grid,
internal_domain=p1.internal_domain,
user_domain=p1.user_domain,
)
else:
raise NotImplementedError(
            "Addition is not implemented for Lagrange polynomials with different domains"
)
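The addition strategy above reduces to a simple 1-D idea: evaluate both summands on nodes that are unisolvent for the result's degree, then add pointwise. A minimal sketch with NumPy's polynomial tools (Chebyshev points stand in for minterpy's unisolvent nodes; this illustrates the idea and is not minterpy's API):

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import chebyshev as C

p1 = Polynomial([1.0, 0.0, 2.0])   # 1 + 2x^2   (degree 2)
p2 = Polynomial([0.5, -1.0])       # 0.5 - x    (degree 1)
deg = max(p1.degree(), p2.degree())

# deg+1 nodes uniquely determine a degree-deg polynomial.
nodes = C.chebpts2(deg + 1)
vals = p1(nodes) + p2(nodes)       # pointwise sum = Lagrange coefficients

# Interpolating the summed values recovers the coefficient-wise sum.
p_sum = Polynomial.fit(nodes, vals, deg).convert()
```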
def _lagrange_sub(
poly1: MultivariatePolynomialSingleABC, poly2: MultivariatePolynomialSingleABC
) -> MultivariatePolynomialSingleABC:
"""Subtraction of two polynomials in Lagrange basis.
    :param poly1: First polynomial, from which poly2 will be subtracted.
    :type poly1: LagrangePolynomial
    :param poly2: Second polynomial, which is subtracted from poly1.
    :type poly2: LagrangePolynomial
:return: The result of ``poly1 - poly2``
:rtype: LagrangePolynomial
"""
p1, p2 = _match_dims(poly1, poly2)
if _matching_internal_domain(p1, p2):
l2n_p1 = minterpy.transformations.LagrangeToNewton(p1)
newt_p1 = l2n_p1()
l2n_p2 = minterpy.transformations.LagrangeToNewton(p2)
newt_p2 = l2n_p2()
max_poly_degree = np.max(
np.array([p1.multi_index.poly_degree, p2.multi_index.poly_degree])
)
max_lp_degree = np.max(
np.array([p1.multi_index.lp_degree, p2.multi_index.lp_degree])
)
dim = p1.spatial_dimension # must be the same for p2
res_mi = MultiIndexSet.from_degree(dim, int(max_poly_degree), max_lp_degree)
res_grid = Grid(res_mi)
un = res_grid.unisolvent_nodes
eval_p1 = newt_p1(un)
eval_p2 = newt_p2(un)
res_coeffs = eval_p1 - eval_p2
return LagrangePolynomial(
res_mi,
res_coeffs,
grid=res_grid,
internal_domain=p1.internal_domain,
user_domain=p1.user_domain,
)
else:
raise NotImplementedError(
            "Subtraction is not implemented for Lagrange polynomials with different domains"
)
def _lagrange_mul(
poly1: MultivariatePolynomialSingleABC, poly2: MultivariatePolynomialSingleABC
) -> MultivariatePolynomialSingleABC:
"""Multiplication of two polynomials in Lagrange basis.
:param poly1: First polynomial to be multiplied.
:type poly1: LagrangePolynomial
:param poly2: Second polynomial to be multiplied.
:type poly2: LagrangePolynomial
:return: The result of ``poly1 * poly2``
:rtype: LagrangePolynomial
"""
p1, p2 = _match_dims(poly1, poly2)
if _matching_internal_domain(p1, p2):
l2n_p1 = minterpy.transformations.LagrangeToNewton(p1)
newt_p1 = l2n_p1()
l2n_p2 = minterpy.transformations.LagrangeToNewton(p2)
newt_p2 = l2n_p2()
degree_poly1 = p1.multi_index.poly_degree
degree_poly2 = p2.multi_index.poly_degree
lpdegree_poly1 = p1.multi_index.lp_degree
lpdegree_poly2 = p2.multi_index.lp_degree
res_degree = int(degree_poly1 + degree_poly2)
res_lpdegree = lpdegree_poly1 + lpdegree_poly2
res_mi = MultiIndexSet.from_degree(
p1.spatial_dimension, res_degree, res_lpdegree
)
res_grid = Grid(res_mi)
un = res_grid.unisolvent_nodes
eval_p1 = newt_p1(un)
eval_p2 = newt_p2(un)
res_coeffs = eval_p1 * eval_p2
return LagrangePolynomial(
res_mi,
res_coeffs,
grid=res_grid,
internal_domain=p1.internal_domain,
user_domain=p1.user_domain,
)
else:
raise NotImplementedError(
"Multiplication is not implemented for Lagrange polynomials with different domains"
)
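Unlike addition, multiplication adds the degrees, so the result must be sampled on a grid that is unisolvent for `deg1 + deg2`. The same 1-D sketch, again with NumPy standing in for minterpy:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import chebyshev as C

p1 = Polynomial([0.0, 1.0])        # x         (degree 1)
p2 = Polynomial([1.0, 0.0, 1.0])   # 1 + x^2   (degree 2)
deg = p1.degree() + p2.degree()    # the product has degree 3

nodes = C.chebpts2(deg + 1)        # enough nodes for the *product*
vals = p1(nodes) * p2(nodes)       # pointwise product on the nodes
p_prod = Polynomial.fit(nodes, vals, deg).convert()   # x + x^3
```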
# TODO redundant
lagrange_generate_internal_domain = verify_domain
lagrange_generate_user_domain = verify_domain
class LagrangePolynomial(MultivariatePolynomialSingleABC):
    """Datatype to describe polynomials in the Lagrange basis.
    A polynomial in the Lagrange basis is a sum of so-called Lagrange monomials (each multiplied by a coefficient).
    A `single` Lagrange monomial is, by definition, 1 on one of the grid points and 0 on all others.
Attributes
----------
coeffs
nr_active_monomials
spatial_dimension
unisolvent_nodes
Notes
-----
    A polynomial in the Lagrange basis is well defined even for multi-indices which are lexicographically
    incomplete; the corresponding Lagrange polynomials still form a basis in such cases. These Lagrange
    polynomials will, however, possess their special property of being 1 on a single grid point and 0 on
    all others only with respect to the given grid! This allows defining very "sparse" polynomials (few
    multi-indices, hence few coefficients) which still fulfill additional constraints (they vanish on
    additional grid points). Practically, this can be achieved by storing a "larger" grid (defined on a
    larger set of multi-indices). In this case the transformation matrices become non-square, since there
    are fewer Lagrange polynomials than grid points (only some of the Lagrange polynomials of this basis
    are "active"). Conceptually, this is equivalent to fixing the "inactive" coefficients to always be 0.
.. todo::
- provide a short definition of this base here.
"""
_newt_coeffs_lagr_monomials: Optional[ARRAY] = None
# Virtual Functions
_add = staticmethod(_lagrange_add)
_sub = staticmethod(_lagrange_sub)
_mul = staticmethod(_lagrange_mul)
_div = staticmethod(dummy) # type: ignore
_pow = staticmethod(dummy) # type: ignore
_eval = staticmethod(dummy) # type: ignore
generate_internal_domain = staticmethod(lagrange_generate_internal_domain)
generate_user_domain = staticmethod(lagrange_generate_user_domain)
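The cardinal property the docstring describes (each Lagrange monomial is 1 on exactly one grid point and 0 on the others) is easy to verify in 1-D with generic, made-up nodes; the values of all basis polynomials on all nodes form an identity matrix:

```python
import numpy as np

# Generic 1-D interpolation nodes (made up for the demonstration).
nodes = np.array([-1.0, 0.0, 0.5, 1.0])

def lagrange_monomial(j, x):
    """Value at x of the j-th Lagrange basis polynomial on `nodes`."""
    others = np.delete(nodes, j)
    return np.prod(x - others) / np.prod(nodes[j] - others)

# Evaluating every basis polynomial on every node yields the identity.
values = np.array([[lagrange_monomial(j, x) for x in nodes]
                   for j in range(len(nodes))])
```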
| 33.457565 | 916 | 0.686004 |
7140e77b003d28edbea6cdfe5446f240e9e41141 | 14,151 | py | Python | grinder/config.py | gridcentric/grinder | fb0fb52c0690a9c80ca0704cf7231b407482d793 | [
"MIT"
] | 1 | 2018-06-11T01:36:46.000Z | 2018-06-11T01:36:46.000Z | grinder/config.py | gridcentric/grinder | fb0fb52c0690a9c80ca0704cf7231b407482d793 | [
"MIT"
] | null | null | null | grinder/config.py | gridcentric/grinder | fb0fb52c0690a9c80ca0704cf7231b407482d793 | [
"MIT"
] | null | null | null | # Copyright 2013 GridCentric Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from . logger import log
import inspect
import os
import hashlib
import time
import random
from getpass import getuser
from socket import gethostname
DEFAULT_KEY_PATH = None
DEFAULT_DROPALL_FRACTION = 0.5
DEFAULT_SHARING_CLONES = 2
DEFAULT_SHARE_RATIO = 0.8
DEFAULT_COW_SLACK = 1500
DEFAULT_SSH_PORT = 22
DEFAULT_WINDOWS_LINK_PORT = 9845
DEFAULT_LIMIT_HEADROOM_PAGES = 256
class Image(object):
'''Add an image.
Fields are specified with comma-separated key=val pairs:
name -- the name of the image in glance,
distro -- the linux distribution,
arch -- the architecture,
platform -- Linux or Windows,
user -- the user to login as,
key_path -- the path to the SSH key for the user,
key_name -- the name of the key for booting.
flavor -- the flavor to use to boot this image, overrides the global
default config.flavor_name.
cloudinit -- does the image have the cloud-init daemon
agent_skip -- should the test harness skip agent installation
e.g. --image 11.10-a1-64,distro=ubuntu,user=ubuntu,arch=64
'''
def __init__(self, name, distro, arch, platform='linux', user='root', key_path=None,
key_name=None, flavor=None, cloudinit=False, agent_skip=False):
self.name = name
self.distro = distro
self.arch = arch
self.platform = platform
self.user = user
self.key_path = key_path
self.key_name = key_name
self.flavor = flavor
self.cloudinit = cloudinit
self.agent_skip = agent_skip
def check(self):
assert self.distro
assert self.arch
assert self.platform
assert self.user
def __repr__(self):
return 'Image(name=%s, distro=%s, ' % (self.name, self.distro) + \
'arch=%s, platform=%s, user=%s, ' % (self.arch, self.platform, self.user) + \
'key_path=%s, key_name=%s, flavor=%s, cloudinit=%s, agent_skip=%s)' % \
(self.key_path, self.key_name, self.flavor,
str(self.cloudinit), str(self.agent_skip))
class Config(object):
def __init__(self):
# Hosts in the OpenStack cluster. These are automatically
# inferred in most cases, see Readme.md.
self.hosts = []
# Hosts without the Gridcentric service installed. There should be at
# least one for the benefit of the test_migration_errors test.
# Otherwise, the test is skipped.
self.hosts_without_gridcentric = []
        # Security group for all testing instances.
self.security_group = 'default'
# Instance flavor; determines RAM and what disks are attached.
self.flavor_name = 'm1.tiny'
# Some tests require instances with >= 4GiB of RAM
self.big_ram_flavor_name = "m1.medium"
# Name of the key to inject into the instances. You may use either
# this mechanism to inject a guest key, or save some images with a
# key preprovisioned. Both are acceptable.
self.guest_key_name = None
# The default path for the counter-part private key that is injected
# into guests. We require logging into guests as part of the tests.
self.guest_key_path = DEFAULT_KEY_PATH
# The name of the network to access instances via. By default,
# an arbitrary network that the instance is attached to is chosen.
self.network_name = None
# Some tests will ssh to the hosts and ensure that the state is as
# expected. This will happen using the host_user and the private key
# behind host_key_path below.
self.host_user = None
self.host_key_path = DEFAULT_KEY_PATH
# How long to wait for servers to come up, become ping-able, ssh-able,
# etc. In seconds.
self.ops_timeout = 600
# The port to use to initiate ssh connections.
self.ssh_port = DEFAULT_SSH_PORT
# The port for the Windows TestListener service.
self.windows_link_port = DEFAULT_WINDOWS_LINK_PORT
# A custom agent location (passed to the gc-install-agent command).
self.agent_location = None
self.agent_version = 'latest'
# A custom Windows agent location (passed to the Windows
# TestListener). The location must be a url directly to the msi file.
self.windows_agent_location = \
"http://downloads.gridcentric.com/packages/agent/windows/"
# The arch to use for non-arch tests (i.e., tests that aren't sensitive
# to the arch).
self.default_archs = []
# The distro to use for non-distro tests (i.e., tests that aren't
# sensitive to the distro).
self.default_distros = []
        # The platform to use for non-platform tests (i.e., tests that aren't
        # sensitive to the platform).
self.default_platforms = []
# Name to prefix to all of the tests. Defaults to the environment
# variable RUN_NAME if available, otherwise one is constructed from
# the current host, user and pid.
self.run_name = None
# The images available in Glance for the tests to use. The tests need
# images of certain distributions for certain architectures; hence the
# image names aren't important. See Image.
#
# We no longer store defaults here in this repo, but as an example:
# Image('cirros-0.3.0-x86_64', distro='cirros', arch='64', platform='linux', user='cirros'),
# Image('oneiric-agent-ready', distro='ubuntu', arch='64', user='ubuntu'),
# Image('precise-32-agent-ready', distro='ubuntu', arch='32', user='root'),
# Image('precise-pae-agent-ready', distro='ubuntu', arch='pae', user='root'),
# Image('centos-6.3-64-agent-ready', distro='centos', arch='64', user='root'),
# Image('centos-6.3-32-agent-ready', distro='centos', arch='32', user='root'),
# Image('centos-6.3-pae-agent-ready', distro='centos', arch='pae', user='root'),
# Image('windows7-64bit-virtio', distro='win7', arch='64', platform='windows', user='root')
self.images = []
# Whether to leave the VMs around on failure.
self.leave_on_failure = False
# Parameters for reading test configuration from a Tempest configuration file:
# tempest_config is the path of the configuration file
# tc_distro is the distro for the default guest image
# tc_arch is the arch for the default guest image
# The function pytest_configure will use these parameters to construct
# one instance of Image
# (e.g. Image('precise-32-agent-ready', distro='ubuntu', arch='32', user='root'))
self.tempest_config = None
self.tc_distro = None
self.tc_arch = None
# Authentication parameters
self.os_username = os.environ.get('OS_USERNAME')
self.os_password = os.environ.get('OS_PASSWORD')
self.os_tenant_name = os.environ.get('OS_TENANT_NAME')
self.os_auth_url = os.environ.get('OS_AUTH_URL')
self.os_region_name = os.environ.get('OS_REGION_NAME', 'RegionOne')
# Folsom and later: a default availability zone that contains testing hosts
self.default_az = 'nova'
# Parameter for the memory-hoard-dropall test. Only change if you
# really know what you are doing. There is no good definition for
# the "success" of a memory eviction operation. However, on a
# (relatively) freshly booted Linux, fully hoarded, with over 256MiB of
# RAM, there should be massive removal of free pages. Settle on a 50%
# threshold by default.
self.test_memory_dropall_fraction = DEFAULT_DROPALL_FRACTION
# These are knobs that control the sharing test, and you should be very
# sure about what you are doing before changing them.
# Number of clones that will share memory.
self.test_sharing_sharing_clones = DEFAULT_SHARING_CLONES
# When share-hoarding across a bunch of stopped clones, we expect
# the resident to allocated ratio to be test_sharing_share_ratio * num
# of clones i.e. for two clones, 60% more resident than allocated.
self.test_sharing_share_ratio = DEFAULT_SHARE_RATIO
# We cannot avoid a small but unknown amount of CoW to happen before we
# start accounting. So provide a slack to absorb that unknown number of
# pages and prevent spurious failures.
self.test_sharing_cow_slack = DEFAULT_COW_SLACK
# Policy tests that assert a memory limit need a small headroom as the
# VM cools down after hitting the limit.
self.test_policy_headroom_pages = DEFAULT_LIMIT_HEADROOM_PAGES
# Self explanatory
self.skip_migration_tests = False
# Test output spews endless 'DEBUG' API calls when logging level is set
# to 'DEBUG'. Control what logging levels we want to see.
self.log_level = 'INFO'
# Set this flag on the command line to see HTTP request/response to nova API.
self.http_log_debug = False
# The default vmspolicyd policy to install before starting tests. Note
# that this policy will affect all VMS on the test host and will be left
# in place when grinder exits (i.e. grinder won't remember and restore
# the policy it found on the host). Some grinder tests will also modify
# the policy to test policyd.
self.default_policy = """
[*]
unmanaged = true
"""
def get_all_archs(self):
return list(set([i.arch for i in self.images]))
def get_all_distros(self):
return list(set([i.distro for i in self.images]))
def get_all_platforms(self):
return list(set([i.platform for i in self.images]))
def post_config(self):
if self.run_name == None:
if os.getenv('RUN_NAME'):
self.run_name = os.getenv('RUN_NAME')
else:
pid = os.getpid()
ppid = os.getppid()
user = getuser()
host = gethostname()
self.run_name = '%s@%s-%d-%d' % (user, host, ppid, pid)
archs = set()
distros = set()
platforms = set()
for image in self.images:
archs.add(image.arch)
distros.add(image.distro)
platforms.add(image.platform)
if image.key_name == None:
image.key_name = self.guest_key_name
if image.key_path == None:
image.key_path = self.guest_key_path
if len(self.default_archs) == 0:
self.default_archs = list(archs)
if len(self.default_distros) == 0:
self.default_distros = list(distros)
if len(self.default_platforms) == 0:
self.default_platforms = list(platforms)
# Cast number options, handle bogosity
def handle_number_option(opt, type, name, default, min, max):
try:
opt = type(opt)
except ValueError:
log.warn("Bad value for %s, back to default %s" %
(name, str(default)))
opt = default
if opt < min or opt > max:
log.warn("Provided %s %s will break the test, back to "
"default %s." % (name, str(opt), str(default)))
opt = default
return opt
random.seed(time.time())
self.test_sharing_sharing_clones =\
handle_number_option(self.test_sharing_sharing_clones,
int, "sharing clones",
DEFAULT_SHARING_CLONES, 2, 10)
self.test_sharing_share_ratio =\
handle_number_option(self.test_sharing_share_ratio,
float, "sharing ratio",
DEFAULT_SHARE_RATIO, 0.25, 0.99)
self.test_sharing_cow_slack =\
handle_number_option(self.test_sharing_cow_slack,
int, "sharing cow slack",
DEFAULT_COW_SLACK, 0, 16 * 256)
self.test_memory_dropall_fraction =\
handle_number_option(self.test_memory_dropall_fraction,
float, "dropall fraction",
DEFAULT_DROPALL_FRACTION, 0.25, 0.99)
self.test_policy_headroom_pages =\
handle_number_option(self.test_policy_headroom_pages,
int, "policy limit headroom pages",
DEFAULT_LIMIT_HEADROOM_PAGES, 0, 16 * 256)
def get_images(self, distro, arch, platform):
return filter(lambda i: i.distro == distro and \
i.arch == arch and \
i.platform == platform, self.images)
def hostname_to_ids(self, tenant_id, hostname):
essex_hash = hashlib.sha224(str(tenant_id) + str(hostname)).hexdigest()
diablo_hash = hashlib.sha224(str(hostname)).hexdigest()
print "hostname hashes for %s" % hostname
print "essex = %s" % essex_hash
print "diablo = %s" % diablo_hash
return [essex_hash, diablo_hash]
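For reference, the same hashing in self-contained Python 3 form (the tenant and hostname values here are made up): Essex derives the host id from the tenant id plus the hostname, while Diablo hashed the hostname alone.

```python
import hashlib

def hostname_to_ids(tenant_id, hostname):
    # Essex: sha224 over tenant id + hostname; Diablo: hostname only.
    essex = hashlib.sha224((str(tenant_id) + str(hostname)).encode()).hexdigest()
    diablo = hashlib.sha224(str(hostname).encode()).hexdigest()
    return [essex, diablo]

ids = hostname_to_ids("tenant-a", "node1")
```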
def id_to_hostname(self, tenant_id, host_id):
for hostname in self.hosts:
if host_id in self.hostname_to_ids(tenant_id, hostname):
return hostname
raise KeyError(host_id)
default_config = Config()
| 42.241791 | 102 | 0.629637 |
6d56f2b6e3fb382f4d6480a04af031014792feac | 11,309 | py | Python | senteval/examples/sent2vec.py | wasiahmad/community_question_answering | 73d13bc1cdf2ea66d13209c007dcc2767cf2155c | [
"MIT"
] | 11 | 2018-10-04T04:37:03.000Z | 2020-02-19T15:59:51.000Z | senteval/examples/sent2vec.py | wasiahmad/community_question_answering | 73d13bc1cdf2ea66d13209c007dcc2767cf2155c | [
"MIT"
] | 1 | 2020-04-02T09:49:57.000Z | 2020-04-02T13:44:56.000Z | senteval/examples/sent2vec.py | wasiahmad/community_question_answering | 73d13bc1cdf2ea66d13209c007dcc2767cf2155c | [
"MIT"
] | 8 | 2018-10-08T06:52:32.000Z | 2020-04-18T03:49:14.000Z | # Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
"""
This file contains the definition of encoders used in https://arxiv.org/pdf/1705.02364.pdf
"""
import numpy as np
import time
import torch
from torch.autograd import Variable
import torch.nn as nn
class Sent2VecSingle(nn.Module):
def __init__(self, config):
super(Sent2VecSingle, self).__init__()
self.bsize = config['bsize']
self.word_emb_dim = config['word_emb_dim']
self.rnn_dim = config['enc_lstm_dim']
self.pool_type = config['pool_type']
self.dpout_model = config['dpout_model']
self.version = 1 if 'version' not in config else config['version']
self.rnn = nn.LSTM(self.word_emb_dim, self.rnn_dim, 1,
bidirectional=True, dropout=self.dpout_model)
assert self.version in [1, 2]
if self.version == 1:
self.bos = '<s>'
self.eos = '</s>'
self.max_pad = True
self.moses_tok = False
elif self.version == 2:
self.bos = '<p>'
self.eos = '</p>'
self.max_pad = False
self.moses_tok = True
def load_state(self, model_path):
self.rnn.load_state_dict(torch.load(model_path))
def is_cuda(self):
# either all weights are on cpu or they are on gpu
return 'cuda' in str(type(self.rnn.bias_hh_l0.data))
def forward(self, sent_tuple):
# sent_len: [max_len, ..., min_len] (bsize)
# sent: Variable(seqlen x bsize x worddim)
sent, sent_len = sent_tuple
# Sort by length (keep idx)
sent_len_sorted, idx_sort = np.sort(sent_len)[::-1], np.argsort(-sent_len)
idx_unsort = np.argsort(idx_sort)
idx_sort = torch.from_numpy(idx_sort).cuda() if self.is_cuda() \
else torch.from_numpy(idx_sort)
sent = sent.index_select(1, Variable(idx_sort))
# FIXME: without applying cuda, it throws error
sent = sent.cuda()
# Handling padding in Recurrent Networks
sent_packed = nn.utils.rnn.pack_padded_sequence(sent, sent_len_sorted)
sent_output = self.rnn(sent_packed)[0] # seqlen x batch x 2*nhid
sent_output = nn.utils.rnn.pad_packed_sequence(sent_output)[0]
# Un-sort by length
idx_unsort = torch.from_numpy(idx_unsort).cuda() if self.is_cuda() \
else torch.from_numpy(idx_unsort)
# FIXME: without applying cuda, it throws error
idx_unsort = idx_unsort.cuda()
sent_output = sent_output.index_select(1, Variable(idx_unsort))
# Pooling
if self.pool_type == "mean":
sent_len = Variable(torch.FloatTensor(sent_len.copy())).unsqueeze(1).cuda()
emb = torch.sum(sent_output, 0).squeeze(0)
emb = emb / sent_len.expand_as(emb)
elif self.pool_type == "max":
if not self.max_pad:
sent_output[sent_output == 0] = -1e9
emb = torch.max(sent_output, 0)[0]
if emb.ndimension() == 3:
emb = emb.squeeze(0)
assert emb.ndimension() == 2
return emb
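The two pooling branches above can be sketched with plain NumPy on a small `(seqlen, bsize, dim)` batch in which the second sentence is zero-padded after step 2; the mean divides by the true lengths, while the max masks padding so zeros can never win:

```python
import numpy as np

# (seqlen=4, bsize=2, dim=3); sentence 1 fills all 4 steps, sentence 2
# only the first 2, the rest being zero padding.
out = np.zeros((4, 2, 3))
out[:, 0, :] = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
out[:2, 1, :] = [[2, 2, 2], [4, 4, 4]]
lengths = np.array([4.0, 2.0])

# "mean": sum over time, divide by the *true* length, not the padded one.
mean_emb = out.sum(axis=0) / lengths[:, None]

# "max": push padding far negative first, so zeros can never be the max.
masked = np.where(out == 0, -1e9, out)
max_emb = masked.max(axis=0)
```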
def set_w2v_path(self, w2v_path):
self.w2v_path = w2v_path
def get_word_dict(self, sentences, tokenize=True):
# create vocab of words
word_dict = {}
sentences = [s.split() if not tokenize else self.tokenize(s) for s in sentences]
for sent in sentences:
for word in sent:
if word not in word_dict:
word_dict[word] = ''
word_dict[self.bos] = ''
word_dict[self.eos] = ''
return word_dict
def get_w2v(self, word_dict):
assert hasattr(self, 'w2v_path'), 'w2v path not set'
# create word_vec with w2v vectors
word_vec = {}
with open(self.w2v_path) as f:
for line in f:
word, vec = line.split(' ', 1)
if word in word_dict:
word_vec[word] = np.fromstring(vec, sep=' ')
print('Found %s(/%s) words with w2v vectors' % (len(word_vec), len(word_dict)))
return word_vec
def get_w2v_k(self, K):
assert hasattr(self, 'w2v_path'), 'w2v path not set'
# create word_vec with k first w2v vectors
k = 0
word_vec = {}
with open(self.w2v_path) as f:
for line in f:
word, vec = line.split(' ', 1)
if k <= K:
word_vec[word] = np.fromstring(vec, sep=' ')
k += 1
if k > K:
if word in [self.bos, self.eos]:
word_vec[word] = np.fromstring(vec, sep=' ')
if k > K and all([w in word_vec for w in [self.bos, self.eos]]):
break
return word_vec
def build_vocab(self, sentences, tokenize=True):
assert hasattr(self, 'w2v_path'), 'w2v path not set'
word_dict = self.get_word_dict(sentences, tokenize)
self.word_vec = self.get_w2v(word_dict)
print('Vocab size : %s' % (len(self.word_vec)))
# build w2v vocab with k most frequent words
def build_vocab_k_words(self, K):
assert hasattr(self, 'w2v_path'), 'w2v path not set'
self.word_vec = self.get_w2v_k(K)
print('Vocab size : %s' % (K))
def update_vocab(self, sentences, tokenize=True):
assert hasattr(self, 'w2v_path'), 'warning : w2v path not set'
assert hasattr(self, 'word_vec'), 'build_vocab before updating it'
word_dict = self.get_word_dict(sentences, tokenize)
# keep only new words
for word in self.word_vec:
if word in word_dict:
del word_dict[word]
        # update vocabulary
if word_dict:
new_word_vec = self.get_w2v(word_dict)
self.word_vec.update(new_word_vec)
else:
new_word_vec = []
print('New vocab size : %s (added %s words)' % (len(self.word_vec), len(new_word_vec)))
def get_batch(self, batch):
# sent in batch in decreasing order of lengths
# batch: (bsize, max_len, word_dim)
embed = np.zeros((len(batch[0]), len(batch), self.word_emb_dim))
for i in range(len(batch)):
for j in range(len(batch[i])):
embed[j, i, :] = self.word_vec[batch[i][j]]
return torch.FloatTensor(embed)
def tokenize(self, s):
from nltk.tokenize import word_tokenize
if self.moses_tok:
s = ' '.join(word_tokenize(s))
s = s.replace(" n't ", "n 't ") # HACK to get ~MOSES tokenization
return s.split()
else:
return word_tokenize(s)
def prepare_samples(self, sentences, bsize, tokenize, verbose):
sentences = [[self.bos] + s.split() + [self.eos] if not tokenize else
[self.bos] + self.tokenize(s) + [self.eos] for s in sentences]
n_w = np.sum([len(x) for x in sentences])
# filters words without w2v vectors
for i in range(len(sentences)):
s_f = [word for word in sentences[i] if word in self.word_vec]
if not s_f:
import warnings
warnings.warn('No words in "%s" (idx=%s) have w2v vectors. \
Replacing by "</s>"..' % (sentences[i], i))
s_f = [self.eos]
sentences[i] = s_f
lengths = np.array([len(s) for s in sentences])
n_wk = np.sum(lengths)
if verbose:
print('Nb words kept : %s/%s (%.1f%s)' % (
n_wk, n_w, 100.0 * n_wk / n_w, '%'))
# sort by decreasing length
lengths, idx_sort = np.sort(lengths)[::-1], np.argsort(-lengths)
sentences = np.array(sentences)[idx_sort]
return sentences, lengths, idx_sort
def encode(self, sentences, bsize=64, tokenize=True, verbose=False):
tic = time.time()
sentences, lengths, idx_sort = self.prepare_samples(
sentences, bsize, tokenize, verbose)
embeddings = []
with torch.no_grad():
for stidx in range(0, len(sentences), bsize):
batch = Variable(self.get_batch(
sentences[stidx:stidx + bsize]))
if self.is_cuda():
batch = batch.cuda()
batch = self.forward(
(batch, lengths[stidx:stidx + bsize])).data.cpu().numpy()
embeddings.append(batch)
embeddings = np.vstack(embeddings)
# unsort
idx_unsort = np.argsort(idx_sort)
embeddings = embeddings[idx_unsort]
if verbose:
print('Speed : %.1f sentences/s (%s mode, bsize=%s)' % (
len(embeddings) / (time.time() - tic),
'gpu' if self.is_cuda() else 'cpu', bsize))
return embeddings
def visualize(self, sent, tokenize=True):
sent = sent.split() if not tokenize else self.tokenize(sent)
sent = [[self.bos] + [word for word in sent if word in self.word_vec] + [self.eos]]
if ' '.join(sent[0]) == '%s %s' % (self.bos, self.eos):
import warnings
warnings.warn('No words in "%s" have w2v vectors. Replacing \
by "%s %s"..' % (sent, self.bos, self.eos))
batch = Variable(self.get_batch(sent), volatile=True)
if self.is_cuda():
batch = batch.cuda()
output = self.rnn(batch)[0]
output, idxs = torch.max(output, 0)
# output, idxs = output.squeeze(), idxs.squeeze()
idxs = idxs.data.cpu().numpy()
argmaxs = [np.sum((idxs == k)) for k in range(len(sent[0]))]
# visualize model
import matplotlib.pyplot as plt
x = range(len(sent[0]))
y = [100.0 * n / np.sum(argmaxs) for n in argmaxs]
plt.xticks(x, sent[0], rotation=45)
plt.bar(x, y)
plt.ylabel('%')
plt.title('Visualisation of words importance')
plt.show()
return output, idxs
class Sent2Vec(nn.Module):
    """Combine several Sent2VecSingle encoders into one representation."""
    def __init__(self, args, con_type):
        """A wrapper class around multiple Sent2VecSingle models."""
super(Sent2Vec, self).__init__()
self.sent2vec_models = args
self.con_type = con_type
def build_vocab(self, sentences, tokenize=True):
for model in self.sent2vec_models:
model.build_vocab(sentences, tokenize=tokenize)
def get_representation(
self, sentences, batch_size,
tokenize=False
):
"""Get model representations."""
representations = [
model.encode(
sentences, bsize=batch_size,
tokenize=tokenize
)
for model in self.sent2vec_models
]
if self.con_type == 'concat':
representations = np.concatenate(representations, axis=1)
elif self.con_type == 'mean':
representations = np.stack(representations, axis=1)
representations = np.mean(representations, axis=1)
        else:
            raise ValueError("Unknown combination type: %s" % self.con_type)
return representations
| 36.246795 | 95 | 0.567601 |
9b543a8e7d828d75f301480129ca03b250947128 | 7,038 | py | Python | astrometry.py | astromark/lacewing | e96d2dbbcce0e0c8cc00e7923ada4c8031de80ef | [
"BSD-2-Clause"
] | 3 | 2015-07-03T20:47:44.000Z | 2020-11-27T08:47:32.000Z | astrometry.py | astromark/lacewing | e96d2dbbcce0e0c8cc00e7923ada4c8031de80ef | [
"BSD-2-Clause"
] | 1 | 2017-11-20T22:02:11.000Z | 2018-01-06T01:54:31.000Z | astrometry.py | astromark/lacewing | e96d2dbbcce0e0c8cc00e7923ada4c8031de80ef | [
"BSD-2-Clause"
] | 1 | 2020-02-21T08:48:53.000Z | 2020-02-21T08:48:53.000Z | import numpy
# Weighted Standard Devation taken from http://stackoverflow.com/questions/2413522/weighted-standard-deviation-in-numpy
def wstddev(x,u):
w = 1/(u**2)
average = numpy.average(x,weights=w)
variance = numpy.dot(w,(x-average)**2)/w.sum()
return average,numpy.sqrt(variance)
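This is inverse-variance weighting: a precise measurement dominates both the mean and the spread. A tiny worked example with made-up numbers:

```python
import numpy as np

x = np.array([10.0, 12.0])   # two measurements (illustrative values)
u = np.array([0.1, 1.0])     # the first is ten times more precise

w = 1 / u**2
mean = np.average(x, weights=w)
std = np.sqrt(np.dot(w, (x - mean)**2) / w.sum())
# mean sits at 1012/101 ~ 10.02, pulled hard toward the precise point.
```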
def isnumber(x):
#print type(x).__name__
result = False
if type(x).__name__ in ["str","string_"]:
try:
float(x)/2.0
result = True
except ValueError:
result = False
if type(x).__name__ in ["int", "int64", "long", "float", "float64", "double"]:
return True
if type(x).__name__ in ["list","ndarray","Column","MaskedColumn"]:
result=[]
for i in xrange(len(x)):
#print x[i]
try:
if isinstance(x[i],numpy.ma.core.MaskedConstant):
raise ValueError
float(x[i])/2. # should induce non-numbers to throw a 'ValueError'
result.append(True)
            except (ValueError, TypeError):
                result.append(False)
        return result
    return result
def meanpi(pi,epi,fpi):
# as seen on www.denseproject.com/25pc/
npi = len(pi)
top = 0
bottom = 0
number = 0
for i in range(npi):
if fpi[i] == 'P':
top = top + (pi[i] / (epi[i]**2))
bottom = bottom + (1.0 / (epi[i]**2))
number = number + 1
if number > 0:
wpi = top/bottom
ewpi = 1/numpy.sqrt(bottom)
else:
wpi = 0
ewpi = 0
return wpi,ewpi
# Astrometry routines
def sixty(x):
hour = int(x)
minute = int((x - hour)*60.0)
second = (((x - hour)*60.0)-minute)*60.0
return hour,abs(minute),abs(second)
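A self-contained round trip between decimal degrees and the sexagesimal parts produced above, mirroring `sixty`/`ten` (including their convention that only the leading term carries the sign):

```python
def to_sexagesimal(x):
    d = int(x)                      # truncates toward zero
    m = int((x - d) * 60.0)
    s = (((x - d) * 60.0) - m) * 60.0
    return d, abs(m), abs(s)

def to_decimal(d, m, s):
    val = abs(d) + m / 60.0 + s / 3600.0
    return -val if d < 0 else val   # sign lives on the leading term only

parts = to_sexagesimal(-12.51)      # (-12, 30, ~36.0) up to float noise
back = to_decimal(*parts)           # ~ -12.51
```

Like the originals, this drops the sign whenever the degree term is 0 (e.g. -0.5 degrees), so callers must handle that case separately.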
def ten(x):
#x = [x]
size = len(x)
#print size,x
try:
len(x[0])
x = numpy.asarray(x)
val = numpy.zeros(len(x[0]))
except TypeError:
        pass  # scalar input: nothing to pre-allocate
if size == 1:
val = x[0]
elif size == 2:
val = abs(x[0]) + x[1]/60.
elif size == 3:
val = abs(x[0]) + x[1]/60. + x[2]/3600.
try:
len(x[0]) # will raise TYPEERROR if this is not an array
flip = numpy.where(x[0] < 0)
val[flip] = val[flip] * -1
except TypeError:
if x[0] < 0:
val = val * -1
#print val
return val
###################################
# PM join
###################################
def pmjoin(pmra,epmra,pmdec,epmdec):
pm=numpy.sqrt(pmra**2+pmdec**2)
epm = numpy.sqrt(epmra**2 + epmdec**2)
pa = numpy.arctan2(pmra,pmdec)*180./numpy.pi
if pa < 0:
pa = 360 + pa
epa = abs(pa - numpy.arctan2(pmra-epmra,pmdec-epmdec)*180./numpy.pi)
if epa > 180:
epa = 360-epa
return pm,epm,pa,epa
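A quick round-trip check of the join on made-up proper motions; inverting with `sin`/`cos` of the position angle reproduces both components in one step, without the per-quadrant branches that `pmsplit` uses:

```python
import numpy as np

pmra, pmdec = 120.0, -90.0                         # mas/yr, illustrative values

pm = np.hypot(pmra, pmdec)                         # total motion: 150.0
pa = np.degrees(np.arctan2(pmra, pmdec)) % 360.0   # East of North, ~126.87

# Inverting with sin/cos recovers both components directly.
pmra_back = pm * np.sin(np.radians(pa))
pmdec_back = pm * np.cos(np.radians(pa))
```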
###################################
# PM split
###################################
def pmsplit(pm,epm,pa,epa):
pa = (pa % 360) * numpy.pi/180.
pi2 = numpy.pi / 2.
if ((pa >= 0.0) and (pa < pi2)):
pmra = pm * numpy.sin(pa)
pmdec = pm * numpy.cos(pa)
if ((pa >= pi2) and (pa < 2*pi2)):
pa = pa - pi2
pmra = pm * numpy.cos(pa)
pmdec = - (pm * numpy.sin(pa))
if (pa >= 2*pi2) and (pa < 3*pi2):
pa = pa - (2*pi2)
pmra = - (pm * numpy.sin(pa))
pmdec = - (pm * numpy.cos(pa))
if (pa >= 3*pi2) and (pa < 4*pi2):
pa = pa - (3*pi2)
pmra = - (pm * numpy.cos(pa))
pmdec = pm * numpy.sin(pa)
return pmra,numpy.sqrt(epm),pmdec,numpy.sqrt(epm)
def addpm(ra,era,dec,edec,pmra,epmra,pmdec,epmdec,time):
ra = ra + pmra*numpy.cos(dec*numpy.pi/180)/3600. * time/365.25
dec = dec + pmdec/3600. * time/365.25
era = numpy.sqrt((era**2) + (epmra*numpy.cos(dec*numpy.pi/180)/3600. * time/365.25)**2)
edec = numpy.sqrt((edec**2) + (epmdec/3600. * time/365.25)**2)
return ra,era,dec,edec
##############################################
# PROPER MOTION to GREAT CIRCLE
##############################################
def greatcircle(ra,dec,pmra,pmdec):
radeg=180/numpy.pi
# set up a great circle
raeq=numpy.arange(0,361,1)/radeg
deceq=numpy.zeros(361)
# to find the pole of the appropriate
# great circle, we must take the cross
# product of the two unit vectors
# (ra,dec) and (ra+pmra/dec,dec+pmdec)
# in cartesian coordinates.
rar=ra/radeg
decr=dec/radeg
# print ra,dec,rar,decr
pmrar=pmra/3600./radeg
pmdecr=pmdec/3600./radeg
x1=numpy.cos(decr)*numpy.cos(rar)
y1=numpy.cos(decr)*numpy.sin(rar)
z1=numpy.sin(decr)
x2=numpy.cos(decr+pmdecr)*numpy.cos(rar+pmrar/numpy.cos(decr))
y2=numpy.cos(decr+pmdecr)*numpy.sin(rar+pmrar/numpy.cos(decr))
z2=numpy.sin(decr+pmdecr)
# print x1,y1,z1,sqrt(x1**2+y1**2+z1**2),x2,y2,z2,sqrt(x2**2+y2**2+z2**2)
# print
# the cross-product:
x=y1*z2-z1*y2
y=z1*x2-x1*z2
z=x1*y2-y1*x2
# (x,y,z) is the coordinate of the great circle's pole.
r=numpy.sqrt(x**2+y**2+z**2)
# get the RA and DEC (in radians, fixing the bounds)
rap=numpy.arctan2(y,x)
if rap < 0:
rap = rap + 2*numpy.pi
decp=numpy.arcsin(z/r)
# if decp gt !pi/2. then decp=!pi/2.-decp
# angular distance between star and equatorial pole (ie, the dec)
dlon=(90-dec)/radeg
sdp = numpy.sin(decp)
cdp = numpy.sqrt(1.0-sdp*sdp)
# stolen from glactc.pro
sgb = numpy.sin(deceq)
cgb = numpy.sqrt(1.0-sgb*sgb)
sdec = sgb*sdp + cgb*cdp*numpy.cos(dlon-raeq)
decgc = radeg * numpy.arcsin(sdec)
cdec = numpy.sqrt(1.0-sdec*sdec)
sinf = cgb * numpy.sin(dlon-raeq) / cdec
cosf = (sgb-sdp*sdec) / (cdp*cdec)
ragc = radeg * (rap + numpy.arctan2(sinf,cosf))
ragc[numpy.where(ragc < 0)] = ragc[numpy.where(ragc < 0)]+360
return ragc,decgc
#Original IDL code by John Subasavage, translated to Python/Numpy by Adric Riedel
def spheresep(ra1,dec1,ra2,dec2):
radeg = 180/numpy.pi
ra1 = ra1/radeg
ra2 = ra2/radeg
dec1 = dec1/radeg
dec2 = dec2/radeg
diffra=numpy.abs(ra1-ra2)
diffdec=numpy.abs(dec1-dec2)
############################################################
#
# Calculate separation (law of cosines)
#
############################################################
sepa=numpy.arccos((numpy.sin(dec1)*numpy.sin(dec2))+numpy.cos(dec1)*numpy.cos(dec2)*numpy.cos(diffra))
############################################################
#
# Calculate position angle
#
############################################################
posang=numpy.arcsin((numpy.sin(diffra)*numpy.sin((numpy.pi/2.)-dec2))/sepa)
try:
if posang < 0:
posang = posang + numpy.pi*2
except ValueError:
flip = numpy.where(posang < 0)
posang[flip] = posang[flip] + numpy.pi*2
sepa = sepa*radeg
posang = posang*radeg
return sepa,posang
| 28.152 | 119 | 0.516482 |
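The `spheresep` function above applies the spherical law of cosines. As a minimal standalone sketch (hypothetical helper name, scalar-only, pure `math` instead of numpy — not part of the file above):

```python
import math

def sphere_separation_deg(ra1, dec1, ra2, dec2):
    """Angular separation (degrees) between two sky positions given in degrees.

    Standalone sketch of the law-of-cosines step used by ``spheresep`` above;
    the name and scalar-only signature here are illustrative assumptions.
    """
    ra1, dec1, ra2, dec2 = (math.radians(v) for v in (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(abs(ra1 - ra2)))
    # Clamp against floating-point overshoot before taking the arccosine.
    cos_sep = max(-1.0, min(1.0, cos_sep))
    return math.degrees(math.acos(cos_sep))

# Two points on the celestial equator, 90 degrees apart in RA:
print(sphere_separation_deg(0.0, 0.0, 90.0, 0.0))  # → 90.0 (up to rounding)
```

The clamp matters in practice: for two identical coordinates, `sin²+cos²` can land a few ulps above 1.0 and `acos` would raise a domain error without it.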
6ae2aa96f2c876642c3751ad7dc2750224fba968 | 3,227 | py | Python | src/envs/gravity_pendulum.py | jlebensold/flrl-ddpg | d91e9f4aedf48d0614e33bd22c7f684ecda089b1 | [
"MIT"
] | 1 | 2021-05-11T06:28:01.000Z | 2021-05-11T06:28:01.000Z | src/envs/gravity_pendulum.py | jlebensold/flrl-ddpg | d91e9f4aedf48d0614e33bd22c7f684ecda089b1 | [
"MIT"
] | null | null | null | src/envs/gravity_pendulum.py | jlebensold/flrl-ddpg | d91e9f4aedf48d0614e33bd22c7f684ecda089b1 | [
"MIT"
] | 1 | 2021-03-07T06:33:17.000Z | 2021-03-07T06:33:17.000Z | from gym.envs.classic_control.pendulum import PendulumEnv, angle_normalize
from gym import Wrapper
import gym
from gym import spaces
from gym.utils import seeding
import numpy as np
from os import path
class GravityPendulum(gym.Env):
metadata = {
'render.modes' : ['human', 'rgb_array'],
'video.frames_per_second' : 30
}
def __init__(self, env_param=10.):
self.max_speed=8
self.max_torque=2.
self.dt=.05
self.viewer = None
high = np.array([1., 1., self.max_speed])
self.action_space = spaces.Box(low=-self.max_torque, high=self.max_torque, shape=(1,), dtype=np.float32)
self.observation_space = spaces.Box(low=-high, high=high, dtype=np.float32)
self.g = env_param
self.seed()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def step(self, u):
th, thdot = self.state # th := theta
g = self.g
m = 1.
l = 1.
dt = self.dt
u = np.clip(u, -self.max_torque, self.max_torque)[0]
self.last_u = u # for rendering
costs = angle_normalize(th)**2 + .1*thdot**2 + .001*(u**2)
newthdot = thdot + (-3*g/(2*l) * np.sin(th + np.pi) + 3./(m*l**2)*u) * dt
newth = th + newthdot*dt
newthdot = np.clip(newthdot, -self.max_speed, self.max_speed) #pylint: disable=E1111
self.state = np.array([newth, newthdot])
return self._get_obs(), -costs, False, {}
def reset(self):
high = np.array([np.pi, 1])
# NOTE: This is the default setting: (makes the task harder!)
self.state = self.np_random.uniform(low=-high, high=high)
# We fix a start state (for now):
# self.state = (0, 0)
self.last_u = None
return self._get_obs()
def _get_obs(self):
theta, thetadot = self.state
return np.array([np.cos(theta), np.sin(theta), thetadot])
def render(self, mode='human'):
if self.viewer is None:
from gym.envs.classic_control import rendering
self.viewer = rendering.Viewer(500,500)
self.viewer.set_bounds(-2.2,2.2,-2.2,2.2)
rod = rendering.make_capsule(1, .2)
rod.set_color(.8, .3, .3)
self.pole_transform = rendering.Transform()
rod.add_attr(self.pole_transform)
self.viewer.add_geom(rod)
axle = rendering.make_circle(.05)
axle.set_color(0,0,0)
self.viewer.add_geom(axle)
fname = path.join(path.dirname(__file__), "assets/clockwise.png")
self.img = rendering.Image(fname, 1., 1.)
self.imgtrans = rendering.Transform()
self.img.add_attr(self.imgtrans)
self.viewer.add_onetime(self.img)
self.pole_transform.set_rotation(self.state[0] + np.pi/2)
if self.last_u:
self.imgtrans.scale = (-self.last_u/2, np.abs(self.last_u)/2)
return self.viewer.render(return_rgb_array = mode=='rgb_array')
def close(self):
if self.viewer:
self.viewer.close()
self.viewer = None
def angle_normalize(x):
return (((x+np.pi) % (2*np.pi)) - np.pi)
| 32.928571 | 112 | 0.590022 |
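The `(((x+pi) % (2*pi)) - pi)` arithmetic in `angle_normalize` above maps any angle into the half-open interval [-π, π); a quick standalone check (hypothetical helper name, no gym dependency):

```python
import math

def wrap_to_pi(theta):
    """Map an angle in radians into [-pi, pi).

    Same modular arithmetic as ``angle_normalize`` in the environment above.
    """
    return ((theta + math.pi) % (2 * math.pi)) - math.pi

for raw in (0.0, math.pi / 2, math.pi, 3 * math.pi, -math.pi / 2 - 4 * math.pi):
    print(f"{raw:+.3f} -> {wrap_to_pi(raw):+.3f}")
```

Note the interval is half-open: `wrap_to_pi(math.pi)` returns -π, which is why the pendulum's upright-angle cost is continuous across the wrap point.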
c70c920126b64363bd9b57fc952ad1be98664ef5 | 5,455 | py | Python | watchdog_kj_kultura/organizations/migrations/0001_initial_squashed_0007_auto_20161230_1852.py | watchdogpolska/watchdog-kj-kultura | ea1a5c52ef2a174c012cc08eff5fdd7aa3b911b0 | [
"MIT"
] | null | null | null | watchdog_kj_kultura/organizations/migrations/0001_initial_squashed_0007_auto_20161230_1852.py | watchdogpolska/watchdog-kj-kultura | ea1a5c52ef2a174c012cc08eff5fdd7aa3b911b0 | [
"MIT"
] | 138 | 2016-12-10T19:18:18.000Z | 2019-06-10T19:32:40.000Z | watchdog_kj_kultura/organizations/migrations/0001_initial_squashed_0007_auto_20161230_1852.py | watchdogpolska/watchdog-kj-kultura | ea1a5c52ef2a174c012cc08eff5fdd7aa3b911b0 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2016-12-30 18:40
from __future__ import unicode_literals
import autoslug.fields
from django.conf import settings
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import djgeojson.fields
import model_utils.fields
import watchdog_kj_kultura.organizations.validators
class Migration(migrations.Migration):
replaces = [('organizations', '0001_initial'), ('organizations', '0002_auto_20161210_1948'), ('organizations', '0003_auto_20161210_2136'), ('organizations', '0004_organization_pos'), ('organizations', '0005_organization_visible'), ('organizations', '0006_auto_20161222_0252'), ('organizations', '0007_auto_20161230_1852')]
dependencies = [
('teryt_tree', '0002_auto_20151216_0201'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='MetaCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('name', models.CharField(max_length=50, verbose_name='Name')),
('key', models.CharField(max_length=25, verbose_name='Key')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['created'],
'verbose_name_plural': 'MetaCategorys',
'verbose_name': 'MetaCategory',
},
),
migrations.CreateModel(
name='Organization',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('name', models.CharField(max_length=50, verbose_name='Name')),
('slug', autoslug.fields.AutoSlugField(editable=False, populate_from='name', unique=True, verbose_name='Slug')),
('email', models.EmailField(max_length=254, verbose_name='E-mail')),
('meta', django.contrib.postgres.fields.jsonb.JSONField(default={}, verbose_name='Metadata')),
('jst', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='teryt_tree.JednostkaAdministracyjna', verbose_name='Unit of administrative division')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ['created'],
'verbose_name_plural': 'Organizations',
'verbose_name': 'Organization',
},
),
migrations.CreateModel(
name='Category',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('name', models.CharField(max_length=50, verbose_name='Name')),
('slug', autoslug.fields.AutoSlugField(editable=False, populate_from='name', unique=True, verbose_name='Slug')),
],
options={
'ordering': ['created'],
'verbose_name_plural': 'Categories',
'verbose_name': 'Category',
},
),
migrations.AlterModelOptions(
name='metacategory',
options={'ordering': ['pk'], 'verbose_name': 'MetaCategory', 'verbose_name_plural': 'MetaCategorys'},
),
migrations.AddField(
model_name='organization',
name='category',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='organizations.Category', verbose_name='Category'),
),
migrations.AddField(
model_name='organization',
name='pos',
field=djgeojson.fields.PointField(blank=True, null=True, verbose_name='Position'),
),
migrations.AddField(
model_name='organization',
name='visible',
field=models.BooleanField(default=False, help_text='Check to mark organization as public visible', verbose_name='Public visible'),
),
migrations.AlterField(
model_name='metacategory',
name='key',
field=models.CharField(help_text='They are permitted only Latin characters and numbers.', max_length=25, validators=[watchdog_kj_kultura.organizations.validators.is_allnum], verbose_name='Key'),
),
]
| 54.009901 | 326 | 0.640697 |
3ef1799b6f0c065a9dce7b0faa97e48e2a5e0a4a | 448 | py | Python | core/src/main/python/synapse/ml/core/spark/FluentAPI.py | ppruthi/SynapseML | d38c82455b28585cb35f6f4386a508ff4026f1d3 | [
"MIT"
] | null | null | null | core/src/main/python/synapse/ml/core/spark/FluentAPI.py | ppruthi/SynapseML | d38c82455b28585cb35f6f4386a508ff4026f1d3 | [
"MIT"
] | null | null | null | core/src/main/python/synapse/ml/core/spark/FluentAPI.py | ppruthi/SynapseML | d38c82455b28585cb35f6f4386a508ff4026f1d3 | [
"MIT"
] | null | null | null | # Copyright (C) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE in project root for information.
import sys
if sys.version >= "3":
basestring = str
import pyspark
def _mlTransform(self, t):
return t.transform(self)
setattr(pyspark.sql.dataframe.DataFrame, "mlTransform", _mlTransform)
def _mlFit(self, e):
return e.fit(self)
setattr(pyspark.sql.dataframe.DataFrame, "mlFit", _mlFit)
| 18.666667 | 78 | 0.736607 |
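FluentAPI.py above attaches free functions to pyspark's `DataFrame` with `setattr` so that transformers and estimators chain fluently off the data frame itself. The same pattern in isolation, with dummy stand-in classes instead of pyspark (all names here are hypothetical):

```python
class Frame:
    """Stand-in for a DataFrame-like object (illustrative, not pyspark)."""
    def __init__(self, rows):
        self.rows = rows

class Doubler:
    """Stand-in for a Transformer exposing .transform(frame)."""
    def transform(self, frame):
        return Frame([r * 2 for r in frame.rows])

def _ml_transform(self, t):
    # Same shape as FluentAPI's _mlTransform: delegate to the transformer.
    return t.transform(self)

# Monkey-patch the method on, exactly like the setattr(...) calls above.
setattr(Frame, "mlTransform", _ml_transform)

out = Frame([1, 2, 3]).mlTransform(Doubler())
print(out.rows)  # → [2, 4, 6]
```

The design choice is purely ergonomic: `df.mlTransform(t)` reads left-to-right in a pipeline, whereas the underlying call is still `t.transform(df)`.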
d8a5b667656c679e52cf0b1d81c42e27c6a1b0f9 | 3,723 | py | Python | configs/ssd/ssd300_coco.py | mengfu188/mmdetection.bak | 0bc0ea591b5725468f83f9f48630a1e3ad599303 | [
"Apache-2.0"
] | 2 | 2020-07-14T13:55:17.000Z | 2021-05-07T11:25:31.000Z | configs/ssd/ssd300_coco.py | mengfu188/mmdetection.bak | 0bc0ea591b5725468f83f9f48630a1e3ad599303 | [
"Apache-2.0"
] | null | null | null | configs/ssd/ssd300_coco.py | mengfu188/mmdetection.bak | 0bc0ea591b5725468f83f9f48630a1e3ad599303 | [
"Apache-2.0"
] | null | null | null | # model settings
input_size = 300
model = dict(
type='SingleStageDetector',
pretrained='',
backbone=dict(
type='Mb_Tiny_RFB',
out_indices=(7, 10, 12, 13)),
neck=None,
bbox_head=dict(
type='SSDHead',
input_size=input_size,
in_channels=(64, 128, 256, 256),
num_classes=3,
anchor_strides=(8, 16, 32, 64),
basesize_ratio_range=(0.15, 0.9),
anchor_ratios=([1], [1], [1], [1]),
target_means=(.0, .0, .0, .0),
target_stds=(0.1, 0.1, 0.2, 0.2)))
cudnn_benchmark = True
train_cfg = dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.,
ignore_iof_thr=-1,
gt_max_assign_all=False),
smoothl1_beta=1.,
allowed_border=-1,
pos_weight=-1,
neg_pos_ratio=3,
debug=False)
test_cfg = dict(
nms=dict(type='nms', iou_thr=0.45),
min_bbox_size=0,
score_thr=0.02,
max_per_img=200)
# model training and testing settings
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile', to_float32=True),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='PhotoMetricDistortion',
brightness_delta=32,
contrast_range=(0.5, 1.5),
saturation_range=(0.5, 1.5),
hue_delta=18),
dict(
type='Expand',
mean=img_norm_cfg['mean'],
to_rgb=img_norm_cfg['to_rgb'],
ratio_range=(1, 4)),
dict(
type='MinIoURandomCrop',
min_ious=(0.1, 0.3, 0.5, 0.7, 0.9),
min_crop_size=0.3),
dict(type='Resize', img_scale=(300, 300), keep_ratio=False),
dict(type='Normalize', **img_norm_cfg),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(300, 300),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=False),
dict(type='Normalize', **img_norm_cfg),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
imgs_per_gpu=8,
workers_per_gpu=3,
train=dict(
type='RepeatDataset',
times=5,
dataset=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_train2017.json',
img_prefix=data_root + 'train2017/',
pipeline=train_pipeline)),
val=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_val2017.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'annotations/instances_val2017.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline))
# optimizer
optimizer = dict(type='SGD', lr=2e-3, momentum=0.9, weight_decay=5e-4)
optimizer_config = dict()
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[16, 22])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
# dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 24
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/ssd300_coco'
load_from = None
resume_from = None
workflow = [('train', 1)]
| 28.860465 | 79 | 0.609992 |
e87aa13ebbc44ae872d7b67592eedc87a68dde46 | 4,689 | py | Python | RecoLocalCalo/HcalRecAlgos/python/RemoveAddSevLevel.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 6 | 2017-09-08T14:12:56.000Z | 2022-03-09T23:57:01.000Z | RecoLocalCalo/HcalRecAlgos/python/RemoveAddSevLevel.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 545 | 2017-09-19T17:10:19.000Z | 2022-03-07T16:55:27.000Z | RecoLocalCalo/HcalRecAlgos/python/RemoveAddSevLevel.py | ckamtsikis/cmssw | ea19fe642bb7537cbf58451dcf73aa5fd1b66250 | [
"Apache-2.0"
] | 14 | 2017-10-04T09:47:21.000Z | 2019-10-23T18:04:45.000Z | from __future__ import print_function
from __future__ import absolute_import
import FWCore.ParameterSet.Config as cms
def RemoveFlag(sevLevelComputer,flag="HFLongShort"):
''' Removes the specified flag from the Severity Level Computer,
and returns the revised Computer.'''
removeSeverity=-1 # Track which Severity Level has been modified
# Loop over all levels
for i in range(len(sevLevelComputer.SeverityLevels)):
Flags=sevLevelComputer.SeverityLevels[i].RecHitFlags.value()
if flag not in Flags: # Flag not present for this level
continue
#Remove flag
Flags.remove(flag)
ChanStat=sevLevelComputer.SeverityLevels[i].ChannelStatus.value()
# Check to see if Severity Level no longer contains any useful information
if len(Flags)==0 and ChanStat==['']:
removeSeverity=i
else:
# Set revised list of flags for this severity level
sevLevelComputer.SeverityLevels[i].RecHitFlags=Flags
break
# Removing flag results in empty severity level; remove it
if (removeSeverity>-1):
sevLevelComputer.SeverityLevels.remove(sevLevelComputer.SeverityLevels[removeSeverity])
return sevLevelComputer
def PrintLevels(SLComp):
print("Severity Level Computer Levels and associated flags/Channel Status values:")
for i in SLComp.SeverityLevels:
print("\t Level = %i"%i.Level.value())
print("\t\t RecHit Flags = %s"%i.RecHitFlags.value())
print("\t\t Channel Status = %s"%i.ChannelStatus.value())
print()
return
def AddFlag(sevLevelComputer,flag="UserDefinedBit0",severity=10,verbose=True):
''' Adds specified flag to severity level computer using specified severity level.
If flag already exists at another severity level, it is removed from that level.
'''
AddedSeverity=False
removeSeverity=-1
allowedflags=[]
for i in sevLevelComputer.SeverityLevels:
for j in i.RecHitFlags.value():
if j=="":
continue
allowedflags.append(j)
#print "Allowed flags = ",allowedflags
if flag not in allowedflags and verbose:
print("\n\n")
for j in range(0,3):
print("###################################################")
print("\nWARNING!!!!!! You are adding a flag \n\t'%s' \nthat is not defined in the Severity Level Computer!"%flag)
print("This can be EXCEPTIONALLY dangerous if you do not \nknow what you are doing!\n")
print("Proceed with EXTREME caution!\n")
for j in range(0,3):
print("###################################################")
print("\n\n")
#Loop over severity Levels
for i in range(len(sevLevelComputer.SeverityLevels)):
Level=sevLevelComputer.SeverityLevels[i].Level.value()
Flags=sevLevelComputer.SeverityLevels[i].RecHitFlags.value()
if Level==severity: # Found the specified level
if (Flags==['']):
Flags=[flag] # Create new vector for this flag
else:
if flag not in Flags: # don't need to add flag if it's already there
Flags.append(flag) # append flag to existing vector
sevLevelComputer.SeverityLevels[i].RecHitFlags=Flags # Set new RecHitFlags vector
AddedSeverity=True
else: # Found some other level; be sure to remove flag from it
if flag not in Flags:
continue
else:
                Flags.remove(flag)
                ChanStat=sevLevelComputer.SeverityLevels[i].ChannelStatus.value()
                # Removing flag leaves nothing else: need to remove this level completely
                if len(Flags)==0 and ChanStat==['']:
removeSeverity=i
else:
sevLevelComputer.SeverityLevels[i].RecHitFlags=Flags
# Remove any newly-empty levels
if (removeSeverity>-1):
sevLevelComputer.SeverityLevels.remove(sevLevelComputer.SeverityLevels[removeSeverity])
# No existing severity level for specified severity was found;
# add a new one
if (AddedSeverity==False):
sevLevelComputer.SeverityLevels.append(cms.PSet(Level=cms.int32(severity),
RecHitFlags=cms.vstring(flag),
ChannelStatus=cms.vstring("")))
return sevLevelComputer
##########################
if __name__=="__main__":
from . import hcalRecAlgoESProd_cfi as ES
ES.hcalRecAlgos=RemoveFlag(ES.hcalRecAlgos)
ES.hcalRecAlgos=AddFlag(ES.hcalRecAlgos,flag="HOBit",severity=5)
PrintLevels(ES.hcalRecAlgos)
| 39.403361 | 122 | 0.618895 |
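The remove-then-prune logic of `RemoveFlag` above — drop the flag from the level that carries it, and delete the level entirely if neither flags nor channel-status entries remain — can be sketched over plain dicts instead of `cms.PSet` objects (a simplified, hypothetical stand-in, not the CMSSW API):

```python
def remove_flag(levels, flag):
    """Drop `flag` from a list of severity levels; prune a level left empty.

    Each level is a plain dict {"level": int, "flags": [...], "status": [...]},
    a simplified stand-in for the cms.PSet structures used above.
    """
    for i, lvl in enumerate(levels):
        if flag not in lvl["flags"]:
            continue
        lvl["flags"].remove(flag)
        if not lvl["flags"] and not lvl["status"]:
            del levels[i]  # level no longer carries any information
        break
    return levels

levels = [
    {"level": 10, "flags": ["HFLongShort"], "status": []},
    {"level": 5, "flags": ["HOBit", "OtherBit"], "status": []},
]
print(remove_flag(levels, "HFLongShort"))
# → [{'level': 5, 'flags': ['HOBit', 'OtherBit'], 'status': []}]
```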
22e8027dbfb38aad56439a5fcc2703c9fa03c250 | 2,833 | py | Python | test_elasticsearch/test_serializer.py | achave11/elasticsearch-py | 5611445203ebabb1a450b17c2c93cd3546a12071 | [
"Apache-2.0"
] | 1 | 2020-09-16T01:51:00.000Z | 2020-09-16T01:51:00.000Z | test_elasticsearch/test_serializer.py | achave11/elasticsearch-py | 5611445203ebabb1a450b17c2c93cd3546a12071 | [
"Apache-2.0"
] | null | null | null | test_elasticsearch/test_serializer.py | achave11/elasticsearch-py | 5611445203ebabb1a450b17c2c93cd3546a12071 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import sys
import uuid
from datetime import datetime
from decimal import Decimal
from elasticsearch.serializer import (
JSONSerializer,
Deserializer,
DEFAULT_SERIALIZERS,
TextSerializer,
)
from elasticsearch.exceptions import SerializationError, ImproperlyConfigured
from .test_cases import TestCase, SkipTest
class TestJSONSerializer(TestCase):
def test_datetime_serialization(self):
self.assertEquals(
'{"d":"2010-10-01T02:30:00"}',
JSONSerializer().dumps({"d": datetime(2010, 10, 1, 2, 30)}),
)
def test_decimal_serialization(self):
if sys.version_info[:2] == (2, 6):
raise SkipTest("Float rounding is broken in 2.6.")
self.assertEquals('{"d":3.8}', JSONSerializer().dumps({"d": Decimal("3.8")}))
def test_uuid_serialization(self):
self.assertEquals(
'{"d":"00000000-0000-0000-0000-000000000003"}',
JSONSerializer().dumps(
{"d": uuid.UUID("00000000-0000-0000-0000-000000000003")}
),
)
def test_raises_serialization_error_on_dump_error(self):
self.assertRaises(SerializationError, JSONSerializer().dumps, object())
def test_raises_serialization_error_on_load_error(self):
self.assertRaises(SerializationError, JSONSerializer().loads, object())
self.assertRaises(SerializationError, JSONSerializer().loads, "")
self.assertRaises(SerializationError, JSONSerializer().loads, "{{")
def test_strings_are_left_untouched(self):
self.assertEquals("你好", JSONSerializer().dumps("你好"))
class TestTextSerializer(TestCase):
def test_strings_are_left_untouched(self):
self.assertEquals("你好", TextSerializer().dumps("你好"))
def test_raises_serialization_error_on_dump_error(self):
self.assertRaises(SerializationError, TextSerializer().dumps, {})
class TestDeserializer(TestCase):
def setUp(self):
super(TestDeserializer, self).setUp()
self.de = Deserializer(DEFAULT_SERIALIZERS)
def test_deserializes_json_by_default(self):
self.assertEquals({"some": "data"}, self.de.loads('{"some":"data"}'))
def test_deserializes_text_with_correct_ct(self):
self.assertEquals(
'{"some":"data"}', self.de.loads('{"some":"data"}', "text/plain")
)
self.assertEquals(
'{"some":"data"}',
self.de.loads('{"some":"data"}', "text/plain; charset=whatever"),
)
def test_raises_serialization_error_on_unknown_mimetype(self):
self.assertRaises(SerializationError, self.de.loads, "{}", "text/html")
def test_raises_improperly_configured_when_default_mimetype_cannot_be_deserialized(
self,
):
self.assertRaises(ImproperlyConfigured, Deserializer, {})
| 34.13253 | 87 | 0.673491 |
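The `Deserializer` being tested above dispatches on mimetype, stripping any `; charset=...` suffix before the lookup. A minimal registry sketch of that behavior (hypothetical class, not the elasticsearch-py implementation):

```python
import json

class SimpleDeserializer:
    """Mimetype -> loader registry; simplified take on the Deserializer above."""

    def __init__(self, loaders, default="application/json"):
        if default not in loaders:
            raise ValueError("default mimetype %r has no loader" % default)
        self.loaders = loaders
        self.default = default

    def loads(self, data, mimetype=None):
        # "text/plain; charset=utf-8" -> "text/plain"
        key = (mimetype or self.default).split(";")[0].strip()
        try:
            loader = self.loaders[key]
        except KeyError:
            raise ValueError("unknown mimetype %r" % key)
        return loader(data)

de = SimpleDeserializer({"application/json": json.loads,
                         "text/plain": lambda s: s})
print(de.loads('{"some": "data"}'))              # → {'some': 'data'}
print(de.loads("raw", "text/plain; charset=x"))  # → raw
```

This mirrors the three behaviors the tests exercise: JSON by default, charset parameters ignored, and an error for an unregistered mimetype.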
eac37bbea75c77b8a34d4cdb0548c1665e340439 | 1,378 | py | Python | ciutil.py | raxvan/srcbuild | 28687d9e90b8bb93d1ca18d666da6b0c8b7012c3 | [
"Apache-2.0"
] | null | null | null | ciutil.py | raxvan/srcbuild | 28687d9e90b8bb93d1ca18d666da6b0c8b7012c3 | [
"Apache-2.0"
] | null | null | null | ciutil.py | raxvan/srcbuild | 28687d9e90b8bb93d1ca18d666da6b0c8b7012c3 | [
"Apache-2.0"
] | null | null | null |
import os
import sys
import time
def files_equal(file_path_a,file_path_b):
import sys
if not os.path.exists(file_path_a):
print("FAILURE\nMissing file:" + file_path_a)
sys.exit(-1)
if not os.path.exists(file_path_b):
print("FAILURE\nMissing file:" + file_path_b)
sys.exit(-1)
_af = os.path.abspath(file_path_a)
_bf = os.path.abspath(file_path_b)
import filecmp
if filecmp.cmp(_af, _bf):
print("files equal:SUCCESS\n " + _af + "\n " + _bf)
return;
print("FAILURE\nFiles are not equal\n" + _af + "\n" + _bf)
sys.exit(-1)
def _wait_for_remove(path):
itr = 0
while os.path.exists(path):
if itr > 10:
print("FAILED to remove [" + path + "]")
sys.exit(-1)
time.sleep(0.1)
itr = itr + 1
def rmdir(dir_path):
abs_dir = os.path.abspath(dir_path)
message = "Removing [" + abs_dir + "]"
if not os.path.exists(abs_dir):
print(message + " ... not found ...")
return
print(message)
for path in os.listdir(abs_dir):
dpath = os.path.join(abs_dir,os.fsdecode(path))
#print(dpath)
if os.path.isfile(dpath):
os.remove(dpath)
_wait_for_remove(dpath)
continue
if os.path.isdir(dpath):
rmdir(dpath)
continue
os.rmdir(abs_dir)
_wait_for_remove(abs_dir)
def read_text_file(abs_file_path):
f = open(abs_file_path,"r");
content = f.read()
f.close()
	return content
| 21.2 | 60 | 0.644412 |
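The `_wait_for_remove` helper above polls until a deleted path actually disappears (useful on platforms where removal is asynchronous). A standalone variant that reports failure instead of calling `sys.exit()` (hypothetical name and parameters):

```python
import os
import tempfile
import time

def wait_until_gone(path, attempts=10, delay=0.01):
    """Poll until `path` no longer exists; return False if it never does.

    Illustrative stand-in for ciutil's _wait_for_remove, returning a bool
    instead of exiting the process.
    """
    for _ in range(attempts):
        if not os.path.exists(path):
            return True
        time.sleep(delay)
    return not os.path.exists(path)

# Create a temp file, delete it, then confirm the deletion is visible.
fd, tmp = tempfile.mkstemp()
os.close(fd)
os.remove(tmp)
print(wait_until_gone(tmp))  # → True
```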
594afeb57528c542ad50df9143b36721692861a0 | 9,805 | py | Python | dunnotheway/_test.py | iuriramos/DunnoTheWay | 29831698e940183688809d865fa75136abddaf0d | [
"BSD-2-Clause"
] | null | null | null | dunnotheway/_test.py | iuriramos/DunnoTheWay | 29831698e940183688809d865fa75136abddaf0d | [
"BSD-2-Clause"
] | null | null | null | dunnotheway/_test.py | iuriramos/DunnoTheWay | 29831698e940183688809d865fa75136abddaf0d | [
"BSD-2-Clause"
] | null | null | null | import pandas as pd
import matplotlib
matplotlib.use('Agg')  # select the non-interactive backend before pyplot is imported
import matplotlib.pyplot as plt
from collections import defaultdict, namedtuple
from common.db import open_database_session
from common.utils import distance_two_dimensions_coordinates
import flight.models.fixtures
from flight.models.airport import Airport
from flight.models.flight import Flight
from flight.models.flight_location import FlightLocation
from weather.models.convection_cell import ConvectionCell
from engine.models.section import Section
from engine.models._obstacle import Obstacle
from engine._obstacle_detector import ObstacleDetector
session = None
# TODO: change to attributes / maybe a class is more convenient?
# Intersection = namedtuple(
# 'Intersection', ['convection_cell', 'departure_airport', 'destination_airports', 'impact?'])
Intersection = namedtuple(
'Intersection', ['convection_cell', 'partition', 'flight_ids', 'all_flight_ids'])
reference_point = (lambda x, longitude_based:
x.longitude if longitude_based else x.latitude)
def convection_cells_stats(convection_cell_ids):
global session
with open_database_session() as session:
BSB = Airport.airport_from_icao_code(session, 'SBBR')
GRU = Airport.airport_from_icao_code(session, 'SBGR')
print('create intersections table from', BSB, 'to', GRU)
# create_intersections_table(session, convection_cell_ids, BSB, GRU)
print('#' * 20)
print()
print('create intersections table from', GRU, 'to', BSB)
create_intersections_table(session, convection_cell_ids, GRU, BSB)
print('#' * 20)
print()
print('generate flights vs convection cells charts')
plot_flights_vs_convection_cells(session, convection_cell_ids)
def create_intersections_table(session, convection_cell_ids, departure_airport, destination_airport):
convection_cell_to_partition_likelihood = {}
convection_cell_to_callsigns = defaultdict(set)
for convection_cell_id in convection_cell_ids:
convection_cell = session.query(ConvectionCell).filter(
ConvectionCell.id == convection_cell_id).first()
intersections = check_for_intersections(
departure_airport, destination_airport, convection_cell)
if not intersections:
continue
for weather_obstacle in convection_cell.obstacles:
flight = weather_obstacle.flight_location.flight
callsign = flight.flight_plan.callsign
convection_cell_to_callsigns[convection_cell].add(callsign)
max_intersection = intersections[0]
max_likelihood = len(max_intersection.flight_ids)/len(max_intersection.all_flight_ids)
for intersection in intersections[1:]:
likelihood = len(intersection.flight_ids)/len(intersection.all_flight_ids)
if likelihood > max_likelihood:
max_intersection, max_likelihood = intersection, likelihood
convection_cell_to_partition_likelihood[convection_cell] = (
max_intersection.partition, max_likelihood)
# log result
print('convection_cell', 'partition', 'likelihood')
for convection_cell, (partition, likelihood) in convection_cell_to_partition_likelihood.items():
print(convection_cell, partition, likelihood)
print(convection_cell_to_callsigns[convection_cell])
def plot_flights_vs_convection_cells(session, convection_cell_ids):
flight_key_to_convection_cells = defaultdict(set)
flight_key_to_flights = defaultdict(set)
for convection_cell_id in convection_cell_ids:
convection_cell = session.query(ConvectionCell).filter(
ConvectionCell.id == convection_cell_id).first()
for weather_obstacle in convection_cell.obstacles:
flight = weather_obstacle.flight_location.flight
key = flight.airplane.icao_code
flight_key_to_convection_cells[key].add(convection_cell)
flight_key_to_flights[key].add(flight)
for flight_key in flight_key_to_convection_cells:
flight_locations = []
for flight in flight_key_to_flights[flight_key]:
flight_locations += flight.flight_locations
flight_locations.sort(key=lambda x: x.timestamp)
plot_flight_locations_vs_convection_cells(
flight_locations, flight_key_to_convection_cells[flight_key])
def plot_flight_locations_vs_convection_cells(flight_locations, convection_cells):
plot_flight_trajectory_and_convection_cells(flight_locations, convection_cells)
# plot_flight_location_params(flight_locations)
def plot_flight_trajectory_and_convection_cells(flight_locations, convection_cells):
latitudes = [float(fl.latitude) for fl in flight_locations]
longitudes = [float(fl.longitude) for fl in flight_locations]
_, axes = plt.subplots(figsize=(12, 4))
# plot flight path
axes.scatter(latitudes, longitudes)
# plot convection cells
x = [cc.latitude for cc in convection_cells]
y = [cc.longitude for cc in convection_cells]
axes.scatter(x, y)
for i, txt in enumerate([cc.id for cc in convection_cells]):
axes.annotate(txt, (x[i], y[i]))
# plot airports
flight = flight_locations[0].flight
departure_airport = flight.flight_plan.departure_airport
destination_airport = flight.flight_plan.destination_airport
x = [airport.latitude for airport in (departure_airport, destination_airport)]
y = [airport.longitude for airport in (departure_airport, destination_airport)]
axes.scatter(x, y)
axes.scatter(x, y, c=('green', 'red'))
for i, txt in enumerate([airport.icao_code
for airport in (departure_airport, destination_airport)]):
axes.annotate(txt, (x[i], y[i]))
axes.set_title('Flight Trajectory of ' + str(flight) +
' from ' + str(departure_airport) +
' to ' + str(destination_airport))
axes.set_xlabel('Latitude')
axes.set_ylabel('Longitude')
plt.show()
def plot_flight_location_params(flight_locations):
'''Plot flight locations parameters'''
_, axes = plt.subplots(nrows=2, ncols=1)
axis_altitude, axis_speed = axes
# plot flight location params
plot_flight_location_altitudes(flight_locations, axis_altitude)
plot_flight_location_speeds(flight_locations, axis_speed)
plt.show()
def plot_flight_location_speeds(flight_locations, axis):
speeds = [float(flight_location.speed) for flight_location in flight_locations]
axis.plot(speeds)
axis.set_title('Cruising Speed')
def plot_flight_location_altitudes(flight_locations, axis):
altitudes = [float(flight_location.altitude) for flight_location in flight_locations]
axis.plot(altitudes)
axis.set_title('Altitude')
def check_for_intersections(departure_airport, destination_airport, convection_cell):
intersections = []
sections = Section.sections_from_airports(
departure_airport, destination_airport)
clusters_freq = [len(set(s.labels)) for s in sections]
print ('clusters found')
print (clusters_freq)
longitude_based = Airport.should_be_longitude_based(
departure_airport, destination_airport)
follow_ascending_order = Airport.follow_ascending_order(
departure_airport, destination_airport)
cells = [convection_cell] # TODO: maybe use more than one cell
iter_sections, iter_cells = iter(sections), iter(cells)
section, cell = next(iter_sections), next(iter_cells)
def move_section_iterator(section, cell):
return ((follow_ascending_order and
section.section_point < reference_point(cell, longitude_based)) or
(not follow_ascending_order and
section.section_point > reference_point(cell, longitude_based)))
while True:
try:
distance = (ObstacleDetector.
distance_between_section_and_cell(section, cell))
if distance < cell.radius:
intersection = (
intersection_between_section_and_cell(section, cell))
if intersection.flight_ids:
intersections.append(intersection)
section = next(iter_sections)
else:
if move_section_iterator(section, cell):
# section is placed before cell
section = next(iter_sections)
else:
cell = next(iter_cells)
except StopIteration:
break
return intersections
def intersection_between_section_and_cell(section, cell):
def has_intersection_between_record_and_cell(record, cell):
distance = distance_two_dimensions_coordinates(
(record.latitude, record.longitude), (cell.latitude, cell.longitude))
return distance < cell.radius
#### TODO: forcing update on section labels
_ = section.labels
labels = []
all_flight_ids = set()
for label, records in section:
for record in records:
all_flight_ids.add(record.flight_id)
if has_intersection_between_record_and_cell(record, cell):
labels.append(label)
# break
flight_ids = {record.flight_id
for label in labels
for record in section.records_from_label(label)}
return Intersection(cell, section, flight_ids, all_flight_ids)
if __name__ == '__main__':
# mapping = convection_cells_stats(
# convection_cell_ids=[3, 4, 5, 9])
# convection_cells_stats(list(range(2, 18)))
    convection_cells_stats([3, 4, 5, 8])
| 38.602362 | 101 | 0.699847 |
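`check_for_intersections` above advances two ordered iterators (route sections and convection cells) in lockstep, stepping whichever one lags behind. The same two-pointer sweep on plain sorted lists (illustrative helper, not part of the module):

```python
def sorted_intersection(xs, ys):
    """Two-pointer sweep over two ascending lists, collecting common values.

    A simplified stand-in for the section/cell iterator advance in
    check_for_intersections above.
    """
    out = []
    i = j = 0
    while i < len(xs) and j < len(ys):
        if xs[i] == ys[j]:
            out.append(xs[i])
            i += 1
            j += 1
        elif xs[i] < ys[j]:
            i += 1  # "section" sits before the "cell": advance sections
        else:
            j += 1  # "cell" sits before the "section": advance cells
    return out

print(sorted_intersection([1, 3, 5, 7, 9], [2, 3, 4, 7, 10]))  # → [3, 7]
```

Because each step discards one element from one list, the sweep is O(len(xs) + len(ys)), which is why the original code sorts by longitude or latitude before pairing sections with cells.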
5262fe24c4cdbf7dc34328250a0221fd32c9f382 | 411 | py | Python | portal/privascope_portal/wsgi.py | privascope/privascope_portal | a580df8c94f68e69f96fe099b0032dc922227804 | ["Apache-2.0"] | null | null | null | portal/privascope_portal/wsgi.py | privascope/privascope_portal | a580df8c94f68e69f96fe099b0032dc922227804 | ["Apache-2.0"] | null | null | null | portal/privascope_portal/wsgi.py | privascope/privascope_portal | a580df8c94f68e69f96fe099b0032dc922227804 | ["Apache-2.0"] | null | null | null |
"""
WSGI config for privascope_portal project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "privascope_portal.settings")
application = get_wsgi_application()
| 24.176471 | 78 | 0.79562 |
ee484eb7455287ee10b79a1959b8b999195c7503 | 2,601 | py | Python | forte/data/readers/plaintext_reader.py | zhanyuanucb/forte | 4cb691c11b6945e789d7e445e62ba1ac5a9ffc4a | ["Apache-2.0"] | 163 | 2019-11-01T19:25:40.000Z | 2022-03-30T22:49:45.000Z | forte/data/readers/plaintext_reader.py | zhanyuanucb/forte | 4cb691c11b6945e789d7e445e62ba1ac5a9ffc4a | ["Apache-2.0"] | 633 | 2019-11-01T20:07:08.000Z | 2022-03-31T23:11:20.000Z | forte/data/readers/plaintext_reader.py | KGerring/forte | 7dc6e6c7d62d9a4126bdfc5ca02d15be3ffd61ca | ["Apache-2.0"] | 62 | 2019-11-01T19:41:33.000Z | 2022-03-24T11:14:21.000Z |
# Copyright 2019 The Forte Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The reader that reads plain text data into Datapacks.
"""
import os
from typing import Any, Iterator, Dict, Set
from forte.data.data_pack import DataPack
from forte.data.data_utils_io import dataset_path_iterator
from forte.data.base_reader import PackReader
from ft.onto.base_ontology import Document
__all__ = [
"PlainTextReader",
]
class PlainTextReader(PackReader):
r""":class:`PlainTextReader` is designed to read in plain text dataset."""
def _collect(self, text_directory) -> Iterator[Any]: # type: ignore
r"""Should be called with param ``text_directory`` which is a path to a
folder containing txt files.
Args:
text_directory: text directory containing the files.
Returns: Iterator over paths to .txt files
"""
return dataset_path_iterator(text_directory, self.configs.file_ext)
def _cache_key_function(self, text_file: str) -> str:
return os.path.basename(text_file)
# pylint: disable=unused-argument
def text_replace_operation(self, text: str):
return []
def _parse_pack(self, file_path: str) -> Iterator[DataPack]:
pack = DataPack()
with open(file_path, "r", encoding="utf8", errors="ignore") as file:
text = file.read()
pack.set_text(text, replace_func=self.text_replace_operation)
Document(pack, 0, len(pack.text))
pack.pack_name = file_path
yield pack
@classmethod
def default_configs(cls):
return {"file_ext": ".txt"}
def record(self, record_meta: Dict[str, Set[str]]):
r"""Method to add output type record of `PlainTextReader` which is
`ft.onto.base_ontology.Document` with an empty set
to :attr:`forte.data.data_pack.Meta.record`.
Args:
record_meta: the field in the datapack for type record that need to
fill in for consistency checking.
"""
record_meta["ft.onto.base_ontology.Document"] = set()
| 33.346154 | 79 | 0.690888 |
1d0741c4bd728102c1afc184488192f4a7c58303 | 1,634 | py | Python | src/cfnlint/rules/functions/If.py | sthagen/aws-cloudformation-cfn-lint | 8628b2bab208acb2ac7843d2cadf7b56252058f7 | ["MIT-0"] | 445 | 2018-04-19T14:43:33.000Z | 2019-03-01T11:00:21.000Z | src/cfnlint/rules/functions/If.py | sthagen/cfn-lint | 80c8211eb028b374fdf547f21a8e218248dedc89 | ["MIT-0"] | 464 | 2018-04-19T17:29:50.000Z | 2019-03-01T14:20:19.000Z | src/cfnlint/rules/functions/If.py | sthagen/cfn-lint | 80c8211eb028b374fdf547f21a8e218248dedc89 | ["MIT-0"] | 93 | 2018-04-19T14:55:35.000Z | 2019-03-01T03:26:47.000Z |
"""
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
class If(CloudFormationLintRule):
"""Check if Condition exists"""
id = 'E1028'
shortdesc = 'Check Fn::If structure for validity'
description = 'Check Fn::If to make sure its valid. Condition has to be a string.'
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-conditions.html#intrinsic-function-reference-conditions-if'
tags = ['functions', 'if']
def match(self, cfn):
matches = []
# Build the list of functions
iftrees = cfn.search_deep_keys('Fn::If')
# Get the conditions used in the functions
for iftree in iftrees:
ifs = iftree[-1]
if isinstance(ifs, list):
if_condition = ifs[0]
if len(ifs) != 3:
message = 'Fn::If must be a list of 3 elements.'
matches.append(RuleMatch(
iftree[:-1], message
))
if not isinstance(if_condition, str):
message = 'Fn::If first element must be a condition and a string.'
matches.append(RuleMatch(
iftree[:-1] + [0], message
))
else:
message = 'Fn::If must be a list of 3 elements.'
matches.append(RuleMatch(
iftree[:-1], message
))
return matches
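The rule's two error cases — wrong arity and a non-string condition — reduced to a standalone toy checker (a sketch of the same logic, not the cfn-lint API; the real rule walks the whole template):

```python
def validate_fn_if(ifs):
    """Return the error messages E1028 would raise for one Fn::If value."""
    if not isinstance(ifs, list) or len(ifs) != 3:
        return ['Fn::If must be a list of 3 elements.']
    if not isinstance(ifs[0], str):
        return ['Fn::If first element must be a condition and a string.']
    return []

print(validate_fn_if(['IsProd', 'a', 'b']))   # ok: no errors
print(validate_fn_if(['IsProd', 'a']))        # wrong arity
print(validate_fn_if([True, 'a', 'b']))       # non-string condition
```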
| 36.311111 | 169 | 0.561812 |
6f47bb0da07100f82ecd27a8b90006165fc71e27 | 1,919 | py | Python | roi_align/setup.py | daikiclimate/Grid-Anchor-based-Image-Cropping-Pytorch | f6f925dadaf3c975f84c422ad575e0c92dc87320 | ["MIT"] | 51 | 2019-09-19T06:30:04.000Z | 2022-03-16T09:24:27.000Z | roi_align/setup.py | daikiclimate/Grid-Anchor-based-Image-Cropping-Pytorch | f6f925dadaf3c975f84c422ad575e0c92dc87320 | ["MIT"] | 13 | 2019-10-28T03:56:11.000Z | 2021-01-14T03:39:48.000Z | roi_align/setup.py | daikiclimate/Grid-Anchor-based-Image-Cropping-Pytorch | f6f925dadaf3c975f84c422ad575e0c92dc87320 | ["MIT"] | 19 | 2019-09-24T05:49:38.000Z | 2021-12-28T10:01:57.000Z |
from __future__ import print_function
import os
import torch
from pkg_resources import parse_version
min_version = parse_version('1.0.0')
current_version = parse_version(torch.__version__)
if current_version < min_version: #PyTorch before 1.0
from torch.utils.ffi import create_extension
sources = ['src/roi_align.c']
headers = ['src/roi_align.h']
extra_objects = []
#sources = []
#headers = []
defines = []
with_cuda = False
this_file = os.path.dirname(os.path.realpath(__file__))
print(this_file)
if torch.cuda.is_available():
print('Including CUDA code.')
sources += ['src/roi_align_cuda.c']
headers += ['src/roi_align_cuda.h']
defines += [('WITH_CUDA', None)]
with_cuda = True
extra_objects = ['src/roi_align_kernel.cu.o']
extra_objects = [os.path.join(this_file, fname) for fname in extra_objects]
ffi = create_extension(
'_ext.roi_align',
headers=headers,
sources=sources,
define_macros=defines,
relative_to=__file__,
with_cuda=with_cuda,
extra_objects=extra_objects
)
if __name__ == '__main__':
ffi.build()
else: # PyTorch 1.0 or later
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
print('Including CUDA code.')
current_dir = os.path.dirname(os.path.realpath(__file__))
#cuda_include = '/usr/local/cuda-10.0/include'
#GPU version
setup(
name='roi_align_api',
ext_modules=[
CUDAExtension(
name='roi_align_api',
sources=['src/roi_align_cuda.cpp', 'src/roi_align_kernel.cu'],
include_dirs=[current_dir]+torch.utils.cpp_extension.include_paths(cuda=True)
)
],
cmdclass={
'build_ext': BuildExtension
})
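The PyTorch version gate at the top of the file relies on `parse_version` producing comparable objects. For plain numeric versions the idea reduces to tuple comparison (a simplified sketch — the real `parse_version` also handles pre-releases and local version tags):

```python
def vtuple(version):
    """Naive version key: '1.0.0' -> (1, 0, 0). Numeric segments only."""
    return tuple(int(part) for part in version.split('.'))

print(vtuple('0.4.1') < vtuple('1.0.0'))   # True: old-style FFI build path
print(vtuple('1.2.0') < vtuple('1.0.0'))   # False: modern cpp_extension path
```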
| 28.641791 | 97 | 0.628452 |
b54276360c286c4c82e053d333042cdbdf80e066 | 196 | py | Python | kiwi_bugfix_typechecker/__init__.py | kiwi0fruit/jats-semi-supervised-pytorch | 67e9bb85f09f8ef02e17e495784d1d9a71c3adec | ["MIT"] | null | null | null | kiwi_bugfix_typechecker/__init__.py | kiwi0fruit/jats-semi-supervised-pytorch | 67e9bb85f09f8ef02e17e495784d1d9a71c3adec | ["MIT"] | null | null | null | kiwi_bugfix_typechecker/__init__.py | kiwi0fruit/jats-semi-supervised-pytorch | 67e9bb85f09f8ef02e17e495784d1d9a71c3adec | ["MIT"] | null | null | null |
def test_assert() -> None:
try:
assert False
# noinspection PyUnreachableCode
raise RuntimeError("assert keyword doesn't work")
except AssertionError:
pass
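`test_assert` exists because `python -O` strips `assert` statements entirely. The same check can be phrased as a predicate and compared against the stdlib `__debug__` flag, which `-O` sets to `False` (illustrative sketch):

```python
def asserts_enabled() -> bool:
    """True when `assert` statements are executed (i.e. not running under -O)."""
    enabled = False
    try:
        assert False
    except AssertionError:
        enabled = True
    return enabled

print(asserts_enabled(), __debug__)  # the two always agree
```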
| 24.5 | 57 | 0.637755 |
2fdb6cae9d203c40f41d1f85612044ba1c3cd027 | 753 | py | Python | instaclone/migrations/0001_initial.py | Kips-alih/instagram-clone | f64dd065f45c324981d8f35d9745cd5108aea5b6 | ["MIT"] | null | null | null | instaclone/migrations/0001_initial.py | Kips-alih/instagram-clone | f64dd065f45c324981d8f35d9745cd5108aea5b6 | ["MIT"] | null | null | null | instaclone/migrations/0001_initial.py | Kips-alih/instagram-clone | f64dd065f45c324981d8f35d9745cd5108aea5b6 | ["MIT"] | null | null | null |
# Generated by Django 2.0 on 2021-12-02 17:02
import cloudinary.models
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Image',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('image', cloudinary.models.CloudinaryField(max_length=255, verbose_name='image')),
('name', models.CharField(max_length=100)),
('caption', models.TextField(max_length=300)),
('pub_date', models.DateTimeField(auto_now_add=True, null=True)),
],
),
]
| 28.961538 | 114 | 0.598938 |
c0f32eb8cf765a614be150facf69da26c14b6375 | 2,968 | py | Python | saleor/dashboard/customer/filters.py | VanilleBid/weekly-saleor | e776e86ee7ce710929ef33878d936e2a8367a217 | ["BSD-3-Clause"] | null | null | null | saleor/dashboard/customer/filters.py | VanilleBid/weekly-saleor | e776e86ee7ce710929ef33878d936e2a8367a217 | ["BSD-3-Clause"] | 86 | 2018-03-08T14:19:19.000Z | 2018-05-12T14:55:16.000Z | saleor/dashboard/customer/filters.py | JesusDelgadoPatlan/tiendaSpark | 0c8cfe7fa6e070f57daf4d06e2776bc4059ad830 | ["BSD-3-Clause"] | 2 | 2018-03-05T12:29:10.000Z | 2018-09-28T12:40:52.000Z |
from django import forms
from django.db.models import Q
from django.utils.translation import npgettext, pgettext_lazy
from django_countries import countries
from django_filters import CharFilter, ChoiceFilter, OrderingFilter
from ...core.filters import SortedFilterSet
from ...userprofile.models import User
SORT_BY_FIELDS = (
('email', 'email'),
('default_billing_address__first_name', 'name'),
('default_billing_address__city', 'location'))
SORT_BY_FIELDS_LABELS = {
'email': pgettext_lazy(
'Customer list sorting option', 'email'),
'default_billing_address__first_name': pgettext_lazy(
'Customer list sorting option', 'name'),
'default_billing_address__city': pgettext_lazy(
'Customer list sorting option', 'location')}
IS_ACTIVE_CHOICES = (
('1', pgettext_lazy('Is active filter choice', 'Active')),
('0', pgettext_lazy('Is active filter choice', 'Not active')))
class UserFilter(SortedFilterSet):
name_or_email = CharFilter(
label=pgettext_lazy('Customer name or email filter', 'Name or email'),
method='filter_by_customer')
location = CharFilter(
label=pgettext_lazy('Customer list filter label', 'Location'),
method='filter_by_location')
is_active = ChoiceFilter(
label=pgettext_lazy('Customer list filter label', 'Is active'),
choices=IS_ACTIVE_CHOICES,
empty_label=pgettext_lazy('Filter empty choice label', 'All'),
widget=forms.Select)
sort_by = OrderingFilter(
label=pgettext_lazy('Customer list filter label', 'Sort by'),
fields=SORT_BY_FIELDS,
field_labels=SORT_BY_FIELDS_LABELS)
class Meta:
model = User
fields = []
def filter_by_customer(self, queryset, name, value):
return queryset.filter(
Q(email__icontains=value) |
Q(default_billing_address__first_name__icontains=value) |
Q(default_billing_address__last_name__icontains=value))
def filter_by_location(self, queryset, name, value):
q = Q(default_billing_address__city__icontains=value)
q |= Q(default_billing_address__country__icontains=value)
country_codes = self.get_mapped_country_codes_from_search(value)
for code in country_codes:
q |= Q(default_billing_address__country__icontains=code)
return queryset.filter(q)
def get_mapped_country_codes_from_search(self, value):
country_codes = []
for code, country in dict(countries).items():
if value.lower() in country.lower():
country_codes.append(code)
return country_codes
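The country-code lookup above is a case-insensitive substring match over a code→name mapping. The same logic in isolation, with a tiny fixed mapping instead of `django_countries` (illustrative only):

```python
countries = {'DE': 'Germany', 'FR': 'France', 'GB': 'United Kingdom'}

def codes_matching(value):
    return [code for code, name in countries.items()
            if value.lower() in name.lower()]

print(codes_matching('german'))   # matches Germany
print(codes_matching('united'))   # matches United Kingdom
```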
def get_summary_message(self):
counter = self.qs.count()
return npgettext(
'Number of matching records in the dashboard customers list',
'Found %(counter)d matching customer',
'Found %(counter)d matching customers',
number=counter) % {'counter': counter}
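`npgettext` picks the singular or plural template by count. The same mechanics with the stdlib's `gettext.NullTranslations` fallback, outside Django (sketch):

```python
import gettext

trans = gettext.NullTranslations()

def summary(counter):
    return trans.ngettext(
        'Found %(counter)d matching customer',
        'Found %(counter)d matching customers',
        counter) % {'counter': counter}

print(summary(1))
print(summary(3))
```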
| 38.545455 | 78 | 0.688679 |
693083c5546c9df640eb1847e4b676e69d06b302 | 143 | py | Python | FinalWork/main.py | LIGHT1213/MySQL_Class | 4b88b4bdc882aad17c85b211abaeaf8933a232ec | ["MIT"] | null | null | null | FinalWork/main.py | LIGHT1213/MySQL_Class | 4b88b4bdc882aad17c85b211abaeaf8933a232ec | ["MIT"] | null | null | null | FinalWork/main.py | LIGHT1213/MySQL_Class | 4b88b4bdc882aad17c85b211abaeaf8933a232ec | ["MIT"] | null | null | null |
from tkinter import *
import pymysql
import Login
import RecordAndSearch
global db
Login.UserLoginMain()
RecordAndSearch.RecordAndSearchMain()
| 17.875 | 37 | 0.853147 |
7b5b0e55d5fae86f10cd470179a7fb22d3a75481 | 506 | py | Python | App/ext.py | qianfengproject/DBTest | a31d569cf23d7998a68f77071d51f113d1742303 | ["Apache-2.0"] | null | null | null | App/ext.py | qianfengproject/DBTest | a31d569cf23d7998a68f77071d51f113d1742303 | ["Apache-2.0"] | null | null | null | App/ext.py | qianfengproject/DBTest | a31d569cf23d7998a68f77071d51f113d1742303 | ["Apache-2.0"] | null | null | null |
from flask_migrate import Migrate
from flask_session import Session
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
migrate = Migrate()
def init_ext(app):
    # session: Flask-Session reads these settings when Session() is
    # initialised, so they must be set before Session(app=app)
    app.config['SECRET_KEY'] = '100'
    app.config['SESSION_TYPE'] = 'redis'
    Session(app=app)
    migrate.init_app(app=app, db=db)
    # sqlalchemy
    app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:123456@localhost:3306/GitTest'
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    db.init_app(app=app)
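The module follows Flask's two-phase extension pattern: objects like `db` and `migrate` are created unbound at import time and attached to an app later via `init_app`. The pattern in miniature, with stand-in classes so nothing outside the stdlib is needed (all names here are illustrative):

```python
class DummyExt:
    """Stand-in for SQLAlchemy()/Migrate(): created unbound, bound later."""
    def __init__(self):
        self.app = None

    def init_app(self, app):
        self.app = app

ext = DummyExt()            # module level, like `db = SQLAlchemy()`

class App:                  # minimal stand-in for flask.Flask
    def __init__(self):
        self.config = {}

def init_ext(app):
    app.config['SECRET_KEY'] = '100'
    ext.init_app(app)       # late binding, like `db.init_app(app)`

app = App()
init_ext(app)
print(ext.app is app, app.config['SECRET_KEY'])
```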
| 24.095238 | 94 | 0.737154 |
485cc46fda4f72c3f7f6f371b0d74768a56f7a31 | 16,528 | py | Python | kartothek/io/dask/delayed.py | eacheson/kartothek | 2a72f521a28112e811cf2c204c8be315d7de45ed | ["MIT"] | null | null | null | kartothek/io/dask/delayed.py | eacheson/kartothek | 2a72f521a28112e811cf2c204c8be315d7de45ed | ["MIT"] | null | null | null | kartothek/io/dask/delayed.py | eacheson/kartothek | 2a72f521a28112e811cf2c204c8be315d7de45ed | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-
from collections import defaultdict
from functools import partial
import dask
from dask import delayed
from kartothek.core import naming
from kartothek.core.docs import default_docs
from kartothek.core.factory import _ensure_factory
from kartothek.core.naming import DEFAULT_METADATA_VERSION
from kartothek.core.utils import _check_callable
from kartothek.core.uuid import gen_uuid
from kartothek.io_components.delete import (
delete_common_metadata,
delete_indices,
delete_top_level_metadata,
)
from kartothek.io_components.gc import delete_files, dispatch_files_to_gc
from kartothek.io_components.merge import align_datasets
from kartothek.io_components.metapartition import (
SINGLE_TABLE,
MetaPartition,
parse_input_to_metapartition,
)
from kartothek.io_components.read import dispatch_metapartitions_from_factory
from kartothek.io_components.update import update_dataset_from_partitions
from kartothek.io_components.utils import (
_ensure_compatible_indices,
normalize_arg,
normalize_args,
raise_if_indices_overlap,
validate_partition_keys,
)
from kartothek.io_components.write import (
raise_if_dataset_exists,
store_dataset_from_partitions,
)
from ._update import update_dask_partitions_one_to_one
from ._utils import (
_cast_categorical_to_index_cat,
_get_data,
_identity,
_maybe_get_categoricals_from_index,
map_delayed,
)
def _delete_all_additional_metadata(dataset_factory):
delete_indices(dataset_factory=dataset_factory)
delete_common_metadata(dataset_factory=dataset_factory)
def _delete_tl_metadata(dataset_factory, *args):
"""
This function serves as a collector function for delayed objects. Therefore
allowing additional arguments which are not used.
"""
delete_top_level_metadata(dataset_factory=dataset_factory)
@default_docs
def delete_dataset__delayed(dataset_uuid=None, store=None, factory=None):
"""
Parameters
----------
"""
dataset_factory = _ensure_factory(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
load_schema=False,
load_dataset_metadata=False,
)
gc = garbage_collect_dataset__delayed(factory=dataset_factory)
mps = dispatch_metapartitions_from_factory(dataset_factory)
delayed_dataset_uuid = delayed(_delete_all_additional_metadata)(
dataset_factory=dataset_factory
)
mps = map_delayed(
MetaPartition.delete_from_store,
mps,
store=store,
dataset_uuid=dataset_factory.dataset_uuid,
)
return delayed(_delete_tl_metadata)(dataset_factory, mps, gc, delayed_dataset_uuid)
@default_docs
def garbage_collect_dataset__delayed(
dataset_uuid=None, store=None, chunk_size=100, factory=None
):
"""
Remove auxiliary files that are no longer tracked by the dataset.
These files include indices that are no longer referenced by the metadata
as well as files in the directories of the tables that are no longer
referenced. The latter is only applied to static datasets.
Parameters
----------
chunk_size: int
Number of files that should be deleted in a single job.
Returns
-------
tasks: list of dask.delayed
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
load_dataset_metadata=False,
)
nested_files = dispatch_files_to_gc(
dataset_uuid=None, store_factory=None, chunk_size=chunk_size, factory=ds_factory
)
return list(
map_delayed(delete_files, nested_files, store_factory=ds_factory.store_factory)
)
def _load_and_merge_mps(mp_list, store, label_merger, metadata_merger, merge_tasks):
mp_list = [mp.load_dataframes(store=store) for mp in mp_list]
mp = MetaPartition.merge_metapartitions(
mp_list, label_merger=label_merger, metadata_merger=metadata_merger
)
mp = mp.concat_dataframes()
for task in merge_tasks:
mp = mp.merge_dataframes(**task)
return mp
@default_docs
def merge_datasets_as_delayed(
left_dataset_uuid,
right_dataset_uuid,
store,
merge_tasks,
match_how="exact",
label_merger=None,
metadata_merger=None,
):
"""
A dask.delayed graph to perform the merge of two full kartothek datasets.
Parameters
----------
left_dataset_uuid : str
UUID for left dataset (order does not matter in all merge schemas)
right_dataset_uuid : str
UUID for right dataset (order does not matter in all merge schemas)
match_how : Union[str, Callable]
Define the partition label matching scheme.
Available implementations are:
* left (right) : The left (right) partitions are considered to be
the base partitions and **all** partitions of the
right (left) dataset are joined to the left
partition. This should only be used if one of the
datasets contain very few partitions.
* prefix : The labels of the partitions of the dataset with fewer
partitions are considered to be the prefixes to the
right dataset
* exact : All partition labels of the left dataset need to have
an exact match in the right dataset
* callable : A callable with signature func(left, right) which
returns a boolean to determine if the partitions match
        With 'exact', an exact match of partition labels between the
        to-be-merged datasets is required in order to merge.
        With 'prefix', the partition labels of the dataset with fewer
        partitions are interpreted as prefixes.
merge_tasks : List[Dict]
A list of merge tasks. Each item in this list is a dictionary giving
explicit instructions for a specific merge.
Each dict should contain key/values:
* `left`: The table for the left dataframe
* `right`: The table for the right dataframe
* 'output_label' : The table for the merged dataframe
* `merge_func`: A callable with signature
`merge_func(left_df, right_df, merge_kwargs)` to
handle the data preprocessing and merging.
Default pandas.merge
* 'merge_kwargs' : The kwargs to be passed to the `merge_func`
Example:
.. code::
>>> merge_tasks = [
... {
... "left": "left_dict",
... "right": "right_dict",
... "merge_kwargs": {"kwargs of merge_func": ''},
... "output_label": 'merged_core_data'
... },
... ]
"""
_check_callable(store)
mps = align_datasets(
left_dataset_uuid=left_dataset_uuid,
right_dataset_uuid=right_dataset_uuid,
store=store,
match_how=match_how,
)
mps = map_delayed(
_load_and_merge_mps,
mps,
store=store,
label_merger=label_merger,
metadata_merger=metadata_merger,
merge_tasks=merge_tasks,
)
return list(mps)
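The `merge_tasks` contract documented above — each task names its inputs, a merge callable and its kwargs — can be exercised without kartothek or pandas. A toy dispatcher over plain lists of dicts (the join logic and all names are illustrative, not the library's implementation):

```python
def merge_func(left, right, on):
    # toy "join": pair rows that share the key field
    return [(l, r) for l in left for r in right if l[on] == r[on]]

merge_tasks = [
    {'left': 'a', 'right': 'b', 'merge_func': merge_func,
     'merge_kwargs': {'on': 'id'}, 'output_label': 'merged'},
]
tables = {
    'a': [{'id': 1, 'x': 10}],
    'b': [{'id': 1, 'y': 20}, {'id': 2, 'y': 30}],
}
for task in merge_tasks:
    tables[task['output_label']] = task['merge_func'](
        tables[task['left']], tables[task['right']], **task['merge_kwargs'])
print(tables['merged'])
```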
def _load_and_concat_metapartitions_inner(mps, args, kwargs):
return MetaPartition.concat_metapartitions(
[mp.load_dataframes(*args, **kwargs) for mp in mps]
)
def _load_and_concat_metapartitions(list_of_mps, *args, **kwargs):
return map_delayed(
_load_and_concat_metapartitions_inner, list_of_mps, args=args, kwargs=kwargs
)
@default_docs
def read_dataset_as_delayed_metapartitions(
dataset_uuid=None,
store=None,
tables=None,
columns=None,
concat_partitions_on_primary_index=False,
predicate_pushdown_to_io=True,
categoricals=None,
label_filter=None,
dates_as_object=False,
load_dataset_metadata=False,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A collection of dask.delayed objects to retrieve a dataset from store where each
partition is loaded as a :class:`~kartothek.io_components.metapartition.MetaPartition`.
.. seealso:
:func:`~kartothek.io.dask.read_dataset_as_delayed`
Parameters
----------
"""
ds_factory = _ensure_factory(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
load_dataset_metadata=load_dataset_metadata,
)
store = ds_factory.store_factory
mps = dispatch_metapartitions_from_factory(
dataset_factory=ds_factory,
concat_partitions_on_primary_index=concat_partitions_on_primary_index,
label_filter=label_filter,
predicates=predicates,
dispatch_by=dispatch_by,
)
if concat_partitions_on_primary_index or dispatch_by:
mps = _load_and_concat_metapartitions(
mps,
store=store,
tables=tables,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
else:
mps = map_delayed(
MetaPartition.load_dataframes,
mps,
store=store,
tables=tables,
columns=columns,
categoricals=categoricals,
predicate_pushdown_to_io=predicate_pushdown_to_io,
dates_as_object=dates_as_object,
predicates=predicates,
)
categoricals_from_index = _maybe_get_categoricals_from_index(
ds_factory, categoricals
)
if categoricals_from_index:
func_dict = defaultdict(_identity)
func_dict.update(
{
table: partial(_cast_categorical_to_index_cat, categories=cats)
for table, cats in categoricals_from_index.items()
}
)
mps = map_delayed(
partial(MetaPartition.apply, func=func_dict, type_safe=True), mps
)
return list(mps)
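The categorical-cast step above builds a `defaultdict` so that tables without index categoricals fall through unchanged while listed tables get a cast function. The dispatch idea in miniature (toy transform, stdlib only):

```python
from collections import defaultdict

def identity(x):
    return x

func_dict = defaultdict(lambda: identity)   # unlisted tables pass through
func_dict['core'] = str.upper               # toy stand-in for the cast

print(func_dict['core']('abc'), func_dict['other']('abc'))
```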
@default_docs
def read_dataset_as_delayed(
dataset_uuid=None,
store=None,
tables=None,
columns=None,
concat_partitions_on_primary_index=False,
predicate_pushdown_to_io=True,
categoricals=None,
label_filter=None,
dates_as_object=False,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A collection of dask.delayed objects to retrieve a dataset from store
where each partition is loaded as a :class:`~pandas.DataFrame`.
Parameters
----------
"""
mps = read_dataset_as_delayed_metapartitions(
dataset_uuid=dataset_uuid,
store=store,
factory=factory,
tables=tables,
columns=columns,
concat_partitions_on_primary_index=concat_partitions_on_primary_index,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
label_filter=label_filter,
dates_as_object=dates_as_object,
load_dataset_metadata=False,
predicates=predicates,
dispatch_by=dispatch_by,
)
return list(map_delayed(_get_data, mps))
@default_docs
def read_table_as_delayed(
dataset_uuid=None,
store=None,
table=SINGLE_TABLE,
columns=None,
concat_partitions_on_primary_index=False,
predicate_pushdown_to_io=True,
categoricals=None,
label_filter=None,
dates_as_object=False,
predicates=None,
factory=None,
dispatch_by=None,
):
"""
A collection of dask.delayed objects to retrieve a single table from
a dataset as partition-individual :class:`~pandas.DataFrame` instances.
You can transform the collection of ``dask.delayed`` objects into
a ``dask.dataframe`` using the following code snippet. As older kartothek
specifications don't store schema information, this must be provided by
a separate code path.
    .. code::
>>> import dask.dataframe as dd
>>> ddf_tasks = read_table_as_delayed(…)
>>> meta = …
>>> ddf = dd.from_delayed(ddf_tasks, meta=meta)
Parameters
----------
"""
if not isinstance(columns, dict):
columns = {table: columns}
mps = read_dataset_as_delayed_metapartitions(
dataset_uuid=dataset_uuid,
store=store,
tables=[table],
columns=columns,
concat_partitions_on_primary_index=concat_partitions_on_primary_index,
predicate_pushdown_to_io=predicate_pushdown_to_io,
categoricals=categoricals,
label_filter=label_filter,
dates_as_object=dates_as_object,
load_dataset_metadata=False,
predicates=predicates,
factory=factory,
dispatch_by=dispatch_by,
)
return list(map_delayed(partial(_get_data, table=table), mps))
@default_docs
def update_dataset_from_delayed(
delayed_tasks,
store=None,
dataset_uuid=None,
delete_scope=None,
metadata=None,
df_serializer=None,
metadata_merger=None,
default_metadata_version=DEFAULT_METADATA_VERSION,
partition_on=None,
sort_partitions_by=None,
secondary_indices=None,
factory=None,
):
"""
A dask.delayed graph to add and store a list of dictionaries containing
dataframes to a kartothek dataset in store. The input should be a list
(or splitter pipeline) containing
:class:`~karothek.io.metapartition.MetaPartition`. If you want to use this
pipeline step for just deleting partitions without adding new ones you
have to give an empty meta partition as input (``[Metapartition(None)]``).
Parameters
----------
"""
partition_on = normalize_arg("partition_on", partition_on)
secondary_indices = normalize_arg("secondary_indices", secondary_indices)
delete_scope = dask.delayed(normalize_arg)("delete_scope", delete_scope)
ds_factory, metadata_version, partition_on = validate_partition_keys(
dataset_uuid=dataset_uuid,
store=store,
default_metadata_version=default_metadata_version,
partition_on=partition_on,
ds_factory=factory,
)
secondary_indices = _ensure_compatible_indices(ds_factory, secondary_indices)
mps = update_dask_partitions_one_to_one(
delayed_tasks=delayed_tasks,
secondary_indices=secondary_indices,
metadata_version=metadata_version,
partition_on=partition_on,
store_factory=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
sort_partitions_by=sort_partitions_by,
)
return dask.delayed(update_dataset_from_partitions)(
mps,
store_factory=store,
dataset_uuid=dataset_uuid,
ds_factory=ds_factory,
delete_scope=delete_scope,
metadata=metadata,
metadata_merger=metadata_merger,
)
@default_docs
@normalize_args
def store_delayed_as_dataset(
delayed_tasks,
store,
dataset_uuid=None,
metadata=None,
df_serializer=None,
overwrite=False,
metadata_merger=None,
metadata_version=naming.DEFAULT_METADATA_VERSION,
partition_on=None,
metadata_storage_format=naming.DEFAULT_METADATA_STORAGE_FORMAT,
secondary_indices=None,
):
"""
Transform and store a list of dictionaries containing
dataframes to a kartothek dataset in store.
Parameters
----------
Returns
-------
A dask.delayed dataset object.
"""
_check_callable(store)
if dataset_uuid is None:
dataset_uuid = gen_uuid()
if not overwrite:
raise_if_dataset_exists(dataset_uuid=dataset_uuid, store=store)
raise_if_indices_overlap(partition_on, secondary_indices)
input_to_mps = partial(
parse_input_to_metapartition, metadata_version=metadata_version
)
mps = map_delayed(input_to_mps, delayed_tasks)
if partition_on:
mps = map_delayed(MetaPartition.partition_on, mps, partition_on=partition_on)
if secondary_indices:
mps = map_delayed(MetaPartition.build_indices, mps, columns=secondary_indices)
mps = map_delayed(
MetaPartition.store_dataframes,
mps,
store=store,
df_serializer=df_serializer,
dataset_uuid=dataset_uuid,
)
return delayed(store_dataset_from_partitions)(
mps,
dataset_uuid=dataset_uuid,
store=store,
dataset_metadata=metadata,
metadata_merger=metadata_merger,
metadata_storage_format=metadata_storage_format,
)
| 30.105647 | 91 | 0.687076 |
742326d113c5e6948d84edf9b3a75d69e5a2cf11 | 7,159 | py | Python | wechat_work.py | yepcn/flexget_qbittorrent_mod | 09c284492dd9640c8cc81765e520553dfee0d975 | ["MIT"] | 1 | 2020-10-13T00:45:30.000Z | 2020-10-13T00:45:30.000Z | wechat_work.py | yepcn/flexget_qbittorrent_mod | 09c284492dd9640c8cc81765e520553dfee0d975 | ["MIT"] | null | null | null | wechat_work.py | yepcn/flexget_qbittorrent_mod | 09c284492dd9640c8cc81765e520553dfee0d975 | ["MIT"] | null | null | null |
from datetime import timedelta, datetime
import requests
from flexget import db_schema, plugin
from flexget.event import event
from flexget.manager import Session
from flexget.plugin import PluginError
from loguru import logger
from sqlalchemy import Column, Integer, String, DateTime
_PLUGIN_NAME = 'wechat_work'
_CORP_ID = 'corp_id'
_CORP_SECRET = 'corp_secret'
_AGENT_ID = 'agent_id'
_TO_USER = 'to_user'
_GET_ACCESS_TOKEN_URL = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid={corp_id}&corpsecret={corp_secret}'
_POST_MESSAGE_URL = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token={access_token}'
_TEXT_LIMIT = 1024
AccessTokenBase = db_schema.versioned_base('wechat_work_access_token', 0)
logger = logger.bind(name=_PLUGIN_NAME)
class AccessTokenEntry(AccessTokenBase):
__tablename__ = 'wechat_work_access_token'
id = Column(String, primary_key=True)
corp_id = Column(String, index=True, nullable=True)
corp_secret = Column(String, index=True, nullable=True)
access_token = Column(String, primary_key=True)
expires_in = Column(Integer, index=True, nullable=True)
gmt_modify = Column(DateTime, index=True, nullable=True)
def __str__(self):
x = ['id={0}'.format(self.id)]
if self.corp_id:
x.append('corp_id={0}'.format(self.corp_id))
if self.corp_secret:
x.append('corp_secret={0}'.format(self.corp_secret))
if self.access_token:
x.append('access_token={0}'.format(self.access_token))
if self.expires_in:
x.append('expires_in={0}'.format(self.expires_in))
if self.gmt_modify:
x.append('gmt_modify={0}'.format(self.gmt_modify))
return ' '.join(x)
class WeChatWorkNotifier:
_corp_id = None
_corp_secret = None
_agent_id = None
_to_user = None
schema = {
'type': 'object',
'properties': {
_CORP_ID: {'type': 'string'},
_CORP_SECRET: {'type': 'string'},
_AGENT_ID: {'type': 'string'},
_TO_USER: {'type': 'string'},
},
'additionalProperties': False,
}
def notify(self, title, message, config):
access_token = self._real_init(Session(), config)
if not access_token:
return
self._send_msgs(message, access_token)
def _parse_config(self, config):
self._corp_id = config.get(_CORP_ID)
self._corp_secret = config.get(_CORP_SECRET)
self._agent_id = config.get(_AGENT_ID)
self._to_user = config.get(_TO_USER)
def _real_init(self, session, config):
self._parse_config(config)
access_token = self._get_access_token_n_update_db(session)
return access_token
def _request(self, method, url, **kwargs):
try:
return requests.request(method, url, **kwargs)
except Exception as e:
raise PluginError(str(e))
def _send_msgs(self, msg, access_token):
msg_limit, msg_extend = self._get_msg_limit(msg)
data = {
'touser': self._to_user,
'msgtype': 'text',
'agentid': self._agent_id,
'text': {'content': msg_limit},
'safe': 0,
'enable_id_trans': 0,
'enable_duplicate_check': 0,
'duplicate_check_interval': 1800
}
response_json = self._request('post', _POST_MESSAGE_URL.format(access_token=access_token.access_token),
json=data).json()
if response_json.get('errcode') != 0:
logger.error(response_json)
if msg_extend:
self._send_msgs(msg_extend, access_token)
def _get_msg_limit(self, msg):
msg_encode = msg.encode()
if len(msg_encode) < _TEXT_LIMIT:
return msg, ''
msg_lines = msg.split('\n')
msg_limit_len = 0
for line in msg_lines:
line_len = len(line.encode())
            if msg_limit_len == 0 and line_len >= _TEXT_LIMIT:
                # The byte cut may fall inside a multi-byte UTF-8 sequence,
                # so decode defensively and re-split on a character boundary.
                head = msg_encode[:_TEXT_LIMIT].decode(errors='ignore')
                return head, msg_encode[len(head.encode()):].decode()
if msg_limit_len + line_len + 1 < _TEXT_LIMIT:
msg_limit_len += line_len + 1
else:
                return msg_encode[:msg_limit_len].decode(), msg_encode[msg_limit_len:].decode()
        # Defensive fallback so callers never unpack an implicit None.
        return msg, ''
def _get_access_token_n_update_db(self, session):
corp_id = self._corp_id
corp_secret = self._corp_secret
access_token, has_new_access_token = self._get_access_token(session, corp_id, corp_secret)
logger.debug('access_token={}', access_token)
if not access_token:
raise PluginError(
'no access token found'
)
else:
if not access_token.access_token:
logger.warning('no access_token found for corp_id: {} and corp_secret: {}', corp_id, corp_secret)
if has_new_access_token:
self._update_db(session, access_token)
return access_token
def _get_access_token(self, session, corp_id, corp_secret):
logger.debug('loading cached access token')
access_token = self._get_cached_access_token(session, corp_id, corp_secret)
logger.debug('found cached access token: {0}'.format(access_token))
if access_token:
if access_token.gmt_modify > datetime.now() - timedelta(seconds=access_token.expires_in):
logger.debug('all access token found in cache')
return access_token, False
else:
self._delete_db(session, access_token)
logger.debug('loading new access token')
new_access_token = self._get_new_access_token(corp_id, corp_secret)
        logger.debug('found new access token: {0}'.format(new_access_token))
return new_access_token, bool(new_access_token)
@staticmethod
def _get_cached_access_token(session, corp_id, corp_secret):
access_token = session.query(AccessTokenEntry).filter(
AccessTokenEntry.id == '{}{}'.format(corp_id, corp_secret)).one_or_none()
return access_token
def _get_new_access_token(self, corp_id, corp_secret):
response_json = self._request('get',
_GET_ACCESS_TOKEN_URL.format(corp_id=corp_id, corp_secret=corp_secret)).json()
entry = AccessTokenEntry(
id='{}{}'.format(corp_id, corp_secret),
corp_id=corp_id,
corp_secret=corp_secret,
access_token=response_json.get('access_token'),
expires_in=response_json.get('expires_in'),
gmt_modify=datetime.now()
)
return entry
def _update_db(self, session, access_token):
logger.info('saving updated access_token to db')
session.add(access_token)
session.commit()
def _delete_db(self, session, access_token):
logger.info('delete access_token from db')
session.delete(access_token)
session.commit()
@event('plugin.register')
def register_plugin():
plugin.register(WeChatWorkNotifier, _PLUGIN_NAME, api_ver=2, interfaces=['notifiers'])
| 35.093137 | 116 | 0.639335 |
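The notifier above caches its access token in the database and refreshes it once `gmt_modify` falls outside the `expires_in` window. A minimal standalone sketch of that cache-with-TTL pattern (the names here are illustrative, not part of the plugin):

```python
from datetime import datetime, timedelta

class TokenCache:
    """Cache tokens in memory and report them stale once their TTL elapses."""

    def __init__(self):
        self._entries = {}  # key -> (token, expires_in_seconds, stored_at)

    def put(self, key, token, expires_in):
        self._entries[key] = (token, expires_in, datetime.now())

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        token, expires_in, stored_at = entry
        # Same freshness test the plugin applies to gmt_modify.
        if stored_at <= datetime.now() - timedelta(seconds=expires_in):
            del self._entries[key]  # expired, analogous to _delete_db above
            return None
        return token
```

The plugin additionally persists entries via SQLAlchemy so the token survives process restarts; the expiry arithmetic is the same.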
02a6422bd867428b4ab5e0496aa244dd76a0e5d9 | 1,626 | py | Python | test/ffmpeg-vaapi/vpp/sharpen.py | zhangyuankun-star/vaapi-fits | 117b10f4d458b3685de16eebae795f7cb54207e2 | [
"BSD-3-Clause"
] | null | null | null | test/ffmpeg-vaapi/vpp/sharpen.py | zhangyuankun-star/vaapi-fits | 117b10f4d458b3685de16eebae795f7cb54207e2 | [
"BSD-3-Clause"
] | null | null | null | test/ffmpeg-vaapi/vpp/sharpen.py | zhangyuankun-star/vaapi-fits | 117b10f4d458b3685de16eebae795f7cb54207e2 | [
"BSD-3-Clause"
] | null | null | null | ###
### Copyright (C) 2018-2019 Intel Corporation
###
### SPDX-License-Identifier: BSD-3-Clause
###
from ....lib import *
from ..util import *
from .vpp import VppTest
spec = load_test_spec("vpp", "sharpen")
class default(VppTest):
def before(self):
vars(self).update(
caps = platform.get_caps("vpp", "sharpen"),
vpp_op = "sharpen",
)
super(default, self).before()
@slash.requires(*platform.have_caps("vpp", "sharpen"))
@slash.requires(*have_ffmpeg_filter("sharpness_vaapi"))
@slash.parametrize(*gen_vpp_sharpen_parameters(spec))
def test(self, case, level):
vars(self).update(spec[case].copy())
vars(self).update(
case = case,
level = level,
mlevel = mapRange(level, [0, 100], [0, 64]),
)
if self.width == 1280 and self.height == 720:
if get_media()._get_driver_name() == "i965":
slash.add_failure(
"1280x720 resolution is known to cause GPU HANG with i965 driver")
return
self.vpp()
def check_metrics(self):
psnr = calculate_psnr(
self.source, self.decoded,
self.width, self.height,
self.frames, self.format)
    psnr = list(map(lambda v: round(v, 4), psnr))  # list() so it is indexable below
assert psnr[-2] == 100, "Cb(U) should not be affected by SHARPEN filter"
assert psnr[-1] == 100, "Cr(V) should not be affected by SHARPEN filter"
def compare(k, ref, actual):
assert ref is not None, "Invalid reference value"
assert abs(ref[-3] - actual[-3]) < 0.2, "Luma (Y) out of baseline range"
get_media().baseline.check_result(
compare = compare, context = self.refctx, psnr = psnr)
| 29.035714 | 79 | 0.632226 |
0dd16d2a9caac681638b5af42e47a62339fd51d8 | 13,056 | bzl | Python | third_party/engine.bzl | stereoboy/isaac_sdk_20191213 | 73c863254e626c8d498870189fbfb20be4e10fb3 | [
"FSFAP"
] | 1 | 2020-04-14T13:55:16.000Z | 2020-04-14T13:55:16.000Z | third_party/engine.bzl | stereoboy/isaac_sdk_20191213 | 73c863254e626c8d498870189fbfb20be4e10fb3 | [
"FSFAP"
] | 4 | 2020-09-25T22:34:29.000Z | 2022-02-09T23:45:12.000Z | third_party/engine.bzl | stereoboy/isaac_sdk_20191213 | 73c863254e626c8d498870189fbfb20be4e10fb3 | [
"FSFAP"
] | 1 | 2020-07-02T11:51:17.000Z | 2020-07-02T11:51:17.000Z | """
Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.
"""
load("//engine/build:isaac.bzl", "isaac_git_repository", "isaac_new_git_repository")
load("//engine/build:isaac.bzl", "isaac_http_archive", "isaac_new_http_archive")
load("//engine/build:isaac.bzl", "isaac_new_local_repository")
def clean_dep(dep):
return str(Label(dep))
# loads dependencies required to build apps with alice
def isaac_engine_workspace():
isaac_new_http_archive(
name = "gtest",
build_file = clean_dep("//third_party:gtest.BUILD"),
sha256 = "d88ad7eba129d2d5453da05c6318af6babf65af37835d720e6bfa105d61cf5ce",
url = "https://developer.nvidia.com/isaac/download/third_party/googletest-release-1-8-0-tar-gz",
type = "tar.gz",
strip_prefix = "googletest-release-1.8.0/googletest",
licenses = ["@gtest//:LICENSE"],
)
isaac_new_http_archive(
name = "benchmark",
build_file = clean_dep("//third_party:benchmark.BUILD"),
sha256 = "f19559475a592cbd5ac48b61f6b9cedf87f0b6775d1443de54cfe8f53940b28d",
url = "https://developer.nvidia.com/isaac/download/third_party/benchmark-1-3-0-tar-gz",
type = "tar.gz",
strip_prefix = "benchmark-1.3.0",
licenses = ["@benchmark//:LICENSE"],
)
isaac_new_http_archive(
# Name is matching that of deps for cartographer (@cartographer//bazel:repositories.bzl)
name = "net_zlib_zlib",
build_file = clean_dep("//third_party:zlib.BUILD"),
sha256 = "c3e5e9fdd5004dcb542feda5ee4f0ff0744628baf8ed2dd5d66f8ca1197cb1a1",
url = "https://developer.nvidia.com/isaac/download/third_party/zlib-1-2-11-tar-gz",
type = "tar.gz",
strip_prefix = "zlib-1.2.11",
licenses = ["https://zlib.net/zlib_license.html"],
)
isaac_http_archive(
name = "boringssl",
sha256 = "524ba98a56300149696481b4cb9ddebd0c7b7ac9b9f6edee81da2d2d7e5d2bb3",
url = "https://developer.nvidia.com/isaac/download/third_party/boringssl-a0fb951d2a26a8ee746b52f3ba81ab011a0af778-tar-gz",
type = "tar.gz",
patches = [clean_dep("//third_party:boringssl.patch")],
strip_prefix = "boringssl-a0fb951d2a26a8ee746b52f3ba81ab011a0af778",
licenses = ["@boringssl//:LICENSE"],
)
isaac_new_http_archive(
name = "libuuid",
build_file = clean_dep("//third_party:uuid.BUILD"),
sha256 = "46af3275291091009ad7f1b899de3d0cea0252737550e7919d17237997db5644",
url = "https://developer.nvidia.com/isaac/download/third_party/libuuid-1-0-3-tar-gz",
type = "tar.gz",
strip_prefix = "libuuid-1.0.3",
licenses = ["@libuuid//:COPYING"],
)
isaac_new_http_archive(
# Name is matching that of deps for cartographer (@cartographer//bazel:repositories.bzl)
name = "org_tuxfamily_eigen",
build_file = clean_dep("//third_party:eigen.BUILD"),
sha256 = "e5a4f7699e9379e5ce049000b937cf30dc1c22740667dfa5977b0ec13060d8ed",
# This zip has been manually updated to fix some warnings:
# Eigen/Core was including `host_defines.h` directly instead of `cuda_runtime_api.h`
url = "https://developer.nvidia.com/isaac/download/third_party/eigen-eigen-323c052e1731-v2-tar-bz2",
type = "tar.bz2",
strip_prefix = "eigen-eigen-323c052e1731",
licenses = [
"@org_tuxfamily_eigen/COPYING.BSD",
"@org_tuxfamily_eigen/COPYING.MPL2",
"@org_tuxfamily_eigen/COPYING.MINPACK",
"@org_tuxfamily_eigen/COPYING.LGPL",
],
)
isaac_new_http_archive(
# Name is matching that of deps for cartographer (@cartographer//bazel:repositories.bzl)
name = "org_libpng_libpng",
build_file = clean_dep("//third_party:png.BUILD"),
sha256 = "2f1e960d92ce3b3abd03d06dfec9637dfbd22febf107a536b44f7a47c60659f6",
url = "https://developer.nvidia.com/isaac/download/third_party/libpng-1-6-34-tar-xz",
type = "tar.xz",
strip_prefix = "libpng-1.6.34",
licenses = ["@org_libpng_libpng//:libpng-LICENSE.txt"],
)
isaac_new_git_repository(
name = "capnproto",
build_file = clean_dep("//third_party:capnproto.BUILD"),
tag = "v0.7.0",
remote = "https://github.com/capnproto/capnproto.git",
licenses = ["@capnproto//:LICENSE"],
)
isaac_new_http_archive(
name = "asio",
build_file = clean_dep("//third_party:asio.BUILD"),
sha256 = "4e27dcb37456ba707570334b91f4798721111ed67b69915685eac141895779aa",
url = "https://developer.nvidia.com/isaac/download/third_party/asio-1-12-2-tar-bz2",
type = "tar.bz2",
strip_prefix = "asio-1.12.2",
licenses = ["@asio//:asio-1.12.2/LICENSE_1_0.txt"],
)
isaac_git_repository(
name = "com_github_gflags_gflags",
commit = "e292e0452fcfd5a8ae055b59052fc041cbab4abf",
remote = "https://github.com/gflags/gflags.git",
licenses = ["@com_github_gflags_gflags//:COPYING.txt"],
)
native.bind(
name = "gflags",
actual = "@com_github_gflags_gflags//:gflags",
)
isaac_new_http_archive(
name = "nlohmann_json",
build_file = clean_dep("//third_party:nlohmann_json.BUILD"),
sha256 = "e8fffa6cbdb3c15ecdff32eebf958b6c686bc188da8ad5c6489462d16f83ae54",
url = "https://developer.nvidia.com/isaac/download/third_party/json-3-1-2-tar-gz",
type = "tar.gz",
licenses = ["@nlohmann_json//:LICENSE.MIT"],
)
isaac_new_http_archive(
name = "lmdb",
build_file = clean_dep("//third_party:lmdb.BUILD"),
sha256 = "44602436c52c29d4f301f55f6fd8115f945469b868348e3cddaf91ab2473ea26",
url = "https://github.com/LMDB/lmdb/archive/LMDB_0.9.24.tar.gz",
type = "tar.gz",
strip_prefix = "lmdb-LMDB_0.9.24",
licenses = ["@lmdb//:libraries/liblmdb/LICENSE"],
)
isaac_new_http_archive(
name = "nasm",
build_file = clean_dep("//third_party:nasm.BUILD"),
sha256 = "00b0891c678c065446ca59bcee64719d0096d54d6886e6e472aeee2e170ae324",
url = "https://developer.nvidia.com/isaac/download/third_party/nasm-2-12-02-tar-bz2",
type = "tar.bz2",
strip_prefix = "nasm-2.12.02",
licenses = ["@nasm//:LICENSE"],
)
isaac_new_http_archive(
# Name is matching that of deps for cartographer (@cartographer//bazel:repositories.bzl)
name = "libjpeg",
build_file = clean_dep("//third_party:jpeg.BUILD"),
sha256 = "c15a9607892113946379ccea3ca8b85018301b200754f209453ab21674268e77",
url = "https://developer.nvidia.com/isaac/download/third_party/libjpeg-turbo-1-5-1-tar-gz",
type = "tar.gz",
strip_prefix = "libjpeg-turbo-1.5.1",
licenses = ["@libjpeg//:LICENSE.md"],
)
isaac_new_http_archive(
name = "uwebsockets",
build_file = clean_dep("//third_party:uwebsockets.BUILD"),
sha256 = "663a22b521c8258e215e34e01c7fcdbbd500296aab2c31d36857228424bb7675",
url = "https://developer.nvidia.com/isaac/download/third_party/uwebsockets-0-14-8-tar-gz",
type = "tar.gz",
strip_prefix = "uWebSockets-0.14.8",
licenses = ["@uwebsockets//:LICENSE"],
)
isaac_new_http_archive(
name = "snappy",
build_file = clean_dep("//third_party:snappy.BUILD"),
sha256 = "3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4",
url = "https://developer.nvidia.com/isaac/download/third_party/snappy-1-1-7-tar-gz",
type = "tar.gz",
licenses = ["@snappy//:COPYING"],
)
isaac_new_local_repository(
name = "python",
build_file = clean_dep("//third_party:python.BUILD"),
path = "/usr",
licenses = ["https://docs.python.org/3/license.html"],
)
isaac_new_http_archive(
name = "pybind11",
build_file = clean_dep("//third_party:pybind11.BUILD"),
sha256 = "0f34838f2c8024a6765168227ba587b3687729ebf03dc912f88ff75c7aa9cfe8",
url = "https://developer.nvidia.com/isaac/download/third_party/pybind11-2-3-0-tar-gz",
type = "tar.gz",
strip_prefix = "pybind11-2.3.0",
licenses = ["@pybind11//:LICENSE"],
)
isaac_new_http_archive(
name = "python_aarch64",
build_file = clean_dep("//third_party:python_aarch64.BUILD"),
sha256 = "0557f47820f90d0dc9371c991ecb361f0e5fbac753a416a80ed4ee7ad80906e6",
url = "https://developer.nvidia.com/isaac/download/third_party/python3_aarch64_jp42-tar-xz",
type = "tar.xz",
licenses = ["https://docs.python.org/3/license.html"],
)
isaac_new_http_archive(
name = "curl",
build_file = clean_dep("//third_party:curl.BUILD"),
sha256 = "ff3e80c1ca6a068428726cd7dd19037a47cc538ce58ef61c59587191039b2ca6",
url = "https://developer.nvidia.com/isaac/download/third_party/curl-7-49-1-tar-gz",
type = "tar.gz",
strip_prefix = "curl-7.49.1",
licenses = ["@curl//:COPYING"],
)
isaac_new_http_archive(
name = "glfw",
build_file = clean_dep("//third_party:glfw.BUILD"),
sha256 = "e10f0de1384d75e6fc210c53e91843f6110d6c4f3afbfb588130713c2f9d8fe8",
url = "https://developer.nvidia.com/isaac/download/third_party/glfw-3-2-1-tar-gz",
type = "tar.gz",
strip_prefix = "glfw-3.2.1",
licenses = ["@glfw//:COPYING.txt"],
)
isaac_new_http_archive(
name = "gl3w",
build_file = clean_dep("//third_party:gl3w.BUILD"),
sha256 = "442801ac9f10258499259b0b7679d28a1c1bf17e4a92c7e774b44bd4ba37525c",
url = "https://developer.nvidia.com/isaac/download/third_party/gl3w-4f1d558410b0938840dc3db98e741d71f382ba22-tar-gz",
type = "tar.gz",
strip_prefix = "gl3w-4f1d558410b0938840dc3db98e741d71f382ba22",
licenses = ["@gl3w//:UNLICENSE"],
)
isaac_new_http_archive(
name = "imgui",
build_file = clean_dep("//third_party:imgui.BUILD"),
sha256 = "dc48173f9b34c763005e63dccb4355653909b25e04d5bc28ea9351540c457d96",
url = "https://developer.nvidia.com/isaac/download/third_party/imgui-1-66-tar-gz",
type = "tar.gz",
strip_prefix = "imgui-1.66",
licenses = ["@imgui//:LICENSE.txt"],
)
isaac_new_http_archive(
name = "lss",
build_file = clean_dep("//third_party:lss.BUILD"),
sha256 = "6d2e98e9d360797db6348ae725be901c1947e5736d87f07917c2bd835b03eeef",
url = "https://developer.nvidia.com/isaac/download/third_party/linux-syscall-support-93426bda6535943ff1525d0460aab5cc0870ccaf-tar-gz",
type = "tar.gz",
licenses = ["@lss//:linux_syscall_support.h"],
)
isaac_new_git_repository(
name = "breakpad",
build_file = clean_dep("//third_party:breakpad.BUILD"),
remote = "https://chromium.googlesource.com/breakpad/breakpad",
commit = "db1cda26539c711c3da7ed4d410dfe8190e89b8f",
licenses = ["@breakpad//:LICENSE"],
)
isaac_new_http_archive(
name = "nvcc_10",
build_file = clean_dep("//third_party:nvcc_10.BUILD"),
sha256 = "91061a7a475b42557ce8ddad1337626624e449daed1e8bbce6a70d6b468df973",
url = "https://developer.nvidia.com/isaac/download/third_party/cuda_10_nvcc-tar-xz",
type = "tar.xz",
licenses = ["http://docs.nvidia.com/cuda/eula/index.html"],
)
isaac_new_http_archive(
name = "cuda_x86_64",
build_file = clean_dep("//third_party:cuda_x86_64.BUILD"),
sha256 = "f0ab78d7ca659dd70b1b9b6aefa6340fb03af474fed338a277942d0d7c70cef0",
url = "https://developer.nvidia.com/isaac/download/third_party/cuda-10-0-cudnn7-6-3-28-x86_64-tar-xz",
type = "tar.xz",
licenses = ["http://docs.nvidia.com/cuda/eula/index.html"],
)
isaac_new_http_archive(
name = "cuda_aarch64_jetpack43",
build_file = clean_dep("//third_party:cuda_aarch64_jetpack43.BUILD"),
sha256 = "11919eeb2ab467bb32e87f7a33edd56ad18329aef4e97b94b1e1a8c57a62266d",
url = "https://developer.nvidia.com/isaac/download/third_party/cuda-10-0_cudnn7-6-3-28-aarch64-tar-xz",
type = "tar.xz",
licenses = ["http://docs.nvidia.com/cuda/eula/index.html"],
)
isaac_new_http_archive(
name = "isaac_assets",
url = "https://developer.nvidia.com/isaac/download/third_party/isaac_assets-zip",
build_file = clean_dep("//third_party:isaac_assets.BUILD"),
type = "zip",
licenses = ["http://docs.nvidia.com/cuda/eula/index.html"],
)
| 42.666667 | 142 | 0.66322 |
b3c06219e25623f2c534ad9e34e6118c04676d37 | 7,597 | py | Python | tensorflow_probability/python/internal/batch_shape_lib_test.py | NeelGhoshal/probability | 45ed841e3cff6cdc7cd1b2d96dd874d9070318f7 | [
"Apache-2.0"
] | 3,670 | 2018-02-14T03:29:40.000Z | 2022-03-30T01:19:52.000Z | tensorflow_probability/python/internal/batch_shape_lib_test.py | gregorystrubel/probability | df96f3d56eff92c6b06fbac68dc58e095e28fed6 | [
"Apache-2.0"
] | 1,395 | 2018-02-24T02:28:49.000Z | 2022-03-31T16:12:06.000Z | tensorflow_probability/python/internal/batch_shape_lib_test.py | gregorystrubel/probability | df96f3d56eff92c6b06fbac68dc58e095e28fed6 | [
"Apache-2.0"
] | 1,135 | 2018-02-14T01:51:10.000Z | 2022-03-28T02:24:11.000Z | # Copyright 2021 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tests for batch_shape."""
# Dependency imports
from absl import logging
from absl.testing import parameterized
import tensorflow.compat.v1 as tf1
import tensorflow.compat.v2 as tf
from tensorflow_probability import bijectors as tfb
from tensorflow_probability import distributions as tfd
from tensorflow_probability.python.internal import batch_shape_lib
from tensorflow_probability.python.internal import parameter_properties
from tensorflow_probability.python.internal import test_util
from tensorflow.python.platform import test as tf_test # pylint: disable=g-direct-tensorflow-import,g-import-not-at-top
from tensorflow.python.util import nest # pylint: disable=g-direct-tensorflow-import
class _MVNTriLWithDynamicParamNdims(tfd.MultivariateNormalTriL):
@classmethod
def _parameter_properties(cls, dtype, num_classes=None):
return dict(
loc=parameter_properties.ParameterProperties(
event_ndims=lambda _: None,
event_ndims_tensor=(
lambda _: tf1.placeholder_with_default(1, shape=None))),
scale_tril=parameter_properties.ParameterProperties(
event_ndims=lambda _: None,
event_ndims_tensor=(
lambda _: tf1.placeholder_with_default(2, shape=None))))
@test_util.test_graph_and_eager_modes
class BatchShapeInferenceTests(test_util.TestCase):
@parameterized.named_parameters(
{'testcase_name': '_trivial',
'value_fn': lambda: tfd.Normal(loc=0., scale=1.),
'expected_batch_shape_parts': {'loc': [], 'scale': []},
'expected_batch_shape': []},
{'testcase_name': '_simple_tensor_broadcasting',
'value_fn': lambda: tfd.MultivariateNormalDiag( # pylint: disable=g-long-lambda
loc=[0., 0.], scale_diag=tf.convert_to_tensor([[1., 1.], [1., 1.]])),
'expected_batch_shape_parts': {'loc': [], 'scale_diag': [2]},
'expected_batch_shape': [2]},
{'testcase_name': '_rank_deficient_tensor_broadcasting',
'value_fn': lambda: tfd.MultivariateNormalDiag( # pylint: disable=g-long-lambda
loc=0., scale_diag=tf.convert_to_tensor([[1., 1.], [1., 1.]])),
'expected_batch_shape_parts': {'loc': [], 'scale_diag': [2]},
'expected_batch_shape': [2]},
{'testcase_name': '_dynamic_event_ndims',
'value_fn': lambda: _MVNTriLWithDynamicParamNdims( # pylint: disable=g-long-lambda
loc=[[0., 0.], [1., 1.], [2., 2.]],
scale_tril=[[1., 0.], [-1., 1.]]),
'expected_batch_shape_parts': {'loc': [3], 'scale_tril': []},
'expected_batch_shape': [3]},
{'testcase_name': '_mixture_same_family',
'value_fn': lambda: tfd.MixtureSameFamily( # pylint: disable=g-long-lambda
mixture_distribution=tfd.Categorical(
logits=[[[1., 2., 3.],
[4., 5., 6.]]]),
components_distribution=tfd.Normal(loc=0.,
scale=[[[1., 2., 3.],
[4., 5., 6.]]])),
'expected_batch_shape_parts': {'mixture_distribution': [1, 2],
'components_distribution': [1, 2]},
'expected_batch_shape': [1, 2]},
{'testcase_name': '_deeply_nested',
'value_fn': lambda: tfd.Independent( # pylint: disable=g-long-lambda
tfd.Independent(
tfd.Independent(
tfd.Independent(
tfd.Normal(loc=0., scale=[[[[[[[[1.]]]]]]]]),
reinterpreted_batch_ndims=2),
reinterpreted_batch_ndims=0),
reinterpreted_batch_ndims=1),
reinterpreted_batch_ndims=1),
'expected_batch_shape_parts': {'distribution': [1, 1, 1, 1]},
'expected_batch_shape': [1, 1, 1, 1]},
{'testcase_name': 'noparams',
'value_fn': tfb.Exp,
'expected_batch_shape_parts': {},
'expected_batch_shape': []})
@test_util.numpy_disable_test_missing_functionality('b/188002189')
def test_batch_shape_inference_is_correct(
self, value_fn, expected_batch_shape_parts, expected_batch_shape):
value = value_fn() # Defer construction until we're in the right graph.
parts = batch_shape_lib.batch_shape_parts(value)
self.assertAllEqualNested(
parts,
nest.map_structure_up_to(
parts, tf.TensorShape, expected_batch_shape_parts))
self.assertAllEqual(expected_batch_shape,
batch_shape_lib.inferred_batch_shape_tensor(value))
batch_shape = batch_shape_lib.inferred_batch_shape(value)
self.assertIsInstance(batch_shape, tf.TensorShape)
self.assertTrue(batch_shape.is_compatible_with(expected_batch_shape))
def test_bijector_event_ndims(self):
bij = tfb.Sigmoid(low=tf.zeros([2]), high=tf.ones([3, 2]))
self.assertAllEqual(batch_shape_lib.inferred_batch_shape(bij), [3, 2])
self.assertAllEqual(batch_shape_lib.inferred_batch_shape_tensor(bij),
[3, 2])
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape(bij, bijector_x_event_ndims=1),
[3])
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape_tensor(
bij, bijector_x_event_ndims=1),
[3])
# Verify that we don't pass Nones through to component
# `experimental_batch_shape(x_event_ndims=None)` calls, where they'd be
# incorrectly interpreted as `x_event_ndims=forward_min_event_ndims`.
joint_bij = tfb.JointMap([bij, bij])
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape(
joint_bij, bijector_x_event_ndims=[None, None]),
tf.TensorShape(None))
class ParametersAsKwargsTest(test_util.TestCase):
# This doesn't really deserve to be a separate test class, but is split
# out from BatchShapeInferenceTests because the `test_graph_and_eager_modes`
# decorator interacts poorly with the `tf_test.mock.patch.object` decorator.
@test_util.numpy_disable_test_missing_functionality('tf_logging')
@test_util.jax_disable_test_missing_functionality('tf_logging')
@tf_test.mock.patch.object(logging, 'warning', autospec=True)
def test_parameters_as_kwargs(self, mock_warning):
dist = tfd.Normal(loc=tf.zeros([2]), scale=tf.ones([5, 1]))
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape_tensor(dist), [5, 2])
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape_tensor(dist, loc=tf.zeros([5])),
[5, 5])
self.assertAllEqual(
batch_shape_lib.inferred_batch_shape_tensor(
dist, scale=tf.zeros([5]), loc=tf.zeros([1, 1])),
[1, 5])
# Check that passing an unrecognized argument raises a warning.
self.assertEqual(0, mock_warning.call_count)
batch_shape_lib.inferred_batch_shape_tensor(dist, df=3.)
self.assertEqual(1, mock_warning.call_count)
if __name__ == '__main__':
test_util.main()
| 45.220238 | 120 | 0.669212 |
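The expected batch shapes in these tests follow NumPy-style broadcasting of the per-parameter batch shapes (e.g. `loc` with shape `[2]` against `scale` with shape `[5, 1]` yields `[5, 2]`). A pure-Python sketch of that rule for static shapes (illustrative only; TFP's real logic also handles unknown dimensions):

```python
from itertools import zip_longest

def broadcast_shapes(a, b):
    """Broadcast two static shapes, right-aligned, NumPy style."""
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x == 1:
            out.append(y)
        elif y == 1 or x == y:
            out.append(x)
        else:
            raise ValueError('incompatible dims %r and %r' % (x, y))
    return list(reversed(out))
```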
93350c65fa1141b6a1b0547f2f5c1262d1f90ccb | 7,153 | py | Python | ansible/modules/network/avi/avi_healthmonitor.py | EnjoyLifeFund/macHighSierra-py36-pkgs | 5668b5785296b314ea1321057420bcd077dba9ea | [
"BSD-3-Clause",
"BSD-2-Clause",
"MIT"
] | 1 | 2022-01-25T22:52:58.000Z | 2022-01-25T22:52:58.000Z | ansible/modules/network/avi/avi_healthmonitor.py | EnjoyLifeFund/Debian_py36_packages | 1985d4c73fabd5f08f54b922e73a9306e09c77a5 | [
"BSD-3-Clause",
"BSD-2-Clause",
"MIT"
] | null | null | null | ansible/modules/network/avi/avi_healthmonitor.py | EnjoyLifeFund/Debian_py36_packages | 1985d4c73fabd5f08f54b922e73a9306e09c77a5 | [
"BSD-3-Clause",
"BSD-2-Clause",
"MIT"
] | null | null | null | #!/usr/bin/python
#
# Created on Aug 25, 2016
# @author: Gaurav Rastogi (grastogi@avinetworks.com)
# Eric Anderson (eanderson@avinetworks.com)
# module_check: supported
# Avi Version: 17.1.1
#
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: avi_healthmonitor
author: Gaurav Rastogi (grastogi@avinetworks.com)
short_description: Module for setup of HealthMonitor Avi RESTful Object
description:
- This module is used to configure HealthMonitor object
- more examples at U(https://github.com/avinetworks/devops)
requirements: [ avisdk ]
version_added: "2.3"
options:
state:
description:
- The state that should be applied on the entity.
default: present
choices: ["absent","present"]
description:
description:
- User defined description for the object.
dns_monitor:
description:
- Healthmonitordns settings for healthmonitor.
external_monitor:
description:
- Healthmonitorexternal settings for healthmonitor.
failed_checks:
description:
- Number of continuous failed health checks before the server is marked down.
- Allowed values are 1-50.
- Default value when not specified in API or module is interpreted by Avi Controller as 2.
http_monitor:
description:
- Healthmonitorhttp settings for healthmonitor.
https_monitor:
description:
- Healthmonitorhttp settings for healthmonitor.
is_federated:
description:
- This field describes the object's replication scope.
- If the field is set to false, then the object is visible within the controller-cluster and its associated service-engines.
- If the field is set to true, then the object is replicated across the federation.
- Field introduced in 17.1.3.
- Default value when not specified in API or module is interpreted by Avi Controller as False.
version_added: "2.4"
monitor_port:
description:
- Use this port instead of the port defined for the server in the pool.
- If the monitor succeeds to this port, the load balanced traffic will still be sent to the port of the server defined within the pool.
- Allowed values are 1-65535.
- Special values are 0 - 'use server port'.
name:
description:
- A user friendly name for this health monitor.
required: true
receive_timeout:
description:
- A valid response from the server is expected within the receive timeout window.
- This timeout must be less than the send interval.
- If server status is regularly flapping up and down, consider increasing this value.
- Allowed values are 1-300.
- Default value when not specified in API or module is interpreted by Avi Controller as 4.
send_interval:
description:
- Frequency, in seconds, that monitors are sent to a server.
- Allowed values are 1-3600.
- Default value when not specified in API or module is interpreted by Avi Controller as 10.
successful_checks:
description:
- Number of continuous successful health checks before server is marked up.
- Allowed values are 1-50.
- Default value when not specified in API or module is interpreted by Avi Controller as 2.
tcp_monitor:
description:
- Healthmonitortcp settings for healthmonitor.
tenant_ref:
description:
- It is a reference to an object of type tenant.
type:
description:
- Type of the health monitor.
- Enum options - HEALTH_MONITOR_PING, HEALTH_MONITOR_TCP, HEALTH_MONITOR_HTTP, HEALTH_MONITOR_HTTPS, HEALTH_MONITOR_EXTERNAL, HEALTH_MONITOR_UDP,
- HEALTH_MONITOR_DNS, HEALTH_MONITOR_GSLB.
required: true
udp_monitor:
description:
- Healthmonitorudp settings for healthmonitor.
url:
description:
- Avi controller URL of the object.
uuid:
description:
- Uuid of the health monitor.
extends_documentation_fragment:
- avi
'''
EXAMPLES = '''
- name: Create a HTTPS health monitor
avi_healthmonitor:
controller: 10.10.27.90
username: admin
password: AviNetworks123!
https_monitor:
http_request: HEAD / HTTP/1.0
http_response_code:
- HTTP_2XX
- HTTP_3XX
receive_timeout: 4
failed_checks: 3
send_interval: 10
successful_checks: 3
type: HEALTH_MONITOR_HTTPS
name: MyWebsite-HTTPS
'''
RETURN = '''
obj:
description: HealthMonitor (api/healthmonitor) object
returned: success, changed
type: dict
'''
from ansible.module_utils.basic import AnsibleModule
try:
from ansible.module_utils.avi import (
avi_common_argument_spec, HAS_AVI, avi_ansible_api)
except ImportError:
HAS_AVI = False
def main():
argument_specs = dict(
state=dict(default='present',
choices=['absent', 'present']),
description=dict(type='str',),
dns_monitor=dict(type='dict',),
external_monitor=dict(type='dict',),
failed_checks=dict(type='int',),
http_monitor=dict(type='dict',),
https_monitor=dict(type='dict',),
is_federated=dict(type='bool',),
monitor_port=dict(type='int',),
name=dict(type='str', required=True),
receive_timeout=dict(type='int',),
send_interval=dict(type='int',),
successful_checks=dict(type='int',),
tcp_monitor=dict(type='dict',),
tenant_ref=dict(type='str',),
type=dict(type='str', required=True),
udp_monitor=dict(type='dict',),
url=dict(type='str',),
uuid=dict(type='str',),
)
argument_specs.update(avi_common_argument_spec())
module = AnsibleModule(
argument_spec=argument_specs, supports_check_mode=True)
if not HAS_AVI:
return module.fail_json(msg=(
'Avi python API SDK (avisdk>=17.1) is not installed. '
'For more details visit https://github.com/avinetworks/sdk.'))
return avi_ansible_api(module, 'healthmonitor',
set([]))
if __name__ == '__main__':
main()
| 36.494898 | 157 | 0.653292 |
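The `failed_checks` / `successful_checks` options documented above describe hysteresis: a server flips to down only after N consecutive failed probes and back to up only after M consecutive successes, which damps flapping. A small sketch of that state machine (illustrative only, not Avi's implementation):

```python
class ServerHealth:
    """Track up/down state with consecutive-check hysteresis."""

    def __init__(self, failed_checks=2, successful_checks=2):
        self.failed_checks = failed_checks
        self.successful_checks = successful_checks
        self.up = True
        self._streak = 0  # consecutive results disagreeing with the state

    def record(self, success):
        """Feed one probe result; return the (possibly updated) state."""
        if success == self.up:
            self._streak = 0  # result agrees with current state
            return self.up
        self._streak += 1
        threshold = self.successful_checks if success else self.failed_checks
        if self._streak >= threshold:
            self.up = success
            self._streak = 0
        return self.up
```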
2a756facd25032b53ba448c5b57942c81751445f | 1,433 | py | Python | pykit/utils/_generate.py | ContinuumIO/pyk | 1730d7b831e0cf12a641ac23b5cf03e17e0dc550 | [
"BSD-3-Clause"
] | 9 | 2015-06-23T00:13:49.000Z | 2022-02-23T02:46:43.000Z | pykit/utils/_generate.py | ContinuumIO/pyk | 1730d7b831e0cf12a641ac23b5cf03e17e0dc550 | [
"BSD-3-Clause"
] | 1 | 2017-08-30T08:13:12.000Z | 2017-08-31T06:36:32.000Z | pykit/utils/_generate.py | ContinuumIO/pyk | 1730d7b831e0cf12a641ac23b5cf03e17e0dc550 | [
"BSD-3-Clause"
] | 7 | 2015-05-08T10:17:47.000Z | 2021-04-01T15:00:57.000Z | #! /usr/bin/env python
"""
Generate some internal code.
"""
from __future__ import absolute_import
from collections import defaultdict
from os.path import splitext
from pykit.ir import ops
def getorder():
pos = defaultdict(int) # { 'opname': index }
fn, ext = splitext(ops.__file__)
lines = list(open(fn + '.py'))
for name, op in vars(ops).iteritems():
if isinstance(op, basestring) and name not in ('__file__', '__name__',
'constant'):
for i, line in enumerate(lines):
if line.startswith(op):
pos[op] = i
break
order = sorted((lineno, op) for op, lineno in pos.iteritems())
return order
order = getorder()
def gen_builder():
"""Generate code for pykit.ir.builder operations"""
print(" # Generated by pykit.utils._generate")
for lineno, op in order:
if op[0].isupper():
print(" %-20s = _const(ops.%s)" % (op, op))
else:
print(" %-20s = _op(ops.%s)" % (op, op))
def gen_visitor():
"""Generate code for any visitor"""
for lineno, op in order:
if not op[0].isupper():
print(" def %s(self, op):\n pass\n" % (op,))
def gen_ops(lst):
"""Generate ops for ops.py"""
for name in lst:
print("%-18s = %r" % (name, name))
if __name__ == "__main__":
    gen_builder()
| 28.66 | 78 | 0.5485 |
ed94772504c189f761e2ed4d499023bbd29a3329 | 3,243 | py | Python | tf_retinanet/models/fpn.py | edend10/tf-retinanet | dba799b0c2ccfa3c641a7e14e10599d9e4ec3437 | [
"Apache-2.0"
] | 122 | 2020-01-13T14:31:09.000Z | 2021-09-28T02:05:17.000Z | tf_retinanet/models/fpn.py | edend10/tf-retinanet | dba799b0c2ccfa3c641a7e14e10599d9e4ec3437 | [
"Apache-2.0"
] | 14 | 2020-01-13T18:43:30.000Z | 2020-10-04T13:28:25.000Z | tf_retinanet/models/fpn.py | edend10/tf-retinanet | dba799b0c2ccfa3c641a7e14e10599d9e4ec3437 | [
"Apache-2.0"
] | 42 | 2020-01-13T22:03:54.000Z | 2021-08-30T09:51:16.000Z | """
Copyright 2017-2019 Fizyr (https://fizyr.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import tensorflow as tf
from .. import layers
def create_pyramid_features(C3, C4, C5, feature_size=256):
""" Creates the FPN layers on top of the backbone features.
Args
C3 : Feature stage C3 from the backbone.
C4 : Feature stage C4 from the backbone.
C5 : Feature stage C5 from the backbone.
feature_size : The feature size to use for the resulting feature levels.
Returns
A list of feature levels [P3, P4, P5, P6, P7].
"""
# Upsample C5 to get P5 from the FPN paper.
P5 = tf.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C5_reduced')(C5)
P5_upsampled = layers.UpsampleLike(name='P5_upsampled')([P5, C4])
P5 = tf.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, padding='same', name='P5')(P5)
# Add P5 elementwise to C4.
P4 = tf.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C4_reduced')(C4)
P4 = tf.keras.layers.Add(name='P4_merged')([P5_upsampled, P4])
P4_upsampled = layers.UpsampleLike(name='P4_upsampled')([P4, C3])
P4 = tf.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, padding='same', name='P4')(P4)
# Add P4 elementwise to C3.
P3 = tf.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C3_reduced')(C3)
P3 = tf.keras.layers.Add(name='P3_merged')([P4_upsampled, P3])
P3 = tf.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, padding='same', name='P3')(P3)
# "P6 is obtained via a 3x3 stride-2 conv on C5".
P6 = tf.keras.layers.Conv2D(feature_size, kernel_size=3, strides=2, padding='same', name='P6')(C5)
# "P7 is computed by applying ReLU followed by a 3x3 stride-2 conv on P6".
P7 = tf.keras.layers.Activation('relu', name='C6_relu')(P6)
P7 = tf.keras.layers.Conv2D(feature_size, kernel_size=3, strides=2, padding='same', name='P7')(P7)
return [P3, P4, P5, P6, P7]
def build_model_pyramid(name, model, features):
""" Applies a single submodel to each FPN level.
Args
name : Name of the submodel.
model : The submodel to evaluate.
features : The FPN features.
Returns
A tensor containing the response from the submodel on the FPN features.
"""
return tf.keras.layers.Concatenate(axis=1, name=name)([model(f) for f in features])
def build_pyramid(models, features):
""" Applies all submodels to each FPN level.
Args
		models   : List of submodels to run on each pyramid level (by default only regression, classification).
features : The FPN features.
Returns
A list of tensors, one for each submodel.
"""
return [build_model_pyramid(n, m, features) for n, m in models]
| 41.050633 | 117 | 0.713228 |
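The pyramid above relies on an `UpsampleLike` layer to resize P5/P4 to the spatial shape of the next-finer backbone stage before the element-wise `Add`. A dependency-free nearest-neighbour sketch of that resize on plain nested lists (illustrative only — the real layer operates on batched TensorFlow tensors):

```python
def upsample_like(source, target):
    """Nearest-neighbour resize of a 2-D grid `source` to the shape of `target`."""
    th, tw = len(target), len(target[0])
    sh, sw = len(source), len(source[0])
    return [
        [source[r * sh // th][c * sw // tw] for c in range(tw)]
        for r in range(th)
    ]

# A 2x2 "P5" map upsampled to the 4x4 shape of a "C4" map.
p5 = [[1, 2],
      [3, 4]]
c4 = [[0] * 4 for _ in range(4)]
print(upsample_like(p5, c4))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```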
0951e94a96107a296d1168b45e8a8af9aa30099c | 727 | py | Python | nz_crawl_demo/day1/python2/demo3.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | null | null | null | nz_crawl_demo/day1/python2/demo3.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | 27 | 2020-02-12T07:55:58.000Z | 2022-03-12T00:19:09.000Z | nz_crawl_demo/day1/python2/demo3.py | gaohj/nzflask_bbs | 36a94c380b78241ed5d1e07edab9618c3e8d477b | [
"Apache-2.0"
] | 2 | 2020-02-18T01:54:55.000Z | 2020-02-21T11:36:28.000Z | #encoding:utf-8
# import sys
# print sys.version  # prints the Python version
import urllib2
import urllib
def baidu(params):
url = "https://www.baidu.com/s?" + params
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36",
}
request = urllib2.Request(url=url, headers=headers)
response = urllib2.urlopen(request)
print response.read()
if __name__ == "__main__":
#https://www.baidu.com/s?wd=%E8%8B%8D%E8%80%81%E5%B8%88&chrome=utf-8
    #the keyword the user wants to query
kw = raw_input("请输入要搜索的内容")
params = {'wd':kw,'chrome':'utf-8'}
#将字典拼接成 url
params = urllib.urlencode(params)
print params
baidu(params)
| 22.71875 | 140 | 0.651994 |
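demo3.py targets Python 2 (`urllib2`, `urllib.urlencode`). For comparison, the same query-string percent-encoding in Python 3 lives in `urllib.parse` — a sketch using the same parameters the script builds (the `wd` value is the decoded form of the example URL in the script's own comment):

```python
from urllib.parse import urlencode

# Percent-encodes non-ASCII values as UTF-8, exactly like the Python 2 call.
params = urlencode({'wd': u'苍老师', 'chrome': 'utf-8'})
print('https://www.baidu.com/s?' + params)
# https://www.baidu.com/s?wd=%E8%8B%8D%E8%80%81%E5%B8%88&chrome=utf-8
```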
1299cf977ca39152cf4c93cef51acdcca33c4c34 | 3,933 | py | Python | Shingles/run_shingles.py | mastino/ert | 5e88828b74344322ad635b8f8d804cef6c7cea9a | [
"BSD-3-Clause-LBNL"
] | null | null | null | Shingles/run_shingles.py | mastino/ert | 5e88828b74344322ad635b8f8d804cef6c7cea9a | [
"BSD-3-Clause-LBNL"
] | null | null | null | Shingles/run_shingles.py | mastino/ert | 5e88828b74344322ad635b8f8d804cef6c7cea9a | [
"BSD-3-Clause-LBNL"
] | null | null | null | '''
run_shingles.py
author Brian Gravelle
email gravelle@lanl.gov
This script generates and runs benchmarks to identify benchmarks that hit the roofline
at different arithmetic intensities on a particular system.
shingles.c is the driver; this generates a series of shingles.h files with different AIs
'''
import sys
import argparse
from shingle_defs import *
from math import floor
from subprocess import check_output
def print_shingles_h(filename, flops):
file = open(filename,"w")
file.write(header)
file.write(function_def)
file.write(variable_defs%flops)
file.write(loop_start)
if (flops % 2) == 1:
file.write(kernel_1)
for i in range(0,int(floor(flops/2))):
file.write(kernel_2)
file.write(loops_close)
file.write("\n")
file.close()
if __name__ == '__main__':
# parser = argparse.ArgumentParser(description='Process some numbers.')
# parser.add_argument('-p', '--peak', dest='peak', action='store',
# required=True, type=float,
# help='peak GFLOPS')
# parser.add_argument('-s1', '--L1_slope', dest='L1_slope', action='store',
# required=True, type=float,
# help='BW calculated for L1 cache')
# parser.add_argument('-c1', '--L1_corner', dest='L1_corner', action='store',
# required=True, type=float,
# help='AI calculated for L1 cache max GFLOPS')
# parser.add_argument('-s2', '--L2_slope', dest='L2_slope', action='store',
# required=True, type=float,
# help='BW calculated for L2 cache')
# parser.add_argument('-c2', '--L2_corner', dest='L2_corner', action='store',
# required=True, type=float,
# help='AI calculated for L2 cache max GFLOPS')
# parser.add_argument('-se', '--ext_mem_slope', dest='em_slope', action='store',
# required=True, type=float,
# help='BW calculated for external memory')
# parser.add_argument('-ce', '--ext_mem_corner', dest='em_corner', action='store',
# required=True, type=float,
# help='AI calculated for external memory max GFLOPS')
# parser.add_argument('-f', '--file', dest='file', action='store',
# required=False, default="roofline.csv",
# help='AI calculated for external memory max GFLOPS')
# args = parser.parse_args()
# print("")
# print("Peak GFLOPS = %f" % args.peak)
# print("")
# print("L1 slope = %f" % args.L1_slope)
# print("L1 max AI = %f" % args.L1_corner)
# print("L1 y-intercept = %f" % (args.peak - args.L1_slope*args.L1_corner) )
# print("")
# print("L2 slope = %f" % args.L2_slope)
# print("L2 max AI = %f" % args.L2_corner)
# print("L2 y-intercept = %f" % (args.peak - args.L2_slope*args.L2_corner) )
# print("")
# print("EM slope = %f" % args.em_slope)
# print("EM max AI = %f" % args.em_corner)
# print("EM y-intercept = %f" % (args.peak - args.em_slope*args.em_corner) )
# print("")
run = False
num_shingles = 6
AI_li = [0.0625, 0.125, 0.5, 1.0, 6.25, 12.5, 25.0]
bytes_per_elem = 16 # DP load and DP store
data_start_size = 10000
kern_dir = "test_kern_files/"
kern_pattern = "shingles_flops_%d.c"
exe_pattern = "shingles_flops_%d.exe"
for ai in AI_li:
flops = int(ai*bytes_per_elem)
# print kern_dir+(kern_pattern%flops)
print_shingles_h(kern_dir+(kern_pattern%flops), flops)
# make = cc + a64_opt + c_files + (h_file_opt%(h_dir+(h_pattern%flops))) + options
make = cc + opt + c_files + kern_dir+(kern_pattern%flops) + " " + options + "-o " + (exe_pattern%flops)
output=check_output(make, shell=True)
    if output.strip() != "":
print make
print output
quit()
if run:
nthrs = 112
nreps = 10000
for s in [4000, 16000, 64000, 256000, 512000, 1024000, 2048000]:
# for s in [1024000, 2048000, 4096000, 8192000, 16384000, 32768000, 65536000, 131072000]:
cmd = exe_cmd%(flops,s,nthrs,nreps)
output=check_output(cmd, shell=True)
        if output.strip() != "":
print cmd
print output
| 30.968504 | 105 | 0.658276 |
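The driver converts each target arithmetic intensity (AI, in FLOPs per byte) into a kernel FLOP count with `flops = int(ai * bytes_per_elem)`, then emits one `kernel_1` for an odd count plus `floor(flops / 2)` copies of `kernel_2`. A standalone restatement of that arithmetic, using the same constants as the script:

```python
def kernel_plan(ai, bytes_per_elem=16):
    """FLOPs per element for a target AI, split into the generator's kernel counts.

    bytes_per_elem = 16 assumes one DP load and one DP store per element,
    matching the script's constant.
    """
    flops = int(ai * bytes_per_elem)
    return flops, flops % 2, flops // 2   # total, kernel_1 count, kernel_2 count

for ai in [0.0625, 0.125, 0.5, 1.0, 6.25, 12.5, 25.0]:
    print(ai, kernel_plan(ai))
```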
a014102f6e73819795a7eed39f72e93ad52fbe7b | 1,183 | py | Python | pipeline/boto_helpers.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 1 | 2019-09-26T04:00:55.000Z | 2019-09-26T04:00:55.000Z | pipeline/boto_helpers.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 2 | 2020-06-05T21:58:55.000Z | 2021-06-10T21:45:08.000Z | pipeline/boto_helpers.py | wilsonpe66/server-backend | 16665d810fe1829f5dacc67f396b7cecf5af042f | [
"BSD-3-Clause"
] | 1 | 2019-09-26T03:55:06.000Z | 2019-09-26T03:55:06.000Z | import json
import os.path
import subprocess
import boto3
# This is all cribbed from the django branch's cluster_management/deployment_helpers folder
# TODO once the branches are merged, use that code and NOT this code
def get_aws_object_names():
configs_folder = get_configs_folder()
with open(os.path.join(configs_folder, 'aws-object-names.json')) as fn:
return json.load(fn)
def get_boto_client(client_type):
from config.settings import AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID
aws_object_names = get_aws_object_names()
return boto3.client(
client_type,
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
region_name=aws_object_names['region_name'],
)
def get_pipeline_folder():
return os.path.abspath(__file__).rsplit('/', 1)[0]
def get_configs_folder():
return os.path.join(get_pipeline_folder(), 'configs')
def set_default_region():
aws_object_names = get_aws_object_names()
region_name = aws_object_names['region_name']
subprocess.check_call(['aws', 'configure', 'set', 'default.region', region_name])
| 28.166667 | 92 | 0.716822 |
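`get_pipeline_folder` above derives the containing directory with `rsplit('/', 1)[0]`, which hard-codes the POSIX separator; `os.path.dirname` expresses the same thing portably. A sketch (the path is a made-up example):

```python
import os.path

def pipeline_folder(module_file):
    # Portable equivalent of os.path.abspath(__file__).rsplit('/', 1)[0]
    return os.path.dirname(os.path.abspath(module_file))

p = '/srv/app/pipeline/boto_helpers.py'   # hypothetical absolute path
print(p.rsplit('/', 1)[0])   # /srv/app/pipeline
print(os.path.dirname(p))    # /srv/app/pipeline
```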
416f9aff2f6f7f3950b9ca2b3930da5e9f04357d | 9,835 | py | Python | dis_sdk_python/dependency/setuptools-40.6.3/pkg_resources/tests/test_pkg_resources.py | leishanlin/huaweicloud-sdk-python-dis | 900317432b9e9b3fea331d2cb9aa402594f9992b | [
"Apache-2.0"
] | null | null | null | dis_sdk_python/dependency/setuptools-40.6.3/pkg_resources/tests/test_pkg_resources.py | leishanlin/huaweicloud-sdk-python-dis | 900317432b9e9b3fea331d2cb9aa402594f9992b | [
"Apache-2.0"
] | 2 | 2017-09-02T12:26:14.000Z | 2017-09-06T06:40:54.000Z | dis_sdk_python/dependency/setuptools-40.6.3/pkg_resources/tests/test_pkg_resources.py | leishanlin/huaweicloud-sdk-python-dis | 900317432b9e9b3fea331d2cb9aa402594f9992b | [
"Apache-2.0"
] | 5 | 2017-08-26T12:11:14.000Z | 2019-10-07T02:01:06.000Z | # coding: utf-8
from __future__ import unicode_literals
import sys
import tempfile
import os
import zipfile
import datetime
import time
import subprocess
import stat
import distutils.dist
import distutils.command.install_egg_info
try:
from unittest import mock
except ImportError:
import mock
from pkg_resources.extern.six.moves import map
from pkg_resources.extern.six import text_type, string_types
import pytest
import pkg_resources
__metaclass__ = type
def timestamp(dt):
"""
Return a timestamp for a local, naive datetime instance.
"""
try:
return dt.timestamp()
except AttributeError:
# Python 3.2 and earlier
return time.mktime(dt.timetuple())
class EggRemover(text_type):
def __call__(self):
if self in sys.path:
sys.path.remove(self)
if os.path.exists(self):
os.remove(self)
class TestZipProvider:
finalizers = []
ref_time = datetime.datetime(2013, 5, 12, 13, 25, 0)
"A reference time for a file modification"
@classmethod
def setup_class(cls):
"create a zip egg and add it to sys.path"
egg = tempfile.NamedTemporaryFile(suffix='.egg', delete=False)
zip_egg = zipfile.ZipFile(egg, 'w')
zip_info = zipfile.ZipInfo()
zip_info.filename = 'mod.py'
zip_info.date_time = cls.ref_time.timetuple()
zip_egg.writestr(zip_info, 'x = 3\n')
zip_info = zipfile.ZipInfo()
zip_info.filename = 'data.dat'
zip_info.date_time = cls.ref_time.timetuple()
zip_egg.writestr(zip_info, 'hello, world!')
zip_info = zipfile.ZipInfo()
zip_info.filename = 'subdir/mod2.py'
zip_info.date_time = cls.ref_time.timetuple()
zip_egg.writestr(zip_info, 'x = 6\n')
zip_info = zipfile.ZipInfo()
zip_info.filename = 'subdir/data2.dat'
zip_info.date_time = cls.ref_time.timetuple()
zip_egg.writestr(zip_info, 'goodbye, world!')
zip_egg.close()
egg.close()
sys.path.append(egg.name)
subdir = os.path.join(egg.name, 'subdir')
sys.path.append(subdir)
cls.finalizers.append(EggRemover(subdir))
cls.finalizers.append(EggRemover(egg.name))
@classmethod
def teardown_class(cls):
for finalizer in cls.finalizers:
finalizer()
def test_resource_listdir(self):
import mod
zp = pkg_resources.ZipProvider(mod)
expected_root = ['data.dat', 'mod.py', 'subdir']
assert sorted(zp.resource_listdir('')) == expected_root
assert sorted(zp.resource_listdir('/')) == expected_root
expected_subdir = ['data2.dat', 'mod2.py']
assert sorted(zp.resource_listdir('subdir')) == expected_subdir
assert sorted(zp.resource_listdir('subdir/')) == expected_subdir
assert zp.resource_listdir('nonexistent') == []
assert zp.resource_listdir('nonexistent/') == []
import mod2
zp2 = pkg_resources.ZipProvider(mod2)
assert sorted(zp2.resource_listdir('')) == expected_subdir
assert sorted(zp2.resource_listdir('/')) == expected_subdir
assert zp2.resource_listdir('subdir') == []
assert zp2.resource_listdir('subdir/') == []
def test_resource_filename_rewrites_on_change(self):
"""
If a previous call to get_resource_filename has saved the file, but
the file has been subsequently mutated with different file of the
same size and modification time, it should not be overwritten on a
subsequent call to get_resource_filename.
"""
import mod
manager = pkg_resources.ResourceManager()
zp = pkg_resources.ZipProvider(mod)
filename = zp.get_resource_filename(manager, 'data.dat')
actual = datetime.datetime.fromtimestamp(os.stat(filename).st_mtime)
assert actual == self.ref_time
f = open(filename, 'w')
f.write('hello, world?')
f.close()
ts = timestamp(self.ref_time)
os.utime(filename, (ts, ts))
filename = zp.get_resource_filename(manager, 'data.dat')
with open(filename) as f:
assert f.read() == 'hello, world!'
manager.cleanup_resources()
class TestResourceManager:
def test_get_cache_path(self):
mgr = pkg_resources.ResourceManager()
path = mgr.get_cache_path('foo')
type_ = str(type(path))
message = "Unexpected type from get_cache_path: " + type_
assert isinstance(path, string_types), message
def test_get_cache_path_race(self, tmpdir):
# Patch to os.path.isdir to create a race condition
def patched_isdir(dirname, unpatched_isdir=pkg_resources.isdir):
patched_isdir.dirnames.append(dirname)
was_dir = unpatched_isdir(dirname)
if not was_dir:
os.makedirs(dirname)
return was_dir
patched_isdir.dirnames = []
# Get a cache path with a "race condition"
mgr = pkg_resources.ResourceManager()
mgr.set_extraction_path(str(tmpdir))
archive_name = os.sep.join(('foo', 'bar', 'baz'))
with mock.patch.object(pkg_resources, 'isdir', new=patched_isdir):
mgr.get_cache_path(archive_name)
# Because this test relies on the implementation details of this
# function, these assertions are a sentinel to ensure that the
# test suite will not fail silently if the implementation changes.
called_dirnames = patched_isdir.dirnames
assert len(called_dirnames) == 2
assert called_dirnames[0].split(os.sep)[-2:] == ['foo', 'bar']
assert called_dirnames[1].split(os.sep)[-1:] == ['foo']
"""
Tests to ensure that pkg_resources runs independently from setuptools.
"""
def test_setuptools_not_imported(self):
"""
In a separate Python environment, import pkg_resources and assert
that action doesn't cause setuptools to be imported.
"""
lines = (
'import pkg_resources',
'import sys',
(
'assert "setuptools" not in sys.modules, '
'"setuptools was imported"'
),
)
cmd = [sys.executable, '-c', '; '.join(lines)]
subprocess.check_call(cmd)
class TestDeepVersionLookupDistutils:
@pytest.fixture
def env(self, tmpdir):
"""
Create a package environment, similar to a virtualenv,
in which packages are installed.
"""
class Environment(str):
pass
env = Environment(tmpdir)
tmpdir.chmod(stat.S_IRWXU)
subs = 'home', 'lib', 'scripts', 'data', 'egg-base'
env.paths = dict(
(dirname, str(tmpdir / dirname))
for dirname in subs
)
list(map(os.mkdir, env.paths.values()))
return env
def create_foo_pkg(self, env, version):
"""
Create a foo package installed (distutils-style) to env.paths['lib']
as version.
"""
ld = "This package has unicode metadata! ❄"
attrs = dict(name='foo', version=version, long_description=ld)
dist = distutils.dist.Distribution(attrs)
iei_cmd = distutils.command.install_egg_info.install_egg_info(dist)
iei_cmd.initialize_options()
iei_cmd.install_dir = env.paths['lib']
iei_cmd.finalize_options()
iei_cmd.run()
def test_version_resolved_from_egg_info(self, env):
version = '1.11.0.dev0+2329eae'
self.create_foo_pkg(env, version)
# this requirement parsing will raise a VersionConflict unless the
# .egg-info file is parsed (see #419 on BitBucket)
req = pkg_resources.Requirement.parse('foo>=1.9')
dist = pkg_resources.WorkingSet([env.paths['lib']]).find(req)
assert dist.version == version
@pytest.mark.parametrize(
'unnormalized, normalized',
[
('foo', 'foo'),
('foo/', 'foo'),
('foo/bar', 'foo/bar'),
('foo/bar/', 'foo/bar'),
],
)
def test_normalize_path_trailing_sep(self, unnormalized, normalized):
"""Ensure the trailing slash is cleaned for path comparison.
See pypa/setuptools#1519.
"""
result_from_unnormalized = pkg_resources.normalize_path(unnormalized)
result_from_normalized = pkg_resources.normalize_path(normalized)
assert result_from_unnormalized == result_from_normalized
@pytest.mark.skipif(
os.path.normcase('A') != os.path.normcase('a'),
reason='Testing case-insensitive filesystems.',
)
@pytest.mark.parametrize(
'unnormalized, normalized',
[
('MiXeD/CasE', 'mixed/case'),
],
)
def test_normalize_path_normcase(self, unnormalized, normalized):
"""Ensure mixed case is normalized on case-insensitive filesystems.
"""
result_from_unnormalized = pkg_resources.normalize_path(unnormalized)
result_from_normalized = pkg_resources.normalize_path(normalized)
assert result_from_unnormalized == result_from_normalized
@pytest.mark.skipif(
os.path.sep != '\\',
reason='Testing systems using backslashes as path separators.',
)
@pytest.mark.parametrize(
'unnormalized, expected',
[
('forward/slash', 'forward\\slash'),
('forward/slash/', 'forward\\slash'),
('backward\\slash\\', 'backward\\slash'),
],
)
def test_normalize_path_backslash_sep(self, unnormalized, expected):
"""Ensure path seps are cleaned on backslash path sep systems.
"""
result = pkg_resources.normalize_path(unnormalized)
assert result.endswith(expected)
| 33.681507 | 77 | 0.631317 |
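The module-level `timestamp` helper above falls back to `time.mktime` when `datetime.timestamp()` is unavailable (Python 3.2 and earlier). A small check that the fallback round-trips a naive local datetime at second resolution, reusing the test class's reference time:

```python
import time
from datetime import datetime

def naive_timestamp(dt):
    # The fallback branch: epoch seconds for a local, naive datetime.
    return time.mktime(dt.timetuple())

ref = datetime(2013, 5, 12, 13, 25, 0)   # the test class's reference time
print(datetime.fromtimestamp(naive_timestamp(ref)) == ref)  # True
```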
aa840622831a7c609bdb4ab09308cc89d8ee503a | 4,521 | py | Python | predictionserver/futureconventions/namingconventions.py | EricZLou/predictionserver | 5f9d4a6711f9941f3c221ef1dccf57418dd17306 | [
"MIT"
] | null | null | null | predictionserver/futureconventions/namingconventions.py | EricZLou/predictionserver | 5f9d4a6711f9941f3c221ef1dccf57418dd17306 | [
"MIT"
] | null | null | null | predictionserver/futureconventions/namingconventions.py | EricZLou/predictionserver | 5f9d4a6711f9941f3c221ef1dccf57418dd17306 | [
"MIT"
] | null | null | null | from predictionserver.futureconventions.sepconventions import SepConventions
import os
import re
import uuid
import itertools
# Things that should be moved elsewhere :)
class NamingConventions:
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.DELAYED = "delayed" + SepConventions.sep()
self.CDF = 'cdf' + SepConventions.sep()
        # Referenced by lagged_values_name / lagged_times_name below but missing
        # from the original __init__; defined here following the class's pattern.
        self.LAGGED_VALUES = "lagged_values" + SepConventions.sep()
        self.LAGGED_TIMES = "lagged_times" + SepConventions.sep()
self.LINKS = "links" + SepConventions.sep()
self.BACKLINKS = "backlinks" + SepConventions.sep()
self.MESSAGES = "messages" + SepConventions.sep()
self.HISTORY = "history" + SepConventions.sep()
self.SUBSCRIBERS = "subscribers" + SepConventions.sep()
self.SUBSCRIPTIONS = "subscriptions" + SepConventions.sep()
self.PREDICTIONS = "predictions" + SepConventions.sep()
self.SAMPLES = "samples" + SepConventions.sep()
self.BALANCE = "balance" + SepConventions.sep()
self.PERFORMANCE = "performance" + SepConventions.sep()
self.BUDGETS = "budget" + SepConventions.sep()
self.VOLUMES = "volumes" + SepConventions.sep()
self.SUMMARY = "summary" + SepConventions.sep()
def history_name(self, name):
return self.HISTORY + name
def lagged_values_name(self, name):
return self.LAGGED_VALUES + name
def lagged_times_name(self, name):
return self.LAGGED_TIMES + name
def links_name(self, name, delay):
return self.LINKS + str(delay) + SepConventions.sep() + name
def backlinks_name(self, name):
return self.BACKLINKS + name
def subscribers_name(self, name):
return self.SUBSCRIBERS + name
def subscriptions_name(self, name):
return self.SUBSCRIPTIONS + name
def cdf_name(self, name, delay=None):
        return self.CDF + name if delay is None else self.CDF + str(delay) + SepConventions.sep() + name
def performance_name(self, write_key):
return self.PERFORMANCE + write_key + '.json'
def balance_name(self, write_key):
return self.BALANCE + write_key + '.json'
def delayed_name(self, name, delay):
return self.DELAYED + str(delay) + SepConventions.sep() + name
def messages_name(self, name):
return self.MESSAGES + name
@staticmethod
def is_plain_name(name: str):
return NamingConventions.is_valid_name(name) and not "~" in name
@staticmethod
def is_valid_name(name: str):
name_regex = re.compile(r'^[-a-zA-Z0-9_~.:]{1,200}\.[json,html]+$', re.IGNORECASE)
return (re.match(name_regex, name) is not None) and (not SepConventions.sep() in name)
@staticmethod
def random_name():
return str(uuid.uuid4()) + '.json'
def zcurve_names(self, names, delays:[int]):
znames = list()
for delay in delays:
for dim in [1, 2, 3]:
name_combinations = list(itertools.combinations(sorted(names), dim))
zname = self.zcurve_name(names=name_combinations, delay=delay)
znames.append(zname)
return znames
@staticmethod
def zcurve_name(names: [str], delay: int):
""" Naming convention for derived quantities, called zcurves """
basenames = sorted([n.split('.')[-2] for n in names])
prefix = "z" + str(len(names))
clearbase = "~".join([prefix] + basenames + [str(delay)])
return clearbase + '.json'
class LegacyNamingConventions:
def __init__(self,**kwargs):
super().__init__(**kwargs)
self.TRANSACTIONS = "transactions" + SepConventions.sep()
self.CONFIRMS = "confirms" + SepConventions.sep()
self.WARNINGS = "warnings" + SepConventions.sep()
self.ERRORS = "errors" + SepConventions.sep()
def legacy_confirms_name(self, write_key):
return self.CONFIRMS + write_key + '.json'
def legacy_errors_name(self, write_key):
return self.ERRORS + write_key + '.json'
def legacy_warnings_name(self, write_key):
return self.WARNINGS + write_key + '.json'
def legacy_transactions_name(self, write_key=None, name=None, delay=None ):
""" Convention for names of transactions records """
delay = None if delay is None else str(delay)
key_stem = None if write_key is None else os.path.splitext(write_key)[0]
name_stem = None if name is None else os.path.splitext(name)[0]
tail = SepConventions.sep().join( [ s for s in [key_stem,delay,name_stem] if s is not None ])
return self.TRANSACTIONS + tail + '.json'
| 36.756098 | 104 | 0.646317 |
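`zcurve_name` composes derived-stream names from the sorted file stems: a `z<n>` prefix, the stems, and the delay, joined by `~`, plus a `.json` suffix. A standalone restatement of the convention (the stream names are made-up examples):

```python
import itertools

def zcurve_name(names, delay):
    # Same logic as NamingConventions.zcurve_name, without the class context.
    basenames = sorted(n.split('.')[-2] for n in names)
    prefix = "z" + str(len(names))
    return "~".join([prefix] + basenames + [str(delay)]) + '.json'

# One- and two-dimensional combinations of two streams at delay 70:
streams = ['cop.json', 'btc.json']
for dim in (1, 2):
    for combo in itertools.combinations(sorted(streams), dim):
        print(zcurve_name(list(combo), delay=70))
# z1~btc~70.json
# z1~cop~70.json
# z2~btc~cop~70.json
```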
828115475a4350b8972c7d571a0399a7d825a8fd | 6,980 | py | Python | src/contentratings/browser/basic.py | NextThought/nti.contentratings | d6c5cb082085c03edfdc87d27d4bdd22f336d68f | [
"ZPL-2.1"
] | null | null | null | src/contentratings/browser/basic.py | NextThought/nti.contentratings | d6c5cb082085c03edfdc87d27d4bdd22f336d68f | [
"ZPL-2.1"
] | null | null | null | src/contentratings/browser/basic.py | NextThought/nti.contentratings | d6c5cb082085c03edfdc87d27d4bdd22f336d68f | [
"ZPL-2.1"
] | null | null | null | from datetime import datetime, timedelta
from zope.interface import Interface, implementer
from zope.component import queryUtility, queryMultiAdapter
from zope.traversing.browser.interfaces import IAbsoluteURL
from contentratings.browser.interfaces import IAnonymousSession
from contentratings.browser.interfaces import IRatingView
from contentratings.interfaces import _
from zope.schema.interfaces import IVocabularyTokenized
try:
from ZPublisher.BaseRequest import DefaultPublishTraverse
except ImportError:
pass
try:
from zExceptions import BadRequest
except ImportError:
class BadRequest(Exception):
pass
try:
from Products.statusmessages.interfaces import IStatusMessage
except ImportError:
# No Plone
class IStatusMessage(Interface):
pass
@implementer(IRatingView)
class BasicEditorialRatingView(object):
"""A basic view for applying and removing user ratings. Expects
its context to be an IRatingManager providing IEditorialRating."""
vocab_name = 'contentratings.browser.base_vocabs.five_star_vocab'
traversal_name = 'EditorialRating'
# TODO There should be a better way to do this.
def publishTraverse(self, request, name):
"""
Make sure we don't end up in
Products.Five.browser.metaconfigure.ViewMixinForTemplates's
publishTraverse.
"""
adapter = DefaultPublishTraverse(self, request)
return adapter.publishTraverse(request, name)
def __init__(self, context, request):
"""We implement this to make the tests happy"""
self.context = context
self.request = request
@property
def vocabulary(self):
"""Get te named vocabulary for vaildation"""
return queryUtility(IVocabularyTokenized, self.vocab_name)
@property
def content_url(self):
"""Returns the content url, try to make Zope 2 and 3 happy"""
# context.context is the content being rated
context = self.context.context
# Use Zope 3 mechanism if possible
url = queryMultiAdapter((context, self.request),
IAbsoluteURL)
# Fallback on Zope 2's OFS.Traversible
if url is None and hasattr(context, 'absolute_url'):
url = context.absolute_url()
return url
def rate(self, value, redirect=True):
"""Rate the content. Enforce vocabulary values.
"""
try:
value = int(value)
if value not in self.vocabulary:
raise ValueError()
except ValueError:
raise BadRequest("Invalid rating value")
msg = _(u'The rating has been changed')
self.context.rating = value
return self._redirect(redirect, msg=msg)
def _setMessage(self, msg):
"""Attempt to send a message to the user. Plone only for now."""
sm = IStatusMessage(self.request, alternate=None)
if sm is not None:
sm.addStatusMessage(msg, type='info')
def _redirect(self, redirect, msg=None):
"""Redirect the user to a specified url and set a status message if
desired. Use the content url if the url is not specified.
"""
if redirect:
url = self.request.get('HTTP_REFERER', self.content_url)
redirect = redirect is True and url or redirect
if msg:
self._setMessage(msg)
self.request.response.redirect(redirect)
return msg
class BasicUserRatingView(BasicEditorialRatingView):
"""A basic view for applying and removing user ratings. Expects
its context to be an IRatingManager providing IUserRating."""
# 1 Year rating timeout
ANON_TIMEOUT = timedelta(365)
traversal_name = 'UserRating'
@property
def _session_key(self):
"""Lookup the anonymous session key"""
ses_util = queryUtility(IAnonymousSession)
if ses_util is not None:
return ses_util.get_anon_key(self.request)
@property
def can_rate(self):
"""A user can rate if the rating manager says so. Make sure
anonymous users cannot repeatedly rate the same content by using
the threshold time and a session id"""
context = self.context
userid = context.userid
# check the category write expression
if not context.can_write:
return False
elif userid is None:
key = self._session_key
if key is not None:
# If another rating was made under this session, don't allow
# another rating unless enough time has passed. We need to use
# the storage directly because we need this info, even if the
# user can't view other ratings.
last_rating = context.storage.last_anon_rating(key)
if last_rating and \
(datetime.utcnow() - last_rating) <= self.ANON_TIMEOUT:
return False
return True
def rate(self, value, redirect=True):
"""Rate the content. Enforce vocabulary values.
"""
try:
value = int(value)
if value not in self.vocabulary:
raise ValueError()
except ValueError:
raise BadRequest("Invalid rating value")
context = self.context
userid = context.userid
msg = _(u'You have changed your rating')
if userid is None and not self.can_rate:
return self._redirect(redirect,
msg=_(u'You have already rated this '
u'item, and cannot change your '
u'rating unless you log in.'))
elif userid is None:
# rate the object passing in the session id
context.rate(value, session_key=self._session_key)
else:
context.rate(value, userid)
return self._redirect(redirect, msg=msg)
def remove_rating(self, redirect=True):
"""Remove the rating for the current user"""
context = self.context
userid = context.userid
if userid:
context.remove_rating(userid)
return self._redirect(redirect, msg=_(u'You have removed your rating'))
@property
def current_rating(self):
"""Return the logged in user's currently set rating regardless of
security settings. Users should always be able to see their own
rating."""
context = self.context
userid = context.userid
# go directly to the storage to bypass security
return userid and context.storage.userRating(userid)
class SmallStarUserRating(BasicUserRatingView):
"""A view that specifies small stars"""
star_size = 12 # px
class ThreeStarUserRating(SmallStarUserRating):
"""A view that specifies small stars"""
vocab_name='contentratings.browser.base_vocabs.three_star_vocab'
| 36.354167 | 79 | 0.641261 |
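`BasicUserRatingView.can_rate` throttles anonymous raters: a session may rate again only once its last recorded rating is older than `ANON_TIMEOUT` (365 days). The core check, extracted as a sketch:

```python
from datetime import datetime, timedelta

ANON_TIMEOUT = timedelta(365)  # one year, as in BasicUserRatingView

def anon_can_rate(last_rating, now=None):
    """True when the anonymous session may rate again (mirrors can_rate's check)."""
    if last_rating is None:
        return True
    now = now or datetime.utcnow()
    return (now - last_rating) > ANON_TIMEOUT

now = datetime(2024, 1, 1)
print(anon_can_rate(None, now))                       # True: never rated before
print(anon_can_rate(now - timedelta(days=30), now))   # False: rated 30 days ago
print(anon_can_rate(now - timedelta(days=400), now))  # True: timeout has elapsed
```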
64cd106f643918a969d934e38a1488a72ec66864 | 1,005 | py | Python | scraper/storage_spiders/fyivn.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | null | null | null | scraper/storage_spiders/fyivn.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 10 | 2020-02-11T23:34:28.000Z | 2022-03-11T23:16:12.000Z | scraper/storage_spiders/fyivn.py | chongiadung/choinho | d2a216fe7a5064d73cdee3e928a7beef7f511fd1 | [
"MIT"
] | 3 | 2018-08-05T14:54:25.000Z | 2021-06-07T01:49:59.000Z | # Auto generated by generator.py. Delete this line if you make modification.
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
XPATH = {
'name' : "//div[@class='prod_header_title']/h1[@id='prod_title']",
'price' : "//div[@class='prod_pricebox_price_final']/span[@id='special_price_box']",
'category' : "//div[@class='header__breadcrumb__wrapper']/ul/li/h3/a",
'description' : "//div[@class='product-description__block']/p",
'images' : "//div[@id='productImageBox']//span/@data-zoom-image",
'canonical' : "//link[@rel='canonical']/@href",
'base_url' : "",
'brand' : ""
}
name = 'fyi.vn'
allowed_domains = ['fyi.vn']
start_urls = ['https://www.fyi.vn']
tracking_url = ''
sitemap_urls = ['']
sitemap_rules = [('', 'parse_item')]
sitemap_follow = []
rules = [
Rule(LinkExtractor(allow=['/[a-zA-Z0-9-]+\.html']), 'parse_item'),
Rule(LinkExtractor(allow=['/[a-zA-Z0-9-]+/']), 'parse'),
#Rule(LinkExtractor(), 'parse_item_and_links'),
]
| 37.222222 | 88 | 0.646766 |
117d62406d6b638d839e27a86f0c1ffadfab7310 | 659 | py | Python | runs/bro/100KB/src1-tgt1/icmp-par-noids-iter00500.cfg.py | Largio/broeval | 89e831d07f066100afdd1a5b220f9f08f1c10b3d | [
"MIT"
] | null | null | null | runs/bro/100KB/src1-tgt1/icmp-par-noids-iter00500.cfg.py | Largio/broeval | 89e831d07f066100afdd1a5b220f9f08f1c10b3d | [
"MIT"
] | null | null | null | runs/bro/100KB/src1-tgt1/icmp-par-noids-iter00500.cfg.py | Largio/broeval | 89e831d07f066100afdd1a5b220f9f08f1c10b3d | [
"MIT"
] | null | null | null |
# Write results to this file
OUTFILE = 'runs/bro/100KB/src1-tgt1/icmp-par-noids-iter00500.result.csv'
# Source computers for the request
SOURCE = ['10.0.0.1']
# Target machines for the requests (aka server)
TARGET = ['10.0.0.2']
# IDS Mode. (ATM: noids, min, max, http, ssl, ftp, icmp, mysql)
IDSMODE = 'noids'
# Connection mode (par = parallel, seq = sequential)
MODE = 'par'
# Number of evaluation repititions to run
EPOCHS = 100
# Number of iterations to be run in each evaluation repitition
ITER = 500
# Size of the file to be downloaded from target (in Bytes * 10^SIZE)
SIZE = 5
# Protocol to be used e.g. HTTP, SSL, FTP, MYSQL
PROTOCOL = 'icmp'
| 24.407407 | 72 | 0.704097 |