    padding: Either the `string` `"SAME"` or `"VALID"` indicating the type of
padding algorithm to use, or a list indicating the explicit paddings at
the start and end of each dimension. When explicit padding is used and
data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
      pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
[pad_top, pad_bottom], [pad_left, pad_right]]`.
data_format: An optional `string` from: `"NHWC", "NCHW"`.
Defaults to `"NHWC"`.
Specify the data format of the input and output data. With the
default format "NHWC", the data is stored in the order of:
`batch_shape + [height, width, channels]`.
Alternatively, the format could be "NCHW", the data storage order of:
`batch_shape + [channels, height, width]`.
dilations: An int or list of `ints` that has length `1`, `2` or `4`,
      defaults to 1. The dilation factor for each dimension of `input`. If a
single value is given it is replicated in the `H` and `W` dimension. By
default the `N` and `C` dimensions are set to 1. If set to k > 1, there
will be k-1 skipped cells between each filter element on that dimension.
The dimension order is determined by the value of `data_format`, see above
      for details. If given as a 4-d tensor, the dilations in the batch and
      depth dimensions must be 1.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input` and the same outer batch shape.
"""
# pylint: enable=line-too-long
return conv2d(input, # pylint: disable=redefined-builtin
filters,
strides,
padding,
use_cudnn_on_gpu=True,
data_format=data_format,
dilations=dilations,
name=name)
@tf_export(v1=["nn.conv2d"])
@dispatch.add_dispatch_support
def conv2d( # pylint: disable=redefined-builtin,dangerous-default-value
input,
filter=None,
strides=None,
padding=None,
use_cudnn_on_gpu=True,
data_format="NHWC",
dilations=[1, 1, 1, 1],
name=None,
filters=None):
r"""Computes a 2-D convolution given 4-D `input` and `filter` tensors.
Given an input tensor of shape `[batch, in_height, in_width, in_channels]`
and a filter / kernel tensor of shape
`[filter_height, filter_width, in_channels, out_channels]`, this op
performs the following:
1. Flattens the filter to a 2-D matrix with shape
`[filter_height * filter_width * in_channels, output_channels]`.
2. Extracts image patches from the input tensor to form a *virtual*
tensor of shape `[batch, out_height, out_width,
filter_height * filter_width * in_channels]`.
3. For each patch, right-multiplies the filter matrix and the image patch
vector.
In detail, with the default NHWC format,
output[b, i, j, k] =
sum_{di, dj, q} input[b, strides[1] * i + di, strides[2] * j + dj, q]
* filter[di, dj, q, k]
Must have `strides[0] = strides[3] = 1`. For the most common case of the same
horizontal and vertical strides, `strides = [1, stride, stride, 1]`.
Args:
input: A `Tensor`. Must be one of the following types:
`half`, `bfloat16`, `float32`, `float64`.
A 4-D tensor. The dimension order is interpreted according to the value
of `data_format`, see below for details.
filter: A `Tensor`. Must have the same type as `input`.
A 4-D tensor of shape
`[filter_height, filter_width, in_channels, out_channels]`
strides: An int or list of `ints` that has length `1`, `2` or `4`. The
stride of the sliding window for each dimension of `input`. If a single
value is given it is replicated in the `H` and `W` dimension. By default
the `N` and `C` dimensions are set to 1. The dimension order is determined
by the value of `data_format`, see below for details.
padding: Either the `string` `"SAME"` or `"VALID"` indicating the type of
padding algorithm to use, or a list indicating the explicit paddings at
the start and end of each dimension. When explicit padding is used and
data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
      pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
[pad_top, pad_bottom], [pad_left, pad_right]]`.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from: `"NHWC", "NCHW"`.
Defaults to `"NHWC"`.
Specify the data format of the input and output data. With the
default format "NHWC", the data is stored in the order of:
[batch, height, width, channels].
Alternatively, the format could be "NCHW", the data storage order of:
[batch, channels, height, width].
dilations: An int or list of `ints` that has length `1`, `2` or `4`,
      defaults to 1. The dilation factor for each dimension of `input`. If a
single value is given it is replicated in the `H` and `W` dimension. By
default the `N` and `C` dimensions are set to 1. If set to k > 1, there
will be k-1 skipped cells between each filter element on that dimension.
The dimension order is determined by the value of `data_format`, see above
      for details. If given as a 4-d tensor, the dilations in the batch and
      depth dimensions must be 1.
name: A name for the operation (optional).
filters: Alias for filter.
Returns:
A `Tensor`. Has the same type as `input`.
"""
filter = deprecation.deprecated_argument_lookup(
"filters", filters, "filter", filter)
padding, explicit_paddings = convert_padding(padding)
if data_format is None:
data_format = "NHWC"
channel_index = 1 if data_format.startswith("NC") else 3
strides = _get_sequence(strides, 2, channel_index, "strides")
dilations = _get_sequence(dilations, 2, channel_index, "dilations")
shape = input.shape
# shape object may lack ndims, e.g., if input is an np.ndarray. In that case,
# we fall back to len(shape).
ndims = getattr(shape, "ndims", -1)
if ndims == -1:
ndims = len(shape)
if ndims in (4, 3, 2, 1, 0, None):
# We avoid calling squeeze_batch_dims to reduce extra python function
# call slowdown in eager mode. This branch doesn't require reshapes.
return gen_nn_ops.conv2d(
input,
filter=filter,
strides=strides,
padding=padding,
use_cudnn_on_gpu=use_cudnn_on_gpu,
explicit_paddings=explicit_paddings,
data_format=data_format,
dilations=dilations,
name=name)
return squeeze_batch_dims(
input,
functools.partial(
gen_nn_ops.conv2d,
filter=filter,
strides=strides,
padding=padding,
use_cudnn_on_gpu=use_cudnn_on_gpu,
explicit_paddings=explicit_paddings,
data_format=data_format,
dilations=dilations),
inner_rank=3,
name=name)
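As a rough illustration of the `"SAME"`/`"VALID"` padding semantics described in the docstring above, the spatial output size along one dimension follows the usual formulas. This is a plain-Python sketch that does not require TensorFlow; `conv_output_size` is a hypothetical helper introduced here for illustration, not part of this module:

```python
import math

def conv_output_size(in_size, filter_size, stride, padding, dilation=1):
    """Spatial output size of a convolution along one dimension."""
    # Dilation spreads the filter taps apart, enlarging its receptive field.
    effective_filter = (filter_size - 1) * dilation + 1
    if padding == "SAME":
        # SAME pads so that every input position produces an output.
        return math.ceil(in_size / stride)
    if padding == "VALID":
        # VALID uses only positions where the filter fits entirely.
        return math.ceil((in_size - effective_filter + 1) / stride)
    raise ValueError(f"unknown padding: {padding!r}")

# A 28x28 input with a 3x3 filter, stride 1:
assert conv_output_size(28, 3, 1, "SAME") == 28
assert conv_output_size(28, 3, 1, "VALID") == 26
# Dilation 2 makes the 3x3 filter cover a 5x5 window:
assert conv_output_size(28, 3, 1, "VALID", dilation=2) == 24
```

These formulas agree with what the op itself reports in `output.shape` for the corresponding padding mode.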
@tf_export(v1=["nn.conv2d_backprop_filter"])
@dispatch.add_dispatch_support
def conv2d_backprop_filter( # pylint: disable=redefined-builtin,dangerous-default-value
input,
filter_sizes,
out_backprop,
strides,
padding,
use_cudnn_on_gpu=True,
data_format="NHWC",
dilations=[1, 1, 1, 1],
name=None):
r"""Computes the gradients of convolution with respect to the filter.
Args:
input: A `Tensor`. Must be one of the following types:
`half`, `bfloat16`, `float32`, `float64`.
4-D with shape `[batch, in_height, in_width, in_channels]`.
filter_sizes: A `Tensor` of type `int32`.
An integer vector representing the tensor shape of `filter`,
where `filter` is a 4-D
`[filter_height, filter_width, in_channels, out_channels]` tensor.
out_backprop: A `Tensor`. Must have the same type as `input`.
4-D with shape `[batch, out_height, out_width, out_channels]`.
Gradients w.r.t. the output of the convolution.
strides: A list of `ints`.
The stride of the sliding window for each dimension of the input
of the convolution. Must be in the same order as the dimension specified
with format.
    padding: Either the `string` `"SAME"` or `"VALID"` indicating the type of
padding algorithm to use, or a list indicating the explicit paddings at
the start and end of each dimension. When explicit padding is used and
data_format is `"NHWC"`, this should be in the form `[[0, 0], [pad_top,
      pad_bottom], [pad_left, pad_right], [0, 0]]`. When explicit padding is used
and data_format is `"NCHW"`, this should be in the form `[[0, 0], [0, 0],
[pad_top, pad_bottom], [pad_left, pad_right]]`.
use_cudnn_on_gpu: An optional `bool`. Defaults to `True`.
data_format: An optional `string` from: `"NHWC", "NCHW"`.
Defaults to `"NHWC"`.
Specify the data format of the input and output data. With the
default format "NHWC", the data is stored in the order of:
[batch, in_height, in_width, in_channels].
Alternatively, the format could be "NCHW", the data storage order of:
[batch, in_channels, in_height, in_width].
dilations: An optional list of `ints`. Defaults to `[1, 1, 1, 1]`.
1-D tensor of length 4. The dilation factor for each dimension of
`input`. If set to k > 1, there will be k-1 skipped cells between each
filter element on that dimension. The dimension order is determined by
the value of `data_format`, see above for details. Dilations in the batch
and depth dimensions must be 1.
name: A name for the operation (optional).
Returns:
A `Tensor`. Has the same type as `input`.
"""
padding, explicit_paddings = convert_padding(padding)
return gen_nn_ops.conv2d_backprop_filter(
input, filter_sizes, out_backprop, strides, padding, use_cudnn_on_gpu,
explicit_paddings, data_format, dilations, name)
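Conceptually, the filter gradient computed by this op is a correlation of the input with the output gradient. For a 1-D, stride-1, VALID convolution this reduces to the small sums below; this is a pure-Python illustration of the math, not the op's actual implementation:

```python
def conv1d_valid(x, f):
    """1-D VALID convolution (really cross-correlation), stride 1."""
    n = len(x) - len(f) + 1
    return [sum(x[i + k] * f[k] for k in range(len(f))) for i in range(n)]

def conv1d_backprop_filter(x, filter_len, grad_out):
    # d out[i] / d f[k] = x[i + k], so df[k] = sum_i x[i + k] * grad_out[i].
    return [sum(x[i + k] * grad_out[i] for i in range(len(grad_out)))
            for k in range(filter_len)]

x = [1.0, 2.0, 3.0, 4.0]
f = [0.5, -1.0]
out = conv1d_valid(x, f)                                   # 3 outputs
grad = conv1d_backprop_filter(x, len(f), [1.0, 1.0, 1.0])  # all-ones upstream grad
assert out == [-1.5, -2.0, -2.5]
assert grad == [6.0, 9.0]
```

The 2-D op does the same accumulation over both spatial dimensions and the channel axes.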
@tf_export(v1=["nn.conv2d_backprop_input"])
@dispatch.add_dispatch_support
def conv2d_backprop_input(  # pylint: disable=redefined-builtin,dangerous-default-value
        self.TaskId = None
self.RequestId = None
def _deserialize(self, params):
self.TaskId = params.get("TaskId")
self.RequestId = params.get("RequestId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyInstanceParamsRequest(AbstractModel):
"""ModifyInstanceParams请求参数结构体
"""
def __init__(self):
"""
:param InstanceId: 实例ID
:type InstanceId: str
:param InstanceParams: 实例修改的参数列表
:type InstanceParams: list of InstanceParam
"""
self.InstanceId = None
self.InstanceParams = None
def _deserialize(self, params):
self.InstanceId = params.get("InstanceId")
if params.get("InstanceParams") is not None:
self.InstanceParams = []
for item in params.get("InstanceParams"):
obj = InstanceParam()
obj._deserialize(item)
self.InstanceParams.append(obj)
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
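The `_deserialize` methods in these model classes all share the same defensive pattern: copy the known keys, then warn about any key in `params` that does not map to an attribute. A minimal standalone sketch of that pattern (`Demo` is a made-up model, not part of the SDK):

```python
import warnings

class Demo:
    def __init__(self):
        self.InstanceId = None

    def _deserialize(self, params):
        self.InstanceId = params.get("InstanceId")
        # Any key left over after removing known attribute names is unused.
        member_set = set(params.keys())
        for name in vars(self):
            member_set.discard(name)
        if member_set:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)

obj = Demo()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    obj._deserialize({"InstanceId": "crs-123", "Unknown": 1})
assert obj.InstanceId == "crs-123"
assert len(caught) == 1 and "Unknown" in str(caught[0].message)
```

The warning lets callers notice when the API returns fields newer than the installed SDK, without failing the deserialization.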
class ModifyInstanceParamsResponse(AbstractModel):
"""ModifyInstanceParams返回参数结构体
"""
def __init__(self):
"""
:param Changed: 修改是否成功。
:type Changed: bool
:param TaskId: 任务ID
:type TaskId: int
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str
"""
self.Changed = None
self.TaskId = None
self.RequestId = None
def _deserialize(self, params):
self.Changed = params.get("Changed")
self.TaskId = params.get("TaskId")
self.RequestId = params.get("RequestId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyInstanceRequest(AbstractModel):
"""ModifyInstance请求参数结构体
"""
def __init__(self):
"""
:param Operation: 修改实例操作,如填写:rename-表示实例重命名;modifyProject-修改实例所属项目;modifyAutoRenew-修改实例续费标记
:type Operation: str
:param InstanceIds: 实例Id
:type InstanceIds: list of str
:param InstanceNames: 实例的新名称
:type InstanceNames: list of str
:param ProjectId: 项目Id
:type ProjectId: int
:param AutoRenews: 自动续费标识。0 - 默认状态(手动续费);1 - 自动续费;2 - 明确不自动续费
:type AutoRenews: list of int
:param InstanceId: 已经废弃
:type InstanceId: str
:param InstanceName: 已经废弃
:type InstanceName: str
:param AutoRenew: 已经废弃
:type AutoRenew: int
"""
self.Operation = None
self.InstanceIds = None
self.InstanceNames = None
self.ProjectId = None
self.AutoRenews = None
self.InstanceId = None
self.InstanceName = None
self.AutoRenew = None
def _deserialize(self, params):
self.Operation = params.get("Operation")
self.InstanceIds = params.get("InstanceIds")
self.InstanceNames = params.get("InstanceNames")
self.ProjectId = params.get("ProjectId")
self.AutoRenews = params.get("AutoRenews")
self.InstanceId = params.get("InstanceId")
self.InstanceName = params.get("InstanceName")
self.AutoRenew = params.get("AutoRenew")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyInstanceResponse(AbstractModel):
"""ModifyInstance返回参数结构体
"""
def __init__(self):
"""
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str
"""
self.RequestId = None
def _deserialize(self, params):
self.RequestId = params.get("RequestId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyMaintenanceWindowRequest(AbstractModel):
"""ModifyMaintenanceWindow请求参数结构体
"""
def __init__(self):
"""
:param InstanceId: 实例ID
:type InstanceId: str
:param StartTime: 维护时间窗起始时间,如:17:00
:type StartTime: str
:param EndTime: 维护时间窗结束时间,如:19:00
:type EndTime: str
"""
self.InstanceId = None
self.StartTime = None
self.EndTime = None
def _deserialize(self, params):
self.InstanceId = params.get("InstanceId")
self.StartTime = params.get("StartTime")
self.EndTime = params.get("EndTime")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyMaintenanceWindowResponse(AbstractModel):
"""ModifyMaintenanceWindow返回参数结构体
"""
def __init__(self):
"""
:param Status: 修改状态:success 或者 failed
:type Status: str
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str
"""
self.Status = None
self.RequestId = None
def _deserialize(self, params):
self.Status = params.get("Status")
self.RequestId = params.get("RequestId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyNetworkConfigRequest(AbstractModel):
"""ModifyNetworkConfig请求参数结构体
"""
def __init__(self):
"""
:param InstanceId: 实例ID
:type InstanceId: str
:param Operation: 操作类型:changeVip——修改实例VIP;changeVpc——修改实例子网;changeBaseToVpc——基础网络转VPC网络
:type Operation: str
:param Vip: VIP地址,changeVip的时候填写,不填则默认分配
:type Vip: str
:param VpcId: 私有网络ID,changeVpc、changeBaseToVpc的时候需要提供
:type VpcId: str
:param SubnetId: 子网ID,changeVpc、changeBaseToVpc的时候需要提供
:type SubnetId: str
"""
self.InstanceId = None
self.Operation = None
self.Vip = None
self.VpcId = None
self.SubnetId = None
def _deserialize(self, params):
self.InstanceId = params.get("InstanceId")
self.Operation = params.get("Operation")
self.Vip = params.get("Vip")
self.VpcId = params.get("VpcId")
self.SubnetId = params.get("SubnetId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ModifyNetworkConfigResponse(AbstractModel):
"""ModifyNetworkConfig返回参数结构体
"""
def __init__(self):
"""
:param Status: 执行状态:true|false
:type Status: bool
:param SubnetId: 子网ID
:type SubnetId: str
:param VpcId: 私有网络ID
:type VpcId: str
:param Vip: VIP地址
:type Vip: str
:param RequestId: 唯一请求 ID,每次请求都会返回。定位问题时需要提供该次请求的 RequestId。
:type RequestId: str
"""
self.Status = None
self.SubnetId = None
self.VpcId = None
self.Vip = None
self.RequestId = None
def _deserialize(self, params):
self.Status = params.get("Status")
self.SubnetId = params.get("SubnetId")
self.VpcId = params.get("VpcId")
self.Vip = params.get("Vip")
self.RequestId = params.get("RequestId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class Outbound(AbstractModel):
"""安全组出站规则
"""
def __init__(self):
"""
:param Action: 策略,ACCEPT或者DROP。
:type Action: str
:param AddressModule: 地址组id代表的地址集合。
:type AddressModule: str
:param CidrIp: 来源Ip或Ip段,例如192.168.0.0/16。
:type CidrIp: str
:param Desc: 描述。
:type Desc: str
:param IpProtocol: 网络协议,支持udp、tcp等。
:type IpProtocol: str
:param PortRange: 端口。
:type PortRange: str
:param ServiceModule: 服务组id代表的协议和端口集合。
:type ServiceModule: str
:param Id: 安全组id代表的地址集合。
:type Id: str
"""
self.Action = None
self.AddressModule = None
self.CidrIp = None
self.Desc = None
self.IpProtocol = None
self.PortRange = None
self.ServiceModule = None
self.Id = None
def _deserialize(self, params):
self.Action = params.get("Action")
self.AddressModule = params.get("AddressModule")
self.CidrIp = params.get("CidrIp")
self.Desc = params.get("Desc")
self.IpProtocol = params.get("IpProtocol")
self.PortRange = params.get("PortRange")
self.ServiceModule = params.get("ServiceModule")
self.Id = params.get("Id")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ProductConf(AbstractModel):
"""产品信息
"""
def __init__(self):
"""
:param Type: 产品类型,2 – Redis2.8内存版(标准架构),3 – CKV 3.2内存版(标准架构),4 – CKV 3.2内存版(集群架构),5 – Redis2.8内存版(单机版),6 – Redis4.0内存版(标准架构),7 – Redis4.0内存版(集群架构),8 – Redis5.0内存版(标准架构),9 – Redis5.0内存版(集群架构),10 – Redis4.0混合存储版Tendis
:type Type: int
:param TypeName: 产品名称,Redis主从版,CKV主从版,CKV集群版,Redis单机版,Redis集群版,混合存储版Tendis
:type TypeName: str
:param MinBuyNum: 购买时的最小数量
:type MinBuyNum: int
:param MaxBuyNum: 购买时的最大数量
:type MaxBuyNum: int
:param Saleout: 产品是否售罄
:type Saleout: bool
:param Engine: 产品引擎,腾讯云CKV或者社区版Redis
:type Engine: str
:param Version: 兼容版本,Redis-2.8,Redis-3.2,Redis-4.0
:type Version: str
:param TotalSize: 规格总大小,单位G
:type TotalSize: list of str
:param ShardSize: 每个分片大小,单位G
:type ShardSize: list of str
:param ReplicaNum: 副本数量
:type ReplicaNum: list of str
:param ShardNum: 分片数量
:type ShardNum: list of str
:param PayMode: 支持的计费模式,1-包年包月,0-按量计费
:type PayMode: str
:param EnableRepicaReadOnly: 是否支持副本只读
:type EnableRepicaReadOnly: bool
"""
self.Type = None
self.TypeName = None
self.MinBuyNum = None
self.MaxBuyNum = None
self.Saleout = None
self.Engine = None
self.Version = None
self.TotalSize = None
self.ShardSize = None
self.ReplicaNum = None
self.ShardNum = None
self.PayMode = None
self.EnableRepicaReadOnly = None
def _deserialize(self, params):
self.Type = params.get("Type")
self.TypeName = params.get("TypeName")
self.MinBuyNum = params.get("MinBuyNum")
self.MaxBuyNum = params.get("MaxBuyNum")
self.Saleout = params.get("Saleout")
self.Engine = params.get("Engine")
self.Version = params.get("Version")
self.TotalSize = params.get("TotalSize")
self.ShardSize = params.get("ShardSize")
self.ReplicaNum = params.get("ReplicaNum")
self.ShardNum = params.get("ShardNum")
self.PayMode = params.get("PayMode")
self.EnableRepicaReadOnly = params.get("EnableRepicaReadOnly")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class ProxyNodes(AbstractModel):
"""Proxy节点信息
"""
def __init__(self):
"""
:param NodeId: 节点ID
注意:此字段可能返回 null,表示取不到有效值。
:type NodeId: str
"""
self.NodeId = None
def _deserialize(self, params):
self.NodeId = params.get("NodeId")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class RedisBackupSet(AbstractModel):
"""实例的备份数组
"""
def __init__(self):
"""
:param StartTime: 开始备份的时间
:type StartTime: str
:param BackupId: 备份ID
:type BackupId: str
:param BackupType: 备份类型。 manualBackupInstance:用户发起的手动备份; systemBackupInstance:凌晨系统发起的备份
:type BackupType: str
:param Status: 备份状态。 1:"备份被其它流程锁定"; 2:"备份正常,没有被任何流程锁定"; -1:"备份已过期"; 3:"备份正在被导出"; 4:"备份导出成功"
:type Status: int
:param Remark: 备份的备注信息
:type Remark: str
:param Locked: 备份是否被锁定,0:未被锁定;1:已被锁定
:type Locked: int
"""
self.StartTime = None
self.BackupId = None
self.BackupType = None
self.Status = None
self.Remark = None
self.Locked = None
def _deserialize(self, params):
self.StartTime = params.get("StartTime")
self.BackupId = params.get("BackupId")
self.BackupType = params.get("BackupType")
self.Status = params.get("Status")
self.Remark = params.get("Remark")
self.Locked = params.get("Locked")
        member_set = set(params.keys())
        for name, value in vars(self).items():
            if name in member_set:
                member_set.remove(name)
        if len(member_set) > 0:
            warnings.warn("%s fields are useless." % ",".join(member_set), Warning)
class RedisCommonInstanceList(AbstractModel):
"""单个实例信息
"""
def __init__(self):
"""
:param InstanceName: 实例名称
:type InstanceName: str
:param InstanceId: 实例id
:type InstanceId: str
:param AppId: 用户id
:type AppId: int
:param ProjectId: 实例所属项目id
:type ProjectId: int
:param Region: 实例接入区域
:type Region: str
:param Zone: 实例接入zone
:type Zone: str
:param VpcId: 实例网络id
:type VpcId: str
:param SubnetId: 子网id
:type SubnetId: str
:param Status: 实例状态信息,0-创建中,1-运行中
:type Status: str
:param Vips: 实例网络ip
:type | |
"""
Collection of validators for rating data.
"""
from collections import defaultdict
import itertools
import attr
from registered import parser
from registered.validate import helpers
@attr.s(frozen=True)
class ValidationError: # pylint: disable=too-few-public-methods
"""
Wrapper around a single instance of a validation error.
"""
file_type = attr.ib()
key = attr.ib()
error = attr.ib()
description = attr.ib()
def validate_unique_pattern_prefix(rating):
"""
    For most patterns in the PAT file, the pattern prefix (first 5 characters) and direction are unique.
"""
expected_non_unique_keys = [
("00wad", "Inbound"),
("00rad", "Inbound"),
("00wad", "Outbound"),
("00rad", "Outbound"),
("0746_", "Inbound"),
("0746_", "Outbound"),
]
patterns_by_prefix = defaultdict(set)
for parsed in rating["pat"]:
if not isinstance(parsed, parser.Pattern):
continue
if parsed.direction_name == "":
# ignore blanks
continue
key = (parsed.pattern_id[:5], parsed.direction_name)
if key in expected_non_unique_keys:
continue
patterns_by_prefix[key].add(parsed.pattern_id)
for (key, pattern_ids) in patterns_by_prefix.items():
if len(pattern_ids) == 1:
continue
yield ValidationError(
file_type="pat",
key=key,
error="non_unique_pattern",
description=f"multiple patterns with prefix: {list(pattern_ids)}",
)
def validate_unique_timepoint_pattern(rating):
"""
    For a given timepoint pattern ID, the list of timepoints should always be the same.
"""
patterns_by_id = defaultdict(list)
for timepoint_pattern in rating["ppat"]:
patterns_by_id[timepoint_pattern.timepoint_pattern_id].append(timepoint_pattern)
for (timepoint_pattern_id, patterns) in patterns_by_id.items():
if len(patterns) == 1:
continue
[first, *rest] = patterns
for pattern in rest:
if first.timepoints != pattern.timepoints:
yield ValidationError(
file_type="ppat",
key=timepoint_pattern_id,
error="non_unique_timepoint_pattern",
description=f"{first.timepoints} != {pattern.timepoints}",
)
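The grouping logic in `validate_unique_timepoint_pattern` boils down to: bucket records by ID, then compare every later record against the first one seen. A self-contained sketch with plain tuples (the `(pattern_id, timepoints)` record shape here is invented for illustration):

```python
from collections import defaultdict

def find_inconsistent(records):
    """records: iterable of (pattern_id, timepoints) pairs.

    Yields (pattern_id, first, other) for any bucket whose timepoint
    list disagrees with the first record seen for that id."""
    by_id = defaultdict(list)
    for pattern_id, timepoints in records:
        by_id[pattern_id].append(timepoints)
    for pattern_id, groups in by_id.items():
        first, *rest = groups
        for other in rest:
            if other != first:
                yield (pattern_id, first, other)

records = [
    ("39-1", ["hvd", "cntsq", "dudly"]),
    ("39-1", ["hvd", "cntsq", "dudly"]),   # consistent duplicate: fine
    ("66-2", ["a", "b"]),
    ("66-2", ["a", "c"]),                  # disagrees with the first record
]
assert list(find_inconsistent(records)) == [("66-2", ["a", "b"], ["a", "c"])]
```

Like the real validator, this compares only against the first record, so a single mismatched entry produces one error rather than a quadratic number of pairs.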
def validate_no_extra_timepoints(rating):
"""
All timepoints in PAT should also be in PPAT for a given route/direction.
Exceptions:
- PPAT records with an empty direction_name
- RAD/WAD routes
"""
timepoints_by_route_direction = helpers.timepoints_by_route_direction(rating)
key = None
for record in rating["pat"]:
# keep track of the last Pattern we saw
if isinstance(record, parser.Pattern):
if record.route_id in {"rad", "wad"}:
# RAD/WAD routes don't need to get validated
key = None
continue
key = (record.route_id, record.direction_name)
if key not in timepoints_by_route_direction and record.direction_name != "":
yield ValidationError(
file_type="pat",
key=key,
error="timepoint_pattern_missing",
description="No matching timepoint pattern found",
)
continue
# record is a PatternStop
if key is None or key not in timepoints_by_route_direction:
# missing route/directions already provided a ValidationError above
continue
if record.revenue_type != parser.RevenueType.REVENUE:
continue
timepoint = record.timepoint_id
if timepoint not in timepoints_by_route_direction[key]:
yield ValidationError(
file_type="pat",
key=key,
error="timepoint_missing_from_timepoint_pattern",
description=f"{repr(timepoint)} missing from timepoint patterns",
)
def validate_timepoints_in_consistent_order(rating):
"""
Timepoints in PAT should be in the same order as in PPAT for a given route/direction.
"""
timepoints_by_route_direction = helpers.timepoints_by_route_direction(rating)
pattern = None
timepoints = []
def validate_timepoints():
key = (pattern.route_id, pattern.direction_name)
expected_timepoints = timepoints_by_route_direction.get(key, [])
if expected_timepoints == []:
return # pylint: disable
if not helpers.same_list_order(expected_timepoints, timepoints):
yield ValidationError(
file_type="pat",
key=pattern.pattern_id,
error="timepoints_out_of_order",
description=(
f"expected timepoint order: {repr(expected_timepoints)} "
f"actual timepoint order: {repr(timepoints)}"
),
)
for record in rating["pat"]:
if isinstance(record, parser.Pattern):
if pattern:
yield from validate_timepoints()
pattern = record
timepoints = []
continue
if isinstance(record, parser.PatternStop) and record.timepoint_id:
timepoints.append(record.timepoint_id)
if pattern:
yield from validate_timepoints()
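`helpers.same_list_order` is not shown in this file; one plausible reading is that the observed timepoints must appear in the same relative order as the expected list, ignoring entries present in only one of them. Under that assumption (this is a sketch, not the actual helper) it could look like:

```python
def same_list_order(expected, actual):
    """True if the items common to both lists appear in the same order."""
    common = [x for x in actual if x in expected]
    expected_common = [x for x in expected if x in actual]
    return common == expected_common

assert same_list_order(["a", "b", "c"], ["a", "c"])      # order preserved
assert not same_list_order(["a", "b", "c"], ["c", "a"])  # out of order
```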
VALID_GARAGES = {
"albny",
"arbor",
"cabot",
"censq",
"charl",
"fell",
"lynn",
"ncamb",
"prwb",
"soham",
"qubus",
"somvl",
"wondw",
}
def validate_block_garages(rating):
"""
Validate that each block leaves/arrives from the same, valid, garage.
Exceptions:
- Central Square -> Lynn
- Lynn -> Central Square
- Lynn -> Wonderland
- Wonderland -> Lynn
- dead reckoning schedules (ST1, DR1)
"""
for record in rating["blk"]:
if not isinstance(record, parser.Block):
continue
if record.service_key in {"ST1", "DR1"}:
continue
(first_garage, _) = record.times[0]
(last_garage, _) = record.times[-1]
for garage in [first_garage, last_garage]:
if garage not in VALID_GARAGES:
yield ValidationError(
file_type="blk",
key=(record.block_id, record.service_key),
error="block_with_invalid_garage",
description=f"{garage} is not a valid garage",
)
if first_garage != last_garage and (first_garage, last_garage) not in {
("censq", "lynn"),
("lynn", "censq"),
("lynn", "wondw"),
("wondw", "lynn"),
}:
yield ValidationError(
file_type="blk",
key=(record.block_id, record.service_key),
error="block_with_different_garage",
description=f"leaves from {first_garage}, arrives at {last_garage}",
)
def validate_all_blocks_have_trips(rating):
"""
Validate that all blocks have at least one revenue trip.
Exceptions:
- RAD/WAD blocks
"""
previous_block = None
has_revenue_trips = False
revenue_trips = {
trip.trip_id
for trip in rating["trp"]
if isinstance(trip, parser.Trip)
and trip.revenue_type
in {parser.RevenueType.REVENUE, parser.RevenueType.OPPORTUNITY}
}
def error():
return ValidationError(
file_type="blk",
key=(previous_block.block_id, previous_block.service_key),
error="block_with_no_trips",
description="Block has no/only non-revenue trips",
)
for record in rating["blk"]:
if isinstance(record, parser.Block):
if "rad" in record.block_id or "wad" in record.block_id:
# don't need to validate RAD/WAD trips.
previous_block = None
has_revenue_trips = False
continue
if previous_block is not None and not has_revenue_trips:
yield error()
previous_block = record
has_revenue_trips = False
continue
if previous_block is None:
continue
if isinstance(record, parser.TripIdentifier):
if record.trip_id in revenue_trips:
has_revenue_trips = True
if not has_revenue_trips and previous_block is not None:
yield error()
def validate_trip_has_valid_pattern(rating):
"""
Validate that each trip's pattern is also present in the PAT file.
Exceptions:
- non revenue trips
"""
valid_patterns = {
pattern.pattern_id
for pattern in rating["pat"]
if isinstance(pattern, parser.Pattern)
}
invalid_trips = (
trip
for trip in rating["trp"]
if isinstance(trip, parser.Trip)
and trip.revenue_type != parser.RevenueType.NON_REVENUE
and trip.pattern_id not in valid_patterns
)
for trip in invalid_trips:
yield ValidationError(
file_type="trp",
key=trip.trip_id,
error="trip_with_invalid_pattern",
description=f"pattern {trip.pattern_id} does not exist",
)
def validate_stop_has_only_one_timepoint(rating):
"""
Stops should only have one timepoint value.
"""
stop_timepoints = defaultdict(set)
for stop in rating["nde"]:
if stop.timepoint_id == "":
continue
# add the default timepoint if the stop exists in the NDE file
stop_timepoints[stop.stop_id].add(stop.timepoint_id)
for record in rating["pat"]:
if not isinstance(record, parser.PatternStop):
continue
if record.timepoint_id == "":
continue
stop_timepoints[record.stop_id].add(record.timepoint_id)
for (stop_id, timepoints) in stop_timepoints.items():
if len(timepoints) == 1:
continue
yield ValidationError(
file_type="pat",
key=stop_id,
error="stop_with_multiple_timepoints",
description=repr(timepoints),
)
def validate_all_routes_have_patterns(rating):
"""
All routes (RTE file) should have at least one pattern.
"""
routes = {route.route_id for route in rating["rte"]}
routes_from_patterns = {
record.route_id
for record in rating["pat"]
if isinstance(record, parser.Pattern)
}
missing_routes = routes - routes_from_patterns
for route_id in missing_routes:
yield ValidationError(
file_type="rte",
key=route_id,
error="route_without_patterns",
description="route has no patterns in PAT file",
)
def validate_pattern_stop_has_node(rating):
"""
All PatternStop records should exist in the NDE file.
"""
valid_stops = {stop.stop_id for stop in rating["nde"]}
pattern = None
for record in rating["pat"]:
if isinstance(record, parser.Pattern):
pattern = record
continue
if not isinstance(record, parser.PatternStop):
continue
if record.stop_id not in valid_stops:
yield ValidationError(
file_type="pat",
key=(pattern.pattern_id, record.stop_id),
error="pattern_stop_without_node",
description=f"stop {record.stop_id} not in NDE file",
)
def validate_routes_have_two_directions(rating):
"""
Each route in the PPAT file should have two directions.
Exceptions:
- 171
- 195
- 214
- rad
- wad
"""
default_expected_count = 2
override_counts = {"171": 1, "195": 1, "214": 1, "rad": 1, "wad": 1}
routes_to_directions = defaultdict(set)
for trip_pattern in rating["ppat"]:
routes_to_directions[trip_pattern.route_id].add(trip_pattern.direction_name)
for (route_id, direction_names) in routes_to_directions.items():
expected_count = override_counts.get(route_id, default_expected_count)
if len(direction_names) == expected_count:
continue
yield ValidationError(
file_type="ppat",
key=route_id,
error="route_with_unexpected_direction_count",
description=f"has directions {repr(direction_names)}",
)
def validate_all_blocks_have_runs(rating):
"""
Each block in the BLK file should have at least one Piece in the CRW file.
"""
piece_id_service_keys = {
(piece.piece_id, piece.service_key)
for piece in rating["crw"]
if isinstance(piece, parser.Piece)
}
for block in rating["blk"]:
if not isinstance(block, parser.Block):
continue
if (block.piece_id, block.service_key) in piece_id_service_keys:
continue
yield ValidationError(
file_type="blk",
error="block_without_runs",
key=(block.block_id, block.service_key),
description="No pieces found.",
)
def validate_all_runs_have_blocks(rating):
"""
    Each Piece in the CRW file should have at least one block in the BLK file.
"""
piece_id_service_keys = {
(block.piece_id, block.service_key)
for block in rating["blk"]
if isinstance(block, parser.Block)
}
for piece in rating["crw"]:
if not isinstance(piece, parser.Piece):
continue
if (piece.piece_id, piece.service_key) in piece_id_service_keys:
continue
yield ValidationError(
file_type="crw",
error="run_without_blocks",
key=(piece.run_id, piece.service_key),
description="No blocks found.",
)
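The two validators above are mirror images: each builds a set of `(piece_id, service_key)` keys from one file and flags records in the other file whose key is missing. A minimal sketch of that cross-file membership check, with plain tuples standing in for the parsed records (all IDs here are hypothetical):

```python
# (piece_id, service_key) pairs present in the CRW file:
crw_keys = {("p1", "wkd"), ("p2", "sat")}

# Blocks from the BLK file, as (block_id, piece_id, service_key):
blocks = [("b1", "p1", "wkd"), ("b2", "p9", "wkd")]

# A block is an orphan when its (piece_id, service_key) has no piece.
orphans = [(block_id, svc)
           for (block_id, piece_id, svc) in blocks
           if (piece_id, svc) not in crw_keys]
print(orphans)  # [('b2', 'wkd')]
```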
def validate_calendar_exceptions_have_unique_runs(rating):
"""
Validate that each used exception combo has unique run IDs.
Inside TransitMaster, we only use the last 3 digits of the service ID to
identify which blocks/runs are active. Inside HASTUS, the schedulers need
to be aware of this, so that those groups use a unique set of runs. If they
are not, it can cause an issue where overlapping runs are activated inside
TM on a particular date, causing lots of problems.
"""
calendar_dates_to_exceptions = defaultdict(set)
for record in rating["cal"]:
if not isinstance(record, parser.CalendarDate):
continue
if record.service_key == "":
# Service not active on the date
continue
calendar_dates_to_exceptions[record.date].add(record.service_key)
possible_exceptions = {
frozenset(combo) for combo in calendar_dates_to_exceptions.values()
}
runs_by_service_key = defaultdict(set)
for record in rating["crw"]:
if not isinstance(record, parser.Piece):
continue
runs_by_service_key[record.service_key].add(record.run_id)
for combo in possible_exceptions:
if len(combo) == 1:
continue
for (fst, snd) in itertools.combinations(combo, 2):
overlaps = runs_by_service_key[fst] & runs_by_service_key[snd]
for run_id in overlaps:
yield ValidationError(
file_type="crw",
error="calendar_exception_with_duplicate_runs",
key=run_id,
description=f"used by services: {fst}, {snd}",
)
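The pairwise check at the end can be sketched in isolation. The service keys and run IDs below are made up; the point is that `itertools.combinations` visits every pair of keys in an exception combo and intersects their run sets:

```python
import itertools
from collections import defaultdict

# Hypothetical: two service keys whose last-3-digit IDs are both
# activated on the same exception date.
runs_by_service_key = defaultdict(set)
runs_by_service_key["017"] = {"r1", "r2"}
runs_by_service_key["018"] = {"r2", "r3"}

# One exception combo activating both keys at once:
possible_exceptions = {frozenset({"017", "018"})}

duplicates = set()
for combo in possible_exceptions:
    for fst, snd in itertools.combinations(sorted(combo), 2):
        # Any run shared between two co-active keys is a conflict.
        duplicates |= runs_by_service_key[fst] & runs_by_service_key[snd]
print(duplicates)  # {'r2'} would be activated twice in TransitMaster
```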
ALL_VALIDATORS = [
validate_all_blocks_have_trips,
validate_all_blocks_have_runs,
validate_all_routes_have_patterns,
validate_all_runs_have_blocks,
validate_block_garages,
validate_calendar_exceptions_have_unique_runs,
validate_no_extra_timepoints,
validate_pattern_stop_has_node,
validate_routes_have_two_directions,
validate_stop_has_only_one_timepoint,
validate_timepoints_in_consistent_order,
]
<filename>DINGO/along_tract.py
import gc
import re
from nipy import load_image
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cycler, transforms
from matplotlib.legend_handler import HandlerLine2D
from matplotlib.collections import LineCollection
import stats
class HandlerColorLine2D(HandlerLine2D):
def __init__(self, cmap, **kw):
self.cmap = cmap # not a natural Line2D property
super(HandlerColorLine2D, self).__init__(**kw)
def create_artists(self, legend, orig_handle, xdescent, ydescent, width,
height, fontsize, trans):
x = np.linspace(0, width, self.get_numpoints(legend) + 1)
y = np.zeros(self.get_numpoints(legend) + 1) + height / 2. - ydescent
points = np.array([x, y]).T.reshape(-1, 1, 2)
segs = np.concatenate([points[:-1], points[1:]], axis=1)
lc = LineCollection(segs, cmap=self.cmap, transform=trans)
lc.set_array(x)
lc.set_linewidth(orig_handle.get_linewidth() + 2)
return [lc]
def add_group_lines_ci(axes, data, pvals=None, thresh=0.05, scale='red',
lc=(0, 0, 0.8), lw=5.0, z=1, alphal=0.8, alphaci=0.4):
group_mean = np.ma.mean(data, 1) # mean over t, i.e. by slice
ci = stats.confInt(data)
xs = np.arange(data.shape[0])
line = axes.plot(xs, group_mean,
color=lc, linewidth=lw, zorder=z, alpha=alphal)
patch = axes.fill_between(xs, group_mean + ci, group_mean - ci,
facecolor=lc, alpha=alphaci, zorder=z)
if pvals is not None:
for i in range(len(xs) - 1):
if pvals[i] <= thresh:
if scale == 'red':
pcolor = [min(1, abs(1 - (pvals[i] / thresh) + .4)), 0, 0]
elif scale == 'green':
pcolor = [0, min(1, abs(1 - (pvals[i] / thresh) + .4)), 0]
else:
pcolor = lc
axes.plot(xs[i:i + 2], group_mean[i:i + 2],
lw=5, zorder=3, color=pcolor, solid_capstyle="round")
return line, patch
def add_ind_lines(axes, data,
ind_sort=False, lw=1, z=1, alpha=0.5):
xs = np.arange(data.shape[0])
if ind_sort:
ind_mean = np.ma.mean(data, 0)
sort = np.argsort(ind_mean) # sort by average fa per person
ys = data[:, sort]
else:
ys = data
lines = axes.plot(xs, ys,
linewidth=lw, zorder=z, alpha=alpha)
return lines
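`add_ind_lines` orders individuals (columns) by their per-column mean via `np.argsort`, so adjacent colors in the colormap correspond to adjacent mean values. A minimal sketch of that column reordering:

```python
import numpy as np

# Rows are slices, columns are individuals; sort columns by their
# mean, as add_ind_lines does when ind_sort is truthy.
data = np.array([[0.3, 0.9, 0.6],
                 [0.1, 0.7, 0.8]])
col_means = data.mean(axis=0)   # per-individual mean: [0.2, 0.8, 0.7]
order = np.argsort(col_means)   # [0, 2, 1]
sorted_cols = data[:, order]    # columns reordered by mean
print(order.tolist())  # [0, 2, 1]
```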
def add_ind_labels(figure, axes, data, ind_labels):
txt_offset = transforms.offset_copy(axes.transData, fig=figure,
x=0.02, y=0.05, units='inches')
ind_peaks_idx = zip(np.argmax(data, 0), np.arange(data.shape[1]))
# zip(slice of max fa by individual, number of individual)
labels = []
for x, ind in ind_peaks_idx:
labels.append(plt.text(x, data[x, ind], ind_labels[ind],
fontsize=10,
transform=txt_offset))
return labels
def plot_along(data, data2=None, pvals=None, thresh=0.05, scale='red',
title='FA Along Tract', xlim=None, ylim=(0, 1), xlabel='Unknown Dir', ylabel='FA',
fig_facecolor=(1, 1, 1), fig_size=(15, 5), bg_color=(0, 0, 0), bg_alpha=0.25,
lcolor=(0, 0, 0.8), lcolor2=(0, 0.8, 0), legend=None, ind_sort=None,
ind_labels=None, ind_cmap=None, ind_cmap2=None, filename='FA_along_tract', ):
"""Plot along tract FA values, either means or individuals separately.
Data input is a 2-D numpy masked array
Parameters
----------
data : 2-D numpy array - Mandatory
data2 : 2-D numpy array
pvals : 1-D numpy array
thresh : Float - alpha threshold - default 0.05
scale : Str - default 'red'
title : Str - default 'FA Along Tract'
    xlim : 2-sequence - default (0, data.shape[0])
    ylim : 2-sequence - default (0, 1)
xlabel : Str - default 'Unknown Dir'
ylabel : Str - default 'FA'
fig_facecolor : 3-sequence - figure background color - default (1,1,1)
fig_size : 2-sequence - figure size - default (15,5)
bg_color : 3-sequence - axis background color - default (0,0,0)
bg_alpha : Float - axis background alpha - default 0.25
lcolor : 3-sequence - mean line color - default (0,0,0.8)
lcolor2 : 3-sequence - mean line 2 color - default (0,0.8,0)
    legend : Tuple - does not apply to ind, default ('Mean', '95% CI')
ind_sort : Bool - default None i.e. plot group mean, otherwise
whether to sort individual along tract fas in the colorspace by mean
ind_labels : Sequence (Str,) same length as data
ind_cmap : Str - colormap name - default 'plasma'
ind_cmap2 : Str - colormap name - default 'PuBuGn'
filename : Str - save filename - default 'FA_along_tract'
Return
------
matplotlib.pyplot.figure
savefile image at filename"""
if xlim is None:
xlim = (0, data.shape[0])
if legend is None:
legend = ('Mean', '95% CI')
if len(data.shape) != 2:
raise (ValueError('data must be 2-D, but has shape %s' % (data.shape,)))
fig = plt.figure(
facecolor=fig_facecolor,
figsize=fig_size)
sub = fig.add_subplot(111, # 1st figure 1x1
facecolor=bg_color,
xlim=xlim,
ylim=ylim)
sub.set_title(title,
fontsize=14,
fontweight='bold')
sub.set_xlabel(''.join(('Slice: ', xlabel)),
fontsize=14,
fontweight='bold')
sub.set_ylabel(ylabel,
fontsize=14,
fontweight='bold')
sub.patch.set_alpha(bg_alpha)
if ind_sort is None: # plot mean, confidence interval
if pvals is not None: # add pvals to plot
if len(pvals) != len(data):
                raise ValueError('data and pvals must have the same length.'
                                 '\nData: %d, Pvals: %d' %
                                 (len(data), len(pvals)))
g1_line, g1_patch = add_group_lines_ci(sub, data, pvals, # sig on g1 line
thresh=thresh, scale=scale, lc=lcolor, lw=5.0, z=1,
alphal=0.8, alphaci=0.4)
if data2 is not None:
g2_line, g2_patch = add_group_lines_ci(sub, data2,
lc=lcolor2, lw=5.0, z=2, alphal=0.8, alphaci=0.4)
plt.legend(handles=[g1_line[0], g2_line[0], g1_patch, g2_patch],
labels=legend)
else:
plt.legend(handles=[g1_line[0], g1_patch], labels=legend)
else: # plot individual lines
if ind_cmap is None:
cmap = plt.get_cmap('plasma')
else:
cmap = plt.get_cmap(ind_cmap)
if ind_cmap2 is None:
cmap2 = plt.get_cmap('PuBuGn')
else:
cmap2 = plt.get_cmap(ind_cmap2)
color1 = cmap(np.linspace(0, 1, data.shape[1]))
ind_lines1 = add_ind_lines(sub, data, ind_sort, 1, 1, 0.5)
if data2 is not None:
color2 = cmap2(np.linspace(0, 1, data2.shape[1]))
colors = np.concatenate((color1, color2), 0)
ind_lines2 = add_ind_lines(sub, data2, ind_sort, 1, 1, 0.5)
ind_lines = ind_lines1 + ind_lines2
plt.legend(handles=[ind_lines1[0], ind_lines2[0]],
labels=legend,
handler_map={
ind_lines1[0]: HandlerColorLine2D(cmap=cmap, numpoints=4),
ind_lines2[0]: HandlerColorLine2D(cmap=cmap2, numpoints=4)})
else:
colors = color1
ind_lines = ind_lines1
plt.legend(handles=[ind_lines1[0]],
labels=legend,
handler_map={
ind_lines1[0]: HandlerColorLine2D(cmap=cmap, numpoints=4)})
plt.gca().set_prop_cycle(cycler(color=colors)) # distribute in cmap
for i, j in enumerate(ind_lines):
j.set_color(colors[i])
if ind_labels is not None:
if data2 is None:
if len(ind_labels) != data.shape[1]:
                raise ValueError('data and labels must have the same length.'
                                 '\nData: {:d}, Labels: {:d}'
                                 .format(data.shape[1], len(ind_labels)))
else:
i1_labels = add_ind_labels(fig, sub, data, ind_labels)
else:
if len(ind_labels[0]) != data.shape[1]:
                raise ValueError('data and labels must have the same length.'
                                 '\nData: {:d}, Labels: {:d}'
                                 .format(data.shape[1], len(ind_labels[0])))
if len(ind_labels[1]) != data2.shape[1]:
                raise ValueError('data and labels must have the same length.'
                                 '\nData2: {:d}, Labels: {:d}'
                                 .format(data2.shape[1], len(ind_labels[1])))
else:
i1_labels = add_ind_labels(fig, sub, data, ind_labels[0])
i2_labels = add_ind_labels(fig, sub, data2, ind_labels[1])
plt.savefig(filename,
dpi=900,
facecolor=fig.get_facecolor(),
edgecolor='w',
orientation='landscape',
bbox_inches=None,
pad_inches=0.1)
plt.close(fig)
def get_data(filename):
"""Load a nifti and return its data as an array
Parameters
----------
filename : Str
Return
------
data : numpy.array"""
data_nii = load_image(filename)
return data_nii.get_data()
def mask_data(data_filename, mask_filename):
"""Mask data, keep points where mask = 1
Parameters
----------
data_filename : Str
mask_filename : Str
Return
------
masked_data : numpy.ma.masked_array"""
data = get_data(data_filename)
mask = get_data(mask_filename)
if data.shape != mask.shape:
raise (LookupError('Data and mask do not have the same dimensions.'
'\nData: %s, Mask: %s' %
(data.shape, mask.shape)))
else:
np_mask = np.ma.make_mask(mask)
ext_mask = np.invert(np_mask)
masked_data = np.ma.masked_array(data, ext_mask)
return masked_data
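`numpy.ma.masked_array` hides elements where the mask is `True`, which is why `mask_data` inverts the binary ROI mask (1 = keep) before applying it. A small sketch of the same inversion:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([0, 1, 1, 0])    # 1 = keep, as in a NIfTI ROI mask

np_mask = np.ma.make_mask(mask)  # True where mask == 1
# Invert: masked_array hides True positions, so keep-voxels must be False.
masked = np.ma.masked_array(data, np.invert(np_mask))
print(masked.mean())  # 2.5 -- only the kept voxels contribute
```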
def mean_data(data, collapse=None):
"""Wrap np.ma.mean to average over multiple dimensions. May provide
dimensions by index or as a boolean sequence of len(data.shape).
If none provided will average over all.
Parameters
----------
data : masked array
collapse : tuple,list,array - ints or booleans
Return
------
means : data averaged over given or all dimensions"""
if collapse is None:
# no dimensions given, return one value
means = np.ma.mean(data)
elif isinstance(collapse, int) and collapse < len(data.shape):
# one dim given, average it
means = np.ma.mean(data, collapse)
elif len(data.shape) != len(collapse):
        if all(isinstance(c, int) for c in collapse):
            # dim by index, average them
            bad_axes = [c for c in collapse if c > len(data.shape) - 1]
            if bad_axes:
                # could let numpy throw the error, but this may save time
                msg = ('Data axis %d is out of bounds for array of dimension %d' %
                       (bad_axes[0], len(data.shape)))
                raise IndexError(msg)
means = data
for direction in sorted(collapse, reverse=True):
# big->little dim mean order for proper indices
means = np.ma.mean(means, direction)
else:
# dimensions not given by index, but improper
msg = ('boolean collapse must be the same length as data.shape'
'\nData: %d, Collapse: %d' %
(len(data.shape), len(collapse)))
raise (LookupError(msg))
else:
# dim by boolean, big->little dim mean order for proper indices
means = data
for direction in range(len(collapse), 0, -1):
if collapse[direction - 1]:
means = np.ma.mean(means, direction - 1)
return means
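The big-to-little axis ordering in `mean_data` matters: reducing the highest axis first keeps the remaining axis indices valid for the next reduction. A sketch of the same loop on a plain array, checked against NumPy's tuple-axis form:

```python
import numpy as np

data = np.arange(24, dtype=float).reshape(2, 3, 4)

# Collapse axes 0 and 2, highest axis first, so the index of axis 0
# is still correct after axis 2 has been averaged away.
means = data
for axis in sorted((0, 2), reverse=True):
    means = means.mean(axis)
print(means.shape)  # (3,)

# Equivalent single call for comparison:
assert np.allclose(means, data.mean(axis=(0, 2)))
```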
def mean_3d(data):
"""produce separate means for each slice direction"""
means = [None] * 3
for i in range(0, 3):
collapse = tuple(set((0, 1, 2)) - set((i,)))
means[i] = mean_data(data, collapse)
return means
tract2dir = {
'CCBody': 'R-L',
'Genu': 'R-L',
'Splenium': 'R-L',
'CST_L': 'I-S',
'CST_R': 'I-S',
'FOF_L': 'P-A',
'FOF_R': 'P-A',
'ILF_L': 'P-A',
'ILF_R': 'P-A',
'SLF_L': 'P-A',
'SLF_R': 'P-A',
'Cingulum_L': 'I-S',
'Cingulum_R': 'I-S'
}
dir2mean = {
'R-L': (0, 1, 1, 0),
'I-S': (1, 1, 0, 0),
'P-A': (1, 0, 1, 0)
}
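The two dictionaries compose: a tract name selects its slicing direction, and the direction selects the boolean collapse tuple handed to `mean_data` (1 = average that axis away, leaving the along-tract axis). A small lookup helper illustrating the composition (the helper name is illustrative, not part of the module):

```python
tract2dir = {'CST_L': 'I-S', 'Genu': 'R-L', 'SLF_L': 'P-A'}
dir2mean = {'R-L': (0, 1, 1, 0), 'I-S': (1, 1, 0, 0), 'P-A': (1, 0, 1, 0)}

def collapse_for(tract):
    """Boolean collapse tuple to hand to mean_data for this tract."""
    return dir2mean[tract2dir[tract]]

print(collapse_for('CST_L'))  # (1, 1, 0, 0): keep only the I-S axis
```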
def labels_from_filelist(filelist, prefix, group=None):
    with open(filelist,
is the Abstract base class defining the methods that
must be implemented by the concrete classes.
A user should extend this class with implementations that work on
specific queue systems.
"""
Error = QueueAdapterError
# the limits for certain parameters set on the cluster.
# currently hard coded, should be read at init
# the increase functions will not increase beyond this limits
# TODO: This constraint should be implemented by the partition, not by the QueueAdapter.
LIMITS = []
def __init__(self, qparams=None, setup=None, modules=None, shell_env=None, omp_env=None,
pre_run=None, post_run=None, mpi_runner=None):
"""
Args:
setup:
String or list of commands to execute during the initial setup.
modules:
String or list of modules to load before running the application.
shell_env:
Dictionary with the environment variables to export
before running the application.
omp_env:
Dictionary with the OpenMP variables.
pre_run:
String or list of commands to execute before launching the calculation.
post_run:
String or list of commands to execute once the calculation is completed.
mpi_runner:
Path to the MPI runner or `MpiRunner` instance. None if not used
"""
# Make defensive copies so that we can change the values at runtime.
self.qparams = qparams.copy() if qparams is not None else {}
self._verbatim = []
if is_string(setup): setup = [setup]
self.setup = setup[:] if setup is not None else []
self.omp_env = omp_env.copy() if omp_env is not None else {}
if is_string(modules): modules = [modules]
self.modules = modules[:] if modules is not None else []
self.shell_env = shell_env.copy() if shell_env is not None else {}
self.mpi_runner = mpi_runner
if not isinstance(mpi_runner, MpiRunner):
self.mpi_runner = MpiRunner(mpi_runner)
if is_string(pre_run): pre_run = [pre_run]
self.pre_run = pre_run[:] if pre_run is not None else []
if is_string(post_run): post_run = [post_run]
self.post_run = post_run[:] if post_run is not None else []
# Parse the template so that we know the list of supported options.
cls = self.__class__
if hasattr(cls, "QTEMPLATE"):
# Consistency check.
err_msg = ""
for param in self.qparams:
if param not in self.supported_qparams:
err_msg += "Unsupported QUEUE parameter name %s\n" % param
err_msg += "Supported are: \n"
for param_sup in self.supported_qparams:
err_msg += " %s \n" % param_sup
if err_msg:
raise ValueError(err_msg)
def __str__(self):
lines = [self.__class__.__name__]
app = lines.append
#lines.extend(["qparams:\n", str(self.qparams)])
if self.has_omp: app(str(self.omp_env))
return "\n".join(lines)
#def copy(self):
# return copy.copy(self)
def deepcopy(self):
return copy.deepcopy(self)
@property
def supported_qparams(self):
"""
Dictionary with the supported parameters that can be passed to the
queue manager (obtained by parsing QTEMPLATE).
"""
try:
return self._supported_qparams
except AttributeError:
import re
            self._supported_qparams = re.findall(r"\$\$\{(\w+)\}", self.QTEMPLATE)
return self._supported_qparams
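The placeholder scan above can be reproduced in isolation; note the pattern should be a raw string so the `\$` escapes are taken literally. A sketch with a made-up template:

```python
import re

# Hypothetical queue template in the $${name} placeholder style.
QTEMPLATE = """\
#!/bin/bash
#PBS -N $${job_name}
#PBS -l walltime=$${walltime}
"""

# Extract every supported parameter name from the template.
supported = re.findall(r"\$\$\{(\w+)\}", QTEMPLATE)
print(supported)  # ['job_name', 'walltime']
```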
@property
def has_mpi(self):
return self.has_mpirun
@property
#@deprecated(has_mpi)
def has_mpirun(self):
"""True if we are using a mpirunner"""
return bool(self.mpi_runner)
@property
def has_omp(self):
"""True if we are using OpenMP threads"""
return hasattr(self, "omp_env") and bool(getattr(self, "omp_env"))
@property
def tot_cores(self):
"""Total number of cores employed"""
return self.mpi_procs * self.omp_threads
@property
def omp_threads(self):
"""Number of OpenMP threads."""
if self.has_omp:
return self.omp_env["OMP_NUM_THREADS"]
else:
return 1
@property
def use_only_mpi(self):
"""True if only MPI is used."""
return self.has_mpi and not self.has_omp
@property
def use_only_omp(self):
"""True if only Openmp is used."""
return self.has_omp and not self.has_mpi
@property
def use_mpi_omp(self):
"""True if we are running in MPI+Openmp mode."""
return self.has_omp and self.has_mpi
@property
def run_info(self):
"""String with info on the run."""
return "MPI: %d, OMP: %d" % (self.mpi_procs, self.omp_threads)
@abc.abstractmethod
def set_omp_threads(self, omp_threads):
"""Set the number of OpenMP threads."""
@abc.abstractproperty
def mpi_procs(self):
"""Number of CPUs used for MPI."""
@abc.abstractmethod
def set_mpi_procs(self, mpi_procs):
"""Set the number of CPUs used for MPI."""
#@abc.abstractproperty
#def walltime(self):
# """Returns the walltime in seconds."""
#@abc.abstractmethod
#def set_walltime(self):
# """Set the walltime in seconds."""
#@abc.abstractproperty
#def mem_per_cpu(self):
# """The memory per CPU in Megabytes."""
@abc.abstractmethod
def set_mem_per_cpu(self, mem_mb):
"""Set the memory per CPU in Megabytes"""
#@property
#def tot_mem(self):
# """Total memory required by the job n Megabytes."""
# return self.mem_per_cpu * self.mpi_procs
@abc.abstractmethod
def cancel(self, job_id):
"""
Cancel the job.
Args:
job_id:
(in) Job identifier.
Returns:
Exit status.
"""
def add_verbatim(self, lines):
"""
Add a list of lines or just a string to the header.
No programmatic interface to change these options is provided
"""
if is_string(lines): lines = [lines]
self._verbatim.extend(lines)
def get_subs_dict(self, partition):
"""
Return substitution dict for replacements into the template
Subclasses may want to customize this method.
"""
# clean null values
return {k: v for k, v in self.qparams.items() if v is not None}
def _make_qheader(self, job_name, partition, qout_path, qerr_path):
"""Return a string with the options that are passed to the resource manager."""
# get substitution dict for replacements into the template
subs_dict = self.get_subs_dict(partition)
# Set job_name and the names for the stderr and stdout of the
# queue manager (note the use of the extensions .qout and .qerr
        # so that we can easily locate these files).
subs_dict['job_name'] = job_name.replace('/', '_')
subs_dict['_qout_path'] = qout_path
subs_dict['_qerr_path'] = qerr_path
qtemplate = QScriptTemplate(self.QTEMPLATE)
# might contain unused parameters as leftover $$.
unclean_template = qtemplate.safe_substitute(subs_dict)
# Remove lines with leftover $$.
clean_template = []
for line in unclean_template.split('\n'):
if '$$' not in line:
clean_template.append(line)
# Add verbatim lines
if self._verbatim:
clean_template.extend(self._verbatim)
return '\n'.join(clean_template)
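`QScriptTemplate` is not defined in this chunk; a plausible reading is a `string.Template` subclass with a `$$` delimiter, which is consistent with how `safe_substitute` and the leftover-`$$` line filter are used above. A sketch under that assumption:

```python
from string import Template

class QScriptTemplate(Template):
    # Assumption: mirrors the real class by using "$$" as the delimiter,
    # so placeholders look like $${name}.
    delimiter = "$$"

tmpl = QScriptTemplate("#PBS -N $${job_name}\n#PBS -q $${queue}\n")
# safe_substitute leaves unknown placeholders intact instead of raising.
unclean = tmpl.safe_substitute({"job_name": "relax"})

# Drop lines whose placeholders were never filled in.
clean = [ln for ln in unclean.split("\n") if "$$" not in ln]
print(clean)  # ['#PBS -N relax', '']
```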
def get_script_str(self, job_name, launch_dir, partition, executable, qout_path, qerr_path,
stdin=None, stdout=None, stderr=None):
"""
Returns a (multi-line) String representing the queue script, e.g. PBS script.
Uses the template_file along with internal parameters to create the script.
Args:
job_name:
Name of the job.
launch_dir:
(str) The directory the job will be launched in.
            partition:
                `Partition` object with information on the queue selected for submission.
            executable:
                String with the name of the executable to be executed.
            qout_path:
                Path of the Queue manager output file.
qerr_path:
Path of the Queue manager error file.
"""
        # PBS does not accept job_names longer than 15 chars; truncate to 14 to be safe.
if len(job_name) > 14 and isinstance(self, PbsProAdapter):
job_name = job_name[:14]
# Construct the header for the Queue Manager.
qheader = self._make_qheader(job_name, partition, qout_path, qerr_path)
# Add the bash section.
se = ScriptEditor()
if self.setup:
se.add_comment("Setup section")
se.add_lines(self.setup)
se.add_emptyline()
if self.modules:
se.add_comment("Load Modules")
se.add_line("module purge")
se.load_modules(self.modules)
se.add_emptyline()
if self.has_omp:
se.add_comment("OpenMp Environment")
se.declare_vars(self.omp_env)
se.add_emptyline()
if self.shell_env:
se.add_comment("Shell Environment")
se.declare_vars(self.shell_env)
se.add_emptyline()
# Cd to launch_dir
se.add_line("cd " + os.path.abspath(launch_dir))
if self.pre_run:
se.add_comment("Commands before execution")
se.add_lines(self.pre_run)
se.add_emptyline()
# Construct the string to run the executable with MPI and mpi_procs.
line = self.mpi_runner.string_to_run(executable, self.mpi_procs,
stdin=stdin, stdout=stdout, stderr=stderr)
se.add_line(line)
if self.post_run:
se.add_emptyline()
se.add_comment("Commands after execution")
se.add_lines(self.post_run)
shell_text = se.get_script_str()
return qheader + shell_text + "\n"
@abc.abstractmethod
def submit_to_queue(self, script_file):
"""
Submits the job to the queue, probably using subprocess or shutil
Args:
script_file:
(str) name of the script file to use (String)
Returns:
process, queue_id
"""
@abc.abstractmethod
def get_njobs_in_queue(self, username=None):
"""
        Returns the number of jobs in the queue, probably using subprocess or shutil to
        call a command like 'qstat'. Returns None when the number of jobs cannot be determined.
Args:
username: (str) the username of the jobs to count (default is to autodetect)
"""
#some method to fix problems
@abc.abstractmethod
def exclude_nodes(self, nodes):
"""
Method to exclude nodes in the calculation
"""
@abc.abstractmethod
def increase_mem(self, factor):
"""
Method to increase the amount of memory asked for, by factor.
"""
@abc.abstractmethod
def increase_time(self, factor):
"""
Method to increase the available wall time asked for, by factor.
"""
@abc.abstractmethod
def increase_cpus(self, factor):
"""
Method to increase the number of cpus asked for.
"""
####################
# Concrete classes #
####################
class ShellAdapter(AbstractQueueAdapter):
QTYPE = "shell"
QTEMPLATE = """\
#!/bin/bash
export MPI_PROCS=$${MPI_PROCS}
"""
@property
def mpi_procs(self):
"""Number of CPUs used for MPI."""
return self.qparams.get("MPI_PROCS", 1)
def set_mpi_procs(self, mpi_procs):
"""Set the number of CPUs used for MPI."""
self.qparams["MPI_PROCS"] = mpi_procs
def set_omp_threads(self, omp_threads):
"""Set the number of OpenMP threads."""
self.omp_env["OMP_NUM_THREADS"] = omp_threads
def set_mem_per_cpu(self, mem_mb):
"""mem_per_cpu is not available in ShellAdapter."""
def cancel(self, job_id):
return os.system("kill -9 %d" % job_id)
def submit_to_queue(self, script_file):
if not os.path.exists(script_file):
raise self.Error('Cannot find script file located at: {}'.format(script_file))
try:
# submit the job
<filename>obsolete/snp2counts_test.py
##########################################################################
# Gene prediction pipeline
#
# $Id: snp2counts_test.py 2855 2010-02-10 09:59:58Z andreas $
#
# Copyright (C) 2004 <NAME>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
##########################################################################
"""unit testing module for the Tree.py class."""
import sys
import os
import shutil
import optparse
import random
import math
import unittest
import tempfile
import snp2counts
import CGAT.GTF as GTF
import CGAT.Genomics as Genomics
import CGAT.IndexedFasta as IndexedFasta
class getCDSPositionTestPos(unittest.TestCase):
def setUp(self):
self.mExons = []
self.mSplitCodonsNext = {}
self.mSplitCodonsPrev = {}
self.mSpliceSize = 4
self.mExonSize = 100
self.mIntronSize = 900
self.strand = "+"
self.mNExons = 9
self.mOffset = 1000
length = 0
self.frame = 0
self.mIncrement = self.mIntronSize + self.mExonSize
seq = list("123" * int((self.mNExons * self.mExonSize) / 3))
exon_id = 0
start = self.mOffset
for x in range(self.mNExons):
e = GTF.Entry()
e.contig, e.strand, e.gene_id, e.transcript_id = "chr1", "+", "gene1", "trans1"
e.start, e.end = start, start + self.mExonSize
e.frame = (3 - (length % 3)) % 3
length += e.end - e.start
self.mExons.append(e)
if e.frame != 0:
for y in range(0, e.frame):
self.mSplitCodonsPrev[start + y] = start - self.mIntronSize
for y in range(0, 3 - e.frame):
self.mSplitCodonsNext[
start - self.mIntronSize - y - 1] = start
exon_id += 1
if exon_id < self.mNExons:
p = exon_id * self.mExonSize + self.mIntronSize * (exon_id - 1)
seq[p:p] = list("AG")
seq[p:p] = list("T" * (self.mIntronSize - 4))
seq[p:p] = list("GT")
start += self.mIncrement
# print str(e)
# print self.mSplitCodonsNext
# print self.mSplitCodonsPrev
seq[0:0] = "C" * self.mOffset
seq.append("G" * self.mOffset)
tmpfile = tempfile.NamedTemporaryFile()
tmpfile.close()
seq = "".join(seq)
self.mSequence = seq
self.contigSize = len(seq)
IndexedFasta.createDatabase(tmpfile.name, iter([("chr1", seq), ]))
self.mFasta = IndexedFasta.IndexedFasta(tmpfile.name)
def tearDown(self):
os.unlink(self.mFasta.getDatabaseName())
os.unlink(self.mFasta.getDatabaseName()[:-len(".fasta")] + ".idx")
def toRange(self, x, y):
'''convert snp to positive strand base.'''
if self.strand == "+":
return x, y
else:
return self.contigSize - y, self.contigSize - x
def testCodingSNPs(self):
length = 0
framed_length = (3 - self.frame) % 3
phase = (3 - self.frame) % 3
if self.strand == "+":
motif = "123"
else:
motif = "321"
for x in range(self.mOffset, self.contigSize - self.mOffset, self.mIncrement):
for y in range(0, self.mExonSize):
base = x + y
rangex, rangey = self.toRange(base, base + 1)
result = snp2counts.getCDSPosition(self.mExons, rangex, rangey,
lcontig=self.contigSize,
fasta=self.mFasta)
self.assertEqual(result.strand, self.strand)
self.assertEqual(result.cds_start, length)
self.assertEqual(result.cds_end, length + 1)
self.assertEqual(result.cds_phase, phase)
self.assertEqual(result.intron_start, None)
self.assertEqual(result.intron_end, None)
self.assertEqual(len(result.cds_seq), 3)
# print x, y, base, str(result)
if self.frame == 0:
self.assertEqual(result.cds_seq, motif)
self.assertEqual(result.cds_seq_start, framed_length % 3)
self.assertEqual(result.cds_seq_end, (framed_length % 3) + 1)
self.assertEqual(result.nc_seq, None)
self.assertEqual(result.nc_start, None)
self.assertEqual(result.nc_end, None)
if base in self.mSplitCodonsPrev:
self.assertEqual(
result.prev_exon_end, self.mSplitCodonsPrev[base])
else:
self.assertEqual(result.prev_exon_end, None)
if base in self.mSplitCodonsNext:
self.assertEqual(
result.next_exon_start, self.mSplitCodonsNext[base])
else:
self.assertEqual(result.next_exon_start, None)
length += 1
framed_length += 1
phase += 1
if phase >= 3:
phase = 0
def testIntronsSNPs(self):
length = 0
t = 0
exon_id = 0
for x in range(self.mOffset, self.contigSize - self.mIncrement - self.mOffset, self.mIncrement):
# exons
for y in range(0, self.mExonSize):
base = x + y
base_x, base_y = self.toRange(base, base + 1)
result = snp2counts.getCDSPosition(
self.mExons, base_x, base_y, lcontig=self.contigSize, fasta=self.mFasta)
self.assertEqual(result.strand, self.strand)
self.assertEqual(result.cds_start, t)
self.assertEqual(result.cds_end, t + 1)
self.assertEqual(result.intron_start, None)
self.assertEqual(result.intron_end, None)
self.assertEqual(len(result.cds_seq) % 3, 0)
self.assertEqual(result.nc_seq, None)
self.assertEqual(result.nc_start, None)
self.assertEqual(result.nc_end, None)
self.assertEqual(result.exon_id, exon_id)
self.assertEqual(result.intron_id, None)
t += 1
exon_id += 1
# introns
for y in range(self.mExonSize, self.mExonSize + self.mIntronSize):
base = x + y
base_x, base_y = self.toRange(base, base + 1)
result = snp2counts.getCDSPosition(
self.mExons, base_x, base_y, lcontig=self.contigSize, fasta=self.mFasta)
self.assertEqual(result.strand, self.strand)
self.assertEqual(result.cds_start, None)
self.assertEqual(result.cds_end, None)
self.assertEqual(result.cds_phase, None)
self.assertEqual(result.intron_start, x + self.mExonSize)
self.assertEqual(
result.intron_end, x + self.mIntronSize + self.mExonSize)
self.assertEqual(result.cds_seq, None)
self.assertEqual(result.cds_seq_start, None)
self.assertEqual(result.cds_seq_end, None)
self.assertEqual(len(result.nc_seq), 1)
                self.assertTrue(result.nc_seq not in "abc")
self.assertEqual(result.nc_start, base)
self.assertEqual(result.nc_end, base + 1)
self.assertEqual(result.exon_id, exon_id)
self.assertEqual(result.intron_id, exon_id - 1)
def testIndels(self):
'''test with segments of size 5'''
size = 5
length = 0
framed_length = (3 - self.frame) % 3
phase = (3 - self.frame) % 3
if self.strand == "+":
motif = "123"
else:
motif = "321"
for x in range(self.mOffset, self.contigSize - self.mIncrement - self.mOffset, self.mIncrement):
for y in range(-2 * size, self.mExonSize + 2 * size):
base = x + y
if base < self.mOffset:
continue
base_x, base_y = self.toRange(base, base + size)
result = snp2counts.getCDSPosition(
self.mExons, base_x, base_y, lcontig=self.contigSize, fasta=self.mFasta)
if -size < y < self.mExonSize:
# overlap with coding sequence
self.assertEqual(len(result.cds_seq) % 3, 0)
self.assertEqual(result.cds_start, length)
if y < 0:
self.assertEqual(result.cds_end, length + size + y)
else:
self.assertEqual(
result.cds_end, length + min(size, self.mExonSize - y))
self.assertEqual(result.cds_phase, phase)
self.assertEqual(result.strand, self.strand)
ncodons = int(
math.ceil((result.cds_phase + result.cds_end - result.cds_start) / 3.0))
if self.frame == 0:
self.assertEqual(result.cds_seq, motif * ncodons)
self.assertEqual(result.cds_seq_start, framed_length % 3)
self.assertEqual(
result.cds_seq_end, framed_length % 3 + min(size, size + y, self.mExonSize - y))
if result.nc_end is not None:
self.assertEqual(
result.cds_end - result.cds_start + (result.nc_end - result.nc_start), size)
self.assertEqual(
len(result.nc_seq), (result.nc_end - result.nc_start))
else:
self.assertEqual(result.cds_start, None)
self.assertEqual(result.cds_end, None)
self.assertEqual(result.cds_phase, None)
if y > self.mExonSize - size:
self.assertEqual(result.intron_start, x + self.mExonSize)
self.assertEqual(
result.intron_end, x + self.mIntronSize + self.mExonSize)
elif y < 0:
self.assertEqual(result.intron_start, x - self.mIntronSize)
self.assertEqual(result.intron_end, x)
if 0 <= y < self.mExonSize:
length += 1
framed_length += 1
phase += 1
if phase >= 3:
phase = 0
class getCDSPositionTestNeg(getCDSPositionTestPos):
def setUp(self):
getCDSPositionTestPos.setUp(self)
for x in self.mExons:
x.start, x.end = self.contigSize - x.end, self.contigSize - x.start
x.strand = "-"
# frame remains
self.mExons.reverse()
self.strand = "-"
class getCDSPositionTestWithStartingFrame2(getCDSPositionTestPos):
'''test with a transcript not starting at frame 0, but at frame 2.'''
def setUp(self):
getCDSPositionTestPos.setUp(self)
self.mSplitCodonsNext = {}
self.mSplitCodonsPrev = {}
start = self.mOffset
l = 1
for exon_id, e in enumerate(self.mExons):
e.frame = (3 - l % 3) % 3
l += e.end - e.start
if e.frame != 0:
if exon_id > 0:
for y in range(0, e.frame):
self.mSplitCodonsPrev[
start + y] = start - self.mIntronSize
if exon_id < self.mNExons - 1:
for y in range(0, 3 - e.frame):
self.mSplitCodonsNext[
start - self.mIntronSize - y - 1] = start
start += self.mIncrement
self.frame = self.mExons[0].frame
# for e in self.mExons:
# print str(e)
# print self.mSplitCodonsPrev
# print self.mSplitCodonsNext
class iterateOverFrames(unittest.TestCase):
def setUp(self):
self.seq = list("AAA" * 20)
self.length = len(self.seq)
        self.ncodons = self.length // 3
def merge(self, result):
n = []
last = result[0]
for this in result[1:]:
if last[0] == this[0]:
last[-1] = this[-1]
else:
n.append(tuple(last))
last = this
n.append(tuple(last))
return n
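`merge` collapses adjacent `[flag, start, end]` runs that share the same flag by extending the end of the current run. A standalone sketch of the same logic (renamed so it does not clash with the test-class method):

```python
def merge_runs(result):
    """Collapse adjacent [flag, start, end] runs with the same flag."""
    merged = []
    last = list(result[0])
    for this in result[1:]:
        if last[0] == this[0]:
            last[-1] = this[-1]       # extend the current run
        else:
            merged.append(tuple(last))
            last = list(this)
    merged.append(tuple(last))
    return merged

runs = [[True, 0, 3], [True, 3, 9], [False, 9, 12], [False, 12, 15]]
print(merge_runs(runs))  # [(True, 0, 9), (False, 9, 15)]
```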
def testDeletion(self):
'''test single deletion.'''
for l in range(1, 7):
for x in range(0, len(self.seq)):
s = list(self.seq)
todelete = min(l, self.length - x)
for y in range(x, x + todelete):
s[y] = ""
ncodons = self.ncodons - todelete // 3
i = list(snp2counts.iterateOverFrames(s))
codon_start = (x // 3) * 3
codon_end = min(self.length, x + l + (3 - (x + l) % 3) % 3)
result = []
if codon_start > 0:
result.append([True, 0, codon_start])
if todelete % 3 == 0:
if x % 3 != 0:
result.append([False, codon_start, codon_end])
if codon_end < self.length:
result.append([True, codon_end, self.length])
else:
result.append([True, codon_start, self.length])
else:
o = codon_start
if todelete > 3 and x % 3 == 0:
o = codon_start + (todelete // 3) * 3
result.append([True, codon_start, o])
result.append([False, o, codon_end])
result.append([False, codon_end, self.length])
result = self.merge(result)
self.assertEqual(i, result)
def testInsertion(self):
'''test single insertion.'''
for l in range(1, 7):
for x in range(len(self.seq)):
s = list(self.seq)
s[x] = "A" * l + s[x]
i = list(snp2counts.iterateOverFrames(s))
result = []
codon_start = (x // 3) * 3
if codon_start > 0:
result.append([True, 0, codon_start])
if l % 3 == 0:
# server/miscellaneous.py (from dewancse/SMT-PMR)
import requests
from libcellml import *
import lxml.etree as ET
# pre-generated model recipe in JSON format
model_recipe = [
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P26433",
"med_pr_text": "sodium/hydrogen exchanger 3 (rat)",
"med_pr_text_syn": "NHE3",
"model_entity": "weinstein_1995.cellml#NHE3.J_NHE3_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/PR_P26433",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "",
"source_fma3": "",
"variable_text": "J_NHE3_Na",
"variable_text2": "flux",
"variable_text3": "flux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q9ET37",
"med_pr_text": "low affinity sodium-glucose cotransporter (mouse)",
"med_pr_text_syn": "Q9ET37",
"model_entity": "mackenzie_1996-mouse-baso.cellml#NBC_current.J_Na",
"model_entity2": "mackenzie_1996-mouse-baso.cellml#NBC_current.J_Na",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/PR_Q9ET37",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "Na+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma3": "",
"variable_text": "J_Na",
"variable_text2": "J_Na",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P55018",
"med_pr_text": "solute carrier family 12 member 3 (rat)",
"med_pr_text_syn": "TSC",
"model_entity": "chang_fujita_b_1999.cellml#total_transepithelial_sodium_flux.J_mc_Na",
"model_entity2": "chang_fujita_b_1999.cellml#solute_concentrations.J_mc_Cl",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "Cl-",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma3": "",
"variable_text": "J_mc_Na",
"variable_text2": "J_mc_Cl",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q63633",
"med_pr_text": "solute carrier family 12 member 5 (rat)",
"med_pr_text_syn": "Q63633",
"model_entity": "chang_fujita_b_1999.cellml#solute_concentrations.J_mc_Cl",
"model_entity2": "chang_fujita_b_1999.cellml#total_transepithelial_potassium_flux.J_mc_K",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi3": "",
"solute_text": "Cl-",
"solute_text2": "K+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma3": "",
"variable_text": "J_mc_Cl",
"variable_text2": "J_mc_K",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P37089",
"med_pr_text": "amiloride-sensitive sodium channel subunit alpha (rat)",
"med_pr_text_syn": "RENAC",
"model_entity": "chang_fujita_b_1999.cellml#mc_sodium_flux.G_mc_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Na+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_Na",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q06393",
"med_pr_text": "chloride channel protein ClC-Ka (rat)",
"med_pr_text_syn": "CLCNK1",
"model_entity": "chang_fujita_b_1999.cellml#mc_chloride_flux.G_mc_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Cl-",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_Cl",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84666",
"med_pr": "http://purl.obolibrary.org/obo/PR_P15387",
"med_pr_text": "potassium voltage-gated channel subfamily B member 1 (rat)",
"med_pr_text_syn": "P15387",
"model_entity": "chang_fujita_b_1999.cellml#mc_potassium_flux.G_mc_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "K+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_mc_K",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_P06685",
"med_pr_text": "sodium/potassium-transporting ATPase subunit alpha-1 (rat)",
"med_pr_text_syn": "P06685",
"model_entity": "chang_fujita_b_1999.cellml#solute_concentrations.J_sc_Na",
"model_entity2": "chang_fujita_b_1999.cellml#sc_potassium_flux.J_sc_K",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma3": "",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi3": "",
"solute_text": "Na+",
"solute_text2": "K+",
"solute_text3": "",
"source_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"source_fma2": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma3": "",
"variable_text": "J_sc_Na",
"variable_text2": "J_sc_K",
"variable_text3": ""
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q06393",
"med_pr_text": "chloride channel protein ClC-Ka (rat)",
"med_pr_text_syn": "CLCNK1",
"model_entity": "chang_fujita_b_1999.cellml#sc_chloride_flux.G_sc_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "Cl-",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_sc_Cl",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_84669",
"med_pr": "http://purl.obolibrary.org/obo/PR_P15387",
"med_pr_text": "potassium voltage-gated channel subfamily B member 1 (rat)",
"med_pr_text_syn": "P15387",
"model_entity": "chang_fujita_b_1999.cellml#sc_potassium_flux.G_sc_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_66836",
"sink_fma2": "channel",
"sink_fma3": "channel",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "channel",
"solute_chebi3": "channel",
"solute_text": "K+",
"solute_text2": "channel",
"solute_text3": "channel",
"source_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"source_fma2": "channel",
"source_fma3": "channel",
"variable_text": "G_sc_K",
"variable_text2": "channel",
"variable_text3": "channel"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_Q9Z0S6",
"med_pr_text": "claudin-10 (mouse)",
"med_pr_text_syn": "CLDN10A",
"model_entity": "chang_fujita_b_1999.cellml#ms_sodium_flux.G_ms_Na",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "Na+",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_Na",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_O35054",
"med_pr_text": "claudin-4 (mouse)",
"med_pr_text_syn": "CPETR1",
"model_entity": "chang_fujita_b_1999.cellml#ms_chloride_flux.G_ms_Cl",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "Cl-",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_Cl",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
},
{
"med_fma": "http://purl.obolibrary.org/obo/FMA_67394",
"med_pr": "http://purl.obolibrary.org/obo/PR_F1LZ52",
"med_pr_text": "kelch-like protein 3 (rat)",
"med_pr_text_syn": "F1LZ52",
"model_entity": "chang_fujita_b_1999.cellml#ms_potassium_flux.G_ms_K",
"model_entity2": "",
"model_entity3": "",
"protein_name": "http://purl.obolibrary.org/obo/CL_0000066",
"sink_fma": "http://purl.obolibrary.org/obo/FMA_9673",
"sink_fma2": "diffusiveflux",
"sink_fma3": "diffusiveflux",
"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29103",
"solute_chebi2": "diffusiveflux",
"solute_chebi3": "diffusiveflux",
"solute_text": "K+",
"solute_text2": "diffusiveflux",
"solute_text3": "diffusiveflux",
"source_fma": "http://purl.obolibrary.org/obo/FMA_74550",
"source_fma2": "diffusiveflux",
"source_fma3": "diffusiveflux",
"variable_text": "G_ms_K",
"variable_text2": "diffusiveflux",
"variable_text3": "diffusiveflux"
}
]
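To illustrate how a recipe in this shape is typically consumed, a minimal sketch that selects entries by solute CHEBI URI. The two-entry `recipe` list here is hypothetical stand-in data with the same keys, not the full `model_recipe` above:

```python
def entries_for_solute(recipe, chebi_uri):
    """Return recipe entries that carry the given solute in any of the three slots."""
    keys = ("solute_chebi", "solute_chebi2", "solute_chebi3")
    return [e for e in recipe if any(e.get(k) == chebi_uri for k in keys)]

recipe = [
    {"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_29101",
     "solute_chebi2": "", "solute_chebi3": "", "variable_text": "J_NHE3_Na"},
    {"solute_chebi": "http://purl.obolibrary.org/obo/CHEBI_17996",
     "solute_chebi2": "", "solute_chebi3": "", "variable_text": "G_mc_Cl"},
]
na_entries = entries_for_solute(recipe, "http://purl.obolibrary.org/obo/CHEBI_29101")
print([e["variable_text"] for e in na_entries])  # → ['J_NHE3_Na']
```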
# sparql endpoint in PMR
sparqlendpoint = "https://models.physiomeproject.org/pmr2_virtuoso_search"
# workspace url where we have all models
workspaceURL = "https://models.physiomeproject.org/workspace/267/rawfile/HEAD/"
# reference URIs of anatomical locations
lumen_fma = "http://purl.obolibrary.org/obo/FMA_74550"
cytosol_fma = "http://purl.obolibrary.org/obo/FMA_66836"
interstitialfluid_fma = "http://purl.obolibrary.org/obo/FMA_9673"
# solutes dictionary to map URI to name
dict_solutes = [
{
"http://purl.obolibrary.org/obo/CHEBI_29101": "Na",
"http://purl.obolibrary.org/obo/CHEBI_17996": "Cl",
"http://purl.obolibrary.org/obo/CHEBI_29103": "K"
}
]
# get channels and diffusive fluxes equations from source model
def getChannelsEquation(str_channel, v, compartment, importedModel, m, epithelial):
# string index of "id=" and "</math>" inside MathML
str_index = []
# save here required variables to make channels and diffusive fluxes equations
# e.g. ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m']
list_of_variables = []
# remove C_c_Na from here ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m'] and save in this variable
list_of_variables_2 = []
for i in range(len(str_channel)):
if "id=" in str_channel[i]:
str_index.append(i) # insert variables equation
elif "</math>" in str_channel[i]:
str_index.append(i) # insert math index to note end of math
# print(str_index)
for i in range(len(str_index)):
flag = False
if i + 1 == len(str_index):
break
else:
my_str = str_channel[str_index[i]:str_index[i + 1] - 1]
for i in range(len(my_str)):
if "<eq/>" in my_str[i] and "<ci>" + v + "</ci>" in my_str[i + 1]:
channel_str = ""
for s in my_str:
channel_str += s
channel_str = "<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\n" + channel_str + "</apply>\n</math>\n"
# check whether this channel already exists in this component;
# we do this because G_mc_Na, etc. appear twice in the epithelial component
mth = compartment.math()
if channel_str not in mth:
compartment.appendMath(channel_str)
# extract variables from this math string
for i in range(len(my_str)):
if "<ci>" in my_str[i]:
start_index = my_str[i].find("<ci>")
end_index = my_str[i].find("</ci>")
if my_str[i][start_index + 4:end_index] != v:
list_of_variables.append(my_str[i][start_index + 4:end_index])
flag = True
break
if flag == True:
break
# remove variables if already exists in the component
for i in range(compartment.variableCount()):
var = compartment.variable(i)
# we will remove C_c_Na from the list below after constructing lumen, cytosol and interstitial fluid component
# e.g. ['C_c_Na', 'RT', 'psi_c', 'P_mc_Na', 'F', 'psi_m']
if var.name() in list_of_variables:
list_of_variables.remove(var.name())
# unique elements in the list
list_of_variables = list(set(list_of_variables))
# save all components including a parent component into a mycomponent variable
# for now, we have considered 3 encapsulation stages: grandparent -> parent -> children
mycomponent = Component()
for i in range(importedModel.componentCount()):
c = importedModel.component(i)
mycomponent.addComponent(c)
for j in range(c.componentCount()):
c2 = c.component(j)
mycomponent.addComponent(c2)
for k in range(c2.componentCount()):
c3 = c2.component(k)
mycomponent.addComponent(c3)
for item in list_of_variables:
# iterate over components
for i in range(mycomponent.componentCount()):
c = mycomponent.component(i)
# variables within a component
for j in range(c.variableCount()):
v = c.variable(j)
if v.name() == item and v.initialValue() != "":
# add units
addUnitsModel(v.units(), importedModel, m)
if epithelial.variable(v.name()) == None:
v_epithelial = Variable()
# insert this variable in the epithelial component
createComponent(v_epithelial, v.name(), v.units(), "public_and_private",
v.initialValue(), epithelial, v)
if compartment.variable(v.name()) == None:
v_compartment = Variable()
# insert this variable in the lumen/cytosol/interstitial fluid component
createComponent(v_compartment, v.name(), v.units(), "public", None, compartment, v)
# user-defined function to append a substring of ODE based equations
def subMath(sign, vFlux):
return " <apply>\n" \
" <" + sign + "/>\n" + \
" <ci>" + vFlux + "</ci>\n" + \
" </apply>"
# user-defined function to define ODE based equations
def fullMath(vConcentration, subMath):
return "<math xmlns=\"http://www.w3.org/1998/Math/MathML\">\n" \
" <apply id=" + '"' + vConcentration + "_diff_eq" + '"' + ">\n" + \
" <eq/>\n" \
" <apply>\n" \
" <diff/>\n" \
" <bvar>\n" \
" <ci>time</ci>\n" \
" </bvar>\n" \
" <ci>" + vConcentration + "</ci>\n" + \
" </apply>\n" \
" <apply>\n" \
" <plus/>\n" \
"" + subMath + "\n" + \
" </apply>\n" \
" </apply>\n" \
"</math>\n"
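As a sanity check of the MathML that `subMath` and `fullMath` assemble, a self-contained sketch: the helpers are re-declared locally (with illustrative flux and concentration names) so the example runs on its own, and the result is parsed to confirm it is well-formed XML.

```python
import xml.etree.ElementTree as ET

def sub_math(sign, v_flux):
    # one signed flux term, e.g. <apply><minus/><ci>J_sc_Na</ci></apply>
    return ("  <apply>\n"
            "    <" + sign + "/>\n"
            "    <ci>" + v_flux + "</ci>\n"
            "  </apply>")

def ode_math(v_conc, terms):
    # d(v_conc)/dt = sum of the signed flux terms
    return ('<math xmlns="http://www.w3.org/1998/Math/MathML">\n'
            '  <apply id="' + v_conc + '_diff_eq">\n'
            "    <eq/>\n"
            "    <apply><diff/><bvar><ci>time</ci></bvar><ci>" + v_conc + "</ci></apply>\n"
            "    <apply>\n"
            "      <plus/>\n" + "\n".join(terms) + "\n"
            "    </apply>\n"
            "  </apply>\n"
            "</math>\n")

math_str = ode_math("C_c_Na", [sub_math("plus", "J_mc_Na"), sub_math("minus", "J_sc_Na")])
ET.fromstring(math_str)  # raises ParseError if the MathML is not well-formed
```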
# insert ODE equations for lumen, cytosol and interstitial | |
# raise NotImplementedError("fromstring() has been removed. "
#                           "Please call frombytes() instead.")
self.frombytes(mode, size, data, decoder_name, *args)
def convert(self, mode):
"converts an image to the given mode"
if self._mode.upper() == mode.upper():
return Image(self._instance.copy())
if not mode and self.mode == "P":
# determine default mode
if self.palette:
mode = self.palette.mode
else:
mode = "RGB"
if not mode or (mode == self.mode):
return Image(self._instance.copy())
return Image(self._convert(mode))
def _convert(self, mode, obj=None):
if obj is None:
obj = self._instance
flag = self._get_converting_flag(mode)
else:
orig_mode = self._get_mode(obj.shape, obj.dtype)
flag = self._get_converting_flag(mode, inst=orig_mode)
if flag == "EQUAL":
return obj.copy()
if mode == "1":
im_gray = cv2.cvtColor(obj, cv2.COLOR_BGR2GRAY)
thresh, converted = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
else:
converted = cv2.cvtColor(obj, flag)
return converted
def paste(self, img_color, box=None, mask=None):
"pastes either an image or a color to a region of interest defined in box with a mask"
if isinstance(img_color, Image): # pasting an image
_img_color = img_color._instance
if box is None:
box = (0, 0)
else:
if len(box) == 4:
if not(box[2]-box[0]==_img_color.shape[1] and box[3]-box[1]==_img_color.shape[0]):
raise ValueError("images do not match")
# convert modes
if len(img_color._instance.shape) == 3:
if img_color._instance.shape[2] != self._instance.shape[2] or img_color._instance.dtype != self._instance.dtype:
dest_mode = self._mode
_img_color = self._convert(dest_mode, obj=_img_color)
elif len(img_color._instance.shape) != len(self._instance.shape):
dest_mode = self._mode
_img_color = self._convert(dest_mode, obj=_img_color)
else: # pasting a colorbox
if box is None:
raise ValueError("cannot determine region size; use 4-item box")
img_dim = (box[3]-box[1]+1, box[2]-box[0]+1)
channels, depth = self._get_channels_and_depth(self._mode)
colorbox = np.zeros((img_dim[0], img_dim[1], channels), dtype=depth)
colorbox[:] = img_color
_img_color = colorbox.copy()
if mask is None:
self._instance = self._paste(self._instance, _img_color, box[0], box[1])
else:
# enlarge the image _img_color without resizing to the new_canvas
new_canvas = np.zeros(self._instance.shape, dtype=self._instance.dtype)
new_canvas = self._paste(new_canvas, _img_color, box[0], box[1])
if len(mask._instance.shape) == 3:
if mask._instance.shape[2] == 4: # RGBA
r, g, b, _mask = self.split(mask)
elif mask._instance.shape[2] == 1:
_mask = mask._instance.copy()
else:
_mask = mask._instance.copy()
if _mask.shape[:2] != new_canvas.shape[:2]:
_new_mask = np.zeros(self._instance.shape[:2], dtype=self._instance.dtype)
_new_mask = ~(self._paste(_new_mask, _mask, box[0], box[1]))
else:
_new_mask = ~_mask
self._instance = composite(self._instance, new_canvas, _new_mask, np_image=True)
def _paste(self, mother, child, x, y):
"Pastes the numpy image child into the numpy image mother at position (x, y)"
size = mother.shape
csize = child.shape
if y+csize[0]<0 or x+csize[1]<0 or y>size[0] or x>size[1]: return mother
sel = [int(y), int(x), csize[0], csize[1]]
csel = [0, 0, csize[0], csize[1]]
if y<0:
sel[0] = 0
sel[2] = csel[2] + y
csel[0] = -y
elif y+sel[2]>=size[0]:
sel[2] = int(size[0])
csel[2] = size[0]-y
else:
sel[2] = sel[0] + sel[2]
if x<0:
sel[1] = 0
sel[3] = csel[3] + x
csel[1] = -x
elif x+sel[3]>=size[1]:
sel[3] = int(size[1])
csel[3] = size[1]-x
else:
sel[3] = sel[1] + sel[3]
childpart = child[csel[0]:csel[2], csel[1]:csel[3]]
mother[sel[0]:sel[2], sel[1]:sel[3]] = childpart
return mother
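The `_paste` bookkeeping above can also be expressed with clipped slice windows; a minimal standalone sketch of the same clipped-paste idea (grayscale arrays, hypothetical data):

```python
import numpy as np

def paste_clipped(mother, child, x, y):
    """Paste `child` into `mother` at (x, y), clipping parts that fall outside."""
    mh, mw = mother.shape[:2]
    ch, cw = child.shape[:2]
    # destination window, clipped to the mother image
    y0, y1 = max(y, 0), min(y + ch, mh)
    x0, x1 = max(x, 0), min(x + cw, mw)
    if y0 >= y1 or x0 >= x1:
        return mother  # completely outside
    # matching source window in the child
    mother[y0:y1, x0:x1] = child[y0 - y:y1 - y, x0 - x:x1 - x]
    return mother

canvas = np.zeros((4, 4), dtype=np.uint8)
patch = np.full((2, 2), 9, dtype=np.uint8)
paste_clipped(canvas, patch, 3, 3)   # only the top-left pixel of patch lands
print(canvas[3, 3])  # → 9
```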
def _scaleTo8Bit(self, image, div, displayMin=None, displayMax=None):
if displayMin is None:
displayMin = np.min(image)
if displayMax is None:
displayMax = np.max(image)
np.clip(image, displayMin, displayMax, out=image)
image = image - displayMin
cf = 255. / (displayMax - displayMin)
imageOut = (cf*image).astype(np.uint8)
return imageOut
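The windowed 8-bit scaling in `_scaleTo8Bit` maps a display range linearly onto 0..255; a minimal sketch of the same mapping (float intermediate to avoid integer overflow; the 16-bit input is illustrative):

```python
import numpy as np

def scale_to_8bit(image, display_min=None, display_max=None):
    """Linearly map [display_min, display_max] onto [0, 255] as uint8."""
    img = image.astype(np.float64)
    if display_min is None:
        display_min = img.min()
    if display_max is None:
        display_max = img.max()
    np.clip(img, display_min, display_max, out=img)
    img -= display_min
    return (255.0 * img / (display_max - display_min)).astype(np.uint8)

raw = np.array([0, 32768, 65535], dtype=np.uint16)
print(scale_to_8bit(raw))  # → [  0 127 255]
```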
def _filter_kernel(self, fa):
kernel = np.array(fa[3], dtype=np.float32)/fa[1]
kernel = kernel.reshape(fa[0])
# print(kernel)
return kernel
def filter(self, filtermethod):
"Filters this image using the given filter."
if filtermethod.name == "GaussianBlur":
return GaussianBlur().filter(self)
fa = filtermethod.filterargs
if filtermethod == EMBOSS:
_im = self._instance.astype(np.float32)
_im = cv2.filter2D(_im, -1, self._filter_kernel(fa))
_im = self._scaleTo8Bit(_im, fa[2])
elif filtermethod == CONTOUR:
_im = cv2.filter2D(self._instance, -1, self._filter_kernel(fa))
_im = ~_im
else:
_im = cv2.filter2D(self._instance, -1, self._filter_kernel(fa))
return Image(_im)
def getband(self, channel):
channels, depth = self._get_channels_and_depth(self._mode)
if channels == 1:
return self._instance.copy()
else:
chs = self.split()
return chs[channel]
def getbands(self):
return tuple([i for i in self._mode])
def getbbox(self):
"""
Calculates the bounding box of the non-zero regions in the
image.
:returns: The bounding box is returned as a 4-tuple defining the
left, upper, right, and lower pixel coordinate. See
:ref:`coordinate-system`. If the image is completely empty, this
method returns None.
"""
img_ = (self._instance > 0)
rows = np.any(img_, axis=1)
cols = np.any(img_, axis=0)
if not rows.any(): return None
rmin, rmax = np.argmax(rows), img_.shape[0] - 1 - np.argmax(np.flipud(rows))
cmin, cmax = np.argmax(cols), img_.shape[1] - 1 - np.argmax(np.flipud(cols))
return (cmin, rmin, cmax + 1, rmax + 1)
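The bounding-box reduction can be checked with a tiny standalone sketch, returning a PIL-style `(left, upper, right, lower)` box with exclusive right/lower edges (grayscale array assumed, hypothetical data):

```python
import numpy as np

def bbox_nonzero(arr):
    """PIL-style (left, upper, right, lower) box of non-zero pixels, or None."""
    rows = np.any(arr, axis=1)
    cols = np.any(arr, axis=0)
    if not rows.any():
        return None
    rmin, rmax = np.where(rows)[0][[0, -1]]
    cmin, cmax = np.where(cols)[0][[0, -1]]
    return (int(cmin), int(rmin), int(cmax) + 1, int(rmax) + 1)

img = np.zeros((5, 6), dtype=np.uint8)
img[1:3, 2:5] = 255
print(bbox_nonzero(img))  # → (2, 1, 5, 3)
```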
def _getcolors(self):
channels, depth = self._get_channels_and_depth(self._mode)
if channels == 1:
img = self._instance.copy()
y = img.shape[0]
x = img.shape[1]
flattened = img.reshape((x*y, 1))
uni, counts = np.unique(flattened, return_counts=True)
else:
if channels == 4:
r ,g, b, a = self.split()
colorband = (r, g, b)
img = merge("RGB", colorband, image=True)
else: # channels == 3
img = self._instance.copy()
y = img.shape[0]
x = img.shape[1]
flattened = img.reshape((x*y, 3))
uni, counts = np.unique(flattened, axis=0, return_counts=True)
return uni, counts
def getcolors(self, maxcolors=256):
"""
Returns a list of colors used in this image.
:param maxcolors: Maximum number of colors. If this number is
exceeded, this method returns None. The default limit is
256 colors.
:returns: An unsorted list of (count, pixel) values.
"""
if self._mode in ("1", "L", "P"):
h = np.bincount(self._instance.ravel(), minlength=256)
out = []
for i in range(256):
if h[i]:
out.append((h[i], i))
if len(out) > maxcolors:
return None
return out
uni, counts = self._getcolors()
if len(uni) > maxcolors: return None
colors = []
for l in range(len(counts)):
colors.append((counts[l], uni[l]))
return colors
def getdata(self, band=None):
channels, depth = self._get_channels_and_depth(self._mode)
flattened = self._instance.reshape((self.size[0]*self.size[1], channels))
return flattened
def getextrema(self):
return (np.min(self._instance), np.max(self._instance))
def getim(self):
return self._instance
def getpalette(self):
uni, counts = self._getcolors()
colors = list(np.ravel(uni))
return colors
def getpixel(self, xytup):
return self._instance[xytup[1], xytup[0]]
def histogram(self, mask=None, extrema=None):
"""
Returns a histogram for the image. The histogram is returned as
a list of pixel counts, one for each pixel value in the source
image. If the image has more than one band, the histograms for
all bands are concatenated (for example, the histogram for an
"RGB" image contains 768 values).
A bilevel image (mode "1") is treated as a greyscale ("L") image
by this method.
If a mask is provided, the method returns a histogram for those
parts of the image where the mask image is non-zero. The mask
image must have the same size as the image, and be either a
bi-level image (mode "1") or a greyscale image ("L").
:param mask: An optional mask.
:returns: A list containing pixel counts.
"""
uni, counts = self._getcolors()
return counts.tolist()
def offset(self, xoffset, yoffset=None):
raise NotImplementedError("offset() has been removed. "
"Please call ImageChops.offset() instead.")
def point(self, lut, mode=None):
"Map image through lookup table"
raise NotImplementedError("point() has been not implemented in this library. ")
def putpixel(self, xytup, color):
self._instance[xytup[1], xytup[0]] = color
def putalpha(self, alpha):
"""
Adds or replaces the alpha layer in this image. If the image
does not have an alpha layer, it's converted to "LA" or "RGBA".
The new layer must be either "L" or "1".
:param alpha: The new alpha layer. This can either be an "L" or "1"
image having the same size as this image, or an integer or
other color value.
"""
channels, depth = self._get_channels_and_depth(self._mode)
if isinstance(alpha, np.ndarray):
paste_image = True
else:
paste_image = False
if channels==4:
r, g, b, a = self.split()
if not paste_image:
a[:] = alpha
else:
a = alpha.copy()
colorband = (r, g, b, a)
self._instance = merge("RGBA", colorband, image=True)
elif channels == 3:
if not paste_image:
sh = self._instance.shape
sh = (sh[0], sh[1], 1)
a = np.zeros(sh, dtype=depth)
a[:] = alpha
else:
a = alpha.copy()
r, g, b = self.split()
colorband = (r, g, b, a)
self._instance = merge("RGBA", colorband, image=True)
elif channels <= 2:  # "L" or "LA"
if not paste_image:
sh = self._instance.shape
sh = (sh[0], sh[1], 1)
a = np.zeros(sh, dtype=depth)
a[:] = alpha
else:
a = alpha.copy()
if channels == 2:
l, a_old = self.split()
colorband = (l, a)
else:
colorband = (self._instance, a)
self._instance = merge("LA", colorband, image=True)
def putdata(self, dat, scale=1.0, offset=0.0):
"""
Copies pixel data to this image. This method copies data from a
sequence object into the image, starting at the upper left
corner (0, 0), and continuing until either the image | |
3),('wei', 2),('jia', 1),('ri', 4),
('gu', 4),('lei', 3),('xiao', 1),('xiao', 1),('lu', 2),('di', 2),('qiu', 1),
]),
(7,[('xie', 4),('gong', 1),('zui', 4),('xiao', 3),('pian', 1),('lian', 2),('nv', 3),
('zi', 4),('jia', 4),('qian', 2),('lou', 2),('bai', 3),('shi', 4),('guai', 1),
('gu', 4),('wo', 3),('wu', 2),('yi', 1),('sou', 1),('jin', 4),('qie', 4),
('ni', 2),('ta', 1),('gu', 1),('jiu', 3),('ba', 2),('jin', 1),('chai', 1),
('ye', 3),('shu', 1),('chong', 1),('shan', 4),('gan', 1),('chang', 2),('huo', 4),
('luo', 4),('ye', 4),('tian', 1),('xin', 1),('yang', 2),('gu', 3),('huai', 2),
('jin', 1),('ri', 4),('feng', 4),('qian', 2),('guo', 4),('shi', 2),('wan', 4),
('yu', 3),('jun', 1),('ying', 2),('dian', 4),('fu', 4),('ying', 2),('zhai', 1),
]),
(7,[('xi', 1),('ri', 4),('xi', 4),('yan', 2),('shen', 1),('hou', 4),('shi', 4),
('jin', 1),('zhao', 1),('dou', 1),('dao', 4),('yan', 3),('qian', 2),('lai', 2),
('yi', 1),('shang', 5),('yi', 3),('shi', 1),('xing', 2),('kan', 4),('jin', 4),
('zhen', 1),('xian', 4),('you', 2),('cun', 2),('wei', 4),('ren', 3),('kai', 1),
('shang', 4),('xiang', 3),('jiu', 4),('qing', 2),('lian', 2),('bi', 4),('pu', 2),
('ye', 3),('ceng', 2),('yin', 1),('meng', 4),('song', 4),('qian', 2),('cai', 2),
('cheng', 2),('zhi', 1),('ci', 3),('hen', 4),('ren', 2),('ren', 2),('you', 3),
('pin', 2),('jian', 4),('fu', 1),('qi', 1),('bai', 3),('shi', 4),('ai', 1),
]),
(7,[('xian', 2),('zuo', 4),('bei', 1),('jun', 1),('yi', 4),('zi', 4),('bei', 1),
('bai', 3),('nian', 2),('dou', 1),('shi', 4),('ji', 3),('duo', 1),('shi', 2),
('deng', 4),('you', 1),('wu', 2),('zi', 3),('xun', 2),('zhi', 1),('ming', 4),
('pan', 1),('yue', 4),('dao', 4),('wang', 2),('you', 2),('fei', 4),('ci', 2),
('tong',2), ('xue',2), ('yao',3), ('ming',2), ('he',2), ('suo',3), ('wang',4),
('ta',1), ('sheng',1), ('yuan',2), ('hui',4), ('geng',4), ('nan',2), ('qi',1),
('wei', 2),('jiang', 1),('zhong', 1),('ye', 4),('chang', 2),('kai', 1),('yan', 3),
('bao', 4),('da', 2),('ping', 2),('sheng', 1),('wei', 4),('zhan', 3),('mei', 2)
]),
(7,[('shi', 2),('nan', 2),('nian', 2),('huang', 1),('shi', 4),('ye', 4),('kong', 1),
('di', 4),('xiong', 1),('ji', 1),('lv', 3),('ge', 4),('xi', 1),('dong', 1),
('tian', 2),('yuan', 2),('liao', 2),('luo', 4),('gan', 1),('ge', 1),('hou', 4),
('gu', 3),('rou', 4),('liu', 2),('li', 2),('dao', 4),('lu', 4),('zhong', 1),
('diao', 4),('ying', 3),('fen', 1),('wei', 2),('qian', 1),('li', 3),('yan', 4),
('ci', 2),('gen', 1),('san', 4),('zuo', 4),('jiu', 3),('qiu', 1),('peng', 2),
('gong', 4),('kan', 4),('ming', 2),('yue', 4),('ying', 1),('chui', 2),('lei', 4),
('yi', 2),('ye', 4),('xiang', 1),('xin', 1),('wu', 3),('chu', 4),('tong', 2),
]),
(7,[('jin', 3),('se', 4),('wu', 2),('duan', 1),('wu', 3),('shi', 2),('xian', 2),
('yi', 4),('xian', 2),('yi', 2),('zhu', 4),('si', 1),('hua', 2),('nian', 2),
('zhuang', 1),('sheng', 1),('xiao', 3),('meng', 4),('mi', 2),('hu', 2),('die', 2),
('wang', 4),('di', 4),('chun', 1),('xin', 1),('tuo', 1),('du', 4),('juan', 1),
('cang', 1),('hai', 3),('yue', 4),('ming', 2),('zhu', 1),('you', 3),('lei', 4),
('lan', 2),('tian', 2),('ri', 4),('nuan', 3),('yu', 4),('sheng', 1),('yan', 1),
('ci', 3),('qing', 2),('ke', 3),('dai', 4),('cheng', 2),('zhui', 1),('yi', 4),
('zhi', 3),('shi', 4),('dang', 1),('shi', 2),('yi', 2),('wang', 3),('ran', 2),
]),
(7,[('zuo', 2),('ye', 4),('xing', 1),('chen', 2),('zuo', 2),('ye', 4),('feng', 1),
('hua', 4),('lou', 2),('xi', 1),('pan', 4),('gui', 4),('tang', 2),('dong', 1),
('shen', 1),('wu', 2),('cai', 3),('feng', 4),('shuang', 1),('fei', 1),('yi', 4),
('xin', 1),('you', 3),('ling', 2),('xi', 1),('yi', 4),('dian', 3),('tong', 1),
('ge', 2),('zuo', 4),('song', 4),('gou', 1),('chun', 1),('jiu', 3),('nuan', 3),
('fen', 1),('cao', 2),('she', 4),('fu', 4),('la', 4),('deng', 1),('hong', 2),
('jie', 1),('yu', 2),('ting', 1),('gu', 3),('ying', 1),('guan', 1),('qu', 4),
('zou', 2),('ma', 3),('lan', 2),('tai', 2),('lei', 4),('zhuan', 3),('peng', 2),
]),
(7,[('zi', 3),('quan', 2),('gong', 1),('dian', 4),('suo', 3),('yan', 1),('xia', 2),
('yu', 4),('qu', 3),('wu', 2),('cheng', 2),('zuo', 4),('di', 4),('jia', 1),
('yu', 4),('xi', 3),('bu', 4),('yuan', 2),('gui', 1),('ri', 4),('jiao', 3),
('jin', 3),('fan', 1),('ying', 1),('shi', 4),('dao', 4),('tian', 1),('ya', 2),
('yu', 2),('jin', 1),('fu', 2),('cao', 3),('wu', 2),('ying', 2),('huo', 3),
('zhong', 1),('gu', 3),('chui', 2),('yang', 2),('you', 3),('mu', 4),('ya', 1),
('di', 4),('xia', 4),('ruo', 4),('feng', 2),('chen', 2),('hou', 4),('zhu', 3),
('qi', 3),('yi', 2),('chong', 2),('wen', 4),('hou', 4),('ting', 2),('hua', 1),
]),
(7,[('lai', 2),('shi', 4),('kong', 1),('yan', 2),('qu', 4),('jue', 2),('zong', 1),
('yue', 4),('xie', 2),('lou', 2),('shang', 4),('wu', 3),('geng', 1),('zhong', 1),
('meng', 4),('wei', 2),('yuan', 3),('bie', 2),('ti', 2),('nan', 2),('huan', 4),
('shu', 1),('bei', 4),('cui', 1),('cheng', 2),('mo', 4),('wei', 4),('nong', 2),
('la', 4),('zhao', 4),('ban', 4),('long', 2),('jin', 1),('fei', 3),('cui', 4),
('she', 4),('xun', 1),('wei', 1),('du', 4),('xiu', 4),('fu', 2),('rong', 2),
('liu', 2),('lang', 2),('yi', 3),('hen', 4),('peng', 2),('shan', 1),('yuan', 3),
('geng', 4),('ge', 2),('peng', 2),('shan', 1),('yi', 2),('wan', 4),('chong', 2),
]),
(7,[('sa', 4),('sa', 4),('dong', 1),('feng', 1),('xi', 4),('yu', 3),('lai', 2),
('fu', 2),('rong', 2),('tang', 2),('wai', 4),('you', 3),('qing', 1),('lei', 2),
('jin', 1),('chan', 2),('nie', 4),('suo', 3),('shao', 1),('xiang', 1),('ru', 4),
('yu', 4),('hu', 3),('qian', 1),('si', 1),('ji', 2),('jing', 3),('hui', 2),
('jia', 3),('shi', 4),('kui', 1),('lian', 2),('han', 2),('yuan', 4),('shao', 3),
('mi', 4),('fei', 1),('liu', 2),('zhen', 3),('wei', 4),('wang', 2),('cai', 2),
('chun', 1),('xin', 1),('mo', 4),('gong', 4),('hua', 1),('zheng', 1),('fa', 1),
('yi', 2),('cun', 4),('xiang', 1),('si', 1),('yi', 2),('cun', 4),('hui', 1),
]),
(7,[('yuan', 2),('niao', 3),('you', 2),('yi', 2),('wei', 4),('jian', 3),('shu', 1),
('feng', 1),('yun', 2),('chang', 2),('wei', 2),('hu', 4),('chu', 3),('xu', 1),
('tu', 2),('ling', 4),('shang', 4),('jiang', 4),('hui', 1),('shen', 2),('bi', 3),
('zhong', 1),('jian', 4),('jiang', 4),('wang', 2),('zou', 3),('chuan', 2),('che', 1),
('guan', 3),('yue', 4),('you', 3),('cai', 2),('yuan', 2),('bu', 4),('tian', 3),
('guan', 1),('zhang', 1),('wu', 2),('ming', 4),('yu', 4),('he', 2),('ru', 2),
('ta', 1),('nian', 2),('jin', 2),('li', 3),('jing', 1),('ci', 2),('miao', 4),
('liang', 2),('fu', 4),('yin', 2),('cheng', 2),('hen', 4),('you', 3),('yu', 2),
]),
(7,[('xiang', 1),('jian', 4),('shi', 2),('nan', 2),('bie', 2),('yi', 4),('nan', 2),
('dong', 1),('feng', 1),('wu', 2),('li', 4),('bai', 3),('hua', 1),('can', 2),
('chun', 1),('can', 2),('dao', 4),('si', 3),('si', 1),('fang', 1),('jin', 4),
('la', 4),('ju', 4),('cheng', 2),('hui', 1),('lei', 4),('shi', 3),('gan', 1),
('xiao', 3),('jing', 4),('dan', 4),('chou', 2),('yun', 2),('bin', 4),('gai', 3),
('ye', 4),('yin', 2),('ying', 1),('jue', 2),('yue', 4),('guang', 1),('han', 2),
('peng', 2),('shan', 1),('ci', 3),('qu', 4),('wu', 2),('duo', 1),('lu', 4),
('qing', 1),('niao', 3),('yin', 1),('qin', 2),('wei', 4),('tan', 4),('kan', 4),
]),
(7,[('chang', 4),('wo', 4),('xin', 1),('chun', 1),('bai', 2),('jia', 2),('yi', 1),
('bai', 2),('men', 2),('liao', 2),('luo', 4),('yi', 4),('duo', 1),('wei', 2),
('hong', 2),('lou', 2),('ge', 2),('yu', 3),('xiang', 1),('wang', 4),('leng', 3),
('zhu', 1),('bo', 2),('piao', 1),('deng', 1),('du', 2),('zi', 4),('gui', 1),
('yuan', 3),('lu', 4),('ying', 1),('bei', 1),('chun', 1),('wan', 2),('wan', 3),
('can', 2),('xiao', 1),('you', 2),('de', 2),('meng', 4),('yi', 1),('xi', 1),
('yu', 4),('dang', 1),('jian', 1),('zha', 2),('he', 2),('you', 2),('da', 2),
('wan', 4),('li', 3),('yun', 2),('luo', 2),('yi', 2),('yan', 4),('fei', 1),
]),
(7,[('feng', 4),('wei', 3),('xiang', 1),('luo', 2),('bo', 2),('ji', 3),('chong', 2),
('bi', 4),('wen', 2),('yuan', 2),('ding', 3),('ye', 4),('shen', 1),('feng', 2),
('shan', 4),('cai', 2),('yue', 4),('po', 4),('xiu', 1),('nan', 2),('yan', 3),
('che', 1),('zou', 3),('lei', 2),('sheng', 1),('yu', 3),('wei', 4),('tong', 1),
('ceng', 2),('shi', 4),('ji', 4),('liao', 2),('jin', 1),('jin', 4),('an', 4),
('duan', 4),('wu', 2),('xiao', 1),('xi', 4),('shi', 2),('liu', 4),('hong', 2),
('ban', 1),('zhui', 1),('zhi', 3),('xi', 4),('chui', 2),('yang', 2),('an', 4),
('he', 2),('chu', 4),('xi', 1),('nan', 2),('ren', 4),('hao', 3),('feng', 1),
]),
(7,[('chong', 2),('wei', 2),('shen', 1),('xia', 4),('mo', 4),('chou', 2),('tang', 2),
('wo', 4),('hou', 4),('qing', 1),('xiao', 1),('xi', 4),('xi', 4),('chang', 2),
('shen', 2),('nv', 3),('sheng', 1),('ya', 2),('yuan', 2),('shi', 4),('meng', 4),
('xiao', 3),('gu', 1),('ju', 1),('chu', 4),('ben', 3),('wu', 2),('lang', 2),
('feng', 1),('bo', 1),('bu', 2),('xin', 4),('ling', 2),('zhi', 1),('ruo', 4),
('yue', 4),('lu', 4),('shui', 2),('jiao', 4),('gui', 4),('ye', 4),('xiang', 1),
('zhi', 2),('dao', 4),('xiang', 1),('si', 1),('liao', 3),('wu', 2),('yi', 4),
('wei', 4),('fang', 2),('chou', 2),('chang', 4),('shi', 4),('qing', 1),('kuang', 2),
]),
(7,[('dan', 4),('ran', 2),('kong', 1),('shui', 3),('dui', 4),('xie', 2),('hui', 1),
('qu', 1),('dao', 3),('cang', 1),('mang', 2),('jie', 1),('cui', 4),('wei', 1),
('bo', 1),('shang', 4),('ma', 3),('si', 1),('kan', 4),('zhao', 4),('qu', 4),
('liu', 3),('bian', 1),('ren', 2),('xie', 1),('dai', 4),('chuan', 2),('gui', 1),
('shu', 4),('cong', 2),('sha', 1),('cao', 3),('qun', 2),('ou', 1),('san', 4),
('wan', 4),('qing', 3),('jiang', 1),('tian', 2),('yi', 2),('lu', 4),('fei', 1),
('shui', 2),('jie', 3),('cheng', 2),('zhou', 1),('xun', 2),('fan', 4),('li', 2),
('wu', 3),('hu', 2),('yan', 1),('shui', 3),('du', 2),('wang', 4),('ji', 1),
]),
(7,[('su', 1),('wu', 3),('hun', 2),('xiao', 1),('han', 4),('shi', 3),('qian', 2),
('gu', 3),('ci', 2),('gao', 1),('shu', 4),('liang', 3),('mang', 2),('ran', 2),
('yun', 2),('bian', 1),('yan', 4),('duan', 4),('hu', 2),('tian', 1),('yue', 4),
('long', 3),('shang', 4),('yang', 2),('gui', 1),('sai', 4),('cao', 3),('yan', 1),
('hui', 2),('ri', 4),('lou', 2),('tai', 2),('fei', 1),('jia', 3),('zhang', 4),
('qu', 4),('shi', 2),('guan', 4),('jian', 4),('shi', 4),('ding', 1),('nian', 2),
('mao', 4),('ling', 2),('bu', 2),('jian', 4),('feng', 1),('hou', 2),('yin', 4),
('kong', 1),('xiang', 4),('qiu', 1),('bo', 1),('ku', 1),('shi', 4),('chuan', 1),
]),
(7,[('shi', 2),('er', 4),('lou', 2),('zhong', 1),('jin', 4),('xiao', 3),('zhuang', 1),
('wang', 4),('xian', 1),('lou', 2),('shang', 4),('wang', 4),('jun', 1),('wang', 2),
('suo', 3),('xian', 2),('jin', 1),('shou', 4),('lian', 2),('huan', 2),('leng', 3),
('shui', 3),('di', 1),('tong', 2),('long', 2),('zhou', 4),('lou', 4),('chang', 2),
('yun', 2),('ji', 4),('ba', 4),('shu', 1),('huan', 2),('dui', 4),('jing', 4),
('luo', 2),('yi', 1),('yu', 4),('huan', 4),('geng', 4),('tian', 1),('xiang', 1),
('yao', 2),('kui', 1),('zheng', 4),('dian', 4),('lian', 2),('kai', 1),('chu', 4),
('pao', 2),('ku', 4),('gong', 1),('ren', 2),('sao', 3),('yu', 4),('chuang', 2),
]),
(7,[('peng', 2),('men', 2),('wei', 4),('shi', 2),('qi', 3),('luo', 2),('xiang', 1),
('ni', 3),('tuo', 1),('liang', 2),('mei', 2),('yi', 4),('zi', 4),('shang', 1),
('shui', 2),('ai', 4),('feng', 1),('liu', 2),('gao', 1),('ge', 2),('diao', 4),
('gong', 4),('lian', 2),('shi', 2),('shi', 4),('jian', 3),('shu', 1),('zhuang', 1),
('gan', 3),('jiang', 1),('shi', 2),('zhi', 3),('kua', 1),('zhen', 1),('qiao', 3),
('bu', 4),('ba', 3),('shuang', 1),('mei', 2),('dou', 3),('hua', 4),('chang', 2),
('ku', 3),('hen', 4),('nian', 2),('nian', 2),('ya', 1),('jin', 1),('xian', 4),
('wei', 2),('ta', 1),('ren', 2),('zuo', 4),('jia', 4),('yi', 1),('shang', 4),
]),
(7,[('lu', 2),('jia', 1),('shao', 4),('fu', 4),('yu', 4),('jin', 1),('tang', 2),
('hai', 3),('yan', 4),('shuang', 1),('qi', 1),('dai', 4),('mao', 4),('liang', 2),
('jiu', 3),('yue', 4),('han', 2),('zhen', 1),('cui', 1),('mu', 4),('ye', 4),
('shi', 2),('nian', 2),('zheng', 1),('shu', 4),('yi', 4),('liao', 2),('yang', 2),
('bai', 2),('lang', 2),('he', 2),('bei', 3),('yin', 1),('shu', 1),('duan', 4),
('dan', 1),('feng', 4),('cheng', 2),('nan', 2),('qiu', 1),('ye', 4),('chang', 2),
('shui', 2),('wei', 4),('han', 2),('chou', 2),('du', 2),('bu', 2),('jian', 4),
('geng', 4),('jiao', 4),('ming', 2),('yue', 4),('zhao', 4),('liu', 2),('huang', 2),
]),
(5,[('kong', 1),('shan', 1),('bu', 2),('jian', 4),('ren', 2),
('dan', 4),('wen', 2),('ren', 2),('yu', 2),('xiang', 3),
('fan', 3),('jing', 3),('ru', 4),('shen', 1),('lin', 2),
('fu', 4),('zhao', 4),('qing', 1),('tai', 2),('shang', 4),
]),
(5,[('du', 2),('zuo', 4),('you', 1),('huang', 2),('li', 3),
('tan', 2),('qin', 2),('fu', 4),('chang', 2),('xiao', 4),
('shen', 1),('lin', 2),('ren', 2),('bu', 4),('zhi', 1),
('ming', 2),('yue', 4),('lai', 2),('xiang', 1),('zhao', 4),
]),
(5,[('shan', 1),('zhong', 1),('xiang', 1),('song', 4),('ba', 4),
('ri', 4),('mu', 4),('yan', 3),('chai', 2),('fei', 1),
('chun', 1),('cao', 3),('ming', 2),('nian', 2),('lv', 4),
('wang', 2),('sun', 1),('gui', 1),('bu', 4),('gui', 1),
]),
(5,[('hong', 2),('dou', 4),('sheng', 1),('nan', 2),('guo', 2),
('chun', 1),('lai', 2),('fa', 1),('ji', 3),('zhi', 1),
('yuan', 4),('jun', 1),('duo', 1),('cai', 3),('xie', 2),
('ci', 3),('wu', 4),('zui', 4),('xiang', 1),('si', 1),
]),
(5,[('jun', 1),('zi', 4),('gu', 4),('xiang', 1),('lai', 2),
('ying', 1),('zhi', 1),('gu', 4),('xiang', 1),('shi', 4),
('lai', 2),('ri', 4),('qi', 3),('chuang', 1),('qian', 2),
('han', 2),('mei', 2),('zhu', 4),('hua', 1),('wei', 4),
]),
(5,[('gui', 1),('shan', 1),('shen', 1),('qian', 3),('qu', 4),
('xu', 1),('jin', 4),('qiu', 1),('he', 4),('mei', 3),
('mo', 4),('xue', 2),('wu', 3),('ling', 2),('ren', 2),
('zan', 4),('you', 2),('tao', 2),('yuan', 2),('li', 3),
]),
(5,[('zhong', 1),('nan', 2),('yin', 1),('ling', 3),('xiu', 4),
('ji', 1),('xue', 3),('fu', 2),('yun', 2),('duan', 1),
('lin', 2),('biao', 3),('ming', 2),('ji', 4),('se', 4),
('cheng', 2),('zhong', 1),('zeng', 1),('mu', 4),('han', 2),
]),
(5,[('yi', 2),('zhou', 1),('bo', 2),('yan', 1),('zhu', 3),
('ri', 4),('mu', 4),('ke', 4),('chou', 2),('xin', 1),
('ye', 3),('kuang', 4),('tian', 1),('di', 1),('shu', 4),
('jiang', 1),('qing', 1),('yue', 4),('jin', 4),('ren', 2),
]),
(5,[('chun', 1),('mian', 2),('bu', 4),('jue', 2),('xiao', 3),
('chu', 4),('chu', 4),('wen', 2),('ti', 2),('niao', 3),
('ye', 4),('lai', 2),('feng', 1),('yu', 3),('sheng', 1),
('hua', 1),('luo', 4),('zhi', 1),('duo', 1),('shao', 3),
]),
(5,[('chuang', 2),('qian', 2),('ming', 2),('yue', 4),('guang', 1),
('yi', 2),('shi', 4),('di', 4),('shang', 4),('shuang', 1),
('ju', 3),('tou', 2),('wang', 4),('ming', 2),('yue', 4),
('di', 1),('tou', 2),('si', 1),('gu', 4),('xiang', 1),
]),
(5,[('mei', 3),('ren', 2),('juan', 3),('zhu', 1),('lian', 2),
('shen', 1),('zuo', 4),('cu', 4),('e', 2),('mei', 2),
('dan', 4),('jian', 4),('lei', 4),('hen', 2),('shi', 1),
('bu', 4),('zhi', 1),('xin', 1),('hen', 4),('shui', 2),
]),
(5,[('gong', 1),('gai', 4),('san', 1),('fen', 1),('guo', 2),
('ming', 2),('cheng', 2),('ba', 1),('zhen', 4),('tu', 2),
('jiang', 1),('liu', 2),('shi', | |
# coding: tilde
from utils.helpers import COLOR_HEADER, COLOR_BLUE, COLOR_OKGREEN, COLOR_WARNING, FAIL, ENDC, COLOR_BOLD, COLOR_UNDERLINE, COLOR_GREEN, COLOR_GRAY
import logging
import core.arithmetic as arithmetic
from utils.helpers import opcode, padded_hex, pretty_bignum, all_concrete, replace_lines, replace_f
from utils.helpers import clean_color, C, is_array
from core.algebra import lt_op, mul_op, minus_op, ge_zero, safe_ge_zero, sub_op, add_op, apply_mask, safe_le_op, to_bytes
from copy import deepcopy
from core.arithmetic import simplify_bool, is_zero
from core.masks import get_bit, mask_to_type
from utils.helpers import precompiled
from utils.helpers import colorize, to_exp2
from utils.signatures import get_param_name
from functools import partial
from pano.loader import Loader
import sys
logger = logging.getLogger(__name__)
'''
This module displays expressions and traces in a human-readable form.
It has gone through many iterations, so it's a mess by now.
A lot of it can be easily refactored, so if you're looking for a place to contribute,
this may be it :)
'''
prev_trace = None
def explain(title, trace):
global prev_trace
if '--explain' not in sys.argv:
return
if trace == prev_trace:
return
print('\n'+C.green_back+f" {title}: "+C.end+'\n')
pprint_trace(trace)
prev_trace = trace
def explain_text(title, params):
global prev_trace
if '--explain' not in sys.argv:
return
print('\n'+C.blue_back+f" {title}: "+C.end+'\n')
for name, val in params:
print(f' {C.gray}{name}{C.end}: {val}')
print()
def make_ast(trace):
def store_to_set(line):
if line ~ ('store', :size, :off, :idx, :val):
return ('set', ('stor', size, off, idx), val)
else:
return line
def mask_storage(exp):
if exp ~ ('stor', :size, :off, :idx):
return ('mask_shl', size, 0, 0, exp)
else:
return exp
trace = replace_lines(trace, store_to_set)
trace = replace_f(trace, mask_storage)
return trace
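# `make_ast` above leans on the tilde codec (`# coding: tilde`) for its `~`
# pattern matches and on `replace_lines`/`replace_f` from utils.helpers.
# A minimal self-contained sketch of the same store-to-set rewrite in plain
# Python, with `replace_lines` approximated by a map over top-level lines
# (names here are illustrative, not the project's API):

```python
def store_to_set_sketch(line):
    # ('store', size, off, idx, val) -> ('set', ('stor', size, off, idx), val)
    if isinstance(line, tuple) and len(line) == 5 and line[0] == 'store':
        _, size, off, idx, val = line
        return ('set', ('stor', size, off, idx), val)
    return line

def make_ast_sketch(trace):
    # only rewrites top-level lines; the real replace_lines recurses into ifs/whiles
    return [store_to_set_sketch(line) for line in trace]

trace = [('store', 256, 0, 1, 42), ('stop',)]
print(make_ast_sketch(trace))
# -> [('set', ('stor', 256, 0, 1), 42), ('stop',)]
```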
def format_exp(exp):
if type(exp) == str:
return f'"{exp}"'
if type(exp) == int:
if exp > 10 ** 6 and exp % 10 ** 6 != 0:
return hex(exp)
else:
return str(exp)
elif type(exp) != list:
return str(exp)
else:
if len(exp) == 0:
return COLOR_GRAY + "[]" + ENDC
if type(opcode(exp)) == list:
return (
COLOR_GRAY
+ "["
+ ENDC
+ f"{COLOR_GRAY}, {ENDC}".join([format_exp(e) for e in exp])
+ COLOR_GRAY
+ "]"
+ ENDC
)
else:
return (
COLOR_GRAY
+ "["
+ ENDC
+ f"{COLOR_GRAY}, {ENDC}".join(
[opcode(exp)] + [format_exp(e) for e in exp[1:]]
)
+ COLOR_GRAY
+ "]"
+ ENDC
)
def pprint_repr(trace, indent=0):
for line in trace:
if opcode(line) == "if":
cond, if_true, if_false = line[1:]
print(indent * " ", f"[if, {format_exp(cond)}, [")
pprint_repr(if_true, indent + 2)
print(indent * " ", "],[")
pprint_repr(if_false, indent + 2)
print(indent * " ", "] ")
elif opcode(line) == "while":
cond, tr = line[1], line[2]
print(indent * " ", f"[while, {format_exp(cond)}, [")
pprint_repr(tr, indent + 2)
print(indent * " ", "], ")
else:
print(indent * " ", format_exp(line) + f"{COLOR_GRAY}, {ENDC}")
'''
def pprint_repr(exp):
print(repr(exp))
return
print(pretty_repr(exp))
'''
def pretty_repr(exp, indent=0):
if type(exp) not in (tuple, list):
return repr(exp)
elif type(exp) == list:
res = ', \n'.join([' '*indent + pretty_repr(e, indent) for e in exp])
res = indent*' '+'['+res[:-3]+']'
return res
elif type(exp) == tuple:
res = ', '.join([pretty_repr(e) for e in exp])
if len(res) > 40 and exp ~ (:op, :first, *rest):
indent += len(pretty_repr(op)+', ')+1
res = pretty_repr(op)+', '+pretty_repr(first, indent)+',\n'
for r in rest:
res += indent*' '+pretty_repr(r, indent)+', \n'
res = res[:-3] # removes ', \n'
return '('+res+')'
def pformat_trace(trace):
return '\n'.join(pprint_logic(trace)) + "\n\n"
def pprint_trace(trace):
trace = make_ast(trace)
pprint_ast(trace)
def pprint_ast(trace):
empty = True
for l in pprint_logic(trace):
print(l)
empty = False
if empty:
print(' stop')
print()
print()
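# The pprint_logic generator below walks the tuple AST and yields indented
# source-like lines. A miniature, self-contained version of that idea, using
# a simplified invented AST shape rather than panoramix's full format:

```python
def render(node, indent=0):
    # yield indented text lines for a tiny tuple AST: lists are blocks,
    # ('if', cond, if_true, if_false) nests, anything else is printed as-is
    pad = ' ' * indent
    if isinstance(node, list):
        for line in node:
            yield from render(line, indent)
    elif node[0] == 'if':
        _, cond, if_true, if_false = node
        yield pad + f'if {cond}:'
        yield from render(if_true, indent + 4)
        yield pad + 'else:'
        yield from render(if_false, indent + 4)
    else:
        yield pad + str(node)

ast = [('if', 'x > 0', [('return', 1)], [('return', 0)])]
print('\n'.join(render(ast)))
```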
def pprint_logic(exp, indent=2):
INDENT_LEN = 4
if opcode(exp) == 'while':
if len(exp) == 5:
cond, path, jd, vars = exp[1], exp[2], exp[3], exp[4]
else:
cond, path = exp[1], exp[2]
jd, vars = None, []
for v in vars:
yield ' '*indent + list(pretty_line(('setvar', v[1],v[2]), add_color=True))[0]
yield ' '*indent + COLOR_GREEN + 'while ' + ENDC + prettify(cond, add_color=True, parentheses=False, rem_bool=True) + COLOR_GREEN + ':' + ENDC
if type(path) != list:
path = path.trace
for l in pprint_logic(path, indent + INDENT_LEN):
yield l
elif exp ~ ('require', :cond):
yield ' '*indent + 'require ' + prettify(exp[1], add_color=True, parentheses=False, rem_bool=True) + ''
elif exp ~ ('if', :cond, :if_true): # one-sided ifs, only after folding
if len(if_true) == 1 and (first := if_true[0]) and \
((first == ('revert', 0)) or (first ~ ('invalid', ...))):
yield ' '*indent + 'require '+prettify(is_zero(exp[1]), add_color=True, parentheses=False, rem_bool=True)
else:
yield ' '*indent + 'if ' + prettify(exp[1], add_color=True, parentheses=False, rem_bool=True) + ':'
for l in pprint_logic(if_true, indent + INDENT_LEN):
yield l
elif exp ~ ('if', :cond, :if_true, :if_false):
if len(if_false) == 1 and (first := if_false[0]) and \
((first == ('revert', 0)) or \
(first ~ ('invalid', ...))):
yield ' '*indent + 'require '+prettify(exp[1], add_color=True, parentheses=False, rem_bool=True)
for l in pprint_logic(exp[2], indent):
yield l
elif len(if_true) == 1 and (first := if_true[0]) and \
((first == ('revert', 0)) or \
(first ~ ('invalid', ...))):
yield ' '*indent + 'require '+prettify(is_zero(exp[1]), add_color=True, parentheses=False, rem_bool=True)
for l in pprint_logic(exp[3], indent):
yield l
else:
yield ' '*indent + 'if ' + prettify(exp[1], add_color=True, parentheses=False, rem_bool=True) + ':'
for l in pprint_logic(if_true, indent + INDENT_LEN):
yield l
'''
while len(if_false) == 1 and opcode(if_false) == 'if' and len(if_false) == 4:
first = if_false[0]
assert first ~ ('if', :c, :i_t, :if_false)
yield ' '*indent + 'elif ' + prettify(c, add_color=True, parentheses=False, rem_bool=True) + ':'
for l in pprint_logic(i_t, indent + INDENT_LEN):
yield l'''
yield ' '*indent + 'else:'
for l in pprint_logic(if_false, indent + INDENT_LEN):
yield l
elif type(exp) == list:
for idx, line in enumerate(exp):
if idx == len(exp)-1 and indent == 2 and line==('stop', ):
pass # don't print the last stop
else:
for l in pprint_logic(line, indent):
yield l
elif opcode(exp) == 'or' and len(exp)>1:
yield ' '*indent + 'if'
for l in pprint_logic(exp[1], indent + INDENT_LEN):
yield l
for line in exp[2:]:
yield ' '*indent + 'or'
for l in pprint_logic(line, indent + INDENT_LEN):
yield l
else:
for l in pretty_line(exp):
yield ' '* indent + l
def to_real_int(exp):
if type(exp) == int and get_bit(exp, 255):
return -arithmetic.sub(0, exp)
else:
return exp
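# to_real_int above treats a 256-bit word whose top bit is set as a
# two's-complement negative, via get_bit and arithmetic.sub. A self-contained
# equivalent using plain arithmetic (helper name is mine, not the module's):

```python
def to_signed_256(x):
    # interpret a 256-bit EVM word as a signed two's-complement integer
    if isinstance(x, int) and (x >> 255) & 1:
        return x - 2**256
    return x

print(to_signed_256(2**256 - 1))  # -> -1
```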
def pretty_line(r, add_color=True):
col = partial(colorize, add_color=add_color)
pret = partial(prettify, parentheses=False, add_color=add_color)
if type(r) is str:
yield COLOR_GRAY + "# " + r + ENDC
# elif r ~ ('jumpdest', ...):
# pass
elif r ~ ('comment', :text):
yield COLOR_GRAY + "# " + prettify(text, add_color=False) + ENDC
elif r ~ ('log', :params, *events):
# solidstamp and dao cover most of those cases
res_params = pretty_memory(params, add_color=False)
for e in events:
if type(e) != int:
for e in events[1:]:
res_params = res_params + (prettify(e, add_color=False, parentheses=False), )
events = [events[0]]
break
# breaks with more than one proper event
res_events = tuple(pretty_fname(e, add_color=False, force=True) for e in events)
# print(res_events)
res_events = tuple((x[:10] if x[:2] == '0x' else x) for x in res_events)
for e in res_events:
if '(' not in e:
yield col(f"log {e}{':' if len(res_params)>0 else ''} {', '.join(res_params)}", COLOR_GRAY)
else:
fname, fparams = e.split('(')
assert fparams[-1] == ')'
fparams = fparams[:-1]
fparams = fparams.split(', ')
if fparams == [''] or len(res_params) == 0:
yield col(f'log {e}', COLOR_GRAY)
elif len(fparams) == len(res_params):
p_list = []
try:
for idx, ptype, pname in [f"{idx} {p}".split(' ') for idx, p in enumerate(fparams)]:
p_list.append((ptype, pname, res_params[int(idx)]))
except Exception:
logger.warning(f'weird log {e} {fparams}')
yield f'log {e}'
return
if len(p_list) == 1:
yield col(f"log {fname}({p_list[0][0]} {p_list[0][1]}={pret(p_list[0][2], add_color=False, parentheses=False)})", COLOR_GRAY)
else:
ind = len(f'log ')
first = p_list[0]
last = p_list[-1]
pline = lambda p: f'{p[0]} {p[1]}={pret(p[2], add_color=False, parentheses=False)}' # spaces around = not pep8 compliant
# but without them it was less readable
yield col(f"log {fname}(", COLOR_GRAY)#
yield col(f" {pline(first)},", COLOR_GRAY)
for p in p_list[1:-1]:
yield col(' '*ind + f"{pline(p)},", COLOR_GRAY)
yield col(' '*ind + f"{pline(last)})", COLOR_GRAY)
# elif len(res_params) == 0:
# yield col(f'log {e}', COLOR_GRAY)
else:
yield col(f'log {e}:', COLOR_GRAY)
ind = ' ' * len(f'log {fname}(')
for p in res_params:
yield col(ind + p + ',', COLOR_GRAY)
# print(repr(len(fparams)), len(res_params))
elif r | |
"""Cryptocurrency Discovery Controller"""
__docformat__ = "numpy"
# pylint: disable=R0904, C0302, W0622, C0201
import argparse
from typing import List
from prompt_toolkit.completion import NestedCompleter
from gamestonk_terminal.parent_classes import BaseController
from gamestonk_terminal import feature_flags as gtff
from gamestonk_terminal.helper_funcs import (
EXPORT_ONLY_RAW_DATA_ALLOWED,
parse_known_args_and_warn,
check_positive,
)
from gamestonk_terminal.menu import session
from gamestonk_terminal.cryptocurrency.discovery import (
coinmarketcap_model,
coinpaprika_model,
pycoingecko_model,
pycoingecko_view,
coinpaprika_view,
coinmarketcap_view,
)
class DiscoveryController(BaseController):
"""Discovery Controller class"""
CHOICES_COMMANDS = [
"coins",
"cpsearch",
"cmctop",
"cgtrending",
"cgvoted",
"cgvisited",
"cgvolume",
"cgrecently",
"cgsentiment",
"cggainers",
"cglosers",
"cgyfarms",
"cgdefi",
"cgdex",
"cgnft",
]
def __init__(self, queue: List[str] = None):
"""Constructor"""
super().__init__("/crypto/disc/", queue)
if session and gtff.USE_PROMPT_TOOLKIT:
choices: dict = {c: {} for c in self.controller_choices}
choices["cggainers"]["-p"] = {
c: {} for c in pycoingecko_model.PERIODS.keys()
}
choices["cggainers"]["-s"] = {
c: {} for c in pycoingecko_model.GAINERS_FILTERS
}
choices["cglosers"]["-p"] = {
c: {} for c in pycoingecko_model.PERIODS.keys()
}
choices["cglosers"]["-s"] = {
c: {} for c in pycoingecko_model.GAINERS_FILTERS
}
choices["cgtrending"]["-s"] = {
c: {} for c in pycoingecko_model.TRENDING_FILTERS
}
choices["cgvoted"]["-s"] = {
c: {} for c in pycoingecko_model.TRENDING_FILTERS
}
choices["cgvisited"]["-s"] = {
c: {} for c in pycoingecko_model.TRENDING_FILTERS
}
choices["cgsentiment"]["-s"] = {
c: {} for c in pycoingecko_model.TRENDING_FILTERS
}
choices["cgrecently"]["-s"] = {
c: {} for c in pycoingecko_model.RECENTLY_FILTERS
}
choices["cgyfarms"]["-s"] = {
c: {} for c in pycoingecko_model.YFARMS_FILTERS
}
choices["cgvolume"]["-s"] = {c: {} for c in pycoingecko_model.CAP_FILTERS}
choices["cgdefi"]["-s"] = {c: {} for c in pycoingecko_model.CAP_FILTERS}
choices["cgnft"]["-s"] = {c: {} for c in pycoingecko_model.CAP_FILTERS}
choices["cgdex"]["-s"] = {c: {} for c in pycoingecko_model.DEX_FILTERS}
choices["cmctop"]["-s"] = {c: {} for c in coinmarketcap_model.FILTERS}
choices["cpsearch"]["-s"] = {c: {} for c in coinpaprika_model.FILTERS}
choices["cpsearch"]["-c"] = {c: {} for c in coinpaprika_model.CATEGORIES}
self.completer = NestedCompleter.from_nested_dict(choices)
def print_help(self):
"""Print help"""
help_text = """
CoinGecko:
cgtrending trending coins on CoinGecko
cgvoted most voted coins on CoinGecko
cgvisited most visited coins on CoinGecko
cgvolume coins with highest volume on CoinGecko
cgrecently recently added on CoinGecko
cgsentiment coins with most positive sentiment
cggainers top gainers - coins which price gained the most in given period
cglosers top losers - coins which price dropped the most in given period
cgyfarms top yield farms
cgdefi top defi protocols
cgdex top decentralized exchanges
cgnft top non fungible tokens
CoinPaprika:
cpsearch search on CoinPaprika
CoinMarketCap:
cmctop top coins from CoinMarketCap
"""
print(help_text)
def call_cggainers(self, other_args):
"""Process gainers command"""
parser = argparse.ArgumentParser(
prog="cggainers",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""
Shows Largest Gainers - coins whose price gained the most in the given period.
Use the --period parameter to choose the timeframe you are interested in: 1h, 24h, 7d, 14d, 30d, 60d, 1y.
Use --limit to show only the first N records,
and --sort to sort by Rank, Symbol, Name, Volume, Price or Change, adding the --descend flag
to change the sort order.
The --urls flag displays one additional column with the url of each coin.
""",
)
parser.add_argument(
"-p",
"--period",
dest="period",
type=str,
help="time period, one from [1h, 24h, 7d, 14d, 30d, 60d, 1y]",
default="1h",
choices=pycoingecko_model.PERIODS.keys(),
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: Rank",
default="Rank",
choices=pycoingecko_model.GAINERS_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls. If you use this flag, an additional column with urls will be displayed",
default=False,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
)
if ns_parser:
pycoingecko_view.display_gainers(
period=ns_parser.period,
top=ns_parser.limit,
sortby=ns_parser.sortby,
descend=ns_parser.descend,
links=ns_parser.urls,
export=ns_parser.export,
)
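# The command methods in this controller all follow the same argparse pattern.
# A stripped-down, stdlib-only sketch of how the cggainers flags above parse
# (option names taken from the parser above, defaults simplified). Note that
# --descend uses action="store_false" with default=True, so passing the flag
# actually sets descend to False:

```python
import argparse

parser = argparse.ArgumentParser(prog="cggainers", add_help=False)
parser.add_argument("-p", "--period", dest="period", default="1h")
parser.add_argument("-l", "--limit", dest="limit", type=int, default=15)
parser.add_argument("--descend", action="store_false", dest="descend", default=True)

ns = parser.parse_args(["-p", "24h", "--descend"])
print(ns.period, ns.limit, ns.descend)  # -> 24h 15 False
```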
def call_cglosers(self, other_args):
"""Process losers command"""
parser = argparse.ArgumentParser(
prog="cglosers",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""
Shows Largest Losers - coins whose price dropped the most in the given period.
Use the --period parameter to choose the timeframe you are interested in: 1h, 24h, 7d, 14d, 30d, 60d, 1y.
Use --limit to show only the first N records,
and --sort to sort by Rank, Symbol, Name, Volume, Price or Change, adding the --descend flag
to change the sort order.
The --urls flag displays one additional column with the coingecko url of each coin.
""",
)
parser.add_argument(
"-p",
"--period",
dest="period",
type=str,
help="time period, one from [1h, 24h, 7d, 14d, 30d, 60d, 1y]",
default="1h",
choices=pycoingecko_model.PERIODS.keys(),
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: Rank",
default="Rank",
choices=pycoingecko_model.GAINERS_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls. If you use this flag, an additional column with urls will be displayed",
default=False,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
)
if ns_parser:
pycoingecko_view.display_losers(
period=ns_parser.period,
top=ns_parser.limit,
sortby=ns_parser.sortby,
descend=ns_parser.descend,
links=ns_parser.urls,
export=ns_parser.export,
)
def call_cgtrending(self, other_args):
"""Process trending command"""
parser = argparse.ArgumentParser(
prog="cgtrending",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""Discover trending coins.
Use the --limit parameter to show only the first N records,
and --sort to sort by Rank, Name, Price_BTC or Price_USD, adding the --descend flag
to change the sort order.
The --urls flag displays one additional column with the coingecko url of each coin.
trending will display: Rank, Name, Price_BTC, Price_USD
""",
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: rank",
default="Rank",
choices=pycoingecko_model.TRENDING_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls. If you use this flag, an additional column with urls will be displayed",
default=False,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
)
if ns_parser:
pycoingecko_view.display_discover(
category="trending",
top=ns_parser.limit,
sortby=ns_parser.sortby,
descend=ns_parser.descend,
links=ns_parser.urls,
export=ns_parser.export,
)
def call_cgvoted(self, other_args):
"""Process voted command"""
parser = argparse.ArgumentParser(
prog="cgvoted",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""Discover most voted coins.
Use the --limit parameter to show only the first N records,
and --sort to sort by Rank, Name, Price_BTC or Price_USD, adding the --descend flag
to change the sort order.
The --urls flag displays one additional column with the coingecko url of each coin.
voted will display: Rank, Name, Price_BTC, Price_USD
""",
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: rank",
default="Rank",
choices=pycoingecko_model.TRENDING_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls. If you use this flag, an additional column with urls will be displayed",
default=False,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
)
if ns_parser:
pycoingecko_view.display_discover(
category="most_voted",
top=ns_parser.limit,
sortby=ns_parser.sortby,
descend=ns_parser.descend,
links=ns_parser.urls,
export=ns_parser.export,
)
def call_cgrecently(self, other_args):
"""Process recently command"""
parser = argparse.ArgumentParser(
prog="cgrecently",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""
Shows coins recently added on CoinGecko. Use --limit to show only the first N coins.
Use --sort to sort by Rank, Name, Symbol, Price, Change_1h, Change_24h or Added,
adding the --descend flag to change the sort order.
The --urls flag displays urls.""",
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: Rank",
default="Rank",
choices=pycoingecko_model.RECENTLY_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls",
default=False,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
)
if ns_parser:
pycoingecko_view.display_recently_added(
top=ns_parser.limit,
sortby=ns_parser.sortby,
descend=ns_parser.descend,
links=ns_parser.urls,
export=ns_parser.export,
)
def call_cgvisited(self, other_args):
"""Process most_visited command"""
parser = argparse.ArgumentParser(
prog="cgvisited",
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
description="""
Discover most visited coins.
Use the --limit parameter to show only the first N records,
and --sort to sort by Rank, Name, Price_BTC or Price_USD, adding the --descend flag
to change the sort order.
The --urls flag displays one additional column with the coingecko url of each coin.
visited will display: Rank, Name, Price_BTC, Price_USD
""",
)
parser.add_argument(
"-l",
"--limit",
dest="limit",
type=check_positive,
help="Number of records to display",
default=15,
)
parser.add_argument(
"-s",
"--sort",
dest="sortby",
type=str,
help="Sort by given column. Default: rank",
default="Rank",
choices=pycoingecko_model.TRENDING_FILTERS,
)
parser.add_argument(
"--descend",
action="store_false",
help="Flag to sort in ascending order (lowest first)",
dest="descend",
default=True,
)
parser.add_argument(
"-u",
"--urls",
dest="urls",
action="store_true",
help="Flag to show urls. If you use this flag, an additional column with urls will be displayed",
default=False,
| |
import numpy as np
import json
from os.path import join
from tqdm import tqdm
from scipy.optimize import least_squares
from pose_optimize.multiview_geo import reproject_error
DEBUG = False
def reproject_error_loss(p3d, p4, p6, cam_proj_4, cam_proj_6, num_kpt=23):
'''
Return:
kp4_e, kp6_e: error array, both (23,) shape
'''
assert p3d.shape == (num_kpt, 3)
assert p4.shape == (num_kpt, 2)
assert p6.shape == (num_kpt, 2)
kp4_recon = np.dot(cam_proj_4[0:3,0:3],p3d.T) + cam_proj_4[0:3,3].reshape([-1,1])
kp6_recon = np.dot(cam_proj_6[0:3,0:3],p3d.T) + cam_proj_6[0:3,3].reshape([-1,1])
kp4_recon = kp4_recon[0:2,:]/kp4_recon[2,:]
kp6_recon = kp6_recon[0:2,:]/kp6_recon[2,:]
# kp4_e = np.linalg.norm(kp4_recon.T - p4, axis=1)
# kp6_e = np.linalg.norm(kp6_recon.T - p6, axis=1)
kp4_e = np.sqrt(np.sum(np.square(kp4_recon.T - p4), axis=1))
kp6_e = np.sqrt(np.sum(np.square(kp6_recon.T - p6), axis=1))
return kp4_e, kp6_e
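# The project-then-divide pattern above can be sketched standalone. The helper
# below is a minimal illustration (names are ours, not from this module) of
# computing per-keypoint pixel error from a 3x4 projection matrix.

```python
import numpy as np

def reprojection_error(p3d, p2d, cam_proj):
    """Per-keypoint pixel error of 3D points against 2D detections."""
    # Rotate/translate into the camera frame, then perspective-divide.
    proj = cam_proj[:, :3] @ p3d.T + cam_proj[:, 3:4]  # (3, N) homogeneous
    proj = proj[:2] / proj[2]                          # (2, N) image coords
    return np.linalg.norm(proj.T - p2d, axis=1)        # (N,) pixel errors

# With an identity camera, [x, y, z] reprojects to [x/z, y/z] exactly.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
p3d = np.array([[0.5, -0.25, 2.0]])
p2d = np.array([[0.25, -0.125]])
err = reprojection_error(p3d, p2d, P)
```

# This mirrors how kp4_e/kp6_e are computed for each camera above.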
def reproject_error_loss_score(p3d, p4, p6, cam_proj_4, cam_proj_6, num_kpt=23):
'''
Return:
kp4_e, kp6_e: error array, both (23,) shape
'''
assert p3d.shape == (num_kpt, 3)
assert p4.shape == (num_kpt, 3)
assert p6.shape == (num_kpt, 3)
kp4_recon = np.dot(cam_proj_4[0:3,0:3],p3d.T) + cam_proj_4[0:3,3].reshape([-1,1])
kp6_recon = np.dot(cam_proj_6[0:3,0:3],p3d.T) + cam_proj_6[0:3,3].reshape([-1,1])
kp4_recon = kp4_recon[0:2,:]/kp4_recon[2,:]
kp6_recon = kp6_recon[0:2,:]/kp6_recon[2,:]
# kp4_e = np.linalg.norm(kp4_recon.T - p4, axis=1)
# kp6_e = np.linalg.norm(kp6_recon.T - p6, axis=1)
kp4_e = p4[:,2]*np.sqrt(np.sum(np.square(kp4_recon.T - p4[:,:2]), axis=1))
kp6_e = p6[:,2]*np.sqrt(np.sum(np.square(kp6_recon.T - p6[:,:2]), axis=1))
return kp4_e, kp6_e
def optimze_loss_2d(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6, num_kpt=23, lambda_reproj=1):
    '''
    Only consider the reprojection loss
    '''
    l1 = lambda_reproj
    p3d = p3d_flatten.reshape([-1, 3])
    kp4_e, kp6_e = reproject_error_loss(p3d, p4, p6, cam_proj_4, cam_proj_6, num_kpt=num_kpt)
    return np.concatenate((l1*kp4_e, l1*kp6_e))
def shape_dis_loss(kpt_3d_array, median_bone, left_list, right_list, num_kpt=23):
'''
Shape loss given prior shape information
'''
assert kpt_3d_array.shape == (num_kpt, 3)
assert len(left_list) == len(right_list)
assert len(left_list) == len(median_bone.keys())
num_bone = len(left_list)
    left_error = np.zeros(num_bone)
    right_error = np.zeros(num_bone)
for i in range(num_bone):
bon_vec_left = kpt_3d_array[left_list[i][1],:] - kpt_3d_array[left_list[i][0],:]
left_error_i = np.sqrt(np.dot(bon_vec_left, bon_vec_left)) - median_bone[str(i)]
left_error[i] = abs(left_error_i)
bon_vec_right = kpt_3d_array[right_list[i][1],:] - kpt_3d_array[right_list[i][0],:]
right_error_i = np.sqrt(np.dot(bon_vec_right, bon_vec_right)) - median_bone[str(i)]
right_error[i] = abs(right_error_i)
return left_error, right_error
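# shape_dis_loss penalizes the deviation of each bone length from its median
# prior; a self-contained sketch (illustrative joint indices and medians, not
# the project's skeleton definition):

```python
import numpy as np

def bone_length_error(kpt3d, pairs, median_len):
    # One residual per bone: | measured length - median length |
    err = np.zeros(len(pairs))
    for i, (s, e) in enumerate(pairs):
        err[i] = abs(np.linalg.norm(kpt3d[e] - kpt3d[s]) - median_len[i])
    return err

kpt3d = np.array([[0., 0., 0.], [0., 1., 0.], [0., 1., 1.]])
pairs = [(0, 1), (1, 2)]          # (start joint, end joint) per bone
median_len = [1.0, 0.5]           # prior bone lengths
err = bone_length_error(kpt3d, pairs, median_len)
```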
def optimze_loss(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone, num_kpt=23, lambda_reproj=0.1, lambda_shape=5.0):
    '''
    Full loss: confidence-weighted reprojection error plus the shape prior
    '''
    l1 = lambda_reproj
    l2 = lambda_shape
    p3d = p3d_flatten.reshape([-1, 3])
    kp4_e, kp6_e = reproject_error_loss_score(p3d, p4, p6, cam_proj_4, cam_proj_6, num_kpt=num_kpt)
    left_error, right_error = shape_dis_loss(p3d, median_bone, left_list, right_list, num_kpt=num_kpt)
    return np.concatenate((l1*kp4_e, l1*kp6_e, l2*left_error, l2*right_error))
def optimze_loss_no_score(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone, num_kpt=23, lambda_reproj=0.1, lambda_shape=1.0):
    '''
    Full loss without confidence weighting: reprojection error plus the shape prior
    '''
    l1 = lambda_reproj
    l2 = lambda_shape
    p3d = p3d_flatten.reshape([-1, 3])
    kp4_e, kp6_e = reproject_error_loss(p3d, p4, p6, cam_proj_4, cam_proj_6, num_kpt=num_kpt)
    left_error, right_error = shape_dis_loss(p3d, median_bone, left_list, right_list, num_kpt=num_kpt)
    return np.concatenate((l1*kp4_e, l1*kp6_e, l2*left_error, l2*right_error))
def centerize_keypoint(p1, p2, norm_dst):
    '''
    Re-center two points about their midpoint, norm_dst apart
    '''
    assert p1.shape == (3,)
    assert p2.shape == (3,)
    p_center = (p1+p2)/2
    p_vec = (p1-p2)
    p_dis = np.sqrt(np.dot(p_vec, p_vec))
    p1_shift = p_center + 0.5*norm_dst*p_vec/p_dis
    p2_shift = p_center - 0.5*norm_dst*p_vec/p_dis
    return p1_shift, p2_shift
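# The recentering used for the torso joints keeps the pair's midpoint fixed
# while rescaling their separation to a target distance; a standalone sketch:

```python
import numpy as np

def recenter_pair(p1, p2, norm_dst):
    # Keep the midpoint, rescale the separation to norm_dst.
    center = (p1 + p2) / 2
    vec = p1 - p2
    unit = vec / np.linalg.norm(vec)
    return center + 0.5 * norm_dst * unit, center - 0.5 * norm_dst * unit

a, b = recenter_pair(np.array([0., 0., 0.]), np.array([4., 0., 0.]), 2.0)
```

# The midpoint (2, 0, 0) is preserved and the new separation is exactly 2.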
def shape_initialize(left_list, right_list, median_bone, kpt_3d_array, num_kpt=23):
'''
Initialize human joints 3D position from shape prior
'''
assert kpt_3d_array.shape == (num_kpt,3)
assert len(left_list) == len(right_list)
assert len(left_list) == len(median_bone.keys())
num_bone = len(left_list)
left_ratio_list, right_ratio_list = [],[]
vec_left_list, vec_right_list = [], []
ratio_outlier = 1.5
ratio_draw_back = 1.1
for i in range(num_bone):
bon_vec_left = kpt_3d_array[left_list[i][1],:] - kpt_3d_array[left_list[i][0],:]
ratio_left = np.sqrt(np.dot(bon_vec_left, bon_vec_left))/ median_bone[str(i)]
left_ratio_list += [ratio_left]
vec_left_list += [bon_vec_left]
for i in range(num_bone):
bon_vec_right = kpt_3d_array[right_list[i][1],:] - kpt_3d_array[right_list[i][0],:]
ratio_right = np.sqrt(np.dot(bon_vec_right, bon_vec_right))/median_bone[str(i)]
right_ratio_list += [ratio_right]
vec_right_list += [bon_vec_right]
kp_3d_new = np.zeros(kpt_3d_array.shape)
# Adjust Shoulder to hip
kp_3d_new[left_list[2][0], :], kp_3d_new[left_list[2][1], :] = centerize_keypoint(kpt_3d_array[left_list[2][0], :], kpt_3d_array[left_list[2][1], :] , median_bone["2"])
kp_3d_new[right_list[2][0], :], kp_3d_new[right_list[2][1], :] = centerize_keypoint(kpt_3d_array[right_list[2][0], :], kpt_3d_array[right_list[2][1], :] , median_bone["2"])
# Adjust shoulder and Hip pair
sh_p = left_list[0]
hi_p = left_list[1]
kp_3d_new[sh_p[0]], kp_3d_new[sh_p[1]] = centerize_keypoint(kp_3d_new[sh_p[0]], kp_3d_new[sh_p[1]], median_bone["0"]) # shoulder
kp_3d_new[hi_p[0]], kp_3d_new[hi_p[1]] = centerize_keypoint(kp_3d_new[hi_p[0]], kp_3d_new[hi_p[1]], median_bone["1"]) # hip
# left part
for i in range(2, num_bone):
start_indx, end_indx = tuple(left_list[i])
if left_ratio_list[i] < ratio_outlier:
kp_3d_new[end_indx, :] = kp_3d_new[start_indx, :] + vec_left_list[i]
else:
kp_3d_new[end_indx, :] = kp_3d_new[start_indx, :] + vec_left_list[i]/left_ratio_list[i]*ratio_draw_back
for i in range(2, num_bone):
start_indx, end_indx = tuple(right_list[i])
if right_ratio_list[i] < ratio_outlier:
kp_3d_new[end_indx, :] = kp_3d_new[start_indx, :] + vec_right_list[i]
else:
kp_3d_new[end_indx, :] = kp_3d_new[start_indx, :] + vec_right_list[i]/right_ratio_list[i]*ratio_draw_back
# left_error, right_error = loss_kpt_3d(kp_3d_new, median_bone, left_list, right_list)
# print(left_error)
# print(right_error)
# print("OK")
return kp_3d_new
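# The outlier handling in shape_initialize leaves a bone vector unchanged when
# its length ratio to the median is plausible, and otherwise shrinks it to
# ratio_draw_back times the median; a minimal standalone sketch:

```python
import numpy as np

def clamp_bone(vec, median_len, ratio_outlier=1.5, ratio_draw_back=1.1):
    # Shrink bones longer than ratio_outlier * median back to
    # ratio_draw_back * median; keep plausible bones as-is.
    ratio = np.linalg.norm(vec) / median_len
    if ratio < ratio_outlier:
        return vec
    return vec / ratio * ratio_draw_back

v_out = clamp_bone(np.array([3.0, 0.0, 0.0]), 1.0)   # too long -> shrunk
v_ok = clamp_bone(np.array([1.2, 0.0, 0.0]), 1.0)    # plausible -> kept
```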
def fintune_human_keypoint_2d(P4, P6, path4, path6, path3D, path_finetune=None):
with open(path3D,"r") as f:
data_3d = json.load(f)
with open(path4, "r") as f:
data_dict4 = json.load(f)
with open(path6, "r") as f:
data_dict6 = json.load(f)
# frame_id = next(iter(data_3d["3D"].keys()))
# person_id = next(iter(data_3d["3D"][frame_id].keys()))
# # frame_id = "000005"
# # person_id = "000"
cam_proj_4 = np.array(data_3d["P4"])
cam_proj_6 = np.array(data_3d["P6"])
data_3d_dict = {}
data_3d_dict["P4"] = data_3d["P4"]
data_3d_dict["P6"] = data_3d["P6"]
data_3d_dict["3D"] = {}
data_3d_dict["kp4_e"] = {}
data_3d_dict["kp6_e"] = {}
frame_list = [k for k in data_dict4.keys()]
frame_list.sort()
for i, frame_id in enumerate(tqdm(frame_list)):
frame_3d_dict = {}
kp4_dict = {}
kp6_dict = {}
person_list = [k for k in data_dict4[frame_id].keys()]
person_list.sort()
for person_id in person_list:
p3d_flatten = np.array(data_3d["3D"][frame_id][person_id]).ravel()
p4_homo = np.array(data_dict4[frame_id][person_id]).reshape([-1,3])
p6_homo = np.array(data_dict6[frame_id][person_id]).reshape([-1,3])
p4 = p4_homo[:,:2]
p6 = p6_homo[:,:2]
if DEBUG:
loss_init = optimze_loss_2d(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6)
print("Initial error", str(np.sqrt(np.sum(np.square(loss_init)))) )
res = least_squares(optimze_loss_2d, p3d_flatten, verbose=0, x_scale='jac', ftol=1e-4, method='trf',args=(p4, p6, cam_proj_4, cam_proj_6))
if DEBUG:
loss_final = res.fun
print("Final error", str(np.sqrt(np.sum(np.square(loss_final)))) )
loss_final = optimze_loss_2d(res.x, p4, p6, cam_proj_4, cam_proj_6)
print("Final error", str(np.sqrt(np.sum(np.square(loss_final)))) )
p3d_tune = res.x.reshape([-1,3])
kp4_recon, kp6_recon, kp4_e, kp6_e = reproject_error(p3d_tune, p4, p6, cam_proj_4, cam_proj_6)
frame_3d_dict[person_id] = p3d_tune.tolist()
kp4_dict[person_id] = kp4_e.tolist()
kp6_dict[person_id] = kp6_e.tolist()
data_3d_dict["3D"][frame_id] = frame_3d_dict
data_3d_dict["kp4_e"][frame_id] = kp4_dict
data_3d_dict["kp6_e"][frame_id] = kp6_dict
if path_finetune is not None:
with open(path_finetune, "w") as f:
json.dump(data_3d_dict, f)
return data_3d_dict
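# The refinement step above flattens the 3D points, minimizes a stacked
# residual vector with scipy's trust-region solver, and reshapes the result
# back. A toy sketch of the same pattern (trivial residual, not the project's
# loss):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x_flat, target):
    # Stack all per-point residuals into one flat vector, as the
    # optimizer above does with the concatenated keypoint errors.
    return (x_flat.reshape(-1, 3) - target).ravel()

target = np.array([[1., 2., 3.], [4., 5., 6.]])
x0 = np.zeros(6)                     # flattened initial 3D points
res = least_squares(residuals, x0, method='trf', args=(target,))
p3d_tuned = res.x.reshape(-1, 3)
```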
def finetune_human_3d(path_finetune_input, path4, path6, shape_prior_path, shape_prior_finetune_output, frame_list=None):
'''
path_finetune_input:
path4: data_C4.json
path6: data_C6.json
shape_prior_path:
shape_prior_finetune_output:
'''
with open(path_finetune_input,"r") as f:
data_3d = json.load(f)
with open(path4, "r") as f:
data_dict4 = json.load(f)
with open(path6, "r") as f:
data_dict6 = json.load(f)
with open(shape_prior_path, 'r') as f:
data_prior = json.load(f)
left_list = data_prior["left_list"]
right_list = data_prior["right_list"]
median_bone = data_prior["median_bone"]
cam_proj_4 = np.array(data_3d["P4"])
cam_proj_6 = np.array(data_3d["P6"])
data_3d_dict = {}
data_3d_dict["P4"] = data_3d["P4"]
data_3d_dict["P6"] = data_3d["P6"]
data_3d_dict["3D"] = {}
data_3d_dict["kp4_e"] = {}
data_3d_dict["kp6_e"] = {}
if frame_list:
for f in frame_list:
            if f not in data_dict4:
                raise KeyError("frame %s not found in %s" % (f, path4))
else:
frame_list = [k for k in data_dict4.keys()]
frame_list.sort()
for i, frame_id in enumerate(tqdm(frame_list)):
frame_3d_dict = {}
kp4_dict = {}
kp6_dict = {}
person_list = [k for k in data_dict4[frame_id].keys()]
person_list.sort()
for person_id in person_list:
p3d = np.array(data_3d["3D"][frame_id][person_id]).reshape([-1,3])
p3d_init = shape_initialize(left_list, right_list, median_bone, p3d)
p4_homo = np.array(data_dict4[frame_id][person_id]).reshape([-1,3])
p6_homo = np.array(data_dict6[frame_id][person_id]).reshape([-1,3])
p4 = p4_homo
p6 = p6_homo
p3d_flatten = p3d_init.flatten()
# loss_init = optimze_loss(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone)
#print(np.linalg.norm(loss_init))
res = least_squares(optimze_loss, p3d_flatten, verbose=0, x_scale='jac', ftol=1e-2, method='trf',args=(p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone))
p3d_tune = res.x.reshape([-1,3])
# loss_res = optimze_loss(res.x, p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone)
# print(np.linalg.norm(loss_res))
kp4_recon, kp6_recon, kp4_e, kp6_e = reproject_error(p3d_tune, p4[:,:2], p6[:,:2], cam_proj_4, cam_proj_6)
frame_3d_dict[person_id] = p3d_tune.tolist()
kp4_dict[person_id] = kp4_e.tolist()
kp6_dict[person_id] = kp6_e.tolist()
data_3d_dict["3D"][frame_id] = frame_3d_dict
data_3d_dict["kp4_e"][frame_id] = kp4_dict
data_3d_dict["kp6_e"][frame_id] = kp6_dict
with open(shape_prior_finetune_output, "w") as f:
json.dump(data_3d_dict, f)
def finetune_human_3d_no_score(path_finetune_input, path4, path6, shape_prior_path, shape_prior_finetune_output, frame_list=None):
'''
path_finetune_input:
path4: data_C4.json
path6: data_C6.json
shape_prior_path:
shape_prior_finetune_output:
'''
with open(path_finetune_input,"r") as f:
data_3d = json.load(f)
with open(path4, "r") as f:
data_dict4 = json.load(f)
with open(path6, "r") as f:
data_dict6 = json.load(f)
with open(shape_prior_path, 'r') as f:
data_prior = json.load(f)
left_list = data_prior["left_list"]
right_list = data_prior["right_list"]
median_bone = data_prior["median_bone"]
cam_proj_4 = np.array(data_3d["P4"])
cam_proj_6 = np.array(data_3d["P6"])
data_3d_dict = {}
data_3d_dict["P4"] = data_3d["P4"]
data_3d_dict["P6"] = data_3d["P6"]
data_3d_dict["3D"] = {}
data_3d_dict["kp4_e"] = {}
data_3d_dict["kp6_e"] = {}
if frame_list:
for f in frame_list:
            if f not in data_dict4:
                raise KeyError("frame %s not found in %s" % (f, path4))
else:
frame_list = [k for k in data_dict4.keys()]
frame_list.sort()
for i, frame_id in enumerate(tqdm(frame_list)):
        if i > 300:  # debug cap: stop after the first 300 frames
            import sys
            sys.exit()
frame_3d_dict = {}
kp4_dict = {}
kp6_dict = {}
person_list = [k for k in data_dict4[frame_id].keys()]
person_list.sort()
for person_id in person_list:
try:
p3d = np.array(data_3d["3D"][frame_id][person_id]).reshape([-1,3])
p3d_init = shape_initialize(left_list, right_list, median_bone, p3d)
p4_homo = np.array(data_dict4[frame_id][person_id]).reshape([-1,3])
p6_homo = np.array(data_dict6[frame_id][person_id]).reshape([-1,3])
p4 = p4_homo[:,:2]
p6 = p6_homo[:,:2]
p3d_flatten = p3d_init.flatten()
                # loss_init = optimze_loss(p3d_flatten, p4, p6, cam_proj_4, cam_proj_6, left_list, right_list, median_bone)
cat[catorow,catocol] > bsid:
cat[crowcol[:,0],crowcol[:,1]] = arclakeid
# if cat[catorow,catocol] != arclakeid and cat[nrow,ncol] != arclakeid:
# print lakeid
if cat[nrow,ncol] > bsid and arcatid[j] > bsid: #### lake input cat route to another lake input catch
cat[crowcol[:,0],crowcol[:,1]] = cat[nrow,ncol]
pp = Pourpoints[lrowcol[:,0],lrowcol[:,1]]
pp = np.unique(pp)
pp = pp[pp > 0]
if len(pp) == 1:
cat[lrowcol[:,0],lrowcol[:,1]] = arclakeid
return cat
###################################################
def CE_mcat4lake2(cat1,lake,fac,fdir,bsid,nrows,ncols,Pourpoints):
cat = copy.copy(cat1)
arlakeid = np.unique(lake)
arlakeid = arlakeid[arlakeid>=0]
for i in range(0,len(arlakeid)):
lakeid = arlakeid[i]
lrowcol = np.argwhere(lake==lakeid).astype(int)
lakacc = np.full((len(lrowcol),3),-9999)
lakacc[:,0] = lrowcol[:,0]
lakacc[:,1] = lrowcol[:,1]
lakacc[:,2] = fac[lrowcol[:,0],lrowcol[:,1]]
lakacc = lakacc[lakacc[:,2].argsort()]
lorow = lakacc[len(lakacc)-1,0]
locol = lakacc[len(lakacc)-1,1] ###### lake outlet row and col
arclakeid = cat1[lorow,locol]
pp = Pourpoints[lorow,locol]
pp = np.unique(pp)
pp = pp[pp > 0]
if len(pp) == 1:
cat[lrowcol[:,0],lrowcol[:,1]] = arclakeid
return cat
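# Both functions above locate a lake's outlet by sorting the lake cells by
# flow accumulation and taking the last row; np.argmax does the same in one
# step. A standalone sketch on a toy grid:

```python
import numpy as np

def lake_outlet(lake, fac, lakeid):
    cells = np.argwhere(lake == lakeid)          # (K, 2) row/col indices
    acc = fac[cells[:, 0], cells[:, 1]]          # flow accumulation per cell
    r, c = cells[np.argmax(acc)]                 # outlet = max accumulation
    return int(r), int(c)

lake = np.array([[1, 1],
                 [1, 0]])
fac = np.array([[3, 9],
                [5, 2]])
outlet = lake_outlet(lake, fac, 1)
```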
######################################################
def CE_Lakeerror(fac,fdir,lake,cat2,bsid,blid,boid,nrows,ncols,cat):
Watseds = copy.copy(cat2)
Poups = np.unique(Watseds)
Poups = Poups[Poups>=0]
##### Part 2, remove some small catchment which is not lake catchment
out = np.full((len(Poups),4),-9999)
for i in range(0,len(Poups)):
catid = Poups[i]
if catid > boid:
continue #### do nothing for observation catchments
rowcol = np.argwhere(Watseds==catid).astype(int)
catacc = np.full((len(rowcol),3),-9999)
catacc[:,0] = rowcol[:,0]
catacc[:,1] = rowcol[:,1]
catacc[:,2] = fac[rowcol[:,0],rowcol[:,1]]
catacc = catacc[catacc[:,2].argsort()].astype(int)
rowcol[0,0] = catacc[len(catacc)-1,0]
rowcol[0,1] = catacc[len(catacc)-1,1]
nrow,ncol = Nextcell(fdir,rowcol[0,0],rowcol[0,1])### get the downstream catchment id
if nrow < 0 or ncol < 0:
continue
if nrow < nrows and ncol < ncols:
if len(rowcol) < 10 and Watseds[rowcol[0,0],rowcol[0,1]] > bsid:
Watseds[catacc[:,0],catacc[:,1]] = Watseds[nrow,ncol]
if len(rowcol) < 10 and Watseds[rowcol[0,0],rowcol[0,1]] < blid:
Watseds[catacc[:,0],catacc[:,1]] = Watseds[nrow,ncol]
return Watseds
#########################################
def GenerateFinalPourpoints(fac,fdir,lake,cat3,bsid,blid,boid,nrows,ncols,cat,obs):
Poups = copy.copy(cat3)
Poups[:,:]=-9999
GWat = copy.copy(cat3)
GWatids = np.unique(cat3)
GWatids = GWatids[GWatids>=0]
ncatid = 1
for i in range(0,len(GWatids)):
trow,tcol = Getbasinoutlet(GWatids[i],GWat,fac)
Poups[trow,tcol] = ncatid
ncatid = ncatid + 1
OWat = copy.copy(cat)
OWatids = np.unique(cat)
OWatids = OWatids[OWatids>=0]
for i in range(0,len(OWatids)):
trow,tcol = Getbasinoutlet(OWatids[i],OWat,fac)
if not GWat[trow,tcol] >= blid:
if Poups[trow,tcol] < 0:
Poups[trow,tcol] = ncatid
ncatid = ncatid + 1
obsids = np.unique(obs)
obsids = obsids[obsids>=0]
for i in range(0,len(obsids)):
rowcol = np.argwhere(obs==obsids[i]).astype(int)
if Poups[rowcol[0,0],rowcol[0,1]] < 0:
Poups[rowcol[0,0],rowcol[0,1]] = ncatid
ncatid = ncatid + 1
return Poups
####################################################
def Addnlinklakes(fcat,alllake,lake1,fac,sbslid):
alllakeid = np.unique(alllake)
sllid = copy.copy(sbslid)
alllakeid = alllakeid[alllakeid>=0]
for i in range(0,len(alllakeid)):
lid = alllakeid[i]
ibglake = np.argwhere(lake1==lid).astype(int)
if len(ibglake) == 0: ## this lake is not big lakes
lrowcol = np.argwhere(alllake==lid).astype(int)
lcatacc = np.full((len(lrowcol),3),-9999)
lcatacc[:,0] = lrowcol[:,0]
lcatacc[:,1] = lrowcol[:,1]
lcatacc[:,2] = fac[lrowcol[:,0],lrowcol[:,1]]
lcatacc = lcatacc[lcatacc[:,2].argsort()]
loutrow = lcatacc[len(lcatacc)-1,0] ### get lake outlet row and col
loutcol = lcatacc[len(lcatacc)-1,1]
loutcatids = fcat[lcatacc[:,0],lcatacc[:,1]]
loutcatids = np.unique(loutcatids)
if len(loutcatids) == 1:
for j in range(0,len(lcatacc)):
fcat[lcatacc[j,0],lcatacc[j,1]] = sllid + 1
sllid = sllid + 1
return fcat
###################################
def Generatecatinfo(Watseds,fac,fdir,lake,dem,area,hycat,hycatinfo,catinfo,allcatid,lakeinfo,width,depth,
rivlen,obs,nrows,ncols,slope):
finalcat = copy.copy(Watseds)
for i in range(0,len(allcatid)):
catid = allcatid[i].astype(int)
catinfo[i,0] = catid
rowcol = np.argwhere(finalcat==catid).astype(int)
trow,tcol = Getbasinoutlet(catid,finalcat,fac)
nrow,ncol = Nextcell(fdir,trow,tcol)### get the downstream catchment id
if nrow < 0 or ncol < 0:
catinfo[i,1] = -1
elif nrow >= nrows or ncol >= ncols:
catinfo[i,1] = -1
elif finalcat[nrow,ncol] < 0:
catinfo[i,1] = -1
else:
catinfo[i,1] = finalcat[nrow,ncol]
catinfo[i,2] = trow
catinfo[i,3] = tcol
################################## Get lake information
lakeid = lake[trow,tcol]
if lakeid > 0:
slakeinfo = lakeinfo.loc[lakeinfo['HYLAK_ID'] == lakeid]
catinfo[i,4] = lakeid
catinfo[i,5] = slakeinfo.iloc[0]['VOL_TOTAL']
catinfo[i,6] = slakeinfo.iloc[0]['LAKE_AREA']
catinfo[i,7] = slakeinfo.iloc[0]['DEPTH_AVG']
catinfo[i,8] = slakeinfo.iloc[0]['SLOPE_100']
catinfo[i,9] = slakeinfo.iloc[0]['WSHD_AREA']
catinfo[i,10] = slakeinfo.iloc[0]['LAKE_TYPE']
########Check if it is observation points
if obs[trow,tcol] >= 0:
catinfo[i,23] = obs[trow,tcol]
########Got basin width and depth
catwidth,catdepth = Getcatwd(rowcol[:,0],rowcol[:,1],width,depth,-1) ### width depth in m
catinfo[i,12] = float(sum(dem[rowcol[:,0],rowcol[:,1]])/float(len(rowcol))) ### average elevation
# catinfo[i,13] = float(sum(area[rowcol[:,0],rowcol[:,1]]))/1000/1000 #### maximum area in km^2
catinfo[i,14] = max(dem[rowcol[:,0],rowcol[:,1]]) #### maximum dem
catinfo[i,15] = min(dem[rowcol[:,0],rowcol[:,1]]) #### maximum dem
catinfo[i,16] = dem[trow,tcol] #### outlet elevation
catinfo[i,17] = max(catwidth,1)
catinfo[i,18] = max(catdepth,1)
catinfo[i,19] = 0.030
#######Got basin area and rivlen
catinfo[i,11] = np.mean(area[rowcol[:,0],rowcol[:,1]])
catrivlen,catrivslp = Getcatrivlenslope(rowcol[:,0],rowcol[:,1],rivlen,dem,fac,fdir,finalcat,
trow,tcol,nrows,ncols,slope)
catinfo[i,20] = catrivlen
catinfo[i,21] = catrivslp
slopet = slope[rowcol[:,0],rowcol[:,1]]
slopet = slopet[slopet>0,]
catinfo[i,22] = np.mean(slopet)
return catinfo
########################################################
def Getcatwd(catrow,catcol,width,depth,DA):
wds = width[catrow,catcol]
dps = depth[catrow,catcol]
if max(wds) > 0:
catwd = max(wds)
catdps = max(dps)
else:
if DA > 0:
Q = 0.025*DA**0.9302
catwd = 7.2 *Q **(0.5)
catdps = 0.27*Q**(0.30)
else:
catwd = 15
catdps = 7.5
return catwd,catdps
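# When no observed width/depth exists, Getcatwd falls back to power-law
# regressions on drainage area. The fallback in isolation (same coefficients
# as above; units assumed m for width/depth and km^2 for drainage area):

```python
def wd_from_drainage_area(DA):
    if DA <= 0:
        return 15.0, 7.5                     # defaults when DA is unknown
    Q = 0.025 * DA ** 0.9302                 # bankfull discharge estimate
    return 7.2 * Q ** 0.5, 0.27 * Q ** 0.30  # width (m), depth (m)

w0, d0 = wd_from_drainage_area(0)
w1, d1 = wd_from_drainage_area(100.0)
```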
############################################################
def Writervhchanl(ocatinfo,outFolder,lenThres,iscalmanningn):
catinfo = copy.copy(ocatinfo)
# print int(catinfo.iloc[0]['SUBID']),len(catinfo.index)
ochn = open(outFolder+"modelchannel.rvp","w")
##################3
orvh = open(outFolder+"test.rvh","w")
orvh.write("# --------------------------------------------"+"\n")
orvh.write("# Raven HRU Input file"+"\n")
orvh.write("# lake catchment emulation"+"\n")
orvh.write("# --------------------------------------------"+"\n")
orvh.write(":SubBasins"+"\n")
orvh.write(" :Attributes NAME DOWNSTREAM_ID PROFILE REACH_LENGTH GAUGED"+"\n")
orvh.write(" :Units none none none km none"+"\n")
tab = " "
for i in range(0,len(catinfo.index)):
### Get catchment width and dpeth
catid = int(catinfo.iloc[i]['SUBID'])
temp = catinfo.iloc[i]['RIVLEN']
if (temp >= lenThres):
catlen = float(temp)/1000 #### in km
strRlen = str(catlen)
else:
catlen = -9999
strRlen = 'ZERO-'
if catinfo.iloc[i]['ISLAKE'] >= 0 :
strRlen = 'ZERO-'
        #####################################################
Strcat = str(catid)
StrDid = str(int(catinfo.iloc[i]['DOWSUBID']))
pronam = 'Chn_'+ Strcat
        chslope = catinfo.iloc[i]['SLOPE3']
        if chslope < 0:
            chslope = catinfo.iloc[i]['BASINSLOPE']
        chslope = max(chslope, 0.0001)
writechanel(pronam,max(catinfo.iloc[i]['BKFWIDTH'],1),max(catinfo.iloc[i]['BKFDEPTH'],1),
chslope,ochn,catinfo.iloc[i]['MEANELEV'],catinfo.iloc[i]['FLOODP_N'],catinfo.iloc[i]['CH_N'],iscalmanningn)
if catinfo.iloc[i]['ISOBS'] >= 0 :
Guage = '1'
else:
Guage = '0'
orvh.write(" "+Strcat+tab+'sub'+Strcat+tab+StrDid+tab+pronam+tab+strRlen+tab+Guage+"\n")
orvh.write(":EndSubBasins"+"\n")
orvh.write("\n")
##########################################
orvh.write(":HRUs"+"\n")
orvh.write(" :Attributes AREA ELEVATION LATITUDE LONGITUDE BASIN_ID LAND_USE_CLASS VEG_CLASS SOIL_PROFILE AQUIFER_PROFILE TERRAIN_CLASS SLOPE ASPECT"+"\n")
orvh.write(" :Units km2 m deg deg none none none none none none deg deg"+"\n")
maxcatid = max(catinfo['SUBID'].values)
for i in range(0,len(catinfo.index)):
hruid = int(catinfo.iloc[i]['SUBID'])
catslope = catinfo.iloc[i]['BASINSLOPE']
if catinfo.iloc[i]['ISLAKE'] > 0:
if float(catinfo.iloc[i]['AREA2'])/1000.00/1000.00 <= float(catinfo.iloc[i]['LAKEAREA']):
catarea2 = float(catinfo.iloc[i]['AREA2'])*max((1-float(catinfo.iloc[i]['LAKERATIO'])),0.05)/1000.00/1000.00
else:
catarea2 = float(catinfo.iloc[i]['AREA2'])/1000.00/1000.00 - float(catinfo.iloc[i]['LAKEAREA'])
else:
catarea2 = float(catinfo.iloc[i]['AREA2'])/1000.00/1000.00
StrGid = str(hruid)+tab
catid = str(int(catinfo.iloc[i]['SUBID']))+tab
StrGidarea = str(catarea2)+tab
StrGidelev = str(catinfo.iloc[i]['MEANELEV'])+tab
lat = str(catinfo.iloc[i]['INSIDE_Y'])+tab
lon = str(catinfo.iloc[i]['INSIDE_X'])+tab
LAND_USE_CLASS = 'FOREST'+tab
VEG_CLASS = 'FOREST'+tab
SOIL_PROFILE ='SOILPROF'+tab
AQUIFER_PROFILE ='[NONE]'+tab
TERRAIN_CLASS ='[NONE]'+tab
SLOPE = str(catslope)+tab
ASPECT = '200'+tab
orvh.write(" "+StrGid+tab+StrGidarea+StrGidelev+lat+lon+catid+LAND_USE_CLASS+VEG_CLASS+SOIL_PROFILE+AQUIFER_PROFILE+TERRAIN_CLASS+SLOPE+ASPECT+"\n")
if catinfo.iloc[i]['ISLAKE'] > 0:
hruid = int(catinfo.iloc[i]['SUBID']) + int(maxcatid)
catslope = catinfo.iloc[i]['BASINSLOPE']
if float(catinfo.iloc[i]['AREA2'])/1000.00/1000.00 <= float(catinfo.iloc[i]['LAKEAREA']):
catarea2 = float(catinfo.iloc[i]['AREA2'])*min((float(catinfo.iloc[i]['LAKERATIO'])),0.95)/1000/1000
else:
catarea2 = float(catinfo.iloc[i]['LAKEAREA'])
StrGid = str(hruid)+tab
catid = str(int(catinfo.iloc[i]['SUBID']))+tab
StrGidarea = str(catarea2)+tab
StrGidelev = str(catinfo.iloc[i]['MEANELEV'])+tab
lat = str(catinfo.iloc[i]['INSIDE_Y'])+tab
lon = str(catinfo.iloc[i]['INSIDE_X'])+tab
LAND_USE_CLASS = 'WATER'+tab
VEG_CLASS = 'WATER'+tab
SOIL_PROFILE ='SOILPROF'+tab
AQUIFER_PROFILE ='[NONE]'+tab
TERRAIN_CLASS ='[NONE]'+tab
SLOPE = str(catslope)+tab
ASPECT = '200'+tab
orvh.write(" "+StrGid+tab+StrGidarea+StrGidelev+lat+lon+catid+LAND_USE_CLASS+VEG_CLASS+SOIL_PROFILE+AQUIFER_PROFILE+TERRAIN_CLASS+SLOPE+ASPECT+"\n")
orvh.write(":EndHRUs"+"\n")
orvh.write(":RedirectToFile TestLake.rvh")
orvh.close()
ochn.close()
return catinfo
##############################
#########################################################
def writechanel(chname,chwd,chdep,chslope,orchnl,elev,floodn,channeln,iscalmanningn):
    ### Following SWAT instructions, assume a trapezoidal channel whose side width is twice the channel depth. zch = 2
zch = 2
sidwd = zch * chdep ###river side width
tab = " "
    botwd = chwd - 2*sidwd ### river bottom width
if (botwd < 0):
botwd = 0.5*chwd
sidwd = 0.5*0.5*chwd
zch = (chwd - botwd)/2/chdep
if iscalmanningn >= 0:
mann = str(channeln)
else:
mann = str(0.035)
zfld = 4 + elev
zbot = elev - chdep
sidwdfp = 4/0.25
Channame = ":ChannelProfile"+tab+chname+tab
orchnl.write(Channame+"\n")
Chanslop = " :Bedslope"+tab+str(chslope)
orchnl.write(Chanslop+"\n")
orchnl.write(" :SurveyPoints"+"\n")
orchnl.write(" 0"+tab+str(zfld)+"\n")
orchnl.write(" "+str(sidwdfp)+tab+str(elev)+"\n")
orchnl.write(" "+str(sidwdfp + 2*chwd)+tab+str(elev)+"\n")
orchnl.write(" "+str(sidwdfp + 2*chwd + sidwd)+tab+str(zbot)+"\n")
orchnl.write(" "+str(sidwdfp + 2*chwd + sidwd + botwd)+tab+str(zbot)+"\n")
orchnl.write(" "+str(sidwdfp + 2*chwd + 2*sidwd + botwd)+tab+str(elev)+"\n")
orchnl.write(" "+str(sidwdfp + 4*chwd + 2*sidwd + botwd)+tab+str(elev)+"\n")
orchnl.write(" "+str(2*sidwdfp + 4*chwd + 2*sidwd + botwd)+tab+str(zfld)+"\n")
orchnl.write(" :EndSurveyPoints"+"\n")
orchnl.write(" :RoughnessZones"+"\n")
orchnl.write(" 0" + tab + str(floodn) +"\n")
orchnl.write(" " + str(sidwdfp + 2*chwd)+ tab + mann +"\n")
orchnl.write(" " + str(sidwdfp + 2*chwd + 2*sidwd + botwd)+ tab + str(floodn) +"\n")
orchnl.write(" :EndRoughnessZones"+"\n")
orchnl.write(":EndChannelProfile"+"\n")
orchnl.write("\n")
orchnl.write("##############new channel ##############################\n")
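# The cross-section geometry assumed by writechanel can be checked in
# isolation: with zch = 2 each bank occupies zch*depth of the top width, and
# the side slope is recomputed when the banks would overlap. A sketch:

```python
def trapezoid_bottom_width(chwd, chdep, zch=2):
    sidwd = zch * chdep                  # horizontal extent of each bank
    botwd = chwd - 2 * sidwd
    if botwd < 0:                        # too narrow for zch banks:
        botwd = 0.5 * chwd               # half the top width as bottom,
        sidwd = 0.25 * chwd              # a quarter per bank,
        zch = (chwd - botwd) / 2 / chdep # and a recomputed side slope
    return botwd, sidwd, zch

wide = trapezoid_bottom_width(10.0, 1.0)    # banks fit
narrow = trapezoid_bottom_width(2.0, 1.0)   # banks would overlap
```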
#########################################################################################################
def writelake(catinfo,outFolderraven):
f2 = open(outFolderraven+"TestLake.rvh","w")
tab = ' '
maxcatid = max(catinfo['SUBID'].values)
for i in range(0,len(catinfo.index)):
if catinfo.iloc[i]['HYLAKEID'] > 0:
lakeid = int(catinfo.iloc[i]['HYLAKEID'])
catid = catinfo.iloc[i]['SUBID']
            if float(catinfo.iloc[i]['AREA2'])/1000.00/1000.00 <= float(catinfo.iloc[i]['LAKEAREA']):
                A = float(catinfo.iloc[i]['AREA2'])*min((float(catinfo.iloc[i]['LAKERATIO'])),0.95)
            else:
                A = float(catinfo.iloc[i]['LAKEAREA'])*1000*1000
h0 = catinfo.iloc[i]['LAKEDEPTH']
WeirCoe = 0.6
hruid = int(catinfo.iloc[i]['SUBID']) + int(maxcatid)
Crewd = catinfo.iloc[i]['BKFWIDTH']
# if slakeinfo.iloc[0]['Wshd_area'] < 6000 and slakeinfo.iloc[0]['Wshd_area'] > 0:
######write lake information to file
f2.write(":Reservoir"+ " Lake_"+ str(int(lakeid))+ " ######## " +"\n")
f2.write(" :SubBasinID "+str(int(catid))+ "\n")
f2.write(" :HRUID "+str(int(hruid))+ "\n")
f2.write(" :Type RESROUTE_STANDARD "+"\n")
f2.write(" :WeirCoefficient "+str(WeirCoe)+ "\n")
f2.write(" :CrestWidth "+str(Crewd)+ "\n")
f2.write(" :MaxDepth "+str(h0)+ "\n")
f2.write(" :LakeArea "+str(A)+ "\n")
f2.write(":EndReservoir "+"\n")
f2.write("#############################################"+"\n")
f2.write("###New Lake starts"+"\n")
f2.close()
#################################################################################################################
def Writecatinfotodbf(OutputFoldersub,catinfo):
dbfile = OutputFoldersub+ 'finalcat.shp'
inFeatures = dbfile
fieldPrecision = 10
field_scale = 3
# Execute AddField twice for two new fields
arcpy.AddField_management(dbfile, "SubId", "FLOAT", fieldPrecision,field_scale,"", "", "NULLABLE","","")
arcpy.AddField_management(dbfile, | |
from gensim.models import doc2vec
from collections import namedtuple,Counter
import re
import string
import random
import numpy as np
from numpy import linalg as la
import tweepy
def auth_api():
key_tokens = {}
key_tokens['consumer_key'] = ''
key_tokens['consumer_secret'] = ''
key_tokens['access_token'] = ''
key_tokens['access_secret'] = ''
auth_twitter = tweepy.OAuthHandler(key_tokens['consumer_key'],key_tokens['consumer_secret'])
auth_twitter.set_access_token(key_tokens['access_token'],key_tokens['access_secret'])
api_twitter = tweepy.API(auth_twitter)
return api_twitter
#implement para2vec with gensim
def user2Uni():
D = {}
l1 = [];l2 = []
for line in open(''):
l = line.strip().split(' ')
if len(l) == 1:
l1.append(l[0])
else:
l2.append(l)
for i in range(len(l2)):
u = l1[i]
for j in l2[i]:
D[j] = []
for i in range(len(l2)):
u = l1[i]
for j in l2[i]:
D[j].append(l1[i])
return D
#euclid similarity
def euclidSimilar(inA,inB):
return 1.0/(1.0+la.norm(inA-inB))
#cosin similarity
def cosSimilar(inA,inB):
inA=np.mat(inA)
inB=np.mat(inB)
num=float(inA*inB.T)
denom=la.norm(inA)*la.norm(inB)
return 0.5+0.5*(num/denom)
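# cosSimilar rescales cosine similarity from [-1, 1] into [0, 1], so identical
# vectors score 1.0 and opposite vectors score 0.0; a compact standalone
# version:

```python
import numpy as np
from numpy import linalg as la

def cos_similar(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    # 0.5 + 0.5*cos maps identical vectors to 1.0 and opposite ones to 0.0
    return 0.5 + 0.5 * (a @ b) / (la.norm(a) * la.norm(b))

same = cos_similar([1, 0], [1, 0])
opposite = cos_similar([1, 0], [-1, 0])
```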
def content():
data_list = []
userList = []
c = 0
holder = ''
user_str = ''
for line in open(''):
if 'xlz1015ahmj0923' in line:
c += 1
#print(c)
#print(user_str)
uid = line[15:]
uid2 = uid[:-2]
holder = uid2
if user_str != '':
data_list.append(user_str)
user_str = ''
continue
if holder is not None:
if line == '\n':
continue
else:
userList.append(holder)
holder = None
if 'xlz1015ahmj0923' not in line:
result = re.sub(r"http\S+", "", line.strip())
result = re.sub(u'([\U00002600-\U000027BF])|([\U0001f300-\U0001f64F])|([\U0001f680-\U0001f6FF])',"",result)
result = re.sub('RT ','',result)
user_str += ' '
user_str += result
data_list.append(user_str)
#c += 1
#if c == 5000:
#break
# separate long string into sentences based on '.?!'
#sentenceEnders = re.compile('[.?!]')
#data_list = sentenceEnders.split(data)
#print(data_list)
# eliminate sentence less than 3 words and all the punctuation
LabelDoc = namedtuple('LabelDoc','words tags')
exclude = set(string.punctuation)
#exclude.remove('@') #?
#exclude.remove('#') #?
#exclude.remove('_')
#exclude.remove('-')
#exclude.remove('&')
all_docs = []
count = 0
for sen in data_list:
word_list = sen.split()
tag = ['SEN_'+str(count)]
count += 1
sen = ''.join(ch for ch in sen if ch not in exclude)
all_docs.append(LabelDoc(sen.split(),tag))
print(len(userList),len(data_list))
return all_docs, userList
def addRelation(all_docs,userList):
nameDict = {}
holder = ''
for line in open(''):
line = line.strip()
s = ''
if line != '' and line[-1] == ':':
for ch in line:
if ch == ' ':
holder = s
break
s += ch
nameDict[holder] = []
else:
pre = ''
ss = ''
for ch in line:
if pre == '|':
if ch == '|':
ss = ss[2:]
ss = ss[:-2]
nameDict[holder].append(ss)
break
ss += ch
else:
pre = ch
keyList = list(nameDict.keys())
for i in range(len(userList)):
if userList[i] in keyList:
l = nameDict[userList[i]]
all_docs[i].words.append('USR')
for name in l:
all_docs[i].words.append('USR')
all_docs[i].words.append(name)
all_docs[i].words.append('USR')
all_docs[i].words.append('USR')
return all_docs
def addLocation(all_docs,userList):
stateDict = {
'AK': 'Alaska',
'AL': 'Alabama',
'AR': 'Arkansas',
'AS': 'American Samoa',
'AZ': 'Arizona',
'CA': 'California',
'CO': 'Colorado',
'CT': 'Connecticut',
'DC': 'District of Columbia',
'DE': 'Delaware',
'FL': 'Florida',
'GA': 'Georgia',
'GU': 'Guam',
'HI': 'Hawaii',
'IA': 'Iowa',
'ID': 'Idaho',
'IL': 'Illinois',
'IN': 'Indiana',
'KS': 'Kansas',
'KY': 'Kentucky',
'LA': 'Louisiana',
'MA': 'Massachusetts',
'MD': 'Maryland',
'ME': 'Maine',
'MI': 'Michigan',
'MN': 'Minnesota',
'MO': 'Missouri',
'MP': 'Northern Mariana Islands',
'MS': 'Mississippi',
'MT': 'Montana',
'NA': 'National',
'NC': 'North Carolina',
'ND': 'North Dakota',
'NE': 'Nebraska',
'NH': 'New Hampshire',
'NJ': 'New Jersey',
'NM': 'New Mexico',
'NV': 'Nevada',
'NY': 'New York',
'OH': 'Ohio',
'OK': 'Oklahoma',
'OR': 'Oregon',
'PA': 'Pennsylvania',
'PR': 'Puerto Rico',
'RI': 'Rhode Island',
'SC': 'South Carolina',
'SD': 'South Dakota',
'TN': 'Tennessee',
'TX': 'Texas',
'UT': 'Utah',
'VA': 'Virginia',
'VI': 'Virgin Islands',
'VT': 'Vermont',
'WA': 'Washington',
'WI': 'Wisconsin',
'WV': 'West Virginia',
'WY': 'Wyoming',
'AB': 'Alberta',
'BC': 'British Columbia',
'MB': 'Manitoba',
'NB': 'New Brunswick',
'NL': 'Newfoundland and Labrador',
'NT': 'Northwest Territories',
'NS': 'Nova Scotia',
'NU': 'Nunavut',
'ON': 'Ontario',
'PE': 'Prince Edward Island',
'QC': 'Quebec',
'SK': 'Saskatchewan',
'YT': 'Yukon'
}
state2timezone = { 'AK': 'Alaska', 'AL': 'Central', 'AR': 'Central', 'AS': 'Samoa', 'AZ': 'Mountain', 'CA': 'Pacific', 'CO': 'Mountain', 'CT': 'Eastern', 'DC': 'Eastern', 'DE': 'Eastern', 'FL': 'Eastern', 'GA': 'Eastern', 'GU': 'Pacific', 'HI': 'Hawaii', 'IA': 'Central', 'ID': 'Mountain', 'IL': 'Central', 'IN': 'Eastern', 'KS': 'Central', 'KY': 'Eastern', 'LA': 'Central', 'MA': 'Eastern', 'MD': 'Eastern', 'ME': 'Eastern', 'MI': 'Eastern', 'MN': 'Central', 'MO': 'Central', 'MP': 'Pacific', 'MS': 'Central', 'MT': 'Mountain', 'NC': 'Eastern', 'ND': 'Central', 'NE': 'Central', 'NH': 'Eastern', 'NJ': 'Eastern', 'NM': 'Mountain', 'NV': 'Pacific', 'NY': 'Eastern', 'OH': 'Eastern', 'OK': 'Central', 'OR': 'Pacific', 'PA': 'Eastern', 'PR': 'America', 'RI': 'Eastern', 'SC': 'Eastern', 'SD': 'Central', 'TN': 'Central', 'TX': 'Central', 'UT': 'Mountain', 'VA': 'Eastern', 'VI': 'America', 'VT': 'Eastern', 'WA': 'Pacific', 'WI': 'Central', 'WV': 'Eastern', 'WY': 'Mountain'}
state2region = { 'AK': 'west', 'AL': 'South', 'AR': 'South', 'AS': 'Samoa', 'AZ': 'southwest', 'CA': 'west', 'CO': 'west', 'CT': 'New-England', 'DC': 'Mid-Atlantic', 'DE': 'Mid-Atlantic', 'FL': 'South', 'GA': 'South', 'GU': 'Pacific', 'HI': 'west', 'IA': 'Midwest', 'ID': 'west', 'IL': 'Midwest', 'IN': 'Midwest', 'KS': 'Midwest', 'KY': 'South', 'LA': 'South', 'MA': 'New-England', 'MD': 'Mid-Atlantic', 'ME': 'New-England', 'MI': 'Midwest', 'MN': 'Midwest', 'MO': 'Midwest', 'MP': 'Pacific', 'MS': 'South', 'MT': 'west', 'NC': 'South', 'ND': 'Midwest', 'NE': 'Midwest', 'NH': 'New-England', 'NJ': 'Mid-Atlantic', 'NM': 'southwest', 'NV': 'west', 'NY': 'Mid-Atlantic', 'OH': 'Midwest', 'OK': 'southwest', 'OR': 'west', 'PA': 'Mid-Atlantic', 'PR': 'America', 'RI': 'New-England', 'SC': 'South', 'SD': 'Midwest', 'TN': 'South', 'TX': 'southwest', 'UT': 'west', 'VA': 'South', 'VI': 'America', 'VT': 'New-England', 'WA': 'west', 'WI': 'Midwest', 'WV': 'South', 'WY': 'west'}
nameDict = {}
holder = ''
for line in open(''):
line = line.strip()
s = ''
if line != '' and line[-1] == ':':
for ch in line:
if ch == ' ':
holder = s
break
s += ch
nameDict[holder] = []
else:
l = line.strip().split(' | ')
if len(l) == 3:
if ',' in l[2] and ':' not in l[2][:-1]:
l[2] = l[2].strip()
for k,v in stateDict.items():
if k in l[2] or v in l[2]:
if v in l[2]:
old = v
new = k
tmp = l[2]
l[2] = tmp.replace(old,new)
nameDict[holder].append(l[2])
keyList = list(nameDict.keys())
for i in range(len(userList)):
if userList[i] in keyList:
l = nameDict[userList[i]]
all_docs[i].words.append('LOC')
for name in l:
l2 = name.strip().split(', ')
all_docs[i].words.append('LOC')
#for k,v in state2region.items():
#if k in name:
#all_docs[i].words.append(v)
for loc in range(len(l2)):
all_docs[i].words.append(l2[loc])
all_docs[i].words.append('LOC')
all_docs[i].words.append('LOC')
return all_docs
if __name__ == '__main__':
api_twitter = auth_api()
all_docs, userList = content()
print(userList.index("19730865"),userList.index("14724725"),userList.index("77235516"),userList.index("44988185"))
newall_docs = addRelation(all_docs,userList)
nnewall_docs = addLocation(newall_docs,userList)
model = doc2vec.Doc2Vec(size=75, window=1, alpha=0.025,min_alpha=0.025, min_count=5)
model.build_vocab(nnewall_docs)
for epoch in range(10):
model.train(nnewall_docs)
model.alpha -= 0.002
model.min_alpha = model.alpha
model.save('model.doc2vec')
precision = []
recall = []
MMR = []
######################################################################
friendDict = {}
hold = ''
for line in open(''):
if ':' in line:
l = line.strip()
hold = l[:-1]
else:
friendDict[hold] = line.strip().split(' ')
print(len(friendDict))
reDict = {}
#indexList = [875, 681, 805, 922, 115, 320, 907, 317, 853, 773, 917, 894, 935, 1038, 57, 332, 96, 146, 258, 299, 841, 579, 453, 494, 357, 438, 165, 687, 848, 851, 602, 364, 1003, 562, 551, 556, 874, 88, 175, 464, 336, 231, 767, 39, 114, 550, 830, 187, 654, 852, 912, 281, 552, 203, 20, 47, 206, 293, 783, 385, 95, 680]
#indexList = [875, 681, 922, 115, 320, 907, 317, 853, 773, 917, 894, 1038, 57, 332, 146, 258, 299, 841, 579, 453, 494, 357, 438, 165, 687, 848, 602, 364, 1003, 562, 551, 556, 874, 88, 175, 464, 336, 231, 767, 39, 114, 550, 830, 187, 654, 852, 912, 281, 552, 203, 20, 47, 206, 293, 783, 385, 95, 680]
indexList = [681, 805, 922, 115, 320, 907, 317, 853, 773, 917, 894, 935, 1038, 57, 332, 96, 146, 258, 299, 841, 579, 453, 494, 357, 438, 165, 687, 848, 851, 602, 364, 1003, 562, 551, 556, 874, 88, 175, 464, 336, 231, 767, 39, 114, 550, 830, 187, 654, 912, 281, 552, 203, 20, 47, 206, 293, 783, 680]
for i in indexList:
mmr = 0
doc_id = i
count = 0
c = 0
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count)
print('TARGET' , nnewall_docs[doc_id].words)
candident = []
resList = []
for j in sims:
if count == 40:
break
#pid = int(string.replace(i[0], "SEN_", ""))
#print(i[0],": ", all_docs[pid].words)
num = j[0]
numeric = int(num[4:])
print(j[0],userList[numeric])
candident.append(userList[numeric])
count += 1
trueList = friendDict[userList[i]]
print(userList[i])
order = 0
for k in candident:
order += 1
if k in trueList:
c += 1
print(count)
mmr += float(1)/order
tp = float(c)/40
re = float(c)/len(trueList)
precision.append(tp)
recall.append(re)
if c != 0:
MMR.append(mmr/c)
print(mmr,c)
print(mmr/c)
print(tp)
print(re)
reDict[userList[i]]
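The evaluation loop above computes precision@40, recall, and a reciprocal-rank average it calls MMR. A minimal, self-contained sketch of the same bookkeeping on made-up user IDs and a made-up friend set (all names here are hypothetical):

```python
# Sketch of the ranking metrics computed above: precision@k, recall, and
# the average reciprocal rank over the hits (what the script calls MMR).
def rank_metrics(candidates, true_set, k):
    hits = 0
    rr_sum = 0.0
    for rank, cand in enumerate(candidates[:k], start=1):
        if cand in true_set:
            hits += 1
            rr_sum += 1.0 / rank
    precision = hits / float(k)
    recall = hits / float(len(true_set))
    mrr = rr_sum / hits if hits else 0.0
    return precision, recall, mrr

# Hypothetical data: 5 recommended users, 2 of them are real friends.
cands = ['u1', 'u2', 'u3', 'u4', 'u5']
truth = {'u2', 'u5'}
p, r, m = rank_metrics(cands, truth, k=5)
# hits at ranks 2 and 5 -> precision 0.4, recall 1.0, MMR (1/2 + 1/5) / 2
```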
# Repository: dtobi59/Lixur-Protocol
'''
Lixur uses a new consensus mechanism called Enchanted Consensus, a modified version of the Avalanche consensus in which slush querying is weighted.
The weighting makes nodes with higher influence more likely to be queried for their opinion.
The weight is designed so that an attacker would need most of the following to influence the consensus to its benefit:
1. Be an old account, preferably one of the oldest accounts on the network
2. Hold a significant percentage of the total supply
3. Have validated/made a large number of transactions (i.e. have done a considerable amount of work)
4. Have validated transactions whose total cumulative weight is a significant fraction of the total weight of all transactions in the network
5. Control a significant amount of the computational power of the network
This is practically impossible to achieve: far too much money, work, power and time would be needed, and even then it would only buy a chance of influencing the consensus.
It is therefore nearly impossible for a byzantine node to influence the consensus.
With Enchanted Consensus, a prerequisite for a byzantine node to maliciously influence the consensus is that it would have to be god on the network.
'''
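The influence-weighted slush querying described above can be sketched as influence-proportional peer sampling. This is an illustrative sketch only, not the Lixur implementation; the influence scores, sample size, and seed are made-up values.

```python
import numpy as np

# Illustrative sketch of influence-weighted querying: each round samples
# k peers with probability proportional to influence, so high-influence
# nodes are asked for their opinion far more often.
rng = np.random.default_rng(0)

influence = np.array([1.0, 1.0, 1.0, 1.0, 50.0])  # made-up influence scores
weights = influence / influence.sum()

counts = np.zeros(influence.size)
for _ in range(1000):
    queried = rng.choice(influence.size, size=2, replace=False, p=weights)
    counts[queried] += 1

# The high-influence node (index 4) is queried in nearly every round.
```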
from numpy.random import default_rng
from numpy import abs, array, zeros
import time
from typing import Tuple, TextIO
import argparse
import json
import numpy
import sys
# TODO: Add a way to specify the number of active nodes in the network
# TODO: Add a way to specify the number of byzantines in the network
# TODO: Add a way to associate nodes with their influence scores
# TODO: Add a way to properly add the weights of each node to the slush part
# TODO: Fix the weight function, the node influence part, and the __main__ section
# Turn this into a single function and also run it through a JIT compiler
'''
import time
time_of_creation = time.time() # The time of creation of a wallet should be added to the graph, and can be referenced whenever.
def node_influence():
def account_age():
if time.time() - time_of_creation < 2.628e+6: # If account is younger than a month
age = 0
elif time.time() - time_of_creation >= 2.628e+6 and time.time() - time_of_creation < 1.577e+7: # If account is a month to six months old
age = 1
elif time.time() - time_of_creation >= 1.577e+7 and time.time() - time_of_creation < 3.154e+7: # If account is six months to a year old
age = 2
elif time.time() - time_of_creation >= 3.154e+7 and time.time() - time_of_creation < 9.461e+7: # If account is a year to three years old
age = 3
elif time.time() - time_of_creation >= 9.461e+7 and time.time() - time_of_creation < 1.577e+8: # If account is three years to five years old
age = 4
elif time.time() - time_of_creation >= 1.577e+8 and time.time() - time_of_creation < 3.154e+8: # If account is five years to ten years old
age = 5
elif time.time() - time_of_creation >= 3.154e+8 and time.time() - time_of_creation < 1.577e+9: # If account is ten years to fifty years old
age = 6
elif time.time() - time_of_creation >= 1.577e+9 and time.time() - time_of_creation < 3.154e+9: # If account is fifty years to a hundred years old
age = 7
elif time.time() - time_of_creation >= 3.154e+9 and time.time() - time_of_creation < 3.154e+10: # If account is a hundred years to a thousand years old
age = 8
elif time.time() - time_of_creation >= 3.154e+10 and time.time() - time_of_creation < 2.8382e+11: # If account is a thousand years to nine thousand years old
age = 9
elif time.time() - time_of_creation >= 2.8382e+11: # If account is over 9000!!!
age = 10
return age
def account_activity(tx_count):
if tx_count < 100: # Fewer than 100 transactions
activity = 0
elif tx_count >= 100 and tx_count < 1000: # 100 to 1,000 transactions
activity = 1
elif tx_count >= 1000 and tx_count < 10000: # 1,000 to 10,000 transactions
activity = 2
elif tx_count >= 10000 and tx_count < 100000: # 10,000 to 100,000 transactions
activity = 3
elif tx_count >= 100000 and tx_count < 1000000: # 100,000 to 1,000,000 transactions
activity = 4
elif tx_count >= 1000000 and tx_count < 10000000: # 1 to 10 million transactions
activity = 5
elif tx_count >= 10000000 and tx_count < 100000000: # 10 to 100 million transactions
activity = 6
elif tx_count >= 100000000 and tx_count < 1000000000:
activity = 7
elif tx_count >= 1000000000 and tx_count < 10000000000:
activity = 8
elif tx_count >= 10000000000 and tx_count < 100000000000:
activity = 9
elif tx_count >= 100000000000:
activity = 10
return activity
def compute_power():
import time
import platform
import cpuinfo # install CPU Info package (pip install py-cpuinfo)
start_benchmark = 10 # change this if you like (sample: 1000, 5000, etc)
start_benchmark = int(start_benchmark)
repeat_benchmark = 1 # attempts, change this if you like (sample: 3, 5, etc)
repeat_benchmark = int(repeat_benchmark)
average_benchmark = 0
for a in range(0, repeat_benchmark):
start = time.time()
for i in range(0, start_benchmark):
for x in range(1, 1000):
3.141592 * 2 ** x
for x in range(1, 10000):
float(x) / 3.141592
for x in range(1, 10000):
float(3.141592) / x
end = time.time()
duration = (end - start)
duration = round(duration, 3)
average_benchmark += duration
average_benchmark = round(average_benchmark / repeat_benchmark, 3)
score = (0.5 * start_benchmark) - average_benchmark
return score
def balance_of_node(balance):
fin_influence = (balance / 69420000) * 100
return fin_influence
def cumulative_weight(cumulative_weight):
if cumulative_weight < 10:
weight = 0
elif cumulative_weight >= 10 and cumulative_weight < 100:
weight = 1
elif cumulative_weight >= 100 and cumulative_weight < 1000:
weight = 2
elif cumulative_weight >= 1000 and cumulative_weight < 10000:
weight = 3
elif cumulative_weight >= 10000 and cumulative_weight < 100000:
weight = 4
elif cumulative_weight >= 100000 and cumulative_weight < 1000000:
weight = 5
elif cumulative_weight >= 1000000 and cumulative_weight < 10000000:
weight = 6
elif cumulative_weight >= 10000000 and cumulative_weight < 100000000:
weight = 7
elif cumulative_weight >= 100000000 and cumulative_weight < 1000000000:
weight = 8
elif cumulative_weight >= 1000000000 and cumulative_weight < 10000000000:
weight = 9
elif cumulative_weight >= 10000000000:
weight = 10
return weight
def calculate_score():
x = ({'age': account_age(), 'transactions': account_activity(0), 'computing_power': compute_power(),
'financial_influence': balance_of_node(0), 'cumulative_weight': cumulative_weight(0)})
influence = (x['age'] + x['transactions'] + x['computing_power'] + x['financial_influence'] + x['cumulative_weight']) * 2 / 5
print(f'Your node influence score is: {round(influence, 2)}/100')
calculate_score()
node_influence = node_influence()
'''
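As a quick sanity check of the commented-out `calculate_score` formula above, here is its arithmetic on made-up component scores (all values hypothetical):

```python
# Made-up component scores plugged into the calculate_score formula:
# influence = (age + transactions + computing_power
#              + financial_influence + cumulative_weight) * 2 / 5
components = {'age': 6, 'transactions': 4, 'computing_power': 8.0,
              'financial_influence': 12.5, 'cumulative_weight': 5}
influence = sum(components.values()) * 2 / 5
# (6 + 4 + 8 + 12.5 + 5) * 2 / 5 = 35.5 * 0.4 = 14.2
```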
# Essentially you would need to be able to have nodes query each other for their opinion
# You would need to have slush work in a real setting, meaning these nodes would have to return an opinion
# Other nodes will assimilate
# The likelihood of a node getting queried for its opinion is proportional to its influence
# And the byzantine's opinion should get weeded out each round.
def avalanche(population_weights: array, max_round: int, max_sample: int, min_sample: int, quorum_rate: float, byzantine_rate: float) -> array:
"""
Simulates for the provided number of `max_round`, the given
`population_weights.size` and `max_sample` an **Avalanche** process,
where *all* sample sizes in the [`min_sample`,`max_sample`]
range are simulated for! Then, a matrix for *each* round &&
sample size is returned.
The `byzantine_rate` is the **percentage** of population, which is
malicious towards the network attempting to flip peer nodes
w/o actually following the consensus rules.
"""
m = zeros(shape=(max_round, max_sample))
for x in range(min_sample, max_sample + 1):
p = population(population_weights.size)
c = counters(population_weights.size)
for z in range(max_round):
p += errors(population_weights, byzantine_rate)
p %= 2
m[z, x - 1] = numpy.sum(population_weights * p)
p = snowflake(*slush(p, population_weights, x, quorum_rate), c)
return m
def population(population_size: int) -> array:
"""
Returns a *uniformly* sampled population for given `population_size`
where a value of `0` represents *yellow* and a value of `1`
*red* cards.
"""
return prg.integers(0, 2, size=population_size)
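A minimal usage sketch of the population initializer (the module-level `prg` generator is defined outside this excerpt, so the sketch seeds a local stand-in):

```python
import numpy as np

# Sketch of the uniform population initializer: each node starts with a
# uniformly random color, where 0 = yellow and 1 = red.
prg = np.random.default_rng(42)  # stand-in for the module-level `prg`
pop = prg.integers(0, 2, size=10)
```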
def counters(population_size:
import os
import json
import pickle
import h5py
import numpy as np
import glob
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import StandardScaler
import seaborn as sns
from matplotlib.colors import ListedColormap
import warnings
def open_json(json_fpath):
with open(json_fpath) as json_file:
return json.load(json_file)
def zscore_(data, baseline_samples):
scaler = StandardScaler()
# note: have to reshape/transpose so that samples is in the first dimension for scikit-learn
if len(data.shape) == 1:
scaler.fit(data[baseline_samples].reshape(-1, 1))
scaled_data = scaler.transform(data.reshape(-1, 1))
else:
scaler.fit(data[..., baseline_samples].T)
scaled_data = scaler.transform(data.T).T
return scaled_data
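The baseline z-scoring that `zscore_` performs via scikit-learn can be checked against a plain-numpy equivalent; a self-contained sketch on synthetic data (shapes and the baseline window are made up):

```python
import numpy as np

# Sketch: z-score a (cells x samples) array against a baseline window,
# equivalent to fitting StandardScaler on the baseline samples only
# (subtract the baseline mean, divide by the baseline std with ddof=0).
rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=(3, 100))  # 3 cells, 100 samples
baseline = np.arange(20)  # first 20 samples form the baseline epoch

mu = data[:, baseline].mean(axis=1, keepdims=True)
sd = data[:, baseline].std(axis=1, keepdims=True)
z = (data - mu) / sd

# Over the baseline window each cell now has mean ~0 and std ~1.
```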
def check_exist_dir(path):
if not os.path.exists(path):
os.mkdir(path)
return path
##### LOADING FUNCTIONS
def load_h5(fpath):
data_h5file = h5py.File(fpath, 'r')
# load a snippet of data and get rid of unneeded singleton dimensions
data_snip = np.squeeze(np.array(data_h5file['imaging'], dtype=int))
""" typically it's good to have time as the last dimension because one doesn't usually iterate through time, so we'll
reorganize the data dimension order"""
return data_snip.transpose(1, 2, 0)
def load_signals(fpath):
### load and prepare time-series data
glob_signal_files = glob.glob(fpath)
if len(glob_signal_files) == 0:
print('Warning: no signal files detected; please check your fname and fdir')
return None
_, fext = os.path.splitext(glob_signal_files[0])
if 'npy' in fext:
signals = np.squeeze(np.load(glob_signal_files[0]))
elif 'csv' in fext:
df_signals = pd.read_csv(glob_signal_files[0], header=None)
if 'Time(s)/Cell Status' in df_signals.values:
# for inscopix data, drop first two rows and first column, and transpose
signals = np.transpose(df_signals.drop([0,1], axis=0).iloc[:, 1:].values.astype(np.float32))
else:
signals = df_signals.values
return signals
##### PLOTTING FUNCTIONS
def plot_single_img(to_plot, frame_num):
plt.figure(figsize=(7, 7))
plt.imshow(to_plot, cmap='gray')
plt.title('Frame {}'.format(frame_num), fontsize=20)
plt.axis('off')
def subplot_heatmap(axs, title, image, cmap=None, clims=None, zoom_window=None, extent_=None):
"""
Takes in a numpy 2d array and a subplot location, and plots a heatmap at the subplot location without axes
Parameters
----------
axs : matplotlib AxesSubplot object
Specific subplot from the ax output of pyplot.subplots()
title : string
Title name of the plot
image : numpy 2d array
cmap : string or colormap
Colormap desired; default is an "RdBu_r" seaborn palette
Optional Parameters
-------------------
clims : list
List with entries: [minimum_colorbar_limit, maximum_colorbar_limit] . This is for setting the color ranges
for the heatmap
zoom_window : list
List with entries: [xmin, xmax, ymin, ymax] . This is for zooming into the specific window dictated by the
x min and max, and y min and max locations
Returns
-------
im : matplotlib.image.AxesImage
imshow AxesImage object. Used to reference the dataset for a colorbar (eg. fig.colorbar(im) )
"""
if len(image.shape) == 1:
warnings.warn("Your data only has one trial!")
image = image[np.newaxis, ...]
if cmap is None:
cmap = ListedColormap(sns.color_palette("RdBu_r", 100))
im = axs.imshow(image, cmap, extent=extent_)
axs.set_title(title, fontsize=15)
if zoom_window is not None:
axs.axis(zoom_window)
axs.invert_yaxis()
if clims is not None:
im.set_clim(vmin=clims[0], vmax=clims[1])
axs.set_aspect('auto')
# axs.axis('off')
return im # for colorbar
def dict_key_len(dict_, key):
return len(dict_[key])
###### DATA PROCESSING FUNCTIONS
def make_tile(start, end, num_rep):
"""
Makes indices for tiles.
Parameters
----------
start : int
Start sample relative to trial onset.
end : int
End sample relative to trial onset.
num_rep : int
Number of times to repeat the sample vector in the y axis
Returns
-------
tile_array : ndarray
Array with shape (num_rep, samples), where samples is number of samples between
the items in start_end input
"""
samp_vec = np.arange(start, end + 1) # grab all samples between start/end
tile_array = np.tile(samp_vec, (num_rep, 1))
return tile_array
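For example, a [-2, 2] trial window tiled for three trials (the two-line body of `make_tile` is inlined so the snippet runs standalone):

```python
import numpy as np

# Inlined make_tile: repeat the trial-window sample vector once per trial.
samp_vec = np.arange(-2, 2 + 1)          # [-2, -1, 0, 1, 2]
tile_array = np.tile(samp_vec, (3, 1))   # one row per trial -> shape (3, 5)
```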
def remove_trials_out_of_bounds(data_end, these_frame_events, start_samp, end_samp):
"""
:param data_end: int
total number of samples in recording
:param these_frame_events: list
entries of list are samples/frames for the event start
:param start_samp: int
trial window start index relative to the beginning of the trial (eg. -5 if you want the trial start to be 5
samples before event onset. In other words, if the sampling rate is 5 hz, -5 would be
1 second before trial onset.)
:param end_samp: int
trial window end index relative to the beginning of the trial (similar to start_samp)
:return: ndarray of the events whose full trial window lies within the recording bounds
"""
after_start_bool = (these_frame_events + start_samp) >= 0  # trial window must start within the recording
before_end_bool = (these_frame_events + end_samp) < data_end
keep_events = []
for idx, item in enumerate(after_start_bool*before_end_bool):
if item:
keep_events.append(these_frame_events[idx])
return np.array(keep_events)
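A toy illustration of the intended bounds check, inlined so it runs standalone: with a 100-sample recording and a [-5, +5] trial window, events too close to either edge are dropped (all numbers here are made up):

```python
import numpy as np

# Keep only events whose full trial window fits inside the recording.
data_end = 100                      # total samples in the recording
events = np.array([2, 50, 98])      # hypothetical event-onset samples
start_samp, end_samp = -5, 5        # trial window relative to onset

keep = events[((events + start_samp) >= 0) & ((events + end_samp) < data_end)]
# event 2 starts before sample 0, event 98 ends past the recording -> only 50 survives
```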
def extract_trial_data(data, tvec, start_end_samp, frame_events, conditions, baseline_start_end_samp=None, save_dir=None):
"""
Takes a 3d video (across a whole session) and cuts out trials based on event times.
Also groups trial data by condition
Parameters
----------
save_dir : string
full path of folder where extracted trial data pickle file will be saved
data : numpy 3d array
3d video data where dimensions are (y_pixel, x_pixel, samples)
tvec : numpy 1d vector
vector containing times in seconds for each sample within the trial. Just saved in pickle dict for downstream analysis
start_end_samp : 2-element list
Element 0: Number of samples before the event onset for trial start
Element 1: Number of samples after the event onset for trial end
frame_events : dictionary of np 1d arrays (vectors)
Dictionary where keys are the conditions in the session and values are numpy 1d vectors that contain
event occurrences as samples
conditions : list of strings
Each entry in the list is a condition to extract trials from; must correspond to keys in frame_events
Optional Parameters
-------------------
baseline_start_end_samp : 2-element list
Element 0: Number of samples relative to the event onset for baseline epoch start
Element 1: Number of samples relative to the event onset for baseline epoch end
NOTE: including this variable will generate a z-scored data variable
Returns
-------
data_dict : dictionary
1st level of dict keys: individual conditions
2nd level of keys :
data : numpy 4d array with dimensions (trials,y,x,samples)
num_samples : number of samples (time) in a trial
num_trials : total number of trials in the condition
"""
# create sample vector for baseline epoch if argument exists (for zscoring)
if baseline_start_end_samp is not None:
baseline_svec = (np.arange(baseline_start_end_samp[0], baseline_start_end_samp[1] + 1, 1) -
baseline_start_end_samp[0]).astype('int')
data_dict = {}
for idx, condition in enumerate(conditions):
data_dict[condition] = {}
# get rid of trials that are outside of the session bounds with respect to time
data_end_sample = data.shape[-1]
cond_frame_events = remove_trials_out_of_bounds(data_end_sample, frame_events[condition], start_end_samp[0], start_end_samp[1])
# convert window time bounds to samples and make a trial sample vector
# make an array where the sample indices are repeated in the y axis for n number of trials
num_trials_cond = len(cond_frame_events)
if num_trials_cond == 1:
svec_tile = np.arange(start_end_samp[0], start_end_samp[1] + 1) # just make a 1D vector for svec
num_trial_samps = len(svec_tile)
else:
svec_tile = make_tile(start_end_samp[0], start_end_samp[1], num_trials_cond)
num_trial_samps = svec_tile.shape[1]
if num_trials_cond > 0:
# now make a repeated matrix of each trial's ttl on sample in the x dimension
ttl_repmat = np.repeat(cond_frame_events[:, np.newaxis], num_trial_samps, axis=1).astype('int')
# calculate actual trial sample indices by adding the TTL onset repeated matrix and the trial window template
trial_sample_mat = np.round(ttl_repmat + svec_tile).astype('int')
# extract frames in trials and reshape the data to be: y,x,trials,samples
# basically unpacking the last 2 dimensions
reshape_dim = data.shape[:-1] + (svec_tile.shape)
extracted_trial_dat = data[..., np.ndarray.flatten(trial_sample_mat)].reshape(reshape_dim)
# reorder dimensions and put trial as first dim; resulting dims will be [trial, roi, samples]
# or [trial, y, x, samples]
if num_trials_cond > 1:
if len(extracted_trial_dat.shape) == 3:
data_dict[condition]['data'] = extracted_trial_dat.transpose((1, 0, 2))
elif len(extracted_trial_dat.shape) == 4:
data_dict[condition]['data'] = extracted_trial_dat.transpose((2, 0, 1, 3))
else: # dimension order is correct since there's no reshaping done
data_dict[condition]['data'] = np.expand_dims(extracted_trial_dat, axis=0)
# save normalized data
if baseline_start_end_samp is not None:
# input data dimensions should be (trials, ROI, samples)
data_dict[condition]['zdata'] = np.squeeze(np.apply_along_axis(zscore_, -1,
data_dict[condition]['data'],
baseline_svec), axis=-1)
# also save trial-averaged (if there are multiple trials) and z-scored data
if num_trials_cond > 1: # if more than one trial
data_dict[condition]['trial_avg_data'] = np.nanmean(data_dict[condition]['data'], axis=0)
if baseline_start_end_samp is not None:
# this can take a while to compute
data_dict[condition]['ztrial_avg_data'] = np.squeeze(np.nanmean(data_dict[condition]['zdata'], axis=0))
else:
# if there's only one trial, just zscore the raw data
if baseline_start_end_samp is not None:
data_dict[condition]['ztrial_avg_data'] = np.squeeze(np.apply_along_axis(zscore_, -1,
data_dict[condition]['data'],
baseline_svec))
# save some meta data
data_dict[condition]['num_samples'] = num_trial_samps
data_dict[condition]['num_trials'] = num_trials_cond
data_dict[condition]['tvec'] = tvec
if save_dir:
with open(os.path.join(save_dir, 'event_data_dict.pkl'), 'wb') as save_handle:
pickle.dump(data_dict, save_handle)
return data_dict
def get_tvec_sample(tvec, time):
return np.argmin(abs(tvec - time))
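For instance, with a time vector from 0 to 1 s in 0.1 s steps, the sample nearest 0.33 s is index 3:

```python
import numpy as np

# Nearest-sample lookup: argmin of the absolute time difference.
tvec = np.linspace(0.0, 1.0, 11)   # 0.0, 0.1, ..., 1.0
sample = int(np.argmin(np.abs(tvec - 0.33)))
# tvec[3] == 0.3 is the closest sample to 0.33 s
```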
def time_to_samples(trial_tvec, analysis_window):
"""
Parameters
----------
trial_tvec : np 1d vector
Vector of times in seconds
# Copyright Contributors to the Pyro project.
# SPDX-License-Identifier: Apache-2.0
# The implementation follows the design in PyTorch: torch.distributions.distribution.py
#
# Copyright (c) 2016- Facebook, Inc (<NAME>)
# Copyright (c) 2014- Facebook, Inc (<NAME>)
# Copyright (c) 2011-2014 Idiap Research Institute (<NAME>)
# Copyright (c) 2012-2014 Deepmind Technologies (<NAME>)
# Copyright (c) 2011-2012 NEC Laboratories America (<NAME>)
# Copyright (c) 2011-2013 NYU (<NAME>)
# Copyright (c) 2006-2010 NEC Laboratories America (<NAME>, <NAME>, <NAME>, <NAME>)
# Copyright (c) 2006 Idiap Research Institute (<NAME>)
# Copyright (c) 2001-2004 Idiap Research Institute (<NAME>, <NAME>, <NAME>)
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from collections import OrderedDict
from contextlib import contextmanager
import functools
import inspect
import warnings
import numpy as np
from jax import lax, tree_util
import jax.numpy as jnp
from numpyro.distributions.transforms import ComposeTransform, Transform
from numpyro.distributions.util import lazy_property, promote_shapes, sum_rightmost, validate_sample
from numpyro.util import not_jax_tracer
from . import constraints
_VALIDATION_ENABLED = False
def enable_validation(is_validate=True):
"""
Enable or disable validation checks in NumPyro. Validation checks provide useful warnings and
errors, e.g. NaN checks, validating distribution arguments and support values, etc. which is
useful for debugging.
.. note:: This utility does not take effect under JAX's JIT compilation or vectorized
transformation :func:`jax.vmap`.
:param bool is_validate: whether to enable validation checks.
"""
global _VALIDATION_ENABLED
_VALIDATION_ENABLED = is_validate
Distribution.set_default_validate_args(is_validate)
@contextmanager
def validation_enabled(is_validate=True):
"""
Context manager that is useful when temporarily enabling/disabling validation checks.
:param bool is_validate: whether to enable validation checks.
"""
distribution_validation_status = _VALIDATION_ENABLED
try:
enable_validation(is_validate)
yield
finally:
enable_validation(distribution_validation_status)
COERCIONS = []
class DistributionMeta(type):
def __call__(cls, *args, **kwargs):
for coerce_ in COERCIONS:
result = coerce_(cls, args, kwargs)
if result is not None:
return result
return super().__call__(*args, **kwargs)
@property
def __wrapped__(cls):
return functools.partial(cls.__init__, None)
class Distribution(metaclass=DistributionMeta):
"""
Base class for probability distributions in NumPyro. The design largely
follows from :mod:`torch.distributions`.
:param batch_shape: The batch shape for the distribution. This designates
independent (possibly non-identical) dimensions of a sample from the
distribution. This is fixed for a distribution instance and is inferred
from the shape of the distribution parameters.
:param event_shape: The event shape for the distribution. This designates
the dependent dimensions of a sample from the distribution. These are
collapsed when we evaluate the log probability density of a batch of
samples using `.log_prob`.
:param validate_args: Whether to enable validation of distribution
parameters and arguments to `.log_prob` method.
As an example:
.. doctest::
>>> import jax.numpy as jnp
>>> import numpyro.distributions as dist
>>> d = dist.Dirichlet(jnp.ones((2, 3, 4)))
>>> d.batch_shape
(2, 3)
>>> d.event_shape
(4,)
"""
arg_constraints = {}
support = None
has_enumerate_support = False
is_discrete = False
reparametrized_params = []
_validate_args = False
# register Distribution as a pytree
# ref: https://github.com/google/jax/issues/2916
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
tree_util.register_pytree_node(cls,
cls.tree_flatten,
cls.tree_unflatten)
def tree_flatten(self):
return tuple(getattr(self, param) for param in sorted(self.arg_constraints.keys())), None
@classmethod
def tree_unflatten(cls, aux_data, params):
return cls(**dict(zip(sorted(cls.arg_constraints.keys()), params)))
@staticmethod
def set_default_validate_args(value):
if value not in [True, False]:
raise ValueError
Distribution._validate_args = value
def __init__(self, batch_shape=(), event_shape=(), validate_args=None):
self._batch_shape = batch_shape
self._event_shape = event_shape
if validate_args is not None:
self._validate_args = validate_args
if self._validate_args:
for param, constraint in self.arg_constraints.items():
if param not in self.__dict__ and isinstance(getattr(type(self), param), lazy_property):
continue
if constraints.is_dependent(constraint):
continue # skip constraints that cannot be checked
is_valid = constraint(getattr(self, param))
if not_jax_tracer(is_valid):
if not np.all(is_valid):
raise ValueError("{} distribution got invalid {} parameter.".format(
self.__class__.__name__, param))
super(Distribution, self).__init__()
@property
def batch_shape(self):
"""
Returns the shape over which the distribution parameters are batched.
:return: batch shape of the distribution.
:rtype: tuple
"""
return self._batch_shape
@property
def event_shape(self):
"""
Returns the shape of a single sample from the distribution without
batching.
:return: event shape of the distribution.
:rtype: tuple
"""
return self._event_shape
@property
def event_dim(self):
"""
:return: Number of dimensions of individual events.
:rtype: int
"""
return len(self.event_shape)
@property
def has_rsample(self):
return set(self.reparametrized_params) == set(self.arg_constraints)
def rsample(self, key, sample_shape=()):
if self.has_rsample:
return self.sample(key, sample_shape=sample_shape)
raise NotImplementedError
def shape(self, sample_shape=()):
"""
The tensor shape of samples from this distribution.
Samples are of shape::
d.shape(sample_shape) == sample_shape + d.batch_shape + d.event_shape
:param tuple sample_shape: the size of the iid batch to be drawn from the
distribution.
:return: shape of samples.
:rtype: tuple
"""
return sample_shape + self.batch_shape + self.event_shape
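The shape arithmetic in the docstring is plain tuple concatenation; for example (shapes chosen arbitrarily):

```python
# d.shape(sample_shape) == sample_shape + batch_shape + event_shape,
# i.e. ordinary Python tuple concatenation.
sample_shape = (5,)
batch_shape = (2, 3)
event_shape = (4,)
full_shape = sample_shape + batch_shape + event_shape  # (5, 2, 3, 4)
```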
def sample(self, key, sample_shape=()):
"""
Returns a sample from the distribution having shape given by
`sample_shape + batch_shape + event_shape`. Note that when `sample_shape` is non-empty,
leading dimensions (of size `sample_shape`) of the returned sample will
be filled with iid draws from the distribution instance.
:param jax.random.PRNGKey key: the rng_key key to be used for the distribution.
:param tuple sample_shape: the sample shape for the distribution.
:return: an array of shape `sample_shape + batch_shape + event_shape`
:rtype: numpy.ndarray
"""
raise NotImplementedError
def sample_with_intermediates(self, key, sample_shape=()):
"""
Same as ``sample`` except that any intermediate computations are
returned (useful for `TransformedDistribution`).
:param jax.random.PRNGKey key: the rng_key key to be used for the distribution.
:param tuple sample_shape: the sample shape for the distribution.
:return: an array of shape `sample_shape + batch_shape + event_shape`
:rtype: numpy.ndarray
"""
return self.sample(key, sample_shape=sample_shape), []
def log_prob(self, value):
"""
Evaluates the log probability density for a batch of samples given by
`value`.
:param value: A batch of samples from the distribution.
:return: an array with shape `value.shape[:-self.event_shape]`
:rtype: numpy.ndarray
"""
raise NotImplementedError
@property
def mean(self):
"""
Mean of the distribution.
"""
raise NotImplementedError
@property
def variance(self):
"""
Variance of the distribution.
"""
raise NotImplementedError
def _validate_sample(self, value):
mask = self.support(value)
if not_jax_tracer(mask):
if not np.all(mask):
warnings.warn('Out-of-support values provided to log prob method. '
'The value argument should be within the support.')
return mask
def __call__(self, *args, **kwargs):
key = kwargs.pop('rng_key')
sample_intermediates = kwargs.pop('sample_intermediates', False)
if sample_intermediates:
return self.sample_with_intermediates(key, *args, **kwargs)
return self.sample(key, *args, **kwargs)
def to_event(self, reinterpreted_batch_ndims=None):
"""
Interpret the rightmost `reinterpreted_batch_ndims` batch dimensions as
dependent event dimensions.
:param reinterpreted_batch_ndims: Number of rightmost batch dims to
interpret as event dims.
:return: An instance of `Independent` distribution.
:rtype: numpyro.distributions.distribution.Independent
"""
if reinterpreted_batch_ndims is None:
reinterpreted_batch_ndims = len(self.batch_shape)
if reinterpreted_batch_ndims == 0:
return self
return Independent(self, reinterpreted_batch_ndims)
def enumerate_support(self, expand=True):
"""
Returns an array with shape `len(support) x batch_shape`
containing all values in the support.
"""
raise NotImplementedError
def expand(self, batch_shape):
"""
Returns a new :class:`ExpandedDistribution` instance with batch
dimensions expanded to `batch_shape`.
:param tuple batch_shape: batch shape to expand to.
:return: an instance of `ExpandedDistribution`.
:rtype: :class:`ExpandedDistribution`
"""
batch_shape = tuple(batch_shape)
if batch_shape == self.batch_shape:
return self
return ExpandedDistribution(self, batch_shape)
def expand_by(self, sample_shape):
"""
Expands a distribution by adding ``sample_shape`` to the left side of
its :attr:`~numpyro.distributions.distribution.Distribution.batch_shape`.
To expand internal dims of ``self.batch_shape`` from 1 to something
larger, use :meth:`expand` instead.
:param tuple sample_shape: The size of the iid batch to be drawn
from the distribution.
:return: An expanded version of this distribution.
:rtype: :class:`ExpandedDistribution`
"""
return self.expand(tuple(sample_shape) + self.batch_shape)
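The resulting batch shape is just the concatenation computed in the `return` above; as a stand-alone sketch of that rule:

```python
def expand_by_shape(batch_shape, sample_shape):
    # expand_by prepends sample_shape on the left of the batch shape;
    # the event shape is untouched.
    return tuple(sample_shape) + tuple(batch_shape)

assert expand_by_shape((3,), (2, 5)) == (2, 5, 3)
```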
def mask(self, mask):
"""
Masks a distribution by a boolean or boolean-valued array that is
broadcastable to the distribution's
:attr:`Distribution.batch_shape`.
:param mask: A boolean or boolean valued array (`True` includes
a site, `False` excludes a site).
:type mask: bool or jnp.ndarray
:return: A masked copy of this distribution.
:rtype: :class:`MaskedDistribution`
**Example:**
.. doctest::
>>> from jax import random
>>> import jax.numpy as jnp
>>> import numpyro
>>> import numpyro.distributions as dist
>>> from numpyro.distributions import constraints
>>> from numpyro.infer import SVI, Trace_ELBO
>>> def model(data, m):
... f = numpyro.sample("latent_fairness", dist.Beta(1, 1))
... with numpyro.plate("N", data.shape[0]):
... # only take into account
from kwmo.controllers.abstract_teambox import *
import time
from kwmo.lib.kwmo_kcd_client import KcdClient
from kwmo.lib.config import get_cached_kcd_external_conf_object
from kfs_lib import *
from kwmo.lib.base import init_session
from kwmo.lib.kwmolib import *
from kwmo.model.user import User
from kwmo.model.kfs_node import KfsNode
from kwmo.model.chat_request import ChatRequest
from kwmo.model.ws_request import WSRequest
import kbase
import simplejson
log = logging.getLogger(__name__)
class SkurlTeamboxController(AbstractTeamboxController):
# Internal: check if workspace is public.
def _check_public(self, workspace_id):
if not c.workspace.public:
log.warning("_check_public(): workspace %i is not public." % ( workspace_id ) )
abort(404)
# Internal: login as a skurl user.
def _login(self, user):
session['user'] = user.to_dict()
session['user_id'] = session['user']['id']
c.perms.allow('kfs.download.share.0')
c.perms.allow('kfs.upload.share.0')
session.save()
# Last minute permissions check.
self._check_perms()
# Internal: set chat request permissions.
def _set_chat_requests_perms(self, flag):
if flag:
# Allow chat requests.
c.perms.allow('pubws.req.chat')
else:
# Deny further chat requests.
c.perms.deny('pubws.req.chat')
# Internal: set chat permissions.
def _set_chat_perms(self, flag):
if flag:
# Allow chat.
c.perms.allow('chat.list.channel.' + str(session['user_id']))
c.perms.allow('chat.post.channel.' + str(session['user_id']))
else:
# Deny chat.
c.perms.deny('chat.list.channel.' + str(session['user_id']))
c.perms.deny('chat.post.channel.' + str(session['user_id']))
# Internal: set workspace creation requests permissions.
def _set_ws_creation_requests_perms(self, flag):
if flag:
# Allow workspace creation requests.
c.perms.allow('pubws.req.wscreate')
else:
# Deny further workspace creation requests.
c.perms.deny('pubws.req.wscreate')
# Log user out.
def logout(self, workspace_id, email_id):
log.debug("Skurl logout.")
init_session(c.workspace, reinit=True)
ui_flash_info(code='logout', hide_after_ms=5000)
redirect_to(url('teambox_pubws_show', workspace_id=workspace_id, email_id=email_id))
# Show public workspace main page.
def show(self, workspace_id, email_id):
workspace_id = int(workspace_id)
email_id = int(email_id)
# Set logout url.
c.logout_url = url('teambox_pubws_logout', workspace_id=workspace_id, email_id=email_id)
# Check if the workspace is public.
self._check_public(workspace_id)
if 'email_id' in session and session['email_id'] != email_id:
# User is logged but wants to access a different email. Reinit session.
log.debug("Reinitializing session because user is using another email id: previous='%s', new='%s'." \
% ( str(session['email_id']), str(email_id) ) )
init_session(c.workspace, reinit=True)
notif = request.GET.get('notif', 0)
if notif:
# This is the sender (user 1)... [re-]login automatically.
log.debug("User is accessing a public workspace using a notification link... automatically log user in.")
user = User.get_by(workspace_id=workspace_id, id=1)
log.debug("Reinitializing session because user is logging as user 1 (notif management).")
init_session(c.workspace, reinit=True)
self._login(user)
c.notif_flag = True
else:
if 'user' in session and session['user'] and session['user']['id'] == 1:
# Sender is logged in (as a sender) but is using a regular skurl link: log out.
log.debug("Reinitializing session because user was logged as user 1 but is using a regular skurl link.")
init_session(c.workspace, reinit=True)
if not c.perms.hasRole('skurl'):
# Give skurl role, if not already done.
c.perms.addRole('skurl')
# Save session.
session.save()
if not 'email_id' in session:
# Set email information in session.
# Instantiate a Kcd client.
kc = KcdClient(get_cached_kcd_external_conf_object())
# Check that email ID is valid.
email_info = kc.pubws_get_email_info(workspace_id, email_id)
if not email_info:
log.debug("PubWS: invalid email ID: %i" % ( email_id ) )
abort(404)
# Get the email sender.
sender_user = User.get_by(workspace_id=workspace_id, id=1)
sender = kbase.PropStore()
sender.name = sender_user.real_name
sender.email = sender_user.email
# Get the email recipients (list of PropStores, having name and email keys).
raw_recipients = kc.pubws_get_eid_recipient_identities(workspace_id, email_id)
# Strip sender email from recipients, if needed.
recipients = []
for recipient in raw_recipients:
if recipient.email != sender.email:
recipients.append(recipient)
# Merge sender and recipients.
identities = [sender] + recipients
# Set needed information in session.
session['email_id'] = email_id
session['email_info'] = email_info.to_dict()
session['identities'] = map(lambda x: x.to_dict(), identities)
session.save()
# Get information that will be published in the template.
c.dyn_version = 15
c.email_info = session['email_info']
c.json_email_info_str = simplejson.dumps(c.email_info)
c.identities = session['identities']
c.json_identities_str = simplejson.dumps(c.identities)
# Check if a chat request was accepted lately (delay is hardcoded in accepted_lately()).
c.user_id = None
if 'user_id' in session and session['user_id']:
c.user_id = session['user_id']
if ChatRequest.accepted_lately(workspace_id, session['user_id']):
# Deny chat requests and allow chat since a request was accepted lately.
self._set_chat_requests_perms(False)
self._set_chat_perms(True)
else:
# Allow chat requests and deny chat since no request was accepted lately.
self._set_chat_requests_perms(True)
self._set_chat_perms(False)
# Allow workspace creation request.
self._set_ws_creation_requests_perms(True)
# Save session.
session.save()
c.base_url_paths = kurl.get_base_url_paths(
'teambox_updater',
'teambox_post_chat',
'teambox_download',
'teambox_upload',
'teambox_pubws_set_identity',
'teambox_pubws_chat_request',
'teambox_pubws_chat_request_result',
'teambox_pubws_kfsup_request',
'teambox_pubws_kfsdown_request',
'teambox_pubws_create_request')
# Get first update directly.
flags = ( StateRequest.STATE_FORCE_SYNC
| StateRequest.STATE_WANT_PERMS
| StateRequest.STATE_WANT_MEMBERS
| StateRequest.STATE_WANT_KFS
| StateRequest.STATE_WANT_PUBWS_INFO )
params = { }
if 'user_id' in session and session['user_id']:
flags |= StateRequest.STATE_WANT_CHAT
params['chat_channel_id'] = session['user_id']
updater_state_dict = state_request_get(c, session, flags, params)
c.updater_state_json = simplejson.dumps(updater_state_dict)
return render('/teambox/pubwsshow.mako')
# Get a user ID matching the identity ID selected by the user.
# If user is not invited, he is invited first.
@kjsonify
def pb_set_identity(self, workspace_id):
import select
from kcd_lib import WorkspaceInvitee
workspace_id = int(workspace_id)
# Get the workspace.
if not c.workspace.public:
log.warning("pb_set_identity: Workspace %i is not public." % ( workspace_id ) )
abort(404)
# Get POST parameters.
identity_id = request.params['identity_id']
identity_id = int(identity_id)
# Shortcuts
identity = session['identities'][identity_id]
log.debug("Recipient: %s" % ( str(identity) ) )
if identity_id == 0:
# This is the sender (user 1).
user = User.get_by(workspace_id=workspace_id, id=1)
self._login(user)
log.debug("Found matching user(0): '%s'." % ( str(user) ) )
return { 'result' : 'ok', 'user' : session['user'] }
# This is a real recipient... try to get the user.
user = User.get_by(workspace_id=workspace_id, email=identity['email'])
if user:
self._login(user)
log.debug("Found matching user(1): '%s'." % ( str(user) ) )
return { 'result' : 'ok', 'user' : session['user'] }
# Instantiate a Kcd client.
kc = KcdClient(get_cached_kcd_external_conf_object())
# Invite user.
invitee = WorkspaceInvitee(real_name=identity['name'], email_address=identity['email'])
junk_url, invitees = kc.invite_users(workspace_id, "empty message", [invitee])
if invitees[0].error:
log.debug("User could not be invited: '%s'." % ( str(invitees[0].error) ) )
raise Exception('Internal error.')
# Get user. If not present, retry a few times, until new user is fetched by kwsfetcher or until timeout.
wait_seconds = 0.5
timeout_seconds = 8
time_started = time.time()
while True:
# Get user, if it exists (fetched by kwsfetcher).
user = User.get_by(workspace_id=workspace_id, email=identity['email'])
if user:
self._login(user)
log.debug("Found matching user (2): '%s'." % ( str(user) ) )
return { 'result' : 'ok', 'user' : session['user'] }
# Check for timeout.
if time.time() > time_started + timeout_seconds: break
# Wait
select.select([], [], [], wait_seconds)
# Reached timeout.
log.error("Error: reached end of pb_set_identity(). KWSFetcher might be overloaded or down.")
raise Exception('Temporary server error: please try again later.')
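The retry-until-timeout loop above is a general polling pattern; it can be factored into a reusable helper (a hedged sketch with hypothetical names, not part of this controller):

```python
import time

def poll_until(fetch, timeout_seconds=8.0, wait_seconds=0.5):
    # Poll fetch() until it returns a truthy value or the deadline passes.
    # Mirrors the retry loop above; returns None on timeout.
    deadline = time.time() + timeout_seconds
    while True:
        result = fetch()
        if result:
            return result
        if time.time() >= deadline:
            return None
        time.sleep(wait_seconds)

# Stub that succeeds on the third attempt.
attempts = {"n": 0}
def fetch_stub():
    attempts["n"] += 1
    return "user" if attempts["n"] >= 3 else None

assert poll_until(fetch_stub, timeout_seconds=1.0, wait_seconds=0.0) == "user"
```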
# Internal: do stuff related to every pubws request.
def _request_common(self, workspace_id):
# Check that the user is logged in.
if not session['user']:
log.error("_request_common(): user is not logged in.")
abort(404)
# Instantiate a Kcd client in the context-global variable.
c.pubws_kc = KcdClient(get_cached_kcd_external_conf_object())
# PubWS chat request.
@kjsonify
def chat_request(self, workspace_id):
workspace_id = int(workspace_id)
# Do some checks and initialization.
self._check_public(workspace_id)
self._request_common(workspace_id)
# Time to allow the workspace owner to respond.
# Keep PubWSChat javascript object code in sync for the global chat
# request timeout (which must be a little longer than this one).
req_timeout = 60
# Shortcuts.
user_id = session['user']['id']
subject = session['email_info']['subject']
# Post request.
chat_req_id = c.pubws_kc.pubws_chat_request(workspace_id, user_id, c.workspace.compat_v2, subject, req_timeout)
log.debug("Chat request: got chat_req_id '%i'." % ( chat_req_id ) )
return { "chat_req_id" : chat_req_id }
# PubWS chat request result request.
@kjsonify
def chat_request_result(self, workspace_id, req_id):
workspace_id = int(workspace_id)
req_id = int(req_id)
req_start_time = request.params['req_start_time']
# Do some checks and initialization.
self._check_public(workspace_id)
self._request_common(workspace_id)
# Get the request.
req = ChatRequest.get_by(workspace_id=workspace_id, request_id=req_id)
if req:
# Check request status.
if req.accepted:
# Modify permissions.
self._set_chat_requests_perms(False)
self._set_chat_perms(True)
# Save session.
session.save()
log.debug("chat_request_result(): accepted.")
return { "result" : "ok" }
# Enable when debugging to allow automatic chat acceptance.
if 0:
from kanp import KANP_MINOR
from pylons import config
kc = KcdClient(get_cached_kcd_external_conf_object())
# This function has to be rewritten.
kc.pubws_chat_request_accept(workspace_id, user_id, KANP_MINOR, req_id)
else:
# Bad request ID or kwsfetcher has not yet fetched the request.
pass
log.debug("chat_request_result(): pending, chat_req_id='%s', req_start_time='%s'." \
% ( str(req_id), str(req_start_time) ) )
return { "result" : "pending", "chat_req_id" : req_id, 'req_start_time' : req_start_time }
# PubWS KFS upload request.
@kjsonify
def kfs_upload_request(self, workspace_id):
workspace_id = int(workspace_id)
# Do some checks and initialization.
self._check_public(workspace_id)
self._request_common(workspace_id)
# No-op
return { "result" : "ok" }
# PubWS KFS download request.
@kjsonify
def kfs_download_request(self, workspace_id):
workspace_id = int(workspace_id)
# Do some checks and initialization.
self._check_public(workspace_id)
self._request_common(workspace_id)
# No-op
return { "result" : "ok" }
# PubWS workspace creation request.
@kjsonify
def ws_create_request(self, workspace_id):
workspace_id = int(workspace_id)
# Do
@ params add mean
w = self.weights.values2d
eps -= (w * eps).sum() / w.sum()
index = self.dependent.index
fitted = DataFrame(_x @ params, index, ["fitted_values"])
idiosyncratic = DataFrame(eps, index, ["idiosyncratic"])
eps_effects = _y - fitted.values
sigma2_tot = float(eps_effects.T @ eps_effects / nobs)
sigma2_eps = float(eps.T @ eps / nobs)
sigma2_effects = sigma2_tot - sigma2_eps
rho = sigma2_effects / sigma2_tot if sigma2_tot > 0.0 else 0.0
resid_ss = float(weps.T @ weps)
if self.has_constant:
mu = ybar
else:
mu = np.array([0.0])
total_ss = float((y - mu).T @ (y - mu))
r2 = 1 - resid_ss / total_ss if total_ss > 0.0 else 0.0
root_w = cast(Float64Array, np.sqrt(self.weights.values2d))
y_ex = root_w * self.dependent.values2d
mu_ex = 0
if (
self.has_constant
or self.entity_effects
or self.time_effects
or self.other_effects
):
mu_ex = root_w * ((root_w.T @ y_ex) / (root_w.T @ root_w))
total_ss_ex_effect = float((y_ex - mu_ex).T @ (y_ex - mu_ex))
r2_ex_effects = (
1 - resid_ss / total_ss_ex_effect if total_ss_ex_effect > 0.0 else 0.0
)
res = self._postestimation(params, cov, debiased, df_resid, weps, y, x, root_w)
######################################
# Pooled f-stat
######################################
if self.entity_effects or self.time_effects or self.other_effects:
wy, wx = root_w * self.dependent.values2d, root_w * self.exog.values2d
df_num, df_denom = (df_model - wx.shape[1]), df_resid
if not self.has_constant:
# Correction for when the model does not have an explicit constant
wy -= root_w * _lstsq(root_w, wy, rcond=None)[0]
wx -= root_w * _lstsq(root_w, wx, rcond=None)[0]
df_num -= 1
weps_pooled = wy - wx @ _lstsq(wx, wy, rcond=None)[0]
resid_ss_pooled = float(weps_pooled.T @ weps_pooled)
num = (resid_ss_pooled - resid_ss) / df_num
denom = resid_ss / df_denom
stat = num / denom
f_pooled = WaldTestStatistic(
stat,
"Effects are zero",
df_num,
df_denom=df_denom,
name="Pooled F-statistic",
)
res.update(f_pooled=f_pooled)
effects = DataFrame(
eps_effects - eps,
columns=["estimated_effects"],
index=self.dependent.index,
)
else:
effects = DataFrame(
np.zeros_like(eps),
columns=["estimated_effects"],
index=self.dependent.index,
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=y.shape[0],
residual_ss=resid_ss,
total_ss=total_ss,
wresids=weps,
resids=eps,
r2=r2,
entity_effects=self.entity_effects,
time_effects=self.time_effects,
other_effects=self.other_effects,
sigma2_eps=sigma2_eps,
sigma2_effects=sigma2_effects,
rho=rho,
r2_ex_effects=r2_ex_effects,
effects=effects,
fitted=fitted,
idiosyncratic=idiosyncratic,
)
)
return PanelEffectsResults(res)
class BetweenOLS(_PanelModelBase):
r"""
Between estimator for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
Weights to use in estimation. Assumes residual variance is
proportional to the inverse of the weight so that the residual times
the weight should be homoskedastic.
Notes
-----
The model is given by
.. math::
\bar{y}_{i}= \beta^{\prime}\bar{x}_{i}+\bar{\epsilon}_{i}
where :math:`\bar{z}` is the time-average.
"""
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: Optional[PanelDataLike] = None,
check_rank: bool = True,
) -> None:
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
self._cov_estimators = CovarianceManager(
self.__class__.__name__,
HomoskedasticCovariance,
HeteroskedasticCovariance,
ClusteredCovariance,
)
def _setup_clusters(
self, cov_config: Dict[str, Union[bool, float, str, PanelDataLike]]
) -> Dict[str, Union[bool, float, str, IntArray, DataFrame, PanelData]]:
"""Return covariance config with clusters reformatted for the estimator"""
cov_config_upd = cov_config.copy()
if "clusters" not in cov_config:
return cov_config_upd
clusters = cov_config.get("clusters", None)
if clusters is not None:
clusters_panel = self.reformat_clusters(clusters)
cluster_max = np.nanmax(clusters_panel.values3d, axis=1)
delta = cluster_max - np.nanmin(clusters_panel.values3d, axis=1)
if np.any(delta != 0):
raise ValueError("clusters must not vary within an entity")
index = clusters_panel.panel.minor_axis
reindex = clusters_panel.entities
clusters_frame = DataFrame(
cluster_max.T, index=index, columns=clusters_panel.vars
)
clusters_frame = clusters_frame.loc[reindex].astype(np.int64)
cov_config_upd["clusters"] = clusters_frame
return cov_config_upd
def fit(
self,
*,
reweight: bool = False,
cov_type: str = "unadjusted",
debiased: bool = True,
**cov_config: Union[bool, float, str, IntArray, DataFrame, PanelData],
) -> PanelResults:
"""
Estimate model parameters
Parameters
----------
reweight : bool
Flag indicating to reweight observations if the input data is
unbalanced using a WLS estimator. If weights are provided, these
are accounted for when reweighting. Has no effect on balanced data.
cov_type : str
Name of covariance estimator. See Notes.
debiased : bool
Flag indicating whether to debiased the covariance estimator using
a degree of freedom adjustment.
**cov_config
Additional covariance-specific options. See Notes.
Returns
-------
PanelResults
Estimation results
Examples
--------
>>> from linearmodels import BetweenOLS
>>> mod = BetweenOLS(y, x)
>>> res = mod.fit(cov_type='robust')
Notes
-----
Three covariance estimators are supported:
* 'unadjusted', 'homoskedastic' - Assume residual are homoskedastic
* 'robust', 'heteroskedastic' - Control for heteroskedasticity using
White's estimator
* 'clustered' - One- or two-way clustering. Configuration options are:
* ``clusters`` - Input containing 1 or 2 variables.
Clusters should be integer values, although other types will
be coerced to integer values by treating as categorical variables
When using a clustered covariance estimator, all cluster ids must be
identical within an entity.
"""
y, x, w = self._prepare_between()
if np.all(self.weights.values2d == 1.0) and not reweight:
w = root_w = np.ones_like(y)
else:
root_w = cast(Float64Array, np.sqrt(w))
wx = root_w * x
wy = root_w * y
params = _lstsq(wx, wy, rcond=None)[0]
df_resid = y.shape[0] - x.shape[1]
df_model = x.shape[1]
nobs = y.shape[0]
cov_config = self._setup_clusters(cov_config)
extra_df = 0
if "extra_df" in cov_config:
cov_config = cov_config.copy()
_extra_df = cov_config.pop("extra_df")
assert isinstance(_extra_df, (str, int))
extra_df = int(_extra_df)
cov = setup_covariance_estimator(
self._cov_estimators,
cov_type,
wy,
wx,
params,
self.dependent.entity_ids,
self.dependent.time_ids,
debiased=debiased,
extra_df=extra_df,
**cov_config,
)
weps = wy - wx @ params
index = self.dependent.index
fitted = DataFrame(self.exog.values2d @ params, index, ["fitted_values"])
eps = y - x @ params
effects = DataFrame(eps, self.dependent.entities, ["estimated_effects"])
entities = fitted.index.levels[0][fitted.index.codes[0]]
effects = effects.loc[entities]
effects.index = fitted.index
dep = self.dependent.dataframe
fitted = fitted.reindex(dep.index)
effects = effects.reindex(dep.index)
idiosyncratic = DataFrame(
np.asarray(dep) - np.asarray(fitted) - np.asarray(effects),
dep.index,
["idiosyncratic"],
)
residual_ss = float(weps.T @ weps)
e = y
if self._constant:
e = y - (w * y).sum() / w.sum()
total_ss = float(w.T @ (e**2))
r2 = 1 - residual_ss / total_ss
res = self._postestimation(
params, cov, debiased, df_resid, weps, wy, wx, root_w
)
res.update(
dict(
df_resid=df_resid,
df_model=df_model,
nobs=nobs,
residual_ss=residual_ss,
total_ss=total_ss,
r2=r2,
wresids=weps,
resids=eps,
index=self.dependent.entities,
fitted=fitted,
effects=effects,
idiosyncratic=idiosyncratic,
)
)
return PanelResults(res)
@classmethod
def from_formula(
cls,
formula: str,
data: PanelDataLike,
*,
weights: Optional[PanelDataLike] = None,
check_rank: bool = True,
) -> BetweenOLS:
"""
Create a model from a formula
Parameters
----------
formula : str
Formula to transform into model. Conforms to formulaic formula
rules.
data : array_like
Data structure that can be coerced into a PanelData. In most
cases, this should be a multi-index DataFrame where the level 0
index contains the entities and the level 1 contains the time.
weights: array_like
Weights to use in estimation. Assumes residual variance is
proportional to the inverse of the weight so that the residual times
the weight should be homoskedastic.
check_rank : bool
Flag indicating whether to perform a rank check on the exogenous
variables to ensure that the model is identified. Skipping this
check can reduce the time required to validate a model
specification. Results may be numerically unstable if this check
is skipped and the matrix is not full rank.
Returns
-------
BetweenOLS
Model specified using the formula
Notes
-----
Unlike standard formula syntax, it is necessary to explicitly include
a constant using the constant indicator (1)
Examples
--------
>>> from linearmodels import BetweenOLS
>>> from linearmodels.panel import generate_panel_data
>>> panel_data = generate_panel_data()
>>> mod = BetweenOLS.from_formula('y ~ 1 + x1', panel_data.data)
>>> res = mod.fit()
"""
parser = PanelFormulaParser(formula, data)
dependent, exog = parser.data
mod = cls(dependent, exog, weights=weights, check_rank=check_rank)
mod.formula = formula
return mod
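The between transform described in the `BetweenOLS` Notes (time-average within each entity, then OLS on the means) can be sketched directly with NumPy; this is an illustration on synthetic data, not the linearmodels implementation:

```python
import numpy as np

# 4 entities observed for 5 periods; y depends on x with no noise so the
# recovered coefficients are exact.
rng = np.random.default_rng(0)
entities = np.repeat(np.arange(4), 5)
x = rng.normal(size=(20, 2))
beta = np.array([1.5, -0.5])
y = x @ beta

# Between transform: average within each entity, then OLS on the means.
xbar = np.stack([x[entities == i].mean(axis=0) for i in range(4)])
ybar = np.array([y[entities == i].mean() for i in range(4)])
params = np.linalg.lstsq(xbar, ybar, rcond=None)[0]
assert np.allclose(params, beta)
```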
class FirstDifferenceOLS(_PanelModelBase):
r"""
First difference model for panel data
Parameters
----------
dependent : array_like
Dependent (left-hand-side) variable (time by entity)
exog : array_like
Exogenous or right-hand-side variables (variable by time by entity).
weights : array_like
Weights to use in estimation. Assumes residual variance is
proportional to the inverse of the weight so that the residual times
the weight should be homoskedastic.
Notes
-----
The model is given by
.. math::
\Delta y_{it}=\beta^{\prime}\Delta x_{it}+\Delta\epsilon_{it}
"""
def __init__(
self,
dependent: PanelDataLike,
exog: PanelDataLike,
*,
weights: Optional[PanelDataLike] = None,
check_rank: bool = True,
):
super().__init__(dependent, exog, weights=weights, check_rank=check_rank)
if self._constant:
raise ValueError(
"Constants are not allowed in first difference regressions."
)
if self.dependent.nobs < 2:
raise ValueError("Panel must have at least 2 time periods")
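Why constants are disallowed here is clear from the model equation: any entity-level constant drops out when differencing. A minimal NumPy sketch (synthetic single-entity data, not the linearmodels implementation):

```python
import numpy as np

# One entity over 6 periods; the entity-level constant (3.0) differences
# out, so the slope is recovered without fitting an intercept.
beta = 2.0
x = np.arange(6, dtype=float)
y = 3.0 + beta * x
dy, dx = np.diff(y), np.diff(x)
b = float(np.linalg.lstsq(dx[:, None], dy, rcond=None)[0][0])
assert abs(b - beta) < 1e-9
```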
def _setup_clusters(
self, cov_config: Dict[str, Union[bool, float, str, PanelDataLike]]
) -> Dict[str, Union[bool, float, str, DataFrame]]:
cov_config_upd = cov_config.copy()
cluster_types = ("clusters", "cluster_entity")
common
self.run = False
if event.type == pygame.MOUSEBUTTONDOWN and not self.menu_gui["main"] == 3:
self.mouse_choose()
elif self.menu_gui["main"] == 3 and event.type == pygame.MOUSEBUTTONDOWN:
self.menu_gui["main"] = 0
keys = pygame.key.get_pressed()
if keys[pygame.K_LEFT] or keys[pygame.K_UP]:
self.next_item(-1)
if keys[pygame.K_RIGHT] or keys[pygame.K_DOWN]:
self.next_item(1)
if keys[pygame.K_RETURN] or keys[pygame.K_SPACE]:
self.keyboard_choose()
self.window.fill(self.background_color)
if self.RGB >= 38 or self.RGB <= 2:
self.RGB_increment *= -1
self.RGB += self.RGB_increment
pygame.draw.rect(self.window, (0, 0, 0), (self.WIDTH * 0, self.HEIGHT * 0, self.WIDTH, 245))
pygame.draw.rect(self.window, self.sky_blue, (self.WIDTH * 0, self.HEIGHT * 0, self.WIDTH, 240))
# pygame.draw.rect(self.window, (0, 0, 0), (self.WIDTH * 0, self.HEIGHT - 6, self.WIDTH, 10))
self.window.blit(self.title_background, (self.X, 0))
self.window.blit(self.title_background, (self.X - 1080, 0))
self.window.blit(self.title_background, (self.X + 1080, 0))
self.window.blit(self.title, (0, self.Y))
self.X += 0.55
if self.X > 1080:
self.X = 0
if self.Y > 10 or self.Y < -10:
self.Y_increment *= -1
self.Y += self.Y_increment
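The same ping-pong pattern drives both `self.RGB` and `self.Y`; it could be factored into a small helper (a sketch with hypothetical names, not part of this codebase):

```python
def bounce(value, increment, low, high):
    # Ping-pong a value between low and high, flipping direction once the
    # value steps outside the band (mirrors the title-bob logic above).
    if value > high or value < low:
        increment = -increment
    return value + increment, increment

v, inc = bounce(10, 1, -10, 10)   # at the upper bound: one more step up
assert (v, inc) == (11, 1)
v, inc = bounce(v, inc, -10, 10)  # past the bound: direction flips
assert (v, inc) == (10, -1)
```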
if self.menu_gui["main"] <= 1:
if self.menu_gui["hover"] == 1:
pygame.draw.rect(self.window, self.box_filled_color_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.39, self.WIDTH * 0.5, self.HEIGHT * 0.1))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.39, self.WIDTH * 0.5, self.HEIGHT * 0.1))
pygame.draw.rect(self.window, (0, 0, 0),
(self.WIDTH * 0.25, self.HEIGHT * 0.39, self.WIDTH * 0.5, self.HEIGHT * 0.1), 3)
if self.menu_gui["hover"] == 2:
pygame.draw.rect(self.window, self.box_filled_color_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.54, self.WIDTH * 0.5, self.HEIGHT * 0.1))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.54, self.WIDTH * 0.5, self.HEIGHT * 0.1))
pygame.draw.rect(self.window, (0, 0, 0),
(self.WIDTH * 0.25, self.HEIGHT * 0.54, self.WIDTH * 0.5, self.HEIGHT * 0.1), 3)
if self.menu_gui["hover"] == 3:
pygame.draw.rect(self.window, self.box_filled_color_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.69, self.WIDTH * 0.5, self.HEIGHT * 0.1))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.69, self.WIDTH * 0.5, self.HEIGHT * 0.1))
pygame.draw.rect(self.window, (0, 0, 0),
(self.WIDTH * 0.25, self.HEIGHT * 0.69, self.WIDTH * 0.5, self.HEIGHT * 0.1), 3)
if self.menu_gui["main"] == 0:
self.make_text("Play", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.44 + 5)
self.make_text("Settings", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.59 + 5)
self.make_text("Credits", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.74 + 5)
if self.menu_gui["hover"] == 4:
pygame.draw.rect(self.window, self.box_filled_color_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.84, self.WIDTH * 0.5, self.HEIGHT * 0.1))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.25, self.HEIGHT * 0.84, self.WIDTH * 0.5, self.HEIGHT * 0.1))
self.make_text("Quit", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.89 + 5)
pygame.draw.rect(self.window, (0, 0, 0),
(self.WIDTH * 0.25, self.HEIGHT * 0.84, self.WIDTH * 0.5, self.HEIGHT * 0.1), 3)
elif self.menu_gui["main"] == 1:
self.make_text("Single-player", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.44 + 5)
self.make_text("Multi-player", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.59 + 5)
self.make_text("Back", self.font2, self.black, self.WIDTH * 0.5, self.HEIGHT * 0.74 + 5)
elif self.menu_gui["main"] == 3:
self.blit_text(0.6 * self.WIDTH, 0.5 * self.HEIGHT,
"Made by <NAME>. Art Contributions by <NAME>. github.com/almutwakel",
(0.22 * self.WIDTH, 0.4 * self.HEIGHT), self.font2, self.black)
self.make_text("Click Anywhere to Return to Menu", self.font2, self.black, self.WIDTH * 0.5,
self.HEIGHT * 0.89 + 5 + self.Y)
pygame.display.update()
# self.HEIGHT = window.get_height()
# self.WIDTH = window.get_width()
def run_loading(self):
for event in pygame.event.get():
if event.type == pygame.QUIT:
self.run = False
if self.options_done:
self.window.fill(self.box_filled_color_title)
self.make_text("Loading...", self.font5, self.white, self.WIDTH * 0.5, self.HEIGHT * 0.4)
pygame.draw.rect(self.window, self.white,
(self.WIDTH * 0.3, self.HEIGHT * 0.45, self.WIDTH * 0.4, self.HEIGHT * 0.1), 5)
for bars in range(self.loading_X):
pygame.draw.rect(self.window, self.green,
(self.WIDTH * 0.3 + bars * 10 + 3, self.HEIGHT * 0.45 + 3, 6, self.HEIGHT * 0.1 - 6))
pygame.display.update()
pygame.time.delay(35)
self.loading_X += 1
if self.loading_X >= (self.WIDTH * 0.4 - 6) / 10:
self.loading = False
else:
self.window.fill(self.box_filled_color_title)
if self.menu_gui["type"] == 1:
self.make_text("Game Options", self.font2, self.white, self.WIDTH * 0.5, self.HEIGHT * 0.1)
for row in range(3):
self.make_text(self.option_titles[row][0], self.font5, self.white, self.WIDTH * 0.15,
self.HEIGHT * (0.26 + 0.15 * row))
# Highlight the selected preset box (or the custom box when no preset matches).
for box in range(3):
    if self.game_options[row][1] == self.option_titles[row][box + 1]:
        color = self.dark_green
    else:
        color = self.green_title
    pygame.draw.rect(self.window, color,
                     (self.WIDTH * (0.3 + 0.1 * box), self.HEIGHT * (0.2 + 0.15 * row),
                      self.WIDTH * 0.08, 80))
if self.game_options[row][1] in self.option_titles[row][1:4]:
    custom_color = self.green_title
else:
    custom_color = self.dark_green
pygame.draw.rect(self.window, custom_color,
                 (self.WIDTH * 0.6, self.HEIGHT * (0.2 + 0.15 * row), 160, 80))
for box in range(3):
pygame.draw.rect(self.window, self.white,
(self.WIDTH * (0.3 + 0.1 * box), self.HEIGHT * (0.2 + 0.15 * row),
self.WIDTH * 0.08, 80), 3)
self.make_text(str(self.option_titles[row][box + 1]), self.font5, self.white,
self.WIDTH * (0.34 + 0.1 * box),
self.HEIGHT * (0.26 + 0.15 * row))
self.make_text(str(self.custom[row]), self.font5, self.white,
self.WIDTH * (0.375 + 0.1 * 3),
self.HEIGHT * (0.26 + 0.15 * row))
pygame.draw.rect(self.window, self.white,
(self.WIDTH * 0.6, self.HEIGHT * (0.2 + 0.15 * row), 160, 80), 3)
self.make_text(self.option_titles[3], self.font5, self.white, self.WIDTH * 0.15,
self.HEIGHT * 0.695)
for box in range(8):
if box + 1 in self.game_options[3][1]:
pygame.draw.rect(self.window, self.dark_green,
(self.WIDTH * 0.3 + 61.3 * box, self.HEIGHT * 0.65, 55, 55))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.3 + 61.3 * box, self.HEIGHT * 0.65, 55, 55))
pygame.draw.rect(self.window, self.white,
(self.WIDTH * 0.3 + 61.3 * box, self.HEIGHT * 0.65, 55, 55), 3)
self.make_text(str(box + 1), self.font5, self.white, self.WIDTH * 0.3 + 28 + 61.3 * box,
self.HEIGHT * 0.65 + 30)
self.make_text(self.event_help[self.event_helper], self.font5, self.white, self.WIDTH * 0.52,
self.HEIGHT * 0.77)
if self.WIDTH * 0.3 <= self.x <= self.WIDTH * 0.75 and 0.84 * self.HEIGHT <= self.y <= 0.94 * self.HEIGHT:
pygame.draw.rect(self.window, self.dark_green,
(self.WIDTH * 0.3, self.HEIGHT * 0.84, self.WIDTH * 0.45, self.HEIGHT * 0.1))
else:
pygame.draw.rect(self.window, self.green_title,
(self.WIDTH * 0.3, self.HEIGHT * 0.84, self.WIDTH * 0.45, self.HEIGHT * 0.1))
pygame.draw.rect(self.window, self.white,
(self.WIDTH * 0.3, self.HEIGHT * 0.84, self.WIDTH * 0.45, self.HEIGHT * 0.1), 5)
self.make_text("Start", self.font5, self.white, self.WIDTH * 0.525, self.HEIGHT * 0.895)
self.make_text("<", self.font2, self.white, self.WIDTH * 0.25, self.HEIGHT * 0.895)
for event in pygame.event.get():
if event.type == pygame.MOUSEBUTTONDOWN:
(self.x, self.y) = pygame.mouse.get_pos()
for row in range(3):
for box in range(3):
if self.WIDTH * (0.3 + 0.1 * box) <= self.x <= self.WIDTH * (
0.38 + 0.1 * box) and self.HEIGHT * (
0.2 + 0.15 * row) <= self.y <= 80 + self.HEIGHT * (0.2 + 0.15 * row):
self.game_options[row][1] = self.option_titles[row][box + 1]
self.custom[row] = "Custom"
if self.WIDTH * 0.6 <= self.x <= self.WIDTH * 0.6 + 160 and self.HEIGHT * (
0.2 + 0.15 * row) <= self.y <= 80 + self.HEIGHT * (0.2 + 0.15 * row):
for p | |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# filename: __adapterproxymanager.py_compiler
# Tencent is pleased to support the open source community by making Tars available.
#
# Copyright (C) 2016 THL A29 Limited, a Tencent company. All rights reserved.
#
# Licensed under the BSD 3-Clause License (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# https://opensource.org/licenses/BSD-3-Clause
#
# Unless required by applicable law or agreed to in writing, software distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
"""
@version: 0.01
@brief: 将rpc部分中的adapterproxymanager抽离出来,实现不同的负载均衡
"""
from enum import Enum
import random
import socket
import select
import os
import time
from .__util import LockGuard, NewLock, ConsistentHashNew
from .__trans import EndPointInfo
from .__logger import tarsLogger
from . import exception
from .__trans import TcpTransceiver
from .__TimeoutQueue import ReqMessage
from .exception import TarsException
# These imports must live here rather than at the top of the file because of a circular import.
from .QueryF import QueryFProxy
from .QueryF import QueryFPrxCallback
class AdapterProxy:
"""
@brief: 每一个Adapter管理一个服务端端口的连接,数据收发
"""
def __init__(self):
tarsLogger.debug("AdapterProxy:__init__")
self.__closeTrans = False
self.__trans = None
self.__object = None
self.__reactor = None
self.__lock = None
self.__asyncProc = None
self.__activeStateInReg = True
@property
def activatestateinreg(self):
return self.__activeStateInReg
@activatestateinreg.setter
def activatestateinreg(self, value):
self.__activeStateInReg = value
def __del__(self):
tarsLogger.debug("AdapterProxy:__del__")
def initialize(self, endPointInfo, objectProxy, reactor, asyncProc):
"""
@brief: 初始化
@param endPointInfo: 连接对端信息
@type endPointInfo: EndPointInfo
@type objectProxy: ObjectProxy
@type reactor: FDReactor
@type asyncProc: AsyncProcThread
"""
tarsLogger.debug("AdapterProxy:initialize")
self.__closeTrans = False
self.__trans = TcpTransceiver(endPointInfo)
self.__object = objectProxy
self.__reactor = reactor
self.__lock = NewLock()
self.__asyncProc = asyncProc
def terminate(self):
"""
@brief: 关闭
"""
tarsLogger.debug("AdapterProxy:terminate")
self.setCloseTrans(True)
def trans(self):
"""
@brief: 获取传输类
@return: 负责网络传输的trans
@rtype: Transceiver
"""
return self.__trans
def invoke(self, reqmsg):
"""
@brief: 远程过程调用处理方法
@param reqmsg: 请求响应报文
@type reqmsg: ReqMessage
@return: 错误码:0表示成功,-1表示连接失败
@rtype: int
"""
tarsLogger.debug("AdapterProxy:invoke")
assert self.__trans
if not self.__trans.hasConnected() and not self.__trans.isConnecting():
# -1 means the connection failed
return -1
reqmsg.request.iRequestId = self.__object.getTimeoutQueue().generateId()
self.__object.getTimeoutQueue().push(reqmsg, reqmsg.request.iRequestId)
self.__reactor.notify(self)
return 0
def finished(self, rsp):
"""
@brief: 远程过程调用返回处理
@param rsp: 响应报文
@type rsp: ResponsePacket
@return: 函数是否执行成功
@rtype: bool
"""
tarsLogger.debug("AdapterProxy:finished")
reqmsg = self.__object.getTimeoutQueue().pop(rsp.iRequestId)
if not reqmsg:
tarsLogger.error(
"finished, can not get ReqMessage, may be timeout, id: %d", rsp.iRequestId
)
return False
reqmsg.response = rsp
if reqmsg.type == ReqMessage.SYNC_CALL:
return reqmsg.servant._finished(reqmsg)
elif reqmsg.callback:
self.__asyncProc.put(reqmsg)
return True
tarsLogger.error(
"finished, adapter proxy finish fail, id: %d, ret: %d", rsp.iRequestId, rsp.iRet
)
return False
# Detect whether the connection has failed; reconnect when it has
def checkActive(self, forceConnect=False):
"""
@brief: 检测连接是否失效
@param forceConnect: 是否强制发起连接,为true时不对状态进行判断就发起连接
@type forceConnect: bool
@return: 连接是否有效
@rtype: bool
"""
tarsLogger.debug("AdapterProxy:checkActive")
lock = LockGuard(self.__lock)
tarsLogger.info(
"checkActive, %s, forceConnect: %s", self.__trans.getEndPointInfo(), forceConnect
)
if not self.__trans.isConnecting() and not self.__trans.hasConnected():
self.doReconnect()
return self.__trans.isConnecting() or self.__trans.hasConnected()
def doReconnect(self):
"""
@brief: 重新发起连接
@return: None
@rtype: None
"""
tarsLogger.debug("AdapterProxy:doReconnect")
assert self.__trans
self.__trans.reInit()
tarsLogger.info(
"doReconnect, connect: %s, fd:%d", self.__trans.getEndPointInfo(), self.__trans.getFd()
)
self.__reactor.registerAdapter(self, select.EPOLLIN | select.EPOLLOUT)
def sendRequest(self):
"""
@brief: 把队列中的请求放到Transceiver的发送缓存里
@return: 放入缓存的数据长度
@rtype: int
"""
tarsLogger.debug("AdapterProxy:sendRequest")
if not self.__trans.hasConnected():
return 0
reqmsg = self.__object.popRequest()
blen = 0
while reqmsg:
reqmsg.adapter = self
buf = reqmsg.packReq()
self.__trans.writeToSendBuf(buf)
tarsLogger.info("sendRequest, id: %d, len: %d", reqmsg.request.iRequestId, len(buf))
blen += len(buf)
# Coalesce packets into a single send, up to 8 KB, to improve async client throughput
if self.__trans.getEndPointInfo().getConnType() == EndPointInfo.SOCK_UDP or blen > 8192:
break
reqmsg = self.__object.popRequest()
return blen
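The batching rule in sendRequest — keep packing queued requests until the buffer passes 8 KB — can be sketched standalone. `coalesce` and `max_bytes` are hypothetical names for illustration, not part of the Tars API:

```python
def coalesce(queue, max_bytes=8192):
    # Pack queued request byte-strings into one send buffer.
    # As in sendRequest, a packet is appended first and the size cap is
    # checked afterwards, so the buffer may exceed max_bytes by one packet.
    buf = bytearray()
    while queue:
        buf += queue.pop(0)
        if len(buf) > max_bytes:
            break
    return bytes(buf)
```

Checking after appending keeps every packet whole; the trade-off is that one send may overshoot the cap by up to a full packet.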
def finishConnect(self):
"""
@brief: 使用的非阻塞socket连接不能立刻判断是否连接成功,
在epoll响应后调用此函数处理connect结束后的操作
@return: 是否连接成功
@rtype: bool
"""
tarsLogger.debug("AdapterProxy:finishConnect")
success = True
errmsg = ""
try:
ret = self.__trans.getSock().getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
if ret:
success = False
errmsg = os.strerror(ret)
except Exception as msg:
errmsg = msg
success = False
if not success:
self.__reactor.unregisterAdapter(self, select.EPOLLIN | select.EPOLLOUT)
self.__trans.close()
self.__trans.setConnFailed()
tarsLogger.error(
"AdapterProxy finishConnect, exception: %s, error: %s",
self.__trans.getEndPointInfo(),
errmsg,
)
return False
self.__trans.setConnected()
self.__reactor.notify(self)
tarsLogger.info(
"AdapterProxy finishConnect, connect %s success", self.__trans.getEndPointInfo()
)
return True
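The SO_ERROR check in finishConnect can be exercised against a local listener. This is a minimal sketch; `finish_connect` is a hypothetical helper, not Tars code:

```python
import os
import select
import socket

def finish_connect(sock):
    # After a non-blocking connect(), the socket turns writable once the
    # handshake finishes; SO_ERROR then reports whether it succeeded.
    err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    return (False, os.strerror(err)) if err else (True, "")

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.setblocking(False)
try:
    cli.connect(srv.getsockname())
except BlockingIOError:
    pass  # connect-in-progress is the normal non-blocking outcome
select.select([], [cli], [], 1.0)  # wait until writable
ok, msg = finish_connect(cli)
cli.close()
srv.close()
```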
def finishInvoke(self, isTimeout):
pass
# Pop a request packet
def popRequest(self):
pass
def shouldCloseTrans(self):
"""
@brief: 是否设置关闭连接
@return: 关闭连接的flag的值
@rtype: bool
"""
return self.__closeTrans
def setCloseTrans(self, closeTrans):
"""
@brief: 设置关闭连接flag的值
@param closeTrans: 是否关闭连接
@type closeTrans: bool
@return: None
@rtype: None
"""
self.__closeTrans = closeTrans
class QueryRegisterCallback(QueryFPrxCallback):
def __init__(self, adpManager):
self.__adpManager = adpManager
super(QueryRegisterCallback, self).__init__()
# QueryFPrxCallback.__init__(self)
def callback_findObjectById4All(self, ret, activeEp, inactiveEp):
eplist = [
EndPointInfo(x.host, x.port, x.timeout, x.weight, x.weightType)
for x in activeEp
if ret == 0 and x.istcp
]
ieplist = [
EndPointInfo(x.host, x.port, x.timeout, x.weight, x.weightType)
for x in inactiveEp
if ret == 0 and x.istcp
]
self.__adpManager.setEndpoints(eplist, ieplist)
def callback_findObjectById4All_exception(self, ret):
tarsLogger.error("callback_findObjectById4All_exception ret: %d", ret)
class EndpointWeightType(Enum):
E_LOOP = 0
E_STATIC_WEIGHT = 1
class AdapterProxyManager:
"""
@brief: 管理Adapter
"""
def __init__(self):
tarsLogger.debug("AdapterProxyManager:__init__")
self.__comm = None
self.__object = None
# __adps maps str(EndPointInfo) -> [EndPointInfo, AdapterProxy, cnt],
# where cnt is the access count
self.__adps = {}
self.__iadps = {}
self.__newLock = None
self.__isDirectProxy = True
self.__lastFreshTime = 0
self.__queryRegisterCallback = QueryRegisterCallback(self)
self.__regAdapterProxyDict = {}
self.__lastConHashPrxList = []
self.__consistentHashWeight = None
self.__weightType = EndpointWeightType.E_LOOP
self.__update = True
self.__lastWeightedProxyData = {}
def initialize(self, comm, objectProxy, eplist):
"""
@brief: 初始化
"""
tarsLogger.debug("AdapterProxyManager:initialize")
self.__comm = comm
self.__object = objectProxy
self.__newLock = NewLock()
self.__isDirectProxy = len(eplist) > 0
if self.__isDirectProxy:
self.setEndpoints(eplist, {})
else:
self.refreshEndpoints()
def terminate(self):
"""
@brief: 释放资源
"""
tarsLogger.debug("AdapterProxyManager:terminate")
# self.__lock.acquire()
lock = LockGuard(self.__newLock)
for ep, epinfo in self.__adps.items():
epinfo[1].terminate()
self.__adps = {}
self.__lock.release()
def refreshEndpoints(self):
"""
@brief: 刷新服务器列表
@return: 新的服务列表
@rtype: EndPointInfo列表
"""
tarsLogger.debug("AdapterProxyManager:refreshEndpoints")
if self.__isDirectProxy:
return
interval = self.__comm.getProperty("refresh-endpoint-interval", float) / 1000
locator = self.__comm.getProperty("locator")
if "@" not in locator:
raise exception.TarsRegistryException("locator is not valid: " + locator)
now = time.time()
last = self.__lastFreshTime
epSize = len(self.__adps)
if last + interval < now or (epSize <= 0 and last + 2 < now):
queryFPrx = self.__comm.stringToProxy(QueryFProxy, locator)
# The first lookup is synchronous; subsequent lookups are asynchronous
if epSize == 0 or last == 0:
ret, activeEps, inactiveEps = queryFPrx.findObjectById4All(self.__object.name())
# Only TCP is supported for now
eplist = [
EndPointInfo(x.host, x.port, x.timeout, x.weight, x.weightType)
for x in activeEps
if ret == 0 and x.istcp
]
ieplist = [
EndPointInfo(x.host, x.port, x.timeout, x.weight, x.weightType)
for x in inactiveEps
if ret == 0 and x.istcp
]
self.setEndpoints(eplist, ieplist)
else:
queryFPrx.async_findObjectById4All(
self.__queryRegisterCallback, self.__object.name()
)
self.__lastFreshTime = now
def getEndpoints(self):
"""
@brief: 获取可用服务列表 如果启用分组,只返回同分组的服务端ip
@return: 获取节点列表
@rtype: EndPointInfo列表
"""
tarsLogger.debug("AdapterProxyManager:getEndpoints")
lock = LockGuard(self.__newLock)
ret = [x[1][0] for x in list(self.__adps.items())]
return ret
def setEndpoints(self, eplist, ieplist):
"""
@brief: 设置服务端信息
@para eplist: 活跃的被调节点列表
@para ieplist: 不活跃的被调节点列表
"""
tarsLogger.debug("AdapterProxyManager:setEndpoints")
adps = {}
iadps = {}
comm = self.__comm
isNeedNotify = False
lock = LockGuard(self.__newLock)
isStartStatic = True
for ep in eplist:
if ep.getWeightType() == 0:
isStartStatic = False
epstr = str(ep)
if epstr in self.__adps:
adps[epstr] = self.__adps[epstr]
continue
isNeedNotify = True
self.__update = True
adapter = AdapterProxy()
adapter.initialize(ep, self.__object, comm.getReactor(), comm.getAsyncProc())
adapter.activatestateinreg = True
adps[epstr] = [ep, adapter, 0]
self.__adps, adps = adps, self.__adps
for iep in ieplist:
iepstr = str(iep)
if iepstr in self.__iadps:
iadps[iepstr] = self.__iadps[iepstr]
continue
isNeedNotify = True
adapter = AdapterProxy()
adapter.initialize(iep, self.__object, comm.getReactor(), comm.getAsyncProc())
adapter.activatestateinreg = False
iadps[iepstr] = [iep, adapter, 0]
self.__iadps, iadps = iadps, self.__iadps
if isStartStatic:
self.__weightType = EndpointWeightType.E_STATIC_WEIGHT
else:
self.__weightType = EndpointWeightType.E_LOOP
if isNeedNotify:
self.__notifyEndpoints(self.__adps, self.__iadps)
# Close connections that have become stale
for ep in adps:
if ep not in self.__adps:
adps[ep][1].terminate()
def __notifyEndpoints(self, actives, inactives):
lock = LockGuard(self.__newLock)
self.__regAdapterProxyDict.clear()
self.__regAdapterProxyDict.update(actives)
self.__regAdapterProxyDict.update(inactives)
def __getNextValidProxy(self):
"""
@brief: 刷新本地缓存列表,如果服务下线了,要求删除本地缓存
@return:
@rtype: EndPointInfo列表
@todo: 优化负载均衡算法
"""
tarsLogger.debug("AdapterProxyManager:getNextValidProxy")
lock = LockGuard(self.__newLock)
if len(self.__adps) == 0:
raise TarsException("the activate adapter proxy is empty")
sortedActivateAdp = sorted(list(self.__adps.items()), key=lambda item: item[1][2])
sortedActivateAdpSize = len(sortedActivateAdp)
while sortedActivateAdpSize != 0:
if sortedActivateAdp[0][1][1].checkActive():
self.__adps[sortedActivateAdp[0][0]][2] += 1
# what is returned is the AdapterProxy
return self.__adps[sortedActivateAdp[0][0]][1]
sortedActivateAdp.pop(0)
sortedActivateAdpSize -= 1
# Randomly kick one node so it reconnects; randint's upper bound is
# inclusive, so it must be len - 1 to avoid an IndexError
adpPrx = list(self.__adps.items())[random.randint(0, len(self.__adps) - 1)][1][1]
adpPrx.checkActive()
return None
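The least-used selection in __getNextValidProxy can be shown with plain data. `alive` stands in for checkActive(), and the whole helper is a hypothetical sketch of the bookkeeping, not the Tars implementation:

```python
import random

def pick_least_used(adps):
    # adps mirrors __adps: {epstr: [endpoint_info, proxy, use_count]}.
    # Try endpoints in ascending use_count; bump the winner's counter.
    for ep, entry in sorted(adps.items(), key=lambda kv: kv[1][2]):
        if entry[1].alive():
            entry[2] += 1
            return entry[1]
    # Nothing alive: poke one random endpoint so it reconnects.
    random.choice(list(adps.values()))[1].reconnect()
    return None
```

Sorting by the stored counter spreads calls across endpoints without any extra state beyond the per-endpoint count.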
def __getHashProxy(self, reqmsg):
if self.__weightType == EndpointWeightType.E_LOOP:
if reqmsg.isConHash:
return self.__getConHashProxyForNormal(reqmsg.hashCode)
else:
return self.__getHashProxyForNormal(reqmsg.hashCode)
else:
if reqmsg.isConHash:
return self.__getConHashProxyForWeight(reqmsg.hashCode)
else:
return self.__getHashProxyForWeight(reqmsg.hashCode)
def __getHashProxyForNormal(self, hashCode):
tarsLogger.debug("AdapterProxyManager:getHashProxyForNormal")
lock = LockGuard(self.__newLock)
regAdapterProxyList = sorted(
list(self.__regAdapterProxyDict.items()), key=lambda item: item[0]
)
allPrxSize = len(regAdapterProxyList)
if allPrxSize == 0:
raise TarsException("the adapter proxy is empty")
hashNum = hashCode % allPrxSize
if (
regAdapterProxyList[hashNum][1][1].activatestateinreg
and regAdapterProxyList[hashNum][1][1].checkActive()
):
epstr = regAdapterProxyList[hashNum][0]
self.__regAdapterProxyDict[epstr][2] += 1
if epstr in self.__adps:
self.__adps[epstr][2] += 1
elif epstr in self.__iadps:
self.__iadps[epstr][2] += 1
return self.__regAdapterProxyDict[epstr][1]
else:
if len(self.__adps) == 0:
raise TarsException("the activate adapter proxy is empty")
activeProxyList = list(self.__adps.items())
| |
+= 1
# print("passedStationCount",st1,st2,down,cnt)
return cnt
def resetStationName(self, old, new, auto_field=False):
old_dict = self.stationByDict(old)
if old_dict is not None:
old_dict["zhanming"] = new if not auto_field else new.split('::')[0]
# 更新标尺中站名
for ruler in self.line.rulers:
ruler.changeStationName(old, new)
for train in self.trains():
if train.isSfz(old):
train.sfz = new
if train.isZdz(old):
train.zdz = new
if train._localFirst == old:
train._localFirst = new
elif train._localLast == old:
train._localLast = new
st_dict = train.stationDict(old)
if st_dict is not None:
st_dict["zhanming"] = new
# Update the station name in maintenance windows (forbid)
self.line.forbid.changeStationName(old, new)
self.line.forbid2.changeStationName(old, new)
self.line.changeStationNameUpdateMap(old, new)
def addTrainByGraph(self, graph, cover=False):
"""
添加车次,返回数量
"""
num = 0
for train in graph.trains():
if train.localCount(self) >= 2:
if not self.checiExisted(train.fullCheci()):
num += 1
self.addTrain(train)
elif cover:
num += 1
t = self.trainFromCheci(train.fullCheci())
# Temporary handling: hand over the circuit data
circuit = t.carriageCircuit()
if circuit is not None:
circuit.replaceTrain(t, train)
train.setCarriageCircuit(circuit)
self.delTrain(t)
self.addTrain(train)
self.checkCircuits()
return num
def preAddTrainByGraph(self, graph, all:bool=False):
"""
2019.07.19新增。预导入所有与本线有关涉的车次和交路。
此函数只能在临时对象中调用。自身的车次表、交路表应当是空的。
注意,此操作后的circuits是不安全的,执行train()可能会引发TrianNotFoundException。
"""
tm1 = time.perf_counter()
for train in graph.trains():
# if train.localCount(self) >= 2:
if all or train.isLocalTrain(self):
circuit = train.carriageCircuit()
if circuit is not None:
if circuit not in self._circuits:
circuit.setGraph(self)
self.addCircuit(circuit)
self.addTrain(train)
tm2 = time.perf_counter()
print("预导入线路历时", tm2 - tm1)
def checkCircuits(self):
"""
2020.02.02新增。
检查所有交路信息。如果找不到对应的车次,则设置为虚拟。
"""
for circuit in self.circuits():
circuit.identifyTrain(full_only=True)
def setMarkdown(self, mark: str):
self._markdown = mark
def markdown(self):
try:
return self._markdown
except AttributeError:
self._markdown = ""
return ''
def save_excel(self, filename: str):
try:
import openpyxl
from openpyxl.styles import Font, Alignment
except ImportError:
return
wb = openpyxl.Workbook()
ws = wb.active
ws['A1'] = f'{self.firstStation()}-{self.lastStation()}间列车时刻表'
# Write the left-hand header column
ws['A3'] = '始发站'
ws.merge_cells('A3:A4')
ws['A5'] = '终到站'
ws.merge_cells('A5:A6')
ws['A7'] = '列车种类'
ws.merge_cells('A7:A8')
ws['A9'] = '车次'
ws['A10'] = '车站'
for row in range(3, 11):
ws.row_dimensions[row].font = Font(name='SimSum', size=9)
ws.row_dimensions[row].alignment = Alignment(horizontal='center', vertical='center')
ws.row_dimensions[row].height = 9.7
start = 11  # the table body starts at row 11
# Write the stations
station_row_dict = {}
cur = 11
for station in self.stations():
ws.cell(row=cur, column=1, value=station)
ws.merge_cells(start_row=cur, end_row=cur + 1, start_column=1, end_column=1)
station_row_dict[station] = cur
ws.row_dimensions[cur].height = 9.7
ws.row_dimensions[cur + 1].height = 9.7
ws.row_dimensions[cur].alignment = Alignment(horizontal='center', vertical='center')
ws.row_dimensions[cur + 1].alignment = Alignment(horizontal='center', vertical='center')
cur += 2
ws.column_dimensions['A'].width = 12
# Write the trains, down direction first
last_merge_sfz, last_merge_zdz, last_merge_type = 1, 1, 1
col = 2
last_train = None
for train in self.trains():
for dct in train.itemInfo():
if not dct['down']:
continue
if last_train and train.sfz == last_train.sfz:
try:
ws.unmerge_cells(start_row=3, end_row=4, start_column=last_merge_sfz, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=3, end_row=4, start_column=last_merge_sfz, end_column=col)
else:
ws.merge_cells(start_row=3, end_row=4, start_column=col, end_column=col)
last_merge_sfz = col
ws.cell(row=3, column=last_merge_sfz, value=train.sfz)  # must write via the leftmost cell of the merged range
if last_train and train.zdz == last_train.zdz:
try:
ws.unmerge_cells(start_row=5, end_row=6, start_column=last_merge_zdz, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=5, end_row=6, start_column=last_merge_zdz, end_column=col)
else:
ws.merge_cells(start_row=5, end_row=6, start_column=col, end_column=col)
last_merge_zdz = col
c = ws.cell(row=5, column=last_merge_zdz, value=train.zdz)
col_str = c.column_letter
ws.column_dimensions[col_str].width = 6  # set the column width to 6
if last_train and train.type == last_train.type:
try:
ws.unmerge_cells(start_row=7, end_row=8, start_column=last_merge_type, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=7, end_row=8, start_column=last_merge_type, end_column=col)
else:
ws.merge_cells(start_row=7, end_row=8, start_column=col, end_column=col)
last_merge_type = col
ws.cell(row=7, column=last_merge_type, value=train.type)
checi = train.fullCheci()
if '/' in checi:
ws.cell(row=9, column=col, value=checi.split('/')[0])
ws.cell(row=10, column=col, value='/' + checi.split('/', maxsplit=1)[1])
else:
ws.cell(row=9, column=col, value=checi)
ws.merge_cells(start_row=9, end_row=10, start_column=col, end_column=col)
last_dict = None
# Iterate over the timetable
for st_dict in train.timetable:
for i, s in station_row_dict.items():
if stationEqual(i, st_dict['zhanming']):
row = s
break
else:
continue
if train.isSfz(st_dict['zhanming']):
ws.cell(row=row, column=col, value='')
ws.cell(row=row + 1, column=col, value=self.outTime(st_dict['cfsj'], True))
elif train.isZdz(st_dict["zhanming"]):
ws.cell(row=row, column=col, value=self.outTime(st_dict['ddsj'], True))
ws.cell(row=row + 1, column=col, value=' --')
elif train.stationStopped(st_dict):
# Stops at this station: always write the full arrival time and an abbreviated departure time
ddsj_str = f'{st_dict["ddsj"].hour:2d}:{st_dict["ddsj"].minute:02d}'
sec = st_dict['ddsj'].second
if sec:
ddsj_str += f"{sec:02d}"
else:
ddsj_str += ' '
ws.cell(row=row, column=col, value=ddsj_str)
if st_dict['ddsj'].hour == st_dict['cfsj'].hour:
cfsj_str = ' '
else:
cfsj_str = f"{st_dict['cfsj'].hour:2d}:"
cfsj_str += f'{st_dict["cfsj"].minute:02d}'
sec = st_dict['cfsj'].second
if sec:
cfsj_str += f"{sec:02d}"
else:
cfsj_str += ' '
ws.cell(row=row + 1, column=col, value=cfsj_str)
else:
give_hour = False
if not last_dict:
give_hour = True
elif last_dict['cfsj'].hour != st_dict['ddsj'].hour:
give_hour = True
ws.cell(row=row, column=col, value=' ...')
tgsj_str = f'{st_dict["ddsj"].hour:2d}:' if give_hour else ' '
tgsj_str += f'{st_dict["ddsj"].minute:02d}'
sec = st_dict['ddsj'].second
if sec:
tgsj_str += f"{sec:02d}"
else:
tgsj_str += ' '
ws.cell(row=row + 1, column=col, value=tgsj_str)
last_dict = st_dict
col += 1
last_train = train
# Up direction
for train in self.trains():
for dct in train.itemInfo():
if dct['down']:
continue
if last_train and train.sfz == last_train.sfz:
try:
ws.unmerge_cells(start_row=3, end_row=4, start_column=last_merge_sfz, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=3, end_row=4, start_column=last_merge_sfz, end_column=col)
else:
ws.merge_cells(start_row=3, end_row=4, start_column=col, end_column=col)
last_merge_sfz = col
c = ws.cell(row=3, column=last_merge_sfz, value=train.sfz)
col_str = c.column_letter
ws.column_dimensions[col_str].width = 6  # set the column width to 6
if last_train and train.zdz == last_train.zdz:
try:
ws.unmerge_cells(start_row=5, end_row=6, start_column=last_merge_zdz, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=5, end_row=6, start_column=last_merge_zdz, end_column=col)
else:
ws.merge_cells(start_row=5, end_row=6, start_column=col, end_column=col)
last_merge_zdz = col
ws.cell(row=5, column=last_merge_zdz, value=train.zdz)
if last_train and train.type == last_train.type:
try:
ws.unmerge_cells(start_row=7, end_row=8, start_column=last_merge_type, end_column=col - 1)
except:
pass
ws.merge_cells(start_row=7, end_row=8, start_column=last_merge_type, end_column=col)
else:
ws.merge_cells(start_row=7, end_row=8, start_column=col, end_column=col)
last_merge_type = col
ws.cell(row=7, column=last_merge_type, value=train.type)
checi = train.fullCheci()
if '/' in checi:
ws.cell(row=9, column=col, value=checi.split('/')[0])
ws.cell(row=10, column=col, value='/' + checi.split('/', maxsplit=1)[1])
else:
ws.cell(row=9, column=col, value=checi)
ws.merge_cells(start_row=9, end_row=10, start_column=col, end_column=col)
last_dict = None
# Iterate over the timetable
for st_dict in train.timetable:
for i, s in station_row_dict.items():
if stationEqual(i, st_dict['zhanming']):
row = s
break
else:
continue
if train.isSfz(st_dict['zhanming']):
ws.cell(row=row + 1, column=col, value='')
ws.cell(row=row, column=col, value=self.outTime(st_dict['cfsj'], True))
elif train.isZdz(st_dict["zhanming"]):
ws.cell(row=row + 1, column=col, value=self.outTime(st_dict['ddsj'], True))
ws.cell(row=row, column=col, value=' --')
elif train.stationStopped(st_dict):
# Stops at this station: always write the full arrival time and an abbreviated departure time
ddsj_str = f'{st_dict["ddsj"].hour:2d}:{st_dict["ddsj"].minute:02d}'
sec = st_dict['ddsj'].second
if sec:
ddsj_str += f"{sec:02d}"
else:
ddsj_str += ' '
ws.cell(row=row + 1, column=col, value=ddsj_str)
if st_dict['ddsj'].hour == st_dict['cfsj'].hour:
cfsj_str = ' '
else:
cfsj_str = f"{st_dict['cfsj'].hour:2d}:"
cfsj_str += f'{st_dict["cfsj"].minute:02d}'
sec = st_dict['cfsj'].second
if sec:
cfsj_str += f"{sec:02d}"
else:
cfsj_str += ' '
ws.cell(row=row, column=col, value=cfsj_str)
else:
give_hour = False
if not last_dict:
give_hour = True
elif last_dict['cfsj'].hour != st_dict['ddsj'].hour:
give_hour = True
ws.cell(row=row + 1, column=col, value=' ...')
tgsj_str = f'{st_dict["ddsj"].hour:2d}:' if give_hour else ' '
tgsj_str += f'{st_dict["ddsj"].minute:02d}'
sec = st_dict['ddsj'].second
if sec:
tgsj_str += f"{sec:02d}"
else:
tgsj_str += ' '
ws.cell(row=row, column=col, value=tgsj_str)
col += 1
last_train = train
for row in range(1, ws.max_row + 1):
for col in range(1, ws.max_column + 1):
ws.cell(row=row, column=col).alignment = Alignment(horizontal='center',
vertical='center', shrink_to_fit=True)
ws.cell(row=row, column=col).font = Font(name='宋体', size=9)
wb.save(filename)
def outTime(self, tgsj, give_hour: bool):
tgsj_str = f'{tgsj.hour:2d}:' if give_hour else ' '
tgsj_str += f'{tgsj.minute:02d}'
sec = tgsj.second
if sec:
tgsj_str += f"{sec:02d}"
else:
tgsj_str += ' '
return tgsj_str
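The time-formatting convention used throughout save_excel (hour only when requested, two-digit minutes, seconds only when nonzero) boils down to the outTime helper. A standalone sketch for illustration, with padding widths assumed since the original whitespace is ambiguous:

```python
from datetime import time

def out_time(t, give_hour):
    # Hour (2 chars + colon) only when requested, else padding;
    # always two-digit minutes; seconds only when nonzero.
    s = f"{t.hour:2d}:" if give_hour else "   "
    s += f"{t.minute:02d}"
    s += f"{t.second:02d}" if t.second else "  "
    return s
```

Keeping the field widths fixed means every formatted time occupies the same number of characters, so columns stay aligned in the sheet.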
def getIntervalTrains(self, start, end, trainFilter, *, businessOnly=False, stoppedOnly=False):
"""
返回某个区间办客车次列表。数据结构为list<dict>。
//2.1版本修改逻辑为:两站皆办理业务才被选入。
2019.06.29修改逻辑:选入的条件由输入参数给定。其中stoppedOnly包含始发终到情况。
dict{
'train':train object,
'isSfz':boolean,
'isZdz':boolean,
'from':str,
'to':str,
"""
interval_list = []
for train in self.trains():
if not trainFilter.check(train):
continue
start_idx, end_idx = train.stationIndexByName(start), train.stationIndexByName(end)
if start_idx == -1 or end_idx == -1:
continue
if start_idx > end_idx:
continue
start_dict, end_dict = train.timetable[start_idx], train.timetable[end_idx]
if not (self.judgeStopAndBusiness(train, start_dict, businessOnly, stoppedOnly) and
self.judgeStopAndBusiness(train, end_dict, businessOnly, stoppedOnly)):
continue
isSfz = train.isSfz(start)
isZdz = train.isZdz(end)
train_dict = {
'train': train,
'isSfz': isSfz,
'isZdz': isZdz,
'from': start_dict['zhanming'],
'to': end_dict['zhanming']
}
interval_list.append(train_dict)
return interval_list
def judgeStopAndBusiness(self, train: Train, dct: dict, bOnly: bool, sOnly: bool):
"""
为上一个函数服务的工具性函数。判断时刻表中某车站是否符合对营业和停车的要求。
表达式写的丑是为了利用短路性提高效率。
"""
zm = dct['zhanming']
return (not sOnly or train.stationStopped(dct) or train.isSfz(zm) or train.isZdz(zm)) and \
(not bOnly or train.stationBusiness(dct))
def getIntervalCount(self, fromOrTo, isStart, trainFilter, passenger_only=False, freight_only=False,
business_train_only=False, stopped_train_only=False):
"""
获取区间对数表。
:param fromOrTo:发站或到站
:param isStart: True for start station, vice versa
后两个参数:是否仅包括办客和办货的车站。中心站(fromOrTo)不受限制。
返回数据结构list<dict>
dict{
'from'
"""
infoList = []
if isStart:
for st in self.businessStationNames(passenger_only, freight_only):
if not stationEqual(fromOrTo, st):
infoList.append({'from': fromOrTo, 'to': st, 'info':
self.getIntervalTrains(fromOrTo, st, trainFilter, businessOnly=business_train_only,
stoppedOnly=stopped_train_only
)})
else:
for st in self.businessStationNames(passenger_only, freight_only):
if not stationEqual(fromOrTo, st):
infoList.append({'to': fromOrTo, 'from': st, 'info':
self.getIntervalTrains(st, fromOrTo, trainFilter, businessOnly=business_train_only,
stoppedOnly=stopped_train_only
)})
count_list = []
for info_dict in infoList:
info = info_dict['info']
count = len(tuple(info))
countSfz = len([1 for st in info if st['isSfz']])
countZdz = len([1 for st in info if st['isZdz']])
countSfZd = len([1 for st in info if st['isZdz'] and st['isSfz']])
int_dict = {
'from': info_dict['from'],
'to': info_dict['to'],
'count': count,
'countSfz': countSfz,
'countZdz': countZdz,
'countSfZd': countSfZd
}
count_list.append(int_dict)
return count_list
def getIntervalCount_faster(self, fromOrTo, isStart, trainFilter,
passenger_only=False, freight_only=False,
business_train_only=False, stopped_train_only=False) -> list:
"""
2019.07.12新增,破坏封装性提高效率。
原理是避免车次的时刻表被多次遍历。由于Line对象的name->dict有映射表而可以以近常量的效率完成,
故使用反复的graph.stationDict代替反复的train.stationDict可显著提高效率。
"""
# 统计单源点车站对数的四个表。数据结构为str,int,没有的站即为0.
if not self.stationInLine(fromOrTo):
return []
startEndCount = {}
startCount = {}
endCount = {}
allCount = {}
| |
Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 2.0263,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 11.7643,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.233947,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.386441,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.26255,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.325752,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.525426,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.265217,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.11639,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.178999,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.39832,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.238523,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0136635,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.186424,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.10105,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.424947,
'Execution Unit/Register Files/Runtime Dynamic': 0.114713,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.451243,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.881487,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.90441,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000762024,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000762024,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000662347,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000255653,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00145159,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00363799,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00735535,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0971419,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.17906,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.239948,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.329938,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.69746,
'Instruction Fetch Unit/Runtime Dynamic': 0.678021,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.063601,
'L2/Runtime Dynamic': 0.00404332,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.9178,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.809374,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0543739,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0543739,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.17456,
'Load Store Unit/Runtime Dynamic': 1.1319,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.134077,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.268153,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0475843,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0485323,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.384191,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0393571,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.622042,
'Memory Management Unit/Runtime Dynamic': 0.0878894,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 22.5455,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.627446,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0223329,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.150628,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': | |
'default')
def check_edge(edge, source_name, target_name):
self.assertIsInstance(edge, dict)
source_suid = df[df.name.eq(source_name)].index[0]
target_suid = df[df.name.eq(target_name)].index[0]
self.assertEqual(edge['source'], source_suid)
self.assertEqual(edge['target'], target_suid)
self.assertIsNotNone(edge['SUID'])
# Verify that a single edge is added
res = add_cy_edges(['YLR075W', 'YKL028W'])
self.assertIsInstance(res, list)
self.assertEqual(len(res), 1)
check_edge(res[0], 'YLR075W', 'YKL028W')
self.assertEqual(get_edge_count(), start_edge_count + 1)
# Verify that three more edges are added
res = add_cy_edges([['YKL028W', 'YJR066W'], ['YJR066W', 'YLR452C'], ['YGR046W', 'YLR452C']])
self.assertIsInstance(res, list)
self.assertEqual(len(res), 3)
check_edge(res[0], 'YKL028W', 'YJR066W')
check_edge(res[1], 'YJR066W', 'YLR452C')
check_edge(res[2], 'YGR046W', 'YLR452C')
self.assertEqual(get_edge_count(), start_edge_count + 4)
@print_entry_exit
def test_get_edge_count(self):
# Initialization
load_test_session()
# Verify the expected edge count
self.assertEqual(get_edge_count(), 359)
@print_entry_exit
def test_get_edge_info(self):
# Initialization
load_test_session()
def check_edge_info(edge_info, source_name, target_name, edge_name, betweenness):
source_suid = node_name_to_node_suid(source_name)[0]
target_suid = node_name_to_node_suid(target_name)[0]
edge_suid = edge_name_to_edge_suid(edge_name)[0]
self.assertIsInstance(edge_info, dict)
self.assertEqual(edge_info['source'], source_suid)
self.assertEqual(edge_info['target'], target_suid)
self.assertEqual(edge_info['SUID'], edge_suid)
self.assertEqual(edge_info['shared name'], edge_name)
self.assertEqual(edge_info['shared interaction'], 'pp')
self.assertEqual(edge_info['name'], edge_name)
self.assertEqual(edge_info['selected'], False)
self.assertEqual(edge_info['interaction'], 'pp')
self.assertEqual(edge_info['EdgeBetweenness'], betweenness)
# Verify that a string containing an edge returns valid edge information
res = get_edge_info('YDR277C (pp) YDL194W')
self.assertIsInstance(res, list)
self.assertEqual(len(res), 1)
check_edge_info(res[0], 'YDR277C', 'YDL194W', 'YDR277C (pp) YDL194W', 496.0)
# Verify that a list containing an edge returns valid edge information
res = get_edge_info(['YDR277C (pp) YDL194W'])
self.assertIsInstance(res, list)
self.assertEqual(len(res), 1)
check_edge_info(res[0], 'YDR277C', 'YDL194W', 'YDR277C (pp) YDL194W', 496.0)
# Verify that a list containing multiple edges returns valid edge information
res = get_edge_info(['YDR277C (pp) YDL194W', 'YDR277C (pp) YJR022W'])
self.assertIsInstance(res, list)
self.assertEqual(len(res), 2)
check_edge_info(res[0], 'YDR277C', 'YDL194W', 'YDR277C (pp) YDL194W', 496.0)
check_edge_info(res[1], 'YDR277C', 'YJR022W', 'YDR277C (pp) YJR022W', 988.0)
# Verify the error when a bad edge is requested
self.assertRaises(CyError, get_edge_info, 'junk')
@print_entry_exit
def test_get_all_edges(self):
# Initialization
load_test_session()
# Verify that the expected number of edges is returned
res = get_all_edges()
self.assertIsInstance(res, list)
self.assertEqual(len(res), 359)
@print_entry_exit
def test_clone_network(self):
# Initialization
load_test_session()
start_suid = get_network_suid()
# Verify that a network clone is plausible
self._check_cloned_network(clone_network(), start_suid, get_network_name(start_suid),
get_node_count(start_suid), get_edge_count(start_suid))
@print_entry_exit
def test_create_subnet(self):
# Initialization
load_test_session()
base_suid = get_network_suid()
base_name = get_network_name(base_suid)
        # Verify that creating a subnet containing all nodes produces a plausible copy
self._check_cloned_network(create_subnetwork(nodes='all', network=base_suid), base_suid, base_name,
get_node_count(base_suid), get_edge_count(base_suid))
# Verify that creating a subset subnet produces a plausible copy
self._check_cloned_network(
create_subnetwork(nodes=['RAP1', 'HIS4', 'PDC1', 'RPL18A'], nodes_by_col='COMMON',
subnetwork_name=base_name + 'xx', network=base_suid), base_suid, base_name, 4, 3)
@print_entry_exit
def test_create_network_from_data_frames(self):
node_data = {'id': ["node 0", "node 1", "node 2", "node 3"],
'group': ["A", "A", "B", "B"],
'score': [20, 10, 15, 5]}
nodes = df.DataFrame(data=node_data, columns=['id', 'group', 'score'])
edge_data = {'source': ["node 0", "node 0", "node 0", "node 2"],
'target': ["node 1", "node 2", "node 3", "node 3"],
'interaction': ["inhibits", "interacts", "activates", "interacts"],
'weight': [5.1, 3.0, 5.2, 9.9]}
edges = df.DataFrame(data=edge_data, columns=['source', 'target', 'interaction', 'weight'])
# Verify that a network can be created containing dataframe encoding both nodes and edges
res = create_network_from_data_frames(nodes, edges, title='From node & edge dataframe')
suid_1 = res['networkSUID']
self.assertEqual(get_network_name(suid_1), 'From node & edge dataframe')
self.assertEqual(get_node_count(suid_1), 4)
self.assertEqual(get_edge_count(suid_1), 4)
self.assertSetEqual(set(get_all_nodes(suid_1)), set(['node 0', 'node 1', 'node 2', 'node 3']))
self.assertSetEqual(set(get_all_edges(suid_1)), set(
['node 0 (inhibits) node 1', 'node 0 (interacts) node 2', 'node 0 (activates) node 3',
'node 2 (interacts) node 3']))
self.assertSetEqual(set(get_table_column_names('node', network=suid_1)),
set(['SUID', 'shared name', 'id', 'score', 'group', 'name', 'selected']))
self.assertSetEqual(set(get_table_column_names('edge', network=suid_1)), set(
['SUID', 'shared name', 'shared interaction', 'source', 'target', 'data.key.column', 'weight', 'name',
'selected', 'interaction']))
self.assertDictEqual(get_table_column_types('node', network=suid_1),
{'SUID': 'Long', 'shared name': 'String', 'id': 'String', 'score': 'Integer',
'group': 'String', 'name': 'String', 'selected': 'Boolean'})
self.assertDictEqual(get_table_column_types('edge', network=suid_1),
{'SUID': 'Long', 'shared name': 'String', 'shared interaction': 'String',
'source': 'String', 'target': 'String', 'data.key.column': 'Integer', 'weight': 'Double',
'name': 'String', 'selected': 'Boolean', 'interaction': 'String'})
# Verify that a network can be created from a dataframe containing just edges
res = create_network_from_data_frames(edges=edges, collection='Another collection',
title='From just edge dataframe')
suid_2 = res['networkSUID']
self.assertEqual(get_network_name(suid_2), 'From just edge dataframe')
self.assertEqual(get_node_count(suid_2), 4)
self.assertEqual(get_edge_count(suid_2), 4)
self.assertSetEqual(set(get_all_nodes(suid_2)), set(['node 0', 'node 1', 'node 2', 'node 3']))
self.assertSetEqual(set(get_all_edges(suid_2)), set(
['node 0 (inhibits) node 1', 'node 0 (interacts) node 2', 'node 0 (activates) node 3',
'node 2 (interacts) node 3']))
self.assertSetEqual(set(get_table_column_names('node', network=suid_2)),
set(['SUID', 'shared name', 'id', 'name', 'selected']))
self.assertSetEqual(set(get_table_column_names('edge', network=suid_2)), set(
['SUID', 'shared name', 'shared interaction', 'source', 'target', 'data.key.column', 'weight', 'name',
'selected', 'interaction']))
self.assertDictEqual(get_table_column_types('node', network=suid_2),
{'SUID': 'Long', 'shared name': 'String', 'id': 'String', 'name': 'String',
'selected': 'Boolean'})
self.assertDictEqual(get_table_column_types('edge', network=suid_2),
{'SUID': 'Long', 'shared name': 'String', 'shared interaction': 'String',
'source': 'String', 'target': 'String', 'data.key.column': 'Integer', 'weight': 'Double',
'name': 'String', 'selected': 'Boolean', 'interaction': 'String'})
# Verify that a disconnected network can be created from a dataframe containing just nodes
res = create_network_from_data_frames(nodes=nodes, collection='A third collection',
title='From just nodes dataframe')
suid_3 = res['networkSUID']
self.assertEqual(get_network_name(suid_3), 'From just nodes dataframe')
self.assertEqual(get_node_count(suid_3), 4)
self.assertEqual(get_edge_count(suid_3), 0)
self.assertSetEqual(set(get_all_nodes(suid_3)), set(['node 0', 'node 1', 'node 2', 'node 3']))
self.assertIsNone(get_all_edges(suid_3))
self.assertSetEqual(set(get_table_column_names('node', network=suid_3)),
set(['SUID', 'shared name', 'id', 'score', 'group', 'name', 'selected']))
# TODO: Verify that this list of edge columns should be created ... why not source, target?
self.assertSetEqual(set(get_table_column_names('edge', network=suid_3)),
set(['SUID', 'shared name', 'shared interaction', 'name', 'selected', 'interaction']))
self.assertDictEqual(get_table_column_types('node', network=suid_3),
{'SUID': 'Long', 'shared name': 'String', 'id': 'String', 'score': 'Integer',
'group': 'String', 'name': 'String', 'selected': 'Boolean'})
self.assertDictEqual(get_table_column_types('edge', network=suid_3),
{'SUID': 'Long', 'shared name': 'String', 'shared interaction': 'String', 'name': 'String',
'selected': 'Boolean', 'interaction': 'String'})
# Verify that when no edges or nodes are passed in, an error occurs
self.assertRaises(CyError, create_network_from_data_frames)
@print_entry_exit
def test_import_network_from_file(self):
# Verify that test network loads from test data directory
res = import_network_from_file('data/galFiltered.sif')
self.assertIsInstance(res['networks'], list)
self.assertEqual(len(res['networks']), 1)
self.assertIsInstance(res['views'], list)
self.assertEqual(len(res['views']), 1)
# Verify that default network loads
res = import_network_from_file()
self.assertIsInstance(res['networks'], list)
self.assertEqual(len(res['networks']), 1)
self.assertIsInstance(res['views'], list)
self.assertEqual(len(res['views']), 1)
self.assertRaises(CyError, import_network_from_file, 'bogus')
@print_entry_exit
def test_create_igraph_from_network(self):
# Initialization
load_test_session()
all_nodes = get_all_nodes()
all_edges = get_all_edges()
i = create_igraph_from_network()
# verify that all nodes are present
self.assertEqual(len(i.vs), len(all_nodes))
self.assertNotIn(False, [v['name'] in all_nodes for v in i.vs])
# verify that all edges are present
self.assertEqual(len(i.es), len(all_edges))
i_edges = [[x['source'], x['target']] for x in i.es]
        self.assertNotIn(False, [re.split(r" \(.*\) ", x) in i_edges for x in all_edges])
@print_entry_exit
def test_create_networkx_from_network(self):
# Initialization
load_test_session()
cyedge_table = tables.get_table_columns('edge')
cynode_table = tables.get_table_columns('node')
cynode_table.set_index('name', inplace=True) # Index by 'name' instead of SUID ... drop 'name' from attributes
# Verify that the networkx returns the right number of rows and columns
netx = create_networkx_from_network()
self.assertEqual(netx.number_of_nodes(), len(cynode_table.index))
self.assertEqual(netx.number_of_edges(), len(cyedge_table.index))
# Verify that all edges are present, and all of their attributes are correct
# Note that edge SUIDs are carried to distinguish multiple edges that connect the same nodes
netx_out_edges = netx.out_edges(data=True, keys=True)
for src_node, targ_node, edge_suid, edge_attrs in netx_out_edges:
self.assertDictEqual(edge_attrs, dict(cyedge_table.loc[edge_suid]))
# Verify that all nodes are present, and all attributes are correct. Note that node YER056CA has 'nan' values,
# so this verifies that nan is carried into the networkx.
netx_nodes = netx.nodes(data=True)
for node_name, node_attrs in netx_nodes:
self.assertDictEqual(node_attrs, dict(cynode_table.loc[node_name]))
# Verify that invalid network is caught
self.assertRaises(CyError, create_networkx_from_network, network='BogusNetwork')
@print_entry_exit
def test_create_network_from_networkx(self):
# Initialization
load_test_session()
cyedge_table = tables.get_table_columns('edge')
cyedge_table.set_index('name', inplace=True) # Index by 'name' instead of SUID ... drop 'name' from attributes
cyedge_table.sort_index(inplace=True)
cynode_table = tables.get_table_columns('node')
cynode_table.set_index('name', inplace=True) # Index by 'name' instead of SUID ... drop 'name' from attributes
cynode_table.sort_index(inplace=True)
def compare_table(orig_table, table_name, network):
# Compare nodes in new Cytoscape network created from NetworkX to those in the original Cytoscape network
# Start by lining up the dataframe rows for each
netx_table = tables.get_table_columns(table_name, network=network)
netx_table.set_index('name', inplace=True) # Index by 'name' to match up with orig_table
netx_table.sort_index(inplace=True)
# Verify that the new network has at least the columns of the original. There may be a few more if they were
# created for reference.
orig_table_cols = set(orig_table.columns)
netx_table_cols = set(netx_table.columns)
self.assertTrue(orig_table_cols <= netx_table_cols)
# Create a vector showing which new columns are the same as the original columns. Use .equals() to compare 'nan' properly.
s = [orig_table[col].equals(netx_table[col]) for col in orig_table_cols - {'SUID'}]
self.assertFalse(False in s)
# Get the NetworkX for a known good network galFiltered.sif and send it to Cytoscape as a new network
netx = create_networkx_from_network()
netx_suid = create_network_from_networkx(netx)['networkSUID']
self.assertEqual(netx_suid, get_network_suid()) # Verify that the new network is the selected network
        compare_table(cynode_table, 'node', netx_suid)
        compare_table(cyedge_table, 'edge', netx_suid)
# @skip
@print_entry_exit
def test_create_network_from_igraph(self):
# Initialization
load_test_session()
# TODO: Consider allowing creation of a network from an empty igraph
# This will fail but probably should not | |
import os
import copy
import concurrent.futures
import random
import pandas as pd
import numpy as np
import xarray as xr
from joblib import dump, load
from functools import partial
from typing import List, Tuple
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
def elevation_scaler(x, feature_range=(0, 1), data_range=(-420, 8848)):
    """
    MinMaxScaler for elevations on Earth
    Credit: @jhamman
    """
fmin, fmax = feature_range
dmin, dmax = data_range
scale = (fmax - fmin) / (dmax - dmin)
x_scaled = scale * x + fmin - dmin * scale
return x_scaled
def cube(x):
"""
Taking a cube, for the default inverse transformers
For some reason numpy doesn't like keyword arguments,
so you can'd do np.power(x, x2=3). This is necessary
to get around that, so that we can specify it as the
``inverse_func``.
"""
return np.power(x, 3)
def fit_transformers(df: pd.DataFrame, custom_transformers: dict={}):
"""
Fit function transformers for preprocessing data to train
a machine learning model. Note that this does not actually do
any data transformations, but merely fits the transformers
and returns them for later application. Any variables not in
    the standard set provided here will simply be standardized.
Parameters
----------
df:
Dataframe with complete set of all data for training
This should have a MultiIndex with levels ('site', 'time')
custom_transformers:
Any custom transformers to be applied. These will override
the default ones specified here.
Returns
-------
fit_transformers:
Dictionary of transformers in the form {varname: fit_transformer}
"""
# Default transformers
transformers = {
'raw_flow': Pipeline([
('log1p', FunctionTransformer(func=np.log1p, inverse_func=np.expm1,
validate=False)),
('normalize', MinMaxScaler())
]),
'target_flow': Pipeline([
('log1p', FunctionTransformer(func=np.log1p, inverse_func=np.expm1,
validate=False)),
('normalize', MinMaxScaler())
]),
'precipitation': Pipeline([
('cbrt', FunctionTransformer(func=np.cbrt, inverse_func=cube,
validate=False)),
('normalize', MinMaxScaler())
]),
'elevation': FunctionTransformer(elevation_scaler),
'temperature': StandardScaler(),
'contributing_area': MinMaxScaler(),
'other': StandardScaler(),
}
# Check if any custom transformers were given
for varname, transformer in custom_transformers.items():
transformers[varname] = transformer
    # Fit the transformers on a per-variable level. Deep-copy each transformer
    # so that columns which share a default (e.g. several 'other' columns) get
    # independent fitted state instead of refitting one shared object.
    fit_transformers = {}
    for key in df.columns:
        # Look for key in transformers, otherwise use 'other' as default
        fit_transformers[key] = copy.deepcopy(transformers.get(key, transformers['other']))
        fit_transformers[key].fit(df[[key]])
return fit_transformers
def apply_transformers(df: pd.DataFrame, fit_transformers: dict, inverse=False):
"""
Applies transformers fit in the `fit_transformers` function.
This does not mutate data in place.
Parameters
----------
df:
Dataframe with complete set of all data for training
This should have a MultiIndex with levels ('site', 'time')
fit_transformers:
Dictionary of transformers to be applied. They must be
generated by the ``fit_transformers`` function or manually
fit before this function can be used. Format is
``{variable: fit_transformer}``
inverse:
Whether to invert from the given transformers. This simply
calls the ``inverse_transform`` method instead of ``transform``
Returns
-------
out:
Dataframe transformed by each of the transformers
"""
out = pd.DataFrame(index=df.index)
if not inverse:
for key in df:
key = str(key)
out[key] = fit_transformers[key].transform(df[[key]])
else:
for key in df:
key = str(key)
out[key] = fit_transformers[key].inverse_transform(df[[key]])
return out
def save_transformers(transformers: dict, path='.') -> dict:
"""Saves transformers to disk"""
out_files = {}
for name, tformer in transformers.items():
of = f'{path}{os.sep}{name}-bmorph-transformer.joblib'
dump(tformer, of)
out_files[name] = of
return out_files
def load_transformers(file_list: dict) -> dict:
"""Load transformers from disk"""
transformers = {}
for var, file in file_list.items():
transformers[var] = load(file)
return transformers
def split_train_test_sites(sites: List, train_frac: float=0.8) -> Tuple[List, List]:
"""
Randomly partition sites into test and train samples.
Parameters
----------
sites:
List of sites to partition
train_frac:
Fraction of sites to place into the training partition
Returns
-------
train_sites, test_sites
"""
n_train = int(train_frac * len(sites))
shuffled = copy.copy(sites)
random.shuffle(shuffled)
return shuffled[:n_train], shuffled[n_train:]
def partition_train_test(df: pd.DataFrame,
                         train_frac: float=0.8) -> Tuple[pd.DataFrame, pd.DataFrame]:
"""
Randomly partition a dataframe into test and train dataframes
based on randomly splitting site groupings.
Parameters
----------
df:
Dataframe to partition. Should have MultiIndex with levels
('site', 'time')
train_frac:
Fraction of sites to place into the training partition
Returns
-------
train_df, test_df
"""
sites = np.unique(df.index.get_level_values('site'))
train_sites, test_sites = split_train_test_sites(sites, train_frac)
return df.loc[train_sites], df.loc[test_sites]
def make_lookback(df: pd.DataFrame, lookback: int=7) -> pd.DataFrame:
"""
Create a dataset for training or applying a recurrent network.
The returned dataset has dimensions ``(samples, lookback, features)``.
Credit: @jhamman
Parameters
----------
df:
Dataframe to create lookback for.
lookback:
Number of timesteps to create for the ``lookback`` dimension.
Returns
-------
A new dataset with the newly created ``lookback`` dimension
"""
coords = {'features': df.columns}
da = xr.DataArray(df.values, dims=("samples", "features"), coords=coords)
lba = da.rolling(samples=lookback).construct("lookback")
lba.coords['lookback'] = np.linspace(-1 * (lookback - 1), 0,
num=lookback, dtype=int)
mask = lba.isnull().any(("lookback", "features"))
lba = lba.where(~mask, drop=True)
return lba.transpose("samples", "lookback", "features")
def prep_lstm_training_data(df: pd.DataFrame, lookback: int=7,
                            target_feature='target_flow') -> Tuple[xr.DataArray, xr.DataArray]:
"""
Takes a dataframe, create the lookback, and separate the
training and target data.
"""
lookback_ds = make_lookback(df, lookback)
in_features = list(lookback_ds.features.values)
in_features.remove(target_feature)
lstm_features = lookback_ds.sel(features=in_features)
lstm_target = (lookback_ds.sel(features=target_feature)
.isel(lookback=-1))
return lstm_features, lstm_target
def make_metrics() -> dict:
from tensorflow.keras import backend
# root mean squared error (rmse) for regression (only for Keras tensors)
def rmse(y_true, y_pred):
return backend.sqrt(backend.mean(backend.square(y_pred - y_true), axis=-1))
# mean squared error (mse) for regression (only for Keras tensors)
def mse(y_true, y_pred):
return backend.mean(backend.square(y_pred - y_true), axis=-1)
# coefficient of determination (R^2) for regression (only for Keras tensors)
def r_square(y_true, y_pred):
SS_res = backend.sum(backend.square(y_true - y_pred))
SS_tot = backend.sum(backend.square(y_true - backend.mean(y_true)))
return 1 - SS_res / (SS_tot + backend.epsilon())
def bias(y_true, y_pred):
return backend.mean(y_pred) - backend.mean(y_true)
metrics = {'rmse': rmse, 'mse': mse, 'r_square': r_square, 'bias': bias}
return metrics
def make_callbacks(name: str) -> List:
"""Creates some utilty callbacks for use in training."""
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.callbacks import ModelCheckpoint
mc = ModelCheckpoint(f'best_{name}.h5', monitor='val_loss', mode='min',
verbose=0, save_best_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=0, patience=25)
return [es, mc]
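`EarlyStopping` with `patience=25` halts training once `val_loss` has gone 25 consecutive epochs without improving; a minimal sketch of that rule (a hypothetical helper, not the Keras implementation):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 0-based epoch at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0  # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None
```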
def create_lstm(train_shape: tuple, depth: int=1,
n_nodes: int=10, loss='mse', compilation_kwargs={}):
"""
Helper function to create various LSTM models.
Parameters
----------
train_shape:
Shape of the training data, should be following the use of the
`make_lookback` function.
depth:
How deep to construct the LSTM
n_nodes:
How wide each layer of the LSTM should be
loss:
Loss function
Returns
-------
model:
The compiled tensorflow model (using the sequential API)
"""
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.constraints import NonNeg
model = Sequential()
if depth == 1:
# If single layer, just create it
model.add(LSTM(n_nodes, input_shape=(train_shape[1], train_shape[2])))
else:
# For deeper networks we need to set some additional parameters
model.add(LSTM(n_nodes, input_shape=(train_shape[1], train_shape[2]),
return_sequences=True))
for i in range(1, depth-1):
model.add(LSTM(n_nodes, return_sequences=True))
model.add(LSTM(n_nodes))
# Output layer is just one value
model.add(Dense(1, activation='relu', kernel_constraint=NonNeg()))
model.compile(loss=loss, optimizer='adam', metrics=list(make_metrics().values()))
return model
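The depth logic above boils down to one rule: every stacked LSTM layer except the last needs `return_sequences=True`, so the next layer receives the full sequence rather than only the final hidden state. A sketch of the resulting per-layer flags:

```python
def return_sequence_flags(depth):
    """One flag per stacked LSTM layer; only the last layer collapses the sequence."""
    return [i < depth - 1 for i in range(depth)]
```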
def create_bidirectional_lstm(train_shape: tuple, depth: int=1,
n_nodes: int=10, loss='mse', compilation_kwargs={}):
"""
Helper function to create various bidirectional LSTM models.
Parameters
----------
train_shape:
Shape of the training data, should be following the use of the
``make_lookback`` function.
depth:
How deep to construct the BiLSTM
n_nodes:
How wide each layer of the BiLSTM should be
loss:
Loss function
Returns
-------
model:
The compiled tensorflow model (using the sequential API)
"""
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Bidirectional
from tensorflow.keras.constraints import NonNeg
model = Sequential()
if depth == 1:
# If single layer, just create it
model.add(Bidirectional(LSTM(n_nodes),
input_shape=(train_shape[1], train_shape[2])))
else:
# For deeper networks we need to set some additional parameters
model.add(Bidirectional(LSTM(n_nodes, return_sequences=True),
input_shape=(train_shape[1], train_shape[2]),
))
for i in range(1, depth-1):
model.add(Bidirectional(LSTM(n_nodes, return_sequences=True)))
model.add(Bidirectional(LSTM(n_nodes)))
# Output layer is just one value
model.add(Dense(1, activation='relu', kernel_constraint=NonNeg()))
model.compile(loss=loss, optimizer='adam', metrics=list(make_metrics().values()))
return model
def run_predict(ds: xr.Dataset, transformer_files: List[str],
model_file: str, lookback: int) -> np.ndarray:
"""
Take the raw dataset from mizuRoute, with other necessary features
and run the neural network to predict the bias corrected local flows.
Parameters
----------
ds:
The mizuRoute output dataset
transformer_files:
Paths to all of the transformers that were used
to transform the training data for the model
model_file:
Path to the tensorflow model
lookback:
The number of days of lookback that were used
during training
Returns
-------
np.ndarray:
The predicted bias corrected local flows
"""
    # Tensorflow cannot be imported beforehand,
    # otherwise parallelism isn't possible
from tensorflow.keras import backend as K
from tensorflow.keras.models import load_model
# Load in the model and prep the output data
model = load_model(model_file, custom_objects=make_metrics())
transformers = load_transformers(transformer_files)
corrected_flows = 0.0 * ds['raw_flow'].isel(time=slice(lookback-1, None))
    target_flow_transformer =
"""Ice-liquid water equilibrium functions.
This module provides thermodynamic properties of ice and liquid water in
equilibrium, e.g. the enthalpy of melting.
:Examples:
>>> pressure(temp=270.)
39313338.8825
>>> densityliq(temp=270.)
1019.05568894
>>> enthalpymelt(temp=270.)
325166.686739
>>> entropymelt(temp=270.)
1204.32106199
>>> volumemelt(temp=270.)
-1.04052121182e-4
>>> temperature(pres=1e7)
272.401648868
>>> densityliq(pres=1e7)
1004.79353660
>>> enthalpymelt(pres=1e7)
331548.910815
>>> entropymelt(pres=1e7)
1217.13254010
>>> volumemelt(pres=1e7)
-9.4217890326e-05
:Functions:
* :func:`eq_tp`: Calculate ice-liquid water equilibrium properties at
either temperature or pressure.
* :func:`temperature`: Temperature at ice-liquid water equilibrium.
* :func:`pressure`: Pressure at ice-liquid water equilibrium.
* :func:`densityliq`: Liquid water density at ice-liquid water
equilibrium.
* :func:`chempot`: Chemical potential at ice-liquid water equilibrium.
* :func:`densityice`: Ice density at ice-liquid water equilibrium.
* :func:`enthalpyice`: Ice enthalpy at ice-liquid water equilibrium.
* :func:`enthalpyliq`: Liquid water enthalpy at ice-liquid water
equilibrium.
* :func:`enthalpymelt`: Enthalpy of melting.
* :func:`entropyice`: Ice entropy at ice-liquid water equilibrium.
* :func:`entropyliq`: Liquid water entropy at ice-liquid water
equilibrium.
* :func:`entropymelt`: Entropy of melting.
* :func:`volumemelt`: Specific volume of melting.
"""
__all__ = ['eq_tp','temperature','pressure','densityliq','chempot','densityice',
'enthalpyice','enthalpyliq','enthalpymelt','entropyice','entropyliq',
'entropymelt','volumemelt']
import warnings
import numpy
from teospy import constants0
from teospy import ice1
from teospy import flu2
from teospy import ice2
from teospy import maths3
_CHKTOL = constants0.CHKTOL
_TTP = constants0.TTP
_PTPI = constants0.PTPI
_DLTP = constants0.DLTP
_LILTP = constants0.LILTP
_chkflubnds = constants0.chkflubnds
_chkicebnds = constants0.chkicebnds
_ice_g = ice1.ice_g
_eq_chempot = flu2.eq_chempot
_eq_pressure = flu2.eq_pressure
_newton = maths3.newton
_C_APPS = ((-1.78582981492113,-12.2325084306734,-52.8236936433529),
(-1.67329759176351e-7,-2.02262929999658e-13))
## Equilibrium functions
def _approx_t(temp):
"""Approximate PDl at T.
Approximate the pressure and liquid water density for ice and liquid
water in equilibrium at the given temperature. This approximation is
based on an empirical polynomial for density.
:arg float temp: Temperature in K.
:returns: Pressure in Pa and liquid water density in kg/m3.
"""
tau = temp/_TTP - 1
dta = 0.
for (i,a) in enumerate(_C_APPS[0]):
dta += a * tau**(i+1)
dliq = _DLTP * (1 + dta)
pres = flu2.pressure(temp,dliq)
return pres, dliq
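The density correction here is the polynomial dta = sum_i a_i * tau**(i+1) in the reduced temperature tau = T/T_t - 1; at the triple point (tau = 0) it vanishes, so the approximation returns the triple-point liquid density exactly. A standalone sketch using the coefficients from `_C_APPS` (the `DLTP` value below is an approximate stand-in for `constants0.DLTP`):

```python
TTP = 273.16           # triple-point temperature in K
DLTP = 999.793         # triple-point liquid density in kg/m3 (approximate stand-in)
A = (-1.78582981492113, -12.2325084306734, -52.8236936433529)

def approx_dliq(temp):
    """Empirical-polynomial estimate of the melting-curve liquid density."""
    tau = temp / TTP - 1
    dta = sum(a * tau ** (i + 1) for i, a in enumerate(A))
    return DLTP * (1 + dta)
```

Below the triple point the estimate rises above `DLTP`, consistent with the module's example value of about 1019 kg/m3 at 270 K.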
def _approx_p(pres):
"""Approximate TDl at P.
Approximate the temperature and liquid water density for ice and
liquid water in equilibrium at the given pressure. This
approximation is based on empirical polynomials for temperature and
density.
:arg float pres: Pressure in Pa.
:returns: Temperature in K and liquid water density in kg/m3.
"""
a1, a2 = _C_APPS[1]
psi = pres/_PTPI - 1
tau = a1*psi + a2*psi**2
temp = _TTP * (1 + tau)
dta = 0.
for (i,a) in enumerate(_C_APPS[0]):
dta += a * tau**(i+1)
dliq = _DLTP * (1 + dta)
return temp, dliq
def _diff_t(p,dl,temp):
"""Calculate ice-liquid disequilibrium at T.
Calculate both sides of the equations
given pressure = pressure of liquid water
chemical potential of ice = potential of liquid water
and their Jacobians with respect to pressure and liquid water
density. Solving these equations gives the pressure and liquid water
density at the given temperature.
:arg float p: Pressure in Pa.
:arg float dl: Liquid water density in kg/m3.
:arg float temp: Temperature in K.
:returns: Left-hand side of the equation, right-hand side,
Jacobian of LHS, and Jacobian of RHS.
:rtype: tuple(array(float))
"""
pl = _eq_pressure(0,0,temp,dl)
gi = _ice_g(0,0,temp,p)
gl = _eq_chempot(0,0,temp,dl)
lhs = numpy.array([p, gi])
rhs = numpy.array([pl, gl])
pl_d = _eq_pressure(0,1,temp,dl)
gi_p = _ice_g(0,1,temp,p)
gl_d = _eq_chempot(0,1,temp,dl)
dlhs = numpy.array([[1.,0.], [gi_p,0.]])
drhs = numpy.array([[0.,pl_d], [0.,gl_d]])
return lhs, rhs, dlhs, drhs
def _diff_p(t,dl,pres):
"""Calculate ice-liquid disequilibrium at P.
Calculate both sides of the equations
given pressure = pressure of liquid water
chemical potential of ice = potential of liquid water
and their Jacobians with respect to temperature and liquid water
density. Solving these equations gives the temperature and liquid
    water density at the given pressure.
:arg float t: Temperature in K.
:arg float dl: Liquid water density in kg/m3.
:arg float pres: Pressure in Pa.
:returns: Left-hand side of the equation, right-hand side,
Jacobian of LHS, and Jacobian of RHS.
:rtype: tuple(array(float))
"""
pl = _eq_pressure(0,0,t,dl)
gi = _ice_g(0,0,t,pres)
gl = _eq_chempot(0,0,t,dl)
lhs = numpy.array([pres, gi])
rhs = numpy.array([pl, gl])
pl_t = _eq_pressure(1,0,t,dl)
pl_d = _eq_pressure(0,1,t,dl)
gi_t = _ice_g(1,0,t,pres)
gl_t = _eq_chempot(1,0,t,dl)
gl_d = _eq_chempot(0,1,t,dl)
dlhs = numpy.array([[0.,0.], [gi_t,0.]])
drhs = numpy.array([[pl_t,pl_d], [gl_t,gl_d]])
return lhs, rhs, dlhs, drhs
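`_newton` drives `lhs - rhs` to zero using the Jacobians returned above; for a two-variable system each update is x ← x − J⁻¹ f(x) with f = lhs − rhs and J = dlhs − drhs. A minimal sketch of one such step with an explicit 2x2 solve (a generic illustration, not the `maths3.newton` implementation):

```python
def newton2_step(f, jac, x):
    """One Newton step for a 2-variable system: x - J^{-1} f(x)."""
    f1, f2 = f(x)
    (a, b), (c, d) = jac(x)
    det = a * d - b * c  # assumed nonsingular
    dx1 = (d * f1 - b * f2) / det
    dx2 = (-c * f1 + a * f2) / det
    return (x[0] - dx1, x[1] - dx2)
```

For a linear system the step lands on the root immediately, which makes it easy to check.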
def eq_tp(temp=None,pres=None,dliq=None,chkvals=False,chktol=_CHKTOL,
temp0=None,pres0=None,dliq0=None,chkbnd=False,mathargs=None):
"""Get primary ice-liquid variables at T or P.
Get the values of all primary variables for ice and liquid water in
equilibrium at either of a given temperature or pressure.
If the calculation has already been done, the results can be passed
to avoid unnecessary repeat calculations. If enough values are
passed, they will be checked for consistency if chkvals is True.
:arg temp: Temperature in K.
:type temp: float or None
:arg pres: Pressure in Pa.
:type pres: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_p` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_t` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `_approx_t` or `_approx_p` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Temperature, pressure, and liquid water density (all in SI
units).
:raises ValueError: If neither of temp or pres is provided.
:raises RuntimeWarning: If the relative disequilibrium is more than
chktol, if chkvals is True and all values are given.
"""
if temp is None and pres is None:
errmsg = 'One of temp or pres must be provided'
raise ValueError(errmsg)
if temp is not None:
if any(val is None for val in (pres,dliq)):
x0 = (pres0,dliq0)
fargs = (temp,)
if mathargs is None:
mathargs = dict()
x1 = _newton(_diff_t,x0,_approx_t,fargs=fargs,**mathargs)
pres, dliq = x1
else:
x0 = (temp0,dliq0)
fargs = (pres,)
if mathargs is None:
mathargs = dict()
x1 = _newton(_diff_p,x0,_approx_p,fargs=fargs,**mathargs)
temp, dliq = x1
_chkflubnds(temp,dliq,chkbnd=chkbnd)
_chkicebnds(temp,pres,chkbnd=chkbnd)
if not chkvals:
return temp, pres, dliq
lhs, rhs, __, __ = _diff_p(temp,dliq,pres)
errs = list()
for (l,r) in zip(lhs,rhs):
if abs(r) >= chktol:
errs.append(abs(l/r-1))
else:
errs.append(abs(l-r))
if max(errs) > chktol:
warnmsg = ('Given values {0} and solutions {1} disagree to more than '
'the tolerance {2}').format(lhs,rhs,chktol)
warnings.warn(warnmsg,RuntimeWarning)
return temp, pres, dliq
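The consistency check at the end of `eq_tp` switches between relative and absolute error depending on the magnitude of the reference value, so near-zero references do not blow up the relative error. Isolated as a helper:

```python
def mixed_error(lhs, rhs, chktol):
    """Relative error |l/r - 1| when |r| >= chktol, absolute |l - r| otherwise."""
    if abs(rhs) >= chktol:
        return abs(lhs / rhs - 1)
    return abs(lhs - rhs)
```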
## Thermodynamic properties
def temperature(temp=None,pres=None,dliq=None,chkvals=False,
chktol=_CHKTOL,temp0=None,pres0=None,dliq0=None,chkbnd=False,
mathargs=None):
"""Calculate ice-liquid temperature.
Calculate the temperature of ice and liquid water in equilibrium.
:arg temp: Temperature in K.
:type temp: float or None
:arg pres: Pressure in Pa.
:type pres: float or None
:arg dliq: Liquid water density in kg/m3. If unknown, pass None
(default) and it will be calculated.
:type dliq: float or None
:arg bool chkvals: If True (default False) and all values are given,
this function will calculate the disequilibrium and raise a
warning if the results are not within a given tolerance.
:arg float chktol: Tolerance to use when checking values (default
_CHKTOL).
:arg temp0: Initial guess for the temperature in K. If None
(default) then `_approx_p` is used.
:type temp0: float or None
:arg pres0: Initial guess for the pressure in Pa. If None (default)
then `_approx_t` is used.
:type pres0: float or None
:arg dliq0: Initial guess for the liquid water density in kg/m3. If
None (default) then `_approx_t` or `_approx_p` is used.
:type dliq0: float or None
:arg bool chkbnd: If True then warnings are raised when the given
values are valid but outside the recommended bounds (default
False).
:arg mathargs: Keyword arguments to the root-finder
:func:`_newton <maths3.newton>` (e.g. maxiter, rtol). If None
(default) then no arguments are passed and default parameters
will be used.
:returns: Temperature in K.
# Repository: GustavoSouza13/Minha_Vida_Meu_Trabalho
import os
import time
import random
def status(ListaInfo,cargoTrabalho):
ListaCampos = ["vida:","fome:","sede:","dinheiro:","exp:"]
arquivo = "arq01.txt"
ListaPosicao = procurarCampo(ListaCampos,arquivo)
divisao1 = "| |"
divisao2 = "| |"
divisao3 = "|"
if ListaInfo[ListaPosicao[0]] < 100:
divisao1 = " " + divisao1
if ListaInfo[ListaPosicao[0]] < 10:
divisao1 = " " + divisao1
if ListaInfo[ListaPosicao[1]] < 100:
divisao2 = " " + divisao2
if ListaInfo[ListaPosicao[1]] < 10:
divisao2 = " " + divisao2
if ListaInfo[ListaPosicao[2]] < 100:
divisao3 = " " + divisao3
if ListaInfo[ListaPosicao[2]] < 10:
divisao3 = " " + divisao3
print("|-----------| |-----------| |-----------|")
print("| Vida:",ListaInfo[ListaPosicao[0]],divisao1,"Fome:",ListaInfo[ListaPosicao[1]],divisao2,"Sede:",ListaInfo[ListaPosicao[2]],divisao3)
print("|-----------| |-----------| |-----------|\n")
print("Cargo:",cargoTrabalho,"\n")
print("Dinheiro:",ListaInfo[ListaPosicao[3]],"\n")
print("Exp:",ListaInfo[ListaPosicao[4]],"\n")
def trabalhar(ListaInfo,cargoTrabalho):
trab = "sim"
while trab.lower() == "sim" or trab.lower() == "s":
ListaCampos = ["vida:","fome:","sede:","dinheiro:","exp:"]
arquivo = "arq01.txt"
ListaPosicao = procurarCampo(ListaCampos,arquivo)
ListaTrabalhando = ["trabalhandovez:","trabalhandodinheiro:","trabalhandoexp:"]
ListaTrabalhando = procurarCampo(ListaTrabalhando,arquivo)
tarefa = tarefas(cargoTrabalho)
if ListaInfo[ListaPosicao[0]] > 0 and ListaInfo[ListaPosicao[1]] > 0 and ListaInfo[ListaPosicao[2]] > 0:
print("Cargo atual:",cargoTrabalho,"\n")
for i in range(0,ListaInfo[ListaTrabalhando[0]],1):
print(tarefa)
time.sleep(1)
num = random.randrange(0,101)
            # Bonus each time the player works.
if num >= 70:
num = random.randrange(0,ListaInfo[ListaTrabalhando[0]])
else:
num = 0
            # Update the health, hunger, thirst, money and exp values (in memory).
ListaInfo[ListaPosicao[3]] += ListaInfo[ListaTrabalhando[1]] + num
ListaInfo[ListaPosicao[4]] += ListaInfo[ListaTrabalhando[2]] + num
ListaInfo[ListaPosicao[0]] -= random.randrange(2,4)
ListaInfo[ListaPosicao[1]] -= random.randrange(2,4)
ListaInfo[ListaPosicao[2]] -= random.randrange(2,4)
            # Trigger a random thought with two possible answers; depending on the choice it may grant a bonus or a penalty.
num = random.randrange(0,101)
if num >= 65:
ListaPerguntas, ListaRespostas, ListaFrases, ListaBoosts, ListaCamposBoost = perguntas(cargoTrabalho,ListaCampos)
num1 = random.randrange(0,len(ListaPerguntas))
num2 = num1 * 2
print("\nPensamento:",ListaPerguntas[num1],"\n")
print("Respostas:\n")
contador = 1
for i in range(0,2,1):
print(contador,"-",ListaRespostas[num2])
num2 += 1
contador += 1
num2 -= contador
escolha = int(input("\nO que fazer (Digite o número correspondente): "))
num3 = num2+escolha
print("\n-- Acontecimento --\n")
print(ListaFrases[num3],"\n")
if ListaBoosts[num3] != "0":
print("Pela sua resposta:",ListaBoosts[num3])
for j in range(0,len(ListaCampos),1):
if ListaCamposBoost[num3] in ListaCampos[j]:
final = int(ListaBoosts[num3].find(" "))
quant = ListaBoosts[num3]
quant = int(quant[0:final])
ListaInfo[ListaPosicao[j]] += quant
print("----------------------------------------")
            # If health, hunger, thirst, money or exp drops below 1, set it to 0.
for j in range(0,5,1):
if ListaInfo[ListaPosicao[j]] < 1:
ListaInfo[ListaPosicao[j]] = 0
            # If health, hunger or thirst exceeds its cap, clamp it to the cap.
for k in range(0,3,1):
if ListaInfo[ListaPosicao[k]] > ListaInfo[k+6]:
ListaInfo[ListaPosicao[k]] = ListaInfo[k+6]
print("\nDinheiro atual: R$",ListaInfo[ListaPosicao[3]],"\n")
ListaItemAdici = [ListaInfo[ListaPosicao[0]],ListaInfo[ListaPosicao[1]],ListaInfo[ListaPosicao[2]],ListaInfo[ListaPosicao[3]],ListaInfo[ListaPosicao[4]]]
atualizarDados(ListaCampos,ListaPosicao,ListaItemAdici,arquivo)
            # Check for a promotion and apply it if earned.
expUP = int(expUPCargo(cargoTrabalho,ListaInfo))
if ListaInfo[ListaPosicao[4]] >= expUP:
print("\nVocê foi promovido!!")
promocao(cargoTrabalho)
print("Agora vaza daqui e vai contar pra sua esposa.")
time.sleep(5)
break
            # Ask whether to keep working.
trab = input("Deseja trabalhar mais? (Sim/Não): ")
print("----------------------------------------\n")
else:
            # If health, hunger or thirst is 0, do not work.
print("Você tá precisando se cuidar, vai lá e depois volta pra trabalhar!!")
trab = "Não"
time.sleep(3)
def tarefas(cargoTrabalho):
contador = 0
ListaRemover = ["cargo:","\n"]
ListaAlfabetoMi = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","ó","á","ç","ú"," ",".",",","\n"]
ListaAlfabetoMa = ["A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","Ó","Á","Ç","Ú"," ",".",",","\n"]
ListaNum = ["2","3","4","5","6","7","8","9","1","0","\n"]
ListaTarefas = []
with open('perguntas.txt','r') as f:
texto = f.readlines()
for linha in texto:
for i in range(0,len(ListaRemover),1):
linha = linha.replace(ListaRemover[i],"")
if linha == cargoTrabalho:
acabar = contador
acabar += 1
contador += 1
break
else:
contador += 1
for linha in texto:
if contador <= acabar:
if texto.index(linha) == contador:
contador += 1
numTare = linha.replace("trabalhando:","")
tarefas = linha.replace("trabalhando:","")
for i in range(0,len(ListaAlfabetoMi),1):
numTare = numTare.replace(ListaAlfabetoMi[i],"")
numTare = numTare.replace(ListaAlfabetoMa[i],"")
numTare = int(numTare[len(numTare)-1])
comeco = 0
for j in range(0,numTare,1):
final = tarefas.find(ListaNum[j])
tarefa = tarefas
tarefa = tarefa[comeco:final]
for k in range(0,len(ListaNum),1):
tarefa = tarefa.replace(ListaNum[k],"")
ListaTarefas.append(tarefa)
comeco = final
numTare = random.randrange(0,numTare)
return(ListaTarefas[numTare])
def perguntas(cargoTrabalho,ListaCampos):
ListaRemover = ["cargo:","\n"]
ListaCamposPerg = ["pergunta:","r:","frases:","boost:","\n"]
ListaPerguntas = []
ListaRespostas = []
ListaFrases = []
ListaBoosts = []
ListaCamposBoost = []
contador = 0
with open('perguntas.txt','r') as f:
texto = f.readlines()
for linha in texto:
for i in range(0,len(ListaRemover),1):
linha = linha.replace(ListaRemover[i],"")
if linha == cargoTrabalho:
contador += 2
break
else:
contador += 1
for linha in texto:
comeco = 0
if contador == 0:
if linha.find("pergunta:") > -1:
for i in range(0,len(ListaCamposPerg),1):
linha = linha.replace(ListaCamposPerg[i],"")
ListaPerguntas.append(linha)
elif linha.find("r:") > -1:
for j in range(0,len(ListaCamposPerg),1):
linha = linha.replace(ListaCamposPerg[j],"")
for k in range(0,2,1):
final = linha.find("/")
if final == -1:
final = len(linha)
ListaRespostas.append(linha[comeco:final])
linha = linha.replace(ListaRespostas[len(ListaRespostas)-1],"")
linha = linha.replace("/","")
elif linha.find("frases:") > -1:
for j in range(0,len(ListaCamposPerg),1):
linha = linha.replace(ListaCamposPerg[j],"")
for k in range(0,2,1):
final = linha.find("/")
if final == -1:
final = len(linha)
ListaFrases.append(linha[comeco:final])
linha = linha.replace(linha[comeco:final],"")
linha = linha.replace("/","")
elif linha.find("boost:") > -1:
for j in range(0,len(ListaCamposPerg),1):
linha = linha.replace(ListaCamposPerg[j],"")
for k in range(0,2,1):
final = linha.find("/")
if final == -1:
final = len(linha)
ListaBoosts.append(linha[comeco:final])
for l in range(0,len(ListaCampos),1):
if linha[comeco:final] == "0":
ListaCamposBoost.append("0")
break
if ListaCampos[l] in linha[comeco:final] + ":":
ListaCamposBoost.append(ListaCampos[l])
break
linha = linha.replace(linha[comeco:final],"")
linha = linha.replace("/","")
elif linha.find(ListaRemover[0]) > -1:
break
else:
contador -= 1
return(ListaPerguntas, ListaRespostas, ListaFrases, ListaBoosts, ListaCamposBoost)
def expUPCargo(cargoTrabalho,ListaInfo):
ListaRemover = ["cargo:","expup:","\n"]
ListaCargos = []
ListaExpUp = []
arquivo = "cargo.txt"
with open(arquivo,'r') as f:
texto = f.readlines()
for linha in texto:
if linha.find(ListaRemover[0]) > -1:
for i in range(0,len(ListaRemover),1):
linha = linha.replace(ListaRemover[i],"")
ListaCargos.append(linha)
if linha.find("expup:") > -1:
for i in range(0,len(ListaRemover),1):
linha = linha.replace(ListaRemover[i],"")
ListaExpUp.append(linha)
for i in range(0,len(ListaCargos),1):
if cargoTrabalho == ListaCargos[i]:
contador = i + 1
for j in range(0,contador,1):
del(ListaExpUp[0])
break
if ListaExpUp == [] or ListaExpUp[0] == "0":
ListaExpUp.insert(0,"99999999999")
return(ListaExpUp[0])
def promocao(cargoTrabalho):
ListaRemover = ["cargo:","trabalhandodinheiro:","trabalhandoexp:","expup:","\n"]
ListaDados = []
deletar = 0
arquivo = "cargo.txt"
with open(arquivo,'r') as f:
texto = f.readlines()
for linha in texto:
for i in range(0,len(ListaRemover),1):
linha = linha.replace(ListaRemover[i],"")
ListaDados.append(linha)
for i in range(0,len(ListaDados),1):
if cargoTrabalho == ListaDados[i]:
for j in range(0,i+4,1):
del(ListaDados[0])
break
for i in range(0,len(ListaRemover),1):
with open('arq01.txt','r') as f:
texto = f.readlines()
if len(ListaRemover) > 2:
with open('arq01.txt','w') as f:
for linha in texto:
if linha.find(ListaRemover[0]) > -1:
f.write(ListaRemover[0] + ListaDados[0] + "\n")
else:
f.write(linha)
del(ListaRemover[0])
del(ListaDados[0])
else:
break
def mercado(ListaInfo):
mercado = 1
ListaCarrinho = []
while mercado == 1:
arquivo = "arq01.txt"
ListaCampos = ["dinheiro:"]
dinheiro = procurarCampo(ListaCampos,arquivo)
dinheiro = dinheiro[0]
print("Bem vindo(a) a Mercado Market!\n")
print("Seu carrinho:",ListaCarrinho,"\n")
print("Você possui: R$",ListaInfo[dinheiro],"\n")
print("1 - Comida\n2 - Bebida\n0 - Concluir compra / Voltar\n")
supri = input("O que deseja: ")
supri = supri[0:2]
supri = int(supri)
if supri == 0:
mercado = 0
os.system('cls')
elif supri > 2 or supri < 0:
print("\nEste número não corresponde a nenhum corredor, tente novamente.")
time.sleep(4)
os.system('cls')
else:
if supri == 1:
ListaCarac = ["comida:","\n"]
ListaNum = ["0","1","2","3","4","5","6","7","8","9"]
tamanho = 7
elif supri == 2:
ListaCarac = ["bebida:","\n"]
ListaNum = ["0","1","2","3","4","5","6","7","8","9"]
tamanho = 7
ListaPrate = []
ListaValores = []
arquivo = open('mercado.txt','r')
for linha in arquivo:
            linha = linha.rstrip("\n")
if linha[0:tamanho] == ListaCarac[0]:
for i in range(0,len(ListaCarac),1):
linha = linha.replace(ListaCarac[i],"")
produto = linha
for j in range(0,len(ListaNum),1):
produto = produto.replace(ListaNum[j],"")
ListaPrate.append(produto)
valor = linha.replace(produto,"")
ListaValores.append(valor)
sessao = 1
while sessao == 1:
os.system('cls')
contador = 0
print("\nSeu carrinho:",ListaCarrinho,"\n")
print("Veja o que temos na prateleira:\n")
for i in range(0,len(ListaPrate),1):
contador += 1
print(contador,"-",ListaPrate[i].title(),"R$:",ListaValores[i],"\n")
print("0 - Voltar\n")
escolha = input("Digite o número do que quer levar (1 por vez): ")
escolha = escolha[0:2]
escolha = int(escolha)
if escolha == 0:
sessao = 0
os.system('cls')
elif escolha > contador or escolha < 0:
print("\nEste número não corresponde a nenhum produto da prateleira, tente novamente.")
time.sleep(4)
else:
quant = input("Digite a quantidade que deseja levar: ")
quant = quant[0:2]
quant = int(quant)
if quant > 0:
escolha -= 1
atualizado = 0
ListaCarac = ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","ã","á","é"," "]
for j in range(0,len(ListaCarrinho),1):
produto = ListaCarrinho[j].lower()
for k in range(0,len(ListaNum),1):
produto = produto.replace(ListaNum[k],"")
produto = produto.replace(" ","")
if produto == ListaPrate[escolha]:
atualizado = 1
quantCarrinho = ListaCarrinho[j].lower()
for l in range(0,len(ListaCarac),1):
quantCarrinho = quantCarrinho.replace(ListaCarac[l],"")
quantCarrinho = int(quantCarrinho)
quant += quantCarrinho
quant = str(quant)
del(ListaCarrinho[j])
escolha = quant + " " + ListaPrate[escolha].title()
ListaCarrinho.insert(j,escolha)
break
if atualizado == 0:
quant = str(quant)
escolha = quant + " " + ListaPrate[escolha].title()
ListaCarrinho.append(escolha)
else:
print("\nQuantidade desejada incorreta, tente novamente.")
time.sleep(4)
os.system('cls')
                ListaNum =
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the MIT License. See the LICENSE file in the root of this
# repository for complete details.
"""
Helpers that make development with ``structlog`` more pleasant.
See also the narrative documentation in `development`.
"""
import sys
import warnings
from io import StringIO
from typing import Any, Iterable, Optional, TextIO, Type, Union
from ._frames import _format_exception
from .types import (
EventDict,
ExceptionFormatter,
ExcInfo,
Protocol,
WrappedLogger,
)
try:
import colorama
except ImportError:
colorama = None
try:
import better_exceptions
except ImportError:
better_exceptions = None
try:
import rich
from rich.console import Console
from rich.traceback import Traceback
except ImportError:
rich = None # type: ignore
__all__ = [
"ConsoleRenderer",
"plain_traceback",
"rich_traceback",
"better_traceback",
]
_IS_WINDOWS = sys.platform == "win32"
_MISSING = "{who} requires the {package} package installed. "
_EVENT_WIDTH = 30 # pad the event name to so many characters
def _pad(s: str, length: int) -> str:
"""
Pads *s* to length *length*.
"""
missing = length - len(s)
return s + " " * (missing if missing > 0 else 0)
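For the non-negative padding widths used here, `_pad` behaves like `str.ljust`, and like `ljust` it never truncates a string longer than the target width. A quick standalone check (duplicating the helper so it runs on its own):

```python
def pad(s, length):
    # same behavior as structlog's _pad
    missing = length - len(s)
    return s + " " * (missing if missing > 0 else 0)

assert pad("info", 8) == "info".ljust(8) == "info    "
assert pad("longer-than-width", 4) == "longer-than-width"  # never truncates
```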
if colorama is not None:
RESET_ALL = colorama.Style.RESET_ALL
BRIGHT = colorama.Style.BRIGHT
DIM = colorama.Style.DIM
RED = colorama.Fore.RED
BLUE = colorama.Fore.BLUE
CYAN = colorama.Fore.CYAN
MAGENTA = colorama.Fore.MAGENTA
YELLOW = colorama.Fore.YELLOW
GREEN = colorama.Fore.GREEN
RED_BACK = colorama.Back.RED
else:
# These are the same values as the colorama color codes. Redefining them
# here allows users to specify that they want color without having to
# install colorama, which is only supposed to be necessary in Windows.
RESET_ALL = "\033[0m"
BRIGHT = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
BLUE = "\033[34m"
CYAN = "\033[36m"
MAGENTA = "\033[35m"
YELLOW = "\033[33m"
GREEN = "\033[32m"
RED_BACK = "\033[41m"
if _IS_WINDOWS: # pragma: no cover
# On Windows, use colors by default only if colorama is installed.
_use_colors = colorama is not None
else:
# On other OSes, use colors by default.
_use_colors = True
class _Styles(Protocol):
reset: str
bright: str
level_critical: str
level_exception: str
level_error: str
level_warn: str
level_info: str
level_debug: str
level_notset: str
timestamp: str
logger_name: str
kv_key: str
kv_value: str
Styles = Union[_Styles, Type[_Styles]]
class _ColorfulStyles:
reset = RESET_ALL
bright = BRIGHT
level_critical = RED
level_exception = RED
level_error = RED
level_warn = YELLOW
level_info = GREEN
level_debug = GREEN
level_notset = RED_BACK
timestamp = DIM
logger_name = BLUE
kv_key = CYAN
kv_value = MAGENTA
class _PlainStyles:
reset = ""
bright = ""
level_critical = ""
level_exception = ""
level_error = ""
level_warn = ""
level_info = ""
level_debug = ""
level_notset = ""
timestamp = ""
logger_name = ""
kv_key = ""
kv_value = ""
def plain_traceback(sio: TextIO, exc_info: ExcInfo) -> None:
"""
"Pretty"-print *exc_info* to *sio* using our own plain formatter.
To be passed into `ConsoleRenderer`'s ``exception_formatter`` argument.
    Used by default if neither ``rich`` nor ``better-exceptions`` is present.
.. versionadded:: 21.2
"""
sio.write("\n" + _format_exception(exc_info))
def rich_traceback(sio: TextIO, exc_info: ExcInfo) -> None:
"""
Pretty-print *exc_info* to *sio* using the ``rich`` package.
To be passed into `ConsoleRenderer`'s ``exception_formatter`` argument.
Used by default if ``rich`` is installed.
.. versionadded:: 21.2
"""
sio.write("\n")
Console(file=sio, color_system="truecolor").print(
Traceback.from_exception(*exc_info, show_locals=True)
)
def better_traceback(sio: TextIO, exc_info: ExcInfo) -> None:
"""
Pretty-print *exc_info* to *sio* using the ``better-exceptions`` package.
To be passed into `ConsoleRenderer`'s ``exception_formatter`` argument.
Used by default if ``better-exceptions`` is installed and ``rich`` is
absent.
.. versionadded:: 21.2
"""
sio.write("\n" + "".join(better_exceptions.format_exception(*exc_info)))
if rich is not None:
default_exception_formatter = rich_traceback
elif better_exceptions is not None: # type: ignore
default_exception_formatter = better_traceback
else:
default_exception_formatter = plain_traceback
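The fallback chain above picks the fanciest available formatter: ``rich`` wins over ``better-exceptions``, which wins over the plain built-in. A sketch of the same precedence as a pure function (hypothetical helper; the module does this at import time):

```python
def pick_formatter(rich_mod, better_exceptions_mod):
    """Mirror the precedence: rich > better-exceptions > plain."""
    if rich_mod is not None:
        return "rich_traceback"
    if better_exceptions_mod is not None:
        return "better_traceback"
    return "plain_traceback"
```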
class ConsoleRenderer:
"""
Render ``event_dict`` nicely aligned, possibly in colors, and ordered.
If ``event_dict`` contains a true-ish ``exc_info`` key, it will be
rendered *after* the log line. If rich_ or better-exceptions_ are present,
in colors and with extra context.
:param pad_event: Pad the event to this many characters.
:param colors: Use colors for a nicer output. `True` by default if
colorama_ is installed.
:param force_colors: Force colors even for non-tty destinations.
Use this option if your logs are stored in a file that is meant
to be streamed to the console.
:param repr_native_str: When `True`, `repr` is also applied
to native strings (i.e. unicode on Python 3 and bytes on Python 2).
Setting this to `False` is useful if you want to have human-readable
non-ASCII output on Python 2. The ``event`` key is *never*
`repr` -ed.
:param level_styles: When present, use these styles for colors. This
must be a dict from level names (strings) to colorama styles. The
default can be obtained by calling
`ConsoleRenderer.get_default_level_styles`
:param exception_formatter: A callable to render ``exc_infos``. If rich_
or better-exceptions_ are installed, they are used for pretty-printing
        by default (rich_ taking precedence). You can also manually set it to
`plain_traceback`, `better_traceback`, `rich_traceback`, or implement
your own.
:param sort_keys: Whether to sort keys when formatting. `True` by default.
Requires the colorama_ package if *colors* is `True` **on Windows**.
.. _colorama: https://pypi.org/project/colorama/
.. _better-exceptions: https://pypi.org/project/better-exceptions/
.. _rich: https://pypi.org/project/rich/
.. versionadded:: 16.0
.. versionadded:: 16.1 *colors*
.. versionadded:: 17.1 *repr_native_str*
.. versionadded:: 18.1 *force_colors*
.. versionadded:: 18.1 *level_styles*
.. versionchanged:: 19.2
``colorama`` now initializes lazily to avoid unwanted initializations as
``ConsoleRenderer`` is used by default.
.. versionchanged:: 19.2 Can be pickled now.
.. versionchanged:: 20.1 ``colorama`` does not initialize lazily on Windows
anymore because it breaks rendering.
.. versionchanged:: 21.1 It is additionally possible to set the logger name
using the ``logger_name`` key in the ``event_dict``.
.. versionadded:: 21.2 *exception_formatter*
.. versionchanged:: 21.2 `ConsoleRenderer` now handles the ``exc_info``
event dict key itself. Do **not** use the
`structlog.processors.format_exc_info` processor together with
`ConsoleRenderer` anymore! It will keep working, but you can't have
custom exception formatting, and a warning will be raised if you ask
for it.
.. versionchanged:: 21.2 The colors keyword now defaults to True on
non-Windows systems, and either True or False on Windows depending on
whether colorama is installed.
.. versionadded:: 21.3.0 *sort_keys*
"""
def __init__(
self,
pad_event: int = _EVENT_WIDTH,
colors: bool = _use_colors,
force_colors: bool = False,
repr_native_str: bool = False,
level_styles: Optional[Styles] = None,
exception_formatter: ExceptionFormatter = default_exception_formatter,
sort_keys: bool = True,
):
styles: Styles
if colors:
if _IS_WINDOWS: # pragma: no cover
# On Windows, we can't do colorful output without colorama.
if colorama is None:
classname = self.__class__.__name__
raise SystemError(
_MISSING.format(
who=classname + " with `colors=True`",
package="colorama",
)
)
# Colorama must be init'd on Windows, but must NOT be
# init'd on other OSes, because it can break colors.
if force_colors:
colorama.deinit()
colorama.init(strip=False)
else:
colorama.init()
styles = _ColorfulStyles
else:
styles = _PlainStyles
self._styles = styles
self._pad_event = pad_event
if level_styles is None:
self._level_to_color = self.get_default_level_styles(colors)
else:
self._level_to_color = level_styles
for key in self._level_to_color.keys():
self._level_to_color[key] += styles.bright
        self._longest_level = len(
            max(self._level_to_color.keys(), key=len)
        )
self._repr_native_str = repr_native_str
self._exception_formatter = exception_formatter
self._sort_keys = sort_keys
def _repr(self, val: Any) -> str:
"""
Determine representation of *val* depending on its type &
self._repr_native_str.
"""
if self._repr_native_str is True:
return repr(val)
if isinstance(val, str):
return val
else:
return repr(val)
def __call__(
self, logger: WrappedLogger, name: str, event_dict: EventDict
) -> str:
sio = StringIO()
ts = event_dict.pop("timestamp", None)
if ts is not None:
sio.write(
# can be a number if timestamp is UNIXy
self._styles.timestamp
+ str(ts)
+ self._styles.reset
+ " "
)
level = event_dict.pop("level", None)
if level is not None:
sio.write(
"["
+ self._level_to_color.get(level, "")
+ _pad(level, self._longest_level)
+ self._styles.reset
+ "] "
)
# force event to str for compatibility with standard library
event = event_dict.pop("event", None)
if not isinstance(event, str):
event = str(event)
if event_dict:
event = _pad(event, self._pad_event) + self._styles.reset + " "
else:
event += self._styles.reset
sio.write(self._styles.bright + event)
logger_name = event_dict.pop("logger", None)
if logger_name is None:
logger_name = event_dict.pop("logger_name", None)
if logger_name is not None:
sio.write(
"["
+ self._styles.logger_name
+ self._styles.bright
+ logger_name
+ self._styles.reset
+ "] "
)
stack = event_dict.pop("stack", None)
exc = event_dict.pop("exception", None)
exc_info = event_dict.pop("exc_info", None)
event_dict_keys: Iterable[str] = event_dict.keys()
if self._sort_keys:
event_dict_keys = sorted(event_dict_keys)
        sio.write(
            " ".join(
                self._styles.kv_key
                + key
                + self._styles.reset
                + "="
                + self._styles.kv_value
                + self._repr(event_dict[key])
                + self._styles.reset
                for key in event_dict_keys
            )
        )
#!/usr/bin/env python
#
# Author: <NAME> (mmckerns @caltech and @uqfoundation)
# Copyright (c) 2008-2016 California Institute of Technology.
# Copyright (c) 2016-2019 The Uncertainty Quantification Foundation.
# License: 3-clause BSD. The full license text is available at:
# - https://github.com/uqfoundation/dill/blob/master/LICENSE
"""
all Python Standard Library objects (currently: CH 1-15 @ 2.7)
and some other common objects (e.g. numpy.ndarray)
"""
__all__ = ['registered','failures','succeeds']
# helper imports
import warnings; warnings.filterwarnings("ignore", category=DeprecationWarning)
import sys
PY3 = (sys.hexversion >= 0x30000f0)
if PY3:
import queue as Queue
import dbm as anydbm
else:
import Queue
import anydbm
import sets # deprecated/removed
import mutex # removed
try:
from cStringIO import StringIO # has StringI and StringO types
except ImportError: # only has StringIO type
if PY3:
from io import BytesIO as StringIO
else:
from StringIO import StringIO
import re
import array
import collections
import codecs
import struct
import datetime
import calendar
import weakref
import pprint
import decimal
import functools
import itertools
import operator
import tempfile
import shelve
import zlib
import gzip
import zipfile
import tarfile
import xdrlib
import csv
import hashlib
import hmac
import os
import logging
import optparse
#import __hello__
import threading
import socket
import contextlib
try:
import bz2
import sqlite3
if PY3: import dbm.ndbm as dbm
else: import dbm
HAS_ALL = True
except ImportError: # Ubuntu
HAS_ALL = False
try:
#import curses
#from curses import textpad, panel
HAS_CURSES = True
except ImportError: # Windows
HAS_CURSES = False
try:
import ctypes
HAS_CTYPES = True
# if using `pypy`, pythonapi is not found
IS_PYPY = not hasattr(ctypes, 'pythonapi')
except ImportError: # MacPorts
HAS_CTYPES = False
IS_PYPY = False
# helper objects
class _class:
def _method(self):
pass
# @classmethod
# def _clsmethod(cls): #XXX: test me
# pass
# @staticmethod
# def _static(self): #XXX: test me
# pass
class _class2:
def __call__(self):
pass
_instance2 = _class2()
class _newclass(object):
def _method(self):
pass
# @classmethod
# def _clsmethod(cls): #XXX: test me
# pass
# @staticmethod
# def _static(self): #XXX: test me
# pass
class _newclass2(object):
__slots__ = ['descriptor']
def _function(x): yield x
def _function2():
try: raise
except:
from sys import exc_info
e, er, tb = exc_info()
return er, tb
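`_function2` deliberately captures a live ``(exception, traceback)`` pair for the registry. A standalone sketch of the same `sys.exc_info()` capture, raising a concrete exception so the result is predictable (not part of dill itself):

```python
import sys

def make_exc_info():
    # Raise and immediately capture, like _function2 above, but with a
    # concrete exception instead of a bare `raise`.
    try:
        raise ValueError("boom")
    except ValueError:
        _, err, tb = sys.exc_info()
        return err, tb
```

The returned traceback object is what makes such pairs interesting to a serializer: tracebacks reference frames, which are hard to pickle.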
if HAS_CTYPES:
class _Struct(ctypes.Structure):
pass
_Struct._fields_ = [("_field", ctypes.c_int),("next", ctypes.POINTER(_Struct))]
_filedescrip, _tempfile = tempfile.mkstemp('r') # deleted in cleanup
_tmpf = tempfile.TemporaryFile('w')
# put the objects in order, if possible
try:
from collections import OrderedDict as odict
except ImportError:
try:
from ordereddict import OrderedDict as odict
except ImportError:
odict = dict
# objects used by dill for type declaration
registered = d = odict()
# objects dill fails to pickle
failures = x = odict()
# all other type objects
succeeds = a = odict()
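The three registries record which objects serialize cleanly. Using only the stdlib `pickle` (dill itself is not assumed here), the round-trip check that decides membership looks roughly like:

```python
import pickle

def pickles(obj):
    # True if the standard pickler can round-trip obj without error.
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False
```

Containers of builtins pass; lambdas fail under stdlib pickle, which is exactly the kind of gap dill exists to close.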
# types module (part of CH 8)
a['BooleanType'] = bool(1)
a['BuiltinFunctionType'] = len
a['BuiltinMethodType'] = a['BuiltinFunctionType']
a['BytesType'] = _bytes = codecs.latin_1_encode('\x00')[0] # bytes(1)
a['ClassType'] = _class
a['ComplexType'] = complex(1)
a['DictType'] = _dict = {}
a['DictionaryType'] = a['DictType']
a['FloatType'] = float(1)
a['FunctionType'] = _function
a['InstanceType'] = _instance = _class()
a['IntType'] = _int = int(1)
a['ListType'] = _list = []
a['NoneType'] = None
a['ObjectType'] = object()
a['StringType'] = _str = str(1)
a['TupleType'] = _tuple = ()
a['TypeType'] = type
if PY3:
a['LongType'] = _int
a['UnicodeType'] = _str
else:
a['LongType'] = long(1)
a['UnicodeType'] = unicode(1)
# built-in constants (CH 4)
a['CopyrightType'] = copyright
# built-in types (CH 5)
a['ClassObjectType'] = _newclass # <type 'type'>
a['ClassInstanceType'] = _newclass() # <type 'class'>
a['SetType'] = _set = set()
a['FrozenSetType'] = frozenset()
# built-in exceptions (CH 6)
a['ExceptionType'] = _exception = _function2()[0]
# string services (CH 7)
a['SREPatternType'] = _srepattern = re.compile('')
# data types (CH 8)
a['ArrayType'] = array.array("f")
a['DequeType'] = collections.deque([0])
a['DefaultDictType'] = collections.defaultdict(_function, _dict)
a['TZInfoType'] = datetime.tzinfo()
a['DateTimeType'] = datetime.datetime.today()
a['CalendarType'] = calendar.Calendar()
if not PY3:
a['SetsType'] = sets.Set()
a['ImmutableSetType'] = sets.ImmutableSet()
a['MutexType'] = mutex.mutex()
# numeric and mathematical types (CH 9)
a['DecimalType'] = decimal.Decimal(1)
a['CountType'] = itertools.count(0)
# data compression and archiving (CH 12)
a['TarInfoType'] = tarfile.TarInfo()
# generic operating system services (CH 15)
a['LoggerType'] = logging.getLogger()
a['FormatterType'] = logging.Formatter() # pickle ok
a['FilterType'] = logging.Filter() # pickle ok
a['LogRecordType'] = logging.makeLogRecord(_dict) # pickle ok
a['OptionParserType'] = _oparser = optparse.OptionParser() # pickle ok
a['OptionGroupType'] = optparse.OptionGroup(_oparser,"foo") # pickle ok
a['OptionType'] = optparse.Option('--foo') # pickle ok
if HAS_CTYPES:
a['CCharType'] = _cchar = ctypes.c_char()
a['CWCharType'] = ctypes.c_wchar() # fail == 2.6
a['CByteType'] = ctypes.c_byte()
a['CUByteType'] = ctypes.c_ubyte()
a['CShortType'] = ctypes.c_short()
a['CUShortType'] = ctypes.c_ushort()
a['CIntType'] = ctypes.c_int()
a['CUIntType'] = ctypes.c_uint()
a['CLongType'] = ctypes.c_long()
a['CULongType'] = ctypes.c_ulong()
a['CLongLongType'] = ctypes.c_longlong()
a['CULongLongType'] = ctypes.c_ulonglong()
a['CFloatType'] = ctypes.c_float()
a['CDoubleType'] = ctypes.c_double()
a['CSizeTType'] = ctypes.c_size_t()
a['CLibraryLoaderType'] = ctypes.cdll
a['StructureType'] = _Struct
if not IS_PYPY:
a['BigEndianStructureType'] = ctypes.BigEndianStructure()
#NOTE: also LittleEndianStructureType and UnionType... abstract classes
#NOTE: remember for ctypesobj.contents creates a new python object
#NOTE: ctypes.c_int._objects is memberdescriptor for object's __dict__
#NOTE: base class of all ctypes data types is non-public _CData
try: # python 2.6
import fractions
    import numbers
import io
from io import StringIO as TextIO
# built-in functions (CH 2)
a['ByteArrayType'] = bytearray([1])
# numeric and mathematical types (CH 9)
a['FractionType'] = fractions.Fraction()
a['NumberType'] = numbers.Number()
# generic operating system services (CH 15)
a['IOBaseType'] = io.IOBase()
a['RawIOBaseType'] = io.RawIOBase()
a['TextIOBaseType'] = io.TextIOBase()
a['BufferedIOBaseType'] = io.BufferedIOBase()
a['UnicodeIOType'] = TextIO() # the new StringIO
    a['LoggingAdapterType'] = logging.LoggerAdapter(logging.getLogger(),_dict) # pickle ok
if HAS_CTYPES:
a['CBoolType'] = ctypes.c_bool(1)
a['CLongDoubleType'] = ctypes.c_longdouble()
except ImportError:
pass
try: # python 2.7
import argparse
# data types (CH 8)
a['OrderedDictType'] = collections.OrderedDict(_dict)
a['CounterType'] = collections.Counter(_dict)
if HAS_CTYPES:
a['CSSizeTType'] = ctypes.c_ssize_t()
# generic operating system services (CH 15)
a['NullHandlerType'] = logging.NullHandler() # pickle ok # new 2.7
a['ArgParseFileType'] = argparse.FileType() # pickle ok
except (AttributeError, ImportError):
pass
# -- pickle fails on all below here -----------------------------------------
# types module (part of CH 8)
a['CodeType'] = compile('','','exec')
a['DictProxyType'] = type.__dict__
a['DictProxyType2'] = _newclass.__dict__
a['EllipsisType'] = Ellipsis
a['ClosedFileType'] = _closedfile = open(os.devnull, 'wb', buffering=0); _closedfile.close()
a['GetSetDescriptorType'] = array.array.typecode
a['LambdaType'] = _lambda = lambda x: lambda y: x #XXX: works when not imported!
a['MemberDescriptorType'] = _newclass2.descriptor
if not IS_PYPY:
a['MemberDescriptorType2'] = datetime.timedelta.days
a['MethodType'] = _method = _class()._method #XXX: works when not imported!
a['ModuleType'] = datetime
a['NotImplementedType'] = NotImplemented
a['SliceType'] = slice(1)
a['UnboundMethodType'] = _class._method #XXX: works when not imported!
a['TextWrapperType'] = open(os.devnull, 'r') # same as mode='w','w+','r+'
a['BufferedRandomType'] = open(os.devnull, 'r+b') # same as mode='w+b'
a['BufferedReaderType'] = open(os.devnull, 'rb') # (default: buffering=-1)
a['BufferedWriterType'] = open(os.devnull, 'wb')
try: # oddities: deprecated
from _pyio import open as _open
a['PyTextWrapperType'] = _open(os.devnull, 'r', buffering=-1)
a['PyBufferedRandomType'] = _open(os.devnull, 'r+b', buffering=-1)
a['PyBufferedReaderType'] = _open(os.devnull, 'rb', buffering=-1)
a['PyBufferedWriterType'] = _open(os.devnull, 'wb', buffering=-1)
except ImportError:
pass
# other (concrete) object types
if PY3:
d['CellType'] = (_lambda)(0).__closure__[0]
a['XRangeType'] = _xrange = range(1)
else:
d['CellType'] = (_lambda)(0).func_closure[0]
a['XRangeType'] = _xrange = xrange(1)
if not IS_PYPY:
d['MethodDescriptorType'] = type.__dict__['mro']
d['WrapperDescriptorType'] = type.__repr__
a['WrapperDescriptorType2'] = type.__dict__['__module__']
d['ClassMethodDescriptorType'] = type.__dict__['__prepare__' if PY3 else 'mro']
# built-in functions (CH 2)
if PY3 or IS_PYPY:
_methodwrap = (1).__lt__
else:
_methodwrap = (1).__cmp__
d['MethodWrapperType'] = _methodwrap
a['StaticMethodType'] = staticmethod(_method)
a['ClassMethodType'] = classmethod(_method)
a['PropertyType'] = property()
d['SuperType'] = super(Exception, _exception)
# string services (CH 7)
if PY3:
_in = _bytes
else:
_in = _str
a['InputType'] = _cstrI = StringIO(_in)
a['OutputType'] = _cstrO = StringIO()
# data types (CH 8)
a['WeakKeyDictionaryType'] = weakref.WeakKeyDictionary()
a['WeakValueDictionaryType'] = weakref.WeakValueDictionary()
a['ReferenceType'] = weakref.ref(_instance)
a['DeadReferenceType'] = weakref.ref(_class())
a['ProxyType'] = weakref.proxy(_instance)
a['DeadProxyType'] = weakref.proxy(_class())
a['CallableProxyType'] = weakref.proxy(_instance2)
a['DeadCallableProxyType'] = weakref.proxy(_class2())
a['QueueType'] = Queue.Queue()
# numeric and mathematical types (CH 9)
d['PartialType'] = functools.partial(int,base=2)
if PY3:
a['IzipType'] = zip('0','1')
else:
a['IzipType'] = itertools.izip('0','1')
a['ChainType'] = itertools.chain('0','1')
d['ItemGetterType'] = operator.itemgetter(0)
d['AttrGetterType'] = operator.attrgetter('__repr__')
# file and directory access (CH 10)
if PY3: _fileW = _cstrO
else: _fileW = _tmpf
# data persistence (CH 11)
if HAS_ALL:
a['ConnectionType'] = _conn = sqlite3.connect(':memory:')
a['CursorType'] = _conn.cursor()
a['ShelveType'] = shelve.Shelf({})
# data compression and archiving (CH 12)
if HAS_ALL:
    if (sys.hexversion < 0x2070ef0) or PY3:
a['BZ2FileType'] = bz2.BZ2File(os.devnull) #FIXME: fail >= 3.3, 2.7.14
a['BZ2CompressorType'] = bz2.BZ2Compressor()
a['BZ2DecompressorType'] = bz2.BZ2Decompressor()
#a['ZipFileType'] = _zip = zipfile.ZipFile(os.devnull,'w') #FIXME: fail >= 3.2
#_zip.write(_tempfile,'x') [causes annoying warning/error printed on import]
#a['ZipInfoType'] = _zip.getinfo('x')
a['TarFileType'] = tarfile.open(fileobj=_fileW,mode='w')
# file formats (CH 13)
a['DialectType'] = csv.get_dialect('excel')
a['PackerType'] = xdrlib.Packer()
# optional operating system services (CH 16)
a['LockType'] = threading.Lock()
a['RLockType'] = threading.RLock()
# generic operating system services (CH 15) # also closed/open and r/w/etc...
a['NamedLoggerType'] = _logger = logging.getLogger(__name__) #FIXME: fail >= 3.2 and <= 2.6
#a['FrozenModuleType'] = __hello__ #FIXME: prints "Hello world..."
# interprocess communication (CH 17)
if PY3:
a['SocketType'] = _socket = socket.socket() #FIXME: fail >= 3.3
a['SocketPairType'] = socket.socketpair()[0] #FIXME: fail >= 3.3
else:
a['SocketType'] = _socket = socket.socket()
a['SocketPairType'] = _socket._sock
# python runtime services (CH 27)
if PY3:
a['GeneratorContextManagerType'] = contextlib.contextmanager(max)([1])
else:
a['GeneratorContextManagerType'] = contextlib.GeneratorContextManager(max)
try: # ipython
__IPYTHON__ is True # is ipython
except NameError:
# built-in constants (CH 4)
a['QuitterType'] = quit
d['ExitType'] = a['QuitterType']
try: # numpy #FIXME: slow... 0.05 to 0.1 sec to import numpy
from numpy import ufunc as _numpy_ufunc
from numpy import array as _numpy_array
from numpy import int32 as _numpy_int32
a['NumpyUfuncType'] = _numpy_ufunc
a['NumpyArrayType'] = _numpy_array
a['NumpyInt32Type'] = _numpy_int32
except ImportError:
pass
try: # python 2.6
# numeric and mathematical types (CH 9)
a['ProductType'] = itertools.product('0','1')
# generic operating system services (CH 15)
a['FileHandlerType'] = logging.FileHandler(os.devnull) #FIXME: fail >= 3.2 and <= 2.6
a['RotatingFileHandlerType'] = logging.handlers.RotatingFileHandler(os.devnull)
a['SocketHandlerType'] = logging.handlers.SocketHandler('localhost',514)
a['MemoryHandlerType'] = logging.handlers.MemoryHandler(1)
except AttributeError:
pass
try: # python 2.7
# | |
#!/usr/bin/env python
"""Tests for the polytope subpackage."""
import logging
from nose import tools as nt
import numpy as np
from numpy.testing import assert_allclose
from numpy.testing import assert_array_equal
import scipy.optimize
import polytope as pc
from polytope.polytope import solve_rotation_ap, givens_rotation_matrix
from polytope import solvers
log = logging.getLogger('polytope.polytope')
log.setLevel(logging.INFO)
def test_polytope_str():
    # 1 constraint (so single line)
A = np.array([[1]])
b = np.array([1])
p = pc.Polytope(A, b)
s = str(p)
s_ = 'Single polytope \n [[1.]] x <= [[1.]]\n'
assert s == s_, (s, s_)
# > 1 constraints (so multiline)
polys = dict(
p1d=[[0, 1]],
p2d=[[0, 1], [0, 2]],
p3d=[[0, 1], [0, 2], [0, 3]])
strings = dict(
p1d='Single polytope \n [[ 1.] x <= [[1.]\n [-1.]]| [0.]]\n',
p2d=(
'Single polytope \n [[ 1. 0.] | [[1.]\n [ 0. 1.] '
'x <= [2.]\n [-1. -0.] | [0.]\n [-0. -1.]]|'
' [0.]]\n'),
p3d=(
'Single polytope \n [[ 1. 0. 0.] | [[1.]\n '
'[ 0. 1. 0.] | [2.]\n [ 0. 0. 1.] x <= [3.]\n'
' [-1. -0. -0.] | [0.]\n [-0. -1. -0.] |'
' [0.]\n [-0. -0. -1.]]| [0.]]\n'))
for name, poly in polys.items():
p = pc.Polytope.from_box(poly)
s = str(p)
s_ = strings[name]
assert s == s_, (s, s_)
class operations_test(object):
def setUp(self):
# unit square in first quadrant
self.Ab = np.array([[0.0, 1.0, 1.0],
[0.0, -1.0, 0.0],
[1.0, 0.0, 1.0],
[-1.0, 0.0, 0.0]])
# unit square in second quadrant
self.Ab2 = np.array([[-1.0, 0.0, 1.0],
[1.0, 0.0, 0.0],
[0.0, 1.0, 1.0],
[0.0, -1.0, 0.0]])
# unit square in third quadrant
self.Ab3 = np.array([[0.0, 1.0, 0.0],
[0.0, -1.0, 1.0],
[1.0, 0.0, 0.0],
[-1.0, 0.0, 1.0]])
# unit square in fourth quadrant
self.Ab4 = np.array([[0.0, 1.0, 0.0],
[0.0, -1.0, 1.0],
[1.0, 0.0, 1.0],
[-1.0, 0.0, 0.0]])
self.A = self.Ab[:, 0:2]
self.b = self.Ab[:, 2]
def tearDown(self):
pass
def comparison_test(self):
p = pc.Polytope(self.A, self.b)
p2 = pc.Polytope(self.A, 2*self.b)
assert(p <= p2)
assert(not p2 <= p)
assert(not p2 == p)
r = pc.Region([p])
r2 = pc.Region([p2])
assert(r <= r2)
assert(not r2 <= r)
assert(not r2 == r)
# test H-rep -> V-rep -> H-rep
v = pc.extreme(p)
p3 = pc.qhull(v)
assert(p3 == p)
# test V-rep -> H-rep with d+1 points
p4 = pc.qhull(np.array([[0, 0], [1, 0], [0, 1]]))
        assert(p4 == pc.Polytope(
            np.array([[1, 1], [0, -1], [-1, 0]]),
            np.array([1, 0, 0])))
def region_rotation_test(self):
p = pc.Region([pc.Polytope(self.A, self.b)])
p1 = pc.Region([pc.Polytope(self.A, self.b)])
p2 = pc.Region([pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])])
p3 = pc.Region([pc.Polytope(self.Ab3[:, 0:2], self.Ab3[:, 2])])
p4 = pc.Region([pc.Polytope(self.Ab4[:, 0:2], self.Ab4[:, 2])])
p = p.rotation(0, 1, np.pi/2)
print(p.bounding_box)
assert(p == p2)
assert(not p == p3)
assert(not p == p4)
assert(not p == p1)
assert_allclose(p.chebXc, [-0.5, 0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p3)
assert_allclose(p.chebXc, [-0.5, -0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p4)
assert_allclose(p.chebXc, [0.5, -0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p1)
assert_allclose(p.chebXc, [0.5, 0.5])
def polytope_rotation_test(self):
p = pc.Polytope(self.A, self.b)
p1 = pc.Polytope(self.A, self.b)
p2 = pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])
p3 = pc.Polytope(self.Ab3[:, 0:2], self.Ab3[:, 2])
p4 = pc.Polytope(self.Ab4[:, 0:2], self.Ab4[:, 2])
p = p.rotation(0, 1, np.pi/2)
print(p.bounding_box)
assert(p == p2)
assert(not p == p3)
assert(not p == p4)
assert(not p == p1)
assert_allclose(p.chebXc, [-0.5, 0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p3)
assert_allclose(p.chebXc, [-0.5, -0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p4)
assert_allclose(p.chebXc, [0.5, -0.5])
p = p.rotation(0, 1, np.pi/2)
assert(p == p1)
assert_allclose(p.chebXc, [0.5, 0.5])
def region_translation_test(self):
p = pc.Region([pc.Polytope(self.A, self.b)])
p1 = pc.Region([pc.Polytope(self.A, self.b)])
p2 = pc.Region([pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])])
p = p.translation([-1, 0])
assert(p == p2)
assert(not p == p1)
p = p.translation([1, 0])
assert(p == p1)
def polytope_translation_test(self):
p = pc.Polytope(self.A, self.b)
p1 = pc.Polytope(self.A, self.b)
p2 = pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])
p = p.translation([-1, 0])
assert(p == p2)
assert(not p == p1)
p = p.translation([1, 0])
assert(p == p1)
def region_empty_test(self):
# Note that as of commit <PASSWORD>
# Region.__init__ deletes empty polytopes from
# the given list of polytopes at instantiation.
reg = pc.Region()
reg.list_poly = [pc.Polytope(), pc.Polytope()]
assert len(reg) > 0
assert pc.is_empty(reg)
def polytope_full_dim_test(self):
assert pc.is_fulldim(pc.Polytope(self.A, self.b))
assert pc.is_fulldim(pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2]))
assert not pc.is_fulldim(pc.Polytope())
assert not pc.is_fulldim(pc.Polytope(self.A, self.b - 1e3))
def region_full_dim_test(self):
assert not pc.is_fulldim(pc.Region())
p1 = pc.Polytope(self.A, self.b)
p2 = pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])
reg = pc.Region([p1, p2])
assert pc.is_fulldim(reg)
# Adding empty polytopes should not affect the
# full-dimensional status of this region.
reg.list_poly.append(pc.Polytope())
assert pc.is_fulldim(reg)
reg.list_poly.append(pc.Polytope(self.A, self.b - 1e3))
assert pc.is_fulldim(reg)
def polytope_intersect_test(self):
p1 = pc.Polytope(self.A, self.b)
p2 = pc.Polytope(self.Ab2[:, 0:2], self.Ab2[:, 2])
p3 = p1.intersect(p2)
assert pc.is_fulldim(p1)
assert pc.is_fulldim(p2)
assert not pc.is_fulldim(p3)
# p4 is the unit square with center at the origin.
p4 = pc.Polytope(np.array([[ 1., 0.],
[ 0., 1.],
[-1., 0.],
[ 0., -1.]]),
np.array([0.5, 0.5, 0.5, 0.5]))
p5 = p2.intersect(p4)
assert pc.is_fulldim(p4)
assert pc.is_fulldim(p5)
def polytope_contains_test(self):
p = pc.Polytope(self.A, self.b)
# single point
point_i = [0.1, 0.3]
point_o = [2, 0]
assert point_i in p
assert point_o not in p
# multiple points
many_points_i = np.random.random((2, 8))
many_points_0 = np.random.random((2, 8)) - np.array([[0], [1]])
many_points = np.concatenate([many_points_0, many_points_i], axis=1)
truth = np.array([False] * 8 + [True] * 8, dtype=bool)
assert_array_equal(p.contains(many_points), truth)
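`Polytope.contains` reduces to checking every halfspace constraint ``A x <= b`` within a small tolerance. A dependency-free sketch over plain lists (the real implementation is vectorized with NumPy):

```python
def contains(A, b, x, abs_tol=1e-7):
    # x is inside {x : A x <= b} iff every row constraint holds,
    # allowing abs_tol of slack so boundary points count as inside.
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i + abs_tol
        for row, b_i in zip(A, b)
    )

# unit square in the first quadrant, same constraints as self.Ab above
A = [[0, 1], [0, -1], [1, 0], [-1, 0]]
b = [1, 0, 1, 0]
```

With ``abs_tol=0`` boundary points are excluded, which is the distinction `region_contains_test` below exercises.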
def region_contains_test(self):
A = np.array([[1.0],
[-1.0]])
b = np.array([1.0, 0.0])
poly = pc.Polytope(A, b)
polys = [poly]
reg = pc.Region(polys)
assert 0.5 in reg
# small positive tolerance (includes boundary)
points = np.array([[-1.0, 0.0, 0.5, 1.0, 2.0]])
c = reg.contains(points)
c_ = np.array(
[[False, True, True, True, False]], dtype=bool)
# zero tolerance (excludes boundary)
points = np.array([[-1.0, 0.0, 0.5, 1.0, 2.0]])
c = reg.contains(points, abs_tol=0)
c_ = np.array(
[[False, False, True, False, False]], dtype=bool)
assert np.all(c == c_), c
def solve_rotation_test_090(atol=1e-15):
g1 = np.array([0, 1, 1, 0])
g2 = np.array([0, 1, 0, 0])
R = solve_rotation_ap(g1, g2)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, 1, -1, 1])
t1 = np.array([0, -1, 0, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
def solve_rotation_test_180(atol=1e-15):
g1 = np.array([0, 1, 0, 0])
g2 = np.array([0, 0, 1, 0])
R = solve_rotation_ap(g1, g2)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, -1, -1, 1])
t1 = np.array([0, 0, 1, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
def solve_rotation_test_270R(atol=1e-15):
g1 = np.array([0, -1, 0, 0])
g2 = np.array([0, 1, 1, 0])
R = solve_rotation_ap(g1, g2)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, -1, 1, 1])
t1 = np.array([0, 1, 0, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
def solve_rotation_test_270L(atol=1e-15):
g1 = np.array([0, -1, 0, 0])
g2 = np.array([0, 1, -1, 0])
R = solve_rotation_ap(g1, g2)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, 1, -1, 1])
t1 = np.array([0, -1, 0, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
def givens_rotation_test_180(atol=1e-15):
R = givens_rotation_matrix(1, 2, np.pi, 4)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, -1, -1, 1])
t1 = np.array([0, 0, 1, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
def givens_rotation_test_270L(atol=1e-15):
R = givens_rotation_matrix(1, 2, 3*np.pi/2, 4)
e0 = np.array([0, 1, 1, 1])
e1 = np.array([0, 0, -1, 0])
e2 = np.array([0, 0, 0, 0])
t0 = np.array([0, 1, -1, 1])
t1 = np.array([0, -1, 0, 0])
t2 = np.array([0, 0, 0, 0])
assert_allclose(R.dot(e0), t0, atol=atol)
assert_allclose(R.dot(e1), t1, atol=atol)
assert_allclose(R.dot(e2), t2, atol=atol)
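`givens_rotation_matrix(i, j, theta, n)` embeds a 2-D rotation in the (i, j) coordinate plane of an n-dimensional identity matrix. A pure-Python sketch with a matching matrix-vector product (a simplified stand-in for the library function, not its actual source):

```python
import math

def givens_rotation(i, j, theta, n):
    # n x n identity with cos/sin entries in rows/columns i and j.
    R = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    R[i][i] = c
    R[j][j] = c
    R[i][j] = -s
    R[j][i] = s
    return R

def matvec(R, v):
    return [sum(r * x for r, x in zip(row, v)) for row in R]
```

Rotating by pi in the (1, 2) plane negates components 1 and 2 and leaves the rest alone, matching `givens_rotation_test_180` above.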
def test_lpsolve():
# Ensure same API for both `scipy` and `cvxopt`.
# Ensured by the different testing configurations.
# Could change `polytope.polytope.default_solver` to
# achieve | |
"""
Database table models for Water Survey of Canada National Water Data Archive (HYDAT) hydrometric data
https://www.canada.ca/en/environment-climate-change/services/water-overview/quantity/monitoring/survey/data-products-services/national-archive-hydat.html
These models were autogenerated using sqlacodegen (https://pypi.org/project/sqlacodegen/) using the schema
from the original database available at the link above.
Primary keys were manually added to the generated models.
Warning: the original database schema did not include any foreign key constraints.
"""
from typing import List
from geojson import Feature, Point
from sqlalchemy import BigInteger, Column, DateTime, Index, Text, text, ForeignKey, func, Integer
from sqlalchemy.orm import relationship, Session
from sqlalchemy.dialects.postgresql import DOUBLE_PRECISION
from geoalchemy2 import Geometry
from sqlalchemy.sql.sqltypes import Date, Numeric
from api.db.base_class import BaseLayerTable
import api.v1.hydat.schema as hydat_schema
from shapely.geometry import Polygon
from logging import getLogger
logger = getLogger("Hydat")
class AgencyList(BaseLayerTable):
__tablename__ = 'agency_list'
__table_args__ = {'schema': 'hydat'}
agency_id = Column(BigInteger, primary_key=True, server_default=text(
"nextval('hydat.agency_list_agency_id_seq'::regclass)"))
agency_en = Column(Text)
agency_fr = Column(Text)
class AnnualInstantPeak(BaseLayerTable):
__tablename__ = 'annual_instant_peaks'
__table_args__ = (
Index('idx_20802_annual_instant_peaks___uniqueindex',
'station_number', 'data_type', 'year', 'peak_code', unique=True),
{'schema': 'hydat'}
)
station_number = Column(Text, primary_key=True, nullable=False)
data_type = Column(Text, primary_key=True, nullable=False)
year = Column(BigInteger, primary_key=True, nullable=False)
peak_code = Column(Text, primary_key=True, nullable=False)
precision_code = Column(BigInteger, index=True)
month = Column(BigInteger)
day = Column(BigInteger)
hour = Column(BigInteger)
minute = Column(BigInteger)
time_zone = Column(Text)
peak = Column(DOUBLE_PRECISION)
symbol = Column(Text)
class AnnualStatistic(BaseLayerTable):
__tablename__ = 'annual_statistics'
__table_args__ = (
Index('idx_20940_annual_statistics_primarykey',
'station_number', 'data_type', 'year', unique=True),
{'schema': 'hydat'}
)
station_number = Column(Text, primary_key=True, nullable=False)
data_type = Column(Text, primary_key=True, nullable=False)
year = Column(BigInteger, primary_key=True, nullable=False)
mean = Column(DOUBLE_PRECISION)
min_month = Column(BigInteger)
min_day = Column(BigInteger)
min = Column(DOUBLE_PRECISION)
min_symbol = Column(Text)
max_month = Column(BigInteger)
max_day = Column(BigInteger)
max = Column(DOUBLE_PRECISION)
max_symbol = Column(Text)
class ConcentrationSymbol(BaseLayerTable):
__tablename__ = 'concentration_symbols'
__table_args__ = {'schema': 'hydat'}
concentration_symbol = Column(Text, primary_key=True)
concentration_en = Column(Text)
concentration_fr = Column(Text)
class DataSymbol(BaseLayerTable):
__tablename__ = 'data_symbols'
__table_args__ = {'schema': 'hydat'}
symbol_id = Column(Text, primary_key=True)
symbol_en = Column(Text)
symbol_fr = Column(Text)
class DataType(BaseLayerTable):
__tablename__ = 'data_types'
__table_args__ = {'schema': 'hydat'}
data_type = Column(Text, primary_key=True)
data_type_en = Column(Text)
data_type_fr = Column(Text)
class DatumList(BaseLayerTable):
__tablename__ = 'datum_list'
__table_args__ = {'schema': 'hydat'}
datum_id = Column(BigInteger, primary_key=True, server_default=text(
"nextval('hydat.datum_list_datum_id_seq'::regclass)"))
datum_en = Column(Text)
datum_fr = Column(Text)
class DailyFlow(BaseLayerTable):
__tablename__ = 'dly_flows'
__table_args__ = (
Index('idx_20862_dly_flows_primarykey',
'station_number', 'year', 'month', unique=True),
{'schema': 'hydat'}
)
station_number = Column(Text, ForeignKey(
'hydat.stations.station_number'), primary_key=True, nullable=False)
year = Column(BigInteger, primary_key=True, nullable=False)
month = Column(BigInteger, primary_key=True, nullable=False)
full_month = Column(BigInteger)
no_days = Column(BigInteger)
monthly_mean = Column(DOUBLE_PRECISION)
monthly_total = Column(DOUBLE_PRECISION)
first_day_min = Column(BigInteger)
min = Column(DOUBLE_PRECISION)
first_day_max = Column(BigInteger)
max = Column(DOUBLE_PRECISION)
flow1 = Column(DOUBLE_PRECISION)
flow_symbol1 = Column(Text)
flow2 = Column(DOUBLE_PRECISION)
flow_symbol2 = Column(Text)
flow3 = Column(DOUBLE_PRECISION)
flow_symbol3 = Column(Text)
flow4 = Column(DOUBLE_PRECISION)
flow_symbol4 = Column(Text)
flow5 = Column(DOUBLE_PRECISION)
flow_symbol5 = Column(Text)
flow6 = Column(DOUBLE_PRECISION)
flow_symbol6 = Column(Text)
flow7 = Column(DOUBLE_PRECISION)
flow_symbol7 = Column(Text)
flow8 = Column(DOUBLE_PRECISION)
flow_symbol8 = Column(Text)
flow9 = Column(DOUBLE_PRECISION)
flow_symbol9 = Column(Text)
flow10 = Column(DOUBLE_PRECISION)
flow_symbol10 = Column(Text)
flow11 = Column(DOUBLE_PRECISION)
flow_symbol11 = Column(Text)
flow12 = Column(DOUBLE_PRECISION)
flow_symbol12 = Column(Text)
flow13 = Column(DOUBLE_PRECISION)
flow_symbol13 = Column(Text)
flow14 = Column(DOUBLE_PRECISION)
flow_symbol14 = Column(Text)
flow15 = Column(DOUBLE_PRECISION)
flow_symbol15 = Column(Text)
flow16 = Column(DOUBLE_PRECISION)
flow_symbol16 = Column(Text)
flow17 = Column(DOUBLE_PRECISION)
flow_symbol17 = Column(Text)
flow18 = Column(DOUBLE_PRECISION)
flow_symbol18 = Column(Text)
flow19 = Column(DOUBLE_PRECISION)
flow_symbol19 = Column(Text)
flow20 = Column(DOUBLE_PRECISION)
flow_symbol20 = Column(Text)
flow21 = Column(DOUBLE_PRECISION)
flow_symbol21 = Column(Text)
flow22 = Column(DOUBLE_PRECISION)
flow_symbol22 = Column(Text)
flow23 = Column(DOUBLE_PRECISION)
flow_symbol23 = Column(Text)
flow24 = Column(DOUBLE_PRECISION)
flow_symbol24 = Column(Text)
flow25 = Column(DOUBLE_PRECISION)
flow_symbol25 = Column(Text)
flow26 = Column(DOUBLE_PRECISION)
flow_symbol26 = Column(Text)
flow27 = Column(DOUBLE_PRECISION)
flow_symbol27 = Column(Text)
flow28 = Column(DOUBLE_PRECISION)
flow_symbol28 = Column(Text)
flow29 = Column(DOUBLE_PRECISION)
flow_symbol29 = Column(Text)
flow30 = Column(DOUBLE_PRECISION)
flow_symbol30 = Column(Text)
flow31 = Column(DOUBLE_PRECISION)
flow_symbol31 = Column(Text)
station = relationship("Station", back_populates="dly_flows")
@classmethod
def get_available_flow_years(cls, db: Session, station: str):
""" fetch a list of years for which stream flow data is available """
return db.query(cls).filter(
cls.station_number == station).distinct("year")
@classmethod
def get_monthly_flows_by_station(cls, db: Session, station: str, year: int) -> List[hydat_schema.MonthlyFlow]:
""" fetch monthly stream levels for a specified station_number and year """
if year:
return db.query(cls).filter(
cls.station_number == station,
cls.year == year
).all()
# year not specified, return average by month for all available years.
return db.query(
func.avg(cls.monthly_mean).label('monthly_mean'),
func.min(cls.min).label('min'),
func.max(cls.max).label('max'),
cls.month) \
.filter(cls.station_number == station, cls.full_month == 1) \
.group_by(cls.month) \
.order_by(cls.month).all()
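When no year is given, the query above averages `monthly_mean` per calendar month across all complete months on record (`full_month == 1`) and takes the overall min/max. A plain-Python sketch of that aggregation over made-up rows (names and data are illustrative, not from HYDAT):

```python
from collections import defaultdict

rows = [  # (month, monthly_mean, min, max, full_month) -- made-up sample data
    (5, 10.0, 2.0, 20.0, 1),
    (5, 14.0, 1.0, 25.0, 1),
    (6, 30.0, 5.0, 60.0, 1),
    (5, 99.0, 0.0, 99.0, 0),  # partial month: excluded by full_month == 1
]

by_month = defaultdict(list)
for month, mean, lo, hi, full in rows:
    if full == 1:
        by_month[month].append((mean, lo, hi))

summary = {
    m: {
        "monthly_mean": sum(v[0] for v in vals) / len(vals),
        "min": min(v[1] for v in vals),
        "max": max(v[2] for v in vals),
    }
    for m, vals in sorted(by_month.items())
}
```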
class DailyLevel(BaseLayerTable):
__tablename__ = 'dly_levels'
__table_args__ = (
Index('idx_20916_dly_levels_primarykey',
'station_number', 'year', 'month', unique=True),
{'schema': 'hydat'}
)
station_number = Column(Text, ForeignKey(
'hydat.stations.station_number'), primary_key=True, nullable=False)
year = Column(BigInteger, primary_key=True, nullable=False)
month = Column(BigInteger, primary_key=True, nullable=False)
precision_code = Column(BigInteger)
full_month = Column(BigInteger)
no_days = Column(BigInteger)
monthly_mean = Column(DOUBLE_PRECISION)
monthly_total = Column(DOUBLE_PRECISION)
first_day_min = Column(BigInteger)
min = Column(DOUBLE_PRECISION)
first_day_max = Column(BigInteger)
max = Column(DOUBLE_PRECISION)
level1 = Column(DOUBLE_PRECISION)
level_symbol1 = Column(Text)
level2 = Column(DOUBLE_PRECISION)
level_symbol2 = Column(Text)
level3 = Column(DOUBLE_PRECISION)
level_symbol3 = Column(Text)
level4 = Column(DOUBLE_PRECISION)
level_symbol4 = Column(Text)
level5 = Column(DOUBLE_PRECISION)
level_symbol5 = Column(Text)
level6 = Column(DOUBLE_PRECISION)
level_symbol6 = Column(Text)
level7 = Column(DOUBLE_PRECISION)
level_symbol7 = Column(Text)
level8 = Column(DOUBLE_PRECISION)
level_symbol8 = Column(Text)
level9 = Column(DOUBLE_PRECISION)
level_symbol9 = Column(Text)
level10 = Column(DOUBLE_PRECISION)
level_symbol10 = Column(Text)
level11 = Column(DOUBLE_PRECISION)
level_symbol11 = Column(Text)
level12 = Column(DOUBLE_PRECISION)
level_symbol12 = Column(Text)
level13 = Column(DOUBLE_PRECISION)
level_symbol13 = Column(Text)
level14 = Column(DOUBLE_PRECISION)
level_symbol14 = Column(Text)
level15 = Column(DOUBLE_PRECISION)
level_symbol15 = Column(Text)
level16 = Column(DOUBLE_PRECISION)
level_symbol16 = Column(Text)
level17 = Column(DOUBLE_PRECISION)
level_symbol17 = Column(Text)
level18 = Column(DOUBLE_PRECISION)
level_symbol18 = Column(Text)
level19 = Column(DOUBLE_PRECISION)
level_symbol19 = Column(Text)
level20 = Column(DOUBLE_PRECISION)
level_symbol20 = Column(Text)
level21 = Column(DOUBLE_PRECISION)
level_symbol21 = Column(Text)
level22 = Column(DOUBLE_PRECISION)
level_symbol22 = Column(Text)
level23 = Column(DOUBLE_PRECISION)
level_symbol23 = Column(Text)
level24 = Column(DOUBLE_PRECISION)
level_symbol24 = Column(Text)
level25 = Column(DOUBLE_PRECISION)
level_symbol25 = Column(Text)
level26 = Column(DOUBLE_PRECISION)
level_symbol26 = Column(Text)
level27 = Column(DOUBLE_PRECISION)
level_symbol27 = Column(Text)
level28 = Column(DOUBLE_PRECISION)
level_symbol28 = Column(Text)
level29 = Column(DOUBLE_PRECISION)
level_symbol29 = Column(Text)
level30 = Column(DOUBLE_PRECISION)
level_symbol30 = Column(Text)
level31 = Column(DOUBLE_PRECISION)
level_symbol31 = Column(Text)
station = relationship("Station", back_populates="dly_levels")
@classmethod
def get_available_level_years(cls, db: Session, station: str):
""" fetch a list of years for which stream level data is available """
return db.query(cls).filter(
cls.station_number == station).distinct("year")
@classmethod
def get_monthly_levels_by_station(cls, db: Session, station: str, year: int) -> List[hydat_schema.MonthlyLevel]:
""" fetch monthly stream levels for a specified station_number and year """
if year:
return db.query(cls).filter(
cls.station_number == station,
cls.year == year
).all()
# year not specified, return an average by month for all years.
return db.query(
func.avg(cls.monthly_mean).label('monthly_mean'),
func.min(cls.min).label('min'),
func.max(cls.max).label('max'),
cls.month
) \
.filter(cls.station_number == station, cls.full_month == 1) \
.group_by(cls.month) \
.order_by(cls.month).all()
class MeasurementCode(BaseLayerTable):
__tablename__ = 'measurement_codes'
__table_args__ = {'schema': 'hydat'}
measurement_code = Column(Text, primary_key=True)
measurement_en = Column(Text)
measurement_fr = Column(Text)
class OperationCode(BaseLayerTable):
__tablename__ = 'operation_codes'
__table_args__ = {'schema': 'hydat'}
operation_code = Column(Text, primary_key=True)
operation_en = Column(Text)
operation_fr = Column(Text)
class PeakCode(BaseLayerTable):
__tablename__ = 'peak_codes'
__table_args__ = {'schema': 'hydat'}
peak_code = Column(Text, primary_key=True)
peak_en = Column(Text)
peak_fr = Column(Text)
class PrecisionCode(BaseLayerTable):
__tablename__ = 'precision_codes'
__table_args__ = {'schema': 'hydat'}
precision_code = Column(BigInteger, primary_key=True, server_default=text(
"nextval('hydat.precision_codes_precision_code_seq'::regclass)"))
precision_en = Column(Text)
precision_fr = Column(Text)
class RegionalOfficeList(BaseLayerTable):
__tablename__ = 'regional_office_list'
__table_args__ = {'schema': 'hydat'}
regional_office_id = Column(BigInteger, primary_key=True, server_default=text(
"nextval('hydat.regional_office_list_regional_office_id_seq'::regclass)"))
regional_office_name_en = Column(Text)
regional_office_name_fr = Column(Text)
class SampleRemarkCode(BaseLayerTable):
__tablename__ = 'sample_remark_codes'
__table_args__ = {'schema': 'hydat'}
sample_remark_code = Column(BigInteger, primary_key=True, server_default=text(
"nextval('hydat.sample_remark_codes_sample_remark_code_seq'::regclass)"))
sample_remark_en = Column(Text)
sample_remark_fr = Column(Text)
class SedDataType(BaseLayerTable):
__tablename__ = 'sed_data_types'
__table_args__ = {'schema': 'hydat'}
sed_data_type = Column(Text, primary_key=True)
sed_data_type_en = Column(Text)
sed_data_type_fr = Column(Text)
class SedDlyLoad(BaseLayerTable):
__tablename__ = 'sed_dly_loads'
__table_args__ = (
Index('idx_20910_sed_dly_loads_primarykey',
'station_number', 'year', 'month', unique=True),
{'schema': 'hydat'}
)
station_number = Column(Text, primary_key=True, nullable=False)
year = Column(BigInteger, primary_key=True, nullable=False)
month = Column(BigInteger, primary_key=True, nullable=False)
full_month = Column(BigInteger)
no_days = Column(BigInteger)
monthly_mean = Column(DOUBLE_PRECISION)
monthly_total = Column(DOUBLE_PRECISION)
first_day_min = Column(BigInteger)
min = Column(DOUBLE_PRECISION)
first_day_max = Column(BigInteger)
max = Column(DOUBLE_PRECISION)
load1 = Column(DOUBLE_PRECISION)
load2 = Column(DOUBLE_PRECISION)
load3 = Column(DOUBLE_PRECISION)
load4 = Column(DOUBLE_PRECISION)
load5 = Column(DOUBLE_PRECISION)
load6 = Column(DOUBLE_PRECISION)
load7 = Column(DOUBLE_PRECISION)
load8 = Column(DOUBLE_PRECISION)
load9 = Column(DOUBLE_PRECISION)
load10 = Column(DOUBLE_PRECISION)
load11 = Column(DOUBLE_PRECISION)
load12 = Column(DOUBLE_PRECISION)
load13 = Column(DOUBLE_PRECISION)
load14 = Column(DOUBLE_PRECISION)
load15 = Column(DOUBLE_PRECISION)
load16 = Column(DOUBLE_PRECISION)
load17 = Column(DOUBLE_PRECISION)
load18 = Column(DOUBLE_PRECISION)
load19 = Column(DOUBLE_PRECISION)
load20 = Column(DOUBLE_PRECISION)
load21 = Column(DOUBLE_PRECISION)
load22 = Column(DOUBLE_PRECISION)
load23 = Column(DOUBLE_PRECISION)
load24 = Column(DOUBLE_PRECISION)
load25 = Column(DOUBLE_PRECISION)
load26 = Column(DOUBLE_PRECISION)
load27 = Column(DOUBLE_PRECISION)
load28 = Column(DOUBLE_PRECISION)
load29 = Column(DOUBLE_PRECISION)
load30 = Column(DOUBLE_PRECISION)
load31 = Column(DOUBLE_PRECISION)
class SedDlySuscon(BaseLayerTable):
__tablename__ = 'sed_dly_suscon'
__table_args__ = (
Index('idx_20886_sed_dly_suscon_primarykey',
'station_number', 'year', 'month', unique=True),
{'schema': 'hydat'}
)
station_number = | |
def test_subsecond_change(self):
"""Perform two ticket changes within a second."""
tkt_id = self._insert_ticket('Test', reporter='joe', component='foo')
ticket = Ticket(self.env, tkt_id)
t1 = datetime(2001, 1, 1, 1, 1, 1, 123456, utc)
ticket.save_changes('jane', 'Testing', t1)
t2 = datetime(2001, 1, 1, 1, 1, 1, 123789, utc)
ticket.save_changes('jim', 'Other', t2)
log = ticket.get_changelog()
self.assertEqual(2, len(log))
self.assertEqual((t1, 'jane', 'comment', '1', 'Testing', True), log[0])
self.assertEqual((t2, 'jim', 'comment', '2', 'Other', True), log[1])
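The sub-second test works because Trac persists change times at microsecond resolution (its `to_utimestamp` helper), so `t1` and `t2` stay distinct and ordered. A minimal sketch of that kind of conversion (`to_usec` is a stand-in name, not Trac's API):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def to_usec(dt):
    # integer microseconds since the epoch; exact, no float rounding
    return (dt - EPOCH) // timedelta(microseconds=1)

t1 = datetime(2001, 1, 1, 1, 1, 1, 123456, timezone.utc)
t2 = datetime(2001, 1, 1, 1, 1, 1, 123789, timezone.utc)
```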
def test_changelog_with_reverted_change(self):
tkt_id = self._insert_ticket('Test', reporter='joe', component='foo')
ticket = Ticket(self.env, tkt_id)
ticket['component'] = 'bar'
ticket['component'] = 'foo'
now = datetime(2001, 1, 1, 1, 1, 1, 0, utc)
ticket.save_changes('jane', 'Testing', now)
self.assertEqual([(now, 'jane', 'comment', '1', 'Testing', True)],
list(ticket.get_changelog()))
def test_change_listener_created(self):
ts = TicketSystem(self.env)
listener = ts.change_listeners[0]
ticket = self._create_a_ticket()
ticket.insert()
self.assertEqual(1, len(ts.change_listeners))
self.assertEqual('created', listener.action)
self.assertEqual(ticket, listener.ticket)
self.assertEqual(ticket.id, ticket.resource.id)
def test_change_listener_changed(self):
ts = TicketSystem(self.env)
listener = ts.change_listeners[0]
data = {'component': 'foo', 'milestone': 'bar'}
tkt_id = self._insert_ticket('Hello World', reporter='john', **data)
ticket = Ticket(self.env, tkt_id)
ticket['component'] = 'new component'
ticket['milestone'] = 'new milestone'
comment = 'changing ticket'
ticket.save_changes('author', comment)
self.assertEqual(1, len(ts.change_listeners))
self.assertEqual('changed', listener.action)
self.assertEqual(comment, listener.comment)
self.assertEqual('author', listener.author)
        for key, value in data.items():
self.assertEqual(value, listener.old_values[key])
def test_change_listener_deleted(self):
ts = TicketSystem(self.env)
listener = ts.change_listeners[0]
ticket = self._create_a_ticket()
ticket.insert()
ticket.delete()
self.assertEqual(1, len(ts.change_listeners))
self.assertEqual('deleted', listener.action)
self.assertEqual(ticket, listener.ticket)
class TicketCommentTestCase(unittest.TestCase):
ticket_change_listeners = []
@classmethod
def setUpClass(cls):
class AllMethodTicketChangeListener(core.Component):
"""Ticket change listener that implements all methods of the
interface.
"""
implements(ITicketChangeListener)
def ticket_created(self, ticket):
pass
def ticket_changed(self, ticket, comment, author, old_values):
pass
def ticket_deleted(self, ticket):
pass
def ticket_comment_modified(self, ticket, cdate, author, comment,
old_comment):
self.action = 'comment_modified'
self.ticket = ticket
self.cdate = cdate
self.author = author
self.comment = comment
self.old_comment = old_comment
def ticket_change_deleted(self, ticket, cdate, changes):
self.action = 'change_deleted'
self.ticket = ticket
self.cdate = cdate
self.changes = changes
cls.ticket_change_listeners = [AllMethodTicketChangeListener]
@classmethod
def tearDownClass(cls):
for listener in cls.ticket_change_listeners:
core.ComponentMeta.deregister(listener)
def _insert_ticket(self, summary, when, **kwargs):
ticket = insert_ticket(self.env, summary=summary, when=when, **kwargs)
self.id = ticket.id
def _modify_ticket(self, author, comment, when, replyto=None, **kwargs):
ticket = Ticket(self.env, self.id)
        for k, v in kwargs.items():
ticket[k] = v
ticket.save_changes(author, comment, when, replyto)
    def _find_change(self, ticket, cnum):
        """Return the creation date of change `cnum` on `ticket`."""
        ts, author, comment = ticket._find_change(cnum)
        return from_utimestamp(ts)
def assertChange(self, ticket, cnum, date, author, **fields):
change = ticket.get_change(cnum=cnum)
self.assertEqual(dict(date=date, author=author, fields=fields), change)
class TicketCommentEditTestCase(TicketCommentTestCase):
def setUp(self):
self.env = EnvironmentStub(default_data=True,
enable=['trac.ticket.*'] +
self.ticket_change_listeners)
self.created = datetime(2001, 1, 1, 1, 0, 0, 0, utc)
self._insert_ticket('Test ticket', self.created,
owner='john', keywords='a, b, c')
self.t1 = self.created + timedelta(seconds=1)
self._modify_ticket('jack', 'Comment 1', self.t1)
self.t2 = self.created + timedelta(seconds=2)
self._modify_ticket('john', 'Comment 2', self.t2, '1',
owner='jack')
self.t3 = self.created + timedelta(seconds=3)
self._modify_ticket('jim', 'Comment 3', self.t3,
keywords='a, b')
def tearDown(self):
self.env.reset_db()
def test_modify_comment(self):
"""Check modification of a "standalone" comment"""
ticket = Ticket(self.env, self.id)
self.assertChange(ticket, 1, self.t1, 'jack',
comment=dict(author='jack', old='1', new='Comment 1'))
self.assertChange(ticket, 2, self.t2, 'john',
owner=dict(author='john', old='john', new='jack'),
comment=dict(author='john', old='1.2', new='Comment 2'))
self.assertChange(ticket, 3, self.t3, 'jim',
keywords=dict(author='jim', old='a, b, c', new='a, b'),
comment=dict(author='jim', old='3', new='Comment 3'))
t = self.created + timedelta(seconds=10)
ticket.modify_comment(self._find_change(ticket, 1),
'joe', 'New comment 1', t)
self.assertChange(ticket, 1, self.t1, 'jack',
comment=dict(author='jack', old='1', new='New comment 1'),
_comment0=dict(author='joe', old='Comment 1',
new=str(to_utimestamp(t))))
self.assertEqual(t, Ticket(self.env, self.id)['changetime'])
def test_threading(self):
"""Check modification of a "threaded" comment"""
ticket = Ticket(self.env, self.id)
t = self.created + timedelta(seconds=20)
ticket.modify_comment(self._find_change(ticket, 2),
'joe', 'New comment 2', t)
self.assertChange(ticket, 2, self.t2, 'john',
owner=dict(author='john', old='john', new='jack'),
comment=dict(author='john', old='1.2', new='New comment 2'),
_comment0=dict(author='joe', old='Comment 2',
new=str(to_utimestamp(t))))
def test_modify_missing_cnum(self):
"""Editing a comment with no cnum in oldvalue"""
self.env.db_transaction(
"UPDATE ticket_change SET oldvalue='' WHERE oldvalue='3'")
ticket = Ticket(self.env, self.id)
t = self.created + timedelta(seconds=30)
ticket.modify_comment(self._find_change(ticket, 3),
'joe', 'New comment 3', t)
self.assertChange(ticket, 3, self.t3, 'jim',
keywords=dict(author='jim', old='a, b, c', new='a, b'),
comment=dict(author='jim', old='', new='New comment 3'),
_comment0=dict(author='joe', old='Comment 3',
new=str(to_utimestamp(t))))
def test_modify_missing_comment(self):
"""Editing a comment where the comment field is missing"""
self.env.db_transaction("""
DELETE FROM ticket_change WHERE field='comment' AND oldvalue='1.2'
""")
ticket = Ticket(self.env, self.id)
t = self.created + timedelta(seconds=40)
ticket.modify_comment(self._find_change(ticket, 2),
'joe', 'New comment 2', t)
self.assertChange(ticket, 2, self.t2, 'john',
owner=dict(author='john', old='john', new='jack'),
comment=dict(author='john', old='', new='New comment 2'),
_comment0=dict(author='joe', old='',
new=str(to_utimestamp(t))))
def test_modify_missing_cnums_and_comment(self):
"""Editing a comment when all cnums are missing and one comment
field is missing
"""
with self.env.db_transaction as db:
db("UPDATE ticket_change SET oldvalue='' WHERE oldvalue='1'")
db("""DELETE FROM ticket_change
WHERE field='comment' AND oldvalue='1.2'""")
db("UPDATE ticket_change SET oldvalue='' WHERE oldvalue='3'")
# Modify after missing comment
ticket = Ticket(self.env, self.id)
t = self.created + timedelta(seconds=50)
ticket.modify_comment(self._find_change(ticket, 3),
'joe', 'New comment 3', t)
self.assertChange(ticket, 3, self.t3, 'jim',
keywords=dict(author='jim', old='a, b, c', new='a, b'),
comment=dict(author='jim', old='', new='New comment 3'),
_comment0=dict(author='joe', old='Comment 3',
new=str(to_utimestamp(t))))
# Modify missing comment
t = self.created + timedelta(seconds=60)
ticket.modify_comment(self._find_change(ticket, 2),
'joe', 'New comment 2', t)
self.assertChange(ticket, 2, self.t2, 'john',
owner=dict(author='john', old='john', new='jack'),
comment=dict(author='john', old='', new='New comment 2'),
_comment0=dict(author='joe', old='',
new=str(to_utimestamp(t))))
def test_missing_comment_edit(self):
"""Modify a comment where one edit is missing"""
ticket = Ticket(self.env, self.id)
t1 = self.created + timedelta(seconds=70)
ticket.modify_comment(self._find_change(ticket, 1),
'joe', 'New comment 1', t1)
t2 = self.created + timedelta(seconds=80)
ticket.modify_comment(self._find_change(ticket, 1),
'joe', 'Other comment 1', t2)
self.assertChange(ticket, 1, self.t1, 'jack',
comment=dict(author='jack', old='1', new='Other comment 1'),
_comment0=dict(author='joe', old='Comment 1',
new=str(to_utimestamp(t1))),
_comment1=dict(author='joe', old='New comment 1',
new=str(to_utimestamp(t2))))
self.env.db_transaction(
"DELETE FROM ticket_change WHERE field='_comment0'")
t3 = self.created + timedelta(seconds=90)
ticket.modify_comment(self._find_change(ticket, 1),
'joe', 'Newest comment 1', t3)
self.assertChange(ticket, 1, self.t1, 'jack',
comment=dict(author='jack', old='1', new='Newest comment 1'),
_comment1=dict(author='joe', old='New comment 1',
new=str(to_utimestamp(t2))),
_comment2=dict(author='joe', old='Other comment 1',
new=str(to_utimestamp(t3))))
def test_comment_history(self):
"""Check the generation of the comment history"""
ticket = Ticket(self.env, self.id)
t = [self.t1]
        for i in range(1, 32):
t.append(self.created + timedelta(minutes=i))
ticket.modify_comment(self._find_change(ticket, 1),
'joe (%d)' % i,
'Comment 1 (%d)' % i, t[-1])
history = ticket.get_comment_history(cnum=1)
self.assertEqual((0, t[0], 'jack', 'Comment 1'), history[0])
        for i in range(1, len(history)):
self.assertEqual((i, t[i], 'joe (%d)' % i,
'Comment 1 (%d)' % i), history[i])
history = ticket.get_comment_history(cdate=self.t1)
self.assertEqual((0, t[0], 'jack', 'Comment 1'), history[0])
        for i in range(1, len(history)):
self.assertEqual((i, t[i], 'joe (%d)' % i,
'Comment 1 (%d)' % i), history[i])
def test_change_listener_comment_modified(self):
ts = TicketSystem(self.env)
listener = ts.change_listeners[0]
ticket = Ticket(self.env, self.id)
ticket.modify_comment(cdate=self.t2, author='jack',
comment='New Comment 2', when=datetime_now(utc))
self.assertEqual(1, len(ts.change_listeners))
self.assertEqual('comment_modified', listener.action)
self.assertEqual(ticket, listener.ticket)
self.assertEqual(self.t2, listener.cdate)
self.assertEqual('jack', listener.author)
self.assertEqual('New Comment 2', listener.comment)
self.assertEqual('Comment 2', listener.old_comment)
def test_get_comment_number(self):
ticket = Ticket(self.env, self.id)
self.assertEqual(1, ticket.get_comment_number(self.created +
timedelta(seconds=1)))
self.assertEqual(2, ticket.get_comment_number(self.created +
timedelta(seconds=2)))
self.assertEqual(3, ticket.get_comment_number(self.created +
timedelta(seconds=3)))
class TicketCommentDeleteTestCase(TicketCommentTestCase):
def setUp(self):
self.env = EnvironmentStub(default_data=True,
enable=['trac.ticket.*'] +
self.ticket_change_listeners)
self.env.config.set('ticket-custom', 'foo', 'text')
self.created = datetime(2001, 1, 1, 1, 0, 0, 0, utc)
self._insert_ticket('Test ticket', self.created,
owner='john', keywords='a, b, c', foo='initial')
self.t1 = self.created + timedelta(seconds=1)
self._modify_ticket('jack', 'Comment 1', self.t1,
foo='change 1')
self.t2 = self.created + timedelta(seconds=2)
self._modify_ticket('john', 'Comment 2', self.t2, '1',
owner='jack', foo='change2')
self.t3 = self.created + timedelta(seconds=3)
self._modify_ticket('jim', 'Comment 3', self.t3,
keywords='a, b', foo='change3')
self.t4 = self.created + timedelta(seconds=4)
self._modify_ticket('joe', 'Comment 4', self.t4,
keywords='a', foo='change4')
def tearDown(self):
self.env.reset_db()
def test_delete_last_comment(self):
ticket = Ticket(self.env, self.id)
self.assertEqual('a', ticket['keywords'])
self.assertEqual('change4', ticket['foo'])
t = datetime_now(utc)
ticket.delete_change(cnum=4, when=t)
self.assertEqual('a, b', ticket['keywords'])
self.assertEqual('change3', ticket['foo'])
self.assertIsNone(ticket.get_change(cnum=4))
self.assertIsNotNone(ticket.get_change(cnum=3))
self.assertEqual(t, ticket['changetime'])
def test_delete_last_comment_when_custom_field_gone(self):
"""Regression test for http://trac.edgewall.org/ticket/10858"""
ticket = Ticket(self.env, self.id)
self.assertEqual('a', ticket['keywords'])
self.assertEqual('change4', ticket['foo'])
# we simulate the removal of the definition of the 'foo' custom field
self.env.config.remove('ticket-custom', 'foo')
del TicketSystem(self.env).fields
del TicketSystem(self.env).custom_fields
ticket = Ticket(self.env, self.id)
t = datetime_now(utc)
ticket.delete_change(cnum=4, when=t)
self.assertEqual('a, b', ticket['keywords'])
# 'foo' is no longer defined for the ticket
self.assertIsNone(ticket['foo'])
# however, 'foo=change3' is still in the database
self.assertEqual([('change3',)], self.env.db_query("""
SELECT value FROM ticket_custom WHERE ticket=%s AND name='foo'
""", (self.id,)))
self.assertIsNone(ticket.get_change(cnum=4))
self.assertIsNotNone(ticket.get_change(cnum=3))
self.assertEqual(t, ticket['changetime'])
def test_delete_last_comment_by_date(self):
ticket = Ticket(self.env, self.id)
self.assertEqual('a', ticket['keywords'])
self.assertEqual('change4', ticket['foo'])
t = datetime_now(utc)
ticket.delete_change(cdate=self.t4, when=t)
self.assertEqual('a, b', ticket['keywords'])
self.assertEqual('change3', ticket['foo'])
self.assertIsNone(ticket.get_change(cdate=self.t4))
self.assertIsNotNone(ticket.get_change(cdate=self.t3))
self.assertEqual(t, ticket['changetime'])
def test_delete_mid_comment(self):
ticket = Ticket(self.env, self.id)
self.assertChange(ticket, 4, self.t4, 'joe',
comment=dict(author='joe', old='4', new='Comment 4'),
keywords=dict(author='joe', old='a, b', new='a'),
foo=dict(author='joe', old='change3', new='change4'))
t = datetime_now(utc)
ticket.delete_change(cnum=3, when=t)
self.assertIsNone(ticket.get_change(cnum=3))
self.assertEqual('a', ticket['keywords'])
self.assertChange(ticket, 4, self.t4, 'joe',
comment=dict(author='joe', old='4', new='Comment 4'),
keywords=dict(author='joe', old='a, b, c', new='a'),
foo=dict(author='joe', old='change2', new='change4'))
self.assertEqual(t, ticket['changetime'])
def test_delete_mid_comment_by_date(self):
ticket = Ticket(self.env, self.id)
self.assertChange(ticket, 4, self.t4, 'joe',
comment=dict(author='joe', old='4', new='Comment 4'),
keywords=dict(author='joe', old='a, b', new='a'),
foo=dict(author='joe', old='change3', new='change4'))
t = datetime_now(utc)
ticket.delete_change(cdate=self.t3, when=t)
self.assertIsNone(ticket.get_change(cdate=self.t3))
self.assertEqual('a', ticket['keywords'])
self.assertChange(ticket, 4, self.t4, 'joe',
comment=dict(author='joe', old='4', new='Comment 4'),
keywords=dict(author='joe', old='a, b, c', new='a'),
foo=dict(author='joe', old='change2', new='change4'))
self.assertEqual(t, ticket['changetime'])
def test_delete_mid_comment_inconsistent(self):
# Make oldvalue on keywords for change 4 inconsistent. This should
# result | |
import numpy as np
import logging
from collections import defaultdict
from Bio import motifs as BioMotifs
from itertools import product
import re
def motifs_from_file( opts ):
"""
Read through the file containing motifs to query. Should
be in the form:
ACGT-0
GATC-1
CATG-1
where the number indicates the 0-based index of the
methylated base in the string.
"""
    file_motifs = set()
    with open(opts.motifs_file) as f:
        for line in f:
            file_motifs.add(line.strip())
    logging.info("Added %s motifs from %s" % (len(file_motifs), opts.motifs_file))
return file_motifs
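The `SEQ-i` convention used throughout can be split back into its parts with a one-line parser (`parse_motif` is an illustrative helper, not part of this module):

```python
def parse_motif(motif):
    """Split a 'SEQ-i' motif string into (sequence, methylated index)."""
    seq, idx = motif.rsplit("-", 1)
    return seq, int(idx)
```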
def build_motif_sets( opts ):
"""
Generate a set of all possible motifs within provided parameters.
"""
motifs = set()
bi_motifs = set()
NUCS = "ACGT"
total_kmers = 0
logging.info("Initiating dictionary of all possible motifs...")
#########################################
# Generate all possible contiguous motifs
#########################################
for kmer in range(opts.min_kmer,opts.max_kmer+1):
total_kmers += len(NUCS)**kmer
logging.info(" - Adding %s %s-mer motifs..." % (len(NUCS)**kmer, kmer))
for seq in product("ATGC",repeat=kmer):
string = "".join(seq)
for base in opts.mod_bases:
indexes = [m.start() for m in re.finditer(base, string)]
for index in indexes:
motif = "%s-%s" % (string, index)
motifs.add(motif)
logging.info("Done: %s possible contiguous motifs\n" % len(motifs))
if opts.bipartite:
####################################################################################
# Generate all possible bipartite motifs. The parameters specifying
# the acceptable forms of bipartite motifs are currently hard-coded as:
# opts.bipart_config = [(3,4), (5,6), (3,4)]
# where the first item in tuple is the possible lengths of the first part
# of the motif, the second item contains the possible number of Ns, and
# the third item contains the possible lengths of the second part of the motif.
# Adding to this motif space greatly increases compute time and memory requirements.
####################################################################################
logging.info(" - Adding bipartite motifs to search space...")
firsts = opts.bipart_config[0]
Ns = opts.bipart_config[1]
seconds = opts.bipart_config[2]
for bases1 in firsts:
for num_Ns in Ns:
for bases2 in seconds:
last_mod_pos = bases1-1
for seq1 in product("ATGC",repeat=bases1):
for seq2 in product("ATGC",repeat=bases2):
string = "".join(seq1) + ("N"*num_Ns) + "".join(seq2)
for base in opts.mod_bases:
indexes = [m.start() for m in re.finditer(base, string) if m.start()<=last_mod_pos]
for index in indexes:
motif = "%s-%s" % (string, index)
bi_motifs.add(motif)
logging.info("Done: %s possible bipartite motifs\n" % len(bi_motifs))
return motifs, bi_motifs
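As a sanity check on the size of the contiguous search space: with a single modified base there are k * 4**(k-1) distinct (sequence, index) motifs of length k, since the modified base can occupy any of k positions while the other k-1 positions range over four nucleotides. A small re-derivation of the inner loop above (`count_motifs` is illustrative):

```python
from itertools import product
import re

def count_motifs(k, base="A"):
    # enumerate every k-mer and emit one motif per occurrence of `base`
    motifs = set()
    for seq in product("ACGT", repeat=k):
        s = "".join(seq)
        for m in re.finditer(base, s):
            motifs.add("%s-%d" % (s, m.start()))
    return len(motifs)
```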
def add_degen_motifs( motifs, orig_control_means ):
"""
If a predetermined set of motifs is input using --motifs_file option,
create a new entry for the degen motif in the control values dictionary
by combining the existing data from the various specified versions
of the motif.
"""
keys_str = "\n".join(orig_control_means.keys())
new_control_means = orig_control_means
for m in motifs:
new_m = sub_bases(m)
if new_m!=m:
matches = re.findall(new_m, keys_str)
degen_mean = np.mean([orig_control_means[match] for match in matches])
new_control_means[m] = degen_mean
logging.info("Adding degenerate motif %s to controls: %s" % (m, degen_mean))
return new_control_means
def sub_bases( motif ):
    """
    Replace degenerate IUPAC symbols in a motif with regex character
    classes covering all possible specifications of the motif.
    """
    subs = {"W": "[AT]",
            "S": "[CG]",
            "M": "[AC]",
            "K": "[GT]",
            "R": "[AG]",
            "Y": "[CT]",
            "B": "[CGT]",
            "D": "[AGT]",
            "H": "[ACT]",
            "V": "[ACG]",
            "N": "[ACGTN]"}
    for symbol, sub in subs.items():
        if motif.find(symbol) > -1:
            motif = motif.replace(symbol, sub)
    return motif
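For example, `sub_bases` maps the degenerate motif `GANTC-1` to a regex matching any of its concrete specifications. A self-contained equivalent of the substitution (table copied from above; `to_regex` is an illustrative name):

```python
import re

IUPAC = {"W": "[AT]", "S": "[CG]", "M": "[AC]", "K": "[GT]", "R": "[AG]",
         "Y": "[CT]", "B": "[CGT]", "D": "[AGT]", "H": "[ACT]",
         "V": "[ACG]", "N": "[ACGTN]"}

def to_regex(motif):
    # one pass per symbol; character classes introduce no further symbols
    for symbol, char_class in IUPAC.items():
        motif = motif.replace(symbol, char_class)
    return motif
```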
def comp_motif( motif ):
"""
Return the complement of the input motif.
"""
COMP = {"A":"T", \
"T":"A", \
"C":"G", \
"G":"C", \
"W":"S", \
"S":"W", \
"M":"K", \
"K":"M", \
"R":"Y", \
"Y":"R", \
"B":"V", \
"D":"H", \
"H":"D", \
"V":"B", \
"N":"N", \
"X":"X", \
"*":"*"}
r_motif = []
for char in motif:
r_motif.append( COMP[char] )
return "".join(r_motif)
def rev_comp_motif( motif ):
"""
Return the reverse complement of the input motif.
"""
COMP = {"A":"T", \
"T":"A", \
"C":"G", \
"G":"C", \
"W":"S", \
"S":"W", \
"M":"K", \
"K":"M", \
"R":"Y", \
"Y":"R", \
"B":"V", \
"D":"H", \
"H":"D", \
"V":"B", \
"N":"N", \
"X":"X", \
"*":"*"}
rc_motif = []
for char in motif[::-1]:
rc_motif.append( COMP[char] )
return "".join(rc_motif)
def shorten_motifs( highscore_motifs ):
"""
Keep only the shortest, most concise version of the high scoring
motifs (reduces redundancy).
"""
keeper_motifs = set(highscore_motifs.keys())
if len(highscore_motifs)>0:
shortest_contiguous = min([len(m.split("-")[0]) for m in highscore_motifs.keys()])
# (1) Sort by keys; shortest motif to longest
motifs_s = sorted(highscore_motifs, key=len)
# (2) For each motif, check if it's contained in a longer version of other motifs
for m in motifs_s:
motif_str = m.split("-")[0]
motif_idx = int(m.split("-")[1])
for remaining in list(keeper_motifs):
remaining_str = remaining.split("-")[0]
remaining_idx = int(remaining.split("-")[1])
match = re.search(motif_str, remaining_str)
                if match is not None and (motif_idx + match.start()) == remaining_idx and len(remaining_str) > len(motif_str):
# 3. If True, remove the longer version
keeper_motifs.remove(remaining)
return keeper_motifs
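The containment test in `shorten_motifs` keeps `GATC-1` and drops `GGATCC-2`, because `GATC` occurs inside `GGATCC` at offset 1 and the methylated positions line up (1 + 1 == 2). The check in isolation (`subsumes` is an illustrative name):

```python
import re

def subsumes(short, longer):
    # True if `short` occurs inside `longer` with the methylated index aligned
    s_seq, s_idx = short.rsplit("-", 1)
    l_seq, l_idx = longer.rsplit("-", 1)
    m = re.search(s_seq, l_seq)
    return (m is not None
            and int(s_idx) + m.start() == int(l_idx)
            and len(l_seq) > len(s_seq))
```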
def wagner_fischer(word_1, word_2):
    """
    Compute the Wagner-Fischer edit-distance matrix D between two words
    (deletion/insertion cost 1, substitution cost 2), along with the
    backtrack matrix B of move flags.
    """
    n = len(word_1) + 1 # counting empty string
    m = len(word_2) + 1 # counting empty string
    # initialize D matrix
    D = np.zeros(shape=(n, m), dtype=int)
D[:,0] = range(n)
D[0,:] = range(m)
# B is the backtrack matrix. At each index, it contains a triple
# of booleans, used as flags. if B(i,j) = (1, 1, 0) for example,
# the distance computed in D(i,j) came from a deletion or a
# substitution. This is used to compute backtracking later.
B = np.zeros(shape=(n, m), dtype=[("del", 'b'),
("sub", 'b'),
("ins", 'b')])
B[1:,0] = (1, 0, 0)
B[0,1:] = (0, 0, 1)
for i, l_1 in enumerate(word_1, start=1):
for j, l_2 in enumerate(word_2, start=1):
deletion = D[i-1,j] + 1
insertion = D[i, j-1] + 1
substitution = D[i-1,j-1] + (0 if l_1==l_2 else 2)
mo = np.min([deletion, insertion, substitution])
B[i,j] = (deletion==mo, substitution==mo, insertion==mo)
D[i,j] = mo
return D, B
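With deletions/insertions costing 1 and substitutions 2 (as above), the bottom-right cell of D is the alignment distance; e.g. "kitten" vs "sitting" scores 2 substitutions + 1 insertion = 5. A trimmed version computing only D:

```python
import numpy as np

def edit_matrix(w1, w2):
    n, m = len(w1) + 1, len(w2) + 1
    D = np.zeros((n, m), dtype=int)
    D[:, 0] = range(n)
    D[0, :] = range(m)
    for i in range(1, n):
        for j in range(1, m):
            D[i, j] = min(D[i - 1, j] + 1,      # deletion
                          D[i, j - 1] + 1,      # insertion
                          D[i - 1, j - 1] + (0 if w1[i - 1] == w2[j - 1] else 2))
    return D
```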
def naive_backtrace(B_matrix):
"""
"""
i, j = B_matrix.shape[0]-1, B_matrix.shape[1]-1
backtrace_idxs = [(i, j)]
while (i, j) != (0, 0):
if B_matrix[i,j][1]:
i, j = i-1, j-1
elif B_matrix[i,j][0]:
i, j = i-1, j
elif B_matrix[i,j][2]:
i, j = i, j-1
backtrace_idxs.append((i,j))
return backtrace_idxs
def align(word_1, word_2, bt):
"""
"""
aligned_word_1 = []
aligned_word_2 = []
operations = []
backtrace = bt[::-1] # make it a forward trace
for k in range(len(backtrace) - 1):
i_0, j_0 = backtrace[k]
i_1, j_1 = backtrace[k+1]
w_1_letter = None
w_2_letter = None
op = None
if i_1 > i_0 and j_1 > j_0: # either substitution or no-op
if word_1[i_0] == word_2[j_0]: # no-op, same symbol
w_1_letter = word_1[i_0]
w_2_letter = word_2[j_0]
op = " "
else: # cost increased: substitution
w_1_letter = word_1[i_0]
w_2_letter = word_2[j_0]
op = "s"
elif i_0 == i_1: # insertion
w_1_letter = " "
w_2_letter = word_2[j_0]
op = "i"
else: # j_0 == j_1, deletion
w_1_letter = word_1[i_0]
w_2_letter = " "
op = "d"
aligned_word_1.append(w_1_letter)
aligned_word_2.append(w_2_letter)
operations.append(op)
return aligned_word_1, aligned_word_2, operations
def refine_degen_motifs( keeper_motifs, contig_motifs, case_motif_Ns ):
"""
Identify redundant instances of degenerate motifs and replace them with the
most parsimonious representation.
'A': 'A',
'C': 'C',
'G': 'G',
'T': 'T',
'AC': 'M',
'AG': 'R',
'AT': 'W',
'CG': 'S',
'CT': 'Y',
'GT': 'K',
'ACG': 'V',
'ACT': 'H',
'AGT': 'D',
'CGT': 'B',
'ACGT': 'N'
In order to call a motif consensus to identify degenerate bases, we first
need to identify those motifs that are likely different specifications
of the same motif containing degenerate bases. To build this set of related
motifs, we filter based on:
(1) motif length
(2) methylated position in the motif
(3) number of central Ns in bipartite motifs
(4) edit distance between the most representative of the motifs and all
others in the set. Most representative determined by all-vs-all edit
distance calculations.
Once this set of related motifs is selected, we call a consensus to get
degenerate bases.
"""
def edit_distance( word1, word2 ):
D,B = wagner_fischer(word1, word2)
bt = naive_backtrace(B)
alignment_table = align(word1, word2, bt)
n_edits = len([entry for entry in alignment_table[2] if entry!=" "])
return n_edits
refined_motifs = []
degen_members = defaultdict(list)
# First sort by (1) motif lengths
lens = map(lambda x: len(x), list(keeper_motifs))
for size in list(set(lens)):
size_motifs = [m for m in list(keeper_motifs) if len(m)==size]
# Next, sort by (2) methylated position in motif
idxs = set(map(lambda x: x.split("-")[1], size_motifs))
for idx in list(idxs):
idx_motifs = [m for m in size_motifs if m.split("-")[1]==idx]
# Count number of Ns in bipartite motifs
n_N_motifs = defaultdict(list)
for m in idx_motifs:
n_N = len([nuc for nuc in m if nuc=="N"])
n_N_motifs[n_N].append(m)
# Now sort based on (3) number of central Ns in bipartite motifs
for n_N,motifs in n_N_motifs.iteritems():
motif_set = set()
leftovers = set()
# Finally, calculate edit distance between remaining motifs.
# Run all-against-all edit distance to determine the most
# representative of the motifs in the set.
edit_mat = np.zeros([len(motifs), len(motifs)])
for i in range(len(motifs)):
for j in range(len(motifs)):
edit_mat[i,j] = edit_distance(motifs[i], motifs[j])
# Find the motif with the least total edits
min_idx = np.argmin([np.sum(edit_mat[i,:]) for i in range(edit_mat.shape[0])])
word1 = motifs[min_idx]
# Calculate edit distance against all other motifs
other_motifs = [motif for motif in motifs if motif!=word1]
for word2 in other_motifs:
n_edits = edit_distance(word1, word2)
n_Ns = len([x for x in word1 if x=="N"])
if (n_Ns==0 and n_edits<=1) or (n_Ns>0 and n_edits<=2):
# Close enough edit distance to use in consensus calling
motif_set.add(word1)
motif_set.add(word2)
else:
# Not close enough edit distance; do not group this motif
# with the others; will retain and keep separate
leftovers.add(word2)
if len(motif_set)==0:
# No companion motif sets found for consensus. Cannot search for
# degenerate bases.
refined_motifs+=motifs
else:
# Successfully found companion motifs. Gather information about these
# motifs and call consensus motif to identify degenerate bases.
SCp_values = [contig_motifs[m] for m in list(motif_set)]
N_values = [case_motif_Ns[m] for m in list(motif_set)]
for_consensus = [m.split("-")[0] for m in list(motif_set)]
# Must treat contiguous and bipartite motifs slightly differently
if n_N>0:
# BioMotifs.degenerate_consensus() cannot accept Ns;
# will temporarily replace Ns with Ts.
replaced = []
for j,m in enumerate(for_consensus):
mk = np.array([i for i,nuc in enumerate(m) if nuc=="N"])
replaced.append( m.replace("N","T") )
m = BioMotifs.create(replaced)
x = list(m.degenerate_consensus)
for pos in mk:
x[pos] = "N"
degen_motif = "".join(x) + "-%s" % idx
else:
# No need to replace Ns with Ts
m = BioMotifs.create(for_consensus)
degen_motif = str(m.degenerate_consensus) + "-%s" % idx  # assumed completion (line truncated in source); mirrors the bipartite branch above
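The consensus step above leans on Bio.motifs' `degenerate_consensus`. As an illustration of the IUPAC mapping listed in the docstring, here is a minimal pure-Python column-union consensus (`union_consensus` is illustrative only; Biopython's own rule is frequency-weighted, so this simpler sketch can produce more degenerate codes):

```python
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AC"): "M", frozenset("AG"): "R", frozenset("AT"): "W", frozenset("CG"): "S",
    frozenset("CT"): "Y", frozenset("GT"): "K", frozenset("ACG"): "V", frozenset("ACT"): "H",
    frozenset("AGT"): "D", frozenset("CGT"): "B", frozenset("ACGT"): "N",
}

def union_consensus(motifs):
    """Collapse equal-length motifs column by column into IUPAC degenerate codes."""
    return "".join(IUPAC[frozenset(col)] for col in zip(*motifs))

print(union_consensus(["GATC", "GTTC"]))  # GWTC
```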
# SkinParser.py (from the exizt/tistory-skin-simulator repository)
"""
Handles the functional parts of converting skin.html into a Flask template file.
It is called by SkinLoader.
Responsible for rendering and parsing.
"""
import re
def parse(context: str) -> str:
"""
Convert skin.html into Flask template form.
:param context: str
:return: str
"""
# Common replacements
context = context.replace("[##_body_id_##]", "{{ body_id }}")
context = context.replace("[##_page_title_##]", "{{ page_title }}")
context = context.replace("[##_category_list_##]", "{{ category_list|safe }}")
# Tag wrapping the entire document
context = re.sub(pattern=r'</?s_t3>', repl="", string=context, flags=re.MULTILINE)
# Ad-related tags
context = context.replace("[##_revenue_list_upper_##]", "")
context = context.replace("[##_revenue_list_lower_##]", "")
# Can occasionally cause errors.
context = context.replace("[##_article_rep_thumbnail_raw_url_##]", "")
# Unneeded tags (stripped so the markup stays readable while working)
context = re.sub(pattern=r'</?s_search>', repl="", string=context, flags=re.MULTILINE)
context = re.sub(pattern=r'</?s_ad_div>', repl="", string=context, flags=re.MULTILINE)
# Convert s_if_var_ and s_not_var
context = parse_skin_var(context)
# cover feature
context = parse_cover(context)
# notice-related.
context = parse_notice(context)
# s_index_article_rep-related.
context = parse_index_article_rep(context)
# Convert s_list-related markup.
context = parse_s_list(context)
# article-related.
context = parse_article(context)
# guest-related.
context = parse_guest(context)
# tag-related
context = parse_tag(context)
# sidebar-related
context = parse_sidebar(context)
# location-log-related
context = parse_location_log(context)
# Might it be better to loop several times when there are multiple matches, and once when there is one?
# It seems that among the remaining tags, _rep means repeat.
context = re.sub(pattern=r'<s_([^>]+)_rep>', repl=r'{% for \g<1>_rep in \g<1>_list %}', string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_([^>]+)_rep>', repl=r'{% endfor %}', string=context, flags=re.MULTILINE)
# Other variables (these could be replaced in one pass, but leave that until things are tidier; it feels a bit early now.)
# Blog title
context = context.replace("[##_title_##]", "{{ title }}")
# Profile image, or the blog's representative image
context = context.replace("[##_image_##]", "{{ image }}")
# Blogger pen name
context = context.replace("[##_blogger_##]", "{{ blogger }}")
# Blog description
context = context.replace("[##_desc_##]", "{{ desc }}")
# Blog URL
context = context.replace("[##_blog_link_##]", "{{ blog_link }}")
# rss_url
context = context.replace("[##_rss_url_##]", "#")
# Counters
context = context.replace("[##_count_total_##]", "{{ count_total }}")
context = context.replace("[##_count_today_##]", "{{ count_today }}")
context = context.replace("[##_count_yesterday_##]", "{{ count_yesterday }}")
context = context.replace("[##_search_name_##]", "")
context = context.replace("[##_search_onclick_submit_##]", "")
context = context.replace("[##_search_text_##]", "검색어")
context = context.replace("[##_owner_url_##]", "#")
context = context.replace("[##_blog_menu_##]", "{{ blog_menu|safe }}")
context = context.replace("[##_guestbook_link_##]", "./guestbook")
context = context.replace("[##_taglog_link_##]", "./tags")
# contents = re.sub(pattern=r'<s_([^>]+)_rep>', repl=r'{% for i in \g<1> %}', string=contents, flags=re.MULTILINE)
# s_cover carries a name attribute, so it is handled separately.
return context
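Each replacement in parse() maps a Tistory skin tag onto its Jinja2 counterpart. A small standalone example of the generic `_rep` regex rules used above (the `notice` snippet is an illustrative input, not taken from a real skin):

```python
import re

snippet = '<s_notice_rep>[##_notice_rep_title_##]</s_notice_rep>'
# Open/close repeat tags become a Jinja2 for-loop over <name>_list.
out = re.sub(r'<s_([^>]+)_rep>', r'{% for \g<1>_rep in \g<1>_list %}', snippet)
out = re.sub(r'</s_([^>]+)_rep>', r'{% endfor %}', out)
# Substitution markers become attribute lookups on the loop variable.
out = re.sub(r'\[##_notice_rep_([^\]]+)_##\]', r'{{ notice_rep.\g<1> }}', out)
print(out)  # {% for notice_rep in notice_list %}{{ notice_rep.title }}{% endfor %}
```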
def parse_cover(context: str) -> str:
"""
Render the parts corresponding to s_cover
:param context: string
:return: str: string
"""
# Convert the s_cover_group parts
context = context.replace("<s_cover_group>", "{% if cover_group %}")
context = context.replace("</s_cover_group>", "{% endif %}")
# Convert the s_cover parts
context = re.sub(pattern=r'<s_cover name=([^>]+)>', repl=r"{% if cover['name'] == \g<1> %}", string=context,
flags=re.MULTILINE)
context = context.replace("</s_cover>", "{% endif %}")
# Convert the s_cover_item parts
context = context.replace("<s_cover_item>", "{% for cover_item in cover['data'] %}")
context = context.replace("</s_cover_item>", "{% endfor %}")
# Convert the check routine for cover_item sub-elements
context = re.sub(pattern=r'<s_cover_item_([^>]+)>', repl=r"{% if cover_item.\g<1> %}", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_cover_item_([^>]+)>', repl=r' {% endif %}', string=context, flags=re.MULTILINE)
# Convert the parts that print cover_item
context = re.sub(pattern=r'\[##_cover_item_([^\]]+)_##\]', repl=r"{{ cover_item['\g<1>'] }}", string=context,
flags=re.MULTILINE)
# Convert s_cover_rep
context = context.replace("<s_cover_rep>", "{% for cover in cover_group %}")
context = context.replace("</s_cover_rep>", "{% endfor %}")
# Convert things like cover_title and cover_url. (Mind the order: if this ran earlier, things could get tangled.)
context = re.sub(pattern=r'\[##_cover_([^\]]+)_##\]', repl=r"{{ cover['\g<1>'] }}", string=context,
flags=re.MULTILINE)
# Convert if-blocks on cover elements
context = re.sub(pattern=r'<s_cover_([^>]+)>', repl=r"{% if cover['\g<1>'] %}", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_cover_([^>]+)>', repl=r'{% endif %}', string=context, flags=re.MULTILINE)
return context
def parse_index_article_rep(context: str) -> str:
"""
Collect the s_index_article_rep blocks scattered around and put them after the s_list section.
:param context:
:return:
"""
s_article_protected = find_tags_inner_html('s_article_protected', context)
s_index_article_rep = find_tags_inner_html('s_index_article_rep', s_article_protected)
# s_protected_index = re.findall(r'<s_index_article_rep>.*</s_index_article_rep>', s_protected, re.DOTALL)
# s_protected_index = s_protected_index[0]
index_article_rep_protected = "{% if article_rep.is_protected %}" + s_index_article_rep + '\t{% endif %}'
# print(protected_index_article)
s_article_rep = find_tags_inner_html('s_article_rep', context)
s_index_article_rep = find_tags_inner_html('s_index_article_rep', s_article_rep)
index_article_rep_normal = '{% if article_rep.is_protected is not defined %}' + s_index_article_rep + '{% endif %}'
# Attach just inside the end of s_list; i.e. append it.
# contents = re.sub(pattern='</', repl=r'', string=contents, flags=re.MULTILINE)
index_article_rep_both = "\n\t\t\t\t\t\t{% if article_list_info is defined %}\n\t\t\t\t\t\t" + index_article_rep_protected \
+ '\n\t\t\t\t\t\t' + index_article_rep_normal + "\n\t\t\t\t\t\t{% else %}"
# context = context.replace("</s_list>", index_article_both + "</s_list>")
# index_article_rep can be thought of as attaching at the front, inside s_article_rep...
context = context.replace("<s_article_rep>", "{% for article_rep in article_index_list %}" + index_article_rep_both)
context = context.replace("</s_article_rep>", "\t{% endif %}\n\t\t\t\t\t{% endfor %}")
# Remove the original index_article_rep.
context = remove_tag('s_index_article_rep', context)
context = context.replace("[##_article_rep_desc_##]", "{{ article_rep.desc|safe }}")
# Convert article_rep_ variables
context = re.sub(pattern=r'\[##_article_rep_([^\]]+)_##\]', repl=r'{{ article_rep.\g<1> }}', string=context,
flags=re.MULTILINE)
return context
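parse_index_article_rep relies on a find_tags_inner_html helper defined elsewhere in the project. A minimal sketch of what such a helper plausibly does (an assumption for illustration, not the project's actual implementation):

```python
import re

def find_tags_inner_html(tag, html):
    """Return the inner HTML of the first <tag>...</tag> block, or '' if absent."""
    m = re.search(r'<%s[^>]*>(.*?)</%s>' % (tag, tag), html, re.DOTALL)
    return m.group(1) if m else ''

print(find_tags_inner_html('s_list', '<div><s_list>ITEMS</s_list></div>'))  # ITEMS
```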
def parse_s_list(context: str) -> str:
"""
Legacy-style list
:param context:
:return:
"""
# Convert s_list
context = context.replace("<s_list>", "{% if article_list_info is defined %}")
context = context.replace("</s_list>", "{% endif %}")
# Convert s_list_empty
context = context.replace("<s_list_empty>", "{% if article_index_list|length == 0 %}")
context = context.replace("</s_list_empty>", "{% endif %}")
# Convert s_list_rep
context = context.replace("<s_list_rep>", "{% for list_rep in article_list_legacy %}")
context = context.replace("</s_list_rep>", "{% endfor %}")
# s_list_rep_thumbnail
context = context.replace("<s_list_rep_thumbnail>", "{% if list_rep.thumbnail %}")
context = context.replace("</s_list_rep_thumbnail>", "{% endif %}")
# Convert variables like list_rep_title
context = re.sub(pattern=r'\[##_list_rep_([^\]]+)_##\]', repl=r'{{ list_rep.\g<1> }}', string=context,
flags=re.MULTILINE)
# Convert variables like list_conform. These use the info object, so map to article_list_info.
context = re.sub(pattern=r'\[##_list_([^\]]+)_##\]', repl=r'{{ article_list_info.\g<1> }}', string=context,
flags=re.MULTILINE)
return context
def parse_article(context: str) -> str:
"""
Conversion for the article section
:param context:
:return:
"""
# permalink_article: conversion for the single-post view.
# contents = contents.replace("<s_permalink_article_rep>", "{% if article_rep['type'] == 'permalink' %}")
# contents = contents.replace("</s_permalink_article_rep>", "{% endif %}")
# This conditional is simply removed.
context = re.sub(pattern=r'</?s_permalink_article_rep>', repl="", string=context,
flags=re.MULTILINE)
# protected article: conversion for protected posts.
context = context.replace("<s_article_protected>", "{% if article_protected %}")
context = context.replace("</s_article_protected>", "{% endif %}")
# s_article_rep_: the article's sub-values (date written, author, link, title, etc.)
context = re.sub(pattern=r'<s_article_rep_([^>]+)>', repl=r" {% if article_rep.\g<1> is defined %}", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_article_rep_([^>]+)>', repl=r' {% endif %}', string=context, flags=re.MULTILINE)
# Unneeded tags (stripped so the markup stays readable while working)
context = re.sub(pattern=r'</?s_tag_label>', repl="", string=context, flags=re.MULTILINE)
context = re.sub(pattern=r'</?s_article_related>', repl="", string=context, flags=re.MULTILINE)
# Convert s_article_rep
# context = context.replace("<s_article_rep>", "{% if article_rep is defined %}")
# context = context.replace("</s_article_rep>", "{% endif %}")
return context
def parse_skin_var(context: str) -> str:
"""
Conversion for skin-specific variables
:param context:
:return:
"""
# First, guard var names against dashes (some skins happen to use them...)
context = re.sub(pattern=r'</?s_(if|not)_var_([^>]+)>', repl=lambda m: m.group().replace("-", "_"),
string=context, flags=re.MULTILINE)
context = re.sub(pattern=r'\[##_var_([^\]]+)_##\]', repl=lambda m: m.group().replace("-", "_"),
string=context, flags=re.MULTILINE)
# Convert s_if_var_ and s_not_var
# Convert s_if_var
context = re.sub(pattern=r'<s_if_var_([^>]+)>', repl=r" {% if vars.\g<1> is defined %}", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_if_var_([^>]+)>', repl=' {% endif %}', string=context, flags=re.MULTILINE)
# Convert s_not_var
# {% if vars.\g<1> is none or not vars['\g<1>'] %}
context = re.sub(pattern=r'<s_not_var_([^>]+)>', repl=r" {% if vars.\g<1> is not defined %}",
string=context, flags=re.MULTILINE)
context = re.sub(pattern=r'</s_not_var_([^>]+)>', repl=' {% endif %}', string=context, flags=re.MULTILINE)
# Handle the places where var is printed
context = re.sub(pattern=r'\[##_var_([^\]]+)_##\]', repl=r"{{ vars.\g<1> }}", string=context,
flags=re.MULTILINE)
return context
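The dash guard above passes a callable as the re.sub replacement so that only the matched tag text is rewritten. A standalone illustration of the same technique (`side-bar` is an invented variable name):

```python
import re

s = '<s_if_var_side-bar>[##_var_side-bar_##]</s_if_var_side-bar>'
# The lambda receives the match object and rewrites dashes only inside the match.
s = re.sub(r'</?s_(if|not)_var_([^>]+)>', lambda m: m.group().replace("-", "_"), s)
s = re.sub(r'\[##_var_([^\]]+)_##\]', lambda m: m.group().replace("-", "_"), s)
print(s)  # <s_if_var_side_bar>[##_var_side_bar_##]</s_if_var_side_bar>
```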
def parse_notice(context: str) -> str:
"""
Conversion for the notice section
:param context:
:return:
"""
context = re.sub(pattern=r'<s_notice_rep_([^>]+)>', repl=r" {% if notice_rep['\g<1>'] %}", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</s_notice_rep_([^>]+)>', repl=r' {% endif %}', string=context, flags=re.MULTILINE)
return context
def parse_guest(context: str) -> str:
"""
Conversion for the guestbook section
:param context:
:return:
"""
context = context.replace("<s_guest>", "{% if guest %}")
context = context.replace("</s_guest>", "{% endif %}")
# Because these must render on screen, a few are simply removed
context = re.sub(pattern=r'</?s_guest_input_form>', repl="", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</?s_guest_member>', repl="", string=context,
flags=re.MULTILINE)
context = re.sub(pattern=r'</?s_guest_container>', repl="", string=context,
flags=re.MULTILINE)
# Variables
context = re.sub(pattern=r'\[##_guest_rep_([^\]]+)_##\]', repl=r'{{ guest_rep.\g<1> }}', string=context,
flags=re.MULTILINE)
context = context.replace("[##_guest_name_##]", "방문자이름")
context = context.replace("[##_guest_input_name_##]", "")
context = context.replace("[##_guest_input_password_##]", "")
context = context.replace("[##_guest_password_##]", "")
context = context.replace("<s_guest_reply_container>", "{% if guest_rep.reply is defined %}")
context = context.replace("</s_guest_reply_container>", "{% endif %}")
context = context.replace("<s_guest_reply_rep>", "{% for guest_rep in guest_rep.reply %}")
context = context.replace("</s_guest_reply_rep>", "{% endfor %}")
# Unneeded tags (stripped so the markup stays readable while working)
context = re.sub(pattern=r'</?s_guest_form>', repl="", string=context, flags=re.MULTILINE)
return context
def parse_tag(context: str) -> str:
"""
Conversion for tags
"""
import os
import dill
import numpy as np
import copy
from math import pi, log, exp, sqrt, floor
from keras.models import Sequential
from keras.layers import LSTM, GRU, Dense
from keras import optimizers
from oscillator_snap.oscillator_auxiliaries import *
def compile_model(model, learning_rate):
"""compiles the model
:param model: RNN model
:returns: compiled model"""
# optimizer (stochastic gradient descent)
sgd = optimizers.SGD(lr=learning_rate, momentum=0.0, decay=0.0, nesterov=False)
# compile model
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
return model
def generate_model(data_dim_in, data_dim_out, past, nodes, learning_rate, cell=LSTM, n_hidden_layers=1):
"""generates the model with n hidden layers (does not compile it)
:param n_hidden_layers: number of hidden layers
:returns: return the keras model"""
model = Sequential()
if(n_hidden_layers > 0):
model.add(cell(nodes, activation='tanh', return_sequences=True, input_shape=(past, data_dim_in))) # input layer
for i in range(n_hidden_layers-1):
model.add(cell(nodes, activation='tanh', return_sequences=True)) # hidden layers
model.add(cell(nodes, activation='tanh')) # last hidden layer
else:
model.add(cell(nodes, activation='tanh', input_shape=(past, data_dim_in))) # input layer
model.add(Dense(data_dim_out, activation='linear')) # output layer
return model
def parse_train_data(seq, past, dim_in, dim_out):
"""parses the training data for the RNN
:param seq: the sequence to be parsed
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
:param dim_out: the dimensionality of the prediction
:returns: input data X and prediction Y"""
# network input output
X = []
Y = []
# determine the length of the sequence
L = len(seq[0])
# take care of dimensions
seq_in = seq[0:dim_in]
seq_out = seq[0:dim_out]
# reshape
seq_in = np.array(seq_in)
seq_in = seq_in.transpose()
seq_out = np.array(seq_out)
seq_out = seq_out.transpose()
# organize
for i in range(L-past):
X.append(seq_in[i:i+past])
Y.append(seq_out[i+past])
return np.array(X), np.array(Y)
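parse_train_data slides a window of `past` steps over the sequence and pairs each window with the value that follows it. The same idea for the simplest case, dim_in = dim_out = 1 (`make_windows` is an illustrative helper, not part of the module):

```python
import numpy as np

def make_windows(seq, past):
    """Slice a 1-D series into (samples, past, 1) inputs plus next-step targets,
    mirroring what parse_train_data does for dim_in = dim_out = 1."""
    seq = np.asarray(seq, dtype=float)
    X = np.stack([seq[i:i + past] for i in range(len(seq) - past)])[..., None]
    Y = seq[past:]
    return X, Y

X, Y = make_windows(range(6), past=3)
print(X.shape, Y.tolist())  # (3, 3, 1) [3.0, 4.0, 5.0]
```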
def forecast_starter(seq, past, dim_in):
"""prepares the staring vector for RNN forecast
:param seq: the sequence from which the beggining is used as the forcast starter
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
:returns: starter input data"""
# determine the length of the sequence
L = len(seq[0])
# take care of dimensions
seq_in = seq[0:dim_in]
# reshape
seq_in = np.array(seq_in)
seq_in = seq_in.transpose()
return np.array(seq_in[0:past])
def update_x(model, x, new_input, number_of_variables=1):
"""update x and make the next prediction y (auxiliary for forecast)
:param model: RNN model
:param x: input vector to RNN
:param new_input: value of the new input
:param number_of_variables: the number of variables that have to be updated (default 1)
:returns: updated x"""
y = (model.predict(x))[0][0:number_of_variables] # next prediction value
# update x
x = x[0][1:]
x = np.append(x, np.append(y, new_input)).reshape(1, x.shape[0]+1, x.shape[1])
return x, y
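update_x slides the network input one step forward: drop the oldest time step, then append the new prediction together with the new external input, restoring the (1, past, dim_in) shape. A runnable demo with a stub standing in for the trained network (ConstModel and update_x_demo are illustrative names introduced here):

```python
import numpy as np

class ConstModel:
    """Stub standing in for the trained Keras model: always predicts 0.5."""
    def predict(self, x):
        return np.array([[0.5]])

def update_x_demo(model, x, new_input):
    y = model.predict(x)[0][0:1]   # next predicted value(s)
    x = x[0][1:]                   # drop the oldest time step
    # re-append prediction + new external input, restoring (1, past, dim_in)
    x = np.append(x, np.append(y, new_input)).reshape(1, x.shape[0] + 1, x.shape[1])
    return x, y

x = np.zeros((1, 4, 2))            # (batch, past, dim_in)
x, y = update_x_demo(ConstModel(), x, new_input=1.0)
print(x.shape, x[0, -1].tolist())  # (1, 4, 2) [0.5, 1.0]
```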
def forecast(model, past, dim_in, stream_start, time_span, oscillator_input, number_of_variables=1):
"""forecasts the signal, using the RNN
:param model: RNN model
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
:param stream_start: the initial input to the network
:param time_span: the time span of the forecasting, in network steps
:param oscillator_input: the time series of the input being fed to the oscillator while forecasting (starting from the time of the first forecasted value)
:param number_of_variables: the number of variables that are being forecasted (default 1)
:returns: forecast of the time series
print("forecast")
s_for = []
x = np.array([stream_start])
for i in range(time_span):
x, y = update_x(model, x, oscillator_input[i], number_of_variables=number_of_variables)
s_for.append(y.tolist()) # save the values
return s_for
def period_measure(model, past, dim_in, forcast_start, constant_input_offset, thr=0.0, period_counts=100):
"""estimate the natural period from the RNN model, also useful for chaotic and quasi-periodic dynamics because of averaging (in units of dt*sampling)
:param model: RNN model
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
:param forcast_start: the starting point of the time series
:param constant_input_offset: the constant input to be fed to the network
:param thr: signal threshold (default 0.0)
:param period_counts: how many periods do we average over (default 100)
:returns: the natural period"""
print("period estimation")
x = np.array([forcast_start])
yh, y = 0, 0 # initial setting of signal and its previous value
# first warmup
for t in range(1000):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# then we get to a position just after a crossing
failsafe_time = 0
while((yh < thr and y > thr) == False):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# time update
failsafe_time = failsafe_time + 1
# if it runs for too long it might be that it does not oscillate
if(failsafe_time == 1000): # the choice 1000 is arbitrary
print("\tallert: it will probably never reach the threshold")
return
previous_crossing = (thr-yh)/(y-yh)*1 # time of crossing correction
# now set time to zero and look for following crossings
time = 0
avg_period = 0
for p in range(period_counts):
yh = thr+1 # break the condition
failsafe_time = 0
while((yh < thr and y > thr) == False):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# time update
time = time + 1
failsafe_time = failsafe_time + 1
# if it runs for too long it might be that it does not oscillate
if(failsafe_time == 1000): # the choice 1000 is arbitrary
print("\tallert: it will probably never reach the threshold")
return
crossing = time + (thr-yh)/(y-yh)*1 # time of crossing
avg_period = avg_period + crossing-previous_crossing # add to the period
previous_crossing = crossing # reset previous crossing
avg_period = avg_period/period_counts
return avg_period
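The `(thr-yh)/(y-yh)` terms above linearly interpolate the sub-step moment at which the signal crosses the threshold between two samples. Isolated (crossing_time is an illustrative helper, not part of the module):

```python
def crossing_time(t_prev, y_prev, y_curr, thr=0.0):
    """Linearly interpolate when the signal crossed thr between two samples;
    the same correction period_measure applies via (thr - yh) / (y - yh)."""
    return t_prev + (thr - y_prev) / (y_curr - y_prev)

print(crossing_time(10, -0.2, 0.6))  # ~10.25
```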
def PRC_measure(model, past, dim_in, forcast_start, constant_input_offset=0.0, warmup_time=1000, warmup_between_probes=100, stimulation=0.25, thr=0.0, period_counts=5, phase_repeats=20):
"""estimate the PRC from the RNN model (in units of sampling)
:param model: RNN model
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
:param forcast_start: the starting point of the time series
:param constant_input_offset: the constant input to be fed to the network (default 0.0)
:param warmup_time: the time for the system to relax to its steady state (default 1000)
:param warmup_between_probes: the time for the system to relax between diferent phase probes (default 100)
:param stimulation: how strong to stimulate the oscillator (default 0.25)
:param thr: signal threshold (default 0.0)
:param period_counts: how many periods do we average over (default 5)
:param phase_repeats: how many times to stimulate at each phase (default 20)
:returns: the PRC in list format"""
print("PRC estimation")
period = period_measure(model, past, dim_in, forcast_start, constant_input_offset, thr=thr)
PRC = [[2*pi*(i+1)/period for i in range(floor(period)-1)],[0 for i in range(floor(period)-1)]] # PRC list (i+1 because when i=0 the hit comes effectively between 0.5 and 1.5 depending on how it crosses the threshold)
x = np.array([forcast_start])
yh, y = 0, 0 # initial setting of signal and its previous value
# first warmup
for t in range(warmup_time):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
for ph in range(floor(period)-1):
print("\tphase = ", ph, "/", floor(period)-1-1)
# warmup between different phase probes
for t in range(warmup_between_probes):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# set phase shift to 0, then we will add to it to get an average
phase_shift = 0
for r in range(phase_repeats):
# then we get to a position just after a crossing
while((yh < thr and y > thr) == False):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
first_crossing = (thr-yh)/(y-yh)*1 # time of crossing correction
# now set time to zero and look for following crossings
time = 0
# wait ph steps...
for t in range(ph):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# time update
time = time + 1
# and then stimulate
yh = y # previous value
x, y = update_x(model, x, constant_input_offset+stimulation) # notice +stimulation
# time update
time = time + 1
# now run for some periods and then evaluate the phase shift
for p in range(period_counts):
yh = thr+1 # break the condition (just in case - generally should be already broken)
while((yh < thr and y > thr) == False):
yh = y # previous value
x, y = update_x(model, x, constant_input_offset) # update x,y
# time update
time = time + 1
crossing = time + (thr-yh)/(y-yh)*1 # time of crossing
# altered periods
altered_periods = crossing-first_crossing
# phase shift
phase_shift = phase_shift + 2*pi*(altered_periods-period_counts*period)/period
phase_shift = phase_shift/phase_repeats
PRC[1][ph] = phase_shift/stimulation
return PRC
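The phase shift above is read off as `2*pi*(altered_periods - period_counts*period)/period`. The same arithmetic in isolation (phase_shift is an illustrative helper, not part of the module):

```python
from math import pi

def phase_shift(crossing_times, natural_period):
    """Phase shift accumulated over the observed threshold crossings, as in
    PRC_measure: 2*pi * (observed duration - expected duration) / period."""
    observed = crossing_times[-1] - crossing_times[0]
    expected = (len(crossing_times) - 1) * natural_period
    return 2 * pi * (observed - expected) / natural_period
```

For example, a stimulus that stretches two periods of a T=10 oscillator to 21 time units yields a shift of 0.2*pi.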
def lyapunov_measure(model, past, dim_in, forcast_start, constant_input_offset=0.0, warmup_time=1000, delta=0.005, tau=10, unconsidered_trials=50, trials=500):
"""estimate the largest Lyapunov exponent from the RNN model (in units of 1/(sampling*dt) ),
this is achieved by starting with two close trajectories (reference one - x, and perturbed one x_p), evolving them and evaluating the deviation.
To assure we are measuring the maximal exponent the perturbed trajectory is every time renormalized
(the whole past is rescaled by delta*sqrt(1/past*sum_i^past (x_p(t-i)-x(t-i))^2) ),
thus allowing the maximal exponent to take over and there's no need for any embedding - pretty neat:)
:param model: RNN model
:param past: the number of considered steps in the past
:param dim_in: the dimensionality of the data fed to the network
"""
+ str(stars) + '\n')
else:
print 'We do not have complete QC data for this sample'
print temp[0] + '\t' + temp[1] + '\t' + temp[2] + '\t' + temp[3] + '\t' + temp[4] + '\t' + temp[5]
f.close()
g.close()
# Get the tissue type linked to each project
f = open('Supplementary_Table_2.tsv', 'r')
tissues = {}
line = f.next()
for line in f:
temp = line.split('\t')
if temp[1].strip() in tissues:
named = tissues[temp[1].strip()]
named.append(temp[0])
else:
named = [temp[0]]
tissues[temp[1].strip()] = named
f.close()
tissues_sorted = []
for key in tissues:
tissues_sorted.append(key)
tissues_sorted.sort()
### Third, density scatter plots for the normal and tumour samples, to compare how the two evenness-of-coverage measures compare
#% Calculate the point density for the normal samples
x = np.array(Med_Mean_size_norm)
y = np.array(fwhm_norm)
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
# Now the plot
fig, ax = plt.subplots()
ax.axvline(x=whiskers[0][1], color='k', linestyle='dashed', linewidth=2)
plt.text(.85,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axvline(x=whiskers[1][1], color='k', linestyle='dashed', linewidth=2)
plt.text(1.02,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(1.07,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axhline(y=0.205, color='k', linestyle='dashed', linewidth=2)
plt.text(0.71,0.17,'Passes for FWHM', color='green')
plt.text(0.71,0.215,'Fails for FWHM', color='red')
# ax.set_yscale('log')
# ax.set_xscale('log')
ax.set_xlim(.7,1.1)
ax.set_ylim(0,.8)
cax = ax.scatter(x, y, c=z, s=30, edgecolor='')
fig.colorbar(cax)
ax.set_xlabel('Median/Mean')
ax.set_ylabel('FWHM')
fig_name = 'Evenness_med-mean_fwhm_normal_scattterplot.pdf'
fig.savefig(fig_name)
plt.show()
plt.clf()
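The argsort step above is the standard trick for density-coloured scatter plots: reorder all three arrays by ascending density so the densest points are drawn last and end up on top. A tiny standalone demo with made-up densities (as gaussian_kde would produce):

```python
import numpy as np

# Densities z for each (x, y) point, e.g. from scipy.stats.gaussian_kde.
x = np.array([0.80, 0.95, 1.02, 0.91])
y = np.array([0.30, 0.15, 0.22, 0.18])
z = np.array([0.40, 1.30, 0.70, 1.10])

idx = z.argsort()                 # ascending density
x, y, z = x[idx], y[idx], z[idx]  # densest points last, so scatter() draws them on top
print(z.tolist())  # [0.4, 0.7, 1.1, 1.3]
```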
#% Calculate the point density for the tumour samples
x = np.array(Med_Mean_size_tumo)
y = np.array(fwhm_tumo)
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
# Plot a new school scatter plot
fig, ax = plt.subplots()
ax.axvline(x=whiskers[2][1], color='k', linestyle='dashed', linewidth=2)
plt.text(whiskers[2][1]+.008,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(whiskers[2][1]-.018,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axvline(x=whiskers[3][1], color='k', linestyle='dashed', linewidth=2)
plt.text(whiskers[3][1]-.018,0.7,'Passes for Med/Mean', color='green',rotation=90)
plt.text(whiskers[3][1]+.008,0.66, 'Fails for Med/Mean', color='red', rotation=90)
ax.axhline(y=0.34, color='k', linestyle='dashed', linewidth=2)
plt.text(0.71,0.35,'Fails for FWHM', color='red')
plt.text(0.71,0.3,'Passes for FWHM', color='green')
ax.set_xlim(.7,1.1)
ax.set_ylim(0,.8)
cax = ax.scatter(x, y, c=z, s=30, edgecolor='')
fig.colorbar(cax)
ax.set_xlabel('Median/Mean')
ax.set_ylabel('FWHM')
fig_name = 'Evenness_med-mean_fwhm_tumour_scattterplot.pdf'
fig.savefig(fig_name)
plt.show()
plt.clf()
### Fourth, these are individual plots of the QC data, showing what proportion passed and failed for individual projects. These figures did not make it to the final paper, but are kept here for completeness' sake.
qcs = ['Mean_norm', 'Mean_tumo', 'FWHM_norm', 'FWHM_tumo', 'CallPow', 'DiffChrom_norm', 'DiffChrom_tumo', 'BaseBias_norm', 'BaseBias_tumo']
for k,qc in enumerate([Mean_norm, Mean_tumo, FWHM_norm, FWHM_tumo, CallPow, DiffChrom_norm, DiffChrom_tumo, BaseBias_norm, BaseBias_tumo]):
faill = 0
passs = 0
for key in qc:
passs += qc[key]['pass']
faill += qc[key]['fail']
percent = (faill / float(passs + faill)) * 100
qc['Total'] = {'fail': faill, 'pass': passs}
print 'For ' + qcs[k] + ' we have ' + str(percent) + ' failing (total = ' + str(passs + faill) + ')'
labelled = []
tish = ['', 'Total', '']
organ = ['', 'Total', '']
passed = []
failed = []
total = []
for key in qc:
labelled.append(key)
labelled.sort()
for key in tissues_sorted:
c = True
for item in tissues[key]:
if item in labelled:
tish.append(item)
if c:
organ.append(key)
c = False
else:
organ.append(' ')
tish.append('')
organ.append('')
for key in tish:
if key == '':
passed.append(0)
failed.append(0)
total.append('')
else:
pass_temp = qc[key]['pass']
fail_temp = qc[key]['fail']
temp = float(pass_temp + fail_temp)
passed.append(pass_temp/temp * 100)
failed.append(fail_temp/temp * 100)
total.append(str(int(temp)))
N = len(tish)
ind = np.arange(N) # the x locations for the groups
width = 1 # the width of the bars: can also be len(x) sequence
p1 = plt.bar(ind, passed, width, color='blue')
p2 = plt.bar(ind, failed, width, color='red', bottom=passed)
plt.title(qcs[k])
locs, labels = plt.xticks(ind + width/2., (organ))
plt.setp(labels, rotation=90)
plt.tick_params(axis='both', which='major', labelsize=5)
plt.legend((p1[0], p2[0]), ('Pass', 'Fail'), bbox_to_anchor=(1.02, .55), fontsize='x-small')
plt.ylim(0,100)
plt.yticks(range(0, 101, 20), [str(x) + "%" for x in range(0, 101, 20)], fontsize=5)
for j,item in enumerate(ind+0.1):
plt.text(item,15, tish[j] +': '+ total[j], color='white', size=5, rotation=90, horizontalalignment='left')
fig_name = '' + qcs[k] + '_project_bias.pdf'
plt.savefig(fig_name)
plt.show()
plt.clf()
### Fifth, plots of the star ratings for each project, as well as a bar summarising the star ratings for all the normal-tumour sample pairs in PCAWG.
# Get the star rating in a usuable form to plot
one = []
onehalf = []
two = []
twohalf = []
three = []
threehalf = []
four = []
fourhalf = []
five = []
total = []
see_all = []
equal_add = True
for key in tish:
if key != '':
if key in starred:
temp = Counter(starred[key])
if equal_add:
see_all = temp
equal_add = False
else:
see_all = see_all + temp
if 1.0 in temp:
one.append((temp[1.0]/float(len(starred[key])))*100)
else:
one.append(0)
if 1.5 in temp:
onehalf.append((temp[1.5]/float(len(starred[key])))*100)
else:
onehalf.append(0)
if 2.0 in temp:
two.append((temp[2.0]/float(len(starred[key])))*100)
else:
two.append(0)
if 2.5 in temp:
twohalf.append((temp[2.5]/float(len(starred[key])))*100)
else:
twohalf.append(0)
if 3.0 in temp:
three.append((temp[3.0]/float(len(starred[key])))*100)
else:
three.append(0)
if 3.5 in temp:
threehalf.append((temp[3.5]/float(len(starred[key])))*100)
else:
threehalf.append(0)
if 4.0 in temp:
four.append((temp[4.0]/float(len(starred[key])))*100)
else:
four.append(0)
if 4.5 in temp:
fourhalf.append((temp[4.5]/float(len(starred[key])))*100)
else:
fourhalf.append(0)
if 5.0 in temp:
five.append((temp[5.0]/float(len(starred[key])))*100)
else:
five.append(0)
total.append(str(len(starred[key])))
else:
one.append(0)
onehalf.append(0)
two.append(0)
twohalf.append(0)
three.append(0)
threehalf.append(0)
four.append(0)
fourhalf.append(0)
five.append(0)
total.append('')
else:
one.append(0)
onehalf.append(0)
two.append(0)
twohalf.append(0)
three.append(0)
threehalf.append(0)
four.append(0)
fourhalf.append(0)
five.append(0)
total.append('')
vote_all = 0
for item in see_all:
vote_all += see_all[item]
one[1] = (see_all[1.0]/float(vote_all)) * 100
onehalf[1] = (see_all[1.5]/float(vote_all)) * 100
two[1] = (see_all[2.0]/float(vote_all)) * 100
twohalf[1] = (see_all[2.5]/float(vote_all)) * 100
three[1] = (see_all[3.0]/float(vote_all)) * 100
threehalf[1] = (see_all[3.5]/float(vote_all)) * 100
four[1] = (see_all[4.0]/float(vote_all)) * 100
fourhalf[1] = (see_all[4.5]/float(vote_all)) * 100
five[1] = (see_all[5.0]/float(vote_all)) * 100
total[1] = str(vote_all)
N = len(tish)
ind = np.arange(N) # the x locations for the groups
width = 1 # the width of the bars: can also be len(x) sequence
pq = plt.bar(ind, one, width, color ='gray')
pp = plt.bar(ind, onehalf, width, color ='red', bottom=one)
p0 = plt.bar(ind, two, width, color= 'blue', bottom =[one[h] + onehalf[h] for h in range(len(threehalf))])
p1 = plt.bar(ind, twohalf, width, color='brown', bottom=[one[h] + onehalf[h] + two[h] for h in range(len(threehalf))])
p2 = plt.bar(ind, three, width, color='purple', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] for h in range(len(threehalf))])
p3 = plt.bar(ind, threehalf, width, color='hotpink', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] for h in range(len(threehalf))])
p4 = plt.bar(ind, four, width, color='orange', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h]+ threehalf[h] for h in range(len(threehalf))])
p5 = plt.bar(ind, fourhalf, width, color='gold', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] + threehalf[h] + four[h] for h in range(len(threehalf))])
p6 = plt.bar(ind, five, width, color='green', bottom=[one[h] + onehalf[h] + two[h] + twohalf[h] + three[h] + threehalf[h] + four[h] + fourhalf[h] for h in range(len(threehalf))])
locs, labels = plt.xticks(ind + width/2., (organ))
plt.setp(labels, rotation=90)
plt.tick_params(axis='both', which='major', labelsize=8)
plt.legend((p6[0], p5[0], p4[0], p3[0], p2[0], p1[0], p0[0], pp[0], pq[0]), ('5', '4.5', '4', '3.5', '3', '2.5', '2', '1.5', '1'), bbox_to_anchor=(1, .7), fontsize='x-small')
plt.ylim(0,100)
plt.yticks(range(0, 101, 20), [str(x) + "%" for x in range(0, 101, 20)], fontsize=8)
for j,item in enumerate(ind+0.1):
plt.text(item,95, tish[j] +': '+ total[j], color='white', size=5, rotation=90, horizontalalignment='left')
plt.tight_layout()
fig_name = 'starred_project_bias.pdf'
plt.savefig(fig_name)
plt.show()
plt.clf()
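# The stacked-bar calls above recompute the `bottom` offsets with ever-longer
# list comprehensions. A minimal, self-contained sketch (not tied to the PCAWG
# data) shows the same offsets computed as a running cumulative sum:

```python
# Compute the `bottom` offset list for each layer of a stacked bar chart
# as a running cumulative sum, instead of re-summing all lower layers
# for every layer as the repeated list comprehensions do.

def stacked_bottoms(layers):
    """Given a list of layers (each a list of bar heights),
    return the `bottom` offsets to pass to plt.bar for each layer."""
    n = len(layers[0])
    bottoms = []
    running = [0.0] * n
    for layer in layers:
        bottoms.append(list(running))  # snapshot before adding this layer
        running = [r + h for r, h in zip(running, layer)]
    return bottoms

# Example with three layers of two bars each:
layers = [[10, 20], [30, 5], [60, 75]]
bottoms = stacked_bottoms(layers)
# bottoms[1] is [10.0, 20.0]: the first layer's heights.
```

Each `bottoms[i]` can then be passed as `bottom=` to the i-th `plt.bar` call.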
#% Now a star-rating plot aggregating all samples into a single set of bars
one = []
onehalf = []
two = []
twohalf = []
three = []
threehalf = []
four = []
fourhalf = []
five = []
total =[]
temp = Counter(all_dam)
if 1.0 in temp:
one.append((temp[1.0]/float(len(all_dam)))*100)
else:
one.append(0)
if 1.5 in temp:
onehalf.append((temp[1.5]/float(len(all_dam)))*100)
else:
onehalf.append(0)
if 2.0 in temp:
two.append((temp[2.0]/float(len(all_dam)))*100)
else:
two.append(0)
if 2.5 in temp:
twohalf.append((temp[2.5]/float(len(all_dam)))*100)
else:
twohalf.append(0)
if 3.0 in temp:
three.append((temp[3.0]/float(len(all_dam)))*100)
else:
three.append(0)
if 3.5 in temp:
threehalf.append((temp[3.5]/float(len(all_dam)))*100)
else:
threehalf.append(0)
if 4.0 in temp:
four.append((temp[4.0]/float(len(all_dam)))*100)
else:
four.append(0)
if 4.5 in temp:
        configurations = []
alreadyRunRandom = 0
RS_configurations_count = 0
arr = list(fast_addressing_of_data_array.values())
if len(arr) == 0:
absolute_configuration_index = 0
else:
            absolute_configuration_index = np.asarray(arr, dtype=int).max() + 1
# See if input space is big enough otherwise it doesn't make sense to draw number_of_RS samples.
if (self.get_space_size() - len(fast_addressing_of_data_array)) <= number_of_RS:
configurations_aux = self.get_space()
tmp_configurations = (
self.filter_already_run_and_fill_with_random_configurations(
fast_addressing_of_data_array, configurations_aux, 0
)
)
for conf_index in range(
len(tmp_configurations[self.get_input_parameters()[0]])
):
configuration = {}
for header in self.get_input_parameters():
configuration[header] = tmp_configurations[header][conf_index]
configurations.append(configuration)
else:
while RS_configurations_count != number_of_RS:
configuration = self.get_random_configuration(use_priors)
if self.isConfigurationAlreadyRun(
fast_addressing_of_data_array, configuration
):
alreadyRunRandom += 1
if alreadyRunRandom <= 1000000:
continue # pick another configuration
else:
print(
"\n ####\n Warning: reached maximum number of Random sampling that have been already run. \nThe Random sampling configuration selection will stop now. Is the search space very small?\n"
)
break # too many random samples failed, probably the space is very small
str_data = self.get_unique_hash_string_from_values(configuration)
fast_addressing_of_data_array[str_data] = absolute_configuration_index
absolute_configuration_index += 1
configurations.append(configuration)
RS_configurations_count += 1
return configurations
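    # The rejection loop above (draw a random configuration, hash it, skip it if
    # already run, and give up after too many rejections) can be sketched in a
    # self-contained form. The names below are illustrative, not the actual API:

```python
# Minimal sketch of random sampling without repetitions: draw random
# configurations, hash them, skip any already seen, and cap the total
# number of rejections so an (almost) exhausted space cannot loop forever.
import random

def sample_without_repetition(space, n_samples, already_run, max_rejects=1000000):
    """space: dict mapping parameter name -> list of possible values."""
    configurations = []
    rejects = 0
    while len(configurations) < n_samples:
        configuration = {p: random.choice(v) for p, v in space.items()}
        key = tuple(sorted(configuration.items()))  # hashable signature
        if key in already_run:
            rejects += 1
            if rejects > max_rejects:
                break  # the space is probably (almost) exhausted
            continue
        already_run.add(key)
        configurations.append(configuration)
    return configurations

space = {"x": [0, 1, 2], "y": ["a", "b"]}
seen = set()
confs = sample_without_repetition(space, 4, seen)
```

    # The real method also checks whether the remaining space is smaller than the
    # requested budget and, if so, enumerates it instead of sampling.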
def standard_latin_hypercube_sampling_configurations_without_repetitions(
self,
fast_addressing_of_data_array,
number_of_samples,
input_parameters_names_list,
):
"""
        Standard Latin hypercube sampling (standard LHS), as in the SANDIA report
        "Surrogate models for mixed discrete-continuous variables" (Swiler et al., 2012).
        m is defined in Eq. (2.2.9) of that paper.
m*n is number_of_samples => n = number_of_samples/m.
This procedure works also in the case of categorical parameters.
:param fast_addressing_of_data_array: configurations previously selected.
:param number_of_samples: the number of unique LHS samples needed.
:param input_parameters_names_list: a list containing the names of the input parameters.
            This acts as a filter: the method is applied only to the parameters listed in this variable.
:return: a set of configurations following the standard latin hypercube sampling algorithm.
"""
from pyDOE import lhs
tmp_configurations = []
m = 1 # m is the size of the Cartesian product of the categorical variables.
parameters_values_categorical = {}
for key, value in self.get_input_categorical_parameters_objects(
input_parameters_names_list
).items():
m = m * value.get_size() # Cartesian product size
parameters_values_categorical[key] = value.get_int_values()
# This shuffling is useful when we don't have enough budget to deal with the whole Cartesian product m.
# In this case we want to randomize the choice of which configuration is selected.
shuffle(parameters_values_categorical[key])
# Sample using the latin hypercube sampling algorithm. lhs returns values from the [0, 1] interval.
lhs_samples = lhs(
len(self.get_input_non_categorical_parameters(input_parameters_names_list)),
samples=number_of_samples,
)
# Scale values of non-categorical variables from [0, 1] to actual parameter values.
        # If a distribution is defined for the parameter in the json, it is taken into account.
X = []
for param_index, param_object in enumerate(
self.get_input_non_categorical_parameters_objects(
input_parameters_names_list
).values()
):
if param_object.prior == "distribution":
# This is the case of a distribution (distribution instead of a density) ordinal.
object_distribution = param_object.get_parameter_distribution()
object_values = param_object.get_values()
# Using the indices instead of directly object_values with the rv_discrete is a workaround.
# For some reason rv_discrete.ppf() returns float values while this is not always what is in object_
# values (it is integers sometimes). To not change the original type we use the trick of using indices.
object_indices = range(0, len(object_values), 1)
param_distribution = stats.rv_discrete(
values=(object_indices, object_distribution)
)
# distribution = stats.rv_discrete(values=(param_object.get_values(), param_object.get_parameter_distribution()))
aux = np.asarray(lhs_samples[:, param_index])
aux2 = param_distribution.ppf(aux)
aux2 = aux2.astype(int)
aux3 = np.asarray(object_values)[aux2]
X.append(list(aux3))
# X.append(list(distribution.ppf(lhs_samples[:, param_index]))) # The ppf here maps from the probability value pk in [0,1] to the xk defined in the distribution.
else:
a = param_object.densities_alphas[param_object.prior]
b = param_object.densities_betas[param_object.prior]
X.append(
param_object.from_range_0_1_to_parameter_value(
np.asarray(stats.beta(a, b).ppf(lhs_samples[:, param_index]))
)
)
# Filling up the sampled configurations with the non-categorical parameters first.
for i in range(len(X[0])):
configuration = {}
for j, param_name in enumerate(
self.get_input_non_categorical_parameters(input_parameters_names_list)
):
configuration[param_name] = X[j][i]
tmp_configurations.append(configuration)
# Dealing with categorical parameters
exploit_prior_information = True
# The algorithm that doesn't exploit the prior information is the one introduced by Swiler et al., 2012.
# They split the amount of samples equally for the categoricals.
if exploit_prior_information:
# In this part we exploit the prior information version of the lhs algorithm,
# we compute a joint distribution on the categorical parameters and then we use this information to split
# deterministically the samples.
# Note that this approach is deterministic and not random even if based on the prior probability distribution.
# Another approach would be to sample following the joint distribution and then having a probabilistic approach.
# However this is contrary in spirit to the LHS algorithm that tries to fill the space more evenly than
# random sampling.
# This new way of splitting the categoricals is a change with respect to (Swiler et al, 2012) because
# here we use the prior present in the json to sample.
            # When no prior is present, this algorithm is equivalent to Swiler et al., 2012.
# Compute the joint distribution.
joint_distribution = np.zeros(shape=(m, 2))
input_categorical_parameters_objects = [
param_object
for param_object in self.get_input_categorical_parameters_objects(
input_parameters_names_list
).values()
]
cartesian_product = [
cartesian_element
for cartesian_element in itertools.product(
*[
param_values
for param_values in parameters_values_categorical.values()
]
)
] # cartesian_element here is a tuple
for level in range(m):
joint = 1
tuple_cartesian_product = cartesian_product[level]
for i, tuple_i in enumerate(tuple_cartesian_product):
joint *= input_categorical_parameters_objects[
i
].get_parameter_distribution()[tuple_i]
joint_distribution[level][0] = level
joint_distribution[level][1] = joint
df_joint_distribution = pd.DataFrame(
joint_distribution, columns=["level", "joint"]
)
df_joint_distribution.sort_values(by=["joint"], ascending=[0], inplace=True)
counter = 0
# for param_index, cartesian_element in enumerate(cartesian_product):
for index_cartesian_product in range(len(cartesian_product)):
number_of_samples_per_level = int(
math.floor(
number_of_samples
* df_joint_distribution["joint"].iloc[index_cartesian_product]
)
)
for index_number_of_samples_per_level in range(
number_of_samples_per_level
):
for j, param in enumerate(parameters_values_categorical.keys()):
tmp_configurations[counter][param] = cartesian_product[
int(
df_joint_distribution["level"].iloc[
index_cartesian_product
]
)
][j]
counter += 1
if (
counter >= number_of_samples
): # Not enough sampling budget to continue on the whole Cartesian product of size m
break
if (
counter >= number_of_samples
): # Not enough sampling budget to continue on the whole Cartesian product of size m
break
            # This handles the remainder fill-up (loop tail) case, where n does not evenly cover all the samples requested.
if counter < number_of_samples:
for cartesian_element in itertools.product(
*[
param_values
for param_values in parameters_values_categorical.values()
]
):
for j, param in enumerate(parameters_values_categorical.keys()):
tmp_configurations[counter][param] = cartesian_element[j]
counter += 1
if counter >= number_of_samples:
break
else:
# Compute the amount of the split among the levels of the categorical variables.
# The max with 1 deals with the corner case where the number of samples is strictly less than the Cartesian product m.
# In that case the algo will create as many configurations as possible following the LHS algorithm but
# we won't be able to split the sampling according to the whole m.
n = max(1, math.floor(number_of_samples / m))
            # That is, the configurations are split across the levels of the categorical parameters as explained in (Swiler et al., 2012).
counter = 0
for cartesian_element in itertools.product(
*[
param_values
for param_values in parameters_values_categorical.values()
]
): # cartesian_element here is a tuple
for i in range(n):
for j, param in enumerate(parameters_values_categorical.keys()):
tmp_configurations[counter][param] = cartesian_element[j]
counter += 1
if (
counter >= number_of_samples
                ):  # Not enough sampling budget to continue on the whole Cartesian product of size m
break
if (
counter >= number_of_samples
            ):  # Not enough sampling budget to continue on the whole Cartesian product of size m
break
            # This handles the remainder fill-up (loop tail) case, where n does not evenly cover all the samples requested.
if counter < number_of_samples:
for cartesian_element in itertools.product(
*[
param_values
for param_values in parameters_values_categorical.values()
]
):
for j, param in enumerate(parameters_values_categorical.keys()):
tmp_configurations[counter][param] = cartesian_element[j]
counter += 1
if counter >= number_of_samples:
break
# Check that all the configurations are unique.
        # That is true in general (by definition of the LHS algorithm), but
        # since ordinal parameters compress a range of values into a single value, it no longer holds here.
configurations = []
absolute_configuration_index = len(fast_addressing_of_data_array)
duplicate_configurations = 0
for configuration in tmp_configurations:
str_data = self.get_unique_hash_string_from_values(configuration)
if self.isConfigurationAlreadyRun(
fast_addressing_of_data_array, configuration
):
# coding=utf-8
import os, re, sys, clr, json, math, logging, random, time
from itertools import combinations
os.chdir(os.path.dirname(__file__))
logging.basicConfig(filename='gui.log', filemode='w', encoding='utf-8', level=logging.DEBUG)
clr.AddReference('System.Drawing')
clr.AddReference('System.Windows.Forms')
from System import Drawing, Array, ComponentModel, Diagnostics, IO
from System.Drawing import Color
from System.Windows import Forms
import System.Object as object
import System.String as string
from System.Windows.Forms import DialogResult, OpenFileDialog ,SaveFileDialog, FolderBrowserDialog, MessageBox
#----------------------------------------------------------------------------
import ScriptEnv
import clr
clr.AddReference('Ansys.Ansoft.Edb')
clr.AddReference('Ansys.Ansoft.SimSetupData')
import Ansys.Ansoft.Edb as edb
import Ansys.Ansoft.Edb.Definition as edbd
ScriptEnv.Initialize("Ansoft.ElectronicsDesktop")
oDesktop.RestoreWindow()
oDesktop.ClearMessages("", "", 2)
oProject = oDesktop.GetActiveProject()
oDesign = oProject.GetActiveDesign()
oEditor = oDesign.GetActiveEditor()
oDefinitionManager = oProject.GetDefinitionManager()
oBondwireManager = oDefinitionManager.GetManager("Bondwire")
DB = edb.Database.Attach(int(oProject.GetEDBHandle()))
def changeJEDECType(bondwirenames, profile, jtype):
jvalue = {1: "Cadence APD/Allegro:JEDEC4Bondwire",
2: "Cadence APD/Allegro:JEDEC5Bondwire"}
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
] + bondwirenames,
[
"NAME:ChangedProps",
[
"NAME:Type",
"Value:=" , jvalue[jtype]
],
[
"NAME:Profile",
"Value:=" , "\"{}\"".format(profile)
]
]
]
])
def getExistingProfiles():
return oBondwireManager.GetNames()
def getCategory():
category = {}
for p in oBondwireManager.GetNames():
category[p] = []
for i in oEditor.FindObjects('type', 'bondwire'):
profile = oEditor.GetPropertyValue('BaseElementTab', i, 'Profile')[1:-1]
try:
category[profile] +=[i]
except:
category[profile] = [i]
return category
def getProfileInfo():
result = {i:(-1, '0', '0', '0') for i in getCategory()}
for i in oBondwireManager.GetNames():
data = oBondwireManager.GetData(i)
bondwire_type = data[2]
if bondwire_type not in [1, 2]:
continue
h = data[8][0][:-2]
a = data[10][0][:-3]
b = data[12][0][:-3]
result[i] = (bondwire_type, h, a, b)
return result
def removeProfile(names):
for name in names:
oBondwireManager.Remove(name, True, "", "Project")
def addProfile(name, profile_type, h="500", a="90", b="30"):
    # profile_type 1:Jedec4Bondwire, 2:Jedec5Bondwire
oBondwireManager.Add(
[
"NAME:{}".format(name),
"Type:=" , profile_type,
"ModifiedOn:=" , str(time.time()).split('.')[0],
"Library:=" , "",
"h:=" , [h+'um'],
"a:=" , [a+'deg'],
"b:=" , [b+'deg']
])
if profile_type == 1:
result = edbd.Jedec4BondwireDef.Create(DB, name, float(h)*1e-6)
elif profile_type == 2:
result = edbd.Jedec5BondwireDef.Create(DB, name, float(h)*1e-6, float(a), float(b))
setBondwireProfile(name, profile_type)
AddWarningMessage('{} is added!'.format(name))
return result
def setBondwireProfile(name, profile_type):
x = getCategory()
bondwires = x[name]
if bondwires:
changeJEDECType(bondwires, name, profile_type)
def editProfile(name, profile_type, h='500', a='90', b='30'):
    # profile_type 1:Jedec4Bondwire, 2:Jedec5Bondwire
a = '90' if a == '' else a
b = '30' if b == '' else b
if name not in getExistingProfiles():
addProfile(name, profile_type, h, a, b)
else:
oBondwireManager.Edit(name,
[
"NAME:{}".format(name),
"Type:=" , profile_type,
"ModifiedOn:=" , str(time.time()).split('.')[0],
"Library:=" , "",
"h:=" , [h+'um'],
"a:=" , [a+'deg'],
"b:=" , [b+'deg']
])
if profile_type == 1:
result = edbd.Jedec4BondwireDef.Create(DB, name, float(h)*1e-6)
elif profile_type == 2:
result = edbd.Jedec5BondwireDef.Create(DB, name, float(h)*1e-6, float(a), float(b))
setBondwireProfile(name, profile_type)
AddWarningMessage('{} is set!'.format(name))
return result
def isfloat(x):
    # Returns True only for inputs that parse to a strictly positive float.
    try:
        return float(x) > 0
    except:
        return False
def getPW():
result = {}
for i in oEditor.FindObjects('type', 'bondwire'):
pw = oEditor.GetPropertyValue('BaseElementTab', i, 'PathWidth')
if pw in ['0fm']:
continue
else:
result[i] = pw
return result
def changeBondwirePathWidth(bondwires, pathwidth = '0fm'):
if len(bondwires) == 0:
return None
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
] + bondwires,
[
"NAME:ChangedProps",
[
"NAME:PathWidth",
"Value:=" , pathwidth
]
]
]
])
def change(bondwire_name, direction, distance, point="Pt1"):
if bondwire_name not in oEditor.FindObjects('Type', 'bondwire'):
return
pt0 = oEditor.GetPropertyValue("BaseElementTab", bondwire_name, 'pt0')
pt1 = oEditor.GetPropertyValue("BaseElementTab", bondwire_name, 'pt1')
x0, y0 = map(float, pt0.strip().split(','))
x1, y1 = map(float, pt1.strip().split(','))
length = math.sqrt((x1-x0)**2 + (y1-y0)**2)
dx = distance*(x1-x0)/(length)
dy = distance*(y1-y0)/(length)
dvector = { "Forward": (dx, dy),
"Backward": (-dx, -dy),
"Left":(-dy, dx),
"Right":(dy, -dx),
}
du, dv = dvector[direction]
if point == "Pt0":
x, y = x0 + du, y0 + dv
else:
x, y = x1 + du, y1 + dv
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bondwire_name
],
[
"NAME:ChangedProps",
[
"NAME:{}".format(point),
"X:=" , "{}mm".format(x),
"Y:=" , "{}mm".format(y)
]
]
]
])
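# The direction math in change() above (a unit vector from Pt0 to Pt1 scaled by
# `distance`, plus its left/right perpendiculars) can be checked standalone.
# A minimal sketch, detached from the oEditor API:

```python
# Direction vectors for moving a bondwire endpoint: the scaled unit
# vector along the wire, its opposite, and its +/-90 degree rotations.
import math

def direction_vectors(x0, y0, x1, y1, distance):
    length = math.hypot(x1 - x0, y1 - y0)
    dx = distance * (x1 - x0) / length
    dy = distance * (y1 - y0) / length
    return {
        "Forward": (dx, dy),
        "Backward": (-dx, -dy),
        "Left": (-dy, dx),   # rotate (dx, dy) by +90 degrees
        "Right": (dy, -dx),  # rotate (dx, dy) by -90 degrees
    }

# A wire along +x: forward points along +x, left points along +y.
v = direction_vectors(0, 0, 10, 0, 2)
```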
def reverse(bw_name):
unit = oEditor.GetActiveUnits()
start_layer = oEditor.GetPropertyValue("BaseElementTab", bw_name, 'Start Layer')
end_layer = oEditor.GetPropertyValue("BaseElementTab", bw_name, 'End Layer')
pt0 = oEditor.GetPropertyValue("BaseElementTab", bw_name, 'Pt0').split(',')
pt1 = oEditor.GetPropertyValue("BaseElementTab", bw_name, 'Pt1').split(',')
try:
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bw_name
],
[
"NAME:ChangedProps",
[
"NAME:Start Layer",
"Value:=" , end_layer
]
]
]
])
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bw_name
],
[
"NAME:ChangedProps",
[
"NAME:End Layer",
"Value:=" , start_layer
]
]
]
])
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bw_name
],
[
"NAME:ChangedProps",
[
"NAME:Pt0",
"X:=" , "{}{}".format(pt1[0], unit),
"Y:=" , "{}{}".format(pt1[1], unit)
]
]
]
])
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bw_name
],
[
"NAME:ChangedProps",
[
"NAME:Pt1",
"X:=" , "{}{}".format(pt0[0], unit),
"Y:=" , "{}{}".format(pt0[1], unit)
]
]
]
])
AddWarningMessage('{} is switched!'.format(bw_name))
except:
AddWarningMessage('{} failed in switching!'.format(bw_name))
def alignBondwireCenter(bondwire, point='Pt0'):
try:
x, y = oEditor.GetPropertyValue('BaseElementTab', bondwire, point).split(',')
x, y = float(x), float(y)
if point == 'Pt0':
layer = oEditor.GetPropertyValue('BaseElementTab', bondwire, 'Start Layer')
else:
layer = oEditor.GetPropertyValue('BaseElementTab', bondwire, 'End Layer')
objs = oEditor.FindObjectsByPoint(oEditor.Point().Set(x*1e-3, y*1e-3), layer)
for i in objs:
if oEditor.GetPropertyValue('BaseElementTab', i, 'Type') in ['Via', 'Pin']:
u, v = oEditor.GetPropertyValue('BaseElementTab', i, 'Location').split(',')
break
else:
pass
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bondwire,
],
[
"NAME:ChangedProps",
[
"NAME:{}".format(point),
"X:=" , "{}mm".format(u),
"Y:=" , "{}mm".format(v),
]
]
]
])
AddWarningMessage('{} is aligned to {} center!'.format(bondwire, i))
except:
logging.exception('error')
#Separate Code-------------------------------------------------------
def ccw(A,B,C):
Ax, Ay = A
Bx, By = B
Cx, Cy = C
return (Cy-Ay) * (Bx-Ax) > (By-Ay) * (Cx-Ax)
def intersect(A,B,C,D):
return ccw(A,C,D) != ccw(B,C,D) and ccw(A,B,C) != ccw(A,B,D)
def checkintersection(segments):
for (A, B), (C, D) in combinations(segments, 2):
if intersect(A, B, C, D):
return True
return False
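# ccw/intersect above implement the classic orientation-based segment
# intersection predicate (it ignores degenerate collinear-touching cases).
# A self-contained check, with the predicates restated so it runs on its own:

```python
# Orientation test: True if A, B, C make a counter-clockwise turn.
def ccw(A, B, C):
    (Ax, Ay), (Bx, By), (Cx, Cy) = A, B, C
    return (Cy - Ay) * (Bx - Ax) > (By - Ay) * (Cx - Ax)

# Segments AB and CD properly cross iff C and D lie on opposite sides
# of AB, and A and B lie on opposite sides of CD.
def intersect(A, B, C, D):
    return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, C) != ccw(A, B, D)

crossing = intersect((0, 0), (2, 2), (0, 2), (2, 0))  # X shape: crosses
parallel = intersect((0, 0), (2, 0), (0, 1), (2, 1))  # parallel: never crosses
```

separate() relies on this predicate to reject any bondwire assignment whose
segments cross, reshuffling until a crossing-free pairing is found.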
def getPkgGrid(pin_name):
layer = oEditor.GetPropertyValue('BaseElementTab', pin_name, 'Start Layer')
x0, y0 = oEditor.GetPropertyValue('BaseElementTab', pin_name, 'Location').split(',')
x0, y0 = float(x0), float(y0)
grid = []
for i in range(-10, 11):
for j in range(-10, 11):
x = (x0 + 0.04 * i) * 1e-3
y = (y0 + 0.04 * j) * 1e-3
pt = oEditor.Point()
pt.Set(x,y)
if pin_name in oEditor.FindObjectsByPoint(pt, layer):
grid.append((x, y))
return grid
def getDieGrid(pin_name):
layer = oEditor.GetPropertyValue('BaseElementTab', pin_name, 'Start Layer')
grid = {}
for i in oEditor.FindObjects('Type', 'bondwire'):
p1 = oEditor.Point()
x, y = oEditor.GetPropertyValue('BaseElementTab', i, 'Pt1').split(',')
pt = p1.Set(float(x)*1e-3 ,float(y)*1e-3)
obj = oEditor.FindObjectsByPoint(p1, layer)
if pin_name in oEditor.FindObjectsByPoint(pt, layer):
x, y = oEditor.GetPropertyValue('BaseElementTab', i, 'Pt0').split(',')
x, y = float(x)*1e-3+random.uniform(0, 1)*1e-9 ,float(y)*1e-3+random.uniform(0, 1)*1e-9
grid[(x, y)] = i
return grid
def separate(pcb_pad):
pkg = getPkgGrid(pcb_pad)
AddWarningMessage('Pkg Locations: {}'.format(len(pkg)))
die = getDieGrid(pcb_pad)
AddWarningMessage('die Locations: {}'.format(len(die)))
pair = {}
N = 0
while(True):
N+=1
if N > 100000:
AddWarningMessage('Failed')
segments = []
break
segments = []
random.shuffle(pkg)
for (pt0, pt1) in zip(die.keys(), pkg):
segments.append((pt0, pt1))
        if not checkintersection(segments):
AddWarningMessage('Successful')
break
for pt0, pt1 in segments:
pair[die[pt0]] = pt1
AddWarningMessage(str(pair))
try:
for bw_name in pair:
x, y = pair[bw_name]
oEditor.ChangeProperty(
[
"NAME:AllTabs",
[
"NAME:BaseElementTab",
[
"NAME:PropServers",
bw_name
],
[
"NAME:ChangedProps",
[
"NAME:Pt1",
"X:=" , str(x),
"Y:=" , str(y)
]
]
]
])
except:
pass
#----------------------------------------------------------------------------
class MyForm(Forms.Form):
def __init__(self):
self.tabPage1 = Forms.TabPage()
self.ok_bt = Forms.Button()
self.label2 = Forms.Label()
self.modelname_lb = Forms.Label()
self.groupBox1 = Forms.GroupBox()
self.label8 = Forms.Label()
self.label9 = Forms.Label()
self.label10 = Forms.Label()
self.label7 = Forms.Label()
self.label6 = Forms.Label()
self.label5 = Forms.Label()
self.apply_bt = Forms.Button()
self.beta_tb = Forms.TextBox()
self.alpha_tb = Forms.TextBox()
self.h1_tb = Forms.TextBox()
self.groupBox2 = Forms.GroupBox()
self.create_bt = Forms.Button()
self.name_tb = Forms.TextBox()
self.delete_bt = Forms.Button()
self.type_cb = Forms.ComboBox()
self.model_lb = Forms.ListBox()
self.switch_tab = Forms.TabControl()
self.tabPage2 = Forms.TabPage()
self.groupBox5 = Forms.GroupBox()
self.label13 = Forms.Label()
self.label12 = Forms.Label()
self.label11 = Forms.Label()
self.separate_bt = Forms.Button()
self.align_bt = Forms.Button()
self.reverse_bt = Forms.Button()
self.groupBox4 = Forms.GroupBox()
self.right_bt = Forms.Button()
self.backward_bt = Forms.Button()
self.left_bt = Forms.Button()
self.forward_bt = Forms.Button()
self.groupBox3 = Forms.GroupBox()
self.unit_lb = Forms.Label()
self.label3 = Forms.Label()
self.step_tb = Forms.TextBox()
self.pt1_rb = Forms.RadioButton()
self.pt0_rb = Forms.RadioButton()
self.tabPage1.SuspendLayout()
self.groupBox1.SuspendLayout()
self.groupBox2.SuspendLayout()
self.switch_tab.SuspendLayout()
self.tabPage2.SuspendLayout()
self.groupBox5.SuspendLayout()
self.groupBox4.SuspendLayout()
self.groupBox3.SuspendLayout()
self.SuspendLayout()
# tabPage1
self.tabPage1.BackColor = Drawing.Color.Transparent
self.tabPage1.Controls.Add(self.ok_bt)
self.tabPage1.Controls.Add(self.label2)
self.tabPage1.Controls.Add(self.modelname_lb)
self.tabPage1.Controls.Add(self.groupBox1)
self.tabPage1.Controls.Add(self.groupBox2)
self.tabPage1.Controls.Add(self.delete_bt)
self.tabPage1.Controls.Add(self.type_cb)
self.tabPage1.Controls.Add(self.model_lb)
self.tabPage1.Font = Drawing.Font("Arial", 9.75, Drawing.FontStyle.Regular, Drawing.GraphicsUnit.Point)
self.tabPage1.Location = Drawing.Point(4, 25)
self.tabPage1.Name = "tabPage1"
self.tabPage1.Padding = Forms.Padding(3)
self.tabPage1.Size = Drawing.Size(417, 506)
self.tabPage1.TabIndex = 0
self.tabPage1.Text = "Profile Edit"
# ok_bt
self.ok_bt.Anchor = (((Forms.AnchorStyles.Bottom | Forms.AnchorStyles.Right)))
self.ok_bt.Font = Drawing.Font("Arial", 12, Drawing.FontStyle.Regular, Drawing.GraphicsUnit.Point)
self.ok_bt.Location = Drawing.Point(304, 458)
self.ok_bt.Name = "ok_bt"
self.ok_bt.Size = Drawing.Size(100, 40)
self.ok_bt.TabIndex = 14
self.ok_bt.Text = "Interact"
self.ok_bt.UseVisualStyleBackColor = True
self.ok_bt.Click += self.ok_bt_Click
# label2
self.label2.AutoSize = True
self.label2.Font = Drawing.Font("Arial", 9.75, Drawing.FontStyle.Regular, Drawing.GraphicsUnit.Point)
self.label2.Location = Drawing.Point(222, 8)
self.label2.Name = "label2"
self.label2.Size = Drawing.Size(47, 16)
self.label2.TabIndex = 10
self.label2.Text = "Profile:"
# modelname_lb
self.modelname_lb.AutoSize = True
self.modelname_lb.Font = Drawing.Font("Arial", 9.75, Drawing.FontStyle.Regular, Drawing.GraphicsUnit.Point)
self.modelname_lb.Location = Drawing.Point(12, 8)
self.modelname_lb.Name = "modelname_lb"
self.modelname_lb.Size = Drawing.Size(84, 16)
self.modelname_lb.TabIndex = 7
self.modelname_lb.Text = "Model Name:"
# groupBox1
self.groupBox1.Anchor | |
from copy import deepcopy
# INITIALIZE INDIVIDUALS AND HOUSEHOLDS #
class Person:
"""
Classe pour définir une personne.
Ceci définit une personne et son profil en termes de revenus et d'actifs.
Parameters
----------
age: int
âge de l'individu
male: bool
prend la valeur True si l'individu est un homme
earn: float
revenu de travail
rpp: float
revenu de régime complémentaire de retraite (RCR)
cpp: float
revenu du Régime de rentes du Québec (RRQ) ou du Régime de pensions du Canada (RPC)
net_cap_gains: float
gains (ou pertes si valeur négative) nets en capital réalisés dans l'année
prev_cap_losses: float
pertes en capital nettes d'autres années (avec facteur d'inclusion partielle déjà appliqué)
cap_gains_exempt: float
exonération des gains en capital admissibles demandée (sur gains en capital nets); soumis à un plafond à vie
othtax: float
autre revenu imposable
othntax: float
autre revenu non-imposable
inc_rrsp: float
revenu de REER (retrait de fonds)
self_earn: float
revenu de travail autonome
div_elig: float
montant réel des dividendes déterminés (canadiens)
div_other_can: float
montant réel des dividendes ordinaires (canadiens)
con_rrsp: float
cotisation REER
con_non_rrsp: float
cotisation autre que REER (p.ex. à un CELI ou à des comptes non enregistrés)
con_rpp: float
cotisation à un régime de pension agréé (RPA)
union_dues: float
cotisations syndicales, professionnelles ou autres
donation: float
don de bienfaisance et autres dons
gift: float
dons de biens culturels et écosensibles
years_can: int
nombre d'années vécues au Canada lorsque la Pension de la sécurité de la vieillesse (PSV) est demandée
disabled: boolean
statut d'invalidité
widow: boolean
statut de veuf/veuve
med_exp: float
montant des dépenses en santé admissibles
ndays_chcare_k1: float
nombre de jours de garde du premier enfant
ndays_chcare_k2: float
nombre de jours de garde du second enfant
asset: float
valeur marchande des actifs (illiquides)
oas_years_post: int
nombre d'années de report pour la PSV (après 65 ans)
months_cerb_cesb: int
nombre de mois pour lesquels la PCU ou la PCUE est demandée
student: boolean
statut d'étudiant ou fin des études après décembre 2019 (pour PCUE)
essential_worker: boolean
True si travailleur essentiel (au Québec seulement)
hours_month: float
nombre d'heures travaillées par mois
prev_inc_work: float
revenu du travail de l'année précédente
dep_senior: boolean
True si la personne aînée n'est pas autonome
home_support_cost: float
coût du maintien à domicile
"""
def __init__(self, age=50, male=True, earn=0, rpp=0, cpp=0,
net_cap_gains=0, prev_cap_losses=0, cap_gains_exempt=0,
othtax=0, othntax=0, inc_rrsp=0, self_earn=0, div_elig=0,
div_other_can=0, con_rrsp=0, con_non_rrsp=0, con_rpp=0,
union_dues=0, donation=0, gift=0, years_can=None,
disabled=False, widow=False, med_exp=0, ndays_chcare_k1=0,
ndays_chcare_k2=0, asset=0, oas_years_post=0,
months_cerb_cesb=0, student=False, essential_worker=False,
hours_month=None, prev_inc_work=None,
dep_senior=False, home_support_cost=0):
self.age = age
self.male = male
self.attach_inc_work_month(earn, self_earn)
self.attach_prev_work_inc(prev_inc_work)
self.inc_rpp = rpp
self.inc_cpp = cpp
self.net_cap_gains = net_cap_gains
self.prev_cap_losses = prev_cap_losses
self.cap_gains_exempt = cap_gains_exempt # for example for small businesses
self.inc_othtax = othtax
self.inc_othntax = othntax
self.div_elig = div_elig
self.div_other_can = div_other_can
self.inc_rrsp = inc_rrsp
self.con_rrsp = con_rrsp
self.con_non_rrsp = con_non_rrsp
self.con_rpp = con_rpp
self.union_dues = union_dues
self.donation = donation
self.gift = gift
self.years_can = age if years_can is None else years_can # number of years in Canada (max = 40)
self.disabled = disabled
self.widow = widow
self.med_exp = med_exp
self.ndays_chcare_k1 = ndays_chcare_k1 # should be the kid with the most days,
self.ndays_chcare_k2 = ndays_chcare_k2 # second kid with most days, in same order for both spouses
self.asset = asset
self.oas_years_post = oas_years_post
self.compute_months_cerb_cesb(months_cerb_cesb, student)
self.student = student
self.essential_worker = essential_worker
self.hours_month = hours_month # could enter list of hours for ei
self.dep_senior = dep_senior
self.home_support_cost = home_support_cost
self.pension_split = 0
self.pension_split_qc = 0
self.pension_deduction = 0
self.pension_deduction_qc = 0
self.inc_oas = 0
self.inc_gis = 0
self.inc_ei = 0
self.inc_social_ass = 0
self.allow_couple = 0
self.allow_surv = 0
self.inc_cerb = 0
self.inc_cesb = 0
self.inc_iprew = 0
self.covid = None
self.after_tax_inc = None
self.disp_inc = None
self.fed_return = None
self.prov_return = None
self.payroll = None
def attach_prev_work_inc(self, prev_work_inc):
"""
Function that attaches last year's work income when it is available,
or approximates it with the current year's work income otherwise.
Parameters
----------
prev_work_inc: float
work income in the previous year
"""
if prev_work_inc is None:
self.prev_inc_work = self.inc_earn + self.inc_self_earn
else:
self.prev_inc_work = prev_work_inc
def attach_inc_work_month(self, earn, self_earn):
"""
Function that converts annual work income into monthly income and vice versa.
It takes annual or monthly work income (a list with 12 elements)
and attaches both the annual and the monthly work income to the person.
Parameters
----------
earn: float or list
employment income
self_earn: float or list
self-employment income
"""
if isinstance(earn, list):
earn_month = earn
self.inc_earn = sum(earn)
else:
earn_month = [earn / 12] * 12
self.inc_earn = earn
if isinstance(self_earn, list):
self_earn_month = self_earn
self.inc_self_earn = sum(self_earn)
else:
self_earn_month = [self_earn / 12] * 12
self.inc_self_earn = self_earn
self.inc_work_month = [x + y for x, y in zip(earn_month, self_earn_month)]
@property
def inc_work(self):
"""
Function that returns work income.
Includes self-employment income and COVID-19 benefits (CERB, CESB, IPREW).
Returns
-------
float
Work income.
"""
return self.inc_earn + self.inc_self_earn \
+ self.inc_cerb + self.inc_cesb + self.inc_iprew
@property
def inc_non_work(self):
"""
Function that returns total income from sources other than work.
Returns
-------
float
Income from non-work sources.
"""
return (self.inc_rpp + self.inc_cpp + self.inc_othtax
+ self.inc_othntax + self.inc_rrsp + self.inc_oas
+ self.inc_gis + self.allow_couple + self.allow_surv
+ self.inc_ei + self.net_cap_gains
+ self.div_elig + self.div_other_can)
@property
def inc_tot(self):
"""
Function that returns total income.
This total income uses the actual amounts of dividends from Canadian
corporations (not the grossed-up taxable amounts).
Returns
-------
float
Total income.
"""
return self.inc_work + self.inc_non_work
def compute_months_cerb_cesb(self, months_cerb_cesb, student):
"""
Function that sets the number of months of CERB or CESB according to the
number of months claimed and the person's student status.
Parameters
----------
months_cerb_cesb: int
number of months for which the benefit is claimed
student: boolean
True if the person is a student (or was one in December 2019)
"""
self.months_cesb = self.months_cerb = 0
if months_cerb_cesb > 0:
if student:
self.months_cesb = months_cerb_cesb
else:
self.months_cerb = months_cerb_cesb # assuming that last year's work income > 5000
def copy(self):
"""
Function that stores a copy of the person's attributes.
"""
self.temp = deepcopy(self.__dict__)
def reset(self):
"""
Function that uses the stored copy of the person's attributes
to reset the person instance.
"""
l_attr = [k for k in self.__dict__ if k != 'temp']
for k in l_attr:
delattr(self, k)
for attr, val in self.temp.items():
setattr(self, attr, val)
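As a standalone illustration (not part of the original module), the annual/monthly conversion performed by `attach_inc_work_month` can be sketched as follows; `split_income` is a hypothetical helper that mirrors the scalar/list branching in that method:

```python
# Standalone sketch of the conversion in attach_inc_work_month:
# a scalar is spread evenly over 12 months, while a 12-element
# list is summed into an annual total.
def split_income(earn):
    """Return (annual, monthly) from either a scalar or a 12-month list."""
    if isinstance(earn, list):
        return sum(earn), earn
    return earn, [earn / 12] * 12

annual, monthly = split_income(24000)
# annual == 24000, each month == 2000.0
```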
class Dependent:
"""
Class to define a dependent.
This defines a dependent and their profile.
Parameters
----------
age: int
age of the individual
disa: boolean
disability status
child_care: float
amount of actual childcare expenses
school: float
amount of tuition expenses
home_care: float
amount of home-care assistance
med_exp: float
amount of eligible health expenses
"""
def __init__(self, age, disa=None, child_care=0, school=None,
home_care=None, med_exp=0):
self.age = age
self.disa = disa
self.child_care = child_care
self.school = school
self.home_care = home_care
self.med_exp = med_exp
class Hhold:
"""
Class to define a household.
This defines a household and its profile.
Parameters
----------
first: Person
Person instance of the first member of the couple
second: Person
Person instance of the second member of the couple, if applicable
prov: str
province (qc = Quebec)
n_adults_in_hh: int
number of adults (18 and over) in the household
"""
def __init__(self, first, second=None, prov='qc', n_adults_in_hh=None):
self.sp = [first]
self.couple = bool(second)
if self.couple:
self.sp.append(second)
self.prov = prov
self.dep = []
self.nkids_0_6 = 0
self.nkids_7_16 = 0
self.nkids_0_17 = 0
self.nkids_0_18 = 0
self.n_adults_in_hh = self.adjust_n_adults(n_adults_in_hh)
self.compute_max_split()
self.assess_elig_split()
def adjust_n_adults(self, n_adults_in_hh):
"""
Function that computes the number of adults in the household when it
is not provided.
Parameters
----------
n_adults_in_hh: float
number of adults in the household if provided, None otherwise
Returns
-------
float
Number of adults in the household.
"""
if n_adults_in_hh:
return n_adults_in_hh
else:
adult_deps = len([s for s in | |
import demistomock as demisto
from CommonServerPython import *
from CommonServerUserPython import *
import json
import requests
import dateparser
from datetime import datetime
import traceback
from typing import Any, Dict, Tuple, List, Optional, cast
from copy import copy
import hmac
import hashlib
"""Darktrace Integration for Cortex XSOAR (aka Demisto)"""
# Disable insecure warnings
requests.packages.urllib3.disable_warnings()
"""*****CONSTANTS*****"""
DATE_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
MAX_INCIDENTS_TO_FETCH = 50
MIN_SCORE_TO_FETCH = 0
# For API call mapping
PARAMS_DICTIONARY = {
'did': 'did',
'data_type': 'datatype',
'external_domain': 'externaldomain',
'full_device_details': 'fulldevicedetails',
'destination_did': 'oid',
'show_all_graph_data': 'showallgraphdata',
'num_similar_devices': 'similardevices',
'breach_id': 'pbid',
'host_name': 'hostname',
'order_by': 'orderBy',
'max_results': 'count'
}
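The mapping above translates XSOAR-style argument names into the query-string keys the Darktrace API expects. As a hypothetical illustration (the helper below is not part of this integration), it can be applied like this:

```python
# Hypothetical helper showing how a PARAMS_DICTIONARY-style mapping can
# turn XSOAR argument names into Darktrace API query parameters.
PARAMS = {
    'did': 'did',
    'breach_id': 'pbid',
    'max_results': 'count',
}

def to_query_string(args):
    """Map known argument names through PARAMS and build a query string."""
    pairs = [f"{PARAMS[k]}={v}" for k, v in args.items() if k in PARAMS]
    return '?' + '&'.join(pairs) if pairs else ''

# to_query_string({'breach_id': '12', 'max_results': '5'}) -> '?pbid=12&count=5'
```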
"""*****CLIENT CLASS*****
Wraps all the code that interacts with the Darktrace API."""
class Client(BaseClient):
"""Client class to interact with the Darktrace API
This Client implements API calls, and does not contain any Demisto logic.
Should only do requests and return data.
It inherits from BaseClient defined in CommonServer Python.
Most calls use _http_request() that handles proxy, SSL verification, etc.
"""
def get_modelbreach(self, pbid):
"""Searches for a single Darktrace model breach alerts using '/modelbreaches?pbid=<pbid>'
:type pbid: ``str``
:param pbid: Model breach ID of the model breach to get
:return: list containing the found Darktrace model breach as a Dict
:rtype: ``List[Dict[str, Any]]``
"""
request = f"/modelbreaches?pbid={pbid}"
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers
)
def get_modelbreach_comments(self, pbid):
"""Searches for comments on a modelbreach using '/modelbreaches/<pbid>/comments'
:type pbid: ``str``
:param pbid: Model breach ID of the model breach to get
:return: list containing the comments on the model breach as Dicts
:rtype: ``List[Dict[str, Any]]``
"""
request = "/modelbreaches/" + pbid + "/comments"
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers
)
def acknowledge_breach(self, pbid):
"""Acknowledges a modelbreach using '/modelbreaches/<pbid>/acknowledge?acknowledge=true'
:type pbid: ``str``
:param pbid: Model breach ID of the model breach to get
:return: response to the acknowledge request as a Dict
:rtype: ``Dict[str, Any]``
"""
request = "/modelbreaches/" + pbid + "/acknowledge?acknowledge=true"
http_headers = get_headers(self._auth, request)
return self._http_request(
method='POST',
url_suffix=request,
headers=http_headers,
data={"acknowledge": "true"}
)
def unacknowledge_breach(self, pbid):
"""Unacknowledges a modelbreach using '/modelbreaches/<pbid>/unacknowledge?unacknowledge=true'
:type pbid: ``str``
:param pbid: Model breach ID of the model breach to get
:return: response to the unacknowledge request as a Dict
:rtype: ``Dict[str, Any]``
"""
request = "/modelbreaches/" + pbid + "/unacknowledge?unacknowledge=true"
http_headers = get_headers(self._auth, request)
return self._http_request(
method='POST',
url_suffix=request,
headers=http_headers,
data={"unacknowledge": "true"}
)
def list_similar_devices(self, did, max_results):
"""Returns a list of similar devices using '/similardevices'
:type did: ``str``
:param did: Device ID of device
:type max_results: ``str``
:param max_results: Max # of results to return
:return: list containing the similar devices as Dicts
:rtype: ``List[Dict[str, Any]]``
"""
request = "/similardevices?did=" + did + "&count=" + max_results
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers,
)
def get_external_endpoint_details(self, endpoint_type, endpoint_value, additional_info, devices, score):
"""Returns information from Darktrace about external endpoints using '/endpointdetails'
:type endpoint_type: ``str``
:param endpoint_type: Type of endpoint, IP or hostname
:type endpoint_value: ``str``
:param endpoint_value: Value of IP or hostname
:type additional_info: ``str``
:param additional_info: Whether to include additional info
:type devices: ``str``
:param devices: Whether to include additional devices that connected to the endpoint
:type score: ``str``
:param score: Whether to include external endpoint score
:return: list containing the external endpoint details as Dicts
:rtype: ``List[Dict[str, Any]]``
"""
request = f"/endpointdetails?{endpoint_type}={endpoint_value}&additionalinfo={additional_info}" \
f"&devices={devices}&score={score}"
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers,
)
def get_device_connection_info(self, did, data_type, external_domain, destination_did,
show_all_graph_data, full_device_details, num_similar_devices):
"""Returns information from Darktrace about graphical connection data for devices using '/deviceinfo'
:type did: ``str``
:param did: Darktrace Device ID
:type data_type: ``str``
:param data_type: Whether to return data for either connections (connections), data size out (sizeout) or
data size in (sizein)
:type external_domain: ``str``
:param external_domain: Whether to restrict external data to a particular domain name.
:type destination_did: ``str``
:param destination_did: Darktrace Device DID of destination device to restrict data to.
:type show_all_graph_data: ``str``
:param show_all_graph_data: Whether to return an entry for all time intervals
:type full_device_details: ``str``
:param full_device_details: Whether to return the full device detail objects
:type num_similar_devices: ``str``
:param num_similar_devices: Num similar devices to include
:return: list containing the connection info as a Dict
:rtype: ``List[Dict[str, Any]]``
"""
query_dict = copy(locals())
query_dict.pop('self')
query_string = create_query_from_dict(query_dict)
request = "/deviceinfo" + query_string
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers,
)
def get_device_identity_info(self, max_results, order_by, order, query):
"""Returns information from Darktrace about identifying data for devices using '/devicesearch'
:type max_results: ``str``
:param max_results: Maximum number of devices to return
:type order_by: ``str``
:param order_by: Field to order the results by
:type order: ``str``
:param order: Sort order, ascending or descending
:type query: ``str``
:param query: Search query used to filter devices
:return: list containing the device info as a Dict
:rtype: ``List[Dict[str, Any]]``
"""
query_dict = copy(locals())
query_dict.pop('self')
query_string = create_query_from_dict(query_dict)
request = "/devicesearch" + query_string
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers,
)
def get_entity_details(self, max_results, offset, query_list):
"""Returns information from Darktrace about entities using '/details'
:type max_results: ``int``
:param max_results: Maximum number of results to return
:type offset: ``int``
:param offset: Offset index to start returning queries from.
:type query_list: ``list``
:param query_list: list of query parameters
:return: list containing the device info as a Dict
:rtype: ``List[Dict[str, Any]]``
"""
query_string = create_query_from_list(query_list)
request = '/details' + query_string
http_headers = get_headers(self._auth, request)
res = self._http_request(
method='GET',
url_suffix=request,
headers=http_headers
)
if not isinstance(res, list):
raise Exception(f'Error getting results:\n {res}')
if offset > len(res):
raise Exception(f'Offset argument: {offset}, is greater than the number of results: {len(res)}')
truncated_response = res[offset:offset + max_results]
return truncated_response, res
def search_modelbreaches(self, min_score: float,
start_time: Optional[int]) -> List[Dict[str, Any]]:
"""Searches for Darktrace alerts using the '/modelbreaches' API endpoint
:type min_score: ``float``
:param min_score: min score of the alert to search for. Range [0, 1].
:type start_time: ``Optional[int]``
:param start_time: start timestamp (epoch in seconds) for the alert search
:return: list containing the found Darktrace model breaches as dicts
:rtype: ``List[Dict[str, Any]]``
"""
request = '/modelbreaches'
request = request + '?minscore=' + str(min_score)
request = request + '&starttime=' + str(start_time)
http_headers = get_headers(self._auth, request)
return self._http_request(
method='GET',
url_suffix=request,
headers=http_headers
)
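Every request above is signed via `get_headers()`, which is not defined in this excerpt. The sketch below shows one plausible shape for such token-based signing, consistent with the `hmac`/`hashlib` imports at the top of the file; the exact message layout and header names are assumptions, not the integration's confirmed implementation:

```python
# Hedged sketch of HMAC request signing in the style suggested by the
# hmac/hashlib imports. The message layout (path, public token and date
# joined by newlines) and the header names are assumptions.
import hmac
import hashlib
from datetime import datetime, timezone

def sign_request(public_token, private_token, request_path):
    date = datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%S')
    message = '\n'.join([request_path, public_token, date])
    signature = hmac.new(private_token.encode('ascii'),
                         message.encode('ascii'),
                         hashlib.sha1).hexdigest()
    return {'DTAPI-Token': public_token,
            'DTAPI-Date': date,
            'DTAPI-Signature': signature}
```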
"""*****HELPER FUNCTIONS****"""
def arg_to_timestamp(arg: Any, arg_name: str, required: bool = False) -> Optional[int]:
"""Converts an XSOAR argument to a timestamp (seconds from epoch)
This function is used to quickly validate an argument provided to XSOAR
via ``demisto.args()`` into an ``int`` containing a timestamp (seconds
since epoch). It will throw a ValueError if the input is invalid.
If the input is None, it will throw a ValueError if required is ``True``,
or return ``None`` if required is ``False``.
:type arg: ``Any``
:param arg: argument to convert
:type arg_name: ``str``
:param arg_name: argument name
:type required: ``bool``
:param required:
throws exception if ``True`` and argument provided is None
:return:
returns an ``int`` containing a timestamp (seconds from epoch) if conversion works
returns ``None`` if arg is ``None`` and required is set to ``False``
otherwise throws an Exception
:rtype: ``Optional[int]``
"""
if arg is None:
if required is True:
raise ValueError(f'Missing "{arg_name}"')
return None
if isinstance(arg, str) and arg.isdigit():
# timestamp is a str containing digits - we just convert it to int
return int(arg)
if isinstance(arg, str):
# we use dateparser to handle strings either in ISO8601 format, or
# relative time stamps.
# For example: format 2019-10-23T00:00:00 or "3 days", etc
date = dateparser.parse(arg, settings={'TIMEZONE': 'UTC'})
if date is None:
# if date is None, dateparser failed to parse it
raise ValueError(f'Invalid date: {arg_name}')
return int(date.timestamp())
if isinstance(arg, (int, float)):
# Convert to int if the input is a float
return int(arg)
raise ValueError(f'Invalid date: "{arg_name}"')
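The branching in `arg_to_timestamp` can be sketched in a dependency-free form; `to_timestamp` below is a hypothetical reduction that omits the `dateparser`-backed string parsing and keeps only the digit-string and numeric branches:

```python
# Minimal sketch of the conversion branches in arg_to_timestamp,
# without the dateparser dependency.
def to_timestamp(arg):
    if arg is None:
        return None
    if isinstance(arg, str) and arg.isdigit():
        return int(arg)          # digit string -> epoch seconds
    if isinstance(arg, (int, float)):
        return int(arg)          # numeric -> truncate to int
    raise ValueError(f'Invalid date: "{arg}"')
```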
def arg_to_int(arg: Any, arg_name: str, required: bool = False) -> Optional[int]:
"""Converts an XSOAR argument to a Python int
This function is used to quickly validate an argument provided to XSOAR
via ``demisto.args()`` into an ``int`` type. It will throw a ValueError
if the input is invalid. If the input is None, it will throw | |
* mu.cost(5.23997785185 + 9872.27408296480 * x)
B0 += 0.00000000142 * mu.cost(3.02798835989 + 3511.28529731900 * x)
B0 += 0.00000000165 * mu.cost(2.53171951288 + 16276.46394262300 * x)
B0 += 0.00000000153 * mu.cost(6.14783670557 + 13362.51701710200 * x)
B0 += 0.00000000119 * mu.cost(4.15694365082 + 3760.09707057500 * x)
B0 += 0.00000000120 * mu.cost(0.64287725481 + 4459.36821880260 * x)
B0 += 0.00000000130 * mu.cost(4.95002309460 + 13553.89797291080 * x)
B0 += 0.00000000120 * mu.cost(0.17087854222 + 8671.96987044060 * x)
B0 += 0.00000000112 * mu.cost(0.16822264326 + 135.06508003540 * x)
B0 += 0.00000000137 * mu.cost(3.34809361979 + 3341.04230982650 * x)
B0 += 0.00000000125 * mu.cost(1.32195559043 + 1349.86740965880 * x)
B0 += 0.00000000111 * mu.cost(3.14151030451 + 13524.91634293140 * x)
B0 += 0.00000000119 * mu.cost(5.95361348050 + 12295.95422960920 * x)
B0 += 0.00000000131 * mu.cost(5.09769375731 + 14158.74771361560 * x)
B0 += 0.00000000141 * mu.cost(1.37128440708 + 3169.93955608060 * x)
B0 += 0.00000000112 * mu.cost(3.35831868034 + 5989.06725217280 * x)
B0 += 0.00000000104 * mu.cost(5.00696041032 + 13119.72110282519 * x)
B0 += 0.00000000110 * mu.cost(5.23317664736 + 1375.77379984580 * x)
B0 += 0.00000000105 * mu.cost(2.72692368303 + 1162.47470440780 * x)
B0 += 0.00000000104 * mu.cost(1.73769165705 + 2221.85663459700 * x)
B0 += 0.00000000137 * mu.cost(1.04576950390 + 3340.18254357310 * x)
B0 += 0.00000000106 * mu.cost(6.13415161313 + 162.46663613220 * x)
B0 += 0.00000000119 * mu.cost(2.63312561442 + 7321.12213971360 * x)
B0 += 0.00000000105 * mu.cost(3.09551802365 + 20618.01935853360 * x)
B0 += 0.00000000099 * mu.cost(4.25515697974 + 23539.70738633280 * x)
B0 += 0.00000000108 * mu.cost(1.01854506729 + 3265.83082813250 * x)
B0 += 0.00000000119 * mu.cost(4.07277528003 + 10184.30391623160 * x)
B0 += 0.00000000096 * mu.cost(1.81122023425 + 10001.06188460700 * x)
B0 += 0.00000000093 * mu.cost(3.58905885066 + 5099.26550511660 * x)
B0 += 0.00000000095 * mu.cost(4.94756054764 + 3981.49003408200 * x)
B0 += 0.00000000094 * mu.cost(5.37493368020 + 13355.33615979840 * x)
B0 += 0.00000000095 * mu.cost(0.13037485775 + 15508.61512327440 * x)
B0 += 0.00000000103 * mu.cost(0.43484130196 + 1861.74585263540 * x)
B0 += 0.00000000090 * mu.cost(3.76370412628 + 22324.90505670940 * x)
B0 += 0.00000000091 * mu.cost(3.95041101283 + 10042.61267559180 * x)
B0 += 0.00000000106 * mu.cost(4.30186500383 + 640.87760738220 * x)
B0 += 0.00000000109 * mu.cost(6.18873749839 + 1478.86657406440 * x)
B0 += 0.00000000088 * mu.cost(1.79608901332 + 6247.51311552280 * x)
B0 += 0.00000000102 * mu.cost(5.58754073056 + 2766.26762836500 * x)
B0 += 0.00000000110 * mu.cost(0.94707767481 + 3274.12501778540 * x)
B0 += 0.00000000084 * mu.cost(4.45487801845 + 6696.47732458460 * x)
B0 += 0.00000000085 * mu.cost(2.74791518135 + 3407.09983561420 * x)
B0 += 0.00000000087 * mu.cost(4.51145821088 + 220.41264243880 * x)
B0 += 0.00000000101 * mu.cost(5.94930983227 + 8425.65083781480 * x)
B0 += 0.00000000082 * mu.cost(0.01837230371 + 9499.25986200560 * x)
B0 += 0.00000000080 * mu.cost(0.42550989980 + 18052.92954315780 * x)
B0 += 0.00000000083 * mu.cost(2.96589752213 + 6652.77566593180 * x)
B0 += 0.00000000080 * mu.cost(4.61446168762 + 3914.95722503460 * x)
B0 += 0.00000000079 * mu.cost(1.50228636499 + 2111.65031337760 * x)
B0 += 0.00000000089 * mu.cost(3.52977975496 + 9485.03276800400 * x)
B0 += 0.00000000086 * mu.cost(0.41976545794 + 956.28915597060 * x)
B0 += 0.00000000088 * mu.cost(5.46013317934 + 16460.33352952499 * x)
B0 += 0.00000000091 * mu.cost(2.09965252231 + 949.17560896980 * x)
B0 += 0.00000000104 * mu.cost(1.72206104768 + 3296.89351439480 * x)
B0 += 0.00000000103 * mu.cost(1.25691413032 + 3384.33133900480 * x)
B0 += 0.00000000084 * mu.cost(5.78647729498 + 5518.75014899180 * x)
B0 += 0.00000000079 * mu.cost(1.79313426804 + 38.13303563780 * x)
B0 += 0.00000000073 * mu.cost(0.10667695992 + 29822.78323632420 * x)
B0 += 0.00000000087 * mu.cost(2.11654357529 + 3450.81874791920 * x)
B0 += 0.00000000072 * mu.cost(3.89476829327 + 9380.95967271720 * x)
B0 += 0.00000000075 * mu.cost(2.59340305340 + 1964.83862685400 * x)
B0 += 0.00000000098 * mu.cost(4.01577665825 + 6843.69148953180 * x)
B0 += 0.00000000074 * mu.cost(5.32032289064 + 11766.26326451460 * x)
B0 += 0.00000000068 * mu.cost(0.04775525953 + 2125.87740737920 * x)
B0 += 0.00000000069 * mu.cost(6.07427052412 + 26482.17080962440 * x)
B0 += 0.00000000069 * mu.cost(2.05018999200 + 29424.63423291600 * x)
B0 += 0.00000000084 * mu.cost(0.16960920719 + 263.08392337280 * x)
B0 += 0.00000000068 * mu.cost(5.03013252197 + 9070.11887384880 * x)
B0 += 0.00000000076 * mu.cost(2.00296087293 + 224.34479570190 * x)
B0 += 0.00000000078 * mu.cost(2.17362706851 + 30220.93223973240 * x)
B0 += 0.00000000066 * mu.cost(3.85497672006 + 19406.67828817460 * x)
B0 += 0.00000000066 * mu.cost(5.70059718737 + 33561.54466643220 * x)
B0 += 0.00000000067 * mu.cost(0.16600936321 + 22743.40937951640 * x)
B0 += 0.00000000065 * mu.cost(4.65423392949 + 2807.39834325620 * x)
B0 += 0.00000000069 * mu.cost(3.34387224268 + 11670.28403729680 * x)
B0 += 0.00000000087 * mu.cost(4.97838021880 + 1118.75579210280 * x)
B0 += 0.00000000063 * mu.cost(0.18907106180 + 30065.51184029820 * x)
B0 += 0.00000000064 * mu.cost(4.61909647015 + 9886.77220006400 * x)
B0 += 0.00000000073 * mu.cost(0.93706647938 + 20735.83216142559 * x)
B0 += 0.00000000060 * mu.cost(5.83757395809 + 8646.06348025360 * x)
B0 += 0.00000000062 * mu.cost(4.81389895867 + 20199.09495963300 * x)
B0 += 0.00000000059 * mu.cost(5.00150762621 + 6414.61781167780 * x)
B0 += 0.00000000068 * mu.cost(3.84252763135 + 6571.01853218020 * x)
B0 += 0.00000000062 * mu.cost(2.81689634717 + 6944.30877677240 * x)
B0 += 0.00000000065 * mu.cost(4.49078808776 + 632.78373931320 * x)
B0 += 0.00000000058 * mu.cost(5.64889513615 + 9945.57120882380 * x)
B0 += 0.00000000070 * mu.cost(2.51605694403 + 9638.94074787620 * x)
B0 += 0.00000000057 * mu.cost(3.28105791201 + 206.18554843720 * x)
B0 += 0.00000000057 * mu.cost(2.97448265957 + 21795.21409161479 * x)
B0 += 0.00000000056 * mu.cost(2.23565630779 + 20995.39296644940 * x)
B0 += 0.00000000057 * mu.cost(1.88614831237 + 18451.07854656599 * x)
B0 += 0.00000000071 * mu.cost(4.82445647307 + 8542.97070603500 * x)
B0 += 0.00000000061 * mu.cost(3.65945073900 + 14421.83163698840 * x)
B0 += 0.00000000056 * mu.cost(3.13789031275 + 8799.98871377800 * x)
B0 += 0.00000000057 * mu.cost(4.89927831599 + 9602.35263622420 * x)
B0 += 0.00000000065 * mu.cost(3.37109873211 + 11610.91017538320 * x)
B0 += 0.00000000067 * mu.cost(1.92945007459 + 21265.52312652020 * x)
B0 += 0.00000000055 * mu.cost(1.95164531764 + 9588.12554222260 * x)
B0 += 0.00000000057 * mu.cost(2.82240075154 + 10124.93005431800 * x)
B0 += 0.00000000057 * mu.cost(6.10407356832 + 19800.94595622480 * x)
B0 += 0.00000000055 * mu.cost(5.20976473824 + 3237.51965248120 * x)
B0 += 0.00000000057 * mu.cost(4.12235760406 + 10028.95082710020 * x)
B0 += 0.00000000055 * mu.cost(1.41700952855 + 15906.76412668260 * x)
B0 += 0.00000000053 * mu.cost(2.16328741039 + 6418.14093002680 * x)
B0 += 0.00000000060 * mu.cost(2.64683840328 + 10018.24685144760 * x)
B0 += 0.00000000068 * mu.cost(5.36539876845 + 1228.96211332220 * x)
B0 += 0.00000000051 * mu.cost(5.73824213507 + 6048.44111408640 * x)
B0 += 0.00000000053 * mu.cost(0.31937174553 + 12721.57209941700 * x)
B0 += 0.00000000051 * mu.cost(0.06312524105 + 20206.14119633100 * x)
B0 += 0.00000000049 * mu.cost(4.53401402385 + 6675.70192909220 * x)
B0 += 0.00000000051 * mu.cost(1.15475560534 + 10156.90236013480 * x)
B0 += 0.00000000064 * mu.cost(4.56332268770 + 16703.07938715119 * x)
B0 += 0.00000000060 * mu.cost(3.61007443614 + 9468.26787725700 * x)
B0 += 0.00000000059 * mu.cost(3.08413561767 + 10025.42770875120 * x)
B0 += 0.00000000064 * mu.cost(2.53229538141 + 16703.04487984680 * x)
B0 += 0.00000000056 * mu.cost(3.31988072467 + 6518.75821726740 * x)
B0 += 0.00000000047 * mu.cost(1.44559165677 + 6643.09181776180 * x)
B0 += 0.00000000050 * mu.cost(1.92342238827 + 11614.43329373220 * x)
B0 += 0.00000000047 * mu.cost(4.03794177027 + 23958.63178523340 * x)
B0 += 0.00000000046 * mu.cost(3.70927352724 + 8859.36257569160 * x)
B0 += 0.00000000060 * mu.cost(2.55506470511 + 11780.49035851620 * x)
B0 += 0.00000000047 * mu.cost(1.69256878711 + 6660.86953400080 * x)
B0 += 0.00000000044 * mu.cost(6.09481217162 + 6460.81221096080 * x)
B0 += 0.00000000044 * mu.cost(2.63040622140 + 13936.79450513400 * x)
B0 += 0.00000000053 * mu.cost(0.77878945764 + 16865.52876963120 * x)
B0 += 0.00000000049 * mu.cost(1.83368544550 + 17654.78053974960 * x)
B0 += 0.00000000048 * mu.cost(0.52828042378 + 6686.74777770700 * x)
B0 += 0.00000000042 * mu.cost(4.30347553493 + 9065.54812412880 * x)
B0 += 0.00000000042 * mu.cost(5.71964550673 + 7203.80227149340 * x)
B0 += 0.00000000041 * mu.cost(0.98427208931 + 20426.57109242200 * x)
B0 += 0.00000000051 * mu.cost(3.54335413699 + 20597.24396304120 * x)
B0 += 0.00000000041 * mu.cost(0.21219617682 + 7314.00859271280 * x)
B0 += 0.00000000038 * mu.cost(2.53074981011 + 13207.02930736500 * x)
B0 += 0.00000000039 * mu.cost(5.15577369902 + 6670.58818804980 * x)
B0 += 0.00000000051 * mu.cost(3.25271478667 + 7799.98064550240 * x)
B0 += 0.00000000049 * mu.cost(0.77060706107 + 17101.21113690720 * x)
B0 += 0.00000000038 * mu.cost(6.06684699984 + 9389.05354078620 * x)
B0 += 0.00000000043 * mu.cost(0.51983815091 + 16489.76303806100 * x)
B0 += 0.00000000036 * mu.cost(0.84102576439 + 23937.85638974100 * x)
B1: float = 0
B1 += 0.00350068845 * mu.cost(5.36847836211 + 3340.61242669980 * x)
B1 -= 0.00014116030
B1 += 0.00009670755 * mu.cost(5.47877786506 + 6681.22485339960 * x)
B1 += 0.00001471918 * mu.cost(3.20205766795 + 10021.83728009940 | |
count_interior:
count_interior += 1
else:
count_interior=0
indice_parametros += 1
count_exterior += 1
comandos_establecidos = comandos_sin_establecer
# If at least one validated drawing command has been invoked...
if len(self.comandos_tikz_validados)-1 >= 0:
# If the invoked commands are to be added to the "ejecutar" key of the "animarPytikz" or "guardarPytikz" command, they are added together with the custom command's name...
if self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][0] == "animarPytikz" or self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][0] == "guardarPytikz":
comandos={self.comando_tikz: comandos_establecidos[self.comando_tikz]}
if not "ejecutar" in self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1][1]:
self.funcion_de_comando[1]["ejecutar"]=[comandos]
# REPLACE EMPTY VALUES OF THE ANIMARPYTIKZ COMMAND
self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1]=self.funcion_de_comando
else:
# ADD MORE NESTED VALUES TO THE ANIMARPYTIKZ COMMAND
self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1][1]["ejecutar"].append(comandos)
# Otherwise, just append...
else:
for comando_establecido in comandos_establecidos[self.comando_tikz]:
self.comandos_tikz_validados.append(comando_establecido)
# If there are none...
else:
for comando_establecido in comandos_establecidos[self.comando_tikz]:
self.comandos_tikz_validados.append(comando_establecido)
else:
self.mensajes_de_error.append("Error en la linea "+str(self.linea_de_codigo)+": La cantidad de valores a colocar "+str(len(valores_a_establecer))+" es diferente a la cantidad de parametros del comando lo cual son "+str(cantidad_de_parametros))
def __parametros_array(self, codigo:str)->None:
r"""1. Recorre los parametros anidados en un [] de un comando. Se ordena y se agrega a la variable "self.parametros_comando".
2. En el caso del comando \foreach, se hacen dos cosas mas:
- Extraer las variables del comando, ejemplo si hay dos variables asi "\r" "\i", se extraen y se almacenan en una variable asi ["r","i"], y se guardan en la variable "self.parametros_comando".
- Extraer los "each" del comando, ejemplo si hay "\foreach \r in {3/A,2/B,1/C}", se extraen y se almacenan en una variable asi ["3/A","2/B","1/C"], y se guardan en la variable "self.parametros_comando".
Los comandos que tienen este tipo de parametro son:
- \newcommand{\ocentro}[1][1]{ NOTA: Solo se toma en cuenta el primer Array.
- \foreach \p\r [count=\i] in {1/1} {
- \comando_de_dibujo[estilo_con_parametros global={red}{blue}{1.5pt}];
- \comando_personalizado[300pt]{310pt}{black}"""
if self.comando_tikz == "foreach":
# Extract variables
variables=[]
for var in codigo.split("\\"):
if var and not re.search("foreach",var):
if re.search(" ",var):
var=var[0:re.search(" ", var).start()]
variables.append(var.strip())
self.parametros_comando[1]["variables"]=variables
# Extract the "each" values
patron_inicio_objeto=r"^ *\\foreach.*\[?.*\]? in {"
patron_final_objeto=r"}"
if(re.search(patron_inicio_objeto, codigo) and re.search(patron_final_objeto, codigo)):
parametros_objeto=codigo[
re.search(patron_inicio_objeto, codigo).end():
re.search(patron_final_objeto, codigo).start()
].split(",")
self.parametros_comando[0].append(parametros_objeto)
# Extract the array values
patron_inicio_array=r"^ *\\foreach.*\["
patron_final_array=r"\]"
if(re.search(patron_inicio_array, codigo) and re.search(patron_final_array, codigo)):
parametros=codigo[
re.search(patron_inicio_array, codigo).end():
re.search(patron_final_array, codigo).end()-1
].replace("\\","").split(",")
else:
parametros=""
else:
parametros=codigo[slice(codigo.find("[")+1, codigo.find("]"))].split(",")
for parametro in parametros:
# Case 1. [estilo_con_parametros global={red}{blue}{1.5pt}]
# Case 2. ['count=i']
if(re.search("=",parametro)):
clave_parametro=parametro[slice(0, parametro.find("="))]
valor_parametro=parametro[slice(parametro.find("=")+1, len(parametro))]
self.parametros_comando[1][clave_parametro]=valor_parametro
else:
# Case 2. \newcommand{\ocentro}[1][1]
if self.comando_tikz == "newcommand":
self.funcion_de_comando[0].append(parametro)
# Case 3. \lineaVertical[300pt]{310pt}{black}
else:
# In case objects like []{} are created after the []
self.parametros_comando[0].append(parametro)
self.__parametros_objeto(codigo)
def __extraer_validar_parametros_a_definir(self)->List[any]:
r"""Se extrae los parametros a definir desde la variable "self.comandos_de_usuario" del valor de la llave de "self.comando_tikz". Y se guarda la cantidad de parametros asi como un Array de los parametros con el identificador (#), de los comandos anidados de un \newcommand.
Retorna:
- List[cantidad_de_parametros(int), parametros_sin_establecer(List)]"""
# Extract the parameters to define...
# [0] => Style parameters, [1] => Position parameters...
parametros_sin_establecer=[[], []]
for i in range(len(self.comandos_de_usuario[self.comando_tikz])):
estilos_parametrizados=self.comandos_de_usuario[self.comando_tikz][i][1][1]
estilos_sin_definir=self.tratamiento_parametros.extraerParametrosSinDefinir(estilos_parametrizados=estilos_parametrizados, comando_invocado=True)
posiciones_parametrizados=self.comandos_de_usuario[self.comando_tikz][i][2]
posiciones_sin_definir=self.tratamiento_parametros.extraerParametrosSinDefinir(posiciones_parametrizados, comando_invocado=True)
if len(estilos_sin_definir):
parametros_sin_establecer[0].append(estilos_sin_definir)
if len(posiciones_parametrizados):
parametros_sin_establecer[1].append(posiciones_sin_definir)
# Check that the number of parameters matches the number of values to set...
len_param=[]
num_superior=0
for i, arr in enumerate(parametros_sin_establecer):
cantidad_de_parametros=[] # [[#1],[#1,#2]]
for parametro in arr: # [#1,#1]
cantidad_de_parametros.append(list(set(parametro)))
for parametro in cantidad_de_parametros:
for param in parametro: # ["#1,#2"]
param=int(param.replace("#", ""))
len_param.append(param)
len_param=list(set(len_param))
if num_superior < len(len_param):
num_superior=len(len_param)
cantidad_de_parametros = num_superior
return [cantidad_de_parametros, parametros_sin_establecer]
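The counting step above deduplicates repeated `#n` placeholders with a `set` before measuring how many distinct parameters a command declares. That step can be sketched in isolation; `count_placeholders` is a hypothetical helper, not part of this class:

```python
import re

def count_placeholders(parametros):
    """Count distinct LaTeX-style #n placeholders (hypothetical helper).

    Mirrors the dedup step above: collect every "#n" token,
    then count the distinct indices with a set.
    """
    indices = set()
    for parametro in parametros:
        for token in re.findall(r"#\d+", parametro):
            indices.add(int(token.lstrip("#")))
    return len(indices)

# "#1" appears twice but counts once; "#2" adds a second index.
print(count_placeholders(["#1", "#1,#2"]))  # 2
```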
def __validador_posiciones(self, codigo:str)->None:
r"""1. Se extrae los valores de la tupla () del codigo, para luego validar si son valores correctos, dependiendo del tipo de valor, entre los valores estan:
- Angulo inicial (int), Angulo final (int), Radio (str).
- Posicion X (str) y Posicion Y (str)
- Radio (str)
1.1. Tambien se valida si existe "cycle" en el codigo.
2. Si se pasa todas las validaciones, los valores se agregan a la variable "self.posiciones", en el caso de la validacion del "cycle;", se agrega es la palabra "cycle". """
if re.search(r"[\(\)]",codigo):
codigo_copia=codigo
# First pull the contents out of the ()
while True:
if re.search(r"\((.+?)\)",codigo_copia):
posicion_normal=codigo_copia[slice(codigo_copia.find("(")+1, codigo_copia.find(")"))].split(",")
posicion_por_angulo=codigo_copia[slice(codigo_copia.find("(")+1, codigo_copia.find(")"))].split(":")
elif re.search(r"[\(\)]",codigo_copia):
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": Expected ()")
posicion_normal=[]
posicion_por_angulo=[]
else:
posicion_normal=[]
posicion_por_angulo=[]
# arc (0:30:3mm) -> parameters as (start angle, end angle, radius)
if(len(posicion_por_angulo) == 3):
posicion_valido=True
# Check that the values inside the () are valid
for indice,posicion in enumerate(posicion_por_angulo,0):
# ["",""]
if not posicion:
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": There must be 3 coordinates: the start angle, the end angle, and the radius")
posicion_valido=False
break
# [1.1,2.2]
else:
if indice < 2:
try:
float(posicion)
except ValueError:
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": The value must be of type FLOAT or INTEGER")
posicion_valido=False
break
if posicion_valido:
self.posiciones.append(list(map(str, posicion_por_angulo)))
# Drop the already-validated parameter from the code.
codigo_copia=codigo_copia[slice(codigo_copia.find(")")+1, len(codigo_copia))]
else:
self.posiciones=[]
break
# There are only 2 positions, X and Y.
elif(len(posicion_normal) == 2):
posicion_valido=True
# Check that the values inside the () are valid
for posicion in posicion_normal:
# ["",""]
if not posicion:
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": There must be 2 coordinates, the X value and the Y value")
posicion_valido=False
break
if posicion_valido:
self.posiciones.append(list(map(str, posicion_normal)))
# Drop the already-validated parameter from the code.
codigo_copia=codigo_copia[slice(codigo_copia.find(")")+1, len(codigo_copia))]
else:
self.posiciones=[]
break
# There is only 1 parameter...
# \Circle
elif(len(posicion_normal) == 1 and "circle" in self.figuras):
self.posiciones.append(posicion_normal[0]) # "0.8cm"
# Drop the already-validated parameter from the code.
codigo_copia=codigo_copia[slice(codigo_copia.find(")")+1, len(codigo_copia))]
else:
for posicion in posicion_normal:
# If there are more or fewer than 2 positions...
if(posicion != ""):
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": Expected 2 coordinates with valid values.")
break
else:
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": The coordinate value must be of type FLOAT or INTEGER")
break
break
if re.search(r"cycle;",codigo) and len(self.posiciones) > 0:
# This cycle must only appear at the very end...
self.posiciones.append("cycle")
def __agregar_comando(self, funcion_de_comando:Dict[str,Dict[str,Union[str,list]]])->None:
r"""1. Se valida si la cantidad_de_parametros coincide con la cantidad de parametros_sin_definir.
2. Si todo es valido, se agrega todos los comandos creados por el usuario en la variable "self.comandos_de_usuario".
Con el objetivo de que en la verificación de comandos, se pueda validar si el comando invocado fue creado por el usuario, y poder así continuar con las demás validaciones, siguiendo el procedimiento como cualquier otro comando."""
cantidad_de_parametros=funcion_de_comando[0][0]
nombre_funcion=funcion_de_comando[1]["comando"]
funcion_ejecutable=funcion_de_comando[1]["ejecutar"]
# Validate that the declared parameter count is satisfied in the executable function...
for i in range(len(funcion_ejecutable)):
estilos_parametrizados=funcion_ejecutable[i][1][1]
estilos_sin_definir=self.tratamiento_parametros.extraerParametrosSinDefinir(
estilos_parametrizados=estilos_parametrizados)
posiciones_parametrizados=funcion_ejecutable[i][2]
posiciones_sin_definir=self.tratamiento_parametros.extraerParametrosSinDefinir(
posiciones_parametrizados)
if len(estilos_sin_definir) and len(posiciones_sin_definir):
parametros_sin_definir=estilos_sin_definir+posiciones_sin_definir
elif len(estilos_sin_definir):
parametros_sin_definir=estilos_sin_definir
elif len(posiciones_sin_definir):
parametros_sin_definir=posiciones_sin_definir
else:
parametros_sin_definir=[]
result_validacion_secuencia=self.tratamiento_parametros.validarSecuenciaNumerica(
cantidad_de_parametros, parametros_sin_definir)
if type(result_validacion_secuencia) is dict:
self.mensajes_de_error.append("Error on line "+str(self.linea_de_codigo)+": "+result_validacion_secuencia["error"])
break
if not len(self.mensajes_de_error):
# If every validation succeeded...
self.comandos_de_usuario[nombre_funcion]=funcion_ejecutable
def __anidar(self, codigo:str, anidado_nivel_2:bool=False)->None:
r"""
1. Si en la variable "self.comandos_tikz_validados", el ultimo comando se trata de un comando de anidacion (\newcommand, \foreach, etc), se validan los comandos anidados usando la funcion "self.__validador_codigo", y se almacena lo retornado en la variable "codigo".
1.1. En el caso de que el comando de anidacion ya tenga almacenado sus comandos anidados en forma de codigo en la variable "self.comandos_tikz_validados", entonces se recupera y se guarda en la variable "array_codigo", para luego al recorrer las identanciones de la variable "codigo", el valor de la variable "array_codigo", se va actualizando con el nuevo "codigo".
1.2. En el caso de que no tenga todavia ningun codigo de sus comandos anidados, se almacena el contenido de la variable "codigo" en el comando anidado, en la variable "self.comandos_tikz_validados".
2. En el caso de que no se trate de un comando de anidacion, entonces la variable "codigo", es utilizado en la funcion "self.__validador_codigo".
"""
# Read the commands that will be stored in this class...
comando_anidador=self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][0]
# If it is a command that lives inside a nesting command... [foreach, newcommand, animarPytikz, guardarPytikz]
if comando_anidador in self.COMANDOS_ANIDADORES:
codigo=self.__validador_codigo(codigo, True)
if "ejecutar" in list(self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1][1].keys()):
array_codigo=self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1][1]["ejecutar"]
if codigo:
# If there is nested code in the previous code, it must be added inside that code array.
if anidado_nivel_2:
codigo_anidado=self.comandos_tikz_validados[len(self.comandos_tikz_validados)-1][1][1]["ejecutar"]
ultimo_comando_sub_anidado=codigo_anidado[len(codigo_anidado)-1][0]
if ultimo_comando_sub_anidado == "foreach":
if("ejecutar" in array_codigo[len(array_codigo)-1][1][1].keys()):
array_codigo[len(array_codigo)-1][1][1]["ejecutar"].append(codigo)
else:
array_codigo[len(array_codigo)-1][1][1]["ejecutar"]=[codigo]
else:
array_codigo[len(array_codigo)-1][1].append(codigo)
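The branch above keeps appending nested code into the last validated command's "ejecutar" list, creating the list on first use. That create-or-append pattern can be sketched on a bare dict; `append_nested` is a hypothetical name used only for illustration:

```python
def append_nested(comando, codigo):
    """Create-or-append the 'ejecutar' list, as in __anidar (hypothetical sketch)."""
    if "ejecutar" in comando:
        comando["ejecutar"].append(codigo)
    else:
        comando["ejecutar"] = [codigo]
    return comando

cmd = {"comando": "foreach"}
append_nested(cmd, ["lineaVertical"])
append_nested(cmd, ["circle"])
print(cmd["ejecutar"])  # [['lineaVertical'], ['circle']]
```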
n_layers = 2 if 'n_layers' not in network_config else network_config['n_layers'] + 1
dim_hidden = network_config.get('dim_hidden', 40)
if type(dim_hidden) is int:
dim_hidden = (n_layers - 1) * [dim_hidden]
else:
dim_hidden = copy(dim_hidden)
n_layers = len(dim_hidden) + 1
boundaries = network_config['output_boundaries']
dim_act = max([b[1] for b in boundaries])
dim_hidden.append(dim_act)
types = network_config.get('types', [])
aux_boundaries = network_config.get('aux_boundaries', [])
aux_types = network_config.get('aux_types', ['cont'])
# List of indices for state (vector) data and image (tensor) data in observation.
x_idx, img_idx, i = [], [], 0
for sensor in network_config['obs_include']:
dim = network_config['sensor_dims'][sensor]
if sensor in network_config['obs_image_data']:
img_idx = img_idx + list(range(i, i+dim))
else:
x_idx = x_idx + list(range(i, i+dim))
i += dim
if input_layer is None:
nn_input, action, precision = get_input_layer(dim_input, dim_output, len(boundaries))
else:
nn_input, action, precision = input_layer
state_input = nn_input[:, 0:x_idx[-1]+1]
image_input = nn_input[:, x_idx[-1]+1:img_idx[-1]+1]
# image goes through 3 convnet layers
num_filters = network_config['num_filters']
filter_sizes = network_config['filter_sizes']
n_conv = len(num_filters)
fp_only = True # False
im_height = network_config['image_height']
im_width = network_config['image_width']
num_channels = network_config['image_channels']
image_input = tf.reshape(image_input, [-1, im_width, im_height, num_channels])
#image_input = annotate_xy(im_width, im_height, image_input)
with tf.variable_scope('discr_conv_base'):
discr_conv_layers, weights, biases = build_conv_layers(image_input, filter_sizes, num_filters)
_, num_rows, num_cols, num_fp = discr_conv_layers[-1].get_shape()
num_rows, num_cols, num_fp = [int(x) for x in [num_rows, num_cols, num_fp]]
if fp_only:
discr_fp = compute_fp(discr_conv_layers[-1])
discr_fc_input = tf.concat(axis=1, values=[discr_fp, state_input])
else:
discr_fp = compute_fp(discr_conv_layers[-1][:,:,:,8:])
lastconv = tf.reshape(tf.nn.relu(discr_conv_layers[-1][:,:,:,:8]), [-1, num_rows*num_cols*8])
discr_fc_input = tf.concat(axis=1, values=[lastconv, discr_fp, state_input])
last_conv_vars = discr_fc_input
losses = []
preds = []
fc_vars = []
offset = 0
cur_input = discr_fc_input
with tf.variable_scope('discr_head'):
offset = len(dim_hidden)
task_bounds = [(st, en) for i, (st, en) in enumerate(boundaries) if not len(types) or types[i].find('discr') >= 0]
cont_bounds = [(st, en) for i, (st, en) in enumerate(boundaries) if len(types) and types[i].find('cont') >= 0]
task_types = len(task_bounds) * ['discrete']
cont_types = len(cont_bounds) * ['continuous']
ntask = np.sum([en-st for (st, en) in task_bounds])
dh = dim_hidden[:-1] + [ntask]
mlp_applied, weights_FC, biases_FC = get_mlp_layers(cur_input, len(dh), dh, offset)
scaled_mlp_applied = mlp_applied
if eta is not None:
scaled_mlp_applied = mlp_applied * eta
discr_mlp = scaled_mlp_applied
prediction = multi_mix_prediction_layer(scaled_mlp_applied, task_bounds, types=task_types)
fc_vars = weights_FC + biases_FC
loss_out = get_loss_layer(mlp_out=scaled_mlp_applied, task=action, boundaries=boundaries, batch_size=batch_size, precision=precision, types=types, wt=1e3) # wt=5e4)
losses = [loss_out]
preds = [prediction]
return TfMap.init_from_lists([nn_input, action, precision], preds, losses), fc_vars, last_conv_vars
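`compute_fp` is not defined in this excerpt; the surrounding code treats it as the feature-point layer (spatial softmax over each channel's activation map, followed by the expected pixel coordinates). A minimal NumPy sketch of that idea, with `spatial_softmax_fp` as a hypothetical stand-in and assuming NHWC input:

```python
import numpy as np

def spatial_softmax_fp(features):
    """Feature points via spatial softmax (NumPy sketch of the assumed compute_fp).

    features: array of shape (N, H, W, C).
    Returns (N, 2*C): expected (x, y) per channel, coordinates in [-1, 1].
    """
    n, h, w, c = features.shape
    flat = features.reshape(n, h * w, c)
    flat = flat - flat.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)
    grid_x = np.tile(np.linspace(-1.0, 1.0, w), h)      # x coord per flattened pixel
    grid_y = np.repeat(np.linspace(-1.0, 1.0, h), w)    # y coord per flattened pixel
    fp_x = (probs * grid_x[None, :, None]).sum(axis=1)  # (N, C) expected x
    fp_y = (probs * grid_y[None, :, None]).sum(axis=1)  # (N, C) expected y
    return np.concatenate([fp_x, fp_y], axis=1)

# A single hot pixel at the center of a 5x5 map yields coordinates near (0, 0).
img = np.zeros((1, 5, 5, 1))
img[0, 2, 2, 0] = 50.0
print(spatial_softmax_fp(img).shape)  # (1, 2)
```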
def fp_multi_modal_cont_network(dim_input=27, dim_output=2, batch_size=25, network_config=None, input_layer=None, eta=None):
print('Building cont output fp net')
pool_size = 2
n_layers = 2 if 'n_layers' not in network_config else network_config['n_layers'] + 1
dim_hidden = network_config.get('dim_hidden', 40)
if type(dim_hidden) is int:
dim_hidden = (n_layers - 1) * [dim_hidden]
else:
dim_hidden = copy(dim_hidden)
n_layers = len(dim_hidden) + 1
boundaries = network_config['output_boundaries']
dim_act = max([b[1] for b in boundaries])
dim_hidden.append(dim_act)
types = network_config.get('types', [])
aux_boundaries = network_config.get('aux_boundaries', [])
aux_types = network_config.get('aux_types', ['cont'])
cont_bounds = boundaries
types = ['continuous'] * len(cont_bounds)
# List of indices for state (vector) data and image (tensor) data in observation.
x_idx, img_idx, i = [], [], 0
for sensor in network_config['obs_include']:
dim = network_config['sensor_dims'][sensor]
if sensor in network_config['obs_image_data']:
img_idx = img_idx + list(range(i, i+dim))
else:
x_idx = x_idx + list(range(i, i+dim))
i += dim
if input_layer is None:
nn_input, action, precision = get_input_layer(dim_input, dim_output, len(boundaries))
else:
nn_input, action, precision = input_layer
state_input = nn_input[:, 0:x_idx[-1]+1]
image_input = nn_input[:, x_idx[-1]+1:img_idx[-1]+1]
# image goes through 3 convnet layers
num_filters = network_config['num_filters']
filter_sizes = network_config['filter_sizes']
n_conv = len(num_filters)
fp_only = True # False
im_height = network_config['image_height']
im_width = network_config['image_width']
num_channels = network_config['image_channels']
image_input = tf.reshape(image_input, [-1, im_width, im_height, num_channels])
#image_input = annotate_xy(im_width, im_height, image_input)
with tf.variable_scope('conv_base'):
conv_layers, weights, biases = build_conv_layers(image_input, filter_sizes, num_filters)
_, num_rows, num_cols, num_fp = conv_layers[-1].get_shape()
num_rows, num_cols, num_fp = [int(x) for x in [num_rows, num_cols, num_fp]]
if fp_only:
fp = compute_fp(conv_layers[-1])
fc_input = tf.concat(axis=1, values=[fp, state_input])
else:
fp = compute_fp(conv_layers[-1][:,:,:,8:])
lastconv = tf.reshape(tf.nn.relu(conv_layers[-1][:,:,:,:8]), [-1, num_rows*num_cols*8])
fc_input = tf.concat(axis=1, values=[fp, state_input, lastconv])
last_conv_vars = fc_input
losses = []
preds = []
fc_vars = []
offset = 0
cur_input = fc_input
with tf.variable_scope('cont_head'):
ncont = np.sum([en-st for (st, en) in cont_bounds])
dh = dim_hidden[:-1] + [int(ncont)]
mlp_applied, weights_FC, biases_FC = get_mlp_layers(cur_input, len(dh), dh, offset)
prediction = mlp_applied
scaled_mlp_applied = mlp_applied
fc_vars = weights_FC + biases_FC
loss_out = get_loss_layer(mlp_out=scaled_mlp_applied, task=action, boundaries=boundaries, batch_size=batch_size, precision=precision, types=types, wt=1e1) # wt=5e4)
losses = [loss_out]
preds = [prediction]
return TfMap.init_from_lists([nn_input, action, precision], preds, losses), fc_vars, last_conv_vars
def fp_multi_modal_cond_network(dim_input=27, dim_output=2, batch_size=25, network_config=None, input_layer=None, eta=None):
"""
An example of a network in tf that has both state and image inputs, with the feature
point architecture (spatial softmax + expectation).
Args:
dim_input: Dimensionality of input.
dim_output: Dimensionality of the output.
batch_size: Batch size.
network_config: dictionary of network structure parameters
Returns:
A tfMap object that stores inputs, outputs, and scalar loss.
"""
print('Building mixed output fp net')
pool_size = 2
n_layers = 2 if 'n_layers' not in network_config else network_config['n_layers'] + 1
dim_hidden = network_config.get('dim_hidden', 40)
if type(dim_hidden) is int:
dim_hidden = (n_layers - 1) * [dim_hidden]
else:
dim_hidden = copy(dim_hidden)
n_layers = len(dim_hidden) + 1
boundaries = network_config['output_boundaries']
dim_act = max([b[1] for b in boundaries])
dim_hidden.append(dim_act)
types = network_config.get('types', [])
aux_boundaries = network_config.get('aux_boundaries', [])
aux_types = network_config.get('aux_types', ['cont'])
# List of indices for state (vector) data and image (tensor) data in observation.
x_idx, img_idx, i = [], [], 0
for sensor in network_config['obs_include']:
dim = network_config['sensor_dims'][sensor]
if sensor in network_config['obs_image_data']:
img_idx = img_idx + list(range(i, i+dim))
else:
x_idx = x_idx + list(range(i, i+dim))
i += dim
if input_layer is None:
nn_input, action, precision = get_input_layer(dim_input, dim_output, len(boundaries))
else:
nn_input, action, precision = input_layer
state_input = nn_input[:, 0:x_idx[-1]+1]
image_input = nn_input[:, x_idx[-1]+1:img_idx[-1]+1]
# image goes through 3 convnet layers
num_filters = network_config['num_filters']
filter_sizes = network_config['filter_sizes']
n_conv = len(num_filters)
fp_only = True # False
im_height = network_config['image_height']
im_width = network_config['image_width']
num_channels = network_config['image_channels']
image_input = tf.reshape(image_input, [-1, im_width, im_height, num_channels])
#image_input = annotate_xy(im_width, im_height, image_input)
with tf.variable_scope('discr_conv_base'):
discr_conv_layers, weights, biases = build_conv_layers(image_input, filter_sizes, num_filters)
_, num_rows, num_cols, num_fp = discr_conv_layers[-1].get_shape()
num_rows, num_cols, num_fp = [int(x) for x in [num_rows, num_cols, num_fp]]
if fp_only:
discr_fp = compute_fp(discr_conv_layers[-1])
else:
discr_fp = compute_fp(discr_conv_layers[-1][:,:,:,8:])
### FC Layers
if fp_only:
discr_fc_input = tf.concat(axis=1, values=[discr_fp, state_input])
else:
lastconv = tf.reshape(tf.nn.relu(discr_conv_layers[-1][:,:,:,:8]), [-1, num_rows*num_cols*8])
discr_fc_input = tf.concat(axis=1, values=[lastconv, discr_fp, state_input])
last_conv_vars = discr_fc_input
losses = []
preds = []
fc_vars = []
offset = 0
#mlp_applied = fc_input
#mlp_applied, weights_FC, biases_FC = get_mlp_layers(fc_input, n_layers-2, dim_hidden[:-1], nonlin=True)
discr_mlp_applied, weights_FC, biases_FC = get_mlp_layers(discr_fc_input, 1, dim_hidden[:1], nonlin=True)
with tf.variable_scope('conv_base'):
conv_layers, weights, biases = build_conv_layers(image_input, filter_sizes, num_filters)
_, num_rows, num_cols, num_fp = conv_layers[-1].get_shape()
num_rows, num_cols, num_fp = [int(x) for x in [num_rows, num_cols, num_fp]]
if fp_only:
fp = compute_fp(conv_layers[-1])
fc_input = tf.concat(axis=1, values=[fp, state_input])
else:
fp = compute_fp(conv_layers[-1][:,:,:,8:])
lastconv = tf.reshape(tf.nn.relu(conv_layers[-1][:,:,:,:8]), [-1, num_rows*num_cols*8])
fc_input = tf.concat(axis=1, values=[fp, state_input, lastconv])
last_conv_vars = fc_input
losses = []
preds = []
fc_vars = []
offset = 0
#mlp_applied = fc_input
#mlp_applied, weights_FC, biases_FC = get_mlp_layers(fc_input, n_layers-2, dim_hidden[:-1], nonlin=True)
mlp_applied, weights_FC, biases_FC = get_mlp_layers(fc_input, 1, dim_hidden[:1], nonlin=True)
cur_input = discr_fc_input # mlp_applied
with tf.variable_scope('discr_head'):
offset = len(dim_hidden)
task_bounds = [(st, en) for i, (st, en) in enumerate(boundaries) if not len(types) or types[i].find('discr') >= 0]
cont_bounds = [(st, en) for i, (st, en) in enumerate(boundaries) if len(types) and types[i].find('cont') >= 0]
task_types = len(task_bounds) * ['discrete']
cont_types = len(cont_bounds) * ['continuous']
ntask = np.sum([en-st for (st, en) in task_bounds])
dh = dim_hidden[:-1] + [ntask]
mlp_applied, weights_FC, biases_FC = get_mlp_layers(cur_input, len(dh), dh, offset)
scaled_mlp_applied = mlp_applied
if eta is not None:
scaled_mlp_applied = mlp_applied * eta
discr_mlp = scaled_mlp_applied
prediction = multi_mix_prediction_layer(scaled_mlp_applied, task_bounds, types=task_types)
onehot_preds = [tf.one_hot(tf.argmax(prediction[:,lb:ub], axis=1), ub-lb) for lb, ub in task_bounds]
onehot_preds = tf.concat(onehot_preds, axis=1)
if len(cont_bounds):
#cur_input = tf.concat([cur_input, tf.stop_gradient(onehot_preds)], axis=1)
cur_input = tf.concat([fc_input, tf.stop_gradient(onehot_preds)], axis=1)
#cur_input = tf.concat([cur_input, tf.stop_gradient(prediction)], axis=1)
with tf.variable_scope('cont_head'):
offset += len(dh)
ncont = np.sum([en-st for (st, en) in cont_bounds])
dh = dim_hidden[:-1] + [ncont]
mlp_applied, weights_FC_2, biases_FC_2 = get_mlp_layers(cur_input, len(dh), dh, offset)
prediction = tf.concat([prediction, mlp_applied], axis=1)
scaled_mlp_applied = tf.concat([discr_mlp, mlp_applied], axis=1)
fc_vars = weights_FC + biases_FC
loss_out = get_loss_layer(mlp_out=scaled_mlp_applied, task=action, boundaries=boundaries, batch_size=batch_size, precision=precision, | |
2),('yin', 2),('hu', 3),('xiao', 4),('yi', 4),('shi', 2),('fa', 1),
('wan', 4),('lai', 4),('bai', 3),('quan', 2),('xiang', 1),('yu', 3),('qiu', 1),
('hu', 1),('ran', 2),('geng', 4),('zuo', 4),('yu', 2),('yang', 2),('chan', 1),
('huang', 2),('yun', 2),('xiao', 1),('tiao', 2),('bai', 2),('ri', 4),('an', 4),
('bian', 4),('diao', 4),('ru', 2),('wen', 2),('yang', 2),('liu', 3),('chun', 1),
('shang', 4),('lin', 2),('fan', 2),('hua', 1),('zhao', 4),('yan', 3),('xin', 1),
('sui', 4),('ye', 4),('gao', 1),('tang', 2),('lie', 4),('ming', 2),('zhu', 2),
('mei', 2),('jiu', 3),('yi', 4),('bei', 1),('sheng', 1),('yi', 4),('qu', 3),
]),
(7,[('shan', 1),('si', 4),('zhong', 1),('ming', 2),('zhou', 4),('yi', 3),('hun', 1),
('yu', 2),('liang', 2),('du', 4),('tou', 2),('zheng', 1),('du', 4),('xuan', 1),
('ren', 2),('sui', 2),('sha', 1),('lu', 4),('xiang', 4),('jiang', 1),('cun', 1),
('yu', 2),('yi', 4),('cheng', 2),('zhou', 1),('gui', 1),('lu', 4),('men', 2),
('lu', 4),('men', 2),('yue', 4),('zhao', 4),('kai', 1),('yan', 1),('shu', 4),
('hu', 1),('dao', 4),('pang', 2),('gong', 1),('qi', 1),('yin', 3),('chu', 4),
('yan', 2),('fei', 1),('song', 1),('jing', 4),('chang', 2),('ji', 4),('liao', 2),
('wei', 2),('you', 3),('you', 1),('ren', 2),('zi', 4),('lai', 2),('qu', 4),
]),
(7,[('feng', 1),('chui', 1),('liu', 3),('hua', 1),('man', 3),('dian', 4),('xiang', 1),
('wu', 2),('ji', 1),('ya', 1),('jiu', 3),('huan', 4),('ke', 4),('chang', 2),
('jin', 1),('ling', 2),('zi', 3),('di', 4),('lai', 2),('xiang', 1),('song', 4),
('yu', 4),('xing', 2),('bu', 4),('xing', 2),('ge', 4),('jin', 4),('shang', 1),
('qing', 3),('jun', 1),('shi', 4),('wen', 4),('dong', 1),('liu', 2),('shui', 3),
('bie', 2),('yi', 4),('yu', 3),('zhi', 1),('shui', 2),('duan', 3),('chang', 2),
]),
(7,[('lun', 2),('tai', 2),('cheng', 2),('tou', 2),('ye', 4),('chui', 1),('jiao', 3),
('lun', 2),('tai', 2),('cheng', 2),('bei', 3),('mao', 2),('tou', 2),('luo', 4),
('yu', 3),('shu', 1),('zuo', 2),('ye', 4),('guo', 4),('qu', 2),('li', 2),
('chan', 2),('yu', 2),('yi', 3),('zai', 4),('jin', 1),('shan', 1),('xi', 1),
('shu', 4),('lou', 2),('xi', 1),('wang', 4),('yan', 1),('chen', 2),('hei', 1),
('han', 4),('bing', 1),('tun', 2),('zai', 4),('lun', 2),('tai', 2),('bei', 3),
('shang', 4),('jiang', 4),('yong', 1),('mao', 2),('xi', 1),('chu', 1),('zheng', 1),
('ping', 2),('ming', 2),('chui', 1),('di', 2),('da', 4),('jun', 1),('xing', 2),
('si', 4),('bian', 1),('fa', 2),('gu', 3),('xue', 2),('hai', 2),('yong', 3),
('san', 1),('jun', 1),('da', 4),('hu', 1),('yin', 1),('shan', 1),('dong', 4),
('lu', 3),('sai', 4),('bing', 1),('qi', 4),('lian', 2),('yun', 2),('tun', 2),
('zhan', 4),('chang', 3),('bai', 2),('gu', 3),('chan', 2),('cao', 3),('gen', 1),
('jian', 4),('he', 2),('feng', 1),('ji', 2),('xue', 3),('pian', 4),('kuo', 4),
('sha', 1),('kou', 3),('shi', 2),('dong', 4),('ma', 3),('ti', 2),('tuo', 1),
('ya', 4),('xiang', 1),('qin', 2),('wang', 2),('gan', 1),('ku', 3),('xin', 1),
('shi', 4),('jiang', 1),('bao', 4),('zhu', 3),('jing', 4),('bian', 1),('chen', 2),
('gu', 3),('lai', 2),('qing', 1),('shi', 3),('shui', 2),('bu', 2),('jian', 4),
('jin', 1),('jian', 4),('gong', 1),('ming', 2),('sheng', 4),('gu', 3),('ren', 2),
]),
(7,[('bei', 3),('feng', 1),('juan', 3),('di', 4),('bai', 2),('cao', 3),('zhe', 2),
('hu', 2),('tian', 1),('ba', 1),('yue', 4),('ji', 2),('fei', 1),('xue', 3),
('hu', 1),('ru', 2),('yi', 2),('ye', 4),('chun', 1),('feng', 1),('lai', 2),
('qian', 1),('shu', 4),('wan', 4),('shu', 4),('li', 2),('hua', 1),('kai', 1),
('san', 4),('ru', 4),('zhu', 1),('lian', 2),('shi', 1),('luo', 2),('mu', 4),
('hu', 2),('qiu', 2),('bu', 4),('nuan', 3),('jin', 3),('qin', 1),('bo', 2),
('jiang', 1),('jun', 1),('jiao', 3),('gong', 1),('bu', 4),('de', 2),('kong', 4),
('du', 1),('hu', 4),('tie', 3),('yi', 1),('leng', 3),('nan', 2),('zhuo', 2),
('han', 4),('hai', 3),('lan', 2),('gan', 1),('bai', 3),('zhang', 4),('bing', 1),
('chou', 2),('yun', 2),('can', 3),('dan', 4),('wan', 4),('li', 3),('ning', 2),
('zhong', 1),('jun', 1),('zhi', 4),('jiu', 3),('yin', 3),('gui', 1),('ke', 4),
('hu', 2),('qin', 2),('pi', 2),('pa', 2),('yu', 3),('qiang', 1),('di', 2),
('fen', 1),('fen', 1),('mu', 4),('xue', 3),('xia', 4),('yuan', 2),('men', 2),
('feng', 1),('che', 4),('hong', 2),('qi', 2),('dong', 4),('bu', 4),('fan', 1),
('lun', 2),('tai', 2),('dong', 1),('men', 2),('song', 4),('jun', 1),('qu', 4),
('qu', 4),('shi', 2),('xue', 2),('man', 3),('tian', 1),('shan', 1),('lu', 4),
('shan', 1),('hui', 2),('lu', 4),('zhuan', 3),('bu', 2),('jian', 4),('jun', 1),
('xue', 3),('shang', 4),('kong', 1),('liu', 2),('ma', 3),('xing', 2),('chu', 4),
]),
(7,[('guo', 2),('chu', 1),('yi', 3),('lai', 2),('hua', 4),('an', 1),('ma', 3),
('shen', 2),('miao', 4),('du', 2),('shu', 3),('jiang', 1),('du', 1),('wang', 2),
('jiang', 1),('jun', 1),('de', 2),('ming', 2),('san', 1),('shi', 2),('zai', 3),
('ren', 2),('jian', 1),('you', 4),('jian', 4),('zhen', 1),('cheng', 2),('huang', 2),
('ceng', 2),('mao', 4),('xian', 1),('di', 4),('zhao', 4),('ye', 4),('bai', 2),
('long', 2),('chi', 2),('shi', 2),('ri', 4),('fei', 1),('pi', 1),('li', 4),
('nei', 4),('fu', 3),('yan', 1),('hong', 2),('ma', 2),('nao', 3),('pan', 2),
('jie', 2),('yu', 2),('chuan', 2),('zhao', 4),('cai', 2),('ren', 2),('suo', 3),
('pan', 2),('ci', 4),('jiang', 1),('jun', 1),('bai', 4),('wu', 3),('gui', 1),
('qing', 1),('wan', 2),('xi', 4),('qi', 3),('xiang', 1),('zhui', 1),('fei', 1),
('gui', 4),('qi', 1),('quan', 2),('men', 2),('de', 2),('bi', 3),('ji', 4),
('shi', 3),('jiao', 4),('ping', 2),('zhang', 4),('sheng', 1),('guang', 1),('hui', 1),
('xi', 1),('ri', 4),('tai', 4),('zong', 1),('quan', 2),('mao', 2),('gua', 1),
('jin', 4),('shi', 2),('guo', 1),('jia', 1),('shi', 1),('zi', 3),('hua', 1),
('jin', 1),('zhi', 1),('xin', 1),('tu', 2),('you', 3),('er', 4),('ma', 3),
('fu', 4),('ling', 4),('shi', 2),('zhe', 3),('jiu', 3),('tan', 4),('jie', 1),
('ci', 3),('jie', 1),('qi', 2),('zhan', 4),('yi', 1),('di', 2),('wan', 4),
('gao', 3),('su', 4),('mo', 4),('mo', 4),('kai', 1),('feng', 1),('sha', 1),
('qi', 2),('yu', 2),('qi', 1),('pi', 3),('yi', 4),('shu', 1),('jue', 2),
('jiong', 3),('ruo', 4),('han', 2),('kong', 1),('dong', 4),('yan', 1),('xue', 3),
('shuang', 1),('ti', 2),('cu', 4),('ta', 4),('chang', 2),('qiu', 1),('jian', 1),
('ma', 3),('guan', 1),('si', 1),('yang', 3),('sen', 1),('cheng', 2),('lie', 4),
('ke', 3),('lian', 2),('jiu', 2),('ma', 3),('zheng', 1),('shen', 2),('jun', 4),
('gu', 4),('shi', 4),('qing', 1),('gao', 1),('qi', 4),('shen', 1),('wen', 3),
('jie', 4),('wen', 4),('ku', 3),('xin', 1),('ai', 4),('zhe', 3),('shui', 2),
('hou', 4),('you', 3),('wei', 2),('feng', 3),('qian', 2),('zhi', 1),('dun', 4),
('yi', 4),('xi', 1),('xun', 2),('xing', 4),('xin', 1),('feng', 1),('gong', 1),
('cui', 4),('hua', 2),('fu', 2),('tian', 1),('lai', 2),('xiang', 4),('dong', 1),
('teng', 2),('xiang', 1),('lei', 3),('luo', 4),('san', 1),('wan', 4),('pi', 3),
('jie', 1),('yu', 3),('ci', 3),('tu', 2),('jin', 1),('gu', 3),('tong', 2),
('zi', 4),('cong', 2),('xian', 4),('bao', 3),('chao', 2),('he', 2),('zong', 1),
('wu', 2),('fu', 4),('she', 4),('jiao', 1),('jiang', 1),('shui', 3),('zhong', 1),
]),
(7,[('jiang', 1),('jun', 1),('wei', 4),('wu', 3),('zhi', 1),('zi', 3),('sun', 1),
('yu', 2),('jin', 1),('wei', 2),('shu', 4),('wei', 2),('qing', 1),('men', 2),
('ying', 1),('xiong', 2),('ge', 1),('ju', 4),('sui', 1),('yi', 2),('yi', 3),
('wen', 2),('cai', 3),('feng', 1),('liu', 2),('jin', 1),('shang', 4),('cun', 2),
('xue', 2),('shu', 1),('chu', 1),('xue', 2),('wei', 4),('fu', 1),('ren', 2),
('dan', 4),('hen', 4),('wu', 2),('guo', 4),('wang', 2),('you', 4),('jun', 1),
('dan', 1),('qing', 1),('bu', 4),('zhi', 1),('lao', 3),('jiang', 1),('zhi', 4),
('fu', 4),('gui', 4),('yu', 2),('wo', 3),('ru', 2),('fu', 2),('yun', 2),
('kai', 1),('yuan', 2),('zhi', 1),('zhong', 1),('chang', 2),('yin', 3),('jian', 4),
('cheng', 2),('en', 1),('shu', 4),('shang', 4),('nan', 2),('xun', 1),('dian', 4),
('ling', 2),('yan', 1),('gong', 1),('chen', 2),('shao', 3),('yan', 2),('se', 4),
('jiang', 1),('jun', 1),('xia', 4),('bi', 3),('kai', 1),('sheng', 1),('mian', 4),
('liang', 2),('xiang', 4),('tou', 2),('shang', 4),('jin', 4),('xian', 2),('guan', 4),
('meng', 3),('jiang', 4),('yao', 1),('jian', 1),('da', 4),('yu', 3),('jian', 4),
('bao', 1),('gong', 1),('e', 4),('gong', 1),('mao', 2),('fa', 4),('dong', 4),
('ying', 1),('zi', 1),('sa', 4),('shuang', 3),('you', 2),('han', 1),('zhan', 4),
('xian', 1),('di', 4),('tian', 1),('ma', 3),('yu', 4),('hua', 1),('cong', 1),
('hua', 4),('gong', 1),('ru', 2),('shan', 1),('mao', 4),('bu', 4),('tong', 2),
('shi', 4),('ri', 4),('qian', 1),('lai', 2),('chi', 4),('chi', 2),('xia', 4),
('jiong', 3),('li', 4),('chang', 1),('he', 2),('sheng', 1),('chang', 2),('feng', 1),
('zhao', 4),('wei', 4),('jiang', 1),('jun', 1),('fu', 2),('juan', 4),('su', 4),
('yi', 4),('jiang', 4),('can', 3),('dan', 4),('jing', 1),('ying', 2),('zhong', 1),
('si', 1),('xu', 1),('jiu', 3),('chong', 2),('zhen', 1),('long', 2),('chu', 1),
('yi', 4),('xi', 3),('wan', 4),('gu', 3),('fan', 2),('ma', 3),('kong', 1),
('yu', 4),('hua', 1),('que', 4),('zai', 4),('yu', 4),('ta', 4),('shang', 4),
('ta', 4),('shang', 4),('ting', 2),('qian', 2),('yi', 4),('xiang', 1),('xiang', 4),
('zhi', 4),('zun', 1),('han', 2),('xiao', 4),('cui', 1),('ci', 4),('jin', 1),
('yu', 3),('ren', 2),('tai', 4),('pu', 2),('jie', 1),('chou', 2),('chang', 4),
('di', 4),('zi', 3),('han', 2),('gan', 1),('zao', 3),('ru', 4),('shi', 4),
('yi', 4),('neng', 2),('hua', 4),('ma', 3),('qiong', 2),('shu', 1),('xiang', 4),
('gan', 1),('wei', 2),('hua', 4),('rou', 4),('bu', 2),('hua', 4),('gu', 3),
('ren', 2),('shi', 3),('hua', 2),('liu', 2),('qi', 4),('diao', 1),('sang', 4),
('jiang', 1),('jun', 1),('hua', 4),('shan', 4),('gai', 4),('you', 3),('shen', 2),
('ou', 3),('feng', 2),('jia', 1),('shi', 4),('yi', 4),('xie', 3),('zhen', 1),
('ji', 2),('jin', 1),('piao', 1),('bo', 2),('gan', 1),('ge', 1),('ji', 4),
('lv', 3),('mao', 4),('xun', 2),('chang', 2),('xing', 2),('lu', 4),('ren', 2),
('tu', 2),('qiong', 2),('fan', 3),('zao', 1),('su', 2),('yan', 3),('bai', 2),
('shi', 4),('shang', 4),('wei', 4),('you', 3),('ru', 2),('gong', 1),('pin', 2),
('dan', 4),('kan', 4),('gu', 3),('lai', 2),('sheng', 4),('ming', 2),('xia', 4),
('zhong', 1),('ri', 4),('kan', 2),('lan', 3),('chan', 2),('qi', 2),('shen', 1)
]),
(7,[('jin', 1),('wo', 3),('bu', 2),('le', 4),('si', 1),('yue', 4),('yang', 2),
('shen', 1),('yu', 4),('fen', 4),('fei', 1),('bing', 4),('zai', 4),('chuang', 2),
('mei', 3),('ren', 2),('juan', 1),('juan', 1),('ge', 2),('qiu', 1),('shui', 3),
('zhuo', 2),('zu', 2),('dong', 4),('ting', 2),('wang', 4),('ba', 1),('huang', 1),
('hong', 2),('fei', 1),('ming', 2),('ming', 2),('ri', 4),('yue', 4),('bai', 2),
('qing', 1),('feng', 1),('ye', 4),('chi', 4),('tian', 1),('yu', 3),('shuang', 1),
('yu', 4),('jing', 1),('qun', 2),('di', 4),('ji', 2),('bei', 2),('dou', 3),
('huo', 4),('qi', 2),('qi', 2),('lin', 2),('yi', 4),('feng', 4),('huang', 2),
('fu', 2),('rong', 2),('jing', 1),('qi', 2),('yan', 1),('wu', 4),('luo', 4),
('ying', 3),('dong', 4),('dao', 4),('jing', 3),('yao', 2),('xiao', 1),('xiang', 1),
('xing', 1),('gong', 1),('zhi', 1),('jun', 1),('zui', 4),('qiong', 2),('jiang', 1),
('yu', 3),('ren', 2),('xi', 1),('shao', 3),('bu', 2),('zai', 4),('pang', 2),
('si', 4),('wen', 2),('zuo', 2),('zhe', 3),('chi', 4),('song', 1),('zi', 3),
('kong', 3),('shi', 4),('han', 4),('dai', 4),('han', 2),('zhang', 1),('liang', 2),
('xi', 1),('sui', 2),('liu', 2),('shi', 4),('ding', 4),('chang', 2),('an', 1),
('wei', 2),('wo', 4),('wei', 4),('gai', 3),('shen', 2),('can', 3),('shang', 1),
('guo', 2),('jia', 1),('cheng', 2),('bai', 4),('wu', 2),('qi', 2),('gan', 3),
('se', 4),('nan', 2),('xing', 1),('fu', 3),('can', 1),('feng', 1),('xiang', 1),
('zhou', 1),('nan', 2),('liu', 2),('zhi', 4),('gu', 2),('suo', 3),('xi', 1),
('nan', 2),('ji', 2),('lao', 3),('ren', 2),('ying', 1),('shou', 4),('chang', 1),
('mei', 3),('ren', 2),('hu', 2),('wei', 4),('ge', 2),('qiu', 1),('shui', 3),
('yan', 1),('de', 2),('zhi', 4),('zhi', 1),('gong', 4),('yu', 4),('tang', 2),
]),
(7,[('kong', 3),('ming', 2),('miao', 4),('qian', 2),('you', 3),('lao', 2),('bai', 3),
('ke', 1),('ru', 2),('qing', 1),('tong', 2),('gen', 1),('ru', 2),('shi', 2),
('shuang', 1),('pi', 2),('liu', 1),('yu', 3),('si', 4),('shi', 2),('wei', 2),
('dai', 4),('se', 4),('can', 1),('tian', 1),('er', 4),('qian', 1),('chi', 3),
('jun', 1),('chen', 2),('yi', 2),('yu', 3),('shi', 2),('ji', 4),('hui', 4),
('shu', 4),('mu', 4),('you', 2),('wei', 2),('ren', 2),('ai', 4),('xi', 1),
('yun', 2),('lai', 2),('qi', 4),('jie', 1),('wu', 1),('xia', 2),('chang', 2),
('yue', 4),('chu', 1),('han', 2),('tong', 1),('xue', 3),('shan', 1),('bai', 2),
('yi', 4),('zuo', 2),('lu', 4),('rao', 4),('jin', 3),('ting', 2),('dong', 1),
('xian', 1),('zhu', 3),('wu', 3),('hou', 2),('tong', 2),('bi', 4),('gong', 1),
('cui', 1),('wei', 2),('zhi', 1),('gan', 4),('jiao', 1),('yuan', 2),('gu', 3),
('yao', 2),('tiao', 3),('dan', 1),('qing', 1),('hu', 4),('you', 3),('kong', 1),
('luo', 4),('luo', 4),('pan', 2),('ju', 4),('sui', 1),('de', 2),('di', 4),
('ming', 2),('ming', 2),('gu', 1),('gao', 1),('duo', 1),('lie', 4),('feng', 1),
('fu', 2),('chi', 2),('zi', 4),('shi', 4),('shen', 2),('ming', 2),('li', 4),
('zheng', 4),('zhi', 2),('yuan', 2),('yin', 1),('zao', 4),('hua', 4),('gong', 1),
('da', 4),('sha', 4),('ru', 2),('qing', 1),('yao', 4),('liang', 2),('dong', 4),
('wan', 4),('niu', 2),('hui', 2),('shou', 3),('qiu', 1),('shan', 1),('zhong', 4),
('bu', 2),('lu', 4),('wen', 2),('zhang', 1),('shi', 4),('yi', 3),('jing', 1),
('wei', 4),('ci', 2),('jian', 3),('fa', 2),('shui', 2),('neng', 2),('song', 4),
('ku', 3),('xin', 1),('qi', 2),('mian', 3),('rong', 2),('lou', 2),('yi', 3),
('xiang', 1),('ye', 4),('zhong', 1),('jing', 1),('su', 4),('luan', 2),('feng', 4),
('zhi', 4),('shi', 4),('you', 1),('ren', 2),('mo', 4),('yuan', 4),('jie', 1),
('gu', 3),('lai', 2),('cai', 2),('da', 4),('nan', 2),('wei', 2),('yong', 4),
]),
(7,[('xi', 1),('you', 3),('jia', 1),('ren', 2),('gong', 1),('sun', 1),('shi', 4),
('yi', 4),('wu', 3),('jian', 4),('qi', 4),('dong', 4),('si', 4),('fang', 1),
('guan', 1),('zhe', 3),('ru', 2),('shan', 1),('se', 4),('ju', 3),('sang', 4),
('tian', 1),('di', 4),('wei', 2),('zhi', 1),('jiu', 3),('di', 1),('ang', 2),
('huo', 4),('ru', 2),('yi', 4),('she', 4),('jiu', 3),('ri', 4),('luo', 4),
('jiao', 3),('ru', 2),('qun', 2),('di', 4),('can', 1),('long', 2),('xiang', 2),
('lai', 2),('ru', 2),('lei', 2),('ting', 2),('shou', 1),('zhen', 4),('nu', 4),
('ba', 4),('ru', 2),('jiang', 1),('hai', 3),('ning', 2),('qing', 1),('guang', 1),
('jiang', 4),('chun', 2),('zhu', 1),('xiu', 4),('liang', 3),('ji', 4),('mo', 4),
('wan', 2),('you', 3),('di', | |
"""
This playbook processes URLs not in bogon_list and creates a task note for every indicator for review by the analyst
"""
import phantom.rules as phantom
import json
from datetime import datetime, timedelta
def on_start(container):
phantom.debug('on_start() called')
# call 'check_urls' block
check_urls(container=container)
return
def check_urls(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('check_urls() called')
# check for 'if' condition 1
matched = phantom.decision(
container=container,
conditions=[
["artifact:*.cef.requestURL", "!=", ""],
["artifact:*.cef.url", "!=", ""],
["artifact:*.cef.http_referrer", "==", ""],
],
logical_operator='or')
# call connected blocks if condition 1 matched
if matched:
url_filter(action=action, success=success, container=container, results=results, handle=handle, custom_function=custom_function)
return
# call connected blocks for 'else' condition 2
missing_data_comment(action=action, success=success, container=container, results=results, handle=handle, custom_function=custom_function)
return
def url_filter(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_filter() called')
# collect filtered artifact ids for 'if' condition 1
matched_artifacts_1, matched_results_1 = phantom.condition(
container=container,
conditions=[
["artifact:*.cef.requestURL", "not in", "custom_list:bogon_list"],
["artifact:*.cef.url", "not in", "custom_list:bogon_list"],
["artifact:*.cef.http_referrer", "not in", "custom_list:bogon_list"],
],
logical_operator='or',
name="url_filter:condition_1")
# call connected blocks if filtered artifacts or results
if matched_artifacts_1 or matched_results_1:
merge_urls(action=action, success=success, container=container, results=results, handle=handle, custom_function=custom_function, filtered_artifacts=matched_artifacts_1, filtered_results=matched_results_1)
return
def url_reputation(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_reputation() called')
#phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))
# collect data for 'url_reputation' call
custom_function_results_data_1 = phantom.collect2(container=container, datapath=['merge_urls:custom_function_result.data.*.item'], action_results=results)
parameters = []
# build parameters list for 'url_reputation' call
for custom_function_results_item_1 in custom_function_results_data_1:
if custom_function_results_item_1[0]:
parameters.append({
'url': custom_function_results_item_1[0],
})
phantom.act(action="url reputation", parameters=parameters, assets=['virustotal'], callback=url_reputation_format, name="url_reputation")
return
def url_intelligence(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_intelligence() called')
#phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))
# collect data for 'url_intelligence' call
custom_function_results_data_1 = phantom.collect2(container=container, datapath=['merge_urls:custom_function_result.data.*.item'], action_results=results)
parameters = []
# build parameters list for 'url_intelligence' call
for custom_function_results_item_1 in custom_function_results_data_1:
if custom_function_results_item_1[0]:
parameters.append({
'url': custom_function_results_item_1[0],
})
phantom.act(action="url intelligence", parameters=parameters, assets=['recorded future'], callback=url_intel_format, name="url_intelligence")
return
"""
Param 0 = Name of the task to update
"""
def generate_task_notes(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('generate_task_notes() called')
input_parameter_0 = "Indicator analysis"
indicator_analysis__analysis = json.loads(phantom.get_run_data(key='indicator_analysis:analysis'))
custom_function_results_data_1 = phantom.collect2(container=container, datapath=['merge_urls:custom_function_result.data.*.item'], action_results=results)
formatted_data_1 = phantom.get_format_data(name='url_reputation_format__as_list')
formatted_data_2 = phantom.get_format_data(name='url_hunt_format__as_list')
formatted_data_3 = phantom.get_format_data(name='url_intel_format__as_list')
custom_function_results_item_1_0 = [item[0] for item in custom_function_results_data_1]
generate_task_notes__note_params = None
################################################################################
## Custom Code Start
################################################################################
""" Maps inputs to processing values and adds debugs for task default template """
note_params = []
""" Controls how many notes are created for a given number of indicators: with
note_limit = 5, more than 5 indicators produce 1 combined note instead of 5 separate notes.
Intended for a maximum of 20 indicators (ip, domain, url, filehash). """
note_limit = 5
# Debug input data
#phantom.debug("Task Title:")
#phantom.debug(indicator_analysis__analysis)
title_data = indicator_analysis__analysis
#phantom.debug("Reputation Note:")
#phantom.debug(formatted_data_1)
rep_data = formatted_data_1
#phantom.debug("Hunt Note:")
#phantom.debug(formatted_data_2)
hunt_data = formatted_data_2
#phantom.debug("Intelligence Note:")
#phantom.debug(formatted_data_3)
intel_data = formatted_data_3
#phantom.debug("Indicator Processed")
#phantom.debug(filtered_artifacts_data_1)
indicators = custom_function_results_data_1
# Organize Indicators by value with correct data for note insertion
for indicator in indicators:
for title in title_data:
if indicator[0] in title['indicator']:
indicator.append(title['title'])
for rep in rep_data:
if indicator[0] in rep:
indicator.append(rep)
for hunt in hunt_data:
if indicator[0] in hunt:
indicator.append(hunt)
for intel in intel_data:
if indicator[0] in intel:
indicator.append(intel)
phantom.debug("Reorganized note data by indicator.")
#phantom.debug(indicators)
# Get workbook phase id
phantom.debug('Getting current phase')
success, message, phase_id, phase_name = phantom.get_phase()
phantom.debug(
'phantom.get_phase results: success: {}, message: {}, phase_id: {}, phase_name: {}'.format(success, message, phase_id, phase_name)
)
# Task data for adding task notes
task_data = {}
# Get the tasks for start of the workbook
for task in phantom.get_tasks(container=container):
## gets the current phase and 1st task
if phase_id == task['data']['phase'] and task['data']['name'] == input_parameter_0:
task_data.update(task['data'])
phantom.debug('phantom.get_tasks found the task: task_id: {}, task_name: {}'.format(task_data['id'],task_data['name']))
""" Create one note per indicator, or a single combined note (customer defined).
Raise note_limit above 5 if you want more individual notes created;
the maximum number of notes created is bounded by the number of indicators present."""
title = "Automated URL Indicator Report"
if len(indicators) <= note_limit:
# Create loop for creating multiple notes under the same task
phantom.debug("Found {} indicators.".format(len(indicators)))
phantom.debug("Creating Multiple indicator notes.")
for indicator in indicators:
title = indicator[1].encode('UTF-8')
# Define Note content build here
note_content = "{}\n {}\n {}".format(indicator[4].encode('UTF-8'),indicator[3].encode('UTF-8'),indicator[2].encode('UTF-8'))
#phantom.debug("Multi-Note content: \n {}".format(note_content))
# Build note parameters
note_params.append({
"note_type": "task",
"task_id": task_data['id'],
"container_id": container['id'],
"title": title,
"content": note_content,
"note_format": "markdown",
"phase_id": phase_id
})
else:
phantom.debug("Found {} indicators.".format(len(indicators)))
phantom.debug("Creating Single indicator notes.")
note_content = ""
for indicator in indicators:
# Define Note content build here
note_content += "## {}\n {}\n {}\n {}\n".format(indicator[0].encode('UTF-8'),indicator[3].encode('UTF-8'),indicator[2].encode('UTF-8'),indicator[1].encode('UTF-8'))
#phantom.debug("Single Note content: \n {}".format(note_content))
# Build note parameters
note_params.append({
"note_type": "task",
"task_id": task_data['id'],
"container_id": container['id'],
"title": title,
"content": note_content,
"note_format": "markdown",
"phase_id": phase_id
})
# Save parameters for REST calls to update
#phantom.debug("Debug Parameters:")
generate_task_notes__note_params = note_params
################################################################################
## Custom Code End
################################################################################
phantom.save_run_data(key='generate_task_notes:note_params', value=json.dumps(generate_task_notes__note_params))
create_task_notes(container=container)
return
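The note_limit branching in generate_task_notes can be exercised outside the Phantom runtime. The sketch below (build_notes is a hypothetical helper, not part of the phantom module) shows the same decision: up to note_limit indicators yield one note each, and anything beyond that collapses into a single combined note.

```python
# Standalone sketch of the note_limit decision used in generate_task_notes.
def build_notes(indicators, note_limit=5):
    if len(indicators) <= note_limit:
        # One note per indicator.
        return [{"title": ind, "content": f"details for {ind}"}
                for ind in indicators]
    # Too many indicators: collapse everything into one combined note.
    combined = "".join(f"## {ind}\n" for ind in indicators)
    return [{"title": "Automated URL Indicator Report", "content": combined}]
```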
def url_reputation_format(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_reputation_format() called')
template = """%%
### VirusTotal Summary of `{0}`: *{1}, {2}*
*VTI link: {3}*
Scan Date: *{4}*
** Scan Results **
| Scanner | Detected | Result |
| ---- | ---- | ---- |
| Kaspersky | {5} | {6} |
| BitDefender | {7} | {8} |
| Google Safe Browsing: | {9} | {10} |
| AlienVault | {11} | {12} |
| Sophos | {13} | {14} |
| Forcepoint ThreatSeeker: | {15} | {16} |
| ESET | {17} | {18} |
| MalwareDomainList | {19} | {20} |
| Fortinet | {21} | {22} |
---
%%"""
# parameter list for template variable replacement
parameters = [
"url_reputation:action_result.parameter.url",
"url_reputation:action_result.message",
"url_reputation:action_result.data.*.verbose_msg",
"url_reputation:action_result.data.*.permalink",
"url_reputation:action_result.data.*.scan_date",
"url_reputation:action_result.data.*.scans.Kaspersky.detected",
"url_reputation:action_result.data.*.scans.Kaspersky.result",
"url_reputation:action_result.data.*.scans.BitDefender.detected",
"url_reputation:action_result.data.*.scans.BitDefender.result",
"url_reputation:action_result.data.*.scans.Google Safebrowsing.detected",
"url_reputation:action_result.data.*.scans.Google Safebrowsing.result",
"url_reputation:action_result.data.*.scans.AlienVault.detected",
"url_reputation:action_result.data.*.scans.AlienVault.result",
"url_reputation:action_result.data.*.scans.Sophos.detected",
"url_reputation:action_result.data.*.scans.Sophos.result",
"url_reputation:action_result.data.*.scans.Forcepoint ThreatSeeker.detected",
"url_reputation:action_result.data.*.scans.Forcepoint ThreatSeeker.result",
"url_reputation:action_result.data.*.scans.ESET.detected",
"url_reputation:action_result.data.*.scans.ESET.result",
"url_reputation:action_result.data.*.scans.MalwareDomainList.detected",
"url_reputation:action_result.data.*.scans.MalwareDomainList.result",
"url_reputation:action_result.data.*.scans.Fortinet.detected",
"url_reputation:action_result.data.*.scans.Fortinet.result",
]
phantom.format(container=container, template=template, parameters=parameters, name="url_reputation_format")
join_indicator_analysis(container=container)
return
def url_intel_format(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_intel_format() called')
template = """%%
### Recorded Future Summary of `{0}`: *{1}*
***Critical Label: {2}, Last seen: {3}***
*RF link (Intel Card): {4}*
First Seen: {5}
***Threat List:***
- Threat list: {6}
- Threat list: {7}
***Rules Found***
1. **{8}** - Evidence: {9}
1. **{10}** - Evidence: {11}
1. **{12}** - Evidence: {13}
1. **{14}** - Evidence: {15}
1. **{16}** - Evidence: {17}
1. **{18}** - Evidence: {19}
---
%%"""
# parameter list for template variable replacement
parameters = [
"url_intelligence:action_result.parameter.url",
"url_intelligence:action_result.summary.riskSummary",
"url_intelligence:action_result.summary.criticalityLabel",
"url_intelligence:action_result.summary.lastSeen",
"url_intelligence:action_result.data.*.intelCard",
"url_intelligence:action_result.data.*.timestamps.firstSeen",
"url_intelligence:action_result.data.*.threatLists.0.description",
"url_intelligence:action_result.data.*.threatLists.1.description",
"url_intelligence:action_result.data.*.risk.evidenceDetails.0.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.0.evidenceString",
"url_intelligence:action_result.data.*.risk.evidenceDetails.1.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.1.evidenceString",
"url_intelligence:action_result.data.*.risk.evidenceDetails.2.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.2.evidenceString",
"url_intelligence:action_result.data.*.risk.evidenceDetails.3.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.3.evidenceString",
"url_intelligence:action_result.data.*.risk.evidenceDetails.4.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.4.evidenceString",
"url_intelligence:action_result.data.*.risk.evidenceDetails.5.rule",
"url_intelligence:action_result.data.*.risk.evidenceDetails.5.evidenceString",
]
phantom.format(container=container, template=template, parameters=parameters, name="url_intel_format")
join_indicator_analysis(container=container)
return
def missing_data_comment(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('missing_data_comment() called')
phantom.comment(container=container, comment="Missing indicator to execute Indicator analysis - URL playbook. Check logic and playbook parameters")
return
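The membership test that url_filter expresses through phantom.condition can be restated in plain Python. This is only a sketch of the filtering logic (filter_urls is a hypothetical name, not a Phantom API): keep artifacts whose URL is present and not on the bogon list.

```python
# Plain-Python sketch of the url_filter "not in custom_list:bogon_list" logic.
def filter_urls(artifacts, bogon_list):
    bogons = set(bogon_list)
    # Keep artifacts that carry a URL and whose URL is not a known bogon.
    return [a for a in artifacts if a.get("url") and a["url"] not in bogons]
```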
def hunt_url(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('hunt_url() called')
#phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))
# collect data for 'hunt_url' call
custom_function_results_data_1 = phantom.collect2(container=container, datapath=['merge_urls:custom_function_result.data.*.item'], action_results=results)
parameters = []
# build parameters list for 'hunt_url' call
for custom_function_results_item_1 in custom_function_results_data_1:
if custom_function_results_item_1[0]:
parameters.append({
'url': custom_function_results_item_1[0],
})
phantom.act(action="hunt url", parameters=parameters, assets=['hybrid-analysis-personal'], callback=url_hunt_format, name="hunt_url")
return
def url_hunt_format(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('url_hunt_format() called')
template = """%%
### Falcon Sandbox Summary of `{0}`: *{1}, {2} - {3}*
*Hybrid Analysis Link: https://hybrid-analysis.com/sample/{10}*
| Data| Result |
| --- | --- |
| VX family | {4} |
| Scan date | {5} |
| Name(s) | {6} |
| Environment | {7} |
| Type | {8} |
| sha1 | {9} |
| sha256 | {10} |
| Compromised Hosts | {11} |
| Domains | {12} |
---
%%"""
# parameter list for template variable replacement
parameters = [
"hunt_url:action_result.parameter.url",
"hunt_url:action_result.message",
"hunt_url:action_result.data.*.verdict",
"hunt_url:action_result.data.*.threat_score_verbose",
"hunt_url:action_result.data.*.vx_family",
"hunt_url:action_result.data.*.analysis_start_time",
"hunt_url:action_result.data.*.submit_name",
"hunt_url:action_result.data.*.environment",
"hunt_url:action_result.data.*.type",
"hunt_url:action_result.data.*.sha1",
"hunt_url:action_result.data.*.sha256",
"hunt_url:action_result.data.*.compromised_hosts",
"hunt_url:action_result.data.*.domains",
]
phantom.format(container=container, template=template, parameters=parameters, name="url_hunt_format")
join_indicator_analysis(container=container)
return
"""
See the doc type in the source code for calculation parameters for this indicator
"""
def indicator_analysis(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
phantom.debug('indicator_analysis() called')
results_data_1 = phantom.collect2(container=container, datapath=['url_reputation:action_result.parameter.url', 'url_reputation:action_result.data.*.positives'], action_results=results)
results_data_2 = phantom.collect2(container=container, datapath=['hunt_url:action_result.parameter.url', 'hunt_url:action_result.data.*.threat_score', 'hunt_url:action_result.data.*.verdict'], action_results=results)
results_data_3 = phantom.collect2(container=container, datapath=['url_intelligence:action_result.parameter.url', 'url_intelligence:action_result.data.*.risk.score', 'url_intelligence:action_result.data.*.risk.criticalityLabel'], action_results=results)
custom_function_results_data_1 = phantom.collect2(container=container, datapath=['merge_urls:custom_function_result.data.*.item'], action_results=results)
results_item_1_0 = [item[0] for item in results_data_1]
results_item_1_1 = [item[1] for item in results_data_1]
results_item_2_0 = [item[0] for item in results_data_2]
results_item_2_1 = | |
from time import sleep, localtime
from biblio.interface.interface import *
from biblio.extras.extras import *
from os import getcwd, get_terminal_size, listdir, system
from biblio.bib import *
# Terminal size
tamterm = get_terminal_size()
tamterm = tamterm[0]
if tamterm < 50:
barra = 2
elif tamterm < 75:
barra = 4
elif tamterm < 100:
barra = 6
elif tamterm < 125:
barra = 8
elif tamterm < 150:
barra = 10
def abrir(path):
"""
Tries to open the file at the given path. If the file is not found,
creates a file with that name at the specified path.
:param path: Location where the file is or will be created.
"""
try:
a = open(path, mode='r')
return False
except FileNotFoundError:
a = open(path, mode='w+')
c = 0
while c < (tamterm-3):
clear()
if c < (tamterm-4):
cabecalho('Criando Arquivo...')
else:
cabecalho('Arquivo Criado!')
cheio = "■" * c
vazio = "□" * ((tamterm-4) - c)
print(f'║ {cheio}{vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
c += barra
sleep(0.01)
if c > (tamterm - 9):
clear()
cabecalho('Arquivo Criado!')
cheio = "■" * (tamterm - 4)
print(f'║ {cheio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
input('Enter para Continuar')
finally:
a.close()
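The try/except existence check in abrir can be written without exception handling. A sketch using pathlib (ensure_file is a hypothetical name) that mirrors abrir's contract of returning False when the file already exists, minus the progress-bar animation:

```python
from pathlib import Path

def ensure_file(p):
    # Returns False if the file already existed,
    # True after creating an empty file at that path.
    f = Path(p)
    if f.exists():
        return False
    f.touch()
    return True
```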
def ler(path):
"""
Opens the file at the specified path and returns its contents as a list split by the file's lines.
:param path: Location of the file to be read.
"""
try:
f = open(path, 'tr')
arquivo = f.readlines()
f.close()
abriu = True
except:
abriu = False
if abriu:
return arquivo
else:
print('Não foi possivel ler o arquivo')
sleep(1)
def gravar(path, wra, gravacao):
"""
Opens the file at the specified path in the given mode and writes data to it.
:param path: Location of the file where the data will be written.
:param wra: Mode in which the file is opened: 'r' - read, 'w' - write, 'a' - append.
:param gravacao: Content to be saved to the file.
"""
try:
f = open(path, wra)
abriu = True
except Exception as erro:
abriu = False
print(f'Não foi possivel devido erro: "{erro.__class__}"')
if abriu:
f.write(gravacao)
f.close()
def adicionar(path):
"""
Adds new entries to the table.
:param path: Location of the file where the entry will be added.
"""
try:
while True:
es = str(input('Entrada ou Saida? [E/S]: ')).strip().upper()[0]
if es in 'SE':
break
else:
print('Opção Inválida, Tente novamente!')
continue
while True:
data = leiaInt('Dia: ')
if 0 < data <= 31:
break
else:
print('Dia Inválido, Tente novamente!')
continue
lancamento = str(input('Lançamento: ')).strip()
while True:
try:
valor = str(input('Valor: R$')).replace(',', '.')
if valor[:2] == 'R$' or valor[:2] == 'RS':
valor = valor[2:]
valor = float(valor)
except ValueError:
print('Valor Inválido, Tente novamente!')
continue
else:
break
gravar(path, 'a', f'{es};{data:0>2};{lancamento};{valor}\n')
except:
print('Não foi possivel Adicionar')
else:
print(f'"{lancamento}" adicionado com sucesso')
sleep(1)
def modificar(path, arquivo):
remover(path, arquivo, False)
adicionar(path)
def remover(path, arquivo, rem=True):
"""
Removes an entry from the table.
:param path: Location of the file to be modified.
:param arquivo: List of records that will be modified and written back to the file.
:param rem: When False, skips the confirmation prompt and success message (used by modificar).
"""
if len(arquivo) == 0:
print('Lista Vazia! Não é possivel remover!')
input('Enter para continuar')
return
pos = leiaInt('Posição: ') - 1
if -1 < pos < len(arquivo):
arquivo[pos] = arquivo[pos].split(';')
deletado = arquivo[pos][2]
if rem:
while True:
certeza = str(input(f'Tem Certeza que deseja Remover {deletado}? [S/N]: ')).strip().upper()[0]
if certeza not in 'SN':
print('Escolha Inválida')
sleep(2)
else:
break
if certeza == 'N':
return
del arquivo[pos]
if len(arquivo) == 0:
f = open(path, 'w')
f.write('')
else:
try:
with open(path, 'w') as f:
for i in arquivo:
f.write(f'{i}')
except Exception as erro:
print(f'Falha ao Remover da lista em arquivo: {erro.__class__}')
input('Enter para continuar')
if rem:
print(f'{deletado} foi excluido da lista com sucesso!')
sleep(2)
else:
print(f'"{pos+1}" Não faz parte da lista\nRetornando ao Menu Principal...')
sleep(2)
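Both remover and atualizar rewrite the file by reopening it once per record ('w' for the first, 'a' for the rest). Opening it a single time in write mode is equivalent and simpler; a sketch (rewrite_file is a hypothetical helper name):

```python
# One-pass equivalent of the re-open-per-line rewrite loop:
# open once in 'w' mode and write every remaining record.
def rewrite_file(path, lines):
    with open(path, 'w') as f:
        f.writelines(lines)
```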
def pegadata(arquivo):
return arquivo[1]
def atualizar(path, arquivo):
arquivo.sort(key=pegadata)
if len(arquivo) > 0:
with open(path, 'w') as f:
for i in arquivo:
f.write(f'{i[0]};{i[1]};{i[2]};{i[3]}')
def limpar(path):
try:
gravar(path, 'w', '')
except:
print('Não foi possivel limpar o arquivo!')
def delpasta(path):
# system(f'rmdir /s /q {path}')
print(f'Por favor vá até: \n"{path}" \ne delete o arquivo')
input('Enter para Continuar!')
return
'''
*Function disabled due to the risk of a bug deleting all files in parent directories*
c = 0
while c < (tamterm - 3):
clear()
if c < (tamterm - 4):
cabecalho('Deletando Pasta...')
else:
cabecalho('Pasta Deletada!')
cheio = "■" * ((tamterm - 4) - c)
vazio = "□" * c
print(f'║ {cheio}{vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
c += barra
sleep(0.01)
if c > (tamterm - 9):
clear()
cabecalho('Pasta Deletada!')
vazio = "□" * (tamterm - 4)
print(f'║ {vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
input('Enter para Continuar!')
'''
def delarquivo(path):
print(f'Por favor vá até: \n"{path}" \ne delete o arquivo')
input('Enter para Continuar!')
return
'''
*Function disabled due to the risk of a bug deleting all files in parent directories*
"""
Deletes the file at the specified location.
:param path: Location of the file to be deleted.
"""
import os
os.system(f'del {path}')
c = 0
while c < (tamterm - 3):
clear()
if c < (tamterm - 4):
cabecalho('Deletando Arquivo...')
else:
cabecalho('Arquivo Deletado!')
cheio = "■" * ((tamterm - 4) - c)
vazio = "□" * c
print(f'║ {cheio}{vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
c += barra
sleep(0.01)
if c > (tamterm - 9):
clear()
cabecalho('Arquivo Deletado!')
vazio = "□" * (tamterm - 4)
print(f'║ {vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
input('Enter para Continuar!')
return
'''
def criarpasta(path):
system(f'mkdir "{path}"')
c = 0
while c < (tamterm-3):
clear()
if c < (tamterm-4):
cabecalho('Criando Pasta...')
else:
cabecalho('Pasta Criada!')
cheio = "■" * c
vazio = "□" * ((tamterm-4) - c)
print(f'║ {cheio}{vazio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
c += barra
sleep(0.01)
if c > (tamterm - 11):
clear()
cabecalho('Pasta Criada!')
cheio = "■" * (tamterm - 4)
print(f'║ {cheio} ║', flush=True)
linhas('╚', '╝', '═', tamterm, flush=True)
input('Enter para Continuar')
def lerpasta(pathpasta):
clear()
atualpath = getcwd()
lista = listdir(f'{atualpath}\\{pathpasta}')
linhas('╔', '╗', '═', tamterm)
titulo = f'Conteudo de {pathpasta}'
print(f'║{titulo:^{(tamterm-2)}}║')
linhas('╠', '╣', '═', tamterm)
print(f'║{" [ 0 ] - Voltar a Pasta Principal":{((tamterm//2)-2)}}║{" [ -1] - Voltar ao Menu Principal":{((tamterm//2))}}║')
linhas('╟', '╢', '─', tamterm)
print(f'║{" [ -2] - Criar Arquivo":{((tamterm//2)-2)}}║{" [ -3] - Criar Pasta":{((tamterm//2))}}║')
linhas('╟', '╢', '─', tamterm)
print(f'║{" [ -4] - Deletar Arquivo":{((tamterm//2)-2)}}║{" [ -5] - Deletar Pasta":{((tamterm//2))}}║')
linhas('╠', '╣', '═', tamterm)
if len(lista) == 0:
print(f'║{"":{tamterm - 2}}║')
print(f'║{"Pasta Vazia":^{tamterm - 2}}║')
print(f'║{"":{tamterm - 2}}║')
linhas('╚', '╝', '═', tamterm)
else:
for p, c in enumerate(lista):
print(f'║ {p+1:^3} - {c:<{tamterm - 18}}', end='')
if c[-3:-1] + c[-1] == 'txt':
print(f'{"Arquivo":<8} ║')
else:
print(f'{"Pasta":<8} ║')
linhas('╚', '╝', '═', tamterm)
pasta = leiaInt('Opção: ') - 1
if pasta == -1:
arquivo = lerpasta('pastas')
elif pasta == -2:
arquivo = 'voltar'
elif pasta == -3:
nome = str(input('Nome: '))
if nome[-4:-1] + nome[-1] != '.txt':
nome = nome + '.txt'
nome = atualpath + '/' + pathpasta + '/' + nome
abrir(nome)
arquivo = lerpasta(pathpasta)
elif pasta == -4:
nome = str(input('nome: '))
nome = atualpath + '/' + pathpasta + '/' + nome
criarpasta(nome)
arquivo = lerpasta(pathpasta)
elif pasta == -5:
while True:
deletar = leiaInt('Arquivo a ser Deletado: ') - 1
if -1 < deletar < len(lista):
break
else:
print('Opção inválida, Tente Novamente!')
continue
while True:
confirma = str(input(f'Tem Certeza de que deseja deletar "{lista[deletar]}"? [S/N]: ')).strip().upper()[0]
if confirma in 'SN':
break
else:
print('Opção Inválida!')
continue
if confirma == 'S':
if lista[deletar][-3:-1] + lista[deletar][-1] == 'txt':
try:
delarquivo(f'{atualpath}\{pathpasta}\{lista[deletar]}')
except Exception as erro:
print(f'Não foi possivel deletar este arquivo!, erro:"{erro.__class__}"')
else:
print('Isto é uma pasta, para deletar pastas use outra função!')
input('Pressione Enter Para Continuar.')
arquivo = lerpasta(pathpasta)
elif confirma == 'N':
arquivo = lerpasta(pathpasta)
elif pasta == -6:
while True:
deletar = leiaInt('Arquivo a ser Deletado: ') - 1
if -1 < deletar < len(lista):
break
else:
print('Opção inválida, Tente Novamente!')
continue
while True:
print(f'tem Certeza de que | |
signif / 2) * repl) - 1)
lower = ma_sort[low_idx, :, :, :]
upper = ma_sort[upp_idx, :, :, :]
return lower, upper
def irf_resim(self, orth=False, repl=1000, T=10,
seed=None, burn=100, cum=False):
"""
Simulates impulse response function, returning an array of simulations.
Used for Sims-Zha error band calculation.
Parameters
----------
orth: bool, default False
Compute orthogonalized impulse response error bands
repl: int
number of Monte Carlo replications to perform
T: int, default 10
number of impulse response periods
seed: int
np.random.seed for replications
burn: int
number of initial observations to discard for simulation
cum: bool, default False
produce cumulative irf error bands
Notes
-----
Sims, Christopher A., and Tao Zha. 1999. "Error Bands for Impulse Responses." Econometrica 67: 1113-1155.
Returns
-------
Array of simulated impulse response functions
"""
neqs = self.neqs
# mean = self.mean()
k_ar = self.k_ar
coefs = self.coefs
sigma_u = self.sigma_u
intercept = self.intercept
# df_model = self.df_model
nobs = self.nobs
ma_coll = np.zeros((repl, T+1, neqs, neqs))
def fill_coll(sim):
ret = VAR(sim, exog=self.exog).fit(maxlags=k_ar, trend=self.trend)
ret = ret.orth_ma_rep(maxn=T) if orth else ret.ma_rep(maxn=T)
return ret.cumsum(axis=0) if cum else ret
for i in range(repl):
# discard the first `burn` observations to correct for starting bias
sim = util.varsim(coefs, intercept, sigma_u,
seed=seed, steps=nobs+burn)
sim = sim[burn:]
ma_coll[i, :, :, :] = fill_coll(sim)
return ma_coll
def _omega_forc_cov(self, steps):
# Approximate MSE matrix \Omega(h) as defined in Lütkepohl, p. 97
G = self._zz
Ginv = scipy.linalg.inv(G)
# memoize powers of B for speedup
# TODO: see if can memoize better
# TODO: much lower-hanging fruit in caching `np.trace` and `chain_dot` below.
B = self._bmat_forc_cov()
_B = {}
def bpow(i):
if i not in _B:
_B[i] = np.linalg.matrix_power(B, i)
return _B[i]
phis = self.ma_rep(steps)
sig_u = self.sigma_u
omegas = np.zeros((steps, self.neqs, self.neqs))
for h in range(1, steps + 1):
if h == 1:
omegas[h-1] = self.df_model * self.sigma_u
continue
om = omegas[h-1]
for i in range(h):
for j in range(h):
Bi = bpow(h - 1 - i)
Bj = bpow(h - 1 - j)
mult = np.trace(chain_dot(Bi.T, Ginv, Bj, G))
om += mult * chain_dot(phis[i], sig_u, phis[j].T)
omegas[h-1] = om
return omegas
def _bmat_forc_cov(self):
# B as defined on p. 96 of Lut
upper = np.zeros((self.k_exog, self.df_model))
upper[:, :self.k_exog] = np.eye(self.k_exog)
lower_dim = self.neqs * (self.k_ar - 1)
I = np.eye(lower_dim)
lower = np.column_stack((np.zeros((lower_dim, self.k_exog)), I,
np.zeros((lower_dim, self.neqs))))
return np.vstack((upper, self.params.T, lower))
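The stacking above extends the usual VAR(p) companion form with rows for deterministic terms. A standalone sketch of the plain companion matrix, without those extra rows (the `companion_matrix` helper is illustrative):

```python
import numpy as np

def companion_matrix(coefs):
    """Stack VAR(p) coefficient matrices A_1..A_p into the (k*p, k*p)
    companion form [[A_1 ... A_p], [I, 0]].

    coefs : array of shape (p, k, k)
    """
    p, k, _ = coefs.shape
    top = np.hstack(coefs)  # (k, k*p) block row of coefficient matrices
    if p == 1:
        return top
    eye = np.eye(k * (p - 1))
    bottom = np.hstack([eye, np.zeros((k * (p - 1), k))])
    return np.vstack([top, bottom])

A = companion_matrix(np.array([[[0.5, 0.1], [0.0, 0.4]],
                               [[0.2, 0.0], [0.1, 0.1]]]))
```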
def summary(self):
"""Compute console output summary of estimates
Returns
-------
summary : VARSummary
"""
return VARSummary(self)
def irf(self, periods=10, var_decomp=None, var_order=None):
"""Analyze impulse responses to shocks in system
Parameters
----------
periods : int
var_decomp : ndarray (k x k), lower triangular
Must satisfy Omega = P P', where P is the passed matrix. Defaults to
Cholesky decomposition of Omega
var_order : sequence
Alternate variable order for Cholesky decomposition
Returns
-------
irf : IRAnalysis
"""
if var_order is not None:
raise NotImplementedError('alternate variable order not implemented'
' (yet)')
return IRAnalysis(self, P=var_decomp, periods=periods)
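The requirement on `var_decomp` (Omega = P P' with P lower triangular) is exactly what a Cholesky factor satisfies; a small check with a hypothetical 2x2 residual covariance:

```python
import numpy as np

omega = np.array([[2.0, 0.5],
                  [0.5, 1.0]])      # hypothetical residual covariance Omega
P = np.linalg.cholesky(omega)       # lower-triangular factor
# P reproduces Omega: Omega = P P'
assert np.allclose(P @ P.T, omega)
```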
def fevd(self, periods=10, var_decomp=None):
"""
Compute forecast error variance decomposition ("fevd")
Returns
-------
fevd : FEVD instance
"""
return FEVD(self, P=var_decomp, periods=periods)
def reorder(self, order):
"""Reorder variables for structural specification
"""
if len(order) != len(self.params[0, :]):
raise ValueError("Reorder specification length should match "
"number of endogenous variables")
# This converts order to list of integers if given as strings
if isinstance(order[0], string_types):
order_new = []
for i, nam in enumerate(order):
order_new.append(self.names.index(order[i]))
order = order_new
return _reordered(self, order)
# --------------------------------------------------------------------------
# VAR Diagnostics: Granger-causality, whiteness of residuals, normality, etc
def test_causality(self, caused, causing=None, kind='f', signif=0.05):
"""
Test Granger causality
Parameters
----------
caused : int or str or sequence of int or str
If int or str, test whether the variable specified via this index
(int) or name (str) is Granger-caused by the variable(s) specified
by `causing`.
If a sequence of int or str, test whether the corresponding
variables are Granger-caused by the variable(s) specified
by `causing`.
causing : int or str or sequence of int or str or None, default: None
If int or str, test whether the variable specified via this index
(int) or name (str) is Granger-causing the variable(s) specified by
`caused`.
If a sequence of int or str, test whether the corresponding
variables are Granger-causing the variable(s) specified by
`caused`.
If None, `causing` is assumed to be the complement of `caused`.
kind : {'f', 'wald'}
Perform F-test or Wald (chi-sq) test
signif : float, default 5%
Significance level for computing critical values for test,
defaulting to standard 0.05 level
Notes
-----
Null hypothesis is that there is no Granger-causality for the indicated
variables. The degrees of freedom in the F-test are based on the
number of variables in the VAR system; that is, the degrees of freedom
are equal to the number of equations in the VAR times the degrees of
freedom of a single equation.
Test for Granger-causality as described in chapter 7.6.3 of [1]_.
Test H0: "`causing` does not Granger-cause the remaining variables of
the system" against H1: "`causing` is Granger-causal for the
remaining variables".
Returns
-------
results : CausalityTestResults
References
----------
.. [1] <NAME>. 2005. *New Introduction to Multiple Time Series Analysis*. Springer.
"""
if not (0 < signif < 1):
raise ValueError("signif has to be between 0 and 1")
allowed_types = (string_types, int)
if isinstance(caused, allowed_types):
caused = [caused]
if not all(isinstance(c, allowed_types) for c in caused):
raise TypeError("caused has to be of type string or int (or a "
"sequence of these types).")
caused = [self.names[c] if type(c) == int else c for c in caused]
caused_ind = [util.get_index(self.names, c) for c in caused]
if causing is not None:
if isinstance(causing, allowed_types):
causing = [causing]
if not all(isinstance(c, allowed_types) for c in causing):
raise TypeError("causing has to be of type string or int (or "
"a sequence of these types) or None.")
causing = [self.names[c] if type(c) == int else c for c in causing]
causing_ind = [util.get_index(self.names, c) for c in causing]
if causing is None:
causing_ind = [i for i in range(self.neqs) if i not in caused_ind]
causing = [self.names[c] for c in causing_ind]
k, p = self.neqs, self.k_ar
# number of restrictions
num_restr = len(causing) * len(caused) * p
num_det_terms = self.k_exog
# Make restriction matrix
C = np.zeros((num_restr, k * num_det_terms + k**2 * p), dtype=float)
cols_det = k * num_det_terms
row = 0
for j in range(p):
for ing_ind in causing_ind:
for ed_ind in caused_ind:
C[row, cols_det + ed_ind + k * ing_ind + k**2 * j] = 1
row += 1
# Lutkepohl 3.6.5
Cb = np.dot(C, vec(self.params.T))
middle = scipy.linalg.inv(chain_dot(C, self.cov_params, C.T))
# wald statistic
lam_wald = statistic = chain_dot(Cb, middle, Cb)
if kind.lower() == 'wald':
df = num_restr
dist = stats.chi2(df)
elif kind.lower() == 'f':
statistic = lam_wald / num_restr
df = (num_restr, k * self.df_resid)
dist = stats.f(*df)
else:
raise ValueError('kind %s not recognized' % kind)
pvalue = dist.sf(statistic)
crit_value = dist.ppf(1 - signif)
return CausalityTestResults(causing, caused, statistic,
crit_value, pvalue, df, signif,
test="granger", method=kind)
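The Wald arithmetic above (Lütkepohl 3.6.5) can be sketched in isolation. This toy example tests a single zero restriction C b = 0 with made-up estimates; under the null the statistic is chi-squared with one degree of freedom:

```python
import numpy as np

beta = np.array([0.8, 0.05, -0.3])   # stacked coefficient estimates (toy values)
cov = np.diag([0.01, 0.02, 0.015])   # their covariance matrix (toy values)
C = np.array([[0.0, 1.0, 0.0]])      # restriction: second coefficient = 0

Cb = C @ beta
middle = np.linalg.inv(C @ cov @ C.T)
lam_wald = Cb @ middle @ Cb          # Wald statistic, ~ chi2(num_restr) under H0
```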
def test_inst_causality(self, causing, signif=0.05):
"""
Test for instantaneous causality
Parameters
----------
causing : int or str or sequence of int or str
If int or str, test whether the corresponding variable is causing
the variable(s) specified in caused.
If sequence of int or str, test whether the corresponding variables
are causing the variable(s) specified in caused.
signif : float (0 < signif < 1), default 0.05
Significance level for computing critical values for test,
defaulting to standard 0.05 level
Returns
-------
results : dict
A dict holding the test's results. The dict's keys are:
"statistic" : float
The calculated test statistic.
"crit_value" : float
The critical value of the Chi^2-distribution.
"pvalue" : float
The p-value corresponding to the test statistic.
"df" : float
The degrees of freedom of the Chi^2-distribution.
"conclusion" : str
from test.support import verbose, run_unittest, gc_collect, bigmemtest, _2G, cpython_only, captured_stdout
import io
import locale
import re
import sre_compile
import string
import sys
import traceback
import unittest
import warnings
from re import Scanner
from weakref import proxy
class S(str):
def __getitem__(self, index):
return S(super().__getitem__(index))
class B(bytes):
def __getitem__(self, index):
return B(super().__getitem__(index))
class ReTests(unittest.TestCase):
def assertTypedEqual(self, actual, expect, msg=None):
self.assertEqual(actual, expect, msg)
def recurse(actual, expect):
if isinstance(expect, (tuple, list)):
for x, y in zip(actual, expect):
recurse(x, y)
else:
self.assertIs(type(actual), type(expect), msg)
recurse(actual, expect)
def checkPatternError(self, pattern, errmsg, pos=None):
with self.assertRaises(re.error) as cm:
re.compile(pattern)
with self.subTest(pattern=pattern):
err = cm.exception
self.assertEqual(err.msg, errmsg)
if pos is not None:
self.assertEqual(err.pos, pos)
def checkTemplateError(self, pattern, repl, string, errmsg, pos=None):
with self.assertRaises(re.error) as cm:
re.sub(pattern, repl, string)
with self.subTest(pattern=pattern, repl=repl):
err = cm.exception
self.assertEqual(err.msg, errmsg)
if pos is not None:
self.assertEqual(err.pos, pos)
def test_keep_buffer(self):
b = bytearray(b'x')
it = re.finditer(b'a', b)
with self.assertRaises(BufferError):
b.extend(b'x' * 400)
list(it)
del it
gc_collect()
b.extend(b'x' * 400)
def test_weakref(self):
s = 'QabbbcR'
x = re.compile('ab+c')
y = proxy(x)
self.assertEqual(x.findall('QabbbcR'), y.findall('QabbbcR'))
def test_search_star_plus(self):
self.assertEqual(re.search('x*', 'axx').span(0), (0, 0))
self.assertEqual(re.search('x*', 'axx').span(), (0, 0))
self.assertEqual(re.search('x+', 'axx').span(0), (1, 3))
self.assertEqual(re.search('x+', 'axx').span(), (1, 3))
self.assertIsNone(re.search('x', 'aaa'))
self.assertEqual(re.match('a*', 'xxx').span(0), (0, 0))
self.assertEqual(re.match('a*', 'xxx').span(), (0, 0))
self.assertEqual(re.match('x*', 'xxxa').span(0), (0, 3))
self.assertEqual(re.match('x*', 'xxxa').span(), (0, 3))
self.assertIsNone(re.match('a+', 'xxx'))
def bump_num(self, matchobj):
int_value = int(matchobj.group(0))
return str(int_value + 1)
def test_basic_re_sub(self):
self.assertTypedEqual(re.sub('y', 'a', 'xyz'), 'xaz')
self.assertTypedEqual(re.sub('y', S('a'), S('xyz')), 'xaz')
self.assertTypedEqual(re.sub(b'y', b'a', b'xyz'), b'xaz')
self.assertTypedEqual(re.sub(b'y', B(b'a'), B(b'xyz')), b'xaz')
self.assertTypedEqual(re.sub(b'y', bytearray(b'a'), bytearray(
b'xyz')), b'xaz')
self.assertTypedEqual(re.sub(b'y', memoryview(b'a'), memoryview(
b'xyz')), b'xaz')
for y in ('à', 'а', '𝒜'):
self.assertEqual(re.sub(y, 'a', 'x%sz' % y), 'xaz')
self.assertEqual(re.sub('(?i)b+', 'x', 'bbbb BBBB'), 'x x')
self.assertEqual(re.sub('\\d+', self.bump_num, '08.2 -2 23x99y'),
'9.3 -3 24x100y')
self.assertEqual(re.sub('\\d+', self.bump_num, '08.2 -2 23x99y', 3),
'9.3 -3 23x99y')
self.assertEqual(re.sub('\\d+', self.bump_num, '08.2 -2 23x99y',
count=3), '9.3 -3 23x99y')
self.assertEqual(re.sub('.', lambda m: '\\n', 'x'), '\\n')
self.assertEqual(re.sub('.', '\\n', 'x'), '\n')
s = '\\1\\1'
self.assertEqual(re.sub('(.)', s, 'x'), 'xx')
self.assertEqual(re.sub('(.)', re.escape(s), 'x'), s)
self.assertEqual(re.sub('(.)', lambda m: s, 'x'), s)
self.assertEqual(re.sub('(?P<a>x)', '\\g<a>\\g<a>', 'xx'), 'xxxx')
self.assertEqual(re.sub('(?P<a>x)', '\\g<a>\\g<1>', 'xx'), 'xxxx')
self.assertEqual(re.sub('(?P<unk>x)', '\\g<unk>\\g<unk>', 'xx'), 'xxxx'
)
self.assertEqual(re.sub('(?P<unk>x)', '\\g<1>\\g<1>', 'xx'), 'xxxx')
self.assertEqual(re.sub('a', '\\t\\n\\v\\r\\f\\a\\b', 'a'),
'\t\n\x0b\r\x0c\x07\x08')
self.assertEqual(re.sub('a', '\t\n\x0b\r\x0c\x07\x08', 'a'),
'\t\n\x0b\r\x0c\x07\x08')
self.assertEqual(re.sub('a', '\t\n\x0b\r\x0c\x07\x08', 'a'), chr(9) +
chr(10) + chr(11) + chr(13) + chr(12) + chr(7) + chr(8))
for c in 'cdehijklmopqsuwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ':
with self.subTest(c):
with self.assertWarns(DeprecationWarning):
self.assertEqual(re.sub('a', '\\' + c, 'a'), '\\' + c)
self.assertEqual(re.sub('^\\s*', 'X', 'test'), 'Xtest')
def test_bug_449964(self):
self.assertEqual(re.sub('(?P<unk>x)', '\\g<1>\\g<1>\\b', 'xx'),
'xx\x08xx\x08')
def test_bug_449000(self):
self.assertEqual(re.sub('\\r\\n', '\\n', 'abc\r\ndef\r\n'),
'abc\ndef\n')
self.assertEqual(re.sub('\r\n', '\\n', 'abc\r\ndef\r\n'), 'abc\ndef\n')
self.assertEqual(re.sub('\\r\\n', '\n', 'abc\r\ndef\r\n'), 'abc\ndef\n'
)
self.assertEqual(re.sub('\r\n', '\n', 'abc\r\ndef\r\n'), 'abc\ndef\n')
def test_bug_1661(self):
pattern = re.compile('.')
self.assertRaises(ValueError, re.match, pattern, 'A', re.I)
self.assertRaises(ValueError, re.search, pattern, 'A', re.I)
self.assertRaises(ValueError, re.findall, pattern, 'A', re.I)
self.assertRaises(ValueError, re.compile, pattern, re.I)
def test_bug_3629(self):
re.compile('(?P<quote>)(?(quote))')
def test_sub_template_numeric_escape(self):
self.assertEqual(re.sub('x', '\\0', 'x'), '\x00')
self.assertEqual(re.sub('x', '\\000', 'x'), '\x00')
self.assertEqual(re.sub('x', '\\001', 'x'), '\x01')
self.assertEqual(re.sub('x', '\\008', 'x'), '\x00' + '8')
self.assertEqual(re.sub('x', '\\009', 'x'), '\x00' + '9')
self.assertEqual(re.sub('x', '\\111', 'x'), 'I')
self.assertEqual(re.sub('x', '\\117', 'x'), 'O')
self.assertEqual(re.sub('x', '\\377', 'x'), 'ÿ')
self.assertEqual(re.sub('x', '\\1111', 'x'), 'I1')
self.assertEqual(re.sub('x', '\\1111', 'x'), 'I' + '1')
self.assertEqual(re.sub('x', '\\00', 'x'), '\x00')
self.assertEqual(re.sub('x', '\\07', 'x'), '\x07')
self.assertEqual(re.sub('x', '\\08', 'x'), '\x00' + '8')
self.assertEqual(re.sub('x', '\\09', 'x'), '\x00' + '9')
self.assertEqual(re.sub('x', '\\0a', 'x'), '\x00' + 'a')
self.checkTemplateError('x', '\\400', 'x',
'octal escape value \\400 outside of range 0-0o377', 0)
self.checkTemplateError('x', '\\777', 'x',
'octal escape value \\777 outside of range 0-0o377', 0)
self.checkTemplateError('x', '\\1', 'x', 'invalid group reference 1', 1
)
self.checkTemplateError('x', '\\8', 'x', 'invalid group reference 8', 1
)
self.checkTemplateError('x', '\\9', 'x', 'invalid group reference 9', 1
)
self.checkTemplateError('x', '\\11', 'x',
'invalid group reference 11', 1)
self.checkTemplateError('x', '\\18', 'x',
'invalid group reference 18', 1)
self.checkTemplateError('x', '\\1a', 'x',
'invalid group reference 1', 1)
self.checkTemplateError('x', '\\90', 'x',
'invalid group reference 90', 1)
self.checkTemplateError('x', '\\99', 'x',
'invalid group reference 99', 1)
self.checkTemplateError('x', '\\118', 'x',
'invalid group reference 11', 1)
self.checkTemplateError('x', '\\11a', 'x',
'invalid group reference 11', 1)
self.checkTemplateError('x', '\\181', 'x',
'invalid group reference 18', 1)
self.checkTemplateError('x', '\\800', 'x',
'invalid group reference 80', 1)
self.checkTemplateError('x', '\\8', '', 'invalid group reference 8', 1)
self.assertEqual(re.sub('(((((((((((x)))))))))))', '\\11', 'x'), 'x')
self.assertEqual(re.sub('((((((((((y))))))))))(.)', '\\118', 'xyz'),
'xz8')
self.assertEqual(re.sub('((((((((((y))))))))))(.)', '\\11a', 'xyz'),
'xza')
def test_qualified_re_sub(self):
self.assertEqual(re.sub('a', 'b', 'aaaaa'), 'bbbbb')
self.assertEqual(re.sub('a', 'b', 'aaaaa', 1), 'baaaa')
self.assertEqual(re.sub('a', 'b', 'aaaaa', count=1), 'baaaa')
def test_bug_114660(self):
self.assertEqual(re.sub('(\\S)\\s+(\\S)', '\\1 \\2', 'hello there'
), 'hello there')
def test_bug_462270(self):
self.assertEqual(re.sub('x*', '-', 'abxd'), '-a-b-d-')
self.assertEqual(re.sub('x+', '-', 'abxd'), 'ab-d')
def test_symbolic_groups(self):
re.compile('(?P<a>x)(?P=a)(?(a)y)')
re.compile('(?P<a1>x)(?P=a1)(?(a1)y)')
re.compile('(?P<a1>x)\\1(?(1)y)')
self.checkPatternError('(?P<a>)(?P<a>)',
"redefinition of group name 'a' as group 2; was group 1")
self.checkPatternError('(?P<a>(?P=a))',
'cannot refer to an open group', 10)
self.checkPatternError('(?Pxy)', 'unknown extension ?Px')
self.checkPatternError('(?P<a>)(?P=a',
'missing ), unterminated name', 11)
self.checkPatternError('(?P=', 'missing group name', 4)
self.checkPatternError('(?P=)', 'missing group name', 4)
self.checkPatternError('(?P=1)', "bad character in group name '1'", 4)
self.checkPatternError('(?P=a)', "unknown group name 'a'")
self.checkPatternError('(?P=a1)', "unknown group name 'a1'")
self.checkPatternError('(?P=a.)', "bad character in group name 'a.'", 4
)
self.checkPatternError('(?P<)', 'missing >, unterminated name', 4)
self.checkPatternError('(?P<a', 'missing >, unterminated name', 4)
self.checkPatternError('(?P<', 'missing group name', 4)
self.checkPatternError('(?P<>)', 'missing group name', 4)
self.checkPatternError('(?P<1>)', "bad character in group name '1'", 4)
self.checkPatternError('(?P<a.>)',
"bad character in group name 'a.'", 4)
self.checkPatternError('(?(', 'missing group name', 3)
self.checkPatternError('(?())', 'missing group name', 3)
self.checkPatternError('(?(a))', "unknown group name 'a'", 3)
self.checkPatternError('(?(-1))', "bad character in group name '-1'", 3
)
self.checkPatternError('(?(1a))', "bad character in group name '1a'", 3
)
self.checkPatternError('(?(a.))', "bad character in group name 'a.'", 3
)
re.compile('(?P<µ>x)(?P=µ)(?(µ)y)')
re.compile('(?P<𝔘𝔫𝔦𝔠𝔬𝔡𝔢>x)(?P=𝔘𝔫𝔦𝔠𝔬𝔡𝔢)(?(𝔘𝔫𝔦𝔠𝔬𝔡𝔢)y)')
self.checkPatternError('(?P<©>x)', "bad character in group name '©'", 4
)
pat = '|'.join('x(?P<a%d>%x)y' % (i, i) for i in range(1, 200 + 1))
pat = '(?:%s)(?(200)z|t)' % pat
self.assertEqual(re.match(pat, 'xc8yz').span(), (0, 5))
def test_symbolic_refs(self):
self.checkTemplateError('(?P<a>x)', '\\g<a', 'xx',
'missing >, unterminated name', 3)
self.checkTemplateError('(?P<a>x)', '\\g<', 'xx',
'missing group name', 3)
self.checkTemplateError('(?P<a>x)', '\\g', 'xx', 'missing <', 2)
self.checkTemplateError('(?P<a>x)', '\\g<a a>', 'xx',
"bad character in group name 'a a'", 3)
self.checkTemplateError('(?P<a>x)', '\\g<>', 'xx',
'missing group name', 3)
self.checkTemplateError('(?P<a>x)', '\\g<1a1>', 'xx',
"bad character in group name '1a1'", 3)
self.checkTemplateError('(?P<a>x)', '\\g<2>', 'xx',
'invalid group reference 2', 3)
self.checkTemplateError('(?P<a>x)', '\\2', 'xx',
'invalid group reference 2', 1)
with self.assertRaisesRegex(IndexError, "unknown group name 'ab'"):
re.sub('(?P<a>x)', '\\g<ab>', 'xx')
self.assertEqual(re.sub('(?P<a>x)|(?P<b>y)', '\\g<b>', 'xx'), '')
self.assertEqual(re.sub('(?P<a>x)|(?P<b>y)', '\\2', 'xx'), '')
self.checkTemplateError('(?P<a>x)', '\\g<-1>', 'xx',
"bad character in group name '-1'", 3)
self.assertEqual(re.sub('(?P<µ>x)', '\\g<µ>', 'xx'), 'xx')
self.assertEqual(re.sub('(?P<𝔘𝔫𝔦𝔠𝔬𝔡𝔢>x)', '\\g<𝔘𝔫𝔦𝔠𝔬𝔡𝔢>', 'xx'), 'xx')
self.checkTemplateError('(?P<a>x)', '\\g<©>', 'xx',
"bad character in group name '©'", 3)
pat = '|'.join('x(?P<a%d>%x)y' % (i, i) for i in range(1, 200 + 1))
self.assertEqual(re.sub(pat, '\\g<200>', 'xc8yzxc8y'), 'c8zc8')
def test_re_subn(self):
self.assertEqual(re.subn('(?i)b+', 'x', 'bbbb BBBB'), ('x x', 2))
self.assertEqual(re.subn('b+', 'x', 'bbbb BBBB'), ('x BBBB', 1))
self.assertEqual(re.subn('b+', 'x', 'xyz'), ('xyz', 0))
self.assertEqual(re.subn('b*', 'x', 'xyz'), ('xxxyxzx', 4))
self.assertEqual(re.subn('b*', 'x', 'xyz', 2), ('xxxyz', 2))
self.assertEqual(re.subn('b*', 'x', 'xyz', count=2), ('xxxyz', 2))
def test_re_split(self):
for string in (':a:b::c', S(':a:b::c')):
self.assertTypedEqual(re.split(':', string), ['', 'a', 'b', '',
'c'])
self.assertTypedEqual(re.split(':+', string), ['', 'a', 'b', 'c'])
self.assertTypedEqual(re.split('(:+)', string), ['', ':', 'a',
':', 'b', '::', 'c'])
for string in (b':a:b::c', B(b':a:b::c'), bytearray(b':a:b::c'),
memoryview(b':a:b::c')):
self.assertTypedEqual(re.split(b':', string), [b'', b'a', b'b',
b'', b'c'])
self.assertTypedEqual(re.split(b':+', string), [b'', b'a', b'b',
b'c'])
self.assertTypedEqual(re.split(b'(:+)', string), [b'', b':',
b'a', b':', b'b', b'::', b'c'])
for a, b, c in ('àßç', 'абв', '𝒜𝒞𝒵'):
string = ':%s:%s::%s' % (a, b, c)
self.assertEqual(re.split(':', string), ['', a, b, '', c])
self.assertEqual(re.split(':+', string), ['', a, b, c])
self.assertEqual(re.split('(:+)', string), ['', ':', a, ':', b,
'::', c])
self.assertEqual(re.split('(?::+)', ':a:b::c'), ['', 'a', 'b', 'c'])
self.assertEqual(re.split('(:)+', ':a:b::c'), ['', ':', 'a', ':',
'b', ':', 'c'])
self.assertEqual(re.split('([b:]+)', ':a:b::c'), ['', ':', 'a',
':b::', 'c'])
self.assertEqual(re.split('(b)|(:+)', ':a:b::c'), ['', None, ':',
'a', None, ':', '', 'b', None, '', None, '::', 'c'])
self.assertEqual(re.split('(?:b)|(?::+)', ':a:b::c'), ['', 'a', '',
'', 'c'])
for sep, expected in [(':*', ['', 'a', 'b', 'c']), ('(?::*)', ['',
'a', 'b', 'c']), ('(:*)', ['', ':', 'a', ':', 'b', '::', 'c']),
('(:)*', ['', ':', 'a', ':', 'b', ':', 'c'])]:
with self.subTest(sep=sep), self.assertWarns(FutureWarning):
self.assertTypedEqual(re.split(sep, ':a:b::c'), expected)
for sep, expected in [('', [':a:b::c']), ('\\b', [':a:b::c']), (
'(?=:)', [':a:b::c']), ('(?<=:)', [':a:b::c'])]:
with self.subTest(sep=sep), self.assertRaises(ValueError):
self.assertTypedEqual(re.split(sep, ':a:b::c'), expected)
def test_qualified_re_split(self):
self.assertEqual(re.split(':', ':a:b::c', 2), ['', 'a', 'b::c'])
self.assertEqual(re.split(':', ':a:b::c', maxsplit=2), ['', 'a',
'b::c'])
self.assertEqual(re.split(':', 'a:b:c:d', maxsplit=2), ['a', 'b',
'c:d'])
self.assertEqual(re.split('(:)', ':a:b::c', maxsplit=2), ['', ':',
'a', ':', 'b::c'])
self.assertEqual(re.split('(:+)', ':a:b::c', maxsplit=2), ['', ':',
'a', ':', 'b::c'])
with self.assertWarns(FutureWarning):
self.assertEqual(re.split('(:*)', ':a:b::c', maxsplit=2), ['',
':', 'a', ':', 'b::c'])
def test_re_findall(self):
self.assertEqual(re.findall(':+', 'abc'), [])
for string in ('a:b::c:::d', S('a:b::c:::d')):
self.assertTypedEqual(re.findall(':+', string), [':', '::', ':::'])
self.assertTypedEqual(re.findall('(:+)', string), [':', '::',
| |
# Classification and Regression Metrics
*<NAME>, May 17th, 2021*
# Importing our libraries
import pandas as pd
import altair as alt
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.dummy import DummyClassifier, DummyRegressor
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.model_selection import cross_validate, train_test_split
from sklearn.svm import SVR, SVC
from sklearn import datasets
import sys
sys.path.append('code/')
from display_tree import display_tree
from plot_classifier import plot_classifier
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
# Preprocessing and pipeline
from sklearn.impute import SimpleImputer
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, StandardScaler, MinMaxScaler
import scipy
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
## House Keeping
- Big lecture today!
- Last class on Wednesday.
- Assignment 3 due on Wednesday.
- [My Twitter](https://twitter.com/HayleyFBoyce)
- Question 3.2 -> most informative negative words
- Project clarification (If you have a "How" business question)
## Lecture Learning Objectives
- Explain why accuracy is not always the best metric in ML.
- Explain components of a confusion matrix.
- Define precision, recall, and f1-score and use them to evaluate different classifiers.
- Identify whether there is class imbalance and whether you need to deal with it.
- Explain `class_weight` and use it to deal with data imbalance.
- Appropriately select a scoring metric given a regression problem.
- Interpret and communicate the meanings of different scoring metrics on regression problems. MSE, RMSE, $R^2$, MAPE.
- Apply different scoring functions with `cross_validate`, `GridSearchCV` and `RandomizedSearchCV`.
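The regression metrics named in the objectives (MSE, RMSE, $R^2$, MAPE) each take only a line or two of NumPy; a sketch on made-up predictions:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)                      # mean squared error
rmse = np.sqrt(mse)                                        # same units as the target
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100   # percent error
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                   # coefficient of determination
```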
## Five Minute Recap/ Lightning Questions
- What are the 2 types of feature selection methods we saw last class?
- What is the name of the function that helps us discover features that potentially contribute to our model in Decision Trees (and other models too)
- In a decision tree, where can we see the "most important" feature of the model in the structure?
- Should we ever question our clients' requests?
### Some lingering questions
- What happens if we have data where there is a lot of one class and very few of another?
- How can we measure our model's success besides using accuracy or $R^2$?
## Introducing Evaluation Metrics
Up until this point, we have been scoring our models the same way every time.
We've been using the percentage of correctly predicted examples for classification problems and the $R^2$ metric for regression problems.
Let's discuss how we need to expand our horizons and why it's important to evaluate our models in other ways.
To help explain why accuracy isn't always the most beneficial option, we are bringing in a new dataset.
You've actually seen this data at the very beginning of this course in lecture 1 but it was just a subset of the entire data.
Please download the data from Kaggle here and put it in the data folder used for the lectures.
cc_df = pd.read_csv('data/creditcard.csv', encoding='latin-1')
train_df, test_df = train_test_split(cc_df, test_size=0.3, random_state=111)
train_df.head()
train_df.shape
We can see this is a large dataset with 199364 examples and 31 features in our training set.
Hence why I can't distribute it - it's too big!
train_df.describe(include="all", percentiles = [])
We see that the columns are all scaled and numerical.
You don't need to worry about this now. The original columns have been transformed already for confidentiality and our benefit so now there are no categorical features.
Let's separate `X` and `y` for train and test splits.
X_train_big, y_train_big = train_df.drop(columns=["Class"]), train_df["Class"]
X_test, y_test = test_df.drop(columns=["Class"]), test_df["Class"]
We are going to be talking about evaluation metrics and it's easier to do so if we use an explicit validation set instead of using cross-validation.
Our data is large enough so it shouldn't be a problem.
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_big, y_train_big, test_size=0.3, random_state=123)
### Baseline
Just like any predictive question, we start our analysis by building a simple `DummyClassifier` model as our baseline.
dummy = DummyClassifier(strategy="most_frequent")
dummy.fit(X_train, y_train)
dummy.score(X_train, y_train)
dummy.score(X_valid, y_valid)
Hang on, what is going on?
99.8% accuracy? This is supposed to be a baseline model! How is it getting such high accuracy?
Should we just deploy this `DummyClassifier` model for fraud detection?
train_df["Class"].value_counts(normalize=True)
If we look at the distribution of fraudulent labels to non-fraudulent labels, we can see there is an imbalance in the classes.
Here the `0` class is a Non fraud transaction, and the `1` class is a Fraud transaction.
We can see here that there are MANY Non fraud transactions and only a tiny handful of Fraud transactions.
So, what would be a good accuracy here? 99.9%? 99.99%?
The "Fraud" class is the class that we want to spot. The class we are interested in.
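Before building a better model, it helps to see why the dummy scored so high: a most-frequent baseline's accuracy is exactly the majority-class proportion. A pure-Python sketch with a toy 0.2% fraud rate:

```python
from collections import Counter

labels = [0] * 998 + [1] * 2  # toy labels: 0 = Non fraud, 1 = Fraud

# A most-frequent dummy predicts the majority class for every example,
# so its accuracy equals the majority-class proportion.
majority, count = Counter(labels).most_common(1)[0]
dummy_accuracy = count / len(labels)
```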
We can make a model better than the dummy classifier now.
pipe = make_pipeline(
(StandardScaler()),
(LogisticRegression(random_state=123))
)
pd.DataFrame(cross_validate(pipe, X_train, y_train, return_train_score=True)).mean()
This seems slightly better than `DummyClassifier`, but the question is: can it really identify fraudulent transactions?
This lecture will cover new tools for measuring that.
## Classification Metrics and tools
### What is "positive" and "negative"?
There are two kinds of binary classification problems:
- Distinguishing between two classes
- Spotting a specific class (fraud transaction, spam, disease)
We saw in logistic regression that the model designates the positive and negative classes alphabetically when classifying observations, but when we designate a positive and negative class ourselves, we need to be a bit more thoughtful.
In the case of spotting problems, the thing that we are interested in spotting is considered "positive".
In our example, we want to spot **fraudulent** transactions and so fraudulent is the "positive" class.
### Confusion Matrix
A **confusion matrix** is a table that visualizes the performance of an algorithm. It shows the possible labels and how many of each label the model predicts correctly and incorrectly.
We can import `plot_confusion_matrix` from `sklearn.metrics`.
from sklearn.metrics import plot_confusion_matrix
pipe.fit(X_train, y_train);
Once we fit on our training portion, we can use the `plot_confusion_matrix` function to see how well our model is doing classifying each target class.
In this case, we are looking at the validation portion only.
This results in a 2 by 2 matrix with the labels `Non fraud` and `Fraud` on each axis.
plot_confusion_matrix(pipe, X_valid, y_valid,
display_labels=["Non fraud", "Fraud"],
values_format="d",
cmap="Greens");
**Looking at the arguments:**
Similar to other `sklearn` functions, we pass the model/pipeline, followed by the feature table and then the target values.
`display_labels` will show more descriptive labels. Without this argument, it would simply show the classes we have in the data (`0`, `1`).
`values_format` will determine how the numbers are displayed. Specifying `d` avoids scientific notation.
`cmap` is the colour argument! The default is `viridis` but other values such as `Blues`, `Purples`, `RdPu` or other colour schemes from [here](https://matplotlib.org/stable/tutorials/colors/colormaps.html) are also possible.
#### Confusion Matrix components
plot_confusion_matrix(pipe, X_valid, y_valid,
display_labels=["Non fraud", "Fraud"],
values_format="d", cmap="Blues");
| X | predict negative | predict positive |
|------|----------|-------|
| negative example | True negative (TN) | False positive (FP)|
| positive example | False negative (FN) | True positive (TP) |
Remember the Fraud is considered "positive" in this case and Non fraud is considered "negative".
The 4 quadrants of the confusion matrix can be explained as follows. These positions will change depending on what values we deem as the positive label.
- **True negative (TN)**: Examples that are negatively labelled that the model correctly predicts. This is in the top left quadrant.
- **False positive (FP)**: Examples that are negatively labelled that the model incorrectly predicts as positive. This is in the top right quadrant.
- **False negative (FN)**: Examples that are positively labelled that the model incorrectly predicts as negative. This is in the bottom left quadrant.
- **True positive (TP)**: Examples that are positively labelled that the model correctly predicted as positive. This is in the bottom right quadrant.
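These four counts can be tallied without `sklearn`; a small sketch that treats Fraud (`1`) as the positive class (the `confusion_counts` helper is illustrative):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Return (tn, fp, fn, tp) counts for binary labels."""
    tn = fp = fn = tp = 0
    for t, p in zip(y_true, y_pred):
        if t == positive:
            tp += p == positive   # positively labelled, predicted positive
            fn += p != positive   # positively labelled, predicted negative
        else:
            fp += p == positive   # negatively labelled, predicted positive
            tn += p != positive   # negatively labelled, predicted negative
    return tn, fp, fn, tp

counts = confusion_counts([0, 0, 1, 1, 0, 1], [0, 1, 1, 0, 0, 1])
```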
If you want something more numeric and simpler you can obtain a NumPy array by importing `confusion_matrix` from the sklearn library. (Before we were importing `plot_confusion_matrix`)
from sklearn.metrics import confusion_matrix
Here we get the predictions of the model first with `.predict()` and compare it with `y_valid` in the function `confusion_matrix()`.
predictions = pipe.predict(X_valid)
confusion_matrix(y_valid, predictions)
### Accuracy is only part of the story...
We have been using `.score` to assess our models, which returns accuracy by default.
And we saw that accuracy can be misleading when we have a class imbalance.
We need other metrics to evaluate how well our models spot the minority class.
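Metrics such as precision, recall, and f1 (named in the learning objectives) follow directly from the confusion-matrix quadrant counts; a sketch with made-up counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (f1) from quadrant counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged, how many were fraud
    recall = tp / (tp + fn) if tp + fn else 0.0      # of frauds, how many were caught
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 80 frauds caught, 20 false alarms, 40 frauds missed
p, r, f = precision_recall_f1(tp=80, fp=20, fn=40)
```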
import pytest
from summit.benchmarks import *
from summit.domain import *
from summit.utils.dataset import DataSet
from summit.utils.multiobjective import pareto_efficient, hypervolume
from summit.strategies import *
import GPy
from fastprogress.fastprogress import progress_bar
import numpy as np
import os
import warnings
import pkg_resources
def test_strategy():
class MockStrategy(Strategy):
def suggest_experiments(self, num_experiments, previous_results):
inputs, outputs = self.transform.transform_inputs_outputs(previous_results)
objectives = [v for v in self.domain.variables if v.is_objective]
assert len(objectives) == 1
assert objectives[0].name == "scalar_objective"
assert outputs["scalar_objective"].iloc[0] == 70.0
return self.transform.un_transform(inputs)
def reset(self):
pass
def test_random():
domain = Domain()
domain += ContinuousVariable(
name="temperature",
description="reaction temperature in celsius",
bounds=[50, 100],
)
domain += ContinuousVariable(
name="flowrate_a", description="flow of reactant a in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="flowrate_b", description="flow of reactant b in mL/min", bounds=[0.1, 0.5]
)
strategy = Random(domain, random_state=np.random.RandomState(3))
results = strategy.suggest_experiments(5)
arr = np.array(
(
[
[77.53989513, 0.45851724, 0.11195048],
[85.40739113, 0.15023412, 0.28273329],
[64.54523695, 0.18289715, 0.35965762],
[75.54138026, 0.12058688, 0.21139491],
[94.64734772, 0.27632394, 0.37050196],
]
)
)
results_arr = results.data_to_numpy().astype(np.float32)
assert np.allclose(results_arr, arr)
solvent_ds = DataSet(
[[5, 81], [-93, 111]],
index=["benzene", "toluene"],
columns=["melting_point", "boiling_point"],
)
domain += CategoricalVariable(
"solvent", "solvent descriptors", descriptors=solvent_ds
)
strategy = Random(domain, random_state=np.random.RandomState(3))
results = strategy.suggest_experiments(5)
return results
def test_lhs():
domain = Domain()
domain += ContinuousVariable(
name="temperature",
description="reaction temperature in celsius",
bounds=[50, 100],
)
domain += ContinuousVariable(
name="flowrate_a", description="flow of reactant a in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="flowrate_b", description="flow of reactant b in mL/min", bounds=[0.1, 0.5]
)
strategy = LHS(domain, random_state=np.random.RandomState(3))
results = strategy.suggest_experiments(5)
arr = np.array(
[
[95.0, 0.46, 0.38],
[65.0, 0.14, 0.14],
[55.0, 0.22, 0.3],
[85.0, 0.3, 0.46],
[75.0, 0.38, 0.22],
]
)
results_arr = results.data_to_numpy().astype(np.float32)
assert np.allclose(results_arr, arr)  # elementwise comparison; isclose(a.all(), b.all()) only compared two booleans
solvent_ds = DataSet(
[[5, 81], [-93, 111]],
index=["benzene", "toluene"],
columns=["melting_point", "boiling_point"],
)
domain += CategoricalVariable(
"solvent", "solvent descriptors", descriptors=solvent_ds
)
strategy = LHS(domain, random_state=np.random.RandomState(3))
results = strategy.suggest_experiments(5)
return results
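To make concrete what the `LHS` strategy is doing, here is a minimal standalone Latin hypercube sketch (a hypothetical helper, not Summit's implementation): each dimension is cut into one equal-width bin per sample, and every bin receives exactly one point, with bin order shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Toy Latin hypercube: one sample per equal-width bin along every
    dimension, with the bin order shuffled independently per dimension."""
    n_dims = len(bounds)
    u = rng.uniform(size=(n_samples, n_dims))            # offset inside each bin
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)], axis=1)
    unit = (perms + u) / n_samples                       # stratified samples on [0, 1)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + unit * (hi - lo)

# Same domain shape as the test above: temperature and two flowrates.
pts = latin_hypercube(5, [(50, 100), (0.1, 0.5), (0.1, 0.5)],
                      np.random.RandomState(3))
```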
def test_doe():
domain = Domain()
domain += ContinuousVariable(
name="temperature",
description="reaction temperature in celsius",
bounds=[50, 100],
)
domain += ContinuousVariable(
name="flowrate_a", description="flow of reactant a in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="flowrate_b", description="flow of reactant b in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="yield_", description="", bounds=[0, 100], is_objective=True, maximize=True
)
domain += ContinuousVariable(
name="de",
description="diastereomeric excess",
bounds=[0, 100],
is_objective=True,
maximize=True,
)
strategy = FullFactorial(domain)
levels = dict(temperature=[50, 100], flowrate_a=[0.1, 0.5], flowrate_b=[0.1, 0.5])
experiments = strategy.suggest_experiments(levels)
return experiments
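The full-factorial design requested above simply enumerates every combination of the supplied levels. A standalone sketch of that enumeration (plain `itertools`, not Summit's `FullFactorial`):

```python
import itertools

# Same levels as in the test: 2 x 2 x 2 = 8 runs.
levels = {"temperature": [50, 100], "flowrate_a": [0.1, 0.5], "flowrate_b": [0.1, 0.5]}
names = list(levels)
grid = [dict(zip(names, combo)) for combo in itertools.product(*levels.values())]
print(len(grid))  # 8
```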
def test_multitosingleobjective_transform():
class MockStrategy(Strategy):
def suggest_experiments(self, num_experiments, previous_results):
inputs, outputs = self.transform.transform_inputs_outputs(previous_results)
objectives = [v for v in self.domain.variables if v.is_objective]
assert len(objectives) == 1
assert objectives[0].name == "scalar_objective"
assert outputs["scalar_objective"].iloc[0] == 70.0
return self.transform.un_transform(inputs)
def reset(self):
pass
domain = Domain()
domain += ContinuousVariable(
name="temperature",
description="reaction temperature in celsius",
bounds=[50, 100],
)
domain += ContinuousVariable(
name="flowrate_a", description="flow of reactant a in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="flowrate_b", description="flow of reactant b in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="yield_", description="", bounds=[0, 100], is_objective=True, maximize=True
)
domain += ContinuousVariable(
name="de",
description="diastereomeric excess",
bounds=[0, 100],
is_objective=True,
maximize=True,
)
columns = [v.name for v in domain.variables]
values = {
("temperature", "DATA"): 60,
("flowrate_a", "DATA"): 0.5,
("flowrate_b", "DATA"): 0.5,
("yield_", "DATA"): 50,
("de", "DATA"): 90,
}
previous_results = DataSet([values], columns=columns)
transform = MultitoSingleObjective(
domain, expression="(yield_+de)/2", maximize=True
)
strategy = MockStrategy(domain, transform=transform)
strategy.suggest_experiments(5, previous_results)
def test_logspaceobjectives_transform():
class MockStrategy(Strategy):
def suggest_experiments(self, num_experiments, previous_results):
inputs, outputs = self.transform.transform_inputs_outputs(previous_results)
objectives = [v for v in self.domain.variables if v.is_objective]
assert len(objectives) == 2
assert np.isclose(outputs["log_yield_"].iloc[0], np.log(50))
assert np.isclose(outputs["log_de"].iloc[0], np.log(90))
return self.transform.un_transform(inputs)
def reset(self):
pass
domain = Domain()
domain += ContinuousVariable(
name="temperature",
description="reaction temperature in celsius",
bounds=[50, 100],
)
domain += ContinuousVariable(
name="flowrate_a", description="flow of reactant a in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="flowrate_b", description="flow of reactant b in mL/min", bounds=[0.1, 0.5]
)
domain += ContinuousVariable(
name="yield_", description="", bounds=[0, 100], is_objective=True, maximize=True
)
domain += ContinuousVariable(
name="de",
description="diastereomeric excess",
bounds=[0, 100],
is_objective=True,
maximize=True,
)
columns = [v.name for v in domain.variables]
values = {
("temperature", "DATA"): [60, 100],
("flowrate_a", "DATA"): [0.5, 0.4],
("flowrate_b", "DATA"): [0.5, 0.4],
("yield_", "DATA"): [50, 60],
("de", "DATA"): [90, 80],
}
previous_results = DataSet(values, columns=columns)
transform = LogSpaceObjectives(domain)
strategy = MockStrategy(domain, transform=transform)
strategy.suggest_experiments(5, previous_results)
@pytest.mark.parametrize("num_experiments", [1, 2, 4])
@pytest.mark.parametrize("maximize", [True, False])
@pytest.mark.parametrize("constraints", [False])
def test_snobfit(num_experiments, maximize, constraints):
hartmann3D = Hartmann3D(maximize=maximize, constraints=constraints)
strategy = SNOBFIT(hartmann3D.domain, probability_p=0.5, dx_dim=1e-5)
initial_exp = None
# Comment out to start without initial data
# initial_exp = pd.DataFrame(data={'x_1': [0.409,0.112,0.17,0.8], 'x_2': [0.424,0.33,0.252,0.1],
# 'x_3': [0.13,0.3,0.255,0.01]}) # initial experimental points
# initial_exp = DataSet.from_df(initial_exp)
# initial_exp = hartmann3D.run_experiments(initial_exp) # initial results
# run the SNOBFIT loop for at most <num_iter> iterations with <num_experiments> experiments each;
# stop the loop early if <max_stop> consecutive iterations have not produced an improvement
num_iter = 400 // num_experiments
max_stop = 50 // num_experiments
nstop = 0
fbestold = float("inf")
# Initial experiments
if initial_exp is not None:
next_experiments = initial_exp
else:
next_experiments = None
param = None
for i in range(num_iter):
# Call of SNOBFIT
next_experiments = strategy.suggest_experiments(
num_experiments, prev_res=next_experiments
)
# This is the part where experiments take place
next_experiments = hartmann3D.run_experiments(next_experiments)
fbest = strategy.fbest * -1.0 if maximize else strategy.fbest
xbest = strategy.xbest
if fbest < fbestold:
fbestold = fbest
nstop = 0
else:
nstop += 1
if nstop >= max_stop:
print("No improvement in last " + str(max_stop) + " iterations.")
break
xbest = np.around(xbest, decimals=3)
fbest = np.around(fbest, decimals=3)
# Extrema of test function without constraint: glob_min = -3.86 at (0.114,0.556,0.853)
assert fbest <= -3.85 and fbest >= -3.87
# Test saving and loading
strategy.save("snobfit_test.json")
strategy_2 = SNOBFIT.load("snobfit_test.json")
for a, b in zip(strategy.prev_param[0], strategy_2.prev_param[0]):
if type(a) == list:
assert a == b  # compare list contents; all(a) == all(b) only compared truthiness
elif type(a) == np.ndarray:
assert np.array_equal(a, b)  # elementwise; a.all() == b.all() only compared two booleans
elif np.isnan(a):
assert np.isnan(b)
else:
assert a == b
assert np.array_equal(np.asarray(strategy.prev_param[1][0]), np.asarray(strategy_2.prev_param[1][0]))
os.remove("snobfit_test.json")
print("Optimal setting: " + str(xbest) + " with outcome: " + str(fbest))
@pytest.mark.parametrize("x_start", [[], [0, 0], [4, 6], [1, 2], [-2, 5]])
@pytest.mark.parametrize("maximize", [True, False])
@pytest.mark.parametrize("constraint", [True, False])
def test_nm2D(x_start, maximize, constraint, plot=False):
himmelblau = Himmelblau(maximize=maximize, constraints=constraint)
strategy = NelderMead(
himmelblau.domain, x_start=x_start, random_start=True, adaptive=False
) # Will only random start if x_start is []
initial_exp = None
# Uncomment to create test case which results in reduction dimension and dimension recovery
# initial_exp = pd.DataFrame(data={'x_1': [4.0,4.0,2.0], 'x_2': [2.0,3.0,-6.0]})
# initial_exp = DataSet.from_df(initial_exp)
# initial_exp = himmelblau.run_experiments(initial_exp) # initial results
# run the Nelder-Mead loop for at most <num_iter> iterations
num_iter = 100 # maximum number of iterations
max_stop = 20 # allowed number of consecutive iterations w/o improvement
nstop = 0
fbestold = float("inf")
polygons_points = []
# Initial experiments
if initial_exp is not None:
polygons_points.append(
np.asarray(
[
(
initial_exp.data_to_numpy()[i][:2].tolist(),
initial_exp.data_to_numpy()[j][:2],
)
for i in range(len(initial_exp.data_to_numpy()))
for j in range(len(initial_exp.data_to_numpy()))
]
)
)
next_experiments = initial_exp
else:
next_experiments = None
param = None
for i in range(num_iter):
next_experiments = strategy.suggest_experiments(prev_res=next_experiments)
# This is the part where experiments take place
next_experiments = himmelblau.run_experiments(next_experiments)
# save polygon points for plotting
param = strategy.prev_param
polygons_points.append(
np.asarray(
[param[0]["sim"][i].tolist() for i in range(len(param[0]["sim"]))]
)
)
fbest = strategy.fbest * -1.0 if maximize else strategy.fbest
xbest = strategy.xbest
if fbest < fbestold:
fbestold = fbest
nstop = 0
else:
nstop += 1
if nstop >= max_stop:
print("No improvement in last " + str(max_stop) + " iterations.")
break
xbest = np.around(xbest, decimals=3)
fbest = np.around(fbest, decimals=3)
assert fbest <= 0.1
# print("Optimal setting: " + str(xbest) + " with outcome: " + str(fbest))
# Extrema of test function without constraints: four identical local minima f = 0 at x1 = (3.000, 2.000),
# x2 = (-2.810, 3.131), x3 = (-3.779, -3.283), x4 = (3.584, -1.848)
# Test saving and loading
strategy.save("nm_2d.json")
strategy_2 = NelderMead.load("nm_2d.json")
assert strategy._x_start == strategy_2._x_start
assert strategy.random_start == strategy_2.random_start
assert strategy._dx == strategy_2._dx
assert strategy._df == strategy_2._df
assert strategy._adaptive == strategy_2._adaptive
p = strategy.prev_param[0]
p2 = strategy_2.prev_param[0]  # compare against the reloaded strategy, not the original one
for k, v in p.items():
if type(v) not in [list, np.ndarray]:
assert v == p2[k]
elif type(v) == list:
for i, l in enumerate(v):
if type(l) in [np.ndarray, DataSet]:
assert np.array_equal(np.asarray(l), np.asarray(p2[k][i]))
else:
assert l == p2[k][i]
assert np.array_equal(np.asarray(strategy.prev_param[1]), np.asarray(strategy_2.prev_param[1]))
os.remove("nm_2d.json")
# plot
if plot:
fig, ax = himmelblau.plot(polygons=polygons_points)
@pytest.mark.parametrize(
"x_start, maximize, constraint",
[
([0, 0, 0], True, True),
([0,
face material
#
nodePath.setMaterial(materials[matIndex], 1) #Apply the material to this nodePath
nodePath.setTwoSided(materials[matIndex].getTwoside())
nodePath.setPythonTag('material', materials[matIndex])
nodePath.setPythonTag('pickableObjTag', 1)
#
# set polygon face textures
#
texFileMain = None
texFileSphere = None
if (not mat.texture_file) and mat.toon_index < 0:
nodePath.setTransparency(TransparencyAttrib.MDual, matIndex)
if mat.alpha<1:
nodePath.setTransparency(TransparencyAttrib.MAlpha, matIndex)
if mat.texture_file and len(mat.texture_file) > 0:
texName = mat.texture_file.decode('shift_jis', errors='replace')
tex_list = texName.split('*')
if len(tex_list)==1:
tex_list = texName.split('/')
if len(tex_list)==2:
texFileMain = tex_list[0].strip()
texFileSphere = tex_list[1].strip()
else:
ext = os.path.splitext(texName)[1]
if ext.lower() in ['.spa', '.sph']:
texFileMain = None
texFileSphere = texName
else:
texFileMain = texName
texFileSphere = None
if texFileMain:
if not texFileMain in texList:
texList[texFileMain] = loadTexture(os.path.join(modelPath, texFileMain))
if texList[texFileMain]:
log(u'Loaded Texture : %s' % texFileMain, force=True)
# texList[texFileMain].setWrapU(Texture.WM_clamp)
texMain = texList[texFileMain]
if texMain and texMain.hasRamImage():
if mat.edge_flag:
# edge (outline) flag enabled
texMain.setBorderColor(VBase4(mat.diffuse_color.r, mat.diffuse_color.g, mat.diffuse_color.b, 1))
pass
ts_main = TextureStage('%s_%3d_main' % (matName, matIndex))
ts_main.setColor(VBase4(mat.ambient_color.r, mat.ambient_color.g, mat.ambient_color.b, 1))
ts_main.setSort(matIndex)
ts_main.setPriority(matIndex)
if not texFileSphere:
ts_main.setMode(TextureStage.MReplace)
else:
# ts_main.setMode(TextureStage.MModulate)
# ts_main.setMode(TextureStage.MModulateGloss)
ts_main.setMode(TextureStage.MModulateGlow)
if hasAlpha(texMain):
nodePath.setTransparency(TransparencyAttrib.MDual, matIndex)
texImage = texMain.getRamImageAs('RGB')
pixel_LT = texImage.getData()[0:3]
if pixel_LT == b'\xff\xff\xff':  # bytes slices compare directly; indexing bytes yields ints in Python 3, so chr() comparisons never matched
    print('--> Left-Top Pixel is WHITE')
    nodePath.setTransparency(TransparencyAttrib.MAlpha, matIndex)
elif pixel_LT == b'\x00\x00\x00':
    print('--> Left-Top Pixel is BLACK')
    nodePath.setTransparency(TransparencyAttrib.MAlpha, matIndex)
else:
nodePath.setTransparency(TransparencyAttrib.MMultisample, matIndex)
nodePath.setTexture(ts_main, texMain)
nodePath.setTexScale(ts_main, 1, -1, -1)
if texFileSphere:
texMode = TextureStage.MReplace
ext = os.path.splitext(texFileSphere)[1]
if ext.lower() in ['.spa']:
texMode = TextureStage.MAdd
elif ext.lower() in ['.sph']:
# texMode = TextureStage.MGlow
# texMode = TextureStage.MModulateGlow
texMode = TextureStage.MModulate
# texMode = TextureStage.MBlend
# texMode = TextureStage.MBlend
if not texFileSphere in texList:
texList[texFileSphere] = loadTexture(os.path.join(modelPath, texFileSphere))
texSphere = texList[texFileSphere]
if texSphere and texSphere.hasRamImage():
log(u'Loaded Texture : %s' % texFileSphere, force=True)
# texSphere.setWrapU(Texture.WM_clamp)
# texSphere.setWrapV(Texture.WM_clamp)
ts_sphere = TextureStage('%s_%03d_sphere' % (matName, matIndex))
ts_sphere.setMode(texMode)
ts_sphere.setSort(matIndex)
ts_sphere.setPriority(matIndex)
nodePath.setTexGen(ts_sphere, TexGenAttrib.MEyeSphereMap, 2)
nodePath.setTexture(ts_sphere, texSphere, 1)
nodePath.setTexScale(ts_sphere, 1, -1, -1)
if not texFileMain:
if hasAlpha(texSphere):
nodePath.setTransparency(TransparencyAttrib.MDual, matIndex)
if mat.toon_index>=0 and textures[mat.toon_index] and textures[mat.toon_index].hasRamImage():
# texMode = TextureStage.MModulateGlow
# texMode = TextureStage.MModulateGloss
texMode = TextureStage.MModulate
texToon = textures[mat.toon_index]
# texToon.setMagfilter(Texture.FTNearestMipmapNearest)
# texToon.setMinfilter(Texture.FTNearestMipmapNearest)
# texToon.setAnisotropicDegree(30)
# texToon.setWrapU(Texture.WM_clamp)
ts_toon = TextureStage('%s_%03d_toon' % (matName, matIndex))
# ts_toon.setColor(VBase4(1,1,1,.67))
ts_toon.setMode(texMode)
ts_toon.setSort(matIndex)
ts_toon.setPriority(matIndex)
nodePath.setTexGen(ts_toon, TexGenAttrib.MEyeSphereMap, 2)
nodePath.setTexture(ts_toon, texToon, 1)
nodePath.setTexScale(ts_toon, 1, -1, -1)
# print(nodePath.getTransparency())
nodePath.setAntialias(AntialiasAttrib.MAuto)
if nodePath.getTransparency() == TransparencyAttrib.MNone:
nodePath.setTwoSided(True)
if mat.edge_flag:
nodePath.setTwoSided(True)
if mat.alpha < 1:
nodePath.setTwoSided(True)
vIndex += mat.vertex_count
modelBody.addChild(node)
matIndex += 1
log(u'Loaded Node : %s' % matName, force=True)
modelPath = NodePath(model)
# modelPath.setShaderAuto()
return(modelPath)
pass
def loadPmdBone(pmd_model):
def GetParentNode(root, parent_index):
node = None
if parent_index == -1:
node = root
pass
else:
for child in root.getChildren():
node = GetParentNode(child, parent_index)
if node:
break
else:
boneIndex = child.getPythonTag('boneIndex')
if boneIndex == parent_index:
node = child
break
pass
return(node)
pass
#
# Load Bone outline for display
#
data = EggData()
data.read('stages/bone.egg')
# data.read('stages/bone_oct.egg')
# data.read('stages/bone_cone.egg')
dnp = NodePath(loadEggData(data))
dnp.setColor(LVector4f(1,1,0,1))
boneOutline = dnp.node().getChild(0)
min_point = LPoint3f()
max_point = LPoint3f()
dnp.calcTightBounds(min_point, max_point)
bone_size = LPoint3f(max_point.x-min_point.x, max_point.y-min_point.y, max_point.z-min_point.z)
#
# Load Bone data
#
formatArray = GeomVertexArrayFormat()
formatArray.addColumn(InternalName.make(str("vindex")), 1, Geom.NTUint32, Geom.CIndex)
formatArray.addColumn(InternalName.make(str("tindex")), 1, Geom.NTFloat32, Geom.COther)
formatArray.addColumn(InternalName.make(str("pindex")), 1, Geom.NTFloat32, Geom.COther)
format = GeomVertexFormat(GeomVertexFormat.getV3c4())
format.addArray(formatArray)
format = GeomVertexFormat.registerFormat(format)
boneNode = PandaNode('Bones')
boneIndex = 0
for bone in pmd_model.bones:
boneName = bone.name.decode('shift_jis', errors='replace')
log(u'Loading Bone : %s' % boneName, force=True)
#
# load vertices(vertex list)
#
vdata = GeomVertexData(boneName+'_vdata', format, Geom.UHDynamic)
vdata.setNumRows(3)
vertex = GeomVertexWriter(vdata, 'vertex')
color = GeomVertexWriter(vdata, 'color')
vindex = GeomVertexWriter(vdata, 'vindex')
tindex = GeomVertexWriter(vdata, 'tindex')
pindex = GeomVertexWriter(vdata, 'pindex')
node = GeomNode(boneName)
tu = LVector3f(bone.tail.x, bone.tail.y, bone.tail.z)
log(tu.cross(LVector3f(bone.pos.x, bone.pos.y, bone.pos.z)))
if bone.tail_index >= 0:
t = V2V(pmd_model.bones[bone.tail_index].pos)
else:
t = V2V(bone.pos+bone.tail)
vertex.addData3f(t)
color.addData4f(.95, .95, 0, 1) # Yellow
vindex.addData1i(boneIndex)
tindex.addData1i(bone.tail_index)
pindex.addData1i(bone.parent_index)
v = V2V(bone.pos)
vertex.addData3f(v)
color.addData4f(0, .95, 0.95, 1) # Cyan
vindex.addData1i(boneIndex)
tindex.addData1i(bone.tail_index)
pindex.addData1i(bone.parent_index)
geom = Geom(vdata)
prim = GeomLines(Geom.UHDynamic)
prim.addVertex(0)
prim.addVertex(1)
geom.addPrimitive(prim)
node.addGeom(geom)
node.setPythonTag('english_name', bone.english_name)
node.setPythonTag('position', V2V(bone.pos))
node.setPythonTag('parent_index', bone.parent_index)
node.setPythonTag('tail_index', bone.tail_index)
node.setPythonTag('tail_position', V2V(bone.tail))
# if bone.ik:
# iklink = map(lambda ik: {
# 'bone_index':ik.bone_index,
# 'limit_angle':ik.limit_angle,
# 'limit_max':LVector3f(V2V(ik.limit_max)),
# 'limit_min':LVector3f(V2V(ik.limit_min))
# }, bone.ik.link)
# node.setPythonTag('ik.limit_radian', bone.ik.limit_radian)
# node.setPythonTag('ik.loop', bone.ik.loop)
# node.setPythonTag('ik.target_index', bone.ik.target_index)
# node.setPythonTag('ik.link', bone.ik.link)
# else:
# node.setPythonTag('ik', None)
node.setPythonTag('index', bone.index)
node.setPythonTag('boneIndex', boneIndex)
node.setPythonTag('pickableObjTag', 1)
vd = vdist(v, t)
scale = vd / bone_size.z
s_x = scale if scale<.25 else .25
s_y = scale if scale<.25 else .25
s_z = scale #if scale<.25 else .25
s = LVector3f(s_x, s_y, s_z)
r = getHprFromTo(v, t)
trans = TransformState.makePosHprScale(v, r, s)
bo = boneOutline.makeCopy()
bo.setName(boneName)
bo.setTransform(trans)
bo.setPythonTag('pickableObjTag', 1)
parentNode = GetParentNode(boneNode, bone.parent_index)
if isinstance(parentNode, PandaNode):
# print('PNode: ', parentNode)
parentNode.addChild(node)
parentNode.addChild(bo)
elif isinstance(parentNode, GeomNode):
# print('GNode: ', parentNode)
parentNode.addGeom(node)
# parentNode.addGeom(bo)
boneIndex += 1
np = NodePath(boneNode)
np.setRenderModeWireframe()
# np.setPythonTag('pickableObjTag', 1)
# ofs = OFileStream('bonelist.txt', 3)
# np.ls(ofs, 2)
np.hide()
return(np)
def loadPmdMorph(pmd_model):
#
# Load Morph data
#
formatArray = GeomVertexArrayFormat()
formatArray.addColumn(InternalName.make(str("vindex")), 1, Geom.NTUint32, Geom.CIndex)
# formatArray.addColumn(InternalName.make(str("v.morph")), 3, Geom.NTFloat32, Geom.CMorphDelta)
formatArray.addColumn(InternalName.make(str("vmorph")), 3, Geom.NTFloat32, Geom.COther)
formatArray.addColumn(InternalName.make(str("transform_index")), 1, Geom.NTUint32, Geom.CIndex)
formatArray.addColumn(InternalName.make(str("transform_weight")), 3, Geom.NTFloat32, Geom.COther)  # float column: written with addData3f below
formatArray.addColumn(InternalName.make(str("emotion.morph.strange")), 1, Geom.NTFloat32, Geom.COther)
format = GeomVertexFormat(GeomVertexFormat.getV3())
format.addArray(formatArray)
format = GeomVertexFormat.registerFormat(format)
morphNode = PandaNode('Morphs')
morphIndex = 0
morphBase = None
for morph in pmd_model.morphs:
morphName = morph.name.decode('shift_jis', errors='replace')
log(u'Loading Morph : %s' % morphName, force=True)
if morphIndex==0 and morphName == 'base':
morphBase = morph
morphIndex += 1
continue
#
# load vertices(vertex list)
#
vdata = GeomVertexData(morphName+'_vdata', format, Geom.UHDynamic)
# vdata.setNumRows(6)
vertex = GeomVertexWriter(vdata, 'vertex')
vindex = GeomVertexWriter(vdata, 'vindex')
# vmorph = GeomVertexWriter(vdata, 'v.morph')
vmorph = GeomVertexWriter(vdata, 'vmorph')
transform_index = GeomVertexWriter(vdata, 'transform_index')
transform_weight = GeomVertexWriter(vdata, 'transform_weight')
column_morph_slider = GeomVertexWriter(vdata, 'emotion.morph.strange')
node = GeomNode(morphName)
morphData = None
morphID = encode(morphName)
morphEggText = []
morphEggText.append('<CoordinateSystem> { Z-up }')
morphEggText.append('<Group> %s_ACTOR {' % morphID)
morphEggText.append(' <DART> { 1 }')
morphEggText.append(' <Group> %s {' % morphID)
morphEggText.append(' <VertexPool> %s {' % morphID)
prim = GeomPoints(Geom.UHDynamic)
vdata.setNumRows(len(morph.pos_list))
for idx in range(len(morph.pos_list)):
i = morphBase.indices[morph.indices[idx]]
v = V2V(pmd_model.vertices[i].pos)
o = V2V(morph.pos_list[idx])
vertex.addData3f(v)
vindex.addData1i(i)
vmorph.addData3f(o)
transform_index.addData1i(i)
transform_weight.addData3f(o)
column_morph_slider.addData1f(1.0)
prim.addVertex(idx)
morphEggText.append(' <Vertex> %d {' % idx)
morphEggText.append(' %.11f %.11f %.11f' % (v.x, v.y, v.z))
morphEggText.append(' <Dxyz> Wedge { %.6f %.6f %.6f }' % (o.x, o.y, o.z))
morphEggText.append(' }')
morphEggText.append(' }')
morphEggText.append(' }')
morphEggText.append('}')
geom = Geom(vdata)
geom.addPrimitive(prim)
node.addGeom(geom)
egg = EggData()
egg.read(StringStream('\n'.join(morphEggText)))
action = loadEggData(egg).getChild(0)
node.addChild(action)
node.setPythonTag('english_name', morph.english_name)
node.setPythonTag('morph_type', 1)
node.setPythonTag('morph_data', morphData)
node.setPythonTag('morph_index', morphIndex)
node.setPythonTag('pickableObjTag', 1)
morphNode.addChild(node)
morphIndex += 1
np = NodePath(morphNode)
np.hide()
return(np)
pass
def loadPmdSlot(pmd_model):
#
# Load Display Slot data
#
slotNode = PandaNode('Slots')
slotIndex = 0
for slot in pmd_model.display_slots:
slotName = slot.name.decode('shift_jis', errors='replace')
log(u'Loading Slot : %s' % slotName, force=True)
node = PandaNode(slotName)
node.setPythonTag('english_name', slot.english_name)
node.setPythonTag('references', slot.references)
node.setPythonTag('special_flag', slot.special_flag)
node.setPythonTag('slotIndex', slotIndex)
node.setPythonTag('pickableObjTag', 1)
slotNode.addChild(node)
slotIndex += 1
np = NodePath(slotNode)
np.hide()
return(np)
def loadPmdRigid(pmd_model):
#
# Load Rigid data
#
rigidNode = PandaNode('Rigid')
rigidIndex = 0
for rigid in pmd_model.rigidbodies:
rigidName = rigid.name.decode('shift_jis', errors='replace')
log(u'Loading RigidBodies : %s' % rigidName, force=True)
node = PandaNode(rigidName)
node.setPythonTag('english_name', rigid.english_name)
node.setPythonTag('bone_index', rigid.bone_index)
node.setPythonTag('collision_group', rigid.collision_group)
node.setPythonTag('no_collision_group', rigid.no_collision_group)
node.setPythonTag('shape_type', rigid.shape_type)
node.setPythonTag('shape_size', V2V(rigid.shape_size))
node.setPythonTag('shape_position', V2V(rigid.shape_position))
node.setPythonTag('shape_rotation', R2DV(rigid.shape_rotation))
node.setPythonTag('param.mass', rigid.param.mass)
node.setPythonTag('param.linear_damping', rigid.param.linear_damping)
node.setPythonTag('param.angular_damping', rigid.param.angular_damping)
node.setPythonTag('param.restitution', rigid.param.restitution)
node.setPythonTag('param.friction', rigid.param.friction)
node.setPythonTag('mode', rigid.mode)
node.setPythonTag('rigidIndex', rigidIndex)
node.setPythonTag('pickableObjTag', 1)
rigidNode.addChild(node)
rigidIndex += 1
np = NodePath(rigidNode)
np.hide()
return(np)
def loadPmdJoint(pmd_model):
#
# Load Joints data
#
jointNode = PandaNode('Joints')
jointIndex = 0
for joint in pmd_model.joints:
jointName = joint.name.decode('shift_jis', errors='replace')
log(u'Loading Joint : %s' % jointName, force=True)
node = PandaNode(jointName)
node.setPythonTag('english_name', joint.english_name)
node.setPythonTag('joint_type', joint.joint_type)
node.setPythonTag('rigidbody_index_a', joint.rigidbody_index_a)
node.setPythonTag('rigidbody_index_b', joint.rigidbody_index_b)
node.setPythonTag('position', V2V(joint.position))
node.setPythonTag('rotation', R2DV(joint.rotation))
node.setPythonTag('translation_limit_min', V2V(joint.translation_limit_min))
node.setPythonTag('translation_limit_max', V2V(joint.translation_limit_max))
node.setPythonTag('rotation_limit_min', R2DV(joint.rotation_limit_min))
node.setPythonTag('rotation_limit_max', R2DV(joint.rotation_limit_max))
node.setPythonTag('spring_constant_translation', V2V(joint.spring_constant_translation))
node.setPythonTag('spring_constant_rotation', R2DV(joint.spring_constant_rotation))
node.setPythonTag('jointIndex', jointIndex)
node.setPythonTag('pickableObjTag', 1)
jointNode.addChild(node)
jointIndex += 1
np = NodePath(jointNode)
np.hide()
return(np)
def displayPmdModelInfo(model):
# print(dir(model))
info = pmdInfo(model)
fn = os.path.splitext(os.path.basename(model.path))
log(os.path.join(CWD, fn[0]+'_info'+'.txt'))
with codecs.open(os.path.join(CWD, fn[0]+'_info'+'.txt'), 'w', encoding='utf8') as f:
f.writelines(os.linesep.join(info))
pass
def testPMD(pmd):
pmdModel = pmdLoad(pmd)
if pmdModel:
print(pmdModel.path)
displayPmdModelInfo(pmdModel)
from direct.showbase.ShowBase import ShowBase
base = ShowBase()
p3dnode = pmd2p3d(pmdModel)
p3dnode.reparentTo(base.render)
base.run()
pass
pass
def loadPmdModel(modelfile):
p3dnode = None
try:
mmdFile = os.path.relpath(modelfile)
except:
mmdFile = modelfile
pass
if os.path.altsep:
mmdFile = mmdFile.replace('\\', os.path.altsep)
ext = os.path.splitext(mmdFile)[1].lower()
if ext in ['.pmd']:
mmdModel = pmdLoad(mmdFile)
if mmdModel:
p3dnode = loadPmdBody(mmdModel)
morphs = loadPmdMorph(mmdModel)
if morphs:
morphs.reparentTo(p3dnode)
bones = loadPmdBone(mmdModel)
if bones:
bones.reparentTo(p3dnode)
# slots = loadPmdSlot(mmdModel)
# if slots:
"""
Copyright (C) 2019 the LSNN team, <NAME>
"""
from distutils.version import LooseVersion
import datetime
from collections import OrderedDict
from collections import namedtuple
import numpy as np
import numpy.random as rd
import tensorflow as tf
from tensorflow.python.framework import function
from tensorflow.python.framework.ops import Tensor
if LooseVersion(tf.__version__) >= LooseVersion("1.11"):
from tensorflow.python.ops.variables import Variable, RefVariable
else:
print("Using tensorflow version older then 1.11 -> skipping RefVariable storing")
from tensorflow.python.ops.variables import Variable
from lsnn.toolbox.rewiring_tools import weight_sampler
from lsnn.toolbox.tensorflow_einsums.einsum_re_written import einsum_bi_ijk_to_bjk
from lsnn.toolbox.tensorflow_utils import tf_roll
from time import time
Cell = tf.contrib.rnn.BasicRNNCell
def map_to_named_tuple(S, f):
state_dict = S._asdict()
new_state_dict = OrderedDict({})
for k, v in state_dict.items():
new_state_dict[k] = f(v)
new_named_tuple = S.__class__(**new_state_dict)
return new_named_tuple
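As a self-contained illustration of the field-by-field mapping pattern `map_to_named_tuple` applies to tensor states (plain Python, no TensorFlow):

```python
from collections import namedtuple

State = namedtuple('State', ('v', 'z'))
s = State(v=1.0, z=2.0)

# Apply a function to every field while preserving the namedtuple class,
# mirroring map_to_named_tuple above.
doubled = State(**{k: 2 * x for k, x in s._asdict().items()})
print(doubled)  # State(v=2.0, z=4.0)
```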
def placeholder_container_for_rnn_state(cell_state_size, dtype, batch_size, name='TupleStateHolder'):
with tf.name_scope(name):
default_dict = cell_state_size._asdict()
placeholder_dict = OrderedDict({})
for k, v in default_dict.items():
if np.shape(v) == ():
v = [v]
shape = np.concatenate([[batch_size], v])
placeholder_dict[k] = tf.placeholder(shape=shape, dtype=dtype, name=k)
placeholder_tuple = cell_state_size.__class__(**placeholder_dict)
return placeholder_tuple
def placeholder_container_from_example(state_example, name='TupleStateHolder'):
with tf.name_scope(name):
default_dict = state_example._asdict()
placeholder_dict = OrderedDict({})
for k, v in default_dict.items():
placeholder_dict[k] = tf.placeholder(shape=v.shape, dtype=v.dtype, name=k)
placeholder_tuple = state_example.__class__(**placeholder_dict)
return placeholder_tuple
def feed_dict_with_placeholder_container(dict_to_update, state_holder, state_value, batch_selection=None):
if state_value is None:
return dict_to_update
assert state_holder.__class__ == state_value.__class__, 'Should have the same class, got {} and {}'.format(
state_holder.__class__, state_value.__class__)
for k, v in state_value._asdict().items():
if batch_selection is None:
dict_to_update.update({state_holder._asdict()[k]: v})
else:
dict_to_update.update({state_holder._asdict()[k]: v[batch_selection]})
return dict_to_update
#################################
# Rewrite the spike function without a hack
#################################
@tf.custom_gradient
def SpikeFunction(v_scaled, dampening_factor):
z_ = tf.greater(v_scaled, 0.)
z_ = tf.cast(z_, dtype=tf.float32)
def grad(dy):
dE_dz = dy
dz_dv_scaled = tf.maximum(1 - tf.abs(v_scaled), 0)
dz_dv_scaled *= dampening_factor
dE_dv_scaled = dE_dz * dz_dv_scaled
return [dE_dv_scaled,
tf.zeros_like(dampening_factor)]
return tf.identity(z_, name="SpikeFunction"), grad
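The `grad` closure above replaces the zero/undefined derivative of the Heaviside step with a dampened triangle centred at the threshold. A NumPy sketch of just that pseudo-derivative shape (an illustration of the formula, not the TF op):

```python
import numpy as np

def surrogate_grad(v_scaled, dampening_factor=0.3):
    """Triangular pseudo-derivative: peaks at dampening_factor when the
    scaled membrane potential sits at the threshold (v_scaled == 0) and
    falls to zero one threshold-width away."""
    return dampening_factor * np.maximum(1.0 - np.abs(v_scaled), 0.0)

g = surrogate_grad(np.array([-2.0, 0.0, 0.5]))
# zero far from threshold, maximal (0.3) at the threshold, tapering in between
```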
def weight_matrix_with_delay_dimension(w, d, n_delay):
"""
Generate the tensor of shape n_in x n_out x n_delay that represents the synaptic weights with the right delays.
:param w: synaptic weight value, float tensor of shape (n_in x n_out)
:param d: delay number, int tensor of shape (n_in x n_out)
:param n_delay: number of possible delays
:return:
"""
with tf.name_scope('WeightDelayer'):
w_d_list = []
for kd in range(n_delay):
mask = tf.equal(d, kd)
w_d = tf.where(condition=mask, x=w, y=tf.zeros_like(w))
w_d_list.append(w_d)
delay_axis = len(d.shape)
WD = tf.stack(w_d_list, axis=delay_axis)
return WD
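The masking-and-stacking in the loop above can be sketched in NumPy as follows (hypothetical helper name; mirrors the construction in the docstring):

```python
import numpy as np

def delayed_weights(w, d, n_delay):
    """Split a (n_in, n_out) weight matrix into n_delay masked copies and
    stack them on a trailing delay axis: entry (i, j) lands in slice d[i, j]."""
    return np.stack([np.where(d == kd, w, 0.0) for kd in range(n_delay)], axis=-1)

w = np.array([[1.0, 2.0], [3.0, 4.0]])
d = np.array([[0, 1], [1, 0]])
WD = delayed_weights(w, d, 2)
# Each weight appears in exactly one delay slice, so summing over the
# delay axis recovers the original matrix.
```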
# PSP on output layer
def exp_convolve(tensor, decay): # tensor shape (trial, time, neuron)
with tf.name_scope('ExpConvolve'):
assert tensor.dtype in [tf.float16, tf.float32, tf.float64]
tensor_time_major = tf.transpose(tensor, perm=[1, 0, 2])
initializer = tf.zeros_like(tensor_time_major[0])
filtered_tensor = tf.scan(lambda a, x: a * decay + (1 - decay) * x, tensor_time_major, initializer=initializer)
filtered_tensor = tf.transpose(filtered_tensor, perm=[1, 0, 2])
return filtered_tensor
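The `tf.scan` above implements the first-order low-pass recurrence `y[t] = decay * y[t-1] + (1 - decay) * x[t]` along the time axis. An equivalent NumPy loop for a `(time, neuron)` array (a standalone sketch, not the TF graph op):

```python
import numpy as np

def exp_filter(x, decay):
    """Exponential (PSP-like) filter over the leading time axis of x,
    matching the scan recurrence used in exp_convolve."""
    y = np.zeros_like(x)
    acc = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        acc = decay * acc + (1.0 - decay) * x[t]
        y[t] = acc
    return y

spikes = np.array([[1.0], [0.0], [0.0], [1.0]])
psp = exp_filter(spikes, decay=0.5)  # decays after each spike, jumps on the next
```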
LIFStateTuple = namedtuple('LIFStateTuple', ('v', 'z', 'i_future_buffer', 'z_buffer'))
def tf_cell_to_savable_dict(cell, sess, supplement={}):
"""
    Useful function to return a python/numpy object from one of the tensorflow cell objects defined here.
    The idea is simply that variables and Tensors given as attributes of the object will be replaced by their numpy values evaluated in the current tensorflow session.
    :param cell: tensorflow cell object
    :param sess: tensorflow session
    :param supplement: optional dict of extra entries to store alongside the cell attributes
:return:
"""
dict_to_save = {}
dict_to_save['cell_type'] = str(cell.__class__)
time_stamp = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
dict_to_save['time_stamp'] = time_stamp
dict_to_save.update(supplement)
tftypes = [Variable, Tensor]
if LooseVersion(tf.__version__) >= LooseVersion("1.11"):
tftypes.append(RefVariable)
for k, v in cell.__dict__.items():
if k == 'self':
pass
elif type(v) in tftypes:
dict_to_save[k] = sess.run(v)
elif type(v) in [bool, int, float, np.int64, np.ndarray]:
dict_to_save[k] = v
else:
print('WARNING: attribute of key {} and value {} has type {}, recoding it as string.'.format(k, v, type(v)))
dict_to_save[k] = str(v)
return dict_to_save
class LIF(Cell):
def __init__(self, n_in, n_rec, tau=20., thr=0.03,
dt=1., n_refractory=0, dtype=tf.float32, n_delay=1, rewiring_connectivity=-1,
in_neuron_sign=None, rec_neuron_sign=None,
dampening_factor=0.3,
injected_noise_current=0.,
V0=1., eprop=False):
"""
Tensorflow cell object that simulates a LIF neuron with an approximation of the spike derivatives.
:param n_in: number of input neurons
:param n_rec: number of recurrent neurons
:param tau: membrane time constant
:param thr: threshold voltage
:param dt: time step of the simulation
:param n_refractory: number of refractory time steps
:param dtype: data type of the cell tensors
:param n_delay: number of synaptic delays; the delay range goes from 1 to n_delay time steps
"""
if np.isscalar(tau): tau = tf.ones(n_rec, dtype=dtype) * np.mean(tau)
if np.isscalar(thr): thr = tf.ones(n_rec, dtype=dtype) * np.mean(thr)
tau = tf.cast(tau, dtype=dtype)
dt = tf.cast(dt, dtype=dtype)
self.dampening_factor = dampening_factor
# Parameters
self.n_delay = n_delay
self.n_refractory = n_refractory
self.dt = dt
self.n_in = n_in
self.n_rec = n_rec
self.data_type = dtype
self._num_units = self.n_rec
self.tau = tf.Variable(tau, dtype=dtype, name="Tau", trainable=False)
self._decay = tf.exp(-dt / tau)
self.thr = tf.Variable(thr, dtype=dtype, name="Threshold", trainable=False)
self.V0 = V0
self.eprop = eprop
self.injected_noise_current = injected_noise_current
self.rewiring_connectivity = rewiring_connectivity
self.in_neuron_sign = in_neuron_sign
self.rec_neuron_sign = rec_neuron_sign
with tf.variable_scope('InputWeights'):
# Input weights
if 0 < rewiring_connectivity < 1:
self.w_in_val, self.w_in_sign, self.w_in_var, _ = weight_sampler(n_in, n_rec, rewiring_connectivity,
neuron_sign=in_neuron_sign)
else:
self.w_in_var = tf.Variable(rd.randn(n_in, n_rec) / np.sqrt(n_in), dtype=dtype, name="InputWeight")
self.w_in_val = self.w_in_var
self.w_in_val = self.V0 * self.w_in_val
self.w_in_delay = tf.Variable(rd.randint(self.n_delay, size=n_in * n_rec).reshape(n_in, n_rec),
dtype=tf.int64, name="InDelays", trainable=False)
self.W_in = weight_matrix_with_delay_dimension(self.w_in_val, self.w_in_delay, self.n_delay)
with tf.variable_scope('RecWeights'):
if 0 < rewiring_connectivity < 1:
self.w_rec_val, self.w_rec_sign, self.w_rec_var, _ = weight_sampler(n_rec, n_rec,
rewiring_connectivity,
neuron_sign=rec_neuron_sign)
else:
if rec_neuron_sign is not None or in_neuron_sign is not None:
raise NotImplementedError('Neuron sign requested but this is only implemented with rewiring')
self.w_rec_var = Variable(rd.randn(n_rec, n_rec) / np.sqrt(n_rec), dtype=dtype,
name='RecurrentWeight')
self.w_rec_val = self.w_rec_var
recurrent_disconnect_mask = np.diag(np.ones(n_rec, dtype=bool))
self.w_rec_val = self.w_rec_val * self.V0
self.w_rec_val = tf.where(recurrent_disconnect_mask, tf.zeros_like(self.w_rec_val),
self.w_rec_val)  # Disconnect autapses
self.w_rec_delay = tf.Variable(rd.randint(self.n_delay, size=n_rec * n_rec).reshape(n_rec, n_rec),
dtype=tf.int64, name="RecDelays", trainable=False)
self.W_rec = weight_matrix_with_delay_dimension(self.w_rec_val, self.w_rec_delay, self.n_delay)
@property
def state_size(self):
return LIFStateTuple(v=self.n_rec,
z=self.n_rec,
i_future_buffer=(self.n_rec, self.n_delay),
z_buffer=(self.n_rec, self.n_refractory))
@property
def output_size(self):
return self.n_rec
def zero_state(self, batch_size, dtype, n_rec=None):
if n_rec is None: n_rec = self.n_rec
v0 = tf.zeros(shape=(batch_size, n_rec), dtype=dtype)
z0 = tf.zeros(shape=(batch_size, n_rec), dtype=dtype)
i_buff0 = tf.zeros(shape=(batch_size, n_rec, self.n_delay), dtype=dtype)
z_buff0 = tf.zeros(shape=(batch_size, n_rec, self.n_refractory), dtype=dtype)
return LIFStateTuple(
v=v0,
z=z0,
i_future_buffer=i_buff0,
z_buffer=z_buff0
)
def __call__(self, inputs, state, scope=None, dtype=tf.float32):
i_future_buffer = state.i_future_buffer + einsum_bi_ijk_to_bjk(inputs, self.W_in) + einsum_bi_ijk_to_bjk(
state.z, self.W_rec)
new_v, new_z = self.LIF_dynamic(
v=state.v,
z=state.z,
z_buffer=state.z_buffer,
i_future_buffer=i_future_buffer)
if self.eprop:
new_z = tf.stop_gradient(new_z)
new_z_buffer = tf_roll(state.z_buffer, new_z, axis=2)
new_i_future_buffer = tf_roll(i_future_buffer, axis=2)
new_state = LIFStateTuple(v=new_v,
z=new_z,
i_future_buffer=new_i_future_buffer,
z_buffer=new_z_buffer)
return new_z, new_state
def LIF_dynamic(self, v, z, z_buffer, i_future_buffer, thr=None, decay=None, n_refractory=None, add_current=0.):
"""
Function that generates the next spike and voltage tensors for a given cell state.
:param v: membrane voltage tensor
:param z: spike tensor from the previous time step
:param z_buffer: buffer of recent spikes, used for the refractory period
:param i_future_buffer: buffer of future input currents, used for synaptic delays
:param thr: overrides the cell threshold if given
:param decay: overrides the cell voltage decay if given
:param n_refractory: overrides the cell refractory period if given
:param add_current: additional input current for this time step
:return: the new voltage and spike tensors
"""
if self.injected_noise_current > 0:
add_current = tf.random_normal(shape=z.shape, stddev=self.injected_noise_current)
with tf.name_scope('LIFdynamic'):
if thr is None: thr = self.thr
if decay is None: decay = self._decay
if n_refractory is None: n_refractory = self.n_refractory
i_t = i_future_buffer[:, :, 0] + add_current
I_reset = z * thr * self.dt
new_v = decay * v + (1 - decay) * i_t - I_reset
# Spike generation
v_scaled = (v - thr) / thr
# new_z = differentiable_spikes(v_scaled=v_scaled)
new_z = SpikeFunction(v_scaled, self.dampening_factor)
if n_refractory > 0:
is_ref = tf.greater(tf.reduce_max(z_buffer[:, :, -n_refractory:], axis=2), 0)
new_z = tf.where(is_ref, tf.zeros_like(new_z), new_z)
new_z = new_z * 1 / self.dt
return new_v, new_z
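A minimal NumPy sketch of the same forward update (a simplified stand-in that ignores delays and refractoriness; the actual `SpikeFunction` keeps this forward pass but additionally defines a dampened surrogate gradient):

```python
import numpy as np

def lif_step_np(v, z, i_t, decay, thr, dt=1.0):
    # Soft reset: neurons that spiked at the previous step lose thr.
    i_reset = z * thr * dt
    new_v = decay * v + (1.0 - decay) * i_t - i_reset
    # Hard threshold on the scaled voltage, as in the forward pass above.
    v_scaled = (v - thr) / thr
    new_z = (v_scaled > 0).astype(float) / dt
    return new_v, new_z

v = np.array([0.0, 0.05])
z = np.zeros(2)
new_v, new_z = lif_step_np(v, z, i_t=np.array([1.0, 0.0]),
                           decay=np.exp(-1.0 / 20.0), thr=0.03)
```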
ALIFStateTuple = namedtuple('ALIFState', (
'z',
'v',
'b',
'i_future_buffer',
'z_buffer'))
class ALIF(LIF):
def __init__(self, n_in, n_rec, tau=20, thr=0.01,
dt=1., n_refractory=0, dtype=tf.float32, n_delay=1,
tau_adaptation=200., beta=1.6,
rewiring_connectivity=-1, dampening_factor=0.3,
in_neuron_sign=None, rec_neuron_sign=None, injected_noise_current=0.,
V0=1., eprop=False):
"""
Tensorflow cell object that simulates an adaptive LIF (ALIF) neuron with an approximation of the spike derivatives.
:param n_in: number of input neurons
:param n_rec: number of recurrent neurons
:param tau: membrane time constant
:param thr: threshold voltage
:param dt: time step of the simulation
:param n_refractory: number of refractory time steps
:param dtype: data type of the cell tensors
:param n_delay: number of synaptic delays; the delay range goes from 1 to n_delay time steps
:param tau_adaptation: adaptation time constant for the threshold voltage
:param beta: amplitude of adaptation
:param rewiring_connectivity: number of non-zero synapses in weight matrices (at initialization)
:param in_neuron_sign: vector of +1, -1 to specify input neuron signs
:param rec_neuron_sign: same as in_neuron_sign, for recurrent neurons
:param injected_noise_current: amplitude of current noise
:param V0: to choose voltage unit, specify the value of V0=1 Volt in the desired unit (example V0=1000 to set voltage in millivolts)
"""
super(ALIF, self).__init__(n_in=n_in, n_rec=n_rec, tau=tau, thr=thr, dt=dt, n_refractory=n_refractory,
dtype=dtype, n_delay=n_delay,
rewiring_connectivity=rewiring_connectivity,
dampening_factor=dampening_factor, in_neuron_sign=in_neuron_sign,
rec_neuron_sign=rec_neuron_sign,
injected_noise_current=injected_noise_current,
V0=V0, eprop=eprop)
if
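ALIF extends LIF with a spike-driven adaptive threshold governed by `tau_adaptation` and `beta`. A sketch of the standard update from the LSNN/e-prop literature (an assumption for illustration, inferred from the constructor parameters, not code extracted from this class):

```python
import numpy as np

def adaptive_threshold_step(b, z, thr, beta, tau_adaptation, dt=1.0):
    # b low-pass filters the spike train z; beta * b raises the
    # effective firing threshold after each spike.
    rho = np.exp(-dt / tau_adaptation)
    new_b = rho * b + (1.0 - rho) * z
    return new_b, thr + beta * new_b

new_b, eff_thr = adaptive_threshold_step(b=0.0, z=1.0, thr=0.01,
                                         beta=1.6, tau_adaptation=200.0)
```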
import copy
import warnings
from ajenti.api import *
from ajenti.ui.element import p, UIElement
from ajenti.util import *
def is_bound_context(el):
"""
:type el: UIElement
:rtype: bool
"""
return ('{binder}context' in el.properties) and el.properties['{binder}context'] is not None
def is_bound(el):
"""
:type el: UIElement
:rtype: bool
"""
if el.typeid.startswith('bind:'):
return True
for prop in el.properties.keys():
if prop == 'bind' or prop.startswith('{bind}') or is_bound_context(el):
if el.properties[prop]:
return True
return False
@public
class Binding (object):
"""
A base class for bindings. Binding is a link between a Python object attribute and Ajenti UI element's property.
:param object: a Python object
:param attribute: attribute name
:param ui: Ajenti :class:`ajenti.ui.UIElement`
"""
def __init__(self, object, attribute, ui):
"""
:type object: object
:type attribute: str
:type ui: UIElement
"""
self.object = object
self.attribute = attribute
self.ui = ui
self.dict_mode = False
if attribute and attribute.startswith('[') and attribute.endswith(']'):
self.dict_mode = True
self.attribute = self.attribute[1:-1]
@classmethod
def applicable(cls, object, attribute):
try:
cls.extract(object, attribute)
return True
except:
return False
@classmethod
def extract(cls, object, attribute, ignore_errors=True):
if attribute.startswith('[') and attribute.endswith(']'):
if ignore_errors:
return object.get(attribute[1:-1], None)
else:
return object[attribute[1:-1]]
else:
return getattr(object, attribute)
def get(self):
"""
:returns: value of the bound attribute
"""
if self.dict_mode:
return self.object.get(self.attribute, None)
else:
return getattr(self.object, self.attribute)
def set(self, value):
"""
Sets value of the bound attribute
"""
try:
if self.dict_mode:
self.object[self.attribute] = value
else:
setattr(self.object, self.attribute, value)
except Exception:
raise Exception('Binder set failed: %s.%s = %s' % (self.object, self.attribute, repr(value)))
def populate(self):
"""
Should update the UI with attribute's value
"""
def unpopulate(self):
"""
Should revert UI to normal state
"""
def update(self):
"""
Should update the attribute with data from the UI
"""
@public
class PropertyBinding (Binding):
"""
A simple binding between UI element's property and Python object's attribute
:param property: UI property name. If ``None``, property is deduced from ``bindtypes``
"""
def __init__(self, obj, attribute, ui, property=None):
"""
:type attribute: str
:type ui: UIElement
:type property: str, None
"""
Binding.__init__(self, obj, attribute, ui)
if property is None:
# find a property with matching bindtypes
v = self.__get_transformed()
for prop in ui.property_definitions.values():
if prop.bindtypes:
# nb: we can't guess the type for None
if type(v) in prop.bindtypes or (v is None) or (object in prop.bindtypes):
self.property = prop.name
break
else:
raise Exception('Cannot bind %s.%s (%s, = %s) to %s' % (repr(obj), attribute, repr(type(v)), repr(v), ui))
else:
self.property = property
self.oneway = ui.bindtransform is not None
def __repr__(self):
return u'[%s.%s <-> %s.%s]' % (self.object, self.attribute, self.ui, self.property)
def __get_transformed(self):
return self.ui.bindtransform(self.get()) if self.ui.bindtransform else self.get()
def populate(self):
self.old_value = self.get()
setattr(self.ui, self.property, self.__get_transformed())
def update(self):
if self.oneway:
return
new_value = getattr(self.ui, self.property)
# avoid unnecessary sets
if new_value != self.old_value:
self.set(new_value)
class DictValueBinding (PropertyBinding):
def get(self):
return self.object.get(self.attribute, None)
def set(self, value):
self.object[self.attribute] = value
def update(self):
if self.oneway:
return
self.set(getattr(self.ui, self.property))
@public
class ListAutoBinding (Binding):
"""
Binds values of a collection to UI element's children consecutively, using :class:`Binder`
"""
def __init__(self, object, attribute, ui):
Binding.__init__(self, object, attribute, ui)
self.binders = {}
self.values = []
def unpopulate(self):
for binder in self.binders.values():
binder.unpopulate()
def populate(self):
if self.attribute:
self.collection = Binding.extract(self.object, self.attribute)
else:
self.collection = self.object
self.values = self.ui.values(self.collection)
self.unpopulate()
self.binders = {}
index = 0
if len(self.values) > len(self.ui.children):
raise Exception('Number of bind:list children is less than collection size')
for value in self.values:
template = self.ui.children[index]
index += 1
binder = Binder(value, template)
binder.populate()
self.binders[value] = binder
self.ui.post_item_bind(self.object, self.collection, value, template)
self.ui.post_bind(self.object, self.collection, self.ui)
return self
def update(self):
for value in self.values:
self.binders[value].update()
self.ui.post_item_update(self.object, self.collection, value, self.binders[value].ui)
@public
class DictAutoBinding (Binding):
"""
Binds values from a dict to UI element's children mapping 'bind' attribute to dict key, using :class:`Binder`
"""
def __init__(self, object, attribute, ui):
Binding.__init__(self, object, attribute, ui)
self.binders = {}
def unpopulate(self):
for binder in self.binders.values():
binder.unpopulate()
def populate(self):
if self.attribute:
self.collection = Binding.extract(self.object, self.attribute)
else:
self.collection = self.object
self.values = self.ui.values(self.collection)
self.unpopulate()
self.binders = {}
bindables = self.ui.nearest(
lambda x: is_bound(x),
exclude=lambda x: (
x != self.ui and is_bound_context(x.parent) and x.parent != self.ui
)
)
for bindable in bindables:
if bindable == self.ui:
continue
for prop in bindable.properties:
if not bindable.properties[prop]:
continue
if prop.startswith('{bind}'):
binder = DictValueBinding(self.values, bindable.properties[prop], bindable, prop.split('}')[1])
elif prop == 'bind':
binder = DictValueBinding(self.values, bindable.bind, bindable)
else:
continue
key = bindable.properties[prop]
binder.populate()
self.binders[key] = binder
self.ui.post_bind(self.object, self.collection, self.ui)
return self
def update(self):
for key in self.binders:
self.binders[key].update()
self.ui.post_item_update(self.object, self.collection, key, self.binders[key].ui)
def _element_in_child_binder(root, e):
"""
detect if the element is trapped inside a nested bind: tag
relative to e
:type root: UIElement
:type e: UIElement
:rtype: bool
"""
return any(x.typeid.startswith('bind:') for x in root.path_to(e))
def _element_in_child_template(root, e):
"""
detect if the element is trapped inside a nested bind:template tag
relative to e
:type root: UIElement
:type e: UIElement
:rtype: bool
"""
return any(x.typeid.startswith('bind:template') for x in root.path_to(e))
@public
class CollectionAutoBinding (Binding):
"""
Binds values of a collection to UI element's children using a template.
The expected UI layout::
<xml xmlns:bind="bind">
<bind:collection id="<binding to this>">
<container-element bind="__items">
<!-- instantiated templates will appear here -->
</container-element>
<bind:template>
<!-- a template for one collection item
it will be bound to item using ajenti.ui.binder.Binder -->
<label bind="some_property" />
<button id="__delete" /> <!-- a delete button may appear in the template -->
</bind:template>
<button id="__add" /> <!-- an add button may appear inside collection tag -->
</bind:collection>
</xml>
"""
def __init__(self, object, attribute, ui):
Binding.__init__(self, object, attribute, ui)
self.template = ui.find_type('bind:template')
if self.template:
if self.template.children:
self.template = self.template.children[0]
self.template_parent = self.template.parent
self.template.visible = False
self.items_ui_element = self.ui.nearest(lambda x: x.bind == '__items')[0] or self.ui
self.old_items = copy.copy(self.items_ui_element.children)
self.item_ui = []
self.binders = []
self.values = []
self.last_template_hash = None
def unpopulate(self):
if self.template:
self.template_parent.append(self.template)
self.items_ui_element.empty()
# restore original container content
self.items_ui_element.children = copy.copy(self.old_items)
return self
def get_template(self, item, ui):
# override for custom item template creation
return self.template.clone()
def populate(self):
if self.template:
self.template_parent.remove(self.template)
if self.attribute:
self.collection = self.get()
else:
self.collection = self.object
self.values = self.ui.values(self.collection)
if self.ui.sorting:
self.values = sorted(self.values, key=self.ui.sorting)
self.unpopulate()
# Do it before DOM becomes huge
self.items_ui_element.on('add', self.on_add)
try:
add_button = self.ui.nearest(lambda x: x.bind == '__add')[0]
if not _element_in_child_binder(self.ui, add_button):
add_button.on('click', self.on_add)
except IndexError:
pass
if self.ui.pagesize:
try:
self.paging = None
paging = self.ui.nearest(lambda x: x.bind == '__paging')[0]
if not _element_in_child_binder(self.ui, paging):
self.paging = paging
paging.on('switch', self.set_page)
paging.length = int((len(self.values) - 1) / self.ui.pagesize) + 1
except IndexError:
pass
self.item_ui = {}
self.binders = {}
for index, value in enumerate(self.values):
# apply the filter property
if not self.ui.filter(value):
continue
template = self.get_template(value, self.ui)
template.visible = True
self.items_ui_element.append(template)
self.item_ui[index] = template
binder = Binder(value, template)
binder.populate()
self.binders[index] = binder
try:
del_button = template.nearest(lambda x: x.bind == '__delete')[0]
if not _element_in_child_binder(template, del_button):
del_button.on('click', self.on_delete, value)
except IndexError:
pass
self.ui.post_item_bind(self.object, self.collection, value, template)
self.set_page(0)
self.ui.post_bind(self.object, self.collection, self.ui)
return self
def set_page(self, page=0):
if self.ui.pagesize:
for index, value in enumerate(self.values):
self.item_ui[index].visible = int(index / self.ui.pagesize) == page
if self.paging:
self.paging.active = page
def on_add(self):
self.update()
self.ui.add_item(self.ui.new_item(self.collection), self.collection)
self.populate()
def on_delete(self, item):
self.update()
self.ui.delete_item(item, self.collection)
self.populate()
def update(self):
if hasattr(self.items_ui_element, 'sortable') and self.items_ui_element.order:
sortable_indexes = []
for i, e in enumerate(self.items_ui_element.children):
if e.visible:
sortable_indexes.append(i)
try:
absolute_order = [sortable_indexes[i - 1] for i in self.items_ui_element.order]
indexes_valid = True
except IndexError:
indexes_valid = False
if indexes_valid:
new_indexes = []
absolute_order_idx = 0
for i in range(len(self.values)):
if i in sortable_indexes:
new_indexes.append(absolute_order[absolute_order_idx])
absolute_order_idx += 1
else:
new_indexes.append(i)
shuffle = lambda a: dict([(old, a[i]) for old, i in enumerate(new_indexes) if i < len(self.collection)])
self.binders = shuffle(self.binders)
self.item_ui = shuffle(self.item_ui)
new_values = [self.values[i] for i in new_indexes if i < len(self.collection)]
while len(self.collection) > 0:
self.collection.pop(0)
for e in new_values:
self.collection.append(e)
self.items_ui_element.order = []
for index, value in enumerate(self.values):
if self.ui.filter(value):
self.binders[index].update()
self.ui.post_item_update(self.object, self.collection, value, self.binders[index].ui)
@public
class Binder (object):
"""
An automatic object-to-ui-hierarchy binder. Uses ``bind`` UI property to find what and where to bind.
If ``object`` is not None, the Binder is also initialized (see ``setup(object)``) with this data object.
:param object: Python object
:param ui: UI hierarchy root
"""
def __init__(self, object=None, ui=None):
self.bindings = []
self.ui = ui
if object is not
datastore_pb.Query_Filter.GREATER_THAN_OR_EQUAL:
start_value = equality_value + value1
else:
raise dbconstants.AppScaleMisconfiguredQuery("Bad filter ordering")
# The second operator will be either < or <=.
if oper2 == datastore_pb.Query_Filter.LESS_THAN:
end_value = equality_value + value2
elif oper2 == datastore_pb.Query_Filter.LESS_THAN_OR_EQUAL:
end_value = equality_value + value2 + self._SEPARATOR + \
self._TERM_STRING
else:
raise dbconstants.AppScaleMisconfiguredQuery("Bad filter ordering")
if direction == datastore_pb.Query_Order.DESCENDING:
value1 = helper_functions.reverse_lex(value1)
value2 = helper_functions.reverse_lex(value2)
if oper1 == datastore_pb.Query_Filter.GREATER_THAN:
end_value = equality_value + value1
elif oper1 == datastore_pb.Query_Filter.GREATER_THAN_OR_EQUAL:
end_value = equality_value + value1 + self._SEPARATOR + \
self._TERM_STRING
else:
raise dbconstants.AppScaleMisconfiguredQuery("Bad filter ordering")
if oper2 == datastore_pb.Query_Filter.LESS_THAN:
start_value = equality_value + value2 + self._SEPARATOR + \
self._TERM_STRING
elif oper2 == datastore_pb.Query_Filter.LESS_THAN_OR_EQUAL:
start_value = equality_value + value2
else:
raise dbconstants.AppScaleMisconfiguredQuery("Bad filter ordering")
start_key = "{0}{1}".format(pre_comp_index_key, start_value)
end_key = "{0}{1}".format(pre_comp_index_key, end_value)
return start_key, end_key
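The key-range construction above relies on a lexicographic trick: index keys are prefix + value + separator + entity key, so a strict lower bound starts past every key for that value, and an inclusive upper bound appends a terminator that sorts after any entity-key suffix. A minimal sketch with hypothetical stand-ins for `self._SEPARATOR` and `self._TERM_STRING`:

```python
SEPARATOR = '\x00'   # hypothetical stand-in for self._SEPARATOR
TERM = '\xff' * 4    # hypothetical stand-in for self._TERM_STRING

def range_for(prefix, lo, hi, lo_inclusive, hi_inclusive):
    # Strict '>' skips all keys equal to lo by starting past their suffixes;
    # inclusive '<=' appends the terminator so every suffix of hi is covered.
    start = prefix + lo if lo_inclusive else prefix + lo + SEPARATOR + TERM
    end = prefix + hi + SEPARATOR + TERM if hi_inclusive else prefix + hi
    return start, end

start, end = range_for('p:', 'b', 'd', lo_inclusive=False, hi_inclusive=True)
```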
@gen.coroutine
def composite_v2(self, query, filter_info):
"""Performs composite queries using a range query against
the composite table. Faster than in-memory filters, but requires
indexes to be built upon each put.
Args:
query: The query to run.
filter_info: dictionary mapping property names to tuples of
filter operators and values.
Returns:
List of entities retrieved from the given query.
"""
self.logger.debug('Composite Query:\n{}'.format(query))
start_inclusive = True
startrow, endrow = self.get_range_composite_query(query, filter_info)
# Override the start_key with a cursor if given.
if query.has_compiled_cursor() and query.compiled_cursor().position_size():
cursor = appscale_stub_util.ListCursor(query)
last_result = cursor._GetLastResult()
composite_index = query.composite_index_list()[0]
startrow = self.get_composite_index_key(composite_index, last_result,
position_list=query.compiled_cursor().position_list(),
filters=query.filter_list())
start_inclusive = False
if query.compiled_cursor().position_list()[0].start_inclusive() == 1:
start_inclusive = True
if query.has_end_compiled_cursor():
end_compiled_cursor = query.end_compiled_cursor()
list_cursor = appscale_stub_util.ListCursor(query)
last_result, _ = list_cursor._DecodeCompiledCursor(end_compiled_cursor)
composite_index = query.composite_index_list()[0]
endrow = self.get_composite_index_key(composite_index, last_result,
position_list=end_compiled_cursor.position_list(),
filters=query.filter_list())
table_name = dbconstants.COMPOSITE_TABLE
column_names = dbconstants.COMPOSITE_SCHEMA
limit = self.get_limit(query)
if startrow > endrow:
raise gen.Return([])
# TODO: Check if we should do this for other comparisons.
multiple_equality_filters = self.__get_multiple_equality_filters(
query.filter_list())
entities = []
current_limit = limit
while True:
references = yield self.datastore_batch.range_query(
table_name, column_names, startrow, endrow, current_limit,
offset=0, start_inclusive=start_inclusive, end_inclusive=True)
# This is a projection query.
if query.property_name_size() > 0:
potential_entities = self.__extract_entities_from_composite_indexes(
query, references)
else:
potential_entities = yield self.__fetch_entities(references)
if len(multiple_equality_filters) > 0:
self.logger.debug('Detected multiple equality filters on a repeated '
'property. Removing results that do not match query.')
potential_entities = self.__apply_multiple_equality_filters(
potential_entities, multiple_equality_filters)
entities.extend(potential_entities)
# If we have enough valid entities to satisfy the query, we're done.
if len(entities) >= limit:
break
# If we received fewer references than we asked for, they are exhausted.
if len(references) < current_limit:
break
# If all of the references that we fetched were valid, we're done.
if len(potential_entities) == len(references):
break
invalid_refs = len(references) - len(potential_entities)
# Pad the limit to increase the likelihood of fetching all the valid
# references that we need.
current_limit = invalid_refs + dbconstants.MAX_GROUPS_FOR_XG
self.logger.debug('{} entities do not match query. '
'Fetching {} more references.'.format(invalid_refs, current_limit))
last_startrow = startrow
# Start from the last reference fetched.
startrow = references[-1].keys()[0]
if startrow == last_startrow:
raise dbconstants.AppScaleDBError(
'An infinite loop was detected while fetching references.')
results = entities[:limit]
self.logger.debug('Returning {} results'.format(len(results)))
raise gen.Return(results)
def __get_multiple_equality_filters(self, filter_list):
""" Returns filters from the query that contain multiple equality
comparisons on repeated properties.
Args:
filter_list: A list of filters from the query.
Returns:
A dictionary that contains properties with multiple equality filters.
"""
equality_filters = {}
for query_filter in filter_list:
if query_filter.op() != datastore_pb.Query_Filter.EQUAL:
continue
for prop in query_filter.property_list():
if prop.name() not in equality_filters:
equality_filters[prop.name()] = []
equality_filters[prop.name()].append(prop)
single_eq_filters = []
for prop in equality_filters:
if len(equality_filters[prop]) < 2:
single_eq_filters.append(prop)
for prop in single_eq_filters:
del equality_filters[prop]
return equality_filters
def __apply_multiple_equality_filters(self, entities, filter_dict):
""" Removes entities that do not meet the criteria defined by multiple
equality filters.
Args:
entities: A list of entities that need filtering.
filter_dict: A dictionary containing the relevant filters.
Returns:
A list of filtered entities.
"""
filtered_entities = []
for entity in entities:
entity_proto = entity_pb.EntityProto(entity)
relevant_props_in_entity = {}
for entity_prop in entity_proto.property_list():
if entity_prop.name() not in filter_dict:
continue
if entity_prop.name() not in relevant_props_in_entity:
relevant_props_in_entity[entity_prop.name()] = []
relevant_props_in_entity[entity_prop.name()].append(entity_prop)
passes_all_filters = True
for filter_prop_name in filter_dict:
if filter_prop_name not in relevant_props_in_entity:
raise dbconstants.AppScaleDBError(
'Property name not found in entity.')
filter_props = filter_dict[filter_prop_name]
entity_props = relevant_props_in_entity[filter_prop_name]
for filter_prop in filter_props:
# Check if filter value is in repeated property.
passes_filter = False
for entity_prop in entity_props:
if entity_prop.value().Equals(filter_prop.value()):
passes_filter = True
break
if not passes_filter:
passes_all_filters = False
break
if not passes_all_filters:
break
if passes_all_filters:
filtered_entities.append(entity)
return filtered_entities
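The validity check above reduces to: for every filtered property, each required value must appear somewhere among the entity's (possibly repeated) values. A minimal dict-based sketch, assuming entities are plain dicts mapping property name to a list of values:

```python
def passes_multiple_equality(entity, filters):
    # entity: {prop: [values...]}; filters: {prop: [required values...]}.
    for prop, required in filters.items():
        values = entity.get(prop, [])
        if any(r not in values for r in required):
            return False
    return True

entities = [{'tag': ['a', 'b']}, {'tag': ['a']}]
kept = [e for e in entities if passes_multiple_equality(e, {'tag': ['a', 'b']})]
```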
def __extract_value_from_index(self, index_entry, direction):
""" Takes an index entry and returns the value of the property.
This function is for single property indexes only.
Args:
index_entry: A dictionary containing an index entry.
direction: The direction of the index.
Returns:
A property value.
"""
reference_key = index_entry.keys()[0]
tokens = reference_key.split(self._SEPARATOR)
# Sometimes the value can contain the separator.
value = self._SEPARATOR.join(tokens[4:-1])
if direction == datastore_pb.Query_Order.DESCENDING:
value = helper_functions.reverse_lex(value)
entity = entity_pb.EntityProto()
prop = entity.add_property()
prop_value = prop.mutable_value()
self.__decode_index_str(value, prop_value)
return prop_value
def __valid_index_entry(self, entry, entities, direction, prop_name):
""" Checks if an index entry is valid.
Args:
entry: A dictionary containing an index entry.
entities: A dictionary of available valid entities.
direction: The direction of the index.
prop_name: A string containing the property name.
Returns:
A boolean indicating whether or not the entry is valid.
Raises:
AppScaleDBError: The given property name is not in the matching entity.
"""
# Skip validating reserved properties.
if dbconstants.RESERVED_PROPERTY_NAME.match(prop_name):
return True
reference = entry[entry.keys()[0]]['reference']
# Reference may be absent from entities if the entity was deleted or part
# of an invalid transaction.
if reference not in entities:
return False
index_value = self.__extract_value_from_index(entry, direction)
entity = entities[reference]
entity_proto = entity_pb.EntityProto(entity)
# TODO: Return faster if not a repeated property.
prop_found = False
for prop in entity_proto.property_list():
if prop.name() != prop_name:
continue
prop_found = True
if index_value.has_uservalue() and prop.value().has_uservalue():
if index_value.uservalue().email() == prop.value().uservalue().email():
return True
if index_value.Equals(prop.value()):
return True
if not prop_found:
# Most likely, a repeated property was populated and then emptied.
self.logger.debug('Property name {} not found in entity.'.
format(prop_name))
return False
def remove_extra_props(self, query, results):
""" Decodes entities, strips extra properties, and re-encodes them.
Args:
query: A datastore_pb.Query object.
results: A list of encoded entities.
Returns:
A list of encoded entities.
"""
projected_props = query.property_name_list()
cleaned_results = []
for result in results:
entity = entity_pb.EntityProto(result)
props_to_keep = [prop for prop in entity.property_list()
if prop.name() in projected_props]
# If the entity does not have the property, do not include it in the
# results. Raw (unindexed) properties should not be projected.
if not props_to_keep:
continue
entity.clear_property()
for prop in props_to_keep:
# Projected properties should have a meaning set to INDEX_VALUE.
prop.set_meaning(entity_pb.Property.INDEX_VALUE)
new_prop = entity.add_property()
new_prop.MergeFrom(prop)
cleaned_results.append(entity.Encode())
return cleaned_results
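Stripped of protocol-buffer details, the projection above keeps only the requested properties and drops any entity that has none of them. A dict-based sketch:

```python
def project(entities, projected):
    # Keep only projected properties; entities lacking all of them are
    # dropped, since raw (unindexed) properties must not be projected.
    out = []
    for ent in entities:
        kept = {k: v for k, v in ent.items() if k in projected}
        if kept:
            out.append(kept)
    return out

rows = project([{'a': 1, 'b': 2}, {'c': 3}], projected={'a'})
```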
def __extract_entities_from_composite_indexes(self, query, index_result):
""" Takes index values and creates partial entities out of them.
This is required for projection queries where the query specifies certain
properties which should be returned. Distinct queries are also handled here.
A distinct query removes entities with duplicate index values. This will
only return the first result for entities which have the same values for
the properties that are being projected.
Args:
query: A datastore_pb.Query object.
index_result: A list of index strings.
Returns:
A list of EntityProtos.
"""
definition = query.composite_index_list()[0].definition()
prop_name_list = query.property_name_list()
distinct_checker = []
entities = []
for index in index_result:
entity = entity_pb.EntityProto()
tokens = index.keys()[0].split(self._SEPARATOR)
app_id = tokens.pop(0)
namespace = tokens.pop(0)
comp_definition_id = tokens.pop(0)
if definition.ancestor() == 1:
ancestor = tokens.pop(0)[:-1]
distinct_str = ""
value_index = 0
for def_prop in definition.property_list():
# If the value contained the separator, try to recover the value.
if len(tokens[:-1]) > len(definition.property_list()):
end_slice = value_index + 1
while end_slice <= len(tokens[:-1]):
value = self._SEPARATOR.join(tokens[value_index:end_slice])
if def_prop.direction() == entity_pb.Index_Property.DESCENDING:
value = helper_functions.reverse_lex(value)
prop_value = entity_pb.PropertyValue()
try:
self.__decode_index_str(value, prop_value)
value_index = end_slice
break
except ProtocolBufferDecodeError:
end_slice += 1
else:
value = tokens[value_index]
if def_prop.direction() == entity_pb.Index_Property.DESCENDING:
value = helper_functions.reverse_lex(value)
value_index += 1
if def_prop.name() not in prop_name_list:
self.logger.debug('Skipping prop {} in projection'.
format(def_prop.name()))
continue
prop = entity.add_property()
prop.set_name(def_prop.name())
prop.set_meaning(entity_pb.Property.INDEX_VALUE)
prop.set_multiple(False)
distinct_str += value
prop_value
= model.coef_
return coef
# =====================Functions used to compute the ranked features and their weights=======================
def TopGenbinary(w,feature_names):
n=len(w)
difference=np.zeros(n)
for i in range(n):
difference[i]=w[i][0]-w[i][1]
df1=pd.DataFrame(feature_names,columns=['pd'])
df1['weights']=difference
#=====Sort the difference based on the absolute value=========
df1['sort_helper'] = df1['weights'].abs()
df2=df1.sort_values(by='sort_helper',ascending=False).drop('sort_helper', axis=1)
#==== end_sort=============
return df2
def rankFeatureHelper(alg,coef,feature_names):
df1=pd.DataFrame(feature_names, columns=[alg])
df1['weights']=coef
df1['sort_helper'] = df1['weights'].abs()
df2=df1.sort_values(by='sort_helper', ascending= False).drop('sort_helper',axis=1)
return df2
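# The sort used in rankFeatureHelper above can be sketched standalone. This is a
# hedged toy example (names and values are made up; pandas assumed available):

```python
import pandas as pd

def rank_by_abs_weight(names, weights, col="alg"):
    # Sort features by |weight| descending while keeping the signed weight,
    # the same trick rankFeatureHelper uses with its 'sort_helper' column.
    df = pd.DataFrame(names, columns=[col])
    df["weights"] = weights
    df["sort_helper"] = df["weights"].abs()
    return df.sort_values(by="sort_helper", ascending=False).drop("sort_helper", axis=1)

ranked = rank_by_abs_weight(["g1", "g2", "g3"], [0.1, -2.0, 0.5])
# g2 comes first: |-2.0| is the largest absolute weight.
```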
def rankFeatures(X,Yr,algList,feature_names):
# flag=0
featureList = []
for alg in algList:
if (alg == 'svm'):
clf = SVC(probability=True,kernel='linear')
model = clf.fit(X,Yr.ravel())
coef = model.coef_.transpose()
df_rankFeature = rankFeatureHelper(alg, coef, feature_names)
featureList.append(df_rankFeature)
if (alg == 'RF'):
clf = RandomForestClassifier(n_estimators = 400, random_state = 10,max_depth=3)
model = clf.fit(X,Yr.ravel())
coef = model.feature_importances_
df_rankFeature = rankFeatureHelper(alg, coef, feature_names)
featureList.append(df_rankFeature)
if (alg == 'plsda'):
clf = PLSRegression(n_components=4,scale=False)
model = clf.fit(X,Yr.ravel())
coef = model.coef_
df_rankFeature = rankFeatureHelper(alg, coef, feature_names)
featureList.append(df_rankFeature)
# if flag == 0:
# df_rankFeature = TopGenbinary(coef, feature_names)
# flag =1
# else:
# df_feature = TopGenbinary(coef, feature_names)
# df_rankFeature
return featureList
#===============================Compute the \rho==============================
def basic_run_eta_molecule(X,YR,ID,k,
genenames=None,
clusternames=None,
niter=30,
rho=1,
tau=4,
beta=0.25,
delta=1.0,
eta = 500,
gamma = 1,
nfold=4,
random_seed = 1):
'''
# =====================================================================
# This function is used to compute the df_confidence
# Basic function to launch the algorithm of some specific parameters.
# - Input:
# The function of the algorithm: primal_dual_L1N
# The function to predict: predict_L1_molecule
# - X (necessary) : The data
# - YR (necessary) : The labels for the data
# - k (necessary) : The number of the clusters
#
# - genenames (optional) : The names of the features of the data
# if not given, it will be
# ['Gene 1','Gene 2',...]
#
# - clusternames (optional) : The clusternames of the data
# if not given, it will be
# ['Class 1', 'Class 2',...]
#
# - niter (optional) : The number of iterations
#
# - rho, tau, beta, delta, : The hyper-parameters for the algo
# eta, gamma (optional)
#
# - nfold (optional) : The number of the folds of the cross validation
#
# - rng (optional) : The seed to control the random funcion
#
# - Output:
# - Yprediction : list of Predicted labels
# ======================================================================
'''
np.random.seed(random_seed) # reproducible
n,d = X.shape
# parameter checking
if genenames is None:
genenames = ['Gene {}'.format(i+1) for i in range(d)]
if clusternames is None:
clusternames = ['Class {}'.format(i+1) for i in range(k)]
    if YR.ndim==1: # OneHotEncoder raises a TypeError when given a 1D array
YR = YR.reshape(-1,1)
Y = OneHotEncoder(categories='auto').fit_transform(YR).toarray()
normY = normest(Y)
normY2 = normY**2
    # Drop cells at random if n % nfold is not zero
# See more details in drop_cells
X,YR,Ident = drop_cells_with_ID(X,YR,ID,nfold)
dico=dict(list(enumerate(Ident)))
ref=pd.DataFrame.from_dict(dico,orient="index")
param = {}
param['niter'] = niter
param['rho'] = rho
param['tau'] = tau
tau2 = beta*(1/(np.sqrt(n)*normY))
param['tau2'] = tau2
eps = 1/(1 + tau2*rho*0.25)
sigma = 1.0/(tau + (tau2*eps*normY2))# Converge until 2.6 for L1Nel
param['sigma'] = sigma
param['delta'] = delta
param['beta']= beta
param['eta'] = eta
param['gamma'] = gamma
# Initialization
nbG = np.zeros(nfold,dtype=int) # Number of genes for each fold
W0 = np.zeros((d,k,nfold)) # w in each fold
mu0 = np.zeros((k,k,nfold))
#Z0 = np.zeros((int((nfold-1)*n/nfold),k,nfold))
#Z_mean = np.zeros((int((nfold-1)*n/nfold),k))
loss_iter0 = np.zeros((nfold,niter)) # loss for each iteration of each fold
# Parameters printing
    print('\nStart training for')
print('{:>6}:{:<6}'.format('niter',niter))
print('{:>6}:{:<6}'.format('eta',eta))
if 'fista' in primal_dual_L1N.__name__.lower():
        print('{:>6}:{:<6}'.format('gamma',gamma))
elif 'or' in primal_dual_L1N.__name__.lower():
print('{:>6}:{:<6}'.format('rho',rho))
print('{:>6}:{:<6}'.format('tau',tau))
print('{:>6}:{:<6}'.format('beta',beta))
print('{:>6}:{:<6}'.format('tau_mu',tau2))
print('{:>6}:{:<6}'.format('sigma',sigma))
print('{:>6}:{:<6}'.format('delta',delta))
        print('{:>6}:{:<6}'.format('gamma',gamma))
elif '_l2' in primal_dual_L1N.__name__.lower():
print('{:>6}:{:<6}'.format('rho',rho))
print('{:>6}:{:<6}'.format('tau',tau))
print('{:>6}:{:<6}'.format('beta',beta))
print('{:>6}:{:<6}'.format('tau_mu',tau2))
print('{:>6}:{:<6}'.format('sigma',sigma))
else:
print('{:>6}:{:<6}'.format('rho',rho))
print('{:>6}:{:<6}'.format('tau',tau))
print('{:>6}:{:<6}'.format('beta',beta))
print('{:>6}:{:<6}'.format('tau_mu',tau2))
print('{:>6}:{:<6}'.format('sigma',sigma))
print('{:>6}:{:<6}'.format('delta',delta))
Yprediction=[]
Confidence= []
# accuracy_train = np.zeros((nfold,k+1))
# accuracy_test = np.zeros((nfold,k+1))
ID = []
Ident=[]
kf = KFold(n_splits=nfold,random_state=random_seed,shuffle=True)
w_all,mu_all,nbGenes_all,loss_all = primal_dual_L1N(X,YR,k,param)[0:4]
for i,(train_ind, test_ind) in enumerate(kf.split(YR)):
print('{:-<30}'.format(''))
print('{message:^6} {f1} / {f2}'.format(message='fold',f1=i+1,f2=nfold))
print('-> {} classification...'.format(primal_dual_L1N.__name__))
# ========== Training =========
Xtrain = X[train_ind]
Ytrain = YR[train_ind]
Xtest = X[test_ind]
startTime = time.perf_counter()
w,mu,nbGenes,loss = primal_dual_L1N(Xtrain,Ytrain,k,param)[0:4]
endTime = time.perf_counter()
timeElapsed = endTime - startTime
print('-> Completed.\n-> Time Elapsed:{:.4}s'.format(timeElapsed))
W0[:,:,i] = w
mu0[:,:,i] = mu
loss_iter0[i,:] = loss
# ========== Prediction =========
Ypred,conf = predict_L1_molecule(Xtest,w,mu)
Yprediction.append(Ypred)
Confidence.append(conf)
ID.append(test_ind)
Ident.append(ref.iloc[test_ind])
nbG[i] = nbGenes
print('{:-<30}'.format(''))
# end kfold loop
return Yprediction,Confidence,ID,Ident,YR,ref
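# The step sizes built above can be checked numerically in isolation. A hedged
# sketch with toy values (normY stands in for normest(Y), assumed here to be a
# spectral-norm estimate; defaults mirror the function signature):

```python
import numpy as np

def primal_dual_step_sizes(n, normY, tau=4.0, beta=0.25, rho=1.0):
    # tau2, eps and sigma as computed in basic_run_eta_molecule above.
    tau2 = beta * (1.0 / (np.sqrt(n) * normY))
    eps = 1.0 / (1.0 + tau2 * rho * 0.25)
    sigma = 1.0 / (tau + tau2 * eps * normY ** 2)
    return tau2, eps, sigma

tau2, eps, sigma = primal_dual_step_sizes(n=100, normY=10.0)
# tau2 = 0.25 / (10 * 10) = 0.0025
```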
# ===================== Base Launch functions (scripts) ========================
def basic_run_eta_compare(func_algo, func_predict,
X,YR, k,alglist,
genenames=None,
clusternames=None,
niter=30,
rho=1,
tau=4,
beta=0.25,
delta=1.0,
eta = None,
eta_star = None,
gamma = 1,
nfold=4,
rng = 1,
showres=False,
keepfig = False,
saveres=False,
outputPath='../results/'):
'''
# =====================================================================
# Basic function to launch the algorithm of some specific parameters.
# - Input:
# - func_algo (necessary) : The function of the algorithm
# - func_predict (necessary) : The function to predict
# - X (necessary) : The data
# - YR (necessary) : The labels for the data
# - k (necessary) : The number of the clusters
#
# - genenames (optional) : The names of the features of the data
# if not given, it will be
# ['Gene 1','Gene 2',...]
#
# - clusternames (optional) : The clusternames of the data
# if not given, it will be
# ['Class 1', 'Class 2',...]
#
# - niter (optional) : The number of iterations
#
# - rho, tau, beta, delta, : The hyper-parameters for the algo
# eta, gamma, etc (optional)
#
# - nfold (optional) : The number of the folds of the cross validation
#
#    - rng (optional)        : The seed to control the random function
#
# - showres (optional) : Boolean value. True if we want to show
# the results, plot the figures etc.
#
# - saveres (optional) : Boolean value. True to save the results
#
#    - alglist (necessary)   : The list of algorithms to compare
#
# - outputPath (optional) : String value. The output path.
#
#
# - Output:
# - mu : The centroids
# - nbm : Number of genes
# - accG : Global accuracy
# - loss : Loss for each iterations
# - W_mean : Mean weight matrix for all folds
# - timeElapsed : Time elapsed for one fold
# - (And the tables) : df_topGenes, df_normW, df_topG_normW,
# df_topGenes_mean, df_normW_mean,
# df_topG_normW_mean, df_acctest
# ======================================================================
'''
np.random.seed(rng) # reproducible
if not os.path.exists(outputPath): # make the directory if it does not exist
os.makedirs(outputPath)
n,d = X.shape
# parameter checking
if genenames is None:
genenames = ['Gene {}'.format(i+1) for i in range(d)]
if clusternames is None:
clusternames = ['Class {}'.format(i+1) for i in range(k)]
    # Normalize the mean of the data (deprecated)
#m = np.mean(X,axis=0)
#X = X-m
#normX = normest(X)
#X = X/normX
#YR = np.array(YR).reshape(-1,1)
    if YR.ndim==1: # OneHotEncoder raises a TypeError when given a 1D array
YR = YR.reshape(-1,1)
Y = OneHotEncoder(categories='auto').fit_transform(YR).toarray()
normY = normest(Y)
normY2 = normY**2
    # Drop cells at random if n % nfold is not zero
# For more details please see instructions in drop_cells
X,YR = drop_cells(X,YR,nfold)
param = {}
param['niter'] = niter
param['rho'] = rho
param['tau'] = tau
tau2 = beta*(1/(np.sqrt(n)*normY))
param['tau2'] = tau2
eps = 1/(1 + tau2*rho*0.25)
sigma = 1.0/(tau + (tau2*eps*normY2))# Converge until 2.6 for L1Nel
param['sigma'] = sigma
param['delta'] = delta
param['beta']= beta
param['eta'] = eta
param['eta_star'] = eta_star
param['gamma'] = gamma
# Initialization
nbG = np.zeros(nfold,dtype=int) # Number of genes for each fold
accuracy_train = np.zeros((nfold,k+1))
accuracy_test = np.zeros((nfold,k+1))
auc_train = np.zeros((nfold))
auc_test = np.zeros((nfold))
sil_train = np.zeros((nfold))
W0 = np.zeros((d,k,nfold)) # w in each fold
mu0 = np.zeros((k,k,nfold))
W_mean = np.zeros((d,k))
#Z0 = np.zeros((int((nfold-1)*n/nfold),k,nfold))
#Z_mean = np.zeros((int((nfold-1)*n/nfold),k))
loss_iter0 = np.zeros((nfold,niter)) # loss for each iteration of each fold
# W_mean stores w for each eta, where w is the mean of W0 along its third axis | |
# -*- test-case-name: txweb2.test.test_server,twext.web2.test.test_resource -*-
##
# Copyright (c) 2001-2007 Twisted Matrix Laboratories.
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
##
"""
I hold the lowest-level L{Resource} class and related mix-in classes.
"""
# System Imports
from zope.interface import implements
from twisted.internet.defer import inlineCallbacks, returnValue
from txweb2 import iweb, http, server, responsecode
from twisted.internet.defer import maybeDeferred
class RenderMixin(object):
"""
Mix-in class for L{iweb.IResource} which provides a dispatch mechanism for
handling HTTP methods.
"""
def allowedMethods(self):
"""
@return: A tuple of HTTP methods that are allowed to be invoked on this
resource.
"""
if not hasattr(self, "_allowed_methods"):
self._allowed_methods = tuple(
name[5:] for name in dir(self)
if name.startswith('http_') and getattr(self, name) is not None
)
return self._allowed_methods
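    # The http_* dispatch convention can be sketched without txweb2. A hedged
    # plain-Python example (class and method names are hypothetical):

```python
class MiniResource:
    # Methods named http_<METHOD> advertise which HTTP verbs are supported,
    # mirroring the introspection in RenderMixin.allowedMethods above.
    def http_GET(self, request):
        return "rendered"

    def http_HEAD(self, request):
        return self.http_GET(request)

    def allowed_methods(self):
        return tuple(sorted(
            name[5:] for name in dir(self)
            if name.startswith("http_") and getattr(self, name) is not None
        ))

resource = MiniResource()
# resource.allowed_methods() -> ("GET", "HEAD")
```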
def checkPreconditions(self, request):
"""
Checks all preconditions imposed by this resource upon a request made
against it.
@param request: the request to process.
@raise http.HTTPError: if any precondition fails.
@return: C{None} or a deferred whose callback value is C{request}.
"""
#
# http.checkPreconditions() gets called by the server after every
# GET or HEAD request.
#
# For other methods, we need to know to bail out before request
# processing, especially for methods that modify server state (eg.
# PUT).
# We also would like to do so even for methods that don't, if those
# methods might be expensive to process. We're assuming that GET and
# HEAD are not expensive.
#
if request.method not in ("GET", "HEAD"):
http.checkPreconditions(request)
# Check per-method preconditions
method = getattr(self, "preconditions_" + request.method, None)
if method:
return method(request)
@inlineCallbacks
def renderHTTP(self, request):
"""
See L{iweb.IResource.renderHTTP}.
This implementation will dispatch the given C{request} to another
method of C{self} named C{http_}METHOD, where METHOD is the HTTP method
used by C{request} (eg. C{http_GET}, C{http_POST}, etc.).
Generally, a subclass should implement those methods instead of
overriding this one.
        C{http_*} methods are expected to provide the same interface and return
the same results as L{iweb.IResource}C{.renderHTTP} (and therefore this
method).
C{etag} and C{last-modified} are added to the response returned by the
        C{http_*} method, if known.
If an appropriate C{http_*} method is not found, a
L{responsecode.NOT_ALLOWED}-status response is returned, with an
appropriate C{allow} header.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
method = getattr(self, "http_" + request.method, None)
if method is None:
response = http.Response(responsecode.NOT_ALLOWED)
response.headers.setHeader("allow", self.allowedMethods())
returnValue(response)
yield self.checkPreconditions(request)
result = maybeDeferred(method, request)
result.addErrback(self.methodRaisedException)
returnValue((yield result))
def methodRaisedException(self, failure):
"""
An C{http_METHOD} method raised an exception; this is an errback for
that exception. By default, simply propagate the error up; subclasses
may override this for top-level exception handling.
"""
return failure
def http_OPTIONS(self, request):
"""
        Respond to an OPTIONS request.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
response = http.Response(responsecode.OK)
response.headers.setHeader("allow", self.allowedMethods())
return response
# def http_TRACE(self, request):
# """
# Respond to a TRACE request.
# @param request: the request to process.
# @return: an object adaptable to L{iweb.IResponse}.
# """
# return server.doTrace(request)
def http_HEAD(self, request):
"""
Respond to a HEAD request.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
return self.http_GET(request)
def http_GET(self, request):
"""
Respond to a GET request.
This implementation validates that the request body is empty and then
dispatches the given C{request} to L{render} and returns its result.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
if request.stream.length != 0:
return responsecode.REQUEST_ENTITY_TOO_LARGE
return self.render(request)
def render(self, request):
"""
Subclasses should implement this method to do page rendering.
See L{http_GET}.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
raise NotImplementedError("Subclass must implement render method.")
class Resource(RenderMixin):
"""
An L{iweb.IResource} implementation with some convenient mechanisms for
locating children.
"""
implements(iweb.IResource)
addSlash = False
def locateChild(self, request, segments):
"""
Locates a child resource of this resource.
@param request: the request to process.
@param segments: a sequence of URL path segments.
@return: a tuple of C{(child, segments)} containing the child
of this resource which matches one or more of the given C{segments} in
sequence, and a list of remaining segments.
"""
w = self.getChild(segments[0])
if w:
r = iweb.IResource(w, None)
if r:
return r, segments[1:]
return w(request), segments[1:]
factory = getattr(self, 'childFactory', None)
if factory is not None:
r = factory(request, segments[0])
if r:
return r, segments[1:]
return None, []
def child_(self, request):
"""
This method locates a child with a trailing C{"/"} in the URL.
@param request: the request to process.
"""
if self.addSlash and len(request.postpath) == 1:
return self
return None
def getChild(self, path):
"""
Get a static child - when registered using L{putChild}.
@param path: the name of the child to get
@type path: C{str}
@return: the child or C{None} if not present
@rtype: L{iweb.IResource}
"""
return getattr(self, 'child_%s' % (path,), None)
def putChild(self, path, child):
"""
Register a static child.
This implementation registers children by assigning them to attributes
with a C{child_} prefix. C{resource.putChild("foo", child)} is
        therefore the same as C{resource.child_foo = child}.
@param path: the name of the child to register. You almost certainly
don't want C{"/"} in C{path}. If you want to add a "directory"
resource (e.g. C{/foo/}) specify C{path} as C{""}.
@param child: an object adaptable to L{iweb.IResource}.
"""
setattr(self, 'child_%s' % (path,), child)
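    # The child_<name> attribute convention above can be sketched standalone.
    # A hedged plain-Python example (no txweb2 dependency; names hypothetical):

```python
class MiniNode:
    # putChild stores the child on a child_<name> attribute and getChild
    # looks it up the same way, mirroring Resource above.
    def putChild(self, path, child):
        setattr(self, "child_%s" % (path,), child)

    def getChild(self, path):
        return getattr(self, "child_%s" % (path,), None)

root = MiniNode()
leaf = object()
root.putChild("foo", leaf)
# root.getChild("foo") is leaf; unknown names return None.
```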
def http_GET(self, request):
if self.addSlash and request.prepath[-1] != '':
# If this is a directory-ish resource...
return http.RedirectResponse(
request.unparseURL(path=request.path + '/')
)
return super(Resource, self).http_GET(request)
class PostableResource(Resource):
"""
A L{Resource} capable of handling the POST request method.
@cvar maxMem: maximum memory used during the parsing of the data.
@type maxMem: C{int}
@cvar maxFields: maximum number of form fields allowed.
@type maxFields: C{int}
@cvar maxSize: maximum size of the whole post allowed.
@type maxSize: C{int}
"""
maxMem = 100 * 1024
maxFields = 1024
maxSize = 10 * 1024 * 1024
def http_POST(self, request):
"""
Respond to a POST request.
Reads and parses the incoming body data then calls L{render}.
@param request: the request to process.
@return: an object adaptable to L{iweb.IResponse}.
"""
return server.parsePOSTData(
request, self.maxMem, self.maxFields, self.maxSize
).addCallback(lambda res: self.render(request))
class LeafResource(RenderMixin):
"""
A L{Resource} with no children.
"""
implements(iweb.IResource)
def locateChild(self, request, segments):
return self, server.StopTraversal
class RedirectResource(LeafResource):
"""
A L{LeafResource} which always performs a redirect.
"""
implements(iweb.IResource)
def __init__(self, *args, **kwargs):
"""
Parameters are URL components and are the same as those for
L{urlparse.urlunparse}. URL components which are not specified will
default to the corresponding component of the URL of the request being
redirected.
"""
self._args = args
self._kwargs = kwargs
def renderHTTP(self, request):
return http.RedirectResponse(
request.unparseURL(*self._args, **self._kwargs)
)
class WrapperResource(object):
"""
An L{iweb.IResource} implementation which wraps a L{RenderMixin} instance
and provides a hook in which a subclass can implement logic that is called
before request processing on the contained L{Resource}.
"""
implements(iweb.IResource)
def __init__(self, resource):
self.resource = resource
def hook(self, request):
"""
Override this method in order to do something before passing control on
to the wrapped resource's C{renderHTTP} and C{locateChild} methods.
@return: None or a L{Deferred}. If a deferred object is
| |
# Copyright (c) 2009-2010 <NAME>. See LICENSE for details.
import sys
import traceback
from gevent import core
from gevent.hub import greenlet, getcurrent, get_hub, GreenletExit, Waiter
from gevent.timeout import Timeout
__all__ = ['Greenlet',
'joinall',
'killall']
class SpawnedLink(object):
"""A wrapper around link that calls it in another greenlet.
Can be called only from main loop.
"""
__slots__ = ['callback']
def __init__(self, callback):
self.callback = callback
def __call__(self, source):
g = greenlet(self.callback, get_hub())
g.switch(source)
def __hash__(self):
return hash(self.callback)
def __eq__(self, other):
return self.callback == getattr(other, 'callback', other)
def __str__(self):
return str(self.callback)
def __repr__(self):
return repr(self.callback)
def __getattr__(self, item):
assert item != 'callback'
return getattr(self.callback, item)
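# SpawnedLink's delegation pattern (hash and compare like the wrapped callback,
# forward everything else to it) can be sketched in plain Python. A hedged toy
# example, independent of gevent:

```python
class CallbackWrapper:
    # Hashes and compares like the wrapped callback, and forwards other
    # attribute lookups to it, mirroring SpawnedLink above.
    __slots__ = ["callback"]

    def __init__(self, callback):
        self.callback = callback

    def __hash__(self):
        return hash(self.callback)

    def __eq__(self, other):
        return self.callback == getattr(other, "callback", other)

    def __getattr__(self, item):
        return getattr(self.callback, item)

def handler(source):
    return source

wrapper = CallbackWrapper(handler)
# wrapper compares equal both to the bare callback and to another wrapper.
```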
class SuccessSpawnedLink(SpawnedLink):
"""A wrapper around link that calls it in another greenlet only if source succeed.
Can be called only from main loop.
"""
__slots__ = []
def __call__(self, source):
if source.successful():
return SpawnedLink.__call__(self, source)
class FailureSpawnedLink(SpawnedLink):
"""A wrapper around link that calls it in another greenlet only if source failed.
Can be called only from main loop.
"""
__slots__ = []
def __call__(self, source):
if not source.successful():
return SpawnedLink.__call__(self, source)
class GreenletLink(object):
"""A wrapper around greenlet that raises a LinkedExited exception when called.
Can be called only from main loop.
"""
__slots__ = ['greenlet']
def __init__(self, greenlet):
self.greenlet = greenlet
def __call__(self, source):
if source.successful():
if isinstance(source.value, GreenletExit):
error = LinkedKilled(source)
else:
error = LinkedCompleted(source)
else:
error = LinkedFailed(source)
self.greenlet.throw(error)
def __hash__(self):
return hash(self.greenlet)
def __eq__(self, other):
return self.greenlet == getattr(other, 'greenlet', other)
def __str__(self):
return str(self.greenlet)
def __repr__(self):
return repr(self.greenlet)
class SuccessGreenletLink(GreenletLink):
"""A wrapper around greenlet that raises a LinkedExited exception when called
    if the source has succeeded.
Can be called only from main loop.
"""
__slots__ = []
def __call__(self, source):
if source.successful():
return GreenletLink.__call__(self, source)
class FailureGreenletLink(GreenletLink):
"""A wrapper around greenlet that raises a LinkedExited exception when called
if source has failed.
Can be called only from main loop.
"""
__slots__ = []
def __call__(self, source):
if not source.successful():
return GreenletLink.__call__(self, source)
class Greenlet(greenlet):
"""A light-weight cooperatively-scheduled execution unit."""
def __init__(self, run=None, *args, **kwargs):
greenlet.__init__(self, parent=get_hub())
if run is not None:
self._run = run
self.args = args
self.kwargs = kwargs
self._links = []
self.value = None
self._exception = _NONE
self._notifier = None
self._start_event = None
@property
def started(self):
return self._start_event is not None or bool(self)
def ready(self):
"""Return true if and only if the greenlet has finished execution."""
return self.dead or self._exception is not _NONE
def successful(self):
"""Return true if and only if the greenlet has finished execution successfully,
that is, without raising an error."""
return self._exception is None
def __repr__(self):
classname = self.__class__.__name__
result = '<%s at %s' % (classname, hex(id(self)))
formatted = self._formatinfo()
if formatted:
result += ': ' + formatted
return result + '>'
def _formatinfo(self):
try:
return self._formatted_info
except AttributeError:
pass
try:
result = getfuncname(self.__dict__['_run'])
except Exception:
pass
else:
args = []
if self.args:
args = [repr(x)[:50] for x in self.args]
if self.kwargs:
args.extend(['%s=%s' % (key, repr(value)[:50]) for (key, value) in self.kwargs.items()])
if args:
result += '(' + ', '.join(args) + ')'
# it is important to save the result here, because once the greenlet exits '_run' attribute will be removed
self._formatted_info = result
return result
return ''
@property
def exception(self):
"""Holds the exception instance raised by the function if the greenlet has finished with an error.
Otherwise ``None``.
"""
if self._exception is not _NONE:
return self._exception
def throw(self, *args):
"""Immediatelly switch into the greenlet and raise an exception in it.
Should only be called from the HUB, otherwise the current greenlet is left unscheduled forever.
        To raise an exception in a safe manner from any greenlet, use :meth:`kill`.
If a greenlet was started but never switched to yet, then also
a) cancel the event that will start it
b) fire the notifications as if an exception was raised in a greenlet
"""
if self._start_event is not None:
self._start_event.cancel()
self._start_event = None
try:
greenlet.throw(self, *args)
finally:
if self._exception is _NONE and self.dead:
# the greenlet was not started yet, so _report_error was not called, so
# the result was not set and the links weren't notified. let's do it here.
# checking that self.dead is true is essential, because the exception raised by
# throw() could have been cancelled by the greenlet's function.
if len(args) == 1:
arg = args[0]
#if isinstance(arg, type):
if type(arg) is type(Exception):
args = (arg, arg(), None)
else:
args = (type(arg), arg, None)
elif not args:
args = (GreenletExit, GreenletExit(), None)
self._report_error(args)
def start(self):
"""Schedule the greenlet to run in this loop iteration"""
assert not self.started, 'Greenlet already started'
self._start_event = core.active_event(self.switch)
def start_later(self, seconds):
"""Schedule the greenlet to run in the future loop iteration *seconds* later"""
assert not self.started, 'Greenlet already started'
self._start_event = core.timer(seconds, self.switch)
@classmethod
def spawn(cls, *args, **kwargs):
"""Return a new :class:`Greenlet` object, scheduled to start.
The arguments are passed to :meth:`Greenlet.__init__`.
"""
g = cls(*args, **kwargs)
g.start()
return g
@classmethod
def spawn_later(cls, seconds, *args, **kwargs):
"""Return a Greenlet object, scheduled to start *seconds* later.
The arguments are passed to :meth:`Greenlet.__init__`.
"""
g = cls(*args, **kwargs)
g.start_later(seconds)
return g
@classmethod
def spawn_link(cls, *args, **kwargs):
g = cls.spawn(*args, **kwargs)
g.link()
return g
@classmethod
def spawn_link_value(cls, *args, **kwargs):
g = cls.spawn(*args, **kwargs)
g.link_value()
return g
@classmethod
def spawn_link_exception(cls, *args, **kwargs):
g = cls.spawn(*args, **kwargs)
g.link_exception()
return g
def kill(self, exception=GreenletExit, block=True, timeout=None):
"""Raise the exception in the greenlet.
If block is ``True`` (the default), wait until the greenlet dies or the optional timeout expires.
If block is ``False``, the current greenlet is not unscheduled.
The function always returns ``None`` and never raises an error.
`Changed in version 0.13.0:` *block* is now ``True`` by default.
"""
if self._start_event is not None:
self._start_event.cancel()
self._start_event = None
if not self.dead:
waiter = Waiter()
core.active_event(_kill, self, exception, waiter)
if block:
waiter.get()
self.join(timeout)
# it should be OK to use kill() in finally or kill a greenlet from more than one place;
# thus it should not raise when the greenlet is already killed (= not started)
def get(self, block=True, timeout=None):
"""Return the result the greenlet has returned or re-raise the exception it has raised.
If block is ``False``, raise :class:`gevent.Timeout` if the greenlet is still alive.
If block is ``True``, unschedule the current greenlet until the result is available
or the timeout expires. In the latter case, :class:`gevent.Timeout` is raised.
"""
if self.ready():
if self.successful():
return self.value
else:
raise self._exception
if block:
switch = getcurrent().switch
self.rawlink(switch)
try:
t = Timeout.start_new(timeout)
try:
result = self.parent.switch()
assert result is self, 'Invalid switch into Greenlet.get(): %r' % (result, )
finally:
t.cancel()
except:
# unlinking in 'except' instead of finally is an optimization:
# if switch occurred normally then link was already removed in _notify_links
# and there's no need to touch the links set.
# Note, however, that if "Invalid switch" assert was removed and invalid switch
# did happen, the link would remain, causing another invalid switch later in this greenlet.
self.unlink(switch)
raise
if self.ready():
if self.successful():
return self.value
else:
raise self._exception
else:
raise Timeout
def join(self, timeout=None):
"""Wait until the greenlet finishes or *timeout* expires.
Return ``None`` regardless.
"""
if self.ready():
return
else:
switch = getcurrent().switch
self.rawlink(switch)
try:
t = Timeout.start_new(timeout)
try:
result = self.parent.switch()
assert result is self, 'Invalid switch into Greenlet.join(): %r' % (result, )
finally:
t.cancel()
except Timeout, ex:
self.unlink(switch)
if ex is not t:
raise
except:
self.unlink(switch)
raise
def _report_result(self, result):
self._exception = None
self.value = result
if self._links and self._notifier is None:
self._notifier = core.active_event(self._notify_links)
def _report_error(self, exc_info):
exception = exc_info[1]
if isinstance(exception, GreenletExit):
self._report_result(exception)
return
try:
traceback.print_exception(*exc_info)
except:
pass
self._exception = exception
if self._links and self._notifier is None:
self._notifier = core.active_event(self._notify_links)
info = str(self) + ' failed with '
try:
info += self._exception.__class__.__name__
except Exception:
info += str(self._exception) or repr(self._exception)
sys.stderr.write(info + '\n\n')
def run(self):
try:
self._start_event = None
try:
result = self._run(*self.args, **self.kwargs)
except:
self._report_error(sys.exc_info())
return
self._report_result(result)
finally:
self.__dict__.pop('_run', None)
self.__dict__.pop('args', None)
self.__dict__.pop('kwargs', None)
def | |
js.start_process()
expected_calls_made = [call('1.1.1.1', 'abc', 'xyz', 'pre', ANY, ANY, ANY),
call('1.1.1.15', 'abc', 'xyz', 'pre', ANY, ANY, ANY),
call('1.1.1.16', 'abc', 'xyz', 'pre', ANY, ANY, ANY),
]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
@patch('jnpr.jsnapy.jsnapy.get_path')
@patch('jnpr.jsnapy.SnapAdmin.connect')
def test_start_process_complete_flow_cmd_line_files_group_based(self, mock_connect, mock_path):
# Testcase to check complete call flow till connect when data passed in command line as files
# with multiple devices in the yml file but group based
js = SnapAdmin()
js.args.snapcheck = True
js.args.file = "main1.yml"
js.args.pre_snapfile = "pre"
js.args.post_snapfile = "post"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
expected_calls_made = [call('1.1.1.3', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.4', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.5', 'abc', 'def', 'pre', ANY, ANY, ANY),
]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
@patch('jnpr.jsnapy.jsnapy.get_path')
@patch('jnpr.jsnapy.SnapAdmin.connect')
def test_start_process_complete_flow_cmd_line_files_multiple_group_based(self, mock_connect, mock_path):
# Testcase to check complete call flow till connect when data passed in command line as files
# with multiple devices in the yml file but multiple group based
js = SnapAdmin()
js.args.snapcheck = True
js.args.file = "main_multiple_group.yml"
js.args.pre_snapfile = "pre"
js.args.post_snapfile = "post"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
expected_calls_made = [call('1.1.1.3', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.4', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.5', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.6', 'abc', 'def', 'pre', ANY, ANY, ANY),
call('1.1.1.12', 'abc', 'def', 'pre', ANY, ANY, ANY),
]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
@patch('jnpr.jsnapy.jsnapy.get_path')
@patch('jnpr.jsnapy.SnapAdmin.connect')
def test_start_process_complete_flow_with_port_in_file(self, mock_connect, mock_path):
# Testcase to check complete call flow till connect when data passed in command line as files
# with port present in file
js = SnapAdmin()
js.args.snapcheck = True
js.args.file = "main_with_port.yml"
js.args.pre_snapfile = "pre"
js.args.post_snapfile = "post"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
expected_calls_made = [call('1.1.1.1', 'abc', 'xyz', 'pre', ANY, ANY, ANY, port=44)]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
@patch('jnpr.jsnapy.jsnapy.get_path')
@patch('jnpr.jsnapy.SnapAdmin.connect')
def test_start_process_complete_flow_with_port_in_file_group_based(self, mock_connect, mock_path):
# Testcase to check complete call flow till connect when data passed in command line as files
# with port present in file based on group
js = SnapAdmin()
js.args.snapcheck = True
js.args.file = "main2_with_port.yml"
js.args.pre_snapfile = "pre"
js.args.post_snapfile = "post"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
expected_calls_made = [call('1.1.1.3', 'abc', 'def', 'pre', ANY, ANY, ANY, port=100),
call('1.1.1.4', 'abc', 'def', 'pre', ANY, ANY, ANY, port=101),
call('1.1.1.5', 'abc', 'def', 'pre', ANY, ANY, ANY, port=102),
]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
@patch('jnpr.jsnapy.jsnapy.get_path')
@patch('jnpr.jsnapy.SnapAdmin.connect')
def test_start_process_complete_flow_with_port_as_arg(self, mock_connect, mock_path):
# Testcase to check complete call flow till connect when data passed in command line as files
# with port present in file and in argument. Port in argument should have higher importance
js = SnapAdmin()
js.args.snapcheck = True
js.args.file = "main2_with_port.yml"
js.args.pre_snapfile = "pre"
js.args.post_snapfile = "post"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.args.port = 55
js.start_process()
expected_calls_made = [call('1.1.1.3', 'abc', 'def', 'pre', ANY, ANY, ANY, port=55),
call('1.1.1.4', 'abc', 'def', 'pre', ANY, ANY, ANY, port=55),
call('1.1.1.5', 'abc', 'def', 'pre', ANY, ANY, ANY, port=55),
]
self.assertTrue(mock_connect.called)
mock_connect.assert_has_calls(expected_calls_made, any_order=True)
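# The assertions in the tests above combine unittest.mock's call, ANY, and
# assert_has_calls(any_order=True). A minimal, self-contained illustration of
# the pattern (not part of this test suite):

```python
# ANY matches any single argument; assert_has_calls with any_order=True
# verifies that the expected calls occurred, in any order, possibly among others.
from unittest.mock import MagicMock, call, ANY

mock_connect = MagicMock()
mock_connect('1.1.1.4', 'abc', 'def', 'pre', object(), port=55)
mock_connect('1.1.1.3', 'abc', 'def', 'pre', object(), port=55)

expected = [call('1.1.1.3', 'abc', 'def', 'pre', ANY, port=55),
            call('1.1.1.4', 'abc', 'def', 'pre', ANY, port=55)]
mock_connect.assert_has_calls(expected, any_order=True)  # raises AssertionError on mismatch
```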
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_snap(self, mock_data):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    data = yaml.load(config_file, Loader=yaml.FullLoader)
js.snap(js.args.file, 'mock_file')
mock_data.assert_called_with(data, 'mock_file', "snap", None, local=False)
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_check(self, mock_data):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    data = yaml.load(config_file, Loader=yaml.FullLoader)
js.check(js.args.file, 'mock_file')
mock_data.assert_called_with(data, 'mock_file', "check", None, local=False)
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_snapcheck(self, mock_data):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    data = yaml.load(config_file, Loader=yaml.FullLoader)
js.snapcheck(js.args.file, 'mock_file')
mock_data.assert_called_with(data, 'mock_file', "snapcheck", None, local=False)
@patch('sys.exit')
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_action_api_based_error_file(self, mock_data, mock_exit):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
js.snapcheck(js.args.file, 'mock_file')
mock_exit.assert_called()
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_action_api_based_data_passed_in_string(self, mock_data):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    data = yaml.load(config_file, Loader=yaml.FullLoader)
js.snapcheck(data, 'mock_file')
mock_data.assert_called_with(data, 'mock_file', "snapcheck", None, local=False)
@patch('ncclient.manager.connect')
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling_with_dev')
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.api_based_handling')
def test_action_api_based_data_passed_in_string_with_device(self, mock_data, mock_dev_data,
mock_connect):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    data = yaml.load(config_file, Loader=yaml.FullLoader)
dev = Device(host='1.1.1.1', user='abc', passwd='<PASSWORD>')
dev.open()
js.snapcheck(data, 'mock_file', dev)
self.assertFalse(mock_data.called)
self.assertTrue(mock_dev_data.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_snap_not_checked(self, mock_mul_dev, mock_exit):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_1.yml')
js.args.snap = True
js.args.pre_snapfile = "mock_snap"
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_check_not_checked(self, mock_mul_dev, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_1.yml')
js.args.check = True
js.args.pre_snapfile = "mock_snap"
js.args.post_snapfile = "mock_snap2"
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_snap(self, mock_mul_dev, mock_exit):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_2.yml')
js.args.snap = True
js.args.pre_snapfile = "mock_snap"
self.db['store_in_sqlite'] = True
self.db['db_name'] = 'jbb.db'
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_check(self, mock_mul_dev, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_11.yml')
js.args.check = True
js.args.pre_snapfile = "mock_snap"
self.db['store_in_sqlite'] = True
self.db['check_from_sqlite'] = True
self.db['db_name'] = 'jbb.db'
self.db['first_snap_id'] = 1
self.db['second_snap_id'] = 0
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_check_different_snap_id(self, mock_mul_dev, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_4.yml')
js.args.check = True
js.args.pre_snapfile = "mock_snap"
self.db['store_in_sqlite'] = True
self.db['check_from_sqlite'] = True
self.db['db_name'] = 'jbb.db'
self.db['first_snap_id'] = 0
self.db['second_snap_id'] = 1
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_check_chksqlite(self, mock_mul_dev, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_5.yml')
js.args.check = True
js.args.pre_snapfile = "mock_snap"
self.db['store_in_sqlite'] = False
self.db['check_from_sqlite'] = True
self.db['db_name'] = 'jbb.db'
self.db['first_snap_id'] = 0
self.db['second_snap_id'] = 1
js.start_process()
self.assertEqual(js.db, self.db)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.connect_multiple_device')
def test_sqlite_parameters_for_snapcheck_strsqlite(self, mock_mul_dev, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_2.yml')
js.args.snapcheck = True
js.args.diff = False
js.args.pre_snapfile = "mock_snap"
self.db['store_in_sqlite'] = True
self.db['db_name'] = 'jbb.db'
js.start_process()
self.assertEqual(js.db, self.db)
def test_chk_database_1(self):
with self.assertRaises(SystemExit):
js = SnapAdmin()
js.db['store_in_sqlite'] = True
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    config_data = yaml.load(config_file, Loader=yaml.FullLoader)
# either we could have a config file with no db name,
# or we could delete the key/value pairs
del config_data['sqlite'][0]['database_name']
del config_data['sqlite'][0]['store_in_sqlite']
del config_data['sqlite'][0]['check_from_sqlite']
js.chk_database(config_data, 'mock_pre', 'mock_post')
def test_chk_database_2(self):
with self.assertRaises(SystemExit):
js = SnapAdmin()
js.db['store_in_sqlite'] = True
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    config_data = yaml.load(config_file, Loader=yaml.FullLoader)
del config_data['sqlite'][0]['store_in_sqlite']
del config_data['sqlite'][0]['check_from_sqlite']
config_data['sqlite'][0]['compare'] = 0
js.chk_database(config_data, 'mock_pre', 'mock_post', check=True)
def test_chk_database_3(self):
with self.assertRaises(SystemExit):
js = SnapAdmin()
js.db['store_in_sqlite'] = True
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    config_data = yaml.load(config_file, Loader=yaml.FullLoader)
del config_data['sqlite'][0]['store_in_sqlite']
del config_data['sqlite'][0]['check_from_sqlite']
config_data['sqlite'][0]['compare'] = 'ab'
js.chk_database(config_data, 'mock_pre', 'mock_post', check=True)
def test_chk_database_4(self):
with self.assertRaises(SystemExit):
js = SnapAdmin()
js.db['store_in_sqlite'] = True
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    config_data = yaml.load(config_file, Loader=yaml.FullLoader)
del config_data['sqlite'][0]['store_in_sqlite']
del config_data['sqlite'][0]['check_from_sqlite']
config_data['sqlite'][0]['compare'] = '0,1,2'
js.chk_database(config_data, 'mock_pre', 'mock_post', check=True)
def test_chk_database_5(self):
with self.assertRaises(SystemExit):
js = SnapAdmin()
js.db['store_in_sqlite'] = True
js.args.file = os.path.join(os.path.dirname(__file__), 'configs', 'main.yml')
with open(js.args.file, 'r') as config_file:
    config_data = yaml.load(config_file, Loader=yaml.FullLoader)
del config_data['sqlite'][0]['store_in_sqlite']
del config_data['sqlite'][0]['check_from_sqlite']
config_data['sqlite'][0]['compare'] = '0'
js.chk_database(config_data, 'mock_pre', 'mock_post', check=True)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.compare_tests')
@patch('getpass.getpass')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.get_path')
def test_check_mail(self, mock_path, mock_notify, mock_pass, mock_compare, mock_arg):
argparse.ArgumentParser.parse_args = MagicMock()
argparse.ArgumentParser.parse_args.return_value = argparse.Namespace(check=False,
diff=False, file=None, hostname=None,
                                    login=None, passwd='<PASSWORD>', port=None,
post_snapfile=None, pre_snapfile=None,
snap=False, snapcheck=False,
verbosity=None, version=False)
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail_2.yml')
js.args.check = True
js.args.snap = False
js.args.snapcheck = False
js.args.pre_snapfile = "mock_snap"
js.args.post_snapfile = "mock_snap2"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
self.assertTrue(mock_notify.called)
self.assertTrue(mock_pass.called)
self.assertTrue(mock_compare.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.jsnapy.Device')
@patch('jnpr.jsnapy.SnapAdmin.generate_rpc_reply')
@patch('jnpr.jsnapy.SnapAdmin.compare_tests')
@patch('getpass.getpass')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.logging.getLogger')
@patch('jnpr.jsnapy.jsnapy.get_path')
def test_snapcheck_mail(
self, mock_path, mock_getlogger, mock_notify, mock_pass, mock_compare, mock_reply, mock_dev, mock_arg):
argparse.ArgumentParser.parse_args = MagicMock()
argparse.ArgumentParser.parse_args.return_value = argparse.Namespace(check=False,
diff=False, file=None, hostname=None,
                                    login=None, passwd='<PASSWORD>', port=None,
post_snapfile=None, pre_snapfile=None,
snap=False, snapcheck=False,
verbosity=None, version=False)
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail_2.yml')
js.args.snapcheck = True
js.args.pre_snapfile = "mock_snap"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
self.assertTrue(mock_notify.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.jsnapy.SnapAdmin.generate_rpc_reply')
@patch('jnpr.jsnapy.jsnapy.Device')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.logging.getLogger')
def test_snap_mail(self, mock_logger, mock_notify, mock_pass, mock_compare, mock_arg):
argparse.ArgumentParser.parse_args = MagicMock()
argparse.ArgumentParser.parse_args.return_value = argparse.Namespace(check=False,
diff=False, file=None, hostname=None,
                                    login=None, passwd='<PASSWORD>', port=None,
post_snapfile=None, pre_snapfile=None,
snap=False, snapcheck=False,
verbosity=None, version=False)
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail.yml')
js.args.snap = True
js.args.pre_snapfile = "mock_snap"
js.start_process()
self.assertFalse(mock_notify.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.compare_tests')
@patch('getpass.getpass')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.get_path')
def test_check_mail_password(
self, mock_path, mock_notify, mock_pass, mock_compare, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail_2.yml')
js.args.check = True
js.args.pre_snapfile = "mock_snap"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
js.start_process()
self.assertTrue(mock_pass.called)
self.assertTrue(mock_notify.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.compare_tests')
@patch('getpass.getpass')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.get_path')
def test_conditional_mail_1(self, mock_path, mock_notify, mock_pass, mock_compare, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail_condition.yml')
js.args.check = True
js.args.snap = False
js.args.snapcheck = False
js.args.pre_snapfile = "mock_snap"
js.args.post_snapfile = "mock_snap2"
mock_path.return_value = os.path.join(os.path.dirname(__file__), 'configs')
mock_compare.return_value = MagicMock(result='Failed')
js.start_process()
self.assertTrue(mock_notify.called)
self.assertTrue(mock_pass.called)
self.assertTrue(mock_compare.called)
@patch('argparse.ArgumentParser.exit')
@patch('jnpr.jsnapy.SnapAdmin.compare_tests')
@patch('getpass.getpass')
@patch('jnpr.jsnapy.notify.Notification.notify')
@patch('jnpr.jsnapy.jsnapy.get_path')
def test_conditional_mail_2(self, mock_path, mock_notify, mock_pass, mock_compare, mock_arg):
js = SnapAdmin()
js.args.file = os.path.join(os.path.dirname(__file__),
'configs', 'main_mail_condition.yml')
js.args.diff = | |
def enterReturn_type121(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#return_type121.
def exitReturn_type121(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#member_name.
def enterMember_name(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#member_name.
def exitMember_name(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#method_body.
def enterMethod_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#method_body.
def exitMethod_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#formal_parameter_list.
def enterFormal_parameter_list(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#formal_parameter_list.
def exitFormal_parameter_list(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#fixed_parameters.
def enterFixed_parameters(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#fixed_parameters.
def exitFixed_parameters(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#fixed_parameter.
def enterFixed_parameter(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#fixed_parameter.
def exitFixed_parameter(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#default_argument.
def enterDefault_argument(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#default_argument.
def exitDefault_argument(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#parameter_modifier.
def enterParameter_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#parameter_modifier.
def exitParameter_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#parameter_array.
def enterParameter_array(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#parameter_array.
def exitParameter_array(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#property_declaration.
def enterProperty_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#property_declaration.
def exitProperty_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#property_modifiers.
def enterProperty_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#property_modifiers.
def exitProperty_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#property_modifier.
def enterProperty_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#property_modifier.
def exitProperty_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#accessor_declarations.
def enterAccessor_declarations(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#accessor_declarations.
def exitAccessor_declarations(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#get_accessor_declaration.
def enterGet_accessor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#get_accessor_declaration.
def exitGet_accessor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#set_accessor_declaration.
def enterSet_accessor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#set_accessor_declaration.
def exitSet_accessor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#accessor_modifier.
def enterAccessor_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#accessor_modifier.
def exitAccessor_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#accessor_body.
def enterAccessor_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#accessor_body.
def exitAccessor_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#event_declaration.
def enterEvent_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#event_declaration.
def exitEvent_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#event_modifiers.
def enterEvent_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#event_modifiers.
def exitEvent_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#event_modifier.
def enterEvent_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#event_modifier.
def exitEvent_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#event_accessor_declarations.
def enterEvent_accessor_declarations(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#event_accessor_declarations.
def exitEvent_accessor_declarations(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#add_accessor_declaration.
def enterAdd_accessor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#add_accessor_declaration.
def exitAdd_accessor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#remove_accessor_declaration.
def enterRemove_accessor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#remove_accessor_declaration.
def exitRemove_accessor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#indexer_declaration.
def enterIndexer_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#indexer_declaration.
def exitIndexer_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#indexer_modifiers.
def enterIndexer_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#indexer_modifiers.
def exitIndexer_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#indexer_modifier.
def enterIndexer_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#indexer_modifier.
def exitIndexer_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#indexer_declarator.
def enterIndexer_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#indexer_declarator.
def exitIndexer_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#operator_declaration.
def enterOperator_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#operator_declaration.
def exitOperator_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#operator_modifiers.
def enterOperator_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#operator_modifiers.
def exitOperator_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#operator_modifier.
def enterOperator_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#operator_modifier.
def exitOperator_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#operator_declarator.
def enterOperator_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#operator_declarator.
def exitOperator_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#unary_operator_declarator.
def enterUnary_operator_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#unary_operator_declarator.
def exitUnary_operator_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#overloadable_unary_operator.
def enterOverloadable_unary_operator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#overloadable_unary_operator.
def exitOverloadable_unary_operator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#binary_operator_declarator.
def enterBinary_operator_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#binary_operator_declarator.
def exitBinary_operator_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#overloadable_binary_operator.
def enterOverloadable_binary_operator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#overloadable_binary_operator.
def exitOverloadable_binary_operator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#overloadable_operator.
def enterOverloadable_operator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#overloadable_operator.
def exitOverloadable_operator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#conversion_operator_declarator.
def enterConversion_operator_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#conversion_operator_declarator.
def exitConversion_operator_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#operator_body.
def enterOperator_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#operator_body.
def exitOperator_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_declaration.
def enterConstructor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_declaration.
def exitConstructor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_modifiers.
def enterConstructor_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_modifiers.
def exitConstructor_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_modifier.
def enterConstructor_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_modifier.
def exitConstructor_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_declarator.
def enterConstructor_declarator(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_declarator.
def exitConstructor_declarator(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_initializer.
def enterConstructor_initializer(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_initializer.
def exitConstructor_initializer(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#constructor_body.
def enterConstructor_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#constructor_body.
def exitConstructor_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#static_constructor_declaration.
def enterStatic_constructor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#static_constructor_declaration.
def exitStatic_constructor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#static_constructor_modifiers.
def enterStatic_constructor_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#static_constructor_modifiers.
def exitStatic_constructor_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#static_constructor_body.
def enterStatic_constructor_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#static_constructor_body.
def exitStatic_constructor_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#destructor_declaration.
def enterDestructor_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#destructor_declaration.
def exitDestructor_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#destructor_body.
def enterDestructor_body(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#destructor_body.
def exitDestructor_body(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#body.
def enterBody(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#body.
def exitBody(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#struct_declaration.
def enterStruct_declaration(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#struct_declaration.
def exitStruct_declaration(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#struct_modifiers.
def enterStruct_modifiers(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#struct_modifiers.
def exitStruct_modifiers(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#struct_modifier.
def enterStruct_modifier(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#struct_modifier.
def exitStruct_modifier(self, ctx):
pass
# Enter a parse tree produced by CSharp4Parser#struct_interfaces.
def enterStruct_interfaces(self, ctx):
pass
# Exit a parse tree produced by CSharp4Parser#struct_interfaces.
def exitStruct_interfaces(self, ctx):
    pass
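# The generated stubs above implement ANTLR's listener pattern: a tree walker
# calls enterX before visiting a node's children and exitX afterwards. A
# minimal sketch of that dispatch, independent of the ANTLR runtime (Node,
# walk, and TraceListener are illustration-only names):

```python
# Illustration-only sketch of listener dispatch: enter* fires pre-order,
# exit* fires post-order, and missing handlers are no-ops.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def walk(listener, node):
    getattr(listener, "enter" + node.name, lambda n: None)(node)
    for child in node.children:
        walk(listener, child)
    getattr(listener, "exit" + node.name, lambda n: None)(node)

class TraceListener:
    def __init__(self):
        self.events = []
    def enterStruct_declaration(self, node): self.events.append("enterStruct")
    def exitStruct_declaration(self, node): self.events.append("exitStruct")
    def enterBody(self, node): self.events.append("enterBody")
    def exitBody(self, node): self.events.append("exitBody")

t = TraceListener()
walk(t, Node("Struct_declaration", [Node("Body")]))
# t.events now records enter/exit in nesting order
```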
# ohbm/attendees.py
'''
attendees: part of the ohbm-api
'''
from ohbm.utils import get_url, ordered_to_dict, parse_item, parse_items, capitalize
class Attendees():
def __init__(self,api=None):
if api is None:
print("Please use this module from ohbm.api")
else:
self.api = api
def get_result(self,url,args=None):
'''Append each non-None argument and the apiKey to a url, then fetch it with get_url from utils
:param url: the url to fetch the result for
:param args: a dictionary of {"argumentName": value} pairs to append to the URL
'''
if args is not None:
    for arg_name, arg_value in args.items():
        if arg_value is not None:
            url = "%s&%s=%s" % (url, arg_name, arg_value)
url = "%s&apiKey=%s" %(url,self.api.key)
return get_url(url)
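# For illustration, the URL composition performed by get_result() can be
# sketched as a standalone helper; build_url and the "SECRET" key are
# hypothetical names used only for this sketch:

```python
# Every non-None argument is appended as &name=value, then the apiKey.
def build_url(url, args=None, key="SECRET"):
    if args is not None:
        for arg_name, arg_value in args.items():
            if arg_value is not None:
                url = "%s&%s=%s" % (url, arg_name, arg_value)
    return "%s&apiKey=%s" % (url, key)
```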
def getSpeakers(self,isScheduledFor,includeActiveStatus,formIDs,modifiedOnAfter,modifiedOnBefore):
'''getSpeakers
Returns an XML payload containing a full listing of all speaker data. Speakers are defined as attendees
who are participating in the meeting in a specific capacity, e.g. presenting in a session,
moderating a session, or presenting a poster. This service also returns the schedule for each speaker.
This includes all itinerary items that are 'active' (can be changed via includeActiveStatus parameter),
and 'public'.
Sample Request URL: http://.../?do=cnt.getservice&service=getSpeakers
Parameter Options:
:param isScheduledFor: Numeric value of (1,2 or 3) - This filters the speakers returned by schedule type:
Valid values; 1 = For Event, 2 = For Abstract, 3 = For Event OR Abstract. Note that this parameter
only affects whether or not a speaker record is included in the results set. It does not change the
itinerary items that are returned in each speaker's schedule block. For example, if isScheduledFor=2,
Any speaker who is linked to at least 1 Abstract will be shown. Any 'active', and 'public' itinerary
items that are linked to an Event will still be shown in the schedule for this speaker.
:param includeActiveStatus: Numeric value of (0,1, or 99) - Determines if active, inactive, or ALL itinerary
items are shown in the <schedule> block.
:param formIDs: Comma delimited list of form ids
:param modifiedOnAfter: Valid date string. Using this filter will return speakers who have updated contact
information, schedule, or form responses after/on this date.
:param modifiedOnBefore: Valid date string. Using this filter will return speakers who have updated contact
information, schedule, or form responses before/on this date.
'''
url = "%s/?do=cnt.getservice&service=getSpeakers" %(self.api.base)
args = {"isScheduledFor":isScheduledFor,
"includeActiveStatus":includeActiveStatus,
"formIDs":formIDs,
"modifiedOnAfter":modifiedOnAfter,
"modifiedOnBefore":modifiedOnBefore}
result = self.get_result(url,args)
return parse_items(result,'speaker')
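# As a readability aid for the numeric isScheduledFor codes documented above,
# one might keep a small mapping; this helper is hypothetical and not part of
# the service API:

```python
# Names for the isScheduledFor codes described in the docstring above:
# 1 = For Event, 2 = For Abstract, 3 = For Event OR Abstract.
IS_SCHEDULED_FOR = {"event": 1, "abstract": 2, "event_or_abstract": 3}

def scheduled_filter(kind):
    """Translate a readable name into the numeric isScheduledFor value."""
    return IS_SCHEDULED_FOR[kind]
```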
def getSpeaker(self,speakerID,isScheduledFor,formIDs):
'''getSpeaker
Returns an XML payload containing a full listing of one speaker. Speaker data consists of their contact
information, bio and photo. Note: photos are returned as a URL to the actual photo. This service also
returns the schedule for the speaker. This includes all itinerary items that are 'active' (can be changed
via includeActiveStatus parameter), and 'public'.
Sample Request URL: http://.../?do=cnt.getservice&service=getSpeaker
Parameter Options:
:param speakerID: (required) Numeric value of a speaker.
:param includeActiveStatus: Numeric value of (0, 1, or 99) - Determines if active, inactive, or ALL
    itinerary items are shown in the <schedule> block.
:param isScheduledFor: Numeric value of (1,2 or 3) - This filters the speakers returned by schedule type:
Valid values; 1 = For Event, 2 = For Abstract, 3 = For Event OR Abstract. Note that this parameter
only affects whether or not a speaker record is included in the results set. It does not change the
itinerary items that are returned in each speaker's schedule block. For example, if isScheduledFor=2,
Any speaker who is linked to at least 1 Abstract will be shown. Any 'active', and 'public' itinerary
items that are linked to an Event will still be shown in the schedule for this speaker.
:param formIDs: Comma delimited list of form ids
'''
url = "%s/?do=cnt.getservice&service=getSpeaker" %(self.api.base)
args = {"speakerID":speakerID,
"isScheduledFor":isScheduledFor,
"formIDs":formIDs}
result = self.get_result(url,args)
return parse_item(result,'speaker')
def getAttendeesRegData(self,attendeeID=None,startInitial=None,endInitial=None,formID=None):
'''getAttendeesRegData
Returns an XML payload containing a full listing of all attendee purchases. For each attendee their
purchase or order history will be returned in the XML payload. This service is only allowed to be run
between 11 PM EST and 5 AM EST.
Sample Request URL: http://.../?do=cnt.getservice&service=getAttendeesRegData
Parameter Options:
:param attendeeID: Numeric value, Identifies the attendee record to update.
:param startInitial: 1 character used to indicate the starting initial of the last name range to include. E.g. A
:param endInitial: 1 character used to indicate the ending initial of the last name range to include. If the
'startInitial' is provided but no 'endInitial' is provided the system uses the 'startInitial' as the
'endInitial'. E.g. B
:param formID: Numeric value of a demographic form
'''
url = "%s/?do=cnt.getservice&service=getAttendeesRegData" %(self.api.base)
# Initials should probably always be uppercase
if startInitial is not None:
startInitial = capitalize(startInitial)
if endInitial is not None:
endInitial = capitalize(endInitial)
args = {"attendeeID":attendeeID,
"startInitial":startInitial,
"endInitial":endInitial,
"formID":formID}
result = self.get_result(url,args)
return parse_items(result,'attendee')
def getAttendeesItineraryData(self,attendeeID=None,startInitial=None,endInitial=None,insertedInLastHoursSpan=None):
'''getAttendeesItineraryData
Returns an XML payload containing a full listing of all attendee itinerary data. For each attendee their
itinerary will be returned in the XML payload. This service is only allowed to be run between 11 PM
EST and 5 AM EST. Note: For clients using the Attendee Itinerary Update service, only itinerary
items with the attributes='ReadWrite' can be modified.
Sample Request URL: http://.../?do=cnt.getservice&service=getAttendeesItineraryData
Parameter Options:
:param attendeeID: Numeric value; Identifies the attendee record to update.
:param startInitial: 1 character used to indicate the starting initial of the last name range to include. E.g. A
:param endInitial: 1 character used to indicate the ending initial of the last name range to include. If the
'startInitial' is provided but no 'endInitial' is provided the system uses the 'startInitial' as the
'endInitial'. E.g. B
:param insertedInLastHoursSpan: Used to indicate the number of hours of newly inserted records to include.
The default is 24. This must be a valid integer between 1 and 26,280.
'''
url = "%s/?do=cnt.getservice&service=getAttendeesItineraryData" %(self.api.base)
# Initials should probably always be uppercase
if startInitial is not None:
startInitial = capitalize(startInitial)
if endInitial is not None:
endInitial = capitalize(endInitial)
args = {"attendeeID":attendeeID,
"startInitial":startInitial,
"endInitial":endInitial,
"insertedInLastHoursSpan":insertedInLastHoursSpan}
result = self.get_result(url,args)
return parse_items(result,'attendee')
def getAttendeesFormResponses(self,formID,startInitial=None,endInitial=None):
'''getAttendeesFormResponses
Returns an XML payload containing a full listing of attendee contact data and form responses for the
specified form. For each attendee a 'formResponses' node returned in the XML payload. This service
is only allowed to be run between 11 PM EST and 5 AM EST.
Sample Request URL: http://.../?do=cnt.getservice&service=getAttendeesFormResponse
Parameter Options:
:param *formID: Numeric value; Identifies the form for which to retrieve responses.
:param attendeeID: Numeric value; Identifies the attendee record to update.
:param startInitial: 1 character used to indicate the starting initial of the last name range to include. E.g. A
:param endInitial: 1 character used to indicate the ending initial of the last name range to include. If the
'startInitial' is provided but no 'endInitial' is provided the system uses the 'startInitial' as the
'endInitial'. E.g. B
'''
# Note to developer - this endpoint has not been tested, was reported wrong in docs
# https://github.com/vsoch/ohbm/issues/3
url = "%s/?do=cnt.getservice&service=getAttendeesFormResponses" %(self.api.base)
# Initials should probably always be uppercase
if startInitial is not None:
startInitial = capitalize(startInitial)
if endInitial is not None:
endInitial = capitalize(endInitial)
args = {"formID":formID,
"startInitial":startInitial,
"endInitial":endInitial}
result = self.get_result(url,args)
return parse_items(result,'attendee')
def updateItinerary(self,attendeeID,itineraryID,abstractID=None,eventID=None,exhibitorID=None,
Description=None,startsOn=None,endsOn=None):
'''updateItinerary
Returns an XML payload containing the results of the attendee itinerary data update. The 'stat'
attribute is used to indicate the success or failure of the request. Extended description of all options are
listed below.
Sample Request URL: http://.../?do=cnt.getservice&service=updateItinerary&[Parameter List]
Parameter Options:
:param *attendeeID: Identifies the attendee record to update. Always required.
:param *itineraryID: Identifies the itinerary record to update. Valid options: update, delete; the update
option is assumed if no action is provided.
:param abstractID: Identifies the abstract record being added, updated or removed
:param eventID: Identifies the event record being added, updated or removed
:param exhibitorID: Identifies the exhibitor record being added, updated or removed
:param title: Title of the activity. Limit 300 characters. Required if record is not linked to an event,
abstract or exhibitor. If linked to an event, abstract or
not assigned to anyone, or if the orderHintsByAssignee '
'dictionary does not provide an order hint for the user the task is assigned to. The format is '
'defined as outlined here.')
with self.argument_context('planner planner-plan-task update-bucket-task-board-format') as c:
c.argument('planner_plan_id', type=str, help='key: id of plannerPlan')
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('order_hint', type=str, help='Hint used to order tasks in the Bucket view of the Task Board. The '
'format is defined as outlined here.')
with self.argument_context('planner planner-plan-task update-detail') as c:
c.argument('planner_plan_id', type=str, help='key: id of plannerPlan')
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('checklist', type=validate_file_or_dict, help='plannerChecklistItems Expected value: '
'json-string/@json-file.')
c.argument('description', type=str, help='Description of the task')
c.argument('preview_type', arg_type=get_enum_type(['automatic', 'noPreview', 'checklist', 'description',
'reference']), help='')
c.argument('references', type=validate_file_or_dict, help='plannerExternalReferences Expected value: '
'json-string/@json-file.')
with self.argument_context('planner planner-plan-task update-progress-task-board-format') as c:
c.argument('planner_plan_id', type=str, help='key: id of plannerPlan')
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('order_hint', type=str, help='Hint value used to order the task on the Progress view of the Task '
'Board. The format is defined as outlined here.')
with self.argument_context('planner planner-task delete-assigned-to-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('if_match', type=str, help='ETag')
with self.argument_context('planner planner-task delete-bucket-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('if_match', type=str, help='ETag')
with self.argument_context('planner planner-task delete-detail') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('if_match', type=str, help='ETag')
with self.argument_context('planner planner-task delete-progress-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('if_match', type=str, help='ETag')
with self.argument_context('planner planner-task show-assigned-to-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('select', nargs='+', help='Select properties to be returned')
c.argument('expand', nargs='+', help='Expand related entities')
with self.argument_context('planner planner-task show-bucket-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('select', nargs='+', help='Select properties to be returned')
c.argument('expand', nargs='+', help='Expand related entities')
with self.argument_context('planner planner-task show-detail') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('select', nargs='+', help='Select properties to be returned')
c.argument('expand', nargs='+', help='Expand related entities')
with self.argument_context('planner planner-task show-progress-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('select', nargs='+', help='Select properties to be returned')
c.argument('expand', nargs='+', help='Expand related entities')
with self.argument_context('planner planner-task update-assigned-to-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('order_hints_by_assignee', type=validate_file_or_dict, help='plannerOrderHintsByAssignee Expected '
'value: json-string/@json-file.')
c.argument('unassigned_order_hint', type=str, help='Hint value used to order the task on the AssignedTo view '
'of the Task Board when the task is not assigned to anyone, or if the orderHintsByAssignee '
'dictionary does not provide an order hint for the user the task is assigned to. The format is '
'defined as outlined here.')
with self.argument_context('planner planner-task update-bucket-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('order_hint', type=str, help='Hint used to order tasks in the Bucket view of the Task Board. The '
'format is defined as outlined here.')
with self.argument_context('planner planner-task update-detail') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('checklist', type=validate_file_or_dict, help='plannerChecklistItems Expected value: '
'json-string/@json-file.')
c.argument('description', type=str, help='Description of the task')
c.argument('preview_type', arg_type=get_enum_type(['automatic', 'noPreview', 'checklist', 'description',
'reference']), help='')
c.argument('references', type=validate_file_or_dict, help='plannerExternalReferences Expected value: '
'json-string/@json-file.')
with self.argument_context('planner planner-task update-progress-task-board-format') as c:
c.argument('planner_task_id', type=str, help='key: id of plannerTask')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('order_hint', type=str, help='Hint value used to order the task on the Progress view of the Task '
'Board. The format is defined as outlined here.')
with self.argument_context('planner user delete-planner') as c:
c.argument('user_id', type=str, help='key: id of user')
c.argument('if_match', type=str, help='ETag')
with self.argument_context('planner user show-planner') as c:
c.argument('user_id', type=str, help='key: id of user')
c.argument('select', nargs='+', help='Select properties to be returned')
c.argument('expand', nargs='+', help='Expand related entities')
with self.argument_context('planner user update-planner') as c:
c.argument('user_id', type=str, help='key: id of user')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('plans', type=validate_file_or_dict, help='Read-only. Nullable. Returns the plannerPlans shared '
'with the user. Expected value: json-string/@json-file.')
c.argument('tasks', type=validate_file_or_dict, help='Read-only. Nullable. Returns the plannerTasks assigned '
'to the user. Expected value: json-string/@json-file.')
with self.argument_context('planner user-planner create-plan') as c:
c.argument('user_id', type=str, help='key: id of user')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('created_date_time', help='Read-only. Date and time at which the plan is created. The Timestamp '
'type represents date and time information using ISO 8601 format and is always in UTC time. For '
'example, midnight UTC on Jan 1, 2014 would look like this: \'2014-01-01T00:00:00Z\'')
c.argument('owner', type=str, help='ID of the Group that owns the plan. A valid group must exist before this '
'field can be set. After it is set, this property can’t be updated.')
c.argument('title', type=str, help='Required. Title of the plan.')
c.argument('buckets', type=validate_file_or_dict, help='Read-only. Nullable. Collection of buckets in the '
'plan. Expected value: json-string/@json-file.')
c.argument('tasks', type=validate_file_or_dict, help='Read-only. Nullable. Collection of tasks in the plan. '
'Expected value: json-string/@json-file.')
c.argument('microsoft_graph_entity_id', type=str, help='Read-only.', arg_group='Details')
c.argument('category_descriptions', action=AddCategoryDescriptions, nargs='+',
help='plannerCategoryDescriptions', arg_group='Details')
c.argument('shared_with', type=validate_file_or_dict, help='plannerUserIds Expected value: '
'json-string/@json-file.', arg_group='Details')
c.argument('application', action=AddApplication, nargs='+', help='identity', arg_group='Created By')
c.argument('device', action=AddApplication, nargs='+', help='identity', arg_group='Created By')
c.argument('user', action=AddApplication, nargs='+', help='identity', arg_group='Created By')
with self.argument_context('planner user-planner create-task') as c:
c.argument('user_id', type=str, help='key: id of user')
c.argument('id_', options_list=['--id'], type=str, help='Read-only.')
c.argument('active_checklist_item_count', type=int, help='Number of checklist items with value set to false, '
'representing incomplete items.')
c.argument('applied_categories', type=validate_file_or_dict, help='plannerAppliedCategories Expected value: '
'json-string/@json-file.')
c.argument('assignee_priority', type=str, help='Hint used to order items of this type in a list view. The '
'format is defined as outlined here.')
c.argument('assignments', type=validate_file_or_dict, help='plannerAssignments Expected value: '
'json-string/@json-file.')
c.argument('bucket_id', type=str, help='Bucket ID to which the task belongs. The bucket needs to be in the '
'plan that the task is in. It is 28 characters long and case-sensitive. Format validation is done '
'on the service.')
c.argument('checklist_item_count', type=int, help='Number of checklist items that are present on the task.')
c.argument('completed_date_time', help='Read-only. Date and time at which the \'percentComplete\' of the task '
'is set to \'100\'. The Timestamp type represents date and time information using ISO 8601 format '
'and is always in UTC time. For example, midnight UTC on Jan 1, 2014 would look like this: '
'\'2014-01-01T00:00:00Z\'')
c.argument('conversation_thread_id', type=str, help='Thread ID of the conversation on the task. This is the ID '
'of the conversation thread object created in the group.')
c.argument('created_date_time', help='Read-only. Date and time at which the task is created. The Timestamp '
'type represents date and time information using ISO 8601 format and is always in UTC time. For '
'example, midnight UTC on Jan 1, 2014 would look like this: \'2014-01-01T00:00:00Z\'')
c.argument('due_date_time', help='Date and time at which the task is due. The Timestamp type represents date '
'and time information using ISO 8601 format and is always in UTC time. For example, midnight UTC on '
'Jan 1, 2014 would look like this: \'2014-01-01T00:00:00Z\'')
c.argument('has_description', arg_type=get_three_state_flag(), help='Read-only. Value is true if the details '
'object of the task has a non-empty description and false otherwise.')
c.argument('order_hint', type=str, help='Hint used to order items of this type in a list view. The format is '
'defined as outlined here.')
c.argument('percent_complete', type=int, help='Percentage of task completion. When set to 100, the task is '
'considered completed.')
c.argument('plan_id', type=str, help='Plan ID to which the task belongs.')
c.argument('preview_type', arg_type=get_enum_type(['automatic', 'noPreview', 'checklist', 'description',
'reference']), help='')
c.argument('reference_count', type=int, help='Number of external references that exist on the task.')
c.argument('start_date_time', help='Date and time at which the task starts. The Timestamp type represents date '
'and time information using ISO 8601 format and is always in UTC time. For example, midnight UTC on '
'Jan 1, 2014 would look like this: \'2014-01-01T00:00:00Z\'')
c.argument('title', type=str, help='Title of the task.')
c.argument('bucket_task_board_format', action=AddBucketTaskBoardFormat, nargs='+',
help='plannerBucketTaskBoardTaskFormat')
c.argument('progress_task_board_format', action=AddProgressTaskBoardFormat, nargs='+',
help='plannerProgressTaskBoardTaskFormat')
c.argument('microsoft_graph_entity_id', type=str, help='Read-only.', arg_group='Details')
c.argument('checklist', type=validate_file_or_dict, help='plannerChecklistItems Expected value: '
'json-string/@json-file.', arg_group='Details')
c.argument('description', type=str, help='Description of the task', arg_group='Details')
c.argument('microsoft_graph_planner_preview_type', arg_type=get_enum_type(['automatic', 'noPreview',
'checklist', 'description',
'reference']), help='',
arg_group='Details')
c.argument('references', type=validate_file_or_dict, help='plannerExternalReferences Expected value: '
'json-string/@json-file.', arg_group='Details')
c.argument('id1', type=str, help='Read-only.', arg_group='Assigned To Task Board Format')
c.argument('order_hints_by_assignee', type=validate_file_or_dict, help='plannerOrderHintsByAssignee Expected '
'value: json-string/@json-file.', arg_group='Assigned To Task Board Format')
c.argument('unassigned_order_hint', type=str, help='Hint value used to order the task on the AssignedTo view '
'of the Task Board when the task is not assigned to anyone, or if the orderHintsByAssignee | |
# CodeArena/db.py
'''1. email and hash of pwd
verify email and pwd'''
from CodeArena import bcrypt
import mysql.connector as ms
from CodeArena.CodeUtilities import convert_to_file, get_countdown_time
import os
class userdbop:
def logincheck(self, email, pwd):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
# print(email)
# print("pwd sent=", pwd)
try:
cur = cnx.cursor()
# print(f'here is d 1')
# print("ohhhhdwhfhefhewfh")
# pw_hash = bcrypt.generate_password_hash('<PASSWORD>').decode('utf-8')
# print("hash pass", pw_hash)
stmt = f'Select `Password` from `users` where `Email` ="{email}" and actives = 1'
cur.execute(stmt)
# print(f'here is d 1')
d = cur.fetchall()
# print(f'here is d {d}')
if d:
t = d[0]
else:
return False
# print(t)
# print("list is=", d)
a = bcrypt.check_password_hash(bytes(t[0], 'utf-8'), pwd)
return a
except ms.Error as e:
# print(e)
return False
except TypeError as e:
# print(e)
return False
def verifyemail(self, email):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor()
stmt = f'Select `Password` from `users` where `Email` ="{email}" and actives = 1'
cur.execute(stmt)
# print(f'here is d 1')
d = cur.fetchall()
# print(f'here is d {d}')
if d:
return True
else:
return False
except ms.Error as e:
# print(e)
return False
except TypeError as e:
# print(e)
return False
def update_regitration(self, email):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor()
st = f'UPDATE `users` SET actives= 1 WHERE `Email`="{email}"'
cur.execute(st)
cnx.commit()
return True
except ms.Error as e:
# print(e)
return False
except TypeError as e:
# print(e)
return False
def registration(self, email, usn, pwd):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor()
u = []
e = []
flu, fle = False, False
st = f'Select * from `users` where `Email`= "{email}"'
cur.execute(st)
u = cur.fetchall()
if not u: # if list empty
flu = True
st1 = f'Select * from `users` where `Username`= "{usn}"'
cur.execute(st1)
e = cur.fetchall()
if not e:
fle = True
if flu is True and fle is True:
z = bcrypt.generate_password_hash(pwd).decode('utf-8')
cur = cnx.cursor(prepared=True)
stm21t = f'INSERT INTO `users`(`Email`, `Username`, `Password`) VALUES (%s,%s,%s)'
cur.execute(stm21t, (email, usn, z))
cnx.commit()
return 'Pass'
else:
return 'Username'
else: # list not empty so user has already registered.
return 'Email'
except ms.Error as e:
# print("db error")
return False
except TypeError as e:
# print(e)
return False
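registration() stores a salted hash via flask-bcrypt, and logincheck() later verifies a submitted password against it. A dependency-free sketch of that same hash-then-verify roundtrip, using the standard library's PBKDF2 purely as a stand-in for bcrypt:

```python
# Stand-in for the bcrypt generate/check pair used above; PBKDF2 from the
# standard library is substituted so the sketch runs without flask-bcrypt.
import hashlib
import hmac
import os

def hash_password(pwd, salt=None):
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pwd.encode("utf-8"), salt, 100_000)
    return salt, digest

def check_password(pwd, salt, digest):
    # Constant-time comparison, as bcrypt.check_password_hash also guarantees.
    candidate = hashlib.pbkdf2_hmac("sha256", pwd.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("s3cret")
```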
def contactus(self, name, email, sub, mess):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor(prepared=True)
stmt = f'INSERT INTO `contactus`(`Name`, `Email`, `Subject`, `Message`) VALUES (%s,%s,%s,%s)'
cur.execute(stmt, (name, email, sub, mess))
cnx.commit()
return True
except ms.Error as e:
# print(e)
return False
except TypeError as e:
# print(e)
return False
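contactus() binds its values through a prepared cursor, which is the right pattern: the driver keeps user input out of the SQL text. The f-string queries elsewhere in this class splice email/username straight into the statement and are open to SQL injection; the same placeholder style fixes that. A minimal illustration (sqlite3 is used here purely so the sketch runs anywhere; its placeholder is `?` where mysql-connector uses `%s`):

```python
# Parameterized-query sketch; sqlite3 stands in for mysql.connector so the
# example is self-contained. Note the driver-specific placeholder (? vs %s).
import sqlite3

cnx = sqlite3.connect(":memory:")
cur = cnx.cursor()
cur.execute("CREATE TABLE contactus (Name TEXT, Email TEXT, Subject TEXT, Message TEXT)")

# Values are bound by the driver, never interpolated into the SQL string,
# so a hostile subject line is stored verbatim instead of executed.
cur.execute(
    "INSERT INTO contactus (Name, Email, Subject, Message) VALUES (?, ?, ?, ?)",
    ("Ada", "ada@example.com", 'x"; DROP TABLE contactus; --', "hello"),
)
cnx.commit()
rows = cur.execute("SELECT Subject FROM contactus WHERE Name = ?", ("Ada",)).fetchall()
```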
def fetchupcomingbattles(self):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
d = []
try:
cur = cnx.cursor()
stmt = f'SELECT Compid,imgs,cName,Des,Typecmp,duration,Date,horc,Time1,Org FROM `competitions` where Typecmp = 1 or Typecmp = 2 ORDER BY `Date` ASC, `Time1` ASC'
cur.execute(stmt)
d = cur.fetchall()
res = []
for i in d:
var = dict(zip(('cid', 'pic', 'name', 'des', 'up or on', 'duration', 'dates', 'Type of Comp', 'times', 'org'), i))
if var['up or on'] == 1:
var['up or on'] = "Ongoing"
else:
var['up or on'] = "Upcoming"
if var['Type of Comp'] == 1:
var['Type of Comp'] = "Competitive"
else:
var['Type of Comp'] = "Hiring"
var['dates'] = str(var['dates'])
var['times'] = str(var['times'])
res.append(var)
# print(res)
return res
except ms.Error as e:
# print("db error")
return None
except TypeError as e:
# print(e)
return None
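The row handling above maps each SELECT column to a display key with dict(zip(...)) and then rewrites the flag columns into labels. Condensed into a runnable fragment (the row tuple is invented sample data):

```python
# Same column-to-key transform fetchupcomingbattles applies per row;
# the row values here are made up for illustration.
row = (12323, "sample-1.jpg", "Code Wars", "desc", 1, 90, "2018-08-12", 2, "20:00:00", "Acme")
keys = ("cid", "pic", "name", "des", "up or on", "duration", "dates", "Type of Comp", "times", "org")

var = dict(zip(keys, row))
var["up or on"] = "Ongoing" if var["up or on"] == 1 else "Upcoming"
var["Type of Comp"] = "Competitive" if var["Type of Comp"] == 1 else "Hiring"
var["dates"], var["times"] = str(var["dates"]), str(var["times"])
```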
def fetchfinishedbattles(self):
'''
"cid": 12323,
"pic": "sample-1.jpg",
"name": "<NAME>",
"des": "You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.",
"dates": "2018-08-12",
"Type of Comp": "Competitive",
"times": "20:00:00",
"org": "Cognizant"
'''
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
d = []
try:
cur = cnx.cursor()
stmt = f'SELECT Compid,imgs,cName,Des,Date,horc,Time1,Org FROM `competitions` where Typecmp = 0 ORDER BY `Date` DESC, `Time1` DESC'
cur.execute(stmt)
d = cur.fetchall()
res = []
for i in d:
var = dict(zip(('cid', 'pic', 'name', 'des', 'dates', 'Type of Comp', 'times', 'org'), i))
var['dates'] = str(var['dates'])
var['times'] = str(var['times'])
if var['Type of Comp'] == 1:
var['Type of Comp'] = "Competitive"
else:
var['Type of Comp'] = "Hiring"
res.append(var)
# print(res)
return res
except ms.Error as e:
# print("db error")
return None
except TypeError as e:
# print(e)
return None
def fetchaccountdetailsofuser(self, email):
d = []
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
mx = []
dick = {}
try:
cur = cnx.cursor()
stmt1 = f'SELECT `Username`,`joindate` FROM `users` WHERE `Email`="{email}"'
cur.execute(stmt1)
# print("d")
d = cur.fetchone()
if d:
dick['name'] = d[0]
dick['Join date'] = str(d[1].date())
dick['email'] = email
stmt2 = f'SELECT count(*) FROM `ranks` r, `users` u WHERE u.`Email` = r.`Email` and r.`Email` = "{email}" and r.`Rank` = 1'
cur.execute(stmt2)
gold = cur.fetchone()
if gold:
dick['golds'] = gold[0]
else:
dick['golds'] = 0
stmt3 = f'SELECT count(*) FROM `ranks` r, `users` u WHERE u.`Email` = r.`Email` and r.`Email` = "{email}" and r.`Rank` = 2'
cur.execute(stmt3)
silver = cur.fetchone()
if silver:
dick['silver'] = silver[0]
else:
dick['silver'] = 0
stmt4 = f'SELECT count(*) FROM `ranks` r, `users` u WHERE u.`Email` = r.`Email` and r.`Email` = "{email}" and r.`Rank` = 3'
cur.execute(stmt4)
bronze = cur.fetchone()
if bronze:
dick['bronze'] = bronze[0]
else:
dick['bronze'] = 0
stmt5 = f'select r.`Language`, count(r.`Language`) as a FROM `results` r WHERE r.`Email` = "{email}" GROUP BY r.`Language` ORDER BY a DESC'
cur.execute(stmt5)
# print("fifth statement")
mx = cur.fetchall()
dick['programming languages used'] = []
if mx:
maxlange = mx[0][0]
for i in mx:
dick['programming languages used'].append(i[0])
dick['style'] = maxlange
else:
dick['programming languages used'].append("None")
dick['style'] = "Python"
stmt6 = f'SELECT COUNT(DISTINCT(`competitionsid`)) FROM `results` WHERE Email="{email}"'
cur.execute(stmt6)
battlesfought = cur.fetchone()
if battlesfought:
dick['battles'] = battlesfought[0]
else:
dick['battles'] = 0
# print(dick)
return dick
except ms.Error as e:
# print(e)
return None
except TypeError as e:
# print(e)
return None
def checkauthenticowner(self, email, pwd):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor()
d = []
st = f'Select * from `users` where `Email`= "{email}" and `Password`=""'
cur.execute(st)
d = cur.fetchall()
if d: # if list empty
return False
else:
return True
except ms.Error as e:
# print(e)
return None
except TypeError as e:
# print(e)
return None
def makepwdupdate(self, email, pwd):
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
try:
cur = cnx.cursor()
z = bcrypt.generate_password_hash(pwd).decode('utf-8')
st = f'UPDATE `users` SET `Password`="{z}" WHERE `Email`= "{email}"'
cur.execute(st)
cnx.commit()
return True
except ms.Error as e:
print(e)
return None
except TypeError as e:
print(e)
return None
def fetch_problem_statments(self, cid, pno):
'''
"cid": 12323,
"problem name": "sample-1.jpg",
"problem statement": "Infinity Code Wars",
"input": "You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.",
"output": "2018-08-12",
'''
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
d = []
try:
cur = cnx.cursor()
stmt = f'SELECT `probno`, `probstmt`, `probname`, `probinput`, `proboutput`, c.`Time1`, c.`duration` FROM `problems`p join `competitions` c on c.`Compid` =p.`Compid` WHERE p.`Compid`= {cid} and `testcase` = 1 ORDER BY `probno` ASC'
cur.execute(stmt)
d = cur.fetchall()
res = []
endsat = None
for i in d:
var = dict(zip(('problem number', 'problem statment', 'problem name', 'input', 'output', 'time', 'dur'), i))
var['input'] = open(os.path.join('.', 'SubmissionFiles', var['input'])).read()
var['output'] = open(os.path.join('.', 'SubmissionFiles', var['output'])).read()
if var['problem number'] == int(pno):
endsat = get_countdown_time(str(var['time']), var['dur'])
var["show"] = "show active"
else:
var["show"] = " "
res.append(var)
# print(res)
return (cid, endsat, res)
except ms.Error as e:
# print("db error")
return None
except TypeError as e:
# print(e)
return None
def fetch_test_cases(self, cid, pn):
'''
"cid": 12323,
"problem name": "sample-1.jpg",
"problem statement": "Infinity Code Wars",
"input": "You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.",
"output": "2018-08-12",
'''
cnx = ms.connect(unix_socket='/Applications/MAMP/tmp/mysql/mysql.sock', user='root', password='<PASSWORD>', host='localhost', database='codearena')
d = []
try:
cur = cnx.cursor()
stmt = f'SELECT `testcase`, `probinput`, `proboutput` FROM `problems` WHERE `probno` = {pn} and `Compid`= {cid}'
cur.execute(stmt)
d = cur.fetchall()
res = []
for i in d:
var = dict(zip(('testcase', 'input', 'output',), i))
if ".txt" not in var['input'][len(var['input']) - 4:]:
var['input'] | |
import collections
from copy import deepcopy
import meshio
import numpy
from ._common import (
get_local_index,
get_meshio_version,
get_new_meshio_cells,
get_old_meshio_cells,
meshio_data,
)
from ._properties import (
_connections,
_face_areas,
_face_normals,
_faces,
_materials,
_volumes,
)
__all__ = [
"Cells",
"Mesh",
"from_meshio",
]
Cells = collections.namedtuple("Cells", ["type", "data"])
class Mesh(object):
"""toughio mesh.
This class is updated following the latest :mod:`meshio` version and
brings backward compatibility with its previous versions.
Parameters
----------
points : array_like (n_points, 3)
Coordinates of points.
cells : list of tuples (cell_type, data)
Connectivity of cells.
point_data : dict or None, optional, default None
Point data arrays.
cell_data : dict or None, optional, default None
Cell data arrays.
field_data : dict or None, optional, default None
Field data names.
point_sets : dict or None, optional, default None
Sets of points.
cell_sets : dict or None, optional, default None
Sets of cells.
"""
def __init__(
self,
points,
cells,
point_data=None,
cell_data=None,
field_data=None,
point_sets=None,
cell_sets=None,
):
self.points = points
self.cells = cells
self.point_data = point_data if point_data else {}
self.cell_data = cell_data if cell_data else {}
self.field_data = field_data if field_data else {}
self.point_sets = point_sets if point_sets else {}
self.cell_sets = cell_sets if cell_sets else {}
def __repr__(self):
lines = [
"<toughio mesh object>",
" Number of points: {}".format(len(self.points)),
]
if len(self.cells) > 0:
lines.append(" Number of cells:")
for tpe, elems in self.cells:
lines.append(" {}: {}".format(tpe, len(elems)))
else:
lines.append(" No cells.")
if self.point_sets:
lines.append(" Point sets: {}".format(", ".join(self.point_sets.keys())))
if self.point_data:
lines.append(" Point data: {}".format(", ".join(self.point_data.keys())))
if self.cell_data:
lines.append(" Cell data: {}".format(", ".join(self.cell_data.keys())))
return "\n".join(lines)
def extrude_to_3d(self, height=1.0, axis=2, inplace=True):
"""Convert a 2D mesh to 3D by extruding cells along given axis.
Parameters
----------
height : scalar or array_like, optional, default 1.0
Height of extrusion.
axis : int (0, 1 or 2), optional, default 2
Axis along which extrusion is performed.
inplace : bool, optional, default True
If `False`, return a new :class:`toughio.Mesh`.
Returns
-------
toughio.Mesh
Extruded mesh (only if ``inplace == False``).
"""
if axis not in [0, 1, 2]:
raise ValueError("axis must be 0, 1 or 2.")
mesh = self if inplace else deepcopy(self)
height = [height] if isinstance(height, (int, float)) else height
npts, nh = len(mesh.points), len(height)
if mesh.points.shape[1] == 3:
if len(set(mesh.points[:, axis])) != 1:
raise ValueError("Cannot extrude mesh along axis {}.".format(axis))
else:
mesh.points = numpy.column_stack((mesh.points, numpy.zeros(npts)))
if axis != 2:
mesh.points[:, [axis, 2]] = mesh.points[:, [2, axis]]
extra_points = numpy.array(mesh.points)
for h in height:
extra_points[:, axis] += h
mesh.points = numpy.vstack((mesh.points, extra_points))
extruded_types = {
"triangle": "wedge",
"quad": "hexahedron",
}
cells = []
for ic, c in enumerate(mesh.cells):
if c.type in extruded_types.keys():
extruded_type = extruded_types[c.type]
nr, nc = c.data.shape
cell = Cells(extruded_type, numpy.tile(c.data, (nh, 2)))
for i in range(nh):
ibeg, iend = i * nr, (i + 1) * nr
cell.data[ibeg:iend, :nc] += i * npts
cell.data[ibeg:iend, nc:] += (i + 1) * npts
cells.append(cell)
if mesh.cell_data:
for k, v in mesh.cell_data.items():
v[ic] = numpy.tile(v[ic], nh)
mesh.cells = cells
if mesh.field_data:
for k in mesh.field_data.keys():
mesh.field_data[k][1] = 3
if not inplace:
return mesh
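The connectivity bookkeeping in `extrude_to_3d` is easy to miss: `numpy.tile(c.data, (nh, 2))` duplicates each 2D cell's node indices side by side, then the loop offsets the two halves into the bottom and top point layers. A standalone sketch (the sample triangles and counts are made up for illustration):

```python
import numpy as np

# Standalone sketch of the index arithmetic used by extrude_to_3d: a wedge
# is a triangle's node indices stacked with the same indices shifted by the
# number of points in the base layer.
base_triangles = np.array([[0, 1, 2], [1, 3, 2]])  # two triangles
npts = 4       # points in the 2-D base layer
nh = 1         # number of extrusion steps
nr, nc = base_triangles.shape

wedges = np.tile(base_triangles, (nh, 2))
for i in range(nh):
    ibeg, iend = i * nr, (i + 1) * nr
    wedges[ibeg:iend, :nc] += i * npts        # bottom face of layer i
    wedges[ibeg:iend, nc:] += (i + 1) * npts  # top face of layer i
```

With one extrusion step, each 3-node triangle becomes a 6-node wedge whose last three indices reference the duplicated point layer.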
def prune_duplicates(self, inplace=True):
"""Delete duplicate points and cells.
Parameters
----------
inplace : bool, optional, default True
If `False`, return a new :class:`toughio.Mesh`.
Returns
-------
toughio.Mesh
Pruned mesh (only if ``inplace == False``).
Note
----
        Does not preserve the order of points from the original array in mesh.
"""
mesh = self if inplace else deepcopy(self)
cells = [[c.type, c.data] for c in mesh.cells]
# Prune duplicate points
unique_points, pind, pinv = numpy.unique(
mesh.points, axis=0, return_index=True, return_inverse=True,
)
if len(unique_points) < len(mesh.points):
mesh.points = unique_points
for k, v in mesh.point_data.items():
mesh.point_data[k] = v[pind]
for ic, (k, v) in enumerate(cells):
cells[ic][1] = pinv[v]
# Prune duplicate cells
for ic, (k, v) in enumerate(cells):
vsort = numpy.sort(v, axis=1)
_, order = numpy.unique(vsort, axis=0, return_index=True)
cells[ic][1] = v[order]
if mesh.cell_data:
for kk, vv in mesh.cell_data.items():
mesh.cell_data[kk][ic] = vv[ic][order]
mesh.cells = cells
if not inplace:
return mesh
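The point-pruning step relies on the pair of index arrays returned by `numpy.unique`: `pind` picks the surviving point-data rows and `pinv` rewrites the cell connectivity. A minimal standalone sketch (sample coordinates invented for illustration):

```python
import numpy as np

# Standalone sketch of the point-deduplication step: numpy.unique returns
# the surviving rows plus an inverse map that rewrites old point indices
# in the cell connectivity.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
cells = np.array([[0, 1, 3], [2, 1, 3]])  # rows 0 and 2 are the same point

unique_points, pind, pinv = np.unique(
    points, axis=0, return_index=True, return_inverse=True
)
pinv = pinv.reshape(-1)  # guard against inverse-shape changes across numpy versions
remapped = pinv[cells]   # both cells now reference the same unique points
```

After remapping, the two cells become identical up to node order, which is exactly what the subsequent duplicate-cell pass detects by sorting each row before calling `numpy.unique` again.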
def split(self, arr):
"""Split input array into subarrays for each cell block in mesh.
Parameters
----------
arr : array_like
Input array.
Returns
-------
list of array_like
List of subarrays.
"""
        if len(arr) != self.n_cells:
            raise ValueError("arr must have one entry per cell.")
sizes = numpy.cumsum([len(c.data) for c in self.cells])
return numpy.split(numpy.asarray(arr), sizes[:-1])
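`split` turns the per-block cell counts into cut points for `numpy.split`; the last cumulative sum is dropped because `numpy.split` takes interior cut indices, not block ends. A standalone sketch with invented block sizes:

```python
import numpy as np

# Standalone sketch of the split() logic: cumulative block sizes, minus
# the last entry, are the cut points handed to numpy.split.
block_sizes = [3, 2, 4]             # e.g. number of cells per cell block
arr = np.arange(sum(block_sizes))   # one value per cell
cuts = np.cumsum(block_sizes)[:-1]  # cut after indices 3 and 5
chunks = np.split(arr, cuts)
```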
def to_meshio(self):
"""Convert mesh to :class:`meshio.Mesh`.
Returns
-------
meshio.Mesh
Output mesh.
"""
keys = ["points", "point_data", "field_data"]
kwargs = {key: getattr(self, key) for key in keys}
version = get_meshio_version()
if version[0] >= 4:
kwargs.update(
{
"cells": self.cells,
"cell_data": self.cell_data,
"point_sets": self.point_sets,
"cell_sets": self.cell_sets,
}
)
else:
cells, cell_data = get_old_meshio_cells(self.cells, self.cell_data)
            kwargs.update(
                {"cells": cells, "cell_data": cell_data, "node_sets": self.point_sets}
            )
return meshio.Mesh(**kwargs)
def to_pyvista(self):
"""Convert mesh to :class:`pyvista.UnstructuredGrid`.
Returns
-------
pyvista.UnstructuredGrid
Output mesh.
"""
try:
import pyvista
from ._common import (
meshio_to_vtk_type,
vtk_type_to_numnodes,
)
except ImportError:
raise ImportError(
"Converting to pyvista.UnstructuredGrid requires pyvista to be installed."
)
# Extract cells from toughio.Mesh object
offset = []
cells = []
cell_type = []
next_offset = 0
for c in self.cells:
vtk_type = meshio_to_vtk_type[c.type]
numnodes = vtk_type_to_numnodes[vtk_type]
offset += [next_offset + i * (numnodes + 1) for i in range(len(c.data))]
cells.append(
numpy.hstack((numpy.full((len(c.data), 1), numnodes), c.data)).ravel()
)
cell_type += [vtk_type] * len(c.data)
next_offset = offset[-1] + numnodes + 1
# Extract cell data from toughio.Mesh object
cell_data = {k: numpy.concatenate(v) for k, v in self.cell_data.items()}
# Create pyvista.UnstructuredGrid object
points = self.points
if points.shape[1] == 2:
points = numpy.hstack((points, numpy.zeros((len(points), 1))))
mesh = pyvista.UnstructuredGrid(
numpy.array(offset),
numpy.concatenate(cells),
numpy.array(cell_type),
numpy.array(points, numpy.float64),
)
# Set point data
mesh.point_arrays.update(
{k: numpy.array(v, numpy.float64) for k, v in self.point_data.items()}
)
# Set cell data
mesh.cell_arrays.update(cell_data)
return mesh
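The `offset`/`cells`/`cell_type` arrays built in `to_pyvista` follow the legacy VTK cell-array layout: every cell is stored as `[n_nodes, node0, node1, ...]` in one flat array, with `offset` pointing at each cell's `n_nodes` entry. A standalone sketch of that assembly (the single triangle block and node-count table are invented stand-ins):

```python
import numpy as np

# Standalone sketch of the legacy VTK cell-array layout assembled in
# to_pyvista(): each cell is [n_nodes, node0, node1, ...] in one flat
# array, and "offset" points at each cell's n_nodes entry.
blocks = [("triangle", np.array([[0, 1, 2], [1, 3, 2]]))]
numnodes_by_type = {"triangle": 3}  # stand-in for vtk_type_to_numnodes

offset, cells, next_offset = [], [], 0
for ctype, data in blocks:
    n = numnodes_by_type[ctype]
    offset += [next_offset + i * (n + 1) for i in range(len(data))]
    cells.append(np.hstack((np.full((len(data), 1), n), data)).ravel())
    next_offset = offset[-1] + n + 1

flat_cells = np.concatenate(cells)
```

Each triangle occupies four slots (its node count plus three node indices), so the offsets advance by 4 per cell.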
def to_tough(self, filename="MESH", **kwargs):
"""Write TOUGH `MESH` file.
Parameters
----------
filename : str, optional, default 'MESH'
Output file name.
"""
self.write(filename, file_format="tough", **kwargs)
def read_output(self, file_or_output, time_step=-1):
"""Import TOUGH results to the mesh.
Parameters
----------
file_or_output : str, namedtuple or list of namedtuple
Input file name or output data.
time_step : int, optional, default -1
Data for given time step to import. Default is last time step.
"""
from .. import read_output
from .._io._helpers import Output, Save, _reorder_labels
        if not isinstance(file_or_output, (str, list, tuple, Output, Save)):
            raise TypeError("file_or_output must be a file name or an output data structure.")
        if not isinstance(time_step, int):
            raise TypeError("time_step must be an int.")
if isinstance(file_or_output, str):
out = read_output(file_or_output)
else:
out = file_or_output
        if not isinstance(out, (Output, Save)):
            if not (-len(out) <= time_step < len(out)):
                raise ValueError("time_step is out of range.")
            out = out[time_step]
        if len(out.labels) != self.n_cells:
            raise ValueError("Output data do not match the number of cells in mesh.")
out = _reorder_labels(out, self.labels)
self.cell_data.update(out.data)
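The bounds check in `read_output` deliberately accepts Python-style negative indices, so `time_step=-1` (the default) selects the last time step. A framework-free sketch of just that check, with an invented list of outputs:

```python
# Standalone sketch of the time-step bounds check in read_output():
# Python-style negative indices select time steps from the end.
outputs = ["step0", "step1", "step2"]  # stand-in for a list of outputs

def pick_time_step(time_step):
    if not (-len(outputs) <= time_step < len(outputs)):
        raise ValueError("time_step is out of range.")
    return outputs[time_step]
```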
def write(self, filename, file_format=None, **kwargs):
"""Write mesh to file.
Parameters
----------
filename : str
Output file name.
file_format : str or None, optional, default None
Output file format. If `None`, it will be guessed from file's
extension. To write TOUGH MESH, `file_format` must be specified
as 'tough' (no specific extension exists for TOUGH MESH).
"""
from ._helpers import write
write(filename, self, file_format, **kwargs)
def plot(self, *args, **kwargs):
"""Display mesh using :method:`pyvista.UnstructuredGrid.plot``."""
mesh = self.to_pyvista()
mesh.plot(*args, **kwargs)
def add_point_data(self, label, data):
"""Add a new point data array.
Parameters
----------
label : str
Point data array name.
data : array_like
Point data array.
"""
        if not isinstance(label, str):
            raise TypeError("label must be a string.")
        if not isinstance(data, (list, tuple, numpy.ndarray)):
            raise TypeError("data must be array_like.")
        if len(data) != self.n_points:
            raise ValueError("data must have one entry per point.")
self.point_data[label] = numpy.asarray(data)
def add_cell_data(self, label, data):
"""Add a new cell data array.
Parameters
----------
label : str
Cell data array name.
data : array_like
Cell data array.
"""
        if not isinstance(label, str):
            raise TypeError("label must be a string.")
        if not isinstance(data, (list, tuple, numpy.ndarray)):
            raise TypeError("data must be array_like.")
        if len(data) != self.n_cells:
            raise ValueError("data must have one entry per cell.")
self.cell_data[label] = self.split(data)
def set_material(self, material, xlim=None, ylim=None, zlim=None):
"""Set material to cells in box.
Set material for cells within box selection defined by `xlim`, `ylim` and `zlim`.
Parameters
----------
material : str
Material name.
xlim : array_like or None, optional, default None
Minimum and maximum values in X direction.
ylim : array_like or None, optional, default None
Minimum and maximum values in Y direction.
zlim : array_like or None, optional, default None
Minimum and maximum values in Z direction.
Raises
------
AssertionError
If any input argument is not valid.
None:
if params.interactive:
start_color = self.status.GetBackgroundColour()
self.status.SetBackgroundColour( params.status_blue )
self.status.SetStatusText( "writing plaintext results", params.status_box )
wx.BeginBusyCursor()
wx.Yield()
self.ann_file.WriteCSV( csv_name )
if self.alive and params.interactive:
self.status.SetBackgroundColour( start_color )
self.status.SetStatusText( "", params.status_box )
wx.EndBusyCursor()
wx.Yield()
def OnSaveDiagnostics( self, evt ):
"""Choose filename to save diagnostics to."""
diag_name = self.get_filename_with_extension( '_ctraxdiagnostics.txt',
"Save diagnostics to text file" )
if diag_name is not None:
start_color = self.status.GetBackgroundColour()
self.status.SetBackgroundColour( params.status_blue )
self.status.SetStatusText( "writing diagnostics to file",
params.status_box )
wx.BeginBusyCursor()
wx.Yield()
annot.WriteDiagnostics( diag_name )
self.status.SetBackgroundColour( start_color )
self.status.SetStatusText( "", params.status_box )
wx.EndBusyCursor()
wx.Yield()
def EnableControls( self ):
"""Enable or disable GUI controls based on current state of tracker."""
if not params.interactive:
return
if self.has( 'batch' ):
self.batch.EnableControls()
twiddling_enabled = params.feedback_enabled and not self.tracking
movieready = self.has( 'movie' )
if movieready:
issbfmf = (hasattr( self.movie, 'type' ) and self.movie.type == 'sbfmf')
else:
issbfmf = False
annready = movieready and self.has( 'ann_file' ) and len( self.ann_file ) > 0
isplaying = hasattr(self,'play_break') and not self.play_break
self.menu.Enable( xrc.XRCID("menu_track_start"), # could be "stop"
movieready and (not isplaying) )
settings_enabled = movieready and twiddling_enabled
self.menu.Enable( xrc.XRCID("menu_track_writesbfmf"), settings_enabled and not issbfmf )
self.dowritesbfmf = self.dowritesbfmf and not issbfmf
self.menu.Check( xrc.XRCID("menu_track_writesbfmf"), self.dowritesbfmf )
self.menu.Enable( xrc.XRCID("menu_track_resume"),
settings_enabled and (not isplaying) and annready \
and len( self.ann_file ) < self.movie.get_n_frames() )
self.menu.Enable( xrc.XRCID("menu_track_resume_here"),
settings_enabled and (not isplaying) and annready )
self.menu.Enable( xrc.XRCID("menu_load_settings"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_save_settings"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_settings_bg"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_settings_bg_model"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_settings_tracking"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_playback_flipud"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_playback_transpose"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_tracking_wizard"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_compute_background"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_compute_shape"), settings_enabled )
self.menu.Enable( xrc.XRCID("menu_file_save_diagnostics"), settings_enabled )
self.framenumber_text.Enable( settings_enabled )
self.slider.Enable( settings_enabled )
self.frameinc_button.Enable( settings_enabled )
self.framedec_button.Enable( settings_enabled )
saving_enabled = annready and twiddling_enabled
self.menu.Enable( xrc.XRCID("menu_playback_show_ann"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_file_export"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_file_save_avi"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_choose_orientations"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_analyze_plottraj"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_analyze_plotvel"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_analyze_histpos"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_analyze_histspeed"), saving_enabled )
self.menu.Enable( xrc.XRCID("menu_analyze_histdtheta"), saving_enabled )
def InitializeFrameSlider(self):
self.slider.SetThumbPosition( self.start_frame )
self.slider.SetScrollbar( self.start_frame,1,self.movie.get_n_frames()-1,100 )
self.framenumber_text.SetValue( "%06d"%(self.movie.get_n_frames()) )
def UpdateStatusMovie( self ):
"""Update status bar with movie filename."""
try:
if not self.has( 'movie' ) or len( self.movie.filename ) == 0:
self.status.SetStatusText( "[no file loaded]", params.file_box )
elif len( self.movie.filename ) < params.file_box_max_width:
self.status.SetStatusText( self.movie.filename, params.file_box )
else:
self.status.SetStatusText( "..." + os.sep + self.movie.filename, params.file_box )
except (TypeError, AttributeError): pass
def ShowCurrentFrame( self ):
"""Grab current frame, draw on it, and display in GUI.
Also update zoom-ellipse windows, if present."""
if not params.interactive: return
if not self.alive: return
if not self.has( 'movie' ): return
if self.start_frame < 0: return
# get frame
try:
st = time.time()
frame, self.last_timestamp = self.movie.get_frame( self.start_frame )
if num.isnan(self.last_timestamp):
self.last_timestamp = float(self.start_frame) / float(params.DEFAULT_FRAME_RATE)
except movies.NoMoreFramesException:
self.start_frame = min(self.start_frame,self.movie.get_n_frames()-1)
self.slider.SetScrollbar( self.start_frame,1,self.movie.get_n_frames()-1,100 )
return
except IndexError: # framenumber out of range
return
# set frame number display
self.framenumber_text.SetValue( "%06d"%(self.start_frame) )
# dim frame
if self.menu.IsChecked( xrc.XRCID("menu_playback_dim") ):
frame = frame / 2
# annotate image
dodrawann = len( self.ann_file ) > 0 and \
self.start_frame >= self.ann_file.firstframetracked and \
self.start_frame <= self.ann_file.lastframetracked
frame8 = imagesk.double2mono8(frame,donormalize=False)
height, width = frame8.shape
# choose correct region of interest
if self.has( 'zoom_drag_roi' ):
left = int( round( width*self.zoom_drag_roi[0] ) )
top = int( round( height*self.zoom_drag_roi[1] ) )
right = int( round( width*self.zoom_drag_roi[2] ) )
bottom = int( round( height*self.zoom_drag_roi[3] ) )
frame8 = frame8[top:bottom,left:right]
# resize the image
resizew = float( width )/max( float( frame8.shape[1] ), 0.1 )
resizeh = float( height )/max( float( frame8.shape[0] ), 0.1 )
self.zoom_drag_roi_scale = min( resizew, resizeh )
try:
new_frame8 = imresize( frame8, self.zoom_drag_roi_scale )
except ValueError:
return # "tile cannot extend outside image", seems to be harmless
# fill in with gray at margins
frame8 = num.ones( frame.shape, dtype=num.uint8 )*127
#x = (frame8.shape[1] - new_frame8.shape[1])/2
#y = (frame8.shape[0] - new_frame8.shape[0])/2
# centering is a lot of work for subsequent clicks
frame8[:new_frame8.shape[0],:new_frame8.shape[1]] = new_frame8
# draw bounding box for drag rectangle
lines_to_draw = []
line_colors = []
if self.zoom_dragging:
lines_to_draw.extend( [(width*self.zoom_drag_origin[0],
height*self.zoom_drag_origin[1],
width*self.zoom_drag_origin[0],
height*self.zoom_drag_current[1]),
(width*self.zoom_drag_origin[0],
height*self.zoom_drag_current[1],
width*self.zoom_drag_current[0],
height*self.zoom_drag_current[1]),
(width*self.zoom_drag_current[0],
height*self.zoom_drag_current[1],
width*self.zoom_drag_current[0],
height*self.zoom_drag_origin[1]),
(width*self.zoom_drag_current[0],
height*self.zoom_drag_origin[1],
width*self.zoom_drag_origin[0],
height*self.zoom_drag_origin[1])] )
line_colors.extend( [params.zoom_drag_rectangle_color,
params.zoom_drag_rectangle_color,
params.zoom_drag_rectangle_color,
params.zoom_drag_rectangle_color] )
# draw annotation
if self.menu.IsChecked( xrc.XRCID("menu_playback_show_ann") ) and dodrawann:
# first frame of tail of trajectory
tailframe = max( self.ann_file.firstframetracked,
self.start_frame - params.tail_length )
dataframes = self.ann_file[tailframe:self.start_frame + 1]
# update small ellipse windows
if self.menu.IsChecked( xrc.XRCID("menu_settings_zoom") ):
self.zoom_window.SetData(dataframes[-1],frame)
self.zoom_window.Redraw()
ellipses = dataframes[-1]
old_pts = []
for dataframe in dataframes:
these_pts = []
for ellipse in dataframe.itervalues():
these_pts.append( (ellipse.center.x, ellipse.center.y, ellipse.identity) )
old_pts.append( these_pts )
# draw on image
linesegs = ell.annotate_image( ellipses, old_pts )
(linesegs,linecolors) = imagesk.separate_linesegs_colors(linesegs)
lines_to_draw.extend( linesegs )
line_colors.extend( linecolors )
self.num_flies_text.SetValue( "N. Flies: %02d"%len(ellipses) )
else:
self.num_flies_text.SetValue( "" )
# scale the drawings
if self.has( 'zoom_drag_roi' ):
for si in range( len( lines_to_draw ) ):
orig_seg = lines_to_draw[si]
new_seg = ((orig_seg[0] - left)*self.zoom_drag_roi_scale,
(orig_seg[1] - top)*self.zoom_drag_roi_scale,
(orig_seg[2] - left)*self.zoom_drag_roi_scale,
(orig_seg[3] - top)*self.zoom_drag_roi_scale)
lines_to_draw[si] = new_seg
# draw
self.img_wind.update_image_and_drawings( 'Ctraxmain',
frame8,
format="MONO8",
linesegs=lines_to_draw,
lineseg_colors=line_colors,
lineseg_widths=[params.ellipse_thickness]*len( lines_to_draw ) )
self.img_wind.Refresh( eraseBackground=False )
# update the slider
self.slider.SetThumbPosition( self.start_frame )
self.frameinc_button.Enable( self.start_frame < self.movie.get_n_frames() - 1 )
self.framedec_button.Enable( self.start_frame > 0 )
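The overlay remapping at the end of `ShowCurrentFrame` shifts every line segment by the crop origin and scales it by the zoom factor, so segments computed in full-frame coordinates land correctly on the cropped, resized image. A standalone sketch with invented ROI values:

```python
# Standalone sketch of the ROI remapping applied to overlay segments:
# subtract the crop origin, then scale by the zoom factor.
left, top, scale = 40, 20, 2.0
segs = [(50.0, 30.0, 60.0, 40.0)]  # (x0, y0, x1, y1) in frame coords
remapped = [((x0 - left) * scale, (y0 - top) * scale,
             (x1 - left) * scale, (y1 - top) * scale)
            for (x0, y0, x1, y1) in segs]
```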
def OnSlider( self, evt ):
"""Frame slider callback. Display new frame."""
# tone down the duplicated events as much as possible (Windows, I mean you)
new_fr = self.slider.GetThumbPosition()
if new_fr != self.start_frame:
try:
wx.Yield()
except:
pass
else:
self.play_break = True
self.start_frame = new_fr
self.ShowCurrentFrame()
finally:
evt.Skip()
def OnResize( self, evt=None ):
"""Window resized. Repaint in new window size."""
if hasattr( self, 'img_size' ):
top_sizer = self.frame.GetSizer()
panel_item = top_sizer.GetItem( self.img_panel )
panel_item.SetFlag( panel_item.GetFlag() | wx.SHAPED )
panel_item.SetRatio( float( self.img_size[1] )/self.img_size[0] )
self.frame.Layout()
try:
# redraw
self.ShowCurrentFrame()
# scale slider to new width
button_size = self.frameinc_button.GetRect().GetWidth()
new_size = wx.Size( self.img_wind.GetRect().GetWidth() - 2*button_size,
self.slider.GetRect().GetHeight() )
self.slider.SetMinSize( new_size )
self.slider.SetSize( new_size )
new_pos = wx.Point( self.img_panel.GetPosition().x + button_size,
self.slider.GetPosition().y )
self.slider.SetPosition( new_pos )
new_pos = wx.Point( self.img_panel.GetPosition().x,
self.framedec_button.GetPosition().y )
self.framedec_button.SetPosition( new_pos )
new_pos.x = self.img_panel.GetPosition().x + new_size.width + button_size
self.frameinc_button.SetPosition( new_pos )
# scale movie-name box
            params.file_box_max_width = int(float(self.img_wind.GetRect().GetWidth())/11.)
self.UpdateStatusMovie()
except AttributeError: pass # during initialization
def close_annfile( self ):
"""Close annotation file, as when opening a new movie or quitting."""
if self.has( 'ann_file' ):
self.ann_file.close()
if self.dowritesbfmf:
self.ann_file.CopyToSBFMF()
def OnQuit( self, evt ):
"""Quit selected (or window closing). Stop threads and close window."""
# stop tracking
if self.menu.GetLabel( xrc.XRCID("menu_track_start") ) == const.TRACK_STOP:
self.OnStopTracking( None ) # quit in mid-operation
self.play_break = True
try:
self.close_annfile()
except Exception, details:
print "error closing annotation: %s"%details
# write user settings
self.WriteUserfile()
# kill
self.alive = False
self.frame.Destroy()
def OnStopTracking( self, evt=None ):
"""Stop button pressed. Stop threads."""
self.StopThreads() # located in algorithm.py
params.enable_feedback( True )
if self.has( 'batch' ):
self.batch.executing = False
if self.dowritesbfmf and self.movie.writesbfmf_isopen():
self.movie.writesbfmf_close(self.start_frame)
# set tracking flag
self.tracking = False
def OnStartTrackingMenu( self, evt ):
"""Start button pressed. Begin tracking."""
if self.menu.GetLabel( xrc.XRCID("menu_track_start") ) == const.TRACK_STOP:
# stop tracking
self.OnStopTracking()
else:
self.OnStartTracking(evt)
def OnWriteSBFMF( self, evt=None ):
"""Set SBFMF filename. Use user input if running interactively."""
self.dowritesbfmf = False
if self.has( 'movie' ) and \
self.menu.IsChecked( xrc.XRCID("menu_track_writesbfmf") ):
self.writesbfmf_filename = self.get_filename_with_extension( '.sbfmf' )
self.dowritesbfmf = True
def OnPlayButton( self, evt ):
self.OnStartPlayback()
def OnStopButton(self,evt):
if self.tracking:
self.OnStopTracking()
else:
self.OnStopPlayback()
def OnStartTracking(self,evt=None):
# check for bg model
if not self.CheckForBGModel():
return
if params.enforce_minmax_shape and not self.CheckForShapeModel():
return
# will data be lost?
if evt is not None and params.interactive and self.has( 'ann_file' ) and \
len( self.ann_file ) > 0 and not DEBUG:
if evt.GetId() == xrc.XRCID("menu_track_start"):
msgtxt = 'Frames %d to %d have been tracked.\nErase these results and start tracking over?' % (self.ann_file.firstframetracked,self.ann_file.lastframetracked)
if wx.MessageBox( msgtxt, "Erase trajectories and start tracking?", wx.OK|wx.CANCEL ) == wx.CANCEL:
return
elif evt.GetId() == xrc.XRCID("menu_track_resume_here"):
if self.ann_file.lastframetracked >= self.start_frame:
msgtxt = 'Frames %d to %d have been tracked.\nRestarting tracking at frame %d will cause old trajectories from %d to %d to be erased.\nErase these results and restart tracking in the current frame?' | |
'The numeric ID of the file.',
},
'caption': {
'type': str,
'description': "The file's descriptive caption.",
},
'filename': {
'type': str,
'description': "The name of the file.",
},
        'url': {
            'type': str,
            'description': "The URL of the file, for downloading purposes. "
                           "If this is not an absolute URL, then it's "
                           "relative to the Review Board server's URL.",
        },
'icon_url': {
'type': str,
'description': 'The URL to a 24x24 icon representing this file.'
},
}
uri_object_key = 'file_attachment_id'
autogenerate_etags = True
def get_queryset(self, request, review_request_id, is_list=False,
*args, **kwargs):
review_request = review_request_resource.get_object(
request, review_request_id, *args, **kwargs)
q = Q(review_request=review_request)
if not is_list:
q = q | Q(inactive_review_request=review_request)
if request.user == review_request.submitter:
try:
draft = review_request_draft_resource.get_object(
request, review_request_id, *args, **kwargs)
q = q | Q(drafts=draft)
if not is_list:
q = q | Q(inactive_drafts=draft)
except ObjectDoesNotExist:
pass
return self.model.objects.filter(q)
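The queryset above starts from a base filter and widens it with `|` only when the requester is the submitter. A framework-free sketch of that OR-composition, using simple predicate closures in place of Django `Q` objects (all field names and rows are invented):

```python
# Framework-free sketch of the OR-composed filter in get_queryset():
# start from the base predicate and widen it for the submitter's draft,
# mirroring how the Q objects are combined with |.
def q(field, value):
    return lambda obj: obj.get(field) == value

def q_or(p, r):
    return lambda obj: p(obj) or r(obj)

predicate = q("review_request", 42)
predicate = q_or(predicate, q("drafts", "draft-1"))  # submitter only

rows = [{"review_request": 42}, {"drafts": "draft-1"}, {"drafts": "other"}]
matches = [row for row in rows if predicate(row)]
```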
def serialize_url_field(self, obj):
return obj.get_absolute_url()
def serialize_caption_field(self, obj):
# We prefer 'caption' here, because when creating a new screenshot, it
# won't be full of data yet (and since we're posting to screenshots/,
# it doesn't hit DraftFileAttachmentResource).
# DraftFileAttachmentResource will prefer draft_caption, in case people
# are changing an existing one.
return obj.caption or obj.draft_caption
@webapi_check_local_site
@webapi_login_required
@webapi_response_errors(DOES_NOT_EXIST, PERMISSION_DENIED,
INVALID_FORM_DATA, NOT_LOGGED_IN)
@webapi_request_fields(
required={
'path': {
'type': file,
'description': 'The file to upload.',
},
},
optional={
'caption': {
'type': str,
'description': 'The optional caption describing the '
'file.',
},
},
)
def create(self, request, *args, **kwargs):
"""Creates a new file from a file attachment.
This accepts any file type and associates it with a draft of a
review request.
It is expected that the client will send the data as part of a
:mimetype:`multipart/form-data` mimetype. The file's name
and content should be stored in the ``path`` field. A typical request
may look like::
-- SoMe BoUnDaRy
Content-Disposition: form-data; name=path; filename="foo.zip"
<Content here>
-- SoMe BoUnDaRy --
"""
try:
review_request = \
review_request_resource.get_object(request, *args, **kwargs)
except ObjectDoesNotExist:
return DOES_NOT_EXIST
if not review_request.is_mutable_by(request.user):
return _no_access_error(request.user)
form_data = request.POST.copy()
form = UploadFileForm(form_data, request.FILES)
if not form.is_valid():
return INVALID_FORM_DATA, {
'fields': _get_form_errors(form),
}
try:
file = form.create(request.FILES['path'], review_request)
except ValueError, e:
return INVALID_FORM_DATA, {
'fields': {
'path': [str(e)],
},
}
return 201, {
self.item_result_key: file,
}
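The docstring of `create` shows the expected request shape; as a concrete illustration, here is a hypothetical sketch of building such a :mimetype:`multipart/form-data` payload by hand (boundary, filename, and body contents are invented):

```python
# Hypothetical sketch of the multipart/form-data payload that create()
# expects: the upload lives in a form field named "path".
boundary = "SoMeBoUnDaRy"
body = (
    "--" + boundary + "\r\n"
    'Content-Disposition: form-data; name="path"; filename="foo.zip"\r\n'
    "\r\n"
    "<file bytes here>\r\n"
    "--" + boundary + "--\r\n"
)
content_type = "multipart/form-data; boundary=" + boundary
```

In practice an HTTP client library would assemble this for you; the point is that the file content must be carried by the ``path`` field.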
@webapi_check_local_site
@webapi_login_required
@webapi_response_errors(DOES_NOT_EXIST, NOT_LOGGED_IN, PERMISSION_DENIED)
@webapi_request_fields(
optional={
'caption': {
'type': str,
'description': 'The new caption for the file.',
},
}
)
def update(self, request, caption=None, *args, **kwargs):
"""Updates the file's data.
This allows updating the file in a draft. The caption, currently,
is the only thing that can be updated.
"""
try:
review_request = \
review_request_resource.get_object(request, *args, **kwargs)
file = file_attachment_resource.get_object(request, *args, **kwargs)
except ObjectDoesNotExist:
return DOES_NOT_EXIST
if not review_request.is_mutable_by(request.user):
return PERMISSION_DENIED
try:
review_request_draft_resource.prepare_draft(request,
review_request)
except PermissionDenied:
return _no_access_error(request.user)
file.draft_caption = caption
file.save()
return 200, {
self.item_result_key: file,
}
@webapi_check_local_site
@webapi_login_required
@webapi_response_errors(DOES_NOT_EXIST, NOT_LOGGED_IN, PERMISSION_DENIED)
def delete(self, request, *args, **kwargs):
try:
review_request = \
review_request_resource.get_object(request, *args, **kwargs)
file_attachment = \
file_attachment_resource.get_object(request, *args, **kwargs)
except ObjectDoesNotExist:
return DOES_NOT_EXIST
try:
draft = review_request_draft_resource.prepare_draft(request,
review_request)
except PermissionDenied:
return _no_access_error(request.user)
draft.file_attachments.remove(file_attachment)
draft.inactive_file_attachments.add(file_attachment)
draft.save()
return 204, {}
class DraftFileAttachmentResource(BaseFileAttachmentResource):
"""Provides information on new file attachments being added to a draft of
a review request.
These are files that will be shown once the pending review request
draft is published.
"""
name = 'draft_file_attachment'
uri_name = 'file-attachments'
model_parent_key = 'drafts'
allowed_methods = ('GET', 'DELETE', 'POST', 'PUT',)
def get_queryset(self, request, review_request_id, *args, **kwargs):
try:
draft = review_request_draft_resource.get_object(
request, review_request_id, *args, **kwargs)
inactive_ids = \
draft.inactive_file_attachments.values_list('pk', flat=True)
q = Q(review_request=review_request_id) | Q(drafts=draft)
query = self.model.objects.filter(q)
query = query.exclude(pk__in=inactive_ids)
return query
except ObjectDoesNotExist:
return self.model.objects.none()
def serialize_caption_field(self, obj):
return obj.draft_caption or obj.caption
@webapi_check_local_site
@webapi_login_required
@augment_method_from(BaseFileAttachmentResource)
def get(self, *args, **kwargs):
pass
@webapi_check_local_site
@webapi_login_required
@augment_method_from(BaseFileAttachmentResource)
def delete(self, *args, **kwargs):
"""Deletes the file attachment from the draft.
This will remove the file attachment from the draft review request.
This cannot be undone.
This can be used to remove old files that were previously
shown, as well as newly added files that were part of the
draft.
Instead of a payload response on success, this will return :http:`204`.
"""
pass
@webapi_check_local_site
@webapi_login_required
@augment_method_from(WebAPIResource)
def get_list(self, *args, **kwargs):
"""Returns a list of draft files.
Each file attachment in this list is an uploaded file attachment that
        will be shown in the final review request. These may include newly
        added file attachments or files that were already part of the
existing review request. In the latter case, existing files
are shown so that their captions can be added.
"""
pass
def _get_list_impl(self, request, *args, **kwargs):
"""Returns the list of files on this draft.
This is a specialized version of the standard get_list function
that uses this resource to serialize the children, in order to
guarantee that we'll be able to identify them as files that are
part of the draft.
"""
return WebAPIResponsePaginated(
request,
queryset=self.get_queryset(request, is_list=True,
*args, **kwargs),
results_key=self.list_result_key,
serialize_object_func=
lambda obj: self.serialize_object(obj, request=request,
*args, **kwargs),
extra_data={
'links': self.get_links(self.list_child_resources,
request=request, *args, **kwargs),
},
**self.build_response_args(request))
draft_file_attachment_resource = DraftFileAttachmentResource()
class ReviewRequestDraftResource(WebAPIResource):
"""An editable draft of a review request.
This resource is used to actually modify a review request. Anything made
in this draft can be published in order to become part of the public
review request, or it can be discarded.
Any POST or PUTs on this draft will cause the draft to be created
automatically. An initial POST is not required.
There is only ever a maximum of one draft per review request.
In order to access this resource, the user must either own the review
request, or it must have the ``reviews.can_edit_reviewrequest`` permission
set.
"""
model = ReviewRequestDraft
name = 'draft'
singleton = True
model_parent_key = 'review_request'
last_modified_field = 'last_updated'
mimetype_item_resource_name = 'review-request-draft'
fields = {
'id': {
'type': int,
'description': 'The numeric ID of the draft.',
'mutable': False,
},
'review_request': {
'type': 'reviewboard.webapi.resources.ReviewRequestResource',
'description': 'The review request that owns this draft.',
'mutable': False,
},
'last_updated': {
'type': str,
'description': 'The date and time that the draft was last updated '
'(in YYYY-MM-DD HH:MM:SS format).',
'mutable': False,
},
'branch': {
'type': str,
'description': 'The branch name.',
},
'bugs_closed': {
'type': str,
'description': 'The new list of bugs closed or referenced by this '
'change.',
},
'changedescription': {
'type': str,
'description': 'A custom description of what changes are being '
'made in this update. It often will be used to '
'describe the changes in the diff.',
},
'description': {
'type': str,
'description': 'The new review request description.',
},
'public': {
'type': bool,
'description': 'Whether or not the draft is public. '
'This will always be false up until the time '
'it is first made public. At that point, the '
'draft is deleted.',
},
'summary': {
'type': str,
'description': 'The new review request summary.',
},
'target_groups': {
'type': str,
'description': 'A comma-separated list of review groups '
'that will be on the reviewer list.',
},
'target_people': {
'type': str,
'description': 'A comma-separated list of users that will '
'be on a reviewer list.',
},
'testing_done': {
'type': str,
'description': 'The new testing done text.',
},
}
allowed_methods = ('GET', 'POST', 'PUT', 'DELETE')
item_child_resources = [
draft_screenshot_resource,
draft_file_attachment_resource
]
@classmethod
def prepare_draft(self, request, review_request):
"""Creates a draft, if the user has permission to."""
if not review_request.is_mutable_by(request.user):
raise PermissionDenied
return ReviewRequestDraft.create(review_request)
def get_queryset(self, request, review_request_id, *args, **kwargs):
review_request = review_request_resource.get_object(
request, review_request_id, *args, **kwargs)
return self.model.objects.filter(review_request=review_request)
def serialize_bugs_closed_field(self, obj):
return obj.get_bug_list()
def serialize_changedescription_field(self, obj):
if obj.changedesc:
return obj.changedesc.text
else:
return ''
def serialize_status_field(self, obj):
return status_to_string(obj.status)
def serialize_public_field(self, obj):
return False
def has_delete_permissions(self, request, draft, *args, **kwargs):
return draft.review_request.is_mutable_by(request.user)
@webapi_check_local_site
@webapi_login_required
@webapi_request_fields(
optional={
'branch': {
'type': str,
'description': 'The new branch name.',
},
'bugs_closed': {
'type': str,
'description': 'A comma-separated list of bug IDs.',
},
'changedescription': {
'type': str,
'description': 'The change description for this update.',
},
'description': {
'type': str,
'description': 'The new review request description.',
},
'public': {
'type': bool,
'description': 'Whether or not to make the review public. '
'If | |
TBI - Generalize to go through all params, reading from each its type (from a registry),
and calling on corresponding subclass to get default values (if param not found)
(as PROJECTION_TYPE and PROJECTION_SENDER are currently handled)
"""
from psyneulink.core.components.ports.port import _parse_port_spec
from psyneulink.core.components.ports.inputport import InputPort
# Perform first-pass validation in Function.__init__():
# - returns full set of params based on subclass paramClassDefaults
super(Mechanism, self)._validate_params(request_set, target_set, context)
params = target_set
# VALIDATE InputPort(S)
# INPUT_PORTS is specified, so validate:
if INPUT_PORTS in params and params[INPUT_PORTS] is not None:
try:
for port_spec in params[INPUT_PORTS]:
_parse_port_spec(owner=self, port_type=InputPort, port_spec=port_spec)
except AttributeError as e:
if DEFER_VARIABLE_SPEC_TO_MECH_MSG in e.args[0]:
pass
# VALIDATE FUNCTION_PARAMS
try:
function_param_specs = params[FUNCTION_PARAMS]
except KeyError:
if context.source & (ContextFlags.COMMAND_LINE | ContextFlags.PROPERTY):
pass
elif self.prefs.verbosePref:
print("No params specified for {0}".format(self.__class__.__name__))
else:
if not isinstance(function_param_specs, dict):
raise MechanismError("{0} in {1} must be a dict of param specifications".
format(FUNCTION_PARAMS, self.__class__.__name__))
# Validate params
from psyneulink.core.components.ports.parameterport import ParameterPort
for param_name, param_value in function_param_specs.items():
try:
self.defaults.value = self.paramInstanceDefaults[FUNCTION_PARAMS][param_name]
except KeyError:
raise MechanismError("{0} not recognized as a param of execute method for {1}".
format(param_name, self.__class__.__name__))
if not ((isclass(param_value) and
(issubclass(param_value, ParameterPort) or
issubclass(param_value, Projection))) or
isinstance(param_value, ParameterPort) or
isinstance(param_value, Projection) or
isinstance(param_value, dict) or
iscompatible(param_value, self.defaults.value)):
params[FUNCTION_PARAMS][param_name] = self.defaults.value
if self.prefs.verbosePref:
print("{0} param ({1}) for execute method {2} of {3} is not a ParameterPort, "
"projection, tuple, or value; default value ({4}) will be used".
format(param_name,
param_value,
self.execute.__self__.componentName,
self.__class__.__name__,
self.defaults.value))
# VALIDATE OUTPUTPORT(S)
# OUTPUT_PORTS is specified, so validate:
if OUTPUT_PORTS in params and params[OUTPUT_PORTS] is not None:
param_value = params[OUTPUT_PORTS]
# If it is a single item or a non-OrderedDict, place in list (for use here and in instantiate_output_port)
if not isinstance(param_value, (ContentAddressableList, list, OrderedDict)):
param_value = [param_value]
# Validate each item in the list or OrderedDict
i = 0
for key, item in param_value if isinstance(param_value, dict) else enumerate(param_value):
from psyneulink.core.components.ports.outputport import OutputPort
# If not valid...
if not ((isclass(item) and issubclass(item, OutputPort)) or # OutputPort class ref
isinstance(item, OutputPort) or # OutputPort object
isinstance(item, dict) or # OutputPort specification dict
isinstance(item, str) or # Name (to be used as key in OutputPorts list)
isinstance(item, tuple) or # Projection specification tuple
_is_modulatory_spec(item) or # Modulatory specification for the OutputPort
iscompatible(item, **{kwCompatibilityNumeric: True})): # value
# set to None, so it is set to default (self.value) in instantiate_output_port
param_value[key] = None
if self.prefs.verbosePref:
print("Item {0} of {1} param ({2}) in {3} is not a"
" OutputPort, specification dict or value, nor a list of dict of them; "
"output ({4}) of execute method for {5} will be used"
" to create a default OutputPort for {3}".
format(i,
OUTPUT_PORTS,
param_value,
self.__class__.__name__,
self.value,
self.execute.__self__.name))
i += 1
params[OUTPUT_PORTS] = param_value
def validate_labels_dict(labels_dict, type):
for label, value in labels_dict.items():
if not isinstance(label,str):
raise MechanismError("Key ({}) in the {} for {} must be a string".
format(label, type, self.name))
if not isinstance(value,(list, np.ndarray)):
raise MechanismError("The value of {} ({}) in the {} for {} must be a list or array".
format(label, value, type, self.name))
def validate_subdict_key(port_type, key, dict_type):
# IMPLEMENTATION NOTE:
# can't yet validate that string is a legit InputPort name or that index is within
# bounds of the number of InputPorts; that is done in _get_port_value_labels()
if not isinstance(key, (int, str)):
raise MechanismError("Key ({}) for {} of {} must be the name of an {} or the index for one".
format(key, dict_type, self.name, port_type.__name__))
if INPUT_LABELS_DICT in params and params[INPUT_LABELS_DICT]:
labels_dict = params[INPUT_LABELS_DICT]
if isinstance(list(labels_dict.values())[0], dict):
for key, ld in labels_dict.items():
validate_subdict_key(InputPort, key, INPUT_LABELS_DICT)
validate_labels_dict(ld, INPUT_LABELS_DICT)
else:
validate_labels_dict(labels_dict, INPUT_LABELS_DICT)
if OUTPUT_LABELS_DICT in params and params[OUTPUT_LABELS_DICT]:
labels_dict = params[OUTPUT_LABELS_DICT]
if isinstance(list(labels_dict.values())[0], dict):
for key, ld in labels_dict.items():
validate_subdict_key(OutputPort, key, OUTPUT_LABELS_DICT)
validate_labels_dict(ld, OUTPUT_LABELS_DICT)
else:
validate_labels_dict(labels_dict, OUTPUT_LABELS_DICT)
if TARGET_LABELS_DICT in params and params[TARGET_LABELS_DICT]:
for label, value in params[TARGET_LABELS_DICT].items():
if not isinstance(label,str):
raise MechanismError("Key ({}) in the {} for {} must be a string".
format(label, TARGET_LABELS_DICT, self.name))
if not isinstance(value,(list, np.ndarray)):
raise MechanismError("The value of {} ({}) in the {} for {} must be a list or array".
format(label, value, TARGET_LABELS_DICT, self.name))
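The label dicts validated above come in a flat form (label -> value) or a nested form with one sub-dict per port, keyed by port name or index. A minimal standalone sketch of the flat-form check (hypothetical helper, raising `TypeError` in place of `MechanismError`):

```python
import numpy as np

def validate_labels(labels_dict, dict_name="labels dict"):
    # Flat form: {label (str) -> value (list | ndarray)}, mirroring the
    # checks performed for the INPUT/OUTPUT/TARGET labels dicts above.
    for label, value in labels_dict.items():
        if not isinstance(label, str):
            raise TypeError(
                f"Key ({label}) in the {dict_name} must be a string")
        if not isinstance(value, (list, np.ndarray)):
            raise TypeError(
                f"The value of {label} ({value}) in the {dict_name} "
                f"must be a list or array")
```

A nested dict would be handled by applying the same check to each sub-dict after validating its key.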
def _validate_inputs(self, inputs=None):
# Only ProcessingMechanism supports run() method of Function; ControlMechanism and LearningMechanism do not
raise MechanismError("{} does not support run() method".format(self.__class__.__name__))
def _instantiate_attributes_before_function(self, function=None, context=None):
self.parameters.previous_value._set(None, context)
self._instantiate_input_ports(context=context)
self._instantiate_parameter_ports(function=function, context=context)
super()._instantiate_attributes_before_function(function=function, context=context)
# Assign attributes to be included in attributes_dict
# keys are keywords exposed to user for assignment
# values are names of corresponding attributes
self.attributes_dict_entries = dict(OWNER_VARIABLE = VARIABLE,
OWNER_VALUE = VALUE,
OWNER_EXECUTION_COUNT = OWNER_EXECUTION_COUNT,
OWNER_EXECUTION_TIME = OWNER_EXECUTION_TIME)
if hasattr(self, PREVIOUS_VALUE):
self.attributes_dict_entries.update({'PREVIOUS_VALUE': PREVIOUS_VALUE})
def _instantiate_function(self, function, function_params=None, context=None):
"""Assign weights and exponents if specified in input_ports
"""
super()._instantiate_function(function=function, function_params=function_params, context=context)
if self.input_ports and any(input_port.weight is not None for input_port in self.input_ports):
# Construct defaults:
# from function.weights if specified else 1's
try:
default_weights = self.function.weights
except AttributeError:
default_weights = None
if default_weights is None:
default_weights = default_weights or [1.0] * len(self.input_ports)
# Assign any weights specified in input_port spec
weights = [[input_port.weight if input_port.weight is not None else default_weight]
for input_port, default_weight in zip(self.input_ports, default_weights)]
self.function._weights = weights
if self.input_ports and any(input_port.exponent is not None for input_port in self.input_ports):
# Construct defaults:
# from function.weights if specified else 1's
try:
default_exponents = self.function.exponents
except AttributeError:
default_exponents = None
if default_exponents is None:
default_exponents = default_exponents or [1.0] * len(self.input_ports)
# Assign any exponents specified in input_port spec
exponents = [[input_port.exponent if input_port.exponent is not None else default_exponent]
for input_port, default_exponent in zip(self.input_ports, default_exponents)]
self.function._exponents = exponents
# this may be removed when the restriction making all Mechanism values 2D np arrays is lifted
# ignore warnings of certain Functions that disable conversion
with warnings.catch_warnings():
warnings.simplefilter(action='ignore', category=UserWarning)
self.function.output_type = FunctionOutputType.NP_2D_ARRAY
self.function.enable_output_type_conversion = True
self.function._instantiate_value(context)
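The weight and exponent handling in `_instantiate_function` follows one merge rule: a value specified on an InputPort wins; otherwise the function's own default for that slot is used, falling back to 1.0 when the function defines none. A standalone sketch of that rule (hypothetical helper operating on plain values):

```python
def merge_with_defaults(per_port_values, function_defaults=None):
    # A port-specified value (not None) wins; otherwise use the
    # function's default for that slot, or 1.0 if no defaults exist.
    n = len(per_port_values)
    defaults = function_defaults if function_defaults is not None else [1.0] * n
    return [[value if value is not None else default]
            for value, default in zip(per_port_values, defaults)]
```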
def _instantiate_attributes_after_function(self, context=None):
from psyneulink.core.components.ports.parameterport import _instantiate_parameter_port
self._instantiate_output_ports(context=context)
# instantiate parameter ports from UDF custom parameters if necessary
try:
cfp = self.function.cust_fct_params
udf_parameters_lacking_ports = {param_name: cfp[param_name]
for param_name in cfp if param_name not in self.parameter_ports.names}
_instantiate_parameter_port(self, FUNCTION_PARAMS, udf_parameters_lacking_ports,
context=context, function=self.function)
self._parse_param_port_sources()
except AttributeError:
pass
super()._instantiate_attributes_after_function(context=context)
def _instantiate_input_ports(self, input_ports=None, reference_value=None, context=None):
"""Call Port._instantiate_input_ports to instantiate orderedDict of InputPort(s)
This is a stub, implemented to allow Mechanism subclasses to override _instantiate_input_ports
or process InputPorts before and/or after call to _instantiate_input_ports
"""
from psyneulink.core.components.ports.inputport import _instantiate_input_ports
return _instantiate_input_ports(owner=self,
input_ports=input_ports or self.input_ports,
reference_value=reference_value,
context=context)
def _instantiate_parameter_ports(self, function=None, context=None):
"""Call Port._instantiate_parameter_ports to instantiate a ParameterPort for each parameter in user_params
This is a stub, implemented to allow Mechanism subclasses to override _instantiate_parameter_ports
or process InputPorts before and/or after call to _instantiate_parameter_ports
:param function:
"""
from psyneulink.core.components.ports.parameterport import _instantiate_parameter_ports
_instantiate_parameter_ports(owner=self, function=function, context=context)
def _instantiate_output_ports(self, context=None):
"""Call Port._instantiate_output_ports to instantiate orderedDict of OutputPort(s)
This is a stub, implemented to allow Mechanism subclasses to override _instantiate_output_ports
or process InputPorts before and/or after call to _instantiate_output_ports
"""
from psyneulink.core.components.ports.outputport import _instantiate_output_ports
# self._update_parameter_ports(context=context)
self._update_attribs_dicts(context=context)
_instantiate_output_ports(owner=self, output_ports=self.output_ports, context=context)
def _add_projection_to_mechanism(self, port, projection, context=None):
from psyneulink.core.components.projections.projection import _add_projection_to
_add_projection_to(receiver=self, port=port, projection_spec=projection, context=context)
def _add_projection_from_mechanism(self, receiver, port, projection, context=None):
"""Add projection to specified port
"""
from psyneulink.core.components.projections.projection import _add_projection_from
_add_projection_from(sender=self, port=port, projection_spec=projection, receiver=receiver, context=context)
def _projection_added(self, projection, context=None):
"""Stub that can be overidden by subclasses that need to know when a projection is added to the Mechanism"""
pass
@handle_external_context(execution_id=NotImplemented)
def reinitialize(self, *args, context=None):
"""Reinitialize `previous_value <Mechanism_Base.previous_value>` if Mechanisms is stateful.
If the mechanism's `function <Mechanism.function>` is an `IntegratorFunction`, or if the mechanism has and
`integrator_function <TransferMechanism.integrator_function>` (see `TransferMechanism`), this method
effectively begins the function's accumulation over again at the specified value, and updates related
attributes on the mechanism. It also reassigns `previous_value <Mechanism.previous_value>` to None.
If the mechanism's `function <Mechanism_Base.function>` is an `IntegratorFunction`, its `reinitialize
<Mechanism_Base.reinitialize>` method:
(1) Calls the function's own `reinitialize <IntegratorFunction.reinitialize>` method (see Note below for
details)
(2) Sets the mechanism's `value <Mechanism_Base.value>` to the output of the function's
reinitialize method
(3) Updates its `output ports <Mechanism_Base.output_port>` based on its new `value
<Mechanism_Base.value>`
If the mechanism has an `integrator_function <TransferMechanism.integrator_function>`, its `reinitialize
<Mechanism_Base.reinitialize>` method:
(1) Calls | |
into the reader will be converted to the
data pack format and passed on to the other components for
processing.
Args:
reader: The reader to be used by the pipeline.
config: The custom configuration to be passed to the reader. If
the config is not provided, the default config defined by the
reader class will be used.
Returns:
The pipeline itself, which allows you to directly chain other
pipeline construction code afterwards, i.e., you can do:
.. code-block:: python
Pipeline().set_reader(your_reader()).add(your_processor())
"""
self._reader = reader
self._reader_config = reader.make_configs(config)
return self
@property
def reader(self) -> BaseReader:
return self._reader
@property
def components(self) -> List[PipelineComponent]:
"""
Return all the components in this pipeline, except the reader.
Returns: A list containing the components.
"""
return self._components
@property
def component_configs(self) -> List[Optional[Config]]:
"""
Return the configs related to the components, except the reader.
Returns: A list containing the components configs.
"""
return self._configs
def add(
self,
component: PipelineComponent,
config: Optional[Union[Config, Dict[str, Any]]] = None,
selector: Optional[Selector] = None,
selector_config: Optional[Union[Config, Dict[str, Any]]] = None,
) -> "Pipeline":
"""
Adds a pipeline component to the pipeline. The pipeline components
will form a chain based on the insertion order. The customized
`config` and `selector` (:class:`~forte.data.selector.Selector`)
will be associated with this particular component. If the `config`
or the `selector` is not provided, the default ones will be used.
Here, note that the same component instance can be added multiple
times to the pipeline. In such cases, the instance will only be
set up at the first insertion (i.e. its `initialize` function will
only be called once). Subsequent insertions of the same component
instance will change neither the behavior nor the state of the
instance. Thus, a different `config` cannot be provided (it should
be `None`) when the component is added a second time, otherwise a
`ProcessorConfigError` will be thrown. If one wants the occurrences
to behave differently, a different instance should be used.
Args:
component (PipelineComponent): The component to be inserted next
to the pipeline.
config (Union[Config, Dict[str, Any]]): The custom configuration
to be used for the added component. Default None, which means
the `default_configs()` of the component will be used.
selector (Selector): The selector used to pick the corresponding
data pack to be consumed by the component. Default None, which
means the whole pack will be used.
Returns:
The pipeline itself, which enables one to chain the creation of
the pipeline, i.e., you can do:
.. code-block:: python
Pipeline().set_reader(your_reader()).add(
your_processor()).add(another_processor())
"""
if isinstance(component, BaseReader):
raise ProcessFlowException("Reader need to be set via set_reader()")
if isinstance(component, Evaluator):
# This will ask the job to keep a copy of the gold standard.
self.evaluator_indices.append(len(self.components))
if component not in self.__component_set:
# The case where the component is not found.
self._components.append(component)
self.__component_set.add(component)
self.component_configs.append(component.make_configs(config))
else:
if config is None:
self._components.append(component)
# We insert a `None` value here just to make the config list
# to match the component list, but this config should not be
# used.
self.component_configs.append(None)
else:
raise ProcessorConfigError(
f"The same instance of a component named {component.name} "
f" has already been added to"
f" the pipeline, we do not accept a different configuration"
f" for it. If you would like to use a differently"
f" configured component, please create another instance."
f" If you intend to re-use the component instance, please"
f" do not provide the `config` (or provide a `None`)."
)
if selector is None:
self._selectors.append(self.__default_selector)
self._selectors_configs.append(self.__default_selector_config)
else:
self._selectors.append(selector)
self._selectors_configs.append(
selector.make_configs(selector_config)
)
return self
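The branching above condenses to a small duplicate-instance rule: a component may be added more than once, but only its first insertion may carry a config. A sketch (hypothetical `MiniPipeline`, standing in for the real bookkeeping, with identity-based membership):

```python
class MiniPipeline:
    def __init__(self):
        self._components = []
        self._configs = []
        self._seen = set()

    def add(self, component, config=None):
        if id(component) not in self._seen:
            # First insertion: record the component and its config.
            self._seen.add(id(component))
            self._components.append(component)
            self._configs.append(config)
        elif config is None:
            # Re-insertion without a config: allowed; the None entry
            # only keeps the config list aligned with the components.
            self._components.append(component)
            self._configs.append(None)
        else:
            # Re-insertion with a new config: rejected.
            raise ValueError(
                "a re-added component instance cannot take a new config")
        return self
```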
def add_gold_packs(self, pack):
r"""Add gold packs to a internal dictionary used for evaluation.
This dictionary is used by the evaluator while calling
`consume_next(...)`
Args:
pack (Dict): A key, value pair containing job.id -> gold_pack
mapping
"""
self._predict_to_gold.update(pack)
def process(self, *args, **kwargs) -> PackType:
r"""Alias for :meth:`process_one`.
Args:
args: The positional arguments used to get the initial data.
kwargs: The keyword arguments used to get the initial data.
"""
return self.process_one(*args, **kwargs)
def run(self, *args, **kwargs):
r"""Run the whole pipeline and ignore all returned DataPack. This is
mostly used when you need to run the pipeline and do not require the
output but rely on the side-effect. For example, if the pipeline
writes some data to disk.
Calling this function will automatically call the :meth:`initialize`
at the beginning, and call the :meth:`finish` at the end.
Args:
args: The positional arguments used to get the initial data.
kwargs: The keyword arguments used to get the initial data.
"""
self.initialize()
for _ in self.process_dataset(*args, **kwargs):
# Process the whole dataset ignoring the return values.
# This essentially expect the processors have side effects.
pass
self.finish()
def process_one(self, *args, **kwargs) -> PackType:
r"""Process one single data pack. This is done by only reading and
processing the first pack in the reader.
Args:
kwargs: the information needed to load the data. For example, if
:attr:`_reader` is :class:`StringReader`, this should contain a
single piece of text in the form of a string variable. If
:attr:`_reader` is a file reader, this can point to the file
path.
"""
if not self._initialized:
raise ProcessFlowException(
"Please call initialize before running the pipeline"
)
first_pack = []
for p in self._reader.iter(*args, **kwargs):
first_pack.append(p)
break
if len(first_pack) == 1:
results = list(self._process_packs(iter(first_pack)))
return results[0]
else:
raise ValueError("Input data source contains no packs.")
def process_dataset(self, *args, **kwargs) -> Iterator[PackType]:
r"""Process the documents in the data source(s) and return an
iterator or list of DataPacks. The arguments are directly passed
to the reader to take data from the source.
"""
if not self._initialized:
raise ProcessFlowException(
"Please call initialize before running the pipeline"
)
data_iter = self._reader.iter(*args, **kwargs)
return self._process_packs(data_iter)
def finish(self):
"""
Call the finish method of all pipeline components. This needs to be
called explicitly to release all resources.
"""
# Report time profiling of readers and processors
if self._enable_profiling:
out_header: str = "Pipeline Time Profile\n"
out_reader: str = (
f"- Reader: {self.reader.component_name}, "
+ f"{self.reader.time_profile} s\n"
)
out_processor: str = "\n".join(
[
f"- Component [{i}]: {self.components[i].name}, {t} s"
for i, t in enumerate(self._profiler)
]
)
logger.info("%s%s%s", out_header, out_reader, out_processor)
self.reader.finish(self.resource)
for p in self.components:
p.finish(self.resource)
self._initialized = False
def __update_stream_job_status(self):
q_index = self._proc_mgr.current_queue_index
u_index = self._proc_mgr.unprocessed_queue_indices[q_index]
current_queue = self._proc_mgr.current_queue
for job_i in itertools.islice(current_queue, 0, u_index + 1):
if job_i.status == ProcessJobStatus.UNPROCESSED:
job_i.set_status(ProcessJobStatus.PROCESSED)
def __update_batch_job_status(self, component: BaseBatchProcessor):
# update the status of the jobs. The jobs which were removed from
# data_pack_pool will have status "PROCESSED" else they are "QUEUED"
q_index = self._proc_mgr.current_queue_index
u_index = self._proc_mgr.unprocessed_queue_indices[q_index]
current_queue = self._proc_mgr.current_queue
data_pool_length = len(component.batcher.data_pack_pool)
for i, job_i in enumerate(
itertools.islice(current_queue, 0, u_index + 1)
):
if i <= u_index - data_pool_length:
job_i.set_status(ProcessJobStatus.PROCESSED)
else:
job_i.set_status(ProcessJobStatus.QUEUED)
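The index arithmetic above marks the oldest jobs PROCESSED once their packs have left the batcher's pool: of the `u_index + 1` jobs in flight, the trailing `data_pool_length` still have packs pooled and stay QUEUED. A self-contained sketch (hypothetical helper returning status strings):

```python
def batch_job_statuses(u_index, data_pool_length):
    # Jobs 0..u_index are in flight; the last `data_pool_length` of
    # them still have packs in the batcher pool, so they stay QUEUED.
    return ["PROCESSED" if i <= u_index - data_pool_length else "QUEUED"
            for i in range(u_index + 1)]
```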
def __flush_batch_job_status(self):
current_queue = self._proc_mgr.current_queue
for job in current_queue:
job.set_status(ProcessJobStatus.PROCESSED)
def _process_with_component(
self,
selector: Selector,
component: PipelineComponent,
raw_job: ProcessJob,
):
for pack in selector.select(raw_job.pack):
# First, perform the component action on the pack
try:
if isinstance(component, Caster):
# Replacing the job pack with the casted version.
raw_job.alter_pack(component.cast(pack))
elif isinstance(component, BaseBatchProcessor):
pack.set_control_component(component.name)
component.process(pack)
elif isinstance(component, Evaluator):
pack.set_control_component(component.name)
component.consume_next(
pack, self._predict_to_gold[raw_job.id]
)
elif isinstance(component, BaseProcessor):
# Should be BasePackProcessor:
# All other processor are considered to be
# streaming processor like this.
pack.set_control_component(component.name)
component.process(pack)
# After the component action, make sure the entry is
# added into the index.
pack.add_all_remaining_entries()
except ValueError as e:
raise ProcessExecutionException(
f"Exception occurred when running " f"{component.name}"
) from e
def _process_packs(
self, data_iter: Iterator[PackType]
) -> Iterator[PackType]:
r"""Process the packs received from the reader by the running through
the pipeline.
Args:
data_iter (iterator): Iterator yielding jobs that contain packs
Returns:
Yields packs that are processed by the pipeline.
"""
# pylint: disable=line-too-long
# Here is the logic for the execution of the pipeline.
# The basic idea is to yield a pack as soon as it gets processed by all
# the processors instead of waiting for later jobs to get processed.
# 1) A job can be in three status
# - UNPROCESSED
| |
isinstance(hue_palette, dict), "`hue_palette` must be dict"
# assert drawing_order in ['sorted', 'random', 'original'],\
# "`drawing_order` must be one of ['original', 'sorted', 'random']"
# legend_order = {hue: np.unique(df[hue]) for hue in list_hue
# if (is_string_dtype(df[hue])
# or is_categorical_dtype(df[hue]))}
# if(fig_legend_order is not None):
# if(not isinstance(fig_legend_order, dict)):
# raise TypeError("`fig_legend_order` must be a dictionary")
# for hue in fig_legend_order.keys():
# if(hue in legend_order.keys()):
# legend_order[hue] = fig_legend_order[hue]
# else:
# print(f"{hue} is ignored for ordering legend labels"
# "due to incorrect name or data type")
# if(len(list_hue) < fig_ncol):
# fig_ncol = len(list_hue)
# fig_nrow = int(np.ceil(len(list_hue)/fig_ncol))
# fig = plt.figure(figsize=(fig_size[0]*fig_ncol*1.05,
# fig_size[1]*fig_nrow))
# for hue in list_hue:
# if hue in hue_palette.keys():
# palette = hue_palette[hue]
# else:
# palette = None
# if drawing_order == 'sorted':
# df_updated = df.sort_values(by=hue)
# elif drawing_order == 'random':
# df_updated = df.sample(frac=1, random_state=100)
# else:
# df_updated = df
# fig = px.scatter(df_updated,
# x=x,
# y=y,
# color=hue,
# opacity=alpha,
# color_continuous_scale=px.colors.sequential.Viridis,
# color_discrete_map=palette,
# **kwargs)
# fig.update_layout(legend={'itemsizing': 'constant'},
# width=500,
# height=500)
# fig.show(renderer="notebook")
# TO-DO add 3D plot
def umap(adata,
color=None,
dict_palette=None,
n_components=None,
size=8,
drawing_order='sorted',
dict_drawing_order=None,
show_texts=False,
texts=None,
text_size=10,
text_expand=(1.05, 1.2),
fig_size=None,
fig_ncol=3,
fig_legend_ncol=1,
fig_legend_order=None,
vmin=None,
vmax=None,
alpha=1,
pad=1.08,
w_pad=None,
h_pad=None,
save_fig=None,
fig_path=None,
fig_name='plot_umap.pdf',
plolty=False,
**kwargs):
""" Plot coordinates in UMAP
Parameters
----------
adata : `Anndata`
Annotated data matrix. UMAP coordinates are read from
`adata.obsm['X_umap']`.
n_components: `int`, optional (default: None)
Number of UMAP components to plot (2 or 3). If None, it is
inferred from `adata.obsm['X_umap']`.
color: `list`, optional (default: None)
A list of variables that will produce points with different colors.
e.g. color = ['anno1', 'anno2']
dict_palette: `dict`,optional (default: None)
A dictionary of palettes for different variables in `color`.
Only valid for categorical/string variables
e.g. dict_palette = {'ann1': {},'ann2': {}}
drawing_order: `str` (default: 'sorted')
The order in which values are plotted, This can be
one of the following values
- 'original': plot points in the same order as in input dataframe
- 'sorted' : plot points with higher values on top.
- 'random' : plot points in a random order
dict_drawing_order: `dict`,optional (default: None)
A dictionary of drawing_order for different variables in `color`.
Only valid for categorical/string variables
e.g. dict_drawing_order = {'ann1': 'original','ann2': 'sorted'}
size: `int` (default: 8)
Point size.
show_texts : `bool`, optional (default: False)
If True, text annotation will be shown.
text_size : `int`, optional (default: 10)
The text size.
texts: `list` optional (default: None)
Point names to plot.
text_expand : `tuple`, optional (default: (1.05, 1.2))
Two multipliers (x, y) by which to expand the bounding box of texts
when repelling them from each other/points/other objects.
fig_size: `tuple`, optional (default: None)
figure size.
fig_ncol: `int`, optional (default: 3)
the number of columns of the figure panel
fig_legend_order: `dict`,optional (default: None)
Specified order for the appearance of the annotation keys.
Only valid for categorical/string variable
e.g. fig_legend_order = {'ann1':['a','b','c'],'ann2':['aa','bb','cc']}
fig_legend_ncol: `int`, optional (default: 1)
The number of columns that the legend has.
vmin,vmax: `float`, optional (default: None)
The min and max values are used to normalize continuous values.
If None, the respective min and max of continuous values is used.
alpha: `float`, optional (default: 1)
0.0 transparent through 1.0 opaque
pad: `float`, optional (default: 1.08)
Padding between the figure edge and the edges of subplots,
as a fraction of the font size.
h_pad, w_pad: `float`, optional (default: None)
Padding (height/width) between edges of adjacent subplots,
as a fraction of the font size. Defaults to pad.
save_fig: `bool`, optional (default: False)
if True,save the figure.
fig_path: `str`, optional (default: None)
If save_fig is True, specify figure path.
fig_name: `str`, optional (default: 'plot_umap.pdf')
if save_fig is True, specify figure name.
Returns
-------
None
"""
if fig_size is None:
fig_size = mpl.rcParams['figure.figsize']
if save_fig is None:
save_fig = settings.save_fig
if fig_path is None:
fig_path = os.path.join(settings.workdir, 'figures')
if(n_components is None):
n_components = min(3, adata.obsm['X_umap'].shape[1])
if n_components not in [2, 3]:
raise ValueError("n_components should be 2 or 3")
if(n_components > adata.obsm['X_umap'].shape[1]):
print(f"`n_components` is greater than the available dimension.\n"
f"It is corrected to {adata.obsm['X_umap'].shape[1]}")
n_components = adata.obsm['X_umap'].shape[1]
if dict_palette is None:
dict_palette = dict()
df_plot = pd.DataFrame(index=adata.obs.index,
data=adata.obsm['X_umap'],
columns=['UMAP'+str(x+1) for x in
range(adata.obsm['X_umap'].shape[1])])
if color is None:
_scatterplot2d(df_plot,
x='UMAP1',
y='UMAP2',
drawing_order=drawing_order,
size=size,
show_texts=show_texts,
text_size=text_size,
texts=texts,
text_expand=text_expand,
fig_size=fig_size,
alpha=alpha,
pad=pad,
w_pad=w_pad,
h_pad=h_pad,
save_fig=save_fig,
fig_path=fig_path,
fig_name=fig_name,
**kwargs)
else:
color = list(dict.fromkeys(color)) # remove duplicate keys
for ann in color:
if(ann in adata.obs_keys()):
df_plot[ann] = adata.obs[ann]
if(not is_numeric_dtype(df_plot[ann])):
if 'color' not in adata.uns_keys():
adata.uns['color'] = dict()
if ann not in dict_palette.keys():
if (ann+'_color' in adata.uns['color'].keys()) \
and \
(all(np.isin(np.unique(df_plot[ann]),
list(adata.uns['color']
[ann+'_color'].keys())))):
dict_palette[ann] = \
adata.uns['color'][ann+'_color']
else:
dict_palette[ann] = \
generate_palette(adata.obs[ann])
adata.uns['color'][ann+'_color'] = \
dict_palette[ann].copy()
else:
if ann+'_color' not in adata.uns['color'].keys():
adata.uns['color'][ann+'_color'] = \
dict_palette[ann].copy()
elif(ann in adata.var_names):
df_plot[ann] = adata.obs_vector(ann)
else:
raise ValueError(f"could not find {ann} in `adata.obs.columns`"
" and `adata.var_names`")
if plolty:
print('Plotly is not supported yet.')
# _scatterplot2d_plotly(df_plot,
# x='UMAP1',
# y='UMAP2',
# list_hue=color,
# hue_palette=dict_palette,
# drawing_order=drawing_order,
# fig_size=fig_size,
# fig_ncol=fig_ncol,
# fig_legend_order=fig_legend_order,
# alpha=alpha,
# save_fig=save_fig,
# fig_path=fig_path,
# **kwargs)
else:
_scatterplot2d(df_plot,
x='UMAP1',
y='UMAP2',
list_hue=color,
hue_palette=dict_palette,
drawing_order=drawing_order,
dict_drawing_order=dict_drawing_order,
size=size,
show_texts=show_texts,
text_size=text_size,
text_expand=text_expand,
texts=texts,
fig_size=fig_size,
fig_ncol=fig_ncol,
fig_legend_ncol=fig_legend_ncol,
fig_legend_order=fig_legend_order,
vmin=vmin,
vmax=vmax,
alpha=alpha,
pad=pad,
w_pad=w_pad,
h_pad=h_pad,
save_fig=save_fig,
fig_path=fig_path,
fig_name=fig_name,
**kwargs)
def discretize(adata,
kde=None,
fig_size=(6, 6),
pad=1.08,
w_pad=None,
h_pad=None,
save_fig=None,
fig_path=None,
fig_name='plot_discretize.pdf',
**kwargs):
"""Plot original data VS discretized data
Parameters
----------
adata : `Anndata`
Annotated data matrix.
kde : `bool`, optional (default: None)
If True, compute a kernel density estimate to smooth the distribution
and show on the plot. Invalid as of v0.2.
pad: `float`, optional (default: 1.08)
Padding between the figure edge and the edges of subplots,
as a fraction of the font size.
h_pad, w_pad: `float`, optional (default: None)
Padding (height/width) between edges of adjacent subplots,
as a fraction of the font size. Defaults to pad.
fig_size: `tuple`, optional (default: (5,8))
figure size.
save_fig: `bool`, optional (default: False)
if True,save the figure.
fig_path: `str`, optional (default: None)
If save_fig is True, specify figure path.
fig_name: `str`, optional (default: 'plot_discretize.pdf')
if `save_fig` is True, specify figure name.
**kwargs: `dict`, optional
Other keyword arguments are passed through to ``plt.hist()``
Returns
-------
None
"""
if kde is not None:
warnings.warn("kde is not supported as of v0.2", DeprecationWarning)
if fig_size is None:
fig_size = mpl.rcParams['figure.figsize']
if save_fig is None:
save_fig = settings.save_fig
if fig_path is None:
fig_path = os.path.join(settings.workdir, 'figures')
assert 'disc' in adata.uns_keys(), \
"please run `si.tl.discretize()` first"
hist_edges = adata.uns['disc']['hist_edges']
hist_count = adata.uns['disc']['hist_count']
bin_edges = adata.uns['disc']['bin_edges']
bin_count = adata.uns['disc']['bin_count']
fig, ax = plt.subplots(2, 1, figsize=fig_size)
_ = ax[0].hist(hist_edges[:-1],
hist_edges,
weights=hist_count,
linewidth=0,
**kwargs)
_ = ax[1].hist(bin_edges[:-1],
bin_edges,
weights=bin_count,
**kwargs)
ax[0].set_xlabel('Non-zero values')
ax[0].set_ylabel('Count')
ax[0].set_title('Original')
ax[1].set_xlabel('Non-zero values')
ax[1].set_ylabel('Count')
ax[1].set_title('Discretized')
plt.tight_layout(pad=pad, h_pad=h_pad, w_pad=w_pad)
    if save_fig:
        if not os.path.exists(fig_path):
            os.makedirs(fig_path)
plt.savefig(os.path.join(fig_path, fig_name),
pad_inches=1,
bbox_inches='tight')
plt.close(fig)
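A dependency-free sketch of the storage convention `discretize()` relies on: `si.tl.discretize()` is assumed to precompute histogram edges and counts into `adata.uns['disc']`, which `plt.hist(edges[:-1], edges, weights=counts)` then re-renders without re-binning the raw data. The helper below is a hypothetical, pure-Python stand-in for that binning step.

```python
def precompute_hist(values, n_bins):
    """Return (edges, counts) for equal-width bins over `values`.

    Mimics the (hist_edges, hist_count) pair stored in adata.uns['disc'];
    this is an illustration, not the library's implementation.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    edges = [lo + i * width for i in range(n_bins + 1)]
    counts = [0] * n_bins
    for v in values:
        # The right edge is inclusive for the last bin, as in np.histogram.
        idx = min(int((v - lo) / width), n_bins - 1)
        counts[idx] += 1
    return edges, counts

edges, counts = precompute_hist([0.1, 0.2, 0.4, 0.9, 1.0], n_bins=2)
```

Once `edges` and `counts` are stored, the plot can be reproduced at any time via `ax.hist(edges[:-1], edges, weights=counts)` without keeping the original values around.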
def node_similarity(adata,
bins=20,
log=True,
show_cutoff=True,
cutoff=None,
n_edges=5000,
fig_size=(5, 3),
pad=1.08,
w_pad=None,
h_pad=None,
save_fig=None,
fig_path=None,
fig_name='plot_node_similarity.pdf',
):
"""Plot similarity scores of nodes
Parameters
----------
adata : `Anndata`
Annotated data matrix.
bins : `int`, optional (default: 20)
The number of equal-width bins in the given range for histogram plot.
log : `bool`, optional (default: True)
If True, log scale will be used for y axis.
show_cutoff : `bool`, optional (default: True)
If True, cutoff on scores will be shown
cutoff: `int`, optional (default: None)
Cutoff used to select edges
n_edges: `int`, optional (default: 5000)
The number of edges to select.
pad: `float`, optional (default: 1.08)
Padding between the figure edge and the edges of subplots,
as a fraction of the font size.
h_pad, w_pad: `float`, optional (default: None)
Padding (height/width) between edges of adjacent subplots,
as a fraction of the font size. Defaults to pad.
    fig_size: `tuple`, optional (default: (5, 3))
        figure size.
    save_fig: `bool`, optional (default: None)
        If True, save the figure. Defaults to `settings.save_fig`.
fig_path: `str`, optional (default: None)
If save_fig is True, specify figure path.
fig_name: `str`, optional (default: 'plot_node_similarity.pdf')
if `save_fig` is True, specify figure name.
Returns
-------
None
"""
if fig_size is None:
fig_size = mpl.rcParams['figure.figsize']
if save_fig is None:
save_fig = settings.save_fig
if fig_path is None:
fig_path = os.path.join(settings.workdir, 'figures')
mat_sim = adata.X
fig, ax = plt.subplots(1, 1, figsize=fig_size)
ax.hist(mat_sim.data, | |
#!/usr/bin/python
# Copyright: (c) 2020, DellEMC
# GNU General Public License v3.0+
""" Distributed virtual volume module """
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: dellemc_vplex_distributed_virtual_volume
version_added: '1.2.0'
short_description: Manage Distributed Virtual Volumes on VPLEX Storage System
description:
- Provisioning the distributed virtual volume on VPLEX Storage System includes
Create a distributed virtual volume,
Get existing distributed virtual volume details,
Rename an existing distributed virtual volume,
Delete an existing distributed virtual volume,
Expand a distributed virtual volume
extends_documentation_fragment:
- dellemc.vplex.dellemc_vplex.vplex
author:
- <NAME> (@mohanapriya-dell) <<EMAIL>>
options:
distributed_virtual_volume_name:
description:
- Name of the distributed virtual volume
Mutually exclusive with distributed_virtual_volume_id
type: str
distributed_device_name:
description:
- Name of the distributed device on top of which a distributed
virtual volume should be created
type: str
distributed_virtual_volume_id:
description:
        - Unique ID of the distributed virtual volume or its system_id
Mutually exclusive with distributed_virtual_volume_name
type: str
thin_enable:
description:
- Defines to have thin value
default: true
type: bool
wait_for_rebuild:
description:
- Defines whether creation of distributed virtual volume can
proceed on rebuilding device or not
default: true
type: bool
new_distributed_virtual_volume_name:
description:
- New name of the distributed virtual volume to be renamed
type: str
expand:
description:
- Defines to perform expand operation for distributed virtual volume
type: bool
state:
description:
- Defines whether the distributed virtual volume should exist or not
type: str
required: True
choices: ["present", "absent"]
notes:
- distributed_virtual_volume_name or distributed_virtual_volume_id is required
- distributed_virtual_volume_name and distributed_virtual_volume_id are
mutually exclusive
'''
EXAMPLES = r'''
- name: Create a distributed virtual volume
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_device_name: "ansible_test_dd_dev"
thin_enable: true
distributed_virtual_volume_name: "ansible_test_vol"
state: "present"
- name: Create a distributed virtual volume with wait_for_rebuild=false
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_device_name: "ansible_test_dd_dev"
thin_enable: true
distributed_virtual_volume_name: "ansible_test_vol"
wait_for_rebuild: false
state: "present"
- name: Get details of distributed virtual volume using name
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_name: "ansible_test_vol"
state: "present"
- name: Get details of distributed virtual volume using virtual volume ID
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_id: "ansible_dist_dev_vol"
state: "present"
- name: Rename distributed virtual volume using virtual volume name
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_name: "ansible_test_vol"
new_distributed_virtual_volume_name: "ansible_test_vol_new"
state: "present"
- name: Rename distributed virtual volume using virtual volume ID
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_id: "ansible_dist_dev_vol"
new_distributed_virtual_volume_name: "ansible_dist_dev_vol_new"
state: "present"
- name: Expand distributed virtual volume using virtual volume name
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_name: "ansible_test_vol"
expand: true
state: "present"
- name: Expand distributed virtual volume using virtual volume id
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_id: "ansible_dist_dev_vol"
expand: true
state: "present"
- name: Delete distributed virtual volume using virtual volume name
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_name: "ansible_test_vol"
state: "absent"
- name: Delete distributed virtual volume using virtual volume id
dellemc_vplex_distributed_virtual_volume:
vplexhost: "{{ vplexhost }}"
vplexuser: "{{ vplexuser}}"
vplexpassword: "{{ <PASSWORD> }}"
verifycert: "{{ verifycert }}"
distributed_virtual_volume_id: "ansible_dist_dev_vol"
state: "absent"
'''
RETURN = r'''
changed:
description: Status of the operation
returned: End of all the operations
type: bool
Distributed Virtual Volume Details:
description: Details of the distributed virtual volume
returned: When distributed virtual volume exists in VPLEX
type: complex
contains:
block_count:
description: Number of blocks
type: int
block_size:
description: Block size
type: int
capacity:
description: Size of volume
type: int
consistency_group:
description: Identifies the VPLEX distributed consistency
          group to which this distributed virtual volume belongs
type: str
expandable:
description: Whether the virtual volume is expandable or not
type: bool
expandable_capacity:
description: The amount of space that is available for volume
expansion.
type: int
expansion_method:
description: The expansion method available for this volume
-concatenation - The volume can be expanded using Concatenation
or RAID-C expansion.
-storage-volume - The volume can be expanded to the Expandable
capacity using storage volume expansion.
-not-supported - The volume does not support expansion.
This could be because the volume is being used in
RecoverPoint.
type: str
expansion_status:
description: The expansion status of the volume.
-dash - This volume can be expanded.
-failed - The last volume expansion on this volume failed.
-unknown - The volume expansion status is unknown.
-in-progress - The volume cannot be expanded because it has a
volume expansion in progress.
type: str
health_indications:
description: If health-state is not ok, additional information
type: list
health_state:
description: Health state of volume
type: str
initialization_status:
description: initialization_status
type: str
locality:
description: Displays the virtual volume is distributed.
type: str
name:
description: Distributed Virtual Volume name
type: str
operational_status:
description: The functional status
type: str
recoverpoint_protection_at:
description: Lists the VPLEX clusters at which the RecoverPoint
splitter is attached to the volume.
type: list
recoverpoint_usage:
description: Values might be the following.
-Local Replica - A copy created at the local site using
RecoverPoint CDP.
-Remote Replica - The replica at the remote site that is
being replicated using CRR or CLR configurations.
-Journal - A volume dedicated on the storage at each
copy in a RecoverPoint configuration. Journals are defined
per copy, and can consist of multiple journal volumes.
-Repository - A special volume that must be dedicated on the
SAN-attached storage at each site,
for each RecoverPoint cluster. It stores configuration
          information about the RecoverPoint appliances (RPAs).
-Production Source - This is the volume being replicated by
RecoverPoint.
type: str
service_status:
        description: Whether the service is running or not
type: str
storage_array_family:
description: The storage array family name
type: str
supporting_device:
description: The supporting distributed device on top of which the
corresponding distributed virtual volume is created
type: str
system_id:
description: Unique volume id
type: str
thin_enabled:
description: Thin provisioning support
type: str
visibility:
description: To display the global access
type: str
vpd_id:
description: vpd_id
type: str
'''
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.dellemc.vplex.plugins.module_utils.storage.dell\
import dellemc_ansible_vplex_utils as utils
LOG = utils.get_logger('dellemc_vplex_distributed_virtual_volume')
HAS_VPLEXAPI_SDK = utils.has_vplexapi_sdk()
class VplexDistributedVirtualVolume(): # pylint:disable=R0902
"""Class with distributed virtual volume operations"""
def __init__(self):
"""Define all parameters required by this module"""
self.module_params = utils.get_vplex_management_host_parameters()
self.module_params.update(get_distributed_virtual_volume_parameters())
mutually_exclusive = [
['distributed_virtual_volume_name',
'distributed_virtual_volume_id']
]
required_one_of = [
['distributed_virtual_volume_name',
'distributed_virtual_volume_id']
]
# initialize the ansible module
self.module = AnsibleModule(
argument_spec=self.module_params,
supports_check_mode=False,
mutually_exclusive=mutually_exclusive,
required_one_of=required_one_of
)
# Check for external libraries
lib_status, message = utils.external_library_check()
if not lib_status:
LOG.error(message)
self.module.fail_json(msg=message)
# Check for Python vplexapi sdk
if HAS_VPLEXAPI_SDK is False:
self.module.fail_json(msg="Ansible modules for VPLEX require "
"the vplexapi python library to be "
"installed. Please install the library "
"before using these modules.")
# Create the configuration instance to communicate with
# vplexapi
self.client = utils.config_vplexapi(self.module.params)
# Validating the user inputs
if isinstance(self.client, tuple):
err_code, msg = self.client # pylint: disable=W0612
LOG.error(msg)
self.module.fail_json(msg=msg)
vplex_setup = utils.get_vplex_setup(self.client)
LOG.info(vplex_setup)
# Create an instance to DistributedStorageApi to communicate with
# vplexapi
self.distvv = utils.DistributedStorageApi(api_client=self.client)
self.cluster = utils.ClustersApi(api_client=self.client)
self.vvol = utils.VirtualVolumeApi(api_client=self.client)
# result is a dictionary that contains changed status and
# distributed virtual volume details
self.result = {"changed": False, "dist_vv_details": {}}
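A hedged sketch (not the AnsibleModule implementation) of how the `mutually_exclusive` and `required_one_of` specs built above are typically enforced. `check_params` is a hypothetical helper for illustration only.

```python
def check_params(params, mutually_exclusive, required_one_of):
    """Return a list of error strings for the given parameter dict."""
    errors = []
    for group in mutually_exclusive:
        # More than one parameter of the group set -> error.
        present = [name for name in group if params.get(name) is not None]
        if len(present) > 1:
            errors.append('parameters are mutually exclusive: %s'
                          % '|'.join(group))
    for group in required_one_of:
        # None of the group's parameters set -> error.
        if not any(params.get(name) is not None for name in group):
            errors.append('one of the following is required: %s'
                          % ', '.join(group))
    return errors

spec = [['distributed_virtual_volume_name', 'distributed_virtual_volume_id']]
errs = check_params({'distributed_virtual_volume_name': 'vol1',
                     'distributed_virtual_volume_id': 'id1'}, spec, spec)
```

Passing both the name and the id thus fails fast, which is why the module lists the same pair under both constraints: exactly one of the two must be supplied.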
def get_distributed_vv(self, dist_vv_name):
"""
Get distributed virtual volume details
"""
try:
dist_vv_details = self.distvv.get_distributed_virtual_volume(
dist_vv_name)
LOG.info("Got distributed virtual volume details %s",
dist_vv_name)
LOG.debug("Distributed Virtual Volume Details:\n%s",
dist_vv_details)
return dist_vv_details
except utils.ApiException as err:
err_msg = ("Could not get distributed virtual volume {0} due to"
" error: {1}".format(dist_vv_name,
utils.error_msg(err)))
LOG.error("%s\n%s\n", err_msg, err)
return None
except (ValueError, TypeError) as err:
err_msg = "Could not get distributed virtual volume {0} due to"
err_msg = err_msg.format(dist_vv_name) + " error: {0}"
e_msg = utils.display_error(err_msg, err)
LOG.error("%s\n%s\n", e_msg, err)
self.module.fail_json(msg=e_msg)
def rename_distributed_vv(self, dist_vv_name, new_dist_vv_name):
"""
Rename the distributed virtual volume
"""
try:
dist_vv_patch_payload = [{'op': 'replace',
'path': | |
# -*- coding: utf-8 -*-
# Copyright 2020 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library for incident response operations on Google Cloud Compute Engine.
Library to make forensic images of Google Compute Engine disk and create
analysis virtual machine to be used in incident response.
"""
from __future__ import unicode_literals
import binascii
import datetime
import json
import logging
import os
import re
import socket
import ssl
import subprocess
import time
from googleapiclient.discovery import build # pylint: disable=import-error
from googleapiclient.errors import HttpError
from oauth2client.client import AccessTokenRefreshError
from oauth2client.client import GoogleCredentials
from oauth2client.client import ApplicationDefaultCredentialsError
log = logging.getLogger()
RETRY_MAX = 10
REGEX_DISK_NAME = re.compile('^(?=.{1,63}$)[a-z]([-a-z0-9]*[a-z0-9])?$')
STARTUP_SCRIPT = 'scripts/startup.sh'
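The pattern in `REGEX_DISK_NAME` encodes the GCE resource-naming rule: 1-63 characters, starting with a lowercase letter, then lowercase letters, digits or dashes, and not ending with a dash. A quick illustration:

```python
import re

# Same pattern as the module-level REGEX_DISK_NAME constant.
REGEX_DISK_NAME = re.compile('^(?=.{1,63}$)[a-z]([-a-z0-9]*[a-z0-9])?$')

assert REGEX_DISK_NAME.match('evidence-disk-1')
assert REGEX_DISK_NAME.match('d')                       # a single letter is valid
assert not REGEX_DISK_NAME.match('1-starts-with-digit')  # must start with a letter
assert not REGEX_DISK_NAME.match('ends-with-dash-')      # must not end with a dash
assert not REGEX_DISK_NAME.match('a' * 64)               # over the 63-char limit
```

The lookahead `(?=.{1,63}$)` checks the total length up front, so the rest of the pattern only has to describe the allowed character sequence.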
def CreateService(service_name, api_version):
"""Creates an GCP API service.
Args:
service_name (str): Name of the GCP service to use.
api_version (str): Version of the GCP service API to use.
Returns:
apiclient.discovery.Resource: API service resource.
Raises:
RuntimeError: If Application Default Credentials could not be obtained or if
service build times out.
"""
try:
credentials = GoogleCredentials.get_application_default()
except ApplicationDefaultCredentialsError as error:
error_msg = (
'Could not get application default credentials: {0!s}\n'
'Have you run $ gcloud auth application-default '
'login?').format(error)
raise RuntimeError(error_msg)
service_built = False
for retry in range(RETRY_MAX):
try:
service = build(
service_name, api_version, credentials=credentials,
cache_discovery=False)
service_built = True
    except socket.timeout:
      log.info(
          'Timeout trying to build service {0:s} (try {1:d} of {2:d})'.format(
              service_name, retry, RETRY_MAX))
if service_built:
break
if not service_built:
error_msg = (
'Failures building service {0:s} caused by multiple '
'timeouts').format(service_name)
raise RuntimeError(error_msg)
return service
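CreateService's loop boils down to a bounded retry-on-timeout pattern: attempt the build up to `RETRY_MAX` times, swallow timeouts, and raise once the budget is exhausted. A generic, runnable sketch with a stubbed builder (the helper names are illustrative, not part of the library):

```python
RETRY_MAX = 10

def build_with_retries(build_fn, retry_max=RETRY_MAX):
    """Call build_fn until it succeeds or retry_max timeouts occur."""
    for retry in range(retry_max):
        try:
            return build_fn()
        except TimeoutError:  # stand-in for socket.timeout
            pass  # CreateService logs here and tries again
    raise RuntimeError('Failures caused by multiple timeouts')

attempts = {'n': 0}

def flaky_build():
    # Fails twice, then succeeds -- simulates transient discovery timeouts.
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise TimeoutError()
    return 'service'

service = build_with_retries(flaky_build)
```

Returning from inside the `try` avoids the `service_built` flag bookkeeping that the original loop needs.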
class GoogleCloudProject:
"""Class representing a Google Cloud Project.
Attributes:
project_id: Project name.
default_zone: Default zone to create new resources in.
gce_api_client: Client to interact with GCE APIs.
Example use:
gcp = GoogleCloudProject("your_project_name", "us-east")
gcp.ListInstances()
"""
COMPUTE_ENGINE_API_VERSION = 'v1'
def __init__(self, project_id, default_zone=None):
"""Initialize the GoogleCloudProject object.
Args:
project_id (str): The name of the project.
default_zone (str): Default zone to create new resources in.
"""
self.project_id = project_id
self.default_zone = default_zone
self.gce_api_client = None
def GceApi(self):
"""Get a Google Compute Engine service object.
Returns:
apiclient.discovery.Resource: A Google Compute Engine service object.
"""
if self.gce_api_client:
return self.gce_api_client
self.gce_api_client = CreateService(
'compute', self.COMPUTE_ENGINE_API_VERSION)
return self.gce_api_client
def BlockOperation(self, response, zone=None):
"""Executes API calls.
Args:
response (dict): GCE API response.
zone (str): GCP zone to execute the operation in. None means GlobalZone.
Returns:
str: Operation result in JSON format.
Raises:
RuntimeError: If API call failed.
"""
service = self.GceApi()
while True:
if zone:
request = service.zoneOperations().get(
project=self.project_id, zone=zone, operation=response['name'])
result = request.execute()
else:
request = service.globalOperations().get(
project=self.project_id, operation=response['name'])
result = request.execute()
if 'error' in result:
raise RuntimeError(result['error'])
if result['status'] == 'DONE':
return result
time.sleep(5) # Seconds between requests
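BlockOperation is a standard poll-until-DONE loop over the GCE operations endpoint. A runnable sketch against a stubbed fetch function (helper names are hypothetical):

```python
def wait_for_operation(fetch, sleep=lambda s: None):
    """Poll fetch() until the operation reports DONE or carries an error."""
    while True:
        result = fetch()
        if 'error' in result:
            raise RuntimeError(result['error'])
        if result['status'] == 'DONE':
            return result
        sleep(5)  # seconds between polls, as in BlockOperation

# Stub: two pending responses, then completion.
states = [{'status': 'PENDING'}, {'status': 'PENDING'}, {'status': 'DONE'}]
result = wait_for_operation(lambda: states.pop(0))
```

Injecting `sleep` keeps the sketch testable; the real method hard-codes `time.sleep(5)` between requests.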
def FormatLogMessage(self, message):
"""Format log messages with project specific information.
Args:
message (str): Message string to log.
Returns:
str: Formatted log message string.
"""
return 'project:{0} {1}'.format(self.project_id, message)
def ListInstances(self):
"""List instances in project.
Returns:
dict: Dictionary with name and metadata for each instance.
"""
have_all_tokens = False
page_token = None
instances = {}
while not have_all_tokens:
gce_instance_client = self.GceApi().instances()
if page_token:
request = gce_instance_client.aggregatedList(
project=self.project_id, pageToken=page_token)
else:
request = gce_instance_client.aggregatedList(project=self.project_id)
result = request.execute()
page_token = result.get('nextPageToken')
if not page_token:
have_all_tokens = True
for zone in result['items']:
try:
        for instance in result['items'][zone]['instances']:
          # Use a separate name so the outer loop variable `zone` is not
          # clobbered mid-iteration.
          _, instance_zone = instance['zone'].rsplit('/', 1)
          instances[instance['name']] = {
              'zone': instance_zone
          }
except KeyError:
pass
return instances
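ListInstances and ListDisks share the same `nextPageToken` pagination loop: issue requests until a response carries no token, merging each page's items. A minimal runnable sketch with stubbed pages (names illustrative, standing in for `aggregatedList().execute()`):

```python
def list_all(get_page):
    """Merge 'items' from every page returned by get_page(token)."""
    items, page_token, done = {}, None, False
    while not done:
        result = get_page(page_token)
        page_token = result.get('nextPageToken')
        if not page_token:
            done = True  # last page reached
        items.update(result['items'])
    return items

# Two stub pages: the first links to the second, the second is terminal.
pages = {
    None: {'items': {'a': 1}, 'nextPageToken': 'p2'},
    'p2': {'items': {'b': 2}},
}
merged = list_all(lambda token: pages[token])
```

The first request passes no token (`None`); every later request echoes back the token from the previous response.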
def ListDisks(self):
"""List disks in project.
Returns:
      dict: Dictionary with name and metadata for each disk.
"""
have_all_tokens = False
page_token = None
disks = {}
while not have_all_tokens:
gce_disk_client = self.GceApi().disks()
if page_token:
request = gce_disk_client.aggregatedList(
project=self.project_id, pageToken=page_token)
else:
request = gce_disk_client.aggregatedList(project=self.project_id)
result = request.execute()
page_token = result.get('nextPageToken')
if not page_token:
have_all_tokens = True
for zone in result['items']:
try:
        for disk in result['items'][zone]['disks']:
          # Use a separate name so the outer loop variable `zone` is not
          # clobbered mid-iteration.
          _, disk_zone = disk['zone'].rsplit('/', 1)
          disks[disk['name']] = {
              'zone': disk_zone
          }
except KeyError:
pass
return disks
def GetInstance(self, instance_name, zone=None):
"""Get instance from project.
Args:
instance_name (str): The instance name.
zone (str): The zone for the instance.
Returns:
GoogleComputeInstance: A Google Compute Instance object.
Raises:
RuntimeError: If instance does not exist.
"""
instances = self.ListInstances()
instance = instances.get(instance_name)
if not instance:
error_msg = 'Instance {0:s} was not found in project {1:s}'.format(
instance_name, self.project_id)
raise RuntimeError(error_msg)
if not zone:
zone = instance['zone']
return GoogleComputeInstance(self, zone, instance_name)
def GetDisk(self, disk_name, zone=None):
"""Get a GCP disk object.
Args:
disk_name (str): Name of the disk.
zone (str): What zone the disk is in.
Returns:
GoogleComputeDisk: Disk object.
Raises:
RuntimeError: When the specified disk cannot be found in project.
"""
disks = self.ListDisks()
disk = disks.get(disk_name)
if not disk:
error_msg = 'Disk {0:s} was not found in project {1:s}'.format(
disk_name, self.project_id)
raise RuntimeError(error_msg)
if not zone:
zone = disk['zone']
return GoogleComputeDisk(self, zone, disk_name)
def CreateDiskFromSnapshot(
self, snapshot, disk_name=None, disk_name_prefix=''):
"""Create a new disk based on a Snapshot.
Args:
snapshot (GoogleComputeSnapshot): Snapshot to use.
disk_name (str): Optional string to use as new disk name.
disk_name_prefix (str): Optional string to prefix the disk name with.
Returns:
GoogleComputeDisk: Google Compute Disk.
Raises:
RuntimeError: If the disk exists already.
"""
if not disk_name:
disk_name = GenerateDiskName(snapshot, disk_name_prefix)
body = {
'name': disk_name,
'sourceSnapshot': snapshot.GetSourceString()
}
try:
gce_disks_client = self.GceApi().disks()
request = gce_disks_client.insert(
project=self.project_id, zone=self.default_zone, body=body)
response = request.execute()
except HttpError as exception:
if exception.resp.status == 409:
error_msg = 'Disk {0:s} already exists'.format(disk_name)
raise RuntimeError(error_msg)
error_msg = (
'Unknown error (status: {0:d}) occurred when creating disk '
'from Snapshot:\n{1!s}').format(exception.resp.status, exception)
raise RuntimeError(error_msg)
self.BlockOperation(response, zone=self.default_zone)
return GoogleComputeDisk(
project=self, zone=self.default_zone, name=disk_name)
def GetOrCreateAnalysisVm(
self, vm_name, boot_disk_size, cpu_cores=4,
image_project='ubuntu-os-cloud', image_family='ubuntu-1804-lts',
packages=None):
"""Get or create a new virtual machine for analysis purposes.
Args:
vm_name (str): Name of the virtual machine.
boot_disk_size (int): The size of the analysis VM boot disk (in GB).
cpu_cores (int): Number of CPU cores for the virtual machine.
image_project (str): Name of the project where the analysis VM image is
hosted.
image_family (str): Name of the image to use to create the analysis VM.
packages (list[str]): List of packages to install in the VM.
Returns:
tuple(GoogleComputeInstance, bool): A tuple with a virtual machine object
and a boolean indicating if the virtual machine was created or not.
Raises:
RuntimeError: If virtual machine cannot be created.
"""
if not self.default_zone:
raise RuntimeError('Cannot create VM, zone information is missing')
# Re-use instance if it already exists, or create a new one.
try:
instance = self.GetInstance(vm_name, zone=self.default_zone)
created = False
return instance, created
except RuntimeError:
pass
machine_type = 'zones/{0}/machineTypes/n1-standard-{1:d}'.format(
self.default_zone, cpu_cores)
ubuntu_image = self.GceApi().images().getFromFamily(
project=image_project, family=image_family).execute()
source_disk_image = ubuntu_image['selfLink']
startup_script = self._ReadStartupScript()
    if packages:
      # str.replace returns a new string; the result must be reassigned.
      startup_script = startup_script.replace(
          '${packages[@]}', ' '.join(packages))
config = {
'name': vm_name,
'machineType': machine_type,
'disks': [{
'boot': True,
'autoDelete': True,
'initializeParams': {
'sourceImage': source_disk_image,
'diskSizeGb': boot_disk_size,
}
}],
'networkInterfaces': [{
'network':
'global/networks/default',
'accessConfigs': [{
'type': 'ONE_TO_ONE_NAT',
'name': 'External NAT'
}]
}],
'serviceAccounts': [{
'email':
'default',
'scopes': [
'https://www.googleapis.com/auth/devstorage.read_write',
'https://www.googleapis.com/auth/logging.write'
]
}],
'metadata': {
'items': [{
'key': 'startup-script',
# Analysis software to install.
'value': startup_script
}]
}
}
gce_instance_client = self.GceApi().instances()
request = gce_instance_client.insert(
project=self.project_id, zone=self.default_zone, body=config)
response = request.execute()
self.BlockOperation(response, zone=self.default_zone)
instance = GoogleComputeInstance(
project=self, zone=self.default_zone, name=vm_name)
created = True
return instance, created
def ListInstanceByLabels(self, labels_filter, filter_union=True):
"""List VMs in a project with one/all of the provided labels.
This will call the __ListByLabel on instances() API object
with the proper labels filter.
Args:
labels_filter (dict): A dict of labels to find e.g. {'id': '123'}.
filter_union (bool): A Boolean; True to get the union of all filters,
False to get the intersection.
Returns:
dict: A dictionary with name and metadata (zone, labels) for each
instance.
ex: {'instance-1': {'zone': 'us-central1-a', 'labels': {'id': '123'}}
"""
instance_service_object = self.GceApi().instances()
return self.__ListByLabel(
labels_filter, instance_service_object, filter_union)
def ListDiskByLabels(self, labels_filter, filter_union=True):
"""List Disks in a project with one/all of | |
def lisp_clear_map_cache():
    global lisp_map_cache, lisp_rloc_probe_list
    global lisp_crypto_keys_by_rloc_encap, lisp_crypto_keys_by_rloc_decap
    global lisp_rtr_list

    cleared = bold("User cleared", False)
    count = lisp_map_cache.cache_count
    lprint("{} map-cache with {} entries".format(cleared, count))

    if (lisp_program_hardware):
        lisp_map_cache.walk_cache(lisp_clear_hardware_walk, None)
    lisp_map_cache = lisp_cache()

    # Clear the RLOC-probe list and the per-RLOC crypto keys as well.
    lisp_rloc_probe_list = {}
    lisp_crypto_keys_by_rloc_encap = {}
    lisp_crypto_keys_by_rloc_decap = {}

    # Clear the RTR list.
    lisp_rtr_list = {}

    # Tell the external data-plane, if one is running, to restart too.
    lisp_process_data_plane_restart(True)
    return
def lisp_encapsulate_rloc_probe(lisp_sockets, rloc, nat_info, packet):
    if (len(lisp_sockets) != 4): return

    local_rloc = lisp_myrlocs[0]

    # Prepend an outer IPv4 header (20 bytes) and UDP header (8 bytes) to
    # the supplied RLOC-probe packet.
    length = len(packet) + 28
    ip_header = struct.pack("BBHIBBHII", 0x45, 0, socket.htons(length), 0,
        64, 17, 0, socket.htonl(local_rloc.address),
        socket.htonl(rloc.address))
    ip_header = lisp_ip_checksum(ip_header)

    udp_header = struct.pack("HHHH", 0, socket.htons(LISP_CTRL_PORT),
        socket.htons(length - 20), 0)

    packet = lisp_packet(ip_header + udp_header + packet)

    # Fill in inner and outer address/TTL fields before encoding.
    packet.inner_dest.copy_address(rloc)
    packet.inner_dest.instance_id = 0xffffff
    packet.inner_source.copy_address(local_rloc)
    packet.inner_ttl = 64
    packet.outer_dest.copy_address(rloc)
    packet.outer_source.copy_address(local_rloc)
    packet.outer_version = packet.outer_dest.afi_to_version()
    packet.outer_ttl = 64
    packet.encap_port = nat_info.port if nat_info else LISP_DATA_PORT

    probe_rloc = red(rloc.print_address_no_iid(), False)
    if (nat_info):
        hostname = " {}".format(blue(nat_info.hostname, False))
        probe_type = bold("RLOC-probe request", False)
    else:
        hostname = ""
        probe_type = bold("RLOC-probe reply", False)

    lprint(("Data encapsulate {} to {}{} port {} for " +
        "NAT-traversal").format(probe_type, probe_rloc, hostname,
        packet.encap_port))

    if (packet.encode(None) == None): return
    packet.print_packet("Send", True)

    send_socket = lisp_sockets[3]
    packet.send_packet(send_socket, packet.outer_dest)
    del(packet)
    return
def lisp_get_default_route_next_hops():

    # On macOS, parse "route -n get default" output for the gateway address
    # and interface of the default route.
    if (lisp_is_macos()):
        cmd = "route -n get default"
        lines = commands.getoutput(cmd).split("\n")
        gateway = interface = None
        for line in lines:
            if (line.find("gateway: ") != -1):
                gateway = line.split(": ")[1]
            if (line.find("interface: ") != -1):
                interface = line.split(": ")[1]
        return ([[interface, gateway]])
#! /usr/bin/env python
from astropy.modeling import Parameter, Model
import numpy as np
from astropy import units as u
import inspect
class _Model(Model):
"""Subclass of astropy.modeling.Model that support complex type."""
# code snippet from `astropy.modeling.Model`
def prepare_inputs(self, *inputs, model_set_axis=None, equivalencies=None,
**kwargs):
"""
This method is used in `~astropy.modeling.Model.__call__` to ensure
that all the inputs to the model can be broadcast into compatible
shapes (if one or both of them are input as arrays), particularly if
there are more than one parameter sets. This also makes sure that (if
applicable) the units of the input will be compatible with the evaluate
method.
"""
# When we instantiate the model class, we make sure that __call__ can
# take the following two keyword arguments: model_set_axis and
# equivalencies.
if model_set_axis is None:
# By default the model_set_axis for the input is assumed to be the
# same as that for the parameters the model was defined with
# TODO: Ensure that negative model_set_axis arguments are respected
model_set_axis = self.model_set_axis
params = [getattr(self, name) for name in self.param_names]
inputs = [np.asanyarray(_input, dtype=None) for _input in inputs]
self._validate_input_shapes(inputs, self.inputs, model_set_axis)
inputs_map = kwargs.get('inputs_map', None)
inputs = self._validate_input_units(inputs, equivalencies, inputs_map)
# The input formatting required for single models versus a multiple
# model set are different enough that they've been split into separate
# subroutines
if self._n_models == 1:
return self._prepare_inputs_single_model(params, inputs, **kwargs)
else:
return self._prepare_inputs_model_set(params, inputs,
model_set_axis, **kwargs)
def _get_func_args(func):
return inspect.getfullargspec(func).args
def _set_mutual_inversion(cls1, cls2):
cls1.inverse = property(lambda self: cls2(
n_models=len(self), model_set_axis=self.model_set_axis))
cls2.inverse = property(lambda self: cls1(
n_models=len(self), model_set_axis=self.model_set_axis))
class _ResonanceCircleTransformMixin(object):
"""Mixin class that defines basic resonance circle transform.
The transform reads
.. code-block:: text
v' = 0.5 / v
"""
n_inputs = 1
n_outputs = 1
_separable = True
@staticmethod
def evaluate(value):
return 0.5 / value
class ResonanceCircleComplex(_ResonanceCircleTransformMixin, _Model):
"""Model that describes the resonance circle of KIDs in complex plane.
The model reads
.. code-block:: text
S = 0.5 / X
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('X', )
self.outputs = ('S', )
class ResonanceCircleComplexInv(_ResonanceCircleTransformMixin, _Model):
"""Inversion of `ResonanceCircleComplex`."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('S', )
self.outputs = ('X', )
class ResonanceCircleQr(_ResonanceCircleTransformMixin, _Model):
"""Model that describes the relation of `r` and `Qr`."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('r', )
self.outputs = ('Qr', )
class ResonanceCircleQrInv(_ResonanceCircleTransformMixin, _Model):
"""Inversion of `ResonanceCircleQr`"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('Qr', )
self.outputs = ('r', )
class _ResonanceCircleTransform2Mixin(object):
"""Mixin class that defines basic resonance circle transform for
real and imaginary parts separately.
The transform reads
.. code-block:: text
I + iQ = 0.5 / (r + ix)
"""
n_inputs = 2
n_outputs = 2
_separable = False
@staticmethod
def evaluate(v1, v2):
f = 0.5 / (v1 ** 2 + v2 ** 2)
return v1 * f, - v2 * f
class ResonanceCircle(_ResonanceCircleTransform2Mixin, _Model):
"""Same as `ResonanceCircleComplex`, but with separate real and imaginary
parts as inputs and outputs."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('r', 'x')
self.outputs = ('I', 'Q')
class ResonanceCircleInv(_ResonanceCircleTransform2Mixin, _Model):
"""Inversion of `ResonanceCircle`."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('I', 'Q')
self.outputs = ('r', 'x')
_set_mutual_inversion(ResonanceCircleComplex, ResonanceCircleComplexInv)
_set_mutual_inversion(ResonanceCircleQr, ResonanceCircleQrInv)
_set_mutual_inversion(ResonanceCircle, ResonanceCircleInv)
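As a quick numerical check (plain NumPy, with arbitrarily chosen values), the split-form `evaluate` in `_ResonanceCircleTransform2Mixin` agrees with the complex form `S = 0.5 / (r + ix)`:

```python
import numpy as np

# arbitrary small resonance parameters (hypothetical values)
r, x = 2.5e-5, 1.0e-4

# complex form, as in ResonanceCircleComplex
S = 0.5 / (r + 1j * x)

# split form, as in _ResonanceCircleTransform2Mixin.evaluate
f = 0.5 / (r ** 2 + x ** 2)
I, Q = r * f, -x * f

assert np.isclose(S.real, I) and np.isclose(S.imag, Q)
```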
class OpticalDetune(_Model):
"""Model that describes detuning of KIDs in response to incident
optical power.
The model reads
.. code-block:: text
x = (p - background) * responsivity
"""
n_inputs = 1
n_outputs = 1
_separable = True
background = Parameter(default=5.0 * u.pW, min=0.)
responsivity = Parameter(default=1e-17, unit=1. / u.W, min=0.)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('p', )
self.outputs = ('x', )
@staticmethod
def evaluate(p, background, responsivity):
return (p - background) * responsivity
class InstrumentalDetune(_Model):
"""Model that describes the detuning in terms of the probe tone and
detector intrinsics.
The model reads
.. code-block:: text
x = (fp / fr - 1.)
"""
n_inputs = 2
n_outputs = 1
_separable = True
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inputs = ('fp', 'fr')
self.outputs = ('x', )
@staticmethod
def evaluate(fp, fr):
return fp / fr - 1.
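A minimal numeric sketch of the two detune models above, with arbitrary values; both `evaluate` methods are pure arithmetic, so no astropy machinery is needed:

```python
# InstrumentalDetune: x = fp / fr - 1 (probe tone vs. resonance frequency)
fp, fr = 1.0001e9, 1.0e9
x_inst = fp / fr - 1.0
assert abs(x_inst - 1e-4) < 1e-12

# OpticalDetune: x = (p - background) * responsivity, all in SI units here
p, background, responsivity = 6e-12, 5e-12, 1e-17  # W, W, 1/W
x_opt = (p - background) * responsivity
assert abs(x_opt - 1e-29) < 1e-40
```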
class _ComposableModelBase(_Model):
"""Base class that setup itself with mixin classes."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._set_inputs()
self._set_outputs()
class _ReadoutReprComplexMixin(object):
"""Mixin class that sets the output to use complex S21."""
n_outputs = 1
_separable = True
def _set_outputs(self):
self.outputs = ('S', )
@staticmethod
def _repr_apply(a, b):
if a.shape == (1,):
a = a[0]
if b.shape == (1,):
b = b[0]
return a + 1.j * b
class _ReadoutRepr2Mixin(object):
"""Mixin class that sets the output to use separate I and Q."""
n_outputs = 2
_separable = False
def _set_outputs(self):
self.outputs = ('I', 'Q')
@staticmethod
def _repr_apply(c):
return c.real, c.imag
class ReadoutIQToComplex(_ReadoutReprComplexMixin, _ComposableModelBase):
"""Utility model to convert from (I, Q) to complex S21."""
n_inputs = 2
def _set_inputs(self):
self.inputs = ('I', 'Q')
@staticmethod
def evaluate(I, Q): # noqa: E741
return super(
ReadoutIQToComplex, ReadoutIQToComplex)._repr_apply(I, Q)
def __call__(self, x, y, **kwargs):
# make sure they are the same shape
x = np.asanyarray(x, dtype=float)
y = np.asanyarray(y, dtype=float)
if x.shape == ():
    x = np.full_like(y, x.item())  # np.asscalar was removed in NumPy 1.23
elif y.shape == ():
    y = np.full_like(x, y.item())
return super().__call__(x, y, **kwargs)
class ReadoutComplexToIQ(_ReadoutRepr2Mixin, _ComposableModelBase):
"""Utility model to convert from complex S21 to (I, Q)."""
n_inputs = 1
def _set_inputs(self):
self.inputs = ('S', )
@staticmethod
def evaluate(S):
    # call the mixin's converter directly; super(_ReadoutRepr2Mixin, ...)
    # would skip past the mixin in the MRO and miss _repr_apply
    return _ReadoutRepr2Mixin._repr_apply(S)
class _ResonanceCircleSweepMixin(object):
"""Mixin class that sets up the frequency sweep model."""
n_inputs = 1
def _set_inputs(self):
self.inputs = ('f', )
@staticmethod
def evaluate(f, fr, Qr):
r = ResonanceCircleQrInv.evaluate(Qr)
x = InstrumentalDetune.evaluate(f, fr)
return ResonanceCircle.evaluate(r, x)
class ResonanceCircleSweep(
_ResonanceCircleSweepMixin, _ReadoutRepr2Mixin, _ComposableModelBase):
"""Model that describes the frequency sweep of The resonance circle."""
# already separate in I, Q.
fr = Parameter(default=1e9, unit=u.Hz, min=0.)
Qr = Parameter(default=2e4)
class ResonanceCircleSweepComplex(
_ResonanceCircleSweepMixin,
_ReadoutReprComplexMixin, _ComposableModelBase):
"""The same as `ResonanceCircleSweep`, but the result is the complex S21.
"""
fr = Parameter(default=1e9, unit=u.Hz, min=0.)
Qr = Parameter(default=2e4)
# make it return complex
@staticmethod
def evaluate(f, fr, Qr):
return _ReadoutReprComplexMixin._repr_apply(
*_ResonanceCircleSweepMixin.evaluate(f, fr, Qr)
)
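The sweep model chains the pieces defined earlier; a sketch with plain NumPy, using the class defaults (`fr` = 1 GHz, `Qr` = 2e4) and a hypothetical 5-point frequency grid:

```python
import numpy as np

# hypothetical resonator at the class defaults
fr, Qr = 1e9, 2e4
f = np.linspace(fr - 1e5, fr + 1e5, 5)

r = 0.5 / Qr            # ResonanceCircleQrInv.evaluate
x = f / fr - 1.0        # InstrumentalDetune.evaluate
S = 0.5 / (r + 1j * x)  # ResonanceCircleComplex form

# exactly on resonance (x = 0) the model returns S = 0.5 / r = Qr
assert np.isclose(S[2], Qr)
```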
class _ResonanceCircleProbeMixin(object):
"""Mixin class that sets up the probing model."""
n_inputs = 2
def _set_inputs(self):
self.inputs = ('fr', 'Qr')
@staticmethod
def evaluate(fr, Qr, fp):
r = ResonanceCircleQrInv.evaluate(Qr)
x = InstrumentalDetune.evaluate(fp, fr)
return ResonanceCircle.evaluate(r, x)
class ResonanceCircleProbe(
_ResonanceCircleProbeMixin, _ReadoutRepr2Mixin, _ComposableModelBase):
"""Model that describes the probing of The resonance circle."""
# already separate in I, Q.
fp = Parameter(default=1e9, unit=u.Hz, min=0.)
class ResonanceCircleProbeComplex(
_ResonanceCircleProbeMixin, _ReadoutReprComplexMixin,
_ComposableModelBase):
"""The same as `ResonanceCircleProbe`, but the result is the complex S21.
"""
fp = Parameter(default=1e9, unit=u.Hz, min=0.)
@staticmethod
def evaluate(fr, Qr, fp):
    # zero-argument super() does not work inside a staticmethod;
    # call the mixins explicitly, as ResonanceCircleSweepComplex does
    return _ReadoutReprComplexMixin._repr_apply(
        *_ResonanceCircleProbeMixin.evaluate(fr, Qr, fp)
    )
class _ReadoutTransformComplexMixin(object):
"""Mixin class that setup the transformation of (I + jQ) due to readout."""
n_inputs = 1
def _set_inputs(self):
self.inputs = ('S', )
@classmethod
def _get_transform_params(cls):
exclude_args = ('S', 'f')
return [a for a in _get_func_args(cls._transform)
if a not in exclude_args]
@classmethod
def _get_inverse_transform_params(cls):
exclude_args = ('S', 'f')
return [a for a in _get_func_args(cls._inverse_transform)
if a not in exclude_args]
# class _ReadoutGeometryParamsMixin(_Model):
# """Mixin class that defines the KIDs parameters related to the
# readout circuit geometry."""
# tau = Parameter(default=0., unit=u.s)
# Qc = Parameter(default=4e4) # optimal coupling
# Qi = Parameter(tied=lambda m: m.Qr * m.Qc / (m.Qc - m.Qr))
# phi_c = Parameter(default=0.)
# class _ReadoutGainParamsMixin(_Model):
# """Mixin class that defines some general gain parameters.
# Note that these parameters could mean different in different concrete
# classes.
# """
# g0 = Parameter(default=1.)
# g1 = Parameter(default=0.)
# g = Parameter(tied=lambda m: np.hypot(m.g0, m.g1))
# phi_g = Parameter(tied=lambda m: np.arctan2(m.g1, m.g0))
# class _ReadoutLinTrendParamsMixin(_Model):
# """Mixin class that defines parameters that describes a
# linear baseline trend."""
# f0 = Parameter(default=1e9, unit=u.Hz, min=0.)
# k0 = Parameter(default=0.)
# k1 = Parameter(default=0.)
# m0 = Parameter(default=0.)
# m1 = Parameter(default=0.)
class _ReadoutGainWithLinTrendMixin(
_ReadoutTransformComplexMixin,
# _ReadoutLinTrendParamsMixin,
# _ReadoutGainParamsMixin,
):
"""Mixin class that defines readout transform of S21 using an effective
complex gain and a linear baseline trend."""
@staticmethod
def _transform(S, f, g0, g1, f0, k0, k1, m0, m1):
gg = g0 + 1.j * g1
kk = k0 + 1.j * k1
mm = m0 + 1.j * m1
return gg * S + kk * (f - f0) + mm
@staticmethod
def _inverse_transform(S, f, g0, g1, f0, k0, k1, m0, m1):
gg = g0 + 1.j * g1
| |
"""
Find an approximate MMS allocation.
Based on:
<NAME> and <NAME>,
["An Improved Approximation Algorithm for Maximin Shares"](https://www.sciencedirect.com/science/article/abs/pii/S0004370221000989),
Artificial Intelligence, 2021.
Programmers: <NAME> and <NAME>
Date: 2022-05
"""
from fairpy import Allocation, agents_from
from fairpy.agents import AdditiveAgent, agent_names_from
from typing import List,Any,Tuple
from copy import deepcopy
import logging
import math
logger = logging.getLogger(__name__)
three_quarters = 0.75 # The approximation ratio of the algorithm
def three_quarters_MMS_allocation_algorithm(agents, items:List[Any]=None)-> Tuple[Allocation,List[str]]:
"""
Gets a list of agents (with valuations) and returns a 3/4-MMS allocation using the algorithm from the article.
:param agents: list of agents in different formats, to perform the allocation on
:param items: list of item names; use this to assign only some of the items the agents have valuations for.
:return allocation: alpha-MMS Allocation for each agent.
:return remaining_items: items that remained after each agent got at least 3/4 of its MMS allocation.
### allocation for 2 agents, 3 objects, with 0 valuations.
>>> data={'agent0': {'x0': 1000.0, 'x1': 0.0, 'x2': 0.0}, 'agent1': {'x0': 0.0, 'x1': 1000.0, 'x2': 0.0}}
>>> agents=AdditiveAgent.list_from(data)
>>> alloc, remaining_items=three_quarters_MMS_allocation_algorithm(agents)
>>> alloc
agent0 gets {x0} with value 1e+03.
agent1 gets {} with value 0.
<BLANKLINE>
>>> remaining_items
['x1', 'x2']
>>> ### allocation for 1 agent, 1 object
>>> a = AdditiveAgent({"x": 2}, name="Alice")
>>> agents=[a]
>>> alloc, remaining_items = three_quarters_MMS_allocation_algorithm(agents)
>>> alloc
Alice gets {x} with value 2.
<BLANKLINE>
>>> remaining_items
[]
>>> ### allocation for 2 agents, 2 objects
>>> a = AdditiveAgent({"x": 2, "y": 1}, name="Alice")
>>> b = AdditiveAgent({"x": 1, "y": 2}, name="Blice")
>>> agents = [a, b]
>>> alloc, remaining_items = three_quarters_MMS_allocation_algorithm(agents)
>>> print(alloc)
Alice gets {x} with value 2.
Blice gets {y} with value 2.
<BLANKLINE>
>>> remaining_items
[]
>>> ### A different input format:
>>> ### allocation for 3 agents, 3 objects
>>> alloc, remaining_items = three_quarters_MMS_allocation_algorithm([[2,3,1],[4,4,4],[2,5,3]])
>>> print(alloc)
Agent #0 gets {1} with value 3.
Agent #1 gets {0} with value 4.
Agent #2 gets {2} with value 3.
<BLANKLINE>
>>> remaining_items
[]
>>> ### detailed example: enters the loop and adjusts by alpha.
>>> ### different agents prefer different items
>>> ### 3 agents, 11 objects
>>> agents ={"Alice":{"x1":17.5,"x2":35,"x3":17.5,"x4":17.5,"x5":35.5,"x6":19,"x7":1,"x8":1,"x9":1,"x10":1,"x11":1},\
"Bruce":{"x1":1,"x2":35,"x3":17.5,"x4":17.5,"x5":17.5,"x6":19,"x7":1,"x8":1,"x9":1,"x10":35.5,"x11":1},\
"Carl":{"x1":35.5,"x2":35,"x3":1,"x4":17.5,"x5":17.5,"x6":1,"x7":17.5,"x8":1,"x9":19,"x10":1,"x11":1}}
>>> alloc,remaining_items = three_quarters_MMS_allocation_algorithm(agents)
>>> print(alloc.str_with_values(precision=7))
Alice gets {x2,x5} with value 70.5.
Bruce gets {x10,x6} with value 54.5.
Carl gets {x1,x9} with value 54.5.
<BLANKLINE>
>>> remaining_items
['x3', 'x4', 'x7', 'x8', 'x11']
"""
agents = agents_from(agents) # Handles various input formats
if items is None: items = list(agents[0].all_items())
# algo 7 - sort valuations from largest to smallest
ordered_agents = agents_conversion_to_ordered_instance(agents, items)
# algo 4
alloc_for_ordered_valuations = three_quarters_MMS_allocation(ordered_agents, items)
# Map the result to something like this: "{'Alice': ['x3'], 'Bruce': ['x2'], 'Carl': ['x1']}"
alloc_for_ordered_valuations=dict(alloc_for_ordered_valuations.map_agent_to_bundle())
# algo 8 - Get the real allocation
real_alloc, remaining_items=get_alpha_MMS_allocation_to_unordered_instance(agents, alloc_for_ordered_valuations, items)
return Allocation(agents=agents,bundles=real_alloc),remaining_items
####
#### Algorithm 1
####
def alpha_MMS_allocation(agents: List[AdditiveAgent], alpha: float, mms_values: List[float], items: List[str])->Allocation:
"""
Find alpha_MMS_allocation for the given agents and valuations.
:param agents: valuations of agents, valuation are ordered in ascending order
:param alpha: parameter for how much to approximate MMS allocation
:param mms_values: the MMS value of each agent, in order to normalize by them.
:param items: items names sorted from the highest valued to the lowest
:return allocation: alpha-mms Allocation to each agent.
>>> ### allocation for 1 agent, 1 object
>>> a = AdditiveAgent({"x": 2}, name="Alice")
>>> agents=[a]
>>> a1 = alpha_MMS_allocation(agents,0.5,[2],['x'])
>>> print(a1)
Alice gets {x} with value 2.
<BLANKLINE>
>>> ### allocation for 1 agent, 2 objects
>>> b = AdditiveAgent({"x": 2, "y": 1}, name="Blice")
>>> agents=[b]
>>> a1 = alpha_MMS_allocation(agents,0.6,[3],['x','y'])
>>> print(a1)
Blice gets {x} with value 2.
<BLANKLINE>
>>> ### allocation for 2 agents, 2 objects
>>> a = AdditiveAgent({"x": 2, "y": 1}, name="Alice")
>>> agents = [a, b]
>>> a1 = alpha_MMS_allocation(agents,1,[1,1],['x','y'])
>>> print(a1)
Alice gets {x} with value 2.
Blice gets {y} with value 1.
<BLANKLINE>
>>> ### allocation for 3 agents, 3 objects (low alpha)
>>> a = AdditiveAgent({"x1": 3, "x2": 2, "x3": 1}, name="A")
>>> b = AdditiveAgent({"x1": 4, "x2": 4, "x3": 4}, name="B")
>>> c = AdditiveAgent({"x1": 5, "x2": 2, "x3": 1}, name="C")
>>> agents=[a,b,c]
>>> a1 = alpha_MMS_allocation(agents,0.2,[1,4,1],['x1','x2','x3'])
>>> print(a1)
A gets {x1} with value 3.
B gets {x2} with value 4.
C gets {x3} with value 1.
<BLANKLINE>
>>> ### allocation with 3 agents, 8 objects
>>> a = AdditiveAgent({"x1": 11, "x2": 10, "x3": 8,"x4": 7, "x5": 6, "x6": 5,"x7": 3, "x8": 2}, name="A")
>>> b = AdditiveAgent({"x1": 100, "x2": 55, "x3": 50,"x4": 33, "x5": 12, "x6": 5,"x7": 4, "x8": 1}, name="B")
>>> c = AdditiveAgent({"x1": 15, "x2": 15, "x3": 12,"x4": 9, "x5": 8, "x6": 8,"x7": 7, "x8": 5}, name="C")
>>> agents=[a,b,c]
>>> a1 = alpha_MMS_allocation(agents,0.75,[17,77,25],['x1','x2','x3','x4','x5','x6','x7','x8'])
>>> print(a1)
A gets {x3,x4} with value 15.
B gets {x1} with value 100.
C gets {x2,x5} with value 23.
<BLANKLINE>
>>> ### allocation with 3 agents, 8 objects
>>> a = AdditiveAgent({"x1": 1, "x2": 1, "x3": 1,"x4": 1, "x5": 1, "x6": 1,"x7": 1, "x8": 1, "x9": 1, "x10": 1,"x11": 1, "x12": 1}, name="A")
>>> b = AdditiveAgent({"x1": 2, "x2": 2, "x3": 2,"x4": 2, "x5": 2, "x6": 2,"x7": 1, "x8": 1, "x9": 1, "x10": 1,"x11": 1, "x12": 1}, name="B")
>>> c = AdditiveAgent({"x1": 2, "x2": 2, "x3": 2,"x4": 2, "x5": 2, "x6": 2,"x7": 2, "x8": 2, "x9": 1, "x10": 1,"x11": 1, "x12": 1}, name="C")
>>> agents=[a,b,c]
>>> a1 = alpha_MMS_allocation(agents,0.9,[4,6,6],['x1','x2','x3','x4','x5','x6','x7','x8','x9','x10','x11','x12'])
>>> print(a1)
A gets {x1,x11,x12,x4} with value 4.
B gets {x10,x2,x3,x9} with value 6.
C gets {x5,x6,x7} with value 6.
<BLANKLINE>
"""
num_agents=len(agents)
# iterate in reverse so pop() does not shift the indices of unvisited entries
for i in range(num_agents - 1, -1, -1):
    if mms_values[i] == 0:
        mms_values.pop(i)
        agents.pop(i)
if len(agents)==0 or len(agents)>len(items):
return Allocation(agents=agents, bundles={})
normalized_agents = normalize(agents, mms_values, items)
alloc_initial_assignment = initial_assignment_alpha_MSS(normalized_agents, items, alpha)
if (len(normalized_agents) == 0):
    return combine_allocations([alloc_initial_assignment], agents)  # use function to get value of alloc
alloc_bag_filling = bag_filling_algorithm_alpha_MMS(items, normalized_agents, alpha)
return combine_allocations([alloc_initial_assignment,alloc_bag_filling], agents)
def willing_agent(agents:List[AdditiveAgent], bundle: List[str], threshold)->int:
"""
return the lowest index agent that will be satisfied with bundle (the value of bundle is >= threshold)
:param agents: valuations of agents
:param bundle: the bundle of item to be given
:param threshold: the minimum value the bundle must be worth for an agent to be willing to accept it.
:return index: the index of the lowest index agent that will be satisfied with the bundle
>>> #
>>> a = AdditiveAgent({"x": 0.5, "y": 0.3 ,"z":0.2}, name="Alice")
>>> b = AdditiveAgent({"x": 0.4, "y": 0.8 ,"z":0.2}, name="Blice")
>>> agents=[a,b]
>>> # empty bundle,insufficient - returns None
>>> willing_agent(agents,[], 0.5)
>>> # insufficient bundle - returns None
>>> willing_agent(agents,["z"], 0.5)
>>> # lowest index agent
>>> willing_agent(agents,["x","z"], 0.6)
0
>>> # first agent isn't satisfied
>>> willing_agent(agents,["x","y"],0.9)
1
"""
num_agents=len(agents)
for i in range(0,num_agents):
if agents[i].value(bundle)>=threshold:
return i
# returns none if no one is satisfied with the bundle or if len(agents) is 0
return None
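The selection rule above can be sketched independently of fairpy's agent classes, using plain dicts as additive valuations (hypothetical data and a hypothetical helper name):

```python
def willing_agent_sketch(valuations, bundle, threshold):
    # valuations: list of {item: value} dicts, one per agent
    for i, v in enumerate(valuations):
        if sum(v[item] for item in bundle) >= threshold:
            return i
    return None  # no agent is satisfied with the bundle

vals = [{"x": 0.5, "y": 0.3, "z": 0.2},
        {"x": 0.4, "y": 0.8, "z": 0.2}]
assert willing_agent_sketch(vals, ["x", "z"], 0.6) == 0
assert willing_agent_sketch(vals, ["x", "y"], 0.9) == 1
assert willing_agent_sketch(vals, ["z"], 0.5) is None
```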
####
#### Algorithm 2
####
def initial_assignment_alpha_MSS(agents: List[AdditiveAgent], items: List[str], alpha: float)->Allocation:
"""
Initial division for allocating agents according to their alpha-MMS.
:param agents: valuations of agents, normalized such that MMS=1 for all agents,
and valuation are ordered in ascending order
:param items: items names sorted from the highest valued to the lowest
:param alpha: parameter for how much to approximate MMS allocation.
:return Allocation: what has been allocated so far (in this function); items and agents are updated during the function
>>> ### allocation for 1 agent, 1 object (this pass!)
>>> a = AdditiveAgent({"x": 1}, name="Alice")
>>> agents=[a]
>>> a1 = initial_assignment_alpha_MSS(agents,['x'],0.75)
>>> print(a1, agents)
Alice gets {x} with value nan.
[]
>>> ### allocation for 1 agent, 2 object
>>> b = AdditiveAgent({"x": 0.5, "y": 0.4}, name="Blice")
>>> agents=[b]
>>> a1 = initial_assignment_alpha_MSS(agents,['x','y'],0.6)
>>> print(a1, agents)
Blice gets {x,y} with value nan.
[]
>>> ### allocation for 2 agent, 2 object
>>> a = AdditiveAgent({"x": 0.8, "y": 0.7}, name="Alice")
>>> b = AdditiveAgent({"x": 0.7, "y": 0.7}, name="Blice")
>>> agents=[a,b]
>>> a1= initial_assignment_alpha_MSS(agents,['x','y'],0.6)
>>> print(a1, agents)
Alice gets {x} with value nan.
Blice gets {y} with value nan.
[]
>>> ### allocation for 2 agent, 8 object
>>> a = AdditiveAgent({"x1": 0.647059, "x2": 0.588235, "x3": 0.470588, "x4": 0.411765, "x5": 0.352941, "x6": 0.294118, "x7": | |
import json
from lxml import etree as ET
import psimi
class Mif254Builder():
"""Builds PSI-MI XML (ver 2.5.4) representation of interaction record
as a single entry <entrySet>.
"""
def __init__(self):
self.ns="http://psi.hupo.org/mi/mif"
self.nsmap = {None: self.ns }
self.mif = "{%s}" % self.ns
self.dom = ET.Element( self.mif + "entrySet", nsmap = self.nsmap)
self._doc = {}
self.id = 1
self.dom.attrib['level']="2"
self.dom.attrib['version']="5"
self.dom.attrib['minorVersion']="4"
self._doc['entry'] = ET.SubElement( self.dom, self.mif + "entry" )
self._doc['source'] = None # <source> : required
self._doc['etlst'] = None # <experimentList> : optional
self._doc['irlst'] = None # <interactorList> : optional
self._doc['inlst'] = None # <interactionList> : required
self._doc['aelst'] = None # <AttributeList> : optional
@property
def source( self ):
"""<source> DOM element.
"""
return self._doc['source']
@property
def irlst( self ):
"""<interactorList> DOM element.
"""
return self._doc['irlst']
@property
def inlst( self ):
"""<interactionList> DOM element.
"""
return self._doc['inlst']
@property
def etlst( self ):
"""<experimentList> DOM element.
"""
return self._doc['etlst']
@property
def aelst( self ):
"""<attributeList> DOM element.
"""
return self._doc['aelst']
@property
def docStr( self ):
"""Serialized (PSI-MI XML ver 2.5.4) record representation
"""
return str(ET.tostring(self.dom, pretty_print=True),'utf-8')
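The namespace bookkeeping done in `__init__` can be sketched with the standard library's `ElementTree` (the class itself uses lxml; this is only an illustration of the document skeleton it emits):

```python
import xml.etree.ElementTree as ET

NS = "http://psi.hupo.org/mi/mif"
root = ET.Element("{%s}entrySet" % NS,
                  {"level": "2", "version": "5", "minorVersion": "4"})
ET.SubElement(root, "{%s}entry" % NS)

ET.register_namespace("", NS)  # serialize with a default namespace
xml = ET.tostring(root, encoding="unicode")
assert "entrySet" in xml and 'minorVersion="4"' in xml
```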
def buildRecord( self, rec):
"""Builds PSI-MI XML 2.5.4 record representation.
"""
#print("****")
if not isinstance( rec, psimi.record.Record ):
raise TypeError
# set source
self.addSource( rec.seo )
# set interactions
if rec.inlist is not None and len(rec.inlist) > 0:
for i in rec.inlist:
self.addInteraction( self._doc['inlst'], i)
# set attributes
#if rec.aelist is not None and len(rec.aelist) > 0:
# self.addAttList(self._doc['entry'], rec.aelst )
if rec.aelist is not None and len(rec.aelist) > 0:
for ai in rec.aelist:
self.addAttribute(self._doc['aelst'], ai )
def addSource( self, src):
"""Add psimi.Source representation.
"""
if not isinstance(src, psimi.source.Source):
raise TypeError
# names
if self._doc['source'] is None:
self._doc['source'] = ET.SubElement( self._doc['entry'],
self.mif + "source" )
self.addNames( self._doc['source'], src.label, src.name )
# bibref
if src.bibref is not None:
#print(src.bibref)
self.addBibref( self._doc['source'], src.bibref )
# xref
self.addXref( self._doc['source'], src.pxref, src.sxref)
#attribute
self.addAttList(self._doc['source'], src.attlst )
def addExperiment( self, parent, evid):
"""Adds psimi.Experiment representation to parent element DOM. If parent
is None it is initialized as an entry-level <experimentList>.
"""
if not isinstance(evid, psimi.evidence.Evidence):
raise TypeError
# only simple evidence supported
nexp = psimi.Experiment.fromEvidence( evid )
if parent is None:
parent = ET.SubElement( self._doc['etlst'],
self.mif + "experimentList" )
ev0 = ET.SubElement( parent, self.mif + "experimentDescription" )
ev0.attrib['id']= str(self.id)
self.id+=1
#Label: mcevoy-1998-1
#Name: mcevoy-1998-1
#Imex: IM-26977
#PXref: {'refType': 'identity', 'refTypeAc': 'MI:0356',
# 'ns': 'intact', 'nsAc': 'MI:0469', 'ac': 'EBI-21200115'}
#SXref: [{'refType': 'imex-primary', 'refTypeAc': 'MI:0662',
# 'ns': 'imex', 'nsAc': 'MI:0670', 'ac': 'IM-26977'}]
#IntMth: {'name': 'x-ray diffraction', 'label': 'x-ray diffraction',
# 'ns': 'psi-mi', 'nsAc': 'MI:0488', 'ac': 'MI:0114'}
#PrtMth: {'name': 'experimental particp', 'label': 'experimental particp',
# 'ns': 'psi-mi', 'nsAc': 'MI:0488', 'ac': 'MI:0661'}
#Exp Host: [{'name': 'In vitro', 'label': 'in vitro', 'ns': 'taxid', 'ac': '-1'}]
# names
self.addNames( ev0, nexp.label, nexp.name )
# bibref
if evid.pmid is not None:
br0 = ET.SubElement( ev0, self.mif + "bibref" )
self.addXref( br0, {'ns':'pubmed','nsAc':'MI:0446',
'ac':nexp.pmid,
'refType':'primary-reference',
'refTypeAc':'MI:0358'})
# bibref
if evid.bibref is not None:
self.addBibref( ev0, nexp.bibref )
# xref
self.addXref( ev0, nexp.pxref, nexp.sxref)
#hostOrganismList
# hostOrganism
if nexp.ehost is not None:
hol = ET.SubElement( ev0, self.mif + "hostOrganismList" )
for ho in nexp.ehost:
self.addOrganism( hol, ho, ename='hostOrganism')
#interactionDetectionMethod
if nexp.intMth is not None:
self.addCvTerm( ev0, nexp.intMth, ename='interactionDetectionMethod')
#participantIdentificationMethod
if nexp.prtMth is not None:
self.addCvTerm( ev0, nexp.prtMth, ename='participantIdentificationMethod')
#attributeList
self.addAttList(ev0, nexp.attrib )
def addParticipant( self, parent, prt):
"""Adds psimi.Participant representation to parent element DOM.
"""
if not isinstance(prt, psimi.participant.Participant):
raise TypeError
if parent is None:
    # MissingParentError is not defined anywhere in this module;
    # raise a built-in exception instead
    raise ValueError("addParticipant requires a parent element")
pa0 = ET.SubElement( parent, self.mif + "participant" )
pa0.attrib['id']= str(self.id)
self.id+=1
# names: shortLabel & fullName
label = prt.label
if label == 'N/A':
label = None
self.addNames( pa0, label, prt.name )
# xref: primaryRef & secondaryRef(s)
self.addXref( pa0, prt.pxref, prt.sxref)
#interactor
self.addInteractor( pa0, prt.interactor)
#participantIdentificationMethodList
# participantIdentificationMethod
if prt.pidmth is not None and len(prt.pidmth) > 0:
pa1 = ET.SubElement( pa0, self.mif + "participantIdentificationMethodList" )
for pim in prt.pidmth:
pa2 = self.addCvTerm( pa1, pim, ename='participantIdentificationMethod')
#biological role
if prt.brole is not None:
br0 = self.addCvTerm( pa0, prt.brole, ename='biologicalRole')
#experimentalRoleList
# experimentalRole
if prt.erole is not None and len(prt.erole) > 0:
ex0 = ET.SubElement( pa0, self.mif + "experimentalRoleList" )
for ero in prt.erole:
ero1 = self.addCvTerm( ex0, ero, ename='experimentalRole')
#experimentalPreparationList
# experimentalPreparation
if prt.eprep is not None and len(prt.eprep) > 0:
ep0 = ET.SubElement( pa0, self.mif + "experimentalPreparationList" )
for epo in prt.eprep:
ep1 = self.addCvTerm( ep0, epo, ename='experimentalPreparation')
#featureList
# feature
if prt.frolst is not None:
fr0 = ET.SubElement( pa0, self.mif + "featureList" )
for fro in prt.frolst:
fr1 = self.addFeature( fr0, fro )
#hostOrganismList
# hostOrganism
if prt.ehost is not None:
hol = ET.SubElement( pa0, self.mif + "hostOrganismList" )
for ho in prt.ehost:
self.addOrganism( hol, ho, ename='hostOrganism')
#attributeList
self.addAttList(pa0, prt.attrib )
def addInteraction( self, parent, i11n):
"""Adds psimi.Interaction representation to parent element DOM.
If parent is None it is initialized as an entry-level <interactionList>.
"""
if not isinstance(i11n, psimi.interaction.Interaction):
raise TypeError
if parent is None:
parent = ET.SubElement( self._doc['entry'],
self.mif + "interactionList" )
ev0 = ET.SubElement( parent, self.mif + "interaction" )
ev0.attrib['id']= str(self.id)
self.id+=1
# names: shortLabel & fullName
self.addNames( ev0, i11n.label, i11n.name )
# xref: primaryRef & secondaryRef(s)
imexid = self.addXref( ev0, i11n.pxref, i11n.sxref)
if imexid is not None:
ev0.attrib['imexId']= str(imexid)
# experimentList
ev2 = ET.SubElement( ev0, self.mif + "experimentList" )
for evo in i11n.evolist:
self.addExperiment( ev2, evo)
# participantList
ev3 = ET.SubElement( ev0, self.mif + "participantList" )
for pto in i11n.ptolist:
self.addParticipant( ev3, pto)
# interaction Type
self.addCvTerm( ev0, i11n.type, ename='interactionType')
if i11n.modelled is not None and i11n.modelled.lower() != 'false':
ev4 = ET.SubElement( ev0, self.mif + "modelled" )
ev4.text=i11n.modelled
if i11n.intramol is not None and i11n.intramol.lower() != 'false':
ev5 = ET.SubElement( ev0, self.mif + "intraMolecular" )
ev5.text=i11n.intramol
if i11n.negative is not None and i11n.negative.lower() != 'false':
ev5 = ET.SubElement( ev0, self.mif + "negative" )
ev5.text=i11n.negative
# attributes
self.addAttList( ev0, i11n.attrib )
def addInteractor( self, parent, i10r, seq=True, att=True ):
"""Adds psimi.Interactor representation to parent element DOM.
If parent is None it is initialized as an entry-level <interactorList>.
"""
if not isinstance(i10r, psimi.interactor.Interactor):
raise TypeError
if parent is None:
parent = ET.SubElement( self._doc['entry'],
self.mif + "interactorList" )
#print(self.__dict__)
#print(i10r.raw.keys())
ir0 = ET.SubElement( parent, self.mif + "interactor" )
ir0.attrib['id']= str(self.id)
self.id+=1
# names: shortLabel & fullName
self.addNames( ir0, i10r.label, i10r.name )
# xref: primaryRef & secondaryRef(s)
self.addXref( ir0, i10r.pxref, i10r.sxref)
# molecule type
self.addCvTerm( ir0, i10r.type, ename='interactorType' )
# organism
if i10r.species is not None:
self.addOrganism( ir0, i10r.species )
if seq is True and i10r.sequence is not None:
sq0 = ET.SubElement( ir0, self.mif + "sequence" )
sq0.text = i10r.sequence
# attribute list
if att is True:
pass
def addXref( self, parent, pxref=None, sxref=None):
"""Add cross references to parent element DOM.
"""
imexId = None
# xref: primaryRef & secondaryRef(s)
if pxref is not None or (sxref is not None and len(sxref) > 0):
ir0 = ET.SubElement( parent, self.mif + "xref" )
#{'ver': 'SP_26', 'ns': 'uniprotkb', 'ac': 'P0AE67', 'nsAc': 'MI:0486'}
if pxref is not None:
if isinstance(pxref, list):
pxref = pxref[0]
ir1 = ET.SubElement( ir0, self.mif + "primaryRef" )
if 'ac' in pxref.keys():
ir1.attrib['id']=pxref['ac']
if 'ns' in pxref.keys():
ir1.attrib['db']=pxref['ns']
if 'nsAc' in pxref.keys():
ir1.attrib['dbAc']=pxref['nsAc']
if 'ver' in pxref.keys():
    ir1.attrib['version'] = pxref['ver']  # PSI-MI XML 2.5 names this attribute 'version'
if 'refType' in pxref.keys():
ir1.attrib['refType']=pxref['refType']
if 'refTypeAc' in pxref.keys():
ir1.attrib['refTypeAc']=pxref['refTypeAc']
#<secondaryRef id="IM-26977-1" db="imex" dbAc="MI:0670" refType="imex-primary" refTypeAc="MI:0662"/>
if 'ns' in pxref.keys() and 'ac' in pxref.keys() and 'refType' in pxref.keys():
if pxref['ns'] == 'imex' and pxref['refType'] == 'imex-primary':
imexId = pxref['ac']
if 'ns' in pxref.keys() and 'ac' in pxref.keys() and 'refType' not in pxref.keys():
if pxref['ns'] == 'dip' and pxref['ac'] is not None and len( pxref['ac']) > 0:
ir1.attrib['refType']='identity'
ir1.attrib['refTypeAc']='MI:0356'
if sxref is not None and len(sxref) | |
#Author-<NAME>
#Description - Generates a guitar
import adsk.core, adsk.fusion, traceback, math
from math import sqrt
from .defaultParameters import defaultParameters
# Globals
app = adsk.core.Application.cast(None)
ui = adsk.core.UserInterface.cast(None)
units = ''
#Default Inputs
defaultStandard = adsk.core.DropDownCommandInput.cast(None)
defaultFretboardStyle = adsk.core.DropDownCommandInput.cast(None)
defaultPickupNeck = adsk.core.DropDownCommandInput.cast(None)
defaultPickupMiddle = adsk.core.DropDownCommandInput.cast(None)
defaultPickupBridge = adsk.core.DropDownCommandInput.cast(None)
defaultFretNumber = adsk.core.ValueCommandInput.cast(None)
defaultScaleLength = adsk.core.ValueCommandInput.cast(None)
defaultNutLength = adsk.core.ValueCommandInput.cast(None)
defaultEndLength = adsk.core.ValueCommandInput.cast(None)
defaultRadius = adsk.core.ValueCommandInput.cast(None)
defaultNutRadius = adsk.core.ValueCommandInput.cast(None)
defaultEndRadius = adsk.core.ValueCommandInput.cast(None)
defaultEndCurve = adsk.core.ValueCommandInput.cast(None)
defaultfretboardHeight = adsk.core.ValueCommandInput.cast(None)
defaultFilletRadius = adsk.core.ValueCommandInput.cast(None)
defaultTangWidth = adsk.core.ValueCommandInput.cast(None)
defaultTangDepth = adsk.core.ValueCommandInput.cast(None)
defaultBlindFrets = adsk.core.ValueCommandInput.cast(None)
defaultNutSlotWidth = adsk.core.ValueCommandInput.cast(None)
defaultNutSlotDepth = adsk.core.ValueCommandInput.cast(None)
defaultMarkerDiameter = adsk.core.ValueCommandInput.cast(None)
defaultMarkerDepth = adsk.core.ValueCommandInput.cast(None)
defaultMarkerSpacing = adsk.core.ValueCommandInput.cast(None)
defaultFretboardLength = adsk.core.ValueCommandInput.cast(None)
defaultGuitarLength = adsk.core.ValueCommandInput.cast(None)
defaultBodyWidth = adsk.core.ValueCommandInput.cast(None)
defaultBodyThickness = adsk.core.ValueCommandInput.cast(None)
defaultBodyLength = adsk.core.ValueCommandInput.cast(None)
defaultNeckLength = adsk.core.ValueCommandInput.cast(None)
defaultNeckWidth = adsk.core.ValueCommandInput.cast(None)
defaultNeckThickness = adsk.core.ValueCommandInput.cast(None)
defaultHeadstockLength = adsk.core.ValueCommandInput.cast(None)
defaultHeadstockWidth = adsk.core.ValueCommandInput.cast(None)
defaultHeadstockThickness = adsk.core.ValueCommandInput.cast(None)
defaultBridgeStringSpacing = adsk.core.ValueCommandInput.cast(None)
defaultNutStringSpacing = adsk.core.ValueCommandInput.cast(None)
defaultNutToPost = adsk.core.ValueCommandInput.cast(None)
defaultMachinePostHoleDiameter = adsk.core.ValueCommandInput.cast(None)
defaultMachinePostDiameter = adsk.core.ValueCommandInput.cast(None)
defaultMachinePostHoleSpacing = adsk.core.ValueCommandInput.cast(None)
defaultStringCount = adsk.core.ValueCommandInput.cast(None)
defaultfirstFretThickness = adsk.core.ValueCommandInput.cast(None)
defaulttwelfthfretThickness = adsk.core.ValueCommandInput.cast(None)
defaultHeadstockStyle = adsk.core.DropDownCommandInput.cast(None)
defaultNeckSpacing = adsk.core.ValueCommandInput.cast(None)
defaultBridgeSpacing = adsk.core.ValueCommandInput.cast(None)
defaultSingleCoilLength = adsk.core.ValueCommandInput.cast(None)
defaultSingleCoilWidth = adsk.core.ValueCommandInput.cast(None)
defaultSingleCoilDepth = adsk.core.ValueCommandInput.cast(None)
defaultHumbuckerLength = adsk.core.ValueCommandInput.cast(None)
defaultHumbuckerWidth = adsk.core.ValueCommandInput.cast(None)
defaultHumbuckerDepth = adsk.core.ValueCommandInput.cast(None)
defaultHumbuckerFillet = adsk.core.ValueCommandInput.cast(None)
defaultPickupCavityMountLength = adsk.core.ValueCommandInput.cast(None)
defaultPickupCavityMountTabWidth = adsk.core.ValueCommandInput.cast(None)
defaultBridgePickupAngle = adsk.core.ValueCommandInput.cast(None)
handlers = []
def run(context):
try:
global app, ui
app = adsk.core.Application.get()
ui = app.userInterface
# Create a command definition and add a button to the CREATE panel.
cmdDef = ui.commandDefinitions.addButtonDefinition('adskFretboardPythonAddIn', 'Guitar Engine [Beta] (v2020.06.13)', 'Creates a fretboard component\n\n', 'Resources/Icons')
createPanel = ui.allToolbarPanels.itemById('SolidCreatePanel')
fretboardButton = createPanel.controls.addCommand(cmdDef)
# Connect to the command created event.
onCommandCreated = FretboardCommandCreatedHandler()
cmdDef.commandCreated.add(onCommandCreated)
handlers.append(onCommandCreated)
# Make the button available in the panel.
fretboardButton.isPromotedByDefault = True
fretboardButton.isPromoted = True
if not context['IsApplicationStartup']:
ui.messageBox('<b>Guitar Engine [Beta] (v2020.06.13)</b> has been added to the <i>SOLID</i> tab of the <i>DESIGN</i> workspace.<br><br><div align="center"><b>This is a beta version.</div>')
except:
if ui:
ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
def stop(context):
try:
createPanel = ui.allToolbarPanels.itemById('SolidCreatePanel')
fretboardButton = createPanel.controls.itemById('adskFretboardPythonAddIn')
if fretboardButton:
fretboardButton.deleteMe()
cmdDef = ui.commandDefinitions.itemById('adskFretboardPythonAddIn')
if cmdDef:
cmdDef.deleteMe()
except:
if ui:
ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
def getCommandInputValue(commandInput, unitType):
try:
valCommandInput = adsk.core.ValueCommandInput.cast(commandInput)
if not valCommandInput:
return (False, 0)
# Verify that the expression is valid.
design = adsk.fusion.Design.cast(app.activeProduct)
unitsMgr = design.unitsManager
userParams = design.userParameters
if unitsMgr.isValidExpression(valCommandInput.expression, unitType):
value = unitsMgr.evaluateExpression(valCommandInput.expression, unitType)
return (True, value)
else:
return (False, 0)
except:
if ui:
ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
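getCommandInputValue() above follows a validate-then-evaluate pattern: check the expression against the units manager first, then return an (ok, value) tuple so callers never need a try/except around parsing. A minimal, Fusion-free sketch of the same pattern (the unit table and the whitespace-based parser are illustrative stand-ins, not the Fusion `UnitsManager` API):

```python
# Stand-in unit table; Fusion's UnitsManager does the real conversion.
UNIT_TO_CM = {"mm": 0.1, "cm": 1.0, "in": 2.54, "ft": 30.48}

def get_value(expression, default_unit="cm"):
    """Return (True, value_in_cm) if `expression` parses, else (False, 0)."""
    parts = expression.split()
    try:
        number = float(parts[0])
    except (ValueError, IndexError):
        return (False, 0)
    unit = parts[1] if len(parts) > 1 else default_unit
    if unit not in UNIT_TO_CM:
        return (False, 0)
    return (True, number * UNIT_TO_CM[unit])
```

The (ok, value) tuple mirrors the `(True, value)` / `(False, 0)` returns in the function above, so a caller can write `ok, radius = get_value(expr)` and branch on `ok`.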
class FretboardCommandCreatedHandler(adsk.core.CommandCreatedEventHandler):
def __init__(self):
super().__init__()
def notify(self, args):
try:
eventArgs = adsk.core.CommandCreatedEventArgs.cast(args)
# Verify that a Fusion design is active.
design = adsk.fusion.Design.cast(app.activeProduct)
if not design:
ui.messageBox('A Fusion design must be active when invoking this command.')
return
defaultUnits = design.unitsManager.defaultLengthUnits
userParams = design.userParameters
# Determine whether to use inches or millimeters as the initial default.
global units
if defaultUnits == 'in' or defaultUnits == 'ft':
units = 'in'
else:
units = 'mm'
# Define the default values and get the previous values from the attributes.
if units == 'in':
standard = 'Imperial'
else:
standard = 'Metric'
standardAttrib = design.attributes.itemByName('Fretboard', 'standard')
if standardAttrib:
standard = standardAttrib.value
if standard == 'Imperial':
units = 'in'
else:
units = 'mm'
fretNumber = str(defaultParameters.fretNumber)
fretNumberAttrib = design.attributes.itemByName('Fretboard', 'fretNumber')
if fretNumberAttrib:
fretNumber = fretNumberAttrib.value
scaleLength = str(defaultParameters.scaleLength * defaultParameters.userUnit)
scaleLengthAttrib = design.attributes.itemByName('Fretboard', 'scaleLength')
if scaleLengthAttrib:
scaleLength = scaleLengthAttrib.value
# Equal-temperament fret spacing: fret n sits
# scaleLength - scaleLength / 2**(n/12) from the nut. After the loop,
# fretDistance holds the position of fret fretNumber + 1 (the board end).
for fretNum in range(1, int(fretNumber) + 2):
fretDistance = float(scaleLength) - float(scaleLength) / (2 ** (fretNum / 12.0))
# Round the board length up to the nearest 1/8 unit.
fretboardLength = str(math.ceil(fretDistance * 8) / 8)
fretboardLengthAttrib = design.attributes.itemByName('Fretboard', 'fretboardLength')
if fretboardLengthAttrib:
fretboardLength = fretboardLengthAttrib.value
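The loop above implements the standard equal-temperament fret rule: fret n sits `scaleLength - scaleLength / 2**(n/12)` from the nut, which puts fret 12 at exactly half the scale length. A standalone sketch of the rule (the 25.5 scale length is just an illustrative value):

```python
def fret_distance(scale_length, fret_num):
    """Distance from the nut to fret `fret_num` (equal temperament)."""
    return scale_length - scale_length / (2 ** (fret_num / 12.0))

scale = 25.5  # a common scale length, in inches
positions = [fret_distance(scale, n) for n in range(1, 23)]
```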
nutLength = str(defaultParameters.nutLength * defaultParameters.userUnit)
nutLengthAttrib = design.attributes.itemByName('Fretboard', 'nutLength')
if nutLengthAttrib:
nutLength = nutLengthAttrib.value
endLength = str(defaultParameters.endLength * defaultParameters.userUnit)
endLengthAttrib = design.attributes.itemByName('Fretboard', 'endLength')
if endLengthAttrib:
endLength = endLengthAttrib.value
radius = str(defaultParameters.radius * defaultParameters.userUnit)
radiusAttrib = design.attributes.itemByName('Fretboard', 'radius')
if radiusAttrib:
radius = radiusAttrib.value
nutRadius = str(defaultParameters.nutRadius * defaultParameters.userUnit)
nutRadiusAttrib = design.attributes.itemByName('Fretboard', 'nutRadius')
if nutRadiusAttrib:
nutRadius = nutRadiusAttrib.value
endRadius = str(defaultParameters.endRadius * defaultParameters.userUnit)
endRadiusAttrib = design.attributes.itemByName('Fretboard', 'endRadius')
if endRadiusAttrib:
endRadius = endRadiusAttrib.value
fretboardHeight = str(defaultParameters.fretboardHeight * defaultParameters.userUnit)
fretboardHeightAttrib = design.attributes.itemByName('Fretboard', 'fretboardHeight')
if fretboardHeightAttrib:
fretboardHeight = fretboardHeightAttrib.value
filletRadius = str(defaultParameters.filletRadius * defaultParameters.userUnit)
filletRadiusAttrib = design.attributes.itemByName('Fretboard', 'filletRadius')
if filletRadiusAttrib:
filletRadius = filletRadiusAttrib.value
endCurve = str(defaultParameters.endCurve * defaultParameters.userUnit)
endCurveAttrib = design.attributes.itemByName('Fretboard', 'endCurve')
if endCurveAttrib:
endCurve = endCurveAttrib.value
tangWidth = str(defaultParameters.tangWidth * defaultParameters.userUnit)
tangWidthAttrib = design.attributes.itemByName('Fretboard', 'tangWidth')
if tangWidthAttrib:
tangWidth = tangWidthAttrib.value
tangDepth = str(defaultParameters.tangDepth * defaultParameters.userUnit)
tangDepthAttrib = design.attributes.itemByName('Fretboard', 'tangDepth')
if tangDepthAttrib:
tangDepth = tangDepthAttrib.value
blindFrets = str(defaultParameters.blindFrets * defaultParameters.userUnit)
blindFretsAttrib = design.attributes.itemByName('Fretboard', 'blindFrets')
if blindFretsAttrib:
blindFrets = blindFretsAttrib.value
nutSlotWidth = str(defaultParameters.nutSlotWidth * defaultParameters.userUnit)
nutSlotWidthAttrib = design.attributes.itemByName('Fretboard', 'nutSlotWidth')
if nutSlotWidthAttrib:
nutSlotWidth = nutSlotWidthAttrib.value
nutSlotDepth = str(defaultParameters.nutSlotDepth * defaultParameters.userUnit)
nutSlotDepthAttrib = design.attributes.itemByName('Fretboard', 'nutSlotDepth')
if nutSlotDepthAttrib:
nutSlotDepth = nutSlotDepthAttrib.value
markerDiameter = str(defaultParameters.markerDiameter * defaultParameters.userUnit)
markerDiameterAttrib = design.attributes.itemByName('Fretboard', 'markerDiameter')
if markerDiameterAttrib:
markerDiameter = markerDiameterAttrib.value
markerDepth = str(defaultParameters.markerDepth * defaultParameters.userUnit)
markerDepthAttrib = design.attributes.itemByName('Fretboard', 'markerDepth')
if markerDepthAttrib:
markerDepth = markerDepthAttrib.value
markerSpacing = str(defaultParameters.markerSpacing * defaultParameters.userUnit)
markerSpacingAttrib = design.attributes.itemByName('Fretboard', 'markerSpacing')
if markerSpacingAttrib:
markerSpacing = markerSpacingAttrib.value
guitarLength = str(defaultParameters.guitarLength * defaultParameters.userUnit)
guitarLengthAttrib = design.attributes.itemByName('Fretboard', 'guitarLength')
if guitarLengthAttrib:
guitarLength = guitarLengthAttrib.value
bodyWidth = str(defaultParameters.bodyWidth * defaultParameters.userUnit)
bodyWidthAttrib = design.attributes.itemByName('Fretboard', 'bodyWidth')
if bodyWidthAttrib:
bodyWidth = bodyWidthAttrib.value
bodyThickness = str(defaultParameters.bodyThickness * defaultParameters.userUnit)
bodyThicknessAttrib = design.attributes.itemByName('Fretboard', 'bodyThickness')
if bodyThicknessAttrib:
bodyThickness = bodyThicknessAttrib.value
bodyLength = str(defaultParameters.bodyLength * defaultParameters.userUnit)
bodyLengthAttrib = design.attributes.itemByName('Fretboard', 'bodyLength')
if bodyLengthAttrib:
bodyLength = bodyLengthAttrib.value
firstFretThickness = str(defaultParameters.firstFretThickness * defaultParameters.userUnit)
firstFretThicknessAttrib = design.attributes.itemByName('Fretboard', 'firstFretThickness')
if firstFretThicknessAttrib:
firstFretThickness = firstFretThicknessAttrib.value
twelfthfretThickness = str(defaultParameters.twelfthfretThickness * defaultParameters.userUnit)
twelfthfretThicknessAttrib = design.attributes.itemByName('Fretboard', 'twelfthfretThickness')
if twelfthfretThicknessAttrib:
twelfthfretThickness = twelfthfretThicknessAttrib.value
neckThickness = str(defaultParameters.neckThickness * defaultParameters.userUnit)
neckThicknessAttrib = design.attributes.itemByName('Fretboard', 'neckThickness')
if neckThicknessAttrib:
neckThickness = neckThicknessAttrib.value
headstockLength = str(defaultParameters.headstockLength * defaultParameters.userUnit)
headstockLengthAttrib = design.attributes.itemByName('Fretboard', 'headstockLength')
if headstockLengthAttrib:
headstockLength = headstockLengthAttrib.value
headstockWidth = str(defaultParameters.headstockWidth * defaultParameters.userUnit)
headstockWidthAttrib = design.attributes.itemByName('Fretboard', 'headstockWidth')
if headstockWidthAttrib:
headstockWidth = headstockWidthAttrib.value
headstockThickness = str(defaultParameters.headstockThickness * defaultParameters.userUnit)
headstockThicknessAttrib = design.attributes.itemByName('Fretboard', 'headstockThickness')
if headstockThicknessAttrib:
headstockThickness = headstockThicknessAttrib.value
bridgeStringSpacing = str(defaultParameters.bridgeStringSpacing * defaultParameters.userUnit)
bridgeStringSpacingAttrib = design.attributes.itemByName('Fretboard', 'bridgeStringSpacing')
if bridgeStringSpacingAttrib:
bridgeStringSpacing = bridgeStringSpacingAttrib.value
nutStringSpacing = str(defaultParameters.nutStringSpacing * defaultParameters.userUnit)
nutStringSpacingAttrib = design.attributes.itemByName('Fretboard', 'nutStringSpacing')
if nutStringSpacingAttrib:
nutStringSpacing = nutStringSpacingAttrib.value
nutToPost = str(defaultParameters.nutToPost * defaultParameters.userUnit)
nutToPostAttrib = design.attributes.itemByName('Fretboard', 'nutToPost')
if nutToPostAttrib:
nutToPost = nutToPostAttrib.value
stringCount = str(defaultParameters.stringCount)
stringCountAttrib = design.attributes.itemByName('Fretboard', 'stringCount')
if stringCountAttrib:
stringCount = stringCountAttrib.value
machinePostHoleDiameter = str(defaultParameters.machinePostHoleDiameter * defaultParameters.userUnit)
machinePostHoleDiameterAttrib = design.attributes.itemByName('Fretboard', 'machinePostHoleDiameter')
if machinePostHoleDiameterAttrib:
machinePostHoleDiameter = machinePostHoleDiameterAttrib.value
machinePostDiameter = str(defaultParameters.machinePostDiameter * defaultParameters.userUnit)
machinePostDiameterAttrib = design.attributes.itemByName('Fretboard', 'machinePostDiameter')
if machinePostDiameterAttrib:
machinePostDiameter = machinePostDiameterAttrib.value
machinePostHoleSpacing = str(defaultParameters.machinePostHoleSpacing * defaultParameters.userUnit)
machinePostHoleSpacingAttrib = design.attributes.itemByName('Fretboard', 'machinePostHoleSpacing')
if machinePostHoleSpacingAttrib:
machinePostHoleSpacing = machinePostHoleSpacingAttrib.value
headstockStyle = 'Straight In-line'
headstockStyleAttrib = design.attributes.itemByName('Fretboard', 'headstockStyle')
if headstockStyleAttrib:
headstockStyle = headstockStyleAttrib.value
fretboardStyle = 'Straight Radius'
fretboardStyleAttrib = design.attributes.itemByName('Fretboard', 'fretboardStyle')
if fretboardStyleAttrib:
fretboardStyle = fretboardStyleAttrib.value
pickupNeck = 'Single-Coil'
pickupNeckAttrib = design.attributes.itemByName('pickups', 'pickupNeck')
if pickupNeckAttrib:
pickupNeck = pickupNeckAttrib.value
pickupMiddle = 'Single-Coil'
pickupMiddleAttrib = design.attributes.itemByName('pickups', 'pickupMiddle')
if pickupMiddleAttrib:
pickupMiddle = pickupMiddleAttrib.value
pickupBridge = 'Single-Coil'
pickupBridgeAttrib = design.attributes.itemByName('pickups', 'pickupBridge')
if pickupBridgeAttrib:
pickupBridge = pickupBridgeAttrib.value
neckSpacing = str(defaultParameters.neckSpacing * defaultParameters.userUnit)
neckSpacingAttrib = design.attributes.itemByName('Fretboard', 'neckSpacing')
if neckSpacingAttrib:
neckSpacing = neckSpacingAttrib.value
bridgeSpacing = str(defaultParameters.bridgeSpacing * defaultParameters.userUnit)
bridgeSpacingAttrib = design.attributes.itemByName('Fretboard', 'bridgeSpacing')
if bridgeSpacingAttrib:
bridgeSpacing = bridgeSpacingAttrib.value
singleCoilLength = str(defaultParameters.singleCoilLength * defaultParameters.userUnit)
singleCoilLengthAttrib = design.attributes.itemByName('Fretboard', 'singleCoilLength')
if singleCoilLengthAttrib:
singleCoilLength = singleCoilLengthAttrib.value
singleCoilWidth = str(defaultParameters.singleCoilWidth * defaultParameters.userUnit)
singleCoilWidthAttrib = design.attributes.itemByName('Fretboard', 'singleCoilWidth')
if singleCoilWidthAttrib:
singleCoilWidth = singleCoilWidthAttrib.value
singleCoilDepth = str(defaultParameters.singleCoilDepth * defaultParameters.userUnit)
singleCoilDepthAttrib = design.attributes.itemByName('Fretboard', 'singleCoilDepth')
if singleCoilDepthAttrib:
singleCoilDepth = singleCoilDepthAttrib.value
humbuckerLength = str(defaultParameters.humbuckerLength * defaultParameters.userUnit)
humbuckerLengthAttrib = design.attributes.itemByName('Fretboard', 'humbuckerLength')
if humbuckerLengthAttrib:
humbuckerLength = humbuckerLengthAttrib.value
humbuckerWidth = str(defaultParameters.humbuckerWidth * defaultParameters.userUnit)
humbuckerWidthAttrib = design.attributes.itemByName('Fretboard', 'humbuckerWidth')
if humbuckerWidthAttrib:
humbuckerWidth = humbuckerWidthAttrib.value
humbuckerDepth = str(defaultParameters.humbuckerDepth * defaultParameters.userUnit)
humbuckerDepthAttrib = design.attributes.itemByName('Fretboard', 'humbuckerDepth')
if humbuckerDepthAttrib:
humbuckerDepth = humbuckerDepthAttrib.value
humbuckerFillet = str(defaultParameters.humbuckerFillet * defaultParameters.userUnit)
humbuckerFilletAttrib = design.attributes.itemByName('Fretboard', 'humbuckerFillet')
if humbuckerFilletAttrib:
humbuckerFillet = humbuckerFilletAttrib.value
pickupCavityMountLength = str(defaultParameters.pickupCavityMountLength * defaultParameters.userUnit)
pickupCavityMountLengthAttrib = design.attributes.itemByName('Fretboard', 'pickupCavityMountLength')
if pickupCavityMountLengthAttrib:
pickupCavityMountLength = pickupCavityMountLengthAttrib.value
pickupCavityMountTabWidth = str(defaultParameters.pickupCavityMountTabWidth * defaultParameters.userUnit)
pickupCavityMountTabWidthAttrib = design.attributes.itemByName('Fretboard', 'pickupCavityMountTabWidth')
if pickupCavityMountTabWidthAttrib:
pickupCavityMountTabWidth = pickupCavityMountTabWidthAttrib.value
bridgePickupAngle = str(defaultParameters.bridgePickupAngle)
bridgePickupAngleAttrib = design.attributes.itemByName('Fretboard', 'bridgePickupAngle')
if bridgePickupAngleAttrib:
bridgePickupAngle = bridgePickupAngleAttrib.value
global defaultStandard, defaultFretNumber, defaultScaleLength, defaultNutLength, defaultEndLength, createFlatFretboard, defaultRadius, defaultNutRadius, defaultEndRadius, defaultfretboardHeight, \
createFilletRadius, defaultFilletRadius, createEndCurve, extensionVisibility, defaultEndCurve, createFretCuts, defaultTangWidth, defaultTangDepth, createBlindFrets, defaultBlindFrets, \
defaultNutSlotWidth, defaultPickupNeck, defaultPickupMiddle, defaultPickupBridge, defaultNutSlotDepth, createFretMarkers, defaultMarkerDiameter, defaultMarkerDepth, defaultMarkerSpacing, \
defaultFretboardLength, defaultFretboardStyle, defaultGuitarLength, defaultBodyWidth, defaultBodyThickness, defaultBodyLength, defaultNeckLength, defaultNeckWidth, defaultNeckThickness, \
defaultHeadstockLength, defaultHeadstockWidth, defaultHeadstockThickness, defaultBridgeStringSpacing, defaultNutStringSpacing, defaultNutToPost, defaultMachinePostHoleDiameter, createBlanks, \
defaultMachinePostDiameter, defaultMachinePostHoleSpacing, defaultStringCount, defaultfirstFretThickness, defaulttwelfthfretThickness, createDimensions, defaultHeadstockStyle, defaultNeckSpacing, \
defaultBridgeSpacing, defaultSingleCoilLength, defaultSingleCoilWidth, defaultSingleCoilDepth, defaultHumbuckerLength, defaultHumbuckerWidth, defaultHumbuckerDepth, defaultHumbuckerFillet, \
defaultPickupCavityMountLength, defaultPickupCavityMountTabWidth, defaultBridgePickupAngle, createOnlyFretboard, errorMessage
cmd = eventArgs.command
cmd.isExecutedWhenPreEmpted = False
inputs = cmd.commandInputs
cmd.helpFile = 'help.html'
# Set the size of the dialog.
cmd.setDialogInitialSize(275, 800)
cmd.setDialogMinimumSize(275, 800)
cmd.okButtonText = 'Create Guitar'
# Create a tab input.
tabCmdInput1 = inputs.addTabCommandInput('general', 'General')
tab1ChildInputs = tabCmdInput1.children
imgInput = tab1ChildInputs.addImageCommandInput('fretboardImage', '', 'Resources/guitarEngine.png')
imgInput.isFullWidth = True
defaultStandard = | |
# Repository: nens/threedigrid
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import os
import unittest
import tempfile
import shutil
import numpy as np
from osgeo import ogr
from threedigrid.admin.gridadmin import GridH5Admin
from threedigrid.admin.nodes.exporters import CellsOgrExporter
from threedigrid.admin.nodes.exporters import NodesOgrExporter
from threedigrid.admin.lines.exporters import LinesOgrExporter
from threedigrid.admin.breaches.exporters import BreachesOgrExporter
from threedigrid.admin.constants import SUBSET_1D_ALL
from threedigrid.admin.constants import SUBSET_2D_OPEN_WATER
from threedigrid.admin.constants import NO_DATA_VALUE
from six.moves import range
test_file_dir = os.path.join(os.getcwd(), "tests/test_files")
# the testfile is a copy of the v2_bergermeer gridadmin file
grid_admin_h5_file = os.path.join(test_file_dir, "gridadmin.h5")
class GridAdminTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
def test_get_from_meta(self):
self.assertIsNotNone(self.parser.get_from_meta("n2dtot"))
self.assertIsNone(self.parser.get_from_meta("doesnotexist"))
def test_get_extent_subset_onedee(self):
extent_1D = self.parser.get_extent_subset(subset_name=SUBSET_1D_ALL)
# should contain values
self.assertTrue(np.any(extent_1D != NO_DATA_VALUE))
def test_get_extent_subset_reproject(self):
extent_1D = self.parser.get_extent_subset(subset_name=SUBSET_1D_ALL)
extent_1D_proj = self.parser.get_extent_subset(
subset_name=SUBSET_1D_ALL, target_epsg_code="4326"
)
self.assertTrue(np.all(extent_1D != extent_1D_proj))
def test_get_extent_subset_twodee(self):
extent_2D = self.parser.get_extent_subset(
subset_name=SUBSET_2D_OPEN_WATER
)
# should contain values
self.assertTrue(np.any(extent_2D != NO_DATA_VALUE))
def test_get_extent_subset_combi(self):
extent_1D = self.parser.get_extent_subset(subset_name=SUBSET_1D_ALL)
extent_2D = self.parser.get_extent_subset(
subset_name=SUBSET_2D_OPEN_WATER
)
# should be different
self.assertTrue(np.any(np.not_equal(extent_1D, extent_2D)))
def test_get_model_extent(self):
model_extent = self.parser.get_model_extent()
np.testing.assert_almost_equal(
model_extent,
np.array([105427.6, 511727.0515702, 115887.0, 523463.3268483]),
)
def test_get_model_extent_extra_extent(self):
onedee_extra = np.array([100000.0, 90000.0, 550000.0, 580000.0])
extra_extent = {"extra_extent": [onedee_extra]}
model_extent = self.parser.get_model_extent(**extra_extent)
np.testing.assert_equal(
model_extent, np.array([100000.0, 90000.0, 550000.0, 580000.0])
)
def test_get_model_extent_extra_extent2(self):
onedee_extra = np.array([106666.6, 106666.6, 550000.0, 580000.0])
extra_extent = {"extra_extent": [onedee_extra]}
model_extent = self.parser.get_model_extent(**extra_extent)
np.testing.assert_almost_equal(
model_extent, np.array([105427.6, 106666.6, 550000.0, 580000.0])
)
def test_properties(self):
self.assertTrue(hasattr(self.parser, "has_groundwater"))
self.assertTrue(hasattr(self.parser, "has_levees"))
self.assertTrue(hasattr(self.parser, "threedicore_version"))
self.assertTrue(hasattr(self.parser, "has_1d"))
self.assertTrue(hasattr(self.parser, "has_2d"))
self.assertTrue(hasattr(self.parser, "epsg_code"))
self.assertTrue(hasattr(self.parser, "revision_hash"))
self.assertTrue(hasattr(self.parser, "revision_nr"))
self.assertTrue(hasattr(self.parser, "model_slug"))
class GridAdminLinesTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
d = tempfile.mkdtemp()
self.f = os.path.join(d, "test_lines.shp")
def tearDown(self):
shutil.rmtree(os.path.dirname(self.f))
def test_fields(self):
# Check dtype names
assert set(self.parser.lines._meta.get_fields().keys()) == {
"content_pk",
"content_type",
"id",
"kcu",
"lik",
"line",
"dpumax",
"flod",
"flou",
"cross1",
"cross2",
"ds1d",
"line_coords",
"cross_pix_coords",
"cross_weight",
"line_geometries",
"invert_level_start_point",
"invert_level_end_point",
"zoom_category",
"ds1d_half"
}
def test_exporters(self):
self.assertEqual(len(self.parser.lines._exporters), 1)
self.assertIsInstance(
self.parser.lines._exporters[0], LinesOgrExporter
)
def test_export_to_shape(self):
self.parser.lines.to_shape(self.f)
self.assertTrue(os.path.exists(self.f))
s = ogr.Open(self.f)
lyr = s.GetLayer()
self.assertEqual(lyr.GetFeatureCount(), self.parser.lines.id.size - 1)
class GridAdminGridTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
self.grid = self.parser.grid
def test_fields(self):
assert set(self.grid._meta.get_fields().keys()) == {
"id",
"nodk",
"nodm",
"nodn",
"ip",
"jp",
}
def test_dx(self):
expected = np.array([20.0, 40.0, 80.0, 160.0])
np.testing.assert_almost_equal(self.grid.dx, expected)
np.testing.assert_almost_equal(self.grid.filter(id=1).dx, expected)
def test_n2dtot(self):
expected = 5374
self.assertEqual(self.grid.n2dtot, expected)
self.assertEqual(self.grid.filter(id=1).n2dtot, expected)
def test_transform(self):
self.assertEqual(
self.grid.transform,
(0.5, 0.0, 106314.0, 0.0, 0.5, 514912.0)
)
@unittest.skip("TODO")
def test_get_pixel_map(self):
self.parser.grid.get_pixel_map()
class GridAdminNodeTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
d = tempfile.mkdtemp()
self.f = os.path.join(d, "test_nodes.shp")
def tearDown(self):
shutil.rmtree(os.path.dirname(self.f))
def test_fields(self):
# Check fields
assert set(self.parser.nodes._meta.get_fields(only_names=True)) == {
"cell_coords",
"content_pk",
"coordinates",
"id",
"node_type",
"calculation_type",
"seq_id",
"zoom_category",
"is_manhole",
"sumax",
"drain_level",
"storage_area",
"dmax",
"initial_waterlevel",
}
def test_locations_2d(self):
self.assertGreater(len(self.parser.nodes.locations_2d), 0)
frst = self.parser.nodes.locations_2d[0]
# should contain three elements
self.assertEqual(len(frst), 3)
def test_exporters(self):
self.assertEqual(len(self.parser.nodes._exporters), 1)
self.assertIsInstance(
self.parser.nodes._exporters[0], NodesOgrExporter
)
def test_export_to_shape(self):
self.parser.nodes.to_shape(self.f)
self.assertTrue(os.path.exists(self.f))
s = ogr.Open(self.f)
lyr = s.GetLayer()
self.assertEqual(lyr.GetFeatureCount(), self.parser.nodes.id.size - 1)
class GridAdminBreachTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
d = tempfile.mkdtemp()
self.f = os.path.join(d, "test_breaches.shp")
def tearDown(self):
shutil.rmtree(os.path.dirname(self.f))
def test_fields(self):
# Check fields
assert set(self.parser.breaches._meta.get_fields().keys()) == {
"content_pk",
"coordinates",
"id",
"kcu",
"levbr",
"levl",
"levmat",
"seq_ids",
}
def test_exporters(self):
self.assertEqual(len(self.parser.breaches._exporters), 1)
self.assertIsInstance(
self.parser.breaches._exporters[0], BreachesOgrExporter
)
def test_export_to_shape(self):
self.parser.breaches.to_shape(self.f)
self.assertTrue(os.path.exists, self.f)
s = ogr.Open(self.f)
lyr = s.GetLayer()
self.assertEqual(
lyr.GetFeatureCount(), self.parser.breaches.id.size - 1
)
class GridAdminCellsTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
d = tempfile.mkdtemp()
self.f = os.path.join(d, "test_cells.shp")
def tearDown(self):
shutil.rmtree(os.path.dirname(self.f))
def test_fields(self):
# should have also z_coordinate
assert set(self.parser.cells._meta.get_fields().keys()) == {
"cell_coords",
"content_pk",
"coordinates",
"id",
"calculation_type",
"node_type",
"seq_id",
"z_coordinate",
"zoom_category",
"is_manhole",
"sumax",
"pixel_width",
"pixel_coords",
"drain_level",
"storage_area",
"dmax",
"initial_waterlevel",
"has_dem_averaged",
}
def test_get_id_from_xy(self):
# should yield two ids: one for 2D, one for groundwater
self.assertListEqual(self.parser.cells.get_id_from_xy(1.0, 2.0), [])
# first coordinate pair + some offset
x = self.parser.cells.coordinates[0][1] + 0.5
y = self.parser.cells.coordinates[1][1] + 0.5
self.assertListEqual(self.parser.cells.get_id_from_xy(x, y), [1, 5375])
def test_get_id_from_xy_2d_open_water(self):
self.assertListEqual(
self.parser.cells.get_id_from_xy(
1.0, 2.0, subset_name="2d_open_water"
),
[],
)
# first coordinate pair + some offset
x = self.parser.cells.coordinates[0][1] + 0.5
y = self.parser.cells.coordinates[1][1] + 0.5
self.assertEqual(
self.parser.cells.get_id_from_xy(
x, y, subset_name="2d_open_water"
),
[1],
)
def test_get_id_from_xy_groundwater(self):
self.assertListEqual(
self.parser.cells.get_id_from_xy(
1.0, 2.0, subset_name="groundwater_all"
),
[],
)
# first coordinate pair + some offset
x = self.parser.cells.coordinates[0][1] + 0.5
y = self.parser.cells.coordinates[1][1] + 0.5
self.assertEqual(
self.parser.cells.get_id_from_xy(
x, y, subset_name="groundwater_all"
),
[5375],
)
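get_id_from_xy() above returns every cell whose bounding box contains the query point: here one 2D open-water cell plus its groundwater twin stacked on the same footprint. A sketch of that containment lookup, assuming (for illustration only) that cell bounds are stored as four rows (xmin, ymin, xmax, ymax) with one column per cell:

```python
import numpy as np

def ids_from_xy(cell_bounds, ids, x, y):
    """ids of all cells whose [xmin, ymin, xmax, ymax] box contains (x, y)."""
    xmin, ymin, xmax, ymax = cell_bounds
    mask = (xmin <= x) & (x <= xmax) & (ymin <= y) & (y <= ymax)
    return ids[mask].tolist()

# Two stacked cells sharing one footprint (a 2D cell and a groundwater copy).
bounds = np.array([[0.0, 0.0], [0.0, 0.0], [10.0, 10.0], [10.0, 10.0]])
ids = np.array([1, 5375])
```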
def test_get_extent_pixels(self):
cells = self.parser.cells
assert cells.get_extent_pixels() == (0, 0, 9600, 9920)
def test_iter_by_tile(self):
cells = self.parser.cells.subset("2D_OPEN_WATER")
w, h = 320 * 50, 1280
tiles = list(cells.iter_by_tile(w, h))
assert len(tiles) == 8 # ceil(9920/1280)
total = 0
for i, (bbox, cells) in enumerate(tiles):
assert bbox == (0, i * h, w, (i + 1) * h)
assert np.all(cells.pixel_coords[1] >= i * h)
assert np.all(cells.pixel_coords[1] < (i + 1) * h)
total += cells.count
assert total == self.parser.cells.subset("2D_OPEN_WATER").count
def test_iter_by_tile_subset_y(self):
cells = self.parser.cells.subset("2D_OPEN_WATER").filter(
pixel_coords__in_bbox=(0, 2000, 9600, 3000)
)
w, h = 320 * 50, 1280
tiles = list(cells.iter_by_tile(w, h))
assert len(tiles) == 2 # 1280 - 2560 and 2560 - 3840
assert tiles[0][0] == (0, 1280, w, 1280 * 2)
assert tiles[1][0] == (0, 1280 * 2, w, 1280 * 3)
assert cells.count == (tiles[0][1].count + tiles[1][1].count)
def test_iter_by_tile_subset_x(self):
cells = self.parser.cells.subset("2D_OPEN_WATER").filter(
pixel_coords__in_bbox=(2000, 0, 3000, 9920)
)
w, h = 1280, 320 * 50
tiles = list(cells.iter_by_tile(w, h))
assert len(tiles) == 2 # 1280 - 2560 and 2560 - 3840
assert tiles[0][0] == (1280, 0, 1280 * 2, h)
assert tiles[1][0] == (1280 * 2, 0, 1280 * 3, h)
assert cells.count == (tiles[0][1].count + tiles[1][1].count)
def test_iter_by_tile_should_match(self):
cells = self.parser.cells.subset("2D_OPEN_WATER")
for tile in ((321, 1280), (160, 1280), (500, 1280), (1280, 500)):
with self.assertRaises(ValueError):
list(cells.iter_by_tile(*tile))
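iter_by_tile() above walks the raster extent in fixed-size strips; for the 9600 x 9920 pixel extent, a 1280-pixel strip height yields ceil(9920 / 1280) = 8 bboxes. A sketch of the bbox generation only (the tile-size validation exercised by test_iter_by_tile_should_match is left out):

```python
import math

def row_tiles(width, height, tile_height):
    """(xmin, ymin, xmax, ymax) pixel bboxes covering `height` in row strips."""
    n = math.ceil(height / tile_height)  # last strip may overshoot `height`
    return [(0, i * tile_height, width, (i + 1) * tile_height)
            for i in range(n)]
```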
def test_exporters(self):
self.assertEqual(len(self.parser.cells._exporters), 1)
self.assertIsInstance(
self.parser.cells._exporters[0], CellsOgrExporter
)
def test_export_to_shape(self):
self.parser.cells.to_shape(self.f)
self.assertTrue(os.path.exists(self.f))
class GridAdminCrossSectionsTest(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
d = tempfile.mkdtemp()
self.f = os.path.join(d, "test_cross_sections.shp")
def tearDown(self):
shutil.rmtree(os.path.dirname(self.f))
def test_fields(self):
# Check dtype names
assert set(self.parser.cross_sections._meta.get_fields().keys()) == {
"id",
"code",
"shape",
"content_pk",
"width_1d",
"offset",
"count",
"tables"
}
class NodeFilterTests(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
def test_nodes_filter_id_eq(self):
filtered = self.parser.nodes.filter(id=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] == 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_ne(self):
filtered = self.parser.nodes.filter(id__ne=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] != 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_gt(self):
filtered = self.parser.nodes.filter(id__gt=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] > 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_gte(self):
filtered = self.parser.nodes.filter(id__gte=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] >= 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_lt(self):
filtered = self.parser.nodes.filter(id__lt=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] < 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_lte(self):
filtered = self.parser.nodes.filter(id__lte=3).data["coordinates"]
trues, = np.where(self.parser.nodes.data["id"] <= 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_filter_id_in(self):
"""Verify that 'in' filter returns the correct data."""
filtered = self.parser.nodes.filter(id__in=list(range(3, 7))).data[
"coordinates"
]
trues, = np.where(
(self.parser.nodes.data["id"] >= 3)
& (self.parser.nodes.data["id"] < 7)
)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
@unittest.skip(
"This test should succeed, but in our code the reprojected tile "
"bounds go out of bounds, which causes errors. TODO: fix this."
)
def test_nodes_filter_id_in_tile(self):
# at z=0 we have a single base tile
filtered = self.parser.nodes.filter(
coordinates__in_tile=[0, 0, 0]
).data["id"]
expected = self.parser.nodes.data["id"]
self.assertTrue(len(filtered) != 0)
self.assertTrue((filtered == expected).all())
# some special cases
def test_nodes_filter_id_eq_chained(self):
"""Eq filter can be chained."""
filtered = (
self.parser.nodes.filter(id=3).filter(id=3).data["coordinates"]
)
trues, = np.where(self.parser.nodes.data["id"] == 3)
expected = self.parser.nodes.data["coordinates"][:, trues]
self.assertTrue((filtered == expected).all())
def test_nodes_chained_same_filter_id_in(self):
"""In filters can be chained."""
filtered = (
self.parser.nodes.filter(id__in=list(range(1, 10)))
.filter(id__in=list(range(1, 10)))
.filter(id__in=list(range(1, 10)))
.data["coordinates"]
)
expected = self.parser.nodes.filter(id__in=list(range(1, 10))).data[
"coordinates"
]
self.assertTrue((filtered == expected).all())
def test_nodes_chained_filter_id_in(self):
filtered = (
self.parser.nodes.filter(id__in=list(range(1, 8)))
.filter(id__in=list(range(3, 7)))
.data["coordinates"]
)
expected = self.parser.nodes.filter(id__in=list(range(3, 7))).data[
"coordinates"
]
self.assertTrue((filtered == expected).all())
def test_nodes_chained_filter_id_in_2(self):
filtered = (
self.parser.nodes.filter(id__in=list(range(1, 10)))
.filter(id__in=list(range(5, 20)))
.data["coordinates"]
)
expected = self.parser.nodes.filter(id__in=list(range(5, 10))).data[
"coordinates"
]
self.assertTrue((filtered == expected).all())
def test_manhole_filter(self):
non_manholes1 = self.parser.nodes.filter(is_manhole=False).count
non_manholes2 = self.parser.nodes.filter(is_manhole=0).count
non_manholes3 = self.parser.nodes.filter(is_manhole__eq=0).count
non_manholes4 = self.parser.nodes.filter(is_manhole__eq=False).count
non_manholes5 = self.parser.nodes.filter(is_manhole__ne=1).count
non_manholes6 = self.parser.nodes.filter(is_manhole__ne=True).count
self.assertEqual(non_manholes1, non_manholes2)
self.assertEqual(non_manholes2, non_manholes3)
self.assertEqual(non_manholes3, non_manholes4)
self.assertEqual(non_manholes4, non_manholes5)
self.assertEqual(non_manholes5, non_manholes6)
manholes1 = self.parser.nodes.filter(is_manhole=True).count
manholes2 = self.parser.nodes.filter(is_manhole=1).count
manholes3 = self.parser.nodes.filter(is_manhole__eq=1).count
manholes4 = self.parser.nodes.filter(is_manhole__eq=True).count
manholes5 = self.parser.nodes.filter(is_manhole__ne=0).count
manholes6 = self.parser.nodes.filter(is_manhole__ne=False).count
self.assertEqual(manholes1, manholes2)
self.assertEqual(manholes2, manholes3)
self.assertEqual(manholes3, manholes4)
self.assertEqual(manholes4, manholes5)
self.assertEqual(manholes5, manholes6)
self.assertEqual(manholes1 + non_manholes2, self.parser.nodes.count)
def test_node_filter_keeps_has_1d(self):
"""Property has_1d doesn't disappear"""
self.assertEqual(
self.parser.nodes.has_1d, self.parser.nodes.filter(id=1).has_1d
)
class LineFilterTests(unittest.TestCase):
def setUp(self):
self.parser = GridH5Admin(grid_admin_h5_file)
def test_lines_filter_id_eq(self):
filtered = self.parser.lines.filter(id=3).data["line_coords"]
trues, = np.where(self.parser.lines.data["id"] == 3)
expected = self.parser.lines.data["line_coords"][:, trues]
self.assertTrue((filtered == expected).all())
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('horizontalAccuracy', node)
if value is not None and 'horizontalAccuracy' not in already_processed:
already_processed.add('horizontalAccuracy')
try:
self.horizontalAccuracy = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (horizontalAccuracy): %s' % exp)
value = find_attr_value_('verticalAccuracy', node)
if value is not None and 'verticalAccuracy' not in already_processed:
already_processed.add('verticalAccuracy')
try:
self.verticalAccuracy = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (verticalAccuracy): %s' % exp)
value = find_attr_value_('latitude', node)
if value is not None and 'latitude' not in already_processed:
already_processed.add('latitude')
try:
self.latitude = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (latitude): %s' % exp)
value = find_attr_value_('longitude', node)
if value is not None and 'longitude' not in already_processed:
already_processed.add('longitude')
try:
self.longitude = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (longitude): %s' % exp)
value = find_attr_value_('altitude', node)
if value is not None and 'altitude' not in already_processed:
already_processed.add('altitude')
try:
self.altitude = float(value)
except ValueError as exp:
raise ValueError('Bad float/double attribute (altitude): %s' % exp)
value = find_attr_value_('timestamp', node)
if value is not None and 'timestamp' not in already_processed:
already_processed.add('timestamp')
try:
self.timestamp = int(value)
except ValueError as exp:
raise_parse_error(node, 'Bad integer attribute: %s' % exp)
super(GPSLocationWidgetStep, self).buildAttributes(node, attrs, already_processed)
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
super(GPSLocationWidgetStep, self).buildChildren(child_, node, nodeName_, True)
pass
# end class GPSLocationWidgetStep
class MyDigiPassEidProfile(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, firstName=None, firstName3=None, lastName=None, gender=None, nationality=None, dateOfBirth=None, locationOfBirth=None, nobleCondition=None, issuingMunicipality=None, cardNumber=None, chipNumber=None, validityBeginsAt=None, validityEndsAt=None, createdAt=None):
self.original_tagname_ = None
self.firstName = _cast(None, firstName)
self.firstName3 = _cast(None, firstName3)
self.lastName = _cast(None, lastName)
self.gender = _cast(None, gender)
self.nationality = _cast(None, nationality)
self.dateOfBirth = _cast(None, dateOfBirth)
self.locationOfBirth = _cast(None, locationOfBirth)
self.nobleCondition = _cast(None, nobleCondition)
self.issuingMunicipality = _cast(None, issuingMunicipality)
self.cardNumber = _cast(None, cardNumber)
self.chipNumber = _cast(None, chipNumber)
self.validityBeginsAt = _cast(None, validityBeginsAt)
self.validityEndsAt = _cast(None, validityEndsAt)
self.createdAt = _cast(None, createdAt)
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, MyDigiPassEidProfile)
if subclass is not None:
return subclass(*args_, **kwargs_)
if MyDigiPassEidProfile.subclass:
return MyDigiPassEidProfile.subclass(*args_, **kwargs_)
else:
return MyDigiPassEidProfile(*args_, **kwargs_)
factory = staticmethod(factory)
def get_firstName(self): return self.firstName
def set_firstName(self, firstName): self.firstName = firstName
def get_firstName3(self): return self.firstName3
def set_firstName3(self, firstName3): self.firstName3 = firstName3
def get_lastName(self): return self.lastName
def set_lastName(self, lastName): self.lastName = lastName
def get_gender(self): return self.gender
def set_gender(self, gender): self.gender = gender
def get_nationality(self): return self.nationality
def set_nationality(self, nationality): self.nationality = nationality
def get_dateOfBirth(self): return self.dateOfBirth
def set_dateOfBirth(self, dateOfBirth): self.dateOfBirth = dateOfBirth
def get_locationOfBirth(self): return self.locationOfBirth
def set_locationOfBirth(self, locationOfBirth): self.locationOfBirth = locationOfBirth
def get_nobleCondition(self): return self.nobleCondition
def set_nobleCondition(self, nobleCondition): self.nobleCondition = nobleCondition
def get_issuingMunicipality(self): return self.issuingMunicipality
def set_issuingMunicipality(self, issuingMunicipality): self.issuingMunicipality = issuingMunicipality
def get_cardNumber(self): return self.cardNumber
def set_cardNumber(self, cardNumber): self.cardNumber = cardNumber
def get_chipNumber(self): return self.chipNumber
def set_chipNumber(self, chipNumber): self.chipNumber = chipNumber
def get_validityBeginsAt(self): return self.validityBeginsAt
def set_validityBeginsAt(self, validityBeginsAt): self.validityBeginsAt = validityBeginsAt
def get_validityEndsAt(self): return self.validityEndsAt
def set_validityEndsAt(self, validityEndsAt): self.validityEndsAt = validityEndsAt
def get_createdAt(self): return self.createdAt
def set_createdAt(self, createdAt): self.createdAt = createdAt
def hasContent_(self):
if (
):
return True
else:
return False
def export(self, outfile, level, namespaceprefix_='', name_='MyDigiPassEidProfile', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('MyDigiPassEidProfile')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='MyDigiPassEidProfile')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespaceprefix_='', name_='MyDigiPassEidProfile', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='MyDigiPassEidProfile'):
if self.firstName is not None and 'firstName' not in already_processed:
already_processed.add('firstName')
outfile.write(' firstName=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.firstName), input_name='firstName')), ))
if self.firstName3 is not None and 'firstName3' not in already_processed:
already_processed.add('firstName3')
outfile.write(' firstName3=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.firstName3), input_name='firstName3')), ))
if self.lastName is not None and 'lastName' not in already_processed:
already_processed.add('lastName')
outfile.write(' lastName=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.lastName), input_name='lastName')), ))
if self.gender is not None and 'gender' not in already_processed:
already_processed.add('gender')
outfile.write(' gender=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.gender), input_name='gender')), ))
if self.nationality is not None and 'nationality' not in already_processed:
already_processed.add('nationality')
outfile.write(' nationality=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.nationality), input_name='nationality')), ))
if self.dateOfBirth is not None and 'dateOfBirth' not in already_processed:
already_processed.add('dateOfBirth')
outfile.write(' dateOfBirth=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.dateOfBirth), input_name='dateOfBirth')), ))
if self.locationOfBirth is not None and 'locationOfBirth' not in already_processed:
already_processed.add('locationOfBirth')
outfile.write(' locationOfBirth=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.locationOfBirth), input_name='locationOfBirth')), ))
if self.nobleCondition is not None and 'nobleCondition' not in already_processed:
already_processed.add('nobleCondition')
outfile.write(' nobleCondition=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.nobleCondition), input_name='nobleCondition')), ))
if self.issuingMunicipality is not None and 'issuingMunicipality' not in already_processed:
already_processed.add('issuingMunicipality')
outfile.write(' issuingMunicipality=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.issuingMunicipality), input_name='issuingMunicipality')), ))
if self.cardNumber is not None and 'cardNumber' not in already_processed:
already_processed.add('cardNumber')
outfile.write(' cardNumber=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.cardNumber), input_name='cardNumber')), ))
if self.chipNumber is not None and 'chipNumber' not in already_processed:
already_processed.add('chipNumber')
outfile.write(' chipNumber=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.chipNumber), input_name='chipNumber')), ))
if self.validityBeginsAt is not None and 'validityBeginsAt' not in already_processed:
already_processed.add('validityBeginsAt')
outfile.write(' validityBeginsAt=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.validityBeginsAt), input_name='validityBeginsAt')), ))
if self.validityEndsAt is not None and 'validityEndsAt' not in already_processed:
already_processed.add('validityEndsAt')
outfile.write(' validityEndsAt=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.validityEndsAt), input_name='validityEndsAt')), ))
if self.createdAt is not None and 'createdAt' not in already_processed:
already_processed.add('createdAt')
outfile.write(' createdAt=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.createdAt), input_name='createdAt')), ))
def exportChildren(self, outfile, level, namespaceprefix_='', name_='MyDigiPassEidProfile', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('firstName', node)
if value is not None and 'firstName' not in already_processed:
already_processed.add('firstName')
self.firstName = value
value = find_attr_value_('firstName3', node)
if value is not None and 'firstName3' not in already_processed:
already_processed.add('firstName3')
self.firstName3 = value
value = find_attr_value_('lastName', node)
if value is not None and 'lastName' not in already_processed:
already_processed.add('lastName')
self.lastName = value
value = find_attr_value_('gender', node)
if value is not None and 'gender' not in already_processed:
already_processed.add('gender')
self.gender = value
value = find_attr_value_('nationality', node)
if value is not None and 'nationality' not in already_processed:
already_processed.add('nationality')
self.nationality = value
value = find_attr_value_('dateOfBirth', node)
if value is not None and 'dateOfBirth' not in already_processed:
already_processed.add('dateOfBirth')
self.dateOfBirth = value
value = find_attr_value_('locationOfBirth', node)
if value is not None and 'locationOfBirth' not in already_processed:
already_processed.add('locationOfBirth')
self.locationOfBirth = value
value = find_attr_value_('nobleCondition', node)
if value is not None and 'nobleCondition' not in already_processed:
already_processed.add('nobleCondition')
self.nobleCondition = value
value = find_attr_value_('issuingMunicipality', node)
if value is not None and 'issuingMunicipality' not in already_processed:
already_processed.add('issuingMunicipality')
self.issuingMunicipality = value
value = find_attr_value_('cardNumber', node)
if value is not None and 'cardNumber' not in already_processed:
already_processed.add('cardNumber')
self.cardNumber = value
value = find_attr_value_('chipNumber', node)
if value is not None and 'chipNumber' not in already_processed:
already_processed.add('chipNumber')
self.chipNumber = value
value = find_attr_value_('validityBeginsAt', node)
if value is not None and 'validityBeginsAt' not in already_processed:
already_processed.add('validityBeginsAt')
self.validityBeginsAt = value
value = find_attr_value_('validityEndsAt', node)
if value is not None and 'validityEndsAt' not in already_processed:
already_processed.add('validityEndsAt')
self.validityEndsAt = value
value = find_attr_value_('createdAt', node)
if value is not None and 'createdAt' not in already_processed:
already_processed.add('createdAt')
self.createdAt = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class MyDigiPassEidProfile
class MyDigiPassEidAddress(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self, streetAndNumber=None, zipCode=None, municipality=None):
self.original_tagname_ = None
self.streetAndNumber = _cast(None, streetAndNumber)
self.zipCode = _cast(None, zipCode)
self.municipality = _cast(None, municipality)
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, MyDigiPassEidAddress)
if subclass is not None:
return subclass(*args_, **kwargs_)
if MyDigiPassEidAddress.subclass:
return MyDigiPassEidAddress.subclass(*args_, **kwargs_)
else:
return MyDigiPassEidAddress(*args_, **kwargs_)
factory = staticmethod(factory)
def get_streetAndNumber(self): return self.streetAndNumber
def set_streetAndNumber(self, streetAndNumber): self.streetAndNumber = streetAndNumber
def get_zipCode(self): return self.zipCode
def set_zipCode(self, zipCode): self.zipCode = zipCode
def get_municipality(self): return self.municipality
def set_municipality(self, municipality): self.municipality = municipality
def hasContent_(self):
if (
):
return True
else:
return False
def export(self, outfile, level, namespaceprefix_='', name_='MyDigiPassEidAddress', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('MyDigiPassEidAddress')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='MyDigiPassEidAddress')
from __future__ import print_function
import os
import itertools, pkg_resources, sys
from distutils.version import LooseVersion
if LooseVersion(pkg_resources.get_distribution("chainer").version) >= LooseVersion('7.0.0') and \
sys.version_info.major == 2:
print('''Please install chainer < 7.0.0:
sudo pip install chainer==6.7.0
c.f. https://github.com/jsk-ros-pkg/jsk_recognition/pull/2485
''', file=sys.stderr)
sys.exit(1)
if not any("cupy-" in p.project_name for p in itertools.chain(*[pkg_resources.find_distributions(path) for path in sys.path])):
    print('''Please install CuPy
sudo pip install cupy-cuda[your cuda version]
e.g.
sudo pip install cupy-cuda91
''', file=sys.stderr)
sys.exit(1)
import chainer
import chainer.functions as F
import chainer.links as L
base_url = 'http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/'
models = {
'auto': 'coco/pose_iter_440000.chainermodel',
'coco': 'coco/pose_iter_440000.chainermodel',
'mpi': 'mpi/pose_iter_160000.chainermodel',
}
class PoseNet(chainer.Chain):
def __init__(self, pretrained_model='auto'):
super(PoseNet, self).__init__()
with self.init_scope():
self.conv1_1 = L.Convolution2D(
in_channels=3, out_channels=64, ksize=3, stride=1, pad=1)
self.conv1_2 = L.Convolution2D(
in_channels=64, out_channels=64, ksize=3, stride=1, pad=1)
self.conv2_1 = L.Convolution2D(
in_channels=64, out_channels=128, ksize=3, stride=1, pad=1)
self.conv2_2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv3_1 = L.Convolution2D(
in_channels=128, out_channels=256, ksize=3, stride=1, pad=1)
self.conv3_2 = L.Convolution2D(
in_channels=256, out_channels=256, ksize=3, stride=1, pad=1)
self.conv3_3 = L.Convolution2D(
in_channels=256, out_channels=256, ksize=3, stride=1, pad=1)
self.conv3_4 = L.Convolution2D(
in_channels=256, out_channels=256, ksize=3, stride=1, pad=1)
self.conv4_1 = L.Convolution2D(
in_channels=256, out_channels=512, ksize=3, stride=1, pad=1)
self.conv4_2 = L.Convolution2D(
in_channels=512, out_channels=512, ksize=3, stride=1, pad=1)
self.conv4_3_CPM = L.Convolution2D(
in_channels=512, out_channels=256, ksize=3, stride=1, pad=1)
self.conv4_4_CPM = L.Convolution2D(
in_channels=256, out_channels=128, ksize=3, stride=1, pad=1)
# stage1
self.conv5_1_CPM_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_2_CPM_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_3_CPM_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_4_CPM_L1 = L.Convolution2D(
in_channels=128, out_channels=512, ksize=1, stride=1, pad=0)
self.conv5_5_CPM_L1 = L.Convolution2D(
in_channels=512, out_channels=38, ksize=1, stride=1, pad=0)
self.conv5_1_CPM_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_2_CPM_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_3_CPM_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=3, stride=1, pad=1)
self.conv5_4_CPM_L2 = L.Convolution2D(
in_channels=128, out_channels=512, ksize=1, stride=1, pad=0)
self.conv5_5_CPM_L2 = L.Convolution2D(
in_channels=512, out_channels=19, ksize=1, stride=1, pad=0)
# stage2
self.Mconv1_stage2_L1 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage2_L1 = L.Convolution2D(
in_channels=128, out_channels=38, ksize=1, stride=1, pad=0)
self.Mconv1_stage2_L2 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage2_L2 = L.Convolution2D(
in_channels=128, out_channels=19, ksize=1, stride=1, pad=0)
# stage3
self.Mconv1_stage3_L1 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage3_L1 = L.Convolution2D(
in_channels=128, out_channels=38, ksize=1, stride=1, pad=0)
self.Mconv1_stage3_L2 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage3_L2 = L.Convolution2D(
in_channels=128, out_channels=19, ksize=1, stride=1, pad=0)
# stage4
self.Mconv1_stage4_L1 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage4_L1 = L.Convolution2D(
in_channels=128, out_channels=38, ksize=1, stride=1, pad=0)
self.Mconv1_stage4_L2 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage4_L2 = L.Convolution2D(
in_channels=128, out_channels=19, ksize=1, stride=1, pad=0)
# stage5
self.Mconv1_stage5_L1 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage5_L1 = L.Convolution2D(
in_channels=128, out_channels=38, ksize=1, stride=1, pad=0)
self.Mconv1_stage5_L2 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage5_L2 = L.Convolution2D(
in_channels=128, out_channels=19, ksize=1, stride=1, pad=0)
# stage6
self.Mconv1_stage6_L1 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage6_L1 = L.Convolution2D(
in_channels=128, out_channels=38, ksize=1, stride=1, pad=0)
self.Mconv1_stage6_L2 = L.Convolution2D(
in_channels=185, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv2_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv3_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv4_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv5_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=7, stride=1, pad=3)
self.Mconv6_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=128, ksize=1, stride=1, pad=0)
self.Mconv7_stage6_L2 = L.Convolution2D(
in_channels=128, out_channels=19, ksize=1, stride=1, pad=0)
if pretrained_model in models.keys():
data_dir = chainer.dataset.get_dataset_directory('openpose/pose')
model_path = os.path.join(data_dir, models[pretrained_model])
try:
os.makedirs(os.path.dirname(model_path))
except OSError:
pass
chainer.dataset.cache_or_load_file(
model_path,
lambda f: _download_pretrained_model(pretrained_model, f),
lambda f: f)
chainer.serializers.load_npz(model_path, self)
elif pretrained_model is not None:
if not os.path.exists(pretrained_model):
raise OSError('model does not exist: "%s"' % pretrained_model)
chainer.serializers.load_npz(pretrained_model, self)
def __call__(self, x):
heatmaps = []
pafs = []
h = F.relu(self.conv1_1(x))
h = F.relu(self.conv1_2(h))
h = F.max_pooling_2d(h, ksize=2, stride=2)
h = F.relu(self.conv2_1(h))
h = F.relu(self.conv2_2(h))
h = F.max_pooling_2d(h, ksize=2, stride=2)
h = F.relu(self.conv3_1(h))
h = F.relu(self.conv3_2(h))
h = F.relu(self.conv3_3(h))
h = F.relu(self.conv3_4(h))
h = F.max_pooling_2d(h, ksize=2, stride=2)
h = F.relu(self.conv4_1(h))
h = F.relu(self.conv4_2(h))
h = F.relu(self.conv4_3_CPM(h))
h = F.relu(self.conv4_4_CPM(h))
feature_map = h
# stage1
h1 = F.relu(self.conv5_1_CPM_L1(feature_map)) # branch1
h1 = F.relu(self.conv5_2_CPM_L1(h1))
h1 = F.relu(self.conv5_3_CPM_L1(h1))
h1 = F.relu(self.conv5_4_CPM_L1(h1))
h1 = self.conv5_5_CPM_L1(h1)
h2 = F.relu(self.conv5_1_CPM_L2(feature_map)) # branch2
h2 = F.relu(self.conv5_2_CPM_L2(h2))
h2 = F.relu(self.conv5_3_CPM_L2(h2))
h2 = F.relu(self.conv5_4_CPM_L2(h2))
h2 = self.conv5_5_CPM_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
# stage2
h = F.concat((h1, h2, feature_map), axis=1) # channel concat
h1 = F.relu(self.Mconv1_stage2_L1(h)) # branch1
h1 = F.relu(self.Mconv2_stage2_L1(h1))
h1 = F.relu(self.Mconv3_stage2_L1(h1))
h1 = F.relu(self.Mconv4_stage2_L1(h1))
h1 = F.relu(self.Mconv5_stage2_L1(h1))
h1 = F.relu(self.Mconv6_stage2_L1(h1))
h1 = self.Mconv7_stage2_L1(h1)
h2 = F.relu(self.Mconv1_stage2_L2(h)) # branch2
h2 = F.relu(self.Mconv2_stage2_L2(h2))
h2 = F.relu(self.Mconv3_stage2_L2(h2))
h2 = F.relu(self.Mconv4_stage2_L2(h2))
h2 = F.relu(self.Mconv5_stage2_L2(h2))
h2 = F.relu(self.Mconv6_stage2_L2(h2))
h2 = self.Mconv7_stage2_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
# stage3
h = F.concat((h1, h2, feature_map), axis=1) # channel concat
h1 = F.relu(self.Mconv1_stage3_L1(h)) # branch1
h1 = F.relu(self.Mconv2_stage3_L1(h1))
h1 = F.relu(self.Mconv3_stage3_L1(h1))
h1 = F.relu(self.Mconv4_stage3_L1(h1))
h1 = F.relu(self.Mconv5_stage3_L1(h1))
h1 = F.relu(self.Mconv6_stage3_L1(h1))
h1 = self.Mconv7_stage3_L1(h1)
h2 = F.relu(self.Mconv1_stage3_L2(h)) # branch2
h2 = F.relu(self.Mconv2_stage3_L2(h2))
h2 = F.relu(self.Mconv3_stage3_L2(h2))
h2 = F.relu(self.Mconv4_stage3_L2(h2))
h2 = F.relu(self.Mconv5_stage3_L2(h2))
h2 = F.relu(self.Mconv6_stage3_L2(h2))
h2 = self.Mconv7_stage3_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
# stage4
h = F.concat((h1, h2, feature_map), axis=1) # channel concat
h1 = F.relu(self.Mconv1_stage4_L1(h)) # branch1
h1 = F.relu(self.Mconv2_stage4_L1(h1))
h1 = F.relu(self.Mconv3_stage4_L1(h1))
h1 = F.relu(self.Mconv4_stage4_L1(h1))
h1 = F.relu(self.Mconv5_stage4_L1(h1))
h1 = F.relu(self.Mconv6_stage4_L1(h1))
h1 = self.Mconv7_stage4_L1(h1)
h2 = F.relu(self.Mconv1_stage4_L2(h)) # branch2
h2 = F.relu(self.Mconv2_stage4_L2(h2))
h2 = F.relu(self.Mconv3_stage4_L2(h2))
h2 = F.relu(self.Mconv4_stage4_L2(h2))
h2 = F.relu(self.Mconv5_stage4_L2(h2))
h2 = F.relu(self.Mconv6_stage4_L2(h2))
h2 = self.Mconv7_stage4_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
# stage5
h = F.concat((h1, h2, feature_map), axis=1) # channel concat
h1 = F.relu(self.Mconv1_stage5_L1(h)) # branch1
h1 = F.relu(self.Mconv2_stage5_L1(h1))
h1 = F.relu(self.Mconv3_stage5_L1(h1))
h1 = F.relu(self.Mconv4_stage5_L1(h1))
h1 = F.relu(self.Mconv5_stage5_L1(h1))
h1 = F.relu(self.Mconv6_stage5_L1(h1))
h1 = self.Mconv7_stage5_L1(h1)
h2 = F.relu(self.Mconv1_stage5_L2(h)) # branch2
h2 = F.relu(self.Mconv2_stage5_L2(h2))
h2 = F.relu(self.Mconv3_stage5_L2(h2))
h2 = F.relu(self.Mconv4_stage5_L2(h2))
h2 = F.relu(self.Mconv5_stage5_L2(h2))
h2 = F.relu(self.Mconv6_stage5_L2(h2))
h2 = self.Mconv7_stage5_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
# stage6
h = F.concat((h1, h2, feature_map), axis=1) # channel concat
h1 = F.relu(self.Mconv1_stage6_L1(h)) # branch1
h1 = F.relu(self.Mconv2_stage6_L1(h1))
h1 = F.relu(self.Mconv3_stage6_L1(h1))
h1 = F.relu(self.Mconv4_stage6_L1(h1))
h1 = F.relu(self.Mconv5_stage6_L1(h1))
h1 = F.relu(self.Mconv6_stage6_L1(h1))
h1 = self.Mconv7_stage6_L1(h1)
h2 = F.relu(self.Mconv1_stage6_L2(h)) # branch2
h2 = F.relu(self.Mconv2_stage6_L2(h2))
h2 = F.relu(self.Mconv3_stage6_L2(h2))
h2 = F.relu(self.Mconv4_stage6_L2(h2))
h2 = F.relu(self.Mconv5_stage6_L2(h2))
h2 = F.relu(self.Mconv6_stage6_L2(h2))
h2 = self.Mconv7_stage6_L2(h2)
pafs.append(h1)
heatmaps.append(h2)
return pafs, heatmaps
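The `__call__` above repeats one pattern six times: concatenate both branch outputs with the shared feature map, then refine each branch and collect the intermediate predictions. A minimal sketch of that cascade with toy stand-ins for the convolution stages (`cascade`, `first`, and `refine` are illustrative names, not part of this model):

```python
def cascade(feature_map, stage_fns, first_stage_fn):
    """Run a CPM-style multi-stage cascade.

    first_stage_fn: maps feature_map -> (paf, heatmap)
    stage_fns: fns mapping (paf, heatmap, feature_map) -> (paf, heatmap),
               i.e. each later stage sees the previous stage's outputs
               alongside the shared backbone features.
    Returns per-stage outputs, mirroring the (pafs, heatmaps) lists above.
    """
    pafs, heatmaps = [], []
    h1, h2 = first_stage_fn(feature_map)
    pafs.append(h1)
    heatmaps.append(h2)
    for fn in stage_fns:
        h1, h2 = fn(h1, h2, feature_map)  # refine using previous outputs
        pafs.append(h1)
        heatmaps.append(h2)
    return pafs, heatmaps

# Toy stand-ins: values are strings tracking refinement depth.
first = lambda f: ('paf0', 'hm0')
refine = lambda p, h, f: (p + '+', h + '+')
pafs, heatmaps = cascade('features', [refine] * 5, first)
assert len(pafs) == len(heatmaps) == 6  # stage1 plus five refinements
```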
def _download_pretrained_model(model_type, | |
:param pulumi.Input[str] user_data_base64: Can be used instead of `user_data` to pass base64-encoded binary data directly. Use this instead of `user_data` whenever the value is not a valid UTF-8 string. For example, gzip-encoded user data must be base64-encoded and passed via this argument to avoid corruption.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] volume_tags: A map of tags to assign, at instance-creation time, to root and EBS volumes.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vpc_security_group_ids: A list of security group IDs to associate with.
"""
if ami is not None:
pulumi.set(__self__, "ami", ami)
if associate_public_ip_address is not None:
pulumi.set(__self__, "associate_public_ip_address", associate_public_ip_address)
if availability_zone is not None:
pulumi.set(__self__, "availability_zone", availability_zone)
if capacity_reservation_specification is not None:
pulumi.set(__self__, "capacity_reservation_specification", capacity_reservation_specification)
if cpu_core_count is not None:
pulumi.set(__self__, "cpu_core_count", cpu_core_count)
if cpu_threads_per_core is not None:
pulumi.set(__self__, "cpu_threads_per_core", cpu_threads_per_core)
if credit_specification is not None:
pulumi.set(__self__, "credit_specification", credit_specification)
if disable_api_termination is not None:
pulumi.set(__self__, "disable_api_termination", disable_api_termination)
if ebs_block_devices is not None:
pulumi.set(__self__, "ebs_block_devices", ebs_block_devices)
if ebs_optimized is not None:
pulumi.set(__self__, "ebs_optimized", ebs_optimized)
if enclave_options is not None:
pulumi.set(__self__, "enclave_options", enclave_options)
if ephemeral_block_devices is not None:
pulumi.set(__self__, "ephemeral_block_devices", ephemeral_block_devices)
if get_password_data is not None:
pulumi.set(__self__, "get_password_data", get_password_data)
if hibernation is not None:
pulumi.set(__self__, "hibernation", hibernation)
if host_id is not None:
pulumi.set(__self__, "host_id", host_id)
if iam_instance_profile is not None:
pulumi.set(__self__, "iam_instance_profile", iam_instance_profile)
if instance_initiated_shutdown_behavior is not None:
pulumi.set(__self__, "instance_initiated_shutdown_behavior", instance_initiated_shutdown_behavior)
if instance_type is not None:
pulumi.set(__self__, "instance_type", instance_type)
if ipv6_address_count is not None:
pulumi.set(__self__, "ipv6_address_count", ipv6_address_count)
if ipv6_addresses is not None:
pulumi.set(__self__, "ipv6_addresses", ipv6_addresses)
if key_name is not None:
pulumi.set(__self__, "key_name", key_name)
if launch_template is not None:
pulumi.set(__self__, "launch_template", launch_template)
if metadata_options is not None:
pulumi.set(__self__, "metadata_options", metadata_options)
if monitoring is not None:
pulumi.set(__self__, "monitoring", monitoring)
if network_interfaces is not None:
pulumi.set(__self__, "network_interfaces", network_interfaces)
if placement_group is not None:
pulumi.set(__self__, "placement_group", placement_group)
if placement_partition_number is not None:
pulumi.set(__self__, "placement_partition_number", placement_partition_number)
if private_ip is not None:
pulumi.set(__self__, "private_ip", private_ip)
if root_block_device is not None:
pulumi.set(__self__, "root_block_device", root_block_device)
if secondary_private_ips is not None:
pulumi.set(__self__, "secondary_private_ips", secondary_private_ips)
if security_groups is not None:
warnings.warn("""Use of `securityGroups` is discouraged as it does not allow for changes and will force your instance to be replaced if changes are made. To avoid this, use `vpcSecurityGroupIds` which allows for updates.""", DeprecationWarning)
pulumi.log.warn("""security_groups is deprecated: Use of `securityGroups` is discouraged as it does not allow for changes and will force your instance to be replaced if changes are made. To avoid this, use `vpcSecurityGroupIds` which allows for updates.""")
if security_groups is not None:
pulumi.set(__self__, "security_groups", security_groups)
if source_dest_check is not None:
pulumi.set(__self__, "source_dest_check", source_dest_check)
if subnet_id is not None:
pulumi.set(__self__, "subnet_id", subnet_id)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if tenancy is not None:
pulumi.set(__self__, "tenancy", tenancy)
if user_data is not None:
pulumi.set(__self__, "user_data", user_data)
if user_data_base64 is not None:
pulumi.set(__self__, "user_data_base64", user_data_base64)
if volume_tags is not None:
pulumi.set(__self__, "volume_tags", volume_tags)
if vpc_security_group_ids is not None:
pulumi.set(__self__, "vpc_security_group_ids", vpc_security_group_ids)
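The constructor above applies the same guard roughly forty times: record an argument only when the caller actually supplied it, so unset fields stay absent instead of becoming explicit `None`s. A distilled sketch of that pattern, with a plain dict standing in for `pulumi.set` (the `set_provided` helper is illustrative, not part of the SDK):

```python
def set_provided(store, **kwargs):
    """Record only explicitly supplied (non-None) arguments, mirroring
    the repeated `if x is not None: pulumi.set(...)` guards above."""
    for key, value in kwargs.items():
        if value is not None:
            store[key] = value
    return store

args = set_provided({}, ami='ami-123', instance_type=None)
assert args == {'ami': 'ami-123'}  # instance_type stays absent, not None
```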
@property
@pulumi.getter
def ami(self) -> Optional[pulumi.Input[str]]:
"""
AMI to use for the instance. Required unless `launch_template` is specified and the Launch Template specifies an AMI. If an AMI is specified in the Launch Template, setting `ami` will override the AMI specified in the Launch Template.
"""
return pulumi.get(self, "ami")
@ami.setter
def ami(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "ami", value)
@property
@pulumi.getter(name="associatePublicIpAddress")
def associate_public_ip_address(self) -> Optional[pulumi.Input[bool]]:
"""
Whether to associate a public IP address with an instance in a VPC.
"""
return pulumi.get(self, "associate_public_ip_address")
@associate_public_ip_address.setter
def associate_public_ip_address(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "associate_public_ip_address", value)
@property
@pulumi.getter(name="availabilityZone")
def availability_zone(self) -> Optional[pulumi.Input[str]]:
"""
AZ to start the instance in.
"""
return pulumi.get(self, "availability_zone")
@availability_zone.setter
def availability_zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "availability_zone", value)
@property
@pulumi.getter(name="capacityReservationSpecification")
def capacity_reservation_specification(self) -> Optional[pulumi.Input['InstanceCapacityReservationSpecificationArgs']]:
"""
Describes an instance's Capacity Reservation targeting option. See Capacity Reservation Specification below for more details.
"""
return pulumi.get(self, "capacity_reservation_specification")
@capacity_reservation_specification.setter
def capacity_reservation_specification(self, value: Optional[pulumi.Input['InstanceCapacityReservationSpecificationArgs']]):
pulumi.set(self, "capacity_reservation_specification", value)
@property
@pulumi.getter(name="cpuCoreCount")
def cpu_core_count(self) -> Optional[pulumi.Input[int]]:
"""
Sets the number of CPU cores for an instance. This option is only supported at instance creation, and only for instance types that support CPU options [CPU Cores and Threads Per CPU Core Per Instance Type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html#cpu-options-supported-instances-values) - specifying it for unsupported instance types will return an error from the EC2 API.
"""
return pulumi.get(self, "cpu_core_count")
@cpu_core_count.setter
def cpu_core_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "cpu_core_count", value)
@property
@pulumi.getter(name="cpuThreadsPerCore")
def cpu_threads_per_core(self) -> Optional[pulumi.Input[int]]:
"""
If set to 1, hyperthreading is disabled on the launched instance. Defaults to 2 if not set. See [Optimizing CPU Options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) for more information.
"""
return pulumi.get(self, "cpu_threads_per_core")
@cpu_threads_per_core.setter
def cpu_threads_per_core(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "cpu_threads_per_core", value)
@property
@pulumi.getter(name="creditSpecification")
def credit_specification(self) -> Optional[pulumi.Input['InstanceCreditSpecificationArgs']]:
"""
Configuration block for customizing the credit specification of the instance. See Credit Specification below for more details. The provider only performs drift detection of this value when it is present in a configuration. Removing this configuration from an existing instance only stops managing it; it does not change the configuration back to the default for the instance type.
"""
return pulumi.get(self, "credit_specification")
@credit_specification.setter
def credit_specification(self, value: Optional[pulumi.Input['InstanceCreditSpecificationArgs']]):
pulumi.set(self, "credit_specification", value)
@property
@pulumi.getter(name="disableApiTermination")
def disable_api_termination(self) -> Optional[pulumi.Input[bool]]:
"""
If true, enables [EC2 Instance Termination Protection](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination).
"""
return pulumi.get(self, "disable_api_termination")
@disable_api_termination.setter
def disable_api_termination(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "disable_api_termination", value)
@property
@pulumi.getter(name="ebsBlockDevices")
def ebs_block_devices(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InstanceEbsBlockDeviceArgs']]]]:
"""
One or more configuration blocks with additional EBS block devices to attach to the instance. Block device configurations only apply on resource creation. See Block Devices below for details on attributes and drift detection. When accessing this as an attribute reference, it is a set of objects.
"""
return pulumi.get(self, "ebs_block_devices")
@ebs_block_devices.setter
def ebs_block_devices(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InstanceEbsBlockDeviceArgs']]]]):
pulumi.set(self, "ebs_block_devices", value)
@property
@pulumi.getter(name="ebsOptimized")
def ebs_optimized(self) -> Optional[pulumi.Input[bool]]:
"""
If true, the launched EC2 instance will be EBS-optimized. Note that if this is not set on an instance type that is EBS-optimized by default, the attribute will show as disabled; for such instance types there is no need to set it, and disabling it has no effect. See the [EBS Optimized section](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html) of the AWS User Guide for more information.
"""
return pulumi.get(self, "ebs_optimized")
@ebs_optimized.setter
def ebs_optimized(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "ebs_optimized", value)
@property
@pulumi.getter(name="enclaveOptions")
def enclave_options(self) -> Optional[pulumi.Input['InstanceEnclaveOptionsArgs']]:
"""
Enable Nitro Enclaves on launched instances. See Enclave Options below for more details.
"""
return pulumi.get(self, "enclave_options")
@enclave_options.setter
def enclave_options(self, value: Optional[pulumi.Input['InstanceEnclaveOptionsArgs']]):
pulumi.set(self, "enclave_options", value)
@property
@pulumi.getter(name="ephemeralBlockDevices")
def ephemeral_block_devices(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['InstanceEphemeralBlockDeviceArgs']]]]:
"""
One or more configuration blocks to customize Ephemeral (also known as "Instance Store") volumes on the instance. See Block Devices below for details. When accessing this as an attribute reference, it is a set of objects.
"""
return pulumi.get(self, "ephemeral_block_devices")
@ephemeral_block_devices.setter
def ephemeral_block_devices(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['InstanceEphemeralBlockDeviceArgs']]]]):
pulumi.set(self, "ephemeral_block_devices", value)
@property
@pulumi.getter(name="getPasswordData")
def get_password_data(self) -> Optional[pulumi.Input[bool]]:
"""
If true, wait for password data to become available and retrieve it. Useful for getting the administrator password for instances running Microsoft Windows. The password data is exported to the `password_data` attribute. See [GetPasswordData](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetPasswordData.html) for more information.
"""
return pulumi.get(self, "get_password_data")
@get_password_data.setter
def get_password_data(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "get_password_data", value)
@property
@pulumi.getter
def hibernation(self) -> Optional[pulumi.Input[bool]]:
"""
If true, the launched EC2 instance will support hibernation.
"""
return pulumi.get(self, "hibernation")
@hibernation.setter
def hibernation(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "hibernation", value)
@property
@pulumi.getter(name="hostId")
def host_id(self) -> Optional[pulumi.Input[str]]:
"""
ID of a dedicated host that the instance will be assigned to. Use when an instance is to be launched on a specific dedicated host.
"""
return pulumi.get(self, "host_id")
@host_id.setter
def host_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host_id", value)
@property
@pulumi.getter(name="iamInstanceProfile")
def iam_instance_profile(self) -> Optional[pulumi.Input[str]]:
"""
IAM Instance Profile to launch the instance with. Specified as the name of the Instance Profile. Ensure your credentials have the correct permission to assign the instance profile according to the [EC2 documentation](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#roles-usingrole-ec2instance-permissions), notably `iam:PassRole`.
"""
return pulumi.get(self, "iam_instance_profile")
@iam_instance_profile.setter
def iam_instance_profile(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "iam_instance_profile", value)
@property
@pulumi.getter(name="instanceInitiatedShutdownBehavior")
def instance_initiated_shutdown_behavior(self) -> Optional[pulumi.Input[str]]:
"""
Shutdown behavior for the instance. Amazon defaults this to `stop` for EBS-backed instances and `terminate` for instance-store instances. Cannot be set on instance-store instances. See | |
# -*- coding: utf-8 -*-
# Copyright (c) 2013, <NAME>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
#
# * The names of the contributors may not be used to endorse or
# promote products derived from this software without specific
# prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""\
The ``namedutils`` module defines two lightweight container types:
:class:`namedtuple` and :class:`namedlist`. Both are subtypes of built-in
sequence types, which are very fast and efficient. They simply add
named attribute accessors for specific indexes within themselves.
The :class:`namedtuple` is identical to the built-in
:class:`collections.namedtuple`, with a couple of enhancements,
including a ``__repr__`` more suitable to inheritance.
The :class:`namedlist` is the mutable counterpart to the
:class:`namedtuple`, and is much faster and lighter-weight than
full-blown :class:`object`. Consider this if you're implementing nodes
in a tree, graph, or other mutable data structure. If you want an even
skinnier approach, you'll probably have to look to C.
"""
from __future__ import print_function
import sys as _sys
try:
from collections import OrderedDict
except ImportError:
# backwards compatibility (2.6 has no OrderedDict)
OrderedDict = dict
from keyword import iskeyword as _iskeyword
from operator import itemgetter as _itemgetter
try:
basestring
def exec_(code, global_env):
    # Python 2 branch: the statement form of exec, hidden inside a
    # string so this module still parses under Python 3.
    exec("exec code in global_env")
except NameError:
basestring = (str, bytes) # Python 3 compat
def exec_(code, global_env):
exec(code, global_env)
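On Python 3 the `except NameError` branch runs, so `exec_` simply delegates to the built-in `exec`. A quick illustration of how the module uses it below to build classes from template strings (the `Pair` template here is a toy, not the real `_namedtuple_tmpl`):

```python
# Python 3 behaviour of the compat shim: exec_ delegates to exec.
def exec_(code, global_env):
    exec(code, global_env)

# Execute a class template in a throwaway namespace, the way
# namedtuple/namedlist below execute their generated class bodies.
ns = {'_tuple': tuple}
exec_("class Pair(_tuple):\n    x = 1", ns)
assert issubclass(ns['Pair'], tuple) and ns['Pair'].x == 1
```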
__all__ = ['namedlist', 'namedtuple']
# Tiny templates
_repr_tmpl = '{name}=%r'
_imm_field_tmpl = '''\
{name} = _property(_itemgetter({index:d}), doc='Alias for field {index:d}')
'''
_m_field_tmpl = '''\
{name} = _property(_itemgetter({index:d}), _itemsetter({index:d}), doc='Alias for field {index:d}')
'''
#################################################################
### namedtuple
#################################################################
_namedtuple_tmpl = '''\
class {typename}(tuple):
'{typename}({arg_list})'
__slots__ = ()
_fields = {field_names!r}
def __new__(_cls, {arg_list}): # TODO: tweak sig to make more extensible
'Create new instance of {typename}({arg_list})'
return _tuple.__new__(_cls, ({arg_list}))
@classmethod
def _make(cls, iterable, new=_tuple.__new__, len=len):
'Make a new {typename} object from a sequence or iterable'
result = new(cls, iterable)
if len(result) != {num_fields:d}:
raise TypeError('Expected {num_fields:d}'
' arguments, got %d' % len(result))
return result
def __repr__(self):
'Return a nicely formatted representation string'
tmpl = self.__class__.__name__ + '({repr_fmt})'
return tmpl % self
def _asdict(self):
'Return a new OrderedDict which maps field names to their values'
return OrderedDict(zip(self._fields, self))
def _replace(_self, **kwds):
'Return a new {typename} object replacing field(s) with new values'
result = _self._make(map(kwds.pop, {field_names!r}, _self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
'Return self as a plain tuple. Used by copy and pickle.'
return tuple(self)
__dict__ = _property(_asdict)
def __getstate__(self):
'Exclude the OrderedDict from pickling' # wat
pass
{field_defs}
'''
def namedtuple(typename, field_names, verbose=False, rename=False):
"""Returns a new subclass of tuple with named fields.
>>> Point = namedtuple('Point', ['x', 'y'])
>>> Point.__doc__ # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22) # instantiate with pos args or keywords
>>> p[0] + p[1] # indexable like a plain tuple
33
>>> x, y = p # unpack like a regular tuple
>>> x, y
(11, 22)
>>> p.x + p.y # fields also accessible by name
33
>>> d = p._asdict() # convert to a dictionary
>>> d['x']
11
>>> Point(**d) # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100) # _replace() is like str.replace() but targets named fields
Point(x=100, y=22)
"""
# Validate the field names. At the user's option, either generate an error
# message or automatically replace the field name with a valid name.
if isinstance(field_names, basestring):
field_names = field_names.replace(',', ' ').split()
field_names = [str(x) for x in field_names]
if rename:
seen = set()
for index, name in enumerate(field_names):
if (not all(c.isalnum() or c == '_' for c in name)
or _iskeyword(name)
or not name
or name[0].isdigit()
or name.startswith('_')
or name in seen):
field_names[index] = '_%d' % index
seen.add(name)
for name in [typename] + field_names:
if not all(c.isalnum() or c == '_' for c in name):
raise ValueError('Type names and field names can only contain '
'alphanumeric characters and underscores: %r'
% name)
if _iskeyword(name):
raise ValueError('Type names and field names cannot be a '
'keyword: %r' % name)
if name[0].isdigit():
raise ValueError('Type names and field names cannot start with '
'a number: %r' % name)
seen = set()
for name in field_names:
if name.startswith('_') and not rename:
raise ValueError('Field names cannot start with an underscore: '
'%r' % name)
if name in seen:
raise ValueError('Encountered duplicate field name: %r' % name)
seen.add(name)
# Fill-in the class template
fmt_kw = {'typename': typename}
fmt_kw['field_names'] = tuple(field_names)
fmt_kw['num_fields'] = len(field_names)
fmt_kw['arg_list'] = repr(tuple(field_names)).replace("'", "")[1:-1]
fmt_kw['repr_fmt'] = ', '.join(_repr_tmpl.format(name=name)
for name in field_names)
fmt_kw['field_defs'] = '\n'.join(_imm_field_tmpl.format(index=index, name=name)
for index, name in enumerate(field_names))
class_definition = _namedtuple_tmpl.format(**fmt_kw)
if verbose:
print(class_definition)
# Execute the template string in a temporary namespace and support
# tracing utilities by setting a value for frame.f_globals['__name__']
namespace = dict(_itemgetter=_itemgetter,
__name__='namedtuple_%s' % typename,
OrderedDict=OrderedDict,
_property=property,
_tuple=tuple)
try:
exec_(class_definition, namespace)
except SyntaxError as e:
raise SyntaxError(e.message + ':\n' + class_definition)
result = namespace[typename]
# For pickling to work, the __module__ variable needs to be set to the frame
# where the named tuple is created. Bypass this step in environments where
# sys._getframe is not defined (Jython for example) or sys._getframe is not
# defined for arguments greater than 0 (IronPython).
try:
frame = _sys._getframe(1)
result.__module__ = frame.f_globals.get('__name__', '__main__')
except (AttributeError, ValueError):
pass
return result
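The `rename` branch of the validation loop above mirrors the stdlib; its effect is easy to check with `collections.namedtuple`, which applies the same `_<index>` substitution for keyword and duplicate field names:

```python
from collections import namedtuple

# rename=True replaces invalid or duplicate field names with _<index>,
# exactly like the loop above that writes '_%d' % index:
# 'class' is a keyword -> '_1'; the second 'id' is a duplicate -> '_2'.
Row = namedtuple('Row', ['id', 'class', 'id'], rename=True)
assert Row._fields == ('id', '_1', '_2')
r = Row(1, 2, 3)
assert r.id == 1 and r._1 == 2
```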
#################################################################
### namedlist
#################################################################
_namedlist_tmpl = '''\
class {typename}(list):
'{typename}({arg_list})'
__slots__ = ()
_fields = {field_names!r}
def __new__(_cls, {arg_list}): # TODO: tweak sig to make more extensible
'Create new instance of {typename}({arg_list})'
return _list.__new__(_cls, ({arg_list}))
def __init__(self, {arg_list}): # tuple didn't need this but list does
return _list.__init__(self, ({arg_list}))
@classmethod
def _make(cls, iterable, new=_list, len=len):
'Make a new {typename} object from a sequence or iterable'
# why did this function exist? why not just star the
# iterable like below?
result = cls(*iterable)
if len(result) != {num_fields:d}:
raise TypeError('Expected {num_fields:d} arguments,'
' got %d' % len(result))
return result
def __repr__(self):
'Return a nicely formatted representation string'
tmpl = self.__class__.__name__ + '({repr_fmt})'
return tmpl % tuple(self)
def _asdict(self):
'Return a new OrderedDict which maps field names to their values'
return OrderedDict(zip(self._fields, self))
def _replace(_self, **kwds):
'Return a new {typename} object replacing field(s) with new values'
result = _self._make(map(kwds.pop, {field_names!r}, _self))
if kwds:
raise ValueError('Got unexpected field names: %r' % kwds.keys())
return result
def __getnewargs__(self):
'Return self as a plain list. Used by copy and pickle.'
return tuple(self)
__dict__ = _property(_asdict)
def __getstate__(self):
'Exclude the OrderedDict from pickling' # wat
pass
{field_defs}
'''
def namedlist(typename, field_names, verbose=False, rename=False):
"""Returns a new subclass of list with named fields.
>>> Point = namedlist('Point', ['x', 'y'])
>>> Point.__doc__ # docstring for the new class
'Point(x, y)'
>>> p = Point(11, y=22) # instantiate with pos args or keywords
>>> p[0] + p[1] # indexable like a plain list
33
>>> x, y = p # unpack like a regular list
>>> x, y
(11, 22)
>>> p.x + p.y # fields also accessible by name
33
>>> d = p._asdict() # convert to a dictionary
>>> d['x']
11
>>> Point(**d) # convert from a dictionary
Point(x=11, y=22)
>>> p._replace(x=100) # _replace() is like | |
# coding: utf-8
"""
Director API
This is the oSparc's director API # noqa: E501
OpenAPI spec version: 0.1.0
Contact: <EMAIL>
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from simcore_director_sdk.api_client import ApiClient
class UsersApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def root_get(self, **kwargs): # noqa: E501
"""Service health-check endpoint # noqa: E501
Some general information on the API and state of the service behind # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.root_get(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: InlineResponse200
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.root_get_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.root_get_with_http_info(**kwargs) # noqa: E501
return data
def root_get_with_http_info(self, **kwargs): # noqa: E501
"""Service health-check endpoint # noqa: E501
Some general information on the API and state of the service behind # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.root_get_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:return: InlineResponse200
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method root_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse200', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
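Every generated method repeats the same unexpected-keyword check before building the request. A distilled, standalone sketch of that validation (`validate_kwargs` is an illustrative name, not part of the generated SDK):

```python
def validate_kwargs(method_name, kwargs, accepted):
    """Reject unexpected keyword arguments, mirroring the generated check."""
    # These transport options are accepted by every generated method.
    extras = ('async_req', '_return_http_data_only',
              '_preload_content', '_request_timeout')
    allowed = set(accepted) | set(extras)
    for key in kwargs:
        if key not in allowed:
            raise TypeError(
                "Got an unexpected keyword argument '%s'"
                " to method %s" % (key, method_name))

validate_kwargs('root_get', {'async_req': True}, [])  # accepted
try:
    validate_kwargs('root_get', {'bogus': 1}, [])
except TypeError as e:
    assert 'bogus' in str(e)
```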
def running_interactive_services_delete(self, service_uuid, **kwargs): # noqa: E501
"""Stops and removes an interactive service from the oSparc platform # noqa: E501
Stops and removes an interactive service from the oSparc platform # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_delete(service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str service_uuid: The uuid of the service (required)
:return: InlineResponse204
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.running_interactive_services_delete_with_http_info(service_uuid, **kwargs) # noqa: E501
else:
(data) = self.running_interactive_services_delete_with_http_info(service_uuid, **kwargs) # noqa: E501
return data
def running_interactive_services_delete_with_http_info(self, service_uuid, **kwargs): # noqa: E501
"""Stops and removes an interactive service from the oSparc platform # noqa: E501
Stops and removes an interactive service from the oSparc platform # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_delete_with_http_info(service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str service_uuid: The uuid of the service (required)
:return: InlineResponse204
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['service_uuid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method running_interactive_services_delete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'service_uuid' is set
if ('service_uuid' not in local_var_params or
local_var_params['service_uuid'] is None):
raise ValueError("Missing the required parameter `service_uuid` when calling `running_interactive_services_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'service_uuid' in local_var_params:
path_params['service_uuid'] = local_var_params['service_uuid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/running_interactive_services/{service_uuid}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse204', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
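Every generated wrapper above follows the same dispatch pattern: force `_return_http_data_only`, then either hand back the in-flight call (async) or unwrap the payload (sync). A minimal standalone sketch of that pattern, with a hypothetical `_call_with_http_info` stand-in instead of the real API client:

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=2)

def _call_with_http_info(service_uuid, **kwargs):
    # Stand-in for a generated *_with_http_info method: pretend the
    # server answered with a small payload for the given uuid.
    payload = {'service_uuid': service_uuid, 'status': 'deleted'}
    if kwargs.get('async_req'):
        # Async path: return a future the caller can resolve later
        # (the real client returns a thread with a .get() method).
        return _pool.submit(lambda: payload)
    return payload

def delete_service(service_uuid, **kwargs):
    # Mirrors the generated wrapper: force data-only return, then either
    # return the pending call (async) or unwrap the data (sync).
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return _call_with_http_info(service_uuid, **kwargs)
    data = _call_with_http_info(service_uuid, **kwargs)
    return data

sync_result = delete_service('abc-123')
future = delete_service('abc-123', async_req=True)
async_result = future.result()
```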
def running_interactive_services_get(self, service_uuid, **kwargs): # noqa: E501
"""Successfully returns if a service with the defined uuid is up and running # noqa: E501
Successfully returns if a service with the defined uuid is up and running # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_get(service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str service_uuid: The uuid of the service (required)
:return: InlineResponse201
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.running_interactive_services_get_with_http_info(service_uuid, **kwargs) # noqa: E501
else:
(data) = self.running_interactive_services_get_with_http_info(service_uuid, **kwargs) # noqa: E501
return data
def running_interactive_services_get_with_http_info(self, service_uuid, **kwargs): # noqa: E501
"""Successfully returns if a service with the defined uuid is up and running # noqa: E501
Successfully returns if a service with the defined uuid is up and running # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_get_with_http_info(service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str service_uuid: The uuid of the service (required)
:return: InlineResponse201
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['service_uuid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method running_interactive_services_get" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'service_uuid' is set
if ('service_uuid' not in local_var_params or
local_var_params['service_uuid'] is None):
raise ValueError("Missing the required parameter `service_uuid` when calling `running_interactive_services_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'service_uuid' in local_var_params:
path_params['service_uuid'] = local_var_params['service_uuid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/running_interactive_services/{service_uuid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='InlineResponse201', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def running_interactive_services_post(self, user_id, project_id, service_key, service_uuid, **kwargs): # noqa: E501
"""Starts an interactive service in the oSparc platform # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_post(user_id, project_id, service_key, service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_id: The ID of the user that starts the service (required)
:param str project_id: The ID of the project in which the service starts (required)
:param str service_key: The key (url) of the service (required)
:param str service_uuid: The uuid to assign the service with (required)
:param str service_tag: The tag/version of the service
:param str service_basepath: predefined basepath for the backend service; if omitted, the root path is used
:return: InlineResponse201
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.running_interactive_services_post_with_http_info(user_id, project_id, service_key, service_uuid, **kwargs) # noqa: E501
else:
(data) = self.running_interactive_services_post_with_http_info(user_id, project_id, service_key, service_uuid, **kwargs) # noqa: E501
return data
def running_interactive_services_post_with_http_info(self, user_id, project_id, service_key, service_uuid, **kwargs): # noqa: E501
"""Starts an interactive service in the oSparc platform # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.running_interactive_services_post_with_http_info(user_id, project_id, service_key, service_uuid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_id: The ID of the user that starts the service (required)
:param str project_id: The ID of the project in which the service starts (required)
:param str service_key: The key (url) of the service (required)
:param | |
and agg_data['percent_idle'] < 0:
return self.STATUS_BAD_NO_PINGS
if agg_data['tasks_done'] == 0 and agg_data['percent_idle'] < 1:
return self.STATUS_WARN_LONG_TASK # this case should self-heal, since a hard kill should be triggered
if agg_data['tasks_done'] == 0 and agg_data['percent_idle'] > 99:
return self.STATUS_OK_IDLE
return self.STATUS_OK
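The status rules above can be exercised in isolation. Below is a hedged restatement of the decision table as a free function; the constant values are assumptions mirroring the class attributes, and `percent_idle < 0` corresponds to the `-1` sentinel that `aggregate_pings` returns when no pings are stored:

```python
STATUS_BAD_NO_PINGS = 'bad_no_pings'
STATUS_WARN_LONG_TASK = 'warn_long_task'
STATUS_OK_IDLE = 'ok_idle'
STATUS_OK = 'ok'

def ping_status(agg_data):
    # percent_idle < 0 means "no pings stored" (the -1 sentinel)
    if agg_data['tasks_done'] == 0 and agg_data['percent_idle'] < 0:
        return STATUS_BAD_NO_PINGS
    if agg_data['tasks_done'] == 0 and agg_data['percent_idle'] < 1:
        return STATUS_WARN_LONG_TASK  # should self-heal via hard kill
    if agg_data['tasks_done'] == 0 and agg_data['percent_idle'] > 99:
        return STATUS_OK_IDLE
    return STATUS_OK
```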
@cache(wait_sec=5)
def aggregate_pings(self, interval=None):
tnow = time.time()
if interval is None: # if interval not given we aggregate all stored pings
interval = (tnow - self.stored_pings[0]['timestamp']) if self.stored_pings else 0
agg_data = {'tasks_per_sec': -1, 'tasks_per_min': -1, 'percent_idle': -1, 'interval': interval,
'tasks_done': -1, 'pings_received': -1, 'average_task_duration': 0}
pings = [p for p in self.stored_pings if tnow - p['timestamp'] <= interval]
if pings:
agg_data['tasks_done'] = sum([p['tasks_done'] for p in pings])
agg_data['tasks_per_sec'] = round(float(agg_data['tasks_done']) / interval, FLOAT_DIGITS)
agg_data['tasks_per_min'] = round((float(agg_data['tasks_done']) / interval) * 60, FLOAT_DIGITS)
agg_data['percent_idle'] = round(float(sum([p['percent_idle'] for p in pings])) / len(pings), FLOAT_DIGITS)
agg_data['pings_received'] = len(pings)
if agg_data['tasks_done'] > 0:
agg_data['average_task_duration'] = sum([p['task_duration'] for p in pings]
) / float(agg_data['tasks_done'])
return agg_data
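The aggregation logic of `aggregate_pings` can be checked on a fixed sample. This is a standalone sketch of the same computation (assumptions: `FLOAT_DIGITS` is a module-level rounding constant, and `tnow` is made injectable for determinism):

```python
import time

FLOAT_DIGITS = 3  # assumption: module-level rounding constant

def aggregate_pings(stored_pings, interval=None, tnow=None):
    tnow = tnow if tnow is not None else time.time()
    if interval is None:  # aggregate everything stored
        interval = (tnow - stored_pings[0]['timestamp']) if stored_pings else 0
    agg = {'tasks_per_sec': -1, 'tasks_per_min': -1, 'percent_idle': -1,
           'interval': interval, 'tasks_done': -1, 'pings_received': -1,
           'average_task_duration': 0}
    # keep only pings inside the requested time window
    pings = [p for p in stored_pings if tnow - p['timestamp'] <= interval]
    if pings:
        agg['tasks_done'] = sum(p['tasks_done'] for p in pings)
        agg['tasks_per_sec'] = round(agg['tasks_done'] / float(interval), FLOAT_DIGITS)
        agg['tasks_per_min'] = round(agg['tasks_done'] * 60.0 / interval, FLOAT_DIGITS)
        agg['percent_idle'] = round(sum(p['percent_idle'] for p in pings) / float(len(pings)), FLOAT_DIGITS)
        agg['pings_received'] = len(pings)
        if agg['tasks_done'] > 0:
            agg['average_task_duration'] = sum(p['task_duration'] for p in pings) / float(agg['tasks_done'])
    return agg

pings = [{'timestamp': 90, 'tasks_done': 4, 'percent_idle': 20, 'task_duration': 8.0},
         {'timestamp': 95, 'tasks_done': 6, 'percent_idle': 40, 'task_duration': 12.0}]
agg = aggregate_pings(pings, interval=20, tnow=100)
```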
@cache(wait_sec=5)
def aggregate_events(self, interval=None):
def sum_repeats(events):
return sum([e['repeats'] for e in events])
def group_by_origin(events):
events_by_origin = defaultdict(list)
for e in events:
events_by_origin[e['origin']].append(e)
return events_by_origin
if interval is None: # if interval not given we aggregate all stored events
interval = (time.time() - self.stored_events[0]['timestamp']) if self.stored_events else 0
all_events = self.get_events(interval=interval)
actions = self.get_events(event_type=ProcessPlus.EVENT_TYPE_ACTION, interval=interval)
errors = self.get_events(event_type=ProcessPlus.EVENT_TYPE_ERROR, interval=interval)
return {
'interval': interval,
'totals': {
'events': sum_repeats(all_events),
'actions': sum_repeats(actions),
'errors': sum_repeats(errors),
},
'by_origin': {
'events': {origin: sum_repeats(elist) for origin, elist in group_by_origin(all_events).items()},
'actions': {origin: sum_repeats(elist) for origin, elist in group_by_origin(actions).items()},
'errors': {origin: sum_repeats(elist) for origin, elist in group_by_origin(errors).items()},
},
}
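The two helpers inside `aggregate_events` carry the whole aggregation: `sum_repeats` totals the repeat counters and `group_by_origin` buckets events by their source. A self-contained sketch of the per-origin totals they produce:

```python
from collections import defaultdict

def sum_repeats(events):
    return sum(e['repeats'] for e in events)

def group_by_origin(events):
    by_origin = defaultdict(list)
    for e in events:
        by_origin[e['origin']].append(e)
    return by_origin

events = [{'origin': 'worker-1', 'repeats': 3},
          {'origin': 'worker-2', 'repeats': 1},
          {'origin': 'worker-1', 'repeats': 2}]
# per-origin totals, as built for the 'by_origin' section of the result
totals = {origin: sum_repeats(elist)
          for origin, elist in group_by_origin(events).items()}
```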
@cache(wait_sec=5)
def get_ping_counts(self, intervals=()):
intervals = intervals or self.default_ping_count_intervals
return {str(timedelta(seconds=ts)): self.aggregate_pings(interval=ts) for ts in intervals}
@cache(wait_sec=5)
def get_event_counts(self, intervals=()):
intervals = intervals or self.default_event_count_intervals
return {str(timedelta(seconds=ts)): self.aggregate_events(interval=ts) for ts in intervals}
def is_monitored(self):
return self.has_flag(MONITOR_PING)
def _assert_valid_event(self, event):
try:
assert event['type'] in self.event_types, 'Unrecognized event type: {}'.format(event['type'])
assert event['repeats'] >= 1, 'Invalid repeat count: must be at least 1'
assert set(event.keys()) == set(self._event_template.keys()), 'Malformed data: {}'.format(event)
assert not [1 for v in event.values() if v is None], 'event {} with None value is not valid'.format(event)
except Exception as e:
self.logger.exception('Bad event: ')
raise AssertionError(str(e))
def _assert_valid_ping(self, data):
try:
assert set(data.keys()) == set(self._ping_template.keys()), 'Malformed data: {}'.format(data)
except Exception as e:
self.logger.exception('Bad ping: ')
raise AssertionError(str(e))
def start(self):
self.stats['start_time'] = time.time()
self.stats['start_time_str'] = self._time2str(self.stats['start_time'])
super(ProcessPlus, self).start()
def terminate_plus(self, kill_wait=0.5):
success = False
try:
if self.is_alive():
self.logger.info('Terminating process: %s', self.name)
self.terminate()
time.sleep(kill_wait)
if self.is_alive():
self.logger.warning('Sending SIGKILL to process with pid=%s', self.pid)
os.kill(int(self.pid), signal.SIGKILL)
time.sleep(0.1)
assert not self.is_alive(), 'Fatal: Process {} alive after SIGKILL'.format(self.name)
else:
self.logger.warning('Not terminating: process %s is not alive!', self.name)
self.abnormal_termination = True
self.stats = self._closed_stats()
success = True
except Exception:
self.logger.exception('Fatal exception: ')
self._rebel = True
return success
def _updated_stats(self):
stats = copy.deepcopy(self.stats)
stats['alive'] = self.is_alive()
stats['rebel'] = self.is_rebel()
stats['abnormal_termination'] = self.abnormal_termination
stats['t_running_secs'] = self.t_running_secs
stats['name'] = self.name
stats['pid'] = self.pid
return stats
def _closed_stats(self):
stats = self._updated_stats()
stats['stats_closed'] = True
stats['end_time'] = time.time()
stats['end_time_str'] = self._time2str(stats['end_time'])
stats['exitcode'] = self.exitcode
return stats
@classmethod
def _time2str(cls, seconds):
return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(seconds)) if seconds else ''
@classmethod
def _func2str(cls, t):
return pickle.dumps(t, protocol=0) if callable(t) else t
@classmethod
def _str2func(cls, str_t):
return pickle.loads(str_t)
def to_dict(self, serialize_all=False):
self.stats = self._updated_stats()
d = copy.deepcopy({fn: getattr(self, fn) for fn in self._pack_fields})
if serialize_all:
d = {fn: (self._func2str(v) if callable(v) else v) for fn, v in d.items()}
return d
def to_json(self):
return json.dumps(self.to_dict(serialize_all=True))
def __repr__(self):
return 'ProcessPlus(**{})'.format(self.to_dict(serialize_all=True))
def __str__(self):
return self.__repr__()
class ProcessGroup(IterableUserDict):
"""
Dict-like container of ProcessPlus objects: {process_name => process}
Perform simple operations on the collection.
"""
def __init__(self, group_name=None, default_target=None, default_args=None, default_kwargs=None,
default_flags=None, default_kill_wait=0.5, max_processes=1000, process_plus_impl=None):
self.group_name = group_name
self._defaults = dict(
target=default_target,
args=default_args,
kwargs=default_kwargs,
flags=self._curate_flags(default_flags),
kill_wait=default_kill_wait
)
self.max_processes = max_processes
if process_plus_impl:
assert issubclass(process_plus_impl, ProcessPlus)
self.ProcessPlusImpl = process_plus_impl or ProcessPlus
self.__limbo_group = None # Must access through property
self.__dead_group = None # Must access through property
self._num_keep_dead = 100
self._num_deleted_dead = 0
self.logger = logging.getLogger(__name__)
self._thread_action_loop = None
self.stop_action = True
self.action_loop_interval = 1 # seconds between each actions pass
IterableUserDict.__init__(self)
def _v_or_def(self, **kw):
assert len(kw) == 1, 'Wrong call, example of right use: self._v_or_def(kill_wait=10)'
k, v = next(iter(kw.items()))
return v if v not in (None, ()) else self._defaults.get(k)
@classmethod
def _curate_flags(cls, flags=None):
return flags2num(flags) if isinstance(flags, Iterable) else (flags or MONITOR_NONE)
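`_curate_flags` accepts either a single flag value or an iterable of flags, folding the latter into one integer via `flags2num` (defined elsewhere in the codebase). A hedged sketch of that folding, with hypothetical bit-mask constants standing in for the real `MONITOR_*` flags:

```python
from collections.abc import Iterable

MONITOR_NONE = 0        # assumption: flags are bit masks like these
MONITOR_PING = 1 << 0
MONITOR_EVENTS = 1 << 1

def flags2num(flags):
    # Fold an iterable of bit-mask flags into a single int with bitwise OR.
    num = MONITOR_NONE
    for f in flags:
        num |= f
    return num

def curate_flags(flags=None):
    return flags2num(flags) if isinstance(flags, Iterable) else (flags or MONITOR_NONE)

combined = curate_flags([MONITOR_PING, MONITOR_EVENTS])
```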
@property
def limbo_group(self):
if self.__limbo_group is None:
self.__limbo_group = ProcessGroup(group_name='limbo')
self.__limbo_group.stop_action_loop()
return self.__limbo_group
@property
def dead_group(self):
if self.__dead_group is None:
self.__dead_group = ProcessGroup(group_name='dead')
self.__dead_group.stop_action_loop()
return self.__dead_group
@property
def dead_stats(self):
return [proc.stats for proc in self.dead_group.values()]
def add(self, process):
self[process.name] = process
def spawn_process(self, target=None, args=None, kwargs=None, flags=None, **extra):
if len(self) >= self.max_processes:
raise Exception("maximum number of processes reached: {}".format(self.max_processes))
target = self._v_or_def(target=target)
args = self._v_or_def(args=args)
kwargs = self._v_or_def(kwargs=kwargs)
flags = self._curate_flags(self._v_or_def(flags=flags))
self.logger.debug('spawning process: target=%s, args=%s, kwargs=%s, flags=%s', repr(target), args, kwargs,
flags)
try:
proc = self.ProcessPlusImpl(target=target, args=args, kwargs=kwargs, flags=flags, **extra)
proc.start()
self.add(proc)
return proc.name
except Exception:
self.logger.exception("Spawn of process failed. Caught exception with details: ")
raise
def spawn_many(self, N, target=None, args=None, kwargs=None, flags=None):
self.logger.debug('spawn_many called with: target=%s, N=%s, args=%s, kwargs=%s, flags=%s', repr(target), N,
args, kwargs, flags)
n_success = 0
for i in range(N):
try:
self.spawn_process(target=target, args=args, kwargs=kwargs, flags=flags)
except Exception:
self.logger.exception('Failed to start process. Reason: ')
else:
n_success += 1
return n_success == N # TODO: better return n_success
def get_by_pid(self, pid):
for name, proc in self.items():
if proc.pid == pid:
return proc
self.logger.warning('pid=%s not found in group %s', pid, self.group_name)
return None
def get_by_name(self, proc_name):
proc = self.get(proc_name)
if not proc:
self.logger.warning('proc_name=%s not found in group %s', proc_name, self.group_name)
return proc
def filtered(self, proc_names=(), pids=(), lambda_proc=None):
proc_dict = {proc.name: proc for proc in filter(None, map(self.get_by_name, proc_names))}
proc_dict.update({proc.name: proc for proc in filter(None, map(self.get_by_pid, pids))})
if lambda_proc and callable(lambda_proc):
proc_dict.update({proc.name: proc for proc in filter(lambda_proc, self.values())})
return proc_dict
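`filtered` builds the union of three selectors: explicit names, explicit pids, and an arbitrary predicate. The behavior is easy to verify on a plain dict with stub processes (the `Proc` stub below is an assumption, not the real `ProcessPlus`):

```python
class Proc:
    def __init__(self, name, pid, alive=True):
        self.name, self.pid, self._alive = name, pid, alive
    def is_alive(self):
        return self._alive

group = {p.name: p for p in [Proc('a', 11), Proc('b', 22, alive=False), Proc('c', 33)]}

def filtered(group, proc_names=(), pids=(), lambda_proc=None):
    # Union of three selectors, mirroring ProcessGroup.filtered
    out = {n: group[n] for n in proc_names if n in group}
    out.update({p.name: p for p in group.values() if p.pid in pids})
    if lambda_proc and callable(lambda_proc):
        out.update({p.name: p for p in group.values() if lambda_proc(p)})
    return out

sel = filtered(group, proc_names=('a',), pids=(22,))
dead = filtered(group, lambda_proc=lambda p: not p.is_alive())
```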
def terminate_process(self, proc_name, kill_wait=None):
kill_wait = self._v_or_def(kill_wait=kill_wait)
proc = self.pop(proc_name, None) # pop process out of dict to avoid race conditions with action_loop
if not proc:
raise Exception('Process {} not found'.format(proc_name))
try:
proc.terminate_plus(kill_wait)
self.dead_group.add(proc)
except Exception:
self.logger.exception('Fatal exception: ')
# adding proc to limbo to preserve it for second chance to kill
self.limbo_group.add(proc)
raise
def mark_for_termination(self, proc_names=(), pids=()):
for name, proc in self.filtered(proc_names=proc_names, pids=pids).items():
proc.mark_for_termination()
def add_ping(self, pid, data):
proc = self.get_by_pid(pid)
if proc:
proc.add_ping(data)
def add_events(self, pid, events):
proc = self.get_by_pid(pid) or self.dead_group.get_by_pid(pid)
if proc and events:
for ev in events:
proc.add_event(ev)
@cache(wait_sec=30)
def processes_view(self):
running_list = []
dead_list = []
for name, proc in self.items() + self.dead_group.items():
plist = running_list if proc.is_alive() else dead_list
plist.append(proc.to_dict(serialize_all=True))
return {
'running': running_list,
'dead': dead_list,
}
@cache(wait_sec=30)
def status_view(self, interval=None):
interval = interval or 60 * 5 # TODO add default value in constructor
total_running_processes = self.total_processes()
total_dead_processes = self.total_dead_processes()
total_monitored = self.total_monitored_processes()
total_tasks_done = 0
total_tasks_x_sec = 0
avg_percent_idle = 0
num_events = 0
num_actions = 0
num_errors = 0
idle_procs = []
for name, proc in self.items() + self.dead_group.items():
if proc.ping_status == proc.STATUS_OK_IDLE: # only alive & monitored procs may have STATUS_OK_IDLE
idle_procs.append({'name': name, 'pid': proc.pid})
# aggregations should include alive and dead processes, the inclusion is by time interval
if proc.is_monitored():
ping_agg = proc.aggregate_pings(interval=interval)
total_tasks_done += ping_agg['tasks_done'] if ping_agg['tasks_done'] > 0 else 0
avg_percent_idle += ping_agg['percent_idle'] if ping_agg['percent_idle'] > 0 else 0
event_agg = proc.aggregate_events(interval=interval)
num_events += event_agg['totals']['events']
num_actions += event_agg['totals']['actions']
num_errors += event_agg['totals']['errors']
total_tasks_x_sec = (total_tasks_done * 1.0) / interval if interval > 1e-2 else total_tasks_x_sec
avg_percent_idle = (avg_percent_idle * 1.0) / total_monitored if total_monitored else 0
return {
'idle': {
'interval': ProcessPlus.default_status_interval,
'num_idle_procs': len(idle_procs),
'idle_procs': idle_procs,
},
'totals': {
'interval': interval,
'total_processes': total_running_processes,
'total_dead_processes': total_dead_processes,
'total_monitored_processes': total_monitored,
'total_unmonitored_processes': total_running_processes - total_monitored,
'total_tasks_done': total_tasks_done,
'total_tasks_per_sec': round(total_tasks_x_sec, FLOAT_DIGITS),
'total_tasks_per_min': round(total_tasks_x_sec * 60, FLOAT_DIGITS),
'avg_percent_idle': round(avg_percent_idle, FLOAT_DIGITS),
},
'events': {
'interval': interval,
'num_events': num_events,
'num_actions': num_actions,
'num_errors': num_errors,
},
}
def total_processes(self):
return len(self)
def total_monitored_processes(self):
return len([name for name, proc in self.items() if proc.is_monitored()])
def total_dead_processes(self):
return len(self.dead_group) + self._num_deleted_dead
@cache(wait_sec=30)
def is_healthy(self):
num_ok, total = 0, | |
Zserio varuint16 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varuint16 value read from the bit stream.
"""
return reader.read_varuint16()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varuint16 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varuint16 type to write.
"""
writer.write_varuint16(value)
class VarUInt32ArrayTraits:
"""
Array traits for Zserio varuint32 type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varuint32 type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varuint32 type value.
:returns: Length of given Zserio varuint32 type in bits.
"""
return bitsizeof_varuint32(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varuint32 type.
:param bitposition: Current bit stream position.
:param value: Zserio varuint32 type value.
:returns: Updated bit stream position which points to the first bit after Zserio varuint32 type.
"""
return bitposition + VarUInt32ArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varuint32 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varuint32 value read from the bit stream.
"""
return reader.read_varuint32()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varuint32 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varuint32 type to write.
"""
writer.write_varuint32(value)
class VarUInt64ArrayTraits:
"""
Array traits for Zserio varuint64 type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varuint64 type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varuint64 type value.
:returns: Length of given Zserio varuint64 type in bits.
"""
return bitsizeof_varuint64(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varuint64 type.
:param bitposition: Current bit stream position.
:param value: Zserio varuint64 type value.
:returns: Updated bit stream position which points to the first bit after Zserio varuint64 type.
"""
return bitposition + VarUInt64ArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varuint64 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varuint64 value read from the bit stream.
"""
return reader.read_varuint64()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varuint64 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varuint64 type to write.
"""
writer.write_varuint64(value)
class VarUIntArrayTraits:
"""
Array traits for Zserio varuint type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varuint type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varuint type value.
:returns: Length of given Zserio varuint type in bits.
"""
return bitsizeof_varuint(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varuint type.
:param bitposition: Current bit stream position.
:param value: Zserio varuint type value.
:returns: Updated bit stream position which points to the first bit after Zserio varuint type.
"""
return bitposition + VarUIntArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varuint type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varuint value read from the bit stream.
"""
return reader.read_varuint()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varuint type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varuint type to write.
"""
writer.write_varuint(value)
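All of these traits delegate sizing to `bitsizeof_var*` helpers defined elsewhere. As a sketch of what such a helper computes, here is a size function for the unsigned 64-bit varuint under the usual Zserio rule (an assumption stated here, not shown in this file): each of the first eight bytes carries 7 value bits plus a has-next flag, and a ninth byte, when needed, carries a full 8 value bits:

```python
def bitsizeof_varuint(value):
    # Zserio varuint: up to 9 bytes; bytes 1-8 carry 7 value bits each
    # (plus a has-next flag), the 9th byte carries a full 8 value bits.
    if value < 0 or value >= 1 << 64:
        raise ValueError('varuint out of range: {}'.format(value))
    for num_bytes in range(1, 9):
        if value < 1 << (7 * num_bytes):
            return 8 * num_bytes
    return 8 * 9  # values wider than 56 bits take the full 9 bytes

def initialize_offsets(bitposition, value):
    # Mirrors the *ArrayTraits.initialize_offsets pattern above:
    # new position = old position + encoded size in bits.
    return bitposition + bitsizeof_varuint(value)
```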
class VarSizeArrayTraits:
"""
Array traits for Zserio varsize type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varsize type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varsize type value.
:returns: Length of given Zserio varsize type in bits.
"""
return bitsizeof_varsize(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varsize type.
:param bitposition: Current bit stream position.
:param value: Zserio varsize type value.
:returns: Updated bit stream position which points to the first bit after Zserio varsize type.
"""
return bitposition + VarSizeArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varsize type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varsize value read from the bit stream.
"""
return reader.read_varsize()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varsize type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varsize type to write.
"""
writer.write_varsize(value)
class VarInt16ArrayTraits:
"""
Array traits for Zserio varint16 type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varint16 type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varint16 type value.
:returns: Length of given Zserio varint16 type in bits.
"""
return bitsizeof_varint16(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varint16 type.
:param bitposition: Current bit stream position.
:param value: Zserio varint16 type value.
:returns: Updated bit stream position which points to the first bit after Zserio varint16 type.
"""
return bitposition + VarInt16ArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varint16 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varint16 value read from the bit stream.
"""
return reader.read_varint16()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varint16 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varint16 type to write.
"""
writer.write_varint16(value)
class VarInt32ArrayTraits:
"""
Array traits for Zserio varint32 type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varint32 type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varint32 type value.
:returns: Length of given Zserio varint32 type in bits.
"""
return bitsizeof_varint32(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varint32 type.
:param bitposition: Current bit stream position.
:param value: Zserio varint32 type value.
:returns: Updated bit stream position which points to the first bit after Zserio varint32 type.
"""
return bitposition + VarInt32ArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varint32 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varint32 value read from the bit stream.
"""
return reader.read_varint32()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varint32 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varint32 type to write.
"""
writer.write_varint32(value)
class VarInt64ArrayTraits:
"""
Array traits for Zserio varint64 type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varint64 type stored in the bit stream in bits.
:param _bitposition: Not used.
:param value: Zserio varint64 type value.
:returns: Length of given Zserio varint64 type in bits.
"""
return bitsizeof_varint64(value)
@staticmethod
def initialize_offsets(bitposition: int, value: int) -> int:
"""
Initializes indexed offsets for Zserio varint64 type.
:param bitposition: Current bit stream position.
:param value: Zserio varint64 type value.
:returns: Updated bit stream position which points to the first bit after Zserio varint64 type.
"""
return bitposition + VarInt64ArrayTraits.bitsizeof(bitposition, value)
@staticmethod
def read(reader: BitStreamReader, _index: int) -> int:
"""
Reads Zserio varint64 type from the bit stream.
:param reader: Bit stream from which to read.
:param _index: Not used.
:returns: Zserio varint64 value read from the bit stream.
"""
return reader.read_varint64()
@staticmethod
def write(writer: BitStreamWriter, value: int) -> None:
"""
Writes Zserio varint64 type to the bit stream.
:param writer: Bit stream where to write.
:param value: Zserio varint64 type to write.
"""
writer.write_varint64(value)
class VarIntArrayTraits:
"""
Array traits for Zserio varint type.
"""
HAS_BITSIZEOF_CONSTANT = False
@staticmethod
def bitsizeof(_bitposition: int, value: int) -> int:
"""
Returns length of Zserio varint type stored in the bit stream in bits.
:param | |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 9 16:53:49 2016
@author: abauville
"""
from math import pi, pow, exp
from InputDef import Frozen
import copy
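`Frozen` (imported from `InputDef`) is not shown in this file; `Material` subclasses it and whitelists its attribute names via `_Frozen__List`. The pattern, a class whose instances reject any attribute outside the whitelist, can be sketched as follows (an assumption about the mechanism; the real class may differ in detail):

```python
class Frozen(object):
    _Frozen__List = []  # subclasses whitelist their attribute names here

    def __setattr__(self, key, value):
        # Reject attributes not declared in the subclass whitelist,
        # catching typos like `self.frictionAngel = ...` at assignment time.
        if key not in self._Frozen__List:
            raise TypeError('{} is not a valid attribute of {}'.format(
                key, type(self).__name__))
        object.__setattr__(self, key, value)

class Demo(Frozen):
    _Frozen__List = ['rho0']

d = Demo()
d.rho0 = 2500.0
try:
    d.typo_attr = 1.0
    frozen_ok = False
except TypeError:
    frozen_ok = True
```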
class Material(Frozen):
_Frozen__List = ["name","material","cohesion","frictionAngle","dilationAngle","rho0","alpha","beta","k","G","perm0",
"isAir","isWater", "isRef","vDisl","vDiff","vPei","phiIni","use_dtMaxwellLimit",
"staticPfFac","staticPfFacWeakFac","cohesionWeakFac","frictionAngleWeakFac","strainWeakStart","strainWeakEnd"]
def __init__(self,material="Default",name=""):
self.isRef = False
self.name = name
self.material = material
self.isAir = False
self.isWater = False
self.use_dtMaxwellLimit = False
self.phiIni = 0.0
# Static Fluid Pressure Factor
self.staticPfFac = 0.0
# StrainWeakening
self.staticPfFacWeakFac = 0.0
self.cohesionWeakFac = 0.0 # 0.0 means no weakening, 1.0 fully weakened: Cfinal = Cini*(1-CweakFac)
self.frictionAngleWeakFac = 0.0
self.strainWeakStart = 0.1
self.strainWeakEnd = 1.0
self.dilationAngle = 0.0
if material == "Default":
# Density
self.rho0 = 1.0
# Heat
self.alpha = 0.0
self.beta = 0.0
self.k = 1.0
# Rheology
# Plasticity
self.cohesion = 1E100
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E100
# Viscosity
self.vDisl = DislocationCreep ()
self.vDiff = DiffusionCreep ("Off")
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1e-8
elif material == "StickyAir":
self.isAir = True
# Density
self.rho0 = 0.01
self.alpha = 0.0
self.beta = 0.0
# Heat
self.k = 3.0 * 2500/self.rho0 # in order to have roughly the same diffusivity as the rock
# Rheology
# Plasticity
self.cohesion = 50E6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E20
# Viscosity
#self.vDisl = DislocationCreep (eta0=1E17)
#self.vDiff = DiffusionCreep ("Off")
self.vDisl = DislocationCreep ("Off")
self.vDiff = DiffusionCreep (eta0=1e17)
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
self.phiIni = 0.9
elif material == "StickyWater":
self.isWater = True
# Density
self.rho0 = 1.0
self.alpha = 0.0
self.beta = 0.0
# Heat
self.k = 3.0 * 2500/self.rho0 # in order to have roughly the same diffusivity as the rock
# Rheology
# Plasticity
self.cohesion = 50E6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E20
# Viscosity
self.vDisl = DislocationCreep ("Off")
self.vDiff = DiffusionCreep (eta0=1E17)
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
self.phiIni = 0.9
elif material == "Sediments":
# Density
self.rho0 = 2500
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Off")
self.vDiff = DiffusionCreep (eta0=1E21)
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
elif material == "Wet_Quartzite":
# Density
self.rho0 = 2800
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Wet_Quarzite-Ueda_et_al_2008")
self.vDiff = DiffusionCreep ("Off")
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
elif material == "Quartzite":
# Density
self.rho0 = 2800
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Quarzite-Ranalli_1995")
self.vDiff = DiffusionCreep ("Off")
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
elif material == "Dry_Olivine":
# Density
self.rho0 = 3300
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Dry_Olivine-Ranalli_1995")
self.vDiff = DiffusionCreep ("Dry_Olivine_diff_creep-Hirth_Kohlstedt_2003")
self.vPei = PeierlsCreep ("Olivine_Peierls-Kameyama_1999")
# Darcy
self.perm0 = 1E-8
elif material == "Wet_Olivine":
# Density
self.rho0 = 3300
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Wet_Olivine_disl_creep-Hirth_Kohlstedt_2003_constant_C_OH")
self.vDiff = DiffusionCreep ("Wet_Olivine_diff_creep-Hirth_Kohlstedt_2003_constant_C_OH")
self.vPei = PeierlsCreep ("Olivine_Peierls-Kameyama_1999")
# Darcy
self.perm0 = 1E-8
elif material == "Diabase":
# Density
self.rho0 = 2900
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 10e6
self.frictionAngle = 30.0/180*pi
# Elasticity
self.G = 1E11
# Viscosity
self.vDisl = DislocationCreep ("Maryland_strong_diabase-Mackwell_et_al_1998")
self.vDiff = DiffusionCreep ("Off")
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
elif material == "Sand":
# From Buiter et al. 2016 (http://dx.doi.org/10.1016/j.jsg.2016.03.003)
# Density
self.rho0 = 1560
self.alpha = 1E-5
self.beta = 1E-11
# Heat
self.k = 3.0
# Rheology
# Plasticity
self.cohesion = 30
self.frictionAngle = 36.0/180*pi
# Elasticity
self.G = 1E100
# Viscosity
self.vDisl = DislocationCreep ("Off")
self.vDiff = DiffusionCreep (eta0=1e23)
self.vPei = PeierlsCreep ("Off")
# Darcy
self.perm0 = 1E-8
else :
raise ValueError("No such material: %s" % material)
def dictionarize(self):
self.vDisl = vars(self.vDisl)
self.vDiff = vars(self.vDiff)
self.vPei = vars(self.vPei)
def getRefVisc(self,P,T,Eii):
R = 8.3144598
invEtaDiff = 0.0
invEtaDisl = 0.0
# note: P*V is omitted in the dislocation creep, and etaPei is also omitted
if self.vDisl.isActive:
invEtaDisl = (2.0*pow(self.vDisl.B,1.0/self.vDisl.n)*pow(abs(Eii),(-1.0/self.vDisl.n+1.0))) * exp( - (self.vDisl.E+P*self.vDisl.V) / (self.vDisl.n*R*T))
if self.vDiff.isActive:
invEtaDiff = (2.0*self.vDiff.B * exp( - (self.vDiff.E+P*self.vDiff.V) / (R*T)))
#print( (self.vDisl.n*R*T))
#print((self.vDisl.E+P*self.vDisl.V) )
#print (exp( - (self.vDisl.E+P*self.vDisl.V) / (self.vDisl.n*R*T) ))
#print("Diff:" + str(invEtaDiff))
#print("Disl:" + str(invEtaDisl))
return 1.0/(invEtaDisl+invEtaDiff)
def copy(self):
return copy.deepcopy(self)
# The definition of the flow laws and the material compilation has been borrowed from LaMEM (Kaus, Popov et al.)
# We assume that the creep law has the form:
# Diffusion: eII = Ad*Tau * C_OH^r * d^-p *exp( - (Ed + P*Vd)/(R*T))
# Dislocation: eII = An*Tau^n * C_OH^r *exp( - (En + P*Vn)/(R*T))
# Peierls: eII = Bp * exp( - (EP + P*VP)/(R*T)*(1-gamma)^q) * (Tau/gamma/taup)^s
# In addition, we take into account that the creep-laws are typically measured under uniaxial or simple shear,
# whereas we need them in tensorial format (self.tensorCorrection and F2) as defined in T. Gerya book.
#
# The resulting expressions for effective viscosity:
# Diffusion: inv_eta_diff = 2 * [Bd * exp(-(Ed + P*Vd)/(R*T))]
# Dislocation: inv_eta_disl = 2 * [Bn * exp(-(En + P*Vn)/(R*T))]^(1/n) * eII^(1-1/n)
#
# In LaMEM we include the effect of grain size, H2O and tensor correction in the pre-factor (Bd,Bn) such that:
# Diffusion: Bd = (2*F2)^(-1) * Ad [Pa] * d^-p * C_OH^r
# Dislocation: Bn = (2*F2)^(-n) * An [Pa] * C_OH^r
#
# eII - strain rate [1/s]
# Tau - stress [Pa]
# P - pressure [Pa]
# R - gas constant
# Ad, An - prefactor (Bn before taking into account grain size and water fugacity) [Pa^(-n)s^(-1)]
# Bd, Bn - prefactor [Pa^(-n)s^(-1)]
# n - power-law exponent (n=1 for diffusion creep)
# Ed, En - activation Energy [J/mol]
# Vd, Vn - activation volume [m^3/mol]
# d - grain size [in micro-meters (1e-6 meter)]
# p - exponent of grain size
# C_OH - water fugacity in H/10^6 Si (see Hirth & Kohlstedt 2003 for a description)
# r - power-law exponent of C_OH term
MPa - unit flag: False - units in Pa, True - units in MPa
# For Peierls:
# s = (Ep+p*Vp)/(R*T)*(1-gamma)^(q-1)*q*gamma
# Bp - pre-exponential constant for the Peierls mechanism [1/s]
# Ep - activation energy [J/mol]
# Vp - activation volume [m^3/mol]
taup - Peierls stress [Pa]
# gamma - adjustable constant [-]
# q - stress dependence for Peierls creep [-]
# s - Peierls creep exponent (typical values between 7-11) [-]
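As a concrete check of the expressions above, the following standalone sketch (independent of the classes below; all parameter values are hypothetical) evaluates the two inverse viscosities and composes them harmonically, as `getRefVisc` does:

```python
from math import exp

R = 8.3144598  # universal gas constant [J/(mol K)]

def inv_eta_diff(Bd, Ed, Vd, P, T):
    # inv_eta_diff = 2 * Bd * exp(-(Ed + P*Vd)/(R*T))
    return 2.0 * Bd * exp(-(Ed + P * Vd) / (R * T))

def inv_eta_disl(Bn, n, En, Vn, P, T, eII):
    # inv_eta_disl = 2 * Bn^(1/n) * exp(-(En + P*Vn)/(n*R*T)) * eII^(1 - 1/n)
    return (2.0 * Bn ** (1.0 / n)
            * abs(eII) ** (1.0 - 1.0 / n)
            * exp(-(En + P * Vn) / (n * R * T)))

# With n = 1, E = V = 0 and Bn = 1/(2*eta0), the law reduces to a Newtonian
# viscosity eta0 -- this is exactly how the "Default" creep laws are built
# (self.A = 0.5/eta0):
eta0 = 1e21
eta = 1.0 / inv_eta_disl(0.5 / eta0, 1.0, 0.0, 0.0, 0.0, 300.0, 1e-15)
# eta == eta0
```

Composite viscosity with several active mechanisms follows by summing the inverse viscosities before taking the reciprocal, mirroring the return statement of `getRefVisc`.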
class DislocationCreep(Frozen):
_Frozen__List = ["flowLaw","B","A","n","E","V","tensorCorrection","MPa","C_OH_0","r","isActive"]
def __init__(self,flowLaw="Default",eta0=1.0,n=1.0):
self.flowLaw = flowLaw
self.isActive = True
self.B = 0
if flowLaw == "Off":
self.isActive = False
flowLaw = "Default"
if flowLaw == "Default":
self.A = 0.5/eta0
self.n = n
self.E = 0.0
self.V = 0.0
self.tensorCorrection = "None"
self.MPa = False
self.C_OH_0 = 1.0
self.r = 0.0
elif flowLaw == "Dry_Olivine-Ranalli_1995":
# after Ranalli 1995
self.A = 2.5e4
self.n = 3.5
self.E = 532e3
self.V = 17e-6
self.tensorCorrection = "UniAxial"
self.MPa = True
self.C_OH_0 = 1
self.r = 0
elif flowLaw == "Wet_Olivine-Ranalli_1995":
# after Ranalli 1995
self.A | |
are empirically chosen to approximately lead to unit variance targets
__C.MODEL.BBOX_REG_WEIGHTS = (10., 10., 5., 5.)
__C.MODEL.VIDEO_ON = False
## Batch normalization
# If true, use the SpatialBN layer instead of AffineChannel layer
__C.MODEL.USE_BN = False
# If true, use the BN layer in test mode (i.e. should be same output as the
# Affine layer).
__C.MODEL.USE_BN_TESTMODE_ONLY = False
# From Kaiming's Halekala
__C.MODEL.BN_EPSILON = 1.0000001e-5
__C.MODEL.BN_MOMENTUM = 0.9
# ---------------------------------------------------------------------------- #
# Solver options
# ---------------------------------------------------------------------------- #
__C.SOLVER = AttrDict()
__C.SOLVER.BASE_LR = 0.001
__C.SOLVER.LR_POLICY = b'step'
__C.SOLVER.GAMMA = 0.1
__C.SOLVER.STEP_SIZE = 30000
__C.SOLVER.MAX_ITER = 40000
__C.SOLVER.MOMENTUM = 0.9
__C.SOLVER.WEIGHT_DECAY = 0.0005
__C.SOLVER.WARM_UP_ITERS = 500
__C.SOLVER.WARM_UP_FACTOR = 1.0 / 3.0
# WARM_UP_METHOD can be either 'constant' or 'linear'
__C.SOLVER.WARM_UP_METHOD = 'linear'
__C.SOLVER.STEPS = []
__C.SOLVER.LRS = []
# Scale the momentum update history by new_lr / old_lr when updating the
# learning rate (this is correct given MomentumSGDUpdateOp)
__C.SOLVER.SCALE_MOMENTUM = True
__C.SOLVER.SCALE_MOMENTUM_THRESHOLD = 1.1
__C.SOLVER.LOG_LR_CHANGE_THRESHOLD = 1.1
# LR Policies (by example):
# 'step'
# lr = BASE_LR * GAMMA ** (cur_iter // STEP_SIZE)
# 'steps_with_decay'
# SOLVER.STEPS = [0, 60000, 80000]
# SOLVER.GAMMA = 0.1
# lr = BASE_LR * GAMMA ** current_step
# iters [0, 59999] are in current_step = 0, iters [60000, 79999] are in
# current_step = 1, and so on
# 'steps_with_lrs'
# SOLVER.STEPS = [0, 60000, 80000]
# SOLVER.LRS = [0.02, 0.002, 0.0002]
# lr = LRS[current_step]
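A hypothetical helper (not part of the config API; the name and signature are illustrative) that reproduces the 'steps_with_decay' and 'steps_with_lrs' schedules documented above:

```python
def lr_at_iter(cur_iter, base_lr, gamma, steps, lrs=None):
    # current_step is the index of the last entry in `steps` that is <= cur_iter
    current_step = 0
    for i, s in enumerate(steps):
        if cur_iter >= s:
            current_step = i
    if lrs:                                   # 'steps_with_lrs'
        return lrs[current_step]
    return base_lr * gamma ** current_step    # 'steps_with_decay'

# With SOLVER.STEPS = [0, 60000, 80000] and GAMMA = 0.1:
lr_at_iter(59999, 0.02, 0.1, [0, 60000, 80000])   # -> 0.02
lr_at_iter(60000, 0.02, 0.1, [0, 60000, 80000])   # -> 0.002 (up to float rounding)
```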
# ---------------------------------------------------------------------------- #
# Fast R-CNN options
# ---------------------------------------------------------------------------- #
__C.FAST_RCNN = AttrDict()
__C.FAST_RCNN.MLP_HEAD_DIM = 1024
__C.FAST_RCNN.ROI_XFORM_METHOD = b'RoIPoolF'
# Only applies to RoIWarp, RoIWarpMax, and RoIAlign
__C.FAST_RCNN.ROI_XFORM_SAMPLING_RATIO = 0
# Models may ignore this and use fixed values
__C.FAST_RCNN.ROI_XFORM_RESOLUTION = 14
# ---------------------------------------------------------------------------- #
# RPN options
# ---------------------------------------------------------------------------- #
__C.RPN = AttrDict()
__C.RPN.ON = False
# Note: these options are *not* used by FPN RPN; see FPN.RPN* options
# RPN anchor sizes
__C.RPN.SIZES = (64, 128, 256, 512)
# Stride of the feature map that RPN is attached to
__C.RPN.STRIDE = 16
# RPN anchor aspect ratios
__C.RPN.ASPECT_RATIOS = (0.5, 1, 2)
# ---------------------------------------------------------------------------- #
# FPN options
# ---------------------------------------------------------------------------- #
__C.FPN = AttrDict()
__C.FPN.FPN_ON = False # avoid using 'ON', yaml converts it to True
__C.FPN.DIM = 256
__C.FPN.ZERO_INIT_LATERAL = False
__C.FPN.COARSEST_STRIDE = 32
# Multilevel RoI transform
__C.FPN.MULTILEVEL_ROIS = False
__C.FPN.ROI_CANONICAL_SCALE = 224 # s0
__C.FPN.ROI_CANONICAL_LEVEL = 4 # k0: where s0 maps to
__C.FPN.ROI_MAX_LEVEL = 5 # coarsest level of pyramid
__C.FPN.ROI_MIN_LEVEL = 2 # finest level of pyramid
# Multilevel RPN
__C.FPN.MULTILEVEL_RPN = False
__C.FPN.RPN_MAX_LEVEL = 6
__C.FPN.RPN_MIN_LEVEL = 2
# FPN RPN anchor aspect ratios
__C.FPN.RPN_ASPECT_RATIOS = (0.5, 1, 2)
# RPN anchors start at this size on RPN_MIN_LEVEL
# The anchor size doubles at each level after that
# With a default of 32 and levels 2 to 6, we get anchor sizes of 32 to 512
__C.FPN.RPN_ANCHOR_START_SIZE = 32
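For reference, the per-level anchor sizes implied by the comment above can be computed as follows (values are the config defaults; the snippet is illustrative only):

```python
# The start size doubles at each FPN level above RPN_MIN_LEVEL.
start_size, min_level, max_level = 32, 2, 6   # config defaults
anchor_sizes = [start_size * 2 ** (lvl - min_level)
                for lvl in range(min_level, max_level + 1)]
# anchor_sizes == [32, 64, 128, 256, 512]
```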
__C.FPN.EXTRA_CONV_LEVELS = False
# Compatibility stopgap measure for some experimental models
__C.FPN.INPLACE_LATERAL = False
# ---------------------------------------------------------------------------- #
# Mask R-CNN options
# ---------------------------------------------------------------------------- #
__C.MRCNN = AttrDict()
__C.MRCNN.MASK_HEAD_NAME = b''
# Resolution of mask predictions
__C.MRCNN.RESOLUTION = 14 # TODO: rename to MASK_RESOLUTION
__C.MRCNN.ROI_XFORM_METHOD = b'RoIAlign'
__C.MRCNN.ROI_XFORM_RESOLUTION = 7
__C.MRCNN.ROI_XFORM_SAMPLING_RATIO = 0
__C.MRCNN.DIM_REDUCED = 256
__C.MRCNN.THRESH_BINARIZE = 0.5
__C.MRCNN.WEIGHT_LOSS_MASK = 1.
__C.MRCNN.CLS_SPECIFIC_MASK = True
__C.MRCNN.DILATION = 2 # TODO(rbg): not supported in ResNet conv5 head yet
__C.MRCNN.UPSAMPLE_RATIO = 1
__C.MRCNN.USE_FC_OUTPUT = False
__C.MRCNN.CONV_INIT = b'GaussianFill'
# ---------------------------------------------------------------------------- #
# Keypoint R-CNN options
# ---------------------------------------------------------------------------- #
__C.KRCNN = AttrDict()
# Keypoint prediction head options
__C.KRCNN.ROI_KEYPOINTS_HEAD = b''
# Output size (and size loss is computed on), e.g., 56x56
__C.KRCNN.HEATMAP_SIZE = -1
__C.KRCNN.UP_SCALE = -1
__C.KRCNN.USE_DECONV = False
__C.KRCNN.USE_DECONV_OUTPUT = False
__C.KRCNN.DILATION = 1
__C.KRCNN.DECONV_KERNEL = 4
__C.KRCNN.DECONV_DIM = 256
__C.KRCNN.NUM_KEYPOINTS = -1
__C.KRCNN.CONV_HEAD_DIM = 256
__C.KRCNN.CONV_HEAD_KERNEL = 3
__C.KRCNN.CONV_INIT = b'GaussianFill'
# Use NMS based on OKS
__C.KRCNN.NMS_OKS = False
# Source for keypoint confidence
# Valid options: ('bbox', 'logit', 'prob')
__C.KRCNN.KEYPOINT_CONFIDENCE = b'bbox'
# Standard ROI XFORM options
__C.KRCNN.ROI_XFORM_METHOD = b'RoIAlign'
__C.KRCNN.ROI_XFORM_RESOLUTION = 7
__C.KRCNN.ROI_XFORM_SAMPLING_RATIO = 0
# Minimum number of labeled keypoints that must exist in a minibatch (otherwise
# the minibatch is discarded)
__C.KRCNN.MIN_KEYPOINT_COUNT_FOR_VALID_MINIBATCH = 20
__C.KRCNN.NUM_STACKED_CONVS = 8
__C.KRCNN.INFERENCE_MIN_SIZE = 0
__C.KRCNN.LOSS_WEIGHT = 1.0
# Use 3D deconv for videos
# Setting true will only activate for video inputs though
__C.KRCNN.USE_3D_DECONV = False
# Set to False if you want to move time to channel dim when computing the keypoint
# outputs. Set to True and it will move to batch dimension.
__C.KRCNN.NO_3D_DECONV_TIME_TO_CH = False
# ---------------------------------------------------------------------------- #
# VIDEO
# ---------------------------------------------------------------------------- #
__C.VIDEO = AttrDict()
__C.VIDEO.NUM_FRAMES = -1
# The temporal dimension at the ROIalign stage. By default will set to same as
# NUM_FRAMES (assuming no temporal strides)
__C.VIDEO.NUM_FRAMES_MID = -1
__C.VIDEO.TIME_INTERVAL = -1
# Could be 'center-only', 'mean-repeat' etc (see lib/utils/net.py)
__C.VIDEO.WEIGHTS_INFLATE_MODE = b''
# Time kernel dims, for each part of the network
__C.VIDEO.TIME_KERNEL_DIM = AttrDict()
__C.VIDEO.TIME_KERNEL_DIM.BODY = 1
__C.VIDEO.TIME_KERNEL_DIM.HEAD_RPN = 1
__C.VIDEO.TIME_KERNEL_DIM.HEAD_KPS = 1
__C.VIDEO.TIME_KERNEL_DIM.HEAD_DET = 1 # only used for ResNet heads (FPN uses MLP)
# Set to True, it will use the same stride as on the spatial dimensions
__C.VIDEO.TIME_STRIDE_ON = False
# Set this to 'avg', 'slice-center' or '' (do nothing, which requires a 3D head)
__C.VIDEO.BODY_HEAD_LINK = b''
# Predict "vis" labels for tube. This is to take care of the case when all the
# boxes predicted are not in the track
__C.VIDEO.PREDICT_RPN_BOX_VIS = False
# Set to true if you want to use the GT values of RPN. This basically avoids
# evaluating the RPN in training
__C.VIDEO.DEBUG_USE_RPN_GT = False
# How to generate TPN.
# replicate = copy over the t=1 proposal num_frame times
# combinations = compute all possible combinations
__C.VIDEO.RPN_TUBE_GEN_STYLE = b'replicate'
# Default frames/clips to extract from video datasets for training/testing.
# IF the datasets/json_dataset.py entry has a different number, that will take
# precedence over this.
__C.VIDEO.DEFAULT_CLIPS_PER_VIDEO = 9999999999 # default, take all
# ---------------------------------------------------------------------------- #
# External Paths (to open source code, etc)
# ---------------------------------------------------------------------------- #
__C.EXT_PATHS = AttrDict()
# This is an old version of code
# https://github.com/leonid-pishchulin/poseval/tree/39fd82bc328b3b6d580c7afe2e98316cba35ab4a # noQA
# __C.EXT_PATHS.POSEVAL_CODE_PATH = b'/home/rgirdhar/local/OpenSource/github/poseval/'
# Multi-processing version of
# https://github.com/leonid-pishchulin/poseval/tree/dfb11f7c1035ae7d91f1601fdd1972897c2a7cf4
__C.EXT_PATHS.POSEVAL_CODE_PATH = b'/home/rgirdhar/local/OpenSource/bitbucket/poseval/'
# ---------------------------------------------------------------------------- #
# ResNets options (ResNet/ResNeXt)
# ---------------------------------------------------------------------------- #
__C.RESNETS = AttrDict()
# by default, we support the MSRA ResNet50
__C.RESNETS.NUM_GROUPS = 1
__C.RESNETS.WIDTH_PER_GROUP = 64
__C.RESNETS.STRIDE_1X1 = True # True only for MSRA ResNet; False for C2/Torch
__C.RESNETS.TRANS_FUNC = b'bottleneck_transformation'
# ---------------------------------------------------------------------------- #
# Tracking parameters
# ---------------------------------------------------------------------------- #
__C.TRACKING = AttrDict()
# Confidence value of detections to keep. Drop the lower conf detections before
# running tracking.
__C.TRACKING.CONF_FILTER_INITIAL_DETS = 0.9
# Set the following if you want to run tracking on a specific detections file.
# By default it will pick the one in the test directory corresponding to the
# config file
__C.TRACKING.DETECTIONS_FILE = b''
# Tracking distance metrics
__C.TRACKING.DISTANCE_METRICS = ('bbox-overlap', 'cnn-cosdist', 'pose-pck')
__C.TRACKING.DISTANCE_METRIC_WTS = (1.0, 0.0, 0.0)
# Algorithm to use for matching between frames
__C.TRACKING.BIPARTITE_MATCHING_ALGO = b'hungarian'
# Layer to use for CNN feature based matching for tracking, w.r.t resnet18 in pytorch
__C.TRACKING.CNN_MATCHING_LAYER = b'layer3'
# Pose smoothing
__C.TRACKING.FLOW_SMOOTHING_ON = False
# How to set the conf for each keypoint. ['global'/'local'/'scaled']
__C.TRACKING.KP_CONF_TYPE = b'global'
# Flow smoothing
__C.TRACKING.FLOW_SMOOTHING = AttrDict()
# When it is a scene change
__C.TRACKING.FLOW_SMOOTHING.FLOW_SHOT_BOUNDARY_TH = 6.0
# How many frames to consider
__C.TRACKING.FLOW_SMOOTHING.N_CONTEXT_FRAMES = 3
# Extend tracks to frames which do not have that track. Else it will only
# smooth the poses that already existed in that frame
__C.TRACKING.FLOW_SMOOTHING.EXTEND_TRACKS = True
# Keep center detections only, and drop the other frames (even before tracking).
# This basically reduces a 3D model output back to 2D by only keeping predictions
# corresponding to the center frame
# Keeping it true as I'm not doing any tube-level tracking so far. Once everything
# else works, implement that.
__C.TRACKING.KEEP_CENTER_DETS_ONLY = True
# debug
__C.TRACKING.DEBUG = AttrDict()
# Set the following to get labels from the GT for this frame. This avoids
# tracks from getting lost
__C.TRACKING.DEBUG.UPPER_BOUND = False
# Set the following if you also want to copy the keypoint locations from the GT.
# This gives an idea if all the kps for detected boxes were correct, what num
# will I get. i.e., if only issue was missed boxes
__C.TRACKING.DEBUG.UPPER_BOUND_2_GT_KPS = False
# Set the following to not copy the keypoints, only the conf value
__C.TRACKING.DEBUG.UPPER_BOUND_2_GT_KPS_ONLY_CONF = False
# This ensures the shot boundaries are known
__C.TRACKING.DEBUG.UPPER_BOUND_3_SHOTS = False
# This uses upper bound in the evaluation code, copying over the GT track id
# to the detection
__C.TRACKING.DEBUG.UPPER_BOUND_4_EVAL_UPPER_BOUND = False
# To evaluate if I only replaced the keypoints, without replacing the track Ids from GT
__C.TRACKING.DEBUG.UPPER_BOUND_5_GT_KPS_ONLY = False
# Debugging flow smoothing
__C.TRACKING.DEBUG.FLOW_SMOOTHING_COMBINE = False
# The dummy tracks baseline
__C.TRACKING.DEBUG.DUMMY_TRACKS = False
# Training a LSTM for tracking
__C.TRACKING.LSTM = AttrDict()
# type of recurrent net (RNN_TANH, RNN_RELU, LSTM, GRU)
__C.TRACKING.LSTM.MODEL = b'LSTM'
# Initial Language embedding size
__C.TRACKING.LSTM.EMSIZE = 200
# Number of hidden units per layer
__C.TRACKING.LSTM.NHID = 200
# Number of layers
__C.TRACKING.LSTM.NLAYERS = 2
__C.TRACKING.LSTM.DROPOUT = 0.2
# Tie the weights of the encoder and decoder (only works if the hidden dim ==
# encoded dim)
__C.TRACKING.LSTM.TIED_WTS = False
# Initial LR for the LSTM
__C.TRACKING.LSTM.LR = 0.1
# Gradient clipping for the LSTM
__C.TRACKING.LSTM.GRAD_CLIP = 0.25
__C.TRACKING.LSTM.BATCH_SIZE = 20
__C.TRACKING.LSTM.EPOCHS = 10
__C.TRACKING.LSTM.LOG_INTERVAL = 200
# If True, it will incur loss only on the last prediction, and not on the
# intermediate | |
time_steps: iterable
the time steps that the integrator progresses over
capture_elements: list
which model elements to capture - uses pysafe names
return_timestamps:
the subset of `time_steps` for which values should be returned
Returns
-------
outputs: pandas.DataFrame
"""
outputs = pd.DataFrame(columns=capture_elements)
if self.progress:
# initialize progress bar
progressbar = utils.ProgressBar(len(time_steps)-1)
else:
# when None is used the update will do nothing
progressbar = utils.ProgressBar(None)
for t2 in time_steps[1:]:
if self.time() in return_timestamps:
outputs.at[self.time()] = [getattr(self.components, key)()
for key in capture_elements]
self._euler_step(t2 - self.time())
self.time.update(t2) # this will clear the stepwise caches
self.components.cache.reset(t2)
progressbar.update()
# TODO move control variables to a class and automatically stop
# when updating time
if self.time() >= self.components.final_time():
break
# need to add one more time step, because we run only the state
# updates in the previous loop and thus may be one short.
if self.time() in return_timestamps:
outputs.at[self.time()] = [getattr(self.components, key)()
for key in capture_elements]
progressbar.finish()
return outputs
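The loop above records outputs before advancing, then takes one extra sample after the loop so the final time is not missed. The same record-then-step pattern in miniature (names are illustrative, not the simulator's API):

```python
def integrate(times, read_state, advance):
    # Sample at each time, advance to the next time, then take the
    # final sample that the loop body would otherwise miss.
    out = {}
    for t, t_next in zip(times[:-1], times[1:]):
        out[t] = read_state()
        advance(t_next - t)
    out[times[-1]] = read_state()
    return out

state = {'x': 0.0}
result = integrate([0, 1, 2],
                   lambda: state['x'],
                   lambda dt: state.__setitem__('x', state['x'] + dt))
# result == {0: 0.0, 1: 1.0, 2: 2.0}
```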
def _add_run_elements(self, df, capture_elements, replace={}):
"""
Adds constant elements to a dataframe.
Parameters
----------
df: pandas.DataFrame
Dataframe to add elements.
capture_elements: list
List of constant elements
replace: dict
Output values to replace.
TODO: move control variables to a class and avoid this.
Returns
-------
None
"""
nt = len(df.index.values)
for element in capture_elements:
df[element] = [getattr(self.components, element)()] * nt
# TODO: move control variables to a class and avoid this.
# update initial time values in df (necessary if initial_conditions)
for it, value in replace.items():
if it in df:
df[it] = value
elif it.upper() in df:
df[it.upper()] = value
elif it.replace('_', ' ') in df:
df[it.replace('_', ' ')] = value
elif it.replace('_', ' ').upper() in df:
df[it.replace('_', ' ').upper()] = value
def ramp(time, slope, start, finish=0):
"""
Implements vensim's and xmile's RAMP function
Parameters
----------
time: function
The current time of modelling
slope: float
The slope of the ramp starting at zero at time start
start: float
Time at which the ramp begins
finish: float
Optional. Time at which the ramp ends (the default 0 means the ramp never ends)
Returns
-------
response: float
If prior to ramp start, returns zero
If after ramp ends, returns top of ramp
Examples
--------
"""
t = time()
if t < start:
return 0
else:
if finish <= 0:
return slope * (t - start)
elif t > finish:
return slope * (finish - start)
else:
return slope * (t - start)
def step(time, value, tstep):
"""
Implements vensim's STEP function
Parameters
----------
time: function
The current time of modelling
value: float
The height of the step
tstep: float
The time at and after which the return value equals `value`
Returns
-------
- In range [-inf, tstep) returns 0
- In range [tstep, +inf) returns `value`
"""
return value if time() >= tstep else 0
def pulse(time, start, duration):
""" Implements vensim's PULSE function
In range [-inf, start) returns 0
In range [start, start + duration) returns 1
In range [start + duration, +inf] returns 0
"""
t = time()
return 1 if start <= t < start + duration else 0
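The half-open interval arithmetic used by `pulse` can be sanity-checked standalone (start = 2, duration = 3, so the pulse is active on [2, 5)):

```python
start, duration = 2.0, 3.0
samples = [1 if start <= t < start + duration else 0
           for t in (1.0, 2.0, 4.9, 5.0)]
# samples == [0, 1, 1, 0]: off before start, on at start, off again at start + duration
```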
def pulse_train(time, start, duration, repeat_time, end):
""" Implements vensim's PULSE TRAIN function
In range [-inf, start) or [end, +inf) returns 0
In range [start + n * repeat_time, start + n * repeat_time + duration) returns 1
In range [start + n * repeat_time + duration, start + (n+1) * repeat_time) returns 0
"""
t = time()
if start <= t < end:
return 1 if (t - start) % repeat_time < duration else 0
else:
return 0
def pulse_magnitude(time, magnitude, start, repeat_time=0):
""" Implements xmile's PULSE function
PULSE: Generate a one-DT wide pulse at the given time
Parameters: 2 or 3: (magnitude, first time[, interval])
Without interval or when interval = 0, the PULSE is generated only once
Example: PULSE(20, 12, 5) generates a pulse value of 20/DT at time 12, 17, 22, etc.
In range [-inf, start) returns 0
In range [start + n * repeat_time, start + n * repeat_time + dt) return magnitude/dt
In range [start + n * repeat_time + dt, start + (n + 1) * repeat_time) return 0
"""
t = time()
if repeat_time <= small_vensim:
if abs(t - start) < time.step():
return magnitude * time.step()
else:
return 0
else:
if abs((t - start) % repeat_time) < time.step():
return magnitude * time.step()
else:
return 0
def lookup(x, xs, ys):
"""
Intermediate values are calculated with linear interpolation between
the intermediate points. Out-of-range values are the same as the
closest endpoint (i.e, no extrapolation is performed).
"""
return np.interp(x, xs, ys)
def lookup_extrapolation(x, xs, ys):
"""
Intermediate values are calculated with linear interpolation between
the intermediate points. Out-of-range values are calculated with linear
extrapolation from the last two values at either end.
"""
if x < xs[0]:
dx = xs[1] - xs[0]
dy = ys[1] - ys[0]
k = dy / dx
return ys[0] + (x - xs[0]) * k
if x > xs[-1]:
dx = xs[-1] - xs[-2]
dy = ys[-1] - ys[-2]
k = dy / dx
return ys[-1] + (x - xs[-1]) * k
return np.interp(x, xs, ys)
def lookup_discrete(x, xs, ys):
"""
Intermediate values take on the value associated with the next lower
x-coordinate (also called a step-wise function). The last two points
of a discrete graphical function must have the same y value.
Out-of-range values are the same as the closest endpoint
(i.e, no extrapolation is performed).
"""
for index in range(0, len(xs)):
if x < xs[index]:
return ys[index - 1] if index > 0 else ys[index]
return ys[-1]
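The difference between the three lookup variants above is easiest to see on a small table. A standalone sketch (hypothetical data points) showing the clamping behaviour of `np.interp`, which `lookup` relies on, versus the linear extrapolation that `lookup_extrapolation` adds:

```python
import numpy as np

xs = [0.0, 1.0, 2.0]
ys = [0.0, 10.0, 10.0]

# np.interp clamps out-of-range inputs to the endpoint values, which is
# exactly the behaviour lookup() documents (no extrapolation):
inside = np.interp(0.5, xs, ys)    # linear interpolation -> 5.0
clamped = np.interp(3.0, xs, ys)   # beyond the last point -> 10.0

# lookup_extrapolation() instead continues the slope of the end segment:
k = (ys[1] - ys[0]) / (xs[1] - xs[0])
extrapolated = ys[0] + (-1.0 - xs[0]) * k   # x = -1.0 -> -10.0
```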
def if_then_else(condition, val_if_true, val_if_false):
"""
Implements Vensim's IF THEN ELSE function.
https://www.vensim.com/documentation/20475.htm
Parameters
----------
condition: bool or xarray.DataArray of bools
val_if_true: function
Value to evaluate and return when condition is true.
val_if_false: function
Value to evaluate and return when condition is false.
Returns
-------
The value depending on the condition.
"""
if isinstance(condition, xr.DataArray):
if condition.all():
return val_if_true()
elif not condition.any():
return val_if_false()
return xr.where(condition, val_if_true(), val_if_false())
return val_if_true() if condition else val_if_false()
def logical_and(*args):
"""
Implements Vensim's :AND: method for two or several arguments.
Parameters
----------
*args: arguments
The values to combine with the :AND: operator
Returns
-------
result: bool or xarray.DataArray
The result of the comparison.
"""
current = args[0]
for arg in args[1:]:
current = np.logical_and(arg, current)
return current
def logical_or(*args):
"""
Implements Vensim's :OR: method for two or several arguments.
Parameters
----------
*args: arguments
The values to combine with the :OR: operator
Returns
-------
result: bool or xarray.DataArray
The result of the comparison.
"""
current = args[0]
for arg in args[1:]:
current = np.logical_or(arg, current)
return current
def xidz(numerator, denominator, value_if_denom_is_zero):
"""
Implements Vensim's XIDZ function.
https://www.vensim.com/documentation/fn_xidz.htm
This function executes a division, robust to denominator being zero.
In the case of zero denominator, the final argument is returned.
Parameters
----------
numerator: float or xarray.DataArray
denominator: float or xarray.DataArray
Components of the division operation
value_if_denom_is_zero: float or xarray.DataArray
The value to return if the denominator is zero
Returns
-------
numerator / denominator if abs(denominator) >= 1e-6
otherwise, returns value_if_denom_is_zero
"""
if isinstance(denominator, xr.DataArray):
return xr.where(np.abs(denominator) < small_vensim,
value_if_denom_is_zero,
numerator * 1.0 / denominator)
if abs(denominator) < small_vensim:
return value_if_denom_is_zero
else:
return numerator * 1.0 / denominator
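A scalar-only restatement of the XIDZ guard for a quick check (the tolerance stands in for the module-level `small_vensim`, assumed to be about 1e-6, and is redefined here so the snippet is self-contained):

```python
SMALL = 1e-6   # stands in for the module-level small_vensim tolerance

def xidz_scalar(numerator, denominator, value_if_denom_is_zero):
    # Return the fallback value when the denominator is numerically zero.
    if abs(denominator) < SMALL:
        return value_if_denom_is_zero
    return numerator / denominator

xidz_scalar(1.0, 0.0, 99.0)   # -> 99.0 (division avoided)
xidz_scalar(1.0, 2.0, 99.0)   # -> 0.5
```

`zidz` below is the same guard with the fallback fixed at zero.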
def zidz(numerator, denominator):
"""
This function bypasses divide-by-zero errors,
implementing Vensim's ZIDZ function
https://www.vensim.com/documentation/fn_zidz.htm
Parameters
----------
numerator: float or xarray.DataArray
value to be divided
denominator: float or xarray.DataArray
value to divide by
Returns
-------
result of division numerator/denominator if denominator is not zero,
otherwise zero.
"""
if isinstance(denominator, xr.DataArray):
return xr.where(np.abs(denominator) < small_vensim,
0,
numerator * 1.0 / denominator)
if abs(denominator) < small_vensim:
return 0
else:
return numerator * 1.0 / denominator
def active_initial(time, expr, init_val):
"""
Implements vensim's ACTIVE INITIAL function
Parameters
----------
time: function
The current time function
expr
init_val
Returns
-------
"""
if time.stage == 'Initialization':
return init_val
else:
return expr()
def bounded_normal(minimum, maximum, mean, std, seed):
"""
Implements vensim's BOUNDED NORMAL function
"""
# np.random.seed(seed)
# we could bring this back later, but for now, ignore
return stats.truncnorm.rvs(minimum, maximum, loc=mean, scale=std)
def | |
# -*- coding: utf-8 -*-
# Author: <NAME> <<EMAIL>>
from functools import partial
from itertools import product
from numbers import Number
import mne
from nibabel.freesurfer import read_annot
import numpy as np
from .._data_obj import NDVar, SourceSpace, UTS
from .._utils import deprecated
from ..fmtxt import im_table
from .._text import ms
from ._base import EelFigure, ImLayout, ColorBarMixin, brain_data, butterfly_data
from ._color_luts import dspm_lut
from ._colors import ColorList
def assert_can_save_movies():
from ._brain_object import assert_can_save_movies
assert_can_save_movies()
def annot(annot, subject='fsaverage', surf='smoothwm', borders=False, alpha=0.7,
hemi=None, views=('lat', 'med'), w=None, h=None, axw=None, axh=None,
foreground=None, background=None, parallel=True, cortex='classic',
title=None, subjects_dir=None, name=None):
"""Plot the parcellation in an annotation file
Parameters
----------
annot : str
Name of the annotation (e.g., "PALS_B12_LOBES").
subject : str
Name of the subject (default 'fsaverage').
surf : 'inflated' | 'pial' | 'smoothwm' | 'sphere' | 'white'
Freesurfer surface to use as brain geometry.
borders : bool | int
Show only label borders (PySurfer Brain.add_annotation() argument).
alpha : scalar
Alpha of the annotation (1=opaque, 0=transparent, default 0.7).
hemi : 'lh' | 'rh' | 'both' | 'split'
Which hemispheres to plot (default includes hemisphere with more than one
label in the annot file).
views : str | iterator of str
View or views to show in the figure. Options are: 'rostral', 'parietal',
'frontal', 'ventral', 'lateral', 'caudal', 'medial', 'dorsal'.
w, h, axw, axh : scalar
Layout parameters (figure width/height, subplot width/height).
foreground : mayavi color
Figure foreground color (i.e., the text color).
background : mayavi color
Figure background color.
parallel : bool
Set views to parallel projection (default ``True``).
cortex : str | tuple | dict
Mark gyri and sulci on the cortex. Presets: ``'classic'`` (default),
``'high_contrast'``, ``'low_contrast'``, ``'bone'``. Can also be a
single color (e.g. ``'red'``, ``(0.1, 0.4, 1.)``) or a tuple of two
colors for gyri and sulci (e.g. ``['red', 'blue']`` or ``[(1, 0, 0),
(0, 0, 1)]``). For all options see the PySurfer documentation.
title : str
title for the window (default is the parcellation name).
subjects_dir : None | str
Override the default subjects_dir.
name : str
Equivalent to ``title``, for consistency with other plotting functions.
Returns
-------
brain : surfer.Brain
PySurfer Brain instance.
Notes
-----
The ``Brain`` object that is returned has a
:meth:`~plot._brain_fixes.plot_legend` method to plot the color legend.
See Also
--------
eelbrain.plot.brain.annot_legend : plot a corresponding legend without the brain
"""
if hemi is None:
annot_lh = mne.read_labels_from_annot(subject, annot, 'lh',
subjects_dir=subjects_dir)
use_lh = len(annot_lh) > 1
annot_rh = mne.read_labels_from_annot(subject, annot, 'rh',
subjects_dir=subjects_dir)
use_rh = len(annot_rh) > 1
if use_lh and use_rh:
hemi = 'split'
elif use_lh:
hemi = 'lh'
elif use_rh:
hemi = 'rh'
else:
raise ValueError("Neither hemisphere contains more than one label")
if title is None:
title = '%s - %s' % (subject, annot)
from ._brain_object import Brain
brain = Brain(subject, hemi, surf, title, cortex,
views=views, w=w, h=h, axw=axw, axh=axh,
foreground=foreground, background=background,
subjects_dir=subjects_dir, name=name)
brain._set_annot(annot, borders, alpha)
if parallel:
brain.set_parallel_view(scale=True)
return brain
def annot_legend(lh, rh, *args, **kwargs):
"""Plot a legend for a freesurfer parcellation
Parameters
----------
lh : str
Path to the lh annot-file.
rh : str
Path to the rh annot-file.
labels : dict (optional)
Alternative (text) label for (brain) labels.
h : 'auto' | scalar
Height of the figure in inches. If 'auto' (default), the height is
automatically increased to fit all labels.
Returns
-------
legend : :class:`~eelbrain.plot.ColorList`
Figure with legend for the parcellation.
Notes
-----
Instead of :func:`~eelbrain.plot.brain.annot_legend` it is usually
easier to use::
        >>> brain = plot.brain.annot(annot, ...)
>>> legend = brain.plot_legend()
See Also
--------
eelbrain.plot.brain.annot : plot the parcellation on a brain model
"""
_, lh_colors, lh_names = read_annot(lh)
_, rh_colors, rh_names = read_annot(rh)
lh_names = [name.decode() for name in lh_names]
rh_names = [name.decode() for name in rh_names]
lh_colors = dict(zip(lh_names, lh_colors[:, :4] / 255.))
rh_colors = dict(zip(rh_names, rh_colors[:, :4] / 255.))
names = set(lh_names)
names.update(rh_names)
colors = {}
seq = [] # sequential order in legend
seq_lh = []
seq_rh = []
for name in names:
if name in lh_colors and name in rh_colors:
if np.array_equal(lh_colors[name], rh_colors[name]):
colors[name] = lh_colors[name]
seq.append(name)
else:
colors[name + '-lh'] = lh_colors[name]
colors[name + '-rh'] = rh_colors[name]
seq_lh.append(name + '-lh')
seq_rh.append(name + '-rh')
elif name in lh_colors:
colors[name + '-lh'] = lh_colors[name]
seq_lh.append(name + '-lh')
else:
colors[name + '-rh'] = rh_colors[name]
seq_rh.append(name + '-rh')
return ColorList(colors, seq + seq_lh + seq_rh, *args, **kwargs)
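The hemisphere-merge logic above (one legend entry when both annot files agree on a label's color, suffixed ``-lh``/``-rh`` entries otherwise) can be sketched in isolation. ``merge_hemi_colors`` is a hypothetical helper, not part of eelbrain; unlike the set iteration above it sorts names for a deterministic order:

```python
import numpy as np


def merge_hemi_colors(lh_colors, rh_colors):
    """Merge per-hemisphere {label: RGBA} dicts (sketch of annot_legend).

    Labels with identical colors in both hemispheres get a single entry;
    otherwise '-lh'/'-rh' suffixed entries are created, listed after the
    shared labels.
    """
    colors, seq, seq_lh, seq_rh = {}, [], [], []
    # Sorted for determinism (the original iterates an unordered set)
    for name in sorted(set(lh_colors) | set(rh_colors)):
        in_lh, in_rh = name in lh_colors, name in rh_colors
        if in_lh and in_rh and np.array_equal(lh_colors[name], rh_colors[name]):
            colors[name] = lh_colors[name]
            seq.append(name)
        else:
            if in_lh:
                colors[name + '-lh'] = lh_colors[name]
                seq_lh.append(name + '-lh')
            if in_rh:
                colors[name + '-rh'] = rh_colors[name]
                seq_rh.append(name + '-rh')
    return colors, seq + seq_lh + seq_rh
```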
def _plot(data, *args, **kwargs):
"Plot depending on source space kind"
if data.source.kind == 'vol':
return _voxel_brain(data, *args, **kwargs)
else:
return brain(data, *args, **kwargs)
def dspm(src, fmin=13, fmax=22, fmid=None, *args, **kwargs):
"""
Plot a source estimate with coloring for dSPM values (bipolar).
Parameters
----------
src : NDVar, dims = ([case,] source, [time])
        NDVar with SourceSpace dimension. If ``src`` contains a case dimension,
the average across cases is taken.
fmin, fmax : scalar >= 0
Start- and end-point for the color gradient for positive values. The
gradient for negative values goes from -fmin to -fmax. Values between
-fmin and fmin are transparent.
fmid : None | scalar
Midpoint for the color gradient. If fmid is None (default) it is set
half way between fmin and fmax.
surf : 'inflated' | 'pial' | 'smoothwm' | 'sphere' | 'white'
Freesurfer surface to use as brain geometry.
views : str | iterator of str
View or views to show in the figure. Options are: 'rostral', 'parietal',
'frontal', 'ventral', 'lateral', 'caudal', 'medial', 'dorsal'.
hemi : 'lh' | 'rh' | 'both' | 'split'
Which hemispheres to plot (default based on data).
colorbar : bool
Add a colorbar to the figure (use ``.plot_colorbar()`` to plot a
colorbar separately).
time_label : str
Label to show time point. Use ``'ms'`` or ``'s'`` to display time in
milliseconds or in seconds, or supply a custom format string to format
time values (in seconds; default is ``'ms'``).
w, h, axw, axh : scalar
Layout parameters (figure width/height, subplot width/height).
foreground : mayavi color
Figure foreground color (i.e., the text color).
background : mayavi color
Figure background color.
parallel : bool
Set views to parallel projection (default ``True``).
cortex : str | tuple | dict
Mark gyri and sulci on the cortex. Presets: ``'classic'`` (default),
``'high_contrast'``, ``'low_contrast'``, ``'bone'``. Can also be a
single color (e.g. ``'red'``, ``(0.1, 0.4, 1.)``) or a tuple of two
colors for gyri and sulci (e.g. ``['red', 'blue']`` or ``[(1, 0, 0),
(0, 0, 1)]``). For all options see the PySurfer documentation.
title : str
        Title for the window (default is the subject name).
smoothing_steps : None | int
Number of smoothing steps if data is spatially undersampled (pysurfer
``Brain.add_data()`` argument).
mask : bool | matplotlib color
Shade areas that are not in ``src``. Can be matplotlib color, including
alpha (e.g., ``(1, 1, 1, 0.5)`` for semi-transparent white).
subjects_dir : None | str
Override the subjects_dir associated with the source space dimension.
name : str
Equivalent to ``title``, for consistency with other plotting functions.
Returns
-------
brain : surfer.Brain
PySurfer Brain instance containing the plot.
"""
if fmid is None:
fmid = (fmax + fmin) / 2
lut = dspm_lut(fmin, fmid, fmax)
return _plot(src, lut, -fmax, fmax, *args, **kwargs)
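The ``fmid`` default and the bipolar transparency described in the docstring (values between ``-fmin`` and ``fmin`` transparent, fully saturated beyond ``±fmax``) reduce to a small amount of arithmetic. This is a simplified sketch with hypothetical helper names, not the actual ``dspm_lut`` implementation:

```python
import numpy as np


def dspm_control_points(fmin, fmax, fmid=None):
    """Fill the default midpoint the same way dspm() does."""
    if fmid is None:
        fmid = (fmin + fmax) / 2  # halfway between fmin and fmax
    return fmin, fmid, fmax


def dspm_alpha(values, fmin, fmax):
    """Opacity ramp of a bipolar dSPM map (simplified sketch).

    |v| <= fmin is fully transparent, |v| >= fmax fully opaque,
    linear in between; the sign only selects the color direction.
    """
    a = (np.abs(np.asarray(values, dtype=float)) - fmin) / (fmax - fmin)
    return np.clip(a, 0.0, 1.0)
```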
def p_map(p_map, param_map=None, p0=0.05, p1=0.01, p0alpha=0.5, *args,
**kwargs):
"""Plot a map of p-values in source space.
Parameters
----------
p_map : NDVar | NDTest
Map of p values, or test result.
param_map : NDVar
Statistical parameter covering the same data points as p_map. Only the
sign is used, for incorporating the directionality of the effect into
the plot.
p0 : scalar
Highest p-value that is visible.
p1 : scalar
P-value where the colormap changes from ramping alpha to ramping color.
p0alpha : 1 >= float >= 0
Alpha at ``p0``. Set to 0 for a smooth transition, or a larger value to
clearly delineate significant regions (default 0.5).
surf : 'inflated' | 'pial' | 'smoothwm' | 'sphere' | 'white'
Freesurfer surface to use as brain geometry.
views : str | iterator of str
View or views to show | |
= 'GEOS-CF'
dfs_mod = {'GEOS-CF': df_mod_CF}
print(('!'*30, mod_label_master, dfs_mod.keys()))
# Get observations and model timeseries data as a DataFrame
# df_obs = dfs_obs[flight_ID]
# df_mod = dfs_mod[flight_ID]
# Only consider data during SLRs?
if just_SLR:
df_obs = df_obs.loc[df_obs['IS_SLR'] == True, :]
for key in dfs_mod.keys():
df_mod = dfs_mod[key]
df_mod = df_mod.loc[df_mod['IS_SLR'] == True, :]
dfs_mod[key] = df_mod
extr_str = '_JUST_SLR'
else:
extr_str = ''
# Setup PDF to save PDF plots to
savetitle = 'ARNA_altitude_binned_{}'.format(flight_ID)
pdff = AC.plot2pdfmulti(title=savetitle, open=True, dpi=dpi)
# - Plot up location of flights
plt_flightpath_spatially_over_CVAO(df=df_obs, flight_ID=flight_ID)
AC.plot2pdfmulti(pdff, savetitle, dpi=dpi, tight=True)
plt.close()
# - Put observations and vars to plot into a dictionary
sns.set(color_codes=True)
sns.set_context(context, font_scale=font_scale)
# Force alt to be in units of km
ALT_var = 'Altitude (km)'
Y_unit = ALT_var
key4GEOSCF = 'GEOS-CF'
for key in dfs_mod.keys():
# if key in dfs_mod.keys():
df_mod = dfs_mod[key]
if key4GEOSCF == key:
df_mod[ALT_var] = AC.hPa_to_Km(df_mod['model-lev'].values)
else:
df_mod[ALT_var] = AC.hPa_to_Km(df_mod['PRESS'].values)
dfs_mod[key] = df_mod
df_obs[ALT_var] = df_obs['ALT_GIN'].values / 1E3
        # Only consider offline model output?
if just_plot_GEOS_Chem:
# GEOSChem_varname = sorted(list(dfs_mod.keys())
GCvarname = 'FP-Nest'
data_d = {GCvarname: dfs_mod[GCvarname], 'Obs.': df_obs}
else:
data_d = AC.merge_two_dicts(dfs_mod, {'Obs.': df_obs})
print('Plotting model runs: ', data_d.keys())
# - Now plot up flight time series plots by variable
title_str = "Altitude binned '{}' ({}) during flight '{}'"
# Setup color dictionary
color_dict = {'GEOS-CF': 'red', 'Obs.': 'k'}
CB_color_cycle = AC.get_CB_color_cycle()
for n_key, key in enumerate(list(data_d.keys())):
if key not in color_dict.keys():
color_dict[key] = CB_color_cycle[n_key]
unit_d = {}
mod2obs_varnames = {
'CO': 'CO_AERO', 'O3': 'O3_TECO', 'NO2': 'no2_mr', 'NO': 'no_mr',
'HNO2': 'hono_mr',
'NOx': 'NOx'
}
units_d = {
'CO': 'ppbv', 'O3': 'ppbv', 'NO2': 'pptv', 'NO': 'pptv', 'NOx': 'pptv',
'HNO2': 'pptv', 'HONO': 'pptv',
}
range_d = {
'CO': (50, 400), 'O3': (-10, 100), 'NO2': (-50, 500), 'NO': (-50, 500),
'NOx': (-50, 500),
'HNO2': (-60, 60), 'HONO': (-60, 60),
}
NOx_specs = ['HNO2', 'NOx', 'NO', 'NO2', 'HONO']
# - by variable
runs = list(sorted(data_d.keys()))
# Which variables to use?
# vars2plot = list(sorted(mod2obs_varnames.keys()))[::-1]
vars2plot = ['CO', 'O3', 'NOx', 'NO2', 'NO', 'HNO2']
print(vars2plot)
print(df_obs.columns)
vars2plot = [
i for i in vars2plot if mod2obs_varnames[i] in df_obs.columns
]
# What bins should be used?
print('Plotting:', runs)
bins = [0.5*i for i in np.arange(15)]
for var2plot in vars2plot:
fig = plt.figure()
ax = plt.gca()
# Now loop data
for n_key, key_ in enumerate(runs):
print(n_key, key_, var2plot)
#
if key_ == 'Obs.':
varname = mod2obs_varnames[var2plot]
else:
varname = var2plot
# Setup an axis label
units = units_d[var2plot]
xlabel = '{} ({})'.format(var2plot, units)
# Add alt to DataFrame
df = pd.DataFrame({
var2plot: data_d[key_][varname],
ALT_var: data_d[key_][ALT_var]
})
# Scale the modelled values to the same units
if key_ != 'Obs.':
scaleby = AC.get_unit_scaling(units)
df[var2plot] = df[var2plot].values * scaleby
print(df.head())
# drop any NaNs from the DataFrame
s_shape = df.shape
df.dropna(axis=0, how='any', inplace=True)
if s_shape != df.shape:
                    pcent = (s_shape[0] - df.shape[0]) / float(s_shape[0]) * 100.
                    pstr_dtr = 'WARNING: dropped values - shape {}=>{} ({:.2f}%)'
                    print(pstr_dtr.format(s_shape, df.shape, pcent))
# Plot up as binned boxplots using existing function
try:
AC.binned_boxplots_by_altitude(df=df, fig=fig, ax=ax,
var2bin_by=ALT_var,
label=key_, xlabel=xlabel,
binned_var=var2plot,
num_of_datasets=len(runs),
bins=bins,
widths=0.15,
dataset_num=n_key,
color=color_dict[key_])
                except Exception as exc:
                    print('WARNING: could not plot {}: {}'.format(key_, exc))
# Make NOx species be on a log scale
xscale = 'linear'
if (var2plot in NOx_specs):
xscale = 'linear'
# xscale = 'log'
ax.set_xscale(xscale)
            if xscale == 'log':
                ax.set_xlim(0.3, 400)
# Beautify plot
plt.legend()
plt.title(title_str.format(var2plot, units, flight_ID))
plt.xlim(range_d[var2plot])
# Save to PDF
AC.plot2pdfmulti(pdff, savetitle, dpi=dpi, tight=True)
if show_plot:
plt.show()
plt.close()
# - Save entire pdf
AC.plot2pdfmulti(pdff, savetitle, close=True, dpi=dpi)
plt.close('all')
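The altitude-binning step performed by ``AC.binned_boxplots_by_altitude`` (grouping each variable into the 0.5 km bins built above) can be sketched with pandas. ``bin_by_altitude`` is a hypothetical simplified stand-in for the AC helper, returning per-bin medians rather than full boxplot statistics:

```python
import numpy as np
import pandas as pd


def bin_by_altitude(df, var, alt_var='Altitude (km)'):
    """Group a variable into 0.5 km altitude bins (simplified sketch).

    Mirrors the bin edges used above: bins = [0.5*i for i in np.arange(15)],
    i.e. 0-7 km in 0.5 km steps; each bin is labelled by its lower edge.
    """
    bins = [0.5 * i for i in np.arange(15)]
    labels = bins[:-1]  # label each bin by its lower edge
    binned = pd.cut(df[alt_var], bins=bins, labels=labels)
    return df.groupby(binned, observed=False)[var].median()
```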
def plt_comp_by_alt_4ARNA_together(dpi=320, just_SLR=True, show_plot=False,
RunSet=None, res='4x5', flight_nums=[],
just_plot_GEOS_Chem=False,
inc_GEOSChem=False,
context="paper", font_scale=0.75):
"""
Plot up altitude binned comparisons between core obs. and model data
"""
# Setup the pdf file to use
savetitle = 'ARNA_altitude_binned_combined_file'
if just_SLR:
savetitle += '_JUST_SLR'
pdff = AC.plot2pdfmulti(title=savetitle, open=True, dpi=dpi)
# Call the standard species plotter
plt_comp_by_alt_4ARNA_all(just_SLR=just_SLR, context=context,
RunSet='FP-Nest', res='0.25x0.3125',
inc_GEOSChem=True,
just_plot_GEOS_Chem=True,
savetitle=savetitle,
pdff=pdff,
close_pdf=False,
)
# Call the CIMS plotter
plt_comp_by_alt_4ARNA_CIMS_all_DUST(context=context,
inc_GEOSChem=True,
just_plot_GEOS_Chem=True,
plt_model=True,
RunSet='FP-Nest', res='0.25x0.3125',
savetitle=savetitle,
pdff=pdff,
plt_map=False,
close_pdf=False,
)
# And physical variables
#
# And SWAS
# Call ... ????
# - Save entire pdf
AC.plot2pdfmulti(pdff, savetitle, close=True, dpi=dpi)
plt.close('all')
def plt_comp_by_alt_4ARNA_flights_CIMS(dpi=320, just_SLR=False,
show_plot=False,
RunSet=None, res='4x5', flight_nums=[],
just_plot_GEOS_Chem=False,
inc_GEOSChem=False,
context="paper", font_scale=0.75):
"""
Plot up altitude binned comparisons between core obs. and model data
"""
# Which flights to plot?
# Just use non-transit ARNA flights
if len(flight_nums) == 0:
flight_nums = [
# 217, # Missing data for C217 (NOy)
218, 219, 220,
# 221, # Missing data for C221 (NOy)
222, 223,
            # 224, # Missing data for C224 (BrO... )
            # 225, # Missing data for C225 (BrO... )
]
flight_IDs = ['C{}'.format(i) for i in flight_nums]
# - Loop by flight and retrieve the files as dataframes (mod + obs)
# Observations
dfs_obs = {}
for flight_ID in flight_IDs:
df = get_CIMS_data4flight(flight_ID=flight_ID)
df = add_derived_FAAM_flags2df4flight(df=df, flight_ID=flight_ID)
dfs_obs[flight_ID] = df
# Model
dfs_mod_CF = {}
for flight_ID in flight_IDs:
df = get_GEOSCF4flightnum(flight_ID=flight_ID)
df = add_derived_FAAM_flags2df4flight(df=df, flight_ID=flight_ID)
dfs_mod_CF[flight_ID] = df
# Model - GEOS-Chem (offline)
if inc_GEOSChem:
# RunSet='MERRA2-0.5-initial'
# res='0.5x0.625'
# RunSet='MERRA2-BC'
# res='4x5'
# RunSet='FP-Nest'
# res='0.25x0.3125'
dfs_mod_GC = {}
for flight_ID in flight_IDs:
dfs = get_GEOSChem4flightnum(flight_ID=flight_ID, res=res,
RunSet=RunSet,)
for key in dfs.keys():
df = dfs[key]
df = add_derived_FAAM_flags2df4flight(df=df,
flight_ID=flight_ID)
dfs[key] = df
dfs_mod_GC[flight_ID] = dfs
del dfs
# - Now plot up
for flight_ID in flight_IDs:
print(flight_ID)
# Get observations and model timeseries data as a DataFrame
df_obs = dfs_obs[flight_ID]
# df_mod = dfs_mod[flight_ID]
df_mod_CF = dfs_mod_CF[flight_ID]
if inc_GEOSChem:
if just_plot_GEOS_Chem:
dfs_mod = dfs_mod_GC[flight_ID]
mod_label_master = RunSet
else:
dfs_mod_GC4flight = dfs_mod_GC[flight_ID]
dfs_mod = {'GEOS-CF': df_mod_CF}
for key in list(dfs_mod_GC4flight.keys()):
dfs_mod[key] = dfs_mod_GC4flight[key]
mod_label_master = 'GEOS-CF'
else:
mod_label_master = 'GEOS-CF'
dfs_mod = {'GEOS-CF': df_mod_CF}
print(('!'*30, mod_label_master, dfs_mod.keys()))
# Only consider data during SLRs?
if just_SLR:
df_obs = df_obs.loc[df_obs['IS_SLR'] == True, :]
# df_mod = df_mod.loc[df_mod['IS_SLR'] == True, :]
for key in dfs_mod.keys():
df_mod = dfs_mod[key]
df_mod = df_mod.loc[df_mod['IS_SLR'] == True, :]
dfs_mod[key] = df_mod
extr_str = '_JUST_SLR'
else:
extr_str = ''
# Setup PDF to save PDF plots to
savetitle_str = 'ARNA_altitude_binned_{}_CIMS{}'
savetitle = savetitle_str.format(flight_ID, extr_str)
pdff = AC.plot2pdfmulti(title=savetitle, open=True, dpi=dpi)
# - Plot up location of flights
plt_flightpath_spatially_over_CVAO(df=df_obs, flight_ID=flight_ID)
AC.plot2pdfmulti(pdff, savetitle, dpi=dpi, tight=True)
plt.close()
# - Put observations and vars to plot into a dictionary
sns.set(color_codes=True)
sns.set_context(context, font_scale=font_scale)
# Force alt to be in units of km
ALT_var = 'Altitude (km)'
Y_unit = ALT_var
# df_mod[ALT_var] = AC.hPa_to_Km( df_mod['model-lev'].values )
key4GEOSCF = 'GEOS-CF'
for key in dfs_mod.keys():
# if key in dfs_mod.keys():
df_mod = dfs_mod[key]
if key4GEOSCF == key:
df_mod[ALT_var] = AC.hPa_to_Km(df_mod['model-lev'].values)
else:
df_mod[ALT_var] = AC.hPa_to_Km(df_mod['PRESS'].values)
dfs_mod[key] = df_mod
df_obs[ALT_var] = df_obs['ALT_GIN'].values / 1E3
        # Only consider offline model output?
if just_plot_GEOS_Chem:
# GEOSChem_varname = sorted(list(dfs_mod.keys())
GCvarname = 'FP-Nest'
data_d = {GCvarname: dfs_mod[GCvarname], 'Obs.': df_obs}
else:
data_d = AC.merge_two_dicts(dfs_mod, {'Obs.': df_obs})
print('Plotting model runs: ', data_d.keys())
# data_d = {'GEOS-CF': df_mod, 'Obs.':df_obs}
# - Now plot up flight time series plots by variable
title_str = "Altitude binned '{}' ({}) during flight '{}'"
        # Setup color dictionary
color_dict = {'GEOS-CF': 'red', 'Obs.': 'k'}
CB_color_cycle = AC.get_CB_color_cycle()
for n_key, key in enumerate(list(data_d.keys())):
if key not in color_dict.keys():
color_dict[key] = CB_color_cycle[n_key]
unit_d = {}
mod2obs_varnames = {
'BrO': 'BrO',
'HNO3': 'HNO3',
'HNO2': 'HONO',
# 'CO':'CO_AERO', 'O3':'O3_TECO', 'NO2':'no2_mr', 'NO':'no_mr',
# 'HNO2':'hono_mr',
# 'NOx':'NOx'
}
units_d = {
# 'CO':'ppbv', 'O3':'ppbv', 'NO2':'pptv', 'NO':'pptv', 'NOx':'pptv',
'BrO': 'pptv', 'HNO3': 'pptv', 'HNO2': 'pptv', 'HONO': 'pptv',
}
range_d = {
# 'CO':(50, 400), 'O3':(-10, 100), 'NO2':(-50, 500), 'NO':(-50, 500),
# 'NOx':(-50, 500),
'HNO2': (-10, 60),
'HNO3': (-30, 1500),
'BrO': (-0.2, 1.0),
'HONO': (-10, 60),
}
NOx_specs = ['HNO2', 'NOx', 'NO', 'NO2', 'HONO']
# - by variable
runs = list(sorted(data_d.keys()))
# Which variables to use?
vars2plot = mod2obs_varnames.keys()
print(vars2plot)
print(df_obs.columns)
vars2plot = [
i for i in vars2plot if mod2obs_varnames[i] in df_obs.columns
]
# What bins should be used?
bins = [0.5*i for i in np.arange(15)]
for var2plot in vars2plot:
fig = plt.figure()
ax = plt.gca()
# Now loop data
for n_key, key_ | |
request, please pass async_req=True
>>> thread = api.security_get_security_group_categories_v1(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[RightCategory]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.security_get_security_group_categories_v1_with_http_info(**kwargs) # noqa: E501
def security_get_security_group_categories_v1_with_http_info(self, **kwargs): # noqa: E501
"""Get all Security Group categories # noqa: E501
Operation to get IDs and names for all available Security Group categories. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.security_get_security_group_categories_v1_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[RightCategory], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method security_get_security_group_categories_v1" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/V1/getsecuritygroupcategories', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[RightCategory]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
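Every generated ``*_with_http_info`` method opens with the same keyword-argument validation before dispatching to ``call_api``. The pattern reduces to the sketch below (``validate_kwargs`` is a hypothetical standalone helper, not part of the generated client):

```python
def validate_kwargs(allowed, kwargs):
    """Reject unexpected keyword arguments (sketch of the generated check).

    The generated methods accept the operation's own parameters plus a
    fixed set of transport options, and raise for anything else.
    """
    common = ['async_req', '_return_http_data_only',
              '_preload_content', '_request_timeout']
    params = dict(kwargs)
    for key in params:
        if key not in allowed + common:
            # The real client raises ApiTypeError, a TypeError subclass
            raise TypeError(
                "Got an unexpected keyword argument '%s'" % key)
    return params
```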
def security_get_security_group_categories_v2(self, **kwargs): # noqa: E501
"""Get all Security Group categories # noqa: E501
Operation to get IDs and names for all available Security Group categories. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.security_get_security_group_categories_v2(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SecurityRightCategoriesResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.security_get_security_group_categories_v2_with_http_info(**kwargs) # noqa: E501
def security_get_security_group_categories_v2_with_http_info(self, **kwargs): # noqa: E501
"""Get all Security Group categories # noqa: E501
Operation to get IDs and names for all available Security Group categories. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.security_get_security_group_categories_v2_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SecurityRightCategoriesResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method security_get_security_group_categories_v2" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/V2/getsecuritygroupcategories', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SecurityRightCategoriesResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def security_get_security_group_rights_by_group_id_and_category_id_v1(self, groupid, categoryid, **kwargs): # noqa: E501
"""Get permissions for a Security Group by category # noqa: E501
Operation to get permissions for a Security Group by category. To get Security Group IDs, use \"Get all available Security Groups.\" To get Security Group category IDs, use \"Get all Security Group categories.\" # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.security_get_security_group_rights_by_group_id_and_category_id_v1(groupid, categoryid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str groupid: Specify the Security Group ID (required)
:param str categoryid: Specify the Security Group category ID (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[Right]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.security_get_security_group_rights_by_group_id_and_category_id_v1_with_http_info(groupid, categoryid, **kwargs) # noqa: E501
def security_get_security_group_rights_by_group_id_and_category_id_v1_with_http_info(self, groupid, categoryid, **kwargs): # noqa: E501
"""Get permissions for a Security Group by category # noqa: E501
Operation to get permissions for a Security Group by category. To get Security Group IDs, use \"Get all available Security Groups.\" To get Security Group category IDs, use \"Get all Security Group categories.\" # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.security_get_security_group_rights_by_group_id_and_category_id_v1_with_http_info(groupid, categoryid, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str groupid: Specify the Security Group ID (required)
:param str categoryid: Specify the Security Group category ID (required)
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[Right], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['groupid', 'categoryid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method security_get_security_group_rights_by_group_id_and_category_id_v1" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'groupid' is set
if self.api_client.client_side_validation and ('groupid' not in local_var_params or # noqa: E501
local_var_params['groupid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `groupid` when calling `security_get_security_group_rights_by_group_id_and_category_id_v1`") # noqa: E501
# verify the required parameter 'categoryid' is set
if self.api_client.client_side_validation and ('categoryid' not in local_var_params or # noqa: E501
local_var_params['categoryid'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `categoryid` when calling `security_get_security_group_rights_by_group_id_and_category_id_v1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'groupid' in local_var_params:
path_params['groupid'] = local_var_params['groupid'] # noqa: E501
if 'categoryid' in local_var_params:
path_params['categoryid'] = local_var_params['categoryid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/V1/getsecuritygrouprights/groupid/{groupid}/categoryid/{categoryid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Right]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
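Path parameters such as ``{groupid}`` and ``{categoryid}`` are substituted into the endpoint template by ``api_client.call_api`` before the request is issued. A minimal sketch of that substitution step (the real client also URL-quotes each value, which is omitted here):

```python
def build_path(template, path_params):
    """Fill {placeholder} segments of an endpoint template (sketch).

    Simplified: no URL-quoting of values, unlike the real api_client.
    """
    path = template
    for name, value in path_params.items():
        path = path.replace('{%s}' % name, str(value))
    return path
```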
def security_get_security_group_rights_by_group_id_and_category_id_v2(self, groupid, categoryid, **kwargs): # noqa: E501
"""Get permissions for a Security Group by category # noqa: E501
Operation to get permissions for a Security Group by category. To get Security Group IDs, | |
columns.append(p.key)
elif hasattr(p, 'columns'):
if len(p.columns) > 1:
filtered = tools.filter_foreign_columns(self.model.__table__, p.columns)
if len(filtered) == 0:
continue
elif len(filtered) > 1:
warnings.warn('Can not convert multiple-column properties (%s.%s)' % (self.model, p.key))
continue
column = filtered[0]
else:
column = p.columns[0]
if column.foreign_keys:
continue
if not self.column_display_pk and column.primary_key:
continue
columns.append(p.key)
return columns
def scaffold_sortable_columns(self):
"""
Return a dictionary of sortable columns.
Key is column name, value is sort column/field.
"""
columns = dict()
for p in self._get_model_iterator():
if hasattr(p, 'columns'):
# Sanity check
if len(p.columns) > 1:
# Multi-column properties are not supported
continue
column = p.columns[0]
# Can't sort on primary or foreign keys by default
if column.foreign_keys:
continue
if not self.column_display_pk and column.primary_key:
continue
columns[p.key] = column
return columns
def get_sortable_columns(self):
"""
Returns a dictionary of the sortable columns. Key is a model
field name and value is sort column (for example - attribute).
If `column_sortable_list` is set, will use it. Otherwise, will call
`scaffold_sortable_columns` to get them from the model.
"""
self._sortable_joins = dict()
if self.column_sortable_list is None:
return self.scaffold_sortable_columns()
else:
result = dict()
for c in self.column_sortable_list:
if isinstance(c, tuple):
if isinstance(c[1], tuple):
column, path = [], []
for item in c[1]:
column_item, path_item = tools.get_field_with_path(self.model, item)
column.append(column_item)
path.append(path_item)
column_name = c[0]
else:
column, path = tools.get_field_with_path(self.model, c[1])
column_name = c[0]
else:
column, path = tools.get_field_with_path(self.model, c)
column_name = text_type(c)
if path and (hasattr(path[0], 'property') or isinstance(path[0], list)):
self._sortable_joins[column_name] = path
elif path:
raise Exception("For sorting columns in a related table, "
"column_sortable_list requires a string "
"like '<relation name>.<column name>'. "
"Failed on: {0}".format(c))
else:
# column is in same table, use only model attribute name
if getattr(column, 'key', None) is not None:
column_name = column.key
# column_name must match column_name used in `get_list_columns`
result[column_name] = column
return result
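``column_sortable_list`` entries come in two shapes: a bare field name, or a ``(label, field)`` tuple. The normalization the method performs on each entry can be sketched as follows (``normalize_sortable`` is a hypothetical helper; the real method additionally resolves the field to a model attribute and records join paths):

```python
def normalize_sortable(entry):
    """Split a column_sortable_list entry into (display name, sort key).

    Mirrors the string-vs-tuple branch in get_sortable_columns, without
    the model-attribute resolution or join handling.
    """
    if isinstance(entry, tuple):
        return entry[0], entry[1]
    return str(entry), entry
```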
def get_column_names(self, only_columns, excluded_columns):
"""
Returns a list of tuples with the model field name and formatted
field name.
Overridden to handle special columns like InstrumentedAttribute.
:param only_columns:
List of columns to include in the results. If not set,
`scaffold_list_columns` will generate the list from the model.
:param excluded_columns:
List of columns to exclude from the results.
"""
if excluded_columns:
only_columns = [c for c in only_columns if c not in excluded_columns]
formatted_columns = []
for c in only_columns:
try:
column, path = tools.get_field_with_path(self.model, c)
if path:
# column is a relation (InstrumentedAttribute), use full path
column_name = text_type(c)
else:
# column is in same table, use only model attribute name
if getattr(column, 'key', None) is not None:
column_name = column.key
else:
column_name = text_type(c)
except AttributeError:
# TODO: See ticket #1299 - allow virtual columns. Probably figure out
# better way to handle it. For now just assume if column was not found - it
# is virtual and there's column formatter for it.
column_name = text_type(c)
visible_name = self.get_column_name(column_name)
# column_name must match column_name in `get_sortable_columns`
formatted_columns.append((column_name, visible_name))
return formatted_columns
def init_search(self):
"""
Initialize search. Returns `True` if search is supported for this
view.
For SQLAlchemy, this will initialize internal fields: list of
column objects used for filtering, etc.
"""
if self.column_searchable_list:
self._search_fields = []
for name in self.column_searchable_list:
attr, joins = tools.get_field_with_path(self.model, name)
if not attr:
raise Exception('Failed to find field for search field: %s' % name)
if tools.is_hybrid_property(self.model, name):
column = attr
if isinstance(name, string_types):
column.key = name.split('.')[-1]
self._search_fields.append((column, joins))
else:
for column in tools.get_columns_for_field(attr):
self._search_fields.append((column, joins))
return bool(self.column_searchable_list)
def search_placeholder(self):
"""
Return search placeholder.
        For example, if `column_labels` and `column_searchable_list` are set:
            class MyModelView(BaseModelView):
                column_labels = dict(name='Name', last_name='Last Name')
                column_searchable_list = ('name', 'last_name')
        the placeholder is: "Name, Last Name"
"""
if not self.column_searchable_list:
return None
placeholders = []
for searchable in self.column_searchable_list:
if isinstance(searchable, InstrumentedAttribute):
placeholders.append(
self.column_labels.get(searchable.key, searchable.key))
else:
placeholders.append(
self.column_labels.get(searchable, searchable))
return u', '.join(placeholders)
def scaffold_filters(self, name):
"""
Return list of enabled filters
"""
attr, joins = tools.get_field_with_path(self.model, name)
if attr is None:
raise Exception('Failed to find field for filter: %s' % name)
# Figure out filters for related column
if is_relationship(attr):
filters = []
for p in self._get_model_iterator(attr.property.mapper.class_):
if hasattr(p, 'columns'):
# TODO: Check for multiple columns
column = p.columns[0]
if column.foreign_keys or column.primary_key:
continue
visible_name = '%s / %s' % (self.get_column_name(attr.prop.target.name),
self.get_column_name(p.key))
type_name = type(column.type).__name__
flt = self.filter_converter.convert(type_name,
column,
visible_name)
if flt:
table = column.table
if joins:
self._filter_joins[column] = joins
elif tools.need_join(self.model, table):
self._filter_joins[column] = [table]
filters.extend(flt)
return filters
else:
is_hybrid_property = tools.is_hybrid_property(self.model, name)
if is_hybrid_property:
column = attr
if isinstance(name, string_types):
column.key = name.split('.')[-1]
else:
columns = tools.get_columns_for_field(attr)
if len(columns) > 1:
                raise Exception('Cannot filter on more than one column for %s' % name)
column = columns[0]
# If filter related to relation column (represented by
# relation_name.target_column) we collect here relation name
joined_column_name = None
if isinstance(name, string_types) and '.' in name:
joined_column_name = name.split('.')[0]
# Join not needed for hybrid properties
if (not is_hybrid_property and tools.need_join(self.model, column.table) and
name not in self.column_labels):
if joined_column_name:
visible_name = '%s / %s / %s' % (
joined_column_name,
self.get_column_name(column.table.name),
self.get_column_name(column.name)
)
else:
visible_name = '%s / %s' % (
self.get_column_name(column.table.name),
self.get_column_name(column.name)
)
else:
if not isinstance(name, string_types):
visible_name = self.get_column_name(name.property.key)
else:
if self.column_labels and name in self.column_labels:
visible_name = self.column_labels[name]
else:
visible_name = self.get_column_name(name)
visible_name = visible_name.replace('.', ' / ')
type_name = type(column.type).__name__
flt = self.filter_converter.convert(
type_name,
column,
visible_name,
options=self.column_choices.get(name),
)
key_name = column
# In case of filter related to relation column filter key
# must be named with relation name (to prevent following same
# target column to replace previous)
if joined_column_name:
key_name = "{0}.{1}".format(joined_column_name, column)
for f in flt:
f.key_name = key_name
if joins:
self._filter_joins[key_name] = joins
elif not is_hybrid_property and tools.need_join(self.model, column.table):
self._filter_joins[key_name] = [column.table]
return flt
def handle_filter(self, filter):
if isinstance(filter, sqla_filters.BaseSQLAFilter):
column = filter.column
# hybrid_property joins are not supported yet
if (isinstance(column, InstrumentedAttribute) and
tools.need_join(self.model, column.table)):
self._filter_joins[column] = [column.table]
return filter
def scaffold_form(self):
"""
Create form from the model.
"""
converter = self.model_form_converter(self.session, self)
form_class = form.get_form(self.model, converter,
base_class=self.form_base_class,
only=self.form_columns,
exclude=self.form_excluded_columns,
field_args=self.form_args,
ignore_hidden=self.ignore_hidden,
extra_fields=self.form_extra_fields)
if self.inline_models:
form_class = self.scaffold_inline_form_models(form_class)
return form_class
def scaffold_list_form(self, widget=None, validators=None):
"""
Create form for the `index_view` using only the columns from
`self.column_editable_list`.
:param widget:
WTForms widget class. Defaults to `XEditableWidget`.
:param validators:
`form_args` dict with only validators
{'name': {'validators': [required()]}}
"""
converter = self.model_form_converter(self.session, self)
form_class = form.get_form(self.model, converter,
base_class=self.form_base_class,
only=self.column_editable_list,
field_args=validators)
return create_editable_list_form(self.form_base_class, form_class,
widget)
def scaffold_inline_form_models(self, form_class):
"""
Contribute inline models to the form
:param form_class:
Form class
"""
default_converter = self.inline_model_form_converter(
self.session, self, self.model_form_converter)
for m in self.inline_models:
if not hasattr(m, 'inline_converter'):
form_class = default_converter.contribute(
self.model, form_class, m)
continue
custom_converter = m.inline_converter(
self.session, self, self.model_form_converter)
form_class = custom_converter.contribute(
self.model, form_class, m)
return form_class
def scaffold_auto_joins(self):
"""
Return a list of joined tables by going through the
displayed columns.
"""
if not self.column_auto_select_related:
return []
relations = set()
for p in self._get_model_iterator():
if hasattr(p, 'direction'):
# Check if it is pointing to same model
if p.mapper.class_ == self.model:
continue
            # Check if it is pointing to a different bind
source_bind = getattr(self.model, '__bind_key__', None)
target_bind = getattr(p.mapper.class_, '__bind_key__', None)
if source_bind != target_bind:
continue
if p.direction.name in ['MANYTOONE', 'MANYTOMANY']:
relations.add(p.key)
joined = []
for prop, name in self._list_columns:
if prop in relations:
joined.append(getattr(self.model, prop))
return joined
# AJAX foreignkey support
def _create_ajax_loader(self, name, options):
return create_ajax_loader(self.model, self.session, name, name, options)
# Database-related API
def get_query(self):
"""
Return a query for the model type.
This method can be used to set a "persistent filter" on an index_view.
Example::
class MyView(ModelView):
def get_query(self):
return super(MyView, self).get_query().filter(User.username == current_user.username)
If you override this method, don't forget to also override `get_count_query`, for displaying the correct
item count in the list view, and `get_one`, which is used when retrieving records for the edit view.
"""
return self.session.query(self.model)
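The docstring above stresses that `get_query`, `get_count_query`, and `get_one` should be overridden together. A minimal stand-in sketch in plain Python (no Flask-Admin or SQLAlchemy assumed; `BaseView` and `MineOnlyView` are hypothetical names) shows why the three must stay consistent: if only the list query is filtered, the pager total and the edit view drift out of sync.

```python
class BaseView:
    """Stand-in for a model view; self.rows plays the role of session.query(model)."""
    def __init__(self, rows):
        self.rows = rows

    def get_query(self):
        return list(self.rows)

    def get_count_query(self):
        # Counts whatever get_query returns, so it follows any override.
        return len(self.get_query())

    def get_one(self, pk):
        # Looks records up through get_query, so a persistent filter
        # also protects the edit view.
        return next(r for r in self.get_query() if r["id"] == pk)


class MineOnlyView(BaseView):
    """Persistent filter: only rows owned by the current user ('me')."""
    def get_query(self):
        return [r for r in super().get_query() if r["owner"] == "me"]


rows = [{"id": 1, "owner": "me"}, {"id": 2, "owner": "other"}]
view = MineOnlyView(rows)
print(view.get_count_query())  # 1 — the count respects the filter
```

Because `get_count_query` and `get_one` are written in terms of `get_query`, overriding only the latter keeps all three consistent; Flask-Admin's real methods issue independent queries, which is exactly why the docstring tells you to override all three.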
def get_count_query(self):
"""
        Return the count query for the model type.
A ``query(self.model).count()`` approach produces an excessive
subquery, so ``query(func.count('*'))`` should be used instead.
        See commit
# Source repository: tomasdubec/openstack-cinder
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for Volume Code.
"""
import datetime
import os
import mox
import shutil
import tempfile
from cinder import context
from cinder import db
from cinder import exception
from cinder import flags
from cinder.image import image_utils
from cinder.openstack.common import importutils
from cinder.openstack.common.notifier import api as notifier_api
from cinder.openstack.common.notifier import test_notifier
from cinder.openstack.common import rpc
import cinder.policy
from cinder import quota
from cinder import test
from cinder.tests import fake_flags
from cinder.tests.image import fake as fake_image
from cinder.volume import configuration as conf
from cinder.volume import driver
from cinder.volume import iscsi
QUOTAS = quota.QUOTAS
FLAGS = flags.FLAGS
class VolumeTestCase(test.TestCase):
"""Test Case for volumes."""
def setUp(self):
super(VolumeTestCase, self).setUp()
vol_tmpdir = tempfile.mkdtemp()
self.flags(connection_type='fake',
volumes_dir=vol_tmpdir,
notification_driver=[test_notifier.__name__])
self.volume = importutils.import_object(FLAGS.volume_manager)
self.context = context.get_admin_context()
self.stubs.Set(iscsi.TgtAdm, '_get_target', self.fake_get_target)
fake_image.stub_out_image_service(self.stubs)
test_notifier.NOTIFICATIONS = []
def tearDown(self):
try:
shutil.rmtree(FLAGS.volumes_dir)
except OSError:
pass
notifier_api._reset_drivers()
super(VolumeTestCase, self).tearDown()
def fake_get_target(obj, iqn):
return 1
@staticmethod
def _create_volume(size=0, snapshot_id=None, image_id=None,
metadata=None):
"""Create a volume object."""
vol = {}
vol['size'] = size
vol['snapshot_id'] = snapshot_id
vol['image_id'] = image_id
vol['user_id'] = 'fake'
vol['project_id'] = 'fake'
vol['availability_zone'] = FLAGS.storage_availability_zone
vol['status'] = "creating"
vol['attach_status'] = "detached"
vol['host'] = FLAGS.host
if metadata is not None:
vol['metadata'] = metadata
return db.volume_create(context.get_admin_context(), vol)
def test_create_delete_volume(self):
"""Test volume can be created and deleted."""
# Need to stub out reserve, commit, and rollback
def fake_reserve(context, expire=None, project_id=None, **deltas):
return ["RESERVATION"]
def fake_commit(context, reservations, project_id=None):
pass
def fake_rollback(context, reservations, project_id=None):
pass
self.stubs.Set(QUOTAS, "reserve", fake_reserve)
self.stubs.Set(QUOTAS, "commit", fake_commit)
self.stubs.Set(QUOTAS, "rollback", fake_rollback)
volume = self._create_volume()
volume_id = volume['id']
self.assertEquals(len(test_notifier.NOTIFICATIONS), 0)
self.volume.create_volume(self.context, volume_id)
self.assertEquals(len(test_notifier.NOTIFICATIONS), 2)
self.assertEqual(volume_id, db.volume_get(context.get_admin_context(),
volume_id).id)
self.volume.delete_volume(self.context, volume_id)
vol = db.volume_get(context.get_admin_context(read_deleted='yes'),
volume_id)
self.assertEquals(vol['status'], 'deleted')
self.assertEquals(len(test_notifier.NOTIFICATIONS), 4)
self.assertRaises(exception.NotFound,
db.volume_get,
self.context,
volume_id)
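The quota stubbing boilerplate above (`fake_reserve`/`fake_commit`/`fake_rollback`) recurs in several tests. A self-contained sketch of the same pattern using `unittest.mock` in place of the mox-era `self.stubs.Set` helper; the `Quotas` class here is a stand-in, not cinder's real `QUOTAS` engine:

```python
from unittest import mock


class Quotas:
    """Stand-in quota engine; the real methods would hit the database."""
    def reserve(self, context, **deltas):
        raise RuntimeError("real quota engine not available in tests")

    def commit(self, context, reservations):
        raise RuntimeError("real quota engine not available in tests")


QUOTAS = Quotas()

# Patch reserve/commit for the duration of the block, mirroring
# self.stubs.Set(QUOTAS, "reserve", fake_reserve) in the test above.
with mock.patch.object(Quotas, "reserve", return_value=["RESERVATION"]), \
     mock.patch.object(Quotas, "commit", return_value=None):
    reservations = QUOTAS.reserve(None, volumes=1, gigabytes=1)
    QUOTAS.commit(None, reservations)

print(reservations)  # ['RESERVATION']
```

Once the `with` block exits, the original (raising) methods are restored automatically, which is the advantage over manual stub bookkeeping.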
def test_create_delete_volume_with_metadata(self):
"""Test volume can be created with metadata and deleted."""
test_meta = {'fake_key': 'fake_value'}
volume = self._create_volume(0, None, metadata=test_meta)
volume_id = volume['id']
self.volume.create_volume(self.context, volume_id)
result_meta = {
volume.volume_metadata[0].key: volume.volume_metadata[0].value}
self.assertEqual(result_meta, test_meta)
self.volume.delete_volume(self.context, volume_id)
self.assertRaises(exception.NotFound,
db.volume_get,
self.context,
volume_id)
def test_create_volume_with_invalid_metadata(self):
"""Test volume create with too much metadata fails."""
volume_api = cinder.volume.api.API()
test_meta = {'fake_key': 'fake_value' * 256}
self.assertRaises(exception.InvalidVolumeMetadataSize,
volume_api.create,
self.context,
1,
'name',
'description',
None,
None,
None,
test_meta)
def test_create_volume_with_volume_type(self):
"""Test volume creation with default volume type."""
def fake_reserve(context, expire=None, project_id=None, **deltas):
return ["RESERVATION"]
def fake_commit(context, reservations, project_id=None):
pass
def fake_rollback(context, reservations, project_id=None):
pass
self.stubs.Set(QUOTAS, "reserve", fake_reserve)
self.stubs.Set(QUOTAS, "commit", fake_commit)
self.stubs.Set(QUOTAS, "rollback", fake_rollback)
volume_api = cinder.volume.api.API()
# Create volume with default volume type while default
# volume type doesn't exist, volume_type_id should be NULL
volume = volume_api.create(self.context,
1,
'name',
'description')
self.assertEquals(volume['volume_type_id'], None)
# Create default volume type
vol_type = fake_flags.def_vol_type
db.volume_type_create(context.get_admin_context(),
dict(name=vol_type, extra_specs={}))
db_vol_type = db.volume_type_get_by_name(context.get_admin_context(),
vol_type)
# Create volume with default volume type
volume = volume_api.create(self.context,
1,
'name',
'description')
self.assertEquals(volume['volume_type_id'], db_vol_type.get('id'))
# Create volume with specific volume type
vol_type = 'test'
db.volume_type_create(context.get_admin_context(),
dict(name=vol_type, extra_specs={}))
db_vol_type = db.volume_type_get_by_name(context.get_admin_context(),
vol_type)
volume = volume_api.create(self.context,
1,
'name',
'description',
volume_type=db_vol_type)
self.assertEquals(volume['volume_type_id'], db_vol_type.get('id'))
def test_delete_busy_volume(self):
"""Test volume survives deletion if driver reports it as busy."""
volume = self._create_volume()
volume_id = volume['id']
self.volume.create_volume(self.context, volume_id)
self.mox.StubOutWithMock(self.volume.driver, 'delete_volume')
self.volume.driver.delete_volume(
mox.IgnoreArg()).AndRaise(exception.VolumeIsBusy(
volume_name='fake'))
self.mox.ReplayAll()
res = self.volume.delete_volume(self.context, volume_id)
self.assertEqual(True, res)
volume_ref = db.volume_get(context.get_admin_context(), volume_id)
self.assertEqual(volume_id, volume_ref.id)
self.assertEqual("available", volume_ref.status)
self.mox.UnsetStubs()
self.volume.delete_volume(self.context, volume_id)
def test_create_volume_from_snapshot(self):
"""Test volume can be created from a snapshot."""
volume_src = self._create_volume()
self.volume.create_volume(self.context, volume_src['id'])
snapshot_id = self._create_snapshot(volume_src['id'])['id']
self.volume.create_snapshot(self.context, volume_src['id'],
snapshot_id)
volume_dst = self._create_volume(0, snapshot_id)
self.volume.create_volume(self.context, volume_dst['id'], snapshot_id)
self.assertEqual(volume_dst['id'],
db.volume_get(
context.get_admin_context(),
volume_dst['id']).id)
self.assertEqual(snapshot_id,
db.volume_get(context.get_admin_context(),
volume_dst['id']).snapshot_id)
self.volume.delete_volume(self.context, volume_dst['id'])
self.volume.delete_snapshot(self.context, snapshot_id)
self.volume.delete_volume(self.context, volume_src['id'])
def test_too_big_volume(self):
"""Ensure failure if a too large of a volume is requested."""
# FIXME(vish): validation needs to move into the data layer in
# volume_create
return True
try:
volume = self._create_volume(1001)
self.volume.create_volume(self.context, volume)
self.fail("Should have thrown TypeError")
except TypeError:
pass
def test_run_attach_detach_volume(self):
"""Make sure volume can be attached and detached from instance."""
instance_uuid = '12345678-1234-5678-1234-567812345678'
mountpoint = "/dev/sdf"
volume = self._create_volume()
volume_id = volume['id']
self.volume.create_volume(self.context, volume_id)
db.volume_attached(self.context, volume_id, instance_uuid, mountpoint)
vol = db.volume_get(context.get_admin_context(), volume_id)
self.assertEqual(vol['status'], "in-use")
self.assertEqual(vol['attach_status'], "attached")
self.assertEqual(vol['mountpoint'], mountpoint)
self.assertEqual(vol['instance_uuid'], instance_uuid)
self.assertRaises(exception.VolumeAttached,
self.volume.delete_volume,
self.context,
volume_id)
db.volume_detached(self.context, volume_id)
vol = db.volume_get(self.context, volume_id)
self.assertEqual(vol['status'], "available")
self.volume.delete_volume(self.context, volume_id)
self.assertRaises(exception.VolumeNotFound,
db.volume_get,
self.context,
volume_id)
@test.skip_test
def test_preattach_status_volume(self):
"""Ensure volume goes into pre-attaching state"""
instance_uuid = '12345678-1234-5678-1234-567812345678'
mountpoint = "/dev/sdf"
volume = db.volume_create(self.context, {'size': 1,
'status': 'available'})
volume_id = volume['id']
volume_api = cinder.volume.api.API()
volume_api.attach(self.context, volume, instance_uuid, mountpoint)
vol = db.volume_get(self.context, volume_id)
self.assertEqual(vol['status'], "available")
self.assertEqual(vol['attach_status'], None)
self.assertEqual(vol['instance_uuid'], None)
def test_concurrent_volumes_get_different_targets(self):
"""Ensure multiple concurrent volumes get different targets."""
volume_ids = []
targets = []
def _check(volume_id):
"""Make sure targets aren't duplicated."""
volume_ids.append(volume_id)
admin_context = context.get_admin_context()
iscsi_target = db.volume_get_iscsi_target_num(admin_context,
volume_id)
self.assert_(iscsi_target not in targets)
targets.append(iscsi_target)
total_slots = FLAGS.iscsi_num_targets
        for _index in xrange(total_slots):
            volume = self._create_volume()
            self.volume.create_volume(self.context, volume['id'])
            _check(volume['id'])
for volume_id in volume_ids:
self.volume.delete_volume(self.context, volume_id)
def test_multi_node(self):
# TODO(termie): Figure out how to test with two nodes,
# each of them having a different FLAG for storage_node
# This will allow us to test cross-node interactions
pass
@staticmethod
def _create_snapshot(volume_id, size='0'):
"""Create a snapshot object."""
snap = {}
snap['volume_size'] = size
snap['user_id'] = 'fake'
snap['project_id'] = 'fake'
snap['volume_id'] = volume_id
snap['status'] = "creating"
return db.snapshot_create(context.get_admin_context(), snap)
def test_create_delete_snapshot(self):
"""Test snapshot can be created and deleted."""
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
snapshot_id = self._create_snapshot(volume['id'])['id']
self.volume.create_snapshot(self.context, volume['id'], snapshot_id)
self.assertEqual(snapshot_id,
db.snapshot_get(context.get_admin_context(),
snapshot_id).id)
self.volume.delete_snapshot(self.context, snapshot_id)
snap = db.snapshot_get(context.get_admin_context(read_deleted='yes'),
snapshot_id)
self.assertEquals(snap['status'], 'deleted')
self.assertRaises(exception.NotFound,
db.snapshot_get,
self.context,
snapshot_id)
self.volume.delete_volume(self.context, volume['id'])
def test_cant_delete_volume_in_use(self):
        """Test volume can't be deleted while in an invalid state."""
# create a volume and assign to host
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
volume['status'] = 'in-use'
volume['host'] = 'fakehost'
volume_api = cinder.volume.api.API()
# 'in-use' status raises InvalidVolume
self.assertRaises(exception.InvalidVolume,
volume_api.delete,
self.context,
volume)
# clean up
self.volume.delete_volume(self.context, volume['id'])
def test_force_delete_volume(self):
"""Test volume can be forced to delete."""
# create a volume and assign to host
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
volume['status'] = 'error_deleting'
volume['host'] = 'fakehost'
volume_api = cinder.volume.api.API()
# 'error_deleting' volumes can't be deleted
self.assertRaises(exception.InvalidVolume,
volume_api.delete,
self.context,
volume)
# delete with force
volume_api.delete(self.context, volume, force=True)
# status is deleting
volume = db.volume_get(context.get_admin_context(), volume['id'])
self.assertEquals(volume['status'], 'deleting')
# clean up
self.volume.delete_volume(self.context, volume['id'])
def test_cant_delete_volume_with_snapshots(self):
"""Test volume can't be deleted with dependent snapshots."""
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
snapshot_id = self._create_snapshot(volume['id'])['id']
self.volume.create_snapshot(self.context, volume['id'], snapshot_id)
self.assertEqual(snapshot_id,
db.snapshot_get(context.get_admin_context(),
snapshot_id).id)
volume['status'] = 'available'
volume['host'] = 'fakehost'
volume_api = cinder.volume.api.API()
self.assertRaises(exception.InvalidVolume,
volume_api.delete,
self.context,
volume)
self.volume.delete_snapshot(self.context, snapshot_id)
self.volume.delete_volume(self.context, volume['id'])
def test_can_delete_errored_snapshot(self):
        """Test a snapshot in error status can be deleted, but an
        unrecognized status cannot."""
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
snapshot_id = self._create_snapshot(volume['id'])['id']
self.volume.create_snapshot(self.context, volume['id'], snapshot_id)
snapshot = db.snapshot_get(context.get_admin_context(),
snapshot_id)
volume_api = cinder.volume.api.API()
snapshot['status'] = 'badstatus'
self.assertRaises(exception.InvalidVolume,
volume_api.delete_snapshot,
self.context,
snapshot)
snapshot['status'] = 'error'
self.volume.delete_snapshot(self.context, snapshot_id)
self.volume.delete_volume(self.context, volume['id'])
def test_create_snapshot_force(self):
"""Test snapshot in use can be created forcibly."""
def fake_cast(ctxt, topic, msg):
pass
self.stubs.Set(rpc, 'cast', fake_cast)
instance_uuid = '12345678-1234-5678-1234-567812345678'
volume = self._create_volume()
self.volume.create_volume(self.context, volume['id'])
db.volume_attached(self.context, volume['id'], instance_uuid,
'/dev/sda1')
volume_api = cinder.volume.api.API()
volume = volume_api.get(self.context, volume['id'])
self.assertRaises(exception.InvalidVolume,
volume_api.create_snapshot,
self.context, volume,
'fake_name', 'fake_description')
snapshot_ref = volume_api.create_snapshot_force(self.context,
volume,
'fake_name',
'fake_description')
db.snapshot_destroy(self.context, snapshot_ref['id'])
db.volume_destroy(self.context, volume['id'])
def test_delete_busy_snapshot(self):
        """Test snapshot survives deletion if driver reports it as busy."""
volume = self._create_volume()
volume_id = volume['id']
self.volume.create_volume(self.context, volume_id)
snapshot_id = self._create_snapshot(volume_id)['id']
self.volume.create_snapshot(self.context, volume_id, snapshot_id)
self.mox.StubOutWithMock(self.volume.driver, 'delete_snapshot')
self.volume.driver.delete_snapshot(
mox.IgnoreArg()).AndRaise(
exception.SnapshotIsBusy(snapshot_name='fake'))
self.mox.ReplayAll()
self.volume.delete_snapshot(self.context, snapshot_id)
snapshot_ref = db.snapshot_get(self.context, snapshot_id)
self.assertEqual(snapshot_id, snapshot_ref.id)
self.assertEqual("available", snapshot_ref.status)
self.mox.UnsetStubs()
self.volume.delete_snapshot(self.context, snapshot_id)
self.volume.delete_volume(self.context, volume_id)
def _create_volume_from_image(self, expected_status,
fakeout_copy_image_to_volume=False):
        """Create a volume from an image and verify that the volume
        reaches the expected status."""
def fake_local_path(volume):
return dst_path
def fake_copy_image_to_volume(context, volume,
image_service, image_id):
pass
def fake_fetch_to_raw(context, image_service, image_id, vol_path):
pass
dst_fd, dst_path = tempfile.mkstemp()
os.close(dst_fd)
self.stubs.Set(self.volume.driver, 'local_path', fake_local_path)
self.stubs.Set(image_utils, 'fetch_to_raw', fake_fetch_to_raw)
if fakeout_copy_image_to_volume:
self.stubs.Set(self.volume, '_copy_image_to_volume',
fake_copy_image_to_volume)
image_id = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77'
volume_id = 1
# creating volume testdata
db.volume_create(self.context,
{'id': volume_id,
'updated_at': datetime.datetime(1, 1, 1, 1, 1, 1),
'display_description': 'Test Desc',
'size': 20,
'status': 'creating',
'instance_uuid': None,
'host': 'dummy'})
try:
self.volume.create_volume(self.context,
volume_id,
image_id=image_id)
volume = db.volume_get(self.context, volume_id)
self.assertEqual(volume['status'], expected_status)
finally:
# type: (...) -> models.AdminApiTokenGetResponse
"""
Displays API tokens for the specified administrators.
Args:
admins (list[FixedReference], optional):
A list of admins to query for. Overrides admin_ids and admin_names keyword arguments.
admin_ids (list[str], optional):
A list of admin IDs. If after filtering, there is not at least one admin
resource that matches each of the elements, then an error is returned. This
cannot be provided together with the `admin_names` query parameter.
admin_names (list[str], optional):
A list of admin names. If there is not at least one admin resource that matches
each of the elements, then an error is returned. This cannot be provided
                together with the `admin_ids` query parameter.
continuation_token (str, optional):
An opaque token to iterate over a collection of resources.
expose_api_token (bool, optional):
If `true`, exposes the API token of the current user.
filter (Filter, optional):
A filter to include only resources that match the specified criteria.
limit (int, optional):
Limit the number of resources in the response. If not specified, defaults to
1000.
offset (int, optional):
The offset of the first resource to return from a collection.
sort (list[Property], optional):
Sort the response by the specified Properties. Can also be a single element.
async_req (bool, optional):
Request runs in separate thread and method returns
multiprocessing.pool.ApplyResult.
_return_http_data_only (bool, optional):
Returns only data field.
_preload_content (bool, optional):
Response is converted into objects.
_request_timeout (int, optional):
Total request timeout in seconds.
Returns:
ValidResponse: If the call was successful.
ErrorResponse: If the call was not successful.
Raises:
PureError: If calling the API fails.
ValueError: If a parameter is of an invalid type.
TypeError: If invalid or missing parameters are used.
"""
kwargs = dict(
admin_ids=admin_ids,
admin_names=admin_names,
continuation_token=continuation_token,
expose_api_token=expose_api_token,
filter=filter,
limit=limit,
offset=offset,
sort=sort,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
)
kwargs = {k: v for k, v in kwargs.items() if v is not None}
endpoint = self._administrators_api.api20_admins_api_tokens_get_with_http_info
_process_references(admins, ['admin_ids', 'admin_names'], kwargs)
return self._call_api(endpoint, kwargs)
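Every client method above funnels its keywords through the same `{k: v for k, v in kwargs.items() if v is not None}` filter, so only parameters the caller explicitly set reach the generated `*_with_http_info` endpoint. The pattern in isolation (the `build_kwargs` name is illustrative, not part of the client):

```python
def build_kwargs(**kwargs):
    """Drop keyword arguments that were left at their None default."""
    return {k: v for k, v in kwargs.items() if v is not None}


# Only admin_names was set by the caller; limit and filter fall away,
# so the REST layer never sees spurious query parameters.
params = build_kwargs(admin_names=["pureuser"], limit=None, filter=None)
print(params)  # {'admin_names': ['pureuser']}
```

This is why the docstrings can say "If not specified, defaults to 1000": an omitted `limit` is never sent, and the server applies its own default.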
def post_admins_api_tokens(
self,
admins=None, # type: List[models.ReferenceType]
admin_ids=None, # type: List[str]
admin_names=None, # type: List[str]
timeout=None, # type: int
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.AdminApiTokenResponse
"""
Creates API tokens for the specified administrators.
Args:
admins (list[FixedReference], optional):
A list of admins to query for. Overrides admin_ids and admin_names keyword arguments.
admin_ids (list[str], optional):
A list of admin IDs. If after filtering, there is not at least one admin
resource that matches each of the elements, then an error is returned. This
cannot be provided together with the `admin_names` query parameter.
admin_names (list[str], optional):
A list of admin names. If there is not at least one admin resource that matches
each of the elements, then an error is returned. This cannot be provided
                together with the `admin_ids` query parameter.
timeout (int, optional):
The duration of API token validity, in milliseconds.
async_req (bool, optional):
Request runs in separate thread and method returns
multiprocessing.pool.ApplyResult.
_return_http_data_only (bool, optional):
Returns only data field.
_preload_content (bool, optional):
Response is converted into objects.
_request_timeout (int, optional):
Total request timeout in seconds.
Returns:
ValidResponse: If the call was successful.
ErrorResponse: If the call was not successful.
Raises:
PureError: If calling the API fails.
ValueError: If a parameter is of an invalid type.
TypeError: If invalid or missing parameters are used.
"""
kwargs = dict(
admin_ids=admin_ids,
admin_names=admin_names,
timeout=timeout,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
)
kwargs = {k: v for k, v in kwargs.items() if v is not None}
endpoint = self._administrators_api.api20_admins_api_tokens_post_with_http_info
_process_references(admins, ['admin_ids', 'admin_names'], kwargs)
return self._call_api(endpoint, kwargs)
def delete_admins_cache(
self,
references=None, # type: List[models.ReferenceType]
ids=None, # type: List[str]
names=None, # type: List[str]
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> None
"""
Delete cached administrator role information by name or ID.
Args:
references (list[FixedReference], optional):
A list of references to query for. Overrides ids and names keyword arguments.
ids (list[str], optional):
A list of resource IDs. If after filtering, there is not at least one resource
that matches each of the elements of `ids`, then an error is returned. This
cannot be provided together with the `name` or `names` query parameters.
names (list[str], optional):
A list of resource names. If there is not at least one resource that matches
each of the elements of `names`, then an error is returned.
async_req (bool, optional):
Request runs in separate thread and method returns
multiprocessing.pool.ApplyResult.
_return_http_data_only (bool, optional):
Returns only data field.
_preload_content (bool, optional):
Response is converted into objects.
_request_timeout (int, optional):
Total request timeout in seconds.
Returns:
ValidResponse: If the call was successful.
ErrorResponse: If the call was not successful.
Raises:
PureError: If calling the API fails.
ValueError: If a parameter is of an invalid type.
TypeError: If invalid or missing parameters are used.
"""
kwargs = dict(
ids=ids,
names=names,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
)
kwargs = {k: v for k, v in kwargs.items() if v is not None}
endpoint = self._administrators_api.api20_admins_cache_delete_with_http_info
_process_references(references, ['ids', 'names'], kwargs)
return self._call_api(endpoint, kwargs)
def get_admins_cache(
self,
references=None, # type: List[models.ReferenceType]
continuation_token=None, # type: str
filter=None, # type: str
ids=None, # type: List[str]
limit=None, # type: int
names=None, # type: List[str]
offset=None, # type: int
refresh=None, # type: bool
sort=None, # type: List[str]
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.AdminCacheGetResponse
"""
List cached administrator information used to determine role based access
control privileges.
Args:
references (list[FixedReference], optional):
A list of references to query for. Overrides ids and names keyword arguments.
continuation_token (str, optional):
An opaque token to iterate over a collection of resources.
filter (Filter, optional):
A filter to include only resources that match the specified criteria.
ids (list[str], optional):
A list of resource IDs. If after filtering, there is not at least one resource
that matches each of the elements of `ids`, then an error is returned. This
cannot be provided together with the `name` or `names` query parameters.
limit (int, optional):
Limit the number of resources in the response. If not specified, defaults to
1000.
names (list[str], optional):
A list of resource names. If there is not at least one resource that matches
each of the elements of `names`, then an error is returned.
offset (int, optional):
The offset of the first resource to return from a collection.
refresh (bool, optional):
Whether to refresh the user info from directory service. If not specified,
defaults to `false`.
sort (list[Property], optional):
Sort the response by the specified Properties. Can also be a single element.
async_req (bool, optional):
Request runs in separate thread and method returns
multiprocessing.pool.ApplyResult.
_return_http_data_only (bool, optional):
Returns only data field.
_preload_content (bool, optional):
Response is converted into objects.
_request_timeout (int, optional):
Total request timeout in seconds.
Returns:
ValidResponse: If the call was successful.
ErrorResponse: If the call was not successful.
Raises:
PureError: If calling the API fails.
ValueError: If a parameter is of an invalid type.
TypeError: If invalid or missing parameters are used.
"""
kwargs = dict(
continuation_token=continuation_token,
filter=filter,
ids=ids,
limit=limit,
names=names,
offset=offset,
refresh=refresh,
sort=sort,
async_req=async_req,
_return_http_data_only=_return_http_data_only,
_preload_content=_preload_content,
_request_timeout=_request_timeout,
)
kwargs = {k: v for k, v in kwargs.items() if v is not None}
endpoint = self._administrators_api.api20_admins_cache_get_with_http_info
_process_references(references, ['ids', 'names'], kwargs)
return self._call_api(endpoint, kwargs)
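The `limit`/`continuation_token` parameters above describe the usual cursor-based pagination contract. A minimal sketch of walking all pages with it; the `client` object, its `items` attribute, and the `continuation_token` attribute on the response are assumptions for illustration, not guaranteed by this SDK:

```python
# Hypothetical iteration over every cached admin entry, page by page.
# `client` is assumed to be an already-authenticated client exposing
# get_admins_cache(limit=..., continuation_token=...).
def iter_admin_cache(client, page_size=100):
    token = None
    while True:
        resp = client.get_admins_cache(limit=page_size, continuation_token=token)
        for item in resp.items:
            yield item
        # Stop once the server no longer returns an opaque continuation token.
        token = getattr(resp, 'continuation_token', None)
        if not token:
            break
```

The generator hides the token bookkeeping so callers can simply `for admin in iter_admin_cache(client): ...`.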
def get_admins(
self,
references=None, # type: List[models.ReferenceType]
continuation_token=None, # type: str
expose_api_token=None, # type: bool
filter=None, # type: str
ids=None, # type: List[str]
limit=None, # type: int
names=None, # type: List[str]
offset=None, # type: int
sort=None, # type: List[str]
async_req=False, # type: bool
_return_http_data_only=False, # type: bool
_preload_content=True, # type: bool
_request_timeout=None, # type: Optional[int]
):
# type: (...) -> models.AdminGetResponse
"""
List the administrator's attributes, including the API token and public key.
Args:
references (list[FixedReference], optional):
A list of references to query for. Overrides ids and names keyword arguments.
continuation_token (str, optional):
An opaque token to iterate over a collection of resources.
"""Classification with abstention network architecture."""
import warnings
import numpy as np
# np.warnings was removed in NumPy 1.24; use the stdlib warnings module instead.
warnings.filterwarnings('ignore', category=np.VisibleDeprecationWarning)
__author__ = "<NAME> and <NAME>"
__date__ = "March 17, 2021"
#----------------------------------------------------------------
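The function below builds a registry mapping experiment names to hyperparameter dicts. A minimal sketch of how such a registry is typically consumed, with a defaults layer for keys that some entries omit (e.g. `ridge_param`); the helper name is an assumption, not part of this module:

```python
def get_experiment(exps, exp_name, defaults=None):
    """Look up an experiment config, filling in optional defaults.

    Experiment-specific values override the defaults dict.
    """
    if exp_name not in exps:
        raise KeyError(f'unknown experiment: {exp_name}')
    cfg = dict(defaults or {})
    cfg.update(exps[exp_name])  # experiment-specific values win
    return cfg
```

This keeps the per-experiment dicts minimal: only the values that differ from the defaults need to be spelled out.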
def define_experiments(exp_name):
exps = {# olsr
'olsr0': {'exp_name': 'olsr0',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'n_samples': [4000,1000],#noisy data, not noisy
'noise': [.5,.05],
'slope': [1.,.7],
'yint': [-2.,.6],
'x_sigma': [.25,.5],
'undersample': False,
'spinup': 0,
'hiddens': [5,5],
'lr_init': .0001,
'batch_size': 32,
'np_seed': 99,
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 225,
'n_coarse_epochs': 0,
'n_epochs': 1000,
'patience': 200,
'boxcox': False,
'ridge_param': 0.,
},
'olsr1': {'exp_name': 'olsr1',
'loss': 'AbstentionLogLoss',
'updater': 'Constant',
'nupd': np.nan,
'numClasses': 2,
'n_samples': [4000,1000],#noisy data, not noisy
'noise': [.5,.05],
'slope': [1.,.7],
'yint': [-2.,.6],
'x_sigma': [.25,.5],
'undersample': False,
'spinup': 0,
'hiddens': [5,5],
'lr_init': .0001,
'batch_size': 32,
'np_seed': 99,
'act_fun': 'relu',
'coarse_setpoint': .1,
'fixed_alpha': .1,
'n_spinup_epochs': 225,
'n_coarse_epochs': 0,
'n_epochs': 1000,
'patience': 200,
'boxcox': False,
'ridge_param': 0.,
},
#tranquilFOOr
'tranquilFOOr0': {'exp_name': 'tranquilFOOr0',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .0,
},
'tranquilFOOr1': {'exp_name': 'tranquilFOOr1',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 0.5,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
},
'tranquilFOOr2': {'exp_name': 'tranquilFOOr2',
'loss': 'StandardMAE',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 0.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
},
'tranquilFOOr3': {'exp_name': 'tranquilFOOr3',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 0.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
},
'tranquilFOOr4': {'exp_name': 'tranquilFOOr4',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': True,
},
'tranquilFOOr5': {'exp_name': 'tranquilFOOr5',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 25,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .5,
},
'tranquilFOOr6': {'exp_name': 'tranquilFOOr6',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 25,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .5,
},
'tranquilFOOr7': {'exp_name': 'tranquilFOOr7',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 0.5,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 25,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .5,
},
'tranquilFOOr8': {'exp_name': 'tranquilFOOr8',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .15,
},
'tranquilFOOr9': {'exp_name': 'tranquilFOOr9',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .15,
},
'tranquilFOOr10': {'exp_name': 'tranquilFOOr10',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr11': {'exp_name': 'tranquilFOOr11',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .01,
},
'tranquilFOOr12': {'exp_name': 'tranquilFOOr12',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .05,
},
'tranquilFOOr13': {'exp_name': 'tranquilFOOr13',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr14': {'exp_name': 'tranquilFOOr14',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 10,
'n_epochs': 300,
'patience': 60,
'boxcox': False,
'ridge_param': .0,
},
'tranquilFOOr15': {'exp_name': 'tranquilFOOr15',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 10,
'n_coarse_epochs': 0,
'n_epochs': 100,
'patience': 10,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr16': {'exp_name': 'tranquilFOOr16',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 15,
'n_coarse_epochs': 0,
'n_epochs': 100,
'patience': 10,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr17': {'exp_name': 'tranquilFOOr17',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 10,
'n_coarse_epochs': 5,
'n_epochs': 100,
'patience': 10,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr18': {'exp_name': 'tranquilFOOr18',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
'batch_size': 32,
'np_seed': 99,
'simple_data': '15x60',
'foo_region': 'nhENSO',
'act_fun': 'relu',
'coarse_setpoint': .1,
'n_spinup_epochs': 20,
'n_coarse_epochs': 0,
'n_epochs': 100,
'patience': 10,
'boxcox': False,
'ridge_param': .1,
},
'tranquilFOOr19': {'exp_name': 'tranquilFOOr19',
'loss': 'AbstentionLogLoss',
'updater': 'Colorado',
'nupd': 6,
'numClasses': 2,
'prNoise': 1.0,
'cutoff': 0.5,
'nSamples': 18000,
'undersample': False,
'hiddens': [50,25],
'lr_init': 0.0005,
not isinstance(d, list):
d = [d]
for d1 in d:
for f in qualified_method(d1):
if f not in seen:
self.add_call(node, name, f)
seen.add(f)
return name
def visit_Assign(self, node):
for v in node.targets:
if isinstance(v, ast.Attribute):
self.add_attr(v)
def visit_Attribute(self, node):
ops = self.traces[node.lineno]
for op, symbol, data in ops:
if symbol == node.attr and op in ["LOAD_ATTR"]:
ref = self.add_local_ref(
node,
target=node.value,
name=node.value + "." + symbol,
data=data)
if data and len(data) == 2:
_, rhs = data
self.typemap[ref.id] = rhs
break
elif symbol == node.attr and op in ["STORE_ATTR"]:
defn = self.add_local_def(node)
if self.current_class:
# We only support attr definitions within a class definition.
self.current_env.setattr(node.attr, defn)
return node.value + "." + node.attr
def visit_Subscript(self, node):
return node.value
def visit_DictComp(self, _node):
return "<expr>"
def visit_ListComp(self, _node):
return "<expr>"
def process_import(self, node, is_from):
"""Common code for Import and ImportFrom."""
store_ops = get_opcodes(self.traces, node.lineno, "STORE_NAME")
import_ops = get_opcodes(self.traces, node.lineno, "IMPORT_NAME")
# Only record modules that pytype has resolved in self.modules
def is_resolved(defn, symbol, data):
return (symbol == defn.name and data and
isinstance(data[0], abstract.Module))
def filter_ops(op_list, defn):
return [(symbol, data) for _, symbol, data in op_list
if is_resolved(defn, symbol, data)]
def add_import_ref(name, data, loc):
self.add_global_ref(
node, name=name, data=data, location=loc, typ="Import")
for alias in node.names:
name = alias.asname if alias.asname else alias.name
d = self.add_local_def(node, name=name)
loc = self.locs[d.id][-1].location
if alias.asname or is_from:
# for |import x.y as z| or |from x import y as z| we want {z: x.y}
for symbol, data in filter_ops(store_ops, d):
self.modules[d.id] = data[0].full_name
add_import_ref(name=symbol, data=data, loc=loc)
else:
# |import x.y| puts both {x: x} and {x.y: x.y} in modules
for symbol, data in filter_ops(import_ops, d):
add_import_ref(name=symbol, data=data, loc=loc)
for mod in module_utils.get_all_prefixes(name):
# TODO(mdemello): Create references for every element.
self.modules[d.scope + "." + mod] = mod
def visit_Import(self, node):
self.process_import(node, is_from=False)
def visit_ImportFrom(self, node):
self.process_import(node, is_from=True)
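The `|import x.y|` branch of `process_import` records every dotted prefix of the module name. A standalone sketch of what `module_utils.get_all_prefixes` is assumed to produce (this reimplementation is for illustration only):

```python
def get_all_prefixes(name):
    """Yield each dotted prefix of a module path.

    'a.b.c' -> 'a', 'a.b', 'a.b.c'
    """
    parts = name.split('.')
    for i in range(1, len(parts) + 1):
        yield '.'.join(parts[:i])
```

Recording all prefixes lets a later reference to `x` or `x.y` resolve to a module even when only `x.y.z` was imported.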
# pylint: enable=invalid-name
# pylint: enable=missing-docstring
class Indexer(object):
"""Runs the indexer visitor and collects its results."""
def __init__(self, source, loader, module_name, kythe_args=None):
self.source = source
self.loader = loader
self.resolved_modules = loader.get_resolved_modules()
self.imports = xref_utils.process_imports_map(loader.imports_map)
self.module_name = module_name
self.traces = source.traces
self.kythe = kythe.Kythe(source, kythe_args)
self.defs = None
self.locs = None
self.refs = None
self.envs = None
self.modules = None
self.typemap = None
self.classmap = None
self.calls = None
self.links = []
def index(self, code_ast):
"""Index an AST corresponding to self.source."""
v = IndexVisitor(self.source, self.module_name, self.kythe)
v.visit(code_ast)
self.defs = v.defs
self.locs = v.locs
self.refs = v.refs
self.envs = v.envs
self.modules = v.modules
self.typemap = v.typemap
self.classmap = v.classmap
self.calls = v.calls
def get_def_offsets(self, defloc):
"""Get the byte offsets for a definition."""
defn = self.defs[defloc.def_id]
typ = defn.typ
if typ == "Attribute":
start, end = self._get_attr_bounds(defn.name, defloc.location)
else:
line, col = defloc.location
start = self.source.get_offset(line, col)
if typ in DEF_OFFSETS:
start += DEF_OFFSETS[typ]
end = start + len(defn.name)
return (start, end)
def get_doc_offsets(self, doc):
"""Get the byte offsets for a docstring."""
line, col = doc.location
start = self.source.get_offset(line, col)
end = start + doc.length
return (start, end)
def finalize(self):
"""Postprocess the information gathered by the tree visitor.
Note that these functions need to be run in order; some of them depend on
information generated by previous ones.
"""
links = self._lookup_refs()
self.links = links
self._process_deflocs()
self._process_links(links)
self._process_calls(links)
def _process_deflocs(self):
"""Generate kythe edges for definitions."""
for def_id in self.locs:
defn = self.defs[def_id]
for defloc in self.locs[def_id]:
defn = self.defs[defloc.def_id]
defn_vname = self.kythe.vname(defn.to_signature())
start, end = self.get_def_offsets(defloc)
anchor_vname = self.kythe.add_anchor(start, end)
self.kythe.add_fact(
source=defn_vname,
fact_name="node/kind",
fact_value=defn.node_kind())
self.kythe.add_edge(
source=anchor_vname,
target=defn_vname,
edge_name="defines/binding")
# Emit a docstring if we have one.
doc = defn.doc
if doc:
doc_vname = self.kythe.vname(defn.doc_signature())
start, end = self.get_doc_offsets(defn.doc)
anchor_vname = self.kythe.add_anchor(start, end)
self.kythe.add_fact(
source=doc_vname,
fact_name="node/kind",
fact_value="doc")
self.kythe.add_fact(
source=doc_vname,
fact_name="text",
fact_value=doc.text)
self.kythe.add_edge(
source=anchor_vname,
target=doc_vname,
edge_name="defines")
self.kythe.add_edge(
source=doc_vname,
target=defn_vname,
edge_name="documents")
def _get_attr_bounds(self, name, location):
"""Calculate the anchor bounds for an attr access."""
# TODO(mdemello): This is pretty crude, and does not for example take into
# account multiple calls of the same attribute in a line. It is just to get
# our tests passing till we incorporate asttokens.
line, _ = location
src_line = self.source.line(line)
attr = name.split(".")[-1]
dot_attr = "." + attr
if dot_attr in src_line:
col = src_line.index(dot_attr)
start = self.source.get_offset(line, col) + 1
end = start + len(attr)
return (start, end)
else:
# We have something like
# (foo
# .bar)
# or
# (foo.
# bar)
# Look ahead up to 5 lines to find '.attr' (the ast node always starts from
# the beginning of the chain, so foo.\nbar.\nbaz etc could span several
# lines).
start, end = self.get_multiline_bounds(location, 5, dot_attr)
if start:
start, end = start + 1, end
else:
# Find consecutive lines ending with '.' and starting with 'attr'.
for l in range(line, line + 5):
if self.source.line(l).endswith("."):
next_line = self.source.next_non_comment_line(l)
text = self.source.line(next_line)
if text.lstrip().startswith(attr):
c = text.find(attr)
start, end = self.get_anchor_bounds((next_line, c), len(attr))
if not start:
# if all else fails, fall back to just spanning the name
start, end = self.get_anchor_bounds(location, len(name))
return (start, end)
def get_multiline_bounds(self, location, n_lines, text):
"""Get a span of text anywhere within n_lines of location."""
line, _ = location
text_line, text_col = self.source.find_text(line, line + n_lines, text)
if text_line:
start = self.source.get_offset(text_line, text_col)
end = start + len(text)
return (start, end)
else:
return (None, None)
def get_anchor_bounds(self, location, length):
"""Generate byte offsets from a location and length."""
line, col = location
start = self.source.get_offset(line, col)
end = start + length
return (start, end)
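`self.source.get_offset`, used throughout these anchor helpers, is defined elsewhere on the source object. A minimal sketch of the (line, col) to offset computation it is assumed to perform, for 1-indexed lines and 0-indexed columns; note this sketch counts characters, whereas the real implementation may count bytes:

```python
def get_offset(src, line, col):
    """Convert a 1-indexed line and 0-indexed column into a character offset."""
    lines = src.splitlines(keepends=True)
    # Sum the lengths of all lines before the target line, then add the column.
    return sum(len(l) for l in lines[:line - 1]) + col
```

With offsets in hand, an anchor is just `(start, start + length)`, which is exactly what `get_anchor_bounds` returns.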
def get_ref_bounds(self, ref):
if ref.typ == "Attribute":
return self._get_attr_bounds(ref.name, ref.location)
else:
return self.get_anchor_bounds(ref.location, len(ref.name))
def _make_defn_vname(self, defn):
"""Convert a definition into a kythe vname."""
if isinstance(defn, Remote):
remote = defn.module
if remote in self.resolved_modules:
if remote in self.imports:
# The canonical path from the imports_map is the best bet for
# module->filepath translation.
path = self.imports[remote]
else:
# Fallback to the filepath of the stub file, though this is not always
# accurate due to overrides.
path = self.resolved_modules[remote].filename
path = xref_utils.get_module_filepath(path)
if defn.name == IMPORT_FILE_MARKER:
# file nodes have empty signatures
sig = ""
else:
sig = "module." + defn.name
if path.startswith("pytd:"):
return self.kythe.builtin_vname(sig, path)
else:
return self.kythe.vname(sig, path)
else:
# Don't generate vnames for unresolved modules.
return None
else:
return self.kythe.vname(defn.to_signature())
def _process_links(self, links):
"""Generate kythe edges for references."""
for ref, defn in links:
if not isinstance(defn, (Definition, Remote, Module)):
# TODO(mdemello): Fixes needed for chained method calls.
continue
start, end = self.get_ref_bounds(ref)
vname = self.kythe.add_anchor(start, end)
target = self._make_defn_vname(defn)
if target:
self.kythe.add_edge(
source=vname,
target=target,
edge_name="ref")
def _process_calls(self, links):
"""Generate kythe edges for function calls.
Checks if a function call corresponds to a resolved reference, and generates
a ref/call to that reference's source definition if so.
Args:
links: A list of (reference, definition) tuples.
"""
link_map = collections.defaultdict(list)
for ref, defn in links:
link_map[ref.location].append((ref, defn))
for call in self.calls:
call_links = link_map[call.location]
call_ref = None
call_defn = None
for ref, d in call_links:
if ref.name == call.name:
call_ref = ref
call_defn = d
break
if call_defn:
target = self._make_defn_vname(call_defn)
if target:
start, end = self.get_ref_bounds(call_ref)
anchor_vname = self.kythe.anchor_vname(start, end)
self.kythe.add_edge(
source=anchor_vname,
target=target,
edge_name="ref/call")
# The call is a child of the enclosing function/class (this lets us
# generate call graphs).
if ref.scope != "module":
parent_defn = self.defs.get(call_ref.scope)
if parent_defn:
# TODO(mdemello): log the 'else' case; it should never happen.
self.kythe.add_edge(
source=anchor_vname,
target=self.kythe.vname(parent_defn.to_signature()),
edge_name="childof")
else:
assert False, ref
def _lookup_remote_symbol(self, ref, defn):
"""Try to look up a definition in an imported module."""
if defn.id in self.modules:
remote = self.modules[defn.id]
resolved = True
elif defn.typ in ["Import", "ImportFrom"]:
# Allow unresolved modules too.
remote = defn.name
resolved = False
else:
return None
name = ref.name
if name.startswith(remote):
name = name[(len(remote) + 1):]
return Remote(module=remote, name=name, resolved=resolved)
def _lookup_class_attr(self, name, attr):
"""Look up a class attribute in the environment."""
env = self.envs["module"]
if name not in env.env:
return None
d = env.env[name]
class_env = self.envs[d.id]
_, defn = class_env.lookup(attr)
return defn
def _get_attribute_class(self,
# -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
from Tea.model import TeaModel
from typing import List
class Config(TeaModel):
"""
Model for initializing the client
"""
def __init__(
self,
access_key_id: str = None,
access_key_secret: str = None,
security_token: str = None,
protocol: str = None,
read_timeout: int = None,
connect_timeout: int = None,
http_proxy: str = None,
https_proxy: str = None,
endpoint: str = None,
no_proxy: str = None,
max_idle_conns: int = None,
user_agent: str = None,
socks_5proxy: str = None,
socks_5net_work: str = None,
max_idle_time_millis: int = None,
keep_alive_duration_millis: int = None,
max_requests: int = None,
max_requests_per_host: int = None,
):
# accesskey id
self.access_key_id = access_key_id
# accesskey secret
self.access_key_secret = access_key_secret
# security token
self.security_token = security_token
# http protocol
self.protocol = protocol
# read timeout
self.read_timeout = read_timeout
# connect timeout
self.connect_timeout = connect_timeout
# http proxy
self.http_proxy = http_proxy
# https proxy
self.https_proxy = https_proxy
# endpoint
self.endpoint = endpoint
# proxy white list
self.no_proxy = no_proxy
# max idle conns
self.max_idle_conns = max_idle_conns
# user agent
self.user_agent = user_agent
# socks5 proxy
self.socks_5proxy = socks_5proxy
# socks5 network
self.socks_5net_work = socks_5net_work
# max idle time for persistent connections (ms)
self.max_idle_time_millis = max_idle_time_millis
# max keep-alive duration for persistent connections (ms)
self.keep_alive_duration_millis = keep_alive_duration_millis
# max total number of persistent connections
self.max_requests = max_requests
# max persistent connections per target host
self.max_requests_per_host = max_requests_per_host
def validate(self):
pass
def to_map(self):
result = dict()
if self.access_key_id is not None:
result['accessKeyId'] = self.access_key_id
if self.access_key_secret is not None:
result['accessKeySecret'] = self.access_key_secret
if self.security_token is not None:
result['securityToken'] = self.security_token
if self.protocol is not None:
result['protocol'] = self.protocol
if self.read_timeout is not None:
result['readTimeout'] = self.read_timeout
if self.connect_timeout is not None:
result['connectTimeout'] = self.connect_timeout
if self.http_proxy is not None:
result['httpProxy'] = self.http_proxy
if self.https_proxy is not None:
result['httpsProxy'] = self.https_proxy
if self.endpoint is not None:
result['endpoint'] = self.endpoint
if self.no_proxy is not None:
result['noProxy'] = self.no_proxy
if self.max_idle_conns is not None:
result['maxIdleConns'] = self.max_idle_conns
if self.user_agent is not None:
result['userAgent'] = self.user_agent
if self.socks_5proxy is not None:
result['socks5Proxy'] = self.socks_5proxy
if self.socks_5net_work is not None:
result['socks5NetWork'] = self.socks_5net_work
if self.max_idle_time_millis is not None:
result['maxIdleTimeMillis'] = self.max_idle_time_millis
if self.keep_alive_duration_millis is not None:
result['keepAliveDurationMillis'] = self.keep_alive_duration_millis
if self.max_requests is not None:
result['maxRequests'] = self.max_requests
if self.max_requests_per_host is not None:
result['maxRequestsPerHost'] = self.max_requests_per_host
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('accessKeyId') is not None:
self.access_key_id = m.get('accessKeyId')
if m.get('accessKeySecret') is not None:
self.access_key_secret = m.get('accessKeySecret')
if m.get('securityToken') is not None:
self.security_token = m.get('securityToken')
if m.get('protocol') is not None:
self.protocol = m.get('protocol')
if m.get('readTimeout') is not None:
self.read_timeout = m.get('readTimeout')
if m.get('connectTimeout') is not None:
self.connect_timeout = m.get('connectTimeout')
if m.get('httpProxy') is not None:
self.http_proxy = m.get('httpProxy')
if m.get('httpsProxy') is not None:
self.https_proxy = m.get('httpsProxy')
if m.get('endpoint') is not None:
self.endpoint = m.get('endpoint')
if m.get('noProxy') is not None:
self.no_proxy = m.get('noProxy')
if m.get('maxIdleConns') is not None:
self.max_idle_conns = m.get('maxIdleConns')
if m.get('userAgent') is not None:
self.user_agent = m.get('userAgent')
if m.get('socks5Proxy') is not None:
self.socks_5proxy = m.get('socks5Proxy')
if m.get('socks5NetWork') is not None:
self.socks_5net_work = m.get('socks5NetWork')
if m.get('maxIdleTimeMillis') is not None:
self.max_idle_time_millis = m.get('maxIdleTimeMillis')
if m.get('keepAliveDurationMillis') is not None:
self.keep_alive_duration_millis = m.get('keepAliveDurationMillis')
if m.get('maxRequests') is not None:
self.max_requests = m.get('maxRequests')
if m.get('maxRequestsPerHost') is not None:
self.max_requests_per_host = m.get('maxRequestsPerHost')
return self
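The generated `to_map`/`from_map` pair implements a simple snake_case-to-camelCase round-trip, skipping unset fields in both directions. A standalone sketch of the pattern, independent of the Tea runtime (the class and field names here are illustrative):

```python
class MiniModel:
    """Minimal stand-in for the generated TeaModel serialization pattern."""

    # attribute name -> wire (camelCase) key
    _fields = {'access_key_id': 'accessKeyId', 'endpoint': 'endpoint'}

    def __init__(self, access_key_id=None, endpoint=None):
        self.access_key_id = access_key_id
        self.endpoint = endpoint

    def to_map(self):
        # Only emit keys whose values are set, mirroring the generated code.
        return {wire: getattr(self, attr)
                for attr, wire in self._fields.items()
                if getattr(self, attr) is not None}

    def from_map(self, m=None):
        m = m or {}
        for attr, wire in self._fields.items():
            if m.get(wire) is not None:
                setattr(self, attr, m.get(wire))
        return self
```

Because unset values are skipped, `from_map(to_map())` preserves exactly the fields that were populated and leaves the rest at `None`.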
class TdmCpfEncodeNameVO(TeaModel):
def __init__(
self,
code: str = None,
name: str = None,
):
# housing provident fund center code
self.code = code
# housing provident fund center name
self.name = name
def validate(self):
self.validate_required(self.code, 'code')
self.validate_required(self.name, 'name')
def to_map(self):
result = dict()
if self.code is not None:
result['code'] = self.code
if self.name is not None:
result['name'] = self.name
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('code') is not None:
self.code = m.get('code')
if m.get('name') is not None:
self.name = m.get('name')
return self
class TdmCpfCitysVO(TeaModel):
def __init__(
self,
code: str = None,
name: str = None,
cpfs: List[TdmCpfEncodeNameVO] = None,
):
# city code
self.code = code
# city name
self.name = name
# list of housing provident fund centers for the city
self.cpfs = cpfs
def validate(self):
self.validate_required(self.code, 'code')
self.validate_required(self.name, 'name')
self.validate_required(self.cpfs, 'cpfs')
if self.cpfs:
for k in self.cpfs:
if k:
k.validate()
def to_map(self):
result = dict()
if self.code is not None:
result['code'] = self.code
if self.name is not None:
result['name'] = self.name
result['cpfs'] = []
if self.cpfs is not None:
for k in self.cpfs:
result['cpfs'].append(k.to_map() if k else None)
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('code') is not None:
self.code = m.get('code')
if m.get('name') is not None:
self.name = m.get('name')
self.cpfs = []
if m.get('cpfs') is not None:
for k in m.get('cpfs'):
temp_model = TdmCpfEncodeNameVO()
self.cpfs.append(temp_model.from_map(k))
return self
class ChainInfo(TeaModel):
def __init__(
self,
block_height: str = None,
translate_date: str = None,
tx_hash: str = None,
):
# block height
self.block_height = block_height
# transaction time
self.translate_date = translate_date
# transaction hash (64 characters)
self.tx_hash = tx_hash
def validate(self):
pass
def to_map(self):
result = dict()
if self.block_height is not None:
result['block_height'] = self.block_height
if self.translate_date is not None:
result['translate_date'] = self.translate_date
if self.tx_hash is not None:
result['tx_hash'] = self.tx_hash
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('block_height') is not None:
self.block_height = m.get('block_height')
if m.get('translate_date') is not None:
self.translate_date = m.get('translate_date')
if m.get('tx_hash') is not None:
self.tx_hash = m.get('tx_hash')
return self
class AuthAgreement(TeaModel):
def __init__(
self,
auth_agreement_code: str = None,
auth_agreement_type: str = None,
auth_begin_time: str = None,
auth_end_time: str = None,
auth_count: int = None,
auth_balance_count: int = None,
):
# authorization agreement code
self.auth_agreement_code = auth_agreement_code
# authorization agreement type:
# TIME: time-based authorization
# COUNT: count-based authorization
# TIME_COUNT: count-based authorization within a time window
self.auth_agreement_type = auth_agreement_type
# authorization start time
self.auth_begin_time = auth_begin_time
# authorization end time
self.auth_end_time = auth_end_time
# number of authorized uses
self.auth_count = auth_count
# remaining number of authorized uses
self.auth_balance_count = auth_balance_count
def validate(self):
self.validate_required(self.auth_agreement_code, 'auth_agreement_code')
self.validate_required(self.auth_agreement_type, 'auth_agreement_type')
def to_map(self):
result = dict()
if self.auth_agreement_code is not None:
result['auth_agreement_code'] = self.auth_agreement_code
if self.auth_agreement_type is not None:
result['auth_agreement_type'] = self.auth_agreement_type
if self.auth_begin_time is not None:
result['auth_begin_time'] = self.auth_begin_time
if self.auth_end_time is not None:
result['auth_end_time'] = self.auth_end_time
if self.auth_count is not None:
result['auth_count'] = self.auth_count
if self.auth_balance_count is not None:
result['auth_balance_count'] = self.auth_balance_count
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('auth_agreement_code') is not None:
self.auth_agreement_code = m.get('auth_agreement_code')
if m.get('auth_agreement_type') is not None:
self.auth_agreement_type = m.get('auth_agreement_type')
if m.get('auth_begin_time') is not None:
self.auth_begin_time = m.get('auth_begin_time')
if m.get('auth_end_time') is not None:
self.auth_end_time = m.get('auth_end_time')
if m.get('auth_count') is not None:
self.auth_count = m.get('auth_count')
if m.get('auth_balance_count') is not None:
self.auth_balance_count = m.get('auth_balance_count')
return self
class CertUseParams(TeaModel):
def __init__(
self,
issue_id: str = None,
):
# Certificate file ID
self.issue_id = issue_id
def validate(self):
self.validate_required(self.issue_id, 'issue_id')
def to_map(self):
result = dict()
if self.issue_id is not None:
result['issue_id'] = self.issue_id
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('issue_id') is not None:
self.issue_id = m.get('issue_id')
return self
class AuthUsedRecord(TeaModel):
def __init__(
self,
authorized_name: str = None,
auth_code: str = None,
chain_info: ChainInfo = None,
extend_params: str = None,
target_name: str = None,
tee_data: str = None,
use_date: str = None,
):
# Name of the authorized tenant:
# ID card number / unified social credit code
self.authorized_name = authorized_name
# Authorization code
self.auth_code = auth_code
# On-chain transaction information
self.chain_info = chain_info
# Extension parameters
self.extend_params = extend_params
# Subject matter: product code name
self.target_name = target_name
# Trusted (TEE) authorization content
self.tee_data = tee_data
# Time the data was used
self.use_date = use_date
def validate(self):
self.validate_required(self.authorized_name, 'authorized_name')
self.validate_required(self.auth_code, 'auth_code')
self.validate_required(self.chain_info, 'chain_info')
if self.chain_info:
self.chain_info.validate()
self.validate_required(self.extend_params, 'extend_params')
self.validate_required(self.target_name, 'target_name')
self.validate_required(self.use_date, 'use_date')
def to_map(self):
result = dict()
if self.authorized_name is not None:
result['authorized_name'] = self.authorized_name
if self.auth_code is not None:
result['auth_code'] = self.auth_code
if self.chain_info is not None:
result['chain_info'] = self.chain_info.to_map()
if self.extend_params is not None:
result['extend_params'] = self.extend_params
if self.target_name is not None:
result['target_name'] = self.target_name
if self.tee_data is not None:
result['tee_data'] = self.tee_data
if self.use_date is not None:
result['use_date'] = self.use_date
return result
def from_map(self, m: dict = None):
m = m or dict()
if m.get('authorized_name') is not None:
self.authorized_name = m.get('authorized_name')
if m.get('auth_code') is not None:
self.auth_code = m.get('auth_code')
if m.get('chain_info') is not None:
temp_model = ChainInfo()
self.chain_info = temp_model.from_map(m['chain_info'])
if m.get('extend_params') is not None:
self.extend_params = m.get('extend_params')
if m.get('target_name') is not None:
self.target_name = m.get('target_name')
if m.get('tee_data') is not None:
self.tee_data = m.get('tee_data')
if m.get('use_date') is not None:
self.use_date = m.get('use_date')
return self
<filename>legtool/gait/ripple.py
# Copyright 2014 <NAME>, <EMAIL>.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''An implementation of a simple open loop ripple gait, with optional
grouping of legs.'''
import bisect
import math
from .common import (STANCE, SWING, UNKNOWN)
from .common import (LegConfig, MechanicalConfig)
from .common import (LegResult, Command, GaitGraphLeg, GaitGraph, LegState)
from .common import (NotSupported, CommonState)
from ..tf import geometry
from ..tf import tf
class RippleConfig(object):
def __init__(self):
self.mechanical = MechanicalConfig()
self.max_cycle_time_s = 4.0
self.lift_height_mm = 80.0
self.lift_percent = 25.0
self.swing_percent = 80.0
self.position_margin_percent = 80.0
self.leg_order = []
self.body_z_offset_mm = 0.0
self.servo_speed_margin_percent = 70.0
self.statically_stable = False
self.static_center_factor = 3.0
self.static_stable_factor = 10.0
self.static_margin_mm = 20.0
def copy(self):
result = RippleConfig()
result.mechanical = self.mechanical.copy()
result.max_cycle_time_s = self.max_cycle_time_s
result.lift_height_mm = self.lift_height_mm
result.lift_percent = self.lift_percent
result.swing_percent = self.swing_percent
result.position_margin_percent = self.position_margin_percent
result.leg_order = self.leg_order[:]
result.body_z_offset_mm = self.body_z_offset_mm
result.servo_speed_margin_percent = self.servo_speed_margin_percent
result.statically_stable = self.statically_stable
result.static_center_factor = self.static_center_factor
result.static_stable_factor = self.static_stable_factor
result.static_margin_mm = self.static_margin_mm
return result
@staticmethod
def parse_leg_order(data):
'''A leg ordering is a comma separated list of leg numbers, or
of leg groups, where a leg group is a parenthesis grouped list
of leg numbers.
Return the programmatic representation of that ordering when
given a string version. On malformed input, make all attempts
to return something, even if only a subset of the input.
'''
result = []
if data == '':
return result
in_tuple = False
current_tuple = ()
current_item = ''
for x in data:
if x == '(':
if in_tuple:
return result
in_tuple = True
if x >= '0' and x <= '9':
current_item += x
else:
if len(current_item):
value = int(current_item)
current_item = ''
if in_tuple:
current_tuple += (value,)
else:
result.append(value)
if x == ')':
if not in_tuple:
return result
if len(current_tuple) == 1:
result.append(current_tuple[0])
elif len(current_tuple) > 1:
result.append(current_tuple)
current_tuple = ()
in_tuple = False
if len(current_item):
result.append(int(current_item))
return result
@staticmethod
def str_leg_order(data):
'''Given a leg ordering, return the canonical string
representation.'''
assert isinstance(data, list)
return str(data)[1:-1].replace(' ', '')
_FLOAT_ATTRIBUTES = [
'max_cycle_time_s',
'lift_height_mm',
'lift_percent',
'swing_percent',
'position_margin_percent',
'body_z_offset_mm',
'servo_speed_margin_percent',
'static_center_factor',
'static_stable_factor',
'static_margin_mm',
]
@staticmethod
def read_settings(config, group_name, leg_ik_map):
'''Populate a RippleConfig instance from the given
ConfigParser instance and group name.
:param config: Configuration to read
:param group_name: String containing the appropriate group
:param leg_ik_map: Mapping from leg number to IK instance'''
result = RippleConfig()
result.mechanical = MechanicalConfig.read_settings(
config, group_name + '.legs', leg_ik_map)
for x in RippleConfig._FLOAT_ATTRIBUTES:
if config.has_option(group_name, x):
setattr(result, x, config.getfloat(group_name, x))
if config.has_option(group_name, 'statically_stable'):
result.statically_stable = config.getboolean(
group_name, 'statically_stable')
if config.has_option(group_name, 'leg_order'):
result.leg_order = RippleConfig.parse_leg_order(
config.get(group_name, 'leg_order'))
return result
def write_settings(self, config, group_name):
'''Store this RippleConfig instance into the given
ConfigParser instance at the given group name.'''
config.add_section(group_name)
self.mechanical.write_settings(config, group_name + '.legs')
for x in self._FLOAT_ATTRIBUTES:
config.set(group_name, x, getattr(self, x))
config.set(group_name, 'statically_stable', self.statically_stable)
config.set(group_name, 'leg_order', self.str_leg_order(self.leg_order))
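The leg-order grammar handled by `parse_leg_order`/`str_leg_order` above (comma-separated leg numbers, with parenthesised groups becoming tuples) round-trips through its canonical string form. A self-contained sketch mirroring that logic, for illustration only:

```python
def parse_leg_order(data):
    """Standalone mirror of RippleConfig.parse_leg_order above."""
    result, in_tuple, current_tuple, current_item = [], False, (), ''
    for x in data:
        if x == '(':
            if in_tuple:
                return result  # malformed nested group: return what we have
            in_tuple = True
        if x.isdigit():
            current_item += x
        else:
            if current_item:
                value = int(current_item)
                current_item = ''
                if in_tuple:
                    current_tuple += (value,)
                else:
                    result.append(value)
            if x == ')':
                if not in_tuple:
                    return result
                if len(current_tuple) == 1:
                    result.append(current_tuple[0])
                elif len(current_tuple) > 1:
                    result.append(current_tuple)
                current_tuple, in_tuple = (), False
    if current_item:
        result.append(int(current_item))
    return result

def str_leg_order(data):
    """Canonical string form, as in RippleConfig.str_leg_order."""
    return str(data)[1:-1].replace(' ', '')

order = parse_leg_order('1,(2,3),4')
print(order)                 # [1, (2, 3), 4]
print(str_leg_order(order))  # 1,(2,3),4
```

Groups of more than one leg come back as tuples, single-leg groups collapse to plain integers, and malformed input yields the prefix parsed so far rather than raising.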
class RippleState(CommonState):
def __init__(self):
self.legs = {}
self.phase = 0.
self.action = 0
# robot_frame coordinates describing the start and end
# position of the current swing leg(s).
self.swing_start_pos = {}
self.swing_end_pos = {}
self.world_frame = tf.Frame()
self.robot_frame = tf.Frame(None, None, self.world_frame)
self.body_frame = tf.Frame(None, None, self.robot_frame)
self.cog_frame = tf.Frame(None, None, self.body_frame)
def copy(self):
result = RippleState()
super(RippleState, self).copy_into(result)
result.phase = self.phase
result.action = self.action
result.swing_start_pos = dict(
[(key, value.copy()) for key, value in
self.swing_start_pos.iteritems()])
result.swing_end_pos = dict(
[(key, value.copy()) for key, value in
self.swing_end_pos.iteritems()])
return result
def _sign(val):
return -1.0 if (val < 0.0) else 1.0
class Options(object):
cycle_time_s = 0.0
servo_speed_dps = 0.0
def _iterate_legs(leg_group):
"""Given a leg group (either a scalar leg number, or a tuple of
legs), iterate over all of them."""
if isinstance(leg_group, int):
yield leg_group
else:
for x in leg_group:
yield x
class RippleGait(object):
ACTION_START_SWING, ACTION_START_STANCE, ACTION_END = range(3)
def __init__(self, config):
assert config is not None
self.config = config
self.num_legs = len(config.leg_order)
self.cycle_time_s = None
self.state = self.get_idle_state()
self.idle_state = self.get_idle_state()
self.next_command = None
self.next_options = None
self._create_actions()
self._really_set_command(Command(), Options())
def _create_actions(self):
self.actions = []
if self.num_legs == 0:
return
# Create the action list.
swing_time = self._swing_phase_time()
for i in range(self.num_legs):
fraction = float(i) / self.num_legs
leg_group = self.config.leg_order[i]
self.actions.append(
(fraction, leg_group, self.ACTION_START_SWING))
self.actions.append(
(fraction + swing_time, leg_group, self.ACTION_START_STANCE))
self.actions.append((1.0, -1, self.ACTION_END))
def set_state(self, state, command):
'''Force the current leg state to the given configuration. If
a phase is present, it must be consistent, i.e. it should have
been read from this class along with the leg state.
This may raise NotSupported, if the command and state are
inconsistent with one another. In this case, neither the
state nor command are changed.
'''
old_state = self.state
self.state = state.copy()
# Make sure all the legs are in the correct frame.
assert state.phase == 0.0
for leg in self.state.legs.values():
if leg.mode == STANCE:
leg.point = self.state.world_frame.map_from_frame(
leg.frame, leg.point)
leg.frame = self.state.world_frame
elif leg.mode == SWING:
leg.point = self.state.robot_frame.map_from_frame(
leg.frame, leg.point)
leg.frame = self.state.robot_frame
try:
self.set_command(command)
except:
self.state = old_state
raise
return self.state
def _select_command_options(self, command):
if self.num_legs == 0:
return Options()
# First, iterate, solving IK for all legs in time until we
# find the point at which the first leg is unsolvable.
dt = 0.05
time_s = 0.0
my_state = self.idle_state.copy()
self._apply_body_command(my_state, command)
end_time_s = None
min_observed_speed = None
# Dictionary of (direction, leg_num) to old ik_result
old_ik_result = {}
fraction_in_stance = 1.0 - self._swing_phase_time()
margin = 0.01 * self.config.position_margin_percent * fraction_in_stance
while time_s < (0.5 * self.config.max_cycle_time_s / margin):
if end_time_s is not None:
break
time_s += dt
for direction in [-1, 1]:
frame = self._get_update_frame(
direction * time_s, command=command)
if end_time_s is not None:
break
for leg_num, leg in my_state.legs.iteritems():
# TODO: Need to do this for the lifted leg as well.
leg_robot_frame_point = frame.map_to_parent(leg.point)
leg_shoulder_point = leg.shoulder_frame.map_from_frame(
my_state.robot_frame, leg_robot_frame_point)
leg_config = self.config.mechanical.leg_config[leg_num]
result = leg_config.leg_ik.do_ik(leg_shoulder_point)
if result is None:
# Break, so that we can take action knowing
# how far we can go.
end_time_s = time_s
break
if (direction, leg_num) in old_ik_result:
this_old_result = old_ik_result[(direction, leg_num)]
largest_change_deg = \
leg_config.leg_ik.largest_change_deg(
result, this_old_result)
this_speed = largest_change_deg / dt
if (min_observed_speed is None or
this_speed < min_observed_speed):
min_observed_speed = this_speed
old_ik_result[(direction, leg_num)] = result
if min_observed_speed is None:
raise NotSupported()
result = Options()
if end_time_s is None:
# We can achieve this at the maximum time.
result.cycle_time_s = self.config.max_cycle_time_s
else:
result.cycle_time_s = (2.0 * end_time_s * margin)
# TODO jpieper: See if this cycle time is feasible. We will
# do this by checking to see if the swing leg has to move too
# fast.
min_swing_speed = (min_observed_speed *
(1.0 - self._swing_phase_time()) /
self._swing_phase_time())
result.servo_speed_dps = min_swing_speed
any_ik = self.config.mechanical.leg_config.values()[0].leg_ik
servo_speed_dps = any_ik.servo_speed_dps()
speed_margin = 0.01 * self.config.servo_speed_margin_percent
if min_swing_speed > speed_margin * servo_speed_dps:
# Slow the command down.
slow_down_factor = (min_swing_speed /
(speed_margin * servo_speed_dps))
command.translate_x_mm_s /= slow_down_factor
command.translate_y_mm_s /= slow_down_factor
command.rotate_deg_s /= slow_down_factor
result.cycle_time_s *= slow_down_factor
result.servo_speed_dps = speed_margin * servo_speed_dps
return result
def _do_commands_differ_body_only(self, command1, command2):
return (command1.translate_x_mm_s == command2.translate_x_mm_s and
command1.translate_y_mm_s == command2.translate_y_mm_s and
command1.rotate_deg_s == command2.rotate_deg_s)
def set_command(self, command):
'''Set the current command. This will raise a NotSupported
exception if the platform cannot achieve the desired command,
in this case, the desired command will not be changed.'''
command = command.copy()
# Determine if the command is valid or not, and select the
# options necessary for it.
#
# NOTE: This may modify command.
options = self._select_command_options(command)
self._really_set_command(command, options)
def is_command_pending(self):
return self.next_command is not None
def _really_set_command(self, command, options):
self.command = command
self.options = options
if self.num_legs == 0:
return
self.cycle_time_s = options.cycle_time_s
self._apply_body_command(self.state, command)
def _apply_body_command(self, state, command):
if not self.config.statically_stable:
state.body_frame.transform.translation.x = command.body_x_mm
state.body_frame.transform.translation.y = command.body_y_mm
<filename>src/third_party/red_tamarin_stable/tamarin-cental/aot/AOTStubs.py
#!/usr/bin/env python
# -*- Mode: Python; indent-tabs-mode: nil -*-
# vi: set ts=4 sw=4 expandtab:
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import sys
import os
import pprint
import bisect
import pickle
import subprocess
import optparse
scriptname = os.path.basename(os.path.normpath(os.path.abspath(sys.argv[0])))
standardHeader = """// This file was auto-generated, do not modify by hand.
// """ + scriptname + " generates this file.\n"
# ------------------------------------------------------------------------------
# Process creation / execution
# ------------------------------------------------------------------------------
def runProcess(p, msg, ignoreErrors = False):
(stdoutdata, stderrdata) = p.communicate()
if not ignoreErrors:
if not p.returncode == 0:
if stderrdata:
print stderrdata
print msg
sys.exit(1)
return (stdoutdata, stderrdata)
def createProcess(exe, args, verbose = False):
cmdargs = [exe] + args
if verbose:
print "running: " + " ".join(cmdargs)
return subprocess.Popen(cmdargs, executable=exe, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# ------------------------------------------------------------------------------
# demangle
# ------------------------------------------------------------------------------
def getType(t):
t = t.strip()
if t.startswith("avmplus::"):
return t.replace("avmplus::", "")
elif t.find("S*") != -1:
return "".join(t.split("S*"))
elif t.find("*") != -1:
p = t.find("*")
return getType(t[0:p]) + t[p:len(t)]
elif t == "unsigned int" or t == "uint32_t":
return "uint32_t"
elif t == "int":
return "int32_t"
else:
return t
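`getType` above normalizes demangled C++ type names: it strips the `avmplus::` namespace, collapses the `S*` mangling residue, recurses through pointer suffixes, and maps `int`/`unsigned int` to fixed-width names. A standalone mirror showing the behavior (illustrative only):

```python
def get_type(t):
    """Standalone mirror of getType above, normalizing C++ type names."""
    t = t.strip()
    if t.startswith("avmplus::"):
        return t.replace("avmplus::", "")
    elif t.find("S*") != -1:
        return "".join(t.split("S*"))
    elif "*" in t:
        p = t.find("*")
        # Normalize the base type, then re-append the pointer suffix.
        return get_type(t[:p]) + t[p:]
    elif t in ("unsigned int", "uint32_t"):
        return "uint32_t"
    elif t == "int":
        return "int32_t"
    return t

print(get_type("avmplus::String*"))  # String*
print(get_type("unsigned int"))      # uint32_t
print(get_type("int *"))             # int32_t*
```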
def demangle(n):
(stdout, stderr) = runProcess(createProcess('c++filt', [u'-n', n], False), "Unable to demangle...")
stdout = stdout.replace("((anonymous namespace)::LLVMSelectGetSetDelHasProperty)", "FAIL")
(functype, args) = stdout.split("(")
args = args.replace(")", "").split(",")
functype = functype.replace("<", " ").replace(">", " ").replace(",", " ").replace("unsigned int", "uint32_t").split(" ")
# we only care if its templated
if len(functype) > 2:
return "template %s %s(%s);" % (getType(functype[0]), getType(functype[1]), ", ".join(map(getType, args)).strip())
else:
# print "# ignoring non-templated function: %s" % n
return None
# ------------------------------------------------------------------------------
# Stub order optimisation
# ------------------------------------------------------------------------------
stuborder = {}
pickleFile = "AOTStubs.pickle"
def updateStubOrder(fn):
global stuborder
count = 0
for info in open(fn).read().splitlines():
count += 1
bits = info.split("|")
n = demangle(bits[0].strip())
try:
stuborder[n] += int(bits[1])
except KeyError:
stuborder[n] = int(bits[1])
print "# Found %d stubs in %s" % (count, fn)
def updateStubOrdering(files):
global stuborder
global pickleFile
if os.path.exists(pickleFile):
f = open(pickleFile, 'rb')
stuborder = pickle.load(f)
f.close()
else:
print "No stub ordering file found: '%s'" % os.path.abspath(pickleFile)
if len(files) > 0:
for fn in files:
updateStubOrder(fn)
f = open(pickleFile, 'wb')
pickle.dump(stuborder, f)
f.close()
def dumpStubOrderInfo(files):
global stuborder
updateStubOrdering(files)
for (s,c) in stuborder.iteritems():
print "%s | %d" % (s, c)
def getStubSortOrder(stub):
global stuborder
substubs = []
substubs.append( stub )
# CAREFUL! Be sure to get the number of spaces correct in the replacements
if stub.find(" DOUBLE_ALLOCA_DECL") != -1:
substubs.append( stub.replace(" DOUBLE_ALLOCA_DECL", "") )
substubs.append( stub.replace(" DOUBLE_ALLOCA_DECL", ", double *") )
for substub in substubs:
try:
return stuborder[substub]
except KeyError:
pass
return 0
# ------------------------------------------------------------------------------
# Header Generation
# ------------------------------------------------------------------------------
stubs = []
currentfile = None
stubcount = 0
stubmax = 4000
numstubheaders = 30
def subgroups(xs, n):
result = []
s = len(xs)/n
for i in range(n-1):
result.append(xs[:s])
xs = xs[s:]
if len(xs) > 0:
result.append(xs)
return result
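`subgroups` above splits the stub list into `n` roughly equal chunks, one per generated `AOTStubs-*.cpp` file; any remainder lands in the final chunk. A Python 3 sketch of the same behavior (the original relies on Python 2 integer division):

```python
def subgroups(xs, n):
    """Split xs into n chunks of floor(len(xs)/n); the remainder goes last."""
    result, s = [], len(xs) // n
    for _ in range(n - 1):
        result.append(xs[:s])
        xs = xs[s:]
    if xs:
        result.append(xs)
    return result

print(subgroups(list(range(7)), 3))  # [[0, 1], [2, 3], [4, 5, 6]]
```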
def genCPPFiles(stubs, filenum):
for xs in subgroups(stubs, numstubheaders):
hfile = "AOTStubs-%05d.cpp" % filenum
hfile = open(hfile, "w")
print >>hfile, standardHeader
print >>hfile, "#include \"AOTStubs.h\""
for x in xs:
print >>hfile, (x[1])
hfile.close()
filenum += 1
# ------------------------------------------------------------------------------
# Stub Generation
# ------------------------------------------------------------------------------
argdesctypes = ["uint32_t", "char*"]
vectortypes = ["DoubleVectorObject*", "IntVectorObject*", "UIntVectorObject*", "ObjectVectorObject*"]
objecttypes = ["ScriptObject*", "ArrayObject*", "LLVMAtom"]
receivertypes = objecttypes + ["String*", "double"]
mosttypes = ["double", "int32_t", "uint32_t", "String*", "LLVMBool", "Namespace*", "QNameObject*"] + objecttypes
alltypes = ["void"] + mosttypes
multinameIndexTypes = ["LLVMMultinameIndex", "Multiname*"]
multinameIndexTypesWithInt = multinameIndexTypes + ["LLVMMultinameIndexMaybeInt", "LLVMMultinamePtrMaybeInt"]
def genPerms(xs):
if len(xs) == 0:
return [[]]
else:
p = genPerms(xs[1:])
return [[x] + y for x in xs[0] for y in p]
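`genPerms` above is a recursive Cartesian product over the per-argument type lists: each stub signature is one combination of return type and argument types. The standard library's `itertools.product` yields the same combinations in the same order, as this sketch shows:

```python
from itertools import product

def gen_perms(xs):
    """Equivalent of genPerms above via itertools.product."""
    return [list(p) for p in product(*xs)]

print(gen_perms([["int32_t", "double"], ["MethodEnv*"]]))
# [['int32_t', 'MethodEnv*'], ['double', 'MethodEnv*']]
```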
def genStubs(name, types, filterFunc=None):
perms = genPerms(types)
if (filterFunc is not None):
perms = filterFunc(perms)
if len(types) == 1:
for p in perms:
genCall((p[0], name, ""))
else:
for p in perms:
genCall((p[0], name, ", ".join(p[1:])))
def genCall(params):
global stubs
decl = "template %s %s(%s);" % params
bisect.insort(stubs, (- getStubSortOrder(decl), decl))
def genPropRelatedWithIntOptDouble(name, retTypes, argTypes = [mosttypes]):
nameTypes = mosttypes + ["LLVMUnusedParam"]
legalUintNameTypes = set(("double", "int32_t", "uint32_t", "LLVMAtom"))
legalUintObjectTypes = set(objecttypes)
# perm: 0:retType, 1: MethodEnv*, 2:multinameIndex, 3:n, 4:ns, 5:obj
filterIntPermutations = lambda perms: filter(lambda perm: (perm[2] in multinameIndexTypes) or ((perm[3] in legalUintNameTypes) and (perm[4] == "LLVMUnusedParam") and (perm[5] in legalUintObjectTypes)), perms)
genStubs(name, [retTypes, ["MethodEnv* DOUBLE_ALLOCA_DECL"], multinameIndexTypesWithInt, nameTypes, nameTypes] + argTypes, filterIntPermutations)
def genPropRelatedWithVectorOpts(name, retTypes, argTypes):
nameTypes = mosttypes + ["LLVMUnusedParam"]
legalUintNameTypes = set(("double", "int32_t", "uint32_t", "LLVMAtom"))
legalUintObjectTypes = set(objecttypes + vectortypes)
# perm: 0:retType, 1: MethodEnv*, 2:multinameIndex, 3:n, 4:ns, 5:obj
filterIntPermutations = lambda perms: filter(lambda perm: (perm[2] in multinameIndexTypes) or ((perm[3] in legalUintNameTypes) and (perm[4] == "LLVMUnusedParam") and (perm[5] in legalUintObjectTypes)), perms)
genStubs(name, [retTypes, ["MethodEnv* DOUBLE_ALLOCA_DECL"], multinameIndexTypesWithInt, nameTypes, nameTypes] + argTypes, filterIntPermutations)
def genPropRelatedWithInt(name, retTypes, argTypes = [mosttypes]):
nameTypes = mosttypes + ["LLVMUnusedParam"]
legalUintNameTypes = set(("double", "int32_t", "uint32_t", "LLVMAtom"))
legalUintObjectTypes = set(objecttypes)
# perm: 0:retType, 1: MethodEnv*, 2:multinameIndex, 3:n, 4:ns, 5:obj
filterIntPermutations = lambda perms: filter(lambda perm: (perm[2] in multinameIndexTypes) or ((perm[3] in legalUintNameTypes) and (perm[4] == "LLVMUnusedParam") and (perm[5] in legalUintObjectTypes)), perms)
genStubs(name, [retTypes, ["MethodEnv*"], multinameIndexTypesWithInt, nameTypes, nameTypes] + argTypes, filterIntPermutations)
def genPropRelated(name, retTypes, argTypes = [mosttypes]):
nameTypes = mosttypes + ["LLVMUnusedParam"]
genStubs(name, [retTypes, ["MethodEnv*"], multinameIndexTypes, nameTypes, nameTypes] + argTypes)
# ------------------------------------------------------------------------------
# Main Entrypoint
# ------------------------------------------------------------------------------
if __name__ == "__main__":
import os.path
optParser = optparse.OptionParser(usage='usage: %prog [ options ] file1.abc ... fileN.abc')
optParser.set_defaults()
optParser.allow_interspersed_args = True
optParser.add_option( '-d', '--dump', dest="dump", default = False)
optParser.add_option( '-n', '--numstubheaders', dest="numstubheaders", default = 30)
optParser.add_option( '-p', '--picklefile', dest="pickleFile", default = None)
(opts, args) = optParser.parse_args()
if opts.dump:
dumpStubOrderInfo(args)
sys.exit(0)
if opts.pickleFile:
pickleFile = opts.pickleFile
updateStubOrdering(args)
genStubs("abcOP_si8", [["void"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"], ["uint32_t", "int32_t", "double", "LLVMAtom"]])
genStubs("abcOP_si16", [["void"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"], ["uint32_t", "int32_t", "double", "LLVMAtom"]])
genStubs("abcOP_si32", [["void"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"], ["uint32_t", "int32_t", "double", "LLVMAtom"]])
genStubs("abcOP_sf32", [["void"], ["MethodEnv*"], ["double", "int32_t", "LLVMAtom"], ["uint32_t", "int32_t", "double", "LLVMAtom"]])
genStubs("abcOP_sf64", [["void"], ["MethodEnv*"], ["double", "int32_t", "LLVMAtom"], ["uint32_t", "int32_t", "double", "LLVMAtom"]])
genStubs("abcOP_li8", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_li16", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_li32", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_lf32", [["double"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_lf64", [["double"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_sxi1", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"]])
genStubs("abcOP_sxi8", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"]])
genStubs("abcOP_sxi16", [["uint32_t", "int32_t", "LLVMAtom"], ["MethodEnv*"], ["uint32_t", "int32_t", "LLVMAtom"]])
genPropRelatedWithInt("abcOP_deleteproperty", ["LLVMBool"])
genPropRelatedWithVectorOpts("abcOP_getproperty", mosttypes, argTypes = [mosttypes + vectortypes])
genPropRelatedWithVectorOpts("abcOP_getproperty_nonc", mosttypes, argTypes = [mosttypes + vectortypes])
genPropRelatedWithVectorOpts("abcOP_setproperty", ["void"], argTypes = [mosttypes + vectortypes, mosttypes])
genPropRelatedWithVectorOpts("abcOP_setproperty_nonc", ["void"], argTypes = [mosttypes + vectortypes, mosttypes])
genPropRelated("abcOP_initproperty", ["void"], argTypes = [mosttypes, mosttypes])
genPropRelated("abcOP_callproperty", alltypes, argTypes = [mosttypes, argdesctypes, ["..."]])
genPropRelated("abcOP_constructprop", mosttypes, argTypes = [argdesctypes, ["..."]])
genPropRelated("abcOP_getdescendants", mosttypes, argTypes = [mosttypes])
genPropRelated("abcOP_getsuper", mosttypes, argTypes = [mosttypes])
genPropRelated("abcOP_setsuper", ["void"], argTypes = [mosttypes, mosttypes])
genPropRelated("abcOP_findproperty", mosttypes, argTypes = [["LLVMAtom*"], ["int32_t"], ["int32_t"]])
genPropRelated("abcOP_findpropstrict", mosttypes, argTypes = [["LLVMAtom*"], ["int32_t"], ["int32_t"]])
genStubs("abcOP_finddef", [objecttypes, ["MethodEnv*"], multinameIndexTypes])
genStubs("abcOP_methodEnvFromDispId", [["MethodEnv*"], ["MethodEnv*"], mosttypes, ["int32_t"]])
genStubs("abcOP_methodEnvFromBaseDispId", [["MethodEnv*"], ["MethodEnv*"], mosttypes, ["int32_t"]])
genStubs("abcOP_handlerFromMethodEnv", [["int32_t*"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_call", [mosttypes, ["MethodEnv*"], objecttypes, argdesctypes, ["..."]])
genStubs("abcOP_callmethod", [alltypes, ["MethodEnv*"], mosttypes, ["int32_t"], argdesctypes, ["..."]])
genStubs("abcOP_callstatic", [alltypes, ["MethodEnv*"], ["AbcEnv*"], ["int32_t"], argdesctypes, ["..."]])
genPropRelated("abcOP_callsuper", alltypes, argTypes = [argdesctypes, ["..."]])
genStubs("abcOP_throwCallOfNonFunctionError", [alltypes, ["MethodEnv*"]])
genStubs("abcOP_construct", [mosttypes, ["MethodEnv*"], mosttypes, argdesctypes, ["..."]])
genStubs("abcOP_getglobalscope", [mosttypes, ["MethodEnv*"]])
genStubs("abcOP_findInterfaceBinding", [["int32_t"], ["int32_t"], ["const uint32_t*", "const uint16_t*"]])
genStubs("abcOP_not", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_increment", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_decrement", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_increment_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_decrement_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_add", [mosttypes, ["MethodEnv* DOUBLE_ALLOCA_DECL"], mosttypes, mosttypes])
genStubs("abcOP_add_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_subtract", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_subtract_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_multiply", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_multiply_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_divide", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_modulo", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_bitand", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_bitor", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_bitxor", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_bitnot", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_lshift", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_rshift", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_urshift", [["uint32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_negate", [["double", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_negate_i", [["int32_t", "LLVMAtom"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_true", [["LLVMBool", "LLVMAtom", "int32_t"], ["MethodEnv*"], mosttypes])
genStubs("abcOP_equals", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_strictequals", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_lessthan", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_greaterthan", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_greaterequals", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
genStubs("abcOP_lessequals", [["LLVMBool", "LLVMAtom"], ["MethodEnv*"], mosttypes, mosttypes])
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from collections import Counter
import math
import random
import sys
from typing import Union
from scipy.stats import binom
# Stupid black magic to suppress a rare `divide by zero encountered in _binom_cdf` warning when calling binom for the
# first time with very particular values; scipy version 1.7.1. Python suppresses warnings past the first, so we
# purposely trigger it here. Messing with warnings management didn't help since reverting them to normal resets the count.
import os
STDERR = sys.stderr
sys.stderr = open(os.devnull, 'w')
binom.cdf(39, 43097, 0.5) # No, it doesn't occur for 38 or 40, or for any n lower than 43097
sys.stderr = STDERR
from .components import RandomInput
from .schem_random import SChemRandom
NON_PRECOG_MIN_PASS_RATE = 0.2
# We will keep two confidence levels (CLs) for statistical operations; a more strict ideal CL, which we will attempt
# to achieve if given enough time, and a fallback minimum acceptable CL, which if time-constrained, we will consider
# acceptable for returning an answer anyway even if the ideal is not achieved. If neither of these confidence levels
# is obtained, an error will be raised
# The preferred rate of precognitive solutions being marked as non-precognitive
# Lowering this increases the number of runs non-precog solutions require
# We could probably be more lax on this one since precogs are submitted much less often, but it only saves about
# 10 runs when checking a typical non-precog production solution
PREFERRED_FALSE_NEG_RATE = 0.001
# The maximum acceptable rate of non-precognitive solutions being marked as precognitive
# Lowering this increases the number of runs precog solutions require
# This is the expensive one, but non-precogs are more common so we need a low false positive rate for them
PREFERRED_FALSE_POS_RATE = 0.001
# Fallback confidence levels used for very slow solutions if we can't run enough to reach the higher confidence levels
MAX_FALSE_POS_RATE = 0.1
MAX_FALSE_NEG_RATE = 0.1
# A constant factor that determines how quickly we decide a molecule variant has been assumed, if we see it fail X times
# without ever succeeding. We declare precog if the variant's success rate is provably (to within our above
# false positive confidence level) less than this ratio of the solution's observed success rate. Check the comments near
# its usage for a fuller explanation, but I don't believe it actually has to be near 0, and putting it near 0 scales the
# time precog solutions take to evaluate. For example, at a factor of 0.75 and false positive rate 0.001, for a solution
# that was originally observed to have 100% success rate before searching for missing variants, it will only be
# declared precog if a variant of the Nth molecule appears in 8 failing runs without ever appearing in a succeeding run.
# We do want it to be less than 1 however since that saves us from edge case handling if the success rate was originally
# measured to be 100%
MOLECULE_SUCCESS_RATE_DEVIATION_LIMIT = 0.75
# Since long cycle counts go hand-in-hand with demanding many runs for sufficient certainty, practical applications
# don't have time to properly check precog for long solutions. By default, cut off the max total cycles runtime and
# raise an error if this will be exceeded (rather than returning an insufficiently-confident answer)
DEFAULT_MAX_PRECOG_CHECK_CYCLES = 2_000_000 # Large enough to ensure it doesn't constrain typical required run counts
# TODO: Might want type hinting here, this post suggests a way to type hint Solution without introducing a circular
# import or needing to merge the modules:
# https://stackoverflow.com/questions/39740632/python-type-hinting-without-cyclic-imports
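# A minimal sketch of that approach (module path and names assumed, not from this codebase):
#     from typing import TYPE_CHECKING
#     if TYPE_CHECKING:
#         from .solution import Solution  # seen only by type checkers, so no runtime import cycle
#     def is_precognitive(solution: 'Solution', ...) -> Union[bool, tuple]: ...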
def is_precognitive(solution, max_cycles=None, just_run_cycle_count=0, max_total_cycles=None,
include_explanation=False) -> Union[bool, tuple]:
"""Run this solution enough times to check if fits the community definition of a precognitive solution.
If time constraints do not allow enough runs for even 90% certainty either way, raise a TimeoutError.
Currently, a solution is considered precognitive if:
* It assumes the value of the Nth molecule of a random input, for some N >= 2.
Stated conversely, a solution (with acceptable success rate) is non-precognitive if, for each random input I,
each N >= 2, and each type of molecule M that I produces, there exists a random seed where the Nth input of I is
M, and the solution succeeds.
* OR it succeeds for < 20% of random seeds.
    In keeping with the first rule's exemption of the first input molecule, this check only uses seeds that match
    the first molecule (or all first molecules if there are multiple random inputs), if that is more favourable.
In practice we check this with the following process:
1. Run the solution on the original level, verifying it succeeds (validate the solution's expected score here too if
possible). Track how many molecules were generated from each random input (call this M), and what the mth
molecule's variant was for every m up to M.
2. Randomize the input seed (in the case of two random inputs, shift the second seed by the same amount).
3. Repeat step 1 with the new random seed(s) (but without requiring that the run succeed). Update M for each random
input to be the minimum number of molecules produced from that input for any passing run (since any unconsumed
input cannot have been assumed). Once again track the molecule variants that appeared, keeping a tally of how
many times each variant has been in a passing vs failing run.
4. Repeat steps 2-3 until any of the following conditions is met (again ignoring seeds that had a differing first
molecule if that is more forgiving):
* The success rate is measured to be < 20%, with 99.9% confidence (precog).
* The success rate is measured to be > 20% with 99.9% confidence, and the dataset of succeeding runs covers
every possible variant of every possible mth molecule (2 <= m <= M), for all random inputs (non-precog).
* A variant of the mth molecule fails sufficiently many runs without ever succeeding (precog).
This threshold is calculated dynamically based on the observed success rate.
* The maximum allowed runs based on max_total_cycles is reached (TimeoutError).
With default settings this should only occur for very long (100k+ cycles) solutions or solutions with a
failure rate extremely close to 20%.
Args:
solution: The loaded solution to check.
        max_cycles: The maximum cycle count allowed for a SINGLE run of the solution (passed to Solution.run).
            Note that this is not the total number of cycles allowed across all runs; any solution within this limit
            is allowed to run at least twice, with the maximum runs taken being limited for extremely slow solutions.
        just_run_cycle_count: In order to save on excess runs, if the solution has just been successfully run on the
            loaded level (and not been modified or reset() since), pass its cycle count here to skip the first run (but
            still pull the first run's data from the Solution object).
        max_total_cycles: The maximum total cycle count that may be used by all runs; if this value is exceeded before
            sufficient confidence in an answer is obtained, a TimeoutError is raised.
include_explanation: If True, instead of the boolean result, return a tuple of (result, explanation), where
the latter is a string describing why the solution was or was not determined to be precognitive.
"""
# Hang onto references to each random input in the solution
random_inputs = [input_component for input_component in solution.inputs
if isinstance(input_component, RandomInput)]
if not random_inputs: # duh
return (False, "Solution is not precognitive; level is non-random") if include_explanation else False
# Set a larger default for max_cycles than in Solution.run, since the seed might change the cycle count by a lot
if max_cycles is None and solution.expected_score is not None:
max_cycles = 2 * solution.expected_score.cycles
if max_total_cycles is None:
# TODO: Might also want to limit this by reactor count
max_total_cycles = DEFAULT_MAX_PRECOG_CHECK_CYCLES
total_cycles = 0
# Track the min cycles a passing run takes so we can exit early if we know we can't prove anything before timeout
min_passing_run_cycles = math.inf
# For | |
in [group, mesh]:
print "[X] The input objects are incorrect or the Mesh module was not yet loaded."; return
#-
# Set father object
father = None
if infa == True: father = group
#-
if False: pass
else:# All checks done
# Get input objects
[dist, step] = thick_and_size
try:
mesh = smesh.Mesh(mesh)
except:
pass
main_shape = group.GetMainShape()
if main_shape == None:
print "[X] The input group has no parent shape."; return
group_name = group.GetName()
group_vertexes = GetSubShapes(group)[0]
#-
# Check if the group is "wire-shaped"
group_edge_list = GetSubShapes(group)[1]
try:
group_wire = geompy.MakeWire(group_edge_list)
except:
print "[X] The input group should be \"wire-shaped\"."; return
#-
# Make wire edge offsets
if rev == True:
dist *= -1
offsets = MakePlanarWireOffset(dist, group_wire, np = np, curv = curv, simple = True, single = False, add = False)
edges = GetReorderedEdges(group_wire, add = False)
#-
if dim == 1:
compound = geompy.MakeCompound(offsets)
to_return = compound
to_return_name = "VirtualOffset"
else:
whole_vertex_list = list(group_vertexes)
nb_edges = len(edges)
for i in range(nb_edges):# For each edge of the input group...
edge = edges[i]
offset = offsets[i]
offset_vertexes = GetSubShapes(offset)[0]
whole_vertex_list += offset_vertexes
# Get the number of steps
edge_length = geompy.BasicProperties(edge)[0]
offset_length = geompy.BasicProperties(offset)[0]
nb_steps = math.ceil(offset_length / step)
real_step = offset_length / nb_steps
#-
# Project offset vertexes on the edge
distance = real_step
projected_vertex_list = []
vertex_on_offset_list = []
while distance < offset_length - real_step / 2:
vertex_on_offset = geompy.MakeVertexOnCurveByLength(offset, distance)
vertex_on_offset_list.append(vertex_on_offset)
##############################
#projected_vertex = geompy.MakeProjection(vertex_on_offset, edge)# Not available on Salome 7.5.1
[x,y,z] = geompy.ClosestPoints(vertex_on_offset, edge)[1][3:6]
projected_vertex = geompy.MakeVertex(x, y, z)
##############################
projected_vertex_list.append(projected_vertex)
distance += real_step
#-
whole_vertex_list += projected_vertex_list
whole_vertex_list += vertex_on_offset_list
# Split the edge with projected vertexes
discretized_edge = geompy.MakePartition([edge], projected_vertex_list)
#-
# Reorder discretized edges
reordered_edges = GetReorderedEdges(discretized_edge, add = False)
nb_sub_edges = len(reordered_edges)
#-
if dim == -1:
# Publish the edge in the study tree
published_edge = geompy.GetInPlace(group, edge, theName = "SubEdge_" + str(i))
#-
if nb_sub_edges == 1:# If the edge was not discretized...
if dim == -1:
# Create a Nb. Segments sub-mesh
algo = mesh.Segment(geom = published_edge)
hypo = algo.NumberOfSegments(1)
mesh.GetSubMesh(published_edge, "VirtualOffsetSubmesh_" + str(i) + " on " + group_name)
#-
else:# If the edge was discretized...
# Get the suitable Fixed Points 1D hypothesis parameters
parameter_list = []
total_distance = 0
for sub_edge in reordered_edges:
sub_edge_length = geompy.BasicProperties(sub_edge)[0]
parameter = (total_distance + sub_edge_length) / edge_length
parameter_list.append(parameter)
if len(parameter_list) == nb_sub_edges - 1:
break
total_distance += sub_edge_length
#-
if dim == -1:
# Create temporary mesh and Fixed Points 1D sub-mesh
tmp_mesh = smesh.Mesh(main_shape)
algo = tmp_mesh.Segment(geom = published_edge)
tmp_hypo = algo.FixedPoints1D(parameter_list, [1] * nb_sub_edges, [])
sub_mesh = tmp_mesh.GetSubMesh(published_edge, "VirtualOffsetSubmesh_" + str(i) + " on " + group_name)
tmp_mesh.Compute()
#-
# Check if the edge is reversed
vertex_compound = MakeVertexesFromMeshGroup(sub_mesh, add = False)
projected_vertex_compound = geompy.MakeCompound(projected_vertex_list)
cut = geompy.MakeCut(vertex_compound, projected_vertex_compound)
nb_resting_vertexes = geompy.NumberOfSubShapes(cut, geompy.ShapeType["VERTEX"])
reversed_edges = []
if nb_resting_vertexes > 2:
reversed_edges = [published_edge]
#-
# Delete temporary geometrical shapes and mesh
#http://www.salome-platform.org/forum/forum_10/366900504#419952388
try:
so = salome.ObjectToSObject(vertex_compound)
sb = salome.myStudy.NewBuilder()
sb.RemoveObjectWithChildren(so)
except:
pass
try:
so = salome.ObjectToSObject(tmp_mesh.GetMesh())
sb = salome.myStudy.NewBuilder()
sb.RemoveObjectWithChildren(so)
except:
pass
try:
so = salome.ObjectToSObject(tmp_hypo)
sb = salome.myStudy.NewBuilder()
sb.RemoveObjectWithChildren(so)
except:
pass
#-
# Create the final Fixed Points 1D sub-mesh
algo = mesh.Segment(geom = published_edge)
hypo = algo.FixedPoints1D(parameter_list, [1] * nb_sub_edges, reversed_edges)
mesh.GetSubMesh(published_edge, "VirtualOffsetSubmesh_" + str(i) + " on " + group_name)
#-
if dim == 0:
compound = geompy.MakeCompound(whole_vertex_list)
to_return = compound
to_return_name = "VirtualOffset (Vertexes)"
        if dim >= 0:
            if add == True:
                # Add and return the resulting shape(s)
                AddToStudy(to_return, to_return_name, father)
                return to_return
            #-
else:
# Update the study tree
if salome.sg.hasDesktop():
salome.sg.updateObjBrowser(1)
#-
mvoes = MakeVirtualOffsetEdgeSubmeshes
def MakeTriEdgeFaceSubmeshes( groups_and_mesh = None ):
"""
Description:
        Creates quadrangle submeshes on tri-edge face groups (that can be created using the GetTriEdgeFaces function) and adds base vertexes when possible.
Arguments:
# groups_and_mesh
Description: The input tri-edge face groups and the mesh in which to create sub-meshes.
        Type: List of Groups of 1 Face + 1 Mesh
GUI selection: yes
Selection by name: yes
Recursive: -
Default value: [None]
Returned Values:
"dim" value: -
"single" value: -
Type: -
Number: -
Name: -
Conditions of use:
-
"""
# Get the input shape(s)
groups_and_mesh = GetGUISelection(groups_and_mesh)
groups_and_mesh = GetObject(groups_and_mesh, "GEOM", silent = True) + GetObject(groups_and_mesh, "SMESH", silent = True)
#-
# Distinguish input shapes
mesh = None
groups = []
for object in groups_and_mesh:
if "SMESH_Mesh instance" in str(object) or "meshProxy instance" in str(object) or "Mesh object" in str(object): mesh = object
if "GEOM_Object instance" in str(object): groups.append(object)
if None in [mesh] or len(groups) == 0:
print "[X] The input objects are incorrect or the Mesh module was not yet loaded."; return
#-
else:# All checks done
try:
mesh = smesh.Mesh(mesh)
except:
pass
# Get the mesh main shape
main_shape = mesh.GetShape()
#-
# For each input group...
for group in groups:
# Get group edge
group_edges = geompy.SubShapeAll(group, geompy.ShapeType["EDGE"])
#-
# Keep only straight edges
straight_edges = []
for edge in group_edges:
edge_length = geompy.BasicProperties(edge)[0]
edge_vertexes = geompy.SubShapeAll(edge, geompy.ShapeType["VERTEX"])
min_edge_length = geompy.MinDistance(edge_vertexes[0], edge_vertexes[1])
if abs(edge_length - min_edge_length) < 1e-9:
straight_edges.append(edge)
#-
# Get the group vertexes
group_vertexes = geompy.SubShapeAll(group, geompy.ShapeType["VERTEX"])
#-
# Find the base vertex
base_vertex = None
for vertex in group_vertexes:
nb_touching_edges = 0
for edge in straight_edges:
if geompy.MinDistance(edge, vertex) < 1e-9:
nb_touching_edges += 1
if nb_touching_edges == 2:
base_vertex = vertex
break
#-
# Get the base vertex ID
base_vertex_id = geompy.GetSubShapeID(main_shape, base_vertex)
#-
# Create a sub-mesh on the group
algo = mesh.Quadrangle(geom = group)
hypo = algo.QuadrangleParameters()
hypo.SetTriaVertex(base_vertex_id)
submesh = mesh.GetSubMesh(group, group.GetName())
#-
# Update the study tree
salome.sg.updateObjBrowser(1)
#-
mtefs = MakeTriEdgeFaceSubmeshes
def ProjectEdgeSubmesh( submesh_and_edge = [None] ):
"""
Description:
Projects orthogonally an edge sub-mesh on another.
Arguments:
# submesh_and_edge
Description: The source submesh and the target sub-edge.
        Type: List of 1 Edge + 1 Sub-mesh
GUI selection: yes
Selection by name: yes
Recursive: -
Default value: [None]
Returned Values:
"dim" value: -
"single" value: -
Type: -
Number: -
Name: -
Conditions of use:
The source sub-mesh has to be already computed.
"""
if isinstance(submesh_and_edge, list) == False: print "[X] The first argument (submesh_and_edge) should be an array."; return
# Get the input shape(s)
submesh_and_edge = GetGUISelection(submesh_and_edge)
submesh_and_edge = GetObject(submesh_and_edge, "GEOM", silent = True) + GetObject(submesh_and_edge, "SMESH", silent = True)
#-
# Distinguish input shapes
submesh = None
edge = None
for object in submesh_and_edge:
if "SMESH_subMesh instance" in str(object): submesh = object
if "GEOM_Object" in str(object): edge = object
if None in [submesh, edge]:
print "[X] The input objects are incorrect or the Mesh module was not yet loaded."
return
#-
else:# All checks done
# Create vertexes from the sub-mesh
vertexes_from_submesh_compound = MakeVertexesFromMeshGroup(submesh, add = False)
vertexes_from_submesh = geompy.SubShapeAll(vertexes_from_submesh_compound, geompy.ShapeType["VERTEX"])
#-
# Project vertexes on the edge
projected_vertex_list = []
for vertex_from_submesh in vertexes_from_submesh:
[x, y, z] = geompy.ClosestPoints(vertex_from_submesh, edge)[1][3:6]
projected_vertex_list.append(geompy.MakeVertex(x, y, z))
#-
# Split the edge with projected vertexes
discretized_edge = geompy.MakePartition([edge], projected_vertex_list)
#-
# Get vertex parameters on the input edge
edge_length = geompy.BasicProperties(edge)[0]
reordered_edges = GetReorderedEdges(discretized_edge, add = False)
nb_sub_edges = len(reordered_edges)
parameter_list = []
total_distance = 0
for sub_edge in reordered_edges:
sub_edge_length = geompy.BasicProperties(sub_edge)[0]
parameter = (total_distance + sub_edge_length) / edge_length
parameter_list.append(parameter)
if len(parameter_list) == nb_sub_edges - 1:
break
total_distance += sub_edge_length
#-
# Get the mesh
mesh = smesh.Mesh(submesh.GetMesh())
#-
# Create a sub-mesh on the edge
algo = mesh.Segment(geom = edge)
fixed_points_hypo = algo.FixedPoints1D(parameter_list, [1] * nb_sub_edges, [])
mesh.GetSubMesh(edge, edge.GetName())
smesh.SetName(fixed_points_hypo, edge.GetName())
#-
# Update the study tree
if salome.sg.hasDesktop():
salome.sg.updateObjBrowser(1)
#-
pes = ProjectEdgeSubmesh
def MakeNetgenRefinement( size, hypo_and_area = [None], ratio = 0.7, test = False ):
"""
Description:
Create an arbitrary 3D refinement area in a Netgen hypothesis.
Arguments:
# size
Description: The desired cell size in the refinement area.
Type: Float
GUI selection: -
Selection by name: -
Recursive: -
Default value: -
# hypo_and_area
Description: The volume defining the refinement area and the Netgen hypothesis.
Type: List of 1 Mesh hypothesis + 1 Solid
GUI selection: yes
Selection by name: yes
Recursive: -
Default value: [None]
# ratio
        Description: Defines the distance between edges describing the refinement area. If equal to 1, this distance equals the desired cell size; if lower than 1, the distance is increased.
Type: Float
GUI selection: -
Selection by name: -
Recursive: -
Default value: 0.7
# test
        Description: If True, the edges are not created; instead, the number of necessary edges is displayed in the Python console.
Type: Boolean
GUI selection: -
Selection by name: -
Recursive: -
Default value: False
Returned Values:
"dim" value: -
"single" value: -
Type: Compound
Number: 1
Name: "RefinementEdges"
Conditions of use:
-
"""
if isinstance(size, str): print "[X] The first argument (size) should be a float number."; return
if isinstance(hypo_and_area, list) == False: print "[X] The second argument (hypo_and_area) should be an array."; return
# Get the input shape(s)
hypo_and_area = GetGUISelection(hypo_and_area)
hypo_and_area = GetObject(hypo_and_area, "GEOM", silent = True) + GetObject(hypo_and_area, "NETGENPlugin", silent = True)
#-
# Distinguish input shapes
hypo = None
area = None
for object in hypo_and_area:
if str(object)[1:45] == "NETGENPlugin._objref_NETGENPlugin_Hypothesis": hypo = object
if str(object)[1:25] == "GEOM._objref_GEOM_Object": area = object
if None in [hypo, area]:
print "[X] The input objects are incorrect or the Mesh module was not yet loaded."
return
hypothesis_type = hypo.GetName()
if str(hypothesis_type) != "NETGEN_Parameters_2D" and str(hypothesis_type) != "NETGEN_Parameters":
print "[X] The selected hypothesis is not a Netgen 1D - 2D or Netgen 1D - 2D - 3D hypothesis."
return
#-
else:# All checks done
# Get the area bounding box
[x_min, x_max, y_min, y_max, z_min, z_max] = geompy.BoundingBox(area)
x_margin = (x_max - x_min) / 10000
x_min += x_margin
x_max -= x_margin
#-
# Create edges
void_compound = geompy.MakeCompound([])
nb_edges_x = int((x_max - x_min) / size * ratio)
nb_edges_y = int((y_max - y_min) / | |
# Repository: husensofteng/msstitch
import argparse
shared_options = {
'fn': {'driverattr': 'fn', 'dest': 'infile', 'type': 'file', 'clarg': '-i',
'help': 'Input file of {} format'},
'outfile': {'driverattr': 'outfile', 'dest': 'outfile', 'type': str,
'clarg': '-o', 'help': 'Output file', 'required': False},
'outdir': {'driverattr': 'outdir', 'dest': 'outdir', 'clarg': '-d',
'help': 'Directory to output in', 'type': 'file',
'required': False},
'multifiles': {'driverattr': 'fn', 'dest': 'infile', 'clarg': '-i',
'help': 'Multiple input files of {} format',
'type': 'file', 'nargs': '+'},
'lookupfn': {'driverattr': 'lookupfn', 'clarg': '--dbfile',
'type': 'file', 'help': 'Database lookup file'},
'setnames': {'driverattr': 'setnames', 'dest': 'setnames',
'type': str, 'nargs': '+', 'clarg': '--setnames',
'help': 'Names of biological sets. Can be '
'specified with quotation marks if spaces are '
'used'},
'spectracol': {'driverattr': 'spectracol', 'dest': 'spectracol',
'type': int, 'clarg': '--spectracol', 'help':
'Column number in which spectra file names are, '
'in case some framework has changed the file '
'names. First column number is 1.', 'required': False,
'default': 1},
'decoyfn': {'driverattr': 'decoyfn', 'dest': 'decoyfn',
'help': 'Decoy input file (percolator out XML) for qvality',
'type': 'file', 'clarg': '--decoyfn'},
'proline': {'driverattr': 'proline', 'dest': 'proline', 'required': False,
'clarg': '--cutproline', 'action': 'store_const',
                'const': True, 'default': False, 'help': 'Flag to make '
                'trypsin cut before a proline residue. Then filtering will be '
'done against both cut and non-cut peptides.',
},
'fasta': {'driverattr': 'fasta', 'dest': 'fasta',
'type': 'file', 'help': 'FASTA sequence database',
'required': False, 'default': False, 'clarg': '--fasta'},
'featuretype': {'driverattr': 'featuretype', 'dest': 'featuretype',
'help': 'Feature type to use for qvality. Can either be '
'psm or peptide.', 'clarg': '--feattype',
'type': 'pick', 'picks': ['psm', 'peptide']},
'unroll': {'driverattr': 'unroll', 'clarg': '--unroll', 'const': True,
'action': 'store_const', 'default': False, 'help': 'PSM table '
'from Mzid2TSV contains either one PSM per line with all '
'the proteins of that shared peptide on the same line (not'
' unrolled, default), or one PSM/protein match per line '
'where each protein from that shared peptide gets its own '
'line (unrolled).', 'required': False},
'genecentric': {'driverattr': 'genecentric', 'dest': 'genecentric',
'clarg': '--genecentric', 'type': 'pick',
'picks': ['assoc', 'genes'], 'required': False,
'default': False, 'help': 'Do not include protein group '
'data in output. Should be one of [genes, assoc]. '
'With assoc, associated gene IDs are used from e.g. '
'Biomart rather than the ones found in the FASTA db used '
'for PSM search. These need to have been stored when '
'creating a PSM lookup.'},
'isobaric': {'driverattr': 'isobaric', 'clarg': '--isobaric',
'action': 'store_const', 'const': True, 'default': False,
'help': 'Specifies to add isobaric quant data from lookup DB '
'to output table', 'required': False,
},
'precursor': {'driverattr': 'precursor', 'clarg': '--precursor',
'action': 'store_const', 'const': True, 'default': False,
'help': 'Specifies to add precursor quant data from lookup '
'DB to output table', 'required': False,
},
'quantcolpattern': {'driverattr': 'quantcolpattern',
'clarg': '--isobquantcolpattern', 'type': str,
'default': None, 'required': False,
'help': 'Unique text pattern to identify '
'isobaric quant columns in input table.'},
'precursorquantcolpattern': {'driverattr': 'precursorquantcolpattern',
'type': str, 'required': False,
'dest': 'precursorquantcolpattern',
'clarg': '--ms1quantcolpattern',
'default': None,
'help': 'Unique text pattern to identify '
'precursor quant column in input table.'},
'quantacccolpattern': {'driverattr': 'quantacccolpattern',
'clarg': '--qaccpattern', 'type': str,
'help': 'Unique text pattern to identify '
'accession column in table containing quant info.'},
'qvalityout': {'driverattr': 'qvalityout', 'dest': 'qvalityout',
'help': 'Qvality output file to fetch q-values and PEP '
'from', 'type': 'file', 'clarg': ['-q', '--qvality']},
'proteincol': {'driverattr': 'proteincol', 'clarg': '--protcol',
'type': int, 'required': False, 'help': 'Column number in '
                   'table in which protein or gene accessions are '
'stored. First column number is 1. Use in case of not '
'using standard {} column'},
'pcolpattern': {'driverattr': 'pcolpattern', 'clarg': '--protcolpattern',
'type': str, 'required': False, 'help': 'Text pattern to '
'identify column in table in which protein or gene '
'accessions are. Use in case of not using standard '
'{} column', 'default': False},
'fdrcolpattern': {'driverattr': 'fdrcolpattern', 'dest': 'fdrcolpattern',
'clarg': '--fdrcolpattern', 'type': str,
'required': False, 'default': None,
'help': 'Unique text pattern to identify '
'FDR column in input table.'},
'fastadelim': {'driverattr': 'fastadelim', 'clarg': '--fastadelim',
'dest': 'fastadelim', 'required': False, 'type': 'pick',
'picks': ['tab', 'pipe', 'semicolon'],
'help': 'Delimiter in FASTA header, used to parse gene '
'names in case of non-ENSEMBL/Uniprot'},
'genefield': {'driverattr': 'genefield', 'clarg': '--genefield',
'dest': 'genefield', 'required': False, 'type': int,
'help': 'Field nr (first=1) in FASTA that contains gene '
'name when using --fastadelim to parse the gene names'},
'minlength': {'driverattr': 'minlength', 'dest': 'minlength', 'default': 0,
'help': 'Minimum length of peptide to be included',
'type': int, 'clarg': '--minlen', 'required': False},
'addbioset': {'driverattr': 'addbioset', 'dest': 'addbioset',
'clarg': '--addbioset', 'required': False, 'action': 'store_const',
'default': False, 'const': True,
'help': 'Add biological setname from DB lookup to PSM table',
},
'addmiscleav': {'driverattr': 'addmiscleav', 'dest': 'addmiscleav',
'clarg': '--addmiscleav', 'required': False, 'action': 'store_const',
'default': False, 'const': True, 'help': 'Add missed cleavages to PSM table',
},
}
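# For illustration, an entry in the table above maps onto argparse roughly like this
# (assumed wiring; the real driver code lives elsewhere in msstitch, and the custom
# 'file'/'pick' types and list-valued 'clarg' entries get special handling there):
#     parser = argparse.ArgumentParser()
#     opt = shared_options['outfile']
#     parser.add_argument(opt['clarg'], dest=opt['dest'], type=opt['type'],
#                         help=opt['help'], required=opt['required'])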
sequence_options = {
'scramble': {
'driverattr': 'scramble', 'dest': 'scramble', 'clarg': '--scramble',
'help': 'Decoy scrambling method, use: "reverse": reverse peptides fully, '
'"tryp_rev": tryptic reverse, or "prot_rev": protein reverse.',
'required': False, 'default': 'tryp_rev'},
'ignoretarget': {
'driverattr': 'ignoretarget', 'dest': 'ignoretarget', 'clarg': '--ignore-target-hits',
'help': 'Do not remove tryptic peptides from sequence where they match target DB',
'required': False, 'action': 'store_const', 'const': True, 'default': False},
'trypsinize': {'driverattr': 'trypsinize', 'dest': 'trypsinize',
'clarg': '--notrypsin', 'required': False,
'action': 'store_const', 'const': False, 'default': True,
                   'help': 'Do not trypsinize. User is expected to deliver a '
'pretrypsinized FASTA file'
},
'max_shuffle': {'driverattr': 'max_shuffle', 'dest': 'max_shuffle',
'clarg': '--maxshuffle', 'required': False, 'type': int, 'default': 10,
'help': 'Amount of times to attempt to shuffle a decoy reversed peptide '
'to make it not match target peptides, before discarding it.'
' Used when using tryptic peptide reversal (not protein reversal)'},
'miss_cleavage': {'driverattr': 'miss_cleavage', 'dest': 'miss_cleavage',
'clarg': '--miscleav', 'required': False, 'type': int, 'default': 0,
'help': 'Amount of missed cleavages to allow when trypsinizing',
},
}
mslookup_options = {
'falloff': {'driverattr': 'falloff', 'dest': 'falloff',
'clarg': '--insourcefrag', 'default': False,
'action': 'store_const', 'const': True, 'help': 'Apply '
'filter against both intact peptides and those '
'that match to the C-terminal part of a tryptic peptide '
'from the database, resulting from in-source fragmentation, '
'where some amino acids will be missing from the N-terminus. '
'Specify the max number of amino acids that may be missing. '
'Database should be built with this '
'flag in order for the lookup to work, since sequences '
'will be stored and looked up reversed', 'required': False
},
'mapfn': {'driverattr': 'mapfn', 'dest': 'mapfn',
'type': 'file', 'clarg': '--map',
'required': False, 'help': 'File that contains '
'a map obtained from ENSEMBL BioMart which '
'should contain mappings from protein accession '
'to Gene ENSG and Symbol.'},
'decoy': {'driverattr': 'decoy', 'dest': 'decoy', 'clarg': '--decoy',
'action': 'store_const', 'const': True,
'default': False, 'help': 'Specifies lookup is '
'for decoy PSMs, use with --map in case there '
'are no decoy symbols in the FASTA used to '
'search.', 'required': False},
'spectrafns': {'driverattr': 'spectrafns', 'dest': 'spectra',
'type': str, 'help': 'Spectra files in mzML '
'format. Multiple files can be specified, if '
'order is important, e.g. when matching them '
'with quant data, the order will be their input '
'order at the command line.', 'clarg': '--spectra',
'nargs': '+'},
'quantfiletype': {'driverattr': 'quantfiletype', 'dest': 'quanttype',
'clarg': '--quanttype', 'type': 'pick', 'help':
'Filetype of '
'precursor quants to store. One of kronik or openms.',
'picks': ['kronik', 'openms']},
'rttol': {'driverattr': 'rt_tol', 'dest': 'rttol', 'clarg': '--rttol',
'type': float, 'help': 'Specifies tolerance in seconds for '
'retention time when mapping MS1 feature quant info to '
'identifications in the PSM table.'},
'mztol': {'driverattr': 'mz_tol', 'dest': 'mztol', 'clarg': '--mztol',
'type': float, 'help': 'Specifies tolerance in mass-to-charge '
'when mapping MS1 feature quant info to identifications in '
'the PSM table.'},
'mztoltype': {'driverattr': 'mz_toltype', 'dest': 'mztoltype',
'type': 'pick', 'picks': ['ppm', 'Da'],
'clarg': '--mztoltype',
'help': 'Type of tolerance in mass-to-charge when mapping '
'MS1 feature quant info to identifications in the PSM table.'
' One of ppm, Da.'},
'peptidecol': {'driverattr': 'peptidecol', 'dest': 'peptidecol',
'type': int, 'clarg': '--peptidecol', 'help':
'Column nr of peptide table where peptide sequences are '
'stored. First column is nr. |