from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'certified'}
DOCUMENTATION = r'''
---
module: bigip_virtual_server
short_description: Manage LTM virtual servers on a BIG-IP
description:
- Manage LTM virtual servers on a BIG-IP.
version_added: 2.1
options:
state:
description:
      - The virtual server state. If C(absent), deletes the virtual server
        if it exists. C(present) creates the virtual server and enables it.
        If C(enabled), enables the virtual server if it exists. If C(disabled),
        creates the virtual server if needed, and sets its state to C(disabled).
default: present
choices:
- present
- absent
- enabled
- disabled
type:
description:
- Specifies the network service provided by this virtual server.
- When creating a new virtual server, if this parameter is not provided, the
default will be C(standard).
- This value cannot be changed after it is set.
- When C(standard), specifies a virtual server that directs client traffic to
a load balancing pool and is the most basic type of virtual server. When you
first create the virtual server, you assign an existing default pool to it.
From then on, the virtual server automatically directs traffic to that default pool.
- When C(forwarding-l2), specifies a virtual server that shares the same IP address as a
node in an associated VLAN.
- When C(forwarding-ip), specifies a virtual server like other virtual servers, except
that the virtual server has no pool members to load balance. The virtual server simply
forwards the packet directly to the destination IP address specified in the client request.
- When C(performance-http), specifies a virtual server with which you associate a Fast HTTP
profile. Together, the virtual server and profile increase the speed at which the virtual
server processes HTTP requests.
- When C(performance-l4), specifies a virtual server with which you associate a Fast L4 profile.
Together, the virtual server and profile increase the speed at which the virtual server
processes layer 4 requests.
- When C(stateless), specifies a virtual server that accepts traffic matching the virtual
server address and load balances the packet to the pool members without attempting to
match the packet to a pre-existing connection in the connection table. New connections
are immediately removed from the connection table. This addresses the requirement for
one-way UDP traffic that needs to be processed at very high throughput levels, for example,
load balancing syslog traffic to a pool of syslog servers. Stateless virtual servers are
not suitable for processing traffic that requires stateful tracking, such as TCP traffic.
Stateless virtual servers do not support iRules, persistence, connection mirroring,
rateshaping, or SNAT automap.
- When C(reject), specifies that the BIG-IP system rejects any traffic destined for the
virtual server IP address.
      - When C(dhcp), specifies a virtual server that relays Dynamic Host Configuration Protocol (DHCP)
client requests for an IP address to one or more DHCP servers, and provides DHCP server
responses with an available IP address for the client.
- When C(internal), specifies a virtual server that supports modification of HTTP requests
and responses. Internal virtual servers enable usage of ICAP (Internet Content Adaptation
Protocol) servers to modify HTTP requests and responses by creating and applying an ICAP
profile and adding Request Adapt or Response Adapt profiles to the virtual server.
- When C(message-routing), specifies a virtual server that uses a SIP application protocol
and functions in accordance with a SIP session profile and SIP router profile.
choices:
- standard
- forwarding-l2
- forwarding-ip
- performance-http
- performance-l4
- stateless
- reject
- dhcp
- internal
- message-routing
default: standard
version_added: 2.6
name:
description:
- Virtual server name.
required: True
aliases:
- vs
destination:
description:
- Destination IP of the virtual server.
- Required when C(state) is C(present) and virtual server does not exist.
- When C(type) is C(internal), this parameter is ignored. For all other types,
it is required.
aliases:
- address
- ip
source:
description:
- Specifies an IP address or network from which the virtual server accepts traffic.
- The virtual server accepts clients only from one of these IP addresses.
- For this setting to function effectively, specify a value other than 0.0.0.0/0 or ::/0
(that is, any/0, any6/0).
- In order to maximize utility of this setting, specify the most specific address
prefixes covering all customer addresses and no others.
- Specify the IP address in Classless Inter-Domain Routing (CIDR) format; address/prefix,
where the prefix length is in bits. For example, for IPv4, 10.0.0.1/32 or 10.0.0.0/24,
and for IPv6, ffe1::0020/64 or 2001:ed8:77b5:2:10:10:100:42/64.
version_added: 2.5
port:
description:
- Port of the virtual server. Required when C(state) is C(present)
and virtual server does not exist.
- If you do not want to specify a particular port, use the value C(0).
The result is that the virtual server will listen on any port.
- When C(type) is C(dhcp), this module will force the C(port) parameter to be C(67).
- When C(type) is C(internal), this module will force the C(port) parameter to be C(0).
- In addition to specifying a port number, a select number of service names may also
be provided.
      - The string C(ftp) may be substituted for port C(21).
      - The string C(http) may be substituted for port C(80).
      - The string C(https) may be substituted for port C(443).
      - The string C(telnet) may be substituted for port C(23).
      - The string C(smtp) may be substituted for port C(25).
      - The string C(snmp) may be substituted for port C(161).
      - The string C(snmp-trap) may be substituted for port C(162).
      - The string C(ssh) may be substituted for port C(22).
      - The string C(tftp) may be substituted for port C(69).
      - The string C(isakmp) may be substituted for port C(500).
      - The string C(mqtt) may be substituted for port C(1883).
      - The string C(mqtt-tls) may be substituted for port C(8883).
      - The string C(pptp) may be substituted for port C(1723).
      - The string C(rtsp) may be substituted for port C(554).
profiles:
description:
- List of profiles (HTTP, ClientSSL, ServerSSL, etc) to apply to both sides
of the connection (client-side and server-side).
- If you only want to apply a particular profile to the client-side of
the connection, specify C(client-side) for the profile's C(context).
- If you only want to apply a particular profile to the server-side of
the connection, specify C(server-side) for the profile's C(context).
- If C(context) is not provided, it will default to C(all).
- If you want to remove a profile from the list of profiles currently active
on the virtual, then simply remove it from the C(profiles) list. See
examples for an illustration of this.
- If you want to add a profile to the list of profiles currently active
on the virtual, then simply add it to the C(profiles) list. See
examples for an illustration of this.
- B(Profiles matter). This module will fail to configure a BIG-IP if you mix up
your profiles, or, if you attempt to set an IP protocol which your current,
or new, profiles do not support. Both this module, and BIG-IP, will tell you
when you are wrong, with an error resembling C(lists profiles incompatible
with its protocol).
      - If you are unsure which profile combinations are valid, use a BIG-IP
        on which you can safely experiment, and copy the correct combinations
        from it.
suboptions:
name:
description:
- Name of the profile.
            - If this is not specified, the profile item is assumed to be just
              the name of a profile.
- This must be specified if a context is specified.
context:
description:
- The side of the connection on which the profile should be applied.
choices:
- all
- server-side
- client-side
default: all
aliases:
- all_profiles
irules:
version_added: 2.2
description:
- List of rules to be applied in priority order.
- If you want to remove existing iRules, specify a single empty value; C("").
See the documentation for an example.
- When C(type) is C(dhcp), this parameter will be ignored.
- When C(type) is C(stateless), this parameter will be ignored.
- When C(type) is C(reject), this parameter will be ignored.
- When C(type) is C(internal), this parameter will be ignored.
aliases:
- all_rules
enabled_vlans:
version_added: "2.2"
description:
- List of VLANs to be enabled. When a VLAN named C(all) is used, all
VLANs will be allowed. VLANs can be specified with or without the
leading partition. If the partition is not specified in the VLAN,
then the C(partition) option of this module will be used.
- This parameter is mutually exclusive with the C(disabled_vlans) parameter.
disabled_vlans:
version_added: 2.5
description:
- List of VLANs to be disabled. If the partition is not specified in the VLAN,
then the C(partition) option of this module will be used.
      - This parameter is mutually exclusive with the C(enabled_vlans) parameter.
pool:
description:
- Default pool for the virtual server.
- If you want to remove the existing pool, specify an empty value; C("").
See the documentation for an example.
- When creating a new virtual server, and C(type) is C(stateless), this parameter
is required.
- If C(type) is C(stateless), the C(pool) that is used must not have any members
which define a C(rate_limit).
policies:
description:
- Specifies the policies for the virtual server.
- When C(type) is C(dhcp), this parameter will be ignored.
- When C(type) is C(reject), this parameter will be ignored.
- When C(type) is C(internal), this parameter will be ignored.
aliases:
- all_policies
snat:
description:
- Source network address policy.
      - When C(type) is C(dhcp), this parameter will be ignored.
- When C(type) is C(reject), this parameter will be ignored.
- When C(type) is C(internal), this parameter will be ignored.
      - The name of a SNAT pool (for example, C(/Common/snat_pool_name)) can be specified
        to enable SNAT with the specific pool.
- To remove SNAT, specify the word C(none).
- To specify automap, use the word C(automap).
default_persistence_profile:
description:
      - Default profile that manages session persistence.
- If you want to remove the existing default persistence profile, specify an
empty value; C(""). See the documentation for an example.
- When C(type) is C(dhcp), this parameter will be ignored.
description:
description:
- Virtual server description.
fallback_persistence_profile:
description:
- Specifies the persistence profile you want the system to use if it
cannot use the specified default persistence profile.
- If you want to remove the existing fallback persistence profile, specify an
empty value; C(""). See the documentation for an example.
- When C(type) is C(dhcp), this parameter will be ignored.
version_added: 2.3
partition:
description:
- Device partition to manage resources on.
default: Common
version_added: 2.5
metadata:
description:
      - Arbitrary key/value pairs that you can attach to a virtual server. This is useful in
        situations where you might want to annotate a virtual to be managed by Ansible.
- Key names will be stored as strings; this includes names that are numbers.
- Values for all of the keys will be stored as strings; this includes values
that are numbers.
- Data will be persisted, not ephemeral.
version_added: 2.5
address_translation:
description:
- Specifies, when C(enabled), that the system translates the address of the
virtual server.
- When C(disabled), specifies that the system uses the address without translation.
- This option is useful when the system is load balancing devices that have the
same IP address.
- When creating a new virtual server, the default is C(enabled).
type: bool
version_added: 2.6
port_translation:
description:
- Specifies, when C(enabled), that the system translates the port of the virtual
server.
- When C(disabled), specifies that the system uses the port without translation.
Turning off port translation for a virtual server is useful if you want to use
the virtual server to load balance connections to any service.
- When creating a new virtual server, the default is C(enabled).
type: bool
version_added: 2.6
ip_protocol:
description:
- Specifies a network protocol name you want the system to use to direct traffic
on this virtual server.
- When creating a new virtual server, if this parameter is not specified, the default is C(tcp).
- The Protocol setting is not available when you select Performance (HTTP) as the Type.
      - The value of this argument can be specified either as its numeric value or,
        for convenience, as one of a select number of named values. Refer to C(choices) for examples.
- For a list of valid IP protocol numbers, refer to this page
https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers
- When C(type) is C(dhcp), this module will force the C(ip_protocol) parameter to be C(17) (UDP).
choices:
- ah
- any
- bna
- esp
- etherip
- gre
- icmp
- ipencap
- ipv6
- ipv6-auth
- ipv6-crypt
- ipv6-icmp
      - iso-ip
- mux
- ospf
- sctp
- tcp
- udp
- udplite
version_added: 2.6
firewall_enforced_policy:
description:
      - Applies the specified AFM policy to the virtual server in an enforcing way.
- When creating a new virtual, if this parameter is not specified, the enforced
policy is disabled.
version_added: 2.6
firewall_staged_policy:
description:
      - Applies the specified AFM policy to the virtual server in a staged way.
- A staged policy shows the results of the policy rules in the log, while not
actually applying the rules to traffic.
- When creating a new virtual, if this parameter is not specified, the staged
policy is disabled.
version_added: 2.6
security_log_profiles:
description:
      - Specifies the log profiles applied to the virtual server.
      - To make use of this feature, the AFM module must be licensed and provisioned.
      - The C(Log all requests) and C(Log illegal requests) profiles are mutually
        exclusive; this module will raise an error if both are specified together.
version_added: 2.6
security_nat_policy:
description:
      - Specifies the firewall NAT policies for the virtual server.
- You can specify one or more NAT policies to use.
- The most specific policy is used. For example, if you specify that the
virtual server use the device policy and the route domain policy, the route
domain policy overrides the device policy.
version_added: 2.7
suboptions:
policy:
description:
            - NAT policy to apply directly to the virtual server.
- The virtual server NAT policy is the most specific, and overrides a
route domain and device policy, if specified.
- To remove the policy, specify an empty string value.
use_device_policy:
description:
- Specify that the virtual server uses the device NAT policy, as specified
in the Firewall Options.
- The device policy is used if no route domain or virtual server NAT
setting is specified.
type: bool
use_route_domain_policy:
description:
- Specify that the virtual server uses the route domain policy, as
specified in the Route Domain Security settings.
- When specified, the route domain policy overrides the device policy, and
is overridden by a virtual server policy.
type: bool
extends_documentation_fragment: f5
author:
- Tim Rupp (@caphrim007)
'''
EXAMPLES = r'''
- name: Modify Port of the Virtual Server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
state: present
partition: Common
name: my-virtual-server
port: 8080
delegate_to: localhost
- name: Delete virtual server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
state: absent
partition: Common
name: my-virtual-server
delegate_to: localhost
- name: Add virtual server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
state: present
partition: Common
name: my-virtual-server
destination: 10.10.10.10
port: 443
pool: my-pool
snat: Automap
description: Test Virtual Server
profiles:
- http
- fix
- name: clientssl
context: server-side
- name: ilx
context: client-side
policies:
- my-ltm-policy-for-asm
- ltm-uri-policy
- ltm-policy-2
- ltm-policy-3
enabled_vlans:
- /Common/vlan2
delegate_to: localhost
- name: Add FastL4 virtual server
  bigip_virtual_server:
    server: lb.mydomain.net
    user: admin
    password: secret
    destination: 1.1.1.1
    name: fastl4_vs
    port: 80
    profiles:
      - fastL4
    state: present
  delegate_to: localhost
- name: Add iRules to the Virtual Server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
name: my-virtual-server
irules:
- irule1
- irule2
delegate_to: localhost
- name: Remove one iRule from the Virtual Server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
name: my-virtual-server
irules:
- irule2
delegate_to: localhost
- name: Remove all iRules from the Virtual Server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
name: my-virtual-server
irules: ""
delegate_to: localhost
- name: Remove pool from the Virtual Server
bigip_virtual_server:
server: lb.mydomain.net
user: admin
password: secret
name: my-virtual-server
pool: ""
delegate_to: localhost
- name: Add metadata to virtual
  bigip_virtual_server:
    server: lb.mydomain.com
    user: admin
    password: secret
    state: present
    name: my-virtual-server
    partition: Common
    metadata:
      ansible: 2.4
      updated_at: 2017-12-20T17:50:46Z
  delegate_to: localhost
- name: Add virtual with two profiles
  bigip_virtual_server:
    server: lb.mydomain.com
    user: admin
    password: secret
    state: present
    name: my-virtual-server
    partition: Common
    profiles:
      - http
      - tcp
  delegate_to: localhost
- name: Remove HTTP profile from previous virtual
  bigip_virtual_server:
    server: lb.mydomain.com
    user: admin
    password: secret
    state: present
    name: my-virtual-server
    partition: Common
    profiles:
      - tcp
  delegate_to: localhost
- name: Add the HTTP profile back to the previous virtual
  bigip_virtual_server:
    server: lb.mydomain.com
    user: admin
    password: secret
    state: present
    name: my-virtual-server
    partition: Common
    profiles:
      - http
      - tcp
  delegate_to: localhost
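# The two tasks below are illustrative sketches, not part of the original
# examples; the server, credentials, and object names are assumed values.
- name: Create a virtual server that listens on any port
  bigip_virtual_server:
    server: lb.mydomain.net
    user: admin
    password: secret
    state: present
    name: my-any-port-virtual-server
    destination: 10.10.10.11
    port: 0
    ip_protocol: udp
  delegate_to: localhost
- name: Attach a virtual server NAT policy
  bigip_virtual_server:
    server: lb.mydomain.net
    user: admin
    password: secret
    state: present
    name: my-virtual-server
    security_nat_policy:
      policy: my-nat-policy
      use_device_policy: no
      use_route_domain_policy: no
  delegate_to: localhost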
'''
RETURN = r'''
description:
description: New description of the virtual server.
returned: changed
type: string
sample: This is my description
default_persistence_profile:
description: Default persistence profile set on the virtual server.
returned: changed
type: string
sample: /Common/dest_addr
destination:
description: Destination of the virtual server.
returned: changed
type: string
sample: 1.1.1.1
disabled:
description: Whether the virtual server is disabled, or not.
returned: changed
type: bool
sample: True
disabled_vlans:
description: List of VLANs that the virtual is disabled for.
returned: changed
type: list
sample: ['/Common/vlan1', '/Common/vlan2']
enabled:
description: Whether the virtual server is enabled, or not.
returned: changed
type: bool
sample: False
enabled_vlans:
description: List of VLANs that the virtual is enabled for.
returned: changed
type: list
sample: ['/Common/vlan5', '/Common/vlan6']
fallback_persistence_profile:
description: Fallback persistence profile set on the virtual server.
returned: changed
type: string
sample: /Common/source_addr
irules:
description: iRules set on the virtual server.
returned: changed
type: list
sample: ['/Common/irule1', '/Common/irule2']
pool:
description: Pool that the virtual server is attached to.
returned: changed
type: string
sample: /Common/my-pool
policies:
description: List of policies attached to the virtual.
returned: changed
type: list
sample: ['/Common/policy1', '/Common/policy2']
port:
description: Port that the virtual server is configured to listen on.
returned: changed
type: int
sample: 80
profiles:
description: List of profiles set on the virtual server.
returned: changed
type: list
sample: [{'name': 'tcp', 'context': 'server-side'}, {'name': 'tcp-legacy', 'context': 'client-side'}]
snat:
description: SNAT setting of the virtual server.
returned: changed
type: string
sample: Automap
source:
description: Source address, in CIDR form, set on the virtual server.
returned: changed
type: string
sample: 1.2.3.4/32
metadata:
  description: The new metadata on the virtual server.
returned: changed
type: dict
sample: {'key1': 'foo', 'key2': 'bar'}
address_translation:
description: The new value specifying whether address translation is on or off.
returned: changed
type: bool
sample: True
port_translation:
description: The new value specifying whether port translation is on or off.
returned: changed
type: bool
sample: True
ip_protocol:
description: The new value of the IP protocol.
returned: changed
type: int
sample: 6
firewall_enforced_policy:
description: The new enforcing firewall policy.
returned: changed
type: string
sample: /Common/my-enforced-fw
firewall_staged_policy:
description: The new staging firewall policy.
returned: changed
type: string
sample: /Common/my-staged-fw
security_log_profiles:
description: The new list of security log profiles.
returned: changed
type: list
sample: ['/Common/profile1', '/Common/profile2']
'''
import os
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.basic import env_fallback
from ansible.module_utils.six import iteritems
from collections import namedtuple
try:
from library.module_utils.network.f5.bigip import F5RestClient
from library.module_utils.network.f5.common import MANAGED_BY_ANNOTATION_VERSION
from library.module_utils.network.f5.common import MANAGED_BY_ANNOTATION_MODIFIED
from library.module_utils.network.f5.common import F5ModuleError
from library.module_utils.network.f5.common import AnsibleF5Parameters
from library.module_utils.network.f5.common import cleanup_tokens
from library.module_utils.network.f5.common import fq_name
from library.module_utils.network.f5.common import f5_argument_spec
from library.module_utils.network.f5.common import fail_json
from library.module_utils.network.f5.common import exit_json
from library.module_utils.network.f5.common import transform_name
from library.module_utils.network.f5.common import mark_managed_by
from library.module_utils.network.f5.common import only_has_managed_metadata
from library.module_utils.network.f5.compare import cmp_simple_list
from library.module_utils.network.f5.ipaddress import is_valid_ip
from library.module_utils.network.f5.ipaddress import ip_interface
from library.module_utils.network.f5.ipaddress import validate_ip_v6_address
except ImportError:
from ansible.module_utils.network.f5.bigip import F5RestClient
from ansible.module_utils.network.f5.common import MANAGED_BY_ANNOTATION_VERSION
from ansible.module_utils.network.f5.common import MANAGED_BY_ANNOTATION_MODIFIED
from ansible.module_utils.network.f5.common import F5ModuleError
from ansible.module_utils.network.f5.common import AnsibleF5Parameters
from ansible.module_utils.network.f5.common import cleanup_tokens
from ansible.module_utils.network.f5.common import fq_name
from ansible.module_utils.network.f5.common import f5_argument_spec
from ansible.module_utils.network.f5.common import fail_json
from ansible.module_utils.network.f5.common import exit_json
from ansible.module_utils.network.f5.common import transform_name
from ansible.module_utils.network.f5.common import mark_managed_by
from ansible.module_utils.network.f5.common import only_has_managed_metadata
from ansible.module_utils.network.f5.compare import cmp_simple_list
from ansible.module_utils.network.f5.ipaddress import is_valid_ip
from ansible.module_utils.network.f5.ipaddress import ip_interface
from ansible.module_utils.network.f5.ipaddress import validate_ip_v6_address
class Parameters(AnsibleF5Parameters):
api_map = {
'sourceAddressTranslation': 'snat',
'fallbackPersistence': 'fallback_persistence_profile',
'persist': 'default_persistence_profile',
'vlansEnabled': 'vlans_enabled',
'vlansDisabled': 'vlans_disabled',
'profilesReference': 'profiles',
'policiesReference': 'policies',
'rules': 'irules',
'translateAddress': 'address_translation',
'translatePort': 'port_translation',
'ipProtocol': 'ip_protocol',
'fwEnforcedPolicy': 'firewall_enforced_policy',
'fwStagedPolicy': 'firewall_staged_policy',
'securityLogProfiles': 'security_log_profiles',
'securityNatPolicy': 'security_nat_policy',
}
api_attributes = [
'description',
'destination',
'disabled',
'enabled',
'fallbackPersistence',
'ipProtocol',
'metadata',
'persist',
'policies',
'pool',
'profiles',
'rules',
'source',
'sourceAddressTranslation',
'vlans',
'vlansEnabled',
'vlansDisabled',
'translateAddress',
'translatePort',
'l2Forward',
'ipForward',
'stateless',
'reject',
'dhcpRelay',
'internal',
'fwEnforcedPolicy',
'fwStagedPolicy',
'securityLogProfiles',
'securityNatPolicy',
]
updatables = [
'address_translation',
'description',
'default_persistence_profile',
'destination',
'disabled_vlans',
'enabled',
'enabled_vlans',
'fallback_persistence_profile',
'ip_protocol',
'irules',
'metadata',
'pool',
'policies',
'port',
'port_translation',
'profiles',
'snat',
'source',
'type',
'firewall_enforced_policy',
'firewall_staged_policy',
'security_log_profiles',
'security_nat_policy',
]
returnables = [
'address_translation',
'description',
'default_persistence_profile',
'destination',
'disabled',
'disabled_vlans',
'enabled',
'enabled_vlans',
'fallback_persistence_profile',
'ip_protocol',
'irules',
'metadata',
'pool',
'policies',
'port',
'port_translation',
'profiles',
'snat',
'source',
'vlans',
'vlans_enabled',
'vlans_disabled',
'type',
'firewall_enforced_policy',
'firewall_staged_policy',
'security_log_profiles',
'security_nat_policy',
]
profiles_mutex = [
'sip',
'sipsession',
'iiop',
'rtsp',
'http',
'diameter',
'diametersession',
'radius',
'ftp',
'tftp',
'dns',
'pptp',
'fix',
]
ip_protocols_map = [
('ah', 51),
('bna', 49),
('esp', 50),
('etherip', 97),
('gre', 47),
('icmp', 1),
('ipencap', 4),
('ipv6', 41),
('ipv6-auth', 51), # not in the official list
('ipv6-crypt', 50), # not in the official list
('ipv6-icmp', 58),
('iso-ip', 80),
('mux', 18),
('ospf', 89),
('sctp', 132),
('tcp', 6),
('udp', 17),
('udplite', 136),
]
def to_return(self):
result = {}
for returnable in self.returnables:
try:
result[returnable] = getattr(self, returnable)
except Exception:
pass
result = self._filter_params(result)
return result
def _format_port_for_destination(self, ip, port):
if validate_ip_v6_address(ip):
if port == 0:
result = '.any'
else:
result = '.{0}'.format(port)
else:
result = ':{0}'.format(port)
return result
def _format_destination(self, address, port, route_domain):
if port is None:
if route_domain is None:
result = '{0}'.format(
fq_name(self.partition, address)
)
else:
result = '{0}%{1}'.format(
fq_name(self.partition, address),
route_domain
)
else:
port = self._format_port_for_destination(address, port)
if route_domain is None:
result = '{0}{1}'.format(
fq_name(self.partition, address),
port
)
else:
result = '{0}%{1}{2}'.format(
fq_name(self.partition, address),
route_domain,
port
)
return result
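    # Illustrative sketch of the destination formats produced above, assuming
    # partition 'Common' (values are hypothetical):
    #   _format_destination('1.1.1.1', 80, None)  -> '/Common/1.1.1.1:80'
    #   _format_destination('1.1.1.1', 80, 2)     -> '/Common/1.1.1.1%2:80'
    #   _format_destination('fe80::1', 80, None)  -> '/Common/fe80::1.80'
    #   _format_destination('fe80::1', 0, None)   -> '/Common/fe80::1.any'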
@property
def ip_protocol(self):
if self._values['ip_protocol'] is None:
return None
if self._values['ip_protocol'] == 'any':
return 'any'
for x in self.ip_protocols_map:
if x[0] == self._values['ip_protocol']:
return int(x[1])
try:
return int(self._values['ip_protocol'])
except ValueError:
raise F5ModuleError(
"Specified ip_protocol was neither a number nor in the list of common protocols."
)
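    # Sketch of the resolution above (assumed inputs): 'tcp' -> 6, 'udp' -> 17,
    # 'any' stays 'any', a numeric string such as '51' -> 51, and anything
    # else raises F5ModuleError.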
@property
def source(self):
if self._values['source'] is None:
return None
try:
addr = ip_interface(u'{0}'.format(self._values['source']))
result = '{0}/{1}'.format(str(addr.ip), addr.network.prefixlen)
return result
except ValueError:
raise F5ModuleError(
"The source IP address must be specified in CIDR format: address/prefix"
)
@property
def has_message_routing_profiles(self):
if self.profiles is None:
return None
current = self._read_current_message_routing_profiles_from_device()
result = [x['name'] for x in self.profiles if x['name'] in current]
if len(result) > 0:
return True
return False
@property
def has_fastl4_profiles(self):
if self.profiles is None:
return None
current = self._read_current_fastl4_profiles_from_device()
result = [x['name'] for x in self.profiles if x['name'] in current]
if len(result) > 0:
return True
return False
@property
def has_fasthttp_profiles(self):
"""Check if ``fasthttp`` profile is in API profiles
This method is used to determine the server type when doing comparisons
in the Difference class.
Returns:
bool: True if server has ``fasthttp`` profiles. False otherwise.
"""
if self.profiles is None:
return None
current = self._read_current_fasthttp_profiles_from_device()
result = [x['name'] for x in self.profiles if x['name'] in current]
if len(result) > 0:
return True
return False
def _read_current_message_routing_profiles_from_device(self):
result = []
result += self._read_diameter_profiles_from_device()
result += self._read_sip_profiles_from_device()
return result
def _read_diameter_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/diameter/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [x['name'] for x in response['items']]
return result
def _read_sip_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/sip/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [x['name'] for x in response['items']]
return result
def _read_current_fastl4_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/fastl4/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [x['name'] for x in response['items']]
return result
def _read_current_fasthttp_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/fasthttp/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [x['name'] for x in response['items']]
return result
class ApiParameters(Parameters):
@property
def type(self):
"""Attempt to determine the current server type
This check is very unscientific. It turns out that this information is not
exactly available anywhere on a BIG-IP. Instead, we rely on a semi-reliable
means for determining what the type of the virtual server is. Hopefully it
always works.
There are a handful of attributes that can be used to determine a specific
type. There are some types though that can only be determined by looking at
the profiles that are assigned to them. We follow that method for those
        complicated types: message-routing, fasthttp, and fastl4.
Because type determination is an expensive operation, we cache the result
from the operation.
Returns:
string: The server type.
"""
if self._values['type']:
return self._values['type']
if self.l2Forward is True:
result = 'forwarding-l2'
elif self.ipForward is True:
result = 'forwarding-ip'
elif self.stateless is True:
result = 'stateless'
elif self.reject is True:
result = 'reject'
elif self.dhcpRelay is True:
result = 'dhcp'
elif self.internal is True:
result = 'internal'
elif self.has_fasthttp_profiles:
result = 'performance-http'
elif self.has_fastl4_profiles:
result = 'performance-l4'
elif self.has_message_routing_profiles:
result = 'message-routing'
else:
result = 'standard'
self._values['type'] = result
return result
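    # Sketch (assumed API payloads): a virtual whose payload contains
    # 'ipForward': True resolves to 'forwarding-ip'; one whose only marker is
    # an attached fasthttp profile resolves to 'performance-http'; a virtual
    # with none of the markers resolves to 'standard'.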
@property
def destination(self):
if self._values['destination'] is None:
return None
destination = self.destination_tuple
result = self._format_destination(destination.ip, destination.port, destination.route_domain)
return result
@property
def destination_tuple(self):
Destination = namedtuple('Destination', ['ip', 'port', 'route_domain'])
# Remove the partition
if self._values['destination'] is None:
result = Destination(ip=None, port=None, route_domain=None)
return result
destination = re.sub(r'^/[a-zA-Z0-9_.-]+/', '', self._values['destination'])
if is_valid_ip(destination):
result = Destination(
ip=destination,
port=None,
route_domain=None
)
return result
# Covers the following examples
#
# /Common/2700:bc00:1f10:101::6%2.80
# 2700:bc00:1f10:101::6%2.80
# 1.1.1.1%2:80
# /Common/1.1.1.1%2:80
# /Common/2700:bc00:1f10:101::6%2.any
#
pattern = r'(?P<ip>[^%]+)%(?P<route_domain>[0-9]+)[:.](?P<port>[0-9]+|any)'
matches = re.search(pattern, destination)
if matches:
try:
port = int(matches.group('port'))
except ValueError:
# Can be a port of "any". This only happens with IPv6
port = matches.group('port')
if port == 'any':
port = 0
ip = matches.group('ip')
if not is_valid_ip(ip):
raise F5ModuleError(
"The provided destination is not a valid IP address"
)
result = Destination(
ip=matches.group('ip'),
port=port,
route_domain=int(matches.group('route_domain'))
)
return result
pattern = r'(?P<ip>[^%]+)%(?P<route_domain>[0-9]+)'
matches = re.search(pattern, destination)
if matches:
ip = matches.group('ip')
if not is_valid_ip(ip):
raise F5ModuleError(
"The provided destination is not a valid IP address"
)
result = Destination(
ip=matches.group('ip'),
port=None,
route_domain=int(matches.group('route_domain'))
)
return result
parts = destination.split('.')
if len(parts) == 4:
# IPv4
ip, port = destination.split(':')
if not is_valid_ip(ip):
raise F5ModuleError(
"The provided destination is not a valid IP address"
)
result = Destination(
ip=ip,
port=int(port),
route_domain=None
)
return result
elif len(parts) == 2:
# IPv6
ip, port = destination.split('.')
try:
port = int(port)
except ValueError:
# Can be a port of "any". This only happens with IPv6
if port == 'any':
port = 0
if not is_valid_ip(ip):
raise F5ModuleError(
"The provided destination is not a valid IP address"
)
result = Destination(
ip=ip,
port=port,
route_domain=None
)
return result
else:
result = Destination(ip=None, port=None, route_domain=None)
return result
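    # Sketch of parse results for assumed destination strings:
    #   '/Common/1.1.1.1:80'        -> Destination(ip='1.1.1.1', port=80, route_domain=None)
    #   '1.1.1.1%2:80'              -> Destination(ip='1.1.1.1', port=80, route_domain=2)
    #   '2700:bc00:1f10:101::6.any' -> Destination(ip='2700:bc00:1f10:101::6', port=0, route_domain=None)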
@property
def port(self):
destination = self.destination_tuple
self._values['port'] = destination.port
return destination.port
@property
def route_domain(self):
"""Return a route domain number from the destination
Returns:
int: The route domain number
"""
        destination = self.destination_tuple
        self._values['route_domain'] = destination.route_domain
        if destination.route_domain is None:
            return None
        return int(destination.route_domain)
@property
def profiles(self):
"""Returns a list of profiles from the API
The profiles are formatted so that they are usable in this module and
are able to be compared by the Difference engine.
Returns:
list (:obj:`list` of :obj:`dict`): List of profiles.
Each dictionary in the list contains the following three (3) keys.
* name
* context
* fullPath
Raises:
            F5ModuleError: If the specified context is a value other than
``all``, ``serverside``, or ``clientside``.
"""
if 'items' not in self._values['profiles']:
return None
result = []
for item in self._values['profiles']['items']:
context = item['context']
name = item['name']
if context in ['all', 'serverside', 'clientside']:
result.append(dict(name=name, context=context, fullPath=item['fullPath']))
else:
raise F5ModuleError(
"Unknown profile context found: '{0}'".format(context)
)
return result
@property
def profile_types(self):
        return [x['name'] for x in self.profiles]
@property
def policies(self):
if 'items' not in self._values['policies']:
return None
result = []
for item in self._values['policies']['items']:
name = item['name']
partition = item['partition']
result.append(dict(name=name, partition=partition))
return result
@property
def default_persistence_profile(self):
"""Get the name of the current default persistence profile
These persistence profiles are always lists when we get them
from the REST API even though there can only be one. We'll
make it a list again when we get to the Difference engine.
Returns:
string: The name of the default persistence profile
"""
if self._values['default_persistence_profile'] is None:
return None
return self._values['default_persistence_profile'][0]
@property
def enabled(self):
if 'enabled' in self._values:
return True
return False
@property
def disabled(self):
if 'disabled' in self._values:
return True
return False
@property
def metadata(self):
if self._values['metadata'] is None:
return None
if only_has_managed_metadata(self._values['metadata']):
return None
result = []
for md in self._values['metadata']:
if md['name'] in [MANAGED_BY_ANNOTATION_VERSION, MANAGED_BY_ANNOTATION_MODIFIED]:
continue
tmp = dict(name=str(md['name']))
if 'value' in md:
tmp['value'] = str(md['value'])
else:
tmp['value'] = ''
result.append(tmp)
return result
@property
def security_log_profiles(self):
if self._values['security_log_profiles'] is None:
return None
# At the moment, BIG-IP wraps the names of log profiles in double-quotes if
# the profile name contains spaces. This is likely due to the REST code being
# too close to actual tmsh code and, at the tmsh level, a space in the profile
# name would cause tmsh to see the 2nd word (and beyond) as "the next parameter".
#
# This seems like a bug to me.
result = list(set([x.strip('"') for x in self._values['security_log_profiles']]))
result.sort()
return result
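    # Sketch (assumed API value): ['"Log all requests"', '/Common/local-dos']
    # -> ['/Common/local-dos', 'Log all requests'] (quotes stripped, sorted).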
@property
def sec_nat_use_device_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'useDevicePolicy' not in self._values['security_nat_policy']:
return None
if self._values['security_nat_policy']['useDevicePolicy'] == "no":
return False
return True
@property
def sec_nat_use_rd_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'useRouteDomainPolicy' not in self._values['security_nat_policy']:
return None
if self._values['security_nat_policy']['useRouteDomainPolicy'] == "no":
return False
return True
@property
def sec_nat_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'policy' not in self._values['security_nat_policy']:
return None
return self._values['security_nat_policy']['policy']
@property
def irules(self):
if self._values['irules'] is None:
return []
return self._values['irules']
class ModuleParameters(Parameters):
services_map = {
'ftp': 21,
'http': 80,
'https': 443,
'telnet': 23,
'pptp': 1723,
'smtp': 25,
'snmp': 161,
'snmp-trap': 162,
'ssh': 22,
'tftp': 69,
'isakmp': 500,
'mqtt': 1883,
'mqtt-tls': 8883,
'rtsp': 554
}
def _handle_profile_context(self, tmp):
if 'context' not in tmp:
tmp['context'] = 'all'
else:
if 'name' not in tmp:
raise F5ModuleError(
"A profile name must be specified when a context is specified."
)
tmp['context'] = tmp['context'].replace('server-side', 'serverside')
tmp['context'] = tmp['context'].replace('client-side', 'clientside')
def _handle_clientssl_profile_nuances(self, profile):
if profile['name'] != 'clientssl':
return
if profile['context'] != 'clientside':
profile['context'] = 'clientside'
def _check_port(self):
try:
port = int(self._values['port'])
except ValueError:
raise F5ModuleError(
"The specified port was not a valid integer"
)
if 0 <= port <= 65535:
return port
raise F5ModuleError(
"Valid ports must be in range 0 - 65535"
)
@property
def destination(self):
addr = self._values['destination'].split("%")[0]
if not is_valid_ip(addr):
raise F5ModuleError(
"The provided destination is not a valid IP address"
)
result = self._format_destination(addr, self.port, self.route_domain)
return result
@property
def destination_tuple(self):
Destination = namedtuple('Destination', ['ip', 'port', 'route_domain'])
if self._values['destination'] is None:
result = Destination(ip=None, port=None, route_domain=None)
return result
addr = self._values['destination'].split("%")[0]
result = Destination(ip=addr, port=self.port, route_domain=self.route_domain)
return result
@property
def port(self):
if self._values['port'] is None:
return None
if self._values['port'] in ['*', 'any']:
return 0
if self._values['port'] in self.services_map:
port = self._values['port']
self._values['port'] = self.services_map[port]
self._check_port()
return int(self._values['port'])
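    # Sketch of the port resolution above (assumed inputs): 'http' -> 80,
    # '*' or 'any' -> 0, '8080' -> 8080; out-of-range or non-numeric values
    # raise F5ModuleError via _check_port().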
@property
def irules(self):
results = []
if self._values['irules'] is None:
return None
if len(self._values['irules']) == 1 and self._values['irules'][0] == '':
return ''
for irule in self._values['irules']:
result = fq_name(self.partition, irule)
results.append(result)
return results
@property
def profiles(self):
if self._values['profiles'] is None:
return None
if len(self._values['profiles']) == 1 and self._values['profiles'][0] == '':
return ''
result = []
for profile in self._values['profiles']:
tmp = dict()
if isinstance(profile, dict):
tmp.update(profile)
self._handle_profile_context(tmp)
if 'name' not in profile:
tmp['name'] = profile
tmp['fullPath'] = fq_name(self.partition, tmp['name'])
self._handle_clientssl_profile_nuances(tmp)
else:
full_path = fq_name(self.partition, profile)
tmp['name'] = os.path.basename(profile)
tmp['context'] = 'all'
tmp['fullPath'] = full_path
self._handle_clientssl_profile_nuances(tmp)
result.append(tmp)
        mutually_exclusive = [x['name'] for x in result if x['name'] in self.profiles_mutex]
        if len(mutually_exclusive) > 1:
            raise F5ModuleError(
                "Profiles {0} are mutually exclusive".format(
                    ', '.join(mutually_exclusive).strip()
                )
            )
return result
@property
def policies(self):
if self._values['policies'] is None:
return None
if len(self._values['policies']) == 1 and self._values['policies'][0] == '':
return ''
result = []
policies = [fq_name(self.partition, p) for p in self._values['policies']]
policies = set(policies)
for policy in policies:
parts = policy.split('/')
if len(parts) != 3:
raise F5ModuleError(
"The specified policy '{0}' is malformed".format(policy)
)
tmp = dict(
name=parts[2],
partition=parts[1]
)
result.append(tmp)
return result
@property
def pool(self):
if self._values['pool'] is None:
return None
if self._values['pool'] == '':
return ''
return fq_name(self.partition, self._values['pool'])
@property
def vlans_enabled(self):
if self._values['enabled_vlans'] is None:
return None
elif self._values['vlans_enabled'] is False:
# This is a special case for "all" enabled VLANs
return False
if self._values['disabled_vlans'] is None:
return True
return False
@property
def vlans_disabled(self):
if self._values['disabled_vlans'] is None:
return None
elif self._values['vlans_disabled'] is True:
# This is a special case for "all" enabled VLANs
return True
elif self._values['enabled_vlans'] is None:
return True
return False
@property
def enabled_vlans(self):
if self._values['enabled_vlans'] is None:
return None
        elif any(x.lower() in ['all', '*'] for x in self._values['enabled_vlans']):
result = [fq_name(self.partition, 'all')]
if result[0].endswith('/all'):
if self._values['__warnings'] is None:
self._values['__warnings'] = []
self._values['__warnings'].append(
dict(
msg="Usage of the 'ALL' value for 'enabled_vlans' parameter is deprecated. Use '*' instead",
version='2.9'
)
)
return result
results = list(set([fq_name(self.partition, x) for x in self._values['enabled_vlans']]))
results.sort()
return results
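    # Sketch (assumed inputs, partition 'Common'): ['vlan2', '/Common/vlan1']
    # -> ['/Common/vlan1', '/Common/vlan2']; ['all'] -> ['/Common/all'] plus a
    # deprecation warning, per the branch above.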
@property
def disabled_vlans(self):
if self._values['disabled_vlans'] is None:
return None
        elif any(x.lower() in ['all', '*'] for x in self._values['disabled_vlans']):
raise F5ModuleError(
"You cannot disable all VLANs. You must name them individually."
)
results = list(set([fq_name(self.partition, x) for x in self._values['disabled_vlans']]))
results.sort()
return results
@property
def vlans(self):
disabled = self.disabled_vlans
if disabled:
return self.disabled_vlans
return self.enabled_vlans
@property
def state(self):
if self._values['state'] == 'present':
return 'enabled'
return self._values['state']
@property
def snat(self):
if self._values['snat'] is None:
return None
lowercase = self._values['snat'].lower()
if lowercase in ['automap', 'none']:
return dict(type=lowercase)
snat_pool = fq_name(self.partition, self._values['snat'])
return dict(pool=snat_pool, type='snat')
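    # Sketch of the SNAT translation above (assumed inputs):
    #   'Automap'       -> {'type': 'automap'}
    #   'none'          -> {'type': 'none'}
    #   'my-snat-pool'  -> {'pool': '/Common/my-snat-pool', 'type': 'snat'}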
@property
def default_persistence_profile(self):
if self._values['default_persistence_profile'] is None:
return None
if self._values['default_persistence_profile'] == '':
return ''
profile = fq_name(self.partition, self._values['default_persistence_profile'])
parts = profile.split('/')
if len(parts) != 3:
raise F5ModuleError(
"The specified 'default_persistence_profile' is malformed"
)
result = dict(
name=parts[2],
partition=parts[1]
)
return result
@property
def fallback_persistence_profile(self):
if self._values['fallback_persistence_profile'] is None:
return None
if self._values['fallback_persistence_profile'] == '':
return ''
result = fq_name(self.partition, self._values['fallback_persistence_profile'])
return result
@property
def enabled(self):
if self._values['state'] == 'enabled':
return True
elif self._values['state'] == 'disabled':
return False
else:
return None
@property
def disabled(self):
if self._values['state'] == 'enabled':
return False
elif self._values['state'] == 'disabled':
return True
else:
return None
@property
def metadata(self):
if self._values['metadata'] is None:
return None
if self._values['metadata'] == '':
return []
result = []
try:
for k, v in iteritems(self._values['metadata']):
tmp = dict(name=str(k))
if v:
tmp['value'] = str(v)
else:
tmp['value'] = ''
result.append(tmp)
except AttributeError:
raise F5ModuleError(
"The 'metadata' parameter must be a dictionary of key/value pairs."
)
return result
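    # Sketch (assumed input): metadata={'ansible': '2.4'} becomes
    # [{'name': 'ansible', 'value': '2.4'}]; a non-dict value raises
    # F5ModuleError.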
@property
def address_translation(self):
if self._values['address_translation'] is None:
return None
if self._values['address_translation']:
return 'enabled'
return 'disabled'
@property
def port_translation(self):
if self._values['port_translation'] is None:
return None
if self._values['port_translation']:
return 'enabled'
return 'disabled'
@property
def firewall_enforced_policy(self):
if self._values['firewall_enforced_policy'] is None:
return None
return fq_name(self.partition, self._values['firewall_enforced_policy'])
@property
def firewall_staged_policy(self):
if self._values['firewall_staged_policy'] is None:
return None
return fq_name(self.partition, self._values['firewall_staged_policy'])
@property
def security_log_profiles(self):
if self._values['security_log_profiles'] is None:
return None
if len(self._values['security_log_profiles']) == 1 and self._values['security_log_profiles'][0] == '':
return ''
result = list(set([fq_name(self.partition, x) for x in self._values['security_log_profiles']]))
result.sort()
return result
@property
def sec_nat_use_device_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'use_device_policy' not in self._values['security_nat_policy']:
return None
return self._values['security_nat_policy']['use_device_policy']
@property
def sec_nat_use_rd_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'use_route_domain_policy' not in self._values['security_nat_policy']:
return None
return self._values['security_nat_policy']['use_route_domain_policy']
@property
def sec_nat_policy(self):
if self._values['security_nat_policy'] is None:
return None
if 'policy' not in self._values['security_nat_policy']:
return None
if self._values['security_nat_policy']['policy'] == '':
return ''
return fq_name(self.partition, self._values['security_nat_policy']['policy'])
@property
def security_nat_policy(self):
result = dict()
if self.sec_nat_policy:
result['policy'] = self.sec_nat_policy
if self.sec_nat_use_device_policy is not None:
result['use_device_policy'] = self.sec_nat_use_device_policy
if self.sec_nat_use_rd_policy is not None:
result['use_route_domain_policy'] = self.sec_nat_use_rd_policy
if result:
return result
return None
class Changes(Parameters):
pass
class UsableChanges(Changes):
@property
def destination(self):
if self._values['type'] == 'internal':
return None
return self._values['destination']
@property
def vlans(self):
if self._values['vlans'] is None:
return None
elif len(self._values['vlans']) == 0:
return []
elif any(x for x in self._values['vlans'] if x.lower() in ['/common/all', 'all']):
return []
return self._values['vlans']
@property
def irules(self):
if self._values['irules'] is None:
return None
if self._values['type'] in ['dhcp', 'stateless', 'reject', 'internal']:
return None
return self._values['irules']
@property
def policies(self):
if self._values['policies'] is None:
return None
if self._values['type'] in ['dhcp', 'reject', 'internal']:
return None
return self._values['policies']
@property
def default_persistence_profile(self):
if self._values['default_persistence_profile'] is None:
return None
if self._values['type'] == 'dhcp':
return None
if not self._values['default_persistence_profile']:
return []
return [self._values['default_persistence_profile']]
@property
def fallback_persistence_profile(self):
if self._values['fallback_persistence_profile'] is None:
return None
if self._values['type'] == 'dhcp':
return None
return self._values['fallback_persistence_profile']
@property
def snat(self):
if self._values['snat'] is None:
return None
if self._values['type'] in ['dhcp', 'reject', 'internal']:
return None
return self._values['snat']
@property
def dhcpRelay(self):
if self._values['type'] == 'dhcp':
return True
@property
def reject(self):
if self._values['type'] == 'reject':
return True
@property
def stateless(self):
if self._values['type'] == 'stateless':
return True
@property
def internal(self):
if self._values['type'] == 'internal':
return True
@property
def ipForward(self):
if self._values['type'] == 'forwarding-ip':
return True
@property
def l2Forward(self):
if self._values['type'] == 'forwarding-l2':
return True
@property
def security_log_profiles(self):
if self._values['security_log_profiles'] is None:
return None
mutex = ('Log all requests', 'Log illegal requests')
if len([x for x in self._values['security_log_profiles'] if x.endswith(mutex)]) >= 2:
raise F5ModuleError(
"The 'Log all requests' and 'Log illegal requests' are mutually exclusive."
)
return self._values['security_log_profiles']
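    # For example (assumed values), a list containing both
    # '/Common/Log all requests' and '/Common/Log illegal requests' trips the
    # endswith() check above and raises the mutual-exclusion error.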
@property
def security_nat_policy(self):
if self._values['security_nat_policy'] is None:
return None
result = dict()
sec = self._values['security_nat_policy']
if 'policy' in sec:
result['policy'] = sec['policy']
if 'use_device_policy' in sec:
result['useDevicePolicy'] = 'yes' if sec['use_device_policy'] else 'no'
if 'use_route_domain_policy' in sec:
result['useRouteDomainPolicy'] = 'yes' if sec['use_route_domain_policy'] else 'no'
if result:
return result
return None
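    # Sketch (assumed input): {'policy': '/Common/p1', 'use_device_policy': True}
    # -> {'policy': '/Common/p1', 'useDevicePolicy': 'yes'}, matching the
    # yes/no strings the REST API expects.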
class ReportableChanges(Changes):
@property
def snat(self):
if self._values['snat'] is None:
return None
result = self._values['snat'].get('type', None)
if result == 'automap':
return 'Automap'
elif result == 'none':
return 'none'
result = self._values['snat'].get('pool', None)
return result
@property
def destination(self):
params = ApiParameters(params=dict(destination=self._values['destination']))
result = params.destination_tuple.ip
return result
@property
def port(self):
params = ApiParameters(params=dict(destination=self._values['destination']))
result = params.destination_tuple.port
return result
@property
def default_persistence_profile(self):
if len(self._values['default_persistence_profile']) == 0:
return []
profile = self._values['default_persistence_profile'][0]
result = '/{0}/{1}'.format(profile['partition'], profile['name'])
return result
@property
def policies(self):
if len(self._values['policies']) == 0:
return []
result = ['/{0}/{1}'.format(x['partition'], x['name']) for x in self._values['policies']]
return result
@property
def enabled_vlans(self):
if len(self._values['vlans']) == 0 and self._values['vlans_disabled'] is True:
return 'all'
elif len(self._values['vlans']) > 0 and self._values['vlans_enabled'] is True:
return self._values['vlans']
@property
def disabled_vlans(self):
if len(self._values['vlans']) > 0 and self._values['vlans_disabled'] is True:
return self._values['vlans']
@property
def address_translation(self):
if self._values['address_translation'] == 'enabled':
return True
return False
@property
def port_translation(self):
if self._values['port_translation'] == 'enabled':
return True
return False
@property
def ip_protocol(self):
if self._values['ip_protocol'] is None:
return None
try:
int(self._values['ip_protocol'])
except ValueError:
return self._values['ip_protocol']
protocol = next((x[0] for x in self.ip_protocols_map if x[1] == self._values['ip_protocol']), None)
if protocol:
return protocol
return self._values['ip_protocol']
class VirtualServerValidator(object):
def __init__(self, module=None, client=None, want=None, have=None):
self.have = have if have else ApiParameters()
self.want = want if want else ModuleParameters()
self.client = client
self.module = module
def check_update(self):
# TODO(Remove in Ansible 2.9)
self._override_standard_type_from_profiles()
# Regular checks
self._override_port_by_type()
self._override_protocol_by_type()
self._verify_type_has_correct_profiles()
self._verify_default_persistence_profile_for_type()
self._verify_fallback_persistence_profile_for_type()
self._update_persistence_profile()
self._ensure_server_type_supports_vlans()
self._verify_type_has_correct_ip_protocol()
# For different server types
self._verify_dhcp_profile()
self._verify_fastl4_profile()
self._verify_stateless_profile()
def check_create(self):
# TODO(Remove in Ansible 2.9)
self._override_standard_type_from_profiles()
# Regular checks
self._set_default_ip_protocol()
self._set_default_profiles()
self._override_port_by_type()
self._override_protocol_by_type()
self._verify_type_has_correct_profiles()
self._verify_default_persistence_profile_for_type()
self._verify_fallback_persistence_profile_for_type()
self._update_persistence_profile()
self._verify_virtual_has_required_parameters()
self._ensure_server_type_supports_vlans()
self._override_vlans_if_all_specified()
self._check_source_and_destination_match()
self._verify_type_has_correct_ip_protocol()
self._verify_minimum_profile()
# For different server types
self._verify_dhcp_profile()
self._verify_fastl4_profile()
self._verify_stateless_profile_on_create()
def _ensure_server_type_supports_vlans(self):
"""Verifies the specified server type supports VLANs
A select number of server types do not support VLANs. This method
checks to see if the specified types were provided along with VLANs.
If they were, the module will raise an error informing the user that
they need to either remove the VLANs, or, change the ``type``.
Returns:
None: Returned if no VLANs are specified.
Raises:
F5ModuleError: Raised if the server type conflicts with VLANs.
"""
if self.want.enabled_vlans is None:
return
if self.want.type == 'internal':
raise F5ModuleError(
"The 'internal' server type does not support VLANs."
)
def _override_vlans_if_all_specified(self):
"""Overrides any specified VLANs if "all" VLANs are specified
The special setting "all VLANs" in a BIG-IP requires that no other VLANs
be specified. If you specify any number of VLANs, AND include the "all"
VLAN, this method will erase all of the other VLANs and only return the
"all" VLAN.
"""
all_vlans = ['/common/all', 'all']
if self.want.enabled_vlans is not None:
if any(x for x in self.want.enabled_vlans if x.lower() in all_vlans):
self.want.update(
dict(
enabled_vlans=[],
vlans_disabled=True,
vlans_enabled=False
)
)
def _override_port_by_type(self):
if self.want.type == 'dhcp':
self.want.update({'port': 67})
elif self.want.type == 'internal':
self.want.update({'port': 0})
def _override_protocol_by_type(self):
if self.want.type in ['stateless']:
self.want.update({'ip_protocol': 17})
def _override_standard_type_from_profiles(self):
"""Overrides a standard virtual server type given the specified profiles
For legacy purposes, this module will do some basic overriding of the default
``type`` parameter to support cases where changing the ``type`` only requires
specifying a different set of profiles.
Ideally, ``type`` would always be specified, but in the past, this module only
supported an implicit "standard" type. Module users would specify some different
types of profiles and this would change the type...in some circumstances.
Now that this module supports a ``type`` param, the implicit ``type`` changing
that used to happen is technically deprecated (and will be warned on). Users
should always specify a ``type`` now, or, accept the default standard type.
Returns:
void
"""
if self.want.type == 'standard':
if self.want.has_fastl4_profiles:
self.want.update({'type': 'performance-l4'})
self.module.deprecate(
msg="Specifying 'performance-l4' profiles on a 'standard' type is deprecated and will be removed.",
version='2.10'
)
if self.want.has_fasthttp_profiles:
self.want.update({'type': 'performance-http'})
self.module.deprecate(
msg="Specifying 'performance-http' profiles on a 'standard' type is deprecated and will be removed.",
version='2.10'
)
if self.want.has_message_routing_profiles:
self.want.update({'type': 'message-routing'})
self.module.deprecate(
msg="Specifying 'message-routing' profiles on a 'standard' type is deprecated and will be removed.",
version='2.10'
)
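# A sketch of the legacy override (assumes a ``want`` whose profile list
# includes a fastL4 profile, so ``has_fastl4_profiles`` is truthy):
#
#   self.want.update(dict(type='standard'))
#   self._override_standard_type_from_profiles()
#   # self.want.type == 'performance-l4', plus a deprecation warning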
def _check_source_and_destination_match(self):
"""Verify that destination and source are of the same IP version
BIG-IP does not allow for mixing of the IP versions for destination and
source addresses. For example, a destination IPv6 address cannot be
associated with a source IPv4 address.
This method checks that you specified the same IP version for these
parameters
Raises:
F5ModuleError: Raised when the IP versions of source and destination differ.
"""
if self.want.source and self.want.destination:
want = ip_interface(u'{0}'.format(self.want.source))
have = ip_interface(u'{0}'.format(self.want.destination_tuple.ip))
if want.version != have.version:
raise F5ModuleError(
"The source and destination addresses for the virtual server must be be the same type (IPv4 or IPv6)."
)
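# For example (hypothetical addresses), an IPv4 source paired with an IPv6
# destination fails this validation, because ip_interface reports different
# IP versions for the two values:
#
#   ip_interface(u'10.0.0.1/32').version      # 4
#   ip_interface(u'2001:db8::1/128').version  # 6 -> F5ModuleError is raised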
def _verify_type_has_correct_ip_protocol(self):
if self.want.ip_protocol is None:
return
if self.want.type == 'standard':
# Standard supports
#
# - tcp
# - udp
# - sctp
# - ipsec-ah
# - ipsec-esp
# - any
if self.want.ip_protocol not in [6, 17, 132, 51, 50, 'any']:
raise F5ModuleError(
"The 'standard' server type does not support the specified 'ip_protocol'."
)
elif self.want.type == 'performance-http':
# Perf HTTP supports
#
# - tcp
if self.want.ip_protocol not in [6]:
raise F5ModuleError(
"The 'performance-http' server type does not support the specified 'ip_protocol'."
)
elif self.want.type == 'stateless':
# Stateless supports
#
# - udp
if self.want.ip_protocol not in [17]:
raise F5ModuleError(
"The 'stateless' server type does not support the specified 'ip_protocol'."
)
elif self.want.type == 'dhcp':
# DHCP supports no IP protocols; reaching this branch means one was specified.
raise F5ModuleError(
"The 'dhcp' server type does not support an 'ip_protocol'."
)
elif self.want.type == 'internal':
# Internal supports
#
# - tcp
# - udp
if self.want.ip_protocol not in [6, 17]:
raise F5ModuleError(
"The 'internal' server type does not support the specified 'ip_protocol'."
)
elif self.want.type == 'message-routing':
# Message Routing supports
#
# - tcp
# - udp
# - sctp
# - all protocols
if self.want.ip_protocol not in [6, 17, 132, 'all', 'any']:
raise F5ModuleError(
"The 'message-routing' server type does not support the specified 'ip_protocol'."
)
def _verify_virtual_has_required_parameters(self):
"""Verify that the virtual has required parameters
Virtual servers require several parameters that are not necessarily required
when updating the virtual. This method will check for the required params
upon creation.
Ansible supports ``default`` values in an Argument Spec, but those defaults
apply to all operations, including create, update, and delete. Since users are
not required to always specify these parameters, we cannot use Ansible's
facility. If we did, users would be required to provide them when, for example,
they attempted to delete a virtual (even though they are not required for
deleting a virtual).
Raises:
F5ModuleError: Raised when the user did not specify required parameters.
"""
required_resources = ['destination', 'port']
if self.want.type == 'internal':
return
if all(getattr(self.want, v) is None for v in required_resources):
raise F5ModuleError(
"You must specify both of " + ', '.join(required_resources)
)
def _verify_default_persistence_profile_for_type(self):
"""Verify that the server type supports default persistence profiles
Verifies that the specified server type supports default persistence profiles.
Some virtual servers do not support these types of profiles. This method will
check that the type actually supports what you are sending it.
Types that do not, at this time, support default persistence profiles include:
* dhcp
* message-routing
* reject
* stateless
* forwarding-ip
* forwarding-l2
Raises:
F5ModuleError: Raised if server type does not support default persistence profiles.
"""
default_profile_not_allowed = [
'dhcp', 'message-routing', 'reject', 'stateless', 'forwarding-ip', 'forwarding-l2'
]
if self.want.default_persistence_profile is None:
return
if self.want.type in default_profile_not_allowed:
raise F5ModuleError(
"The '{0}' server type does not support a 'default_persistence_profile'".format(self.want.type)
)
def _verify_fallback_persistence_profile_for_type(self):
"""Verify that the server type supports fallback persistence profiles
Verifies that the specified server type supports fallback persistence profiles.
Some virtual servers do not support these types of profiles. This method will
check that the type actually supports what you are sending it.
Types that do not, at this time, support fallback persistence profiles include:
* dhcp
* message-routing
* reject
* stateless
* forwarding-ip
* forwarding-l2
* performance-http
Raises:
F5ModuleError: Raised if server type does not support fallback persistence profiles.
"""
fallback_profile_not_allowed = [
'dhcp', 'message-routing', 'reject', 'stateless', 'forwarding-ip', 'forwarding-l2',
'performance-http'
]
if self.want.fallback_persistence_profile is None:
return
if self.want.type in fallback_profile_not_allowed:
raise F5ModuleError(
"The '{0}' server type does not support a 'fallback_persistence_profile'".format(self.want.type)
)
def _update_persistence_profile(self):
# This must be changed back to a list to make a valid REST API
# value. The module manipulates this as a normal dictionary
if self.want.default_persistence_profile is not None:
self.want.update({'default_persistence_profile': self.want.default_persistence_profile})
def _verify_type_has_correct_profiles(self):
"""Verify that specified server type does not include forbidden profiles
The ``type`` of the server determines the types of profiles that it accepts. This
method checks that the server ``type`` that you specified is indeed one that can
accept the profiles that you specified.
The common failure situations are:
* ``standard`` types that include ``fasthttp``, ``fastl4``, or ``message-routing`` profiles
* ``performance-http`` types that are missing a ``fasthttp`` profile
* ``performance-l4`` types that are missing a ``fastl4`` profile
* ``message-routing`` types that are missing ``diameter`` or ``sip`` profiles
Raises:
F5ModuleError: Raised when a validation check fails.
"""
if self.want.type == 'standard':
if self.want.has_fasthttp_profiles:
raise F5ModuleError("A 'standard' type may not have 'fasthttp' profiles.")
if self.want.has_fastl4_profiles:
raise F5ModuleError("A 'standard' type may not have 'fastl4' profiles.")
if self.want.has_message_routing_profiles:
raise F5ModuleError("A 'standard' type may not have 'message-routing' profiles.")
elif self.want.type == 'performance-http':
if not self.want.has_fasthttp_profiles:
raise F5ModuleError("A 'performance-http' type must have at least one 'fasthttp' profile.")
elif self.want.type == 'performance-l4':
if not self.want.has_fastl4_profiles:
raise F5ModuleError("A 'performance-l4' type must have at least one 'fastl4' profile.")
elif self.want.type == 'message-routing':
if not self.want.has_message_routing_profiles:
raise F5ModuleError("A 'message-routing' type must have either a 'sip' or 'diameter' profile.")
def _set_default_ip_protocol(self):
if self.want.type == 'dhcp':
return
if self.want.ip_protocol is None:
self.want.update({'ip_protocol': 6})
def _set_default_profiles(self):
if self.want.type == 'standard':
if not self.want.profiles:
# Sets default profiles when creating a new standard virtual.
#
# It appears that if no profiles are deliberately specified, then under
# certain circumstances, the server type will default to ``performance-l4``.
#
# It's unclear what these circumstances are, but they are met in issue 00093.
# If this block of profile-setting code is removed, the virtual server's
# type will change to performance-l4 for some reason.
#
if self.want.ip_protocol == 6:
self.want.update({'profiles': ['tcp']})
if self.want.ip_protocol == 17:
self.want.update({'profiles': ['udp']})
if self.want.ip_protocol == 132:
self.want.update({'profiles': ['sctp']})
def _verify_minimum_profile(self):
if self.want.profiles:
return None
if self.want.type == 'internal' and self.want.profiles == '':
raise F5ModuleError(
"An 'internal' server must have at least one profile relevant to its 'ip_protocol'. "
"For example, 'tcp', 'udp', or variations of those."
)
def _verify_dhcp_profile(self):
if self.want.type != 'dhcp':
return
if self.want.profiles is None:
return
have = set(self.read_dhcp_profiles_from_device())
want = set([x['fullPath'] for x in self.want.profiles])
if have.intersection(want):
return True
raise F5ModuleError(
"A dhcp profile, such as 'dhcpv4', or 'dhcpv6' must be specified when 'type' is 'dhcp'."
)
def _verify_fastl4_profile(self):
if self.want.type != 'performance-l4':
return
if self.want.profiles is None:
return
have = set(self.read_fastl4_profiles_from_device())
want = set([x['fullPath'] for x in self.want.profiles])
if have.intersection(want):
return True
raise F5ModuleError(
"A performance-l4 profile, such as 'fastL4', must be specified when 'type' is 'performance-l4'."
)
def _verify_fasthttp_profile(self):
if self.want.type != 'performance-http':
return
if self.want.profiles is None:
return
have = set(self.read_fasthttp_profiles_from_device())
want = set([x['fullPath'] for x in self.want.profiles])
if have.intersection(want):
return True
raise F5ModuleError(
"A performance-http profile, such as 'fasthttp', must be specified when 'type' is 'performance-http'."
)
def _verify_stateless_profile_on_create(self):
if self.want.type != 'stateless':
return
result = self._verify_stateless_profile()
if result is None:
raise F5ModuleError(
"A udp profile, must be specified when 'type' is 'stateless'."
)
def _verify_stateless_profile(self):
if self.want.type != 'stateless':
return
if self.want.profiles is None:
return
have = set(self.read_udp_profiles_from_device())
want = set([x['fullPath'] for x in self.want.profiles])
if have.intersection(want):
return True
raise F5ModuleError(
"A udp profile, must be specified when 'type' is 'stateless'."
)
def read_dhcp_profiles_from_device(self):
result = []
result += self.read_dhcpv4_profiles_from_device()
result += self.read_dhcpv6_profiles_from_device()
return result
def read_dhcpv4_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/dhcpv4/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [fq_name(self.want.partition, x['name']) for x in response['items']]
return result
def read_dhcpv6_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/dhcpv6/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [fq_name(self.want.partition, x['name']) for x in response['items']]
return result
def read_fastl4_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/fastl4/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [fq_name(self.want.partition, x['name']) for x in response['items']]
return result
def read_fasthttp_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/fasthttp/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [fq_name(self.want.partition, x['name']) for x in response['items']]
return result
def read_udp_profiles_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/profile/udp/".format(
self.client.provider['server'],
self.client.provider['server_port'],
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
result = [fq_name(self.want.partition, x['name']) for x in response['items']]
return result
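# The read_*_profiles_from_device helpers above all follow the same pattern:
# GET a profile collection, then map each item name to its fully-qualified
# path. A sketch of that mapping, assuming a hypothetical REST payload:
#
#   response = {'items': [{'name': 'dhcpv4'}, {'name': 'dhcpv4-custom'}]}
#   [fq_name('Common', x['name']) for x in response['items']]
#   # -> ['/Common/dhcpv4', '/Common/dhcpv4-custom']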
class Difference(object):
def __init__(self, want, have=None):
self.have = have
self.want = want
def compare(self, param):
try:
result = getattr(self, param)
return result
except AttributeError:
result = self.__default(param)
return result
def __default(self, param):
attr1 = getattr(self.want, param)
try:
attr2 = getattr(self.have, param)
if attr1 != attr2:
return attr1
except AttributeError:
return attr1
def to_tuple(self, items):
result = []
for x in items:
tmp = [(str(k), str(v)) for k, v in iteritems(x)]
result += tmp
return result
def _diff_complex_items(self, want, have):
if want == [] and have is None:
return None
if want is None:
return None
w = self.to_tuple(want)
h = self.to_tuple(have)
if set(w).issubset(set(h)):
return None
else:
return want
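# A sketch of how to_tuple flattens complex items for comparison
# (hypothetical metadata entries):
#
#   want = [dict(key='owner', value='ansible')]
#   have = [dict(key='owner', value='ansible'), dict(key='env', value='prod')]
#   set(self.to_tuple(want)).issubset(set(self.to_tuple(have)))
#   # True -> _diff_complex_items reports no change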
def _update_vlan_status(self, result):
if self.want.vlans_disabled is not None:
if self.want.vlans_disabled != self.have.vlans_disabled:
result['vlans_disabled'] = self.want.vlans_disabled
result['vlans_enabled'] = not self.want.vlans_disabled
elif self.want.vlans_enabled is not None:
if any(x.lower().endswith('/all') for x in self.want.vlans):
if self.have.vlans_enabled is True:
result['vlans_disabled'] = True
result['vlans_enabled'] = False
elif self.want.vlans_enabled != self.have.vlans_enabled:
result['vlans_disabled'] = not self.want.vlans_enabled
result['vlans_enabled'] = self.want.vlans_enabled
@property
def destination(self):
# The internal type does not support the 'destination' parameter, so it is ignored.
if self.want.type == 'internal':
return
addr_tuple = [self.want.destination, self.want.port, self.want.route_domain]
if all(x is None for x in addr_tuple):
return None
have = self.have.destination_tuple
if self.want.port is None:
self.want.update({'port': have.port})
if self.want.route_domain is None:
self.want.update({'route_domain': have.route_domain})
if self.want.destination_tuple.ip is None:
address = have.ip
else:
address = self.want.destination_tuple.ip
want = self.want._format_destination(address, self.want.port, self.want.route_domain)
if want != self.have.destination:
return fq_name(self.want.partition, want)
@property
def source(self):
if self.want.source is None:
return None
want = ip_interface(u'{0}'.format(self.want.source))
have = ip_interface(u'{0}'.format(self.have.destination_tuple.ip))
if want.version != have.version:
raise F5ModuleError(
"The source and destination addresses for the virtual server must be be the same type (IPv4 or IPv6)."
)
if self.want.source != self.have.source:
return self.want.source
@property
def vlans(self):
if self.want.vlans is None:
return None
elif self.want.vlans == [] and self.have.vlans is None:
return None
elif self.want.vlans == self.have.vlans:
return None
# Specifically looking for /all because the vlans return value will be
# an FQDN list. This means that "all" will be returned as "/partition/all",
# ex, /Common/all.
#
# We do not want to accidentally match values that would end with the word
# "all", like "vlansall". Therefore we look for the forward slash because this
# is a path delimiter.
elif any(x.lower().endswith('/all') for x in self.want.vlans):
if self.have.vlans is None:
return None
else:
return []
else:
return self.want.vlans
@property
def enabled_vlans(self):
return self.vlan_status
@property
def disabled_vlans(self):
return self.vlan_status
@property
def vlan_status(self):
result = dict()
vlans = self.vlans
if vlans is not None:
result['vlans'] = vlans
self._update_vlan_status(result)
return result
@property
def port(self):
result = self.destination
if result is not None:
return dict(
destination=result
)
@property
def profiles(self):
if self.want.profiles is None:
return None
if self.want.profiles == '' and len(self.have.profiles) > 0:
have = set([(p['name'], p['context'], p['fullPath']) for p in self.have.profiles])
if len(self.have.profiles) == 1:
if not any(x[0] in ['tcp', 'udp', 'sctp'] for x in have):
return []
else:
return None
else:
return []
if self.want.profiles == '' and len(self.have.profiles) == 0:
return None
want = set([(p['name'], p['context'], p['fullPath']) for p in self.want.profiles])
have = set([(p['name'], p['context'], p['fullPath']) for p in self.have.profiles])
if len(have) == 0:
return self.want.profiles
elif len(have) == 1:
if want != have:
return self.want.profiles
else:
if not any(x[0] == 'tcp' for x in want):
if self.want.type != 'stateless':
have = set([x for x in have if x[0] != 'tcp'])
if not any(x[0] == 'udp' for x in want):
have = set([x for x in have if x[0] != 'udp'])
if not any(x[0] == 'sctp' for x in want):
if self.want.type != 'stateless':
have = set([x for x in have if x[0] != 'sctp'])
want = set([(p[2], p[1]) for p in want])
have = set([(p[2], p[1]) for p in have])
if want != have:
return self.want.profiles
@property
def ip_protocol(self):
if self.want.ip_protocol != self.have.ip_protocol:
return self.want.ip_protocol
@property
def fallback_persistence_profile(self):
if self.want.fallback_persistence_profile is None:
return None
if self.want.fallback_persistence_profile == '' and self.have.fallback_persistence_profile is not None:
return ""
if self.want.fallback_persistence_profile == '' and self.have.fallback_persistence_profile is None:
return None
if self.want.fallback_persistence_profile != self.have.fallback_persistence_profile:
return self.want.fallback_persistence_profile
@property
def default_persistence_profile(self):
if self.want.default_persistence_profile is None:
return None
if self.want.default_persistence_profile == '' and self.have.default_persistence_profile is not None:
return []
if self.want.default_persistence_profile == '' and self.have.default_persistence_profile is None:
return None
if self.have.default_persistence_profile is None:
return dict(
default_persistence_profile=self.want.default_persistence_profile
)
w_name = self.want.default_persistence_profile.get('name', None)
w_partition = self.want.default_persistence_profile.get('partition', None)
h_name = self.have.default_persistence_profile.get('name', None)
h_partition = self.have.default_persistence_profile.get('partition', None)
if w_name != h_name or w_partition != h_partition:
return dict(
default_persistence_profile=self.want.default_persistence_profile
)
@property
def policies(self):
if self.want.policies is None:
return None
if self.want.policies == '' and self.have.policies is None:
return None
if self.want.policies == '' and len(self.have.policies) > 0:
return []
if not self.have.policies:
return self.want.policies
want = set([(p['name'], p['partition']) for p in self.want.policies])
have = set([(p['name'], p['partition']) for p in self.have.policies])
if not want == have:
return self.want.policies
@property
def snat(self):
if self.want.snat is None:
return None
if self.want.snat['type'] != self.have.snat['type']:
result = dict(snat=self.want.snat)
return result
if self.want.snat.get('pool', None) is None:
return None
if self.want.snat['pool'] != self.have.snat['pool']:
result = dict(snat=self.want.snat)
return result
@property
def enabled(self):
if self.want.state == 'enabled' and self.have.disabled:
result = dict(
enabled=True,
disabled=False
)
return result
elif self.want.state == 'disabled' and self.have.enabled:
result = dict(
enabled=False,
disabled=True
)
return result
@property
def irules(self):
if self.want.irules is None:
return None
if self.want.irules == '' and len(self.have.irules) > 0:
return []
if self.want.irules == '' and len(self.have.irules) == 0:
return None
if sorted(set(self.want.irules)) != sorted(set(self.have.irules)):
return self.want.irules
@property
def pool(self):
if self.want.pool is None:
return None
if self.want.pool == '' and self.have.pool is not None:
return ""
if self.want.pool == '' and self.have.pool is None:
return None
if self.want.pool != self.have.pool:
return self.want.pool
@property
def metadata(self):
if self.want.metadata is None:
return None
elif len(self.want.metadata) == 0 and self.have.metadata is None:
return None
elif len(self.want.metadata) == 0:
return []
elif self.have.metadata is None:
return self.want.metadata
result = self._diff_complex_items(self.want.metadata, self.have.metadata)
return result
@property
def type(self):
if self.want.type != self.have.type:
raise F5ModuleError(
"Changing the 'type' parameter is not supported."
)
@property
def security_log_profiles(self):
result = cmp_simple_list(self.want.security_log_profiles, self.have.security_log_profiles)
return result
@property
def security_nat_policy(self):
result = dict()
if self.want.sec_nat_use_device_policy is not None:
if self.want.sec_nat_use_device_policy != self.have.sec_nat_use_device_policy:
result['use_device_policy'] = self.want.sec_nat_use_device_policy
if self.want.sec_nat_use_rd_policy is not None:
if self.want.sec_nat_use_rd_policy != self.have.sec_nat_use_rd_policy:
result['use_route_domain_policy'] = self.want.sec_nat_use_rd_policy
if self.want.sec_nat_policy is not None:
if self.want.sec_nat_policy == '' and self.have.sec_nat_policy is None:
pass
elif self.want.sec_nat_policy != self.have.sec_nat_policy:
result['policy'] = self.want.sec_nat_policy
if result:
return dict(security_nat_policy=result)
class ModuleManager(object):
def __init__(self, *args, **kwargs):
self.module = kwargs.get('module', None)
self.client = kwargs.get('client', None)
self.have = ApiParameters(client=self.client)
self.want = ModuleParameters(client=self.client, params=self.module.params)
self.changes = UsableChanges()
def exec_module(self):
changed = False
result = dict()
state = self.want.state
if state in ['present', 'enabled', 'disabled']:
changed = self.present()
elif state == "absent":
changed = self.absent()
reportable = ReportableChanges(params=self.changes.to_return())
changes = reportable.to_return()
result.update(**changes)
result.update(dict(changed=changed))
return result
def present(self):
if self.exists():
return self.update()
else:
return self.create()
def absent(self):
if self.exists():
return self.remove()
return False
def update(self):
self.have = self.read_current_from_device()
validator = VirtualServerValidator(
module=self.module, client=self.client, have=self.have, want=self.want
)
validator.check_update()
if not self.should_update():
return False
if self.module.check_mode:
return True
self.update_on_device()
return True
def should_update(self):
result = self._update_changed_options()
if result:
return True
return False
def remove(self):
if self.module.check_mode:
return True
self.remove_from_device()
if self.exists():
raise F5ModuleError("Failed to delete the resource")
return True
def get_reportable_changes(self):
result = ReportableChanges(params=self.changes.to_return())
return result
def _set_changed_options(self):
changed = {}
for key in Parameters.returnables:
if getattr(self.want, key) is not None:
changed[key] = getattr(self.want, key)
if changed:
self.changes = UsableChanges(params=changed)
def _update_changed_options(self):
diff = Difference(self.want, self.have)
updatables = Parameters.updatables
changed = dict()
for k in updatables:
change = diff.compare(k)
if change is None:
continue
else:
if isinstance(change, dict):
changed.update(change)
else:
changed[k] = change
if changed:
self.changes = UsableChanges(params=changed)
return True
return False
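# A sketch of the diff dispatch (parameter names hypothetical): each
# updatable is routed through Difference.compare(), which prefers a
# same-named property on Difference and falls back to simple inequality:
#
#   diff = Difference(self.want, self.have)
#   diff.compare('pool')         # handled by the Difference.pool property
#   diff.compare('description')  # no property -> __default comparison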
def exists(self): # lgtm [py/similar-function]
uri = "https://{0}:{1}/mgmt/tm/ltm/virtual/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError:
return False
if resp.status == 404 or ('code' in response and response['code'] == 404):
return False
return True
def create(self):
validator = VirtualServerValidator(
module=self.module, client=self.client, have=self.have, want=self.want
)
validator.check_create()
self._set_changed_options()
if self.module.check_mode:
return True
self.create_on_device()
return True
def update_on_device(self):
params = self.changes.api_params()
# Mark the resource as managed by Ansible.
params = mark_managed_by(self.module.ansible_version, params)
uri = "https://{0}:{1}/mgmt/tm/ltm/virtual/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.patch(uri, json=params)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
def read_current_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/virtual/{2}?expandSubcollections=true".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
resp = self.client.api.get(uri)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] == 400:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
return ApiParameters(params=response, client=self.client)
def create_on_device(self):
params = self.changes.api_params()
params['name'] = self.want.name
params['partition'] = self.want.partition
# Mark the resource as managed by Ansible.
params = mark_managed_by(self.module.ansible_version, params)
uri = "https://{0}:{1}/mgmt/tm/ltm/virtual/".format(
self.client.provider['server'],
self.client.provider['server_port']
)
resp = self.client.api.post(uri, json=params)
try:
response = resp.json()
except ValueError as ex:
raise F5ModuleError(str(ex))
if 'code' in response and response['code'] in [400, 403]:
if 'message' in response:
raise F5ModuleError(response['message'])
else:
raise F5ModuleError(resp.content)
def remove_from_device(self):
uri = "https://{0}:{1}/mgmt/tm/ltm/virtual/{2}".format(
self.client.provider['server'],
self.client.provider['server_port'],
transform_name(self.want.partition, self.want.name)
)
response = self.client.api.delete(uri)
if response.status == 200:
return True
raise F5ModuleError(response.content)
class ArgumentSpec(object):
def __init__(self):
self.supports_check_mode = True
argument_spec = dict(
state=dict(
default='present',
choices=['present', 'absent', 'disabled', 'enabled']
),
name=dict(
required=True,
aliases=['vs']
),
destination=dict(
aliases=['address', 'ip']
),
port=dict(),
profiles=dict(
type='list',
aliases=['all_profiles'],
options=dict(
name=dict(),
context=dict(default='all', choices=['all', 'server-side', 'client-side'])
)
),
policies=dict(
type='list',
aliases=['all_policies']
),
irules=dict(
type='list',
aliases=['all_rules']
),
enabled_vlans=dict(
type='list'
),
disabled_vlans=dict(
type='list'
),
pool=dict(),
description=dict(),
snat=dict(),
default_persistence_profile=dict(),
fallback_persistence_profile=dict(),
source=dict(),
metadata=dict(type='raw'),
partition=dict(
default='Common',
fallback=(env_fallback, ['F5_PARTITION'])
),
address_translation=dict(type='bool'),
port_translation=dict(type='bool'),
ip_protocol=dict(
choices=[
'ah', 'any', 'bna', 'esp', 'etherip', 'gre', 'icmp', 'ipencap', 'ipv6',
'ipv6-auth', 'ipv6-crypt', 'ipv6-icmp', 'isp-ip', 'mux', 'ospf',
'sctp', 'tcp', 'udp', 'udplite'
]
),
type=dict(
default='standard',
choices=[
'standard', 'forwarding-ip', 'forwarding-l2', 'internal', 'message-routing',
'performance-http', 'performance-l4', 'reject', 'stateless', 'dhcp'
]
),
firewall_staged_policy=dict(),
firewall_enforced_policy=dict(),
security_log_profiles=dict(type='list'),
security_nat_policy=dict(
type='dict',
options=dict(
policy=dict(),
use_device_policy=dict(type='bool'),
use_route_domain_policy=dict(type='bool')
)
)
)
self.argument_spec = {}
self.argument_spec.update(f5_argument_spec)
self.argument_spec.update(argument_spec)
self.mutually_exclusive = [
['enabled_vlans', 'disabled_vlans']
]
def main():
spec = ArgumentSpec()
module = AnsibleModule(
argument_spec=spec.argument_spec,
supports_check_mode=spec.supports_check_mode,
mutually_exclusive=spec.mutually_exclusive
)
try:
client = F5RestClient(**module.params)
mm = ModuleManager(module=module, client=client)
results = mm.exec_module()
cleanup_tokens(client)
exit_json(module, results, client)
except F5ModuleError as ex:
cleanup_tokens(client)
fail_json(module, ex, client)
if __name__ == '__main__':
main()
|
from ansible.module_utils.ec2 import camel_dict_to_snake_dict, get_ec2_security_group_ids_from_names, \
ansible_dict_to_boto3_tag_list, boto3_tag_list_to_ansible_dict, compare_policies as compare_dicts, \
AWSRetry
from ansible.module_utils.aws.elb_utils import get_elb, get_elb_listener, convert_tg_name_to_arn
try:
from botocore.exceptions import BotoCoreError, ClientError
except ImportError:
pass
import traceback
from copy import deepcopy
class ElasticLoadBalancerV2(object):
def __init__(self, connection, module):
self.connection = connection
self.module = module
self.changed = False
self.new_load_balancer = False
self.scheme = module.params.get("scheme")
self.name = module.params.get("name")
self.subnet_mappings = module.params.get("subnet_mappings")
self.subnets = module.params.get("subnets")
self.deletion_protection = module.params.get("deletion_protection")
self.wait = module.params.get("wait")
if module.params.get("tags") is not None:
self.tags = ansible_dict_to_boto3_tag_list(module.params.get("tags"))
else:
self.tags = None
self.purge_tags = module.params.get("purge_tags")
self.elb = get_elb(connection, module, self.name)
if self.elb is not None:
self.elb_attributes = self.get_elb_attributes()
self.elb['tags'] = self.get_elb_tags()
else:
self.elb_attributes = None
def wait_for_status(self, elb_arn):
"""
Wait for load balancer to reach 'active' status
:param elb_arn: The load balancer ARN
:return:
"""
try:
waiter = self.connection.get_waiter('load_balancer_available')
waiter.wait(LoadBalancerArns=[elb_arn])
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def get_elb_attributes(self):
"""
Get load balancer attributes
:return:
"""
try:
attr_list = AWSRetry.jittered_backoff()(
self.connection.describe_load_balancer_attributes
)(LoadBalancerArn=self.elb['LoadBalancerArn'])['Attributes']
elb_attributes = boto3_tag_list_to_ansible_dict(attr_list)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
# Replace '.' with '_' in attribute key names to make it more Ansibley
return dict((k.replace('.', '_'), v) for k, v in elb_attributes.items())
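# A sketch of the key normalization above, assuming a hypothetical
# attribute returned by describe_load_balancer_attributes:
#
#   attr_list = [{'Key': 'idle_timeout.timeout_seconds', 'Value': '60'}]
#   boto3_tag_list_to_ansible_dict(attr_list)
#   # -> {'idle_timeout.timeout_seconds': '60'}
#   # after the '.' -> '_' replacement: {'idle_timeout_timeout_seconds': '60'}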
def update_elb_attributes(self):
"""
Update the elb_attributes parameter
:return:
"""
self.elb_attributes = self.get_elb_attributes()
def get_elb_tags(self):
"""
Get load balancer tags
:return:
"""
try:
return AWSRetry.jittered_backoff()(
self.connection.describe_tags
)(ResourceArns=[self.elb['LoadBalancerArn']])['TagDescriptions'][0]['Tags']
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def delete_tags(self, tags_to_delete):
"""
Delete elb tags
:return:
"""
try:
AWSRetry.jittered_backoff()(
self.connection.remove_tags
)(ResourceArns=[self.elb['LoadBalancerArn']], TagKeys=tags_to_delete)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def modify_tags(self):
"""
Modify elb tags
:return:
"""
try:
AWSRetry.jittered_backoff()(
self.connection.add_tags
)(ResourceArns=[self.elb['LoadBalancerArn']], Tags=self.tags)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def delete(self):
"""
Delete elb
:return:
"""
try:
AWSRetry.jittered_backoff()(
self.connection.delete_load_balancer
)(LoadBalancerArn=self.elb['LoadBalancerArn'])
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def compare_subnets(self):
"""
Compare user subnets with current ELB subnets
:return: bool True if they match otherwise False
"""
subnet_id_list = []
subnets = []
# Check if we're dealing with subnets or subnet_mappings
if self.subnets is not None:
subnets = self.subnets
if self.subnet_mappings is not None:
# Extract the subnet IDs from the subnet_mappings dicts
for subnet_mapping in self.subnet_mappings:
subnets.append(subnet_mapping['SubnetId'])
for subnet in self.elb['AvailabilityZones']:
subnet_id_list.append(subnet['SubnetId'])
if set(subnet_id_list) != set(subnets):
return False
else:
return True
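# A usage sketch (hypothetical subnet IDs): ordering does not matter
# because both sides are compared as sets:
#
#   self.subnets = ['subnet-aaa', 'subnet-bbb']
#   # ELB reports AvailabilityZones for subnet-bbb and subnet-aaa
#   self.compare_subnets()  # True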
def modify_subnets(self):
"""
Modify elb subnets to match module parameters
:return:
"""
try:
AWSRetry.jittered_backoff()(
self.connection.set_subnets
)(LoadBalancerArn=self.elb['LoadBalancerArn'], Subnets=self.subnets)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def update(self):
"""
Update the elb from AWS
:return:
"""
self.elb = get_elb(self.connection, self.module, self.module.params.get("name"))
self.elb['tags'] = self.get_elb_tags()
class ApplicationLoadBalancer(ElasticLoadBalancerV2):
def __init__(self, connection, connection_ec2, module):
"""
:param connection: boto3 connection
:param module: Ansible module
"""
super(ApplicationLoadBalancer, self).__init__(connection, module)
self.connection_ec2 = connection_ec2
# Ansible module parameters specific to ALBs
self.type = 'application'
if module.params.get('security_groups') is not None:
try:
self.security_groups = AWSRetry.jittered_backoff()(
get_ec2_security_group_ids_from_names
)(module.params.get('security_groups'), self.connection_ec2, boto3=True)
except ValueError as e:
self.module.fail_json(msg=str(e), exception=traceback.format_exc())
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
else:
self.security_groups = module.params.get('security_groups')
self.access_logs_enabled = module.params.get("access_logs_enabled")
self.access_logs_s3_bucket = module.params.get("access_logs_s3_bucket")
self.access_logs_s3_prefix = module.params.get("access_logs_s3_prefix")
self.idle_timeout = module.params.get("idle_timeout")
if self.elb is not None and self.elb['Type'] != 'application':
self.module.fail_json(msg="The load balancer type you are trying to manage is not application. Try elb_network_lb module instead.")
def create_elb(self):
"""
Create a load balancer
:return:
"""
# Required parameters
params = dict()
params['Name'] = self.name
params['Type'] = self.type
# Other parameters
if self.subnets is not None:
params['Subnets'] = self.subnets
if self.security_groups is not None:
params['SecurityGroups'] = self.security_groups
params['Scheme'] = self.scheme
if self.tags:
params['Tags'] = self.tags
try:
self.elb = AWSRetry.jittered_backoff()(self.connection.create_load_balancer)(**params)['LoadBalancers'][0]
self.changed = True
self.new_load_balancer = True
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
if self.wait:
self.wait_for_status(self.elb['LoadBalancerArn'])
def modify_elb_attributes(self):
"""
Update Application ELB attributes if required
:return:
"""
update_attributes = []
if self.access_logs_enabled is not None and self.access_logs_enabled and self.elb_attributes['access_logs_s3_enabled'] != "true":
update_attributes.append({'Key': 'access_logs.s3.enabled', 'Value': "true"})
if self.access_logs_enabled is not None and not self.access_logs_enabled and self.elb_attributes['access_logs_s3_enabled'] != "false":
update_attributes.append({'Key': 'access_logs.s3.enabled', 'Value': "false"})
if self.access_logs_s3_bucket is not None and self.access_logs_s3_bucket != self.elb_attributes['access_logs_s3_bucket']:
update_attributes.append({'Key': 'access_logs.s3.bucket', 'Value': self.access_logs_s3_bucket})
if self.access_logs_s3_prefix is not None and self.access_logs_s3_prefix != self.elb_attributes['access_logs_s3_prefix']:
update_attributes.append({'Key': 'access_logs.s3.prefix', 'Value': self.access_logs_s3_prefix})
if self.deletion_protection is not None and self.deletion_protection and self.elb_attributes['deletion_protection_enabled'] != "true":
update_attributes.append({'Key': 'deletion_protection.enabled', 'Value': "true"})
if self.deletion_protection is not None and not self.deletion_protection and self.elb_attributes['deletion_protection_enabled'] != "false":
update_attributes.append({'Key': 'deletion_protection.enabled', 'Value': "false"})
if self.idle_timeout is not None and str(self.idle_timeout) != self.elb_attributes['idle_timeout_timeout_seconds']:
update_attributes.append({'Key': 'idle_timeout.timeout_seconds', 'Value': str(self.idle_timeout)})
if update_attributes:
try:
AWSRetry.jittered_backoff()(
self.connection.modify_load_balancer_attributes
)(LoadBalancerArn=self.elb['LoadBalancerArn'], Attributes=update_attributes)
self.changed = True
except (BotoCoreError, ClientError) as e:
# Something went wrong setting attributes. If this ELB was created during this task, delete it to leave a consistent state
if self.new_load_balancer:
AWSRetry.jittered_backoff()(self.connection.delete_load_balancer)(LoadBalancerArn=self.elb['LoadBalancerArn'])
self.module.fail_json_aws(e)
def compare_security_groups(self):
"""
Compare user security groups with current ELB security groups
:return: bool True if they match otherwise False
"""
if set(self.elb['SecurityGroups']) != set(self.security_groups):
return False
else:
return True
def modify_security_groups(self):
"""
Modify elb security groups to match module parameters
:return:
"""
try:
AWSRetry.jittered_backoff()(
self.connection.set_security_groups
)(LoadBalancerArn=self.elb['LoadBalancerArn'], SecurityGroups=self.security_groups)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
class NetworkLoadBalancer(ElasticLoadBalancerV2):
def __init__(self, connection, connection_ec2, module):
"""
:param connection: boto3 connection
:param module: Ansible module
"""
super(NetworkLoadBalancer, self).__init__(connection, module)
self.connection_ec2 = connection_ec2
# Ansible module parameters specific to NLBs
self.type = 'network'
self.cross_zone_load_balancing = module.params.get('cross_zone_load_balancing')
if self.elb is not None and self.elb['Type'] != 'network':
self.module.fail_json(msg="The load balancer type you are trying to manage is not network. Try elb_application_lb module instead.")
def create_elb(self):
"""
Create a load balancer
:return:
"""
# Required parameters
params = dict()
params['Name'] = self.name
params['Type'] = self.type
# Other parameters
if self.subnets is not None:
params['Subnets'] = self.subnets
params['Scheme'] = self.scheme
if self.tags:
params['Tags'] = self.tags
try:
self.elb = AWSRetry.jittered_backoff()(self.connection.create_load_balancer)(**params)['LoadBalancers'][0]
self.changed = True
self.new_load_balancer = True
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
if self.wait:
self.wait_for_status(self.elb['LoadBalancerArn'])
def modify_elb_attributes(self):
"""
Update Network ELB attributes if required
:return:
"""
update_attributes = []
if self.cross_zone_load_balancing is not None and str(self.cross_zone_load_balancing).lower() != \
self.elb_attributes['load_balancing_cross_zone_enabled']:
update_attributes.append({'Key': 'load_balancing.cross_zone.enabled', 'Value': str(self.cross_zone_load_balancing).lower()})
if self.deletion_protection is not None and str(self.deletion_protection).lower() != self.elb_attributes['deletion_protection_enabled']:
update_attributes.append({'Key': 'deletion_protection.enabled', 'Value': str(self.deletion_protection).lower()})
if update_attributes:
try:
AWSRetry.jittered_backoff()(
self.connection.modify_load_balancer_attributes
)(LoadBalancerArn=self.elb['LoadBalancerArn'], Attributes=update_attributes)
self.changed = True
except (BotoCoreError, ClientError) as e:
# Something went wrong setting attributes. If this ELB was created during this task, delete it to leave a consistent state
if self.new_load_balancer:
AWSRetry.jittered_backoff()(self.connection.delete_load_balancer)(LoadBalancerArn=self.elb['LoadBalancerArn'])
self.module.fail_json_aws(e)
class ELBListeners(object):
def __init__(self, connection, module, elb_arn):
self.connection = connection
self.module = module
self.elb_arn = elb_arn
listeners = module.params.get("listeners")
if listeners is not None:
# Remove suboption argspec defaults of None from each listener
listeners = [dict((x, listener_dict[x]) for x in listener_dict if listener_dict[x] is not None) for listener_dict in listeners]
self.listeners = self._ensure_listeners_default_action_has_arn(listeners)
self.current_listeners = self._get_elb_listeners()
self.purge_listeners = module.params.get("purge_listeners")
self.changed = False
def update(self):
"""
Update the listeners for the ELB
:return:
"""
self.current_listeners = self._get_elb_listeners()
def _get_elb_listeners(self):
"""
Get ELB listeners
:return:
"""
try:
listener_paginator = self.connection.get_paginator('describe_listeners')
return (AWSRetry.jittered_backoff()(listener_paginator.paginate)(LoadBalancerArn=self.elb_arn).build_full_result())['Listeners']
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def _ensure_listeners_default_action_has_arn(self, listeners):
"""
If a listener DefaultAction has been passed with a Target Group Name instead of ARN, lookup the ARN and
replace the name.
:param listeners: a list of listener dicts
:return: the same list of dicts ensuring that each listener DefaultActions dict has TargetGroupArn key. If a TargetGroupName key exists, it is removed.
"""
if not listeners:
listeners = []
for listener in listeners:
if 'TargetGroupName' in listener['DefaultActions'][0]:
listener['DefaultActions'][0]['TargetGroupArn'] = convert_tg_name_to_arn(self.connection, self.module,
listener['DefaultActions'][0]['TargetGroupName'])
del listener['DefaultActions'][0]['TargetGroupName']
return listeners
def compare_listeners(self):
"""
:return:
"""
listeners_to_modify = []
listeners_to_delete = []
listeners_to_add = deepcopy(self.listeners)
# Check each current listener port to see if it's been passed to the module
for current_listener in self.current_listeners:
current_listener_passed_to_module = False
for new_listener in self.listeners[:]:
new_listener['Port'] = int(new_listener['Port'])
if current_listener['Port'] == new_listener['Port']:
current_listener_passed_to_module = True
# Remove what we match so that what is left can be marked as 'to be added'
listeners_to_add.remove(new_listener)
modified_listener = self._compare_listener(current_listener, new_listener)
if modified_listener:
modified_listener['Port'] = current_listener['Port']
modified_listener['ListenerArn'] = current_listener['ListenerArn']
listeners_to_modify.append(modified_listener)
break
# If the current listener was not matched against passed listeners and purge is True, mark for removal
if not current_listener_passed_to_module and self.purge_listeners:
listeners_to_delete.append(current_listener['ListenerArn'])
return listeners_to_add, listeners_to_modify, listeners_to_delete
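# A sketch of the three-way split (hypothetical listeners): the port acts
# as the identity key, so a port present on both sides is diffed, a port
# only in the module params is added, and a port only on the ELB is
# deleted when purge_listeners is True:
#
#   current listeners: ports 80 and 8080
#   passed listeners:  port 80 (changed) and port 443
#   -> to_add: [443], to_modify: [80], to_delete: [ARN of 8080]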
def _compare_listener(self, current_listener, new_listener):
"""
Compare two listeners.
:param current_listener:
:param new_listener:
:return:
"""
modified_listener = {}
# Port
if current_listener['Port'] != new_listener['Port']:
modified_listener['Port'] = new_listener['Port']
# Protocol
if current_listener['Protocol'] != new_listener['Protocol']:
modified_listener['Protocol'] = new_listener['Protocol']
# If Protocol is HTTPS, check additional attributes
if current_listener['Protocol'] == 'HTTPS' and new_listener['Protocol'] == 'HTTPS':
# SSL policy and certificate
if current_listener['SslPolicy'] != new_listener['SslPolicy']:
modified_listener['SslPolicy'] = new_listener['SslPolicy']
if current_listener['Certificates'][0]['CertificateArn'] != new_listener['Certificates'][0]['CertificateArn']:
modified_listener['Certificates'] = []
modified_listener['Certificates'].append({})
modified_listener['Certificates'][0]['CertificateArn'] = new_listener['Certificates'][0]['CertificateArn']
elif current_listener['Protocol'] != 'HTTPS' and new_listener['Protocol'] == 'HTTPS':
modified_listener['SslPolicy'] = new_listener['SslPolicy']
modified_listener['Certificates'] = []
modified_listener['Certificates'].append({})
modified_listener['Certificates'][0]['CertificateArn'] = new_listener['Certificates'][0]['CertificateArn']
# Default action
# We won't worry about the Action Type because it is always 'forward'
if current_listener['DefaultActions'][0]['TargetGroupArn'] != new_listener['DefaultActions'][0]['TargetGroupArn']:
modified_listener['DefaultActions'] = []
modified_listener['DefaultActions'].append({})
modified_listener['DefaultActions'][0]['TargetGroupArn'] = new_listener['DefaultActions'][0]['TargetGroupArn']
modified_listener['DefaultActions'][0]['Type'] = 'forward'
if modified_listener:
return modified_listener
else:
return None
class ELBListener(object):
def __init__(self, connection, module, listener, elb_arn):
"""
:param connection:
:param module:
:param listener:
:param elb_arn:
"""
self.connection = connection
self.module = module
self.listener = listener
self.elb_arn = elb_arn
def add(self):
try:
# Rules is not a valid parameter for create_listener
if 'Rules' in self.listener:
self.listener.pop('Rules')
AWSRetry.jittered_backoff()(self.connection.create_listener)(LoadBalancerArn=self.elb_arn, **self.listener)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def modify(self):
try:
# Rules is not a valid parameter for modify_listener
if 'Rules' in self.listener:
self.listener.pop('Rules')
AWSRetry.jittered_backoff()(self.connection.modify_listener)(**self.listener)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def delete(self):
try:
AWSRetry.jittered_backoff()(self.connection.delete_listener)(ListenerArn=self.listener)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
class ELBListenerRules(object):
def __init__(self, connection, module, elb_arn, listener_rules, listener_port):
self.connection = connection
self.module = module
self.elb_arn = elb_arn
self.rules = self._ensure_rules_action_has_arn(listener_rules)
self.changed = False
# Get listener based on port so we can use ARN
self.current_listener = get_elb_listener(connection, module, elb_arn, listener_port)
self.listener_arn = self.current_listener['ListenerArn']
self.rules_to_add = deepcopy(self.rules)
self.rules_to_modify = []
self.rules_to_delete = []
# If the listener exists (i.e. has an ARN) get rules for the listener
if 'ListenerArn' in self.current_listener:
self.current_rules = self._get_elb_listener_rules()
else:
self.current_rules = []
def _ensure_rules_action_has_arn(self, rules):
"""
If a rule Action has been passed with a Target Group Name instead of ARN, lookup the ARN and
replace the name.
:param rules: a list of rule dicts
:return: the same list of dicts ensuring that each rule Actions dict has TargetGroupArn key. If a TargetGroupName key exists, it is removed.
"""
for rule in rules:
if 'TargetGroupName' in rule['Actions'][0]:
rule['Actions'][0]['TargetGroupArn'] = convert_tg_name_to_arn(self.connection, self.module, rule['Actions'][0]['TargetGroupName'])
del rule['Actions'][0]['TargetGroupName']
return rules
def _get_elb_listener_rules(self):
try:
return AWSRetry.jittered_backoff()(self.connection.describe_rules)(ListenerArn=self.current_listener['ListenerArn'])['Rules']
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
def _compare_condition(self, current_conditions, condition):
"""
:param current_conditions:
:param condition:
:return:
"""
condition_found = False
for current_condition in current_conditions:
if current_condition['Field'] == condition['Field'] and current_condition['Values'][0] == condition['Values'][0]:
condition_found = True
break
return condition_found
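# For example (hypothetical conditions), a condition matches only if both
# the Field and the first Value are identical:
#
#   current = [{'Field': 'path-pattern', 'Values': ['/api/*']}]
#   self._compare_condition(current, {'Field': 'path-pattern', 'Values': ['/api/*']})  # True
#   self._compare_condition(current, {'Field': 'path-pattern', 'Values': ['/img/*']})  # False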
def _compare_rule(self, current_rule, new_rule):
"""
:return:
"""
modified_rule = {}
# Priority
if current_rule['Priority'] != new_rule['Priority']:
modified_rule['Priority'] = new_rule['Priority']
# Actions
# We won't worry about the Action Type because it is always 'forward'
if current_rule['Actions'][0]['TargetGroupArn'] != new_rule['Actions'][0]['TargetGroupArn']:
modified_rule['Actions'] = []
modified_rule['Actions'].append({})
modified_rule['Actions'][0]['TargetGroupArn'] = new_rule['Actions'][0]['TargetGroupArn']
modified_rule['Actions'][0]['Type'] = 'forward'
# Conditions
modified_conditions = []
for condition in new_rule['Conditions']:
if not self._compare_condition(current_rule['Conditions'], condition):
modified_conditions.append(condition)
if modified_conditions:
modified_rule['Conditions'] = modified_conditions
return modified_rule
def compare_rules(self):
"""
:return:
"""
rules_to_modify = []
rules_to_delete = []
rules_to_add = deepcopy(self.rules)
for current_rule in self.current_rules:
current_rule_passed_to_module = False
for new_rule in self.rules[:]:
if current_rule['Priority'] == new_rule['Priority']:
current_rule_passed_to_module = True
# Remove what we match so that what is left can be marked as 'to be added'
rules_to_add.remove(new_rule)
modified_rule = self._compare_rule(current_rule, new_rule)
if modified_rule:
modified_rule['Priority'] = int(current_rule['Priority'])
modified_rule['RuleArn'] = current_rule['RuleArn']
modified_rule['Actions'] = new_rule['Actions']
modified_rule['Conditions'] = new_rule['Conditions']
rules_to_modify.append(modified_rule)
break
# If the current rule was not matched against passed rules, mark for removal
if not current_rule_passed_to_module and not current_rule['IsDefault']:
rules_to_delete.append(current_rule['RuleArn'])
return rules_to_add, rules_to_modify, rules_to_delete
class ELBListenerRule(object):
def __init__(self, connection, module, rule, listener_arn):
self.connection = connection
self.module = module
self.rule = rule
self.listener_arn = listener_arn
self.changed = False
def create(self):
"""
Create a listener rule
:return:
"""
try:
self.rule['ListenerArn'] = self.listener_arn
self.rule['Priority'] = int(self.rule['Priority'])
AWSRetry.jittered_backoff()(self.connection.create_rule)(**self.rule)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def modify(self):
"""
Modify a listener rule
:return:
"""
try:
del self.rule['Priority']
AWSRetry.jittered_backoff()(self.connection.modify_rule)(**self.rule)
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
def delete(self):
"""
Delete a listener rule
:return:
"""
try:
AWSRetry.jittered_backoff()(self.connection.delete_rule)(RuleArn=self.rule['RuleArn'])
except (BotoCoreError, ClientError) as e:
self.module.fail_json_aws(e)
self.changed = True
|
"""
Volume driver for NetApp E-Series iSCSI storage systems.
"""
from cinder import interface
from cinder.volume import driver
from cinder.volume.drivers.netapp.eseries import library
from cinder.volume.drivers.netapp import utils as na_utils
@interface.volumedriver
class NetAppEseriesISCSIDriver(driver.BaseVD,
driver.ManageableVD,
driver.ExtendVD,
driver.TransferVD,
driver.SnapshotVD,
driver.ConsistencyGroupVD):
"""NetApp E-Series iSCSI volume driver."""
DRIVER_NAME = 'NetApp_iSCSI_ESeries'
# ThirdPartySystems wiki page
CI_WIKI_NAME = "NetApp_CI"
VERSION = library.NetAppESeriesLibrary.VERSION
def __init__(self, *args, **kwargs):
super(NetAppEseriesISCSIDriver, self).__init__(*args, **kwargs)
na_utils.validate_instantiation(**kwargs)
self.library = library.NetAppESeriesLibrary(self.DRIVER_NAME,
'iSCSI', **kwargs)
def do_setup(self, context):
self.library.do_setup(context)
def check_for_setup_error(self):
self.library.check_for_setup_error()
def create_volume(self, volume):
self.library.create_volume(volume)
def create_volume_from_snapshot(self, volume, snapshot):
self.library.create_volume_from_snapshot(volume, snapshot)
def create_cloned_volume(self, volume, src_vref):
self.library.create_cloned_volume(volume, src_vref)
def delete_volume(self, volume):
self.library.delete_volume(volume)
def create_snapshot(self, snapshot):
return self.library.create_snapshot(snapshot)
def delete_snapshot(self, snapshot):
self.library.delete_snapshot(snapshot)
def get_volume_stats(self, refresh=False):
return self.library.get_volume_stats(refresh)
def extend_volume(self, volume, new_size):
self.library.extend_volume(volume, new_size)
def ensure_export(self, context, volume):
return self.library.ensure_export(context, volume)
def create_export(self, context, volume, connector):
return self.library.create_export(context, volume)
def remove_export(self, context, volume):
self.library.remove_export(context, volume)
def manage_existing(self, volume, existing_ref):
return self.library.manage_existing(volume, existing_ref)
def manage_existing_get_size(self, volume, existing_ref):
return self.library.manage_existing_get_size(volume, existing_ref)
def unmanage(self, volume):
return self.library.unmanage(volume)
def initialize_connection(self, volume, connector):
return self.library.initialize_connection_iscsi(volume, connector)
def terminate_connection(self, volume, connector, **kwargs):
return self.library.terminate_connection_iscsi(volume, connector,
**kwargs)
def get_pool(self, volume):
return self.library.get_pool(volume)
def create_cgsnapshot(self, context, cgsnapshot, snapshots):
return self.library.create_cgsnapshot(cgsnapshot, snapshots)
def delete_cgsnapshot(self, context, cgsnapshot, snapshots):
return self.library.delete_cgsnapshot(cgsnapshot, snapshots)
def create_consistencygroup(self, context, group):
return self.library.create_consistencygroup(group)
def delete_consistencygroup(self, context, group, volumes):
return self.library.delete_consistencygroup(group, volumes)
def update_consistencygroup(self, context, group,
add_volumes=None, remove_volumes=None):
return self.library.update_consistencygroup(
group, add_volumes, remove_volumes)
def create_consistencygroup_from_src(self, context, group, volumes,
cgsnapshot=None, snapshots=None,
source_cg=None, source_vols=None):
return self.library.create_consistencygroup_from_src(
group, volumes, cgsnapshot, snapshots, source_cg, source_vols)
|
"""
Defines some base classes related to managing green threads.
"""
import abc
import logging
import socket
import time
import traceback
import weakref
import netaddr
from ryu.lib import hub
from ryu.lib import sockopt
from ryu.lib.hub import Timeout
from ryu.lib.packet.bgp import RF_IPv4_UC
from ryu.lib.packet.bgp import RF_IPv6_UC
from ryu.lib.packet.bgp import RF_IPv4_VPN
from ryu.lib.packet.bgp import RF_IPv6_VPN
from ryu.lib.packet.bgp import RF_RTC_UC
from ryu.services.protocols.bgp.utils.circlist import CircularListType
from ryu.services.protocols.bgp.utils.evtlet import LoopingCall
LOG = logging.getLogger('bgpspeaker.base')
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
SUPPORTED_GLOBAL_RF = set([RF_IPv4_UC,
RF_IPv6_UC,
RF_IPv4_VPN,
RF_RTC_UC,
RF_IPv6_VPN
])
ACTIVITY_ERROR_CODE = 100
RUNTIME_CONF_ERROR_CODE = 200
BIN_ERROR = 300
NET_CTRL_ERROR_CODE = 400
API_ERROR_CODE = 500
PREFIX_ERROR_CODE = 600
BGP_PROCESSOR_ERROR_CODE = 700
CORE_ERROR_CODE = 800
_EXCEPTION_REGISTRY = {}
class BGPSException(Exception):
"""Base exception class for all BGPS related exceptions.
"""
CODE = 1
SUB_CODE = 1
DEF_DESC = 'Unknown exception.'
def __init__(self, desc=None):
super(BGPSException, self).__init__()
if not desc:
desc = self.__class__.DEF_DESC
kls = self.__class__
self.message = '%d.%d - %s' % (kls.CODE, kls.SUB_CODE, desc)
def __repr__(self):
kls = self.__class__
return '<%s(desc=%s)>' % (kls, self.message)
def __str__(self, *args, **kwargs):
return self.message
def add_bgp_error_metadata(code, sub_code, def_desc='unknown'):
"""Decorator for all exceptions that want to set exception class meta-data.
"""
# Check registry if we already have an exception with same code/sub-code
if _EXCEPTION_REGISTRY.get((code, sub_code)) is not None:
raise ValueError('BGPSException with code %d and sub-code %d '
'already defined.' % (code, sub_code))
def decorator(klass):
"""Sets class constants for exception code and sub-code.
If given class is sub-class of BGPSException we sets class constants.
"""
if issubclass(klass, BGPSException):
_EXCEPTION_REGISTRY[(code, sub_code)] = klass
klass.CODE = code
klass.SUB_CODE = sub_code
klass.DEF_DESC = def_desc
return klass
return decorator
@add_bgp_error_metadata(code=ACTIVITY_ERROR_CODE,
sub_code=1,
def_desc='Unknown activity exception.')
class ActivityException(BGPSException):
"""Base class for exceptions related to Activity.
"""
pass
class Activity(object):
"""Base class for a thread of execution that provides some custom settings.
Activity is also a container of other activities or threads that it has
started. Inside a Activity you should always use one of the spawn method
to start another activity or greenthread. Activity is also holds pointers
to sockets that it or its child activities of threads have create.
"""
__metaclass__ = abc.ABCMeta
def __init__(self, name=None):
self._name = name
if self._name is None:
self._name = 'UnknownActivity: ' + str(time.time())
self._child_thread_map = weakref.WeakValueDictionary()
self._child_activity_map = weakref.WeakValueDictionary()
self._asso_socket_map = weakref.WeakValueDictionary()
self._timers = weakref.WeakValueDictionary()
self._started = False
@property
def name(self):
return self._name
@property
def started(self):
return self._started
def _validate_activity(self, activity):
"""Checks the validity of the given activity before it can be started.
"""
if not self._started:
raise ActivityException(desc='Tried to spawn a child activity'
' before Activity was started.')
if activity.started:
raise ActivityException(desc='Tried to start an Activity that was '
'already started.')
def _spawn_activity(self, activity, *args, **kwargs):
"""Starts *activity* in a new thread and passes *args* and *kwargs*.
Maintains pointer to this activity and stops *activity* when this
activity is stopped.
"""
self._validate_activity(activity)
# Spawn a new greenthread for given activity
greenthread = hub.spawn(activity.start, *args, **kwargs)
self._child_thread_map[activity.name] = greenthread
self._child_activity_map[activity.name] = activity
return greenthread
def _spawn_activity_after(self, seconds, activity, *args, **kwargs):
self._validate_activity(activity)
# Schedule to spawn a new greenthread after requested delay
greenthread = hub.spawn_after(seconds, activity.start, *args,
**kwargs)
self._child_thread_map[activity.name] = greenthread
self._child_activity_map[activity.name] = activity
return greenthread
def _validate_callable(self, callable_):
if callable_ is None:
raise ActivityException(desc='Callable cannot be None')
if not hasattr(callable_, '__call__'):
raise ActivityException(desc='Currently only supports instances'
' that have __call__ as callable which'
' is missing in given arg.')
if not self._started:
raise ActivityException(desc='Tried to spawn a child thread '
'before this Activity was started.')
def _spawn(self, name, callable_, *args, **kwargs):
self._validate_callable(callable_)
greenthread = hub.spawn(callable_, *args, **kwargs)
self._child_thread_map[name] = greenthread
return greenthread
def _spawn_after(self, name, seconds, callable_, *args, **kwargs):
self._validate_callable(callable_)
greenthread = hub.spawn_after(seconds, callable_, *args, **kwargs)
self._child_thread_map[name] = greenthread
return greenthread
def _create_timer(self, name, func, *arg, **kwarg):
timer = LoopingCall(func, *arg, **kwarg)
self._timers[name] = timer
return timer
@abc.abstractmethod
def _run(self, *args, **kwargs):
"""Main activity of this class.
Can launch other activity/callables here.
Sub-classes should override this method.
"""
raise NotImplementedError()
def start(self, *args, **kwargs):
"""Starts the main activity of this class.
Calls *_run* and calls *stop* when *_run* is finished.
This method should be run in a new greenthread as it may not return
immediately.
"""
if self.started:
raise ActivityException(desc='Activity already started')
self._started = True
try:
self._run(*args, **kwargs)
except BGPSException:
LOG.error(traceback.format_exc())
finally:
if self.started: # could have been stopped somewhere else
self.stop()
def pause(self, seconds=0):
"""Relinquishes hub for given number of seconds.
In other words is puts to sleep to give other greeenthread a chance to
run.
"""
hub.sleep(seconds)
def _stop_child_activities(self):
"""Stop all child activities spawn by this activity.
"""
# Iterating over items list instead of iteritems to avoid dictionary
# changed size during iteration
child_activities = self._child_activity_map.items()
for child_name, child_activity in child_activities:
LOG.debug('%s: Stopping child activity %s ', self.name, child_name)
if child_activity.started:
child_activity.stop()
def _stop_child_threads(self, name=None):
"""Stops all threads spawn by this activity.
"""
child_threads = self._child_thread_map.items()
for thread_name, thread in child_threads:
            if not name or thread_name == name:
LOG.debug('%s: Stopping child thread %s',
self.name, thread_name)
thread.kill()
self._child_thread_map.pop(thread_name, None)
def _close_asso_sockets(self):
"""Closes all the sockets linked to this activity.
"""
asso_sockets = self._asso_socket_map.items()
for sock_name, sock in asso_sockets:
LOG.debug('%s: Closing socket %s - %s', self.name, sock_name, sock)
sock.close()
def _stop_timers(self):
timers = self._timers.items()
for timer_name, timer in timers:
LOG.debug('%s: Stopping timer %s', self.name, timer_name)
timer.stop()
def stop(self):
"""Stops all child threads and activities and closes associated
sockets.
Re-initializes this activity to be able to start again.
        Raises `ActivityException` if the activity is not currently started.
"""
if not self.started:
raise ActivityException(desc='Cannot call stop when activity is '
'not started or has been stopped already.')
LOG.debug('Stopping activity %s.', self.name)
self._stop_timers()
self._stop_child_activities()
self._stop_child_threads()
self._close_asso_sockets()
# Setup activity for start again.
self._started = False
self._asso_socket_map = weakref.WeakValueDictionary()
self._child_activity_map = weakref.WeakValueDictionary()
self._child_thread_map = weakref.WeakValueDictionary()
self._timers = weakref.WeakValueDictionary()
LOG.debug('Stopping activity %s finished.', self.name)
def _canonicalize_ip(self, ip):
addr = netaddr.IPAddress(ip)
if addr.is_ipv4_mapped():
ip = str(addr.ipv4())
return ip
def get_remotename(self, sock):
addr, port = sock.getpeername()[:2]
return (self._canonicalize_ip(addr), str(port))
def get_localname(self, sock):
addr, port = sock.getsockname()[:2]
return (self._canonicalize_ip(addr), str(port))
def _create_listen_socket(self, family, loc_addr):
s = socket.socket(family)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(loc_addr)
s.listen(1)
return s
def _listen_socket_loop(self, s, conn_handle):
while True:
sock, client_address = s.accept()
client_address, port = self.get_remotename(sock)
            LOG.debug('Connect request received from client %s:%s',
                      client_address, port)
client_name = self.name + '_client@' + client_address
self._asso_socket_map[client_name] = sock
self._spawn(client_name, conn_handle, sock)
def _listen_tcp(self, loc_addr, conn_handle):
"""Creates a TCP server socket which listens on `port` number.
For each connection `server_factory` starts a new protocol.
"""
info = socket.getaddrinfo(None, loc_addr[1], socket.AF_UNSPEC,
socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
listen_sockets = {}
for res in info:
            af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if af == socket.AF_INET6:
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
sock.bind(sa)
sock.listen(50)
listen_sockets[sa] = sock
except socket.error as e:
LOG.error('Error creating socket: %s', e)
if sock:
sock.close()
        server = None
        for sa in listen_sockets.keys():
            name = self.name + '_server@' + str(sa[0])
            self._asso_socket_map[name] = listen_sockets[sa]
            # Spawn an accept loop for each listening socket; _spawn also
            # registers the greenthread in the child-thread map.
            server = self._spawn(name, self._listen_socket_loop,
                                 listen_sockets[sa], conn_handle)
return server, listen_sockets
def _connect_tcp(self, peer_addr, conn_handler, time_out=None,
bind_address=None, password=None):
"""Creates a TCP connection to given peer address.
        Tries to establish the connection within `time_out` seconds. If
        successful, spawns `conn_handler` in a new thread to handle the
        connected socket. The socket is bound to `bind_address` if specified.
        """
LOG.debug('Connect TCP called for %s:%s', peer_addr[0], peer_addr[1])
if netaddr.valid_ipv4(peer_addr[0]):
family = socket.AF_INET
else:
family = socket.AF_INET6
with Timeout(time_out, socket.error):
sock = socket.socket(family)
if bind_address:
sock.bind(bind_address)
if password:
sockopt.set_tcp_md5sig(sock, peer_addr[0], password)
sock.connect(peer_addr)
            # socket.error exception is raised in case of timeout and
            # the following code is executed only when the connection
            # is established.
# Connection name for pro-active connection is made up of
# local end address + remote end address
local = self.get_localname(sock)[0]
remote = self.get_remotename(sock)[0]
conn_name = ('L: ' + local + ', R: ' + remote)
self._asso_socket_map[conn_name] = sock
# If connection is established, we call connection handler
# in a new thread.
self._spawn(conn_name, conn_handler, sock)
return sock
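# Illustrative sketch (hypothetical, not part of the original module):
# a minimal Activity subclass only has to override _run().
class _ExampleActivity(Activity):
    def _run(self, *args, **kwargs):
        LOG.debug('%s: doing one unit of work', self.name)
        # Yield control so sibling greenthreads get a chance to run.
        self.pause(0)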
class Sink(object):
"""An entity to which we send out messages (eg. BGP routes)."""
#
# OutgoingMsgList
#
# A circular list type in which objects are linked to each
# other using the 'next_sink_out_route' and 'prev_sink_out_route'
# attributes.
#
OutgoingMsgList = CircularListType(next_attr_name='next_sink_out_route',
prev_attr_name='prev_sink_out_route')
# Next available index that can identify an instance uniquely.
idx = 0
@staticmethod
def next_index():
"""Increments the sink index and returns the value."""
Sink.idx = Sink.idx + 1
return Sink.idx
def __init__(self):
# A small integer that represents this sink.
self.index = Sink.next_index()
        # Event used to signal enqueueing.
        from ryu.services.protocols.bgp.utils.evtlet import EventletIOFactory
        self.outgoing_msg_event = EventletIOFactory.create_custom_event()
self.messages_queued = 0
# List of msgs. that are to be sent to this peer. Each item
# in the list is an instance of OutgoingRoute.
self.outgoing_msg_list = Sink.OutgoingMsgList()
def clear_outgoing_msg_list(self):
self.outgoing_msg_list = Sink.OutgoingMsgList()
def enque_outgoing_msg(self, msg):
self.outgoing_msg_list.append(msg)
self.outgoing_msg_event.set()
self.messages_queued += 1
def enque_first_outgoing_msg(self, msg):
self.outgoing_msg_list.prepend(msg)
self.outgoing_msg_event.set()
def __iter__(self):
return self
def next(self):
"""Pops and returns the first outgoing message from the list.
If message list currently has no messages, the calling thread will
be put to sleep until we have at-least one message in the list that
can be poped and returned.
"""
# We pick the first outgoing available and send it.
outgoing_msg = self.outgoing_msg_list.pop_first()
# If we do not have any outgoing msg., we wait.
if outgoing_msg is None:
self.outgoing_msg_event.clear()
self.outgoing_msg_event.wait()
outgoing_msg = self.outgoing_msg_list.pop_first()
return outgoing_msg
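# Illustrative consumer loop (hypothetical, not part of the original
# module): next() blocks on outgoing_msg_event until a message is
# enqueued via enque_outgoing_msg().
def _drain_sink_example(sink, send):
    while True:
        outgoing_msg = sink.next()
        if outgoing_msg is not None:
            send(outgoing_msg)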
class Source(object):
"""An entity that gives us BGP routes. A BGP peer, for example."""
def __init__(self, version_num):
# Number that is currently being used to stamp information
# received from this source. We will bump this number up when
# the information that is now expected from the source belongs
# to a different logical batch. This mechanism can be used to
# identify stale information.
self.version_num = version_num
class FlexinetPeer(Source, Sink):
def __init__(self):
# Initialize source and sink
Source.__init__(self, 1)
Sink.__init__(self)
_VALIDATORS = {}
def validate(**kwargs):
"""Defines a decorator to register a validator with a name for look-up.
If name is not provided we use function name as name of the validator.
"""
def decorator(func):
_VALIDATORS[kwargs.pop('name', func.__name__)] = func
return func
return decorator
def get_validator(name):
"""Returns a validator registered for given name.
"""
return _VALIDATORS.get(name)
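# Illustrative usage of the registry above (this validator is
# hypothetical and not part of the original module):
@validate(name='ip_address')
def _validate_ip_address(value):
    return netaddr.valid_ipv4(value) or netaddr.valid_ipv6(value)
# get_validator('ip_address') now returns _validate_ip_address.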
|
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
from textwrap import dedent
from pants.backend.python.subsystems.python_setup import PythonSetup
from pants.python.python_repos import PythonRepos
from pants_test.backend.python.tasks.python_task_test_base import PythonTaskTestBase
from pants.contrib.python.checks.tasks2.python_eval import PythonEval
class PythonEvalTest(PythonTaskTestBase):
@classmethod
def task_type(cls):
return PythonEval
def setUp(self):
super(PythonEvalTest, self).setUp()
self._create_graph(broken_b_library=True)
def _create_graph(self, broken_b_library):
self.reset_build_graph()
self.a_library = self.create_python_library('src/python/a', 'a', {'a.py': dedent("""
import inspect
def compile_time_check_decorator(cls):
if not inspect.isclass(cls):
raise TypeError('This decorator can only be applied to classes, given {}'.format(cls))
return cls
""")})
self.b_library = self.create_python_library('src/python/b', 'b', {'b.py': dedent("""
from a.a import compile_time_check_decorator
@compile_time_check_decorator
class BarB(object):
pass
""")})
# TODO: Presumably this was supposed to be c_library, not override b_library. Unravel and fix.
self.b_library = self.create_python_library('src/python/c', 'c', {'c.py': dedent("""
from a.a import compile_time_check_decorator
@compile_time_check_decorator
{}:
pass
""".format('def baz_c()' if broken_b_library else 'class BazC(object)')
)}, dependencies=['//src/python/a'])
self.d_library = self.create_python_library('src/python/d', 'd', { 'd.py': dedent("""
from a.a import compile_time_check_decorator
@compile_time_check_decorator
class BazD(object):
pass
""")}, dependencies=['//src/python/a'])
self.e_binary = self.create_python_binary('src/python/e', 'e', 'a.a',
dependencies=['//src/python/a'])
self.f_binary = self.create_python_binary('src/python/f', 'f',
'a.a:compile_time_check_decorator',
dependencies=['//src/python/a'])
self.g_binary = self.create_python_binary('src/python/g', 'g', 'a.a:does_not_exist',
dependencies=['//src/python/a'])
self.h_binary = self.create_python_binary('src/python/h', 'h', 'a.a')
def _create_task(self, target_roots, options=None):
if options:
self.set_options(**options)
return self.create_task(self.context(target_roots=target_roots,
for_subsystems=[PythonSetup, PythonRepos]))
def test_noop(self):
python_eval = self._create_task(target_roots=[])
compiled = python_eval.execute()
self.assertEqual([], compiled)
def test_compile(self):
python_eval = self._create_task(target_roots=[self.a_library])
compiled = python_eval.execute()
self.assertEqual([self.a_library], compiled)
def test_skip(self):
self.set_options(skip=True)
python_eval = self._create_task(target_roots=[self.a_library])
compiled = python_eval.execute()
self.assertIsNone(compiled)
def test_compile_incremental(self):
python_eval = self._create_task(target_roots=[self.a_library])
compiled = python_eval.execute()
self.assertEqual([self.a_library], compiled)
python_eval = self._create_task(target_roots=[self.a_library])
compiled = python_eval.execute()
self.assertEqual([], compiled)
def test_compile_closure(self):
python_eval = self._create_task(target_roots=[self.d_library], options={'closure': True})
compiled = python_eval.execute()
self.assertEqual({self.d_library, self.a_library}, set(compiled))
def test_compile_fail_closure(self):
python_eval = self._create_task(target_roots=[self.b_library], options={'closure': True})
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([self.a_library], e.exception.compiled)
self.assertEqual([self.b_library], e.exception.failed)
def test_compile_incremental_progress(self):
python_eval = self._create_task(target_roots=[self.b_library], options={'closure': True})
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([self.a_library], e.exception.compiled)
self.assertEqual([self.b_library], e.exception.failed)
self._create_graph(broken_b_library=False)
python_eval = self._create_task(target_roots=[self.b_library], options={'closure': True})
compiled = python_eval.execute()
self.assertEqual([self.b_library], compiled)
def test_compile_fail_missing_build_dep(self):
python_eval = self._create_task(target_roots=[self.b_library])
    with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([], e.exception.compiled)
self.assertEqual([self.b_library], e.exception.failed)
def test_compile_fail_compile_time_check_decorator(self):
python_eval = self._create_task(target_roots=[self.b_library])
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([], e.exception.compiled)
self.assertEqual([self.b_library], e.exception.failed)
def test_compile_failslow(self):
python_eval = self._create_task(target_roots=[self.a_library, self.b_library, self.d_library],
options={'fail_slow': True})
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual({self.a_library, self.d_library}, set(e.exception.compiled))
self.assertEqual([self.b_library], e.exception.failed)
def test_entry_point_module(self):
python_eval = self._create_task(target_roots=[self.e_binary])
compiled = python_eval.execute()
self.assertEqual([self.e_binary], compiled)
def test_entry_point_function(self):
python_eval = self._create_task(target_roots=[self.f_binary])
compiled = python_eval.execute()
self.assertEqual([self.f_binary], compiled)
def test_entry_point_does_not_exist(self):
python_eval = self._create_task(target_roots=[self.g_binary])
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([], e.exception.compiled)
self.assertEqual([self.g_binary], e.exception.failed)
def test_entry_point_missing_build_dep(self):
python_eval = self._create_task(target_roots=[self.h_binary])
with self.assertRaises(PythonEval.Error) as e:
python_eval.execute()
self.assertEqual([], e.exception.compiled)
self.assertEqual([self.h_binary], e.exception.failed)
|
"""
HappyBase utility tests.
"""
from nose.tools import assert_equal, assert_less
import happybase.util as util
def test_camel_case_to_pep8():
def check(lower_cc, upper_cc, correct):
x1 = util.camel_case_to_pep8(lower_cc)
x2 = util.camel_case_to_pep8(upper_cc)
assert_equal(correct, x1)
assert_equal(correct, x2)
y1 = util.pep8_to_camel_case(x1, True)
y2 = util.pep8_to_camel_case(x2, False)
assert_equal(upper_cc, y1)
assert_equal(lower_cc, y2)
examples = [('foo', 'Foo', 'foo'),
('fooBar', 'FooBar', 'foo_bar'),
('fooBarBaz', 'FooBarBaz', 'foo_bar_baz'),
('fOO', 'FOO', 'f_o_o')]
for a, b, c in examples:
yield check, a, b, c
def test_str_increment():
def check(s_hex, expected):
s = s_hex.decode('hex')
v = util.str_increment(s)
v_hex = v.encode('hex')
assert_equal(expected, v_hex)
assert_less(s, v)
test_values = [
('00', '01'),
('01', '02'),
('fe', 'ff'),
('1234', '1235'),
('12fe', '12ff'),
('12ff', '13'),
('424242ff', '424243'),
('4242ffff', '4243'),
]
assert util.str_increment('\xff\xff\xff') is None
for s, expected in test_values:
yield check, s, expected
|
from warnings import warn
warn("IPython.utils.localinterfaces has moved to jupyter_client.localinterfaces", stacklevel=2)
from jupyter_client.localinterfaces import *
|
from m5.SimObject import SimObject
from m5.params import *
from m5.proxy import *
from Device import BasicPioDevice, DmaDevice, PioDevice
class PciConfigAll(PioDevice):
type = 'PciConfigAll'
cxx_header = "dev/pciconfigall.hh"
platform = Param.Platform(Parent.any, "Platform this device is part of.")
pio_latency = Param.Latency('30ns', "Programmed IO latency")
bus = Param.UInt8(0x00, "PCI bus to act as config space for")
size = Param.MemorySize32('16MB', "Size of config space")
class PciDevice(DmaDevice):
type = 'PciDevice'
cxx_class = 'PciDev'
cxx_header = "dev/pcidev.hh"
abstract = True
platform = Param.Platform(Parent.any, "Platform this device is part of.")
config = SlavePort("PCI configuration space port")
pci_bus = Param.Int("PCI bus")
pci_dev = Param.Int("PCI device number")
pci_func = Param.Int("PCI function code")
pio_latency = Param.Latency('30ns', "Programmed IO latency")
config_latency = Param.Latency('20ns', "Config read or write latency")
VendorID = Param.UInt16("Vendor ID")
DeviceID = Param.UInt16("Device ID")
Command = Param.UInt16(0, "Command")
Status = Param.UInt16(0, "Status")
Revision = Param.UInt8(0, "Device")
ProgIF = Param.UInt8(0, "Programming Interface")
SubClassCode = Param.UInt8(0, "Sub-Class Code")
ClassCode = Param.UInt8(0, "Class Code")
CacheLineSize = Param.UInt8(0, "System Cacheline Size")
LatencyTimer = Param.UInt8(0, "PCI Latency Timer")
HeaderType = Param.UInt8(0, "PCI Header Type")
BIST = Param.UInt8(0, "Built In Self Test")
BAR0 = Param.UInt32(0x00, "Base Address Register 0")
BAR1 = Param.UInt32(0x00, "Base Address Register 1")
BAR2 = Param.UInt32(0x00, "Base Address Register 2")
BAR3 = Param.UInt32(0x00, "Base Address Register 3")
BAR4 = Param.UInt32(0x00, "Base Address Register 4")
BAR5 = Param.UInt32(0x00, "Base Address Register 5")
BAR0Size = Param.MemorySize32('0B', "Base Address Register 0 Size")
BAR1Size = Param.MemorySize32('0B', "Base Address Register 1 Size")
BAR2Size = Param.MemorySize32('0B', "Base Address Register 2 Size")
BAR3Size = Param.MemorySize32('0B', "Base Address Register 3 Size")
BAR4Size = Param.MemorySize32('0B', "Base Address Register 4 Size")
BAR5Size = Param.MemorySize32('0B', "Base Address Register 5 Size")
BAR0LegacyIO = Param.Bool(False, "Whether BAR0 is hardwired legacy IO")
BAR1LegacyIO = Param.Bool(False, "Whether BAR1 is hardwired legacy IO")
BAR2LegacyIO = Param.Bool(False, "Whether BAR2 is hardwired legacy IO")
BAR3LegacyIO = Param.Bool(False, "Whether BAR3 is hardwired legacy IO")
BAR4LegacyIO = Param.Bool(False, "Whether BAR4 is hardwired legacy IO")
BAR5LegacyIO = Param.Bool(False, "Whether BAR5 is hardwired legacy IO")
CardbusCIS = Param.UInt32(0x00, "Cardbus Card Information Structure")
SubsystemID = Param.UInt16(0x00, "Subsystem ID")
SubsystemVendorID = Param.UInt16(0x00, "Subsystem Vendor ID")
ExpansionROM = Param.UInt32(0x00, "Expansion ROM Base Address")
InterruptLine = Param.UInt8(0x00, "Interrupt Line")
InterruptPin = Param.UInt8(0x00, "Interrupt Pin")
MaximumLatency = Param.UInt8(0x00, "Maximum Latency")
MinimumGrant = Param.UInt8(0x00, "Minimum Grant")
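# Illustrative sketch (not from gem5): a concrete device fills in its
# config-space identity; the class name, header and IDs below are
# placeholders only.
class _ExamplePciDevice(PciDevice):
    type = 'ExamplePciDevice'
    cxx_class = 'ExamplePciDev'
    cxx_header = "dev/exampledev.hh"
    VendorID = 0x8086   # placeholder vendor ID
    DeviceID = 0x1075   # placeholder device ID
    BAR0 = 0x00000001   # bit 0 set marks an I/O-space BAR
    BAR0Size = '256B'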
|
'''Calculate radius of gyration of neurites.'''
import neurom as nm
from neurom import morphmath as mm
from neurom.core.dataformat import COLS
import numpy as np
def segment_centre_of_mass(seg):
    '''Calculate and return centre of mass of a segment.
    Calculated as the centre of mass of a conical frustum.'''
    h = mm.segment_length(seg)
    r0 = seg[0][COLS.R]
    r1 = seg[1][COLS.R]
    # Frustum centre of mass along the axis, measured from the r0 face:
    # z = h * (r0^2 + 2*r0*r1 + 3*r1^2) / (4 * (r0^2 + r0*r1 + r1^2))
    num = r0 * r0 + 2 * r0 * r1 + 3 * r1 * r1
    denom = 4 * (r0 * r0 + r0 * r1 + r1 * r1)
    centre_of_mass_z_loc = num * h / denom
    return seg[0][COLS.XYZ] + (centre_of_mass_z_loc / h) * (seg[1][COLS.XYZ] - seg[0][COLS.XYZ])
def neurite_centre_of_mass(neurite):
'''Calculate and return centre of mass of a neurite.'''
centre_of_mass = np.zeros(3)
total_volume = 0
    seg_vol = np.array(list(map(mm.segment_volume, nm.iter_segments(neurite))))
    seg_centre_of_mass = np.array(list(map(segment_centre_of_mass,
                                           nm.iter_segments(neurite))))
# multiply array of scalars with array of arrays
# http://stackoverflow.com/questions/5795700/multiply-numpy-array-of-scalars-by-array-of-vectors
seg_centre_of_mass = seg_centre_of_mass * seg_vol[:, np.newaxis]
centre_of_mass = np.sum(seg_centre_of_mass, axis=0)
total_volume = np.sum(seg_vol)
return centre_of_mass / total_volume
def distance_sqr(point, seg):
'''Calculate and return square Euclidian distance from given point to
centre of mass of given segment.'''
centre_of_mass = segment_centre_of_mass(seg)
return sum(pow(np.subtract(point, centre_of_mass), 2))
def radius_of_gyration(neurite):
'''Calculate and return radius of gyration of a given neurite.'''
centre_mass = neurite_centre_of_mass(neurite)
sum_sqr_distance = 0
N = 0
dist_sqr = [distance_sqr(centre_mass, s) for s in nm.iter_segments(neurite)]
sum_sqr_distance = np.sum(dist_sqr)
N = len(dist_sqr)
return np.sqrt(sum_sqr_distance / N)
def mean_rad_of_gyration(neurites):
'''Calculate mean radius of gyration for set of neurites.'''
return np.mean([radius_of_gyration(n) for n in neurites])
if __name__ == '__main__':
# load a neuron from an SWC file
filename = 'test_data/swc/Neuron.swc'
nrn = nm.load_neuron(filename)
# for every neurite, print (number of segments, radius of gyration, neurite type)
print([(sum(len(s.points) - 1 for s in nrte.iter_sections()),
radius_of_gyration(nrte), nrte.type) for nrte in nrn.neurites])
# print mean radius of gyration per neurite type
print('Mean radius of gyration for axons: ',
mean_rad_of_gyration(n for n in nrn.neurites if n.type == nm.AXON))
print('Mean radius of gyration for basal dendrites: ',
mean_rad_of_gyration(n for n in nrn.neurites if n.type == nm.BASAL_DENDRITE))
print('Mean radius of gyration for apical dendrites: ',
mean_rad_of_gyration(n for n in nrn.neurites
if n.type == nm.APICAL_DENDRITE))
|
import re
import sys
GIT_HASH_PATTERN = re.compile(r'^[0-9a-fA-F]{40}$')
def GetOSName(platform_name=sys.platform):
if platform_name == 'cygwin' or platform_name.startswith('win'):
return 'win'
elif platform_name.startswith('linux'):
return 'unix'
elif platform_name.startswith('darwin'):
return 'mac'
else:
return platform_name
def IsGitHash(revision):
return GIT_HASH_PATTERN.match(str(revision))
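if __name__ == '__main__':
    # Hypothetical smoke checks, not part of the original helper module.
    assert GetOSName('linux2') == 'unix'
    assert GetOSName('win32') == 'win'
    assert GetOSName('darwin') == 'mac'
    assert IsGitHash('0123456789abcdef0123456789abcdef01234567')
    assert not IsGitHash('HEAD')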
|
from nose.plugins.attrib import attr
from checks import AgentCheck
from tests.checks.common import AgentCheckTest
@attr(requires='apache')
class TestCheckApache(AgentCheckTest):
CHECK_NAME = 'apache'
CONFIG_STUBS = [
{
'apache_status_url': 'http://localhost:8180/server-status',
'tags': ['instance:first']
},
{
'apache_status_url': 'http://localhost:8180/server-status?auto',
'tags': ['instance:second']
},
]
BAD_CONFIG = [
{
'apache_status_url': 'http://localhost:1234/server-status',
}
]
APACHE_GAUGES = [
'apache.performance.idle_workers',
'apache.performance.busy_workers',
'apache.performance.cpu_load',
'apache.performance.uptime',
'apache.net.bytes',
'apache.net.hits',
'apache.conns_total',
'apache.conns_async_writing',
'apache.conns_async_keep_alive',
'apache.conns_async_closing'
]
APACHE_RATES = [
'apache.net.bytes_per_s',
'apache.net.request_per_s'
]
def test_check(self):
config = {
'instances': self.CONFIG_STUBS
}
self.run_check_twice(config)
# Assert metrics
for stub in self.CONFIG_STUBS:
expected_tags = stub['tags']
for mname in self.APACHE_GAUGES + self.APACHE_RATES:
self.assertMetric(mname, tags=expected_tags, count=1)
# Assert service checks
self.assertServiceCheck('apache.can_connect', status=AgentCheck.OK,
tags=['host:localhost', 'port:8180'], count=2)
self.coverage_report()
def test_connection_failure(self):
config = {
'instances': self.BAD_CONFIG
}
# Assert service check
self.assertRaises(
Exception,
lambda: self.run_check(config)
)
self.assertServiceCheck('apache.can_connect', status=AgentCheck.CRITICAL,
tags=['host:localhost', 'port:1234'], count=1)
self.coverage_report()
|
from __future__ import absolute_import, division, print_function
INCLUDES = """
"""
TYPES = """
typedef struct { ...; } HMAC_CTX;
"""
FUNCTIONS = """
void HMAC_CTX_init(HMAC_CTX *);
void HMAC_CTX_cleanup(HMAC_CTX *);
int Cryptography_HMAC_Init_ex(HMAC_CTX *, const void *, int, const EVP_MD *,
ENGINE *);
int Cryptography_HMAC_Update(HMAC_CTX *, const unsigned char *, size_t);
int Cryptography_HMAC_Final(HMAC_CTX *, unsigned char *, unsigned int *);
int Cryptography_HMAC_CTX_copy(HMAC_CTX *, HMAC_CTX *);
"""
MACROS = """
"""
CUSTOMIZATIONS = """
int Cryptography_HMAC_Init_ex(HMAC_CTX *ctx, const void *key, int key_len,
const EVP_MD *md, ENGINE *impl) {
return HMAC_Init_ex(ctx, key, key_len, md, impl);
HMAC_Init_ex(ctx, key, key_len, md, impl);
return 1;
}
int Cryptography_HMAC_Update(HMAC_CTX *ctx, const unsigned char *data,
size_t data_len) {
return HMAC_Update(ctx, data, data_len);
HMAC_Update(ctx, data, data_len);
return 1;
}
int Cryptography_HMAC_Final(HMAC_CTX *ctx, unsigned char *digest,
unsigned int *outlen) {
return HMAC_Final(ctx, digest, outlen);
HMAC_Final(ctx, digest, outlen);
return 1;
}
int Cryptography_HMAC_CTX_copy(HMAC_CTX *dst_ctx, HMAC_CTX *src_ctx) {
return HMAC_CTX_copy(dst_ctx, src_ctx);
HMAC_CTX_init(dst_ctx);
if (!EVP_MD_CTX_copy_ex(&dst_ctx->i_ctx, &src_ctx->i_ctx)) {
goto err;
}
if (!EVP_MD_CTX_copy_ex(&dst_ctx->o_ctx, &src_ctx->o_ctx)) {
goto err;
}
if (!EVP_MD_CTX_copy_ex(&dst_ctx->md_ctx, &src_ctx->md_ctx)) {
goto err;
}
memcpy(dst_ctx->key, src_ctx->key, HMAC_MAX_MD_CBLOCK);
dst_ctx->key_length = src_ctx->key_length;
dst_ctx->md = src_ctx->md;
return 1;
err:
return 0;
}
"""
CONDITIONAL_NAMES = {}
|
from thrift.Thrift import TType, TMessageType, TException, TApplicationException
import cogcomp.base.ttypes
import cogcomp.curator.ttypes
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol, TProtocol
try:
from thrift.protocol import fastbinary
except ImportError:
fastbinary = None
|
"""
Contains the Document class representing an object / record
"""
from __future__ import unicode_literals
import webnotes
import webnotes.model.meta
from webnotes.utils import *
valid_fields_map = {}
class Document:
"""
The wn(meta-data)framework equivalent of a Database Record.
Stores,Retrieves,Updates the record in the corresponding table.
Runs the triggers required.
    The `Document` class represents the basic Object-Relational Mapper (ORM). The object type is defined by
    `DocType` and the object ID is represented by `name`::
    Please note the anomaly in the Web Notes Framework that `ID` is always called `name`
    If both `doctype` and `name` are specified in the constructor, then the object is loaded from the database.
    If only `doctype` is given, then the object is not loaded.
    If `fielddata` is specified, then the object is created from the given dictionary.
**Note 1:**
The getter and setter of the object are overloaded to map to the fields of the object that
are loaded when it is instantiated.
For example: doc.name will be the `name` field and doc.owner will be the `owner` field
**Note 2 - Standard Fields:**
* `name`: ID / primary key
* `owner`: creator of the record
* `creation`: datetime of creation
* `modified`: datetime of last modification
* `modified_by` : last updating user
    * `docstatus` : Status 0 - Saved, 1 - Submitted, 2 - Cancelled
* `parent` : if child (table) record, this represents the parent record
* `parenttype` : type of parent record (if any)
* `parentfield` : table fieldname of parent record (if any)
* `idx` : Index (sequence) of the child record
"""
def __init__(self, doctype = None, name = None, fielddata = None, prefix='tab'):
self._roles = []
self._perms = []
self._user_defaults = {}
self._prefix = prefix
if isinstance(doctype, dict):
fielddata = doctype
doctype = None
if fielddata:
self.fields = webnotes._dict(fielddata)
else:
self.fields = webnotes._dict()
if not self.fields.has_key('name'):
self.fields['name']='' # required on save
if not self.fields.has_key('doctype'):
self.fields['doctype']='' # required on save
if not self.fields.has_key('owner'):
self.fields['owner']='' # required on save
if doctype:
self.fields['doctype'] = doctype
if name:
self.fields['name'] = name
self.__initialized = 1
if (doctype and name):
self._loadfromdb(doctype, name)
else:
if not fielddata:
self.fields['__islocal'] = 1
if not self.fields.docstatus:
self.fields.docstatus = 0
def __nonzero__(self):
return True
def __str__(self):
return str(self.fields)
def __repr__(self):
return repr(self.fields)
def __unicode__(self):
return unicode(self.fields)
def __eq__(self, other):
if isinstance(other, Document):
return self.fields == other.fields
else:
return False
def __getstate__(self):
return self.fields
def __setstate__(self, d):
self.fields = d
def encode(self, encoding='utf-8'):
"""convert all unicode values to utf-8"""
for key in self.fields:
if isinstance(self.fields[key], unicode):
self.fields[key] = self.fields[key].encode(encoding)
def _loadfromdb(self, doctype = None, name = None):
if name: self.name = name
if doctype: self.doctype = doctype
is_single = False
try:
is_single = webnotes.model.meta.is_single(self.doctype)
except Exception, e:
pass
if is_single:
self._loadsingle()
else:
            dataset = webnotes.conn.sql('select * from `%s%s` where name="%s"' % (self._prefix, self.doctype, self.name.replace('"', '\\"')))
if not dataset:
raise Exception, '[WNF] %s %s does not exist' % (self.doctype, self.name)
self._load_values(dataset[0], webnotes.conn.get_description())
def _load_values(self, data, description):
if '__islocal' in self.fields:
del self.fields['__islocal']
for i in range(len(description)):
v = data[i]
self.fields[description[i][0]] = webnotes.conn.convert_to_simple_type(v)
def _merge_values(self, data, description):
for i in range(len(description)):
v = data[i]
if v: # only if value, over-write
self.fields[description[i][0]] = webnotes.conn.convert_to_simple_type(v)
def _loadsingle(self):
self.name = self.doctype
self.fields.update(getsingle(self.doctype))
def __setattr__(self, name, value):
# normal attribute
if not self.__dict__.has_key('_Document__initialized'):
self.__dict__[name] = value
elif self.__dict__.has_key(name):
self.__dict__[name] = value
else:
# field attribute
f = self.__dict__['fields']
f[name] = value
def __getattr__(self, name):
if self.__dict__.has_key(name):
return self.__dict__[name]
elif self.fields.has_key(name):
return self.fields[name]
else:
return ''
def _get_amended_name(self):
am_id = 1
am_prefix = self.amended_from
if webnotes.conn.sql('select amended_from from `tab%s` where name = "%s"' % (self.doctype, self.amended_from))[0][0] or '':
am_id = cint(self.amended_from.split('-')[-1]) + 1
am_prefix = '-'.join(self.amended_from.split('-')[:-1]) # except the last hyphen
self.name = am_prefix + '-' + str(am_id)
def _set_name(self, autoname, istable):
self.localname = self.name
# get my object
import webnotes.model.code
so = webnotes.model.code.get_server_obj(self, [])
# amendments
if self.amended_from:
self._get_amended_name()
# by method
elif so and hasattr(so, 'autoname'):
r = webnotes.model.code.run_server_obj(so, 'autoname')
if r: return r
# based on a field
elif autoname and autoname.startswith('field:'):
n = self.fields[autoname[6:]]
if not n:
raise Exception, 'Name is required'
self.name = n.strip()
elif autoname and autoname.startswith("naming_series:"):
if not self.naming_series:
# pick default naming series
from webnotes.model.doctype import get_property
self.naming_series = get_property(self.doctype, "options", "naming_series")
if not self.naming_series:
webnotes.msgprint(webnotes._("Naming Series mandatory"), raise_exception=True)
self.naming_series = self.naming_series.split("\n")
self.naming_series = self.naming_series[0] or self.naming_series[1]
self.name = make_autoname(self.naming_series+'.#####')
# based on expression
elif autoname and autoname.startswith('eval:'):
doc = self # for setting
self.name = eval(autoname[5:])
# call the method!
elif autoname and autoname!='Prompt':
self.name = make_autoname(autoname, self.doctype)
# given
elif self.fields.get('__newname',''):
self.name = self.fields['__newname']
# default name for table
elif istable:
self.name = make_autoname('#########', self.doctype)
# unable to determine a name, use a serial number!
if not self.name:
self.name = make_autoname('#########', self.doctype)
def _insert(self, autoname, istable, case='', make_autoname=1, keep_timestamps=False):
# set name
if make_autoname:
self._set_name(autoname, istable)
# validate name
self.name = validate_name(self.doctype, self.name, case)
# insert!
if not keep_timestamps:
if not self.owner:
self.owner = webnotes.session['user']
self.modified_by = webnotes.session['user']
self.creation = self.modified = now()
webnotes.conn.sql("insert into `tab%(doctype)s`" % self.fields \
+ """ (name, owner, creation, modified, modified_by)
values (%(name)s, %(owner)s, %(creation)s, %(modified)s,
%(modified_by)s)""", self.fields)
def _update_single(self, link_list):
update_str = ["(%s, 'modified', %s)",]
values = [self.doctype, now()]
webnotes.conn.sql("delete from tabSingles where doctype='%s'" % self.doctype)
for f in self.fields.keys():
if not (f in ('modified', 'doctype', 'name', 'perm', 'localname', 'creation'))\
and (not f.startswith('__')): # fields not saved
# validate links
if link_list and link_list.get(f):
self.fields[f] = self._validate_link(link_list[f][0], self.fields[f])
if self.fields[f]==None:
update_str.append("(%s,%s,NULL)")
values.append(self.doctype)
values.append(f)
else:
update_str.append("(%s,%s,%s)")
values.append(self.doctype)
values.append(f)
values.append(self.fields[f])
webnotes.conn.sql("insert into tabSingles(doctype, field, value) values %s" % (', '.join(update_str)), values)
def validate_links(self, link_list):
err_list = []
for f in self.fields.keys():
# validate links
old_val = self.fields[f]
if link_list and link_list.get(f):
self.fields[f] = self._validate_link(link_list[f][0], self.fields[f])
if old_val and not self.fields[f]:
s = link_list[f][1] + ': ' + old_val
err_list.append(s)
return err_list
def make_link_list(self):
res = webnotes.model.meta.get_link_fields(self.doctype)
link_list = {}
for i in res: link_list[i[0]] = (i[1], i[2]) # options, label
return link_list
def _validate_link(self, dt, dn):
if not dt: return dn
if not dn: return None
if dt=="[Select]": return dn
if dt.lower().startswith('link:'):
dt = dt[5:]
if '\n' in dt:
dt = dt.split('\n')[0]
tmp = webnotes.conn.sql("""SELECT name FROM `tab%s`
WHERE name = %s""" % (dt, '%s'), dn)
        return tmp and tmp[0][0] or ''  # match case
def _update_values(self, issingle, link_list, ignore_fields=0, keep_timestamps=False):
if issingle:
self._update_single(link_list)
else:
update_str, values = [], []
# set modified timestamp
if self.modified and not keep_timestamps:
self.modified = now()
self.modified_by = webnotes.session['user']
fields_list = ignore_fields and self.get_valid_fields() or self.fields.keys()
for f in fields_list:
if (not (f in ('doctype', 'name', 'perm', 'localname',
'creation','_user_tags', "file_list"))) and (not f.startswith('__')):
# fields not saved
# validate links
if link_list and link_list.get(f):
self.fields[f] = self._validate_link(link_list[f][0],
self.fields.get(f))
if self.fields.get(f) is None or self.fields.get(f)=='':
update_str.append("`%s`=NULL" % f)
else:
values.append(self.fields.get(f))
update_str.append("`%s`=%s" % (f, '%s'))
if values:
values.append(self.name)
r = webnotes.conn.sql("update `tab%s` set %s where name=%s" % \
(self.doctype, ', '.join(update_str), "%s"), values)
def get_valid_fields(self):
global valid_fields_map
if not valid_fields_map.get(self.doctype):
import webnotes.model.doctype
if cint(webnotes.conn.get_value("DocType", self.doctype, "issingle")):
doctypelist = webnotes.model.doctype.get(self.doctype)
valid_fields_map[self.doctype] = doctypelist.get_fieldnames({
"fieldtype": ["not in", webnotes.model.no_value_fields]})
else:
valid_fields_map[self.doctype] = \
webnotes.conn.get_table_columns(self.doctype)
return valid_fields_map.get(self.doctype)
def save(self, new=0, check_links=1, ignore_fields=0, make_autoname=1,
keep_timestamps=False):
res = webnotes.model.meta.get_dt_values(self.doctype,
'autoname, issingle, istable, name_case', as_dict=1)
res = res and res[0] or {}
if new:
self.fields["__islocal"] = 1
# add missing parentinfo (if reqd)
if self.parent and not (self.parenttype and self.parentfield):
self.update_parentinfo()
if self.parent and not self.idx:
self.set_idx()
# if required, make new
if self.fields.get('__islocal') and (not res.get('issingle')):
r = self._insert(res.get('autoname'), res.get('istable'), res.get('name_case'),
make_autoname, keep_timestamps = keep_timestamps)
if r:
return r
else:
if not res.get('issingle') and not webnotes.conn.exists(self.doctype, self.name):
webnotes.msgprint("""This document was updated before your change. Please refresh before saving.""", raise_exception=1)
# save the values
self._update_values(res.get('issingle'),
check_links and self.make_link_list() or {}, ignore_fields=ignore_fields,
keep_timestamps=keep_timestamps)
self._clear_temp_fields()
def insert(self):
self.fields['__islocal'] = 1
self.save()
return self
def update_parentinfo(self):
"""update parent type and parent field, if not explicitly specified"""
tmp = webnotes.conn.sql("""select parent, fieldname from tabDocField
where fieldtype='Table' and options=%s""", self.doctype)
if len(tmp)==0:
raise Exception, 'Incomplete parent info in child table (%s, %s)' \
% (self.doctype, self.fields.get('name', '[new]'))
elif len(tmp)>1:
raise Exception, 'Ambiguous parent info (%s, %s)' \
% (self.doctype, self.fields.get('name', '[new]'))
else:
self.parenttype = tmp[0][0]
self.parentfield = tmp[0][1]
def set_idx(self):
"""set idx"""
self.idx = (webnotes.conn.sql("""select max(idx) from `tab%s`
where parent=%s and parentfield=%s""" % (self.doctype, '%s', '%s'),
(self.parent, self.parentfield))[0][0] or 0) + 1
def _clear_temp_fields(self):
# clear temp stuff
keys = self.fields.keys()
for f in keys:
if f.startswith('__'):
del self.fields[f]
def clear_table(self, doclist, tablefield, save=0):
"""
Clears the child records from the given `doclist` for a particular `tablefield`
"""
from webnotes.model.utils import getlist
table_list = getlist(doclist, tablefield)
delete_list = [d.name for d in table_list]
if delete_list:
#filter doclist
doclist = filter(lambda d: d.name not in delete_list, doclist)
# delete from db
webnotes.conn.sql("""\
delete from `tab%s`
where parent=%s and parenttype=%s"""
% (table_list[0].doctype, '%s', '%s'),
(self.name, self.doctype))
self.fields['__unsaved'] = 1
return webnotes.doclist(doclist)
def addchild(self, fieldname, childtype = '', doclist=None):
"""
        Returns a child record of the given `childtype`.
        * the record is created unsaved and marked `__islocal`
        * if `doclist` is passed, the record is appended to it
"""
from webnotes.model.doc import Document
d = Document()
d.parent = self.name
d.parenttype = self.doctype
d.parentfield = fieldname
d.doctype = childtype
        d.docstatus = 0
d.name = ''
d.owner = webnotes.session['user']
d.fields['__islocal'] = 1 # for Client to identify unsaved doc
if doclist != None:
doclist.append(d)
return d
def get_values(self):
"""get non-null fields dict withouth standard fields"""
from webnotes.model import default_fields
ret = {}
for key in self.fields:
if key not in default_fields and self.fields[key]:
ret[key] = self.fields[key]
return ret
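# Illustrative sketch (hypothetical doctype and name; requires a live
# webnotes connection, so it is only defined, never called here):
def _example_document_usage():
    doc = Document('Contact', 'ABC')  # load record 'ABC' from `tabContact`
    doc.first_name = 'Jane'           # attribute access maps to a field
    doc.save()
    return doc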
def addchild(parent, fieldname, childtype = '', doclist=None):
"""
Create a child record to the parent doc.
Example::
c = Document('Contact','ABC')
d = addchild(c, 'contact_updates', 'Contact Update')
d.last_updated = 'Phone call'
d.save(1)
"""
return parent.addchild(fieldname, childtype, doclist)
def make_autoname(key, doctype=''):
"""
Creates an autoname from the given key:
**Autoname rules:**
* The key is separated by '.'
* '####' represents a series. The string before this part becomes the prefix:
Example: ABC.#### creates a series ABC0001, ABC0002 etc
* 'MM' represents the current month
* 'YY' and 'YYYY' represent the current year
*Example:*
    * DE/.YY./.MM./.#### will create a series like
    DE/09/01/0001 where 09 is the year, 01 is the month and 0001 is the series
"""
n = ''
l = key.split('.')
series_set = False
today = now_datetime()
for e in l:
en = ''
if e.startswith('#'):
if not series_set:
digits = len(e)
en = getseries(n, digits, doctype)
series_set = True
elif e=='YY':
en = today.strftime('%y')
elif e=='MM':
en = today.strftime('%m')
elif e=='DD':
en = today.strftime("%d")
elif e=='YYYY':
en = today.strftime('%Y')
else: en = e
n+=en
return n
def getseries(key, digits, doctype=''):
# series created ?
if webnotes.conn.sql("select name from tabSeries where name='%s'" % key):
# yes, update it
webnotes.conn.sql("update tabSeries set current = current+1 where name='%s'" % key)
# find the series counter
r = webnotes.conn.sql("select current from tabSeries where name='%s'" % key)
n = r[0][0]
else:
# no, create it
webnotes.conn.sql("insert into tabSeries (name, current) values ('%s', 1)" % key)
n = 1
return ('%0'+str(digits)+'d') % n
def getchildren(name, childtype, field='', parenttype='', from_doctype=0, prefix='tab'):
import webnotes
from webnotes.model.doclist import DocList
condition = ""
values = []
if field:
condition += ' and parentfield=%s '
values.append(field)
if parenttype:
condition += ' and parenttype=%s '
values.append(parenttype)
dataset = webnotes.conn.sql("""select * from `%s%s` where parent=%s %s order by idx""" \
% (prefix, childtype, "%s", condition), tuple([name]+values))
desc = webnotes.conn.get_description()
l = DocList()
for i in dataset:
d = Document()
d.doctype = childtype
d._load_values(i, desc)
l.append(d)
return l
def check_page_perm(doc):
if doc.name=='Login Page':
return
if doc.publish:
return
if not webnotes.conn.sql("select name from `tabPage Role` where parent=%s and role='Guest'", doc.name):
webnotes.response['403'] = 1
raise webnotes.PermissionError, '[WNF] No read permission for %s %s' % ('Page', doc.name)
def get_report_builder_code(doc):
if doc.doctype=='Search Criteria':
from webnotes.model.code import get_code
if doc.standard != 'No':
doc.report_script = get_code(doc.module, 'Search Criteria', doc.name, 'js')
doc.custom_query = get_code(doc.module, 'Search Criteria', doc.name, 'sql')
def get(dt, dn='', with_children = 1, from_controller = 0, prefix = 'tab'):
"""
Returns a doclist containing the main record and all child records
"""
import webnotes
import webnotes.model
from webnotes.model.doclist import DocList
dn = dn or dt
# load the main doc
doc = Document(dt, dn, prefix=prefix)
if dt=='Page' and webnotes.session['user'] == 'Guest':
check_page_perm(doc)
if not with_children:
# done
return DocList([doc,])
# get all children types
tablefields = webnotes.model.meta.get_table_fields(dt)
    # load children
doclist = DocList([doc,])
for t in tablefields:
doclist += getchildren(doc.name, t[0], t[1], dt, prefix=prefix)
# import report_builder code
if not from_controller:
get_report_builder_code(doc)
return doclist
def getsingle(doctype):
"""get single doc as dict"""
dataset = webnotes.conn.sql("select field, value from tabSingles where doctype=%s", doctype)
return dict(dataset)
def copy_common_fields(from_doc, to_doc):
from webnotes.model import default_fields
doctype_list = webnotes.get_doctype(to_doc.doctype)
for fieldname, value in from_doc.fields.items():
if fieldname in default_fields:
continue
if doctype_list.get_field(fieldname) and to_doc.fields[fieldname] != value:
to_doc.fields[fieldname] = value
def validate_name(doctype, name, case=None, merge=False):
if not merge:
if webnotes.conn.sql('select name from `tab%s` where name=%s' % (doctype,'%s'), name):
raise NameError, 'Name %s already exists' % name
# no name
if not name: return 'No Name Specified for %s' % doctype
# new..
if name.startswith('New '+doctype):
raise NameError, 'There were some errors setting the name, please contact the administrator'
if case=='Title Case': name = name.title()
if case=='UPPER CASE': name = name.upper()
name = name.strip() # no leading and trailing blanks
forbidden = ['%', "'", '"', '#', '*', '?', '`']
for f in forbidden:
if f in name:
webnotes.msgprint('%s not allowed in ID (name)' % f, raise_exception =1)
return name
|
from navmazing import NavigateToSibling
from widgetastic.utils import Parameter
from widgetastic.widget import View
from widgetastic_patternfly import Accordion, Button, Dropdown
from cfme.base import Server
from cfme.base.login import BaseLoggedInPage
from cfme.utils.appliance.implementations.ui import navigator, CFMENavigateStep
from widgetastic_manageiq import ManageIQTree, MultiBoxSelect
class CloudIntelReportsView(BaseLoggedInPage):
@property
def in_intel_reports(self):
return (
self.logged_in_as_current_user and
self.navigation.currently_selected == ["Cloud Intel", "Reports"]
)
@property
def is_displayed(self):
return self.in_intel_reports and self.configuration.is_displayed
@property
def mycompany_title(self):
if self.browser.product_version < "5.9":
title = "My Company (All EVM Groups)"
else:
title = "My Company (All Groups)"
return title
@View.nested
class saved_reports(Accordion): # noqa
ACCORDION_NAME = "Saved Reports"
tree = ManageIQTree()
@View.nested
class reports(Accordion): # noqa
tree = ManageIQTree()
@View.nested
class schedules(Accordion): # noqa
tree = ManageIQTree()
@View.nested
class dashboards(Accordion): # noqa
tree = ManageIQTree()
@View.nested
class dashboard_widgets(Accordion): # noqa
ACCORDION_NAME = "Dashboard Widgets"
tree = ManageIQTree()
@View.nested
class edit_report_menus(Accordion): # noqa
ACCORDION_NAME = "Edit Report Menus"
tree = ManageIQTree()
@View.nested
class import_export(Accordion): # noqa
ACCORDION_NAME = "Import/Export"
tree = ManageIQTree()
configuration = Dropdown("Configuration")
@navigator.register(Server)
class CloudIntelReports(CFMENavigateStep):
VIEW = CloudIntelReportsView
prerequisite = NavigateToSibling("LoggedIn")
def step(self):
self.view.navigation.select("Cloud Intel", "Reports")
class ReportsMultiBoxSelect(MultiBoxSelect):
move_into_button = Button(title=Parameter("@move_into"))
move_from_button = Button(title=Parameter("@move_from"))
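# Illustrative sketch (hypothetical helper, not part of the original
# module): once the navigation step above is registered, a test obtains
# the view via navigate_to.
def _example_open_reports(appliance):
    from cfme.utils.appliance.implementations.ui import navigate_to
    return navigate_to(appliance.server, "CloudIntelReports")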
|
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webapps.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
|
try:
    from beaker.crypto.pbkdf2 import PBKDF2
except ImportError:
    # Newer beaker releases expose a pbkdf2() function instead of the
    # PBKDF2 class, so provide a minimal compatible class with the
    # hexread() API used below.
    from beaker.crypto.pbkdf2 import pbkdf2
    from binascii import b2a_hex
    class PBKDF2(object):
        def __init__(self, passphrase, salt, iterations=1000):
            self.passphrase = passphrase
            self.salt = salt
            self.iterations = iterations
        def hexread(self, octets):
            return b2a_hex(pbkdf2(self.passphrase, self.salt, self.iterations, octets))
from module.common.json_layer import json_loads
from module.plugins.internal.Account import Account
class OboomCom(Account):
__name__ = "OboomCom"
__type__ = "account"
__version__ = "0.28"
__status__ = "testing"
__description__ = """Oboom.com account plugin"""
__license__ = "GPLv3"
__authors__ = [("stanley", "stanley.foerster@gmail.com")]
def load_account_data(self, user, req):
passwd = self.get_info(user)['login']['password']
salt = passwd[::-1]
pbkdf2 = PBKDF2(passwd, salt, 1000).hexread(16)
result = json_loads(self.load("http://www.oboom.com/1/login", #@TODO: Revert to `https` in 0.4.10
get={'auth': user,
'pass': pbkdf2}))
if result[0] != 200:
self.log_warning(_("Failed to log in: %s") % result[1])
self.fail_login()
return result[1]
def grab_info(self, name, req):
account_data = self.load_account_data(name, req)
userData = account_data['user']
premium = userData['premium'] != "null"
if userData['premium_unix'] == "null":
validUntil = -1
else:
validUntil = float(userData['premium_unix'])
traffic = userData['traffic']
trafficLeft = traffic['current'] / 1024 #@TODO: Remove `/ 1024` in 0.4.10
maxTraffic = traffic['max'] / 1024 #@TODO: Remove `/ 1024` in 0.4.10
session = account_data['session']
return {'premium' : premium,
'validuntil' : validUntil,
'trafficleft': trafficLeft,
'maxtraffic' : maxTraffic,
'session' : session}
def login(self, user, password, data, req):
self.load_account_data(user, req)
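# Illustrative stdlib equivalent of the derivation above (assumption:
# beaker's pbkdf2 uses HMAC-SHA1, its historical default):
import hashlib
import binascii
def _derive_login_key_example(passwd):
    # Reversed password as salt, 1000 rounds, 16-byte key, hex-encoded.
    return binascii.hexlify(
        hashlib.pbkdf2_hmac('sha1', passwd, passwd[::-1], 1000, 16))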
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: sequence
author: Jayson Vantuyl (!UNKNOWN) <jayson@aggressive.ly>
version_added: "1.0"
short_description: generate a list based on a number sequence
description:
- generates a sequence of items. You can specify a start value, an end value, an optional "stride" value that specifies the number of steps
to increment the sequence, and an optional printf-style format string.
  - 'Arguments can be specified as key=value pair strings, or in a shortcut form: [start-]end[/stride][:format].'
- 'Numerical values can be specified in decimal, hexadecimal (0x3f8) or octal (0600).'
- Starting at version 1.9.2, negative strides are allowed.
- Generated items are strings. Use Jinja2 filters to convert items to preferred type, e.g. ``{{ 1 + item|int }}``.
- See also Jinja2 ``range`` filter as an alternative.
options:
start:
description: number at which to start the sequence
default: 0
type: integer
end:
    description: number at which to end the sequence, do not use this with count
type: integer
default: 0
count:
description: number of elements in the sequence, this is not to be used with end
type: integer
default: 0
stride:
description: increments between sequence numbers, the default is 1 unless the end is less than the start, then it is -1.
type: integer
format:
    description: return a string with the generated number formatted in the given printf-style format
"""
EXAMPLES = """
- name: create some test users
user:
name: "{{ item }}"
state: present
groups: "evens"
with_sequence: start=0 end=32 format=testuser%02x
- name: create a series of directories with even numbers for some reason
file:
dest: "/var/stuff/{{ item }}"
state: directory
with_sequence: start=4 end=16 stride=2
- name: a simpler way to use the sequence plugin create 4 groups
group:
name: "group{{ item }}"
state: present
with_sequence: count=4
- name: the final countdown
debug:
msg: "{{item}} seconds to detonation"
with_sequence: start=10 end=0 stride=-1
- name: Use of variable
debug:
msg: "{{ item }}"
with_sequence: start=1 end="{{ end_at }}"
vars:
- end_at: 10
"""
RETURN = """
_list:
description:
- A list containing generated sequence of items
type: list
elements: str
"""
from re import compile as re_compile, IGNORECASE
from ansible.errors import AnsibleError
from ansible.module_utils.six.moves import xrange
from ansible.parsing.splitter import parse_kv
from ansible.plugins.lookup import LookupBase
NUM = "(0?x?[0-9a-f]+)"
SHORTCUT = re_compile(
"^(" + # Group 0
NUM + # Group 1: Start
"-)?" +
NUM + # Group 2: End
"(/" + # Group 3
NUM + # Group 4: Stride
")?" +
"(:(.+))?$", # Group 5, Group 6: Format String
IGNORECASE
)
class LookupModule(LookupBase):
"""
sequence lookup module
Used to generate some sequence of items. Takes arguments in two forms.
The simple / shortcut form is:
[start-]end[/stride][:format]
As indicated by the brackets: start, stride, and format string are all
optional. The format string is in the style of printf. This can be used
to pad with zeros, format in hexadecimal, etc. All of the numerical values
can be specified in octal (i.e. 0664) or hexadecimal (i.e. 0x3f8).
Negative numbers are not supported.
Some examples:
5 -> ["1","2","3","4","5"]
5-8 -> ["5", "6", "7", "8"]
2-10/2 -> ["2", "4", "6", "8", "10"]
4:host%02d -> ["host01","host02","host03","host04"]
The standard Ansible key-value form is accepted as well. For example:
    start=5 end=11 stride=2 format=0x%02x -> ["0x05","0x07","0x09","0x0b"]
This format takes an alternate form of "end" called "count", which counts
some number from the starting value. For example:
count=5 -> ["1", "2", "3", "4", "5"]
start=0x0f00 count=4 format=%04x -> ["0f00", "0f01", "0f02", "0f03"]
start=0 count=5 stride=2 -> ["0", "2", "4", "6", "8"]
start=1 count=5 stride=2 -> ["1", "3", "5", "7", "9"]
The count option is mostly useful for avoiding off-by-one errors and errors
calculating the number of entries in a sequence when a stride is specified.
"""
def reset(self):
"""set sensible defaults"""
self.start = 1
self.count = None
self.end = None
self.stride = 1
self.format = "%d"
def parse_kv_args(self, args):
"""parse key-value style arguments"""
for arg in ["start", "end", "count", "stride"]:
try:
arg_raw = args.pop(arg, None)
if arg_raw is None:
continue
arg_cooked = int(arg_raw, 0)
setattr(self, arg, arg_cooked)
except ValueError:
raise AnsibleError(
"can't parse %s=%s as integer"
% (arg, arg_raw)
)
if 'format' in args:
self.format = args.pop("format")
if args:
raise AnsibleError(
"unrecognized arguments to with_sequence: %s"
% list(args.keys())
)
def parse_simple_args(self, term):
"""parse the shortcut forms, return True/False"""
match = SHORTCUT.match(term)
if not match:
return False
_, start, end, _, stride, _, format = match.groups()
if start is not None:
try:
start = int(start, 0)
except ValueError:
raise AnsibleError("can't parse start=%s as integer" % start)
if end is not None:
try:
end = int(end, 0)
except ValueError:
raise AnsibleError("can't parse end=%s as integer" % end)
if stride is not None:
try:
stride = int(stride, 0)
except ValueError:
raise AnsibleError("can't parse stride=%s as integer" % stride)
if start is not None:
self.start = start
if end is not None:
self.end = end
if stride is not None:
self.stride = stride
if format is not None:
self.format = format
return True
def sanity_check(self):
if self.count is None and self.end is None:
raise AnsibleError("must specify count or end in with_sequence")
elif self.count is not None and self.end is not None:
raise AnsibleError("can't specify both count and end in with_sequence")
elif self.count is not None:
# convert count to end
if self.count != 0:
self.end = self.start + self.count * self.stride - 1
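# e.g. start=1 count=5 stride=2 -> end = 1 + 5 * 2 - 1 = 10, which yields
# [1, 3, 5, 7, 9] (matching the docstring examples above)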
else:
self.start = 0
self.end = 0
self.stride = 0
del self.count
if self.stride > 0 and self.end < self.start:
raise AnsibleError("to count backwards make stride negative")
if self.stride < 0 and self.end > self.start:
raise AnsibleError("to count forward don't make stride negative")
if self.format.count('%') != 1:
raise AnsibleError("bad formatting string: %s" % self.format)
def generate_sequence(self):
if self.stride >= 0:
adjust = 1
else:
adjust = -1
numbers = xrange(self.start, self.end + adjust, self.stride)
for i in numbers:
try:
formatted = self.format % i
yield formatted
except (ValueError, TypeError):
raise AnsibleError(
"problem formatting %r with %r" % (i, self.format)
)
def run(self, terms, variables, **kwargs):
results = []
for term in terms:
try:
self.reset() # clear out things for this iteration
try:
if not self.parse_simple_args(term):
self.parse_kv_args(parse_kv(term))
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("unknown error parsing with_sequence arguments: %r. Error was: %s" % (term, e))
self.sanity_check()
if self.stride != 0:
results.extend(self.generate_sequence())
except AnsibleError:
raise
except Exception as e:
raise AnsibleError(
"unknown error generating sequence: %s" % e
)
return results
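# A minimal, hypothetical sketch (not part of the plugin) showing how the
# shortcut form maps onto Python's range() and printf-style formatting:
if __name__ == '__main__':
    m = SHORTCUT.match("2-10/2:%d")
    _, start, end, _, stride, _, fmt = m.groups()
    start, end, stride = int(start, 0), int(end, 0), int(stride, 0)
    # the plugin's generate_sequence() treats `end` as inclusive
    print([fmt % i for i in range(start, end + 1, stride)])  # ['2', '4', '6', '8', '10']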
|
from yt.mods import load
import sys
from matplotlib.pylab import imshow, savefig
for fn in sys.argv[1:]:
fields = ['dend']
pf = load(fn)
c = 0.5 * (pf.domain_left_edge + pf.domain_right_edge)
S = pf.domain_right_edge - pf.domain_left_edge
n_d = pf.domain_dimensions
slc = pf.h.slice(2, c[2], fields=fields)
frb = slc.to_frb(S[0], (n_d[1], n_d[0]), height=S[1], center=c)
imshow(frb['dend'])
savefig('%s.png' % pf)
|
DOCUMENTATION = '''
---
module: uri
short_description: Interacts with webservices
description:
- Interacts with HTTP and HTTPS web services and supports Digest, Basic and WSSE
HTTP authentication mechanisms.
version_added: "1.1"
options:
url:
description:
- HTTP or HTTPS URL in the form (http|https)://host.domain[:port]/path
required: true
default: null
dest:
description:
- Path to download the file to (if desired). If I(dest) is a
directory, the basename of the file on the remote server will be used.
required: false
default: null
user:
description:
- username for the module to use for Digest, Basic or WSSE authentication.
required: false
default: null
password:
description:
- password for the module to use for Digest, Basic or WSSE authentication.
required: false
default: null
body:
description:
- The body of the http request/response to the web service. If C(body_format) is set
to 'json' it will take an already formatted JSON string or convert a data structure
into JSON.
required: false
default: null
body_format:
description:
- The serialization format of the body. When set to json, encodes the
body argument, if needed, and automatically sets the Content-Type header accordingly.
required: false
choices: [ "raw", "json" ]
default: raw
version_added: "2.0"
method:
description:
- The HTTP method of the request or response. It MUST be uppercase.
required: false
choices: [ "GET", "POST", "PUT", "HEAD", "DELETE", "OPTIONS", "PATCH", "TRACE", "CONNECT", "REFRESH" ]
default: "GET"
return_content:
description:
- Whether or not to return the body of the request as a "content" key in
the dictionary result. If the reported Content-type is
"application/json", then the JSON is additionally loaded into a key
called C(json) in the dictionary results.
required: false
choices: [ "yes", "no" ]
default: "no"
force_basic_auth:
description:
- The library used by the uri module only sends authentication information when a webservice
responds to an initial request with a 401 status. Since some basic auth services do not properly
send a 401, logins will fail. This option forces the sending of the Basic authentication header
upon initial request.
required: false
choices: [ "yes", "no" ]
default: "no"
follow_redirects:
description:
- Whether or not the URI module should follow redirects. C(all) will follow all redirects.
C(safe) will follow only "safe" redirects, where "safe" means that the client is only
doing a GET or HEAD on the URI to which it is being redirected. C(none) will not follow
any redirects. Note that C(yes) and C(no) choices are accepted for backwards compatibility,
where C(yes) is the equivalent of C(all) and C(no) is the equivalent of C(safe). C(yes) and C(no)
are deprecated and will be removed in some future version of Ansible.
required: false
choices: [ "all", "safe", "none" ]
default: "safe"
creates:
description:
- A filename; when it already exists, this step will not be run.
required: false
removes:
description:
- A filename; when it does not exist, this step will not be run.
required: false
status_code:
description:
- A valid, numeric, HTTP status code that signifies success of the
request. Can also be a comma-separated list of status codes.
required: false
default: 200
timeout:
description:
- The socket level timeout in seconds
required: false
default: 30
HEADER_:
description:
- Any parameter starting with "HEADER_" is sent with your request as a header.
For example, HEADER_Content-Type="application/json" would send the header
"Content-Type" along with your request with a value of "application/json".
This option is deprecated as of C(2.1) and may be removed in a future
release. Use I(headers) instead.
required: false
default: null
headers:
description:
- Add custom HTTP headers to a request in the format of a YAML hash
required: false
default: null
version_added: '2.1'
others:
description:
- all arguments accepted by the M(file) module also work here
required: false
validate_certs:
description:
- If C(no), SSL certificates will not be validated. This should only be
set to C(no) on personally controlled sites using self-signed
certificates. Prior to 1.9.2 the code defaulted to C(no).
required: false
default: 'yes'
choices: ['yes', 'no']
version_added: '1.9.2'
notes:
- The dependency on httplib2 was removed in Ansible 2.1
author: "Romeo Theriault (@romeotheriault)"
'''
EXAMPLES = '''
- name: Check that you can connect (GET) to a page and it returns a status 200
uri:
url: 'http://www.example.com'
- uri:
url: http://www.example.com
return_content: yes
register: webpage
- name: Fail if AWESOME is not in the page content
fail:
when: "'AWESOME' not in webpage.content"
- name: Create a JIRA issue
uri:
url: https://your.jira.example.com/rest/api/2/issue/
method: POST
user: your_username
password: your_pass
body: "{{ lookup('file','issue.json') }}"
force_basic_auth: yes
status_code: 201
body_format: json
- uri:
url: https://your.form.based.auth.example.com/index.php
method: POST
body: "name=your_username&password=your_password&enter=Sign%20in"
status_code: 302
HEADER_Content-Type: "application/x-www-form-urlencoded"
register: login
- uri:
url: https://your.form.based.auth.example.com/dashboard.php
method: GET
return_content: yes
HEADER_Cookie: "{{login.set_cookie}}"
- name: Queue build of a project in Jenkins
uri:
url: "http://{{ jenkins.host }}/job/{{ jenkins.job }}/build?token={{ jenkins.token }}"
method: GET
user: "{{ jenkins.user }}"
password: "{{ jenkins.password }}"
force_basic_auth: yes
status_code: 201
'''
import cgi
import datetime
import os
import shutil
import tempfile
try:
import json
except ImportError:
import simplejson as json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pycompat24 import get_exception
import ansible.module_utils.six as six
from ansible.module_utils._text import to_text
from ansible.module_utils.urls import fetch_url, url_argument_spec
def write_file(module, url, dest, content):
# create a tempfile with some test content
fd, tmpsrc = tempfile.mkstemp()
f = open(tmpsrc, 'wb')
try:
f.write(content)
except Exception:
err = get_exception()
os.remove(tmpsrc)
module.fail_json(msg="failed to create temporary content file: %s" % str(err))
f.close()
checksum_src = None
checksum_dest = None
# raise an error if there is no tmpsrc file
if not os.path.exists(tmpsrc):
module.fail_json(msg="Source %s does not exist" % (tmpsrc))
if not os.access(tmpsrc, os.R_OK):
os.remove(tmpsrc)
module.fail_json( msg="Source %s not readable" % (tmpsrc))
checksum_src = module.sha1(tmpsrc)
# check if there is no dest file
if os.path.exists(dest):
# raise an error if copy has no permission on dest
if not os.access(dest, os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s not writable" % (dest))
if not os.access(dest, os.R_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination %s not readable" % (dest))
checksum_dest = module.sha1(dest)
else:
if not os.access(os.path.dirname(dest), os.W_OK):
os.remove(tmpsrc)
module.fail_json(msg="Destination dir %s not writable" % (os.path.dirname(dest)))
if checksum_src != checksum_dest:
try:
shutil.copyfile(tmpsrc, dest)
except Exception:
err = get_exception()
os.remove(tmpsrc)
module.fail_json(msg="failed to copy %s to %s: %s" % (tmpsrc, dest, str(err)))
os.remove(tmpsrc)
def url_filename(url):
fn = os.path.basename(six.moves.urllib.parse.urlsplit(url)[2])
if fn == '':
return 'index.html'
return fn
def absolute_location(url, location):
"""Attempts to create an absolute URL based on initial URL, and
next URL, specifically in the case of a ``Location`` header.
"""
if '://' in location:
return location
elif location.startswith('/'):
parts = six.moves.urllib.parse.urlsplit(url)
base = url.replace(parts[2], '')
return '%s%s' % (base, location)
elif not location.startswith('/'):
base = os.path.dirname(url)
return '%s/%s' % (base, location)
else:
return location
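# A quick sketch of absolute_location() behavior (URLs illustrative only):
#   absolute_location('http://host/a/b', 'http://other/x') -> 'http://other/x'
#   absolute_location('http://host/a/b', '/x')             -> 'http://host/x'
#   absolute_location('http://host/a/b', 'x')              -> 'http://host/a/x'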
def uri(module, url, dest, body, body_format, method, headers, socket_timeout):
# if dest is set and is a directory, let's check if we get redirected and
# set the filename from that url
redirected = False
redir_info = {}
r = {}
if dest is not None:
# Stash follow_redirects, in this block we don't want to follow
# we'll reset back to the supplied value soon
follow_redirects = module.params['follow_redirects']
module.params['follow_redirects'] = False
dest = os.path.expanduser(dest)
if os.path.isdir(dest):
# first check if we are redirected to a file download
_, redir_info = fetch_url(module, url, data=body,
headers=headers,
method=method,
timeout=socket_timeout)
# if we are redirected, update the url with the location header,
# and update dest with the new url filename
if redir_info['status'] in (301, 302, 303, 307):
url = redir_info['location']
redirected = True
dest = os.path.join(dest, url_filename(url))
# if the destination file already exists, only download if the file is newer
if os.path.exists(dest):
t = datetime.datetime.utcfromtimestamp(os.path.getmtime(dest))
tstamp = t.strftime('%a, %d %b %Y %H:%M:%S +0000')
headers['If-Modified-Since'] = tstamp
# Reset follow_redirects back to the stashed value
module.params['follow_redirects'] = follow_redirects
resp, info = fetch_url(module, url, data=body, headers=headers,
method=method, timeout=socket_timeout)
try:
content = resp.read()
except AttributeError:
# there was no content; the body of the error response
# may have been stored in the info dict as 'body'
content = info.pop('body', '')
r['redirected'] = redirected or info['url'] != url
r.update(redir_info)
r.update(info)
return r, content, dest
def main():
argument_spec = url_argument_spec()
argument_spec.update(dict(
dest = dict(required=False, default=None, type='path'),
url_username = dict(required=False, default=None, aliases=['user']),
url_password = dict(required=False, default=None, aliases=['password']),
body = dict(required=False, default=None, type='raw'),
body_format = dict(required=False, default='raw', choices=['raw', 'json']),
method = dict(required=False, default='GET', choices=['GET', 'POST', 'PUT', 'HEAD', 'DELETE', 'OPTIONS', 'PATCH', 'TRACE', 'CONNECT', 'REFRESH']),
return_content = dict(required=False, default='no', type='bool'),
follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']),
creates = dict(required=False, default=None, type='path'),
removes = dict(required=False, default=None, type='path'),
status_code = dict(required=False, default=[200], type='list'),
timeout = dict(required=False, default=30, type='int'),
headers = dict(required=False, type='dict', default={})
))
module = AnsibleModule(
argument_spec=argument_spec,
check_invalid_arguments=False,
add_file_common_args=True
)
url = module.params['url']
body = module.params['body']
body_format = module.params['body_format'].lower()
method = module.params['method']
dest = module.params['dest']
return_content = module.params['return_content']
creates = module.params['creates']
removes = module.params['removes']
status_code = [int(x) for x in list(module.params['status_code'])]
socket_timeout = module.params['timeout']
dict_headers = module.params['headers']
if body_format == 'json':
# Encode the body unless it's a string, in which case assume it is pre-formatted JSON
if not isinstance(body, six.string_types):
body = json.dumps(body)
dict_headers['Content-Type'] = 'application/json'
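# e.g. body={'a': 1} is encoded to the string '{"a": 1}', while a string
# body such as '{"a": 1}' is assumed to be pre-formatted JSON and passed
# through unchanged; both cases get the application/json Content-Type.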
# Grab all the http headers. Need this hack since passing multi-values is
# currently a bit ugly. (e.g. headers='{"Content-Type":"application/json"}')
for key, value in six.iteritems(module.params):
if key.startswith("HEADER_"):
skey = key.replace("HEADER_", "")
dict_headers[skey] = value
if creates is not None:
# do not run the request if `creates` points to a filename
# that already exists. This allows idempotence
# of uri executions.
if os.path.exists(creates):
module.exit_json(stdout="skipped, since %s exists" % creates,
changed=False, stderr=False, rc=0)
if removes is not None:
# do not run the request if `removes` points to a filename
# that does not exist. This allows idempotence
# of uri executions.
if not os.path.exists(removes):
module.exit_json(stdout="skipped, since %s does not exist" % removes, changed=False, stderr=False, rc=0)
# Make the request
resp, content, dest = uri(module, url, dest, body, body_format, method,
dict_headers, socket_timeout)
resp['status'] = int(resp['status'])
# Write the file out if requested
if dest is not None:
if resp['status'] == 304:
changed = False
else:
write_file(module, url, dest, content)
# allow file attribute changes
changed = True
module.params['path'] = dest
file_args = module.load_file_common_arguments(module.params)
file_args['path'] = dest
changed = module.set_fs_attributes_if_different(file_args, changed)
resp['path'] = dest
else:
changed = False
# Transmogrify the headers, replacing '-' with '_', since variables don't
# work with dashes.
# In python3, the headers are title cased. Lowercase them to be
# compatible with the python2 behaviour.
uresp = {}
for key, value in six.iteritems(resp):
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
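# e.g. a 'Content-Type' response header becomes uresp['content_type'],
# which can then be referenced as a variable in playbooks.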
try:
uresp['location'] = absolute_location(url, uresp['location'])
except KeyError:
pass
# Default content_encoding to try
content_encoding = 'utf-8'
if 'content_type' in uresp:
content_type, params = cgi.parse_header(uresp['content_type'])
if 'charset' in params:
content_encoding = params['charset']
u_content = to_text(content, encoding=content_encoding)
if 'application/json' in content_type or 'text/json' in content_type:
try:
js = json.loads(u_content)
uresp['json'] = js
except:
pass
else:
u_content = to_text(content, encoding=content_encoding)
if resp['status'] not in status_code:
uresp['msg'] = 'Status code was not %s: %s' % (status_code, uresp.get('msg', ''))
module.fail_json(content=u_content, **uresp)
elif return_content:
module.exit_json(changed=changed, content=u_content, **uresp)
else:
module.exit_json(changed=changed, **uresp)
if __name__ == '__main__':
main()
|
"""
Base test classes for LMS instructor-initiated background tasks
"""
import os
import json
from mock import Mock
import shutil
import unicodecsv
from uuid import uuid4
from celery.states import SUCCESS, FAILURE
from django.core.urlresolvers import reverse
from django.test.testcases import TestCase
from django.contrib.auth.models import User
from lms.djangoapps.lms_xblock.runtime import quote_slashes
from opaque_keys.edx.locations import Location, SlashSeparatedCourseKey
from capa.tests.response_xml_factory import OptionResponseXMLFactory
from courseware.model_data import StudentModule
from courseware.tests.tests import LoginEnrollmentTestCase
from student.tests.factories import CourseEnrollmentFactory, UserFactory
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.django import modulestore
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from instructor_task.api_helper import encode_problem_and_student_input
from instructor_task.models import PROGRESS, QUEUING, ReportStore
from instructor_task.tests.factories import InstructorTaskFactory
from instructor_task.views import instructor_task_status
TEST_COURSE_ORG = 'edx'
TEST_COURSE_NAME = 'test_course'
TEST_COURSE_NUMBER = '1.23x'
TEST_COURSE_KEY = SlashSeparatedCourseKey(TEST_COURSE_ORG, TEST_COURSE_NUMBER, TEST_COURSE_NAME)
TEST_SECTION_NAME = "Problem"
TEST_FAILURE_MESSAGE = 'task failed horribly'
TEST_FAILURE_EXCEPTION = 'RandomCauseError'
OPTION_1 = 'Option 1'
OPTION_2 = 'Option 2'
class InstructorTaskTestCase(TestCase):
"""
Tests API and view methods that involve the reporting of status for background tasks.
"""
def setUp(self):
super(InstructorTaskTestCase, self).setUp()
self.student = UserFactory.create(username="student", email="student@edx.org")
self.instructor = UserFactory.create(username="instructor", email="instructor@edx.org")
self.problem_url = InstructorTaskTestCase.problem_location("test_urlname")
@staticmethod
def problem_location(problem_url_name):
"""
Create an internal location for a test problem.
"""
return TEST_COURSE_KEY.make_usage_key('problem', problem_url_name)
def _create_entry(self, task_state=QUEUING, task_output=None, student=None):
"""Creates a InstructorTask entry for testing."""
task_id = str(uuid4())
progress_json = json.dumps(task_output) if task_output is not None else None
task_input, task_key = encode_problem_and_student_input(self.problem_url, student)
instructor_task = InstructorTaskFactory.create(course_id=TEST_COURSE_KEY,
requester=self.instructor,
task_input=json.dumps(task_input),
task_key=task_key,
task_id=task_id,
task_state=task_state,
task_output=progress_json)
return instructor_task
def _create_failure_entry(self):
"""Creates a InstructorTask entry representing a failed task."""
# view task entry for task failure
progress = {'message': TEST_FAILURE_MESSAGE,
'exception': TEST_FAILURE_EXCEPTION,
}
return self._create_entry(task_state=FAILURE, task_output=progress)
def _create_success_entry(self, student=None):
"""Creates a InstructorTask entry representing a successful task."""
return self._create_progress_entry(student, task_state=SUCCESS)
def _create_progress_entry(self, student=None, task_state=PROGRESS):
"""Creates a InstructorTask entry representing a task in progress."""
progress = {'attempted': 3,
'succeeded': 2,
'total': 5,
'action_name': 'rescored',
}
return self._create_entry(task_state=task_state, task_output=progress, student=student)
class InstructorTaskCourseTestCase(LoginEnrollmentTestCase, ModuleStoreTestCase):
"""
Base test class for InstructorTask-related tests that require
the setup of a course.
"""
course = None
current_user = None
def initialize_course(self, course_factory_kwargs=None):
"""
Create a course in the store, with a chapter and section.
Arguments:
course_factory_kwargs (dict): kwargs dict to pass to
CourseFactory.create()
"""
self.module_store = modulestore()
# Create the course
course_args = {
"org": TEST_COURSE_ORG,
"number": TEST_COURSE_NUMBER,
"display_name": TEST_COURSE_NAME
}
if course_factory_kwargs is not None:
course_args.update(course_factory_kwargs)
self.course = CourseFactory.create(**course_args)
self.add_course_content()
def add_course_content(self):
"""
Add a chapter and a sequential to the current course.
"""
# Add a chapter to the course
chapter = ItemFactory.create(parent_location=self.course.location,
display_name=TEST_SECTION_NAME)
# add a sequence to the course to which the problems can be added
self.problem_section = ItemFactory.create(parent_location=chapter.location,
category='sequential',
metadata={'graded': True, 'format': 'Homework'},
display_name=TEST_SECTION_NAME)
@staticmethod
def get_user_email(username):
"""Generate email address based on username"""
return u'{0}@test.com'.format(username)
def login_username(self, username):
"""Login the user, given the `username`."""
if self.current_user != username:
self.logout()
user_email = User.objects.get(username=username).email
self.login(user_email, "test")
self.current_user = username
def _create_user(self, username, email=None, is_staff=False, mode='honor'):
"""Creates a user and enrolls them in the test course."""
if email is None:
email = InstructorTaskCourseTestCase.get_user_email(username)
thisuser = UserFactory.create(username=username, email=email, is_staff=is_staff)
CourseEnrollmentFactory.create(user=thisuser, course_id=self.course.id, mode=mode)
return thisuser
def create_instructor(self, username, email=None):
"""Creates an instructor for the test course."""
return self._create_user(username, email, is_staff=True)
def create_student(self, username, email=None, mode='honor'):
"""Creates a student for the test course."""
return self._create_user(username, email, is_staff=False, mode=mode)
@staticmethod
def get_task_status(task_id):
"""Use api method to fetch task status, using mock request."""
mock_request = Mock()
mock_request.GET = mock_request.POST = {'task_id': task_id}
response = instructor_task_status(mock_request)
status = json.loads(response.content)
return status
def create_task_request(self, requester_username):
"""Generate request that can be used for submitting tasks"""
request = Mock()
request.user = User.objects.get(username=requester_username)
request.get_host = Mock(return_value="testhost")
request.META = {'REMOTE_ADDR': '0:0:0:0', 'SERVER_NAME': 'testhost'}
request.is_secure = Mock(return_value=False)
return request
class InstructorTaskModuleTestCase(InstructorTaskCourseTestCase):
"""
Base test class for InstructorTask-related tests that require
the setup of a course and problem in order to access StudentModule state.
"""
@staticmethod
def problem_location(problem_url_name, course_key=None):
"""
Create an internal location for a test problem.
"""
if "i4x:" in problem_url_name:
return Location.from_deprecated_string(problem_url_name)
elif course_key:
return course_key.make_usage_key('problem', problem_url_name)
else:
return TEST_COURSE_KEY.make_usage_key('problem', problem_url_name)
def define_option_problem(self, problem_url_name, parent=None, **kwargs):
"""Create the problem definition so the answer is Option 1"""
if parent is None:
parent = self.problem_section
factory = OptionResponseXMLFactory()
factory_args = {'question_text': 'The correct answer is {0}'.format(OPTION_1),
'options': [OPTION_1, OPTION_2],
'correct_option': OPTION_1,
'num_responses': 2}
problem_xml = factory.build_xml(**factory_args)
ItemFactory.create(parent_location=parent.location,
parent=parent,
category="problem",
display_name=problem_url_name,
data=problem_xml,
**kwargs)
def redefine_option_problem(self, problem_url_name):
"""Change the problem definition so the answer is Option 2"""
factory = OptionResponseXMLFactory()
factory_args = {'question_text': 'The correct answer is {0}'.format(OPTION_2),
'options': [OPTION_1, OPTION_2],
'correct_option': OPTION_2,
'num_responses': 2}
problem_xml = factory.build_xml(**factory_args)
location = InstructorTaskTestCase.problem_location(problem_url_name)
item = self.module_store.get_item(location)
with self.module_store.branch_setting(ModuleStoreEnum.Branch.draft_preferred, location.course_key):
item.data = problem_xml
self.module_store.update_item(item, self.user.id)
self.module_store.publish(location, self.user.id)
def get_student_module(self, username, descriptor):
"""Get StudentModule object for test course, given the `username` and the problem's `descriptor`."""
return StudentModule.objects.get(course_id=self.course.id,
student=User.objects.get(username=username),
module_type=descriptor.location.category,
module_state_key=descriptor.location,
)
def submit_student_answer(self, username, problem_url_name, responses):
"""
Use ajax interface to submit a student answer.
Assumes the input list of responses has two values.
"""
def get_input_id(response_id):
"""Creates input id using information about the test course and the current problem."""
# Note that this is a capa-specific convention. The form is a version of the problem's
# URL, modified so that it can be easily stored in html, prepended with "input_" and
# appended with a sequence identifier for the particular response the input goes to.
course_key = self.course.id
return u'input_i4x-{0}-{1}-problem-{2}_{3}'.format(
course_key.org.replace(u'.', u'_'),
course_key.course.replace(u'.', u'_'),
problem_url_name,
response_id
)
# make sure that the requested user is logged in, so that the ajax call works
# on the right problem:
self.login_username(username)
# make ajax call:
modx_url = reverse('xblock_handler', kwargs={
'course_id': self.course.id.to_deprecated_string(),
'usage_id': quote_slashes(
InstructorTaskModuleTestCase.problem_location(problem_url_name, self.course.id).to_deprecated_string()
),
'handler': 'xmodule_handler',
'suffix': 'problem_check',
})
# assign correct identifier to each response.
resp = self.client.post(modx_url, {
get_input_id(u'{}_1'.format(index)): response for index, response in enumerate(responses, 2)
})
return resp
class TestReportMixin(object):
"""
Cleans up after tests that place files in the reports directory.
"""
def tearDown(self):
report_store = ReportStore.from_config(config_name='GRADES_DOWNLOAD')
try:
reports_download_path = report_store.storage.path('')
except NotImplementedError:
pass # storage backend does not use the local filesystem
else:
if os.path.exists(reports_download_path):
shutil.rmtree(reports_download_path)
def verify_rows_in_csv(self, expected_rows, file_index=0, verify_order=True, ignore_other_columns=False):
"""
Verify that the last ReportStore CSV contains the expected content.
Arguments:
expected_rows (iterable): An iterable of dictionaries,
where each dict represents a row of data in the last
ReportStore CSV. Each dict maps keys from the CSV
header to values in that row's corresponding cell.
file_index (int): Describes which report store file to
open. Files are ordered by last modified date, and 0
corresponds to the most recently modified file.
verify_order (boolean): When True (the default), we verify that
both the content and order of `expected_rows` match the
actual csv rows. When False, we only verify
that the content matches.
ignore_other_columns (boolean): When True, csv columns that are not
present in `expected_rows` are ignored during the comparison.
"""
report_store = ReportStore.from_config(config_name='GRADES_DOWNLOAD')
report_csv_filename = report_store.links_for(self.course.id)[file_index][0]
report_path = report_store.path_to(self.course.id, report_csv_filename)
with report_store.storage.open(report_path) as csv_file:
# Expand the dict reader generator so we don't lose its content
csv_rows = [row for row in unicodecsv.DictReader(csv_file)]
if ignore_other_columns:
csv_rows = [
{key: row.get(key) for key in expected_rows[index].keys()} for index, row in enumerate(csv_rows)
]
if verify_order:
self.assertEqual(csv_rows, expected_rows)
else:
self.assertItemsEqual(csv_rows, expected_rows)
def get_csv_row_with_headers(self):
"""
Helper function to return a list of the column names from the CSV file (the first row).
"""
report_store = ReportStore.from_config(config_name='GRADES_DOWNLOAD')
report_csv_filename = report_store.links_for(self.course.id)[0][0]
report_path = report_store.path_to(self.course.id, report_csv_filename)
with report_store.storage.open(report_path) as csv_file:
rows = unicodecsv.reader(csv_file, encoding='utf-8')
return rows.next()
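# A hypothetical usage sketch of TestReportMixin.verify_rows_in_csv from a
# subclass (the column names and values below are illustrative only):
#
#     self.verify_rows_in_csv(
#         [{'username': 'student', 'grade': '0.5'}],
#         verify_order=False,          # compare content only, not row order
#         ignore_other_columns=True,   # tolerate extra columns in the report
#     )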
|
from spack import *
class Dftd3Lib(MakefilePackage):
"""A dispersion correction for density functionals,
Hartree-Fock and semi-empirical quantum chemical methods"""
homepage = "https://www.chemie.uni-bonn.de/pctc/mulliken-center/software/dft-d3/dft-d3"
url = "https://github.com/dftbplus/dftd3-lib/archive/0.9.2.tar.gz"
version('0.9.2', sha256='4178f3cf2f3e7e982a7084ec66bac92b4fdf164537d9fc0ada840a11b784f0e0')
# This fixes a concurrency bug where make would try to start compiling
# the dftd3 target before the lib target had finished.
# Since the library is small, disabling parallel builds does little harm.
parallel = False
def edit(self, spec, prefix):
makefile = FileFilter('make.arch')
makefile.filter("FC = gfortran", "")
makefile.filter("LN = gfortran", "LN = $(FC)")
def install(self, spec, prefix):
mkdir(prefix.lib)
mkdir(prefix.bin)
mkdir(prefix.include)
install("lib/libdftd3.a", prefix.lib)
install("prg/dftd3", prefix.bin)
install("lib/dftd3_api.mod", prefix.include)
install("lib/dftd3_common.mod", prefix.include)
install("lib/dftd3_core.mod", prefix.include)
install("lib/dftd3_pars.mod", prefix.include)
install("lib/dftd3_sizes.mod", prefix.include)
|
from security_monkey import db
from security_monkey import app
from flask_wtf.csrf import generate_csrf
from security_monkey.decorators import crossdomain
from flask.ext.restful import fields, marshal, Resource, reqparse
from flask.ext.login import current_user
ORIGINS = [
'https://{}:{}'.format(app.config.get('FQDN'), app.config.get('WEB_PORT')),
# Adding this next one so you can also access the dart UI by prepending /static to the path.
'https://{}:{}'.format(app.config.get('FQDN'), app.config.get('API_PORT')),
'https://{}:{}'.format(app.config.get('FQDN'), app.config.get('NGINX_PORT')),
'https://{}:80'.format(app.config.get('FQDN')),
# FOR LOCAL DEV IN DART EDITOR:
'http://127.0.0.1:3030',
'http://127.0.0.1:8080',
'http://localhost:3030',
'http://localhost:8080'
]
REVISION_FIELDS = {
'id': fields.Integer,
'date_created': fields.String,
'active': fields.Boolean,
'item_id': fields.Integer
}
ITEM_FIELDS = {
'id': fields.Integer,
'region': fields.String,
'name': fields.String
}
AUDIT_FIELDS = {
'id': fields.Integer,
'score': fields.Integer,
'issue': fields.String,
'notes': fields.String,
'justified': fields.Boolean,
'justification': fields.String,
'justified_date': fields.String,
'item_id': fields.Integer
}
REVISION_COMMENT_FIELDS = {
'id': fields.Integer,
'revision_id': fields.Integer,
'date_created': fields.String,
'text': fields.String
}
ITEM_COMMENT_FIELDS = {
'id': fields.Integer,
'date_created': fields.String,
'text': fields.String,
'item_id': fields.Integer
}
USER_SETTINGS_FIELDS = {
# 'id': fields.Integer,
'daily_audit_email': fields.Boolean,
'change_reports': fields.String
}
ACCOUNT_FIELDS = {
'id': fields.Integer,
'name': fields.String,
's3_name': fields.String,
'number': fields.String,
'notes': fields.String,
'active': fields.Boolean,
'third_party': fields.Boolean
}
WHITELIST_FIELDS = {
'id': fields.Integer,
'name': fields.String,
'notes': fields.String,
'cidr': fields.String
}
IGNORELIST_FIELDS = {
'id': fields.Integer,
'prefix': fields.String,
'notes': fields.String,
}
AUDITORSETTING_FIELDS = {
'id': fields.Integer,
'disabled': fields.Boolean,
'issue_text': fields.String
}
class AuthenticatedService(Resource):
def __init__(self):
self.reqparse = reqparse.RequestParser()
super(AuthenticatedService, self).__init__()
self.auth_dict = dict()
if current_user.is_authenticated():
self.auth_dict = {
"authenticated": True,
"user": current_user.email
}
else:
if app.config.get('FRONTED_BY_NGINX'):
url = "https://{}:{}{}".format(app.config.get('FQDN'), app.config.get('NGINX_PORT'), '/login')
else:
url = "http://{}:{}{}".format(app.config.get('FQDN'), app.config.get('API_PORT'), '/login')
self.auth_dict = {
"authenticated": False,
"user": None,
"url": url
}
@app.after_request
@crossdomain(allowed_origins=ORIGINS)
def after(response):
response.set_cookie('XSRF-COOKIE', generate_csrf())
return response
def __check_auth__(auth_dict):
"""
To be called at the beginning of any GET or POST request.
Returns (True, response) if the caller still needs to authenticate;
the response contains the JSON with the SAML URL for login and a 401
status. Returns (None, None) when no authentication action needs to occur.
"""
if not current_user.is_authenticated():
return True, ({"auth": auth_dict}, 401)
return None, None
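# A hypothetical usage sketch from inside a Resource subclass (names are
# illustrative only):
#
#     def get(self):
#         auth, retval = __check_auth__(self.auth_dict)
#         if auth:
#             return retval  # ({"auth": {...}}, 401), including the login URL
#         ...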
|
"""
Serialize user
"""
def serialize_group_for_user(group, user):
return {
'name': group.name,
'id': group._id,
'role': user.group_role(group)
}
def serialize_user(user):
potential_spam_profile_content = {
'schools': user.schools,
'jobs': user.jobs
}
return {
'username': user.username,
'name': user.fullname,
'id': user._id,
'emails': user.emails.values_list('address', flat=True),
'last_login': user.date_last_login,
'confirmed': user.date_confirmed,
'registered': user.date_registered,
'deleted': user.deleted,
'disabled': user.date_disabled if user.is_disabled else False,
'two_factor': user.has_addon('twofactor'),
'osf_link': user.absolute_url,
'system_tags': user.system_tags,
'is_spammy': user.is_spammy,
'spam_status': user.spam_status,
'unclaimed': bool(user.unclaimed_records),
'requested_deactivation': bool(user.requested_deactivation),
'osf_groups': [serialize_group_for_user(group, user) for group in user.osf_groups],
'potential_spam_profile_content': user._get_spam_content(potential_spam_profile_content),
}
def serialize_simple_node(node):
return {
'id': node._id,
'title': node.title,
'public': node.is_public,
'number_contributors': len(node.contributors),
'spam_status': node.spam_status,
'is_registration': node.is_registration,
'deleted': node.is_deleted,
}
def serialize_simple_preprint(preprint):
return {
'id': preprint._id,
'title': preprint.title,
'number_contributors': len(preprint.contributors),
'deleted': preprint.is_deleted,
'public': preprint.verified_publishable,
'spam_status': preprint.spam_status,
}
|
from io import BytesIO
from mapproxy.test.helper import Mocker
from mapproxy.test.mocker import ANY
from mapproxy.response import Response
from mapproxy.compat import string_type
class TestResponse(Mocker):
def test_str_response(self):
resp = Response("string content")
assert isinstance(resp.response, string_type)
start_response = self.mock()
self.expect(start_response("200 OK", ANY))
self.replay()
result = resp({"REQUEST_METHOD": "GET"}, start_response)
assert next(result) == b"string content"
def test_itr_response(self):
resp = Response(iter(["string content", "as iterable"]))
assert hasattr(resp.response, "next") or hasattr(resp.response, "__next__")
start_response = self.mock()
self.expect(start_response("200 OK", ANY))
self.replay()
result = resp({"REQUEST_METHOD": "GET"}, start_response)
assert next(result) == "string content"
assert next(result) == "as iterable"
def test_file_response(self):
data = BytesIO(b"foobar")
resp = Response(data)
assert resp.response == data
start_response = self.mock()
self.expect(start_response("200 OK", ANY))
self.replay()
result = resp({"REQUEST_METHOD": "GET"}, start_response)
assert next(result) == b"foobar"
def test_file_response_w_file_wrapper(self):
data = BytesIO(b"foobar")
resp = Response(data)
assert resp.response == data
start_response = self.mock()
self.expect(start_response("200 OK", ANY))
file_wrapper = self.mock()
self.expect(file_wrapper(data, resp.block_size)).result("DUMMY")
self.replay()
result = resp(
{"REQUEST_METHOD": "GET", "wsgi.file_wrapper": file_wrapper}, start_response
)
assert result == "DUMMY"
def test_file_response_content_length(self):
data = BytesIO(b"*" * 342)
resp = Response(data)
assert resp.response == data
start_response = self.mock()
self.expect(start_response("200 OK", ANY))
self.replay()
resp({"REQUEST_METHOD": "GET"}, start_response)
assert resp.content_length == 342
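# A minimal sketch of Response as a WSGI application, assuming only the
# behavior exercised by the tests above:
#
#     def application(environ, start_response):
#         return Response("hello")(environ, start_response)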
|
"""Contrib version of MirroredStrategy."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.distribute import distribute_lib
from tensorflow.python.distribute import input_lib
from tensorflow.python.distribute import mirrored_strategy
_call_for_each_replica = mirrored_strategy._call_for_each_replica
_create_mirrored_variable = mirrored_strategy._create_mirrored_variable
all_local_devices = mirrored_strategy.all_local_devices
CoreMirroredStrategy = mirrored_strategy.MirroredStrategy
CoreMirroredExtended = mirrored_strategy.MirroredExtended
class MirroredStrategy(distribute_lib.DistributionStrategy):
"""Mirrors vars to distribute across multiple devices and machines.
*** contrib version ***
This strategy uses one replica per device and sync replication for its
multi-GPU version.
When `cluster_spec` is given by the `configure` method, it turns into the
multi-worker version that works on multiple workers with in-graph replication.
Note: `configure` will be called by higher-level APIs if running in
distributed environment.
There are several important concepts for distributed TensorFlow, e.g.
`client`, `job`, `task`, `cluster`, `in-graph replication` and
`synchronous training` and they have already been defined in the
[TensorFlow's documentation](https://www.tensorflow.org/deploy/distributed).
The distribution strategy inherits these concepts as well and in addition to
that we also clarify several more concepts:
* **In-graph replication**: the `client` creates a single `tf.Graph` that
specifies tasks for devices on all workers. The `client` then creates a
client session which will talk to the `master` service of a `worker`. Then
the `master` will partition the graph and distribute the work to all
participating workers.
* **Worker**: A `worker` is a TensorFlow `task` that usually maps to one
physical machine. We will have multiple `worker`s with different `task`
index. They all do similar things except for one worker checkpointing model
variables, writing summaries, etc. in addition to its ordinary work.
The multi-worker version of this class maps one replica to one device on a
worker. It mirrors all model variables on all replicas. For example, if you
have two `worker`s and each `worker` has 4 GPUs, it will create 8 copies of
the model variables on these 8 GPUs. Then, as in MirroredStrategy, each
replica performs its computation with its own copy of the variables, unless
in cross-replica mode where variable or tensor reduction happens.
Args:
devices: a list of device strings.
num_gpus: number of GPUs. For local training, either specify `devices` or
`num_gpus`. In distributed training, this must be specified as number of
GPUs on each worker.
num_gpus_per_worker: number of GPUs per worker. This is the same as
`num_gpus` and only one of `num_gpus` and `num_gpus_per_worker` can be
specified.
cross_device_ops: optional, a descendant of `CrossDeviceOps`. If this is not
set, the `configure` method will try to find the best one.
auto_shard_dataset: whether to auto-shard the dataset when there are
multiple workers.
cross_tower_ops: Deprecated alias for `cross_device_ops`.
"""
def __init__(self,
devices=None,
num_gpus=None,
num_gpus_per_worker=None,
cross_device_ops=None,
auto_shard_dataset=False,
cross_tower_ops=None):
assert not (cross_device_ops and cross_tower_ops)
if num_gpus is not None and num_gpus_per_worker is not None:
raise ValueError(
"You cannot specify both `num_gpus` and `num_gpus_per_worker`.")
if num_gpus is None:
num_gpus = num_gpus_per_worker
extended = MirroredExtended(self, devices, num_gpus,
cross_device_ops or cross_tower_ops,
auto_shard_dataset)
super(MirroredStrategy, self).__init__(extended)
# Override to change the documentation to reflect the different handling of
# global vs. local batch size between core and contrib.
def make_dataset_iterator(self, dataset): # pylint: disable=useless-super-delegation
"""Makes an iterator for input provided via `dataset`.
NOTE: The batch size of the `dataset` argument is treated differently for
this contrib version of `MirroredStrategy`.
Data from the given dataset will be distributed evenly across all the
compute replicas. We will assume that the input dataset is batched by the
per-replica batch size.
The user could also use `make_input_fn_iterator` if they want to
customize which input is fed to which replica/worker etc.
Args:
dataset: `tf.data.Dataset` that will be distributed evenly across all
replicas.
Returns:
A `tf.distribute.InputIterator` which returns inputs for each step of the
computation. The user should call `initialize` on the returned iterator.
"""
return super(MirroredStrategy, self).make_dataset_iterator(dataset)
# Override to change the documentation to reflect the different handling of
# global vs. local batch size between core and contrib.
def experimental_make_numpy_iterator( # pylint: disable=useless-super-delegation
self, numpy_input, batch_size, num_epochs=1, shuffle=1024, session=None):
"""Makes an iterator for input provided via a nest of numpy arrays.
NOTE: The `batch_size` argument here has different behavior for this
contrib version of `MirroredStrategy`.
Args:
numpy_input: A nest of NumPy input arrays that will be distributed evenly
across all replicas.
batch_size: The number of entries from the array each replica should
consume in one step of the computation. This is the per-replica
batch size. The global batch size will be this times
`num_replicas_in_sync`.
num_epochs: The number of times to iterate through the examples. A value
of `None` means repeat forever.
shuffle: Size of buffer to use for shuffling the input examples.
Use `None` to disable shuffling.
session: (TensorFlow v1.x graph execution only) A session used for
initialization.
Returns:
A `tf.distribute.InputIterator` which returns inputs for each step of the
computation. The user should call `initialize` on the returned iterator.
"""
return super(MirroredStrategy, self).experimental_make_numpy_iterator(
numpy_input, batch_size, num_epochs, shuffle, session)
class MirroredExtended(CoreMirroredExtended):
"""Implementation of (contrib) MirroredStrategy."""
def __init__(self,
container_strategy,
devices=None,
num_gpus_per_worker=None,
cross_device_ops=None,
auto_shard_dataset=False):
if devices is None:
devices = mirrored_strategy.all_local_devices(num_gpus_per_worker)
elif num_gpus_per_worker is not None:
raise ValueError(
"Must only specify one of `devices` and `num_gpus_per_worker`.")
super(MirroredExtended, self).__init__(container_strategy, devices,
cross_device_ops)
self._auto_shard_dataset = auto_shard_dataset
def _make_dataset_iterator(self, dataset):
"""Make iterator from dataset without splitting the batch.
This implementation is different than the one in
`tf.distribute.MirroredStrategy` for purposes of backward compatibility.
We treat the incoming dataset's batch size as the per-replica batch size.
Args:
dataset: `tf.data.Dataset` for input.
Returns:
An `InputIterator` which returns inputs for each step of the computation.
"""
return input_lib.DatasetIterator(dataset, self._input_workers)
# TODO(priyag): Delete this once all strategies use global batch size.
@property
def _global_batch_size(self):
"""The contrib version of Mirrored strategy uses per-replica batch size."""
return False
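# A sketch of the contrib batch-size convention documented above (numbers
# illustrative): with 2 replicas and a dataset batched with batch_size=32,
# each replica consumes 32 examples per step, for an effective global batch
# of 32 * 2 = 64. The core MirroredStrategy would instead treat 32 as the
# global batch size and divide it across replicas.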
|
"""Check multiple key definition"""
__revision__ = 5
correct_dict = {
'tea': 'for two',
'two': 'for tea',
}
wrong_dict = {
'tea': 'for two',
'two': 'for tea',
'tea': 'time',
}
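# At runtime the last duplicate key wins, so wrong_dict['tea'] == 'time'.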
|
"""
Defines geometric primitives like prisms, spheres, etc.
"""
from __future__ import division, absolute_import
from future.builtins import object, super
import copy as cp
import numpy as np
class GeometricElement(object):
"""
Base class for all geometric elements.
"""
def __init__(self, props):
self.props = {}
if props is not None:
for p in props:
self.props[p] = props[p]
def addprop(self, prop, value):
"""
Add a physical property to this geometric element.
If it already has the property, the given value will overwrite the
existing one.
Parameters:
* prop : str
Name of the physical property.
* value : float
The value of this physical property.
"""
self.props[prop] = value
def copy(self):
""" Return a deep copy of the current instance."""
return cp.deepcopy(self)
class Polygon(GeometricElement):
"""
A polygon object (2D).
.. note:: Most applications require the vertices to be **clockwise**!
Parameters:
* vertices : list of lists
List of [x, y] pairs with the coordinates of the vertices.
* props : dict
Physical properties assigned to the polygon.
Ex: ``props={'density':10, 'susceptibility':10000}``
Examples::
>>> poly = Polygon([[0, 0], [1, 4], [2, 5]], {'density': 500})
>>> poly.props
{'density': 500}
>>> poly.nverts
3
>>> poly.vertices
array([[0, 0],
[1, 4],
[2, 5]])
>>> poly.x
array([0, 1, 2])
>>> poly.y
array([0, 4, 5])
"""
def __init__(self, vertices, props=None):
super().__init__(props)
self._vertices = np.asarray(vertices)
@property
def vertices(self):
return self._vertices
@property
def nverts(self):
return len(self.vertices)
@property
def x(self):
return self.vertices[:, 0]
@property
def y(self):
return self.vertices[:, 1]
class Square(Polygon):
"""
A square object (2D).
Parameters:
* bounds : list = [x1, x2, y1, y2]
Coordinates of the top right and bottom left corners of the square
* props : dict
Physical properties assigned to the square.
Ex: ``props={'density':10, 'slowness':10000}``
Example::
>>> sq = Square([0, 1, 2, 4], {'density': 750})
>>> sq.bounds
[0, 1, 2, 4]
>>> sq.x1
0
>>> sq.x2
1
>>> sq.props
{'density': 750}
>>> sq.addprop('magnetization', 100)
>>> sq.props['magnetization']
100
A square can be used as a :class:`~fatiando.mesher.Polygon`::
>>> sq.vertices
array([[0, 2],
[1, 2],
[1, 4],
[0, 4]])
>>> sq.x
array([0, 1, 1, 0])
>>> sq.y
array([2, 2, 4, 4])
>>> sq.nverts
4
"""
def __init__(self, bounds, props=None):
super().__init__(None, props)
self.x1, self.x2, self.y1, self.y2 = bounds
@property
def bounds(self):
"""
The x, y boundaries of the square as [xmin, xmax, ymin, ymax]
"""
return [self.x1, self.x2, self.y1, self.y2]
@property
def vertices(self):
"""
The vertices of the square.
"""
verts = np.array(
[[self.x1, self.y1],
[self.x2, self.y1],
[self.x2, self.y2],
[self.x1, self.y2]])
return verts
def __str__(self):
"""Return a string representation of the square."""
names = [('x1', self.x1), ('x2', self.x2), ('y1', self.y1),
('y2', self.y2)]
names.extend((p, self.props[p]) for p in sorted(self.props))
return ' | '.join('%s:%g' % (n, v) for n, v in names)
class Prism(GeometricElement):
"""
A 3D right rectangular prism.
.. note:: The coordinate system used is x -> North, y -> East and z -> Down
Parameters:
* x1, x2 : float
South and north borders of the prism
* y1, y2 : float
West and east borders of the prism
* z1, z2 : float
Top and bottom of the prism
* props : dict
Physical properties assigned to the prism.
Ex: ``props={'density':10, 'magnetization':10000}``
Examples:
>>> from fatiando.mesher import Prism
>>> p = Prism(1, 2, 3, 4, 5, 6, {'density':200})
>>> p.props['density']
200
>>> print p.get_bounds()
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
>>> print p
x1:1 | x2:2 | y1:3 | y2:4 | z1:5 | z2:6 | density:200
>>> p = Prism(1, 2, 3, 4, 5, 6)
>>> print p
x1:1 | x2:2 | y1:3 | y2:4 | z1:5 | z2:6
>>> p.addprop('density', 2670)
>>> print p
x1:1 | x2:2 | y1:3 | y2:4 | z1:5 | z2:6 | density:2670
"""
def __init__(self, x1, x2, y1, y2, z1, z2, props=None):
super().__init__(props)
self.x1 = float(x1)
self.x2 = float(x2)
self.y1 = float(y1)
self.y2 = float(y2)
self.z1 = float(z1)
self.z2 = float(z2)
def __str__(self):
"""Return a string representation of the prism."""
names = [('x1', self.x1), ('x2', self.x2), ('y1', self.y1),
('y2', self.y2), ('z1', self.z1), ('z2', self.z2)]
names.extend((p, self.props[p]) for p in sorted(self.props))
return ' | '.join('%s:%g' % (n, v) for n, v in names)
def get_bounds(self):
"""
Get the bounding box of the prism (i.e., the borders of the prism).
Returns:
* bounds : list
``[x1, x2, y1, y2, z1, z2]``, the bounds of the prism
Examples:
>>> p = Prism(1, 2, 3, 4, 5, 6)
>>> print p.get_bounds()
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
"""
return [self.x1, self.x2, self.y1, self.y2, self.z1, self.z2]
def center(self):
"""
Return the coordinates of the center of the prism.
Returns:
* coords : list = [xc, yc, zc]
Coordinates of the center
Example:
>>> prism = Prism(1, 2, 1, 3, 0, 2)
>>> print prism.center()
[ 1.5 2. 1. ]
"""
xc = 0.5 * (self.x1 + self.x2)
yc = 0.5 * (self.y1 + self.y2)
zc = 0.5 * (self.z1 + self.z2)
return np.array([xc, yc, zc])
class Tesseroid(GeometricElement):
"""
A tesseroid (spherical prism).
Parameters:
* w, e : float
West and east borders of the tesseroid in decimal degrees
* s, n : float
South and north borders of the tesseroid in decimal degrees
* top, bottom : float
Bottom and top of the tesseroid with respect to the mean earth radius
in meters. Ex: if the top is 100 meters above the mean earth radius,
``top=100``, if 100 meters below ``top=-100``.
* props : dict
Physical properties assigned to the tesseroid.
Ex: ``props={'density':10, 'magnetization':10000}``
Examples:
>>> from fatiando.mesher import Tesseroid
>>> t = Tesseroid(1, 2, 3, 4, 6, 5, {'density':200})
>>> t.props['density']
200
>>> print t.get_bounds()
[1.0, 2.0, 3.0, 4.0, 6.0, 5.0]
>>> print t
w:1 | e:2 | s:3 | n:4 | top:6 | bottom:5 | density:200
>>> t = Tesseroid(1, 2, 3, 4, 6, 5)
>>> print t
w:1 | e:2 | s:3 | n:4 | top:6 | bottom:5
>>> t.addprop('density', 2670)
>>> print t
w:1 | e:2 | s:3 | n:4 | top:6 | bottom:5 | density:2670
"""
def __init__(self, w, e, s, n, top, bottom, props=None):
super().__init__(props)
self.w = float(w)
self.e = float(e)
self.s = float(s)
self.n = float(n)
self.bottom = float(bottom)
self.top = float(top)
def __str__(self):
"""Return a string representation of the tesseroid."""
names = [('w', self.w), ('e', self.e), ('s', self.s),
('n', self.n), ('top', self.top), ('bottom', self.bottom)]
names.extend((p, self.props[p]) for p in sorted(self.props))
return ' | '.join('%s:%g' % (n, v) for n, v in names)
def get_bounds(self):
"""
Get the bounding box of the tesseroid (i.e., the borders).
Returns:
* bounds : list
``[w, e, s, n, top, bottom]``, the bounds of the tesseroid
Examples:
>>> t = Tesseroid(1, 2, 3, 4, 6, 5)
>>> print t.get_bounds()
[1.0, 2.0, 3.0, 4.0, 6.0, 5.0]
"""
return [self.w, self.e, self.s, self.n, self.top, self.bottom]
def half(self, lon=True, lat=True, r=True):
"""
Divide the tesseroid in half along each dimension (8 in total)
The smaller tesseroids will share the large one's props.
Parameters:
* lon, lat, r : True or False
Dimensions along which the tesseroid will be split in half.
Returns:
* tesseroids : list
A list of maximum 8 tesseroids that make up the larger one.
Examples::
>>> tess = Tesseroid(-10, 10, -20, 20, 0, -40, {'density':2})
>>> split = tess.half()
>>> print len(split)
8
>>> for t in split:
... print t
w:-10 | e:0 | s:-20 | n:0 | top:-20 | bottom:-40 | density:2
w:-10 | e:0 | s:-20 | n:0 | top:0 | bottom:-20 | density:2
w:-10 | e:0 | s:0 | n:20 | top:-20 | bottom:-40 | density:2
w:-10 | e:0 | s:0 | n:20 | top:0 | bottom:-20 | density:2
w:0 | e:10 | s:-20 | n:0 | top:-20 | bottom:-40 | density:2
w:0 | e:10 | s:-20 | n:0 | top:0 | bottom:-20 | density:2
w:0 | e:10 | s:0 | n:20 | top:-20 | bottom:-40 | density:2
w:0 | e:10 | s:0 | n:20 | top:0 | bottom:-20 | density:2
>>> tess = Tesseroid(-15, 15, -20, 20, 0, -40)
>>> split = tess.half(lat=False)
>>> print len(split)
4
>>> for t in split:
... print t
w:-15 | e:0 | s:-20 | n:20 | top:-20 | bottom:-40
w:-15 | e:0 | s:-20 | n:20 | top:0 | bottom:-20
w:0 | e:15 | s:-20 | n:20 | top:-20 | bottom:-40
w:0 | e:15 | s:-20 | n:20 | top:0 | bottom:-20
"""
dlon = 0.5 * (self.e - self.w)
dlat = 0.5 * (self.n - self.s)
dh = 0.5 * (self.top - self.bottom)
wests = [self.w, self.w + dlon]
souths = [self.s, self.s + dlat]
bottoms = [self.bottom, self.bottom + dh]
if not lon:
dlon *= 2
wests.pop()
if not lat:
dlat *= 2
souths.pop()
if not r:
dh *= 2
bottoms.pop()
split = [
Tesseroid(i, i + dlon, j, j + dlat, k + dh, k, props=self.props)
for i in wests for j in souths for k in bottoms]
return split
def split(self, nlon, nlat, nh):
"""
Split the tesseroid into smaller ones.
The smaller tesseroids will share the large one's props.
Parameters:
* nlon, nlat, nh : int
The number of sections to split in the longitudinal, latitudinal,
and vertical dimensions
Returns:
* tesseroids : list
A list of nlon*nlat*nh tesseroids that make up the larger one.
Examples::
>>> tess = Tesseroid(-10, 10, -20, 20, 0, -40, {'density':2})
>>> split = tess.split(1, 2, 2)
>>> print len(split)
4
>>> for t in split:
... print t
w:-10 | e:10 | s:-20 | n:0 | top:-20 | bottom:-40 | density:2
w:-10 | e:10 | s:-20 | n:0 | top:0 | bottom:-20 | density:2
w:-10 | e:10 | s:0 | n:20 | top:-20 | bottom:-40 | density:2
w:-10 | e:10 | s:0 | n:20 | top:0 | bottom:-20 | density:2
>>> tess = Tesseroid(-15, 15, -20, 20, 0, -40)
>>> split = tess.split(3, 1, 1)
>>> print len(split)
3
>>> for t in split:
... print t
w:-15 | e:-5 | s:-20 | n:20 | top:0 | bottom:-40
w:-5 | e:5 | s:-20 | n:20 | top:0 | bottom:-40
w:5 | e:15 | s:-20 | n:20 | top:0 | bottom:-40
"""
wests = np.linspace(self.w, self.e, nlon + 1)
souths = np.linspace(self.s, self.n, nlat + 1)
bottoms = np.linspace(self.bottom, self.top, nh + 1)
dlon = wests[1] - wests[0]
dlat = souths[1] - souths[0]
dh = bottoms[1] - bottoms[0]
tesseroids = [
Tesseroid(i, i + dlon, j, j + dlat, k + dh, k, props=self.props)
for i in wests[:-1] for j in souths[:-1] for k in bottoms[:-1]]
return tesseroids
class Sphere(GeometricElement):
"""
A sphere.
.. note:: The coordinate system used is x -> North, y -> East and z -> Down
Parameters:
* x, y, z : float
The coordinates of the center of the sphere
* radius : float
The radius of the sphere
* props : dict
Physical properties assigned to the prism.
Ex: ``props={'density':10, 'magnetization':10000}``
Examples:
>>> s = Sphere(1, 2, 3, 10, {'magnetization':200})
>>> s.props['magnetization']
200
>>> s.addprop('density', 20)
>>> print s.props['density']
20
>>> print s
x:1 | y:2 | z:3 | radius:10 | density:20 | magnetization:200
>>> s = Sphere(1, 2, 3, 4)
>>> print s
x:1 | y:2 | z:3 | radius:4
>>> s.addprop('density', 2670)
>>> print s
x:1 | y:2 | z:3 | radius:4 | density:2670
"""
def __init__(self, x, y, z, radius, props=None):
super().__init__(props)
self.x = float(x)
self.y = float(y)
self.z = float(z)
self.radius = float(radius)
self.center = np.array([x, y, z])
def __str__(self):
"""Return a string representation of the sphere."""
names = [('x', self.x), ('y', self.y), ('z', self.z),
('radius', self.radius)]
names.extend((p, self.props[p]) for p in sorted(self.props))
return ' | '.join('%s:%g' % (n, v) for n, v in names)
class PolygonalPrism(GeometricElement):
"""
A 3D prism with polygonal cross-section.
.. note:: The coordinate system used is x -> North, y -> East and z -> Down
.. note:: *vertices* must be **CLOCKWISE** or the result will be inverted.
Parameters:
* vertices : list of lists
Coordinates of the vertices. A list of ``[x, y]`` pairs.
* z1, z2 : float
Top and bottom of the prism
* props : dict
Physical properties assigned to the prism.
Ex: ``props={'density':10, 'magnetization':10000}``
Examples:
>>> verts = [[1, 1], [1, 2], [2, 2], [2, 1]]
>>> p = PolygonalPrism(verts, 0, 3, props={'temperature':25})
>>> p.props['temperature']
25
>>> print p.x
[ 1. 1. 2. 2.]
>>> print p.y
[ 1. 2. 2. 1.]
>>> print p.z1, p.z2
0.0 3.0
>>> p.addprop('density', 2670)
>>> print p.props['density']
2670
"""
def __init__(self, vertices, z1, z2, props=None):
super().__init__(props)
self.x = np.fromiter((v[0] for v in vertices), dtype=np.float)
self.y = np.fromiter((v[1] for v in vertices), dtype=np.float)
self.z1 = float(z1)
self.z2 = float(z2)
self.nverts = len(vertices)
def topolygon(self):
"""
Get the polygon describing the prism viewed from above.
Returns:
* polygon : :func:`fatiando.mesher.Polygon`
The polygon
Example:
>>> verts = [[1, 1], [1, 2], [2, 2], [2, 1]]
>>> p = PolygonalPrism(verts, 0, 100)
>>> poly = p.topolygon()
>>> print(poly.x)
[1. 1. 2. 2.]
>>> print(poly.y)
[1. 2. 2. 1.]
"""
verts = np.transpose([self.x, self.y])
return Polygon(verts, self.props)
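# Hedged helper sketch (not in the original module): the class docstring
# warns that *vertices* must be clockwise. A shoelace sign test can verify
# that before building a prism; with this module's conventions the example
# vertices [[1, 1], [1, 2], [2, 2], [2, 1]] give a negative sum, i.e. True:
#
#     def is_clockwise(x, y):
#         x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
#         # sum of x_i * y_{i+1} - x_{i+1} * y_i over the closed ring
#         return np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y) < 0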
|
from __future__ import division, print_function, unicode_literals
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
testinfo = "s, t 4.9, s, t 5.1, s, t 10.1, s, t 10.2, s, q"
tags = "sequence, MoveBy, Reverse"
import cocos
from cocos.director import director
from cocos.sprite import Sprite
from cocos.actions import Place, MoveBy, Reverse
import pyglet
class TestLayer(cocos.layer.Layer):
def __init__(self):
super( TestLayer, self ).__init__()
x,y = director.get_window_size()
self.sprite = Sprite( 'grossini.png', (x//2,y//2) )
self.add( self.sprite )
self.sprite2 = Sprite( 'grossini.png', (x//2,y//2) )
self.add( self.sprite2 )
seq = MoveBy( (x//2, 0) ) + MoveBy( (0,y//2) )
self.sprite.do( seq )
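# Reverse(seq) plays the sequence backwards: the order of the two MoveBy
# actions is swapped and each displacement is negated, so sprite2 mirrors
# sprite1's path.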
self.sprite2.do( Reverse( seq ) )
description = """
Starting from midscreen, sprite 1 moves left and then up to
upper right corner, sprites 2 starts from midscreen and goes
down and then left, ending in bottom left corner
"""
def main():
print(description)
director.init()
test_layer = TestLayer ()
main_scene = cocos.scene.Scene (test_layer)
director.run (main_scene)
if __name__ == '__main__':
main()
|
'''
Genesis Add-on
Copyright (C) 2015 lambda
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import re,urllib,urlparse,base64
from resources.lib.libraries import cleantitle
from resources.lib.libraries import client
from resources.lib import resolvers
class source:
def __init__(self):
self.base_link = 'http://www.primewire.ag'
self.key_link = '/index.php?search'
self.link_1 = 'http://www.primewire.ag'
self.link_2 = 'http://www.primewire.org'
self.link_3 = 'http://www.primewire.is'
self.moviesearch_link = '/index.php?search_keywords=%s&key=%s&search_section=1'
self.tvsearch_link = '/index.php?search_keywords=%s&key=%s&search_section=2'
self.headers = {'Connection' : 'keep-alive'}
def get_movie(self, imdb, title, year):
try:
result = ''
links = [self.link_1, self.link_2, self.link_3]
for base_link in links:
result = client.source(urlparse.urljoin(base_link, self.key_link), headers=self.headers)
if 'searchform' in str(result): break
key = client.parseDOM(result, 'input', ret='value', attrs = {'name': 'key'})[0]
query = self.moviesearch_link % (urllib.quote_plus(re.sub('\'', '', title)), key)
result = client.source(urlparse.urljoin(base_link, query), headers=self.headers)
result = result.decode('iso-8859-1').encode('utf-8')
result = client.parseDOM(result, 'div', attrs = {'class': 'index_item.+?'})
title = 'watch' + cleantitle.movie(title)
years = ['(%s)' % str(year), '(%s)' % str(int(year)+1), '(%s)' % str(int(year)-1)]
result = [(client.parseDOM(i, 'a', ret='href')[0], client.parseDOM(i, 'a', ret='title')[0]) for i in result]
result = [i for i in result if any(x in i[1] for x in years)]
result = [(client.replaceHTMLCodes(i[0]), i[1]) for i in result]
try: result = [(urlparse.parse_qs(urlparse.urlparse(i[0]).query)['u'][0], i[1]) for i in result]
except: pass
result = [(urlparse.urlparse(i[0]).path, i[1]) for i in result]
match = [i[0] for i in result if title == cleantitle.movie(i[1])]
match2 = [i[0] for i in result]
match2 = [x for y,x in enumerate(match2) if x not in match2[:y]]
if not match2: return
for i in match2[:5]:
try:
if len(match) > 0:
url = match[0]
break
result = client.source(base_link + i, headers=self.headers)
if str(imdb) in str(result):
url = i
break
except:
pass
url = url.encode('utf-8')
return url
except:
return
def get_show(self, imdb, tvdb, tvshowtitle, year):
try:
result = ''
links = [self.link_1, self.link_2, self.link_3]
for base_link in links:
result = client.source(urlparse.urljoin(base_link, self.key_link), headers=self.headers)
if 'searchform' in str(result): break
key = client.parseDOM(result, 'input', ret='value', attrs = {'name': 'key'})[0]
query = self.tvsearch_link % (urllib.quote_plus(re.sub('\'', '', tvshowtitle)), key)
result = client.source(urlparse.urljoin(base_link, query), headers=self.headers)
result = result.decode('iso-8859-1').encode('utf-8')
result = client.parseDOM(result, 'div', attrs = {'class': 'index_item.+?'})
tvshowtitle = 'watch' + cleantitle.tv(tvshowtitle)
years = ['(%s)' % str(year), '(%s)' % str(int(year)+1), '(%s)' % str(int(year)-1)]
result = [(client.parseDOM(i, 'a', ret='href')[0], client.parseDOM(i, 'a', ret='title')[0]) for i in result]
result = [i for i in result if any(x in i[1] for x in years)]
result = [(client.replaceHTMLCodes(i[0]), i[1]) for i in result]
try: result = [(urlparse.parse_qs(urlparse.urlparse(i[0]).query)['u'][0], i[1]) for i in result]
except: pass
result = [(urlparse.urlparse(i[0]).path, i[1]) for i in result]
match = [i[0] for i in result if tvshowtitle == cleantitle.tv(i[1])]
match2 = [i[0] for i in result]
match2 = [x for y,x in enumerate(match2) if x not in match2[:y]]
if not match2: return
for i in match2[:5]:
try:
if len(match) > 0:
url = match[0]
break
result = client.source(base_link + i, headers=self.headers)
if str(imdb) in str(result):
url = i
break
except:
pass
url = url.encode('utf-8')
return url
except:
return
def get_episode(self, url, imdb, tvdb, title, date, season, episode):
if url is None: return
url = url.replace('/watch-','/tv-')
url += '/season-%01d-episode-%01d' % (int(season), int(episode))
url = client.replaceHTMLCodes(url)
url = url.encode('utf-8')
return url
def get_sources(self, url, hosthdDict, hostDict, locDict):
try:
sources = []
if url is None: return sources
result = ''
links = [self.link_1, self.link_2, self.link_3]
for base_link in links:
result = client.source(urlparse.urljoin(base_link, url), headers=self.headers)
if 'choose_tabs' in str(result): break
result = result.decode('iso-8859-1').encode('utf-8')
links = client.parseDOM(result, 'tbody')
for i in links:
try:
u = client.parseDOM(i, 'a', ret='href')[0]
u = client.replaceHTMLCodes(u)
try: u = urlparse.parse_qs(urlparse.urlparse(u).query)['u'][0]
except: pass
host = urlparse.parse_qs(urlparse.urlparse(u).query)['domain'][0]
host = base64.urlsafe_b64decode(host.encode('utf-8'))
host = host.rsplit('.', 1)[0]
host = host.strip().lower()
if host not in hostDict: raise Exception()
host = client.replaceHTMLCodes(host)
host = host.encode('utf-8')
url = urlparse.parse_qs(urlparse.urlparse(u).query)['url'][0]
url = base64.urlsafe_b64decode(url.encode('utf-8'))
url = client.replaceHTMLCodes(url)
url = url.encode('utf-8')
quality = client.parseDOM(i, 'span', ret='class')[0]
if quality == 'quality_cam' or quality == 'quality_ts': quality = 'CAM'
elif quality == 'quality_dvd': quality = 'SD'
else: raise Exception()
sources.append({'source': host, 'quality': quality, 'provider': 'Primewire', 'url': url})
except:
pass
return sources
except:
return sources
def resolve(self, url):
try:
url = resolvers.request(url)
return url
except:
return
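# Hedged usage sketch (not part of the add-on): callers are expected to
# chain the methods above roughly as follows; all identifiers and IDs are
# made up for illustration.
#
#     s = source()
#     show = s.get_show('tt0000000', '000000', 'Some Show', '2007')
#     episode = s.get_episode(show, 'tt0000000', '000000', '', '', '1', '2')
#     for item in s.get_sources(episode, [], ['somehost'], None):
#         stream = s.resolve(item['url'])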
|
from pyshop.models import (create_engine, dispose_engine,
Base, DBSession,
Group, User, Permission,
Classifier, Package, Release, ReleaseFile
)
from pyshop.bin.install import populate
from .conf import settings
def setUpModule():
engine = create_engine(settings)
populate(engine, interactive=False)
session = DBSession()
admin_user = User.by_login(session, u'admin')
local_user = User(login=u'local_user', password=u'secret', local=True,
firstname=u'Local', lastname=u'User')
local_user.groups.append(Group.by_name(session, u'developer'))
jdo = User(login=u'johndo', local=False)
jdoe = User(login=u'janedoe', local=False)
session.add(jdo)
session.add(jdoe)
session.add(local_user)
classifiers_names = [u'Programming Language :: Python',
u'Programming Language :: Python :: 2.6',
u'Programming Language :: Python :: 2.7',
u'Topic :: Software Development',
u'Topic :: System :: Archiving :: Mirroring',
u'Topic :: System :: Archiving :: Packaging',
u'Intended Audience :: Developers',
u'Intended Audience :: System Administrators'
]
classifiers = [Classifier.by_name(session, name=c,
create_if_not_exists=True)
for c in classifiers_names]
pack1 = Package(name=u'mirrored_package1')
pack1.owners.append(jdo)
pack1.owners.append(jdoe)
pack1.downloads = 7
session.add(pack1)
release1 = Release(package=pack1, version=u'0.1',
summary=u'Common Usage Library',
author=jdoe)
for c in classifiers[:3]:
release1.classifiers.append(c)
session.add(release1)
release1.files.append(ReleaseFile(filename=u'mirrored_package1-0.1.tar.gz',
package_type=u'sdist'))
session.add(release1)
release2 = Release(package=pack1, version=u'0.2',
summary=u'Common Usage Library')
for c in classifiers[:5]:
release2.classifiers.append(c)
release2.files.append(ReleaseFile(filename=u'mirrored_package1-0.2.tar.gz',
package_type=u'sdist'))
release2.files.append(ReleaseFile(filename=u'mirrored_package1-0.2.egg',
package_type=u'bdist_egg'))
session.add(release2)
pack2 = Package(name=u'mirrored_package2')
pack2.owners.append(jdo)
pack2.maintainers.append(jdoe)
pack2.downloads = 1
session.add(pack2)
release3 = Release(package=pack2, version=u'1.0',
summary=u'Web Framework For Everybody')
for c in classifiers[:3] + classifiers[-2:]:
release3.classifiers.append(c)
session.add(release3)
release3.files.append(ReleaseFile(filename=u'mirrored_package2-1.0.tar.gz',
package_type=u'sdist'))
session.add(release3)
pack3 = Package(name=u'local_package1', local=True)
pack3.owners.append(local_user)
pack3.owners.append(admin_user)
session.add(pack3)
release4 = Release(package=pack3, version=u'0.1',
summary=u'Pet Shop Application')
for c in classifiers:
release4.classifiers.append(c)
release4.files.append(ReleaseFile(filename=u'local_package1-0.1.tar.gz',
package_type=u'sdist'))
session.add(release4)
session.commit()
def tearDownModule():
dispose_engine()
|
"""Support for International Space Station data sensor."""
from datetime import timedelta
import logging
import pyiss
import requests
import voluptuous as vol
from homeassistant.components.binary_sensor import PLATFORM_SCHEMA, BinarySensorEntity
from homeassistant.const import (
ATTR_LATITUDE,
ATTR_LONGITUDE,
CONF_NAME,
CONF_SHOW_ON_MAP,
)
import homeassistant.helpers.config_validation as cv
from homeassistant.util import Throttle
_LOGGER = logging.getLogger(__name__)
ATTR_ISS_NEXT_RISE = "next_rise"
ATTR_ISS_NUMBER_PEOPLE_SPACE = "number_of_people_in_space"
DEFAULT_NAME = "ISS"
DEFAULT_DEVICE_CLASS = "visible"
MIN_TIME_BETWEEN_UPDATES = timedelta(seconds=60)
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
vol.Optional(CONF_SHOW_ON_MAP, default=False): cv.boolean,
}
)
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the ISS sensor."""
if None in (hass.config.latitude, hass.config.longitude):
_LOGGER.error("Latitude or longitude not set in Home Assistant config")
return False
try:
iss_data = IssData(hass.config.latitude, hass.config.longitude)
iss_data.update()
except requests.exceptions.HTTPError as error:
_LOGGER.error(error)
return False
name = config.get(CONF_NAME)
show_on_map = config.get(CONF_SHOW_ON_MAP)
add_entities([IssBinarySensor(iss_data, name, show_on_map)], True)
class IssBinarySensor(BinarySensorEntity):
"""Implementation of the ISS binary sensor."""
def __init__(self, iss_data, name, show):
"""Initialize the sensor."""
self.iss_data = iss_data
self._state = None
self._name = name
self._show_on_map = show
@property
def name(self):
"""Return the name of the sensor."""
return self._name
@property
def is_on(self):
"""Return true if the binary sensor is on."""
return self.iss_data.is_above if self.iss_data else False
@property
def device_class(self):
"""Return the class of this sensor."""
return DEFAULT_DEVICE_CLASS
@property
def extra_state_attributes(self):
"""Return the state attributes."""
if self.iss_data:
attrs = {
ATTR_ISS_NUMBER_PEOPLE_SPACE: self.iss_data.number_of_people_in_space,
ATTR_ISS_NEXT_RISE: self.iss_data.next_rise,
}
if self._show_on_map:
attrs[ATTR_LONGITUDE] = self.iss_data.position.get("longitude")
attrs[ATTR_LATITUDE] = self.iss_data.position.get("latitude")
else:
attrs["long"] = self.iss_data.position.get("longitude")
attrs["lat"] = self.iss_data.position.get("latitude")
return attrs
def update(self):
"""Get the latest data from ISS API and updates the states."""
self.iss_data.update()
class IssData:
"""Get data from the ISS API."""
def __init__(self, latitude, longitude):
"""Initialize the data object."""
self.is_above = None
self.next_rise = None
self.number_of_people_in_space = None
self.position = None
self.latitude = latitude
self.longitude = longitude
@Throttle(MIN_TIME_BETWEEN_UPDATES)
def update(self):
"""Get the latest data from the ISS API."""
try:
iss = pyiss.ISS()
self.is_above = iss.is_ISS_above(self.latitude, self.longitude)
self.next_rise = iss.next_rise(self.latitude, self.longitude)
self.number_of_people_in_space = iss.number_of_people_in_space()
self.position = iss.current_location()
except (requests.exceptions.HTTPError, requests.exceptions.ConnectionError):
_LOGGER.error("Unable to retrieve data")
return False
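# Hedged standalone sketch (not part of this platform): IssData only wraps
# the pyiss calls used in update() above, so the same data can be fetched
# directly; the coordinates below are made up for illustration.
#
#     import pyiss
#     iss = pyiss.ISS()
#     print(iss.number_of_people_in_space())
#     print(iss.is_ISS_above(52.37, 4.89))   # latitude, longitude
#     print(iss.next_rise(52.37, 4.89))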
|
from typing import Text
from django.test import TestCase
from zerver.lib.test_classes import WebhookTestCase
from zerver.webhooks.appfollow.view import convert_markdown
class AppFollowHookTests(WebhookTestCase):
STREAM_NAME = 'appfollow'
URL_TEMPLATE = u"/api/v1/external/appfollow?stream={stream}&api_key={api_key}"
def test_sample(self) -> None:
expected_subject = "Webhook integration was successful."
expected_message = u"""Webhook integration was successful.
Test User / Acme (Google Play)"""
self.send_and_test_stream_message('sample', expected_subject, expected_message,
content_type="application/x-www-form-urlencoded")
def test_reviews(self) -> None:
expected_subject = "Acme - Group chat"
expected_message = u"""Acme - Group chat
App Store, Acme Technologies, Inc.
★★★★★ United States
**Great for Information Management**
Acme enables me to manage the flow of information quite well. I only wish I could create and edit my Acme Post files in the iOS app.
*by* **Mr RESOLUTIONARY** *for v3.9*
[Permalink](http://appfollow.io/permalink) · [Add tag](http://watch.appfollow.io/add_tag)"""
self.send_and_test_stream_message('review', expected_subject, expected_message,
content_type="application/x-www-form-urlencoded")
def test_reviews_with_topic(self) -> None:
# This temporary patch of URL_TEMPLATE is a code smell, but it is required
# due to the way WebhookTestCase is built.
original_url_template = self.URL_TEMPLATE
self.URL_TEMPLATE = original_url_template + "&topic=foo"
self.url = self.build_webhook_url()
expected_subject = "foo"
expected_message = u"""Acme - Group chat
App Store, Acme Technologies, Inc.
★★★★★ United States
**Great for Information Management**
Acme enables me to manage the flow of information quite well. I only wish I could create and edit my Acme Post files in the iOS app.
*by* **Mr RESOLUTIONARY** *for v3.9*
[Permalink](http://appfollow.io/permalink) · [Add tag](http://watch.appfollow.io/add_tag)"""
self.send_and_test_stream_message('review', expected_subject, expected_message,
content_type="application/x-www-form-urlencoded")
self.URL_TEMPLATE = original_url_template
def get_body(self, fixture_name: Text) -> Text:
return self.fixture_data("appfollow", fixture_name, file_type="json")
class ConvertMarkdownTest(TestCase):
def test_convert_bold(self) -> None:
self.assertEqual(convert_markdown("*test message*"), "**test message**")
def test_convert_italics(self) -> None:
self.assertEqual(convert_markdown("_test message_"), "*test message*")
self.assertEqual(convert_markdown("_ spaced message _"), " *spaced message* ")
def test_convert_strikethrough(self) -> None:
self.assertEqual(convert_markdown("~test message~"), "~~test message~~")
|
"""
Offer numeric state listening automation rules.
For more details about this automation rule, please refer to the documentation
at https://home-assistant.io/docs/automation/trigger/#numeric-state-trigger
"""
import asyncio
import logging
import voluptuous as vol
from homeassistant.core import callback
from homeassistant.const import (
CONF_VALUE_TEMPLATE, CONF_PLATFORM, CONF_ENTITY_ID,
CONF_BELOW, CONF_ABOVE, CONF_FOR)
from homeassistant.helpers.event import (
async_track_state_change, async_track_same_state)
from homeassistant.helpers import condition, config_validation as cv
TRIGGER_SCHEMA = vol.All(vol.Schema({
vol.Required(CONF_PLATFORM): 'numeric_state',
vol.Required(CONF_ENTITY_ID): cv.entity_ids,
vol.Optional(CONF_BELOW): vol.Coerce(float),
vol.Optional(CONF_ABOVE): vol.Coerce(float),
vol.Optional(CONF_VALUE_TEMPLATE): cv.template,
vol.Optional(CONF_FOR): vol.All(cv.time_period, cv.positive_timedelta),
}), cv.has_at_least_one_key(CONF_BELOW, CONF_ABOVE))
_LOGGER = logging.getLogger(__name__)
@asyncio.coroutine
def async_trigger(hass, config, action):
"""Listen for state changes based on configuration."""
entity_id = config.get(CONF_ENTITY_ID)
below = config.get(CONF_BELOW)
above = config.get(CONF_ABOVE)
time_delta = config.get(CONF_FOR)
value_template = config.get(CONF_VALUE_TEMPLATE)
unsub_track_same = {}
entities_triggered = set()
if value_template is not None:
value_template.hass = hass
@callback
def check_numeric_state(entity, from_s, to_s):
"""Return True if criteria are now met."""
if to_s is None:
return False
variables = {
'trigger': {
'platform': 'numeric_state',
'entity_id': entity,
'below': below,
'above': above,
}
}
return condition.async_numeric_state(
hass, to_s, below, above, value_template, variables)
@callback
def state_automation_listener(entity, from_s, to_s):
"""Listen for state changes and calls action."""
@callback
def call_action():
"""Call action with right context."""
hass.async_run_job(action, {
'trigger': {
'platform': 'numeric_state',
'entity_id': entity,
'below': below,
'above': above,
'from_state': from_s,
'to_state': to_s,
}
})
matching = check_numeric_state(entity, from_s, to_s)
if not matching:
entities_triggered.discard(entity)
elif entity not in entities_triggered:
entities_triggered.add(entity)
if time_delta:
unsub_track_same[entity] = async_track_same_state(
hass, time_delta, call_action, entity_ids=entity_id,
async_check_same_func=check_numeric_state)
else:
call_action()
unsub = async_track_state_change(
hass, entity_id, state_automation_listener)
@callback
def async_remove():
"""Remove state listeners async."""
unsub()
for unsub_same in unsub_track_same.values():
unsub_same()
unsub_track_same.clear()
return async_remove
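# Hedged configuration sketch (entity and values are made up): a trigger
# accepted by TRIGGER_SCHEMA above needs `platform` and `entity_id` plus at
# least one of `below`/`above`:
#
#     TRIGGER_SCHEMA({
#         'platform': 'numeric_state',
#         'entity_id': 'sensor.temperature',
#         'below': 20.0,
#         'for': {'minutes': 5},   # optional: state must hold this long
#     })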
|
from distutils.version import LooseVersion
import pytest
from joblib import Parallel
import joblib
from numpy.testing import assert_array_equal
from sklearn._config import config_context, get_config
from sklearn.utils.fixes import delayed
def get_working_memory():
return get_config()["working_memory"]
@pytest.mark.parametrize("n_jobs", [1, 2])
@pytest.mark.parametrize("backend", ["loky", "threading", "multiprocessing"])
def test_configuration_passes_through_to_joblib(n_jobs, backend):
# Tests that the global configuration is passed to joblib jobs
if joblib.__version__ < LooseVersion("0.12") and backend == "loky":
pytest.skip("loky backend does not exist in joblib <0.12")
with config_context(working_memory=123):
results = Parallel(n_jobs=n_jobs, backend=backend)(
delayed(get_working_memory)() for _ in range(2)
)
assert_array_equal(results, [123] * 2)
|
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
from django.core.management import call_command
call_command("loaddata", "transport.json")
def backwards(self, orm):
"Write your backwards methods here."
models = {
u'transport.string': {
'Meta': {'object_name': 'String'},
'created_on': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'db_index': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_modified': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'db_index': 'True', 'blank': 'True'}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'scope_set'", 'null': 'True', 'to': u"orm['transport.String']"}),
'string': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255'})
}
}
complete_apps = ['transport']
symmetrical = True
|
"""Usage information for the main IPython applications.
"""
import sys
from IPython.core import release
cl_usage = """\
=========
IPython
=========
Tools for Interactive Computing in Python
=========================================
A Python shell with automatic history (input and output), dynamic object
introspection, easier configuration, command completion, access to the
system shell and more. IPython can also be embedded in running programs.
Usage
ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ...
If invoked with no options, it executes the file and exits, passing the
remaining arguments to the script, just as if you had specified the same
command with python. You may need to specify `--` before args to be passed
to the script, to prevent IPython from attempting to parse them. If you
specify the option `-i` before the filename, it will enter an interactive
IPython session after running the script, rather than exiting. Files ending
in .py will be treated as normal Python, but files ending in .ipy can
contain special IPython syntax (magic commands, shell expansions, etc.).
Almost all configuration in IPython is available via the command-line. Do
`ipython --help-all` to see all available options. For persistent
configuration, look into your `ipython_config.py` configuration file for
details.
This file is typically installed in the `IPYTHONDIR` directory, and there
is a separate configuration directory for each profile. The default profile
directory will be located in $IPYTHONDIR/profile_default. IPYTHONDIR
defaults to `$HOME/.ipython`. For Windows users, $HOME resolves to
C:\\Documents and Settings\\YourUserName in most instances.
To initialize a profile with the default configuration file, do::
$> ipython profile create
and start editing `IPYTHONDIR/profile_default/ipython_config.py`
In IPython's documentation, we will refer to this directory as
`IPYTHONDIR`, you can change its default location by creating an
environment variable with this name and setting it to the desired path.
For more information, see the manual available in HTML and PDF in your
installation, or online at http://ipython.org/documentation.html.
"""
interactive_usage = """
IPython -- An enhanced Interactive Python
=========================================
IPython offers a combination of convenient shell features, special commands
and a history mechanism for both input (command history) and output (results
caching, similar to Mathematica). It is intended to be a fully compatible
replacement for the standard Python interpreter, while offering vastly
improved functionality and flexibility.
At your system command line, type 'ipython -h' to see the command line
options available. This document only describes interactive features.
MAIN FEATURES
-------------
* Access to the standard Python help. As of Python 2.1, a help system is
available with access to object docstrings and the Python manuals. Simply
type 'help' (no quotes) to access it.
* Magic commands: type %magic for information on the magic subsystem.
* System command aliases, via the %alias command or the configuration file(s).
* Dynamic object information:
Typing ?word or word? prints detailed information about an object. If
certain strings in the object are too long (docstrings, code, etc.) they get
snipped in the center for brevity.
Typing ??word or word?? gives access to the full information without
snipping long strings. Long strings are sent to the screen through the less
pager if longer than the screen, printed otherwise.
The ?/?? system gives access to the full source code for any object (if
available), shows function prototypes and other useful information.
If you just want to see an object's docstring, type '%pdoc object' (without
quotes, and without % if you have automagic on).
Both %pdoc and ?/?? give you access to documentation even on things which are
not explicitly defined. Try for example typing {}.get? or after import os,
type os.path.abspath??. The magic functions %pdef, %source and %file operate
similarly.
* Completion in the local namespace, by typing TAB at the prompt.
At any time, hitting tab will complete any available python commands or
variable names, and show you a list of the possible completions if there's
no unambiguous one. It will also complete filenames in the current directory.
This feature requires the readline and rlcomplete modules, so it won't work
if your Python lacks readline support (such as under Windows).
* Search previous command history in two ways (also requires readline):
- Start typing, and then use Ctrl-p (previous,up) and Ctrl-n (next,down) to
search through only the history items that match what you've typed so
far. If you use Ctrl-p/Ctrl-n at a blank prompt, they just behave like
normal arrow keys.
- Hit Ctrl-r: opens a search prompt. Begin typing and the system searches
your history for lines that match what you've typed so far, completing as
much as it can.
- %hist: search history by index (this does *not* require readline).
* Persistent command history across sessions.
* Logging of input with the ability to save and restore a working session.
* System escape with !. Typing !ls will run 'ls' in the current directory.
* The reload command does a 'deep' reload of a module: changes made to the
module since you imported will actually be available without having to exit.
* Verbose and colored exception traceback printouts. See the magic xmode and
xcolor functions for details (just type %magic).
* Input caching system:
IPython offers numbered prompts (In/Out) with input and output caching. All
input is saved and can be retrieved as variables (besides the usual arrow
key recall).
The following GLOBAL variables always exist (so don't overwrite them!):
_i: stores previous input.
_ii: next previous.
_iii: next-next previous.
_ih : a list of all input _ih[n] is the input from line n.
Additionally, global variables named _i<n> are dynamically created (<n>
being the prompt counter), such that _i<n> == _ih[<n>]
For example, what you typed at prompt 14 is available as _i14 and _ih[14].
You can create macros which contain multiple input lines from this history,
for later re-execution, with the %macro function.
The history function %hist allows you to see any part of your input history
by printing a range of the _i variables. Note that inputs which contain
magic functions (%) appear in the history with a prepended comment. This is
because they aren't really valid Python code, so you can't exec them.
* Output caching system:
For output that is returned from actions, a system similar to the input
cache exists but using _ instead of _i. Only actions that produce a result
(NOT assignments, for example) are cached. If you are familiar with
Mathematica, IPython's _ variables behave exactly like Mathematica's %
variables.
The following GLOBAL variables always exist (so don't overwrite them!):
_ (one underscore): previous output.
__ (two underscores): next previous.
___ (three underscores): next-next previous.
Global variables named _<n> are dynamically created (<n> being the prompt
counter), such that the result of output <n> is always available as _<n>.
Finally, a global dictionary named _oh exists with entries for all lines
which generated output.
* Directory history:
Your history of visited directories is kept in the global list _dh, and the
magic %cd command can be used to go to any entry in that list.
* Auto-parentheses and auto-quotes (adapted from Nathan Gray's LazyPython)
1. Auto-parentheses
Callable objects (i.e. functions, methods, etc) can be invoked like
this (notice the commas between the arguments)::
In [1]: callable_ob arg1, arg2, arg3
and the input will be translated to this::
callable_ob(arg1, arg2, arg3)
This feature is off by default (in rare cases it can produce
undesirable side-effects), but you can activate it at the command-line
by starting IPython with `--autocall 1`, set it permanently in your
configuration file, or turn on at runtime with `%autocall 1`.
You can force auto-parentheses by using '/' as the first character
of a line. For example::
In [1]: /globals # becomes 'globals()'
Note that the '/' MUST be the first character on the line! This
won't work::
In [2]: print /globals # syntax error
In most cases the automatic algorithm should work, so you should
rarely need to explicitly invoke /. One notable exception is if you
are trying to call a function with a list of tuples as arguments (the
parenthesis will confuse IPython)::
In [1]: zip (1,2,3),(4,5,6) # won't work
but this will work::
In [2]: /zip (1,2,3),(4,5,6)
------> zip ((1,2,3),(4,5,6))
Out[2]= [(1, 4), (2, 5), (3, 6)]
IPython tells you that it has altered your command line by
displaying the new command line preceded by -->. e.g.::
In [18]: callable list
-------> callable (list)
2. Auto-Quoting
You can force auto-quoting of a function's arguments by using ',' as
the first character of a line. For example::
In [1]: ,my_function /home/me # becomes my_function("/home/me")
If you use ';' instead, the whole argument is quoted as a single
string (while ',' splits on whitespace)::
In [2]: ,my_function a b c # becomes my_function("a","b","c")
In [3]: ;my_function a b c # becomes my_function("a b c")
Note that the ',' MUST be the first character on the line! This
won't work::
In [4]: x = ,my_function /home/me # syntax error
"""
interactive_usage_min = """\
An enhanced console for Python.
Some of its features are:
- Readline support if the readline library is present.
- Tab completion in the local namespace.
- Logging of input, see command-line options.
- System shell escape via ! , eg !ls.
- Magic commands, starting with a % (like %ls, %pwd, %cd, etc.)
- Keeps track of locally defined variables via %who, %whos.
- Show object information with a ? eg ?x or x? (use ?? for more info).
"""
quick_reference = r"""
IPython -- An enhanced Interactive Python - Quick Reference Card
================================================================
obj?, obj?? : Get help, or more help for object (also works as
?obj, ??obj).
?foo.*abc* : List names in 'foo' containing 'abc' in them.
%magic : Information about IPython's 'magic' % functions.
Magic functions are prefixed by % or %%, and typically take their arguments
without parentheses, quotes or even commas for convenience. Line magics take a
single % and cell magics are prefixed with two %%.
Example magic function calls:
%alias d ls -F : 'd' is now an alias for 'ls -F'
alias d ls -F : Works if 'alias' not a python name
alist = %alias : Get list of aliases to 'alist'
cd /usr/share : Obvious. cd -<tab> to choose from visited dirs.
%cd?? : See help AND source for magic %cd
%timeit x=10 : time the 'x=10' statement with high precision.
%%timeit x=2**100
x**100 : time 'x**100' with a setup of 'x=2**100'; setup code is not
counted. This is an example of a cell magic.
System commands:
!cp a.txt b/ : System command escape, calls os.system()
cp a.txt b/ : after %rehashx, most system commands work without !
cp ${f}.txt $bar : Variable expansion in magics and system commands
files = !ls /usr : Capture system command output
files.s, files.l, files.n: "a b c", ['a','b','c'], 'a\nb\nc'
History:
_i, _ii, _iii : Previous, next previous, next next previous input
_i4, _ih[2:5] : Input history line 4, lines 2-4
exec _i81 : Execute input history line #81 again
%rep 81 : Edit input history line #81
_, __, ___ : previous, next previous, next next previous output
_dh : Directory history
_oh : Output history
%hist : Command history. '%hist -g foo' search history for 'foo'
Autocall:
f 1,2 : f(1,2) # Off by default, enable with %autocall magic.
/f 1,2 : f(1,2) (forced autoparen)
,f 1 2 : f("1","2")
;f 1 2 : f("1 2")
Remember: TAB completion works in many contexts, not just file names
or python names.
The following magic functions are currently available:
"""
gui_reference = """\
===============================
The graphical IPython console
===============================
This console is designed to emulate the look, feel and workflow of a terminal
environment, while adding a number of enhancements that are simply not possible
in a real terminal, such as inline syntax highlighting, true multiline editing,
inline graphics and much more.
This quick reference document contains the basic information you'll need to
know to make the most efficient use of it. For the various command line
options available at startup, type ``ipython qtconsole --help`` at the command line.
Multiline editing
=================
The graphical console is capable of true multiline editing, but it also tries
to behave intuitively like a terminal when possible. If you are used to
IPython's old terminal behavior, you should find the transition painless, and
once you learn a few basic keybindings it will be a much more efficient
environment.
For single expressions or indented blocks, the console behaves almost like the
terminal IPython: single expressions are immediately evaluated, and indented
blocks are evaluated once a single blank line is entered::
In [1]: print "Hello IPython!" # Enter was pressed at the end of the line
Hello IPython!
In [2]: for i in range(10):
...: print i,
...:
0 1 2 3 4 5 6 7 8 9
If you want to enter more than one expression in a single input block
(something not possible in the terminal), you can use ``Control-Enter`` at the
end of your first line instead of ``Enter``. At that point the console goes
into 'cell mode' and even if your inputs are not indented, it will continue
accepting arbitrarily many lines until either you enter an extra blank line or
you hit ``Shift-Enter`` (the key binding that forces execution). When a
multiline cell is entered, IPython analyzes it and executes its code producing
an ``Out[n]`` prompt only for the last expression in it, while the rest of the
cell is executed as if it was a script. An example should clarify this::
In [3]: x=1 # Hit C-Enter here
...: y=2 # from now on, regular Enter is sufficient
...: z=3
...: x**2 # This does *not* produce an Out[] value
...: x+y+z # Only the last expression does
...:
Out[3]: 6
The behavior where an extra blank line forces execution is only active if you
are actually typing at the keyboard each line, and is meant to make it mimic
the IPython terminal behavior. If you paste a long chunk of input (for example
a long script copied from an editor or web browser), it can contain arbitrarily
many intermediate blank lines and they won't cause any problems. As always,
you can then make it execute by appending a blank line *at the end* or hitting
``Shift-Enter`` anywhere within the cell.
With the up arrow key, you can retrieve previous blocks of input that contain
multiple lines. You can move inside of a multiline cell like you would in any
text editor. When you want it executed, the simplest thing to do is to hit the
force execution key, ``Shift-Enter`` (though you can also navigate to the end
and append a blank line by using ``Enter`` twice).
If you've edited a multiline cell and accidentally navigate out of it with the
up or down arrow keys, IPython will clear the cell and replace it with the
contents of the one above or below that you navigated to. If this was an
accident and you want to retrieve the cell you were editing, use the Undo
keybinding, ``Control-z``.
Key bindings
============
The IPython console supports most of the basic Emacs line-oriented keybindings,
in addition to some of its own.
The keybinding prefixes mean:
- ``C``: Control
- ``S``: Shift
- ``M``: Meta (typically the Alt key)
The keybindings themselves are:
- ``Enter``: insert new line (may cause execution, see above).
- ``C-Enter``: *force* new line, *never* causes execution.
- ``S-Enter``: *force* execution regardless of where cursor is, no newline added.
- ``Up``: step backwards through the history.
- ``Down``: step forwards through the history.
- ``S-Up``: search backwards through the history (like ``C-r`` in bash).
- ``S-Down``: search forwards through the history.
- ``C-c``: copy highlighted text to clipboard (prompts are automatically stripped).
- ``C-S-c``: copy highlighted text to clipboard (prompts are not stripped).
- ``C-v``: paste text from clipboard.
- ``C-z``: undo (retrieves lost text if you move out of a cell with the arrows).
- ``C-S-z``: redo.
- ``C-o``: move to 'other' area, between pager and terminal.
- ``C-l``: clear terminal.
- ``C-a``: go to beginning of line.
- ``C-e``: go to end of line.
- ``C-u``: kill from cursor to the beginning of the line.
- ``C-k``: kill from cursor to the end of the line.
- ``C-y``: yank (paste)
- ``C-p``: previous line (like up arrow)
- ``C-n``: next line (like down arrow)
- ``C-f``: forward (like right arrow)
- ``C-b``: back (like left arrow)
- ``C-d``: delete next character, or exits if input is empty
- ``M-<``: move to the beginning of the input region.
- ``M->``: move to the end of the input region.
- ``M-d``: delete next word.
- ``M-Backspace``: delete previous word.
- ``C-.``: force a kernel restart (a confirmation dialog appears).
- ``C-+``: increase font size.
- ``C--``: decrease font size.
- ``C-M-Space``: toggle full screen. (Command-Control-Space on Mac OS X)
The IPython pager
=================
IPython will show long blocks of text from many sources using a builtin pager.
You can control where this pager appears with the ``--paging`` command-line
flag:
- ``inside`` [default]: the pager is overlaid on top of the main terminal. You
must quit the pager to get back to the terminal (similar to how a pager such
as ``less`` or ``more`` works).
- ``vsplit``: the console is made double-tall, and the pager appears on the
bottom area when needed. You can view its contents while using the terminal.
- ``hsplit``: the console is made double-wide, and the pager appears on the
right area when needed. You can view its contents while using the terminal.
- ``none``: the console never pages output.
If you use the vertical or horizontal paging modes, you can navigate between
terminal and pager as follows:
- Tab key: goes from pager to terminal (but not the other way around).
- Control-o: goes from one to another always.
- Mouse: click on either.
In all cases, the ``q`` or ``Escape`` keys quit the pager (when used with the
focus on the pager area).
Running subprocesses
====================
The graphical IPython console uses the ``pexpect`` module to run subprocesses
when you type ``!command``. This has a number of advantages (true asynchronous
output from subprocesses as well as very robust termination of rogue
subprocesses with ``Control-C``), as well as some limitations. The main
limitation is that you can *not* interact back with the subprocess, so anything
that invokes a pager or expects you to type input into it will block and hang
(you can kill it with ``Control-C``).
We have provided as magics ``%less`` to page files (aliased to ``%more``),
``%clear`` to clear the terminal, and ``%man`` on Linux/OSX. These cover the
most common commands you'd want to call in your subshell and that would cause
problems if invoked via ``!cmd``, but you need to be aware of this limitation.
Display
=======
The IPython console can now display objects in a variety of formats, including
HTML, PNG and SVG. This is accomplished using the display functions in
``IPython.core.display``::
In [4]: from IPython.core.display import display, display_html
In [5]: from IPython.core.display import display_png, display_svg
Python objects can simply be passed to these functions and the appropriate
representations will be displayed in the console as long as the objects know
how to compute those representations. The easiest way of teaching objects how
to format themselves in various representations is to define special methods
such as: ``_repr_html_``, ``_repr_svg_`` and ``_repr_png_``. IPython's display formatters
can also be given custom formatter functions for various types::
In [6]: ip = get_ipython()
In [7]: html_formatter = ip.display_formatter.formatters['text/html']
In [8]: html_formatter.for_type(Foo, foo_to_html)
For further details, see ``IPython.core.formatters``.
Inline matplotlib graphics
==========================
The IPython console is capable of displaying matplotlib figures inline, in SVG
or PNG format. If started with the ``matplotlib=inline`` flag, then all figures are
rendered inline automatically (PNG by default). If started with ``--matplotlib``
or ``matplotlib=<your backend>``, then a GUI backend will be used, but IPython's
``display()`` and ``getfigs()`` functions can be used to view plots inline::
In [9]: display(*getfigs()) # display all figures inline
In[10]: display(*getfigs(1,2)) # display figures 1 and 2 inline
"""
quick_guide = """\
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
"""
gui_note = """\
%guiref -> A brief reference about the graphical user interface.
"""
default_banner_parts = [
'Python %s\n' % (sys.version.split('\n')[0],),
'Type "copyright", "credits" or "license" for more information.\n\n',
'IPython {version} -- An enhanced Interactive Python.\n'.format(
version=release.version,
),
quick_guide
]
default_gui_banner_parts = default_banner_parts + [gui_note]
default_banner = ''.join(default_banner_parts)
default_gui_banner = ''.join(default_gui_banner_parts)
def page_guiref(arg_s=None):
"""Show a basic reference about the GUI Console."""
from IPython.core import page
page.page(gui_reference, auto_html=True)
|
"""Unit tests for run_perf_tests."""
import StringIO
import datetime
import json
import re
import unittest
from webkitpy.common.host_mock import MockHost
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.layout_tests.port.driver import DriverOutput
from webkitpy.layout_tests.port.test import TestPort
from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
from webkitpy.performance_tests.perftest import DEFAULT_TEST_RUNNER_COUNT
from webkitpy.performance_tests.perftest import PerfTest
from webkitpy.performance_tests.perftestsrunner import PerfTestsRunner
class MainTest(unittest.TestCase):
def create_runner(self, args=[]):
options, parsed_args = PerfTestsRunner._parse_args(args)
test_port = TestPort(host=MockHost(), options=options)
runner = PerfTestsRunner(args=args, port=test_port)
runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
return runner, test_port
def _add_file(self, runner, dirname, filename, content=True):
dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
runner._host.filesystem.maybe_make_directory(dirname)
runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
def test_collect_tests(self):
runner, port = self.create_runner()
self._add_file(runner, 'inspector', 'a_file.html', 'a content')
tests = runner._collect_tests()
self.assertEqual(len(tests), 1)
def _collect_tests_and_sort_test_name(self, runner):
return sorted([test.test_name() for test in runner._collect_tests()])
def test_collect_tests_with_multiple_files(self):
runner, port = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
def add_file(filename):
port.host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
add_file('test1.html')
add_file('test2.html')
add_file('test3.html')
port.host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['test1.html', 'test2.html'])
def test_collect_tests_with_skipped_list(self):
runner, port = self.create_runner()
self._add_file(runner, 'inspector', 'test1.html')
self._add_file(runner, 'inspector', 'unsupported_test1.html')
self._add_file(runner, 'inspector', 'test2.html')
self._add_file(runner, 'inspector/resources', 'resource_file.html')
self._add_file(runner, 'unsupported', 'unsupported_test2.html')
port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html'])
def test_collect_tests_with_skipped_list_and_files(self):
runner, port = self.create_runner(args=['Suite/Test1.html', 'Suite/SkippedTest1.html', 'SkippedSuite/Test1.html'])
self._add_file(runner, 'SkippedSuite', 'Test1.html')
self._add_file(runner, 'SkippedSuite', 'Test2.html')
self._add_file(runner, 'Suite', 'Test1.html')
self._add_file(runner, 'Suite', 'Test2.html')
self._add_file(runner, 'Suite', 'SkippedTest1.html')
self._add_file(runner, 'Suite', 'SkippedTest2.html')
port.skipped_perf_tests = lambda: ['Suite/SkippedTest1.html', 'Suite/SkippedTest1.html', 'SkippedSuite']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner),
['SkippedSuite/Test1.html', 'Suite/SkippedTest1.html', 'Suite/Test1.html'])
def test_collect_tests_with_ignored_skipped_list(self):
runner, port = self.create_runner(args=['--force'])
self._add_file(runner, 'inspector', 'test1.html')
self._add_file(runner, 'inspector', 'unsupported_test1.html')
self._add_file(runner, 'inspector', 'test2.html')
self._add_file(runner, 'inspector/resources', 'resource_file.html')
self._add_file(runner, 'unsupported', 'unsupported_test2.html')
port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
self.assertItemsEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html', 'inspector/unsupported_test1.html', 'unsupported/unsupported_test2.html'])
def test_default_args(self):
runner, port = self.create_runner()
options, args = PerfTestsRunner._parse_args([])
self.assertTrue(options.build)
self.assertEqual(options.time_out_ms, 600 * 1000)
self.assertTrue(options.generate_results)
self.assertTrue(options.show_results)
self.assertTrue(options.use_skipped_list)
self.assertEqual(options.repeat, 1)
self.assertEqual(options.test_runner_count, DEFAULT_TEST_RUNNER_COUNT)
def test_parse_args(self):
runner, port = self.create_runner()
options, args = PerfTestsRunner._parse_args([
'--build-directory=folder42',
'--platform=platform42',
'--builder-name', 'webkit-mac-1',
'--build-number=56',
'--time-out-ms=42',
'--no-show-results',
'--reset-results',
'--output-json-path=a/output.json',
'--slave-config-json-path=a/source.json',
'--test-results-server=somehost',
'--additional-drt-flag=--enable-threaded-parser',
'--additional-drt-flag=--awesomesauce',
'--repeat=5',
'--test-runner-count=5',
'--debug'])
self.assertTrue(options.build)
self.assertEqual(options.build_directory, 'folder42')
self.assertEqual(options.platform, 'platform42')
self.assertEqual(options.builder_name, 'webkit-mac-1')
self.assertEqual(options.build_number, '56')
self.assertEqual(options.time_out_ms, '42')
self.assertEqual(options.configuration, 'Debug')
self.assertFalse(options.show_results)
self.assertTrue(options.reset_results)
self.assertEqual(options.output_json_path, 'a/output.json')
self.assertEqual(options.slave_config_json_path, 'a/source.json')
self.assertEqual(options.test_results_server, 'somehost')
self.assertEqual(options.additional_drt_flag, ['--enable-threaded-parser', '--awesomesauce'])
self.assertEqual(options.repeat, 5)
self.assertEqual(options.test_runner_count, 5)
def test_upload_json(self):
runner, port = self.create_runner()
port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
class MockFileUploader:
called = []
upload_single_text_file_throws = False
upload_single_text_file_return_value = None
@classmethod
def reset(cls):
cls.called = []
cls.upload_single_text_file_throws = False
cls.upload_single_text_file_return_value = None
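# NOTE: these methods take `mock` instead of `self` so that `self` in
# their bodies still refers to the enclosing test case (via closure) and
# its assert helpers remain available inside the stub.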
def __init__(mock, url, timeout):
self.assertEqual(url, 'https://some.host/some/path')
self.assertTrue(isinstance(timeout, int) and timeout)
mock.called.append('FileUploader')
def upload_single_text_file(mock, filesystem, content_type, filename):
self.assertEqual(filesystem, port.host.filesystem)
self.assertEqual(content_type, 'application/json')
self.assertEqual(filename, 'some.json')
mock.called.append('upload_single_text_file')
if mock.upload_single_text_file_throws:
raise Exception
return mock.upload_single_text_file_return_value
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('OK')
self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('Some error')
output = OutputCapture()
output.capture_output()
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
_, _, logs = output.restore_output()
self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got a bad response:\nSome error\n')
# Throwing an exception in upload_single_text_file shouldn't blow up _upload_json
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_throws = True
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "OK"}')
self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
MockFileUploader.reset()
MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "SomethingHasFailed", "failureStored": false}')
output = OutputCapture()
output.capture_output()
self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
_, _, logs = output.restore_output()
serialized_json = json.dumps({'status': 'SomethingHasFailed', 'failureStored': False}, indent=4)
self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got an error:\n%s\n' % serialized_json)
class InspectorPassTestData:
text = 'RESULT group_name: test_name= 42 ms'
output = """Running inspector/pass.html (2 of 2)
RESULT group_name: test_name= 42 ms
Finished: 0.1 s
"""
class EventTargetWrapperTestData:
text = """Running 20 times
Ignoring warm-up run (1502)
1504
1505
1510
1504
1507
1509
1510
1487
1488
1472
1472
1488
1473
1472
1475
1487
1486
1486
1475
1471
Time:
values 1486, 1471, 1510, 1505, 1478, 1490 ms
avg 1490 ms
median 1488 ms
stdev 15.13935 ms
min 1471 ms
max 1510 ms
"""
output = """Running Bindings/event-target-wrapper.html (1 of 2)
RESULT Bindings: event-target-wrapper: Time= 1490.0 ms
median= 1488.0 ms, stdev= 14.11751 ms, min= 1471.0 ms, max= 1510.0 ms
Finished: 0.1 s
"""
results = {'url': 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
'metrics': {'Time': {'current': [[1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]] * 4}}}
class SomeParserTestData:
text = """Running 20 times
Ignoring warm-up run (1115)
Time:
values 1080, 1120, 1095, 1101, 1104 ms
avg 1100 ms
median 1101 ms
stdev 14.50861 ms
min 1080 ms
max 1120 ms
"""
output = """Running Parser/some-parser.html (2 of 2)
RESULT Parser: some-parser: Time= 1100.0 ms
median= 1101.0 ms, stdev= 13.31402 ms, min= 1080.0 ms, max= 1120.0 ms
Finished: 0.1 s
"""
class MemoryTestData:
text = """Running 20 times
Ignoring warm-up run (1115)
Time:
values 1080, 1120, 1095, 1101, 1104 ms
avg 1100 ms
median 1101 ms
stdev 14.50861 ms
min 1080 ms
max 1120 ms
JS Heap:
values 825000, 811000, 848000, 837000, 829000 bytes
avg 830000 bytes
median 829000 bytes
stdev 13784.04875 bytes
min 811000 bytes
max 848000 bytes
Malloc:
values 529000, 511000, 548000, 536000, 521000 bytes
avg 529000 bytes
median 529000 bytes
stdev 14124.44689 bytes
min 511000 bytes
max 548000 bytes
"""
output = """Running 1 tests
Running Parser/memory-test.html (1 of 1)
RESULT Parser: memory-test: Time= 1100.0 ms
median= 1101.0 ms, stdev= 13.31402 ms, min= 1080.0 ms, max= 1120.0 ms
RESULT Parser: memory-test: JSHeap= 830000.0 bytes
median= 829000.0 bytes, stdev= 12649.11064 bytes, min= 811000.0 bytes, max= 848000.0 bytes
RESULT Parser: memory-test: Malloc= 529000.0 bytes
median= 529000.0 bytes, stdev= 12961.48139 bytes, min= 511000.0 bytes, max= 548000.0 bytes
Finished: 0.1 s
"""
results = {'current': [[1080, 1120, 1095, 1101, 1104]] * 4}
js_heap_results = {'current': [[825000, 811000, 848000, 837000, 829000]] * 4}
malloc_results = {'current': [[529000, 511000, 548000, 536000, 521000]] * 4}
class TestDriver:
def run_test(self, driver_input, stop_when_done):
text = ''
timeout = False
crash = False
if driver_input.test_name.endswith('pass.html'):
text = InspectorPassTestData.text
elif driver_input.test_name.endswith('timeout.html'):
timeout = True
elif driver_input.test_name.endswith('failed.html'):
text = None
elif driver_input.test_name.endswith('tonguey.html'):
text = 'we are not expecting an output from perf tests but RESULT blablabla'
elif driver_input.test_name.endswith('crash.html'):
crash = True
elif driver_input.test_name.endswith('event-target-wrapper.html'):
text = EventTargetWrapperTestData.text
elif driver_input.test_name.endswith('some-parser.html'):
text = SomeParserTestData.text
elif driver_input.test_name.endswith('memory-test.html'):
text = MemoryTestData.text
return DriverOutput(text, '', '', '', crash=crash, timeout=timeout)
def start(self):
"""do nothing"""
def stop(self):
"""do nothing"""
class IntegrationTest(unittest.TestCase):
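# Normalization for log comparison: truncate reported stdev values to five
# decimal places and pin wall-clock "Finished" timings to "0.1 s" so that
# captured output is deterministic across runs.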
def _normalize_output(self, log):
return re.sub(r'(stdev=\s+\d+\.\d{5})\d+', r'\1', re.sub(r'Finished: [0-9\.]+ s', 'Finished: 0.1 s', log))
def _load_output_json(self, runner):
json_content = runner._host.filesystem.read_text_file(runner._output_json_path())
return json.loads(re.sub(r'("stdev":\s*\d+\.\d{5})\d+', r'\1', json_content))
def create_runner(self, args=[], driver_class=TestDriver):
options, parsed_args = PerfTestsRunner._parse_args(args)
test_port = TestPort(host=MockHost(), options=options)
test_port.create_driver = lambda worker_number=None, no_timeout=False: driver_class()
runner = PerfTestsRunner(args=args, port=test_port)
runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
return runner, test_port
def run_test(self, test_name):
runner, port = self.create_runner()
tests = [ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name))]
return runner._run_tests_set(tests) == 0
def test_run_passing_test(self):
self.assertTrue(self.run_test('pass.html'))
def test_run_silent_test(self):
self.assertFalse(self.run_test('silent.html'))
def test_run_failed_test(self):
self.assertFalse(self.run_test('failed.html'))
def test_run_tonguey_test(self):
self.assertFalse(self.run_test('tonguey.html'))
def test_run_timeout_test(self):
self.assertFalse(self.run_test('timeout.html'))
def test_run_crash_test(self):
self.assertFalse(self.run_test('crash.html'))
def _tests_for_runner(self, runner, test_names):
filesystem = runner._host.filesystem
tests = []
for test in test_names:
path = filesystem.join(runner._base_path, test)
dirname = filesystem.dirname(path)
if test.startswith('inspector/'):
tests.append(ChromiumStylePerfTest(runner._port, test, path))
else:
tests.append(PerfTest(runner._port, test, path))
return tests
def test_run_test_set(self):
runner, port = self.create_runner()
tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner._run_tests_set(tests)
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, len(tests) - 1)
self.assertTrue('\nRESULT group_name: test_name= 42 ms\n' in log)
def test_run_test_set_kills_drt_per_run(self):
class TestDriverWithStopCount(TestDriver):
stop_count = 0
def stop(self):
TestDriverWithStopCount.stop_count += 1
runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
unexpected_result_count = runner._run_tests_set(tests)
self.assertEqual(TestDriverWithStopCount.stop_count, 6)
def test_run_test_set_for_parser_tests(self):
runner, port = self.create_runner()
tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner._run_tests_set(tests)
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, 0)
self.assertEqual(self._normalize_output(log), EventTargetWrapperTestData.output + SomeParserTestData.output)
def test_run_memory_test(self):
runner, port = self.create_runner_and_setup_results_template()
runner._timestamp = 123456789
port.host.filesystem.write_text_file(runner._base_path + '/Parser/memory-test.html', 'some content')
output = OutputCapture()
output.capture_output()
try:
unexpected_result_count = runner.run()
finally:
stdout, stderr, log = output.restore_output()
self.assertEqual(unexpected_result_count, 0)
self.assertEqual(self._normalize_output(log), MemoryTestData.output + '\nMOCK: user.open_url: file://...\n')
parser_tests = self._load_output_json(runner)[0]['tests']['Parser']['tests']
self.assertEqual(parser_tests['memory-test']['metrics']['Time'], MemoryTestData.results)
self.assertEqual(parser_tests['memory-test']['metrics']['JSHeap'], MemoryTestData.js_heap_results)
self.assertEqual(parser_tests['memory-test']['metrics']['Malloc'], MemoryTestData.malloc_results)
def _test_run_with_json_output(self, runner, filesystem, upload_succeeds=False, results_shown=True, expected_exit_code=0, repeat=1, compare_logs=True):
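        # Shared helper: write two perf tests, stub out the JSON upload and
        # the timestamps, run the runner, then verify the exit code, the
        # normalized logs, and whether the upload happened.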
filesystem.write_text_file(runner._base_path + '/inspector/pass.html', 'some content')
filesystem.write_text_file(runner._base_path + '/Bindings/event-target-wrapper.html', 'some content')
uploaded = [False]
def mock_upload_json(hostname, json_path, host_path=None):
# FIXME: Get rid of the hard-coded perf.webkit.org once we've completed the transition.
self.assertIn(hostname, ['some.host'])
self.assertIn(json_path, ['/mock-checkout/output.json'])
self.assertIn(host_path, [None, '/api/report'])
uploaded[0] = upload_succeeds
return upload_succeeds
runner._upload_json = mock_upload_json
runner._timestamp = 123456789
runner._utc_timestamp = datetime.datetime(2013, 2, 8, 15, 19, 37, 460000)
output_capture = OutputCapture()
output_capture.capture_output()
try:
self.assertEqual(runner.run(), expected_exit_code)
finally:
stdout, stderr, logs = output_capture.restore_output()
if not expected_exit_code and compare_logs:
expected_logs = ''
for i in xrange(repeat):
runs = ' (Run %d of %d)' % (i + 1, repeat) if repeat > 1 else ''
expected_logs += 'Running 2 tests%s\n' % runs + EventTargetWrapperTestData.output + InspectorPassTestData.output
if results_shown:
expected_logs += 'MOCK: user.open_url: file://...\n'
self.assertEqual(self._normalize_output(logs), expected_logs)
self.assertEqual(uploaded[0], upload_succeeds)
return logs
_event_target_wrapper_and_inspector_results = {
"Bindings":
{"url": "https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings",
"tests": {"event-target-wrapper": EventTargetWrapperTestData.results}}}
def test_run_with_json_output(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
filesystem = port.host.filesystem
self.assertTrue(filesystem.isfile(runner._output_json_path()))
self.assertTrue(filesystem.isfile(filesystem.splitext(runner._output_json_path())[0] + '.html'))
def test_run_with_description(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--description', 'some description'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "description": "some description",
"tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
    def create_runner_and_setup_results_template(self, args=None):
        runner, port = self.create_runner(args)
filesystem = port.host.filesystem
filesystem.write_text_file(runner._base_path + '/resources/results-template.html',
'BEGIN<script src="%AbsolutePathToWebKitTrunk%/some.js"></script>'
'<script src="%AbsolutePathToWebKitTrunk%/other.js"></script><script>%PeformanceTestsResultsJSON%</script>END')
filesystem.write_text_file(runner._base_path + '/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js', 'jquery content')
return runner, port
def test_run_respects_no_results(self):
runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--no-results'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=False, results_shown=False)
self.assertFalse(port.host.filesystem.isfile('/mock-checkout/output.json'))
def test_run_generates_json_by_default(self):
runner, port = self.create_runner_and_setup_results_template()
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
results_page_path = filesystem.splitext(output_json_path)[0] + '.html'
self.assertFalse(filesystem.isfile(output_json_path))
self.assertFalse(filesystem.isfile(results_page_path))
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(output_json_path))
self.assertTrue(filesystem.isfile(results_page_path))
def test_run_merges_output_by_default(self):
runner, port = self.create_runner_and_setup_results_template()
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{"previous": "results"}, {
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
def test_run_respects_reset_results(self):
runner, port = self.create_runner_and_setup_results_template(args=["--reset-results"])
filesystem = port.host.filesystem
output_json_path = runner._output_json_path()
filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
self._test_run_with_json_output(runner, port.host.filesystem)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
def test_run_generates_and_show_results_page(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
page_shown = []
port.show_results_html_file = lambda path: page_shown.append(path)
filesystem = port.host.filesystem
self._test_run_with_json_output(runner, filesystem, results_shown=False)
expected_entry = {"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}
self.maxDiff = None
self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
self.assertEqual(self._load_output_json(runner), [expected_entry])
self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
'<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
self.assertEqual(page_shown[0], '/mock-checkout/output.html')
self._test_run_with_json_output(runner, filesystem, results_shown=False)
self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
self.assertEqual(self._load_output_json(runner), [expected_entry, expected_entry])
self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
'<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
def test_run_respects_no_show_results(self):
show_results_html_file = lambda path: page_shown.append(path)
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
page_shown = []
port.show_results_html_file = show_results_html_file
self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
self.assertEqual(page_shown[0], '/mock-checkout/output.html')
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--no-show-results'])
page_shown = []
port.show_results_html_file = show_results_html_file
self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
self.assertEqual(page_shown, [])
def test_run_with_bad_output_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
port.host.filesystem.write_text_file('/mock-checkout/output.json', 'bad json')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
port.host.filesystem.write_text_file('/mock-checkout/output.json', '{"another bad json": "1"}')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
def test_run_with_slave_config_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value"}')
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}, "builderKey": "value"}])
def test_run_with_bad_slave_config_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
logs = self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
self.assertTrue('Missing slave configuration JSON file: /mock-checkout/slave-config.json' in logs)
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', 'bad json')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '["another bad json"]')
self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
def test_run_with_multiple_repositories(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host'])
port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
self.assertEqual(self._load_output_json(runner), [{
"buildTime": "2013-02-08T15:19:37.460000", "tests": self._event_target_wrapper_and_inspector_results,
"revisions": {"webkit": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"},
"some": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}])
def test_run_with_upload_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertEqual(generated_json[0]['platform'], 'platform1')
self.assertEqual(generated_json[0]['builderName'], 'builder1')
self.assertEqual(generated_json[0]['buildNumber'], 123)
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=False, expected_exit_code=PerfTestsRunner.EXIT_CODE_FAILED_UPLOADING)
def test_run_with_upload_json_should_generate_perf_webkit_json(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123',
'--slave-config-json-path=/mock-checkout/slave-config.json'])
port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value1"}')
self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertTrue(isinstance(generated_json, list))
self.assertEqual(len(generated_json), 1)
output = generated_json[0]
self.maxDiff = None
self.assertEqual(output['platform'], 'platform1')
self.assertEqual(output['buildNumber'], 123)
self.assertEqual(output['buildTime'], '2013-02-08T15:19:37.460000')
self.assertEqual(output['builderName'], 'builder1')
self.assertEqual(output['builderKey'], 'value1')
self.assertEqual(output['revisions'], {'blink': {'revision': '5678', 'timestamp': '2013-02-01 08:48:05 +0000'}})
self.assertEqual(output['tests'].keys(), ['Bindings'])
self.assertEqual(sorted(output['tests']['Bindings'].keys()), ['tests', 'url'])
self.assertEqual(output['tests']['Bindings']['url'], 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings')
self.assertEqual(output['tests']['Bindings']['tests'].keys(), ['event-target-wrapper'])
self.assertEqual(output['tests']['Bindings']['tests']['event-target-wrapper'], {
'url': 'https://src.chromium.org/viewvc/blink/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
'metrics': {'Time': {'current': [[1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]] * 4}}})
def test_run_with_repeat(self):
self.maxDiff = None
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-results-server=some.host', '--repeat', '5'])
        self._test_run_with_json_output(runner, port.host.filesystem, upload_succeeds=True, repeat=5)
        expected_entry = {"buildTime": "2013-02-08T15:19:37.460000",
                          "tests": self._event_target_wrapper_and_inspector_results,
                          "revisions": {"blink": {"timestamp": "2013-02-01 08:48:05 +0000", "revision": "5678"}}}
        self.assertEqual(self._load_output_json(runner), [expected_entry] * 5)
def test_run_with_test_runner_count(self):
runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
'--test-runner-count=3'])
self._test_run_with_json_output(runner, port.host.filesystem, compare_logs=False)
generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
self.assertTrue(isinstance(generated_json, list))
self.assertEqual(len(generated_json), 1)
output = generated_json[0]['tests']['Bindings']['tests']['event-target-wrapper']['metrics']['Time']['current']
self.assertEqual(len(output), 3)
        expected_metrics = EventTargetWrapperTestData.results['metrics']['Time']['current'][0]
        for metrics in output:
            self.assertEqual(metrics, expected_metrics)
|
from copy import deepcopy
import unittest
from host_file_system_provider import HostFileSystemProvider
from host_file_system_iterator import HostFileSystemIterator
from object_store_creator import ObjectStoreCreator
from test_branch_utility import TestBranchUtility
from test_data.canned_data import CANNED_API_FILE_SYSTEM_DATA
from test_file_system import TestFileSystem
def _GetIterationTracker(version):
'''Adds the ChannelInfo object from each iteration to a list, and signals the
loop to stop when |version| is reached.
'''
iterations = []
def callback(file_system, channel_info):
if channel_info.version == version:
return False
iterations.append(channel_info)
return True
return (iterations, callback)
class HostFileSystemIteratorTest(unittest.TestCase):
def setUp(self):
def host_file_system_constructor(branch, **optargs):
return TestFileSystem(deepcopy(CANNED_API_FILE_SYSTEM_DATA[branch]))
host_file_system_provider = HostFileSystemProvider(
ObjectStoreCreator.ForTest(),
constructor_for_test=host_file_system_constructor)
self._branch_utility = TestBranchUtility.CreateWithCannedData()
self._iterator = HostFileSystemIterator(
host_file_system_provider,
self._branch_utility)
  def _GetStableChannelInfo(self, version):
return self._branch_utility.GetStableChannelInfo(version)
def _GetChannelInfo(self, channel_name):
return self._branch_utility.GetChannelInfo(channel_name)
def testAscending(self):
# Start at |stable| version 5, and move up towards |trunk|.
# Total: 28 file systems.
iterations, callback = _GetIterationTracker(0)
self.assertEqual(
self._iterator.Ascending(self._GetStableChannelInfo(5), callback),
self._GetChannelInfo('trunk'))
self.assertEqual(len(iterations), 28)
# Start at |stable| version 5, and move up towards |trunk|. The callback
# fails at |beta|, so the last successful callback was the latest version
# of |stable|. Total: 25 file systems.
iterations, callback = _GetIterationTracker(
self._GetChannelInfo('beta').version)
self.assertEqual(
self._iterator.Ascending(self._GetStableChannelInfo(5), callback),
self._GetChannelInfo('stable'))
self.assertEqual(len(iterations), 25)
# Start at |stable| version 5, and the callback fails immediately. Since
# no file systems are successfully processed, expect a return of None.
iterations, callback = _GetIterationTracker(5)
self.assertEqual(
self._iterator.Ascending(self._GetStableChannelInfo(5), callback),
None)
self.assertEqual([], iterations)
# Start at |stable| version 5, and the callback fails at version 6.
# The return should represent |stable| version 5.
iterations, callback = _GetIterationTracker(6)
self.assertEqual(
self._iterator.Ascending(self._GetStableChannelInfo(5), callback),
self._GetStableChannelInfo(5))
self.assertEqual([self._GetStableChannelInfo(5)], iterations)
# Start at the latest version of |stable|, and the callback fails at
# |trunk|. Total: 3 file systems.
iterations, callback = _GetIterationTracker('trunk')
self.assertEqual(
self._iterator.Ascending(self._GetChannelInfo('stable'), callback),
self._GetChannelInfo('dev'))
self.assertEqual([self._GetChannelInfo('stable'),
self._GetChannelInfo('beta'),
self._GetChannelInfo('dev')], iterations)
# Start at |stable| version 10, and the callback fails at |trunk|.
iterations, callback = _GetIterationTracker('trunk')
self.assertEqual(
self._iterator.Ascending(self._GetStableChannelInfo(10), callback),
self._GetChannelInfo('dev'))
self.assertEqual([self._GetStableChannelInfo(10),
self._GetStableChannelInfo(11),
self._GetStableChannelInfo(12),
self._GetStableChannelInfo(13),
self._GetStableChannelInfo(14),
self._GetStableChannelInfo(15),
self._GetStableChannelInfo(16),
self._GetStableChannelInfo(17),
self._GetStableChannelInfo(18),
self._GetStableChannelInfo(19),
self._GetStableChannelInfo(20),
self._GetStableChannelInfo(21),
self._GetStableChannelInfo(22),
self._GetStableChannelInfo(23),
self._GetStableChannelInfo(24),
self._GetStableChannelInfo(25),
self._GetStableChannelInfo(26),
self._GetStableChannelInfo(27),
self._GetStableChannelInfo(28),
self._GetChannelInfo('stable'),
self._GetChannelInfo('beta'),
self._GetChannelInfo('dev')], iterations)
def testDescending(self):
# Start at |trunk|, and the callback fails immediately. No file systems
# are successfully processed, so Descending() will return None.
iterations, callback = _GetIterationTracker('trunk')
self.assertEqual(
self._iterator.Descending(self._GetChannelInfo('trunk'), callback),
None)
self.assertEqual([], iterations)
# Start at |trunk|, and the callback fails at |dev|. Last good iteration
# should be |trunk|.
iterations, callback = _GetIterationTracker(
self._GetChannelInfo('dev').version)
self.assertEqual(
self._iterator.Descending(self._GetChannelInfo('trunk'), callback),
self._GetChannelInfo('trunk'))
self.assertEqual([self._GetChannelInfo('trunk')], iterations)
# Start at |trunk|, and then move from |dev| down to |stable| at version 5.
# Total: 28 file systems.
iterations, callback = _GetIterationTracker(0)
self.assertEqual(
self._iterator.Descending(self._GetChannelInfo('trunk'), callback),
self._GetStableChannelInfo(5))
self.assertEqual(len(iterations), 28)
# Start at the latest version of |stable|, and move down to |stable| at
# version 5. Total: 25 file systems.
iterations, callback = _GetIterationTracker(0)
self.assertEqual(
self._iterator.Descending(self._GetChannelInfo('stable'), callback),
self._GetStableChannelInfo(5))
self.assertEqual(len(iterations), 25)
# Start at |dev| and iterate down through |stable| versions. The callback
# fails at version 10. Total: 18 file systems.
iterations, callback = _GetIterationTracker(10)
self.assertEqual(
self._iterator.Descending(self._GetChannelInfo('dev'), callback),
self._GetStableChannelInfo(11))
self.assertEqual([self._GetChannelInfo('dev'),
self._GetChannelInfo('beta'),
self._GetChannelInfo('stable'),
self._GetStableChannelInfo(28),
self._GetStableChannelInfo(27),
self._GetStableChannelInfo(26),
self._GetStableChannelInfo(25),
self._GetStableChannelInfo(24),
self._GetStableChannelInfo(23),
self._GetStableChannelInfo(22),
self._GetStableChannelInfo(21),
self._GetStableChannelInfo(20),
self._GetStableChannelInfo(19),
self._GetStableChannelInfo(18),
self._GetStableChannelInfo(17),
self._GetStableChannelInfo(16),
self._GetStableChannelInfo(15),
self._GetStableChannelInfo(14),
self._GetStableChannelInfo(13),
self._GetStableChannelInfo(12),
self._GetStableChannelInfo(11)], iterations)
if __name__ == '__main__':
unittest.main()
|
from __future__ import division, print_function, absolute_import
import warnings
import numpy as np
from numpy.testing import (assert_equal, assert_almost_equal, assert_array_equal,
assert_array_almost_equal, assert_allclose, assert_raises, TestCase,
run_module_suite)
from numpy import array, diff, linspace, meshgrid, ones, pi, shape
from scipy.interpolate.fitpack import bisplrep, bisplev
from scipy.interpolate.fitpack2 import (UnivariateSpline,
LSQUnivariateSpline, InterpolatedUnivariateSpline,
LSQBivariateSpline, SmoothBivariateSpline, RectBivariateSpline,
LSQSphereBivariateSpline, SmoothSphereBivariateSpline,
RectSphereBivariateSpline)
class TestUnivariateSpline(TestCase):
def test_linear_constant(self):
x = [1,2,3]
y = [3,3,3]
lut = UnivariateSpline(x,y,k=1)
assert_array_almost_equal(lut.get_knots(),[1,3])
assert_array_almost_equal(lut.get_coeffs(),[3,3])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2]),[3,3,3])
def test_preserve_shape(self):
x = [1, 2, 3]
y = [0, 2, 4]
lut = UnivariateSpline(x, y, k=1)
arg = 2
assert_equal(shape(arg), shape(lut(arg)))
assert_equal(shape(arg), shape(lut(arg, nu=1)))
arg = [1.5, 2, 2.5]
assert_equal(shape(arg), shape(lut(arg)))
assert_equal(shape(arg), shape(lut(arg, nu=1)))
def test_linear_1d(self):
x = [1,2,3]
y = [0,2,4]
lut = UnivariateSpline(x,y,k=1)
assert_array_almost_equal(lut.get_knots(),[1,3])
assert_array_almost_equal(lut.get_coeffs(),[0,4])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2]),[0,1,2])
def test_subclassing(self):
# See #731
class ZeroSpline(UnivariateSpline):
def __call__(self, x):
return 0*array(x)
sp = ZeroSpline([1,2,3,4,5], [3,2,3,2,3], k=2)
assert_array_equal(sp([1.5, 2.5]), [0., 0.])
def test_empty_input(self):
# Test whether empty input returns an empty output. Ticket 1014
x = [1,3,5,7,9]
y = [0,4,9,12,21]
spl = UnivariateSpline(x, y, k=3)
assert_array_equal(spl([]), array([]))
def test_resize_regression(self):
"""Regression test for #1375."""
x = [-1., -0.65016502, -0.58856235, -0.26903553, -0.17370892,
-0.10011001, 0., 0.10011001, 0.17370892, 0.26903553, 0.58856235,
0.65016502, 1.]
y = [1.,0.62928599, 0.5797223, 0.39965815, 0.36322694, 0.3508061,
0.35214793, 0.3508061, 0.36322694, 0.39965815, 0.5797223,
0.62928599, 1.]
w = [1.00000000e+12, 6.88875973e+02, 4.89314737e+02, 4.26864807e+02,
6.07746770e+02, 4.51341444e+02, 3.17480210e+02, 4.51341444e+02,
6.07746770e+02, 4.26864807e+02, 4.89314737e+02, 6.88875973e+02,
1.00000000e+12]
spl = UnivariateSpline(x=x, y=y, w=w, s=None)
desired = array([0.35100374, 0.51715855, 0.87789547, 0.98719344])
assert_allclose(spl([0.1, 0.5, 0.9, 0.99]), desired, atol=5e-4)
def test_out_of_range_regression(self):
# Test different extrapolation modes. See ticket 3557
x = np.arange(5, dtype=float)
y = x**3
xp = linspace(-8, 13, 100)
xp_zeros = xp.copy()
xp_zeros[np.logical_or(xp_zeros < 0., xp_zeros > 4.)] = 0
xp_clip = xp.copy()
xp_clip[xp_clip < x[0]] = x[0]
xp_clip[xp_clip > x[-1]] = x[-1]
for cls in [UnivariateSpline, InterpolatedUnivariateSpline]:
spl = cls(x=x, y=y)
for ext in [0, 'extrapolate']:
assert_allclose(spl(xp, ext=ext), xp**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp**3, atol=1e-16)
for ext in [1, 'zeros']:
assert_allclose(spl(xp, ext=ext), xp_zeros**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp_zeros**3, atol=1e-16)
for ext in [2, 'raise']:
assert_raises(ValueError, spl, xp, **dict(ext=ext))
for ext in [3, 'const']:
assert_allclose(spl(xp, ext=ext), xp_clip**3, atol=1e-16)
assert_allclose(cls(x, y, ext=ext)(xp), xp_clip**3, atol=1e-16)
# also test LSQUnivariateSpline [which needs explicit knots]
t = spl.get_knots()[3:4] # interior knots w/ default k=3
spl = LSQUnivariateSpline(x, y, t)
assert_allclose(spl(xp, ext=0), xp**3, atol=1e-16)
assert_allclose(spl(xp, ext=1), xp_zeros**3, atol=1e-16)
assert_raises(ValueError, spl, xp, **dict(ext=2))
assert_allclose(spl(xp, ext=3), xp_clip**3, atol=1e-16)
# also make sure that unknown values for `ext` are caught early
for ext in [-1, 'unknown']:
spl = UnivariateSpline(x, y)
assert_raises(ValueError, spl, xp, **dict(ext=ext))
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, ext=ext))
def test_lsq_fpchec(self):
xs = np.arange(100) * 1.
ys = np.arange(100) * 1.
knots = np.linspace(0, 99, 10)
bbox = (-1, 101)
assert_raises(ValueError, LSQUnivariateSpline, xs, ys, knots,
bbox=bbox)
def test_derivative_and_antiderivative(self):
# Thin wrappers to splder/splantider, so light smoke test only.
x = np.linspace(0, 1, 70)**3
y = np.cos(x)
spl = UnivariateSpline(x, y, s=0)
spl2 = spl.antiderivative(2).derivative(2)
assert_allclose(spl(0.3), spl2(0.3))
spl2 = spl.antiderivative(1)
assert_allclose(spl2(0.6) - spl2(0.2),
spl.integral(0.2, 0.6))
def test_nan(self):
# bail out early if the input data contains nans
x = np.arange(10, dtype=float)
y = x**3
w = np.ones_like(x)
# also test LSQUnivariateSpline [which needs explicit knots]
spl = UnivariateSpline(x, y, check_finite=True)
t = spl.get_knots()[3:4] # interior knots w/ default k=3
y_end = y[-1]
for z in [np.nan, np.inf, -np.inf]:
y[-1] = z
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, check_finite=True))
assert_raises(ValueError, LSQUnivariateSpline,
**dict(x=x, y=y, t=t, check_finite=True))
y[-1] = y_end # check valid y but invalid w
w[-1] = z
assert_raises(ValueError, UnivariateSpline,
**dict(x=x, y=y, w=w, check_finite=True))
assert_raises(ValueError, InterpolatedUnivariateSpline,
**dict(x=x, y=y, w=w, check_finite=True))
assert_raises(ValueError, LSQUnivariateSpline,
**dict(x=x, y=y, t=t, w=w, check_finite=True))
class TestLSQBivariateSpline(TestCase):
# NOTE: The systems in this test class are rank-deficient
def test_linear_constant(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with warnings.catch_warnings(record=True): # coefficients of the ...
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
assert_almost_equal(lut(2,2), 3.)
def test_bilinearity(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [0,7,8,3,4,7,1,3,4]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with warnings.catch_warnings():
# This seems to fail (ier=1, see ticket 1642).
warnings.simplefilter('ignore', UserWarning)
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
tx, ty = lut.get_knots()
for xa, xb in zip(tx[:-1], tx[1:]):
for ya, yb in zip(ty[:-1], ty[1:]):
for t in [0.1, 0.5, 0.9]:
for s in [0.3, 0.4, 0.7]:
xp = xa*(1-t) + xb*t
yp = ya*(1-s) + yb*s
zp = (+ lut(xa, ya)*(1-t)*(1-s)
+ lut(xb, ya)*t*(1-s)
+ lut(xa, yb)*(1-t)*s
+ lut(xb, yb)*t*s)
assert_almost_equal(lut(xp,yp), zp)
def test_integral(self):
x = [1,1,1,2,2,2,8,8,8]
y = [1,2,3,1,2,3,1,2,3]
z = array([0,7,8,3,4,7,1,3,4])
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with warnings.catch_warnings(record=True): # coefficients of the ...
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
tx, ty = lut.get_knots()
tz = lut(tx, ty)
trpz = .25*(diff(tx)[:,None]*diff(ty)[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-1], ty[0], ty[-1]),
trpz)
def test_empty_input(self):
# Test whether empty inputs returns an empty output. Ticket 1014
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
s = 0.1
tx = [1+s,3-s]
ty = [1+s,3-s]
with warnings.catch_warnings(record=True): # coefficients of the ...
lut = LSQBivariateSpline(x,y,z,tx,ty,kx=1,ky=1)
assert_array_equal(lut([], []), np.zeros((0,0)))
assert_array_equal(lut([], [], grid=False), np.zeros((0,)))
class TestSmoothBivariateSpline(TestCase):
def test_linear_constant(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [3,3,3,3,3,3,3,3,3]
lut = SmoothBivariateSpline(x,y,z,kx=1,ky=1)
assert_array_almost_equal(lut.get_knots(),([1,1,3,3],[1,1,3,3]))
assert_array_almost_equal(lut.get_coeffs(),[3,3,3,3])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2],[1,1.5]),[[3,3],[3,3],[3,3]])
def test_linear_1d(self):
x = [1,1,1,2,2,2,3,3,3]
y = [1,2,3,1,2,3,1,2,3]
z = [0,0,0,2,2,2,4,4,4]
lut = SmoothBivariateSpline(x,y,z,kx=1,ky=1)
assert_array_almost_equal(lut.get_knots(),([1,1,3,3],[1,1,3,3]))
assert_array_almost_equal(lut.get_coeffs(),[0,0,4,4])
assert_almost_equal(lut.get_residual(),0.0)
assert_array_almost_equal(lut([1,1.5,2],[1,1.5]),[[0,0],[1,1],[2,2]])
def test_integral(self):
x = [1,1,1,2,2,2,4,4,4]
y = [1,2,3,1,2,3,1,2,3]
z = array([0,7,8,3,4,7,1,3,4])
with warnings.catch_warnings():
# This seems to fail (ier=1, see ticket 1642).
warnings.simplefilter('ignore', UserWarning)
lut = SmoothBivariateSpline(x, y, z, kx=1, ky=1, s=0)
tx = [1,2,4]
ty = [1,2,3]
tz = lut(tx, ty)
trpz = .25*(diff(tx)[:,None]*diff(ty)[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-1], ty[0], ty[-1]), trpz)
lut2 = SmoothBivariateSpline(x, y, z, kx=2, ky=2, s=0)
assert_almost_equal(lut2.integral(tx[0], tx[-1], ty[0], ty[-1]), trpz,
decimal=0) # the quadratures give 23.75 and 23.85
tz = lut(tx[:-1], ty[:-1])
trpz = .25*(diff(tx[:-1])[:,None]*diff(ty[:-1])[None,:]
* (tz[:-1,:-1]+tz[1:,:-1]+tz[:-1,1:]+tz[1:,1:])).sum()
assert_almost_equal(lut.integral(tx[0], tx[-2], ty[0], ty[-2]), trpz)
def test_rerun_lwrk2_too_small(self):
# in this setting, lwrk2 is too small in the default run. Here we
# check for equality with the bisplrep/bisplev output because there,
# an automatic re-run of the spline representation is done if ier>10.
x = np.linspace(-2, 2, 80)
y = np.linspace(-2, 2, 80)
z = x + y
xi = np.linspace(-1, 1, 100)
yi = np.linspace(-2, 2, 100)
tck = bisplrep(x, y, z)
res1 = bisplev(xi, yi, tck)
interp_ = SmoothBivariateSpline(x, y, z)
res2 = interp_(xi, yi)
assert_almost_equal(res1, res2)
class TestLSQSphereBivariateSpline(TestCase):
def setUp(self):
# define the input data and coordinates
ntheta, nphi = 70, 90
theta = linspace(0.5/(ntheta - 1), 1 - 0.5/(ntheta - 1), ntheta) * pi
phi = linspace(0.5/(nphi - 1), 1 - 0.5/(nphi - 1), nphi) * 2. * pi
data = ones((theta.shape[0], phi.shape[0]))
# define knots and extract data values at the knots
knotst = theta[::5]
knotsp = phi[::5]
knotdata = data[::5, ::5]
# calculate spline coefficients
lats, lons = meshgrid(theta, phi)
lut_lsq = LSQSphereBivariateSpline(lats.ravel(), lons.ravel(),
data.T.ravel(), knotst, knotsp)
self.lut_lsq = lut_lsq
self.data = knotdata
self.new_lons, self.new_lats = knotsp, knotst
def test_linear_constant(self):
assert_almost_equal(self.lut_lsq.get_residual(), 0.0)
assert_array_almost_equal(self.lut_lsq(self.new_lats, self.new_lons),
self.data)
def test_empty_input(self):
assert_array_almost_equal(self.lut_lsq([], []), np.zeros((0,0)))
assert_array_almost_equal(self.lut_lsq([], [], grid=False), np.zeros((0,)))
class TestSmoothSphereBivariateSpline(TestCase):
def setUp(self):
theta = array([.25*pi, .25*pi, .25*pi, .5*pi, .5*pi, .5*pi, .75*pi,
.75*pi, .75*pi])
phi = array([.5 * pi, pi, 1.5 * pi, .5 * pi, pi, 1.5 * pi, .5 * pi, pi,
1.5 * pi])
r = array([3, 3, 3, 3, 3, 3, 3, 3, 3])
self.lut = SmoothSphereBivariateSpline(theta, phi, r, s=1E10)
def test_linear_constant(self):
assert_almost_equal(self.lut.get_residual(), 0.)
assert_array_almost_equal(self.lut([1, 1.5, 2],[1, 1.5]),
[[3, 3], [3, 3], [3, 3]])
def test_empty_input(self):
assert_array_almost_equal(self.lut([], []), np.zeros((0,0)))
assert_array_almost_equal(self.lut([], [], grid=False), np.zeros((0,)))
class TestRectBivariateSpline(TestCase):
def test_defaults(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y),z)
def test_evaluate(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
xi = [1, 2.3, 5.3, 0.5, 3.3, 1.2, 3]
yi = [1, 3.3, 1.2, 4.0, 5.0, 1.0, 3]
zi = lut.ev(xi, yi)
zi2 = array([lut(xp, yp)[0,0] for xp, yp in zip(xi, yi)])
assert_almost_equal(zi, zi2)
def test_derivatives_grid(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
dx = array([[0,0,-20,0,0],[0,0,13,0,0],[0,0,4,0,0],
[0,0,-11,0,0],[0,0,4,0,0]])/6.
dy = array([[4,-1,0,1,-4],[4,-1,0,1,-4],[0,1.5,0,-1.5,0],
[2,.25,0,-.25,-2],[4,-1,0,1,-4]])
dxdy = array([[40,-25,0,25,-40],[-26,16.25,0,-16.25,26],
[-8,5,0,-5,8],[22,-13.75,0,13.75,-22],[-8,5,0,-5,8]])/6.
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y,dx=1),dx)
assert_array_almost_equal(lut(x,y,dy=1),dy)
assert_array_almost_equal(lut(x,y,dx=1,dy=1),dxdy)
def test_derivatives(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
dx = array([0,0,2./3,0,0])
dy = array([4,-1,0,-.25,-4])
dxdy = array([160,65,0,55,32])/24.
lut = RectBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y,dx=1,grid=False),dx)
assert_array_almost_equal(lut(x,y,dy=1,grid=False),dy)
assert_array_almost_equal(lut(x,y,dx=1,dy=1,grid=False),dxdy)
def test_broadcast(self):
x = array([1,2,3,4,5])
y = array([1,2,3,4,5])
z = array([[1,2,1,2,1],[1,2,1,2,1],[1,2,3,2,1],[1,2,2,2,1],[1,2,1,2,1]])
lut = RectBivariateSpline(x,y,z)
assert_allclose(lut(x, y), lut(x[:,None], y[None,:], grid=False))
class TestRectSphereBivariateSpline(TestCase):
def test_defaults(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
assert_array_almost_equal(lut(x,y),z)
def test_evaluate(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
yi = [0.2, 1, 2.3, 2.35, 3.0, 3.99, 5.25]
xi = [1.5, 0.4, 1.1, 0.45, 0.2345, 1., 0.0001]
zi = lut.ev(xi, yi)
zi2 = array([lut(xp, yp)[0,0] for xp, yp in zip(xi, yi)])
assert_almost_equal(zi, zi2)
def test_derivatives_grid(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
y = linspace(0.02, 2*pi-0.02, 7)
x = linspace(0.02, pi-0.02, 7)
assert_allclose(lut(x, y, dtheta=1), _numdiff_2d(lut, x, y, dx=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dphi=1), _numdiff_2d(lut, x, y, dy=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dtheta=1, dphi=1), _numdiff_2d(lut, x, y, dx=1, dy=1, eps=1e-6),
rtol=1e-3, atol=1e-3)
def test_derivatives(self):
y = linspace(0.01, 2*pi-0.01, 7)
x = linspace(0.01, pi-0.01, 7)
z = array([[1,2,1,2,1,2,1],[1,2,1,2,1,2,1],[1,2,3,2,1,2,1],
[1,2,2,2,1,2,1],[1,2,1,2,1,2,1],[1,2,2,2,1,2,1],
[1,2,1,2,1,2,1]])
lut = RectSphereBivariateSpline(x,y,z)
y = linspace(0.02, 2*pi-0.02, 7)
x = linspace(0.02, pi-0.02, 7)
assert_equal(lut(x, y, dtheta=1, grid=False).shape, x.shape)
assert_allclose(lut(x, y, dtheta=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dx=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dphi=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dy=1),
rtol=1e-4, atol=1e-4)
assert_allclose(lut(x, y, dtheta=1, dphi=1, grid=False),
_numdiff_2d(lambda x,y: lut(x,y,grid=False), x, y, dx=1, dy=1, eps=1e-6),
rtol=1e-3, atol=1e-3)
def _numdiff_2d(func, x, y, dx=0, dy=0, eps=1e-8):
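    # Central finite-difference approximation of the requested partial
    # derivative of func; dx=dy=1 uses the four-point cross difference.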
if dx == 0 and dy == 0:
return func(x, y)
elif dx == 1 and dy == 0:
return (func(x + eps, y) - func(x - eps, y)) / (2*eps)
elif dx == 0 and dy == 1:
return (func(x, y + eps) - func(x, y - eps)) / (2*eps)
elif dx == 1 and dy == 1:
return (func(x + eps, y + eps) - func(x - eps, y + eps)
- func(x + eps, y - eps) + func(x - eps, y - eps)) / (2*eps)**2
else:
raise ValueError("invalid derivative order")
if __name__ == "__main__":
run_module_suite()
|
"""SCons.Tool.filesystem
Tool-specific initialization for the filesystem tools.
There normally shouldn't be any need to import this module directly.
It will usually be imported through the generic SCons.Tool.Tool()
selection method.
"""
__revision__ = "src/engine/SCons/Tool/filesystem.py 2014/03/02 14:18:15 garyo"
import SCons
import SCons.Action
import SCons.Builder
from SCons.Tool.install import copyFunc
copyToBuilder, copyAsBuilder = None, None
def copyto_emitter(target, source, env):
""" changes the path of the source to be under the target (which
are assumed to be directories.
"""
n_target = []
for t in target:
n_target = n_target + [t.File( str( s ) ) for s in source]
return (n_target, source)
def copy_action_func(target, source, env):
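    # Copy each source node to its paired target path; a nonzero result from
    # copyFunc is propagated as action failure (return 1).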
    assert len(target) == len(source), "\ntarget: %s\nsource: %s" % (list(map(str, target)), list(map(str, source)))
for t, s in zip(target, source):
if copyFunc(t.get_path(), s.get_path(), env):
return 1
return 0
def copy_action_str(target, source, env):
return env.subst_target_source(env['COPYSTR'], 0, target, source)
copy_action = SCons.Action.Action( copy_action_func, copy_action_str )
def generate(env):
try:
env['BUILDERS']['CopyTo']
env['BUILDERS']['CopyAs']
    except KeyError:
global copyToBuilder
if copyToBuilder is None:
copyToBuilder = SCons.Builder.Builder(
action = copy_action,
target_factory = env.fs.Dir,
source_factory = env.fs.Entry,
multi = 1,
emitter = [ copyto_emitter, ] )
global copyAsBuilder
if copyAsBuilder is None:
copyAsBuilder = SCons.Builder.Builder(
action = copy_action,
target_factory = env.fs.Entry,
source_factory = env.fs.Entry )
env['BUILDERS']['CopyTo'] = copyToBuilder
env['BUILDERS']['CopyAs'] = copyAsBuilder
env['COPYSTR'] = 'Copy file(s): "$SOURCES" to "$TARGETS"'
def exists(env):
return 1
|
"""
Debugging module. Stores the program debug level and provides debugging output
functions. Only debugging output with level less than or equal to the current
debug level is output. Note that debugging output with level greater than the
current maximum debug level is treated as having a level equal to the current
maximum debug level, and that debugging output with level less than zero is
treated as having a level equal to zero.
"""
import sys
class DebugLevel:
"""
Class used to store a debug level.
"""
def __init__(self, level = 1, maxLevel = 3):
"""
Initialise a debug level.
"""
self.SetLevel(level)
self.SetMaxLevel(maxLevel)
return
def GetLevel(self):
"""
Get the debug level.
"""
return self._level
def SetLevel(self, level = 1):
"""
Set the debug level.
"""
level = max(level, 0)
        try:
            self._level = min(level, self.GetMaxLevel())
        except AttributeError:
            # The maximum level is not yet set when called from __init__.
            self._level = level
return
def GetMaxLevel(self):
"""
Get the maximum debug level.
"""
return self._maxLevel
def SetMaxLevel(self, maxLevel = 3):
"""
Set the maximum debug level.
"""
maxLevel = max(maxLevel, 0)
self._maxLevel = maxLevel
self.SetLevel(self._level)
return
_debugLevel = DebugLevel()
def GetDebugLevel():
"""
Get the current debug level.
"""
return _debugLevel.GetLevel()
def SetDebugLevel(level = 1):
"""
Set the current debug level.
"""
_debugLevel.SetLevel(level)
return
def GetMaxDebugLevel():
"""
Get the current maximum debug level.
"""
return _debugLevel.GetMaxLevel()
def SetMaxDebugLevel(level = 3):
"""
Set the current maximum debug level.
"""
_debugLevel.SetMaxLevel(level)
return
def dprint(msg, level = 1, newline = True, flush = True):
"""
Print a debug message to standard output with supplied debug level.
"""
dwrite(sys.stdout, msg, level, newline, flush)
return
def deprint(msg, level = 1, newline = True, flush = True):
"""
Print a debug message to standard error with supplied debug level.
"""
dwrite(sys.stderr, msg, level, newline, flush)
return
def dwrite(stream, msg, level = 1, newline = True, flush = True):
"""
Print a debug message to the supplied file stream with supplied debug level.
"""
level = max(level, 0)
level = min(level, GetMaxDebugLevel())
if level <= GetDebugLevel():
stream.write(str(msg))
if newline:
stream.write("\n")
if flush:
stream.flush()
return
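if __name__ == "__main__":
    # Minimal usage sketch (illustrative, not part of the original module).
    # Levels above the maximum are clamped down, so the level-3 message below
    # is still printed while the maximum level is 2.
    SetMaxDebugLevel(2)
    SetDebugLevel(2)
    dprint("level 1 message")
    dprint("level 3 message, clamped to level 2", level=3)
    SetDebugLevel(1)
    deprint("level 2 detail, suppressed at debug level 1", level=2)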
|
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pycompat24 import get_exception
DOCUMENTATION = """
module: pamd
author:
- "Kenneth D. Evensen (@kevensen)"
short_description: Manage PAM Modules
description:
- Edit PAM service's type, control, module path and module arguments.
In order for a PAM rule to be modified, the type, control and
module_path must match an existing rule. See man(5) pam.d for details.
version_added: "2.3"
options:
name:
required: true
description:
- The name generally refers to the PAM service file to
change, for example system-auth.
type:
required: true
description:
- The type of the PAM rule being modified. The type, control
and module_path all must match a rule to be modified.
control:
required: true
description:
- The control of the PAM rule being modified. This may be a
complicated control with brackets. If this is the case, be
sure to put "[bracketed controls]" in quotes. The type,
control and module_path all must match a rule to be modified.
module_path:
required: true
description:
- The module path of the PAM rule being modified. The type,
control and module_path all must match a rule to be modified.
new_type:
required: false
description:
- The type to assign to the new rule.
new_control:
required: false
description:
- The control to assign to the new rule.
new_module_path:
required: false
description:
      - The module path to assign to the new rule.
module_arguments:
required: false
description:
- When state is 'updated', the module_arguments will replace existing
module_arguments. When state is 'args_absent' args matching those
listed in module_arguments will be removed. When state is
'args_present' any args listed in module_arguments are added if
missing from the existing rule. Furthermore, if the module argument
takes a value denoted by '=', the value will be changed to that specified
in module_arguments.
  state:
    required: false
    default: updated
choices:
- updated
- before
- after
- args_present
- args_absent
description:
- The default of 'updated' will modify an existing rule if type,
control and module_path all match an existing rule. With 'before',
the new rule will be inserted before a rule matching type, control
and module_path. Similarly, with 'after', the new rule will be inserted
after an existing rule matching type, control and module_path. With
either 'before' or 'after' new_type, new_control, and new_module_path
must all be specified. If state is 'args_absent' or 'args_present',
new_type, new_control, and new_module_path will be ignored.
path:
required: false
default: /etc/pam.d/
description:
- This is the path to the PAM service files
"""
EXAMPLES = """
- name: Update pamd rule's control in /etc/pam.d/system-auth
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
new_control: sufficient
- name: Update pamd rule's complex control in /etc/pam.d/system-auth
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
new_control: '[success=2 default=ignore]'
- name: Insert a new rule before an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
new_type: auth
new_control: sufficient
new_module_path: pam_faillock.so
state: before
- name: Insert a new rule after an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
    new_type: auth
    new_control: sufficient
new_module_path: pam_faillock.so
state: after
- name: Remove module arguments from an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: ''
state: updated
- name: Replace all module arguments in an existing rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: 'preauth
silent
deny=3
unlock_time=604800
fail_interval=900'
state: updated
- name: Remove specific arguments from a rule
pamd:
name: system-auth
    type: session
    control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
module_arguments: 'crond quiet'
state: args_absent
- name: Ensure specific arguments are present in a rule
pamd:
name: system-auth
type: session
control: '[success=1 default=ignore]'
module_path: pam_succeed_if.so
module_arguments: 'crond quiet'
state: args_present
- name: Update specific argument value in a rule
pamd:
name: system-auth
type: auth
control: required
module_path: pam_faillock.so
module_arguments: 'fail_interval=300'
state: args_present
"""
RETURN = '''
dest:
description: path to pam.d service that was changed
returned: success
type: string
sample: "/etc/pam.d/system-auth"
...
'''
class PamdRule(object):
def __init__(self, rule_type,
rule_control, rule_module_path,
rule_module_args=None):
self.rule_type = rule_type
self.rule_control = rule_control
self.rule_module_path = rule_module_path
        # The attribute is deliberately left unset when no arguments are
        # given; callers catch the resulting AttributeError.
        if isinstance(rule_module_args, list):
            self.rule_module_args = rule_module_args
        elif isinstance(rule_module_args, str):
            self.rule_module_args = rule_module_args.split()
@classmethod
def rulefromstring(cls, stringline):
split_line = stringline.split()
rule_type = split_line[0]
rule_control = split_line[1]
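        # A bracketed control such as "[success=1 default=ignore]" contains
        # whitespace, so recover the whole control from the raw line and skip
        # the extra tokens it produced in split_line.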
if rule_control.startswith('['):
rule_control = stringline[stringline.index('['):
stringline.index(']')+1]
if "]" in split_line[2]:
rule_module_path = split_line[3]
rule_module_args = split_line[4:]
else:
rule_module_path = split_line[2]
rule_module_args = split_line[3:]
return cls(rule_type, rule_control, rule_module_path, rule_module_args)
def get_module_args_as_string(self):
try:
if self.rule_module_args is not None:
return ' '.join(self.rule_module_args)
except AttributeError:
pass
return ''
def __str__(self):
return "%-10s\t%s\t%s %s" % (self.rule_type,
self.rule_control,
self.rule_module_path,
self.get_module_args_as_string())
class PamdService(object):
def __init__(self, path, name, ansible):
self.path = path
self.name = name
self.check = ansible.check_mode
self.ansible = ansible
self.fname = self.path + "/" + self.name
self.preamble = []
self.rules = []
try:
for line in open(self.fname, 'r'):
if line.startswith('#') and not line.isspace():
self.preamble.append(line.rstrip())
elif not line.startswith('#') and not line.isspace():
                    self.rules.append(
                        PamdRule.rulefromstring(stringline=line.rstrip()))
except Exception:
e = get_exception()
self.ansible.fail_json(msg='Unable to open/read PAM module file ' +
'%s with error %s' % (self.fname, str(e)))
def __str__(self):
return self.fname
def update_rule(service, old_rule, new_rule):
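    # Update, in place, every rule whose type, control and module path match
    # old_rule, applying whichever fields are set on new_rule.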
changed = False
change_count = 0
result = {'action': 'update_rule'}
for rule in service.rules:
if (old_rule.rule_type == rule.rule_type and
old_rule.rule_control == rule.rule_control and
old_rule.rule_module_path == rule.rule_module_path):
if (new_rule.rule_type is not None and
new_rule.rule_type != rule.rule_type):
rule.rule_type = new_rule.rule_type
changed = True
if (new_rule.rule_control is not None and
new_rule.rule_control != rule.rule_control):
rule.rule_control = new_rule.rule_control
changed = True
if (new_rule.rule_module_path is not None and
new_rule.rule_module_path != rule.rule_module_path):
rule.rule_module_path = new_rule.rule_module_path
changed = True
try:
if (new_rule.rule_module_args is not None and
new_rule.rule_module_args !=
rule.rule_module_args):
rule.rule_module_args = new_rule.rule_module_args
changed = True
except AttributeError:
pass
if changed:
result['updated_rule_'+str(change_count)] = str(rule)
result['new_rule'] = str(new_rule)
change_count += 1
result['change_count'] = change_count
return changed, result
def insert_before_rule(service, old_rule, new_rule):
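    # Insert new_rule immediately before each rule matching old_rule; at
    # later positions the insert is skipped if an identical rule already
    # precedes the match.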
index = 0
change_count = 0
result = {'action':
'insert_before_rule'}
changed = False
for rule in service.rules:
if (old_rule.rule_type == rule.rule_type and
old_rule.rule_control == rule.rule_control and
old_rule.rule_module_path == rule.rule_module_path):
if index == 0:
service.rules.insert(0, new_rule)
changed = True
elif (new_rule.rule_type != service.rules[index-1].rule_type or
new_rule.rule_control !=
service.rules[index-1].rule_control or
new_rule.rule_module_path !=
service.rules[index-1].rule_module_path):
service.rules.insert(index, new_rule)
changed = True
if changed:
result['new_rule'] = str(new_rule)
result['before_rule_'+str(change_count)] = str(rule)
change_count += 1
index += 1
result['change_count'] = change_count
return changed, result
def insert_after_rule(service, old_rule, new_rule):
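    # Insert new_rule immediately after each rule matching old_rule, skipping
    # the insert if an identical rule already follows the match.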
index = 0
change_count = 0
result = {'action': 'insert_after_rule'}
changed = False
for rule in service.rules:
if (old_rule.rule_type == rule.rule_type and
old_rule.rule_control == rule.rule_control and
old_rule.rule_module_path == rule.rule_module_path):
            # When the matched rule is the last one there is no following
            # rule to compare against, so append unconditionally.
            if (index + 1 >= len(service.rules) or
                    new_rule.rule_type != service.rules[index+1].rule_type or
                    new_rule.rule_control !=
                    service.rules[index+1].rule_control or
                    new_rule.rule_module_path !=
                    service.rules[index+1].rule_module_path):
                service.rules.insert(index+1, new_rule)
                changed = True
if changed:
result['new_rule'] = str(new_rule)
result['after_rule_'+str(change_count)] = str(rule)
change_count += 1
index += 1
result['change_count'] = change_count
return changed, result
def remove_module_arguments(service, old_rule, module_args):
result = {'action': 'args_absent'}
changed = False
change_count = 0
for rule in service.rules:
if (old_rule.rule_type == rule.rule_type and
old_rule.rule_control == rule.rule_control and
old_rule.rule_module_path == rule.rule_module_path):
for arg_to_remove in module_args.split():
for arg in rule.rule_module_args:
if arg == arg_to_remove:
rule.rule_module_args.remove(arg)
changed = True
result['removed_arg_'+str(change_count)] = arg
result['from_rule_'+str(change_count)] = str(rule)
change_count += 1
result['change_count'] = change_count
return changed, result
def add_module_arguments(service, old_rule, module_args):
result = {'action': 'args_present'}
changed = False
change_count = 0
for rule in service.rules:
if (old_rule.rule_type == rule.rule_type and
old_rule.rule_control == rule.rule_control and
old_rule.rule_module_path == rule.rule_module_path):
for arg_to_add in module_args.split(' '):
if "=" in arg_to_add:
pre_string = arg_to_add[:arg_to_add.index('=')+1]
                    indices = [i for i, arg
                               in enumerate(rule.rule_module_args)
                               if arg.startswith(pre_string)]
                    for i in indices:
if rule.rule_module_args[i] != arg_to_add:
rule.rule_module_args[i] = arg_to_add
changed = True
result['updated_arg_' +
str(change_count)] = arg_to_add
result['in_rule_' +
str(change_count)] = str(rule)
change_count += 1
elif arg_to_add not in rule.rule_module_args:
rule.rule_module_args.append(arg_to_add)
changed = True
result['added_arg_'+str(change_count)] = arg_to_add
result['to_rule_'+str(change_count)] = str(rule)
change_count += 1
result['change_count'] = change_count
return changed, result
def write_rules(service):
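    # Rewrite the service file: preamble comments first, then the rules, with
    # a blank line between consecutive rules of different types.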
previous_rule = None
f = open(service.fname, 'w')
for amble in service.preamble:
f.write(amble+'\n')
for rule in service.rules:
if (previous_rule is not None and
previous_rule.rule_type != rule.rule_type):
f.write('\n')
f.write(str(rule)+'\n')
previous_rule = rule
f.close()
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
type=dict(required=True,
choices=['account', 'auth',
'password', 'session']),
control=dict(required=True),
module_path=dict(required=True),
new_type=dict(required=False,
choices=['account', 'auth',
'password', 'session']),
new_control=dict(required=False),
new_module_path=dict(required=False),
module_arguments=dict(required=False),
state=dict(required=False, default="updated",
choices=['before', 'after', 'updated',
'args_absent', 'args_present']),
path=dict(required=False, default='/etc/pam.d')
),
supports_check_mode=True
)
service = module.params['name']
old_type = module.params['type']
old_control = module.params['control']
old_module_path = module.params['module_path']
new_type = module.params['new_type']
new_control = module.params['new_control']
new_module_path = module.params['new_module_path']
module_arguments = module.params['module_arguments']
state = module.params['state']
path = module.params['path']
pamd = PamdService(path, service, module)
old_rule = PamdRule(old_type,
old_control,
old_module_path)
new_rule = PamdRule(new_type,
new_control,
new_module_path,
module_arguments)
try:
if state == 'updated':
change, result = update_rule(pamd,
old_rule,
new_rule)
elif state == 'before':
if (new_rule.rule_control is None or
new_rule.rule_type is None or
new_rule.rule_module_path is None):
module.fail_json(msg='When inserting a new rule before ' +
'or after an existing rule, new_type, ' +
'new_control and new_module_path must ' +
'all be set.')
change, result = insert_before_rule(pamd,
old_rule,
new_rule)
elif state == 'after':
if (new_rule.rule_control is None or
new_rule.rule_type is None or
new_rule.rule_module_path is None):
                module.fail_json(msg='When inserting a new rule before ' +
                                     'or after an existing rule, new_type, ' +
                                     'new_control and new_module_path must ' +
                                     'all be set.')
change, result = insert_after_rule(pamd,
old_rule,
new_rule)
elif state == 'args_absent':
change, result = remove_module_arguments(pamd,
old_rule,
module_arguments)
elif state == 'args_present':
change, result = add_module_arguments(pamd,
old_rule,
module_arguments)
if not module.check_mode:
write_rules(pamd)
except Exception:
e = get_exception()
        module.fail_json(msg='error changing pamd rules: %s' % str(e))
facts = {}
facts['pamd'] = {'changed': change, 'result': result}
module.params['dest'] = pamd.fname
module.exit_json(changed=change, ansible_facts=facts)
if __name__ == '__main__':
main()
|
from openerp import fields, models
class company(models.Model):
_inherit = "res.company"
calendar_mark_done_user_id = fields.Many2one(
'res.users', string='Calendar Mark Done User')
|
"""The rescue mode extension."""
import webob
from webob import exc
from nova.api.openstack import common
from nova.api.openstack import extensions as exts
from nova.api.openstack import wsgi
from nova import compute
from nova import exception
from nova import flags
from nova import log as logging
from nova import utils
FLAGS = flags.FLAGS
LOG = logging.getLogger(__name__)
authorize = exts.extension_authorizer('compute', 'rescue')
class RescueController(wsgi.Controller):
def __init__(self, *args, **kwargs):
super(RescueController, self).__init__(*args, **kwargs)
self.compute_api = compute.API()
def _get_instance(self, context, instance_id):
try:
return self.compute_api.get(context, instance_id)
except exception.InstanceNotFound:
msg = _("Server not found")
raise exc.HTTPNotFound(msg)
@wsgi.action('rescue')
@exts.wrap_errors
def _rescue(self, req, id, body):
"""Rescue an instance."""
context = req.environ["nova.context"]
authorize(context)
if body['rescue'] and 'adminPass' in body['rescue']:
password = body['rescue']['adminPass']
else:
password = utils.generate_password(FLAGS.password_length)
instance = self._get_instance(context, id)
try:
self.compute_api.rescue(context, instance,
rescue_password=password)
except exception.InstanceInvalidState as state_error:
common.raise_http_conflict_for_instance_invalid_state(state_error,
'rescue')
return {'adminPass': password}
@wsgi.action('unrescue')
@exts.wrap_errors
def _unrescue(self, req, id, body):
"""Unrescue an instance."""
context = req.environ["nova.context"]
authorize(context)
instance = self._get_instance(context, id)
try:
self.compute_api.unrescue(context, instance)
except exception.InstanceInvalidState as state_error:
common.raise_http_conflict_for_instance_invalid_state(state_error,
'unrescue')
return webob.Response(status_int=202)
class Rescue(exts.ExtensionDescriptor):
"""Instance rescue mode"""
name = "Rescue"
alias = "os-rescue"
namespace = "http://docs.openstack.org/compute/ext/rescue/api/v1.1"
updated = "2011-08-18T00:00:00+00:00"
def get_controller_extensions(self):
controller = RescueController()
extension = exts.ControllerExtension(self, 'servers', controller)
return [extension]
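# Minimal request-body sketch for the actions handled above (the password
# value is illustrative; when 'adminPass' is omitted, one is generated from
# FLAGS.password_length):
#
#   POST /v2/{tenant_id}/servers/{server_id}/action
#   {"rescue": {"adminPass": "s3cr3tPass"}}
#
#   POST /v2/{tenant_id}/servers/{server_id}/action
#   {"unrescue": null}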
|
from contextlib import contextmanager
import status
@contextmanager
def context(grpc_context):
"""A context manager that automatically handles KeyError."""
try:
yield
except KeyError as key_error:
grpc_context.code(status.Code.NOT_FOUND)
grpc_context.details(
'Unable to find the item keyed by {}'.format(key_error))
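# A minimal usage sketch (hypothetical servicer method and attribute names):
# wrapping a dictionary lookup so a missing key reaches the client as
# NOT_FOUND instead of crashing the handler.
#
#   def GetItem(self, request, grpc_context):
#       with context(grpc_context):
#           return self._items[request.key]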
|
import distutils.dir_util
import glob
import os
import shutil
import subprocess
import time
import pytest
import logging
from cassandra.concurrent import execute_concurrent_with_args
from dtest_setup_overrides import DTestSetupOverrides
from dtest import Tester, create_ks
from tools.assertions import assert_one
from tools.files import replace_in_file, safe_mkdtemp
from tools.hacks import advance_to_next_cl_segment
from tools.misc import ImmutableMapping, get_current_test_name
since = pytest.mark.since
logger = logging.getLogger(__name__)
class SnapshotTester(Tester):
def create_schema(self, session):
create_ks(session, 'ks', 1)
session.execute('CREATE TABLE ks.cf ( key int PRIMARY KEY, val text);')
def insert_rows(self, session, start, end):
insert_statement = session.prepare("INSERT INTO ks.cf (key, val) VALUES (?, 'asdf')")
args = [(r,) for r in range(start, end)]
execute_concurrent_with_args(session, insert_statement, args, concurrency=20)
def make_snapshot(self, node, ks, cf, name):
logger.debug("Making snapshot....")
node.flush()
snapshot_cmd = 'snapshot {ks} -cf {cf} -t {name}'.format(ks=ks, cf=cf, name=name)
logger.debug("Running snapshot cmd: {snapshot_cmd}".format(snapshot_cmd=snapshot_cmd))
node.nodetool(snapshot_cmd)
tmpdir = safe_mkdtemp()
os.mkdir(os.path.join(tmpdir, ks))
os.mkdir(os.path.join(tmpdir, ks, cf))
        # Find the snapshot dir; its location differs across C* versions
x = 0
for data_dir in node.data_directories():
snapshot_dir = "{data_dir}/{ks}/{cf}/snapshots/{name}".format(data_dir=data_dir, ks=ks, cf=cf, name=name)
if not os.path.isdir(snapshot_dir):
snapshot_dirs = glob.glob("{data_dir}/{ks}/{cf}-*/snapshots/{name}".format(data_dir=data_dir, ks=ks, cf=cf, name=name))
if len(snapshot_dirs) > 0:
snapshot_dir = snapshot_dirs[0]
else:
continue
logger.debug("snapshot_dir is : " + snapshot_dir)
logger.debug("snapshot copy is : " + tmpdir)
# Copy files from the snapshot dir to existing temp dir
distutils.dir_util.copy_tree(str(snapshot_dir), os.path.join(tmpdir, str(x), ks, cf))
x += 1
return tmpdir
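    # Resulting layout sketch (illustrative): with two data directories and a
    # snapshot named 'basic', the returned tmpdir is shaped like
    #
    #   <tmpdir>/0/ks/cf/<sstables from data dir 0>
    #   <tmpdir>/1/ks/cf/<sstables from data dir 1>
    #
    # which is the structure restore_snapshot() below expects.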
def restore_snapshot(self, snapshot_dir, node, ks, cf):
logger.debug("Restoring snapshot....")
for x in range(0, self.cluster.data_dir_count):
snap_dir = os.path.join(snapshot_dir, str(x), ks, cf)
if os.path.exists(snap_dir):
ip = node.address()
args = [node.get_tool('sstableloader'), '-d', ip, snap_dir]
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
exit_status = p.wait()
if exit_status != 0:
raise Exception("sstableloader command '%s' failed; exit status: %d'; stdout: %s; stderr: %s" %
(" ".join(args), exit_status, stdout.decode("utf-8"), stderr.decode("utf-8")))
def restore_snapshot_schema(self, snapshot_dir, node, ks, cf):
logger.debug("Restoring snapshot schema....")
for x in range(0, self.cluster.data_dir_count):
schema_path = os.path.join(snapshot_dir, str(x), ks, cf, 'schema.cql')
if os.path.exists(schema_path):
node.run_cqlsh(cmds="SOURCE '%s'" % schema_path)
class TestSnapshot(SnapshotTester):
def test_basic_snapshot_and_restore(self):
cluster = self.cluster
cluster.populate(1).start()
(node1,) = cluster.nodelist()
session = self.patient_cql_connection(node1)
self.create_schema(session)
self.insert_rows(session, 0, 100)
snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
# Write more data after the snapshot, this will get thrown
# away when we restore:
self.insert_rows(session, 100, 200)
rows = session.execute('SELECT count(*) from ks.cf')
assert rows[0][0] == 200
# Drop the keyspace, make sure we have no data:
session.execute('DROP KEYSPACE ks')
self.create_schema(session)
rows = session.execute('SELECT count(*) from ks.cf')
assert rows[0][0] == 0
# Restore data from snapshot:
self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
node1.nodetool('refresh ks cf')
rows = session.execute('SELECT count(*) from ks.cf')
# clean up
logger.debug("removing snapshot_dir: " + snapshot_dir)
shutil.rmtree(snapshot_dir)
assert rows[0][0] == 100
@since('3.0')
def test_snapshot_and_restore_drop_table_remove_dropped_column(self):
"""
@jira_ticket CASSANDRA-13730
Dropping table should clear entries in dropped_column table
"""
cluster = self.cluster
cluster.populate(1).start()
node1, = cluster.nodelist()
session = self.patient_cql_connection(node1)
# Create schema and insert some data
create_ks(session, 'ks', 1)
session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, a text, b text)")
session.execute("INSERT INTO ks.cf (k, a, b) VALUES (1, 'a', 'b')")
assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
# Take a snapshot and drop the column and then drop table
snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
session.execute("ALTER TABLE ks.cf DROP b")
assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
session.execute("DROP TABLE ks.cf")
# Restore schema and data from snapshot, data should be the same as input
self.restore_snapshot_schema(snapshot_dir, node1, 'ks', 'cf')
self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
node1.nodetool('refresh ks cf')
assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
# Clean up
logger.debug("removing snapshot_dir: " + snapshot_dir)
shutil.rmtree(snapshot_dir)
@since('3.11')
def test_snapshot_and_restore_dropping_a_column(self):
"""
@jira_ticket CASSANDRA-13276
Can't load snapshots of tables with dropped columns.
"""
cluster = self.cluster
cluster.populate(1).start()
node1, = cluster.nodelist()
session = self.patient_cql_connection(node1)
# Create schema and insert some data
create_ks(session, 'ks', 1)
session.execute("CREATE TABLE ks.cf (k int PRIMARY KEY, a text, b text)")
session.execute("INSERT INTO ks.cf (k, a, b) VALUES (1, 'a', 'b')")
assert_one(session, "SELECT * FROM ks.cf", [1, "a", "b"])
# Drop a column
session.execute("ALTER TABLE ks.cf DROP b")
assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
# Take a snapshot and drop the table
snapshot_dir = self.make_snapshot(node1, 'ks', 'cf', 'basic')
session.execute("DROP TABLE ks.cf")
# Restore schema and data from snapshot
self.restore_snapshot_schema(snapshot_dir, node1, 'ks', 'cf')
self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf')
node1.nodetool('refresh ks cf')
assert_one(session, "SELECT * FROM ks.cf", [1, "a"])
# Clean up
logger.debug("removing snapshot_dir: " + snapshot_dir)
shutil.rmtree(snapshot_dir)
class TestArchiveCommitlog(SnapshotTester):
@pytest.fixture(scope='function', autouse=True)
def fixture_dtest_setup_overrides(self, dtest_config):
dtest_setup_overrides = DTestSetupOverrides()
dtest_setup_overrides.cluster_options = ImmutableMapping({'start_rpc': 'true'})
return dtest_setup_overrides
def make_snapshot(self, node, ks, cf, name):
logger.debug("Making snapshot....")
node.flush()
snapshot_cmd = 'snapshot {ks} -cf {cf} -t {name}'.format(ks=ks, cf=cf, name=name)
logger.debug("Running snapshot cmd: {snapshot_cmd}".format(snapshot_cmd=snapshot_cmd))
node.nodetool(snapshot_cmd)
tmpdirs = []
base_tmpdir = safe_mkdtemp()
for x in range(0, self.cluster.data_dir_count):
tmpdir = os.path.join(base_tmpdir, str(x))
os.mkdir(tmpdir)
# Copy files from the snapshot dir to existing temp dir
distutils.dir_util.copy_tree(os.path.join(node.get_path(), 'data{0}'.format(x), ks), tmpdir)
tmpdirs.append(tmpdir)
return tmpdirs
def restore_snapshot(self, snapshot_dir, node, ks, cf, name):
logger.debug("Restoring snapshot for cf ....")
data_dir = os.path.join(node.get_path(), 'data{0}'.format(os.path.basename(snapshot_dir)))
cfs = [s for s in os.listdir(snapshot_dir) if s.startswith(cf + "-")]
if len(cfs) > 0:
cf_id = cfs[0]
glob_path = "{snapshot_dir}/{cf_id}/snapshots/{name}".format(snapshot_dir=snapshot_dir, cf_id=cf_id, name=name)
globbed = glob.glob(glob_path)
if len(globbed) > 0:
snapshot_dir = globbed[0]
if not os.path.exists(os.path.join(data_dir, ks)):
os.mkdir(os.path.join(data_dir, ks))
os.mkdir(os.path.join(data_dir, ks, cf_id))
logger.debug("snapshot_dir is : " + snapshot_dir)
distutils.dir_util.copy_tree(snapshot_dir, os.path.join(data_dir, ks, cf_id))
def test_archive_commitlog(self):
self.run_archive_commitlog(restore_point_in_time=False)
def test_archive_commitlog_with_active_commitlog(self):
"""
Copy the active commitlogs to the archive directory before restoration
"""
self.run_archive_commitlog(restore_point_in_time=False, archive_active_commitlogs=True)
def test_dont_archive_commitlog(self):
"""
Run the archive commitlog test, but forget to add the restore commands
"""
self.run_archive_commitlog(restore_point_in_time=False, restore_archived_commitlog=False)
def test_archive_commitlog_point_in_time(self):
"""
Test archive commit log with restore_point_in_time setting
"""
self.run_archive_commitlog(restore_point_in_time=True)
def test_archive_commitlog_point_in_time_with_active_commitlog(self):
"""
Test archive commit log with restore_point_in_time setting
"""
self.run_archive_commitlog(restore_point_in_time=True, archive_active_commitlogs=True)
def test_archive_commitlog_point_in_time_with_active_commitlog_ln(self):
"""
Test archive commit log with restore_point_in_time setting
"""
self.run_archive_commitlog(restore_point_in_time=True, archive_active_commitlogs=True, archive_command='ln')
def run_archive_commitlog(self, restore_point_in_time=False, restore_archived_commitlog=True, archive_active_commitlogs=False, archive_command='cp'):
"""
Run archive commit log restoration test
"""
cluster = self.cluster
cluster.populate(1)
(node1,) = cluster.nodelist()
# Create a temp directory for storing commitlog archives:
tmp_commitlog = safe_mkdtemp()
logger.debug("tmp_commitlog: " + tmp_commitlog)
# Edit commitlog_archiving.properties and set an archive
# command:
replace_in_file(os.path.join(node1.get_path(), 'conf', 'commitlog_archiving.properties'),
[(r'^archive_command=.*$', 'archive_command={archive_command} %path {tmp_commitlog}/%name'.format(
tmp_commitlog=tmp_commitlog, archive_command=archive_command))])
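        # After the edit, commitlog_archiving.properties contains a line like
        # the following (path is illustrative):
        #
        #   archive_command=cp %path /tmp/tmpXXXXXX/%name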
cluster.start()
session = self.patient_cql_connection(node1)
create_ks(session, 'ks', 1)
# Write until we get a new CL segment. This avoids replaying
# initialization mutations from startup into system tables when
# restoring snapshots. See CASSANDRA-11811.
advance_to_next_cl_segment(
session=session,
commitlog_dir=os.path.join(node1.get_path(), 'commitlogs')
)
session.execute('CREATE TABLE ks.cf ( key bigint PRIMARY KEY, val text);')
logger.debug("Writing first 30,000 rows...")
self.insert_rows(session, 0, 30000)
# Record when this first set of inserts finished:
insert_cutoff_times = [time.gmtime()]
# Delete all commitlog backups so far:
for f in glob.glob(tmp_commitlog + "/*"):
logger.debug('Removing {}'.format(f))
os.remove(f)
snapshot_dirs = self.make_snapshot(node1, 'ks', 'cf', 'basic')
if self.cluster.version() >= '3.0':
system_ks_snapshot_dirs = self.make_snapshot(node1, 'system_schema', 'keyspaces', 'keyspaces')
else:
system_ks_snapshot_dirs = self.make_snapshot(node1, 'system', 'schema_keyspaces', 'keyspaces')
if self.cluster.version() >= '3.0':
system_col_snapshot_dirs = self.make_snapshot(node1, 'system_schema', 'columns', 'columns')
else:
system_col_snapshot_dirs = self.make_snapshot(node1, 'system', 'schema_columns', 'columns')
if self.cluster.version() >= '3.0':
system_ut_snapshot_dirs = self.make_snapshot(node1, 'system_schema', 'types', 'usertypes')
else:
system_ut_snapshot_dirs = self.make_snapshot(node1, 'system', 'schema_usertypes', 'usertypes')
if self.cluster.version() >= '3.0':
system_cfs_snapshot_dirs = self.make_snapshot(node1, 'system_schema', 'tables', 'cfs')
else:
system_cfs_snapshot_dirs = self.make_snapshot(node1, 'system', 'schema_columnfamilies', 'cfs')
try:
# Write more data:
logger.debug("Writing second 30,000 rows...")
self.insert_rows(session, 30000, 60000)
node1.flush()
time.sleep(10)
# Record when this second set of inserts finished:
insert_cutoff_times.append(time.gmtime())
logger.debug("Writing final 5,000 rows...")
self.insert_rows(session, 60000, 65000)
# Record when the third set of inserts finished:
insert_cutoff_times.append(time.gmtime())
# Flush so we get an accurate view of commitlogs
node1.flush()
rows = session.execute('SELECT count(*) from ks.cf')
            # Make sure we have the same number of rows as when we snapshotted:
assert rows[0][0] == 65000
            # Check that at least one commit log has been backed up that
            # is not one of the active commit logs:
commitlog_dir = os.path.join(node1.get_path(), 'commitlogs')
logger.debug("node1 commitlog dir: " + commitlog_dir)
logger.debug("node1 commitlog dir contents: " + str(os.listdir(commitlog_dir)))
logger.debug("tmp_commitlog contents: " + str(os.listdir(tmp_commitlog)))
assert_directory_not_empty(tmp_commitlog, commitlog_dir)
cluster.flush()
cluster.compact()
node1.drain()
# Destroy the cluster
cluster.stop()
logger.debug("node1 commitlog dir contents after stopping: " + str(os.listdir(commitlog_dir)))
logger.debug("tmp_commitlog contents after stopping: " + str(os.listdir(tmp_commitlog)))
self.copy_logs(name=get_current_test_name() + "_pre-restore")
self.fixture_dtest_setup.cleanup_and_replace_cluster()
cluster = self.cluster
cluster.populate(1)
nodes = cluster.nodelist()
assert len(nodes) == 1
node1 = nodes[0]
# Restore schema from snapshots:
for system_ks_snapshot_dir in system_ks_snapshot_dirs:
if self.cluster.version() >= '3.0':
self.restore_snapshot(system_ks_snapshot_dir, node1, 'system_schema', 'keyspaces', 'keyspaces')
else:
self.restore_snapshot(system_ks_snapshot_dir, node1, 'system', 'schema_keyspaces', 'keyspaces')
for system_col_snapshot_dir in system_col_snapshot_dirs:
if self.cluster.version() >= '3.0':
self.restore_snapshot(system_col_snapshot_dir, node1, 'system_schema', 'columns', 'columns')
else:
self.restore_snapshot(system_col_snapshot_dir, node1, 'system', 'schema_columns', 'columns')
for system_ut_snapshot_dir in system_ut_snapshot_dirs:
if self.cluster.version() >= '3.0':
self.restore_snapshot(system_ut_snapshot_dir, node1, 'system_schema', 'types', 'usertypes')
else:
self.restore_snapshot(system_ut_snapshot_dir, node1, 'system', 'schema_usertypes', 'usertypes')
for system_cfs_snapshot_dir in system_cfs_snapshot_dirs:
if self.cluster.version() >= '3.0':
self.restore_snapshot(system_cfs_snapshot_dir, node1, 'system_schema', 'tables', 'cfs')
else:
self.restore_snapshot(system_cfs_snapshot_dir, node1, 'system', 'schema_columnfamilies', 'cfs')
for snapshot_dir in snapshot_dirs:
self.restore_snapshot(snapshot_dir, node1, 'ks', 'cf', 'basic')
cluster.start(wait_for_binary_proto=True)
session = self.patient_cql_connection(node1)
node1.nodetool('refresh ks cf')
rows = session.execute('SELECT count(*) from ks.cf')
            # Make sure we have the same number of rows as when we snapshotted:
assert rows[0][0] == 30000
# Edit commitlog_archiving.properties. Remove the archive
# command and set a restore command and restore_directories:
if restore_archived_commitlog:
replace_in_file(os.path.join(node1.get_path(), 'conf', 'commitlog_archiving.properties'),
[(r'^archive_command=.*$', 'archive_command='),
(r'^restore_command=.*$', 'restore_command=cp -f %from %to'),
(r'^restore_directories=.*$', 'restore_directories={tmp_commitlog}'.format(
tmp_commitlog=tmp_commitlog))])
if restore_point_in_time:
restore_time = time.strftime("%Y:%m:%d %H:%M:%S", insert_cutoff_times[1])
replace_in_file(os.path.join(node1.get_path(), 'conf', 'commitlog_archiving.properties'),
[(r'^restore_point_in_time=.*$', 'restore_point_in_time={restore_time}'.format(restore_time=restore_time))])
logger.debug("Restarting node1..")
node1.stop()
node1.start(wait_for_binary_proto=True)
node1.nodetool('flush')
node1.nodetool('compact')
session = self.patient_cql_connection(node1)
rows = session.execute('SELECT count(*) from ks.cf')
            # Depending on the restore settings we expect either just the
            # snapshot rows (30000), the snapshot plus commitlog rows up to
            # the point-in-time cutoff (60000), or the snapshot plus all
            # archived commitlog rows (65000):
if not restore_archived_commitlog:
assert rows[0][0] == 30000
elif restore_point_in_time:
assert rows[0][0] == 60000
else:
assert rows[0][0] == 65000
finally:
# clean up
logger.debug("removing snapshot_dir: " + ",".join(snapshot_dirs))
for snapshot_dir in snapshot_dirs:
shutil.rmtree(snapshot_dir)
logger.debug("removing snapshot_dir: " + ",".join(system_ks_snapshot_dirs))
for system_ks_snapshot_dir in system_ks_snapshot_dirs:
shutil.rmtree(system_ks_snapshot_dir)
logger.debug("removing snapshot_dir: " + ",".join(system_cfs_snapshot_dirs))
for system_cfs_snapshot_dir in system_cfs_snapshot_dirs:
shutil.rmtree(system_cfs_snapshot_dir)
logger.debug("removing snapshot_dir: " + ",".join(system_ut_snapshot_dirs))
for system_ut_snapshot_dir in system_ut_snapshot_dirs:
shutil.rmtree(system_ut_snapshot_dir)
logger.debug("removing snapshot_dir: " + ",".join(system_col_snapshot_dirs))
for system_col_snapshot_dir in system_col_snapshot_dirs:
shutil.rmtree(system_col_snapshot_dir)
logger.debug("removing tmp_commitlog: " + tmp_commitlog)
shutil.rmtree(tmp_commitlog)
def test_archive_and_restore_commitlog_repeatedly(self):
"""
@jira_ticket CASSANDRA-10593
Run archive commit log restoration test repeatedly to make sure it is idempotent
and doesn't fail if done repeatedly
"""
cluster = self.cluster
cluster.populate(1)
node1 = cluster.nodelist()[0]
# Create a temp directory for storing commitlog archives:
tmp_commitlog = safe_mkdtemp()
logger.debug("tmp_commitlog: {}".format(tmp_commitlog))
# Edit commitlog_archiving.properties and set an archive
# command:
replace_in_file(os.path.join(node1.get_path(), 'conf', 'commitlog_archiving.properties'),
[(r'^archive_command=.*$', 'archive_command=ln %path {tmp_commitlog}/%name'.format(
tmp_commitlog=tmp_commitlog)),
(r'^restore_command=.*$', 'restore_command=cp -f %from %to'),
(r'^restore_directories=.*$', 'restore_directories={tmp_commitlog}'.format(
tmp_commitlog=tmp_commitlog))])
cluster.start(wait_for_binary_proto=True)
logger.debug("Creating initial connection")
session = self.patient_cql_connection(node1)
create_ks(session, 'ks', 1)
session.execute('CREATE TABLE ks.cf ( key bigint PRIMARY KEY, val text);')
logger.debug("Writing 30,000 rows...")
self.insert_rows(session, 0, 60000)
try:
            # Check that at least one commit log has been backed up that
            # is not one of the active commit logs:
commitlog_dir = os.path.join(node1.get_path(), 'commitlogs')
logger.debug("node1 commitlog dir: " + commitlog_dir)
cluster.flush()
assert_directory_not_empty(tmp_commitlog, commitlog_dir)
logger.debug("Flushing and doing first restart")
cluster.compact()
node1.drain()
# restart the node which causes the active commitlogs to be archived
node1.stop()
node1.start(wait_for_binary_proto=True)
logger.debug("Stopping and second restart")
node1.stop()
node1.start(wait_for_binary_proto=True)
# Shouldn't be any additional data since it's replaying the same stuff repeatedly
session = self.patient_cql_connection(node1)
rows = session.execute('SELECT count(*) from ks.cf')
assert rows[0][0] == 60000
finally:
logger.debug("removing tmp_commitlog: " + tmp_commitlog)
shutil.rmtree(tmp_commitlog)
def assert_directory_not_empty(tmp_commitlog, commitlog_dir):
    # At least one file archived into tmp_commitlog must exist that is not
    # one of the node's currently active commitlogs.
    archived = set(os.listdir(tmp_commitlog)) - set(os.listdir(commitlog_dir))
    assert len(archived) != 0
|
import Gaffer
import GafferImage
Gaffer.Metadata.registerNode(
GafferImage.DeepMerge,
"description",
"""
Merges the samples from two or more images into a single deep image.
The source images may be deep or flat.
""",
plugs = {
"in.*" : [
"description",
"""
A deep or flat image input.
""",
],
}
)
|
from __future__ import unicode_literals
import frappe
def execute():
try:
frappe.db.sql("alter table `tabEmail Queue` change `ref_docname` `reference_name` varchar(255)")
	except Exception as e:
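		# MySQL errno 1054 (unknown column) and 1060 (duplicate column name)
		# mean the rename already happened or is not needed; anything else
		# is unexpected and re-raised.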
if e.args[0] not in (1054, 1060):
raise
try:
frappe.db.sql("alter table `tabEmail Queue` change `ref_doctype` `reference_doctype` varchar(255)")
	except Exception as e:
if e.args[0] not in (1054, 1060):
raise
frappe.reload_doctype("Email Queue")
|
"""
***************************************************************************
hugeFileGroundClassify.py
---------------------
Date : May 2014
Copyright : (C) 2014 by Martin Isenburg
Email : martin near rapidlasso point com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Martin Isenburg'
__date__ = 'May 2014'
__copyright__ = '(C) 2014, Martin Isenburg'
__revision__ = '$Format:%H$'
import os
from LAStoolsUtils import LAStoolsUtils
from LAStoolsAlgorithm import LAStoolsAlgorithm
from processing.core.parameters import ParameterBoolean
from processing.core.parameters import ParameterSelection
from processing.core.parameters import ParameterNumber
class hugeFileGroundClassify(LAStoolsAlgorithm):
TILE_SIZE = "TILE_SIZE"
BUFFER = "BUFFER"
AIRBORNE = "AIRBORNE"
TERRAIN = "TERRAIN"
TERRAINS = ["wilderness", "nature", "town", "city", "metro"]
GRANULARITY = "GRANULARITY"
GRANULARITIES = ["coarse", "default", "fine", "extra_fine", "ultra_fine"]
def defineCharacteristics(self):
self.name = "hugeFileGroundClassify"
self.group = "LAStools Pipelines"
self.addParametersPointInputGUI()
self.addParameter(ParameterNumber(
hugeFileGroundClassify.TILE_SIZE,
self.tr("tile size (side length of square tile)"),
0, None, 1000.0))
self.addParameter(ParameterNumber(hugeFileGroundClassify.BUFFER,
self.tr("buffer around each tile (avoids edge artifacts)"),
0, None, 25.0))
self.addParameter(ParameterBoolean(hugeFileGroundClassify.AIRBORNE,
self.tr("airborne LiDAR"), True))
self.addParameter(ParameterSelection(hugeFileGroundClassify.TERRAIN,
self.tr("terrain type"), hugeFileGroundClassify.TERRAINS, 1))
self.addParameter(ParameterSelection(hugeFileGroundClassify.GRANULARITY,
self.tr("preprocessing"), hugeFileGroundClassify.GRANULARITIES, 1))
self.addParametersTemporaryDirectoryGUI()
self.addParametersPointOutputGUI()
self.addParametersCoresGUI()
self.addParametersVerboseGUI()
def processAlgorithm(self, progress):
# first we tile the data with option '-reversible'
commands = [os.path.join(LAStoolsUtils.LAStoolsPath(), "bin", "lastile")]
self.addParametersVerboseCommands(commands)
self.addParametersPointInputCommands(commands)
tile_size = self.getParameterValue(hugeFileGroundClassify.TILE_SIZE)
commands.append("-tile_size")
commands.append(str(tile_size))
buffer = self.getParameterValue(hugeFileGroundClassify.BUFFER)
if buffer != 0.0:
commands.append("-buffer")
commands.append(str(buffer))
commands.append("-reversible")
self.addParametersTemporaryDirectoryAsOutputDirectoryCommands(commands)
commands.append("-o")
commands.append("hugeFileGroundClassify.laz")
LAStoolsUtils.runLAStools(commands, progress)
# then we ground classify the reversible tiles
commands = [os.path.join(LAStoolsUtils.LAStoolsPath(), "bin", "lasground")]
self.addParametersVerboseCommands(commands)
self.addParametersTemporaryDirectoryAsInputFilesCommands(commands, "hugeFileGroundClassify*.laz")
airborne = self.getParameterValue(hugeFileGroundClassify.AIRBORNE)
if not airborne:
commands.append("-not_airborne")
method = self.getParameterValue(hugeFileGroundClassify.TERRAIN)
if method != 1:
commands.append("-" + hugeFileGroundClassify.TERRAINS[method])
granularity = self.getParameterValue(hugeFileGroundClassify.GRANULARITY)
if granularity != 1:
commands.append("-" + hugeFileGroundClassify.GRANULARITIES[granularity])
self.addParametersTemporaryDirectoryAsOutputDirectoryCommands(commands)
commands.append("-odix")
commands.append("_g")
commands.append("-olaz")
self.addParametersCoresCommands(commands)
LAStoolsUtils.runLAStools(commands, progress)
# then we reverse the tiling
commands = [os.path.join(LAStoolsUtils.LAStoolsPath(), "bin", "lastile")]
self.addParametersVerboseCommands(commands)
self.addParametersTemporaryDirectoryAsInputFilesCommands(commands, "hugeFileGroundClassify*_g.laz")
commands.append("-reverse_tiling")
self.addParametersPointOutputCommands(commands)
LAStoolsUtils.runLAStools(commands, progress)
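# Rough command-line equivalent of the three steps above (file names are
# illustrative; the default tile size and buffer are assumed):
#
#   lastile -i huge.laz -tile_size 1000 -buffer 25 -reversible \
#           -odir <tempdir> -o hugeFileGroundClassify.laz
#   lasground -i <tempdir>/hugeFileGroundClassify*.laz -odix _g -olaz
#   lastile -i <tempdir>/hugeFileGroundClassify*_g.laz -reverse_tiling -o classified.laz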
|
import pytest
from cfme.utils import safe_string
@pytest.mark.parametrize("source, result", [
(u'\u25cf', '●'),
(u'ěšč', 'ěšč'),
(u'взорваться', 'взорваться'),
(4, '4')],
ids=['ugly_nonunicode_character', 'latin_diacritics', 'cyrillic', 'non_string'])
def test_safe_string(source, result):
assert safe_string(source) == result
|
import base_module_save
import base_module_record_objects
import base_module_record_data
|
import asyncio
import logging
import unittest
import grpc
from grpc.experimental import aio
from src.proto.grpc.testing import messages_pb2
from src.proto.grpc.testing import test_pb2_grpc
from tests_aio.unit import _common
from tests_aio.unit import _constants
from tests_aio.unit._test_base import AioTestBase
from tests_aio.unit._test_server import _INITIAL_METADATA_KEY
from tests_aio.unit._test_server import _TRAILING_METADATA_KEY
from tests_aio.unit._test_server import start_test_server
_LOCAL_CANCEL_DETAILS_EXPECTATION = 'Locally cancelled by application!'
_INITIAL_METADATA_TO_INJECT = aio.Metadata(
(_INITIAL_METADATA_KEY, 'extra info'),
(_TRAILING_METADATA_KEY, b'\x13\x37'),
)
_TIMEOUT_CHECK_IF_CALLBACK_WAS_CALLED = 1.0
class TestUnaryUnaryClientInterceptor(AioTestBase):
async def setUp(self):
self._server_target, self._server = await start_test_server()
async def tearDown(self):
await self._server.stop(None)
def test_invalid_interceptor(self):
class InvalidInterceptor:
"""Just an invalid Interceptor"""
with self.assertRaises(ValueError):
aio.insecure_channel("", interceptors=[InvalidInterceptor()])
async def test_executed_right_order(self):
interceptors_executed = []
class Interceptor(aio.UnaryUnaryClientInterceptor):
"""Interceptor used for testing if the interceptor is being called"""
async def intercept_unary_unary(self, continuation,
client_call_details, request):
interceptors_executed.append(self)
call = await continuation(client_call_details, request)
return call
interceptors = [Interceptor() for i in range(2)]
async with aio.insecure_channel(self._server_target,
interceptors=interceptors) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
response = await call
# Check that all interceptors were executed, and were executed
# in the right order.
self.assertSequenceEqual(interceptors_executed, interceptors)
self.assertIsInstance(response, messages_pb2.SimpleResponse)
@unittest.expectedFailure
# TODO(https://github.com/grpc/grpc/issues/20144) Once metadata support is
# implemented in the client-side, this test must be implemented.
def test_modify_metadata(self):
raise NotImplementedError()
@unittest.expectedFailure
# TODO(https://github.com/grpc/grpc/issues/20532) Once credentials support is
# implemented in the client-side, this test must be implemented.
def test_modify_credentials(self):
raise NotImplementedError()
async def test_status_code_Ok(self):
class StatusCodeOkInterceptor(aio.UnaryUnaryClientInterceptor):
"""Interceptor used for observing status code Ok returned by the RPC"""
def __init__(self):
self.status_code_Ok_observed = False
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
code = await call.code()
if code == grpc.StatusCode.OK:
self.status_code_Ok_observed = True
return call
interceptor = StatusCodeOkInterceptor()
async with aio.insecure_channel(self._server_target,
interceptors=[interceptor]) as channel:
# when no error StatusCode.OK must be observed
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
await multicallable(messages_pb2.SimpleRequest())
self.assertTrue(interceptor.status_code_Ok_observed)
async def test_add_timeout(self):
class TimeoutInterceptor(aio.UnaryUnaryClientInterceptor):
"""Interceptor used for adding a timeout to the RPC"""
async def intercept_unary_unary(self, continuation,
client_call_details, request):
new_client_call_details = aio.ClientCallDetails(
method=client_call_details.method,
timeout=_constants.UNARY_CALL_WITH_SLEEP_VALUE / 2,
metadata=client_call_details.metadata,
credentials=client_call_details.credentials,
wait_for_ready=client_call_details.wait_for_ready)
return await continuation(new_client_call_details, request)
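        # Rebuilding ClientCallDetails with one field changed and delegating
        # to `continuation` is the general pattern for altering outgoing RPC
        # attributes (timeout, metadata, credentials) from an interceptor.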
interceptor = TimeoutInterceptor()
async with aio.insecure_channel(self._server_target,
interceptors=[interceptor]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCallWithSleep',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
with self.assertRaises(aio.AioRpcError) as exception_context:
await call
self.assertEqual(exception_context.exception.code(),
grpc.StatusCode.DEADLINE_EXCEEDED)
self.assertTrue(call.done())
            self.assertEqual(grpc.StatusCode.DEADLINE_EXCEEDED,
                             await call.code())
async def test_retry(self):
class RetryInterceptor(aio.UnaryUnaryClientInterceptor):
"""Simulates a Retry Interceptor which ends up by making
two RPC calls."""
def __init__(self):
self.calls = []
async def intercept_unary_unary(self, continuation,
client_call_details, request):
new_client_call_details = aio.ClientCallDetails(
method=client_call_details.method,
timeout=_constants.UNARY_CALL_WITH_SLEEP_VALUE / 2,
metadata=client_call_details.metadata,
credentials=client_call_details.credentials,
wait_for_ready=client_call_details.wait_for_ready)
try:
call = await continuation(new_client_call_details, request)
await call
except grpc.RpcError:
pass
self.calls.append(call)
new_client_call_details = aio.ClientCallDetails(
method=client_call_details.method,
timeout=None,
metadata=client_call_details.metadata,
credentials=client_call_details.credentials,
wait_for_ready=client_call_details.wait_for_ready)
call = await continuation(new_client_call_details, request)
self.calls.append(call)
return call
interceptor = RetryInterceptor()
async with aio.insecure_channel(self._server_target,
interceptors=[interceptor]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCallWithSleep',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
await call
self.assertEqual(grpc.StatusCode.OK, await call.code())
            # Check that two calls were made, the first one finishing with
            # a deadline and the second one finishing OK.
self.assertEqual(len(interceptor.calls), 2)
self.assertEqual(await interceptor.calls[0].code(),
grpc.StatusCode.DEADLINE_EXCEEDED)
self.assertEqual(await interceptor.calls[1].code(),
grpc.StatusCode.OK)
async def test_rpcresponse(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
"""Raw responses are seen as reegular calls"""
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
response = await call
return call
class ResponseInterceptor(aio.UnaryUnaryClientInterceptor):
"""Return a raw response"""
response = messages_pb2.SimpleResponse()
async def intercept_unary_unary(self, continuation,
client_call_details, request):
return ResponseInterceptor.response
interceptor, interceptor_response = Interceptor(), ResponseInterceptor()
async with aio.insecure_channel(
self._server_target,
interceptors=[interceptor, interceptor_response]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
response = await call
# Check that the response returned is the one returned by the
# interceptor
self.assertEqual(id(response), id(ResponseInterceptor.response))
# Check all of the UnaryUnaryCallResponse attributes
self.assertTrue(call.done())
self.assertFalse(call.cancel())
self.assertFalse(call.cancelled())
self.assertEqual(await call.code(), grpc.StatusCode.OK)
self.assertEqual(await call.details(), '')
self.assertEqual(await call.initial_metadata(), None)
self.assertEqual(await call.trailing_metadata(), None)
self.assertEqual(await call.debug_error_string(), None)
class TestInterceptedUnaryUnaryCall(AioTestBase):
async def setUp(self):
self._server_target, self._server = await start_test_server()
async def tearDown(self):
await self._server.stop(None)
async def test_call_ok(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
response = await call
self.assertTrue(call.done())
self.assertFalse(call.cancelled())
self.assertEqual(type(response), messages_pb2.SimpleResponse)
self.assertEqual(await call.code(), grpc.StatusCode.OK)
self.assertEqual(await call.details(), '')
self.assertEqual(await call.initial_metadata(), aio.Metadata())
self.assertEqual(await call.trailing_metadata(), aio.Metadata())
async def test_call_ok_awaited(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
await call
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
response = await call
self.assertTrue(call.done())
self.assertFalse(call.cancelled())
self.assertEqual(type(response), messages_pb2.SimpleResponse)
self.assertEqual(await call.code(), grpc.StatusCode.OK)
self.assertEqual(await call.details(), '')
self.assertEqual(await call.initial_metadata(), aio.Metadata())
self.assertEqual(await call.trailing_metadata(), aio.Metadata())
async def test_call_rpc_error(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCallWithSleep',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(
messages_pb2.SimpleRequest(),
timeout=_constants.UNARY_CALL_WITH_SLEEP_VALUE / 2)
with self.assertRaises(aio.AioRpcError) as exception_context:
await call
self.assertTrue(call.done())
self.assertFalse(call.cancelled())
self.assertEqual(await call.code(),
grpc.StatusCode.DEADLINE_EXCEEDED)
self.assertEqual(await call.details(), 'Deadline Exceeded')
self.assertEqual(await call.initial_metadata(), aio.Metadata())
self.assertEqual(await call.trailing_metadata(), aio.Metadata())
async def test_call_rpc_error_awaited(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
await call
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCallWithSleep',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(
messages_pb2.SimpleRequest(),
timeout=_constants.UNARY_CALL_WITH_SLEEP_VALUE / 2)
with self.assertRaises(aio.AioRpcError) as exception_context:
await call
self.assertTrue(call.done())
self.assertFalse(call.cancelled())
self.assertEqual(await call.code(),
grpc.StatusCode.DEADLINE_EXCEEDED)
self.assertEqual(await call.details(), 'Deadline Exceeded')
self.assertEqual(await call.initial_metadata(), aio.Metadata())
self.assertEqual(await call.trailing_metadata(), aio.Metadata())
async def test_cancel_before_rpc(self):
interceptor_reached = asyncio.Event()
wait_for_ever = self.loop.create_future()
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
interceptor_reached.set()
await wait_for_ever
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
self.assertFalse(call.cancelled())
self.assertFalse(call.done())
await interceptor_reached.wait()
self.assertTrue(call.cancel())
with self.assertRaises(asyncio.CancelledError):
await call
self.assertTrue(call.cancelled())
self.assertTrue(call.done())
self.assertEqual(await call.code(), grpc.StatusCode.CANCELLED)
self.assertEqual(await call.details(),
_LOCAL_CANCEL_DETAILS_EXPECTATION)
self.assertEqual(await call.initial_metadata(), None)
self.assertEqual(await call.trailing_metadata(), None)
async def test_cancel_after_rpc(self):
interceptor_reached = asyncio.Event()
wait_for_ever = self.loop.create_future()
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
await call
interceptor_reached.set()
await wait_for_ever
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
self.assertFalse(call.cancelled())
self.assertFalse(call.done())
await interceptor_reached.wait()
self.assertTrue(call.cancel())
with self.assertRaises(asyncio.CancelledError):
await call
self.assertTrue(call.cancelled())
self.assertTrue(call.done())
self.assertEqual(await call.code(), grpc.StatusCode.CANCELLED)
self.assertEqual(await call.details(),
_LOCAL_CANCEL_DETAILS_EXPECTATION)
self.assertEqual(await call.initial_metadata(), None)
self.assertEqual(await call.trailing_metadata(), None)
async def test_cancel_inside_interceptor_after_rpc_awaiting(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
call.cancel()
await call
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
with self.assertRaises(asyncio.CancelledError):
await call
self.assertTrue(call.cancelled())
self.assertTrue(call.done())
self.assertEqual(await call.code(), grpc.StatusCode.CANCELLED)
self.assertEqual(await call.details(),
_LOCAL_CANCEL_DETAILS_EXPECTATION)
self.assertEqual(await call.initial_metadata(), None)
self.assertEqual(await call.trailing_metadata(), None)
async def test_cancel_inside_interceptor_after_rpc_not_awaiting(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
call.cancel()
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
with self.assertRaises(asyncio.CancelledError):
await call
self.assertTrue(call.cancelled())
self.assertTrue(call.done())
self.assertEqual(await call.code(), grpc.StatusCode.CANCELLED)
self.assertEqual(await call.details(),
_LOCAL_CANCEL_DETAILS_EXPECTATION)
self.assertEqual(await call.initial_metadata(), aio.Metadata())
self.assertEqual(
await call.trailing_metadata(), aio.Metadata(),
"When the raw response is None, empty metadata is returned")
async def test_initial_metadata_modification(self):
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
new_metadata = aio.Metadata(*client_call_details.metadata,
*_INITIAL_METADATA_TO_INJECT)
new_details = aio.ClientCallDetails(
method=client_call_details.method,
timeout=client_call_details.timeout,
metadata=new_metadata,
credentials=client_call_details.credentials,
wait_for_ready=client_call_details.wait_for_ready,
)
return await continuation(new_details, request)
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
stub = test_pb2_grpc.TestServiceStub(channel)
call = stub.UnaryCall(messages_pb2.SimpleRequest())
# Expected to see the echoed initial metadata
self.assertTrue(
_common.seen_metadatum(
expected_key=_INITIAL_METADATA_KEY,
expected_value=_INITIAL_METADATA_TO_INJECT[
_INITIAL_METADATA_KEY],
actual=await call.initial_metadata(),
))
# Expected to see the echoed trailing metadata
self.assertTrue(
_common.seen_metadatum(
expected_key=_TRAILING_METADATA_KEY,
expected_value=_INITIAL_METADATA_TO_INJECT[
_TRAILING_METADATA_KEY],
actual=await call.trailing_metadata(),
))
self.assertEqual(await call.code(), grpc.StatusCode.OK)
async def test_add_done_callback_before_finishes(self):
called = asyncio.Event()
interceptor_can_continue = asyncio.Event()
def callback(call):
called.set()
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
await interceptor_can_continue.wait()
call = await continuation(client_call_details, request)
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
call.add_done_callback(callback)
interceptor_can_continue.set()
await call
try:
await asyncio.wait_for(
called.wait(),
timeout=_TIMEOUT_CHECK_IF_CALLBACK_WAS_CALLED)
            except asyncio.TimeoutError:
self.fail("Callback was not called")
async def test_add_done_callback_after_finishes(self):
called = asyncio.Event()
def callback(call):
called.set()
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
await call
call.add_done_callback(callback)
try:
await asyncio.wait_for(
called.wait(),
timeout=_TIMEOUT_CHECK_IF_CALLBACK_WAS_CALLED)
            except asyncio.TimeoutError:
self.fail("Callback was not called")
async def test_add_done_callback_after_finishes_before_await(self):
called = asyncio.Event()
def callback(call):
called.set()
class Interceptor(aio.UnaryUnaryClientInterceptor):
async def intercept_unary_unary(self, continuation,
client_call_details, request):
call = await continuation(client_call_details, request)
return call
async with aio.insecure_channel(self._server_target,
interceptors=[Interceptor()
]) as channel:
multicallable = channel.unary_unary(
'/grpc.testing.TestService/UnaryCall',
request_serializer=messages_pb2.SimpleRequest.SerializeToString,
response_deserializer=messages_pb2.SimpleResponse.FromString)
call = multicallable(messages_pb2.SimpleRequest())
call.add_done_callback(callback)
await call
try:
await asyncio.wait_for(
called.wait(),
timeout=_TIMEOUT_CHECK_IF_CALLBACK_WAS_CALLED)
            except asyncio.TimeoutError:
self.fail("Callback was not called")
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
unittest.main(verbosity=2)
|
"""l3_support
Revision ID: 2c4af419145b
Revises: folsom
Create Date: 2013-03-11 19:26:45.697774
"""
revision = '2c4af419145b'
down_revision = 'folsom'
migration_for_plugins = [
'neutron.plugins.bigswitch.plugin.NeutronRestProxyV2',
'neutron.plugins.hyperv.hyperv_neutron_plugin.HyperVNeutronPlugin',
'neutron.plugins.midonet.plugin.MidonetPluginV2',
'neutron.plugins.nicira.NeutronPlugin.NvpPluginV2'
]
from neutron.db import migration
from neutron.db.migration.alembic_migrations import common_ext_ops
def upgrade(active_plugins=None, options=None):
if not migration.should_run(active_plugins, migration_for_plugins):
return
common_ext_ops.upgrade_l3()
def downgrade(active_plugins=None, options=None):
if not migration.should_run(active_plugins, migration_for_plugins):
return
common_ext_ops.downgrade_l3()
|
import webob
from nova.api.openstack.compute.legacy_v2.contrib import virtual_interfaces \
as vi20
from nova.api.openstack.compute import virtual_interfaces as vi21
from nova import compute
from nova.compute import api as compute_api
from nova import context
from nova import exception
from nova import network
from nova.objects import virtual_interface as vif_obj
from nova import test
from nova.tests.unit.api.openstack import fakes
FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
def compute_api_get(self, context, instance_id, expected_attrs=None,
want_objects=False):
return dict(uuid=FAKE_UUID, id=instance_id, instance_type_id=1, host='bob')
def _generate_fake_vifs(context):
vif = vif_obj.VirtualInterface(context=context)
vif.address = '00-00-00-00-00-00'
vif.network_id = 123
vif.uuid = '00000000-0000-0000-0000-00000000000000000'
fake_vifs = [vif]
vif = vif_obj.VirtualInterface(context=context)
vif.address = '11-11-11-11-11-11'
vif.network_id = 456
vif.uuid = '11111111-1111-1111-1111-11111111111111111'
fake_vifs.append(vif)
return fake_vifs
def get_vifs_by_instance(self, context, instance_id):
return _generate_fake_vifs(context)
class FakeRequest(object):
def __init__(self, context):
self.environ = {'nova.context': context}
class ServerVirtualInterfaceTestV21(test.NoDBTestCase):
def setUp(self):
super(ServerVirtualInterfaceTestV21, self).setUp()
self.stubs.Set(compute.api.API, "get",
compute_api_get)
self.stubs.Set(network.api.API, "get_vifs_by_instance",
get_vifs_by_instance)
self._set_controller()
def _set_controller(self):
self.controller = vi21.ServerVirtualInterfaceController()
def test_get_virtual_interfaces_list(self):
req = fakes.HTTPRequest.blank('')
res_dict = self.controller.index(req, 'fake_uuid')
response = {'virtual_interfaces': [
{'id': '00000000-0000-0000-0000-00000000000000000',
'mac_address': '00-00-00-00-00-00'},
{'id': '11111111-1111-1111-1111-11111111111111111',
'mac_address': '11-11-11-11-11-11'}]}
self.assertEqual(res_dict, response)
def test_vif_instance_not_found(self):
self.mox.StubOutWithMock(compute_api.API, 'get')
fake_context = context.RequestContext('fake', 'fake')
fake_req = FakeRequest(fake_context)
compute_api.API.get(fake_context, 'fake_uuid',
expected_attrs=None,
want_objects=True).AndRaise(
exception.InstanceNotFound(instance_id='instance-0000'))
self.mox.ReplayAll()
self.assertRaises(
webob.exc.HTTPNotFound,
self.controller.index,
fake_req, 'fake_uuid')
class ServerVirtualInterfaceTestV20(ServerVirtualInterfaceTestV21):
def _set_controller(self):
self.controller = vi20.ServerVirtualInterfaceController()
class ServerVirtualInterfaceEnforcementV21(test.NoDBTestCase):
def setUp(self):
super(ServerVirtualInterfaceEnforcementV21, self).setUp()
self.controller = vi21.ServerVirtualInterfaceController()
self.req = fakes.HTTPRequest.blank('')
def test_index_virtual_interfaces_policy_failed(self):
rule_name = "os_compute_api:os-virtual-interfaces"
self.policy.set_rules({rule_name: "project:non_fake"})
exc = self.assertRaises(
exception.PolicyNotAuthorized,
self.controller.index, self.req, fakes.FAKE_UUID)
self.assertEqual(
"Policy doesn't allow %s to be performed." % rule_name,
exc.format_message())
|
"""This component provides support for Stookalert Binary Sensor."""
from datetime import timedelta
import stookalert
import voluptuous as vol
from homeassistant.components.binary_sensor import (
DEVICE_CLASS_SAFETY,
PLATFORM_SCHEMA,
BinarySensorEntity,
)
from homeassistant.const import ATTR_ATTRIBUTION, CONF_NAME
from homeassistant.helpers import config_validation as cv
SCAN_INTERVAL = timedelta(minutes=60)
CONF_PROVINCE = "province"
DEFAULT_DEVICE_CLASS = DEVICE_CLASS_SAFETY
DEFAULT_NAME = "Stookalert"
ATTRIBUTION = "Data provided by rivm.nl"
PROVINCES = [
"Drenthe",
"Flevoland",
"Friesland",
"Gelderland",
"Groningen",
"Limburg",
"Noord-Brabant",
"Noord-Holland",
"Overijssel",
"Utrecht",
"Zeeland",
"Zuid-Holland",
]
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_PROVINCE): vol.In(PROVINCES),
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
}
)
def setup_platform(hass, config, add_entities, discovery_info=None):
"""Set up the Stookalert binary sensor platform."""
province = config[CONF_PROVINCE]
name = config[CONF_NAME]
api_handler = stookalert.stookalert(province)
add_entities([StookalertBinarySensor(name, api_handler)], update_before_add=True)
class StookalertBinarySensor(BinarySensorEntity):
"""An implementation of RIVM Stookalert."""
def __init__(self, name, api_handler):
"""Initialize a Stookalert device."""
self._name = name
self._api_handler = api_handler
@property
def device_state_attributes(self):
"""Return the attribute(s) of the sensor."""
state_attr = {ATTR_ATTRIBUTION: ATTRIBUTION}
if self._api_handler.last_updated is not None:
state_attr["last_updated"] = self._api_handler.last_updated.isoformat()
return state_attr
@property
def name(self):
"""Return the name of the sensor."""
return self._name
@property
def is_on(self):
"""Return True if the Alert is active."""
return self._api_handler.state == 1
@property
def device_class(self):
"""Return the device class of this binary sensor."""
return DEFAULT_DEVICE_CLASS
def update(self):
"""Update the data from the Stookalert handler."""
self._api_handler.get_alerts()
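# A minimal configuration.yaml sketch for this platform (the province value
# is illustrative; any entry from PROVINCES is accepted):
#
#   binary_sensor:
#     - platform: stookalert
#       province: Utrecht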
|
"""BibFormat element
* Part of the video platform prototype
* Creates a list of video suggestions
* Based on word similarity ranking
* Must be done in a collection that holds video records with thumbnails, title and author
"""
from invenio.config import CFG_SITE_URL
from invenio.bibdocfile import BibRecDocs
from invenio.intbitset import intbitset
from invenio.search_engine import perform_request_search
from invenio.bibrank_record_sorter import rank_records
from invenio.search_engine_utils import get_fieldvalues
from invenio.bibencode_utils import timecode_to_seconds
import random
html_skeleton_suggestion = """
<!-- VIDEO SUGGESTION -->
<div class="video_suggestion_box">
<div class="video_suggestion_thumbnail">
<a href="%(video_record_url)s">
<img src="%(video_thumb_url)s" alt="%(video_thumb_alt)s"/>
</a>
<div class="video_suggestion_duration">
%(video_duration)s
</div>
</div>
<div class="video_suggestion_title">
%(video_title)s
</div>
<div class="video_suggestion_author">
by %(video_authors)s
</div>
</div>
"""
def format_element(bfo, collection="Videos", threshold="75", maximum="3", shuffle="True"):
""" Creates video suggestions based on ranking algorithms
@param collection: Collection to take the suggestions from
@param threshold: Value between 0 and 100. Only records ranked higher than the value are presented.
@param maximum: Maximum suggestions to show
@param shuffle: True or False, should the suggestions be shuffled?
"""
if threshold.isdigit():
threshold = int(threshold)
else:
raise ValueError("The given threshold is not a digit")
if maximum.isdigit():
maximum = int(maximum)
else:
raise ValueError("The given maximum is not a digit")
if shuffle == "True":
shuffle = True
else:
        shuffle = False
suggestions = []
recid = bfo.control_field('001')
similar_records = find_similar_videos(recid, collection, threshold, maximum, shuffle)
for sim_recid in similar_records:
thumbnail = get_video_thumbnail(sim_recid)
title = get_video_title(sim_recid)
authors = get_video_authors(sim_recid)
url = get_video_record_url(sim_recid)
duration = get_video_duration(sim_recid)
suggestion = html_skeleton_suggestion % {
'video_record_url': url,
'video_thumb_url': thumbnail[0],
'video_thumb_alt': thumbnail[1],
'video_duration': duration,
'video_title': title,
'video_authors': authors,
}
suggestions.append(suggestion)
return "\n".join(suggestions)
def find_similar_videos(recid, collection="Videos", threshold=75, maximum=3, shuffle=True):
""" Returns a list of similar video records
"""
similar_records = []
collection_recids = intbitset(perform_request_search(cc=collection))
ranking = rank_records('wrd', 0, collection_recids, ['recid:' + str(recid)])
    ## example rank_records() return value: ([recid, ...], [rank, ...], '(', ')', '')
for list_pos, rank in enumerate(ranking[1]):
if rank >= threshold:
similar_records.append(ranking[0][list_pos])
if shuffle:
if maximum > len(similar_records):
maximum = len(similar_records)
return random.sample(similar_records, maximum)
else:
return similar_records[:maximum]
def get_video_thumbnail(recid):
""" Returns the URL and ALT text for a video thumbnail of a given record
"""
comments = get_fieldvalues(recid, '8564_z')
descriptions = get_fieldvalues(recid, '8564_y')
urls = get_fieldvalues(recid, '8564_u')
for pos, comment in enumerate(comments):
if comment in ('SUGGESTIONTHUMB', 'BIGTHUMB', 'THUMB', 'SMALLTHUMB', 'POSTER'):
return (urls[pos], descriptions[pos])
return ("", "")
def get_video_title(recid):
""" Return the Title of a video record
"""
return get_fieldvalues(recid, '245__a')[0]
def get_video_authors(recid):
""" Return the Authors of a video record
"""
return ", ".join(get_fieldvalues(recid, '100__a'))
def get_video_record_url(recid):
""" Return the URL of a video record
"""
return CFG_SITE_URL + "/record/" + str(recid)
def get_video_duration(recid):
""" Return the duration of a video
"""
duration = get_fieldvalues(recid, '950__d')
if duration:
duration = duration[0]
duration = timecode_to_seconds(duration)
return human_readable_time(duration)
else:
return ""
def human_readable_time(seconds):
    """ Creates a human readable duration representation
    """
    for unit in ['s', 'm', 'h']:
        if seconds < 60.0:
            return "%.0f %s" % (seconds, unit)
        seconds /= 60.0
    # longer than 60 hours: fall back to reporting hours
    return "%.0f h" % (seconds * 60.0)
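# Worked examples for the helper above (a sketch, not part of the element):
#   human_readable_time(42)   -> "42 s"
#   human_readable_time(90)   -> "2 m"   (rounded)
#   human_readable_time(7200) -> "2 h"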
def escape_values(bfo):
"""
Called by BibFormat in order to check if output of this element
should be escaped.
"""
return 0
|
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = """
---
module: nxos_config
extends_documentation_fragment: nxos
version_added: "2.1"
author: "Peter Sprygada (@privateip)"
short_description: Manage Cisco NXOS configuration sections
description:
- Cisco NXOS configurations use a simple block indent file syntax
for segmenting configuration into sections. This module provides
an implementation for working with NXOS configuration sections in
a deterministic way. This module works with either CLI or NXAPI
transports.
options:
lines:
description:
- The ordered set of commands that should be configured in the
section. The commands must be the exact same commands as found
in the device running-config. Be sure to note the configuration
command syntax as some commands are automatically modified by the
device config parser.
required: false
default: null
parents:
description:
- The ordered set of parents that uniquely identify the section
the commands should be checked against. If the parents argument
is omitted, the commands are checked against the set of top
level or global commands.
required: false
default: null
src:
description:
- The I(src) argument provides a path to the configuration file
to load into the remote system. The path can either be a full
system path to the configuration file if the value starts with /
or relative to the root of the implemented role or playbook.
This argument is mutually exclusive with the I(lines) and
I(parents) arguments.
required: false
default: null
version_added: "2.2"
before:
description:
- The ordered set of commands to push on to the command stack if
a change needs to be made. This allows the playbook designer
the opportunity to perform configuration commands prior to pushing
any changes without affecting how the set of commands are matched
against the system.
required: false
default: null
after:
description:
- The ordered set of commands to append to the end of the command
stack if a change needs to be made. Just like with I(before) this
allows the playbook designer to append a set of commands to be
executed after the command set.
required: false
default: null
match:
description:
- Instructs the module on the way to perform the matching of
the set of commands against the current device config. If
match is set to I(line), commands are matched line by line. If
match is set to I(strict), command lines are matched with respect
to position. If match is set to I(exact), command lines
must be an equal match. Finally, if match is set to I(none), the
module will not attempt to compare the source configuration with
the running configuration on the remote device.
required: false
default: line
choices: ['line', 'strict', 'exact', 'none']
replace:
description:
- Instructs the module on the way to perform the configuration
on the device. If the replace argument is set to I(line) then
the modified lines are pushed to the device in configuration
mode. If the replace argument is set to I(block) then the entire
command block is pushed to the device in configuration mode if any
line is not correct.
required: false
    default: line
choices: ['line', 'block']
force:
description:
- The force argument instructs the module to not consider the
current devices running-config. When set to true, this will
cause the module to push the contents of I(src) into the device
without first checking if already configured.
- Note this argument should be considered deprecated. To achieve
the equivalent, set the C(match=none) which is idempotent. This argument
will be removed in a future release.
required: false
default: false
type: bool
backup:
description:
- This argument will cause the module to create a full backup of
the current C(running-config) from the remote device before any
changes are made. The backup file is written to the C(backup)
folder in the playbook root directory. If the directory does not
exist, it is created.
required: false
default: false
type: bool
version_added: "2.2"
running_config:
description:
- The module, by default, will connect to the remote device and
retrieve the current running-config to use as a base for comparing
against the contents of source. There are times when it is not
desirable to have the task get the current running-config for
every task in a playbook. The I(running_config) argument allows the
implementer to pass in the configuration to use as the base
config for comparison.
required: false
default: null
aliases: ['config']
version_added: "2.4"
defaults:
description:
- The I(defaults) argument will influence how the running-config
is collected from the device. When the value is set to true,
        the command used to collect the running-config is appended with
        the all keyword. When the value is set to false, the command
        is issued without the all keyword.
required: false
default: false
type: bool
version_added: "2.2"
save:
description:
- The C(save) argument instructs the module to save the
running-config to startup-config. This operation is performed
after any changes are made to the current running config. If
no changes are made, the configuration is still saved to the
startup config. This option will always cause the module to
return changed.
- This option is deprecated as of Ansible 2.4, use C(save_when)
required: false
default: false
type: bool
version_added: "2.2"
save_when:
description:
- When changes are made to the device running-configuration, the
changes are not copied to non-volatile storage by default. Using
        this argument will change that behavior. If the argument is set to
I(always), then the running-config will always be copied to the
startup-config and the I(modified) flag will always be set to
True. If the argument is set to I(modified), then the running-config
will only be copied to the startup-config if it has changed since
the last save to startup-config. If the argument is set to
I(never), the running-config will never be copied to the
        startup-config.
required: false
default: never
choices: ['always', 'never', 'modified']
version_added: "2.4"
diff_against:
description:
- When using the C(ansible-playbook --diff) command line argument
the module can generate diffs against different sources.
      - When this option is configured as I(startup), the module will return
the diff of the running-config against the startup-config.
- When this option is configured as I(intended), the module will
return the diff of the running-config against the configuration
provided in the C(intended_config) argument.
- When this option is configured as I(running), the module will
return the before and after diff of the running-config with respect
to any changes made to the device configuration.
required: false
default: startup
choices: ['startup', 'intended', 'running']
version_added: "2.4"
diff_ignore_lines:
description:
- Use this argument to specify one or more lines that should be
ignored during the diff. This is used for lines in the configuration
that are automatically updated by the system. This argument takes
a list of regular expressions or exact line matches.
required: false
version_added: "2.4"
intended_config:
description:
- The C(intended_config) provides the master configuration that
the node should conform to and is used to check the final
running-config against. This argument will not modify any settings
        on the remote device and is strictly used to check the current
        device's configuration for compliance. When specifying this
argument, the task should also modify the C(diff_against) value and
set it to I(intended).
required: false
version_added: "2.4"
"""
EXAMPLES = """
---
- name: configure top level configuration and save it
nxos_config:
lines: hostname {{ inventory_hostname }}
save_when: modified
- name: diff the running-config against a provided config
nxos_config:
diff_against: intended
    intended_config: "{{ lookup('file', 'master.cfg') }}"
- nxos_config:
lines:
- 10 permit ip 1.1.1.1/32 any log
- 20 permit ip 2.2.2.2/32 any log
- 30 permit ip 3.3.3.3/32 any log
- 40 permit ip 4.4.4.4/32 any log
- 50 permit ip 5.5.5.5/32 any log
parents: ip access-list test
before: no ip access-list test
match: exact
- nxos_config:
lines:
- 10 permit ip 1.1.1.1/32 any log
- 20 permit ip 2.2.2.2/32 any log
- 30 permit ip 3.3.3.3/32 any log
- 40 permit ip 4.4.4.4/32 any log
parents: ip access-list test
before: no ip access-list test
replace: block
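# Illustrative only: exercises the backup and diff_against options documented
# above (run the play with --diff to see the generated diff).
- name: backup the running-config and diff against the startup-config
  nxos_config:
    lines: hostname {{ inventory_hostname }}
    backup: yes
    diff_against: startup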
"""
RETURN = """
commands:
description: The set of commands that will be pushed to the remote device
returned: always
type: list
sample: ['hostname foo', 'vlan 1', 'name default']
updates:
description: The set of commands that will be pushed to the remote device
returned: always
type: list
sample: ['hostname foo', 'vlan 1', 'name default']
backup_path:
description: The full path to the backup file
returned: when backup is yes
type: string
sample: /playbooks/ansible/backup/nxos_config.2016-07-16@22:28:34
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.netcfg import NetworkConfig, dumps
from ansible.module_utils.nxos import get_config, load_config, run_commands
from ansible.module_utils.nxos import nxos_argument_spec
from ansible.module_utils.nxos import check_args as nxos_check_args
def get_running_config(module, config=None):
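    # Precedence: an explicit running_config parameter wins; otherwise reuse
    # the config already fetched for backup/diff; failing that (or when
    # defaults=True), fetch a fresh running-config with the all keyword.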
contents = module.params['running_config']
if not contents:
if not module.params['defaults'] and config:
            contents = config.config_text
else:
flags = ['all']
contents = get_config(module, flags=flags)
return NetworkConfig(indent=2, contents=contents)
def get_candidate(module):
candidate = NetworkConfig(indent=2)
if module.params['src']:
candidate.load(module.params['src'])
elif module.params['lines']:
parents = module.params['parents'] or list()
candidate.add(module.params['lines'], parents=parents)
return candidate
def main():
""" main entry point for module execution
"""
argument_spec = dict(
src=dict(type='path'),
lines=dict(aliases=['commands'], type='list'),
parents=dict(type='list'),
before=dict(type='list'),
after=dict(type='list'),
match=dict(default='line', choices=['line', 'strict', 'exact', 'none']),
replace=dict(default='line', choices=['line', 'block']),
running_config=dict(aliases=['config']),
intended_config=dict(),
defaults=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
save_when=dict(choices=['always', 'never', 'modified'], default='never'),
diff_against=dict(choices=['running', 'startup', 'intended']),
diff_ignore_lines=dict(type='list'),
# save is deprecated as of ans2.4, use save_when instead
save=dict(default=False, type='bool', removed_in_version='2.4'),
# force argument deprecated in ans2.2
force=dict(default=False, type='bool', removed_in_version='2.2')
)
argument_spec.update(nxos_argument_spec)
mutually_exclusive = [('lines', 'src'),
('save', 'save_when')]
required_if = [('match', 'strict', ['lines']),
('match', 'exact', ['lines']),
('replace', 'block', ['lines']),
('diff_against', 'intended', ['intended_config'])]
module = AnsibleModule(argument_spec=argument_spec,
mutually_exclusive=mutually_exclusive,
required_if=required_if,
supports_check_mode=True)
warnings = list()
nxos_check_args(module, warnings)
result = {'changed': False, 'warnings': warnings}
config = None
if module.params['backup'] or (module._diff and module.params['diff_against'] == 'running'):
contents = get_config(module)
config = NetworkConfig(indent=2, contents=contents)
if module.params['backup']:
result['__backup__'] = contents
if any((module.params['src'], module.params['lines'])):
match = module.params['match']
replace = module.params['replace']
candidate = get_candidate(module)
if match != 'none':
config = get_running_config(module, config)
path = module.params['parents']
configobjs = candidate.difference(config, match=match, replace=replace, path=path)
else:
configobjs = candidate.items
if configobjs:
commands = dumps(configobjs, 'commands').split('\n')
if module.params['before']:
commands[:0] = module.params['before']
if module.params['after']:
commands.extend(module.params['after'])
result['commands'] = commands
result['updates'] = commands
if not module.check_mode:
load_config(module, commands)
result['changed'] = True
running_config = None
startup_config = None
diff_ignore_lines = module.params['diff_ignore_lines']
if module.params['save']:
module.params['save_when'] = 'always'
if module.params['save_when'] != 'never':
output = run_commands(module, ['show running-config', 'show startup-config'])
running_config = NetworkConfig(indent=1, contents=output[0], ignore_lines=diff_ignore_lines)
startup_config = NetworkConfig(indent=1, contents=output[1], ignore_lines=diff_ignore_lines)
if running_config.sha1 != startup_config.sha1 or module.params['save_when'] == 'always':
result['changed'] = True
if not module.check_mode:
cmd = {'command': 'copy running-config startup-config', 'output': 'text'}
run_commands(module, [cmd])
else:
module.warn('Skipping command `copy running-config startup-config` '
'due to check_mode. Configuration not copied to '
'non-volatile storage')
if module._diff:
if not running_config:
output = run_commands(module, 'show running-config')
contents = output[0]
else:
contents = running_config.config_text
# recreate the object in order to process diff_ignore_lines
running_config = NetworkConfig(indent=1, contents=contents, ignore_lines=diff_ignore_lines)
if module.params['diff_against'] == 'running':
if module.check_mode:
module.warn("unable to perform diff against running-config due to check mode")
contents = None
else:
contents = config.config_text
        elif module.params['diff_against'] == 'startup':
            if not startup_config:
                output = run_commands(module, 'show startup-config')
                contents = output[0]
            else:
                contents = startup_config.config_text
elif module.params['diff_against'] == 'intended':
contents = module.params['intended_config']
if contents is not None:
base_config = NetworkConfig(indent=1, contents=contents, ignore_lines=diff_ignore_lines)
if running_config.sha1 != base_config.sha1:
result.update({
'changed': True,
'diff': {'before': str(base_config), 'after': str(running_config)}
})
module.exit_json(**result)
if __name__ == '__main__':
main()
|
import logging
import logging.handlers
import os
import pprint
import release
import sys
import threading
import psycopg2
import openerp
import sql_db
import tools
_logger = logging.getLogger(__name__)
def log(logger, level, prefix, msg, depth=None):
indent=''
indent_after=' '*len(prefix)
for line in (prefix + pprint.pformat(msg, depth=depth)).split('\n'):
logger.log(level, indent+line)
indent=indent_after
def LocalService(name):
"""
The openerp.netsvc.LocalService() function is deprecated. It still works
in two cases: workflows and reports. For workflows, instead of using
LocalService('workflow'), openerp.workflow should be used (better yet,
methods on openerp.osv.orm.Model should be used). For reports,
openerp.report.render_report() should be used (methods on the Model should
be provided too in the future).
"""
assert openerp.conf.deprecation.allow_local_service
_logger.warning("LocalService() is deprecated since march 2013 (it was called with '%s')." % name)
if name == 'workflow':
return openerp.workflow
if name.startswith('report.'):
report = openerp.report.interface.report_int._reports.get(name)
if report:
return report
else:
dbname = getattr(threading.currentThread(), 'dbname', None)
if dbname:
registry = openerp.modules.registry.RegistryManager.get(dbname)
with registry.cursor() as cr:
return registry['ir.actions.report.xml']._lookup_report(cr, name[len('report.'):])
class PostgreSQLHandler(logging.Handler):
""" PostgreSQL Loggin Handler will store logs in the database, by default
the current database, can be set using --log-db=DBNAME
"""
def emit(self, record):
ct = threading.current_thread()
ct_db = getattr(ct, 'dbname', None)
ct_uid = getattr(ct, 'uid', None)
dbname = tools.config['log_db'] or ct_db
if dbname:
cr = None
try:
cr = sql_db.db_connect(dbname).cursor()
msg = unicode(record.msg)
traceback = getattr(record, 'exc_text', '')
if traceback:
msg = "%s\n%s" % (msg, traceback)
level = logging.getLevelName(record.levelno)
val = (ct_uid, ct_uid, 'server', ct_db, record.name, level, msg, record.pathname, record.lineno, record.funcName)
cr.execute("""
INSERT INTO ir_logging(create_date, write_date, create_uid, write_uid, type, dbname, name, level, message, path, line, func)
VALUES (NOW() at time zone 'UTC', NOW() at time zone 'UTC', %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
""", val )
cr.commit()
            except Exception:
                # a logging handler must never raise, so swallow errors here
                pass
finally:
if cr:
cr.close()
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE, _NOTHING, DEFAULT = range(10)
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[1;%dm"
BOLD_SEQ = "\033[1m"
COLOR_PATTERN = "%s%s%%s%s" % (COLOR_SEQ, COLOR_SEQ, RESET_SEQ)
LEVEL_COLOR_MAPPING = {
logging.DEBUG: (BLUE, DEFAULT),
logging.INFO: (GREEN, DEFAULT),
logging.WARNING: (YELLOW, DEFAULT),
logging.ERROR: (RED, DEFAULT),
logging.CRITICAL: (WHITE, RED),
}
class DBFormatter(logging.Formatter):
def format(self, record):
record.pid = os.getpid()
record.dbname = getattr(threading.currentThread(), 'dbname', '?')
return logging.Formatter.format(self, record)
class ColoredFormatter(DBFormatter):
def format(self, record):
fg_color, bg_color = LEVEL_COLOR_MAPPING[record.levelno]
record.levelname = COLOR_PATTERN % (30 + fg_color, 40 + bg_color, record.levelname)
return DBFormatter.format(self, record)
_logger_init = False
def init_logger():
global _logger_init
if _logger_init:
return
_logger_init = True
from tools.translate import resetlocale
resetlocale()
# create a format for log messages and dates
format = '%(asctime)s %(pid)s %(levelname)s %(dbname)s %(name)s: %(message)s'
if tools.config['syslog']:
# SysLog Handler
if os.name == 'nt':
handler = logging.handlers.NTEventLogHandler("%s %s" % (release.description, release.version))
else:
handler = logging.handlers.SysLogHandler()
format = '%s %s' % (release.description, release.version) \
+ ':%(dbname)s:%(levelname)s:%(name)s:%(message)s'
elif tools.config['logfile']:
# LogFile Handler
logf = tools.config['logfile']
try:
# We check we have the right location for the log files
dirname = os.path.dirname(logf)
if dirname and not os.path.isdir(dirname):
os.makedirs(dirname)
if tools.config['logrotate'] is not False:
handler = logging.handlers.TimedRotatingFileHandler(filename=logf, when='D', interval=1, backupCount=30)
elif os.name == 'posix':
handler = logging.handlers.WatchedFileHandler(logf)
else:
handler = logging.handlers.FileHandler(logf)
except Exception:
sys.stderr.write("ERROR: couldn't create the logfile directory. Logging to the standard output.\n")
handler = logging.StreamHandler(sys.stdout)
else:
# Normal Handler on standard output
handler = logging.StreamHandler(sys.stdout)
# Check that handler.stream has a fileno() method: when running OpenERP
# behind Apache with mod_wsgi, handler.stream will have type mod_wsgi.Log,
# which has no fileno() method. (mod_wsgi.Log is what is being bound to
# sys.stderr when the logging.StreamHandler is being constructed above.)
def is_a_tty(stream):
return hasattr(stream, 'fileno') and os.isatty(stream.fileno())
if isinstance(handler, logging.StreamHandler) and is_a_tty(handler.stream):
formatter = ColoredFormatter(format)
else:
formatter = DBFormatter(format)
handler.setFormatter(formatter)
logging.getLogger().addHandler(handler)
if tools.config['log_db']:
postgresqlHandler = PostgreSQLHandler()
postgresqlHandler.setLevel(logging.WARNING)
logging.getLogger().addHandler(postgresqlHandler)
# Configure loggers levels
pseudo_config = PSEUDOCONFIG_MAPPER.get(tools.config['log_level'], [])
logconfig = tools.config['log_handler']
logging_configurations = DEFAULT_LOG_CONFIGURATION + pseudo_config + logconfig
for logconfig_item in logging_configurations:
loggername, level = logconfig_item.split(':')
level = getattr(logging, level, logging.INFO)
logger = logging.getLogger(loggername)
logger.setLevel(level)
for logconfig_item in logging_configurations:
_logger.debug('logger level set: "%s"', logconfig_item)
DEFAULT_LOG_CONFIGURATION = [
'openerp.workflow.workitem:WARNING',
'openerp.http.rpc.request:INFO',
'openerp.http.rpc.response:INFO',
'openerp.addons.web.http:INFO',
'openerp.sql_db:INFO',
':INFO',
]
PSEUDOCONFIG_MAPPER = {
'debug_rpc_answer': ['openerp:DEBUG','openerp.http.rpc.request:DEBUG', 'openerp.http.rpc.response:DEBUG'],
'debug_rpc': ['openerp:DEBUG','openerp.http.rpc.request:DEBUG'],
'debug': ['openerp:DEBUG'],
'debug_sql': ['openerp.sql_db:DEBUG'],
'info': [],
'warn': ['openerp:WARNING'],
'error': ['openerp:ERROR'],
'critical': ['openerp:CRITICAL'],
}
|
from openerp import models, fields, api
class StockHistory(models.Model):
_inherit = 'stock.history'
@api.model
def read_group(self, domain, fields, groupby, offset=0, limit=None,
orderby=False, lazy=True):
res = super(StockHistory, self).read_group(
domain, fields, groupby, offset=offset, limit=limit,
orderby=orderby, lazy=lazy)
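        # read_group cannot aggregate non-stored computed fields in SQL, so
        # manual_value and real_value are recomputed here for each group.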
if ('manual_value' in fields) and ('real_value' in fields):
group_lines = {}
for line in res:
domain = line.get('__domain', domain)
group_lines.setdefault(
str(domain), self.search(domain))
for line in res:
manual_value = 0.0
real_value = 0.0
lines = group_lines.get(str(line.get('__domain', domain)))
for pre_line in lines:
manual_value += (pre_line.product_id.manual_standard_cost *
pre_line.quantity)
real_value += (pre_line.price_unit_on_quant *
pre_line.quantity)
line['real_value'] = real_value
line['manual_value'] = manual_value
return res
@api.multi
@api.depends("product_id", "product_id.manual_standard_cost", "quantity")
def _compute_manual_value(self):
for record in self:
record.manual_value = (record.product_id.manual_standard_cost *
record.quantity)
@api.multi
@api.depends('price_unit_on_quant', 'quantity')
def _compute_real_value(self):
for record in self:
record.real_value = record.price_unit_on_quant * record.quantity
manual_value = fields.Float(
string="Manual Value", compute="_compute_manual_value")
real_value = fields.Float(
string="Real Value", compute="_compute_real_value")
|
import subprocess
import inspect, os, sys
cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile( inspect.currentframe() ))[0],"..")))
if cmd_subfolder not in sys.path:
sys.path.insert(0, cmd_subfolder)
import mosq_test
rc = 1
mid = 53
keepalive = 60
connect_packet = mosq_test.gen_connect("will-qos0-test", keepalive=keepalive)
connack_packet = mosq_test.gen_connack(rc=0)
subscribe_packet = mosq_test.gen_subscribe(mid, "will/qos0/test", 0)
suback_packet = mosq_test.gen_suback(mid, 0)
publish_packet = mosq_test.gen_publish("will/qos0/test", qos=0, payload="will-message")
cmd = ['../../src/mosquitto', '-p', '1888']
broker = mosq_test.start_broker(filename=os.path.basename(__file__), cmd=cmd)
try:
sock = mosq_test.do_client_connect(connect_packet, connack_packet, timeout=30)
sock.send(subscribe_packet)
if mosq_test.expect_packet(sock, "suback", suback_packet):
will = subprocess.Popen(['./07-will-qos0-helper.py'])
will.wait()
if mosq_test.expect_packet(sock, "publish", publish_packet):
rc = 0
sock.close()
finally:
broker.terminate()
broker.wait()
if rc:
(stdo, stde) = broker.communicate()
print(stde)
exit(rc)
|
import re
from wlauto import AndroidUiAutoBenchmark
class Caffeinemark(AndroidUiAutoBenchmark):
name = 'caffeinemark'
description = """
CaffeineMark is a series of tests that measure the speed of Java
programs running in various hardware and software configurations.
http://www.benchmarkhq.ru/cm30/info.html
From the website:
CaffeineMark scores roughly correlate with the number of Java instructions
    executed per second, and do not depend significantly on the amount of
    memory in the system or on the speed of a computer's disk drives or internet
connection.
The following is a brief description of what each test does:
- Sieve: The classic sieve of eratosthenes finds prime numbers.
      - Loop: The loop test uses sorting and sequence generation to measure
        compiler optimization of loops.
- Logic: Tests the speed with which the virtual machine executes
decision-making instructions.
- Method: The Method test executes recursive function calls to see how
well the VM handles method calls.
- Float: Simulates a 3D rotation of objects around a point.
- Graphics: Draws random rectangles and lines.
- Image: Draws a sequence of three graphics repeatedly.
- Dialog: Writes a set of values into labels and editboxes on a form.
The overall CaffeineMark score is the geometric mean of the individual
scores, i.e., it is the 9th root of the product of all the scores.
"""
package = "com.flexycore.caffeinemark"
activity = ".Application"
summary_metrics = ['OverallScore']
regex = re.compile(r'CAFFEINEMARK RESULT: (?P<type>\w+) (?P<value>\S+)')
def update_result(self, context):
super(Caffeinemark, self).update_result(context)
with open(self.logcat_log) as fh:
for line in fh:
match = self.regex.search(line)
if match:
metric = match.group('type')
value = float(match.group('value'))
context.result.add_metric(metric, value)
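# A minimal sketch (not used by the workload itself) of the scoring rule from
# the class docstring: the overall CaffeineMark score is the geometric mean of
# the individual test scores, i.e. the n-th root of their product.
def _geometric_mean(scores):
    """Return the n-th root of the product of *scores*."""
    product = 1.0
    for score in scores:
        product *= score
    return product ** (1.0 / len(scores))
# e.g. _geometric_mean([100.0, 400.0]) == 200.0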
|
"""Local Nest authentication."""
import asyncio
from functools import partial
from homeassistant.core import callback
from . import config_flow
from .const import DOMAIN
@callback
def initialize(hass, client_id, client_secret):
"""Initialize a local auth provider."""
config_flow.register_flow_implementation(
hass, DOMAIN, 'configuration.yaml',
partial(generate_auth_url, client_id),
partial(resolve_auth_code, hass, client_id, client_secret)
)
async def generate_auth_url(client_id, flow_id):
"""Generate an authorize url."""
from nest.nest import AUTHORIZE_URL
return AUTHORIZE_URL.format(client_id, flow_id)
async def resolve_auth_code(hass, client_id, client_secret, code):
"""Resolve an authorization code."""
from nest.nest import NestAuth, AuthorizationError
result = asyncio.Future()
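    # NestAuth reports its result through a callback; bridging the callback
    # into a Future lets this coroutine await the outcome of the blocking
    # login call executed via hass.async_add_job below.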
auth = NestAuth(
client_id=client_id,
client_secret=client_secret,
auth_callback=result.set_result,
)
auth.pin = code
try:
await hass.async_add_job(auth.login)
return await result
except AuthorizationError as err:
if err.response.status_code == 401:
raise config_flow.CodeInvalid()
raise config_flow.NestAuthError('Unknown error: {} ({})'.format(
err, err.response.status_code))
|
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from collections import OrderedDict
import six
from six.moves import zip
import warnings
import numpy as np
from matplotlib.path import Path
from matplotlib import rcParams
import matplotlib.font_manager as font_manager
from matplotlib.ft2font import KERNING_DEFAULT, LOAD_NO_HINTING
from matplotlib.ft2font import LOAD_TARGET_LIGHT
from matplotlib.mathtext import MathTextParser
import matplotlib.dviread as dviread
from matplotlib.font_manager import FontProperties, get_font
from matplotlib.transforms import Affine2D
from six.moves.urllib.parse import quote as urllib_quote
class TextToPath(object):
"""
    A class that converts a given text to a path using ttf fonts.
"""
FONT_SCALE = 100.
DPI = 72
def __init__(self):
"""
Initialization
"""
self.mathtext_parser = MathTextParser('path')
self.tex_font_map = None
from matplotlib.cbook import maxdict
self._ps_fontd = maxdict(50)
self._texmanager = None
self._adobe_standard_encoding = None
def _get_adobe_standard_encoding(self):
enc_name = dviread.find_tex_file('8a.enc')
enc = dviread.Encoding(enc_name)
return dict([(c, i) for i, c in enumerate(enc.encoding)])
def _get_font(self, prop):
"""
find a ttf font.
"""
fname = font_manager.findfont(prop)
font = get_font(fname)
font.set_size(self.FONT_SCALE, self.DPI)
return font
def _get_hinting_flag(self):
return LOAD_NO_HINTING
def _get_char_id(self, font, ccode):
"""
Return a unique id for the given font and character-code set.
"""
sfnt = font.get_sfnt()
try:
ps_name = sfnt[(1, 0, 0, 6)].decode('macroman')
except KeyError:
ps_name = sfnt[(3, 1, 0x0409, 6)].decode('utf-16be')
char_id = urllib_quote('%s-%x' % (ps_name, ccode))
return char_id
def _get_char_id_ps(self, font, ccode):
"""
Return a unique id for the given font and character-code set (for tex).
"""
ps_name = font.get_ps_font_info()[2]
char_id = urllib_quote('%s-%d' % (ps_name, ccode))
return char_id
def glyph_to_path(self, font, currx=0.):
"""
convert the ft2font glyph to vertices and codes.
"""
verts, codes = font.get_path()
if currx != 0.0:
verts[:, 0] += currx
return verts, codes
def get_text_width_height_descent(self, s, prop, ismath):
if rcParams['text.usetex']:
texmanager = self.get_texmanager()
fontsize = prop.get_size_in_points()
w, h, d = texmanager.get_text_width_height_descent(s, fontsize,
renderer=None)
return w, h, d
fontsize = prop.get_size_in_points()
scale = float(fontsize) / self.FONT_SCALE
if ismath:
prop = prop.copy()
prop.set_size(self.FONT_SCALE)
width, height, descent, trash, used_characters = \
self.mathtext_parser.parse(s, 72, prop)
return width * scale, height * scale, descent * scale
font = self._get_font(prop)
font.set_text(s, 0.0, flags=LOAD_NO_HINTING)
w, h = font.get_width_height()
w /= 64.0 # convert from subpixels
h /= 64.0
d = font.get_descent()
d /= 64.0
return w * scale, h * scale, d * scale
def get_text_path(self, prop, s, ismath=False, usetex=False):
"""
convert text *s* to path (a tuple of vertices and codes for
matplotlib.path.Path).
*prop*
font property
*s*
text to be converted
*usetex*
If True, use matplotlib usetex mode.
*ismath*
If True, use mathtext parser. Effective only if usetex == False.
"""
if not usetex:
if not ismath:
font = self._get_font(prop)
glyph_info, glyph_map, rects = self.get_glyphs_with_font(
font, s)
else:
glyph_info, glyph_map, rects = self.get_glyphs_mathtext(
prop, s)
else:
glyph_info, glyph_map, rects = self.get_glyphs_tex(prop, s)
verts, codes = [], []
for glyph_id, xposition, yposition, scale in glyph_info:
verts1, codes1 = glyph_map[glyph_id]
if len(verts1):
verts1 = np.array(verts1) * scale + [xposition, yposition]
verts.extend(verts1)
codes.extend(codes1)
for verts1, codes1 in rects:
verts.extend(verts1)
codes.extend(codes1)
return verts, codes
def get_glyphs_with_font(self, font, s, glyph_map=None,
return_new_glyphs_only=False):
"""
convert the string *s* to vertices and codes using the
provided ttf font.
"""
# Mostly copied from backend_svg.py.
lastgind = None
currx = 0
xpositions = []
glyph_ids = []
if glyph_map is None:
glyph_map = OrderedDict()
if return_new_glyphs_only:
glyph_map_new = OrderedDict()
else:
glyph_map_new = glyph_map
# I'm not sure if I get kernings right. Needs to be verified. -JJL
for c in s:
ccode = ord(c)
gind = font.get_char_index(ccode)
if gind is None:
ccode = ord('?')
gind = 0
if lastgind is not None:
kern = font.get_kerning(lastgind, gind, KERNING_DEFAULT)
else:
kern = 0
glyph = font.load_char(ccode, flags=LOAD_NO_HINTING)
horiz_advance = (glyph.linearHoriAdvance / 65536.0)
char_id = self._get_char_id(font, ccode)
if char_id not in glyph_map:
glyph_map_new[char_id] = self.glyph_to_path(font)
currx += (kern / 64.0)
xpositions.append(currx)
glyph_ids.append(char_id)
currx += horiz_advance
lastgind = gind
ypositions = [0] * len(xpositions)
sizes = [1.] * len(xpositions)
rects = []
return (list(zip(glyph_ids, xpositions, ypositions, sizes)),
glyph_map_new, rects)
def get_glyphs_mathtext(self, prop, s, glyph_map=None,
return_new_glyphs_only=False):
"""
convert the string *s* to vertices and codes by parsing it with
mathtext.
"""
prop = prop.copy()
prop.set_size(self.FONT_SCALE)
width, height, descent, glyphs, rects = self.mathtext_parser.parse(
s, self.DPI, prop)
if not glyph_map:
glyph_map = OrderedDict()
if return_new_glyphs_only:
glyph_map_new = OrderedDict()
else:
glyph_map_new = glyph_map
xpositions = []
ypositions = []
glyph_ids = []
sizes = []
currx, curry = 0, 0
for font, fontsize, ccode, ox, oy in glyphs:
char_id = self._get_char_id(font, ccode)
if char_id not in glyph_map:
font.clear()
font.set_size(self.FONT_SCALE, self.DPI)
glyph = font.load_char(ccode, flags=LOAD_NO_HINTING)
glyph_map_new[char_id] = self.glyph_to_path(font)
xpositions.append(ox)
ypositions.append(oy)
glyph_ids.append(char_id)
size = fontsize / self.FONT_SCALE
sizes.append(size)
myrects = []
for ox, oy, w, h in rects:
vert1 = [(ox, oy), (ox, oy + h), (ox + w, oy + h),
(ox + w, oy), (ox, oy), (0, 0)]
code1 = [Path.MOVETO,
Path.LINETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY]
myrects.append((vert1, code1))
return (list(zip(glyph_ids, xpositions, ypositions, sizes)),
glyph_map_new, myrects)
def get_texmanager(self):
"""
return the :class:`matplotlib.texmanager.TexManager` instance
"""
if self._texmanager is None:
from matplotlib.texmanager import TexManager
self._texmanager = TexManager()
return self._texmanager
def get_glyphs_tex(self, prop, s, glyph_map=None,
return_new_glyphs_only=False):
"""
convert the string *s* to vertices and codes using matplotlib's usetex
mode.
"""
        # codes are mostly borrowed from the pdf backend.
texmanager = self.get_texmanager()
if self.tex_font_map is None:
self.tex_font_map = dviread.PsfontsMap(
dviread.find_tex_file('pdftex.map'))
if self._adobe_standard_encoding is None:
self._adobe_standard_encoding = self._get_adobe_standard_encoding()
fontsize = prop.get_size_in_points()
if hasattr(texmanager, "get_dvi"):
dvifilelike = texmanager.get_dvi(s, self.FONT_SCALE)
dvi = dviread.DviFromFileLike(dvifilelike, self.DPI)
else:
dvifile = texmanager.make_dvi(s, self.FONT_SCALE)
dvi = dviread.Dvi(dvifile, self.DPI)
try:
page = next(iter(dvi))
finally:
dvi.close()
if glyph_map is None:
glyph_map = OrderedDict()
if return_new_glyphs_only:
glyph_map_new = OrderedDict()
else:
glyph_map_new = glyph_map
glyph_ids, xpositions, ypositions, sizes = [], [], [], []
# Gather font information and do some setup for combining
# characters into strings.
# oldfont, seq = None, []
for x1, y1, dvifont, glyph, width in page.text:
font_and_encoding = self._ps_fontd.get(dvifont.texname)
font_bunch = self.tex_font_map[dvifont.texname]
if font_and_encoding is None:
font = get_font(font_bunch.filename)
for charmap_name, charmap_code in [("ADOBE_CUSTOM",
1094992451),
("ADOBE_STANDARD",
1094995778)]:
try:
font.select_charmap(charmap_code)
except (ValueError, RuntimeError):
pass
else:
break
else:
charmap_name = ""
warnings.warn("No supported encoding in font (%s)." %
font_bunch.filename)
if charmap_name == "ADOBE_STANDARD" and font_bunch.encoding:
enc0 = dviread.Encoding(font_bunch.encoding)
enc = dict([(i, self._adobe_standard_encoding.get(c, None))
for i, c in enumerate(enc0.encoding)])
else:
enc = dict()
self._ps_fontd[dvifont.texname] = font, enc
else:
font, enc = font_and_encoding
ft2font_flag = LOAD_TARGET_LIGHT
char_id = self._get_char_id_ps(font, glyph)
if char_id not in glyph_map:
font.clear()
font.set_size(self.FONT_SCALE, self.DPI)
if enc:
charcode = enc.get(glyph, None)
else:
charcode = glyph
if charcode is not None:
glyph0 = font.load_char(charcode, flags=ft2font_flag)
else:
warnings.warn("The glyph (%d) of font (%s) cannot be "
"converted with the encoding. Glyph may "
"be wrong" % (glyph, font_bunch.filename))
glyph0 = font.load_char(glyph, flags=ft2font_flag)
glyph_map_new[char_id] = self.glyph_to_path(font)
glyph_ids.append(char_id)
xpositions.append(x1)
ypositions.append(y1)
sizes.append(dvifont.size / self.FONT_SCALE)
myrects = []
for ox, oy, h, w in page.boxes:
vert1 = [(ox, oy), (ox + w, oy), (ox + w, oy + h),
(ox, oy + h), (ox, oy), (0, 0)]
code1 = [Path.MOVETO,
Path.LINETO, Path.LINETO, Path.LINETO, Path.LINETO,
Path.CLOSEPOLY]
myrects.append((vert1, code1))
return (list(zip(glyph_ids, xpositions, ypositions, sizes)),
glyph_map_new, myrects)
text_to_path = TextToPath()
class TextPath(Path):
"""
Create a path from the text.
"""
def __init__(self, xy, s, size=None, prop=None,
_interpolation_steps=1, usetex=False,
*kl, **kwargs):
"""
        Create a path from the text. Note that
it simply is a path, not an artist. You need to use the
PathPatch (or other artists) to draw this path onto the
canvas.
xy : position of the text.
s : text
size : font size
prop : font property
"""
if prop is None:
prop = FontProperties()
if size is None:
size = prop.get_size_in_points()
self._xy = xy
self.set_size(size)
self._cached_vertices = None
self._vertices, self._codes = self.text_get_vertices_codes(
prop, s,
usetex=usetex)
self._should_simplify = False
self._simplify_threshold = rcParams['path.simplify_threshold']
self._has_nonfinite = False
self._interpolation_steps = _interpolation_steps
def set_size(self, size):
"""
set the size of the text
"""
self._size = size
self._invalid = True
def get_size(self):
"""
get the size of the text
"""
return self._size
def _get_vertices(self):
"""
Return the cached path after updating it if necessary.
"""
self._revalidate_path()
return self._cached_vertices
def _get_codes(self):
"""
Return the codes
"""
return self._codes
vertices = property(_get_vertices)
codes = property(_get_codes)
def _revalidate_path(self):
"""
update the path if necessary.
        The path for the text is initially created with the font size
        FONT_SCALE, and this path is rescaled to other sizes when
        necessary.
"""
if (self._invalid or
(self._cached_vertices is None)):
tr = Affine2D().scale(
self._size / text_to_path.FONT_SCALE,
self._size / text_to_path.FONT_SCALE).translate(*self._xy)
self._cached_vertices = tr.transform(self._vertices)
self._invalid = False
def is_math_text(self, s):
"""
        Return the (possibly cleaned) string *s* and a flag indicating
        whether it should be rendered as mathtext (or the string 'TeX'
        when text.usetex is enabled).
"""
# copied from Text.is_math_text -JJL
# Did we find an even number of non-escaped dollar signs?
        # If so, treat it as math text.
dollar_count = s.count(r'$') - s.count(r'\$')
even_dollars = (dollar_count > 0 and dollar_count % 2 == 0)
if rcParams['text.usetex']:
return s, 'TeX'
if even_dollars:
return s, True
else:
return s.replace(r'\$', '$'), False
def text_get_vertices_codes(self, prop, s, usetex):
"""
convert the string *s* to vertices and codes using the
provided font property *prop*. Mostly copied from
backend_svg.py.
"""
if usetex:
verts, codes = text_to_path.get_text_path(prop, s, usetex=True)
else:
clean_line, ismath = self.is_math_text(s)
verts, codes = text_to_path.get_text_path(prop, clean_line,
ismath=ismath)
return verts, codes
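# A minimal usage sketch, not part of the module above: TextPath is only a
# Path, so it has to be wrapped in an artist such as
# matplotlib.patches.PathPatch before it can be drawn.
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    from matplotlib.patches import PathPatch
    fig, ax = plt.subplots()
    # text anchored at (0.1, 0.4) in data coordinates, 0.2 units tall
    patch = PathPatch(TextPath((0.1, 0.4), "Hello", size=0.2), facecolor='black')
    ax.add_patch(patch)
    plt.show()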
|
"""Common paths for pyauto tests."""
import os
import sys
def GetSourceDir():
"""Returns src/ directory."""
script_dir = os.path.abspath(os.path.dirname(__file__))
return os.path.join(script_dir, os.pardir, os.pardir, os.pardir)
def GetThirdPartyDir():
"""Returns src/third_party directory."""
return os.path.join(GetSourceDir(), 'third_party')
def GetBuildDirs():
"""Returns list of possible build directories."""
# List of dirs that can contain a Debug/Release build.
outer_dirs = {
'linux2': ['out', 'sconsbuild'],
'linux3': ['out', 'sconsbuild'],
'darwin': ['out', 'xcodebuild'],
'win32': ['chrome', 'build'],
'cygwin': ['chrome'],
}.get(sys.platform, [])
src_dir = GetSourceDir()
build_dirs = []
for dir in outer_dirs:
build_dirs += [os.path.join(src_dir, dir, 'Debug')]
build_dirs += [os.path.join(src_dir, dir, 'Release')]
return build_dirs
def GetChromeDriverExe():
"""Returns path to ChromeDriver executable, or None if cannot be found."""
exe_name = 'chromedriver'
if sys.platform == 'win32':
exe_name += '.exe'
import pyautolib
dir = os.path.dirname(pyautolib.__file__)
exe = os.path.join(dir, exe_name)
if os.path.exists(exe):
return exe
return None
|
from __future__ import unicode_literals, print_function, division
__author__ = 'dongliu'
import struct
import socket
from pcapparser.constant import *
class TcpPack:
""" a tcp packet, header fields and data. """
TYPE_INIT = 1 # init tcp connection
TYPE_INIT_ACK = 2
TYPE_ESTABLISH = 0 # establish conn
TYPE_CLOSE = -1 # close tcp connection
def __init__(self, source, source_port, dest, dest_port, pac_type, seq, ack, body, src_mac):
self.source = source
self.source_port = source_port
self.dest = dest
self.dest_port = dest_port
self.pac_type = pac_type
self.seq = seq
self.ack = ack
self.body = body
self.direction = 0
self.key = None
self.micro_second = None
self.src_mac = src_mac
def __str__(self):
return "%s:%d --> %s:%d, type:%d, seq:%d, ack:%s size:%d" % \
(self.source, self.source_port, self.dest, self.dest_port, self.pac_type, self.seq,
self.ack, len(self.body))
def gen_key(self):
if self.key:
return self.key
skey = '%s:%d' % (self.source, self.source_port)
dkey = '%s:%d' % (self.dest, self.dest_port)
if skey < dkey:
self.key = skey + '-' + dkey
else:
self.key = dkey + '-' + skey
return self.key
def expect_ack(self):
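        # SYN, SYN+ACK and FIN consume one sequence number even though they
        # carry no payload, hence seq + 1 for the non-ESTABLISH packet types.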
if self.pac_type == TcpPack.TYPE_ESTABLISH:
return self.seq + len(self.body)
else:
return self.seq + 1
def dl_parse_ethernet(link_packet):
""" parse Ethernet packet """
eth_header_len = 14
# ethernet header
ethernet_header = link_packet[0:eth_header_len]
(n_protocol, ) = struct.unpack(b'!12xH', ethernet_header)
if n_protocol == NetworkProtocol.P802_1Q:
# 802.1q, we need to skip two bytes and read another two bytes to get protocol/len
type_or_len = link_packet[eth_header_len:eth_header_len + 4]
eth_header_len += 4
n_protocol, = struct.unpack(b'!2xH', type_or_len)
if n_protocol == NetworkProtocol.PPPOE_SESSION:
# skip PPPOE SESSION Header
eth_header_len += 8
type_or_len = link_packet[eth_header_len - 2:eth_header_len]
n_protocol, = struct.unpack(b'!H', type_or_len)
if n_protocol < 1536:
# TODO n_protocol means package len
pass
return n_protocol, link_packet[eth_header_len:]
def dl_parse_linux_sll(link_packet):
""" parse linux sll packet """
sll_header_len = 16
# Linux cooked header
linux_cooked = link_packet[0:sll_header_len]
packet_type, link_type_address_type, link_type_address_len, link_type_address, n_protocol \
= struct.unpack(b'!HHHQH', linux_cooked)
return n_protocol, link_packet[sll_header_len:]
def read_ip_pac(link_packet, link_layer_parser):
# ip header
n_protocol, ip_packet = link_layer_parser(link_packet)
if n_protocol == NetworkProtocol.IP or n_protocol == NetworkProtocol.PPP_IP:
src_mac = ":".join("{:02x}".format(ord(c)) for c in link_packet[6:12])
ip_base_header_len = 20
ip_header = ip_packet[0:ip_base_header_len]
(ip_info, ip_length, protocol) = struct.unpack(b'!BxH5xB10x', ip_header)
# real ip header len.
ip_header_len = (ip_info & 0xF) * 4
ip_version = (ip_info >> 4) & 0xF
# skip all extra header fields.
if ip_header_len > ip_base_header_len:
pass
# not tcp, skip.
if protocol != TransferProtocol.TCP:
return 0, None, None, None, None
source = socket.inet_ntoa(ip_header[12:16])
dest = socket.inet_ntoa(ip_header[16:])
return 1, source, dest, ip_packet[ip_header_len:ip_length], src_mac
elif n_protocol == NetworkProtocol.IPV6:
# TODO: deal with ipv6 package
return 0, None, None, None, None
else:
# skip
return 0, None, None, None, None
def read_tcp_pac(link_packet, link_layer_parser):
"""read tcp data.http only build on tcp, so we do not need to support other protocols."""
state, source, dest, tcp_packet, src_mac = read_ip_pac(link_packet, link_layer_parser)
if state == 0:
return 0, None
tcp_base_header_len = 20
# tcp header
tcp_header = tcp_packet[0:tcp_base_header_len]
source_port, dest_port, seq, ack_seq, t_f, flags = struct.unpack(b'!HHIIBB6x', tcp_header)
# real tcp header len
tcp_header_len = ((t_f >> 4) & 0xF) * 4
# skip extension headers
if tcp_header_len > tcp_base_header_len:
pass
fin = flags & 1
syn = (flags >> 1) & 1
rst = (flags >> 2) & 1
psh = (flags >> 3) & 1
ack = (flags >> 4) & 1
urg = (flags >> 5) & 1
# body
body = tcp_packet[tcp_header_len:]
# workaround to ignore no-data tcp packs
if 0 < len(body) < 20:
total = 0
for ch in body:
total += ord(ch)
if total == 0:
body = b''
if syn == 1 and ack == 0:
# init tcp connection
pac_type = TcpPack.TYPE_INIT
elif syn == 1 and ack == 1:
pac_type = TcpPack.TYPE_INIT_ACK
elif fin == 1:
pac_type = TcpPack.TYPE_CLOSE
else:
pac_type = TcpPack.TYPE_ESTABLISH
return 1, TcpPack(source, source_port, dest, dest_port, pac_type, seq, ack_seq, body, src_mac)
def get_link_layer_parser(link_type):
if link_type == LinkLayerType.ETHERNET:
return dl_parse_ethernet
elif link_type == LinkLayerType.LINUX_SLL:
return dl_parse_linux_sll
else:
return None
def read_tcp_packet(read_packet):
""" generator, read a *TCP* package once."""
for link_type, micro_second, link_packet in read_packet():
try:
link_layer_parser = get_link_layer_parser(link_type)
state, pack = read_tcp_pac(link_packet, link_layer_parser)
if state == 1 and pack:
pack.micro_second = micro_second
yield pack
continue
else:
continue
except:
pass
def read_package_r(pcap_file):
"""
    Reassemble TCP packets per connection, in sequence order, dropping
    retransmitted data.
    Note: we abandon the last ACK packet after FIN.
"""
conn_dict = {}
reverse_conn_dict = {}
direction_dict = {}
for pack in read_tcp_packet(pcap_file):
key = pack.gen_key()
# if a SYN is received, erase cached connection with same key.
if key in conn_dict and pack.pac_type == TcpPack.TYPE_INIT:
del conn_dict[key]
# if we haven't keep this connection, construct one.
if key not in conn_dict:
# remember the next SEQ should appear as list[0] to skip all retransmit
# packets. list[1] to indicate whether the socket is closed.
conn_dict[key] = [pack.seq, 0, []]
# if it's SYN, the data length is considered as 1.
if pack.pac_type == TcpPack.TYPE_INIT:
conn_dict[key][0] += 1
reverse_conn_dict[key] = [pack.ack, 0, []]
direction_dict[key] = pack.source + str(pack.source_port)
if pack.source + str(pack.source_port) == direction_dict[key]:
hold_packs = conn_dict[key]
else:
hold_packs = reverse_conn_dict[key]
# if the connection is insert into dictionary by SYN, we should update
# reverse SEQ, consider the SYN+ACK packet data length as 1.
if pack.pac_type == TcpPack.TYPE_INIT_ACK:
if reverse_conn_dict[key][0] == 0:
reverse_conn_dict[key][0] = pack.seq + 1
# do not receive anything after FIN/RST
if hold_packs[1] == 1:
continue
if pack.pac_type == TcpPack.TYPE_CLOSE:
hold_packs[1] = 1
# only store FIN/RST or packets which have payload data.
if pack.body or pack.pac_type == TcpPack.TYPE_CLOSE:
hold_packs[2].append(pack)
hold_packs[2] = sorted(hold_packs[2], key=lambda x: x.seq)
yield_list = []
while len(hold_packs[2]) > 0:
first_pack = hold_packs[2][0]
if not first_pack.body:
# this must be a RST/FIN packet without data.
yield_list.append(first_pack)
del hold_packs[2][0]
continue
elif first_pack.seq > hold_packs[0]:
# there has some packets lost, wait.
break
elif first_pack.seq == hold_packs[0]:
# the first packet matches the expected SEQ exactly.
hold_packs[0] = first_pack.seq + len(first_pack.body)
yield_list.append(first_pack)
del hold_packs[2][0]
elif first_pack.seq + len(first_pack.body) <= hold_packs[0]:
# the packet is a retransmit packet.
del hold_packs[2][0]
else:
# part of the packet data is retransmit, part of it is useful.
trim_len = first_pack.seq + len(first_pack.body) - hold_packs[0]
first_pack.body = first_pack.body[-1 * trim_len:]
first_pack.seq = hold_packs[0]
hold_packs[0] += trim_len
yield_list.append(first_pack)
del hold_packs[2][0]
for ipack in yield_list:
yield ipack
|
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from erpnext.controllers.accounts_controller import validate_taxes_and_charges, validate_inclusive_tax
class SalesTaxesandChargesTemplate(Document):
def validate(self):
        validate_taxes_and_charges_template(self)
def validate_taxes_and_charges_template(doc):
if not doc.is_default and not frappe.get_all(doc.doctype, filters={"is_default": 1}):
doc.is_default = 1
if doc.is_default == 1:
frappe.db.sql("""update `tab{0}` set is_default = 0
where ifnull(is_default,0) = 1 and name != %s and company = %s""".format(doc.doctype),
(doc.name, doc.company))
for tax in doc.get("taxes"):
validate_taxes_and_charges(tax)
validate_inclusive_tax(tax, doc)
|
"""
Models for contentserver
"""
from django.db.models.fields import PositiveIntegerField
from config_models.models import ConfigurationModel
class CourseAssetCacheTtlConfig(ConfigurationModel):
"""Configuration for the TTL of course assets."""
class Meta(object):
app_label = 'contentserver'
cache_ttl = PositiveIntegerField(
default=0,
help_text="The time, in seconds, to report that a course asset is allowed to be cached for."
)
@classmethod
def get_cache_ttl(cls):
"""Gets the cache TTL for course assets, if present"""
return cls.current().cache_ttl
def __repr__(self):
return '<CourseAssetCacheTtlConfig(cache_ttl={})>'.format(self.get_cache_ttl())
def __unicode__(self):
return unicode(repr(self))
|
"""Generate graphs with a given degree sequence or expected degree sequence.
"""
import networkx as nx
__author__ = "\n".join(['Aric Hagberg (hagberg@lanl.gov)',
                        'Pieter Swart (swart@lanl.gov)',
                        'Dan Schult (dschult@colgate.edu)',
                        'Joel Miller (joel.c.miller.research@gmail.com)',
                        'Ben Edwards'])
__all__ = ['is_valid_degree_sequence',
'is_valid_degree_sequence_erdos_gallai',
'is_valid_degree_sequence_havel_hakimi']
def is_valid_degree_sequence(sequence, method='hh'):
"""Returns True if the sequence is a valid degree sequence.
A degree sequence is valid if some graph can realize it.
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
method : "eg" | "hh"
The method used to validate the degree sequence.
"eg" corresponds to the Erdős-Gallai algorithm, and
"hh" to the Havel-Hakimi algorithm.
Returns
-------
valid : bool
True if the sequence is a valid degree sequence and False if not.
Examples
--------
>>> G = nx.path_graph(4)
>>> sequence = G.degree().values()
>>> nx.is_valid_degree_sequence(sequence)
True
References
----------
Erdős-Gallai
[EG1960]_, [choudum1986]_
Havel-Hakimi
[havel1955]_, [hakimi1962]_, [CL1996]_
"""
if method == 'eg':
valid = is_valid_degree_sequence_erdos_gallai(sequence)
elif method == 'hh':
valid = is_valid_degree_sequence_havel_hakimi(sequence)
else:
msg = "`method` must be 'eg' or 'hh'"
raise nx.NetworkXException(msg)
return valid
def is_valid_degree_sequence_havel_hakimi(sequence):
r"""Returns True if the sequence is a valid degree sequence.
A degree sequence is valid if some graph can realize it.
Validation proceeds via the Havel-Hakimi algorithm.
Worst-case run time is: `O(n^(log n))`
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
Returns
-------
valid : bool
True if the sequence is a valid degree sequence and False if not.
References
----------
[havel1955]_, [hakimi1962]_, [CL1996]_
"""
s = list(sequence) # copy to list
# some simple tests
if len(s) == 0:
return True # empty sequence = empty graph
if not nx.utils.is_list_of_ints(s):
return False # list of ints
if min(s)<0:
return False # each int not negative
if sum(s)%2:
return False # must be even
# successively reduce degree sequence by removing node of maximum degree
# as in Havel-Hakimi algorithm
while s:
s.sort() # sort in increasing order
if s[0]<0:
return False # check if removed too many from some node
d=s.pop() # pop largest degree
if d==0: return True # done! rest must be zero due to ordering
# degree must be <= number of available nodes
if d>len(s): return False
# remove edges to nodes of next higher degrees
#s.reverse() # to make it easy to get at higher degree nodes.
for i in range(len(s)-1,len(s)-(d+1),-1):
s[i]-=1
# should never get here b/c either d==0, d>len(s) or d<0 before s=[]
return False
def is_valid_degree_sequence_erdos_gallai(sequence):
r"""Returns True if the sequence is a valid degree sequence.
A degree sequence is valid if some graph can realize it.
Validation proceeds via the Erdős-Gallai algorithm.
Worst-case run time is: `O(n^2)`
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
Returns
-------
valid : bool
True if the sequence is a valid degree sequence and False if not.
References
----------
[EG1960]_, [choudum1986]_
"""
deg_seq = sorted(sequence,reverse=True)
n = len(deg_seq)
# some simple tests
if n == 0:
return True # empty sequence = empty graph
if not nx.utils.is_list_of_ints(deg_seq):
return False # list of ints
if min(deg_seq)<0:
return False # each int not negative
if sum(deg_seq)%2:
return False # must be even
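    # Erdős–Gallai theorem: a non-increasing sequence d_1 >= ... >= d_n with
    # even sum is graphical iff for every k in 1..n:
    #     sum(d_1..d_k) <= k*(k-1) + sum(min(d_i, k) for i in k+1..n)
    # It suffices to check the k where the sorted sequence strictly decreases.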
sigk = [i for i in range(1, len(deg_seq)) if deg_seq[i] < deg_seq[i-1]]
for k in sigk:
sum_deg = sum(deg_seq[0:k])
sum_min = k*(k-1) + sum([min([k,deg_seq[i]])
for i in range(k,n)])
if sum_deg>sum_min:
return False
return True
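# A quick sanity check (illustrative): [3, 3, 2] has even sum but its maximum
# degree exceeds n - 1, so it is not graphical, while [2, 2, 2] (a triangle) is.
# is_valid_degree_sequence_erdos_gallai([3, 3, 2]) -> False
# is_valid_degree_sequence_erdos_gallai([2, 2, 2]) -> True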
|
"""
This config file extends the test environment configuration
so that we can run the lettuce acceptance tests.
"""
from .test import *
from .sauce import *
DEBUG = True
SITE_NAME = 'localhost:{}'.format(LETTUCE_SERVER_PORT)
import logging
logging.basicConfig(filename=TEST_ROOT / "log" / "lms_acceptance.log", level=logging.ERROR)
logging.getLogger().setLevel(logging.ERROR)
import os
from random import choice
def seed():
return os.getppid()
LOG_OVERRIDES = [
('track.middleware', logging.CRITICAL),
('codejail.safe_exec', logging.ERROR),
('edx.courseware', logging.ERROR),
('audit', logging.ERROR),
('lms.djangoapps.instructor_task.api_helper', logging.ERROR),
]
for log_name, log_level in LOG_OVERRIDES:
logging.getLogger(log_name).setLevel(log_level)
update_module_store_settings(
MODULESTORE,
doc_store_settings={
'db': 'acceptance_xmodule',
'collection': 'acceptance_modulestore_%s' % seed(),
},
module_store_options={
'fs_root': TEST_ROOT / "data",
},
default_store=os.environ.get('DEFAULT_STORE', 'draft'),
)
CONTENTSTORE = {
'ENGINE': 'xmodule.contentstore.mongo.MongoContentStore',
'DOC_STORE_CONFIG': {
'host': 'localhost',
'db': 'acceptance_xcontent_%s' % seed(),
}
}
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': TEST_ROOT / "db" / "test_edx.db",
'TEST_NAME': TEST_ROOT / "db" / "test_edx.db",
'OPTIONS': {
'timeout': 30,
},
'ATOMIC_REQUESTS': True,
},
'student_module_history': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': TEST_ROOT / "db" / "test_student_module_history.db",
'TEST_NAME': TEST_ROOT / "db" / "test_student_module_history.db",
'OPTIONS': {
'timeout': 30,
},
}
}
TRACKING_BACKENDS.update({
'mongo': {
'ENGINE': 'track.backends.mongodb.MongoBackend'
}
})
EVENT_TRACKING_BACKENDS['tracking_logs']['OPTIONS']['backends'].update({
'mongo': {
'ENGINE': 'eventtracking.backends.mongodb.MongoBackend',
'OPTIONS': {
'database': 'track'
}
}
})
BULK_EMAIL_DEFAULT_FROM_EMAIL = "test@test.org"
FEATURES['ENABLE_DISCUSSION_SERVICE'] = False
FEATURES['AUTOMATIC_AUTH_FOR_TESTING'] = True
FEATURES['ENABLE_THIRD_PARTY_AUTH'] = True
THIRD_PARTY_AUTH = {
"Google": {
"SOCIAL_AUTH_GOOGLE_OAUTH2_KEY": "test",
"SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET": "test"
},
"Facebook": {
"SOCIAL_AUTH_FACEBOOK_KEY": "test",
"SOCIAL_AUTH_FACEBOOK_SECRET": "test"
}
}
FEATURES['ENABLE_PAYMENT_FAKE'] = True
FEATURES['ENABLE_SPECIAL_EXAMS'] = True
FEATURES['AUTOMATIC_VERIFY_STUDENT_IDENTITY_FOR_TESTING'] = True
USE_I18N = True
FEATURES['ENABLE_FEEDBACK_SUBMISSION'] = False
INSTALLED_APPS += ('lettuce.django',)
LETTUCE_APPS = ('courseware', 'instructor')
LETTUCE_AVOID_APPS = ('instructor_task', 'coursewarehistoryextended')
LETTUCE_BROWSER = os.environ.get('LETTUCE_BROWSER', 'chrome')
LETTUCE_SELENIUM_CLIENT = os.environ.get('LETTUCE_SELENIUM_CLIENT', 'local')
SELENIUM_GRID = {
'URL': 'http://127.0.0.1:4444/wd/hub',
'BROWSER': LETTUCE_BROWSER,
}
try:
from .private import * # pylint: disable=import-error
except ImportError:
pass
XQUEUE_INTERFACE = {
"url": "http://127.0.0.1:{0:d}".format(XQUEUE_PORT),
"django_auth": {
"username": "lms",
"password": "***REMOVED***"
},
"basic_auth": ('anant', 'agarwal'),
}
YOUTUBE['API'] = "http://127.0.0.1:{0}/get_youtube_api/".format(YOUTUBE_PORT)
YOUTUBE['METADATA_URL'] = "http://127.0.0.1:{0}/test_youtube/".format(YOUTUBE_PORT)
YOUTUBE['TEXT_API']['url'] = "127.0.0.1:{0}/test_transcripts_youtube/".format(YOUTUBE_PORT)
YOUTUBE['TEST_TIMEOUT'] = 1500
if FEATURES.get('ENABLE_COURSEWARE_SEARCH') or \
FEATURES.get('ENABLE_DASHBOARD_SEARCH') or \
FEATURES.get('ENABLE_COURSE_DISCOVERY'):
# Use MockSearchEngine as the search engine for test scenario
SEARCH_ENGINE = "search.tests.mock_search_engine.MockSearchEngine"
import uuid
SECRET_KEY = uuid.uuid4().hex
PIPELINE_ENABLED = False
MIGRATION_MODULES = {}
|
import sys
from testrunner import run
QR_CODE = (
"██████████████ ██████████ ██ ██████████████\n"
"██ ██ ██ ██ ████ ██ ██\n"
"██ ██████ ██ ████ ██ ████ ██ ██████ ██\n"
"██ ██████ ██ ████ ██ ██ ██████ ██\n"
"██ ██████ ██ ██████████ ████ ██ ██████ ██\n"
"██ ██ ██ ██ ████ ██ ██\n"
"██████████████ ██ ██ ██ ██ ██ ██████████████\n"
" ██ ██████ ██ \n"
" ████ ██ ████ ██████ ██████ ██ ██████████\n"
" ████ ██ ████████ ██ ████ ██\n"
" ██ ████████ ██ ██ ████ ██████\n"
"██████ ██ ██ ██ ██ ██ ██ \n"
" ████████ ██ ██ ██████ ██ ████\n"
" ██ ████ ██ ██ ██████ ██ ██\n"
"██ ██ ██ ████████ ██ ██ ██████\n"
" ██ ████ ████ ██ ██ ██ ██ ██ \n"
"██ ██ ██████ ████████████████ \n"
" ██████ ██ ████ ████ ████\n"
"██████████████ ████████ ████████ ██ ████ ████\n"
"██ ██ ██ ██████ ██ ████ ██\n"
"██ ██████ ██ ██████████ ████████████ ██\n"
"██ ██████ ██ ████ ██ ██ ████████ \n"
"██ ██████ ██ ██ ████ ██ ██ ██ ██\n"
"██ ██ ████ ██████████ ████ ██ \n"
"██████████████ ██ ██ ████ ████\n"
)
def testfunc(child):
for line in QR_CODE.split("\n"):
child.expect_exact(line)
print("\nSUCCESS")
if __name__ == "__main__":
sys.exit(run(testfunc))
|
import os
import shutil
import tempfile
import unittest
import xml.etree.ElementTree as ET
import coalesce
class TestCoalesce(unittest.TestCase):
def setUp(self):
self.tmpdir = tempfile.mkdtemp(prefix='coalesce_test_')
def tearDown(self):
shutil.rmtree(self.tmpdir)
def make_result(self, name, error=''):
pkg = os.path.join(self.tmpdir, name)
os.makedirs(pkg)
if error:
inner = '<failure>something bad</failure>'
else:
inner = ''
with open(pkg + '/test.log', 'w') as fp:
fp.write(error)
with open(pkg + '/test.xml', 'w') as fp:
fp.write('''<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
<testsuite name="{name}" tests="1" failures="0" errors="0">
<testcase name="{name}" status="run">{inner}</testcase>
</testsuite>
</testsuites>'''.format(name=name, inner=inner))
return pkg
def test_utf8(self):
uni_string = u'\u8a66\u3057'
pkg = self.make_result(name='coal', error=uni_string.encode('utf8'))
result = coalesce.result(pkg)
self.assertEqual(result.find('failure').text, uni_string)
def test_header_strip(self):
failure = '''exec ${PAGER:-/usr/bin/less} "$0" || exit 1
-----------------------------------------------------------------------------
something bad'''
pkg = self.make_result(name='coal', error=failure)
result = coalesce.result(pkg)
self.assertEqual(result.find('failure').text, 'something bad')
def test_sanitize_bad(self):
self.assertEqual(coalesce.sanitize('foo\033\x00\x08'), 'foo')
def test_sanitize_ansi(self):
self.assertEqual(coalesce.sanitize('foo\033[1mbar\033[1mbaz'),
'foobarbaz')
def test_package_names(self):
os.chdir(self.tmpdir)
os.putenv('WORKSPACE', self.tmpdir)
os.symlink('.', 'bazel-testlogs')
self.make_result(name='coal/sub_test')
self.make_result(name='coal/other_test')
self.make_result(name='some/deep/package/go_test')
coalesce.main()
with open('_artifacts/junit_bazel.xml') as fp:
data = fp.read()
root = ET.fromstring(data)
names = [x.attrib['name'] for x in root.findall('testcase')]
self.assertEqual(
names,
['//coal:other_test', '//coal:sub_test', '//some/deep/package:go_test']
)
if __name__ == '__main__':
unittest.main()
|
"""
SleekXMPP: The Sleek XMPP Library
Copyright (C) 2013 Nathanael C. Fritz, Lance J.T. Stout
This file is part of SleekXMPP.
See the file LICENSE for copying permission.
"""
import logging
from sleekxmpp.xmlstream import register_stanza_plugin
from sleekxmpp.plugins import BasePlugin
from sleekxmpp.plugins.google.auth import stanza
log = logging.getLogger(__name__)
class GoogleAuth(BasePlugin):
"""
Google: Auth Extensions (JID Domain Discovery, OAuth2)
Also see:
<https://developers.google.com/talk/jep_extensions/jid_domain_change>
<https://developers.google.com/talk/jep_extensions/oauth>
"""
name = 'google_auth'
description = 'Google: Auth Extensions (JID Domain Discovery, OAuth2)'
dependencies = set(['feature_mechanisms'])
stanza = stanza
def plugin_init(self):
self.xmpp.namespace_map['http://www.google.com/talk/protocol/auth'] = 'ga'
register_stanza_plugin(self.xmpp['feature_mechanisms'].stanza.Auth,
stanza.GoogleAuth)
self.xmpp.add_filter('out', self._auth)
def plugin_end(self):
self.xmpp.del_filter('out', self._auth)
def _auth(self, stanza):
if isinstance(stanza, self.xmpp['feature_mechanisms'].stanza.Auth):
stanza.stream = self.xmpp
stanza['google']['client_uses_full_bind_result'] = True
if stanza['mechanism'] == 'X-OAUTH2':
stanza['google']['service'] = 'oauth2'
            log.debug('Outgoing auth stanza: %s', stanza)
return stanza
|
from cvxpy import *
import numpy as np
import scipy.sparse as sp
np.random.seed(5)
n = 10000
m = 100
pbar = (np.ones((n, 1)) * .03 +
np.matrix(np.append(np.random.rand(n - 1, 1), 0)).T * .12)
F = sp.rand(m, n, density=0.01)
F.data = np.ones(len(F.data))
D = sp.eye(n).tocoo()
D.data = np.random.randn(len(D.data))**2
Z = np.random.normal(size=(m, m))
Z = Z.T.dot(Z)
print Z.shape
x = Variable(n)
y = x.__rmul__(F)
mu = 1
ret = pbar.T * x
risk = square(norm(x.__rmul__(D))) + quad_form(y, Z)
objective = Minimize( -ret + mu * risk )
constraints_longonly = [sum_entries(x) == 1, x >= 0]
prob = Problem(objective, constraints_longonly)
import time
print "starting problems"
start = time.clock()
prob.solve(verbose=True, solver=SCS)
elapsed = (time.clock() - start)
print "SCS time:", elapsed
print prob.value
start = time.clock()
prob.solve(verbose=True, solver=ECOS)
elapsed = (time.clock() - start)
print "ECOS time:", elapsed
print prob.value
start = time.clock()
prob.solve(verbose=True, solver=CVXOPT)
elapsed = (time.clock() - start)
print "CVXOPT time:", elapsed
print prob.value
|
import codecs
import sys
import xml.etree.ElementTree as ET
input_list = list(sys.argv[1:])
if len(input_list) < 1:
print 'usage: makerst.py <classes.xml>'
sys.exit(0)
def validate_tag(elem, tag):
if elem.tag != tag:
print "Tag mismatch, expected '" + tag + "', got " + elem.tag
sys.exit(255)
class_names = []
classes = {}
def ul_string(text, ul):
    text += "\n"
    for i in range(len(text) - 1):
        text += ul
    text += "\n"
    return text
def make_class_list(class_list, columns):
f = codecs.open('class_list.rst', 'wb', 'utf-8')
prev = 0
col_max = len(class_list) / columns + 1
    print 'col max is', col_max
col_count = 0
row_count = 0
last_initial = ''
fit_columns = []
for n in range(0, columns):
fit_columns += [[]]
indexers = []
last_initial = ''
idx = 0
for n in class_list:
col = idx / col_max
if col >= columns:
col = columns - 1
fit_columns[col] += [n]
idx += 1
if n[:1] != last_initial:
indexers += [n]
last_initial = n[:1]
row_max = 0
f.write("\n")
for n in range(0, columns):
if len(fit_columns[n]) > row_max:
row_max = len(fit_columns[n])
f.write("| ")
for n in range(0, columns):
f.write(" | |")
f.write("\n")
f.write("+")
for n in range(0, columns):
f.write("--+-------+")
f.write("\n")
for r in range(0, row_max):
s = '+ '
for c in range(0, columns):
if r >= len(fit_columns[c]):
continue
classname = fit_columns[c][r]
initial = classname[0]
if classname in indexers:
s += '**' + initial + '** | '
else:
s += ' | '
s += '[' + classname + '](class_' + classname.lower() + ') | '
s += '\n'
f.write(s)
for n in range(0, columns):
f.write("--+-------+")
f.write("\n")
def rstize_text(text, cclass):
# Linebreak + tabs in the XML should become two line breaks unless in a "codeblock"
pos = 0
while True:
pos = text.find('\n', pos)
if pos == -1:
break
pre_text = text[:pos]
while text[pos + 1] == '\t':
pos += 1
post_text = text[pos + 1:]
# Handle codeblocks
if post_text.startswith("[codeblock]"):
end_pos = post_text.find("[/codeblock]")
if end_pos == -1:
sys.exit("ERROR! [codeblock] without a closing tag!")
code_text = post_text[len("[codeblock]"):end_pos]
post_text = post_text[end_pos:]
# Remove extraneous tabs
code_pos = 0
while True:
code_pos = code_text.find('\n', code_pos)
if code_pos == -1:
break
to_skip = 0
while code_pos + to_skip + 1 < len(code_text) and code_text[code_pos + to_skip + 1] == '\t':
to_skip += 1
if len(code_text[code_pos + to_skip + 1:]) == 0:
code_text = code_text[:code_pos] + "\n"
code_pos += 1
else:
code_text = code_text[:code_pos] + "\n " + code_text[code_pos + to_skip + 1:]
code_pos += 5 - to_skip
text = pre_text + "\n[codeblock]" + code_text + post_text
pos += len("\n[codeblock]" + code_text)
# Handle normal text
else:
text = pre_text + "\n\n" + post_text
pos += 2
# Escape * character to avoid interpreting it as emphasis
pos = 0
while True:
pos = text.find('*', pos)
if pos == -1:
break
text = text[:pos] + "\*" + text[pos + 1:]
pos += 2
# Escape _ character at the end of a word to avoid interpreting it as an inline hyperlink
pos = 0
while True:
pos = text.find('_', pos)
if pos == -1:
break
if not text[pos + 1].isalnum(): # don't escape within a snake_case word
text = text[:pos] + "\_" + text[pos + 1:]
pos += 2
else:
pos += 1
# Handle [tags]
pos = 0
while True:
pos = text.find('[', pos)
if pos == -1:
break
endq_pos = text.find(']', pos + 1)
if endq_pos == -1:
break
pre_text = text[:pos]
post_text = text[endq_pos + 1:]
tag_text = text[pos + 1:endq_pos]
if tag_text in class_names:
tag_text = make_type(tag_text)
else: # command
cmd = tag_text
space_pos = tag_text.find(' ')
if cmd.find('html') == 0:
cmd = tag_text[:space_pos]
param = tag_text[space_pos + 1:]
tag_text = param
elif cmd.find('method') == 0:
cmd = tag_text[:space_pos]
param = tag_text[space_pos + 1:]
if param.find('.') != -1:
(class_param, method_param) = param.split('.')
tag_text = ':ref:`' + class_param + '.' + method_param + '<class_' + class_param + '_' + method_param + '>`'
else:
tag_text = ':ref:`' + param + '<class_' + cclass + "_" + param + '>`'
elif cmd.find('image=') == 0:
tag_text = "" # ''
elif cmd.find('url=') == 0:
tag_text = ':ref:`' + cmd[4:] + '<' + cmd[4:] + ">`"
elif cmd == '/url':
tag_text = ')'
elif cmd == 'center':
tag_text = ''
elif cmd == '/center':
tag_text = ''
elif cmd == 'codeblock':
tag_text = '\n::\n'
elif cmd == '/codeblock':
tag_text = ''
# Strip newline if the tag was alone on one
if pre_text[-1] == '\n':
pre_text = pre_text[:-1]
elif cmd == 'br':
# Make a new paragraph instead of a linebreak, rst is not so linebreak friendly
tag_text = '\n\n'
# Strip potential leading spaces
while post_text[0] == ' ':
post_text = post_text[1:]
elif cmd == 'i' or cmd == '/i':
tag_text = '*'
elif cmd == 'b' or cmd == '/b':
tag_text = '**'
elif cmd == 'u' or cmd == '/u':
tag_text = ''
elif cmd == 'code' or cmd == '/code':
tag_text = '``'
else:
tag_text = ':ref:`' + tag_text + '<class_' + tag_text.lower() + '>`'
text = pre_text + tag_text + post_text
pos = len(pre_text) + len(tag_text)
# tnode = ET.SubElement(parent,"div")
# tnode.text=text
return text
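# Illustrative transformations (hypothetical inputs):
#   rstize_text("Press [b]Start[/b] now", "Node")
#     -> "Press **Start** now"
#   rstize_text("Emits [method queue_free]", "Node")
#     -> "Emits :ref:`queue_free<class_Node_queue_free>`"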
def make_type(t):
global class_names
if t in class_names:
return ':ref:`' + t + '<class_' + t.lower() + '>`'
return t
def make_method(
f,
name,
m,
declare,
cname,
event=False,
pp=None
):
if (declare or pp == None):
t = '- '
else:
t = ""
ret_type = 'void'
args = list(m)
mdata = {}
mdata['argidx'] = []
for a in args:
if a.tag == 'return':
idx = -1
elif a.tag == 'argument':
idx = int(a.attrib['index'])
else:
continue
mdata['argidx'].append(idx)
mdata[idx] = a
if not event:
if -1 in mdata['argidx']:
t += make_type(mdata[-1].attrib['type'])
else:
t += 'void'
t += ' '
if declare or pp == None:
# span.attrib["class"]="funcdecl"
# a=ET.SubElement(span,"a")
# a.attrib["name"]=name+"_"+m.attrib["name"]
# a.text=name+"::"+m.attrib["name"]
s = ' **' + m.attrib['name'] + '** '
else:
s = ':ref:`' + m.attrib['name'] + '<class_' + cname + "_" + m.attrib['name'] + '>` '
s += ' **(**'
argfound = False
for a in mdata['argidx']:
arg = mdata[a]
if a < 0:
continue
if a > 0:
s += ', '
else:
s += ' '
s += make_type(arg.attrib['type'])
if 'name' in arg.attrib:
s += ' ' + arg.attrib['name']
else:
s += ' arg' + str(a)
if 'default' in arg.attrib:
s += '=' + arg.attrib['default']
argfound = True
if argfound:
s += ' '
s += ' **)**'
if 'qualifiers' in m.attrib:
s += ' ' + m.attrib['qualifiers']
if (not declare):
if (pp != None):
pp.append((t, s))
else:
f.write("- " + t + " " + s + "\n")
else:
f.write(t + s + "\n")
def make_heading(title, underline):
return title + '\n' + underline * len(title) + "\n\n"
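# e.g. make_heading('Brief Description', '-') yields the title followed by a
# matching-length underline:
#   Brief Description
#   -----------------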
def make_rst_class(node):
name = node.attrib['name']
f = codecs.open("class_" + name.lower() + '.rst', 'wb', 'utf-8')
# Warn contributors not to edit this file directly
f.write(".. Generated automatically by doc/tools/makerst.py in Godot's source tree.\n")
f.write(".. DO NOT EDIT THIS FILE, but the doc/base/classes.xml source instead.\n\n")
f.write(".. _class_" + name + ":\n\n")
f.write(make_heading(name, '='))
if 'inherits' in node.attrib:
inh = node.attrib['inherits'].strip()
f.write('**Inherits:** ')
first = True
while(inh in classes):
if (not first):
f.write(" **<** ")
else:
first = False
f.write(make_type(inh))
inode = classes[inh]
if ('inherits' in inode.attrib):
inh = inode.attrib['inherits'].strip()
else:
inh = None
f.write("\n\n")
inherited = []
for cn in classes:
c = classes[cn]
if 'inherits' in c.attrib:
if (c.attrib['inherits'].strip() == name):
inherited.append(c.attrib['name'])
if (len(inherited)):
f.write('**Inherited By:** ')
for i in range(len(inherited)):
if (i > 0):
f.write(", ")
f.write(make_type(inherited[i]))
f.write("\n\n")
if 'category' in node.attrib:
f.write('**Category:** ' + node.attrib['category'].strip() + "\n\n")
f.write(make_heading('Brief Description', '-'))
briefd = node.find('brief_description')
if briefd != None:
f.write(rstize_text(briefd.text.strip(), name) + "\n\n")
methods = node.find('methods')
if methods != None and len(list(methods)) > 0:
f.write(make_heading('Member Functions', '-'))
ml = []
for m in list(methods):
make_method(f, node.attrib['name'], m, False, name, False, ml)
longest_t = 0
longest_s = 0
for s in ml:
sl = len(s[0])
if (sl > longest_s):
longest_s = sl
tl = len(s[1])
if (tl > longest_t):
longest_t = tl
sep = "+"
for i in range(longest_s + 2):
sep += "-"
sep += "+"
for i in range(longest_t + 2):
sep += "-"
sep += "+\n"
f.write(sep)
for s in ml:
rt = s[0]
while(len(rt) < longest_s):
rt += " "
st = s[1]
while(len(st) < longest_t):
st += " "
f.write("| " + rt + " | " + st + " |\n")
f.write(sep)
f.write('\n')
events = node.find('signals')
if events != None and len(list(events)) > 0:
f.write(make_heading('Signals', '-'))
for m in list(events):
make_method(f, node.attrib['name'], m, True, name, True)
f.write('\n')
members = node.find('members')
if members != None and len(list(members)) > 0:
f.write(make_heading('Member Variables', '-'))
for c in list(members):
s = '- '
s += make_type(c.attrib['type']) + ' '
s += '**' + c.attrib['name'] + '**'
if c.text.strip() != '':
s += ' - ' + c.text.strip()
f.write(s + '\n')
f.write('\n')
constants = node.find('constants')
if constants != None and len(list(constants)) > 0:
f.write(make_heading('Numeric Constants', '-'))
for c in list(constants):
s = '- '
s += '**' + c.attrib['name'] + '**'
if 'value' in c.attrib:
s += ' = **' + c.attrib['value'] + '**'
if c.text.strip() != '':
s += ' --- ' + rstize_text(c.text.strip(), name)
f.write(s + '\n')
f.write('\n')
descr = node.find('description')
if descr != None and descr.text.strip() != '':
f.write(make_heading('Description', '-'))
f.write(rstize_text(descr.text.strip(), name) + "\n\n")
methods = node.find('methods')
if methods != None and len(list(methods)) > 0:
f.write(make_heading('Member Function Description', '-'))
for m in list(methods):
f.write(".. _class_" + name + "_" + m.attrib['name'] + ":\n\n")
#f.write('\n<a name="'+m.attrib['name']+'">' + m.attrib['name'] + '</a>\n------\n')
make_method(f, node.attrib['name'], m, True, name)
f.write('\n')
d = m.find('description')
if d == None or d.text.strip() == '':
continue
f.write(rstize_text(d.text.strip(), name))
f.write("\n\n")
f.write('\n')
for file in input_list:
tree = ET.parse(file)
doc = tree.getroot()
if 'version' not in doc.attrib:
print "Version missing from 'doc'"
sys.exit(255)
version = doc.attrib['version']
for c in list(doc):
if c.attrib['name'] in class_names:
continue
class_names.append(c.attrib['name'])
classes[c.attrib['name']] = c
class_names.sort()
for cn in class_names:
c = classes[cn]
make_rst_class(c)
|
"""
Copyright 2008-2011 Free Software Foundation, Inc.
This file is part of GNU Radio
GNU Radio Companion is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
GNU Radio Companion is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
"""
import expr_utils
from .. base.FlowGraph import FlowGraph as _FlowGraph
from .. gui.FlowGraph import FlowGraph as _GUIFlowGraph
import re
_variable_matcher = re.compile('^(variable\w*)$')
_parameter_matcher = re.compile('^(parameter)$')
class FlowGraph(_FlowGraph, _GUIFlowGraph):
def __init__(self, **kwargs):
_FlowGraph.__init__(self, **kwargs)
_GUIFlowGraph.__init__(self)
self._eval_cache = dict()
def _eval(self, code, namespace, namespace_hash):
"""
Evaluate the code with the given namespace.
@param code a string with python code
@param namespace a dict representing the namespace
@param namespace_hash a unique hash for the namespace
@return the resultant object
"""
if not code: raise Exception, 'Cannot evaluate empty statement.'
my_hash = hash(code) ^ namespace_hash
#cache if does not exist
if not self._eval_cache.has_key(my_hash):
self._eval_cache[my_hash] = eval(code, namespace, namespace)
#return from cache
return self._eval_cache[my_hash]
def get_io_signaturev(self, direction):
"""
Get a list of io signatures for this flow graph.
@param direction a string of 'in' or 'out'
@return a list of dicts with: type, label, vlen, size
"""
sorted_pads = {
'in': self.get_pad_sources(),
'out': self.get_pad_sinks(),
}[direction]
# we only want stream ports
sorted_pads = filter(lambda b: b.get_param('type').get_evaluated() != 'message', sorted_pads);
#load io signature
return [{
'label': str(pad.get_param('label').get_evaluated()),
'type': str(pad.get_param('type').get_evaluated()),
'vlen': str(pad.get_param('vlen').get_evaluated()),
'size': pad.get_param('type').get_opt('size'),
'optional': bool(pad.get_param('optional').get_evaluated()),
} for pad in sorted_pads]
def get_pad_sources(self):
"""
Get a list of pad source blocks sorted by id order.
@return a list of pad source blocks in this flow graph
"""
pads = filter(lambda b: b.get_key() == 'pad_source', self.get_enabled_blocks())
return sorted(pads, lambda x, y: cmp(x.get_id(), y.get_id()))
def get_pad_sinks(self):
"""
Get a list of pad sink blocks sorted by id order.
@return a list of pad sink blocks in this flow graph
"""
pads = filter(lambda b: b.get_key() == 'pad_sink', self.get_enabled_blocks())
return sorted(pads, lambda x, y: cmp(x.get_id(), y.get_id()))
def get_msg_pad_sources(self):
ps = self.get_pad_sources();
return filter(lambda b: b.get_param('type').get_evaluated() == 'message', ps);
def get_msg_pad_sinks(self):
ps = self.get_pad_sinks();
return filter(lambda b: b.get_param('type').get_evaluated() == 'message', ps);
def get_imports(self):
"""
        Get a list of all import statements in this flow graph namespace.
        @return a sorted list of unique import statements
"""
imports = sum([block.get_imports() for block in self.get_enabled_blocks()], [])
imports = sorted(set(imports))
return imports
def get_variables(self):
"""
Get a list of all variables in this flow graph namespace.
        Exclude parameterized variables.
@return a sorted list of variable blocks in order of dependency (indep -> dep)
"""
variables = filter(lambda b: _variable_matcher.match(b.get_key()), self.get_enabled_blocks())
return expr_utils.sort_objects(variables, lambda v: v.get_id(), lambda v: v.get_var_make())
def get_parameters(self):
"""
        Get a list of all parameterized variables in this flow graph namespace.
        @return a list of parameterized variables
"""
parameters = filter(lambda b: _parameter_matcher.match(b.get_key()), self.get_enabled_blocks())
return parameters
def rewrite(self):
"""
Flag the namespace to be renewed.
"""
self._renew_eval_ns = True
_FlowGraph.rewrite(self)
def evaluate(self, expr):
"""
Evaluate the expression.
@param expr the string expression
@throw Exception bad expression
@return the evaluated data
"""
if self._renew_eval_ns:
self._renew_eval_ns = False
#reload namespace
n = dict()
#load imports
for imp in self.get_imports():
try: exec imp in n
except: pass
#load parameters
np = dict()
for parameter in self.get_parameters():
try:
e = eval(parameter.get_param('value').to_code(), n, n)
np[parameter.get_id()] = e
except: pass
n.update(np) #merge param namespace
#load variables
for variable in self.get_variables():
try:
e = eval(variable.get_param('value').to_code(), n, n)
n[variable.get_id()] = e
except: pass
#make namespace public
self.n = n
self.n_hash = hash(str(n))
#evaluate
e = self._eval(expr, self.n, self.n_hash)
return e
|
"""Library of dtypes (Tensor element types)."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.core.framework import types_pb2
class DType(object):
"""Represents the type of the elements in a `Tensor`.
The following `DType` objects are defined:
* `tf.float16`: 16-bit half-precision floating-point.
* `tf.float32`: 32-bit single-precision floating-point.
* `tf.float64`: 64-bit double-precision floating-point.
* `tf.bfloat16`: 16-bit truncated floating-point.
* `tf.complex64`: 64-bit single-precision complex.
* `tf.complex128`: 128-bit double-precision complex.
* `tf.int8`: 8-bit signed integer.
* `tf.uint8`: 8-bit unsigned integer.
* `tf.uint16`: 16-bit unsigned integer.
* `tf.int16`: 16-bit signed integer.
* `tf.int32`: 32-bit signed integer.
* `tf.int64`: 64-bit signed integer.
* `tf.bool`: Boolean.
* `tf.string`: String.
* `tf.qint8`: Quantized 8-bit signed integer.
* `tf.quint8`: Quantized 8-bit unsigned integer.
* `tf.qint16`: Quantized 16-bit signed integer.
* `tf.quint16`: Quantized 16-bit unsigned integer.
* `tf.qint32`: Quantized 32-bit signed integer.
* `tf.resource`: Handle to a mutable resource.
In addition, variants of these types with the `_ref` suffix are
defined for reference-typed tensors.
The `tf.as_dtype()` function converts numpy types and string type
names to a `DType` object.
@@is_compatible_with
@@name
@@base_dtype
@@real_dtype
@@is_bool
@@is_floating
@@is_complex
@@is_integer
@@is_quantized
@@is_unsigned
@@as_numpy_dtype
@@as_datatype_enum
@@limits
"""
def __init__(self, type_enum):
"""Creates a new `DataType`.
NOTE(mrry): In normal circumstances, you should not need to
construct a `DataType` object directly. Instead, use the
`tf.as_dtype()` function.
Args:
type_enum: A `types_pb2.DataType` enum value.
Raises:
      TypeError: If `type_enum` is not a valid `types_pb2.DataType` value.
"""
# TODO(mrry): Make the necessary changes (using __new__) to ensure
# that calling this returns one of the interned values.
type_enum = int(type_enum)
if (type_enum not in types_pb2.DataType.values()
or type_enum == types_pb2.DT_INVALID):
raise TypeError(
"type_enum is not a valid types_pb2.DataType: %s" % type_enum)
self._type_enum = type_enum
@property
def _is_ref_dtype(self):
"""Returns `True` if this `DType` represents a reference type."""
return self._type_enum > 100
@property
def _as_ref(self):
"""Returns a reference `DType` based on this `DType`."""
if self._is_ref_dtype:
return self
else:
return _INTERN_TABLE[self._type_enum + 100]
@property
def base_dtype(self):
"""Returns a non-reference `DType` based on this `DType`."""
if self._is_ref_dtype:
return _INTERN_TABLE[self._type_enum - 100]
else:
return self
@property
def real_dtype(self):
"""Returns the dtype correspond to this dtype's real part."""
base = self.base_dtype
if base == complex64:
return float32
elif base == complex128:
return float64
else:
return self
@property
def is_numpy_compatible(self):
return (self._type_enum != types_pb2.DT_RESOURCE and
self._type_enum != types_pb2.DT_RESOURCE_REF)
@property
def as_numpy_dtype(self):
"""Returns a `numpy.dtype` based on this `DType`."""
return _TF_TO_NP[self._type_enum]
@property
def as_datatype_enum(self):
"""Returns a `types_pb2.DataType` enum value based on this `DType`."""
return self._type_enum
@property
def is_bool(self):
"""Returns whether this is a boolean data type"""
return self.base_dtype == bool
@property
def is_integer(self):
"""Returns whether this is a (non-quantized) integer type."""
return (self.is_numpy_compatible and not self.is_quantized and
issubclass(self.as_numpy_dtype, np.integer))
@property
def is_floating(self):
"""Returns whether this is a (non-quantized, real) floating point type."""
return self.is_numpy_compatible and issubclass(self.as_numpy_dtype,
np.floating)
@property
def is_complex(self):
"""Returns whether this is a complex floating point type."""
return self.base_dtype in (complex64, complex128)
@property
def is_quantized(self):
"""Returns whether this is a quantized data type."""
return self.base_dtype in [qint8, quint8, qint16, quint16, qint32, bfloat16]
@property
def is_unsigned(self):
"""Returns whether this type is unsigned.
Non-numeric, unordered, and quantized types are not considered unsigned, and
this function returns `False`.
Returns:
Whether a `DType` is unsigned.
"""
try:
return self.min == 0
except TypeError:
return False
@property
def min(self):
"""Returns the minimum representable value in this data type.
Raises:
TypeError: if this is a non-numeric, unordered, or quantized type.
"""
if (self.is_quantized or self.base_dtype in
(bool, string, complex64, complex128)):
raise TypeError("Cannot find minimum value of %s." % self)
    # there is no simple way to get the min value of a dtype; we have to
    # check float and int types separately
try:
return np.finfo(self.as_numpy_dtype()).min
except: # bare except as possible raises by finfo not documented
try:
return np.iinfo(self.as_numpy_dtype()).min
except:
raise TypeError("Cannot find minimum value of %s." % self)
@property
def max(self):
"""Returns the maximum representable value in this data type.
Raises:
TypeError: if this is a non-numeric, unordered, or quantized type.
"""
if (self.is_quantized or self.base_dtype in
(bool, string, complex64, complex128)):
raise TypeError("Cannot find maximum value of %s." % self)
    # there is no simple way to get the max value of a dtype; we have to
    # check float and int types separately
try:
return np.finfo(self.as_numpy_dtype()).max
except: # bare except as possible raises by finfo not documented
try:
return np.iinfo(self.as_numpy_dtype()).max
except:
raise TypeError("Cannot find maximum value of %s." % self)
@property
def limits(self, clip_negative=True):
"""Return intensity limits, i.e. (min, max) tuple, of the dtype.
Args:
clip_negative : bool, optional
If True, clip the negative range (i.e. return 0 for min intensity)
even if the image dtype allows negative values.
Returns
min, max : tuple
Lower and upper intensity limits.
"""
min, max = dtype_range[self.as_numpy_dtype]
if clip_negative:
min = 0
return min, max
def is_compatible_with(self, other):
"""Returns True if the `other` DType will be converted to this DType.
The conversion rules are as follows:
```python
DType(T) .is_compatible_with(DType(T)) == True
DType(T) .is_compatible_with(DType(T).as_ref) == True
DType(T).as_ref.is_compatible_with(DType(T)) == False
DType(T).as_ref.is_compatible_with(DType(T).as_ref) == True
```
Args:
other: A `DType` (or object that may be converted to a `DType`).
Returns:
True if a Tensor of the `other` `DType` will be implicitly converted to
this `DType`.
"""
other = as_dtype(other)
return self._type_enum in (
other.as_datatype_enum, other.base_dtype.as_datatype_enum)
def __eq__(self, other):
"""Returns True iff this DType refers to the same type as `other`."""
if other is None:
return False
try:
dtype = as_dtype(other).as_datatype_enum
return self._type_enum == dtype # pylint: disable=protected-access
except TypeError:
return False
def __ne__(self, other):
"""Returns True iff self != other."""
return not self.__eq__(other)
@property
def name(self):
"""Returns the string name for this `DType`."""
return _TYPE_TO_STRING[self._type_enum]
def __str__(self):
return "<dtype: %r>" % self.name
def __repr__(self):
return "tf." + self.name
def __hash__(self):
return self._type_enum
@property
def size(self):
if self._type_enum == types_pb2.DT_RESOURCE:
return 1
return np.dtype(self.as_numpy_dtype).itemsize
dtype_range = {np.bool_: (False, True),
np.bool8: (False, True),
np.uint8: (0, 255),
np.uint16: (0, 65535),
np.int8: (-128, 127),
np.int16: (-32768, 32767),
np.int64: (-2**63, 2**63 - 1),
np.uint64: (0, 2**64 - 1),
np.int32: (-2**31, 2**31 - 1),
np.uint32: (0, 2**32 - 1),
np.float32: (-1, 1),
np.float64: (-1, 1)}
resource = DType(types_pb2.DT_RESOURCE)
float16 = DType(types_pb2.DT_HALF)
half = float16
float32 = DType(types_pb2.DT_FLOAT)
float64 = DType(types_pb2.DT_DOUBLE)
double = float64
int32 = DType(types_pb2.DT_INT32)
uint8 = DType(types_pb2.DT_UINT8)
uint16 = DType(types_pb2.DT_UINT16)
int16 = DType(types_pb2.DT_INT16)
int8 = DType(types_pb2.DT_INT8)
string = DType(types_pb2.DT_STRING)
complex64 = DType(types_pb2.DT_COMPLEX64)
complex128 = DType(types_pb2.DT_COMPLEX128)
int64 = DType(types_pb2.DT_INT64)
bool = DType(types_pb2.DT_BOOL)
qint8 = DType(types_pb2.DT_QINT8)
quint8 = DType(types_pb2.DT_QUINT8)
qint16 = DType(types_pb2.DT_QINT16)
quint16 = DType(types_pb2.DT_QUINT16)
qint32 = DType(types_pb2.DT_QINT32)
resource_ref = DType(types_pb2.DT_RESOURCE_REF)
bfloat16 = DType(types_pb2.DT_BFLOAT16)
float16_ref = DType(types_pb2.DT_HALF_REF)
half_ref = float16_ref
float32_ref = DType(types_pb2.DT_FLOAT_REF)
float64_ref = DType(types_pb2.DT_DOUBLE_REF)
double_ref = float64_ref
int32_ref = DType(types_pb2.DT_INT32_REF)
uint8_ref = DType(types_pb2.DT_UINT8_REF)
uint16_ref = DType(types_pb2.DT_UINT16_REF)
int16_ref = DType(types_pb2.DT_INT16_REF)
int8_ref = DType(types_pb2.DT_INT8_REF)
string_ref = DType(types_pb2.DT_STRING_REF)
complex64_ref = DType(types_pb2.DT_COMPLEX64_REF)
complex128_ref = DType(types_pb2.DT_COMPLEX128_REF)
int64_ref = DType(types_pb2.DT_INT64_REF)
bool_ref = DType(types_pb2.DT_BOOL_REF)
qint8_ref = DType(types_pb2.DT_QINT8_REF)
quint8_ref = DType(types_pb2.DT_QUINT8_REF)
qint16_ref = DType(types_pb2.DT_QINT16_REF)
quint16_ref = DType(types_pb2.DT_QUINT16_REF)
qint32_ref = DType(types_pb2.DT_QINT32_REF)
bfloat16_ref = DType(types_pb2.DT_BFLOAT16_REF)
_INTERN_TABLE = {
types_pb2.DT_HALF: float16,
types_pb2.DT_FLOAT: float32,
types_pb2.DT_DOUBLE: float64,
types_pb2.DT_INT32: int32,
types_pb2.DT_UINT8: uint8,
types_pb2.DT_UINT16: uint16,
types_pb2.DT_INT16: int16,
types_pb2.DT_INT8: int8,
types_pb2.DT_STRING: string,
types_pb2.DT_COMPLEX64: complex64,
types_pb2.DT_COMPLEX128: complex128,
types_pb2.DT_INT64: int64,
types_pb2.DT_BOOL: bool,
types_pb2.DT_QINT8: qint8,
types_pb2.DT_QUINT8: quint8,
types_pb2.DT_QINT16: qint16,
types_pb2.DT_QUINT16: quint16,
types_pb2.DT_QINT32: qint32,
types_pb2.DT_BFLOAT16: bfloat16,
types_pb2.DT_RESOURCE: resource,
types_pb2.DT_HALF_REF: float16_ref,
types_pb2.DT_FLOAT_REF: float32_ref,
types_pb2.DT_DOUBLE_REF: float64_ref,
types_pb2.DT_INT32_REF: int32_ref,
types_pb2.DT_UINT8_REF: uint8_ref,
types_pb2.DT_UINT16_REF: uint16_ref,
types_pb2.DT_INT16_REF: int16_ref,
types_pb2.DT_INT8_REF: int8_ref,
types_pb2.DT_STRING_REF: string_ref,
types_pb2.DT_COMPLEX64_REF: complex64_ref,
types_pb2.DT_COMPLEX128_REF: complex128_ref,
types_pb2.DT_INT64_REF: int64_ref,
types_pb2.DT_BOOL_REF: bool_ref,
types_pb2.DT_QINT8_REF: qint8_ref,
types_pb2.DT_QUINT8_REF: quint8_ref,
types_pb2.DT_QINT16_REF: qint16_ref,
types_pb2.DT_QUINT16_REF: quint16_ref,
types_pb2.DT_QINT32_REF: qint32_ref,
types_pb2.DT_BFLOAT16_REF: bfloat16_ref,
types_pb2.DT_RESOURCE_REF: resource_ref,
}
_TYPE_TO_STRING = {
types_pb2.DT_HALF: "float16",
types_pb2.DT_FLOAT: "float32",
types_pb2.DT_DOUBLE: "float64",
types_pb2.DT_INT32: "int32",
types_pb2.DT_UINT8: "uint8",
types_pb2.DT_UINT16: "uint16",
types_pb2.DT_INT16: "int16",
types_pb2.DT_INT8: "int8",
types_pb2.DT_STRING: "string",
types_pb2.DT_COMPLEX64: "complex64",
types_pb2.DT_COMPLEX128: "complex128",
types_pb2.DT_INT64: "int64",
types_pb2.DT_BOOL: "bool",
types_pb2.DT_QINT8: "qint8",
types_pb2.DT_QUINT8: "quint8",
types_pb2.DT_QINT16: "qint16",
types_pb2.DT_QUINT16: "quint16",
types_pb2.DT_QINT32: "qint32",
types_pb2.DT_BFLOAT16: "bfloat16",
types_pb2.DT_RESOURCE: "resource",
types_pb2.DT_HALF_REF: "float16_ref",
types_pb2.DT_FLOAT_REF: "float32_ref",
types_pb2.DT_DOUBLE_REF: "float64_ref",
types_pb2.DT_INT32_REF: "int32_ref",
types_pb2.DT_UINT8_REF: "uint8_ref",
types_pb2.DT_UINT16_REF: "uint16_ref",
types_pb2.DT_INT16_REF: "int16_ref",
types_pb2.DT_INT8_REF: "int8_ref",
types_pb2.DT_STRING_REF: "string_ref",
types_pb2.DT_COMPLEX64_REF: "complex64_ref",
types_pb2.DT_COMPLEX128_REF: "complex128_ref",
types_pb2.DT_INT64_REF: "int64_ref",
types_pb2.DT_BOOL_REF: "bool_ref",
types_pb2.DT_QINT8_REF: "qint8_ref",
types_pb2.DT_QUINT8_REF: "quint8_ref",
types_pb2.DT_QINT16_REF: "qint16_ref",
types_pb2.DT_QUINT16_REF: "quint16_ref",
types_pb2.DT_QINT32_REF: "qint32_ref",
types_pb2.DT_BFLOAT16_REF: "bfloat16_ref",
types_pb2.DT_RESOURCE_REF: "resource_ref",
}
_STRING_TO_TF = {value: _INTERN_TABLE[key]
for key, value in _TYPE_TO_STRING.items()}
_STRING_TO_TF["half"] = float16
_STRING_TO_TF["half_ref"] = float16_ref
_STRING_TO_TF["float"] = float32
_STRING_TO_TF["float_ref"] = float32_ref
_STRING_TO_TF["double"] = float64
_STRING_TO_TF["double_ref"] = float64_ref
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
_NP_TO_TF = frozenset([
(np.float16, float16),
(np.float32, float32),
(np.float64, float64),
(np.int32, int32),
(np.int64, int64),
(np.uint8, uint8),
(np.uint16, uint16),
(np.int16, int16),
(np.int8, int8),
(np.complex64, complex64),
(np.complex128, complex128),
(np.object, string),
(np.bool, bool),
(_np_qint8, qint8),
(_np_quint8, quint8),
(_np_qint16, qint16),
(_np_quint16, quint16),
(_np_qint32, qint32),
# NOTE(touts): Intentionally no way to feed a DT_BFLOAT16.
])
_TF_TO_NP = {
types_pb2.DT_HALF: np.float16,
types_pb2.DT_FLOAT: np.float32,
types_pb2.DT_DOUBLE: np.float64,
types_pb2.DT_INT32: np.int32,
types_pb2.DT_UINT8: np.uint8,
types_pb2.DT_UINT16: np.uint16,
types_pb2.DT_INT16: np.int16,
types_pb2.DT_INT8: np.int8,
# NOTE(touts): For strings we use np.object as it supports variable length
# strings.
types_pb2.DT_STRING: np.object,
types_pb2.DT_COMPLEX64: np.complex64,
types_pb2.DT_COMPLEX128: np.complex128,
types_pb2.DT_INT64: np.int64,
types_pb2.DT_BOOL: np.bool,
types_pb2.DT_QINT8: _np_qint8,
types_pb2.DT_QUINT8: _np_quint8,
types_pb2.DT_QINT16: _np_qint16,
types_pb2.DT_QUINT16: _np_quint16,
types_pb2.DT_QINT32: _np_qint32,
types_pb2.DT_BFLOAT16: np.uint16,
# Ref types
types_pb2.DT_HALF_REF: np.float16,
types_pb2.DT_FLOAT_REF: np.float32,
types_pb2.DT_DOUBLE_REF: np.float64,
types_pb2.DT_INT32_REF: np.int32,
types_pb2.DT_UINT8_REF: np.uint8,
types_pb2.DT_UINT16_REF: np.uint16,
types_pb2.DT_INT16_REF: np.int16,
types_pb2.DT_INT8_REF: np.int8,
types_pb2.DT_STRING_REF: np.object,
types_pb2.DT_COMPLEX64_REF: np.complex64,
types_pb2.DT_COMPLEX128_REF: np.complex128,
types_pb2.DT_INT64_REF: np.int64,
types_pb2.DT_BOOL_REF: np.bool,
types_pb2.DT_QINT8_REF: _np_qint8,
types_pb2.DT_QUINT8_REF: _np_quint8,
types_pb2.DT_QINT16_REF: _np_qint16,
types_pb2.DT_QUINT16_REF: _np_quint16,
types_pb2.DT_QINT32_REF: _np_qint32,
types_pb2.DT_BFLOAT16_REF: np.uint16,
}
QUANTIZED_DTYPES = frozenset(
[qint8, quint8, qint16, quint16, qint32, qint8_ref, quint8_ref, qint16_ref,
quint16_ref, qint32_ref])
def as_dtype(type_value):
"""Converts the given `type_value` to a `DType`.
Args:
type_value: A value that can be converted to a `tf.DType`
object. This may currently be a `tf.DType` object, a
[`DataType` enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto),
a string type name, or a `numpy.dtype`.
Returns:
A `DType` corresponding to `type_value`.
Raises:
TypeError: If `type_value` cannot be converted to a `DType`.
"""
if isinstance(type_value, DType):
return type_value
try:
return _INTERN_TABLE[type_value]
except KeyError:
pass
try:
return _STRING_TO_TF[type_value]
except KeyError:
pass
if isinstance(type_value, np.dtype):
  # The numpy dtype for strings is variable length. We cannot compare it
  # against a single constant (np.string does not exist), so to decide
  # whether a given dtype is a string type we must inspect its dtype.type
  # attribute instead.
if type_value.type == np.string_ or type_value.type == np.unicode_:
return string
for key, val in _NP_TO_TF:
try:
if key == type_value:
return val
except TypeError as e:
raise TypeError("Cannot convert {} to a dtype. {}".format(type_value, e))
raise TypeError(
"Cannot convert value %r to a TensorFlow DType." % type_value)
|
import numpy as np
from scipy.stats import rankdata as _sp_rankdata
def _rankdata(a, method="average"):
"""Assign ranks to data, dealing with ties appropriately.
Ranks begin at 1. The method argument controls how ranks are assigned
to equal values.
Parameters
----------
a : array_like
The array of values to be ranked. The array is first flattened.
    method : str, optional
        The method used to assign ranks to tied elements.
        The only method supported by this backport is 'max':
        the maximum of the ranks that would have been assigned
        to all the tied values is assigned to each value.
Returns
-------
ranks : ndarray
An array of length equal to the size of a, containing rank scores.
Note
----
We only backport the 'max' method
"""
if method != "max":
        raise NotImplementedError("only method='max' is supported by this backport")
unique_all, inverse = np.unique(a, return_inverse=True)
count = np.bincount(inverse, minlength=unique_all.size)
cum_count = count.cumsum()
rank = cum_count[inverse]
return rank
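# Illustrative check of the backported 'max' tie rule:
#   >>> _rankdata([0, 2, 2, 3], method="max")
#   array([1, 3, 3, 4])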
try:
_sp_rankdata([1.], 'max')
rankdata = _sp_rankdata
except TypeError as e:
rankdata = _rankdata
def _weighted_percentile(array, sample_weight, percentile=50):
"""Compute the weighted ``percentile`` of ``array`` with ``sample_weight``. """
sorted_idx = np.argsort(array)
# Find index of median prediction for each sample
weight_cdf = sample_weight[sorted_idx].cumsum()
percentile_idx = np.searchsorted(
weight_cdf, (percentile / 100.) * weight_cdf[-1])
return array[sorted_idx[percentile_idx]]
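# Illustrative call: with unit weights the result is the (lower-index)
# weighted median of the values.
#   >>> import numpy as np
#   >>> _weighted_percentile(np.array([1., 2., 3., 4.]), np.ones(4))
#   2.0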
|
import csv, json
from glob import glob
from os.path import basename, join
with open(join('us-data', 'codes.txt')) as f:
rows = list(csv.DictReader(f, dialect='excel-tab'))
codes = dict([(row['Postal Code'].lower(), row['State']) for row in rows])
with open(join('us-data', 'states.txt')) as f:
rows = list(csv.DictReader(f, dialect='excel-tab'))
states = dict([(row['Name'], row['State FIPS']) for row in rows])
with open(join('us-data', 'counties.txt')) as f:
counties = dict()
for row in csv.DictReader(f, dialect='excel-tab'):
key = row['State FIPS'], row['Name']
value = row['County FIPS'], row['Name']
counties[key] = value
# some key variations
if row['Name'].endswith(' County'):
counties[(row['State FIPS'], row['Name'][:-7])] = value
if row['Name'].endswith(' Parish'):
counties[(row['State FIPS'], row['Name'][:-7])] = value
if row['Name'].endswith(' Municipality'):
counties[(row['State FIPS'], row['Name'][:-13])] = value
# more key variations
for ((s, c), value) in list(counties.items()):
if c.startswith('St. '):
counties[(s, 'Saint '+c[4:])] = value
for ((s, c), value) in list(counties.items()):
counties[(s, c.lower())] = value
counties[(s, c.replace('-', ' '))] = value
counties[(s, c.replace('-', ' ').lower())] = value
for path in glob('sources/us/**/*.json'):
try:
with open(path) as f:
data = f.read()
info = json.loads(data)
except:
print path, ' is invalid json'
raise
if 'county' not in info.get('coverage', {}):
continue
if 'US Census' in info['coverage']:
continue
print path, '...'
prefix = '\n "coverage": {\n '
if prefix + '"' not in data:
        print path, 'does not contain the expected "coverage" block, skipping'
continue
state_name = codes[info['coverage']['state']]
state_fips = states[state_name]
county = info['coverage']['county']
# if type(county) is list or basename(path)[6:-5] != county.lower().replace(' ', '-'):
# continue
if type(county) is list:
county_names = [counties[(state_fips, c)] for c in county]
print info['coverage'], state_fips, state_name, county_names
continue
try:
if u'ñ' in county:
county_fips, county_name = counties[(state_fips, county.replace(u'ñ', 'n'))]
else:
county_fips, county_name = counties[(state_fips, county)]
    except Exception as inst:
        print "  error looking up county %r in %s: %s" % (county, state_name, inst)
        continue
geoid = state_fips + county_fips
census_dict = dict(geoid=geoid, name=county_name, state=state_name)
census_json = json.dumps(census_dict, sort_keys=True)
new_data = data.replace(prefix, '{0}"US Census": {1},\n '.format(prefix, census_json))
with open(path, 'w') as file:
file.write(new_data)
|
from __future__ import absolute_import, print_function, unicode_literals
import sys
if __name__ == "__main__":
from kolibri.utils.cli import main
main(args=sys.argv[1:])
|
"""
Specific overrides to the base prod settings to make development easier.
"""
from .aws import * # pylint: disable=wildcard-import, unused-wildcard-import
del DEFAULT_FILE_STORAGE
MEDIA_ROOT = "/edx/var/edxapp/uploads"
DEBUG = True
USE_I18N = True
TEMPLATE_DEBUG = DEBUG
import logging
for pkg_name in ['track.contexts', 'track.middleware', 'dd.dogapi']:
logging.getLogger(pkg_name).setLevel(logging.CRITICAL)
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
LMS_BASE = "localhost:8000"
FEATURES['PREVIEW_LMS_BASE'] = "preview." + LMS_BASE
STATICFILES_STORAGE = 'pipeline.storage.PipelineCachedStorage'
FEATURES['ALLOW_ALL_ADVANCED_COMPONENTS'] = True
CELERY_ALWAYS_EAGER = True
INSTALLED_APPS += ('debug_toolbar', 'debug_toolbar_mongo')
MIDDLEWARE_CLASSES += ('debug_toolbar.middleware.DebugToolbarMiddleware',)
INTERNAL_IPS = ('127.0.0.1',)
DEBUG_TOOLBAR_PANELS = (
'debug_toolbar.panels.versions.VersionsPanel',
'debug_toolbar.panels.timer.TimerPanel',
'debug_toolbar.panels.settings.SettingsPanel',
'debug_toolbar.panels.headers.HeadersPanel',
'debug_toolbar.panels.request.RequestPanel',
'debug_toolbar.panels.sql.SQLPanel',
'debug_toolbar.panels.signals.SignalsPanel',
'debug_toolbar.panels.logging.LoggingPanel',
'debug_toolbar.panels.profiling.ProfilingPanel',
)
DEBUG_TOOLBAR_CONFIG = {
'SHOW_TOOLBAR_CALLBACK': 'cms.envs.devstack.should_show_debug_toolbar'
}
def should_show_debug_toolbar(_):
return True # We always want the toolbar on devstack regardless of IP, auth, etc.
DEBUG_TOOLBAR_MONGO_STACKTRACES = False
FEATURES['MILESTONES_APP'] = True
FEATURES['ENTRANCE_EXAMS'] = True
FEATURES['LICENSING'] = True
XBLOCK_SETTINGS = {
"VideoDescriptor": {
"licensing_enabled": True
}
}
FEATURES['ENABLE_COURSEWARE_INDEX'] = True
FEATURES['ENABLE_LIBRARY_INDEX'] = True
SEARCH_ENGINE = "search.elastic.ElasticSearchEngine"
REQUIRE_DEBUG = DEBUG
try:
from .private import * # pylint: disable=import-error
except ImportError:
pass
MODULESTORE = convert_module_store_setting_if_needed(MODULESTORE)
SECRET_KEY = '85920908f28904ed733fe576320db18cabd7b6cd'
FEATURES['CERTIFICATES_HTML_VIEW'] = True
|
from toolz import frequencies, identity
big_data = range(1000)*1000
small_data = range(100)
def test_frequencies():
frequencies(big_data)
def test_frequencies_small():
for i in range(1000):
frequencies(small_data)
|
import pygtk
pygtk.require('2.0')
import gtk
import time
from ivy.std_api import *
import logging
class Base:
def __init__(self):
self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
self.window.connect("destroy", self.destroy)
self.entry = gtk.Entry()
self.entry.set_width_chars(120)
self.entry.connect("key-release-event", self.key_release_event)
self.entry.show()
self.window.add(self.entry)
self.window.show()
self.ivy_init()
        self.ticks = 0
        self.text = None  # set when the user confirms the entry with Return
def ontick(self):
if self.ticks == 5:
IvyStop()
elif self.ticks <= 2:
IvySendMsg("1 BAT " + self.text)
self.ticks = self.ticks + 1
def ivy_init(self):
logging.getLogger('Ivy').setLevel(logging.WARN)
IvyInit("Log Annotate",
"Annotate Ready Msg",
0
)
def key_release_event(self, widget, event, data=None):
if event.string == '\r': # Return
self.text = self.entry.get_text()
self.destroy(self, None)
if event.string == '\033': # Escape
self.destroy(self, None)
return False
def delete_event(self, widget, event, data=None):
return False
def destroy(self, widget, data=None):
gtk.main_quit()
def main(self):
IvyStart("")
gtk.main()
if self.text:
timerid = IvyTimerRepeatAfter(0, # number of time to be called
100, # delay in ms between calls
self.ontick # handler to call
)
IvyMainLoop()
if __name__ == "__main__":
base = Base()
base.main()
|
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible.plugins.action import ActionBase
from ansible.utils.boolean import boolean
class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        if task_vars is None:
            task_vars = dict()
        src = self._task.args.get('src', None)
dest = self._task.args.get('dest', None)
remote_src = boolean(self._task.args.get('remote_src', 'no'))
if src is None:
return dict(failed=True, msg="src is required")
elif remote_src:
# everything is remote, so we just execute the module
# without changing any of the module arguments
return self._execute_module(task_vars=task_vars)
if self._task._role is not None:
src = self._loader.path_dwim_relative(self._task._role._role_path, 'files', src)
else:
src = self._loader.path_dwim(src)
# create the remote tmp dir if needed, and put the source file there
if tmp is None or "-tmp-" not in tmp:
tmp = self._make_tmp_path()
tmp_src = self._connection._shell.join_path(tmp, os.path.basename(src))
self._connection.put_file(src, tmp_src)
if self._connection_info.become and self._connection_info.become_user != 'root':
if not self._connection_info.check_mode:
self._remote_chmod('a+r', tmp_src, tmp)
new_module_args = self._task.args.copy()
new_module_args.update(
dict(
src=tmp_src,
)
)
return self._execute_module('patch', module_args=new_module_args, task_vars=task_vars)
|
from __future__ import absolute_import
from django.conf.urls import url
from django.core.urlresolvers import reverse_lazy
from django.contrib.auth.views import password_change, password_change_done
from admin.common_auth import views
app_name = 'admin'
urlpatterns = [
url(r'^login/?$', views.LoginView.as_view(), name='login'),
url(r'^logout/$', views.logout_user, name='logout'),
url(r'^register/$', views.RegisterUser.as_view(), name='register'),
url(r'^password_change/$', password_change,
{'post_change_redirect': reverse_lazy('auth:password_change_done')},
name='password_change'),
url(r'^password_change/done/$', password_change_done,
{'template_name': 'password_change_done.html'},
name='password_change_done'),
url(r'^settings/desk/$', views.DeskUserCreateFormView.as_view(), name='desk'),
url(r'^settings/desk/update/$', views.DeskUserUpdateFormView.as_view(), name='desk_update'),
]
|
from django.utils.encoding import python_2_unicode_compatible
from ..admin import admin
from ..models import models
@python_2_unicode_compatible
class City(models.Model):
name = models.CharField(max_length=30)
point = models.PointField()
class Meta:
app_label = 'geoadmin'
required_db_features = ['gis_enabled']
def __str__(self):
return self.name
site = admin.AdminSite(name='admin_gis')
site.register(City, admin.OSMGeoAdmin)
|
from glob import glob
from distutils import log
import distutils.command.sdist as orig
import os
import sys
from setuptools.compat import PY3
from setuptools.utils import cs_path_exists
import pkg_resources
READMES = 'README', 'README.rst', 'README.txt'
_default_revctrl = list
def walk_revctrl(dirname=''):
"""Find all files under revision control"""
for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):
for item in ep.load()(dirname):
yield item
class sdist(orig.sdist):
"""Smart sdist that finds anything supported by revision control"""
user_options = [
('formats=', None,
"formats for source distribution (comma-separated list)"),
('keep-temp', 'k',
"keep the distribution tree around after creating " +
"archive file(s)"),
('dist-dir=', 'd',
"directory to put the source distribution archive(s) in "
"[default: dist]"),
]
negative_opt = {}
def run(self):
self.run_command('egg_info')
ei_cmd = self.get_finalized_command('egg_info')
self.filelist = ei_cmd.filelist
self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))
self.check_readme()
# Run sub commands
for cmd_name in self.get_sub_commands():
self.run_command(cmd_name)
# Call check_metadata only if no 'check' command
# (distutils <= 2.6)
import distutils.command
if 'check' not in distutils.command.__all__:
self.check_metadata()
self.make_distribution()
dist_files = getattr(self.distribution, 'dist_files', [])
for file in self.archive_files:
data = ('sdist', '', file)
if data not in dist_files:
dist_files.append(data)
def __read_template_hack(self):
# This grody hack closes the template file (MANIFEST.in) if an
# exception occurs during read_template.
# Doing so prevents an error when easy_install attempts to delete the
# file.
try:
orig.sdist.read_template(self)
except:
sys.exc_info()[2].tb_next.tb_frame.f_locals['template'].close()
raise
# Beginning with Python 2.7.2, 3.1.4, and 3.2.1, this leaky file handle
# has been fixed, so only override the method if we're using an earlier
# Python.
has_leaky_handle = (
sys.version_info < (2, 7, 2)
or (3, 0) <= sys.version_info < (3, 1, 4)
or (3, 2) <= sys.version_info < (3, 2, 1)
)
if has_leaky_handle:
read_template = __read_template_hack
def add_defaults(self):
standards = [READMES,
self.distribution.script_name]
for fn in standards:
if isinstance(fn, tuple):
alts = fn
got_it = 0
for fn in alts:
if cs_path_exists(fn):
got_it = 1
self.filelist.append(fn)
break
if not got_it:
self.warn("standard file not found: should have one of " +
', '.join(alts))
else:
if cs_path_exists(fn):
self.filelist.append(fn)
else:
self.warn("standard file '%s' not found" % fn)
optional = ['test/test*.py', 'setup.cfg']
for pattern in optional:
files = list(filter(cs_path_exists, glob(pattern)))
if files:
self.filelist.extend(files)
# getting python files
if self.distribution.has_pure_modules():
build_py = self.get_finalized_command('build_py')
self.filelist.extend(build_py.get_source_files())
# This functionality is incompatible with include_package_data, and
# will in fact create an infinite recursion if include_package_data
# is True. Use of include_package_data will imply that
# distutils-style automatic handling of package_data is disabled
if not self.distribution.include_package_data:
for _, src_dir, _, filenames in build_py.data_files:
self.filelist.extend([os.path.join(src_dir, filename)
for filename in filenames])
if self.distribution.has_ext_modules():
build_ext = self.get_finalized_command('build_ext')
self.filelist.extend(build_ext.get_source_files())
if self.distribution.has_c_libraries():
build_clib = self.get_finalized_command('build_clib')
self.filelist.extend(build_clib.get_source_files())
if self.distribution.has_scripts():
build_scripts = self.get_finalized_command('build_scripts')
self.filelist.extend(build_scripts.get_source_files())
def check_readme(self):
for f in READMES:
if os.path.exists(f):
return
else:
self.warn(
"standard file not found: should have one of " +
', '.join(READMES)
)
def make_release_tree(self, base_dir, files):
orig.sdist.make_release_tree(self, base_dir, files)
# Save any egg_info command line options used to create this sdist
dest = os.path.join(base_dir, 'setup.cfg')
if hasattr(os, 'link') and os.path.exists(dest):
# unlink and re-copy, since it might be hard-linked, and
# we don't want to change the source version
os.unlink(dest)
self.copy_file('setup.cfg', dest)
self.get_finalized_command('egg_info').save_version_info(dest)
def _manifest_is_not_generated(self):
# check for special comment used in 2.7.1 and higher
if not os.path.isfile(self.manifest):
return False
fp = open(self.manifest, 'rbU')
try:
first_line = fp.readline()
finally:
fp.close()
return (first_line !=
'# file GENERATED by distutils, do NOT edit\n'.encode())
def read_manifest(self):
"""Read the manifest file (named by 'self.manifest') and use it to
fill in 'self.filelist', the list of files to include in the source
distribution.
"""
log.info("reading manifest file '%s'", self.manifest)
manifest = open(self.manifest, 'rbU')
for line in manifest:
# The manifest must contain UTF-8. See #303.
if PY3:
try:
line = line.decode('UTF-8')
except UnicodeDecodeError:
log.warn("%r not UTF-8 decodable -- skipping" % line)
continue
# ignore comments and blank lines
line = line.strip()
if line.startswith('#') or not line:
continue
self.filelist.append(line)
manifest.close()
|
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
_sym_db = _symbol_database.Default()
from peer import proposal_pb2 as peer_dot_proposal__pb2
from peer import proposal_response_pb2 as peer_dot_proposal__response__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='peer/peer.proto',
package='protos',
syntax='proto3',
serialized_pb=_b('\n\x0fpeer/peer.proto\x12\x06protos\x1a\x13peer/proposal.proto\x1a\x1cpeer/proposal_response.proto\"\x16\n\x06PeerID\x12\x0c\n\x04name\x18\x01 \x01(\t\";\n\x0cPeerEndpoint\x12\x1a\n\x02id\x18\x01 \x01(\x0b\x32\x0e.protos.PeerID\x12\x0f\n\x07\x61\x64\x64ress\x18\x02 \x01(\t2Q\n\x08\x45ndorser\x12\x45\n\x0fProcessProposal\x12\x16.protos.SignedProposal\x1a\x18.protos.ProposalResponse\"\x00\x42O\n\"org.hyperledger.fabric.protos.peerZ)github.com/hyperledger/fabric/protos/peerb\x06proto3')
,
dependencies=[peer_dot_proposal__pb2.DESCRIPTOR,peer_dot_proposal__response__pb2.DESCRIPTOR,])
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_PEERID = _descriptor.Descriptor(
name='PeerID',
full_name='protos.PeerID',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='protos.PeerID.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=78,
serialized_end=100,
)
_PEERENDPOINT = _descriptor.Descriptor(
name='PeerEndpoint',
full_name='protos.PeerEndpoint',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='protos.PeerEndpoint.id', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='address', full_name='protos.PeerEndpoint.address', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=102,
serialized_end=161,
)
_PEERENDPOINT.fields_by_name['id'].message_type = _PEERID
DESCRIPTOR.message_types_by_name['PeerID'] = _PEERID
DESCRIPTOR.message_types_by_name['PeerEndpoint'] = _PEERENDPOINT
PeerID = _reflection.GeneratedProtocolMessageType('PeerID', (_message.Message,), dict(
DESCRIPTOR = _PEERID,
__module__ = 'peer.peer_pb2'
# @@protoc_insertion_point(class_scope:protos.PeerID)
))
_sym_db.RegisterMessage(PeerID)
PeerEndpoint = _reflection.GeneratedProtocolMessageType('PeerEndpoint', (_message.Message,), dict(
DESCRIPTOR = _PEERENDPOINT,
__module__ = 'peer.peer_pb2'
# @@protoc_insertion_point(class_scope:protos.PeerEndpoint)
))
_sym_db.RegisterMessage(PeerEndpoint)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\n\"org.hyperledger.fabric.protos.peerZ)github.com/hyperledger/fabric/protos/peer'))
try:
# THESE ELEMENTS WILL BE DEPRECATED.
# Please use the generated *_pb2_grpc.py files instead.
import grpc
from grpc.framework.common import cardinality
from grpc.framework.interfaces.face import utilities as face_utilities
from grpc.beta import implementations as beta_implementations
from grpc.beta import interfaces as beta_interfaces
class EndorserStub(object):
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.ProcessProposal = channel.unary_unary(
'/protos.Endorser/ProcessProposal',
request_serializer=peer_dot_proposal__pb2.SignedProposal.SerializeToString,
response_deserializer=peer_dot_proposal__response__pb2.ProposalResponse.FromString,
)
class EndorserServicer(object):
def ProcessProposal(self, request, context):
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_EndorserServicer_to_server(servicer, server):
rpc_method_handlers = {
'ProcessProposal': grpc.unary_unary_rpc_method_handler(
servicer.ProcessProposal,
request_deserializer=peer_dot_proposal__pb2.SignedProposal.FromString,
response_serializer=peer_dot_proposal__response__pb2.ProposalResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'protos.Endorser', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
class BetaEndorserServicer(object):
"""The Beta API is deprecated for 0.15.0 and later.
It is recommended to use the GA API (classes and functions in this
file not marked beta) for all further purposes. This class was generated
only to ease transition from grpcio<0.15.0 to grpcio>=0.15.0."""
def ProcessProposal(self, request, context):
context.code(beta_interfaces.StatusCode.UNIMPLEMENTED)
class BetaEndorserStub(object):
"""The Beta API is deprecated for 0.15.0 and later.
It is recommended to use the GA API (classes and functions in this
file not marked beta) for all further purposes. This class was generated
only to ease transition from grpcio<0.15.0 to grpcio>=0.15.0."""
def ProcessProposal(self, request, timeout, metadata=None, with_call=False, protocol_options=None):
raise NotImplementedError()
ProcessProposal.future = None
def beta_create_Endorser_server(servicer, pool=None, pool_size=None, default_timeout=None, maximum_timeout=None):
"""The Beta API is deprecated for 0.15.0 and later.
It is recommended to use the GA API (classes and functions in this
file not marked beta) for all further purposes. This function was
generated only to ease transition from grpcio<0.15.0 to grpcio>=0.15.0"""
request_deserializers = {
('protos.Endorser', 'ProcessProposal'): peer_dot_proposal__pb2.SignedProposal.FromString,
}
response_serializers = {
('protos.Endorser', 'ProcessProposal'): peer_dot_proposal__response__pb2.ProposalResponse.SerializeToString,
}
method_implementations = {
('protos.Endorser', 'ProcessProposal'): face_utilities.unary_unary_inline(servicer.ProcessProposal),
}
server_options = beta_implementations.server_options(request_deserializers=request_deserializers, response_serializers=response_serializers, thread_pool=pool, thread_pool_size=pool_size, default_timeout=default_timeout, maximum_timeout=maximum_timeout)
return beta_implementations.server(method_implementations, options=server_options)
def beta_create_Endorser_stub(channel, host=None, metadata_transformer=None, pool=None, pool_size=None):
"""The Beta API is deprecated for 0.15.0 and later.
It is recommended to use the GA API (classes and functions in this
file not marked beta) for all further purposes. This function was
generated only to ease transition from grpcio<0.15.0 to grpcio>=0.15.0"""
request_serializers = {
('protos.Endorser', 'ProcessProposal'): peer_dot_proposal__pb2.SignedProposal.SerializeToString,
}
response_deserializers = {
('protos.Endorser', 'ProcessProposal'): peer_dot_proposal__response__pb2.ProposalResponse.FromString,
}
cardinalities = {
'ProcessProposal': cardinality.Cardinality.UNARY_UNARY,
}
stub_options = beta_implementations.stub_options(host=host, metadata_transformer=metadata_transformer, request_serializers=request_serializers, response_deserializers=response_deserializers, thread_pool=pool, thread_pool_size=pool_size)
return beta_implementations.dynamic_stub(channel, 'protos.Endorser', cardinalities, options=stub_options)
except ImportError:
pass
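# Usage sketch (assumptions: grpcio is installed and a Fabric endorser is
# listening at the hypothetical address below):
#
#   channel = grpc.insecure_channel('localhost:7051')
#   stub = EndorserStub(channel)
#   signed_proposal = peer_dot_proposal__pb2.SignedProposal()
#   response = stub.ProcessProposal(signed_proposal)  # a protos.ProposalResponse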
|
MONGO_HOST = 'localhost' # server to connect to
MONGO_PORT = 27017 # port MongoD is running on
MONGO_DATABASE = 'crits' # database name to connect to
MONGO_SSL = False # whether MongoD has SSL enabled
MONGO_USER = '' # mongo user with "readWrite" role in the database
MONGO_PASSWORD = '' # password for the mongo user
SECRET_KEY = ''
GRIDFS = 'gridfs'  # sentinel; in a full CRITS install these two constants
S3 = 's3'          # are defined by the core settings (values here are placeholders)
FILE_DB = GRIDFS   # Set to S3 (NO QUOTES) to use S3. You'll also want to set
                   # the stuff below and create your buckets.
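# Sketch of how a client connection could be built from the values above
# (illustration only -- CRITS constructs its own connection internally;
# pymongo of the same era assumed):
#
#   from pymongo import MongoClient
#   client = MongoClient(MONGO_HOST, MONGO_PORT, ssl=MONGO_SSL)
#   db = client[MONGO_DATABASE]
#   if MONGO_USER:
#       db.authenticate(MONGO_USER, MONGO_PASSWORD)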
|
import os
from nupic.frameworks.opf.expdescriptionhelpers import importBaseDescription
config = \
{
'dataSource': 'file://' + os.path.join(os.path.dirname(__file__),
'../datasets/category_SP_0.csv'),
'modelParams': { 'clParams': { 'clVerbosity': 1},
'inferenceType': 'NontemporalClassification',
'sensorParams': { 'encoders': { }, 'verbosity': 1},
'spParams': { 'spVerbosity': 1 },
'tpEnable': False,
'tpParams': { }}}
mod = importBaseDescription('../base_category/description.py', config)
locals().update(mod.__dict__)
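# importBaseDescription() loads the shared base experiment description with
# the overrides from ``config`` applied; updating locals() then promotes the
# resulting symbols into this module so the OPF framework can discover them.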
|
from django.conf.urls import url
from django.contrib.auth.decorators import login_required as login
from . import views
urlpatterns = [
url(r'^$', login(views.OSFStatisticsListView.as_view()), name='stats_list'),
url(r'^update/$', login(views.update_metrics), name='update'),
url(r'^download/$', login(views.download_csv), name='download'),
]
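# A rough sketch of the views module these routes assume (hypothetical --
# the actual implementations live in this app's views.py):
#
#   from django.views.generic import ListView
#
#   class OSFStatisticsListView(ListView):
#       ...  # lists the collected statistics
#
#   def update_metrics(request):
#       ...  # recompute the metrics, then redirect
#
#   def download_csv(request):
#       ...  # stream the statistics as a CSV attachment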
|
from __future__ import unicode_literals
from six.moves.urllib.parse import parse_qs
import boto
import boto.sns  # explicit submodule imports, so the boto.sns/boto.sqs
import boto.sqs  # connect_to_region calls below don't rely on transitive imports
from freezegun import freeze_time
import httpretty
import sure # noqa
from moto import mock_sns, mock_sqs
@mock_sqs
@mock_sns
def test_publish_to_sqs():
conn = boto.connect_sns()
conn.create_topic("some-topic")
topics_json = conn.get_all_topics()
topic_arn = topics_json["ListTopicsResponse"]["ListTopicsResult"]["Topics"][0]['TopicArn']
sqs_conn = boto.connect_sqs()
sqs_conn.create_queue("test-queue")
conn.subscribe(topic_arn, "sqs", "arn:aws:sqs:us-east-1:123456789012:test-queue")
conn.publish(topic=topic_arn, message="my message")
queue = sqs_conn.get_queue("test-queue")
message = queue.read(1)
message.get_body().should.equal('my message')
@mock_sqs
@mock_sns
def test_publish_to_sqs_in_different_region():
conn = boto.sns.connect_to_region("us-west-1")
conn.create_topic("some-topic")
topics_json = conn.get_all_topics()
topic_arn = topics_json["ListTopicsResponse"]["ListTopicsResult"]["Topics"][0]['TopicArn']
sqs_conn = boto.sqs.connect_to_region("us-west-2")
sqs_conn.create_queue("test-queue")
conn.subscribe(topic_arn, "sqs", "arn:aws:sqs:us-west-2:123456789012:test-queue")
conn.publish(topic=topic_arn, message="my message")
queue = sqs_conn.get_queue("test-queue")
message = queue.read(1)
message.get_body().should.equal('my message')
@freeze_time("2013-01-01")
@mock_sns
def test_publish_to_http():
httpretty.HTTPretty.register_uri(
method="POST",
uri="http://example.com/foobar",
)
conn = boto.connect_sns()
conn.create_topic("some-topic")
topics_json = conn.get_all_topics()
topic_arn = topics_json["ListTopicsResponse"]["ListTopicsResult"]["Topics"][0]['TopicArn']
conn.subscribe(topic_arn, "http", "http://example.com/foobar")
response = conn.publish(topic=topic_arn, message="my message", subject="my subject")
message_id = response['PublishResponse']['PublishResult']['MessageId']
last_request = httpretty.last_request()
last_request.method.should.equal("POST")
parse_qs(last_request.body.decode('utf-8')).should.equal({
"Type": ["Notification"],
"MessageId": [message_id],
"TopicArn": ["arn:aws:sns:{0}:123456789012:some-topic".format(conn.region.name)],
"Subject": ["my subject"],
"Message": ["my message"],
"Timestamp": ["2013-01-01T00:00:00.000Z"],
"SignatureVersion": ["1"],
"Signature": ["EXAMPLElDMXvB8r9R83tGoNn0ecwd5UjllzsvSvbItzfaMpN2nk5HVSw7XnOn/49IkxDKz8YrlH2qJXj2iZB0Zo2O71c4qQk1fMUDi3LGpij7RCW7AW9vYYsSqIKRnFS94ilu7NFhUzLiieYr4BKHpdTmdD6c0esKEYBpabxDSc="],
"SigningCertURL": ["https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem"],
"UnsubscribeURL": ["https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:123456789012:some-topic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55"],
})
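# The @mock_sns/@mock_sqs decorators intercept boto's HTTP layer (via
# httpretty in this generation of moto), so the tests above run without real
# AWS credentials or network access.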
|
from lettuce import step
def assert_in(condition, possibilities):
assert condition in possibilities, \
u"%r は次のリストに入っている可能性がある: %r" % (
condition, possibilities
)
@step(u'入力値を (.*) とし')
def dado_que_tenho(step, group):
possibilities = [
u'何か',
u'その他',
u'データ'
]
assert_in(group, possibilities)
@step(u'処理 (.*) を使って')
def faco_algo_com(step, group):
possibilities = [
u'これ',
u'ここ',
u'動く'
]
assert_in(group, possibilities)
@step(u'表示は (.*) である')
def fico_feliz_em_ver(step, group):
possibilities = [
u'機能',
u'同じ',
u'unicodeで!'
]
assert_in(group, possibilities)
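# These step patterns are matched verbatim against Japanese feature text; a
# scenario satisfying all three steps would look roughly like this
# (hypothetical sketch):
#
#   シナリオ: unicode のステップ
#     入力値を 何か とし
#     処理 これ を使って
#     表示は unicodeで! である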
|
"""
celery.utils.patch
~~~~~~~~~~~~~~~~~~
Monkey-patch to ensure loggers are process aware.
:copyright: (c) 2009 - 2011 by Ask Solem.
:license: BSD, see LICENSE for more details.
"""
from __future__ import absolute_import
import logging
_process_aware = False
def _patch_logger_class():
"""Make sure process name is recorded when loggers are used."""
try:
from multiprocessing.process import current_process
except ImportError:
current_process = None # noqa
logging._acquireLock()
try:
OldLoggerClass = logging.getLoggerClass()
if not getattr(OldLoggerClass, '_process_aware', False):
class ProcessAwareLogger(OldLoggerClass):
_process_aware = True
def makeRecord(self, *args, **kwds):
record = OldLoggerClass.makeRecord(self, *args, **kwds)
if current_process:
record.processName = current_process()._name
else:
record.processName = ""
return record
logging.setLoggerClass(ProcessAwareLogger)
finally:
logging._releaseLock()
def ensure_process_aware_logger():
global _process_aware
if not _process_aware:
_patch_logger_class()
_process_aware = True
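# Minimal usage sketch: call the idempotent helper once, early in worker
# start-up and before any loggers are created:
#
#   from celery.utils.patch import ensure_process_aware_logger
#   ensure_process_aware_logger()
#   logger = logging.getLogger(__name__)   # records now carry processName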
|
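# OpenERP/Odoo addon manifest -- the dict below is typically the entire
# contents of the module's __openerp__.py.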
{'name': 'Credit control dunning fees',
'version': '0.1.0',
'author': "Camptocamp,Odoo Community Association (OCA)",
'maintainer': 'Camptocamp',
'category': 'Accounting',
'complexity': 'normal',
'depends': ['account_credit_control'],
'website': 'http://www.camptocamp.com',
'data': ['view/policy_view.xml',
'view/line_view.xml',
'report/report_credit_control_summary.xml',
'security/ir.model.access.csv',
],
'demo': [],
'test': [],
'installable': True,
'auto_install': False,
'license': 'AGPL-3',
'application': False}
|
import unittest
from test.test_support import TestSkipped, run_unittest
import os, struct
try:
import fcntl, termios
except ImportError:
raise TestSkipped("No fcntl or termios module")
if not hasattr(termios,'TIOCGPGRP'):
raise TestSkipped("termios module doesn't have TIOCGPGRP")
try:
tty = open("/dev/tty", "r")
tty.close()
except IOError:
raise TestSkipped("Unable to open /dev/tty")
class IoctlTests(unittest.TestCase):
def test_ioctl(self):
# If this process has been put into the background, TIOCGPGRP returns
# the session ID instead of the process group id.
ids = (os.getpgrp(), os.getsid(0))
tty = open("/dev/tty", "r")
r = fcntl.ioctl(tty, termios.TIOCGPGRP, " ")
rpgrp = struct.unpack("i", r)[0]
        self.assertTrue(rpgrp in ids, "%s not in %s" % (rpgrp, ids))
def test_ioctl_mutate(self):
import array
buf = array.array('i', [0])
ids = (os.getpgrp(), os.getsid(0))
tty = open("/dev/tty", "r")
r = fcntl.ioctl(tty, termios.TIOCGPGRP, buf, 1)
rpgrp = buf[0]
        self.assertEqual(r, 0)
        self.assertTrue(rpgrp in ids, "%s not in %s" % (rpgrp, ids))
def test_main():
run_unittest(IoctlTests)
if __name__ == "__main__":
test_main()
|
import pytest
from tests.support.asserts import assert_error, assert_files_uploaded, assert_success
from tests.support.inline import inline
from . import map_files_to_multiline_text
def element_send_keys(session, element, text):
return session.transport.send(
"POST", "/session/{session_id}/element/{element_id}/value".format(
session_id=session.session_id,
element_id=element.id),
{"text": text})
def test_empty_text(session):
session.url = inline("<input type=file>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, "")
assert_error(response, "invalid argument")
def test_multiple_files(session, create_files):
files = create_files(["foo", "bar"])
session.url = inline("<input type=file multiple>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element,
map_files_to_multiline_text(files))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_multiple_files_last_path_not_found(session, create_files):
files = create_files(["foo", "bar"])
files.append("foo bar")
session.url = inline("<input type=file multiple>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element,
map_files_to_multiline_text(files))
assert_error(response, "invalid argument")
assert_files_uploaded(session, element, [])
def test_multiple_files_without_multiple_attribute(session, create_files):
files = create_files(["foo", "bar"])
session.url = inline("<input type=file>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element,
map_files_to_multiline_text(files))
assert_error(response, "invalid argument")
assert_files_uploaded(session, element, [])
def test_multiple_files_send_twice(session, create_files):
first_files = create_files(["foo", "bar"])
second_files = create_files(["john", "doe"])
session.url = inline("<input type=file multiple>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element,
map_files_to_multiline_text(first_files))
assert_success(response)
response = element_send_keys(session, element,
map_files_to_multiline_text(second_files))
assert_success(response)
assert_files_uploaded(session, element, first_files + second_files)
def test_multiple_files_reset_with_element_clear(session, create_files):
first_files = create_files(["foo", "bar"])
second_files = create_files(["john", "doe"])
session.url = inline("<input type=file multiple>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element,
map_files_to_multiline_text(first_files))
assert_success(response)
# Reset already uploaded files
element.clear()
assert_files_uploaded(session, element, [])
response = element_send_keys(session, element,
map_files_to_multiline_text(second_files))
assert_success(response)
assert_files_uploaded(session, element, second_files)
def test_single_file(session, create_files):
files = create_files(["foo"])
session.url = inline("<input type=file>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_single_file_replaces_without_multiple_attribute(session, create_files):
files = create_files(["foo", "bar"])
session.url = inline("<input type=file>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
response = element_send_keys(session, element, str(files[1]))
assert_success(response)
assert_files_uploaded(session, element, [files[1]])
def test_single_file_appends_with_multiple_attribute(session, create_files):
files = create_files(["foo", "bar"])
session.url = inline("<input type=file multiple>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
response = element_send_keys(session, element, str(files[1]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_transparent(session, create_files):
files = create_files(["foo"])
session.url = inline("""<input type=file style="opacity: 0">""")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_obscured(session, create_files):
files = create_files(["foo"])
session.url = inline("""
<style>
div {
position: absolute;
width: 100vh;
height: 100vh;
background: blue;
top: 0;
left: 0;
}
</style>
<input type=file>
<div></div>
""")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_outside_viewport(session, create_files):
files = create_files(["foo"])
session.url = inline("""<input type=file style="margin-left: -100vh">""")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_hidden(session, create_files):
files = create_files(["foo"])
session.url = inline("<input type=file hidden>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
def test_display_none(session, create_files):
files = create_files(["foo"])
session.url = inline("""<input type=file style="display: none">""")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_success(response)
assert_files_uploaded(session, element, files)
@pytest.mark.capabilities({"strictFileInteractability": True})
def test_strict_hidden(session, create_files):
files = create_files(["foo"])
session.url = inline("<input type=file hidden>")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_error(response, "element not interactable")
@pytest.mark.capabilities({"strictFileInteractability": True})
def test_strict_display_none(session, create_files):
files = create_files(["foo"])
session.url = inline("""<input type=file style="display: none">""")
element = session.find.css("input", all=False)
response = element_send_keys(session, element, str(files[0]))
assert_error(response, "element not interactable")
|
from ..gobject import GParamFlags
from .gibaseinfo import GIBaseInfo
from .gitypeinfo import GITypeInfo, GIInfoType
from .giarginfo import GITransfer
from .._utils import find_library, wrap_class
_gir = find_library("girepository-1.0")
@GIBaseInfo._register(GIInfoType.PROPERTY)
class GIPropertyInfo(GIBaseInfo):
def _get_repr(self):
values = super(GIPropertyInfo, self)._get_repr()
values["flags"] = repr(self.flags)
values["type"] = repr(self.get_type())
values["ownership_transfer"] = repr(self.ownership_transfer)
return values
_methods = [
("get_flags", GParamFlags, [GIPropertyInfo]),
("get_type", GITypeInfo, [GIPropertyInfo], True),
("get_ownership_transfer", GITransfer, [GIPropertyInfo]),
]
wrap_class(_gir, GIPropertyInfo, GIPropertyInfo,
"g_property_info_", _methods)
__all__ = ["GIPropertyInfo"]
|
"""A non-blocking, single-threaded HTTP server.
Typical applications have little direct interaction with the `HTTPServer`
class except to start a server at the beginning of the process
(and even that is often done indirectly via `tornado.web.Application.listen`).
This module also defines the `HTTPRequest` class which is exposed via
`tornado.web.RequestHandler.request`.
"""
from __future__ import absolute_import, division, print_function, with_statement
import socket
import ssl
import time
import copy
from tornado.escape import native_str, parse_qs_bytes
from tornado import httputil
from tornado import iostream
from tornado.log import gen_log
from tornado import netutil
from tornado.tcpserver import TCPServer
from tornado import stack_context
from tornado.util import bytes_type
try:
import Cookie # py2
except ImportError:
import http.cookies as Cookie # py3
class HTTPServer(TCPServer):
r"""A non-blocking, single-threaded HTTP server.
A server is defined by a request callback that takes an HTTPRequest
instance as an argument and writes a valid HTTP response with
`HTTPRequest.write`. `HTTPRequest.finish` finishes the request (but does
not necessarily close the connection in the case of HTTP/1.1 keep-alive
requests). A simple example server that echoes back the URI you
requested::
import tornado.httpserver
import tornado.ioloop
def handle_request(request):
message = "You requested %s\n" % request.uri
request.write("HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (
len(message), message))
request.finish()
http_server = tornado.httpserver.HTTPServer(handle_request)
http_server.listen(8888)
tornado.ioloop.IOLoop.instance().start()
`HTTPServer` is a very basic connection handler. It parses the request
headers and body, but the request callback is responsible for producing
the response exactly as it will appear on the wire. This affords
maximum flexibility for applications to implement whatever parts
of HTTP responses are required.
`HTTPServer` supports keep-alive connections by default
(automatically for HTTP/1.1, or for HTTP/1.0 when the client
requests ``Connection: keep-alive``). This means that the request
callback must generate a properly-framed response, using either
the ``Content-Length`` header or ``Transfer-Encoding: chunked``.
Applications that are unable to frame their responses properly
should instead return a ``Connection: close`` header in each
response and pass ``no_keep_alive=True`` to the `HTTPServer`
constructor.
If ``xheaders`` is ``True``, we support the
``X-Real-Ip``/``X-Forwarded-For`` and
``X-Scheme``/``X-Forwarded-Proto`` headers, which override the
remote IP and URI scheme/protocol for all requests. These headers
are useful when running Tornado behind a reverse proxy or load
balancer. The ``protocol`` argument can also be set to ``https``
if Tornado is run behind an SSL-decoding proxy that does not set one of
the supported ``xheaders``.
To make this server serve SSL traffic, send the ``ssl_options`` dictionary
argument with the arguments required for the `ssl.wrap_socket` method,
including ``certfile`` and ``keyfile``. (In Python 3.2+ you can pass
an `ssl.SSLContext` object instead of a dict)::
        HTTPServer(application, ssl_options={
"certfile": os.path.join(data_dir, "mydomain.crt"),
"keyfile": os.path.join(data_dir, "mydomain.key"),
})
`HTTPServer` initialization follows one of three patterns (the
initialization methods are defined on `tornado.tcpserver.TCPServer`):
1. `~tornado.tcpserver.TCPServer.listen`: simple single-process::
server = HTTPServer(app)
server.listen(8888)
IOLoop.instance().start()
In many cases, `tornado.web.Application.listen` can be used to avoid
the need to explicitly create the `HTTPServer`.
2. `~tornado.tcpserver.TCPServer.bind`/`~tornado.tcpserver.TCPServer.start`:
simple multi-process::
server = HTTPServer(app)
server.bind(8888)
server.start(0) # Forks multiple sub-processes
IOLoop.instance().start()
When using this interface, an `.IOLoop` must *not* be passed
to the `HTTPServer` constructor. `~.TCPServer.start` will always start
the server on the default singleton `.IOLoop`.
3. `~tornado.tcpserver.TCPServer.add_sockets`: advanced multi-process::
sockets = tornado.netutil.bind_sockets(8888)
tornado.process.fork_processes(0)
server = HTTPServer(app)
server.add_sockets(sockets)
IOLoop.instance().start()
The `~.TCPServer.add_sockets` interface is more complicated,
but it can be used with `tornado.process.fork_processes` to
give you more flexibility in when the fork happens.
`~.TCPServer.add_sockets` can also be used in single-process
servers if you want to create your listening sockets in some
way other than `tornado.netutil.bind_sockets`.
"""
def __init__(self, request_callback, no_keep_alive=False, io_loop=None,
xheaders=False, ssl_options=None, protocol=None, **kwargs):
self.request_callback = request_callback
self.no_keep_alive = no_keep_alive
self.xheaders = xheaders
self.protocol = protocol
TCPServer.__init__(self, io_loop=io_loop, ssl_options=ssl_options,
**kwargs)
def handle_stream(self, stream, address):
HTTPConnection(stream, address, self.request_callback,
self.no_keep_alive, self.xheaders, self.protocol)
class _BadRequestException(Exception):
"""Exception class for malformed HTTP requests."""
pass
class HTTPConnection(object):
"""Handles a connection to an HTTP client, executing HTTP requests.
We parse HTTP headers and bodies, and execute the request callback
    until the HTTP connection is closed.
"""
def __init__(self, stream, address, request_callback, no_keep_alive=False,
xheaders=False, protocol=None):
self.stream = stream
self.address = address
# Save the socket's address family now so we know how to
# interpret self.address even after the stream is closed
# and its socket attribute replaced with None.
self.address_family = stream.socket.family
self.request_callback = request_callback
self.no_keep_alive = no_keep_alive
self.xheaders = xheaders
self.protocol = protocol
self._clear_request_state()
# Save stack context here, outside of any request. This keeps
# contexts from one request from leaking into the next.
self._header_callback = stack_context.wrap(self._on_headers)
self.stream.set_close_callback(self._on_connection_close)
self.stream.read_until(b"\r\n\r\n", self._header_callback)
def _clear_request_state(self):
"""Clears the per-request state.
This is run in between requests to allow the previous handler
to be garbage collected (and prevent spurious close callbacks),
and when the connection is closed (to break up cycles and
facilitate garbage collection in cpython).
"""
self._request = None
self._request_finished = False
self._write_callback = None
self._close_callback = None
def set_close_callback(self, callback):
"""Sets a callback that will be run when the connection is closed.
Use this instead of accessing
`HTTPConnection.stream.set_close_callback
<.BaseIOStream.set_close_callback>` directly (which was the
recommended approach prior to Tornado 3.0).
"""
self._close_callback = stack_context.wrap(callback)
def _on_connection_close(self):
if self._close_callback is not None:
callback = self._close_callback
self._close_callback = None
callback()
# Delete any unfinished callbacks to break up reference cycles.
self._header_callback = None
self._clear_request_state()
def close(self):
self.stream.close()
# Remove this reference to self, which would otherwise cause a
# cycle and delay garbage collection of this connection.
self._header_callback = None
self._clear_request_state()
def write(self, chunk, callback=None):
"""Writes a chunk of output to the stream."""
if not self.stream.closed():
self._write_callback = stack_context.wrap(callback)
self.stream.write(chunk, self._on_write_complete)
def finish(self):
"""Finishes the request."""
self._request_finished = True
# No more data is coming, so instruct TCP to send any remaining
# data immediately instead of waiting for a full packet or ack.
self.stream.set_nodelay(True)
if not self.stream.writing():
self._finish_request()
def _on_write_complete(self):
if self._write_callback is not None:
callback = self._write_callback
self._write_callback = None
callback()
# _on_write_complete is enqueued on the IOLoop whenever the
# IOStream's write buffer becomes empty, but it's possible for
# another callback that runs on the IOLoop before it to
# simultaneously write more data and finish the request. If
# there is still data in the IOStream, a future
# _on_write_complete will be responsible for calling
# _finish_request.
if self._request_finished and not self.stream.writing():
self._finish_request()
def _finish_request(self):
if self.no_keep_alive or self._request is None:
disconnect = True
else:
connection_header = self._request.headers.get("Connection")
if connection_header is not None:
connection_header = connection_header.lower()
if self._request.supports_http_1_1():
disconnect = connection_header == "close"
elif ("Content-Length" in self._request.headers
or self._request.method in ("HEAD", "GET")):
disconnect = connection_header != "keep-alive"
else:
disconnect = True
self._clear_request_state()
if disconnect:
self.close()
return
try:
# Use a try/except instead of checking stream.closed()
# directly, because in some cases the stream doesn't discover
# that it's closed until you try to read from it.
self.stream.read_until(b"\r\n\r\n", self._header_callback)
# Turn Nagle's algorithm back on, leaving the stream in its
# default state for the next request.
self.stream.set_nodelay(False)
except iostream.StreamClosedError:
self.close()
def _on_headers(self, data):
try:
data = native_str(data.decode('latin1'))
eol = data.find("\r\n")
start_line = data[:eol]
try:
method, uri, version = start_line.split(" ")
except ValueError:
raise _BadRequestException("Malformed HTTP request line")
if not version.startswith("HTTP/"):
raise _BadRequestException("Malformed HTTP version in HTTP Request-Line")
try:
headers = httputil.HTTPHeaders.parse(data[eol:])
except ValueError:
# Probably from split() if there was no ':' in the line
raise _BadRequestException("Malformed HTTP headers")
# HTTPRequest wants an IP, not a full socket address
if self.address_family in (socket.AF_INET, socket.AF_INET6):
remote_ip = self.address[0]
else:
# Unix (or other) socket; fake the remote address
remote_ip = '0.0.0.0'
self._request = HTTPRequest(
connection=self, method=method, uri=uri, version=version,
headers=headers, remote_ip=remote_ip, protocol=self.protocol)
content_length = headers.get("Content-Length")
if content_length:
content_length = int(content_length)
if content_length > self.stream.max_buffer_size:
raise _BadRequestException("Content-Length too long")
if headers.get("Expect") == "100-continue":
self.stream.write(b"HTTP/1.1 100 (Continue)\r\n\r\n")
self.stream.read_bytes(content_length, self._on_request_body)
return
self.request_callback(self._request)
except _BadRequestException as e:
gen_log.info("Malformed HTTP request from %r: %s",
self.address, e)
self.close()
return
def _on_request_body(self, data):
self._request.body = data
if self._request.method in ("POST", "PATCH", "PUT"):
httputil.parse_body_arguments(
self._request.headers.get("Content-Type", ""), data,
self._request.body_arguments, self._request.files)
for k, v in self._request.body_arguments.items():
self._request.arguments.setdefault(k, []).extend(v)
self.request_callback(self._request)
class HTTPRequest(object):
"""A single HTTP request.
All attributes are type `str` unless otherwise noted.
.. attribute:: method
HTTP request method, e.g. "GET" or "POST"
.. attribute:: uri
The requested uri.
.. attribute:: path
The path portion of `uri`
.. attribute:: query
The query portion of `uri`
.. attribute:: version
HTTP version specified in request, e.g. "HTTP/1.1"
.. attribute:: headers
`.HTTPHeaders` dictionary-like object for request headers. Acts like
a case-insensitive dictionary with additional methods for repeated
headers.
.. attribute:: body
Request body, if present, as a byte string.
.. attribute:: remote_ip
Client's IP address as a string. If ``HTTPServer.xheaders`` is set,
will pass along the real IP address provided by a load balancer
in the ``X-Real-Ip`` or ``X-Forwarded-For`` header.
.. versionchanged:: 3.1
The list format of ``X-Forwarded-For`` is now supported.
.. attribute:: protocol
The protocol used, either "http" or "https". If ``HTTPServer.xheaders``
is set, will pass along the protocol used by a load balancer if
reported via an ``X-Scheme`` header.
.. attribute:: host
The requested hostname, usually taken from the ``Host`` header.
.. attribute:: arguments
GET/POST arguments are available in the arguments property, which
maps arguments names to lists of values (to support multiple values
for individual names). Names are of type `str`, while arguments
are byte strings. Note that this is different from
`.RequestHandler.get_argument`, which returns argument values as
unicode strings.
.. attribute:: query_arguments
Same format as ``arguments``, but contains only arguments extracted
from the query string.
.. versionadded:: 3.2
.. attribute:: body_arguments
Same format as ``arguments``, but contains only arguments extracted
from the request body.
.. versionadded:: 3.2
.. attribute:: files
File uploads are available in the files property, which maps file
names to lists of `.HTTPFile`.
.. attribute:: connection
An HTTP request is attached to a single HTTP connection, which can
be accessed through the "connection" attribute. Since connections
are typically kept open in HTTP/1.1, multiple requests can be handled
sequentially on a single connection.
"""
def __init__(self, method, uri, version="HTTP/1.0", headers=None,
body=None, remote_ip=None, protocol=None, host=None,
files=None, connection=None):
self.method = method
self.uri = uri
self.version = version
self.headers = headers or httputil.HTTPHeaders()
self.body = body or ""
# set remote IP and protocol
self.remote_ip = remote_ip
if protocol:
self.protocol = protocol
elif connection and isinstance(connection.stream,
iostream.SSLIOStream):
self.protocol = "https"
else:
self.protocol = "http"
# xheaders can override the defaults
if connection and connection.xheaders:
# Squid uses X-Forwarded-For, others use X-Real-Ip
ip = self.headers.get("X-Forwarded-For", self.remote_ip)
ip = ip.split(',')[-1].strip()
ip = self.headers.get(
"X-Real-Ip", ip)
if netutil.is_valid_ip(ip):
self.remote_ip = ip
# AWS uses X-Forwarded-Proto
proto = self.headers.get(
"X-Scheme", self.headers.get("X-Forwarded-Proto", self.protocol))
if proto in ("http", "https"):
self.protocol = proto
self.host = host or self.headers.get("Host") or "127.0.0.1"
self.files = files or {}
self.connection = connection
self._start_time = time.time()
self._finish_time = None
self.path, sep, self.query = uri.partition('?')
self.arguments = parse_qs_bytes(self.query, keep_blank_values=True)
self.query_arguments = copy.deepcopy(self.arguments)
self.body_arguments = {}
def supports_http_1_1(self):
"""Returns True if this request supports HTTP/1.1 semantics"""
return self.version == "HTTP/1.1"
@property
def cookies(self):
"""A dictionary of Cookie.Morsel objects."""
if not hasattr(self, "_cookies"):
self._cookies = Cookie.SimpleCookie()
if "Cookie" in self.headers:
try:
self._cookies.load(
native_str(self.headers["Cookie"]))
except Exception:
self._cookies = {}
return self._cookies
def write(self, chunk, callback=None):
"""Writes the given chunk to the response stream."""
assert isinstance(chunk, bytes_type)
self.connection.write(chunk, callback=callback)
def finish(self):
"""Finishes this HTTP request on the open connection."""
self.connection.finish()
self._finish_time = time.time()
def full_url(self):
"""Reconstructs the full URL for this request."""
return self.protocol + "://" + self.host + self.uri
def request_time(self):
"""Returns the amount of time it took for this request to execute."""
if self._finish_time is None:
return time.time() - self._start_time
else:
return self._finish_time - self._start_time
def get_ssl_certificate(self, binary_form=False):
"""Returns the client's SSL certificate, if any.
To use client certificates, the HTTPServer must have been constructed
with cert_reqs set in ssl_options, e.g.::
server = HTTPServer(app,
ssl_options=dict(
certfile="foo.crt",
keyfile="foo.key",
cert_reqs=ssl.CERT_REQUIRED,
ca_certs="cacert.crt"))
By default, the return value is a dictionary (or None, if no
client certificate is present). If ``binary_form`` is true, a
DER-encoded form of the certificate is returned instead. See
SSLSocket.getpeercert() in the standard library for more
details.
http://docs.python.org/library/ssl.html#sslsocket-objects
"""
try:
return self.connection.stream.socket.getpeercert(
binary_form=binary_form)
except ssl.SSLError:
return None
def __repr__(self):
attrs = ("protocol", "host", "method", "uri", "version", "remote_ip")
args = ", ".join(["%s=%r" % (n, getattr(self, n)) for n in attrs])
return "%s(%s, headers=%s)" % (
self.__class__.__name__, args, dict(self.headers))
|
import sys, os
from datetime import date
extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax', 'sphinx.ext.intersphinx']  # intersphinx: required for intersphinx_mapping below
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = u'Clang Static Analyzer'
copyright = u'2013-%d, Analyzer Team' % date.today().year
version = '5'
release = '5'
exclude_patterns = ['_build']
pygments_style = 'sphinx'
html_theme = 'haiku'
html_static_path = []
htmlhelp_basename = 'ClangStaticAnalyzerdoc'
latex_elements = {
}
latex_documents = [
('index', 'ClangStaticAnalyzer.tex', u'Clang Static Analyzer Documentation',
u'Analyzer Team', 'manual'),
]
man_pages = [
('index', 'clangstaticanalyzer', u'Clang Static Analyzer Documentation',
[u'Analyzer Team'], 1)
]
texinfo_documents = [
('index', 'ClangStaticAnalyzer', u'Clang Static Analyzer Documentation',
u'Analyzer Team', 'ClangStaticAnalyzer', 'One line description of project.',
'Miscellaneous'),
]
intersphinx_mapping = {'http://docs.python.org/': None}
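# Typical build invocation from this directory, assuming Sphinx is installed:
#   sphinx-build -b html . _build/html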
|