hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
263a3e82d6c845d4e53578bcb1051b7bc5cfc287 | 41 | py | Python | feather_processor/feather_processor/__init__.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null | feather_processor/feather_processor/__init__.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null | feather_processor/feather_processor/__init__.py | Pennsieve/timeseries-processor | 85766afa76182503fd66cec8382c22e757743f01 | [
"Apache-2.0"
] | null | null | null |
from .processor import FeatherProcessor
| 13.666667 | 39 | 0.853659 | 4 | 41 | 8.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121951 | 41 | 2 | 40 | 20.5 | 0.972222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
266a84143ef2884fa4b1d72abfc153792b26c7e9 | 40,220 | py | Python | zvmsdk/tests/unit/test_volumeop.py | jichenjc/python-zvm-sdk | c081805c6079107b4823af898babdf92cf5577ee | [
"Apache-2.0"
] | null | null | null | zvmsdk/tests/unit/test_volumeop.py | jichenjc/python-zvm-sdk | c081805c6079107b4823af898babdf92cf5577ee | [
"Apache-2.0"
] | null | null | null | zvmsdk/tests/unit/test_volumeop.py | jichenjc/python-zvm-sdk | c081805c6079107b4823af898babdf92cf5577ee | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import unittest

from zvmsdk import dist
from zvmsdk import utils
from zvmsdk import vmops
from zvmsdk import volumeop
from zvmsdk import xcatclient
from zvmsdk.config import CONF
from zvmsdk.exception import ZVMVolumeError
from zvmsdk.exception import ZVMVolumeError as err
# instance parameters:
from zvmsdk.volumeop import NAME as NAME
from zvmsdk.volumeop import OS_TYPE as OS_TYPE
# volume parameters:
from zvmsdk.volumeop import SIZE as SIZE
from zvmsdk.volumeop import TYPE as TYPE
from zvmsdk.volumeop import LUN as LUN
# connection_info parameters:
from zvmsdk.volumeop import ALIAS as ALIAS
from zvmsdk.volumeop import PROTOCOL as PROTOCOL
from zvmsdk.volumeop import FCPS as FCPS
from zvmsdk.volumeop import WWPNS as WWPNS
from zvmsdk.volumeop import DEDICATE as DEDICATE


class _BaseConfiguratorTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(_BaseConfiguratorTestCase, cls).setUpClass()
        cls._base_cnf = volumeop._BaseConfigurator()

    @mock.patch.object(volumeop._BaseConfigurator, 'config_attach_inactive')
    @mock.patch.object(volumeop._BaseConfigurator, 'config_attach_active')
    @mock.patch.object(vmops.VMOps, 'is_reachable')
    def test_config_attach(self, is_reachable,
                           config_attach_active,
                           config_attach_inactive):
        inst = {NAME: 'inst1'}
        (volume, conn_info) = ({}, {})

        is_reachable.return_value = True
        config_attach_active.return_value = None
        self._base_cnf.config_attach(inst, volume, conn_info)
        config_attach_active.assert_called_once_with(inst, volume, conn_info)

        is_reachable.return_value = False
        config_attach_inactive.return_value = None
        self._base_cnf.config_attach(inst, volume, conn_info)
        config_attach_inactive.assert_called_once_with(inst, volume, conn_info)


class _xCATProxyTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(_xCATProxyTestCase, cls).setUpClass()
        cls._proxy = volumeop._xCATProxy()

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chvm')
    def test_dedicate_device(self, _xcat_chvm):
        inst = {NAME: 'inst1'}
        device = '1faa'
        _xcat_chvm.return_value = None
        body = ['--dedicatedevice 1faa 1faa 0']
        self._proxy.dedicate_device(inst, device)
        _xcat_chvm.assert_called_once_with(inst[NAME], body)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chvm')
    def test_undedicate_device(self, _xcat_chvm):
        inst = {NAME: 'inst1'}
        device = '1faa'
        _xcat_chvm.return_value = None
        body = ['--undedicatedevice 1faa']
        self._proxy.undedicate_device(inst, device)
        _xcat_chvm.assert_called_once_with(inst[NAME], body)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chhy')
    def test_add_zfcp_to_pool(self, _xcat_chhy):
        _xcat_chhy.return_value = None
        body = ['--addzfcp2pool zvmsdk free '
                '5005600670078008 0110022003300440 1G 1faa']
        self._proxy.add_zfcp_to_pool('1faa',
                                     '5005600670078008',
                                     '0110022003300440',
                                     '1G')
        _xcat_chhy.assert_called_once_with(body)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chhy')
    def test_remove_zfcp_from_pool(self, _xcat_chhy):
        _xcat_chhy.return_value = None
        body = ['--removezfcpfrompool zvmsdk '
                '0110022003300440 5005600670078008']
        self._proxy.remove_zfcp_from_pool('5005600670078008',
                                          '0110022003300440')
        _xcat_chhy.assert_called_once_with(body)

    @mock.patch.object(xcatclient, 'xcat_request')
    @mock.patch.object(utils.get_xcat_url(), 'chhv')
    def test_xcat_chhy(self, chhv, xcat_request):
        url = '/chhypervisor/' + CONF.zvm.host
        body = '[body]'
        chhv.return_value = url
        xcat_request.return_value = None
        self._proxy._xcat_chhy(body)
        chhv.assert_called_once_with('/' + CONF.zvm.host)
        xcat_request.assert_called_once_with('PUT', url, body)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chhy')
    def test_allocate_zfcp(self, _xcat_chhy):
        inst = {NAME: 'inst1'}
        _xcat_chhy.return_value = None
        body = ['--reservezfcp zvmsdk used inst1 '
                '1faa 1G 5005600670078008 0110022003300440']
        self._proxy.allocate_zfcp(inst,
                                  '1faa',
                                  '1G',
                                  '5005600670078008',
                                  '0110022003300440')
        _xcat_chhy.assert_called_once_with(body)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chvm')
    def test_remove_zfcp(self, _xcat_chvm):
        inst = {NAME: 'inst1'}
        body = ['--removezfcp 1faa 5005600670078008 0110022003300440 1']
        _xcat_chvm.return_value = None
        self._proxy.remove_zfcp(inst,
                                '1faa',
                                '5005600670078008',
                                '0110022003300440')
        _xcat_chvm.assert_called_once_with(inst[NAME], body)

    @mock.patch.object(volumeop._xCATProxy, '_get_mountpoint_parms')
    @mock.patch.object(volumeop._xCATProxy, '_send_notice')
    @mock.patch.object(volumeop._xCATProxy, '_get_volume_parms')
    def test_notice_attach(self, _get_volume_parms,
                           _send_notice,
                           _get_mountpoint_parms):
        inst = {NAME: 'inst1'}
        _get_volume_parms.return_value = 'fake_volume_parms'
        _send_notice.return_value = None
        _get_mountpoint_parms.return_value = 'fake_mp_parms'
        self._proxy.notice_attach(inst,
                                  '1faa',
                                  '5005600670078008',
                                  '0110022003300440',
                                  'vda',
                                  'rhel7')
        _get_volume_parms.assert_called_once_with('addScsiVolume',
                                                  '1faa',
                                                  '5005600670078008',
                                                  '0110022003300440')
        _get_mountpoint_parms.assert_called_once_with('createfilesysnode',
                                                      '1faa',
                                                      '5005600670078008',
                                                      '0110022003300440',
                                                      'vda',
                                                      'rhel7')
        calls = [mock.call(inst, 'fake_volume_parms'),
                 mock.call(inst, 'fake_mp_parms')]
        _send_notice.assert_has_calls(calls)

    @mock.patch.object(volumeop._xCATProxy, '_get_mountpoint_parms')
    @mock.patch.object(volumeop._xCATProxy, '_send_notice')
    @mock.patch.object(volumeop._xCATProxy, '_get_volume_parms')
    def test_notice_detach(self, _get_volume_parms,
                           _send_notice,
                           _get_mountpoint_parms):
        inst = {NAME: 'inst1'}
        _get_volume_parms.return_value = 'fake_volume_parms'
        _send_notice.return_value = None
        _get_mountpoint_parms.return_value = 'fake_mp_parms'
        self._proxy.notice_detach(inst,
                                  '1faa',
                                  '5005600670078008',
                                  '0110022003300440',
                                  'vda',
                                  'rhel7')
        _get_volume_parms.assert_called_once_with('removeScsiVolume',
                                                  '1faa',
                                                  '5005600670078008',
                                                  '0110022003300440')
        _get_mountpoint_parms.assert_called_once_with('removefilesysnode',
                                                      '1faa',
                                                      '5005600670078008',
                                                      '0110022003300440',
                                                      'vda',
                                                      'rhel7')
        calls = [mock.call(inst, 'fake_volume_parms'),
                 mock.call(inst, 'fake_mp_parms')]
        _send_notice.assert_has_calls(calls)

    def test_get_volume_parms(self):
        expected = ('action=test '
                    'fcpAddr=1faa,1fbb '
                    'wwpn=5005600670078008,5005600670079009 '
                    'lun=0110022003300440')
        self.assertEqual(self._proxy._get_volume_parms(
                             'test',
                             '1faa;1fbb',
                             '5005600670078008;5005600670079009',
                             '0110022003300440'),
                         expected)

    @mock.patch.object(volumeop._xCATProxy, '_xcat_chvm')
    def test_send_notice(self, _xcat_chvm):
        _xcat_chvm.return_value = None
        body = ['--aemod setupDisk parms']
        inst = {NAME: 'inst1'}
        self._proxy._send_notice(inst, 'parms')
        _xcat_chvm.assert_called_once_with(inst[NAME], body)

    @mock.patch.object(xcatclient, 'xcat_request')
    @mock.patch.object(utils.get_xcat_url(), 'chvm')
    def test_xcat_chvm(self, chvm, xcat_request):
        url = '/chvm/node'
        body = '[body]'
        chvm.return_value = url
        xcat_request.return_value = None
        self._proxy._xcat_chvm('node', body)
        chvm.assert_called_once_with('/node')
        xcat_request.assert_called_once_with('PUT', url, body)

    @mock.patch.object(dist.LinuxDistManager, 'get_linux_dist')
    def test_get_mountpoint_parms(self, get_linux_dist):
        distro = dist.rhel7
        get_linux_dist.return_value = distro
        distro.assemble_zfcp_srcdev = mock.MagicMock(return_value='zfcp_dev')
        expected = 'action=createfilesysnode tgtFile=vda srcFile=zfcp_dev'
        self.assertEqual(self._proxy._get_mountpoint_parms(
                             'createfilesysnode',
                             '1faa;1fbb',
                             '5005600670078008;5005600670079009',
                             '0110022003300440',
                             'vda',
                             'rhel7'),
                         expected)
        get_linux_dist.assert_called_once_with('rhel7')
        distro.assemble_zfcp_srcdev.assert_called_once_with(
            '1faa,1fbb',
            '5005600670078008,5005600670079009',
            '0110022003300440')

        get_linux_dist.reset_mock()
        distro.assemble_zfcp_srcdev.reset_mock()
        expected = 'action=removefilesysnode tgtFile=vda'
        self.assertEqual(self._proxy._get_mountpoint_parms(
                             'removefilesysnode',
                             '1faa;1fbb',
                             '5005600670078008;5005600670079009',
                             '0110022003300440',
                             'vda',
                             'rhel7'),
                         expected)
        get_linux_dist.assert_not_called()
        distro.assemble_zfcp_srcdev.assert_not_called()


class _Configurator_SLES12TestCases(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(_Configurator_SLES12TestCases, cls).setUpClass()
        cls._conf = volumeop._Configurator_SLES12()

    @mock.patch.object(volumeop._Configurator_SLES12,
                       '_config_fc_attach_inactive_with_xCAT')
    def test_config_attach_inactive_with_xCAT(self, _config_fc):
        _config_fc.return_value = None

        volume = {LUN: 'abcdef0987654321', TYPE: 'fc'}
        conn_info = {PROTOCOL: 'fc'}
        self.assertRaises(ZVMVolumeError,
                          self._conf._config_attach_inactive_with_xCAT,
                          None,
                          volume,
                          conn_info)

        volume = {LUN: 'abcdef0987654321', TYPE: 'fc', SIZE: '1G'}
        self._conf._config_attach_inactive_with_xCAT(None, volume, conn_info)
        _config_fc.assert_called_once_with(None, volume, conn_info)

        conn_info = {PROTOCOL: 'iSCSI'}
        self.assertRaises(NotImplementedError,
                          self._conf._config_attach_inactive_with_xCAT,
                          None,
                          volume,
                          conn_info)

    @mock.patch.object(volumeop._xCATProxy, 'notice_attach')
    @mock.patch.object(volumeop._xCATProxy, 'allocate_zfcp')
    @mock.patch.object(volumeop._xCATProxy, 'add_zfcp_to_pool')
    @mock.patch.object(volumeop._xCATProxy, 'dedicate_device')
    def test_config_fc_attach_inactive_with_xCAT(self, dedicate_device,
                                                 add_zfcp_to_pool,
                                                 allocate_zfcp,
                                                 notice_attach):
        dedicate_device.return_value = None
        add_zfcp_to_pool.return_value = None
        allocate_zfcp.return_value = None
        notice_attach.return_value = None
        inst = {NAME: 'inst1', OS_TYPE: 'sles12'}
        conn_info = {DEDICATE: ['1faa', '1fbb'],
                     FCPS: ['1faa', '1fbb'],
                     WWPNS: ['1234567890abcdea', '1234567890abcdeb'],
                     ALIAS: 'sles12'}
        volume = {SIZE: '1G', LUN: 'abcdef0987654321'}
        formated_wwpns = '1234567890abcdea;1234567890abcdeb'
        calls = [mock.call(inst, '1faa'), mock.call(inst, '1fbb')]

        self._conf._config_fc_attach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        dedicate_device.assert_has_calls(calls)
        add_zfcp_to_pool.assert_called_once_with('1faa;1fbb',
                                                 formated_wwpns,
                                                 'abcdef0987654321',
                                                 '1G')
        allocate_zfcp.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              '1G',
                                              formated_wwpns,
                                              'abcdef0987654321')
        notice_attach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              'abcdef0987654321',
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])

        add_zfcp_to_pool.reset_mock()
        allocate_zfcp.reset_mock()
        notice_attach.reset_mock()
        conn_info.pop(DEDICATE)
        self._conf._config_fc_attach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        add_zfcp_to_pool.assert_called_once_with('1faa;1fbb',
                                                 formated_wwpns,
                                                 'abcdef0987654321',
                                                 '1G')
        allocate_zfcp.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              '1G',
                                              formated_wwpns,
                                              'abcdef0987654321')
        notice_attach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              'abcdef0987654321',
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])

    @mock.patch.object(volumeop._Configurator_SLES12,
                       '_config_fc_detach_inactive_with_xCAT')
    def test_config_detach_inactive_with_xCAT(self, _config_fc):
        _config_fc.return_value = None
        conn_info = {PROTOCOL: 'fc'}
        self._conf._config_detach_inactive_with_xCAT(None, None, conn_info)
        _config_fc.assert_called_once_with(None, None, conn_info)

        conn_info = {PROTOCOL: 'iSCSI'}
        self.assertRaises(NotImplementedError,
                          self._conf._config_detach_inactive_with_xCAT,
                          None,
                          None,
                          conn_info)

    @mock.patch.object(volumeop._xCATProxy, 'undedicate_device')
    @mock.patch.object(volumeop._xCATProxy, 'notice_detach')
    @mock.patch.object(volumeop._xCATProxy, 'remove_zfcp_from_pool')
    @mock.patch.object(volumeop._xCATProxy, 'remove_zfcp')
    def test_config_fc_detach_inactive_with_xCAT(self, remove_zfcp,
                                                 remove_zfcp_from_pool,
                                                 notice_detach,
                                                 undedicate_device):
        remove_zfcp.return_value = None
        remove_zfcp_from_pool.return_value = None
        notice_detach.return_value = None
        undedicate_device.return_value = None
        inst = {NAME: 'inst1', OS_TYPE: 'sles12'}
        conn_info = {DEDICATE: ['1faa', '1fbb'],
                     FCPS: ['1faa', '1fbb'],
                     WWPNS: ['1234567890abcdea', '1234567890abcdeb'],
                     ALIAS: 'sles12'}
        volume = {LUN: 'abcdef0987654321'}
        formated_wwpns = '1234567890abcdea;1234567890abcdeb'
        calls = [mock.call(inst, '1faa'), mock.call(inst, '1fbb')]

        self._conf._config_fc_detach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        remove_zfcp.assert_called_once_with(inst,
                                            '1faa;1fbb',
                                            formated_wwpns,
                                            volume[LUN])
        remove_zfcp_from_pool.assert_called_once_with(formated_wwpns,
                                                      volume[LUN])
        notice_detach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              volume[LUN],
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])
        undedicate_device.assert_has_calls(calls)

        remove_zfcp.reset_mock()
        remove_zfcp_from_pool.reset_mock()
        notice_detach.reset_mock()
        conn_info.pop(DEDICATE)
        self._conf._config_fc_detach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        remove_zfcp.assert_called_once_with(inst,
                                            '1faa;1fbb',
                                            formated_wwpns,
                                            volume[LUN])
        remove_zfcp_from_pool.assert_called_once_with(formated_wwpns,
                                                      volume[LUN])
        notice_detach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              volume[LUN],
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])


class _Configurator_RHEL7TestCases(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(_Configurator_RHEL7TestCases, cls).setUpClass()
        cls._conf = volumeop._Configurator_RHEL7()

    @mock.patch.object(volumeop._Configurator_RHEL7,
                       '_config_fc_attach_inactive_with_xCAT')
    def test_config_attach_inactive_with_xCAT(self, _config_fc):
        _config_fc.return_value = None

        volume = {LUN: 'abcdef0987654321', TYPE: 'fc'}
        conn_info = {PROTOCOL: 'fc'}
        self.assertRaises(ZVMVolumeError,
                          self._conf._config_attach_inactive_with_xCAT,
                          None,
                          volume,
                          conn_info)

        volume = {LUN: 'abcdef0987654321', TYPE: 'fc', SIZE: '1G'}
        self._conf._config_attach_inactive_with_xCAT(None, volume, conn_info)
        _config_fc.assert_called_once_with(None, volume, conn_info)

        conn_info = {PROTOCOL: 'iSCSI'}
        self.assertRaises(NotImplementedError,
                          self._conf._config_attach_inactive_with_xCAT,
                          None,
                          volume,
                          conn_info)

    @mock.patch.object(volumeop._xCATProxy, 'notice_attach')
    @mock.patch.object(volumeop._xCATProxy, 'allocate_zfcp')
    @mock.patch.object(volumeop._xCATProxy, 'add_zfcp_to_pool')
    @mock.patch.object(volumeop._xCATProxy, 'dedicate_device')
    def test_config_fc_attach_inactive_with_xCAT(self, dedicate_device,
                                                 add_zfcp_to_pool,
                                                 allocate_zfcp,
                                                 notice_attach):
        dedicate_device.return_value = None
        add_zfcp_to_pool.return_value = None
        allocate_zfcp.return_value = None
        notice_attach.return_value = None
        inst = {NAME: 'inst1', OS_TYPE: 'rhel7'}
        conn_info = {DEDICATE: ['1faa', '1fbb'],
                     FCPS: ['1faa', '1fbb'],
                     WWPNS: ['1234567890abcdea', '1234567890abcdeb'],
                     ALIAS: '/dev/vda'}
        volume = {SIZE: '1G', LUN: 'abcdef0987654321'}
        formated_wwpns = '1234567890abcdea;1234567890abcdeb'
        calls = [mock.call(inst, '1faa'), mock.call(inst, '1fbb')]

        self._conf._config_fc_attach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        dedicate_device.assert_has_calls(calls)
        add_zfcp_to_pool.assert_called_once_with('1faa;1fbb',
                                                 formated_wwpns,
                                                 'abcdef0987654321',
                                                 '1G')
        allocate_zfcp.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              '1G',
                                              formated_wwpns,
                                              'abcdef0987654321')
        notice_attach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              'abcdef0987654321',
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])

        add_zfcp_to_pool.reset_mock()
        allocate_zfcp.reset_mock()
        notice_attach.reset_mock()
        conn_info.pop(DEDICATE)
        self._conf._config_fc_attach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        add_zfcp_to_pool.assert_called_once_with('1faa;1fbb',
                                                 formated_wwpns,
                                                 'abcdef0987654321',
                                                 '1G')
        allocate_zfcp.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              '1G',
                                              formated_wwpns,
                                              'abcdef0987654321')
        notice_attach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              'abcdef0987654321',
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])

    @mock.patch.object(volumeop._Configurator_RHEL7,
                       '_config_fc_detach_inactive_with_xCAT')
    def test_config_detach_inactive_with_xCAT(self, _config_fc):
        _config_fc.return_value = None
        conn_info = {PROTOCOL: 'fc'}
        self._conf._config_detach_inactive_with_xCAT(None, None, conn_info)
        _config_fc.assert_called_once_with(None, None, conn_info)

        conn_info = {PROTOCOL: 'iSCSI'}
        self.assertRaises(NotImplementedError,
                          self._conf._config_detach_inactive_with_xCAT,
                          None,
                          None,
                          conn_info)

    @mock.patch.object(volumeop._xCATProxy, 'undedicate_device')
    @mock.patch.object(volumeop._xCATProxy, 'notice_detach')
    @mock.patch.object(volumeop._xCATProxy, 'remove_zfcp_from_pool')
    @mock.patch.object(volumeop._xCATProxy, 'remove_zfcp')
    def test_config_fc_detach_inactive_with_xCAT(self, remove_zfcp,
                                                 remove_zfcp_from_pool,
                                                 notice_detach,
                                                 undedicate_device):
        remove_zfcp.return_value = None
        remove_zfcp_from_pool.return_value = None
        notice_detach.return_value = None
        undedicate_device.return_value = None
        inst = {NAME: 'inst1', OS_TYPE: 'rhel7'}
        conn_info = {DEDICATE: ['1faa', '1fbb'],
                     FCPS: ['1faa', '1fbb'],
                     WWPNS: ['1234567890abcdea', '1234567890abcdeb'],
                     ALIAS: '/dev/vda'}
        volume = {LUN: 'abcdef0987654321'}
        formated_wwpns = '1234567890abcdea;1234567890abcdeb'
        calls = [mock.call(inst, '1faa'), mock.call(inst, '1fbb')]

        self._conf._config_fc_detach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        remove_zfcp.assert_called_once_with(inst,
                                            '1faa;1fbb',
                                            formated_wwpns,
                                            volume[LUN])
        remove_zfcp_from_pool.assert_called_once_with(formated_wwpns,
                                                      volume[LUN])
        notice_detach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              volume[LUN],
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])
        undedicate_device.assert_has_calls(calls)

        remove_zfcp.reset_mock()
        remove_zfcp_from_pool.reset_mock()
        notice_detach.reset_mock()
        conn_info.pop(DEDICATE)
        self._conf._config_fc_detach_inactive_with_xCAT(inst,
                                                        volume,
                                                        conn_info)
        remove_zfcp.assert_called_once_with(inst,
                                            '1faa;1fbb',
                                            formated_wwpns,
                                            volume[LUN])
        remove_zfcp_from_pool.assert_called_once_with(formated_wwpns,
                                                      volume[LUN])
        notice_detach.assert_called_once_with(inst,
                                              '1faa;1fbb',
                                              formated_wwpns,
                                              volume[LUN],
                                              conn_info[ALIAS],
                                              inst[OS_TYPE])


class VolumeOpTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        super(VolumeOpTestCase, cls).setUpClass()
        cls._vol_op = volumeop.VolumeOperator()

    @mock.patch.object(volumeop.VolumeOperator, '_get_configurator')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_connection_info')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_volume')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_instance')
    def test_attach_volume_to_instance_SLES12(self,
                                              _validate_instance,
                                              _validate_volume,
                                              _validate_connection_info,
                                              _get_configurator):
        inst = {NAME: 'inst1', OS_TYPE: 'sles12sp2'}
        volume = {TYPE: 'fc',
                  LUN: 'abCDEF0987654321'}
        fcps = ['1faa', '1fBB']
        wwpns = ['1234567890abcdea', '1234567890abCDEB',
                 '1234567890abcdec', '1234567890abCDED',
                 '1234567890abcdee', '1234567890abCDEF']
        conn_info = {PROTOCOL: 'fc',
                     FCPS: fcps,
                     WWPNS: wwpns,
                     ALIAS: 'vda'}
        configurator = volumeop._Configurator_SLES12()
        _get_configurator.return_value = configurator
        configurator.config_attach = mock.MagicMock()

        self._vol_op.attach_volume_to_instance(inst, volume, conn_info)
        _validate_instance.assert_called_once_with(inst)
        _validate_volume.assert_called_once_with(volume)
        _validate_connection_info.assert_called_once_with(conn_info)
        _get_configurator.assert_called_once_with(inst)
        configurator.config_attach.assert_called_once_with(inst,
                                                           volume,
                                                           conn_info)

    @mock.patch.object(volumeop.VolumeOperator, '_get_configurator')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_connection_info')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_volume')
    @mock.patch.object(volumeop.VolumeOperator, '_validate_instance')
    def test_detach_volume_from_instance(self,
                                         _validate_instance,
                                         _validate_volume,
                                         _validate_connection_info,
                                         _get_configurator):
        inst = {NAME: 'inst1', OS_TYPE: 'sles12sp2'}
        volume = {TYPE: 'fc',
                  LUN: 'abCDEF0987654321'}
        fcps = ['1faa', '1fBB']
        wwpns = ['1234567890abcdea', '1234567890abCDEB',
                 '1234567890abcdec', '1234567890abCDED',
                 '1234567890abcdee', '1234567890abCDEF']
        conn_info = {PROTOCOL: 'fc',
                     FCPS: fcps,
                     WWPNS: wwpns,
                     ALIAS: 'vda'}
        configurator = volumeop._Configurator_SLES12()
        _get_configurator.return_value = configurator
        configurator.config_detach = mock.MagicMock()

        self._vol_op.detach_volume_from_instance(inst, volume, conn_info)
        _validate_instance.assert_called_once_with(inst)
        _validate_volume.assert_called_once_with(volume)
        _validate_connection_info.assert_called_once_with(conn_info)
        _get_configurator.assert_called_once_with(inst)
        configurator.config_detach.assert_called_once_with(inst,
                                                           volume,
                                                           conn_info)

    def test_validate_instance(self):
        self.assertRaises(err, self._vol_op._validate_instance, None)

        inst = ['inst1', 'sles12']
        self.assertRaises(err, self._vol_op._validate_instance, inst)

        inst = {NAME: 'inst1'}
        self.assertRaises(err, self._vol_op._validate_instance, inst)

        inst = {OS_TYPE: 'sles12'}
        self.assertRaises(err, self._vol_op._validate_instance, inst)

        inst = {NAME: 'inst1', OS_TYPE: 'centos'}
        self.assertRaises(err, self._vol_op._validate_instance, inst)

        inst = {NAME: 'inst1', OS_TYPE: 'rhel7.2'}
        self._vol_op._validate_instance(inst)

        inst = {NAME: 'inst1', OS_TYPE: 'sles12sp2'}
        self._vol_op._validate_instance(inst)

        inst = {NAME: 'inst1', OS_TYPE: 'ubuntu16.10'}
        self._vol_op._validate_instance(inst)

    @mock.patch.object(volumeop.VolumeOperator, '_validate_fc_volume')
    def test_validate_volume(self, _validate_fc_volume):
        self.assertRaises(err, self._vol_op._validate_volume, None)

        volume = ['fc', 'abCDEF0987654321']
        self.assertRaises(err, self._vol_op._validate_volume, volume)

        volume = {LUN: 'abCDEF0987654321', SIZE: '1G'}
        self.assertRaises(err, self._vol_op._validate_volume, volume)

        volume = {TYPE: 'unknown',
                  LUN: 'abCDEF0987654321',
                  SIZE: '1G'}
        self.assertRaises(err, self._vol_op._validate_volume, volume)

        volume = {LUN: 'abCDEF0987654321', TYPE: 'fc'}
        self._vol_op._validate_volume(volume)
        _validate_fc_volume.assert_called_once_with(volume)

        _validate_fc_volume.reset_mock()
        volume = {TYPE: 'fc',
                  LUN: 'abCDEF0987654321',
                  SIZE: '1G'}
        self._vol_op._validate_volume(volume)
        _validate_fc_volume.assert_called_once_with(volume)

    def test_is_16bit_hex(self):
        self.assertFalse(self._vol_op._is_16bit_hex(None))
        self.assertFalse(self._vol_op._is_16bit_hex('1234'))
        self.assertFalse(self._vol_op._is_16bit_hex('1234567890abcdefg'))
        self.assertFalse(self._vol_op._is_16bit_hex('1234567890abcdeg'))
        self.assertTrue(self._vol_op._is_16bit_hex('1234567890abcdef'))

    def test_validate_fc_volume(self):
        volume = {TYPE: 'fc'}
        self.assertRaises(err, self._vol_op._validate_fc_volume, volume)

        volume = {TYPE: 'fc',
                  LUN: 'abcdef0987654321f'}
        self.assertRaises(err, self._vol_op._validate_fc_volume, volume)

        volume = {TYPE: 'fc',
                  LUN: 'abcdef0987654321'}
        self._vol_op._validate_fc_volume(volume)

    @mock.patch.object(volumeop.VolumeOperator, '_validate_fc_connection_info')
    def test_validate_connection_info(self, _validate_fc_connection_info):
        self.assertRaises(err, self._vol_op._validate_connection_info, None)

        fcps = ['1faa', '1fBB']
        wwpns = ['1234567890abcdea', '1234567890abCDEB',
                 '1234567890abcdec', '1234567890abCDED',
                 '1234567890abcdee', '1234567890abCDEF']
        conn_info = ['fc', fcps, wwpns, 'vda']
        self.assertRaises(err,
                          self._vol_op._validate_connection_info,
                          conn_info)

        conn_info = {PROTOCOL: 'fc',
                     FCPS: fcps,
                     WWPNS: wwpns}
        self.assertRaises(err,
                          self._vol_op._validate_connection_info,
                          conn_info)

        conn_info = {FCPS: fcps,
                     WWPNS: wwpns,
                     ALIAS: 'vda'}
        self.assertRaises(err,
                          self._vol_op._validate_connection_info,
                          conn_info)

        conn_info = {PROTOCOL: 'unknown',
                     WWPNS: wwpns,
                     ALIAS: 'vda'}
        self.assertRaises(err,
                          self._vol_op._validate_connection_info,
                          conn_info)

        conn_info = {PROTOCOL: 'fc',
                     FCPS: fcps,
                     WWPNS: wwpns,
                     ALIAS: 'vda'}
        self._vol_op._validate_connection_info(conn_info)
        _validate_fc_connection_info.assert_called_once_with(conn_info)
@mock.patch.object(volumeop.VolumeOperator, '_is_16bit_hex')
@mock.patch.object(volumeop.VolumeOperator, '_validate_fcp')
def test_validate_fc_connection_info(self, _validate_fcp, _is_16bit_hex):
fcps = ['1faa', '1fBB']
wwpns = ['1234567890abcdea', '1234567890abCDEB',
'1234567890abcdec', '1234567890abCDED',
'1234567890abcdee', '1234567890abCDEF']
conn_info = {PROTOCOL: 'fc',
DEDICATE: '1faa, 1fBB',
FCPS: fcps,
WWPNS: wwpns,
ALIAS: 'vda'}
self.assertRaises(err,
self._vol_op._validate_fc_connection_info,
conn_info)
conn_info = {PROTOCOL: 'fc',
WWPNS: wwpns,
ALIAS: 'vda'}
self.assertRaises(err,
self._vol_op._validate_fc_connection_info,
conn_info)
conn_info = {PROTOCOL: 'fc',
FCPS: '1faa, 1fBB',
WWPNS: wwpns,
ALIAS: 'vda'}
self.assertRaises(err,
self._vol_op._validate_fc_connection_info,
conn_info)
conn_info = {PROTOCOL: 'fc',
FCPS: fcps,
ALIAS: 'vda'}
self.assertRaises(err,
self._vol_op._validate_connection_info,
conn_info)
conn_info = {PROTOCOL: 'fc',
FCPS: fcps,
WWPNS: '1234567890abcdea',
ALIAS: 'vda'}
self.assertRaises(err,
self._vol_op._validate_fc_connection_info,
conn_info)
conn_info = {PROTOCOL: 'fc',
DEDICATE: fcps,
FCPS: fcps,
WWPNS: wwpns,
ALIAS: 'vda'}
self._vol_op._validate_fc_connection_info(conn_info)
_validate_fcp.assert_called_with(conn_info[FCPS][-1])
_is_16bit_hex.assert_called_with(conn_info[WWPNS][-1])
def test_validate_fcp(self):
self.assertRaises(err, self._vol_op._validate_fcp, None)
self.assertRaises(err, self._vol_op._validate_fcp, 'absd')
self.assertRaises(err, self._vol_op._validate_fcp, '12345')
self._vol_op._validate_fcp('09af')
def test_get_configurator(self):
instance = {OS_TYPE: 'rhel7'}
self.assertIsInstance(self._vol_op._get_configurator(instance),
volumeop._Configurator_RHEL7)
instance = {OS_TYPE: 'sles12'}
self.assertIsInstance(self._vol_op._get_configurator(instance),
volumeop._Configurator_SLES12)
instance = {OS_TYPE: 'ubuntu16'}
self.assertIsInstance(self._vol_op._get_configurator(instance),
volumeop._Configurator_Ubuntu16)
instance = {OS_TYPE: 'centos'}
self.assertRaises(err, self._vol_op._get_configurator, instance)
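# The `test_validate_fcp` cases above pin down a contract: reject None,
# non-hex strings such as 'absd', and anything longer than four characters,
# while accepting '09af'. A hypothetical standalone validator consistent with
# those cases — the real `volumeop._validate_fcp` is not shown in this chunk
# and may differ:

```python
import re

# FCP device numbers are assumed here to be 1-4 hex digits, e.g. '09af', '1fBB'.
_FCP_PATTERN = re.compile(r'[0-9a-fA-F]{1,4}\Z')


def validate_fcp(fcp):
    """Raise ValueError unless fcp is a 1-4 character hex device number."""
    if not isinstance(fcp, str) or _FCP_PATTERN.match(fcp) is None:
        raise ValueError('invalid FCP device number: %r' % (fcp,))
```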
# ---- tests/export/html/test_xml_vulnerabilities.py (botzill/pydocx, Apache-2.0) ----
# coding: utf-8
from __future__ import (
    absolute_import,
    print_function,
    unicode_literals,
)

from nose import SkipTest

from pydocx.test import DocumentGeneratorTestCase
from pydocx.test.utils import WordprocessingDocumentFactory
from pydocx.openxml.packaging import MainDocumentPart


class XMLVulnerabilitiesTestCase(DocumentGeneratorTestCase):
    def test_exponential_entity_expansion(self):
        try:
            import defusedxml
        except ImportError:
            defusedxml = None
        if defusedxml is None:
            raise SkipTest('This test case only applies when using defusedxml')

        document_xml = '''
            <p>
              <r>
                <t>&c;</t>
              </r>
            </p>
        '''

        xml_header = '''<?xml version="1.0" encoding="UTF-8"?>
            <!DOCTYPE xml [
                <!ENTITY a "123">
                <!ENTITY b "&a;&a;">
                <!ENTITY c "&b;&b;">
            ]>
        '''

        document = WordprocessingDocumentFactory(xml_header=xml_header)
        document.add(MainDocumentPart, document_xml)

        expected_html = '<p>123123123123</p>'
        try:
            self.assert_document_generates_html(document, expected_html)
            raise AssertionError(
                'Expected "EntitiesForbidden" exception did not occur',
            )
        except defusedxml.EntitiesForbidden:
            pass

    def test_entity_blowup(self):
        try:
            import defusedxml
        except ImportError:
            defusedxml = None
        if defusedxml is None:
            raise SkipTest('This test case only applies when using defusedxml')

        document_xml = '''
            <p>
              <r>
                <t>&a;</t>
              </r>
            </p>
        '''

        xml_header = '''<?xml version="1.0" encoding="UTF-8"?>
            <!DOCTYPE xml [
                <!ENTITY a "123">
            ]>
        '''

        document = WordprocessingDocumentFactory(xml_header=xml_header)
        document.add(MainDocumentPart, document_xml)

        expected_html = '<p>123</p>'
        try:
            self.assert_document_generates_html(document, expected_html)
            raise AssertionError(
                'Expected "EntitiesForbidden" exception did not occur',
            )
        except defusedxml.EntitiesForbidden:
            pass
# ---- 03. Advanced (Nested) Conditional Statements/P10 Trade Commission ##.py (KrisBestTech/Python-Basics, MIT) ----
name_of_city = str(input())
number_of_sales = float(input())

commission = 0
if name_of_city == 'Sofia' or \
        name_of_city == 'Varna' or \
        name_of_city == 'Plovdiv':
    if number_of_sales >= 0:
        if name_of_city == 'Sofia':
            if 0 <= number_of_sales <= 500:
                commission = number_of_sales * 0.05
            elif 500 <= number_of_sales <= 1000:
                commission = number_of_sales * 0.07
            elif 1000 <= number_of_sales <= 10000:
                commission = number_of_sales * 0.08
            elif number_of_sales > 10000:
                commission = number_of_sales * 0.12
        elif name_of_city == 'Varna':
            if 0 <= number_of_sales <= 500:
                commission = number_of_sales * 0.045
            elif 500 <= number_of_sales <= 1000:
                commission = number_of_sales * 0.075
            elif 1000 <= number_of_sales <= 10000:
                commission = number_of_sales * 0.10
            elif number_of_sales > 10000:
                commission = number_of_sales * 0.13
        elif name_of_city == 'Plovdiv':
            if 0 <= number_of_sales <= 500:
                commission = number_of_sales * 0.055
            elif 500 <= number_of_sales <= 1000:
                commission = number_of_sales * 0.08
            elif 1000 <= number_of_sales <= 10000:
                commission = number_of_sales * 0.12
            elif number_of_sales > 10000:
                commission = number_of_sales * 0.145
        print(f'{commission:.2f}')
    else:
        print('error')
else:
    print('error')
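# The nested branches above repeat the same threshold structure once per city.
# As a hedged illustration only (the exercise's expected solution is the
# explicit branching shown), the same rates can be captured in a lookup table;
# `RATES` and `trade_commission` are names invented here:

```python
RATES = {
    'Sofia':   [(500, 0.05),  (1000, 0.07),  (10000, 0.08), (float('inf'), 0.12)],
    'Varna':   [(500, 0.045), (1000, 0.075), (10000, 0.10), (float('inf'), 0.13)],
    'Plovdiv': [(500, 0.055), (1000, 0.08),  (10000, 0.12), (float('inf'), 0.145)],
}


def trade_commission(city, sales):
    """Return the commission for a sale, or None on invalid city/sales."""
    if city not in RATES or sales < 0:
        return None
    # The first bucket whose upper bound covers the sale determines the rate,
    # mirroring the order of the if/elif chain above.
    for upper_bound, rate in RATES[city]:
        if sales <= upper_bound:
            return sales * rate
```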
# ---- bijou/server/__init__.py (bijou/bijou-server, MIT) ----
"""bijou-server (bijou)"""
from . import errors
from . import net
from . import plugins
# ---- amocrm_asterisk_ng/telephony/impl/instances/asterisk_16/cdr_provider/mysql/__init__.py (iqtek/amocrn_asterisk_ng, MIT) ----
from .MySqlConnectionFactoryImpl import MySqlConnectionFactoryImpl
# ---- src/prosodia/base/bnfrange/example/__init__.py (macbeth322/bnf-parser, MIT) ----
from ._grammar import create_example_bnfrange
# ---- Wigle/python-client/swagger_client/api/network_observation_file_upload_and_status__api.py (BillReyor/SSIDprobeCollector, MIT) ----
# coding: utf-8
"""
WiGLE API
Search, upload, and integrate statistics from WiGLE. Use API Name+Token from https://wigle.net/account # noqa: E501
OpenAPI spec version: 3.1
Contact: WiGLE-admin@wigle.net
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from swagger_client.api_client import ApiClient
class NetworkObservationFileUploadAndStatus_Api(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def get_kml_for_trans_id(self, transid, **kwargs): # noqa: E501
"""Download a KML summary of a file # noqa: E501
Get a KML summary approximation for a successfully processed file uploaded by the current user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_kml_for_trans_id(transid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str transid: The unique transaction ID for the file (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_kml_for_trans_id_with_http_info(transid, **kwargs) # noqa: E501
else:
(data) = self.get_kml_for_trans_id_with_http_info(transid, **kwargs) # noqa: E501
return data
def get_kml_for_trans_id_with_http_info(self, transid, **kwargs): # noqa: E501
"""Download a KML summary of a file # noqa: E501
Get a KML summary approximation for a successfully processed file uploaded by the current user # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_kml_for_trans_id_with_http_info(transid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str transid: The unique transaction ID for the file (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['transid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_kml_for_trans_id" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'transid' is set
if self.api_client.client_side_validation and ('transid' not in params or
params['transid'] is None): # noqa: E501
raise ValueError("Missing the required parameter `transid` when calling `get_kml_for_trans_id`") # noqa: E501
collection_formats = {}
path_params = {}
if 'transid' in params:
path_params['transid'] = params['transid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.google-earth.kml+xml']) # noqa: E501
# Authentication setting
auth_settings = ['basic'] # noqa: E501
return self.api_client.call_api(
'/api/v2/file/kml/{transid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_trans_logs_for_user(self, **kwargs): # noqa: E501
"""Get the status of files uploaded by the current user # noqa: E501
Results in response model paginated at 100 results per page # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_trans_logs_for_user(async_req=True)
>>> result = thread.get()
:param async_req bool
:param int pagestart: Most recent record to fetch descending chronologically. Defaults to 0
:param int pageend: Number of results to fetch from offset. Defaults to 100
:return: TranslogResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_trans_logs_for_user_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.get_trans_logs_for_user_with_http_info(**kwargs) # noqa: E501
return data
def get_trans_logs_for_user_with_http_info(self, **kwargs): # noqa: E501
"""Get the status of files uploaded by the current user # noqa: E501
Results in response model paginated at 100 results per page # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_trans_logs_for_user_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param int pagestart: Most recent record to fetch descending chronologically. Defaults to 0
:param int pageend: Number of results to fetch from offset. Defaults to 100
:return: TranslogResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['pagestart', 'pageend'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_trans_logs_for_user" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'pagestart' in params:
query_params.append(('pagestart', params['pagestart'])) # noqa: E501
if 'pageend' in params:
query_params.append(('pageend', params['pageend'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['basic'] # noqa: E501
return self.api_client.call_api(
'/api/v2/file/transactions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TranslogResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def upload(self, file, **kwargs): # noqa: E501
"""Upload a file # noqa: E501
Transmit a file for processing and incorporation into the database. Supports DStumbler, G-Mon, inSSIDer, Kismac, Kismet, MacStumbler, NetStumbler, Pocket Warrior, Wardrive-Android, WiFiFoFum, WiFi-Where, WiGLE WiFi Wardriving, and Apple consolidated DB formats. One or more files may be enclosed within a zip, tar, or tar.gz archive. Files may not exceed 180MiB, and archives WILL IGNORE more than 200 member files. For documentation on WiGLE Wireless CSV files, see https://api.wigle.net/csvFormat.html # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.upload(file, async_req=True)
>>> result = thread.get()
:param async_req bool
:param file file: multipart/form-data file; proper formulation requires both filename and payload. (required)
:param str donate: Allow commercial use of the file contents - 'on' to allow.
:return: UploadResponse
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.upload_with_http_info(file, **kwargs) # noqa: E501
else:
(data) = self.upload_with_http_info(file, **kwargs) # noqa: E501
return data
def upload_with_http_info(self, file, **kwargs): # noqa: E501
"""Upload a file # noqa: E501
Transmit a file for processing and incorporation into the database. Supports DStumbler, G-Mon, inSSIDer, Kismac, Kismet, MacStumbler, NetStumbler, Pocket Warrior, Wardrive-Android, WiFiFoFum, WiFi-Where, WiGLE WiFi Wardriving, and Apple consolidated DB formats. One or more files may be enclosed within a zip, tar, or tar.gz archive. Files may not exceed 180MiB, and archives WILL IGNORE more than 200 member files. For documentation on WiGLE Wireless CSV files, see https://api.wigle.net/csvFormat.html # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.upload_with_http_info(file, async_req=True)
>>> result = thread.get()
:param async_req bool
:param file file: multipart/form-data file; proper formulation requires both filename and payload. (required)
:param str donate: Allow commercial use of the file contents - 'on' to allow.
:return: UploadResponse
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['file', 'donate'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method upload" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'file' is set
if self.api_client.client_side_validation and ('file' not in params or
params['file'] is None): # noqa: E501
raise ValueError("Missing the required parameter `file` when calling `upload`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
if 'file' in params:
local_var_files['file'] = params['file'] # noqa: E501
if 'donate' in params:
form_params.append(('donate', params['donate'])) # noqa: E501
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['multipart/form-data']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/api/v2/file/upload', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='UploadResponse', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
# ---- NumericalMethods/integration/__init__.py (banin-artem/NumericalMethods, MIT) ----
from ._simpson import simpson
from ._runge_refinement import runge_refinement
from ._trapezoid import trapezoid
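# `NumericalMethods/integration` only re-exports `simpson`, `runge_refinement`,
# and `trapezoid`; their implementations are not included in this chunk. As a
# hedged sketch (standard textbook formulas, not the package's own code), the
# composite rules those names conventionally refer to look like:

```python
def trapezoid_rule(f, a, b, n):
    """Composite trapezoid rule over [a, b] with n equal subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)


def simpson_rule(f, a, b, n):
    """Composite Simpson rule over [a, b]; n must be even."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))    # interior nodes, weight 4
    even = sum(f(a + i * h) for i in range(2, n, 2))   # interior nodes, weight 2
    return h / 3 * (f(a) + f(b) + 4 * odd + 2 * even)
```

# A quick sanity check: Simpson's rule is exact for polynomials up to degree
# three, so integrating x**3 over [0, 1] returns 0.25 up to rounding error.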
# ---- baselines/pecnet/__init__.py (InhwanBae/NPSN, MIT) ----
from .model import PECNet as PECNet
from .utils import TrajectoryDataset
from .bridge import get_dataloader, get_latent_dim, get_model
from .bridge import model_forward_pre_hook, model_forward, model_forward_post_hook
from .bridge import model_loss
# ---- samples/client/petstore/python/swagger_client/api/__init__.py (Cadcorp/swagger-codegen, Apache-2.0) ----
from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from swagger_client.api.animal_api import AnimalApi
from swagger_client.api.another_fake_api import AnotherFakeApi
from swagger_client.api.dog_api import DogApi
from swagger_client.api.fake_api import FakeApi
from swagger_client.api.fake_classname_tags_123_api import FakeClassnameTags123Api
from swagger_client.api.pet_api import PetApi
from swagger_client.api.store_api import StoreApi
from swagger_client.api.user_api import UserApi
# ---- dabuildsys/__init__.py (andersk/debathena-build-system, MIT) ----
from apt import *
from checkout import *
from common import *
from config import *
from git import *
from srcname import *
import reprepro
# ---- tests/test_client.py (kunmosky1/pybotters, MIT) ----
import json
from unittest.mock import mock_open

import aiohttp
import pybotters
import pytest
import pytest_mock
from asyncmock import AsyncMock


async def test_client():
    apis = {
        'name1': ['key1', 'secret1'],
        'name2': ['key2', 'secret2'],
        'name3': ['key3', 'secret3'],
    }
    base_url = 'http://example.com'
    async with pybotters.Client(apis=apis, base_url=base_url) as client:
        assert isinstance(client._session, aiohttp.ClientSession)
        assert not client._session.closed
        assert client._base_url == base_url
    assert client._session.closed
    assert client._session.__dict__['_apis'] == {
        'name1': tuple(['key1', 'secret1'.encode()]),
        'name2': tuple(['key2', 'secret2'.encode()]),
        'name3': tuple(['key3', 'secret3'.encode()]),
    }


async def test_client_open(mocker: pytest_mock.MockerFixture):
    read_data = (
        '{"name1":["key1","secret1"],"name2":["key2","secret2"],"name3":["key3","secret'
        '3"]}'
    )
    m = mocker.patch('pybotters.client.open', mock_open(read_data=read_data))
    apis = '/path/to/apis.json'
    async with pybotters.Client(apis=apis) as client:
        assert isinstance(client._session, aiohttp.ClientSession)
        assert not client._session.closed
    assert client._session.closed
    assert client._session.__dict__['_apis'] == {
        'name1': tuple(['key1', 'secret1'.encode()]),
        'name2': tuple(['key2', 'secret2'.encode()]),
        'name3': tuple(['key3', 'secret3'.encode()]),
    }
    m.assert_called_once_with(apis)


async def test_client_warn(mocker: pytest_mock.MockerFixture):
    apis = {'name1', 'key1', 'secret1'}
    base_url = 'http://example.com'
    async with pybotters.Client(apis=apis, base_url=base_url) as client:  # type: ignore
        assert isinstance(client._session, aiohttp.ClientSession)
        assert not client._session.closed
        assert client._base_url == base_url
    assert client._session.closed
    assert client._session.__dict__['_apis'] == {}


async def test_client_open_error(mocker: pytest_mock.MockerFixture):
    read_data = r'name1:\- key1\n- secret1'
    mocker.patch('pybotters.client.open', mock_open(read_data=read_data))
    apis = '/path/to/apis.json'
    with pytest.raises(json.JSONDecodeError):
        async with pybotters.Client(apis=apis):
            pass


@pytest.mark.asyncio
async def test_client_request_get(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.request('GET', 'http://example.com', params={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_request_post(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.request('POST', 'http://example.com', data={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_get(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.get('http://example.com', params={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_post(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.post('http://example.com', data={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_put(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.put('http://example.com', data={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_delete(mocker: pytest_mock.MockerFixture):
    patched = mocker.patch('aiohttp.client.ClientSession._request')
    async with pybotters.Client() as client:
        ret = client.delete('http://example.com', data={'foo': 'bar'})
        assert patched.called
        assert isinstance(ret, aiohttp.client._RequestContextManager)


@pytest.mark.asyncio
async def test_client_ws_connect(mocker: pytest_mock.MockerFixture):
    runner_mock = mocker.Mock()
    runner_mock.wait = AsyncMock()
    m = mocker.patch('pybotters.client.WebSocketRunner', return_value=runner_mock)
    hdlr_str = mocker.Mock()
    hdlr_bytes = mocker.Mock()
    hdlr_json = mocker.Mock()
    async with pybotters.Client() as client:
        ret = await client.ws_connect(
            'ws://test.org',
            send_str='{"foo":"bar"}',
            send_bytes=b'{"foo":"bar"}',
            send_json={'foo': 'bar'},
            hdlr_str=hdlr_str,
            hdlr_bytes=hdlr_bytes,
            hdlr_json=hdlr_json,
        )
    assert m.called
    assert m.call_args == [
        ('ws://test.org', client._session),
        {
            'send_str': '{"foo":"bar"}',
            'send_bytes': b'{"foo":"bar"}',
            'send_json': {'foo': 'bar'},
            'hdlr_str': hdlr_str,
            'hdlr_bytes': hdlr_bytes,
            'hdlr_json': hdlr_json,
        },
    ]
    assert ret == runner_mock
| 36.474026 | 88 | 0.669753 | 668 | 5,617 | 5.44012 | 0.134731 | 0.057788 | 0.036324 | 0.054485 | 0.840121 | 0.814254 | 0.77628 | 0.766648 | 0.7306 | 0.7306 | 0 | 0.009202 | 0.187467 | 5,617 | 153 | 89 | 36.712418 | 0.787029 | 0.002136 | 0 | 0.438462 | 0 | 0 | 0.168481 | 0.06675 | 0 | 0 | 0 | 0 | 0.230769 | 1 | 0 | false | 0.007692 | 0.053846 | 0 | 0.053846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e4b0ebda2fdee7d750722892d8a59fbfaa5fb217 | 4,629 | py | Python | network/refinement_network.py | sveatlo/inpainting | 6870ee56beea7401aa97194f76487c391af9dd5d | [
"Unlicense"
] | 1 | 2021-08-08T03:17:17.000Z | 2021-08-08T03:17:17.000Z | network/refinement_network.py | sveatlo/inpainting | 6870ee56beea7401aa97194f76487c391af9dd5d | [
"Unlicense"
] | 6 | 2021-08-08T13:12:55.000Z | 2022-03-13T15:26:02.000Z | network/refinement_network.py | sveatlo/unmasked | 6870ee56beea7401aa97194f76487c391af9dd5d | [
"Unlicense"
] | null | null | null | import torch
import torch.nn as nn
from network.contextual_attention import ContextualAttention
from network.gated_conv import GatedConv2d, GatedDeConv2d
class RefinementNetwork(nn.Module):
    def __init__(self, in_channels: int = 4, out_channels: int = 3, latent_channels: int = 48, padding_type: str = 'zero', activation: str = 'lrelu', norm: str = 'none'):
        super().__init__()

        # b1 has attention
        self.b1_1 = nn.Sequential(
            GatedConv2d(in_channels, latent_channels, 5, 1, 2, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels, latent_channels, 3, 2, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels, latent_channels*2, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*2, latent_channels*4, 3, 2, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation='relu', norm=norm)
        )
        self.b1_2 = nn.Sequential(
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm)
        )
        self.context_attention = ContextualAttention(ksize=3, stride=1, rate=2, fuse_k=3, softmax_scale=10, fuse=True)

        # b2 is conv only
        self.b2 = nn.Sequential(
            GatedConv2d(in_channels, latent_channels, 5, 1, 2, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels, latent_channels, 3, 2, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels, latent_channels*2, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*2, latent_channels*2, 3, 2, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*2, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 2, dilation=2, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 4, dilation=4, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 8, dilation=8, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 16, dilation=16, padding_type=padding_type, activation=activation, norm=norm)
        )
        self.combine = nn.Sequential(
            GatedConv2d(latent_channels*8, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*4, latent_channels*4, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedDeConv2d(latent_channels*4, latent_channels*2, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels*2, latent_channels*2, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedDeConv2d(latent_channels*2, latent_channels, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels, latent_channels//2, 3, 1, 1, padding_type=padding_type, activation=activation, norm=norm),
            GatedConv2d(latent_channels//2, out_channels, 3, 1, 1, padding_type=padding_type, activation='none', norm=norm),
            nn.Tanh()
        )

    def forward(self, img, coarse_img, mask):
        img_masked = img * (1 - mask) + coarse_img * mask
        x = torch.cat([img_masked, mask], dim=1)

        x_1 = self.b2(x)

        x_2 = self.b1_1(x)
        mask_s = nn.functional.interpolate(mask, (x_2.shape[2], x_2.shape[3]))
        x_2 = self.context_attention(x_2, x_2, mask_s)
        x_2 = self.b1_2(x_2)

        y = torch.cat([x_1, x_2], dim=1)
        y = self.combine(y)
        y = nn.functional.interpolate(y, (img.shape[2], img.shape[3]))

        return y
| 69.089552 | 170 | 0.700799 | 633 | 4,629 | 4.905213 | 0.118483 | 0.180676 | 0.144928 | 0.177134 | 0.754589 | 0.736232 | 0.736232 | 0.736232 | 0.736232 | 0.701127 | 0 | 0.048942 | 0.183409 | 4,629 | 66 | 171 | 70.136364 | 0.772487 | 0.006913 | 0 | 0.185185 | 0 | 0 | 0.004571 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e4f5fbedba3a527d6fab1730311726d5c28e1001 | 71 | py | Python | SuperSaaS/Models/Schedule.py | fluxility/supersaas-python-api-client | 96cfc4e6151f8ee73cb6a486788a89dbd22e93d2 | [
"MIT"
] | 2 | 2020-04-07T04:00:49.000Z | 2021-08-15T19:21:17.000Z | SuperSaaS/Models/Schedule.py | fluxility/supersaas-python-api-client | 96cfc4e6151f8ee73cb6a486788a89dbd22e93d2 | [
"MIT"
] | 9 | 2018-10-15T09:46:14.000Z | 2020-04-11T06:37:38.000Z | SuperSaaS/Models/Schedule.py | fluxility/supersaas-python-api-client | 96cfc4e6151f8ee73cb6a486788a89dbd22e93d2 | [
"MIT"
] | 6 | 2018-05-17T08:31:48.000Z | 2021-07-27T03:19:03.000Z | from .BaseModel import BaseModel
class Schedule(BaseModel):
    pass
| 11.833333 | 32 | 0.760563 | 8 | 71 | 6.75 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183099 | 71 | 5 | 33 | 14.2 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
9009c5e24044e32211877563f772802977da7a6b | 89 | py | Python | tests/tensortrade/actions/test_continuous_actions.py | radusl/tensortrade | b7cce87572fb47f4fcf9c19a9fbf9279db7cc8c9 | [
"Apache-2.0"
] | null | null | null | tests/tensortrade/actions/test_continuous_actions.py | radusl/tensortrade | b7cce87572fb47f4fcf9c19a9fbf9279db7cc8c9 | [
"Apache-2.0"
] | null | null | null | tests/tensortrade/actions/test_continuous_actions.py | radusl/tensortrade | b7cce87572fb47f4fcf9c19a9fbf9279db7cc8c9 | [
"Apache-2.0"
] | null | null | null | from tensortrade import TradingContext
from tensortrade.actions import ContinuousActions
| 29.666667 | 49 | 0.898876 | 9 | 89 | 8.888889 | 0.666667 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089888 | 89 | 2 | 50 | 44.5 | 0.987654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5f7ed8c161776d346cf02940f491e1e2f31fc7d5 | 35 | py | Python | kyototycoon/__init__.py | rulerhuang/python-kyototycoon | 1ec42ef17048c649aaf9ec51a2da44ac46d011fe | [
"BSD-3-Clause"
] | null | null | null | kyototycoon/__init__.py | rulerhuang/python-kyototycoon | 1ec42ef17048c649aaf9ec51a2da44ac46d011fe | [
"BSD-3-Clause"
] | null | null | null | kyototycoon/__init__.py | rulerhuang/python-kyototycoon | 1ec42ef17048c649aaf9ec51a2da44ac46d011fe | [
"BSD-3-Clause"
] | null | null | null | from kyototycoon import * # noqa
| 17.5 | 34 | 0.714286 | 4 | 35 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228571 | 35 | 1 | 35 | 35 | 0.925926 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5fd381370652258549c322559cc1d642d12dc4c8 | 202 | py | Python | tests/test_main.py | k2on/interlock_hub | 34740cb598cea75eaa716552414a1b1b1dcbc520 | [
"MIT"
] | null | null | null | tests/test_main.py | k2on/interlock_hub | 34740cb598cea75eaa716552414a1b1b1dcbc520 | [
"MIT"
] | null | null | null | tests/test_main.py | k2on/interlock_hub | 34740cb598cea75eaa716552414a1b1b1dcbc520 | [
"MIT"
] | null | null | null | from . import LocalServerMock
import pytest
def test_main():
    local_server = LocalServerMock()
    assert local_server.status_code == 1
    assert local_server.status_code_name == "INTERNAL_SETUP"
| 22.444444 | 60 | 0.762376 | 25 | 202 | 5.84 | 0.64 | 0.226027 | 0.232877 | 0.315068 | 0.369863 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005917 | 0.163366 | 202 | 8 | 61 | 25.25 | 0.857988 | 0 | 0 | 0 | 0 | 0 | 0.069307 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
39a0959ca4cd67cea81979888cb29ec8b6302181 | 133 | py | Python | techk/apps/base/admin.py | kloness/fullstack-challenge | eccd19facce1450836191df762256075ec369a26 | [
"MIT"
] | null | null | null | techk/apps/base/admin.py | kloness/fullstack-challenge | eccd19facce1450836191df762256075ec369a26 | [
"MIT"
] | null | null | null | techk/apps/base/admin.py | kloness/fullstack-challenge | eccd19facce1450836191df762256075ec369a26 | [
"MIT"
] | null | null | null | from django.contrib import admin
from apps.base import models
admin.site.register(models.Category)
admin.site.register(models.Book)
| 22.166667 | 36 | 0.827068 | 20 | 133 | 5.5 | 0.6 | 0.163636 | 0.309091 | 0.418182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082707 | 133 | 5 | 37 | 26.6 | 0.901639 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
39d1675a26abbb34819db9a72db3b51a66f77d7e | 91 | py | Python | settings.py | AILab-FOI/MMO-IF | 74a633bb7687ffdca8b3043046b0c572d5cc2969 | [
"MIT"
] | null | null | null | settings.py | AILab-FOI/MMO-IF | 74a633bb7687ffdca8b3043046b0c572d5cc2969 | [
"MIT"
] | null | null | null | settings.py | AILab-FOI/MMO-IF | 74a633bb7687ffdca8b3043046b0c572d5cc2969 | [
"MIT"
] | null | null | null | XMPP_SERVER = 'rec.foi.hr'
SERVER_JID = 'tpeharda_server@rec.foi.hr'
SERVER_PASS = 'pass'
| 18.2 | 41 | 0.736264 | 15 | 91 | 4.2 | 0.533333 | 0.285714 | 0.380952 | 0.444444 | 0.634921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10989 | 91 | 4 | 42 | 22.75 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.43956 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.333333 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
39f5515f691a0888a417a9350b2c995078f3f2f0 | 96 | py | Python | tests/unit/test_line.py | vboulanger/draw | 63a40999c84218d5237195adf02eb817dad54601 | [
"MIT"
] | 15 | 2020-01-15T17:28:29.000Z | 2021-02-24T20:32:05.000Z | tests/unit/test_line.py | vboulanger/draw | 63a40999c84218d5237195adf02eb817dad54601 | [
"MIT"
] | null | null | null | tests/unit/test_line.py | vboulanger/draw | 63a40999c84218d5237195adf02eb817dad54601 | [
"MIT"
] | null | null | null | import depict
import pytest
def test_hello_world():
    depict.line([1, 2, 4], show_plot=False)
f2f85439eb14a8efb1ae0d8b40643791c88cc16c | 192 | py | Python | pynmmso/listeners/__init__.py | wood-chris/pynmmso | e13f8139160421a9d3f7e650ad6f988c9244ca69 | [
"MIT"
] | 5 | 2019-06-01T06:21:25.000Z | 2021-11-17T18:43:43.000Z | pynmmso/listeners/__init__.py | wood-chris/pynmmso | e13f8139160421a9d3f7e650ad6f988c9244ca69 | [
"MIT"
] | null | null | null | pynmmso/listeners/__init__.py | wood-chris/pynmmso | e13f8139160421a9d3f7e650ad6f988c9244ca69 | [
"MIT"
] | 3 | 2019-10-01T11:24:06.000Z | 2021-09-23T17:20:03.000Z | from .base_listener import BaseListener
from .multi_listener import MultiListener
from .parallel_predictor_listener import ParallelPredictorListener
from .trace_listener import TraceListener
| 32 | 66 | 0.890625 | 21 | 192 | 7.904762 | 0.571429 | 0.337349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088542 | 192 | 5 | 67 | 38.4 | 0.948571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8405273dff58262b210ba2621eebd3cacf5bc6ff | 9,932 | py | Python | cronman/tests/test_monitor.py | ryancheley/django-cronman | 5be5d9d5eecba0f110808c9e7a97ef89ef620ade | [
"BSD-3-Clause"
] | 17 | 2018-09-25T16:28:36.000Z | 2022-01-31T14:43:24.000Z | cronman/tests/test_monitor.py | ryancheley/django-cronman | 5be5d9d5eecba0f110808c9e7a97ef89ef620ade | [
"BSD-3-Clause"
] | 14 | 2018-11-04T14:45:14.000Z | 2022-02-01T04:02:47.000Z | cronman/tests/test_monitor.py | ryancheley/django-cronman | 5be5d9d5eecba0f110808c9e7a97ef89ef620ade | [
"BSD-3-Clause"
] | 3 | 2018-09-25T16:28:44.000Z | 2022-02-01T04:08:23.000Z | # -*- coding: utf-8 -*-
# vi:si:et:sw=4:sts=4:ts=4
from __future__ import unicode_literals
from django.utils.encoding import force_bytes
import mock
import requests
from cronman.monitor import Cronitor, Slack
from cronman.tests.base import BaseCronTestCase, override_cron_settings
class CronitorTestCase(BaseCronTestCase):
"""Tests for Cronitor class"""
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=False,
)
@mock.patch("cronman.monitor.requests.head")
def test_run_disabled(self, mock_head):
"""Test for `run` method, case: Cronitor disabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.run("cRoNiD")
mock_head.assert_not_called()
cronitor.logger.warning.assert_has_calls(
[mock.call("Cronitor request ignored (disabled in settings).")]
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch("cronman.monitor.requests.head")
def test_run_enabled(self, mock_head):
"""Test for `run` method, case: Cronitor enabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.run("cRoNiD")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/run", params=None, timeout=10
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch(
"cronman.monitor.requests.head",
side_effect=requests.ConnectTimeout("msg"),
)
def test_run_failed(self, mock_head):
"""Test for `run` method, case: Cronitor enabled, request failed"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.run("cRoNiD")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/run", params=None, timeout=10
)
cronitor.logger.warning.assert_has_calls(
[
mock.call(
"Cronitor request failed: "
"https://cronitor.link/cRoNiD/run "
"ConnectTimeout: msg"
)
]
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=False,
)
@mock.patch("cronman.monitor.requests.head")
def test_complete_disabled(self, mock_head):
"""Test for `complete` method, case: Cronitor disabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.complete("cRoNiD")
mock_head.assert_not_called()
cronitor.logger.warning.assert_has_calls(
[mock.call("Cronitor request ignored (disabled in settings).")]
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch("cronman.monitor.requests.head")
def test_complete_enabled(self, mock_head):
"""Test for `complete` method, case: Cronitor enabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.complete("cRoNiD")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/complete", params=None, timeout=10
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch(
"cronman.monitor.requests.head",
side_effect=requests.ConnectTimeout("msg"),
)
def test_complete_failed(self, mock_head):
"""Test for `complete` method, case: Cronitor enabled, request failed"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.complete("cRoNiD")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/complete", params=None, timeout=10
)
cronitor.logger.warning.assert_has_calls(
[
mock.call(
"Cronitor request failed: "
"https://cronitor.link/cRoNiD/complete "
"ConnectTimeout: msg"
)
]
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=False,
)
@mock.patch("cronman.monitor.requests.head")
def test_fail_disabled(self, mock_head):
"""Test for `fail` method, case: Cronitor disabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.fail("cRoNiD", msg="RuntimeError: test message")
mock_head.assert_not_called()
cronitor.logger.warning.assert_has_calls(
[mock.call("Cronitor request ignored (disabled in settings).")]
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch("cronman.monitor.requests.head")
def test_fail_enabled(self, mock_head):
"""Test for `fail` method, case: Cronitor enabled"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.fail("cRoNiD", msg="RuntimeError: test message")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/fail",
params={"msg": "RuntimeError: test message"},
timeout=10,
)
@override_cron_settings(
CRONMAN_CRONITOR_URL="https://cronitor.link/{cronitor_id}/{end_point}",
CRONMAN_CRONITOR_ENABLED=True,
)
@mock.patch(
"cronman.monitor.requests.head",
side_effect=requests.ConnectTimeout("msg"),
)
def test_fail_failed(self, mock_head):
"""Test for `fail` method, case: Cronitor enabled, request failed"""
cronitor = Cronitor()
cronitor.logger = mock.MagicMock()
cronitor.fail("cRoNiD", msg="RuntimeError: test message")
mock_head.assert_called_once_with(
"https://cronitor.link/cRoNiD/fail",
params={"msg": "RuntimeError: test message"},
timeout=10,
)
cronitor.logger.warning.assert_has_calls(
[
mock.call(
"Cronitor request failed: "
"https://cronitor.link/cRoNiD/fail "
"ConnectTimeout: msg"
)
]
)
class SlackTestCase(BaseCronTestCase):
"""Tests for Slack class"""
@override_cron_settings(
CRONMAN_SLACK_URL="https://fake-chat.slack.com/services/hooks/slackbot",
CRONMAN_SLACK_TOKEN="sLaCkTokEn",
CRONMAN_SLACK_DEFAULT_CHANNEL="cronitor",
CRONMAN_SLACK_ENABLED=False,
)
@mock.patch("cronman.monitor.requests.post")
def test_post_disabled(self, mock_post):
"""Test for `post` method, case: Slack disabled"""
slack = Slack()
slack.logger = mock.MagicMock()
slack.post("This is a test!")
mock_post.assert_not_called()
slack.logger.warning.assert_has_calls(
[mock.call("Slack request ignored (disabled in settings).")]
)
@override_cron_settings(
CRONMAN_SLACK_URL="https://fake-chat.slack.com/services/hooks/slackbot",
CRONMAN_SLACK_TOKEN="sLaCkTokEn",
CRONMAN_SLACK_DEFAULT_CHANNEL="cronitor",
CRONMAN_SLACK_ENABLED=True,
)
@mock.patch("cronman.monitor.requests.post")
def test_post_enabled(self, mock_post):
"""Test for `post` method, case: Slack enabled"""
slack = Slack()
slack.logger = mock.MagicMock()
slack.post("This is a test!")
mock_post.assert_called_once_with(
"https://fake-chat.slack.com/services/hooks/slackbot?"
"token=sLaCkTokEn&channel=%23cronitor",
data=force_bytes("This is a test!"),
timeout=7,
)
@override_cron_settings(
CRONMAN_SLACK_URL="https://fake-chat.slack.com/services/hooks/slackbot",
CRONMAN_SLACK_TOKEN="sLaCkTokEn",
CRONMAN_SLACK_DEFAULT_CHANNEL="cronitor",
CRONMAN_SLACK_ENABLED=True,
)
@mock.patch("cronman.monitor.requests.post")
def test_post_enabled_custom_channel(self, mock_post):
"""Test for `post` method, case: Slack enabled, custom channel"""
slack = Slack()
slack.logger = mock.MagicMock()
slack.post("This is a test!", channel="dev")
mock_post.assert_called_once_with(
"https://fake-chat.slack.com/services/hooks/slackbot?"
"token=sLaCkTokEn&channel=%23dev",
data=force_bytes("This is a test!"),
timeout=7,
)
@override_cron_settings(
CRONMAN_SLACK_URL="https://fake-chat.slack.com/services/hooks/slackbot",
CRONMAN_SLACK_TOKEN="sLaCkTokEn",
CRONMAN_SLACK_DEFAULT_CHANNEL="cronitor",
CRONMAN_SLACK_ENABLED=True,
)
@mock.patch(
"cronman.monitor.requests.post",
side_effect=requests.ConnectTimeout("msg"),
)
def test_post_failed(self, mock_post):
"""Test for `post` method, case: Slack enabled, request failed"""
slack = Slack()
slack.logger = mock.MagicMock()
slack.post("This is a test!")
mock_post.assert_called_once_with(
"https://fake-chat.slack.com/services/hooks/slackbot?"
"token=sLaCkTokEn&channel=%23cronitor",
data=force_bytes("This is a test!"),
timeout=7,
)
slack.logger.error.assert_has_calls(
[mock.call("Slack request failed: ConnectTimeout: msg")]
)
| 36.785185 | 80 | 0.626158 | 1,073 | 9,932 | 5.575955 | 0.095993 | 0.045128 | 0.051145 | 0.058666 | 0.919104 | 0.914926 | 0.904396 | 0.884172 | 0.884172 | 0.847735 | 0 | 0.003367 | 0.252517 | 9,932 | 269 | 81 | 36.921933 | 0.802532 | 0.079138 | 0 | 0.670996 | 0 | 0 | 0.255375 | 0.052928 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.056277 | false | 0 | 0.025974 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
841a2e6f6fd098680c3d9e5b19c6e875ffe7df02 | 3,495 | py | Python | openapi_server/models/__init__.py | graphsense/graphsense-REST | 2e4a9c20835e54d971e3fc3aae5780bc87d48647 | [
"MIT"
] | 14 | 2017-11-25T18:27:14.000Z | 2022-02-22T09:42:09.000Z | openapi_server/models/__init__.py | graphsense/graphsense-REST | 2e4a9c20835e54d971e3fc3aae5780bc87d48647 | [
"MIT"
] | 64 | 2019-03-29T08:15:05.000Z | 2022-02-24T10:28:34.000Z | openapi_server/models/__init__.py | graphsense/graphsense-REST | 2e4a9c20835e54d971e3fc3aae5780bc87d48647 | [
"MIT"
] | 12 | 2018-10-20T22:29:29.000Z | 2022-02-16T10:12:30.000Z | # coding: utf-8
# flake8: noqa
from __future__ import absolute_import
# import models into model package
from openapi_server.models.address import Address
from openapi_server.models.address_tag import AddressTag
from openapi_server.models.address_tag_all_of import AddressTagAllOf
from openapi_server.models.address_tx import AddressTx
from openapi_server.models.address_tx_utxo import AddressTxUtxo
from openapi_server.models.address_txs import AddressTxs
from openapi_server.models.addresses import Addresses
from openapi_server.models.block import Block
from openapi_server.models.block_tx import BlockTx
from openapi_server.models.block_tx_utxo import BlockTxUtxo
from openapi_server.models.blocks import Blocks
from openapi_server.models.concept import Concept
from openapi_server.models.currency_stats import CurrencyStats
from openapi_server.models.entities import Entities
from openapi_server.models.entity import Entity
from openapi_server.models.entity_addresses import EntityAddresses
from openapi_server.models.entity_tag import EntityTag
from openapi_server.models.entity_tag_all_of import EntityTagAllOf
from openapi_server.models.link import Link
from openapi_server.models.link_utxo import LinkUtxo
from openapi_server.models.neighbor import Neighbor
from openapi_server.models.neighbors import Neighbors
from openapi_server.models.rates import Rates
from openapi_server.models.rates_rates import RatesRates
from openapi_server.models.search_result import SearchResult
from openapi_server.models.search_result_by_currency import SearchResultByCurrency
from openapi_server.models.search_result_leaf import SearchResultLeaf
from openapi_server.models.search_result_level1 import SearchResultLevel1
from openapi_server.models.search_result_level1_all_of import SearchResultLevel1AllOf
from openapi_server.models.search_result_level2 import SearchResultLevel2
from openapi_server.models.search_result_level2_all_of import SearchResultLevel2AllOf
from openapi_server.models.search_result_level3 import SearchResultLevel3
from openapi_server.models.search_result_level3_all_of import SearchResultLevel3AllOf
from openapi_server.models.search_result_level4 import SearchResultLevel4
from openapi_server.models.search_result_level4_all_of import SearchResultLevel4AllOf
from openapi_server.models.search_result_level5 import SearchResultLevel5
from openapi_server.models.search_result_level5_all_of import SearchResultLevel5AllOf
from openapi_server.models.search_result_level6 import SearchResultLevel6
from openapi_server.models.search_result_level6_all_of import SearchResultLevel6AllOf
from openapi_server.models.stats import Stats
from openapi_server.models.stats_ledger import StatsLedger
from openapi_server.models.stats_ledger_version import StatsLedgerVersion
from openapi_server.models.stats_note import StatsNote
from openapi_server.models.stats_tags_source import StatsTagsSource
from openapi_server.models.stats_tool import StatsTool
from openapi_server.models.stats_version import StatsVersion
from openapi_server.models.tag import Tag
from openapi_server.models.tags import Tags
from openapi_server.models.taxonomy import Taxonomy
from openapi_server.models.tx import Tx
from openapi_server.models.tx_account import TxAccount
from openapi_server.models.tx_summary import TxSummary
from openapi_server.models.tx_utxo import TxUtxo
from openapi_server.models.tx_value import TxValue
from openapi_server.models.txs import Txs
from openapi_server.models.values import Values
| 56.370968 | 85 | 0.897282 | 480 | 3,495 | 6.254167 | 0.19375 | 0.205197 | 0.317122 | 0.429047 | 0.47968 | 0.306129 | 0.163891 | 0 | 0 | 0 | 0 | 0.00799 | 0.068956 | 3,495 | 61 | 86 | 57.295082 | 0.914567 | 0.016881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
841e6bb96e087d2561ddc80998f7d249a14f42cc | 191 | py | Python | crome_synthesis/tools/__init__.py | pierg/crome-synthesis | c4392e69176e67e99c4bbacf8affbd949acebd2a | [
"MIT"
] | null | null | null | crome_synthesis/tools/__init__.py | pierg/crome-synthesis | c4392e69176e67e99c4bbacf8affbd949acebd2a | [
"MIT"
] | null | null | null | crome_synthesis/tools/__init__.py | pierg/crome-synthesis | c4392e69176e67e99c4bbacf8affbd949acebd2a | [
"MIT"
] | null | null | null | import os
from pathlib import Path
output_folder_synthesis: Path = Path(os.path.dirname(__file__)).parent.parent / "output"
persistence_path: Path = output_folder_synthesis / "persistence"
| 27.285714 | 88 | 0.801047 | 25 | 191 | 5.76 | 0.48 | 0.138889 | 0.222222 | 0.347222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104712 | 191 | 6 | 89 | 31.833333 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0.089005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
080aba4917a4e8c7a0ea87172488a3006012f211 | 8,409 | py | Python | tests/test_unicode.py | RuiluGao/containers | 60bd1844a4f165b1b619ece91077ee956b3a6942 | [
"MIT"
] | null | null | null | tests/test_unicode.py | RuiluGao/containers | 60bd1844a4f165b1b619ece91077ee956b3a6942 | [
"MIT"
] | null | null | null | tests/test_unicode.py | RuiluGao/containers | 60bd1844a4f165b1b619ece91077ee956b3a6942 | [
"MIT"
] | 2 | 2021-04-12T02:25:55.000Z | 2021-04-27T04:47:02.000Z | from containers.unicode import NormalizedStr
import pytest
strings = [
    'César Chávez',  # spanish, NFD normalized
    'César Chávez',  # spanish, NFC normalized
    'César Chávez',  # spanish, unnormalized
]


def test_repr_0():
    assert repr(NormalizedStr(strings[0])) == "NormalizedStr('César Chávez', 'NFC')"

def test_repr_1():
    assert repr(NormalizedStr(strings[1])) == "NormalizedStr('César Chávez', 'NFC')"

def test_repr_2():
    assert repr(NormalizedStr(strings[2])) == "NormalizedStr('César Chávez', 'NFC')"

def test_repr_NFD_0():
    assert repr(NormalizedStr(strings[0], 'NFD')) == "NormalizedStr('César Chávez', 'NFD')"

def test_repr_NFD_1():
    assert repr(NormalizedStr(strings[1], 'NFD')) == "NormalizedStr('César Chávez', 'NFD')"

def test_repr_NFD_2():
    assert repr(NormalizedStr(strings[2], 'NFD')) == "NormalizedStr('César Chávez', 'NFD')"

def test_str_0():
    assert str(NormalizedStr(strings[0])) == 'César Chávez'

def test_str_1():
    assert str(NormalizedStr(strings[1])) == 'César Chávez'

def test_str_2():
    assert str(NormalizedStr(strings[2])) == 'César Chávez'

def test_len_0():
    assert len(NormalizedStr(strings[0])) == 12

def test_len_1():
    assert len(NormalizedStr(strings[1])) == 12

def test_len_2():
    assert len(NormalizedStr(strings[2])) == 12

def test_len_NFD_0():
    assert len(NormalizedStr(strings[0], 'NFD')) == 14

def test_len_NFD_1():
    assert len(NormalizedStr(strings[1], 'NFD')) == 14

def test_len_NFD_2():
    assert len(NormalizedStr(strings[2], 'NFD')) == 14

def test_contains_0():
    assert strings[0] in NormalizedStr(strings[0])

def test_contains_1():
    assert strings[0] in NormalizedStr(strings[1])

def test_contains_2():
    assert strings[0] in NormalizedStr(strings[2])

def test_contains_3():
    assert strings[0] in NormalizedStr(strings[0], 'NFD')

def test_contains_4():
    assert strings[0] in NormalizedStr(strings[1], 'NFD')

def test_contains_5():
    assert strings[0] in NormalizedStr(strings[2], 'NFD')

def test_contains_6():
    assert strings[1] in NormalizedStr(strings[0])

def test_contains_7():
    assert strings[1] in NormalizedStr(strings[1])

def test_contains_8():
    assert strings[1] in NormalizedStr(strings[2])

def test_contains_9():
    assert strings[1] in NormalizedStr(strings[0], 'NFD')

def test_contains_10():
    assert strings[1] in NormalizedStr(strings[1], 'NFD')

def test_contains_11():
    assert strings[1] in NormalizedStr(strings[2], 'NFD')

def test_contains_12():
    assert strings[2] in NormalizedStr(strings[0])

def test_contains_13():
    assert strings[2] in NormalizedStr(strings[1])

def test_contains_14():
    assert strings[2] in NormalizedStr(strings[2])

def test_contains_15():
    assert strings[2] in NormalizedStr(strings[0], 'NFD')

def test_contains_16():
    assert strings[2] in NormalizedStr(strings[1], 'NFD')

def test_contains_17():
    assert strings[2] in NormalizedStr(strings[2], 'NFD')

def test_contains_18():
    assert 'Cesar' not in NormalizedStr(strings[0], 'NFD')

def test_contains_19():
    assert 'Cesar' not in NormalizedStr(strings[1], 'NFD')

def test_contains_20():
    assert 'Cesar' not in NormalizedStr(strings[2], 'NFD')

def test_contains_21():
    assert 'Cesar' not in NormalizedStr(strings[0], 'NFC')

def test_contains_22():
    assert 'Cesar' not in NormalizedStr(strings[1], 'NFC')

def test_contains_23():
    assert 'Cesar' not in NormalizedStr(strings[2], 'NFC')

def test_getitem_0():
    assert NormalizedStr(strings[0], 'NFC')[0] == 'C'

def test_getitem_1():
    assert NormalizedStr(strings[0], 'NFC')[1] == 'é'

def test_getitem_2():
    assert NormalizedStr(strings[0], 'NFC')[9] == 'v'

def test_getitem_3():
    assert NormalizedStr(strings[0], 'NFD')[0] == 'C'

def test_getitem_4():
    assert NormalizedStr(strings[0], 'NFD')[1] == 'e'

def test_getitem_5():
    assert NormalizedStr(strings[0], 'NFD')[9] == 'a'

def test_lower_0():
    assert str(NormalizedStr(strings[0], 'NFD').lower()) == 'césar chávez'

def test_lower_1():
    assert str(NormalizedStr(strings[0], 'NFC').lower()) == 'césar chávez'

def test_lower_2():
    assert str(NormalizedStr(strings[1], 'NFD').lower()) == 'césar chávez'

def test_lower_3():
    assert str(NormalizedStr(strings[1], 'NFC').lower()) == 'césar chávez'

def test_lower_4():
    assert str(NormalizedStr(strings[2], 'NFD').lower()) == 'césar chávez'

def test_lower_5():
    assert str(NormalizedStr(strings[2], 'NFC').lower()) == 'césar chávez'

def test_lower_6():
    x = NormalizedStr(strings[0])
    y = x.lower()
    assert str(x) == 'César Chávez'
    assert str(y) == 'césar chávez'

def test_lower_7():
    x = NormalizedStr(strings[1])
    y = x.lower()
    assert str(x) == 'César Chávez'
    assert str(y) == 'césar chávez'

def test_lower_8():
    x = NormalizedStr(strings[2])
    y = x.lower()
    assert str(x) == 'César Chávez'
assert str(y) == 'césar chávez'
def test_upper_0():
assert str(NormalizedStr(strings[0], 'NFD').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_1():
assert str(NormalizedStr(strings[0], 'NFC').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_2():
assert str(NormalizedStr(strings[1], 'NFD').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_3():
assert str(NormalizedStr(strings[1], 'NFC').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_4():
assert str(NormalizedStr(strings[2], 'NFD').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_5():
assert str(NormalizedStr(strings[2], 'NFC').upper()) == 'CÉSAR CHÁVEZ'
def test_upper_6():
x = NormalizedStr(strings[0])
y = x.upper()
assert str(x) == 'César Chávez'
assert str(y) == 'CÉSAR CHÁVEZ'
def test_upper_7():
x = NormalizedStr(strings[1])
y = x.upper()
assert str(x) == 'César Chávez'
assert str(y) == 'CÉSAR CHÁVEZ'
def test_upper_8():
x = NormalizedStr(strings[2])
y = x.upper()
assert str(x) == 'César Chávez'
assert str(y) == 'CÉSAR CHÁVEZ'
def test_add_0():
x = NormalizedStr(strings[0])
y = NormalizedStr(strings[1])
z = NormalizedStr(strings[2])
assert str(x + y + z) == str(y + x + z)
assert str(x) == str(NormalizedStr(strings[0], 'NFC'))
assert str(y) == str(NormalizedStr(strings[1], 'NFC'))
assert str(z) == str(NormalizedStr(strings[2], 'NFC'))
def test_add_1():
x = NormalizedStr(strings[0], 'NFD')
y = NormalizedStr(strings[1], 'NFD')
z = NormalizedStr(strings[2], 'NFD')
assert str(x + y + z) == str(y + x + z)
assert str(x) == str(NormalizedStr(strings[0], 'NFD'))
assert str(y) == str(NormalizedStr(strings[1], 'NFD'))
assert str(z) == str(NormalizedStr(strings[2], 'NFD'))
def test_add_2():
x = NormalizedStr(strings[0], 'NFD')
y = NormalizedStr(strings[1], 'NFD')
z = NormalizedStr(strings[2], 'NFD')
assert str(x + strings[1] + strings[2]) == str(y + x + z)
assert str(x) == str(NormalizedStr(strings[0], 'NFD'))
assert str(y) == str(NormalizedStr(strings[1], 'NFD'))
assert str(z) == str(NormalizedStr(strings[2], 'NFD'))
def test_add_3():
x = NormalizedStr(strings[0], 'NFC')
y = NormalizedStr(strings[1], 'NFC')
z = NormalizedStr(strings[2], 'NFC')
assert str(x + strings[1] + strings[2]) == str(y + x + z)
assert str(x) == str(NormalizedStr(strings[0], 'NFC'))
assert str(y) == str(NormalizedStr(strings[1], 'NFC'))
assert str(z) == str(NormalizedStr(strings[2], 'NFC'))
def test_add_3():
x = '\u0301'
y = 'a'
assert str(NormalizedStr(x)) + y == str(NormalizedStr(x+y))
def test_add_4():
x = '\u0301'
y = 'a'
assert str(NormalizedStr(x,'NFD')) + y == str(NormalizedStr(x+y,'NFD'))
def test_add_5():
x = '\u0302'
y = 'a'
assert str(NormalizedStr(x)) + y == str(NormalizedStr(x+y))
def test_add_6():
x = '\u0302'
y = 'a'
assert str(NormalizedStr(x,'NFD')) + y == str(NormalizedStr(x+y,'NFD'))
def test_iter_0():
x = NormalizedStr(strings[0], 'NFC')
assert list(x) == list(strings[0])
def test_iter_1():
x = NormalizedStr(strings[0], 'NFD')
assert list(x) == list(strings[1])
def test_iter_2():
x = NormalizedStr(strings[0], 'NFC')
assert len(list(x)) == 12
def test_iter_3():
x = NormalizedStr(strings[2], 'NFC')
assert len(list(x)) == 12
# --- tests/samples/project/components/inherited_flatness.py (machinable-org/machinable, MIT) ---
from machinable import Component
class InheritedFlatness(Component):
    pass
# --- src/blog/tasks.py (cbsBiram/xarala__ssr, FSFAP) ---
from celery import task
from send_mail.views import send_author_submitted_email
@task
def author_submitted(email, post_title):
    mail_sent = send_author_submitted_email(email, post_title)
    return mail_sent
# --- chainerlp/ada_loss/transforms_test.py (kumasento/gradient-scaling, MIT) ---
""" Test the base AdaLoss class. """
import unittest
import chainer
import chainer.functions as F
import chainer.links as L
import cupy as cp
import numpy as np
from chainer import testing
from chainer.testing import attr
from chainerlp.ada_loss.transforms import *
from chainerlp.links import *
from chainerlp.links.models.resnet import BasicBlock
from ada_loss.chainer_impl.ada_loss_scaled import AdaLossScaled
from ada_loss.chainer_impl.ada_loss_transforms import *
class TransformsTest(unittest.TestCase):
    """ Test ChainerLP custom transforms """

    def test_transform_basic_block(self):
        """ """
        link = BasicBlock(3, 4, stride=2, residual_conv=True)
        tran = AdaLossTransformBasicBlock()
        link_ = tran(link)
        self.assertIsInstance(link_, AdaLossBasicBlock)
        # run inference
        x = chainer.Variable(np.random.normal(size=(1, 3, 32, 32)).astype("float32"))
        y1 = link(x)
        y2 = link_(x)
        self.assertTrue(np.allclose(y1.array, y2.array))

    def test_transform_conv2d_bn_activ(self):
        """ """
        link = Conv2DBNActiv(3, 4, ksize=3, stride=1, pad=1)
        tran = AdaLossTransformConv2DBNActiv()
        link_ = tran(link)
        self.assertIsInstance(link_, AdaLossConv2DBNActiv)
        # run inference
        x = chainer.Variable(np.random.normal(size=(1, 3, 32, 32)).astype("float32"))
        y1 = link(x)
        y2 = link_(x)
        self.assertTrue(np.allclose(y1.array, y2.array))

    @attr.gpu
    def test_transform_resnet20(self):
        """ """
        cp.random.seed(0)
        cp.cuda.Device(0).use()
        with chainer.using_config("dtype", "float16"):
            cfg = {
                "loss_scale_method": "fixed",
                "fixed_loss_scale": 1.0,
            }
            net1 = resnet20(n_class=10)
            net1.to_device(0)
            x_data = cp.random.normal(size=(1, 3, 32, 32)).astype("float16")
            x = chainer.Variable(x_data)
            y1 = net1(x)
            net1_params = list(net1.namedparams())
            net2 = AdaLossScaled(
                net1,
                init_scale=1.0,
                transforms=[
                    AdaLossTransformLinear(),
                    AdaLossTransformBasicBlock(),
                    AdaLossTransformConv2DBNActiv(),
                ],
                cfg=cfg,
                verbose=True,
            )
            net2.to_device(0)
            y2 = net2(x)
            net2_params = list(net2.namedparams())
            self.assertEqual(len(net1_params), len(net2_params))
            for i, p in enumerate(net1_params):
                self.assertTrue(cp.allclose(p[1].array, net2_params[i][1].array))
            self.assertTrue(cp.allclose(y1.array, y2.array))
            # Should not raise error
            y_data = cp.random.normal(size=(1, 10)).astype("float16")
            y2.grad = y_data
            y2.backward()

    @attr.gpu
    def test_transform_resnet18(self):
        """ """
        cp.random.seed(0)
        cp.cuda.Device(0).use()
        with chainer.using_config("dtype", "float16"):
            cfg = {
                "loss_scale_method": "fixed",
                "fixed_loss_scale": 1.0,
            }
            net1 = resnet18(n_class=10)
            net1.to_device(0)
            x_data = cp.random.normal(size=(2, 3, 224, 224)).astype("float16")
            x = chainer.Variable(x_data)
            y1 = net1(x)
            net1_params = list(net1.namedparams())
            net2 = AdaLossScaled(
                net1,
                init_scale=1.0,
                transforms=[
                    AdaLossTransformLinear(),
                    AdaLossTransformBasicBlock(),
                    AdaLossTransformConv2DBNActiv(),
                ],
                cfg=cfg,
                verbose=True,
            )
            net2.to_device(0)
            y2 = net2(x)
            net2_params = list(net2.namedparams())
            self.assertEqual(len(net1_params), len(net2_params))
            for i, p in enumerate(net1_params):
                self.assertTrue(cp.allclose(p[1].array, net2_params[i][1].array))
            self.assertTrue(cp.allclose(y1.array, y2.array))
            # Should not raise error
            y_data = cp.random.normal(size=(2, 10)).astype("float16")
            y2.grad = y_data
            y2.backward()

    @attr.gpu
    @attr.slow
    def test_transform_resnet50(self):
        """ """
        cp.random.seed(0)
        cp.cuda.Device(0).use()
        with chainer.using_config("dtype", "float16"):
            cfg = {
                "loss_scale_method": "fixed",
                "fixed_loss_scale": 1.0,
            }
            net1 = resnet50(n_class=10)
            net1.to_device(0)
            x_data = cp.random.normal(size=(2, 3, 224, 224)).astype("float16")
            x = chainer.Variable(x_data)
            y1 = net1(x)
            net1_params = list(net1.namedparams())
            net2 = AdaLossScaled(
                net1,
                init_scale=1.0,
                transforms=[
                    AdaLossTransformLinear(),
                    AdaLossTransformBottleneck(),
                    AdaLossTransformConv2DBNActiv(),
                ],
                cfg=cfg,
                verbose=True,
            )
            net2.to_device(0)
            y2 = net2(x)
            net2_params = list(net2.namedparams())
            self.assertEqual(len(net1_params), len(net2_params))
            for i, p in enumerate(net1_params):
                self.assertTrue(cp.allclose(p[1].array, net2_params[i][1].array))
            self.assertTrue(cp.allclose(y1.array, y2.array))
            # Should not raise error
            y_data = cp.random.normal(size=(2, 10)).astype("float16")
            y2.grad = y_data
            y2.backward()
testing.run_module(__name__, __file__)
| 30.770833 | 85 | 0.538084 | 639 | 5,908 | 4.826291 | 0.194053 | 0.023346 | 0.041505 | 0.035019 | 0.757458 | 0.744812 | 0.721141 | 0.702335 | 0.700389 | 0.700389 | 0 | 0.051514 | 0.346141 | 5,908 | 191 | 86 | 30.931937 | 0.746829 | 0.027251 | 0 | 0.725352 | 0 | 0 | 0.036172 | 0 | 0 | 0 | 0 | 0 | 0.091549 | 1 | 0.035211 | false | 0 | 0.091549 | 0 | 0.133803 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f23fb50349b8639c26e371438f318f3a9d72d3ce | 72,154 | py | Python | inceptionnets.py | lalonderodney/INN-Inflated-Neural-Nets | 50ce42e4584815d066d0fd39a7f12f55130910e5 | [
"Apache-2.0"
] | 5 | 2019-07-03T01:08:14.000Z | 2020-02-29T21:27:06.000Z | inceptionnets.py | lalonderodney/INN-Inflated-Neural-Nets | 50ce42e4584815d066d0fd39a7f12f55130910e5 | [
"Apache-2.0"
] | 8 | 2020-02-26T20:27:52.000Z | 2022-03-12T00:02:34.000Z | inceptionnets.py | lalonderodney/INN-Inflated-Neural-Nets | 50ce42e4584815d066d0fd39a7f12f55130910e5 | [
"Apache-2.0"
] | 1 | 2019-10-13T10:48:39.000Z | 2019-10-13T10:48:39.000Z | """Inception-v1 Inflated 3D ConvNet used for Kinetics CVPR paper.
The model is introduced in:
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
Joao Carreira, Andrew Zisserman
https://arxiv.org/pdf/1705.07750v1.pdf.
"""
from __future__ import absolute_import, print_function, division
import warnings
from keras import backend as K
from keras import layers, Model
from keras.utils import get_source_inputs, get_file
import tensorflow as tf
import numpy as np
from tqdm import tqdm
from os.path import exists
def _obtain_input_shape(input_shape,
                        default_size,
                        min_size,
                        data_format,
                        require_flatten,
                        weights=None):
    """Internal utility to compute/validate a model's input shape.
    # Arguments
        input_shape: Either None (will return the default network input shape),
            or a user-provided shape to be validated.
        default_size: Default input width/height for the model.
        min_size: Minimum input width/height accepted by the model.
        data_format: Image data format to use.
        require_flatten: Whether the model is expected to
            be linked to a classifier via a Flatten layer.
        weights: One of `None` (random initialization)
            or 'imagenet' (pre-training on ImageNet).
            If weights='imagenet' input channels must be equal to 3.
    # Returns
        An integer shape tuple (may include None entries).
    # Raises
        ValueError: In case of invalid argument values.
    """
    if weights != 'imagenet' and input_shape and len(input_shape) == 3:
        if data_format == 'channels_first':
            default_shape = (input_shape[0], default_size, default_size)
        else:
            default_shape = (default_size, default_size, input_shape[-1])
    else:
        if data_format == 'channels_first':
            default_shape = (3, default_size, default_size)
        else:
            default_shape = (default_size, default_size, 3)
    if weights == 'imagenet' and require_flatten:
        if input_shape is not None:
            if input_shape != default_shape:
                raise ValueError('When setting`include_top=True` '
                                 'and loading `imagenet` weights, '
                                 '`input_shape` should be ' +
                                 str(default_shape) + '.')
        return default_shape
    if input_shape:
        if data_format == 'channels_first':
            if input_shape is not None:
                if len(input_shape) != 3:
                    raise ValueError(
                        '`input_shape` must be a tuple of three integers.')
                if ((input_shape[1] is not None and input_shape[1] < min_size) or
                        (input_shape[2] is not None and input_shape[2] < min_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_size) + 'x' + str(min_size) +
                                     '; got `input_shape=' +
                                     str(input_shape) + '`')
        else:
            if input_shape is not None:
                if len(input_shape) != 3:
                    raise ValueError(
                        '`input_shape` must be a tuple of three integers.')
                if ((input_shape[0] is not None and input_shape[0] < min_size) or
                        (input_shape[1] is not None and input_shape[1] < min_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_size) + 'x' + str(min_size) +
                                     '; got `input_shape=' +
                                     str(input_shape) + '`')
    else:
        if require_flatten:
            input_shape = default_shape
        else:
            if data_format == 'channels_first':
                input_shape = (3, None, None)
            else:
                input_shape = (None, None, 3)
    if require_flatten:
        if None in input_shape:
            raise ValueError('If `include_top` is True, '
                             'you should specify a static `input_shape`. '
                             'Got `input_shape=' + str(input_shape) + '`')
    return input_shape
"""Inception V3 model for Keras.
Note that the input image format for this model is different than for
the VGG16 and ResNet models (299x299 instead of 224x224),
and that the input preprocessing function is also different (same as Xception).
# Reference
- [Rethinking the Inception Architecture for Computer Vision](
http://arxiv.org/abs/1512.00567)
"""
def conv2d_bn_v3(x,
                 filters,
                 num_row,
                 num_col,
                 padding='same',
                 strides=(1, 1),
                 name=None):
    """Utility function to apply conv + BN.
    # Arguments
        x: input tensor.
        filters: filters in `Conv2D`.
        num_row: height of the convolution kernel.
        num_col: width of the convolution kernel.
        padding: padding mode in `Conv2D`.
        strides: strides in `Conv2D`.
        name: name of the ops; will become `name + '_conv'`
            for the convolution and `name + '_bn'` for the
            batch norm layer.
    # Returns
        Output tensor after applying `Conv2D` and `BatchNormalization`.
    """
    if name is not None:
        bn_name = name + '_bn'
        conv_name = name + '_conv'
    else:
        bn_name = None
        conv_name = None
    if K.image_data_format() == 'channels_first':
        bn_axis = 1
    else:
        bn_axis = 3
    x = layers.Conv2D(
        filters, (num_row, num_col),
        strides=strides,
        padding=padding,
        kernel_initializer='he_normal',
        use_bias=False,
        name=conv_name)(x)
    x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
    x = layers.Activation('relu', name=name)(x)
    return x
def InceptionV3(include_top=True,
                weights='imagenet',
                input_tensor=None,
                input_shape=None,
                pooling=None,
                classes=1000,
                disentangled=True,
                **kwargs):
    """Instantiates the Inception v3 architecture.
    Optionally loads weights pre-trained on ImageNet.
    Note that the data format convention used by the model is
    the one specified in your Keras config at `~/.keras/keras.json`.
    # Arguments
        include_top: whether to include the fully-connected
            layer at the top of the network.
        weights: one of `None` (random initialization),
            'imagenet' (pre-training on ImageNet),
            or the path to the weights file to be loaded.
        input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
            to use as image input for the model.
        input_shape: optional shape tuple, only to be specified
            if `include_top` is False (otherwise the input shape
            has to be `(299, 299, 3)` (with `channels_last` data format)
            or `(3, 299, 299)` (with `channels_first` data format).
            It should have exactly 3 inputs channels,
            and width and height should be no smaller than 75.
            E.g. `(150, 150, 3)` would be one valid value.
        pooling: Optional pooling mode for feature extraction
            when `include_top` is `False`.
            - `None` means that the output of the model will be
                the 4D tensor output of the
                last convolutional block.
            - `avg` means that global average pooling
                will be applied to the output of the
                last convolutional block, and thus
                the output of the model will be a 2D tensor.
            - `max` means that global max pooling will
                be applied.
        classes: optional number of classes to classify images
            into, only to be specified if `include_top` is True, and
            if no `weights` argument is specified.
    # Returns
        A Keras model instance.
    # Raises
        ValueError: in case of invalid argument for `weights`,
            or invalid input shape.
    """
    if weights == 'imagenet' and include_top and classes != 1000:
        raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
                         ' as true, `classes` should be 1000')
    # Determine proper input shape
    input_shape = _obtain_input_shape(
        input_shape,
        default_size=299,
        min_size=75,
        data_format=K.image_data_format(),
        require_flatten=include_top,
        weights=weights)
    if input_tensor is None:
        img_input = layers.Input(shape=input_shape)
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = layers.Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor
    if K.image_data_format() == 'channels_first':
        channel_axis = 1
    else:
        channel_axis = 3
    num_mods = int(input_shape[channel_axis - 1] / 3)
    if input_tensor is None and disentangled:
        mod_list = list()
        for mod_ind in range(num_mods):
            if channel_axis == 3:
                mod_input = layers.Lambda(lambda x: x[:, :, :, 3*mod_ind:3*mod_ind+3],
                                          output_shape=input_shape[0:2] + [3],
                                          name='mod_input_{}'.format(mod_ind))(img_input)
            else:
                mod_input = layers.Lambda(lambda x: x[:, 3*mod_ind:3*mod_ind+3, :, :],
                                          output_shape=[3] + input_shape[1:3],
                                          name='mod_input_{}'.format(mod_ind))(img_input)
            mod_list.append(conv2d_bn_v3(mod_input, 32, 3, 3, strides=(2, 2), padding='valid',
                                         name='mod_{}'.format(mod_ind)))
        x = layers.Concatenate(name='conv_2d_1')(mod_list)
    else:
        x = conv2d_bn_v3(img_input, 32, 3, 3, strides=(2, 2), padding='valid')
    x = conv2d_bn_v3(x, 32, 3, 3, padding='valid')
    x = conv2d_bn_v3(x, 64, 3, 3)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
    x = conv2d_bn_v3(x, 80, 1, 1, padding='valid')
    x = conv2d_bn_v3(x, 192, 3, 3, padding='valid')
    x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)

    # mixed 0: 35 x 35 x 256
    branch1x1 = conv2d_bn_v3(x, 64, 1, 1)
    branch5x5 = conv2d_bn_v3(x, 48, 1, 1)
    branch5x5 = conv2d_bn_v3(branch5x5, 64, 5, 5)
    branch3x3dbl = conv2d_bn_v3(x, 64, 1, 1)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch_pool = layers.AveragePooling2D((3, 3),
                                          strides=(1, 1),
                                          padding='same')(x)
    branch_pool = conv2d_bn_v3(branch_pool, 32, 1, 1)
    x = layers.concatenate(
        [branch1x1, branch5x5, branch3x3dbl, branch_pool],
        axis=channel_axis,
        name='mixed0')

    # mixed 1: 35 x 35 x 288
    branch1x1 = conv2d_bn_v3(x, 64, 1, 1)
    branch5x5 = conv2d_bn_v3(x, 48, 1, 1)
    branch5x5 = conv2d_bn_v3(branch5x5, 64, 5, 5)
    branch3x3dbl = conv2d_bn_v3(x, 64, 1, 1)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch_pool = layers.AveragePooling2D((3, 3),
                                          strides=(1, 1),
                                          padding='same')(x)
    branch_pool = conv2d_bn_v3(branch_pool, 64, 1, 1)
    x = layers.concatenate(
        [branch1x1, branch5x5, branch3x3dbl, branch_pool],
        axis=channel_axis,
        name='mixed1')

    # mixed 2: 35 x 35 x 288
    branch1x1 = conv2d_bn_v3(x, 64, 1, 1)
    branch5x5 = conv2d_bn_v3(x, 48, 1, 1)
    branch5x5 = conv2d_bn_v3(branch5x5, 64, 5, 5)
    branch3x3dbl = conv2d_bn_v3(x, 64, 1, 1)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch_pool = layers.AveragePooling2D((3, 3),
                                          strides=(1, 1),
                                          padding='same')(x)
    branch_pool = conv2d_bn_v3(branch_pool, 64, 1, 1)
    x = layers.concatenate(
        [branch1x1, branch5x5, branch3x3dbl, branch_pool],
        axis=channel_axis,
        name='mixed2')

    # mixed 3: 17 x 17 x 768
    branch3x3 = conv2d_bn_v3(x, 384, 3, 3, strides=(2, 2), padding='valid')
    branch3x3dbl = conv2d_bn_v3(x, 64, 1, 1)
    branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 96, 3, 3)
    branch3x3dbl = conv2d_bn_v3(
        branch3x3dbl, 96, 3, 3, strides=(2, 2), padding='valid')
    branch_pool = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
    x = layers.concatenate(
        [branch3x3, branch3x3dbl, branch_pool],
        axis=channel_axis,
        name='mixed3')

    # mixed 4: 17 x 17 x 768
    branch1x1 = conv2d_bn_v3(x, 192, 1, 1)
    branch7x7 = conv2d_bn_v3(x, 128, 1, 1)
    branch7x7 = conv2d_bn_v3(branch7x7, 128, 1, 7)
    branch7x7 = conv2d_bn_v3(branch7x7, 192, 7, 1)
    branch7x7dbl = conv2d_bn_v3(x, 128, 1, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 128, 7, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 128, 1, 7)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 128, 7, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 1, 7)
    branch_pool = layers.AveragePooling2D((3, 3),
                                          strides=(1, 1),
                                          padding='same')(x)
    branch_pool = conv2d_bn_v3(branch_pool, 192, 1, 1)
    x = layers.concatenate(
        [branch1x1, branch7x7, branch7x7dbl, branch_pool],
        axis=channel_axis,
        name='mixed4')

    # mixed 5, 6: 17 x 17 x 768
    for i in range(2):
        branch1x1 = conv2d_bn_v3(x, 192, 1, 1)
        branch7x7 = conv2d_bn_v3(x, 160, 1, 1)
        branch7x7 = conv2d_bn_v3(branch7x7, 160, 1, 7)
        branch7x7 = conv2d_bn_v3(branch7x7, 192, 7, 1)
        branch7x7dbl = conv2d_bn_v3(x, 160, 1, 1)
        branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 160, 7, 1)
        branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 160, 1, 7)
        branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 160, 7, 1)
        branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 1, 7)
        branch_pool = layers.AveragePooling2D(
            (3, 3), strides=(1, 1), padding='same')(x)
        branch_pool = conv2d_bn_v3(branch_pool, 192, 1, 1)
        x = layers.concatenate(
            [branch1x1, branch7x7, branch7x7dbl, branch_pool],
            axis=channel_axis,
            name='mixed' + str(5 + i))

    # mixed 7: 17 x 17 x 768
    branch1x1 = conv2d_bn_v3(x, 192, 1, 1)
    branch7x7 = conv2d_bn_v3(x, 192, 1, 1)
    branch7x7 = conv2d_bn_v3(branch7x7, 192, 1, 7)
    branch7x7 = conv2d_bn_v3(branch7x7, 192, 7, 1)
    branch7x7dbl = conv2d_bn_v3(x, 192, 1, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 7, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 1, 7)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 7, 1)
    branch7x7dbl = conv2d_bn_v3(branch7x7dbl, 192, 1, 7)
    branch_pool = layers.AveragePooling2D((3, 3),
                                          strides=(1, 1),
                                          padding='same')(x)
    branch_pool = conv2d_bn_v3(branch_pool, 192, 1, 1)
    x = layers.concatenate(
        [branch1x1, branch7x7, branch7x7dbl, branch_pool],
        axis=channel_axis,
        name='mixed7')

    # mixed 8: 8 x 8 x 1280
    branch3x3 = conv2d_bn_v3(x, 192, 1, 1)
    branch3x3 = conv2d_bn_v3(branch3x3, 320, 3, 3,
                             strides=(2, 2), padding='valid')
    branch7x7x3 = conv2d_bn_v3(x, 192, 1, 1)
    branch7x7x3 = conv2d_bn_v3(branch7x7x3, 192, 1, 7)
    branch7x7x3 = conv2d_bn_v3(branch7x7x3, 192, 7, 1)
    branch7x7x3 = conv2d_bn_v3(
        branch7x7x3, 192, 3, 3, strides=(2, 2), padding='valid')
    branch_pool = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
    x = layers.concatenate(
        [branch3x3, branch7x7x3, branch_pool],
        axis=channel_axis,
        name='mixed8')

    # mixed 9: 8 x 8 x 2048
    for i in range(2):
        branch1x1 = conv2d_bn_v3(x, 320, 1, 1)
        branch3x3 = conv2d_bn_v3(x, 384, 1, 1)
        branch3x3_1 = conv2d_bn_v3(branch3x3, 384, 1, 3)
        branch3x3_2 = conv2d_bn_v3(branch3x3, 384, 3, 1)
        branch3x3 = layers.concatenate(
            [branch3x3_1, branch3x3_2],
            axis=channel_axis,
            name='mixed9_' + str(i))
        branch3x3dbl = conv2d_bn_v3(x, 448, 1, 1)
        branch3x3dbl = conv2d_bn_v3(branch3x3dbl, 384, 3, 3)
        branch3x3dbl_1 = conv2d_bn_v3(branch3x3dbl, 384, 1, 3)
        branch3x3dbl_2 = conv2d_bn_v3(branch3x3dbl, 384, 3, 1)
        branch3x3dbl = layers.concatenate(
            [branch3x3dbl_1, branch3x3dbl_2], axis=channel_axis)
        branch_pool = layers.AveragePooling2D(
            (3, 3), strides=(1, 1), padding='same')(x)
        branch_pool = conv2d_bn_v3(branch_pool, 192, 1, 1)
        x = layers.concatenate(
            [branch1x1, branch3x3, branch3x3dbl, branch_pool],
            axis=channel_axis,
            name='mixed' + str(9 + i))

    if include_top:
        # Classification block
        x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
        x = layers.Dense(classes, activation='softmax', name='predictions')(x)
    else:
        if pooling == 'avg':
            x = layers.GlobalAveragePooling2D()(x)
        elif pooling == 'max':
            x = layers.GlobalMaxPooling2D()(x)

    # Ensure that the model takes into account
    # any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
    else:
        inputs = img_input
    # Create model.
    model = Model(inputs, x, name='inception_v3')
    return model
"""Inception V3 model for Keras.
Note that the input image format for this model is different than for
the VGG16 and ResNet models (299x299 instead of 224x224),
and that the input preprocessing function is also different (same as Xception).
# Reference
- [Rethinking the Inception Architecture for Computer Vision](
http://arxiv.org/abs/1512.00567)
"""
def _obtain_input_shape_3d(input_shape,
                           default_slice_size,
                           min_slice_size,
                           default_num_slices,
                           min_num_slices,
                           data_format,
                           require_flatten,
                           weights=None):
    """Internal utility to compute/validate the model's input shape.
    (Adapted from `keras/applications/imagenet_utils.py`)
    # Arguments
        input_shape: either None (will return the default network input shape),
            or a user-provided shape to be validated.
        default_slice_size: default input slices(images) width/height for the model.
        min_slice_size: minimum input slices(images) width/height accepted by the model.
        default_num_slices: default input number of slices(images) for the model.
        min_num_slices: minimum input number of slices accepted by the model.
        data_format: image data format to use.
        require_flatten: whether the model is expected to
            be linked to a classifier via a Flatten layer.
        weights: one of `None` (random initialization)
            or 'kinetics_only' (pre-training on Kinetics dataset).
            or 'imagenet_and_kinetics' (pre-training on ImageNet and Kinetics datasets).
            If weights='kinetics_only' or weights=='imagenet_and_kinetics' then
            input channels must be equal to 3.
    # Returns
        An integer shape tuple (may include None entries).
    # Raises
        ValueError: in case of invalid argument values.
    """
    if weights != 'kinetics_only' and weights != 'imagenet_and_kinetics' and input_shape and len(input_shape) == 4:
        if data_format == 'channels_first':
            default_shape = (input_shape[2], default_slice_size, default_slice_size, default_num_slices)
        else:
            default_shape = (default_slice_size, default_slice_size, default_num_slices, input_shape[-1])
    else:
        if data_format == 'channels_first':
            default_shape = (3, default_slice_size, default_slice_size, default_num_slices)
        else:
            default_shape = (default_slice_size, default_slice_size, default_num_slices, 3)
    if (weights == 'kinetics_only' or weights == 'imagenet_and_kinetics') and require_flatten:
        if input_shape is not None:
            if input_shape != default_shape:
                raise ValueError('When setting`include_top=True` '
                                 'and loading `imagenet` weights, '
                                 '`input_shape` should be ' +
                                 str(default_shape) + '.')
        return default_shape
    if input_shape:
        if data_format == 'channels_first':
            if input_shape is not None:
                if len(input_shape) != 4:
                    raise ValueError(
                        '`input_shape` must be a tuple of four integers.')
                if input_shape[3] is not None and input_shape[3] < min_num_slices:
                    raise ValueError('Input number of slices must be at least ' +
                                     str(min_num_slices) + '; got '
                                     '`input_shape=' + str(input_shape) + '`')
                if ((input_shape[1] is not None and input_shape[1] < min_slice_size) or
                        (input_shape[2] is not None and input_shape[2] < min_slice_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_slice_size) + 'x' + str(min_slice_size) + '; got '
                                     '`input_shape=' + str(
                                         input_shape) + '`')
        else:
            if input_shape is not None:
                if len(input_shape) != 4:
                    raise ValueError(
                        '`input_shape` must be a tuple of four integers.')
                if input_shape[2] is not None and input_shape[2] < min_num_slices:
                    raise ValueError('Input number of slices must be at least ' +
                                     str(min_num_slices) + '; got '
                                     '`input_shape=' + str(input_shape) + '`')
                if ((input_shape[0] is not None and input_shape[0] < min_slice_size) or
                        (input_shape[1] is not None and input_shape[1] < min_slice_size)):
                    raise ValueError('Input size must be at least ' +
                                     str(min_slice_size) + 'x' + str(min_slice_size) + '; got '
                                     '`input_shape=' + str(
                                         input_shape) + '`')
    else:
        if require_flatten:
            input_shape = default_shape
        else:
            if data_format == 'channels_first':
                input_shape = [3, None, None, None]
            else:
                input_shape = [None, None, None, 3]
    if require_flatten:
        if None in input_shape:
            raise ValueError('If `include_top` is True, '
                             'you should specify a static `input_shape`. '
                             'Got `input_shape=' + str(input_shape) + '`')
    return input_shape
def conv3d_bn_v3(x,
filters,
num_row,
num_col,
num_dep,
padding='same',
strides=(1, 1, 1),
name=None):
"""Utility function to apply conv + BN.
# Arguments
x: input tensor.
filters: filters in `Conv3D`.
num_row: height of the convolution kernel.
num_col: width of the convolution kernel.
num_dep: depth of the convolution kernel.
padding: padding mode in `Conv3D`.
strides: strides in `Conv3D`.
name: name of the ops; will become `name + '_conv'`
for the convolution and `name + '_bn'` for the
batch norm layer.
# Returns
Output tensor after applying `Conv3D` and `BatchNormalization`.
"""
if name is not None:
bn_name = name + '_bn'
conv_name = name + '_conv'
else:
bn_name = None
conv_name = None
if K.image_data_format() == 'channels_first':
bn_axis = 1
else:
bn_axis = 4
x = layers.Conv3D(
filters, (num_row, num_col, num_dep),
strides=strides,
padding=padding,
use_bias=False,
name=conv_name)(x)
x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
x = layers.Activation('relu', name=name)(x)
return x
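conv3d_bn_v3 derives all three layer names from a single `name` argument. A minimal, Keras-free sketch of that naming convention (the helper name is illustrative, not part of this file):

```python
def derive_names(name):
    """Mirror conv3d_bn_v3's naming scheme: '<name>_conv' for the
    convolution, '<name>_bn' for batch norm, and the bare name for
    the final activation. None propagates to all three."""
    if name is None:
        return None, None, None
    return name + '_conv', name + '_bn', name

conv_name, bn_name, act_name = derive_names('mod_0')
```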
def Inflated_Inceptionv3(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
disentangled=True,
**kwargs):
"""Instantiates the Inflated 3D Inception v3 architecture.
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
# Arguments
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 5, 3)` (with `channels_last` data format)
or `(3, 224, 224, 5)` (with `channels_first` data format).
It should have exactly 3 input channels per modality,
and width and height should be no smaller than 32.
E.g. `(150, 150, 5, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
# Returns
A Keras model instance.
# Raises
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
"""
if not (weights in {'imagenet', None} or exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = _obtain_input_shape_3d(
input_shape,
default_slice_size=224,
min_slice_size=32,
default_num_slices=5,
min_num_slices=1,
data_format=K.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
if K.image_data_format() == 'channels_first':
channel_axis = 1
else:
channel_axis = 4
num_mods = int(input_shape[channel_axis-1]/3)
if input_shape[2] is None or input_shape[2] >= 3:
first_conv_kernel_depth = 3
else:
first_conv_kernel_depth = input_shape[2]
if input_tensor is None and disentangled:
mod_list = list()
for mod_ind in range(num_mods):
if channel_axis == 4:
mod_input = layers.Lambda(lambda x: x[:, :, :, :, 3*mod_ind:3*mod_ind+3],
output_shape=input_shape[0:3] + [3],
name='mod_input_{}'.format(mod_ind))(img_input)
else:
mod_input = layers.Lambda(lambda x: x[:, 3*mod_ind:3*mod_ind+3, :, :, :],
output_shape=[3] + input_shape[1:4],
name='mod_input_{}'.format(mod_ind))(img_input)
mod_list.append(conv3d_bn_v3(mod_input, 32, 3, 3, first_conv_kernel_depth, strides=(2, 2, 1),
name='mod_{}'.format(mod_ind)))
x = layers.Concatenate(name='conv_2d_1')(mod_list)
else:
x = conv3d_bn_v3(img_input, 32, 3, 3, first_conv_kernel_depth, strides=(2, 2, 1))
x = conv3d_bn_v3(x, 32, 3, 3, 3)
x = conv3d_bn_v3(x, 64, 3, 3, 3)
x = layers.MaxPooling3D((3, 3, 3), strides=(2, 2, 1), padding='same')(x)
x = conv3d_bn_v3(x, 80, 1, 1, 1)
x = conv3d_bn_v3(x, 192, 3, 3, 3)
x = layers.MaxPooling3D((3, 3, 3), strides=(2, 2, 1), padding='same')(x)
# mixed 0: 35 x 35 x 256
branch1x1 = conv3d_bn_v3(x, 64, 1, 1, 1)
branch5x5 = conv3d_bn_v3(x, 48, 1, 1, 1)
branch5x5 = conv3d_bn_v3(branch5x5, 64, 5, 5, 5)
branch3x3dbl = conv3d_bn_v3(x, 64, 1, 1, 1)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch_pool = layers.AveragePooling3D((3, 3, 3),
strides=(1, 1, 1),
padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 32, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed0')
# mixed 1: 35 x 35 x 288
branch1x1 = conv3d_bn_v3(x, 64, 1, 1, 1)
branch5x5 = conv3d_bn_v3(x, 48, 1, 1, 1)
branch5x5 = conv3d_bn_v3(branch5x5, 64, 5, 5, 5)
branch3x3dbl = conv3d_bn_v3(x, 64, 1, 1, 1)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch_pool = layers.AveragePooling3D((3, 3, 3),
strides=(1, 1, 1),
padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 64, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed1')
# mixed 2: 35 x 35 x 288
branch1x1 = conv3d_bn_v3(x, 64, 1, 1, 1)
branch5x5 = conv3d_bn_v3(x, 48, 1, 1, 1)
branch5x5 = conv3d_bn_v3(branch5x5, 64, 5, 5, 5)
branch3x3dbl = conv3d_bn_v3(x, 64, 1, 1, 1)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch_pool = layers.AveragePooling3D((3, 3, 3),
strides=(1, 1, 1),
padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 64, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch5x5, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed2')
# mixed 3: 17 x 17 x 768
branch3x3 = conv3d_bn_v3(x, 384, 3, 3, 3, strides=(2, 2, 1))
branch3x3dbl = conv3d_bn_v3(x, 64, 1, 1, 1)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 96, 3, 3, 3)
branch3x3dbl = conv3d_bn_v3(
branch3x3dbl, 96, 3, 3, 3, strides=(2, 2, 1))
branch_pool = layers.MaxPooling3D((3, 3, 3), strides=(2, 2, 1), padding='same')(x)
x = layers.concatenate(
[branch3x3, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed3')
# mixed 4: 17 x 17 x 768
branch1x1 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7 = conv3d_bn_v3(x, 128, 1, 1, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 128, 1, 7, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 192, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(x, 128, 1, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 128, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 128, 1, 7, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 128, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 1, 7, 1)
branch_pool = layers.AveragePooling3D((3, 3, 3),
strides=(1, 1, 1),
padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 192, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed4')
# mixed 5, 6: 17 x 17 x 768
for i in range(2):
branch1x1 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7 = conv3d_bn_v3(x, 160, 1, 1, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 160, 1, 7, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 192, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(x, 160, 1, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 160, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 160, 1, 7, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 160, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 1, 7, 1)
branch_pool = layers.AveragePooling3D(
(3, 3, 3), strides=(1, 1, 1), padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 192, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed' + str(5 + i))
# mixed 7: 17 x 17 x 768
branch1x1 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 192, 1, 7, 1)
branch7x7 = conv3d_bn_v3(branch7x7, 192, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 1, 7, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 7, 1, 1)
branch7x7dbl = conv3d_bn_v3(branch7x7dbl, 192, 1, 7, 1)
branch_pool = layers.AveragePooling3D((3, 3, 3),
strides=(1, 1, 1),
padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 192, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch7x7, branch7x7dbl, branch_pool],
axis=channel_axis,
name='mixed7')
# mixed 8: 8 x 8 x 1280
branch3x3 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch3x3 = conv3d_bn_v3(branch3x3, 320, 3, 3, 3,
strides=(2, 2, 1))
branch7x7x3 = conv3d_bn_v3(x, 192, 1, 1, 1)
branch7x7x3 = conv3d_bn_v3(branch7x7x3, 192, 1, 7, 1)
branch7x7x3 = conv3d_bn_v3(branch7x7x3, 192, 7, 1, 1)
branch7x7x3 = conv3d_bn_v3(
branch7x7x3, 192, 3, 3, 3, strides=(2, 2, 1))
branch_pool = layers.MaxPooling3D((3, 3, 3), strides=(2, 2, 1), padding='same')(x)
x = layers.concatenate(
[branch3x3, branch7x7x3, branch_pool],
axis=channel_axis,
name='mixed8')
# mixed 9: 8 x 8 x 2048
for i in range(2):
branch1x1 = conv3d_bn_v3(x, 320, 1, 1, 1)
branch3x3 = conv3d_bn_v3(x, 384, 1, 1, 1)
branch3x3_1 = conv3d_bn_v3(branch3x3, 384, 1, 3, 1)
branch3x3_2 = conv3d_bn_v3(branch3x3, 384, 3, 1, 1)
branch3x3 = layers.concatenate(
[branch3x3_1, branch3x3_2],
axis=channel_axis,
name='mixed9_' + str(i))
branch3x3dbl = conv3d_bn_v3(x, 448, 1, 1, 1)
branch3x3dbl = conv3d_bn_v3(branch3x3dbl, 384, 3, 3, 3)
branch3x3dbl_1 = conv3d_bn_v3(branch3x3dbl, 384, 1, 3, 1)
branch3x3dbl_2 = conv3d_bn_v3(branch3x3dbl, 384, 3, 1, 1)
branch3x3dbl = layers.concatenate(
[branch3x3dbl_1, branch3x3dbl_2], axis=channel_axis)
branch_pool = layers.AveragePooling3D(
(3, 3, 3), strides=(1, 1, 1), padding='same')(x)
branch_pool = conv3d_bn_v3(branch_pool, 192, 1, 1, 1)
x = layers.concatenate(
[branch1x1, branch3x3, branch3x3dbl, branch_pool],
axis=channel_axis,
name='mixed' + str(9 + i))
if include_top:
# Classification block
x = layers.GlobalAveragePooling3D(name='avg_pool')(x)
x = layers.Dense(classes, activation='softmax', name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling3D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling3D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = Model(inputs, x, name='inflated_inception_v3')
return model
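The `disentangled` path in the function above carves the channel axis into per-modality RGB triplets (`x[..., 3*i:3*i+3]`) before the first convolution. A NumPy sketch of that slicing for a `channels_last` volume (shapes are illustrative; the helper is not part of this file):

```python
import numpy as np

def split_modalities(x, channels_per_mod=3):
    """Split the last axis of an (H, W, D, C) volume into C // 3
    modality views, mirroring the Lambda slices used in the model."""
    num_mods = x.shape[-1] // channels_per_mod
    return [x[..., channels_per_mod * i:channels_per_mod * (i + 1)]
            for i in range(num_mods)]

vol = np.zeros((224, 224, 5, 6))  # two stacked RGB modalities
mods = split_modalities(vol)
```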
"""Inception-ResNet V2 model for Keras.
Model naming and structure follows TF-slim implementation
(which has some additional layers and different number of
filters from the original arXiv paper):
https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py
Pre-trained ImageNet weights are also converted from TF-slim,
which can be found in:
https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models
# Reference
- [Inception-v4, Inception-ResNet and the Impact of
Residual Connections on Learning](https://arxiv.org/abs/1602.07261)
"""
BASE_WEIGHT_URL = ('https://github.com/fchollet/deep-learning-models/'
'releases/download/v0.7/')
def conv2d_bn_v4(x,
filters,
kernel_size,
strides=1,
padding='same',
activation='relu',
use_bias=False,
name=None):
"""Utility function to apply conv + BN.
# Arguments
x: input tensor.
filters: filters in `Conv2D`.
kernel_size: kernel size as in `Conv2D`.
strides: strides in `Conv2D`.
padding: padding mode in `Conv2D`.
activation: activation in `Conv2D`.
use_bias: whether to use a bias in `Conv2D`.
name: name of the ops; will become `name + '_ac'` for the activation
and `name + '_bn'` for the batch norm layer.
# Returns
Output tensor after applying `Conv2D` and `BatchNormalization`.
"""
x = layers.Conv2D(filters,
kernel_size,
strides=strides,
padding=padding,
use_bias=use_bias,
name=name)(x)
if not use_bias:
bn_axis = 1 if K.image_data_format() == 'channels_first' else 3
bn_name = None if name is None else name + '_bn'
x = layers.BatchNormalization(axis=bn_axis,
scale=False,
name=bn_name)(x)
if activation is not None:
ac_name = None if name is None else name + '_ac'
x = layers.Activation(activation, name=ac_name)(x)
return x
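conv2d_bn_v4 picks the batch-norm axis from the data format: axis 1 for `channels_first`, the last axis otherwise. A one-line sketch of that rule (the function name is illustrative):

```python
def batchnorm_axis(data_format, ndim=4):
    """Channel axis for BatchNormalization: axis 1 for channels_first,
    the last axis (ndim - 1) for channels_last. The 3D helpers in this
    file use the same rule with ndim=5, giving axis 4."""
    return 1 if data_format == 'channels_first' else ndim - 1
```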
def inception_resnet_block(x, scale, block_type, block_idx, activation='relu'):
"""Adds an Inception-ResNet block.
This function builds 3 types of Inception-ResNet blocks mentioned
in the paper, controlled by the `block_type` argument (which is the
block name used in the official TF-slim implementation):
- Inception-ResNet-A: `block_type='block35'`
- Inception-ResNet-B: `block_type='block17'`
- Inception-ResNet-C: `block_type='block8'`
# Arguments
x: input tensor.
scale: scaling factor to scale the residuals (i.e., the output of
passing `x` through an inception module) before adding them
to the shortcut branch.
Let `r` be the output from the residual branch,
the output of this block will be `x + scale * r`.
block_type: `'block35'`, `'block17'` or `'block8'`, determines
the network structure in the residual branch.
block_idx: an `int` used for generating layer names.
The Inception-ResNet blocks
are repeated many times in this network.
We use `block_idx` to identify
each of the repetitions. For example,
the first Inception-ResNet-A block
will have `block_type='block35', block_idx=0`,
and the layer names will have
a common prefix `'block35_0'`.
activation: activation function to use at the end of the block
(see [activations](../activations.md)).
When `activation=None`, no activation is applied
(i.e., "linear" activation: `a(x) = x`).
# Returns
Output tensor for the block.
# Raises
ValueError: if `block_type` is not one of `'block35'`,
`'block17'` or `'block8'`.
"""
if block_type == 'block35':
branch_0 = conv2d_bn_v4(x, 32, 1)
branch_1 = conv2d_bn_v4(x, 32, 1)
branch_1 = conv2d_bn_v4(branch_1, 32, 3)
branch_2 = conv2d_bn_v4(x, 32, 1)
branch_2 = conv2d_bn_v4(branch_2, 48, 3)
branch_2 = conv2d_bn_v4(branch_2, 64, 3)
branches = [branch_0, branch_1, branch_2]
elif block_type == 'block17':
branch_0 = conv2d_bn_v4(x, 192, 1)
branch_1 = conv2d_bn_v4(x, 128, 1)
branch_1 = conv2d_bn_v4(branch_1, 160, [1, 7])
branch_1 = conv2d_bn_v4(branch_1, 192, [7, 1])
branches = [branch_0, branch_1]
elif block_type == 'block8':
branch_0 = conv2d_bn_v4(x, 192, 1)
branch_1 = conv2d_bn_v4(x, 192, 1)
branch_1 = conv2d_bn_v4(branch_1, 224, [1, 3])
branch_1 = conv2d_bn_v4(branch_1, 256, [3, 1])
branches = [branch_0, branch_1]
else:
raise ValueError('Unknown Inception-ResNet block type. '
'Expects "block35", "block17" or "block8", '
'but got: ' + str(block_type))
block_name = block_type + '_' + str(block_idx)
channel_axis = 1 if K.image_data_format() == 'channels_first' else 3
mixed = layers.Concatenate(
axis=channel_axis, name=block_name + '_mixed')(branches)
up = conv2d_bn_v4(mixed,
K.int_shape(x)[channel_axis],
1,
activation=None,
use_bias=True,
name=block_name + '_conv')
x = layers.Lambda(lambda inputs, scale: inputs[0] + inputs[1] * scale,
output_shape=K.int_shape(x)[1:],
arguments={'scale': scale},
name=block_name)([x, up])
if activation is not None:
x = layers.Activation(activation, name=block_name + '_ac')(x)
return x
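The `Lambda` at the end of the block computes `x + scale * r`, scaling the residual branch before adding it to the shortcut. A NumPy sketch of that step (standalone, outside Keras):

```python
import numpy as np

def scaled_residual_add(x, residual, scale):
    """Combine shortcut and residual branches as inception_resnet_block
    does: output = x + scale * residual."""
    return x + scale * residual

shortcut = np.ones((8, 8, 320))
residual = np.full((8, 8, 320), 2.0)
out = scaled_residual_add(shortcut, residual, 0.17)
```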
def InceptionResNetV2(include_top=True,
weights='imagenet',
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs):
"""Instantiates the Inception-ResNet v2 architecture.
Optionally loads weights pre-trained on ImageNet.
Note that the data format convention used by the model is
the one specified in your Keras config at `~/.keras/keras.json`.
# Arguments
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is `False` (otherwise the input shape
has to be `(299, 299, 3)` (with `'channels_last'` data format)
or `(3, 299, 299)` (with `'channels_first'` data format).
It should have exactly 3 input channels,
and width and height should be no smaller than 75.
E.g. `(150, 150, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the last convolutional block.
- `'avg'` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `'max'` means that global max pooling will be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is `True`, and
if no `weights` argument is specified.
# Returns
A Keras `Model` instance.
# Raises
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
"""
if not (weights in {'imagenet', None} or exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization), `imagenet` '
'(pre-training on ImageNet), '
'or the path to the weights file to be loaded.')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as `"imagenet"` with `include_top`'
' as true, `classes` should be 1000')
# Determine proper input shape
input_shape = _obtain_input_shape(
input_shape,
default_size=299,
min_size=75,
data_format=K.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# Stem block: 35 x 35 x 192
x = conv2d_bn_v4(img_input, 32, 3, strides=2, padding='valid')
x = conv2d_bn_v4(x, 32, 3, padding='valid')
x = conv2d_bn_v4(x, 64, 3)
x = layers.MaxPooling2D(3, strides=2)(x)
x = conv2d_bn_v4(x, 80, 1, padding='valid')
x = conv2d_bn_v4(x, 192, 3, padding='valid')
x = layers.MaxPooling2D(3, strides=2)(x)
# Mixed 5b (Inception-A block): 35 x 35 x 320
branch_0 = conv2d_bn_v4(x, 96, 1)
branch_1 = conv2d_bn_v4(x, 48, 1)
branch_1 = conv2d_bn_v4(branch_1, 64, 5)
branch_2 = conv2d_bn_v4(x, 64, 1)
branch_2 = conv2d_bn_v4(branch_2, 96, 3)
branch_2 = conv2d_bn_v4(branch_2, 96, 3)
branch_pool = layers.AveragePooling2D(3, strides=1, padding='same')(x)
branch_pool = conv2d_bn_v4(branch_pool, 64, 1)
branches = [branch_0, branch_1, branch_2, branch_pool]
channel_axis = 1 if K.image_data_format() == 'channels_first' else 3
x = layers.Concatenate(axis=channel_axis, name='mixed_5b')(branches)
# 10x block35 (Inception-ResNet-A block): 35 x 35 x 320
for block_idx in range(1, 11):
x = inception_resnet_block(x,
scale=0.17,
block_type='block35',
block_idx=block_idx)
# Mixed 6a (Reduction-A block): 17 x 17 x 1088
branch_0 = conv2d_bn_v4(x, 384, 3, strides=2, padding='valid')
branch_1 = conv2d_bn_v4(x, 256, 1)
branch_1 = conv2d_bn_v4(branch_1, 256, 3)
branch_1 = conv2d_bn_v4(branch_1, 384, 3, strides=2, padding='valid')
branch_pool = layers.MaxPooling2D(3, strides=2, padding='valid')(x)
branches = [branch_0, branch_1, branch_pool]
x = layers.Concatenate(axis=channel_axis, name='mixed_6a')(branches)
# 20x block17 (Inception-ResNet-B block): 17 x 17 x 1088
for block_idx in range(1, 21):
x = inception_resnet_block(x,
scale=0.1,
block_type='block17',
block_idx=block_idx)
# Mixed 7a (Reduction-B block): 8 x 8 x 2080
branch_0 = conv2d_bn_v4(x, 256, 1)
branch_0 = conv2d_bn_v4(branch_0, 384, 3, strides=2, padding='valid')
branch_1 = conv2d_bn_v4(x, 256, 1)
branch_1 = conv2d_bn_v4(branch_1, 288, 3, strides=2, padding='valid')
branch_2 = conv2d_bn_v4(x, 256, 1)
branch_2 = conv2d_bn_v4(branch_2, 288, 3)
branch_2 = conv2d_bn_v4(branch_2, 320, 3, strides=2, padding='valid')
branch_pool = layers.MaxPooling2D(3, strides=2, padding='valid')(x)
branches = [branch_0, branch_1, branch_2, branch_pool]
x = layers.Concatenate(axis=channel_axis, name='mixed_7a')(branches)
# 10x block8 (Inception-ResNet-C block): 8 x 8 x 2080
for block_idx in range(1, 10):
x = inception_resnet_block(x,
scale=0.2,
block_type='block8',
block_idx=block_idx)
x = inception_resnet_block(x,
scale=1.,
activation=None,
block_type='block8',
block_idx=10)
# Final convolution block: 8 x 8 x 1536
x = conv2d_bn_v4(x, 1536, 1, name='conv_7b')
if include_top:
# Classification block
x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
x = layers.Dense(classes, activation='softmax', name='predictions')(x)
else:
if pooling == 'avg':
x = layers.GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = Model(inputs, x, name='inception_resnet_v2')
# Load weights.
if weights == 'imagenet':
if include_top:
fname = 'inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5'
weights_path = get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir='models',
file_hash='e693bd0210a403b3192acc6073ad2e96')
else:
fname = ('inception_resnet_v2_weights_'
'tf_dim_ordering_tf_kernels_notop.h5')
weights_path = get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir='models',
file_hash='d19885ff4a710c122648d3b5c3b684e4')
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
"""Inception-v1 Inflated 3D ConvNet used for Kinetics CVPR paper.
The model is introduced in:
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset
Joao Carreira, Andrew Zisserman
https://arxiv.org/abs/1705.07750v1
"""
WEIGHTS_NAME = ['rgb_kinetics_only', 'flow_kinetics_only', 'rgb_imagenet_and_kinetics', 'flow_imagenet_and_kinetics']
# path to pretrained models with top (classification layer)
WEIGHTS_PATH_I3D = {
'rgb_kinetics_only': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/rgb_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels.h5',
'flow_kinetics_only': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/flow_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels.h5',
'rgb_imagenet_and_kinetics': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/rgb_inception_i3d_imagenet_and_kinetics_tf_dim_ordering_tf_kernels.h5',
'flow_imagenet_and_kinetics': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/flow_inception_i3d_imagenet_and_kinetics_tf_dim_ordering_tf_kernels.h5'
}
# path to pretrained models with no top (no classification layer)
WEIGHTS_PATH_NO_TOP_I3D = {
'rgb_kinetics_only': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/rgb_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels_no_top.h5',
'flow_kinetics_only': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/flow_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels_no_top.h5',
'rgb_imagenet_and_kinetics': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/rgb_inception_i3d_imagenet_and_kinetics_tf_dim_ordering_tf_kernels_no_top.h5',
'flow_imagenet_and_kinetics': 'https://github.com/dlpbc/keras-kinetics-i3d/releases/download/v0.2/flow_inception_i3d_imagenet_and_kinetics_tf_dim_ordering_tf_kernels_no_top.h5'
}
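Downstream loading code chooses between the two URL tables above based on whether the classification head is kept. A small helper sketching that lookup (the helper itself is illustrative; it takes the tables as arguments so it stands alone):

```python
def select_weights_url(weights_name, with_top, top_urls, no_top_urls):
    """Pick a pretrained-weights URL: the WEIGHTS_PATH_I3D table when
    the classification head is kept, WEIGHTS_PATH_NO_TOP_I3D otherwise."""
    table = top_urls if with_top else no_top_urls
    if weights_name not in table:
        raise ValueError('Unknown weights name: %r' % (weights_name,))
    return table[weights_name]
```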
def conv3d_bn(x,
filters,
num_row,
num_col,
num_slices,
padding='same',
strides=(1, 1, 1),
use_bias=False,
use_activation_fn=True,
use_bn=True,
name=None):
"""Utility function to apply conv3d + BN.
# Arguments
x: input tensor.
filters: filters in `Conv3D`.
num_row: height of the convolution kernel.
num_col: width of the convolution kernel.
num_slices: depth (number of slices) of the convolution kernel.
padding: padding mode in `Conv3D`.
strides: strides in `Conv3D`.
use_bias: whether to use a bias in `Conv3D`.
use_activation_fn: use an activation function or not.
use_bn: use batch normalization or not.
name: name of the ops; will become `name + '_conv'`
for the convolution and `name + '_bn'` for the
batch norm layer.
# Returns
Output tensor after applying `Conv3D` and `BatchNormalization`.
"""
if name is not None:
bn_name = name + '_bn'
conv_name = name + '_conv'
else:
bn_name = None
conv_name = None
x = layers.Conv3D(
filters, (num_row, num_col, num_slices),
strides=strides,
padding=padding,
use_bias=use_bias,
name=conv_name)(x)
if use_bn:
if K.image_data_format() == 'channels_first':
bn_axis = 1
else:
bn_axis = 4
x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(x)
if use_activation_fn:
x = layers.Activation('relu', name=name)(x)
return x
def Inception_Inflated3d(include_top=True,
weights=None,
input_tensor=None,
input_shape=None,
dropout_prob=0.0,
endpoint_logit=True,
disentangled=True,
classes=400):
"""Instantiates the Inflated 3D Inception v1 architecture.
Optionally loads weights pre-trained
on Kinetics. Note that when using TensorFlow,
for best performance you should set
`image_data_format='channels_last'` in your Keras config
at ~/.keras/keras.json.
The model and the weights are compatible with both
TensorFlow and Theano. The data format
convention used by the model is the one
specified in your Keras config file.
Note that the default input slice (image) size for this model is 224x224.
# Arguments
include_top: whether to include the classification
layer at the top of the network.
weights: one of `None` (random initialization)
or 'kinetics_only' (pre-training on Kinetics dataset only).
or 'imagenet_and_kinetics' (pre-training on ImageNet and Kinetics datasets).
input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(NUM_FRAMES, 224, 224, 3)` (with `channels_last` data format)
or `(NUM_FRAMES, 3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 input channels.
NUM_FRAMES should be no smaller than 8. The authors used 64
frames per example for training and testing on the Kinetics dataset.
Also, width and height should be no smaller than 32.
E.g. `(64, 150, 150, 3)` would be one valid value.
dropout_prob: optional, dropout probability applied in dropout layer
after global average pooling layer.
0.0 means no dropout is applied, 1.0 means dropout is applied to all features.
Note: Since Dropout is applied just before the classification
layer, it is only useful when `include_top` is set to True.
endpoint_logit: (boolean) optional. If True, the model's forward pass
will end at producing logits. Otherwise, softmax is applied after producing
the logits to produce the class probabilities prediction. Setting this parameter
to True is particularly useful when you want to combine results of rgb model
and optical flow model.
- `True` end model forward pass at logit output
- `False` go further after logit to produce softmax predictions
Note: This parameter is only useful when `include_top` is set to True.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
# Returns
A Keras model instance.
# Raises
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
"""
if not (weights in WEIGHTS_NAME or weights is None or exists(weights)):
raise ValueError('The `weights` argument should be either '
'`None` (random initialization) or %s' %
str(WEIGHTS_NAME) + ' '
'or a valid path to a file containing `weights` values')
if weights in WEIGHTS_NAME and include_top and classes != 400:
raise ValueError('If using `weights` as one of these %s, with `include_top`'
' as true, `classes` should be 400' % str(WEIGHTS_NAME))
# Determine proper input shape
input_shape = _obtain_input_shape_3d(
input_shape,
default_slice_size=224,
min_slice_size=32,
default_num_slices=5,
min_num_slices=1,
data_format=K.image_data_format(),
require_flatten=include_top,
weights=weights)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
if K.image_data_format() == 'channels_first':
channel_axis = 1
else:
channel_axis = 4
num_mods = int(input_shape[channel_axis - 1] / 3)
if input_shape[2] is None or input_shape[2] >= 7:
first_conv_kernel_depth = 7
else:
first_conv_kernel_depth = input_shape[2]
if input_tensor is None and disentangled:
mod_list = list()
for mod_ind in range(num_mods):
if channel_axis == 4:
mod_input = layers.Lambda(lambda x: x[:, :, :, :, 3 * mod_ind:3 * mod_ind + 3],
output_shape=input_shape[0:3] + [3],
name='mod_input_{}'.format(mod_ind))(img_input)
else:
mod_input = layers.Lambda(lambda x: x[:, 3 * mod_ind:3 * mod_ind + 3, :, :, :],
output_shape=[3] + input_shape[1:4],
name='mod_input_{}'.format(mod_ind))(img_input)
mod_list.append(conv3d_bn(mod_input, 64, 7, 7, first_conv_kernel_depth, strides=(2, 2, 1), padding='same',
name='mod_{}'.format(mod_ind)))
x = layers.concatenate(mod_list, axis=channel_axis, name='Conv3d_1a_7x7')
else:
x = conv3d_bn(img_input, 64, 7, 7, first_conv_kernel_depth, strides=(2, 2, 1), padding='same',
name='Conv3d_1a_7x7')
# Downsampling (spatial only)
x = layers.MaxPooling3D((3, 3, 1), strides=(2, 2, 1), padding='same', name='MaxPool2d_2a_3x3')(x)
x = conv3d_bn(x, 64, 1, 1, 1, strides=(1, 1, 1), padding='same', name='Conv3d_2b_1x1')
x = conv3d_bn(x, 192, 3, 3, 3, strides=(1, 1, 1), padding='same', name='Conv3d_2c_3x3')
# Downsampling (spatial only)
x = layers.MaxPooling3D((3, 3, 1), strides=(2, 2, 1), padding='same', name='MaxPool2d_3a_3x3')(x)
# Mixed 3b
branch_0 = conv3d_bn(x, 64, 1, 1, 1, padding='same', name='Conv3d_3b_0a_1x1')
branch_1 = conv3d_bn(x, 96, 1, 1, 1, padding='same', name='Conv3d_3b_1a_1x1')
branch_1 = conv3d_bn(branch_1, 128, 3, 3, 3, padding='same', name='Conv3d_3b_1b_3x3')
branch_2 = conv3d_bn(x, 16, 1, 1, 1, padding='same', name='Conv3d_3b_2a_1x1')
branch_2 = conv3d_bn(branch_2, 32, 3, 3, 3, padding='same', name='Conv3d_3b_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_3b_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 32, 1, 1, 1, padding='same', name='Conv3d_3b_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_3b')
# Mixed 3c
branch_0 = conv3d_bn(x, 128, 1, 1, 1, padding='same', name='Conv3d_3c_0a_1x1')
branch_1 = conv3d_bn(x, 128, 1, 1, 1, padding='same', name='Conv3d_3c_1a_1x1')
branch_1 = conv3d_bn(branch_1, 192, 3, 3, 3, padding='same', name='Conv3d_3c_1b_3x3')
branch_2 = conv3d_bn(x, 32, 1, 1, 1, padding='same', name='Conv3d_3c_2a_1x1')
branch_2 = conv3d_bn(branch_2, 96, 3, 3, 3, padding='same', name='Conv3d_3c_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_3c_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 64, 1, 1, 1, padding='same', name='Conv3d_3c_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_3c')
# Downsampling (spatial only)
x = layers.MaxPooling3D((3, 3, 1), strides=(2, 2, 1), padding='same', name='MaxPool2d_4a_3x3')(x)
# Mixed 4b
branch_0 = conv3d_bn(x, 192, 1, 1, 1, padding='same', name='Conv3d_4b_0a_1x1')
branch_1 = conv3d_bn(x, 96, 1, 1, 1, padding='same', name='Conv3d_4b_1a_1x1')
branch_1 = conv3d_bn(branch_1, 208, 3, 3, 3, padding='same', name='Conv3d_4b_1b_3x3')
branch_2 = conv3d_bn(x, 16, 1, 1, 1, padding='same', name='Conv3d_4b_2a_1x1')
branch_2 = conv3d_bn(branch_2, 48, 3, 3, 3, padding='same', name='Conv3d_4b_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_4b_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 64, 1, 1, 1, padding='same', name='Conv3d_4b_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_4b')
# Mixed 4c
branch_0 = conv3d_bn(x, 160, 1, 1, 1, padding='same', name='Conv3d_4c_0a_1x1')
branch_1 = conv3d_bn(x, 112, 1, 1, 1, padding='same', name='Conv3d_4c_1a_1x1')
branch_1 = conv3d_bn(branch_1, 224, 3, 3, 3, padding='same', name='Conv3d_4c_1b_3x3')
branch_2 = conv3d_bn(x, 24, 1, 1, 1, padding='same', name='Conv3d_4c_2a_1x1')
branch_2 = conv3d_bn(branch_2, 64, 3, 3, 3, padding='same', name='Conv3d_4c_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_4c_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 64, 1, 1, 1, padding='same', name='Conv3d_4c_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_4c')
# Mixed 4d
branch_0 = conv3d_bn(x, 128, 1, 1, 1, padding='same', name='Conv3d_4d_0a_1x1')
branch_1 = conv3d_bn(x, 128, 1, 1, 1, padding='same', name='Conv3d_4d_1a_1x1')
branch_1 = conv3d_bn(branch_1, 256, 3, 3, 3, padding='same', name='Conv3d_4d_1b_3x3')
branch_2 = conv3d_bn(x, 24, 1, 1, 1, padding='same', name='Conv3d_4d_2a_1x1')
branch_2 = conv3d_bn(branch_2, 64, 3, 3, 3, padding='same', name='Conv3d_4d_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_4d_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 64, 1, 1, 1, padding='same', name='Conv3d_4d_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_4d')
# Mixed 4e
branch_0 = conv3d_bn(x, 112, 1, 1, 1, padding='same', name='Conv3d_4e_0a_1x1')
branch_1 = conv3d_bn(x, 144, 1, 1, 1, padding='same', name='Conv3d_4e_1a_1x1')
branch_1 = conv3d_bn(branch_1, 288, 3, 3, 3, padding='same', name='Conv3d_4e_1b_3x3')
branch_2 = conv3d_bn(x, 32, 1, 1, 1, padding='same', name='Conv3d_4e_2a_1x1')
branch_2 = conv3d_bn(branch_2, 64, 3, 3, 3, padding='same', name='Conv3d_4e_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_4e_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 64, 1, 1, 1, padding='same', name='Conv3d_4e_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_4e')
# Mixed 4f
branch_0 = conv3d_bn(x, 256, 1, 1, 1, padding='same', name='Conv3d_4f_0a_1x1')
branch_1 = conv3d_bn(x, 160, 1, 1, 1, padding='same', name='Conv3d_4f_1a_1x1')
branch_1 = conv3d_bn(branch_1, 320, 3, 3, 3, padding='same', name='Conv3d_4f_1b_3x3')
branch_2 = conv3d_bn(x, 32, 1, 1, 1, padding='same', name='Conv3d_4f_2a_1x1')
branch_2 = conv3d_bn(branch_2, 128, 3, 3, 3, padding='same', name='Conv3d_4f_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_4f_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 128, 1, 1, 1, padding='same', name='Conv3d_4f_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_4f')
# Downsampling (spatial only)
x = layers.MaxPooling3D((2, 2, 1), strides=(2, 2, 1), padding='same', name='MaxPool2d_5a_2x2')(x)
# Mixed 5b
branch_0 = conv3d_bn(x, 256, 1, 1, 1, padding='same', name='Conv3d_5b_0a_1x1')
branch_1 = conv3d_bn(x, 160, 1, 1, 1, padding='same', name='Conv3d_5b_1a_1x1')
branch_1 = conv3d_bn(branch_1, 320, 3, 3, 3, padding='same', name='Conv3d_5b_1b_3x3')
branch_2 = conv3d_bn(x, 32, 1, 1, 1, padding='same', name='Conv3d_5b_2a_1x1')
branch_2 = conv3d_bn(branch_2, 128, 3, 3, 3, padding='same', name='Conv3d_5b_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_5b_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 128, 1, 1, 1, padding='same', name='Conv3d_5b_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_5b')
# Mixed 5c
branch_0 = conv3d_bn(x, 384, 1, 1, 1, padding='same', name='Conv3d_5c_0a_1x1')
branch_1 = conv3d_bn(x, 192, 1, 1, 1, padding='same', name='Conv3d_5c_1a_1x1')
branch_1 = conv3d_bn(branch_1, 384, 3, 3, 3, padding='same', name='Conv3d_5c_1b_3x3')
branch_2 = conv3d_bn(x, 48, 1, 1, 1, padding='same', name='Conv3d_5c_2a_1x1')
branch_2 = conv3d_bn(branch_2, 128, 3, 3, 3, padding='same', name='Conv3d_5c_2b_3x3')
branch_3 = layers.MaxPooling3D((3, 3, 3), strides=(1, 1, 1), padding='same', name='MaxPool2d_5c_3a_3x3')(x)
branch_3 = conv3d_bn(branch_3, 128, 1, 1, 1, padding='same', name='Conv3d_5c_3b_1x1')
x = layers.concatenate(
[branch_0, branch_1, branch_2, branch_3],
axis=channel_axis,
name='Mixed_5c')
if include_top:
# Classification block
x = layers.AveragePooling3D((7, 7, 2), strides=(1, 1, 1), padding='valid', name='global_avg_pool')(x)
x = layers.Dropout(dropout_prob)(x)
x = conv3d_bn(x, classes, 1, 1, 1, padding='same',
use_bias=True, use_activation_fn=False, use_bn=False, name='Conv3d_6a_1x1')
num_slices_remaining = int(x.shape[1])
x = layers.Reshape((num_slices_remaining, classes))(x)
# logits (raw scores for each class)
x = layers.Lambda(lambda x: K.mean(x, axis=1, keepdims=False),
output_shape=lambda s: (s[0], s[2]))(x)
if not endpoint_logit:
x = layers.Activation('softmax', name='prediction')(x)
else:
x = layers.GlobalAveragePooling3D(name='global_avg_pool')(x)
inputs = img_input
# create model
model = Model(inputs, x, name='i3d_inception')
# # load weights
# if weights in WEIGHTS_NAME:
# if weights == WEIGHTS_NAME[0]: # rgb_kinetics_only
# if include_top:
# weights_url = WEIGHTS_PATH_I3D['rgb_kinetics_only']
# model_name = 'i3d_inception_rgb_kinetics_only.h5'
# else:
# weights_url = WEIGHTS_PATH_NO_TOP_I3D['rgb_kinetics_only']
# model_name = 'i3d_inception_rgb_kinetics_only_no_top.h5'
#
# elif weights == WEIGHTS_NAME[1]: # flow_kinetics_only
# if include_top:
# weights_url = WEIGHTS_PATH_I3D['flow_kinetics_only']
# model_name = 'i3d_inception_flow_kinetics_only.h5'
# else:
# weights_url = WEIGHTS_PATH_NO_TOP_I3D['flow_kinetics_only']
# model_name = 'i3d_inception_flow_kinetics_only_no_top.h5'
#
# elif weights == WEIGHTS_NAME[2]: # rgb_imagenet_and_kinetics
# if include_top:
# weights_url = WEIGHTS_PATH_I3D['rgb_imagenet_and_kinetics']
# model_name = 'i3d_inception_rgb_imagenet_and_kinetics.h5'
# else:
# weights_url = WEIGHTS_PATH_NO_TOP_I3D['rgb_imagenet_and_kinetics']
# model_name = 'i3d_inception_rgb_imagenet_and_kinetics_no_top.h5'
#
# elif weights == WEIGHTS_NAME[3]: # flow_imagenet_and_kinetics
# if include_top:
# weights_url = WEIGHTS_PATH_I3D['flow_imagenet_and_kinetics']
# model_name = 'i3d_inception_flow_imagenet_and_kinetics.h5'
# else:
# weights_url = WEIGHTS_PATH_NO_TOP_I3D['flow_imagenet_and_kinetics']
# model_name = 'i3d_inception_flow_imagenet_and_kinetics_no_top.h5'
#
# downloaded_weights_path = get_file(model_name, weights_url, cache_subdir='models')
# model.load_weights(downloaded_weights_path)
#
# elif weights is not None:
# model.load_weights(weights)
return model | 41.901278 | 180 | 0.596641 | 9,682 | 72,154 | 4.233526 | 0.056806 | 0.011418 | 0.00666 | 0.019664 | 0.856644 | 0.832736 | 0.803069 | 0.770914 | 0.731782 | 0.713777 | 0 | 0.078508 | 0.299692 | 72,154 | 1,722 | 181 | 41.901278 | 0.732674 | 0.244186 | 0 | 0.646484 | 0 | 0.007813 | 0.117235 | 0.009157 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010742 | false | 0 | 0.008789 | 0 | 0.032227 | 0.000977 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f25a5172a64c173b2d63acd120537976d142765d | 11,781 | py | Python | language/parse.py | entrity0305/hunmin | cbc377bc7e2efaf2ef8f208b0f485b2f585df121 | [
"MIT"
] | 2 | 2021-06-06T01:59:35.000Z | 2021-06-06T14:50:07.000Z | language/parse.py | entrity0305/hunmin | cbc377bc7e2efaf2ef8f208b0f485b2f585df121 | [
"MIT"
] | null | null | null | language/parse.py | entrity0305/hunmin | cbc377bc7e2efaf2ef8f208b0f485b2f585df121 | [
"MIT"
] | null | null | null | import language.errors as errors
from language.ast import functionStatement, varExprStatement, ifStatement, elifStatement, elseStatement, whileStatement, returnStatement
def parse(tokens, currentIndent):
parseResult = ''
currentPos = 0
isString = False
previousExpr = '' # tokens seen so far; handles functions/classes not defined internally
internalExpr = [] # statements inside a block
tokenExpr = []
while currentPos < len(tokens):
currentToken = tokens[currentPos]
if currentToken == '=': # handle variable declaration
varName = previousExpr
previousExpr = '' # reset
currentPos += 1 # skip '=' so it is not appended
try:
while True: # until the declaration ends; 'EOS' inside a string is ignored
if tokens[currentPos] == 'EOS' and not(isString):
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
tokenExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEOSError
parseResult += str(varExprStatement(currentIndent, varName, ''.join(tokenExpr)))
tokenExpr = [] # reset
elif currentToken == 'if':
currentPos += 1 # skip 'if' so it is not appended
opened = 1 # nesting depth, to detect the end of this block
try:
while True:
if tokens[currentPos] == 'then' and not(isString):
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
tokenExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingThenError
currentPos += 1
try:
while True:
if tokens[currentPos] == 'END' and not(isString):
opened -= 1 # close one nested block
if (tokens[currentPos] == 'then' and not(isString)) or (tokens[currentPos] == 'while' and not(isString)) or (tokens[currentPos] == 'else' and not(isString)) or (tokens[currentPos] == 'def' and not(isString)):
opened += 1 # open one nested block
if opened == 0: # all blocks closed
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
internalExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEndError
parseResult += str(ifStatement(currentIndent, ''.join(tokenExpr), parse(internalExpr, currentIndent + 1)))
tokenExpr = []
internalExpr = [] # reset
elif currentToken == 'elif':
currentPos += 1 # skip 'elif' so it is not appended
opened = 1 # nesting depth, to detect the end of this block
try:
while True:
if tokens[currentPos] == 'then' and not(isString):
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
tokenExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingThenError
currentPos += 1
try:
while True:
if tokens[currentPos] == 'END' and not(isString):
opened -= 1 # close one nested block
if (tokens[currentPos] == 'then' and not(isString)) or (tokens[currentPos] == 'while' and not(isString)) or (tokens[currentPos] == 'else' and not(isString)) or (tokens[currentPos] == 'def' and not(isString)):
opened += 1 # open one nested block
if opened == 0: # all blocks closed
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
internalExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEndError
parseResult += str(elifStatement(currentIndent, ''.join(tokenExpr), parse(internalExpr, currentIndent + 1)))
tokenExpr = []
internalExpr = [] # reset
elif currentToken == 'else':
currentPos += 1
opened = 1 # nesting depth, to detect the end of this block
try:
while True:
if tokens[currentPos] == 'END' and not(isString):
opened -= 1 # close one nested block
if (tokens[currentPos] == 'then' and not(isString)) or (tokens[currentPos] == 'while' and not(isString)) or (tokens[currentPos] == 'else' and not(isString)) or (tokens[currentPos] == 'def' and not(isString)):
opened += 1 # open one nested block
if opened == 0: # all blocks closed
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
internalExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEndError
parseResult += str(elseStatement(currentIndent, parse(internalExpr, currentIndent + 1)))
tokenExpr = []
internalExpr = [] # reset
elif currentToken == 'while':
currentPos += 1
opened = 1 # nesting depth, to detect the end of this block
try:
while True:
if tokens[currentPos] == 'END' and not(isString):
opened -= 1 # close one nested block
if (tokens[currentPos] == 'then' and not(isString)) or (tokens[currentPos] == 'while' and not(isString)) or (tokens[currentPos] == 'else' and not(isString)) or (tokens[currentPos] == 'def' and not(isString)):
opened += 1 # open one nested block
if opened == 0: # all blocks closed
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
internalExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEndError
parseResult += str(whileStatement(currentIndent, previousExpr, parse(internalExpr, currentIndent + 1)))
previousExpr = '' # reset
elif currentToken == 'break':
parseResult += '\n{}break\n'.format(currentIndent * ' ')
elif currentToken == 'CALL': # function call
parseResult += currentIndent * ' ' + previousExpr + '\n' # append the content accumulated so far
previousExpr = '' # reset
elif currentToken == 'return':
currentPos += 1 # skip 'return' so it is not appended
try:
while True:
if tokens[currentPos] == 'EOR' and not(isString):
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
tokenExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEORError
parseResult += str(returnStatement(currentIndent, ''.join(tokenExpr)))
tokenExpr = [] # reset
elif currentToken == 'for':
pass
elif currentToken == 'def':
currentPos += 1
opened = 1 # nesting depth, to detect the end of this block
functionName = '' # name of the function
try:
while True:
if tokens[currentPos] == '=' and not(isString):
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
functionName += tokens[currentPos]
currentPos += 1
except IndexError:
raise errors.missingEqualError
currentPos += 1 # skip '=' so it is not appended
try:
while True:
if tokens[currentPos] == 'END' and not(isString):
opened -= 1 # close one nested block
if (tokens[currentPos] == 'then' and not(isString)) or (tokens[currentPos] == 'while' and not(isString)) or (tokens[currentPos] == 'else' and not(isString)) or (tokens[currentPos] == 'def' and not(isString)):
opened += 1 # open one nested block
if opened == 0: # all blocks closed
break
if tokens[currentPos] == '"' and not(isString): # '"' outside a string starts a string
isString = True
elif tokens[currentPos] == '"' and isString: # '"' inside a string ends the string
isString = False
internalExpr.append(tokens[currentPos])
currentPos += 1
except IndexError:
raise errors.missingEndError
parseResult += str(functionStatement(currentIndent, functionName, parse(internalExpr, currentIndent + 1)))
internalExpr = [] # reset
else:
previousExpr += currentToken
currentPos += 1
return parseResult
| 38.25 | 229 | 0.439267 | 954 | 11,781 | 5.424528 | 0.115304 | 0.188599 | 0.108213 | 0.046377 | 0.773913 | 0.769275 | 0.747633 | 0.741063 | 0.725024 | 0.711304 | 0 | 0.007549 | 0.471522 | 11,781 | 307 | 230 | 38.374593 | 0.823643 | 0.073508 | 0 | 0.808824 | 0 | 0 | 0.017825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004902 | false | 0.004902 | 0.009804 | 0 | 0.019608 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
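The `opened` counter used throughout parse() above is a plain nesting-depth matcher: 'then'/'while'/'else'/'def' open a block and 'END' closes one. A minimal stand-alone sketch of the same idea (a hypothetical helper, not part of parse.py; the string-literal handling is omitted):

```python
def find_matching_end(tokens, start=0):
    """Return the index of the 'END' that closes the current block."""
    openers = {'then', 'while', 'else', 'def'}
    opened = 1  # we are already inside one block
    for pos in range(start, len(tokens)):
        if tokens[pos] == 'END':
            opened -= 1  # close one nested block
        if tokens[pos] in openers:
            opened += 1  # open one nested block
        if opened == 0:
            return pos
    raise IndexError('missing END')

assert find_matching_end(['x', 'then', 'y', 'END', 'END']) == 4
```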
f2be83ed559540e537e5bc23c1df45a6a94c64c1 | 33 | py | Python | vnpy/app/paper_account/ui/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 19,529 | 2015-03-02T12:17:35.000Z | 2022-03-31T17:18:27.000Z | vnpy/app/paper_account/ui/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 2,186 | 2015-03-04T23:16:33.000Z | 2022-03-31T03:44:01.000Z | vnpy/app/paper_account/ui/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 8,276 | 2015-03-02T05:21:04.000Z | 2022-03-31T13:13:13.000Z | from .widget import PaperManager
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f2c752a56dfe8e17f62ffed2af5b5fa1cbc649d4 | 9,019 | py | Python | lernomatic/models/autoencoder/aae_common.py | stfnwong/lernomatic | 2cd7506c04c4e2afca22b7c05ebb0cc94f8048d9 | [
"MIT"
] | null | null | null | lernomatic/models/autoencoder/aae_common.py | stfnwong/lernomatic | 2cd7506c04c4e2afca22b7c05ebb0cc94f8048d9 | [
"MIT"
] | null | null | null | lernomatic/models/autoencoder/aae_common.py | stfnwong/lernomatic | 2cd7506c04c4e2afca22b7c05ebb0cc94f8048d9 | [
"MIT"
] | null | null | null | """
AAE_COMMON
Some basic AAE (adversarial autoencoder) models
Stefan Wong 2019
"""
import importlib
import torch
import torch.nn as nn
import torch.nn.functional as F
from lernomatic.models import common
# debug
#
# Encoder side modules
class AAEQNet(common.LernomaticModel):
def __init__(self,
x_dim:int=784,
z_dim:int=2,
hidden_size:int=512,
num_classes:int=10,
dropout:float=0.2,
cat_mode:bool=False) -> None:
self.import_path : str = 'lernomatic.models.autoencoder.aae_common'
self.model_name : str = 'AAEQNet'
self.module_name : str = 'AAEQNetModule'
self.module_import_path: str = 'lernomatic.models.autoencoder.aae_common'
self.net = AAEQNetModule(
x_dim,
z_dim,
hidden_size,
num_classes = num_classes,
dropout=dropout,
cat_mode = cat_mode
)
def __repr__(self) -> str:
return 'AAEQNet'
def get_hidden_size(self) -> int:
return self.net.hidden_size
def get_x_dim(self) -> int:
return self.net.x_dim
def get_z_dim(self) -> int:
return self.net.z_dim
def get_num_classes(self) -> int:
return self.net.num_classes
def set_cat_mode(self) -> None:
self.net.cat_mode = True
def unset_cat_mode(self) -> None:
self.net.cat_mode = False
def get_model_args(self) -> dict:
return {
'x_dim' : self.net.x_dim,
'z_dim' : self.net.z_dim,
'hidden_size' : self.net.hidden_size,
'num_classes' : self.net.num_classes,
'dropout' : self.net.dropout,
'cat_mode' : self.net.cat_mode
}
def set_params(self, params : dict) -> None:
self.import_path = params['model_import_path']
self.model_name = params['model_name']
self.module_name = params['module_name']
self.module_import_path = params['module_import_path']
# Import the actual network module
imp = importlib.import_module(self.module_import_path)
mod = getattr(imp, self.module_name)
self.net = mod(
params['model_args']['x_dim'],
params['model_args']['z_dim'],
params['model_args']['hidden_size'],
dropout = params['model_args']['dropout'],
cat_mode = params['model_args']['cat_mode']
)
self.net.load_state_dict(params['model_state_dict'])
class AAEQNetModule(nn.Module):
def __init__(self,
x_dim:int,
z_dim:int,
hidden_size:int,
num_classes:int=10,
dropout:float=0.2,
cat_mode:bool=False) -> None:
super(AAEQNetModule, self).__init__()
self.x_dim :int = x_dim
self.z_dim :int = z_dim
self.hidden_size :int = hidden_size
self.num_classes :int = num_classes
self.dropout :float = dropout
self.cat_mode :bool = cat_mode
# network graph
self.l1 = nn.Linear(self.x_dim, self.hidden_size) # MNIST size?
self.l2 = nn.Linear(self.hidden_size, self.hidden_size)
# gaussian (z)
self.lingauss = nn.Linear(self.hidden_size, self.z_dim)
# categorical code (y)
self.lincat = nn.Linear(self.hidden_size, self.num_classes)
def forward(self, X:torch.Tensor) -> torch.Tensor:
X = self.l1(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = self.l2(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
xgauss = self.lingauss(X)
if self.cat_mode:
xcat = F.softmax(self.lincat(X), dim=0)
return (xcat, xgauss)
return xgauss
# Decoder side
class AAEPNet(common.LernomaticModel):
def __init__(self,
x_dim:int=784,
z_dim:int=2,
hidden_size:int=512,
dropout:float=0.2) -> None:
self.import_path : str = 'lernomatic.models.autoencoder.aae_common'
self.model_name : str = 'AAEPNet'
self.module_name : str = 'AAEPNetModule'
self.module_import_path: str = 'lernomatic.models.autoencoder.aae_common'
self.net = AAEPNetModule(
x_dim,
z_dim,
hidden_size,
dropout=dropout
)
def __repr__(self) -> str:
return 'AAEPNet'
def get_hidden_size(self) -> int:
return self.net.hidden_size
def get_x_dim(self) -> int:
return self.net.x_dim
def get_z_dim(self) -> int:
return self.net.z_dim
def get_model_args(self) -> dict:
return {
'x_dim' : self.net.x_dim,
'z_dim' : self.net.z_dim,
'hidden_size' : self.net.hidden_size,
'dropout' : self.net.dropout
}
def set_params(self, params : dict) -> None:
self.import_path = params['model_import_path']
self.model_name = params['model_name']
self.module_name = params['module_name']
self.module_import_path = params['module_import_path']
# Import the actual network module
imp = importlib.import_module(self.module_import_path)
mod = getattr(imp, self.module_name)
self.net = mod(
params['model_args']['x_dim'],
params['model_args']['z_dim'],
params['model_args']['hidden_size'],
dropout = params['model_args']['dropout'],
)
self.net.load_state_dict(params['model_state_dict'])
class AAEPNetModule(nn.Module):
def __init__(self, x_dim:int, z_dim:int, hidden_size:int, dropout:float=0.2) -> None:
super(AAEPNetModule, self).__init__()
self.x_dim :int = x_dim
self.z_dim :int = z_dim
self.hidden_size :int = hidden_size
self.dropout :float = dropout
# network graph
self.l1 = nn.Linear(self.z_dim, self.hidden_size)
self.l2 = nn.Linear(self.hidden_size, self.hidden_size)
self.l3 = nn.Linear(self.hidden_size, self.x_dim)
def forward(self, X:torch.Tensor) -> torch.Tensor:
X = self.l1(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = self.l2(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = self.l3(X)
X = torch.sigmoid(X)
return X
class AAEDNetGauss(common.LernomaticModel):
def __init__(self,
z_dim:int=2,
hidden_size:int=512,
dropout:float=0.2) -> None:
self.import_path : str = 'lernomatic.models.autoencoder.aae_common'
self.model_name : str = 'AAEDNetGauss'
self.module_name : str = 'AAEDNetGaussModule'
self.module_import_path: str = 'lernomatic.models.autoencoder.aae_common'
self.net = AAEDNetGaussModule(
z_dim,
hidden_size,
dropout=dropout
)
def __repr__(self) -> str:
return 'AAEDNetGauss'
def get_hidden_size(self) -> int:
return self.net.hidden_size
def get_z_dim(self) -> int:
return self.net.z_dim
def get_model_args(self) -> dict:
return {
'z_dim' : self.net.z_dim,
'hidden_size' : self.net.hidden_size,
'dropout' : self.net.dropout
}
def set_params(self, params : dict) -> None:
self.import_path = params['model_import_path']
self.model_name = params['model_name']
self.module_name = params['module_name']
self.module_import_path = params['module_import_path']
# Import the actual network module
imp = importlib.import_module(self.module_import_path)
mod = getattr(imp, self.module_name)
self.net = mod(
params['model_args']['z_dim'],
params['model_args']['hidden_size'],
params['model_args']['dropout'],
)
self.net.load_state_dict(params['model_state_dict'])
class AAEDNetGaussModule(nn.Module):
def __init__(self, z_dim:int, hidden_size:int, dropout:float=0.2) -> None:
super(AAEDNetGaussModule, self).__init__()
self.z_dim = z_dim
self.hidden_size = hidden_size
self.dropout :float = dropout
# network graph
self.l1 = nn.Linear(self.z_dim, self.hidden_size)
self.l2 = nn.Linear(self.hidden_size, self.hidden_size)
self.l3 = nn.Linear(self.hidden_size, 1)
def forward(self, X:torch.Tensor) -> torch.Tensor:
X = self.l1(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = self.l2(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = self.l3(X)
X = F.dropout(X, p=self.dropout)
X = F.relu(X)
X = torch.sigmoid(X)
return X
| 31.315972 | 89 | 0.57146 | 1,152 | 9,019 | 4.228299 | 0.087674 | 0.088278 | 0.054609 | 0.036953 | 0.810101 | 0.788134 | 0.767604 | 0.748512 | 0.736604 | 0.728803 | 0 | 0.009064 | 0.315002 | 9,019 | 287 | 90 | 31.425087 | 0.779378 | 0.0316 | 0 | 0.689498 | 0 | 0 | 0.098347 | 0.027542 | 0 | 0 | 0 | 0 | 0 | 1 | 0.13242 | false | 0 | 0.091324 | 0.068493 | 0.3379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
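The set_params() methods above all rebuild the network via importlib.import_module plus getattr. That pattern in isolation (a stdlib-only sketch; collections.OrderedDict merely stands in for a model class here):

```python
import importlib

def load_class(module_path, class_name):
    # Resolve a class object from its dotted module path and name,
    # mirroring how set_params() re-imports a saved network module.
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

cls = load_class('collections', 'OrderedDict')
assert cls().__class__.__name__ == 'OrderedDict'
```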
4bbc146cd81770ab56b28a74011c83ece94486eb | 42 | py | Python | tests/test_smoke.py | jmp/python-pipeline | 6cd8e56f8e3357df84bdd3f1428f86b6ea9fd27b | [
"MIT"
] | null | null | null | tests/test_smoke.py | jmp/python-pipeline | 6cd8e56f8e3357df84bdd3f1428f86b6ea9fd27b | [
"MIT"
] | null | null | null | tests/test_smoke.py | jmp/python-pipeline | 6cd8e56f8e3357df84bdd3f1428f86b6ea9fd27b | [
"MIT"
] | null | null | null | def test_smoke() -> None:
assert True
| 14 | 25 | 0.642857 | 6 | 42 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 42 | 2 | 26 | 21 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
29d81748bc98cc2cc1ed327f8718e3f30100a34e | 49,708 | py | Python | functions/visualizations.py | levidantzinger/hawaii_covid_forecast | 848cd19f0464520232d9e2434246f2dee53ee140 | [
"MIT"
] | null | null | null | functions/visualizations.py | levidantzinger/hawaii_covid_forecast | 848cd19f0464520232d9e2434246f2dee53ee140 | [
"MIT"
] | null | null | null | functions/visualizations.py | levidantzinger/hawaii_covid_forecast | 848cd19f0464520232d9e2434246f2dee53ee140 | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
from bokeh.plotting import figure, output_file, show
from bokeh.models import ColumnDataSource
from bokeh.models.tools import HoverTool
from bokeh.models import HoverTool
from bokeh.models.widgets import Tabs, Panel
from bokeh.resources import CDN
from bokeh.embed import file_html
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import chart_studio.plotly as py
import chart_studio
#########################################################
############## ~ Create Graphs (Bokeh) ~ ################
#########################################################
def initialize_plotting_function(y_metric, pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases):
"""
Initializing function for plotting historical data + model output data
"""
# Set data sources
source_pessimistic_14 = ColumnDataSource(pessimistic_14)
source_expected_14 = ColumnDataSource(expected_14)
source_daily_new_cases_historical = ColumnDataSource(hdc_covid_data_df_historical_graph)
source_state_df_historical = ColumnDataSource(hdc_covid_data_df_historical_graph)
# Creates interactive hover
tooltips = [
('Scenario', '@Scenario'),
(f'{y_metric}',f'@{y_metric}'),
('Date', '@Date{%F}')
]
y_max = 0
if y_metric == 'Hospitalizations':
y_max = int(max_hosp)
if y_metric == 'ICU':
y_max = int(max_ICU)
if y_metric == 'Deaths':
y_max = int(max_Deaths)
if y_metric == 'Reported_New_Cases':
y_max = int(max_Reported_New_Cases)
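The if-chain above is a metric-to-axis-limit lookup; the same mapping reads more directly as a dict. A sketch (the limit values below are made-up example numbers, not data from this project; unknown metrics fall back to 0):

```python
def pick_y_max(y_metric, limits):
    # dict dispatch equivalent of the y_max if-chain
    return int(limits.get(y_metric, 0))

limits = {'Hospitalizations': 120.0, 'ICU': 40.0,
          'Deaths': 15.0, 'Reported_New_Cases': 300.0}
assert pick_y_max('ICU', limits) == 40
assert pick_y_max('Susceptible', limits) == 0  # not in the mapping
```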
# Initalize plot foundation
p = figure(x_axis_type = "datetime", y_range=(0, y_max))
# Add historical lines
if y_metric == 'Hospitalizations':
historical_hosp_line = p.line(x='Date', y=f'{y_metric}',
source=source_state_df_historical,
line_width=2, color = 'grey')
p.add_tools(HoverTool(renderers=[historical_hosp_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
if y_metric == 'Deaths':
historical_death_line = p.line(x='Date', y=f'{y_metric}',
source=source_state_df_historical,
line_width=2, color = 'grey')
p.add_tools(HoverTool(renderers=[historical_death_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
if y_metric == 'ICU':
historical_ICU_line = p.line(x='Date', y=f'{y_metric}',
source=source_state_df_historical,
line_width=2, color = 'grey')
p.add_tools(HoverTool(renderers=[historical_ICU_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
if y_metric == 'Reported_New_Cases':
historical_daily_new_cases_line = p.line(x='Date', y=f'{y_metric}',
source=source_daily_new_cases_historical,
line_width=2, color = 'grey')
p.add_tools(HoverTool(renderers=[historical_daily_new_cases_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
# Add forecast lines
pessimistic_14_line = p.line(x='Date', y=f'{y_metric}',
source=source_pessimistic_14,
line_width=2, color='firebrick', legend_label='Pessimistic')
expected_14_line = p.line(x='Date', y=f'{y_metric}',
source=source_expected_14,
line_width=2, color='steelblue', legend_label='Expected')
p.add_tools(HoverTool(renderers=[expected_14_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
p.add_tools(HoverTool(renderers=[pessimistic_14_line], tooltips=tooltips, mode='vline', formatters={'Date': 'datetime'}))
# Add Graph details
p.title.text = f'Number of {y_metric}'
p.xaxis.axis_label = 'Date'
p.yaxis.axis_label = f'{y_metric}'
# Sets graph size
p.plot_width = 1200
p.plot_height = 700
# Sets legend
p.legend.location = "top_left"
p.legend.click_policy="hide"
return p
def forecast_graph(pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases):
"""
Creates tabs for each metric and displays output to html
"""
# Create panels for each tab
cases_tab = Panel(child=initialize_plotting_function('Reported_New_Cases', pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases), title='Reported New Cases')
hospitalized_tab = Panel(child=initialize_plotting_function('Hospitalizations', pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases), title='Hospitalizations')
ICU_tab = Panel(child=initialize_plotting_function('ICU', pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases), title='ICU')
Deaths_tab = Panel(child=initialize_plotting_function('Deaths', pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases), title='Deaths')
Susceptible_tab = Panel(child=initialize_plotting_function('Susceptible', pessimistic_14, expected_14, hdc_covid_data_df_historical_graph, max_hosp, max_ICU, max_Deaths, max_Reported_New_Cases), title='Susceptible')
# Assign the panels to Tabs
tabs = Tabs(tabs=[Susceptible_tab, cases_tab, hospitalized_tab, ICU_tab, Deaths_tab])
return tabs
#########################################################
############## ~ Create Graphs (Bokeh) ~ ################
#########################################################
def create_forecast_graphs(cdc_metric, df, df_column, expected_14, pessimistic_14, legend_name, max_metric, chart_studio_name, push_to_site):
if (cdc_metric == 'case') or (cdc_metric == 'death'):
cdc_forecast = pd.read_csv(f'./cdc_forecasts/{cdc_metric}_forecast.csv')
cdc_forecast.rename(columns={cdc_forecast.iloc[:, 0].name : 'Date'}, inplace=True)
cdc_forecast['Date'] = pd.to_datetime(cdc_forecast['Date'])
# Filter CDC forecast to correct dates
start_counter = 0
for i in cdc_forecast['Date']:
if i != pessimistic_14['Date'].iloc[0]:
start_counter += 1
else:
break
cdc_forecast = cdc_forecast.iloc[start_counter:start_counter+16]
cdc_forecast = cdc_forecast.set_index('Date').rename_axis(None)
# Add bools for ensemble buttons
        show_ensemble_lines = [True, True, True]  # the first three entries are the HIPAM traces, which are always displayed
        hide_ensemble_lines = [True, True, True]  # the first three entries are the HIPAM traces, which are always displayed
for i in range(0, len(cdc_forecast.columns)):
show_ensemble_lines.append(True)
hide_ensemble_lines.append(False)
    # Build the Plotly figure (optionally pushed to Chart Studio below)
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['Date'], y=df[df_column], mode='lines', name=legend_name,
line=dict(color='lightgray', width=4)))
fig.add_trace(go.Scatter(x=pessimistic_14['Date'], y=pessimistic_14[df_column], mode='lines', name=legend_name,
line=dict(color='lightcoral', width=4)))
fig.add_trace(go.Scatter(x=expected_14['Date'], y=expected_14[df_column], mode='lines', name=legend_name,
line=dict(color='lightblue', width=4)))
if (cdc_metric == 'case') or (cdc_metric == 'death'):
for i in range(0, len(cdc_forecast.columns)):
fig.add_trace(go.Scatter(x=pessimistic_14['Date'], y=cdc_forecast.iloc[:, i], mode='lines', name=legend_name,
line=dict(color='lightgray', width=1)))
fig.update_xaxes(showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)'
))
fig.update_yaxes(range=[0, max_metric],
showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
if (cdc_metric == 'case') or (cdc_metric == 'death'):
fig.update_layout(autosize=False,
width=1500,
height=1000,
showlegend=False,
plot_bgcolor='white',
margin=dict(
autoexpand=False,
l=80,
r=80,
t=50
),
title={'y':1},
updatemenus=[
dict(
type="buttons",
bgcolor = 'rgb(205, 205, 205)',
bordercolor = 'rgb(84, 84, 84)',
font = dict(color='rgb(84, 84, 84)'),
direction="right",
active=-1,
x=0.57,
y=1.05,
buttons=list([
dict(label="Show Ensemble",
method="update",
args=[{"visible": show_ensemble_lines},
{"annotations": []}]),
dict(label="Hide Ensemble",
method="update",
args=[{"visible": hide_ensemble_lines},
{"annotations": []}])
]),
)
])
else:
fig.update_layout(autosize=False,
width=1500,
height=1000,
showlegend=False,
plot_bgcolor='white',
margin=dict(
autoexpand=False,
l=80,
r=80,
t=50
),
title={'y':1}
)
if push_to_site == 'Y':
username = '' # your username
api_key = '' # your api key - go to profile > settings > regenerate key
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
py.plot(fig, filename = f'hipam_forecast_{chart_studio_name}', auto_open=True)
fig.write_html(f"./chart_studio/file_{chart_studio_name}.html")
else:
fig.show()
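# The ensemble-visibility lists and the date-alignment loop in
# create_forecast_graphs above can be sketched as two small pure-Python
# helpers (illustrative only; the names are hypothetical and nothing else
# in this module calls them):
def _ensemble_visibility(n_ensemble_lines, show_ensemble):
    # The first three entries are the HIPAM traces, which stay visible;
    # the remaining entries toggle the CDC ensemble traces on or off.
    return [True, True, True] + [show_ensemble] * n_ensemble_lines

def _align_start(dates, start_date):
    # Count leading rows until the forecast's first date matches the
    # model's first forecast date (mirrors the start_counter loop above).
    for offset, d in enumerate(dates):
        if d == start_date:
            return offset
    return len(dates)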
#########################################################
########## ~ Create Oahu Graph (Plotly) ~ ###############
#########################################################
def create_oahu_reopening_graph_plotly(oahu_7_day_avg_cases, oahu_test_positivity_rate, oahu_dates, oahu_7_day_avg_cases_color, oahu_test_positivity_rate_color, push_to_site_oahu):
    """
    Creates the Oahu reopening graph (7-day avg. cases vs. test positivity) and pushes it to Chart Studio or shows it locally
    """
oahu_stats = {'7 Day Avg. Cases' : oahu_7_day_avg_cases,
'Test Positivity Rate' : oahu_test_positivity_rate
}
oahu_df = pd.DataFrame(oahu_stats)
oahu_df.index = oahu_dates
fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(go.Bar(x=oahu_df.index, y=oahu_df['7 Day Avg. Cases'], name = 'Cases', marker_color = oahu_7_day_avg_cases_color),
secondary_y=False)
fig.add_trace(go.Scatter(x=oahu_df.index, y=oahu_df['Test Positivity Rate'], mode='markers', marker=dict(size=50, color=oahu_test_positivity_rate_color), marker_symbol='cross-dot', name='Test Positivity'),
secondary_y=True)
fig.update_traces(marker_line_color='rgb(84,84,84)', marker_line_width=3, opacity=0.8)
fig.update_xaxes(showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
fig.update_yaxes(range = [0, 140],
title_text='7 Day Avg. Cases',
title_font = {"size": 20, "color": 'rgb(140, 140, 140)'},
secondary_y=False,
showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
fig.update_yaxes(range = [0, 0.0625],
title_text='7 Day Avg. Test Positivity',
title_font = {"size": 20, "color": 'rgb(140, 140, 140)'},
secondary_y=True,
showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
tickformat='.1%',
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
fig.update_layout(autosize=False,
width=1500,
height=1000,
showlegend=False,
plot_bgcolor='white',
margin=dict(
autoexpand=False,
l=80,
r=80,
t=50
),
title={'y':1},
shapes=[
dict(
type="rect",
                        # x-reference uses normalized paper coordinates [0, 1]
xref="paper",
# y-reference is assigned to the plot paper [0,1]
yref="paper",
x0=0,
y0=0,
x1=0.29,
y1=1,
fillcolor="FireBrick",
opacity=0.2,
layer="below",
line_width=0,
),
dict(
type="rect",
                        # x-reference uses normalized paper coordinates [0, 1]
xref="paper",
# y-reference is assigned to the plot paper [0,1]
yref="paper",
x0=0.29,
y0=0,
x1=0.815,
y1=1,
fillcolor="orange",
opacity=0.2,
layer="below",
line_width=0,
),
dict(
type="rect",
                        # x-reference uses normalized paper coordinates [0, 1]
xref="paper",
# y-reference is assigned to the plot paper [0,1]
yref="paper",
x0=0.815,
y0=0,
x1=0.92,
y1=1,
fillcolor="gold",
opacity=0.2,
layer="below",
line_width=0,
),
{
'type': 'line',
'xref': 'paper',
'x0': 0.94,
'y0': 100, # use absolute value or variable here
'x1': 0,
'y1': 100, # ditto
'line': {
'color': 'FireBrick',
'width': 4,
'dash': 'dash',
},
},
{
'type': 'line',
'xref': 'paper',
'x0': 0.94,
'y0': 50, # use absolute value or variable here
'x1': 0,
'y1': 50, # ditto
'line': {
'color': 'Orange',
'width': 4,
'dash': 'dash',
},
},
{
'type': 'line',
'xref': 'paper',
'x0': 0.94,
'y0': 25, # use absolute value or variable here
'x1': 0,
'y1': 25, # ditto
'line': {
'color': 'Gold',
'width': 4,
'dash': 'dash',
},
},
],
)
if push_to_site_oahu == 'Y':
username = '' # your username
api_key = '' # your api key - go to profile > settings > regenerate key
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
py.plot(fig, filename = 'current_oahu', auto_open=True)
fig.write_html("./chart_studio/file_current_oahu.html")
else:
fig.show()
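# The dashed threshold lines above sit at 25 / 50 / 100 7-day average cases
# (gold / orange / firebrick). A hedged sketch of that tiering as a lookup;
# this helper is purely illustrative and the tier semantics are an
# assumption, not taken from the original code:
def _case_tier_color(avg_cases):
    # Map a 7-day average case count to the color of the highest dashed
    # threshold line it has crossed; green if below all thresholds.
    if avg_cases >= 100:
        return 'firebrick'
    if avg_cases >= 50:
        return 'orange'
    if avg_cases >= 25:
        return 'gold'
    return 'lightgreen'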
#########################################################
################ ~ Current Situation ~ ##################
#########################################################
############# ~ Cases ~ ##############
def create_case_situation_graph(hdc_cases, push_to_site_current_situation):
    """
    Creates the current-situation case rate graph for all counties, color-coded by threshold band
    """
hdc_cases['NewCases_Rate'] = hdc_cases['NewCases_Rate'].astype(float)
hdc_cases['Region'] = ['All County' if i == 'State' else i for i in hdc_cases['Region']]
low = 1
medium = 10
high = 25
critical = 38
def get_threshold_data(df, metric, region, threshold, ceiling):
threshold_list = []
for e, i in enumerate(df[f'{metric}'][df['Region'] == f'{region} County']):
if (e > len((df[f'{metric}'][df['Region'] == f'{region} County']))-2) & (i >= threshold) & (i <= ceiling):
threshold_list.append(i)
break
if (e > len((df[f'{metric}'][df['Region'] == f'{region} County']))-2):
threshold_list.append(np.nan)
break
if df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e+1] > threshold:
threshold_list.append(i)
continue
if df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e] > threshold:
threshold_list.append(i)
continue
if (e != 0) & (df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e-1] > threshold) & (i <= threshold):
threshold_list.append(i)
else:
threshold_list.append(np.nan)
return threshold_list
hawaii_medium_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Hawaii', low, medium)
hawaii_high_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Hawaii', medium, high)
hawaii_critical_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Hawaii', high, critical)
Honolulu_medium_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Honolulu', low, medium)
Honolulu_high_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Honolulu', medium, high)
Honolulu_critical_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Honolulu', high, critical)
Kauai_medium_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Kauai', low, medium)
Kauai_high_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Kauai', medium, high)
Kauai_critical_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Kauai', high, critical)
Maui_medium_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Maui', low, medium)
Maui_high_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Maui', medium, high)
Maui_critical_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'Maui', high, critical)
all_medium_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'All', low, medium)
all_high_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'All', medium, high)
all_critical_cases = get_threshold_data(hdc_cases, 'NewCases_Rate', 'All', high, critical)
fig = go.Figure()
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Hawaii County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Hawaii County'], mode='lines', name='Hawaii County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Hawaii County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Hawaii County'].where(hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Hawaii County'] <= low), mode='lines', name='Hawaii County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Hawaii County'], y=hawaii_medium_cases, mode='lines', name='Hawaii County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Hawaii County'], y=hawaii_high_cases, mode='lines', name='Hawaii County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Hawaii County'], y=hawaii_critical_cases, mode='lines', name='Hawaii County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Honolulu County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Honolulu County'], mode='lines', name='Honolulu County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Honolulu County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Honolulu County'].where(hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Honolulu County'] <= low), mode='lines', name='Honolulu County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Honolulu County'], y=Honolulu_medium_cases, mode='lines', name='Honolulu County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Honolulu County'], y=Honolulu_high_cases, mode='lines', name='Honolulu County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Honolulu County'], y=Honolulu_critical_cases, mode='lines', name='Honolulu County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Kauai County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Kauai County'], mode='lines', name='Kauai County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Kauai County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Kauai County'].where(hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Kauai County'] <= low), mode='lines', name='Kauai County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Kauai County'], y=Kauai_medium_cases, mode='lines', name='Kauai County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Kauai County'], y=Kauai_high_cases, mode='lines', name='Kauai County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Kauai County'], y=Kauai_critical_cases, mode='lines', name='Kauai County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Maui County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Maui County'], mode='lines', name='Maui County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Maui County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Maui County'].where(hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Maui County'] <= low), mode='lines', name='Maui County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Maui County'], y=Maui_medium_cases, mode='lines', name='Maui County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Maui County'], y=Maui_high_cases, mode='lines', name='Maui County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'Maui County'], y=Maui_critical_cases, mode='lines', name='Maui County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'All County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'All County'], mode='lines', name='All County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'All County'], y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'All County'].where(hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'All County'] <= low), mode='lines', name='All County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'All County'], y=all_medium_cases, mode='lines', name='All County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'All County'], y=all_high_cases, mode='lines', name='All County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_cases['Date'][hdc_cases['Region'] == 'All County'], y=all_critical_cases, mode='lines', name='All County',
line=dict(color='firebrick', width=4)))
fig.update_xaxes(showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
fig.update_yaxes(range=[0, hdc_cases['NewCases_Rate'].max()+1],
showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
annotations = []
annotations.append(dict(xref='paper', x=1.01, y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Hawaii County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Hawaii',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Honolulu County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Honolulu',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Kauai County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Kauai',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'Maui County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Maui',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_cases['NewCases_Rate'][hdc_cases['Region'] == 'All County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='All',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
fig.update_layout(autosize=False,
width=1500,
height=1000,
showlegend=False,
plot_bgcolor='white',
margin=dict(
autoexpand=False,
l=80,
r=80,
t=50
),
title={'y':1},
annotations=annotations,
shapes=[
{
'type': 'line',
'xref': 'paper',
'x0': 1,
'y0': 10, # use absolute value or variable here
'x1': 0,
'y1': 10, # ditto
'line': {
'color': '#f38181',
'width': 4,
'dash': 'dash',
},
},
{
'type': 'line',
'xref': 'paper',
'x0': 1,
'y0': 1, # use absolute value or variable here
'x1': 0,
'y1': 1, # ditto
'line': {
'color': '#fce38a',
'width': 4,
'dash': 'dash',
},
}
]
)
if push_to_site_current_situation == 'Y':
username = '' # your username
api_key = '' # your api key - go to profile > settings > regenerate key
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
py.plot(fig, filename = 'all_counties_cases', auto_open=True)
fig.write_html("./chart_studio/file_all_counties_cases.html")
else:
fig.show()
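# get_threshold_data above splits a series into NaN-padded segments so each
# colored trace only draws where the metric sits in its band. A simplified
# pure-Python sketch of the idea (hypothetical helper: plain lists, None in
# place of np.nan, and a single threshold instead of a band):
def _segment_above(values, threshold):
    # Keep a value when it, or an adjacent value, exceeds the threshold, so
    # consecutive segments stay visually connected; emit None elsewhere so
    # the plotted line breaks.
    out = []
    for i, v in enumerate(values):
        prev_above = i > 0 and values[i - 1] > threshold
        next_above = i < len(values) - 1 and values[i + 1] > threshold
        if v > threshold or prev_above or next_above:
            out.append(v)
        else:
            out.append(None)
    return out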
############# ~ Positivity ~ ##############
def create_positivity_situation_graph(hdc_positivity, push_to_site_current_situation):
    """
    Creates the current-situation test positivity graph for all counties, color-coded by threshold band
    """
hdc_positivity['% Pos'] = hdc_positivity['% Pos'].astype(float)
hdc_positivity['Region'] = ['All County' if i == 'State' else i for i in hdc_positivity['Region']]
low = 0.03
medium = 0.1
high = 0.2
critical = 0.31
def get_threshold_data(df, metric, region, threshold, ceiling):
threshold_list = []
for e, i in enumerate(df[f'{metric}'][df['Region'] == f'{region} County']):
            if (e > len((df[f'{metric}'][df['Region'] == f'{region} County']))-2) & (i >= threshold) & (i <= ceiling):
                threshold_list.append(i)
                break
if (e > len((df[f'{metric}'][df['Region'] == f'{region} County']))-2):
threshold_list.append(np.nan)
break
if df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e+1] > threshold:
threshold_list.append(i)
continue
if df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e] > threshold:
threshold_list.append(i)
continue
if (e != 0) & (df[f'{metric}'][df['Region'] == f'{region} County'].iloc[e-1] > threshold) & (i <= threshold):
threshold_list.append(i)
else:
threshold_list.append(np.nan)
return threshold_list
hawaii_medium_cases = get_threshold_data(hdc_positivity, '% Pos', 'Hawaii', low, medium)
hawaii_high_cases = get_threshold_data(hdc_positivity, '% Pos', 'Hawaii', medium, high)
hawaii_critical_cases = get_threshold_data(hdc_positivity, '% Pos', 'Hawaii', high, critical)
Honolulu_medium_cases = get_threshold_data(hdc_positivity, '% Pos', 'Honolulu', low, medium)
Honolulu_high_cases = get_threshold_data(hdc_positivity, '% Pos', 'Honolulu', medium, high)
Honolulu_critical_cases = get_threshold_data(hdc_positivity, '% Pos', 'Honolulu', high, critical)
Kauai_medium_cases = get_threshold_data(hdc_positivity, '% Pos', 'Kauai', low, medium)
Kauai_high_cases = get_threshold_data(hdc_positivity, '% Pos', 'Kauai', medium, high)
Kauai_critical_cases = get_threshold_data(hdc_positivity, '% Pos', 'Kauai', high, critical)
Maui_medium_cases = get_threshold_data(hdc_positivity, '% Pos', 'Maui', low, medium)
Maui_high_cases = get_threshold_data(hdc_positivity, '% Pos', 'Maui', medium, high)
Maui_critical_cases = get_threshold_data(hdc_positivity, '% Pos', 'Maui', high, critical)
all_medium_cases = get_threshold_data(hdc_positivity, '% Pos', 'All', low, medium)
all_high_cases = get_threshold_data(hdc_positivity, '% Pos', 'All', medium, high)
all_critical_cases = get_threshold_data(hdc_positivity, '% Pos', 'All', high, critical)
fig = go.Figure()
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Hawaii County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Hawaii County'], mode='lines', name='Hawaii County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Hawaii County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Hawaii County'].where(hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Hawaii County'] <= low), mode='lines', name='Hawaii County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Hawaii County'], y=hawaii_medium_cases, mode='lines', name='Hawaii County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Hawaii County'], y=hawaii_high_cases, mode='lines', name='Hawaii County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Hawaii County'], y=hawaii_critical_cases, mode='lines', name='Hawaii County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Honolulu County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Honolulu County'], mode='lines', name='Honolulu County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Honolulu County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Honolulu County'].where(hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Honolulu County'] <= low), mode='lines', name='Honolulu County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Honolulu County'], y=Honolulu_medium_cases, mode='lines', name='Honolulu County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Honolulu County'], y=Honolulu_high_cases, mode='lines', name='Honolulu County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Honolulu County'], y=Honolulu_critical_cases, mode='lines', name='Honolulu County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Kauai County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Kauai County'], mode='lines', name='Kauai County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Kauai County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Kauai County'].where(hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Kauai County'] <= low), mode='lines', name='Kauai County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Kauai County'], y=Kauai_medium_cases, mode='lines', name='Kauai County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Kauai County'], y=Kauai_high_cases, mode='lines', name='Kauai County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Kauai County'], y=Kauai_critical_cases, mode='lines', name='Kauai County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Maui County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Maui County'], mode='lines', name='Maui County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Maui County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Maui County'].where(hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Maui County'] <= low), mode='lines', name='Maui County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Maui County'], y=Maui_medium_cases, mode='lines', name='Maui County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Maui County'], y=Maui_high_cases, mode='lines', name='Maui County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'Maui County'], y=Maui_critical_cases, mode='lines', name='Maui County',
line=dict(color='firebrick', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'All County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'All County'], mode='lines', name='All County',
line=dict(color='white', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'All County'], y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'All County'].where(hdc_positivity['% Pos'][hdc_positivity['Region'] == 'All County'] <= low), mode='lines', name='All County',
line=dict(color='lightgreen', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'All County'], y=all_medium_cases, mode='lines', name='All County',
line=dict(color='#fce38a', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'All County'], y=all_high_cases, mode='lines', name='All County',
line=dict(color='#f38181', width=4)))
fig.add_trace(go.Scatter(x=hdc_positivity['Date'][hdc_positivity['Region'] == 'All County'], y=all_critical_cases, mode='lines', name='All County',
line=dict(color='firebrick', width=4)))
fig.update_xaxes(showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
fig.update_yaxes(range=[0, hdc_positivity['% Pos'].max()+.01],
showline=True,
showgrid=False,
showticklabels=True,
linecolor='rgb(140, 140, 140)',
linewidth=2,
tickformat='.1%',
ticks='outside',
tickfont=dict(
family='Arial',
size=16,
color='rgb(140, 140, 140)',
))
annotations = []
annotations.append(dict(xref='paper', x=1.01, y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Hawaii County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Hawaii',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Honolulu County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Honolulu',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Kauai County'].iloc[-1],
xanchor='left', yanchor='middle',
                            text='Kauai',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'Maui County'].iloc[-2],
xanchor='left', yanchor='middle',
                            text='Maui',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
annotations.append(dict(xref='paper', x=1.01, y=hdc_positivity['% Pos'][hdc_positivity['Region'] == 'All County'].iloc[-2],
xanchor='left', yanchor='middle',
                            text='All',
font=dict(family='Arial',
size=16,
color='rgb(140, 140, 140)'),
showarrow=False))
fig.update_layout(autosize=False,
width=1500,
height=1000,
showlegend=False,
plot_bgcolor='white',
margin=dict(
autoexpand=False,
l=80,
r=80,
t=50
),
title={'y':1},
annotations=annotations,
shapes=[
{
'type': 'line',
'xref': 'paper',
'x0': 1,
'y0': .1, # use absolute value or variable here
'x1': 0,
'y1': .1, # ditto
'line': {
'color': '#f38181',
'width': 4,
'dash': 'dash',
},
},
{
'type': 'line',
'xref': 'paper',
'x0': 1,
'y0': 0.03, # use absolute value or variable here
'x1': 0,
'y1': 0.03, # ditto
'line': {
'color': '#fce38a',
'width': 4,
'dash': 'dash',
},
}
]
)
if push_to_site_current_situation == 'Y':
username = '' # your username
api_key = '' # your api key - go to profile > settings > regenerate key
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
py.plot(fig, filename = 'all_counties_positivity', auto_open=True)
fig.write_html("./chart_studio/file_all_counties_positivity.html")
else:
fig.show()
# test/functional/test_functional.py (thenetcircle/dino-service)
import json
import time
import arrow
from dinofw.rest.queries import AbstractQuery
from dinofw.utils import utcnow_ts, to_dt
from dinofw.utils.config import MessageTypes, ErrorCodes
from test.base import BaseTest
from test.functional.base_functional import BaseServerRestApi
class TestServerRestApi(BaseServerRestApi):
def test_get_groups_for_user_before_joining(self):
self.assert_groups_for_user(0)
def test_get_groups_for_user_after_joining(self):
self.create_and_join_group()
self.assert_groups_for_user(1)
def test_leaving_a_group(self):
self.assert_groups_for_user(0)
group_id = self.create_and_join_group()
self.assert_groups_for_user(1)
self.user_leaves_group(group_id)
self.assert_groups_for_user(0)
def test_another_user_joins_group(self):
self.assert_groups_for_user(0, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(0, user_id=BaseTest.OTHER_USER_ID)
# first user joins, check that other user isn't in any groups
group_id = self.create_and_join_group()
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(0, user_id=BaseTest.OTHER_USER_ID)
# other user also joins, check that both are in a group now
self.user_joins_group(group_id, user_id=BaseTest.OTHER_USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.OTHER_USER_ID)
def test_users_in_group(self):
self.assert_groups_for_user(0, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(0, user_id=BaseTest.OTHER_USER_ID)
# first user joins, check that other user isn't in any groups
group_id = self.create_and_join_group()
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(0, user_id=BaseTest.OTHER_USER_ID)
# other user also joins, check that both are in a group now
self.user_joins_group(group_id, user_id=BaseTest.OTHER_USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.OTHER_USER_ID)
def test_update_user_statistics_in_group(self):
group_id = self.create_and_join_group()
now_ts = self.update_user_stats_to_now(group_id, BaseTest.USER_ID)
user_stats = self.get_user_stats(group_id, BaseTest.USER_ID)
self.assertEqual(group_id, user_stats["stats"]["group_id"])
self.assertEqual(BaseTest.USER_ID, user_stats["stats"]["user_id"])
self.assertEqual(now_ts, user_stats["stats"]["last_read_time"])
def test_group_unhidden_on_new_message_for_all_users(self):
# both users join a new group
group_id = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
# the group should not be hidden for either user at this time
self.assert_hidden_for_user(False, group_id, BaseTest.USER_ID)
self.assert_hidden_for_user(False, group_id, BaseTest.OTHER_USER_ID)
# both users should have the group in the list
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.OTHER_USER_ID)
# hide the group for the other user
self.update_hide_group_for(group_id, True, BaseTest.OTHER_USER_ID)
# make sure the group is hidden for the other user
self.assert_hidden_for_user(False, group_id, BaseTest.USER_ID)
self.assert_hidden_for_user(True, group_id, BaseTest.OTHER_USER_ID)
# other user doesn't have any groups since they hid it
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(0, user_id=BaseTest.OTHER_USER_ID)
# sending a message should un-hide the group for all users in it
self.send_message_to_group_from(group_id, BaseTest.USER_ID)
# should not be hidden anymore for any user
self.assert_hidden_for_user(False, group_id, BaseTest.USER_ID)
self.assert_hidden_for_user(False, group_id, BaseTest.OTHER_USER_ID)
# both users have 1 group now since none is hidden anymore
self.assert_groups_for_user(1, user_id=BaseTest.USER_ID)
self.assert_groups_for_user(1, user_id=BaseTest.OTHER_USER_ID)
def test_one_user_deletes_some_history(self):
# both users join a new group
group_id = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
# 'join' action log should exist for both users
self.assert_messages_in_group(group_id, user_id=BaseTest.USER_ID, amount=1)
self.assert_messages_in_group(
group_id, user_id=BaseTest.OTHER_USER_ID, amount=1
)
# each user sends 4 messages each, then we delete some of them for one user
messages_to_send_each = 4
self.send_message_to_group_from(
group_id, user_id=BaseTest.USER_ID, amount=messages_to_send_each
)
messages = self.send_message_to_group_from(
group_id,
user_id=BaseTest.OTHER_USER_ID,
amount=messages_to_send_each,
)
# first user deletes everything before, and including, the other user's first message
self.update_delete_before(
group_id, delete_before=messages[0]["created_at"], user_id=BaseTest.USER_ID
)
# first user should have 3 left, since everything up to and including the other
# user's first message was deleted; the second user didn't delete anything, so
# they still see all 8 messages plus 1 more for the 'join' action log
self.assert_messages_in_group(
group_id, user_id=BaseTest.USER_ID, amount=messages_to_send_each - 1
)
self.assert_messages_in_group(
group_id, user_id=BaseTest.OTHER_USER_ID, amount=messages_to_send_each * 2 + 1
)
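The arithmetic above follows from a strict-inequality cut-off: a user only sees messages created strictly after their `delete_before` timestamp, so the message whose timestamp is used as the cut-off disappears as well. A minimal pure-Python sketch of that assumed filter (the names here are illustrative, not dinofw internals):

```python
from dataclasses import dataclass


@dataclass
class Message:
    created_at: float  # unix timestamp, seconds


def visible_messages(messages, delete_before):
    # strictly-after filter: the message at the cut-off is hidden too
    return [m for m in messages if m.created_at > delete_before]


# 1 'join' action log followed by 4 messages from each user, in send order
history = [Message(created_at=float(ts)) for ts in range(9)]

# delete everything up to and including the other user's first message (ts=5.0)
remaining = visible_messages(history, delete_before=5.0)
```

With that rule, 3 of the 9 entries survive for the deleting user, matching the assertion above.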
def test_joining_a_group_changes_last_update_time(self):
group_id = self.create_and_join_group(BaseTest.USER_ID)
group = self.get_group(group_id)
last_update_time = group["group"]["updated_at"]
# update time should have changed from sending a new message
self.send_message_to_group_from(group_id, user_id=BaseTest.USER_ID)
group = self.get_group(group_id)
self.assertNotEqual(group["group"]["updated_at"], last_update_time)
# joining should also bump the update time
last_update_time = group["group"]["updated_at"]
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
group = self.get_group(group_id)
self.assertNotEqual(group["group"]["updated_at"], last_update_time)
def test_total_unread_count_changes_when_user_read_time_changes(self):
group_id1 = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id1, BaseTest.OTHER_USER_ID)
self.send_message_to_group_from(
group_id1, user_id=BaseTest.USER_ID, amount=10
)
self.assert_total_unread_count(user_id=BaseTest.USER_ID, unread_count=0)
self.assert_total_unread_count(user_id=BaseTest.OTHER_USER_ID, unread_count=10)
group_id2 = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id2, BaseTest.OTHER_USER_ID)
self.send_message_to_group_from(
group_id2, user_id=BaseTest.USER_ID, amount=10
)
self.assert_total_unread_count(user_id=BaseTest.USER_ID, unread_count=0)
self.assert_total_unread_count(user_id=BaseTest.OTHER_USER_ID, unread_count=20)
# sending a message should mark the group as "read"
self.send_message_to_group_from(group_id2, user_id=BaseTest.OTHER_USER_ID)
self.assert_total_unread_count(user_id=BaseTest.USER_ID, unread_count=1)
self.assert_total_unread_count(user_id=BaseTest.OTHER_USER_ID, unread_count=10)
# first user should now have 2 unread
self.send_message_to_group_from(group_id2, user_id=BaseTest.OTHER_USER_ID)
self.assert_total_unread_count(user_id=BaseTest.USER_ID, unread_count=2)
self.assert_total_unread_count(user_id=BaseTest.OTHER_USER_ID, unread_count=10)
# first user should now have 3 unread
self.send_message_to_group_from(group_id1, user_id=BaseTest.OTHER_USER_ID)
self.assert_total_unread_count(user_id=BaseTest.USER_ID, unread_count=3)
self.assert_total_unread_count(user_id=BaseTest.OTHER_USER_ID, unread_count=0)
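One way to read the expectations above: each user keeps a per-group `last_read_time`, sending a message in a group implicitly marks it read for the sender, and the total unread count is summed across all of the user's groups. A hedged sketch of that bookkeeping (helper names are hypothetical):

```python
def unread_in_group(message_times, last_read_time):
    # a message is unread if it arrived after the user's last read
    return sum(1 for t in message_times if t > last_read_time)


def total_unread(groups, last_read_per_group):
    # total unread is simply the sum over all of the user's groups
    return sum(
        unread_in_group(times, last_read_per_group[group_id])
        for group_id, times in groups.items()
    )


# two groups; the user has read group 1 fully, group 2 not at all
groups = {"group1": [1.0, 2.0], "group2": [3.0, 4.0]}
last_read = {"group1": 2.0, "group2": 0.0}
count = total_unread(groups, last_read)
```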
def test_pin_group_changes_ordering(self):
group_id1 = self.create_and_join_group(BaseTest.USER_ID)
group_id2 = self.create_and_join_group(BaseTest.USER_ID)
self.send_message_to_group_from(group_id1, user_id=BaseTest.USER_ID)
# group 2 should now be on top
self.send_message_to_group_from(group_id2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id2, group_id1)
# should be in the other order after pinning
self.pin_group_for(group_id1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id1, group_id2)
# should not change order since group 1 is pinned
self.send_message_to_group_from(group_id2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id1, group_id2)
# after unpinning the group with the latest message should be first
self.unpin_group_for(group_id1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id2, group_id1)
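The ordering this test relies on can be modeled as a two-part sort key: pinned groups form a tier above unpinned ones, and within a tier the group with the most recent message comes first. A sketch under those assumptions:

```python
def group_sort_key(stats):
    # False sorts before True, so pinned groups (not pinned == False) come
    # first; within a tier, the larger (negated) last_message_time wins
    return (not stats["pinned"], -stats["last_message_time"])


groups = [
    {"id": "group1", "pinned": True, "last_message_time": 1.0},
    {"id": "group2", "pinned": False, "last_message_time": 2.0},
]
pinned_order = [g["id"] for g in sorted(groups, key=group_sort_key)]

# after unpinning, recency alone decides the order
groups[0]["pinned"] = False
unpinned_order = [g["id"] for g in sorted(groups, key=group_sort_key)]
```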
def test_last_read_updated_in_history_api(self):
group_id = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
histories = self.histories_for(group_id, BaseTest.USER_ID)
last_read_user_1_before = self.last_read_in_histories_for(
histories, BaseTest.USER_ID
)
last_read_user_2_before = self.last_read_in_histories_for(
histories, BaseTest.OTHER_USER_ID
)
self.send_message_to_group_from(group_id, user_id=BaseTest.USER_ID)
histories = self.histories_for(group_id, BaseTest.USER_ID)
last_read_user_1_after = self.last_read_in_histories_for(
histories, BaseTest.USER_ID
)
last_read_user_2_after = self.last_read_in_histories_for(
histories, BaseTest.OTHER_USER_ID
)
self.assertNotEqual(last_read_user_1_before, last_read_user_1_after)
self.assertEqual(last_read_user_2_before, last_read_user_2_after)
def test_last_read_removed_on_leave(self):
group_id = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
histories = self.histories_for(group_id, BaseTest.USER_ID)
self.assert_in_histories(BaseTest.USER_ID, histories, is_in=True)
self.assert_in_histories(BaseTest.OTHER_USER_ID, histories, is_in=True)
self.user_leaves_group(group_id, BaseTest.OTHER_USER_ID)
self.send_message_to_group_from(group_id, user_id=BaseTest.USER_ID)
histories = self.histories_for(group_id, BaseTest.USER_ID)
self.assert_in_histories(BaseTest.USER_ID, histories, is_in=True)
self.assert_in_histories(BaseTest.OTHER_USER_ID, histories, is_in=False)
def test_group_exists_when_leaving(self):
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(0, len(groups))
group_id = self.create_and_join_group(BaseTest.USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(1, len(groups))
self.user_leaves_group(group_id)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(0, len(groups))
def test_highlight_group_for_other_user(self):
group_id1 = self.create_and_join_group(BaseTest.USER_ID)
group_id2 = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id1, BaseTest.OTHER_USER_ID)
self.user_joins_group(group_id2, BaseTest.OTHER_USER_ID)
self.send_message_to_group_from(group_id1, user_id=BaseTest.USER_ID)
self.send_message_to_group_from(group_id2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id2, group_id1)
self.highlight_group_for_user(group_id1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id1, group_id2)
def test_highlight_makes_group_unhidden(self):
group_id = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id, BaseTest.OTHER_USER_ID)
# just joined a group, should have one
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(1, len(groups))
# after hiding we should not have any groups anymore
self.update_hide_group_for(group_id, hide=True, user_id=BaseTest.USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(0, len(groups))
# make sure it becomes unhidden if highlighted by someone
self.highlight_group_for_user(group_id, BaseTest.USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(1, len(groups))
def _test_delete_highlight_changes_order(self):
# TODO: don't delete highlight, just call the history api for the group
group_id1 = self.create_and_join_group(BaseTest.USER_ID)
group_id2 = self.create_and_join_group(BaseTest.USER_ID)
self.user_joins_group(group_id1, BaseTest.OTHER_USER_ID)
self.user_joins_group(group_id2, BaseTest.OTHER_USER_ID)
self.send_message_to_group_from(group_id1, user_id=BaseTest.USER_ID)
self.send_message_to_group_from(group_id2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id2, group_id1)
self.highlight_group_for_user(group_id1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id1, group_id2)
# back to normal
self.delete_highlight_group_for_user(group_id1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_id2, group_id1)
def _test_highlight_ordered_higher_than_pin(self):
# TODO: don't delete highlight, just call the history api for the group
group_1 = self.create_and_join_group(BaseTest.USER_ID)
group_2 = self.create_and_join_group(BaseTest.USER_ID)
group_3 = self.create_and_join_group(BaseTest.USER_ID)
# first send a message to each group with a short delay
for group_id in [group_1, group_2, group_3]:
self.send_message_to_group_from(
group_id, user_id=BaseTest.USER_ID
)
# last group to receive a message should be on top
self.assert_order_of_groups(BaseTest.USER_ID, group_3, group_2, group_1)
# pinning a group should put it at the top
self.pin_group_for(group_2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_2, group_3, group_1)
# highlight has priority over pinning, so group 1 should be above group 2 now
self.highlight_group_for_user(group_1, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_1, group_2, group_3)
# sending a message to group 3 should not change anything, since 1 and 2 are highlighted and pinned respectively
self.send_message_to_group_from(group_3, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_1, group_2, group_3)
# group 2 and 3 are pinned, but 3 has more recent message now
self.pin_group_for(group_3, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_1, group_3, group_2)
# group 1 now has the oldest message and is no longer highlighted, so it should be at the bottom
self.delete_highlight_group_for_user(group_1, BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_3, group_2, group_1)
# group 2 should be on top after highlighting it
self.highlight_group_for_user(group_2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_2, group_3, group_1)
# group 1 has a more recent highlight than group 2
self.highlight_group_for_user(group_1, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_1, group_2, group_3)
# pinning group 2 should not change anything since highlight has priority over pinning
self.pin_group_for(group_2, user_id=BaseTest.USER_ID)
self.assert_order_of_groups(BaseTest.USER_ID, group_1, group_2, group_3)
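The precedence this (currently disabled) test encodes is highlight over pin over recency, with the most recent highlight or message winning inside each tier. A compact sketch of such a key, as an assumption about the intended ranking rather than dinofw's actual query:

```python
def group_sort_key(stats):
    # tier 0: highlighted, tier 1: pinned, tier 2: everything else;
    # within a tier, the most recent highlight/message comes first
    if stats.get("highlight_time") is not None:
        tier, ts = 0, stats["highlight_time"]
    elif stats["pinned"]:
        tier, ts = 1, stats["last_message_time"]
    else:
        tier, ts = 2, stats["last_message_time"]
    return (tier, -ts)


# mirrors the state after highlighting group 1 and pinning group 2
groups = [
    {"id": "group1", "pinned": False, "highlight_time": 10.0, "last_message_time": 1.0},
    {"id": "group2", "pinned": True, "highlight_time": None, "last_message_time": 2.0},
    {"id": "group3", "pinned": False, "highlight_time": None, "last_message_time": 3.0},
]
order = [g["id"] for g in sorted(groups, key=group_sort_key)]
```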
def test_change_group_name(self):
self.create_and_join_group(BaseTest.USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
original_name = groups[0]["group"]["name"]
self.assertIsNotNone(original_name)
new_name = "new test name for group"
self.edit_group(groups[0]["group"]["group_id"], name=new_name)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(groups[0]["group"]["name"], new_name)
self.assertNotEqual(original_name, new_name)
def test_change_group_owner(self):
self.create_and_join_group(BaseTest.USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(BaseTest.USER_ID, groups[0]["group"]["owner_id"])
self.edit_group(groups[0]["group"]["group_id"], owner=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(BaseTest.USER_ID)
self.assertEqual(BaseTest.OTHER_USER_ID, groups[0]["group"]["owner_id"])
def test_receiver_unread_count(self):
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=True)
self.assertEqual(groups[0]["stats"]["unread"], 0)
self.assertEqual(groups[0]["stats"]["receiver_unread"], 1)
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=True)
self.assertEqual(groups[0]["stats"]["unread"], 0)
self.assertEqual(groups[0]["stats"]["receiver_unread"], 2)
groups = self.groups_for_user(user_id=BaseTest.OTHER_USER_ID, count_unread=True)
self.assertEqual(groups[0]["stats"]["unread"], 2)
self.assertEqual(groups[0]["stats"]["receiver_unread"], 0)
self.send_1v1_message(
user_id=BaseTest.OTHER_USER_ID, receiver_id=BaseTest.USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.OTHER_USER_ID, count_unread=True)
self.assertEqual(groups[0]["stats"]["unread"], 0)
self.assertEqual(groups[0]["stats"]["receiver_unread"], 1)
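`unread` and `receiver_unread` are two views of the same 1-to-1 conversation: sending a message marks the conversation read for the sender, so only the receiving side's counter grows. A small simulation of that assumed rule:

```python
def unread_per_user(messages):
    """messages: list of (sender, receiver, ts) in send order.

    Sending marks the conversation read for the sender up to that
    message, so only the receiving side accumulates unread.
    """
    last_read = {}
    times = []
    for sender, receiver, ts in messages:
        times.append(ts)
        last_read[sender] = ts
        last_read.setdefault(receiver, 0.0)
    return {
        user: sum(1 for t in times if t > read_ts)
        for user, read_ts in last_read.items()
    }


# alice sends twice, then bob replies once
counts = unread_per_user([
    ("alice", "bob", 1.0),
    ("alice", "bob", 2.0),
    ("bob", "alice", 3.0),
])
```

After bob's reply, alice has 1 unread and bob has 0, which is the pattern the assertions above check from both sides.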
def test_unread_count_is_negative_if_query_says_do_not_count(self):
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False, receiver_stats=False)
self.assertEqual(groups[0]["stats"]["unread"], -1)
self.assertEqual(groups[0]["stats"]["receiver_unread"], -1)
def test_last_updated_at_changes_on_send_msg(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_create_action_log_in_all_groups_for_user(self):
group_1 = self.send_1v1_message()["group_id"]
group_2 = self.create_and_join_group()
group_3 = self.create_and_join_group()
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(2, stats["group_amount"])
self.assertEqual(1, stats["one_to_one_amount"])
# this group should only have 1 message, which we sent
self.assertEqual(1, len(self.histories_for(group_1)["messages"]))
# these two should have no messages
self.assertEqual(0, len(self.histories_for(group_2)["messages"]))
self.assertEqual(0, len(self.histories_for(group_3)["messages"]))
self.create_action_log_in_all_groups_for_user()
# all groups should now have one extra message, the action log we created
self.assertEqual(2, len(self.histories_for(group_1)["messages"]))
self.assertEqual(1, len(self.histories_for(group_2)["messages"]))
self.assertEqual(1, len(self.histories_for(group_3)["messages"]))
def test_last_updated_at_changes_on_highlight(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.highlight_group_for_user(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_last_updated_at_changes_on_bookmark_true(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.bookmark_group(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
bookmark=True,
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_last_updated_at_changes_on_bookmark_false(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.bookmark_group(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
bookmark=True,
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
bookmarked_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(bookmarked_updated_at, last_updated_at)
self.bookmark_group(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
bookmark=False,
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
not_bookmarked_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(not_bookmarked_updated_at, bookmarked_updated_at)
def test_last_updated_at_changes_on_hide_true(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.update_hide_group_for(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
hide=True
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False, hidden=True)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_last_updated_at_changes_on_hide_false(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.update_hide_group_for(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
hide=True
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False, hidden=True)
hidden_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(hidden_updated_at, last_updated_at)
self.update_hide_group_for(
group_id=groups[0]["group"]["group_id"],
user_id=BaseTest.USER_ID,
hide=False
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
not_hidden_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(not_hidden_updated_at, hidden_updated_at)
def test_last_updated_at_changes_on_name_change(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
new_name = "new test name for group"
self.edit_group(groups[0]["group"]["group_id"], name=new_name)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
new_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(new_updated_at, last_updated_at)
def test_last_updated_at_changed_on_update_attachment(self):
group_message = self.send_1v1_message(message_type=MessageTypes.IMAGE)
self.assertEqual(MessageTypes.IMAGE, group_message["message_type"])
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.update_attachment(
group_message["message_id"], group_message["created_at"]
)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
new_updated_at = groups[0]["stats"]["last_updated_time"]
self.assertGreater(new_updated_at, last_updated_at)
def test_last_updated_at_not_changed_on_create_attachment(self):
self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
last_updated_at = groups[0]["stats"]["last_updated_time"]
group_message = self.send_1v1_message(message_type=MessageTypes.IMAGE)
self.assertEqual(MessageTypes.IMAGE, group_message["message_type"])
groups = self.groups_for_user(user_id=BaseTest.USER_ID, count_unread=False)
create_attachment_updated_at = groups[0]["stats"]["last_updated_time"]
# TODO: do we want to update it on template creation as well as when processing is done?
self.assertGreater(create_attachment_updated_at, last_updated_at)
def test_last_updated_at_changes_on_no_more_unread(self):
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.OTHER_USER_ID)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.histories_for(
groups[0]["group"]["group_id"], user_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(user_id=BaseTest.OTHER_USER_ID)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_last_updated_at_changes_on_new_message_other_user(self):
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(
user_id=BaseTest.OTHER_USER_ID, count_unread=False
)
last_updated_at = groups[0]["stats"]["last_updated_time"]
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_for_user(
user_id=BaseTest.OTHER_USER_ID, count_unread=False
)
self.assertGreater(groups[0]["stats"]["last_updated_time"], last_updated_at)
def test_get_groups_updated_since(self):
when = utcnow_ts() - 100
groups = self.groups_updated_since(user_id=BaseTest.OTHER_USER_ID, since=when)
self.assertEqual(0, len(groups))
self.send_1v1_message(
user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID
)
groups = self.groups_updated_since(user_id=BaseTest.OTHER_USER_ID, since=when)
self.assertEqual(1, len(groups))
groups = self.groups_updated_since(
user_id=BaseTest.OTHER_USER_ID, since=when + 500
)
self.assertEqual(0, len(groups))
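`groups_updated_since` behaves like a threshold filter on each group's `last_updated_time`; the exact boundary semantics (inclusive vs. exclusive) are an assumption in this sketch:

```python
def updated_since(group_stats, since):
    # keep groups touched at or after 'since' (boundary semantics assumed)
    return [g for g in group_stats if g["last_updated_time"] >= since]


stats = [{"group_id": "g1", "last_updated_time": 1000.0}]
recent = updated_since(stats, since=900.0)   # threshold in the past
stale = updated_since(stats, since=1500.0)   # threshold in the future
```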
def test_update_attachment(self):
group_message = self.send_1v1_message(message_type=MessageTypes.IMAGE)
self.assertEqual(MessageTypes.IMAGE, group_message["message_type"])
history = self.histories_for(group_message["group_id"])
all_attachments = self.attachments_for(group_message["group_id"])
# a 'placeholder' message should have been created, but no attachment
self.assertEqual(1, len(history["messages"]))
self.assertEqual(0, len(all_attachments))
attachment = self.update_attachment(
group_message["message_id"], group_message["created_at"]
)
history = self.histories_for(group_message["group_id"])
all_attachments = self.attachments_for(group_message["group_id"])
# now the message should have been updated, and the attachment created
self.assertEqual(group_message["message_id"], attachment["message_id"])
self.assertNotEqual(attachment["created_at"], attachment["updated_at"])
self.assertEqual(1, len(history["messages"]))
self.assertEqual(1, len(all_attachments))
def test_count_group_types_in_user_stats(self):
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(0, stats["one_to_one_amount"])
self.send_1v1_message()
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(1, stats["one_to_one_amount"])
self.create_and_join_group()
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(1, stats["group_amount"])
self.assertEqual(1, stats["one_to_one_amount"])
self.create_and_join_group()
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(2, stats["group_amount"])
self.assertEqual(1, stats["one_to_one_amount"])
self.send_1v1_message(receiver_id=8844)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(2, stats["group_amount"])
self.assertEqual(2, stats["one_to_one_amount"])
def test_user_stats_group_read_and_send_times(self):
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(0, stats["one_to_one_amount"])
group_message = self.send_1v1_message()
stats = self.get_global_user_stats(hidden=False)
last_sent_time_first = stats["last_sent_time"]
last_sent_group_id = stats["last_sent_group_id"]
self.assertEqual(group_message["group_id"], last_sent_group_id)
self.assertIsNotNone(last_sent_time_first)
group_message = self.send_1v1_message()
stats = self.get_global_user_stats(hidden=False)
last_sent_time_second = stats["last_sent_time"]
last_sent_group_id = stats["last_sent_group_id"]
self.assertEqual(group_message["group_id"], last_sent_group_id)
self.assertNotEqual(last_sent_time_first, last_sent_time_second)
def test_create_attachment_updates_group_overview(self):
group_message = self.send_1v1_message()
histories = self.histories_for(group_message["group_id"])
self.assertEqual(1, len(histories["messages"]))
groups = self.groups_for_user()
last_msg_overview = groups[0]["group"]["last_message_overview"]
self.update_attachment(group_message["message_id"], group_message["created_at"])
groups = self.groups_for_user()
new_msg_overview = groups[0]["group"]["last_message_overview"]
self.assertNotEqual(last_msg_overview, new_msg_overview)
histories = self.histories_for(group_message["group_id"])
self.assertEqual(1, len(histories["messages"]))
def test_receiver_highlight_exists_in_group_list(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user()
receiver_highlight_time = groups[0]["stats"]["receiver_highlight_time"]
self.assertIsNotNone(receiver_highlight_time)
self.highlight_group_for_user(group_message["group_id"], user_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user()
new_receiver_highlight_time = groups[0]["stats"]["receiver_highlight_time"]
self.assertNotEqual(receiver_highlight_time, new_receiver_highlight_time)
def test_receiver_hide_exists_in_group_list(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user()
receiver_hide = groups[0]["stats"]["receiver_hide"]
hide = groups[0]["stats"]["hide"]
self.assertFalse(receiver_hide)
self.assertFalse(hide)
self.update_hide_group_for(group_message["group_id"], hide=True, user_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user()
new_receiver_hide = groups[0]["stats"]["receiver_hide"]
new_hide = groups[0]["stats"]["hide"]
self.assertTrue(new_receiver_hide)
self.assertFalse(new_hide)
def test_receiver_delete_before_in_group_list(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user()
receiver_delete_before = groups[0]["stats"]["receiver_delete_before"]
delete_before = groups[0]["stats"]["delete_before"]
self.assertEqual(receiver_delete_before, delete_before)
delete_time = utcnow_ts()
self.update_delete_before(group_message["group_id"], delete_time, user_id=BaseTest.OTHER_USER_ID)
groups = self.groups_for_user()
new_receiver_delete_before = groups[0]["stats"]["receiver_delete_before"]
new_delete_before = groups[0]["stats"]["delete_before"]
self.assertEqual(delete_before, new_delete_before)
self.assertEqual(delete_time, new_receiver_delete_before)
self.assertNotEqual(receiver_delete_before, new_receiver_delete_before)
def test_update_bookmark(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user()
bookmark = groups[0]["stats"]["bookmark"]
self.assertFalse(bookmark)
self.bookmark_group(group_message["group_id"], bookmark=True)
groups = self.groups_for_user()
bookmark = groups[0]["stats"]["bookmark"]
self.assertTrue(bookmark)
self.bookmark_group(group_message["group_id"], bookmark=False)
groups = self.groups_for_user()
bookmark = groups[0]["stats"]["bookmark"]
self.assertFalse(bookmark)
def test_mark_all_groups_as_read_removes_bookmark(self):
group_message = self.send_1v1_message()
self.bookmark_group(group_message["group_id"], bookmark=True)
stats = self.groups_for_user()[0]["stats"]
self.assertTrue(stats["bookmark"])
self.mark_as_read()
stats = self.groups_for_user()[0]["stats"]
self.assertFalse(stats["bookmark"])
def test_mark_all_groups_as_read_resets_count(self):
self.send_1v1_message(user_id=BaseTest.OTHER_USER_ID, receiver_id=BaseTest.USER_ID)
stats = self.groups_for_user(count_unread=True)[0]["stats"]
self.assertEqual(1, stats["unread"])
self.mark_as_read()
stats = self.groups_for_user(count_unread=True)[0]["stats"]
self.assertEqual(0, stats["unread"])
def test_groups_for_user_only_unread_includes_bookmarks(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user(only_unread=True)
self.assertEqual(0, len(groups))
self.bookmark_group(group_message["group_id"], bookmark=True)
groups = self.groups_for_user(only_unread=True)
self.assertEqual(1, len(groups))
self.assertEqual(group_message["group_id"], groups[0]["group"]["group_id"])
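The only-unread listing treats a bookmark as a manual unread marker: a group qualifies if it has unread messages or if the user bookmarked it. A sketch under that assumption:

```python
def only_unread(groups):
    # bookmarked groups count as unread even with nothing new in them
    return [g for g in groups if g["unread"] > 0 or g["bookmark"]]


groups = [
    {"group_id": "g1", "unread": 0, "bookmark": True},
    {"group_id": "g2", "unread": 0, "bookmark": False},
    {"group_id": "g3", "unread": 2, "bookmark": False},
]
listed = [g["group_id"] for g in only_unread(groups)]
```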
def test_bookmark_removed_on_get_histories(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user(only_unread=True)
self.assertEqual(0, len(groups))
self.bookmark_group(group_message["group_id"], bookmark=True)
groups = self.groups_for_user(only_unread=True)
self.assertEqual(1, len(groups))
self.histories_for(group_message["group_id"])
groups = self.groups_for_user(only_unread=True)
self.assertEqual(0, len(groups))
def test_new_message_wakeup_users(self):
group_message = self.send_1v1_message()
groups = self.groups_for_user(only_unread=True)
self.assertEqual(0, len(groups))
self.update_hide_group_for(group_message["group_id"], hide=True)
# make sure it's hidden
group_and_stats = self.groups_for_user(hidden=True)
self.assertEqual(1, len(group_and_stats))
self.assertTrue(group_and_stats[0]["stats"]["hide"])
# should still be hidden
group_and_stats = self.groups_for_user(hidden=True)
self.assertEqual(1, len(group_and_stats))
self.assertTrue(group_and_stats[0]["stats"]["hide"])
group_and_stats = self.groups_for_user()
self.assertEqual(0, len(group_and_stats))
# try to wake up the users
self.send_1v1_message()
# should have woken up now
group_and_stats = self.groups_for_user()
self.assertEqual(1, len(group_and_stats))
self.assertFalse(group_and_stats[0]["stats"]["hide"])
def test_new_message_resets_delete_before(self):
group_message = self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
group_and_stats = self.groups_for_user()
join_time = group_and_stats[0]["stats"]["join_time"]
delete_before_original = group_and_stats[0]["stats"]["delete_before"]
self.assertEqual(join_time, delete_before_original)
yesterday = round(arrow.utcnow().shift(days=-1).float_timestamp, 3)
self.update_delete_before(group_message["group_id"], delete_before=yesterday)
group_and_stats = self.groups_updated_since(user_id=BaseTest.USER_ID, since=1560000000)
delete_before_updated = group_and_stats[0]["stats"]["delete_before"]
self.assertEqual(yesterday, delete_before_updated)
self.assertNotEqual(join_time, delete_before_updated)
# set 'delete_before' to the current time before the next message arrives
now = utcnow_ts()
self.update_delete_before(group_message["group_id"], delete_before=now)
# a new message should not reset 'delete_before' back to join_time
self.send_1v1_message(user_id=BaseTest.OTHER_USER_ID, receiver_id=BaseTest.USER_ID)
group_and_stats = self.groups_for_user()
delete_before_auto_updated = group_and_stats[0]["stats"]["delete_before"]
self.assertNotEqual(join_time, delete_before_auto_updated)
self.assertEqual(now, delete_before_auto_updated)
def test_create_action_log_updating_delete_before(self):
group_message = self.send_1v1_message(user_id=BaseTest.USER_ID, receiver_id=BaseTest.OTHER_USER_ID)
group_id = group_message["group_id"]
histories = self.histories_for(group_id)
self.assertEqual(1, len(histories["messages"]))
yesterday = round(arrow.utcnow().shift(days=-1).float_timestamp, 3)
self.update_delete_before(group_id, delete_before=yesterday, create_action_log=True)
histories = self.histories_for(group_id)
self.assertEqual(2, len(histories["messages"]))
def test_create_action_log_automatically_created_group(self):
# this group doesn't exist yet
log = self.create_action_log(
user_id=BaseTest.USER_ID,
receiver_id=BaseTest.OTHER_USER_ID
)
self.assertIsNotNone(log["group_id"])
def test_delete_all_groups_for_user(self):
self.send_1v1_message()
groups = self.groups_for_user()
self.assertEqual(1, len(groups))
self.leave_all_groups()
groups = self.groups_for_user()
self.assertEqual(0, len(groups))
self.send_1v1_message(receiver_id=1000)
self.send_1v1_message(receiver_id=1001)
self.send_1v1_message(receiver_id=1002)
self.send_1v1_message(receiver_id=1003)
groups = self.groups_for_user()
self.assertEqual(4, len(groups))
self.leave_all_groups()
groups = self.groups_for_user()
self.assertEqual(0, len(groups))
def test_get_attachment_from_file_id_returns_no_such_attachment(self):
group_message = self.send_1v1_message(message_type=MessageTypes.IMAGE)
attachment = self.attachment_for_file_id(group_message["group_id"], BaseTest.FILE_ID, assert_response=False)
self.assert_error(attachment, ErrorCodes.NO_SUCH_ATTACHMENT)
def test_get_attachment_from_file_id_returns_no_such_group(self):
attachment = self.attachment_for_file_id(BaseTest.GROUP_ID, BaseTest.FILE_ID, assert_response=False)
self.assert_error(attachment, ErrorCodes.NO_SUCH_GROUP)
def test_get_attachment_from_file_id_returns_ok(self):
group_message = self.send_1v1_message(message_type=MessageTypes.IMAGE)
message_id = group_message["message_id"]
created_at = group_message["created_at"]
group_id = group_message["group_id"]
self.update_attachment(message_id, created_at, payload=json.dumps({
"file_id": BaseTest.FILE_ID,
"context": BaseTest.FILE_CONTEXT
}))
attachment = self.attachment_for_file_id(group_id, BaseTest.FILE_ID)
self.assertIsNotNone(attachment)
self.assertIn(BaseTest.FILE_ID, attachment["message_payload"])
self.assertEqual(attachment["message_type"], MessageTypes.IMAGE)
def test_read_receipt_published_when_opening_conversation(self):
group_message = self.send_1v1_message()
self.assertEqual(0, len(self.env.client_publisher.sent_reads))
self.histories_for(group_message["group_id"], user_id=BaseTest.OTHER_USER_ID)
# USER_ID should have gotten a read-receipt from OTHER_USER_ID
self.assertEqual(1, len(self.env.client_publisher.sent_reads[BaseTest.USER_ID]))
self.assertEqual(BaseTest.OTHER_USER_ID, self.env.client_publisher.sent_reads[BaseTest.USER_ID][0][1])
def test_read_receipt_not_duplicated_when_opening_conversation(self):
group_message = self.send_1v1_message()
self.assertEqual(0, len(self.env.client_publisher.sent_reads))
self.histories_for(group_message["group_id"], user_id=BaseTest.OTHER_USER_ID)
# USER_ID should have gotten a read-receipt from OTHER_USER_ID
self.assertEqual(1, len(self.env.client_publisher.sent_reads[BaseTest.USER_ID]))
self.assertEqual(BaseTest.OTHER_USER_ID, self.env.client_publisher.sent_reads[BaseTest.USER_ID][0][1])
self.histories_for(group_message["group_id"], user_id=BaseTest.OTHER_USER_ID)
# should not have another one
self.assertEqual(1, len(self.env.client_publisher.sent_reads[BaseTest.USER_ID]))
self.assertEqual(BaseTest.OTHER_USER_ID, self.env.client_publisher.sent_reads[BaseTest.USER_ID][0][1])
def test_hidden_groups_is_not_counted_in_user_stats_api(self):
group_message0 = self.send_1v1_message(receiver_id=4444)
group_message1 = self.send_1v1_message(receiver_id=5555)
group_id0 = group_message0["group_id"]
group_id1 = group_message1["group_id"]
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(2, stats["one_to_one_amount"])
self.update_hide_group_for(group_id0, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(1, stats["one_to_one_amount"])
self.update_hide_group_for(group_id1, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["one_to_one_amount"])
self.update_hide_group_for(group_id0, hide=False)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(1, stats["one_to_one_amount"])
def test_hidden_groups_is_counted_in_user_stats_api_if_specified_in_request(self):
group_message0 = self.send_1v1_message(receiver_id=4444)
group_message1 = self.send_1v1_message(receiver_id=5555)
self.send_1v1_message(receiver_id=6666)
group_id0 = group_message0["group_id"]
group_id1 = group_message1["group_id"]
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(3, stats["one_to_one_amount"])
self.update_hide_group_for(group_id0, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(2, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=True)
self.assertEqual(1, stats["one_to_one_amount"])
self.update_hide_group_for(group_id1, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(1, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=True)
self.assertEqual(2, stats["one_to_one_amount"])
def test_hidden_groups_is_not_counted_if_not_specified(self):
group_message0 = self.send_1v1_message(receiver_id=4444)
group_message1 = self.send_1v1_message(receiver_id=5555)
self.send_1v1_message(receiver_id=6666)
group_id0 = group_message0["group_id"]
group_id1 = group_message1["group_id"]
stats = self.get_global_user_stats(hidden=None)
self.assertEqual(0, stats["group_amount"])
self.assertEqual(3, stats["one_to_one_amount"])
self.update_hide_group_for(group_id0, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(2, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=True)
self.assertEqual(1, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=None)
self.assertEqual(2, stats["one_to_one_amount"])
self.update_hide_group_for(group_id1, hide=True)
stats = self.get_global_user_stats(hidden=False)
self.assertEqual(1, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=True)
self.assertEqual(2, stats["one_to_one_amount"])
stats = self.get_global_user_stats(hidden=None)
self.assertEqual(1, stats["one_to_one_amount"])
def test_until_param_excluded_matching_group_in_list(self):
self.send_1v1_message(receiver_id=22)
self.assert_groups_for_user(1)
# need a different timestamp on the other group
time.sleep(0.01)
self.send_1v1_message(receiver_id=44)
self.assert_groups_for_user(2)
time.sleep(0.01)
all_groups = self.groups_for_user()
self.assertEqual(2, len(all_groups))
groups = self.groups_for_user(until=all_groups[1]["group"]["last_message_time"])
self.assertEqual(0, len(groups))
groups = self.groups_for_user(until=all_groups[0]["group"]["last_message_time"])
self.assertEqual(1, len(groups))
def test_get_group_information(self):
group_message = self.send_1v1_message()
# defaults to -1 if not counting
info = self.get_group_info(group_message["group_id"], count_messages=False)
self.assertEqual(-1, info["message_amount"])
info = self.get_group_info(group_message["group_id"], count_messages=True)
self.assertEqual(1, info["message_amount"])
# should be two now
group_message = self.send_1v1_message()
info = self.get_group_info(group_message["group_id"], count_messages=True)
self.assertEqual(2, info["message_amount"])
def test_unread_groups_amount_in_user_stats(self):
# default is to count
stats = self.get_global_user_stats()
self.assertEqual(0, stats["unread_groups_amount"])
stats = self.get_global_user_stats(count_unread=True)
self.assertEqual(0, stats["unread_groups_amount"])
stats = self.get_global_user_stats(count_unread=False)
self.assertEqual(-1, stats["unread_groups_amount"])
self.send_1v1_message(user_id=50, receiver_id=BaseTest.USER_ID)
stats = self.get_global_user_stats(count_unread=True)
self.assertEqual(1, stats["unread_groups_amount"])
self.send_1v1_message(user_id=51, receiver_id=BaseTest.USER_ID)
stats = self.get_global_user_stats(count_unread=True)
self.assertEqual(2, stats["unread_groups_amount"])
# not a new group so should not change number of unread groups
self.send_1v1_message(user_id=51, receiver_id=BaseTest.USER_ID)
stats = self.get_global_user_stats(count_unread=True)
self.assertEqual(2, stats["unread_groups_amount"])
def test_join_existing_group(self):
users = [BaseTest.USER_ID, 50]
other_users = [51, 52, 53, 54]
group = self.create_and_join_group(
BaseTest.USER_ID, users=users
)
self.user_joins_group(group, other_users[0])
def test_get_groups_with_undeleted_messages(self):
groups = list()
users = [BaseTest.USER_ID, 50, 51, 52, 53, 54]
# first create some groups and send some messages
for _ in list(range(5)):
groups.append(self.create_and_join_group(
BaseTest.USER_ID, users=users
))
time.sleep(0.01)
self.send_message_to_group_from(groups[0], BaseTest.USER_ID)
        # check that initially there are no groups to consider for message
        # deletion
to_del = self.env.db.get_groups_with_undeleted_messages(self.env.session_maker())
self.assertEqual(0, len(to_del))
delete_time = utcnow_ts()
        # if only one user has delete_before > first_message_time, the group should
# not be considered for message deletion (since other users haven't changed
# their delete_before)
self.update_delete_before(groups[0], delete_time, users[0])
to_del = self.env.db.get_groups_with_undeleted_messages(self.env.session_maker())
self.assertEqual(0, len(to_del))
# if all users have a delete_before > first_message_time, then the deletion
# query should find it
for user in users:
self.update_delete_before(groups[0], delete_time, user)
to_del = self.env.db.get_groups_with_undeleted_messages(self.env.session_maker())
self.assertEqual(1, len(to_del))
# returns a list of tuples: [(group_id, min(delete_before)),]
self.assertEqual(to_del[0][0], groups[0])
self.assertEqual(to_del[0][1], to_dt(delete_time))
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import mock
import grpc
from grpc.experimental import aio
import math
import pytest
from proto.marshal.rules.dates import DurationRule, TimestampRule
from google.api_core import client_options
from google.api_core import exceptions as core_exceptions
from google.api_core import future
from google.api_core import gapic_v1
from google.api_core import grpc_helpers
from google.api_core import grpc_helpers_async
from google.api_core import operation_async # type: ignore
from google.api_core import operations_v1
from google.api_core import path_template
from google.auth import credentials as ga_credentials
from google.auth.exceptions import MutualTLSChannelError
from google.cloud.automl_v1beta1.services.auto_ml import AutoMlAsyncClient
from google.cloud.automl_v1beta1.services.auto_ml import AutoMlClient
from google.cloud.automl_v1beta1.services.auto_ml import pagers
from google.cloud.automl_v1beta1.services.auto_ml import transports
from google.cloud.automl_v1beta1.types import annotation_spec
from google.cloud.automl_v1beta1.types import classification
from google.cloud.automl_v1beta1.types import column_spec
from google.cloud.automl_v1beta1.types import column_spec as gca_column_spec
from google.cloud.automl_v1beta1.types import data_stats
from google.cloud.automl_v1beta1.types import data_types
from google.cloud.automl_v1beta1.types import dataset
from google.cloud.automl_v1beta1.types import dataset as gca_dataset
from google.cloud.automl_v1beta1.types import detection
from google.cloud.automl_v1beta1.types import image
from google.cloud.automl_v1beta1.types import io
from google.cloud.automl_v1beta1.types import model
from google.cloud.automl_v1beta1.types import model as gca_model
from google.cloud.automl_v1beta1.types import model_evaluation
from google.cloud.automl_v1beta1.types import operations
from google.cloud.automl_v1beta1.types import regression
from google.cloud.automl_v1beta1.types import service
from google.cloud.automl_v1beta1.types import table_spec
from google.cloud.automl_v1beta1.types import table_spec as gca_table_spec
from google.cloud.automl_v1beta1.types import tables
from google.cloud.automl_v1beta1.types import text
from google.cloud.automl_v1beta1.types import text_extraction
from google.cloud.automl_v1beta1.types import text_sentiment
from google.cloud.automl_v1beta1.types import translation
from google.cloud.automl_v1beta1.types import video
from google.longrunning import operations_pb2
from google.oauth2 import service_account
from google.protobuf import field_mask_pb2 # type: ignore
from google.protobuf import timestamp_pb2 # type: ignore
import google.auth
def client_cert_source_callback():
return b"cert bytes", b"key bytes"
# If default endpoint is localhost, then default mtls endpoint will be the same.
# This method modifies the default endpoint so the client can produce a different
# mtls endpoint for endpoint testing purposes.
def modify_default_endpoint(client):
return (
"foo.googleapis.com"
if ("localhost" in client.DEFAULT_ENDPOINT)
else client.DEFAULT_ENDPOINT
)
def test__get_default_mtls_endpoint():
api_endpoint = "example.googleapis.com"
api_mtls_endpoint = "example.mtls.googleapis.com"
sandbox_endpoint = "example.sandbox.googleapis.com"
sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
non_googleapi = "api.example.com"
assert AutoMlClient._get_default_mtls_endpoint(None) is None
assert AutoMlClient._get_default_mtls_endpoint(api_endpoint) == api_mtls_endpoint
assert (
AutoMlClient._get_default_mtls_endpoint(api_mtls_endpoint) == api_mtls_endpoint
)
assert (
AutoMlClient._get_default_mtls_endpoint(sandbox_endpoint)
== sandbox_mtls_endpoint
)
assert (
AutoMlClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
== sandbox_mtls_endpoint
)
assert AutoMlClient._get_default_mtls_endpoint(non_googleapi) == non_googleapi
@pytest.mark.parametrize("client_class", [AutoMlClient, AutoMlAsyncClient,])
def test_auto_ml_client_from_service_account_info(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_info"
) as factory:
factory.return_value = creds
info = {"valid": True}
client = client_class.from_service_account_info(info)
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "automl.googleapis.com:443"
@pytest.mark.parametrize(
"transport_class,transport_name",
[
(transports.AutoMlGrpcTransport, "grpc"),
(transports.AutoMlGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
def test_auto_ml_client_service_account_always_use_jwt(transport_class, transport_name):
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=True)
use_jwt.assert_called_once_with(True)
with mock.patch.object(
service_account.Credentials, "with_always_use_jwt_access", create=True
) as use_jwt:
creds = service_account.Credentials(None, None, None)
transport = transport_class(credentials=creds, always_use_jwt_access=False)
use_jwt.assert_not_called()
@pytest.mark.parametrize("client_class", [AutoMlClient, AutoMlAsyncClient,])
def test_auto_ml_client_from_service_account_file(client_class):
creds = ga_credentials.AnonymousCredentials()
with mock.patch.object(
service_account.Credentials, "from_service_account_file"
) as factory:
factory.return_value = creds
client = client_class.from_service_account_file("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
client = client_class.from_service_account_json("dummy/file/path.json")
assert client.transport._credentials == creds
assert isinstance(client, client_class)
assert client.transport._host == "automl.googleapis.com:443"
def test_auto_ml_client_get_transport_class():
transport = AutoMlClient.get_transport_class()
available_transports = [
transports.AutoMlGrpcTransport,
]
assert transport in available_transports
transport = AutoMlClient.get_transport_class("grpc")
assert transport == transports.AutoMlGrpcTransport
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AutoMlClient, transports.AutoMlGrpcTransport, "grpc"),
(AutoMlAsyncClient, transports.AutoMlGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
@mock.patch.object(
AutoMlClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AutoMlClient)
)
@mock.patch.object(
AutoMlAsyncClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AutoMlAsyncClient)
)
def test_auto_ml_client_client_options(client_class, transport_class, transport_name):
# Check that if channel is provided we won't create a new one.
with mock.patch.object(AutoMlClient, "get_transport_class") as gtc:
transport = transport_class(credentials=ga_credentials.AnonymousCredentials())
client = client_class(transport=transport)
gtc.assert_not_called()
# Check that if channel is provided via str we will create a new one.
with mock.patch.object(AutoMlClient, "get_transport_class") as gtc:
client = client_class(transport=transport_name)
gtc.assert_called()
# Check the case api_endpoint is provided.
options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "never".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
# "always".
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_MTLS_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
# unsupported value.
with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
with pytest.raises(MutualTLSChannelError):
client = client_class()
# Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
):
with pytest.raises(ValueError):
client = client_class()
# Check the case quota_project_id is provided
options = client_options.ClientOptions(quota_project_id="octopus")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id="octopus",
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name,use_client_cert_env",
[
(AutoMlClient, transports.AutoMlGrpcTransport, "grpc", "true"),
(
AutoMlAsyncClient,
transports.AutoMlGrpcAsyncIOTransport,
"grpc_asyncio",
"true",
),
(AutoMlClient, transports.AutoMlGrpcTransport, "grpc", "false"),
(
AutoMlAsyncClient,
transports.AutoMlGrpcAsyncIOTransport,
"grpc_asyncio",
"false",
),
],
)
@mock.patch.object(
AutoMlClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AutoMlClient)
)
@mock.patch.object(
AutoMlAsyncClient, "DEFAULT_ENDPOINT", modify_default_endpoint(AutoMlAsyncClient)
)
@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
def test_auto_ml_client_mtls_env_auto(
client_class, transport_class, transport_name, use_client_cert_env
):
# This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
# mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
# Check the case client_cert_source is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
options = client_options.ClientOptions(
client_cert_source=client_cert_source_callback
)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
if use_client_cert_env == "false":
expected_client_cert_source = None
expected_host = client.DEFAULT_ENDPOINT
else:
expected_client_cert_source = client_cert_source_callback
expected_host = client.DEFAULT_MTLS_ENDPOINT
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case ADC client cert is provided. Whether client cert is used depends on
# GOOGLE_API_USE_CLIENT_CERTIFICATE value.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=True,
):
with mock.patch(
"google.auth.transport.mtls.default_client_cert_source",
return_value=client_cert_source_callback,
):
if use_client_cert_env == "false":
expected_host = client.DEFAULT_ENDPOINT
expected_client_cert_source = None
else:
expected_host = client.DEFAULT_MTLS_ENDPOINT
expected_client_cert_source = client_cert_source_callback
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=expected_host,
scopes=None,
client_cert_source_for_mtls=expected_client_cert_source,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
# Check the case client_cert_source and ADC client cert are not provided.
with mock.patch.dict(
os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
):
with mock.patch.object(transport_class, "__init__") as patched:
with mock.patch(
"google.auth.transport.mtls.has_default_client_cert_source",
return_value=False,
):
patched.return_value = None
client = client_class(transport=transport_name)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AutoMlClient, transports.AutoMlGrpcTransport, "grpc"),
(AutoMlAsyncClient, transports.AutoMlGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
def test_auto_ml_client_client_options_scopes(
client_class, transport_class, transport_name
):
# Check the case scopes are provided.
options = client_options.ClientOptions(scopes=["1", "2"],)
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file=None,
host=client.DEFAULT_ENDPOINT,
scopes=["1", "2"],
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
@pytest.mark.parametrize(
"client_class,transport_class,transport_name",
[
(AutoMlClient, transports.AutoMlGrpcTransport, "grpc"),
(AutoMlAsyncClient, transports.AutoMlGrpcAsyncIOTransport, "grpc_asyncio"),
],
)
def test_auto_ml_client_client_options_credentials_file(
client_class, transport_class, transport_name
):
# Check the case credentials file is provided.
options = client_options.ClientOptions(credentials_file="credentials.json")
with mock.patch.object(transport_class, "__init__") as patched:
patched.return_value = None
client = client_class(transport=transport_name, client_options=options)
patched.assert_called_once_with(
credentials=None,
credentials_file="credentials.json",
host=client.DEFAULT_ENDPOINT,
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_auto_ml_client_client_options_from_dict():
with mock.patch(
"google.cloud.automl_v1beta1.services.auto_ml.transports.AutoMlGrpcTransport.__init__"
) as grpc_transport:
grpc_transport.return_value = None
client = AutoMlClient(client_options={"api_endpoint": "squid.clam.whelk"})
grpc_transport.assert_called_once_with(
credentials=None,
credentials_file=None,
host="squid.clam.whelk",
scopes=None,
client_cert_source_for_mtls=None,
quota_project_id=None,
client_info=transports.base.DEFAULT_CLIENT_INFO,
always_use_jwt_access=True,
)
def test_create_dataset(
transport: str = "grpc", request_type=service.CreateDatasetRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = gca_dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
),
)
response = client.create_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
def test_create_dataset_from_dict():
test_create_dataset(request_type=dict)
def test_create_dataset_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
client.create_dataset()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateDatasetRequest()
@pytest.mark.asyncio
async def test_create_dataset_async(
transport: str = "grpc_asyncio", request_type=service.CreateDatasetRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
)
)
response = await client.create_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_create_dataset_async_from_dict():
await test_create_dataset_async(request_type=dict)
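The async variants above wrap their canned responses in `grpc_helpers_async.FakeUnaryUnaryCall` so that `await client.create_dataset(...)` has an awaitable to resolve. A hypothetical stdlib-only stand-in for that helper, showing the shape the async tests depend on:

```python
import asyncio


class FakeCall:
    """Awaitable that resolves to a canned response (illustrative only,
    not the real grpc_helpers_async.FakeUnaryUnaryCall)."""

    def __init__(self, response):
        self._response = response

    def __await__(self):
        async def _inner():
            return self._response

        return _inner().__await__()


async def main():
    # The async client effectively does `await rpc(request)`; the fake
    # call short-circuits that to the stored response.
    return await FakeCall("dataset")


result = asyncio.run(main())
assert result == "dataset"
```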
def test_create_dataset_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateDatasetRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
call.return_value = gca_dataset.Dataset()
client.create_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_dataset_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateDatasetRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gca_dataset.Dataset())
await client.create_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_dataset_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = gca_dataset.Dataset()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_dataset(
parent="parent_value",
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].dataset
mock_val = gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
)
assert arg == mock_val
def test_create_dataset_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_dataset(
service.CreateDatasetRequest(),
parent="parent_value",
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
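The flattened-error tests assert that a request object and flattened fields are mutually exclusive. A hypothetical minimal method signature enforcing that contract (illustrative only; the generated clients implement the same check internally):

```python
def create_dataset(request=None, *, parent=None, dataset=None):
    # Reject mixing a fully-formed request with flattened field arguments.
    has_flattened = any([parent is not None, dataset is not None])
    if request is not None and has_flattened:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    return request or {"parent": parent, "dataset": dataset}


raised = False
try:
    create_dataset({"parent": "p"}, parent="parent_value")
except ValueError:
    raised = True
assert raised
assert create_dataset(parent="parent_value")["parent"] == "parent_value"
```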
@pytest.mark.asyncio
async def test_create_dataset_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gca_dataset.Dataset())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_dataset(
parent="parent_value",
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].dataset
mock_val = gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_dataset_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_dataset(
service.CreateDatasetRequest(),
parent="parent_value",
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
def test_get_dataset(transport: str = "grpc", request_type=service.GetDatasetRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
),
)
response = client.get_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
def test_get_dataset_from_dict():
test_get_dataset(request_type=dict)
def test_get_dataset_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
client.get_dataset()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetDatasetRequest()
@pytest.mark.asyncio
async def test_get_dataset_async(
transport: str = "grpc_asyncio", request_type=service.GetDatasetRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
)
)
response = await client.get_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_get_dataset_async_from_dict():
await test_get_dataset_async(request_type=dict)
def test_get_dataset_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetDatasetRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
call.return_value = dataset.Dataset()
client.get_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_dataset_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetDatasetRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(dataset.Dataset())
await client.get_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_dataset_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = dataset.Dataset()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_dataset(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_dataset_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_dataset(
service.GetDatasetRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_dataset_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(dataset.Dataset())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_dataset(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_dataset_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_dataset(
service.GetDatasetRequest(), name="name_value",
)
def test_list_datasets(
transport: str = "grpc", request_type=service.ListDatasetsRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListDatasetsResponse(
next_page_token="next_page_token_value",
)
response = client.list_datasets(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListDatasetsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListDatasetsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_datasets_from_dict():
test_list_datasets(request_type=dict)
def test_list_datasets_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
client.list_datasets()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListDatasetsRequest()
@pytest.mark.asyncio
async def test_list_datasets_async(
transport: str = "grpc_asyncio", request_type=service.ListDatasetsRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListDatasetsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_datasets(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListDatasetsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListDatasetsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_datasets_async_from_dict():
await test_list_datasets_async(request_type=dict)
def test_list_datasets_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListDatasetsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
call.return_value = service.ListDatasetsResponse()
client.list_datasets(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_datasets_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListDatasetsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListDatasetsResponse()
)
await client.list_datasets(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_datasets_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListDatasetsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_datasets(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_datasets_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_datasets(
service.ListDatasetsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_datasets_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            service.ListDatasetsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_datasets(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_datasets_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_datasets(
service.ListDatasetsRequest(), parent="parent_value",
)
def test_list_datasets_pager():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(), dataset.Dataset(),],
next_page_token="abc",
),
service.ListDatasetsResponse(datasets=[], next_page_token="def",),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(),], next_page_token="ghi",
),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(),],
),
RuntimeError,
)
        metadata = (
            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
        )
pager = client.list_datasets(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, dataset.Dataset) for i in results)
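The pager tests above feed four pages through `side_effect` and expect six items back. A stdlib sketch of the behaviour being exercised, with illustrative names rather than the real `pagers.ListDatasetsPager`: the pager re-issues the call with each `next_page_token` and yields items across page boundaries.

```python
class SimplePager:
    """Flattens a paged API into a single iterator (illustrative only)."""

    def __init__(self, fetch_page):
        self._fetch_page = fetch_page

    def __iter__(self):
        token = ""
        while True:
            page = self._fetch_page(token)
            yield from page["datasets"]
            token = page.get("next_page_token", "")
            if not token:
                return


# Four canned pages mirroring the side_effect sequence in the test above:
# 3 items, 0 items, 1 item, 2 items -> 6 results total.
pages = {
    "": {"datasets": [1, 2, 3], "next_page_token": "abc"},
    "abc": {"datasets": [], "next_page_token": "def"},
    "def": {"datasets": [4], "next_page_token": "ghi"},
    "ghi": {"datasets": [5, 6]},
}
results = list(SimplePager(lambda token: pages[token]))
assert len(results) == 6
```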
def test_list_datasets_pages():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_datasets), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(), dataset.Dataset(),],
next_page_token="abc",
),
service.ListDatasetsResponse(datasets=[], next_page_token="def",),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(),], next_page_token="ghi",
),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(),],
),
RuntimeError,
)
pages = list(client.list_datasets(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_datasets_async_pager():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_datasets), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(), dataset.Dataset(),],
next_page_token="abc",
),
service.ListDatasetsResponse(datasets=[], next_page_token="def",),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(),], next_page_token="ghi",
),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(),],
),
RuntimeError,
)
async_pager = await client.list_datasets(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, dataset.Dataset) for i in responses)
@pytest.mark.asyncio
async def test_list_datasets_async_pages():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_datasets), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(), dataset.Dataset(),],
next_page_token="abc",
),
service.ListDatasetsResponse(datasets=[], next_page_token="def",),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(),], next_page_token="ghi",
),
service.ListDatasetsResponse(
datasets=[dataset.Dataset(), dataset.Dataset(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_datasets(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_update_dataset(
transport: str = "grpc", request_type=service.UpdateDatasetRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = gca_dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
),
)
response = client.update_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
def test_update_dataset_from_dict():
test_update_dataset(request_type=dict)
def test_update_dataset_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
client.update_dataset()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateDatasetRequest()
@pytest.mark.asyncio
async def test_update_dataset_async(
transport: str = "grpc_asyncio", request_type=service.UpdateDatasetRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_dataset.Dataset(
name="name_value",
display_name="display_name_value",
description="description_value",
example_count=1396,
etag="etag_value",
)
)
response = await client.update_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_dataset.Dataset)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.description == "description_value"
assert response.example_count == 1396
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_update_dataset_async_from_dict():
await test_update_dataset_async(request_type=dict)
def test_update_dataset_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateDatasetRequest()
request.dataset.name = "dataset.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
call.return_value = gca_dataset.Dataset()
client.update_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "dataset.name=dataset.name/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_update_dataset_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateDatasetRequest()
request.dataset.name = "dataset.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gca_dataset.Dataset())
await client.update_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "dataset.name=dataset.name/value",) in kw[
"metadata"
]
def test_update_dataset_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = gca_dataset.Dataset()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_dataset(
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].dataset
mock_val = gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
)
assert arg == mock_val
def test_update_dataset_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_dataset(
service.UpdateDatasetRequest(),
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
@pytest.mark.asyncio
async def test_update_dataset_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.update_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(gca_dataset.Dataset())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_dataset(
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].dataset
mock_val = gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_update_dataset_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_dataset(
service.UpdateDatasetRequest(),
dataset=gca_dataset.Dataset(
translation_dataset_metadata=translation.TranslationDatasetMetadata(
source_language_code="source_language_code_value"
)
),
)
def test_delete_dataset(
transport: str = "grpc", request_type=service.DeleteDatasetRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_dataset_from_dict():
test_delete_dataset(request_type=dict)
def test_delete_dataset_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
client.delete_dataset()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteDatasetRequest()
@pytest.mark.asyncio
async def test_delete_dataset_async(
transport: str = "grpc_asyncio", request_type=service.DeleteDatasetRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteDatasetRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_dataset_async_from_dict():
await test_delete_dataset_async(request_type=dict)
def test_delete_dataset_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeleteDatasetRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_dataset_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeleteDatasetRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_dataset(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_dataset_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_dataset(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_dataset_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_dataset(
service.DeleteDatasetRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_dataset_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_dataset), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_dataset(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_dataset_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_dataset(
service.DeleteDatasetRequest(), name="name_value",
)
def test_import_data(transport: str = "grpc", request_type=service.ImportDataRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.import_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ImportDataRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_import_data_from_dict():
test_import_data(request_type=dict)
def test_import_data_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
client.import_data()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ImportDataRequest()
@pytest.mark.asyncio
async def test_import_data_async(
transport: str = "grpc_asyncio", request_type=service.ImportDataRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.import_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ImportDataRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_import_data_async_from_dict():
await test_import_data_async(request_type=dict)
def test_import_data_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ImportDataRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.import_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_import_data_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ImportDataRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.import_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_import_data_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.import_data(
name="name_value",
input_config=io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].input_config
mock_val = io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
)
assert arg == mock_val
def test_import_data_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.import_data(
service.ImportDataRequest(),
name="name_value",
input_config=io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
),
)
@pytest.mark.asyncio
async def test_import_data_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.import_data), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.import_data(
name="name_value",
input_config=io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].input_config
mock_val = io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_import_data_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.import_data(
service.ImportDataRequest(),
name="name_value",
input_config=io.InputConfig(
gcs_source=io.GcsSource(input_uris=["input_uris_value"])
),
)
def test_export_data(transport: str = "grpc", request_type=service.ExportDataRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.export_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportDataRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_export_data_from_dict():
test_export_data(request_type=dict)
def test_export_data_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
client.export_data()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportDataRequest()
@pytest.mark.asyncio
async def test_export_data_async(
transport: str = "grpc_asyncio", request_type=service.ExportDataRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.export_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportDataRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_export_data_async_from_dict():
await test_export_data_async(request_type=dict)
def test_export_data_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportDataRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.export_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_export_data_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportDataRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.export_data(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_export_data_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.export_data(
name="name_value",
output_config=io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
)
assert arg == mock_val
def test_export_data_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.export_data(
service.ExportDataRequest(),
name="name_value",
output_config=io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
@pytest.mark.asyncio
async def test_export_data_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_data), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.export_data(
name="name_value",
output_config=io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_export_data_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.export_data(
service.ExportDataRequest(),
name="name_value",
output_config=io.OutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
def test_get_annotation_spec(
transport: str = "grpc", request_type=service.GetAnnotationSpecRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = annotation_spec.AnnotationSpec(
name="name_value", display_name="display_name_value", example_count=1396,
)
response = client.get_annotation_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetAnnotationSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, annotation_spec.AnnotationSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.example_count == 1396
def test_get_annotation_spec_from_dict():
test_get_annotation_spec(request_type=dict)
def test_get_annotation_spec_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
client.get_annotation_spec()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetAnnotationSpecRequest()
@pytest.mark.asyncio
async def test_get_annotation_spec_async(
transport: str = "grpc_asyncio", request_type=service.GetAnnotationSpecRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
annotation_spec.AnnotationSpec(
name="name_value",
display_name="display_name_value",
example_count=1396,
)
)
response = await client.get_annotation_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetAnnotationSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, annotation_spec.AnnotationSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.example_count == 1396
@pytest.mark.asyncio
async def test_get_annotation_spec_async_from_dict():
await test_get_annotation_spec_async(request_type=dict)
def test_get_annotation_spec_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetAnnotationSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
call.return_value = annotation_spec.AnnotationSpec()
client.get_annotation_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_annotation_spec_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetAnnotationSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
annotation_spec.AnnotationSpec()
)
await client.get_annotation_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_annotation_spec_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = annotation_spec.AnnotationSpec()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_annotation_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_annotation_spec_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_annotation_spec(
service.GetAnnotationSpecRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_annotation_spec_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_annotation_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            annotation_spec.AnnotationSpec()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_annotation_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_annotation_spec_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_annotation_spec(
service.GetAnnotationSpecRequest(), name="name_value",
)
def test_get_table_spec(
transport: str = "grpc", request_type=service.GetTableSpecRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = table_spec.TableSpec(
name="name_value",
time_column_spec_id="time_column_spec_id_value",
row_count=992,
valid_row_count=1615,
column_count=1302,
etag="etag_value",
)
response = client.get_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetTableSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, table_spec.TableSpec)
assert response.name == "name_value"
assert response.time_column_spec_id == "time_column_spec_id_value"
assert response.row_count == 992
assert response.valid_row_count == 1615
assert response.column_count == 1302
assert response.etag == "etag_value"
def test_get_table_spec_from_dict():
test_get_table_spec(request_type=dict)
def test_get_table_spec_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
client.get_table_spec()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetTableSpecRequest()
@pytest.mark.asyncio
async def test_get_table_spec_async(
transport: str = "grpc_asyncio", request_type=service.GetTableSpecRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
table_spec.TableSpec(
name="name_value",
time_column_spec_id="time_column_spec_id_value",
row_count=992,
valid_row_count=1615,
column_count=1302,
etag="etag_value",
)
)
response = await client.get_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetTableSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, table_spec.TableSpec)
assert response.name == "name_value"
assert response.time_column_spec_id == "time_column_spec_id_value"
assert response.row_count == 992
assert response.valid_row_count == 1615
assert response.column_count == 1302
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_get_table_spec_async_from_dict():
await test_get_table_spec_async(request_type=dict)
def test_get_table_spec_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetTableSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
call.return_value = table_spec.TableSpec()
client.get_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_table_spec_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetTableSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
table_spec.TableSpec()
)
await client.get_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_table_spec_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = table_spec.TableSpec()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_table_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_table_spec_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_table_spec(
service.GetTableSpecRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_table_spec_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_table_spec), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            table_spec.TableSpec()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_table_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_table_spec_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_table_spec(
service.GetTableSpecRequest(), name="name_value",
)
def test_list_table_specs(
transport: str = "grpc", request_type=service.ListTableSpecsRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListTableSpecsResponse(
next_page_token="next_page_token_value",
)
response = client.list_table_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListTableSpecsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTableSpecsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_table_specs_from_dict():
test_list_table_specs(request_type=dict)
def test_list_table_specs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
client.list_table_specs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListTableSpecsRequest()
@pytest.mark.asyncio
async def test_list_table_specs_async(
transport: str = "grpc_asyncio", request_type=service.ListTableSpecsRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListTableSpecsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_table_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListTableSpecsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListTableSpecsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_table_specs_async_from_dict():
await test_list_table_specs_async(request_type=dict)
def test_list_table_specs_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListTableSpecsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
call.return_value = service.ListTableSpecsResponse()
client.list_table_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_table_specs_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListTableSpecsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListTableSpecsResponse()
)
await client.list_table_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_table_specs_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListTableSpecsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_table_specs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_table_specs_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_table_specs(
service.ListTableSpecsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_table_specs_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            service.ListTableSpecsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_table_specs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_table_specs_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_table_specs(
service.ListTableSpecsRequest(), parent="parent_value",
)
def test_list_table_specs_pager():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListTableSpecsResponse(
table_specs=[
table_spec.TableSpec(),
table_spec.TableSpec(),
table_spec.TableSpec(),
],
next_page_token="abc",
),
service.ListTableSpecsResponse(table_specs=[], next_page_token="def",),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(),], next_page_token="ghi",
),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(), table_spec.TableSpec(),],
),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_table_specs(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, table_spec.TableSpec) for i in results)
def test_list_table_specs_pages():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_table_specs), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListTableSpecsResponse(
table_specs=[
table_spec.TableSpec(),
table_spec.TableSpec(),
table_spec.TableSpec(),
],
next_page_token="abc",
),
service.ListTableSpecsResponse(table_specs=[], next_page_token="def",),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(),], next_page_token="ghi",
),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(), table_spec.TableSpec(),],
),
RuntimeError,
)
pages = list(client.list_table_specs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_table_specs_async_pager():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_table_specs), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListTableSpecsResponse(
table_specs=[
table_spec.TableSpec(),
table_spec.TableSpec(),
table_spec.TableSpec(),
],
next_page_token="abc",
),
service.ListTableSpecsResponse(table_specs=[], next_page_token="def",),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(),], next_page_token="ghi",
),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(), table_spec.TableSpec(),],
),
RuntimeError,
)
async_pager = await client.list_table_specs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, table_spec.TableSpec) for i in responses)
@pytest.mark.asyncio
async def test_list_table_specs_async_pages():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_table_specs), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListTableSpecsResponse(
table_specs=[
table_spec.TableSpec(),
table_spec.TableSpec(),
table_spec.TableSpec(),
],
next_page_token="abc",
),
service.ListTableSpecsResponse(table_specs=[], next_page_token="def",),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(),], next_page_token="ghi",
),
service.ListTableSpecsResponse(
table_specs=[table_spec.TableSpec(), table_spec.TableSpec(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_table_specs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_update_table_spec(
transport: str = "grpc", request_type=service.UpdateTableSpecRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_table_spec.TableSpec(
name="name_value",
time_column_spec_id="time_column_spec_id_value",
row_count=992,
valid_row_count=1615,
column_count=1302,
etag="etag_value",
)
response = client.update_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateTableSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_table_spec.TableSpec)
assert response.name == "name_value"
assert response.time_column_spec_id == "time_column_spec_id_value"
assert response.row_count == 992
assert response.valid_row_count == 1615
assert response.column_count == 1302
assert response.etag == "etag_value"
def test_update_table_spec_from_dict():
test_update_table_spec(request_type=dict)
def test_update_table_spec_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
client.update_table_spec()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateTableSpecRequest()
@pytest.mark.asyncio
async def test_update_table_spec_async(
transport: str = "grpc_asyncio", request_type=service.UpdateTableSpecRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_table_spec.TableSpec(
name="name_value",
time_column_spec_id="time_column_spec_id_value",
row_count=992,
valid_row_count=1615,
column_count=1302,
etag="etag_value",
)
)
response = await client.update_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateTableSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_table_spec.TableSpec)
assert response.name == "name_value"
assert response.time_column_spec_id == "time_column_spec_id_value"
assert response.row_count == 992
assert response.valid_row_count == 1615
assert response.column_count == 1302
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_update_table_spec_async_from_dict():
await test_update_table_spec_async(request_type=dict)
def test_update_table_spec_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateTableSpecRequest()
request.table_spec.name = "table_spec.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
call.return_value = gca_table_spec.TableSpec()
client.update_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "table_spec.name=table_spec.name/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_update_table_spec_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateTableSpecRequest()
request.table_spec.name = "table_spec.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_table_spec.TableSpec()
)
await client.update_table_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "table_spec.name=table_spec.name/value",) in kw[
"metadata"
]
def test_update_table_spec_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_table_spec.TableSpec()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_table_spec(
table_spec=gca_table_spec.TableSpec(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].table_spec
mock_val = gca_table_spec.TableSpec(name="name_value")
assert arg == mock_val
def test_update_table_spec_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_table_spec(
service.UpdateTableSpecRequest(),
table_spec=gca_table_spec.TableSpec(name="name_value"),
)
@pytest.mark.asyncio
async def test_update_table_spec_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_table_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            gca_table_spec.TableSpec()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_table_spec(
table_spec=gca_table_spec.TableSpec(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].table_spec
mock_val = gca_table_spec.TableSpec(name="name_value")
assert arg == mock_val
@pytest.mark.asyncio
async def test_update_table_spec_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_table_spec(
service.UpdateTableSpecRequest(),
table_spec=gca_table_spec.TableSpec(name="name_value"),
)
def test_get_column_spec(
transport: str = "grpc", request_type=service.GetColumnSpecRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = column_spec.ColumnSpec(
name="name_value", display_name="display_name_value", etag="etag_value",
)
response = client.get_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetColumnSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, column_spec.ColumnSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.etag == "etag_value"
def test_get_column_spec_from_dict():
test_get_column_spec(request_type=dict)
def test_get_column_spec_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
client.get_column_spec()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetColumnSpecRequest()
@pytest.mark.asyncio
async def test_get_column_spec_async(
transport: str = "grpc_asyncio", request_type=service.GetColumnSpecRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
column_spec.ColumnSpec(
name="name_value", display_name="display_name_value", etag="etag_value",
)
)
response = await client.get_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetColumnSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, column_spec.ColumnSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_get_column_spec_async_from_dict():
await test_get_column_spec_async(request_type=dict)
def test_get_column_spec_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetColumnSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
call.return_value = column_spec.ColumnSpec()
client.get_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_column_spec_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetColumnSpecRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
column_spec.ColumnSpec()
)
await client.get_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_column_spec_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = column_spec.ColumnSpec()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_column_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_column_spec_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_column_spec(
service.GetColumnSpecRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_column_spec_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_column_spec), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            column_spec.ColumnSpec()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_column_spec(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_column_spec_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_column_spec(
service.GetColumnSpecRequest(), name="name_value",
)
def test_list_column_specs(
transport: str = "grpc", request_type=service.ListColumnSpecsRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListColumnSpecsResponse(
next_page_token="next_page_token_value",
)
response = client.list_column_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListColumnSpecsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListColumnSpecsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_column_specs_from_dict():
test_list_column_specs(request_type=dict)
def test_list_column_specs_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
client.list_column_specs()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListColumnSpecsRequest()
@pytest.mark.asyncio
async def test_list_column_specs_async(
transport: str = "grpc_asyncio", request_type=service.ListColumnSpecsRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListColumnSpecsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_column_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListColumnSpecsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListColumnSpecsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_column_specs_async_from_dict():
await test_list_column_specs_async(request_type=dict)
def test_list_column_specs_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListColumnSpecsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
call.return_value = service.ListColumnSpecsResponse()
client.list_column_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_column_specs_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListColumnSpecsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListColumnSpecsResponse()
)
await client.list_column_specs(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
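# The "x-goog-request-params" metadata asserted above is a URL-style
# "key=value" string built from request fields. The real helper is
# google.api_core.gapic_v1.routing_header.to_grpc_metadata; the stand-in
# below is a simplified sketch that ignores any escaping the real helper
# performs, just to show the shape of the header tuple.

```python
def to_request_params_metadata(**params):
    # Simplified: join key=value pairs with "&" (no percent-encoding here;
    # the real google.api_core helper handles escaping).
    value = "&".join(f"{k}={v}" for k, v in params.items())
    return ("x-goog-request-params", value)

md = to_request_params_metadata(parent="parent/value")
assert md == ("x-goog-request-params", "parent=parent/value")
```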
def test_list_column_specs_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListColumnSpecsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_column_specs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_column_specs_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_column_specs(
service.ListColumnSpecsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_column_specs_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            service.ListColumnSpecsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_column_specs(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_column_specs_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_column_specs(
service.ListColumnSpecsRequest(), parent="parent_value",
)
def test_list_column_specs_pager():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListColumnSpecsResponse(
column_specs=[
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
],
next_page_token="abc",
),
service.ListColumnSpecsResponse(column_specs=[], next_page_token="def",),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(),], next_page_token="ghi",
),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(), column_spec.ColumnSpec(),],
),
RuntimeError,
)
        metadata = (
            gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
        )
pager = client.list_column_specs(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, column_spec.ColumnSpec) for i in results)
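# The token-driven paging exercised above can be sketched independently of
# the AutoML client. FakePage and iterate_all below are hypothetical
# stand-ins (not part of the library); the mock's side_effect mirrors the
# four-page sequence used in the test: 3 items, 0 items, 1 item, 2 items.

```python
from unittest import mock

class FakePage:
    """Stand-in for a ListColumnSpecsResponse-style page (hypothetical)."""
    def __init__(self, items, next_page_token=""):
        self.items = items
        self.next_page_token = next_page_token

def iterate_all(api_call):
    """Exhaust a paged API: keep calling while a next_page_token comes back."""
    token = None
    while True:
        page = api_call(page_token=token)
        for item in page.items:
            yield item
        token = page.next_page_token
        if not token:
            return

stub = mock.Mock(side_effect=[
    FakePage(["a", "b", "c"], next_page_token="abc"),
    FakePage([], next_page_token="def"),
    FakePage(["d"], next_page_token="ghi"),
    FakePage(["e", "f"]),  # empty token ends the iteration
])
results = list(iterate_all(stub))
assert len(results) == 6
assert stub.call_count == 4
```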
def test_list_column_specs_pages():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListColumnSpecsResponse(
column_specs=[
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
],
next_page_token="abc",
),
service.ListColumnSpecsResponse(column_specs=[], next_page_token="def",),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(),], next_page_token="ghi",
),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(), column_spec.ColumnSpec(),],
),
RuntimeError,
)
pages = list(client.list_column_specs(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_column_specs_async_pager():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListColumnSpecsResponse(
column_specs=[
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
],
next_page_token="abc",
),
service.ListColumnSpecsResponse(column_specs=[], next_page_token="def",),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(),], next_page_token="ghi",
),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(), column_spec.ColumnSpec(),],
),
RuntimeError,
)
async_pager = await client.list_column_specs(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, column_spec.ColumnSpec) for i in responses)
@pytest.mark.asyncio
async def test_list_column_specs_async_pages():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_column_specs),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListColumnSpecsResponse(
column_specs=[
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
column_spec.ColumnSpec(),
],
next_page_token="abc",
),
service.ListColumnSpecsResponse(column_specs=[], next_page_token="def",),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(),], next_page_token="ghi",
),
service.ListColumnSpecsResponse(
column_specs=[column_spec.ColumnSpec(), column_spec.ColumnSpec(),],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_column_specs(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_update_column_spec(
transport: str = "grpc", request_type=service.UpdateColumnSpecRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_column_spec.ColumnSpec(
name="name_value", display_name="display_name_value", etag="etag_value",
)
response = client.update_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateColumnSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_column_spec.ColumnSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.etag == "etag_value"
def test_update_column_spec_from_dict():
test_update_column_spec(request_type=dict)
def test_update_column_spec_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
client.update_column_spec()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateColumnSpecRequest()
@pytest.mark.asyncio
async def test_update_column_spec_async(
transport: str = "grpc_asyncio", request_type=service.UpdateColumnSpecRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_column_spec.ColumnSpec(
name="name_value", display_name="display_name_value", etag="etag_value",
)
)
response = await client.update_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.UpdateColumnSpecRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, gca_column_spec.ColumnSpec)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.etag == "etag_value"
@pytest.mark.asyncio
async def test_update_column_spec_async_from_dict():
await test_update_column_spec_async(request_type=dict)
def test_update_column_spec_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateColumnSpecRequest()
request.column_spec.name = "column_spec.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
call.return_value = gca_column_spec.ColumnSpec()
client.update_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "column_spec.name=column_spec.name/value",) in kw[
"metadata"
]
@pytest.mark.asyncio
async def test_update_column_spec_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UpdateColumnSpecRequest()
request.column_spec.name = "column_spec.name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
gca_column_spec.ColumnSpec()
)
await client.update_column_spec(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "column_spec.name=column_spec.name/value",) in kw[
"metadata"
]
def test_update_column_spec_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = gca_column_spec.ColumnSpec()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.update_column_spec(
column_spec=gca_column_spec.ColumnSpec(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].column_spec
mock_val = gca_column_spec.ColumnSpec(name="name_value")
assert arg == mock_val
def test_update_column_spec_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.update_column_spec(
service.UpdateColumnSpecRequest(),
column_spec=gca_column_spec.ColumnSpec(name="name_value"),
)
@pytest.mark.asyncio
async def test_update_column_spec_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.update_column_spec), "__call__"
) as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            gca_column_spec.ColumnSpec()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.update_column_spec(
column_spec=gca_column_spec.ColumnSpec(name="name_value"),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].column_spec
mock_val = gca_column_spec.ColumnSpec(name="name_value")
assert arg == mock_val
@pytest.mark.asyncio
async def test_update_column_spec_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.update_column_spec(
service.UpdateColumnSpecRequest(),
column_spec=gca_column_spec.ColumnSpec(name="name_value"),
)
def test_create_model(transport: str = "grpc", request_type=service.CreateModelRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.create_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_create_model_from_dict():
test_create_model(request_type=dict)
def test_create_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
client.create_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateModelRequest()
@pytest.mark.asyncio
async def test_create_model_async(
transport: str = "grpc_asyncio", request_type=service.CreateModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.create_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.CreateModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_create_model_async_from_dict():
await test_create_model_async(request_type=dict)
def test_create_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateModelRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.create_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_create_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.CreateModelRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.create_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_create_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.create_model(
parent="parent_value",
model=gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].model
mock_val = gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
)
assert arg == mock_val
def test_create_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.create_model(
service.CreateModelRequest(),
parent="parent_value",
model=gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
),
)
@pytest.mark.asyncio
async def test_create_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.create_model), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.create_model(
parent="parent_value",
model=gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
arg = args[0].model
mock_val = gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_create_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.create_model(
service.CreateModelRequest(),
parent="parent_value",
model=gca_model.Model(
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
)
),
)
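# The ValueError asserted in the flattened_error tests reflects the GAPIC
# rule that a request object and flattened field arguments are mutually
# exclusive. A minimal sketch of that guard (create_model here is a
# hypothetical standalone function, not the actual client method):

```python
def create_model(request=None, *, parent=None, model=None):
    # Reject mixing a request object with flattened fields, as the
    # generated clients do.
    flattened = [f for f in (parent, model) if f is not None]
    if request is not None and flattened:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    return request if request is not None else {"parent": parent, "model": model}

raised = False
try:
    create_model({"parent": "p"}, parent="parent_value")
except ValueError:
    raised = True
assert raised
assert create_model(parent="parent_value") == {"parent": "parent_value", "model": None}
```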
def test_get_model(transport: str = "grpc", request_type=service.GetModelRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = model.Model(
name="name_value",
display_name="display_name_value",
dataset_id="dataset_id_value",
deployment_state=model.Model.DeploymentState.DEPLOYED,
translation_model_metadata=translation.TranslationModelMetadata(
base_model="base_model_value"
),
)
response = client.get_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, model.Model)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.dataset_id == "dataset_id_value"
assert response.deployment_state == model.Model.DeploymentState.DEPLOYED
def test_get_model_from_dict():
test_get_model(request_type=dict)
def test_get_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
client.get_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelRequest()
@pytest.mark.asyncio
async def test_get_model_async(
transport: str = "grpc_asyncio", request_type=service.GetModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model.Model(
name="name_value",
display_name="display_name_value",
dataset_id="dataset_id_value",
deployment_state=model.Model.DeploymentState.DEPLOYED,
)
)
response = await client.get_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, model.Model)
assert response.name == "name_value"
assert response.display_name == "display_name_value"
assert response.dataset_id == "dataset_id_value"
assert response.deployment_state == model.Model.DeploymentState.DEPLOYED
@pytest.mark.asyncio
async def test_get_model_async_from_dict():
await test_get_model_async(request_type=dict)
def test_get_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
call.return_value = model.Model()
client.get_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(model.Model())
await client.get_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = model.Model()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_model(
service.GetModelRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.get_model), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(model.Model())
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_model(
service.GetModelRequest(), name="name_value",
)
def test_list_models(transport: str = "grpc", request_type=service.ListModelsRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListModelsResponse(
next_page_token="next_page_token_value",
)
response = client.list_models(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_models_from_dict():
test_list_models(request_type=dict)
def test_list_models_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
client.list_models()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelsRequest()
@pytest.mark.asyncio
async def test_list_models_async(
transport: str = "grpc_asyncio", request_type=service.ListModelsRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListModelsResponse(next_page_token="next_page_token_value",)
)
response = await client.list_models(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_models_async_from_dict():
await test_list_models_async(request_type=dict)
def test_list_models_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListModelsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
call.return_value = service.ListModelsResponse()
client.list_models(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_models_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListModelsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListModelsResponse()
)
await client.list_models(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_models_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListModelsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_models(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_models_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_models(
service.ListModelsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_models_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            service.ListModelsResponse()
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_models(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_models_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_models(
service.ListModelsRequest(), parent="parent_value",
)
def test_list_models_pager():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelsResponse(
model=[model.Model(), model.Model(), model.Model(),],
next_page_token="abc",
),
service.ListModelsResponse(model=[], next_page_token="def",),
service.ListModelsResponse(model=[model.Model(),], next_page_token="ghi",),
service.ListModelsResponse(model=[model.Model(), model.Model(),],),
RuntimeError,
)
metadata = ()
metadata = tuple(metadata) + (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_models(request={})
assert pager._metadata == metadata
        results = list(pager)
assert len(results) == 6
assert all(isinstance(i, model.Model) for i in results)
def test_list_models_pages():
    client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.list_models), "__call__") as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelsResponse(
model=[model.Model(), model.Model(), model.Model(),],
next_page_token="abc",
),
service.ListModelsResponse(model=[], next_page_token="def",),
service.ListModelsResponse(model=[model.Model(),], next_page_token="ghi",),
service.ListModelsResponse(model=[model.Model(), model.Model(),],),
RuntimeError,
)
pages = list(client.list_models(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_models_async_pager():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_models), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelsResponse(
model=[model.Model(), model.Model(), model.Model(),],
next_page_token="abc",
),
service.ListModelsResponse(model=[], next_page_token="def",),
service.ListModelsResponse(model=[model.Model(),], next_page_token="ghi",),
service.ListModelsResponse(model=[model.Model(), model.Model(),],),
RuntimeError,
)
async_pager = await client.list_models(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, model.Model) for i in responses)
@pytest.mark.asyncio
async def test_list_models_async_pages():
    client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_models), "__call__", new_callable=mock.AsyncMock
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelsResponse(
model=[model.Model(), model.Model(), model.Model(),],
next_page_token="abc",
),
service.ListModelsResponse(model=[], next_page_token="def",),
service.ListModelsResponse(model=[model.Model(),], next_page_token="ghi",),
service.ListModelsResponse(model=[model.Model(), model.Model(),],),
RuntimeError,
)
pages = []
async for page_ in (await client.list_models(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_delete_model(transport: str = "grpc", request_type=service.DeleteModelRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.delete_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_delete_model_from_dict():
test_delete_model(request_type=dict)
def test_delete_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
client.delete_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteModelRequest()
@pytest.mark.asyncio
async def test_delete_model_async(
transport: str = "grpc_asyncio", request_type=service.DeleteModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.delete_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeleteModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_delete_model_async_from_dict():
await test_delete_model_async(request_type=dict)
def test_delete_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeleteModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.delete_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_delete_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeleteModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.delete_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_delete_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.delete_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_delete_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.delete_model(
service.DeleteModelRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_delete_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.delete_model), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.delete_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_delete_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.delete_model(
service.DeleteModelRequest(), name="name_value",
)
def test_deploy_model(transport: str = "grpc", request_type=service.DeployModelRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.deploy_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeployModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_deploy_model_from_dict():
test_deploy_model(request_type=dict)
def test_deploy_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
client.deploy_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeployModelRequest()
@pytest.mark.asyncio
async def test_deploy_model_async(
transport: str = "grpc_asyncio", request_type=service.DeployModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.deploy_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.DeployModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_deploy_model_async_from_dict():
await test_deploy_model_async(request_type=dict)
def test_deploy_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeployModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.deploy_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_deploy_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.DeployModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.deploy_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_deploy_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.deploy_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_deploy_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.deploy_model(
service.DeployModelRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_deploy_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.deploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.deploy_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_deploy_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.deploy_model(
service.DeployModelRequest(), name="name_value",
)
def test_undeploy_model(
transport: str = "grpc", request_type=service.UndeployModelRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.undeploy_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UndeployModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_undeploy_model_from_dict():
test_undeploy_model(request_type=dict)
def test_undeploy_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
client.undeploy_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.UndeployModelRequest()
@pytest.mark.asyncio
async def test_undeploy_model_async(
transport: str = "grpc_asyncio", request_type=service.UndeployModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.undeploy_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.UndeployModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_undeploy_model_async_from_dict():
await test_undeploy_model_async(request_type=dict)
def test_undeploy_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UndeployModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.undeploy_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_undeploy_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.UndeployModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.undeploy_model(request)
# Establish that the underlying gRPC stub method was called.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_undeploy_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.undeploy_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_undeploy_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.undeploy_model(
service.UndeployModelRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_undeploy_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.undeploy_model), "__call__") as call:
# Designate an appropriate return value for the call.
        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
            operations_pb2.Operation(name="operations/spam")
        )
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.undeploy_model(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
        assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_undeploy_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.undeploy_model(
service.UndeployModelRequest(), name="name_value",
)
def test_export_model(transport: str = "grpc", request_type=service.ExportModelRequest):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.export_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_export_model_from_dict():
test_export_model(request_type=dict)
def test_export_model_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
client.export_model()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportModelRequest()
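# The empty-call tests depend on the client substituting an empty request
# message when ``request`` is ``None`` and no flattened fields are given.
# A rough sketch of that defaulting, using ``dict`` in place of a proto
# message class for illustration:

```python
def coerce_request(request, request_type=dict):
    """Default a missing request to an empty message (sketch only)."""
    if request is None:
        request = request_type()
    return request


assert coerce_request(None) == {}
assert coerce_request({"name": "n"}) == {"name": "n"}
```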
@pytest.mark.asyncio
async def test_export_model_async(
transport: str = "grpc_asyncio", request_type=service.ExportModelRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.export_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportModelRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_export_model_async_from_dict():
await test_export_model_async(request_type=dict)
def test_export_model_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.export_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_export_model_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportModelRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.export_model(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_export_model_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.export_model(
name="name_value",
output_config=io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
)
assert arg == mock_val
def test_export_model_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.export_model(
service.ExportModelRequest(),
name="name_value",
output_config=io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
@pytest.mark.asyncio
async def test_export_model_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(type(client.transport.export_model), "__call__") as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.export_model(
name="name_value",
output_config=io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_export_model_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.export_model(
service.ExportModelRequest(),
name="name_value",
output_config=io.ModelExportOutputConfig(
gcs_destination=io.GcsDestination(
output_uri_prefix="output_uri_prefix_value"
)
),
)
def test_export_evaluated_examples(
transport: str = "grpc", request_type=service.ExportEvaluatedExamplesRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/spam")
response = client.export_evaluated_examples(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportEvaluatedExamplesRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
def test_export_evaluated_examples_from_dict():
test_export_evaluated_examples(request_type=dict)
def test_export_evaluated_examples_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
client.export_evaluated_examples()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportEvaluatedExamplesRequest()
@pytest.mark.asyncio
async def test_export_evaluated_examples_async(
transport: str = "grpc_asyncio", request_type=service.ExportEvaluatedExamplesRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
response = await client.export_evaluated_examples(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ExportEvaluatedExamplesRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, future.Future)
@pytest.mark.asyncio
async def test_export_evaluated_examples_async_from_dict():
await test_export_evaluated_examples_async(request_type=dict)
def test_export_evaluated_examples_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportEvaluatedExamplesRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
call.return_value = operations_pb2.Operation(name="operations/op")
client.export_evaluated_examples(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_export_evaluated_examples_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ExportEvaluatedExamplesRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/op")
)
await client.export_evaluated_examples(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_export_evaluated_examples_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = operations_pb2.Operation(name="operations/op")
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.export_evaluated_examples(
name="name_value",
output_config=io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(
output_uri="output_uri_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(output_uri="output_uri_value")
)
assert arg == mock_val
def test_export_evaluated_examples_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.export_evaluated_examples(
service.ExportEvaluatedExamplesRequest(),
name="name_value",
output_config=io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(
output_uri="output_uri_value"
)
),
)
@pytest.mark.asyncio
async def test_export_evaluated_examples_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.export_evaluated_examples), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
operations_pb2.Operation(name="operations/spam")
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.export_evaluated_examples(
name="name_value",
output_config=io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(
output_uri="output_uri_value"
)
),
)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
arg = args[0].output_config
mock_val = io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(output_uri="output_uri_value")
)
assert arg == mock_val
@pytest.mark.asyncio
async def test_export_evaluated_examples_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.export_evaluated_examples(
service.ExportEvaluatedExamplesRequest(),
name="name_value",
output_config=io.ExportEvaluatedExamplesOutputConfig(
bigquery_destination=io.BigQueryDestination(
output_uri="output_uri_value"
)
),
)
def test_get_model_evaluation(
transport: str = "grpc", request_type=service.GetModelEvaluationRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = model_evaluation.ModelEvaluation(
name="name_value",
annotation_spec_id="annotation_spec_id_value",
display_name="display_name_value",
evaluated_example_count=2446,
classification_evaluation_metrics=classification.ClassificationEvaluationMetrics(
au_prc=0.634
),
)
response = client.get_model_evaluation(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelEvaluationRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, model_evaluation.ModelEvaluation)
assert response.name == "name_value"
assert response.annotation_spec_id == "annotation_spec_id_value"
assert response.display_name == "display_name_value"
assert response.evaluated_example_count == 2446
def test_get_model_evaluation_from_dict():
test_get_model_evaluation(request_type=dict)
def test_get_model_evaluation_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
client.get_model_evaluation()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelEvaluationRequest()
@pytest.mark.asyncio
async def test_get_model_evaluation_async(
transport: str = "grpc_asyncio", request_type=service.GetModelEvaluationRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_evaluation.ModelEvaluation(
name="name_value",
annotation_spec_id="annotation_spec_id_value",
display_name="display_name_value",
evaluated_example_count=2446,
)
)
response = await client.get_model_evaluation(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.GetModelEvaluationRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, model_evaluation.ModelEvaluation)
assert response.name == "name_value"
assert response.annotation_spec_id == "annotation_spec_id_value"
assert response.display_name == "display_name_value"
assert response.evaluated_example_count == 2446
@pytest.mark.asyncio
async def test_get_model_evaluation_async_from_dict():
await test_get_model_evaluation_async(request_type=dict)
def test_get_model_evaluation_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetModelEvaluationRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
call.return_value = model_evaluation.ModelEvaluation()
client.get_model_evaluation(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_get_model_evaluation_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.GetModelEvaluationRequest()
request.name = "name/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_evaluation.ModelEvaluation()
)
await client.get_model_evaluation(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "name=name/value",) in kw["metadata"]
def test_get_model_evaluation_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = model_evaluation.ModelEvaluation()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.get_model_evaluation(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
def test_get_model_evaluation_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.get_model_evaluation(
service.GetModelEvaluationRequest(), name="name_value",
)
@pytest.mark.asyncio
async def test_get_model_evaluation_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.get_model_evaluation), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
model_evaluation.ModelEvaluation()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.get_model_evaluation(name="name_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].name
mock_val = "name_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_get_model_evaluation_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.get_model_evaluation(
service.GetModelEvaluationRequest(), name="name_value",
)
def test_list_model_evaluations(
transport: str = "grpc", request_type=service.ListModelEvaluationsRequest
):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListModelEvaluationsResponse(
next_page_token="next_page_token_value",
)
response = client.list_model_evaluations(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelEvaluationsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelEvaluationsPager)
assert response.next_page_token == "next_page_token_value"
def test_list_model_evaluations_from_dict():
test_list_model_evaluations(request_type=dict)
def test_list_model_evaluations_empty_call():
# This test is a coverage failsafe to make sure that totally empty calls,
# i.e. request == None and no flattened fields passed, work.
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
client.list_model_evaluations()
call.assert_called()
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelEvaluationsRequest()
@pytest.mark.asyncio
async def test_list_model_evaluations_async(
transport: str = "grpc_asyncio", request_type=service.ListModelEvaluationsRequest
):
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# Everything is optional in proto3 as far as the runtime is concerned,
# and we are mocking out the actual API, so just send an empty request.
request = request_type()
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListModelEvaluationsResponse(
next_page_token="next_page_token_value",
)
)
response = await client.list_model_evaluations(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == service.ListModelEvaluationsRequest()
# Establish that the response is the type that we expect.
assert isinstance(response, pagers.ListModelEvaluationsAsyncPager)
assert response.next_page_token == "next_page_token_value"
@pytest.mark.asyncio
async def test_list_model_evaluations_async_from_dict():
await test_list_model_evaluations_async(request_type=dict)
def test_list_model_evaluations_field_headers():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListModelEvaluationsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
call.return_value = service.ListModelEvaluationsResponse()
client.list_model_evaluations(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
@pytest.mark.asyncio
async def test_list_model_evaluations_field_headers_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Any value that is part of the HTTP/1.1 URI should be sent as
# a field header. Set these to a non-empty value.
request = service.ListModelEvaluationsRequest()
request.parent = "parent/value"
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListModelEvaluationsResponse()
)
await client.list_model_evaluations(request)
# Establish that the underlying gRPC stub method was called.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
assert args[0] == request
# Establish that the field header was sent.
_, _, kw = call.mock_calls[0]
assert ("x-goog-request-params", "parent=parent/value",) in kw["metadata"]
def test_list_model_evaluations_flattened():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = service.ListModelEvaluationsResponse()
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
client.list_model_evaluations(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls) == 1
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
def test_list_model_evaluations_flattened_error():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
client.list_model_evaluations(
service.ListModelEvaluationsRequest(), parent="parent_value",
)
@pytest.mark.asyncio
async def test_list_model_evaluations_flattened_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Designate an appropriate return value for the call.
call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
service.ListModelEvaluationsResponse()
)
# Call the method with a truthy value for each flattened field,
# using the keyword arguments to the method.
response = await client.list_model_evaluations(parent="parent_value",)
# Establish that the underlying call was made with the expected
# request object values.
assert len(call.mock_calls)
_, args, _ = call.mock_calls[0]
arg = args[0].parent
mock_val = "parent_value"
assert arg == mock_val
@pytest.mark.asyncio
async def test_list_model_evaluations_flattened_error_async():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Attempting to call a method with both a request object and flattened
# fields is an error.
with pytest.raises(ValueError):
await client.list_model_evaluations(
service.ListModelEvaluationsRequest(), parent="parent_value",
)
def test_list_model_evaluations_pager():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
next_page_token="abc",
),
service.ListModelEvaluationsResponse(
model_evaluation=[], next_page_token="def",
),
service.ListModelEvaluationsResponse(
model_evaluation=[model_evaluation.ModelEvaluation(),],
next_page_token="ghi",
),
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
),
RuntimeError,
)
metadata = (
gapic_v1.routing_header.to_grpc_metadata((("parent", ""),)),
)
pager = client.list_model_evaluations(request={})
assert pager._metadata == metadata
results = list(pager)
assert len(results) == 6
assert all(isinstance(i, model_evaluation.ModelEvaluation) for i in results)
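# The pager test above drains four mocked pages into six results. The
# iteration contract can be sketched independently of the generated
# ``pagers`` module: fetch a page, yield its items, and continue while
# ``next_page_token`` is non-empty. ``SimplePager`` below is an
# illustrative stand-in, not the gapic implementation.

```python
class SimplePager:
    """Flattens items across token-linked pages (sketch, not gapic)."""

    def __init__(self, fetch_page):
        self._fetch_page = fetch_page  # callable: token -> page dict

    def __iter__(self):
        token = ""
        while True:
            page = self._fetch_page(token)
            yield from page.get("items", [])
            token = page.get("next_page_token", "")
            if not token:
                return


pages = {
    "": {"items": [1, 2, 3], "next_page_token": "abc"},
    "abc": {"items": [], "next_page_token": "def"},
    "def": {"items": [4], "next_page_token": "ghi"},
    "ghi": {"items": [5, 6]},
}
results = list(SimplePager(lambda token: pages[token]))
assert results == [1, 2, 3, 4, 5, 6]
```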
def test_list_model_evaluations_pages():
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations), "__call__"
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
next_page_token="abc",
),
service.ListModelEvaluationsResponse(
model_evaluation=[], next_page_token="def",
),
service.ListModelEvaluationsResponse(
model_evaluation=[model_evaluation.ModelEvaluation(),],
next_page_token="ghi",
),
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
),
RuntimeError,
)
pages = list(client.list_model_evaluations(request={}).pages)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
@pytest.mark.asyncio
async def test_list_model_evaluations_async_pager():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
next_page_token="abc",
),
service.ListModelEvaluationsResponse(
model_evaluation=[], next_page_token="def",
),
service.ListModelEvaluationsResponse(
model_evaluation=[model_evaluation.ModelEvaluation(),],
next_page_token="ghi",
),
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
),
RuntimeError,
)
async_pager = await client.list_model_evaluations(request={},)
assert async_pager.next_page_token == "abc"
responses = []
async for response in async_pager:
responses.append(response)
assert len(responses) == 6
assert all(isinstance(i, model_evaluation.ModelEvaluation) for i in responses)
@pytest.mark.asyncio
async def test_list_model_evaluations_async_pages():
client = AutoMlAsyncClient(credentials=ga_credentials.AnonymousCredentials(),)
# Mock the actual call within the gRPC stub, and fake the request.
with mock.patch.object(
type(client.transport.list_model_evaluations),
"__call__",
new_callable=mock.AsyncMock,
) as call:
# Set the response to a series of pages.
call.side_effect = (
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
next_page_token="abc",
),
service.ListModelEvaluationsResponse(
model_evaluation=[], next_page_token="def",
),
service.ListModelEvaluationsResponse(
model_evaluation=[model_evaluation.ModelEvaluation(),],
next_page_token="ghi",
),
service.ListModelEvaluationsResponse(
model_evaluation=[
model_evaluation.ModelEvaluation(),
model_evaluation.ModelEvaluation(),
],
),
RuntimeError,
)
pages = []
async for page_ in (await client.list_model_evaluations(request={})).pages:
pages.append(page_)
for page_, token in zip(pages, ["abc", "def", "ghi", ""]):
assert page_.raw_page.next_page_token == token
def test_credentials_transport_error():
# It is an error to provide credentials and a transport instance.
transport = transports.AutoMlGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport,
)
# It is an error to provide a credentials file and a transport instance.
transport = transports.AutoMlGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = AutoMlClient(
client_options={"credentials_file": "credentials.json"},
transport=transport,
)
# It is an error to provide scopes and a transport instance.
transport = transports.AutoMlGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
with pytest.raises(ValueError):
client = AutoMlClient(
client_options={"scopes": ["1", "2"]}, transport=transport,
)
def test_transport_instance():
# A client may be instantiated with a custom transport instance.
transport = transports.AutoMlGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
client = AutoMlClient(transport=transport)
assert client.transport is transport
def test_transport_get_channel():
# A client may be instantiated with a custom transport instance.
transport = transports.AutoMlGrpcTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
transport = transports.AutoMlGrpcAsyncIOTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
channel = transport.grpc_channel
assert channel
@pytest.mark.parametrize(
"transport_class",
[transports.AutoMlGrpcTransport, transports.AutoMlGrpcAsyncIOTransport,],
)
def test_transport_adc(transport_class):
# Test default credentials are used if not provided.
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class()
adc.assert_called_once()
def test_transport_grpc_default():
# A client should use the gRPC transport by default.
client = AutoMlClient(credentials=ga_credentials.AnonymousCredentials(),)
assert isinstance(client.transport, transports.AutoMlGrpcTransport,)
def test_auto_ml_base_transport_error():
# Passing both a credentials object and credentials_file should raise an error
with pytest.raises(core_exceptions.DuplicateCredentialArgs):
transport = transports.AutoMlTransport(
credentials=ga_credentials.AnonymousCredentials(),
credentials_file="credentials.json",
)
def test_auto_ml_base_transport():
# Instantiate the base transport.
with mock.patch(
"google.cloud.automl_v1beta1.services.auto_ml.transports.AutoMlTransport.__init__"
) as Transport:
Transport.return_value = None
transport = transports.AutoMlTransport(
credentials=ga_credentials.AnonymousCredentials(),
)
# Every method on the transport should just blindly
# raise NotImplementedError.
methods = (
"create_dataset",
"get_dataset",
"list_datasets",
"update_dataset",
"delete_dataset",
"import_data",
"export_data",
"get_annotation_spec",
"get_table_spec",
"list_table_specs",
"update_table_spec",
"get_column_spec",
"list_column_specs",
"update_column_spec",
"create_model",
"get_model",
"list_models",
"delete_model",
"deploy_model",
"undeploy_model",
"export_model",
"export_evaluated_examples",
"get_model_evaluation",
"list_model_evaluations",
)
for method in methods:
with pytest.raises(NotImplementedError):
getattr(transport, method)(request=object())
with pytest.raises(NotImplementedError):
transport.close()
# Additionally, the LRO client (a property) should
# also raise NotImplementedError
with pytest.raises(NotImplementedError):
transport.operations_client
def test_auto_ml_base_transport_with_credentials_file():
# Instantiate the base transport with a credentials file
with mock.patch.object(
google.auth, "load_credentials_from_file", autospec=True
) as load_creds, mock.patch(
"google.cloud.automl_v1beta1.services.auto_ml.transports.AutoMlTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
load_creds.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.AutoMlTransport(
credentials_file="credentials.json", quota_project_id="octopus",
)
load_creds.assert_called_once_with(
"credentials.json",
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
def test_auto_ml_base_transport_with_adc():
# Test the default credentials are used if credentials and credentials_file are None.
with mock.patch.object(google.auth, "default", autospec=True) as adc, mock.patch(
"google.cloud.automl_v1beta1.services.auto_ml.transports.AutoMlTransport._prep_wrapped_messages"
) as Transport:
Transport.return_value = None
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport = transports.AutoMlTransport()
adc.assert_called_once()
def test_auto_ml_auth_adc():
# If no credentials are provided, we should use ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
AutoMlClient()
adc.assert_called_once_with(
scopes=None,
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id=None,
)
@pytest.mark.parametrize(
"transport_class",
[transports.AutoMlGrpcTransport, transports.AutoMlGrpcAsyncIOTransport,],
)
def test_auto_ml_transport_auth_adc(transport_class):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(google.auth, "default", autospec=True) as adc:
adc.return_value = (ga_credentials.AnonymousCredentials(), None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
adc.assert_called_once_with(
scopes=["1", "2"],
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
quota_project_id="octopus",
)
@pytest.mark.parametrize(
"transport_class,grpc_helpers",
[
(transports.AutoMlGrpcTransport, grpc_helpers),
(transports.AutoMlGrpcAsyncIOTransport, grpc_helpers_async),
],
)
def test_auto_ml_transport_create_channel(transport_class, grpc_helpers):
# If credentials and host are not provided, the transport class should use
# ADC credentials.
with mock.patch.object(
google.auth, "default", autospec=True
) as adc, mock.patch.object(
grpc_helpers, "create_channel", autospec=True
) as create_channel:
creds = ga_credentials.AnonymousCredentials()
adc.return_value = (creds, None)
transport_class(quota_project_id="octopus", scopes=["1", "2"])
create_channel.assert_called_with(
"automl.googleapis.com:443",
credentials=creds,
credentials_file=None,
quota_project_id="octopus",
default_scopes=("https://www.googleapis.com/auth/cloud-platform",),
scopes=["1", "2"],
default_host="automl.googleapis.com",
ssl_credentials=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
@pytest.mark.parametrize(
"transport_class",
[transports.AutoMlGrpcTransport, transports.AutoMlGrpcAsyncIOTransport],
)
def test_auto_ml_grpc_transport_client_cert_source_for_mtls(transport_class):
cred = ga_credentials.AnonymousCredentials()
# Check ssl_channel_credentials is used if provided.
with mock.patch.object(transport_class, "create_channel") as mock_create_channel:
mock_ssl_channel_creds = mock.Mock()
transport_class(
host="squid.clam.whelk",
credentials=cred,
ssl_channel_credentials=mock_ssl_channel_creds,
)
mock_create_channel.assert_called_once_with(
"squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_channel_creds,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
# Check if ssl_channel_credentials is not provided, then client_cert_source_for_mtls
# is used.
with mock.patch.object(transport_class, "create_channel", return_value=mock.Mock()):
with mock.patch("grpc.ssl_channel_credentials") as mock_ssl_cred:
transport_class(
credentials=cred,
client_cert_source_for_mtls=client_cert_source_callback,
)
expected_cert, expected_key = client_cert_source_callback()
mock_ssl_cred.assert_called_once_with(
certificate_chain=expected_cert, private_key=expected_key
)
def test_auto_ml_host_no_port():
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="automl.googleapis.com"
),
)
assert client.transport._host == "automl.googleapis.com:443"
def test_auto_ml_host_with_port():
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(),
client_options=client_options.ClientOptions(
api_endpoint="automl.googleapis.com:8000"
),
)
assert client.transport._host == "automl.googleapis.com:8000"
def test_auto_ml_grpc_transport_channel():
channel = grpc.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.AutoMlGrpcTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
assert transport._ssl_channel_credentials is None
def test_auto_ml_grpc_asyncio_transport_channel():
channel = aio.secure_channel("http://localhost/", grpc.local_channel_credentials())
# Check that channel is used if provided.
transport = transports.AutoMlGrpcAsyncIOTransport(
host="squid.clam.whelk", channel=channel,
)
assert transport.grpc_channel == channel
assert transport._host == "squid.clam.whelk:443"
assert transport._ssl_channel_credentials is None
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[transports.AutoMlGrpcTransport, transports.AutoMlGrpcAsyncIOTransport],
)
def test_auto_ml_transport_channel_mtls_with_client_cert_source(transport_class):
with mock.patch(
"grpc.ssl_channel_credentials", autospec=True
) as grpc_ssl_channel_cred:
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_ssl_cred = mock.Mock()
grpc_ssl_channel_cred.return_value = mock_ssl_cred
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
cred = ga_credentials.AnonymousCredentials()
with pytest.warns(DeprecationWarning):
with mock.patch.object(google.auth, "default") as adc:
adc.return_value = (cred, None)
transport = transport_class(
host="squid.clam.whelk",
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=client_cert_source_callback,
)
adc.assert_called_once()
grpc_ssl_channel_cred.assert_called_once_with(
certificate_chain=b"cert bytes", private_key=b"key bytes"
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
assert transport._ssl_channel_credentials == mock_ssl_cred
# Remove this test when deprecated arguments (api_mtls_endpoint, client_cert_source) are
# removed from grpc/grpc_asyncio transport constructor.
@pytest.mark.parametrize(
"transport_class",
[transports.AutoMlGrpcTransport, transports.AutoMlGrpcAsyncIOTransport],
)
def test_auto_ml_transport_channel_mtls_with_adc(transport_class):
mock_ssl_cred = mock.Mock()
with mock.patch.multiple(
"google.auth.transport.grpc.SslCredentials",
__init__=mock.Mock(return_value=None),
ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
):
with mock.patch.object(
transport_class, "create_channel"
) as grpc_create_channel:
mock_grpc_channel = mock.Mock()
grpc_create_channel.return_value = mock_grpc_channel
mock_cred = mock.Mock()
with pytest.warns(DeprecationWarning):
transport = transport_class(
host="squid.clam.whelk",
credentials=mock_cred,
api_mtls_endpoint="mtls.squid.clam.whelk",
client_cert_source=None,
)
grpc_create_channel.assert_called_once_with(
"mtls.squid.clam.whelk:443",
credentials=mock_cred,
credentials_file=None,
scopes=None,
ssl_credentials=mock_ssl_cred,
quota_project_id=None,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
assert transport.grpc_channel == mock_grpc_channel
def test_auto_ml_grpc_lro_client():
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc",
)
transport = client.transport
# Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_auto_ml_grpc_lro_async_client():
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc_asyncio",
)
transport = client.transport
# Ensure that we have an api-core operations client.
assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
# Ensure that subsequent calls to the property send the exact same object.
assert transport.operations_client is transport.operations_client
def test_annotation_spec_path():
project = "squid"
location = "clam"
dataset = "whelk"
annotation_spec = "octopus"
expected = "projects/{project}/locations/{location}/datasets/{dataset}/annotationSpecs/{annotation_spec}".format(
project=project,
location=location,
dataset=dataset,
annotation_spec=annotation_spec,
)
actual = AutoMlClient.annotation_spec_path(
project, location, dataset, annotation_spec
)
assert expected == actual
def test_parse_annotation_spec_path():
expected = {
"project": "oyster",
"location": "nudibranch",
"dataset": "cuttlefish",
"annotation_spec": "mussel",
}
path = AutoMlClient.annotation_spec_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_annotation_spec_path(path)
assert expected == actual
def test_column_spec_path():
project = "winkle"
location = "nautilus"
dataset = "scallop"
table_spec = "abalone"
column_spec = "squid"
expected = "projects/{project}/locations/{location}/datasets/{dataset}/tableSpecs/{table_spec}/columnSpecs/{column_spec}".format(
project=project,
location=location,
dataset=dataset,
table_spec=table_spec,
column_spec=column_spec,
)
actual = AutoMlClient.column_spec_path(
project, location, dataset, table_spec, column_spec
)
assert expected == actual
def test_parse_column_spec_path():
expected = {
"project": "clam",
"location": "whelk",
"dataset": "octopus",
"table_spec": "oyster",
"column_spec": "nudibranch",
}
path = AutoMlClient.column_spec_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_column_spec_path(path)
assert expected == actual
def test_dataset_path():
project = "cuttlefish"
location = "mussel"
dataset = "winkle"
expected = "projects/{project}/locations/{location}/datasets/{dataset}".format(
project=project, location=location, dataset=dataset,
)
actual = AutoMlClient.dataset_path(project, location, dataset)
assert expected == actual
def test_parse_dataset_path():
expected = {
"project": "nautilus",
"location": "scallop",
"dataset": "abalone",
}
path = AutoMlClient.dataset_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_dataset_path(path)
assert expected == actual
def test_model_path():
project = "squid"
location = "clam"
model = "whelk"
expected = "projects/{project}/locations/{location}/models/{model}".format(
project=project, location=location, model=model,
)
actual = AutoMlClient.model_path(project, location, model)
assert expected == actual
def test_parse_model_path():
expected = {
"project": "octopus",
"location": "oyster",
"model": "nudibranch",
}
path = AutoMlClient.model_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_model_path(path)
assert expected == actual
def test_model_evaluation_path():
project = "cuttlefish"
location = "mussel"
model = "winkle"
model_evaluation = "nautilus"
expected = "projects/{project}/locations/{location}/models/{model}/modelEvaluations/{model_evaluation}".format(
project=project,
location=location,
model=model,
model_evaluation=model_evaluation,
)
actual = AutoMlClient.model_evaluation_path(
project, location, model, model_evaluation
)
assert expected == actual
def test_parse_model_evaluation_path():
expected = {
"project": "scallop",
"location": "abalone",
"model": "squid",
"model_evaluation": "clam",
}
path = AutoMlClient.model_evaluation_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_model_evaluation_path(path)
assert expected == actual
def test_table_spec_path():
project = "whelk"
location = "octopus"
dataset = "oyster"
table_spec = "nudibranch"
expected = "projects/{project}/locations/{location}/datasets/{dataset}/tableSpecs/{table_spec}".format(
project=project, location=location, dataset=dataset, table_spec=table_spec,
)
actual = AutoMlClient.table_spec_path(project, location, dataset, table_spec)
assert expected == actual
def test_parse_table_spec_path():
expected = {
"project": "cuttlefish",
"location": "mussel",
"dataset": "winkle",
"table_spec": "nautilus",
}
path = AutoMlClient.table_spec_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_table_spec_path(path)
assert expected == actual
def test_common_billing_account_path():
billing_account = "scallop"
expected = "billingAccounts/{billing_account}".format(
billing_account=billing_account,
)
actual = AutoMlClient.common_billing_account_path(billing_account)
assert expected == actual
def test_parse_common_billing_account_path():
expected = {
"billing_account": "abalone",
}
path = AutoMlClient.common_billing_account_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_common_billing_account_path(path)
assert expected == actual
def test_common_folder_path():
folder = "squid"
expected = "folders/{folder}".format(folder=folder,)
actual = AutoMlClient.common_folder_path(folder)
assert expected == actual
def test_parse_common_folder_path():
expected = {
"folder": "clam",
}
path = AutoMlClient.common_folder_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_common_folder_path(path)
assert expected == actual
def test_common_organization_path():
organization = "whelk"
expected = "organizations/{organization}".format(organization=organization,)
actual = AutoMlClient.common_organization_path(organization)
assert expected == actual
def test_parse_common_organization_path():
expected = {
"organization": "octopus",
}
path = AutoMlClient.common_organization_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_common_organization_path(path)
assert expected == actual
def test_common_project_path():
project = "oyster"
expected = "projects/{project}".format(project=project,)
actual = AutoMlClient.common_project_path(project)
assert expected == actual
def test_parse_common_project_path():
expected = {
"project": "nudibranch",
}
path = AutoMlClient.common_project_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_common_project_path(path)
assert expected == actual
def test_common_location_path():
project = "cuttlefish"
location = "mussel"
expected = "projects/{project}/locations/{location}".format(
project=project, location=location,
)
actual = AutoMlClient.common_location_path(project, location)
assert expected == actual
def test_parse_common_location_path():
expected = {
"project": "winkle",
"location": "nautilus",
}
path = AutoMlClient.common_location_path(**expected)
# Check that the path construction is reversible.
actual = AutoMlClient.parse_common_location_path(path)
assert expected == actual
def test_client_withDEFAULT_CLIENT_INFO():
client_info = gapic_v1.client_info.ClientInfo()
with mock.patch.object(
transports.AutoMlTransport, "_prep_wrapped_messages"
) as prep:
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
with mock.patch.object(
transports.AutoMlTransport, "_prep_wrapped_messages"
) as prep:
transport_class = AutoMlClient.get_transport_class()
transport = transport_class(
credentials=ga_credentials.AnonymousCredentials(), client_info=client_info,
)
prep.assert_called_once_with(client_info)
@pytest.mark.asyncio
async def test_transport_close_async():
client = AutoMlAsyncClient(
credentials=ga_credentials.AnonymousCredentials(), transport="grpc_asyncio",
)
with mock.patch.object(
type(getattr(client.transport, "grpc_channel")), "close"
) as close:
async with client:
close.assert_not_called()
close.assert_called_once()
def test_transport_close():
transports = {
"grpc": "_grpc_channel",
}
for transport, close_name in transports.items():
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport
)
with mock.patch.object(
type(getattr(client.transport, close_name)), "close"
) as close:
with client:
close.assert_not_called()
close.assert_called_once()
def test_client_ctx():
transports = [
"grpc",
]
for transport in transports:
client = AutoMlClient(
credentials=ga_credentials.AnonymousCredentials(), transport=transport
)
# Test client calls underlying transport.
with mock.patch.object(type(client.transport), "close") as close:
close.assert_not_called()
with client:
pass
close.assert_called()
# Copyright 2020 The PyMC Developers
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint:disable=unused-variable
from functools import reduce
from operator import add

import numpy as np
import numpy.testing as npt
import pytest
import theano
import theano.tensor as tt

import pymc3 as pm

from ..math import cartesian, kronecker
np.random.seed(101)
class TestZeroMean:
def test_value(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
zero_mean = pm.gp.mean.Zero()
M = theano.function([], zero_mean(X))()
assert np.all(M == 0)
assert M.shape == (10,)
class TestConstantMean:
def test_value(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
const_mean = pm.gp.mean.Constant(6)
M = theano.function([], const_mean(X))()
assert np.all(M == 6)
assert M.shape == (10,)
class TestLinearMean:
def test_value(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
linear_mean = pm.gp.mean.Linear(2, 0.5)
M = theano.function([], linear_mean(X))()
npt.assert_allclose(M[1], 0.7222, atol=1e-3)
assert M.shape == (10,)
class TestAddProdMean:
def test_add(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
mean1 = pm.gp.mean.Linear(coeffs=2, intercept=0.5)
mean2 = pm.gp.mean.Constant(2)
mean = mean1 + mean2 + mean2
M = theano.function([], mean(X))()
npt.assert_allclose(M[1], 0.7222 + 2 + 2, atol=1e-3)
def test_prod(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
mean1 = pm.gp.mean.Linear(coeffs=2, intercept=0.5)
mean2 = pm.gp.mean.Constant(2)
mean = mean1 * mean2 * mean2
M = theano.function([], mean(X))()
npt.assert_allclose(M[1], 0.7222 * 2 * 2, atol=1e-3)
def test_add_multid(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
A = np.array([1, 2, 3])
b = 10
with pm.Model() as model:
mean1 = pm.gp.mean.Linear(coeffs=A, intercept=b)
mean2 = pm.gp.mean.Constant(2)
mean = mean1 + mean2 + mean2
M = theano.function([], mean(X))()
npt.assert_allclose(M[1], 10.8965 + 2 + 2, atol=1e-3)
def test_prod_multid(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
A = np.array([1, 2, 3])
b = 10
with pm.Model() as model:
mean1 = pm.gp.mean.Linear(coeffs=A, intercept=b)
mean2 = pm.gp.mean.Constant(2)
mean = mean1 * mean2 * mean2
M = theano.function([], mean(X))()
npt.assert_allclose(M[1], 10.8965 * 2 * 2, atol=1e-3)
class TestCovAdd:
def test_symadd_cov(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov1 = pm.gp.cov.ExpQuad(1, 0.1)
cov2 = pm.gp.cov.ExpQuad(1, 0.1)
cov = cov1 + cov2
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_rightadd_scalar(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
a = 1
cov = pm.gp.cov.ExpQuad(1, 0.1) + a
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 1.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_leftadd_scalar(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
a = 1
cov = a + pm.gp.cov.ExpQuad(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 1.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_rightadd_matrix(self):
X = np.linspace(0, 1, 10)[:, None]
M = 2 * np.ones((10, 10))
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(1, 0.1) + M
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_leftadd_matrixt(self):
X = np.linspace(0, 1, 10)[:, None]
M = 2 * tt.ones((10, 10))
with pm.Model() as model:
cov = M + pm.gp.cov.ExpQuad(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_leftprod_matrix(self):
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
with pm.Model() as model:
cov = M + pm.gp.cov.ExpQuad(1, 0.1)
cov_true = pm.gp.cov.ExpQuad(1, 0.1) + M
K = theano.function([], cov(X))()
K_true = theano.function([], cov_true(X))()
assert np.allclose(K, K_true)
class TestCovProd:
def test_symprod_cov(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov1 = pm.gp.cov.ExpQuad(1, 0.1)
cov2 = pm.gp.cov.ExpQuad(1, 0.1)
cov = cov1 * cov2
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.53940 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_rightprod_scalar(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
a = 2
cov = pm.gp.cov.ExpQuad(1, 0.1) * a
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_leftprod_scalar(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
a = 2
cov = a * pm.gp.cov.ExpQuad(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_rightprod_matrix(self):
X = np.linspace(0, 1, 10)[:, None]
M = 2 * np.ones((10, 10))
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(1, 0.1) * M
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_leftprod_matrix(self):
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
with pm.Model() as model:
cov = M * pm.gp.cov.ExpQuad(1, 0.1)
cov_true = pm.gp.cov.ExpQuad(1, 0.1) * M
K = theano.function([], cov(X))()
K_true = theano.function([], cov_true(X))()
assert np.allclose(K, K_true)
def test_multiops(self):
X = np.linspace(0, 1, 3)[:, None]
M = np.array([[1, 2, 3], [2, 1, 2], [3, 2, 1]])
with pm.Model() as model:
cov1 = 3 + pm.gp.cov.ExpQuad(1, 0.1) + M * pm.gp.cov.ExpQuad(1, 0.1) * M * pm.gp.cov.ExpQuad(1, 0.1)
cov2 = pm.gp.cov.ExpQuad(1, 0.1) * M * pm.gp.cov.ExpQuad(1, 0.1) * M + pm.gp.cov.ExpQuad(1, 0.1) + 3
K1 = theano.function([], cov1(X))()
K2 = theano.function([], cov2(X))()
assert np.allclose(K1, K2)
# check diagonal
K1d = theano.function([], cov1(X, diag=True))()
K2d = theano.function([], cov2(X, diag=True))()
npt.assert_allclose(np.diag(K1), K2d, atol=1e-5)
npt.assert_allclose(np.diag(K2), K1d, atol=1e-5)
class TestCovKron:
def test_symprod_cov(self):
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 1, 10)[:, None]
X = cartesian(X1, X2)
with pm.Model() as model:
cov1 = pm.gp.cov.ExpQuad(1, 0.1)
cov2 = pm.gp.cov.ExpQuad(1, 0.1)
cov = pm.gp.cov.Kron([cov1, cov2])
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 1 * 0.53940, atol=1e-3)
npt.assert_allclose(K[0, 11], 0.53940 * 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_multiops(self):
X1 = np.linspace(0, 1, 3)[:, None]
X21 = np.linspace(0, 1, 5)[:, None]
X22 = np.linspace(0, 1, 4)[:, None]
X2 = cartesian(X21, X22)
X = cartesian(X1, X21, X22)
with pm.Model() as model:
cov1 = 3 + pm.gp.cov.ExpQuad(1, 0.1) + pm.gp.cov.ExpQuad(1, 0.1) * pm.gp.cov.ExpQuad(1, 0.1)
cov2 = pm.gp.cov.ExpQuad(1, 0.1) * pm.gp.cov.ExpQuad(2, 0.1)
cov = pm.gp.cov.Kron([cov1, cov2])
K_true = kronecker(theano.function([], cov1(X1))(), theano.function([], cov2(X2))()).eval()
K = theano.function([], cov(X))()
npt.assert_allclose(K_true, K)
class TestCovSliceDim:
def test_slice1(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(3, 0.1, active_dims=[0, 0, 1])
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.20084298, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_slice2(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(3, ls=[0.1, 0.1], active_dims=[1,2])
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.34295549, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_slice3(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(3, ls=np.array([0.1, 0.1]), active_dims=[1,2])
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.34295549, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_diffslice(self):
X = np.linspace(0, 1, 30).reshape(10, 3)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(3, ls=0.1, active_dims=[1, 0, 0]) + pm.gp.cov.ExpQuad(3, ls=[0.1, 0.2, 0.3])
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.683572, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_raises(self):
lengthscales = 2.0
with pytest.raises(ValueError):
pm.gp.cov.ExpQuad(1, lengthscales, [True, False])
pm.gp.cov.ExpQuad(2, lengthscales, [True])
class TestStability:
def test_stable(self):
X = np.random.uniform(low=320., high=400., size=[2000, 2])
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(2, 0.1)
dists = theano.function([], cov.square_dist(X, X))()
assert not np.any(dists < 0)
class TestExpQuad:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_2d(self):
X = np.linspace(0, 1, 10).reshape(5, 2)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(2, 0.5)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.820754, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_2dard(self):
X = np.linspace(0, 1, 10).reshape(5, 2)
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(2, np.array([1, 2]))
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.969607, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_inv_lengthscale(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.ExpQuad(1, ls_inv=10)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.53940, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestWhiteNoise:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.WhiteNoise(sigma=0.5)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.0, atol=1e-3)
npt.assert_allclose(K[0, 0], 0.5**2, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
# check predict
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.0, atol=1e-3)
        # predicting with a white noise covariance should return all zeros
npt.assert_allclose(K[0, 0], 0.0, atol=1e-3)
class TestConstant:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Constant(2.5)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 2.5, atol=1e-3)
npt.assert_allclose(K[0, 0], 2.5, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 2.5, atol=1e-3)
npt.assert_allclose(K[0, 0], 2.5, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestRatQuad:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.RatQuad(1, ls=0.1, alpha=0.5)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.66896, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.66896, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestExponential:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Exponential(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.57375, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.57375, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestMatern52:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Matern52(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.46202, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.46202, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestMatern32:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Matern32(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.42682, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.42682, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestMatern12:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Matern12(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.32919, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.32919, atol=1e-3)
        # check diagonal
        Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestCosine:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Cosine(1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.766, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.766, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestPeriodic:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Periodic(1, 0.1, 0.1)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.00288, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.00288, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestLinear:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Linear(1, 0.5)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.19444, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.19444, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestPolynomial:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
with pm.Model() as model:
cov = pm.gp.cov.Polynomial(1, 0.5, 2, 0)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.03780, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.03780, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
class TestWarpedInput:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
def warp_func(x, a, b, c):
return x + (a * tt.tanh(b * (x - c)))
with pm.Model() as model:
cov_m52 = pm.gp.cov.Matern52(1, 0.2)
cov = pm.gp.cov.WarpedInput(1, warp_func=warp_func, args=(1, 10, 1), cov_func=cov_m52)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 0.79593, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 0.79593, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_raises(self):
cov_m52 = pm.gp.cov.Matern52(1, 0.2)
with pytest.raises(TypeError):
pm.gp.cov.WarpedInput(1, cov_m52, "str is not callable")
with pytest.raises(TypeError):
pm.gp.cov.WarpedInput(1, "str is not Covariance object", lambda x: x)
class TestGibbs:
def test_1d(self):
X = np.linspace(0, 2, 10)[:, None]
def tanh_func(x, x1, x2, w, x0):
return (x1 + x2) / 2.0 - (x1 - x2) / 2.0 * tt.tanh((x - x0) / w)
with pm.Model() as model:
cov = pm.gp.cov.Gibbs(1, tanh_func, args=(0.05, 0.6, 0.4, 1.0))
K = theano.function([], cov(X))()
npt.assert_allclose(K[2, 3], 0.136683, atol=1e-4)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[2, 3], 0.136683, atol=1e-4)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_raises(self):
with pytest.raises(TypeError):
pm.gp.cov.Gibbs(1, "str is not callable")
with pytest.raises(NotImplementedError):
pm.gp.cov.Gibbs(2, lambda x: x)
with pytest.raises(NotImplementedError):
pm.gp.cov.Gibbs(3, lambda x: x, active_dims=[0,1])
class TestScaledCov:
def test_1d(self):
X = np.linspace(0, 1, 10)[:, None]
def scaling_func(x, a, b):
return a + b*x
with pm.Model() as model:
cov_m52 = pm.gp.cov.Matern52(1, 0.2)
cov = pm.gp.cov.ScaledCov(1, scaling_func=scaling_func, args=(2, -1), cov_func=cov_m52)
K = theano.function([], cov(X))()
npt.assert_allclose(K[0, 1], 3.00686, atol=1e-3)
K = theano.function([], cov(X, X))()
npt.assert_allclose(K[0, 1], 3.00686, atol=1e-3)
# check diagonal
Kd = theano.function([], cov(X, diag=True))()
npt.assert_allclose(np.diag(K), Kd, atol=1e-5)
def test_raises(self):
cov_m52 = pm.gp.cov.Matern52(1, 0.2)
with pytest.raises(TypeError):
pm.gp.cov.ScaledCov(1, cov_m52, "str is not callable")
with pytest.raises(TypeError):
pm.gp.cov.ScaledCov(1, "str is not Covariance object", lambda x: x)
class TestHandleArgs:
def test_handleargs(self):
def func_noargs(x):
return x
def func_onearg(x, a):
return x + a
def func_twoarg(x, a, b):
return x + a + b
x = 100
a = 2
b = 3
func_noargs2 = pm.gp.cov.handle_args(func_noargs, None)
func_onearg2 = pm.gp.cov.handle_args(func_onearg, a)
func_twoarg2 = pm.gp.cov.handle_args(func_twoarg, args=(a, b))
assert func_noargs(x) == func_noargs2(x, args=None)
assert func_onearg(x, a) == func_onearg2(x, args=a)
assert func_twoarg(x, a, b) == func_twoarg2(x, args=(a, b))
class TestCoregion:
def setup_method(self):
self.nrows = 6
self.ncols = 3
self.W = np.random.rand(self.nrows, self.ncols)
self.kappa = np.random.rand(self.nrows)
self.B = np.dot(self.W, self.W.T) + np.diag(self.kappa)
self.rand_rows = np.random.randint(0, self.nrows, size=(20, 1))
self.rand_cols = np.random.randint(0, self.ncols, size=(10, 1))
self.X = np.concatenate((self.rand_rows, np.random.rand(20, 1)), axis=1)
self.Xs = np.concatenate((self.rand_cols, np.random.rand(10, 1)), axis=1)
def test_full(self):
B_mat = self.B[self.rand_rows, self.rand_rows.T]
with pm.Model() as model:
B = pm.gp.cov.Coregion(2, W=self.W, kappa=self.kappa, active_dims=[0])
npt.assert_allclose(
B(np.array([[2, 1.5], [3, -42]])).eval(),
self.B[2:4, 2:4]
)
npt.assert_allclose(B(self.X).eval(), B_mat)
def test_fullB(self):
B_mat = self.B[self.rand_rows, self.rand_rows.T]
with pm.Model() as model:
B = pm.gp.cov.Coregion(1, B=self.B)
npt.assert_allclose(
B(np.array([[2], [3]])).eval(),
self.B[2:4, 2:4]
)
npt.assert_allclose(B(self.X).eval(), B_mat)
def test_Xs(self):
B_mat = self.B[self.rand_rows, self.rand_cols.T]
with pm.Model() as model:
B = pm.gp.cov.Coregion(2, W=self.W, kappa=self.kappa, active_dims=[0])
npt.assert_allclose(
B(np.array([[2, 1.5]]), np.array([[3, -42]])).eval(),
self.B[2, 3]
)
npt.assert_allclose(B(self.X, self.Xs).eval(), B_mat)
def test_diag(self):
B_diag = np.diag(self.B)[self.rand_rows.ravel()]
with pm.Model() as model:
B = pm.gp.cov.Coregion(2, W=self.W, kappa=self.kappa, active_dims=[0])
npt.assert_allclose(
B(np.array([[2, 1.5]]), diag=True).eval(),
np.diag(self.B)[2]
)
npt.assert_allclose(B(self.X, diag=True).eval(), B_diag)
def test_raises(self):
with pm.Model() as model:
with pytest.raises(ValueError):
B = pm.gp.cov.Coregion(2, W=self.W, kappa=self.kappa)
def test_raises2(self):
with pm.Model() as model:
with pytest.raises(ValueError):
B = pm.gp.cov.Coregion(1, W=self.W, kappa=self.kappa, B=self.B)
def test_raises3(self):
with pm.Model() as model:
with pytest.raises(ValueError):
B = pm.gp.cov.Coregion(1)
class TestMarginalVsLatent:
R"""
    Compare the logp of a Marginal model with noise=0 to that of a Latent model.
"""
def setup_method(self):
X = np.random.randn(50,3)
y = np.random.randn(50)*0.01
Xnew = np.random.randn(60, 3)
pnew = np.random.randn(60)*0.01
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.Marginal(mean_func, cov_func)
f = gp.marginal_likelihood("f", X, y, noise=0.0, is_observed=False, observed=y)
p = gp.conditional("p", Xnew)
self.logp = model.logp({"p": pnew})
self.X = X
self.Xnew = Xnew
self.y = y
self.pnew = pnew
def testLatent1(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.Latent(mean_func, cov_func)
f = gp.prior("f", self.X, reparameterize=False)
p = gp.conditional("p", self.Xnew)
latent_logp = model.logp({"f": self.y, "p": self.pnew})
npt.assert_allclose(latent_logp, self.logp, atol=0, rtol=1e-2)
def testLatent2(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.Latent(mean_func, cov_func)
f = gp.prior("f", self.X, reparameterize=True)
p = gp.conditional("p", self.Xnew)
chol = np.linalg.cholesky(cov_func(self.X).eval())
y_rotated = np.linalg.solve(chol, self.y - 0.5)
latent_logp = model.logp({"f_rotated_": y_rotated, "p": self.pnew})
npt.assert_allclose(latent_logp, self.logp, atol=5)
class TestMarginalVsMarginalSparse:
R"""
    Compare the logp of Marginal and MarginalSparse models.
    They should be nearly equal when the inducing points are the same as the inputs.
"""
def setup_method(self):
X = np.random.randn(50,3)
y = np.random.randn(50)*0.01
Xnew = np.random.randn(60, 3)
pnew = np.random.randn(60)*0.01
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.Marginal(mean_func, cov_func)
sigma = 0.1
f = gp.marginal_likelihood("f", X, y, noise=sigma)
p = gp.conditional("p", Xnew)
self.logp = model.logp({"p": pnew})
self.X = X
self.Xnew = Xnew
self.y = y
self.sigma = sigma
self.pnew = pnew
self.gp = gp
@pytest.mark.parametrize('approx', ['FITC', 'VFE', 'DTC'])
def testApproximations(self, approx):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.MarginalSparse(mean_func, cov_func, approx=approx)
f = gp.marginal_likelihood("f", self.X, self.X, self.y, self.sigma)
p = gp.conditional("p", self.Xnew)
approx_logp = model.logp({"f": self.y, "p": self.pnew})
npt.assert_allclose(approx_logp, self.logp, atol=0, rtol=1e-2)
@pytest.mark.parametrize('approx', ['FITC', 'VFE', 'DTC'])
def testPredictVar(self, approx):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.MarginalSparse(mean_func, cov_func, approx=approx)
f = gp.marginal_likelihood("f", self.X, self.X, self.y, self.sigma)
mu1, var1 = self.gp.predict(self.Xnew, diag=True)
mu2, var2 = gp.predict(self.Xnew, diag=True)
npt.assert_allclose(mu1, mu2, atol=0, rtol=1e-3)
npt.assert_allclose(var1, var2, atol=0, rtol=1e-3)
def testPredictCov(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
mean_func = pm.gp.mean.Constant(0.5)
gp = pm.gp.MarginalSparse(mean_func, cov_func, approx="DTC")
f = gp.marginal_likelihood("f", self.X, self.X, self.y, self.sigma, is_observed=False)
mu1, cov1 = self.gp.predict(self.Xnew, pred_noise=True)
mu2, cov2 = gp.predict(self.Xnew, pred_noise=True)
npt.assert_allclose(mu1, mu2, atol=0, rtol=1e-3)
npt.assert_allclose(cov1, cov2, atol=0, rtol=1e-3)
class TestGPAdditive:
def setup_method(self):
self.X = np.random.randn(50,3)
self.y = np.random.randn(50)*0.01
self.Xnew = np.random.randn(60, 3)
self.noise = pm.gp.cov.WhiteNoise(0.1)
self.covs = (pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3]),
pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3]),
pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3]))
self.means = (pm.gp.mean.Constant(0.5),
pm.gp.mean.Constant(0.5),
pm.gp.mean.Constant(0.5))
def testAdditiveMarginal(self):
with pm.Model() as model1:
gp1 = pm.gp.Marginal(self.means[0], self.covs[0])
gp2 = pm.gp.Marginal(self.means[1], self.covs[1])
gp3 = pm.gp.Marginal(self.means[2], self.covs[2])
gpsum = gp1 + gp2 + gp3
            fsum = gpsum.marginal_likelihood("fsum", self.X, self.y, noise=self.noise)
model1_logp = model1.logp({"fsum": self.y})
with pm.Model() as model2:
gptot = pm.gp.Marginal(reduce(add, self.means), reduce(add, self.covs))
            fsum = gptot.marginal_likelihood("fsum", self.X, self.y, noise=self.noise)
model2_logp = model2.logp({"fsum": self.y})
npt.assert_allclose(model1_logp, model2_logp, atol=0, rtol=1e-2)
with model1:
fp1 = gpsum.conditional("fp1", self.Xnew, given={"X": self.X, "y": self.y,
"noise": self.noise, "gp": gpsum})
with model2:
fp2 = gptot.conditional("fp2", self.Xnew)
fp = np.random.randn(self.Xnew.shape[0])
npt.assert_allclose(fp1.logp({"fp1": fp}), fp2.logp({"fp2": fp}), atol=0, rtol=1e-2)
@pytest.mark.parametrize('approx', ['FITC', 'VFE', 'DTC'])
def testAdditiveMarginalSparse(self, approx):
Xu = np.random.randn(10, 3)
sigma = 0.1
with pm.Model() as model1:
gp1 = pm.gp.MarginalSparse(self.means[0], self.covs[0], approx=approx)
gp2 = pm.gp.MarginalSparse(self.means[1], self.covs[1], approx=approx)
gp3 = pm.gp.MarginalSparse(self.means[2], self.covs[2], approx=approx)
gpsum = gp1 + gp2 + gp3
            fsum = gpsum.marginal_likelihood("fsum", self.X, Xu, self.y, noise=sigma)
model1_logp = model1.logp({"fsum": self.y})
with pm.Model() as model2:
gptot = pm.gp.MarginalSparse(reduce(add, self.means), reduce(add, self.covs), approx=approx)
            fsum = gptot.marginal_likelihood("fsum", self.X, Xu, self.y, noise=sigma)
model2_logp = model2.logp({"fsum": self.y})
npt.assert_allclose(model1_logp, model2_logp, atol=0, rtol=1e-2)
with model1:
fp1 = gpsum.conditional("fp1", self.Xnew, given={"X": self.X, "Xu": Xu, "y": self.y,
"sigma": sigma, "gp": gpsum})
with model2:
fp2 = gptot.conditional("fp2", self.Xnew)
fp = np.random.randn(self.Xnew.shape[0])
npt.assert_allclose(fp1.logp({"fp1": fp}), fp2.logp({"fp2": fp}), atol=0, rtol=1e-2)
def testAdditiveLatent(self):
with pm.Model() as model1:
gp1 = pm.gp.Latent(self.means[0], self.covs[0])
gp2 = pm.gp.Latent(self.means[1], self.covs[1])
gp3 = pm.gp.Latent(self.means[2], self.covs[2])
gpsum = gp1 + gp2 + gp3
fsum = gpsum.prior("fsum", self.X, reparameterize=False)
model1_logp = model1.logp({"fsum": self.y})
with pm.Model() as model2:
gptot = pm.gp.Latent(reduce(add, self.means), reduce(add, self.covs))
fsum = gptot.prior("fsum", self.X, reparameterize=False)
model2_logp = model2.logp({"fsum": self.y})
npt.assert_allclose(model1_logp, model2_logp, atol=0, rtol=1e-2)
with model1:
fp1 = gpsum.conditional("fp1", self.Xnew, given={"X": self.X, "f": self.y, "gp": gpsum})
with model2:
fp2 = gptot.conditional("fp2", self.Xnew)
fp = np.random.randn(self.Xnew.shape[0])
npt.assert_allclose(fp1.logp({"fsum": self.y, "fp1": fp}),
fp2.logp({"fsum": self.y, "fp2": fp}), atol=0, rtol=1e-2)
def testAdditiveSparseRaises(self):
        # can't add different approximations
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
gp1 = pm.gp.MarginalSparse(cov_func=cov_func, approx="DTC")
gp2 = pm.gp.MarginalSparse(cov_func=cov_func, approx="FITC")
with pytest.raises(Exception) as e_info:
gp1 + gp2
def testAdditiveTypeRaises1(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
gp1 = pm.gp.MarginalSparse(cov_func=cov_func, approx="DTC")
gp2 = pm.gp.Marginal(cov_func=cov_func)
with pytest.raises(Exception) as e_info:
gp1 + gp2
def testAdditiveTypeRaises2(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
gp1 = pm.gp.Latent(cov_func=cov_func)
gp2 = pm.gp.Marginal(cov_func=cov_func)
with pytest.raises(Exception) as e_info:
gp1 + gp2
class TestTP:
R"""
    Compare a TP with high degrees of freedom to a GP.
"""
def setup_method(self):
X = np.random.randn(20,3)
y = np.random.randn(20)*0.01
Xnew = np.random.randn(50, 3)
pnew = np.random.randn(50)*0.01
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
gp = pm.gp.Latent(cov_func=cov_func)
f = gp.prior("f", X, reparameterize=False)
p = gp.conditional("p", Xnew)
self.X = X
self.y = y
self.Xnew = Xnew
self.pnew = pnew
self.latent_logp = model.logp({"f": y, "p": pnew})
self.plogp = p.logp({"f": y, "p": pnew})
def testTPvsLatent(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
tp = pm.gp.TP(cov_func=cov_func, nu=10000)
f = tp.prior("f", self.X, reparameterize=False)
p = tp.conditional("p", self.Xnew)
tp_logp = model.logp({"f": self.y, "p": self.pnew})
npt.assert_allclose(self.latent_logp, tp_logp, atol=0, rtol=1e-2)
def testTPvsLatentReparameterized(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
tp = pm.gp.TP(cov_func=cov_func, nu=10000)
f = tp.prior("f", self.X, reparameterize=True)
p = tp.conditional("p", self.Xnew)
chol = np.linalg.cholesky(cov_func(self.X).eval())
y_rotated = np.linalg.solve(chol, self.y)
            # testing the full model logp is unreliable due to the introduction of chi2__log__
plogp = p.logp({"f_rotated_": y_rotated, "p": self.pnew, "chi2__log__": np.log(1e20)})
npt.assert_allclose(self.plogp, plogp, atol=0, rtol=1e-2)
def testAdditiveTPRaises(self):
with pm.Model() as model:
cov_func = pm.gp.cov.ExpQuad(3, [0.1, 0.2, 0.3])
gp1 = pm.gp.TP(cov_func=cov_func, nu=10)
gp2 = pm.gp.TP(cov_func=cov_func, nu=10)
with pytest.raises(Exception) as e_info:
gp1 + gp2
class TestLatentKron:
"""
Compare gp.LatentKron to gp.Latent, both with Gaussian noise.
"""
def setup_method(self):
self.Xs = [np.linspace(0, 1, 7)[:, None],
np.linspace(0, 1, 5)[:, None],
np.linspace(0, 1, 6)[:, None]]
self.X = cartesian(*self.Xs)
self.N = np.prod([len(X) for X in self.Xs])
self.y = np.random.randn(self.N) * 0.1
self.Xnews = (np.random.randn(5, 1),
np.random.randn(5, 1),
np.random.randn(5, 1))
self.Xnew = np.concatenate(self.Xnews, axis=1)
self.pnew = np.random.randn(len(self.Xnew))*0.01
ls = 0.2
with pm.Model() as latent_model:
self.cov_funcs = (pm.gp.cov.ExpQuad(1, ls),
pm.gp.cov.ExpQuad(1, ls),
pm.gp.cov.ExpQuad(1, ls))
cov_func = pm.gp.cov.Kron(self.cov_funcs)
self.mean = pm.gp.mean.Constant(0.5)
gp = pm.gp.Latent(mean_func=self.mean, cov_func=cov_func)
f = gp.prior("f", self.X)
p = gp.conditional("p", self.Xnew)
chol = np.linalg.cholesky(cov_func(self.X).eval())
self.y_rotated = np.linalg.solve(chol, self.y - 0.5)
self.logp = latent_model.logp({"f_rotated_": self.y_rotated, "p": self.pnew})
def testLatentKronvsLatent(self):
with pm.Model() as kron_model:
kron_gp = pm.gp.LatentKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
f = kron_gp.prior('f', self.Xs)
p = kron_gp.conditional('p', self.Xnew)
kronlatent_logp = kron_model.logp({"f_rotated_": self.y_rotated, "p": self.pnew})
npt.assert_allclose(kronlatent_logp, self.logp, atol=0, rtol=1e-3)
def testLatentKronRaisesAdditive(self):
with pm.Model() as kron_model:
gp1 = pm.gp.LatentKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
gp2 = pm.gp.LatentKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
with pytest.raises(TypeError):
gp1 + gp2
def testLatentKronRaisesSizes(self):
with pm.Model() as kron_model:
gp = pm.gp.LatentKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
with pytest.raises(ValueError):
gp.prior("f", Xs=[np.linspace(0, 1, 7)[:, None],
np.linspace(0, 1, 5)[:, None]])
class TestMarginalKron:
"""
Compare gp.MarginalKron to gp.Marginal.
"""
def setup_method(self):
self.Xs = [np.linspace(0, 1, 7)[:, None],
np.linspace(0, 1, 5)[:, None],
np.linspace(0, 1, 6)[:, None]]
self.X = cartesian(*self.Xs)
self.N = np.prod([len(X) for X in self.Xs])
self.y = np.random.randn(self.N) * 0.1
self.Xnews = (np.random.randn(5, 1),
np.random.randn(5, 1),
np.random.randn(5, 1))
self.Xnew = np.concatenate(self.Xnews, axis=1)
self.sigma = 0.2
self.pnew = np.random.randn(len(self.Xnew))*0.01
ls = 0.2
with pm.Model() as model:
self.cov_funcs = [pm.gp.cov.ExpQuad(1, ls),
pm.gp.cov.ExpQuad(1, ls),
pm.gp.cov.ExpQuad(1, ls)]
cov_func = pm.gp.cov.Kron(self.cov_funcs)
self.mean = pm.gp.mean.Constant(0.5)
gp = pm.gp.Marginal(mean_func=self.mean, cov_func=cov_func)
f = gp.marginal_likelihood("f", self.X, self.y, noise=self.sigma)
p = gp.conditional("p", self.Xnew)
self.mu, self.cov = gp.predict(self.Xnew)
self.logp = model.logp({"p": self.pnew})
def testMarginalKronvsMarginalpredict(self):
with pm.Model() as kron_model:
kron_gp = pm.gp.MarginalKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
f = kron_gp.marginal_likelihood('f', self.Xs, self.y,
sigma=self.sigma, shape=self.N)
p = kron_gp.conditional('p', self.Xnew)
mu, cov = kron_gp.predict(self.Xnew)
npt.assert_allclose(mu, self.mu, atol=0, rtol=1e-2)
npt.assert_allclose(cov, self.cov, atol=0, rtol=1e-2)
def testMarginalKronvsMarginal(self):
with pm.Model() as kron_model:
kron_gp = pm.gp.MarginalKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
f = kron_gp.marginal_likelihood('f', self.Xs, self.y,
sigma=self.sigma, shape=self.N)
p = kron_gp.conditional('p', self.Xnew)
kron_logp = kron_model.logp({'p': self.pnew})
npt.assert_allclose(kron_logp, self.logp, atol=0, rtol=1e-2)
def testMarginalKronRaises(self):
with pm.Model() as kron_model:
gp1 = pm.gp.MarginalKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
gp2 = pm.gp.MarginalKron(mean_func=self.mean,
cov_funcs=self.cov_funcs)
with pytest.raises(TypeError):
gp1 + gp2
# --- deep_rl/component/__init__.py (repo: JACKHAHA363/DeepRL, license: Apache-2.0) ---
from .atari_wrapper import *
from .policy import *
from .replay import *
from .task import *
from .random_process import *
from .bench import *
# --- fastapi_helpers/db/seeders/__init__.py (repo: finalsa/fastapi-helpers, license: MIT) ---
from .reader import DbSeeder
# --- ref/fir_design_helper.py (repo: kromalee/signal-filter-designer, license: MIT) ---
import scipy.signal as signal
import matplotlib.pyplot as plt
from logging import getLogger
log = getLogger(__name__)
def firwin_lpf(N_taps, fc, fs=1.0):
    """
    Design a windowed-sinc lowpass FIR filter.

    :param N_taps: filter length (number of coefficients, i.e., filter order + 1)
    :param fc: cutoff frequency of the filter
    :param fs: sampling frequency of the signal
    :return: the FIR filter coefficients
    """
return signal.firwin(N_taps, 2 * fc / fs)
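# A quick usage sketch of the windowed-sinc lowpass above (the cutoff and
# sampling frequency below are illustrative, not part of this module):
# scipy's firwin normalizes the lowpass taps to unity gain at DC, so they sum to 1.

```python
import numpy as np
import scipy.signal as signal

# Illustrative 31-tap lowpass, fc = 1 kHz at fs = 8 kHz.
b = signal.firwin(31, 2 * 1000.0 / 8000.0)
print(np.sum(b))  # ~1.0 (unity DC gain)
```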
def firwin_hpf(N_taps, fc, fs=1.0):
    """
    Design a windowed-sinc highpass FIR filter.

    :param N_taps: filter length (use an odd length so the highpass response is realizable)
    :param fc: cutoff frequency of the filter
    :param fs: sampling frequency of the signal
    :return: the FIR filter coefficients
    """
    # pass_zero=False turns the single-cutoff design into a highpass;
    # without it this function would return the same lowpass as firwin_lpf.
    return signal.firwin(N_taps, 2 * fc / fs, pass_zero=False)
def firwin_bpf(N_taps, f1, f2, fs=1.0, pass_zero=False):
    """
    Design a windowed-sinc bandpass FIR filter.

    :param N_taps: filter length (number of coefficients)
    :param f1: lower passband edge frequency
    :param f2: upper passband edge frequency
    :param fs: sampling frequency of the signal
    :param pass_zero: False (the default) passes the band between f1 and f2
    :return: the FIR filter coefficients
    """
    # Note: 2 * (f1, f2) / fs on a plain tuple is invalid; the cutoffs
    # must be an array so the scaling is applied elementwise.
    return signal.firwin(N_taps, 2 * np.array([f1, f2]) / fs, pass_zero=pass_zero)
def firwin_bsf(N_taps, f1, f2, fs=1.0, pass_zero=True):
    """
    Design a windowed-sinc bandstop FIR filter.

    :param N_taps: filter length (use an odd length so the bandstop response is realizable)
    :param f1: lower stopband edge frequency
    :param f2: upper stopband edge frequency
    :param fs: sampling frequency of the signal
    :param pass_zero: True (the default) stops the band between f1 and f2
    :return: the FIR filter coefficients
    """
    # pass_zero=True keeps DC and rejects the band between f1 and f2;
    # with False this function would be identical to firwin_bpf.
    return signal.firwin(N_taps, 2 * np.array([f1, f2]) / fs, pass_zero=pass_zero)
def firwin_kaiser_lpf(f_pass, f_stop, d_stop, fs=1.0, N_bump=0, status=True):
"""
Design an FIR lowpass filter using the sinc() kernel and
a Kaiser window. The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired stopband attenuation
d_stop in dB, all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the
stopband attenuation.
Mark Wickert October 2016
"""
wc = 2 * np.pi * (f_pass + f_stop) / 2 / fs
delta_w = 2 * np.pi * (f_stop - f_pass) / fs
# Find the filter order
M = np.ceil((d_stop - 8) / (2.285 * delta_w))
# Adjust filter order up or down as needed
M += N_bump
    N_taps = int(M + 1)
# Obtain the Kaiser window
beta = signal.kaiser_beta(d_stop)
w_k = signal.kaiser(N_taps, beta)
n = np.arange(N_taps)
b_k = wc / np.pi * np.sinc(wc / np.pi * (n - M / 2)) * w_k
b_k /= np.sum(b_k)
if status:
log.info('Kaiser Win filter taps = %d.' % N_taps)
return b_k
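The order line `M = np.ceil((d_stop - 8) / (2.285 * delta_w))` is Kaiser's empirical estimate. Pulled out on its own it is easy to exercise directly (the function name is illustrative):

```python
import numpy as np

def kaiser_order_estimate(f_pass, f_stop, d_stop, fs=1.0):
    """Kaiser's empirical filter-order estimate: M = ceil((A - 8) / (2.285 * delta_w))."""
    delta_w = 2 * np.pi * (f_stop - f_pass) / fs   # transition width in rad/sample
    return int(np.ceil((d_stop - 8) / (2.285 * delta_w)))

print(kaiser_order_estimate(0.20, 0.25, 60))  # -> 73
print(kaiser_order_estimate(0.20, 0.30, 60))  # -> 37
```

A narrower transition band or more stopband attenuation both drive the order up, which is why `N_bump` exists as a manual adjustment.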
def firwin_kaiser_hpf(f_stop, f_pass, d_stop, fs=1.0, N_bump=0, status=True):
"""
Design an FIR highpass filter using the sinc() kernel and
a Kaiser window. The filter order is determined based on
f_pass Hz, f_stop Hz, and the desired stopband attenuation
d_stop in dB, all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the
stopband attenuation.
Mark Wickert October 2016
"""
# Transform HPF critical frequencies to lowpass equivalent
f_pass_eq = fs / 2. - f_pass
f_stop_eq = fs / 2. - f_stop
# Design LPF equivalent
wc = 2 * np.pi * (f_pass_eq + f_stop_eq) / 2 / fs
delta_w = 2 * np.pi * (f_stop_eq - f_pass_eq) / fs
# Find the filter order
M = np.ceil((d_stop - 8) / (2.285 * delta_w))
# Adjust filter order up or down as needed
M += N_bump
    N_taps = int(M + 1)
# Obtain the Kaiser window
beta = signal.kaiser_beta(d_stop)
w_k = signal.kaiser(N_taps, beta)
n = np.arange(N_taps)
b_k = wc / np.pi * np.sinc(wc / np.pi * (n - M / 2)) * w_k
b_k /= np.sum(b_k)
# Transform LPF equivalent to HPF
n = np.arange(len(b_k))
b_k *= (-1) ** n
if status:
log.info('Kaiser Win filter taps = %d.' % N_taps)
return b_k
def firwin_kaiser_bpf(f_stop1, f_pass1, f_pass2, f_stop2, d_stop,
fs=1.0, N_bump=0, status=True):
"""
Design an FIR bandpass filter using the sinc() kernel and
a Kaiser window. The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
desired stopband attenuation d_stop in dB for both stopbands,
all relative to a sampling rate of fs Hz.
Note: the passband ripple cannot be set independent of the
stopband attenuation.
Mark Wickert October 2016
"""
# Design BPF starting from simple LPF equivalent
# The upper and lower stopbands are assumed to have
# the same attenuation level. The LPF equivalent critical
# frequencies:
f_pass = (f_pass2 - f_pass1) / 2
f_stop = (f_stop2 - f_stop1) / 2
# Continue to design equivalent LPF
wc = 2 * np.pi * (f_pass + f_stop) / 2 / fs
delta_w = 2 * np.pi * (f_stop - f_pass) / fs
# Find the filter order
M = np.ceil((d_stop - 8) / (2.285 * delta_w))
# Adjust filter order up or down as needed
M += N_bump
    N_taps = int(M + 1)
# Obtain the Kaiser window
beta = signal.kaiser_beta(d_stop)
w_k = signal.kaiser(N_taps, beta)
n = np.arange(N_taps)
b_k = wc / np.pi * np.sinc(wc / np.pi * (n - M / 2)) * w_k
b_k /= np.sum(b_k)
# Transform LPF to BPF
f0 = (f_pass2 + f_pass1) / 2
w0 = 2 * np.pi * f0 / fs
n = np.arange(len(b_k))
b_k_bp = 2 * b_k * np.cos(w0 * (n - M / 2))
if status:
log.info('Kaiser Win filter taps = %d.' % N_taps)
return b_k_bp
def firwin_kaiser_bsf(f_stop1, f_pass1, f_pass2, f_stop2, d_stop,
fs=1.0, N_bump=0, status=True):
"""
Design an FIR bandstop filter using the sinc() kernel and
a Kaiser window. The filter order is determined based on
f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
desired stopband attenuation d_stop in dB for both stopbands,
all relative to a sampling rate of fs Hz.
Note: The passband ripple cannot be set independent of the
stopband attenuation.
Note: The filter order is forced to be even (odd number of taps)
so there is a center tap that can be used to form 1 - H_BPF.
Mark Wickert October 2016
"""
# First design a BPF starting from simple LPF equivalent
# The upper and lower stopbands are assumed to have
# the same attenuation level. The LPF equivalent critical
# frequencies:
f_pass = (f_pass2 - f_pass1) / 2
f_stop = (f_stop2 - f_stop1) / 2
# Continue to design equivalent LPF
wc = 2 * np.pi * (f_pass + f_stop) / 2 / fs
delta_w = 2 * np.pi * (f_stop - f_pass) / fs
# Find the filter order
M = np.ceil((d_stop - 8) / (2.285 * delta_w))
# Adjust filter order up or down as needed
M += N_bump
# Make filter order even (odd number of taps)
if ((M + 1) / 2.0 - int((M + 1) / 2.0)) == 0:
M += 1
    N_taps = int(M + 1)
# Obtain the Kaiser window
beta = signal.kaiser_beta(d_stop)
w_k = signal.kaiser(N_taps, beta)
n = np.arange(N_taps)
b_k = wc / np.pi * np.sinc(wc / np.pi * (n - M / 2)) * w_k
b_k /= np.sum(b_k)
# Transform LPF to BPF
f0 = (f_pass2 + f_pass1) / 2
w0 = 2 * np.pi * f0 / fs
n = np.arange(len(b_k))
b_k_bs = 2 * b_k * np.cos(w0 * (n - M / 2))
# Transform BPF to BSF via 1 - BPF for odd N_taps
b_k_bs = -b_k_bs
b_k_bs[int(M / 2)] += 1
if status:
log.info('Kaiser Win filter taps = %d.' % N_taps)
return b_k_bs
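The `1 - H_BPF` trick above — negate every tap and add 1 at the center tap — produces the spectral complement of a linear-phase bandpass. A small numpy-only sketch of the transform (the helper name and the toy bandpass are illustrative, not part of this module):

```python
import numpy as np

def complement_filter(b_bp):
    """delta[n - M/2] - b_bp: the bandstop complement of an odd-length symmetric bandpass."""
    b_bs = -np.asarray(b_bp, dtype=float)
    b_bs[len(b_bs) // 2] += 1.0          # requires an odd number of taps
    return b_bs

# Toy symmetric bandpass: unity-gain windowed-sinc prototype modulated to fs/4
n_taps, m = 31, 30
n = np.arange(n_taps)
lp = np.sinc(0.2 * (n - m / 2)) * np.hamming(n_taps)
lp /= lp.sum()                                        # unity-gain lowpass prototype
b_bp = 2 * lp * np.cos(0.5 * np.pi * (n - m / 2))     # shift the passband to fs/4

b_bs = complement_filter(b_bp)
print(round(float(b_bs.sum()), 2))  # DC gain of the bandstop, near 1
```

Since `b_bp + b_bs` is exactly a unit impulse at the center tap, the two responses sum to a pure delay, which is why the even-order (odd-tap) constraint is enforced above.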
# def lowpass_order(f_pass, f_stop, dpass_dB, dstop_dB, fsamp=1):
# """
# Optimal FIR (equal ripple) Lowpass Order Determination
#
# Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
# second edition, Prentice Hall, 2002.
# Journal paper reference: Herrmann et al., Practical Design Rules for Optimum
# Finite Impulse Response Digital Filters, Bell Syst. Tech. J., vol 52, pp.
# 769-799, July-Aug., 1973.
# """
# dpass = 1 - 10 ** (-dpass_dB / 20)
# dstop = 10 ** (-dstop_dB / 20)
# Df = (f_stop - f_pass) / fsamp
# a1 = 5.309e-3
# a2 = 7.114e-2
# a3 = -4.761e-1
# a4 = -2.66e-3
# a5 = -5.941e-1
# a6 = -4.278e-1
#
# Dinf = np.log10(dstop) * (a1 * np.log10(dpass) ** 2 + a2 * np.log10(dpass) + a3) \
# + (a4 * np.log10(dpass) ** 2 + a5 * np.log10(dpass) + a6)
# f = 11.01217 + 0.51244 * (np.log10(dpass) - np.log10(dstop))
# N = Dinf / Df - f * Df + 1
# ff = 2 * np.array([0, f_pass, f_stop, fsamp / 2]) / fsamp
# aa = np.array([1, 1, 0, 0])
# wts = np.array([1.0, dpass / dstop])
# return int(N), ff, aa, wts
#
#
# def bandpass_order(f_stop1, f_pass1, f_pass2, f_stop2, dpass_dB, dstop_dB, fsamp=1):
# """
# Optimal FIR (equal ripple) Bandpass Order Determination
#
# Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
# second edition, Prentice Hall, 2002.
# Journal paper reference: F. Mintzer & B. Liu, Practical Design Rules for Optimum
# FIR Bandpass Digital Filters, IEEE Transactions on Acoustics and Speech, pp.
# 204-206, April,1979.
# """
# dpass = 1 - 10 ** (-dpass_dB / 20)
# dstop = 10 ** (-dstop_dB / 20)
# Df1 = (f_pass1 - f_stop1) / fsamp
# Df2 = (f_stop2 - f_pass2) / fsamp
# b1 = 0.01201
# b2 = 0.09664
# b3 = -0.51325
# b4 = 0.00203
# b5 = -0.5705
# b6 = -0.44314
#
# Df = min(Df1, Df2)
# Cinf = np.log10(dstop) * (b1 * np.log10(dpass) ** 2 + b2 * np.log10(dpass) + b3) \
# + (b4 * np.log10(dpass) ** 2 + b5 * np.log10(dpass) + b6)
# g = -14.6 * np.log10(dpass / dstop) - 16.9
# N = Cinf / Df + g * Df + 1
# ff = 2 * np.array([0, f_stop1, f_pass1, f_pass2, f_stop2, fsamp / 2]) / fsamp
# aa = np.array([0, 0, 1, 1, 0, 0])
# wts = np.array([dpass / dstop, 1, dpass / dstop])
# return int(N), ff, aa, wts
#
#
# def bandstop_order(f_stop1, f_pass1, f_pass2, f_stop2, dpass_dB, dstop_dB, fsamp=1):
# """
# Optimal FIR (equal ripple) Bandstop Order Determination
#
# Text reference: Ifeachor, Digital Signal Processing a Practical Approach,
# second edition, Prentice Hall, 2002.
# Journal paper reference: F. Mintzer & B. Liu, Practical Design Rules for Optimum
# FIR Bandpass Digital Filters, IEEE Transactions on Acoustics and Speech, pp.
# 204-206, April,1979.
# """
# dpass = 1 - 10 ** (-dpass_dB / 20)
# dstop = 10 ** (-dstop_dB / 20)
# Df1 = (f_pass1 - f_stop1) / fsamp
# Df2 = (f_stop2 - f_pass2) / fsamp
# b1 = 0.01201
# b2 = 0.09664
# b3 = -0.51325
# b4 = 0.00203
# b5 = -0.5705
# b6 = -0.44314
#
# Df = min(Df1, Df2)
# Cinf = np.log10(dstop) * (b1 * np.log10(dpass) ** 2 + b2 * np.log10(dpass) + b3) \
# + (b4 * np.log10(dpass) ** 2 + b5 * np.log10(dpass) + b6)
# g = -14.6 * np.log10(dpass / dstop) - 16.9
# N = Cinf / Df + g * Df + 1
# ff = 2 * np.array([0, f_stop1, f_pass1, f_pass2, f_stop2, fsamp / 2]) / fsamp
# aa = np.array([1, 1, 0, 0, 1, 1])
# wts = np.array([2, dpass / dstop, 2])
# return int(N), ff, aa, wts
#
#
# def fir_remez_lpf(f_pass, f_stop, d_pass, d_stop, fs=1.0, N_bump=5, status=True):
# """
# Design an FIR lowpass filter using remez with order
# determination. The filter order is determined based on
# f_pass Hz, fstop Hz, and the desired passband ripple
# d_pass dB and stopband attenuation d_stop dB all
# relative to a sampling rate of fs Hz.
#
# Mark Wickert October 2016, updated October 2018
# """
# n, ff, aa, wts = lowpass_order(f_pass, f_stop, d_pass, d_stop, fsamp=fs)
# # Bump up the order by N_bump to bring down the final d_pass & d_stop
# N_taps = n
# N_taps += N_bump
# b = signal.remez(N_taps, ff, aa[0::2], wts, Hz=2)
# if status:
# log.info('Remez filter taps = %d.' % N_taps)
# return b
#
#
# def fir_remez_hpf(f_stop, f_pass, d_pass, d_stop, fs=1.0, N_bump=5, status=True):
# """
# Design an FIR highpass filter using remez with order
# determination. The filter order is determined based on
# f_pass Hz, fstop Hz, and the desired passband ripple
# d_pass dB and stopband attenuation d_stop dB all
# relative to a sampling rate of fs Hz.
#
# Mark Wickert October 2016, updated October 2018
# """
# # Transform HPF critical frequencies to lowpass equivalent
# f_pass_eq = fs / 2. - f_pass
# f_stop_eq = fs / 2. - f_stop
# # Design LPF equivalent
# n, ff, aa, wts = lowpass_order(f_pass_eq, f_stop_eq, d_pass, d_stop, fsamp=fs)
# # Bump up the order by N_bump to bring down the final d_pass & d_stop
# N_taps = n
# N_taps += N_bump
# b = signal.remez(N_taps, ff, aa[0::2], wts, Hz=2)
# # Transform LPF equivalent to HPF
# n = np.arange(len(b))
# b *= (-1) ** n
# if status:
# log.info('Remez filter taps = %d.' % N_taps)
# return b
#
#
# def fir_remez_bpf(f_stop1, f_pass1, f_pass2, f_stop2, d_pass, d_stop,
# fs=1.0, N_bump=5, status=True):
# """
# Design an FIR bandpass filter using remez with order
# determination. The filter order is determined based on
# f_stop1 Hz, f_pass1 Hz, f_pass2 Hz, f_stop2 Hz, and the
# desired passband ripple d_pass dB and stopband attenuation
# d_stop dB all relative to a sampling rate of fs Hz.
#
# Mark Wickert October 2016, updated October 2018
# """
# n, ff, aa, wts = bandpass_order(f_stop1, f_pass1, f_pass2, f_stop2,
# d_pass, d_stop, fsamp=fs)
# # Bump up the order by N_bump to bring down the final d_pass & d_stop
# N_taps = n
# N_taps += N_bump
# b = signal.remez(N_taps, ff, aa[0::2], wts, Hz=2)
# if status:
# log.info('Remez filter taps = %d.' % N_taps)
# return b
#
#
# def fir_remez_bsf(f_pass1, f_stop1, f_stop2, f_pass2, d_pass, d_stop,
# fs=1.0, N_bump=5, status=True):
# """
# Design an FIR bandstop filter using remez with order
# determination. The filter order is determined based on
# f_pass1 Hz, f_stop1 Hz, f_stop2 Hz, f_pass2 Hz, and the
# desired passband ripple d_pass dB and stopband attenuation
# d_stop dB all relative to a sampling rate of fs Hz.
#
# Mark Wickert October 2016, updated October 2018
# """
# n, ff, aa, wts = bandstop_order(f_pass1, f_stop1, f_stop2, f_pass2,
# d_pass, d_stop, fsamp=fs)
# # Bump up the order by N_bump to bring down the final d_pass & d_stop
# # Initially make sure the number of taps is even so N_bump needs to be odd
# if np.mod(n, 2) != 0:
# n += 1
# N_taps = n
# N_taps += N_bump
# b = signal.remez(N_taps, ff, aa[0::2], wts, Hz=2,
# maxiter=25, grid_density=16)
# if status:
# log.info('N_bump must be odd to maintain odd filter length')
# log.info('Remez filter taps = %d.' % N_taps)
# return b
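The commented-out `lowpass_order` above implements the Herrmann et al. equal-ripple order estimate. Just the order formula, as a self-contained numpy function (name illustrative):

```python
import numpy as np

def herrmann_lpf_order(f_pass, f_stop, dpass_dB, dstop_dB, fsamp=1.0):
    """Equal-ripple FIR lowpass order estimate (Herrmann et al., 1973)."""
    dpass = 1 - 10 ** (-dpass_dB / 20)
    dstop = 10 ** (-dstop_dB / 20)
    df = (f_stop - f_pass) / fsamp                   # normalized transition width
    lp, ls = np.log10(dpass), np.log10(dstop)
    a1, a2, a3 = 5.309e-3, 7.114e-2, -4.761e-1
    a4, a5, a6 = -2.66e-3, -5.941e-1, -4.278e-1
    dinf = ls * (a1 * lp ** 2 + a2 * lp + a3) + (a4 * lp ** 2 + a5 * lp + a6)
    f = 11.01217 + 0.51244 * (lp - ls)
    return int(dinf / df - f * df + 1)

# A tighter transition band raises the estimated order
print(herrmann_lpf_order(0.2, 0.25, 0.5, 60) > herrmann_lpf_order(0.2, 0.30, 0.5, 60))
```

The commented functions then feed this order, plus the band edges `ff`, desired amplitudes `aa`, and ripple weights `wts`, into `signal.remez`.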
def freqz_resp_list(b, a=np.array([1]), mode='dB', fs=1.0, Npts=1024, fsize=(6, 4)):
"""
    A function for displaying digital filter frequency response magnitude,
    phase, and group delay. A plot is produced using matplotlib.

    freqz_resp_list(b, a=[1], mode='dB', fs=1.0, Npts=1024, fsize=(6, 4))

    b = ndarray of numerator coefficients, or a list of such ndarrays
    a = ndarray of denominator coefficients, or a list of such ndarrays
    mode = display mode: 'dB' magnitude, 'phase' in radians, or
           'groupdelay_s' in samples and 'groupdelay_t' in sec,
           all versus frequency in Hz
    Npts = number of points to plot; default is 1024
    fsize = figure size; default is (6, 4) inches

    Mark Wickert, January 2015
    """
if type(b) == list:
# We have a list of filters
N_filt = len(b)
f = np.arange(0, Npts) / (2.0 * Npts)
for n in range(N_filt):
w, H = signal.freqz(b[n], a[n], 2 * np.pi * f)
if n == 0:
plt.figure(figsize=fsize)
if mode.lower() == 'db':
plt.plot(f * fs, 20 * np.log10(np.abs(H)))
if n == N_filt - 1:
plt.xlabel('Frequency (Hz)')
plt.ylabel('Gain (dB)')
plt.title('Frequency Response - Magnitude')
elif mode.lower() == 'phase':
plt.plot(f * fs, np.angle(H))
if n == N_filt - 1:
plt.xlabel('Frequency (Hz)')
plt.ylabel('Phase (rad)')
plt.title('Frequency Response - Phase')
elif (mode.lower() == 'groupdelay_s') or (mode.lower() == 'groupdelay_t'):
"""
Notes
-----
Since this calculation involves finding the derivative of the
phase response, care must be taken at phase wrapping points
and when the phase jumps by +/-pi, which occurs when the
amplitude response changes sign. Since the amplitude response
is zero when the sign changes, the jumps do not alter the group
delay results.
"""
theta = np.unwrap(np.angle(H))
# Since theta for an FIR filter is likely to have many pi phase
# jumps too, we unwrap a second time 2*theta and divide by 2
theta2 = np.unwrap(2 * theta) / 2.
                Tg = -np.diff(theta2) / np.diff(w)
                # For gain that is almost zero, set group delay = 0
                idx = np.nonzero(np.ravel(20 * np.log10(np.abs(H[:-1])) < -400))[0]
Tg[idx] = np.zeros(len(idx))
max_Tg = np.max(Tg)
if mode.lower() == 'groupdelay_t':
max_Tg /= fs
plt.plot(f[:-1] * fs, Tg / fs)
plt.ylim([0, 1.2 * max_Tg])
else:
plt.plot(f[:-1] * fs, Tg)
plt.ylim([0, 1.2 * max_Tg])
if n == N_filt - 1:
plt.xlabel('Frequency (Hz)')
if mode.lower() == 'groupdelay_t':
plt.ylabel('Group Delay (s)')
else:
plt.ylabel('Group Delay (samples)')
plt.title('Frequency Response - Group Delay')
else:
            s1 = 'Error, mode must be "dB", "phase", '
s2 = '"groupdelay_s", or "groupdelay_t"'
log.info(s1 + s2)
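The group-delay branch above estimates Tg = -dθ/dω numerically, with a second unwrap of 2θ to survive the ±π jumps an FIR amplitude sign change produces. The core calculation, stripped of plotting (numpy only, function name illustrative):

```python
import numpy as np

def group_delay_from_response(w, H):
    """Group delay Tg = -d(theta)/d(omega) from a sampled frequency response."""
    theta = np.unwrap(np.angle(H))
    theta2 = np.unwrap(2 * theta) / 2.0    # second unwrap catches FIR sign-change jumps
    return -np.diff(theta2) / np.diff(w)

# A pure delay of D samples has H(w) = exp(-1j*w*D) and constant group delay D
D = 5
w = np.linspace(0, np.pi, 512, endpoint=False)
H = np.exp(-1j * w * D)
Tg = group_delay_from_response(w, H)
print(round(float(Tg.mean()), 6))  # -> 5.0
```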
8a29855f6162a2b92d3d9039c4be3fccbd867462 | 12,613 | py | Python | testing/scripts/test_rolling_updates.py | jsreid13/seldon-core | 4197eb65bff5ac40629b717a9d533f0e92160ad2 | ["Apache-2.0"] | 1 | 2021-05-31T14:52:17.000Z | 2021-05-31T14:52:17.000Z
import os
import time
import logging
import pytest
from subprocess import run
from seldon_e2e_utils import (
wait_for_status,
wait_for_rollout,
rest_request_ambassador,
initial_rest_request,
assert_model,
assert_model_during_op,
retry_run,
API_AMBASSADOR,
API_ISTIO_GATEWAY,
get_pod_name_for_sdep,
wait_for_pod_shutdown,
)
def to_resources_path(file_name):
return os.path.join("..", "resources", file_name)
with_api_gateways = pytest.mark.parametrize(
"api_gateway", [API_AMBASSADOR, API_ISTIO_GATEWAY], ids=["ambas", "istio"]
)
@pytest.mark.sequential
@pytest.mark.flaky(max_runs=3)
@with_api_gateways
class TestRollingHttp(object):
# Test updating a model to a multi predictor model
def test_rolling_update5(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace)
logging.warning("Initial request")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph6.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(50):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert (res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
)
if (not r.status_code == 200) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
):
break
time.sleep(1)
assert i < 100
logging.warning("Success for test_rolling_update5")
run(f"kubectl delete -f ../resources/graph1.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph6.json -n {namespace}", shell=True)
# Test updating a model with a new image version as the only change
def test_rolling_update6(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1svc.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace, expected_deployments=2)
logging.warning("Initial request")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph2svc.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(100):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert (res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
)
if (not r.status_code == 200) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
):
break
time.sleep(1)
assert i < 100
logging.warning("Success for test_rolling_update6")
run(f"kubectl delete -f ../resources/graph1svc.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph2svc.json -n {namespace}", shell=True)
# test changing the image version and the name of its container
def test_rolling_update7(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1svc.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace, expected_deployments=2)
logging.warning("Initial request")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph3svc.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(100):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert (res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
)
if (not r.status_code == 200) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
):
break
time.sleep(1)
assert i < 100
logging.warning("Success for test_rolling_update7")
run(f"kubectl delete -f ../resources/graph1svc.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph3svc.json -n {namespace}", shell=True)
# Test updating a model with a new resource request but same image
def test_rolling_update8(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1svc.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace, expected_deployments=2)
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph4svc.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(50):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
time.sleep(1)
assert i == 49
logging.warning("Success for test_rolling_update8")
run(f"kubectl delete -f ../resources/graph1svc.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph4svc.json -n {namespace}", shell=True)
# Test updating a model with a multi deployment new model
def test_rolling_update9(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1svc.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace, expected_deployments=2)
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph5svc.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(50):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
time.sleep(1)
assert i == 49
logging.warning("Success for test_rolling_update9")
run(f"kubectl delete -f ../resources/graph1svc.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph5svc.json -n {namespace}", shell=True)
# Test updating a model to a multi predictor model
def test_rolling_update10(self, namespace, api_gateway):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(
f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}"
)
retry_run(f"kubectl apply -f ../resources/graph1svc.json -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace, expected_deployments=2)
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
retry_run(f"kubectl apply -f ../resources/graph6svc.json -n {namespace}")
r = initial_rest_request("mymodel", namespace, endpoint=api_gateway)
assert r.status_code == 200
assert r.json()["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]
i = 0
for i in range(50):
r = rest_request_ambassador("mymodel", namespace, api_gateway)
assert r.status_code == 200
res = r.json()
assert (res["data"]["tensor"]["values"] == [1.0, 2.0, 3.0, 4.0]) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
)
if (not r.status_code == 200) or (
res["data"]["tensor"]["values"] == [5.0, 6.0, 7.0, 8.0]
):
break
time.sleep(1)
assert i < 100
logging.warning("Success for test_rolling_update10")
run(f"kubectl delete -f ../resources/graph1svc.json -n {namespace}", shell=True)
run(f"kubectl delete -f ../resources/graph6svc.json -n {namespace}", shell=True)
@pytest.mark.flaky(max_runs=2)
@with_api_gateways
@pytest.mark.parametrize(
"from_deployment,to_deployment,change",
[
("graph1.json", "graph2.json", True), # New image version
(
"graph1.json",
"graph3.json",
True,
), # New image version and new name of container
("graph1.json", "graph4.json", True), # New resource request but same image
("graph1.json", "graph5.json", True), # Update with multi-deployment new model
("graph1.json", "graph8.json", True), # From v1alpha2 to v1
("graph7.json", "graph8.json", False), # From v1alpha3 to v1
],
)
def test_rolling_deployment(
namespace, api_gateway, from_deployment, to_deployment, change
):
if api_gateway == API_ISTIO_GATEWAY:
retry_run(f"kubectl create -f ../resources/seldon-gateway.yaml -n {namespace}")
from_file_path = to_resources_path(from_deployment)
retry_run(f"kubectl apply -f {from_file_path} -n {namespace}")
wait_for_status("mymodel", namespace)
wait_for_rollout("mymodel", namespace)
assert_model("mymodel", namespace, initial=True, endpoint=api_gateway)
old_pod_name = get_pod_name_for_sdep("mymodel", namespace)[0]
to_file_path = to_resources_path(to_deployment)
def _update_model():
retry_run(f"kubectl apply -f {to_file_path} -n {namespace}")
if change:
wait_for_pod_shutdown(old_pod_name, namespace)
wait_for_status("mymodel", namespace)
time.sleep(2) # Wait a little after deployment marked Available
assert_model_during_op(_update_model, "mymodel", namespace, endpoint=api_gateway)
delete_cmd = f"kubectl delete --ignore-not-found -n {namespace}"
run(f"{delete_cmd} -f {from_file_path}", shell=True)
run(f"{delete_cmd} -f {to_file_path}", shell=True)
8a31061a368f9e42191d703cefd82299f6fab8d5 | 102 | py | Python | hook-nestor.py | usnistgov/nestor-web | a77377bd524ef21694771d966b3bb10fbaf457e4 | ["BSD-Source-Code"] | 2 | 2020-12-14T05:54:01.000Z | 2020-12-25T05:11:25.000Z
from PyInstaller.utils.hooks import collect_all
datas, binaries, hiddenimports = collect_all('nestor')
8a836988252e8f260a82772c78e2c948e800fbb7 | 11,255 | py | Python | scripts/interhemisphere/shortest_paths_crossings.py | mwinding/connectome_analysis | dbc747290891805863c9481921d8080dc2043d21 | ["MIT"]
#%%
from pymaid_creds import url, name, password, token
import pymaid
rm = pymaid.CatmaidInstance(url, token, name, password)
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import numpy.random as random
import gzip
import csv
import connectome_tools.process_matrix as pm
import connectome_tools.process_graph as pg
import connectome_tools.celltype as ct
from tqdm import tqdm
from joblib import Parallel, delayed
import networkx as nx
# allows text to be editable in Illustrator
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['ps.fonttype'] = 42
# font settings
plt.rcParams['font.size'] = 5
plt.rcParams['font.family'] = 'arial'
# load adjacency matrix, graphs, and pairs
adj = pm.Promat.pull_adj('ad', subgraph='brain and accessory')
pairs = pm.Promat.get_pairs()
ad_edges = pd.read_csv('data/edges_threshold/ad_all-paired-edges.csv', index_col=0)
ad_edges_split = pd.read_csv('data/edges_threshold/pairwise-threshold_ad_all-edges.csv', index_col=0)
graph = pg.Analyze_Nx_G(ad_edges)
graph_split = pg.Analyze_Nx_G(ad_edges_split, split_pairs=True)
pairs = pm.Promat.get_pairs()
# %%
# calculate shortest paths
dVNC_pair_ids = list(pm.Promat.load_pairs_from_annotation('mw dVNC', pairs, return_type='all_pair_sorted').leftid)
dSEZ_pair_ids = list(pm.Promat.load_pairs_from_annotation('mw dSEZ', pairs, return_type='all_pair_sorted').leftid)
RGN_pair_ids = list(pm.Promat.load_pairs_from_annotation('mw RGN', pairs, return_type='all_pair_sorted').leftid)
target_names = ['dVNC', 'dSEZ', 'RGN']
targets = [dVNC_pair_ids, dSEZ_pair_ids, RGN_pair_ids]
sensories_names = ['olfactory', 'gustatory-external', 'gustatory-pharyngeal', 'enteric', 'thermo-warm', 'thermo-cold', 'visual', 'noci', 'mechano-Ch', 'mechano-II/III', 'proprio', 'respiratory']
sensories_skids = [ct.Celltype_Analyzer.get_skids_from_meta_annotation(f'mw {name}') for name in sensories_names]
sensories_pair_ids = [pm.Promat.load_pairs_from_annotation(annot='', pairList=pairs, return_type='all_pair_ids', skids=celltype, use_skids=True) for celltype in sensories_skids]
all_sensories = [x for sublist in sensories_pair_ids for x in sublist]
all_sensories = list(np.intersect1d(all_sensories, graph.G.nodes))
dVNC_pair_ids = list(np.intersect1d(dVNC_pair_ids, graph.G.nodes))
cutoff=10
shortest_paths = []
for i in range(len(all_sensories)):
sens_shortest_paths = []
for j in range(len(dVNC_pair_ids)):
try:
shortest_path = nx.shortest_path(graph.G, all_sensories[i], dVNC_pair_ids[j])
sens_shortest_paths.append(shortest_path)
        except nx.NetworkXNoPath:
            print(f'no path exists from {all_sensories[i]} to {dVNC_pair_ids[j]}')
shortest_paths.append(sens_shortest_paths)
all_shortest_paths = [x for sublist in shortest_paths for x in sublist]
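On this unweighted directed graph, `nx.shortest_path` amounts to a breadth-first search. The same computation on a toy adjacency dict, in pure Python (names are illustrative):

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """Unweighted shortest path in a directed graph, as nx.shortest_path computes it."""
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:                 # reconstruct the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in adj.get(node, ()):
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    raise ValueError(f'no path from {source} to {target}')

adj = {'sens': ['a', 'b'], 'a': ['dVNC'], 'b': ['c'], 'c': ['dVNC']}
print(bfs_shortest_path(adj, 'sens', 'dVNC'))  # -> ['sens', 'a', 'dVNC']
```

BFS returns one shortest path per sensory-output pair, which is why the counts below are per-path rather than per-route.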
# %%
# calculate crossing per path
graph_crossings = pg.Prograph.crossing_counts(graph.G, all_shortest_paths)
control_hists = []
total_paths = len(graph_crossings)
binwidth = 1
x_range = list(range(0, 7))
data = graph_crossings
bins = np.arange(min(data), max(data) + binwidth + 0.5) - 0.5
hist = np.histogram(data, bins=bins)
for hist_pair in zip(hist[0], hist[0]/total_paths, [x for x in range(len(hist[0]))], ['control']*len(hist[0]), [0]*len(hist[0])):
control_hists.append(hist_pair)
control_hists = pd.DataFrame(control_hists, columns = ['count', 'fraction', 'bin', 'condition', 'repeat'])
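The binning above offsets the bin edges by 0.5 so each integer crossing count sits centered in a unit-width bin. The count-and-fraction computation in isolation (numpy only, toy data):

```python
import numpy as np

crossings = [0, 0, 1, 1, 1, 2, 4]            # toy crossing counts per path
binwidth = 1
bins = np.arange(min(crossings), max(crossings) + binwidth + 0.5) - 0.5
counts, _ = np.histogram(crossings, bins=bins)
fractions = counts / len(crossings)
print(counts.tolist())          # -> [2, 3, 1, 0, 1]
print(float(fractions.sum()))   # -> 1.0
```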
# plot as raw path counts
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='count', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_dVNC.pdf', bbox_inches='tight')
# plot as fraction total paths
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='fraction', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_dVNC_fraction-total-paths.pdf', bbox_inches='tight')
# %%
# shortest paths from all sensories to dSEZs
shortest_paths = []
for i in range(len(all_sensories)):
sens_shortest_paths = []
for j in range(len(dSEZ_pair_ids)):
try:
shortest_path = nx.shortest_path(graph.G, all_sensories[i], dSEZ_pair_ids[j])
sens_shortest_paths.append(shortest_path)
except (nx.NetworkXNoPath, nx.NodeNotFound):
print(f'no path exists from {all_sensories[i]}-{dSEZ_pair_ids[j]}')
shortest_paths.append(sens_shortest_paths)
all_shortest_paths = [x for sublist in shortest_paths for x in sublist]
# calculate crossing per path
graph_crossings = pg.Prograph.crossing_counts(graph.G, all_shortest_paths)
control_hists = []
total_paths = len(graph_crossings)
binwidth = 1
x_range = list(range(0, 7))
data = graph_crossings
bins = np.arange(min(data), max(data) + binwidth + 0.5) - 0.5
hist = np.histogram(data, bins=bins)
for hist_pair in zip(hist[0], hist[0]/total_paths, [x for x in range(len(hist[0]))], ['control']*len(hist[0]), [0]*len(hist[0])):
control_hists.append(hist_pair)
control_hists = pd.DataFrame(control_hists, columns = ['count', 'fraction', 'bin', 'condition', 'repeat'])
# plot as raw path counts
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='count', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_dSEZ.pdf', bbox_inches='tight')
# plot as fraction total paths
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='fraction', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_dSEZ_fraction-total-paths.pdf', bbox_inches='tight')
# %%
# shortest paths from all sensories to RGNs
shortest_paths = []
for i in range(len(all_sensories)):
sens_shortest_paths = []
for j in range(len(RGN_pair_ids)):
try:
shortest_path = nx.shortest_path(graph.G, all_sensories[i], RGN_pair_ids[j])
sens_shortest_paths.append(shortest_path)
except (nx.NetworkXNoPath, nx.NodeNotFound):
print(f'no path exists from {all_sensories[i]}-{RGN_pair_ids[j]}')
shortest_paths.append(sens_shortest_paths)
all_shortest_paths = [x for sublist in shortest_paths for x in sublist]
# calculate crossing per path
graph_crossings = pg.Prograph.crossing_counts(graph.G, all_shortest_paths)
control_hists = []
total_paths = len(graph_crossings)
binwidth = 1
x_range = list(range(0, 7))
data = graph_crossings
bins = np.arange(min(data), max(data) + binwidth + 0.5) - 0.5
hist = np.histogram(data, bins=bins)
for hist_pair in zip(hist[0], hist[0]/total_paths, [x for x in range(len(hist[0]))], ['control']*len(hist[0]), [0]*len(hist[0])):
control_hists.append(hist_pair)
control_hists = pd.DataFrame(control_hists, columns = ['count', 'fraction', 'bin', 'condition', 'repeat'])
# plot as raw path counts
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='count', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_RGN.pdf', bbox_inches='tight')
# plot as fraction total paths
fig, ax = plt.subplots(1,1, figsize=(3,3))
sns.barplot(data=control_hists, x='bin', y='fraction', hue='condition', ax=ax)
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_RGN_fraction-total-paths.pdf', bbox_inches='tight')
# %%
# interhemisphere shortest paths to all outputs
output_pair_ids = dVNC_pair_ids + dSEZ_pair_ids + RGN_pair_ids
shortest_paths = []
for i in range(len(all_sensories)):
sens_shortest_paths = []
for j in range(len(output_pair_ids)):
try:
shortest_path = nx.shortest_path(graph.G, all_sensories[i], output_pair_ids[j])
sens_shortest_paths.append(shortest_path)
except (nx.NetworkXNoPath, nx.NodeNotFound):
print(f'no path exists from {all_sensories[i]}-{output_pair_ids[j]}')
shortest_paths.append(sens_shortest_paths)
all_shortest_paths = [x for sublist in shortest_paths for x in sublist]
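The same source-to-target loop appears four times above (dVNC, dSEZ, RGN, combined outputs). A minimal sketch of a shared helper, assuming only `networkx`; the `collect_shortest_paths` name is hypothetical:

```python
import networkx as nx

def collect_shortest_paths(G, sources, targets):
    """Gather shortest paths for every (source, target) pair, skipping
    pairs with no path or with an endpoint missing from the graph."""
    paths = []
    for s in sources:
        for t in targets:
            try:
                paths.append(nx.shortest_path(G, s, t))
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                # mirror the original's diagnostic, now with a precise cause
                print(f'no path exists from {s}-{t}')
    return paths

# toy directed graph: 1 -> 2 -> 3; node 99 does not exist
G = nx.DiGraph([(1, 2), (2, 3)])
print(collect_shortest_paths(G, [1], [3, 99]))  # [[1, 2, 3]]
```

Each of the four blocks would then reduce to one call plus the flattening step.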
# calculate crossing per path
graph_crossings = pg.Prograph.crossing_counts(graph.G, all_shortest_paths)
control_hists = []
total_paths = len(graph_crossings)
binwidth = 1
x_range = list(range(0, 7))
data = graph_crossings
bins = np.arange(min(data), max(data) + binwidth + 0.5) - 0.5
hist = np.histogram(data, bins=bins)
for hist_pair in zip(hist[0], hist[0]/total_paths, [x for x in range(len(hist[0]))], ['control']*len(hist[0]), [0]*len(hist[0])):
control_hists.append(hist_pair)
control_hists = pd.DataFrame(control_hists, columns = ['count', 'fraction', 'bin', 'condition', 'repeat'])
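The histogram-to-DataFrame construction is likewise repeated verbatim in each block; a hedged sketch of a factored version (the function name is hypothetical, and `pg.Prograph.crossing_counts` is project-specific so it is not reproduced here):

```python
import numpy as np
import pandas as pd

def crossing_histogram(crossings, condition='control', repeat=0, binwidth=1):
    """Bin per-path crossing counts into a tidy DataFrame of counts
    and fractions, matching the columns used in the plots above."""
    crossings = np.asarray(crossings)
    # half-integer bin edges so each integer count falls in its own bin
    bins = np.arange(crossings.min(), crossings.max() + binwidth + 0.5) - 0.5
    counts, _ = np.histogram(crossings, bins=bins)
    return pd.DataFrame({
        'count': counts,
        'fraction': counts / len(crossings),
        'bin': np.arange(len(counts)),
        'condition': condition,
        'repeat': repeat,
    })

hist = crossing_histogram([0, 1, 1, 2, 2, 2])
print(hist['count'].tolist())  # [1, 2, 3]
```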
# plot as raw path counts
fig, ax = plt.subplots(1,1, figsize=(1,1))
sns.barplot(data=control_hists, x='bin', y='count', hue='condition', ax=ax)
ax.set(xlim=(-0.75, 7.75))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_output.pdf', bbox_inches='tight')
# plot as fraction total paths
fig, ax = plt.subplots(1,1, figsize=(1,1))
sns.barplot(data=control_hists, x='bin', y='fraction', hue='condition', ax=ax)
ax.set(xlim=(-0.75, 7.75), ylim=(0,0.3))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_output_fraction-total-paths.pdf', bbox_inches='tight')
fig, ax = plt.subplots(1,1, figsize=(1,1))
sns.barplot(data=control_hists[control_hists.bin%2==0], x='bin', y='fraction', hue='condition', ax=ax)
ax.set(xlim=(-0.75, 7.75), ylim=(0,0.3))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_output_fraction-total-paths_even.pdf', bbox_inches='tight')
fig, ax = plt.subplots(1,1, figsize=(1,1))
sns.barplot(data=control_hists[control_hists.bin%2==1], x='bin', y='fraction', hue='condition', ax=ax)
ax.set(xlim=(-0.75, 7.75), ylim=(0,0.3))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_output_fraction-total-paths_odd.pdf', bbox_inches='tight')
fig, ax = plt.subplots(1,1, figsize=(1,1))
sns.barplot(data=control_hists.set_index('bin', drop=False).loc[[0,2,4,6,8,1,3,5,7], :], x=['0','2','4','6','8','1','3','5','7'], y='fraction', hue='condition', ax=ax)
ax.set(xlim=(-0.75, 7.75), ylim=(0,0.3))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/shortest-paths_sens_to_output_fraction-total-paths_even-odd.pdf', bbox_inches='tight')
# %%
# fraction of paths with crossings
data = pd.DataFrame([[1, control_hists[control_hists.bin==0].fraction.values[0], 'no_crossing'],
[1,sum(control_hists[control_hists.bin>0].fraction), 'crossing']], columns=['celltype','fraction', 'condition'])
fig, ax = plt.subplots(1,1, figsize=(0.35,0.75))
sns.barplot(data = data, x = 'celltype', y='fraction', hue='condition', ax=ax)
ax.set(ylim=(0,1))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/no-crossing_vs_crossing.pdf', bbox_inches='tight')
data = pd.DataFrame([[1, sum(control_hists[control_hists.bin%2==0].fraction), 'ipsilateral'],
[1,sum(control_hists[control_hists.bin%2==1].fraction), 'contralateral']], columns=['celltype','fraction', 'condition'])
fig, ax = plt.subplots(1,1, figsize=(0.35,0.75))
sns.barplot(data = data, x = 'celltype', y='fraction', hue='condition', ax=ax)
ax.set(ylim=(0,1))
plt.savefig('interhemisphere/plots/interhemisphere_crossings/ipsi-crossing_vs_contra-crossing.pdf', bbox_inches='tight')
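The even/odd split above treats a path with an even number of midline crossings as ending ipsilaterally and an odd number as ending contralaterally; a small sketch of that reduction (function name hypothetical):

```python
def crossing_parity_fractions(crossings):
    """Fraction of paths with an even number of midline crossings
    (ipsilateral endpoint) vs an odd number (contralateral endpoint)."""
    even = sum(1 for c in crossings if c % 2 == 0) / len(crossings)
    return {'ipsilateral': even, 'contralateral': 1.0 - even}

# 0, 0, 2, 4 crossings are even -> 4/6 of paths end ipsilaterally
print(crossing_parity_fractions([0, 0, 1, 2, 3, 4]))
```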
# %%
| 41.531365 | 194 | 0.733363 | 1,764 | 11,255 | 4.484127 | 0.119615 | 0.073957 | 0.013148 | 0.026296 | 0.811757 | 0.78976 | 0.77421 | 0.742351 | 0.73426 | 0.723894 | 0 | 0.020202 | 0.111595 | 11,255 | 270 | 195 | 41.685185 | 0.770877 | 0.048689 | 0 | 0.601124 | 0 | 0 | 0.228184 | 0.135768 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.011236 | 0.08427 | 0 | 0.08427 | 0.022472 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8ab46e1089b2f9a0314dab03318c50a8eba1ec2c | 11,799 | py | Python | demistomock.py | Kerol-System/demisto-sdk | 9cb208a73b9820455ca3ca3fe4b4ea08c502f271 | [
"MIT"
] | 1 | 2021-07-30T13:03:19.000Z | 2021-07-30T13:03:19.000Z | demistomock.py | Kerol-System/demisto-sdk | 9cb208a73b9820455ca3ca3fe4b4ea08c502f271 | [
"MIT"
] | null | null | null | demistomock.py | Kerol-System/demisto-sdk | 9cb208a73b9820455ca3ca3fe4b4ea08c502f271 | [
"MIT"
] | 1 | 2021-06-17T09:27:09.000Z | 2021-06-17T09:27:09.000Z |
import json
integrationContext = {}
exampleIncidents = [{"Brand":"Builtin","Category":"Builtin","Contents":{"data":[{"CustomFields":{},"account":"","activated":"0001-01-01T00:00:00Z","attachment":None,"autime":1550670443962164000,"canvases":None,"category":"","closeNotes":"","closeReason":"","closed":"0001-01-01T00:00:00Z","closingUserId":"","created":"2019-02-20T15:47:23.962164+02:00","details":"","droppedCount":0,"dueDate":"2019-03-02T15:47:23.962164+02:00","hasRole":False,"id":"1","investigationId":"1","isPlayground":False,"labels":[{"type":"Instance","value":"test"},{"type":"Brand","value":"Manual"}],"lastOpen":"0001-01-01T00:00:00Z","linkedCount":0,"linkedIncidents":None,"modified":"2019-02-20T15:47:27.158969+02:00","name":"1","notifyTime":"2019-02-20T15:47:27.156966+02:00","occurred":"2019-02-20T15:47:23.962163+02:00","openDuration":0,"owner":"analyst","parent":"","phase":"","playbookId":"playbook0","previousRoles":None,"rawCategory":"","rawCloseReason":"","rawJSON":"","rawName":"1","rawPhase":"","rawType":"Unclassified","reason":"","reminder":"0001-01-01T00:00:00Z","roles":None,"runStatus":"waiting","severity":0,"sla":0,"sourceBrand":"Manual","sourceInstance":"amichay","status":1,"type":"Unclassified","version":6}],"total":1},"ContentsFormat":"json","EntryContext":None,"Evidence":False,"EvidenceID":"","File":"","FileID":"","FileMetadata":None,"HumanReadable":None,"ID":"","IgnoreAutoExtract":False,"ImportantEntryContext":None,"Metadata":{"brand":"Builtin","category":"","contents":"","contentsSize":0,"created":"2019-02-24T09:44:51.992682+02:00","cronView":False,"endingDate":"0001-01-01T00:00:00Z","entryTask":None,"errorSource":"","file":"","fileID":"","fileMetadata":None,"format":"json","hasRole":False,"id":"","instance":"Builtin","investigationId":"7ab2ac46-4142-4af8-8cbe-538efb4e63d6","modified":"0001-01-01T00:00:00Z","note":False,"parentContent":"!getIncidents 
query=\"id:1\"","parentEntryTruncated":False,"parentId":"111@7ab2ac46-4142-4af8-8cbe-538efb4e63d6","pinned":False,"playbookId":"","previousRoles":None,"recurrent":False,"reputationSize":0,"reputations":None,"roles":None,"scheduled":False,"startDate":"0001-01-01T00:00:00Z","system":"","tags":None,"tagsRaw":None,"taskId":"","times":0,"timezoneOffset":0,"type":1,"user":"","version":0},"ModuleName":"InnerServicesModule","Note":False,"ReadableContentsFormat":"","System":"","Tags":None,"Type":1,"Version":0}]
exampleContext = [{"Brand":"Builtin","Category":"Builtin","Contents":{"context":{},"id":"1","importantKeys":None,"modified":"2019-02-24T09:50:21.798306+02:00","version":30},"ContentsFormat":"json","EntryContext":None,"Evidence":False,"EvidenceID":"","File":"","FileID":"","FileMetadata":None,"HumanReadable":None,"ID":"","IgnoreAutoExtract":False,"ImportantEntryContext":None,"Metadata":{"brand":"Builtin","category":"","contents":"","contentsSize":0,"created":"2019-02-24T09:50:28.652202+02:00","cronView":False,"endingDate":"0001-01-01T00:00:00Z","entryTask":None,"errorSource":"","file":"","fileID":"","fileMetadata":None,"format":"json","hasRole":False,"id":"","instance":"Builtin","investigationId":"7ab2ac46-4142-4af8-8cbe-538efb4e63d6","modified":"0001-01-01T00:00:00Z","note":False,"parentContent":"!getContext id=\"1\"","parentEntryTruncated":False,"parentId":"120@7ab2ac46-4142-4af8-8cbe-538efb4e63d6","pinned":False,"playbookId":"","previousRoles":None,"recurrent":False,"reputationSize":0,"reputations":None,"roles":None,"scheduled":False,"startDate":"0001-01-01T00:00:00Z","system":"","tags":None,"tagsRaw":None,"taskId":"","times":0,"timezoneOffset":0,"type":0,"user":"","version":0},"ModuleName":"InnerServicesModule","Note":False,"ReadableContentsFormat":"","System":"","Tags":None,"Type":0,"Version":0}]
exampleUsers = [{"Brand":"Builtin","Category":"Builtin","Contents":[{"accUser":False,"addedSharedDashboards":None,"canPostTicket":False,"dashboards":None,"defaultAdmin":True,"disableHyperSearch":False,"editorStyle":"","email":"admintest@demisto.com","helpSnippetDisabled":False,"homepage":"","id":"admin","image":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAJAAAACQCAYAAADnRuK4AAAACXBIWXMAABYlAAAWJQFJUiTwAAAFeElEQVR42u2dO1MbVxSAj2whIZAQHlWksSrcwKAMVWjYNLhiRjOkSYXyC7z5BfE/iNKmYWlchRnNyE2oRKO0y5DGaiI3pokcbRYNelopCDNOeGmlXS269/v6BcH9dPace+4jMhwOhwIwJk/4FwACAQIBAgECASAQIBAgECAQAAIBAgECAQIBIBAgECAQIBAAAgECAQIBAgECASAQIBAgECAQAAIBAgECAQIBIBAEQfSxfJBK74NY7ZrYg4ac9huMzD1sRDOSj2XFTKzLciQW6meJhH3AlN1viNmqyknvHDM8ko7E5PXCppiJdT0Fsto1+e6iggkTsh9fFStl6JUDIY9/HHZqUrw80ycC2f2GGE5ZnGGX0feRP559K9mnKfUjkNmqIk8AFNvTj0JTF8juN0iYA6LUqasvkNV5x0gHxPtPF1P/nVOfB7IfmONJR2JSWtoRY+6LmR/QgluRw05NaWmnHoEeen1ZKWPm5WkOu1rIE0oEeoh8LDvz8hhOWZvZdHphyINAyINAyEMOpCe6z6oTgZCHCCRy1Zy1Ou9uTBNsz61IIf5CCvOryINAN6kPXMm7x3fmHye9cznpnUuxfSal1I4vzUbkUeQVVh+4kmsejZS8nvYbkmseSX3gThzpvmweIY8KAuXdY08D6Qy7knePJ5KHNUyKCGS1a2OVzaf9hljtGvJoL9AEXX2vzyKPgkn0JGuKvDxrtqry0+XvmKJiDjQNkAeBAIEAgXxme24llGdBEYEK8RehPAuqCDS/KhvRjOfnNqIZ3/tiCDSjlFI7kvZwuEA6EpNSaodRR6Arsk9TYi/vjRSJNqIZsZf3pr5zE4FmRKKDpHFrcrw9tyIHSQN5AkKZ9UCF+VVyGyIQIBAgEAACAQIBAgECASAQIBAg0AiUunVGBYHu5qHFXAW3IpXeB0ZmRph6LywXzdy7K8IZduVr5y0jQwS6I8KwGjAw0iFcvDJ1gXLRDGuSAyKMw0lDSaKLi1uhfFtUx0ys6SFQLpqR4uIWI+4j+/FVfSKQyNUCsIOkwcj7wEY0I8VkOF9ILpybcV4l1kKN5qELdA1XXo7O8ydJycezYs6vh77O+9EI9FiJ/PnzxK+XSno39LtNlcuBdEls7eU9ZeUR4ZzowDhIGlrsEkEgn1HpuioECqGc9usoYXIgDfOdSnpXu92vRCAf+GFhU14vbGr5tyPQhPmOlTJm/pI8BAqB50+SUlp6KbkxzihCIM3ZnluR0tJLped3SKID4lViTemZZSJQgOgyOYhAASTLlfSu9vkOAnmkOewq3wydFLrxQBINCAQIBAikCXa/IdmPb8a6ufAhrHZNsh/fiK3JslztkujmsCu5v36R958uRETkx8WvxEys+/Kzi5dn8n3rNxG5anXYz75RvnrTSqDmsCuGU76xaH8/vipWypjoZxfcihx2/hvRdJgC0OoVZl5Ub93xcdipieGUpTnGVd7XUv5fHpGrC37Niyo5kBLytKq3DvI1J71zMZyyp3vl6wNXDKd87562w05NzJa6EmnxCvNy6/KobQu73xDDKY98b72qfTQtIlAumhn5MAfn31fSfRWa1a55kicd
iSnbR9Mmia4PXMm7x552vd5WoX1eaY2C6gvttavC8n//6mkf/ucV2m2V1n3osPBMy2bqOCJcJ9rjiKcy2nbjvb6KvODn5CQCPfLqzGxVR06GR0mWi4tbWq1a1H49kNdyfNLyH4Go0LSrtBAooApNl0oLgQKq0HSptBAogApNp0oLgXys0HSstBDIpwqN/WEINHaFJiLaVloI5EOFJiJsLkQg8Bu29QACAQIBAgECASAQIBAgECAQAAIBAgECAQIBIBAgECAQIBAAAgECAQLBLPMPFxalhUpzvrEAAAAASUVORK5CYII=","investigationPage":"","lastLogin":"0001-01-01T00:00:00Z","name":"Admin Dude","notify":["mattermost","email","slack"],"phone":"+650-123456","playgroundCleared":False,"playgroundId":"818df1a9-98dc-46df-84dc-dbd2fffc0fda","preferences":{"userPreferencesIncidentTableQueries":{"Open Jobs in the last 7 days":{"picker":{"predefinedRange":{"id":"7","name":"Last 7 days"}},"query":"-status:closed category:job"},"Open incidents in the last 7 days":{"isDefault":True,"picker":{"predefinedRange":{"id":"7","name":"Last 7 days"}},"query":"-status:closed -category:job"}},"userPreferencesWarRoomFilter":{"categories":["chats","incidentInfo","commandAndResults","notes"],"fromTime":"0001-01-01T00:00:00Z","pageSize":0,"tagsAndOperator":False,"usersAndOperator":False},"userPreferencesWarRoomFilterExpanded":False,"userPreferencesWarRoomFilterMap":{"Chats only":{"categories":["chats"],"fromTime":"0001-01-01T00:00:00Z","pageSize":0,"tagsAndOperator":False,"usersAndOperator":False},"Default Filter":{"categories":["chats","incidentInfo","commandAndResults","notes"],"fromTime":"0001-01-01T00:00:00Z","pageSize":0,"tagsAndOperator":False,"usersAndOperator":False},"Playbook results":{"categories":["playbookTaskResult","playbookErrors","justFound"],"fromTime":"0001-01-01T00:00:00Z","pageSize":0,"tagsAndOperator":False,"usersAndOperator":False}},"userPreferencesWarRoomFilterOpen":True},"roles":{"demisto":["Administrator"]},"theme":"","username":"admin","wasAssigned":False}],"ContentsFormat":"json","EntryContext":{"DemistoUsers":[{"email":"admintest@demisto.com","name":"Admin Dude","phone":"+650-123456","roles":["demisto: 
[Administrator]"],"username":"admin"}]},"Evidence":False,"EvidenceID":"","File":"","FileID":"","FileMetadata":None,"HumanReadable":"## Users\nUsername|Email|Name|Phone|Roles\n-|-|-|-|-\nadmin|admintest@demisto.com|Admin Dude|\\+650-123456|demisto: \\[Administrator\\]\n","ID":"","IgnoreAutoExtract":False,"ImportantEntryContext":None,"Metadata":{"brand":"Builtin","category":"","contents":"","contentsSize":0,"created":"2019-02-24T09:50:28.686449+02:00","cronView":False,"endingDate":"0001-01-01T00:00:00Z","entryTask":None,"errorSource":"","file":"","fileID":"","fileMetadata":None,"format":"json","hasRole":False,"id":"","instance":"Builtin","investigationId":"7ab2ac46-4142-4af8-8cbe-538efb4e63d6","modified":"0001-01-01T00:00:00Z","note":False,"parentContent":"!getUsers online=\"False\"","parentEntryTruncated":False,"parentId":"120@7ab2ac46-4142-4af8-8cbe-538efb4e63d6","pinned":False,"playbookId":"","previousRoles":None,"recurrent":False,"reputationSize":0,"reputations":None,"roles":None,"scheduled":False,"startDate":"0001-01-01T00:00:00Z","system":"","tags":None,"tagsRaw":None,"taskId":"","times":0,"timezoneOffset":0,"type":1,"user":"","version":0},"ModuleName":"InnerServicesModule","Note":False,"ReadableContentsFormat":"","System":"","Tags":None,"Type":1,"Version":0}]
exampleDemistoUrls = {"evidenceBoard":"https://test-address:8443/#/EvidenceBoard/7ab2ac46-4142-4af8-8cbe-538efb4e63d6","investigation":"https://test-address:8443/#/Details/7ab2ac46-4142-4af8-8cbe-538efb4e63d6","relatedIncidents":"https://test-address:8443/#/Cluster/7ab2ac46-4142-4af8-8cbe-538efb4e63d6","server":"https://test-address:8443","warRoom":"https://test-address:8443/#/WarRoom/7ab2ac46-4142-4af8-8cbe-538efb4e63d6","workPlan":"https://test-address:8443/#/WorkPlan/7ab2ac46-4142-4af8-8cbe-538efb4e63d6"}
def params():
return {}
def args():
return {}
def command():
return ""
def log(msg):
print(msg)
def get(obj, field):
""" Get the field from the given dict using dot notation """
parts = field.split('.')
for part in parts:
if obj and part in obj:
obj = obj[part]
else:
return None
return obj
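For reference, the dot-notation `get` helper can be exercised like this; the nested structure and field names are illustrative only, and the helper is restated so the snippet runs standalone:

```python
def get(obj, field):
    """Get the field from the given dict using dot notation (as above)."""
    for part in field.split('.'):
        if obj and part in obj:
            obj = obj[part]
        else:
            return None
    return obj

# made-up nested incident shape for the example
incident = {"CustomFields": {"severity": {"level": 3}}}
print(get(incident, "CustomFields.severity.level"))  # 3
print(get(incident, "CustomFields.missing"))         # None
```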
def context():
return {}
def uniqueFile():
return '4fa3f70d-2d5d-4482-ab73-43dc24063a18'
def getLastRun():
return {'lastRun': "2018-10-24T14:13:20+00:00"}
def setLastRun(obj):
return None
def info(*args):
log(args)
def error(*args):
log(args)
def debug(*args):
log(args)
def results(results):
if isinstance(results, dict) and results.get("contents"):
results = results.get("contents")
print("demisto results: {}".format(json.dumps(results, indent=4, sort_keys=True)))
def credentials(credentials):
print("credentials: {}".format(credentials))
def getFilePath(entry_id):
return ""
def investigation():
return {'id': '1'}
def executeCommand(command, args):
commands = {
"getIncidents": exampleIncidents,
"getContext": exampleContext,
"getUsers": exampleUsers
}
if commands.get(command):
return commands.get(command)
return ""
def getParam(param):
return params().get(param)
def getArg(arg):
return args().get(arg)
def setIntegrationContext(context):
global integrationContext
integrationContext = context
def getIntegrationContext():
return integrationContext
def incidents(incidents):
return results({'Type': 1, 'Contents': json.dumps(incidents), 'ContentsFormat': 'json'})
def setContext(contextPath, value):
return {"status": True}
def demistoUrls():
return exampleDemistoUrls
def appendContext(key, data, dedup=False):
return None
def dt(obj=None, trnsfrm=None):
return ""
def addEntry(id, entry, username=None, email=None, footer=None):
return ""
def mirrorInvestigation(id, mirrorType, autoClose=False):
return ""
def updateModuleHealth(error):
return ""
def directMessage(message, username = None, email = None, anyoneCanOpenIncidents = None):
return ""
def createIncidents(incidents, lastRun = None):
return []
def findUser(username, email):
return {}
| 80.265306 | 5,210 | 0.749725 | 1,147 | 11,799 | 7.710549 | 0.309503 | 0.012212 | 0.022388 | 0.026459 | 0.382519 | 0.32033 | 0.32033 | 0.32033 | 0.31332 | 0.31332 | 0 | 0.099502 | 0.047716 | 11,799 | 146 | 5,211 | 80.815068 | 0.687611 | 0.004407 | 0 | 0.209302 | 0 | 0.023256 | 0.622817 | 0.263526 | 0 | 1 | 0 | 0 | 0 | 1 | 0.372093 | false | 0 | 0.046512 | 0.267442 | 0.732558 | 0.034884 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
8aba0a4a57743fcd9a8d40112431742d12db1b1c | 106 | py | Python | pyseries/prediction/tests/__init__.py | flaviovdf/pyseries | 59c8a321790d2398d71305710b7d322ce2d8eaaf | [
"BSD-3-Clause"
] | 7 | 2015-04-12T00:27:39.000Z | 2018-08-10T13:17:48.000Z | pyseries/prediction/tests/__init__.py | flaviovdf/pyseries | 59c8a321790d2398d71305710b7d322ce2d8eaaf | [
"BSD-3-Clause"
] | null | null | null | pyseries/prediction/tests/__init__.py | flaviovdf/pyseries | 59c8a321790d2398d71305710b7d322ce2d8eaaf | [
"BSD-3-Clause"
] | 4 | 2015-04-15T03:14:30.000Z | 2018-11-09T22:06:32.000Z | # -*- coding: utf8
from __future__ import division, print_function
'''Tests for the prediction module'''
| 21.2 | 47 | 0.745283 | 13 | 106 | 5.692308 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0.141509 | 106 | 4 | 48 | 26.5 | 0.802198 | 0.150943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
0a08015c2846acc27d785ec104650566f4571d88 | 2,201 | py | Python | zm_util.py | montagdude/zoneminder-notifier | f3387ed5e322f59cc88551d9972253e29397bc2a | [
"MIT"
] | 7 | 2019-04-19T14:07:44.000Z | 2021-12-30T01:56:46.000Z | zm_util.py | montagdude/zoneminder-notifier | f3387ed5e322f59cc88551d9972253e29397bc2a | [
"MIT"
] | 1 | 2020-12-17T17:07:45.000Z | 2020-12-17T20:12:10.000Z | zm_util.py | montagdude/zoneminder-notifier | f3387ed5e322f59cc88551d9972253e29397bc2a | [
"MIT"
] | 2 | 2020-05-08T09:57:34.000Z | 2022-01-07T23:39:02.000Z | import sys
from datetime import datetime
try:  # Python 2
    import ConfigParser
except ImportError:  # Python 3
    import configparser as ConfigParser
def debug(message, pipe="stdout"):
curr_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
if pipe == "stderr":
sys.stderr.write("{:s} {:s}\n".format(curr_time, message))
else:
sys.stdout.write("{:s} {:s}\n".format(curr_time, message))
def get_from_config(config, section, option, required=True, default=None):
try:
val = config.get(section, option)
except ConfigParser.NoOptionError:
if required:
debug("{:s}:{:s} is required".format(section, option), "stderr")
sys.exit(1)
else:
val = default
return val
def get_bool_from_config(config, section, option, required=True, default=None):
try:
val = config.getboolean(section, option)
except ConfigParser.NoOptionError:
if required:
debug("{:s}:{:s} is required".format(section, option), "stderr")
sys.exit(1)
else:
val = default
except ValueError:
debug("{:s}:{:s}: unable to convert string to boolean".format(section,
option), "stderr")
sys.exit(1)
return val
def get_int_from_config(config, section, option, required=True, default=None):
try:
val = config.getint(section, option)
except ConfigParser.NoOptionError:
if required:
debug("{:s}:{:s} is required".format(section, option), "stderr")
sys.exit(1)
else:
val = default
except ValueError:
debug("{:s}:{:s}: unable to convert string to integer".format(section,
option), "stderr")
sys.exit(1)
return val
def get_float_from_config(config, section, option, required=True, default=None):
try:
val = config.getfloat(section, option)
except ConfigParser.NoOptionError:
if required:
debug("{:s}:{:s} is required".format(section, option), "stderr")
sys.exit(1)
else:
val = default
except ValueError:
debug("{:s}:{:s}: unable to convert string to float".format(section,
option), "stderr")
sys.exit(1)
return val
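The four getters above differ only in which `ConfigParser` accessor they call; a condensed sketch, assuming Python 3's `configparser` (the original targets Python 2) and a hypothetical function name:

```python
import configparser

def get_typed_from_config(config, section, option, getter="get",
                          required=True, default=None):
    """Generic form of the get_*_from_config helpers above (sketch)."""
    try:
        # dispatch to config.get / getboolean / getint / getfloat by name
        return getattr(config, getter)(section, option)
    except configparser.NoOptionError:
        if required:
            raise SystemExit("{:s}:{:s} is required".format(section, option))
        return default
    except ValueError:
        raise SystemExit("{:s}:{:s}: conversion failed".format(section, option))

cfg = configparser.ConfigParser()
cfg.read_string("[main]\nthreshold = 2.5\nverbose = yes\n")
print(get_typed_from_config(cfg, "main", "threshold", getter="getfloat"))  # 2.5
print(get_typed_from_config(cfg, "main", "verbose", getter="getboolean"))  # True
print(get_typed_from_config(cfg, "main", "missing", required=False, default=0))  # 0
```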
| 28.960526 | 80 | 0.590641 | 261 | 2,201 | 4.927203 | 0.206897 | 0.151633 | 0.038103 | 0.136081 | 0.825816 | 0.825816 | 0.825816 | 0.825816 | 0.780715 | 0.748056 | 0 | 0.004378 | 0.273512 | 2,201 | 75 | 81 | 29.346667 | 0.799875 | 0 | 0 | 0.688525 | 0 | 0 | 0.142208 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081967 | false | 0 | 0.04918 | 0 | 0.196721 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0a13c45cef958c900f62958022604e89cb6fe291 | 29 | py | Python | enotipy/__init__.py | anthony-walker/ENotipy | 490bd700ad4fbe3ca5cfe77efc8e3376b955e425 | [
"BSD-3-Clause"
] | null | null | null | enotipy/__init__.py | anthony-walker/ENotipy | 490bd700ad4fbe3ca5cfe77efc8e3376b955e425 | [
"BSD-3-Clause"
] | null | null | null | enotipy/__init__.py | anthony-walker/ENotipy | 490bd700ad4fbe3ca5cfe77efc8e3376b955e425 | [
"BSD-3-Clause"
] | null | null | null | from .enotipy import ENotiPy
| 14.5 | 28 | 0.827586 | 4 | 29 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0a3cb232477601357009a01f39d2ab7aa039b059 | 1,312 | py | Python | HW01_Triangle_Qi_Zhao_Test.py | qiblaqi/567-Fall-2020-master | 8c0e82654581e22fe7b021a6b330bdd0e7eea994 | [
"MIT"
] | null | null | null | HW01_Triangle_Qi_Zhao_Test.py | qiblaqi/567-Fall-2020-master | 8c0e82654581e22fe7b021a6b330bdd0e7eea994 | [
"MIT"
] | null | null | null | HW01_Triangle_Qi_Zhao_Test.py | qiblaqi/567-Fall-2020-master | 8c0e82654581e22fe7b021a6b330bdd0e7eea994 | [
"MIT"
] | null | null | null | import unittest
import HW01_Triangle_Qi_Zhao as Triangle
class TestTriangle(unittest.TestCase):
def test_case(self):
self.assertEqual(Triangle.classify_triangle(2,2,3),"isoceles")
self.assertEqual(Triangle.classify_triangle(3,2,2),"isoceles")
self.assertEqual(Triangle.classify_triangle(3,4,5),"right")
self.assertEqual(Triangle.classify_triangle(3.0,4.0,5.0),"right")
self.assertEqual(Triangle.classify_triangle(5,4,3),"right")
self.assertEqual(Triangle.classify_triangle(4,5,3),"right")
self.assertEqual(Triangle.classify_triangle(3,3,3),"equilateral")
self.assertEqual(Triangle.classify_triangle(3.5,3.5,3.5),"equilateral")
self.assertEqual(Triangle.classify_triangle(2,3,4),"scalene")
self.assertEqual(Triangle.classify_triangle(4,3,2),"scalene")
self.assertEqual(Triangle.classify_triangle(2.2,3.3,4.4),"scalene")
def test_case_2(self):
self.assertEqual(Triangle.classify_triangle(-3,-3,-3),"not a triangle")
self.assertEqual(Triangle.classify_triangle(0,0,0),"not a triangle")
self.assertEqual(Triangle.classify_triangle(3,0,0),"not a triangle")
self.assertEqual(Triangle.classify_triangle(-2,-2,3),"not a triangle")
if __name__ == '__main__':
unittest.main() | 52.48 | 83 | 0.708079 | 179 | 1,312 | 5.027933 | 0.167598 | 0.25 | 0.383333 | 0.516667 | 0.812222 | 0.812222 | 0.58 | 0.364444 | 0.117778 | 0.117778 | 0 | 0.050622 | 0.141768 | 1,312 | 25 | 84 | 52.48 | 0.748668 | 0 | 0 | 0 | 0 | 0 | 0.108911 | 0 | 0 | 0 | 0 | 0 | 0.681818 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
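The module under test (`HW01_Triangle_Qi_Zhao`) is not shown here; a plausible sketch of a `classify_triangle` consistent with the expectations above might look like the following. The 'isoceles' spelling follows the test suite, and the right-triangle check uses a small tolerance so float inputs like (3.0, 4.0, 5.0) pass:

```python
def classify_triangle(a, b, c):
    """Classify a triangle by its side lengths (sketch, not the real module)."""
    if a <= 0 or b <= 0 or c <= 0:
        return "not a triangle"
    x, y, z = sorted([a, b, c])
    if x + y <= z:  # triangle inequality
        return "not a triangle"
    if x == y == z:
        return "equilateral"
    if abs(x * x + y * y - z * z) < 1e-9:  # Pythagorean check with tolerance
        return "right"
    if x == y or y == z:
        return "isoceles"
    return "scalene"

print(classify_triangle(3, 4, 5))  # right
print(classify_triangle(2, 2, 3))  # isoceles
```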
0a5e5b75c53dee980e93c0fcbf2db7dbfc13312d | 21 | py | Python | component/io/__init__.py | 12rambau/damage_proxy_map | 6796e5e4885378e3b634877610df9e6d94123de3 | [
"MIT"
] | null | null | null | component/io/__init__.py | 12rambau/damage_proxy_map | 6796e5e4885378e3b634877610df9e6d94123de3 | [
"MIT"
] | null | null | null | component/io/__init__.py | 12rambau/damage_proxy_map | 6796e5e4885378e3b634877610df9e6d94123de3 | [
"MIT"
] | null | null | null | from .dmp_io import * | 21 | 21 | 0.761905 | 4 | 21 | 3.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 21 | 1 | 21 | 21 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6a8d8b8a51466e2c8df9af74b327e79c0df40b9e | 138 | py | Python | tests/basics/bytes_compare_bytearray.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/basics/bytes_compare_bytearray.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/basics/bytes_compare_bytearray.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | print(b"123" == bytearray(b"123"))
print(b'123' < bytearray(b"124"))
print(b'123' > bytearray(b"122"))
print(bytearray(b"23") in b"1234")
| 27.6 | 34 | 0.652174 | 25 | 138 | 3.6 | 0.36 | 0.177778 | 0.3 | 0.6 | 0.633333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 0.086957 | 138 | 4 | 35 | 34.5 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
6abd6b31784848ddeedb6b793f31482de9aee04e | 454 | py | Python | tests/samples/classimplements.py | arnimarj/mypy-zope | 3670eab231b20bf28fce23094556bf2f883ef0cd | [
"MIT"
] | 29 | 2019-01-06T20:07:43.000Z | 2021-11-10T06:40:45.000Z | tests/samples/classimplements.py | Shoobx/mypy-zope | 3fea0f111384381709d00b6f811a34ec15f791df | [
"MIT"
] | 52 | 2019-03-05T07:10:37.000Z | 2022-03-15T07:13:15.000Z | tests/samples/classimplements.py | arnimarj/mypy-zope | 3670eab231b20bf28fce23094556bf2f883ef0cd | [
"MIT"
] | 11 | 2019-03-10T04:54:18.000Z | 2021-05-17T07:12:51.000Z | from zope.interface import implementer
from zope.interface import Interface
from zope.interface import classImplements
class IFoo(Interface):
def foo() -> int:
pass
class IBar(Interface):
def bar() -> str:
pass
class FooBar:
def foo(self) -> int:
return 0
def bar(self) -> str:
return ""
classImplements(FooBar, IFoo, IBar)
foo: IFoo = FooBar()
bar: IBar = FooBar()
"""
<output>
</output>
"""
| 12.971429 | 42 | 0.623348 | 54 | 454 | 5.240741 | 0.37037 | 0.084806 | 0.180212 | 0.243816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002959 | 0.255507 | 454 | 34 | 43 | 13.352941 | 0.83432 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0.117647 | 0.176471 | 0.117647 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
6aeb4108565d142b722fe2c348379c53a6c39348 | 38,570 | py | Python | sdk/servicebus/azure-servicebus/tests/async_tests/mgmt_tests/test_mgmt_queues_async.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 3 | 2020-06-23T02:25:27.000Z | 2021-09-07T18:48:11.000Z | sdk/servicebus/azure-servicebus/tests/async_tests/mgmt_tests/test_mgmt_queues_async.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 510 | 2019-07-17T16:11:19.000Z | 2021-08-02T08:38:32.000Z | sdk/servicebus/azure-servicebus/tests/async_tests/mgmt_tests/test_mgmt_queues_async.py | rsdoherty/azure-sdk-for-python | 6bba5326677468e6660845a703686327178bb7b1 | [
"MIT"
] | 5 | 2019-09-04T12:51:37.000Z | 2020-09-16T07:28:40.000Z | #-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
import pytest
import datetime
import msrest
from azure.core.exceptions import HttpResponseError, ResourceNotFoundError, ResourceExistsError
from azure.servicebus.aio.management import ServiceBusAdministrationClient
from azure.servicebus.management import QueueProperties
from azure.servicebus.aio._base_handler_async import ServiceBusSharedKeyCredential
from azure.servicebus._common.utils import utc_now
from devtools_testutils import AzureMgmtTestCase, CachedResourceGroupPreparer
from servicebus_preparer import (
CachedServiceBusNamespacePreparer,
ServiceBusNamespacePreparer
)
from mgmt_test_utilities_async import (
AsyncMgmtQueueListTestHelper,
AsyncMgmtQueueListRuntimeInfoTestHelper,
run_test_async_mgmt_list_with_parameters,
run_test_async_mgmt_list_with_negative_parameters,
async_pageable_to_list,
clear_queues
)
class ServiceBusAdministrationClientQueueAsyncTests(AzureMgmtTestCase):
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_basic(self, servicebus_namespace_connection_string,
servicebus_namespace, servicebus_namespace_key_name,
servicebus_namespace_primary_key):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
await mgmt_service.create_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1 and queues[0].name == "test_queue"
await mgmt_service.delete_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
fully_qualified_namespace = servicebus_namespace.name + '.servicebus.windows.net'
mgmt_service = ServiceBusAdministrationClient(
fully_qualified_namespace,
credential=ServiceBusSharedKeyCredential(servicebus_namespace_key_name, servicebus_namespace_primary_key)
)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
await mgmt_service.create_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1 and queues[0].name == "test_queue"
await mgmt_service.delete_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_with_special_chars(self, servicebus_namespace_connection_string):
# Queue names can contain letters, numbers, periods (.), hyphens (-), underscores (_), and slashes (/), up to 260 characters. Queue names are also case-insensitive.
queue_name = 'txt/.-_123'
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
await mgmt_service.create_queue(queue_name)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1 and queues[0].name == queue_name
await mgmt_service.delete_queue(queue_name)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
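The naming rules quoted in the comment above can be sanity-checked locally. This is a hypothetical helper, not part of azure-servicebus, and it encodes only a literal reading of that comment (letters, digits, `.`, `-`, `_`, `/`, up to 260 characters):

```python
import re

# Hypothetical local check of the documented queue-name rules; the actual
# service-side validation may be stricter (e.g. about leading/trailing '/').
_QUEUE_NAME_RE = re.compile(r"^[A-Za-z0-9._/-]{1,260}$")


def looks_like_valid_queue_name(name: str) -> bool:
    return bool(_QUEUE_NAME_RE.match(name))


assert looks_like_valid_queue_name("txt/.-_123")
assert not looks_like_valid_queue_name("")
assert not looks_like_valid_queue_name("a" * 261)
```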
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_with_parameters(self, servicebus_namespace_connection_string):
pytest.skip("start_idx and max_count are currently removed, they might come back in the future.")
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await run_test_async_mgmt_list_with_parameters(AsyncMgmtQueueListTestHelper(mgmt_service))
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_with_negative_credential(self, servicebus_namespace, servicebus_namespace_key_name,
servicebus_namespace_primary_key):
# invalid_conn_str = 'Endpoint=sb://invalid.servicebus.windows.net/;SharedAccessKeyName=invalid;SharedAccessKey=invalid'
# mgmt_service = ServiceBusAdministrationClient.from_connection_string(invalid_conn_str)
# with pytest.raises(ServiceRequestError):
# await async_pageable_to_list(mgmt_service.list_queues())
invalid_conn_str = 'Endpoint=sb://{}.servicebus.windows.net/;SharedAccessKeyName=invalid;SharedAccessKey=invalid'.format(servicebus_namespace.name)
mgmt_service = ServiceBusAdministrationClient.from_connection_string(invalid_conn_str)
with pytest.raises(HttpResponseError):
await async_pageable_to_list(mgmt_service.list_queues())
# fully_qualified_namespace = 'invalid.servicebus.windows.net'
# mgmt_service = ServiceBusAdministrationClient(
# fully_qualified_namespace,
# credential=ServiceBusSharedKeyCredential(servicebus_namespace_key_name, servicebus_namespace_primary_key)
# )
# with pytest.raises(ServiceRequestError):
# await async_pageable_to_list(mgmt_service.list_queues())
fully_qualified_namespace = servicebus_namespace.name + '.servicebus.windows.net'
mgmt_service = ServiceBusAdministrationClient(
fully_qualified_namespace,
credential=ServiceBusSharedKeyCredential("invalid", "invalid")
)
with pytest.raises(HttpResponseError):
await async_pageable_to_list(mgmt_service.list_queues())
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_with_negative_parameters(self, servicebus_namespace_connection_string):
pytest.skip("start_idx and max_count are currently removed, they might come back in the future.")
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await run_test_async_mgmt_list_with_negative_parameters(AsyncMgmtQueueListTestHelper(mgmt_service))
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_delete_basic(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
await mgmt_service.create_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1
await mgmt_service.create_queue('txt/.-_123')
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 2
await mgmt_service.delete_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1 and queues[0].name == 'txt/.-_123'
await mgmt_service.delete_queue('txt/.-_123')
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_delete_one_and_check_not_existing(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
for i in range(10):
await mgmt_service.create_queue("queue{}".format(i))
random_delete_idx = 0
to_delete_queue_name = "queue{}".format(random_delete_idx)
await mgmt_service.delete_queue(to_delete_queue_name)
queue_names = [queue.name for queue in (await async_pageable_to_list(mgmt_service.list_queues()))]
assert len(queue_names) == 9 and to_delete_queue_name not in queue_names
for name in queue_names:
await mgmt_service.delete_queue(name)
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_delete_negative(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
await mgmt_service.create_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 1
await mgmt_service.delete_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
assert len(queues) == 0
with pytest.raises(ResourceNotFoundError):
await mgmt_service.delete_queue("test_queue")
with pytest.raises(ResourceNotFoundError):
await mgmt_service.delete_queue("non_existing_queue")
with pytest.raises(ValueError):
await mgmt_service.delete_queue("")
with pytest.raises(TypeError):
await mgmt_service.delete_queue(queue_name=None)
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_create_by_name(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "eidk"
created_at_utc = utc_now()
await mgmt_service.create_queue(queue_name)
try:
queue = await mgmt_service.get_queue(queue_name)
assert queue.name == queue_name
assert queue.availability_status == 'Available'
assert queue.status == 'Active'
# assert created_at_utc < queue.created_at_utc < utc_now() + datetime.timedelta(minutes=10)
finally:
await mgmt_service.delete_queue(queue_name)
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_create_with_invalid_name(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
with pytest.raises(msrest.exceptions.ValidationError):
await mgmt_service.create_queue(Exception())
with pytest.raises(msrest.exceptions.ValidationError):
await mgmt_service.create_queue('')
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_create_with_queue_description(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "dkldf"
queue_name_2 = "vjiqjx"
topic_name = "aghadh"
await mgmt_service.create_topic(topic_name)
await mgmt_service.create_queue(
queue_name,
auto_delete_on_idle=datetime.timedelta(minutes=10),
dead_lettering_on_message_expiration=True,
default_message_time_to_live=datetime.timedelta(minutes=11),
duplicate_detection_history_time_window=datetime.timedelta(minutes=12),
forward_dead_lettered_messages_to=topic_name,
forward_to=topic_name,
enable_batched_operations=True,
enable_express=True,
enable_partitioning=True,
lock_duration=datetime.timedelta(seconds=13),
max_delivery_count=14,
max_size_in_megabytes=3072,
#requires_duplicate_detection=True,
requires_session=True
)
await mgmt_service.create_queue(
queue_name_2,
auto_delete_on_idle="PT10M1S",
dead_lettering_on_message_expiration=True,
default_message_time_to_live="PT11M2S",
duplicate_detection_history_time_window="PT12M3S",
enable_batched_operations=True,
enable_express=True,
enable_partitioning=True,
forward_dead_lettered_messages_to=topic_name,
forward_to=topic_name,
lock_duration="PT13S",
max_delivery_count=14,
max_size_in_megabytes=3072,
requires_session=True
)
try:
queue = await mgmt_service.get_queue(queue_name)
assert queue.name == queue_name
assert queue.auto_delete_on_idle == datetime.timedelta(minutes=10)
assert queue.dead_lettering_on_message_expiration == True
assert queue.default_message_time_to_live == datetime.timedelta(minutes=11)
assert queue.duplicate_detection_history_time_window == datetime.timedelta(minutes=12)
assert queue.enable_batched_operations == True
assert queue.forward_dead_lettered_messages_to.endswith(".servicebus.windows.net/{}".format(topic_name))
assert queue.forward_to.endswith(".servicebus.windows.net/{}".format(topic_name))
assert queue.enable_express == True
assert queue.enable_partitioning == True
assert queue.lock_duration == datetime.timedelta(seconds=13)
assert queue.max_delivery_count == 14
assert queue.max_size_in_megabytes % 3072 == 0
#assert queue.requires_duplicate_detection == True
assert queue.requires_session == True
queue2 = await mgmt_service.get_queue(queue_name_2)
assert queue2.name == queue_name_2
assert queue2.auto_delete_on_idle == datetime.timedelta(minutes=10, seconds=1)
assert queue2.dead_lettering_on_message_expiration == True
assert queue2.default_message_time_to_live == datetime.timedelta(minutes=11, seconds=2)
assert queue2.duplicate_detection_history_time_window == datetime.timedelta(minutes=12, seconds=3)
assert queue2.enable_batched_operations == True
assert queue2.enable_express == True
assert queue2.enable_partitioning == True
assert queue2.forward_dead_lettered_messages_to.endswith(".servicebus.windows.net/{}".format(topic_name))
assert queue2.forward_to.endswith(".servicebus.windows.net/{}".format(topic_name))
assert queue2.lock_duration == datetime.timedelta(seconds=13)
assert queue2.max_delivery_count == 14
assert queue2.max_size_in_megabytes % 3072 == 0
assert queue2.requires_session == True
finally:
await mgmt_service.delete_queue(queue_name)
await mgmt_service.delete_queue(queue_name_2)
await mgmt_service.delete_topic(topic_name)
await mgmt_service.close()
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_create_duplicate(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "eriodk"
await mgmt_service.create_queue(queue_name)
try:
with pytest.raises(ResourceExistsError):
await mgmt_service.create_queue(queue_name)
finally:
await mgmt_service.delete_queue(queue_name)
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_update_success(self, servicebus_namespace_connection_string, servicebus_namespace, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "ewuidfj"
topic_name = "dkfjaks"
queue_description = await mgmt_service.create_queue(queue_name)
await mgmt_service.create_topic(topic_name)
try:
# Try updating one setting.
queue_description.lock_duration = datetime.timedelta(minutes=2)
await mgmt_service.update_queue(queue_description)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.lock_duration == datetime.timedelta(minutes=2)
# Update forwarding settings with entity name.
queue_description.forward_to = topic_name
queue_description.forward_dead_lettered_messages_to = topic_name
await mgmt_service.update_queue(queue_description)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.forward_dead_lettered_messages_to.endswith(".servicebus.windows.net/{}".format(topic_name))
assert queue_description.forward_to.endswith(".servicebus.windows.net/{}".format(topic_name))
# Update forwarding settings with None.
queue_description.forward_to = None
queue_description.forward_dead_lettered_messages_to = None
await mgmt_service.update_queue(queue_description)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.forward_dead_lettered_messages_to is None
assert queue_description.forward_to is None
# Now try updating all settings.
queue_description.auto_delete_on_idle = datetime.timedelta(minutes=10)
queue_description.dead_lettering_on_message_expiration = True
queue_description.default_message_time_to_live = datetime.timedelta(minutes=11)
queue_description.duplicate_detection_history_time_window = datetime.timedelta(minutes=12)
queue_description.enable_batched_operations = True
queue_description.enable_express = True
#queue_description.enable_partitioning = True # Cannot be changed after creation
queue_description.lock_duration = datetime.timedelta(seconds=13)
queue_description.max_delivery_count = 14
queue_description.max_size_in_megabytes = 3072
queue_description.forward_to = "sb://{}.servicebus.windows.net/{}".format(servicebus_namespace.name, queue_name)
queue_description.forward_dead_lettered_messages_to = "sb://{}.servicebus.windows.net/{}".format(servicebus_namespace.name, queue_name)
#queue_description.requires_duplicate_detection = True # Read only
#queue_description.requires_session = True # Cannot be changed after creation
await mgmt_service.update_queue(queue_description)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.auto_delete_on_idle == datetime.timedelta(minutes=10)
assert queue_description.dead_lettering_on_message_expiration == True
assert queue_description.default_message_time_to_live == datetime.timedelta(minutes=11)
assert queue_description.duplicate_detection_history_time_window == datetime.timedelta(minutes=12)
assert queue_description.enable_batched_operations == True
assert queue_description.enable_express == True
#assert queue_description.enable_partitioning == True
assert queue_description.lock_duration == datetime.timedelta(seconds=13)
assert queue_description.max_delivery_count == 14
assert queue_description.max_size_in_megabytes == 3072
assert queue_description.forward_to.endswith(".servicebus.windows.net/{}".format(queue_name))
# Note: We use endswith because the servicebus namespace name is replaced locally but not in the properties bag, and this still lets us test the value.
assert queue_description.forward_dead_lettered_messages_to.endswith(".servicebus.windows.net/{}".format(queue_name))
#assert queue_description.requires_duplicate_detection == True
#assert queue_description.requires_session == True
queue_description.auto_delete_on_idle = "PT10M1S"
queue_description.default_message_time_to_live = "PT11M2S"
queue_description.duplicate_detection_history_time_window = "PT12M3S"
await mgmt_service.update_queue(queue_description)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.auto_delete_on_idle == datetime.timedelta(minutes=10, seconds=1)
assert queue_description.default_message_time_to_live == datetime.timedelta(minutes=11, seconds=2)
assert queue_description.duplicate_detection_history_time_window == datetime.timedelta(minutes=12, seconds=3)
# updating all settings with keyword arguments.
await mgmt_service.update_queue(
queue_description,
auto_delete_on_idle=datetime.timedelta(minutes=15),
dead_lettering_on_message_expiration=False,
default_message_time_to_live=datetime.timedelta(minutes=16),
duplicate_detection_history_time_window=datetime.timedelta(minutes=17),
enable_batched_operations=False,
enable_express=False,
lock_duration=datetime.timedelta(seconds=18),
max_delivery_count=15,
max_size_in_megabytes=2048,
forward_to=None,
forward_dead_lettered_messages_to=None
)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.auto_delete_on_idle == datetime.timedelta(minutes=15)
assert queue_description.dead_lettering_on_message_expiration == False
assert queue_description.default_message_time_to_live == datetime.timedelta(minutes=16)
assert queue_description.duplicate_detection_history_time_window == datetime.timedelta(minutes=17)
assert queue_description.enable_batched_operations == False
assert queue_description.enable_express == False
#assert queue_description.enable_partitioning == True
assert queue_description.lock_duration == datetime.timedelta(seconds=18)
assert queue_description.max_delivery_count == 15
assert queue_description.max_size_in_megabytes == 2048
# Note: We use endswith because the servicebus namespace name is replaced locally but not in the properties bag, and this still lets us test the value.
assert queue_description.forward_to == None
assert queue_description.forward_dead_lettered_messages_to == None
#assert queue_description.requires_duplicate_detection == True
#assert queue_description.requires_session == True
finally:
await mgmt_service.delete_queue(queue_name)
await mgmt_service.delete_topic(topic_name)
await mgmt_service.close()
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_update_invalid(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "vbmfm"
queue_description = await mgmt_service.create_queue(queue_name)
try:
# handle a null update properly.
with pytest.raises(TypeError):
await mgmt_service.update_queue(None)
# handle an invalid type update properly.
with pytest.raises(TypeError):
await mgmt_service.update_queue(Exception("test"))
# change a setting we can't change; should fail.
queue_description.requires_session = True
with pytest.raises(HttpResponseError):
await mgmt_service.update_queue(queue_description)
queue_description.requires_session = False
#change the name to a queue that doesn't exist; should fail.
queue_description.name = "dkfrgx"
with pytest.raises(HttpResponseError):
await mgmt_service.update_queue(queue_description)
queue_description.name = queue_name
#change the name to a queue with an invalid name; should fail.
queue_description.name = ''
with pytest.raises(msrest.exceptions.ValidationError):
await mgmt_service.update_queue(queue_description)
queue_description.name = queue_name
#change to a setting with an invalid value; should still fail.
queue_description.lock_duration = datetime.timedelta(days=25)
with pytest.raises(HttpResponseError):
await mgmt_service.update_queue(queue_description)
queue_description.lock_duration = datetime.timedelta(minutes=5)
finally:
await mgmt_service.delete_queue(queue_name)
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_runtime_properties_basic(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queues = await async_pageable_to_list(mgmt_service.list_queues())
queues_infos = await async_pageable_to_list(mgmt_service.list_queues_runtime_properties())
assert len(queues) == len(queues_infos) == 0
await mgmt_service.create_queue("test_queue")
queues = await async_pageable_to_list(mgmt_service.list_queues())
queues_infos = await async_pageable_to_list(mgmt_service.list_queues_runtime_properties())
assert len(queues) == 1 and len(queues_infos) == 1
assert queues[0].name == queues_infos[0].name == "test_queue"
info = queues_infos[0]
assert info.size_in_bytes == 0
assert info.created_at_utc is not None
assert info.accessed_at_utc is not None
assert info.updated_at_utc is not None
assert info.total_message_count == 0
assert info.active_message_count == 0
assert info.dead_letter_message_count == 0
assert info.transfer_dead_letter_message_count == 0
assert info.transfer_message_count == 0
assert info.scheduled_message_count == 0
await mgmt_service.delete_queue("test_queue")
queues_infos = await async_pageable_to_list(mgmt_service.list_queues_runtime_properties())
assert len(queues_infos) == 0
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_runtime_properties_with_negative_parameters(self, servicebus_namespace_connection_string):
pytest.skip("start_idx and max_count are currently removed, they might come back in the future.")
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await run_test_async_mgmt_list_with_negative_parameters(AsyncMgmtQueueListRuntimeInfoTestHelper(mgmt_service))
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_list_runtime_properties_with_parameters(self, servicebus_namespace_connection_string):
pytest.skip("start_idx and max_count are currently removed, they might come back in the future.")
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await run_test_async_mgmt_list_with_parameters(AsyncMgmtQueueListRuntimeInfoTestHelper(mgmt_service))
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_get_runtime_properties_basic(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
await mgmt_service.create_queue("test_queue")
queue_runtime_properties = await mgmt_service.get_queue_runtime_properties("test_queue")
assert queue_runtime_properties
assert queue_runtime_properties.name == "test_queue"
assert queue_runtime_properties.size_in_bytes == 0
assert queue_runtime_properties.created_at_utc is not None
assert queue_runtime_properties.accessed_at_utc is not None
assert queue_runtime_properties.updated_at_utc is not None
assert queue_runtime_properties.total_message_count == 0
assert queue_runtime_properties.active_message_count == 0
assert queue_runtime_properties.dead_letter_message_count == 0
assert queue_runtime_properties.transfer_dead_letter_message_count == 0
assert queue_runtime_properties.transfer_message_count == 0
assert queue_runtime_properties.scheduled_message_count == 0
await mgmt_service.delete_queue("test_queue")
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_async_mgmt_queue_get_runtime_properties_negative(self, servicebus_namespace_connection_string):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
with pytest.raises(TypeError):
await mgmt_service.get_queue_runtime_properties(None)
with pytest.raises(msrest.exceptions.ValidationError):
await mgmt_service.get_queue_runtime_properties("")
with pytest.raises(ResourceNotFoundError):
await mgmt_service.get_queue_runtime_properties("non_existing_queue")
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_mgmt_queue_async_update_dict_success(self, servicebus_namespace_connection_string, servicebus_namespace, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "fjruid"
queue_description = await mgmt_service.create_queue(queue_name)
queue_description_dict = dict(queue_description)
try:
# Try updating one setting.
queue_description_dict["lock_duration"] = datetime.timedelta(minutes=2)
await mgmt_service.update_queue(queue_description_dict)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.lock_duration == datetime.timedelta(minutes=2)
# Now try updating all settings.
queue_description_dict = dict(queue_description)
queue_description_dict["auto_delete_on_idle"] = datetime.timedelta(minutes=10)
queue_description_dict["dead_lettering_on_message_expiration"] = True
queue_description_dict["default_message_time_to_live"] = datetime.timedelta(minutes=11)
queue_description_dict["duplicate_detection_history_time_window"] = datetime.timedelta(minutes=12)
queue_description_dict["enable_batched_operations"] = True
queue_description_dict["enable_express"] = True
#queue_description_dict["enable_partitioning"] = True # Cannot be changed after creation
queue_description_dict["lock_duration"] = datetime.timedelta(seconds=13)
queue_description_dict["max_delivery_count"] = 14
queue_description_dict["max_size_in_megabytes"] = 3072
queue_description_dict["forward_to"] = "sb://{}.servicebus.windows.net/{}".format(servicebus_namespace.name, queue_name)
queue_description_dict["forward_dead_lettered_messages_to"] = "sb://{}.servicebus.windows.net/{}".format(servicebus_namespace.name, queue_name)
#queue_description_dict["requires_duplicate_detection"] = True # Read only
#queue_description_dict["requires_session"] = True # Cannot be changed after creation
await mgmt_service.update_queue(queue_description_dict)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.auto_delete_on_idle == datetime.timedelta(minutes=10)
assert queue_description.dead_lettering_on_message_expiration == True
assert queue_description.default_message_time_to_live == datetime.timedelta(minutes=11)
assert queue_description.duplicate_detection_history_time_window == datetime.timedelta(minutes=12)
assert queue_description.enable_batched_operations == True
assert queue_description.enable_express == True
#assert queue_description.enable_partitioning == True
assert queue_description.lock_duration == datetime.timedelta(seconds=13)
assert queue_description.max_delivery_count == 14
assert queue_description.max_size_in_megabytes == 3072
assert queue_description.forward_to.endswith(".servicebus.windows.net/{}".format(queue_name))
# Note: We use endswith because the servicebus namespace name is replaced locally but not in the properties bag, and this still lets us test the value.
assert queue_description.forward_dead_lettered_messages_to.endswith(".servicebus.windows.net/{}".format(queue_name))
#assert queue_description.requires_duplicate_detection == True
#assert queue_description.requires_session == True
# updating all settings with keyword arguments.
await mgmt_service.update_queue(
dict(queue_description),
auto_delete_on_idle=datetime.timedelta(minutes=15),
dead_lettering_on_message_expiration=False,
default_message_time_to_live=datetime.timedelta(minutes=16),
duplicate_detection_history_time_window=datetime.timedelta(minutes=17),
enable_batched_operations=False,
enable_express=False,
lock_duration=datetime.timedelta(seconds=18),
max_delivery_count=15,
max_size_in_megabytes=2048,
forward_to=None,
forward_dead_lettered_messages_to=None
)
queue_description = await mgmt_service.get_queue(queue_name)
assert queue_description.auto_delete_on_idle == datetime.timedelta(minutes=15)
assert queue_description.dead_lettering_on_message_expiration == False
assert queue_description.default_message_time_to_live == datetime.timedelta(minutes=16)
assert queue_description.duplicate_detection_history_time_window == datetime.timedelta(minutes=17)
assert queue_description.enable_batched_operations == False
assert queue_description.enable_express == False
# assert queue_description.enable_partitioning == True
assert queue_description.lock_duration == datetime.timedelta(seconds=18)
assert queue_description.max_delivery_count == 15
assert queue_description.max_size_in_megabytes == 2048
# Note: We use endswith because the servicebus namespace name is replaced locally but not in the properties bag, and this still lets us test the value.
assert queue_description.forward_to == None
assert queue_description.forward_dead_lettered_messages_to == None
# assert queue_description.requires_duplicate_detection == True
# assert queue_description.requires_session == True
finally:
await mgmt_service.delete_queue(queue_name)
await mgmt_service.close()
@CachedResourceGroupPreparer(name_prefix='servicebustest')
@CachedServiceBusNamespacePreparer(name_prefix='servicebustest')
async def test_mgmt_queue_async_update_dict_error(self, servicebus_namespace_connection_string, **kwargs):
mgmt_service = ServiceBusAdministrationClient.from_connection_string(servicebus_namespace_connection_string)
await clear_queues(mgmt_service)
queue_name = "fjruid"
queue_description = await mgmt_service.create_queue(queue_name)
try:
# send in queue dict without non-name keyword args
queue_description_only_name = {"name": queue_name}
with pytest.raises(TypeError):
await mgmt_service.update_queue(queue_description_only_name)
finally:
await mgmt_service.delete_queue(queue_name)
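The TypeError pinned above can be reproduced with plain Python: making the settings keyword-only means a dict carrying just the name is not enough. `build_queue_update` is a hypothetical stand-in, not the SDK method:

```python
def build_queue_update(properties, *, lock_duration, max_delivery_count):
    """Build an update payload. The settings are keyword-only, so a
    caller that passes only a name-bearing dict gets a TypeError,
    which is the contract the test above pins for update_queue."""
    update = dict(properties)
    update["lock_duration"] = lock_duration
    update["max_delivery_count"] = max_delivery_count
    return update

try:
    build_queue_update({"name": "fjruid"})  # missing keyword-only settings
    raised = False
except TypeError:
    raised = True
assert raised
```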
# tests/unit/cloud/clouds/test_vmware.py (springborland/salt, Apache-2.0)
# -*- coding: utf-8 -*-
"""
:codeauthor: `Nitin Madhok <nmadhok@clemson.edu>`
tests.unit.cloud.clouds.vmware_test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
# Import Python libs
from __future__ import absolute_import, print_function, unicode_literals
from copy import deepcopy
# Import Salt Libs
from salt import config
from salt.cloud.clouds import vmware
from salt.exceptions import SaltCloudSystemExit
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, patch
from tests.support.unit import TestCase, skipIf
# Attempt to import pyVim and pyVmomi libs
HAS_LIBS = True
# pylint: disable=import-error,no-name-in-module,unused-import
try:
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim, vmodl
except ImportError:
HAS_LIBS = False
# pylint: enable=import-error,no-name-in-module,unused-import
# Global Variables
PROVIDER_CONFIG = {
"vcenter01": {
"vmware": {
"driver": "vmware",
"url": "vcenter01.domain.com",
"user": "DOMAIN\\user",
"password": "verybadpass",
}
}
}
VM_NAME = "test-vm"
PROFILE = {
"base-gold": {
"provider": "vcenter01:vmware",
"datastore": "Datastore1",
"resourcepool": "Resources",
"folder": "vm",
}
}
class ExtendedTestCase(TestCase, LoaderModuleMockMixin):
"""
Extended TestCase class containing additional helper methods.
"""
def setup_loader_modules(self):
return {
vmware: {
"__virtual__": MagicMock(return_value="vmware"),
"__active_provider_name__": "",
}
}
def assertRaisesWithMessage(self, exc_type, exc_msg, func, *args, **kwargs):
try:
func(*args, **kwargs)
except Exception as exc:  # pylint: disable=broad-except
self.assertEqual(type(exc), exc_type)
self.assertEqual(exc.message, exc_msg)
else:
self.fail("{0} was not raised".format(exc_type.__name__))
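The helper above depends on the Salt loader mixin and on Salt exceptions carrying a `.message` attribute. A self-contained version runnable with plain unittest might look like this, with `str(exc)` standing in for `exc.message`:

```python
import unittest

class RaisesWithMessageDemo(unittest.TestCase):
    def assertRaisesWithMessage(self, exc_type, exc_msg, func, *args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception as exc:  # pylint: disable=broad-except
            # Check the exact type, not a subclass, and the message text.
            self.assertEqual(type(exc), exc_type)
            self.assertEqual(str(exc), exc_msg)
        else:
            self.fail("{0} was not raised".format(exc_type.__name__))

    def test_message_is_checked(self):
        def boom():
            raise ValueError("bad input")
        self.assertRaisesWithMessage(ValueError, "bad input", boom)

# Run the single test directly, without a runner or loader.
outcome = RaisesWithMessageDemo("test_message_is_checked").run()
assert outcome.wasSuccessful()
```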
@skipIf(not HAS_LIBS, "Install pyVmomi to be able to run this test.")
class VMwareTestCase(ExtendedTestCase):
"""
Unit TestCase for salt.cloud.clouds.vmware module.
"""
def test_test_vcenter_connection_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call test_vcenter_connection
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.test_vcenter_connection, call="action"
)
def test_get_vcenter_version_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call get_vcenter_version
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.get_vcenter_version, call="action"
)
def test_avail_images_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call avail_images
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.avail_images, call="action")
def test_avail_locations_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call avail_locations
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.avail_locations, call="action")
def test_avail_sizes_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call avail_sizes
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.avail_sizes, call="action")
def test_list_datacenters_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_datacenters
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_datacenters, call="action")
def test_list_clusters_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_clusters
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_clusters, call="action")
def test_list_datastore_clusters_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_datastore_clusters
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.list_datastore_clusters, call="action"
)
def test_list_datastores_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_datastores
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_datastores, call="action")
def test_list_hosts_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_hosts, call="action")
def test_list_resourcepools_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_resourcepools
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_resourcepools, call="action")
def test_list_networks_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_networks
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_networks, call="action")
def test_list_nodes_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes, call="action")
def test_list_nodes_min_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_min
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_min, call="action")
def test_list_nodes_full_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_full
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_full, call="action")
def test_list_nodes_select_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_nodes_select
with --action or -a.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_nodes_select, call="action")
def test_list_folders_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_folders
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_folders, call="action")
def test_list_snapshots_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_snapshots
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_snapshots, call="action")
def test_list_hosts_by_cluster_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts_by_cluster
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.list_hosts_by_cluster, call="action"
)
def test_list_clusters_by_datacenter_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_clusters_by_datacenter
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.list_clusters_by_datacenter, call="action"
)
def test_list_hosts_by_datacenter_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_hosts_by_datacenter
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.list_hosts_by_datacenter, call="action"
)
def test_list_hbas_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_hbas
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_hbas, call="action")
def test_list_dvs_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_dvs
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_dvs, call="action")
def test_list_vapps_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_vapps
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_vapps, call="action")
def test_list_templates_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call list_templates
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.list_templates, call="action")
def test_create_datacenter_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call create_datacenter
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.create_datacenter, call="action")
def test_create_cluster_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call create_cluster
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.create_cluster, call="action")
def test_rescan_hba_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call rescan_hba
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.rescan_hba, call="action")
def test_upgrade_tools_all_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call upgrade_tools_all
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.upgrade_tools_all, call="action")
def test_enter_maintenance_mode_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call enter_maintenance_mode
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.enter_maintenance_mode, call="action"
)
def test_exit_maintenance_mode_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call exit_maintenance_mode
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.exit_maintenance_mode, call="action"
)
def test_create_folder_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call create_folder
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.create_folder, call="action")
def test_add_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call add_host
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.add_host, call="action")
def test_remove_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call remove_host
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.remove_host, call="action")
def test_connect_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call connect_host
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.connect_host, call="action")
def test_disconnect_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call disconnect_host
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.disconnect_host, call="action")
def test_reboot_host_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call reboot_host
with anything other than --function or -f.
"""
self.assertRaises(SaltCloudSystemExit, vmware.reboot_host, call="action")
def test_create_datastore_cluster_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call create_datastore_cluster
with anything other than --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.create_datastore_cluster, call="action"
)
def test_show_instance_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call show_instance
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.show_instance, name=VM_NAME, call="function"
)
def test_start_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call start
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.start, name=VM_NAME, call="function"
)
def test_stop_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call stop
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.stop, name=VM_NAME, call="function"
)
def test_suspend_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call suspend
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.suspend, name=VM_NAME, call="function"
)
def test_reset_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call reset
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.reset, name=VM_NAME, call="function"
)
def test_terminate_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call terminate
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.terminate, name=VM_NAME, call="function"
)
def test_destroy_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call destroy
with --function or -f.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.destroy, name=VM_NAME, call="function"
)
def test_upgrade_tools_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call upgrade_tools
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.upgrade_tools, name=VM_NAME, call="function"
)
def test_create_snapshot_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call create_snapshot
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.create_snapshot, name=VM_NAME, call="function"
)
def test_revert_to_snapshot_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call revert_to_snapshot
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.revert_to_snapshot,
name=VM_NAME,
call="function",
)
def test_remove_snapshot_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call remove_snapshot
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.remove_snapshot,
name=VM_NAME,
kwargs={"snapshot_name": "mySnapshot"},
call="function",
)
def test_remove_snapshot_call_no_snapshot_name_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when snapshot_name is not present in kwargs.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.remove_snapshot, name=VM_NAME, call="action"
)
def test_remove_all_snapshots_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call remove_all_snapshots
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.remove_all_snapshots,
name=VM_NAME,
call="function",
)
def test_convert_to_template_call(self):
"""
Tests that a SaltCloudSystemExit is raised when trying to call convert_to_template
with anything other than --action or -a.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.convert_to_template,
name=VM_NAME,
call="function",
)
def test_avail_sizes(self):
"""
Tests that avail_sizes returns an empty dictionary.
"""
self.assertEqual(vmware.avail_sizes(call="foo"), {})
def test_create_datacenter_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
create_datacenter.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.create_datacenter, kwargs=None, call="function"
)
def test_create_datacenter_no_name_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when name is not present in
kwargs that are provided to create_datacenter.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datacenter,
kwargs={"foo": "bar"},
call="function",
)
def test_create_datacenter_name_too_short(self):
"""
Tests that a SaltCloudSystemExit is raised when name is present in kwargs
that are provided to create_datacenter but is an empty string.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datacenter,
kwargs={"name": ""},
call="function",
)
def test_create_datacenter_name_too_long(self):
"""
Tests that a SaltCloudSystemExit is raised when name is present in kwargs
that are provided to create_datacenter but is a string with length > 80.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datacenter,
kwargs={
"name": "cCD2GgJGPG1DUnPeFBoPeqtdmUxIWxDoVFbA14vIG0BPoUECkgbRMnnY6gaUPBvIDCcsZ5HU48ubgQu5c"
},
call="function",
)
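The four `create_datacenter` tests above pin a simple contract: kwargs must exist, must carry a name, and that name must be non-empty and at most 80 characters. A sketch of a validator satisfying those tests; the error messages are illustrative, not Salt's exact wording:

```python
class CloudSystemExit(Exception):
    """Stand-in for salt.exceptions.SaltCloudSystemExit."""

def validate_datacenter_name(kwargs):
    """Enforce the contract the tests above check: kwargs present,
    name key present, and the name between 1 and 80 characters."""
    if not isinstance(kwargs, dict) or "name" not in kwargs:
        raise CloudSystemExit("You must specify a name for the new datacenter.")
    name = kwargs["name"]
    if not 1 <= len(name) <= 80:
        raise CloudSystemExit("The name must be a non-empty string of at most 80 characters.")
    return name

assert validate_datacenter_name({"name": "dc1"}) == "dc1"
```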
def test_create_cluster_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
create_cluster.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.create_cluster, kwargs=None, call="function"
)
def test_create_cluster_no_name_no_datacenter_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when neither the name nor the
datacenter is present in kwargs that are provided to create_cluster.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_cluster,
kwargs={"foo": "bar"},
call="function",
)
def test_create_cluster_no_datacenter_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when the name is present but the
datacenter is not present in kwargs that are provided to create_cluster.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_cluster,
kwargs={"name": "my-cluster"},
call="function",
)
def test_create_cluster_no_name_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when the datacenter is present
but the name is not present in kwargs that are provided to create_cluster.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_cluster,
kwargs={"datacenter": "my-datacenter"},
call="function",
)
def test_rescan_hba_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
rescan_hba.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.rescan_hba, kwargs=None, call="function"
)
def test_rescan_hba_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to rescan_hba.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.rescan_hba,
kwargs={"foo": "bar"},
call="function",
)
def test_create_snapshot_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
create_snapshot.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_snapshot,
name=VM_NAME,
kwargs=None,
call="action",
)
def test_create_snapshot_no_snapshot_name_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when snapshot_name is not present
in kwargs that are provided to create_snapshot.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_snapshot,
name=VM_NAME,
kwargs={"foo": "bar"},
call="action",
)
def test_add_host_no_esxi_host_user_in_config(self):
"""
Tests that a SaltCloudSystemExit is raised when esxi_host_user is not
specified in the cloud provider configuration when calling add_host.
"""
with patch.dict(vmware.__opts__, {"providers": PROVIDER_CONFIG}, clean=True):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"You must specify the ESXi host username in your providers config.",
vmware.add_host,
kwargs=None,
call="function",
)
def test_add_host_no_esxi_host_password_in_config(self):
"""
Tests that a SaltCloudSystemExit is raised when esxi_host_password is not
specified in the cloud provider configuration when calling add_host.
"""
provider_config_additions = {
"esxi_host_user": "root",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"You must specify the ESXi host password in your providers config.",
vmware.add_host,
kwargs=None,
call="function",
)
def test_no_clonefrom_just_image(self):
"""
Tests that the profile is configured correctly when deploying using an image
"""
profile_additions = {"image": "some-image.iso"}
provider_config = deepcopy(PROVIDER_CONFIG)
profile = deepcopy(PROFILE)
profile["base-gold"].update(profile_additions)
provider_config_additions = {"profiles": profile}
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
vm_ = {"profile": profile}
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertEqual(
config.is_profile_configured(
vmware.__opts__, "vcenter01:vmware", "base-gold", vm_=vm_
),
True,
)
def test_just_clonefrom(self):
"""
Tests that the profile is configured correctly when deploying by cloning from a template
"""
profile_additions = {
"clonefrom": "test-template",
"image": "should ignore image",
}
provider_config = deepcopy(PROVIDER_CONFIG)
profile = deepcopy(PROFILE)
profile["base-gold"].update(profile_additions)
provider_config_additions = {"profiles": profile}
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
vm_ = {"profile": profile}
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertEqual(
config.is_profile_configured(
vmware.__opts__, "vcenter01:vmware", "base-gold", vm_=vm_
),
True,
)
def test_add_new_ide_controller_helper(self):
"""
Tests that creating a new controller, ensuring that it will generate a controller key
if one is not provided
"""
with patch(
"salt.cloud.clouds.vmware.randint", return_value=101
) as randint_mock:
controller_label = "Some label"
bus_number = 1
spec = vmware._add_new_ide_controller_helper(
controller_label, None, bus_number
)
self.assertEqual(spec.device.key, randint_mock.return_value)
spec = vmware._add_new_ide_controller_helper(
controller_label, 200, bus_number
)
self.assertEqual(spec.device.key, 200)
self.assertEqual(spec.device.busNumber, bus_number)
self.assertEqual(spec.device.deviceInfo.label, controller_label)
self.assertEqual(spec.device.deviceInfo.summary, controller_label)
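The branch this test pins (use the caller's controller key if given, otherwise generate one with `randint`) can be sketched on its own; the key range here is illustrative, not VMware's actual key space:

```python
import random

def new_controller_key(key=None, lo=-200, hi=250):
    """Use the caller's key when one is supplied, otherwise draw a
    random one. This is the branch test_add_new_ide_controller_helper
    exercises by patching randint and by passing an explicit 200."""
    return key if key is not None else random.randint(lo, hi)

assert new_controller_key(200) == 200          # explicit key wins
assert -200 <= new_controller_key() <= 250     # generated otherwise
```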
def test_manage_devices_just_cd(self):
"""
Tests that when adding IDE/CD drives, controller keys will be in the apparent
safe-range on ESX 5.5 but randomly generated on other versions (i.e. 6)
"""
device_map = {
"ide": {"IDE 0": {}, "IDE 1": {}},
"cd": {"CD/DVD Drive 1": {"controller": "IDE 0"}},
}
with patch(
"salt.cloud.clouds.vmware.get_vcenter_version",
return_value="VMware ESXi 5.5.0",
):
specs = vmware._manage_devices(device_map, vm=None)["device_specs"]
self.assertEqual(
specs[0].device.key, vmware.SAFE_ESX_5_5_CONTROLLER_KEY_INDEX
)
self.assertEqual(
specs[1].device.key, vmware.SAFE_ESX_5_5_CONTROLLER_KEY_INDEX + 1
)
self.assertEqual(
specs[2].device.controllerKey, vmware.SAFE_ESX_5_5_CONTROLLER_KEY_INDEX
)
with patch(
"salt.cloud.clouds.vmware.get_vcenter_version", return_value="VMware ESXi 6"
):
with patch(
"salt.cloud.clouds.vmware.randint", return_value=100
) as first_key:
specs = vmware._manage_devices(device_map, vm=None)["device_specs"]
self.assertEqual(specs[0].device.key, first_key.return_value)
self.assertEqual(specs[2].device.controllerKey, first_key.return_value)
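The version gate this test describes (fixed, known-safe controller keys on ESXi 5.5, random keys elsewhere) reduces to a small selection function. The constant below is an illustrative placeholder, not VMware's actual value:

```python
import random

SAFE_ESX_5_5_CONTROLLER_KEY_INDEX = 200  # illustrative constant

def controller_keys(count, vcenter_version):
    """Pick controller keys the way test_manage_devices_just_cd
    verifies: a fixed safe range on ESXi 5.5, sequential keys from a
    random base on other versions."""
    if "5.5" in vcenter_version:
        return [SAFE_ESX_5_5_CONTROLLER_KEY_INDEX + i for i in range(count)]
    base = random.randint(-200, 250)
    return [base + i for i in range(count)]

assert controller_keys(2, "VMware ESXi 5.5.0") == [200, 201]
```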
def test_add_host_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to add_host.
"""
provider_config_additions = {
"esxi_host_user": "root",
"esxi_host_password": "myhostpassword",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"You must specify either the IP or DNS name of the host system.",
vmware.add_host,
kwargs={"foo": "bar"},
call="function",
)
def test_add_host_both_cluster_and_datacenter_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when both cluster and datacenter
are present in kwargs that are provided to add_host.
"""
provider_config_additions = {
"esxi_host_user": "root",
"esxi_host_password": "myhostpassword",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"You must specify either the cluster name or the datacenter name.",
vmware.add_host,
kwargs={
"host": "my-esxi-host",
"datacenter": "my-datacenter",
"cluster": "my-cluster",
},
call="function",
)
def test_add_host_neither_cluster_nor_datacenter_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when neither cluster nor
datacenter is present in kwargs that are provided to add_host.
"""
provider_config_additions = {
"esxi_host_user": "root",
"esxi_host_password": "myhostpassword",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(vmware.__opts__, {"providers": provider_config}, clean=True):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"You must specify either the cluster name or the datacenter name.",
vmware.add_host,
kwargs={"host": "my-esxi-host"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_add_host_cluster_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified cluster present
in kwargs that are provided to add_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
provider_config_additions = {
"esxi_host_user": "root",
"esxi_host_password": "myhostpassword",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(
vmware.__opts__, {"providers": provider_config}, clean=True
):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"Specified cluster does not exist.",
vmware.add_host,
kwargs={"host": "my-esxi-host", "cluster": "my-cluster"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_add_host_datacenter_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified datacenter
present in kwargs that are provided to add_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
provider_config_additions = {
"esxi_host_user": "root",
"esxi_host_password": "myhostpassword",
}
provider_config = deepcopy(PROVIDER_CONFIG)
provider_config["vcenter01"]["vmware"].update(provider_config_additions)
with patch.dict(
vmware.__opts__, {"providers": provider_config}, clean=True
):
self.assertRaisesWithMessage(
SaltCloudSystemExit,
"Specified datacenter does not exist.",
vmware.add_host,
kwargs={"host": "my-esxi-host", "datacenter": "my-datacenter"},
call="function",
)
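Every `*_not_exists` test here follows the same recipe: patch the service-instance getter and the managed-object lookup to return None, then assert the exit. A condensed, self-contained version of that recipe using `unittest.mock`; the `helpers` module and `add_host` sketch are hypothetical stand-ins for `salt.utils.vmware` and the driver code:

```python
import types
from unittest.mock import patch

# A tiny stand-in module so patch.object() has a real attribute to
# replace; it plays the role of salt.utils.vmware in the tests above.
helpers = types.ModuleType("helpers")
helpers.get_mor_by_property = lambda si, kind, name: {"name": name}

def add_host(host, cluster):
    # Sketch of the code under test: abort when the cluster lookup
    # comes back empty, which is exactly what the mocked tests assert.
    mor = helpers.get_mor_by_property(None, "cluster", cluster)
    if mor is None:
        raise SystemExit("Specified cluster does not exist.")
    return {"host": host, "cluster": mor["name"]}

# Patch the helper to simulate a missing cluster, as the tests do.
with patch.object(helpers, "get_mor_by_property", return_value=None):
    try:
        add_host("my-esxi-host", "my-cluster")
        raised = False
    except SystemExit:
        raised = True
assert raised
```

Outside the `with` block the real lambda is restored, so the same call succeeds again; that automatic teardown is the point of using `patch` as a context manager.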
def test_remove_host_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
remove_host.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.remove_host, kwargs=None, call="function"
)
def test_remove_host_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to remove_host.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.remove_host,
kwargs={"foo": "bar"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_remove_host_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified host present
in kwargs that are provided to remove_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
self.assertRaises(
SaltCloudSystemExit,
vmware.remove_host,
kwargs={"host": "my-host"},
call="function",
)
def test_connect_host_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
connect_host.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.connect_host, kwargs=None, call="function"
)
def test_connect_host_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to connect_host.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.connect_host,
kwargs={"foo": "bar"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_connect_host_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified host present
in kwargs that are provided to connect_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
self.assertRaises(
SaltCloudSystemExit,
vmware.connect_host,
kwargs={"host": "my-host"},
call="function",
)
def test_disconnect_host_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
disconnect_host.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.disconnect_host, kwargs=None, call="function"
)
def test_disconnect_host_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to disconnect_host.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.disconnect_host,
kwargs={"foo": "bar"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_disconnect_host_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified host present
in kwargs that are provided to disconnect_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
self.assertRaises(
SaltCloudSystemExit,
vmware.disconnect_host,
kwargs={"host": "my-host"},
call="function",
)
def test_reboot_host_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
reboot_host.
"""
self.assertRaises(
SaltCloudSystemExit, vmware.reboot_host, kwargs=None, call="function"
)
def test_reboot_host_no_host_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when host is not present in
kwargs that are provided to reboot_host.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.reboot_host,
kwargs={"foo": "bar"},
call="function",
)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_reboot_host_not_exists(self):
"""
Tests that a SaltCloudSystemExit is raised when the specified host present
in kwargs that are provided to reboot_host does not exist in the VMware
environment.
"""
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_by_property", MagicMock(return_value=None)
):
self.assertRaises(
SaltCloudSystemExit,
vmware.reboot_host,
kwargs={"host": "my-host"},
call="function",
)
def test_create_datastore_cluster_no_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when no kwargs are provided to
create_datastore_cluster.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datastore_cluster,
kwargs=None,
call="function",
)
def test_create_datastore_cluster_no_name_in_kwargs(self):
"""
Tests that a SaltCloudSystemExit is raised when name is not present in
kwargs that are provided to create_datastore_cluster.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datastore_cluster,
kwargs={"foo": "bar"},
call="function",
)
def test_create_datastore_cluster_name_too_short(self):
"""
Tests that a SaltCloudSystemExit is raised when name is present in kwargs
that are provided to create_datastore_cluster but is an empty string.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datastore_cluster,
kwargs={"name": ""},
call="function",
)
def test_create_datastore_cluster_name_too_long(self):
"""
Tests that a SaltCloudSystemExit is raised when name is present in kwargs
that are provided to create_datastore_cluster but is a string longer than 80 characters.
"""
self.assertRaises(
SaltCloudSystemExit,
vmware.create_datastore_cluster,
kwargs={
"name": "cCD2GgJGPG1DUnPeFBoPeqtdmUxIWxDoVFbA14vIG0BPoUECkgbRMnnY6gaUPBvIDCcsZ5HU48ubgQu5c"
},
call="function",
)
def test__add_new_hard_disk_helper(self):
with patch("salt.cloud.clouds.vmware._get_si", MagicMock(return_value=None)):
with patch(
"salt.utils.vmware.get_mor_using_container_view",
side_effect=[None, None],
):
self.assertRaises(
SaltCloudSystemExit,
vmware._add_new_hard_disk_helper,
disk_label="test",
size_gb=100,
unit_number=0,
datastore="whatever",
)
with patch(
"salt.utils.vmware.get_mor_using_container_view",
side_effect=["Datastore", None],
):
self.assertRaises(
AttributeError,
vmware._add_new_hard_disk_helper,
disk_label="test",
size_gb=100,
unit_number=0,
datastore="whatever",
)
vmware.salt.utils.vmware.get_mor_using_container_view.assert_called_with(
None, vim.Datastore, "whatever"
)
with patch(
"salt.utils.vmware.get_mor_using_container_view",
side_effect=[None, "Cluster"],
):
self.assertRaises(
AttributeError,
vmware._add_new_hard_disk_helper,
disk_label="test",
size_gb=100,
unit_number=0,
datastore="whatever",
)
vmware.salt.utils.vmware.get_mor_using_container_view.assert_called_with(
None, vim.StoragePod, "whatever"
)
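The helper test above passes a list as `side_effect`, which makes the patched `get_mor_using_container_view` return successive list elements on successive calls. A minimal stdlib-only sketch of that mock behavior (the `lookup` name is illustrative, not part of Salt):

```python
from unittest.mock import MagicMock

# A list side_effect yields one element per call, so a single patched
# lookup can report "found" on the first call and "missing" on the second.
lookup = MagicMock(side_effect=["Datastore", None])
first = lookup()   # -> "Datastore"
second = lookup()  # -> None
assert first == "Datastore" and second is None
assert lookup.call_count == 2
```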
class CloneFromSnapshotTest(TestCase):
"""
Test functionality to clone from snapshot
"""
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_quick_linked_clone(self):
"""
Test that disk move type is
set to createNewChildDiskBacking
"""
self._test_clone_type(vmware.QUICK_LINKED_CLONE)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_current_state_linked_clone(self):
"""
Test that disk move type is
set to moveChildMostDiskBacking
"""
self._test_clone_type(vmware.CURRENT_STATE_LINKED_CLONE)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_copy_all_disks_full_clone(self):
"""
Test that disk move type is
set to moveAllDiskBackingsAndAllowSharing
"""
self._test_clone_type(vmware.COPY_ALL_DISKS_FULL_CLONE)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_flatten_all_all_disks_full_clone(self):
"""
Test that disk move type is
set to moveAllDiskBackingsAndDisallowSharing
"""
self._test_clone_type(vmware.FLATTEN_DISK_FULL_CLONE)
@skipIf(HAS_LIBS is False, "Install pyVmomi to be able to run this unit test.")
def test_raises_error_for_invalid_disk_move_type(self):
"""
Test that invalid disk move type
raises error
"""
with self.assertRaises(SaltCloudSystemExit):
self._test_clone_type("foobar")
def _test_clone_type(self, clone_type):
"""
Assertions for checking that a certain clone type
works
"""
obj_ref = MagicMock()
obj_ref.snapshot = vim.vm.Snapshot(None, None)
obj_ref.snapshot.currentSnapshot = vim.vm.Snapshot(None, None)
clone_spec = vmware.handle_snapshot(
vim.vm.ConfigSpec(),
obj_ref,
vim.vm.RelocateSpec(),
False,
{"snapshot": {"disk_move_type": clone_type}},
)
self.assertEqual(clone_spec.location.diskMoveType, clone_type)
obj_ref2 = MagicMock()
obj_ref2.snapshot = vim.vm.Snapshot(None, None)
obj_ref2.snapshot.currentSnapshot = vim.vm.Snapshot(None, None)
clone_spec2 = vmware.handle_snapshot(
vim.vm.ConfigSpec(),
obj_ref2,
vim.vm.RelocateSpec(),
True,
{"snapshot": {"disk_move_type": clone_type}},
)
self.assertEqual(clone_spec2.location.diskMoveType, clone_type)
| 36.520384 | 107 | 0.612713 | 4,899 | 45,687 | 5.501123 | 0.07226 | 0.025714 | 0.044378 | 0.045195 | 0.852171 | 0.822078 | 0.799221 | 0.750909 | 0.730167 | 0.693655 | 0 | 0.003342 | 0.305864 | 45,687 | 1,250 | 108 | 36.5496 | 0.846467 | 0.259702 | 0 | 0.482709 | 0 | 0 | 0.126503 | 0.030037 | 0 | 0 | 0 | 0 | 0.161383 | 1 | 0.145533 | false | 0.011527 | 0.01585 | 0.001441 | 0.167147 | 0.001441 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
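`_test_clone_type` above builds `obj_ref` as a `MagicMock` and assigns nested attributes on it. A small stdlib sketch of why that works: `MagicMock` materializes child mocks on first attribute access (the names below mirror the test but the snippet is purely illustrative):

```python
from unittest.mock import MagicMock

# MagicMock creates child mocks lazily, so nested fields such as
# obj_ref.snapshot.currentSnapshot can be assigned or read without
# defining any class up front.
obj_ref = MagicMock()
obj_ref.snapshot.currentSnapshot = "snap-1"
assert obj_ref.snapshot.currentSnapshot == "snap-1"
# An attribute that was never assigned is itself a MagicMock:
assert isinstance(obj_ref.config, MagicMock)
```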
e402fd99e2fcbeacd372469c004182d0993d220c | 1,505 | py | Python | Assembler/tests/test_assemble_data_strings.py | Laegluin/mikrorechner | 7e5e878072c941e422889465c43dea838b83e5fd | [
"MIT"
] | 1 | 2019-01-28T01:53:20.000Z | 2019-01-28T01:53:20.000Z | Assembler/tests/test_assemble_data_strings.py | Laegluin/mikrorechner | 7e5e878072c941e422889465c43dea838b83e5fd | [
"MIT"
] | null | null | null | Assembler/tests/test_assemble_data_strings.py | Laegluin/mikrorechner | 7e5e878072c941e422889465c43dea838b83e5fd | [
"MIT"
] | null | null | null | from tests import test
import datastrings as datastr
# because of labels we only need to test for correct datastring form
def test_get_datastring_indexes():
test.assertEquals(datastr.get_datastring_indexes(['R4 = R5 + R6']), [])
test.assertEquals(datastr.get_datastring_indexes(['R4 = R5 + R6',
'0xFF']), [1])
test.assertEquals(datastr.get_datastring_indexes(['R4 = R5 + R6',
'0b101011']), [1])
test.assertEquals(datastr.get_datastring_indexes(['R4 = R5 + R6',
'124']), [1])
test.assertEquals(datastr.get_datastring_indexes(['R4 = R5 + R6',
'-124']), [1])
test.assertEquals(datastr.get_datastring_indexes(['0xFf',
'-124']), [0,1])
test.assertEquals(datastr.get_datastring_indexes(['-0xFf',
'-124']), [1])
# def test_datastring_to_binary_string():
# test.assertEquals(asm.datastring_to_binary_string('0xFFFFFF'),'1'*(3*8))
# test.assertEquals(asm.datastring_to_binary_string('0x0'), '0' * 8)
# test.assertEquals(asm.datastring_to_binary_string('0b111'), '0'*5+'1'*3)
# test.assertEquals(asm.datastring_to_binary_string('255'), '1' * 8)
# test.assertEquals(asm.datastring_to_binary_string('0b11111'), '0'*3+'1'*5)
def test_all():
test_get_datastring_indexes() | 53.75 | 80 | 0.568106 | 162 | 1,505 | 5.030864 | 0.259259 | 0.235583 | 0.220859 | 0.223313 | 0.704294 | 0.704294 | 0.704294 | 0.598773 | 0.43681 | 0.245399 | 0 | 0.070225 | 0.290365 | 1,505 | 28 | 81 | 53.75 | 0.692884 | 0.315615 | 0 | 0.333333 | 0 | 0 | 0.09375 | 0 | 0 | 0 | 0.011719 | 0 | 0.388889 | 1 | 0.111111 | true | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
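Read together, the assertions above characterize `get_datastring_indexes`: hex (`0x…`), binary (`0b…`), and optionally negative decimal literals count as data strings, while a negative hex such as `-0xFf` does not. A hypothetical implementation consistent with those tests (not the actual `datastrings` module) might look like:

```python
import re

# Accepts 0x<hex>, 0b<binary>, or an optionally negative decimal literal;
# a leading "-" is only allowed on decimals, so "-0xFf" is rejected.
_DATASTRING_RE = re.compile(r'^(0x[0-9a-fA-F]+|0b[01]+|-?\d+)$')

def get_datastring_indexes(lines):
    """Return the indexes of lines that are plain data strings."""
    return [i for i, line in enumerate(lines)
            if _DATASTRING_RE.match(line.strip())]
```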
7c2b6fd1b12155a34ce7ae75a558cd8d5c6b55d8 | 28 | py | Python | Session/__init__.py | howiemac/evoke4 | 5d7af36c9fb23d94766d54c9c63436343959d3a8 | [
"BSD-3-Clause"
] | null | null | null | Session/__init__.py | howiemac/evoke4 | 5d7af36c9fb23d94766d54c9c63436343959d3a8 | [
"BSD-3-Clause"
] | null | null | null | Session/__init__.py | howiemac/evoke4 | 5d7af36c9fb23d94766d54c9c63436343959d3a8 | [
"BSD-3-Clause"
] | null | null | null | from Session import Session
| 14 | 27 | 0.857143 | 4 | 28 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7c3b7d0b41e93fb88f68390d81486e2338b71f57 | 1,184 | py | Python | code/game/classes/elements/Element.py | Maaack/SixX | ed92ddd3bc7d0d9612bdff36df6a208ae9a24fdd | [
"Apache-2.0"
] | 1 | 2017-10-08T06:38:32.000Z | 2017-10-08T06:38:32.000Z | code/game/classes/elements/Element.py | Maaack/SixX | ed92ddd3bc7d0d9612bdff36df6a208ae9a24fdd | [
"Apache-2.0"
] | null | null | null | code/game/classes/elements/Element.py | Maaack/SixX | ed92ddd3bc7d0d9612bdff36df6a208ae9a24fdd | [
"Apache-2.0"
] | null | null | null | from game.libs import make_hash
class Element(object):
_Game = None
BasicObject = None
def __init__(self):
self._id = make_hash()
def get_id(self):
return self._id
id = property(get_id)
def get_physical_object(self):
return self.BasicObject.body
def get_movable_object(self):
return self.BasicObject.body
def get_angle(self):
return self.BasicObject.body.angle
def get_position(self):
return self.BasicObject.body.position
def get_points(self):
return self.BasicObject.get_points()
def is_hovering(self, position):
return self.BasicObject.shape.point_query(position)
def is_selected(self, position):
return self.BasicObject.shape.point_query(position)
def is_deselected(self, position):
return self.BasicObject.shape.point_query(position)
def destroy(self):
self.destroy_Basics()
self._Game.drop_Object(self)
def destroy_Basics(self):
self.destroy_Basic()
def destroy_Basic(self):
if hasattr(self.BasicObject, 'destroy'):
self.BasicObject.destroy()
self.BasicObject = None | 23.68 | 59 | 0.66723 | 145 | 1,184 | 5.234483 | 0.255172 | 0.217391 | 0.221344 | 0.16469 | 0.500659 | 0.346509 | 0.346509 | 0.346509 | 0.238472 | 0.238472 | 0 | 0 | 0.241554 | 1,184 | 50 | 60 | 23.68 | 0.845212 | 0 | 0 | 0.147059 | 0 | 0 | 0.005907 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.382353 | false | 0 | 0.029412 | 0.264706 | 0.794118 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
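`Element` publishes `id = property(get_id)`, the pre-decorator way to expose a read-only computed attribute. A self-contained sketch of the same pattern (the `Thing` class and the fixed value 42 are purely illustrative stand-ins for `Element` and `make_hash()`):

```python
class Thing:
    """Tiny stand-in class; Element uses the same property pattern."""
    def __init__(self):
        self._id = 42  # stand-in for make_hash()
    def get_id(self):
        return self._id
    id = property(get_id)  # read-only attribute backed by get_id

t = Thing()
assert t.id == 42
# Without a setter, assignment through the property raises AttributeError:
try:
    t.id = 7
    writable = True
except AttributeError:
    writable = False
assert not writable
```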
7c5f172b97b552807c3a08c8325dbf556fae6da8 | 49 | py | Python | indice_pollution/regions/Provence-Alpes-Côte d'Azur.py | betagouv/indice_pollution | 112f262397686110475ed360101b311e76d5d914 | [
"MIT"
] | 2 | 2020-09-02T20:17:22.000Z | 2022-03-04T21:06:28.000Z | indice_pollution/regions/Provence-Alpes-Côte d'Azur.py | betagouv/indice_pollution | 112f262397686110475ed360101b311e76d5d914 | [
"MIT"
] | 34 | 2020-08-13T11:47:13.000Z | 2022-03-31T08:05:37.000Z | indice_pollution/regions/Provence-Alpes-Côte d'Azur.py | betagouv/indice_pollution | 112f262397686110475ed360101b311e76d5d914 | [
"MIT"
] | null | null | null | from .Sud import Forecast, Episode, Service #noqa | 49 | 49 | 0.795918 | 7 | 49 | 5.571429 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122449 | 49 | 1 | 49 | 49 | 0.906977 | 0.081633 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7cffe9e253b085cc9738084837d8a5a1e65d96d7 | 100 | py | Python | terrascript/template/r.py | hugovk/python-terrascript | 08fe185904a70246822f5cfbdc9e64e9769ec494 | [
"BSD-2-Clause"
] | 4 | 2022-02-07T21:08:14.000Z | 2022-03-03T04:41:28.000Z | terrascript/template/r.py | hugovk/python-terrascript | 08fe185904a70246822f5cfbdc9e64e9769ec494 | [
"BSD-2-Clause"
] | null | null | null | terrascript/template/r.py | hugovk/python-terrascript | 08fe185904a70246822f5cfbdc9e64e9769ec494 | [
"BSD-2-Clause"
] | 2 | 2022-02-06T01:49:42.000Z | 2022-02-08T14:15:00.000Z | # terrascript/template/r.py
import terrascript
class template_dir(terrascript.Resource):
pass
| 14.285714 | 41 | 0.79 | 12 | 100 | 6.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13 | 100 | 6 | 42 | 16.666667 | 0.896552 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
6b0ed4ef5ef6aa2a519c74798fca110492636aae | 20,733 | py | Python | tests/pytests/unit/states/test_boto_cloudwatch_event.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 9,425 | 2015-01-01T05:59:24.000Z | 2022-03-31T20:44:05.000Z | tests/pytests/unit/states/test_boto_cloudwatch_event.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 33,507 | 2015-01-01T00:19:56.000Z | 2022-03-31T23:48:20.000Z | tests/pytests/unit/states/test_boto_cloudwatch_event.py | ifraixedes/saltstack-salt | b54becb8b43cc9b7c00b2c0bc637ac534dc62896 | [
"Apache-2.0"
] | 5,810 | 2015-01-01T19:11:45.000Z | 2022-03-31T02:37:20.000Z | import logging
import random
import string
import pytest
import salt.config
import salt.loader
import salt.states.boto_cloudwatch_event as boto_cloudwatch_event
from tests.support.mock import MagicMock, patch
boto = pytest.importorskip("boto")
boto3 = pytest.importorskip("boto3", "1.2.1")
botocore = pytest.importorskip("botocore", "1.4.41")
log = logging.getLogger(__name__)
class GlobalConfig:
region = "us-east-1"
access_key = "GKTADJGHEIQSXMKKRBJ08H"
secret_key = "askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs"
conn_parameters = {
"region": region,
"key": access_key,
"keyid": secret_key,
"profile": {},
}
error_message = (
"An error occurred (101) when calling the {0} operation: Test-defined error"
)
error_content = {"Error": {"Code": 101, "Message": "Test-defined error"}}
rule_name = "test_thing_type"
rule_desc = "test_thing_type_desc"
rule_sched = "rate(20 min)"
rule_arn = "arn:::::rule/arn"
rule_ret = dict(
Arn=rule_arn,
Description=rule_desc,
EventPattern=None,
Name=rule_name,
RoleArn=None,
ScheduleExpression=rule_sched,
State="ENABLED",
)
@pytest.fixture
def global_config():
params = GlobalConfig()
return params
@pytest.fixture
def configure_loader_modules():
opts = salt.config.DEFAULT_MINION_OPTS.copy()
opts["grains"] = salt.loader.grains(opts)
ctx = {}
utils = salt.loader.utils(
opts,
whitelist=["boto3", "args", "systemd", "path", "platform"],
context=ctx,
)
serializers = salt.loader.serializers(opts)
funcs = salt.loader.minion_mods(
opts, context=ctx, utils=utils, whitelist=["boto_cloudwatch_event"]
)
salt_states = salt.loader.states(
opts=opts,
functions=funcs,
utils=utils,
whitelist=["boto_cloudwatch_event"],
serializers=serializers,
)
return {
boto_cloudwatch_event: {
"__opts__": opts,
"__salt__": funcs,
"__utils__": utils,
"__states__": salt_states,
"__serializers__": serializers,
}
}
def test_present_when_failing_to_describe_rule(global_config):
"""
Tests exceptions when checking rule existence
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "error on list rules"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "error on list rules" in result.get("comment", {})
def test_present_when_failing_to_create_a_new_rule(global_config):
"""
Tests present on a rule name that doesn't exist and
an error is thrown on creation.
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "put_rule"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "put_rule" in result.get("comment", "")
def test_present_when_failing_to_describe_the_new_rule(global_config):
"""
Tests present on a rule name that doesn't exist and
an error is thrown when describing the new rule.
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "describe_rule"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "describe_rule" in result.get("comment", "")
def test_present_when_failing_to_create_a_new_rules_targets(global_config):
"""
Tests present on a rule name that doesn't exist and
an error is thrown when adding targets.
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.put_targets.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "put_targets"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "put_targets" in result.get("comment", "")
def test_present_when_rule_does_not_exist(global_config):
"""
Tests the successful case of creating a new rule, and updating its
targets
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.put_targets.return_value = {"FailedEntryCount": 0}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is True
def test_present_when_failing_to_update_an_existing_rule(global_config):
"""
Tests present on an existing rule where an error is thrown on updating the rule properties.
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.describe_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "describe_rule"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "describe_rule" in result.get("comment", "")
def test_present_when_failing_to_get_targets(global_config):
"""
Tests present on an existing rule where put_rule succeeded, but an error
is thrown on getting targets
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.list_targets_by_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "list_targets"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "list_targets" in result.get("comment", "")
def test_present_when_failing_to_put_targets(global_config):
"""
Tests present on an existing rule where put_rule succeeded, but an error
is thrown on putting targets
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.list_targets.return_value = {"Targets": []}
conn.put_targets.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "put_targets"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is False
assert "put_targets" in result.get("comment", "")
def test_present_when_putting_targets(global_config):
"""
Tests present on an existing rule where put_rule succeeded, and targets
must be added
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.list_targets.return_value = {"Targets": []}
conn.put_targets.return_value = {"FailedEntryCount": 0}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is True
def test_present_when_removing_targets(global_config):
"""
Tests present on an existing rule where put_rule succeeded, and targets
must be removed
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
conn.put_rule.return_value = global_config.rule_ret
conn.describe_rule.return_value = global_config.rule_ret
conn.list_targets.return_value = {"Targets": [{"Id": "target1"}, {"Id": "target2"}]}
conn.put_targets.return_value = {"FailedEntryCount": 0}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.present"](
name="test present",
Name=global_config.rule_name,
Description=global_config.rule_desc,
ScheduleExpression=global_config.rule_sched,
Targets=[{"Id": "target1", "Arn": "arn::::::*"}],
**global_config.conn_parameters
)
assert result.get("result") is True
def test_absent_when_failing_to_describe_rule(global_config):
"""
Tests exceptions when checking rule existence
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "error on list rules"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test present",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is False
assert "error on list rules" in result.get("comment", {})
def test_absent_when_rule_does_not_exist(global_config):
"""
Tests absent on a non-existing rule
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": []}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is True
assert result["changes"] == {}
def test_absent_when_failing_to_list_targets(global_config):
"""
Tests absent on a rule when the list_targets call fails
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.list_targets_by_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "list_targets"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is False
assert "list_targets" in result.get("comment", "")
def test_absent_when_failing_to_remove_targets_exception(global_config):
"""
Tests absent on a rule when the remove_targets call fails
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.list_targets_by_rule.return_value = {"Targets": [{"Id": "target1"}]}
conn.remove_targets.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "remove_targets"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is False
assert "remove_targets" in result.get("comment", "")
def test_absent_when_failing_to_remove_targets_nonexception(global_config):
"""
Tests absent on a rule when the remove_targets call fails
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.list_targets_by_rule.return_value = {"Targets": [{"Id": "target1"}]}
conn.remove_targets.return_value = {"FailedEntryCount": 1}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is False
def test_absent_when_failing_to_delete_rule(global_config):
"""
Tests absent on a rule when the delete_rule call fails
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.list_targets_by_rule.return_value = {"Targets": [{"Id": "target1"}]}
conn.remove_targets.return_value = {"FailedEntryCount": 0}
conn.delete_rule.side_effect = botocore.exceptions.ClientError(
global_config.error_content, "delete_rule"
)
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is False
assert "delete_rule" in result.get("comment", "")
def test_absent(global_config):
"""
Tests absent on a rule
"""
global_config.conn_parameters["key"] = "".join(
random.choice(string.ascii_lowercase + string.digits) for _ in range(50)
)
patcher = patch("boto3.session.Session")
mock_session = patcher.start()
session_instance = mock_session.return_value
conn = MagicMock()
session_instance.client.return_value = conn
conn.list_rules.return_value = {"Rules": [global_config.rule_ret]}
conn.list_targets_by_rule.return_value = {"Targets": [{"Id": "target1"}]}
conn.remove_targets.return_value = {"FailedEntryCount": 0}
result = boto_cloudwatch_event.__states__["boto_cloudwatch_event.absent"](
name="test absent",
Name=global_config.rule_name,
**global_config.conn_parameters
)
assert result.get("result") is True
| 37.903108 | 95 | 0.694931 | 2,497 | 20,733 | 5.448939 | 0.073288 | 0.105836 | 0.067029 | 0.064971 | 0.886888 | 0.88483 | 0.872777 | 0.866603 | 0.860503 | 0.851316 | 0 | 0.005655 | 0.189746 | 20,733 | 546 | 96 | 37.972527 | 0.804274 | 0.057686 | 0 | 0.673423 | 0 | 0 | 0.128994 | 0.049249 | 0 | 0 | 0 | 0 | 0.065315 | 1 | 0.042793 | false | 0 | 0.024775 | 0 | 0.099099 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
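Nearly every test in the module above drives an error path by assigning an exception to a mocked client method's `side_effect`. A stdlib-only sketch of that mechanic, with `RuntimeError` standing in for `botocore.exceptions.ClientError`:

```python
from unittest.mock import MagicMock

# Assigning an exception instance (or class) to side_effect makes every
# call raise it, which is how the tests simulate AWS API failures
# without touching the network.
conn = MagicMock()
conn.list_rules.side_effect = RuntimeError("error on list rules")
try:
    conn.list_rules()
    raised = False
except RuntimeError as exc:
    raised = True
    message = str(exc)
assert raised
assert "error on list rules" in message
```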
6b1f03489bb02108e6d2ccfcdaf6bb72e5664d8c | 3,287 | py | Python | tests/integration/transformers/pii/test_anonymizer.py | HDI-Project/RDT | f1648d10346f4e431957aca65e25a00879a5d419 | [
"MIT"
] | 8 | 2018-06-20T22:59:07.000Z | 2019-02-19T08:48:53.000Z | tests/integration/transformers/pii/test_anonymizer.py | HDI-Project/RDT | f1648d10346f4e431957aca65e25a00879a5d419 | [
"MIT"
] | 63 | 2018-06-20T22:08:37.000Z | 2019-12-16T18:57:08.000Z | tests/integration/transformers/pii/test_anonymizer.py | HDI-Project/RDT | f1648d10346f4e431957aca65e25a00879a5d419 | [
"MIT"
] | 5 | 2018-11-06T16:45:48.000Z | 2020-01-02T13:41:07.000Z |
import numpy as np
import pandas as pd
from rdt.transformers.pii import AnonymizedFaker
def test_anonymizedfaker():
"""End to end test with the default settings of the ``AnonymizedFaker``."""
data = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', 'b', 'c', 'd', 'e']
})
instance = AnonymizedFaker()
transformed = instance.fit_transform(data, 'username')
reverse_transform = instance.reverse_transform(transformed)
expected_transformed = pd.DataFrame({
'id': [1, 2, 3, 4, 5]
})
pd.testing.assert_frame_equal(transformed, expected_transformed)
assert len(reverse_transform['username']) == 5
def test_anonymizedfaker_custom_provider():
"""End to end test with a custom provider and function for the ``AnonymizedFaker``."""
data = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', 'b', 'c', 'd', 'e'],
'cc': [
'2276346007210438',
'4149498289355',
'213144860944676',
'4514775286178',
'213133122335401'
]
})
instance = AnonymizedFaker('credit_card', 'credit_card_number')
transformed = instance.fit_transform(data, 'cc')
reverse_transform = instance.reverse_transform(transformed)
expected_transformed = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', 'b', 'c', 'd', 'e'],
})
pd.testing.assert_frame_equal(transformed, expected_transformed)
assert len(reverse_transform['cc']) == 5
def test_anonymizedfaker_with_nans():
"""End to end test with the default settings of the ``AnonymizedFaker`` with ``nan`` values."""
data = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', np.nan, 'c', 'd', 'e']
})
instance = AnonymizedFaker(model_missing_values=True)
transformed = instance.fit_transform(data, 'username')
reverse_transform = instance.reverse_transform(transformed)
expected_transformed = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username.is_null': [0.0, 1.0, 0.0, 0.0, 0.0]
})
pd.testing.assert_frame_equal(transformed, expected_transformed)
assert len(reverse_transform['username']) == 5
assert reverse_transform['username'].isna().sum() == 1
def test_anonymizedfaker_custom_provider_with_nans():
"""End to end test with a custom provider for the ``AnonymizedFaker`` with `` nans``."""
data = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', 'b', 'c', 'd', 'e'],
'cc': [
'2276346007210438',
np.nan,
'213144860944676',
'4514775286178',
'213133122335401'
]
})
instance = AnonymizedFaker(
'credit_card',
'credit_card_number',
model_missing_values=True
)
transformed = instance.fit_transform(data, 'cc')
reverse_transform = instance.reverse_transform(transformed)
expected_transformed = pd.DataFrame({
'id': [1, 2, 3, 4, 5],
'username': ['a', 'b', 'c', 'd', 'e'],
'cc.is_null': [0.0, 1.0, 0.0, 0.0, 0.0]
})
pd.testing.assert_frame_equal(transformed, expected_transformed)
assert len(reverse_transform['cc']) == 5
assert reverse_transform['cc'].isna().sum() == 1
| 31.009434 | 99 | 0.600548 | 381 | 3,287 | 5.028871 | 0.183727 | 0.11691 | 0.015658 | 0.058455 | 0.860647 | 0.797495 | 0.797495 | 0.789144 | 0.756785 | 0.73382 | 0 | 0.078926 | 0.240645 | 3,287 | 105 | 100 | 31.304762 | 0.688702 | 0.098266 | 0 | 0.670886 | 0 | 0 | 0.123046 | 0 | 0 | 0 | 0 | 0 | 0.126582 | 1 | 0.050633 | false | 0 | 0.037975 | 0 | 0.088608 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
863b76b4534decaeb9825aa7e0d5770527c9038a | 1,632 | py | Python | lib/tests.py | pchaigno/grr | 69c81624c281216a45c4bb88a9d4e4b0613a3556 | [
"Apache-2.0"
] | 1 | 2015-01-07T05:29:57.000Z | 2015-01-07T05:29:57.000Z | lib/tests.py | pchaigno/grr | 69c81624c281216a45c4bb88a9d4e4b0613a3556 | [
"Apache-2.0"
] | null | null | null | lib/tests.py | pchaigno/grr | 69c81624c281216a45c4bb88a9d4e4b0613a3556 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
"""GRR library tests.
This module loads and registers all the GRR library tests.
"""
# These need to register plugins so, pylint: disable=unused-import
from grr.lib import access_control_test
from grr.lib import aff4_test
from grr.lib import artifact_test
from grr.lib import artifact_utils_test
from grr.lib import build_test
from grr.lib import client_index_test
from grr.lib import communicator_test
from grr.lib import config_lib_test
from grr.lib import config_validation_test
from grr.lib import console_utils_test
from grr.lib import data_store_test
from grr.lib import email_alerts_test
from grr.lib import export_test
from grr.lib import export_utils_test
from grr.lib import flow_test
from grr.lib import flow_utils_test
from grr.lib import front_end_test
from grr.lib import fuse_mount_test
from grr.lib import hunt_test
from grr.lib import ipv6_utils_test
from grr.lib import lexer_test
from grr.lib import objectfilter_test
from grr.lib import parsers_test
from grr.lib import queue_manager_test
from grr.lib import rekall_profile_server_test
from grr.lib import stats_test
from grr.lib import test_lib
from grr.lib import threadpool_test
from grr.lib import type_info_test
from grr.lib import utils_test
from grr.lib.aff4_objects import tests
from grr.lib.builders import tests
from grr.lib.checks import tests
from grr.lib.data_stores import tests
from grr.lib.flows import tests
from grr.lib.hunts import tests
from grr.lib.local import tests
from grr.lib.output_plugins import tests
from grr.lib.rdfvalues import tests
from grr.tools import entry_point_test
# pylint: enable=unused-import
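Each imported module registers its tests as an import side effect. The mechanism can be sketched with a toy registry; the decorator and registry names here are illustrative, not GRR's actual API:

```python
# Modules add themselves to a shared registry when imported; a loader then
# only needs to import them, which is all tests.py above does.
REGISTRY = {}

def register(name):
    def decorator(cls):
        REGISTRY[name] = cls
        return cls
    return decorator

@register("stats_test")
class StatsTest:
    pass

registered = sorted(REGISTRY)  # populated purely by the decoration side effect
```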
| 32 | 66 | 0.838848 | 292 | 1,632 | 4.510274 | 0.260274 | 0.212604 | 0.296128 | 0.364465 | 0.622627 | 0.237661 | 0 | 0 | 0 | 0 | 0 | 0.002083 | 0.117647 | 1,632 | 50 | 67 | 32.64 | 0.9125 | 0.11826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
865872cfa42f70b19d0b2baf43957da73b773885 | 278 | py | Python | kokki/cookbooks/monit/libraries/resources.py | samuel/kokki | da98da55e0bba8db5bda993666a43c6fdc4cacdb | [
"BSD-3-Clause"
] | 11 | 2015-01-14T00:43:26.000Z | 2020-12-29T06:12:51.000Z | kokki/cookbooks/monit/libraries/resources.py | samuel/kokki | da98da55e0bba8db5bda993666a43c6fdc4cacdb | [
"BSD-3-Clause"
] | null | null | null | kokki/cookbooks/monit/libraries/resources.py | samuel/kokki | da98da55e0bba8db5bda993666a43c6fdc4cacdb | [
"BSD-3-Clause"
] | 3 | 2015-01-14T01:05:56.000Z | 2019-01-26T05:09:37.000Z |
from kokki import Service, BooleanArgument
class MonitService(Service):
provider = "*monit.MonitServiceProvider"
supports_restart = BooleanArgument(default=True)
supports_status = BooleanArgument(default=True)
supports_reload = BooleanArgument(default=False)
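`MonitService` declares its capabilities with class-level `BooleanArgument` objects. A self-contained sketch of how such a declarative pattern can be collected — a stand-in for kokki's base classes, not its real implementation:

```python
class BooleanArgument:
    def __init__(self, default=False):
        self.default = default

class Service:
    @classmethod
    def arguments(cls):
        # Collect every class-level BooleanArgument declaration.
        return {name: val for name, val in vars(cls).items()
                if isinstance(val, BooleanArgument)}

class MonitService(Service):
    supports_restart = BooleanArgument(default=True)
    supports_reload = BooleanArgument(default=False)

args = MonitService.arguments()
```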
| 27.8 | 52 | 0.78777 | 26 | 278 | 8.307692 | 0.653846 | 0.305556 | 0.240741 | 0.314815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136691 | 278 | 9 | 53 | 30.888889 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.097473 | 0.097473 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
86e6c6f8e5adb7c54579c25856f05b763bf2afc0 | 115 | py | Python | setup.py | rtloftin/interactive_agents | f7d57d1421000b2e8a79a9dff179b8fe7c8d3fc0 | [
"MIT"
] | null | null | null | setup.py | rtloftin/interactive_agents | f7d57d1421000b2e8a79a9dff179b8fe7c8d3fc0 | [
"MIT"
] | 5 | 2022-03-11T07:58:53.000Z | 2022-03-17T12:57:26.000Z | setup.py | rtloftin/interactive_agents | f7d57d1421000b2e8a79a9dff179b8fe7c8d3fc0 | [
"MIT"
] | 1 | 2022-03-11T19:28:53.000Z | 2022-03-11T19:28:53.000Z | # TODO: Allow module to be installed for access from external scripts - get dependencies - read more about setup.py | 115 | 115 | 0.791304 | 18 | 115 | 5.055556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165217 | 115 | 1 | 115 | 115 | 0.947917 | 0.982609 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4c87450c76a63312e7c77fd9f3470db2d0c3d7a | 9,140 | py | Python | tests/test_router_oauth.py | EndevelCZ/fastapi-users | 58583b4be41104ed7ca03db6f361223f7fa9b2bb | [
"MIT"
] | null | null | null | tests/test_router_oauth.py | EndevelCZ/fastapi-users | 58583b4be41104ed7ca03db6f361223f7fa9b2bb | [
"MIT"
] | null | null | null | tests/test_router_oauth.py | EndevelCZ/fastapi-users | 58583b4be41104ed7ca03db6f361223f7fa9b2bb | [
"MIT"
] | null | null | null | from typing import Any, AsyncGenerator, Dict, cast
import httpx
import pytest
from fastapi import FastAPI, status
from httpx_oauth.oauth2 import BaseOAuth2
from fastapi_users.authentication import Authenticator
from fastapi_users.router.oauth import generate_state_token, get_oauth_router
from tests.conftest import (
AsyncMethodMocker,
MockAuthentication,
UserDB,
UserManagerMock,
)
@pytest.fixture
def get_test_app_client(
secret,
get_user_manager_oauth,
mock_authentication,
oauth_client,
get_test_client,
):
async def _get_test_app_client(
redirect_url: str = None,
) -> AsyncGenerator[httpx.AsyncClient, None]:
mock_authentication_bis = MockAuthentication(name="mock-bis")
authenticator = Authenticator(
[mock_authentication, mock_authentication_bis], get_user_manager_oauth
)
oauth_router = get_oauth_router(
oauth_client,
get_user_manager_oauth,
authenticator,
secret,
redirect_url,
)
app = FastAPI()
app.include_router(oauth_router)
async for client in get_test_client(app):
yield client
return _get_test_app_client
@pytest.fixture
@pytest.mark.asyncio
async def test_app_client(get_test_app_client):
async for client in get_test_app_client():
yield client
@pytest.fixture
@pytest.mark.asyncio
async def test_app_client_redirect_url(get_test_app_client):
async for client in get_test_app_client("http://www.tintagel.bt/callback"):
yield client
@pytest.mark.router
@pytest.mark.oauth
@pytest.mark.asyncio
class TestAuthorize:
async def test_missing_authentication_backend(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
):
async_method_mocker(
oauth_client, "get_authorization_url", return_value="AUTHORIZATION_URL"
)
response = await test_app_client.get(
"/authorize",
params={"scopes": ["scope1", "scope2"]},
)
assert response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
async def test_wrong_authentication_backend(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
):
async_method_mocker(
oauth_client, "get_authorization_url", return_value="AUTHORIZATION_URL"
)
response = await test_app_client.get(
"/authorize",
params={
"authentication_backend": "foo",
"scopes": ["scope1", "scope2"],
},
)
assert response.status_code == status.HTTP_422_UNPROCESSABLE_ENTITY
async def test_success(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
):
get_authorization_url_mock = async_method_mocker(
oauth_client, "get_authorization_url", return_value="AUTHORIZATION_URL"
)
response = await test_app_client.get(
"/authorize",
params={
"authentication_backend": "mock",
"scopes": ["scope1", "scope2"],
},
)
assert response.status_code == status.HTTP_200_OK
get_authorization_url_mock.assert_called_once()
data = response.json()
assert "authorization_url" in data
async def test_with_redirect_url(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client_redirect_url: httpx.AsyncClient,
oauth_client: BaseOAuth2,
):
get_authorization_url_mock = async_method_mocker(
oauth_client, "get_authorization_url", return_value="AUTHORIZATION_URL"
)
response = await test_app_client_redirect_url.get(
"/authorize",
params={
"authentication_backend": "mock",
"scopes": ["scope1", "scope2"],
},
)
assert response.status_code == status.HTTP_200_OK
get_authorization_url_mock.assert_called_once()
data = response.json()
assert "authorization_url" in data
@pytest.mark.router
@pytest.mark.oauth
@pytest.mark.asyncio
@pytest.mark.parametrize(
"access_token",
[
({"access_token": "TOKEN", "expires_at": 1579179542}),
({"access_token": "TOKEN"}),
],
)
class TestCallback:
async def test_invalid_state(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
user_oauth: UserDB,
access_token: str,
):
async_method_mocker(oauth_client, "get_access_token", return_value=access_token)
get_id_email_mock = async_method_mocker(
oauth_client, "get_id_email", return_value=("user_oauth1", user_oauth.email)
)
response = await test_app_client.get(
"/callback",
params={"code": "CODE", "state": "STATE"},
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
get_id_email_mock.assert_called_once_with("TOKEN")
async def test_active_user(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
user_oauth: UserDB,
user_manager_oauth: UserManagerMock,
access_token: str,
):
state_jwt = generate_state_token({"authentication_backend": "mock"}, "SECRET")
async_method_mocker(oauth_client, "get_access_token", return_value=access_token)
async_method_mocker(
oauth_client, "get_id_email", return_value=("user_oauth1", user_oauth.email)
)
async_method_mocker(
user_manager_oauth, "oauth_callback", return_value=user_oauth
)
response = await test_app_client.get(
"/callback",
params={"code": "CODE", "state": state_jwt},
)
assert response.status_code == status.HTTP_200_OK
data = cast(Dict[str, Any], response.json())
assert data["token"] == str(user_oauth.id)
async def test_inactive_user(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client: httpx.AsyncClient,
oauth_client: BaseOAuth2,
inactive_user_oauth: UserDB,
user_manager_oauth: UserManagerMock,
access_token: str,
):
state_jwt = generate_state_token({"authentication_backend": "mock"}, "SECRET")
async_method_mocker(oauth_client, "get_access_token", return_value=access_token)
async_method_mocker(
oauth_client,
"get_id_email",
return_value=("user_oauth1", inactive_user_oauth.email),
)
async_method_mocker(
user_manager_oauth, "oauth_callback", return_value=inactive_user_oauth
)
response = await test_app_client.get(
"/callback",
params={"code": "CODE", "state": state_jwt},
)
assert response.status_code == status.HTTP_400_BAD_REQUEST
async def test_redirect_url_router(
self,
async_method_mocker: AsyncMethodMocker,
test_app_client_redirect_url: httpx.AsyncClient,
oauth_client: BaseOAuth2,
user_oauth: UserDB,
user_manager_oauth: UserManagerMock,
access_token: str,
):
state_jwt = generate_state_token({"authentication_backend": "mock"}, "SECRET")
get_access_token_mock = async_method_mocker(
oauth_client, "get_access_token", return_value=access_token
)
async_method_mocker(
oauth_client, "get_id_email", return_value=("user_oauth1", user_oauth.email)
)
async_method_mocker(
user_manager_oauth, "oauth_callback", return_value=user_oauth
)
response = await test_app_client_redirect_url.get(
"/callback",
params={"code": "CODE", "state": state_jwt},
)
assert response.status_code == status.HTTP_200_OK
get_access_token_mock.assert_called_once_with(
"CODE", "http://www.tintagel.bt/callback"
)
data = cast(Dict[str, Any], response.json())
assert data["token"] == str(user_oauth.id)
@pytest.mark.asyncio
async def test_oauth_authorize_namespace(
secret,
get_user_manager_oauth,
mock_authentication,
oauth_client,
get_test_client,
redirect_url: str = None,
):
mock_authentication_bis = MockAuthentication(name="mock-bis")
authenticator = Authenticator(
[mock_authentication, mock_authentication_bis], get_user_manager_oauth
)
app = FastAPI()
app.include_router(
get_oauth_router(
oauth_client,
get_user_manager_oauth,
authenticator,
secret,
redirect_url,
)
)
assert app.url_path_for("oauth:authorize") == "/authorize"
| 30.165017 | 88 | 0.647046 | 978 | 9,140 | 5.648262 | 0.108384 | 0.03168 | 0.058834 | 0.047791 | 0.84323 | 0.80449 | 0.793085 | 0.791637 | 0.787654 | 0.75887 | 0 | 0.008342 | 0.265536 | 9,140 | 302 | 89 | 30.264901 | 0.814539 | 0 | 0 | 0.658824 | 0 | 0 | 0.102845 | 0.023632 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.003922 | false | 0 | 0.031373 | 0 | 0.047059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4cf098997ac54aefc5805c85f63e6e41c7fe602 | 91 | py | Python | examples/import_graph_example_pkg/module_a.py | SMAT-Lab/Scalpel | 1022200043f2d9e8c24256821b863997ab34dd49 | [
"Apache-2.0"
] | 102 | 2021-12-15T09:08:48.000Z | 2022-03-24T15:15:25.000Z | examples/import_graph_example_pkg/module_a.py | StarWatch27/Scalpel | 8853e6e84f318f3cfeda0e03d274748b2fbe30fa | [
"Apache-2.0"
] | 11 | 2021-12-04T11:48:31.000Z | 2022-03-21T09:21:45.000Z | examples/import_graph_example_pkg/module_a.py | StarWatch27/Scalpel | 8853e6e84f318f3cfeda0e03d274748b2fbe30fa | [
"Apache-2.0"
] | 11 | 2021-12-04T11:47:41.000Z | 2022-02-06T09:04:39.000Z | from .module_b import B
from .module_c import C
class A:
def foo(self):
return | 15.166667 | 23 | 0.659341 | 16 | 91 | 3.625 | 0.6875 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.274725 | 91 | 6 | 24 | 15.166667 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0.2 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
d4efe0e92a8237d7093403c20234ebb5fd2b5a28 | 20,640 | py | Python | Code/python/Py/MainWindow.py | guanjilai/FastCAE | 7afd59a3c0a82c1edf8f5d09482e906e878eb807 | [
"BSD-3-Clause"
] | null | null | null | Code/python/Py/MainWindow.py | guanjilai/FastCAE | 7afd59a3c0a82c1edf8f5d09482e906e878eb807 | [
"BSD-3-Clause"
] | null | null | null | Code/python/Py/MainWindow.py | guanjilai/FastCAE | 7afd59a3c0a82c1edf8f5d09482e906e878eb807 | [
"BSD-3-Clause"
] | null | null | null | #-------关联C++库---------------
import ctypes
import platform
from ctypes import *
system = platform.system()
if system == "Windows":
pre = "./"
suff = ".dll"
else:
pre = "./lib"
suff = ".so"
libfile = ctypes.cdll.LoadLibrary
filename = pre+"MainWindow"+suff
mw = libfile(filename)
#---------------------------------
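Every wrapper below converts floats manually with `c_double(...)` at the call site. Declaring `argtypes`/`restype` once on each foreign function lets ctypes do that conversion automatically and type-check calls. Since the FastCAE library itself isn't loadable outside the application, the pattern is shown against libm's `pow` (an assumption about the host C runtime):

```python
import ctypes
import ctypes.util

_path = ctypes.util.find_library("m")
_libm = ctypes.CDLL(_path) if _path else ctypes.CDLL(None)  # fall back to process symbols on POSIX

# Declare the C signature once; plain Python floats are then converted
# automatically and the return value comes back as a Python float.
_libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]
_libm.pow.restype = ctypes.c_double

result = _libm.pow(2.0, 10.0)
```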
#-------Function definitions------------------
def showFastCAE():
mw.showFastCAE()
pass
def undo():
mw.undo()
pass
def redo():
mw.redo()
pass
def clearData():
mw.clearData()
pass
def updateInterface():
mw.updateInterface()
pass
def importMesh(filename,suffix,modelID):
str = bytes(filename,encoding='utf-8')
suf = bytes(suffix,encoding='utf-8')
mw.importMesh(str,suf,modelID)
pass
def exportMesh(filename,suffix,modelID):
str = bytes(filename,encoding='utf-8')
suf = bytes(suffix,encoding='utf-8')
mw.exportMesh(str,suf,modelID)
pass
def importGeometry(filename):
str = bytes(filename,encoding='utf-8')
mw.importGeometry(str)
pass
def exportGeometry(filename):
str = bytes(filename,encoding='utf-8')
mw.exportGeometry(str)
pass
def openProjectFile(filename):
str = bytes(filename,encoding='utf-8')
mw.openProjectFile(str)
pass
def saveProjectFile(filename):
str = bytes(filename,encoding='utf-8')
mw.saveProjectFile(str)
pass
def saveImage(w,h,id,win,file):
wi = bytes(win,encoding='utf-8')
f = bytes(file,encoding='utf-8')
mw.saveImage(w,h,id,wi,f)
pass
def setView(id,win,view):
wi = bytes(win,encoding='utf-8')
vi = bytes(view,encoding='utf-8')
mw.setView(id,wi,vi)
pass
def setViewRandValue(id,win,x1,x2,x3,y1,y2,y3,z1,z2,z3):
wi=bytes(win,encoding='utf-8')
mw.setViewRandValue(id,wi,x1,x2,x3,y1,y2,y3,z1,z2,z3)
pass
def openPost3D():
mw.openPost3D()
pass
def openPost2D():
mw.openPost2D()
pass
def openPreWindow():
mw.openPreWindow()
pass
def solveProject(projectIndex,solverIndex):
mw.solveProject(projectIndex,solverIndex)
pass
def script_openFile(id, type, file):
stype = bytes(type,encoding='utf-8')
sfile = bytes(file,encoding='utf-8')
mw.script_openFile(id, stype, sfile)
pass
def script_applyClicked(id, type):
stype=bytes(type,encoding='utf-8')
mw.script_applyClicked(id, stype)
pass
def createSet(name, type, idstring):
name = bytes(name,encoding='utf-8')
idstring = bytes(idstring,encoding='utf-8')
type = bytes(type,encoding='utf-8')
mw.createSet(name,type,idstring)
pass
def createGeoComponent(name, type, strgIDs, strItemIDs):
name = bytes(name,encoding='utf-8')
type = bytes(type,encoding='utf-8')
strgIDs = bytes(strgIDs,encoding='utf-8')
strItemIDs = bytes(strItemIDs,encoding='utf-8')
mw.createGeoComponent(name, type, strgIDs, strItemIDs)
pass
def createVTKTransform(componentIds, rotate, moveLocation, scale):
componentIds = bytes(componentIds,encoding='utf-8')
rotate = bytes(rotate,encoding='utf-8')
moveLocation = bytes(moveLocation,encoding='utf-8')
scale = bytes(scale,encoding='utf-8')
mw.createVTKTransform(componentIds, rotate, moveLocation, scale)
pass
def findConplanarPorC(seedType, seedId, minAngle, kernalId, setName):
seedType = bytes(seedType, encoding='utf-8')
setName = bytes(setName, encoding='utf-8')
minAngle = c_double(minAngle)
mw.findConplanarPorC(seedType, seedId, minAngle, kernalId, setName)
pass
def script_Properties_Opacity(id, type, obj_id, mOpacity):
type=bytes(type,encoding='utf-8')
    mOpacity=c_double(mOpacity)
mw.script_Properties_Opacity(id, type, obj_id, mOpacity)
pass
def script_Properties_colorColumn(id, type, obj_id, mColorColumnStyle):
type=bytes(type,encoding='utf-8')
mColorColumnStyle=bytes(mColorColumnStyle,encoding='utf-8')
mw.script_Properties_colorColumn(id, type, obj_id, mColorColumnStyle)
pass
def script_Properties_scalarBarTitle(id, type, obj_id, colName, m_title):
type=bytes(type,encoding='utf-8')
colName=bytes(colName,encoding='utf-8')
m_title=bytes(m_title,encoding='utf-8')
mw.script_Properties_scalarBarTitle(id, type, obj_id, colName, m_title)
pass
def script_Properties_scalarBarFontSize(id, type, obj_id, colName, m_fontSize):
type=bytes(type,encoding='utf-8')
colName=bytes(colName,encoding='utf-8')
mw.script_Properties_scalarBarFontSize(id, type, obj_id, colName, m_fontSize)
pass
def script_Properties_scalarBarNumLables(id, type, obj_id, colName, m_numLables):
type=bytes(type,encoding='utf-8')
colName=bytes(colName,encoding='utf-8')
mw.script_Properties_scalarBarNumLables(id, type, obj_id, colName, m_numLables)
pass
def script_Properties_lineWidth(id, type, obj_id, mLineWidth):
type=bytes(type,encoding='utf-8')
mw.script_Properties_lineWidth(id, type, obj_id, mLineWidth)
pass
def script_Properties_pointSize(id, type, obj_id, mPointSize):
type=bytes(type,encoding='utf-8')
mw.script_Properties_pointSize(id, type, obj_id, mPointSize)
pass
def script_Properties_translate(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
mw.script_Properties_translate(id, type, obj_id, x, y, z)
pass
def script_Properties_origin(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
x=c_double(x)
y=c_double(y)
z=c_double(z)
mw.script_Properties_origin(id, type, obj_id, x, y, z)
pass
def script_Properties_scale(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
x=c_double(x)
y=c_double(y)
z=c_double(z)
mw.script_Properties_scale(id, type, obj_id, x, y, z)
pass
def script_Properties_orientation(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
x=c_double(x)
y=c_double(y)
z=c_double(z)
mw.script_Properties_orientation(id, type, obj_id, x, y, z)
pass
def script_Properties_representation(id, type, obj_id, m_enum_representationtype):
type=bytes(type,encoding='utf-8')
mw.script_Properties_representation(id, type, obj_id, m_enum_representationtype)
pass
def script_Properties_specular(id, type, obj_id, mSpecular):
type=bytes(type,encoding='utf-8')
mSpecular=c_double(mSpecular)
mw.script_Properties_specular(id, type, obj_id, mSpecular)
pass
def script_Properties_diffuse(id, type, obj_id, mDiffuse):
type=bytes(type,encoding='utf-8')
mDiffuse=c_double(mDiffuse)
mw.script_Properties_diffuse(id, type, obj_id, mDiffuse)
pass
def script_Properties_ambient(id, type, obj_id, mAmbient):
type=bytes(type,encoding='utf-8')
mAmbient=c_double(mAmbient)
mw.script_Properties_ambient(id, type, obj_id, mAmbient)
pass
def script_Properties_specularPower(id, type, obj_id, mSpecularPower):
type=bytes(type,encoding='utf-8')
mw.script_Properties_specularPower(id, type, obj_id, mSpecularPower)
pass
def script_Properties_specularColor(id, type, obj_id, r, g, b):
type=bytes(type,encoding='utf-8')
mw.script_Properties_specularColor(id, type, obj_id, r, g, b)
pass
def script_Properties_solidColor(id, type, obj_id, r, g, b):
type=bytes(type,encoding='utf-8')
mw.script_Properties_solidColor(id, type, obj_id, r, g, b)
pass
def script_Properties_edgeColor(id, type, obj_id, r, g, b):
type=bytes(type,encoding='utf-8')
mw.script_Properties_edgeColor(id, type, obj_id, r, g, b)
pass
def script_Properties_interpolation(id, type, obj_id, m_enum_interpolationtype):
type=bytes(type,encoding='utf-8')
mw.script_Properties_interpolation(id, type, obj_id, m_enum_interpolationtype)
pass
def script_Properties_Flag_scalarBar(id, type, obj_id, mColorColumnStyle):
type=bytes(type,encoding='utf-8')
mColorColumnStyle=bytes(mColorColumnStyle,encoding='utf-8')
mw.script_Properties_Flag_scalarBar(id, type, obj_id, mColorColumnStyle)
pass
def script_Properties_EnableOpacityMap(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_EnableOpacityMap(id, type, obj_id, val)
pass
def script_Properties_visible(id, type, obj_id, flag_show_actors):
type=bytes(type,encoding='utf-8')
mw.script_Properties_visible(id, type, obj_id, flag_show_actors)
pass
def script_Properties_show_scalarBars(id, type, obj_id, mScalarBarVisible):
type=bytes(type,encoding='utf-8')
mw.script_Properties_show_scalarBars(id, type, obj_id, mScalarBarVisible)
pass
def script_Properties_show_cubeAxes(id, type, obj_id, flag_cubeAxes):
type=bytes(type,encoding='utf-8')
mw.script_Properties_show_cubeAxes(id, type, obj_id, flag_cubeAxes)
pass
def script_Properties_scalarBarPosition(id, type, obj_id, colName, tep_orietation, pos0, pos1, pos2, pos3):
type=bytes(type,encoding='utf-8')
pos0=c_double(pos0)
pos1=c_double(pos1)
pos2=c_double(pos2)
pos3=c_double(pos3)
colName=bytes(colName,encoding='utf-8')
mw.script_Properties_scalarBarPosition(id, type, obj_id, colName, tep_orietation, pos0, pos1, pos2, pos3)
pass
def script_FilterClip(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterClip(id, type, obj_id)
pass
def script_FilterSlice(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterSlice(id, type, obj_id)
pass
def script_FilterContour(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterContour(id, type, obj_id)
pass
def script_FilterVector(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterVector(id, type, obj_id)
pass
def script_FilterReflection(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterReflection(id, type, obj_id)
pass
def script_FilterSmooth(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterSmooth(id, type, obj_id)
pass
def script_FilterStreamLine(id, type, obj_id):
type=bytes(type,encoding='utf-8')
mw.script_FilterStreamLine(id, type, obj_id)
pass
####### ###############################################
def script_Properties_vector_GlyphVector(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=bytes(val,encoding='utf-8')
mw.script_Properties_vector_GlyphVector(id, type, obj_id, val)
pass
def script_Properties_vector_scalar(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=bytes(val,encoding='utf-8')
mw.script_Properties_vector_scalar(id, type, obj_id, val)
pass
def script_Properties_vector_normal(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=bytes(val,encoding='utf-8')
mw.script_Properties_vector_normal(id, type, obj_id, val)
pass
def script_Properties_vector_numPoints(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_vector_numPoints(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_type(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_vector_glyph_type(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_tipRes(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_vector_glyph_tipRes(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_tipRad(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_vector_glyph_tipRad(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_tipLen(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_vector_glyph_tipLen(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_shaftRes(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_vector_glyph_shaftRes(id, type, obj_id, val)
pass
def script_Properties_vector_glyph_shaftRad(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_vector_glyph_shaftRad(id, type, obj_id, val)
pass
def script_Properties_view_backgroundType(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_view_backgroundType(id, type, obj_id, val)
pass
def script_Properties_view_backgroundColor(id, type, obj_id, red, green, blue):
    type=bytes(type,encoding='utf-8')
    mw.script_Properties_view_backgroundColor(id, type, obj_id, red, green, blue)
pass
def script_Properties_view_background2Color(id, type, obj_id, red, green, blue):
type=bytes(type,encoding='utf-8')
mw.script_Properties_view_background2Color(id, type, obj_id, red, green, blue)
pass
def script_Properties_view_axesVisible(id, type, a):
type=bytes(type,encoding='utf-8')
mw.script_Properties_view_axesVisible(id, type, a)
pass
def script_Properties_view_cameraParallel(id, type, a):
type=bytes(type,encoding='utf-8')
mw.script_Properties_view_cameraParallel(id, type, a)
pass
def script_Properties_view_interaction(id, type, a):
type=bytes(type,encoding='utf-8')
mw.script_Properties_view_interaction(id, type, a)
pass
def script_Properties_renderView(id, type):
type=bytes(type,encoding='utf-8')
mw.script_Properties_renderView(id, type)
pass
def script_Camera_Position(id, type, pos0, pos1, pos2):
type=bytes(type,encoding='utf-8')
pos0=c_double(pos0)
pos1=c_double(pos1)
pos2=c_double(pos2)
mw.script_Camera_Position(id, type, pos0, pos1, pos2)
pass
def script_Camera_FocalPoint(id, type, focalPoint0, focalPoint1, focalPoint2):
type=bytes(type,encoding='utf-8')
focalPoint0=c_double(focalPoint0)
focalPoint1=c_double(focalPoint1)
focalPoint2=c_double(focalPoint2)
mw.script_Camera_FocalPoint(id, type, focalPoint0, focalPoint1, focalPoint2)
pass
def script_Camera_ClippingRange(id, type, clippingRange0, clippingRange1):
type=bytes(type,encoding='utf-8')
clippingRange0=c_double(clippingRange0)
clippingRange1=c_double(clippingRange1)
mw.script_Camera_ClippingRange(id, type, clippingRange0, clippingRange1)
pass
def script_Camera_ViewUp(id, type, viewup0, viewup1, viewup2):
type=bytes(type,encoding='utf-8')
viewup0=c_double(viewup0)
viewup1=c_double(viewup1)
viewup2=c_double(viewup2)
mw.script_Camera_ViewUp(id, type, viewup0, viewup1, viewup2)
pass
def script_Camera_ViewAngle(id, type, angle):
type=bytes(type,encoding='utf-8')
angle=c_double(angle)
mw.script_Camera_ViewAngle(id, type, angle)
pass
def script_Camera_Zoom(id, type, zoom):
type=bytes(type,encoding='utf-8')
zoom=c_double(zoom)
mw.script_Camera_Zoom(id, type, zoom)
pass
def script_Camera_Reset(id, type):
type=bytes(type,encoding='utf-8')
mw.script_Camera_Reset(id, type)
pass
def script_Properties_planeOrigin(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
x=c_double(x)
y=c_double(y)
z=c_double(z)
mw.script_Properties_planeOrigin(id, type, obj_id, x, y, z)
pass
def script_Properties_planeNormal(id, type, obj_id, x, y, z):
type=bytes(type,encoding='utf-8')
x=c_double(x)
y=c_double(y)
z=c_double(z)
mw.script_Properties_planeNormal(id, type, obj_id, x, y, z)
pass
def script_Properties_planeVisible(id, type, obj_id, a):
type=bytes(type,encoding='utf-8')
mw.script_Properties_planeVisible(id, type, obj_id, a)
pass
def script_Properties_insideOut(id, type, obj_id, a):
type=bytes(type,encoding='utf-8')
mw.script_Properties_insideOut(id, type, obj_id, a)
pass
def script_Properties_contourColumn(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=bytes(val,encoding='utf-8')
mw.script_Properties_contourColumn(id, type, obj_id, val)
pass
def script_Properties_contourValue(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_contourValue(id, type, obj_id, val)
pass
def script_Properties_contour_reflection(id, type, obj_id, aaa):
type=bytes(type,encoding='utf-8')
mw.script_Properties_contour_reflection(id, type, obj_id, aaa)
pass
def script_Properties_contour_reflectionAxes(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_contour_reflectionAxes(id, type, obj_id, val)
pass
def script_Properties_reflectionAxes(id, type, obj_id, reflection_axis):
type=bytes(type,encoding='utf-8')
mw.script_Properties_reflectionAxes(id, type, obj_id, reflection_axis)
pass
def script_Properties_smooth(id, type, obj_id, smotype, coef):
type=bytes(type,encoding='utf-8')
coef=c_double(coef)
mw.script_Properties_smooth(id, type, obj_id, smotype, coef)
pass
def script_Properties_streamline_vector(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=bytes(val,encoding='utf-8')
mw.script_Properties_streamline_vector(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_direction(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_integration_direction(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_type(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_integration_type(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_stepUnit(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_integration_stepUnit(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_initStepLen(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_integration_initStepLen(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_miniStepLen(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_integration_miniStepLen(id, type, obj_id, val)
pass
def script_Properties_streamline_integration_maxiStepLen(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_integration_maxiStepLen(id, type, obj_id, val)
pass
def script_Properties_streamline_stream_maxiSteps(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_stream_maxiSteps(id, type, obj_id, val)
pass
def script_Properties_streamline_stream_maxiStreamLen(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_stream_maxiStreamLen(id, type, obj_id, val)
pass
def script_Properties_streamline_stream_terminalSpeed(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_stream_terminalSpeed(id, type, obj_id, val)
pass
def script_Properties_streamline_stream_maxiError(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_stream_maxiError(id, type, obj_id, val)
pass
def script_Properties_streamline_seeds_type(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_seeds_type(id, type, obj_id, val)
pass
def script_Properties_streamline_seeds_mPoint(id, type, obj_id, val0, val1, val2):
type=bytes(type,encoding='utf-8')
val0=c_double(val0)
val1=c_double(val1)
val2=c_double(val2)
mw.script_Properties_streamline_seeds_mPoint(id, type, obj_id, val0, val1, val2)
pass
def script_Properties_streamline_seeds_num_points(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_seeds_num_points(id, type, obj_id, val)
pass
def script_Properties_streamline_seeds_radius(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
val=c_double(val)
mw.script_Properties_streamline_seeds_radius(id, type, obj_id, val)
pass
def script_Properties_streamline_vorticity(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_vorticity(id, type, obj_id, val)
pass
def script_Properties_streamline_interpolatorType(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_interpolatorType(id, type, obj_id, val)
pass
def script_Properties_streamline_surface_streamLines(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_surface_streamLines(id, type, obj_id, val)
pass
def script_Properties_streamline_reflection(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_reflection(id, type, obj_id, val)
pass
def script_Properties_streamline_reflectionAxes(id, type, obj_id, val):
type=bytes(type,encoding='utf-8')
mw.script_Properties_streamline_reflectionAxes(id, type, obj_id, val)
pass
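All of the wrapper functions above marshal their arguments the same way before handing off to the `mw` shared library: str parameters are encoded to UTF-8 bytes and floating-point parameters are wrapped in ctypes `c_double`. A minimal sketch of that conversion with made-up example values (no call into `mw` is made):

```python
from ctypes import c_double

# str -> bytes, as done for every `type` argument above
type_arg = bytes("streamline", encoding='utf-8')
# float -> c_double, as done for numeric property values
val_arg = c_double(0.5)
```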
# -*- coding: utf-8 -*-
# pytgbot/bot/base.py (from luckydonald/pytgbot, MIT license)
import json
import re
from abc import abstractmethod
from warnings import warn
from datetime import timedelta, datetime
from urllib.parse import urlparse, urlunparse
from luckydonaldUtils.logger import logging
from luckydonaldUtils.encoding import unicode_type, to_unicode as u, to_native as n
from luckydonaldUtils.exceptions import assert_type_or_raise
from ..exceptions import TgApiServerException, TgApiParseException
from ..exceptions import TgApiTypeError, TgApiResponseException
from ..api_types.sendable.inline import InlineQueryResult
from ..api_types.receivable.peer import User
from ..api_types import from_array_list, as_array
from ..api_types.sendable.files import InputFile
from ..api_types.sendable import Sendable
__author__ = 'luckydonald'
__all__ = ["BotBase"]
logger = logging.getLogger(__name__)
DEFAULT_BASE_URL = "https://api.telegram.org/bot{api_key}/{command}"
DEFAULT_DOWNLOAD_URL = "https://api.telegram.org/file/bot{api_key}/{file}"
DEFAULT_TIMEOUT = 60.0 # an int or float for seconds, or None to not specify any.
class BotBase(object):
def __init__(self, api_key, return_python_objects=True, base_url=None, download_url=None, default_timeout=None):
"""
A Bot instance. From here you can call all the functions.
The api key can be obtained from @BotFather, see https://core.telegram.org/bots#6-botfather
:param api_key: The API key. Something like "ABC-DEF1234ghIkl-zyx57W2v1u123ew11"
:type api_key: str
:param base_url: Change the URL of the server instance.
Set to None to use the telegram default.
This is needed for sending commands to the server.
It will later fill in the variables `{api_key}` and `{command}` accordingly.
See `str.format()` at https://docs.python.org/3/library/string.html#string-formatting.
:type base_url: None|str
:param download_url: Change the URL of file downloads from the server instance.
Set to None to use the telegram default.
This is needed if you want to use `.get_file(…)`,
there it will later fill in the variables `{api_key}` and `{file}` accordingly.
See `str.format()` at https://docs.python.org/3/library/string.html#string-formatting.
:type download_url: None|str
:param return_python_objects: If it should convert the json to `pytgbot.api_types.*` object (`True`, default)
or return the parsed json values directly (`False`).
:type return_python_objects: bool
:param default_timeout: The default timeout to use for requests to the telegram api.
:type default_timeout: None|float|int
"""
if api_key is None or not api_key:
raise ValueError("No api_key given.")
# end if
self.api_key = api_key
self.return_python_objects = return_python_objects
self._last_update = datetime.now()
self._base_url = DEFAULT_BASE_URL if base_url is None else base_url
self._download_url = self.calculate_download_url(self._base_url, download_url)
self._default_timeout = DEFAULT_TIMEOUT if default_timeout is None else default_timeout
self._me = None # will be filled when using the property .id or .username, or when calling ._load_info()
# end def __init__
@classmethod
def calculate_download_url(cls, base_url, download_url):
"""
Tries to get the best fitting download URL from what we have.
:param base_url:
:type base_url: str
:param download_url:
:type download_url: str
:return:
:rtype: str
"""
if base_url == DEFAULT_BASE_URL:
# the normal url is already the official url
if download_url is None:
# so we can use the official download url as well
return DEFAULT_DOWNLOAD_URL
else:
# but someone wants to override that.
return download_url
# end if
else:
# the normal url is a custom one
if download_url is None:
# Okay. So this is tricky. There's a custom url set, but no download url.
# We'll try to do the same structure as the official api,
# but issue a warning that a user should better specify it directly instead.
# We'll change the path from what we hope to be similar to
# "/bot{api_key}/{command}"
# to
# "/file/bot{api_key}/{file}"
# by prepending '/file' to the path part and replacing {command} with {file} in the resulting version.
parsed_base_url = urlparse(base_url)
# copy the tuple version of the named tuple to an editable list.
parsed_base_url = list(parsed_base_url[:])
# append "/file" to the path which is the third ([2]) tuple/list attribute.
parsed_base_url[2] = '/file' + (parsed_base_url[2] or '')
# piece that together as a full url again
download_url = urlunparse(parsed_base_url)
# replace the "{command}" with "{file}".
download_url = download_url.format(api_key="{api_key}", command="{file}")
# we're done; warn the user about the guessed url and return it.
warn(
"Custom server `base_url` set ({base_url!r}), but no custom `download_url`. ".format(base_url=base_url) +
"Tried to guess it as {download_url!r}.".format(download_url=download_url)
)
return download_url
else:
# someone wants to override that. Thanks.
return download_url
# end if
# end if
# end def
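The fallback guessing above can be sketched with a hypothetical custom server url; this mirrors the urlparse/urlunparse steps and is not used anywhere in the class:

```python
from urllib.parse import urlparse, urlunparse

# split the custom base url into its 6 components (path is index 2)
parts = list(urlparse("https://example.com/bot{api_key}/{command}"))
# prepend '/file' to the path, as calculate_download_url does
parts[2] = '/file' + (parts[2] or '')
# reassemble and swap the {command} placeholder for {file}
guessed = urlunparse(parts).format(api_key="{api_key}", command="{file}")
```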
def _prepare_request(self, command, query):
"""
Prepares the command url, and converts the query json.
:param command: The Url command parameter
:type command: str
:param query: Will get json encoded.
:return: params and a url, for use with requests etc.
"""
params = {}
files = {}
for key in query.keys():
element = query[key]
if element is not None:
if isinstance(element, (str, int, float, bool)):
params[key] = element
elif isinstance(element, InputFile):
params[key], file_info = element.get_input_media_referenced_files(key)
if file_info is not None:
files.update(file_info)
# end if
else:
params[key] = json.dumps(as_array(element))
# end if
# end if
# end for
url = self._base_url.format(api_key=n(self.api_key), command=n(command))
return url, params, files
# end def _prepare_request
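For non-primitive query values, the method above falls back to json-encoding them; a sketch of that branch with a plain Python list standing in for an `as_array()` result (hypothetical parameter name):

```python
import json

params = {}
# non-primitive values are serialized to a JSON string before the request
params["allowed_updates"] = json.dumps(["message", "callback_query"])
```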
def _postprocess_request(self, request, response, json):
"""
This converts the response to either the response or a parsed :class:`pytgbot.api_types.receivable.Receivable`.
:param request: the request
:type request: request.Request|httpx.Request
:param response: the request response
:type response: requests.Response|httpx.Response
:param json: the parsed json array
:type json: dict
:return: The json response from the server, or, if `self.return_python_objects` is `True`, a parsed return type.
:rtype: DictObject.DictObject | pytgbot.api_types.receivable.Receivable
"""
from DictObject import DictObject
try:
logger.debug(json)
res = DictObject.objectify(json)
except Exception as e:
raise TgApiResponseException('Parsing answer as json failed.', response, e)
# end if
# TG should always return a dict, with at least a status or something.
if self.return_python_objects:
if res.ok is not True:
raise TgApiServerException(
error_code=res.error_code if "error_code" in res or hasattr(res, "error_code") else None,
response=response,
description=res.description if "description" in res or hasattr(res, "description") else None,
request=request
)
# end if not ok
if "result" not in res:
raise TgApiParseException('Key "result" is missing.')
# end if no result
return res.result
# end if return_python_objects
return res
# end def _postprocess_request
def _do_fileupload(self, file_param_name, value, _command=None, _file_is_optional=False, **kwargs):
"""
:param file_param_name: For what field the file should be uploaded.
:type file_param_name: str
:param value: File to send. You can either pass a file_id as String to resend a file
file that is already on the Telegram servers, or upload a new file,
specifying the file path as :class:`pytgbot.api_types.sendable.files.InputFile`.
If `_file_is_optional` is set to `True`, it can also be set to `None`.
:type value: pytgbot.api_types.sendable.files.InputFile | str | None
:param _command: Overwrite the command to be send.
Default is to convert `file_param_name` to camel case (`"voice_note"` -> `"sendVoiceNote"`)
:type _command: str|None
:param _file_is_optional: If the file (`value`) is allowed to be None.
:type _file_is_optional: bool
:param kwargs: will get json encoded.
:return: The json response from the server, or, if `self.return_python_objects` is `True`, a parsed return type.
:rtype: DictObject.DictObject | pytgbot.api_types.receivable.Receivable
:raises TgApiTypeError, TgApiParseException, TgApiServerException: Everything from :meth:`Bot.do`, and :class:`TgApiTypeError`
"""
from ..api_types.sendable.files import InputFile
from luckydonaldUtils.encoding import unicode_type
from luckydonaldUtils.encoding import to_native as n
if value is None and _file_is_optional:
# Is None but set optional, so do nothing.
pass
elif isinstance(value, str):
kwargs[file_param_name] = str(value)
elif isinstance(value, unicode_type):
kwargs[file_param_name] = n(value)
elif isinstance(value, InputFile):
files = value.get_request_files(file_param_name)
if "files" in kwargs and kwargs["files"]:
# already are some files there, merge them.
assert isinstance(kwargs["files"], dict), 'The files should be of type dict, but are of type {}.'.format(type(kwargs["files"]))
for key in files.keys():
assert key not in kwargs["files"], '{key} would be overwritten!'.format(key=key)
kwargs["files"][key] = files[key]
# end for
else:
# no files so far
kwargs["files"] = files
# end if
else:
raise TgApiTypeError("Parameter {key} is not type (str, {text_type}, {input_file_type}), but type {type}".format(
key=file_param_name, type=type(value), input_file_type=InputFile, text_type=unicode_type))
# end if
if not _command:
# command as camelCase # "voice_note" -> "sendVoiceNote" # https://stackoverflow.com/a/10984923/3423324
command = re.sub(r'(?!^)_([a-zA-Z])', lambda m: m.group(1).upper(), "send_" + file_param_name)
else:
command = _command
# end def
return self.do(command, **kwargs)
# end def _do_fileupload
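The camelCase command derivation used above can be checked in isolation with a hypothetical field name (same regex as in `_do_fileupload`):

```python
import re

# uppercase every letter that follows an underscore, except at the string start
command = re.sub(r'(?!^)_([a-zA-Z])', lambda m: m.group(1).upper(), "send_" + "voice_note")
```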
def get_download_url(self, file):
"""
Creates a url to download the file.
Note: Contains the secret API key, so you should not share this url!
:param file: The File you want to get the url to download.
Either the telegram's `File` as returned by `.get_file`
or the string of the file name on the api servers (which would be `File.file_path` in essence).
:type file: pytgbot.api_types.receivable.media.File|str
:return: url
:rtype: str
"""
from ..api_types.receivable.media import File
assert_type_or_raise(file, File, str, parameter_name='file')
if isinstance(file, File):
file_path = file.file_path
else:
file_path = file
# end if
return self._download_url.format(api_key=n(self.api_key), file=n(file_path))
# end def get_download_url
@abstractmethod
def _load_info(self):
"""
This function stores the id and the username of the bot.
Called by `.username` and `.id` properties.
Must be synchronous, even in asynchronous subclasses.
:return:
"""
raise NotImplementedError('subclass needs to overwrite this.')
# end def
@property
def me(self):
"""
:rtype: User
"""
if not self._me:
self._load_info()
# end if
return self._me
# end def
@property
def username(self):
return self.me.username
# end def
@property
def id(self):
return self.me.id
# end def
def __str__(self):
return "{s.__class__.__name__}(username={s.username!r}, id={s.id!r})".format(s=self)
# end def
@abstractmethod
def get_updates(self, offset=None, limit=100, poll_timeout=0, allowed_updates=None, request_timeout=None, delta=timedelta(milliseconds=100), error_as_empty=False):
raise NotImplementedError('subclass needs to overwrite this.')
# end def
@abstractmethod
def do(self, command, files=None, use_long_polling=False, request_timeout=None, **query):
raise NotImplementedError('subclass needs to overwrite this.')
# end def
# start of generated functions
def _get_updates__make_request(self, offset=None, limit=None, timeout=None, allowed_updates=None):
"""
Internal function for making the request to the API's getUpdates endpoint.
Optional keyword parameters:
:param offset: Identifier of the first update to be returned. Must be greater by one than the highest among the identifiers of previously received updates. By default, updates starting with the earliest unconfirmed update are returned. An update is considered confirmed as soon as getUpdates is called with an offset higher than its update_id. The negative offset can be specified to retrieve updates starting from -offset update from the end of the updates queue. All previous updates will be forgotten.
:type offset: int
:param limit: Limits the number of updates to be retrieved. Values between 1-100 are accepted. Defaults to 100.
:type limit: int
:param timeout: Timeout in seconds for long polling. Defaults to 0, i.e. usual short polling. Should be positive, short polling should be used for testing purposes only.
:type timeout: int
:param allowed_updates: A JSON-serialized list of the update types you want your bot to receive. For example, specify ["message", "edited_channel_post", "callback_query"] to only receive updates of these types. See Update for a complete list of available update types. Specify an empty list to receive all update types except chat_member (default). If not specified, the previous setting will be used. Please note that this parameter doesn't affect updates created before the call to the getUpdates, so unwanted updates may be received for a short period of time.
:type allowed_updates: list of str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(offset, None, int, parameter_name="offset")
assert_type_or_raise(limit, None, int, parameter_name="limit")
assert_type_or_raise(timeout, None, int, parameter_name="timeout")
assert_type_or_raise(allowed_updates, None, list, parameter_name="allowed_updates")
return self.do("getUpdates", offset=offset, limit=limit, timeout=timeout, allowed_updates=allowed_updates)
# end def _get_updates__make_request
def _get_updates__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getUpdates endpoint.
:return: An Array of Update objects is returned
:rtype: list of pytgbot.api_types.receivable.updates.Update
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Update
try:
return Update.from_array_list(result, list_level=1)
except TgApiParseException:
logger.debug("Failed parsing as api_type Update", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_updates__process_result
def _set_webhook__make_request(self, url, certificate=None, ip_address=None, max_connections=None, allowed_updates=None, drop_pending_updates=None):
"""
Internal function for making the request to the API's setWebhook endpoint.
Parameters:
:param url: HTTPS url to send updates to. Use an empty string to remove webhook integration
:type url: str|unicode
Optional keyword parameters:
:param certificate: Upload your public key certificate so that the root certificate in use can be checked. See our self-signed guide for details.
:type certificate: pytgbot.api_types.sendable.files.InputFile
:param ip_address: The fixed IP address which will be used to send webhook requests instead of the IP address resolved through DNS
:type ip_address: str|unicode
:param max_connections: Maximum allowed number of simultaneous HTTPS connections to the webhook for update delivery, 1-100. Defaults to 40. Use lower values to limit the load on your bot's server, and higher values to increase your bot's throughput.
:type max_connections: int
:param allowed_updates: A JSON-serialized list of the update types you want your bot to receive. For example, specify ["message", "edited_channel_post", "callback_query"] to only receive updates of these types. See Update for a complete list of available update types. Specify an empty list to receive all update types except chat_member (default). If not specified, the previous setting will be used. Please note that this parameter doesn't affect updates created before the call to the setWebhook, so unwanted updates may be received for a short period of time.
:type allowed_updates: list of str|unicode
:param drop_pending_updates: Pass True to drop all pending updates
:type drop_pending_updates: bool
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(url, unicode_type, parameter_name="url")
assert_type_or_raise(certificate, None, InputFile, parameter_name="certificate")
assert_type_or_raise(ip_address, None, unicode_type, parameter_name="ip_address")
assert_type_or_raise(max_connections, None, int, parameter_name="max_connections")
assert_type_or_raise(allowed_updates, None, list, parameter_name="allowed_updates")
assert_type_or_raise(drop_pending_updates, None, bool, parameter_name="drop_pending_updates")
return self.do("setWebhook", url=url, certificate=certificate, ip_address=ip_address, max_connections=max_connections, allowed_updates=allowed_updates, drop_pending_updates=drop_pending_updates)
# end def _set_webhook__make_request
def _set_webhook__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setWebhook endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_webhook__process_result
def _delete_webhook__make_request(self, drop_pending_updates=None):
"""
Internal function for making the request to the API's deleteWebhook endpoint.
Optional keyword parameters:
:param drop_pending_updates: Pass True to drop all pending updates
:type drop_pending_updates: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(drop_pending_updates, None, bool, parameter_name="drop_pending_updates")
return self.do("deleteWebhook", drop_pending_updates=drop_pending_updates)
# end def _delete_webhook__make_request
def _delete_webhook__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteWebhook endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _delete_webhook__process_result
def _get_webhook_info__make_request(self):
"""
Internal function for making the request to the API's getWebhookInfo endpoint.
:return: the decoded json
:rtype: dict|list|bool
"""
return self.do("getWebhookInfo")
# end def _get_webhook_info__make_request
def _get_webhook_info__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getWebhookInfo endpoint.
:return: On success, returns a WebhookInfo object
:rtype: pytgbot.api_types.receivable.updates.WebhookInfo
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import WebhookInfo
try:
return WebhookInfo.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type WebhookInfo", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_webhook_info__process_result
def _get_me__make_request(self):
"""
Internal function for making the request to the API's getMe endpoint.
:return: the decoded json
:rtype: dict|list|bool
"""
return self.do("getMe")
# end def _get_me__make_request
def _get_me__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getMe endpoint.
:return: Returns basic information about the bot in form of a User object
:rtype: pytgbot.api_types.receivable.peer.User
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import User
try:
return User.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type User", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_me__process_result
def _log_out__make_request(self):
"""
Internal function for making the request to the API's logOut endpoint.
:return: the decoded json
:rtype: dict|list|bool
"""
return self.do("logOut")
# end def _log_out__make_request
def _log_out__process_result(self, result):
"""
Internal function for processing the json data returned by the API's logOut endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _log_out__process_result
def _send_message__make_request(self, chat_id, text, parse_mode=None, entities=None, disable_web_page_preview=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param text: Text of the message to be sent, 1-4096 characters after entities parsing
:type text: str|unicode
Optional keyword parameters:
:param parse_mode: Mode for parsing entities in the message text. See formatting options for more details.
:type parse_mode: str|unicode
:param entities: A JSON-serialized list of special entities that appear in message text, which can be specified instead of parse_mode
:type entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_web_page_preview: Disables link previews for links in this message
:type disable_web_page_preview: bool
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.media import MessageEntity
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(text, unicode_type, parameter_name="text")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(entities, None, list, parameter_name="entities")
assert_type_or_raise(disable_web_page_preview, None, bool, parameter_name="disable_web_page_preview")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendMessage", chat_id=chat_id, text=text, parse_mode=parse_mode, entities=entities, disable_web_page_preview=disable_web_page_preview, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_message__make_request
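All of the `make_request` helpers above funnel their keyword arguments into a generic `self.do(endpoint, **params)` call. As an illustrative sketch only (the helper name `build_params` and the None-dropping behavior are assumptions, not pytgbot's actual transport code), optional parameters left at their `None` default would typically be filtered out before the outgoing request is serialized:

```python
# Hypothetical helper: drop optional parameters that were left at None
# so they are not serialized into the outgoing API request.
def build_params(**kwargs):
    return {key: value for key, value in kwargs.items() if value is not None}

# Mirrors a sendMessage call where only the required fields are set.
params = build_params(chat_id=123, text="hi", parse_mode=None, reply_markup=None)
```

Only `chat_id` and `text` survive here; every unset optional argument disappears from the request payload.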
def _send_message__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's sendMessage endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_message__process_result
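Every `process_result` helper in this section follows the same control flow: return the raw decoded JSON unchanged when `return_python_objects` is disabled, otherwise attempt to parse it into the expected api_type and raise when that fails. A minimal self-contained sketch of that pattern, using stand-in `ParseError` and `Stub` names rather than pytgbot's real `TgApiParseException` and `Message`:

```python
import logging

logger = logging.getLogger(__name__)

class ParseError(Exception):
    """Stand-in for TgApiParseException."""

class Stub:
    """Stand-in for an api_type such as Message."""
    @staticmethod
    def from_array(data):
        if not isinstance(data, dict):
            raise ParseError("expected a dict")
        return data

def process_result(result, return_python_objects=True):
    # Raw passthrough when python-object conversion is disabled.
    if not return_python_objects:
        return result
    logger.debug("Trying to parse %r", result)
    try:
        return Stub.from_array(result)
    except ParseError:
        logger.debug("Failed parsing as Stub", exc_info=True)
    # No candidate type parsed: surface a single, explicit error.
    raise ParseError("Could not parse result.")
```

The endpoint-specific variants differ only in which api_type(s) they try before giving up.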
def _forward_message__make_request(self, chat_id, from_chat_id, message_id, disable_notification=None):
"""
Internal function for making the request to the API's forwardMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param from_chat_id: Unique identifier for the chat where the original message was sent (or channel username in the format @channelusername)
:type from_chat_id: int | str|unicode
:param message_id: Message identifier in the chat specified in from_chat_id
:type message_id: int
Optional keyword parameters:
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:return: the decoded JSON
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(from_chat_id, (int, unicode_type), parameter_name="from_chat_id")
assert_type_or_raise(message_id, int, parameter_name="message_id")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
return self.do("forwardMessage", chat_id=chat_id, from_chat_id=from_chat_id, message_id=message_id, disable_notification=disable_notification)
# end def _forward_message__make_request
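Each request builder opens with a run of `assert_type_or_raise` calls. Judging from the call sites, the helper takes the value, then one or more allowed types, where a literal `None` marks the parameter as optional; the following sketch of those semantics is inferred from usage and is not the library's actual implementation:

```python
def assert_type_or_raise(value, *allowed, parameter_name):
    # A literal None among the allowed "types" means the value may be None.
    if value is None and None in allowed:
        return
    types = tuple(t for t in allowed if t is not None)
    # isinstance() accepts nested tuples, so entries like (int, str) work as-is.
    if not isinstance(value, types):
        raise TypeError(
            "parameter {name!r} should be {types}, got {actual}".format(
                name=parameter_name, types=types, actual=type(value).__name__
            )
        )
```

With this shape, `assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")` accepts either `None` or a string and raises for anything else.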
def _forward_message__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's forwardMessage endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _forward_message__process_result
def _copy_message__make_request(self, chat_id, from_chat_id, message_id, caption=None, parse_mode=None, caption_entities=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's copyMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param from_chat_id: Unique identifier for the chat where the original message was sent (or channel username in the format @channelusername)
:type from_chat_id: int | str|unicode
:param message_id: Message identifier in the chat specified in from_chat_id
:type message_id: int
Optional keyword parameters:
:param caption: New caption for media, 0-1024 characters after entities parsing. If not specified, the original caption is kept
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the new caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the new caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded JSON
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(from_chat_id, (int, unicode_type), parameter_name="from_chat_id")
assert_type_or_raise(message_id, int, parameter_name="message_id")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("copyMessage", chat_id=chat_id, from_chat_id=from_chat_id, message_id=message_id, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _copy_message__make_request
def _copy_message__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's copyMessage endpoint.
:return: Returns the MessageId of the sent message on success
:rtype: pytgbot.api_types.receivable.responses.MessageId
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.responses import MessageId
try:
return MessageId.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type MessageId", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _copy_message__process_result
def _send_photo__make_request(self, chat_id, photo, caption=None, parse_mode=None, caption_entities=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendPhoto endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param photo: Photo to send. Pass a file_id as String to send a photo that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get a photo from the Internet, or upload a new photo using multipart/form-data. The photo must be at most 10 MB in size. The photo's width and height must not exceed 10000 in total. Width and height ratio must be at most 20. More info on Sending Files »
:type photo: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param caption: Photo caption (may also be used when resending photos by file_id), 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the photo caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded JSON
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(photo, (InputFile, unicode_type), parameter_name="photo")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendPhoto", chat_id=chat_id, photo=photo, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_photo__make_request
def _send_photo__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's sendPhoto endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_photo__process_result
def _send_audio__make_request(self, chat_id, audio, caption=None, parse_mode=None, caption_entities=None, duration=None, performer=None, title=None, thumb=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendAudio endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param audio: Audio file to send. Pass a file_id as String to send an audio file that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get an audio file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type audio: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param caption: Audio caption, 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the audio caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param duration: Duration of the audio in seconds
:type duration: int
:param performer: Performer
:type performer: str|unicode
:param title: Track name
:type title: str|unicode
:param thumb: Thumbnail of the file sent; can be ignored if thumbnail generation for the file is supported server-side. The thumbnail should be in JPEG format and less than 200 kB in size. A thumbnail's width and height should not exceed 320. Ignored if the file is not uploaded using multipart/form-data. Thumbnails can't be reused and can be only uploaded as a new file, so you can pass "attach://<file_attach_name>" if the thumbnail was uploaded using multipart/form-data under <file_attach_name>. More info on Sending Files »
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded JSON
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(audio, (InputFile, unicode_type), parameter_name="audio")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(duration, None, int, parameter_name="duration")
assert_type_or_raise(performer, None, unicode_type, parameter_name="performer")
assert_type_or_raise(title, None, unicode_type, parameter_name="title")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendAudio", chat_id=chat_id, audio=audio, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, duration=duration, performer=performer, title=title, thumb=thumb, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_audio__make_request
def _send_audio__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's sendAudio endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_audio__process_result
def _send_document__make_request(self, chat_id, document, thumb=None, caption=None, parse_mode=None, caption_entities=None, disable_content_type_detection=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendDocument endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param document: File to send. Pass a file_id as String to send a file that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get a file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type document: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param thumb: Thumbnail of the file sent; can be ignored if thumbnail generation for the file is supported server-side. The thumbnail should be in JPEG format and less than 200 kB in size. A thumbnail's width and height should not exceed 320. Ignored if the file is not uploaded using multipart/form-data. Thumbnails can't be reused and can be only uploaded as a new file, so you can pass "attach://<file_attach_name>" if the thumbnail was uploaded using multipart/form-data under <file_attach_name>. More info on Sending Files »
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param caption: Document caption (may also be used when resending documents by file_id), 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the document caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_content_type_detection: Disables automatic server-side content type detection for files uploaded using multipart/form-data
:type disable_content_type_detection: bool
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded JSON
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(document, (InputFile, unicode_type), parameter_name="document")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(disable_content_type_detection, None, bool, parameter_name="disable_content_type_detection")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendDocument", chat_id=chat_id, document=document, thumb=thumb, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, disable_content_type_detection=disable_content_type_detection, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_document__make_request
def _send_document__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's sendDocument endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_document__process_result
def _send_video__make_request(self, chat_id, video, duration=None, width=None, height=None, thumb=None, caption=None, parse_mode=None, caption_entities=None, supports_streaming=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendVideo endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param video: Video to send. Pass a file_id as String to send a video that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get a video from the Internet, or upload a new video using multipart/form-data. More info on Sending Files »
:type video: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param duration: Duration of sent video in seconds
:type duration: int
:param width: Video width
:type width: int
:param height: Video height
:type height: int
:param thumb: Thumbnail of the file sent; can be ignored if thumbnail generation for the file is supported server-side. The thumbnail should be in JPEG format and less than 200 kB in size. A thumbnail's width and height should not exceed 320. Ignored if the file is not uploaded using multipart/form-data. Thumbnails can't be reused and can be only uploaded as a new file, so you can pass "attach://<file_attach_name>" if the thumbnail was uploaded using multipart/form-data under <file_attach_name>. More info on Sending Files »
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param caption: Video caption (may also be used when resending videos by file_id), 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the video caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param supports_streaming: Pass True, if the uploaded video is suitable for streaming
:type supports_streaming: bool
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded JSON
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(video, (InputFile, unicode_type), parameter_name="video")
assert_type_or_raise(duration, None, int, parameter_name="duration")
assert_type_or_raise(width, None, int, parameter_name="width")
assert_type_or_raise(height, None, int, parameter_name="height")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(supports_streaming, None, bool, parameter_name="supports_streaming")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendVideo", chat_id=chat_id, video=video, duration=duration, width=width, height=height, thumb=thumb, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, supports_streaming=supports_streaming, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_video__make_request
def _send_video__process_result(self, result):
"""
Internal function for processing the JSON data returned by the API's sendVideo endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_video__process_result
def _send_animation__make_request(self, chat_id, animation, duration=None, width=None, height=None, thumb=None, caption=None, parse_mode=None, caption_entities=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendAnimation endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param animation: Animation to send. Pass a file_id as String to send an animation that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get an animation from the Internet, or upload a new animation using multipart/form-data. More info on Sending Files »
:type animation: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param duration: Duration of sent animation in seconds
:type duration: int
:param width: Animation width
:type width: int
:param height: Animation height
:type height: int
:param thumb: Thumbnail of the file sent; can be ignored if thumbnail generation for the file is supported server-side. The thumbnail should be in JPEG format and less than 200 kB in size. A thumbnail's width and height should not exceed 320. Ignored if the file is not uploaded using multipart/form-data. Thumbnails can't be reused and can be only uploaded as a new file, so you can pass "attach://<file_attach_name>" if the thumbnail was uploaded using multipart/form-data under <file_attach_name>. More info on Sending Files »
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param caption: Animation caption (may also be used when resending animation by file_id), 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the animation caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(animation, (InputFile, unicode_type), parameter_name="animation")
assert_type_or_raise(duration, None, int, parameter_name="duration")
assert_type_or_raise(width, None, int, parameter_name="width")
assert_type_or_raise(height, None, int, parameter_name="height")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendAnimation", chat_id=chat_id, animation=animation, duration=duration, width=width, height=height, thumb=thumb, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_animation__make_request
def _send_animation__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendAnimation endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_animation__process_result
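# Usage sketch (not part of the generated code): the `_send_animation__*` pair
# above backs the public `send_animation()` wrapper. Assuming a configured
# `Bot` instance, a call might look like this -- the chat id and file_id
# values here are hypothetical:
#
#     bot = Bot(API_KEY)
#     msg = bot.send_animation(
#         chat_id='@examplechannel',          # or a numeric chat id
#         animation='CgACAgIAAxkBAAIB...',    # existing file_id (recommended),
#                                             # an HTTP URL, or an InputFile
#         caption='demo animation',
#         duration=5,
#     )
#
# With `return_python_objects` enabled, `msg` is parsed into a
# `pytgbot.api_types.receivable.updates.Message`; otherwise the decoded
# JSON dict is returned as-is.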
def _send_voice__make_request(self, chat_id, voice, caption=None, parse_mode=None, caption_entities=None, duration=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendVoice endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param voice: Audio file to send. Pass a file_id as String to send a file that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get a file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type voice: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param caption: Voice message caption, 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the voice message caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param duration: Duration of the voice message in seconds
:type duration: int
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(voice, (InputFile, unicode_type), parameter_name="voice")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(duration, None, int, parameter_name="duration")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendVoice", chat_id=chat_id, voice=voice, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, duration=duration, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_voice__make_request
def _send_voice__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendVoice endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_voice__process_result
def _send_video_note__make_request(self, chat_id, video_note, duration=None, length=None, thumb=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendVideoNote endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param video_note: Video note to send. Pass a file_id as String to send a video note that exists on the Telegram servers (recommended) or upload a new video using multipart/form-data. More info on Sending Files ». Sending video notes by a URL is currently unsupported
:type video_note: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param duration: Duration of sent video in seconds
:type duration: int
:param length: Video width and height, i.e. diameter of the video message
:type length: int
:param thumb: Thumbnail of the file sent; can be ignored if thumbnail generation for the file is supported server-side. The thumbnail should be in JPEG format and less than 200 kB in size. A thumbnail's width and height should not exceed 320. Ignored if the file is not uploaded using multipart/form-data. Thumbnails can't be reused and can be only uploaded as a new file, so you can pass "attach://<file_attach_name>" if the thumbnail was uploaded using multipart/form-data under <file_attach_name>. More info on Sending Files »
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(video_note, (InputFile, unicode_type), parameter_name="video_note")
assert_type_or_raise(duration, None, int, parameter_name="duration")
assert_type_or_raise(length, None, int, parameter_name="length")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendVideoNote", chat_id=chat_id, video_note=video_note, duration=duration, length=length, thumb=thumb, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_video_note__make_request
def _send_video_note__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendVideoNote endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_video_note__process_result
def _send_media_group__make_request(self, chat_id, media, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None):
"""
Internal function for making the request to the API's sendMediaGroup endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param media: A JSON-serialized array describing messages to be sent, must include 2-10 items
:type media: list of pytgbot.api_types.sendable.input_media.InputMediaAudio | list of pytgbot.api_types.sendable.input_media.InputMediaDocument | list of pytgbot.api_types.sendable.input_media.InputMediaPhoto | list of pytgbot.api_types.sendable.input_media.InputMediaVideo
Optional keyword parameters:
:param disable_notification: Sends messages silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the messages are a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(media, list, parameter_name="media")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
return self.do("sendMediaGroup", chat_id=chat_id, media=media, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply)
# end def _send_media_group__make_request
def _send_media_group__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendMediaGroup endpoint.
:return: On success, an array of Messages that were sent is returned
:rtype: list of pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array_list(result, list_level=1)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_media_group__process_result
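# Usage sketch (not part of the generated code): sendMediaGroup takes a list
# of 2-10 InputMedia* items and returns a list of Messages. Assuming the
# public `send_media_group()` wrapper and hypothetical file_id values:
#
#     from pytgbot.api_types.sendable.input_media import InputMediaPhoto
#
#     msgs = bot.send_media_group(
#         chat_id=1234,
#         media=[
#             InputMediaPhoto(media='AgACAgIAAxkBAAIC...'),
#             InputMediaPhoto(media='AgACAgIAAxkBAAID...',
#                             caption='second photo'),
#         ],
#     )
#
# Note the asymmetry with single-media endpoints: `_send_media_group__process_result`
# parses the result via `Message.from_array_list(result, list_level=1)`,
# yielding a list of Message objects rather than a single one.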
def _send_location__make_request(self, chat_id, latitude, longitude, horizontal_accuracy=None, live_period=None, heading=None, proximity_alert_radius=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendLocation endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param latitude: Latitude of the location
:type latitude: float
:param longitude: Longitude of the location
:type longitude: float
Optional keyword parameters:
:param horizontal_accuracy: The radius of uncertainty for the location, measured in meters; 0-1500
:type horizontal_accuracy: float
:param live_period: Period in seconds for which the location will be updated (see Live Locations); should be between 60 and 86400.
:type live_period: int
:param heading: For live locations, a direction in which the user is moving, in degrees. Must be between 1 and 360 if specified.
:type heading: int
:param proximity_alert_radius: For live locations, a maximum distance for proximity alerts about approaching another chat member, in meters. Must be between 1 and 100000 if specified.
:type proximity_alert_radius: int
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(latitude, float, parameter_name="latitude")
assert_type_or_raise(longitude, float, parameter_name="longitude")
assert_type_or_raise(horizontal_accuracy, None, float, parameter_name="horizontal_accuracy")
assert_type_or_raise(live_period, None, int, parameter_name="live_period")
assert_type_or_raise(heading, None, int, parameter_name="heading")
assert_type_or_raise(proximity_alert_radius, None, int, parameter_name="proximity_alert_radius")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendLocation", chat_id=chat_id, latitude=latitude, longitude=longitude, horizontal_accuracy=horizontal_accuracy, live_period=live_period, heading=heading, proximity_alert_radius=proximity_alert_radius, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_location__make_request
def _send_location__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendLocation endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_location__process_result
def _edit_message_live_location__make_request(self, latitude, longitude, chat_id=None, message_id=None, inline_message_id=None, horizontal_accuracy=None, heading=None, proximity_alert_radius=None, reply_markup=None):
"""
Internal function for making the request to the API's editMessageLiveLocation endpoint.
Parameters:
:param latitude: Latitude of new location
:type latitude: float
:param longitude: Longitude of new location
:type longitude: float
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message to edit
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param horizontal_accuracy: The radius of uncertainty for the location, measured in meters; 0-1500
:type horizontal_accuracy: float
:param heading: Direction in which the user is moving, in degrees. Must be between 1 and 360 if specified.
:type heading: int
:param proximity_alert_radius: Maximum distance for proximity alerts about approaching another chat member, in meters. Must be between 1 and 100000 if specified.
:type proximity_alert_radius: int
:param reply_markup: A JSON-serialized object for a new inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(latitude, float, parameter_name="latitude")
assert_type_or_raise(longitude, float, parameter_name="longitude")
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(horizontal_accuracy, None, float, parameter_name="horizontal_accuracy")
assert_type_or_raise(heading, None, int, parameter_name="heading")
assert_type_or_raise(proximity_alert_radius, None, int, parameter_name="proximity_alert_radius")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("editMessageLiveLocation", latitude=latitude, longitude=longitude, chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, horizontal_accuracy=horizontal_accuracy, heading=heading, proximity_alert_radius=proximity_alert_radius, reply_markup=reply_markup)
# end def _edit_message_live_location__make_request
def _edit_message_live_location__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editMessageLiveLocation endpoint.
:return: On success, if the edited message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _edit_message_live_location__process_result
def _stop_message_live_location__make_request(self, chat_id=None, message_id=None, inline_message_id=None, reply_markup=None):
"""
Internal function for making the request to the API's stopMessageLiveLocation endpoint.
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message with live location to stop
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param reply_markup: A JSON-serialized object for a new inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("stopMessageLiveLocation", chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, reply_markup=reply_markup)
# end def _stop_message_live_location__make_request
def _stop_message_live_location__process_result(self, result):
"""
Internal function for processing the json data returned by the API's stopMessageLiveLocation endpoint.
:return: On success, if the message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _stop_message_live_location__process_result
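# Usage sketch (not part of the generated code): a typical live-location
# lifecycle built on the sendLocation / editMessageLiveLocation /
# stopMessageLiveLocation pairs above, assuming the corresponding public
# wrappers exist; all coordinates and ids are hypothetical:
#
#     msg = bot.send_location(chat_id=1234, latitude=52.52, longitude=13.405,
#                             live_period=900)          # updatable for 15 min
#     bot.edit_message_live_location(latitude=52.53, longitude=13.41,
#                                    chat_id=1234, message_id=msg.message_id)
#     bot.stop_message_live_location(chat_id=1234, message_id=msg.message_id)
#
# For inline-mode messages, pass `inline_message_id` instead of the
# `chat_id`/`message_id` pair; in that case the API returns True rather than
# the edited Message, which is why both process_result functions fall back to
# parsing a primitive bool.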
def _send_venue__make_request(self, chat_id, latitude, longitude, title, address, foursquare_id=None, foursquare_type=None, google_place_id=None, google_place_type=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendVenue endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param latitude: Latitude of the venue
:type latitude: float
:param longitude: Longitude of the venue
:type longitude: float
:param title: Name of the venue
:type title: str|unicode
:param address: Address of the venue
:type address: str|unicode
Optional keyword parameters:
:param foursquare_id: Foursquare identifier of the venue
:type foursquare_id: str|unicode
:param foursquare_type: Foursquare type of the venue, if known. (For example, "arts_entertainment/default", "arts_entertainment/aquarium" or "food/icecream".)
:type foursquare_type: str|unicode
:param google_place_id: Google Places identifier of the venue
:type google_place_id: str|unicode
:param google_place_type: Google Places type of the venue. (See supported types.)
:type google_place_type: str|unicode
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(latitude, float, parameter_name="latitude")
assert_type_or_raise(longitude, float, parameter_name="longitude")
assert_type_or_raise(title, unicode_type, parameter_name="title")
assert_type_or_raise(address, unicode_type, parameter_name="address")
assert_type_or_raise(foursquare_id, None, unicode_type, parameter_name="foursquare_id")
assert_type_or_raise(foursquare_type, None, unicode_type, parameter_name="foursquare_type")
assert_type_or_raise(google_place_id, None, unicode_type, parameter_name="google_place_id")
assert_type_or_raise(google_place_type, None, unicode_type, parameter_name="google_place_type")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendVenue", chat_id=chat_id, latitude=latitude, longitude=longitude, title=title, address=address, foursquare_id=foursquare_id, foursquare_type=foursquare_type, google_place_id=google_place_id, google_place_type=google_place_type, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_venue__make_request
def _send_venue__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendVenue endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_venue__process_result
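Every `_*__process_result` helper in this module follows the same try-parse-or-raise shape. A minimal, self-contained sketch of that pattern, using made-up `FakeMessage`/`ParseError` stand-ins rather than the real pytgbot classes:

```python
class ParseError(Exception):
    pass


class FakeMessage(object):
    """Illustrative stand-in for an api_type with a from_array classmethod."""
    def __init__(self, message_id):
        self.message_id = message_id

    @classmethod
    def from_array(cls, array):
        # Reject anything that is not a dict with the required key,
        # mirroring how real api_types validate their input.
        if not isinstance(array, dict) or "message_id" not in array:
            raise ParseError("not a message")
        return cls(array["message_id"])


def process_result(result, return_python_objects=True):
    # With return_python_objects disabled, the raw decoded json passes through.
    if not return_python_objects:
        return result
    try:
        return FakeMessage.from_array(result)
    except ParseError:
        pass  # fall through to the final error
    # No parser accepted the payload: fail loudly instead of returning junk.
    raise ParseError("Could not parse result.")
```

The pattern guarantees the caller either gets a typed object, the untouched json, or an explicit parse failure, never a silently wrong value.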
def _send_contact__make_request(self, chat_id, phone_number, first_name, last_name=None, vcard=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendContact endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param phone_number: Contact's phone number
:type phone_number: str|unicode
:param first_name: Contact's first name
:type first_name: str|unicode
Optional keyword parameters:
:param last_name: Contact's last name
:type last_name: str|unicode
:param vcard: Additional data about the contact in the form of a vCard, 0-2048 bytes
:type vcard: str|unicode
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(phone_number, unicode_type, parameter_name="phone_number")
assert_type_or_raise(first_name, unicode_type, parameter_name="first_name")
assert_type_or_raise(last_name, None, unicode_type, parameter_name="last_name")
assert_type_or_raise(vcard, None, unicode_type, parameter_name="vcard")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendContact", chat_id=chat_id, phone_number=phone_number, first_name=first_name, last_name=last_name, vcard=vcard, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_contact__make_request
def _send_contact__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendContact endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_contact__process_result
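The `assert_type_or_raise(...)` calls above treat a `None` entry in the allowed types as "this parameter is optional". An illustrative re-implementation of that observable behaviour (the real helper is imported from a utility package, not defined here):

```python
def assert_type_or_raise(value, *allowed, **kwargs):
    """Raise TypeError unless value matches one of the allowed types.

    A literal None among the allowed types marks the parameter as
    optional, i.e. value may itself be None.
    """
    parameter_name = kwargs.get("parameter_name", "value")
    types = tuple(t for t in allowed if t is not None)
    optional = any(t is None for t in allowed)
    if value is None and optional:
        return  # optional parameter left unset
    if not isinstance(value, types):
        raise TypeError(
            "{name}: expected one of {types!r}, got {actual!r}".format(
                name=parameter_name, types=types, actual=type(value)
            )
        )
```

This is why `last_name` above is checked with `(None, unicode_type)` while the required `phone_number` is checked with `unicode_type` alone.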
def _send_poll__make_request(self, chat_id, question, options, is_anonymous=None, type=None, allows_multiple_answers=None, correct_option_id=None, explanation=None, explanation_parse_mode=None, explanation_entities=None, open_period=None, close_date=None, is_closed=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendPoll endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param question: Poll question, 1-300 characters
:type question: str|unicode
:param options: A JSON-serialized list of answer options, 2-10 strings 1-100 characters each
:type options: list of str|unicode
Optional keyword parameters:
:param is_anonymous: True, if the poll needs to be anonymous, defaults to True
:type is_anonymous: bool
:param type: Poll type, "quiz" or "regular", defaults to "regular"
:type type: str|unicode
:param allows_multiple_answers: True, if the poll allows multiple answers, ignored for polls in quiz mode, defaults to False
:type allows_multiple_answers: bool
:param correct_option_id: 0-based identifier of the correct answer option, required for polls in quiz mode
:type correct_option_id: int
:param explanation: Text that is shown when a user chooses an incorrect answer or taps on the lamp icon in a quiz-style poll, 0-200 characters with at most 2 line feeds after entities parsing
:type explanation: str|unicode
:param explanation_parse_mode: Mode for parsing entities in the explanation. See formatting options for more details.
:type explanation_parse_mode: str|unicode
:param explanation_entities: A JSON-serialized list of special entities that appear in the poll explanation, which can be specified instead of parse_mode
:type explanation_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param open_period: Amount of time in seconds the poll will be active after creation, 5-600. Can't be used together with close_date.
:type open_period: int
:param close_date: Point in time (Unix timestamp) when the poll will be automatically closed. Must be at least 5 and no more than 600 seconds in the future. Can't be used together with open_period.
:type close_date: int
:param is_closed: Pass True, if the poll needs to be immediately closed. This can be useful for poll preview.
:type is_closed: bool
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.media import MessageEntity
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(question, unicode_type, parameter_name="question")
assert_type_or_raise(options, list, parameter_name="options")
assert_type_or_raise(is_anonymous, None, bool, parameter_name="is_anonymous")
assert_type_or_raise(type, None, unicode_type, parameter_name="type")
assert_type_or_raise(allows_multiple_answers, None, bool, parameter_name="allows_multiple_answers")
assert_type_or_raise(correct_option_id, None, int, parameter_name="correct_option_id")
assert_type_or_raise(explanation, None, unicode_type, parameter_name="explanation")
assert_type_or_raise(explanation_parse_mode, None, unicode_type, parameter_name="explanation_parse_mode")
assert_type_or_raise(explanation_entities, None, list, parameter_name="explanation_entities")
assert_type_or_raise(open_period, None, int, parameter_name="open_period")
assert_type_or_raise(close_date, None, int, parameter_name="close_date")
assert_type_or_raise(is_closed, None, bool, parameter_name="is_closed")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendPoll", chat_id=chat_id, question=question, options=options, is_anonymous=is_anonymous, type=type, allows_multiple_answers=allows_multiple_answers, correct_option_id=correct_option_id, explanation=explanation, explanation_parse_mode=explanation_parse_mode, explanation_entities=explanation_entities, open_period=open_period, close_date=close_date, is_closed=is_closed, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_poll__make_request
def _send_poll__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendPoll endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_poll__process_result
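The sendPoll docstring above quotes several server-enforced limits (1-300 character question, 2-10 options of 1-100 characters each, `open_period` and `close_date` mutually exclusive). A hypothetical client-side pre-flight check for those limits, not part of pytgbot:

```python
def validate_poll(question, options, open_period=None, close_date=None):
    """Raise ValueError if the sendPoll limits quoted in the docs are violated."""
    if not 1 <= len(question) <= 300:
        raise ValueError("question must be 1-300 characters")
    if not 2 <= len(options) <= 10:
        raise ValueError("polls need 2-10 answer options")
    if any(not 1 <= len(option) <= 100 for option in options):
        raise ValueError("each option must be 1-100 characters")
    if open_period is not None and close_date is not None:
        raise ValueError("open_period and close_date are mutually exclusive")
    if open_period is not None and not 5 <= open_period <= 600:
        raise ValueError("open_period must be 5-600 seconds")
    return True
```

Checking locally surfaces mistakes immediately instead of as a round-trip Bad Request from the API.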
def _send_dice__make_request(self, chat_id, emoji=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendDice endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
Optional keyword parameters:
:param emoji: Emoji on which the dice throw animation is based. Currently, must be one of "🎲", "🎯", "🏀", "⚽", "🎳", or "🎰". Dice can have values 1-6 for "🎲", "🎯" and "🎳", values 1-5 for "🏀" and "⚽", and values 1-64 for "🎰". Defaults to "🎲"
:type emoji: str|unicode
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(emoji, None, unicode_type, parameter_name="emoji")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendDice", chat_id=chat_id, emoji=emoji, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_dice__make_request
def _send_dice__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendDice endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_dice__process_result
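The value ranges quoted in the sendDice docstring above, collected into a lookup table. Purely illustrative; the Bot API picks the value server-side:

```python
DICE_VALUE_RANGES = {
    "\U0001F3B2": (1, 6),   # 🎲 die
    "\U0001F3AF": (1, 6),   # 🎯 darts
    "\U0001F3B3": (1, 6),   # 🎳 bowling
    "\U0001F3C0": (1, 5),   # 🏀 basketball
    "\u26BD": (1, 5),       # ⚽ football
    "\U0001F3B0": (1, 64),  # 🎰 slot machine
}


def dice_value_range(emoji="\U0001F3B2"):
    # Defaults to the die, matching the API's default emoji.
    return DICE_VALUE_RANGES[emoji]
```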
def _send_chat_action__make_request(self, chat_id, action):
"""
Internal function for making the request to the API's sendChatAction endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param action: Type of action to broadcast. Choose one, depending on what the user is about to receive: typing for text messages, upload_photo for photos, record_video or upload_video for videos, record_voice or upload_voice for voice notes, upload_document for general files, choose_sticker for stickers, find_location for location data, record_video_note or upload_video_note for video notes.
:type action: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(action, unicode_type, parameter_name="action")
return self.do("sendChatAction", chat_id=chat_id, action=action)
# end def _send_chat_action__make_request
def _send_chat_action__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendChatAction endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_chat_action__process_result
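The `action` strings enumerated in the sendChatAction docstring above, gathered into a set for a cheap local sanity check (an illustrative helper, not part of pytgbot):

```python
KNOWN_CHAT_ACTIONS = frozenset({
    "typing", "upload_photo", "record_video", "upload_video",
    "record_voice", "upload_voice", "upload_document", "choose_sticker",
    "find_location", "record_video_note", "upload_video_note",
})


def is_known_chat_action(action):
    """Return True if action is one of the values listed in the API docs."""
    return action in KNOWN_CHAT_ACTIONS
```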
def _get_user_profile_photos__make_request(self, user_id, offset=None, limit=None):
"""
Internal function for making the request to the API's getUserProfilePhotos endpoint.
Parameters:
:param user_id: Unique identifier of the target user
:type user_id: int
Optional keyword parameters:
:param offset: Sequential number of the first photo to be returned. By default, all photos are returned.
:type offset: int
:param limit: Limits the number of photos to be retrieved. Values between 1-100 are accepted. Defaults to 100.
:type limit: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(offset, None, int, parameter_name="offset")
assert_type_or_raise(limit, None, int, parameter_name="limit")
return self.do("getUserProfilePhotos", user_id=user_id, offset=offset, limit=limit)
# end def _get_user_profile_photos__make_request
def _get_user_profile_photos__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getUserProfilePhotos endpoint.
:return: Returns a UserProfilePhotos object
:rtype: pytgbot.api_types.receivable.media.UserProfilePhotos
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.media import UserProfilePhotos
try:
return UserProfilePhotos.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type UserProfilePhotos", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_user_profile_photos__process_result
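Because `limit` above is capped at 100, fetching a large photo set means paging with `offset`. A sketch of generating the `(offset, page_size)` pairs such a loop would pass to getUserProfilePhotos:

```python
def iter_pages(total_count, limit=100):
    """Yield (offset, page_size) pairs covering total_count photos."""
    offset = 0
    while offset < total_count:
        yield offset, min(limit, total_count - offset)
        offset += limit
```

In practice `total_count` comes from the `total_count` field of the first UserProfilePhotos response.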
def _get_file__make_request(self, file_id):
"""
Internal function for making the request to the API's getFile endpoint.
Parameters:
:param file_id: File identifier to get info about
:type file_id: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(file_id, unicode_type, parameter_name="file_id")
return self.do("getFile", file_id=file_id)
# end def _get_file__make_request
def _get_file__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getFile endpoint.
:return: On success, a File object is returned
:rtype: pytgbot.api_types.receivable.media.File
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.media import File
try:
return File.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type File", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_file__process_result
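getFile only returns metadata; per the Telegram Bot API documentation, the bytes themselves are then downloaded from `https://api.telegram.org/file/bot<token>/<file_path>`. A sketch of assembling that URL (the token and path values in the test are made up):

```python
def file_download_url(token, file_path, base="https://api.telegram.org"):
    """Build the download URL for a file_path returned by getFile."""
    return "{base}/file/bot{token}/{path}".format(
        base=base, token=token, path=file_path
    )
```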
def _ban_chat_member__make_request(self, chat_id, user_id, until_date=None, revoke_messages=None):
"""
Internal function for making the request to the API's banChatMember endpoint.
Parameters:
:param chat_id: Unique identifier for the target group or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
Optional keyword parameters:
:param until_date: Date when the user will be unbanned, unix time. If user is banned for more than 366 days or less than 30 seconds from the current time they are considered to be banned forever. Applied for supergroups and channels only.
:type until_date: int
:param revoke_messages: Pass True to delete all messages from the chat for the user that is being removed. If False, the user will be able to see messages in the group that were sent before the user was removed. Always True for supergroups and channels.
:type revoke_messages: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(until_date, None, int, parameter_name="until_date")
assert_type_or_raise(revoke_messages, None, bool, parameter_name="revoke_messages")
return self.do("banChatMember", chat_id=chat_id, user_id=user_id, until_date=until_date, revoke_messages=revoke_messages)
# end def _ban_chat_member__make_request
def _ban_chat_member__process_result(self, result):
"""
Internal function for processing the json data returned by the API's banChatMember endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _ban_chat_member__process_result
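The `until_date` semantics described above (a ban shorter than 30 seconds or longer than 366 days counts as permanent) can be expressed as a small predicate. A hypothetical helper, not part of pytgbot:

```python
SECONDS_PER_DAY = 24 * 60 * 60


def is_permanent_ban(until_date, now):
    """True if the API would treat this until_date as a forever-ban.

    Both arguments are unix timestamps, matching the until_date parameter.
    """
    delta = until_date - now
    return delta < 30 or delta > 366 * SECONDS_PER_DAY
```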
def _unban_chat_member__make_request(self, chat_id, user_id, only_if_banned=None):
"""
Internal function for making the request to the API's unbanChatMember endpoint.
Parameters:
:param chat_id: Unique identifier for the target group or username of the target supergroup or channel (in the format @username)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
Optional keyword parameters:
:param only_if_banned: Do nothing if the user is not banned
:type only_if_banned: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(only_if_banned, None, bool, parameter_name="only_if_banned")
return self.do("unbanChatMember", chat_id=chat_id, user_id=user_id, only_if_banned=only_if_banned)
# end def _unban_chat_member__make_request
def _unban_chat_member__process_result(self, result):
"""
Internal function for processing the json data returned by the API's unbanChatMember endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _unban_chat_member__process_result
def _restrict_chat_member__make_request(self, chat_id, user_id, permissions, until_date=None):
"""
Internal function for making the request to the API's restrictChatMember endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup (in the format @supergroupusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
:param permissions: A JSON-serialized object for new user permissions
:type permissions: pytgbot.api_types.receivable.peer.ChatPermissions
Optional keyword parameters:
:param until_date: Date when restrictions will be lifted for the user, unix time. If user is restricted for more than 366 days or less than 30 seconds from the current time, they are considered to be restricted forever
:type until_date: int
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.peer import ChatPermissions
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(permissions, ChatPermissions, parameter_name="permissions")
assert_type_or_raise(until_date, None, int, parameter_name="until_date")
return self.do("restrictChatMember", chat_id=chat_id, user_id=user_id, permissions=permissions, until_date=until_date)
# end def _restrict_chat_member__make_request
def _restrict_chat_member__process_result(self, result):
"""
Internal function for processing the json data returned by the API's restrictChatMember endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _restrict_chat_member__process_result
def _promote_chat_member__make_request(self, chat_id, user_id, is_anonymous=None, can_manage_chat=None, can_post_messages=None, can_edit_messages=None, can_delete_messages=None, can_manage_voice_chats=None, can_restrict_members=None, can_promote_members=None, can_change_info=None, can_invite_users=None, can_pin_messages=None):
"""
Internal function for making the request to the API's promoteChatMember endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
Optional keyword parameters:
:param is_anonymous: Pass True, if the administrator's presence in the chat is hidden
:type is_anonymous: bool
:param can_manage_chat: Pass True, if the administrator can access the chat event log, chat statistics, message statistics in channels, see channel members, see anonymous administrators in supergroups and ignore slow mode. Implied by any other administrator privilege
:type can_manage_chat: bool
:param can_post_messages: Pass True, if the administrator can create channel posts, channels only
:type can_post_messages: bool
:param can_edit_messages: Pass True, if the administrator can edit messages of other users and can pin messages, channels only
:type can_edit_messages: bool
:param can_delete_messages: Pass True, if the administrator can delete messages of other users
:type can_delete_messages: bool
:param can_manage_voice_chats: Pass True, if the administrator can manage voice chats
:type can_manage_voice_chats: bool
:param can_restrict_members: Pass True, if the administrator can restrict, ban or unban chat members
:type can_restrict_members: bool
:param can_promote_members: Pass True, if the administrator can add new administrators with a subset of their own privileges or demote administrators that he has promoted, directly or indirectly (promoted by administrators that were appointed by him)
:type can_promote_members: bool
:param can_change_info: Pass True, if the administrator can change chat title, photo and other settings
:type can_change_info: bool
:param can_invite_users: Pass True, if the administrator can invite new users to the chat
:type can_invite_users: bool
:param can_pin_messages: Pass True, if the administrator can pin messages, supergroups only
:type can_pin_messages: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(is_anonymous, None, bool, parameter_name="is_anonymous")
assert_type_or_raise(can_manage_chat, None, bool, parameter_name="can_manage_chat")
assert_type_or_raise(can_post_messages, None, bool, parameter_name="can_post_messages")
assert_type_or_raise(can_edit_messages, None, bool, parameter_name="can_edit_messages")
assert_type_or_raise(can_delete_messages, None, bool, parameter_name="can_delete_messages")
assert_type_or_raise(can_manage_voice_chats, None, bool, parameter_name="can_manage_voice_chats")
assert_type_or_raise(can_restrict_members, None, bool, parameter_name="can_restrict_members")
assert_type_or_raise(can_promote_members, None, bool, parameter_name="can_promote_members")
assert_type_or_raise(can_change_info, None, bool, parameter_name="can_change_info")
assert_type_or_raise(can_invite_users, None, bool, parameter_name="can_invite_users")
assert_type_or_raise(can_pin_messages, None, bool, parameter_name="can_pin_messages")
return self.do("promoteChatMember", chat_id=chat_id, user_id=user_id, is_anonymous=is_anonymous, can_manage_chat=can_manage_chat, can_post_messages=can_post_messages, can_edit_messages=can_edit_messages, can_delete_messages=can_delete_messages, can_manage_voice_chats=can_manage_voice_chats, can_restrict_members=can_restrict_members, can_promote_members=can_promote_members, can_change_info=can_change_info, can_invite_users=can_invite_users, can_pin_messages=can_pin_messages)
# end def _promote_chat_member__make_request
def _promote_chat_member__process_result(self, result):
"""
Internal function for processing the json data returned by the API's promoteChatMember endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _promote_chat_member__process_result
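Per the Bot API documentation, passing False for every permission flag of promoteChatMember demotes an administrator back to a regular member. A sketch of building that keyword dict for the `_promote_chat_member__make_request` call above (the flag list mirrors its parameters):

```python
PROMOTE_FLAGS = (
    "is_anonymous", "can_manage_chat", "can_post_messages",
    "can_edit_messages", "can_delete_messages", "can_manage_voice_chats",
    "can_restrict_members", "can_promote_members", "can_change_info",
    "can_invite_users", "can_pin_messages",
)


def demote_kwargs():
    """Keyword arguments that revoke every administrator privilege."""
    return {flag: False for flag in PROMOTE_FLAGS}
```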
def _set_chat_administrator_custom_title__make_request(self, chat_id, user_id, custom_title):
"""
Internal function for making the request to the API's setChatAdministratorCustomTitle endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup (in the format @supergroupusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
:param custom_title: New custom title for the administrator; 0-16 characters, emoji are not allowed
:type custom_title: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(custom_title, unicode_type, parameter_name="custom_title")
return self.do("setChatAdministratorCustomTitle", chat_id=chat_id, user_id=user_id, custom_title=custom_title)
# end def _set_chat_administrator_custom_title__make_request
def _set_chat_administrator_custom_title__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatAdministratorCustomTitle endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_chat_administrator_custom_title__process_result
def _ban_chat_sender_chat__make_request(self, chat_id, sender_chat_id):
"""
Internal function for making the request to the API's banChatSenderChat endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param sender_chat_id: Unique identifier of the target sender chat
:type sender_chat_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(sender_chat_id, int, parameter_name="sender_chat_id")
return self.do("banChatSenderChat", chat_id=chat_id, sender_chat_id=sender_chat_id)
# end def _ban_chat_sender_chat__make_request
def _ban_chat_sender_chat__process_result(self, result):
"""
Internal function for processing the json data returned by the API's banChatSenderChat endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _ban_chat_sender_chat__process_result
def _unban_chat_sender_chat__make_request(self, chat_id, sender_chat_id):
"""
Internal function for making the request to the API's unbanChatSenderChat endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param sender_chat_id: Unique identifier of the target sender chat
:type sender_chat_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(sender_chat_id, int, parameter_name="sender_chat_id")
return self.do("unbanChatSenderChat", chat_id=chat_id, sender_chat_id=sender_chat_id)
# end def _unban_chat_sender_chat__make_request
def _unban_chat_sender_chat__process_result(self, result):
"""
Internal function for processing the json data returned by the API's unbanChatSenderChat endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _unban_chat_sender_chat__process_result
def _set_chat_permissions__make_request(self, chat_id, permissions):
"""
Internal function for making the request to the API's setChatPermissions endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup (in the format @supergroupusername)
:type chat_id: int | str|unicode
:param permissions: A JSON-serialized object for new default chat permissions
:type permissions: pytgbot.api_types.receivable.peer.ChatPermissions
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.peer import ChatPermissions
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(permissions, ChatPermissions, parameter_name="permissions")
return self.do("setChatPermissions", chat_id=chat_id, permissions=permissions)
# end def _set_chat_permissions__make_request
def _set_chat_permissions__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatPermissions endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_chat_permissions__process_result
def _export_chat_invite_link__make_request(self, chat_id):
"""
Internal function for making the request to the API's exportChatInviteLink endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("exportChatInviteLink", chat_id=chat_id)
# end def _export_chat_invite_link__make_request
def _export_chat_invite_link__process_result(self, result):
"""
Internal function for processing the json data returned by the API's exportChatInviteLink endpoint.
:return: Returns the new invite link as String on success
:rtype: str|unicode
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(str, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive str", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _export_chat_invite_link__process_result
def _create_chat_invite_link__make_request(self, chat_id, name=None, expire_date=None, member_limit=None, creates_join_request=None):
"""
Internal function for making the request to the API's createChatInviteLink endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
Optional keyword parameters:
:param name: Invite link name; 0-32 characters
:type name: str|unicode
:param expire_date: Point in time (Unix timestamp) when the link will expire
:type expire_date: int
:param member_limit: Maximum number of users that can be members of the chat simultaneously after joining the chat via this invite link; 1-99999
:type member_limit: int
:param creates_join_request: True, if users joining the chat via the link need to be approved by chat administrators. If True, member_limit can't be specified
:type creates_join_request: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(name, None, unicode_type, parameter_name="name")
assert_type_or_raise(expire_date, None, int, parameter_name="expire_date")
assert_type_or_raise(member_limit, None, int, parameter_name="member_limit")
assert_type_or_raise(creates_join_request, None, bool, parameter_name="creates_join_request")
return self.do("createChatInviteLink", chat_id=chat_id, name=name, expire_date=expire_date, member_limit=member_limit, creates_join_request=creates_join_request)
# end def _create_chat_invite_link__make_request
def _create_chat_invite_link__process_result(self, result):
"""
Internal function for processing the json data returned by the API's createChatInviteLink endpoint.
:return: Returns the new invite link as ChatInviteLink object
:rtype: pytgbot.api_types.receivable.peer.ChatInviteLink
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import ChatInviteLink
try:
return ChatInviteLink.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type ChatInviteLink", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _create_chat_invite_link__process_result
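The optional parameters of createChatInviteLink carry constraints that the type asserts above do not enforce: `expire_date` is a Unix timestamp, `name` is capped at 32 characters, `member_limit` at 1-99999, and `member_limit` cannot be combined with `creates_join_request=True`. A hedged sketch of pre-validating them client-side; `build_invite_link_params` is a hypothetical helper, not part of pytgbot:

```python
import time

def build_invite_link_params(name=None, ttl_seconds=None, member_limit=None,
                             creates_join_request=None):
    """Validate and assemble createChatInviteLink keyword arguments."""
    if creates_join_request and member_limit is not None:
        # Mirrors the docstring: member_limit can't be specified if True.
        raise ValueError("member_limit can't be combined with creates_join_request=True")
    if name is not None and len(name) > 32:
        raise ValueError("name must be 0-32 characters")
    if member_limit is not None and not 1 <= member_limit <= 99999:
        raise ValueError("member_limit must be in 1-99999")
    params = {"name": name, "member_limit": member_limit,
              "creates_join_request": creates_join_request}
    if ttl_seconds is not None:
        # expire_date is an absolute Unix timestamp, not a duration.
        params["expire_date"] = int(time.time()) + ttl_seconds
    return params
```

The same checks apply unchanged to editChatInviteLink below, which takes the identical optional parameter set plus the link being edited.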
def _edit_chat_invite_link__make_request(self, chat_id, invite_link, name=None, expire_date=None, member_limit=None, creates_join_request=None):
"""
Internal function for making the request to the API's editChatInviteLink endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param invite_link: The invite link to edit
:type invite_link: str|unicode
Optional keyword parameters:
:param name: Invite link name; 0-32 characters
:type name: str|unicode
:param expire_date: Point in time (Unix timestamp) when the link will expire
:type expire_date: int
:param member_limit: Maximum number of users that can be members of the chat simultaneously after joining the chat via this invite link; 1-99999
:type member_limit: int
:param creates_join_request: True, if users joining the chat via the link need to be approved by chat administrators. If True, member_limit can't be specified
:type creates_join_request: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(invite_link, unicode_type, parameter_name="invite_link")
assert_type_or_raise(name, None, unicode_type, parameter_name="name")
assert_type_or_raise(expire_date, None, int, parameter_name="expire_date")
assert_type_or_raise(member_limit, None, int, parameter_name="member_limit")
assert_type_or_raise(creates_join_request, None, bool, parameter_name="creates_join_request")
return self.do("editChatInviteLink", chat_id=chat_id, invite_link=invite_link, name=name, expire_date=expire_date, member_limit=member_limit, creates_join_request=creates_join_request)
# end def _edit_chat_invite_link__make_request
def _edit_chat_invite_link__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editChatInviteLink endpoint.
:return: Returns the edited invite link as a ChatInviteLink object
:rtype: pytgbot.api_types.receivable.peer.ChatInviteLink
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import ChatInviteLink
try:
return ChatInviteLink.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type ChatInviteLink", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _edit_chat_invite_link__process_result
def _revoke_chat_invite_link__make_request(self, chat_id, invite_link):
"""
Internal function for making the request to the API's revokeChatInviteLink endpoint.
Parameters:
:param chat_id: Unique identifier of the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param invite_link: The invite link to revoke
:type invite_link: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(invite_link, unicode_type, parameter_name="invite_link")
return self.do("revokeChatInviteLink", chat_id=chat_id, invite_link=invite_link)
# end def _revoke_chat_invite_link__make_request
def _revoke_chat_invite_link__process_result(self, result):
"""
Internal function for processing the json data returned by the API's revokeChatInviteLink endpoint.
:return: Returns the revoked invite link as ChatInviteLink object
:rtype: pytgbot.api_types.receivable.peer.ChatInviteLink
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import ChatInviteLink
try:
return ChatInviteLink.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type ChatInviteLink", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _revoke_chat_invite_link__process_result
def _approve_chat_join_request__make_request(self, chat_id, user_id):
"""
Internal function for making the request to the API's approveChatJoinRequest endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
return self.do("approveChatJoinRequest", chat_id=chat_id, user_id=user_id)
# end def _approve_chat_join_request__make_request
def _approve_chat_join_request__process_result(self, result):
"""
Internal function for processing the json data returned by the API's approveChatJoinRequest endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _approve_chat_join_request__process_result
def _decline_chat_join_request__make_request(self, chat_id, user_id):
"""
Internal function for making the request to the API's declineChatJoinRequest endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
return self.do("declineChatJoinRequest", chat_id=chat_id, user_id=user_id)
# end def _decline_chat_join_request__make_request
def _decline_chat_join_request__process_result(self, result):
"""
Internal function for processing the json data returned by the API's declineChatJoinRequest endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _decline_chat_join_request__process_result
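approveChatJoinRequest and declineChatJoinRequest are typically used together: a handler receives pending join requests and routes each user to one endpoint or the other. A minimal sketch of that triage step under the assumption that requests arrive as plain user ids and an allowlist decides the outcome (both inputs are hypothetical; a real handler would consume ChatJoinRequest updates and then call the two bot methods per id):

```python
def triage_join_requests(pending_user_ids, allowlist):
    """Split pending user ids into (approve, decline) buckets."""
    approve, decline = [], []
    for user_id in pending_user_ids:
        # Allowlisted users get approved; everyone else is declined.
        (approve if user_id in allowlist else decline).append(user_id)
    return approve, decline
```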
def _set_chat_photo__make_request(self, chat_id, photo):
"""
Internal function for making the request to the API's setChatPhoto endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param photo: New chat photo, uploaded using multipart/form-data
:type photo: pytgbot.api_types.sendable.files.InputFile
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(photo, InputFile, parameter_name="photo")
return self.do("setChatPhoto", chat_id=chat_id, photo=photo)
# end def _set_chat_photo__make_request
def _set_chat_photo__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatPhoto endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_chat_photo__process_result
def _delete_chat_photo__make_request(self, chat_id):
"""
Internal function for making the request to the API's deleteChatPhoto endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("deleteChatPhoto", chat_id=chat_id)
# end def _delete_chat_photo__make_request
def _delete_chat_photo__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteChatPhoto endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _delete_chat_photo__process_result
def _set_chat_title__make_request(self, chat_id, title):
"""
Internal function for making the request to the API's setChatTitle endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param title: New chat title, 1-255 characters
:type title: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(title, unicode_type, parameter_name="title")
return self.do("setChatTitle", chat_id=chat_id, title=title)
# end def _set_chat_title__make_request
def _set_chat_title__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatTitle endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_chat_title__process_result
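setChatTitle only asserts that `title` is a string; the 1-255 character limit from the docstring is enforced server-side. A hedged sketch of catching that locally before spending a request; `validate_chat_title` is a hypothetical helper, not part of the generated API surface:

```python
def validate_chat_title(title):
    """Reject titles the API would refuse: must be str, 1-255 characters."""
    if not isinstance(title, str):
        raise TypeError("title must be str")
    if not 1 <= len(title) <= 255:
        raise ValueError("title must be 1-255 characters")
    return title
```

The analogous limits elsewhere in this section (custom_title 0-16 characters, description 0-255, invite link name 0-32) could be checked the same way.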
def _set_chat_description__make_request(self, chat_id, description=None):
"""
Internal function for making the request to the API's setChatDescription endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
Optional keyword parameters:
:param description: New chat description, 0-255 characters
:type description: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(description, None, unicode_type, parameter_name="description")
return self.do("setChatDescription", chat_id=chat_id, description=description)
# end def _set_chat_description__make_request
def _set_chat_description__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatDescription endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_chat_description__process_result
def _pin_chat_message__make_request(self, chat_id, message_id, disable_notification=None):
"""
Internal function for making the request to the API's pinChatMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Identifier of a message to pin
:type message_id: int
Optional keyword parameters:
:param disable_notification: Pass True, if it is not necessary to send a notification to all chat members about the new pinned message. Notifications are always disabled in channels and private chats.
:type disable_notification: bool
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, int, parameter_name="message_id")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
return self.do("pinChatMessage", chat_id=chat_id, message_id=message_id, disable_notification=disable_notification)
# end def _pin_chat_message__make_request
def _pin_chat_message__process_result(self, result):
"""
Internal function for processing the json data returned by the API's pinChatMessage endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _pin_chat_message__process_result
def _unpin_chat_message__make_request(self, chat_id, message_id=None):
"""
Internal function for making the request to the API's unpinChatMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
Optional keyword parameters:
:param message_id: Identifier of a message to unpin. If not specified, the most recent pinned message (by sending date) will be unpinned.
:type message_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
return self.do("unpinChatMessage", chat_id=chat_id, message_id=message_id)
# end def _unpin_chat_message__make_request
def _unpin_chat_message__process_result(self, result):
"""
Internal function for processing the json data returned by the API's unpinChatMessage endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _unpin_chat_message__process_result
def _unpin_all_chat_messages__make_request(self, chat_id):
"""
Internal function for making the request to the API's unpinAllChatMessages endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("unpinAllChatMessages", chat_id=chat_id)
# end def _unpin_all_chat_messages__make_request
def _unpin_all_chat_messages__process_result(self, result):
"""
Internal function for processing the json data returned by the API's unpinAllChatMessages endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _unpin_all_chat_messages__process_result
def _leave_chat__make_request(self, chat_id):
"""
Internal function for making the request to the API's leaveChat endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("leaveChat", chat_id=chat_id)
# end def _leave_chat__make_request
def _leave_chat__process_result(self, result):
"""
Internal function for processing the json data returned by the API's leaveChat endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _leave_chat__process_result
def _get_chat__make_request(self, chat_id):
"""
Internal function for making the request to the API's getChat endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("getChat", chat_id=chat_id)
# end def _get_chat__make_request
def _get_chat__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getChat endpoint.
:return: Returns a Chat object on success
:rtype: pytgbot.api_types.receivable.peer.Chat
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import Chat
try:
return Chat.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Chat", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_chat__process_result
def _get_chat_administrators__make_request(self, chat_id):
"""
Internal function for making the request to the API's getChatAdministrators endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("getChatAdministrators", chat_id=chat_id)
# end def _get_chat_administrators__make_request
def _get_chat_administrators__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getChatAdministrators endpoint.
:return: On success, returns an Array of ChatMember objects that contains information about all chat administrators except other bots
:rtype: list of pytgbot.api_types.receivable.peer.ChatMember
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import ChatMember
try:
return ChatMember.from_array_list(result, list_level=1)
except TgApiParseException:
logger.debug("Failed parsing as api_type ChatMember", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_chat_administrators__process_result
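When `return_python_objects` is False, getChatAdministrators hands back the raw decoded JSON: a list of ChatMember dicts, each with a `status` and a nested `user` object. A self-contained sketch of extracting human administrator ids from such a payload (the sample shape below follows the Bot API's ChatMember/User fields but is illustrative, not a recorded response):

```python
def admin_user_ids(chat_members):
    """Collect user ids of creator/administrator entries, skipping bots."""
    return [
        m["user"]["id"]
        for m in chat_members
        if m["status"] in ("creator", "administrator")
        and not m["user"].get("is_bot", False)
    ]
```

With `return_python_objects` left at its default, the same filtering would instead read `member.status` and `member.user.is_bot` on the parsed ChatMember objects.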
def _get_chat_member_count__make_request(self, chat_id):
"""
Internal function for making the request to the API's getChatMemberCount endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("getChatMemberCount", chat_id=chat_id)
# end def _get_chat_member_count__make_request
def _get_chat_member_count__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getChatMemberCount endpoint.
:return: Returns Int on success
:rtype: int
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(int, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive int", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_chat_member_count__process_result
def _get_chat_member__make_request(self, chat_id, user_id):
"""
Internal function for making the request to the API's getChatMember endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup or channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param user_id: Unique identifier of the target user
:type user_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(user_id, int, parameter_name="user_id")
return self.do("getChatMember", chat_id=chat_id, user_id=user_id)
# end def _get_chat_member__make_request
def _get_chat_member__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getChatMember endpoint.
:return: Returns a ChatMember object on success
:rtype: pytgbot.api_types.receivable.peer.ChatMember
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.peer import ChatMember
try:
return ChatMember.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type ChatMember", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_chat_member__process_result
def _set_chat_sticker_set__make_request(self, chat_id, sticker_set_name):
"""
Internal function for making the request to the API's setChatStickerSet endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup (in the format @supergroupusername)
:type chat_id: int | str|unicode
:param sticker_set_name: Name of the sticker set to be set as the group sticker set
:type sticker_set_name: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(sticker_set_name, unicode_type, parameter_name="sticker_set_name")
return self.do("setChatStickerSet", chat_id=chat_id, sticker_set_name=sticker_set_name)
# end def _set_chat_sticker_set__make_request
def _set_chat_sticker_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setChatStickerSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _set_chat_sticker_set__process_result
def _delete_chat_sticker_set__make_request(self, chat_id):
"""
Internal function for making the request to the API's deleteChatStickerSet endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target supergroup (in the format @supergroupusername)
:type chat_id: int | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
return self.do("deleteChatStickerSet", chat_id=chat_id)
# end def _delete_chat_sticker_set__make_request
def _delete_chat_sticker_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteChatStickerSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _delete_chat_sticker_set__process_result
def _answer_callback_query__make_request(self, callback_query_id, text=None, show_alert=None, url=None, cache_time=None):
"""
Internal function for making the request to the API's answerCallbackQuery endpoint.
Parameters:
:param callback_query_id: Unique identifier for the query to be answered
:type callback_query_id: str|unicode
Optional keyword parameters:
:param text: Text of the notification. If not specified, nothing will be shown to the user, 0-200 characters
:type text: str|unicode
:param show_alert: If True, an alert will be shown by the client instead of a notification at the top of the chat screen. Defaults to false.
:type show_alert: bool
:param url: URL that will be opened by the user's client. If you have created a Game and accepted the conditions via @BotFather, specify the URL that opens your game — note that this will only work if the query comes from a callback_game button. Otherwise, you may use links like t.me/your_bot?start=XXXX that open your bot with a parameter.
:type url: str|unicode
:param cache_time: The maximum amount of time in seconds that the result of the callback query may be cached client-side. Telegram apps will support caching starting in version 3.14. Defaults to 0.
:type cache_time: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(callback_query_id, unicode_type, parameter_name="callback_query_id")
assert_type_or_raise(text, None, unicode_type, parameter_name="text")
assert_type_or_raise(show_alert, None, bool, parameter_name="show_alert")
assert_type_or_raise(url, None, unicode_type, parameter_name="url")
assert_type_or_raise(cache_time, None, int, parameter_name="cache_time")
return self.do("answerCallbackQuery", callback_query_id=callback_query_id, text=text, show_alert=show_alert, url=url, cache_time=cache_time)
# end def _answer_callback_query__make_request
def _answer_callback_query__process_result(self, result):
"""
Internal function for processing the json data returned by the API's answerCallbackQuery endpoint.
:return: On success, True is returned
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _answer_callback_query__process_result
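# Hypothetical usage sketch for the two-step pattern above; the `bot` instance
# and the callback query id are illustrative assumptions, not values taken
# from this file:
#
#     raw = bot._answer_callback_query__make_request(
#         callback_query_id="4382abc", text="Thanks!", show_alert=False,
#     )
#     ok = bot._answer_callback_query__process_result(raw)
#
# With return_python_objects=True the second call returns the parsed bool;
# otherwise the decoded json from the first call is passed through unchanged.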
def _set_my_commands__make_request(self, commands, scope=None, language_code=None):
"""
Internal function for making the request to the API's setMyCommands endpoint.
Parameters:
:param commands: A JSON-serialized list of bot commands to be set as the list of the bot's commands. At most 100 commands can be specified.
:type commands: list of pytgbot.api_types.sendable.command.BotCommand
Optional keyword parameters:
:param scope: A JSON-serialized object, describing scope of users for which the commands are relevant. Defaults to BotCommandScopeDefault.
:type scope: pytgbot.api_types.sendable.command.BotCommandScope
:param language_code: A two-letter ISO 639-1 language code. If empty, commands will be applied to all users from the given scope, for whose language there are no dedicated commands
:type language_code: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.command import BotCommandScope
assert_type_or_raise(commands, list, parameter_name="commands")
assert_type_or_raise(scope, None, BotCommandScope, parameter_name="scope")
assert_type_or_raise(language_code, None, unicode_type, parameter_name="language_code")
return self.do("setMyCommands", commands=commands, scope=scope, language_code=language_code)
# end def _set_my_commands__make_request
def _set_my_commands__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setMyCommands endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _set_my_commands__process_result
def _delete_my_commands__make_request(self, scope=None, language_code=None):
"""
Internal function for making the request to the API's deleteMyCommands endpoint.
Optional keyword parameters:
:param scope: A JSON-serialized object, describing scope of users for which the commands are relevant. Defaults to BotCommandScopeDefault.
:type scope: pytgbot.api_types.sendable.command.BotCommandScope
:param language_code: A two-letter ISO 639-1 language code. If empty, commands will be applied to all users from the given scope, for whose language there are no dedicated commands
:type language_code: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.command import BotCommandScope
assert_type_or_raise(scope, None, BotCommandScope, parameter_name="scope")
assert_type_or_raise(language_code, None, unicode_type, parameter_name="language_code")
return self.do("deleteMyCommands", scope=scope, language_code=language_code)
# end def _delete_my_commands__make_request
def _delete_my_commands__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteMyCommands endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _delete_my_commands__process_result
def _get_my_commands__make_request(self, scope=None, language_code=None):
"""
Internal function for making the request to the API's getMyCommands endpoint.
Optional keyword parameters:
:param scope: A JSON-serialized object, describing scope of users. Defaults to BotCommandScopeDefault.
:type scope: pytgbot.api_types.sendable.command.BotCommandScope
:param language_code: A two-letter ISO 639-1 language code or an empty string
:type language_code: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.command import BotCommandScope
assert_type_or_raise(scope, None, BotCommandScope, parameter_name="scope")
assert_type_or_raise(language_code, None, unicode_type, parameter_name="language_code")
return self.do("getMyCommands", scope=scope, language_code=language_code)
# end def _get_my_commands__make_request
def _get_my_commands__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getMyCommands endpoint.
:return: On success, an array of the commands is returned. If commands aren't set, an empty list is returned
:rtype: list of pytgbot.api_types.sendable.command.BotCommand
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.sendable.command import BotCommand
try:
return BotCommand.from_array_list(result, list_level=1)
except TgApiParseException:
logger.debug("Failed parsing as api_type BotCommand", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _get_my_commands__process_result
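# Hypothetical sketch of the commands round-trip through the helpers above;
# the BotCommand values are illustrative assumptions:
#
#     from pytgbot.api_types.sendable.command import BotCommand
#     cmds = [BotCommand(command="start", description="Start the bot")]
#     bot._set_my_commands__process_result(
#         bot._set_my_commands__make_request(commands=cmds)
#     )
#     raw = bot._get_my_commands__make_request()
#     listed = bot._get_my_commands__process_result(raw)
#     # with return_python_objects=True: a list of BotCommand (possibly empty)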
def _edit_message_text__make_request(self, text, chat_id=None, message_id=None, inline_message_id=None, parse_mode=None, entities=None, disable_web_page_preview=None, reply_markup=None):
"""
Internal function for making the request to the API's editMessageText endpoint.
Parameters:
:param text: New text of the message, 1-4096 characters after entities parsing
:type text: str|unicode
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message to edit
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param parse_mode: Mode for parsing entities in the message text. See formatting options for more details.
:type parse_mode: str|unicode
:param entities: A JSON-serialized list of special entities that appear in message text, which can be specified instead of parse_mode
:type entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param disable_web_page_preview: Disables link previews for links in this message
:type disable_web_page_preview: bool
:param reply_markup: A JSON-serialized object for an inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(text, unicode_type, parameter_name="text")
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(entities, None, list, parameter_name="entities")
assert_type_or_raise(disable_web_page_preview, None, bool, parameter_name="disable_web_page_preview")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("editMessageText", text=text, chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, parse_mode=parse_mode, entities=entities, disable_web_page_preview=disable_web_page_preview, reply_markup=reply_markup)
# end def _edit_message_text__make_request
def _edit_message_text__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editMessageText endpoint.
:return: On success, if the edited message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _edit_message_text__process_result
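# Addressing sketch for the edit* endpoints: a regular message is addressed
# via chat_id + message_id, an inline message via inline_message_id alone.
# The values below are illustrative assumptions:
#
#     raw = bot._edit_message_text__make_request(
#         text="updated", chat_id=12345, message_id=678,
#     )
#     # or, for an inline message:
#     # raw = bot._edit_message_text__make_request(text="updated", inline_message_id="AAA...")
#     edited = bot._edit_message_text__process_result(raw)
#     # with return_python_objects=True: a Message for a regular edit,
#     # True for an inline edit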
def _edit_message_caption__make_request(self, chat_id=None, message_id=None, inline_message_id=None, caption=None, parse_mode=None, caption_entities=None, reply_markup=None):
"""
Internal function for making the request to the API's editMessageCaption endpoint.
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message to edit
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param caption: New caption of the message, 0-1024 characters after entities parsing
:type caption: str|unicode
:param parse_mode: Mode for parsing entities in the message caption. See formatting options for more details.
:type parse_mode: str|unicode
:param caption_entities: A JSON-serialized list of special entities that appear in the caption, which can be specified instead of parse_mode
:type caption_entities: list of pytgbot.api_types.receivable.media.MessageEntity
:param reply_markup: A JSON-serialized object for an inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(caption, None, unicode_type, parameter_name="caption")
assert_type_or_raise(parse_mode, None, unicode_type, parameter_name="parse_mode")
assert_type_or_raise(caption_entities, None, list, parameter_name="caption_entities")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("editMessageCaption", chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, caption=caption, parse_mode=parse_mode, caption_entities=caption_entities, reply_markup=reply_markup)
# end def _edit_message_caption__make_request
def _edit_message_caption__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editMessageCaption endpoint.
:return: On success, if the edited message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _edit_message_caption__process_result
def _edit_message_media__make_request(self, media, chat_id=None, message_id=None, inline_message_id=None, reply_markup=None):
"""
Internal function for making the request to the API's editMessageMedia endpoint.
Parameters:
:param media: A JSON-serialized object for a new media content of the message
:type media: pytgbot.api_types.sendable.input_media.InputMedia
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message to edit
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param reply_markup: A JSON-serialized object for a new inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.input_media import InputMedia
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(media, InputMedia, parameter_name="media")
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("editMessageMedia", media=media, chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, reply_markup=reply_markup)
# end def _edit_message_media__make_request
def _edit_message_media__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editMessageMedia endpoint.
:return: On success, if the edited message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _edit_message_media__process_result
def _edit_message_reply_markup__make_request(self, chat_id=None, message_id=None, inline_message_id=None, reply_markup=None):
"""
Internal function for making the request to the API's editMessageReplyMarkup endpoint.
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Required if inline_message_id is not specified. Identifier of the message to edit
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:param reply_markup: A JSON-serialized object for an inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, None, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("editMessageReplyMarkup", chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id, reply_markup=reply_markup)
# end def _edit_message_reply_markup__make_request
def _edit_message_reply_markup__process_result(self, result):
"""
Internal function for processing the json data returned by the API's editMessageReplyMarkup endpoint.
:return: On success, if the edited message is not an inline message, the edited Message is returned, otherwise True is returned
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _edit_message_reply_markup__process_result
def _stop_poll__make_request(self, chat_id, message_id, reply_markup=None):
"""
Internal function for making the request to the API's stopPoll endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Identifier of the original message with the poll
:type message_id: int
Optional keyword parameters:
:param reply_markup: A JSON-serialized object for a new message inline keyboard.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, int, parameter_name="message_id")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("stopPoll", chat_id=chat_id, message_id=message_id, reply_markup=reply_markup)
# end def _stop_poll__make_request
def _stop_poll__process_result(self, result):
"""
Internal function for processing the json data returned by the API's stopPoll endpoint.
:return: On success, the stopped Poll is returned
:rtype: pytgbot.api_types.receivable.media.Poll
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.media import Poll
try:
return Poll.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Poll", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _stop_poll__process_result
def _delete_message__make_request(self, chat_id, message_id):
"""
Internal function for making the request to the API's deleteMessage endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param message_id: Identifier of the message to delete
:type message_id: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(message_id, int, parameter_name="message_id")
return self.do("deleteMessage", chat_id=chat_id, message_id=message_id)
# end def _delete_message__make_request
def _delete_message__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteMessage endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _delete_message__process_result
def _send_sticker__make_request(self, chat_id, sticker, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendSticker endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param sticker: Sticker to send. Pass a file_id as String to send a file that exists on the Telegram servers (recommended), pass an HTTP URL as a String for Telegram to get a .WEBP file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type sticker: pytgbot.api_types.sendable.files.InputFile | str|unicode
Optional keyword parameters:
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: Additional interface options. A JSON-serialized object for an inline keyboard, custom reply keyboard, instructions to remove reply keyboard or to force a reply from the user.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardMarkup | pytgbot.api_types.sendable.reply_markup.ReplyKeyboardRemove | pytgbot.api_types.sendable.reply_markup.ForceReply
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
from pytgbot.api_types.sendable.reply_markup import ForceReply
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardMarkup
from pytgbot.api_types.sendable.reply_markup import ReplyKeyboardRemove
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(sticker, (InputFile, unicode_type), parameter_name="sticker")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, (InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply), parameter_name="reply_markup")
return self.do("sendSticker", chat_id=chat_id, sticker=sticker, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_sticker__make_request
def _send_sticker__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendSticker endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_sticker__process_result
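# Hypothetical usage sketch; the chat id and file_id below are illustrative
# assumptions:
#
#     raw = bot._send_sticker__make_request(
#         chat_id="@examplechannel", sticker="CAACAgIAAxk...",
#     )
#     msg = bot._send_sticker__process_result(raw)
#     # with return_python_objects=True: the sent Message object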
def _get_sticker_set__make_request(self, name):
"""
Internal function for making the request to the API's getStickerSet endpoint.
Parameters:
:param name: Name of the sticker set
:type name: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(name, unicode_type, parameter_name="name")
return self.do("getStickerSet", name=name)
# end def _get_sticker_set__make_request
def _get_sticker_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getStickerSet endpoint.
:return: On success, a StickerSet object is returned
:rtype: pytgbot.api_types.receivable.stickers.StickerSet
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.stickers import StickerSet
try:
return StickerSet.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type StickerSet", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _get_sticker_set__process_result
def _upload_sticker_file__make_request(self, user_id, png_sticker):
"""
Internal function for making the request to the API's uploadStickerFile endpoint.
Parameters:
:param user_id: User identifier of sticker file owner
:type user_id: int
:param png_sticker: PNG image with the sticker, must be up to 512 kilobytes in size, dimensions must not exceed 512px, and either width or height must be exactly 512px. More info on Sending Files »
:type png_sticker: pytgbot.api_types.sendable.files.InputFile
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(png_sticker, InputFile, parameter_name="png_sticker")
return self.do("uploadStickerFile", user_id=user_id, png_sticker=png_sticker)
# end def _upload_sticker_file__make_request
def _upload_sticker_file__process_result(self, result):
"""
Internal function for processing the json data returned by the API's uploadStickerFile endpoint.
:return: Returns the uploaded File on success
:rtype: pytgbot.api_types.receivable.media.File
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.media import File
try:
return File.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type File", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _upload_sticker_file__process_result
def _create_new_sticker_set__make_request(self, user_id, name, title, emojis, png_sticker=None, tgs_sticker=None, contains_masks=None, mask_position=None):
"""
Internal function for making the request to the API's createNewStickerSet endpoint.
Parameters:
:param user_id: User identifier of created sticker set owner
:type user_id: int
:param name: Short name of sticker set, to be used in t.me/addstickers/ URLs (e.g., animals). Can contain only English letters, digits and underscores. Must begin with a letter, can't contain consecutive underscores and must end in "_by_<bot_username>". <bot_username> is case insensitive. 1-64 characters.
:type name: str|unicode
:param title: Sticker set title, 1-64 characters
:type title: str|unicode
:param emojis: One or more emoji corresponding to the sticker
:type emojis: str|unicode
Optional keyword parameters:
:param png_sticker: PNG image with the sticker, must be up to 512 kilobytes in size, dimensions must not exceed 512px, and either width or height must be exactly 512px. Pass a file_id as a String to send a file that already exists on the Telegram servers, pass an HTTP URL as a String for Telegram to get a file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type png_sticker: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param tgs_sticker: TGS animation with the sticker, uploaded using multipart/form-data. See https://core.telegram.org/animated_stickers#technical-requirements for technical requirements
:type tgs_sticker: pytgbot.api_types.sendable.files.InputFile
:param contains_masks: Pass True, if a set of mask stickers should be created
:type contains_masks: bool
:param mask_position: A JSON-serialized object for position where the mask should be placed on faces
:type mask_position: pytgbot.api_types.receivable.stickers.MaskPosition
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.stickers import MaskPosition
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(name, unicode_type, parameter_name="name")
assert_type_or_raise(title, unicode_type, parameter_name="title")
assert_type_or_raise(emojis, unicode_type, parameter_name="emojis")
assert_type_or_raise(png_sticker, None, (InputFile, unicode_type), parameter_name="png_sticker")
assert_type_or_raise(tgs_sticker, None, InputFile, parameter_name="tgs_sticker")
assert_type_or_raise(contains_masks, None, bool, parameter_name="contains_masks")
assert_type_or_raise(mask_position, None, MaskPosition, parameter_name="mask_position")
return self.do("createNewStickerSet", user_id=user_id, name=name, title=title, emojis=emojis, png_sticker=png_sticker, tgs_sticker=tgs_sticker, contains_masks=contains_masks, mask_position=mask_position)
# end def _create_new_sticker_set__make_request
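# The validation calls above all follow one convention: a leading None among
# the allowed types marks the parameter as optional, and a tuple expresses a
# union such as (InputFile, unicode_type). A minimal illustrative
# reimplementation of that convention (not the actual helper imported by
# pytgbot) could look like:

```python
def assert_type_or_raise(value, *allowed, parameter_name):
    """Raise TypeError unless `value` matches one of `allowed`.

    A leading None in `allowed` marks the parameter optional; tuples are
    flattened so unions like (int, str) work. Illustrative sketch only.
    """
    allowed = list(allowed)
    if None in allowed:
        if value is None:
            return  # optional parameter, not set
        allowed.remove(None)
    types = []
    for entry in allowed:
        types.extend(entry if isinstance(entry, tuple) else (entry,))
    if not isinstance(value, tuple(types)):
        raise TypeError(
            "{name} should be type {types}, but is {value!r}".format(
                name=parameter_name, types=types, value=value))


assert_type_or_raise(12345, int, parameter_name="user_id")               # passes
assert_type_or_raise(None, None, bool, parameter_name="contains_masks")  # optional, passes
```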
def _create_new_sticker_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's createNewStickerSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _create_new_sticker_set__process_result
def _add_sticker_to_set__make_request(self, user_id, name, emojis, png_sticker=None, tgs_sticker=None, mask_position=None):
"""
Internal function for making the request to the API's addStickerToSet endpoint.
Parameters:
:param user_id: User identifier of sticker set owner
:type user_id: int
:param name: Sticker set name
:type name: str|unicode
:param emojis: One or more emoji corresponding to the sticker
:type emojis: str|unicode
Optional keyword parameters:
:param png_sticker: PNG image with the sticker, must be up to 512 kilobytes in size, dimensions must not exceed 512px, and either width or height must be exactly 512px. Pass a file_id as a String to send a file that already exists on the Telegram servers, pass an HTTP URL as a String for Telegram to get a file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files »
:type png_sticker: pytgbot.api_types.sendable.files.InputFile | str|unicode
:param tgs_sticker: TGS animation with the sticker, uploaded using multipart/form-data. See https://core.telegram.org/animated_stickers#technical-requirements for technical requirements
:type tgs_sticker: pytgbot.api_types.sendable.files.InputFile
:param mask_position: A JSON-serialized object for position where the mask should be placed on faces
:type mask_position: pytgbot.api_types.receivable.stickers.MaskPosition
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.receivable.stickers import MaskPosition
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(name, unicode_type, parameter_name="name")
assert_type_or_raise(emojis, unicode_type, parameter_name="emojis")
assert_type_or_raise(png_sticker, None, (InputFile, unicode_type), parameter_name="png_sticker")
assert_type_or_raise(tgs_sticker, None, InputFile, parameter_name="tgs_sticker")
assert_type_or_raise(mask_position, None, MaskPosition, parameter_name="mask_position")
return self.do("addStickerToSet", user_id=user_id, name=name, emojis=emojis, png_sticker=png_sticker, tgs_sticker=tgs_sticker, mask_position=mask_position)
# end def _add_sticker_to_set__make_request
def _add_sticker_to_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's addStickerToSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _add_sticker_to_set__process_result
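# Every boolean endpoint above shares the same result-processing shape: honor
# raw-JSON mode, attempt the primitive parse, fall through on failure, then
# raise. A self-contained sketch of that shape (ParseError stands in for
# TgApiParseException, and from_bool mimics the semantics of
# from_array_list(bool, result, list_level=0, is_builtin=True)):

```python
class ParseError(ValueError):
    """Stand-in for TgApiParseException."""


def from_bool(result):
    # Accept only a genuine bool; anything else is a parse failure.
    if isinstance(result, bool):
        return result
    raise ParseError("not a bool")


def process_bool_result(result, return_python_objects=True):
    if not return_python_objects:
        return result  # raw-JSON mode: hand back whatever the API sent
    try:
        return from_bool(result)
    except ParseError:
        pass  # a real implementation logs the failure with exc_info=True
    # no valid parsing so far
    raise ParseError("Could not parse result.")


print(process_bool_result(True))  # prints: True
```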
def _set_sticker_position_in_set__make_request(self, sticker, position):
"""
Internal function for making the request to the API's setStickerPositionInSet endpoint.
Parameters:
:param sticker: File identifier of the sticker
:type sticker: str|unicode
:param position: New sticker position in the set, zero-based
:type position: int
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(sticker, unicode_type, parameter_name="sticker")
assert_type_or_raise(position, int, parameter_name="position")
return self.do("setStickerPositionInSet", sticker=sticker, position=position)
# end def _set_sticker_position_in_set__make_request
def _set_sticker_position_in_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setStickerPositionInSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _set_sticker_position_in_set__process_result
def _delete_sticker_from_set__make_request(self, sticker):
"""
Internal function for making the request to the API's deleteStickerFromSet endpoint.
Parameters:
:param sticker: File identifier of the sticker
:type sticker: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(sticker, unicode_type, parameter_name="sticker")
return self.do("deleteStickerFromSet", sticker=sticker)
# end def _delete_sticker_from_set__make_request
def _delete_sticker_from_set__process_result(self, result):
"""
Internal function for processing the json data returned by the API's deleteStickerFromSet endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _delete_sticker_from_set__process_result
def _set_sticker_set_thumb__make_request(self, name, user_id, thumb=None):
"""
Internal function for making the request to the API's setStickerSetThumb endpoint.
Parameters:
:param name: Sticker set name
:type name: str|unicode
:param user_id: User identifier of the sticker set owner
:type user_id: int
Optional keyword parameters:
:param thumb: A PNG image with the thumbnail, must be up to 128 kilobytes in size and have width and height exactly 100px, or a TGS animation with the thumbnail up to 32 kilobytes in size; see https://core.telegram.org/animated_stickers#technical-requirements for animated sticker technical requirements. Pass a file_id as a String to send a file that already exists on the Telegram servers, pass an HTTP URL as a String for Telegram to get a file from the Internet, or upload a new one using multipart/form-data. More info on Sending Files ». Animated sticker set thumbnail can't be uploaded via HTTP URL.
:type thumb: pytgbot.api_types.sendable.files.InputFile | str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.files import InputFile
assert_type_or_raise(name, unicode_type, parameter_name="name")
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(thumb, None, (InputFile, unicode_type), parameter_name="thumb")
return self.do("setStickerSetThumb", name=name, user_id=user_id, thumb=thumb)
# end def _set_sticker_set_thumb__make_request
def _set_sticker_set_thumb__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setStickerSetThumb endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _set_sticker_set_thumb__process_result
def _answer_inline_query__make_request(self, inline_query_id, results, cache_time=None, is_personal=None, next_offset=None, switch_pm_text=None, switch_pm_parameter=None):
"""
Internal function for making the request to the API's answerInlineQuery endpoint.
Parameters:
:param inline_query_id: Unique identifier for the answered query
:type inline_query_id: str|unicode
:param results: A JSON-serialized array of results for the inline query
:type results: list of pytgbot.api_types.sendable.inline.InlineQueryResult
Optional keyword parameters:
:param cache_time: The maximum amount of time in seconds that the result of the inline query may be cached on the server. Defaults to 300.
:type cache_time: int
:param is_personal: Pass True, if results may be cached on the server side only for the user that sent the query. By default, results may be returned to any user who sends the same query
:type is_personal: bool
:param next_offset: Pass the offset that a client should send in the next query with the same text to receive more results. Pass an empty string if there are no more results or if you don't support pagination. Offset length can't exceed 64 bytes.
:type next_offset: str|unicode
:param switch_pm_text: If passed, clients will display a button with specified text that switches the user to a private chat with the bot and sends the bot a start message with the parameter switch_pm_parameter
:type switch_pm_text: str|unicode
:param switch_pm_parameter: Deep-linking parameter for the /start message sent to the bot when user presses the switch button. 1-64 characters, only A-Z, a-z, 0-9, _ and - are allowed. Example: An inline bot that sends YouTube videos can ask the user to connect the bot to their YouTube account to adapt search results accordingly. To do this, it displays a 'Connect your YouTube account' button above the results, or even before showing any. The user presses the button, switches to a private chat with the bot and, in doing so, passes a start parameter that instructs the bot to return an OAuth link. Once done, the bot can offer a switch_inline button so that the user can easily return to the chat where they wanted to use the bot's inline capabilities.
:type switch_pm_parameter: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.inline import InlineQueryResult
assert_type_or_raise(inline_query_id, unicode_type, parameter_name="inline_query_id")
assert_type_or_raise(results, list, parameter_name="results")
assert_type_or_raise(cache_time, None, int, parameter_name="cache_time")
assert_type_or_raise(is_personal, None, bool, parameter_name="is_personal")
assert_type_or_raise(next_offset, None, unicode_type, parameter_name="next_offset")
assert_type_or_raise(switch_pm_text, None, unicode_type, parameter_name="switch_pm_text")
assert_type_or_raise(switch_pm_parameter, None, unicode_type, parameter_name="switch_pm_parameter")
return self.do("answerInlineQuery", inline_query_id=inline_query_id, results=results, cache_time=cache_time, is_personal=is_personal, next_offset=next_offset, switch_pm_text=switch_pm_text, switch_pm_parameter=switch_pm_parameter)
# end def _answer_inline_query__make_request
def _answer_inline_query__process_result(self, result):
"""
Internal function for processing the json data returned by the API's answerInlineQuery endpoint.
:return: On success, True is returned
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _answer_inline_query__process_result
def _send_invoice__make_request(self, chat_id, title, description, payload, provider_token, currency, prices, max_tip_amount=None, suggested_tip_amounts=None, start_parameter=None, provider_data=None, photo_url=None, photo_size=None, photo_width=None, photo_height=None, need_name=None, need_phone_number=None, need_email=None, need_shipping_address=None, send_phone_number_to_provider=None, send_email_to_provider=None, is_flexible=None, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendInvoice endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat or username of the target channel (in the format @channelusername)
:type chat_id: int | str|unicode
:param title: Product name, 1-32 characters
:type title: str|unicode
:param description: Product description, 1-255 characters
:type description: str|unicode
:param payload: Bot-defined invoice payload, 1-128 bytes. This will not be displayed to the user, use for your internal processes.
:type payload: str|unicode
:param provider_token: Payments provider token, obtained via Botfather
:type provider_token: str|unicode
:param currency: Three-letter ISO 4217 currency code, see more on currencies
:type currency: str|unicode
:param prices: Price breakdown, a JSON-serialized list of components (e.g. product price, tax, discount, delivery cost, delivery tax, bonus, etc.)
:type prices: list of pytgbot.api_types.sendable.payments.LabeledPrice
Optional keyword parameters:
:param max_tip_amount: The maximum accepted amount for tips in the smallest units of the currency (integer, not float/double). For example, for a maximum tip of US$ 1.45 pass max_tip_amount = 145. See the exp parameter in currencies.json, it shows the number of digits past the decimal point for each currency (2 for the majority of currencies). Defaults to 0
:type max_tip_amount: int
:param suggested_tip_amounts: A JSON-serialized array of suggested amounts of tips in the smallest units of the currency (integer, not float/double). At most 4 suggested tip amounts can be specified. The suggested tip amounts must be positive, passed in strictly increasing order and must not exceed max_tip_amount.
:type suggested_tip_amounts: list of int
:param start_parameter: Unique deep-linking parameter. If left empty, forwarded copies of the sent message will have a Pay button, allowing multiple users to pay directly from the forwarded message, using the same invoice. If non-empty, forwarded copies of the sent message will have a URL button with a deep link to the bot (instead of a Pay button), with the value used as the start parameter
:type start_parameter: str|unicode
:param provider_data: A JSON-serialized data about the invoice, which will be shared with the payment provider. A detailed description of required fields should be provided by the payment provider.
:type provider_data: str|unicode
:param photo_url: URL of the product photo for the invoice. Can be a photo of the goods or a marketing image for a service. People like it better when they see what they are paying for.
:type photo_url: str|unicode
:param photo_size: Photo size
:type photo_size: int
:param photo_width: Photo width
:type photo_width: int
:param photo_height: Photo height
:type photo_height: int
:param need_name: Pass True, if you require the user's full name to complete the order
:type need_name: bool
:param need_phone_number: Pass True, if you require the user's phone number to complete the order
:type need_phone_number: bool
:param need_email: Pass True, if you require the user's email address to complete the order
:type need_email: bool
:param need_shipping_address: Pass True, if you require the user's shipping address to complete the order
:type need_shipping_address: bool
:param send_phone_number_to_provider: Pass True, if user's phone number should be sent to provider
:type send_phone_number_to_provider: bool
:param send_email_to_provider: Pass True, if user's email address should be sent to provider
:type send_email_to_provider: bool
:param is_flexible: Pass True, if the final price depends on the shipping method
:type is_flexible: bool
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: A JSON-serialized object for an inline keyboard. If empty, one 'Pay total price' button will be shown. If not empty, the first button must be a Pay button.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.payments import LabeledPrice
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, (int, unicode_type), parameter_name="chat_id")
assert_type_or_raise(title, unicode_type, parameter_name="title")
assert_type_or_raise(description, unicode_type, parameter_name="description")
assert_type_or_raise(payload, unicode_type, parameter_name="payload")
assert_type_or_raise(provider_token, unicode_type, parameter_name="provider_token")
assert_type_or_raise(currency, unicode_type, parameter_name="currency")
assert_type_or_raise(prices, list, parameter_name="prices")
assert_type_or_raise(max_tip_amount, None, int, parameter_name="max_tip_amount")
assert_type_or_raise(suggested_tip_amounts, None, list, parameter_name="suggested_tip_amounts")
assert_type_or_raise(start_parameter, None, unicode_type, parameter_name="start_parameter")
assert_type_or_raise(provider_data, None, unicode_type, parameter_name="provider_data")
assert_type_or_raise(photo_url, None, unicode_type, parameter_name="photo_url")
assert_type_or_raise(photo_size, None, int, parameter_name="photo_size")
assert_type_or_raise(photo_width, None, int, parameter_name="photo_width")
assert_type_or_raise(photo_height, None, int, parameter_name="photo_height")
assert_type_or_raise(need_name, None, bool, parameter_name="need_name")
assert_type_or_raise(need_phone_number, None, bool, parameter_name="need_phone_number")
assert_type_or_raise(need_email, None, bool, parameter_name="need_email")
assert_type_or_raise(need_shipping_address, None, bool, parameter_name="need_shipping_address")
assert_type_or_raise(send_phone_number_to_provider, None, bool, parameter_name="send_phone_number_to_provider")
assert_type_or_raise(send_email_to_provider, None, bool, parameter_name="send_email_to_provider")
assert_type_or_raise(is_flexible, None, bool, parameter_name="is_flexible")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("sendInvoice", chat_id=chat_id, title=title, description=description, payload=payload, provider_token=provider_token, currency=currency, prices=prices, max_tip_amount=max_tip_amount, suggested_tip_amounts=suggested_tip_amounts, start_parameter=start_parameter, provider_data=provider_data, photo_url=photo_url, photo_size=photo_size, photo_width=photo_width, photo_height=photo_height, need_name=need_name, need_phone_number=need_phone_number, need_email=need_email, need_shipping_address=need_shipping_address, send_phone_number_to_provider=send_phone_number_to_provider, send_email_to_provider=send_email_to_provider, is_flexible=is_flexible, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_invoice__make_request
def _send_invoice__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendInvoice endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _send_invoice__process_result
def _answer_shipping_query__make_request(self, shipping_query_id, ok, shipping_options=None, error_message=None):
"""
Internal function for making the request to the API's answerShippingQuery endpoint.
Parameters:
:param shipping_query_id: Unique identifier for the query to be answered
:type shipping_query_id: str|unicode
:param ok: Specify True if delivery to the specified address is possible and False if there are any problems (for example, if delivery to the specified address is not possible)
:type ok: bool
Optional keyword parameters:
:param shipping_options: Required if ok is True. A JSON-serialized array of available shipping options.
:type shipping_options: list of pytgbot.api_types.sendable.payments.ShippingOption
:param error_message: Required if ok is False. Error message in human-readable form that explains why it is impossible to complete the order (e.g. "Sorry, delivery to your desired address is unavailable"). Telegram will display this message to the user.
:type error_message: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.payments import ShippingOption
assert_type_or_raise(shipping_query_id, unicode_type, parameter_name="shipping_query_id")
assert_type_or_raise(ok, bool, parameter_name="ok")
assert_type_or_raise(shipping_options, None, list, parameter_name="shipping_options")
assert_type_or_raise(error_message, None, unicode_type, parameter_name="error_message")
return self.do("answerShippingQuery", shipping_query_id=shipping_query_id, ok=ok, shipping_options=shipping_options, error_message=error_message)
# end def _answer_shipping_query__make_request
def _answer_shipping_query__process_result(self, result):
"""
Internal function for processing the json data returned by the API's answerShippingQuery endpoint.
:return: On success, True is returned
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _answer_shipping_query__process_result
def _answer_pre_checkout_query__make_request(self, pre_checkout_query_id, ok, error_message=None):
"""
Internal function for making the request to the API's answerPreCheckoutQuery endpoint.
Parameters:
:param pre_checkout_query_id: Unique identifier for the query to be answered
:type pre_checkout_query_id: str|unicode
:param ok: Specify True if everything is alright (goods are available, etc.) and the bot is ready to proceed with the order. Use False if there are any problems.
:type ok: bool
Optional keyword parameters:
:param error_message: Required if ok is False. Error message in human-readable form that explains the reason for failure to proceed with the checkout (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you were busy filling out your payment details. Please choose a different color or garment!"). Telegram will display this message to the user.
:type error_message: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(pre_checkout_query_id, unicode_type, parameter_name="pre_checkout_query_id")
assert_type_or_raise(ok, bool, parameter_name="ok")
assert_type_or_raise(error_message, None, unicode_type, parameter_name="error_message")
return self.do("answerPreCheckoutQuery", pre_checkout_query_id=pre_checkout_query_id, ok=ok, error_message=error_message)
# end def _answer_pre_checkout_query__make_request
def _answer_pre_checkout_query__process_result(self, result):
"""
Internal function for processing the json data returned by the API's answerPreCheckoutQuery endpoint.
:return: On success, True is returned
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _answer_pre_checkout_query__process_result
def _set_passport_data_errors__make_request(self, user_id, errors):
"""
Internal function for making the request to the API's setPassportDataErrors endpoint.
Parameters:
:param user_id: User identifier
:type user_id: int
:param errors: A JSON-serialized array describing the errors
:type errors: list of pytgbot.api_types.sendable.passport.PassportElementError
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.passport import PassportElementError
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(errors, list, parameter_name="errors")
return self.do("setPassportDataErrors", user_id=user_id, errors=errors)
# end def _set_passport_data_errors__make_request
def _set_passport_data_errors__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setPassportDataErrors endpoint.
:return: Returns True on success
:rtype: bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end def _set_passport_data_errors__process_result
def _send_game__make_request(self, chat_id, game_short_name, disable_notification=None, reply_to_message_id=None, allow_sending_without_reply=None, reply_markup=None):
"""
Internal function for making the request to the API's sendGame endpoint.
Parameters:
:param chat_id: Unique identifier for the target chat
:type chat_id: int
:param game_short_name: Short name of the game, serves as the unique identifier for the game. Set up your games via Botfather.
:type game_short_name: str|unicode
Optional keyword parameters:
:param disable_notification: Sends the message silently. Users will receive a notification with no sound.
:type disable_notification: bool
:param reply_to_message_id: If the message is a reply, ID of the original message
:type reply_to_message_id: int
:param allow_sending_without_reply: Pass True, if the message should be sent even if the specified replied-to message is not found
:type allow_sending_without_reply: bool
:param reply_markup: A JSON-serialized object for an inline keyboard. If empty, one 'Play game_title' button will be shown. If not empty, the first button must launch the game.
:type reply_markup: pytgbot.api_types.sendable.reply_markup.InlineKeyboardMarkup
:return: the decoded json
:rtype: dict|list|bool
"""
from pytgbot.api_types.sendable.reply_markup import InlineKeyboardMarkup
assert_type_or_raise(chat_id, int, parameter_name="chat_id")
assert_type_or_raise(game_short_name, unicode_type, parameter_name="game_short_name")
assert_type_or_raise(disable_notification, None, bool, parameter_name="disable_notification")
assert_type_or_raise(reply_to_message_id, None, int, parameter_name="reply_to_message_id")
assert_type_or_raise(allow_sending_without_reply, None, bool, parameter_name="allow_sending_without_reply")
assert_type_or_raise(reply_markup, None, InlineKeyboardMarkup, parameter_name="reply_markup")
return self.do("sendGame", chat_id=chat_id, game_short_name=game_short_name, disable_notification=disable_notification, reply_to_message_id=reply_to_message_id, allow_sending_without_reply=allow_sending_without_reply, reply_markup=reply_markup)
# end def _send_game__make_request
def _send_game__process_result(self, result):
"""
Internal function for processing the json data returned by the API's sendGame endpoint.
:return: On success, the sent Message is returned
:rtype: pytgbot.api_types.receivable.updates.Message
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _send_game__process_result
def _set_game_score__make_request(self, user_id, score, force=None, disable_edit_message=None, chat_id=None, message_id=None, inline_message_id=None):
"""
Internal function for making the request to the API's setGameScore endpoint.
Parameters:
:param user_id: User identifier
:type user_id: int
:param score: New score, must be non-negative
:type score: int
Optional keyword parameters:
:param force: Pass True, if the high score is allowed to decrease. This can be useful when fixing mistakes or banning cheaters
:type force: bool
:param disable_edit_message: Pass True, if the game message should not be automatically edited to include the current scoreboard
:type disable_edit_message: bool
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat
:type chat_id: int
:param message_id: Required if inline_message_id is not specified. Identifier of the sent message
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(score, int, parameter_name="score")
assert_type_or_raise(force, None, bool, parameter_name="force")
assert_type_or_raise(disable_edit_message, None, bool, parameter_name="disable_edit_message")
assert_type_or_raise(chat_id, None, int, parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
return self.do("setGameScore", user_id=user_id, score=score, force=force, disable_edit_message=disable_edit_message, chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id)
# end def _set_game_score__make_request
def _set_game_score__process_result(self, result):
"""
Internal function for processing the json data returned by the API's setGameScore endpoint.
:return: On success, if the message is not an inline message, the Message is returned, otherwise True is returned. Returns an error, if the new score is not greater than the user's current score in the chat and force is False
:rtype: pytgbot.api_types.receivable.updates.Message | bool
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.updates import Message
try:
return Message.from_array(result)
except TgApiParseException:
logger.debug("Failed parsing as api_type Message", exc_info=True)
# end try
try:
return from_array_list(bool, result, list_level=0, is_builtin=True)
except TgApiParseException:
logger.debug("Failed parsing as primitive bool", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _set_game_score__process_result
def _get_game_high_scores__make_request(self, user_id, chat_id=None, message_id=None, inline_message_id=None):
"""
Internal function for making the request to the API's getGameHighScores endpoint.
Parameters:
:param user_id: Target user id
:type user_id: int
Optional keyword parameters:
:param chat_id: Required if inline_message_id is not specified. Unique identifier for the target chat
:type chat_id: int
:param message_id: Required if inline_message_id is not specified. Identifier of the sent message
:type message_id: int
:param inline_message_id: Required if chat_id and message_id are not specified. Identifier of the inline message
:type inline_message_id: str|unicode
:return: the decoded json
:rtype: dict|list|bool
"""
assert_type_or_raise(user_id, int, parameter_name="user_id")
assert_type_or_raise(chat_id, None, int, parameter_name="chat_id")
assert_type_or_raise(message_id, None, int, parameter_name="message_id")
assert_type_or_raise(inline_message_id, None, unicode_type, parameter_name="inline_message_id")
return self.do("getGameHighScores", user_id=user_id, chat_id=chat_id, message_id=message_id, inline_message_id=inline_message_id)
# end def _get_game_high_scores__make_request
def _get_game_high_scores__process_result(self, result):
"""
Internal function for processing the json data returned by the API's getGameHighScores endpoint.
:return: On success, returns an Array of GameHighScore objects
:rtype: list of pytgbot.api_types.receivable.game.GameHighScore
"""
if not self.return_python_objects:
return result
# end if
logger.debug("Trying to parse {data}".format(data=repr(result)))
from pytgbot.api_types.receivable.game import GameHighScore
try:
return GameHighScore.from_array_list(result, list_level=1)
except TgApiParseException:
logger.debug("Failed parsing as api_type GameHighScore", exc_info=True)
# end try
# no valid parsing so far
raise TgApiParseException("Could not parse result.") # See debug log for details!
# end if return_python_objects
return result
# end def _get_game_high_scores__process_result
# end of generated functions
# end class Bot
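The `_*__process_result` helpers above all share one shape: try to parse the payload as the rich API type first, fall back to a primitive, and raise only when nothing matched. A minimal standalone sketch of that pattern — all names here (`ParseError`, `parse_message`, `process_result`) are hypothetical stand-ins, not pytgbot APIs:

```python
class ParseError(Exception):
    pass


def parse_message(result):
    # Stand-in for Message.from_array: only accepts dicts carrying a message_id.
    if isinstance(result, dict) and 'message_id' in result:
        return result
    raise ParseError('not a Message')


def process_result(result):
    # Try the rich type first, like the helpers above.
    try:
        return parse_message(result)
    except ParseError:
        pass  # fall through to the primitive parser
    # Then try the primitive fallback.
    if isinstance(result, bool):
        return result
    # No valid parsing so far.
    raise ParseError('Could not parse result.')


print(process_result(True))               # True
print(process_result({'message_id': 1}))  # {'message_id': 1}
```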
| 49.209227 | 835 | 0.711025 | 34,411 | 259,185 | 5.11598 | 0.031531 | 0.015132 | 0.027061 | 0.038337 | 0.845262 | 0.795599 | 0.763238 | 0.738108 | 0.721408 | 0.714143 | 0 | 0.002101 | 0.223373 | 259,185 | 5,266 | 836 | 49.218572 | 0.872319 | 0.458194 | 0 | 0.697757 | 0 | 0.001181 | 0.105428 | 0.008489 | 0 | 0 | 0 | 0 | 0.235537 | 1 | 0.103306 | false | 0.002952 | 0.102715 | 0.001771 | 0.409681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0772dc6be08c1d342da8ce114bbbad8a8dd82909 | 1,521 | py | Python | Chapter04/ext_pp.py | PacktPublishing/Extending-OpenStack | 87487bb79ab4dabba4e693c00a2ebdfd223b8106 | [
"MIT"
] | 1 | 2021-10-04T04:52:48.000Z | 2021-10-04T04:52:48.000Z | Chapter04/ext_pp.py | PacktPublishing/Extending-OpenStack | 87487bb79ab4dabba4e693c00a2ebdfd223b8106 | [
"MIT"
] | null | null | null | Chapter04/ext_pp.py | PacktPublishing/Extending-OpenStack | 87487bb79ab4dabba4e693c00a2ebdfd223b8106 | [
"MIT"
] | 3 | 2018-02-28T09:21:24.000Z | 2018-06-18T14:12:04.000Z | from neutron.plugins.ml2 import driver_api as api
from neutron.db import api as db_api
from oslo_log import log as pp_logger

LOG = pp_logger.getLogger(__name__)


class MyExtDriver(api.MechanismDriver):
    def initialize(self):
        LOG.info("Initializing MyExtDriver driver")

    def create_port_precommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Create Network Port Precommit with associated network: %s" % (network.current['name']))

    def create_port_postcommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Create Network Port Postcommit with associated network: %s" % (network.current['name']))

    def delete_port_precommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Delete Network Port Precommit with associated network: %s" % (network.current['name']))

    def delete_port_postcommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Delete Network Port Postcommit with associated network: %s" % (network.current['name']))

    def update_port_precommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Update Network Port Precommit with associated network: %s" % (network.current['name']))

    def update_port_postcommit(self, context):
        port = context.current
        network = context.network
        LOG.info("Update Network Port Postcommit with associated network: %s" % (network.current['name']))
07847e811ffb2bc3a805159bac6c8893e5f6560b | 45 | py | Python | net/tools/logger/__init__.py | eungbean/DCGAN-pytorch-lightning-comet | 414cbd5eb50b7e45848479a69077d2060210d4ec | [
"Apache-2.0"
] | null | null | null | net/tools/logger/__init__.py | eungbean/DCGAN-pytorch-lightning-comet | 414cbd5eb50b7e45848479a69077d2060210d4ec | [
"Apache-2.0"
] | null | null | null | net/tools/logger/__init__.py | eungbean/DCGAN-pytorch-lightning-comet | 414cbd5eb50b7e45848479a69077d2060210d4ec | [
"Apache-2.0"
] | null | null | null | from .getCometLogger import get_comet_logger
| 22.5 | 44 | 0.888889 | 6 | 45 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
078d12d2b7031a0bd0e329b58d421d15f54fd0d3 | 25,022 | py | Python | app/utils.py | mvanderlinde/starter-snake-python | 7db5fd5d8036e1cff0e34ec65b961568a63cc70d | [
"MIT"
] | null | null | null | app/utils.py | mvanderlinde/starter-snake-python | 7db5fd5d8036e1cff0e34ec65b961568a63cc70d | [
"MIT"
] | null | null | null | app/utils.py | mvanderlinde/starter-snake-python | 7db5fd5d8036e1cff0e34ec65b961568a63cc70d | [
"MIT"
] | null | null | null | closest_snake_distance = 1000
best_move = 'up'
best_move_distance = 0
best_move_coords = {
    'x': 0,
    'y': 0
}


def distance(me, thing):
    x_distance = abs(me['x'] - thing['x'])
    y_distance = abs(me['y'] - thing['y'])
    return x_distance + y_distance
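`distance` is plain Manhattan (taxicab) distance: moves are orthogonal, so the path length is the sum of the axis deltas. A self-contained copy with a quick check (the `manhattan` name is just for this sketch):

```python
def manhattan(a, b):
    # Manhattan distance between two {'x': ..., 'y': ...} points.
    return abs(a['x'] - b['x']) + abs(a['y'] - b['y'])


print(manhattan({'x': 1, 'y': 2}, {'x': 4, 'y': 6}))  # 3 + 4 = 7
```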
def has_room(data, me, direction):
    available_space = 0
    my_length = len(me['body'])
    my_head = me['body'][0]
    if direction == 'left':
        if is_safe(data, my_head['x']-1, my_head['y']) and not is_safe(data, my_head['x']-2, my_head['y']) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']-1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'right':
        if is_safe(data, my_head['x']+1, my_head['y']) and not is_safe(data, my_head['x']+2, my_head['y']) and not is_safe(data, my_head['x']+1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']+x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'down':
        if is_safe(data, my_head['x'], my_head['y']+1) and not is_safe(data, my_head['x'], my_head['y']+2) and not is_safe(data, my_head['x']-1, my_head['y']+1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']+y):
                    available_space = available_space + 1
    else:
        if is_safe(data, my_head['x'], my_head['y']-1) and not is_safe(data, my_head['x'], my_head['y']-2) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']-1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    return available_space >= my_length
def has_some_room(data, me, direction):
    available_space = 0
    my_length = len(me['body'])
    my_head = me['body'][0]
    if direction == 'left':
        if is_safe(data, my_head['x']-1, my_head['y']) and not is_safe(data, my_head['x']-2, my_head['y']) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']-1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'right':
        if is_safe(data, my_head['x']+1, my_head['y']) and not is_safe(data, my_head['x']+2, my_head['y']) and not is_safe(data, my_head['x']+1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']+x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'down':
        if is_safe(data, my_head['x'], my_head['y']+1) and not is_safe(data, my_head['x'], my_head['y']+2) and not is_safe(data, my_head['x']-1, my_head['y']+1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']+y):
                    available_space = available_space + 1
    else:
        if is_safe(data, my_head['x'], my_head['y']-1) and not is_safe(data, my_head['x'], my_head['y']-2) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']-1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    return available_space >= int(my_length * 0.75)
def has_half_room(data, me, direction):
    available_space = 0
    my_length = len(me['body'])
    my_head = me['body'][0]
    if direction == 'left':
        if is_safe(data, my_head['x']-1, my_head['y']) and not is_safe(data, my_head['x']-2, my_head['y']) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']-1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'right':
        if is_safe(data, my_head['x']+1, my_head['y']) and not is_safe(data, my_head['x']+2, my_head['y']) and not is_safe(data, my_head['x']+1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for x in range(1, int(my_length/2)):
            for y in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']+x, my_head['y']-y):
                    available_space = available_space + 1
    elif direction == 'down':
        if is_safe(data, my_head['x'], my_head['y']+1) and not is_safe(data, my_head['x'], my_head['y']+2) and not is_safe(data, my_head['x']-1, my_head['y']+1) and not is_safe(data, my_head['x']+1, my_head['y']+1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']+y):
                    available_space = available_space + 1
    else:
        if is_safe(data, my_head['x'], my_head['y']-1) and not is_safe(data, my_head['x'], my_head['y']-2) and not is_safe(data, my_head['x']-1, my_head['y']-1) and not is_safe(data, my_head['x']+1, my_head['y']-1):
            return False
        for y in range(1, int(my_length/2)):
            for x in range(int(my_length/2)*-1, int(my_length/2)):
                if is_safe(data, my_head['x']-x, my_head['y']-y):
                    available_space = available_space + 1
    return available_space >= int(my_length/2)
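The three `has_*_room` helpers differ only in the threshold they compare against; each scans a rectangle of cells beside the head and counts the free ones. A standalone sketch of just that counting step (the set-of-blocked-cells grid and every name here are hypothetical, not part of this module):

```python
def free_cells(blocked, width, height, xs, ys):
    # Count cells in the rectangle xs × ys that are on the board
    # and not occupied by any snake body part.
    count = 0
    for x in xs:
        for y in ys:
            if 0 <= x < width and 0 <= y < height and (x, y) not in blocked:
                count += 1
    return count


blocked = {(2, 2), (2, 3)}
# 3x3 rectangle with corners (1,1)..(3,3): 9 cells total, 2 blocked.
print(free_cells(blocked, 5, 5, range(1, 4), range(1, 4)))  # 7
```

Comparing that count against `my_length`, `0.75 * my_length`, or `my_length / 2` reproduces the three thresholds above.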
def within_one(body_part, x, y, me):
    for my_body_part in me['body']:
        if my_body_part['x'] == body_part['x'] and my_body_part['y'] == body_part['y']:
            return False
    x_distance = body_part['x'] - x
    y_distance = body_part['y'] - y
    global best_move
    global best_move_distance
    global best_move_coords
    global closest_snake_distance
    if closest_snake_distance == (abs(x_distance) + abs(y_distance)):
        if abs(x_distance) > best_move_distance or abs(y_distance) > best_move_distance:
            if abs(x_distance) > abs(y_distance):
                best_move_distance = abs(x_distance)
                if x_distance > 0:
                    best_move = 'left'
                    best_move_coords = {
                        'x': me['body'][0]['x']-1,
                        'y': me['body'][0]['y']
                    }
                else:
                    best_move = 'right'
                    best_move_coords = {
                        'x': me['body'][0]['x']+1,
                        'y': me['body'][0]['y']
                    }
            else:
                best_move_distance = abs(y_distance)
                if y_distance > 0:
                    best_move = 'up'
                    best_move_coords = {
                        'x': me['body'][0]['x'],
                        'y': me['body'][0]['y']-1
                    }
                else:
                    best_move = 'down'
                    best_move_coords = {
                        'x': me['body'][0]['x'],
                        'y': me['body'][0]['y']+1
                    }
    return (abs(x_distance) <= 1 and abs(y_distance) <= 1)
def is_safe(data, x, y, check_super_safe=False, check_head_safe=False):
    # Bounds check: x against the board width, y against the board height.
    if x >= data['board']['width'] or y >= data['board']['height'] or x < 0 or y < 0:
        return False
    me = data['you']
    global closest_snake_distance
    for snake in data['board']['snakes']:
        snake_distance = abs(me['body'][0]['x'] - snake['body'][0]['x']) + abs(me['body'][0]['y'] - snake['body'][0]['y'])
        if snake_distance < closest_snake_distance:
            closest_snake_distance = snake_distance
        for body_part in snake['body']:
            if check_super_safe and within_one(body_part, x, y, me) and len(me['body']) <= len(snake['body']):
                return False
            elif check_head_safe and within_one(snake['body'][0], x, y, {'body': [me['body'][0]]}):
                return False
            elif body_part['x'] == x and body_part['y'] == y and (body_part['x'] != snake['body'][0]['x'] or body_part['y'] != snake['body'][0]['y'] or len(me['body']) <= len(snake['body'])):
                return False
    return True
def find_closest_food(data):
    closest_distance = 1000  # Set closest to a large number to start
    closest_food = None
    for food in data['board']['food']:
        current_distance = distance(data['you']['body'][0], food)
        if current_distance < closest_distance:
            closest_distance = current_distance
            closest_food = food
    return closest_food
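`find_closest_food` is a linear scan for the minimum-distance item. The same selection can be sketched with `min()` and a key function — the names here are hypothetical and it takes the head and food list directly rather than the full game `data` dict:

```python
def closest_food(head, foods):
    # Return the food item with the smallest Manhattan distance to the head,
    # or None when the board has no food.
    if not foods:
        return None
    return min(foods, key=lambda f: abs(head['x'] - f['x']) + abs(head['y'] - f['y']))


head = {'x': 0, 'y': 0}
foods = [{'x': 3, 'y': 3}, {'x': 1, 'y': 0}]
print(closest_food(head, foods))  # {'x': 1, 'y': 0}
```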
def which_way(data, food):
me = data['you']['body'][0]
global best_move
global best_move_distance
global best_move_coords
global closest_snake_distance
best_move = None
best_move_distance = 0
best_move_coords = {
'x': 0,
'y': 0
}
closest_snake_distance = 1000
if food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_room(data, data['you'], 'right'):
print('*** Super safe food right with room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_room(data, data['you'], 'left'):
print('*** Super safe food left with room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_room(data, data['you'], 'down'):
print('*** Super safe food down with room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_room(data, data['you'], 'up'):
print('*** Super safe food up with room')
return 'up'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_room(data, data['you'], 'right'):
print('*** Super safe head food right with room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_room(data, data['you'], 'left'):
print('*** Super safe head food left with room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_room(data, data['you'], 'down'):
print('*** Super safe head food down with room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_room(data, data['you'], 'up'):
print('*** Super safe head food up with room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_room(data, data['you'], 'right'):
print('*** Super safe right with room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_room(data, data['you'], 'left'):
print('*** Super safe left with room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_room(data, data['you'], 'down'):
print('*** Super safe down with room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_room(data, data['you'], 'up'):
print('*** Super safe up with room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_room(data, data['you'], 'right'):
print('*** Super safe head right with room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_room(data, data['you'], 'left'):
print('*** Super safe head left with room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_room(data, data['you'], 'down'):
print('*** Super safe head down with room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_room(data, data['you'], 'up'):
print('*** Super safe head up with room')
return 'up'
elif best_move and is_safe(data, best_move_coords['x'], best_move_coords['y']) and has_room(data, data['you'], best_move):
print('*** Best move ' + best_move + ' to ' + str(best_move_coords['x']) + ',' + str(best_move_coords['y']) + ' with room')
return best_move
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_some_room(data, data['you'], 'right'):
print('*** Super safe food right with some room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_some_room(data, data['you'], 'left'):
print('*** Super safe food left with some room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_some_room(data, data['you'], 'down'):
print('*** Super safe food down with some room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_some_room(data, data['you'], 'up'):
print('*** Super safe food up with some room')
return 'up'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_some_room(data, data['you'], 'right'):
print('*** Super safe head food right with some room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_some_room(data, data['you'], 'left'):
print('*** Super safe head food left with some room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_some_room(data, data['you'], 'down'):
print('*** Super safe head food down with some room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_some_room(data, data['you'], 'up'):
print('*** Super safe head food up with some room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_some_room(data, data['you'], 'right'):
print('*** Super safe right with some room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_some_room(data, data['you'], 'left'):
print('*** Super safe left with some room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_some_room(data, data['you'], 'down'):
print('*** Super safe down with some room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_some_room(data, data['you'], 'up'):
print('*** Super safe up with some room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_some_room(data, data['you'], 'right'):
print('*** Super safe head right with some room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_some_room(data, data['you'], 'left'):
print('*** Super safe head left with some room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_some_room(data, data['you'], 'down'):
print('*** Super safe head down with some room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_some_room(data, data['you'], 'up'):
print('*** Super safe head up with some room')
return 'up'
elif best_move and is_safe(data, best_move_coords['x'], best_move_coords['y']) and has_some_room(data, data['you'], best_move):
print('*** Best move ' + best_move + ' to ' + str(best_move_coords['x']) + ',' + str(best_move_coords['y']) + ' with some room')
return best_move
if food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_half_room(data, data['you'], 'right'):
print('*** Super safe food right with half room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_half_room(data, data['you'], 'left'):
print('*** Super safe food left with half room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_half_room(data, data['you'], 'down'):
print('*** Super safe food down with half room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_half_room(data, data['you'], 'up'):
print('*** Super safe food up with half room')
return 'up'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_half_room(data, data['you'], 'right'):
print('*** Super safe head food right with half room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_half_room(data, data['you'], 'left'):
print('*** Super safe head food left with half room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_half_room(data, data['you'], 'down'):
print('*** Super safe head food down with half room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_half_room(data, data['you'], 'up'):
print('*** Super safe head food up with half room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_super_safe=True) and has_half_room(data, data['you'], 'right'):
print('*** Super safe right with half room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_super_safe=True) and has_half_room(data, data['you'], 'left'):
print('*** Super safe left with half room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_super_safe=True) and has_half_room(data, data['you'], 'down'):
print('*** Super safe down with half room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_super_safe=True) and has_half_room(data, data['you'], 'up'):
print('*** Super safe up with half room')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']) and has_half_room(data, data['you'], 'right'):
print('*** Super safe head right with half room')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']) and has_half_room(data, data['you'], 'left'):
print('*** Super safe head left with half room')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1) and has_half_room(data, data['you'], 'down'):
print('*** Super safe head down with half room')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1) and has_half_room(data, data['you'], 'up'):
print('*** Super safe head up with half room')
return 'up'
elif best_move and is_safe(data, best_move_coords['x'], best_move_coords['y']) and has_half_room(data, data['you'], best_move):
print('*** Best move ' + best_move + ' to ' + str(best_move_coords['x']) + ',' + str(best_move_coords['y']) + ' with half room')
return best_move
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_super_safe=True):
print('*** Super safe food right')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_super_safe=True):
print('*** Super safe food left')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_super_safe=True):
print('*** Super safe food down')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_super_safe=True):
print('*** Super safe food up')
return 'up'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']):
print('*** Super safe head food right')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']):
print('*** Super safe head food left')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1):
print('*** Super safe head food down')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1):
print('*** Super safe head food up')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_super_safe=True):
print('*** Super safe right')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_super_safe=True):
print('*** Super safe left')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_super_safe=True):
print('*** Super safe down')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_super_safe=True):
print('*** Super safe up')
return 'up'
elif is_safe(data, me['x']+1, me['y'], check_head_safe=True) and is_safe(data, me['x']+1, me['y']):
print('*** Super safe head right')
return 'right'
elif is_safe(data, me['x']-1, me['y'], check_head_safe=True) and is_safe(data, me['x']-1, me['y']):
print('*** Super safe head left')
return 'left'
elif is_safe(data, me['x'], me['y']+1, check_head_safe=True) and is_safe(data, me['x'], me['y']+1):
print('*** Super safe head down')
return 'down'
elif is_safe(data, me['x'], me['y']-1, check_head_safe=True) and is_safe(data, me['x'], me['y']-1):
print('*** Super safe head up')
return 'up'
elif best_move and is_safe(data, best_move_coords['x'], best_move_coords['y']):
print('*** Best move ' + best_move + ' to ' + str(best_move_coords['x']) + ',' + str(best_move_coords['y']))
return best_move
elif is_safe(data, me['x']+1, me['y']) and has_room(data, data['you'], 'right'):
print('*** Safe right with room')
return 'right'
elif is_safe(data, me['x']-1, me['y']) and has_room(data, data['you'], 'left'):
print('*** Safe left with room')
return 'left'
elif is_safe(data, me['x'], me['y']+1) and has_room(data, data['you'], 'down'):
print('*** Safe down with room')
return 'down'
elif is_safe(data, me['x'], me['y']-1) and has_room(data, data['you'], 'up'):
print('*** Safe up with room')
return 'up'
elif is_safe(data, me['x']+1, me['y']):
print('*** Safe right')
return 'right'
elif is_safe(data, me['x']-1, me['y']):
print('*** Safe left')
return 'left'
elif is_safe(data, me['x'], me['y']+1):
print('*** Safe down')
return 'down'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y']) and has_room(data, data['you'], 'right'):
print('*** Safe food right with room')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y']) and has_room(data, data['you'], 'left'):
print('*** Safe food left with room')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1) and has_room(data, data['you'], 'down'):
print('*** Safe food down with room')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1) and has_room(data, data['you'], 'up'):
print('*** Safe food up with room')
return 'up'
elif food and me['x'] < food['x'] and is_safe(data, me['x']+1, me['y']):
print('*** Safe food right')
return 'right'
elif food and me['x'] > food['x'] and is_safe(data, me['x']-1, me['y']):
print('*** Safe food left')
return 'left'
elif food and me['y'] < food['y'] and is_safe(data, me['x'], me['y']+1):
print('*** Safe food down')
return 'down'
elif food and me['y'] > food['y'] and is_safe(data, me['x'], me['y']-1):
print('*** Safe food up')
return 'up'
elif is_safe(data, me['x'], me['y']-1):
print('*** Safe up')
return 'up'
else:
print('*** Nothing safe, go up and die')
return 'up'
import json
import os
import time
from http import HTTPStatus
import hashlib
from http.client import HTTPException
from unittest.mock import patch
import requests
from server.common.config.app_config import AppConfig
from server.tests import decode_fbs, FIXTURES_ROOT
from server.tests.fixtures.fixtures import pbmc3k_colors
from server.tests.unit import BaseTest, skip_if
BAD_FILTER = {"filter": {"obs": {"annotation_value": [{"name": "xyz"}]}}}
class EndPoints(BaseTest):
@classmethod
def setUpClass(cls, app_config=None):
super().setUpClass(app_config)
cls.app.testing = True
cls.client = cls.app.test_client()
os.environ["SKIP_STATIC"] = "True"
for _ in range(90):
try:
result = cls.client.get(f"{cls.TEST_URL_BASE}schema")
cls.schema = json.loads(result.data)
break
except requests.exceptions.ConnectionError:
time.sleep(1)
def test_initialize(self):
endpoint = "schema"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertEqual(result_data["schema"]["dataframe"]["nObs"], 2638)
self.assertEqual(len(result_data["schema"]["annotations"]["obs"]), 2)
self.assertEqual(len(result_data["schema"]["annotations"]["obs"]["columns"]), 5)
def test_config(self):
endpoint = "config"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertIn("library_versions", result_data["config"])
self.assertEqual(result_data["config"]["displayNames"]["dataset"], "pbmc3k")
def test_get_layout_fbs(self):
endpoint = "layout/obs"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 8)
self.assertIsNotNone(df["columns"])
self.assertSetEqual(
set(df["col_idx"]),
{"pca_0", "pca_1", "tsne_0", "tsne_1", "umap_0", "umap_1", "draw_graph_fr_0", "draw_graph_fr_1"},
)
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
def test_bad_filter(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.put(url, headers=header, json=BAD_FILTER)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
def test_get_annotations_obs_fbs(self):
endpoint = "annotations/obs"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 5)
self.assertIsNotNone(df["columns"])
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
obs_index_col_name = self.schema["schema"]["annotations"]["obs"]["index"]
self.assertCountEqual(
df["col_idx"],
[obs_index_col_name, "n_genes", "percent_mito", "n_counts", "louvain"],
)
def test_get_annotations_obs_keys_fbs(self):
endpoint = "annotations/obs"
query = "annotation-name=n_genes&annotation-name=percent_mito"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 2)
self.assertIsNotNone(df["columns"])
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
self.assertCountEqual(df["col_idx"], ["n_genes", "percent_mito"])
def test_get_annotations_obs_error(self):
endpoint = "annotations/obs"
query = "annotation-name=notakey"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
# TEMP: Testing count 15 to match hardcoded values for diffexp
# TODO(#1281): Switch back to dynamic values
def test_diff_exp(self):
endpoint = "diffexp/obs"
url = f"{self.TEST_URL_BASE}{endpoint}"
params = {
"mode": "topN",
"set1": {"filter": {"obs": {"annotation_value": [{"name": "louvain", "values": ["NK cells"]}]}}},
"set2": {"filter": {"obs": {"annotation_value": [{"name": "louvain", "values": ["CD8 T cells"]}]}}},
"count": 15,
}
result = self.client.post(url, json=params)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertEqual(len(result_data["positive"]), 15)
self.assertEqual(len(result_data["negative"]), 15)
def test_diff_exp_indices(self):
endpoint = "diffexp/obs"
url = f"{self.TEST_URL_BASE}{endpoint}"
params = {
"mode": "topN",
"count": 15,
"set1": {"filter": {"obs": {"index": [[0, 500]]}}},
"set2": {"filter": {"obs": {"index": [[500, 1000]]}}},
}
result = self.client.post(url, json=params)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertEqual(len(result_data["positive"]), 15)
self.assertEqual(len(result_data["negative"]), 15)
def test_get_annotations_var_fbs(self):
endpoint = "annotations/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 1838)
self.assertEqual(df["n_cols"], 2)
self.assertIsNotNone(df["columns"])
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
var_index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
self.assertCountEqual(df["col_idx"], [var_index_col_name, "n_cells"])
def test_get_annotations_var_keys_fbs(self):
endpoint = "annotations/var"
query = "annotation-name=n_cells"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 1838)
self.assertEqual(df["n_cols"], 1)
self.assertIsNotNone(df["columns"])
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
self.assertCountEqual(df["col_idx"], ["n_cells"])
def test_get_annotations_var_error(self):
endpoint = "annotations/var"
query = "annotation-name=notakey"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
def test_data_mimetype_error(self):
endpoint = "data/var"
header = {"Accept": "xxx"}
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.put(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.NOT_ACCEPTABLE)
def test_fbs_default(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
headers = {"Accept": "application/octet-stream"}
result = self.client.put(url, headers=headers)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
filter = {"filter": {"var": {"index": [0, 1, 4]}}}
result = self.client.put(url, headers=headers, json=filter)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
def test_data_put_fbs(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.put(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
def test_data_get_fbs(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
def test_data_put_filter_fbs(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
filter = {"filter": {"var": {"index": [0, 1, 4]}}}
result = self.client.put(url, headers=header, json=filter)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 3)
self.assertIsNotNone(df["columns"])
self.assertIsNone(df["row_idx"])
self.assertEqual(len(df["columns"]), df["n_cols"])
self.assertListEqual(df["col_idx"].tolist(), [0, 1, 4])
def test_data_get_filter_fbs(self):
index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
endpoint = "data/var"
query = f"var:{index_col_name}=SIK1"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
def test_data_get_unknown_filter_fbs(self):
index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
endpoint = "data/var"
query = f"var:{index_col_name}=UNKNOWN"
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 0)
def test_data_put_single_var(self):
endpoint = "data/var"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
var_filter = {"filter": {"var": {"annotation_value": [{"name": index_col_name, "values": ["RER1"]}]}}}
result = self.client.put(url, headers=header, json=var_filter)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
def test_colors(self):
endpoint = "colors"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertEqual(result_data, pbmc3k_colors)
@skip_if(lambda x: os.getenv("SKIP_STATIC"), "Skip static test when running locally")
def test_static(self):
endpoint = "static"
file = "assets/favicon.ico"
url = f"{endpoint}/{file}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
def test_genesets_config(self):
result = self.client.get(f"{self.TEST_URL_BASE}config")
config_data = json.loads(result.data)
params = config_data["config"]["parameters"]
annotations_genesets = params["annotations_genesets"]
annotations_genesets_readonly = params["annotations_genesets_readonly"]
annotations_genesets_summary_methods = params["annotations_genesets_summary_methods"]
self.assertTrue(annotations_genesets)
self.assertTrue(annotations_genesets_readonly)
self.assertEqual(annotations_genesets_summary_methods, ["mean"])
def test_get_genesets(self):
endpoint = "genesets"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url, headers={"Accept": "application/json"})
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertIsNotNone(result_data["genesets"])
def test_get_summaryvar(self):
index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
endpoint = "summarize/var"
# single column
filter = f"var:{index_col_name}=F5"
query = f"method=mean&{filter}"
query_hash = hashlib.sha1(query.encode()).hexdigest()
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
self.assertEqual(df["col_idx"], [query_hash])
self.assertAlmostEqual(df["columns"][0][0], -0.110451095)
# multi-column
col_names = ["F5", "BEB3", "SIK1"]
filter = "&".join([f"var:{index_col_name}={name}" for name in col_names])
query = f"method=mean&{filter}"
query_hash = hashlib.sha1(query.encode()).hexdigest()
url = f"{self.TEST_URL_BASE}{endpoint}?{query}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
self.assertEqual(df["col_idx"], [query_hash])
self.assertAlmostEqual(df["columns"][0][0], -0.16628358)
def test_post_summaryvar(self):
index_col_name = self.schema["schema"]["annotations"]["var"]["index"]
endpoint = "summarize/var"
headers = {"Content-Type": "application/x-www-form-urlencoded", "Accept": "application/octet-stream"}
# single column
filter = f"var:{index_col_name}=F5"
query = f"method=mean&{filter}"
query_hash = hashlib.sha1(query.encode()).hexdigest()
url = f"{self.TEST_URL_BASE}{endpoint}?key={query_hash}"
result = self.client.post(url, headers=headers, data=query)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
self.assertEqual(df["col_idx"], [query_hash])
self.assertAlmostEqual(df["columns"][0][0], -0.110451095)
# multi-column
col_names = ["F5", "BEB3", "SIK1"]
filter = "&".join([f"var:{index_col_name}={name}" for name in col_names])
query = f"method=mean&{filter}"
query_hash = hashlib.sha1(query.encode()).hexdigest()
url = f"{self.TEST_URL_BASE}{endpoint}?key={query_hash}"
result = self.client.post(url, headers=headers, data=query)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
self.assertEqual(df["n_cols"], 1)
self.assertEqual(df["col_idx"], [query_hash])
self.assertAlmostEqual(df["columns"][0][0], -0.16628358)
class EndPointsCxg(EndPoints):
"""Test Case for endpoints"""
@classmethod
def setUpClass(cls):
app_config = AppConfig()
app_config.update_default_dataset_config(user_annotations__enable=False)
super().setUpClass(app_config)
def test_get_genesets_json(self):
endpoint = "genesets"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url, headers={"Accept": "application/json"})
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertIsNotNone(result_data["genesets"])
self.assertIsNotNone(result_data["tid"])
self.assertEqual(
result_data,
{
"genesets": [
{
"genes": [
{"gene_description": " a gene_description", "gene_symbol": "F5"},
{"gene_description": "", "gene_symbol": "SUMO3"},
{"gene_description": "", "gene_symbol": "SRM"},
],
"geneset_description": "a description",
"geneset_name": "first gene set name",
},
{
"genes": [
{"gene_description": "", "gene_symbol": "RER1"},
{"gene_description": "", "gene_symbol": "SIK1"},
],
"geneset_description": "",
"geneset_name": "second_gene_set",
},
{"genes": [], "geneset_description": "", "geneset_name": "third gene set"},
{"genes": [], "geneset_description": "fourth description", "geneset_name": "fourth_gene_set"},
{"genes": [], "geneset_description": "", "geneset_name": "fifth_dataset"},
{
"genes": [
{"gene_description": "", "gene_symbol": "ACD"},
{"gene_description": "", "gene_symbol": "AATF"},
{"gene_description": "", "gene_symbol": "F5"},
{"gene_description": "", "gene_symbol": "PIGU"},
],
"geneset_description": "",
"geneset_name": "summary test",
},
{"genes": [], "geneset_description": "", "geneset_name": "geneset_to_delete"},
{"genes": [], "geneset_description": "", "geneset_name": "geneset_to_edit"},
{"genes": [], "geneset_description": "", "geneset_name": "fill_this_geneset"},
{
"genes": [{"gene_description": "", "gene_symbol": "SIK1"}],
"geneset_description": "",
"geneset_name": "empty_this_geneset",
},
{
"genes": [{"gene_description": "", "gene_symbol": "SIK1"}],
"geneset_description": "",
"geneset_name": "brush_this_gene",
},
],
"tid": 0,
},
)
def test_get_genesets_csv(self):
endpoint = "genesets"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url, headers={"Accept": "text/csv"})
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "text/csv")
expected_data = """gene_set_name,gene_set_description,gene_symbol,gene_description\r
first gene set name,a description,F5, a gene_description\r
first gene set name,a description,SUMO3,\r
first gene set name,a description,SRM,\r
second_gene_set,,RER1,\r
second_gene_set,,SIK1,\r
third gene set,,,\r
fourth_gene_set,fourth description,,\r
fifth_dataset,,,\r
summary test,,ACD,\r
summary test,,AATF,\r
summary test,,F5,\r
summary test,,PIGU,\r
geneset_to_delete,,,\r
geneset_to_edit,,,\r
fill_this_geneset,,,\r
empty_this_geneset,,SIK1,\r
brush_this_gene,,SIK1,\r
"""
self.assertEqual(result.data.decode("utf-8"), expected_data)
def test_put_genesets(self):
endpoint = "genesets"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url, headers={"Accept": "application/json"})
self.assertEqual(result.status_code, HTTPStatus.OK)
test1 = {"tid": 3, "genesets": []}
result = self.client.put(url, json=test1)
self.assertEqual(result.status_code, HTTPStatus.METHOD_NOT_ALLOWED)
class TestDataLocatorMockApi(BaseTest):
@classmethod
@patch("server.data_common.dataset_metadata.requests.get")
def setUpClass(cls, mock_get):
cls.data_locator_api_base = "api.cellxgene.staging.single-cell.czi.technology/dp/v1"
cls.config = AppConfig()
cls.config.update_server_config(
data_locator__api_base=cls.data_locator_api_base,
app__web_base_url="https://cellxgene.staging.single-cell.czi.technology.com",
multi_dataset__dataroot={"e": {"base_url": "e", "dataroot": FIXTURES_ROOT}},
app__flask_secret_key="testing",
app__debug=True,
data_locator__s3__region_name="us-east-1",
)
super().setUpClass(cls.config)
cls.TEST_DATASET_URL_BASE = "/e/pbmc3k_v1.cxg"
cls.TEST_URL_BASE = f"{cls.TEST_DATASET_URL_BASE}/api/v0.2/"
cls.config.complete_config()
cls.response_body = json.dumps(
{
"collection_id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"collection_visibility": "PUBLIC",
"dataset_id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"s3_uri": f"{FIXTURES_ROOT}/pbmc3k.cxg",
"tombstoned": False,
}
)
mock_get.return_value = MockResponse(body=cls.response_body, status_code=200)
cls.app.testing = True
cls.client = cls.app.test_client()
result = cls.client.get(f"{cls.TEST_URL_BASE}schema")
cls.schema = json.loads(result.data)
assert mock_get.call_count == 1
assert (
f"http://{mock_get._mock_call_args[1]['url']}"
== f"http://{cls.data_locator_api_base}/datasets/meta?url={cls.config.server_config.get_web_base_url()}{cls.TEST_DATASET_URL_BASE}/"
) # noqa E501
@patch("server.data_common.dataset_metadata.requests.get")
def test_data_adaptor_uses_corpora_api(self, mock_get):
mock_get.return_value = MockResponse(body=self.response_body, status_code=200)
endpoint = "schema"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
mock_get.assert_called_once_with(
url="api.cellxgene.staging.single-cell.czi.technology/dp/v1/datasets/meta?url=https://cellxgene.staging.single-cell.czi.technology.com/e/pbmc3k_v1.cxg/",
headers={"Content-Type": "application/json", "Accept": "application/json"},
)
# Check mocked MatrixDataLoader correctly loads schema
result_data = json.loads(result.data)
self.assertEqual(result_data["schema"]["dataframe"]["nObs"], 2638)
self.assertEqual(len(result_data["schema"]["annotations"]["obs"]), 2)
self.assertEqual(len(result_data["schema"]["annotations"]["obs"]["columns"]), 5)
@patch("server.data_common.dataset_metadata.requests.get")
def test_config(self, mock_get):
mock_get.return_value = MockResponse(body=self.response_body, status_code=200)
endpoint = "config"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertIsNotNone(result_data["config"])
mock_get.assert_called_once_with(
url="api.cellxgene.staging.single-cell.czi.technology/dp/v1/datasets/meta?url=https://cellxgene.staging.single-cell.czi.technology.com/e/pbmc3k_v1.cxg/",
headers={"Content-Type": "application/json", "Accept": "application/json"},
)
@patch("server.data_common.dataset_metadata.requests.get")
def test_get_annotations_obs_fbs(self, mock_get):
mock_get.return_value = MockResponse(body=self.response_body, status_code=200)
endpoint = "annotations/obs"
url = f"{self.TEST_URL_BASE}{endpoint}"
header = {"Accept": "application/octet-stream"}
result = self.client.get(url, headers=header)
# check that the metadata api was called
mock_get.assert_called_once_with(
url="api.cellxgene.staging.single-cell.czi.technology/dp/v1/datasets/meta?url=https://cellxgene.staging.single-cell.czi.technology.com/e/pbmc3k_v1.cxg/",
headers={"Content-Type": "application/json", "Accept": "application/json"},
)
# check response
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/octet-stream")
# TODO @madison refactor mock out s3 instead of MatrixDataLoader
# check mocked MatrixDataLoader is returning correctly
df = decode_fbs.decode_matrix_FBS(result.data)
self.assertEqual(df["n_rows"], 2638)
@patch("server.data_common.dataset_metadata.requests.get")
def test_metadata_api_called_for_new_dataset(self, mock_get):
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v0.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
response_body = json.dumps(
{
"collection_id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"collection_visibility": "PUBLIC",
"dataset_id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"s3_uri": f"{FIXTURES_ROOT}/pbmc3k.cxg",
"tombstoned": False,
}
)
mock_get.return_value = MockResponse(body=response_body, status_code=200)
endpoint = "schema"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
# check that the metadata api was correctly called for the new dataset
self.assertEqual(mock_get.call_count, 1)
self.assertEqual(
f"http://{mock_get._mock_call_args[1]['url']}",
"http://api.cellxgene.staging.single-cell.czi.technology/dp/v1/datasets/meta?url=https://cellxgene.staging.single-cell.czi.technology.com/e/pbmc3k_v0.cxg/",
# noqa E501
)
@patch("server.data_common.dataset_metadata.requests.get")
def test_data_locator_defaults_to_name_based_lookup_if_metadata_api_throws_error(self, mock_get):
self.TEST_DATASET_URL_BASE = "/e/pbmc3k.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
mock_get.side_effect = HTTPException
endpoint = "schema"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
# check that the metadata api was correctly called for the new dataset
self.assertEqual(mock_get.call_count, 1)
self.assertEqual(
f"http://{mock_get._mock_call_args[1]['url']}",
"http://api.cellxgene.staging.single-cell.czi.technology/dp/v1/datasets/meta?url=https://cellxgene.staging.single-cell.czi.technology.com/e/pbmc3k.cxg/",
# noqa E501
)
# check schema loads correctly even with metadata api exception
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
expected_response_body = {
"schema": {
"annotations": {
"obs": {
"columns": [
{"name": "name_0", "type": "string", "writable": False},
{"name": "n_genes", "type": "int32", "writable": False},
{"name": "percent_mito", "type": "float32", "writable": False},
{"name": "n_counts", "type": "float32", "writable": False},
{
"categories": [
"CD4 T cells",
"CD14+ Monocytes",
"B cells",
"CD8 T cells",
"NK cells",
"FCGR3A+ Monocytes",
"Dendritic cells",
"Megakaryocytes",
],
"name": "louvain",
"type": "categorical",
"writable": False,
},
],
"index": "name_0",
},
"var": {
"columns": [
{"name": "name_0", "type": "string", "writable": False},
{"name": "n_cells", "type": "int32", "writable": False},
],
"index": "name_0",
},
},
"dataframe": {"nObs": 2638, "nVar": 1838, "type": "float32"},
"layout": {
"obs": [
{"dims": ["draw_graph_fr_0", "draw_graph_fr_1"], "name": "draw_graph_fr", "type": "float32"},
{"dims": ["pca_0", "pca_1"], "name": "pca", "type": "float32"},
{"dims": ["tsne_0", "tsne_1"], "name": "tsne", "type": "float32"},
{"dims": ["umap_0", "umap_1"], "name": "umap", "type": "float32"},
]
},
}
}
self.assertEqual(json.loads(result.data), expected_response_body)
def test_dataset_does_not_exist(self):
self.TEST_DATASET_URL_BASE = "/e/no_dataset.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
endpoint = "schema"
url = f"{self.TEST_URL_BASE}{endpoint}"
response = self.client.get(url)
self.assertEqual(response.status_code, 404)
@patch("server.data_common.dataset_metadata.requests.get")
def test_tombstoned_datasets_redirect_to_data_portal(self, mock_get):
response_body = json.dumps(
{
"collection_id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"collection_visibility": "PUBLIC",
"dataset_id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"s3_uri": None,
"tombstoned": True,
}
)
mock_get.return_value = MockResponse(body=response_body, status_code=200)
endpoint = "config"
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v2.cxg"
url = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, 302)
self.assertEqual(
result.headers["Location"],
"https://cellxgene.staging.single-cell.czi.technology.com/collections/4f098ff4-4a12-446b-a841-91ba3d8e3fa6?tombstoned_dataset_id=2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
) # noqa E501
class TestDatasetMetadata(BaseTest):
@classmethod
def setUpClass(cls):
cls.data_locator_api_base = "api.cellxgene.staging.single-cell.czi.technology/dp/v1"
cls.app__web_base_url = "https://cellxgene.staging.single-cell.czi.technology/"
cls.config = AppConfig()
cls.config.update_server_config(
data_locator__api_base=cls.data_locator_api_base,
app__web_base_url=cls.app__web_base_url,
multi_dataset__dataroot={"e": {"base_url": "e", "dataroot": FIXTURES_ROOT}},
app__flask_secret_key="testing",
app__debug=True,
data_locator__s3__region_name="us-east-1",
app__generate_cache_control_headers=True,
)
cls.meta_response_body = {
"collection_id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"collection_visibility": "PUBLIC",
"dataset_id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"s3_uri": f"{FIXTURES_ROOT}/pbmc3k.cxg",
"tombstoned": False,
}
super().setUpClass(cls.config)
cls.app.testing = True
cls.client = cls.app.test_client()
def verify_response(self, result):
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
self.assertTrue(result.cache_control.no_store)
self.assertEqual(result.cache_control.max_age, 0)
@patch("server.data_common.dataset_metadata.request_dataset_metadata_from_data_portal")
@patch("server.data_common.dataset_metadata.requests.get")
def test_dataset_metadata_api_called_for_public_collection(self, mock_get, mock_dp):
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v0_public.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
response_body = {
"contact_email": "test_email",
"contact_name": "test_user",
"datasets": [
{
"collection_visibility": "PUBLIC",
"id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"name": "Test Dataset",
},
],
"description": "test_description",
"id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"links": [
"http://test.link",
],
"name": "Test Collection",
"visibility": "PUBLIC",
}
mock_get.return_value = MockResponse(body=json.dumps(response_body), status_code=200)
mock_dp.return_value = self.meta_response_body
endpoint = "dataset-metadata"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.verify_response(result)
self.assertEqual(mock_get.call_count, 1)
response_obj = json.loads(result.data)["metadata"]
self.assertEqual(response_obj["dataset_name"], "Test Dataset")
expected_url = f"https://cellxgene.staging.single-cell.czi.technology/collections/{response_body['id']}"
self.assertEqual(response_obj["dataset_id"], response_body["datasets"][0]["id"])
self.assertEqual(response_obj["collection_url"], expected_url)
self.assertEqual(response_obj["collection_name"], response_body["name"])
self.assertEqual(response_obj["collection_contact_email"], response_body["contact_email"])
self.assertEqual(response_obj["collection_contact_name"], response_body["contact_name"])
self.assertEqual(response_obj["collection_description"], response_body["description"])
self.assertEqual(response_obj["collection_links"], response_body["links"])
self.assertEqual(response_obj["collection_datasets"], response_body["datasets"])
@patch("server.data_common.dataset_metadata.request_dataset_metadata_from_data_portal")
@patch("server.data_common.dataset_metadata.requests.get")
def test_dataset_metadata_api_called_for_private_collection(self, mock_get, mock_dp):
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v0_private.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
response_body = {
"contact_email": "test_email",
"contact_name": "test_user",
"datasets": [
{
"collection_visibility": "PRIVATE",
"id": "2fa37b10-ab4d-49c9-97a8-b4b3d80bf939",
"name": "Test Dataset",
},
],
"description": "test_description",
"id": "4f098ff4-4a12-446b-a841-91ba3d8e3fa6",
"links": [
"http://test.link",
],
"name": "Test Collection",
"visibility": "PRIVATE",
}
mock_get.return_value = MockResponse(body=json.dumps(response_body), status_code=200)
meta_response_body_private = self.meta_response_body.copy()
meta_response_body_private["collection_visibility"] = "PRIVATE"
mock_dp.return_value = meta_response_body_private
endpoint = "dataset-metadata"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.verify_response(result)
self.assertEqual(mock_get.call_count, 1)
response_obj = json.loads(result.data)["metadata"]
self.assertEqual(response_obj["dataset_name"], "Test Dataset")
expected_url = f"https://cellxgene.staging.single-cell.czi.technology/collections/{response_body['id']}/private"
self.assertEqual(response_obj["dataset_id"], response_body["datasets"][0]["id"])
self.assertEqual(response_obj["collection_url"], expected_url)
self.assertEqual(response_obj["collection_name"], response_body["name"])
self.assertEqual(response_obj["collection_contact_email"], response_body["contact_email"])
self.assertEqual(response_obj["collection_contact_name"], response_body["contact_name"])
self.assertEqual(response_obj["collection_description"], response_body["description"])
self.assertEqual(response_obj["collection_links"], response_body["links"])
self.assertEqual(response_obj["collection_datasets"], response_body["datasets"])
@patch("server.data_common.dataset_metadata.request_dataset_metadata_from_data_portal")
def test_dataset_metadata_api_fails_gracefully_on_dataset_not_found(self, mock_dp):
# request_dataset_metadata_from_data_portal returns None when the dataset cannot be found
mock_dp.return_value = None
endpoint = "dataset-metadata"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.NOT_FOUND)
@patch("server.data_common.dataset_metadata.request_dataset_metadata_from_data_portal")
@patch("server.data_common.dataset_metadata.requests.get")
def test_dataset_metadata_api_fails_gracefully_on_connection_failure(self, mock_get, mock_dp):
# TODO: Shouldn't matter what we request if the request's connection fails altogether
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v0.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
mock_dp.return_value = self.meta_response_body
mock_get.side_effect = Exception("Cannot connect to the data portal")
endpoint = "dataset-metadata"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
# TODO: This doesn't seem like a valid test for connection failure. There should be *no* response at all instead of a 400 status
self.assertEqual(result.status_code, HTTPStatus.BAD_REQUEST)
class TestConfigEndpoint(BaseTest):
@classmethod
def setUpClass(cls):
cls.data_locator_api_base = "api.cellxgene.staging.single-cell.czi.technology/dp/v1"
cls.app__web_base_url = "https://cellxgene.staging.single-cell.czi.technology/"
cls.config = AppConfig()
cls.config.update_server_config(
data_locator__api_base=cls.data_locator_api_base,
app__web_base_url=cls.app__web_base_url,
multi_dataset__dataroot={"e": {"base_url": "e", "dataroot": FIXTURES_ROOT}},
app__flask_secret_key="testing",
app__debug=True,
data_locator__s3__region_name="us-east-1",
)
super().setUpClass(cls.config)
cls.app.testing = True
cls.client = cls.app.test_client()
def test_config_has_collections_home_page(self):
self.TEST_DATASET_URL_BASE = "/e/pbmc3k_v0.cxg"
self.TEST_URL_BASE = f"{self.TEST_DATASET_URL_BASE}/api/v0.2/"
endpoint = "config"
url = f"{self.TEST_URL_BASE}{endpoint}"
result = self.client.get(url)
self.assertEqual(result.status_code, HTTPStatus.OK)
self.assertEqual(result.headers["Content-Type"], "application/json")
result_data = json.loads(result.data)
self.assertEqual(result_data["config"]["links"]["collections-home-page"], self.app__web_base_url[:-1])
class MockResponse:
def __init__(self, body, status_code):
self.content = body
self.status_code = status_code
def json(self):
return json.loads(self.content)
# segmate/editor/__init__.py (justacid/segmate, MIT)
from .scene import EditorScene
# tests/pieces/test_mjr.py (kirbysebastian/Salpakan, MIT)
from packages.pieces.MJR import MJR


def test_flg_name():
    assert str(MJR()) == 'MJR'
# snips_nlu/nlu_engine/__init__.py (CharlyBlavier/snips-nlu-Copy, Apache-2.0)
from snips_nlu.nlu_engine.nlu_engine import SnipsNLUEngine
# tests/unit/utils/test_parsers.py (ipmb/salt, Apache-2.0)
# -*- coding: utf-8 -*-
'''
:codeauthor: :email:`Denys Havrysh <denys.gavrysh@gmail.com>`
'''
# Import python libs
from __future__ import absolute_import
import os
import logging
# Import Salt Testing Libs
from tests.support.unit import skipIf, TestCase
from tests.support.helpers import destructiveTest, skip_if_not_root
from tests.support.mock import (
MagicMock,
patch,
NO_MOCK,
NO_MOCK_REASON
)
# Import Salt Libs
import salt.log.setup as log
import salt.config
import salt.syspaths
import salt.utils.parsers
import salt.utils.platform
class ErrorMock(object): # pylint: disable=too-few-public-methods
'''
Error handling
'''
def __init__(self):
'''
init
'''
self.msg = None
def error(self, msg):
'''
Capture error message
'''
self.msg = msg
class LogSetupMock(object):
'''
Logger setup
'''
def __init__(self):
'''
init
'''
self.log_level = None
self.log_file = None
self.log_level_logfile = None
self.config = {}
self.temp_log_level = None
def setup_console_logger(self, log_level='error', **kwargs): # pylint: disable=unused-argument
'''
Set console loglevel
'''
self.log_level = log_level
def setup_extended_logging(self, opts):
'''
Set opts
'''
self.config = opts
def setup_logfile_logger(self, logfile, loglevel, **kwargs): # pylint: disable=unused-argument
'''
Set logfile and loglevel
'''
self.log_file = logfile
self.log_level_logfile = loglevel
@staticmethod
def get_multiprocessing_logging_queue(): # pylint: disable=invalid-name
'''
Mock
'''
import multiprocessing
return multiprocessing.Queue()
def setup_multiprocessing_logging_listener(self, opts, *args): # pylint: disable=invalid-name,unused-argument
'''
Set opts
'''
self.config = opts
def setup_temp_logger(self, log_level='error'):
'''
Set temp loglevel
'''
self.temp_log_level = log_level
class ObjectView(object): # pylint: disable=too-few-public-methods
'''
Dict object view
'''
def __init__(self, d):
self.__dict__ = d
@destructiveTest
@skip_if_not_root
class LogSettingsParserTests(TestCase):
'''
Unit Tests for Log Level Mixin with Salt parsers
'''
args = []
skip_console_logging_config = False
log_setup = None
# Set config option names
loglevel_config_setting_name = 'log_level'
logfile_config_setting_name = 'log_file'
logfile_loglevel_config_setting_name = 'log_level_logfile' # pylint: disable=invalid-name
def setup_log(self):
'''
Mock logger functions
'''
self.log_setup = LogSetupMock()
patcher = patch.multiple(
log,
setup_console_logger=self.log_setup.setup_console_logger,
setup_extended_logging=self.log_setup.setup_extended_logging,
setup_logfile_logger=self.log_setup.setup_logfile_logger,
get_multiprocessing_logging_queue=self.log_setup.get_multiprocessing_logging_queue,
setup_multiprocessing_logging_listener=self.log_setup.setup_multiprocessing_logging_listener,
setup_temp_logger=self.log_setup.setup_temp_logger
)
patcher.start()
self.addCleanup(patcher.stop)
self.addCleanup(setattr, self, 'log_setup', None)
# log level configuration tests
def test_get_log_level_cli(self):
'''
Tests that the log level matches the command-line specified value
'''
# Set defaults
default_log_level = self.default_config[self.loglevel_config_setting_name]
# Set log level in CLI
log_level = 'critical'
args = ['--log-level', log_level] + self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
console_log_level = getattr(parser.options, self.loglevel_config_setting_name)
# Check console log level setting
self.assertEqual(console_log_level, log_level)
# Check console logger log level
self.assertEqual(self.log_setup.log_level, log_level)
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.temp_log_level, log_level)
# Check log file logger log level
self.assertEqual(self.log_setup.log_level_logfile, default_log_level)
def test_get_log_level_config(self):
'''
Tests that the log level matches the configured value
'''
args = self.args
# Set log level in config
log_level = 'info'
opts = self.default_config.copy()
opts.update({self.loglevel_config_setting_name: log_level})
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
console_log_level = getattr(parser.options, self.loglevel_config_setting_name)
# Check console log level setting
self.assertEqual(console_log_level, log_level)
# Check console logger log level
self.assertEqual(self.log_setup.log_level, log_level)
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file logger log level
self.assertEqual(self.log_setup.log_level_logfile, log_level)
def test_get_log_level_default(self):
'''
Tests that the log level matches the default value
'''
# Set defaults
log_level = default_log_level = self.default_config[self.loglevel_config_setting_name]
args = self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
console_log_level = getattr(parser.options, self.loglevel_config_setting_name)
# Check log level setting
self.assertEqual(console_log_level, log_level)
# Check console logger log level
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file logger
self.assertEqual(self.log_setup.log_level_logfile, default_log_level)
# Check help message
self.assertIn('Default: \'{0}\'.'.format(default_log_level),
parser.get_option('--log-level').help)
# log file configuration tests
def test_get_log_file_cli(self):
'''
Tests that the log file matches the command-line specified value
'''
# Set defaults
log_level = self.default_config[self.loglevel_config_setting_name]
# Set log file in CLI
log_file = '{0}_cli.log'.format(self.log_file)
args = ['--log-file', log_file] + self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_file_option = getattr(parser.options, self.logfile_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_config_setting_name],
log_file)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file setting
self.assertEqual(log_file_option, log_file)
# Check log file logger
self.assertEqual(self.log_setup.log_file, log_file)
def test_get_log_file_config(self):
'''
Tests that the log file matches the configured value
'''
# Set defaults
log_level = self.default_config[self.loglevel_config_setting_name]
args = self.args
# Set log file in config
log_file = '{0}_config.log'.format(self.log_file)
opts = self.default_config.copy()
opts.update({self.logfile_config_setting_name: log_file})
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_file_option = getattr(parser.options, self.logfile_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_config_setting_name],
log_file)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file setting
self.assertEqual(log_file_option, log_file)
# Check log file logger
self.assertEqual(self.log_setup.log_file, log_file)
def test_get_log_file_default(self):
'''
Tests that the log file matches the default value
'''
# Set defaults
log_level = self.default_config[self.loglevel_config_setting_name]
log_file = default_log_file = self.default_config[self.logfile_config_setting_name]
args = self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_file_option = getattr(parser.options, self.logfile_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_config_setting_name],
log_file)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file setting
self.assertEqual(log_file_option, log_file)
# Check log file logger
self.assertEqual(self.log_setup.log_file, log_file)
# Check help message
self.assertIn('Default: \'{0}\'.'.format(default_log_file),
parser.get_option('--log-file').help)
# log file log level configuration tests
def test_get_log_file_level_cli(self):
'''
Tests that the file log level matches the command-line specified value
'''
# Set defaults
default_log_level = self.default_config[self.loglevel_config_setting_name]
# Set log file level in CLI
log_level_logfile = 'error'
args = ['--log-file-level', log_level_logfile] + self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_level_logfile_option = getattr(parser.options,
self.logfile_loglevel_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, default_log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
default_log_level)
self.assertEqual(self.log_setup.config[self.logfile_loglevel_config_setting_name],
log_level_logfile)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file level setting
self.assertEqual(log_level_logfile_option, log_level_logfile)
# Check log file logger
self.assertEqual(self.log_setup.log_level_logfile, log_level_logfile)
def test_get_log_file_level_config(self):
'''
Tests that the log file level matches the configured value
'''
# Set defaults
log_level = self.default_config[self.loglevel_config_setting_name]
args = self.args
# Set log file level in config
log_level_logfile = 'info'
opts = self.default_config.copy()
opts.update({self.logfile_loglevel_config_setting_name: log_level_logfile})
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_level_logfile_option = getattr(parser.options,
self.logfile_loglevel_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_loglevel_config_setting_name],
log_level_logfile)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file level setting
self.assertEqual(log_level_logfile_option, log_level_logfile)
# Check log file logger
self.assertEqual(self.log_setup.log_level_logfile, log_level_logfile)
def test_get_log_file_level_default(self):
'''
Tests that the log file level matches the default value
'''
# Set defaults
default_log_level = self.default_config[self.loglevel_config_setting_name]
log_level = default_log_level
log_level_logfile = default_log_level
args = self.args
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=self.default_config)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_level_logfile_option = getattr(parser.options,
self.logfile_loglevel_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_loglevel_config_setting_name],
log_level_logfile)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file level setting
self.assertEqual(log_level_logfile_option, log_level_logfile)
# Check log file logger
self.assertEqual(self.log_setup.log_level_logfile, log_level_logfile)
# Check help message
self.assertIn('Default: \'{0}\'.'.format(default_log_level),
parser.get_option('--log-file-level').help)
def test_get_console_log_level_with_file_log_level(self): # pylint: disable=invalid-name
'''
Tests that the console log level and log file level settings work together
'''
log_level = 'critical'
log_level_logfile = 'debug'
args = ['--log-file-level', log_level_logfile] + self.args
opts = self.default_config.copy()
opts.update({self.loglevel_config_setting_name: log_level})
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
log_level_logfile_option = getattr(parser.options,
self.logfile_loglevel_config_setting_name)
if not self.skip_console_logging_config:
# Check console logger
self.assertEqual(self.log_setup.log_level, log_level)
# Check extended logger
self.assertEqual(self.log_setup.config[self.loglevel_config_setting_name],
log_level)
self.assertEqual(self.log_setup.config[self.logfile_loglevel_config_setting_name],
log_level_logfile)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file level setting
self.assertEqual(log_level_logfile_option, log_level_logfile)
# Check log file logger
self.assertEqual(self.log_setup.log_level_logfile, log_level_logfile)
@skipIf(salt.utils.platform.is_windows(), 'Windows uses a logging listener')
def test_log_created(self):
'''
Tests that log file is created
'''
args = self.args
log_file = self.log_file
log_file_name = self.logfile_config_setting_name
opts = self.default_config.copy()
opts.update({'log_file': log_file})
if log_file_name != 'log_file':
opts.update({log_file_name:
getattr(self, log_file_name)})
if log_file_name == 'key_logfile':
self.skipTest('salt-key creates log file outside of parse_args.')
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
if log_file_name == 'log_file':
self.assertEqual(os.path.getsize(log_file), 0)
else:
self.assertEqual(os.path.getsize(getattr(self, log_file_name)), 0)
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.platform.is_windows(), 'Windows uses a logging listener')
class MasterOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt Master options
'''
def setUp(self):
'''
Setting up
'''
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_master_parser_test'
# Function to patch
self.config_func = 'salt.config.master_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.MasterOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.platform.is_windows(), 'Windows uses a logging listener')
class MinionOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt Minion options
'''
def setUp(self):
'''
Setting up
'''
# Set defaults
self.default_config = salt.config.DEFAULT_MINION_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_minion_parser_test'
# Function to patch
self.config_func = 'salt.config.minion_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.MinionOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class ProxyMinionOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt Proxy Minion options
'''
def setUp(self):
'''
Setting up
'''
# Set defaults
self.default_config = salt.config.DEFAULT_MINION_OPTS.copy()
self.default_config.update(salt.config.DEFAULT_PROXY_MINION_OPTS)
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_proxy_minion_parser_test'
# Function to patch
self.config_func = 'salt.config.proxy_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.ProxyMinionOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(salt.utils.platform.is_windows(), 'Windows uses a logging listener')
class SyndicOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt Syndic options
'''
def setUp(self):
'''
Setting up
'''
# Set config option names
self.logfile_config_setting_name = 'syndic_log_file'
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_syndic_parser_test'
self.syndic_log_file = '/tmp/salt_syndic_log'
# Function to patch
self.config_func = 'salt.config.syndic_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SyndicOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltCMDOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt CLI options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo', 'bar.baz']
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_cmd_parser_test'
# Function to patch
self.config_func = 'salt.config.client_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltCMDOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltCPOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-cp options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo', 'bar', 'baz']
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_cp_parser_test'
# Function to patch
self.config_func = 'salt.config.master_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltCPOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltKeyOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-key options
'''
def setUp(self):
'''
Setting up
'''
self.skip_console_logging_config = True
# Set config option names
self.logfile_config_setting_name = 'key_logfile'
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_key_parser_test'
self.key_logfile = '/tmp/key_logfile'
# Function to patch
self.config_func = 'salt.config.master_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltKeyOptionParser
self.addCleanup(delattr, self, 'parser')
# log level configuration tests
def test_get_log_level_cli(self):
'''
Tests that console log level option is not recognized
'''
# No console log level will actually be set
log_level = default_log_level = None
option = '--log-level'
args = self.args + [option, 'error']
parser = self.parser()
mock_err = ErrorMock()
with patch('salt.utils.parsers.OptionParser.error', mock_err.error):
parser.parse_args(args)
# Check error msg
self.assertEqual(mock_err.msg, 'no such option: {0}'.format(option))
# Check console logger has not been set
self.assertEqual(self.log_setup.log_level, log_level)
self.assertNotIn(self.loglevel_config_setting_name, self.log_setup.config)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file logger log level
self.assertEqual(self.log_setup.log_level_logfile, default_log_level)
def test_get_log_level_config(self):
'''
Tests that log level set in config is ignored
'''
log_level = 'info'
args = self.args
# Set log level in config and set additional mocked opts keys
opts = {self.loglevel_config_setting_name: log_level,
self.logfile_config_setting_name: 'key_logfile',
'log_fmt_logfile': None,
'log_datefmt_logfile': None,
'log_rotate_max_bytes': None,
'log_rotate_backup_count': None}
parser = self.parser()
with patch(self.config_func, MagicMock(return_value=opts)):
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
# Check config name absence in options
self.assertNotIn(self.loglevel_config_setting_name, parser.options.__dict__)
# Check console logger has not been set
self.assertEqual(self.log_setup.log_level, None)
self.assertNotIn(self.loglevel_config_setting_name, self.log_setup.config)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file logger log level
self.assertEqual(self.log_setup.log_level_logfile, log_level)
def test_get_log_level_default(self):
'''
Tests that log level default value is ignored
'''
# Set defaults
default_log_level = self.default_config[self.loglevel_config_setting_name]
log_level = None
args = self.args
parser = self.parser()
parser.parse_args(args)
with patch('salt.utils.parsers.is_writeable', MagicMock(return_value=True)):
parser.setup_logfile_logger()
# Check config name absence in options
self.assertNotIn(self.loglevel_config_setting_name, parser.options.__dict__)
# Check console logger has not been set
self.assertEqual(self.log_setup.log_level, log_level)
self.assertNotIn(self.loglevel_config_setting_name, self.log_setup.config)
# Check temp logger
self.assertEqual(self.log_setup.temp_log_level, 'error')
# Check log file logger log level
self.assertEqual(self.log_setup.log_level_logfile, default_log_level)
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltCallOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-call options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo.bar']
# Set defaults
self.default_config = salt.config.DEFAULT_MINION_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_call_parser_test'
# Function to patch
self.config_func = 'salt.config.minion_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltCallOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltRunOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-run options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo.bar']
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_run_parser_test'
# Function to patch
self.config_func = 'salt.config.master_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltRunOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltSSHOptionParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-ssh options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo', 'bar.baz']
# Set config option names
self.logfile_config_setting_name = 'ssh_log_file'
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_ssh_parser_test'
self.ssh_log_file = '/tmp/ssh_logfile'
# Function to patch
self.config_func = 'salt.config.master_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltSSHOptionParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltCloudParserTestCase(LogSettingsParserTests):
'''
Tests parsing Salt Cloud options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['-p', 'foo', 'bar']
# Set default configs
# Cloud configs are merged with master configs in
# config/__init__.py, so we'll do that here as well,
# as we need the 'user' key later on.
self.default_config = salt.config.DEFAULT_MASTER_OPTS.copy()
self.default_config.update(salt.config.DEFAULT_CLOUD_OPTS)
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_cloud_parser_test'
# Function to patch
self.config_func = 'salt.config.cloud_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltCloudParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SPMParserTestCase(LogSettingsParserTests):
'''
Tests parsing SPM options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = ['foo', 'bar']
# Set config option names
self.logfile_config_setting_name = 'spm_logfile'
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS.copy()
self.default_config.update(salt.config.DEFAULT_SPM_OPTS)
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/spm_parser_test'
self.spm_logfile = '/tmp/spm_logfile'
# Function to patch
self.config_func = 'salt.config.spm_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SPMParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltAPIParserTestCase(LogSettingsParserTests):
'''
Tests parsing salt-api options
'''
def setUp(self):
'''
Setting up
'''
# Set mandatory CLI options
self.args = []
# Set config option names
self.logfile_config_setting_name = 'api_logfile'
# Set defaults
self.default_config = salt.config.DEFAULT_MASTER_OPTS.copy()
self.default_config.update(salt.config.DEFAULT_API_OPTS)
self.addCleanup(delattr, self, 'default_config')
# Log file
self.log_file = '/tmp/salt_api_parser_test'
self.api_logfile = '/tmp/api_logfile'
# Function to patch
self.config_func = 'salt.config.api_config'
# Mock log setup
self.setup_log()
# Assign parser
self.parser = salt.utils.parsers.SaltAPIParser
self.addCleanup(delattr, self, 'parser')
@skipIf(NO_MOCK, NO_MOCK_REASON)
class DaemonMixInTestCase(TestCase):
'''
Tests the PIDfile deletion in the DaemonMixIn.
'''
def setUp(self):
'''
Setting up
'''
# Set PID
self.pid = '/some/fake.pid'
# Setup mixin
self.mixin = salt.utils.parsers.DaemonMixIn()
self.mixin.config = {}
self.mixin.config['pidfile'] = self.pid
# logger
self.logger = logging.getLogger('salt.utils.parsers')
def test_pid_file_deletion(self):
'''
PIDfile deletion without exception.
'''
with patch('os.unlink', MagicMock()) as os_unlink:
with patch('os.path.isfile', MagicMock(return_value=True)):
with patch.object(self.logger, 'info') as mock_logger:
self.mixin._mixin_before_exit()
assert mock_logger.call_count == 0
assert os_unlink.call_count == 1
def test_pid_file_deletion_with_oserror(self):
'''
PIDfile deletion with exception
'''
with patch('os.unlink', MagicMock(side_effect=OSError())) as os_unlink:
with patch('os.path.isfile', MagicMock(return_value=True)):
with patch.object(self.logger, 'info') as mock_logger:
self.mixin._mixin_before_exit()
assert os_unlink.call_count == 1
mock_logger.assert_called_with(
'PIDfile could not be deleted: {0}'.format(self.pid))
# Hide the class from unittest framework when it searches for TestCase classes in the module
del LogSettingsParserTests
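The `del LogSettingsParserTests` line above keeps the shared base class out of unittest discovery so only its concrete subclasses run. A minimal self-contained sketch of that pattern (class names here are illustrative, not taken from the salt test suite):

```python
import sys
import unittest


class _SharedChecks(unittest.TestCase):
    """Base class holding shared assertions; not meant to run by itself."""
    parser_name = None

    def test_has_parser_name(self):
        self.assertIsNotNone(self.parser_name)


class ConcreteChecks(_SharedChecks):
    """Concrete subclass that discovery should collect."""
    parser_name = 'demo'


# Hide the base class so unittest only collects the subclass
del _SharedChecks

suite = unittest.TestLoader().loadTestsFromModule(sys.modules[__name__])
```

With the base class deleted, the loader finds a single test case; without the `del`, `_SharedChecks.test_has_parser_name` would also be collected and fail on its `parser_name = None` default.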
# File: scripts/slave/recipe_modules/chromium_tests/chromium_goma.py
# Repo: mithro/chromium-build (BSD-3-Clause)
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from . import steps
SPEC = {
'builders': {
'Chromium Linux Goma Staging': {
'chromium_config': 'chromium',
'chromium_apply_config': ['goma_staging', 'clobber', 'mb'],
'gclient_config': 'chromium',
'chromium_config_kwargs': {
'BUILD_CONFIG': 'Release',
'TARGET_BITS': 64,
},
'compile_targets': [ 'chromium_builder_tests' ],
'tests': steps.GOMA_TESTS,
'goma_staging': True,
'testing': {
'platform': 'linux',
},
},
'Chromium Mac Goma Staging': {
'chromium_config': 'chromium',
'chromium_apply_config': ['goma_staging', 'clobber', 'mb'],
'gclient_config': 'chromium',
'chromium_config_kwargs': {
'BUILD_CONFIG': 'Release',
'TARGET_BITS': 64,
},
'compile_targets': [ 'chromium_builder_tests' ],
'tests': steps.GOMA_TESTS,
'goma_staging': True,
'testing': {
'platform': 'mac',
},
},
'CrWinGomaStaging': {
'chromium_config': 'chromium',
'chromium_apply_config': ['goma_staging', 'clobber', 'mb'],
'gclient_config': 'chromium',
'chromium_config_kwargs': {
'BUILD_CONFIG': 'Release',
'TARGET_BITS': 64,
},
'compile_targets': [ 'chromium_builder_tests' ],
'tests': steps.GOMA_TESTS,
'goma_staging': True,
'testing': {
'platform': 'win',
},
},
'Chromium Linux Goma GCE Staging': {
'chromium_config': 'chromium',
'chromium_apply_config': ['goma_staging', 'clobber', 'mb'],
'gclient_config': 'chromium',
'chromium_config_kwargs': {
'BUILD_CONFIG': 'Release',
'TARGET_BITS': 64,
},
'compile_targets': [ 'chromium_builder_tests' ],
'tests': steps.GOMA_TESTS,
'goma_staging': True,
'testing': {
'platform': 'linux',
},
},
},
}
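Each entry in `SPEC['builders']` is a plain nested dict, so consumers look configs up by builder name. A small sketch of that access pattern against a trimmed copy of one builder (illustrative only; real recipe code reads the full `SPEC` above):

```python
# Trimmed copy of one builder entry from the SPEC above, for illustration
SPEC = {
    'builders': {
        'CrWinGomaStaging': {
            'chromium_config': 'chromium',
            'chromium_apply_config': ['goma_staging', 'clobber', 'mb'],
            'chromium_config_kwargs': {'BUILD_CONFIG': 'Release', 'TARGET_BITS': 64},
            'goma_staging': True,
            'testing': {'platform': 'win'},
        },
    },
}


def builder_platform(spec, builder_name):
    """Return the test platform configured for a named builder."""
    return spec['builders'][builder_name]['testing']['platform']


print(builder_platform(SPEC, 'CrWinGomaStaging'))  # prints: win
```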
# File: docknv/user/__init__.py
# Repo: sharingcloud/docknv (MIT)
"""User module."""
from .models import * # noqa
from .methods import * # noqa
from .exceptions import * # noqa
# File: tests/test_func_api_regression.py
# Repo: Ic3fr0g/vecstack (MIT)
#-------------------------------------------------------------------------------
# Main concept for testing returned arrays:
# 1). create ground truth e.g. with cross_val_predict
# 2). run vecstack
# 3). compare returned arrays with ground truth
# 4). compare arrays from file with ground truth
#-------------------------------------------------------------------------------
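The four steps above boil down to computing out-of-fold predictions two independent ways and comparing them. A numpy-only sketch of step 1, with a trivial mean predictor standing in for a real scikit-learn estimator (illustrative only):

```python
import numpy as np


def oof_mean_predictions(y, n_folds=5):
    """Out-of-fold predictions of a toy model that predicts the mean of
    its training targets -- a stand-in for cross_val_predict."""
    n = len(y)
    oof = np.empty(n)
    fold_ids = np.arange(n) % n_folds  # simple round-robin fold assignment
    for k in range(n_folds):
        test_mask = fold_ids == k
        # "fit" on the other folds, "predict" on the held-out fold
        oof[test_mask] = y[~test_mask].mean()
    return oof


y = np.arange(10, dtype=float)
oof = oof_mean_predictions(y, n_folds=5)
# oof[0] is the mean of y without samples 0 and 5: (45 - 0 - 5) / 8 = 5.0
```

The tests below do exactly this shape of comparison, only with real estimators: ground truth from `cross_val_predict` (or a hand-written `KFold` loop) checked against the arrays `stacking` returns and saves.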
from __future__ import print_function
from __future__ import division
import unittest
from numpy.testing import assert_array_equal
# from numpy.testing import assert_allclose
from numpy.testing import assert_equal
from numpy.testing import assert_raises
from numpy.testing import assert_warns
import os
import glob
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse import csc_matrix
from scipy.sparse import coo_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_val_score
# from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.datasets import load_boston
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import make_scorer
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from vecstack import stacking
from vecstack.core import model_action
n_folds = 5
temp_dir = 'tmpdw35lg54ms80eb42'
boston = load_boston()
X, y = boston.data, boston.target
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Make train/test split by hand to avoid strange errors probably related to testing suit:
# https://github.com/scikit-learn/scikit-learn/issues/1684
# https://github.com/scikit-learn/scikit-learn/issues/1704
# Note: Python 2.7, 3.4 - OK, but 3.5, 3.6 - error
np.random.seed(0)
ind = np.arange(500)
np.random.shuffle(ind)
ind_train = ind[:400]
ind_test = ind[400:]
X_train = X[ind_train]
X_test = X[ind_test]
y_train = y[ind_train]
y_test = y[ind_test]
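The hand-made split above is just "shuffle the indices with a fixed seed, then slice". A self-contained sketch of the same pattern (sizes match the ones used here; whether it reproduces the global-seed permutation exactly depends on the numpy version, so this is the pattern, not a byte-for-byte equivalent):

```python
import numpy as np


def seeded_index_split(n_samples, n_train, seed=0):
    """Reproducible train/test index split: shuffle 0..n-1 with a
    fixed seed, then slice -- the same pattern as the split above."""
    rng = np.random.RandomState(seed)
    ind = np.arange(n_samples)
    rng.shuffle(ind)
    return ind[:n_train], ind[n_train:]


ind_train, ind_test = seeded_index_split(500, 400)
# Together the two index sets cover all 500 samples exactly once
```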
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
class MinimalEstimator:
"""Has no get_params attribute"""
def __init__(self, random_state=0):
self.random_state = random_state
def __repr__(self):
return 'Demo string from __repr__'
def fit(self, X, y):
return self
def predict(self, X):
return np.ones(X.shape[0])
def predict_proba(self, X):
return np.zeros(X.shape[0])
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
class TestFuncRegression(unittest.TestCase):
@classmethod
def setUpClass(cls):
try:
os.mkdir(temp_dir)
except:
print('Unable to create temp dir')
@classmethod
def tearDownClass(cls):
try:
os.rmdir(temp_dir)
except:
print('Unable to remove temp dir')
def tearDown(self):
# Remove files after each test
files = glob.glob(os.path.join(temp_dir, '*.npy'))
files.extend(glob.glob(os.path.join(temp_dir, '*.log.txt')))
try:
for file in files:
os.remove(file)
except:
print('Unable to remove temp file')
#---------------------------------------------------------------------------
# Testing returned and saved arrays in each mode
#---------------------------------------------------------------------------
def test_oof_pred_mode(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(X_train, y_train)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_oof_mode(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
S_test_1 = None
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_pred_mode(self):
model = LinearRegression()
S_train_1 = None
_ = model.fit(X_train, y_train)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'pred', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_oof_pred_bag_mode(self):
S_test_temp = np.zeros((X_test.shape[0], n_folds))
kf = KFold(n_splits = n_folds, shuffle = False, random_state = 0)
for fold_counter, (tr_index, te_index) in enumerate(kf.split(X_train, y_train)):
# Split data and target
X_tr = X_train[tr_index]
y_tr = y_train[tr_index]
X_te = X_train[te_index]
y_te = y_train[te_index]
model = LinearRegression()
_ = model.fit(X_tr, y_tr)
S_test_temp[:, fold_counter] = model.predict(X_test)
S_test_1 = np.mean(S_test_temp, axis = 1).reshape(-1, 1)
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred_bag', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_pred_bag_mode(self):
S_test_temp = np.zeros((X_test.shape[0], n_folds))
kf = KFold(n_splits = n_folds, shuffle = False, random_state = 0)
for fold_counter, (tr_index, te_index) in enumerate(kf.split(X_train, y_train)):
# Split data and target
X_tr = X_train[tr_index]
y_tr = y_train[tr_index]
X_te = X_train[te_index]
y_te = y_train[te_index]
model = LinearRegression()
_ = model.fit(X_tr, y_tr)
S_test_temp[:, fold_counter] = model.predict(X_test)
S_test_1 = np.mean(S_test_temp, axis = 1).reshape(-1, 1)
S_train_1 = None
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'pred_bag', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing <sample_weight> all ones
#---------------------------------------------------------------------------
def test_oof_pred_mode_sample_weight_one(self):
sw = np.ones(len(y_train))
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict',
fit_params = {'sample_weight': sw}).reshape(-1, 1)
_ = model.fit(X_train, y_train, sample_weight = sw)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0,
sample_weight = sw)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Test <sample_weight> all random
#---------------------------------------------------------------------------
def test_oof_pred_mode_sample_weight_random(self):
np.random.seed(0)
sw = np.random.rand(len(y_train))
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict',
fit_params = {'sample_weight': sw}).reshape(-1, 1)
_ = model.fit(X_train, y_train, sample_weight = sw)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0,
sample_weight = sw)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing <transform_target> and <transform_pred> parameters
#---------------------------------------------------------------------------
def test_oof_pred_mode_transformations(self):
model = LinearRegression()
S_train_1 = np.expm1(cross_val_predict(model, X_train, y = np.log1p(y_train), cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict')).reshape(-1, 1)
_ = model.fit(X_train, np.log1p(y_train))
S_test_1 = np.expm1(model.predict(X_test)).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0,
transform_target = np.log1p, transform_pred = np.expm1)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing <verbose> parameter
#---------------------------------------------------------------------------
def test_oof_pred_mode_verbose_1(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(X_train, y_train)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
models = [LinearRegression()]
S_train_3, S_test_3 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 1)
models = [LinearRegression()]
S_train_4, S_test_4 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 2)
models = [LinearRegression()]
S_train_5, S_test_5 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False,
mode = 'oof_pred', random_state = 0, verbose = 0)
models = [LinearRegression()]
S_train_6, S_test_6 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False,
mode = 'oof_pred', random_state = 0, verbose = 1)
models = [LinearRegression()]
S_train_7, S_test_7 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False,
mode = 'oof_pred', random_state = 0, verbose = 2)
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
assert_array_equal(S_train_1, S_train_4)
assert_array_equal(S_test_1, S_test_4)
assert_array_equal(S_train_1, S_train_5)
assert_array_equal(S_test_1, S_test_5)
assert_array_equal(S_train_1, S_train_6)
assert_array_equal(S_test_1, S_test_6)
assert_array_equal(S_train_1, S_train_7)
assert_array_equal(S_test_1, S_test_7)
#---------------------------------------------------------------------------
# Test <metric> parameter and its default values depending on <regression> parameter
# Important. We use <greater_is_better = True> in <make_scorer> for any error function
# because we need raw scores (without minus sign)
#---------------------------------------------------------------------------
def test_oof_mode_metric(self):
model = LinearRegression()
scorer = make_scorer(mean_absolute_error)
scores = cross_val_score(model, X_train, y = y_train, cv = n_folds,
scoring = scorer, n_jobs = 1, verbose = 0)
mean_str_1 = '%.8f' % np.mean(scores)
std_str_1 = '%.8f' % np.std(scores)
models = [LinearRegression()]
S_train, S_test = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, save_dir=temp_dir,
mode = 'oof', random_state = 0, verbose = 0)
# Load mean score and std from file
# Normally if cleaning is performed there is only one .log.txt file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.log.txt')))[-1] # take the latest file
with open(file_name) as f:
for line in f:
if 'MEAN' in line:
split = line.strip().split()
break
mean_str_2 = split[1][1:-1]
std_str_2 = split[3][1:-1]
assert_equal(mean_str_1, mean_str_2)
assert_equal(std_str_1, std_str_2)
#-------------------------------------------------------------------------------
# Test several models in one run
#-------------------------------------------------------------------------------
def test_oof_pred_mode_2_models(self):
model = LinearRegression()
S_train_1_a = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(X_train, y_train)
S_test_1_a = model.predict(X_test).reshape(-1, 1)
model = Ridge(random_state = 0)
S_train_1_b = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(X_train, y_train)
S_test_1_b = model.predict(X_test).reshape(-1, 1)
S_train_1 = np.c_[S_train_1_a, S_train_1_b]
S_test_1 = np.c_[S_test_1_a, S_test_1_b]
models = [LinearRegression(),
Ridge(random_state = 0)]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_oof_pred_bag_mode_2_models(self):
# Model a
S_test_temp = np.zeros((X_test.shape[0], n_folds))
kf = KFold(n_splits = n_folds, shuffle = False, random_state = 0)
for fold_counter, (tr_index, te_index) in enumerate(kf.split(X_train, y_train)):
# Split data and target
X_tr = X_train[tr_index]
y_tr = y_train[tr_index]
X_te = X_train[te_index]
y_te = y_train[te_index]
model = LinearRegression()
_ = model.fit(X_tr, y_tr)
S_test_temp[:, fold_counter] = model.predict(X_test)
S_test_1_a = np.mean(S_test_temp, axis = 1).reshape(-1, 1)
model = LinearRegression()
S_train_1_a = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
# Model b
S_test_temp = np.zeros((X_test.shape[0], n_folds))
kf = KFold(n_splits = n_folds, shuffle = False, random_state = 0)
for fold_counter, (tr_index, te_index) in enumerate(kf.split(X_train, y_train)):
# Split data and target
X_tr = X_train[tr_index]
y_tr = y_train[tr_index]
X_te = X_train[te_index]
y_te = y_train[te_index]
model = Ridge(random_state = 0)
_ = model.fit(X_tr, y_tr)
S_test_temp[:, fold_counter] = model.predict(X_test)
S_test_1_b = np.mean(S_test_temp, axis = 1).reshape(-1, 1)
model = Ridge(random_state = 0)
S_train_1_b = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
S_train_1 = np.c_[S_train_1_a, S_train_1_b]
S_test_1 = np.c_[S_test_1_a, S_test_1_b]
models = [LinearRegression(),
Ridge(random_state = 0)]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred_bag', random_state = 0, verbose = 0)
# Load OOF from file
# Normally if cleaning is performed there is only one .npy file at given moment
# But if we have no cleaning there may be more then one file so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing sparse types CSR, CSC, COO
#---------------------------------------------------------------------------
def test_oof_pred_mode_sparse_csr(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, csr_matrix(X_train), y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(csr_matrix(X_train), y_train)
S_test_1 = model.predict(csr_matrix(X_test)).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, csr_matrix(X_train), y_train, csr_matrix(X_test),
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_oof_pred_mode_sparse_csc(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, csc_matrix(X_train), y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(csc_matrix(X_train), y_train)
S_test_1 = model.predict(csc_matrix(X_test)).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, csc_matrix(X_train), y_train, csc_matrix(X_test),
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
def test_oof_pred_mode_sparse_coo(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, coo_matrix(X_train), y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(coo_matrix(X_train), y_train)
S_test_1 = model.predict(coo_matrix(X_test)).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, coo_matrix(X_train), y_train, coo_matrix(X_test),
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
    # Testing X_train -> CSR, X_test -> COO
#---------------------------------------------------------------------------
def test_oof_pred_mode_sparse_csr_coo(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, csr_matrix(X_train), y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(csr_matrix(X_train), y_train)
S_test_1 = model.predict(coo_matrix(X_test)).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, csr_matrix(X_train), y_train, coo_matrix(X_test),
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
    # Testing X_train -> CSR, X_test -> Dense
#---------------------------------------------------------------------------
def test_oof_pred_mode_sparse_csr_dense(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, csr_matrix(X_train), y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
_ = model.fit(csr_matrix(X_train), y_train)
S_test_1 = model.predict(X_test).reshape(-1, 1)
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, csr_matrix(X_train), y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing X_test=None
#---------------------------------------------------------------------------
def test_oof_mode_xtest_is_none(self):
model = LinearRegression()
S_train_1 = cross_val_predict(model, X_train, y = y_train, cv = n_folds,
n_jobs = 1, verbose = 0, method = 'predict').reshape(-1, 1)
S_test_1 = None
models = [LinearRegression()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, None,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#---------------------------------------------------------------------------
# Testing parameter exceptions
#---------------------------------------------------------------------------
def test_exceptions(self):
# Empty model list
assert_raises(ValueError, stacking, [], X_train, y_train, X_test)
# Wrong mode
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train, X_test, mode='abc')
# Path does not exist
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train, X_test, save_dir='./As26bV85')
# n_folds is not int
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train, X_test, n_folds='A')
# n_folds is less than 2
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train, X_test, n_folds=1)
# Wrong verbose value
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train, X_test, verbose=25)
# Internal function model_action
assert_raises(ValueError, model_action, LinearRegression(),
X_train, y_train, X_test, sample_weight=None,
action='abc', transform=None)
#---------------------------------------------------------------------------
# Testing parameter warnings
#---------------------------------------------------------------------------
def test_warnings(self):
# Parameters specific for classification are ignored if regression=True
assert_warns(UserWarning, stacking, [LinearRegression()],
X_train, y_train, X_test, regression=True,
needs_proba=True)
assert_warns(UserWarning, stacking, [LinearRegression()],
X_train, y_train, X_test, regression=True,
stratified=True)
assert_warns(UserWarning, stacking, [LinearRegression()],
X_train, y_train, X_test, regression=True,
needs_proba=True, stratified=True)
#---------------------------------------------------------------------------
# Test if model has no 'get_params'
#---------------------------------------------------------------------------
def test_oof_pred_mode_no_get_params(self):
S_train_1 = np.ones(X_train.shape[0]).reshape(-1, 1)
S_test_1 = np.ones(X_test.shape[0]).reshape(-1, 1)
models = [MinimalEstimator()]
S_train_2, S_test_2 = stacking(models, X_train, y_train, X_test,
regression = True, n_folds = n_folds, shuffle = False, save_dir=temp_dir,
mode = 'oof_pred', random_state = 0, verbose = 0)
# Load OOF from file
        # Normally if cleaning is performed there is only one .npy file at a given moment
        # But if cleaning is disabled there may be more than one file, so we take the latest
file_name = sorted(glob.glob(os.path.join(temp_dir, '*.npy')))[-1] # take the latest file
S = np.load(file_name)
S_train_3 = S[0]
S_test_3 = S[1]
assert_array_equal(S_train_1, S_train_2)
assert_array_equal(S_test_1, S_test_2)
assert_array_equal(S_train_1, S_train_3)
assert_array_equal(S_test_1, S_test_3)
#-------------------------------------------------------------------------------
# Test inconsistent data shape or type
#-------------------------------------------------------------------------------
def test_inconsistent_data(self):
# nan or inf in y
y_train_nan = y_train.copy()
y_train_nan[0] = np.nan
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train_nan, X_test)
# y has two or more columns
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, np.c_[y_train, y_train], X_test)
        # X_train and y_train shape mismatch
assert_raises(ValueError, stacking, [LinearRegression()],
X_train, y_train[:10], X_test)
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
if __name__ == '__main__':
unittest.main()
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
# --- file: Lesson15_Files/7-logCsvFile.py (StyvenSoft/degree-python, MIT) ---
with open('logger.csv') as log_csv_file:
    print(log_csv_file.read())
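The file above just dumps logger.csv as raw text. When the rows and fields themselves matter, the standard-library `csv` module does the splitting; a minimal standalone sketch (the sample data is made up, since logger.csv itself is not available here):

```python
import csv
import io

# Hypothetical sample standing in for logger.csv (the real file isn't available here).
sample = "time,message\n08:00,start\n08:05,stop\n"

# csv.reader yields each line as a list of fields instead of one raw string.
rows = list(csv.reader(io.StringIO(sample)))
print(rows)  # [['time', 'message'], ['08:00', 'start'], ['08:05', 'stop']]
```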
# --- file: colorization_extract/config.py (wakananai/GANet, MIT) ---
pts_in_hull_numpy = './colorization_extract/resources/pts_in_hull.npy'
prototxt = './colorization_extract/models/colorization_deploy_v2.prototxt'
caffemodel = './colorization_extract/models/colorization_release_v2.caffemodel'
# --- file: utils/misc/__init__.py (in7erval/Wall-e-2.0, MIT) ---
from .throttling import rate_limit
from . import logging
from . import in_inline
# --- file: uvcgan/train/callbacks/__init__.py (LS4GAN/uvcgan, BSD-2-Clause) ---
from .history import TrainingHistory
# --- file: manta_lab/tuning/internal/integrations/control_ray.py (coxwave/manta, MIT) ---
from manta_lab.tuning.interface.controller import TuningController


class RayController(TuningController):
    pass
# --- file: openstack_dashboard/tests/__init__.py (griddynamics/osc-robot-openstack-dashboard, Apache-2.0) ---
from testsettings import *
# --- file: tests/conftest.py (GCES-Pydemic/pydemic-ui, MIT) ---
from pydemic.testing import en
# --- file: webStorm-APICloud/python_tools/Lib/test/test_future4.py (zzr925028429/androidyianyan, MIT) ---
from __future__ import unicode_literals
import unittest
from test import test_support
class TestFuture(unittest.TestCase):
def assertType(self, obj, typ):
self.assert_(type(obj) is typ,
"type(%r) is %r, not %r" % (obj, type(obj), typ))
def test_unicode_strings(self):
self.assertType("", unicode)
self.assertType('', unicode)
self.assertType(r"", unicode)
self.assertType(r'', unicode)
self.assertType(""" """, unicode)
self.assertType(''' ''', unicode)
self.assertType(r""" """, unicode)
self.assertType(r''' ''', unicode)
self.assertType(u"", unicode)
self.assertType(u'', unicode)
self.assertType(ur"", unicode)
self.assertType(ur'', unicode)
self.assertType(u""" """, unicode)
self.assertType(u''' ''', unicode)
self.assertType(ur""" """, unicode)
self.assertType(ur''' ''', unicode)
self.assertType(b"", str)
self.assertType(b'', str)
self.assertType(br"", str)
self.assertType(br'', str)
self.assertType(b""" """, str)
self.assertType(b''' ''', str)
self.assertType(br""" """, str)
self.assertType(br''' ''', str)
self.assertType('' '', unicode)
self.assertType('' u'', unicode)
self.assertType(u'' '', unicode)
self.assertType(u'' u'', unicode)
def test_main():
test_support.run_unittest(TestFuture)
if __name__ == "__main__":
test_main()
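This test targets Python 2: the `ur''` literals above are a syntax error on Python 3, where `from __future__ import unicode_literals` is a no-op because every un-prefixed literal is already text. A quick Python 3 sketch of the corresponding type checks:

```python
# Under Python 3, str is the text type and bytes the binary type;
# the u prefix is accepted for compatibility but changes nothing.
assert type("") is str
assert type(u"") is str
assert type(r"") is str
assert type(b"") is bytes
assert type(br"") is bytes
print("Python 3 literal types check out")
```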
# --- file: pyggi/tree/edits.py (bloa/pyggi, MIT) ---
import random
from ..base import AbstractEdit
from . import AbstractTreeEngine, XmlEngine
class NodeDeletion(AbstractEdit):
NODE_TYPE = ''
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return engine.do_delete(program.contents, program.locations,
new_contents, new_locations,
self.target)
@classmethod
def create(cls, program, target_file=None):
if target_file is None:
target_file = program.random_file(AbstractTreeEngine)
return cls(program.random_target(target_file, cls.NODE_TYPE))
class NodeReplacement(AbstractEdit):
NODE_TYPE = ''
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return engine.do_replace(program.contents, program.locations,
new_contents, new_locations,
self.target, self.data[0])
@classmethod
def create(cls, program, target_file=None, ingr_file=None):
if target_file is None:
target_file = program.random_file(AbstractTreeEngine)
if ingr_file is None:
ingr_file = program.random_file(engine=program.engines[target_file])
return cls(program.random_target(target_file, cls.NODE_TYPE),
program.random_target(ingr_file, cls.NODE_TYPE))
class NodeInsertion(AbstractEdit):
NODE_PARENT_TYPE = ''
NODE_TYPE = ''
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return engine.do_insert(program.contents, program.locations,
new_contents, new_locations,
self.target, self.data[0])
@classmethod
def create(cls, program, target_file=None, ingr_file=None):
if target_file is None:
target_file = program.random_file(AbstractTreeEngine)
if ingr_file is None:
ingr_file = program.random_file(engine=program.engines[target_file])
return cls(program.random_target(target_file, '_inter_{}'.format(cls.NODE_PARENT_TYPE)),
program.random_target(ingr_file, cls.NODE_TYPE))
class NodeMoving(AbstractEdit):
NODE_PARENT_TYPE = ''
NODE_TYPE = ''
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return_code = engine.do_insert(program.contents, program.locations,
new_contents, new_locations,
self.target, self.data[0])
if return_code:
            return_code = engine.do_delete(program.contents, program.locations,
                                           new_contents, new_locations,
                                           self.data[0])
return return_code
@classmethod
def create(cls, program, target_file=None, ingr_file=None):
if target_file is None:
target_file = program.random_file(AbstractTreeEngine)
if ingr_file is None:
ingr_file = program.random_file(engine=program.engines[target_file])
return cls(program.random_target(target_file, '_inter_{}'.format(cls.NODE_PARENT_TYPE)),
program.random_target(ingr_file, cls.NODE_TYPE))
class TextSetting(AbstractEdit):
NODE_TYPE = ''
CHOICES = ['']
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return engine.do_set_text(program.contents, program.locations,
new_contents, new_locations,
self.target, self.data[0])
@classmethod
def create(cls, program, target_file=None, choices=None):
if choices == None:
choices = cls.CHOICES
if target_file is None:
target_file = program.random_file(XmlEngine)
target = program.random_target(target_file, cls.NODE_TYPE)
value = random.choice(choices)
return cls(target, value)
class TextWrapping(AbstractEdit):
NODE_TYPE = ''
CHOICES = [('(', ')')]
def apply(self, program, new_contents, new_locations):
engine = program.engines[self.target[0]]
return engine.do_wrap_text(program.contents, program.locations,
new_contents, new_locations,
self.target, self.data[0][0], self.data[0][1])
@classmethod
def create(cls, program, target_file=None, choices=None):
if choices == None:
choices = cls.CHOICES
if target_file is None:
target_file = program.random_file(XmlEngine)
target = program.random_target(target_file, cls.NODE_TYPE)
value = random.choice(choices)
return cls(target, value)
class StmtDeletion(NodeDeletion):
NODE_TYPE = 'stmt'
class StmtReplacement(NodeReplacement):
NODE_TYPE = 'stmt'
class StmtInsertion(NodeInsertion):
NODE_PARENT_TYPE = 'block'
NODE_TYPE = 'stmt'
class StmtMoving(NodeMoving):
NODE_PARENT_TYPE = 'block'
NODE_TYPE = 'stmt'
class ConditionReplacement(NodeReplacement):
NODE_TYPE = 'condition'
class ExprReplacement(NodeReplacement):
NODE_TYPE = 'expr'
class ComparisonOperatorSetting(TextSetting):
NODE_TYPE = 'operator_comp'
CHOICES = ['==', '!=', '<', '<=', '>', '>=']
class ArithmeticOperatorSetting(TextSetting):
NODE_TYPE = 'operator_arith'
CHOICES = ['+', '-', '*', '/', '%']
class NumericSetting(TextSetting):
NODE_TYPE = 'number'
CHOICES = ['-1', '0', '1']
class RelativeNumericSetting(TextWrapping):
NODE_TYPE = 'number'
CHOICES = [('(', '+1)'), ('(', '-1)'), ('(', '/2)'), ('(', '*2)'), ('(', '*3/2)'), ('(', '*2/3)')]
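The concrete edits at the bottom of this file carry no logic of their own: each one specialises a generic base purely by overriding class-level constants (`NODE_TYPE`, `NODE_PARENT_TYPE`, `CHOICES`), while `apply`/`create` stay in the parent classes. A standalone sketch of that pattern (the `Mock*` classes are illustrative stand-ins, not pyggi classes):

```python
class MockEdit:
    """Generic edit: behaviour lives here, parameterised by NODE_TYPE."""
    NODE_TYPE = ''

    def describe(self):
        return '{} targets <{}> nodes'.format(type(self).__name__, self.NODE_TYPE)


class MockStmtDeletion(MockEdit):
    NODE_TYPE = 'stmt'  # the only thing a concrete edit overrides


class MockExprReplacement(MockEdit):
    NODE_TYPE = 'expr'


print(MockStmtDeletion().describe())     # MockStmtDeletion targets <stmt> nodes
print(MockExprReplacement().describe())  # MockExprReplacement targets <expr> nodes
```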
# --- file: tests/test_rio_convert.py (ljburtz/rasterio, BSD-3-Clause) ---
import os
from click.testing import CliRunner
import numpy as np
import pytest
import rasterio
from rasterio.rio.main import main_group
TEST_BBOX = [-11850000, 4804000, -11840000, 4808000]
def bbox(*args):
return ' '.join([str(x) for x in args])
@pytest.mark.parametrize("bounds", [bbox(*TEST_BBOX)])
def test_clip_bounds(runner, tmpdir, bounds):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group, ["clip", "tests/data/shade.tif", output, "--bounds", bounds]
)
assert result.exit_code == 0
assert os.path.exists(output)
with rasterio.open(output) as out:
assert out.shape == (419, 173)
@pytest.mark.parametrize("bounds", [bbox(*TEST_BBOX)])
def test_clip_bounds_with_complement(runner, tmpdir, bounds):
output = str(tmpdir.join("test.tif"))
result = runner.invoke(
main_group,
[
"clip",
"tests/data/shade.tif",
output,
"--bounds",
bounds,
"--with-complement",
],
)
assert result.exit_code == 0
assert os.path.exists(output)
with rasterio.open(output) as out:
assert out.shape == (419, 1047)
data = out.read()
assert (data[420:, :] == 255).all()
def test_clip_bounds_geographic(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['clip', 'tests/data/RGB.byte.tif', output, '--geographic', '--bounds',
'-78.95864996545055 23.564991210854686 -76.57492370013823 25.550873767433984'])
assert result.exit_code == 0
assert os.path.exists(output)
with rasterio.open(output) as out:
assert out.shape == (718, 791)
def test_clip_like(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group, [
'clip', 'tests/data/shade.tif', output, '--like',
'tests/data/shade.tif'])
assert result.exit_code == 0
assert os.path.exists(output)
with rasterio.open('tests/data/shade.tif') as template_ds:
with rasterio.open(output) as out:
assert out.shape == template_ds.shape
assert np.allclose(out.bounds, template_ds.bounds)
def test_clip_missing_params(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group, ['clip', 'tests/data/shade.tif', output])
assert result.exit_code == 2
assert '--bounds or --like required' in result.output
def test_clip_bounds_disjunct(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['clip', 'tests/data/shade.tif', output, '--bounds', bbox(0, 0, 10, 10)])
assert result.exit_code == 2
assert '--bounds' in result.output
def test_clip_like_disjunct(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group, [
'clip', 'tests/data/shade.tif', output, '--like',
'tests/data/RGB.byte.tif'])
assert result.exit_code == 2
assert '--like' in result.output
def test_clip_overwrite_without_option(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['clip', 'tests/data/shade.tif', output, '--bounds', bbox(*TEST_BBOX)])
assert result.exit_code == 0
result = runner.invoke(
main_group,
['clip', 'tests/data/shade.tif', output, '--bounds', bbox(*TEST_BBOX)])
assert result.exit_code == 1
assert '--overwrite' in result.output
def test_clip_overwrite_with_option(runner, tmpdir):
output = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['clip', 'tests/data/shade.tif', output, '--bounds', bbox(*TEST_BBOX)])
assert result.exit_code == 0
result = runner.invoke(
main_group,
[
"clip",
"tests/data/shade.tif",
output,
"--bounds",
bbox(*TEST_BBOX),
"--overwrite",
],
)
assert result.exit_code == 0
# Tests: format and type conversion, --format and --dtype
def test_format(tmpdir, runner):
outputname = str(tmpdir.join('test.jpg'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', outputname, '--format', 'JPEG'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
assert src.driver == 'JPEG'
def test_format_short(tmpdir, runner):
outputname = str(tmpdir.join('test.jpg'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', outputname, '-f', 'JPEG'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
assert src.driver == 'JPEG'
def test_output_opt(tmpdir, runner):
outputname = str(tmpdir.join('test.jpg'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', '-o', outputname, '-f', 'JPEG'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
assert src.driver == 'JPEG'
def test_dtype(tmpdir, runner):
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', outputname, '--dtype', 'uint16'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
assert src.dtypes == tuple(['uint16'] * 3)
def test_dtype_rescaling_uint8_full(tmpdir, runner):
"""Rescale uint8 [0, 255] to uint8 [0, 255]"""
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', outputname, '--scale-ratio', '1.0'])
assert result.exit_code == 0
src_stats = [
{"max": 255.0, "mean": 44.434478650699106, "min": 1.0},
{"max": 255.0, "mean": 66.02203484105824, "min": 1.0},
{"max": 255.0, "mean": 71.39316199120559, "min": 1.0}]
with rasterio.open(outputname) as src:
for band, expected in zip(src.read(masked=True), src_stats):
assert round(band.min() - expected['min'], 6) == 0.0
assert round(band.max() - expected['max'], 6) == 0.0
assert round(band.mean() - expected['mean'], 6) == 0.0
def test_dtype_rescaling_uint8_half(tmpdir, runner):
"""Rescale uint8 [0, 255] to uint8 [0, 127]"""
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(main_group, [
'convert', 'tests/data/RGB.byte.tif', outputname, '--scale-ratio', '0.5'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
for band in src.read():
assert round(band.min() - 0, 6) == 0.0
assert round(band.max() - 127, 6) == 0.0
def test_dtype_rescaling_uint16(tmpdir, runner):
"""Rescale uint8 [0, 255] to uint16 [0, 4095]"""
# NB: 255 * 16 is 4080, we don't actually get to 4095.
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(main_group, [
'convert', 'tests/data/RGB.byte.tif', outputname, '--dtype', 'uint16',
'--scale-ratio', '16'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
for band in src.read():
assert round(band.min() - 0, 6) == 0.0
assert round(band.max() - 4080, 6) == 0.0
def test_dtype_rescaling_float64(tmpdir, runner):
"""Rescale uint8 [0, 255] to float64 [-1, 1]"""
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(main_group, [
'convert', 'tests/data/RGB.byte.tif', outputname, '--dtype', 'float64',
'--scale-ratio', str(2.0 / 255), '--scale-offset', '-1.0'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
for band in src.read():
assert round(band.min() + 1.0, 6) == 0.0
assert round(band.max() - 1.0, 6) == 0.0
def test_rgb(tmpdir, runner):
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', outputname, '--rgb'])
assert result.exit_code == 0
with rasterio.open(outputname) as src:
assert src.colorinterp[0] == rasterio.enums.ColorInterp.red
def test_convert_overwrite_without_option(runner, tmpdir):
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', '-o', outputname, '-f', 'JPEG'])
assert result.exit_code == 0
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', '-o', outputname, '-f', 'JPEG'])
assert result.exit_code == 1
assert '--overwrite' in result.output
def test_convert_overwrite_with_option(runner, tmpdir):
outputname = str(tmpdir.join('test.tif'))
result = runner.invoke(
main_group,
['convert', 'tests/data/RGB.byte.tif', '-o', outputname, '-f', 'JPEG'])
assert result.exit_code == 0
result = runner.invoke(
main_group, [
'convert', 'tests/data/RGB.byte.tif', '-o', outputname, '-f', 'JPEG',
'--overwrite'])
assert result.exit_code == 0

# File: forklib/__init__.py (repo: leenr/forklib, license: Apache-2.0)
from .forking import fork, get_id
from .iterator import fork_map


# File: boa3_test/test_sc/list_test/TypeHintAssignment.py (repo: hal0x2328/neo3-boa, license: Apache-2.0)
from typing import List
def Main():
a: List[int] = [1, 2, 3]

# File: tests/_sync/test_connection_pool.py (repo: devl00p/httpcore, license: BSD-3-Clause)
from typing import List
import pytest
from tests import concurrency
from httpcore import ConnectionPool, ConnectError, UnsupportedProtocol
from httpcore.backends.mock import MockBackend
def test_connection_pool_with_keepalive():
"""
    By default, HTTP/1.1 connections should be returned to the connection
    pool after each request.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
with ConnectionPool(
network_backend=network_backend,
) as pool:
        # Sending an initial request; once complete, the connection returns to the pool, IDLE.
with pool.stream("GET", "https://example.com/") as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 1]>"
]
# Sending a second request to the same origin will reuse the existing IDLE connection.
with pool.stream("GET", "https://example.com/") as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>"
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>"
]
# Sending a request to a different origin will not reuse the existing IDLE connection.
with pool.stream("GET", "http://example.com/") as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['http://example.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['http://example.com:80', HTTP/1.1, IDLE, Request Count: 1]>",
"<HTTPConnection ['https://example.com:443', HTTP/1.1, IDLE, Request Count: 2]>",
]
def test_connection_pool_with_close():
"""
    Connections used for HTTP/1.1 requests that include a 'Connection: close'
    header should not be returned to the connection pool.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
with ConnectionPool(network_backend=network_backend) as pool:
        # Sending an initial request, which once complete will not return to the pool.
with pool.stream(
"GET", "https://example.com/", headers={"Connection": "close"}
) as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == []
def test_trace_request():
"""
The 'trace' request extension allows for a callback function to inspect the
internal events that occur while sending a request.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
called = []
def trace(name, kwargs):
called.append(name)
with ConnectionPool(network_backend=network_backend) as pool:
pool.request("GET", "https://example.com/", extensions={"trace": trace})
assert called == [
"connection.connect_tcp.started",
"connection.connect_tcp.complete",
"connection.start_tls.started",
"connection.start_tls.complete",
"http11.send_request_headers.started",
"http11.send_request_headers.complete",
"http11.send_request_body.started",
"http11.send_request_body.complete",
"http11.receive_response_headers.started",
"http11.receive_response_headers.complete",
"http11.receive_response_body.started",
"http11.receive_response_body.complete",
"http11.response_closed.started",
"http11.response_closed.complete",
]
def test_connection_pool_with_http_exception():
"""
HTTP/1.1 requests that result in an exception during the connection should
not be returned to the connection pool.
"""
network_backend = MockBackend([b"Wait, this isn't valid HTTP!"])
called = []
def trace(name, kwargs):
called.append(name)
with ConnectionPool(network_backend=network_backend) as pool:
# Sending an initial request, which once complete will not return to the pool.
with pytest.raises(Exception):
pool.request(
"GET", "https://example.com/", extensions={"trace": trace}
)
info = [repr(c) for c in pool.connections]
assert info == []
assert called == [
"connection.connect_tcp.started",
"connection.connect_tcp.complete",
"connection.start_tls.started",
"connection.start_tls.complete",
"http11.send_request_headers.started",
"http11.send_request_headers.complete",
"http11.send_request_body.started",
"http11.send_request_body.complete",
"http11.receive_response_headers.started",
"http11.receive_response_headers.failed",
"http11.response_closed.started",
"http11.response_closed.complete",
]
def test_connection_pool_with_connect_exception():
"""
HTTP/1.1 requests that result in an exception during connection should not
be returned to the connection pool.
"""
class FailedConnectBackend(MockBackend):
def connect_tcp(
self, host: str, port: int, timeout: float = None, local_address: str = None
):
raise ConnectError("Could not connect")
network_backend = FailedConnectBackend([])
called = []
def trace(name, kwargs):
called.append(name)
with ConnectionPool(network_backend=network_backend) as pool:
# Sending an initial request, which once complete will not return to the pool.
with pytest.raises(Exception):
pool.request(
"GET", "https://example.com/", extensions={"trace": trace}
)
info = [repr(c) for c in pool.connections]
assert info == []
assert called == [
"connection.connect_tcp.started",
"connection.connect_tcp.failed",
]
def test_connection_pool_with_immediate_expiry():
"""
    Connection pools with keepalive_expiry=0.0 should immediately expire
    keep-alive connections.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
with ConnectionPool(
keepalive_expiry=0.0,
network_backend=network_backend,
) as pool:
        # Sending an initial request, which once complete will not return to the pool.
with pool.stream("GET", "https://example.com/") as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == []
def test_connection_pool_with_no_keepalive_connections_allowed():
"""
When 'max_keepalive_connections=0' is used, IDLE connections should not
be returned to the pool.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
with ConnectionPool(
max_keepalive_connections=0, network_backend=network_backend
) as pool:
        # Sending an initial request, which once complete will not return to the pool.
with pool.stream("GET", "https://example.com/") as response:
info = [repr(c) for c in pool.connections]
assert info == [
"<HTTPConnection ['https://example.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
]
response.read()
assert response.status == 200
assert response.content == b"Hello, world!"
info = [repr(c) for c in pool.connections]
assert info == []
def test_connection_pool_concurrency():
"""
    HTTP/1.1 requests made concurrently must never exceed the maximum number
    of allowable connections in the pool.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
def fetch(pool, domain, info_list):
with pool.stream("GET", f"http://{domain}/") as response:
info = [repr(c) for c in pool.connections]
info_list.append(info)
response.read()
with ConnectionPool(
max_connections=1, network_backend=network_backend
) as pool:
info_list: List[str] = []
with concurrency.open_nursery() as nursery:
for domain in ["a.com", "b.com", "c.com", "d.com", "e.com"]:
nursery.start_soon(fetch, pool, domain, info_list)
for item in info_list:
# Check that each time we inspected the connection pool, only a
# single connection was established at any one time.
assert len(item) == 1
# Each connection was to a different host, and only sent a single
# request on that connection.
assert item[0] in [
"<HTTPConnection ['http://a.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['http://b.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['http://c.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['http://d.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['http://e.com:80', HTTP/1.1, ACTIVE, Request Count: 1]>",
]
def test_connection_pool_concurrency_same_domain_closing():
"""
    HTTP/1.1 requests made concurrently must never exceed the maximum number
    of allowable connections in the pool.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"Connection: close\r\n",
b"\r\n",
b"Hello, world!",
]
)
def fetch(pool, domain, info_list):
with pool.stream("GET", f"https://{domain}/") as response:
info = [repr(c) for c in pool.connections]
info_list.append(info)
response.read()
with ConnectionPool(
max_connections=1, network_backend=network_backend, http2=True
) as pool:
info_list: List[str] = []
with concurrency.open_nursery() as nursery:
for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
nursery.start_soon(fetch, pool, domain, info_list)
for item in info_list:
# Check that each time we inspected the connection pool, only a
# single connection was established at any one time.
assert len(item) == 1
# Only a single request was sent on each connection.
assert (
item[0]
== "<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>"
)
def test_connection_pool_concurrency_same_domain_keepalive():
"""
    HTTP/1.1 requests made concurrently must never exceed the maximum number
    of allowable connections in the pool.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
* 5
)
def fetch(pool, domain, info_list):
with pool.stream("GET", f"https://{domain}/") as response:
info = [repr(c) for c in pool.connections]
info_list.append(info)
response.read()
with ConnectionPool(
max_connections=1, network_backend=network_backend, http2=True
) as pool:
info_list: List[str] = []
with concurrency.open_nursery() as nursery:
for domain in ["a.com", "a.com", "a.com", "a.com", "a.com"]:
nursery.start_soon(fetch, pool, domain, info_list)
for item in info_list:
# Check that each time we inspected the connection pool, only a
# single connection was established at any one time.
assert len(item) == 1
# The connection sent multiple requests.
assert item[0] in [
"<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 1]>",
"<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 2]>",
"<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 3]>",
"<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 4]>",
"<HTTPConnection ['https://a.com:443', HTTP/1.1, ACTIVE, Request Count: 5]>",
]
def test_unsupported_protocol():
with ConnectionPool() as pool:
with pytest.raises(UnsupportedProtocol):
pool.request("GET", "ftp://www.example.com/")
with pytest.raises(UnsupportedProtocol):
pool.request("GET", "://www.example.com/")
def test_connection_pool_closed_while_request_in_flight():
"""
Closing a connection pool while a request/response is still in-flight
should raise an error.
"""
network_backend = MockBackend(
[
b"HTTP/1.1 200 OK\r\n",
b"Content-Type: plain/text\r\n",
b"Content-Length: 13\r\n",
b"\r\n",
b"Hello, world!",
]
)
with ConnectionPool(
network_backend=network_backend,
) as pool:
# Send a request, and then close the connection pool while the
# response has not yet been streamed.
with pool.stream("GET", "https://example.com/"):
with pytest.raises(RuntimeError):
pool.close()

# File: test/test_1d_solver.py (repo: Pseudomanifold/POT, license: MIT)
"""Tests for module 1d Wasserstein solver"""
# Author: Adrien Corenflos <adrien.corenflos@aalto.fi>
# Nicolas Courty <ncourty@irisa.fr>
#
# License: MIT License
import numpy as np
import pytest
import ot
from ot.lp import wasserstein_1d
from ot.backend import get_backend_list, tf
from scipy.stats import wasserstein_distance
backend_list = get_backend_list()
def test_emd_1d_emd2_1d_with_weights():
# test emd1d gives similar results as emd
n = 20
m = 30
rng = np.random.RandomState(0)
u = rng.randn(n, 1)
v = rng.randn(m, 1)
w_u = rng.uniform(0., 1., n)
w_u = w_u / w_u.sum()
w_v = rng.uniform(0., 1., m)
w_v = w_v / w_v.sum()
M = ot.dist(u, v, metric='sqeuclidean')
G, log = ot.emd(w_u, w_v, M, log=True)
wass = log["cost"]
G_1d, log = ot.emd_1d(u, v, w_u, w_v, metric='sqeuclidean', log=True)
wass1d = log["cost"]
wass1d_emd2 = ot.emd2_1d(u, v, w_u, w_v, metric='sqeuclidean', log=False)
wass1d_euc = ot.emd2_1d(u, v, w_u, w_v, metric='euclidean', log=False)
# check loss is similar
np.testing.assert_allclose(wass, wass1d)
np.testing.assert_allclose(wass, wass1d_emd2)
# check loss is similar to scipy's implementation for Euclidean metric
wass_sp = wasserstein_distance(u.reshape((-1,)), v.reshape((-1,)), w_u, w_v)
np.testing.assert_allclose(wass_sp, wass1d_euc)
# check constraints
np.testing.assert_allclose(w_u, G.sum(1))
np.testing.assert_allclose(w_v, G.sum(0))
@pytest.mark.parametrize('nx', backend_list)
def test_wasserstein_1d(nx):
from scipy.stats import wasserstein_distance
rng = np.random.RandomState(0)
n = 100
x = np.linspace(0, 5, n)
rho_u = np.abs(rng.randn(n))
rho_u /= rho_u.sum()
rho_v = np.abs(rng.randn(n))
rho_v /= rho_v.sum()
xb = nx.from_numpy(x)
rho_ub = nx.from_numpy(rho_u)
rho_vb = nx.from_numpy(rho_v)
# test 1 : wasserstein_1d should be close to scipy W_1 implementation
np.testing.assert_almost_equal(wasserstein_1d(xb, xb, rho_ub, rho_vb, p=1),
wasserstein_distance(x, x, rho_u, rho_v))
# test 2 : wasserstein_1d should be close to one when only translating the support
np.testing.assert_almost_equal(wasserstein_1d(xb, xb + 1, p=2),
1.)
# test 3 : arrays test
X = np.stack((np.linspace(0, 5, n), np.linspace(0, 5, n) * 10), -1)
Xb = nx.from_numpy(X)
res = wasserstein_1d(Xb, Xb, rho_ub, rho_vb, p=2)
np.testing.assert_almost_equal(100 * res[0], res[1], decimal=4)
def test_wasserstein_1d_type_devices(nx):
rng = np.random.RandomState(0)
n = 10
x = np.linspace(0, 5, n)
rho_u = np.abs(rng.randn(n))
rho_u /= rho_u.sum()
rho_v = np.abs(rng.randn(n))
rho_v /= rho_v.sum()
for tp in nx.__type_list__:
print(nx.dtype_device(tp))
xb = nx.from_numpy(x, type_as=tp)
rho_ub = nx.from_numpy(rho_u, type_as=tp)
rho_vb = nx.from_numpy(rho_v, type_as=tp)
res = wasserstein_1d(xb, xb, rho_ub, rho_vb, p=1)
nx.assert_same_dtype_device(xb, res)
@pytest.mark.skipif(not tf, reason="tf not installed")
def test_wasserstein_1d_device_tf():
if not tf:
return
nx = ot.backend.TensorflowBackend()
rng = np.random.RandomState(0)
n = 10
x = np.linspace(0, 5, n)
rho_u = np.abs(rng.randn(n))
rho_u /= rho_u.sum()
rho_v = np.abs(rng.randn(n))
rho_v /= rho_v.sum()
# Check that everything stays on the CPU
with tf.device("/CPU:0"):
xb = nx.from_numpy(x)
rho_ub = nx.from_numpy(rho_u)
rho_vb = nx.from_numpy(rho_v)
res = wasserstein_1d(xb, xb, rho_ub, rho_vb, p=1)
nx.assert_same_dtype_device(xb, res)
if len(tf.config.list_physical_devices('GPU')) > 0:
# Check that everything happens on the GPU
xb = nx.from_numpy(x)
rho_ub = nx.from_numpy(rho_u)
rho_vb = nx.from_numpy(rho_v)
res = wasserstein_1d(xb, xb, rho_ub, rho_vb, p=1)
nx.assert_same_dtype_device(xb, res)
assert nx.dtype_device(res)[1].startswith("GPU")
def test_emd_1d_emd2_1d():
# test emd1d gives similar results as emd
n = 20
m = 30
rng = np.random.RandomState(0)
u = rng.randn(n, 1)
v = rng.randn(m, 1)
M = ot.dist(u, v, metric='sqeuclidean')
G, log = ot.emd([], [], M, log=True)
wass = log["cost"]
G_1d, log = ot.emd_1d(u, v, [], [], metric='sqeuclidean', log=True)
wass1d = log["cost"]
wass1d_emd2 = ot.emd2_1d(u, v, [], [], metric='sqeuclidean', log=False)
wass1d_euc = ot.emd2_1d(u, v, [], [], metric='euclidean', log=False)
# check loss is similar
np.testing.assert_allclose(wass, wass1d)
np.testing.assert_allclose(wass, wass1d_emd2)
# check loss is similar to scipy's implementation for Euclidean metric
wass_sp = wasserstein_distance(u.reshape((-1,)), v.reshape((-1,)))
np.testing.assert_allclose(wass_sp, wass1d_euc)
# check constraints
np.testing.assert_allclose(np.ones((n,)) / n, G.sum(1))
np.testing.assert_allclose(np.ones((m,)) / m, G.sum(0))
# check G is similar
np.testing.assert_allclose(G, G_1d, atol=1e-15)
# check AssertionError is raised if called on non 1d arrays
u = np.random.randn(n, 2)
v = np.random.randn(m, 2)
with pytest.raises(AssertionError):
ot.emd_1d(u, v, [], [])
def test_emd1d_type_devices(nx):
rng = np.random.RandomState(0)
n = 10
x = np.linspace(0, 5, n)
rho_u = np.abs(rng.randn(n))
rho_u /= rho_u.sum()
rho_v = np.abs(rng.randn(n))
rho_v /= rho_v.sum()
for tp in nx.__type_list__:
print(nx.dtype_device(tp))
xb = nx.from_numpy(x, type_as=tp)
rho_ub = nx.from_numpy(rho_u, type_as=tp)
rho_vb = nx.from_numpy(rho_v, type_as=tp)
emd = ot.emd_1d(xb, xb, rho_ub, rho_vb)
emd2 = ot.emd2_1d(xb, xb, rho_ub, rho_vb)
nx.assert_same_dtype_device(xb, emd)
nx.assert_same_dtype_device(xb, emd2)
@pytest.mark.skipif(not tf, reason="tf not installed")
def test_emd1d_device_tf():
nx = ot.backend.TensorflowBackend()
rng = np.random.RandomState(0)
n = 10
x = np.linspace(0, 5, n)
rho_u = np.abs(rng.randn(n))
rho_u /= rho_u.sum()
rho_v = np.abs(rng.randn(n))
rho_v /= rho_v.sum()
# Check that everything stays on the CPU
with tf.device("/CPU:0"):
xb = nx.from_numpy(x)
rho_ub = nx.from_numpy(rho_u)
rho_vb = nx.from_numpy(rho_v)
emd = ot.emd_1d(xb, xb, rho_ub, rho_vb)
emd2 = ot.emd2_1d(xb, xb, rho_ub, rho_vb)
nx.assert_same_dtype_device(xb, emd)
nx.assert_same_dtype_device(xb, emd2)
if len(tf.config.list_physical_devices('GPU')) > 0:
# Check that everything happens on the GPU
xb = nx.from_numpy(x)
rho_ub = nx.from_numpy(rho_u)
rho_vb = nx.from_numpy(rho_v)
emd = ot.emd_1d(xb, xb, rho_ub, rho_vb)
emd2 = ot.emd2_1d(xb, xb, rho_ub, rho_vb)
nx.assert_same_dtype_device(xb, emd)
nx.assert_same_dtype_device(xb, emd2)
assert nx.dtype_device(emd)[1].startswith("GPU")

# File: web/ace/admin.py (repo: MattYu/django-docker-nginx-postgres-letsEncrypt-jobBoard, license: MIT)
from django.contrib import admin
from jobapplications.models import JobApplication, Education, Experience, SupportingDocument, CoverLetter
from ace.models import Candidate_termsAndConditions, Employer_termsAndConditions
admin.site.register(Candidate_termsAndConditions)
admin.site.register(Employer_termsAndConditions)


# File: test_core.py (repo: eccentricOrange/npbc, license: MIT)
"""
test data-independent functions from the core
- none of these depend on data in the database
"""
from datetime import date as date_type
from pytest import raises
import npbc_core
from npbc_exceptions import InvalidMonthYear, InvalidUndeliveredString
def test_get_number_of_each_weekday():
test_function = npbc_core.get_number_of_each_weekday
assert list(test_function(1, 2022)) == [5, 4, 4, 4, 4, 5, 5]
assert list(test_function(2, 2022)) == [4, 4, 4, 4, 4, 4, 4]
assert list(test_function(3, 2022)) == [4, 5, 5 ,5, 4, 4, 4]
assert list(test_function(2, 2020)) == [4, 4, 4, 4, 4, 5, 4]
assert list(test_function(12, 1954)) == [4, 4, 5, 5, 5, 4, 4]
def test_validate_undelivered_string():
test_function = npbc_core.validate_undelivered_string
with raises(InvalidUndeliveredString):
test_function("a")
test_function("monday")
test_function("1-mondays")
test_function("1monday")
test_function("1 monday")
test_function("monday-1")
test_function("monday-1")
test_function("")
test_function("1")
test_function("6")
test_function("31")
test_function("31","")
test_function("3","1")
test_function("3","1","")
test_function("3","1")
test_function("3","1")
test_function("3","1")
test_function("1","2","3-9")
test_function("1","2","3-9","11","12","13-19")
test_function("1","2","3-9","11","12","13-19","21","22","23-29")
test_function("1","2","3-9","11","12","13-19","21","22","23-29","31")
test_function("1","2","3","4","5","6","7","8","9")
test_function("mondays")
test_function("mondays,tuesdays")
test_function("mondays","tuesdays","wednesdays")
test_function("mondays","5-21")
test_function("mondays","5-21","tuesdays","5-21")
test_function("1-monday")
test_function("2-monday")
test_function("all")
test_function("All")
test_function("aLl")
test_function("alL")
test_function("aLL")
test_function("ALL")
def test_undelivered_string_parsing():
MONTH = 5
YEAR = 2017
test_function = npbc_core.parse_undelivered_strings
assert test_function(MONTH, YEAR, '') == set([])
assert test_function(MONTH, YEAR, '1') == set([
date_type(year=YEAR, month=MONTH, day=1)
])
assert test_function(MONTH, YEAR, '1-2') == set([
date_type(year=YEAR, month=MONTH, day=1),
date_type(year=YEAR, month=MONTH, day=2)
])
assert test_function(MONTH, YEAR, '5-17') == set([
date_type(year=YEAR, month=MONTH, day=5),
date_type(year=YEAR, month=MONTH, day=6),
date_type(year=YEAR, month=MONTH, day=7),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=9),
date_type(year=YEAR, month=MONTH, day=10),
date_type(year=YEAR, month=MONTH, day=11),
date_type(year=YEAR, month=MONTH, day=12),
date_type(year=YEAR, month=MONTH, day=13),
date_type(year=YEAR, month=MONTH, day=14),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=16),
date_type(year=YEAR, month=MONTH, day=17)
])
assert test_function(MONTH, YEAR, '5-17', '19') == set([
date_type(year=YEAR, month=MONTH, day=5),
date_type(year=YEAR, month=MONTH, day=6),
date_type(year=YEAR, month=MONTH, day=7),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=9),
date_type(year=YEAR, month=MONTH, day=10),
date_type(year=YEAR, month=MONTH, day=11),
date_type(year=YEAR, month=MONTH, day=12),
date_type(year=YEAR, month=MONTH, day=13),
date_type(year=YEAR, month=MONTH, day=14),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=16),
date_type(year=YEAR, month=MONTH, day=17),
date_type(year=YEAR, month=MONTH, day=19)
])
assert test_function(MONTH, YEAR, '5-17', '19-21') == set([
date_type(year=YEAR, month=MONTH, day=5),
date_type(year=YEAR, month=MONTH, day=6),
date_type(year=YEAR, month=MONTH, day=7),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=9),
date_type(year=YEAR, month=MONTH, day=10),
date_type(year=YEAR, month=MONTH, day=11),
date_type(year=YEAR, month=MONTH, day=12),
date_type(year=YEAR, month=MONTH, day=13),
date_type(year=YEAR, month=MONTH, day=14),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=16),
date_type(year=YEAR, month=MONTH, day=17),
date_type(year=YEAR, month=MONTH, day=19),
date_type(year=YEAR, month=MONTH, day=20),
date_type(year=YEAR, month=MONTH, day=21)
])
assert test_function(MONTH, YEAR, '5-17', '19-21', '23') == set([
date_type(year=YEAR, month=MONTH, day=5),
date_type(year=YEAR, month=MONTH, day=6),
date_type(year=YEAR, month=MONTH, day=7),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=9),
date_type(year=YEAR, month=MONTH, day=10),
date_type(year=YEAR, month=MONTH, day=11),
date_type(year=YEAR, month=MONTH, day=12),
date_type(year=YEAR, month=MONTH, day=13),
date_type(year=YEAR, month=MONTH, day=14),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=16),
date_type(year=YEAR, month=MONTH, day=17),
date_type(year=YEAR, month=MONTH, day=19),
date_type(year=YEAR, month=MONTH, day=20),
date_type(year=YEAR, month=MONTH, day=21),
date_type(year=YEAR, month=MONTH, day=23)
])
assert test_function(MONTH, YEAR, 'mondays') == set([
date_type(year=YEAR, month=MONTH, day=1),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=22),
date_type(year=YEAR, month=MONTH, day=29)
])
assert test_function(MONTH, YEAR, 'mondays', 'wednesdays') == set([
date_type(year=YEAR, month=MONTH, day=1),
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=15),
date_type(year=YEAR, month=MONTH, day=22),
date_type(year=YEAR, month=MONTH, day=29),
date_type(year=YEAR, month=MONTH, day=3),
date_type(year=YEAR, month=MONTH, day=10),
date_type(year=YEAR, month=MONTH, day=17),
date_type(year=YEAR, month=MONTH, day=24),
date_type(year=YEAR, month=MONTH, day=31)
])
assert test_function(MONTH, YEAR, '2-monday') == set([
date_type(year=YEAR, month=MONTH, day=8)
])
assert test_function(MONTH, YEAR, '2-monday', '3-wednesday') == set([
date_type(year=YEAR, month=MONTH, day=8),
date_type(year=YEAR, month=MONTH, day=17)
])
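The specifier grammar these assertions pin down — a single day (`'19'`), an inclusive range (`'5-17'`), every occurrence of a weekday (`'mondays'`), and the n-th weekday (`'2-monday'`) — can be sketched as a small parser. This is a hypothetical reimplementation for illustration only; `parse_dates` is not the real `npbc_core` function, just a minimal parser that satisfies the same assertions.

```python
from calendar import Calendar, day_name
from datetime import date


def parse_dates(month: int, year: int, *specifiers: str) -> set[date]:
    """Minimal sketch of a parser satisfying the assertions above (hypothetical)."""
    weekdays = [name.lower() for name in day_name]  # ['monday', ..., 'sunday']
    # All dates of the requested month (itermonthdates pads with adjacent months).
    month_days = [d for d in Calendar().itermonthdates(year, month) if d.month == month]
    result: set[date] = set()
    for spec in specifiers:
        spec = spec.lower().strip()
        if spec.isdigit():                           # '19' -> single day
            result.add(date(year, month, int(spec)))
        elif spec.rstrip('s') in weekdays:           # 'mondays' -> every Monday
            wd = weekdays.index(spec.rstrip('s'))
            result.update(d for d in month_days if d.weekday() == wd)
        elif '-' in spec:
            left, right = spec.split('-', 1)
            if right.isdigit():                      # '5-17' -> inclusive day range
                result.update(date(year, month, d)
                              for d in range(int(left), int(right) + 1))
            else:                                    # '2-monday' -> 2nd Monday
                wd = weekdays.index(right)
                matches = [d for d in month_days if d.weekday() == wd]
                result.add(matches[int(left) - 1])
    return result
```

Note that the test fixture's `MONTH`/`YEAR` must be a 31-day month starting on a Monday for the `'mondays'`/`'wednesdays'` expectations (days 1, 8, 15, 22, 29 and 3, 10, 17, 24, 31) to hold — March 2021, for example.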
def test_calculating_cost_of_one_paper():
DAYS_PER_WEEK = [5, 4, 4, 4, 4, 5, 5]
COST_AND_DELIVERY_DATA: list[tuple[bool, float]] = [
(False, 0),
(False, 0),
(True, 2),
(True, 2),
(True, 5),
(False, 0),
(True, 1)
]
test_function = npbc_core.calculate_cost_of_one_paper
assert test_function(
DAYS_PER_WEEK,
set([]),
COST_AND_DELIVERY_DATA
) == 41
assert test_function(
DAYS_PER_WEEK,
set([]),
[
(False, 0),
(False, 0),
(True, 2),
(True, 2),
(True, 5),
(False, 0),
(False, 1)
]
) == 36
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=8)
]),
COST_AND_DELIVERY_DATA
) == 41
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=8),
date_type(year=2022, month=1, day=8)
]),
COST_AND_DELIVERY_DATA
) == 41
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=8),
date_type(year=2022, month=1, day=17)
]),
COST_AND_DELIVERY_DATA
) == 41
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=2)
]),
COST_AND_DELIVERY_DATA
) == 40
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=2),
date_type(year=2022, month=1, day=2)
]),
COST_AND_DELIVERY_DATA
) == 40
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=6),
date_type(year=2022, month=1, day=7)
]),
COST_AND_DELIVERY_DATA
) == 34
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=6),
date_type(year=2022, month=1, day=7),
date_type(year=2022, month=1, day=8)
]),
COST_AND_DELIVERY_DATA
) == 34
assert test_function(
DAYS_PER_WEEK,
set([
date_type(year=2022, month=1, day=6),
date_type(year=2022, month=1, day=7),
date_type(year=2022, month=1, day=7),
date_type(year=2022, month=1, day=7),
date_type(year=2022, month=1, day=8),
date_type(year=2022, month=1, day=8),
date_type(year=2022, month=1, day=8)
]),
COST_AND_DELIVERY_DATA
) == 34
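The expected totals above follow a simple model: multiply each weekday's per-issue cost by how often that weekday occurs in the month, then refund skipped dates that fall on a normally-delivered weekday (duplicates in the skip set count once, since it is a set). A hypothetical sketch of that logic — not the real `npbc_core.calculate_cost_of_one_paper`, just an illustration consistent with the assertions:

```python
from datetime import date


def cost_of_one_paper(days_per_week, skip_dates, cost_and_delivered):
    """Hypothetical sketch of the cost model the assertions above exercise.

    days_per_week[i]    -- occurrences of weekday i (Mon=0) in the month
    skip_dates          -- dates on which the paper was *not* delivered
    cost_and_delivered  -- per weekday: (delivered_normally, cost_per_issue)
    """
    # Base cost: every normally-delivered weekday, every week it occurs.
    total = sum(
        count * cost
        for count, (delivered, cost) in zip(days_per_week, cost_and_delivered)
        if delivered
    )
    # Refund each distinct skipped date that falls on a delivered weekday.
    for day in set(skip_dates):
        delivered, cost = cost_and_delivered[day.weekday()]
        if delivered:
            total -= cost
    return total
```

For January 2022 (`DAYS_PER_WEEK = [5, 4, 4, 4, 4, 5, 5]`) the base cost is 4·2 + 4·2 + 4·5 + 5·1 = 41; skipping Sunday 2022-01-02 refunds 1 (→ 40), and skipping Thursday the 6th plus Friday the 7th refunds 2 + 5 (→ 34), matching the assertions.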
def test_validate_month_and_year():
test_function = npbc_core.validate_month_and_year
test_function(1, 2020)
test_function(12, 2020)
test_function(1, 2021)
test_function(12, 2021)
test_function(1, 2022)
test_function(12, 2022)
# Each invalid call needs its own `with raises` block: once an exception is
# raised, the remaining statements inside a single block would never run.
with raises(InvalidMonthYear):
    test_function(-54, 2020)
with raises(InvalidMonthYear):
    test_function(0, 2020)
with raises(InvalidMonthYear):
    test_function(13, 2020)
with raises(InvalidMonthYear):
    test_function(45, 2020)
with raises(InvalidMonthYear):
    test_function(1, -5)
with raises(InvalidMonthYear):
    test_function(12, -5)
with raises(InvalidMonthYear):
    test_function(1.6, 10)  # type: ignore
with raises(InvalidMonthYear):
    test_function(12.6, 10)  # type: ignore
with raises(InvalidMonthYear):
    test_function(1, '10')  # type: ignore
with raises(InvalidMonthYear):
    test_function(12, '10')  # type: ignore
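The validator these cases imply accepts integer months 1–12 with non-negative integer years and raises for everything else, including floats and strings. A minimal hypothetical sketch (the exception class here stands in for the one the test imports; the real `npbc_core.validate_month_and_year` may differ):

```python
class InvalidMonthYear(Exception):
    """Hypothetical stand-in for the exception imported by the test above."""


def validate_month_and_year(month, year):
    # Reject non-integer inputs (floats, strings) outright.
    if not isinstance(month, int) or not isinstance(year, int):
        raise InvalidMonthYear(f"{month!r}/{year!r}: month and year must be integers")
    # Month must be a calendar month; year must be non-negative.
    if not 1 <= month <= 12:
        raise InvalidMonthYear(f"{month} is not a valid month")
    if year < 0:
        raise InvalidMonthYear(f"{year} is not a valid year")
```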
| 32.338558 | 73 | 0.592187 | 1,483 | 10,316 | 3.931895 | 0.070802 | 0.139942 | 0.207855 | 0.22226 | 0.814612 | 0.772252 | 0.706397 | 0.658206 | 0.652375 | 0.62888 | 0 | 0.07343 | 0.252811 | 10,316 | 318 | 74 | 32.440252 | 0.683057 | 0.014056 | 0 | 0.632353 | 0 | 0 | 0.033858 | 0 | 0 | 0 | 0 | 0 | 0.095588 | 1 | 0.018382 | false | 0 | 0.014706 | 0 | 0.033088 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
0eba6bbcf0e7a6ccb3a07882187316f1625695ea | 98 | py | Python | src/ecommerce/views.py | Maelstroms38/ecommerce | 78b600ee5cfb84c1a5c5b3d7b16f6c1d20908788 | [
"MIT"
] | null | null | null | src/ecommerce/views.py | Maelstroms38/ecommerce | 78b600ee5cfb84c1a5c5b3d7b16f6c1d20908788 | [
"MIT"
] | null | null | null | src/ecommerce/views.py | Maelstroms38/ecommerce | 78b600ee5cfb84c1a5c5b3d7b16f6c1d20908788 | [
"MIT"
] | null | null | null | from django.shortcuts import render
def about(request):
return render(request, "about.html", {}) | 24.5 | 41 | 0.755102 | 13 | 98 | 5.692308 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112245 | 98 | 4 | 41 | 24.5 | 0.850575 | 0 | 0 | 0 | 0 | 0 | 0.10101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
16214166538b6404b5bd5a2ce035131bf924ad37 | 8,164 | py | Python | tests/cli/test_calc.py | unkcpz/aiida-optimade | 598016f0a1d0c8cab026bc0d7d04c75135c6970c | [
"MIT"
] | 5 | 2019-12-03T23:53:00.000Z | 2020-08-25T06:04:06.000Z | tests/cli/test_calc.py | unkcpz/aiida-optimade | 598016f0a1d0c8cab026bc0d7d04c75135c6970c | [
"MIT"
] | 310 | 2019-12-03T23:53:01.000Z | 2022-03-30T06:57:40.000Z | tests/cli/test_calc.py | unkcpz/aiida-optimade | 598016f0a1d0c8cab026bc0d7d04c75135c6970c | [
"MIT"
] | 6 | 2019-12-03T23:52:13.000Z | 2022-01-13T11:16:28.000Z | """Test CLI `aiida-optimade calc` command"""
# pylint: disable=unused-argument,too-many-locals,import-error
import os
import re
import pytest
@pytest.mark.skipif(
os.getenv("PYTEST_OPTIMADE_CONFIG_FILE") is not None,
reason="Test is not for MongoDB",
)
def test_calc_all_new(run_cli_command, aiida_profile, top_dir, caplog):
"""Test `aiida-optimade -p profile_name calc` works for non-existent fields.
By "non-existent" the meaning is calculating fields that don't already exist for
any Nodes.
"""
from aiida import orm
from aiida.tools.importexport import import_data
from aiida_optimade.cli import cmd_calc
from aiida_optimade.translators.entities import AiidaEntityTranslator
# Clear database and get initialized_structure_nodes.aiida
aiida_profile.reset_db()
archive = top_dir.joinpath("tests/cli/static/initialized_structure_nodes.aiida")
import_data(archive)
fields = ["elements", "chemical_formula_hill"]
extras_key = AiidaEntityTranslator.EXTRAS_KEY
original_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={
f"extras.{extras_key}": {"or": [{"has_key": field} for field in fields]}
},
project=["*", f"extras.{extras_key}"],
)
.all()
)
# Remove these fields
for node, optimade in original_data:
for field in fields:
optimade.pop(field, None)
node.set_extra(extras_key, optimade)
del node
del original_data
n_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={
f"extras.{extras_key}": {
"or": [{"!has_key": field} for field in fields]
}
},
)
.count()
)
options = ["--force-yes"] + fields
result = run_cli_command(cmd_calc.calc, options)
assert (
f"Fields found for {n_structure_data} Nodes." not in result.stdout
), result.stdout
assert (
f"Removing fields for {n_structure_data} Nodes." not in result.stdout
), result.stdout
assert "Success:" in result.stdout, result.stdout
assert f"calculated for {n_structure_data} Nodes" in result.stdout, result.stdout
n_updated_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={
f"extras.{extras_key}": {"or": [{"has_key": field} for field in fields]}
},
)
.count()
)
assert n_structure_data == n_updated_structure_data
# Ensure the database was reported to be updated.
assert (
re.match(r".*Updating Node [0-9]+ in AiiDA DB!.*", caplog.text, flags=re.DOTALL)
is not None
), caplog.text
# Repopulate database with the "proper" test data
aiida_profile.reset_db()
original_data = top_dir.joinpath("tests/static/test_structures.aiida")
import_data(original_data)
@pytest.mark.skipif(
os.getenv("PYTEST_OPTIMADE_CONFIG_FILE") is not None,
reason="Test is not for MongoDB",
)
def test_calc(run_cli_command, aiida_profile, top_dir):
"""Test `aiida-optimade -p profile_name calc` works."""
from aiida import orm
from aiida.tools.importexport import import_data
from aiida_optimade.cli import cmd_calc
from aiida_optimade.translators.entities import AiidaEntityTranslator
# Clear database and get initialized_structure_nodes.aiida
aiida_profile.reset_db()
archive = top_dir.joinpath("tests/cli/static/initialized_structure_nodes.aiida")
import_data(archive)
fields = ["elements", "chemical_formula_hill"]
extras_key = AiidaEntityTranslator.EXTRAS_KEY
n_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={
f"extras.{extras_key}": {"or": [{"has_key": field} for field in fields]}
},
)
.count()
)
options = ["--force-yes"] + fields
result = run_cli_command(cmd_calc.calc, options)
assert f"Fields found for {n_structure_data} Nodes." in result.stdout, result.stdout
assert (
f"Removing fields for {n_structure_data} Nodes." in result.stdout
), result.stdout
assert "Success:" in result.stdout, result.stdout
assert f"calculated for {n_structure_data} Nodes" in result.stdout, result.stdout
n_updated_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={
f"extras.{extras_key}": {"or": [{"has_key": field} for field in fields]}
},
)
.count()
)
assert n_structure_data == n_updated_structure_data
# Repopulate database with the "proper" test data
aiida_profile.reset_db()
original_data = top_dir.joinpath("tests/static/test_structures.aiida")
import_data(original_data)
@pytest.mark.skipif(
os.getenv("PYTEST_OPTIMADE_CONFIG_FILE") is not None,
reason="Test is not for MongoDB",
)
def test_calc_partially_init(run_cli_command, aiida_profile, top_dir, caplog):
"""Test `aiida-optimade -p profile_name calc` works for a partially initalized DB"""
from aiida import orm
from aiida.tools.importexport import import_data
from aiida_optimade.cli import cmd_calc
from aiida_optimade.translators.entities import AiidaEntityTranslator
# Clear database and get initialized_structure_nodes.aiida
aiida_profile.reset_db()
archive = top_dir.joinpath("tests/cli/static/initialized_structure_nodes.aiida")
import_data(archive)
extras_key = AiidaEntityTranslator.EXTRAS_KEY
original_data = orm.QueryBuilder().append(
orm.StructureData, project=["*", f"extras.{extras_key}"]
)
n_total_nodes = original_data.count()
original_data = original_data.all()
# Alter extra for various Nodes
node, _ = original_data[0]
node.delete_extra(extras_key)
del node
node, optimade = original_data[1]
optimade.pop("elements", None)
optimade.pop("elements_ratios", None)
node.set_extra(extras_key, optimade)
del node
node, optimade = original_data[2]
optimade.pop("elements", None)
node.set_extra(extras_key, optimade)
del node
node, optimade = original_data[3]
optimade.pop("elements_ratios", None)
node.set_extra(extras_key, optimade)
del node
del original_data
# "elements" should not be found in 3 Nodes
options = ["--force-yes", "elements"]
result = run_cli_command(cmd_calc.calc, options)
assert f"Field found for {n_total_nodes - 3} Nodes." in result.stdout, result.stdout
assert (
f"Removing field for {n_total_nodes - 3} Nodes." in result.stdout
), result.stdout
assert "Success:" in result.stdout, result.stdout
assert f"calculated for {n_total_nodes} Nodes" in result.stdout, result.stdout
n_updated_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={f"extras.{extras_key}": {"has_key": "elements"}},
)
.count()
)
assert n_total_nodes == n_updated_structure_data
# Only the requested field should have been calculated now. Check with
# "elements_ratios", which was removed from the extras but not re-calculated:
# the 3 Nodes it was removed from should still be missing it.
n_special_structure_data = (
orm.QueryBuilder()
.append(
orm.StructureData,
filters={f"extras.{extras_key}": {"has_key": "elements_ratios"}},
)
.count()
)
assert n_special_structure_data == n_total_nodes - 3
# Ensure the database was reported to be updated.
assert (
re.match(r".*Updating Node [0-9]+ in AiiDA DB!.*", caplog.text, flags=re.DOTALL)
is not None
), caplog.text
# Repopulate database with the "proper" test data
aiida_profile.reset_db()
original_data = top_dir.joinpath("tests/static/test_structures.aiida")
import_data(original_data)
| 31.279693 | 88 | 0.659603 | 1,017 | 8,164 | 5.091445 | 0.155359 | 0.05562 | 0.032445 | 0.04635 | 0.833526 | 0.824643 | 0.818656 | 0.812669 | 0.80533 | 0.803592 | 0 | 0.002091 | 0.238609 | 8,164 | 260 | 89 | 31.4 | 0.83092 | 0.140005 | 0 | 0.680851 | 0 | 0 | 0.183044 | 0.053794 | 0 | 0 | 0 | 0 | 0.095745 | 1 | 0.015957 | false | 0 | 0.111702 | 0 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
164e7db7f1377e33ba358d6e57b2f0b41441ffc7 | 88 | py | Python | saturn.py | mrzv/saturn | e89e01c8cfb2bf9980f4e3117a3d4b498305dc59 | [
"BSD-3-Clause-LBNL"
] | null | null | null | saturn.py | mrzv/saturn | e89e01c8cfb2bf9980f4e3117a3d4b498305dc59 | [
"BSD-3-Clause-LBNL"
] | null | null | null | saturn.py | mrzv/saturn | e89e01c8cfb2bf9980f4e3117a3d4b498305dc59 | [
"BSD-3-Clause-LBNL"
] | null | null | null | #!/usr/bin/env python3
import saturn_notebook.__main__
saturn_notebook.__main__.main()
| 17.6 | 31 | 0.818182 | 12 | 88 | 5.166667 | 0.666667 | 0.451613 | 0.580645 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.068182 | 88 | 4 | 32 | 22 | 0.743902 | 0.238636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1664af147e19d0c4cf4f77b21473a938b9832a4f | 103 | py | Python | code/tools/line_length.py | akashpattnaik/pre-ictal-similarity | 85f963aa0c6d2d0a6e971ffa005c400e136a0a76 | [
"MIT"
] | null | null | null | code/tools/line_length.py | akashpattnaik/pre-ictal-similarity | 85f963aa0c6d2d0a6e971ffa005c400e136a0a76 | [
"MIT"
] | null | null | null | code/tools/line_length.py | akashpattnaik/pre-ictal-similarity | 85f963aa0c6d2d0a6e971ffa005c400e136a0a76 | [
"MIT"
] | null | null | null | import numpy as np
def line_length(signal):
return np.sum(np.abs(np.diff(signal, axis=0)), axis=0) | 25.75 | 58 | 0.708738 | 20 | 103 | 3.6 | 0.7 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 0.135922 | 103 | 4 | 58 | 25.75 | 0.786517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
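A quick usage check of the function above — line length is the sum of absolute sample-to-sample differences along axis 0, a common feature in EEG/iEEG analysis. The example inputs are ours, for illustration:

```python
import numpy as np


def line_length(signal):
    # Sum of absolute first differences along the time axis (axis 0).
    return np.sum(np.abs(np.diff(signal, axis=0)), axis=0)


# A 1-D ramp 0,1,2,3 has line length 3 (three unit steps).
ramp = np.array([0.0, 1.0, 2.0, 3.0])
print(line_length(ramp))            # -> 3.0

# A 2-D array is treated as samples x channels: one value per channel.
two_channels = np.array([[0.0, 0.0],
                         [1.0, 2.0],
                         [0.0, 0.0]])
print(line_length(two_channels))    # -> [2. 4.]
```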